Vintage Film Colouring – Blender or Photoshop

I’ve been getting a lot of hits lately for Google searches relating to Blender, nodes and vintage film colours. I haven’t covered that topic yet, but I’m guessing the traffic is finding some of my other tutorials.

And, since it’s super simple, here’s the lowdown:

By the way, this works with Photoshop too. It’s basically all in the RGB curves, so most programs (photo apps like Aperture, as well as video apps like Vegas or Premiere) should be able to use this. I’ll cover a bit about Blender specifically first, so if you’re not using Blender, just skip ahead to the RGB part.

Basic Light Physics

This is a render specific thing. If you’ve already got a photo or video taken from real life, skip to the RGB curves below.

Light, by itself in the 3D world, is pretty silly. It’s usually perfectly white, so colours are represented faithfully. The above image might look like this:

Which works for a lot of renders; it’s desired for a lot of things. However, I’m assuming for this tutorial you’re wanting something closer to how real-life light behaves, and from there we’ll talk about how film behaves when capturing that light.

As a rule of thumb, sunlight at high noon is not white but warm – it’s slightly yellow. As you approach sunset, it gets increasingly orange-red due to the bouncing and scattering in the atmosphere. The physics aren’t terribly important, since newer versions of Blender provide a handy feature on the sun lamp: it can do the sky for you, and this includes light colour. It even has a few presets so you don’t have to fuss with any numbers if you don’t want to.

Setting that to “desert”, we get this render (remember, before we’ve even touched any nodes or anything). Also notice how long the shadows are: the sun is quite low on the horizon – quite sunset-y – quite orange.

Which gives the light a nice warmth.

But light tends to get cool in the shadows. It actually becomes more blue as it gets darker. We’ll go into the nodes for that. There are a few points to consider, though. It’s the same process, but how much you do it will define how stylized the outcome is. If you’re going for straight photorealism with a modern hypothetical camera, you’ll want a very subtle effect. Film just amplifies it.

RGB Curves

First, an overview of what the curve actually does.

I highly encourage you to open Photoshop / Blender / Other and have some image to play with. It’ll make more sense when you can play with it. These screenshots are from Photoshop CS3, but it’s the same idea everywhere.

The curve from bottom left to top right represents the lightness of the existing pixels. If you were to, say, nudge a point about three-quarters of the way toward the top right and bring it up a bit, the light parts of the image would brighten further. If you brought the bottom left point up, the dark bits would lighten up. You can combine points and create a curve that would increase contrast (make bright parts brighter and dark parts darker) or do the opposite.

This affects the full RGB spectrum equally. But there are really four independent graphs: RGB, R, G and B. They all work the same, but the last three affect only their specific channels. In this way, you could make all the highlights redder and all the low bits bluer by making the graphs something like this:

Which is exactly how vintage film (and most Instagram effects) behaves. The lows move into the blue-purple range while the lights go into either the yellow or red ranges.
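To make that concrete, here’s a minimal Python sketch of per-channel tone curves. The control points are illustrative only – they’re not taken from any real film stock – and I’m assuming simple linear interpolation between points, whereas Photoshop and Blender use smooth splines:

```python
def apply_curve(value, points):
    """Linearly interpolate a tone curve defined by sorted (in, out) points.

    value: input channel level, 0-255.
    """
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return value

# Illustrative "vintage" curves: warm the highlights, cool the shadows.
red_curve   = [(0, 0), (64, 70), (192, 210), (255, 255)]  # lift the reds
green_curve = [(0, 0), (255, 255)]                        # leave green alone
blue_curve  = [(0, 40), (64, 80), (192, 180), (255, 235)] # raise dark blues, cap bright ones

def vintage(pixel):
    r, g, b = pixel
    return (apply_curve(r, red_curve),
            apply_curve(g, green_curve),
            apply_curve(b, blue_curve))

vintage((0, 0, 0))        # -> (0, 0, 40): pure black drifts blue
vintage((255, 255, 255))  # -> (255, 255, 235): pure white drifts yellow
```

Note how the blue curve’s bottom-left point is raised off zero – that single move is what pushes the shadows toward blue-purple.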

And, depending on the film type, this will vary. For maximum effect, look at actual samples from a specific film type (Polaroid, Lomo, Kodachrome etc.) and try to recreate it. There are lots of bad, bad attempts with no basis in any one film, and they come out looking really cheesy and obviously fake. So, just be aware.

For the people using both Blender and Photoshop, I will point out that the curves are on different scales between the programs. Although they behave exactly the same, Blender seems to be more sensitive, so move the points in smaller amounts.

As always, if you have any comments, questions, changes or suggestions feel free to let me know.


Blender and HDRi Reflections

I’ve received a few emails in response to the Deus Ex post wanting further elaboration on the point of HDRi and what it truly is, so although I’ve answered their questions personally, I felt like it’s a good topic for a public tutorial as well.

So, here goes!

HDRi, without getting too technical, is an image that holds more information than a normal photo. It stands for High Dynamic Range image, which in photography means generating that information from multiple exposures several stops apart. If you’re not a photographer and just want to use premade ones, you really don’t need to know those sorts of details.

The idea is that normal images only store three values for each pixel: Red, Green and Blue. The extra information could be called intensity or brightness. If you take a [normal] photo of the sun, the only information you have is that it’s white: 255, 255, 255. But what if there is another white object in the photo? It might also be 255, 255, 255, but it’s not creating its own light energy – it’s just a white object reflecting the light. The extra information accounts for that intensity.

Now, in 3D, there are two major uses. The image above is IBL lit, which Blender internal doesn’t (yet) do. That’s a Yaf(a)ray feature that takes the intensity information of an HDRi map and lights the scene with it, instead of using lights in the traditional way.

Compare the lighting and shadows on these two images. Nothing in the scene has changed except the world texture (HDRi) used. Drastically different results, right? So that’s one way: easy scene setup. No need to make lights or reflectors or anything. For this reason, renderers like Hypershot don’t need light abilities – it’s all baked into the map. This has its own limitations, of course, but I won’t go there in this tutorial.

Okay, back to Blender

So, that’s all fine for Yafaray users, but what about the Blender internal?

Well, it’s used slightly differently.

Reflection Maps

A reflection map is what I was doing in the DX:HR tutorial. The extra HDR information isn’t really used, but the HDRi maps make for ideal seamless environment textures for our object to reflect.

Basically, you’ve got an item and it would be ridiculous to ray-trace all the reflections for every frame in the animation. But, you want it to have some shiny, reflective properties. This is a great way of doing it without killing your render time.

Normal, default texture is light grey. AO is raytraced @ 7 samples. No lights, just environment lighting > white. 4.04 second render.

Same default texture, but added this basic HDRi map on ‘mix’ – I’ll get into the specifics further down. Same render settings, 4.23 seconds.

Assuming your scene has lights and things, the ‘mix’ might be too out of place (remember, the maps won’t line up to the real environment, but try to match them up as closely as possible). Also, if you have coloured materials, then mix won’t recognize them – it’ll just plaster over them with the map. The solution is the above photo: ‘add’. It just adds the light from the HDRi to the material as you’ve created it, for a subtle effect.

The Nitty Gritty

“You’ve convinced me, Brennan, this looks great! How do I set it up in Blender?”

It’s super easy.

Simply make an image texture like you normally would, change it to “reflection”, and then pick either ‘mix’ or ‘add’ at the bottom. Render!

I would advise finding some great HDRi maps, but any image will work, and sometimes it’s handy to make your own.

Go forth and make shiny things without sacrificing render time!


Questions, comments or concerns? Get in touch.




Super easy light flicker in Blender 2.5

Click for fullsize.

The thing I’m appreciating most about Blender 2.5’s ethos is the ability to keyframe absolutely everything. It’s reminiscent of FL Studio and it definitely makes for an easy workflow.

I made a little test scene here, with really basic settings. Basically, a plane, a few cubes and a standard lamp in the very middle. As you can see, it’s softened a bit and I’m running some AO and EL as well. None of this actually matters to the flickering, as that can be applied to any light regardless of settings mentioned prior.

  1. Make a lamp. Again, it doesn’t really matter.
  2. The Energy defines its brightness. Think of it like a dimmer light switch on the wall. Default is 1, off is 0, and you can go up to 10. Right click this and “Make keyframe” at energy=1 on frame 1.
  3. In the IPO curve viewer, bring up the right side menu with N or that little grey plus sign in the corner. “Add modifier” > “Noise” will make the IPO graph all noisy (mine’s in pink there).
  4. Set noise settings. The defaults were alright, but I wanted more flicker amplitude (range in brightness difference) so I bumped that up a bit. In hindsight, it’s a bit fast compared to real fluorescent bulbs, so I could have used the scaling to slow it down slightly.
  5. You don’t have to apply or anything. The curve that’s there is the curve that will be rendered.
  6. Render.
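The Noise modifier is essentially layering random variation on top of the flat energy=1 curve you keyframed. A rough Python sketch of the same idea, with made-up amplitude and seed values (you could even key these per-frame energies by hand):

```python
import random

def flicker_energies(frames=50, base=1.0, amplitude=0.4, seed=7):
    """Per-frame lamp energy: the keyframed base level plus random noise,
    clamped to the 0-10 range a Blender lamp accepts."""
    rng = random.Random(seed)
    return [max(0.0, min(10.0, base + rng.uniform(-amplitude, amplitude)))
            for _ in range(frames)]

energies = flicker_energies()
# Every frame hovers near 1.0 without ever settling, which is the flicker.
```

Bumping `amplitude` up gives a harsher flicker; stretching the noise out over more frames (the “scaling” mentioned in step 4) slows it down toward realistic fluorescent behaviour.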

Ironically, the upload to Vimeo took longer than the render itself did (50 frames @ 11 seconds each).

Video on Vimeo

Console FM Neue

You’ll recall I fell in love with Console FM a few weeks ago, and I’m happy to announce they have a slick new UI for our listening pleasure.

I did help out, although in a very tiny way (we had a lengthy discussion on the green for the track bar) and it was super sweet working with the awesome Alex Baldwin who I can’t compliment enough; great guy.

So if you aren’t already using it, definitely go there right now and prepare to have your life changed.

DIY Ixxi

Have you heard of Ixxi? I’ve seen it a couple of places recently and thought it was pretty cool. It’s an interior design mosaic that fully embraces the digital pixel world to create an abstract wall art.

The only drawback? It’s crazy expensive. I can see why, of course, they seem to be some kind of hard plastic or laminate or something:

So! Let’s do it ourselves.

Remember that tutorial I made on cutting up one large photo so you could print them on 4 x 6″  photos and mosaic them? Since you can print tons of them for dirt cheap, why not do that here as well?

Since our final image will probably only be 10-20 pixels wide, it’s not even like you need to have nice, big photos for this. Virtually anything will work.

  • Open your image in Photoshop
  • “Image” > “Image size” and make it 20 pixels wide (or whatever) – see the resampling point below before clicking ‘ok’

Each photo will be one pixel, so figure out how many photos will fit on your wall. There are two options here: you can print them all as 4 x 6″ photos and just use those, but since they aren’t square, everything will be slightly stretched. Or! You can cut the photos down to 4 x 4″ with scissors or, if you’ve got one, a nice paper guillotine cutter. From a Photoshop point of view it won’t really affect anything, but keep it in the back of your mind when doing the maths to convert real-world dimensions into pixels (as described in the other tutorial).

  • There in the “Image size” menu there are a couple of choices at the bottom which deal with how PS resamples the image. That’s the math that decides which pixels get discarded and which are kept, and these options will give you different results in the end.

The first image is the one I’m converting. I chose it because it had some nice colours and lots of contrast. You might recognize it from this fantastic music video.

The second one is what you’ll get if you just hit okay with the default resampling. It’s doing its best to antialias and keep everything smooth, which is usually a bonus, but here it just looks messy.

The third image is what you get if you switch it to “preserve hard edges” resampling. Much cleaner, and in my opinion, much cooler looking.

From here it follows the other tutorial. Just remember that whichever downsampling method you choose, you will have to use “preserve hard edges” to upsample to the desired final big size.

The final big size, just like the other tutorial, is how many pixels there are in the small image (20 in mine) x  6 inches x 300 DPI printing.

So my final size would be 20 x 6 x 300 = 36 000 pixels wide. I’ll upsample my 20-pixel-wide image (preserving hard edges) to that width and then slice as usual (again, other tutorial). Each slice, as usual, will be 6″ wide at 300 dpi, which is 1800 pixels.
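The whole pipeline boils down to two nearest-neighbour resamples (that’s what “preserve hard edges” is). Here’s a small pure-Python sketch with a tiny stand-in image, so you can see the idea without opening Photoshop – the 4x4 gradient and the target sizes are made up for the demo:

```python
def resample_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resample ('preserve hard edges'): each output
    pixel copies exactly one input pixel, so no colours are blended."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

# A tiny stand-in "photo": a 4x4 gradient (values 0-255).
photo = [[(r * 4 + c) * 16 for c in range(4)] for r in range(4)]

mosaic = resample_nearest(photo, 2, 2)   # downsample: crisp blocks, nothing smoothed
big    = resample_nearest(mosaic, 8, 8)  # upsample: each cell becomes a solid block
```

At full scale, the same upsample call would take the 20-pixel-wide mosaic to 20 x 1800 = 36 000 pixels, matching the arithmetic above – and because every output pixel is a straight copy, each “pixel” prints as a perfectly flat 4 x 6″ block of colour.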

Try it out! If you make something neat, I’d love to see photos.

Questions? Comments? Vulgarities? Send them my way.

Fracture in Blender 2.5

I love it when the work I’m doing allows me to use a new technique I want to learn. This afternoon I taught myself how to use the new-ish Blender Fracture script to make some sweet brick-breaking shatter effects.

Thank you, Blender, for being so awesome.

(c) 2017 ACRYLO