I’ve been writing this shader for a few weeks now, but I hadn’t thought to blog about it (or anything, for that matter) until now. That’s probably a good thing; I’m learning a lot as I go along, so there’s a good chance I would’ve explained something incorrectly. I’ve decided to post updates on this more often, so you can expect much more info later. There’s a lot that I’ve already accomplished, so there’s a backlog of things to write about. Here’s what I’m wrapping up today:
What was that?
What you see above is what I’m going to call The Shaderball. The object itself is of little importance; I found it in a hard-to-navigate-to area of Pixar’s website. What matters is the way The Shaderball looks, because that is what I am coding. You will see many images in future blog posts, but the one thing that will remain constant is that ball. You will see it in the same position and from the same vantage point, but the way the ball is lit and reacts to light will change.
In this case, you see what should look like a pretty standard “material.” You could describe it as smooth, because it reflects light ideally. Its index of refraction (or IOR; we’ll talk about this another time) is a very common one, so the amount of light reflected at different viewing angles looks familiar. I’ve chosen pure red (in terms of RGB) and pure white as the base colors, because that makes analyzing the results simpler.
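To give a taste of that angle dependence, here’s a sketch of Schlick’s approximation, a common shortcut for estimating Fresnel reflectance from the IOR. This is just an illustration, not code from my shader, and the function name is made up:

```
/* Schlick's approximation of Fresnel reflectance.
   eta is the index of refraction; costheta is the cosine of
   the angle between the view direction and the surface normal.
   (Illustrative sketch only -- not part of the uber-shader.) */
float schlickFresnel(float eta; float costheta)
{
    /* Reflectance looking straight at the surface (normal incidence). */
    float F0 = pow((eta - 1) / (eta + 1), 2);
    /* Reflectance rises toward 1 at grazing angles. */
    return F0 + (1 - F0) * pow(1 - costheta, 5);
}
```

For a common IOR like 1.5 (glass), F0 works out to about 0.04, which is why head-on reflections look faint while grazing reflections look mirror-like.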
What did you do?
I am creating something you would call an Uber-Shader: a shader that’s meant to cover a very wide range of materials. A shader is the code that’s run for every single visible point in an image. It knows a few variables, such as where it is, which way it points, and any user-defined parameters. By using general parameters (inputs to the code that modify its output), you can use the same shader to describe glass, rubber, metal, skin, concrete, dust, wax… you get the point now, right? This is a surface shader, because it describes how each point on a surface looks. The purpose of that image was to test out a light shader. Light shaders are run by surface shaders to provide them with information about the light they are receiving. I’ll expand on that light shader in my next post.
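To make “surface shader” concrete, here is about the smallest useful RSL surface shader I can write. It is not my uber-shader, just a sketch; the shader and parameter names are invented for this example:

```
/* A minimal diffuse surface shader -- an illustrative sketch,
   not the uber-shader described in this post. */
surface simpleDiffuse(
    color baseColor = color (1, 0, 0);  /* user-defined parameter */
    float Kd = 1)                       /* diffuse intensity */
{
    /* N is the surface normal at this point; flip it to face the camera. */
    normal Nf = faceforward(normalize(N), I);

    /* diffuse() queries the scene's light shaders for incoming light. */
    Ci = Os * baseColor * Kd * diffuse(Nf);
    Oi = Os;
}
```

The renderer runs this body once per visible point, with the globals (N, I, Os, and so on) already filled in for that point; baseColor and Kd are the “general parameters” a user can change without touching the code.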
How did you do that?
Using the RenderMan Shading Language, I write code that’s interpreted by 3Delight to produce the images you see. Two things are required to produce that image: the scene description and the shaders. The scene description is the code that represents the shape of the ball, its position, where the camera is, and so on. As I said before, the scene description is not what’s important.
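For the curious, a scene description is a RIB file, which 3Delight reads directly. Here is roughly what a bare-bones one looks like; the file name, shader names, and values below are placeholders, not my actual scene:

```
# A minimal RIB scene description (placeholder names and values).
Display "ball.tif" "file" "rgba"
Projection "perspective" "fov" [30]
Translate 0 0 5                # move the world away from the camera
WorldBegin
  LightSource "mylight" 1      # attach a light shader by name
  Color [1 0 0]                # the pure red base color
  Surface "mysurface"          # attach a surface shader by name
  Sphere 1 -1 1 360            # a unit sphere standing in for the Shaderball
WorldEnd
```

The shaders named here are compiled separately; the RIB file only binds them to geometry, which is why the interesting work lives in the shader code rather than the scene description.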
Why did you do that?
- I make images, and to make images I need tools. I don’t like the tools I have, but I can make new tools.
- I’d rather not write a new shader for everything I make.
- The shaders I can use in Maya are crap: the parameters are unintuitive, and they implement models that are outdated and not really grounded in reality.
- This provides a great learning experience.
What should I expect?
You will see many more images. Many of the basics of this shader are already complete, but I’ll go over each of them in future posts:
- Physical plausibility
- Fresnel Reflectance
- Local Illumination
- Global Illumination (GI)
- Reflection / Specular
- Subsurface Scattering (SSS)
My next post will dive into the details of the image above: image-based lighting.