I haven’t done a lot of organic modeling in the past, maybe because I thought I couldn’t, or maybe because it just didn’t interest me much. I recently started to play with ZBrush, and after reading a couple of tutorials and watching some great videos from ZClassroom, this is what I created in my first foray into ZBrush. The best part is that once you get used to the interface, ZBrush is incredibly fast: it took me about 20 minutes to create that tentacle from scratch with ZSpheres and some sculpting.
A few weeks ago, the people at MirrorShowManagement asked me to rig my old M&M’s characters and create some facial expressions for them. They needed to build larger-than-life sculptures for a Mars stand. I finally finished the rigging and the morph targets for the faces, and this is how the M&M’s characters turned out. They used the models to work out the poses for the characters, which they then sent to a sculptor.
Over the last few years, a brand new technology for creating CGI effects has emerged that has already made a big impact on feature film production. It is called point-based rendering, and this powerful technique makes creating global illumination effects for feature film far more practical and efficient than before. In fact, this year the Academy of Motion Picture Arts and Sciences presented the creators of this innovation, Per Christensen, Michael Bunnell and Christophe Hery, with a Scientific and Engineering Academy Award.
In the next article, we will look into the development of this important new technology, how point-based rendering works, and what this all means to the future of feature film production as we know it.
Read the whole article here: Point-based rendering in Pixar’s Renderman
Once again, the Formula 1 season is about to start. As in the last three years, I’m planning to create the whole collection of cars and helmets. I’ve just started with the new helmet of Fernando Alonso (driving for Ferrari this year). If you are interested in purchasing any of these models, or any other from my collection, you can do so through www.digitalelements.be
Sometimes, you have to know when to stop rendering…
Author: Third Seventh
I started to build it from some initial blueprints. I couldn’t find much information on it beyond a very basic plan and section, which gave me the freedom to add a few personal details. The whole main structure and its individual elements are mirrored about a central axis, so I concentrated the modeling on just one half of the symmetry.
Walls, floors, stair-steps and so on are all modelled from primitives or extruded splines. Mapping was also quite simple: every element was mapped with a simple UVW Box. Since I eventually attached all the steps into one single mesh, I had to boolean them with some cylinders to cut the holes where I could place the actual step spotlights.
All of the seats come from one original model with randomized elements. Both the back and the seat itself were box modelled with a TurboSmooth modifier applied, while the rest of the seat elements are simple modified primitives. Once the seat was done, each group of mobile elements was attached into a single object and converted into a VRayProxy. I then placed each pivot point in its logical place and grouped the seat afterwards.
Why proxies? The seat itself has a lot of polys because of the seams, and I had to replicate it 510 times, so the VRayProxy was the best solution. And why did I place the pivots on the mobile parts’ axes? Because we want to randomize them!
After duplicating them all along arc splines with the wonderful Spacing Tool, I had to add a bit of chaos using some wonderful scripts from Blur Studio. By making several random selection sets with the “randomselect” script, I could apply rotation variations via the “randomtransform” script (also from Blur Studio).
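The select-a-random-subset-then-jitter idea behind those two scripts can be sketched in a few lines. This is an illustrative Python sketch, not the actual Blur Studio MaxScript: the function name, the seat IDs and the degree range are all hypothetical.

```python
import random

def jitter_rotations(seat_ids, max_deg=3.0, fraction=0.4, seed=42):
    """Mimic randomselect + randomtransform: pick a random subset of
    seats, then give each one a small random rotation offset (degrees).
    Returns a dict mapping seat id -> rotation offset."""
    rng = random.Random(seed)  # fixed seed so the "chaos" is repeatable
    count = max(1, int(len(seat_ids) * fraction))
    picked = rng.sample(seat_ids, count)  # the random selection set
    return {sid: rng.uniform(-max_deg, max_deg) for sid in picked}

# 510 seat instances, as in the scene; jitter 40% of them by up to +/-3 degrees
offsets = jitter_rotations(list(range(510)))
```

In the real scene the offsets would be applied around each seat’s pivot, which is why placing the pivots on the mobile parts’ axes matters: the random rotation then swings the seat naturally instead of around some arbitrary point.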
The scene does not have many different materials: basically some timber contrasting with the white fabric seats and the concrete ceiling. The original idea was to spread three primary colours over the big surfaces.
Here are some of the materials and textures used:
This chamber auditorium has no natural light openings, so the place had to be lit entirely artificially. I chose to ignore the typical stage spotlights, since I was more interested in dividing the space into light areas: cold fluorescent light above against the warmer steps with their small guide lights.
Each step spotlight has an invisible VRayLight in front of it (affecting only diffuse and specular). They’re all instanced to allow easier, global control.
For the upper light panels, I also cloned an instanced planar VRayLight using the Spacing Tool. The lights point down and sit inside the upper hollow pits, and each one has a single one-sided face with an SSS plastic material in front of it that doesn’t cast shadows. Disabling shadow casting lets the lights freely affect both diffuse and specular surfaces while avoiding a hard direct shadow from the enclosure panels; since the panels are still back-lit, they show a translucent effect.
Balance is important in every composition. After deciding what was going to appear in the shot, I set the image aspect. Then it was time to find the right balance between volumes by changing camera position, focal length, and so on. All shots here use physical cameras because I needed real exposure control, vertical correction, etc.
All images were rendered in 3ds Max with Chaos Group’s V-Ray using a linear workflow colorspace. Render parameters are quite standard:
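The core of a linear workflow is doing all lighting math on linear values and only applying the display gamma at the end. As a minimal sketch (the standard sRGB transfer functions, not anything V-Ray-specific), the encode/decode pair looks like this:

```python
def linear_to_srgb(c):
    """Encode a linear-light value (0..1) for display: the sRGB OETF
    uses a short linear toe plus a ~2.4 power curve."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(s):
    """Inverse transform: bring a display-encoded value back to
    linear light, which is what the renderer should work in."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4
```

Working this way means textures authored in sRGB get linearized on input, lights add up physically plausibly, and the gamma is applied exactly once on output, which is why a mid-grey of 0.5 linear displays much brighter than half.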
Obviously every step of the process matters if you want the picture you have in mind, but I feel that postproduction is the most personal step. In this example, I decided to go for an old analogue look inspired by Lomo and Holga cameras and Polaroid film. All post work was done in After Effects.
Here are the postprocessing steps:
That’s all folks! Hope this has been helpful.