Sunday, October 31, 2010

Render farm notes

This post is mainly for folks who wonder how a basic render farm setup works in an animation or visual effects production pipeline. It is nowhere near a technically rigorous post, since most of it is taken from my own personal research on setting up a render farm back at my previous workplace.

Most render farms use parallel or distributed rendering to handle complex or very large rendering loads in production. Distributed rendering basically breaks an image sequence up into chunks of individual frames and spreads them across the render machines for faster processing.
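To make the chunking idea concrete, here is a minimal sketch of how a frame range might be split into chunks for distribution. The function name and the example frame range are my own invention for illustration, not part of any particular render manager:

```python
def chunk_frames(start, end, chunk_size):
    """Yield (first, last) frame ranges of at most chunk_size frames each."""
    frame = start
    while frame <= end:
        last = min(frame + chunk_size - 1, end)
        yield (frame, last)
        frame = last + 1

# A 100-frame sequence split into chunks of 25 frames,
# ready to be handed to four different render nodes:
for first, last in chunk_frames(1, 100, 25):
    print(f"render frames {first}-{last}")
```

Each chunk can then be rendered independently, which is what makes the work so easy to parallelise across machines.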

A very common way to go about distributed rendering is server clustering, which is a way of combining multiple computers or servers to perform a common task. There are different kinds of clustering meant for different job types. Since our requirement is parallel rendering, the most effective kind is a parallel grid-based cluster, in which a dedicated master node (server) is the only computer artists interact with to submit jobs. This machine then acts as the file server and render manager. It presents a single-system image to the user, who launches jobs from the master node without ever logging into any of the worker nodes (the other clustered computers under the server).
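The master node's role can be sketched as a simple job queue with round-robin dispatch to workers. This is only an illustrative toy, with made-up node and shot names; a real render manager does far more (load balancing, failure retries, dependencies):

```python
from collections import deque
from itertools import cycle

# Jobs submitted by artists to the master node: (scene, frame range).
jobs = deque([("shot010", (1, 24)), ("shot020", (1, 48))])
# Worker nodes the master dispatches to, in round-robin order.
workers = cycle(["node01", "node02", "node03"])

assignments = []
while jobs:
    scene, frames = jobs.popleft()
    node = next(workers)
    assignments.append((node, scene, frames))
    print(f"{node} <- {scene} frames {frames[0]}-{frames[1]}")
```

The artist only ever touches the queue; which worker actually renders which frames is the master's business.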

There is quite a lot of render manager software available in the industry today that lets you effectively set up a parallel cluster-based render farm. To name a few: DrQueue, Qube, Royal Render, Deadline, Smedge, Muster, etc.

Here is a basic flowchart depicting the setup I described above.
Also note that this setup is commonly known as HPC (high-performance computing), depending on the scale of your farm. Ironically, the main advantage of this setup is not raw speed, though that is certainly a large factor. Don't expect your frame to be rendered within a split second of a mouse click; it is rather a tedious process that requires real patience and technical expertise. The other main advantage is the ability to queue render jobs, which opens up a lot of possibilities: for example, queuing multiple test versions of a shot for overnight rendering, thus increasing your productivity.
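The overnight-queuing idea boils down to a priority queue the farm works through unattended. A tiny sketch, with invented job names and priorities (lower number = higher priority):

```python
import heapq

queue = []
heapq.heappush(queue, (1, "shot010_lighting_v3"))  # latest version first
heapq.heappush(queue, (5, "shot010_lighting_v1"))
heapq.heappush(queue, (3, "shot010_lighting_v2"))

order = []
while queue:
    priority, job = heapq.heappop(queue)
    order.append(job)
    print(f"rendering {job} (priority {priority})")
```

Submit all three test versions before you go home, and the farm renders them in priority order while you sleep.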

The cloud computing and GPU render farm bandwagon.

If you have been researching render farms lately, these are two terms that will hit you again and again, and it is important not to mix the two technologies up, since they do not really correspond to each other.

Cloud computing is more like a remote service, delivered through the web or virtual private networks, where you use the service provider's hardware for your rendering. This makes perfect sense for visual effects production houses, which might need to increase render farm capacity for a particular show's requirements. Acquiring, housing, and maintaining even a single worker node in a render farm involves a lot of cost, which continues even after the project is over, so a cloud render farm suits the industry's needs.

Though it sounds like a cost-effective, scalable solution, it comes with its own issues: security, the availability of custom scripts, plugins, or assets at the service provider's site, and the fact that remote management of a render farm may be prone to more network issues.

To better understand the scope of cloud computing, take a look at this presentation.

GPU rendering, on the other hand, is going to bring a paradigm shift to the industry, and it may replace most of the current CPU-based render farms. This doesn't mean the end of CPU-based render farms, since there are still areas (simulation, dynamics, and AI) where the CPU will perform better than the GPU.



Here is a snippet taken from an article called "Are you ready for the GPU revolution?" by Joe at renderstream:
To help you understand how GPU acceleration could speed up rendering, let's think of it in terms of bucket rendering. (Please keep in mind this analogy isn’t technically accurate.) Most of you are familiar with bucket rendering since modern renderers use that method. As a renderer calculates and ultimately draws pixels, it does so in small portions, or buckets, of a predetermined size. For every number of cores you have in your machine, you will have an equal amount of buckets at render time. For example, a common workstation today will have 4 cores (also known as a quad core) thus you will see 4 buckets at render time. If you have a dual quad core machine you will see 8 buckets and so on…
Today’s GPUs have 240 cores and the next generation will have up to 512 cores. By the time GPU acceleration is available for rendering, there could be even more cores available on the GPU. So, you can start to see how a GPU can have a tremendous impact on rendering. With CPUs we see a bucket work on a small portion of the rendered frame and then move on to another region. With a GPU, the available buckets would essentially fill the entire rendered frame. All portions of the frame would be “worked on” at once allowing for near real time rendering.
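A rough back-of-the-envelope illustration of the bucket analogy above: the number of buckets working on a frame at once roughly equals the core count, so more cores means fewer passes over the frame. The core counts are the 2010-era figures quoted in the snippet; the bucket count per frame is a made-up example value:

```python
frame_buckets = 1024  # hypothetical: buckets needed to cover one frame

for device, cores in [("quad-core CPU", 4), ("dual quad-core CPU", 8), ("GPU", 240)]:
    # Ceiling division: how many batches of simultaneous buckets per frame.
    passes = -(-frame_buckets // cores)
    print(f"{device}: {cores} buckets at once, ~{passes} passes per frame")
```

Keeping in mind the snippet's own caveat that the analogy isn't technically accurate, this still shows why hundreds of cores push rendering toward near real time.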
It is also worth checking out this great article on how GPU rendering helped speed up Avatar's production. As described in the article, GPU rendering will definitely make a huge difference in artists' productivity.

So, as you can see, I think GPUs will in future be effectively integrated into current CPU-based render farms, and this will have a huge impact on render times and the way artists work.

Till then, happy rendering ;)

