Sunday, July 19, 2009

DROIDMAKER

I'm not really sure how this will go over, but I've decided to make my book DROIDMAKER downloadable in its entirety, effective today. It's a long book (518 pages), and I still recommend going to Amazon and getting yourself a copy (it's how you can pay for this "shareware"), but below are links to PDFs of the book. I've divided it into the three "acts" that make up the saga.

Thank you, and enjoy!
I just started reading this book, and it gave me really good insight into the early careers of George Lucas and Francis Coppola. It is a must-read for any Lucas fan, and it gives a really good account of the hardships he went through before getting to where he is now.

I remember my professor telling me back in my college days that it's easy to write about the greatness of a person like Lucas, but it's hard to write about the methods or approaches he used to achieve that greatness.

Download the book from Rubin's blog
http://droidmaker.blogspot.com/

Monday, July 13, 2009

'A pixel is not a little square' says Alvy Ray Smith

It's ironic that before reading this memo by Alvy Ray Smith I never wondered what really makes up a pixel. I always believed the computer stores color information as little squares, but a pixel is actually only a mathematical representation of the color information at a sample point. As Smith points out, the misconception comes mainly from the way computer applications display pixels when we magnify an image.

According to Smith,
'A pixel is a point sample. It exists only at a point. For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square—or anything other than a point.'
He also states that an image is a rectangular array of point samples, and that by using an appropriate image reconstruction filter we can build a full, continuous picture out of it. It is interesting to note that some of these filters (a box filter, for example) spread each point sample over a small rectangle, which looks almost like a square. I believe this is why, when we zoom into an image, we see square "pixels" that each carry the color value of the point sample in that area. Smith's explanation of what happens
when you zoom in is this: Each point sample is being replicated MxM times, for magnification factor M. When you look at an image consisting of MxM pixels all of the same color, guess what you see: A square of that solid color! It is not an accurate picture of the pixel below. It is a bunch of pixels approximating what you would see if a reconstruction with a box filter were performed. To do a true zoom requires a resampling operation and is much slower than a video card can comfortably support in realtime today.
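A rough way to see this in code (a minimal sketch using NumPy, with sample values invented purely for illustration): replicating each point sample MxM times is exactly what np.repeat does, which is why a zoomed-in image looks like a grid of solid little squares rather than a true reconstruction.

```python
import numpy as np

# A tiny "image": a 2x2 array of point samples (grayscale values, made up).
samples = np.array([[0.2, 0.8],
                    [0.5, 1.0]])

M = 4  # magnification factor

# Naive zoom: replicate every point sample M x M times.
# This is what viewers and video cards typically do, and it is why each
# sample shows up on screen as a solid little square.
zoomed = np.repeat(np.repeat(samples, M, axis=0), M, axis=1)

print(zoomed.shape)  # (8, 8): each of the 4 point samples became a 4x4 square
```

A true zoom would instead resample the image through a better reconstruction filter, which is the slower operation Smith refers to.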
I think it is not really important for most of us to understand this basic issue today, since our image manipulation applications handle it quite efficiently underneath the user interface. But at the same time, understanding it may help us understand other important techniques, such as the 4:2:2 color sampling I discussed in my last post. The Bayer-pattern sensors used in digital cameras capture only one color sample per photosite, and the missing values are rebuilt with reconstruction filters such as bilinear interpolation, bicubic interpolation and spline interpolation. They all build a full-color image from incomplete color samples (point samples), and this process (or algorithm) is called demosaicing.

The two images below, from Wikipedia, explain this visually.

The four red dots show the data points and the green dot is the point at which we want to interpolate.
Example of bilinear interpolation on the unit square with the z-values 0, 1, 1 and 0.5 as indicated. Interpolated values in between are represented by colour.
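As a small sketch (using the same unit square as the figure, with corner values 0, 1, 1 and 0.5; which corner gets which value doesn't matter for the centre), bilinear interpolation simply blends the four corner samples according to how close the query point is to each of them:

```python
def bilinear(x, y, f00, f10, f01, f11):
    """Bilinear interpolation on the unit square.
    f00 = value at (0,0), f10 = value at (1,0),
    f01 = value at (0,1), f11 = value at (1,1)."""
    return (f00 * (1 - x) * (1 - y) +
            f10 * x * (1 - y) +
            f01 * (1 - x) * y +
            f11 * x * y)

# Corner values 0, 1, 1 and 0.5 as in the Wikipedia example
# (the assignment to corners is chosen arbitrarily here).
print(bilinear(0.5, 0.5, 0.0, 1.0, 1.0, 0.5))  # 0.625 at the centre of the square
```

At the centre this gives 0.625, the average of the four corners, which matches the smooth colour ramp shown in the figure.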

Who is Alvy Ray Smith?
He was one of the cofounders of Pixar and its Executive Vice President from 1986 to 1991, and the founder of Altamira, which was acquired by Microsoft. He was co-awarded the SIGGRAPH Computer Graphics Achievement Award in 1990 for "seminal contributions to computer paint systems," including the first full-color paint program, the first soft-edged fill program, and the HSV (aka HSB) color space model.

Most interestingly, he gave Pixar its name, an invented Spanish verb meaning "to make pictures". Also, while at Pixar he played an important role in hiring John Lasseter, who is now the CCO of Pixar and Walt Disney Animation Studios.

Finally, check out the Pixar founding documents hosted on his website.

Cheers,
Rahul


Sunday, July 5, 2009

Truth about resolution and pixels

Last week our team had a presentation on the RED workflow from the good people at Icerberg Design here in Singapore. The key issues we discussed at the end were 4:2:2 color sampling and the advantage of the 10-bit log space of the RED workflow. The 10-bit log space gives us something almost similar to a film workflow, but the compromise we would be making is on the quality of the image.
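As a side note, a toy example may make the log-space idea clearer. The curve below is a generic log encode I made up for illustration (it is not RED's actual transfer curve): it maps linear scene light to 10-bit code values and, like film, spends far more code values on the shadows than a linear encoding would.

```python
import numpy as np

def encode_log_10bit(linear, black=0.001, white=1.0):
    """Generic 10-bit log encode (illustrative curve, not RED's actual one).
    Maps linear scene light in [black, white] to integer code values 0..1023."""
    linear = np.clip(linear, black, white)
    log_norm = (np.log2(linear) - np.log2(black)) / (np.log2(white) - np.log2(black))
    return np.round(log_norm * 1023).astype(int)

# The deep-shadow range from 0.001 to 0.01 (a tiny slice of linear light)
# gets about a third of the code values, which a linear 10-bit encoding
# would never give it.
print(encode_log_10bit(np.array([0.001, 0.01, 0.1, 0.5, 1.0])))
# -> [   0  341  682  920 1023]
```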

Is RED 4K really 4K? There is a really good interview with John Galt, Panavision's Senior Vice President of Advanced Digital Imaging, at creativecow.net, and here are a few good quotes from it.

Any of the high-end high definition video cameras, they had 3 sensors: a red, a green and a blue photosite to create 1 RGB pixel.

But what we have seen, particularly with these Bayer pattern cameras, is that they are basically sub-sampled chroma cameras. In other words, they have half the number of color pixels as they do luminance. And the luminance is what they call green, typically. So what happens is you have two green photosites for every red and blue.

So how do you get RGB out of that? What you have to do is interpolate the reds and the blues to match the greens. So you are basically creating, interpolating, what wasn't there; you're imagining what it is, what it's going to be. That's essentially what it is. You can do this extremely well, particularly if the green response is very broad.

Well, 4K in the world of the professionals who do this, when you say "4K," it means you have 4096 red, 4096 green and 4096 blue photosites.
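To make Galt's point concrete, here is a minimal demosaicing sketch of my own (an illustration, not RED's actual algorithm; the kernel weights are just the usual bilinear choice). It takes an RGGB Bayer mosaic, in which half the photosites measured green and only a quarter each measured red and blue, and interpolates the two missing colors at every photosite from its neighbours:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Rough bilinear demosaic of an RGGB Bayer mosaic (illustration only)."""
    h, w = raw.shape
    # Masks marking which photosites actually measured which color.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1  # 1/4 of the sites
    g_mask = np.zeros((h, w)); g_mask[0::2, 1::2] = 1  # 1/2 of the sites
    g_mask[1::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1  # 1/4 of the sites

    # Bilinear weights over a 3x3 neighbourhood.
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5],
                  [0.25, 0.5, 0.25]])

    rgb = np.zeros((h, w, 3))
    for i, mask in enumerate((r_mask, g_mask, b_mask)):
        est = convolve(raw * mask, k, mode='mirror') / convolve(mask, k, mode='mirror')
        # Keep the measured samples, interpolate ("imagine") the rest.
        rgb[..., i] = np.where(mask == 1, raw, est)
    return rgb

# Tiny fake 4x4 mosaic; the values are invented just to run the function.
mosaic = np.arange(16, dtype=float).reshape(4, 4) / 15.0
print(bilinear_demosaic(mosaic).shape)  # (4, 4, 3): full RGB from one sample per site
```

So in a 4K Bayer sensor only a quarter of the photosites measured red and a quarter measured blue; everything else in those channels is interpolated, which is exactly Galt's objection to calling it 4K in the three-photosites-per-pixel sense.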

I dug a bit deeper into this subject, and though I found it quite technical, I got some good insight from it.

To really understand this issue, we must first take a look at what really defines the quality of an image. A common misconception is that resolution increases the sharpness of the image, but in reality the finest details are mostly irrelevant to the human eye, even though the eye is capable of resolving extremely fine detail. This is even more true at greater viewing distances; what really defines the perceived sharpness of an image are the contour-defining features at higher contrast. It is important to note that this doesn't mean a low-resolution image with high contrast will give a better viewing experience, since that could lead to aliasing issues.

Since resolution and sharpness are separate terms, we establish the relationship between them using the MTF (Modulation Transfer Function). Each time generation loss happens as the image passes from one medium to another, we can describe that loss using the MTF.
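As a tiny illustration of how these losses stack up (the stage names and numbers below are invented for the example, not measurements): at a given spatial frequency the MTF of the whole imaging chain is roughly the product of the MTFs of its individual stages, so every extra generation or processing step pulls the overall contrast down.

```python
# Hypothetical MTF values of each stage at one spatial frequency
# (invented numbers, purely for illustration).
stages = {
    "lens":        0.80,
    "sensor":      0.70,
    "compression": 0.90,
    "display":     0.85,
}

system_mtf = 1.0
for name, mtf in stages.items():
    system_mtf *= mtf

print(round(system_mtf, 3))  # ~0.428: each stage multiplies the loss of the last
```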

There is a lot more to read and understand on this subject. A good starting point, and the main source of what I have written here, is an article called "4K Systems" by Dr. Hans Kiening of ARRI's R&D department.

To conclude, counting pixels isn't the best way to judge the quality of an image; image quality depends on a lot of attributes. The MTF chain doesn't just start at post production: for example, there are modulation losses happening within the camera during the capture process. So it is really important to take a closer look at these issues separately.

Coming back to the RED discussion, I believe RED's subsampled 4K image would still be slightly advantageous even in an HD or SD (broadcast-scenario) production, since the loss in MTF would be comparatively lower than with a normal 4:2:2 color-sampled HD camera.

Cheers,
Rahul