Friday, May 18, 2012

Real-time vs post-production blurs, what's the difference?

Question

I've always been a huge fan of blurry photos, guess in part because almost any camera is able to take blurry photos, though some better than others.

That said, getting to the point of the question, what's the difference between real-time vs post-production blurs?

Asked by blunders

Answer

In general: "real" blur, either due to optical characteristics (including depth of field, chromatic aberration, spherical aberration, and more) or due to movement, is based on more information. It includes the three-dimensional and time aspects of the scene, and the different reflection and refraction of different wavelengths of light.

In post-processing, there's only a flat, projected rendering to work with. Smart algorithms can try to figure out what was going on and simulate the effect, but they're always at a disadvantage. It's hard to know if something is small because it's far away or because it's just tiny to start with, or if something was moving or just naturally fuzzy — or which direction and how quickly. If you're directing the blur process by hand as an artistic work, you'll get better results because you can apply your own knowledge and scene recognition engine (in, you know, your brain), but even then, it's a lot of work and you'll have to approximate distance and differing motion for different objects in the scene — or intentionally start with a photograph where these things are simple.
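To make the flat-projection limitation concrete, here's a minimal sketch (hypothetical example, not from the original answer) of the simplest kind of post-production blur: a uniform 3×3 box blur over a tiny grayscale image. Because the image carries no depth or motion information, the same kernel is applied to every pixel — unlike real defocus, where each point spreads by an amount depending on its distance from the focal plane.

```python
def box_blur(img, passes=1):
    """Uniform 3x3 box blur on a 2D grayscale image (list of lists).

    A post-production blur like this treats every pixel identically:
    the flat image has no depth or motion data, so the kernel cannot
    vary with subject distance the way real optical defocus does.
    """
    h, w = len(img), len(img[0])
    for _ in range(passes):
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            total += img[ny][nx]
                            count += 1
                out[y][x] = total / count  # average of in-bounds neighbours
        img = out
    return img

# A single bright pixel spreads into its neighbourhood — uniformly,
# regardless of whether that point was near or far in the real scene.
flat = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
blurred = box_blur(flat)
print(blurred[1][1])  # centre value after one pass: 1.0
```

A depth-aware version would need a per-pixel distance map to vary the kernel size, which is exactly the information a single flat photograph doesn't contain.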

In the World of Tomorrow, cameras will gather much more information in both time and space. The current Lytro camera is a toy preview of this. With a better 3D model, the effects of different optical configurations can be better simulated — and of course motion blur can be constructed from a recording over time.

Answered by mattdm
