Question
lytro.com describes its new light field camera as being able to capture the entire light field, rather than just one plane of light, thereby allowing for a whole new set of post-processing possibilities, including focus and perspective adjustment.
What sort of sensor could "capture every beam of light in every direction at every point in time"? How would an essentially infinite amount of information be encoded and manipulated? Would there be a lens up front in any traditional sense?
Here is the inventor's dissertation: http://www.lytro.com/renng-thesis.pdf
Can somebody boil this down for those of us who are familiar with traditional technologies?
Answer
The easy way to think about this is as follows:
Imagine that instead of one camera, you had a grid of 100 cameras in a 10x10 array. When you fire a shot, they all expose at the same time, and each one sees the subject from a slightly different viewpoint. Mathematical models can then "reverse engineer" those views and rebuild the image in different ways.

That is essentially what is happening here, except that instead of 100 cameras there are thousands, formed by an array of microlenses just above the sensor plane. The raw image that comes off the sensor is a grid of small circular sub-images, each differing slightly from its neighbors. Software then reassembles a single photograph from those partial images, and because the viewpoints are known, it can choose how to combine them, which is what makes adjusting focus after the fact possible.
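To make the "reassemble with math" step concrete, here is a minimal shift-and-sum refocus sketch in Python/NumPy. It assumes the raw capture has already been decoded into a 4D array of sub-aperture views (one small image per lenslet viewpoint); the array layout, the `refocus` function, and the `alpha` focus parameter are illustrative assumptions, not Lytro's actual pipeline.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus by shift-and-sum over sub-aperture views.

    light_field: 4D array (u, v, y, x) of grayscale sub-aperture
        images, one per lenslet viewpoint (hypothetical layout --
        a real Lytro raw file needs decoding/demosaicing first).
    alpha: focus parameter; each view is shifted in proportion to
        its offset from the array center, then all views are summed.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shifts for simplicity; a real
            # implementation would interpolate sub-pixel shifts.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Usage: a random 10x10 grid of 64x64 views stands in for real data.
lf = np.random.rand(10, 10, 64, 64)
refocused = refocus(lf, alpha=0.5)
```

Shifting each view in proportion to its offset from the center of the array before averaging places the synthetic focal plane at a different depth; varying `alpha` sweeps that plane, which is the post-capture refocusing the camera advertises.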