Aperture size and depth of field shape the decisions a photographer makes before pressing the shutter button, and the two are fundamental to a good photograph, because the sharpness of an image depends on both. Depth of field determines how much of the subject is in focus. A narrow aperture extends the depth of field and reduces the blur of objects away from the focal plane, but it also requires a longer exposure, which increases the blur from the natural shake of our hands while holding the camera and from any movement in the scene.
More than a decade of work has produced a plenoptic camera that allows the user to adjust the focus of any picture after the shot has been taken. Using a light field sensor, the camera captures not only the color and intensity of every light ray but its direction as well, making the captured image four-dimensional.
By extracting appropriate 2D slices from the 4D light field of a scene, image-based modeling and rendering methods generate a three-dimensional model and then render some novel views of the scene. The light field is generated from large arrays of both rendered and digitized images.
The light field is a function that describes the radiometric properties (the intensity of radiant energy) of light flowing in every direction through three-dimensional space. Light should be interpreted as a field, much like a magnetic field.
A typical digital camera aligns a lens in front of an image sensor, which captures the picture. A new camera, the Lytro, adds an intermediate step – an array of micro-lenses between the primary lens and the image sensor. That array fractures the light that passes through the lens into thousands of discrete light paths, which the sensor and internal processor save as a single .LPF (light-field picture) file. A standard digital image is composed from pixel data such as color and intensity, but the pixels in a light-field picture add directional information to that mix. When a user decides where in the picture the focus should be, the image is created pixel by pixel, either by the camera’s internal processor and software or by a desktop app.
A single light-field snapshot can provide photos where focus, exposure, and even depth of field are adjustable after the picture is taken. In the future, light-field cameras promise ultra-accurate facial-recognition systems, personalized 3-D televisions, and cameras that provide views of the world that are indistinguishable from what you’d see out a window.
Raytrix plenoptic camera
The simplest conception of a plenoptic (light, travelling in every direction in a given space) camera is essentially an array of mini-cameras (micro-lens + micro-pixel array for each logical pixel) that separately captures light from all directions at each point in the image.
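This "array of mini-cameras" picture can be sketched in a few lines of code. The following is a minimal illustration (all array sizes and names are hypothetical): the raw capture is a 4D array indexed by which micro-lens the light struck and which micro-pixel behind that lens it landed on.

```python
import numpy as np

# Illustrative sketch: model a plenoptic sensor as a grid of mini-cameras.
# Each logical pixel (s, t) has its own micro-pixel array (u, v) that
# records the light arriving from each direction. Sizes are made up.
S, T = 64, 64   # spatial resolution: number of micro-lenses
U, V = 8, 8     # angular resolution: micro-pixels behind each micro-lens

light_field = np.zeros((S, T, U, V))  # one intensity value per captured ray

# Summing over the angular dimensions collapses the directional
# information and recovers an ordinary 2D photograph.
conventional_image = light_field.sum(axis=(2, 3))  # shape: (64, 64)
```

A conventional sensor effectively performs that final sum in hardware; a plenoptic sensor keeps the four dimensions separate so they can be recombined later.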
A conventional camera allows a photographer to capture the light that hits the digital sensor (or film, if you’re still using film). The photographer has to be sure that the image is perfectly focused before the shutter release button is pressed. A light field camera captures all the light that enters the body of the camera, and the processing software lets you move back and forth in space along those captured light rays, changing the focus and depth of field.
A light-field camera captures far more light data, from many angles, than is possible with a conventional camera. The camera accomplishes this feat with a special optical element called a micro-lens array, which puts the equivalent of many lenses into a small space.
A key to light-field technology is exploiting the ever-increasing resolution of the image sensors in conventional digital cameras. A special array of lenses fitted in front of the image sensor breaks the image apart into individual rays, and software reassembles and manipulates them.
There are other benefits as well. Because images can be focused after the fact, the user doesn’t have to spend time focusing before shooting, or worry about focusing on the wrong target.
The radiance along all light rays in a region of three-dimensional space illuminated by a fixed arrangement of lights is called the plenoptic function. The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position, at any viewing angle, at any point in time.
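The idea of "any position, any angle, any time" can be written down as a function signature. The sketch below is purely illustrative (the name and the constant return value are placeholders): in its fullest form the plenoptic function takes a 3D position, a 2D direction, a wavelength, and a time, making it a seven-dimensional function.

```python
# Illustrative sketch of the plenoptic function's parameters (the name and
# return value are placeholders): radiance as a function of position
# (x, y, z), direction (theta, phi), wavelength, and time -- 7D in total.
def plenoptic_function(x, y, z, theta, phi, wavelength, t):
    """Radiance of the ray passing through (x, y, z) in direction
    (theta, phi), at the given wavelength and time. A real scene
    defines this; here we return a constant stand-in."""
    return 1.0
```

In free space the radiance is constant along a ray, which is why the function collapses to the 4D light field that plenoptic cameras actually sample.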
Plenoptic cameras create new imaging functionalities that would be difficult, if not impossible, to achieve using the traditional camera technology. This comes in the form of images with enhanced field of view, spectral resolution, dynamic range, and temporal resolution. Additional flexibility gives the ability to manipulate the optical settings of an image (focus, depth of field, viewpoint, resolution, lighting) after the image has been captured.
One example of this new imaging is omni-directional imaging using catadioptrics, which is an optical system where refraction and reflection are combined in an optical system, usually via lenses (dioptrics) and curved mirrors (catoptrics).
Another example is high dynamic range imaging using assorted pixels, which produces images with a much greater range of light and color than conventional imaging. The effect is stunning, as great as the difference between black-and-white and color television.
A further illustration is refocusing using integral imaging. Conceptually, refocusing is just a summation of shifted versions of the images that would form through pinholes placed across the entire aperture. In practice this means shifting and adding the sub-aperture images, giving post-capture control of focus, of spatial, temporal, and angular resolution, and of spectral resolution, along with plenoptic imaging for recovering scene structure.
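The shift-and-add step above can be sketched directly. This is a minimal, assumption-laden illustration (the 4D array layout, the `shift` parameter, and all names are made up for the example): each sub-aperture view is shifted in proportion to its distance from the center of the aperture, then all the views are averaged.

```python
import numpy as np

def refocus(light_field, shift):
    """Shift-and-add refocusing sketch.

    light_field -- 4D array with angular axes (U, V) first and
                   spatial axes (H, W) last (an illustrative layout).
    shift       -- pixels of shift per unit of aperture offset; this
                   selects the virtual focal plane (0 = no refocus).
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center of the aperture
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its
            # offset from the aperture center, then accumulate.
            du = int(round(shift * (u - cu)))
            dv = int(round(shift * (v - cv)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)  # average over all sub-aperture views
```

Varying `shift` sweeps the focal plane through the scene, which is exactly the "focus later" effect the cameras advertise; real implementations use sub-pixel interpolation rather than whole-pixel rolls.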
With a plenoptic lens, the light rays pass through several lenses before reaching the sensor, so they are recorded from several different perspectives. Because of all the tiny lenses in front of the sensor, the resulting raw image looks like a mosaic of small sub-images, one per micro-lens.
With some computer calculation, the user can resolve those little fragments into a normal image. It’s up to the user to decide what should be in focus: moving a slider in the menu determines what’s sharp and what’s not. Just imagine never having an out-of-focus picture again.
Security in today’s workplace, surveillance in police work, and home protection all depend on sharp, clear pictures. If security cameras had light field technology, it would be possible to refocus on the face of a criminal in less-than-perfect images taken in poor light or inclement weather. We’ve all seen CSI take a poor image from a surveillance camera and manipulate the pixels to get a recognizable picture. Now something like this can actually be done with this new technology.
Maybe a license plate needs to be clearly read, but the camera is focused on a different plane. With light field technology, the focal plane can be changed so that the license plate comes into sharp focus.
Note how, in this set of photos, the focal point is changed from the background to the foreground, improving the clarity of the different elements. Within a given plenoptic frame, enough information is available to recreate many images covering a range of focusing distances. In addition, pairs of refocused images can be generated for 3D applications by measuring the degree of focusing required for each ray across the focal stack.
Besides security applications, we might see this technology used in the motion picture industry.
Len started in the audio visual industry in 1975 and has contributed articles to several publications. He also writes opinion editorials for a local newspaper. He is now retired.
This article contains statements of personal opinion and comments made in good faith in the interest of the public. You should confirm all statements with the manufacturer to verify the correctness of the statements.