Tuesday, September 27, 2011

Does shooting at a lower RAW resolution on a crop sensor camera mimic the qualities of full frame cameras?

Question

I'm not talking about changes to the focal length.

I've read many posts saying that full frame cameras have lower pixel density than crop sensor cameras, so each pixel captures more light and thus has better ISO performance and greater dynamic range. So if I set my crop sensor camera to shoot at a lower resolution, will that equate to a better effective pixel density and mimic the performance of a full frame (or medium format) camera, or will it always shoot at maximum resolution and then reduce the size?

--EDIT: 1--
I have a Canon 60D and it offers 3 options for RAW image sizes (RAW, M-RAW and S-RAW). If RAW is just a dump from the camera's sensor, how can there be 3 different sizes? Does the camera scale down RAW images as well?

Answer

Given that you have a Canon, the lower RAW modes, mRAW and sRAW, DO INDEED UTILIZE ALL of the available sensor pixels to produce a richer result without the need for Bayer interpolation. The actual output format, while still contained within a .cr2 Canon RAW image file, is encoded in a Y'CbCr format, similar to many video pulldown formats. It stores luminance information for each FULL pixel (a 2x2 quad of 1 red, 1 blue, and 2 green sensor pixels), and each chrominance channel is derived from half-pixel data (a 1x2 pair of 1 red + 1 green or 1 blue + 1 green).
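As a rough illustration of that encoding, here is a sketch of how a single RGGB quad might be mapped to one Y'CbCr triple. The BT.601-style weights are illustrative assumptions on my part, not Canon's documented coefficients:

```python
# Hypothetical sketch: derive one (Y', Cb, Cr) triple from a 2x2 Bayer quad.
# The BT.601-style weights below are assumed for illustration only.

def quad_to_ycc(r, g1, g2, b):
    """Map one RGGB sensor quad to a (Y', Cb, Cr) triple."""
    g = (g1 + g2) / 2.0                       # average the two green samples
    y = 0.299 * r + 0.587 * g + 0.114 * b     # luma from the full quad
    cb = (b - y) * 0.564                      # blue-difference chroma
    cr = (r - y) * 0.713                      # red-difference chroma
    return y, cb, cr
```

A neutral gray quad yields Cb and Cr near zero, which is what makes chroma cheap to subsample relative to luma.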

I am not exactly certain what the specific low-level hardware read and encoding differences between mRAW and sRAW are; generally speaking, however, the smaller the output format, the more sensor pixel input information is available for each output pixel. The small amount of interpolation present in mRAW/sRAW is moot, as both formats interpolate far less than a native RAW conversion does. It should also be noted that neither mRAW nor sRAW is an actual "RAW" format in the normal sense: sensor data IS processed and converted into something else before it is saved to a .cr2 file.
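To put rough numbers on that, here is a back-of-the-envelope calculation using the commonly published 60D output sizes (treat these resolutions as approximate; the exact readout details are not documented):

```python
# Sensor pixels available per output pixel for the 60D's three RAW sizes.
# Resolutions are the commonly published 60D output sizes (approximate).
SENSOR = 5184 * 3456    # full RAW,  ~17.9 MP
MRAW   = 3888 * 2592    # mRAW,      ~10.1 MP
SRAW   = 2592 * 1728    # sRAW,       ~4.5 MP

for name, n in [("RAW", SENSOR), ("mRAW", MRAW), ("sRAW", SRAW)]:
    print(name, round(SENSOR / n, 2), "sensor pixels per output pixel")
```

So sRAW has a full 4 sensor pixels feeding each output pixel, versus roughly 1.8 for mRAW and 1 for native RAW.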

For more details about YUV derived formats and Canon sRAW, see my answer here: Why isn't the xvYCC color space seeing uptake for still photography?

From "Understanding What is stored in a Canon RAW .CR2 file":

The sRaw format (for "small RAW") was introduced with the 1D Mark III in 2007. It is a smaller version of the RAW picture.

For the 1D Mark III, then the 1Ds Mark III and the 40D (all with the Digic III), the sRaw size is exactly 1/4 (one fourth) of the RAW size. We can thus suppose that each group of 4 "sensor pixels" is summarized into 1 "pixel" for the sRaw.

With the 50D and the 5D Mark II (with the Digic IV chip), the 1/4th size RAW is still there (sRaw2), and a half size RAW also appears: sRaw1. With the 7D, the half size raw is called mraw (same encoding as sraw1), and the 1/4th raw is called sraw (like the sraw2).

The sRaw lossless JPEG is always encoded with 3 color components (nb_comp) and 15 bits.

The JPEG code of Dcraw was first modified (8.79) to handle sRaw because of the h=2 value of the first component (grey background in the table). Normal RAW always has h=1. Starting with the 50D, we have v=2 instead of v=1 (orange in the table). Dcraw 8.89 is the first version to handle this and the sRaw1 from the 50D and 5D Mark II.

"h" is the horizontal sampling factor and "v" the vertical sampling factor. It specifies how many horizontal/vertical data units are encoded in each MCU (minimum coded unit). See T-81, page 36.

3.2.1 sRaw and sRaw2 format

h=2 means that the decompressed data will contain 2 values for the first component: 1 for column n and 1 for column n+1. Combined with the 2 other components, decompressed sRaw and sRaw2 (which both have h=2 & v=1) always have 4 elementary values

[ y1 y2 x z ] [ y1 y2 x z ] [ y1 y2 x z ] ...
(y1 and y2 for first component)

Every "pixel" in sRAW and mRAW images contains four components: a split Y' component (y1 and y2), plus an x (chrominance blue) and a z (chrominance red) value. In the half-size image (sRaw1/mRAW), the first component has a horizontal sampling factor of 2 (h) and a vertical sampling factor of 1 (v), so each MCU carries two luminance samples that share a single pair of chrominance values. This indicates that each luminance value (Y') is comprised of a FULL 2x2 pixel quad, with two adjacent quads stored in y1 and y2.
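In code, unpacking that [y1 y2 x z] stream into per-pixel values might look like the following sketch (the framing and function name are my own for illustration, not from the CR2 format itself):

```python
# Sketch of unpacking the [y1 y2 x z] MCU stream quoted above into
# per-pixel (Y', Cb, Cr) triples. The two luma samples in each MCU share
# one chroma pair (a 4:2:2-style layout). Illustrative, not a CR2 decoder.

def unpack_mcus(stream):
    """stream: flat list [y1, y2, x, z, y1, y2, x, z, ...]"""
    pixels = []
    for i in range(0, len(stream), 4):
        y1, y2, cb, cr = stream[i:i + 4]
        pixels.append((y1, cb, cr))   # first pixel of the MCU
        pixels.append((y2, cb, cr))   # second pixel shares the same chroma
    return pixels
```

Each 4-value MCU thus expands to two output pixels, which is why h=2 halves the stored chroma data relative to luma.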

The references do not seem to specifically state this, so I am speculating a bit here; however, for sRaw2 (the quarter-size RAW) I believe luminance information would be derived from a larger pixel block, perhaps with h=4 and v=2. Encoding chrominance gets more complex at a quarter-size image, as the Bayer color filter array on the sensor is not arranged in neat red and blue columns. I am unsure whether alternating 2x1 columns are processed for each Cr and Cb component, or if some other form of interpolation is performed. One thing is certain: the source data interpolated is always larger than the output data, and no overlapping (as in normal Bayer interpolation) occurs as far as I can tell.
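As a conceptual illustration of that non-overlapping binning idea (this is not Canon's actual pipeline, just the geometry), averaging each 2x2 block halves the image in each dimension and quarters the pixel count:

```python
# Conceptual non-overlapping 2x2 block averaging on a luma plane.
# Assumption for illustration only; Canon's real downscaling is undocumented.

def bin2x2(img):
    """img: list of rows of luma values; returns a half-size image by
    averaging each non-overlapping 2x2 block."""
    out = []
    for r in range(0, len(img) - 1, 2):
        row = []
        for c in range(0, len(img[0]) - 1, 2):
            s = img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]
            row.append(s / 4.0)   # each output pixel sees 4 source pixels
        out.append(row)
    return out
```

Unlike Bayer demosaicing, no source pixel contributes to more than one output pixel here, which is the "no overlapping" property speculated above.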

Finally, sRaw1/mRAW and sRaw2/sRAW are compressed using a lossless compression algorithm. This is a critical distinction between these formats and JPEG, which also uses a YCC-type encoding: JPEG performs lossy compression, making it impossible to restore pixels to their exact original values. Canon's sRAW/mRAW formats can indeed be restored to the original full-precision 15-bit image data.
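The distinction is easy to demonstrate: any lossless codec (zlib here, standing in for Canon's lossless JPEG, which I cannot reproduce in a few lines) restores the samples bit-for-bit:

```python
# Lossless round trip: 15-bit samples survive compress/decompress exactly.
# zlib is used as a stand-in for Canon's lossless JPEG coding.
import struct
import zlib

samples = [0, 12345, 32767, 777]           # 15-bit values (max 32767)
raw = struct.pack(f"<{len(samples)}H", *samples)
restored = struct.unpack(f"<{len(samples)}H",
                         zlib.decompress(zlib.compress(raw)))
assert list(restored) == samples           # exact reconstruction
```

A lossy codec like baseline JPEG quantizes frequency coefficients, so no equivalent round-trip guarantee exists there.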

