Question
I'm new to working with RAW images, and I'm capturing simultaneous RAW+JPGs with my new Lumix LX5, and using Bibble to view/process the results.
I'm very surprised that the RAW images taken at 24mm wide 16x9 seem to capture a different (and larger) sensor area compared to the JPGs. The RAW images seem to contain the equivalent of about 100 extra pixels on left and right sides, and a smaller number top and bottom. I say "equivalent", because the actual pixel counts of RAW and JPG are only slightly different, which implies some resizing must be going on...?
JPG: 3968 x 2232; RAW: 3976 x 2238
I guess this small difference is because JPEG dimensions must be multiples of 16?
The raw image displays noticeable vignetting in the extra pixels, and there's a fair bit of chromatic aberration. I can crop off the 'extra' pixels, but then my RAW image has fewer pixels in it than the JPG, which doesn't feel right.
I'll try and add samples shortly.
Answer
Firstly, there are a couple of general reasons why raw and JPEG images differ in size, and why the raw size differs from the actual number of pixels on the sensor:
Whilst JPEG image dimensions don't have to be multiples of 16 (or 8 when chroma subsampling is not used), it is more efficient when they are, as it allows the image to be rotated without re-encoding (lossless rotation). That can account for a small size difference, as you say.
Even raw image sizes typically differ from the actual number of pixels, as most sensors have strips of masked pixels (which receive no light) down each side, used to detect banding issues caused by uneven amplification. Further, the size you see in your raw viewer will differ from the actual raw data: some image processing operations use a form of averaging which doesn't work at the extreme edges (there's no data beyond the image to average with), so those edges get cropped off when the image is viewed or converted.
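The first point above can be sketched numerically. This is an illustrative assumption, not documented LX5 behaviour, but rounding the reported raw frame down to a 16-pixel MCU boundary in width and an 8-pixel boundary in height reproduces the questioner's JPEG size exactly:

```python
# Sketch: why the JPEG is slightly smaller than the raw frame.
# JPEG encodes in MCU blocks (16x16 with 4:2:0 chroma subsampling,
# 8x8 without); cropping to an exact multiple permits lossless rotation.
# The block sizes used here are assumptions for illustration.

def crop_to_multiple(size, block):
    """Round a dimension down to the nearest multiple of `block`."""
    return (size // block) * block

raw_w, raw_h = 3976, 2238   # raw frame reported by the LX5 at 16:9

# Width rounded to a 16-pixel boundary, height to an 8-pixel boundary,
# matches the reported JPEG size of 3968 x 2232:
print(crop_to_multiple(raw_w, 16), crop_to_multiple(raw_h, 8))  # 3968 2232
```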
Secondly, the Panasonic Lumix LX3 and LX5 have a different sensor design to most cameras, which is partially responsible for the difference in coverage between raw and JPEG that you are experiencing:
The maximum 16:9 image size is actually wider than the maximum 4:3 image size. You would expect them to be the same width but different heights.
This is because Panasonic made the sensor a bit wider for 16:9, employing a non-rectangular ("multi-aspect") design that pushes to the very edge of the lens's image circle; this explains the vignetting and CA you observe in the raw. (The original answer included a diagram of the irregular sensor layout.)
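The geometry can be illustrated with a quick calculation. Assuming (for illustration, using an arbitrary diagonal, not real LX5 sensor measurements) that the different aspect-ratio crops share roughly the same image-circle diagonal, the 16:9 frame does indeed come out wider than the 4:3 frame:

```python
import math

def frame_size(diagonal, aspect_w, aspect_h):
    """Width and height of a frame of the given aspect ratio
    inscribed in a circle of the given diagonal."""
    unit = diagonal / math.hypot(aspect_w, aspect_h)
    return aspect_w * unit, aspect_h * unit

d = 10.0  # arbitrary diagonal; only the comparison of widths matters
w43, h43 = frame_size(d, 4, 3)
w169, h169 = frame_size(d, 16, 9)
print(f"4:3  -> {w43:.2f} x {h43:.2f}")    # 8.00 x 6.00
print(f"16:9 -> {w169:.2f} x {h169:.2f}")  # wider but shorter than 4:3
```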
As John Cavan suggests, the JPEG image pipeline is doing some correction, including barrel distortion correction, given that 24mm equiv. is very wide for a compact, and the sensor is pushing to the very edge of the image circle.
Barrel distortion correction makes straight lines straight again, but causes the image edges to bend in the process. To compensate, the correcting transformation enlarges the image slightly and crops it to restore straight edges.
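A minimal sketch of why the edges bend, using a one-term radial model (r_corrected = r·(1 + k1·r²)); the coefficient and coordinates here are illustrative values, not LX5 calibration data:

```python
# Sketch: one-term radial distortion correction. With k1 > 0 this
# undoes barrel distortion: points further from the centre are pushed
# outward more. The value of k1 below is arbitrary, for illustration.

def correct(x, y, k1=0.1):
    """Map a distorted point (normalised coords, centre at 0,0)
    to its corrected position."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Points along the (straight) top edge of the distorted frame, y = 0.5:
for x in (-0.8, -0.4, 0.0, 0.4, 0.8):
    cx, cy = correct(x, 0.5)
    print(f"({x:+.1f}, 0.5) -> ({cx:+.3f}, {cy:.3f})")

# The corrected y values are no longer constant: points near the corners
# move outward further than the edge midpoint, so the frame edge bows,
# and the converter must enlarge and re-crop to get a straight edge back.
```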
Can you see any differences in the appearance of straight lines between the raw and the JPEG? It might be quite subtle, but overlaying the two should reveal it.