Monday 26th August 2013 8:55pm
A 10MP (megapixel) sensor will produce images that print at roughly 12"x8" for a 300dpi quality output. This isn't particularly large, and although significantly higher resolutions are moving towards affordable (a mere £2k for Nikon's 36MP D800), we are still in a world where every pixel counts.
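That print size follows directly from the pixel dimensions. As a quick sanity check in Python, using the 3872 x 2592 resolution of the 10MP Nikon 1 V1 that features later in this piece:

```python
# Print size at 300dpi for a 10MP (3872 x 2592) sensor
w_px, h_px = 3872, 2592
print(f'{w_px / 300:.1f}" x {h_px / 300:.1f}"')  # 12.9" x 8.6"
```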
When shooting digitally there's a temptation to think "I'll fix it in post". If you're capturing RAW files and you get a good histogram in all channels then certainly there's plenty of leeway for working an image up in post-production: but ultimately, you only have the pixels available on the sensor to work with. If you throw pixels away through excessive cropping or perspective adjustment there is no means to re-inject that data. Obviously, when stretching an image pixels can be interpolated, but that hardly amounts to recovering the genuine detail resolved by the sensor cells.
Quality is never a veneer, a thing that can be layered on later in the process. Every step from shutter release to print causes a drop in quality. It is critical then to hang on to every pixel the camera can capture.
The composition will be arranged for a specific image shape, perhaps:
3:2 (the native aspect ratio of most sensors)
2:1 or 3:1 etc (panorama)
so some percentage of pixels is inevitably lost. When cropping a 3:2 image to any other standard aspect ratio the worst case is a loss of 33% of the pixels (one third, for a square crop). The following table indicates the pixel loss for the various 'standard' aspect ratios that abound (and I do recommend cropping to standard ratios rather than to arbitrary shapes).
| Aspect Ratio | Description | Pixel Loss ('most' digital sensors) |
| --- | --- | --- |
| 1:1 | Square format, common in Hasselblad and Rolleiflex cameras | 33% |
| 1:1.4 | Aspect ratio of 'A' series papers | 7% |
| 2:1 | Super wide panoramic (Linhof Technorama cameras) | 25% |
| 3:2 | Standard 35mm format, typical for most digital cameras | - |
| 4:3 | Micro 4/3rds digital format, 6x4.5 film format, traditional TV format | 11% |
| 5:4 | Classic large format camera aspect ratio | 17% |
| 7:6 | Film format used in Pentax 67 and Mamiya 7II cameras | 22% |
| 16:9 | Panoramic format, 'widescreen' TV format | 16% |
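The figures in the table can be reproduced for any target ratio. Here is a minimal Python sketch (the `crop_loss` helper is my own, not part of any library):

```python
def crop_loss(target, sensor=3/2):
    """Fraction of pixels lost when cropping a frame of the given
    sensor aspect ratio (default 3:2) to a target aspect ratio,
    keeping the largest possible crop."""
    # Normalise both ratios to long side / short side
    target, sensor = max(target, 1 / target), max(sensor, 1 / sensor)
    # If the target is 'squarer' we keep full height and trim the width,
    # otherwise we keep full width and trim the height
    kept = target / sensor if target <= sensor else sensor / target
    return 1 - kept

for label, ratio in [("1:1", 1.0), ("1:1.4", 1.4), ("2:1", 2.0),
                     ("4:3", 4 / 3), ("5:4", 5 / 4),
                     ("7:6", 7 / 6), ("16:9", 16 / 9)]:
    print(f"{label:>5}: {crop_loss(ratio):.0%} lost")
```

Running this prints the same percentages as the table above (33%, 7%, 25%, 11%, 17%, 22% and 16%).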
The relationship between these aspect ratios is shown visually in the diagram on the left.
Of course, if you do not follow Capa's mantra (get close, get closer) you will also be cropping additional dead space from the image. It should be clear, then, that it is all too easy to find yourself cropping as much as half of the image after shooting, especially if choosing an 'extreme' aspect ratio (1:1 or 2:1) and also removing a modest amount of dead space. So a 10MP capture can easily become an effective 5MP image.
Moreover, once an image has been cropped it may then also be 'perspective adjusted', especially for architectural or strongly graphic images. Consider this 'Windows' shot from my 10MP Nikon 1 V1:
This is (almost!) a square format image shot on a 3:2 sensor, so 33% of the pixels have been lost. However I was shooting from pavement level and had no choice but to angle the camera upwards which caused the verticals to converge. I also (unintentionally) skewed the camera slightly. The following image shows the original capture and indicates how far off true vertical and horizontal alignment I was when I took the shot:
There are always three dimensions of alignment to consider. It's difficult to find a standard for naming these, so I give the most usual photographic term along with the term used more precisely to describe the attitude of an object around its centre of mass:
Is the lens pointing up or down? (tilt, or pitch)
Is the camera base level? (rotation, or roll)
Is the camera back flat-on to the subject? (pan, or yaw)
In this example I have errors in all three axes, which is apparent from the asymmetry of the skewing of the actual area (green highlight) within the 'ideal' area (red highlight).
The perspective was corrected in Photoshop, and using simple maths we can calculate the area of pixels (green highlight) used in the final image.
The area of the final image (red highlight) is:
2543 * 2360 = 6,001,480
The areas of each of the four triangles that will be cropped are:
(2360 * 207) /2 = 244,260
(2543 * 118) /2 = 150,037
(2360 * 356) /2 = 420,080
(2543 * 28) / 2 = 35,602
This gives a total of 849,979 cropped pixels, which in the perspective adjustment are replaced by interpolated pixel values (there is about a 5% error in this figure since two of the triangles overlap, top right).
So the effective number of 'real-world' pixels in the final image is:
6,001,480 - 849,979 = 5,151,501
The original full sensor capture contained:
3872 * 2592 = 10,036,224 pixels
In this case the final image:
Utilised 51.3% of the pixels captured by the sensor.
Cropped 33% of the sensor pixels to achieve the required aspect ratio
Cropped 7% of the sensor pixels to remove 'dead space'
Cropped 8.7% of the pixels to perform perspective adjustment
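The whole calculation can be checked with a few lines of Python (dimensions taken from the worked example above; the variable names are my own):

```python
# Reproduce the 'Windows' worked example
final_w, final_h = 2543, 2360      # cropped, perspective-corrected frame (red)
sensor_w, sensor_h = 3872, 2592    # full Nikon 1 V1 sensor

final_area = final_w * final_h     # 6,001,480
sensor_area = sensor_w * sensor_h  # 10,036,224

# The four corner triangles trimmed away by the perspective correction,
# as (base, height) pairs
triangles = [(2360, 207), (2543, 118), (2360, 356), (2543, 28)]
interpolated = sum(b * h // 2 for b, h in triangles)  # 849,979

real_pixels = final_area - interpolated               # 5,151,501
print(f"Utilised {real_pixels / sensor_area:.1%} of the sensor's pixels")
```

This prints "Utilised 51.3% of the sensor's pixels", matching the figure above.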
Using simple trigonometry (from a convenient on-line calculator) the four angles of error were (clockwise from bottom left):
So my 10MP camera delivered a 5MP image and I suffered a 16% drop in quality due to in-the-field limitations and inaccuracies.
Affordable digital cameras do not yet deliver a pixel count that allows high-quality exhibition prints to be created at suitable sizes
Working in aspect ratios different to the camera's sensor can result in a significant loss of pixels (33% composing a square format on a Nikon CX sensor)
Cropping 'dead space' further exacerbates the pixel loss
Perspective adjustments further increase pixel loss
Taking all factors into consideration the effective pixel count of an image can easily be half that of the camera sensor
Accurate framing, with regard to both proximity and perspective at the time of shooting, is materially significant to the end quality of the image.