
How does image quality in DSLRs depend on sensor size and effective pixels (and price)?

bonku

Member
How does image quality compare across DSLRs from different companies when comparing effective pixels and sensor size (and price)?

Model - Effective pixels (MP) - Sensor size - Price

Olympus
E-3  - 10.1 - 17.3 x 13.0 mm - $1,199.00
E-30 - 12.3 - 17.3 x 13.0 mm - $899.95

Nikon
D3X   - 24.5 - 35.9 x 24.0 mm - $7,999.95
D300S - 12.3 - 23.6 x 15.8 mm - $1,799.95
D90   - 12.3 - 23.6 x 15.8 mm - $999.95

Pentax
K-7  - 14.6 - 23.4 x 15.6 mm - $1,299.95
K-x  - 12.4 - 23.6 x 15.8 mm - $649.95
K20D - 14.6 - 23.4 x 15.6 mm - $699.95

Sony
A900 - 24.6 - 23.4 x 15.6 mm - $2,699.99
A850 - 24.6 - 23.4 x 15.6 mm - $1,999.00
A700 - 12.2 - 23.4 x 15.6 mm - $899.99
A550 - 14.2 - 23.4 x 15.6 mm - $949.99

Canon
EOS-1Ds Mark III - 21.1 - 36.0 x 24.0 mm - $6,999.00
EOS 7D           - 18.0 - 22.3 x 14.9 mm - $1,699.00
EOS 50D          - 15.0 - 22.3 x 14.9 mm - $1,199.00

Sigma
SD14 - 14.06 - 20.7 x 13.8 mm - $724.95



Will it be OK to assume that the Olympus E-30 will have better picture resolution than the E-3? Or that the Nikon D3X will produce a better picture than the Canon 1Ds Mark III? Will the Sony A550 produce a better picture than the A700? Or can the Sigma SD14 take a better picture than the Pentax K-7?
 

tc95

Well-Known Member
It is easy to get caught up in the MP thing (megapixels)... I have found anything over 6 MP works on a clear sunny day... anything over 12 MP works awesome in a low-light situation...

I take pictures with several cameras...

What I currently have:
Nikon D100 6MP
Sigma SD14 14MP
Sigma DP1 14MP
Sony A100 10MP
Sony A200 10MP
Sony A700 12MP
Sony A900 24MP

What I have had:
Nikon D300 12MP
Nikon D90 12MP
Nikon D70 6MP
Sony A350 12MP
Sigma SD10 10MP

All of them take great shots in the right situation... it is more the person in control of the camera, and knowing your f-stop, ISO and shutter speed settings, that determines if the shot is good or not... I would not get too caught up in the MP wars... what really makes the difference in your shot is the glass... the types of lenses you have and the ability to use them...

Here are just samples from five of my cameras:

Nikon D100 6MP:
Fountain 6MP.jpg


Sony A100 10MP:
Dead 10MP.jpg


Sony A700 12MP:
Flower 12MP.jpg


Sigma SD14 14MP:
Sigma 14MP.jpg


Sony A900 24MP:
Food 24MP.jpg


I know these are not all the same picture... but it shows that no matter what the MP count, if you compose your shot, it should come out right... Now I will say that the Sony A900 is really my go-to camera now, due to the fact that I can crop an image down and not have to worry about MP loss...

I hope this helps some...

Tony C.


PS: the Sony A900 has a 35.9 x 24.0 mm sensor...
 

Steaphany

Well-Known Member
The SD14 is a 4.7M-photosite imager, not 14M. 14M counts the individual light-sensing diodes, but there are only 4.7M locations across the imager where light falls to build an image; each location has 3 diodes.

The Sigma SD14 also proves that pixel count does not equate to image quality.

Your table does not include any data to define available image resolution.
---
Tony, you beat me to the details
 

bonku

Member
Thanks Tony and Steaphany.

It seemed to me that image resolution would depend on how many photo-capturing diodes (pixels) are present per unit area of the sensor. I tried to get an idea of how (effective) pixel count correlates with sensor size and what the image quality is. I have a 6 MP Olympus SP500 UZ. Some of its images are really great compared to my SD14. I did not have access to more than those two digital cameras, so my experiment had severe limitations for drawing any meaningful conclusion.

Theoretically, more pixels per unit area (of the sensor) should translate into better resolution (the ability of the equipment to differentiate the two closest points).
Should I assume that the so-called pixel's, or photo-capturing diode's, quality and unit measurement (the area covered by a single diode on the sensor) are essentially the same in all sensors from different manufacturers?
In that sense, the larger the area of a single pixel (sensor size divided by total number of pixels), the lower the resolution, but the higher the light sensitivity. Right? The same fundamentals of physics can be applied to traditional film (i.e. the higher the ISO, the lower its "quality" or resolution, but the more sensitive it is in low light). Am I right?
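
To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python (assuming square photosites that tile the whole sensor and ignoring any non-sensing overhead), using the sensor dimensions and megapixel counts from the table above:

Code:
from math import sqrt

cameras = {
    # name: (sensor width mm, sensor height mm, effective megapixels)
    "Olympus E-3":  (17.3, 13.0, 10.1),
    "Nikon D3X":    (35.9, 24.0, 24.5),
    "Pentax K-7":   (23.4, 15.6, 14.6),
    "Canon EOS 7D": (22.3, 14.9, 18.0),
    "Sigma SD14":   (20.7, 13.8, 4.7),   # 4.7M photosite locations, 3 diodes each
}

for name, (w, h, mp) in cameras.items():
    area_um2 = (w * 1000) * (h * 1000) / (mp * 1e6)  # square microns per photosite
    pitch_um = sqrt(area_um2)                        # edge length if sites were square
    print(f"{name:14s}  ~{area_um2:5.1f} um^2 per photosite  (~{pitch_um:.1f} um pitch)")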

The SD14 makes the calculation a bit more complicated with its 3-layer, 4.7 MP imager. In that sense, the compilation of the three layers is achieved by software, not by my optics or a single-layer sensor, as I understand it. In short, the SD14 can theoretically give better color reproduction but a less sharp image, or lower resolution, compared to a camera with 14 MP output and a similar-size sensor (20.7 x 13.8 mm).
 

Steaphany

Well-Known Member
Should I assume that the so-called pixel's, or photo-capturing diode's, quality and unit measurement (the area covered by a single diode on the sensor) are essentially the same in all sensors from different manufacturers?


No, and that's a big No.

I started my career back in the 1970s doing semiconductor engineering, which included both topology design and wafer process engineering, so this is an area where I've had years of knowledge to go by. Every semiconductor manufacturer has their own process and design rules. The process defines the actual steps performed on the wafers as they go through manufacturing, and the design rules define the dimensional criteria that engineers use when creating a chip's topology. In fact, each manufacturer can, and usually does, have numerous "formulas" for their chip designs depending on functional requirements.

Another thing you need to keep in mind is that the light-sensing area of a photosite comes with overhead. There are the row and column addressing and signal-handling circuitry, plus additional factors that eat into the real estate.

The concept of covering an imager with a microlens array is a means to compensate. It creates a fly's eye where the light falling on each lens gets focused onto the small active area of each photosite.

Another factor is that quantum phenomena become more pronounced as dimensions decrease. Remember, with an imager, the circuitry is analog, not digital. Instead of 1 or 0, you have anything between 0 and 1, which is why noise becomes a greater problem with ever finer geometries.

Even the Foveon imagers used in the SD9, SD10, and SD14 all have different photosite dimensions.

Now, for light sensitivity, you need a large active area. Smaller active area photosites actually have less, not more, light gathering potential.
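
To put a number on that, here is a small illustrative sketch using a shot-noise-only model: collected photons grow with the active area, while the noise only grows with the square root of the photon count. The photon flux value is assumed purely for illustration.

Code:
from math import sqrt, log10

photons_per_um2 = 50.0                 # assumed photons reaching each square micron during the exposure

for area_um2 in (2.0, 8.0, 32.0):      # small compact-style vs. large DSLR-style active areas
    n = photons_per_um2 * area_um2     # photons collected scales with the active area
    snr = sqrt(n)                      # Poisson shot noise: sigma = sqrt(N), so SNR = N / sqrt(N)
    print(f"{area_um2:5.1f} um^2 -> {n:6.0f} photons, SNR ~ {snr:5.1f} ({20 * log10(snr):.1f} dB)")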

The same fundamentals of physics can be applied to traditional film (i.e. the higher the ISO, the lower its "quality" or resolution, but the more sensitive it is in low light).

In photographic film (B&W, color, negative, or reversal, it doesn't matter), light is collected by small crystals of silver halide. When a crystal receives enough photons, the whole crystal switches from "unexposed" to "exposed". Photons that hit the film in between the halide crystals do nothing. A fine-grain film has small crystals, but for any given crystal to be exposed it needs to receive enough photons, which is hard to do under low-light conditions. How do you make film with higher sensitivity? Well, get rid of the room in between crystals and increase the size of each crystal so it catches the few photons available, i.e. larger crystals. These halide crystals, when developed, are what is termed grain, and this is the quantum physics behind the magic of film photography.
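
A toy model of that trade-off (purely illustrative numbers): treat photon arrivals at each crystal as Poisson, with the mean proportional to the crystal's area, and ask how likely a crystal is to catch the handful of photons it needs to flip to "exposed".

Code:
from math import exp, factorial

def p_exposed(mean_photons, threshold=4):
    """P(a crystal catches at least `threshold` Poisson-distributed photons)."""
    p_below = sum(exp(-mean_photons) * mean_photons ** k / factorial(k)
                  for k in range(threshold))
    return 1.0 - p_below

low_light_flux = 2.0                    # assumed mean photons per unit crystal area during the exposure
for rel_area in (1, 4, 16):             # fine grain vs. progressively coarser grain
    mean = low_light_flux * rel_area
    print(f"crystal area x{rel_area:2d}: P(exposed) = {p_exposed(mean):.2f}")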

In that sense, the compilation of the three layers is achieved by software, not by my optics or a single-layer sensor, as I understand it.

Actually, the three layers are real, just as modern color photographic film has three stacked light-sensing layers. It doesn't take software to make sense of a color negative or transparency, and neither does software make the magic of Foveon. Software enters this world because the output of the imager is read out, processed through a 12-bit analog-to-digital converter, and this table of numbers then needs to be stored on digital electronic media. The software only handles this last bit.

give better color reproduction but a less sharp image, or lower resolution, compared to a camera with 14 MP output and a similar-size sensor

Why less sharp? Is your reference a Bayer-masked imager, or a monochrome imager which shoots three exposures through three primary-colored filters?

Three exposures through three filters is a common practice in astronomical photography. That's even how the Hubble works. This would be the only way to get better resolution.

If you are talking about a Bayer-masked imager, well, what constitutes a pixel? It's my view that four photosites define a pixel: two green, a red, and a blue. Don't tell me that by measuring the red at any specific location you can accurately derive how much green and blue fell on the same spot through a mathematical interpolation of the surrounding photosites. This is the great flaw in Bayer-masked technology.

Bayer resolution is also questionable. 50% of the imager area has exclusively green photosites; red and blue get 25% each. Plus, you never have a red next to a red; all the green and blue photosites are "in the way".
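
To make the layout concrete, here is a minimal sketch of an RGGB Bayer tile and a naive bilinear guess of the red value at a green photosite, i.e. the kind of interpolation being questioned here (made-up values, not any camera's actual pipeline):

Code:
import numpy as np

# One repeating RGGB tile, extended to 4x4. Each photosite records only one
# colour; the other two colours at that location have to be interpolated.
cfa = np.array([["R", "G", "R", "G"],
                ["G", "B", "G", "B"],
                ["R", "G", "R", "G"],
                ["G", "B", "G", "B"]])

# Raw intensities recorded at each photosite (made-up values).
raw = np.array([[110,  90, 112,  88],
                [ 95,  60,  93,  62],
                [108,  92, 109,  91],
                [ 94,  61,  96,  63]], dtype=float)

# 50% of the sites are green, 25% red, 25% blue:
print({c: float(np.mean(cfa == c)) for c in "RGB"})

# Naive bilinear guess of red at the green site (0, 1): average the
# horizontal red neighbours at (0, 0) and (0, 2).
red_estimate = (raw[0, 0] + raw[0, 2]) / 2
print("Estimated red at (0, 1):", red_estimate)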

I suggest that you read:

 

tc95

Well-Known Member
Steaphany... nicely put!!! I was wondering how I would word the Bayer vs. Foveon comparison here...

Just to illustrate what Steaphany is saying...

This is a Bayer sensor and how it captures light.....

700px-Bayer_pattern_on_sensor.jpg

500_bayer-sensor.JPG


This is the Foveon Sensor and how it captures light....

foveon_x3_breakout_1_l.jpg

20070309-01al.jpg


This is the two side by side....

sensors.jpg

foveon-vs-mosaic.jpg
 

tc95

Well-Known Member
Lastly...this is the thickness of both chips...

pixellocation.jpg

Thank you again, Steaphany, for the explanation... I added the pictures to help show why there is a difference... and why the Foveon sensor looks 3D... even though it is only 4.7 MP...

And it goes back to my first statement... it does not matter what you shoot with... I choose Sigma and Sony because I like the images I get with both... but I would work on better glass first, then maybe a better body...

But overall, I have been saying this forever... 10% camera, 20% glass, 70% the person behind the camera...

Tony C.
 

kakou

Active Member
Even the Foveon imagers used in the SD9, SD10, and SD14 all have different photosite dimensions.

The photosite dimensions on the SD9 and SD10 are the same, but the SD10 sensor has microlenses, which changed the effective pixel size, not the actual size.

Now, for light sensitivity, you need a large active area. Smaller active area photosites actually have less, not more, light gathering potential.

Right. Larger pixels have a better signal/noise ratio, but you end up with fewer pixels for a given sensor size and type. It's a tradeoff.

In photographic film (B&W, color, negative, or reversal, it doesn't matter), light is collected by small crystals of silver halide. When a crystal receives enough photons, the whole crystal switches from "unexposed" to "exposed". Photons that hit the film in between the halide crystals do nothing. A fine-grain film has small crystals, but for any given crystal to be exposed it needs to receive enough photons, which is hard to do under low-light conditions. How do you make film with higher sensitivity? Well, get rid of the room in between crystals and increase the size of each crystal so it catches the few photons available, i.e. larger crystals. These halide crystals, when developed, are what is termed grain, and this is the quantum physics behind the magic of film photography.

What's ironic is that film is actually digital since each crystal is either on or off, like a bit in a memory chip, and digital is actually analog since the sensor produces an analog voltage which is later converted to digital.

Actually, the three layers are real, just as modern color photographic film has three stacked light-sensing layers. It doesn't take software to make sense of a color negative or transparency, and neither does software make the magic of Foveon. Software enters this world because the output of the imager is read out, processed through a 12-bit analog-to-digital converter, and this table of numbers then needs to be stored on digital electronic media. The software only handles this last bit.

Film only produces an image after a lot of chemistry, and digital sensors (of any type) only produce an image after software processes the raw data.

There's also a lot of software in the Foveon magic. Unlike film, the layers don't actually sense red, green or blue, but three overlapping ranges which must be converted to RGB.
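
A sketch of what that conversion step looks like in principle: map the three layer readings to RGB through a 3x3 colour matrix. The matrix values below are invented purely for illustration, not Foveon's actual calibration.

Code:
import numpy as np

# Rows map to output R, G, B; columns are the top, middle and bottom layer
# signals. Matrix values are invented for illustration; real Foveon
# processing also involves white balance, noise handling, gamma, etc.
layer_to_rgb = np.array([
    [-0.2, -0.6,  1.8],   # R comes mostly from the deepest (red-ish) layer
    [-0.3,  1.6, -0.4],   # G mostly from the middle layer
    [ 1.7, -0.5, -0.1],   # B mostly from the top (blue-ish) layer
])

layer_signals = np.array([0.30, 0.55, 0.42])   # hypothetical normalised top/middle/bottom readout
rgb = np.clip(layer_to_rgb @ layer_signals, 0.0, 1.0)
print("RGB:", rgb)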

Three exposures through three filters is a common practice in astronomical photography. That's even how the Hubble works. This would be the only way to get better resolution.

Only if the subject does not move, otherwise you get nasty colour fringes.

If you are talking about a Bayer-masked imager, well, what constitutes a pixel?

Spatial location, just like anything else.

It's my view that four photosites define a pixel: two green, a red, and a blue.

Your view is wrong. Bayer does not process groups of four into a single pixel.

Don't tell me that by measuring the red at any specific location you can accurately derive how much green and blue fell on the same spot through a mathematical interpolation of the surrounding photosites. This is the great flaw in Bayer-masked technology.

That's exactly what happens. Read some of the papers on Bayer demosaicing.
 

kakou

Active Member
It seemed to me that image resolution would depend on how many photo-capturing diodes (pixels) are present per unit area of the sensor.

Generally that's true, but there's more to resolution than just pixel count. Put a lousy lens on a high resolution camera and you won't get a very good result.

Theoretically, more pixels per unit area (of the sensor) should translate into better resolution (the ability of the equipment to differentiate the two closest points).

It should, but as pixel count goes up, the pixel size goes down, which means noise goes up (assuming nothing else changes, of course). Do you want a high resolution noisy photo or a low resolution clean photo?

Should I assume that the so-called pixel's, or photo-capturing diode's, quality and unit measurement (the area covered by a single diode on the sensor) are essentially the same in all sensors from different manufacturers?

No. Different manufacturers make sensors with different levels of noise.

In that sense, the larger the area of a single pixel (sensor size divided by total number of pixels), the lower the resolution, but the higher the light sensitivity. Right? The same fundamentals of physics can be applied to traditional film (i.e. the higher the ISO, the lower its "quality" or resolution, but the more sensitive it is in low light). Am I right?

High ISO film is grainy and using a sensor at a higher ISO produces more noise, so in that sense, they're similar.

Larger pixels collect more light, which means the sensor will work better at high ISO. That's why a DSLR does much better at high ISO than a compact camera, and why the DP1 and DP2 do better than a lot of compacts - the sensor (and pixel size) is much bigger.
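
For a rough sense of scale, here is a quick sketch comparing photosite area for an APS-C sensor and a typical 1/2.3-inch compact sensor at the same pixel count (the sensor dimensions are approximate, and the same square-photosite, uniform-tiling assumption as before applies):

Code:
# Approximate sensor dimensions; both sensors are given the same 12 MP count
# so only the sensor area differs.
sensors = {
    "APS-C DSLR (~23.6 x 15.8 mm)":    (23.6, 15.8),
    "1/2.3-inch compact (~6.2 x 4.6 mm)": (6.2, 4.6),
}
megapixels = 12.0

for name, (w, h) in sensors.items():
    area_um2 = (w * 1000) * (h * 1000) / (megapixels * 1e6)  # square microns per photosite
    print(f"{name}: ~{area_um2:.1f} um^2 per photosite")

At the same pixel count, each DSLR photosite ends up with roughly ten times more collecting area, which is where the high-ISO advantage comes from.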

The SD14 makes the calculation a bit more complicated with its 3-layer, 4.7 MP imager. In that sense, the compilation of the three layers is achieved by software, not by my optics or a single-layer sensor, as I understand it. In short, the SD14 can theoretically give better color reproduction but a less sharp image, or lower resolution, compared to a camera with 14 MP output and a similar-size sensor

The layers complicate it a little.
 

Arvo

Well-Known Member
Don't tell me that by measuring the red at any specific location you can accurately derive how much green and blue fell on the same spot through a mathematical interpolation of the surrounding photosites. This is the great flaw in Bayer-masked technology.

That's exactly what happens. Read some of the papers on Bayer demosaicing.

There's one catch with that.

A Bayer image can be 100% correctly demosaiced only when the image spatial frequency (is that the right term?) is less than the Nyquist frequency for every single color channel (and there is no noise present). This approach involves using an AA filter (a lowpass for spatial frequencies) and killing pixel-level details (high frequencies) with it.
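
For a sense of where those limits sit, a small sketch (assumed 5 um photosite pitch) of the Nyquist frequency for the full photosite grid versus the sparser red and blue channels:

Code:
# Per-channel Nyquist limits for a Bayer sensor, in line pairs per mm.
# The 5.0 um pitch is an assumed, APS-C-class value. Red and blue sites sit
# on a grid with twice the pitch, so their Nyquist frequency is half the
# sensor's; green sits on a denser diagonal grid in between.
pitch_mm = 5.0 / 1000.0                          # photosite pitch: 5 um expressed in mm

sensor_nyquist = 1.0 / (2.0 * pitch_mm)          # full photosite grid
red_blue_nyquist = 1.0 / (2.0 * 2.0 * pitch_mm)  # R and B sampled every other row/column

print(f"Sensor Nyquist:   {sensor_nyquist:.0f} lp/mm")
print(f"Red/blue Nyquist: {red_blue_nyquist:.0f} lp/mm")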

Of course, there exist algorithms for decoding/interpolating Bayer images containing higher spatial frequencies too. This is usable in MF and other bigger-sensor cameras without an AA filter, where the introduced pixel-level artifacts and aliasing are almost invisible due to the big pixel count.

But I've yet to see a universal, foolproof algorithm to demosaic such 'aliasing-affected' Bayer images. Some raw decoders allow changing algorithms and/or adjusting their parameters; for different kinds of images, the demosaicing results differ greatly.

Otherwise I agree with most of your points.
 

kakou

Active Member
There's one catch with that.

A Bayer image can be 100% correctly demosaiced only when the image spatial frequency (is that the right term?) is less than the Nyquist frequency for every single color channel (and there is no noise present). This approach involves using an AA filter (a lowpass for spatial frequencies) and killing pixel-level details (high frequencies) with it.

It only needs to be less than the sensor's Nyquist frequency, not for each individual colour, and nothing is ever 100% exact anyway.

Of course, there exist algorithms for decoding/interpolating Bayer images containing higher spatial frequencies too. This is usable in MF and other bigger-sensor cameras without an AA filter, where the introduced pixel-level artifacts and aliasing are almost invisible due to the big pixel count.

Higher spatial frequencies need a higher resolution sensor. Medium format is now at 60 megapixels. It's also not cheap.

But I've yet to see a universal, foolproof algorithm to demosaic such 'aliasing-affected' Bayer images. Some raw decoders allow changing algorithms and/or adjusting their parameters; for different kinds of images, the demosaicing results differ greatly.

There isn't a single algorithm for anything. The better Bayer algorithms are adaptive, which is a good thing.
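
As a flavour of what "adaptive" means, here is a minimal edge-directed sketch for estimating green at a red photosite: compare horizontal and vertical gradients and interpolate along the smoother direction. Toy values only; production demosaicers are far more elaborate.

Code:
import numpy as np

# 3x3 neighbourhood of raw values centred on a red photosite in an RGGB
# layout: the four direct neighbours are green, the corners are blue
# (unused here). Values are made up and describe a vertical edge.
raw = np.array([[116.0, 118.0,  60.0],
                [118.0,  90.0,  58.0],
                [117.0, 115.0,  61.0]])

g_up, g_down = raw[0, 1], raw[2, 1]
g_left, g_right = raw[1, 0], raw[1, 2]

grad_h = abs(g_left - g_right)   # change across the row
grad_v = abs(g_up - g_down)      # change down the column

# Average along the direction with the smaller gradient so the interpolation
# does not smear across the edge.
if grad_h < grad_v:
    green = (g_left + g_right) / 2
else:
    green = (g_up + g_down) / 2

print(f"grad_h = {grad_h:.0f}, grad_v = {grad_v:.0f} -> green estimate {green:.1f}")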
 

bonku

Member
Thank you all for such valuable information. It'll help me learn a bit more about the basics of digital photography.
 

bonku

Member
I just purchased a Canon 7D. Besides many other excellent features, color reproduction in that camera is much better than in the Sigma SD14 (at least to me). Pics taken at higher ISO, like 1000 or 2000, are not an issue at all, while the SD14 shows a noticeable impact at ISO 800 or more.
I was wondering how living animals see pictures and color around them. Then I realized that the eye works more like a Bayer sensor (not a Foveon-type 3-layer mechanism), and that single-layer mechanism has been preferred through millions of years of evolution/selection.
 

Arvo

Well-Known Member
Congrats on your new camera!
Now waiting for comparison pics - and please try to get the *best* out of both cameras (not high ISO for the SD14, for example). Because posted images have to be fairly small, at least the resolution advantage of the 7D will be void ;)
 

bonku

Member
Thanks Arvo.
I have sold my SD14 and all my Sigma-mount lenses :(.
Anyway, I will dig out some old pics taken with the SD14 and then post a new pic from the Canon 7D.
 

bonku

Member
Here it is.
The first picture was taken with the SD14 (Sigma 18-200mm, ISO 200, Av 3.2, Tv 1/60, WB: unknown).


The second picture is from the Canon 7D (Tamron 18-270mm, ISO 1200, WB: tungsten, Av 5.6, Tv 1/15).
Both pictures were taken in the same room, under the same lighting conditions (of the same child).
 

Attachments

  • SD14.jpg
  • 7D.jpg