DPR Forum


Contax ND

> > You say "The Foveon 'technology' does not necessarily allow you to have more photosites than other "technologies" in either the same unit area, or in overall area. In fact, it's probably less. The number of photosites is limited by the cell size, and the cell size is typically limited by noise." [...] is self contradictory.

I doubt it, but let's see what you believe it says...

> If as you say, the cell size is limited by noise - which I can accept -

It is not physically limited by noise, at least in our application; it is PRACTICALLY, as in usably, limited by noise.

> then the more information gathering you can pack into a single photo site the better off you are.

I don't know what you are saying. It's a physical issue. The larger the photosites are, the more photons they capture. Having multiple sensing units in a single photosite does NOT increase your resolution, as they are all seeing the same "column" of light.

> If it takes three "normal" photo sites to produce one pixel,

That is what you are not understanding. It does not. It takes FOUR photosites (Bayer pattern sensors are RGBG, with two G's for additional contrast) to produce FOUR pixels. It is the COLOR (as in chrominance) information that is interpolated over all four pixels, not the luminosity. Each photosite in a Bayer pattern imaging sensor carries unique physical information that comes from that photosite's unique spatial position.

> and if you can produce one pixel with one photo site the area required is cut by approximately a factor of three. In the case of the detector used in the Sigma SD9, there are 3,429,216 photosites (2268 x 1512), hence pixels, in an area of 20.7 x 13.8 mm (0.0091269841 mm on a side per photo site). To achieve this same density of photo sites using the old single layer technology, and three photo sites per pixel, would require a photo site size considerably less than 9 microns which would increase noise dramatically.

No, that is a misunderstanding on your part of how Bayer pattern sensors work. If the Bayer pattern sensor has 2268 x 1512 photosites, it produces, through interpolating color information, 2268 x 1512 PIXELS of image information.
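
You can check the arithmetic in the numbers quoted above with a few lines of Python (nothing assumed beyond the figures already posted):

# Sanity check of the SD9 numbers quoted above
w_mm, h_mm = 20.7, 13.8      # sensor dimensions
px_w, px_h = 2268, 1512      # photosites per side

print(px_w * px_h)           # 3429216 photosites total
print(w_mm / px_w)           # 0.0091269841... mm, i.e. ~9.1 micron pitch
print(h_mm / px_h)           # same pitch vertically, so the sites are square

And a Bayer sensor with that many photosites delivers exactly that many pixels.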

Color is NOT as important as you think it is, because of how our eyes work. Edge information is what is important, and with the Bayer pattern sensor, you DO get almost the exact same amount of edge information as you would if you were sampling all three colors at the same photosite.

There is no increase in density because of the Foveon sensor over the Bayer pattern sensor, if they both have the same number of photosites, and the photosites are the same size. If you have a 3M Foveon, you get the same number of pixels as if you have a 3M Bayer pattern imaging sensor.

The Foveon sensor MAY produce marginally better color information than a Bayer pattern sensor, and that is VERY image dependent...but interestingly enough, there are issues with the Foveon sensors and color. Their color fidelity is actually pretty poor, as is their low light capability, both of which I stated a long time ago would be issues with this type of sensor architecture.

Austin
 
Austin,

You say "through interpolating color information, 2268 x 1512 PIXELS of image information."

The problem with interpolation is that you lose resolution. By definition, you are averaging between sites and not using the true information that is arriving at all of the sites for which you are claiming to produce pixels. This is also the source of Moire patterns, which is why they do not appear in the images I take.

William
 
> Austin, You say "through interpolating color information, 2268 x 1512 PIXELS of image information." The problem with interpolation is that you lose resolution.

That is not a given. Any CHANCE of losing resolution is small, and typically insignificant. You only lose resolution if the column of light hitting that photosite didn't contain ANY of that photosite's primary color. The color sensitivity of a photosite also overlaps quite far into the neighboring color sensitivity area.

> By definition, you are averaging between sites and not using the true information

One of the colors is being used 100%. It is NOT the same as simply interpolating data between points where 0% original information is used, as is done when "rezzing" up an image.

> that is arriving at all of the sites for which you are claiming to produce pixels.

If you use a really bad interpolation algorithm, you can get bad results, but most of them are actually very good. As I said, you are using 100% of one of the colors, and interpolating the other colors is not as lossy as you may believe; in fact, it's actually quite accurate. It is very scene dependent whether you actually lose any detail, and what detail you lose is only color information. The edge information you do not lose, unless the column of light you are sensing doesn't have that primary color in it, which is almost never; there is always some RGB content in the light that hits every photosite.
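
If you want to see what "using 100% of one of the colors" looks like in practice, here is a minimal bilinear demosaic sketch in Python (numpy and scipy assumed). This is my illustration of the textbook averaging approach, not what any particular camera actually ships:

import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    # raw: 2D float array in RGGB layout:
    #   R G R G ...
    #   G B G B ...
    # Returns an (H, W, 3) RGB image. The color actually measured at a
    # photosite is used as-is (the "100%" channel); only the two missing
    # colors at each pixel are averaged from neighboring photosites.
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Averaging kernels: at a measured site they return the measurement
    # itself; elsewhere they average the available neighbors.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    out = np.zeros((h, w, 3))
    out[..., 0] = convolve(raw * r_mask, k_rb, mode='mirror')
    out[..., 1] = convolve(raw * g_mask, k_g,  mode='mirror')
    out[..., 2] = convolve(raw * b_mask, k_rb, mode='mirror')
    return out

At a red photosite, the red output is exactly the measured value; only green and blue there are averaged from neighbors. Production algorithms are smarter, edge-aware versions of this same idea.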

> This is also the source of Moire patterns, which is why they do not appear in the images I take.

The source of the Moire patterns is bad interpolation algorithms. Current cameras do not have the same Moire problems that five/ten year old designs had. This problem, along with others, is exaggerated by Foveon misinformation and hype.
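
To be clear about what Moire is numerically, here is a toy one-dimensional sketch (my illustration, nobody's actual pipeline): a repeating pattern just finer than the photosite pitch produces samples identical to those of a much coarser pattern, and it is the reconstruction algorithm's job to deal with that false pattern.

import numpy as np

pitch = 1.0                        # photosite spacing (arbitrary units)
n = np.arange(200)                 # 200 photosites in a row
fine = 0.55                        # pattern frequency, cycles per photosite
samples = np.sin(2 * np.pi * fine * n * pitch)

# The samples are identical to those of a 0.45 cycles/photosite pattern:
# sin(2*pi*0.55*k) == -sin(2*pi*0.45*k) for every integer k.
alias = -np.sin(2 * np.pi * (1 - fine) * n * pitch)
print(np.allclose(samples, alias))   # True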

Where are you getting your information from? Stuff you read on the Internet?

Austin
 
William, Austin,
just to add my two cents to your pretty academic discussion (which I find a little off-topic in this particular place - it's about the Contax N-Digital - or am I missing something here?):

Well, Austin is very right with his assessment that theoretically the array of photosites can be expanded to the size of an entire silicon wafer (they go up to 300 mm diameter these days, which could give you a rectangular sensor the size of a view camera format), but practically the yield - that is, the fraction of "good" photosites on that wafer - would not be sufficient. This is, by the way, the main driver of why the industry tends to keep chip sizes small: if defects are geometrically scattered over a wafer that you dice into, say, 50 dies, you find more of them within the specs than if you dice the same wafer into 30 larger dies.
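
To put a toy number on that dicing argument, using the standard first-order yield model (yield per die = exp(-defect density x die area); every figure below is made up for illustration):

import math

wafer_area = 650.0          # usable area in mm^2, hypothetical
defect_density = 0.01       # defects per mm^2, hypothetical

for n_dies in (50, 30):
    die_area = wafer_area / n_dies
    good = n_dies * math.exp(-defect_density * die_area)
    print(f"{n_dies} dies of {die_area:.1f} mm^2: "
          f"~{good:.1f} good ({100 * good / n_dies:.0f}% within spec)")

Same wafer, same defects: the smaller dies give the higher fraction within spec.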

The other thing you guys went ballistic about (and I have to say that Austin again has the more accurate points) is resolution vs. size of the photosite in the context of the wavelength of visible light.
Theoretically a very nice discussion, but the gazillions of pixels we might get confront us with two real-world questions:
1) Who has the money to buy a computer big and fast enough to handle the files?
2) More important: who has the money for the crane and 15 helpers to lug around your lenses?

Let's get real: Lithography machines that are used to project the pattern onto a wafer work somewhat like slide projectors. They actually have a field that is about the size of a 35mm frame - and now it comes: the average litho lens is about 4 feet tall, 2 feet wide, and I don't know how heavy. This is what they use for visible light wavelengths. Go to www.zeiss.com - they make quite decent examples of them.

In essence I would like to point out that this discussion is absurd. Anything between 6 and 14 megapixels is able to bring all of our pricey 35mm zooms to their knees, and when talking primes, 14 MP is about it. After that you add data, not information.
And although I find the Sigma line of lenses not too bad for what they cost (I have a nice lineup myself for my Nikon F100 - if speed is the matter), they simply do not perform well enough to exceed 6 MP at the current chip size with its 1.7x crop.
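
Here is the back-of-the-envelope arithmetic behind that 6 to 14 MP range, assuming a full 24x36 mm frame at 3:2 (an assumption of mine; the lp/mm figure is just the Nyquist limit of the pixel grid, the most the lens would ever be asked to deliver):

frame_w, frame_h = 36.0, 24.0    # mm, full 35mm frame assumed

for mp in (6, 14):
    px_w = (mp * 1e6 * frame_w / frame_h) ** 0.5   # pixels across 36 mm
    pitch_um = frame_w / px_w * 1000               # pixel pitch in microns
    nyquist = 1000 / (2 * pitch_um)                # line pairs per mm
    print(f"{mp} MP: ~{pitch_um:.0f} micron pitch, Nyquist ~{nyquist:.0f} lp/mm")

So the claim above amounts to saying 35mm zooms give out around 40 lp/mm of usable contrast, and even primes around 60-some lp/mm.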

Christian
 
Let's change the subject, shall we?

I tried the Adobe RAW developer today. Sorry, it doesn't work. I just thought it might, because yesterday we discovered that the Kodak DCS 645C ProBack files did work! They showed up in the Photoshop 7 Browser, and to my surprise opened in the Adobe RAW window when we double-clicked a selected file from the 7 browser. This was all a surprise because Adobe doesn't list Kodak RAW as a supported file format. This now makes the Kodak 14n I have on order even more attractive.

We MUST band together to get the Contax RAW onto the list of supported files. It is even more important to do it quickly because Adobe is including this RAW developer in the next Photoshop release (PS 8, due in the not too distant future). We cannot be left behind!

When I tried to use the Adobe RAW developer, it tried to do something: it opened a text control window, but it was a B&W one, and when I tried processing, a narrow band of grey noise appeared as the image file, which looked like the one from the calibration program. What I am wondering is: if you did not install the individual calibration program, would Adobe recognize the Contax RAW files then? Wasn't that calibration program meant to mesh the Contax RAW developer with the actual files coming from the camera?
 
Hello, Austin,

Could you recommend where I can read about modern algorithms for interpolating data from a Bayer pattern? Something more in-depth than just 'averaging'? Or is this kind of information classified?

Thanks, Sergei
 
Gentlemen,
I managed to get into correspondence with the Kyocera media relations department. Everybody is more than welcome to jump on my bandwagon.
Here are the last two letters that we exchanged:

Dear Mr. Irakly Shanidze,

Thank you very much for your advice.
We will do our best to make what our customer want.

Sincerely,
Yoshiko Omaru

From: irakly@shanidze.com on 2003/03/06 15:31:38
To: Yoshiko Omaru/OE01/KYOCERA@KYOCERA
Subject: RE: Contax TVS Digital Review

Dear Yoshiko,

Thank you very much. I will talk to Contax USA tomorrow.
I would like to turn your attention to a very important matter. Recently Adobe sent me their RAW Converter Photoshop plug-in for review, and it appears that this fine software does not support the Contax N Digital, which I must reflect in my review article. This is a huge disappointment for Contax N Digital owners, who had been waiting for this software since Adobe announced it six months ago. I spoke to Adobe about it, and they answered that they need cooperation from Kyocera to be able to include the Contax N Digital in the list of supported cameras. Considering the fact that it is the Contax RAW Developer that keeps many people from using the Contax N Digital professionally, cooperation with Adobe may be of great advantage in capturing a better market share for this fine camera.

Sincerely,
Irakly
www.shanidze.com
 
Marc, my understanding is that should one eventually manage to develop software compatible with Contax RAW files, it would make use of CCD parameter files. In other words, one will have to supply them to the program. I do not see a problem with that, since all Contax ND cameras come with a CCD parameter disk; it's just an inconvenience.
 
Please include me: Mehrdad Sadat, msadat@kiakki.net. I did talk to Bake about the software, either contracting with Phase One (my choice) or Adobe. Also, if we go at this from multiple fronts, it will be much better.
 
> Could you recommend where I can read about modern algorithms for interpolating data from a Bayer pattern? Something more in-depth than just 'averaging'? Or is this kind of information classified?

Hi Sergei,

There is quite a bit of information on the web; whether it's good information or not, I can't say. I'd suggest searching Amazon for image processing books. There are quite a few very technical books out there, but I can't think of one that deals strictly with interpolation. I actually have never seen a book on it; I've just developed the algorithms myself, for both hardware and software. It's only been recently that interpolation has become a more "mainstream" word. Before a few years ago, it was hardly in the vocabulary of anyone outside the engineering lab!

Regards,

Austin
 