CI Photocommunity


Regarding Contax N digital and Leica digital back


Active Member
Dear Kaisern,

Thanks for your correction! Though I thought about becoming a historian during high school, it has been almost 10 years since my last course in history.

I agree with you that the Ching Dynasty was among the best in Chinese history. The Ming Dynasty was a total failure, with absent-minded and cruel emperors. The Ching Dynasty, however, did control people's thinking, in order to control its territory effectively. I believe every dynasty has done this to a certain extent. In modern times, the Kao-Ming party did the same thing when it retreated to Taiwan after being defeated by the communist party.

I also agree with you that the Chinese fell well behind the West due to a lack of exchange and exploration. Another aspect is that colonization/imperialism gave the Western world a great many resources with which to grow and advance. We can always say that stimulation, through exchange and interaction, is crucial to advancement.



Well-Known Member

> ...I won't answer any comments on this post, especially those challenging the credential issue:

Why not simply keep the post technical instead of bringing up the credential issue again, or bragging about this project or that? Most of it simply isn't verifiable, much less applicable, anyway.

> One basic concept I kept stressing is that the conversion process (commonly termed as AFE, Analog Front End) from image sensor signals to raw image file is not done just through an A/D converter.

You (Finney) said that people were mistaken in believing the raw data comes straight from the sensor, and though that's true, no one ever claimed it did. No one ever said anything other than that the raw data is specifically the data out of the A/D converter, nothing more.

No one ever said there was only an A/D either. An analog front end is inherent in any design of this nature. However complicated that AFE is, the raw data is still the data out of the A/D, and the A/D is the only place the raw data can come from.

Obviously, there can be varying degrees of complication in the AFE, going from simple offset and gain (which is pretty much mandatory in any implementation, and probably the most common implementation for an A/D system) to something far more complicated. That does not mean it HAS to be complicated, though, nor does it say anything about the actual benefits gained from more complicated methods.

Most of the basic functions done in these chips have typically been done using multiple components, so, for the most part, these chips are simply an integrated solution, as opposed to a solution using separate components. They integrate the functions into a smaller package, and therefore reduce manufacturing costs, board space, and power consumption...which is all a good thing, and pretty much typical electronic evolution. No news there.

These chips mostly do some level of noise filtering, gain and offset correction, and clamping...and the ones referenced seem to do so with static values. Clearly what "processing" they do is quite different from what is typically thought of as "image" processing as it relates to digital cameras, like Bayer pattern reconciliation, tonal correction, setpoints, etc.

Really the only thing that can be called processing in the chips shown here is the noise filtering (CDS)...and even at that, calling it processing is a stretch, and not all cameras have CDS. It's a double sampler: it simply takes two samples per pixel and subtracts one from the other to cancel common noise. This would really be called a simple filter. Gain (PGA), offset, and clamping are not really "processing", even in a very loose use of the word. One can call just about anything processing, and by this loose definition a decoupling capacitor, a resistor divider, or even the sense amplifier on the imaging sensor would be processing as well...and those are not typically thought of as "processing".
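Correlated double sampling is commonly implemented as subtracting the pixel's reset-level sample from its signal-level sample. A minimal Python sketch of that idea (the millivolt values are made up for illustration):

```python
# Correlated double sampling (CDS) in miniature: each pixel is read
# twice, once at its reset level and once with the signal present.
# Subtracting the two samples cancels noise common to both readings.

def cds(reset_mv: int, signal_mv: int) -> int:
    """Return the noise-compensated pixel value in millivolts."""
    return signal_mv - reset_mv

# Hypothetical readings: a 50 mV reset offset appears in both
# samples, so it drops out of the result.
print(cds(50, 1250))  # 1200
```

As the post says, this is more a simple filter than "image processing" in any meaningful sense.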

> In the diagrams, PGA stands for programmable gain amplifier, and VGA is variable gain amplifier. They are used mostly for basic noise reduction, black level calibration, and white balance, etc, at the analog stage.

The PGA and VGA are typically used to apply gain (same as multiplication) to the signal level, to match the output range of the CCD with the input range of the A/D. They have nothing to do with "basic noise reduction" or black level calibration. "Basic noise reduction" is the function of the CDS block, and black level calibration (shift) is simply an offset that is added by the "black level shift" (or "offset", depending on which block diagram) block.

Say the CCD has an output voltage swing of 2V, and the A/D has an input range of 6V. The VGA/PGA simply "maps" (applies uniform gain) so that an input of 2V gets "seen" by the A/D as 6V, and 1V gets "seen" by the A/D as 3V, etc. One of them, the National part, allows its PGA to be used for white balance, as it has the tables for that to work. I don't believe the others do.
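To make the mapping concrete, here is a minimal Python sketch of what a PGA/VGA stage conceptually does ahead of the A/D, using the hypothetical 2V/6V figures from the example:

```python
# Conceptual sketch of a PGA/VGA stage: apply uniform gain so the
# sensor's output swing fills the A/D converter's input range.
# Voltages are the illustrative values from the example above.

CCD_SWING_V = 2.0   # full-scale output swing of the CCD
ADC_RANGE_V = 6.0   # full-scale input range of the A/D converter

GAIN = ADC_RANGE_V / CCD_SWING_V  # uniform gain, here 3x

def pga(v_in: float) -> float:
    """Map a CCD output voltage onto the A/D input range."""
    return v_in * GAIN

print(pga(2.0))  # 6.0 -> full scale at the A/D
print(pga(1.0))  # 3.0 -> half scale at the A/D
```

Note that this is pure multiplication, with no image-dependent decision-making, which is the point being made about it not really being "processing".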

But your point in this post seems to be that we, as general users of produced equipment, should somehow care about these things as they relate to specific implementations. I see many issues with that. One is that we may not be able to find out what chips are being used in a particular camera.

> even with the same image sensor, different AFEs will give you different quality of raw image data.

I agree, but that's not the whole story. There is a LOT to the actual implementation of the chip that can make it work very well, or very poorly, so just having a particular chip in a particular camera does not guarantee a certain level of performance. Most of what is done in these integrated chips has to be done anyway, just using more components. Bottom line is, every implementation will be different, whether these chips are used or not, and using these chips does not mean the result will be better. Also, the significance of the difference has yet to be established, and clearly, this significance (if any) depends on what you want to use the raw data for.

I also do not see any possibility for a level of modularity in digital cameras, as you appear to be suggesting, above what we already have today. The on-camera hardware and software are highly integrated. This is very different from being able to separate a CD transport mechanism from the D/A, where there is a clear boundary...the digital boundary (which had many of its own problems when first decoupled).

Technically, we already have a boundary, and it's the raw image data. And you can buy aftermarket Bayer pattern processing software, as well as a lot of other image processing software. But in reality, the equivalent would be the analog signal off the sensor, and that isn't going to happen in the foreseeable future, if ever. The requirements for each sensor are different, and that, I believe, really prohibits any real level of modularity. Some have suggested being able to replace imaging sensors when newer/"better" ones come out...but that is pretty much impossible for many reasons.

If anything, things will have higher integration (as is typical in the electronics industry), like putting the AFE on the image sensor (more likely on CMOS image sensors), if not the entire digital processing engine as well. More than likely there will be just two components: the imaging sensor and "a" chip that integrates the AFE, digital image processing, storage, host communications, etc., or some partitioning that makes sense, as there are a few different possible approaches. There are already companies working on this very thing, and higher integration typically means less modularity.

> After reading some call for real answers here, I am thinking of starting a loosely coupled discussion group to focus more on the technology side of photography. I will not mind sharing industry news and whisper there. ... Some may warn this kind of discussion group will lead to a cult like status...

Well, Finney...ironically, though you accused me of this, it seems like it's you who is "looking for a following"!

There are already many discussion groups/newsgroups/mailing lists that focus on "the technical side of photography". They are all over the Internet. For those interested in Digital B&W photography, there is an excellent discussion group. Even the Leben Epson mailing list has had some real experts in the field of digital imaging on it, and may still, as has the scanners newsgroup. There are many people, including experts, who have been discussing this stuff in open forums, for many years, on the Internet. So, if you aren't simply looking for a following, and you want to actually discuss (not pontificate) with people who are actually in the digital imaging field, and you can take the confrontation, I'd suggest you poke around a bit on some of the already existing lists.



Phew, that's a lot of traffic since my two posts 48 hours ago!


Thanks for your response, and no apology necessary. Your quote from the VueScan manual highlights something that I had missed, and explains something that had been puzzling me. My LS-30 is a 10-bit scanner, but the way Nikon designed it, it returns 8-bit data across the SCSI interface. The extra bits are still valuable because the gamma correction look-up tables (LUTs) are passed to the scanner, so that the corrected 8-bit values it returns are more accurate than if 8-bit linear data had been used as the input.
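That LUT mechanism can be sketched in a few lines. This is an illustrative Python sketch, not Nikon's firmware, and the gamma value of 2.2 is an assumption; the point is that the curve is applied to the full 10-bit values before reduction to 8 bits:

```python
# Sketch of a gamma-correction look-up table for a 10-bit scanner
# that returns 8-bit data. Because the curve is applied to the full
# 10-bit values inside the scanner, the 8-bit output is more accurate
# than gamma-correcting 8-bit linear data on the host would be.

GAMMA = 2.2  # illustrative; the real curve is supplied by the host

# One entry per possible 10-bit code (0..1023), each an 8-bit result.
lut = [round(((code / 1023) ** (1 / GAMMA)) * 255) for code in range(1024)]

def correct(sample_10bit: int) -> int:
    """Apply the gamma LUT to one 10-bit sample."""
    return lut[sample_10bit]

print(correct(0), correct(1023))  # 0 255 (endpoints span the full range)
```

With linear curves loaded into the LUTs instead, the extra two bits would buy nothing, which is why the VueScan behaviour described above is unimpressive.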

Therefore, I think you are right to be unimpressed by the reverse conversion done by VueScan, especially if it loads linear conversion curves into the LUTs.

Returning to the case of the Epson, it can pass full-width data back to the host machine, so there should be no problem in that area. Processing the raw files in VueScan ought to produce the same output accuracy as doing the gamma conversion in the scanner. The handling of exposure is a different matter, however.

The reason I drew the conclusion that the Epson used a fixed exposure for film scans was twofold. Firstly, the Epson software provides no manual exposure controls (apart from the post-scan correction controls for brightness etc that we discussed previously). Secondly, VueScan's exposure control does not influence the resulting raw image, either in terms of perceived brightness or in the shape of the histogram. Apart from that, I didn't have much interest in using VueScan with the Epson (I had to visit a friend's place to try the test because I don't have a registered copy of VS myself -- raw file saving is not possible with the evaluation version). The point of the test was to establish whether the lack of exposure control was a shortcoming of the Epson Scan software, or something imposed by the design of the scanner itself. After seeing that the manual exposure control in VS had no effect on the raw files, I concluded that the latter case was true.

Regarding the Dmax spec, the use of fixed exposure would not prevent the quoted figure from being attained. If the exposure has been set at a level where the scanner can still just about distinguish detail at an absolute density of 3.4D, then it should be able to meet the spec. However, if you have an underexposed tranny, then the full value range won't be used. I think this is what I was seeing in my test scans, although I won't 100% swear to it as I only had a handful of raw scans to examine. What I do know for sure, however, is that a fair number of scans (E6 and B&W neg) were not acceptable in terms of tonality and/or colour balance, using the Epson Scan software. I guess I *may* have had a faulty unit, but I find that unlikely because, with one exception, the scans were not outrageously bad, and I can imagine many users being reasonably satisfied with them.

Re your comment as to why Epson would knowingly throw away accuracy, maybe it's the other way around: because they have a lot of bit depth to play with, they may have felt it was appropriate to skip the auto-exposure step in the interest of speed, and let the wide dynamic range of the scanner do the rest.

As far as grain is concerned, you can definitely perceive some grain, but it is nowhere near as sharply rendered as it is by my LS-30, which has less sensor resolution than the Epson. That's what I found disappointing.

Best regards,

-= mike =-


Hi Austin,

Thanks for the help on this one.

Granted, exposure/development may be an issue with some of my negs, but I'm fairly sure it's not the cause of the artifacts I'm complaining about. Some of the most obvious problems occur in the lower mid-tones, which correspond to fairly thin areas of the neg where the scanner ought to be having an easy time of it.

> It's interesting you mention this issue with some images at a supposedly professionally done exhibit. People on the B&W scan lists I am on don't complain of this, so I have to believe there is something wrong with the scan operation, scanner or something...

Or else others don't object to these artifacts as strongly as I do? I will post an image on a subsequent message that should illustrate what I'm talking about, and I would be interested to hear your opinions. The histograms are not too bad, maybe a little spikey but not really combed. I always do tonal adjustments in 16-bit. If I need to adjust a selected area, I switch to 8-bits, create a mask using the magnetic lasso or whatever, save it as a channel in a new file, undo the change to 8-bit in my main file, then reload the selection and apply the curves.

Best regards,

-= mike =-


Ok, here's the sample image. This was shot on Ilford Delta 3200, I think rated at EI 1600. Although this problem is worst with grainy negs, it still manifests to some extent with slower stuff too. Similarly, although this scan was done using a Nikon LS-30, which is only a 10-bit scanner, I have scanned the same frame with my new Minolta DiMAGE 5400, and although the problem is much reduced, the artifacts are still present.



Well-Known Member
Austin, standard bench tests are indeed being used by camera marketing people, both in PR (info provided to reviewers, or by reviewers) and at point-of-sale (example: Mamiya reps with documents proving sharpness/resolution superiority over competitors). Its marketing use has less to do with the optics people. In most corporations, R&D follows a plan of innovation led by market research. If leading indicators call for ultimate optical sharpness, then that's the direction R&D usually goes. An example of this can be seen in the direction Leica has gone with its M ASPH line-up. Many Leica users lament this direction as undermining the "Leica Look" of yesterday. But Leica is in a competitive situation where their less tangible "glow" and so-called "micro resolution" are a harder sell than raw bench-test resolution and sharpness, at least in this day and age of computer-designed glass that allows the likes of CV lenses quite good performance for a LOT less money.

As far as inclusion of certain characteristics in the camera itself, reread my post. I made no reference to resolution as a controllable element in the set-up of the camera. I referenced the illusion of sharpness and contrast, which can be manipulated with in-camera default settings. The first thing I do with all of these digital cameras is reprogram them using the custom function options. I try to shut off as many operations as possible and leave them to be corrected by more powerful post programs.

This leads me to a perplexing question for those less technically challenged than I:

When you shoot RAW, the camera (or transfer process?) is still performing certain functions set on the camera, such as color temp., contrast, etc. Obviously these are changeable with RAW files, as you can eyedropper-shift color temp or dial in the exact temp at will. I wonder why just a pure RAW image isn't transferred? Post programs have to be more powerfully endowed compared to anything in-camera. Yes? No? I don't get it.


Well-Known Member

RAW files can encapsulate the image at any stage of processing: that decision, along with RAW file format, varies with manufacturer and camera model (potentially with firmware revision, too). As a matter of economy in circuitry and battery usage, less processing in the camera is better. Perhaps Contax took that a little too far, what with RAW files being unviewable off the N Digital LCD!

In my experience as a Canon D30 shooter and CRW hacker, the data recorded by the D30/D60/10D is not affected by settings of color balance, contrast, or sharpening. This is a Good Thing. These settings are merely recorded in a section of the CRW file for use by host software. As alluded to by Finney, Canon CMOS RAW data has been preprocessed in such matters as sensor site imbalances, and scaled for digital storage (exactly 12 bits allocated for the camera models I mentioned).

In order to produce a JPEG (8 measly bits per channel), the camera must decide about color balance and where to center the image on the brightness curve. Especially for big sensors, that means throwing away data. If the resulting image is perfect, no harm done. Otherwise, RAW delivers maximum quality in the digital darkroom - all image data on hand.
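To put rough numbers on the "throwing away data" point, here is a small Python sketch using the 12-bit raw depth mentioned above and the 8 bits of a JPEG channel:

```python
# How many distinct levels survive when 12-bit raw data is reduced
# to an 8-bit JPEG channel? A simple 4-bit truncation collapses
# every group of 16 raw levels into a single output level.

RAW_BITS = 12
JPEG_BITS = 8

raw_levels = 2 ** RAW_BITS    # distinct values per channel in raw
jpeg_levels = 2 ** JPEG_BITS  # distinct values per channel in JPEG

print(raw_levels, jpeg_levels, raw_levels // jpeg_levels)  # 4096 256 16
```

In practice the reduction follows a tone curve rather than plain truncation, but the count of surviving levels is the same, which is why keeping the RAW file preserves latitude for the digital darkroom.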


Well-Known Member
Hi Rico,

> RAW files can encapsulate the image at any stage of processing:

I believe that claiming that the raw image data is "processed" and can be "at any stage of processing" is misleading. Raw image data is typically the data right out of the A/D with, at most, PRNU correction applied, as that is sensor/camera dependent; and given that the information is completely deterministic and not image dependent, it only makes sense that it is done in the camera, as that is where the PRNU table is. On some cameras, there may be an option to do it externally. What other "processing" have you seen done to "raw" image data?

> As alluded by Finney, Canon CMOS RAW data has been preprocessed in such matters as sensor site imbalances,

Right, and that is simply PRNU calibration, but that hardly means the image has been "processed" in the sense people mean when they talk about digital image processing. The data is still raw data, simply corrected for a completely deterministic sensor characteristic...which is not image dependent. That is simply applying calibration...and you are considering calibration to be image processing, which I disagree with.

> and been scaled for digital storage (exactly 12 bits allocated for the camera models I mentioned).

That is really not processing, that is simply data packing. All the actual image data is left unchanged.

> Perhaps Contax took that a little too far, what with RAW files being unviewable off the N Digital LCD!

No raw image file is directly viewable. If a "raw" image is viewable, then it is not a raw image. In order for a raw image to be viewable, it at least has to have the Bayer pattern reconciliation done, setpoints set, and tonal curves applied. It's easy for the camera to take the raw data and do this simply to make a display image, but that doesn't mean the actual raw data has had any digital image processing done to it. The fact that you can view the image on the little LCD does not tell you a thing about the state of the raw image data. They are totally independent of each other.
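As an illustration of why raw data is not directly viewable, here is a deliberately crude Python sketch of Bayer reconstruction on a single hypothetical RGGB tile; real demosaicing is far more sophisticated, but the idea is the same: every display pixel needs all three channels, while every sensor site recorded only one.

```python
# Crude sketch of Bayer-pattern reconciliation on one RGGB 2x2 tile.
# Each sensor site records a single colour; to display a pixel, the
# missing channels must be reconstructed from neighbouring sites.

# Raw values from a hypothetical RGGB tile (one number per site):
#   R  G1
#   G2 B
raw_tile = {"R": 200, "G1": 120, "G2": 130, "B": 90}

def demosaic_tile(tile: dict) -> tuple:
    """Produce one RGB pixel from the tile by simple averaging."""
    r = tile["R"]
    g = (tile["G1"] + tile["G2"]) // 2  # average the two green sites
    b = tile["B"]
    return (r, g, b)

print(demosaic_tile(raw_tile))  # (200, 125, 90)
```

Only after this step (plus setpoints and tone curves) does the data resemble something a screen can show, which is exactly the distinction being drawn between the LCD preview and the raw file itself.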

> In order to produce a JPEG (8 measly bits per channel), the camera must decide about color balance and where to center the image on the brightness curve.

What you mention, plus the Bayer pattern reconciliation, is processed image data.

> Especially for big sensors, that means throwing away data.

Why do you believe the sensor size has anything to do with how much data gets "thrown away"? Certainly it's more, but it's proportionally the same, isn't it?

> Otherwise, RAW delivers maximum quality in the digital darkroom - all image data on hand.




Austin Franklin and Finney Tsai,

I have owned Nikon, then a Contax 167MT, then an AX with great Zeiss lenses. Sold them last year to make my wife happy. Purchased another Nikon N90s, with a Sigma lens. Very unhappy purchase. The camera and lens did make the difference. I rented the professional ED glass, and the clarity and sharpness were still not the same as the Zeiss.

I became interested in the Sigma SD9 because of the Foveon chip, but I am unhappy with the Sigma lens. I was told Foveon presented their chip to the majors, such as Canon, Nikon, Minolta, etc. No one took the chance. Sigma took the chance. I have to say I'm very impressed with the concept of the 3-layer chip, with RGB on different levels. Why did none of the other manufacturers seem interested, when, as I understand film technology, there is 3- and sometimes 4-layer film?
I have to admit I was impressed with the Contax ND, which was discontinued. I can not bring myself to buy a Sigma SD9, but I dream that Contax might use the Foveon chip in a future camera. Am I totally off base? After reading August's issue of Shutterbug, the so-called 3MP Foveon chip beat the Canon 10D's 6MP chip in that the Foveon produced no artifacts, while the Canon 10D showed artifacts on vertical lines because of the Bayer chip. Also, I have been told by a Sigma SD9 user that the image can actually be enlarged to a larger size than the Canon's. What gives? Should the whole industry maybe reconsider, especially Contax, with the much better Zeiss lenses?


Active Member
Dear John,

I don't know about anyone else, but I'm with you on this one. I think the problem is that at the time Foveon was pushing its chip Contax was too committed to the ND and probably still wrestling with its decision to use the chip it had chosen. It had already strayed once from the path of conformity, and look what happened.

But wouldn't the combination of the Foveon Chip and the Contax lenses be perfect, at least at this stage of the digital evolution?

Also, although I believe Foveon likes to think it has a 10-megapixel sensor, even though it's officially only 3+, it still seems only marginally better than the 6MP Canon sensors. Of course, there's a lot more to it than just counting pixels, because spacing and size are very important too.

I have tried to communicate with both Foveon and Contax on this, but of course no one is going to say anything, or let the cat out of the bag.

But it seems clear to me, looking at the actual pictures presented on the different review sites, that the images from the Foveon chip are superior to those from the chips used by Canon et al. I think there is something galling about the interpolation type programs that must be used for the other sensors, guessing frantically (if logically) at what the image is made of. It is a testimony to human ingenuity and engineering skill that the programs and filters work so well, but what a waste of talent and energy. Why not just do a better job of gathering the actual information in the first place, as the Foveon chip does?

I'm sure someone will reply that the Foveon chip is not perfect, but so far as I can see none of the alternatives, with the exception of the chip in the 1DS Canon, is any better.

How could we persuade these two companies to work together? Suggest that we are interested in buying such a product? In any event, Sigma has done a good job with their effort, and they are to be commended for producing a workable camera.

Richard Stone



Well-Known Member
John and Richard, that Canon passed on the Foveon chip is easy to understand. They make their own chips. Contax's vision was full-frame 35mm capture, which, as we know from the Kodak 14n, is no easy task. Full frame is so difficult and expensive that whole new approaches to camera and lens design are revolving around smaller chips.

I wonder, since the Foveon chip relies on multiple layers, does the problem with light angle become even more severe for full-frame capture? IMO, full frame is the frontier for all digital capture. The first full frame under $2,000 wins all the marbles.



Have you read the Shutterbug August issue? The author sort of indicates that Foveon is not the only producer of that type of chip, and that some of the medium format backs use similar technologies. The article really opened my eyes to the possibilities. I give credit to Sigma for being brave enough to use this chip, and I might consider the camera if it doesn't go the way Contax did with its poor marketing. I still would like some of the experts to comment on the chip. Thanks Marc and Richard.



My impression of the Foveon chip also comes from the galleries on other websites. The photos are very impressive, and this is what really piqued my interest; that is where I've been looking.


Well-Known Member
Hi Marc,

> I wonder, since the Foveon chip relies on multiple layers, does the problem with light angle become even more severe for full frame capture?

I'm not sure, but one would think so. I don't know what the well depth of the Foveon sensor is, and since they aren't used in any "full frame" application, no one has reported on this issue.

> IMO, full frame is the frontier for all digital capture. The first full frame under $2,000. wins all the marbles.

I agree with that, but I'm not sure it is important: most new camera buyers don't care about image format. Keep in mind that it is only through how we relate it to lens focal length that format matters.




Well-Known Member
Hi John,

With regards to the Foveon... The "idea" of the Foveon is one that has been around for a long time, long before Foveon developed what they developed. The Foveon is a decent implementation, but it does have its problems. The two most apparent ones I've seen are long-exposure issues and, second, color. If you look at some of the pictures from it that were done in low light, you will see a lot of color fringing and blown-out highlights. Standard pictures have funny color. So, at this point in time, I don't believe these sensors are high end.

One of the issues I have questioned about this architecture is the ability to retain color fidelity...simply because the color "filtering" may not be able to be very accurate, unlike Bayer pattern sensors, where there are actual optical color filters, which are very, very accurate.

A question I have had with the Foveon architecture is the depth of the sensor wells...in fact, they may be shallower than in other sensors. Unfortunately, I have not been able to find this information, and no full-frame camera exists that this can be tested on. If anything, vignetting will not happen uniformly, due to the different depths of the different colors.

Contax probably didn't pick the Foveon because it wasn't full frame, and a very high pixel count wasn't available at the time. I also am glad they did not, simply because of the color distortion issues seemingly inherent in the Foveon design.

I have hopes that this technology can develop further, but I'm not suitably impressed with their first go-round. Time will tell. I also don't see the current sensor, for reasons of both resolution and color distortion, as being able to "take advantage" of Zeiss glass...the Sigma lenses may be just fine...and I have heard that Sigma does make some pretty decent lenses, though I have no first-hand experience with them.

There is a good message board for Foveon users, and that is where I have seen most of the images I've seen, and found out about people's complaints.




Active Member
Mike, yes, I thought about the possibility you mention a while after posting. But then I thought again.

How many levels can we represent in the best of printing processes, vs. the 65,536 from 16-bit-capable photo electronics? More important, perhaps: how many levels are left in the underexposed 'thin negative' (or its thick cousin) you are trying to recover in a scan?

In fact, if I were trying to recover that thin negative, my 'best bits' in the sensor system would be those 'crowded' more towards the top of the signal range, thus avoiding any in-system noise. That's what the Epson 3200 has actually done, according to your observation, to recover what greyscale is possible.

You'd have to have a far better than 16-bit-capable analog portion of the system to do better by smearing its range in turning up the gain - and that gets costly because it runs into physics everywhere.
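A quick Python sketch of that point: even if a thin negative occupies only a fraction of the signal range, a 16-bit pipeline still leaves plenty of distinct levels, while an 8-bit one does not. The one-eighth fraction is an assumption for illustration, not a measurement of any particular negative:

```python
# If an underexposed negative uses only, say, the top 1/8 of the
# signal range, how many distinct levels remain at each bit depth?

USED_FRACTION = 1 / 8  # illustrative; depends on the actual negative

for bits in (8, 16):
    total = 2 ** bits
    usable = int(total * USED_FRACTION)
    print(f"{bits}-bit: {usable} levels in the used range")
# prints 32 levels for 8-bit, 8192 for 16-bit
```

Those 8192 levels are far more than any print or screen can render, which is why recovery from the top of the range works as well as it does.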

Back then to asking what the remains of the underexposed negative itself can resolve - and the output screen or paper, both of which seem a lot more like an 8-bit-plus-some range at best, don't they? I am sure someone has some quick numbers on that to throw out.

Same as the wet darkroom, I think: the paper characteristics could give a soft _input_ range, capturing whatever differentials it could find, but its output range probably doesn't reach so far at all.

Anyway, the above fits with my small examples of negative recovery that I've thought quite successful on the 3200.

Regards, Clive