Finney,
> Ok, so you are doing mostly FPGAs and simple asics,
I didn't say that. I've designed far more than "simple" ASICs.
> so what?
So what? I believe I have far more design experience than you do. I don't think you have much at all, and certainly none of any significance. Now, I really don't care that I do, or that you do, except that YOU brought it up, and want to play the "mine is bigger than yours" game. I would prefer to take what someone says on its own merits...but in your case, the purported qualifications were thrown down, and you came out guns a-blazing.
> Does this mean anything regarding to multi-million non-FPAG designs?
I've done many multi-million gate ASIC designs, whether as a personal participant, as the architect, as the lead engineer, or running the project because I ran the group/department.
> Consulting? What kind of consulting is it?
You asked, so I will answer. Engineering consulting, as well as patent/technology consulting. For patent/technology consulting I consult for Hale & Dorr (amongst others) on patent cases/issues, and for various investment firms on evaluating companies.
For engineering consulting, the jobs vary, as I do fixed-price contracting, and my company does the entire project: hardware, ASIC/FPGA design, drivers, software, firmware...whatever is necessary. The consulting we do is typically high risk and technically difficult. It's also consulting that requires me to know what I am doing, and that I have my own engineering lab able to do the engineering, testing, rework and small production runs...with a small machine shop, a 32 cu ft thermal/humidity chamber, and a lot of high-end engineering test equipment, from 50GHz oscilloscopes to EMI/FCC prescreen capability and SMT rework equipment...all in our own 4k sq ft office. We are doing very, very well.
> Big deal, huh? Do you want to compare how much XEUC paid you
> and how much Xilinx paid me?
You are way out of line, and VERY inappropriate bringing money into the equation.
Let's just say that I don't think you know as much as you claim to, nor that your background is as significant as you claim it is. You haven't given me one good reason to believe so. So, why not just prove that you know what you're talking about by sticking to technical things, as this group doesn't deserve to be abused by your games, eh?
And trying to move on to the technical issues...ignoring the tirade of highly inappropriate personal attacks...
> So you agree that the clocking will be a problem if the design
> is bad, right? What do you think? Is 30MHz to 50MHz too low
> for you? Yeah, right, easy for you.
Yes, very easy for me. I typically do designs in the GHz range, which is why I have a 50GHz oscilloscope and a 4GHz logic analyzer.
> Show me how many 16bit 50MHz
> A/D conversion stuff you have done!
The last 50MHz 16-bit A/D I did was for Teradyne...for one of their IC testers.
> > Line noise is another issue.
> >
> > Show me where line noise is an issue. It MAY be an issue in a poor
> > design, but not if the design is well engineered.
>
> Line noise is always an issue for an analog signal line
> which is clocking at 50Mhz and you have to preseve
> the signal integrity. This is not those low end
> video cam stuff you have done, OK?
I believe your understanding of line noise is NOT what they were talking about, and no, that type of line noise (on an analog signal line) is not an issue with proper design by a competent design engineer.
They were talking about along a line in the image, like shown here:
http://www.nucoretech.com/nu2/20_products/ipt/ndx-1260/00.html
(the one YOU provided ;-) and go down to "Line Noise":
"Line noise Shows up as horizontal streaks in dark areas of the image and is caused by the black level calibration using adjustment steps that are too large, and therefore visible."
What they show here has not a thing to do with noise on the analog signal lines! And they are so kind as to provide a picture of the line noise, clearly showing it is noise IN an image line: "the effect shows up as horizontal bands...".
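If it helps to put numbers on it, here's a quick sketch of what they're describing (values made up, purely illustrative, and not anything from their chip): if the black level correction is quantized in steps that are too coarse, the residual error is constant along each row, and that constant-per-row error is exactly what shows up as horizontal streaks in the dark areas.

import numpy as np

# Rough illustration of the "line noise" NuCore describes: a per-row black
# level correction with steps that are too coarse leaves an error that is
# constant along each row, i.e. horizontal banding in the dark areas.
# All values here are made up for illustration.

rows, cols = 480, 640
rng = np.random.default_rng(0)

# True per-row black level drifts slowly (arbitrary 12-bit-ish units)
true_black = 64 + 3.0 * np.sin(np.linspace(0, 8 * np.pi, rows))

# Dark frame: black level plus a little random read noise
dark = true_black[:, None] + rng.normal(0, 0.5, (rows, cols))

step = 4  # calibration adjustment step -- too large, hence visible
estimated_black = step * np.round(true_black / step)

corrected = dark - estimated_black[:, None]

# The residual is nearly constant along each row, so it shows up as
# horizontal streaks rather than random pixel-to-pixel noise.
row_error = corrected.mean(axis=1)
print("peak-to-peak row banding (DN):", row_error.max() - row_error.min())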
> Hehehe... this is getting funny again. I like the term PRNU > but people like to use a better term here.
People where, exactly? Funny, Kodak uses the term in every white paper that discusses this (as that is exactly what it is), as does EVERY other engineer who designs digital imaging devices that I know, and I know quite a few. What IS that "better term" you claim...funny, you didn't mention it...perhaps you could grace us with this information?
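For anyone following along, PRNU (photo response non-uniformity) is simply per-pixel gain variation, and the standard correction is a normalized flat-field divide. A throwaway sketch, with made-up numbers and nothing to do with any particular camera's pipeline:

import numpy as np

# PRNU in a nutshell: each pixel has a slightly different gain, so a
# perfectly uniform exposure does not come out uniform. Dividing by a
# normalized flat-field reference removes it. Numbers are illustrative only.

rng = np.random.default_rng(1)
shape = (480, 640)

gain = 1.0 + rng.normal(0, 0.01, shape)   # ~1% per-pixel gain spread
raw_flat = gain * 2000.0                  # sensor's response to a flat exposure
prnu_map = raw_flat / raw_flat.mean()     # normalized flat-field reference

scene = gain * 1200.0                     # some other uniform exposure
corrected = scene / prnu_map              # PRNU-corrected output

print("uncorrected spread:", scene.std())
print("corrected spread:  ", corrected.std())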
> Anyway,
> there is not much you can do with raw data is because
> if the lens were bad, the raw data could not save you. If the sensor
> were bad, the raw data could not save you. If some image details
> were lost in the image conversion pipe, the raw data could not
> save you.
What other data is there? NONE! There is NO OTHER DATA AVAILABLE, PERIOD! What you get from the A/D IS what you get from the A/D, and THE only data there is. ANY subsequent processing is done on THAT SAME DATA that just came out of the A/D. Where is this other data coming from?
Come on, and you wonder why I don't take you seriously. If you don't understand that, you really simply do NOT know how a digital imaging system works. I try to give you the benefit of the doubt, as you clearly are smart, but there is only so much I can swallow.
BTW, what, exactly, is an "image conversion pipe" that you are talking about here?
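And just to spell the point out in the simplest possible terms, here is a toy sketch (my own made-up stage names, not any real camera's firmware): the A/D output is the one and only data set, and every later stage merely transforms that same array...

import numpy as np

# Toy illustration of the point above: the A/D output IS the data.
# Every later stage (black level, white balance, demosaic, gamma, ...)
# only transforms that same array; nothing downstream can add information
# the converter never captured. Stage names and values are made up.

def adc_readout(analog_signal, bits=12):
    # The one and only digitization step -- this array is the "RAW" data.
    return np.clip(np.round(analog_signal), 0, 2**bits - 1).astype(np.uint16)

def subtract_black(raw, black=64):
    return np.clip(raw.astype(np.int32) - black, 0, None)

def apply_gamma(linear, gamma=2.2):
    peak = max(int(linear.max()), 1)
    return (linear / peak) ** (1.0 / gamma)

rng = np.random.default_rng(2)
analog = 64 + 1800 * rng.random((4, 6))   # stand-in for the sensor's analog output

raw = adc_readout(analog)                 # RAW: the only data there is
image = apply_gamma(subtract_black(raw))  # everything else reuses that same data
print(raw.dtype, raw.shape, image.shape)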
> You missed the point. I was saying that the signals coming
> out of Canon's CMOS sensor have been highly processed already.
YOU clearly missed the point (and I disagree that it is "processed", but that's not relevant to the issue at hand)...NO ONE SAID ANYWHERE that "RAW" data is the data right out of the sensor. Right out of the sensor is an analog signal...though it's data, it is NOT what people mean by "RAW data" with respect to digital camera "RAW data". It is not available, and is NOT suitable for processing in the digital domain until AFTER the A/D conversion has taken place. I can't imagine where you got the notion that people thought they could access the analog data off of the CMOS sensor, or that it's not CLEARLY understood by most anyone who would be interested in this that the data is digital data, not analog data.
> > > So you will have put signal processing circuits on the boundary of
> > > the chip to emulate the shutter behavior...
> >
> > Huh? The cameras, like the N Digital, USE the mechanical shutter IN
> > the camera.
>
> *Sigh* You also do not know how the shutter works on a CMOS sensor.
Hum. There was a discussion with one of the designers of the N Digital ON the Contax list, and that is what he said. So, please, tell me why you believe my statement is wrong. Is the mechanical shutter NOT used? BTW, the Contax N Digital does NOT use a CMOS sensor.
> Anyway, the Nu Core chip you saw is already the second generation
> chip.
Right, I know that...
> The key concept is that if you
> want to reduce the noises, you'd better solve this analog problem
> with analog solution.
I have no problem with that, but the point is, that processing is NOT NECESSARY and, as I've said, IT IS NOT HOW PEOPLE ARE DOING IT NOW IN THE HIGH END DSLRS OR HIGH END BACKS. So your claim that what I said was wrong is simply wrong, and what I said is completely correct.
Also, your misunderstanding about what people mean by "RAW" data is YOUR misunderstanding. Whether this processing is done or not in a camera does NOT change what "RAW" data is or isn't. It IS the data out of the A/D, period.
> This time, I will really ignore you for now.
If I were you, I'd strongly consider doing just that.
Austin