> > Just a few Xilinx FPGA designs do not mean much.
> That's true. Is that all you've done? I've done over 100 Xilinx FPGA
> designs, and you can find me on their web site in their XPerts
> section...I'm also on their XEUC (Xilinx Expert User's Council)...and
> they even pay me to consult for them.
OK, so you mostly do FPGAs and simple ASICs. So what?
Does that mean anything for multi-million-gate non-FPGA designs?
You are actually the one who keeps making amorphous
and fuzzy claims. Consulting? What kind of consulting is it?
Xilinx was paying me big bucks to look into their
silicon problems. And how about your consulting work?
Big deal, huh? Do you want to compare how much the XEUC paid you
and how much Xilinx paid me?
> > Do you know I had been involved in designing Xilinx Vertex II/III
> > FPGA chips?
> Involved? Does that mean YOU did a design using the Xilinx Virtex
> chips (note, you didn't even spell it right...it's spelled Virtex...
> i, not e...)? What, exactly, do you mean by "involved"? It's a very
> amorphous claim, so it's hard to know what the significance of it is.
> Be careful what you claim...
Nothing to be careful about here. Xilinx is my customer, and we were
helping them solve design problems... and you are just an end user
of Xilinx chips.
> > Now this is not only a chip you have heard of, it is something you
> > use a lot?
> The V3 is not out, so how could I use it? And yes, I happen to have
> done quite a few V2 designs, and was on the design review team of the
> V3 (and I reviewed the V2 as well) for Xilinx.
Design review team? Now I am really getting interested... which design
review team are you on? And what kind of design review have
you done? Design review can be a very fuzzy term, you know?
Who knows, maybe you just gave 'design review' a new
meaning. You were very good at that, and did it
a lot back on the Contax mailing list. Now, you WILL have to be careful
with your claims.
> > Happy now?
> Again, the significance of your amorphous statement has yet to be
> established, and really, this list isn't the place for it.
You are the one who's making fuzzy statements.
And you got hurt simply because I showed people how much
you really knew.
> > I will ignore you for now.
> You only "ignore" me because you can't answer my challenges.
Sorry, your so-called challenges are not up to my definition
of challenges. They are merely silly, non-intellectual mumblings
that reflect your self-pity.
> BTW, someone mentioned that my responses weren't that respectful to
> you...well, there is a good reason for this...because you have not
> earned my respect. I have not found that YOU treat me (or anyone who
> challenges you, for that matter) with any respect, so, accordingly, I
> simply don't think you deserve it. I have tried, but instead of
> actually stepping up to the plate and responding to issues
> technically, you skirt the technical issues, and make it personal.
Believe me, I have tried very hard. I even showed you that
to earn people's respect, you do not need to tell people
what company you run, which projects you have done, and
which organizations you are in. You also do not have
to ask people to do the same for you. And you have NEVER learned.
Remember that YOU ARE the one who first posted the insulting
message, and you fought on later simply because your weak,
tiny ego got hurt. So, is this what you call responding to
issues technically? Man, you really can spin new meaning
out of old tricks.
> > Your question regarding the A/D is very good. The clock speed for
> > reading a CCD sensor is fairly high now, so clocking error is a big
> > issue.
> Clocking error? Specifically, what do you mean by that? If anything,
> any error in clocking is due to bad implementation/design; there is no
> need for this to be an issue. What frequency do you believe is
> "fairly high"?
This really makes me laugh. Yeah, if your circuit has a problem,
it must be bad implementation/design. How about this:
if your circuit has a problem, you must have hired a stupid engineer?
You really have to hire an engineer with over 25 years of excellent
experience, right?
Actually, you answered the question all by yourself.
So you agree that clocking will be a problem if the design
is bad, right? What do you think? Is 30 MHz to 50 MHz too low
for you? Yeah, right, easy for you. Show me how many 16-bit, 50 MHz
A/D conversion designs you have done! And the most important part:
how expensive is the solution?
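Since you asked for numbers, here is a back-of-the-envelope Python
sketch (my own illustration; the jitter and frequency figures are made
up for the example, not taken from any real design) of why clock
quality matters at these speeds. The textbook aperture-jitter limit
says a sampled full-scale sine at frequency f with RMS clock jitter t
can do no better than SNR = -20*log10(2*pi*f*t):

import math

def jitter_limited_snr_db(f_in_hz, jitter_rms_s):
    # Aperture-jitter SNR ceiling for a full-scale sine input.
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * jitter_rms_s)

def ideal_quantization_snr_db(bits):
    # Ideal quantization-noise SNR for an N-bit converter.
    return 6.02 * bits + 1.76

# Illustrative numbers: a 25 MHz analog component, 5 ps RMS clock jitter.
print(ideal_quantization_snr_db(16))        # ~98 dB: the 16-bit ideal
print(jitter_limited_snr_db(25e6, 5e-12))   # ~62 dB: jitter dominates

Even 5 ps of RMS jitter caps you around 62 dB, nowhere near the ~98 dB
a 16-bit converter is nominally capable of. That is the clocking error
I am talking about.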
> Line noise is another issue.
> Show me where line noise is an issue. It MAY be an issue in a poor
> design, but not if the design is well engineered.
Line noise is always an issue for an analog signal line
that is clocked at 50 MHz when you have to preserve
the signal integrity. This is not like those low-end
video cam designs you have done, OK?
> > Those discrete components around are also an issue.
> This is not an issue with a proper design, for me or for other
> professionals with experience in the industry. This is just basic good
> engineering practice. Cold solder joints are an issue as well, but the
> level of issues you are raising is simply not an issue in proper
> designs. This sounds like Foveon advertising...make up things that
> don't exist (or are non-issues) so you can claim you've solved them.
We are talking about whether the problem exists or not. Do not dodge
the issue with words again, like 'good engineering practice
will not make it an issue'... blah blah blah.
Discrete components are always a concern if you want to
get the best image quality. Capacitors along a 50 MHz
analog line will always bring in noise. Making it work
is different from making it the best. Your bragging
statements only tell me what level of designs you
are mostly dealing with.
> > A/D conversion error itself is also an issue.
> What A/D conversion error? Do you mean +-1 LSB? That's not an "issue",
> it's inherent in the A/D process, and again, won't be seen...nor is
> there anything that can be done about it.
If your knowledge of A/D conversion error is limited
to +/-1 LSB, I think we are in totally different leagues here.
Man, this is like the day when I had to explain
to people that clock speed does not mean everything for
CPU performance.
Since you claim to know it all, how about you explain this to me?
Why can an A/D converter introduce digital artifacts? Why?
And what kind of digital artifacts are they?
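Here is a hint as a toy Python sketch (illustrative only: the 'missing
code 100' pattern is made up, not the behavior of any real converter).
Real converters have differential non-linearity, so a perfectly smooth
analog ramp can come out with unevenly spaced or missing output codes,
and in an image that shows up as contouring and banding in smooth
gradients:

from collections import Counter

def ideal_adc(v, bits=8):
    # Ideal quantizer over [0, 1): uniform code widths.
    return min(int(v * (1 << bits)), (1 << bits) - 1)

def nonideal_adc(v, bits=8):
    # Toy DNL error: the decision band for code 100 has collapsed,
    # so inputs that should map to 100 come out as 101 instead.
    code = ideal_adc(v, bits)
    return 101 if code == 100 else code

# Quantize a slow, smooth analog ramp and histogram the output codes.
ramp = [i / 4096.0 for i in range(4096)]
hist = Counter(nonideal_adc(v) for v in ramp)
print([(c, hist.get(c, 0)) for c in range(98, 104)])
# Code 100 never occurs and code 101 occurs twice as often: in a
# smooth gradient, that uneven code spacing is visible as banding.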
> > You cannot just directly use the data from the A/D stage as raw
> > data.
> Well, yes, you CAN. All you need is the PRNU table...and you can do
> anything to the RAW data out of the A/D. How on earth can you claim
> that what you can do with the raw data is limited? That's just
> absurd...as ANYTHING that is done is done WITH this data, whether it's
> in the camera or outside of the camera.
Hehehe... this is getting funny again. I like the term PRNU,
but people like to use a better term here. Anyway,
the reason there is not much you can do with raw data is that
if the lens were bad, the raw data could not save you. If the sensor
were bad, the raw data could not save you. If some image details
were lost in the image conversion pipe, the raw data could not
save you either.
> > Most companies choose to compensate the data with calibration data
> > for the sensor, such as geometric noise, etc., before generating the
> > real, useful raw data.
> It's called Photo-Response Non-Uniformity (PRNU), and is a
> compensation for each individual sensing element. It's simply a LUT
> (Look-Up Table). If you knew enough about how this really worked, you
> would know what it is called. Like I said, you seem to have a cursory
> knowledge, but no real experience.
> So, if you have the PRNU table, which you do if you have the camera,
> you CAN get the RAW image data from the A/D and use it, and do
> ANYTHING to it you want to. You are not limited in what you can do
> with it, and the "processed" data does NOT give you any more
> information than the RAW data does. How do you think the out-of-camera
> Bayer-pattern processing algorithms work?
Hahaha... so this is just how much you know. Did you ever learn
that doing a color look-up-table correction after the A/D stage will
create color phase noise? Sure, you have never seen this,
since you mostly do low-end video cam designs.
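For the record, the basic per-element compensation we are both talking
about is simple enough to sketch in Python (my illustration; the frame
names and numbers are hypothetical): subtract a dark frame for the
per-pixel offset, then apply a per-pixel flat-field gain for the PRNU.

import numpy as np

def prnu_correct(raw, dark, flat):
    # Per-pixel compensation: subtract the dark frame (offset), then
    # divide by the normalized flat field (the PRNU gain map).
    gain = flat.mean() / np.clip(flat, 1e-6, None)
    return np.clip((raw.astype(np.float64) - dark) * gain, 0, None)

# Hypothetical frames; in practice dark/flat come from calibration runs.
rng = np.random.default_rng(0)
dark = rng.normal(128, 2, (4, 4))       # per-pixel offset
flat = rng.normal(40000, 400, (4, 4))   # per-pixel response
raw = dark + 20000 * (flat / flat.mean())
print(prnu_correct(raw, dark, flat))    # ~20000 everywhere afterwards

The argument is not whether this step exists; it is about what a naive
digital-domain look-up fix does to the noise afterwards.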
> > So you will have to put signal processing circuits on the boundary
> > of the chip to emulate the shutter behavior...
> Huh? The cameras, like the N Digital, USE the mechanical shutter IN
> the camera.
*Sigh* You also do not know how the shutter works on a CMOS sensor.
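Since you clearly need it spelled out: a CMOS sensor without a global
shutter is reset and read out row by row, so every row integrates over
a time window offset from the previous one. A rough Python sketch of
that timing (the row count, exposure, and line time are illustrative
numbers only, not any particular sensor's):

def rolling_shutter_windows(rows, exposure_s, line_time_s):
    # Per-row (start, end) exposure times for a rolling readout:
    # each row starts integrating one line-time after the previous row.
    return [(r * line_time_s, r * line_time_s + exposure_s)
            for r in range(rows)]

# Illustrative: 1944 rows, 1/250 s exposure, 15 us line time.
windows = rolling_shutter_windows(1944, 1 / 250, 15e-6)
print(windows[0], windows[-1])  # last row starts ~29 ms after the first

That skew between the first and last rows is exactly what the circuits
at the edge of the chip, or the mechanical shutter, have to deal with.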
> > The signal output from the CMOS sensor is not *raw* anymore.
> NO ONE ever said that RAW data was the data directly out of the
> sensing element. Raw data is unquestionably the data out of the A/D,
> period. No one has ever said any differently. This is YOUR
> misunderstanding, not anyone else's.
You missed the point. I was saying that the signals coming
out of Canon's CMOS sensor have been highly processed already.
> > Take a look at the functional diagram at the bottom. You will notice
> > a feedback loop around the 16-bit A/D called black-level calibration.
> > You will see how complex it is.
> This is something that someone MAY choose to use/do, but whether it
> is necessary or not has yet to be shown. No high-end digital cameras
> use it, nor do any digital backs...which provide the highest level of
> image quality.
> Just as an FYI, this chip is NOT in any production camera right now.
> Basically, it's a chip that integrates the typical components that are
> found in any digital imaging system, plus adds some bells and whistles
> to distinguish themselves. Again, how well these bells and whistles
> work, much less for high-end cameras and DSLRs, has yet to be shown.
> So, you've managed to find A chip that isn't in any camera, and it's a
> plain fact that none of the high-end digital backs (or any of the
> current high end DSLRs that I am aware of) do any of this "processing"
> you claim happens (before the A/D) before you get RAW data...so again,
> what was your point if no one seemingly needs to do, or even does,
> this "processing"?
You are wrong again. It is really annoying that
I have to type the word 'wrong' so many times in one email!
Anyway, the Nu Core chip you saw is already the second-generation chip.
People are using the first chip already. A few new designs with
the new chip are coming. The key concept is that if you
want to reduce the noise, you'd better solve this analog problem
with an analog solution. If you do it at the digital stage,
it will be a bit late and will introduce digital artifacts.
Digital backs like Phase One's have been trying to do similar things.
They even put temperature compensation in.
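To make the black-level loop concrete, here is a toy Python model (my
sketch of the general technique only, not Nu Core's or Phase One's
actual circuit): average the optically black pixels on each line and
feed a fraction of the error back as an offset trim applied before
conversion, so the offset never gets baked into the digitized image.

def black_level_clamp(lines, dark_cols=16, gain=0.5, target=0.0):
    # Feedback clamp: measure the masked (optically black) pixels per
    # line, then trim the offset applied ahead of the next line.
    # In hardware, the trim would be a DAC in front of the 16-bit A/D.
    offset = 0.0
    corrected = []
    for line in lines:
        converted = [p - offset for p in line]  # offset trim pre-A/D
        black = sum(converted[:dark_cols]) / dark_cols
        offset += gain * (black - target)       # feedback update
        corrected.append(converted)
    return corrected

# Toy input: every 64-pixel line rides on a slowly drifting black level.
lines = [[50.0 + 0.01 * n] * 64 for n in range(100)]
out = black_level_clamp(lines)
print(out[0][0], out[-1][0])  # 50.0 at first, ~0.02 once the loop settles

Do the same correction after the A/D instead, and the offset error has
already been quantized into the codes, which is exactly the kind of
digital artifact I keep pointing at.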
Man, am I hungry! Time for supper now! Yummy!
This time, I will really ignore you for now.
Or try to come up with some real challenges, OK?