KaRoy
New Member
I understand ISO is used to speed up the picture-taking process, but I can't seem to find reference information on how it's generally done in modern digital cameras: what the basic science is and what techniques are used.
Here's what I suspect is happening:
1) light hits the CMOS sensor
2) every pixel's sensory information is recorded
3) at ISO 100, most of the pixels (say 90%) are read from the buffer
4) at ISO 800, proportionally fewer are read
5) the picture is made from the data that was read
Basically, the decision to use only x pixels out of the total, as a function of the ISO setting, is what speeds up the picture taking. The resulting noise would then come from the picture-making process, and the assembly logic implemented in each camera's software would be where the differences between manufacturers come from. (There's a toy sketch of this guess just below.)
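To make the guess concrete, here's a toy Python sketch of the model I have in mind. The iso_to_fraction function, the 90% figure, and the random subsetting are pure assumptions on my part, not anything I've found documented about real camera firmware:

```python
import numpy as np

def iso_to_fraction(iso):
    """My guess: the fraction of sensor pixels actually read out,
    starting at 90% for ISO 100 and shrinking proportionally above that."""
    return min(0.9, 0.9 * 100.0 / iso)

def hypothetical_capture(sensor_data, iso):
    """Toy model of my steps 1-5: everything is recorded, then only an
    ISO-dependent subset of pixels is read back and the picture is
    assembled from that subset (the rest stay at zero)."""
    flat = sensor_data.ravel()
    keep = int(len(flat) * iso_to_fraction(iso))
    chosen = np.random.choice(len(flat), size=keep, replace=False)
    picture = np.zeros_like(flat)
    picture[chosen] = flat[chosen]  # pixels "read from the buffer"
    return picture.reshape(sensor_data.shape)

# Fake 4x4 sensor readout, "captured" at ISO 100 vs ISO 800
sensor = np.random.rand(4, 4)
print(hypothetical_capture(sensor, iso=100))
print(hypothetical_capture(sensor, iso=800))
```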
So I'm curious whether this is correct, and if not, could somebody throw up a link to the real explanation? I can't seem to ask Google the right question.
However, whatever the actual process is, gaining speed by throwing away a select set of pixels seems like a rather artificial choice. Since the data is already all there, isn't it just a question of hardware + software to process all the pixels all the time? I understand the traditional attitude towards ISO, but if the technology is there, why not process every picture at full quality?
Unless ISO control in digital cameras really means altering how much light information is recorded in the first place, i.e. some pixels on the CMOS are actually turned off. That would still be an artificial choice, no? It's done to affect processing speed, but sooner or later the hardware + software combo should take care of this.
karoly