Sunday, 29 December 2013

imaging - Can I use an array of inexpensive cameras as an alternative to a telescope?

Yes, it can. However, the answer also depends on your definition of "detail". For some, detail means a tight shot where the whole frame is only a few dozen arcseconds wide; others think of detail as being wide field but also "deep", that is, having a very high signal-to-noise ratio.



If you want a wide field and a high signal-to-noise ratio, then yes, a multi-camera array will work well, and you'd be able to accumulate a large number of long-integration images in a relatively short amount of time.



So here's a breakdown of what you (and anyone else new to the topic) need to know about deep-sky astrophotography:



The main goal for all photos is to produce as visually pleasing an image as possible. A single image of any target accumulates light from several sources. The first, obvious source is the light from whatever target you're trying to photograph. The second, less obvious source is light pollution from man-made sources, the moon, and natural sky glow. The third source is the electronic and thermal noise of the camera itself.



Technically, all of these sources are "signal"; however, two of the three are unwanted, so we will call the light-pollution signal and the camera signal 'noise'. Longer exposures produce more signal from what you want, but also more noise from what you don't want.



Luckily, noise can be reduced by using several techniques in conjunction with each other. First, you need to take long exposures. Signal increases linearly with time: a 60 s photo will collect a tenth of the signal of a 600 s photo. Noise, however, increases only with the square root of the exposure time. Using the same example, the 600 s photo will have a signal-to-noise ratio about 3.2 times (the square root of 10) better than the single 60 s photo.
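That square-root relationship can be written as a tiny sketch. The function below is an illustration of the scaling argument, not a piece of any real processing pipeline:

```python
import math

def snr_improvement(t_long, t_short):
    """SNR gain of one long exposure over a short one.

    Signal grows linearly with exposure time t, while random noise
    grows as sqrt(t), so SNR scales as t / sqrt(t) = sqrt(t).
    """
    return math.sqrt(t_long / t_short)

print(snr_improvement(600, 60))  # ~3.16: the 600 s frame is ~3.2x better
```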



Increasing the signal-to-noise ratio by taking longer exposures is the first step to producing 'detailed' wide-field images. Camera noise can be further reduced by taking many images, and an array of cameras makes this trivial. Two cameras will produce twice as many images as one camera, and 10 cameras will produce 10 times as many as a single camera. That's obvious, but why does it matter?



When we stack images, the more we have, the better the stacking algorithms can preserve the signal we want to keep and reject the camera noise we don't. Once again, the signal-to-noise mathematics work out the same as above: one image has a certain signal-to-noise ratio, but a stack of 10 images can have a signal-to-noise ratio about 3.2 times higher, because the total integration time has increased tenfold.



Of course, if you're stacking photos, you should be calibrating them as well (dark frames, bias frames, etc.), which really reduces the amount of noise per sub-frame. Because of this, your actual SNR boost from stacking can in practice be even better than the raw 3.2 figure suggests.
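As a rough sketch of what dark-frame calibration does, here is a toy version operating on frames represented as flat lists of pixel values. The frame data and function names are invented for illustration; real stacking software works on full image arrays and also handles bias and flat frames:

```python
def master_dark(dark_frames):
    """Average several dark frames, per pixel, into one master dark."""
    n = len(dark_frames)
    return [sum(px) / n for px in zip(*dark_frames)]

def calibrate(light, dark):
    """Subtract the camera's thermal pattern from a light frame."""
    return [l - d for l, d in zip(light, dark)]

# Three hypothetical dark frames and one light frame (3 pixels each).
darks = [[10, 12, 11], [9, 13, 10], [11, 11, 12]]
light = [110, 212, 161]
print(calibrate(light, master_dark(darks)))  # → [100.0, 200.0, 150.0]
```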



If you photographed a target across 60 degrees of the sky (30 before the meridian and 30 after), you'd be able to acquire photos for 4 hours. One camera taking 10-minute photos would get 24 photos in that 4-hour period. If you had 5 cameras, you'd get 120 photos. Those 96 additional photos in the same amount of time are a lot of extra data to help reduce the noise and really let the faint details stand out above the background, especially if all of your photos were well calibrated.
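The frame-count arithmetic above is simple enough to check directly (the function name is illustrative):

```python
def frames_collected(hours, minutes_per_frame, n_cameras):
    """Total sub-frames an array collects in a fixed imaging window."""
    return int(hours * 60 // minutes_per_frame) * n_cameras

one = frames_collected(4, 10, 1)    # single camera
five = frames_collected(4, 10, 5)   # five-camera array
print(one, five, five - one)  # → 24 120 96
```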
