17 Replies Latest reply on Sep 9, 2017 1:19 PM by DAB

    Astrophotography With The 8MP Raspberry Pi Cameras


      I will attempt to use the Raspberry Pi 8MP NoIR and standard v2 camera boards to capture images of the night sky, and also to shoot daytime images through specialized filters.


      My question for the E14 community:


      Does anyone have experience in this application of the RPi 8MP v2 cameras?

        • Re: Astrophotography With The 8MP Raspberry Pi Cameras

          Hi Trent,


          Check the RPi website.


          I have seen a number of projects attaching the Picamera to a telescope.


          Depending upon what you want to do, there are ways to do eyepiece projection for high magnification using the camera or you can go prime focus for wide angle.


          The first just means making a small tube to attach the camera to the eyepiece and using the lens on the Picamera for focus.


          To do prime focus you will probably have to remove the lens from the Picamera, which I believe is fixed in place, so some careful surgery on the Picamera will be required.


          See if you can purchase a T-adapter that fits your telescope.


          Then you can experiment with different mounting ideas for the Picamera.


          I hope this helps.



          • Re: Astrophotography With The 8MP Raspberry Pi Cameras
            Roger Wolff

            You know... There is something that I don't have the time to try, and I think it should work. So someone should take my idea and implement it. :-) 


            With astrophotography, there is always the problem that the stars tend to move during your (long) exposure.


            The standard trick is to make your camera move along with the rotation of the Earth (or rather, to counter the Earth's rotation).


            My plan is simply to take, say, 1 s exposures, but 300 of them, for a combined 5-minute exposure. Then, in software, you KNOW that each successive image is rotated by 1/240th of a degree.
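            The de-rotate-and-stack step can be sketched as below (NumPy assumed; the rate constant, derotate, and stack are my own illustrative names, and a real pipeline would use proper interpolation rather than nearest-neighbour sampling):

```python
import numpy as np

EARTH_DEG_PER_S = 360.0 / 86400.0  # about 1/240 degree per second

def derotate(img, angle_deg, center):
    """Counter-rotate img by angle_deg about center (nearest-neighbour)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = center
    a = np.deg2rad(angle_deg)
    dy, dx = ys - cy, xs - cx
    # sample the input at the rotated coordinates
    sy = cy + dy * np.cos(a) - dx * np.sin(a)
    sx = cx + dy * np.sin(a) + dx * np.cos(a)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

def stack(frames, exposure_s, center):
    """Undo each frame's accumulated sky rotation, then sum them all."""
    acc = np.zeros_like(frames[0], dtype=float)
    for i, f in enumerate(frames):
        acc += derotate(f.astype(float), i * exposure_s * EARTH_DEG_PER_S, center)
    return acc
```

            For 300 one-second frames the total rotation is only 1.25 degrees, so even this crude resampling stays close; the point is just that the per-frame angle is known exactly rather than estimated.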


            It's easiest if you start out by telling the software where the rotation point is, but with a few hundred images you can also solve for the rotation point itself (with sub-pixel accuracy).


            Next, instead of a canvas of, say, 8 Mpixels, you use 128 Mpixels: each pixel is blown up 4x in each direction. Now, after rotation by those tiny amounts, each sensor pixel overlaps a different set of about 16 pixels in the canvas.
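            That accumulation onto the blown-up canvas can be sketched like this (NumPy assumed; SCALE, accumulate, and the integer (dy, dx) shift are my own illustration — in the real scheme the shift comes from the known rotation and is generally fractional):

```python
import numpy as np

SCALE = 4  # canvas pixels per sensor pixel in each direction (8 MP -> 128 MP)

def accumulate(canvas, frame, shift):
    """Spread each sensor pixel's value evenly over its SCALE x SCALE
    canvas footprint, offset by this frame's (dy, dx) canvas-pixel shift."""
    h, w = frame.shape
    # blow each sensor pixel up into a SCALE x SCALE block, conserving flux
    up = np.kron(frame / (SCALE * SCALE), np.ones((SCALE, SCALE)))
    dy, dx = shift
    canvas[dy:dy + h * SCALE, dx:dx + w * SCALE] += up
    return canvas
```

            Dividing by SCALE squared keeps the total flux of each frame unchanged on the canvas, so the stacked result is directly comparable to a plain sum.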


            Next, instead of simply adding the results together, you can run an optimization step (I'm explaining this in one dimension, but of course it is easy to implement in two). Suppose the REAL canvas has a single hot pixel, let's say at position 100. On the first image that means you get some value in pixel 25 of the sensor. Now suppose the sensor moves one canvas-pixel per image relative to the canvas. Next time you'll still get the value in pixel 25; only after four images does the measured value end up in camera pixel 24.

            With the simple variant of the algorithm you'd be spreading the measured value across canvas pixels 100-103 for the first image, 101-104 for the second, and only at the fourth image would the 24th sensor pixel end up back on 100-103. So you would end up with a triangle of image intensity centered around pixel 103. Concretely: suppose we measure 100 in pixel 25 of the first image and spread that as 25-25-25-25 over 100-103; adding 8 images this way, we get 50-100-150-200-150-100-50 in canvas locations 100-106.
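            The spreading arithmetic checks out numerically: the quoted 50-100-150-200-150-100-50 sums come out exactly if the eight 4-pixel footprints drift from 100-103 to 103-106, i.e. one canvas pixel every second frame (a small script of my own, NumPy assumed):

```python
import numpy as np

canvas = np.zeros(110)
# eight frames; each deposits 100/4 = 25 into its 4-canvas-pixel footprint,
# the footprint drifting one canvas pixel every second frame
for i in range(8):
    start = 100 + i // 2
    canvas[start:start + 4] += 100 / 4
# canvas[100:107] is now [50, 100, 150, 200, 150, 100, 50]
```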


            But if we instead assume that canvas pixel 103 alone has intensity 100, then all the measurements would be predicted exactly. So we can sharpen the resulting image, either by deconvolution or by an iterative algorithm.


            So when we see pixel 25 with a value of 100 in the first image, we increase the values of canvas pixels 100-103; but when we measure 0 in pixel 24 of the second image, which maps to canvas pixels 97-100, we decrease the running estimate of pixel 100 again. This way you should be able to reconstruct the original canvas, with only pixel 103 bright, much better than with the first iteration of the algorithm.
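            This increase/decrease loop is essentially a Kaczmarz-style algebraic reconstruction. A minimal 1-D sketch under my own assumptions (NumPy; integer shifts; forward and reconstruct are illustrative names):

```python
import numpy as np

SCALE = 4  # canvas pixels per sensor pixel (1-D version of the 4x blow-up)

def forward(canvas, shift):
    """Predict one exposure: each sensor pixel sums SCALE canvas pixels,
    with the sensor displaced by `shift` canvas pixels."""
    n = (len(canvas) - shift) // SCALE
    return np.add.reduceat(canvas[shift:shift + n * SCALE],
                           np.arange(0, n * SCALE, SCALE))

def reconstruct(measurements, shifts, size, iters=200):
    """Push each frame's residual back onto the canvas pixels that produced
    it: increase where the measurement exceeds the prediction, decrease
    where it falls short. Since each sensor pixel covers SCALE disjoint
    canvas pixels, adding residual/SCALE to each makes that frame match."""
    canvas = np.zeros(size)
    for _ in range(iters):
        for m, s in zip(measurements, shifts):
            residual = m - forward(canvas, s)
            canvas[s:s + len(residual) * SCALE] += np.repeat(residual, SCALE) / SCALE
    return canvas
```

            Cycling through the frames like this converges to a canvas consistent with all the shifted measurements; the sub-pixel shifts are what let the canvas resolve finer detail than any single frame.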


            Some complications crop up in practice. If one second of exposure is so little light that the digitized image contains nothing but black pixels, this of course won't work; you need some noise in the captured images so that the averaging can dig below the quantization step. So turn up the ISO as far as possible without getting saturated pixels. The individual images will look like shit, but the resulting canvas should be fantastic.

            • Re: Astrophotography With The 8MP Raspberry Pi Cameras

              Here is the Pi Camera with the factory lens removed. This is my first attempt at constructing an adapter. I need a 3D printer or CNC to develop a better design.



