At this stage of our preprocessing flow, we have our images calibrated, cosmetically corrected, graded, and aligned. One of the last things we should do before merging them into a single image is to run local normalization.
What you will notice is that some of your frames are brighter than others. Instead of trying to balance each frame so it matches the others as a whole, local normalization does this in smaller chunks: it breaks the frame up into small blocks of pixels and normalizes each block across the frames.
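To make the block-by-block idea concrete, here is a simplified sketch in Python. PixInsight's actual Local Normalization uses a more sophisticated multiscale algorithm, so treat this only as an illustration of the concept: each tile of the target frame is rescaled so its local background level and dispersion match the reference. The function name and the median-based statistics are my own choices, not PixInsight's.

```python
import numpy as np

def local_normalize(frame, reference, block=128):
    """Illustrative sketch: match each block x block tile of `frame`
    to the corresponding tile of `reference` in median and spread."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            ref_tile = reference[y:y + block, x:x + block]
            # Local background level (median) and dispersion (MAD).
            t_med, r_med = np.median(tile), np.median(ref_tile)
            t_mad = np.median(np.abs(tile - t_med)) or 1.0
            r_mad = np.median(np.abs(ref_tile - r_med)) or 1.0
            # Shift and rescale the tile onto the reference's statistics.
            out[y:y + block, x:x + block] = (tile - t_med) * (r_mad / t_mad) + r_med
    return out
```

Because the correction is computed per tile, a gradient that brightens one corner of a frame gets a different correction than the opposite corner, which is exactly what a single global offset cannot do.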
To use local normalization correctly, you need to understand why you are doing it. Take a look at the following two images:
Both images have a background gradient. Image 1 is bright in the lower-right while image 2 is bright in the upper-left, and the gradient in image 1 is not nearly as severe as in image 2. Unfortunately, my master flats were unable to remove this adequately. The big difference between the two images is that image 2 was taken during a 75% full moon, while image 1 was taken during a nearly new moon. The moon creates a brightness gradient across the entire sky. Master flats cannot remove this because flats are created against a flat, evenly illuminated background.
If you look closely, you will also notice that image 2 has a donut in the bottom third and left half. Again, master flats are unable to completely remove this because of the uneven sky illumination. Image 1 does not have this issue.
Unfortunately, most tutorials on PixInsight Local Normalization tell you to select the best image as your reference. When I ran the PixInsight Subframe Selector process, it said image 2 was better than image 1.
This is what happens when I integrate the images using image 1 and image 2 as my references:
As you can see, the integrated image on the left (referenced to image 1) is much better. The nebula is brighter, and you can see more structure. And even though there is still a gradient, it isn't as severe as in the version referenced to image 2, which means it will be much easier to correct.
Local Normalization Settings
Once you understand what local normalization does, it is very easy to implement.
Once you identify your best image, your only other decision is whether to change the scale from its default of 128. For my setup, 256 works better, but you won't know until you try multiple settings.
Depending on the number of frames you have, you can improve local normalization even more by creating an integrated image as your reference image.
To do this, select the best 10-20% of your images and run the PixInsight Image Integration process.
Once you have this integrated image, use it as your reference frame for the Local Normalization process.
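The select-the-best-then-combine step can be sketched as follows. This is only a stand-in for the PixInsight Image Integration process: I use a simple mean combine and a hypothetical per-frame quality score (such as one exported from Subframe Selector), whereas PixInsight would also apply pixel rejection and weighting.

```python
import numpy as np

def build_reference(frames, scores, fraction=0.15):
    """Illustrative sketch: average the top-scoring fraction of frames
    into a single reference frame for Local Normalization."""
    n = max(1, int(round(len(frames) * fraction)))
    # Rank frames by quality score, best first, and keep the top n.
    best = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:n]
    # Simple mean combine; a real integration would also reject outlier pixels.
    return np.mean([frames[i] for i in best], axis=0)
```

The benefit of an integrated reference is that random noise and transient defects (satellite trails, hot pixels) average out, so the per-block statistics the normalization relies on are more stable than in any single frame.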
To get the most out of local normalization, you need at least one image with a minimal background gradient. This means you need one frame taken under the following circumstances:
- High transparency – less sky glow
- Near meridian – less gradient
- New moon
Unfortunately, these three things almost never happen simultaneously, but conditions 1 and 2 are more important than 3 (as long as you aren't imaging during a full moon when the moon is close to the target).
The next step is to take these images and merge them into a single image with PixInsight Image Integration.