Pre-Processing images in Nebulosity

Craig Stark

You've taken your images and are now comfortably inside. Now what? How do you get all those raw frames to look like a nice pretty stack? Just what the heck is Bad Pixel Mapping? Should I try Drizzle?


The rest of the manual provides answers to many individual questions and documents each of the tools. The goal of this section is to let you see how all of these fit together and to give you the necessary information to choose a path through the initial processing of your data. This alone won't give you a full understanding of how each tool works (see the individual section in the online manual for each tool), but it should help put all the pieces together.


The basic steps are as follows:


1. Prepare any sets of darks, flats, or bias frames for use by stacking them

2. Take care of hot pixels (dark subtraction or Bad Pixel Mapping), bias signals, and/or vignetting (flats)

3. (optional) Normalize the images

4. Convert RAW images into color via Demosaic (if a one-shot color CCD was used and captured in RAW, which you really should do) and square up your pixels (if needed)

5. (optional) Grading and Removing Frames

6. Stack the images (Align and Combine)

7. Crop the image to clean it up

8. (color only) Run the Adjust Color Offset tool to remove skyglow hue

9. Stretch the image (Levels, DDP, etc.)

The last three steps (crop, color offset, and stretch) are covered in more detail in the Post-Processing How-To document.

Step 1. Preparing the darks, flats, and biases

If you've taken darks, flats, and/or bias frames for this imaging session, you'll need to put them together to form "master" darks, flats, and/or bias frames. If you've not got a new set of these, simply skip to the next step as there's nothing to do here.

Assuming you do have some, what we need to do is take the set of them (e.g., 20 bias frames) and combine them so that you can use them to remove artifacts in your light frames. Having more than one dark, flat, and/or bias frame is a good thing, as each individual frame has both the artifact you want to remove from your lights and random noise. Stack a bunch of these together and the random noise goes away, leaving you with a clean image of the artifact you want to remove. Use just one and you remove the artifact along with whatever random noise that one frame had. Since its random noise won't be the same as the random noise in your image, using just one dark, flat, or bias will actually inject noise into your light frame and make it noisier. This is why people take a good number (20-100) of each of these.
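To see why this works, here is a minimal numpy sketch (simulated data, not anything Nebulosity does internally): averaging a stack leaves the fixed pattern you want to keep untouched, while the random noise shrinks by roughly the square root of the number of frames.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated bias frames: a fixed pattern (the "artifact" we want to keep)
    # plus fresh random read noise in every frame.
    height, width, n_frames = 100, 100, 50
    fixed_pattern = rng.normal(1000, 5, size=(height, width))   # bias structure
    frames = [fixed_pattern + rng.normal(0, 10, size=(height, width))
              for _ in range(n_frames)]

    single = frames[0]                    # "master" made from one frame
    master = np.mean(frames, axis=0)      # "master" made from the whole stack

    # Residual random noise left on top of the fixed pattern:
    print(np.std(single - fixed_pattern))  # ~10 ADU remains
    print(np.std(master - fixed_pattern))  # ~10 / sqrt(50), about 1.4 ADU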


When stacking these, we don't want the frames to move. That is, since there isn't a star whose motion we want to track, we don't want to align these images. We just want them stacked on top of each other as-is. To do this:


1. Pull down Processing, Align and Combine

2. Select "None" for the Alignment method and keep it set to "Save stack" and "Average / Default"

3. Click OK and then select all of your dark frames (or bias frames, or flat frames)

4. When all are stacked, give the resulting combined dark frame a name like "master_dark" or "master_dark_1m" (1m being a code for 1 minute - something to let you know what kind of master dark this is)

5. Repeat for any other types you have (flats and/or biases)


Ugly Details

At this point, you've got nice stacks of each and the stacks are ready to use. If you want the absolute cleanest pre-processing, though, it's worth considering the following issue. Nebulosity's pre-processing just does the basic math for you. It subtracts the dark and bias from the image and divides this by the flat. It does not do anything to the bias, dark, and flat you pass in during Pre-processing. It just uses them.
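In other words, for each light frame the arithmetic is essentially calibrated = (light - dark - bias) / flat. The small numpy sketch below is just an illustration of that math under one common convention (normalizing the flat by its mean so overall brightness is preserved, an assumption); it is not Nebulosity's code.

    import numpy as np

    def calibrate(light, dark=None, bias=None, flat=None):
        """Basic calibration arithmetic: subtract dark and bias, divide by flat.

        Illustration of the math described above, not Nebulosity's code.
        The flat is normalized by its mean here so the overall brightness
        of the light frame is preserved (a common convention; an assumption).
        """
        result = light.astype(np.float64)
        if dark is not None:
            result = result - dark
        if bias is not None:
            result = result - bias
        if flat is not None:
            result = result / (flat / np.mean(flat))
        return result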


So what's the problem? The problem is that that dark frame has the bias error in it already. The flat frame has the bias error and some amount of thermal noise in it (which will lead to hot pixels). So, if you use all of these as-is, you're going to do things like subtract out the bias error twice, which will actually inject the reverse of the bias error (still noise) back into your image. Oops.


The solution is to pre-process your pre-processing frames. You can, for example, apply the bias frame as the only pre-processing step when pre-processing your "master dark" and "master flat" frames. You can also have a dark frame taken at about the same exposure duration as your flats and apply this to the flats. Before fully going down this route, consider the following recommendations:


Recommendations

If you are using normal dark subtraction and not Bad Pixel Mapping to address the hot pixels, your darks already have the bias error in them. Do not collect extra bias frames and do not use any bias frames during pre-processing. Just use the darks and both the dark current and the bias error will be removed.


If using flats, it is worth knowing that Nebulosity passes a mild smoothing filter over your flat in any case (a 2x2 mean filter). This will help remove hot pixels in the flat if your exposure duration was long enough to put them in there, and it will also remove some of the bias error. You may still remove the bias from this if you like, or simply pass something like the 3x3 median filter over your flat to smooth it out prior to applying it to your light frames.
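If you want to see what that kind of smoothing amounts to, here is a small scipy sketch of passing a 3x3 median filter over a master flat; it is illustrative only - Nebulosity's built-in median filter tool does this for you.

    import numpy as np
    from scipy.ndimage import median_filter

    def smooth_flat(master_flat):
        """Pass a 3x3 median filter over a master flat (2-D numpy array).

        Isolated hot pixels get knocked out while the large-scale vignetting
        pattern - the part the flat is meant to capture - is left intact.
        """
        return median_filter(np.asarray(master_flat, dtype=np.float64), size=3)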


If using Bad Pixel Mapping, consider using bias frames as well. There is no need to clean up your dark frame (i.e., remove its bias error) since, with BPM, only the very hot pixels are touched. The bias error in your dark frame is ignored completely. If your camera has a strong bias error, grab a bunch of bias frames (shortest exposure possible) and stack them; you only need to do this once. Call it a "master bias" or "uber-master-bias" or whatever you like and apply this during pre-processing (below).


Step 2. Taking care of hot pixels, bias signals, and/or vignetting

At this point, you should have "master" darks, flats, and/or bias frames. If you don't and you're processing without these, skip this step. Keep in mind, you can use as many of these as you want (or don't want). You can use darks but nothing else, flats and biases but not darks, etc. It's up to you and what type of pre-processing images you actually have. If you've got a stack of darks to use, you have a choice to make.


Dark subtraction or Bad Pixel Mapping?


Both of these techniques are designed to deal with the thermal noise inherent in your images and the resulting "hot pixels" that show up in the same spot on the image in each frame. Dark subtraction is the traditional way of doing this. It works by simply subtracting the value of each pixel in your "master dark" from the value of that pixel in each light frame. If your light frames and dark frames were taken with the same exposure duration and at the same temperature, dark subtraction will remove the hot pixels (and "luke-warm" pixels as well - any thermal noise, not just the brightest). This can work very well if you control the temperature and exposure duration and take a lot of dark frames. If you don't do these, you can end up with "holes" in the image (black spots where the hot pixel used to be), incomplete hot pixel removal, and noise injected into your light frames (see above).


Bad Pixel Mapping works differently. You first create a "Bad Pixel Map" (Processing, Bad Pixels, Make Bad Pixel Map) using a dark frame or stack of dark frames. A slider appears to let you set a threshold (feel free to use the default). Values in the dark frame that are above the threshold say "this pixel is bad". Bad pixels, and only bad pixels, are fixed in your light frames by using surrounding good pixels to help fill in what this pixel should have been. For many cameras (in my experience, the cooled cameras with Sony sensors work best), this is an exceptionally powerful technique as the hot pixels are removed effectively with no noise being injected. It's also very flexible, as you can use the same "master dark" from night to night and from exposure duration to exposure duration just by adjusting the slider and making new maps as needed.
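The general idea is easy to see in a toy numpy sketch (an illustration of the technique with a made-up 5-sigma threshold, not Nebulosity's exact thresholding or in-fill method): flag pixels that are bright in the master dark, then replace only those pixels in each light frame with a local median of their neighbors.

    import numpy as np
    from scipy.ndimage import median_filter

    def make_bad_pixel_map(master_dark, threshold):
        """Boolean map: True where the dark frame says the pixel runs hot."""
        return master_dark > threshold

    def remove_bad_pixels(light, bad_map):
        """Replace only the flagged pixels with a 3x3 median of their neighborhood."""
        repaired = light.copy()
        local_median = median_filter(light, size=3)
        repaired[bad_map] = local_median[bad_map]
        return repaired

    # Example with hypothetical arrays: flag anything more than ~5 sigma above
    # the dark frame's typical level, then repair a light frame.
    # bad_map = make_bad_pixel_map(master_dark,
    #                              np.median(master_dark) + 5 * np.std(master_dark))
    # clean_light = remove_bad_pixels(light, bad_map)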


Note: if you use Bad Pixel Mapping you will not use Dark Subtraction, and vice versa. One or the other, but no need for both. If you use Bad Pixel Mapping you can still use flats and bias frames, and it doesn't matter whether you apply BPM before or after your other pre-processing.


Applying Bad Pixel Mapping

To apply BPM to your light frames:


1. Create a Bad Pixel Map if you don't already have one. Processing, Bad Pixels, Make Bad Pixel Map. Select a dark frame or stack and start off by just hitting OK to use the default threshold.

2. Pull down Processing, Remove Bad Pixels, selecting the one for the kind of image you have. If you have a one-shot color camera that is still in the RAW sensor format and looks like a greyscale image and not color (another reason to capture in RAW and not color...), select RAW color. If it's a mono CCD, select B&W. If it's already a color image, you can't use Bad Pixel Mapping.

3. A dialog will appear asking you for your Bad Pixel Map. Select it.

4. Another dialog will appear asking you for the light frames. Select all of them (shift-click is handy here).

5. You will end up with a set of light frames that have had the bad pixels removed. They will be called "bad_OriginalName.fit" where OriginalName is whatever it used to be called.


Applying Darks, Flats and Biases

Here, you get to apply traditional dark subtraction, flats, and biases in any combination you wish. To do this (a rough scripted sketch of the same arithmetic follows the steps):


1. Pull down Processing, Pre-Process Color images or Pre-Process BW/RAW images. Color images are already full-color. BW/RAW images were either taken on a monochrome camera (BW) or taken on a one-shot color camera but have not yet been converted into full-color via the Demosaic process.

2. A dialog will appear that will let you select your various pre-processing control frames (darks, flats, and/or biases). Select whichever you have by pressing the button and telling Nebulosity which file to use.

3. If you are using dark subtraction and you doubt your exposure and/or temperature control was perfect, select the "Autoscale dark" option.

4. Click OK and you will be asked to select the light frames you wish to pre-process.

5. When all is done, you will have a set of files called "pproc_OriginalName.fit".


Step 3. Normalize Images (optional)

All things being equal, your 50 frames of M101 should all have the same intensity. They were taken on the same night, one right after the other, and all had the same exposure duration. So, they should be equally bright, right? Yes, but there's that nagging "all things being equal" we supposed and, well, all things aren't always equal. For example, if you start with M101 high in the sky and image for a few hours, it starts picking up more skyglow as the session goes on, brightening the image up. That thin cloud that passed over did a number on a frame that still looks good and sharp, but isn't the same overall intensity as the others, etc. All things are not always equal.


If you're doing the Average/Default method of stacking, you need not worry about this issue unless the changes are really quite severe. If you're using standard-deviation based stacking, Drizzle, or Colors in Motion, it is a good idea to normalize your images before stacking. What this will do is get all of the frames to have roughly the same brightness by removing differences in the background brightness and scaling across frames (a conceptual sketch follows the steps below). To normalize a set of images, simply:


1. Pull down Processing, Normalize images

2. Select the light frames you want to normalize

3. In the end, you'll have a set of images named "norm_OriginalName.fit"


Step 4. Converting RAW images to Color and/or Pixel Squaring (aka Reconstruction)

The last step before stacking your images is to convert them to color (if they are from a one-shot color camera and you captured in RAW) and square them up as needed. Some cameras have pixels that are not square, and this will lead to oval rather than round stars. The process of demosaicing (color reconstruction) and/or pixel squaring is called Reconstruction in Nebulosity.
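For background, a one-shot color sensor records one value per pixel behind a repeating color-filter mosaic, and demosaicing rebuilds a full-color image from it. The toy numpy sketch below uses the crudest possible approach (binning each 2x2 cell into one color pixel) purely to show what the RAW-to-color step means; it assumes an RGGB layout and is not Nebulosity's demosaic, which keeps the full resolution.

    import numpy as np

    def superpixel_demosaic_rggb(raw):
        """Toy demosaic: collapse each 2x2 RGGB cell into one RGB pixel.

        raw: 2-D array from a one-shot color sensor, assumed RGGB layout.
        Returns an array of shape (H/2, W/2, 3). Illustrative only - real
        demosaic algorithms interpolate to keep the full resolution.
        """
        raw = raw[: raw.shape[0] // 2 * 2, : raw.shape[1] // 2 * 2]
        r  = raw[0::2, 0::2].astype(np.float64)
        g1 = raw[0::2, 1::2].astype(np.float64)
        g2 = raw[1::2, 0::2].astype(np.float64)
        b  = raw[1::2, 1::2].astype(np.float64)
        return np.dstack([r, (g1 + g2) / 2.0, b])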


Note: you can tell if your images need to be squared up by pulling down Image, Image Info. Near the bottom you will see the pixel size and either a (0) or a (1). If it is (1), the pixels are square. Of course, the pixel dimensions will be the same in this case too.


To reconstruct all of your light frames, simply:


1. Pull down Processing, Batch Demosaic + Square (if images are from a one-shot color camera) or Batch Square (if images are from a monochrome camera, or you just feel like squaring up a color camera's frames but keeping the image as monochrome for some reason).

2. Select your frames


In the end, you'll have a set of images named "recon_OriginalImage.fit"


Step 5. Grading and Removing Frames (optional)

Sometimes bad things happen. The tracking goes awry, a breeze blows, you trip over the mount, etc. This is a good time to find those "bad" frames and pretend they never happened. There are two tools to help you here.


Grade Image Quality

This will look at a set of frames and attempt to automatically grade them as to how sharp they are relative to each other, the idea being that you'll not use the least sharp frames. Pull down Processing, Grade Image Quality and point it to your light frames. It will rename them (or copy them with a new name), denoting how sharp each frame is.
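Relative sharpness can be scored with simple image statistics. As one plausible illustration (not the metric Nebulosity actually uses), the sketch below ranks frames by the variance of their Laplacian, a common focus measure; the file pattern is hypothetical.

    import glob
    import numpy as np
    from astropy.io import fits
    from scipy.ndimage import laplace

    def sharpness_score(image):
        """Variance of the Laplacian: higher means more fine detail (sharper)."""
        return float(np.var(laplace(image.astype(np.float64))))

    # Rank a set of light frames from sharpest to softest (hypothetical names).
    paths = glob.glob("recon_*.fit")
    scores = sorted(((sharpness_score(fits.getdata(p)), p) for p in paths),
                    reverse=True)
    for score, path in scores:
        print(f"{score:12.1f}  {path}")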

