|
Adobe[^] has a decent high-level summary of how NASA scientists use their software to assemble the images we love from collections of grayscale images, or dozens of non-aligned snapshots.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
Interesting. So all these APOD photos really wouldn't be "that" special without a human artist to help.
|
As are many cytometric views of cells. It's saddening, at least for me, because I do not like lies. I like fiction, not lies.
GCS d--- s-/++ a- C++++ U+++ P- L- E-- W++ N++ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t++ 5? X R++ tv-- b+ DI+++ D++ G e++>+++ h--- ++>+++ y+++* Weapons extension: ma- k++ F+2 X
If you think 'goto' is evil, try writing an Assembly program without JMP. -- TNCaver
"When you have eliminated the JavaScript, whatever remains must be an empty page." -- Mike Hankey
|
Lies? I'm not taking sides, but here's some food for thought that I had to grapple with.
I was scanning negatives from family outings, some taken in a forest. In those days film didn't have a white-balance setting: you had either indoor film or outdoor film (almost invariably the latter), balanced for tungsten filament or sunlight, respectively.
Well, photos taken under a tree canopy always came out rather greenish. So, when scanning them, how do I balance the color: toward what the film sees, with its bias toward a scene lit by standard sunlight, or toward what my eyes saw, which had adjusted to the greenish light and thus perceived items in their 'normal' colors? Which is correct, at least philosophically? The light was, in fact, greenish . . . but that's not what I saw.
I finally opted for balancing the images to normal colors, where normal is the white-balance magic done by our eyes.
So, apply the above to the images from space: are they lies? Fiction? Or possibly even closer to the truth than the raw photos show? The answer could depend upon the attitude and instructions given to the one recomposing the images. But with a three-dimensional reality projected into two dimensions, and colors lit by light other than the standard 'sol' source, there's no really obvious answer.
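For what it's worth, the "balance to what the eye saw" option is roughly what a gray-world white balance does. Here's a minimal numpy sketch, purely my own illustration (nothing to do with any particular scanning software): it scales each channel so the channel averages agree, which pulls a greenish cast back toward neutral.

```python
import numpy as np

def gray_world_balance(img):
    """Scale each channel so the per-channel means match the overall mean.

    img: float array of shape (H, W, 3), values in [0, 1].
    Returns the rebalanced image, clipped back to [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # average of R, G, B
    gray = channel_means.mean()                       # target neutral level
    gains = gray / channel_means                      # boost deficient channels
    return np.clip(img * gains, 0.0, 1.0)

# A greenish 'forest canopy' cast: red and blue run weak, green runs hot.
rng = np.random.default_rng(0)
cast = rng.random((4, 4, 3)) * np.array([0.8, 1.0, 0.7])
balanced = gray_world_balance(cast)
```

The assumption baked in (that the scene averages to gray) is exactly the philosophical question above: the algorithm decides what "normal" means, just as our eyes do.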
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
I like your point of view. The difference between the representation and the object, or "Ceci n'est pas une pipe" ("This is not a pipe"), as a more relevant individual said before me.
Thanks for sharing your insight.
|
Exactly. What cameras see is *NOT* the same as what your eyes see or could see. Even if the color balance is right, our eyes are logarithmic sensors while cameras (and film) are basically linear, meaning they can't handle in a single exposure the same range of contrast that our eyes do. Your computer screen has a dynamic range of about 256 levels, the sensor on your camera spans a range of a few thousand, and your eye is, IIRC, something like 100k to 1M before pupil-size adjustments.
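The linear-vs-logarithmic point can be sketched in a few lines. This is just an illustration of the idea (the 14-bit full scale is an assumed example, not any specific camera): a log curve spends the display's ~256 levels the way the eye does, giving proportionally more of them to the shadows.

```python
import numpy as np

def log_stretch(linear, full_scale):
    """Map linear sensor counts to 8-bit display values logarithmically.

    A linear mapping would crush the bottom of the sensor's range into a
    handful of display levels; a log mapping spreads the shadows out.
    """
    # log1p avoids log(0); normalise so full_scale maps to 255.
    return 255.0 * np.log1p(linear) / np.log1p(full_scale)

counts = np.array([0, 15, 255, 4095, 16383])   # assumed 14-bit sensor values
display = log_stretch(counts, 16383)
```

A linear map would send the 0..255 count range to roughly 4 display levels out of 256; the log map keeps that faint-end detail visible, which is one reason raw astro data looks black until it's stretched.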
|
The people doing them at NASA are scientists who know how to use Adobe's tools, because (aside from the Hubble Heritage image every year) the pictures are taken for scientific purposes first and foremost. The cameras aren't designed to make pretty pictures out of the box; you get a bunch of RAW files that need even more work than what comes out of your DSLR before they look presentable.
Even ignoring false-color (IR/UV/X-ray or narrow-band) images, your baseline images are all taken with a monochrome sensor, with sequential exposures through red, green, and blue filters (and, for fainter objects, often a much longer one without any filter at all). In many cases, instead of a single exposure in each channel, you take a bunch of shorter ones so you can throw out any bad frames instead of trying to fix them in post (e.g. vibrations smudging things, or your target drifting because you didn't have the scope perfectly aligned). Afterwards you've got a set of HDR raws with the interesting variations in contrast often in several disparate segments of the range: near the top for stars (assuming they aren't all overexposed and blown out) and somewhere in the middle for nebulae or distant galaxies. In cases with even greater contrast spreads you might have exposures of differing durations, to avoid blowing out the brightest objects while still getting the dimmest above the noise floor; in which case you'll have more than the baseline 12/16 bits of HDR to compress (generally non-linearly) into a good image.
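The many-short-sub-exposures trick above is commonly done with a kappa-sigma clip when combining frames. Here is my own illustrative numpy version (not any particular package's code): pixels far from the per-pixel median, such as a cosmic-ray hit in one frame, get masked out before averaging.

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0):
    """Combine repeated sub-exposures of one channel, rejecting outliers.

    frames: array of shape (N, H, W) -- N registered exposures.
    Pixels more than kappa sigma from the per-pixel median (satellite
    trails, cosmic-ray hits) are masked out before the mean is taken.
    """
    frames = np.asarray(frames, dtype=float)
    med = np.median(frames, axis=0)
    sigma = frames.std(axis=0)
    # Keep well-behaved pixels; the epsilon keeps zero-noise pixels in.
    mask = np.abs(frames - med) <= kappa * sigma + 1e-12
    return (frames * mask).sum(axis=0) / mask.sum(axis=0)

# Ten 'exposures' of a flat field, one of them ruined by a cosmic-ray hit.
frames = np.full((10, 2, 2), 100.0)
frames[3, 0, 0] = 5000.0                 # hot pixel in a single frame
stacked = sigma_clip_stack(frames)
```

A plain mean would leave that pixel at 590; the clipped stack discards the bad sample and recovers the true level.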
Images taken with space telescopes will also need streaks from cosmic rays edited out; those taken on the ground may have streaks from satellites, aircraft, or meteors, although unlike cosmic rays these are often rare enough that you can just toss one or two of several dozen frames and ignore them. Excluding some consumer telescope designs, you'll also have diffraction spikes on bright stars from the support holding your secondary mirror. The example in the Adobe article processed them out, but some people prefer to leave them in.
None of this is to say you can't just hang a DSLR off the end of a telescope and still get better results than the pros did with giant scopes 30 years ago (at this point I wouldn't be surprised if even a smartphone would top old film results). But the processing in DSLRs is optimized for different use cases, meaning you're at a disadvantage versus people working with dedicated astro-imaging cameras.
|
I think the "photos" are great, I really do, but I always have to ask myself: what is real, and what was pulled out of the scientists' arses? Just saying.
|
25 years for a book sounds about right. The first generation of digital sensors went to space in the '60s, so by 1990 there had been plenty of time for all the techniques to get developed, although I'm curious whether they'd made their way down to Photoshop from software that makes Adobe licenses look cheap. I'm blanking on the name, but the program I used in a 2002 lab astronomy course cost something like 5 or 10x as much as Photoshop did at the time.
|
Does that astronomy software exist today? Or has it been bought out by the Photoshop guys?
|
It was some sort of very high-end generic image-processing tool, not a specific tool for astronomers; and since I'm blanking on the name, I obviously can't answer the second question.
|
After Googling a few lists of software, I think it might be Mira[^]; the headline feature set (focused on data analysis, not making pretty pictures for the web) is right for the market segment, and the name sounds vaguely familiar. The price for their top-end variant, Mira MX, is around $3k, depending on whether you qualify for the academic discount, and on personal vs. single-user licenses (it's not immediately obvious what the difference is).
|
Wow! Seems to be a great tool for scientists and engineers. What's interesting is a quote from their homepage: "In 1994, Mira users were working with 250MB survey images (and doing smooth, real time display adjustments too) using 486 class PC's." They must have done real programming gymnastics to achieve that!
A few years ago, I had the opportunity to work with iSee[^], an image-analysis tool for non-destructive inspection with a much smaller feature set.
|
Yeah, I'm assuming image chunking and streaming from the HDD were probably a big part of it. A 1024x768 display probably helped a lot too, but even then you probably couldn't hold more than one or two screens' worth of scroll in RAM (16MB max?) at a time.
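The chunking guess above can be sketched with a memory map. This is my own guess at the trick, not Mira's actual code, and the file name and band size are made up for illustration: only the row band currently being processed is resident, so an image far larger than RAM can be streamed through a display lookup table.

```python
import os
import tempfile
import numpy as np

# Stand-in for a large 16-bit survey frame on disk (scaled down here so
# the example runs quickly; the technique is the same at 250MB).
path = os.path.join(tempfile.mkdtemp(), "survey.dat")
rows, cols, band = 4096, 1024, 256

data = np.memmap(path, dtype=np.uint16, mode="w+", shape=(rows, cols))
data[:] = 1000                     # pretend this is the survey image
data.flush()

# 'Real-time display adjustment' amounts to applying a lookup table;
# do it band by band so only `band` rows are ever touched at once.
lut = (np.arange(65536) // 256).astype(np.uint8)   # 16-bit -> 8-bit stretch
out = np.memmap(path + ".8bit", dtype=np.uint8, mode="w+", shape=(rows, cols))
for r0 in range(0, rows, band):
    out[r0:r0 + band] = lut[data[r0:r0 + band]]
out.flush()
```

Changing the "display adjustment" then means rewriting a 64KB lookup table rather than reprocessing the whole file, which would have been fast even on a 486.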
|
We were regularly doing image warping in the early 80s, pasting multiple satellite images together. The data is actually continuous along the north-south orbital path, but the side-by-side seams have to be joined and then converted to a map projection scheme.
I worked at JPL in 1983, after using their software as a state tech-transfer cooperator since 1973. The image-processing software ran on an IBM 350 mainframe, and in about '84 software production moved to a DEC VAX cluster. But that's beside the point; the software was called "Video Image Communication And Retrieval", or VICAR.
http://www-mipl.jpl.nasa.gov/external/vicar.html
"Video" because it came out of the first moon orbital pictures (Ranger), which literally were done in video. In the earliest days we printed on strips of paper and colored different letters with felt pens, then hung the strips on the wall and taped them together to try to visualize what the data contained. I did meet the fellow who did the original pictures of Jupiter, both before the first fly-by and afterwards from real data. His computer was an ancient (by that time) PDP-8 (if I recall correctly) with only paper tape for I/O.
Fond memory of Buzz Slye at Moffett Field who drove a TR3 and still kept his code on card decks in the early 90s. Buzz wrote a series of statistical multi-spectral classifiers as part of the research into computer recognition of agricultural crop types. If he were still with us, his cards would still be readable. I tossed my 9-track 6250 tapes about a decade ago.
|
Really honored to get a reply from someone as well-versed and experienced as you. Lesser mortals like me don't have such first-hand experiences.
Regarding your Jupiter reference, I believe you're referring to the Voyager 1 and 2 pictures. As a just-joining-college student in 1981, I saw a film show (at an institution called the Raman Research Institute in Bangalore, India) on Voyager 2's journey, with close-up shots of Jupiter. Really fascinating for a student like me. The Great Red Spot in particular, seen in slow motion, was mind-blowing. [I had heard that Voyager 2 reached Jupiter earlier than Voyager 1, and that the movie showed pictures taken from Voyager 2.]
VICAR seems to be great software, and they even announce that it is being released as open source; it looks like that has not yet happened. I'm keen to download and study the code: even trying to understand it would be a great educating and enlightening experience.
One question: Before you discarded your 9-track tapes, did you backup the code on a more modern storage device?
|
I was hardly well versed; more like a member of the coding user community, and a rather junior one at that!
The code I worked with starting in the early 80s adapted VICAR IBM batch code to work with an IIS image-processing display system. That system was controlled by a PDP-11/34 with all of 256K of memory and three 5MB removable drives! The IIS machine could store in memory eight 512x512x8-bit images, on nine printed-circuit cards for each image (8K chips). It had hardware lookup tables as well as a pipeline processor for image-to-image math operations. By the 90s that was an updated IIS machine (64K chips, one board per image, 32 image channels) driven by a VAX.
I left that business in '96, and about 10 years later gave my personal archive tape to my former boss, who tried to have the contents read by a contractor; the attempt failed. The code was still on an old MicroVAX, but it went to the landfill around the same time.
Working with code written by the early heavy lifters had a huge impact on my coding career. The IBM version of VICAR was a "language" where a series of commands resulted in a string of image-processing operations to be executed. The JCL controlled the files, the language processor was in assembler, and the individual modules controlled by the executive were all in FORTRAN. As a young upstart I once asked: wouldn't interactive be better, so you could see what you were producing much faster? The answer I got was, "If you know what you are doing and what you are doing is worthwhile, then it is worth waiting for it."
Maybe VICAR was "An elegant weapon, for a more civilized age"
For a text from the early days, see Ken Castleman's Digital Image Processing (http://www.amazon.com/Digital-Image-Processing-Kenneth-Castleman/dp/0132114674[^]); it was pretty cool to be able to take a class from him.
|
As someone whose hobby is astrophotography (and, to toot my own horn a bit, has 4 APODs to his credit), I thought I'd chime in here.
For a bit of background, like the folks at NASA and other professional observatories, I use a monochrome astronomical CCD camera with color filters to take my images. I use a few different software packages for image processing, including Photoshop and others that are specifically designed for astronomical images.
The objects we shoot are extraordinarily faint. Even after hours and hours of exposure time, if you opened a typical image in Photoshop it would appear essentially black. It takes a lot of stretching and massaging just to make the objects visible. As one of my friends put it, we're trying to take a scene shot at night and make it look like it was taken in daylight. All that said, I think it's very safe to say that our goal as astrophotographers is, at least for RGB/true-color shots, to make the images look as real as we can while also displaying as much of the data as possible. This by nature requires a bunch of "selective adjustments" to keep from blowing out highlights when trying to bring out extremely faint stuff: the dynamic range is huge. We also go through all sorts of steps to balance the color so that it's as accurate as possible. Sure, there's artistic license taken (this is about as much of a blur between art and science as you can get), but it's definitely not "fake" or a "lie".
Just my 2 cents...
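The "stretching and massaging" step mentioned above is often done with an asinh curve, a common choice for astronomical images (though by no means the poster's exact workflow; this is just a minimal sketch, with the softening parameter picked for illustration). The curve is roughly linear near zero, boosting faint nebulosity, and roughly logarithmic near one, compressing star cores instead of blowing them out.

```python
import numpy as np

def asinh_stretch(img, softening=0.01):
    """Non-linear stretch for linear data normalised to [0, 1].

    Small values are amplified almost linearly by ~1/softening, while
    values near 1 are compressed logarithmically, so faint detail comes
    up without saturating the brightest stars.
    """
    return np.arcsinh(img / softening) / np.arcsinh(1.0 / softening)

# A faint wisp of nebulosity and a bright star core, in linear units.
stretched = asinh_stretch(np.array([0.001, 0.9]))
```

With these numbers the faint pixel is lifted by more than an order of magnitude while the star core barely moves, which is exactly the "make night look like daylight" effect described above.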
|
Coffee?
|
What else could it be?
The symbols mean: Heaven = fire + water + earth (or: take coffee (earth-coloured) and water, and heat).
And you're up tomorrow!
|
Awesome. I thought they almost looked like a syringe: the most effective way of injecting coffee into your day.
|