I’m getting the impression that some people don’t know what they’re looking at. It says “Background: Two Micron All Sky Survey” which you can read about here [1]. This is not a live image. It’s just showing you where it’s pointing, which isn’t so meaningful for most of us. (It’s random-looking stars.)
You can read interesting details about the current observation at the top, though. Currently:
* A census of high-redshift kpc-scale dual quasars
* A 49-minute, 55-second observation.
There’s a link to the research proposal [2].
Apparently it’s a six-month survey of dual (possibly lensed) quasars. Gravitational lensing can cause magnification, distortion, and duplication of the image of whatever is behind it, so this is a way to learn more about very distant (early) quasars.
A quasar is an actively accreting supermassive black hole at the center of a galaxy, so this seems like a way to take lensed pictures of a lot of early galaxies?
[1] https://en.m.wikipedia.org/wiki/2MASS
[2] https://www.stsci.edu/jwst/science-execution/program-informa...
Webb won't ever provide a recent view since the images belong to the research group and are only made public when the results are released.
Also, Webb can't provide a video feed since it is taking long exposures, and probably only sends results back to Earth when the image is finished. The images are in the infrared and only look good when processed.
Ehh. I personally can’t stand all the post-processing folks love to do to make their results look “magazine ready”. I think the most minimal transformation possible to map the data into 0xRRGGBB would look best, ideally with a simple standardized algorithm that doesn’t allow for any “artistic license”.
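To make that concrete, here’s roughly the kind of minimal mapping I mean; just a sketch in Python/NumPy assuming a single-band array of raw counts, not anything any actual pipeline uses:

    import numpy as np

    def to_grayscale_rgb(counts):
        # Linearly rescale raw counts to 0..255, then copy the result into
        # R, G and B so every pixel is just a gray level: no curves, no
        # per-channel tweaks, no "artistic license".
        counts = counts.astype(np.float64)
        lo, hi = counts.min(), counts.max()
        scaled = np.zeros_like(counts) if hi == lo else (counts - lo) / (hi - lo)
        gray = (scaled * 255).astype(np.uint8)
        return np.stack([gray, gray, gray], axis=-1)  # (H, W, 3) RGB array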
It would be nice if they stopped with the false color and just scaled it to whatever color an astronaut might see from that point of view.
Besides the wavelengths being outside of human perception, an astronaut wouldn't see anything anyway due to the low photon flux. These pictures have very long exposure times.
Given that these images are infrared, that wouldn't be much.
You have two choices for these images: false colour or black and white. Anything else is false.
Just bring it up in an editor and drop the saturation to zero. That will take it back to a luminance-map image.
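Roughly, in Python with Pillow (the filename is just a placeholder; Pillow’s “L” conversion uses luminance weights, which isn’t exactly the same math as an editor’s saturation slider, but the result is comparable):

    from PIL import Image

    # Take a published false-color JPEG down to a single luminance
    # channel and save it back out as grayscale.
    img = Image.open("webb_release.jpg").convert("L")
    img.save("webb_release_gray.jpg")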
IR pictures wouldn't be RGB, just black/white (or whatever palette you like). But yes, it would be possible. For example, FLIR thermal cameras can also output the image as a spreadsheet, and you can even choose whether the values are temperatures (which are calculated) or just energy (what the sensor actually receives).
Light isn’t RGB. We just have receptors that react to certain wavelengths. I suspect the sensors on the telescope have a range of wavelengths they are sensitive to. It would be a straightforward translation to map that range onto the visible spectrum without varying the relative intensities to accentuate certain aspects of the image.
That's not how vision works. You see an extremely post processed image that's extremely far away from the original light that hit your retinas. There's nothing at all privileged about shifting something into the visible spectrum directly and seeing junk. You're just making an image that your visual system isn't good at understanding. It's not pure, it's garbage. You would hallucinate things that aren't there, you would miss obvious things that are there, etc. For you to really comprehend something the transformation needs to be designed.
The IR may be in a very narrow band. The visible wavelengths have different colors because there is a range of wavelengths that correspond to the different cones in our eyes, which roughly match red, green, and blue sensors. If you shift the IR frequency up into the visible range, you would just get a luminance image (like grayscale) centered on one visible wavelength, like red.
False color imaging sometimes applies colors to different luminance levels or sometimes it takes multiple images at different wavelengths and assigns RGB values to each of those wavelengths. The results are informative but require some editorial / aesthetic decisions to produce the best results.
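A sketch of the second approach (channel assignment) in Python, assuming three already-aligned single-filter images; even the percentile clip here is one of those editorial decisions:

    import numpy as np

    def false_color(long_wave, mid_wave, short_wave):
        # Chromatic ordering: the longest-wavelength filter image goes to the
        # red channel, the shortest to blue. Each band is stretched independently.
        def stretch(band):
            band = band.astype(np.float64)
            lo, hi = np.percentile(band, [0.5, 99.5])  # clip outliers
            if hi <= lo:
                return np.zeros_like(band)
            return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
        rgb = np.dstack([stretch(long_wave), stretch(mid_wave), stretch(short_wave)])
        return (rgb * 255).astype(np.uint8)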
Everything Webb sees is infrared light, which is invisible to humans, so you have to do some processing to make them look good.
Yes, that is what the map does: convert values from one domain to another. That is the purpose of mapping. The point is to make it as simple and consistent as possible.
Are they not doing that? What is the origin of the idea that they sit on the images too long choosing a custom wavelength conversion formula? Is it just that their images look good?
I encourage you to try some raw file photography and processing. A bitstream from a sensor is not an image and there is no “correct” or “accurate” image from a captured signal.
Think of it like using a linear or log scale for a chart axis: neither is “more correct”, neither is taking “artistic license”.
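For example (made-up pixel values): a linear stretch buries the faint pixels near black, a log stretch spreads them out evenly, and neither one is the “true” picture.

    import numpy as np

    counts = np.array([10.0, 100.0, 1000.0, 10000.0])    # made-up raw pixel values

    # Linear stretch: the three faintest pixels end up almost black.
    linear = counts / counts.max()                        # 0.001, 0.01, 0.1, 1.0

    # Log stretch: the same pixels are spread evenly across the display range.
    logged = np.log10(counts) / np.log10(counts.max())    # 0.25, 0.5, 0.75, 1.0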
Poor example, given many photographers shoot raw precisely because it gives them more room for artistic decisions in post. Obviously the standardized algorithm should have basic factors like gamma and general phase shifting incorporated, but the idea of being able to adjust the map's delta between arbitrary adjacent inputs is of questionable benefit to the community. It's akin to adjusting levels via curves with many points, and it'd be hard to argue folks aren't taking artistic liberties when they do that.
That's what they do already. Each wavelength that comes from different atoms gets a different color.
To expand on this, Webb uses the Deep Space Network (DSN) to communicate with us. It can’t stream data back 24/7. There are generally three contacts per day each lasting a few hours, but I believe this is dependent on the scheduling of contacts with other missions that also use the DSN.
Also, the science data that is sent back is a stream of packets from all the data that was taken since the last contact. The packets are arranged for efficient transmission. One of the first steps of the science data processing is to sort the packets into exposures. Often packets for an exposure are split among multiple SSR (solid-state recorder) files. Sometimes there are duplicate packets between SSR files (data sent at the end of a contact is repeated at the beginning of the next contact). Only when the processing code determines that all expected packets are present—by using clues from other subsystems—can the next step (creating the uncalibrated FITS) begin.
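A toy sketch of that sorting/dedup step in Python (field names like exposure_id and sequence_count are made up, and the real pipeline decides completeness from other subsystems rather than a simple gap check):

    from collections import defaultdict

    def group_packets(packets):
        # Packets arrive interleaved across SSR files, possibly duplicated,
        # and get regrouped per exposure in sequence order.
        exposures = defaultdict(dict)
        for pkt in packets:
            # Keying on sequence_count means a packet repeated at the start
            # of the next contact just overwrites its earlier copy.
            exposures[pkt["exposure_id"]][pkt["sequence_count"]] = pkt["payload"]

        complete = {}
        for exp_id, parts in exposures.items():
            expected = max(parts) + 1          # assume counts start at 0
            if len(parts) == expected:         # no gaps -> ready for the FITS step
                complete[exp_id] = b"".join(parts[i] for i in range(expected))
        return complete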
If anyone is interested in more details, the packet stuff is based on standards from the Consultative Committee for Space Data Systems (https://public.ccsds.org/Publications/BlueBooks.aspx).
Is it possible to put an antenna on a roof and capture this data?
Some of it, though no single location is always able to point in the right direction.
The signal is also really weak, coming from about 30x the distance to geostationary orbit, and the link is set up for the DSN, so you need a large parabolic antenna.
https://www.nasa.gov/directorates/somd/space-communications-...
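Back-of-the-envelope on the 30x figure above, taking it at face value:

    import math

    # Received power falls off with the square of distance, so the same
    # transmitter heard from ~30x farther away is ~900x weaker (~30 dB down).
    ratio = 30
    power_factor = ratio ** 2                            # 900
    extra_path_loss_db = 10 * math.log10(power_factor)   # ~29.5 dB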