
NASA engineers make progress toward understanding Voyager 1 issue

tetris11
60 replies
4d1h

The availability of skills is also an issue. Many of the engineers who worked on the project - Voyager 1 launched in 1977 - are no longer around,

Can you just imagine that? You wrote software in your 30s, and then 50 years later your grandchildren have to come visit you in the old folks home to ask you:

"Grandad, why did you write this goto statement at line 1892? Our AI thinks it might have to do with avoiding a hardware issue?"

to which you then reply, "my dear, even if you asked me one year after I wrote it, I would not be able to tell you."

kstrauser
10 replies
3d23h

See also: engineers of the B-52 bomber. It first flew 71 years ago and is expected to have another 30 years left.

kstrauser
4 replies
3d22h

Well, my grandpa didn't know jack about computers, but judging by the quality of some of the stuff I've had to work on in the past, I very well could have been maintaining code he wrote.

But yeah. In “A Fire Upon the Deep”, Vinge talked about archaeological programmers. There’s no doubt in my mind we’ll reach that point. “Tell me again why time_t is only 64 bits?” “Pull up a chair while I dig out the LKML archive. You know, this was originally stored in electric fields, if you can believe it!”

acheron
3 replies
3d22h

Fire up the subspace ansible and create a holodeck room for us to talk about it. You can't name the holodeck program "CON", "LPT1", "PRN", or "AUX", though, because those are MS-DOS device names. ( https://news.ycombinator.com/item?id=37076523 )

myself248
1 replies
3d21h

To this day, I use

COPY CON FILENAME.TXT

to make a quick-n-dirty note of something without leaving the terminal.

kstrauser
0 replies
3d21h

I use `cat > foo.txt` all the time for the same reason.

Arrath
0 replies
3d19h

There's an idea for another holodeck mishap episode right there.

The ship's replicators start spitting out Pac-Man ghosts which quickly overrun the passageways, a la Tribbles. In the end it turns out Ensign Crusher attempted to pipe his holodeck game to PRN, and the ship's computer mistook that for a command to begin (3D) printing.

bee_rider
3 replies
3d22h

I bet they’ve replaced most of the computers by now, though.

kstrauser
2 replies
3d22h

Probably, but why’s that spar here? Why did they route that wire around the screw, and why doesn’t the 1960s bomb sight work right if the wire’s only twisted 3 times instead of 4?

jamesy0ung
1 replies
3d21h

Bomb sights aren't used anymore. The weapons used by the B-52 are all GPS-guided; the computer on board simply gives a timer for the pilot to press the pickle button, and the weapon then self-guides to the target.

kstrauser
0 replies
3d21h

That sounds believable, but I bet they still have lots of surprisingly old equipment that works too well to bother replacing.

jacobriis
9 replies
4d

Can you just imagine that?

No this is not a plausible scenario.

dotps1
2 replies
4d

I mean, this stuff does happen.

There is an old electric station near me that is used for various things sometimes. Some band was in there shooting a music video and bumped something and somehow the whole area started filling with water. Nobody could stop it.

The government, the water company, everyone was struggling to figure out what to do, and they decided to call the old guy that used to work there. He was in his 90s but he told them how to fix everything.

flagged24
1 replies
4d

I would love to read the full story on this.

chgs
0 replies
3d18h

If it doesn’t end up with an itemised bill of $1 to turn off a stopcock and $999 to know where it is, I’d be disappointed.

Vecr
2 replies
4d

Because you never write gotos? Many embedded and kernel programmers still do. I think it makes sense in general.

williamcotton
1 replies
3d23h

It is known as error handling. Some languages renamed the practice to try/catch. Others added a Result type.

trealira
0 replies
3d22h

Likely you know this, and you're just being funny, but the try/catch statement is more like setjmp/longjmp in C. The Result type in Rust is syntactic sugar for integer return codes, returning early on errors, and tagged union structures. And where C programmers use goto statements, C++ programmers use destructors, and Rust programmers use the Drop trait. Walter Bright also says that nested procedures in D eliminate most use cases of goto for him.

You can always avoid goto in C, but the result usually either has excessive if-statement nesting, uses boolean flag variables in loop conditions, or uses structures to create state machines, and these are usually just uglier and more error-prone than the equivalent version using goto. The same applies to avoiding break, continue, and early returns.

xcv123
0 replies
3d22h

No this is not a plausible scenario.

COBOL exists. Billions of lines of COBOL still in production today. The scenario is already happening now.

thebruce87m
0 replies
4d

plausible scenario

I'm not sure what you are calibrating against, but I feel like the last 20-odd years are full of absolute batshit crazy stuff that doesn't make sense, and this seems rather tame.

hathawsh
0 replies
4d

I wouldn't dismiss that scenario. It seems plausible that the documentation is so extensive that it takes time and effort to answer some questions. It might be easier to just ask the authors, if they're still around.

supportengineer
7 replies
4d

I know a developer in his 70's who maintains code he wrote in his 20's

bloomingeek
5 replies
3d23h

Going out on a limb here, wouldn't code written that long ago be almost impossible to hack with modern tools? (not to mention still using the old hardware.)

HeyLaughingBoy
2 replies
3d22h

One of the Best Practices for maintaining software that you expect to support for decades involves archiving the development tools used.

Of course, finding a platform to run it on could be a problem. Also, the License Server that so many proprietary tools need before they'll run.

myself248
1 replies
3d22h

I remember hearing about some hardware made for the government (military?) where the development tools ran on some 1970s minicomputer that had long since ceased to function, but it was no problem because in the 80s they wrote an emulator that ran on 68k, specifically an Amiga since that's what someone had sitting around the lab at the time. Then in the 2000's they realized the supply of Amiga parts wouldn't last forever so they bundled the whole thing up to run in an Amiga emulator that ran on whatever version of Windows was current at the time. Then 16-bit Windows ceased to be a thing, so....

supportengineer
0 replies
3d21h

This is pretty close

wredue
0 replies
3d23h

I’m not sure why that would be the case. If anything, I’d expect earlier code to be easier to hack.

marcus0x62
0 replies
1d19h

The timeframe described would work for something written for a System/370, no? That stuff, along with the tooling, will run all day long on a current Z Series.

taylorfinley
0 replies
4d

I would love to hear more about this if you can share any details

zoeysmithe
6 replies
4d1h

My understanding is that it's very well documented, but the funding isn't there to do much with it. I mean, most of the team behind things like the Atari computers, Acorn, Commodore, early Apples, etc. are retired, but we can emulate them and understand these things on a very technical level. I know this isn't a great analogy, but ultimately the age of the project or the age of the original team members isn't the bottleneck.

I sometimes see retro projects on Hackaday done by people who could be the grandchildren of the original designers of those vintage chips and OSes. They probably know more about that chip or OS than the original people do now, just because the latter have been out of the game for so long. It's the same way people regularly lose to fifth graders on tests because they don't recall 5th-grade science and civics. If anything, your scenario might be the opposite! Grandpa might be asking his granddaughter how those registers worked or how to emulate his OS from 1982.

I remember reading about the team that maintains the Voyagers. It's a skeleton crew using legacy equipment to keep the communication going, with the assumption that the next decade, or even earlier, is going to be it.

NASA has the same problem the private sector has. Capitalism rewards things that will generate profit/prestige, not legacy cost centers, and NASA is not immune from that dynamic, as NASA employees and managers want to maximize their income and prestige too. So the people maintaining or bug-fixing old products are often lower on the prestige, pay, recruiting, and profit spectrum than those chasing new things. They get the skeleton-crew funding and can't do novel things, even when technically possible, because of the lack of staff and buy-in.

Passion projects make for feel-good documentaries like 'It's Quieter in the Twilight', but ultimately, if society isn't invested in these teams, their hands will always be tied.

jcadam
2 replies
3d23h

A young engineer must know that getting stuck on a legacy project is a career-limiting move. It happened to me when I got stuck on an old Ada codebase for a legacy aerospace system (unemployed in 2008, you take whatever job you can get), and it took years, and a good chunk of my spare time doing projects and studying, to find a company willing to give me a chance working on something modern again.

Now, someday when I'm in my 60s/70s, and you have some legacy system written in some defunct language nobody under the age of 50 has any experience with, sure, I'll do it. But it'll cost you.

trealira
0 replies
3d22h

Why do companies do that? Shouldn't it be enough that you have experience contributing to a large project long-term and you can program decently? Is it that modern programming just requires different/additional programming skills compared to most old codebases, like writing asynchronous code?

robocat
0 replies
3d22h

I don't hire, but that story would make me want to hire you. It takes certain skills and pig-headedness to successfully work on old software!

I once asked a company to rehire me, saying that I only wanted to do software maintenance work (I wanted low stress). I am good at it, and it's hard to find people who want to do that work. They didn't rehire me: although the manager really wanted me back, his idiot boss had taken it personally when I quit. The idiot boss later got ousted, to their dismay. I shouldn't enjoy that, but I did!

ngcc_hk
1 replies
3d23h

I hate it when people just randomly throw in "capitalism." Is communism better at maintaining old code … does that even exist anymore? Not to mention that most of us here aren't in it for the money; we're just in awe of what can be done in a computing job we love to do, with this kind of legacy.

Capitalism doesn't really exist as a monolith. It's money, incentives, jobs, humans … or the love of hacking.

quickslowdown
0 replies
3d23h

I'm going to just outright dismiss this comment the way you're dismissing people talking about capitalism, a very real economic system we're all living under, since it's ridiculous on its face.

justin66
0 replies
3d22h

The thing Voyager has going for it is, it's a small team. JPL is laying people off all over the place lately.

I'm not sure it's really "society" that is responsible for these funding difficulties; it's the politicians. If you ask a random member of the public how much of the federal budget is allocated to NASA, they'll generally give you a percentage that's wildly higher than the actual figure.

linsomniac
6 replies
4d

I'm kind of surprised, what with all the retro computing emulation software and skills I see getting posted here, that we don't have a Voyager emulator scene and a group of people standing by to run scenarios and propose fixes.

bilegeek
2 replies
3d23h

Not that I'm skillful enough to make an emulator, but years ago I tried to find technical documents on the Honeywell HDC-402 used in Voyager and Viking (it's also apparently not related to the DDP-516 or 316, AFAIK). There's SOME information[1], but not enough to make an emulator from what I've found.

Aside from the lack of schematics or listings, there was the problem of the assembler being incomplete!

"One problem Lander software developers had was that no adequate assembler was ever written for the computer, perhaps because of the changing nature of the instruction set.(110) Patches had to be hand-coded in octal, with many jumps to unused memory space because of the lack of an assembler with relocatable addressing." p.174 on Viking, which used the same computer

[1]https://ntrs.nasa.gov/api/citations/19880069935/downloads/19... (page 174 onwards)

makomk
1 replies
3d22h

As far as I can tell the HDC-402 wasn't even the Viking computer that got reused on Voyager anyway - it was something NASA developed in-house from the orbiter side of the Viking mission and that presumably has even less documentation available.

bilegeek
0 replies
3d21h

You're confusing the 24-bit HDC-402 on the lander with the custom 18-bit computer on the orbiter (p.164), which had its origins in the sequencer for the Mariner missions (p.152-154).

Though given the apparent level of NASA involvement in the 402's design, and the lack of evidence I can find for use outside of NASA, it might as well be called custom.

tetris11
1 replies
3d23h

DarmokJalad1701 below mentions hearing a podcast where new engineers did work on VMs

ngcc_hk
0 replies
3d23h

Any details? We have people working on a dead moon lander… why not a live Voyager 1/2? If, sadly, one is dead, the other still has some life left.

hathawsh
0 replies
4d

That would be very interesting. Someone please tell me such a community exists somewhere.

HeyLaughingBoy
4 replies
3d23h

I'm going to be a humorless pedant and reply by saying that with the level of forward and backward traceability required of aerospace software, it's unlikely that such a level of misunderstanding could occur. But maybe DO-178 wasn't a thing back then.

That's just from what I've heard, though. I do medical devices. I'm told that my aerospace counterparts have it even tougher than we do.

sho_hn
3 replies
3d22h

I do medical devices.

Therac-25 happened in 1982 and changed that industry (and safety engineering in general) quite a bit, no?

HeyLaughingBoy
2 replies
3d22h

Yes, but I don't get your point.

malfist
1 replies
3d22h

Voyager launched in 1977, well before the Therac-25

HeyLaughingBoy
0 replies
3d20h

OK. And?

pkaye
1 replies
3d23h

There is a nice documentary about what remains of the Voyager project team. The video is not free however.

https://www.itsquieterfilm.com/

sho_hn
0 replies
3d22h

I watched that a couple of months ago and loved it. It does a lot to humanize that team, I've been thinking of them during the recent news cycle.

jdminhbg
1 replies
4d

Can you just imagine that? You wrote software in your 30s, and then 50 years later your grandchildren have to come visit you in the old folks home to ask you...

Yes, I have nightmares like this all the time.

LeifCarrotson
0 replies
4d

If only it was merely nightmares and not my day job.

ceautery
1 replies
4d

I never thought of this before now, but tech from 1977 might not even be using ASCII.

cadr
1 replies
4d

No, but I did find myself in the mid 90s as an intern trying to figure out code from the early 80s. It was really tricky code to get ECG processing working on some very small memory footprint or something. The name of the woman that wrote it was at the top of the file, and I looked her up in the HP company directory and she still worked there. As a hail-mary, I emailed her and asked a question about it. And she immediately got back to me with the answer. So, there are some engineers out there that do remember this sort of thing.

HeyLaughingBoy
0 replies
3d22h

LOL. I remember being asked to consult for my last employer on a problem they had while attempting to port some software that I had written almost 20 years earlier. It's amazing the random details that you remember when your memory is jogged.

wredue
0 replies
3d23h

It’s funny which code I remember and which code I don’t.

There’s code I wrote 10 years ago at work that I still remember and can point out exactly where everything is. Then there’s code I wrote last week that is completely gone.

selimnairb
0 replies
4d

I would use my old person privilege to berate you for using "AI" to "understand" the code rather than getting your soft hands dirty.

eichin
0 replies
3d19h

I (and at least a couple thousand other people) am still running code I worked on in 1987. Maybe we should do something for the upcoming 40th anniversary...

aardvark179
0 replies
3d22h

I have definitely had to do archaeology on 20+ year old code, including disassembly to work out what the actual production code was when it clearly wasn't the version we had source for, and that's now 30+ year old code which I expect will still be in production at least a decade from now.

Equally I’ve seen COBOL compiled to new platforms because it has outlived all the systems it ran on.

I’m pretty sure there must be areas of Java, or C++’s standard libraries that haven’t changed for a very long time, and will continue to be used for decades.

The thing is, it’s often easier to just figure out the code, or rebuild the whole underlying platform, than to track down the original author and hope it wasn’t a Friday afternoon commit.

subract
28 replies
4d2h

For those interested in learning more about the challenges in keeping the probes alive, the 2022 documentary It's Quieter in the Twilight follows the small, incredibly dedicated team working the project. Free to stream on Prime.

The simple fact that many of the original engineers are no longer alive presents significant challenges in and of itself.

Solvency
17 replies
4d

I fundamentally don't understand why this project is seemingly so poorly documented. I've read articles describing the current team still having to reverse-engineer things by scrutinizing random documents and sketches, as if it's still this very unknown system.

xenadu02
6 replies
3d23h

The project was not really designed to reach interstellar space originally. It was a somewhat rushed program to take advantage of the "Grand Tour" where the gas giants would all be aligned enough that a gravity-assist orbit could allow a spacecraft to fly by each of them. The alignment in question only happens every 175 years.

The interstellar portion was an add-on after the success of the original mission. The spacecraft were still operating so why not just keep operating them?

No one designing or building the probes imagined they'd still be operating 50+ years later. Even if they had, space programs are constantly under threat from budget cuts, so you can't exactly waste money on what-ifs for the future: you must focus on making the official mission succeed.

Also remember that the "desktop PC" was not yet a thing when this was designed. Engineers were drawing everything on paper. Storage space was extremely expensive in any case.

A modern program would (and most do!) put various versions of drawings in a version control system. Source would use an SCM so code history would be available. Even things like meeting notes would be available and searchable digitally.

RajT88
2 replies
3d20h

I have found that, even with the group I'm in being documentation-heavy, it's hard to read through everything and build the same context that another engineer has all stuffed into their head.

As you mention:

available and searchable digitally.

Even with 100% everything written down, it takes a while to build up that context, and even carefully written documentation can have subtleties which send a consumer the wrong way.

Things are a lot easier than they used to be, but still not easy-easy.

cduzz
1 replies
3d14h

Documentation is extraordinarily difficult to create. You need to anticipate the potential questions and answer them and anticipate all the potential perspectives and answer them from that perspective.

What is enough? A reference describing all of a thing? The source code to a thing? The source code and build chain to make the thing? The source code, build chain, source code to the build chain to build the build chain to build the thing? The source code and the machine and the tape drive to read the tapes to build the ....

How much documentation for TOPS-10 would you need to implement wireguard on a toad? How much context do you need for that sentence to even make any sense at all?

RajT88
0 replies
3d3h

Yep, and the example I like to go back to is API docs, or command line docs.

One I was reading this week had a field for "language", but didn't clue you in to the valid values by providing a working example. English, for example, could be:

English

English (US)

en

en-US

eng

1033

409

...And that's just for the US dialect of English. And also doesn't tell you if any of this is case-sensitive or not! Sometimes yes, sometimes no.

0cf8612b2e1e
1 replies
3d21h

Shouldn’t this software archeology have been done decades ago?

Someone
0 replies
3d17h

What would be your argument for getting budget to do that?

There are zillions of ways this hardware can break down. You can't predict which ones you'll have to handle, or whether there will be a way to recover from them. If you had started researching this 30 years ago, and then something had killed the spacecraft 20 years ago, that would have been wasted effort.

Also, in the early years, they still could ask the original engineers, and even lacking those, there likely were engineers who hadn’t worked on this specific hardware but were somewhat familiar with this kind of hardware.

renhanxue
0 replies
3d19h

The project was not really designed to reach interstellar space originally.

Not only that, Voyager 2's flyby of Uranus and Neptune in the late 1980's was originally not intended either. As an aside, to this day Voyager 2 remains the only spacecraft to ever have visited either planet, and there are no firm plans for a followup, just some loose ideas about maybe launching something in the mid 2030's. Anyway, doing the Uranus/Neptune part of the mission required extensive software upgrades, which introduced Reed-Solomon error correction and image compression capabilities, among other things - the software as launched would not have been capable of a meaningful mission to Uranus and Neptune.

These days the Voyager program is lauded as an astonishing feat of engineering and one of the most inspiring science and engineering achievements of all time, but in the early 1970's the entire idea was NASA's red-headed stepchild and ended up cut down to a bare minimum. The Grand Tour mission concept (taking advantage of the extremely rare opportunity to visit Jupiter, Saturn, Uranus and Neptune in a single mission) was pitched as early as 1965, and by the early 1970's there were plans for launching four spacecraft, two bound for Jupiter-Saturn-Pluto and two bound for Jupiter-Uranus-Neptune. These were referred to as TOPS, Thermoelectric Outer Planets Spacecraft. But then people started complaining that it might cost a billion dollars (Apollo had cost $25 billion) and the whole thing became intensely political. Quoting from Voyager: The Grand Tour of Big Science (https://www.nasa.gov/history/SP-4219/Chapter11.html) by Andrew J. Butrica:

Further complicating matters was Senator Clinton P. Anderson (D-NM), champion of the Los Alamos nuclear weapons laboratories and an enthusiast, until his retirement in 1973, of the development of a nuclear rocket engine called NERVA. As chair of both the Senate Aeronautical and Space Sciences Committee and the joint Atomic Energy Committee, Anderson provided NASA and the Atomic Energy Commission over $1.4 billion, about $500 million of which was spent in Los Alamos, for the development of the NERVA engine, which, Anderson held, was ideally suited for exploration of the outer planets, as well as for more advanced missions. Anderson worried that NASA and the OMB were shifting money from NERVA to fund Grand Tour. When the NASA budget came before Anderson's Aeronautical and Space Sciences Committee on May 12, 1971, his committee voted five to two to reduce Grand Tour's budget, while an amendment to increase NERVA funding passed. Werner von Braun worried that ardent congressional interest in NERVA would force a loss of Grand Tour in favor of a NERVA that had "no place to go."

Meanwhile, NASA was trying to include Grand Tour as a new start in its 1972 fiscal budget. The Friedman report moved the Office of Management and Budget (OMB), in March 1971, to ask NASA to study simpler, less costly spacecraft alternatives to TOPS. The OMB also attempted to delay the Grand Tour start-up to fiscal 1973.

(...)

As NASA prepared its fiscal 1973 budget, rumors spread that the "budget pinch" was going to affect planetary programs deeply and that the reduction of the Grand Tour payload from 205 to 130 pounds was "a likely fact of life." Furthermore, Grand Tour now began to compete for funding with the latest NASA human program: the Space Shuttle. The fiscal 1973 budget request NASA submitted to the OMB on September 30, 1971 included both Grand Tour and the Space Shuttle. Throughout the autumn of 1971, several press reports presciently reported Grand Tour's vulnerability to a possible elimination or reduction. On December 11, 1971, James Fletcher, NASA administrator since April 27, 1971, learned from White House officials that Nixon was prepared to approve the shuttle program and that Nixon would not let NASA simultaneously fund the shuttle and the full TOPS Grand Tour in the 1973 budget or in subsequent fiscal years. Fletcher had to decide which was more important: Grand Tour or human flight.

Fletcher chose the shuttle, and what could be squeezed into the budget was an extension of the Mariner program to visit Jupiter and Saturn only. For budget reasons the spacecraft development was kept in-house at JPL rather than contracted out, and at JPL the dream of the full Grand Tour was still alive:

Despite the limited aim of the Mariner Jupiter-Saturn, the mission had the Grand Tour launch window, that rare planetary alignment, and the engineers at JPL still had every intention of building a spacecraft that would last long enough to visit Uranus and Neptune. This intention was not emphasized; however, it was stated that a Mariner Jupiter-Saturn spacecraft might continue to Uranus if its mission at Saturn proved successful. The scientists working on the project knew that Mariner Jupiter-Saturn was going to go to Uranus and Neptune, too. As Bradford Smith, Leader of the Imaging Team, explained: "We understood at the time the enormous potential of this mission, that it could very well be one of the truly outstanding if not the most outstanding mission in the whole planetary exploration program."

Also for budget reasons, the spacecraft were limited to mostly reusing existing technology. Getting reprogrammable computers (without which they could never have been kept alive in the way they have) required a separate budget grant from Congress:

Despite the reliance on extant technology, some money was set aside to develop new technology. Congress and the OMB approved an additional $7 million to the Mariner Jupiter-Saturn appropriation for scientific and technological enhancements. Part of that appropriation went to develop a reprogrammable onboard computer, which proved vital to maintaining Voyager 2 as a functioning observatory in space. Without properly functioning hardware, no science could be conducted.

In the end only Voyager 2 was launched on the full Grand Tour trajectory that would allow visiting all of Jupiter, Saturn, Uranus and Neptune; Voyager 1 was launched on an easier and much faster trajectory that would take it only to Jupiter and Saturn. Even then, the official decision to extend the Voyager 2 mission to Uranus was only approved in 1980.

Human spaceflight and its enormous appetite for money has always been a huge threat to actually exploring the solar system beyond Earth orbit, and we should be very glad we got even the very diminished Voyager program that exists today.

TheCondor
2 replies
3d19h

When was the first source code control system released? SCCS was released around 1973, and the Voyager code was probably pretty much buttoned up by then, with whatever practices they thought were stable state of the art at the time. I imagine this was a collection of "golden tapes" or something. Now the concept of revision control seems pretty self-validating, but you're talking about undergoing a culture change on your software team pretty close to launch.

Then there's the fact that the Voyager hardware was bespoke.

We just live in a different world now; they didn't know how to do software engineering like we do. They were just figuring it out. I really don't know the history of it, but the Voyager systems may have been developed on punch cards. The original source code might be physical for parts of the system.

zilti
0 replies
3d18h

they didn't know how to do software engineering like we do.

Yes, luckily. If they did, it would have broken after four years, and would have needed a second nuclear battery due to the inefficient code.

renhanxue
0 replies
3d18h

The Voyager software was updated repeatedly and significantly in flight while the spacecraft were still in their early years. They were very intentionally designed to be patched over the course of the mission. The software as launched was not capable of a meaningful mission beyond Saturn, because for budget reasons that was officially not on the cards at launch (the Voyager name came very late; the program was officially "Mariner Jupiter-Saturn" for a long time). Features like image compression were added in the early 1980's, after Voyager 2's mission extension to Uranus and Neptune had been approved.

Without any inside information on the program, I would expect that a lot of development has been done more or less ad-hoc over the decades, as budgets have allowed and operational requirements demanded.

toast0
1 replies
3d23h

If they have random documents to scrutinize, doesn't that mean that it's documented?

When I work on undocumented systems, it's because someone wrote code with no design docs, no (retained) notes, no requirements, no specs, and it's been determined that it doesn't work right. All I have is the code, and current observations.

liquidpele
0 replies
3d20h

Haha look at this guy, thinking you can trust the docs. ;)

nonethewiser
0 replies
4d

I imagine it's a problem of distance, feedback, and lack of any analogous test environment.

mardifoufs
0 replies
3d20h

I think it's because it was more of an awesome moonshot project that didn't really fit into NASA's shifting goals at the time, with the shuttle overshadowing everything else that happened then. No one was really expecting this much from the probes.

anigbrowl
0 replies
3d19h

At the time engineers imagined it having a relatively short operating life, and (imho) also thought we would be putting out a lot more probes. During the Cold War space exploration provided both prestige and a technological proving ground. After the USSR fell, a lot of Congressional enthusiasm for space projects diminished and management became increasingly risk-averse because budgets were much tighter.

KineticLensman
0 replies
3d22h

I remember reading about the inquiry into the UK RAF Nimrod aircraft that came down in Afghanistan in 2006 killing its crew of 14 [0]. A significant finding was that recovering the design history and maintenance records involved trawling a massive number of filing cabinets / cardboard boxes scattered in sites across the UK, and was a significant cause of the missed opportunities to uncover the design flaws and near misses that preceded the crash. (long story short: internal fuel leak near a very hot exhaust pipe)

[0] https://en.wikipedia.org/wiki/2006_Royal_Air_Force_Nimrod_cr...

Johnny555
0 replies
3d17h

I'd imagine that a lot of the undocumented stuff was for things that were obvious to an engineer at the time -- I doubt many engineers working on it thought that the probe would outlive their own lifetime.

I've run into lots of software comments in legacy code that refer to features or systems the company used to have that were deprecated years ago and are nearly meaningless today. Knowing that a flag was set to match the flags from the WOPR system isn't that useful when WOPR hasn't existed since before I joined the company.

dylan604
3 replies
3d19h

Thanks for the reco. I would have never found this browsing Prime hidden in all of their FreeVee push.

chgs
2 replies
3d17h

I stopped my prime subscription after years (decade+?) because they decided to double dip and put in adverts. Such a shame. They’ve lost £360 so far from me based on my normal Amazon spending for their decision.

pbnjeh
0 replies
3d6h

It has to start somewhere, and for many of us, that somewhere is in individual decisions.

Interacting with Amazon increasingly feels like being worked over by a seedy breed of con artist. I just missed a good price (circa 25% off the normal price) because -- best guess -- a driver failed to deliver, and now my order is stuck in their "running late" limbo that will see it eventually cancelled. It's hardly the first time this exact scenario has cost me an abnormally good price, and no call to customer service can fix it. In fact, "the driver missed the delivery" is the explanation I've read somewhere, but the unerring correspondence with very favorable pricing leaves me feeling suspicious.

I commend your decision, and your awareness of its impact.

dylan604
0 replies
3d16h

Oh Nooooo! Bezos won't be able to make his yacht payments now!! Won't you think of the starving billionaires before making your kneejerk reactions. Nobody wants to be humiliated and drop out of the 3 commas club

andyjohnson0
3 replies
3d21h

Looks fascinating, but the only place I can find to watch it here in the uk is Prime Video. Does anyone know of legal options for those who don't want to give Amazon their money - still less sign up for a Prime subscription?

mbirth
0 replies
3d15h

Eh, even Prime Video shows “This video is currently unavailable to watch in your location.”

noelwelsh
0 replies
4d1h

Those ancient Sun machines take me back a bit!

piokoch
23 replies
4d5h

Voyager 1/2 is a peak of human achievement in the 20th century. A piece of hardware, run by a computer with about the power of today's car keys, has been flying through space for over 40 years, now outside the Solar System, delivering priceless information.

People do not realize how amazing the engineering must be.

One can watch https://en.wikipedia.org/wiki/The_Farthest to fully appreciate what all those great men did and are still doing.

detourdog
19 replies
4d1h

The inverse is also true. This demonstrates how underwhelming current engineering is.

class3shock
7 replies
3d21h

I think most engineers from back then would be astounded with the engineering of today, not underwhelmed.

detourdog
6 replies
3d20h

I could see them being impressed with the human achievements. I have doubts that they would be impressed by the engineering. Everything I see looks more like the result of human hours spent.

Consider giving those old timers the same problem with today's resources, and I think we would get great results.

class3shock
3 replies
3d4h

I don't. Our ability to understand problems and validate our solutions for them before even a single part is made is so much more advanced today (in the mechanical engineering world) it would be astonishing to them.

I'm not sure what you mean exactly by the human hours comment; I'm guessing that our advances today come more from low-quality, high-quantity work than ingenuity? Or something to that effect? I would say that might just be a result of the perception from looking back at a time of rapid advancement, where huge leaps occurred in a short time (and that we only look back at the successful end results, not all the R&D or the failures). Much of the slowness and expense of engineering projects today is due to the increased use of analysis, validation, testing, quality assurance, etc. based on lessons learned from those days. That doesn't mean there is any less ingenious stuff happening; it just doesn't stand out in the same way.

As for giving them the tools of today, I doubt they would be able to do much more than the same caliber of people today. In many fields we are pushing the capabilities of materials, analysis, design, etc. to their limits.

I would say the average engineer back then might have been better, but that's more related to the commodification of degrees than engineering getting worse. In 1970 about 9k MechEs graduated for a population of 200 million; in 2015 it was about 26k for a population of 320 million: a roughly 3x increase in graduates for only a roughly 60% increase in population. I think the increase is due to a lot more people being there for the paycheck rather than the passion, which I think was much rarer back in the day.

https://nces.ed.gov/programs/digest/d16/tables/dt16_325.47.a...

All that being said, the engineering/engineers of that era is/are amazing. I mean they did most everything with paper, pencils and slide rules. Slide rules!

detourdog
2 replies
2d22h

Your last sentence best captures my thinking: that generation worked so closely to the physics of the problem that the result was quite minimal and robust. I don't see today's problems approached with the same care.

Interestingly enough, during the rapid advancement of inertial guidance development, the developers came from all walks of life and not necessarily college. There was a period of maturation where the uneducated were purged from the projects.

class3shock
1 replies
2d13h

I would say they were closer to the problem but not necessarily its physics. And that doesn't really have anything to do with why their solutions were simple and robust: they just didn't have any other choice in how to build things. Their limited knowledge and manufacturing capabilities made designing complex solutions difficult, and things were robust because they didn't understand the physics well enough to build them with slimmer margins.

There's a saying along the lines of "It doesn't take a great engineer to make a bridge that stands but it does to make one that barely stands.".

As for being approached with the same care, well, that's hard to say overall. I don't think you'd see a project like the James Webb be successful without the care of a lot of people, though.

I do have mixed feelings about the education requirement that is a wall for some people. I know a lot of folks that could probably have had great careers as engineers but were stopped by the high-end math needed for the degree. I also know many people that have zero engineering intuition that made it through and work in the field.

detourdog
0 replies
51m

I'm basing my thinking on an Autonetics publication, EM-1488, from Jan. 1958, titled "An Introduction to Digital Airborne Computing Techniques." It's a 79-page self-bound book published to bring people up to speed on what was going on.

Page 1 is a definition of a digital computer; it progresses through number systems, storage devices, Boolean algebra, logical design, and code design.

That is the first half of the book, up to page 38.

The remaining sections cover General purpose computing, Digital Differential Analyzer operations, Digital Differential Analyzer Programming, D.D.A. decision and servo integration, Incremental and Whole Value Solutions of Control Problems.

The book contains a schematic of subsystems and the one complete circuit in the book is of a flip-flop.

I think the systems were robust because the computing problem was so tightly bound to the hardware.

nektro
1 replies
2d18h

don't let the doomerism get you. we have a lot of society problems to fix and reconfigure but there are still teams of people out there doing greater work than ever before.

detourdog
0 replies
50m

I mean no doomerism. If anything I have dogmatism: the feeling that people can't see the problem through the inertia of the built environment.

detourdog
3 replies
3d18h

Compared to Voyager, I see that as more of the same, and almost disposable.

I have a large collection of technical documentation and physical artifacts of the same computer implemented as discrete components and integrated circuits. The company had to invent the lithographic process to print circuits.

So ASML is very impressive but could also be seen as a derivative idea.

magicalhippo
2 replies
3d17h

Seen that way, the Voyager probes are very impressive but also a derivative idea.

After all they were really Mariner 11 and 12[1], if not in name, not even the first interplanetary probes.

Rather I think it's more interesting to view engineering as finding solutions to a problem given a set of constraints.

In that sense I think both the Voyager probes and a lot of modern engineering are quite remarkable.

[1]: https://en.wikipedia.org/wiki/Mariner_program#Mariner_Jupite...

detourdog
1 replies
3d11h

I would respond that Voyager is still going, compared to Mariner. The Voyager project may be derivative, but it has outperformed all others. I would also lump all those early inertial guidance efforts of early computing together with Voyager as the high-water mark.

detourdog
0 replies
3d6h

I think what I'm holding in such high regard is that the meager resources and lack of experience forced the development to stick very close to first principles. I think most of the layers of technology today are only distantly related to the problem at hand.

I want modern computing to be as svelte as possible, with a direct UI that maps to the human tasks and hardware that is tightly coupled to the physical world.

Maybe someday I will have an example of what I think is good. I think I'm getting really close to what I see as 100 year computer aesthetic.

malfist
3 replies
3d22h

Is it? Key fobs probably cost dollars to manufacture. Voyager cost $865 million....in 1977.

Maybe my key fob uses compute power wastefully. But I'd rather it cost a few dollars than everything that needs that amount of processing power costing hundreds of millions of dollars.

detourdog
2 replies
3d20h

I don't think a key fob is a good example. My example would be what goes into word processing for the average office, or almost any other office task.

My point is that early on general purpose computing was needed to drive the cost of computing down. I think we are past that stage and it is now time to look at making everything as simple and efficient as possible.

malfist
1 replies
3d4h

What does that gain you?

I'd rather my stuff be inefficient, feature rich and cheap. Remember, your "simple" device is someone else's "missing critical features", your 80% isn't someone else's 80%.

If I'm a manufacturer, why would I spend hundreds of engineering hours designing my widget to be efficient enough to run on an arduino when I can spend 50 cents more per widget and use an esp32 and not have to worry about investing so strongly in computational performance for computational performance's sake.

If I'm a consumer, I care about the cost of the device, and the manufacturer spending hundreds of hours to make the software more efficient is almost always going to be more expensive than a different manufacturer that spent 50 cents more on hardware and much less on R&D.

detourdog
0 replies
2d21h

Everyone has to have their own beliefs and values. Mine appear to be at the other end of the spectrum from yours.

I would prefer less waste with more care and thought.

queuebert
1 replies
4d1h

Or maybe hardware overkill. I have a soft spot for small, dedicated computers like the Apollo Guidance Computer that have physical buttons and simple functionality. The DDIs on jet fighters are another example.

detourdog
0 replies
4d

From my perspective, the microprocessor grew up around the general-purpose computing model. Now that microprocessor power has far outpaced actual human needs, the focus on general-purpose computing is inefficient.

I see efficiencies to be gained in the overall integration of very task-specific computers in a common network.

itsthecourier
0 replies
4d1h

box office: $6,900

The human race advances in leaps thanks to a super small group of dedicated people, to whom we are all indebted.

anotherhue
0 replies
4d4h

There's a second film that's a little more recent, was on Prime video last I looked.

https://www.itsquieterfilm.com/

vlovich123
13 replies
4d1h

The availability of skills is also an issue. Many of the engineers who worked on the project - Voyager 1 launched in 1977 - are no longer around, and the team that remains is faced with trawling through reams of decades-old documents to deal with unanticipated issues arising today

At least part of the problem is that we don't regularly send long distance probes. Of course, even with that maintaining a relevant skill set to maintain a 50+ year old technology from over 100 AU seems difficult. I think having it be a single team's life's work is probably our limit to keeping it alive. Our next best window for sending out another group of probes is 2152 and hopefully it'll become cheap enough to send out a bunch of them with even higher resolution imagery & maybe actually hit all the planets this time. Unfortunately, it's likely no one reading this will be alive to see that happen.

jamiek88
9 replies
4d1h

Likely? I’d say certain!

If not then some wonderful healthcare breakthroughs will have happened and wouldn’t that be great?

I’d love to see us image exoplanets for example.

I’ve often thought about a ‘relativistic chamber’.

Some device that is in space looping at an appreciable fraction of light speed.

Enter the box.

Exit a year later and 200 years have passed down on Earth. Have a mosey around, back in the box!

Have a mooch, back in the box!

And so on.
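A back-of-the-envelope check on that idea (my sketch, not from the comment above, and special relativity only -- a real looping "chamber" would also involve acceleration effects): to age one year while 200 pass on Earth, the box needs a Lorentz factor of 200.

```python
import math

def beta_for_gamma(gamma):
    """Fraction of light speed giving time-dilation factor gamma."""
    return math.sqrt(1 - 1 / gamma**2)

# 1 year inside = 200 years outside
beta = beta_for_gamma(200)
# beta is about 0.9999875 -- within ~13 parts per million of light speed
```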

LeifCarrotson
1 replies
4d

The HTC is the opposite of the cryogenic freezer or relativistic chamber - it accelerates time for people inside it, allowing you to meet external deadlines while the world passes by slowly outside. A cryo freezer or relativistic spaceship decelerates time for people inside, allowing external activities to be accomplished while you don't age.

Vecr
0 replies
3d20h

You are right, I'm wrong. I thought he had something in there for that anyway, but apparently not. Still, you're just on the other side of the Amdahl’s law problems, so you better hope no one is depending on you.

ghodith
0 replies
4d

This is about a time chamber that works in the opposite direction.

brabel
1 replies
4d

In reality, they would wake up to find themselves in a landfill as everyone just forgot what those funny boxes were for and got rid of them.

aembleton
0 replies
3d20h

As predicted by Idiocracy

ribosometronome
0 replies
4d1h

It's a shame we probably live too early for cryogenic freezing to work, either.

shagie
0 replies
3d19h

The Worthing Saga by Orson Scott Card ( https://en.wikipedia.org/wiki/The_Worthing_Saga https://www.goodreads.com/book/show/40304.The_Worthing_Saga )

It was a miracle of science that permitted human beings to live, if not forever, then for a long, long time. Some people, anyway. The rich, the powerful--they lived their lives at the rate of one year every ten. Somec created two societies: that of people who lived out their normal span and died, and those who slept away the decades, skipping over the intervening years and events. It allowed great plans to be put in motion. It allowed interstellar Empires to be built.

It came near to destroying humanity.

After a long, long time of decadence and stagnation, a few seed ships were sent out to save our species. They carried human embryos and supplies, and teaching robots, and one man. The Worthing Saga is the story of one of these men, Jason Worthing, and the world he found for the seed he carried.

---

Freeze Frame Revolution ( https://www.goodreads.com/en/book/show/36510759 )

How do you stage a mutiny when you’re only awake one day in a million? How do you conspire when your tiny handful of potential allies changes with each job shift? How do you engage an enemy that never sleeps, that sees through your eyes and hears through your ears, and relentlessly, honestly, only wants what’s best for you? Trapped aboard the starship Eriophora, Sunday Ahzmundin is about to discover the components of any successful revolution: conspiracy, code—and unavoidable casualties.

---

Also going to recommend the various Vernor Vinge books:

   The Peace War
   Marooned in Realtime
   A Deepness in the Sky
and the short story "The Peddler's Apprentice" ( https://en.wikipedia.org/wiki/The_Collected_Stories_of_Verno... )

meragrin_
2 replies
4d

Our next best window for sending out another group of probes is 2152

Planet alignment?

dmead
0 replies
3d21h

It'll be a good year for planetary astronomy anyhow.

nullhole
10 replies
4d5h

  Called a “poke” by the team, the command is meant to gently prompt the FDS to try different sequences in its software package in case the issue could be resolved by going around a corrupted section.
The apparent foresight of the original programmers is impressive though maybe not too surprising given the conditions they expected.

I'd be curious to know if anyone has any book recommendations on software design for space missions; I suspect there would be some lessons in there around testing and reliability that could inform more day-to-day stuff.

isolli
1 replies
4d5h

I can't recommend a book, but I watched this video a while ago, and I found it riveting: "Light Years Ahead | The 1969 Apollo Guidance Computer" [0]

Robert Wills introduces the amazing hardware and software that made up the Apollo Guidance Computer, walks you through the landing procedure step-by-step, and talks about the pioneering design principles that were used to make the landing software robust against any failure.

[0] https://www.youtube.com/watch?v=B1J2RMorJXM

araker
0 replies
3d22h

Thanks for sharing, this is a great talk.

isolli
1 replies
4d5h

I also read this a while ago [1]:

"The Voyager’s computer system was very impressive as well. Knowing the craft would be on its own much of the time, with the lag between command and response from Earth growing longer the farther the craft went into space, engineers developed a self-repairing computer system. The computer has multiple modules that compare the data they receive and the output instructions they decide on. If one module differs from the others, it's assumed to be faulty and is eliminated from the system, replaced by one of the backup modules. It was tested shortly after launch, when a delay in boom deployment was misread as a malfunction. The problem was corrected successfully."

[1] https://science.howstuffworks.com/voyager.htm

shagie
0 replies
3d14h

Some of the fault protection and failover procedures:

https://voyager.gsfc.nasa.gov/Library/DeepCommo_Chapter3--14...

Page 74-75

3.7.3 Spacecraft Fault Protection

The CCS has five fault-protection algorithms (FPAs) stored in memory, as summarized in Table 3-9. The two algorithms most directly related to the telecommunications system are named RF Loss and Command Loss [19].

3.7.3.1 RF Loss. RF Loss provides a means for the spacecraft to automatically recover from an S- or X-band exciter or power amplifier degradation or failure affecting the unit’s RF output. The CCS monitors the output RF power at four points in the RFS: the S-band exciter and S-band power amplifier and the X-band exciter and X-TWTA. If the output RF power from one or more powered-on units drops below a threshold level, the algorithm will attempt to correct the problem by switching to the redundant unit.

3.7.3.2 Command Loss. Command Loss provides a means for the spacecraft to automatically respond to an onboard failure resulting in the inability to receive or recognize ground commands. If a period of time set in the flight software goes by without the spacecraft recognizing a valid uplinked command, the Command Loss timer expires. The algorithm responds to the presumed spacecraft failure28 and attempts to correct that failure by systematically switching to redundant hardware elements until a valid command is received. Command Loss will be executed four consecutive times if command reception is not successful. After four unsuccessful executions, the CCS will disable Command Loss and activate a set of sequences of commands named the backup mission load (BML) and described below.

3.7.3.3 Backup Mission Load. In the event of permanent loss of command reception capability, a BML command sequence stored onboard each spacecraft is programmed to continue controlling the spacecraft and achieving fundamental VIM objectives. The BML will begin execution two weeks after the first execution of Command Loss and continue until the spacecraft stops operating. It will transmit cruise science and engineering telemetry, store science observations on the tape recorder, and downlink playbacks regularly.
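The Command Loss escalation quoted above can be sketched as a tiny decision function (illustrative Python only, not flight code; the names and return strings are my own, but the "four attempts then BML" flow is from the document):

```python
# Sketch of the Command Loss fault-protection flow: while the timer has
# not expired, do nothing; once it expires, switch redundant hardware up
# to four times; after that, activate the backup mission load (BML).
def command_loss_action(timer_expired, failed_attempts):
    if not timer_expired:
        return "nominal"
    if failed_attempts < 4:
        return "switch to redundant hardware"
    return "activate backup mission load"
```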
KineticLensman
1 replies
3d19h

Sunburst and Luminary by Don Eyles, one of the AGC coders, is an outstanding read. Specific to AGC but indicates some of the challenges

KineticLensman
0 replies
3d4h

Also, 'Digital Apollo: Human and Machine in Spaceflight' is a good read. This is more generic, but has good discussions about the role of humans in the system, including whether the crew or the computers control the flight, and who has the best data (crew or mission control) and how these issues evolved during the early space race.

chasd00
0 replies
4d

gently prompt the FDS

do you suppose their system prompt ensures it responds more favorably to gentle commands? ;)

anonymous_user9
0 replies
4d4h

Software isn’t the sole focus, but you may enjoy “Computers in Spaceflight: The NASA Experience”, a 406-page history of manned, unmanned, and ground computers from the beginning through the shuttle era. https://ntrs.nasa.gov/citations/19880069935

CamperBob2
0 replies
4d3h

It's not specifically a text on software design, but Sunburst and Luminary by Don Eyles is an enjoyable read ( https://www.amazon.com/Sunburst-Luminary-Apollo-Don-Eyles/dp... ). He manages to capture the Apollo-era zeitgeist in more ways than one. Not just another tale from the trenches.

ndiddy
6 replies
4d4h

But an engineer with the agency’s Deep Space Network, which operates the radio antennas that communicate with both Voyagers and other spacecraft traveling to the Moon and beyond, was able to decode the new signal and found that it contains a readout of the entire FDS memory. The FDS memory includes its code, or instructions for what to do, as well as variables, or values used in the code that can change based on commands or the spacecraft’s status. It also contains science or engineering data for downlink.

Does this mean that someone could set up an antenna and get a copy of the Voyager software? Might be cool to see.

shagie
1 replies
3d14h

Does this mean that someone could set up an antenna and get a copy of the Voyager software? Might be cool to see.

DSN NOW is neat to watch - and sometimes you see VGR1 and its related data https://eyes.nasa.gov/dsn/dsn.html

And since no one is talking with it, I'm going to find a screen shot. https://space.stackexchange.com/q/17430

The DATA feed (at 159 baud - bits per second) is being received at -157.95 dBm ... seven years ago. 1.6 x 10^-22 kW.

This is getting into the realm where noise will dominate the signal.

https://www.seti.org/detecting-voyager-1-ata

The above assumes a receiver temperature of 120 Kelvin at 8.4 GHz. The receiver temperature could have also been measured using the quasar observation, but the 120 Kelvin figure is not far from reality given previous measurements.

The Allen Telescope Array is probably the 'cheapest' way to detect the signal.

The telescope comprises 42 fully-steerable 6.1m-diameter telescopes, of which ~20 are fitted with wideband cryogenically cooled feeds.

For comparison, liquid nitrogen is 77 Kelvin.

They were able to detect the signal - but in that diagonal chart, note how difficult it is to see the diagonal from the visual noise.

That is only detecting the signal - not decoding it.
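For anyone checking the units above, the dBm-to-watts conversion is straightforward (a quick sketch to verify the quoted figure):

```python
def dbm_to_watts(dbm):
    # dBm is decibels relative to one milliwatt
    return 10 ** (dbm / 10) * 1e-3

p = dbm_to_watts(-157.95)
# about 1.6e-19 W, matching the 1.6 x 10^-22 kW figure above
```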

shagie
0 replies
3d4h

As I write this now, VGR1 and VGR2 are both showing up on DSN NOW.

https://imgur.com/6bgg3Uj

This brings up another issue. The DSN is spread across Goldstone, Madrid, and Canberra, allowing for continuous tracking of a spacecraft.

A single site would only be able to get a fraction of the broadcast, or might be out of position for a repeating broadcast.

sidewndr46
0 replies
4d3h

Yes. It would also need to be enormous and use the same type of supercooled receiver.

malfist
0 replies
3d22h

Technically yes. But currently you need a 70 meter (230 foot) dish to hear/talk to it.

To talk back to Voyager, DSS43 uses 75 kW to transmit, so you might need a commercial account with your local power company.

colechristensen
0 replies
4d

You would need a dish measured in tens of meters across. When you were done you’d have a decent radio telescope.

CamperBob2
0 replies
4d3h

It's possible to receive signals from some spacecraft with reasonable amateur-level antennas, but not this one, sadly. You'd be lucky to get a photon every day or two.

ck2
4 replies
4d1h

My favorite thought about the Voyager (and Pioneer) probes is that someday, thousands of years from now, humans will launch a ship or drone that will pass by it in only a few days (if we last that long; the odds are way down).

Another far more hilarious thought is I am glad they chose the Voyager probe for the first Star Trek movie and not Pioneer (hint: the letters dropped in the "mystery name")

ruune
2 replies
4d

That's an incredibly interesting thing, in my opinion. Even if we send colony ships now, the odds of them arriving thousands of years later at a planet already inhabited by humans or human descendants, because our technology evolved over that time, are non-zero. Combined with relativistic time it gets even weirder, because an almost-lightspeed ship we'd send could be surpassed by something much more advanced in a matter of days (spaceship time). So when do we send ships? I think there's a Kurzgesagt video about this somewhere.

ck2
0 replies
4d

That concept for tic-tac-sized probes reaching 10% of the speed of light via laser power transfer seems remotely plausible in a few hundred years, but personally I don't think humans are making it off this planet. We are running out of runway with overpopulation, overheating, overpollution, toxic everything, and cutthroat politics will only allow space investment for weapons when there's no money for food and health.

Even if there were a near-future miracle invention for cheap, plentiful power, it would be turned into a weapon of war far before space use. Beyond the power requirement, accelerating mass near the speed of light is beyond our comprehension; we can't even deal with radiation in space, forget hitting dust at that speed.

Vecr
0 replies
4d

As soon as possible, because you don't know you'll last that long. You do need to use resources wisely though.

krallja
0 replies
3d23h

The mystery name for Pioneer could be "one" or "Pion" or "neer".

wolverine876
3 replies
4d

A command from Earth takes 22.5 hours to reach the probe

Voyager 1 is closing in on being the first human-made object to travel 1 light day.

V1 has traveled ~22.5 light hours in ~46.5 years [0], and assuming that average rate of 2.07 years/light hour [1], it will reach 1 light day in around 3.1 years. Does anyone know whether it will still have sufficient power to measure anything and transmit at that point?

[0] https://voyager.jpl.nasa.gov/mission/status/

[1] Notes on Voyager's average rate:

Remember that V1 did not travel in a straight line or at a constant rate from Earth through its planetary explorations, so the average rate now is probably higher than that simple calculation.

Also, if we are looking at distance and speed relative to Earth, and Earth's orbit around the Sun would cause some variation throughout the year. Could Earth's relative orbital positions at V1's launch decades ago and when V1 approaches 1 light day in three years significantly affect V1's distance from Earth? Earth's orbital diameter is roughly 300 million km; V1 travels at ~61,000 km/hr relative to the Sun [0], so the worst case would add ~4,900 hours or ~205 days. (Those are some quick calculations and I have to run to a meeting, so I hope there are no glaring errors!)

Also, I assume the Sun's movement relative to Voyager 1 has been constant since its launch.
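The simple average-rate estimate above works out like this (same rough numbers as the comment; this ignores the caveats in [1]):

```python
# Naive extrapolation: distance so far in light-hours over elapsed years,
# projected forward to one light-day (24 light-hours).
elapsed_years = 46.5
light_hours_travelled = 22.5

years_per_light_hour = elapsed_years / light_hours_travelled   # ~2.07
remaining_light_hours = 24 - light_hours_travelled             # 1.5
eta_years = remaining_light_hours * years_per_light_hour       # ~3.1
```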

rebolek
0 replies
4d

calling the Voyagers V1 and V2 seems a little bit strange...

palijer
0 replies
3d19h

When the news of the breakage first spread, I hacked together a fun shell alias that takes the current distance of V1 or V2, calculates the roundtrip signal time, then injects a sleep() before your command runs.

The idea of troubleshooting a computer system with that sort of delay must make them incredibly creative.
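Something like that alias could be sketched as follows (hypothetical re-creation in Python; the distance figure is approximate and the real alias presumably shelled out to sleep):

```python
import subprocess
import time

C_KM_PER_S = 299_792        # speed of light in km/s
V1_DISTANCE_KM = 24e9       # rough Voyager 1 distance from Earth

def one_way_delay_s(distance_km):
    """Seconds for light to cover distance_km."""
    return distance_km / C_KM_PER_S   # ~80,000 s for Voyager 1

def run_with_light_delay(cmd):
    # Sleep one light-trip time, then run the command.
    time.sleep(one_way_delay_s(V1_DISTANCE_KM))
    return subprocess.run(cmd)
```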

tcgv
3 replies
3d22h

The time lag is a problem. A command from Earth takes 22.5 hours to reach the probe, and the same period is needed again for a response. This means a 45-hour wait to see what a given command might have done.

Astonishing!

IAmGraydon
1 replies
3d21h

I can hardly imagine how frustrating that scenario would be. Think about how rapidly you hack away at the terminal, and imagine that every time you hit “enter”, it takes 45 hours to find out if it worked. You would only be able to issue 194 commands in an entire year.
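The arithmetic behind that figure (a quick check of the comment above):

```python
hours_per_year = 365 * 24            # 8760
round_trip_hours = 45
commands_per_year = hours_per_year // round_trip_hours
# 194 commands, as stated
```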

peddling-brink
0 replies
3d3h

Imagine that each command is rigorously studied by multiple people and an incorrect entry may be a news generating event.

layer8
0 replies
3d18h

I know some CI pipelines that are only barely faster. ;)

tapoxi
2 replies
4d2h

Is the Voyager's software or data stream documented anywhere? I'm a little disappointed by the high level descriptions we get back. I'd love to know what its actually sending.

tivert
0 replies
4d1h

Is the Voyager's software or data stream documented anywhere? I'm a little disappointed by the high level descriptions we get back. I'd love to know what its actually sending.

This has a chapter on the Voyager computer system, it's a lot more technical than typical, but I don't think it gets to the detail of the literal programs or data stream:

https://web.archive.org/web/20190714113800/https://history.n...

I don't know why NASA took down this nice HTML version. The live link now just redirects to a scanned PDF.

I read NASA has a lot less documentation about Voyager than you'd think, and apparently they don't have a ground-based simulator or anything like that (which they have for later probes).

crispyambulance
0 replies
4d2h

NASA does have a data portal (https://data.nasa.gov/). There are datasets relating to Voyager, but just checking right now, I get 403 errors on those.

sph
2 replies
4d5h

Is Voyager 1 in hibernation mode because the RTG is not producing as much power, or is it because most components have failed, but could in theory still be powered on? I do not know how long an RTG lasts for.

tokai
1 replies
4d5h

From wikipedia: "generate approximately 157 Watts of electrical power initially – halving every 87.7 years"

It should still have power.

https://en.wikipedia.org/wiki/MHW-RTG
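Plugging the quoted figures into the half-life formula gives a rough idea of the electrical output today (my sketch; decay alone, not counting thermocouple degradation):

```python
# Decay of the RTG output per the quoted figures: 157 W initially,
# halving every 87.7 years.
def rtg_power_watts(years_since_launch, p0=157.0, half_life_years=87.7):
    return p0 * 0.5 ** (years_since_launch / half_life_years)

p_now = rtg_power_watts(46.5)   # launched 1977
# roughly 109 W from decay alone
```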

anotherhue
0 replies
4d4h

The thermocouples also degrade, so halve the power again. The biggest issue is keeping the fuel warm so the antenna can stay aligned.

https://www.itsquieterfilm.com/

strangattractor
1 replies
3d23h

President Biden recently announced that software should be using memory safe languages. The command sent to Voyager - poke - is a command to directly set a memory location with a value - totally not memory safe:)

oasisaimlessly
0 replies
3d18h

Biden administration outlaws gdb

More news at 11

smackeyacky
1 replies
4d6h

I don’t want to personify it but it’s like a last plea from space asking for help billions of kilometres from home. I hope the nasa engineers can either patch it up or quietly put it to sleep.

Vecr
0 replies
4d

There's no suffering issue, there's zero point in turning it off. The only thing you'd do is re-allocate DSN resources at some point if someone else really needs them.

DarmokJalad1701
1 replies
4d1h

I remember listening to a podcast where someone mentioned that there was a VM that they used for testing the flight software for Voyager -- possibly open source. Unfortunately, I do not remember what podcast it was.

dang
0 replies
4d

Comments moved thither. Thanks!

Edit: so many of those comments have to deal with the contents of this particular article (which does add more background) so I think we need to move them back. Let's keep the less baity title from the other post, though.

ciunix
0 replies
3d22h

I hope communication can be correctly re-established with the farthest device made by mankind. Voyager is giving us a lot of important information about what lies in our neighborhood.

NoMoreNicksLeft
0 replies
4d2h

Wow. Some good news for once. Hope it's something they can repair/mitigate.