
Apollo 11 vs. USB-C Chargers (2020)

kens
61 replies
14h33m

> Apollo 11 spacecraft contains 4 computers

Analog computers don't get the respect they deserve. There's one more computer, the FCC. The Flight Control Computer is an analog computer in the Saturn V that controlled the rocket gimbals. It's a two-foot cylinder weighing almost 100 pounds.

kragen
37 replies
11h33m

i think it's (unintentionally) misleading to describe analog 'computers' as 'computers'. what distinguishes digital computers from other digital hardware is that they're turing-complete (if given access to enough memory), and there isn't any similar notion in the analog domain

the only reason they have the same name is that they were both originally built to replace people cranking out calculations on mechanical desk calculators, who were also called 'computers'

the flight control 'computer' has more in common with an analog synthesizer module than it does with a cray-1, the agc, an arduino, this laptop, or these chargers, which are by comparison almost indistinguishable

thriftwy
12 replies
10h55m

I wonder why you can't make a turing complete analog computer using feedback loops.

progval
11 replies
10h48m
kragen
10 replies
10h35m

a universal turing machine is a particular machine which can simulate all other turing machines. the gpac, by contrast, is a family of machines: all machines built out of such-and-such a set of parts

you can't simulate an 11-integrator general-purpose analog computer or other differential analyzer with a 10-integrator differential analyzer, and you can't simulate a differential analyzer with 0.1% error on a (more typical) differential analyzer with 1% error, unless it's 100× as large (assuming the error is gaussian)
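
A quick sketch of where the 100× comes from, assuming the per-integrator errors are independent and gaussian, so that averaging $ N $ copies reduces the error like $ 1/\sqrt{N} $:

    \sigma_{\text{avg}} = \frac{\sigma}{\sqrt{N}}
    \quad\Rightarrow\quad
    N = \left(\frac{1\%}{0.1\%}\right)^2 = 100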

the ongoing research in the area is of course very interesting but a lot of it relies on an abstraction of the actual differential-analyzer problem in which precision is infinite and error is zero

thriftwy
9 replies
10h24m

Sure, you cannot easily simulate another analog computer, but this is not the requirement. The requirement is turing completeness, which can be done.

lisper
8 replies
10h20m

It can? How?

kragen
6 replies
10h12m

as i understand it, with infinite precision; the real numbers within some range, say -15 volts to +15 volts, have a bijection to infinite strings of bits (some infinitesimally small fraction of which are all zeroes after a finite count). with things like the logistic map you can amplify arbitrarily small differences into totally different system trajectories; usually when we plot bifurcation diagrams from the logistic map we do it in discrete time, but that is not necessary if you have enough continuous state variables (three is obviously sufficient but i think you can do it with two)
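
A discrete-time sketch of that amplification using the logistic map (Python, purely illustrative; the continuous-time version needs the extra state variables mentioned above):

    # Two trajectories of the logistic map x -> r*x*(1-x) with r = 4 (chaotic),
    # started a tiny distance apart; the gap grows until the trajectories are
    # completely decorrelated.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-12
    for step in range(51):
        if step % 10 == 0:
            print(f"step {step:2d}  |x-y| = {abs(x - y):.3e}")
        x, y = r * x * (1 - x), r * y * (1 - y)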

given these hypothetical abilities, you can of course simulate a two-counter machine, but a bigger question is whether you can compute anything a turing machine cannot; after all, in a sense you are doing an infinite amount of computation in every finite interval of time, so maybe you could do things like compute whether a turing machine will halt in finite time. so far the results seem to support the contrary hypothesis, that extending computation into continuous time and continuously variable quantities in this way does not actually grant you any additional computational power!

this is all very interesting but obviously not a useful description of analog computation devices that are actually physically realizable by any technology we can now imagine

fanf2
4 replies
7h16m

Except that infinite precision requires infinite settling time. (Which I guess is the analogue computing version of arithmetic not being O(1) even though it is usually modelled that way.)

couchand
3 replies
4h22m

Infinite precision is just analog's version of an infinitely-long tape.

lisper
0 replies
2h38m

But even with infinite precision, how do you build a universal analog computer? And how do you program it?

fanf2
0 replies
1h22m

Right :-) I prefer to say a Turing machine’s tape is “unbounded” rather than “infinite” because it supports calculations of arbitrarily large but finite size. So in the analogue case, I would say unbounded precision and unbounded waiting time.

Dylan16807
0 replies
54m

Infinite precision is exponentially more difficult. It's very easy to have unbounded tape, with a big buffer of tape and some kind of "factory" that makes more tape when you get near the edge. Unbounded precision? Not going to happen in a real machine. You get to have several digits and no more.
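
A minimal sketch of that "tape factory" idea in Python, where fresh blank cells appear the first time they are touched:

    # A Turing-machine tape that is finite at any moment but unbounded:
    # cells spring into existence (as blanks) the first time they are touched.
    from collections import defaultdict

    tape = defaultdict(int)   # blank symbol = 0
    head = 0
    tape[head] = 1            # write a 1
    head += 1                 # move right
    print(tape[head])         # 0: a fresh blank cell, "manufactured" on demand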

lisper
0 replies
2h36m

It's much worse than that. There is this little thing called Planck's constant, and there's this other little thing called the second law of thermodynamics. So even if you allow yourself arbitrarily advanced technology I don't see how you're going to make it work without new physics.

thriftwy
0 replies
10h8m

It would not be very interesting. You would lose all the interesting properties of an analog computer, and it would be a poorly performing turing machine. Still, it would have the necessary loops and branches, since you can always build digital on top of analog with some additional harness.

ezconnect
12 replies
11h23m

They both do the same thing: compute an output from given inputs. So they are properly distinguished from each other by how they do the computing. They both deserve the name 'computer'.

kragen
11 replies
11h20m

only in the same sense that a machinist's micrometer, an optical telescope, an analog television set, an acoustic guitar, a letterpress printing press, a car's manual transmission, a fountain pen, a nomogram, and a transistor also 'compute an output from given inputs'

do you want to call them all 'computers' now?

adrian_b
6 replies
10h31m

The arithmetic circuits alone, like adders, multipliers etc., regardless of whether they are mechanical or electronic, analog or digital, should not be called computers.

When the arithmetic circuits, i.e. the "central arithmetical part", as called by von Neumann, are coupled with a "central control part", as called by von Neumann, i.e. with a sequencer that is connected in a feedback loop with the arithmetic part, so that the computation results can modify the sequence of computations, then this device must be called a "computer", regardless of whether the computations are done with analog circuits or with digital circuits.

What defines a computer (according to the definition already given by von Neumann, which is the right definition in my opinion) is closing the feedback loop between the arithmetic part and the control part, which raises the order of the system in comparison with a simple finite state automaton, not how those parts are implemented.

The control part must be discrete, i.e. digital, but the arithmetic part can be completely analog. Closing the feedback loop, i.e. the conditional jumps executed by the control part, can be done with analog comparators that provide the predicates tested by the conditional jumps. The state of an analog arithmetic part is held in capacitors, inductors or analog integrators, instead of digital registers.
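
A toy sketch of that structure (Python, purely illustrative, with the "analog" arithmetic part just simulated numerically): a discrete sequencer whose conditional jump is closed through a comparator on an integrator's state.

    # Toy hybrid computer: a digital sequencer (control part) steps through a
    # fixed program, while the "arithmetic part" is an integrator state.
    # The conditional jump is closed through a comparator on the analog state,
    # which (in von Neumann's framing) is what makes it a computer rather
    # than a bare arithmetic unit.
    dt = 0.01
    x = 0.0                       # integrator output (the "analog" state)
    program = ["integrate", "compare", "jump_if_low", "halt"]
    pc = 0
    while True:
        op = program[pc]
        if op == "integrate":
            x += 1.0 * dt         # feed a constant 1.0 into the integrator
            pc += 1
        elif op == "compare":
            below = x < 2.0       # analog comparator: is x still below 2 V?
            pc += 1
        elif op == "jump_if_low":
            pc = 0 if below else pc + 1   # feedback from arithmetic to control
        elif op == "halt":
            break
    print(f"x = {x:.2f}")         # ~2.00: ramped until the comparator tripped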

Several decades ago, I had to debug an analog computer during its installation process, before functioning for the first time. That was in a metallurgic plant, and the analog computer provided outputs that controlled the torques of a group of multi-megawatt DC electric motors. The formulae used in the analog computations were very complex, with a large number of adders, multipliers, integrators, square root circuits and so on, which combined inputs from many sensors.

That analog computer (made with op amps) performed a sequence of computations much more complex than the algorithms that were executed on an Intel 8080, which controlled various on-off execution elements of the system, like relays and hydraulic valves and the induction motors that powered some pumps.

The main reason why such analog computers have become obsolete is the difficulty of ensuring that the accuracy of their computations will not change due to aging and due to temperature variations. Making analog computers that are insensitive to aging and temperature raises their cost much above modern digital microcontrollers.

kragen
5 replies
10h18m

as you are of course aware, analog 'computers' do not have the 'central control part' that you are arguing distinguishes 'computers' from 'arithmetic circuits alone'; the choice of which calculation to perform is determined by how the computer is built, or how its plugboard is wired. integrators in particular do have state that changes over time, so the output at a given time is not a function of the input at only that time, but of the entire past, and as is well known, such a system can have extremely complex behavior (sometimes called 'chaos', though in this context that term is likely to give rise to misunderstanding)

you can even include multiplexors in your analog 'computer', even with only adders and multipliers and constants; x · (1 + -1 · y) + z · y interpolates between x and z under the control of y, so that its output is conditionally either x or z (or some intermediate state). but once you start including feedback to push y out of that intermediate zone, you've built a flip-flop, and you're well on your way to building a digital control unit (one you could probably build more easily out of transistors rather than op-amps). and surely before long you can call it a digital computer, though one that is controlling precision linear analog circuitry
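
A quick numerical check of that interpolation formula acting as a selector (just the arithmetic, not an op-amp design):

    # mux(x, z, y) = x*(1 + -1*y) + z*y: built from adders, multipliers and
    # the constants 1 and -1 only.
    def mux(x, z, y):
        return x * (1 + -1 * y) + z * y

    print(mux(3.0, 7.0, 0.0))   # 3.0 -> selects x when y = 0
    print(mux(3.0, 7.0, 1.0))   # 7.0 -> selects z when y = 1
    print(mux(3.0, 7.0, 0.5))   # 5.0 -> intermediate when y is undriven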

it is very commonly the case that analog computation is much, much faster than digital computation; even today, with microprocessors a hundred thousand times faster than an 8080 and fpgas that are faster still, if you're doing submillimeter computation you're going to have to do your front-end filtering, upconversion or downconversion, and probably even detection in the analog domain

adrian_b
4 replies
9h56m

Most "analog computers" have been simple, and even if they usually provided the solution of a system of ordinary differential equations, that does not require a control part, making them no more closer to a complete computer than a music box that performs a fixed sequence.

I agree that this kind of "analog computer" does not deserve the name of "computer", because it is equivalent only to the "registers + ALU" (RALU) simple automaton that is a component of a CPU.

Nevertheless, there is no reason why a digital control part cannot be coupled with an analog arithmetic part and there have existed such "analog computers", even if they have been rarely used, due to high cost and complexity.

It is not completely unlikely that such "analog computers", consisting of a digital control part and an analog arithmetic part, could be revived with the purpose of implementing low-resolution high-speed machine learning inference.

Even now, in circuits like analog-digital converters, there may be analog computing circuits, like switched-capacitor filters, which are reconfigurable by the digital controller of the ADC, based on various criteria, which may depend on the digital output of the converter or on the outputs of some analog comparators (which may detect e.g. the range of the input).

kens
2 replies
9h38m

You're describing a "hybrid computer". These were introduced in the late 1950s, combining a digital processor with analog computing units. I don't understand why you and kragen want to redefine standard terms; this seems like a pointless linguistic exercise.

kragen
0 replies
9h24m

because 'computer' has a meaning now that it didn't have 65 years ago, and people are continuously getting confused by thinking that 'analog computers' are computers, as they understand the term 'computers', which they aren't; they're a different thing that happens to have the same name due to a historical accident of how the advent of the algorithm happened

this is sort of like how biologists try to convince people to stop calling jellyfish 'jellyfish' and starfish 'starfish' because they aren't fish. the difference is that it's unlikely that someone will get confused about what a jellyfish is because they have so much information about jellyfish already

my quest to get people to call cellphones 'hand computers' is motivated by the same values but is probably much more doomed

adrian_b
0 replies
9h16m

"Hybrid computer" cannot be considered as a standard term, because it has been used ambiguously in the past.

Sometimes it has been applied to the kind of computers mentioned by me, with a digital control part and a completely analog arithmetic part.

However it has also been frequently used to describe what were hybrid arithmetic parts, e.g. ones which included both digital registers and digital adders and an analog section, for instance with analog integrators, which was used to implement signal-processing filters or to solve differential equations.

IMO, "hybrid computer" is appropriate only in the second sense, for hybrid arithmetic parts.

The control part of a CPU can be based only on a finite state automaton, so there is no need for any term to communicate this.

On the other hand, the arithmetic part can be digital, analog or hybrid, so it is useful to speak about digital computers, analog computers and hybrid computers, based on that.

kragen
0 replies
9h52m

i agree completely; thank you for clarifying despite my perhaps confrontational tone

in some sense almost any circuit in which a digital computer controls an analog multiplexer chip or a so-called digital potentiometer could qualify. and cypress's psoc line has a bit of analog circuitry that can be thus digitally reconfigured

justinjlynn
3 replies
11h4m

What's wrong with that? They are. We can always make the finer distinction of "Von Neumann architecture inspired digital electronic computer" if you wish to exclude the examples you've given. After all, anything which transforms a particular input to a particular output in a consistent fashion could be considered a computer which implements a particular function. I would say - don't confuse the word's meaning with the object's function. Simply choose a context in which a word refers to a particular meaning, adapt to others' contexts and translate, and simply deal with the fact that there is no firm division between computer and not-computer out in the world somewhere, apart from people and their context-rich communications. If the context in which you're operating with an interlocutor is clear enough for you to jump to a correction of usage ... simply don't; beyond verifying your translation is correct, of course. As you're already doing this - likely without realising it - by taking care in doing so consciously you're likely to find your communications more efficient, congenial, and illuminating than they otherwise would be.

shermantanktop
0 replies
10h30m

This is the double-edged sword of deciding to widen (or narrow) the meaning of a term which already has a conventional meaning.

By doing so, you get to make a point—perhaps via analogy, perhaps via precision, perhaps via pedantry—which is illuminating for you but now confusing for your reader. And to explain yourself, you must swim upstream and redefine a term while simultaneously making a different point altogether.

Much has been written about jargon, but a primary benefit of jargon is the chance to create a domain-specific meaning without the baggage of dictionary-correct associations. It’s also why geeks can be bores at dinner parties.

derefr
0 replies
10h28m

We live in a society (of letters.) Communication is not pairwise in a vacuum; all communication is in context of the cultural zeitgeist in which it occurs, and by intentionally choosing to use a non-zeitgeist-central definition of a term, you are wasting the time of anyone who talks to you.

By analogy to HCI: words are affordances. Affordances exist because of familiarity. Don’t make a doorknob that you push on, and expect people not to write in telling you to use a door-bar on that door instead.

atoav
0 replies
10h25m

You are not wrong, yet you are. All of these things are doing computation in a vague, creative sense — sure. But if we call everything that does this or its equivalent a computer we would have to find new words for the thing we mean to be a computer currently.

Unilaterally changing language is not forbidden, but if The Culture Wars™ have taught us anything, it is that people are allergic to talking about what they see as mandated changes to their language, even if it is reasonable and you can explain it.

Colour me stoked, but you could still just do it unilaterally and wait till somebody notices.

However, my caveat with viewing everything as computation is that you fall into the same trap as people in the ~1850s did when they wanted to describe everything in the world using complex mechanical devices, because that was the bleeding edge back then. Not everything is an intricate system of pulleys and levers, it turned out, even if theoretically you could mimic everything if that system were just complex enough.

eesmith
6 replies
10h15m

What do you regard as the first digital, Turing-complete (if given enough memory) computer?

ENIAC, for example, was not a stored-program computer. Reprogramming required rewiring the machine.

On the other hand, by clever use of arithmetic calculations, https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.... says the Z3 could perform as a Universal Computer, even though, quoting its Wikipedia page, "because it lacked conditional branching, the Z3 only meets this definition by speculatively computing all possible outcomes of a calculation."

Which makes me think the old punched card mechanical tabulators could also be rigged up as a universal machine, were someone clever enough.

"Surprisingly Turing-Complete" or "Accidentally Turing Complete" is a thing, after all, and https://gwern.net/turing-complete includes a bunch of them.

kragen
4 replies
10h1m

let me preface this with the disclaimer that i am far from an expert on the topic

probably numerous eddies in natural turbulent fluid flows have been digital turing-complete computers, given what we know now about the complexity of turbulence and the potential simplicity of turing-complete behavior. but is there an objective, rather than subjective, way to define this? how complicated are our input-preparation and output-interpretation procedures allowed to be? if there is no limit, then any stone or grain of sand will appear to be turing-complete

a quibble: the eniac was eventually augmented to support stored-program operation but not, as i understand it, until after the ias machine (the johnniac) was already operational

another interesting question there is how much human intervention we permit; the ias machine and the eniac were constantly breaking down and requiring repairs, after all, and wouldn't have been capable of much computation without constant human attention. suppose we find that there is a particular traditional card game in which players can use arbitrarily large numbers. if the players decide to simulate minsky's two-counter machine, surely the players are turing-complete; is the game? are the previous games also turing-complete, the ones where they did not make that decision? does it matter if there happens to be a particular state of the cards which obligates them to simulate a two-counter machine?

if instead of attempting to measure the historical internal computational capability of systems that the humans could not perceive at the time, such as thunderstorms and the z3, we use the subjective standard of what people actually programmed to perform universal computation, then the ias machine or one of its contemporaries was the first turing-complete computer (if given enough memory); that's when universal computation first made its effects on human society felt

eesmith
3 replies
9h18m

> is the game?

Sure. One of the "Surprisingly Turing-Complete" examples is that "Magic: the Gathering: not just TC, but above arithmetic in the hierarchy ".

See https://arxiv.org/abs/1904.09828 for the preprint "Magic: The Gathering is Turing Complete", https://arstechnica.com/science/2019/06/its-possible-to-buil... for an Ars Technica article, and https://hn.algolia.com/?q=magic+turing for the many HN submissions on that result.

_a_a_a_
2 replies
7h18m

Could you explain "above arithmetic in the hierarchy" in a few words, TIA, never heard of this

eesmith
1 replies
2h56m

Nope. The source page links to https://en.wikipedia.org/wiki/Arithmetical_hierarchy . I can't figure it out.

An HN comment search, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... , finds a few more lay examples, with https://news.ycombinator.com/item?id=21210043 by dwohnitmok being the easiest for me to somewhat make sense of.

I think the idea is: suppose you have an oracle which tells you, in finite time, whether a Turing machine will halt. There will still be halting problems for that oracle system, which require an oracle from a higher-level system. (That is how I interpret "an oracle that magically gives you the answer to the halting problem for a lower number of interleavings will have its own halting problem it cannot decide in higher numbers of interleavings".)

BoiledCabbage
0 replies
41m

This appears to discuss it a bit more. Not certain if it's more helpful than the comment (still going through it), but it does cover it more in detail.

https://risingentropy.com/the-arithmetic-hierarchy-and-compu...

midasuni
0 replies
8h18m

Colossus was before ENIAC, but was also programmed by plugging up. The Baby was a stored-program machine and had branching.

milkey_mouse
1 replies
8h22m
deaddodo
0 replies
5h37m

If you just want to reference the entire data set of potential "first computers" (including ones that aren't referable in the JavaScript app, due to missing toggles), you can access the source data here (warning, small CSV download):

https://www.gleech.org/files/computers.csv

nine_k
0 replies
4h46m

In analog computers, software is hard to separate from the hardware. In ones I had any experience with (as a part of a university course), the programming part was wiring things on patch panels, not unlike how you do it with modular analog synths. You could make the same machine run a really wide variety of analog calculations by connecting opamps and passive components in various ways.

If we could optimize a set of programs down to the FPGA bitstream or even Verilog level, that would approach the kind of programs analog computers run.

I can't tell anything about Turing completeness though. It's a fully discrete concept, and analog computers operate in the continuous signal domain.

eru
0 replies
5h4m

Most digital computers are Turing complete, but interestingly not all programming languages are Turing complete.

Turing completeness is a tar pit that makes your code hard to analyse and optimise. It's an interesting challenge to find languages that allow meaningful and useful computation that are not Turing complete. Regular expressions and SQL-style relational algebra (but not Perl-style regular expressions nor most real-world SQL dialects) are examples familiar to many programmers.

Programming languages like Agda and Idris that require that you prove that your programs terminate [0] are another interesting example, less familiar to people.

[0] It's slightly more sophisticated than this: you can also write event-loops that go on forever, but you have to prove that your program does some new IO after a finite amount of time. (Everything oversimplified here.)

nothercastle
13 replies
14h13m

Exactly this. A lot of the systems had built-in analog computers. It's a lot cheaper to build them now with electronics, but you need more computing power to do things that were previously done mechanically.

hinkley
12 replies
13h50m

Analog computers have to be rebuilt if it turns out the program is wrong though, don’t they?

coder543
9 replies
13h37m

In the context of this thread, I believe even a digital computer would have to be rebuilt if the program is wrong... :P

Unless you typically salvage digital computers from the wreckage of a failed rocket test and stick them in the next prototype. If the FCC is wrong, kaboom.

Tommstein
6 replies
13h18m

Presumably they meant a program being discovered to be wrong before the computer was actually launched. And meant literally building a whole new computer, not just recompiling a program.

KMag
4 replies
12h43m

For the Apollo Guidance Computer, changing the program meant manually re-weaving wires through or around tiny magnet rings. A good part of the cost of the computer was the time spent painstakingly weaving the wires to store the program.

denton-scratch
1 replies
3h20m

Pardon me, but why would you have to re-weave wires around magnetic rings? The magnetic rings are for storing data; the whole point is that you can change the data without rewiring the memory. If you have to re-wire permanent storage (e.g. program storage), that's equivalent to creating a mask ROM, which is basically just two funny-shaped sheets of conductor. There's no need for magnetic rings.

KMag
0 replies
1h5m

No, I'm not talking about magnetic core memory. Core rope memory also used little magnetic rings.

See https://en.wikipedia.org/wiki/Core_rope_memory . Input wires were energized, and they were coupled (or not) to the output wires depending on whether they shared a magnetic ring.
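
A rough sketch of the encoding idea (hypothetical word layout, just to show why changing the program meant re-weaving):

    # Core rope ROM: a word's bits are defined by which cores its sense wires
    # thread. A bit reads 1 if the sense wire passes through the core that is
    # energized for that address, 0 if it bypasses it.
    # rope[addr] = set of bit positions whose sense wires thread core `addr`.
    rope = {0: {0, 2, 3}, 1: {1}, 2: set()}     # hypothetical 3-word, 4-bit ROM

    def read_word(addr):
        return sum(1 << bit for bit in rope[addr])

    print([f"{read_word(a):04b}" for a in rope])   # ['1101', '0010', '0000']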

neodypsis
0 replies
12h14m

There's a very nice video about the assembly lines MIT made just for building the Apollo computer [0].

0. https://www.youtube.com/watch?v=ndvmFlg1WmE

Anarch157a
0 replies
9m

Only if the bug was caught after the computer had been assembled for the mission. For development, they used a simulator. Basically, a cable connected to a mainframe, with the bigger computer simulating the signals a bundle of core rope would produce.

hinkley
0 replies
13h8m

Yeah, though to be fair, some of the programs Apollo ran were on hand-woven ROMs, so I may be making too fine a distinction. The program itself was built, not compiled. If we are comparing with today, it would just be installed, not constructed.

scaredginger
0 replies
13h18m

I'm pretty sure you can perform tests and find defects without actually flying the rocket

nothercastle
0 replies
3h47m

I had assumed it meant simpler things, like self-balancing pneumatic or mechanical components that always put you at the correct ratio, sort of like a carburetor vs. fuel injection.

nothercastle
0 replies
13h37m

Yes, though they tend to be mechanically tuned. So something like a pneumatic computer will get tuned to operate in some range of inputs, and you probably bench-prototype it before you mass-produce it.

gumby
0 replies
9h45m

Typically they were reprogrammed by changing the jumpers. The analogous digital change would be replacing the band of cards in a Jacquard loom.

Much less than “rebuilding”.

There have been some hybrids too.

KRAKRISMOTT
7 replies
13h14m

You forgot the most important one, the human computers at ground control.

Who said women can't do math?

https://www.smithsonianmag.com/science-nature/history-human-...

asylteltine
3 replies
12h7m

> Who said women can't do math?

Nobody

shermantanktop
1 replies
10h23m

A quick search will show you many many examples that say otherwise.

https://www.dailymail.co.uk/news/article-524390/The-women-ad...

Granted, that same search will show you many examples of content accusing unnamed other people of having this attitude.

https://www.science.org/content/article/both-genders-think-w...

It’s an antiquated notion in my mind, but I don’t think it is a thing of the past.

wongarsu
0 replies
7h20m

"Women can't do basic math" being an antiquated notion is even weirder. As GP pointed out, companies used to employ lots of predominantly female computers. As a consequence, the first programmers and operators of digital computers were also predominantly women, even if the engineers building them were mostly men.

Women being bad at advanced math would make sense as an antiquated notion, but those in charge of hiring decisions until about 50 years ago evidently thought women were great at basic math.

The study you linked showing that women lag behind men in math to a degree proportional to some gender disparity metric is also interesting, but doesn't really tell us how we got here.

creatonez
0 replies
7h53m

No one says it, but our implicit biases do - https://ilaba.wordpress.com/2013/02/09/gender-bias-101-for-m...

mulmen
1 replies
13h4m

> Who said women can't do math?

The straw man?

legostormtroopr
0 replies
12h2m

Straw person?

interfixus
0 replies
10h24m
creatonez
0 replies
8h7m

Some info about the Flight Control Computer:

The Flight Control Computer (FCC) was an entirely analog signal processing device, using relays controlled by the Saturn V Switch Selector Unit to manage internal redundancy and filter bank selection. The FCC contained multiple redundant signal processing paths in a triplex configuration that could switch to a standby channel in the event of a primary channel comparison failure. The flight control computer implemented basic proportional-derivative feedback for thrust vector control during powered flight, and also contained phase plane logic for control of the S-IVB auxiliary propulsion system (APS).

For powered flight, the FCC implemented the control law $ \beta_c = a_0 H_0(s) \theta_e + a_1 H_1(s) \dot{\theta} $, where $ a_0 $ and $ a_1 $ are the proportional and derivative gains, and $ H_0(s) $ and $ H_1(s) $ are the continuous-time transfer functions of the structural bending filters in the attitude and attitude-rate channels, respectively. In the Saturn V configuration, the gains $ a_0 $ and $ a_1 $ were not continuously scheduled; instead, discrete gain switches occurred during flight. The Saturn V FCC also implemented an electronic thrust vector cant function, using a ramp generator that vectored the S-IC engines outboard approximately 2 degrees beginning 20 seconds after liftoff, in order to mitigate sensitivity to thrust vector misalignment.

https://ntrs.nasa.gov/api/citations/20200002830/downloads/20...
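
A discrete-time sketch of that control law in Python, with the bending filters $ H_0 $, $ H_1 $ omitted and made-up gains, just to show the structure:

    # beta_c = a0 * theta_e + a1 * theta_dot  (bending filters omitted)
    a0, a1 = 0.8, 0.4          # hypothetical proportional / derivative gains
    dt = 0.02
    theta_cmd = 0.0            # commanded attitude (rad)
    theta, theta_prev = 0.05, 0.05

    for _ in range(3):
        theta_dot = (theta - theta_prev) / dt      # attitude rate estimate
        theta_e = theta_cmd - theta                # attitude error
        beta_c = a0 * theta_e + a1 * theta_dot     # engine gimbal command (rad)
        print(f"beta_c = {beta_c:+.4f} rad")
        theta_prev = theta
        theta += 0.2 * beta_c * dt                 # crude stand-in for vehicle dynamics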

vlovich123
29 replies
14h47m

Is the weight/cost calculus sufficiently improved now that it's cheaper to shield the processor in its entirety rather than trying to rad-harden the circuitry itself (much more expensive due to the inability to use off-the-shelf parts & limits the ability to use newer tech)?

If I recall correctly, this was one of the areas being explored by the Mars drone, although I'm not sure if Mars surface radiation concerns are different from what you would face in space.

thebestmoshe
23 replies
14h30m

Isn’t this basically what SpaceX is doing?

The flight software is written in C/C++ and runs in the x86 environment. For each calculation/decision, the "flight string" compares the results from both cores. If there is an inconsistency, the string is bad and doesn't send any commands. If both cores return the same response, the string sends the command to the various microcontrollers on the rocket that control things like the engines and grid fins.

https://space.stackexchange.com/a/9446/53026
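
A minimal sketch of that self-checking-pair idea (not SpaceX's actual code; names and numbers are made up): a string only speaks when its two cores agree.

    # Self-checking pair: run the same computation on two cores and only
    # forward the command if the results match; otherwise stay silent and
    # let the other flight strings carry the vote.
    def flight_computation(sensor_value):
        return sensor_value * 2 + 1                 # stand-in for the real control law

    def flight_string(sensor_value):
        core_a = flight_computation(sensor_value)   # same code, core A
        core_b = flight_computation(sensor_value)   # same code, core B
        return core_a if core_a == core_b else None # None = "string is bad, say nothing"

    print(flight_string(21))                        # 43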

jojobas
15 replies
14h27m

That sounds way too low. Modern fly-by-wire planes are said to have 12-way voting.

p-e-w
5 replies
13h51m

I don't understand this. If two or more computers fail in the same way simultaneously, isn't it much more likely that there is a systemic design problem/bug rather than some random error? But if there is a design problem, how does having more systems voting help?

jojobas
0 replies
13h1m

Which is why different sets of computers will run software developed by independent groups on different principles, so that they are very unlikely to fail simultaneously.

gurchik
0 replies
13h24m

Having at least 3 computers allows you the option to disable a malfunctioning computer while still giving you redundancy for random bit flips or other environmental issues.

etrautmann
0 replies
13h37m

The multi processor voting approach seeks to solve issues introduced by bit flips caused by radiation, not programming issues.

adastra22
0 replies
13h4m

They are not going to fail the same way simultaneously. This is protecting against cosmic ray induced signal errors within the logic elements, not logic errors due to bad software.

GuB-42
0 replies
13h23m

It is possible for a random error to affect two computers simultaneously. If they are made on the same assembly line, they may fail in exactly the same way, especially if they share the same wires.

That's the reason I sometimes see it recommended for RAID systems to avoid buying all the same disks at the same time: since they will be used in the same way in the same environment, there is a good chance of them failing at the same time, defeating the point of a redundant system.

Also, to guard against bugs and design problems, critical software is sometimes developed twice or more by separate teams using different methods. So you may have several combinations of software and hardware. You may also have redundant boards in the same box, and also redundant boxes.

jcalvinowens
3 replies
13h46m

> Modern fly-by-wire planes are said to have 12-way voting

Do you have a source for that? Everything I've ever read about Airbus says the various flight control systems are doubly redundant (three units). Twelve sounds like it would be far beyond diminishing returns...

jojobas
1 replies
12h56m

That was word of mouth. This website says 5 independent computers, of which 2 use different hardware and software so as not to fail in the same fashion.

https://www.rightattitudes.com/2020/04/06/airbus-flight-cont...

I'd imagine every computer relies on redundant stick/pedal encoders, which is how a 12-way notion appeared.

jcalvinowens
0 replies
12h36m

That blog isn't very authoritative, and doesn't go into any detail at all.

> I'd imagine every computer relies on redundant stick/pedal encoders, which is how a 12-way notion appeared.

That's disingenuous at best. The lug nuts on my car aren't 20x redundant... if you randomly loosen four, catastrophic failure is possible.

numpad0
0 replies
7h13m

This shallow dismissal sounds "sus". It's just off.

dikei
2 replies
14h1m

It's more complicated than that; in the link, they described it better:

> The microcontrollers, running on PowerPC processors, received three commands from the three flight strings. They act as a judge to choose the correct course of actions. If all three strings are in agreement the microcontroller executes the command, but if 1 of the 3 is bad, it will go with the strings that have previously been correct.

This is a variation of Byzantine Tolerant Concensus, with a tie-breaker to guarantee progress in case of an absent voter.
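
A sketch of that voter as described in the quote (made-up structure, not the flight code): majority wins, and on total disagreement it falls back to whichever string has the better track record.

    # Voter on the microcontroller side: three flight strings each send a
    # command; execute the majority, and on disagreement prefer the string
    # with the better track record.
    from collections import Counter

    good_history = {"A": 10, "B": 10, "C": 9}   # times each string was previously correct

    def vote(commands):
        # commands: dict like {"A": cmd, "B": cmd, "C": cmd}
        counts = Counter(commands.values())
        winner, n = counts.most_common(1)[0]
        if n >= 2:                              # at least 2 of 3 agree
            return winner
        # total disagreement: trust the historically most reliable string
        best = max(commands, key=lambda s: good_history[s])
        return commands[best]

    print(vote({"A": "gimbal+1", "B": "gimbal+1", "C": "gimbal-3"}))  # gimbal+1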

mcbutterbunz
0 replies
13h40m

I’m curious how often the strings are not in agreement. Is this a very rare occurrence or does it happen often?

denton-scratch
0 replies
3h8m

> Byzantine Tolerant Concensus

I was taken to task for mis-spelling "consensus"; I used to spell it with two 'c's and two 's's, like you. It was explained to me that it's from the same root as "consent", and that's how I remember the right spelling now.

somethingsaid
1 replies
14h5m

If you read the link it’s actually two cpu cores on a single cpu die each returning a string. Then 3 of those cpus send the resulting string to the microprocessors which then weigh those together to choose what to do. So it’s 6 times redundant in actuality.

SV_BubbleTime
0 replies
13h24m

That’s not 6x though.

It's a more solid 3x, or 3x+3y: if you had a power failure at a chip, it doesn't take the 6x down to 5x. It makes it 4x with the two remaining PHY units, because two logical cores went down with one error.

The x being physical units, and the y being CPUs in lockstep so that the software is confirmed to not bug out somewhere.

It’s 6x for the calculated code portion only, but 3x for CPU and 1-3x for power or solder or circuit board.

I know it's pretty pedantic, but I would call it by the lowest figure for any layer, which is likely 2-3x.

gumby
6 replies
9h42m

Seems risky. I remember the automated train control system for the Vienna Hauptbahnhof (main train station) had an x86 and a SPARC, one programmed in a procedural language and one in a production language. The idea was to make it hard to have the same bug in both systems (which could lead to a false positive in the voting mechanism).

ThePowerOfFuet
5 replies
9h2m

This is a great technique to avoid common-mode failures.

kqr
4 replies
8h1m

Do you have data to back that claim up? I remember reading evidence to the contrary, namely that programmers working on the same problem -- even in different environments -- tend to produce roughly the same set of bugs.

The conclusion of that study was that parallel development mainly accomplishes a false sense of security, and most of the additional reliability in those projects came from other sound engineering techniques. But I have lost the reference, so I don't know how much credibility to lend my memory.

Dah00n
1 replies
5h43m

Isn't this exactly what aeroplanes do? Two or more control systems made in different hardware, etc?

kqr
0 replies
4h49m

I'm not saying people aren't doing it! I'm just not sure it has the intended effect.

(Also to protect against physical failures it works, because physical failures are more independent than software ones, as far as I understand.)

gumby
0 replies
3h27m

That was the reason for the different programming paradigms (Algol-like vs Prolog-like), to reduce the probability.

fanf2
0 replies
1h35m

After some searchengineering I found Knight and Leveson (1986) “AN EXPERIMENTAL EVALUATION OF THE ASSUMPTION OF INDEPENDENCE IN MULTI-VERSION PROGRAMMING”, which my memory tells me is the classic paper on common failure modes in reliability via N-version software, which I was taught about in my undergrad degree: http://sunnyday.mit.edu/papers.html#ft

Leveson also wrote the report on Therac 25.

jojobas
1 replies
14h28m

Aren't high energy space particles a pain in a way that the more shielding you have, the more secondary radiation you generate?

lazide
0 replies
13h27m

It depends on the type of shielding. For gamma radiation, lead-only shielding is a definite problem this way, as it is for neutrons and high-speed charged particles/cosmic rays.

Water less so.

adgjlsfhk1
1 replies
13h33m

one thing worth remembering is that a bigger computer runs into a lot more radiation. the cortex m0 is about 0.03 mm^2 vs about 0.2 m^2 for the Apollo guidance computer. as such, the m0 will see about 6 million times less radiation.
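
Spelling out that ratio, taking the quoted areas at face value:

    \frac{0.2\ \text{m}^2}{0.03\ \text{mm}^2}
      = \frac{200{,}000\ \text{mm}^2}{0.03\ \text{mm}^2}
      \approx 6.7 \times 10^{6}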

bpye
0 replies
13h32m

Aren't the smaller transistors going to be more susceptible to damage and bit flips though?

kens
0 replies
14h17m

It's always been an option to use shielding rather than rad-hard chips or in combination. RCA's SCP-234 aerospace computer weighed 7.9 pounds, plus 3 pounds of lead sheets to protect the RAM and ROM. The Galileo probe used sheets of tungsten to protect the probe relay receiver processor, while the Galileo plasma instrument used tantalum shielding. (I was just doing some research on radiation shielding.)

kristopolous
15 replies
15h4m

When we go back to the moon, I wouldn't be surprised if Zilog-Z80s were a major part of the hardware. Well known, well understood, predictable hardware goes a long way. There are a bunch of other considerations in outer space, and z80s have proven robust and reliable there. Also I'd expect a bunch of Kermit and Xmodem to be used as well.

johnwalkr
4 replies
14h32m

I work in this space and z80, kermit and xmodem are not part of the solution. Just because this stuff is simple to the user doesn't mean it's the best suited, and there's a whole industry working on this since the Z80 days. You can buy space-qualified microcontroller boards/components with anything from a simple 8 bit microcontroller to a 64-bit, multicore, 1Ghz+ ARM cpu depending on the use case. I'm sure Z80 has been used in space, but in my time in the industry I've never heard of it.

Kermit and xmodem probably aren't what you want to use; they are actually higher-level than what is normally used and would add a lot of overhead, if they even worked at all with latencies that can reach 5-10s. Search for the keyword "CCSDS" to get hints about the data protocols used in space.

kristopolous
3 replies
14h25m

I worked in it 20 years ago building diagnostic and networking tools ... arm was certainly around but there was also what I talked about. Things probably changed since then.

Here's kermit in space ... coincidentally in a 20 year old article. Software I wrote supported diagnosing kermit errors.

https://www.spacedaily.com/news/iss-03zq.html

I guess now I'm old.

johnwalkr
2 replies
14h9m

Thanks for the reference! Kermit could be used locally on ISS or in a lunar mission now that I think about it, but so could SSH, web browsers or any other modern technology. But most space exploration is robotic and depends on communication to ground stations on Earth, and that is fairly standardized. Perhaps kermit will be used on the lunar surface, and that will be a simplification compared to a web browser interface. But for communication to/from Earth and Moon, there are standards in place and it would be a complication, not a simplification, to add such a protocol.

kristopolous
1 replies
14h4m

oh who knows ... I stopped working on that stuff in I think 2006. The big push then was over to CAN and something called AFDX which worked over ethernet. I was dealing with octal and bcd daily, mid 2000s.

I have no idea if people still use ARINC-429 or IRIG-B. Embedded RTOS was all proprietary back then for instance, like with VXWORKS. I'm sure it's not any more. I hated vxworks.

lambda
0 replies
11h48m

Yeah, I'm working on a fly-by-wire eVTOL project. We are using the CAN bus as our primary bus, but there are a number of off-the-shelf components like ADAHRS that we use that talk ARINC-429, so our FCCs will have a number of ARINC-429 interfaces.

But at least for the components we're developing, we have basically standardized on ARM, the TMS570 specifically, since it offers a number of features for safety critical systems and simplifies our tooling and safety analysis to use the same processor everywhere.

Z80 is pretty retro, and while I'm sure there may be some vendors who still use it, it's got to be getting pretty rare for new designs. Between all the PowerPC, Arm, and now RISC-V processors available that allow you to use modern toolchains and so on, I'd be surprised if many people were doing new designs with the Z80.

monocasa
3 replies
14h55m

I'm not sure if there's a RAD hard z80 variant.

They've got their own chips and protocols going back just as far, like https://en.wikipedia.org/wiki/MIL-STD-1553

kristopolous
2 replies
14h49m

The space shuttle used both z80 and 8086 until it ended in 2011. The international space station runs on among other chips, 80386SX-20s. IBM/BAE also has a few RADs based on POWER chips.

monocasa
1 replies
14h21m

Do you have a citation for that?

The Space Shuttle Avionics System top level documentation specifically calls out having "no Z80's, 8086s, 68000's, etc."

https://ntrs.nasa.gov/api/citations/19900015844/downloads/19...

kristopolous
0 replies
14h10m

Intel claims they did. https://twitter.com/intel/status/497927245672218624?lang=en although what's that word "some" doing in there...

And also, sigh, to demonstrate once again that when I worked in space it was 20 years ago, https://www.nytimes.com/2002/05/12/us/for-parts-nasa-boldly-... (https://web.archive.org/web/20230607141742/https://www.nytim...)

Knowing how 8086 timing and interrupts worked was still important for what I was doing in the early 2000s. I don't pretend to remember any of it these days.

GlenTheMachine
3 replies
14h49m

They won’t be. We will use RAD750’s, the flight qualified variant of the PowerPC architecture. That’s the standard high end flight processor.

https://www.petervis.com/Vintage%20Chips/PowerPC%20750/RAD75...

The next generation (at least according to NASA) will be RISC-V variants:

https://www.zdnet.com/article/nasa-has-chosen-these-cpus-to-...

kristopolous
1 replies
14h36m

The 750 is still based on a 27 year old chip and runs at half its clockspeed. The point was that spaceflight is relatively computationally modest.

demondemidi
0 replies
6h29m

Reliability is more important. Even more problematic is that many semi companies have been funneled into just a few due to decades of mergers. And all of these are chasing profits, which means jettisoning rad-hard mil-spec devices. Up until the early 2000s Intel was still making hardened versions of the 386; now they make no milspec parts.

johnwalkr
0 replies
14h16m

I wouldn't call it the standard, it's just used in designs with legacy to avoid the huge cost of re-qualification of hardware and software. It's infeasible a lot of times due to cost and power consumption. I work in the private sector in space (lunar exploration actually) and everyone is qualifying normal/automotive grade stuff, or using space-grade microcontrollers for in-house designs, with everything from 8-32bit, [1] and ready-made cpu boards[2] for more complex use cases. I'm sharing just 2 examples but there are hundreds, with variations on redundancy implemented in all kinds of ways too, such as in software, on multiple cores, on multiple chips, or on multiple soft-cpu cores on a single or multiple FPGAs.

[1] Example: https://www.militaryaerospace.com/computers/article/16726923...

[2] Example: https://xiphos.com/product-details/q8

dogma1138
0 replies
14h50m

I doubt it, and if they are used they'll be abstracted to hell behind modern commodity hardware. Apollo had no bias when it comes to HDI/MMIs, so astronauts could be trained on whatever computer interface was possible at the time.

The reason why the controls of Dragon and Orion look the way they do is that they are not far off from the modern digital cockpits of jets like the F-22 and F-35, and everyone is used to graphical interfaces and touch controls.

Having non-intuitive interfaces that go against the biases astronauts, and later on civilian contractors, already have from using such interfaces over the past 2 decades would be detrimental to overall mission success.

The other reason why they'll opt to use commodity hardware is that if we are going back to space for real now, you need to be able to build and deploy systems at an ever increasing pace.

We have enough powerful human-safety-rated hardware from aerospace and automotive that there is no need to dig up relics.

And lastly, you'll be hard pressed to find people who still know how to work with such legacy hardware at scale, and unless we drastically change the curriculum of computer science degrees around the US and the world, that list will only get smaller each year. We're far more likely to see ARM and RISC-V in space than z80's.

atleta
0 replies
14h31m
AnotherGoodName
14 replies
14h48m

Pretty much all USB chips have a fully programmable CPU when you go into the data sheets. It feels silly for simple HID or charging devices, but basic microcontrollers are cheap and actually save costs compared to ASICs.

hinkley
5 replies
13h51m

I still want to see postgres or sqlite running straight on a storage controller some day. They probably don’t have enough memory to do it well though.

lmm
4 replies
13h25m

Booting Linux on a hard drive was what, 15 years ago now?

hinkley
3 replies
13h6m

Have you ever tried to google that?

lmm
2 replies
12h56m

Yes - up until a few years ago it was easy to find by googling, but now google has degraded to the point where I can't manage it.

winrid
0 replies
12h12m

Do you mean this? https://spritesmods.com/?art=hddhack&page=1

Searched "run linux on hard drive without cpu or ram" on Google - third result.

Dah00n
0 replies
5h38m

Yeah sure, people on HN say this all the time, but in reality it isn't true, like a lot of comments that get repeated on here. I found it on the first try.

petermcneeley
4 replies
13h48m

I would also argue that this is another example of software eating the world. The role of the electrical engineer is diminished day by day.

etrautmann
1 replies
13h34m

Nah - there are lots of places where you need EEs still. Anything that interfaces with the world. Having programmability does not move most challenges out of the domain of EE. Much of it is less visible than the output of a software role perhaps.

FredPret
0 replies
12h29m

There will always be problems that can only be solved by an EE, chem eng, mech eng, etc.

But the juiciest engineering challenges involve figuring out business logic / mission decisions. This is done increasingly in software while the other disciplines increasingly make only the interfaces.

FredPret
0 replies
12h35m

The role of the non-software engineer, not just electrical

Almondsetat
0 replies
10h34m

the role of the electrical engineer who doesn't know a thing about programming is diminished day by day*

dclowd9901
1 replies
10h41m

Yeesh. Beware the random wall wart I guess.

masklinn
0 replies
10h28m

Wall warts are not even the biggest worry: https://shop.hak5.org/products/omg-cable

Shawnj2
0 replies
13h2m

Where I work they were considering using an FPGA over an MCU for a certain task but decided against it because the FPGA couldn’t reach the same low power level as the MCU

codezero
10 replies
13h55m

Weird question maybe, but does anyone keep track of quantitative or qualitative data that measures the discrepancy between consumer (commercial) and government computer technology?

TBH, it's kind of amazing that a custom computer from 50 years ago has the specs of a common IC/SoC today, but those specs scale with time.

ajsnigrutin
4 replies
13h45m

There is no difference anymore, the only difference is the scale.

Back then, consumers got nothing, governments got large computers (room sized+), then consumers got microcomputers (desktop sized), governments got larger mainframes, consumers got PCs, governments got big-box supercomputers,...

And now? Consumers get x86_64 servers, governments get x86_64 servers, and the only difference is how much money you have, how many servers can you buy and how much space, energy and cooling you need to run them.

well, "normal users" get laptops and smartphones, but geek-consumers buy servers... and yeah, I know arm is an alternative.

codezero
1 replies
12h20m

I was asking about anyone tracking the disparity between nation-state computing power and commercially available computing power. This seems like something that's uncontroversial.

creer
0 replies
11h4m

"Nation state" doesn't mean "country". It certainly doesn't mean "rich country".

tavavex
0 replies
12h7m

I'd argue that the difference is the price. There is still quite a bit of a difference between average consumer and business hardware, but compute power is cheap enough that the average person can afford what was previously only reserved for large companies. The average "consumer computer" nowadays is an ARM smartphone, and while server equipment is purchasable, you can't exactly hit up your local electronics store to buy a server rack or a server CPU. You can still get those things quite easily, but I wouldn't say their main goal is being sold to individuals.

creer
0 replies
11h12m

Let's not go too far either. Money and determination still buys results. A disposable government system might be stuffed with large FPGAs, ASICs and other exotics. Which would rarely be found in any consumer system - certainly not in quantity. A government system might pour a lot of money in the design of these and the cost of each unit. So, perhaps not much difference for each standard CPU and computer node but still as much difference as ever in the rest?

c0pium
3 replies
13h38m

Why would you expect there to be one? It’s all the same stuff and has been for decades.

codezero
2 replies
12h21m

I expect nation state actors to have immensely more access to computing power than the commercial sector, is that controversial?

ianburrell
0 replies
1h0m

If you look at the top supercomputers, US national labs occupy most of the top 10. But they aren’t enormously larger than the others. They are also built of out of standard parts. I’m surprised that they are recent, I expected the government to be slow and behind.

What do you expect US government to do with lots of computing power? I wouldn’t expect military to need supercomputers. Maybe the NSA would have a lot for cracking something or surveillance. But the big tech companies have more.

ghaff
0 replies
11h39m

Because the big ones spend more money. I expect Google etc. has access to more computing power than most nation states do.

bregma
0 replies
5h37m

By 'government' do you mean banks and large industrial concerns (like the "industry" in "military-industrial complex")? The latter are where all the big iron and massive compute power has always been.

Of course, from a certain point of view, they're many of the same people and money.

blauditore
5 replies
8h22m

I'm a bit tired of all the sensationalist "look what landed on the moon vs. today's hardware" comparisons. The first airplanes didn't have any sort of computer on board, so computation power is not the single deciding factor on the performance and success of such an endeavor.

The software (and hardware) of the Apollo missions was very well-engineered. We all know computation became ridiculously more powerful in the meantime, but that wouldn't make it easy to do the same nowadays. More performance doesn't render the need for good engineering obsolete (even though some seem to heavily lean on that premise).

hnlmorg
2 replies
7h6m

I don’t think you’re reading these articles in the right spirit if that’s your take away from them.

What I find more interesting is to compare how complicated the tech we don’t think about has become. It’s amazing that a cable, not a smart device or even 80s digital watch, but a literal cable, has as much technology packed into it as Apollo 11 and we don’t even notice.

Playing devils advocate for your comment, one of the (admittedly many) reasons going to the moon is harder than charging a USB device is because there are not off-the-shelf parts for space travel. If you had to build your USB charger from scratch (including defining the USB specification for the first time) each time you needed to charge your phone, I bet people would quickly talk about USB cables as a “hard problem” too.

That is the biggest takeaway we should get from articles like this. Not that Apollo 11 wasn’t a hugely impressive feat of engineering. But that there is an enormous amount of engineering in our every day lives that is mass produced and we don’t even notice.

nzach
0 replies
4h4m

Your comment reminded me of this[0] video about the Jerry Can.

A simple-looking object, but in reality it had a lot of thought put in to get to this form.

It also goes along the lines of "Simplicity is complicated"[1].

[0] - https://www.youtube.com/watch?v=XwUkbGHFAhs

[1] - https://go.dev/talks/2015/simplicity-is-complicated.slide#1

argiopetech
0 replies
4h27m

This is actually about the wall warts the cable could be plugged into.

Otherwise, I completely agree.

bryancoxwell
0 replies
6h41m

> The software (and hardware) of the Apollo missions was very well-engineered.

I think this is the whole point of articles like this. I don’t think it’s sensationalist at all to compare older tech with newer and discuss how engineers did more with less.

BSDobelix
0 replies
7h52m

> first airplanes didn't have any sort of computer on board,

Sure they had and often still have, it's called wetware.

> so computation power is not the single deciding factor on the performance and success of such an endeavor

The endeavor to charge a phone?

orliesaurus
4 replies
14h50m

54 years ago - wow - was the Apollo 11 Moon Landing Guidance Computer (AGC) chip the best tech had to offer back then?

GlenTheMachine
1 replies
14h47m

Yes, given the size, power, and reliability constraints. There were, of course, far more powerful computers around… but not ones you could fit in a spacecraft the size of a Volkswagen Beetle.

The Apollo program consumed something like half of the United States’ entire IC fabrication capacity for a few years.

https://www.bbc.com/future/article/20230516-apollo-how-moon-...

db48x
0 replies
14h34m

The AGC was 2 ft³. I believe the volume was written into the contract for development of the computer, and was simply a verbal guess by the owner of the company during negotiations. On the other hand, they had been designing control systems for aircraft and missiles for over a decade at that point so it was not an entirely uninformed guess.

The amazing thing is that they did manage to make it fit into 2 ft³, even though the integrated circuits it used had not yet been invented when the contract was written.

kens
0 replies
14h24m

The Apollo Guidance Computer was the best technology when it was designed, but it was pretty much obsolete by the time of the Moon landing in 1969. Even by 1967, IBM's 4 Pi aerospace computer was roughly twice as fast and half the size, using TTL integrated circuits rather than the AGC's RTL NOR gates.

dgacmu
0 replies
14h38m

Yes when accounting for size. If you wanted something that was the size of a refrigerator, you could buy a data general Nova in 1969: https://en.m.wikipedia.org/wiki/Data_General_Nova

8KB of RAM! But hundreds of pounds vs 70lb for the AGC with fairly comparable capability (richer instructions/registers, lower initial clock rate).

The AGC was quite impressive in terms of perf/weight

ssgodderidge
3 replies
14h12m

The Anker PowerPort Atom PD 2 USB-C Wall Charger CPU is 563 times faster than the Apollo 11 Guidance Computer

Wild to think the thing that charges my devices could be programmed to put a human on the moon

oldgradstudent
0 replies
13h36m

Wild to think the thing that charges my devices could be programmed to put a human on the moon

With a large enough lithium battery, a charger can easily take you part of the way there.

fuzzfactor
0 replies
9h15m

The proven way to fly people to the moon and back with such low-powered computers was to have a supporting cast of thousands who were naturally well qualified, using their personal slide rules to smoothly accomplish things that many of today's engineers would stumble over with their personal computers.

Plenty of engineers on the ground had no computers, and the privileged ones who did had mainframes, not personal at all.

A computer was too valuable to be employed doing anything that didn't absolutely need a computer, most useful for precision or speed of calculation.

But look what happens when you give something like a mainframe to somebody who is naturally good at aerospace when using a slide rule to begin with.

Someone
0 replies
3h53m

From that data point, we don’t know for sure. The Apollo Guidance Computer was programmed to put a human on the moon, but never used to actually do it, so no computer ever “put a human on the moon”. All landings used “fly by wire”, with an astronaut at the stick, and the thrusters controlled by software.

https://www.quora.com/Could-the-Apollo-Guidance-Computer-hav...:

“P64. At about 7,000 feet altitude (a point known as “high gate”), the computer switched automatically to P64. The computer was still doing all the flying, and steered the LM toward its landing target. However, the Commander could look at the landing site, and if he didn’t like it, could pick a different target and the computer would alter its course and steer toward that target.

At this point, they were to use one of three programs to complete the landing:

P66. This was the program that was actually used for all six lunar landings. A few hundred feet above the surface the Commander told the computer to switch to P66. This is what was commonly known as “manual mode”, although it wasn’t really. In this mode, the Commander steered the LM by telling the computer what he wanted to do, and the computer made it happen. This continued through landing.

P65. Here’s the automatic mode you asked about. If the computer remained in P64 until it was about 150 feet above the surface, then the computer automatically switched to P65, which took the LM all the way to the surface under computer control. The problem is that the computer had no way to look for obstacles or tell how level its target landing site was. On every flight, the Commander wanted to choose a different spot than where the computer was taking the LM, and so the Commander switched to P66 before the computer automatically switched to P65. [Update: The code for P65 was removed from the AGC on later flights. The programmers needed memory for additional code elsewhere, and the AGC was so memory-constrained that adding code one place meant removing something else. By that point it was obvious that none of the crews was ever going to use the automatic landing mode, so P65 was removed.]

P67. This is full-on honest-to-goodness manual mode. In P66, even though the pilot is steering, the computer is still in the loop. In P67, the computer is totally disengaged. It is still providing data, such as altitude and descent rate, but has no control over the vehicle.”

tavavex
2 replies
12h23m

I'm curious - are there any ways of finding out the precise hardware that's used in these small-scale devices that are generally not considered to be computers (like smartphone chargers) without actually having to take them apart? Are there special datasheets, or perhaps some documents for government certification, or anything like it? I've always been fascinated with the barebones, low-spec hardware that runs mundane electronic things, so I want to know where the author got all that information from.

ArcticLandfall
0 replies
15m

Wireless devices go through an FCC certification process that publishes internal photos, effectively teardowns. And there are iFixit teardown posts.

AnotherGoodName
0 replies
1h5m

Generally no, but taking them apart to look at the chips inside is easy. FWIW, everything has a CPU in it these days. Actually, that's been true since the 70s. Your old 1970s keyboard had a fully programmable CPU in it. Typically an Intel MCS-48 variant https://en.wikipedia.org/wiki/Intel_MCS-48#Uses

Today it's even more the case. You have fully programmable CPUs in your keyboard, trackpad, mouse, all usb devices, etc.

somat
2 replies
11h9m

The article was a lot of fun; however, I felt it missed an important aspect of the respective computers: I/O channels. I don't know about the USB charge controllers, but the AGC as a flight computer had a bunch of inputs and outputs. Does a Richtek RT7205 have enough I/O?

theon144
0 replies
8h11m

I have no clue as to the I/O requirements of the AGC, but I imagine that with ~500x the performance, a simple I/O expander could fill the gap?
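
For illustration only, here's a rough sketch in C of that idea: giving a small MCU 16 extra output lines through an MCP23017 I²C GPIO expander. This assumes the charger MCU has some serial bus to spare; the register map is the MCP23017's, while i2c_write_reg() is a hypothetical stand-in for whatever bus driver the MCU vendor would actually provide.

    #include <stdint.h>

    /* Hypothetical stand-in for the MCU vendor's I2C driver; returns 0 on success. */
    static int i2c_write_reg(uint8_t dev_addr, uint8_t reg, uint8_t value)
    {
        (void)dev_addr; (void)reg; (void)value;
        return 0; /* replace with a real bus transaction */
    }

    /* MCP23017 registers (BANK=0 addressing). */
    #define MCP23017_ADDR   0x20  /* 7-bit address with A2..A0 strapped low */
    #define MCP23017_IODIRA 0x00  /* direction: 1 = input, 0 = output */
    #define MCP23017_IODIRB 0x01
    #define MCP23017_OLATA  0x14  /* output latches */
    #define MCP23017_OLATB  0x15

    /* Configure all 16 expander pins as outputs. */
    static int expander_init(void)
    {
        if (i2c_write_reg(MCP23017_ADDR, MCP23017_IODIRA, 0x00) < 0)
            return -1;
        return i2c_write_reg(MCP23017_ADDR, MCP23017_IODIRB, 0x00);
    }

    /* Drive a 16-bit pattern onto the expander's two 8-bit ports. */
    static int expander_write16(uint16_t pattern)
    {
        if (i2c_write_reg(MCP23017_ADDR, MCP23017_OLATA, (uint8_t)(pattern & 0xFF)) < 0)
            return -1;
        return i2c_write_reg(MCP23017_ADDR, MCP23017_OLATB, (uint8_t)(pattern >> 8));
    }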

jiggawatts
0 replies
10h58m

The most powerful chip in the list (Cypress CYPD4126) has 30 general-purpose I/O pins.[1]

AFAIK, this is typical of USB controller chips, which generally have about 20-30 I/O pins, but I’m sure there are outliers.

The AGC seems to have four 16-bit input registers and five 16-bit output registers[2], for a total of 144 bits of I/O.

[1] https://ta.infinity-component.com/datasheet/9c-CYPD4126-40LQ...

[2] https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Other...

jcalvinowens
2 replies
13h32m

> others point out that the LVDC actually contains triply-redundant logic. The logic gives 3 answers and the voting mechanism picks the winner.

This is a very minor point... but three of something isn't triple redundancy: it's double redundancy. Two is single redundancy, one is no redundancy.

Unless the voting mechanism can somehow produce a correct answer from differing answers from all three implementations of the logic, I don't understand how it could be considered triply redundant. Is the voting mechanism itself functionally a fourth implementation?

somat
0 replies
11h15m

I find it fascinating the two different schools of thought exposed in the LVDC and the AGC.

The LVDC was a highly redundant, cannot-fail design; the AGC had no redundancy and was designed to recover quickly if a failure occurred.

kens
0 replies
13h15m

The official name for the LVDC's logic is triple modular redundant (TMR). The voting mechanism simply picks the majority, so it can tolerate one failure. The LVDC is a serial computer, which makes voting simpler to implement, since you're only dealing with one bit at a time.
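
For the curious, the voter itself is tiny logic. A rough sketch in C of bit-serial majority voting (my illustration of the idea only; the word width is arbitrary, not the LVDC's):

    #include <stdint.h>

    /* Majority of three bits: the output matches at least two inputs,
       so one faulty channel is always outvoted. */
    static inline unsigned vote_bit(unsigned a, unsigned b, unsigned c)
    {
        return (a & b) | (a & c) | (b & c);
    }

    /* In a serial machine the same one-bit voter is reused for every bit
       as the word streams through; here three redundant copies of a word
       are voted bit by bit (16 bits chosen only for illustration). */
    static uint16_t vote_word(uint16_t a, uint16_t b, uint16_t c)
    {
        uint16_t out = 0;
        for (int i = 0; i < 16; i++) {
            unsigned bit = vote_bit((a >> i) & 1u, (b >> i) & 1u, (c >> i) & 1u);
            out |= (uint16_t)(bit << i);
        }
        return out;
    }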

daxfohl
2 replies
12h45m

So in 50 years the equivalent of a GPT-4 training cluster from today's datacenters will fit in a cheap cable, and it will run over 100 times faster than a full cluster does today.

ko27
0 replies
4h16m

Yeap, that's how exponential growth works. It just never stops.

FredPret
0 replies
12h28m

Computronium

denton-scratch
1 replies
4h11m

the LVDC actually contains triply-redundant logic

I didn't know that was just for the LVDC.

emulate this voting scheme with 3x microcontrollers with a 4th to tally votes will not make the system any more reliable

I think that's clear enough; the vote-tallier becomes a SPOF. I'm not sure how Tandem and Stratus handled discrepancies between their (twin) processors. Stratus used a pair of OTC 68K processors, which doesn't seem to mean voting; I can't see how you'd resolve a disagreement between just two voters.

I can't see how you make a voting-based "reliable" processor from OTC CPU chips; I imagine it would require each CPU to observe the outputs of the other two, and tell itself to stop voting if it loses a ballot. Which sounds to me like custom CPU hardware.

Any external hardware for comparing votes, telling a CPU to stop voting, and routing the vote-winning output, amounts to a vote-tallier, which is a SPOF. You could have three vote-talliers, checking up on one-another; but then you'd need a vote-tallier-tallier. It's turtles from then on down.

In general, having multiple CPUs voting as a way of improving reliability seems fraught, because it increases complexity, which reduces reliability.

Maybe making reliable processors amounts to just making processors that you can rely on.

ryukoposting
0 replies
3h48m

I can't see how you'd resolve a disagreement between just two voters.

Tell them both to run the calculation again, perhaps?
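
A minimal sketch of that retry idea in C, assuming disagreements come from transient faults; compute() is a placeholder for the real work, and the retry limit is arbitrary:

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder for the real computation. In real hardware this would run
       independently on two processors; in this toy model both calls run on
       the same CPU, so they always agree. */
    static uint32_t compute(uint32_t input)
    {
        return input * 2654435761u; /* arbitrary placeholder work */
    }

    /* Run the calculation on both units and compare. On a mismatch, assume
       a transient fault and retry; if the units never agree, report failure
       so a higher level can halt or fail over. */
    static bool dual_compute(uint32_t input, uint32_t *result)
    {
        for (int attempt = 0; attempt < 3; attempt++) {
            uint32_t a = compute(input);  /* result from "CPU A" */
            uint32_t b = compute(input);  /* result from "CPU B" */
            if (a == b) {
                *result = a;
                return true;
            }
        }
        return false;
    }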

Klaster_1
1 replies
11h15m

Can you run Doom on a USB-C charger? Did anyone manage to?

yjftsjthsd-h
0 replies
10h51m

I feel like I/O would be the real pain point there. I suppose if you throw out performance you could forward X/VNC/whatever over serial (possibly with TCP/IP in the middle; SLIP is ugly but so flexible), but that's unlikely to be playable.

nolroz
0 replies
14h58m

Hi Forrest!

m463
0 replies
15h0m

It is amazing they were able to miniaturize a computer to fit into a spaceship.

Previously, calculators were a room full of people, all of whom required food, shelter, clothing and ... oxygen.

jackhack
0 replies
13h40m

It's a fun article, but I would have liked to see at least a brief comparison of power consumption among the four designs.

dang
0 replies
13h0m

Discussed at the time (of the article):

Apollo 11 Guidance Computer vs. USB-C Chargers - https://news.ycombinator.com/item?id=22254719 - Feb 2020 (205 comments)

cubefox
0 replies
7h52m

If anyone is interested, there is a 1965 documentary about an Apollo computer:

https://youtube.com/watch?v=ndvmFlg1WmE

continuational
0 replies
10h37m

Seems like with cables this powerful, it might make sense for some devices to simply run their logic on the cable CPU, instead of coming with their own.

ashvardanian
0 replies
14h15m

Remarkable comparison! I'm surprised it had only one parity bit per 15-bit word. Even on Earth today we keep two parity bits per 8-bit word in most of our servers.
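
For reference, that single check bit is just odd parity over the 15 data bits: the 16th bit is set so the stored word always contains an odd number of 1s, which catches any single-bit error (but can't correct it). A minimal sketch in C, my illustration rather than AGC code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Parity bit for a 15-bit data word: chosen so the full 16-bit word
       contains an odd number of 1s. */
    static uint16_t parity_bit(uint16_t data15)
    {
        unsigned ones = 0;
        for (int i = 0; i < 15; i++)
            ones += (data15 >> i) & 1u;
        return (ones & 1u) ? 0 : 1;
    }

    /* Check a stored 16-bit word (data + parity) on readback: an even
       number of 1s means some single bit has flipped. */
    static bool word_ok(uint16_t word16)
    {
        unsigned ones = 0;
        for (int i = 0; i < 16; i++)
            ones += (word16 >> i) & 1u;
        return (ones & 1u) == 1u;
    }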

IBM estimated in 1996 that one error per month per 256 MiB of RAM was expected for a desktop computer.

https://web.archive.org/web/20111202020146/https://www.newsc...

Tommstein
0 replies
12h42m

Too bad the link to Jonny Kim's biography is broken (one that works: https://www.nasa.gov/people/jonny-kim/). He has to be one of the most impressive humans who has ever lived. Amongst other things, a decorated Navy SEAL, Harvard medical doctor, and astronaut. Sounds like a kid slapping together the ultimate G.I. Joe.

SV_BubbleTime
0 replies
13h22m

But it is another step toward increasing complexity.

I wish more people understood this, and could better see the coming crisis.

ReptileMan
0 replies
9h16m

The great thing about the AI age is that we are once again performance constrained, so people are starting to rediscover the lost art of actually optimizing a program or runtime (the last such age was the waning days of the PS2; those guys made GoW 2 run on 32 megs of RAM ... respect).

Havoc
0 replies
7h0m

It has certainly felt like the limit is software/PEBKAC for a long while. Until lately with LLMs...that does make me feel "wish I had a bigger hammer" again.