Apollo 11 spacecraft contains 4 computers
Analog computers don't get the respect they deserve. There's one more computer, the FCC. The Flight Control Computer is an analog computer in the Saturn V that controlled the rocket gimbals. It's a two-foot cylinder weighing almost 100 pounds.
i think it's (unintentionally) misleading to describe analog 'computers' as 'computers'. what distinguishes digital computers from other digital hardware is that they're turing-complete (if given access to enough memory), and there isn't any similar notion in the analog domain
the only reason they have the same name is that they were both originally built to replace people cranking out calculations on mechanical desk calculators, who were also called 'computers'
the flight control 'computer' has more in common with an analog synthesizer module than it does with a cray-1, the agc, an arduino, this laptop, or these chargers, which are by comparison almost indistinguishable
I wonder why you can't make a turing complete analog computer using feedback loops.
You can: https://en.wikipedia.org/wiki/General_purpose_analog_compute...
There is still active research in the area, eg. https://www.lix.polytechnique.fr/~bournez/i.php?n=Main.Publi...
a universal turing machine is a particular machine which can simulate all other turing machines. the gpac, by contrast, is a family of machines: all machines built out of such-and-such a set of parts
you can't simulate an 11-integrator general-purpose analog computer or other differential analyzer with a 10-integrator differential analyzer, and you can't simulate a differential analyzer with 0.1% error on a (more typical) differential analyzer with 1% error, unless it's 100× as large (assuming the error is gaussian)
the ongoing research in the area is of course very interesting but a lot of it relies on an abstraction of the actual differential-analyzer problem in which precision is infinite and error is zero
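as a rough numerical check of that 100× figure (a sketch, assuming independent gaussian errors that you are allowed to average):

    import numpy as np

    rng = np.random.default_rng(0)
    # one integrator whose output carries 1% gaussian error
    single = rng.normal(1.0, 0.01, size=100_000)
    # the average of 100 such integrators computing the same quantity in parallel
    averaged = rng.normal(1.0, 0.01, size=(100_000, 100)).mean(axis=1)
    print(single.std())    # ~0.01, i.e. 1% error
    print(averaged.std())  # ~0.001, i.e. 0.1% error, at 100x the hardware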
Sure, you cannot easily simulate another analog computer, but that is not the requirement. The requirement is Turing completeness, which can be achieved.
It can? How?
as i understand it, with infinite precision; the real numbers within some range, say -15 volts to +15 volts, have a bijection to infinite strings of bits (some infinitesimally small fraction of which are all zeroes after a finite count). with things like the logistic map you can amplify arbitrarily small differences into totally different system trajectories; usually when we plot bifurcation diagrams from the logistic map we do it in discrete time, but that is not necessary if you have enough continuous state variables (three is obviously sufficient but i think you can do it with two)
given these hypothetical abilities, you can of course simulate a two-counter machine, but a bigger question is whether you can compute anything a turing machine cannot; after all, in a sense you are doing an infinite amount of computation in every finite interval of time, so maybe you could do things like compute whether a turing machine will halt in finite time. so far the results seem to support the contrary hypothesis, that extending computation into continuous time and continuously variable quantities in this way does not actually grant you any additional computational power!
this is all very interesting but obviously not a useful description of analog computation devices that are actually physically realizable by any technology we can now imagine
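a discrete-time sketch of the sensitivity amplification described above (the logistic map at r = 4; the continuous-time version needs a few more state variables):

    # two initial conditions that differ by one part in 10^12
    r = 4.0
    x, y = 0.3, 0.3 + 1e-12
    for step in range(60):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 9:
            print(step, abs(x - y))
    # the separation grows roughly exponentially until it is of order 1,
    # after which the two trajectories are completely unrelated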
Except that infinite precision requires infinite settling time. (Which I guess is the analogue computing version of arithmetic not being O(1) even though it is usually modelled that way.)
Infinite precision is just analog's version of an infinitely-long tape.
But even with infinite precision, how do you build a universal analog computer? And how do you program it?
Right :-) I prefer to say a Turing machine’s tape is “unbounded” rather than “infinite” because it supports calculations of arbitrarily large but finite size. So in the analogue case, I would say unbounded precision and unbounded waiting time.
Infinite precision is exponentially more difficult. It's very easy to have unbounded tape, with a big buffer of tape and some kind of "factory" that makes more tape when you get near the edge. Unbounded precision? Not going to happen in a real machine. You get to have several digits and no more.
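For contrast, a toy illustration of how cheap "unbounded" digital tape is (a sketch in Python; storage only materializes where the head actually goes):

    from collections import defaultdict

    tape = defaultdict(int)   # any cell the head has never visited reads as blank (0)
    head = 0
    tape[head] = 1
    head -= 5                 # wander off the edge of everything ever written
    print(tape[head])         # 0 -- the "factory" made more tape for free
    # the analog counterpart would be conjuring extra significant digits out
    # of a voltage, which real components will not do
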
It's much worse than that. There is this little thing called Planck's constant, and there's this other little thing called the second law of thermodynamics. So even if you allow yourself arbitrarily advanced technology I don't see how you're going to make it work without new physics.
It would not be very interesting. You would lose all the interesting properties of an analog computer and end up with a poorly performing Turing machine. Still, it would have the necessary loops and branches, since you can always build digital on top of analog with some additional harness.
They both do the same thing: compute an output from given inputs. So they are properly distinguished from each other by how they do the computing. They both deserve the name 'computer'.
only in the same sense that a machinist's micrometer, an optical telescope, an analog television set, an acoustic guitar, a letterpress printing press, a car's manual transmission, a fountain pen, a nomogram, and a transistor also 'compute an output from given inputs'
do you want to call them all 'computers' now?
The arithmetic circuits alone, like adders, multipliers etc., regardless of whether they are mechanical or electronic, analog or digital, should not be called computers.
When the arithmetic circuits, i.e. the "central arithmetical part", as von Neumann called it, are coupled with a "central control part" (also von Neumann's term), i.e. with a sequencer that is connected in a feedback loop with the arithmetic part, so that the computation results can modify the sequence of computations, then that device must be called a "computer", regardless of whether the computations are done with analog circuits or with digital circuits.
What defines a computer (according to the definition already given by von Neumann, which is the right definition in my opinion) is closing the feedback loop between the arithmetic part and the control part, which raises the order of the system in comparison with a simple finite state automaton, not how those parts are implemented.
The control part must be discrete, i.e. digital, but the arithmetic part can be completely analog. Closing the feedback loop, i.e. the conditional jumps executed by the control part, can be done with analog comparators that provide the predicates tested by the conditional jumps. The state of an analog arithmetic part is held in capacitors, inductors or analog integrators, instead of digital registers.
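A toy model of that arrangement (a sketch in Python; the instruction set and values are invented for illustration):

    dt = 0.001
    charge = 0.0          # "analog" state: voltage on an integrator capacitor
    pc = 0                # the digital sequencer's program counter

    program = [
        ("integrate", 1.0),      # ramp the integrator up at 1 V/s
        ("jump_if_below", 0.5),  # comparator predicate: loop until output crosses 0.5 V
        ("integrate", -2.0),     # then one step ramping back down, twice as fast
        ("halt", None),
    ]

    while True:
        op, arg = program[pc]
        if op == "integrate":        # the analog arithmetic part
            charge += arg * dt
            pc += 1
        elif op == "jump_if_below":  # the comparator output is all the control part sees
            pc = 0 if charge < arg else pc + 1
        elif op == "halt":
            break

    print(round(charge, 3))          # ~0.498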
Several decades ago, I had to debug an analog computer during its installation, before it ran for the first time. That was in a metallurgical plant, and the analog computer provided outputs that controlled the torques of a group of multi-megawatt DC electric motors. The formulae used in the analog computations were very complex, with a large number of adders, multipliers, integrators, square-root circuits and so on, which combined inputs from many sensors.
That analog computer (made with op amps) performed a sequence of computations much more complex than the algorithms that were executed on an Intel 8080, which controlled various on-off execution elements of the system, like relays and hydraulic valves and the induction motors that powered some pumps.
The main reason why such analog computers have become obsolete is the difficulty of ensuring that the accuracy of their computations will not change due to aging and due to temperature variations. Making analog computers that are insensitive to aging and temperature raises their cost much above modern digital microcontrollers.
as you are of course aware, analog 'computers' do not have the 'central control part' that you are arguing distinguishes 'computers' from 'arithmetic circuits alone'; the choice of which calculation to perform is determined by how the computer is built, or how its plugboard is wired. integrators in particular do have state that changes over time, so the output at a given time is not a function of the input at only that time, but of the entire past, and as is well known, such a system can have extremely complex behavior (sometimes called 'chaos', though in this context that term is likely to give rise to misunderstanding)
you can even include multiplexors in your analog 'computer', even with only adders and multipliers and constants; x · (1 + -1 · y) + z · y interpolates between x and z under the control of y, so that its output is conditionally either x or z (or some intermediate state). but once you start including feedback to push y out of that intermediate zone, you've built a flip-flop, and you're well on your way to building a digital control unit (one you could probably build more easily out of transistors rather than op-amps). and surely before long you can call it a digital computer, though one that is controlling precision linear analog circuitry
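to check that identity numerically, with plain python standing in for the adders and multipliers:

    def analog_mux(x, z, y):
        # built only from adds, multiplies and constants
        return x * (1 + -1 * y) + z * y

    print(analog_mux(3.0, 7.0, 0.0))   # 3.0 -> selects x
    print(analog_mux(3.0, 7.0, 1.0))   # 7.0 -> selects z
    print(analog_mux(3.0, 7.0, 0.4))   # 4.6 -> the intermediate zone that
                                       # feedback has to push y out of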
it is very commonly the case that analog computation is much, much faster than digital computation; even today, with microprocessors a hundred thousand times faster than an 8080 and fpgas that are faster still, if you're working with submillimeter-wave signals you're going to have to do your front-end filtering, upconversion or downconversion, and probably even detection in the analog domain
Most "analog computers" have been simple, and even if they usually provided the solution of a system of ordinary differential equations, that does not require a control part, making them no more closer to a complete computer than a music box that performs a fixed sequence.
I agree that this kind of "analog computers" does not deserve the name of "computer", because they are equivalent only with the "registers + ALU" (RALU) simple automaton that is a component of a CPU.
Nevertheless, there is no reason why a digital control part cannot be coupled with an analog arithmetic part and there have existed such "analog computers", even if they have been rarely used, due to high cost and complexity.
It is not completely unlikely that such "analog computers", consisting of a digital control part and an analog arithmetic part, could be revived with the purpose of implementing low-resolution high-speed machine learning inference.
Even now, in circuits like analog-digital converters, there may be analog computing circuits, like switched-capacitor filters, which are reconfigurable by the digital controller of the ADC, based on various criteria, which may depend on the digital output of the converter or on the outputs of some analog comparators (which may detect e.g. the range of the input).
You're describing a "hybrid computer". These were introduced in the late 1950s, combining a digital processor with analog computing units. I don't understand why you and kragen want to redefine standard terms; this seems like a pointless linguistic exercise.
because 'computer' has a meaning now that it didn't have 65 years ago, and people are continuously getting confused by thinking that 'analog computers' are computers, as they understand the term 'computers', which they aren't; they're a different thing that happens to have the same name due to a historical accident of how the advent of the algorithm happened
this is sort of like how biologists try to convince people to stop calling jellyfish 'jellyfish' and starfish 'starfish' because they aren't fish. the difference is that it's unlikely that someone will get confused about what a jellyfish is because they have so much information about jellyfish already
my quest to get people to call cellphones 'hand computers' is motivated by the same values but is probably much more doomed
"Hybrid computer" cannot be considered as a standard term, because it has been used ambiguously in the past.
Sometimes it has been applied to the kind of computers mentioned by me, with a digital control part and a completely analog arithmetic part.
However it has also been frequently used to describe machines with hybrid arithmetic parts, e.g. ones that included both digital registers and adders and an analog section, for instance with analog integrators, used to implement signal-processing filters or to solve differential equations.
IMO, "hybrid computer" is appropriate only in the second sense, for hybrid arithmetic parts.
The control part of a CPU can be based only on a finite state automaton, so there is no need for any term to communicate this.
On the other hand, the arithmetic part can be digital, analog or hybrid, so it is useful to speak about digital computers, analog computers and hybrid computers, based on that.
i agree completely; thank you for clarifying despite my perhaps confrontational tone
in some sense almost any circuit in which a digital computer controls an analog multiplexer chip or a so-called digital potentiometer could qualify. and cypress's psoc line has a bit of analog circuitry that can be thus digitally reconfigured
What's wrong with that? They are. We can always make the finer distinction of "Von Neumann architecture inspired digital electronic computer" if you wish to exclude the examples you've given. After all, anything which transforms a particular input to a particular output in a consistent fashion could be considered a computer which implements a particular function. I would say: don't confuse the word's meaning with the object's function. Simply choose a context in which a word refers to a particular meaning, adapt to others' contexts and translate, and deal with the fact that there is no firm division between computer and not-computer out in the world somewhere apart from people and their context-rich communications. If the context in which you're operating with an interlocutor is clear enough for you to jump to a correction of usage, simply don't, beyond verifying that your translation is correct, of course. As you're already doing this, likely without realising it, taking care to do so consciously will make your communications more efficient, congenial, and illuminating than they otherwise would be.
This is the double-edged sword of deciding to widen (or narrow) the meaning of a term which already has a conventional meaning.
By doing so, you get to make a point—perhaps via analogy, perhaps via precision, perhaps via pedantry—which is illuminating for you but now confusing for your reader. And to explain yourself, you must swim upstream and redefine a term while simultaneously making a different point altogether.
Much has been written about jargon, but a primary benefit of jargon is the chance to create a domain-specific meaning without the baggage of dictionary-correct associations. It’s also why geeks can be bores at dinner parties.
We live in a society (of letters.) Communication is not pairwise in a vacuum; all communication is in context of the cultural zeitgeist in which it occurs, and by intentionally choosing to use a non-zeitgeist-central definition of a term, you are wasting the time of anyone who talks to you.
By analogy to HCI: words are affordances. Affordances exist because of familiarity. Don’t make a doorknob that you push on, and expect people not to write in telling you to use a door-bar on that door instead.
You are not wrong, yet you are. All of these things are doing computation in a vague, creative sense — sure. But if we call everything that does this or its equivalent a computer we would have to find new words for the thing we mean to be a computer currently.
Unilaterally changing language is not forbidden, but if The Culture Wars™ has taught us anything, it is that people are allergic to talking about what they see as mandated changes to their language, even if it is reasonable and you can explain it.
Colour me stoked, but you could still just do it unilaterally and wait till somebody notices.
However, my caveat with viewing everything as computation is that you fall into the same trap that people in the ~1850s did when they wanted to describe everything in the world using complex mechanical devices, because that was the bleeding edge back then. Not everything is an intricate system of pulleys and levers, it turned out, even if theoretically you could mimic everything if that system were just complex enough.
What do you regard as the first digital, Turing-complete (if given enough memory) computer?
ENIAC, for example, was not a stored-program computer. Reprogramming required rewiring the machine.
On the other hand, by clever use of arithmetic calculations, https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.... says the Z3 could perform as a Universal Computer, even though, quoting its Wikipedia page, "because it lacked conditional branching, the Z3 only meets this definition by speculatively computing all possible outcomes of a calculation."
Which makes me think the old punched card mechanical tabulators could also be rigged up as a universal machine, were someone clever enough.
"Surprisingly Turing-Complete" or "Accidentally Turing Complete" is a thing, after all, and https://gwern.net/turing-complete includes a bunch of them.
let me preface this with the disclaimer that i am far from an expert on the topic
probably numerous eddies in natural turbulent fluid flows have been digital turing-complete computers, given what we know now about the complexity of turbulence and the potential simplicity of turing-complete behavior. but is there an objective, rather than subjective, way to define this? how complicated are our input-preparation and output-interpretation procedures allowed to be? if there is no limit, then any stone or grain of sand will appear to be turing-complete
a quibble: the eniac was eventually augmented to support stored-program operation but not, as i understand it, until after the ias machine (of which rand's johnniac was a copy) was already operational
another interesting question there is how much human intervention we permit; the ias machine and the eniac were constantly breaking down and requiring repairs, after all, and wouldn't have been capable of much computation without constant human attention. suppose we find that there is a particular traditional card game in which players can use arbitrarily large numbers. if the players decide to simulate minsky's two-counter machine, surely the players are turing-complete; is the game? are the previous games also turing-complete, the ones where they did not make that decision? does it matter if there happens to be a particular state of the cards which obligates them to simulate a two-counter machine?
if instead of attempting to measure the historical internal computational capability of systems that the humans could not perceive at the time, such as thunderstorms and the z3, we use the subjective standard of what people actually programmed to perform universal computation, then the ias machine or one of its contemporaries was the first turing-complete computer (if given enough memory); that's when universal computation first made its effects on human society felt
Sure. One of the "Surprisingly Turing-Complete" examples is that "Magic: the Gathering: not just TC, but above arithmetic in the hierarchy ".
See https://arxiv.org/abs/1904.09828 for the preprint "Magic: The Gathering is Turing Complete", https://arstechnica.com/science/2019/06/its-possible-to-buil... for an Ars Technica article, and https://hn.algolia.com/?q=magic+turing for the many HN submissions on that result.
Could you explain "above arithmetic in the hierarchy" in a few words? TIA, I've never heard of this.
Nope. The source page links to https://en.wikipedia.org/wiki/Arithmetical_hierarchy . I can't figure it out.
An HN comment search, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... , finds a few more lay examples, with https://news.ycombinator.com/item?id=21210043 by dwohnitmok being the easiest for me to somewhat make sense of.
I think the idea is: suppose you have an oracle which tells you, in finite time, if a Turing machine will halt. There will still be halting problems for that oracle system, which require an oracle from a higher-level system. (That is how I interpret "an oracle that magically gives you the answer to the halting problem for a lower number of interleavings will have its own halting problem it cannot decide in higher numbers of interleavings".)
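A compact way to picture it (my own paraphrase, so treat it as a sketch): each level of the arithmetical hierarchy adds one more alternation of quantifiers in front of a decidable relation R, and each alternation corresponds to one more layer of oracle:

    \begin{aligned}
    \Sigma_1 &: \exists t.\ M \text{ halts within } t \text{ steps} && \text{(the ordinary halting problem)} \\
    \Sigma_2 &: \exists x\,\forall y.\ R(x, y) && \text{(e.g. halting for machines that carry a halting oracle)} \\
    \Sigma_3 &: \exists x\,\forall y\,\exists z.\ R(x, y, z) && \text{(and so on, one oracle layer per level)}
    \end{aligned}

So the oracle machine at one level always has its own undecidable halting problem sitting at the next level up.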
This appears to discuss it a bit more. Not certain if it's more helpful than the comment (still going through it), but it does cover it in more detail.
https://risingentropy.com/the-arithmetic-hierarchy-and-compu...
Colossus came before ENIAC, but was also programmed by plugging up cables and setting switches. The Manchester Baby was a stored-program machine and had branching.
https://www.gleech.org/first-computers
If you just want to reference the entire data set of potential "first computers" (including ones that aren't reachable in the JavaScript app, due to missing toggles), you can access the source data here (warning: small CSV download):
https://www.gleech.org/files/computers.csv
In analog computers, software is hard to separate from the hardware. In the ones I had any experience with (as part of a university course), the programming part was wiring things up on patch panels, not unlike how you do it with modular analog synths. You could make the same machine run a really wide variety of analog calculations by connecting op-amps and passive components in various ways.
If we could optimize a set of programs down to the FPGA bitstream or even Verilog level, that would approach the kind of programs analog computers run.
I can't tell anything about Turing completeness though. It's a fully discrete concept, and analog computers operate in the continuous signal domain.
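As a numerical stand-in for the kind of patch described above, here's the classic two-integrator loop that "solves" x'' = -x, sketched in Python with crude discrete integration steps in place of the op-amps:

    dt = 0.001
    x, v = 1.0, 0.0                        # initial conditions = charge on the capacitors
    for _ in range(int(6.283 / dt)):       # run for roughly one period, 2*pi seconds
        v += -x * dt                       # integrator 1, fed the inverted output of 2
        x += v * dt                        # integrator 2
    print(round(x, 2), round(v, 2))        # back near (1.0, 0.0): a cosine and its derivative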
Most digital computers are Turing complete, but interestingly not all programming languages are Turing complete.
Turing completeness is a tar pit that makes your code hard to analyse and optimise. It's an interesting challenge to find languages that allow meaningful and useful computation that are not Turing complete. Regular expressions and SQL-style relational algebra (but not Perl-style regular expressions nor most real-world SQL dialects) are examples familiar to many programmers.
Programming languages like Agda and Idris that require that you prove that your programs terminate [0] are another interesting example, less familiar to people.
[0] It's slightly more sophisticated than this: you can also write event-loops that go on forever, but you have to prove that your program does some new IO after a finite amount of time. (Everything oversimplified here.)
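For a concrete feel of the regular-expression end of that spectrum, here's a sketch of the language (ab)* as a plain DFA table in Python; the loop does constant work per input symbol, so termination is guaranteed by construction, unlike a backtracking Perl-style engine:

    DFA = {          # state -> {symbol: next state}
        0: {"a": 1},
        1: {"b": 0},
    }
    ACCEPT = {0}

    def matches(s):
        state = 0
        for ch in s:
            state = DFA.get(state, {}).get(ch)
            if state is None:          # dead state: no possible transition
                return False
        return state in ACCEPT

    print(matches("ababab"))   # True
    print(matches("abba"))     # False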
Exactly this. A lot of the systems had built-in analog computers. It's a lot cheaper to build them now with electronics, but you need more computing power to do things that were previously done mechanically.
Analog computers have to be rebuilt if it turns out the program is wrong though, don’t they?
In the context of this thread, I believe even a digital computer would have to be rebuilt if the program is wrong... :P
Unless you typically salvage digital computers from the wreckage of a failed rocket test and stick it in the next prototype. If the FCC is wrong, kaboom.
Presumably they meant a program being discovered to be wrong before the computer was actually launched. And meant literally building a whole new computer, not just recompiling a program.
For the Apollo Guidance Computer, changing the program meant manually re-weaving wires through or around tiny magnet rings. A good part of the cost of the computer was the time spent painstakingly weaving the wires to store the program.
Pardon me, but why would you have to re-weave wires around magnetic rings? The magnetic rings are for storing data; the whole point is that you can change the data without rewiring the memory. If you have to re-wire permanent storage (e.g. program storage), that's equivalent to creating a mask ROM, which is basically just two funny-shaped sheets of conductor. There's no need for magnetic rings.
No, I'm not talking about magnetic core memory. Core rope memory also used little magnetic rings.
See https://en.wikipedia.org/wiki/Core_rope_memory : input wires were energized, and they coupled (or not) to the output wires depending on whether they shared a magnetic ring.
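A toy software model of that read-out (a sketch in Python; the particular weaving below is invented for illustration):

    # a word is stored by which cores each bit's sense wire is threaded
    # *through* (couples to the energized address wire, reads 1) versus
    # routed *around* (no coupling, reads 0)
    rope = {
        0o0: {0, 2, 3},   # address -> bit positions threaded through that core
        0o1: {1},
        0o2: set(),
    }

    def read_word(address, word_length=4):
        threaded = rope[address]
        return [1 if bit in threaded else 0 for bit in range(word_length)]

    print(read_word(0o0))   # [1, 0, 1, 1]
    print(read_word(0o2))   # [0, 0, 0, 0] -- changing any of this means re-weaving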
There's a very nice video about the assembly lines MIT made just for building the Apollo computer [0].
0. https://www.youtube.com/watch?v=ndvmFlg1WmE
Only if the bug was caught after the computer had been assembled for the mission. For development, they used a simulator. Basically, a cable connected to a mainframe, with the bigger computer simulating the signals a bundle of core rope would produce.
Yeah, though to be fair, some of the programs Apollo ran were on hand-woven ROMs, so I may be making too fine a distinction. The program itself was built, not compiled. If we are comparing with today, it would just be installed, not constructed.
I'm pretty sure you can perform tests and find defects without actually flying the rocket
I had assumed it meant simpler things, like balanced pneumatic or mechanical components that always put you at the correct ratio, sort of like a carburetor vs. fuel injection.
Yes, though they tend to be mechanically tuned. So something like a pneumatic computer will get tuned to operate over some range of inputs, and you'd probably bench-prototype it before you mass-produce it.
Typically they were reprogrammed by changing the jumpers. The analogous digital change would be replacing the band of cards in a Jacquard loom.
Much less than “rebuilding”.
There have been some hybrids too.
You forgot the most important one, the human computers at ground control.
Who said women can't do math?
https://www.smithsonianmag.com/science-nature/history-human-...
Nobody
A quick search will show you many many examples that say otherwise.
https://www.dailymail.co.uk/news/article-524390/The-women-ad...
Granted, that same search will show you many examples of content accusing unnamed other people of having this attitude.
https://www.science.org/content/article/both-genders-think-w...
It’s an antiquated notion in my mind, but I don’t think it is a thing of the past.
“women can't do basic math" being an antiquated notion is even weirder. As GP pointed out, companies used to employ lots of predominantly female computers. As a consequence, the first programmers and operators of digital computers were also predominantly women, even if the engineers building them were mostly men.
Women being bad at advanced math would make sense as an antiquated notion, but those in charge of hiring decisions until about 50 years ago evidently thought women were great at basic math.
The study you linked showing that women lag behind men in math to a degree proportional to some gender disparity metric is also interesting, but doesn't really tell us how we got here.
No one says it, but our implicit biases do - https://ilaba.wordpress.com/2013/02/09/gender-bias-101-for-m...
The straw man?
Straw person?
Asimov.
https://literature.stackexchange.com/questions/25852/where-d...
Some info about the Flight Control Computer:
https://ntrs.nasa.gov/api/citations/20200002830/downloads/20...