Buries the lede a bit? Near the end is "Conventional GFETs [graphene transistors] do not use semiconducting graphene, making them unsuitable for digital electronics requiring a complete transistor shutdown. [...] the SEC [material] developed by his team allows for a complete shutdown, meeting the stringent requirements of digital electronics."
As far as I'd heard, this was the problem with graphene transistors: they had a nonlinear response but couldn't shut off the current flow, making them useful only for analog circuits, not digital logic. But it's been a while since I read about this, so maybe someone else already achieved this before.
A semi-analog transistor would be perfectly fine for AI operations though? (matrix multiplication, sigmoid, tanh, etc.)
Analog or digital are properties of the signals you put on the circuits, not of the transistors.
You can do digital manipulation with non-gapped transistors too. You just can't use the designs we use today, which drive hard when active but are self-limiting.
Besides, unless we get some very creative new insights, analog computers are a dead-end.
I've been trying to find the source for a while, but I recall reading about a fundamental property of analog computers that makes them inherently unstable (as in, noise will always win in the long run), unlike digital computers. No matter how good the design is, little bits of noise add up in complex analog computers and the output is inexact.
I guess my Google-fu isn't good enough, because I've been unable to find where I read about it.
Is it true, though?
Look deep enough, and every modern digital circuit is emulated by an analog circuit, just because the components are analog in their nature, and the 0s and 1s are an interpretation of analog data. That includes digital computers.
Does this make digital computers inherently unstable? Clearly, simple enough computations both of digital and analog kind are good enough to be useful. So there must be a breaking point further away, but on what axis?
A transmission line accumulates noise as it increases in length. Eventually the signal-to-noise ratio gets too low. In old-school analogue comms, the solution is to decode the signal into the digital domain before it reaches that point, which allows us to drop the noise. Then the data is re-encoded back into a clean analogue signal again.
In digital logic, this process happens at every gate. Hence the reliability of digital logic.
Analogue logic doesn't do this. So analogue logic is only useful if the noise introduced at each step is lower than the error already present in your data (at whatever point in the computation you have reached). If there is a way round this, I don't know it.
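Here's a toy illustration of that difference, a minimal Python sketch with made-up noise figures: push a logic "1" through a thousand noisy stages, once left purely analog and once re-thresholded at every stage the way a logic gate does.

    import random

    N_STAGES = 1000
    NOISE = 0.02          # per-stage noise amplitude, an invented figure
    analog = 1.0          # logical "1" encoded as 1.0 V
    digital = 1.0

    for _ in range(N_STAGES):
        noise = random.gauss(0, NOISE)
        analog += noise                                    # analog: noise accumulates
        digital = 1.0 if digital + noise > 0.5 else 0.0    # digital: regenerated at every gate

    print(f"analog after {N_STAGES} stages:  {analog:.3f}")   # has drifted well away from 1.0
    print(f"digital after {N_STAGES} stages: {digital:.1f}")  # still exactly 1.0

The analog value ends up a random walk away from where it started, while the thresholded value is re-centred at every stage, which is the decode/re-encode step described above happening at every gate.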
There is a way around this: discretize your values. That's how analog calculations can be a basis for digital calculations.
That's why I think the original comment was about a specific, limited meaning of "analog computation" that does not allow for emulating anything digital. But I struggle to come up with one that doesn't throw the baby of being universal out with the bath water of emulating digital computations.
There's a term for analog systems using discretized values: digital.
Exactly! And under that lens all fundamentals applying to analog also apply to digital: that they are inherently unstable and that noise will always win in the long run.
Which sounds at the very least imprecise to me, considering that I'm writing it on a digital (and therefore analog) computer.
In fact, Von Neumann proved mathematically that it's possible to build a reliable system out of unreliable components. But modern digital logic uses a simpler mechanism: the thing you are missing is that noise does not propagate or accumulate in discretized (digital) systems as long as it remains below a threshold, and circuits are designed so that it stays far below the threshold.
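To put a rough number on "far below the threshold" (the voltages here are invented, representative figures, not any specific process): with a Gaussian noise model, the chance of a single gate evaluation crossing a noise margin that is many sigmas wide is astronomically small, so there is essentially nothing to accumulate.

    import math

    # Hypothetical numbers: a 1 V logic swing with a 0.5 V noise margin,
    # and 20 mV RMS of noise at the gate input.
    noise_margin = 0.5      # volts
    sigma = 0.02            # volts RMS

    # Probability that Gaussian noise exceeds the margin on one evaluation
    p_flip = 0.5 * math.erfc(noise_margin / (sigma * math.sqrt(2)))
    print(p_flip)           # on the order of 1e-138: effectively never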
Are you thinking about the property of analog data that it can't be stored, reprocessed, or copied without the noise increasing?
That doesn't make the computers unstable. It just makes the data less fit for long-term storage. And even then, people manage.
No, it wasn't about storage. It was really during calculations, you can't prevent the noise from affecting the results, from a fundamental level.
Oh, ok. You can't.
I'm not sure you can find a citation though, it's like searching for a source that the sky is blue.
Keep in mind that digital calculations have noise too. The digitization noise behaves in a completely different way, but any single computer has finite precision whatever the technology behind it. Infinite precision doesn't exist in the real world.
Floating point calculation is "noisy" and yet there are models that use smaller and smaller floating point numbers without losing much accuracy.
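One simplistic way to see that kind of "noise": round the same value to different float widths in NumPy and look at the relative error introduced by the representation.

    import numpy as np

    x = 0.123456789
    for dt in (np.float16, np.float32, np.float64):
        rounded = float(dt(x))
        rel_err = abs(rounded - x) / x
        print(dt.__name__, rounded, rel_err)
    # Relative error: roughly 1e-4 for float16, around 1e-8 for float32,
    # and 0 here for float64 only because x was already stored as a float64.

The digitization noise is there, but it is bounded and predictable, and it does not grow as the value passes through more gates.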
For any complicated calculation, you need to store and propagate multiple intermediate results.
This is what goes wrong.
error(x + y) > error(x) + error(y), because the operation itself injects fresh noise on top of the errors x and y already carry.
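A back-of-the-envelope simulation of that compounding, with an invented per-operation noise figure: if every intermediate analog value picks up a little noise when it is computed and again when it is stored or propagated, the total error grows with the length of the chain.

    import random

    def noisy(v, sigma=0.001):
        """Model touching an analog value: every store/propagate adds noise."""
        return v + random.gauss(0, sigma)

    xs = [1.0] * 1000
    exact = sum(xs)

    acc = 0.0
    for x in xs:
        acc = noisy(acc + noisy(x))   # operand, addition, and stored result all pick up noise

    print(exact, acc, abs(acc - exact))   # error grows roughly with the square root of the operation count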
Could this be the reason why humans (and other things with brains) need to sleep from time to time? Sleep would reset the accumulating noise in the brain.
It's not really clear what you mean by the brain accumulating noise. An analog computer can suffer from compounding imprecision, because error bars carry forward. We are analog computers in a vague sense, but as humans we aren't imprecisely crunching numbers all day such that our results are out of whack by nightfall... You could say sleep is like a reset for the brain, but that doesn't say much. Sleep is complex and relates to most everything else about an organism in complex ways.
Nah, humans just blank-out for a couple of seconds mid-task instead.
Multiplying two numbers in an analog circuit requires taking the logarithms and adding those, then converting back, or some similar trick.
In practice this is done using non-linear effects of transistors[1], however the exact details of those effects are individual to each transistor and are also temperature dependent.
Since the multiplication circuit relies on different transistors behaving identically, compensation circuitry and trimming is required, which will never be perfect.
[1]: https://www.analog.com/media/en/training-seminars/tutorials/...
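For a rough numeric sketch of why that's hard, here's a toy Python model using the idealized diode equation with made-up device values (not a circuit from the linked tutorial): saturation-current mismatch between devices and a temperature difference between the log and antilog stages both land directly in the result.

    import math

    K_OVER_Q = 8.617e-5   # Boltzmann constant over electron charge, volts per kelvin

    def vt(temp_c):
        """Thermal voltage kT/q at a temperature given in Celsius."""
        return K_OVER_Q * (temp_c + 273.15)

    def log_antilog_multiply(i1, i2, i_ref, i_s, temp_log_c, temp_exp_c):
        """Toy translinear multiplier: each logging device produces V = Vt*ln(I/Is),
        the voltages are summed, and an antilog device converts back to a current.
        i_s holds one saturation current per device, so mismatch shows up as error."""
        v_log = vt(temp_log_c)   # thermal voltage on the log side
        v_exp = vt(temp_exp_c)   # thermal voltage on the antilog side
        v_sum = v_log * (math.log(i1 / i_s[0])
                         + math.log(i2 / i_s[1])
                         - math.log(i_ref / i_s[2]))
        return i_s[3] * math.exp(v_sum / v_exp)

    i1, i2, i_ref = 10e-6, 20e-6, 1e-6        # input and reference currents, amps
    ideal = i1 * i2 / i_ref                   # 200 uA: what a perfect multiplier gives

    matched    = log_antilog_multiply(i1, i2, i_ref, [1e-15] * 4, 25, 25)
    mismatched = log_antilog_multiply(i1, i2, i_ref, [1e-15, 1.05e-15, 0.97e-15, 1e-15], 25, 25)
    temp_skew  = log_antilog_multiply(i1, i2, i_ref, [1e-15] * 4, 25, 35)

    print(ideal, matched, mismatched, temp_skew)
    # Matched devices at one temperature reproduce the ideal product; a few percent
    # of Is mismatch or a 10 degree C gradient between stages skews the result badly.

This is why real parts pair the log and antilog transistors on one die and add trimming and temperature compensation, and even then it's never perfect.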
Leaky ReLU is the best though.
Yes, analogue computing. It's a world of pain though: each individual operator in each individual device is going to have a slightly different level of accuracy, which is also affected by temperature, so you need to learn a whole new set of analytical skills and operational practices to ensure that your model works correctly on each device at a customer location. And that's not even thinking about testing at shipment that the devices are in spec, and that they stay in spec at all operating temperatures and over the lifetime of the device.
That's the "bandgap" thing mentioned in the article. Ways to create one in graphene exist, but they're very unreliable.
I believe what you are referring to is called the Bandgap problem.
Good explanation on the issue is here:
https://www.allaboutcircuits.com/technical-articles/graphene...
Convenience quote of relevant section:
Usually, the electrons require some additional energy to jump from the valence band to the conduction band. In FETs, a bias voltage enables a current to flow through the band which acts as an insulator in the absence of the bias.
Unfortunately, the absence of a band gap in GFET makes it hard to turn off the transistor since it cannot behave as an insulator. The inability to completely switch it off results in an on/off current ratio of about 5, which is quite low for logic operations. Consequently, using GFETs in digital circuits is a challenge. However, this is not a problem with analog circuits hence making the GFET suitable for amplifiers, mixed-signal circuits, and other analog applications.
Multiple parties are researching ways to address these bandgap challenges, including techniques such as the negative resistance approach and the bottom-up synthesis technique of fabrication.