
Cray-1 vs Raspberry Pi

fastneutron
17 replies
16h17m

When I see comparisons like this, the first thought I have is not the benchmarks, but rather what the most “heroic” real-world calculation of the day would have been on something like the Cray-1, and how to replicate those calculations today on something like a RPi. Weather/climate models? Rad-hydro?

The fidelity would almost certainly be super low compared to modern FEA software, but it would be a fun exercise to try.

simbolit
6 replies
15h45m

One of the early customers was the European Centre for Medium-Range Weather Forecasts, so, wild guess, they probably used it for medium-range weather forecasts.

ithkuil
3 replies
10h4m

they probably used it for medium-range weather forecasts

in europe

pietjepuk88
0 replies
6h58m

I thought the ECMWF models were (and always have been) global?

jorvi
0 replies
6h40m

European Centre for Medium-Range Weather Forecasts
defrost
0 replies
9h54m

FWiW Australia used a CDC Cyber 205 for occasional weather modelling and other mathematical work in the early 1980s.

(There was a separate dedicated weather computer; this one was used for 'other' jobs like speculative weather modelling, monster group algebraic fun, et al.)

https://en.wikipedia.org/wiki/CDC_Cyber

The UK was the first customer:

    In 1980, the successor to the Cyber 203, the Cyber 205 was announced. The UK Meteorological Office at Bracknell, England was the first customer and they received their Cyber 205 in 1981.

fastneutron
1 replies
5h12m

I'm currently wondering what this would have looked like numerically. I'm talking about the governing equation set, discretization methods, data, etc. It would be a fun project to try and implement a toy model like that.

magicalhippo
0 replies
3h18m

It would be a fun project to try and implement a toy model like that.

If you really want a challenge, do it using pen, paper and a slide rule, like in the old days[1]. Just make sure to apply appropriate smoothing of the input data first[2].

[1]: https://www.smithsonianmag.com/history/how-world-war-i-chang...

[2]: https://arxiv.org/abs/2210.01674

buryat
2 replies
11h5m

nuclear weapons simulations

the first machine went to Los Alamos

https://www.theatlantic.com/technology/archive/2014/01/these...

fastneutron
0 replies
5h15m

rad-hydro

These are incredibly expensive even on today’s hardware. If you look through some of the unclassified ASCI reports from the early 2000s, 3D calculations of this equation set were implied to be leadership-class computations. At the time of the Cray, it must’ve been coarse-grid 1D as the standard, with 2D as the dream.

acqq
0 replies
9h6m

The demand for the huge calculations for the design of nuclear weapons started in WW II already:

https://ahf.nuclearmuseum.org/ahf/history/human-computers-lo...

"The staff in the T-5 group included recruited women who had degrees in mathematics or physics, as well as, wives of scientists and other workers at Los Alamos. According to Their Day in the Sun: Women of the Manhattan Project, some of the human computers were Mary Frankel, Josephine Elliot, Beatrice “Bea” Langer, Augusta “Mici” Teller, Jean Bacher, and Kay Manley. While some of the computers worked full time, others, especially those who had young children, only worked part time.

General Leslie R. Groves, the Director of the Manhattan Project, pressured the wives of Los Alamos to work because he felt that it was a waste of resources to accommodate civilians. As told by Kay Manley, the wife of Los Alamos physicist John Manley, the recruitment of wives can also be traced to a desire to limit the housing of “any more people than was absolutely necessary.” This reason makes sense given the secretive nature of Los Alamos and the Manhattan Project. SEDs, a group of drafted men who were to serve domestically using their scientific and engineering backgrounds, also worked in the T division."

DonHopkins
2 replies
15h8m

A Cray-1 could execute an infinite loop in 7.5 seconds!

belter
0 replies
26m

Quite impressive, but I can't avoid noticing you did not go near a higher challenge, like compiling a C++ program in under 4 weeks... \s

ant6n
0 replies
6h57m

Similar to how Chuck Norris counted to infinity... twice?

ip26
0 replies
15h53m

You could always start with loading up Spec ‘06, which contains micro kernels of such “heroic” workloads.

gshubert17
0 replies
3h57m

I toured an NCAR (National Center for Atmospheric Research) facility in Boulder around 1979; got to sit on a seat on their Cray-1. So yes, weather and climate calculations.

devoutsalsa
0 replies
11h30m

3-D rendering? We had a supercomputing club in early-90s high school. I remember creating wireframe images, uploading them to a Cray X-MP at Lawrence Livermore for the computation, and then downloading the finished results.

bee_rider
0 replies
13h4m

You could get some vintage matrices from SuiteSparse (formerly the university of Florida sparse matrix collection).

qgin
16 replies
15h1m

It's wild to imagine that 40 or so years from now, someone will have a drawer full of cheap plastic boxes, each with more power than the fastest computing cluster of 2023... promising to themselves that one day they're finally going to build that hobby project with one of them.

huytersd
5 replies
10h30m

You really think so? Aren’t we at the end of Moore’s law? I’m really doubtful that we’ll see massive leaps like that.

aembleton
1 replies
9h28m

People have been saying that for at least twenty years

bbarnett
0 replies
9h7m

I remember people saying this in the 80s!

RetroTechie
1 replies
4h50m

Moore's law will die when we have 3D stacks of silicon thick enough that even integrated liquid cooling can't keep it cool. With feature sizes measured in a few atoms.

Or when economics of fabricating such structures just aren't worth it.

jamiek88
0 replies
2h56m

Yeah, made out of something other than silicon.

Moore’s ‘law’ is a human-driven law.

Computing is basically the absolute center of our society.

As long as our civilization exists we will spend massive resources on this.

Thus as long as it’s physically possible we’ll have progress.

topspin
0 replies
4h28m

Aren’t we at the end of Moore’s law

Three semiconductor manufacturers are telling their investors they'll be at 2nm (or something) in late 2024 or 2025. So no, Moore's law has not seen its end, despite ~40 years of predictions to the contrary.

avg_dev
5 replies
14h53m

such a dystopian future... I hope we are not still using plastic then :)

niederman
4 replies
14h21m

How is this dystopian? Plastic is a really great material -- and way more eco-friendly than metal for building computers. It's only mass-produced single-use plastics like water bottles that are bad for the environment.

huytersd
2 replies
10h29m

I wish we could come up with a plastic that would biodegrade after a fixed amount of time say 200 years.

bbarnett
1 replies
9h8m

You want the stored carbon in plastics to escape??

The best outcome for plastics would be to bury them very deep (like nuclear waste), where they could eventually become some new oil-like substance. No carbon escape.

huytersd
0 replies
45m

Yeah, but no one is burying the plastic; it's too expensive. So realistically I'd much rather have the plastic break down so it's not everywhere for 10k years.

djaychela
0 replies
10h14m

How is it more eco-friendly? AFAIK most plastics used in such applications are not practically recyclable, whereas metals are.

ryandrake
3 replies
13h46m

Didn’t the Apollo guidance computer, which took people to the moon, have 4K of RAM? Today, 1 million times that barely runs the OS and a few Chrome tabs.

jojobas
0 replies
4h42m

Russians ran computers with ferrite-plate RAM of similar size on submarines well into the '90s, or maybe even the '00s. Software written for them is still in use on Kilo submarines, running in some sort of VM.

jes
0 replies
13h26m

I hope the following from Wikipedia is helpful:

The computer had 2048 words of erasable magnetic-core memory and 36,864 words of read-only core rope memory. Both had cycle times of 11.72 microseconds. The memory word length was 16 bits: 15 bits of data and one odd-parity bit. The CPU-internal 16-bit word format was 14 bits of data, one overflow bit, and one sign bit (ones' complement representation). [1]

1. https://en.wikipedia.org/wiki/Apollo_Guidance_Computer
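The quoted word format (15 data bits plus one odd-parity bit) can be sketched in a few lines. This is an illustration only: the function name is made up, and the parity bit's position within the word is an assumption, not the AGC's documented layout.

```python
# Hypothetical sketch of the odd-parity scheme in the quoted AGC word
# format: 15 data bits plus one parity bit, chosen so that the 16-bit
# word always has an odd number of 1s (so an all-zeros word is
# detectable as a memory error). Bit placement is assumed for
# illustration, not taken from AGC documentation.

def agc_word(data15):
    """Pack 15 data bits with an odd-parity bit in the lowest position."""
    assert 0 <= data15 < (1 << 15)
    ones = bin(data15).count("1")
    parity = 0 if ones % 2 == 1 else 1  # force an odd total popcount
    return (data15 << 1) | parity
```

Every packed word then has an odd popcount, so a word read back as all zeros (a common core-memory failure mode) immediately fails the parity check.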

chasd00
0 replies
3h16m

I’ve looked inside that capsule. I wouldn’t ride in it to the grocery store!

qingcharles
14 replies
16h40m

I'd just bought the latest and most expensive Intel x86 CPU in 2013 and built myself a new rig. My wife walked into the office, "You're not working, I can tell that, but I'm not sure what you're doing?" she said looking at the graphs on my screen.

"I'm calculating to see when my PC would have been the fastest on Earth. It looks like in 1992 it would be able to out-compute the latest Dept of Defense $90m supercomputer that filled an entire room, would you believe?"

"That's lovely. How will that help us pay our credit card bills?"

Jesting aside. There is a bunch of data for this, like this set here:

https://en.wikipedia.org/wiki/TOP500

And if you extrapolate backwards, or find older data like I did, you come to the conclusion that if I took my PC back to 1981 it would actually have been faster than every computer on Earth combined, or some similarly insane statistic.
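The backward extrapolation described above can be sketched with a simple exponential-growth model. All the parameters below (reference year, reference GFLOPS, doubling period) are illustrative assumptions, not figures taken from the TOP500 data.

```python
import math

# Back-of-envelope sketch of extrapolating top-supercomputer performance
# backwards in time, assuming smooth exponential growth. The defaults
# are illustrative assumptions only, not actual TOP500 figures.

def year_overtaken(my_gflops, ref_year=2013, ref_gflops=1e5, doubling_years=1.3):
    """Estimate the year in which the fastest machine on Earth last
    performed below my_gflops, under the assumed growth curve."""
    halvings = math.log2(ref_gflops / my_gflops)
    return ref_year - halvings * doubling_years
```

Under these toy numbers, a ~100 GFLOPS desktop lands around the year 2000; tuning the parameters to real historical lists is left as the fun part of the exercise.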

shagie
10 replies
16h15m

One of my favorite machines from Top500 is SystemX.

https://www.top500.org/system/173736/

When it was commissioned in 2004, this array of 1,100 Apple PowerPC 970 systems was the 7th most powerful computer on the list.

Its Linpack performance was 12,250 GFLOPS.

iancmceachern
6 replies
15h36m

My favorite was the one ranked 33rd at the time, which was made up of 1,700 Sony PS3s.

https://www.google.com/amp/s/phys.org/news/2010-12-air-plays...

Cacti
3 replies
15h8m

When the DoE claimed the PS3 could be a dual purpose munition, they weren’t kidding.

PaulRobinson
2 replies
9h25m

Saddam Hussein did try to buy a load of PlayStations at some point.

hulitu
1 replies
8h32m

He also had WMDs. /s

iancmceachern
0 replies
1h16m

And Anna Nicole married for love

(Great line in the movie Shooter)

Cockbrand
1 replies
11h26m

They were more or less cousins, as they were both based on the PowerPC CPU architecture.

porbelm
0 replies
8h46m

However, the Sony Cell had just a PowerPC controlling core. The real magic, and why it was used in supercomputers at the time, is in its SPE (Synergistic Processing Element) coprocessors; they were highly tailored for vector and floating-point math.

zoky
1 replies
14h53m

You forgot the best part: It was colloquially referred to as the “Big Mac”.

geeB
0 replies
14h54m

About the same headline number as a $350 Xbox Series X! Although that's comparing fp64 vs fp32, and Linpack vs peak.

kevin_thibedeau
2 replies
16h23m

The other fun thing is to find out the most recent year your phone would have made the bottom of the top 500 list.

hulitu
0 replies
8h30m

My phone is so dumbed down that any comparison is useless. It is like driving a Ferrari through a corn field.

bifftastic
0 replies
11h41m

Looks like June 2002 for Pixel 8

whitej125
12 replies
15h50m

Years ago, when my daughter was around 5, I was showing her a Raspberry Pi Zero I had just picked up. I told her: years ago, before Daddy was even your age, a computer like this used to be as big as a house. Her response was: “Houses were that small?”

tonymet
3 replies
14h46m

Have you seen houses from the 1920s? She's not that far off.

tgv
0 replies
11h12m

I’ve lived in them for most of my life. Quite roomy.

lagniappe
0 replies
14h26m

they say the 2020's is the new 1920's in that regard

bboygravity
0 replies
13h53m

Have you seen (tiny) houses in 2023's large cities? She's not that far off.

mattnewton
3 replies
13h44m

In your household, the children tell the dad jokes to dad.

chacham15
2 replies
13h5m

I like telling dad jokes...he usually laughs

graphe
1 replies
12h44m

Your dad is named he?

ben_w
0 replies
8h28m

He for short, Hehehe is the full name ;P

utopcell
1 replies
15h47m

Smart kid, thinking out of the box!

speed_spread
0 replies
13h31m

It comes naturally when the box is so small!

zx8080
0 replies
15h8m

:) Did showing the Raspberry Pi to your daughter have any result (like getting her interested in tech or anything)?

nextaccountic
0 replies
11h26m

Maybe not a house, but depending on your age a large room for sure

snvzz
11 replies
17h21m

It'd make more sense to compare with a RISC-V that has Vector 1.0.

Because a vector machine is what it was.

cmrdporcupine
8 replies
16h45m

Well, or compare to a GPU or a TPU

snvzz
6 replies
16h29m

Those are largely SIMD machines, but not vector machines.

dan-robertson
3 replies
6h6m

How is vector different from simd?

IshKebab
2 replies
4h40m

The RISC-V Vector extension allows the vector length to vary at runtime, whereas with SIMD the vector length is fixed at compile time (128-bit, 256-bit, etc.). Basically, it means the code is more portable.

With x86 SIMD the standard solution is to compile the same code multiple times for different SIMD widths (using different instructions) and then detect the CPU at runtime. That is such a pain that it's only really done in explicitly numerical libraries (NumPy, Eigen, etc.). In theory, with Vector you can compile once and run anywhere.
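The runtime-variable-length idea can be sketched as the "stripmining" loop pattern. This is Python purely for illustration; real RVV code would use the `vsetvli` instruction or C intrinsics, and `vlmax` here stands in for the hardware's maximum vector length, which the code never needs to know at compile time.

```python
# Illustrative sketch of RISC-V Vector style "stripmining" in Python.
# vlmax stands in for the hardware's maximum vector length; the loop
# body never hard-codes it, which is the portability argument above.

def vector_add(a, b, vlmax=8):
    """Add two equal-length sequences in hardware-sized chunks."""
    assert len(a) == len(b)
    result = []
    i = 0
    while i < len(a):
        # vsetvli analogue: hardware grants vl = min(remaining, VLMAX)
        vl = min(len(a) - i, vlmax)
        # one "vector instruction" operating on vl elements at once
        result.extend(x + y for x, y in zip(a[i:i+vl], b[i:i+vl]))
        i += vl
    return result
```

With 15 elements and a vlmax of 8, the loop runs twice (vl = 8, then vl = 7), with no separate scalar tail loop; the same binary would run unchanged on hardware with a different vlmax.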

dan-robertson
1 replies
1h34m

That design seems like a reasonable thing for a high-level language that could then be lowered to different architectures’ SIMD widths. But I’m kind of surprised it’s good at the ISA level. E.g., for something like a vectorized strlen, mightn’t one worry that the CPU would choose vl[1] too large, causing you to load from cache lines (or pages!) that turn out to be unnecessary for finding the length of the string? With the various SIMD extensions on x86 or Arm, such a routine can be carefully written to align with cache lines, and so avoid depending on reading the next line when the string ends before it. I also worry about various SIMD tricks that seem to rely on the width. E.g., I think there’s an instruction to interpret each half-byte of one vector as an index into some vector of 16 things. How could those be ported to RISC-V? Or maybe that’s not the sort of thing its vector extension is meant for.

I guess part of my thinking here is that the ISA designers at Intel and Arm aren’t stupid, yet they ended up with fixed widths for SSE, NEON, Knights Landing, AVX, AVX-512. Presumably they had reasons to prefer that to the dynamic RISC-V-style approach. So I wonder: are there RISC-V constraints that push this design (e.g., low-power environments presumably pushed NEON to a small width, which made higher-power environments suffer; a dynamic length might let both use the same machine code)? Or did Intel prefer fixed widths for other reasons, e.g., making something that only works on more expensive chips and thereby having something people can pay more for? Is there something reasonable written about why RISC-V went with this design?

[1] What do you even pass to vsetvl in this case, as you don’t know your string length?

imtringued
0 replies
23m

I'm not sure why it is difficult to see why variable-length SIMD makes sense. If you want to process 15 elements with a width of 8, you need the function twice: once with SIMD processing whole batches of 8 elements, and once as a scalar version of the same function to process the last 7 elements. This makes it inherently difficult to write SIMD code even in the simple and happy case of data parallelism. With RISC-V, all you do is set the vector length to 7 in the last iteration.

what do you even pass to vsetvl in this case as you don’t know your string length.

I'm not sure what you are trying to say here. You must know the length of the buffer; if you don't, then processing the string is inherently sequential, just like reading from a linked list, since accessing even a single byte beyond the null terminator risks a buffer overflow. Why pick an example that can't be vectorized by definition?

cmrdporcupine
1 replies
16h20m

That's fair. I haven't spent any time looking through the RISC-V vector extensions yet. I look forward to it, though.

rwmj
0 replies
4h32m

They are said to be "inspired" by classic Cray vector instructions, although I have of course never used a Cray :-( so I can't comment on how true that is. I did use a Convex C2[0] for a while which also had real vector instructions, but it was all hidden behind a compiler option.

[0] https://en.wikipedia.org/wiki/Convex_Computer

Findecanor
0 replies
8h33m

TPUs tend to be specialised for matrix multiplications, often at low precision.

voxadam
1 replies
15h41m

Has anyone taped out a RISC-V CPU with hardware vector support yet?

snvzz
0 replies
15h29m

Yes, several, with the Vector 1.0 specification.

Some of them (e.g. the Kendryte K230, an MCU) have already shipped to people.

Years ago, some chips shipped with 0.7.1 (incompatible, pre-ratification). One of them is the TH1520, an SoC in some SBCs released earlier this year.

blackoil
9 replies
15h44m

Is there any software, apart from benchmarks, that would make it feel that fast? All the software I use feels like a more advanced version of things I ran on my 386: GUI, IDE, compiler, office suite...

I understand the exercise may still be theoretical, as any software the Met Office uses now will be designed for computers a million times faster. But there should exist some software from back then that required a Cray to run.

xcv123
4 replies
14h20m

No. The Cray-1 ran batch processing jobs, mostly scientific simulations which took hours or days to compute. It had a terminal interface and wasn't used for real time interactive applications.

timthorn
2 replies
7h58m

IIRC, in real world installations the Cray would have been paired with a minicomputer or mainframe.

yjftsjthsd-h
0 replies
3h41m

So should we pair it with a PiDP-11? :)

flyinghamster
0 replies
2h2m

The Wikipedia article on the Cray-1 indicates that it had a Data General Eclipse as its front-end processor.

jasonwatkinspdx
0 replies
2h28m

It was a later machine than the Cray-1, but I remember seeing a post on the jsoftware forums from someone that ran interactive APL on a Cray. Having a REPL to access a machine like that in that era must have been quite something.

KeplerBoy
2 replies
8h25m

A lot of the stuff that took hours back then can now be done in sub-seconds.

Think about mechanical engineering: Back then they might have simulated how cars deform in a crash. Now we can perform similar simulations in real-time for fun in our video games. AFAIK it's hardly ever done because no one actually needs physically accurate models in games, but it could be done.

Same goes for rendering: back then they rendered each frame of Toy Story for a good few hours; now we achieve arguably better graphics in real time.

qup
0 replies
2h30m

I think there's something to be said for the wait period. The time to anticipate the result makes it feel very worthwhile, especially when only a handful of machines on the planet can do the calculation.

It all feels very mundane when I can do it on my slow commodity laptop in under a second.

Eisenstein
0 replies
7h10m

Think about mechanical engineering: Back then they might have simulated how cars deform in a crash. Now we can perform similar simulations in real-time for fun in our video games. AFAIK it's hardly ever done because no one actually needs physically accurate models in games, but it could be done.

BeamNG is basically that.

yummypaint
0 replies
13h55m

The best bet might be an old physics code written in Fortran. Maybe a calculation of scattering cross sections from matrix elements, or something with a lot of vectorizable linear algebra.

BMc2020
9 replies
17h24m

"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1"

edit: thank you for the Christmas present, yc algorithm. God bless us every one.

jbverschoor
5 replies
17h3m

I’m more impressed that the Cray was that fast such a long time ago, tbh.

amelius
2 replies
16h56m

It was even faster in practice because the software was less bloated.

stevenjgarner
1 replies
16h46m

The primary operating system for the Cray-1 was the Cray Operating System (COS), which was a batch processing system. COS was specifically designed to exploit the hardware capabilities of the Cray-1, focusing on high-speed computation rather than on features like multi-user support. Given its focus on scientific and mathematical computations, COS supported compilers for languages like FORTRAN, which was the dominant language for scientific computing at the time. The Cray FORTRAN Compiler was highly optimized to take advantage of the Cray-1's vector processing capabilities. There was also a set of mathematical libraries optimized for its architecture. These libraries included routines for linear algebra, Fourier transforms, and other mathematical operations critical in scientific computing.

pklausler
0 replies
14h16m

COS supported time-sharing and multiple users quite well, actually, including interactive sessions.

Trivia: Seymour was user U0100 on our in-house systems.

chasil
1 replies
16h18m

That all began with the CDC 6600.

https://en.m.wikipedia.org/wiki/CDC_6600

moffkalast
0 replies
3h39m

CDC's first products were based on the machines designed at Engineering Research Associates (ERA), which Seymour Cray had been asked to update after moving to CDC.

Cray has been credited with creating the supercomputer industry. Joel S. Birnbaum, then chief technology officer of Hewlett-Packard, said of him: "It seems impossible to exaggerate the effect he had on the industry; many of the things that high performance computers now do routinely were at the farthest edge of credibility when Seymour envisioned them."

One story has it that when Cray was asked by management to provide detailed one-year and five-year plans for his next machine, he simply wrote, "Five-year goal: Build the biggest computer in the world. One year goal: One-fifth of the above." And another time, when expected to write a multi-page detailed status report for the company executives, Cray's two sentence report read: "Activity is progressing satisfactorily as outlined under the June plan. There have been no significant changes or deviations from the June plan."

Cray avoided publicity, and there are a number of unusual tales about his life away from work, termed "Rollwagenisms", from then-CEO of Cray Research, John A. Rollwagen. He enjoyed skiing, windsurfing, tennis, and other sports. Another favorite pastime was digging a tunnel under his home; he attributed the secret of his success to "visits by elves" while he worked in the tunnel: "While I'm digging in the tunnel, the elves will often come to me with solutions to my problem."

Well Seymour you are an odd fellow, but I must say you design a good mainframe.

wolfgang42
2 replies
17h16m

"The comment above was for the 2012 Pi 1. In 2020, the Pi 400 average Livermore Loops, Linpack and Whetstone MFLOPS reached 78.8, 49.5 and 95.5 times faster than the Cray 1."

Apart from the Cray-1, that whole section is also worth reading for some interesting insights into relative speed differences between various modern CPUs. (Though I do wish it were presented in a table rather than narrative form; it'd be a lot easier to follow that way. There are also more detailed tables further down the page.)

LarsDu88
1 replies
17h5m

Ok, do Pi 5 now

moffkalast
0 replies
3h49m

Multiply Pi 4 results by 3 and you have the rough ballpark.

hooverd
8 replies
17h10m

The RPi, unlike the Cray-1, does not offer ample sitting space.

wolfgang42
1 replies
17h5m

Somebody made a Cray-themed Pi Zero cluster which perhaps fits the bill (if you’re a mouse, that is): https://www.clustered-pi.com/blog/clustered-pi-zero.html

linker3000
0 replies
9h42m
m463
1 replies
12h41m

yeah, the pi cases have been disappointing.

Few have adequate cooling. (flirc is good, the ones with fans are just annoying)

I'd love to have a pi case that had a built-in breadboard.

...or a case with comfortable seating.

intrasight
0 replies
5h45m

I like the Flirc aluminum case. My Pi 5 case arrived last week. Now waiting for my new Pi.

widea
0 replies
9h44m

Why not?

tom_
0 replies
16h37m

But for the price of the Cray, even without adjusting for inflation, you could buy a useful number of chairs. And just think of the electricity cost savings!

geerlingguy
0 replies
16h50m

Nor does it have quite the panache in its spartan design.

DonHopkins
0 replies
15h6m

It's not just for sitting. The ultimate hacker fantasy is to get laid on a Cray-1 couch!

nullhole
6 replies
17h8m

There's a line in the Jurassic Park book where a character is made suspicious by an offhand assertion (by Nedry) that he is using a multi-XMP system.

RPi 4s are nice, in a sense, because you can only rarely honestly claim that the speed of the system is holding you back. Most times, presumably, it's the efficiency of the operations you are telling it to execute.

Firerouge
2 replies
16h45m

The lack of I/O to the CPU might be the counterpoint to this. A single (exposed) PCIe lane might be enough for any singular task, but you're likely to start bogging down your bandwidth if you need to run serious simultaneous tasks: networking above a gigabit, NVMe disk I/O, an additional display, or additional parallel compute like a GPU.

Cacti
1 replies
15h5m

It’s easy to forget how many bits need to be pushed down a graphics pipeline, per pixel, per frame. In fact, forget the pipeline; just pushing the bits down the wire fast enough is a non-trivial task.

nullhole
0 replies
14h44m

I agree. I was a bit too flip in my original comment; there are regular tasks now that simply require multiples of the data throughput that the 1980s Cray machines were capable of.

The simpler point I was aiming at was just that: the amount of computational power at the fingertips of so many of us is huge, and it's important to appreciate that.

Uehreka
1 replies
15h59m

Rpi4s are nice, in a sense, because you can only rarely honestly claim that the speed of the system is holding you back.

As someone who uses them for a variety of purposes, I gotta note that they have pretty huge limitations. Like, the moment graphics enter the picture (no pun intended) you’re moving an order of magnitude slower than most desktops or laptops. Not to mention that support for hardware video encode/decode (which, especially decode, we generally take for granted) aren’t always available depending on the library or tool you’re working with.

Like yes, you can totally run a serviceable web server on a Pi and serve a blog or a small web app, but let’s not get carried away here.

moffkalast
0 replies
4h40m

Requiring a small nuclear reactor to power it aside, the Pi 5 feels far more like it's up to the task of a full desktop machine. Admittedly I haven't tried anything but headless workloads on mine so far, but it's so much snappier it's genuinely unreal. I'm really looking forward to seeing how much faster lidar localization and just SLAM in general runs on it once ROS support is sorted.

Although they did remove the h264 decoder and encoder which is a bummer, like you say it's hard to get working support for it anyway. Vulkan + regular GPU acceleration might be easier. And it still only has 4 cores which is crap for desktop multitasking.

eesmith
0 replies
10h17m

From the novelization of the movie "War Games", starting at https://archive.org/details/wargames00davi/page/n117/mode/2u... :

"Jesus," David said. "That's a Cray 2!"

"Ten of them." McKittrick said.

"I didn't know they were out yet."

McKittrick almost preened. "Only ten. Come on, I want to show you something."
boznz
6 replies
14h59m

This is actually quite depressing, considering the number of Raspberry Pis doing tasks like watering a plant. Reminds me of Marvin the Paranoid Android.

upon_drumhead
1 replies
14h0m

Esp32 ( https://www.espressif.com/en/products/socs/esp32 ) are what most folks should be using to do that stuff. It’s a great platform to build upon.

throwup238
0 replies
13h7m

ESPHome even comes with a well thought out sprinkler controller module: https://esphome.io/components/sprinkler.html

It supports valves, pumps, schedules, etc. I programmed mine once a few years ago with YAML (no code!) and now I just power cycle them once every month or so. Been running great.

speed_spread
1 replies
13h25m

I've come to the conclusion that the value of the RPi is not so much in the hardware platform, but rather in being the cheapest Linux-running computer you can buy. It's much easier to program for than any embedded board.

topspin
0 replies
4h10m

It's much easier to program for than any embedded board.

Indeed. I work in both MCUs and full-featured Linux environments and there is zero value to using the former for non-safety critical, non-power constrained, low precision applications. Running a sprinkler system on an RPi is an entirely reasonable choice: you have ample storage for history, trivially simple remote control using a variety of protocols and media and ample compute to operate high level languages and run easily maintained programs, including nice-to-haves like continuous integration of public weather data to optimize your schedule against prevailing rainfall.

Can you shoehorn all that into a ESP32 + micropython or whatever? Sure. I'll bet someone already has. And they spent 10x the time it would have taken otherwise. At least.

m463
0 replies
12h42m

...freeing up a human to come inside and read hn

Dork1234
0 replies
14h23m

You can always replace them with an RP2040, which has integer performance similar to the Cray-1, if that makes you feel better.

timthorn
5 replies
17h29m

The author's CV is as interesting as the benchmarks

earthscienceman
2 replies
17h5m

link?

timthorn
1 replies
17h3m

I'm referring to the same page, under the "Background Activities" heading

morbusfonticuli
0 replies
54m

www.roylongbottom.org.uk/Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets.htm#anchor1

brcmthrowaway
1 replies
13h6m

The author has the most British name ever.

z3phyr
0 replies
5h55m

The most British name belongs to Lord British.

internet101010
4 replies
11h17m

I've gone through the Pi rabbit hole all the way to the Kubernetes/swarm level. Just skip that stuff at this point and get a mini PC. The money comes out about the same, and I promise it is way less of a hassle.

fuzztester
3 replies
10h48m

What make and model would you recommend for a mini PC? I was thinking of getting one lately.

nielsole
2 replies
10h25m

I got a used Lenovo ThinkCentre M910q (i5-6500T, 4x2.5 GHz, 8 GB RAM, 240 GB SSD) for €100.

Plenty fast for a couple of VMs with web servers. It's also a lot better for 24/7 operation than an RPi, where I was always struggling with SD card wear.

implements
0 replies
9h7m

See also Dell OptiPlex 9020 Mini (i5-4590T 4x2GHz 16GB) - that class of “One Litre PC” make an excellent VPS / multi-VM setup.

fuzztester
0 replies
7h48m

Thanks.

wannacboatmovie
3 replies
12h29m

I'm waiting for the follow-up from Jeff Geerling where he fits a Raspberry Pi into a Cray-1 enclosure.

squarefoot
1 replies
5h26m

I would totally love a Cray-1-shaped PC enclosure. An even smaller one for Raspberry Pis would also be cool.

smcameron
0 replies
5h23m
arbitrandomuser
0 replies
7h10m

Or as many as he can with networking

DonHopkins
3 replies
15h18m

The Cray-1 has a much better couch.

qgin
1 replies
15h5m

We need to bring back computer seating

Y_Y
0 replies
6h46m

I've sat on my raspi 3b a couple of times. Can't honestly recommend it.

Animats
0 replies
14h14m

The Cray-1's padded seats cover the power supplies. I was sad to see the Cray-1 at the Computer Museum in Mountain View being used as storage for catering supplies for some event in the lobby.

mherrmann
2 replies
10h43m

TL;DR: ten years ago, Raspberry Pis and Android phones were a handful of times faster than the Cray-1. Nowadays, they are around 100 times faster. Pretty impressive, considering they fit in our pockets.

hulitu
1 replies
8h29m

Thank god they have a modern OS to slow them down. /s

moffkalast
0 replies
4h53m

No /s required: Wirth's law holds far more solidly than Moore's law. You may live to see man-made software bloat beyond your comprehension.

webprofusion
1 replies
15h15m

Dude fix your website certificate, come on.

wannacboatmovie
0 replies
12h31m

This site is served over plain HTTP. Not only does he not need to fix anything, you should fix your broken web browser that's "upgrading" to HTTPS when no one, especially the site owner, asked it to.

travisgriggs
1 replies
13h2m

The article has Android phone comparisons. Any idea how the iPhones stack up against the Cray-1?

hulitu
0 replies
6h35m

My Android phone can not do any batch processing out of the box. You need 3 layers of emulation to be able to run a shell.

mikewarot
1 replies
15h43m

I wonder where a Raspberry Pi Pico, based on the RP2040, fits in all of that.

dgacmu
0 replies
15h5m

A lot slower - roughly 100x - because most of these benchmarks measure FLOPS, and the RP2040 doesn't have native floating point, so it has to emulate it, taking it down to 1-3 MFLOPS vs the 160 MFLOPS of the Cray-1.

If you compared integer operations it would be a lot closer, but that's not really what the Cray was designed for. (The RP2040 at 125 MHz x 2 cores is in a pretty similar range.)
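As a rough illustration of how such MFLOPS figures are measured, here is a toy timing loop. It is Python, so interpreter overhead dominates and the absolute number means little; real benchmarks like Whetstone or Linpack time compiled kernels instead.

```python
import time

# Toy MFLOPS estimate: time a loop doing two floating-point operations
# per iteration (one multiply, one add). In Python the interpreter
# overhead dwarfs the arithmetic, so this only illustrates the shape
# of such a benchmark, not a meaningful hardware number.

def estimate_mflops(n=100_000):
    a, b, s = 1.000001, 0.999999, 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        s += a * b  # 2 FLOPs per iteration
    elapsed = time.perf_counter() - t0
    return (2 * n / elapsed) / 1e6, s  # (MFLOPS, checksum)
```

Returning the checksum alongside the rate is a standard trick to stop an optimizing runtime from eliminating the loop entirely.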

benj111
1 replies
6h41m

The Pi Pico would be an interesting comparison.

It doesn't have an FPU, and not much RAM, so it might actually be a close race for some of these tests.

moffkalast
0 replies
4h49m

Well I'm anxiously anticipating the first Micropython build for the Cray-1.

DonHopkins
1 replies
15h11m

I knew a guy who worked at one of the national labs that had its own Cray-1 supercomputer, in a machine room with a big observation window that visitors could admire it through.

Just before a big tour group of VIPs he knew would come by, he hid inside the Cray-1, and waited patiently for them to arrive.

Then he casually strolled out from inside the Cray-1, pulling up the zipper of his jeans, with a sheepish relieved expression on his face, looked up startled at the tour group gaping at him through the window, and scurried off!

codezero
0 replies
13h26m

To which they said, "Urine; A lot of trouble."

chatnealbot
0 replies
2h56m

I wonder how Moore's law figured into the pricing. For a nuclear simulation it makes sense you'd want to pay a lot, likewise for basic science or politically driven projects like the Apollo moon missions. But for weather or commercial applications, if you wait a few years it might not be worth paying for a Cray right away, though there's a marketing aspect to how advanced your product is... maybe this was before Moore's law, though.

auselen
0 replies
3h37m

Recognizing the domain, you can read about the early history of benchmarks: http://www.roylongbottom.org.uk/whetstone.htm

Havoc
0 replies
16h26m

For a second there I thought there was a new sbc called cray. Well played