Wow, looking at the history of the ARM generation the original versions of the Raspberry Pi use, it’s hard to believe it’s so old! When the Raspberry Pi B+ was released (2014), the ARM core it used (the ARM1176, from 2003) was already 11 years old. So it’s not unbelievable that you might need to start supplying an arch flag to produce compatible code when building on a different platform (like the newer Raspberry Pi the article says they first built on).
As others have said, it does seem like a misconfiguration (perhaps in the defaults shipped by their distribution) that the correct arch is not picked by default when building on the Raspberry Pi B+ itself.
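For concreteness, here's roughly what that arch flag looks like with a GCC/Clang toolchain, using the usual Raspbian-style ARM1176 settings; treat the exact flag spellings as a sketch, since they vary by toolchain version:

    /* hello.c - trivial program for sanity-checking the target arch.
     *
     * Built on a newer Pi but intended to run on the B+ (ARM1176 = ARMv6 + VFPv2),
     * something like:
     *
     *   gcc   -march=armv6 -mfpu=vfp -mfloat-abi=hard -o hello hello.c
     *   clang -mcpu=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard -o hello hello.c
     *
     * Without an arch flag the compiler falls back to whatever default the distro
     * configured (evidently something newer than ARMv6 in the article's case), and
     * the resulting binary dies with an illegal instruction on the B+.
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello from ARMv6");
        return 0;
    }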
IIRC the original Pi used leftover chips from a TV box, which is the kind of product that IME never ships more compute than they have to, for price reasons.
TV boxes IME usually ship with less compute than they have to [in order to provide reasonable UX] ;)
And kiosks, like the McDonald's ordering kiosks. I wouldn't be surprised if they spend more on installation than on building the device itself!
I haven't interacted with these much, but with all such things, e.g. TVs and the like, I get peeved at the thought process.
Typical dev time for a new UI from scratch is years, and by then the prices and availability of parts will have changed.
I wonder if people are doing all their testing and dev work in KVM or other emulation stacks, but then not accurately locking the clock rate or limiting RAM during testing.
Because I'd quit as a dev if I saw the fruits of my labours turning out as complete crap and a laughing stock. I wouldn't want my name associated with laggy, crashy, frustrating junk. I'd want no part of it, no part of everyone hating my work.
And further, how the hell does laggy crap get past the CTO? CEO? I've seen lag just trying to change the channel!
I mean, outside of caring about customers ever buying anything with your name on it again, there's the laughing stock factor.
"Hi, I'm CEO of crappy corp"
"Wow, you must be really proud of yourself, dumbass"
I just don't get it. CEO bonuses aside, saving 10 cents on a part over 10M units is still only $1M in extra profit, and the CEO might see only a tiny fraction of that as a bonus.
I don't grok.
Most developers suck.
Even if that were true, it's not the problem. Even sucky devs can crank out code that runs reasonably fast; it just depends on whether the company sees it as a required feature.
Making your code run quickly won't help if your software architecture is inefficient, or optimized against the customer, as so many web applications are these days. There are many commercial web pages that appear and then, approximately ten seconds later, clicking on a button will actually do something. I am not sure why that is considered acceptable, but customer experience doesn't seem to rank very high on the list of priorities.
In some cases, it's because the devs are developing locally and never intentionally test with their browser set to simulate latency.
Part of making code run quickly is architecting the solution correctly.
The problem is that there is no competition: everyone sucks and only builds to "it works, somewhat" quality. Hence, there's no incentive for anyone to invest more money.
My personal peeve is the original Coke Freestyle machines, which are overtaxed WinCE systems meant to run with a lower-resolution PDA display. They've never resolved all the gross latency issues with them, and even the new Freestyle machines are laggy compared to the older Pepsi Spire dispensers, which could generate fluid full-motion video years earlier.
God, yes. Every time I use one of these forsaken machines I can't help but wonder during the long latency pauses "did anyone use this before they shipped it?"
Anything to satisfy whichever law it is that says "lagginess remains constant".
"Huh, this CPU is kind of snappy running native code now. What should we do?"
"Let's move application development to Python then, I suppose."
"Thanks, that fixed it."
Probably what happened on the dev team for my 55" smart TV.
Definitely happened on the OLPC.
What's worse, they made a "throbber" effect by string substituting a different color into an SVG, and then reparsing the SVG, for each color it fades through.
That's the kind of coding quality the OLPC project had. That's why it failed, and it probably also factored into why they disabled the view source button.
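For anyone who hasn't seen that pattern, a rough sketch of the per-frame work being described; render_svg_string() here is a hypothetical stand-in for whatever SVG renderer the toolkit provides, not OLPC's actual code:

    #include <stdio.h>

    /* Hypothetical renderer: parses the SVG source from scratch and rasterises it. */
    void render_svg_string(const char *svg_source);

    static const char *THROBBER_TEMPLATE =
        "<svg xmlns='http://www.w3.org/2000/svg' width='32' height='32'>"
        "<circle cx='16' cy='16' r='12' fill='%s'/></svg>";

    void draw_throbber_frame(const char *frame_colour)
    {
        char svg[256];
        /* Substitute this frame's colour into the SVG source text... */
        snprintf(svg, sizeof svg, THROBBER_TEMPLATE, frame_colour);
        /* ...then hand the whole document back to the parser, once per frame. */
        render_svg_string(svg);
    }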
In modern GTK you also have to string-substitute or otherwise construct CSS, pass it in as a string, and have it reparsed to change element styles. But at least it's native!
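Something like this, assuming GTK3's CSS provider API (the GTK4 variants differ slightly); a sketch of the pattern, not the one true way:

    #include <gtk/gtk.h>
    #include <stdio.h>

    /* Recolour a widget by building a stylesheet string and having GTK reparse it. */
    static void set_widget_colour(GtkWidget *widget, const char *hex_colour)
    {
        char css[128];
        snprintf(css, sizeof css, "* { background-color: %s; }", hex_colour);

        GtkCssProvider *provider = gtk_css_provider_new();
        gtk_css_provider_load_from_data(provider, css, -1, NULL);  /* full reparse */
        gtk_style_context_add_provider(gtk_widget_get_style_context(widget),
                                       GTK_STYLE_PROVIDER(provider),
                                       GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);
        /* A real app would keep one provider around and just reload it. */
        g_object_unref(provider);
    }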
It also failed because it took ages to release anything useful. It took years, and suddenly smartphones were upon us all, and OLPC shrank from a special niche to a tiny niche.
I would bet that instead of python it's JavaScript and a webapp pretending to be a native application.
If it's an LG TV this is literally true. They bought webOS specifically for this purpose. Meanwhile Roku invented a runtime/language (BrightScript) and mandated its use to at least enforce a minimum quality standard and throw away cruft.
Eh, the result may suck, but I don't think it's usually a hardware problem.
No, they're just bad at software.
Phones, not TVs, but that's pretty much the idea. Even better, the ARM core was tacked on as a sort of "dammit, I suppose we'll have to run applications" kinda thing and isn't even necessarily initialised during boot.
Raspberry Pis actually boot on a really fringe processor called a VideoCore. Arguably the GPU bootstraps the CPU, which makes my brain hurt.
I guess technically a CPU core within the GPU ASIC block loads code into the main CPU. What a weird design. Feels like the kind of thing Wozniak would come up with to shave cost from the Apple Macintosh.
AFAIK Woz only really worked on the Apple II and Apple ///, but his Apple II disk drive controller lived on and was included in the Mac.
Woz also designed the Apple Desktop Bus for keyboards and mice. It was first used on the Apple IIGS, and then on Macs from the Macintosh II onwards, until it was replaced by USB.
Could it be IP protection? That way a program on the main CPU cannot access the initialization code, and the user cannot learn how it's done.
Doesn't really help when the VC firmware is loaded from the SD card anyways
Ehhh, the VideoCore is basically just a (IIRC multi-core/multi-threaded) vector processor. Funnily enough, this also makes these rather cheap hard-real-time number-crunching chips with the high-bandwidth IO of a Pi, notably its camera/display interfaces.
This sort of thing is not uncommon in the embedded space. Lots of devices are basically built on a DSP with a tiny ARM core tacked on to handle application logic.
There were many X terminals running off nothing but a Texas Instruments 34010, which was a very DSP-like CPU that ended up in a lot of high-end graphics acceleration boards for PCs and Macs (and Unix workstations).
The fact it could boot up an X server is quite extraordinary.
I wonder what the VideoCore looks like to the programmer.
Sidebar, but it's very annoying that it now takes me a moment to work out whether people are talking about a social media website or an open-source graphical server when I see "X" being discussed in a tech context.
With rare exceptions, the social media website tends to be referred to as "X formerly Twitter" or just "Twitter" for short
The fact it could boot up an X server is quite extraordinary.
Since it was designed explicitly to serve that purpose, I'm not sure why it's 'extraordinary'.
Disclaimer: I spent time at a 34010 X terminal shop.
Are you sure about that? As I understand it, the other product that chip was used in was a Roku stick.
It's true that VideoCore was intended to be a GPU for phones, though.
I am only going off my memory, so I could be mistaken, but IIRC the OG Pi used processors originally designed for phones (at least the one that hit the market; early prototypes were based on Atmel micros). The iPhone used a processor originally designed for a Samsung set-top box, which was then underclocked to save on battery.
IIRC, once they realised the micros were not going to cut it, they went to Broadcom (which Eben was working for at the time), who were able to supply some "overstock" processors for cheap, and those became the processor used in the Pi. Remember, at the time the Pi was never designed for "makers" but to be a cheap computer to help kickstart education. It was never designed for "us", but we all said "hey, cheap little Linux computer, I'll take 5!"
It could be that both things are true; it's common enough for chips to be made to serve two markets to save capital cost.
I've come across some information (of unknown provenance) that the BCM2763, which was advertised as a phone chip on an old version of the Broadcom website (via archive.org), was the same die as the BCM2835, but with the DRAM hooked up in a different way.
I remember a Nokia device that used VideoCore; I'm not sure if it ever made it to market.
IIRC the original Pi was a set-top box chip, because at the time it still had DirectFB GPL code available (as every STB used). I remember it quite distinctly, as I laughed at how pointless their experiment was (sort of wrongly; actually the RPi has done more for encouraging competition than anything they've produced themselves).
Also, Broadcom haven't made phones, while they do still have a strong TV/STB market share, though that's declining due to their incessant proprietary attitude.
I suspect it's less about the CPU being an afterthought than about how, and by what, the CPU is taken out of its "waiting to be booted" state. Ancient CPUs simply had a BIOS mask ROM hardwired to the reset vector and immediately loaded from it upon release of the reset line; nowadays, on more complex systems, I believe that job is done by the Intel Management Engine/Apple Secure Enclave/VideoCore GPU/etc., not the main CPU itself.
Not strictly either/or. Just used in whatever people wanted to use it for.
never ships more compute than they have to
ARM keeps releasing newer slow cores that support the latest instructions; for example the Cortex-A5 was available and the RPi 1 really should have used that.
She's not building on the B+, though.
Quote:
I started trying to take binaries from my "build host" (a much faster Pi 4B) to run them on this original beast. It throws an illegal instruction.
This is like building something with the latest MSVC on Windows 11 and trying to run the .EXE on an old PC running Windows XP. :)
I suspect the entire Pi distro she's running on the Pi 4B itself won't run on the B+, since all of it is probably compiled the same way, possibly down to the kernel.
She was building on the B+ in the later example of the blog.
Ah I see that now.
The interesting question there is why does the clang binary itself run on the old hardware?
It must be that the distro build uses a different compiler configuration for its own packages than the default configuration baked into the installed clang.
Maybe it even builds clang twice: once to produce a clang that runs on the machine that builds the distro, which then compiles the packages, including the clang to run on the Pi.
Or the build system uses GCC.
But at the end, she puts together a new SD card for the B+, boots it, and tries to compile an empty program on the B+ itself. "It can compile something it can't even run", she says.
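For anyone who wants to poke at that themselves, a rough sketch of the experiment and the obvious workaround, assuming the culprit really is the compiler's built-in default arch rather than the installed libraries:

    /* empty.c - the "empty program" from the post.
     *
     * On the B+ itself:
     *
     *   cc empty.c -o empty && ./empty
     *       -> with the distro's default target: Illegal instruction
     *
     *   cc -march=armv6 -mfpu=vfp -mfloat-abi=hard empty.c -o empty && ./empty
     *       -> forcing ARMv6 should run, unless the shipped libc/startup objects
     *          were themselves built for a newer arch.
     */
    int main(void) { return 0; }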
It was meant to be a low-priced computer.