
Clang now makes binaries an original Pi B+ can't run

stephen_g
47 replies
1d10h

Wow, looking at the history of the ARM generation the original versions of the Raspberry Pi use, it’s hard to believe it’s so old! When the Raspberry Pi B+ was released (2014), the ARM core it used was already 11 years old (using the ARM1176 core from 2003). So it’s not unbelievable that you might need to start supplying an arch flag to produce compatible code when building on a different platform (like the newer Raspberry Pi the article says they first built on).

As others have said, it does seem like a misconfiguration (perhaps in the defaults shipped by their distribution) that the correct arch is not picked by default when building on the Raspberry Pi B+ itself.

yjftsjthsd-h
40 replies
1d9h

When the Raspberry Pi B+ was released (2014), the ARM core it used was already 11 years old (using the ARM1176 core from 2003).

IIRC the original Pi used leftover chips from a TV box, which is the kind of product that IME never ships more compute than they have to, for price reasons.

einr
19 replies
1d9h

TV boxes IME usually ship with less compute than they have to [in order to provide reasonable UX] ;)

jeffparsons
9 replies
1d7h

And kiosks, like the McDonald's ordering kiosks. I wouldn't be surprised if they spend more on installation than on building the device itself!

bbarnett
6 replies
1d6h

I haven't interacted with these much, but with all such things, e.g. TVs and the like, I get peeved at the thought process.

Typical dev time for a new UI from scratch is years, and part prices and availability will be different down the road.

I wonder if people are running all their tests and dev work in KVM or other emulation stacks, but then not accurately locking the clock rate or limiting RAM during testing.

Because I'd quit as a DEV, if I saw the fruits of my labours, turning out as complete crap and a laughing stock. I wouldn't want my name associated with laggy, crashy, frustrating junk. I'd want no part of it, no part of everyone hating my work.

And further, how the hell does laggy crap get past the CTO? CEO? I've seen lag just trying to change the channel!

I mean, outside of caring about customers ever buying anything with your name again, there's the laughing stock factor.

"Hi, I'm CEO of crappy corp"

"Wow, you must be really proud of yourself, dumbass"

I just don't get it. CEO bonuses aside, saving 10 cents on a part, over 10M units is still only $1M extra profit, and the CEO might see a tiny fraction, as a bonus, of that extra profit.

I don't grok.

LtWorf
4 replies
1d5h

Most developers suck.

rowanG077
3 replies
1d4h

Even if it were true that is not the problem. Even sucky devs can crank out code that runs reasonably fast. It just depends on whether the company sees it as a required feature.

butlerm
2 replies
1d3h

Making your code run quickly will not help if your software architecture is inefficient or optimized against the customer, as so many web applications are these days, for example. There are many commercial web pages that appear, and then approximately ten seconds later clicking on a button will actually do something. I am not sure why that is considered acceptable, but customer experience doesn't seem to rank very high on the list of priorities.

shadowgovt
0 replies
1d1h

In some cases, it's because the devs are developing locally and never intentionally test with their browser set to simulate latency.

rowanG077
0 replies
18h55m

Part of making code run quickly is architecting the solution correctly.

mschuster91
0 replies
1d5h

I mean, outside of caring about customers ever buying anything with your name again, there's the laughing stock factor.

The problem is, there is no competition, everyone sucks and only builds to "it works somewhat" quality. Hence, no incentive for anyone to invest more money.

kevin_thibedeau
1 replies
22h6m

My personal peeve is the original Coke Freestyle machines which are overtaxed WinCE systems meant to run with a lower resolution PDA display. They've never resolved all the gross latency issues with them and even the new Freestyle machines are laggy compared to the older Pepsi spire dispensers which could generate fluid full motion video years earlier.

deathanatos
0 replies
20h12m

God, yes. Every time I use one of these forsaken machines I can't help but wonder during the long latency pauses "did anyone use this before they shipped it?"

adhesive_wombat
6 replies
1d7h

Anything to satisfy whichever law it is that says "lagginess remains constant".

actionfromafar
5 replies
1d6h

"Huh, this CPU is kind of snappy running native code now. What should we do?"

"Let's move application development to Python then, I suppose."

"Thanks, that fixed it."

Probably what happened in my 55" smart TV dev team.

bitwize
2 replies
1d5h

Definitely happened on the OLPC.

What's worse, they made a "throbber" effect by string substituting a different color into an SVG, and then reparsing the SVG, for each color it fades through.

That's the kind of coding quality the OLPC project had. That's why it failed, and it probably also factored into why they disabled the view source button.

adhesive_wombat
0 replies
1d5h

In modern GTK you also have to string-substitute or otherwise construct CSS, pass it in as a string, and have it reparsed to change element styles. But at least it is native!

actionfromafar
0 replies
3h25m

It also failed because it took ages to release anything useful. It took years, and suddenly smartphones were upon us all, and OLPC shrunk from a special niche to a tiny niche.

xnzakg
1 replies
1d5h

I would bet that instead of python it's JavaScript and a webapp pretending to be a native application.

treyd
0 replies
22h18m

If it's an LG TV this is literally true. They bought webOS specifically for this purpose. Meanwhile Roku invented a runtime/language (BrightScript) and mandated its use to at least enforce a minimum quality standard and throw away cruft.

yjftsjthsd-h
0 replies
1d8h

Eh, the result may suck, but I don't think it's usually a hardware problem.

Narishma
0 replies
1d1h

No, they're just bad at software.

RantyDave
18 replies
1d8h

Phones, not TVs, but that's pretty much the idea. Even better, the ARM core was tacked on as a sort of "dammit, I suppose we'll have to run applications" kinda thing and isn't even necessarily initialised during boot.

Raspberry Pis actually boot on a really fringe processor called a VideoCore. Arguably the GPU bootstraps the CPU, which makes my brain hurt.

phendrenad2
5 replies
1d8h

I guess technically a CPU core within the GPU ASIC block loads code into the main CPU. What a weird design. Feels like the kind of thing that Wozniak would come up with to shave cost from the Apple Macintosh.

djmips
1 replies
1d6h

AFAIK Woz only really worked on Apple II and Apple /// but his Apple II disk drive controller lived on and was included in the Mac.

Findecanor
0 replies
1d2h

Woz also designed Apple Desktop Bus for keyboards and mice. First used on the Apple IIGS, and then on Macs from Macintosh II onwards up until it was replaced by USB.

codedokode
1 replies
1d6h

Could it be IP protection? That way a program on the main CPU cannot access the initialization code, and the user cannot learn how it works.

dezgeg
0 replies
1d4h

Doesn't really help when the VC firmware is loaded from the SD card anyways

namibj
0 replies
1d4h

Ehhh, VideoCore is basically just an (IIRC multi-core/multi-thread) vector processor. Funnily enough, this also makes these rather cheap number-crunching hard-real-time chips with the high-bandwidth IO (for hard real time) of a Pi, notably its camera/display interfaces.

swiftcoder
4 replies
1d6h

This sort of thing is not uncommon in the embedded space. Lots of devices are basically built on a DSP with a tiny ARM core tacked on to handle application logic.

rbanffy
3 replies
1d3h

Lots of devices basically built on a DSP

There were many X terminals running off nothing but a Texas 34010, which was a very DSP-like CPU that ended up in a lot of high-end graphics acceleration boards for PCs and Macs (and Unix workstations).

The fact it could boot up an X server is quite extraordinary.

I wonder what the VideoCore looks like to the programmer.

devmor
1 replies
23h51m

Sidebar but it’s very annoying how it now takes me a moment to think if people are talking about a social media website or an open source graphical server when I see “X” being discussed in a tech context.

brianshaler
0 replies
20h52m

With rare exceptions, the social media website tends to be referred to as "X formerly Twitter" or just "Twitter" for short

kjs3
0 replies
23h10m

The fact it could boot up an X server is quite extraordinary.

Since it was designed explicitly to serve that purpose, I'm not sure why it's 'extraordinary'.

Disclaimer: I spent time at a 34010 X terminal shop.

ajb
3 replies
1d6h

Are you sure about that? As I understand it, the other product that chip was used in was a Roku stick.

It's true that videocore was intended to be a GPU for phones, though.

Crosseye_Jack
1 replies
1d2h

I am only going off my memory, so I could be mistaken. But IIRC the OG Pi used processors originally designed for phones (at least the one that hit the market; early prototypes were based on Atmel micros). The iPhone used a processor originally designed for a set-top box from Samsung, which was then underclocked to save on battery.

IIRC when they realised that the micros were not going to cut it, they went to Broadcom (which Eben was working for at the time), who were able to supply some "overstock processors" for cheap, and that became the processor used in the Pi. Remember, at the time the Pi was never designed for "makers" but to be a cheap computer to help kickstart education. It was never designed for "us", but we all said "hey, cheap little Linux computer, I'll take 5!"

ajb
0 replies
23h1m

It could be that both things are true; it's common enough for chips to be made to serve two markets to save capital cost.

I've come across some (unknown provenance) information that the bcm2763, which was advertised as a phone chip on an old version of the Broadcom website (via archive.org), was the same die as the bcm2835, but with DRAM hooked up in a different way.

secondcoming
0 replies
1d5h

I remember a Nokia device that used VideoCore; I'm not sure if it ever made it to market.

throwaway67743
0 replies
17h55m

IIRC the original Pi was a set-top-box chip, because at the time it still had DirectFB GPL code available (as every STB used) - I remember it quite distinctly, as I laughed at how pointless their experiment was (I was sort of wrong; the RPi has actually done more for encouraging competition than anything they've produced themselves).

Also, Broadcom haven't made phones, and while they do still have a strong TV/STB market share, that's declining due to their incessant proprietary attitude.

numpad0
0 replies
1d2h

I suspect it's less about having the CPU as an afterthought than about how, and by what, the CPU is taken out of its "awaiting to be booted" state. Ancient CPUs simply had a BIOS mask ROM hardwired to the reset vector and immediately loaded from it upon release of the reset line; nowadays, on more complex systems, I believe that's what the Intel Management Engine/Apple Secure Enclave/VideoCore GPU/etc. does, not the main CPU itself.

ChrisRR
0 replies
2h7m

Not strictly either/or. Just used in whatever people wanted to use it for.

wmf
0 replies
20h18m

never ships more compute than they have to

ARM keeps releasing newer slow cores that support the latest instructions; for example the Cortex-A5 was available and the RPi 1 really should have used that.

kazinator
4 replies
1d8h

She's not building on the B+, though.

Quote:

I started trying to take binaries from my "build host" (a much faster Pi 4B) to run them on this original beast. It throws an illegal instruction.

This is like building something with the latest MSVC on Windows 11 and trying to run the .EXE on an old PC running Windows XP. :)

I suspect the entire Pi distro she's running on the Pi 4B itself won't run on the B+, since all of it is probably compiled the same way, possibly down to the kernel.
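
One quick way to check that (a sketch; "a.out" stands in for whatever binary threw the illegal instruction, and readelf comes with binutils):

    # ARM ELF files carry build attributes recording the architecture they target
    readelf -A a.out | grep Tag_CPU_arch
    # "Tag_CPU_arch: v7" (or higher) would explain SIGILL on the B+'s ARM1176 (ARMv6)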

mid-kid
2 replies
1d8h

She was building on the B+ in the later example of the blog.

kazinator
1 replies
1d7h

Ah I see that now.

The interesting question there is why does the clang binary itself run on the old hardware?

It must be that the distro build uses a different compiler configuration for itself from the configuration imbued into the installed clang.

Maybe it even builds clang twice: once to produce a clang that runs on the machine that builds the distro, which then compiles the packages, including the clang to run on the Pi.

actionfromafar
0 replies
1d6h

Or the build system uses GCC.

kelnos
0 replies
1d8h

But at the end, she puts together a new SD card for the B+, boots it, and tries to compile an empty program on the B+ itself. "It can compile something it can't even run", she says.

vkaku
0 replies
20h23m

It was meant to be a low price computer.

johnklos
23 replies
1d10h

You'll see a whole bandwagon of people saying things like, "supporting old hardware is BAD! It takes time and money that nobody has!", as though someone needs to be hired to sit around and do nothing but pore over code and constantly rewrite code for old hardware.

There's plenty of evidence to the contrary, but since when has evidence mattered when it comes to defending the right of big business / big distro to do whatever they want? ;)

Really, this is just laziness and sloppiness on the Linux distro makers' part. Any amount of testing would catch this. Thanks, Rachel!

woodruffw
6 replies
1d8h

This has nothing to do with the distro; it looks like an upstream LLVM bug. And it does demonstrate the problem: old code doesn’t change, but interfaces and invariants do. Those external changes do represent maintainer burden.

Armv6 isn’t really “old hardware” in the “disused, actively rotting” sense. That’s reserved for things like Itanium or HPPA, which distributions (and upstreams) would do perfectly well to remove unless paid buckets of money by their respective corporations.

livrem
4 replies
1d5h

Raspberry Pi Zero (W) are still great. There will be millions of them around in use for decades to come. I will probably always have a few in some drawer. Sad to hear that anyone even considers deprecating support for that hardware. Not to mention all other ARMv6 hardware still around. We need some baseline hardware types that just will always be supported, to add some friction to software rot and bloat in general.

woodruffw
1 replies
1d5h

Sad to hear that anyone even considers deprecating support for that hardware.

I don’t think anybody has. This appears to have been entirely an accidental regression.

livrem
0 replies
9h17m

This was a comment in the thread on distros deprecating ARMv6 support, not the clang issue.

shadowgovt
1 replies
1d1h

Yeah, I can see the benefits.

Are you volunteering to be the one that runs exhaustive regression tests on every distro release? The open source community only thrives as much as people are willing to dedicate volunteer time into making it thrive.

livrem
0 replies
9h19m

I test my own open source code on a RPi Zero when applicable. I can report bugs to other projects if I happen to notice something is missing, but I can't decide what platforms they should support or not. I can hope that as many as possible can see the value in having some standard fixed, low-performance, high-priority, default targets that are "never" deprecated. But the only thing that would scale is that every project find their own volunteers to make it happen.

account42
0 replies
5h38m

This has nothing to do with the distro; it looks like an upstream LLVM bug.

Wrong: https://news.ycombinator.com/item?id=38505879

wmf
2 replies
1d10h

I think RPi has struck a reasonable balance where the mainstream Linux community doesn't really support ancient ARMv6 so RPI themselves maintains forked software (e.g. Raspbian). This way the cost of legacy is borne by those who benefit from it, not everyone.

deaddodo
1 replies
1d9h

Raspbian existed well before ARMv6 support dropped off. It's been their main distro from the outset, but mainstream distros with ARM builds only removed support for ARMv6 in the last 1-3 years (depending on distro).

account42
0 replies
5h35m

Pretty sure Debian's armhf images have always required ARMv7, or at least for much longer than 3 years. There are also armel images, but those don't use hardware floats, which makes them much less performant than what the original Pi is capable of. Pretty sure that mismatch is why Raspbian exists in the first place.
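
For what it's worth, the float ABI a binary was built for shows up in its ARM build attributes (a sketch; the path is just an example):

    # armhf binaries pass floating-point arguments in VFP registers, armel ones don't
    readelf -A /usr/bin/clang | grep Tag_ABI_VFP_args
    # "Tag_ABI_VFP_args: VFP registers" indicates a hard-float (armhf-style) build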

jenadine
2 replies
1d10h

The thing with free software is that you are in no position to demand anything. If the maintainers don't feel like supporting your hardware, they don't have to.

But the beauty of free software is that you can always do it yourself. (Or pay someone to do it)

shadowgovt
0 replies
1d1h

If anything, this can be read as, while an inconvenience, an open source success story.

All pieces of the puzzle were open enough that the author could track down the problem and correct it. That, not indefinite support for no-longer-manufactured hardware, is the benefit of open source. It's the thing that enables the other thing.

And thanks to the magic of the internet, blogs, and search engines, now that one person has solved the problem there's a cracking chance that the next person to have the problem will find the solution.

bobsmith432
0 replies
1h36m

This is true for Windows. Free software has helped keep operating systems like Windows XP and 7 still modern and secure, and many applications have been patched to work on these older systems. Here, for instance, are a port of Firefox Quantum to XP and a fork of Chromium maintaining support for 7, but I like to stick to the last Ungoogled Chromium version because I hate Google.

https://github.com/Feodor2/Mypal68 https://github.com/win32ss/supermium

bee_rider
2 replies
1d9h

Big distro. These fat-cat volunteer kernel devs are just trying to keep us down by giving us so much free software that we collapse under the weight of it.

zmgsabst
1 replies
1d9h

Aren’t a lot of kernel devs paid by large corporations?

Eg, this article.

https://thenewstack.io/contributes-linux-kernel/

Shish2k
0 replies
1d5h

Corporate developers are typically paid to solve corporate problems. Sometimes a company will hire a big-name OSS developer and tell them “continue maintaining your project in whatever way you think best”, but that’s by far the exception rather than the norm.

Elucalidavah
1 replies
1d10h

sit around and do nothing but pore over code and constantly rewrite code for old hardware

In case of refactoring / restructuring, that's exactly so.

But "drop support for this old hardware" is meant to be an intended decision with a clear deprecation warning, not accidentally.

atemerev
0 replies
1d9h

The bazaar doesn’t work like this. There are no incentives to support old hardware. If there are enough people with old hardware, their activity might be enough to do something. Otherwise — puff, gone.

Open source is already a miracle. The fact that something works somewhere is a miracle. I don’t tempt the powers and I don’t demand even more miracles to satisfy some perfectionist urges.

zozbot234
0 replies
1d5h

Typically, support for old hardware is added as part of the "experimental" featureset. Meaning that it's expected to break from time to time as the underlying codebase changes, until the folks who care about that support come around and fix the breakage. If that maintenance stops altogether and the code stays broken, that's when it gets removed.

yellow_lead
0 replies
1d10h

You'll see a whole bandwagon of people saying things like, "supporting old hardware is BAD! It takes time and money that nobody has!"

Any amount of testing would catch this.

Who is paying for the testing? I'm not suggesting supporting old hardware is bad, but we must recognize it takes some effort to uphold backwards compatibility. Stuff always gets broken accidentally, and testing isn't free.

kaba0
0 replies
1d9h

when it comes to defending the right of big business / big distro to do whatever they want? ;)

Anything else about your alternative reality?

Also, why don’t you go and take on support for this given target, if it’s so important to you? Or pay someone to do it? I’m sure the project wouldn’t mind supporting it if someone stepped up, and I’m sure it’s still not too late.

bregma
0 replies
1d5h

You'll see a whole bandwagon of people saying things like, "supporting old hardware is BAD! It takes time and money that nobody has!", as though someone needs to be hired to sit around and do nothing but pore over code and constantly rewrite code for old hardware.

It has been my experience, over the last 6 decades or so, that it almost always boils down to "doing anything except what I want is a waste of time and resources".

When it comes to free software, you do what you do and learn to ignore the "but what about ME!" demands from those who contribute nothing else. Or you move on and put your energy into something else.

JonChesterfield
0 replies
1d6h

Your stance is:

    1/ Old hardware is free to support because the software for it just keeps working

    2/ Lazy Linux people didn't test the software that stopped working on old hardware
Those two things you believe to be true are inconsistent with one another. For example, in this context.

What you're missing is that code changes to do new stuff, and sometimes those changes are incompatible with old hardware or operating systems. If no one is testing said old systems and the developer doesn't remember said eccentricities, the old systems will break when the new stuff lands.

If anything it might be better to spend the resources deleting the support for old hardware (probably at the point where people stop testing on it) so that people using the old stuff get a much clearer message that they also need to use old tools with it. It's hard to get sign-off to do that either; leaving the probably-broken stuff lying around is the spend-no-time-now choice.

fsniper
8 replies
1d6h

Title is unfortunately sensational. This is a default target change. It turns out clang can still build binaries for the Pi B+; you just need to be explicit about the architecture. So perhaps a small title change that's clearer about this being only a default-setting change?

nottorp
7 replies
1d1h

Doesn’t seem so sensational when it can’t build binaries for the target machine… on the target machine itself…

dagmx
4 replies
23h58m

It can, it just doesn’t by default. Which is what the person you’re replying to is saying.

nottorp
3 replies
21h58m

And... does it make sense to you... when you're not cross compiling?

dagmx
2 replies
20h40m

The point is that it objectively CAN compile to the right target. The capability is not broken.

However, it DOESN'T by default, due to a configuration bug. Therefore it doesn't have to make sense, because it's clearly not intentional.

Your sentence saying “it can’t build” is therefore incorrect. It’s the distinction between the two capitalized words above.

brian-armstrong
1 replies
12h28m

If "clang helloworld.c" doesn't produce a working a.out out of the box, I think it's fair to say builds are broken. Plenty of projects won't build in those circumstances without some assistance.

dagmx
0 replies
11h36m

Again, that’s not the point. I’m not sure how much clearer this can be made:

1. Nobody is saying it’s not a bad situation. Everyone agrees that it’s non-ideal.

2. People who are saying that it can’t produce a usable build are wrong, because it absolutely can produce a usable build with the arch flag explicitly provided.

3. People are conflating a bad default with the inability to do something.

It honestly feels like people are substituting their own sentences in and then arguing against a point that isn’t being made.

fsniper
0 replies
21h37m

Title suggests that when using clang, built binaries can't be run on this device - i.e. that it can't build for this architecture at all. But the post elaborates that it's possible to build for this architecture; it just incorrectly targets another one by default. It would not matter if it's on the same architecture, or cross compiling. The capability is there. It requires you to be explicit about which architecture you are targeting.

A default for targeting is incorrect, and/or an architecture identification is buggy. But binaries built for Pi B+ - when using correct targeting arguments - can be run on Pi B+.

Now if the title is using wording that suggests a functionality is not there anymore, versus the reality, where defaults or identification are incorrect, wouldn't that mean it is hunting for sensation?

cbmuser
0 replies
1d

It’s sensational because it’s wrong. LLVM still supports even ARMv5T which is the baseline of Debian’s armel port.

opello
6 replies
1d7h

Seems like the problem is likely a configuration target change in the clang-13 package that's current for bookworm.

Specifically because under bullseye (and clang-11) the default target is armv6k-unknown-linux-gnueabihf while under bookworm (and clang-13) the default target is arm-unknown-linux-gnueabihf.

Or maybe the default changed for the given build configuration on the LLVM side?
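
If anyone wants to check their own install, clang will report its configured default target (a sketch; -print-target-triple should exist on both clang-11 and clang-13):

    clang --version              # prints a "Target:" line with the default triple
    clang -print-target-triple   # e.g. armv6k-unknown-linux-gnueabihf on bullseye's clang-11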

opello
5 replies
1d7h

I really wish I understood the Debian change management process better. I guess I don't even really know if Raspbian is actually maintained by Debian.

But, when comparing [1] to [2], the rules file has a nice test that says "if DEB_HOST_ARCH is armhf, set the LLVM_HOST_TRIPLE to armv6k..." which seems to confirm a build configuration change.

[1] http://raspbian.raspberrypi.org/raspbian/pool/main/l/llvm-to...

[2] http://raspbian.raspberrypi.org/raspbian/pool/main/l/llvm-to...

orra
4 replies
1d7h

To answer your incidental question, Raspbian is maintained by Raspberry Pi folks, not Debian.

opello
3 replies
1d6h

Ah, thanks! I dug a little more and found that the original test [1] from llvm-toolchain was a little different in the upstream Debian repository. It set the triple to armv7l for armhf hosts.

But I haven't yet found a similar repository on the Raspbian side. I guess I'd expect to find it within their GitHub org, but my searching didn't reveal it.

[1] https://salsa.debian.org/pkg-llvm-team/llvm-toolchain/-/comm...

orra
2 replies
1d6h

Yeah, Raspbian doesn't seem to be developed as a community project. There's no package tracker[1] like Debian or Ubuntu. I suppose they don't tend to diverge much, in practice.

Anyway, I suppose this all means Raspberry Pi OS/Raspbian are able to patch this, without requiring it to first be fixed in Debian or Clang.

[1] https://raspberrypi.stackexchange.com/questions/1179/does-ra...

opello
1 replies
1d6h

Interesting.

Uncovering this slight difference really makes me long for something like the Debian GitLab instance (or really any kind of public change tracking) to document a bug or suggest a change.

Agreed, sure looks like it's Raspbian's build configuration to fix.

aragilar
0 replies
1d5h

You can also use https://raspi.debian.net/, which is preferable on newer Pis due to the use of arm64 (early Pis had a weird arch which sat between armel and armhf, so either you used armel and things were slow(er), or you rebuilt the packages with the extra float support (as Raspbian did)).

JonChesterfield
5 replies
1d6h

I doubt this is a deliberate change. Picking up information from the environment - a sibling mentions /etc/env.d/gcc - seems fairly likely. I'd guess the default triple is something like arm-unknown-linux unless clang finds or is told something more specific to use, and the mechanism by which it gets told to use something more specific has fallen over.

This might mean there are no arm v6 buildbots running, or it might mean there are ones running but the implicit configuration is still working on them.

LLVM is a really good cross compiler. Build for any target from any target, no trouble. Clang is less compelling - if it's built with the target, and you manage to tell it what target to build for, it'll probably do the right thing (as in this post - it guessed wrong, but given more information, did the right thing). Then the runtime library story is worse again - you've built for armv4 or whatever, but now you need to find a libc etc for it, and you might need to tell the compiler where those libraries and headers are, and for that part I'm still unclear on the details.
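
Roughly, a working cross invocation ends up looking something like this (a sketch; the triple and sysroot path are made up for illustration):

    # name the target, then point clang at a sysroot holding that target's
    # libc, headers and startup files, and use a cross-capable linker
    clang --target=armv6k-unknown-linux-gnueabihf \
          --sysroot=/opt/armv6-sysroot \
          -fuse-ld=lld hello.c -o hello-armv6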

hulitu
2 replies
1d6h

Picking up information from the environment - a sibling mentions /etc/env.d/gcc - seems fairly likely.

Why would clang do this?

elteto
0 replies
1d3h

Clang already replicates a bunch of flags, macros, and behaviors from gcc. The objective is to be a drop-in replacement, and make the developer experience much nicer when migrating. There are some rough corners, of course, but overall it’s actually very nice.

JonChesterfield
0 replies
1d1h

If clang didn't try to do the right thing based on the context it finds itself in, people would have to specify a lot more compiler flags to tell it what to do. Target triple, where libc is, where libstdc++ or libc++ is, what linker to use, what flags to pass the linker and so forth. This is much more annoying than `clang foo.c`.

rcarmo
1 replies
1d5h

Most distros and compilers effectively dropped ARMv6 a couple of years back - I had similar trouble building binaries for my old Synology NAS.

anthk
0 replies
1d4h

Alpine Linux might still support it I think.

schemescape
4 replies
1d10h

I didn't see it addressed here or in the article: this is a bug, right?

Edit: oddly, after searching LLVM bugs, I found a bug that sounds pretty much exactly like this issue... but it's from 2012 and is closed (although the final couple of comments make it sound like maybe it wasn't actually fixed--note: I only skimmed the comments and I probably misunderstood):

https://github.com/llvm/llvm-project/issues/13989

Edit again: I forgot about the comment at the end of the article that clarifies that explicitly passing the target results in a working program. In that case, it sounds like some sort of configuration bug--I would assume (but am not certain) that the default target would be the current processor, at least on Unix. That bug I linked was probably about producing incorrect code even when the target was set correctly which, thankfully, isn't happening today.
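
In other words, something like this on the B+ itself (a sketch borrowing the armv6k triple mentioned elsewhere in the thread; I haven't tried that exact setup):

    clang --target=armv6k-unknown-linux-gnueabihf hello.c -o hello
    ./hello    # runs, where the default-target build died with an illegal instruction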

wmf
1 replies
1d10h

Yeah, this is not the behavior people expect.

NikkiA
0 replies
3h36m

I would expect an arm64 machine not to build an arm32-compatible binary by default; it's the same as running clang on an x86-64 host and expecting it to produce 386-compatible binaries without a -march=i386 somewhere.

The weird clang install on a fresh B+ install is more puzzling, unless there's some user error somewhere.

krick
0 replies
22h3m

Obviously it is a bug, but apparently the author didn't bother to report it, opting to write a blog post with a somewhat clickbait title and ending with "so weird" instead.

Arnavion
0 replies
1d9h

Yes, your bug is about the compiler emitting armv7 instructions despite being told to target armv6. Rachel fixed her problem by telling the compiler to target armv6. So I assume your bug is indeed already fixed and not related to Rachel's problem.

frizlab
4 replies
1d11h

Title is misleading

eimrine
1 replies
1d10h

Because "as a default" statement is missing?

jenadine
0 replies
1d10h

Yes.

It now sounds like it is completely broken. But you can just fix it with a flag. And the change of default was probably an unintentional bug.

usr1106
0 replies
1d9h

Aka clickbait. Although the bug and the workaround are useful to know for everyone working with that machine.

blahgeek
0 replies
1d8h

Yes. It should be “clang does not correctly detect the host architecture on the Raspberry Pi B+”.

matja
3 replies
1d9h

clang/clang++ read from /etc/env.d/gcc to get the target flags/profile; it's up to the OS to maintain them and make sure they're correct, and it looks like that didn't happen for this OS.

My Gentoo ARM SBC based on an even more ancient armv4 arch has been chugging along just fine with the latest gcc/clang updates:

    grep CTARGET /etc/env.d/gcc -r
    /etc/env.d/gcc/armv4tl-softfloat-linux-gnueabi-11.3.0:CTARGET="armv4tl-softfloat-linux-gnueabi"

Arnavion
1 replies
1d9h

/etc/env.d is a Gentoo-specific directory to define default env vars for user sessions. It's not a feature of clang to read that directory, so it's not correct to assume other distros would have it. It's just that Gentoo's compiler setup reads the CTARGET env var to select the target, and Gentoo uses /etc/env.d to set it.

matja
0 replies
1d8h

Is that why other distros break? :)

contingencies
0 replies
1d9h

Gentoo always works .. it just takes longer :)

cbmuser
3 replies
1d

The article doesn’t mention whether Debian or Raspbian was installed. And, in the case of Debian, whether the armel or armhf port is being used.

Without that information, it’s pretty pointless to make claims about the instruction set LLVM compiles to because that’s a matter of what native target LLVM has been configured for.

FWIW, in Debian, llvm-toolchain-snapshot still supports armel, which uses ARMv5T as the baseline (there is currently an unrelated bug in LLVM’s OpenMP library though which prevents a successful build).

jchw
2 replies
1d

What's weird is that the Clang binary is clearly compiled for an instruction set that is compatible with the Pi B+, but it doesn't target an instruction set that is compatible with the Pi B+. This is genuinely weird, since that's not meant to be a cross-compiler; in theory, the host and the target should be the same.

Presumably the image is Raspbian. I don't see a reason why not to assume that.

guipsp
1 replies
1d

One key thing is missing from your comment (which explains the 'weirdness') - clang and other LLVM-family tools are cross-compilers by default. There is no separate cross compilation binary. This is just a configuration bug.
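
You can see it with a single binary (a sketch; assumes the ARM and x86 backends were enabled when that clang was built):

    echo 'int answer(void) { return 42; }' > answer.c
    clang --target=armv6k-unknown-linux-gnueabihf -c answer.c -o answer-arm.o
    clang --target=x86_64-unknown-linux-gnu -c answer.c -o answer-x86.o
    file answer-*.o        # same clang binary, two different object files
    clang -print-targets   # lists every backend this build was configured with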

jchw
0 replies
23h49m

Yes, that's true, although it doesn't actually explain the weirdness. I compile Clang all the time and there's no obvious reason why you'd get cross-compiled binaries out of Clang if you just compile and install it normally. The bug, configuration or otherwise, is the weirdness.

auselen
2 replies
1d8h

Confused article? You make a host/native build instead of cross and expect it to work on some other machine?

daviddoran
1 replies
1d8h

No. Halfway through the article she specifically starts doing everything on the B+ (the old RPi with the issue).

auselen
0 replies
1d8h

Thanks for that. I didn’t notice she switched to the B+ later.

teddyh
1 replies
1d10h

It would probably be helpful to know the output of the command “dpkg-architecture” and the contents of the file “/etc/os-release”. Otherwise it will be hard to make any useful comments.
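
Roughly:

    dpkg-architecture
    cat /etc/os-release
    clang --version   # also prints the default "Target:" triple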

cbmuser
0 replies
1d

Exactly. The article omits information that is fundamental to being able to fix this problem.

vkaku
0 replies
21h3m

clang has been broken for a while in the last few versions. Many issues were left unfixed and development moved to 17.0.0 when they should have fixed those as point releases for 16.0.x instead (patches were available and not integrated).

In this particular case though, the end processor/native detection seems to be failing and clang feature detection gets armv7l as native (or it could just be the default generation option). Looks like a good bug to report, if only we can get the good clang folks to take the time to land a fix.

I have been playing around with zig. My current focus will be on not using broken compiler backends for a while.

sovietmudkipz
0 replies
16h11m

Oh cool so this is kinda how one might debug why a program isn’t running on arm. I have a Unity Linux build that I can’t get running inside a container. Unity mono is trying to make a system call that isn’t available, even after passing in the amd64 flag to docker when running the container.

I haven’t debugged it because I found a work around (enable development mode, change build settings so mono isn’t used). I should return to it at some point, just to learn more.

rschu1ze
0 replies
23h20m

The database I work on (ClickHouse) tries hard to stay compatible with really old hardware. The standard ARM binaries require Armv8.2 from 2016 (available in Raspberry Pi 2 >=2) and x86 binaries run on hardware from around 2010 (SSE4.2 + pclmul* instructions for fast CRC). We also build (but don't test using CI) binaries for Armv8.0 and SSE2-only systems. A quick install script downloads and unpacks the right binary for the target host.

I find it generally hard to strike a good balance between backwards compatibility and usage of modern CPU features in newer AArch64 generations (https://en.wikipedia.org/wiki/AArch64). We found that there are surprisingly many institutions on a shoestring budget (universities in emerging countries) or hobbyists that can't afford to upgrade their hardware.

On a technical note, what I found quite cumbersome is that the cpu flags in /proc/cpuinfo don't always correspond with the flags passed as -march= to the compiler, e.g. "lrcpc" vs "rcpc". To make all of this work, one really needs to maintain two sets of flags.
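
For example (a sketch of the mismatch; crc.c is a hypothetical file):

    grep -om1 lrcpc /proc/cpuinfo          # the kernel spells the feature "lrcpc"...
    clang -march=armv8.2-a+rcpc -c crc.c   # ...the compiler extension is spelled "rcpc"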

1vuio0pswjnm7
0 replies
20h24m

"I guess nobody still runs these old things anywhere?"

I have one running a BSD UNIX-like OS as I type this comment.