I got almost all of my wishes granted with RP2350

ryukoposting
50 replies
23h24m

I can't imagine someone using an RP2040 in a real product, but the RP2350 fixes enough of my complaints that I'd be really excited to give it a shot.

There's a lot going for the 2040, don't get me wrong. TBMAN is a really cool concept. It overclocks like crazy. PIO is truly innovative, and it's super valuable for boatloads of companies looking to replace their 8051s/whatever with a daughterboard-adapted ARM core.

But, for every cool thing about the RP2040, there was a bad thing. DSP-level clock speeds but no FPU, and no hardware integer division. A USB DFU function embedded in boot ROM is flatly undesirable in an MCU with no memory protection. PIO support is extremely limited in third-party SDKs like Zephyr, which puts a low ceiling on its usefulness in large-scale projects.

The RP2350 fixes nearly all of my complaints, and that's really exciting.

PIO is a really cool concept, but relying on it to implement garden-variety peripherals like CAN or SDMMC immediately puts RP2350 at a disadvantage. The flexibility is very cool, but if I need to get a product up and running, the last thing I want to do is fiddle around with a special-purpose assembly language. My hope is that they'll eventually provide a library of ready-made "soft peripherals" for common things like SD/MMC, MII, Bluetooth HCI, etc. That would make integration into Zephyr (and friends) easier, and it would massively expand the potential use cases for the chip.

robomartin
19 replies
20h34m

For my work, the lack of flash memory integration on the 2040 is a deal breaker. You cannot secure your code. Not sure that has changed with the new device.

ebenupton
18 replies
20h27m

It has: you can encrypt your code, store a decryption key in OTP, and decrypt into RAM. Or if your code is small and unchanging enough, store it directly in OTP.
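A minimal sketch of that decrypt-into-RAM flow, assuming a software AES library (kokke's tiny-AES-c here) and a hypothetical otp_read_key() helper standing in for the real OTP access mechanism - not the actual bootrom method, just the shape of it:

```c
#include <stdint.h>
#include <string.h>
#include "aes.h"  // kokke/tiny-AES-c: AES_init_ctx_iv(), AES_CBC_decrypt_buffer()

#define APP_SIZE (256 * 1024)

extern const uint8_t encrypted_app[APP_SIZE];   // ciphertext image, stored in flash
static uint8_t app_ram[APP_SIZE];               // destination in SRAM

// Hypothetical helper: pulls the AES-128 key and IV out of OTP rows.
extern void otp_read_key(uint8_t key[16], uint8_t iv[16]);

static void boot_decrypt_and_run(void) {
    uint8_t key[16], iv[16];
    otp_read_key(key, iv);

    // Copy the ciphertext into SRAM, then decrypt it in place with AES-CBC.
    memcpy(app_ram, encrypted_app, APP_SIZE);
    struct AES_ctx ctx;
    AES_init_ctx_iv(&ctx, key, iv);
    AES_CBC_decrypt_buffer(&ctx, app_ram, APP_SIZE);

    ((void (*)(void))app_ram)();  // jump to the decrypted image (real code would
                                  // set up the vector table and stack first)
}
```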

ryukoposting
8 replies
15h44m

You can certainly do that, sure, but any Cortex-M MCU can do that, and plenty of others have hardware AES acceleration that would make the process much less asinine.

Also, 520K of RAM wouldn't be enough to fit the whole application + working memory for any ARM embedded firmware I've worked on in the last 5 years.

dmitrygr
5 replies
12h59m

Also, 520K of RAM wouldn't be enough to fit the whole application + working memory for any ARM embedded firmware I've worked on in the last 5 years.

What are you smoking? I have an entire DECstation 3100 system emulator that fits into 4K of code and 384 bytes of RAM. I boot PalmOS in 400KB of RAM. If you cannot fit your "application" into half a meg, maybe it's time to take up JavaScript and let someone else do embedded?

vardump
3 replies
12h14m

There are plenty of embedded applications that require megabytes or even gigabytes.

For example medical imaging.

As well as plenty that require 16 bytes of RAM and a few hundred bytes of program memory. And everything in between.

my123
2 replies
9h36m

If it's in the gigabyte range it's just not an MCU by any stretch. And if it has a proper (LP)DDR controller it's not one either really

vardump
1 replies
9h31m

Yes and no. Plenty of such applications that use a µC + an FPGA. FPGA interfaces with some DDR memory and CMOS/CCD/whatever.

Up to you what you call it.

misiek08
0 replies
1h8m

So you make hardware costing tens of thousands of dollars and brag about the memory on a $5 chip? That explains a lot about why so much medical and industrial hardware (I haven't touched military hardware) is so badly designed, with sad proprietary protocols, dead a few months after the warranty expires. Today I learned!

ryukoposting
0 replies
1h47m

I'm smoking multiprotocol wireless systems for industrial, medical, and military applications. To my recollection, the very smallest of those was around 280K in .text, and 180K in .data. Others have been 2-3x larger in both areas.

I would sure hope a DECstation 3100 emulator is small. After all, it's worthless unless you actually run something within the emulator, and that will inevitably be much larger than the emulator itself. I wouldn't know, though. Believe it or not, nobody pays me to emulate computers from 1989.

TickleSteve
1 replies
13h48m

520K RAM is huge for most typical embedded apps. Most micros have somewhere around 48K-128K of SRAM.

ryukoposting
0 replies
2h6m

Define "typical."

To my recollection, every piece of Cortex-M firmware I've worked on professionally in the last 5 years has had at least 300K in .text on debug builds, with some going as high as 800K. I wouldn't call anything I've worked on in that time "atypical." Note that these numbers don't include the bootloader - its size isn't relevant here because we're ramloading.

If you're ram-loading encrypted firmware, the code and data have to share RAM. If your firmware is 250K, that leaves you with 270K. That seems pretty good, but remember that the 2040 and 2350 are dual-core chips. So there's probably a second image you're loading into RAM too. Let's be generous and imagine that the second core is running something relatively small - perhaps a state machine for a timing-sensitive wireless protocol. Maybe that's another 20K of code, and 60K in data. These aren't numbers I pulled out of my ass, by the way - they're the actual .text and .data regions used by the off-the-shelf Bluetooth firmware that runs on the secondary core of an nRF5340.
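Summing that up:

```
  520K  total SRAM (RP2350)
- 250K  main application image (.text + .data, ramloaded)
-  20K  second-core code (nRF5340 BLE firmware as a stand-in)
-  60K  second-core data
------
  190K  remaining for heap, stacks, and buffers
```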

So now you're down to 190K in RAM available for your 250K application. I'd call that "normal," not huge at all. And again, this assumes that whatever you're running is smaller than anything I've worked on in years.

XMPPwocky
5 replies
20h19m

What stops an attacker from uploading their own firmware that dumps out everything in OTP?

ebenupton
4 replies
20h13m

Signed boot. Unless someone at DEF CON wins our $10k bounty of course.

phire
3 replies
16h37m

Do you have any protection against power/clock glitching attacks?

geerlingguy
0 replies
16h0m

I was reading, I believe in the Register article, that yes, that's one of the protections they've tested... it will be interesting to see if anyone can break it this month!

robomartin
2 replies
18h14m

Nice! Thanks for the direct communication BTW.

I guess you are very serious about competing with industrial MCUs.

We had to use a 2040 shortly after it came out because it was impossible to get STM32’s. Our customer accepted the compromise provided we replaced all the boards (nearly 1000) with STM32 boards as soon as the supply chain normalized.

I hope to also learn that you now have proper support for development under Windows. Back then your support engineers were somewhat hostile towards Windows-based development (just learn Linux, etc.). The problem I don't think they understood was that it wasn't a case of not knowing Linux (I was using Unix before Linux existed). A product isn't just the code inside a small embedded MCU. The other elements that comprise the full product design are just as important, if not more. Because of this and other reasons, it can make sense to unify development under a single platform. I can't store and maintain VMs for 10 years because one of the 200 chips in the design does not have good support for Windows, where all the other tools live.

Anyhow, I explained this to your engineers a few years ago. Not sure they understood.

I have a project that I could fit these new chips into, so long as we don’t have to turn our workflow upside down to do it.

Thanks again.

ebenupton
1 replies
11h27m

It's a fair comment. Give our VSCode extension a go: the aspiration is to provide uniform developer experience across Linux, Windows, and MacOS.

robomartin
0 replies
4h58m

I will when I get a break. I'll bring up our old RP2040 project and see what has changed.

I remember we had to use either VSC or PyCharm (can't remember which) in conjunction with Thonny to get a workable process. Again, it has been a few years and we switched the product to STM32, forgive me if I don't recall details. I think the issue was that debug communications did not work unless we used Thonny (which nobody was interested in touching for anything other than a downloader).

BTW, that project used MicroPython. That did not go very well. We had to replace portions of the code with ARM assembler for performance reasons; we simply could not get efficient communications with MicroPython.

Thanks again. Very much a fan. I mentored our local FRC robotics high school team for a few years. Lots of learning by the kids using your products. Incredibly valuable.

TaylorAlexander
12 replies
22h28m

My hope is that they'll eventually provide a library of ready-made "soft peripherals"

Perhaps they could be more ready-made, but there are loads of official PIO examples that are easy to get started with.

https://github.com/raspberrypi/pico-examples/tree/master/pio
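For a sense of what using one of them looks like, the ws2812 example boils down to a few SDK calls around the generated PIO program (a sketch based on that example; ws2812_program_init is a helper pioasm generates from the example's .pio file, and the pin choice is arbitrary):

```c
#include "hardware/pio.h"
#include "ws2812.pio.h"  // header generated by pioasm from ws2812.pio

int main(void) {
    PIO pio = pio0;
    uint sm = pio_claim_unused_sm(pio, true);            // grab a free state machine
    uint offset = pio_add_program(pio, &ws2812_program); // load the PIO program

    // 800 kHz, non-RGBW strip on GPIO 2
    ws2812_program_init(pio, sm, offset, 2, 800000, false);

    while (true)
        pio_sm_put_blocking(pio, sm, 0xFF0000u << 8u);   // one green pixel (GRB order)
}
```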

ryukoposting
5 replies
15h32m

These examples are cute, but this isn't a comprehensive collection. Not even close.

Given that PIO's most compelling use case is replacing legacy MCUs, I find it disappointing that they haven't provided PIO boilerplate for the peripherals that keep those archaic architectures relevant. Namely: Ethernet MII and CANbus.

Also, if RP2xxx is ever going to play ball in the wireless space, then they need an out-of-box Bluetooth HCI implementation, and it needs sample code, and integration into Zephyr.

I speak as someone living in this industry: the only reason Nordic has such a firm grip on BLE product dev is because they're the only company providing a bullshit-free Bluetooth stack out of the box. Everything else about nRF sucks. If I could strap a CYW4343 to an RP2350 with some example code as easily as I can get a BT stack up and running on an nRF52840, I'd dump Nordic overnight.

bboygravity
2 replies
12h40m

Just feed the boilerplate templates to Claude and ask it to "write a CANbus driver, use boiler plate as example" and done?

TaylorAlexander
1 replies
9h42m

I have never had even the slightest luck getting any of the AI services to generate something as specialized as embedded system drivers.

defrost
0 replies
9h41m

I can't even get them to make me a sandwich :(

vardump
0 replies
12h11m

Just pick whatever fits best in your application. No µC is going to solve everything for everyone.

RP2350 is about the best I can think of for interfacing with legacy protocols. Well, short of using FPGAs anyways.

TaylorAlexander
0 replies
12h44m

Well, open-source CAN and MII implementations do exist. Perhaps you can help provide a pull request to the official repo that checks in appropriate versions of that code, or file an issue requesting them to do it.

https://github.com/KevinOConnor/can2040

https://github.com/sandeepmistry/pico-rmii-ethernet
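For what it's worth, bringing up can2040 is already fairly painless - roughly this, per its documented API (a sketch; pin numbers, system clock, and bitrate are placeholders):

```c
#include "can2040.h"        // github.com/KevinOConnor/can2040
#include "hardware/irq.h"

static struct can2040 cbus;

// Called on received frames and TX/error notifications.
static void can_cb(struct can2040 *cd, uint32_t notify, struct can2040_msg *msg) {
    // handle CAN2040_NOTIFY_RX / _TX / _ERROR here
}

static void pio_irq_handler(void) {
    can2040_pio_irq_handler(&cbus);
}

void canbus_setup(void) {
    can2040_setup(&cbus, 0);                 // use PIO block 0
    can2040_callback_config(&cbus, can_cb);

    irq_set_exclusive_handler(PIO0_IRQ_0, pio_irq_handler);
    irq_set_priority(PIO0_IRQ_0, 1);
    irq_set_enabled(PIO0_IRQ_0, true);

    // 500 kbit/s at a 125 MHz system clock, RX on GPIO 4, TX on GPIO 5
    can2040_start(&cbus, 125000000, 500000, 4, 5);
}
```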

My biggest issue with their wireless implementation is that I get my boards made at JLCPCB and Raspberry Pi chose a specialty wireless chip for the Pico W which is not widely available, and is not available at JLCPCB.

crote
5 replies
10h59m

I feel like the PIO is just slightly too limited for that. You can already do some absolute magic with it, but it's quite easy to run into scenarios where it becomes really awkward to use due to the limited instruction count, lack of memory, and absence of a direct clock input.

Sure, you can work around it, but that often means making significant sacrifices. Good enough for some hacking, not quite there yet to fully replace hard peripherals.

vardump
4 replies
10h36m

Any concrete examples?

PIO is surprisingly flexible, even more so in RP2350.

crote
3 replies
9h37m

You run into issues if you try to implement something like RMII, which requires an incoming 50MHz clock.

There's an implementation out there which feeds the clock to a GPIO clock input - but because it can't feed the PLL from it, and the PIO is driven from the system clock, that means your entire chip runs at 50MHz. This has some nasty implications, such as being unable to transmit at 100meg and having to do a lot of postprocessing.

There's another implementation which oversamples the signal instead. This requires overclocking the Pico to 250MHz. That's nearly double the design speed, and close to the point where some peripherals no longer work.

A third implementation feeds the 50MHz clock into the XIN input, allowing the PLL to generate the right clock. This works, except that you've now completely broken the bootloader as it assumes a 12MHz clock when setting up USB. It's also not complete, as the 10meg half duplex mode is broken due to there not being enough space for the necessary PIO instructions.

TaylorAlexander
1 replies
9h13m

Just to clarify, and it sounds like the answer is yes, this is a problem even with an external 50MHz clock signal?

tomooot
0 replies
7h25m

As far as I understood the explanation, the incoming ("external") 50MHz clock signal is a core requirement of the spec: all of those workarounds are just what is required to meet that spec, and be able to TX/RX using the protocol at all.

rscott2049
0 replies
2h57m

Almost correct - the third implementation does generate the clock, but it isn't necessary to drive the clock directly from the system clock, as there are m/n clock dividers available. I use a 300 MHz system clock, and divide down to 50 MHz which works well. (I've also addressed a few other shortcomings of this library, but am not done yet...) Haven't looked at the 10 MHz half duplex mode, though.
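In pico-sdk terms, that arrangement is roughly the following (a sketch: set_sys_clock_khz and clock_gpio_init are real SDK calls, GPIO21 is one of the GPOUT-capable clock output pins, and the divider of 6 gives 300 MHz / 6 = 50 MHz):

```c
#include "pico/stdlib.h"
#include "hardware/clocks.h"

int main(void) {
    // Overclock the system clock to 300 MHz, as described above.
    set_sys_clock_khz(300000, true);

    // Output clk_sys / 6 = 50 MHz on GPIO21 as the RMII reference clock.
    clock_gpio_init(21, CLOCKS_CLK_GPOUT0_CTRL_AUXSRC_VALUE_CLK_SYS, 6);

    while (true)
        tight_loop_contents();  // the real driver work would happen here
}
```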

alex-robbins
3 replies
17h38m

A USB DFU function embedded in boot ROM is flatly undesirable in an MCU with no memory protection.

Are you saying DFU is not useful without an MMU/MPU? Why would that be?

ryukoposting
2 replies
15h49m

It's certainly useful, but having it embedded within the hardware with no way to properly secure it makes the RP2040 a non-starter for any product I've ever written firmware for.

TickleSteve
1 replies
11h17m

It has secure boot and TrustZone.

crest
0 replies
9h41m

Not the RP2040. That chip has no boot security from anyone with physical access to the QSPI or SWD pins.

__s
2 replies
18h14m

RP2040 shows up in a lot of qmk keyboards, for real product use

Eduard
1 replies
16h53m

RP2040 shows up in a lot of qmk keyboards

as niche as it gets

tssva
0 replies
5h6m

But real products.

my123
1 replies
22h10m

no hardware integer division

It did have it, but as an out-of-ISA extension

GeorgeTirebiter
0 replies
19h10m

Not only that, single- and double-precision FP were provided as optimized subroutines. I was never hampered by inadequate FP performance for simple control tasks.

anymouse123456
1 replies
15h7m

We're using 2040's in a variety of "real" products for an industrial application.

PIO is a huge selling point for me and I'm thrilled to see them leaning into it with this new version.

It's already as you hoped. Folks are developing PIO drivers for various peripherals (e.g., CAN, WS2812, etc.)

ryukoposting
0 replies
2h31m

Oh, I'm sure it's great for industrial, as long as you can live with the hardware security issues. In college, my first serious task as an intern was to take a Cortex-M0+ and make it pretend to be an 8051 MCU that was being obsoleted. Unsurprisingly, this was for an industrial automation firm.

I mimicked the 16-bit data bus using hand-written assembly to make sure the timings were as close as possible to the real chip. It was a pain in the ass. It would have been amazing to have a chip that was designed specifically to mimic peripherals like that.

It's great that there's a community growing around the RPi microcontrollers! That's a really good sign for the long-term health of the ecosystem they're trying to build.

What I'm looking for is a comprehensive library of PIO drivers that are maintained by RPi themselves. There would be a lot of benefits to that as a firmware developer: I would know the drivers have gone through some kind of QA. If I'm having issues, I could shoot a message to my vendor/RPi and they'll be able to provide support. If I find a bug, I could file that bug and know that someone is going to receive it and fix it.

wrs
0 replies
21h50m

I haven’t dug into the RP2xxx but I presumed there would be a library of PIO implementations of the standard protocols from RP themselves. There really isn’t?

Edit: I see, there are “examples”. I’d rather have those be first-class supported things.

valdiorn
0 replies
9h19m

I can't imagine someone using an RP2040 in a real product

Why not? It's a great chip, even if it has some limitations. I use it in several of my pro audio products (a MIDI controller, a Eurorack module, and a series of guitar pedals). They are absolutely perfect as utility chips, the USB stack is good, and the USB bootloader makes it incredibly easy for customers to update the firmware without me having to write a custom bootloader.

I've shipped at least a thousand "real" products with an RP2040 in them.

tliltocatl
0 replies
11h33m

>> extremely limited in third-party SDKs like Zephyr

So is almost any non-trivial peripheral feature. Autonomous peripheral operation, op-amps, comparators, capture/compare timers…

Zephyr tries to provide a common interface like desktop OSes do and this doesn't really work. On desktop having just the least common denominator is often enough. On embedded you choose your platform because you want the uncommon features.

petra
0 replies
22h58m

For high volume products, given the low cost of this chip, it would make sense to bother with the PIO or its open-source libraries.

nrp
0 replies
12h2m

We ship a large quantity of RP2040s in real products, and agreed that the RP2350 looks great too!

Part of the reason we went with RP2040 was the design philosophy, but a lot of it was just easy availability coming out of the chip crunch.

crest
0 replies
9h43m

no hardware integer division

The RP2040 SIO block contains one hardware divider per CPU core.

blackkat
32 replies
1d3h

Some specs here: https://www.digikey.ca/en/product-highlight/r/raspberry-pi/r...

Based on the RP2350, designed by Raspberry Pi in the United Kingdom

Dual Arm M33s at 150 MHz with FPU

520 KiB of SRAM

Robust security features (signed boot, OTP, SHA-256, TRNG, glitch detectors and Arm TrustZone for Cortex®-M)

Optional, dual RISC-V Hazard3 CPUs at 150 MHz

Low-power operation

PIO v2 with 3 × programmable I/O co-processors (12 × programmable I/O state machines) for custom peripheral support

Support for PSRAM, faster off-chip XIP QSPI Flash interface

4 MB on-board QSPI Flash storage

5 V tolerant GPIOs

Open source C/C++ SDK, MicroPython support

Software-compatible with Pico 1/RP2040

Drag-and-drop programming using mass storage over USB

Castellated module allows soldering directly to carrier boards

Footprint- and pin-compatible with Pico 1 (21 mm × 51 mm form factor)

26 multifunction GPIO pins, including three analog inputs

Operating temperature: -20°C to +85°C

Supported input voltage: 1.8 VDC to 5.5 VDC

jayyhu
10 replies
23h34m

Edit: See comment below; the RP2350 can be powered by a 5V supply.

Findecanor
5 replies
22h29m

To clarify: You can connect a 5V power source by connecting it to the VSYS pin which leads into the on-board voltage regulator.

But the µC itself runs on 3.3V and is not totally 5V-capable. You'd need level converters to interface with 5V.

jayyhu
3 replies
22h3m

You're right, after re-reading the Power section of the datasheet it seems connecting 5V to VREG_VIN should suffice to power the digital domains, but if you want to use the ADC, you still need an external 3.3V source.

snvzz
0 replies
15h0m

See the section on physical pin GPIO electrical tolerances.

The TL;DR is that 3.3V must be fed into IOVDD for the 5.5V tolerance to work.

dvdkon
0 replies
21h26m

Maybe not even that:

A separate, nominally 3.3 V, low noise supply (VREG_AVDD) is required for the regulator’s analogue control circuits.

It seems it would be painful trying to run this without 3.3 V.

crote
0 replies
10h48m

It's quite a bit more complicated.

The chip needs a) 1.1V to power the cores, b) 1.8V-3.3V to power IO, and c) 3.3V to properly operate USB and ADC.

The chip has one onboard voltage regulator, which can operate from 2.7V-5.5V. Usually it'll be used to output 1.1V for the cores, but it can be used to output anything from 0.55V to 3.3V. The regulator requires a 3.3V reference input to operate properly.

So yeah, you could feed the regulator with 4-5V, but you're still going to need an external 5V->3.3V converter to make the chip actually operate...

snvzz
0 replies
17h51m

You'd need level converters to interface with 5V.

Some of the GPIOs are 5V-tolerant CMOS, and TTL considers 2V HIGH, so it is possible to interface with some 5V hardware directly.

skykooler
2 replies
23h2m

How much tolerance does that have - can it run directly off a 3.7v lithium ion battery?

jayyhu
1 replies
21h46m

Yep, they explicitly call out that the onboard voltage regulator can work with a single lithium ion cell.

dvdkon
0 replies
21h29m

The regulator can take that, but as far as I can see it's only for DVDD, the core voltage of 1.1 V. You also need at least IOVDD, which should be between 1.8 V and 3.3 V. So you'll need to supply some lower voltage externally anyway.

I suppose the main draw of the regulator is that the DVDD rail will consume the most power. 1.1 V is also much more exotic than 3.3 V.

giantg2
0 replies
23h13m

I'd rather have it run on the lower voltage - generally easier to step down than to boost. Either way, the modules are pretty cheap, small, and easy to find.

synergy20
9 replies
1d3h

Wow, can't wait. Love the 5V GPIO and security features.

Daneel_
8 replies
1d2h

5V GPIO is a huge deal for me - this immediately opens up a huge range of integrations without having to worry about line level conversion.

I can’t wait to use this!

HeyLaughingBoy
4 replies
23h12m

Be careful with assumptions though. Being 5V tolerant doesn't mean that your 3V output can sufficiently drive an input that expects 0-5V levels correctly.

I ran into this problem using an ESP32 to drive a Broadcom 5V LED dot-matrix display. On paper everything looked fine; in reality it was unreliable until I inserted an LS245 between the ESP and the display.

HeyLaughingBoy
0 replies
20h11m

A better question might be why anyone is using a MAX7219 on a new design in 2024. There are so many other choices for displays than a 20-year-old IC from a company that's gone through two changes of ownership since.

Anyway, a 74LS245 isn't a level shifter, it's an octal buffer. It just happened to be the right choice for my needs. In your application, I'd suggest an actual level shifter. You can find level shift breakout boards at Sparkfun and Adafruit.

irdc
1 replies
20h3m

Being 5V tolerant doesn't mean that your 3V output can sufficiently drive an input that expects 0-5V levels correctly.

It's fine for TTL (like your 74LS245 is), which registers voltages as low as 2V as a logical 1. Being able to directly interface with TTL eases up so many retrocomputing applications.

HeyLaughingBoy
0 replies
15h47m

Which was... exactly the reason I chose it?

azinman2
2 replies
23h55m

Does tolerant mean it's okay to do? Or does it just mean it won't fry your chip, but you should actually run at 3.3?

tredre3
0 replies
23h27m

It usually means it's clamped, so it might result in a small amount of wasted energy/heat but no damage.

So yes, it means it's okay, but if you can, you should go for 3.3.

murderfs
0 replies
22h11m

5V tolerant means that it'll accept 5V input (and correctly interpret it as high), but output will still be 3.3V.

IshKebab
3 replies
23h57m

I wonder how well it's been verified.

pclmulqdq
2 replies
15h31m

This is a really big deal. Verifying a core is hard, and if the repo doesn't come with a testbench, I'm very suspicious.

IshKebab
1 replies
11h36m

Even if it does I'm suspicious. The open source RISC-V verification systems are not very good at the moment:

* riscv-arch-tests: OK, but a very low bar. They don't even test combinations of instructions, so no hazards etc.

* riscv-tests: decent, but they're hand-written directed tests, so they aren't going to get great coverage.

* TestRig: this is better - random instructions directly compared against the Sail model - but it's still fairly basic: the instructions are completely random, so you're unlikely to cover lots of things. Also, it requires some setup, so they may not have run it.

The commercial options are much better but I doubt they paid for them.

moffkalast
2 replies
1d2h

Low-power operation

Low power suspend? In a Pi Foundation product? Impossible.

thomasdeleeuw
1 replies
1d

Not sure why this is downvoted, but the sleep and dormant pico examples have quite a few issues, and they are still in "extras" and not in "core". So while documentation of features is my personal favorite aspect of the pico, there is still room for improvement here.

tssva
0 replies
4h49m

It is downvoted because it is a low effort sarcastic comment which provides no real contribution to the discussion. Your comment actually provides real feedback as to where there are currently issues.

coder543
2 replies
1d

I'm having trouble seeing where the datasheet actually says the GPIO pins are 5V tolerant.

EDIT: okay, section 14.8.2.1 mentions two types of digital pins: "Standard Digital" and "Fault Tolerant Digital", and the FT Digital pins might be 5V tolerant, it looks like.

sowbug
1 replies
1d

Page 13: "GPIOs are 5 V-tolerant (powered), and 3.3 V-failsafe (unpowered)"

coder543
0 replies
1d

Yep, I edited a few minutes ago to mention a reference I found in the datasheet. It's cool, but the reality seems a little more nuanced than that quote would indicate, since that only appears to work for GPIO-only pins, not just pins being used as GPIO. (So, if a pin supports analog input, for example, it will not be 5V tolerant.)

synergy20
23 replies
1d3h

You can pick either ARM cores or RISC-V cores on the same die? I've never seen a design like this before. Will this impact price and power consumption?

"The Hazard3 cores are optional: Users can at boot time select a pair of included Arm Cortex-M33 cores to run, or the pair of Hazard3 cores. Both options run at 150 MHz. The more bold could try running one RV and one Arm core together rather than two RV or two Arm.

Hazard3 is an open source design, and all the materials for it are here. It's a lightweight three-stage in-order RV32IMACZb* machine, which means it supports the base 32-bit RISC-V ISA with support for multiplication and division in hardware, atomic instructions, bit manipulation, and more."

jononor
10 replies
1d1h

This seems like a great way to test the waters before a potential full-on transition to RISC-V. It allows them to validate both the technology and the market reception, at a much lower cost than taping out an additional chip.

MBCook
5 replies
1d

Fun for benchmarking too.

You’re limited to those two exact kinds of cores, but you know every other thing on the entire computer is 100% identical.

It's not SBC 1 vs SBC 2, where they have different RAM chips, and this one has a better cooler but that one better WiFi.

phire
4 replies
19h23m

I really hope people don't do this. Or at least not try to sell it as ARM vs RISC-V tests.

Because what you are really testing is the Cortex-M33 vs the Hazard 3, and they aren't equivalent.

They might both be 3 stage in-order RISC pipelines, but Cortex-M33 is technically superscalar, as it can dual-issue two 16bit instructions in certain situations. Also, the Cortex-M33 has a faster divider, 11 cycles with early termination vs 18 or 19 cycles on the Hazard 3.

snvzz
3 replies
18h16m

It'd help to know how much area each core takes within the die.

I would expect the ARM cores to be much larger, as well as use much more power.

phire
1 replies
16h39m

Hard to tell.

If you ignore the FPU (I think it can be power gated off) the two cores should be roughly the same size and power consumption.

Dual issue sounds like it would add a bunch of complexity, but ARM describe it as "limited" (and that's about all I can say, I couldn't find any documentation). The impression I get is that it's really simple.

Something along the lines of "if two 16-bit instructions are 32-bit aligned, and they go down different pipelines, and they aren't dependent on each other" then execute both. There might be limitations such that the second instruction can't access registers at all (for example, a branch instruction) or that it must only access registers from a separate register file bank, meaning you don't even have to add extra read/write ports to the register file.

If the feature is limited enough, you could get it down to just a few hundred gates in the instruction decode stage, taking advantage of resources in later stages that would have otherwise been idle.

According to ARM's specs, the Cortex-M33 takes the exact same area as the Cortex-M4 (the rough older equivalent without dual-issue, and arguably equal to the Hazard3), uses 2.5% less power and gets 17% more performance in the CoreMark benchmark.

pclmulqdq
0 replies
15h33m

That is exactly what the "limited dual issue" is - two non-conflicting pre-decoded instructions (either 16b+16b or if a stall has occurred) can be sent down the execution pipe at the same time. I believe that must be a memory op and an ALU op.

KaiserPro
0 replies
8h52m

The ARM cores are probably much larger, but I don't think that translates into better power efficiency _automatically_.

askvictor
1 replies
18h5m

Indeed, though I'm curious about the rationale behind it. Is it a 'plan B' in case their relationship with ARM sours? Is it aiming for cost-cutting in the future? (I can't imagine the ARM licences are costing them much given the price of the RP2040, but maybe they're absorbing it to gain market share.)

snvzz
0 replies
17h45m

Embracing RISC-V, the high-quality open-source ISA that is rapidly growing the strongest ecosystem, does make a lot of sense.

kelnos
0 replies
21h43m

I do wonder if the unavailability of some of the security features and -- possibly a big deal for some applications -- the accelerated floating point on the RISC-V cores would skew that experiment, though.

GordonS
0 replies
23h16m

My thoughts exactly - a risc-free (hurhur) way to get RISC-V in the hands of many, many devs.

geerlingguy
7 replies
1d1h

Apparently (this is news to me), you can also choose to run 1+1 Arm/RISC-V; you don't have to switch both cores either/or.

Eben Upton: "They're selectable at boot time: Each port into the bus fabric can be connected either to an M33 or a Hazard3 via a mux. You can even, if you're feeling obtuse, run with one of each."

Source: https://www.theregister.com/2024/08/08/pi_pico_2_risc_v/

ravetcofx
3 replies
1d

But not 2+2? That's too bad; it would be nice to have each architecture run code based on its strengths for quad-core workloads.

simcop2387
0 replies
1d

Yeah, I was hoping for 2+2 myself, but I suspect it's because the setup doesn't have the ability to mediate peripherals between the cores in a way that'd let that work. I.e., trying to turn on both the RISC-V and Arm #1 cores means there'd be bus conflicts. It'd be cool if you could disable the IO on the RISC-V cores and do all hardware IO through Arm (or vice versa), so you could use the unconnected ones for pure compute tasks (say, run WS2812B LED strips with the Arm cores, but run Python/JavaScript/Lua on the RISC-V cores to generate frames to display without interrupting the hardware IO).

nine_k
0 replies
1d

Why not both: power distribution and cooling? Having to route twice as many wide buses, and put in twice as much L0 cache?

ebenupton
0 replies
20h25m

We did look at this, but the AHB A-phase cost of putting a true arbiter (rather than a static mux) on each fabric port was excessive. Also, there's a surprising amount of impact elsewhere in the system design (esp debug).

jaeckel
2 replies
21h37m

Would've been cool for safety applications if the second core could be run in lockstep mode.

4gotunameagain
1 replies
20h44m

afaik that is a whole different rodeo on the silicon level

KaiserPro
0 replies
8h58m

Yeah, lockstep requires a whole bunch of things to verify and break deadlocks. I suspect you need three processors to do that as well (so you know which one has fucked up).

bri3d
2 replies
1d2h

This "switchable cores" thing has been appearing in some products for a few years now, for example Sipeed SG2002 (LicheeRV). The area occupied by the actual instruction core is usually pretty small compared to peripherals and internal memories.

zer00eyz
1 replies
1d1h

The MilkV Duo also has this feature I believe... https://milkv.io/duo

Teknoman117
0 replies
22h4m

It's the same SoC as the LicheeRV (SG2002)

GeorgeTirebiter
0 replies
19h1m

Hazard3 pointer https://github.com/Wren6991/Hazard3

I think it's cool as a cucumber that we can choose fully open-source RISC-V if we want. My guess is the RV cores are slower clock-for-clock than the M33 cores; that is, benchmark scores for the M33s will be better, as Hazard3 is only a 3-stage pipeline - but then, so is the M33. Can't wait for the benchmarks.

nimish
23 replies
1d

Gross, the dev board uses micro-USB. It's 2024! Otherwise amazing work. Exactly what's needed to compete with the existing giants.

sowbug
8 replies
1d

Perhaps the unfortunate choice of micro USB is to discourage real consumer products from being built with the dev board.

user_7832
5 replies
23h43m

I wonder if it is more about simply shaving a few cents off. Full USB-C protocol implementation may be much more difficult.

hypercube33
4 replies
23h33m

USB-C doesn't require anything special USB-wise, as it's decoupled from the versioned standard. It just has more pins and works with all modern cables. Ideally the cables won't wear out like Mini and Micro and get loosey goosey in the ports.

Findecanor
1 replies
22h7m

For a device, USB-C requires two resistors that older USB ports don't.

Declaring yourself as a host/device is also a bit different: USB-C hardware can switch. Micro USB has a "On-the-go" (OTG) indicator pin to indicate host/device.

The USB PHY in RP2040 and the RP2350 is actually capable of being a USB host but the Micro USB port's OTG pin is not connected to anything.

rvense
0 replies
20h42m

Hm, I've used mine as a USB host with an adapter? Not sure of the details, I suppose OTG is the online/runtime switching and I was just running as fixed host?

ewoodrich
0 replies
22h51m

Yep, a USB-C connector is more or less a drop in replacement for MicroUSB if you don’t need USB3 or USB-PD. With one aggravating exception: it requires adding two 5.1kΩ pulldown resistors to be compatible with C-C cables. Thus signaling to a charger that the sink is a legacy non-PD device requesting 5V.

Which is apparently an impossible ask for manufacturers of dev boards or cheap devices in general. It’s slightly more understandable for a tried and true dev board that’s just been connector swapped to USB-C (and I’ll happily take it over dealing with Micro) but inexcusable for a new design.

My hope is Apple going USB-C only on all their charging bricks and now even C-C cables for the iPhone will eventually force Chinese OEMs to build standard compliant designs. Or deal with a 50% Amazon return rate for “broken no power won’t charge”.

Brusco_RF
0 replies
22h34m

As someone who just picked micro USB over USB-C for a development card, there is a significant difference in price and footprint between the two.

Teknoman117
0 replies
22h2m

I would assume it's in order to maintain mechanical compatibility with the previous Pico.

Findecanor
0 replies
22h15m

For the microcontroller however, the use in commercial products is encouraged.

There are one-time programmable registers for Vendor, Product, Device and Language IDs that the bootloader would use instead of the default. It would be interesting to see if those are fused on the Pico 2.

janice1999
6 replies
23h42m

It saves cost and none of the features of USB-C (speed, power delivery etc) are supported. Makes sense.

str3wer
3 replies
18h54m

The price difference from micro-USB to USB-C is less than 2 cents.

rldjbpin
1 replies
8h0m

Devil's advocate: cables for the average user are a different story, not to forget the vast range of cables already out there.

Also, "proper" USB-C support is another can of worms, and maybe sticking to an older standard gives you freedom from all that.

g15jv2dp
0 replies
6h37m

You're confusing USB-C and USB 3.1+. USB-C is just the physical spec. You can design a cheap device that will only support USB 2 if you just connect ground, Vbus, D+ and D- and - gasp - add two resistors. It will work just as well as the micro-USB plug.

refulgentis
0 replies
18h15m

You would be surprised at the amount of effort and success $0.01 represents at BigCo. Even when projected sales are in 6 figure range.

throwaway81523
1 replies
13h2m

How about connections not becoming flaky after you've plugged in the cable a few times. Micro USB was horribly unreliable. USB-C isn't great either, but it's an improvement. Maybe they will get it right some day.

guax
0 replies
12h39m

I always hear that, but I've never had a micro-USB port fully fail on me, while my phone's USB-C ports are lint magnets that get super loose and refuse to work. When that happened on micro, it was usually the cable tabs being a bit worn, but the cable always worked.

nine_k
4 replies
21h22m

USB-C is way more complicated, even if you're not trying to push 4K video or 100W power through it. The interface chip ought to be more complex, and thus likely more expensive.

You can still find a number of cheap gadgets with micro-USB on Aliexpress. Likely there's some demand, so yes, you can build a consumer product directly on the dev board, depending on your customer base.

15155
1 replies
20h22m

How are they "way more complicated?" You have to add two resistors and short another pair of DP/DM lines?

nine_k
0 replies
19h52m

Yes, indeed, I've checked, and apparently you don't need anything beyond this if you don't want super speed or power delivery (past 5V 3A).

I did not realize how many pins in a USB-C socket are duplicated to make this possible. (For advanced features, you apparently still need to consider the orientation of the inserted cable.)

nimish
0 replies
21h17m

Chinese boards are both cheaper and have USB Type-C implemented correctly and in spec, so that's no real excuse for Raspberry Pi.

Ductapemaster
0 replies
20h17m

You can use a USB C connector with standard USB, no interface chip required. It's simply a connector form-factor change.

hoherd
1 replies
22h57m

FWIW the Pimoroni Tiny 2040 and Tiny 2350 use usb-c, but as mentioned by other commenters, the cost for these usb-c boards is higher.

I love having usb-c on all my modern products, but with so many micro-usb cords sitting around, I don't mind that the official Pico and Pico 2 are micro-usb. At least there are options for whichever port you prefer for the project you're using it in.

TaylorAlexander
21 replies
23h56m

This is very exciting! For the last several years I have been developing a brushless motor driver based on the RP2040 [1]. The driver module can handle up to 53 volts at 30A continuous, 50A peak. I broke the driver out to a separate module recently, which is helpful for our farm robot and is also important for driver testing as we improve the design. However, this rev seems pretty solid, so I might build a single-board, low-cost integrated single-motor driver with the RP2350 soon! With the RP2040 the loop rate was 8kHz, which is totally fine for big farm robot drive motors, but some high-performance drivers with floating point do 50kHz loop rates.

My board runs SimpleFOC, and people on the forum have been talking about building a flagship design, but they need support for sensorless control as well as floating point, so if I use the new larger pinout variant of the RP2350 with 8 ADC pins, we can measure three current signals and three bridge voltages to make a nice sensorless driver! It will be a few months before I can have a design ready, but follow the git repo or my twitter profile [2] if you would like to stay up to date!

[1] https://github.com/tlalexander/rp2040-motor-controller

[2] https://twitter.com/TLAlexander
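For the sensing side, reading those signals with the pico-sdk ADC API looks roughly like this (a sketch; the GPIO numbers are the RP2040's three ADC pins, and a real FOC loop would use free-running mode plus DMA rather than blocking reads):

```c
#include "pico/stdlib.h"
#include "hardware/adc.h"

// Phase-current sense inputs: ADC inputs 0-2 map to GPIO26-28 on the RP2040.
// The larger RP2350 package adds more ADC-capable pins for the bridge voltages.
static const uint sense_gpio[3] = {26, 27, 28};

void sense_init(void) {
    adc_init();
    for (int i = 0; i < 3; i++)
        adc_gpio_init(sense_gpio[i]);  // disable digital functions on the pin
}

// Take one blocking 12-bit sample per phase.
void sense_read(uint16_t out[3]) {
    for (int i = 0; i < 3; i++) {
        adc_select_input(i);           // ADC input i corresponds to GPIO26 + i
        out[i] = adc_read();
    }
}
```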

sgu999
13 replies
23h27m

for our farm robot

That peaked my interest, here's the video for those who want to save a few clicks: https://www.youtube.com/watch?v=fFhTPHlPAAk

I absolutely love that they use bike parts for the feet and wheels.

tuatoru
8 replies
19h16m

* piqued

GeorgeTirebiter
7 replies
19h13m

yes, piqued. English, so weird! ;-)

(Although, interest peaking is possible!)

speed_spread
6 replies
15h9m

English, so weird

Borrowed from just-as-weird French "piquer" - to stab or jab.

bee_rider
2 replies
11h31m

It is kind of funny that both of the incorrect versions, peaked or peeked, sort of make more sense just based on the definitions of the individual words. “Peaked my interest” in particular could be interpreted as “reached the top of my interest.”

Way better than stabbing my interest, in a French fashion or otherwise.

jhugo
1 replies
10h39m

Right, but that meaning isn’t quite right. To pique your interest is to arouse it, leaving open the possibility that you become even more interested, a possibility which peaking of your interest does not leave open.

digging
0 replies
2h54m

However, in the case where someone means "This interested me so much that I stopped what I was doing and looked up more information," peaked is almost more correct, depending on how one defines "interest" in this context (eg. "capacity for interest"? probably no; "current attention"? probably yes).

funnybeam
0 replies
7h5m

No, modern French is badly pronounced old French - the English (Norman) versions are often closer to the original pronunciation.

littlestymaar
0 replies
10h37m

Borrowed from just-as-weird French "piquer" - to stab or jab.

Literally, «piquer» means "to sting" or "to prick" more than "to stab" or "to jab"; it's never used to describe inter-human aggression.

And piquer is colloquially used to mean “to steal” (and it's probably the most common way of using it in French after describing mosquito bites)

Edit: and I forgot to mention that we already use it for curiosity, in fact the sentence “it piqued my curiosity” was directly taken from French «ça a piqué ma curiosité».

HeyLaughingBoy
3 replies
23h19m

I have given some thought to a two-wheeled electric tractor for dealing with mud -- horse paddocks turn into basically a 1-foot deep slurry after heavy rain, and it can be easier to deal with something small that sinks through the mud, down to solid ground, than something using large flotation tires. An additional problem with large tires is that they tend to throw mud around, making everyone nearby even more dirty.

I haven't actually built anything (been paying attention to Taylor's work, though), but I came to the same conclusion that bike wheels & tires would probably be a good choice. It also doesn't hurt that we have many discarded kids' bikes all over the place.

littlestymaar
2 replies
10h52m

Your description fits what I've seen in rice farming, where the machines usually use bike-like tires.

vintagedave
1 replies
7h32m

I’m curious there. I’ve seen rice paddies plowed in Vietnam and the tractors used wide paddle-like wheels. I saw two varieties: one with what looked like more normal wheels but much wider, and one which was of metal with fins, very much akin to a paddle steamer, though still with some kind of flat surface that must have distributed weight.

Would they be more effective with thin wheels? Both humans and cattle seem to sink in a few inches and stop; I don’t know what’s under the layer of mud and what makes up a rice paddy.

littlestymaar
0 replies
6h38m

What I saw was in Taiwan, but I guess it must depend on the depth of the mud and the nature of what's below.

roshankhan28
1 replies
13h9m

I am not an engineer type of person, but to even think that someone is trying to create a motor is really impressive. When I was a kid, I used to break my toy cars and take the motors out of them, and it felt like I really did something. Good ol' days.

throwaway81523
0 replies
12h56m

The motor controller is impressive, but it sounds like a motor controller (as it says), rather than a motor. That is, it's not mechanical, it's electrical, it sends inputs to the motor telling it when to turn the individual magnets on and off. That is a nontrivial challenge since it has to monitor the motor speeds under varying loads and send pulses at exactly the right time, but it's software and electronics, not machinery.

qdot76367
1 replies
19h41m

Ah, it's good to see you continuing your work with types of robots that start with f.

TaylorAlexander
0 replies
19h19m

Hah, that's right. I did get some parts to try to update the other one you are referring to, but given all my projects it has not made it near the top of the queue yet.

Rinzler89
1 replies
9h26m

>I have been developing a brushless motor driver based on the RP2040

Can I ask why? There are dedicated MCUs for BLDC motor control out there that have the dedicated peripherals for the best and easiest sensored/sensorless BLDC motor control, plus the supporting application notes and code samples. The RP2040 is not equipped to be good at this task.

TaylorAlexander
0 replies
9h9m

dedicated MCU for BLDC motor control

During the chip shortage, specialized chips like this were very hard to find. Meanwhile the RP2040 was the highest stocked MCU at digikey and most other places that carried it. The farm robot drive motors don't need high speed control loops or anything. We just needed a low cost flexible system we could have fabbed at JLCPCB. The RP2040 also has very nice documentation and in general is just very lovely to work with.

Also SimpleFOC was already ported to the RP2040, so we had example code etc too. Honestly the CPU was the easy part. As we expected, getting a solid mosfet bridge design was the challenging part.

technofiend
0 replies
1h36m

Taylor, wow! I think you're the only person I've actually seen implement WAAS to boost GPS precision. So cool!

doe_eyes
18 replies
1d3h

I think it's a good way to introduce these chips, and it's a great project, but the author's (frankly weird) beef with STM32H7 is detracting from the point they're trying to make:

So, in conclusion, go replan all your STM32H7 projects with RP2350, save money, headaches, and time.

STM32H7 chips can run much faster and have a wider selection of peripherals than RP2350. RP2350 excels in some other dimensions, including the number of (heterogenous) cores. Either way, this is nowhere near apples-to-apples.

Further, they're not the only Cortex-M7 vendor, so if the conclusion is that STM32H7 sucks (it mostly doesn't), it doesn't follow that you should be instead using Cortex-M33 on RPi. You could be going with Microchip (hobbyist-friendly), NXP (preferred by many commercial buyers), or a number of lesser-known manufacturers.

Archit3ch
7 replies
1d1h

STM32H7 chips can run much faster

STM32H7 tops out at 600MHz. This has 2x 300MHz at 2-3 cycles/op FP64. So maybe your applications can fit into this?

spacedcowboy
2 replies
1d

I'm seeing several statements of 2x300MHz, but the page [1] says 2x150MHz M33s...

I know the RP2040's overclock a lot but these are significantly more complex chips, it seems less likely they'll overclock to 2x the base frequency.

[1] https://www.raspberrypi.com/news/raspberry-pi-pico-2-our-new...

mrandish
1 replies
21h17m

TFA states extensive 300MHz OC with no special effort (and he's been evaluating pre-release versions for a year).

"It overclocks insanely well. I’ve been running the device at 300MHz in all of my projects with no issues at all."

Also

"Disclaimer: I was not paid or compensated for this article in any way. I was not asked to write it. I did not seek or obtain any approval from anyone to say anything I said. My early access to the RP2350 was not conditional on me saying something positive (or anything at all) about it publicly."

spacedcowboy
0 replies
20h32m

Thanks, I missed that.

mordae
1 replies
22h59m

It's 6 cycles for dadd/dsub, 16 for dmul, 51 for ddiv.

vardump
0 replies
9h44m

6 cycles for dadd/dsub

I guess it depends whether you store to X (or Y), normalize & round (NRDD; is it really necessary after each addition?) and load X back every time.

Both X and Y have 64 bits of mantissa, 14 bits of exponent and 4 bits of flags, including sign. Some headroom compared to IEEE 754 fp64's 53-bit mantissa and 11-bit exponent, so I'd assume normalization might not be necessary after every step.

The addition (X = X + Y) itself presumably takes 2 cycles; running coprocessor instructions ADD0 and ADD1. 1 cycle more if normalization is always necessary. And for the simplest real world case, 1 cycle more for loading Y.

Regardless, there might be some room for hand optimizing tight fp64 loops.

Edit: This is based on my current understanding of the available documentation. I might very well be wrong.

adrian_b
0 replies
21h54m

As other posters have mentioned, this has 2 Cortex-M33 cores @ 150 MHz, not @ 300 MHz.

Cortex-M7 is in a different size class than Cortex-M33, it has a speed about 50% greater at the same clock frequency and it is also available at higher clock frequencies.

Cortex-M33 is the replacement for the older Cortex-M4 (while Cortex-M23 is the replacement for Cortex-M0+ and Cortex-M85 is the modern replacement for Cortex-M7).

While for a long time the Cortex-M MCUs had been available in 3 main sizes, Cortex-M0+, Cortex-M4 and Cortex-M7, for their modern replacements there is an additional size, Cortex-M55, which is intermediate between Cortex-M33 and Cortex-M85.

15155
0 replies
1d

The STM32H7 and other M7 chips have caches - performance is night and day between 2x300MHz smaller, cacheless cores and chips with L1 caches (and things like TCM, etc.)

The SRAM in that H7 is running at commensurately-high speeds, as well.

Comparing an overclocked 2xM33 to a non-overclocked M7 is also probably a little inaccurate - that M7 will easily make more than the rated speed (not nearly as much as the RP2040 M0+, though.)

dmitrygr
6 replies
1d3h

1. Nobody has a wider selection of peripherals than a chip with 3 PIOs.

2. And my beef is personal - I spent months (MONTHS of my life) debugging the damn H7, only to find a set of huge bugs in the feature that was the main reason I had been trying to use it (QSPI RAM support), showed them to the manufacturer, and had them do nothing. Later they came back and, without admitting I was right about the bugs, said that "another customer is seeing the same issues, what was the workaround you said you found?" I told them that I'll share the workaround when they admit the problem. Silence since.

I fully reserve the right to be pissy at shitty companies in public on my website!

uticus
2 replies
1d2h

[edit: I retract this, I see you've secretly had it in your possession to play with for over a year. You lucky dog.]

I have been anti-recommending STM’s chips to everyone for a few years now due to STM’s behaviour with regards to the clearly-demonstrated-to-them hardware issues.

You certainly reserve the right. However, it is unclear to me why the recommended answer to complaints built up over a months-long period is a product that has just been released.

Trying to ask in a very unbiased way, since as a hobbyist I'm looking into ST, Microchip, and RP2040. For my part, I've had two out of four RP2040s come to me dead on arrival, as part of two separate boards from different vendors - one being a Pi Pico from Digilent. Not a ton of experience with Microchip, but I hear they have their own problems. Nobody's perfect; the question is how the options compare.

naikrovek
0 replies
1d1h

They're complaining now because they still feel the pain now. While writing the article, they're thinking of how things would have been different on previous projects if they had had this chip, and that digs up pain they felt should be expressed.

I don't know what's so unclear. Have you never had a strong opinion about someone else's stuff? Man, I have.

limpbizkitfan
0 replies
1d1h

I don't think the issue is QA-related: ST released a chip that says it can perform X when the reality is it cannot perform X.

15155
1 replies
1d

1. Nobody has a wider selection of peripherals than a chip with 3 PIOs.

NXP FlexIO says "Hello!"

spacedcowboy
0 replies
14h47m

FlexIO is (I think) powerful, however... I'm not sure if it's me or the way they describe it with all the bit-serialisers/shifters interacting - but I grok the PIO assembly a damn sight easier than FlexIO.

Maybe it's just me. Maybe.

doe_eyes
0 replies
1d2h

I'm not arguing you can't be angry with them, I'm just saying that to me, it detracts from the point about the new platform. Regarding #1, I'm sure you know that peripherals in the MCU world mean more than just digital I/O. Further, even in the digital domain, the reason PIO isn't more popular is that most people don't want to DIY complex communication protocols.

limpbizkitfan
2 replies
1d1h

ST is a zillion-dollar company that should be hiring the talent capable of delivering products that match the features in their sales pamphlets. Integration is tricky, but a company with ST's deep pockets should be able to root-cause or at least help troubleshoot an issue, not ask for a fix like some nepotism hire.

doe_eyes
0 replies
3h46m

I'm not an ST fanboy and they're not a vendor I use, but they are very popular in the 32-bit Cortex-M space, so they're clearly doing something right. Meanwhile, companies like Microchip that put effort into accessible documentation and tooling are getting table scraps.

HeyLaughingBoy
0 replies
22h57m

They should also be hiring people that can write clearly in their datasheets, but here we are, so...

jonathrg
14 replies
1d3h

Can someone explain the benefit of having essentially 4 cores (2 ARM + 2 RISC-V) on the chip but only having 2 able to run simultaneously? Does this take significantly less die space than having all 4 available at all times?

coder543
5 replies
1d3h

Coordinating access to the memory bus and peripherals is probably not easy to do when the cores weren’t ever designed to work together. Doing so could require a power/performance penalty at all times, even though most users are unlikely to want to deal with two completely different architectures across four cores on one microcontroller.

Having both architectures available is a cool touch. I believe I criticized the original RP2040 for not being bold enough to go RISC-V, but now they’re offering users the choice. I’ll be very curious to see how the two cores compare… I suspect the ARM cores will probably be noticeably better in this case.

swetland
4 replies
1d2h

They actually let you choose one Cortex-M33 and one RISC-V RV32 as an option (probably not going to be a very common use case) and support atomic instructions from both cores.

coder543
3 replies
1d2h

All of the public mentions of this feature that I've seen indicated it is an either/or scenario, except the datasheet confirms what you're saying:

The ARCHSEL register has one bit for each processor socket, so it is possible to request mixed combinations of Arm and RISC-V processors: either Arm core 0 and RISC-V core 1, or RISC-V core 0 and Arm core 1. Practical applications for this are limited, since this requires two separate program images.

That is fascinating... so, likely what dmitrygr said about the size of the crossbar sounds right to me: https://news.ycombinator.com/item?id=41192580

moffkalast
1 replies
1d2h

Did Dr. Frankenstein design this SoC? Igor, fetch me the cores!

ebenupton
0 replies
20h31m

It's aliiiiiive!

tredre3
2 replies
23h24m

Each arm/riscv set likely share cache and register space (which takes most of the die space by far), resulting in being unable to use them both simultaneously.

formerly_proven
1 replies
23h17m

Considering that these are off-the-shelf Cortex-M designs I doubt that Raspi was able or would be allowed to do that. I'd expect most of the die to be the 512K SRAM, some of the analog and power stuff and a lot of it just bond pads.

ebenupton
0 replies
20h21m

That's correct. The Arm and RISC-V cores are entirely separate, sharing no logic.

dmitrygr
2 replies
1d3h

Cores are high-bandwidth bus masters. Making a crossbar that supports 5 high-bandwidth masters (4x core + DMA) is likely harder, larger, and higher-power than one that supports 3.

ebenupton
1 replies
20h19m

It's actually 10 masters (I+D for 4 cores + DMA read + DMA write) versus 6 masters. Or you could pre-arbitrate each pair of I and each pair of D ports. But even there the timing impact is unpalatable.

dmitrygr
0 replies
15h28m

Which is even more impressive yet :)

networked
0 replies
23h51m

I see a business decision here. Arm cores have licensing fees attached to them. Arm is becoming more restrictive with licensing and wants to capture more value [1]:

The Financial Times has a report on Arm's "radical shake-up" of its business model. The new plan is to raise prices across the board and charge "several times more" than it currently does for chip licenses. According to the report, Arm wants to stop charging chip vendors to make Arm chips, and instead wants to charge device makers—especially smartphone manufacturers—a fee based on the overall price of the final product.

Even if the particular cores in the RP2350 aren't affected, the general trend is unfavorable to Arm licensees. Raspberry Pi has come up with a clever design that allows it to start commoditizing its complement [2]: make the cores a commodity that is open-source or available from any suitable RISC-V chip designer instead of something you must go to Arm for. Raspberry Pi can get its users accustomed to using the RISC-V cores—for example, by eventually offering better specs and more features on RISC-V than Arm. In the meantime, software that supports the Raspberry Pi Pico will be ported to RISC-V with no disruption. If Arm acts up and RISC-V support is good enough or when it becomes clear users prefer RISC-V, Raspberry Pi can drop the Arm cores.

[1] https://arstechnica.com/gadgets/2023/03/risc-y-business-arm-...

[2] https://gwern.net/complement

blihp
0 replies
1d

Beyond the technical reasons for the limit, it provides for a relatively painless way to begin to build out/for RISC-V[1] without an uncomfortable transition. For those who just want a better next iteration of the controller, they have it. For those who build tools, want to A/B test the architectures, or just do whatever with RISC-V, they have that too. All without necessarily setting the expectation that both will continue to coexist long term.

[1] While it's possible they are envisioning dual architecture indefinitely, it's hard to imagine why this would be desirable long term esp. when one architecture can be royalty free and the other not, power efficiency, paying for dark silicon etc.

mmmlinux
11 replies
1d3h

And still no USB C on the official devboard.

naikrovek
7 replies
1d1h

And still no USB C on the official devboard.

Do you live in a universe where micro-USB cables are not available, or something? There's gonna be something or other that needs micro-USB for the next decade, so just buy a few and move on. They're not expensive.

[later edit: I bet it has to do with backwards compatibility. They don't want people to need to rework case designs to use something that is meant as a drop-in replacement for the Pi Pico 1.]

ewoodrich
3 replies
23h42m

Personally I have about three dozen USB-A to USB-C cables lying around and the thought of actually spending money to acquire extra Micro USB cables in 2024 is very unappealing.

I (deliberately) haven’t bought a consumer electronic device that still uses Micro USB in years so don’t accumulate those cables for free anymore like with USB-C.

Of course, ubiquitous USB-C dev boards/breakout boards without 5.1kΩ resistors for C-C power are their own frustration ... But I can tolerate that, having so many extra USB-A chargers and cables. Trigger boards are great because they necessarily support PD without playing the AliExpress C-C lottery.

naikrovek
2 replies
16h42m

I (deliberately) haven’t bought a consumer electronic device that still uses Micro USB in years so don’t accumulate those cables for free anymore like with USB-C.

I guess you’re not gonna be buying a Pi Pico 2, then. So why are you complaining about something you aren’t going to use?

ewoodrich
0 replies
15h11m

I think you misread what I wrote: consumer electronic device

Dev boards or niche specialized hardware are about the only thing I've willingly bought with Micro USB in 4+ years. As much as I try to avoid it given my preference for USB-C, sometimes I don't have a good alternative available.

So why are you complaining about something you aren’t going to use?

Because it looks like a great upgrade to my RP2040-Zero boards that I would like to buy, but I really dislike the choice of connector? What is wrong with that?

Dylan16807
0 replies
10h21m

Even if you interpreted that sentence right, that's not a reasonable rebuttal. If a feature stops someone from buying a product, then it makes sense to complain about the feature. Their non-purchase doesn't invalidate the complaint. It's only when someone isn't interested in the category at all that complaints lose their value.

wolrah
2 replies
22h31m

Do you live in a universe where micro-USB cables are not available, or something? There's gonna be something or other that needs micro-USB for the next decade, so just buy a few and move on. They're not expensive.

I live in a universe where type C has been the standard interface for devices for years, offering significant advantages with no downsides other than a slightly higher cost connector, and it's reasonable to be frustrated at vendors releasing new devices using the old connector.

It's certainly not as bad as some vendors of networking equipment who still to this day release new designs with Mini-B connectors that are actually officially deprecated, but it's not good nor worthy of defending in any way.

I bet it has to do with backwards compatibility. They don't want people to need to rework case designs to use something that is meant as a drop-in replacement for the Pi Pico 1.

Your logic is likely accurate here, but that just moves the stupid choice back a generation. It was equally dumb and annoying to have Micro-B instead of C on a newly designed and released device in 2021 as it is in 2024.

The type C connector was standardized in 2014 and became standard on phones and widely utilized on laptops starting in 2016.

IMO the only good reason to have a mini-B or micro-B connector on a device is for physical compatibility with a legacy design that existed prior to 2016. Compatibility with a previous bad decision is not a good reason; fix your mistakes.

Type A on hosts will still be a thing for a long time, and full-size type B still makes sense for large devices that are not often plugged/unplugged where the size is actually a benefit, but the mini-B connector is deprecated and the micro-B connector should be.

bigstrat2003
1 replies
20h8m

Micro-B is fine. This is such an overblown non-issue I am shocked that people are making a big deal of it.

crote
0 replies
10h32m

It's not a huge deal, but it's still a very strange choice on a product released in 2024.

Pretty much everyone has a USB-C cable lying around on their desk because they use it to charge their smartphone. I probably have a Micro-B cable lying around in a big box of cables somewhere, last used several years ago. Even cheap Chinese garbage comes with USB-C these days.

Sure, Micro-B is technically just fine, but why did Raspberry Pi go out of their way to make their latest product more cumbersome to use?

moffkalast
1 replies
21h31m

Unless the USB-C connector costs $7-10, these are beyond ridiculously overpriced compared to the official dev board. At least throw in an IMU or something if you plan to sell low volumes at high prices jeez.

jsheard
0 replies
21h2m

The cheapest one I've seen so far is the XIAO RP2350, which is $5, same as the official Pico board. I'm sure there will be more cheap options once more Chinese manufacturers get their hands on the chips, no-name USB-C RP2040 boards are ridiculously cheap.

kaycebasques
11 replies
21h40m

Big day for my team (Pigweed)! Some of our work got mentioned in the main RP2350/Pico2 announcement [1] but for many months we've been working on a new end-to-end SDK [2] built on top of Bazel [3] with support for both RP2040 and RP2350, including upstreaming Bazel support to the Pico SDK. Our new "Tour of Pigweed" [4] shows a bunch of Pigweed features working together in a single codebase, e.g. hermetic builds, on-device unit tests, RPC-centric comms, factory-at-your-desk testing, etc. We're over in our Discord [5] if you've got any questions

[1] https://www.raspberrypi.com/news/raspberry-pi-pico-2-our-new...

[2] https://opensource.googleblog.com/2024/08/introducing-pigwee...

[3] https://blog.bazel.build/2024/08/08/bazel-for-embedded.html

[4] https://pigweed.dev/docs/showcases/sense/

[5] https://discord.gg/M9NSeTA

dheera
8 replies
21h39m

I hate Bazel. A build system for C/C++ should not require a Java VM. Please keep Java out of the microcontroller ecosystem -__-

kaycebasques
3 replies
20h35m

We realize Bazel is not the right build system for every embedded project. The "Bazel for Embedded" post that came out today (we co-authored it) talks more about why we find Bazel so compelling: https://blog.bazel.build/2024/08/08/bazel-for-embedded.html

bobsomers
1 replies
19h15m

In my experience, Bazel is great if you are a Google-sized company that can afford to have an entire team of at least 5-10 engineers doing nothing but working on your build system full time.

But I've watched it be insanely detrimental to the productivity of smaller companies and teams who don't understand the mountain of incidental complexity they're signing up for when adopting it. It's usually because a startup hires an ex-Googler who raves about how great Blaze is without understanding how much effort is spent internally to make it great.

kaycebasques
0 replies
18h42m

Thanks for the discussion. What was the timeframe of your work in these Bazel codebases (or maybe it's ongoing)? And were they embedded systems or something else?

actionfromafar
0 replies
19h45m

Bazel is great for some enterprises. Try it somewhere Azure rules and behold the confused looks everywhere.

jaeckel
1 replies
21h34m

And only Discord on top, but maybe I'm simply not hip enough.

kaycebasques
0 replies
18h39m

I forwarded your feedback to the team and we are now vigorously debating which other comms channels we can all live with

clumsysmurf
0 replies
20h14m

Maybe there is a way to create a native executable with GraalVM...

TickleSteve
0 replies
11h20m

I have to admit, Bazel as a build system would mean it wouldn't even be considered by me; it has to fit in with everything else, which typically means Makefiles, like it or not.

TBH, Java + Bazel + Discord makes it seem like it's out of step with the embedded world.

snvzz
0 replies
14h58m

Is RISC-V supported?

I am surprised the Pigweed announcement makes no mention of this.

simfoo
0 replies
9h8m

Pretty awesome. I love Bazel and it seems you're making good use of it. It's such a difference seeing everything hermetically integrated with all workflows boiling down to a Bazel command.

vardump
10 replies
22h21m

RP2040 had Doom ported to it. https://kilograham.github.io/rp2040-doom/

RP2350 looks very much like it could potentially run Quake. Heck, some of the changes almost feel like they're designed for this purpose.

FPU, two cores at 150 MHz overclockable beyond 300 MHz, and support for up to 16 MB of PSRAM with hardware R/W paging.

chipxsd
8 replies
21h32m

While outputting DVI! I wouldn't be surprised.

rvense
7 replies
20h36m

Mouser have 64 megabyte PSRAMs.

I really want a Mac System 7 grade operating system for this chip...

dmitrygr
5 replies
19h34m

No, they do not. 64 megabit.

rvense
4 replies
9h9m

Did you bother to check? It's octal, not QSPI, so I don't know if it's compatible. (edit - and 1.8V, inconvenient)

dmitrygr
1 replies
4h15m

Did you? It doesn't do QSPI mode.

rvense
0 replies
2h25m

Of course I did. If you also did, you would know that they do in fact have 64 megabyte PSRAMs as I stated. So a helpful comment would have been "they're not compatible, though". Your reply as it stands just makes it sound like you maybe assumed that I don't know the difference between megabits and megabytes.

andylinpersonal
1 replies
4h51m

Octal PSRAM usually cannot fall back to quad mode the way some octal flash can.

refulgentis
0 replies
18h11m

If we can get Flutter running on these...

elipsitz
10 replies
1d5h

Can’t find an official announcement or datasheet yet, but according to this post:

* 2x Cortex-M33F
* improved DMA
* more and improved PIO
* external PSRAM support
* variants with internal flash (2MB) and 80 pins (!)
* 512KiB RAM (double)
* some RISC-V cores? Low power maybe?

Looks like a significant jump over the RP2040!

HeyLaughingBoy
2 replies
1d2h

Make one in an Arduino Uno form factor and double the price and they'd make a killing :-)

I try to dissuade n00bs from starting their Arduino journey with the ancient AVR-based devices, but a lot of the peripherals expect to plug into an Uno.

ta988
0 replies
1d1h

Look at the Adafruit Metro then. They just announced the RP2350 version.

moffkalast
0 replies
1d1h

Well, there's the UNO R4 with a Renesas chip, I suppose, but this would be much cooler indeed. There's also the 2040 Connect in the Nano form factor with the extra IMU.

jsheard
0 replies
1d3h

Only $1 more than the original Pico, that's an absolute steal. Although the Pico 2 doesn't have PSRAM onboard, so there's room for higher-end RP235x boards above it.

repelsteeltje
1 replies
1d3h

... And the RP2354A/B even has 2MB of built-in flash!

andylinpersonal
0 replies
4h50m

It's an in-package Winbond flash die, though.

zrail
0 replies
1d3h

This is pretty exciting. Can't wait for the datasheet!

dgacmu
0 replies
16h56m

Small (0.5 bits effective) improvement to the ADC also, per the datasheet.

KaiserPro
0 replies
1d2h

I'm hoping that it's got much better power management. That would be really cool for me.

tecleandor
6 replies
1d2h

Not off-topic, but a bit tangential...

How difficult would it be to emulate an old SRAM chip with an RP2040 or an RP2350? It's an early-80s (or older) 2048-word, 200ns-access-time CMOS SRAM that is used to save presets on an old Casio synth. It's not a continuous memory read; it just reads when loading a preset into memory.

I feel like PIO would be perfect for that.
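
A minimal CPU-polled sketch of the idea using the Pico SDK, just to show the shape of it (the pin map, the 2Kx8 organization, and /CS-only decoding are assumptions; for guaranteed 200ns timing you'd want a PIO program plus chained-DMA lookup instead):

    // Hypothetical wiring: A0-A10 on GPIO0-10, D0-D7 on GPIO11-18, /CS on GPIO19.
    #include "pico/stdlib.h"
    #include "hardware/gpio.h"

    #define ADDR_MASK 0x000007FFu   // GPIO0-10
    #define DATA_MASK 0x0007F800u   // GPIO11-18
    #define CS_PIN    19

    static uint8_t sram[2048];      // preset image, preloaded from flash

    int main(void) {
        for (uint pin = 0; pin <= CS_PIN; pin++) gpio_init(pin);
        gpio_set_dir_masked(DATA_MASK, DATA_MASK);  // data pins drive out
        // Address and /CS pins stay inputs (the gpio_init default).

        while (true) {
            uint32_t bus = gpio_get_all();          // sample the whole bus at once
            if (!(bus & (1u << CS_PIN))) {          // /CS is active low
                gpio_put_masked(DATA_MASK, (uint32_t)sram[bus & ADDR_MASK] << 11);
            }
        }
    }

A real cartridge would also want /OE handling and tri-stated data pins when deselected, which is where a PIO state machine doing the bus protocol starts to look attractive.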

HeyLaughingBoy
2 replies
1d2h

If it's not an academic question and you have an actual need for the SRAM, what's the p/n? I have some old parts stock and may have what you need.

tecleandor
1 replies
1d1h

Oh! Thanks!

I wanted to do a clone or two of said cartridges, which use, IIRC (I'm not in my workshop right now), a couple of Hitachi HM6116FPs each.

I've also seen some clones from back in the day using a CXK5864PN-15L, that's 8 kilowords, and getting 4 switchable "memory banks" out of it...

HeyLaughingBoy
0 replies
23h3m

Thought I had more than this, but it's been literally decades...

I found (1) HM6116, (4) HM65256s, (1) HM6264 and, wonder of wonders, a Dallas battery-backed DS1220, although after 20+ years the battery is certainly dead. All in DIP packages, of course.

And a couple of 2114s with a 1980 date code! (Those are actually 1Kx4 SRAMs, not DRAMs.)

If any of this is useful to you, PM me an address and I'll pop them in the mail.

tecleandor
0 replies
1d2h

Ooooh, that looks cool and the PCB seems simple (at least from this side). Congrats!

Do you have anything published?

bschwindHN
6 replies
1d1h

Alright, what's the max image resolution/framerate someone is going to pump out with the HSTX peripheral?

bschwindHN
4 replies
15h41m

I wonder what other uses people will find for it. It's one-way data transfer; I wonder if it could be hooked up to a USB 2.0 or USB 3.0 peripheral, an Ethernet PHY, or something else.

spacedcowboy
3 replies
14h49m

Pretty sure I'm going to link it up with an FPGA at some point - as long as the data is unidirectional, this is a promise of 2400 Mbit/sec, which for a $1 microcontroller is insane. If it overclocks like the processor, you're up to 4800 Mbit/sec ... stares into the distance

I can use PIO in the other direction, but this has DDR, so you'll never get the same performance. It's a real shame they didn't make it bi-directional, but maybe the use-case here is (as hinted by the fact it can do TMDS internally) for DVI out.

If they had made it bidirectional, I could see networks of these little microcontrollers transmitting/receiving at gigabit rates... Taken together with PIO, XMOS would have to sit up straight pretty quickly...

bschwindHN
2 replies
11h11m

Right? Bidirectional capability at those speeds would be incredible for the price of this chip.

Either way, still looking forward to seeing what people cook up with it, and hopefully I'll find a use for it as well. Maybe combine it with some cheap 1920x1080 portable monitors to have some beautiful dashboards around the house or something...

vardump
1 replies
10h49m

1920x1080 30 Hz DVI would require running RP2350 at least at 311 MHz ((1920 * 1080 * 30Hz * 10) / 2). Probably a bit more to account for minimal horizontal and vertical blanking etc. Multiplier 10 comes from 8b10b encoding.

To fit in 520 kB of RAM, the framebuffer would need to be just 1 bpp, 2 colors (1920 * 1080 * 1bpp = 259200 bytes).

From PSRAM I guess you could achieve 4 bpp, 16 colors. 24-bit RGB full color would be achievable at 6 Hz refresh rate.

I guess you might be able to store framebuffer as YUV 4:2:0 (=12 bits per pixel) and achieve 12 Hz refresh rate? The CPU might be just fast enough to compute YUV->RGB in real time. (At 1920x1080@12Hz 12 clock cycles per pixel per core @300 MHz.)

(Not sure whether the displays can accept very low refresh rates.)
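
For anyone who wants to verify the arithmetic, the same numbers in plain C:

    #include <stdio.h>

    int main(void) {
        double px_per_s = 1920.0 * 1080.0 * 30.0;
        // 10 bits per pixel per lane (8b10b); DDR halves the required clock
        printf("HSTX clock: %.1f MHz\n", px_per_s * 10.0 / 2.0 / 1e6);    // ~311.0
        printf("1bpp framebuffer: %.0f bytes\n", 1920.0 * 1080.0 / 8.0);  // 259200
        return 0;
    }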

bschwindHN
0 replies
7h9m

Hmmm yeah, I haven't done the math and was maybe a bit too ambitious/optimistic.

kvemkon
5 replies
1d3h

1 × USB 1.1 controller and PHY, with host and device support

Sure, after integrating USB 2.0 HS or 1Gb Ethernet, the Pico 2 board would cost more than $5. But wouldn't integrated high-speed interfacing with a PC have been a nice-to-have option (for a special chip flavor)?

rasz
2 replies
8h49m

USB 2.0 HS

480 Mbps SERDES

or 1Gb-Ethernet

1.25 Gbps SERDES

namibj
0 replies
54m

GigE over copper is actually PAM-5 at 125 MBaud across four pairs, not a single 1.25 Gbps serial lane.

kvemkon
0 replies
7h7m

The RP1 I/O chip on the RPi 5 has so many high-speed interfaces. I've been thinking the RP2350 could be a smart I/O chip for PCs/notebooks/network-attached computers (with only one high-speed connection necessary).

solidninja
1 replies
1d1h

I think the target here is low-power peripherals rather than speedy peripherals, and the price is very nice for that :)

boznz
4 replies
21h24m

I got almost all of my wishes granted with RP2350

I got all of mine; these guys really listened to the (minor) criticisms of the RP2040 on their forums and knocked them out of the ballpark. Can't wait to get my hands on real hardware. Well done, guys.

ebenupton
3 replies
20h18m

Thank you. It's been a major effort from the team, and I'm very proud of what they've accomplished.

vardump
2 replies
10h22m

Thanks for a great product!

A (small?) ask. Can we have instruction timings please? Like how many cycles SMLAL (signed multiply long, with accumulate) takes?

Will there be an official development board with all 48 GPIOs exposed?
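
In the meantime, here's a rough empirical approach I'd try; a sketch using the architectural SysTick counter to time a run of SMLALs (uncalibrated: subtract an empty measurement and divide by four for a per-instruction figure):

    #include <stdint.h>
    #include "pico/stdlib.h"

    // SysTick is architectural on Cortex-M33, so no RP2350-specific registers needed.
    #define SYST_CSR (*(volatile uint32_t *)0xE000E010u)
    #define SYST_RVR (*(volatile uint32_t *)0xE000E014u)
    #define SYST_CVR (*(volatile uint32_t *)0xE000E018u)

    static uint32_t cycles_for_four_smlal(void) {
        SYST_RVR = 0x00FFFFFFu;   // max 24-bit reload
        SYST_CVR = 0;             // any write clears the counter
        SYST_CSR = 0x5;           // enable, clocked from the processor clock
        uint32_t lo = 1, hi = 2;
        uint32_t start = SYST_CVR;
        __asm volatile (
            "smlal %0, %1, %2, %3\n"
            "smlal %0, %1, %2, %3\n"
            "smlal %0, %1, %2, %3\n"
            "smlal %0, %1, %2, %3\n"
            : "+r"(lo), "+r"(hi)
            : "r"(3u), "r"(4u));
        uint32_t end = SYST_CVR;
        return start - end;       // SysTick counts down
    }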

ebenupton
1 replies
8h26m

Cortex-M33 timings aren't documented, but one of our security consultants has made a lot of progress reverse engineering them to support his work on trace stacking for differential power analysis of our AES implementation. I've asked him to write this up to go in a future rev of the datasheet.

No official 48 GPIO board, I think: this is slightly intentional because it creates market space for our partners to do something.

vardump
0 replies
7h54m

I've asked him to write this up to go in a future rev of the datasheet.

Thanks!

rowanG077
3 replies
1d

Would the PIO now support sane Ethernet using RMII, for example?

rscott2049
2 replies
18h28m

I'm assuming you've looked at the pico-rmii-ethernet library? If so, I feel your pain - I've been fixing issues, and am about halfway done. (This is for the DECstation2040 project, available on GitHub.) Look for a release in late Aug/early Sep. (Maybe with actual LANCE code? Dmitry??) The RP2350 will make RMII slightly easier - the endless DMA allows elimination of the DMA reload channel(s).

rowanG077
1 replies
3h55m

I looked at it and dismissed it as too hacky for production. I don't remember the real reason why; I would have to look through my notes. The main question is whether the RP2350 will change that - as in, is it actually possible to do bug-free, without weird hacks?

rscott2049
0 replies
2h42m

Agreed. I rewrote the Rx PIO routine to do a proper interrupt at EOF, and added a DMA-driven ring buffer, which eliminated a lot of the hackiness...
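
For the curious, the ring-buffer part looks roughly like this with the Pico SDK (state machine, ring size, and 8-bit transfers are assumptions; on the RP2040 the finite TRANS_COUNT is what forces the reload-channel dance that the RP2350's endless mode removes):

    #include "hardware/dma.h"
    #include "hardware/pio.h"

    #define RX_RING_BITS 12   // 4 KiB ring; must be a power of two
    static uint8_t rx_ring[1 << RX_RING_BITS]
        __attribute__((aligned(1 << RX_RING_BITS)));

    void rx_ring_start(PIO pio, uint sm) {
        int ch = dma_claim_unused_channel(true);
        dma_channel_config c = dma_channel_get_default_config(ch);
        channel_config_set_read_increment(&c, false);       // always read the RX FIFO
        channel_config_set_write_increment(&c, true);       // walk the ring
        channel_config_set_ring(&c, true, RX_RING_BITS);    // wrap the write address
        channel_config_set_transfer_data_size(&c, DMA_SIZE_8);
        channel_config_set_dreq(&c, pio_get_dreq(pio, sm, false)); // pace on RX FIFO
        dma_channel_configure(ch, &c,
                              rx_ring,        // write: ring buffer
                              &pio->rxf[sm],  // read: PIO RX FIFO
                              0xFFFFFFFFu,    // near-endless count on RP2040
                              true);          // start immediately
    }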

limpbizkitfan
3 replies
1d3h

Is there an exhaustive list of stm32h7 errata? Has anyone compiled a defect list?

dmitrygr
1 replies
1d3h

STM has an inexhaustible list of them, but does not list at least a few QSPI ones that I am aware of. :/

limpbizkitfan
0 replies
1d1h

:( hoping to play with a Pico 2 soon so I can convince my team to move off stm32h7

TheCipster
3 replies
6h17m

While I completely agree with the content of the post, I still think that QFN packages in general, and RP2350's in particular, are very hobbyist-averse.

Moving all GND pins to the bottom pad makes this chip usable only by people with a reflow oven. I really hoped to see at least a version released as (T)QFP.

mastax
0 replies
4h38m

My reflow oven is a $5 hot plate and a $15 toaster oven. I don't know if that is very hobbyist-averse.

jsfnaoad
0 replies
5h41m

Hard disagree. A TQFP package this dense is still quite challenging for a hobbyist. Just use a breakout board or dev board, or get the QFN assembled for you at JLCPCB.

dgacmu
0 replies
5h48m

Isn't the hobbyist solution to just build a board to which you can attach an entire Pico board? That does preclude some things and adds $3, but it makes for a pretty easy prototyping path.

maccam912
2 replies
1d3h

Can anyone speak about plans for a Pico 2 W (or Pico W 2)? I've been playing around recently with mine and even just syncing with the current time over wifi opens up a lot of possibilities.

amelius
2 replies
4h59m

How easy is it to share memory between two of these processors?

sounds
1 replies
1h42m

Hmm, a 4-core cluster?

Easiest would be to wire up two chips with bidirectional links and use a fault handler to transfer small blocks of memory across. You're reimplementing a poor man's MESIF https://stackoverflow.com/questions/31876808.
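
A trivial sketch of the transport half of that with the Pico SDK, moving fixed 32-byte "lines" over a UART link (pins, baud rate, and framing are all assumptions; the fault-handler/coherence half is the hard 90%):

    #include "pico/stdlib.h"
    #include "hardware/uart.h"

    #define LINK    uart1
    #define LINK_TX 4   // hypothetical pin choice
    #define LINK_RX 5

    static void link_init(void) {
        uart_init(LINK, 3000000);   // 3 Mbaud point-to-point
        gpio_set_function(LINK_TX, GPIO_FUNC_UART);
        gpio_set_function(LINK_RX, GPIO_FUNC_UART);
    }

    // Ship one 32-byte "cache line" plus its address to the other chip.
    static void link_send_line(uint32_t addr, const uint8_t data[32]) {
        uart_write_blocking(LINK, (const uint8_t *)&addr, sizeof addr);
        uart_write_blocking(LINK, data, 32);
    }

    static void link_recv_line(uint32_t *addr, uint8_t data[32]) {
        uart_read_blocking(LINK, (uint8_t *)addr, sizeof *addr);
        uart_read_blocking(LINK, data, 32);
    }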

amelius
0 replies
1h4m

This is something I'd like to see an article about on HN :)

RA2lover
2 replies
1d4h

Is there any info on the analog capabilities compared to the RP2040?

TomWhitwell
1 replies
1d1h

Looks like 4 x ADC channels again, no on-board DAC

RA2lover
0 replies
1d

The 80-pin version has 8.

weinzierl
1 replies
1d

This is fantastic news. Is there information on power consumption? This is something that unfortunately precludes a good deal of use cases for the RP2040; any improvement here would be good. But maybe the RPs are just not made for ultra low power?

ebenupton
0 replies
20h14m

Significant improvements to flat-out power (switcher vs LDO) and to idle power (low quiescent current LDO for retention). Still not a coin-cell device, but heading in the right direction.

v1ne
1 replies
22h4m

Hmm, it's really nice that they fixed so many complaints. But honestly, reading the errata sheet, I had to chuckle that Dmitry didn't tear this chip to pieces.

I mean, there are errata about obscure edge cases, about minuscule bugs. Sure, mistakes happen. And then there's this: internal pull-downs don't work reliably.

Workaround: disconnect the digital input and only connect it while you're reading the value. Well, great! Now it takes 3 instructions to read data from a port, significantly reducing the rate at which you can read data!

I guess it's just rare to need pull-downs, so that naturally mitigates the issue a bit.
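
In Pico SDK terms the workaround dance looks roughly like this (the pin number is a placeholder; the input buffer stays disconnected except at the moment of sampling):

    #include "hardware/gpio.h"

    // Read a pulled-down input per the erratum workaround: enable the
    // digital input buffer, sample the level, then disconnect it again.
    static inline bool read_pulled_down_pin(uint pin) {
        gpio_set_input_enabled(pin, true);
        bool level = gpio_get(pin);
        gpio_set_input_enabled(pin, false);
        return level;
    }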

colejohnson66
0 replies
18h59m

You can also use external resistors.

swetland
1 replies
1d2h

Lots of nice improvements here. The RISC-V RV32I option is nice -- so many RV32 MCUs have absurdly tiny amounts of SRAM and very limited peripherals. The Cortex-M33s are a biiig upgrade from the M0+s in the RP2040. Real atomic operations. An FPU. I'm excited.

guenthert
0 replies
8h12m

Many people seem excited about the FPU. Could you help me understand what hardware floating-point support in an MCU is needed for? I remember DSPs using (awkward word-size) fixed-point arithmetic.

jsheard
1 replies
1d4h

Speak of the devil: https://news.ycombinator.com/item?id=41156743

Very nice that the "3" turned out to mean the modern M33 core rather than the much older M3 core. It has a real FPU!

dmitrygr
0 replies
1d3h

Yes, well-guessed

fouronnes3
1 replies
1d2h

Curious about the low-power and sleep mode improvements!

geerlingguy
0 replies
1d1h

Me too; I had a little trouble with MicroPython lightsleep and deepsleep in the pre-release version I was testing.

I will re-test and try to get better sleep state in my code either today or tomorrow!

endorphine
1 replies
10h41m

Can someone explain what projects this can be used for?

vardump
0 replies
10h27m

It's a pretty generic µC, especially well suited for projects that require high-speed (GP)I/O. It can replace an FPGA in some cases.

DSP extensions and FPU support are also decent, so good for robotics, (limited) AI, audio, etc.

Also great for learning embedded systems. Very low barrier to entry: just download the IDE and connect it with a USB cable.

boznz
1 replies
17h31m

This is amazing IP; it makes you wonder if the RPi foundation could be a target for acquisition by one of the big microcontroller manufacturers.

spacedcowboy
0 replies
17h5m

$deity, I hope not. Then we'd lose the price/performance of these little beauties.

urbandw311er
0 replies
21h51m

I absolutely love this guy’s enthusiasm.

tibbon
0 replies
20h18m

Thanks for making the DEF CON badge! 10000x cooler than last year

ralferoo
0 replies
3h57m

I presume the article was edited to change its headline after it was submitted to HN, but it's interesting that it doesn't match up with the HN title. It's still a subjective but positive title, but somehow feels like it has a different tone to the title on HN:

HN: "I got almost all of my wishes granted with RP2350"

Article: "Why you should fall in love with the RP2350"

title tag: "Introducing the RP2350"

numpad0
0 replies
1d3h

https://www.raspberrypi.com/products/rp2350/

4 variants? "A" and "B" variants in QFN-60 and QFN-80, and "2350" (no flash) vs "2354" (2MB in-package flash) variants. The CPU can be switched between dual RISC-V and dual Cortex-M33, both at 150MHz, by software or in one-time-programmable memory (=permanently).

Datasheet, core-switching details, and most of the docs are 404 as of now; I guess they didn't have the embargo date actually written in `crontab`.

e: and datasheet is up!

mastax
0 replies
3h14m

It's a bit surprising that they put so much effort into security for the second microcontroller from a young, consumer-oriented* company. My first instinct was to distrust its security, simply due to lack of experience. However, the "experienced" vendors' secure micros have lots of known security bugs and, more crucially, a demonstrated desire to sweep them under the rug. Two security architecture audits, a $10k bug bounty, and designing a board for glitching as the DEF CON badge show a pretty big commitment to security. I'm curious about how the Redundancy Coprocessor works. I still wouldn't be surprised if someone breaks it, at least partially.

* By perception at least. They have been prioritizing industrial users from a revenue and supply standpoint, it seems.

katzinsky
0 replies
1d3h

I suppose this isn't the first time a company that started out as a hobbyist board manufacturer has produced really amazing microcontrollers, but man, is it insane how far they've knocked the ball out of the park.

jononor
0 replies
1d1h

Aha, the 3 is for M33, not Cortex M3 (as some speculated based on the name). That makes a lot more sense! Integrated FPU is a big improvement over the RP2040, and M33 is a modern but proven core.

jackwilsdon
0 replies
22h29m

I'm most excited for the partition and address translation support - partitions can be mapped to the same address for A/B boot slots (and it supports "try before you buy" to boot into a slot temporarily). No more compiling two copies for the A and B slots (at different addresses)!

hashtag-til
0 replies
8h26m

The disclaimer is brutally honest. I love it.

gchadwick
0 replies
1d

This looks awesome - a really great step up from the RP2040. I'm a big fan of the original and I'm excited to see all the improvements and upgrades.

I imagine that with the new secure boot functionality they've got a huge new range of customers to tempt.

Also exciting to see them dip their toe into the open silicon waters with the hazard 3 RISCV core https://github.com/Wren6991/Hazard3.

Of course, if they'd used Ibex https://github.com/lowrisc/ibex - the RISC-V core we develop and maintain at lowRISC - that would have been even better, but you can't have everything ;)

ckemere
0 replies
18h15m

I wish there were a way to share memory with a Pi. The PIO looks great for high-speed custom I/O, but a 100Mb-scale interface to/from it is quite hard/unsolved.

brcmthrowaway
0 replies
23h54m

Why would I pick this over an ESP32 if I need to get shit done?

begriffs
0 replies
20h33m

I see CMSIS definitions for the RP2040 at https://github.com/raspberrypi/CMSIS-RP2xxx-DFP but none for the RP2350. Maybe they'll eventually appear in that repo, given its name is RP2xxx? I thought vendors were legally obligated to provide CMSIS definitions when they license an ARM core.

anyfoo
0 replies
10h6m

> I was not paid or compensated for this article in any way

However the Raspberry Pi engineer in question WAS compensated for the samples, in the form of a flight over downtown Austin in Dmitry's Cirrus SR22.

Hahah, I’ve been in that plane. Only in my case, it was a flight to a steak house in central California, and I didn’t actually do anything to get “compensated”, I was just at the right place at the right time.

Anyway, I am extremely excited about this update, RPi are knocking it out of the park. That there is a variant with flash now is a godsend by itself, but the updates to the PIO and DMA engines make me dream up all sorts of projects.

andylinpersonal
0 replies
4h44m

In terms of security features, it lacks the on-the-fly external memory (flash and PSRAM) encryption and decryption that the ESP32 and some newer STM32s have. Decrypting with a custom OTP bootloader and running entirely in internal SRAM may be too limiting for larger firmware.

amelius
0 replies
5h1m

So, in conclusion, go replan all your STM32H7 projects with RP2350, save money, headaches, and time.

Except the STM32H7 series goes up to 600MHz.

Overclocking is cool, but you can't do that in most commercial projects.

Taniwha
0 replies
13h38m

New Bus Pirate 5XL & 6 also dropping today - they use the RP2350.

https://buspirate.com/

SethTro
0 replies
1d1h

This has 2 of the 3 features (float support, faster clock) that were keeping me on ESP32, plus more PIO. For projects that need WiFi, and that can tolerate the random interrupts, I'll stick with ESP32.

GeorgeTirebiter
0 replies
19h14m

What is the process node used? Who is fabbing this for them? Given that the new chip is bigger, my guess is the same (old) process node is being used. RP2040 is manufactured on a 40nm process node.

Whoops, I read the fine print: RP2350 is manufactured on a 40nm process node.

294j59243j
0 replies
17h14m

But still USB-micro instead of USB-C. Raspberry Picos are literally the one and only reason why I still own any older USB cables.

1oooqooq
0 replies
7h3m

It overclocks insanely well

says the guy with engineering samples and crème-de-la-crème silicon parts... Once they're back to their normal schedule of scraping the literal bottom of the barrel to keep their always-empty stocks, I expect that won't be the case for most parts that are actually available.