AMD CEO Lisa Su reminisces about designing the PS3's infamous Cell processor

magicalhippo
42 replies
1d1h

Had a heterogeneous programming class, where we had to implement various things on the PS3, like a basic MPEG-ish encoder and such.

Was an interesting experience, though all I remember now is how it required careful exploitation of the vector units in the SPEs to get any decent performance out of it, and how annoying it was to synchronize between the SPEs and the PPE.

For each assignment the prof would include a benchmark, so we could compare the performance in class. Was a huge difference between the students that spent time optimizing and those who did a basic implementation.
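
To give a flavour of the kind of kernel this involved: the hot loop of a motion search in an MPEG-style encoder is a sum of absolute differences over 16x16 blocks. This is only an illustrative sketch (plain Rust, not code from the course); the 16-byte rows line up with the SPE's 128-bit SIMD registers, which is where the careful exploitation of the vector units came in.

    // Sum of absolute differences between a current and a reference 16x16
    // block of pixels, each flattened to 256 bytes. On the Cell you would
    // rewrite the inner loop with SPU SIMD intrinsics; here it is scalar.
    fn sad_16x16(cur: &[u8], refb: &[u8]) -> u32 {
        debug_assert_eq!(cur.len(), 256);
        debug_assert_eq!(refb.len(), 256);
        cur.chunks_exact(16)            // one row = 16 bytes = one SIMD register
            .zip(refb.chunks_exact(16))
            .map(|(a, b)| {
                a.iter()
                    .zip(b)
                    .map(|(&x, &y)| (x as i32 - y as i32).unsigned_abs())
                    .sum::<u32>()
            })
            .sum()
    }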

Waterluvian
29 replies
1d1h

I'm super curious what program/course was set up with developer/jailbroken PS3s to use as lab material. Was this specifically about game/console dev?

selykg
25 replies
1d1h

I believe the PS3 had an option to install Linux on it at some point. No? That would've made it a neat option for classes like this.

Waterluvian
18 replies
1d1h

Oh right... I think somewhere around 1st gen they were cool with that. But didn't they end up getting killed in the market because they had to sell it for less than Five Hundred and Ninety Nine U.S. Dollars, effectively at a loss, and people were making compute centres out of them?

I think a later gen locked it right down.

gamepsys
6 replies
1d1h

For the most part, the PS3-cluster supercomputer was a myth used to hype up how powerful the PS3 was. The amount of RAM per compute node was so low that it limited the types of workloads it was good at. Then server CPUs became more powerful while the PS3 processor stayed the same.

However, PS3 clusters did find niche success as supercomputers. There are a handful of notable examples. The fastest one ever built was by the US Air Force: it consisted of 1,760 PS3s and was the 33rd fastest supercomputer at the time. It was used for satellite image processing.

Arrath
2 replies
22h36m

I always thought it was a bit odd that we went from super-compute clusters built out of PS3 to the PS4/Xbone generation consoles being meh-grade laptop chips in a fancy case.

ethbr1
1 replies
22h18m

It's the performance vs programmability tradeoff.

The article references it.

As one goes back in console history, achieving visually stunning results for the time required exotic architectures + hand-tuned code.

Over time, state of the art PC + GPU hardware improved, to the point there wasn't a functional difference vs exotic architectures. Additionally, the cost of developing leading-edge custom architectures exploded.

Microsoft really pushed this with the original "PC in a box" XBox, but you saw the console OEMs outsource increasing parts of their chip solutions after that.

Why keep / buy entire chip design teams, when AMD will do it for you?

Especially if the performance difference is negligible.

sillywalk
0 replies
20h31m

The article also references that the cost of developing an AAA "HD" title is so high that it pretty much needs to be released on multiple platforms to be profitable.

frou_dh
1 replies
23h49m

When I was paying attention during the 90s and 2000s I remember this being the hype in magazines / on the web for pretty much EVERY upcoming console. Namely that it was so outrageously powerful that it might be considered a supercomputer in its own right. Needless to say, I was hyped up and obsessing about them myself.

throwaway48476
0 replies
1d

There was a time when supercomputers had no GPUs. Nowadays the top list is all GPU accelerators.

xattt
3 replies
1d1h

OtherOS was a way for Sony to work around higher tariffs. General computing devices (which OtherOS made the PS3 qualify as) fell under a lower tariff schedule than game consoles.

It was retroactively removed when people started getting close to enabling full GPU access. GPU access had been deliberately crippled under OtherOS to prevent games from being developed without the PS3 SDK.

pjmlp
0 replies
11h37m

As a PS2Linux owner, I think that was a reaction to the way the Linux community handled PS2Linux.

Instead of following in Yaroze's footsteps and being a way for a new generation of indies to learn how to create console games, the majority used PS2Linux for classical UNIX stuff and to play games on emulators.

Note that PS2Linux had GPU access, via PSGL and an easier high level API for the Emotion Engine.

So, no GPU access, no emulators.

musicale
0 replies
15h4m

It was retroactively removed when people started getting close to enabling full GPU access

It was retroactively removed when Sony started worrying about "security" concerns - aka people running unlicensed commercial games, which is usually the killer app for any firmware that supports customization and programming.

It was also advertised on the original PS3 box.

Ultimately there was a lawsuit and some PS3 owners got $10 apiece.

magicalhippo
0 replies
1d

Ah interesting. I do recall we never utilized the GPUs on them, just the Cell.

pclmulqdq
3 replies
1d1h

Reportedly the US government had no problem paying $599 per PS3 to build ML/AI supercomputers out of them.

brianleb
0 replies
1d

Probably referring to:

https://phys.org/news/2010-12-air-playstation-3s-supercomput...

>About the 33rd largest supercomputer in the world right now is the US Air Force Research Laboratory's (AFRL) newest system, which has a core made of 1,760 Sony PlayStation 3 (PS3) consoles. In addition to its large capacity, the so-called "Condor Cluster" is capable of performing 500 trillion floating point operations per second (TFLOPS), making it the fastest interactive computer in the entire US Defense Department.

>It will be used by Air Force centers across the country for tasks such as radar enhancement, pattern recognition, satellite imagery processing, and artificial intelligence research.

jimbobthrowawy
0 replies
15h13m

Each ps3 was cheaper than the per-CPU cost of what you'd get from IBM. I'm sure you'd make it up in power usage pretty quick though.

chaorace
2 replies
1d1h

Rumor has it that Sony axed OtherOS in response to Geohot using it to exploit & compromise the PS3 hypervisor.

MadnessASAP
1 replies
23h35m

If I recall correctly that was a stated fact. OtherOS was removed due to "security vulnerabilities".

scott_s
2 replies
1d1h

Yes. My research lab in grad school had a cluster of 24 PS3s. The back of that rack was hot.

latchkey
0 replies
11h58m

I built a cluster of 20,000 PS5 APUs. The problem was that they didn't have ECC memory, so it was pretty hard to repurpose them for any real work. I tried really hard to find some use cases, but unfortunately getting funding to run them was going to take longer than we had time. It all got shut down.

They did work great for mining ethereum though.

htrp
0 replies
1d1h

Please tell me you guys called it "SuperCELL"

ben7799
1 replies
23h19m

I was out of school for quite a bit by the time the PS3 came out but I remember installing linux on mine and being pretty excited to program on it.

I did a little bit, but it ended up being a super PITA because I didn't want to put my PS3 on my desk and give up playing games on it on my TV.

Because I wanted to keep it with my TV I had tried to use it with Bluetooth KB/Mouse so I could use it from the couch.

And well, typical Linux, you always end up spending time on the most mundane thing. Bluetooth on the PS3 in Linux at that time didn't seem to work right.

That to me is like peak Linux right there. At some point I had said to myself "I wanted to play with the cell and instead I'm working on Bluetooth" and I lost interest cause of course I had real work too. Then eventually Sony disabled the whole thing.

mjevans
0 replies
14h16m

It's really not 'peak Linux', it's just that many manufacturers refuse to even provide specifications for using their product (basically its API) without charging money and/or requiring an NDA etc.

This makes any third party trying to support them face nightmare after nightmare of unknowns, and a complex device like a computer of any sort is nearly always a complex web of interlinked mini-computers each doing their own thing. Like the damned Bluetooth chip, which, worse, probably has government-regulated transmission compliance stuff baked into its software rather than hardware. That adds a whole other nightmare of firmware blobs and driver support, most often seen with WiFi-adjacent projects such as OpenWrt (and its supported hardware).

It would be very nice if some law enforced a free 'edge technical specification' for all products, available to anyone who buys them or a product containing them. That would help create a level and fair market for competitors as well as the right and ability to repair for consumers / device owners. That sounds a lot like the effects desired by most (many?) libre software licenses, like the GPL (and LGPL), and Creative Commons.

pjmlp
0 replies
1d1h

Yes, it was called OtherOS.

stevenwoo
0 replies
18h20m

The first generation devkit PS3s cost something outrageous like 10,000 dollars and were the size of large pizza boxes IIRC (shrunk from three pizza boxes stacked on top of each other - eventually they shrunk the devkit down further), and they had some pretty stringent requirements to acquire. On the team I worked on, only the graphics developer got a PS3 devkit; the rest of us lowly programmers got PS3 test kits, which allowed us to deploy/debug but without the same level of support tools. I think it was the same situation with the Xbox 360, but the Microsoft developer tools were excellent even with just the test kits. I vaguely remember the cost of the test kits being double or triple the retail price of a regular console, and I had to send my test Xbox 360s in for warranty replacement quite regularly since they would red-ring.

mywittyname
0 replies
1d

Not OP, but we had PS3s at school as well. Early units ran linux. It was sandboxed to a degree in order to prevent it from being used for software piracy. OP's project was cool, ours was basically implementing PCA on one.

I got the impression that Sony encouraged this use of the console early on. They were probably aiming to establish themselves in the scientific computing niche. We also had nvidia-sponsored labs where they taught CUDA (or at least tried to - it was pretty difficult to do without support and most people took the class for access to computer labs equipped with amazing gaming machines). We all know nvidia won that war.

magicalhippo
0 replies
1d

They had Linux on them. Don't think I ever saw them, just had network access.

hmottestad
8 replies
21h28m

I really wanted to take that course. I had taken the OS course where you build an OS from scratch, so this seemed like the next thing to do. The OS course was for 20 points, but the heterogeneous programming course was only 10 points. People said it was way more work than 10 points though, and I ended up dropping it because I couldn't justify spending that much time on it when I had other courses to take and a thesis to write.

Here is the OS course: https://www.uio.no/studier/emner/matnat/ifi/IN4000/index-eng...

magicalhippo
3 replies
21h23m

Hah yea I quickly discovered that matnat[1] study points weren't like other study points. Felt like we had to do 2-5x the work compared to the students I knew going to other faculties. But in general the courses were very rewarding I feel.

I heard about the OS one, sounded super cool and also a lot of work from what I gathered. Heard about it too late and required a buddy. Oh well, can't have it all.

[1]: faculty of math and natural sciences

gravescale
1 replies
18h46m

Sounds like the UK too. All my school friends doing non-STEM courses would ask me how many "contact hours" we had per week.

I can't remember exactly how many, but once you included lectures, scheduled labs and tutorials, it was not that far from 9-5 (minus the standard university-wide Wednesday afternoon sport slot).

The friends would go quiet and say "oh, we have 12".

And that didn't even include the "non-contact" hours in the labs grinding the coursework until security shooed us out late at night so they could lock the building. Good times (in retrospect, especially).

schnitzelstoat
0 replies
8h57m

Yeah, I did Physics in the UK and it was the same.

Plus in one term there was some miscommunication between the professors and they set us one experiment every week instead of every two weeks. I remember literally working like 12-hour days on the weekends to get all the data analysis and reports done.

matsemann
0 replies
9h12m

We at NTNU used to complain/joke about how you at UiO even got 10 points for your classes, when we had a flat 7,5 sp for each class in Trondheim...

I say used to, but still do. Even 10 years later my girlfriend (who studied at UiO) and I still tease each other about who is the smartest or had the hardest courses, or she mocks my ring, heh.

DanielHB
3 replies
9h3m

Just out of curiosity what kind of hard projects did you do at the bachelor level? I had to build a unix shell, implemented some CPU scheduling algorithms (but didn't actually integrate it into a proper OS), did some B-tree stuff as well as implementing JPEG-like algorithms. Probably the one that took the most time was designing a CPU that worked in a real FPGA with its own instruction set and assembler (it was such a pain in the ass to troubleshoot, my crappy laptop took 30 min to compile the project and run simulations).

When I talk to people outside my home country it feels like most don't do these kinds of things at the bachelor level. But, like you, people who go on to a master's might then do one of these things and go much more in-depth than I did.

trollied
2 replies
8h25m

I designed a cut-down ARM core that ran on an FPGA. Then wrote a compiler to run my own code on it.

That's "full stack" development :)

Also wrote a BBC Micro emulator, which was great fun.

Rinzler89
0 replies
8h2m

Got any blog or write-up on that?

DanielHB
0 replies
6h58m

That sounds fun. I designed the instruction set for my CPU myself and I didn't get around to writing a compiler for it (I only wrote an assembler).

Since I designed the instruction set, I made some very quirky design decisions to make the test programs I was writing easier: instructions were 16-bit but only 6 bits were used for the opcode, so I had a lot of single-word instructions that did a lot of heavy lifting for my programs.

Did you really do that at the bachelor level? I feel like my uni was a bit abnormal on how many hardware classes and projects we had at a bachelor level for computer science.

skavi
2 replies
1d1h

that sounds like a super fun class. wonder if it’s still offered?

magicalhippo
0 replies
1d

Seems like they still do[1], though I doubt the PS3 is still in the mix.

Though from the published material for 2024 it seems not to have changed too much in principle, eg the codec63 I recall implementing parts for.

edit: oh and yes, one of the highlights of my degree.

[1]: https://www.uio.no/studier/emner/matnat/ifi/IN5050/index-eng...

crote
0 replies
23h0m

The Program Optimization course I followed did something similar with regular CPUs / GPUs. It started out as "don't do stupid things", but it quickly went into SoA vs AoS, then hand-writing AVX intrinsics, and eventually running OpenCL code on the GPU.
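
To make the SoA vs AoS point concrete, here's a minimal sketch (in Rust; the particle fields are made up, not from the course): summing one field over an array-of-structs drags every other field through the cache, while the struct-of-arrays layout walks a single contiguous array that the compiler can vectorize easily.

    // AoS: each element is a full record.
    struct ParticleAoS { x: f32, y: f32, z: f32, mass: f32 }

    // SoA: one contiguous array per field.
    struct ParticlesSoA { x: Vec<f32>, y: Vec<f32>, z: Vec<f32>, mass: Vec<f32> }

    // Strided access: loading x also pulls y, z and mass into the cache.
    fn total_x_aos(particles: &[ParticleAoS]) -> f32 {
        particles.iter().map(|p| p.x).sum()
    }

    // Contiguous access: friendly to the cache and to auto-vectorization.
    fn total_x_soa(particles: &ParticlesSoA) -> f32 {
        particles.x.iter().sum()
    }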

Part of your grade depended on the performance of your code, with a bonus for the fastest submission. It led to a very nice semi-competitive semi-collaborative atmosphere.

A4ET8a8uTh0
33 replies
1d1h

I have a weird problem with the article. It does start with a question over the 'infamous' qualifier, but it gets into something deeper that has been bothering me over the past few years.

Let's start from the end: game consoles these days are basically optimized PCs. They are effectively nothing like the first, second, or even third generation consoles that had fairly unique hardware, desirable exclusives and so on.

Cell may have been difficult to program well for, but it speaks volumes about the current state of consoles that devs worry about being able to be lazy with ports. The idiot consumer will buy whatever anyway. The side effect of the first generations' relatively unique hardware profiles is that games actually had to be optimized and tailored to the hardware. No lazy ports (I still get pissy sometimes over the PS3 Dragon Age port). Exclusives had to be good or at least showcase what the console could do.

So is that what makes Cell infamous? The thing made news when it could be run in a cluster[1].

I have a different take. Cell is a remnant of the era when consoles were still worth one's time.

[1] https://en.wikipedia.org/wiki/PlayStation_3_cluster

ProfessorLayton
15 replies
23h46m

While I too miss exotic hardware (N64 being my all-time favorite), ultimately what people want to play is great games and the hardware is just a means to an end.

The bigger issue with current consoles imo, specifically the PS5 and XB, is their input homogenization and stagnation. Consoles have historically been forward-looking via their controllers to enable new ways to play games:

[NOT including optional accessories, only out of the box abilities]

- N64: Analog controls, 4 controller ports, controller expansion ports

- PS1: Dual analog controls

- PS2: Pressure-sensitive buttons, built-in rumble

- Dreamcast: ...

- Xbox: ...

- Gamecube: ...

- Wii: Motion controls

- PS3: ...

- Wii U: Gamepad screen

- 360: ...

- PS4: [Correction!] Touchpad

- Switch: Detachable and sharable controllers

- XB1: Kinect

- PS5: ...

- XB1X: ...

I'm definitely not implying that every new input type was successful (see Wii U, Kinect), but consoles up until about PS3/360 had something new to offer regarding ways to experience games. This has all but stopped, with Nintendo being the only one to consistently introduce new ways to play that cannot easily be replicated on a PC. Plugging a PS5/Xbox controller into a PC yields the same or better experience as buying a dedicated console these days, so why buy the console at all?

DuckConference
6 replies
22h34m

PS5 controller replaced rumble with fancy haptics and force feedback on the triggers

ProfessorLayton
4 replies
22h13m

I thought about that, but at its core the PS5 triggers are just another form of feedback akin to the rumble, and didn't really introduce new types of gameplay.

Fancy haptics were introduced by the Switch.

MBCook
3 replies
21h32m

While the switch had higher resolution haptics, the PS5 is another advance past that. They do not feel the same.

ProfessorLayton
2 replies
21h6m

I'm not saying they feel exactly the same, but pointing out the fact that no new gameplay experiences are unlocked by said advancement. This is evidenced by the fact that games are ported between the PS5 and Xbox 1X with little to no gameplay changes between the two, the feel of triggers and rumble aside.

Before haptics became standard, game designers could not base any significant gameplay experiences on the presence of haptics. Early games like Ocarina of Time had neat optional side quests enabled by haptics, but these were in no way required because they couldn't rely on all customers buying an optional accessory.

The presence of the Switch's HD Rumble (And novel controllers in general) enabled new types of play on games like 1-2-Switch, and Nintendo's Labo series that use the rumble to control cardboard toys' movement.

The same cannot be said for the PS5's more refined rumble or triggers.

MBCook
1 replies
15h35m

Part of the problem is that since they’re only available on the PS5, only exclusive games tend to get real work put into it, if at all.

I do like the way they feel and I think they add to immersion. But honestly I’m not sure you can really enable new kinds of play with any kind of rumble past a basic hot/cold system of how strong it’s running.

While the switch is capable of something more is it really used anywhere besides those two games? A big part of the problem is I’m not sure you can feel it very well unless the joycons are detached. If you keep them attached to the system or use a different controller then you seem to lose the benefit.

ProfessorLayton
0 replies
14h52m

Part of the problem is that since they’re only available on the PS5, only exclusive games tend to get real work put into it, if at all.

This is what GP was arguing about regarding lazy ports on current hardware.

While the switch is capable of something more is it really used anywhere besides those two games?

The success of the feature is kind of orthogonal to its inclusion in the console though, since console makers won't know how successful it'll be until it's developed and released. I agree with you that it didn't really take off, but that's true of other things like button pressure sensitivity, or Kinect-style controls.

I'd also argue that HD rumble is but a small portion of a much larger and game-impacting package that is the Switch controller, while the PS5 controller is a refinement of the same controller they've had for decades now.

Playstation and Xbox have seemingly abandoned novel hardware that enables new gameplay experiences.

cubefox
0 replies
18h15m

The force feedback in the triggers was indeed new, but the "haptics" (I think it's basically a low frequency loudspeaker) were already present in the Switch controllers, and before that in iPhones under the name "Taptic engine". Unfortunately the Xbox controller still lacks these "haptics", as well as gyroscope and accelerometer, any force feedback etc.

numpad0
3 replies
11h34m

DC had the Visual Memory (a programmable tamagotchi as a memory card) and a modem; the 360 had a near gold standard ergonomic gamepad (also the first USB in a console)

ProfessorLayton
2 replies
11h9m

The VMU was a separately sold accessory and not an out-of-the-box capability devs could depend on. The 360 had an ergonomic controller, yes, but didn't offer gameplay experiences other contemporaries couldn't offer.

numpad0
1 replies
10h34m

DC and PS had no save function if that logic was followed; there was no dumb memory option for the DC at launch.

ProfessorLayton
0 replies
2h11m

The list was specific to controller input innovations, and not console capabilities as a whole, so memory cards fall outside of that.

The Dreamcast VMU ate multiple CR2032 batteries for lunch, and not everyone bought their console at launch. There were plenty of dumb memory options throughout its lifespan.

dfxm12
1 replies
23h30m

why buy the console at all?

Putting console exclusives aside, I can build a PC that does everything I need it to do except play some new games, plus a console, for less than it costs to build & maintain a "gaming rig". Also, my PS4 is still working after ~10yrs. I would have to upgrade my PC more frequently than that to keep up with new games.

You're also forgetting the two-point capacitive touch pad on the PS4 controller. It's OK, almost all the video game developers did too.

ProfessorLayton
0 replies
23h22m

It's OK, almost all the video game developers did too.

Haha, good catch, corrected!

pezezin
0 replies
17h18m

While I agree with you, I think that we have reached a local optimum that is difficult to improve on anymore. I mean, what else is missing from modern controllers? Unless we move to VR with direct neural control, I don't see a way forward.

MBCook
0 replies
21h33m

why buy the console at all?

It’s easier. I don’t have to fiddle with drivers. Or settings. Or choosing which card to upgrade to when. Or debugging mismatches. I don’t have to have the OS + Steam/Epic/whatever + games.

The games are written to what I have and I don’t need to worry about it. The OS is completely built around playing games and making it easy for me to acquire them and I don’t have to think about any of it.

I had the years of building gaming PCs and messing with all of that and doing all the research. And I had fun. And at this point in my life I just don’t wanna deal with it, I just wanna play a nice game.

MBCook
11 replies
1d1h

The article gives the answer.

It’s not about being lazy with ports. It’s that the asset budgets are so large that making games is way too expensive. For any normal company you basically _have_ to launch on multiple systems if you want a financial payback.

And every time graphics get better the problem gets worse. More models, more textures, more detail, more everything. $$$

Unless you have a monster hit (Zelda, Last of Us) you may be unable to make your money back on one machine. Even if you could, why not just make more money by adding in the other machines?

There’s a reason pretty much only 1st party studios do exclusives.

A4ET8a8uTh0
10 replies
1d1h

If there is one thing that Steam has proven, it is that games absolutely do not have to focus on 8k assets, celebrity likeness and voices. In other words, that is not really an answer. Or at least not a real answer. It is, at best, a part of it that glosses over why people play games.

dylan604
6 replies
1d

Not once ever have I purchased a game because of a character's voice and looks. In fact, most of the time, I'm skipping the "I wanted to be a director, instead I make games" cut scenes. Does spending money on that actually increase sales for games? I get having great looking assets, but voices?

somenameforme
1 replies
23h31m

I think it's largely about the marketing aspect. Imagine creating marketing for Crusader Kings 2, Mount and Blade, Terraria, and the countless other games which sold a zillion copies and are unbelievably fun, but just look pretty bland on the surface. With high end voice acting, lots of cut scenes, and so on - all of the marketing basically writes itself.

You have your actory voice intro the game, set the stage of the plot, show some brief segments of critical moments in cut scenes, and then occasionally intersperse a half second or so of actual gameplay. No idea why this gets some people fired up, but it seems to work reliably enough. I suppose it's just targeting a demographic that isn't you nor I!

MBCook
0 replies
21h50m

I don’t know that it’s that it makes marketing easier because the clips/etc already exist. I suspect they often just don’t know how to market it without shiny screenshots.

Indies don’t have this problem. But no big companies seem to be able to not spend a ton on graphics.

MBCook
0 replies
16h0m

That was the PS2 era. Both of those were large improvements for the series and a first for the series.

Today it’s just expected many games will have voice acting. It’s rare that there is something terribly special about it.

There are only two I can think of.

First Mass Effect having two voice actors for Shepard stood out. Usually you’d get only one or a silent protagonist.

The second is Disco Elysium: The Final Cut because they voiced _every_ line in the game no matter how small. And that game had an insane number of lines.

Pet_Ant
1 replies
1d

I’ve never understood people who enjoy the Diablo end-game but I love playing the games through for the story, vibe, and lore. The voice acting definitely contributes. The voice of Deckard Cain is instantly recognizable to me.

pezezin
0 replies
17h27m

"Hello my friend, stay awhile and listen!" has to be one of the more iconic phrases in gaming history...

tuna74
1 replies
21h23m

Games like The Last of Us have been really big hits. For certain types of games presentation really matters.

MBCook
0 replies
19h45m

Oh sure. There’s a place for it. But not every game needs to be that way. We mostly get AAA and now “AAAA” games.

We need to get back to AA and maybe even A games. Cheaper, faster to make, still fun.

MBCook
0 replies
23h36m

If there is one thing that Steam has proven, it is that games absolutely do not have to focus on 8k assets

Totally. Thomas Was Alone is one of the best games I’ve ever played and it’s just a bunch of boxes. It’s basically high resolution Atari.

Great graphics are nice and can certainly help, but the be all end all race to always look the best is not working well.

Some kind of reckoning has got to be coming. We’re getting to the point where it’s like 5+ years between sequels to big titles just cause that’s how long it takes. Much like mega-budget Hollywood movies, no one's willing to take a risk when it costs that much.

MisterTea
2 replies
1d

To me the infamy is owed to Sony's "supercomputer on a chip" hype marketing surrounding the architecture (similar to the hype around the Emotion Engine in the PS2), coupled with the difficulty of programming it. The big issue was that the SPEs had no direct access to main memory, so you had to copy data from RAM into each SPE's own local memory.
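
For anyone who hasn't seen that model, a minimal sketch of the local-store discipline (generic Rust, not Cell SDK code; the buffer size and kernel are made up): the worker never indexes main memory directly, it copies a block in, computes on the copy, and copies the result back, which is roughly what the explicit DMA transfers on the SPEs amounted to.

    // Stand-in for the SPE's small local store (the real one was 256 KiB).
    const LOCAL_WORDS: usize = 16 * 1024;

    // Hypothetical kernel that only ever touches the local buffer.
    fn kernel(block: &mut [f32]) {
        for x in block.iter_mut() {
            *x *= 0.5;
        }
    }

    fn process(main_memory: &mut [f32]) {
        let mut local = vec![0.0f32; LOCAL_WORDS];
        for block in main_memory.chunks_mut(LOCAL_WORDS) {
            let n = block.len();
            local[..n].copy_from_slice(block);   // "DMA in"
            kernel(&mut local[..n]);             // compute on the local copy
            block.copy_from_slice(&local[..n]);  // "DMA out"
        }
    }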

mrguyorama
1 replies
23h15m

They did a similar marketing push when the PS2 was developed, saying it couldn't be exported to certain countries due to dual-use restrictions because it was "too powerful"

pezezin
0 replies
17h13m

I remember that, and also how it was called the Emotion Engine because it would allow the creation of virtual characters with real emotions, which to my 15-year-old self sounded like a big load of BS.

pezezin
0 replies
17h34m

They are effectively nothing like the first, second, or even third generation consoles that had fairly unique hardware, desirable exclusives and so on.

I am not sure how you are counting them, but the second generation were the Atari 2600 and its peers, and the third generation were the NES and the Master System. Consoles had unique hardware until the 7th generation (PS3 and X360).

lomase
0 replies
10h20m

Calling developers lazy is the pinnacle of not understanding how project management works.

peutetre
28 replies
1d2h

The PlayStation 3 pretty much lost the generation handily to Nintendo's cheap, casual-friendly Wii and Microsoft's less powerful but easier Xbox 360.

No it didn't. The PlayStation 3 outsold the Xbox 360. Have a look at the 7th generation charts on VGChartz:

https://www.vgchartz.com/

It's true the Wii sold the most.

The 8th generation didn't go well for Nintendo. The Wii U was a flop. But they're back on top with the Switch.

EcommerceFlow
13 replies
1d2h

You're talking about an extreme end-of-cycle resurgence. For the vast majority of the lifecycle, the 360 outsold the PS3 and had better compatibility with multiplatform games.

nixass
7 replies
1d2h

PS3 ended up selling more than double what the Xbox sold "only because of an extreme end-of-cycle resurgence"? Cmon

It's true the devs have become more comfortable developing games as the console matured, but that's about it.

Xbox was always a flop compared to PS

app13
2 replies
1d2h

They're right. Toward the EOL the PS3 had a big resurgence as THE console of South America and Southeast Asia due to highly available used games, console jailbreaks etc. and of course FIFA.

kd913
1 replies
1d1h

Isn't that kind of the ideal cycle though? From what I recall, consoles operate on the Gillette model, i.e. the console is sold at a loss and games are the breadwinner.

In this case, it's better to reduce early sales, which are sold at a loss (production being expensive, significant R&D, scale being low), and maximise at the end of the lifecycle, when I assume they are maximising profit per console since scale is up and the R&D has already been covered.

crazygringo
0 replies
20h57m

Consoles haven't been sold at a loss for a long time. They're not the Gillette model at all, though a lot of people still think they are.

So, no -- it is never better to reduce early sales. You want to sell as many units as possible, as many games as possible, as many accessories as possible, as soon as possible.

(Also, as the same console becomes cheaper to manufacture over the years, the price tends to go down as well.)

merb
0 replies
1d2h

PS3 sold more because of Blu-ray support, but in the games department they were basically on par

causality0
0 replies
1d2h

It's not double, it's 87.4 million units vs 86 million units. And he's right about multiplatform games. I can't think of any games that ran better on the PS3 than the 360, and a lot that ran better on the 360 than the PS3.

busterarm
0 replies
1d2h

Where do you get "more than double" from? Are you looking at the current generation or the PS4/XONE generation?

For PS3 vs 360, the PS3 sold 87.40M units to the 360's 85.73M.

ErneX
0 replies
1d1h

PS3 did not sell double the 360, they sold almost the same amount of units.

peutetre
4 replies
1d2h

Nope. The Xbox 360 launched in November of 2005. The PlayStation 3 launched in November of 2006.

The PS3 started outselling the Xbox 360 for the year in 2007 and kept steadily gaining on the Xbox in total sales for the generation until it overtook it.

Xbox's one year head start kept it level with the PlayStation 3 for quite a while but it lost out in the end.

Look at the global yearly sales figures on VGChartz. From 2007 the PS3 outsold the Xbox 360 every year except for 2008:

https://www.vgchartz.com/yearly/

snakeyjake
3 replies
23h58m

I think a lot of people bought the PS3 as a bluray player.

The charts you link to show game sales that lag way behind the xbox. For example Skyrim sold 2.6 million in 2011 on the xbox and only a million on the ps3.

I only say this because I got my ps3 for free with the purchase of a Sony television at a Sony store (remember those?) in late 2009 and never purchased more than a bare handful of games which I almost never played.

After the cost-reduced redesign in 2009 Sony gave away a free ps3 with every television purchase in Sony stores for almost an entire year.

Skyrim was one of the games I bought and it was so buggy, even for Bethesda, on the ps3 due to ram limitations (I've read) that I purchased it again for pc.

Used my ps3 to watch a ton of blurays and Netflix, though. My ps3 was, and based on those yearly charts a lot of other peoples' ps3s were also, extremely powerful streaming boxes and I only retired mine when I upgraded to a 4k tv and got an appletv 4k.

peutetre
1 replies
19h37m

For example Skyrim sold 2.6 million in 2011 on the xbox and only a million on the ps3.

You have to consider total software sales, not individual titles. And for the PS3 you have to consider its impact on blu-ray movie sales (which aren't in those charts).

Blu-ray also launched in 2006 and the PS3 was one of the most useful players available because it could do more than just play movies.

lomase
0 replies
10h12m

I only saw the blu-ray support as being a console seller on forums where most people are from America.

I think it wasn't a big deal in the rest of the world. Then Microsoft's console focused on "doing more than just play" and flopped worldwide.

tuna74
0 replies
21h27m

"The charts you link to show game sales that lag way behind the xbox. For example Skyrim sold 2.6 million in 2011 on the xbox and only a million on the ps3."

As you write later, Skyrim sucked on PS3 (as did other multiplatform games like Bayonetta). It is not strange that it sold much less since it was a much worse game on the PS3.

bee_rider
11 replies
1d1h

Nintendo is practically in a different market anyway. People who want a Nintendo console are mostly looking for Nintendo’s first-party characters. Lumping them in with Xbox and PlayStation is like combining Barbie and GI Joe under “dolls.” Technically correct but not very descriptive.

Snark: Nintendo’s market is kids, people with kids, and people who were previously kids. Xbox and PlayStation are fighting over the market of people who’d be better served by a PC but don’t want to deal with all that. /Snark

Clamchop
4 replies
1d

I don't disagree that Nintendo appeals to a market that the others don't, and vice versa, but I'd be surprised if there wasn't a significant number of people who will reliably buy a game console, but won't buy two consoles.

That is to say, I still think there's significant competition for sales.

And if Nintendo didn't exist, it would be easier for Sony and Microsoft to take in the market for cute, whimsical, child-friendly, instead of having to distinguish themselves by being the more serious or more powerful option.

And handhelds; Nintendo has more or less stolen that entire segment, Steam Deck still being relatively niche and PS Portal being a different kind of thing. Sony's handheld ambitions are moribund and not for lack of trying.

vel0city
2 replies
1d

The target market for Xbox/PlayStation are the kind of gamers that play games like Call of Duty, Gran Turismo/Forza, Elden Ring/Dark Souls/The Witcher, or FIFA/Madden/NBA2k.

The target market for the Switch are people who play Animal Crossing, Super Mario Galaxy, and Zelda games.

There's some amount of crossover there, but there's also a large amount of separation in those markets. I know lots of people who play a ton of Animal Crossing and Zelda but would never want to play something like Borderlands or CoD or Darksouls.

Clamchop
1 replies
23h2m

Don't disagree with that, but I disagree that the conclusion to be made from it is that there's no significant competition.

Put another way, if Nintendo had a Sega-like market failure, then Sony and Microsoft would be the heirs apparent to the kingdom of cute. I'm sure they'd savor the opportunity. Do you think they wouldn't take these customers right now if they could?

And again, there's going to be a class of customers that could go either way but won't buy into two or more consoles, because they're expensive. They are directly competing for them.

vel0city
0 replies
22h26m

I'm sure they'd savor the opportunity, but I don't know if they'd really be incredibly successful at picking up most of those Nintendo-only gamers if Nintendo just disappeared. I imagine a good chunk of those people who bought a Switch almost exclusively for Animal Crossing would end up just becoming a mobile phone/tablet gamer instead of making the jump to Xbox/PlayStation.

But hey, the future is unknown, and my crystal ball is on the charger at the moment.

And there's gamers like myself, who mostly play games on PC but still have a Switch because while I can play a lot of those console games on my PC there's no official way to play Zelda or Mario or the wide range of Nintendo first-party titles on PC. I think if Nintendo had a SEGA level implosion and started only being a publisher, whichever platforms Nintendo publishes for would become the heir to those consumers. I know that would be the case for me.

Assuming Nintendo steps away from the hardware biz, would they publish games exclusively to one console? Would they just start making mobile games? Would they sell on all consoles and PCs? ¯\_(ツ)_/¯

lmm
0 replies
19h11m

I'd be surprised if there wasn't a significant number of people who will reliably buy a game console, but won't buy two consoles.

As, what, an ornament? People buy games consoles to play games on (or occasionally for movies or some such), and I think very few of the games that people are buying a Switch to play are even available on Xbox/PS, nor vice versa (or at least, the Switch ports of the games that people buy Xbox/PS to play are not on a level where those people would buy a Switch to play them).

SahAssar
3 replies
1d

kids, people with kids, and people who were previously kids

Who is not included in that? Toddlers?

TillE
1 replies
1d

Implicitly, people who are too old (or too European) to have grown up with an NES.

But yeah, I 100% agree that the overwhelming draw of Nintendo consoles is their exclusives, so it's mostly orthogonal to Xbox/PS/PC. If you want to play Mario or Zelda or Pokemon or Fire Emblem, you've got one choice. The common idea that the Steam Deck is competing with the Switch is almost entirely wrong.

SahAssar
0 replies
1d

The comment doesn't mention anything about the "previously kids" growing up with nintendo.

bee_rider
0 replies
1d

It includes everybody, that’s why they sell so many consoles!

r00fus
1 replies
1d

and people who were previously kids.

That's a pretty damn large market (= all adults). I think you meant to say "people who were previously kids playing Nintendo"

bee_rider
0 replies
23h59m

I’d hoped given the snark tags that that bit would be taken as a bit tongue in cheek.

tokai
1 replies
1d1h

It's wild that we are almost 20 years on, and console-war posting about which is bigger/best is still going on.

Rinzler89
0 replies
1d

Hey as long as the PC is still king, let them fight over the consoles :D

buildbot
28 replies
1d2h

Oh wow, I did not know Lisa Su had a hand in designing Cell - that’s really cool. In some ways the Cell architecture was very ahead of its time - if you look at it a bit funny, the PPE/SPE split maps to something like today's Grace/Hopper - a big branchy core to orchestrate smaller but more numerous vector cores.

Made designing/porting games a massive headache though because nothing else was like it.

Rinzler89
27 replies
1d2h

Correction: Cell was ahead of its time when they started drafting the concept for it in mid-2000, when the most powerful consumer CPU was the Pentium 3. But by the time it launched on the market in 2005, x86-64 was the new undisputed king of CPU performance (with ARM in the mobile, lower-power sectors) and ATI and Nvidia were leading the GPU race, leaving the Cell processor as an overpriced paperweight that was a jack of all trades, master of none, and difficult to program to boot, and the new market had no use for such a thing.

I think neither Sony nor anyone else expected the consumer CPU and GPU industry to evolve so quickly in that time, otherwise they would not have spent hundreds of millions developing a new architecture from scratch. Intel also made the same mistake back then when they tried to reinvent the industry with the new Itanium CPU architecture. Hindsight is 20/20.

Those years were indeed wild for tech progress. You bought a new CPU, suddenly two years later you have 64 bit CPUs as the new thing. You then bought a 64 bit CPU to be like the cool kids and suddenly dual core CPUs are all the rage.

scrlk
12 replies
1d2h

IIRC, the original PS3 design was rumoured to have dual Cell processors. It didn't meet performance targets, so the Nvidia GPU was wedged in quite late on in the development cycle.

Rinzler89
11 replies
1d1h

I think the story was a bit different. Sony had envisioned the final Cell chip being more powerful than what they launched (4 PPEs and 32 SPEs @ 4 GHz versus 1 PPE and 8 SPEs @ 3.2 GHz), and they thought that would be enough to render graphics with, like they did on the PS2. They only realized later in development that it wouldn't be enough, so they went to Nvidia and asked for a discrete GPU.

jandrese
7 replies
1d

Also, I think game developers balked at having to implement their own GPU in a hard-to-program unique architecture.

I wonder how many PS3 games ignored the Cell processor and just developed for the CPU and GPU?

ozarker
5 replies
22h34m

I think I heard the games Valve published for PS3 didn’t really use the cell stuff

monocasa
2 replies
21h40m

They absolutely did. The PPE was far too anemic to run anything like Half-Life on its own.

Gabe Newell famously sounded off about how useless it was and wouldn't transfer to any other hardware, but he ended up being wrong. The architectural model forced on you by the SPEs is the model that the industry (not just games but any compute intensive work) has embraced for the heavy multicore but coherent system world.

Rather than the "thread per vague activity" model, we've embraced a one thread per core with a work stealing scheduler walking a DAG of chunked compute tasks.

For a modern view of this, this is exactly how Rust's rayon library works.
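
A minimal sketch of that model using rayon (the function name and the kernel are just illustrative; add the rayon crate to run it): the work is expressed as chunked data-parallel tasks, and a fixed one-thread-per-core pool steals them as workers go idle, instead of dedicating a thread to "physics", "audio", and so on.

    use rayon::prelude::*;

    fn sum_of_squares(samples: &[f32]) -> f32 {
        samples
            .par_chunks(4096)                              // chunked compute tasks
            .map(|c| c.iter().map(|x| x * x).sum::<f32>()) // runs on whichever worker steals it
            .sum()                                         // results joined at the end
    }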

The whole thing reminds me of a lot of the complaints about the N64, a lot of which came down to the fact that DRAM was no longer a single cycle away like it was for the SNES. Yes, cache-conscious code is more difficult to write, but that memory hierarchy was more a harbinger of the new world than a one-off weirdness of a single console.

deergomoo
0 replies
17h54m

The PPE was far too anemic to run anything like Half-Life on its own

To be fair the PS3 port of the Orange Box runs like absolute crap in many instances. Although I believe it was developed by EA, not Valve.

daemin
0 replies
3h44m

Some games are built the way you've described with job threads and queues, but regrettably most are built with a main thread, a render thread, and sometimes use other threads for some compute.

jamesfinlayson
0 replies
12h46m

I think I heard the games Valve published for PS3 didn’t really use the cell stuff

Maybe - the Orange Box at least was outsourced to EA and the performance was reportedly not great. I think Portal 2 was done in-house though and I don't remember anyone complaining about it.

MBCook
0 replies
21h45m

A lot of early games were something like that. They’d mostly ignore it, or maybe use one SPU, or shove tasks onto them that really didn’t take advantage of their power at all. Just leaving a lot of performance untouched.

monocasa
0 replies
23h36m

Also, I think game developers balked at having to implement their own GPU in a hard-to-program unique architecture.

It would have been a library.

I wonder how many PS3 games ignored the Cell processor and just developed for the CPU and GPU?

Essentially none.

fredoralive
1 replies
23h7m

AIUI it's basically that the plan was similar to the PS2, where you have a "smart" CPU (Emotion Engine / Cell) and a "dumb" rasteriser (Graphic Synthesiser on PS2), with the Cell SPEs taking the role of the vector units on the EE for things like transform + lighting. But the rasteriser chip project failed.

Then they tried the 2 Cells and software rendering approach, to try and keep it as an "in house" solution.

Finally they went to Nvidia and we got the final Cell + RSX solution.

hurrdurr57
0 replies
51m

This is what I've heard as well. They started with one Cell; when it failed internal expectations they switched to two Cells, and then when they heard the rumors of what Microsoft was doing with the Xbox 360 they ran to Nvidia to get a real GPU.

sillywalk
0 replies
1d

There's an interesting book called The Race for a New Game Machine: Creating the Chips Inside the XBox 360 and the Playstation 3 by David Shippy and Mickie Phipps on the development of the Cell processor.

I can't recall if it had all the details regarding adding a GPU to the PS3, but I remember it had stories about the awkwardness of having the IBM Cell team also working on Microsoft's Xenon CPU for the 360, which IIRC used modified Cell PPE cores.

foobarian
6 replies
22h23m

Those years were amazing because every new console generation had a wildly different artisanal design. NES to SNES to N64 to Switch, Jaguar, Sega, PS to PS2 to PS3, Xbox... and then at some point they all turned into disguised PCs.

wk_end
3 replies
18h37m

Hey now, Nintendo’s bucked that trend - the Switch is a disguised tablet instead.

s1gsegv
2 replies
17h39m

I like to think of it as a disguised original Tesla Model S

fragmede
1 replies
15h6m

not sure which of those that's more of an insult to

lagadu
0 replies
8h59m

Oh you mean the disguised nvidia shield?

dtech
1 replies
21h44m

Up until the early 00's specially designed components could get more performance/$.

Now it's impossible to beat commodity SoCs on that front.

0xcde4c3db
0 replies
21h2m

The arcade industry also went through a couple waves of that, with one major shift to console-based hardware in the mid-90s and another to PC-based hardware in the mid-2000s. Namco made a PS3-based system board (System 357, notably the original platform for Tekken 6), but I'm not sure if anyone else did.

monocasa
5 replies
23h38m

Eh, the Cell would have been a fantastic processor if it weren't for hitting the end of Dennard scaling like a brick wall.

In the design phase they were expecting almost 5 GHz at launch, and to hit around 10 GHz at the end of the product lifecycle. When the process guys came back later to let the architects know the reality we've sort of now all internalized (high-end consumer electronics can expect ~3 GHz with good cooling solutions), a lot of simple but fast pipeline designs stopped making sense. These are the exact same pressures that killed off the NetBurst architecture too, making Intel go back to a PIII design and make it smarter rather than clock markedly faster.

wmf
4 replies
22h23m

Nah, SPEs are a bad idea at any frequency.

monocasa
3 replies
21h39m

What specifically about them?

wmf
2 replies
21h19m

Mostly not having caches.

corysama
0 replies
16h58m

The SPEs only had cache! (And, a ton of registers.) What they were missing was RAM ;D

But, even that wasn’t as bad as it was made out to be. People rightly moan about the awkwardness of asynchronously moving data between main RAM and the SPE memory. What they don’t often mention is that the latency of those moves was about 500 cycles — the same latency as a cache miss on the PPE CPUs!

So, which was worse: implicitly waiting 500 cycles all over the place? Or, explicitly scheduling 500 cycle waits at specific points? Unsurprisingly, everyone preferred the first option :P
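
In ordinary code that tradeoff looks roughly like the sketch below (generic Rust, not SPE code; the channel and fetch thread stand in for the SPE's asynchronous DMA engine): you explicitly kick off the transfer of the next block so it overlaps with the compute on the current one, which is the double-buffering pattern people used to hide those ~500-cycle waits.

    use std::sync::mpsc;
    use std::thread;

    fn process_overlapped(data: &[f32]) -> f32 {
        const CHUNK: usize = 4096;
        // Bounded channel: at most one fetched-ahead buffer in flight.
        let (tx, rx) = mpsc::sync_channel::<Vec<f32>>(1);

        thread::scope(|s| {
            // "DMA" thread: copies the next chunk while compute runs.
            s.spawn(move || {
                for chunk in data.chunks(CHUNK) {
                    tx.send(chunk.to_vec()).unwrap();
                }
            });

            // Compute loop: works on one buffer while the next one is filled.
            let mut acc = 0.0f32;
            for local in rx {
                acc += local.iter().map(|x| x * x).sum::<f32>();
            }
            acc
        })
    }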

bobmcnamara
0 replies
20h30m

The TCMs were cool. But unless your work fits that...

h0l0cube
0 replies
6h53m

but by the time it launched on the market in 2005, X86_64 was the new undisputed king of CPU performance

PS3 Cell could do about 154 GFLOPS across its 7 cores (6 SPUs and 1 PPU, excluding the GPU), while Xeons were in the low double digits. What really was awkward about the Cell architecture was the DSP-style nature of it, which works great for something like shaders or audio processing with a clear pipeline, but not for general purpose computation. It was really a successor to the PS2 in this regard, where data flowed between specialized and memory-constrained processors using DMA.
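
(For reference, that ~154 GFLOPS figure is presumably the usual single-precision peak for the SPUs alone: 6 SPUs × 4 SIMD lanes × 2 FLOPs per cycle (fused multiply-add) × 3.2 GHz = 153.6 GFLOPS, with the PPU's VMX unit on top of that.)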

Sony nor anyone else expected the consumer CPU and GPU industry to evolve so quickly in that time

PS3 used an Nvidia GPU. There was some clunkiness around sharing memory between the Cell and the GPU compared to the UMA on the PS4, but not especially so compared to a graphics card in a desktop machine.

Waterluvian
15 replies
1d1h

I'm curious about how such a bad idea made it to market. Not that the technical concept is bad, but the business decision to build an entire console generation on such a unique, ultimately migraine-inducing architecture.

MBCook
7 replies
1d1h

Remember Cell was “the future”. It was going to go into the PS3, but also TVs and computers and whatever else. If you squint it’s a bit like Apple putting a ton of investment in the A series chips that then went into iPads and Macs (via the M series).

Problem is: no one wanted it. So none of that panned out. If it had gone on to be something like Intel’s Core 2 Duo that showed up everywhere in hindsight it would look smart.

initplus
5 replies
1d1h

Difference is that Apple invested heavily in backwards compatibility: even old x86 code performs well on their chips. Meanwhile, Cell requires a reworking of your entire program to take advantage of it.

MBCook
4 replies
23h38m

That was common for consoles at that time. The PS2 couldn’t actually run PS1 games; it just fell back to a PS1 embedded in the system. Same way the GBA played GB/GBC games.

Apple wasn’t a great example but it was the best I could think of. It took Apple 10 years to go from the A4 in the first iPhone with an Apple chip to the M1 in the first Mac. They also were just using Arm, the same instructions that they had been using before and something well understood.

They didn’t suddenly release something people weren’t really prepared for, like a Transputer, and just declare “this will be everywhere within three years”. They let the switch take its time as necessary.

Sony had the arrogance to do both of those.

bigstrat2003
1 replies
22h50m

That was common for consoles at that time. The PS2 couldn’t actually run PS1 games, it just fell back to a PS1 embedded in the system.

Also, the PS3 did the same for PS2. Sony eventually did away with that to cut costs, but the launch models had it.

Waterluvian
0 replies
22h25m

The software emulator was terrible, sadly. A whole collection of games didn’t work well.

astrange
1 replies
21h36m

They also were just using Arm, the same instructions that they had been using before and something well understood.

Apple switched to (and largely invented) ARMv8/AArch64 in the middle of that, which is very different from the ARMv7 they started with.

MBCook
0 replies
3h58m

True. What I meant was when the A4 came out it could still run the code that was already in the App Store. Ignoring all other hardware, the Cell couldn’t natively run PS2 code.

duxup
0 replies
15h1m

I remember promotional materials showing it would be in refrigerators. I can’t imagine what they thought that refrigerator would do with it….

0x457
4 replies
1d1h

I don't think Sony expected the Xbox 360 so soon. PS2 sales were still good and there was no console on the market better than the PS2 at that time.

Then out of nowhere the Xbox 360 comes out, and Halo, which had been a macOS title, was suddenly a main, well-selling IP on Xbox. Not just that, it also picked HD-DVD as its format, posing a threat to Blu-ray.

Sony had to put something out.

cmpxchg8b
1 replies
1d

Halo first came out on the original Xbox, not sure it ties into the PS3/360 calculus other than being a launch title.

vel0city
0 replies
1d

There wasn't even a Halo game as a launch title, but there was backwards support for the original Xbox Halo games on the 360.

https://en.wikipedia.org/wiki/Xbox_360_launch#Titles

There wouldn't be a Halo game released for the Xbox 360 until September 2007, nearly two years after the release of the console.

vel0city
0 replies
1d

Halo came out in 2001, four years before the Xbox 360 was released.

The Xbox 360 released its HD-DVD drive a few days before the PS3 hit store shelves in North America. Sony had established the PS3 would use Blu-Rays a year before the Xbox 360 was even publicly announced.

I think there were a number of things Sony wasn't expecting about the Xbox 360, but HD-DVD or Halo wouldn't have been any of those things.

Talanes
0 replies
1d

The 360 didn't pick HD-DVD as its format; the built-in drive was just regular DVD. The HD-DVD drive was an external add-on that they sold along with the 360 Media Remote.

initplus
0 replies
1d1h

It’s less a confusing one-off decision to use a weird architecture, and more a failure to reconsider the status quo and stop using weird architectures.

Previous PlayStation hardware was just as odd. The PS2 reused old surplus PS1 processors just for audio, and had coprocessors just like Cell.

failuser
0 replies
1d

You forget that it was normal before. The PS2 had many separate chips you needed to program, balancing throughput and latencies to make the most of the hardware. The PS2 was able to achieve 60 fps in many impressive games. Console hardware was very specific; the original Xbox, being just an Intel PC, was an exception.

mrkramer
11 replies
21h16m

The interviewer points out that Sony's PlayStation 3 is viewed as one of its least successful consoles, which is true. The PlayStation 3 pretty much lost the generation handily to Nintendo's cheap, casual-friendly Wii and Microsoft's less powerful but easier Xbox 360.

In Europe, PS3 was widely popular, especially in conjunction with Call of Duty, because Xbox never really gained ground in the EU; at least that was my impression growing up as an EU gamer. Wii was also popular af at the time, but PS3 and Wii buyers were a different audience: Wii was a more casual type of gaming and suitable for family gaming, while PS3 was for more serious gamers who played more hardcorish titles like COD, FIFA and PES. And ofc there was and is the PC master race for real hardcore gamers.

kimixa
3 replies
20h47m

Yes, for much of its lifetime it was behind the 360 in worldwide sales, and might have been a disappointment for Sony, especially after the PS2 domination, but I wouldn't say it "Lost Handily" to the Xbox 360. It may have taken time, but it did outsell it.

That seems like incorrectly strong wording by the interviewer.

And people seem to sleep on how powerful the xbox360 was - the CPU was a fair bit smaller than the cell, true (~150m transistors vs 250m in the cell), but not really the multiples some articles seem to imply.

And the GPU in the xbox was arguably a fair bit more advanced - being a little larger (~300m vs ~335m transistors), and being a unified shader architecture that came to dominate GPUs from then on. And the more flexible architecture enabled more GPGPU-like tasks that the cell SPUs might have handled on the PS3. And these early GPGPU models were also a bitch to program for - like the cell SPUs were.

So it's not like one was simple and easy, the other crazy complex and advanced, but more one had the fancy experimental stuff on the CPU/Cell die, the other on the GPU die. Just the techniques used on the GPU continued to be developed and supplanted much of the Cell-style tech.

sillywalk
1 replies
18h49m

how powerful the xbox360 was

Interestingly, the Xbox 360 (Xenon) CPU cores were based on the Power Processing Element (not the Synergistic Processing Elements) from the Cell processor.

From [0]:

"When the companies entered into their partnership in 2001, Sony, Toshiba and IBM committed themselves to spending $400 million over five years to design the Cell, not counting the millions of dollars it would take to build two production facilities for making the chip itself. IBM provided the bulk of the manpower, with the design team headquartered at its Austin, Texas, offices. Sony and Toshiba sent teams of engineers to Austin to live and work with their partners in an effort to have the Cell ready for the Playstation 3's target launch, Christmas 2005.

But a funny thing happened along the way: A new "partner" entered the picture. In late 2002, Microsoft approached IBM about making the chip for Microsoft's rival game console, the (as yet unnamed) Xbox 360. In 2003, IBM's Adam Bennett showed Microsoft specs for the still-in-development Cell core. Microsoft was interested and contracted with IBM for their own chip, to be built around the core that IBM was still building with Sony.

All three of the original partners had agreed that IBM would eventually sell the Cell to other clients. But it does not seem to have occurred to Sony that IBM would sell key parts of the Cell before it was complete and to Sony's primary video-game console competitor. The result was that Sony's R&D money was spent creating a component for Microsoft to use against it."

( From the WSJ )

[0] https://web.archive.org/web/20150129121858/http://www.wsj.co...

kimixa
0 replies
1h37m

Sure, but the PPE was "just" an iteration on their existing PowerPC core line - that wasn't really the focus of the "unique" Cell changes in the first place.

If they had just asked for a 3-core SMP version of a PowerPC and the Cell project never existed, I'd suggest it would likely still have looked very similar to what was released in the Xbox 360. VMX already existed, and POWER5 already had SMT.

I don't see there being a huge amount of new technology made for the Cell's PPE for the Xbox project to import in the first place.

mrkramer
0 replies
19h49m

Yes, for much of its lifetime it was behind the 360 in worldwide sales, and might have been a disappointment for Sony, especially after the PS2's domination, but I wouldn't say it "Lost Handily" to the Xbox 360. It may have taken time, but it did outsell it.

As far as I understood, Halo was the killer app for the Xbox, but I suppose it was the killer app only in the US. Like I said, from my EU gamer's point of view, Call of Duty was all the rage, first on PC and then on PlayStation, and FIFA and PES were huge on PlayStation as well.

BolexNOLA
3 replies
21h7m

That may be true, but overall the PS3 was the loser at the time, and it took until basically the end of that console generation - after many price cuts and years of building a strong catalogue - for it to (narrowly) outsell the 360 worldwide.

Considering the absolute dominance of the PSX and PS2, it was definitely not what they were hoping for. Not quite as dramatic as Nintendo's market loss going from the SNES to the N64, but I'm sure Sony had a lot of similar conversations. You also have to consider that the Xbox was the distant third-place console in the previous generation, so barely losing to Sony at the last minute was definitely considered a huge success, especially after the disaster that was the RROD. If that hadn't happened, Microsoft likely would have beaten Sony's sales.

mrkramer
2 replies
20h49m

My last gaming console was the PS2, and I loved it because gaming consoles are meant for "couch gaming" on a big TV screen. As one gaming friend of mine told me: "When you buy a gaming console, you are set for the next 5 years." Moore's law kinda kills the fun for us PC gamers, but I still prefer gaming on PC to gaming on consoles.

somat
0 replies
19h28m

When you buy a gaming console you can play that console's current generation of games.

When you buy a gaming PC you can play all prior generations of PC games.

* Yeah, yeah, I know this is oversimplified and not really all that true. But you have to understand I am the type who enjoyed the fight almost more than the game. The fight? You know, first you fight the PC a bit, and if you win you get to play the game. Really, if you just want to sit down and game, get a console.

BolexNOLA
0 replies
19h56m

There’s something really nice about consoles for me still. I love just dropping onto my couch, pressing a button on my controller to start the console, pressing one more button to launch a game, and being exactly where I left off last time with no load times. Takes me about seven seconds to get back in. As somebody with 2 kids, it makes gaming more feasible haha

bowsamic
1 replies
12h56m

Maybe that’s a mainland thing. In the UK the Xbox 360 was definitely far more popular

lomase
0 replies
10h44m

In Spain too. The PS3 released with very few games and all multiplatform games ran better on the 360.

tekla
0 replies
21h3m

You must not have been there for the nogaems memes.

noobermin
6 replies
23h26m

The death of PowerPC is a bit sad to me, as it feels like the lineage of the 6502 has really ended now. As much as everyone hates on x86, I fear one day it too will disappear and everything will be on some ARM ISA.

theandrewbailey
2 replies
21h40m

PowerPC's sort of descendant/predecessor POWER is still developed and sold. Raptor Computing seems to be the only company selling POWER based systems to the general public.

fb03
1 replies
16h23m

I wonder what the appeal of these systems is today. For what need would one choose a POWER system like this one (which I think is pretty cool) over, say, a more standard x86 or ARM server?


wk_end
1 replies
18h4m

What's the connection you had in mind between the 6502 and the PowerPC? As far as I know they're genealogically unrelated.

The 6502 was descended from the 6800 - one of a few ur-microprocessors - and somewhat clumsily evolved into the 65816 before its brilliant but highly specialized design (hard to target for HLLs and which fundamentally assumed memory could be accessed roughly as fast as the CPU was running) couldn't keep up with the times. As far as I know that's pretty much it for its branch of the CPU family tree.

Meanwhile - and I admittedly know less about this - as I understand it the PowerPC is ultimately descended from the IBM 801, which was a high-end workstation and server chip. There might've been some influence from Motorola, and Motorola made the 6800 - the 6502's father - so in that sense the 6502 and the PowerPC might be very distant cousins. Still, that's pretty tenuous.

My understanding is that the original ARM team was directly inspired by the 6502 - so if its legacy lives on anywhere, maybe it's there.

kalleboo
0 replies
15h14m

IIRC PowerPC was IBM's core melded with Motorola's bus design, so that Apple could easily integrate it into their existing board designs (and make upgrade cards)

kalleboo
0 replies
15h13m

I didn’t have high hopes for RISC-V outside of the cheap low-power chip space, but watching ARM try to destroy Qualcomm's new fast chips makes me believe we may see a big ARM/RISC-V showdown in the future. The instruction set wars aren't over yet.

dehrmann
6 replies
1d2h

but SMT wouldn't emerge until 2002

I think they're counting P4 hyperthreading, but I remember dual-CPU Pentium II boards.

trynumber9
2 replies
1d

I am not following. The page says it was investigated but not done.

jylam
0 replies
8h47m

"Although the designs were never finished and no models ever went into production", so it was researched but not done..?

wtallis
0 replies
1d2h

SMT isn't SMP.

adrian_b
0 replies
23h31m

SMT (including that branded as Intel HTT) is something very different from multiprocessors, like the dual-CPU boards.

The concept of multiprocessors was already well understood around 1963, i.e. the multiplication of the performance of a computer by the multiplication of the CPU hardware.

On the other hand, the purpose of SMT is to enhance the throughput of a computer by ensuring that the existing execution units do not stay idle, by executing instructions from multiple program threads in order to keep them busy.

The modern use of SMT has been started by a research paper published in 1992, as a proposed improvement to the superscalar out-of-order CPUs already popularized by IBM POWER, but the name SMT (simultaneous multi-threading) has been coined only later, in 1997 (in the IEEE Micro from September/October 1997).

IBM had already used SMT in the experimental Advanced Computer System in 1968 (and they have filed for SMT patents in 1971: US 3,728,692 & US 3,771,138), but that project has been canceled and its superior architecture has been mostly forgotten for several decades.

CPUs that could execute instructions from multiple program threads, but not during the same clock cycle, like in SMT (hence the "simultaneous" of SMT), are older than the IBM ACS, for instance such a processor with fine-grained multi-threading was the peripheral processor of CDC 6600 (in 1963). SMT can exist only in superscalar CPUs, because only there multiple instructions are initiated in the same clock cycle.

tithe
5 replies
1d2h

Peter Hofstee (one of the chief architects of Cell) gave a talk at UT Austin around 2008.

At the time, UT was building TRIPS[0], a processor targeting > 1 teraflop.

In the middle of his talk (with several TRIPS members present), he made an off-hand remark, "I think I could build a petaflop processor...without too much trouble,"* which caused a small stir in the audience!

[0] https://www.cs.utexas.edu/users/cart/trips/

* I might be misremembering the exact quote, but this was the gist of it.

lhl
3 replies
1d2h

Perhaps relevant to the story, he was also on the team that delivered the first petaflop supercomputer (the most powerful supercomputer at the time) around that same time frame (it was around 24K processors):

Roadrunner is a 1.38 Pflop/s-peak (double precision) hybrid-architecture supercomputer developed by LANL and IBM. It contains 12,240 IBM PowerXCell 8i processors and 12,240 AMD Opteron cores in 3,060 compute nodes. Roadrunner is the first supercomputer to run Linpack at a sustained speed in excess of 1 Pflop/s.

https://dl.acm.org/doi/10.5555/1413370.1413372

https://en.wikipedia.org/wiki/Roadrunner_(supercomputer)

bbatha
2 replies
1d1h

What a pain in the ass machine that was to write code for. You had 3 different processors to manage, with 3 different architectures, and all of the challenges the game dev community had with keeping the SPEs hot. But the SPEs were not enough to solely rely on for number-crunching performance, unlike today's GPU compute machines, so for optimal performance you also needed to do number crunching on the Opterons and the main Cell core, unlike with a GPU where the CPU is mostly just keeping the GPU memory full. Then, to make matters worse, the Cell had a different endianness than x86, making shared memory very annoying to work with.
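
For what it's worth, a minimal sketch of that last point - the struct layout, field names, and fix_endianness helper below are hypothetical, just to illustrate the byte-swapping you end up doing when big-endian Cell data crosses into little-endian x86 memory (and vice versa):

    /* Hypothetical record living in memory shared between the big-endian
     * Cell side and the little-endian Opteron side. */
    #include <stdint.h>

    struct shared_sample {
        uint32_t id;
        uint32_t payload;
    };

    /* Plain C byte swap; every multi-byte field has to be swapped each time
     * it crosses the Cell <-> x86 boundary. */
    static inline uint32_t bswap32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000ff00u)
             | ((x << 8) & 0x00ff0000u) | (x << 24);
    }

    void fix_endianness(struct shared_sample *s)
    {
        s->id      = bswap32(s->id);
        s->payload = bswap32(s->payload);
    }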

bbatha
0 replies
23h1m

Haven't heard of it. Internal communication was not a strong suit at LANL... You also had a ton of different teams who all got access to the hardware at roughly the same time and were shotgunning different approaches, and coalescing on the better libraries wasn't really a priority at the beginning of Roadrunner when I was there. LANL didn't really have project management at the code-base level to direct people to upgrade stuff like that; it's mostly independent researchers pulling the code bases in different directions. Most of those researchers were physicists in my world and didn't really care about the "engineering" of software, just that their code ran fast and correctly enough to get to the next deadline. Then there was a small core of more computer-science researchers who did projects like this. If they were successful enough they'd attempt to integrate it into the mainline codes. But their incentives generally were to publish papers about supercomputer code like this, not necessarily to integrate it into the mainline codes. So often getting something like this into the codes meant giving a talk and hoping a physicist would see it and need the extra performance.

I was not running codes that would utilize even a small fraction of Roadrunner, so my projects quickly moved to the GPU-based test beds for the next-gen computer and I didn't get to see this research coalesce. My understanding from people who stayed on Roadrunner is that for the most part people didn't adopt too much fanciness for Roadrunner, as the rewrites were too deep and GPUs were on the horizon. There was a lot of vision about making generic heterogeneous compute frameworks, but to my knowledge they didn't pan out, and just writing against CUDA, OpenMP, and MPI won the day.

wmf
0 replies
1d2h

FLOPS are easy; programming is hard. Cell was the wrong direction and TRIPS/TFlex hasn't happened either.

Rapzid
5 replies
1d1h

ATI developed the graphics chips for the 360 and Wii. Feels a little too revisionist to say AMD produced the chips, even if they did purchase ATI a number of months before the Wii released.

fourfour3
4 replies
1d1h

It’s a bit tricky even to say that ATI developed them - they were designed by ArtX (mostly ex SGI staffers who had also worked on the N64), who were bought by ATI.

wmf
2 replies
1d

Maybe Wii was ArtX but wasn't the 360 using a mainstreamish Radeon GPU?

fourfour3
0 replies
11h5m

You’re right - I should have clarified that in my comment!

The Wii GPU is very much the GameCube one with some tweaks and faster clock speeds.

The 360 one is absolutely an ATI custom Radeon part.

corysama
0 replies
16h52m

It had some significant differences. Mainly the fixed eDRAM render target. Fun fact: a lot of tech from the 360 GPU made its way into Qualcomm Adreno GPUs. Reading about the tiled render pass OpenGL extensions in Adreno was very reminiscent of working on the 360.

Rapzid
0 replies
1d1h

Perhaps but ArtX was acquired by ATI long before. The ATI logo was already on the GameCube at launch.

Der_Einzige
5 replies
1d2h

There were memes circa this time on places like 4chan about "the power of the Cell", implying that it was capable of doing anything. Sony's marketing about how good this chip was circa 2005-7 was strong.

toast0
0 replies
1d1h

It's a shame they didn't license blast processing.

scrlk
0 replies
1d1h

The power of the Cell bought us real time rendering of Giant Enemy Crabs, as well as real time weapon changes. All for 599 US dollars!

jeroenhd
0 replies
1d

The cell processor was pretty good at some calculations. Unfortunately, it wasn't great for writing video games and neither was the tooling.

I recall various news stories about governments buying up PS3s because they could be used for cheap and effective computation at scale.

khalilravanna
4 replies
1d

My biggest lament with the PS3 is the lack of forward compatibility, which leaves many great games locked to the platform (MGS4, Demon’s Souls, etc.). Meanwhile, in Xbox land there are *633* 360 games you can play on any new Xbox by just slipping the disc in. Now, some of this is no doubt due to differing approaches to business by the corporate masters, but from what I've read a lot of it is down to the unique and befuddling architecture of the PS3.

https://en.m.wikipedia.org/wiki/List_of_backward-compatible_...

copx
2 replies
22h56m

Despite its exotic architecture the PS3 can be emulated well on "normal" machines, though.

I was amazed that I am actually able to play PS3 games on my (older, not at all top of the line) laptop thanks to the brilliant RPCS3 emulator.

A PS5 is way more powerful than my laptop, so in theory SONY could just make all those PS3 games available through emulation.

failuser
1 replies
19h15m

It took a decade and it’s still not 100% accurate. By now PS5 is powerful enough, but Sony would rather sell you a remaster or remake.

jimbobthrowawy
0 replies
15h7m

I think they'd rather you play the game on their streaming service that runs the game from a ps3-alike sitting in a rack.

throwaway48476
0 replies
1d

Xbox canceled development on their emulator though.

TheMagicHorsey
1 replies
20h18m

If China ever takes over Taiwan, America is so screwed.

Actually, there is such a scattered supply chain for semiconductor chips, that any war that disrupts international trade in a material way, would bring that entire industry to its knees. So many critical suppliers in the semiconductor supply chain are single companies, or single country oligopolies.

eunos
0 replies
6h8m

Well they (and Morris Chang) are supposedly 'recent' immigrants from Zhejiang in mainland so the next gen human resources are as good as secured

mackman
3 replies
16h46m

I loved working on the Cell. I remember writing my first SPU code when the PS3 dev kit was a huge stainless steel box. I later went on to implement predictive physics for racing-line optimization for Midnight Club on the SPUs. Was a lot of fun. The weird thing was that the SPU floating point unit had a different intermediate bit width than the PPU's, so for deterministic results you had to schedule threads so that a given workload was always dispatched to the PPU or the SPU deterministically. That was essential for network games, where the sim had to run bit-for-bit the same across players.
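
A minimal sketch of that dispatch idea (the names below are made up, not from the actual Midnight Club code): as long as the PPU-vs-SPU choice is a pure function of a stable task id rather than of runtime load, every peer in a networked sim routes the same work to the same kind of FPU and gets bit-identical results.

    #include <stdint.h>

    enum unit { RUN_ON_PPU, RUN_ON_SPU };

    /* Deterministic dispatch: depends only on the task id, never on which
     * unit happens to be idle, so every player's machine makes the same
     * choice and produces the same floating point results. */
    static enum unit pick_unit(uint32_t task_id)
    {
        return (task_id & 1) ? RUN_ON_SPU : RUN_ON_PPU;
    }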

oppositelock
2 replies
16h0m

I'm a systems nerd, and I found working with it quite challenging but rewarding. It's been many years, but I still remember a number of the challenges. SPEs didn't have shared memory access to RAM, so data transfer was your problem to solve as a developer, and each SPE had 256k of local RAM. These things were very fast for the day, so they'd crunch through the data very quickly. We double-buffered the RAM, using about 100k for data while simultaneously using the other 100k as a read buffer for the DMA engine.

That was the trickiest part - getting the data in and out of the thing. You had 6 SPEs available to you (of the 8 on the die, one was disabled for yield and one was reserved by the OS), and keeping them all filled was a challenge because it required nearly optimal usage of the DMA engine. Memory access was slow, something over 1000 cycles from issuing the DMA until data started coming in.

Back then, C++ was all the rage and people did their various C++ patterns, but with the space for code in local store being so limited, we just hand-wrote some code to run on the SPUs which didn't match the rest of the engine, so it ended up gluing together two dissimilar codebases.

I both miss the cleverness required back then, but also don't miss the complexity. Things are so much simpler now that game consoles are basically PC's with PC-style dev tools. Also, as much as I complain about the PS3, at least it wasn't the PS2.
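
For anyone curious what that double-buffering looked like in practice, here's a minimal sketch assuming the Cell SDK's spu_mfcio.h interface (mfc_get, mfc_write_tag_mask, mfc_read_tag_status_all); the buffer size, process() callback and chunk layout are illustrative rather than taken from the comment above:

    #include <stdint.h>
    #include <spu_mfcio.h>

    #define CHUNK (16 * 1024)   /* one MFC transfer maxes out at 16 KB */

    /* Two buffers in the SPU's 256 KB local store, 128-byte aligned for DMA. */
    static volatile uint8_t buf[2][CHUNK] __attribute__((aligned(128)));

    extern void process(volatile uint8_t *data, uint32_t size);  /* hypothetical kernel */

    void stream(uint64_t ea, uint32_t nchunks)
    {
        uint32_t cur = 0;

        /* Prime the pipeline: start fetching chunk 0 on tag 0. */
        mfc_get(buf[0], ea, CHUNK, 0, 0, 0);

        for (uint32_t i = 0; i < nchunks; i++) {
            uint32_t next = cur ^ 1;

            /* Kick off the DMA for the next chunk into the other buffer. */
            if (i + 1 < nchunks)
                mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);

            /* Block only on the tag that owns the current buffer... */
            mfc_write_tag_mask(1 << cur);
            mfc_read_tag_status_all();

            /* ...then crunch it while the other buffer fills in the background. */
            process(buf[cur], CHUNK);

            cur = next;
        }
    }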

mackman
1 replies
14h47m

Yep, all valid. When I started on it we had to do everything ourselves. But by the time I did serious dev on it, our engine team had already built vector/matrix libraries that worked on both the PPU and SPU, and had a dispatcher that took care of all the double buffering for me.

djmips
0 replies
4h50m

Indeed, anyone who mastered the parallelism of the PS3 bettered themselves and found the knowledge gained applied to the future of all multi-core architectures. Our PC builds greatly benefited from the architecture changes forced on us by the PS3.

light_hue_1
3 replies
17h54m

If I could ask a single CEO one question, it would be Lisa Su.

What is she thinking? She's sitting on what could be the 2nd most valuable company in the world. And she's doing nothing.

AMD's GPUs are as good as NVidia's. But their drivers suck. They did all the hard work. Now they need to invest a relative pittance in fixing up their drivers and open sourcing the rest. But they won't.

Absolutely nothing about this situation makes any sense. It's a company that's sitting on gold (actually, by weight, the value of GPUs is far higher than that of solid gold bars). But... they're throwing it all away for nothing? Is she getting paid off by NVidia to do this?

AMD is the single most confusing entity out there.

JackYoustra
1 replies
13h4m

The other day I tried to rent a spot MI300. I couldn't find a way to do that! Whereas it's fairly easy to rent an H100. I think they're just a bit behind on their TSMC allocation! Their stock seems to think so; it's priced much more aggressively than Nvidia's.

latchkey
0 replies
12h55m

My company Hot Aisle is working on solving this problem.

It has nothing to do with availability or pricing. You can't rent a single MI300x today because ROCm doesn't support virtualization. PCIe passthrough doesn't work yet. AMD knows about the issue and is working on it.

We will have 8xMI300x for rent soon. Once AMD fixes their stuff, we will be able to rent out one at a time.

latchkey
0 replies
12h51m

And she's doing nothing.

This is totally wrong. The MI300x was released in Dec 2023, just a few months ago. It is a fantastic product, but it's not in the hands of enough people yet.

Give it time.

hehdhdjehehegwv
1 replies
23h55m

She’s low-key one of the most successful engineers ever in terms of bringing cutting edge technology to the mass market, but AMD never gets the fawning press attention of NVIDIA, and still has the lingering image of being the poorer cousin of Intel.

Of course, as a woman engineer who has made it this far, she's used to not getting the limelight in a male-dominated industry - which may also explain some of AMD's recent success.

blitzar
0 replies
11h38m

Their press should focus more on how much of a failure AMD are (vs NVIDIA in particular) rather than fawning puff pieces like this one.

hadrien01
0 replies
23h36m

Wow this website is absolutely amazing! Full of details about consoles, and with a really pleasing mixture of text, images, and even 3D models.

teekert
0 replies
7h6m

I recently got a second-hand PS3; I never was a gamer. I'm so impressed with it. Racing games look so realistic, and GTA5 is still the most recent version of that game. The two LittleBigPlanet games were an absolute joy to play with my son.

There is still so much value in this ancient machine.

kevvok
0 replies
1d

There’s a great book about the development of Cell and Xenon (which went into the Xbox 360) called “The Race for a New Game Machine” that was co-written by a couple folks who worked on them at IBM.

djmips
0 replies
17h28m

You might find that this is an article that's commenting on an interview and you might be equipped to interpret it better on your own.

Here's the original interview which has more details that I found interesting.

https://stratechery.com/2024/an-interview-with-amd-ceo-lisa-...