AMD has almost completely taken over the console market. The PS2 was a MIPS system and the PS3/Xbox 360 were PowerPC, but for the last ten years Sony and Microsoft have been all AMD. Intel has been out of the game since the original Xbox, and Nvidia only has the Switch to its name. The Steam Deck-style handhelds (like the ROG Ally and the Lenovo Legion Go) are AMD systems.
It’s kind of interesting how they have this hold on gaming outside of conventional PCs, but can’t seem to compete with Nvidia in just that one market.
A console wants a competitive GPU and a competitive CPU. Nvidia has the first, Intel has the second, AMD has both. The first one is more important, hence the Switch, and AMD's ability to do it in the pre-Ryzen era. (The original Intel-CPU Xbox had an Nvidia GPU.) Console vendors are high volume institutional buyers with aggressive price targets, so being able to get both from the same place is a big advantage.
For PCs, discrete GPUs are getting bought separately so that doesn't apply. AMD does alright there but they were historically the underdog and highly budget-constrained, so without some kind of separate advantage they were struggling.
Now they're making a lot of money from Ryzen/Epyc and GPUs, and reinvesting most of it, so it's plausible they'll be more competitive going forward as the fruits of those investments come to bear.
For gaming, AMD GPUs are generally better bang for buck than Nvidia - notably, they tend to be about the same bang for less buck, at least on the high end and some of the middle tier. The notable exception is ray tracing but that's still pretty niche.
If AMD gets their act together and gets the AI tooling for their GPUs to be as accessible as Nvidia's, they have a good chance of becoming the winner there, as you can get more VRAM bang for, again, less buck.
In a market where they have similar performance and everything minus ray tracing for somewhere between 50% and 70% of the competitor's prices, it will be pretty easy to choose AMD GPUs.
They already have the best CPUs for gaming and really are positioning themselves to have the best GPUs overall as well.
If you’re building a midlife crisis gaming PC the 4090 is the only good choice right now. 4K gaming with all the bells and whistles turned on is a reality with that GPU.
Yup, just did this. Still way cheaper than a convertible or whatever
Especially when most convertibles are slower than a Tesla.
PSA: GeForce Now Ultimate is a great way to check this out. You get a 4080 equivalent that can stream 4K 120Hz. If you have good Internet in the continental US or most of Europe, it’s surprisingly lag-free.
Someone really should market a PC directly as a "midlife crisis gaming PC." I laughed so hard at that.
The 4090 is also upwards of $2k. That's more than what I spent on my entire computer, which is a few years old and still very powerful. We used to rag on people buying Titans for gaming, but since Nvidia did the whole marketing switcheroo, Titans are now just *090 cards and they appear to be reasonable cards.
My point is that Nvidia has the absolute highest end, but it's ridiculous to suggest that anyone with a budget less than the GDP of Botswana should consider the 4090 as an option at all. For actually reasonable offers, AMD delivers the most performance per dollar most of the time.
I also think something important here is that AMD's strategy with APUs has been small to large. Something that really stood out to me over the last few years is that Nvidia was capturing the AI market with big and powerful GPUs while AMD's efforts were all going into APU research at the low end. My belief is that they were preparing for a mobile-heavy future where small, capable all-purpose chips would have a big edge.
They might even be right. One of the potential advantages of the APU approach is that if the GPU can be absorbed into the CPU with shared memory, a lot of the memory management of CUDA would be obsoleted and it becomes not that interesting any more. AMD are competent, they just have sucky, crash-prone GPU drivers.
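To make that concrete, here's a minimal host-side sketch of the two models, using the CUDA runtime API (cudaMalloc/cudaMemcpy/cudaMallocManaged; HIP has drop-in equivalents). The kernels are elided; it only illustrates the memory-management difference that an APU-style shared pool would make disappear, and it's not anyone's actual code.

```cpp
// Sketch: explicit staging on a discrete GPU vs. a single shared/managed allocation.
// Host-side CUDA runtime API only; HIP has equivalents (hipMalloc, hipMemcpy, ...).
#include <cuda_runtime.h>
#include <vector>

void discrete_gpu_path(std::vector<float>& host_data) {
    const size_t bytes = host_data.size() * sizeof(float);

    // 1. Allocate a separate buffer in device memory (VRAM).
    float* dev = nullptr;
    cudaMalloc((void**)&dev, bytes);

    // 2. Explicitly stage host -> device before any kernel can touch the data.
    cudaMemcpy(dev, host_data.data(), bytes, cudaMemcpyHostToDevice);

    // ... launch kernel on `dev` ...

    // 3. Explicitly copy the results back, then free the device buffer.
    cudaMemcpy(host_data.data(), dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

void shared_memory_path(size_t n) {
    // One allocation, reachable by CPU and GPU alike: no staging copies to get
    // wrong, and no second copy of the data eating VRAM.
    float* buf = nullptr;
    cudaMallocManaged((void**)&buf, n * sizeof(float));

    // CPU fills `buf`, the kernel reads/writes it, the CPU reads the results --
    // all through the same pointer.
    // ... launch kernel on `buf` ...

    cudaFree(buf);
}
```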
I have an AMD GPU on my desktop PC and I also have a Steam Deck which uses an AMD APU. Never had a driver crash on me on either system.
You're probably using it for graphics though; the graphics drivers are great. I refuse to buy a Nvidia card just because I don't want to put up with closed source drivers.
The issue is when using ROCm. Or, more accurately, when preparing to crash the system by attempting to use ROCm. Although, in fairness, as the other commenter notes it is probably a VRAM issue, so I've been starting to suspect maybe the real culprit might be X [0]. But it presumably doesn't happen with CUDA, and it is a major blocker to using their platform for casual things like multiplying matrices.
But if CPU and GPU share a memory space or it happens automatically behind the scenes, then the problem neatly disappears. I'd imagine that was what AMD was aiming for and why they tolerated the low quality of the experience to start with in ROCm.
[0] I really don't know, there is a lot going on and I'm not sure what tools I'm supposed to be using to debug that sort of locked system. Might be the drivers, might be X responding really badly to some sort of OOM. I lean towards it being a driver bug.
Ah yes. I'm pretty much only using it for games. It does seem that AMD's AI game is really lacking, from reading stuff on the Internet.
When I run an LLM (llama.cpp ROCm) or Stable Diffusion models (Automatic1111 ROCm) on my 7900XTX under Linux and it runs out of VRAM, it messes up the driver or hardware so badly that all subsequent runs fail until I reboot.
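To make the failure mode concrete: the wedge happens when an allocation blows past free VRAM mid-run, so the defensive pattern is to query free memory up front and bail out cleanly. A rough sketch using the HIP runtime API (hipMemGetInfo/hipMalloc are real calls); try_load_model and the 1 GiB headroom figure are made up for illustration, and this isn't what llama.cpp or Automatic1111 actually do.

```cpp
// Sketch: refuse to load a model that won't fit in VRAM instead of letting the
// allocation blow up mid-run. Uses the HIP runtime API (hipMemGetInfo, hipMalloc);
// try_load_model() and the headroom value are illustrative only.
#include <hip/hip_runtime.h>
#include <cstdio>

bool try_load_model(size_t model_bytes) {
    size_t free_bytes = 0, total_bytes = 0;
    if (hipMemGetInfo(&free_bytes, &total_bytes) != hipSuccess) {
        std::fprintf(stderr, "could not query VRAM\n");
        return false;
    }

    // Leave some headroom for activations / KV cache / the compositor.
    const size_t headroom = 1ULL << 30;  // 1 GiB, arbitrary for this sketch
    if (model_bytes + headroom > free_bytes) {
        std::fprintf(stderr,
                     "model needs %zu MiB but only %zu of %zu MiB VRAM is free; aborting\n",
                     model_bytes >> 20, free_bytes >> 20, total_bytes >> 20);
        return false;  // bail out before the driver gets into a bad state
    }

    void* weights = nullptr;
    if (hipMalloc(&weights, model_bytes) != hipSuccess) {
        std::fprintf(stderr, "allocation failed despite the check\n");
        return false;
    }

    // ... copy weights in and run inference ...
    hipFree(weights);
    return true;
}
```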
I have had a RX580, RX590, 6600XT, and 7900XT using Linux with minimal issues. My partner has had a RX590, 7700XT on Windows and she's had so many issues it's infuriating.
AMD tend to have better paper specs but worse drivers/software; it's been like that for decades. At least they're now acknowledging that the state of their software is the problem, but I haven't seen anything to give me confidence that they're actually going to fix it this time.
I'm pretty much all the time on Linux. Have used Windows on my gaming computer mostly to compare performance, out of curiosity.
Generally the performance is at least as good on Linux so I stay there. Never had a driver issue.
Specifically modern "normal" gaming. Once you get outside that comfort zone AMD has problems again. My 7900XTX has noticeably worse performance than the 1070 I replaced when it comes to Yuzu and Xenia.
Also, AMD is the obvious choice for Valve over something like Nvidia due to AMD having decent upstream Linux support for both CPU and GPU features. It's something Nvidia never cared about.
NVidia has had more reliable and, at least until recently, more featureful drivers on Linux and FreeBSD for decades. They're just not open-source, which doesn't seem like it would matter for something like the Steam Deck.
Reliability is rather moot when they only support what they care about and have ignored the rest for those decades. Not being upstreamed and not using standard kernel interfaces only makes it worse.
So it's not something Valve wanted to deal with. They have commented on the benefits of working with upstream GPU drivers, so it clearly matters.
NVidia are the upstream for their drivers, and for a big client like Valve they would likely be willing to support the interfaces they need (and/or work with them to get them using the interfaces NVidia like). Being less coupled to the Linux kernel could go either way - yes they don't tend to support the most cutting-edge Linux features, but by the same token it's easier to get newer NVidia drivers running on old versions of Linux than it is with AMD.
(Does Valve keep the Steam Deck on a rolling/current Linux kernel? I'm honestly surprised if they do, because that seems like a lot of work and compatibility risk for minimal benefit)
Upstream is the kernel itself and standard kernel interfaces. Nvidia doing their own (non upstream) thing is the main problem here. They didn't work with libdrm for years.
Being a client or even a partner doesn't guarantee good cooperation from Nvidia (EVGA has a lot to say about that). As long as Nvidia is not a good citizen in working with the upstream Linux kernel, it's just not worth it for someone like Valve to invest effort in using them.
Stuff like HDR support and the like are major examples of why it all matters.
Yes, but none of that is important for a console. You're talking about integration into libraries of normal desktop distros which aren't that important when the console can just ship the compatible ones.
Valve disagree. And the fact that they made an updated version that supports HDR demonstrates that it's important.
Form factor (console or not) has no bearing on importance of this issue.
The current SteamOS (what Decks run) is based on Arch but is not rolling. The original for Steam Boxes was based on Debian.
Reasoning for the switch: https://en.wikipedia.org/wiki/SteamOS#Development
But Valve contributes directly to AMD drivers; they employ people working on Mesa, the DX => Vulkan layers, and the kernel. It's a very neatly integrated system.
I mean, Nvidia goes all out in the AI/ML space; they even rebranded their company, lol.
I doubt they care about working with Valve on the Steam Deck as much as they do about the high-end compute market.
Pierre-Loup Griffais recently retweeted NVK progress with enthusiasm. Given that he used to work at Nvidia on the proprietary driver it's replacing, I think it's a sign that Valve wouldn't particularly want Nvidia's driver even given the choice; external bureaucracy is something people within a company will avoid whenever possible.
Going open source also means their contributions can propagate through the whole Linux ecosystem. They want other vendors to use what they're making, because if they do they'll undoubtedly ship Steam.
Here's a secret: a lot of the featurefulness of AMD's current Linux drivers is not due to AMD putting in the work, but due to Valve paying people to work on GPU drivers and related stuff. Neither AMD nor Nvidia can be bothered to implement every random thing a customer who moves maybe a million low cost units per year wants. But with open source drivers, Valve can do it themselves.
The original Xbox was supposed to use an AMD CPU as well, and parts of that remained in the architecture (for example, it uses the HyperTransport bus); backroom deals with Intel led to a last-minute replacement of the CPU with an Intel P6 variant.
So last minute that the AMD engineers found out Microsoft went with Intel at the Xbox launch party [1]
[1] https://kotaku.com/report-xboxs-last-second-intel-switcheroo...
Wow! Good find!
Also, the security system assumes that the system has a general protection fault when the program counter wraps over the 4GB boundary. That only happens on AMD, not Intel.
This also led to a major vulnerability caused by different double-fault behavior between AMD and Intel known as “the visor bug”: https://xboxdevwiki.net/17_Mistakes_Microsoft_Made_in_the_Xb...
The PlayStation 3 was the last "console". Everything that came after was general purpose computing crap.
The PS3 could do general purpose computing. Its built in operating system let you play games, watch movies, play music, browse the web, etc. At the beginning of the console's life you could even install Linux on it.
The hardware of consoles has been general purpose for decades.
The Pentagon even built a supercomputer using PS3s as compute cores.
The current and previous generations of xbox and playstation do use x86 CPUs with integrated AMD GPUs. But they aren't just PCs in a small box. Using a unified pool of GDDR memory* is a substantial architectural difference.
*except for the Xbox One & One S, which had their own weird setup with unified DDR3 and a programmer-controlled ESRAM cache.
Gaming is general purpose computing, or at least a part of it.
The CPU really only needs to be adequate - Nvidia can pull that off well enough for Nintendo at least.
Intel is in a far worse position - because they cannot do midrange graphics in an acceptable power/thermal range.
Aren’t the midrange GPUs considered to be pretty decent price/performance wise? IIRC they significantly improved their drivers over the last few years
Intel's?
They're certainly good enough to be the integrated graphics solution for office work, school computers, media centres, casual gaming (the same as AMD's low end), and other day to day tasks.
That's most of the market, but I think it's a stretch to say that they're good enough for consoles.
Also, I guess AMD is fine with having much lower margins compared to Nvidia, which makes them very competitive in this market?
Last year's Xbox leak revealed that Microsoft was at least considering switching to ARM for the next Xbox generation, though regardless of the CPU architecture, they indicated they were sticking with AMD for the GPU. It will be interesting to see how that shakes out; going to ARM would complicate backwards compatibility, so they would need a very good reason to move.
AMD holds an ARM license and, as I understand it, uses ARM cores as part of the firmware TPM in the PRO series processors. It's not unreasonable to assume they could make some ARM/x86 hybrid CPU (like Apple did for Rosetta) or a mixed-arch chip we've never seen before that can run emulators natively. Who knows.
But what would be the point of that? If you're already buying it from AMD and the previous generation was x86 so that's what gives backwards compatibility, just do another x86 one. The reason to switch to ARM is to get it from someone other than Intel or AMD.
The reason to switch to ARM is to get better performance, especially per-watt. If the supplier that's making your graphics card can deliver that, then why risk onboarding someone new?
I am no expert, but I cannot remember ever hearing that before. Why would ARM have better absolute performance than x86?
As a spectator, reading tech press about architectures gave me the impression that even in performance per watt, the advantage ARM has over x86 is fairly small; it's just that ARM chipmakers have always focused on optimizing for low-power performance for phones.
Better general design, or easier to include more cache. All the normal reasons one architecture might perform better than another. I mean, you heard about Apple switching all their processors to ARM, right?
Intel would certainly like you to believe that. But for all their talk of good low-power x86s being possible, no-one's ever actually managed to make one.
There's an anecdote from Jim Keller that Zen is so modular that it'd be possible to swap out the x86 instruction decode block with an ARM one with relatively little effort. He's also been saying for a while now that ISA has little bearing on performance and efficiency anymore.
Apple's decision to switch to ARM had many reasons, licensing being just as important as performance, perhaps more so.
The low power variants of Zen are very efficient. You're still looking at Intel, but they've been leapfrogged by AMD on most fronts over the past half decade (still not market share, but Intel still has their fabrication advantage).
I'll believe that when I see a Zen-based phone with non-awful battery life. Yes, AMD are ahead of Intel in some areas, but they've got the same vested interest in talking up how power-efficient their x86 cores can be, and that may not actually be based in reality.
You won't see that happen because AMD have no interest in targeting phones. Why bother? Margins are thin, one of x86's biggest advantages is binary backwards compatibility but that's mostly meaningless on Android, there's additional IP and regulatory pain points like integration of cellular modems.
Intel tried and ran into most of those same problems. Their Atom-based SoCs were pretty competitive with ARM chips of the day, it was the reliance on an external modem, friction with x86 on Android and a brutally competitive landscape that resulted in their failure.
Regardless of architectural advantages from one vendor or another, the point remains that arguably the preeminent expert in CPU architecture believes that ISA makes little difference, and given their employment history it's hard to make the argument of bias.
The way I remember it the performance and battery life never quite lived up to what they said it would.
Current employer is a much heavier influence than prior employers, and someone who's moved around and designed for multiple ISAs and presumably likes doing so has a vested interest in claiming that any ISA can be used for any use case.
There's a long history of people claiming architecture X can be used effectively for purpose Y and then failing to actually deliver that. So I'm sticking to "I'll believe it when I'll see it".
In addition to better perf per watt, it’d allow them to shrink the console, potentially to something as small as a Mac Mini or NUC. Less need for cooling means a smaller console.
This could help them make inroads to people who might not have considered a home console before due to their relatively large size. Wouldn’t be surprised if the Switch did well with this market and now MS wants a slice of that pie.
Is the stuff on the PRO a Xilinx FPGA, or do they have both an ARM processor and an FPGA?
The NPUs are effectively hard IP blocks on xilinx FPGA fabric (i.e. there are close to no LUTs there - the chip takes in Xilinx bitstream but the only available resources are hard IP blocks)
There are also Xtensa cores (an audio coprocessor) and ARM cores (the PSP, Pluton, and I think there's an extra ARM coprocessor for optionally handling some sensor stuff).
All Zen 1 CPUs and newer have the PSP / ASP security processor which is ARM based and runs before the x86 cores are released from reset. This applies to all Zen models, not just the PRO versions.
The fTPM does indeed run on the PSP, so on the ARM cores, among many other things like DRAM training.
AMD has previously made a variety of ARM CPUs, such as the recent Opteron “Seattle” parts.
https://en.wikichip.org/wiki/amd/cores/seattle
AMD uses ARM for its PSP as well which is more than just a TPM.
But afaik it also matters what type of license AMD has.
AMD would need an ARM architecture license (like Apple has). This license allows you to do whatever you want with the chip (such as adding an x86-style memory model).
The downside to the architecture license is that you have to design the core entirely yourself, clean-room.
Apple recently demonstrated ARM is pretty capable of emulating x86
They did, but to get it running as well as they did they had to deviate from generic ARM by adding support for the x86 memory model in hardware. Apple was in a position to do that since they design their own ARM cores anyway, but the off-the-shelf ARM reference designs that most other players are relying on don't have that capability.
You can do it with standard ARM instructions these days.
Which instructions?
Those that are part of FEAT_LRCPC.
You could always do it with standard instructions, it'd just be slow, no?
I vouched for this comment, but it looks like you're shadow banned. You might want to email support.
Note that this is actually very cheap in hardware. All ARM CPUs already support memory accesses that behave similarly to x86. It's just that they're special instructions meant for atomics. Most load and store instructions in Aarch64 don't have such variants, because it'd use a lot of instruction encoding space. The TSO bit in Apple's CPUs is really more of an encoding trick, to allow having a large number of "atomic" memory instructions without needing to actually add lots of new instructions.
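To make that concrete, here's the classic message-passing pattern an x86-to-ARM translator has to worry about. In the original binary both stores and both loads are plain movs, and x86-TSO already guarantees their ordering; translating them to plain AArch64 ldr/str loses that guarantee, so the emulator has to emit release/acquire forms (STLR/LDAR, or LDAPR from FEAT_LRCPC, which maps more closely onto x86's semantics) or fence everything, which is the cost Apple's TSO bit avoids. A minimal C++ sketch, with the ordering annotations standing in for what the translator must emit:

```cpp
// Message-passing litmus test: the producer publishes data, then sets a flag;
// the consumer waits for the flag and then reads the data.
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    // In the x86 binary these are two plain mov stores, and x86-TSO guarantees
    // they become visible in this order. Plain AArch64 str instructions give no
    // such guarantee, so the flag write must become a release store (STLR).
    data.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Likewise, two plain mov loads are never reordered with each other on x86,
    // but plain AArch64 ldr loads can be. The flag load has to become an acquire
    // load (LDAR, or LDAPR with FEAT_LRCPC) for the data read to be safe.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    assert(data.load(std::memory_order_relaxed) == 42);
}

int main() {
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```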
They haven't demonstrated it at console-level prices.
If you wanna play semantics and argue that the demonstration requires an M-something chip, then they have: a $599 iPad Air with an M1, or I guess a Mac Mini with an M2 for the same price.
If you want to be more lenient, the $129 Apple TV has an A15, which is roughly the same design but with fewer cores.
Sticking with AMD for GPUs makes sense no matter the CPU architecture. AMD is competitive on x86, and at the moment Samsung is working out the kinks to integrate Radeon GPUs with their ARM CPUs... so once Samsung has proven the concept works (and, of course, paid for the effort), maybe large consoles will make the switch.
It shouldn't be much of a problem for game studios in the end, most games run on one of the household-name engines anyway and they have supported ARM for many years for mobile games.
Supposedly a big reason is that Nvidia is difficult to work with.
Is there any info to substantiate that?
Well EVGA withdrew from making Nvidia GPUs despite being one of the best board partners due to how unreasonable Nvidia was.
The question is... why EVGA only?
...and why did EVGA withdraw from the GPU market altogether rather than pivoting to making AMD/Intel cards, if Nvidia was truly the problem?
I think EVGA’s withdrawal had to do with how impossible it was to compete with Nvidia’s first-party cards with the terms Nvidia was setting for AIBs. Other card makers like Asus have several other flagship product lines to be able to sustain the hit while EVGA’s other products consisted of lower-profit accessories.
They may have seen AMD selling their own first-party cards and anticipated AMD eventually following Nvidia’s footsteps. As for Intel, at that point they were probably seen as too much of a gamble to invest in (and probably still are, to a lesser extent).
Because GPUs tend to be low margin to begin with. Nvidia putting all these restrictions on them meant running at a loss.
This compares with PSUs which apparently have a massive margin.
EVGA might come back to the GPU manufacturing space with AMD eventually but Sapphire and Powercolor already fill the niche that EVGA filled for Nvidia cards (high build quality, enthusiast focused, top of the line customer support, warranty, and repairs). So it probably was just not worth picking that fight when the margins aren't really there and AMD is already often seen as "the budget option".
If AMD manages to pull a zen style recovery in the GPU segment, I would expect a decent chance of EVGA joining them as a board partner.
That's a nice little story until you find out that "unreasonable" in this case meant "Nvidia didn't buy back units EVGA overstocked on with hopes of scalping people in the crypto craze, and refused to take the loss for EVGA".
Sure, "unreasonable".
Linus Torvalds said a couple of words about it ... https://www.google.com/search?q=linus+nvidia
The friction between Linux wanting all drivers to be open source and Nvidia not wanting to open source their drivers isn't really relevant to any other platform besides Linux. Console manufacturers have no reason to care that Nvidia's drivers aren't open source, they can get documentation and/or source code under NDA if they choose to partner with Nvidia. Secrecy comes with the territory.
It's not about open sourcing their drivers. It's about providing ANY drivers for their hardware. Also note that this was from 2012. Nowadays, they actually do provide decent closed-source drivers for Linux.
https://www.youtube.com/watch?v=xPh-5P4XH6o
There are a few examples / anecdotes:
The first is MS' trouble with the original Xbox (https://www.gamesindustry.biz/ati-to-provide-chips-for-futur... - not a great example (20+ year old articles are hard to find) but mentions the issues MS had with Nvidia)
Then there's Apple's drama, which involved warranty claims for laptop parts that led to them being AMD only until the Arm move (https://blog.greggant.com/posts/2021/10/13/apple-vs-nvidia-w...)
Sony only went with Nvidia for the PS3, but that may be more about AMD's APU offerings than Nvidia's shortcomings.
Whether these are signs of a trend or just public anecdotes is in the eye of the beholder or kept away in boardrooms.
It’s second-hand info so take it with a grain of salt, but I read somewhere that there was a lot of friction between Apple and Nvidia because Apple likes to tweak and tailor drivers per model of Mac and generally not be wholly dependent on third parties for driver changes, but that requires driver source access which Nvidia didn’t like (even though they agreed to it for quite some time — drivers for a range of Nvidia cards shipped with OS X for many years and those were all Apple-tweaked).
As for PC gaming, word of mouth plus a huge chunk of money for advertising deals. AMD, or back then rather ATI, drivers were always known for being more rough around the edges and unstable, but the cards were cheaper. Basically you get what you pay for. On the CPU side, it's just the same but without the driver stuff... until AMD turned the tide with Zen and managed to kick Intel's arse so hard they haven't recovered until today.
The console market is a different beast: here the show isn't run by kids who have had Nvidia sponsorships in games since they were first playing Unreal Tournament 2004, but by professional beancounters who only look at the price. For them, the answer is clear:
- generally, the studios prefer something that has some sort of market adoption because good luck finding someone skilled enough to work on PowerPC or weird-ass custom GPU architecture. So the console makers want something that their studios can get started on quickly without wasting too much time porting their exclusive titles.
- on the CPU side there are only two major architectures left that fulfill this requirement, ARM and x86, and the only one pulling actual console-worthy high performance out of ARM is Apple, who doesn't license their stuff to anyone. That means x86, and there it's again just two players in town, and Intel can't compete on price, performance or efficiency => off to AMD they go.
- on the GPU side it's the same, it's either AMD or Nvidia, and Nvidia won't use their fab time to churn out low-margin GPUs for consoles when they can use that fab time to make high-margin gamer GPUs and especially all the ludicrous-margin stuff, first for coin miners and now for AI hyper(scaler)s => off to AMD they go.
The exception of course is the Nintendo Switch. For Nintendo, it was obvious that it must be an ARM CPU core - x86 performance under battery constraints Just Is Not A Thing, and all the other mobile CPU archs have long since died out. Where I have zero idea is why they went for the Nvidia Tegra, originally aimed at automotive and set-top boxes, instead of Qualcomm, Samsung or MediaTek. But I guess Qualcomm demanded unacceptable terms, Samsung didn't want to sell low-margin SoCs when they could use the capacity for the high-performance SoCs in their own Galaxy lineup, and MediaTek was too sketchy, so they went with Nvidia, who could actually use a large deal to showcase "we can also do mobile" to a world dominated by the three aforementioned giants.
Let's not revise history. Zen was better than Bulldozer, but it still took until Zen 2 or Zen 3 (I don't recall exactly) for them to reach parity with Intel Core i.
Before that, the vocal crowd was buying AMD because they are the underdog (and still are, to a point) and were cheaper (no longer the case).
If we're going to not revise history it's probably important to not leave out the actual meat of what made AMD's Zen processors so compelling: core count.
Zen 1 launched offering double the core count of any of Intel's competing products at the same price. Intel was ahead on single-core performance for a long time, but in any well-multithreaded benchmark or app Intel was getting absolutely demolished, with AMD offering twice the performance at any price point. Intel failed to compete in multithreaded apps for four product generations, giving AMD enough time to close the single-threaded performance gap too.
Now they are both pretty close performance-wise, but AMD is well ahead of Intel's competing CPUs from a power-efficiency standpoint.
Sure, and they got away with it because multi-thread workloads aren't relevant for the vast majority of the population. They still aren't today.
Most consumer computing workloads, including gaming by far, are dependent on single-thread performance. The vast majority of people do not spend their computing hours encoding video, compiling source code, or simulating proteins. They play games, surf Facebook, watch Youtube, read Mysterious Twitter X, and chat or call friends and family on LINE/Discord/Skype/et al.
In case you are detached from reality: most people's computing needs today can be satisfied with an Intel N100. That's a two-generations-old 4-core, 4-thread CPU among the lowest tiers of consumer CPUs available.
Hell, I personally can satisfy all my daily computing needs with an Intel i7-2700K Sandy Bridge CPU without feeling hindered. I surmise most people will be satisfied with far less.
Another way to put it is: For all the core counts AMD Ryzen (and now Intel) brought, most people can't actually make full use of them even today. That's another reason why AMD Ryzen took so long to become a practical competitor to Intel Core i instead of a meme spread by the vocal minority.
The big factor is that before Zen it was dual-core low-end, quad-core high-end for Intel consumer chips on both desktop and laptop. The advancement in core count and multithreaded performance was clearly a direct result of Zen.
I think a big part of this comes down to two things:
1. If you’re Nintendo, the Tegra X1 was the fastest mobile graphics chip available. Mali and Adreno weren’t anywhere close at the time. The alternative would’ve required shipping a Switch less powerful than the already-derided Wii U, which just was not an option.
2. Nintendo uses their own in-house operating system that fits into under 400MB and runs on a microkernel. Naturally, you want good GPU drivers. NVIDIA’s philosophy is to basically make a binary blob with a shim for each OS. Not great, but it demonstrably shows the drivers can be ported with fairly little effort - Windows, Mac, Linux, Android, whatever. Qualcomm and MediaTek’s strategy is to make a bastard fork of the Linux kernel and, maybe, upstream some stuff later, with a particular interest in Android compatibility. I think it goes without saying that the implementation which isn’t tied to a specific kernel is a more desirable starting point.
It also helps that console makers like Sony and MS are less concerned with driver quality since they're exposing a much more barebones OS to purpose built games and game engines, with fewer bells and whistles and much more control by Sony/MS over the software on the console.
There is a new Steam Deck-like handheld that's Intel-based, but I don't remember the brand.
MSI. But expect very bad power consumption
They all have very bad power consumption lol
Apart from the Steam Deck, that is.
The OLED Deck in particular is great. On less demanding games or streaming with Moonlight where I can crank down the TDP it’s not hard to squeeze 10+ hours of playtime out of its 50Wh battery.
Linux probably helps here, with its greatly reduced background activity compared to Windows.
Yeah, but I would be interested in the performance. I use my Steam Deck always connected to a power outlet (I have multiple connected around the home), so power consumption has never been a problem; the most I use it in handheld mode is on the subway, for a total of about an hour to an hour and a half (round trip).
From my understanding, AMD holds the console market because it's basically lowest price wins, so the margins make it business that Intel and Nvidia don't really want.
They also more or less won the PS4/XB1 generation by default as the only supplier capable of combining a half decent CPU and GPU into a single chip - Intel had good CPUs but terrible GPUs and Nvidia had good GPUs but terrible CPUs. That tide is shifting now with Intel's renewed push into GPUs, and Nvidia getting access to high performance ARM reference designs.
It's also interesting that at one point, Nvidia's chips were going to support both x86 and ARM (see Project Denver), but x86 support was scrapped due to Intel's patents.
https://semiaccurate.com/2011/08/05/what-is-project-denver-b...
Would have been interesting to have had another large x86 player.
As already mentioned by others, AMD won the console market because back then the only thing they could compete on was price.
Coincidentally, the most important thing to Sony and Microsoft with regards to consoles ("make tons and sell tons" products) is cost of materials. Even getting that cost 1 cent cheaper means a $1 difference over 100 units, $10 over 1,000 units, $100 over 10,000 units, and onwards. Remember, we're talking many millions of basically identical units sold.
AMD couldn't compete in performance nor efficiency, but they could absolutely compete in price, while both Intel and Nvidia couldn't due either to their business strategy or the logistics for Sony/Microsoft of procuring more materials from different suppliers.
So long as AMD can continue to undercut Intel, Nvidia, and any other contenders they will continue to dominate consoles.
I generally agree with this reasoning, but your example could use some scaling down to convince a reader.
1 cent cheaper would net Sony a total of 500K USD across all PS5 units sold to date. That's about a thousand PS5s at retail as pure profit. A company of the size of Sony, for a product of the scale of the PS5, would absolutely forego that profit if the alternative offered any tangible benefits at all.
That's the thing though: a new generation console only needs to be better than its predecessor. It doesn't have to have groundbreaking technologies or innovations, let alone be a pioneer paving the way forward for other computing hardware products.
So cost of materials remains the chief concern.
ATi made the GameCube’s GPU if that counts
Close. They bought the company that made it.
MSI is making an Intel-based handheld called the Claw. It's pretty sweet.
Is it? It doesn't seem to me that Intel can compete on performance per watt, which is what really matters in a handheld. From what I can tell, the APU alone can reach 45W in the Claw, while that's the max power draw for the entire Steam Deck (LCD). Add to that the fact that MSI isn't selling games to subsidize an aggressive price point the way Valve does with the Steam Deck, and you just get a worse machine for more money.