The article describes it as if the design was surprising, with how many chips there were and so on, but it's important to understand the context: a complete lack of synergy and a fight for dominance between the Japan and USA teams. SEGA of Japan was making a 2D console, SEGA of America was making a 3D console, the Japanese team was about to win that fight, and then the PSX appeared, so they essentially merged the two together.
You end up with a 2D console with parts and bits of an unfinished 3D console inside it. It makes no sense.
For a tech enthusiast and someone who loves reading dev postmortems, it's glorious. For someone who likes a clean design, it's irksome to no end. For mainstream gamers of that era, when the big thing was "arcade in your living room", it was a disappointment, and SEGA not knowing which side to focus on didn't help at all.
The Wikipedia article has a lot more details [1]
If you check the other articles about the PlayStation [1] and the Nintendo 64 [2], you'll see that the design of a 3D-capable console in the 90s was a significant challenge for every company. Thus, each one proposed a different solution (with different pros and cons), yet all very interesting to analyse and compare. That's the reason this article was written.
[1] https://www.copetti.org/writings/consoles/playstation/
[2] https://www.copetti.org/writings/consoles/nintendo-64/
This should have been no struggle for Sega. They basically invented the modern 3D game and dominated the arcades with very advanced 3D games at the time. Did they not leverage Yu Suzuki and the AM division when creating the Saturn? Then again, rumor has it they were still stuck on 2D for the home market, then saw the PlayStation specs, freaked out, and ordered two of everything for the Saturn.
In interviews, IIRC, ex-Sega staff have stated that they thought they had one more console generation before a 3D-first console was viable for the home market. Sure, they could do it right then and there, but it would be kind of janky. Consumers would rather have solid arcade-quality 2D games than glitchy home ports of 3D ones. Then Sony decided that the wow factor was worth kind-of-janky graphics (affine texture mapping, egregious pop-in, only 16-bit color, aliasing out the wazoo, etc.) and the rest is history.
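That affine texture mapping jank is easy to demonstrate in a few lines. Here's a hedged toy sketch (made-up endpoint values, not actual PS1 code): affine rendering interpolates texture coordinates linearly in screen space, while perspective-correct rendering interpolates u/w and 1/w and divides back per pixel.

```python
def affine_u(u0, u1, t):
    # Affine: interpolate u linearly in screen space (PS1-style).
    return u0 + (u1 - u0) * t

def perspective_u(u0, w0, u1, w1, t):
    # Perspective-correct: interpolate u/w and 1/w linearly,
    # then divide back per pixel.
    u_over_w = u0 / w0 + (u1 / w1 - u0 / w0) * t
    one_over_w = 1 / w0 + (1 / w1 - 1 / w0) * t
    return u_over_w / one_over_w

# A span from u=0 at depth w=1 to u=1 at depth w=4 (made-up numbers).
# Halfway across the span on screen, the correct answer is u=0.2,
# but affine interpolation says u=0.5: the texture visibly "swims".
for t in (0.25, 0.5, 0.75):
    print(t, affine_u(0.0, 1.0, t), perspective_u(0.0, 1.0, 1.0, 4.0, t))
```

The error grows with the depth ratio across the polygon, which is why large floors and walls were the worst offenders on the PS1.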
Nintendo managed largely not-janky graphics with the N64, but it did come out one to two years after the Saturn and PlayStation.
“Not janky” is a weird way of describing the N64’s graphics. Sure Mario looked good, but have you seen most other games on that platform?
Well, it had a proper Z-buffer so textures didn't wiggle. Now, the fog, draw distance, and texture resolution combined with blurring were terrible.
It was basically the haze console.
Hope it's okay that I just reply to everyone
Wiggling is down to lack of precision and lack of subpixel rendering, unrelated to Z buffering. Z buffers are for hidden surface removal, if you see wiggling on a single triangle floating in a void, it's not a Z buffer problem.
When you see models clipping through themselves because the triangles can't hide each other, that's the lack of Z buffer.
Thanks for clarifying. I knew I was getting something wrong, but can never remember all the details. IIRC PS1 also suffered from render order issues that required some workarounds, problems the N64 and later consoles didn't have.
The lack of media storage was the thing that kind of solidified a lot of those issues. Many who worked on the N64 have said that the texture cache on the system was fine enough for the time. Not great, but not terrible. The issue was that you were working in an 8MB or 16MB space for the entire game. 32MB carts were rare, and fewer than a dozen games ever used 64MB carts.
Yeah. I'm not what one would call a graphics snob, but I found the N64 essentially unplayable even at the time of its release. With few exceptions, nearly every game looked like a pile of blurry triangles running at 15fps.
I always felt like N64 games were doing way too much to look good on the crappy CRTs they were usually hooked up to. The other consoles of the era may have had more primitive GPUs, but for the time I think worse may have actually been better, because developers on other platforms were limited by the hardware in how illegible they could make their games. Pixel artists of the time had learned to lean into and exploit the deficiencies of CRTs, but the same tricks can't really be applied when your texture is going to be scaled and distorted by some arbitrary amount before making it to the screen.
Part of this was due to Nintendo's TRC. It also didn't help that, due to the complexity of the graphics hardware, most developers were railroaded into using things like the Nintendo-provided microcode just to run the thing decently.
Goldeneye was pretty great for the time!
Janky is literally the PSX's style, due to its lack of floating-point capability.
[0]: https://youtu.be/x8TO-nrUtSI?t=222
No, it's due to limited precision in the vertices. If you had 64 bit integers you could have 32.32 fixed-point and it would look as good as floating-point.
Did you watch the video?
Compare it to the Playstation, which could not manage proper texture projection and also had such poor precision in rasterization that you could watch polygons shimmer as you moved around.
The N64, in comparison, had an accurate and essentially modern (well, "modern" before shaders) graphics pipeline. The deficiencies in its graphics were not having nearly enough graphics-specific RAM (you only had 4KB total as a texture cache, half that if you were using some features! Though crazy people figured out you could swap in more graphics from the CARTRIDGE if you were careful) and god-awful bilinear filtering on all output.
Interestingly, the N64 actually had some sort of precursor to shaders in the form of the RSP "microcode". Unfortunately there was initially no documentation, so most developers just used the code provided by Nintendo, which wasn't very optimized and didn't include advanced features. Only in recent years have homebrew people really pushed the limits here with "F3DEX3".
I think that's a frequent misconception. The texture filtering was fine, it arguably looks significantly worse when you disable it in an emulator or a recompilation project. The only problem was the small texture cache. The filtering had nothing to do with it. Hardware accelerated PC games at the time also supported texture filtering, but I don't think anyone considered disabling it, as it was an obvious improvement.
But aside from its small texture cache, the N64 also had a different problem related to its main memory bus. This was apparently a major bottleneck for most games, and it wasn't easy to debug at the time, so many games were not properly optimized to avoid the issue, and wasted a large part of the frame time with waiting for the memory bus. There is a way to debug it on a modern microcode though. This video goes into more detail toward the end: https://youtube.com/watch?v=SHXf8DoitGc
Fun trivia for readers, it isn't even normal 4-tap bilinear filtering, it's 3-tap, resulting in a characteristic triangular blurring that some N64 emulators recreate and some don't. (A PC GPU won't do this without special shaders)
https://filthypants.blogspot.com/2014/12/n64-3-point-texture...
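The 3-tap scheme is simple enough to sketch. This is a hedged approximation of the idea (grayscale floats, made-up sample values), not the exact RDP datapath: the sample's fractional position picks one of two triangles of texels, which are then blended barycentrically.

```python
def bilinear(t00, t10, t01, t11, fx, fy):
    # Standard 4-tap: blend all four surrounding texels.
    top = t00 + (t10 - t00) * fx
    bot = t01 + (t11 - t01) * fx
    return top + (bot - top) * fy

def three_point(t00, t10, t01, t11, fx, fy):
    # N64-style 3-tap: the fractional position picks one of two
    # triangles of texels, blended barycentrically. This is what
    # produces the characteristic triangular blur pattern.
    if fx + fy < 1.0:
        return t00 + (t10 - t00) * fx + (t01 - t00) * fy
    return t11 + (t01 - t11) * (1.0 - fx) + (t10 - t11) * (1.0 - fy)

# One bright texel in the top-left corner, sampled off-center:
# 4-tap sees all four texels, 3-tap only ever sees three.
print(bilinear(1.0, 0.0, 0.0, 0.0, 0.4, 0.4))      # ~0.36
print(three_point(1.0, 0.0, 0.0, 0.0, 0.4, 0.4))   # ~0.20
```

Both filters agree at the texel centers; the difference shows up between them, which is why the artifact reads as diagonal streaking rather than uniform blur.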
It will always be easy to make 3D games that look bad, but on the N64 games tend to look more stable than PS1 or Saturn games. Less polygon jittering[0], aliasing isn't as bad, no texture warping, higher polygon counts overall, etc.
If you took the same animated scene and rendered it on the PS1 and the N64 side by side, the N64 would look better hands down just because it has an FPU and perspective texture mapping.
[0] Polygon jittering is caused by the PS1 only being capable of integer math, so there is no subpixel rendering and vertices effectively snap to a grid.
You can do subpixel rendering with fixed-point math https://www.copetti.org/writings/consoles/playstation/#tab-5...
I thought the problem was that it only had 12 or 16-bit precision for vertex coords, which is not enough no matter whether you encode it as fixed-point or floating-point. Floats aren't magic.
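A toy sketch of what subpixel precision buys you (assumed numbers, not actual PS1 or N64 behavior): snapping a smoothly moving vertex to whole pixels versus keeping a few fractional bits of fixed point.

```python
def snap_integer(x):
    # No subpixel bits: the vertex can only land on whole pixels.
    return float(int(x))

def snap_fixed(x, frac_bits=4):
    # Keep 4 fractional bits (16 subpixel positions per pixel),
    # the kind of fixed-point layout rasterizers commonly use.
    scale = 1 << frac_bits
    return int(x * scale) / scale

xs = [10.0 + 0.1 * i for i in range(12)]  # smooth motion, 0.1 px per step
print([snap_integer(x) for x in xs])  # sticks at 10.0, then pops to 11.0
print([snap_fixed(x) for x in xs])    # advances in 1/16-pixel steps
```

With whole-pixel snapping, every vertex of a slowly moving model pops a full pixel at a time (and each vertex pops on a different frame), which is exactly the crawling/jittering look; the fixed-point version moves in small, consistent steps without any floating point involved.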
In comparison to the other consoles of its generation? It's about as un-janky as things got graphics-wise.
What took Nintendo so long?
I'm not 100% sure of the specifics, but Nintendo took a pretty different approach from Sony or Sega at this time. Sony and Sega both rolled their own graphics chips, and both of them made some compromises and strange choices in order to get to market more quickly.
Nintendo instead approached SGI, the most advanced graphics workstation and 3D modeling company in the world at the time, and formed a partnership to scale back their professional graphics hardware to a consumer price point.
Might be one of those instances where just getting something that works from scratch is relatively easy, but taking an existing solution and modifying it to fit a new use case is more difficult.
The cartridge ended up being a huge sore spot too.
Nintendo wanted it because of the instant access time. That’s what gamers were used to and they didn’t want people to have to wait on slow CDs.
Turns out that was the wrong bet. Cartridges just cost too much and if I remember correctly there were supply issues at various points during the N64 era pushing prices up and volumes down.
In comparison, CDs were absolutely dirt cheap to manufacture. And people quickly fell in love with all the extra stuff that could fit on a disc compared to a small cartridge. There was simply no way anything like Final Fantasy 7 could ever have been done on the N64: games with FMV sequences, real recorded music, just large numbers of assets.
Even if everything else about the hardware was the same, Nintendo bet on the wrong horse for the storage medium. It turned out the thing they prioritized (access time) was not nearly as important as the things they opted out of (price, storage space).
Yes, but I don't see how a game like Ocarina of Time, with its streaming data in at high speed, would have been possible without a cartridge. Each format enabled unique gaming experiences that the other typically couldn't replicate exactly.
Naughty Dog found a solution - constantly streaming data from the disk, without regard for the hardware's endurance rating:
https://all-things-andy-gavin.com/2011/02/06/making-crash-ba...
Crash Bandicoot is a VERY different game from Ocarina of Time. They are not comparable at all. They literally had to limit the field of view in order to get anything close to what they were targeting. Have you played the two games? The point still stands: Zelda, with its vast open worlds, is not feasible on a CD-based console that has a max transfer rate of 300KB/s and the latency of an iceberg.
What ND did with Crash Bandicoot was really cool to see in action (page in/out data in 64KB chunks based on location) but you are right - this relied on a very strict control of visuals. OoT didn't have this limitation.
Not just dirt cheap: the turnaround time to manufacture was significantly lower. Sony had an existing CD manufacturing business and could produce runs of discs in the span of a week or so, whereas cartridges typically took months. That was already a huge plus for publishers, since it meant they could respond more quickly if a game happened to be a runaway success. With cartridges they could end up undershooting, and losing sales, or overshooting and ending up with expensive excess inventory.
Then to top it all off, Sony had much lower licensing fees! So publishers got “free” margin to boot. The Playstation was a sweet deal for publishers.
Tangentially related, but if you haven't already, you should read DF Retro's writeup of the absolutely incredible effort to port the 2 CD game Resident Evil 2 to a single 64MB N64 cartridge: https://www.eurogamer.net/digitalfoundry-2018-retro-why-resi...
Spoilers: it's a shockingly good port.
Nintendo did not approach SGI. SGI was rejected by Sega for the Saturn: Sega felt their offering was too expensive to produce, too buggy at the time (despite Sega spending man-hours helping fix hardware issues), and had no chance of making it to market in time for their plans.
For all we know, Nintendo had no plans past the SNES, except for the Virtual Boy. But then again, the Virtual Boy was another case of Nintendo being approached by a company rejected by Sega…
A heroic attempt to consumerize exotic hardware, and ultimately an unnecessary one, considering the mundane reasons that slowed the N64 down.
The hardware was actually pretty great in the end. The unreleased N64 version of Dinosaur Planet holds up well considering how much more powerful the GameCube was.
Nintendo were largely the architects of their own misery. First, they set expectations sky high with their “Ultra 64” arcade games, then were actively hostile to developers in multiple ways.
The Model 2 arcade hardware cost over $15,000 when new in 1993. Look at the Model 1 and Model 2: that's some serious silicon, multiple layers of PCB stacked with chips. The texture mapping chips came from partnerships with Lockheed Martin and GE. There was no home market for 3D accelerators yet; the only companies doing it were making graphics chips for military training use and high-end CAD work.
https://sega.fandom.com/wiki/Sega_Model_2
https://segaretro.org/Sega_Model_1
Contrast that with the Saturn. Instead of a $15,000 price target they had to design something that they could sell for $399 and wouldn't consume a kilowatt of power.
Although, in the end, I think the main hurdle was a failure to predict the 3D revolution that Playstation ushered in.
That's an even bigger miss on Sega's part then.
Having such kit out in the field should have given Sega good insight into the "what's hot, and what's not" for (near-future) gaming needs.
Which features are essential, what's low hanging fruit, what's nice to have but (too) expensive, performance <-> quality <-> complexity tradeoffs, etc.
Besides having hardware & existing titles to test-run along the lines of "what if we cut this down to... how would it look?"
Not saying Sega should have built a cut-down version of their arcade systems! But those could have provided good guidance & inspiration.
But they had the insight. And the insight they got was that 3D was not there yet for the home market; it was unrealistic to have good 3D for cheap (e.g. no wobbly textures), as it was still really challenging to have good 3D even on expensive dedicated hardware.
Yeah. The 3D revolution was obvious in hindsight, but not so obvious in the mid 1990s. I was a PC gamer as well at the time so even with the benefit of seeing things like DOOM it wasn't necessarily obvious that 2.5D/3D games were going to be popular with the mainstream any time soon.
A lot of casual gamers found early home 3D games kind of confusing and offputting. (Honestly, many still kind of do)
We went from highly evolved, colorful, detailed 2D sprites to 3D graphics that were frankly rather ugly most of the time, with controllers and virtual in-game cameras that tended to be rather janky. Analog controllers weren't even really a prevalent thing for consoles at this point.
Obviously in hindsight the Saturn made a lot of bad bets and the Playstation made a lot of winning ones.
I dunnnnnnnnno?
The secret to high 3D performance (particularly in those simpler days before advanced shaders and such) wasn't exactly a secret. You needed lots of computing horsepower and lots of memory to draw and texture as many polys as possible.
The arcade hardware was so ridiculous in terms of the number of chips involved, I don't even know how many lessons could be directly carried over. Especially when they didn't design the majority of those chips.
Shrinking that down into a hyper cost optimized consumer device relative to a $15K arcade machine came down to design priorities and engineering chops and Sega just didn't hit the mark.
It's been years since I read the book "Console Wars", but if memory serves me correctly, SGI shopped their tech to SEGA first before Nintendo secured it for the N64.
Yep, Sega had a look at SGI's offering and rejected it. One of the many reasons they did so was because they thought the cost would be too high due to the die size of the chips.
Kind of funny considering the monstrosity the Saturn ended up becoming.
I do wonder what would have happened if the N64 had included a much bigger texture cache. It seemed the tiny size was its biggest con.
The other big problem with the N64 was that the RAM had such high latency that it completely undid any benefit from the supposedly higher bandwidth that RDRAM had and the console was constantly memory starved.
The RDP could rasterize hundreds of thousands of triangles a second but as soon as you put any texture or shading on them, the memory accesses slowed you right down. UMA plus high latency memory was the wrong move.
In fact, in many situations you can "de-optimize" the rendering to draw and redraw more, as long as it uses less memory bandwidth, and end up with a higher FPS in your game.
That's mostly correct. It is as you say, except that shading and texturing come for free. You may be thinking of Playstation where you do indeed get decreased fillrate when texturing is on.
Now, if you enable 2-cycle mode, the pipeline will recycle the pixel value back into the pipeline for a second stage, which is used for 2 texture lookups per pixel and some other blending options. Otherwise, the RDP is always outputting 1 pixel per clock at 62.5 MHz (though it will be frequently interrupted because of RAM contention). There are faster drawing modes, but they are for drawing rectangles, not triangles. It's been a long time since I've done benchmarks on the pipeline though.
You're exactly right that the UMA plus high-latency memory murders it. It really does. Enable the Z-buffer? Now the poor RDP is thrashing read-modify-writes and you only get 8-pixel chunks at a time. Span caching is minimal. Simply using the Z-buffer will torpedo your effective fill rate by 20 to 40 percent. That's why stuff I wrote for it avoided using the Z-buffer whenever possible.
The other bandwidth hog was enabling anti-aliasing. AA processing happened in 2 places: first in the triangle drawing pipeline, for interior polygon edges; secondly, in the VI when the framebuffer gets displayed, which applies smoothing to the exterior polygon edges based on coverage information stored in the pixels' extra bits.
On average, you get a roughly 15 to 20 percent fillrate boost by turning both of those off. If you run only at low-res, it's a bit less, since more of your render time is occupied by triangle setup.
I was misremembering; I was thinking of instances involving the Z-buffer and significant overdraw, as demonstrated by Kaze: https://www.youtube.com/watch?v=GC_jLsxZ7nw
Another example from that video was changing a trig function from a lookup table to an evaluated approximation, which improved performance because it uses less memory bandwidth.
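The trade-off can be sketched like this (a generic truncated-Taylor sine, not Kaze's actual code): the lookup table costs a RAM fetch per call, while the polynomial is a handful of multiplies that can stay in registers.

```python
import math

# "LUT" approach: precomputed table, one memory read per lookup.
TABLE_SIZE = 256
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lut(x):
    idx = int((x % (2 * math.pi)) / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[idx]

def sin_poly(x):
    # Odd polynomial (Taylor series through x^7), evaluated with a few
    # multiplies; accuracy degrades to a few percent near +/-pi.
    x = (x + math.pi) % (2 * math.pi) - math.pi  # wrap into [-pi, pi)
    x2 = x * x
    return x * (1 - x2 / 6 * (1 - x2 / 20 * (1 - x2 / 42)))

for x in (0.5, 2.0, 4.0):
    print(x, math.sin(x), sin_lut(x), sin_poly(x))
```

On a memory-starved machine like the N64, the polynomial can win even though it does "more work", because the work happens entirely in the CPU while the table read has to go out over the contended bus.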
Was the zbuffer in main memory? Ooof
What's interesting to me is that even Kaze's optimized stuff is around 8k triangles per frame at 30fps. The "accurate" microcode Nintendo shipped claimed about 100k triangles per second. Was that ever achieved, even in a tech demo?
There were many microcode versions and variants released over the years. IIRC one of the official figures was ~180k tri/sec.
I could draw a ~167,600 tri opaque model with all features (shaded, lit by three directional lights plus an ambient one, textured, Z-buffered, anti-aliased, one cycle), plus some large debug overlays (anti-aliased wireframes for text, 3D axes, Blender-style grid, almost fullscreen transparent planes & 32-vert rings) at 2 FPS/~424 ms per frame at 640x476@32bpp, 3 FPS/~331ms at 320x240@32bpp, 3 FPS/~309ms at 320x240@16bpp.
That'd be between around 400k and 540k tri/sec. Sounds weird, right? But that's extrapolated straight from the CPU counter on real hardware and eyeballing, so it's hard to argue.
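For anyone who wants to check that extrapolation, it's just the triangle count divided by the measured frame times:

```python
# Triangles per frame divided by frame time gives triangles per second,
# using the counts and timings quoted above.
tris = 167_600
for label, ms in [("640x476@32bpp", 424),
                  ("320x240@32bpp", 331),
                  ("320x240@16bpp", 309)]:
    print(label, round(tris / (ms / 1000)), "tri/sec")
```

That lands at roughly 395k, 506k, and 542k tri/sec for the three configurations, matching the 400k-540k range.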
I assume the bottleneck at that point is the RSP processing all the geometry, a lot of them will be backface culled, and because of the sheer density at such a low resolution, comparatively most of them will be drawn in no time by the RDP. Or, y'know, the bandwidth. Haven't measured, sorry.
Performance depends on many variables, one of which is how the asset converter itself can optimise the draw calls. The one I used, a slight variant of objn64, prefers duplicating vertices just so it can fully load the cache in one DMA command (gSPVertex) while also maximising gSP2Triangle commands IIRC (check the source if curious). But there's no doubt many other ways of efficiently loading and drawing meshes, not to mention all the ways you could batch the scene graph for things more complex than a demo.
Anyways, the particular result above was with the low-precision F3DEX2 microcode (gspF3DLX2_Rej_fifo), it doubles the vertex cache size in DMEM from 32 to 64 entries, but removes the clipping code: polygons too close to the camera get trivially rejected. The other side effect with objn64 is that the larger vertex cache massively reduces the memory footprint (far less duplication): might've shaved off like 1 MB off the 4 MB compiled data.
Compared to the full precision F3DEX2, my comment said: `~1.25x faster. ~1.4x faster when maxing out the vertex cache.`.
All the microcodes I used have a 16 KB FIFO command buffer held in RDRAM (as opposed to the RSP's DMEM for XBUS microcodes). It goes like this if memory serves right:
1. CPU starts RSP graphics task with a given microcode and display list to interpret from RAM
2. RSP DMAs display list from RAM to DMEM and interprets it
3. RSP generates RDP commands into a FIFO in either RDRAM or DMEM
4. When output command buffer is full, it waits for the RDP to be ready and then asks it to execute the command buffer
5. The RDP reads the 64-bit commands via either the RDRAM or the cross-bus which is the 128-bit internal bus connecting them together, so it avoids RDRAM bus contention.
6. Once the RDP is done, go to step 2/3.
To quote the manual:
I'm glad someone found objn64 useful :) looking back it could've been optimized better but it was Good Enough when I wrote it. I think someone added png texture support at some point. I was going to add CI8 conversion, but never got around to it.
On the subject of XBUS vs FIFO, I trialled both in a demo I wrote with a variety of loads. Benchmarking revealed that over 3 minutes, the difference between the two methods was under a second. So in my time messing with them I never found XBUS to help with contention. I'm sure in some specific application it might be a bit better than FIFO. By the way, I used a 64k FIFO size, which is huge. I don't know if that gave me better results.
Oh, you're the MarshallH? Thanks so much for everything you've done!
I'm just a nobody who wrote DotN64, and contributed a lil' bit to CEN64, PeterLemon's tests, etc.
For objn64, I don't think PNG was patched in. I only fixed a handful of things like a buffer overflow corrupting output by increasing the tmp_verts line buffer (so you can maximise the scale), making BMP header fields 4 bytes as `long` is platform-defined, bumping limits, etc. Didn't bother submitting patches since I thought nobody used it anymore, but I can still do it if anyone even cares.
Since I didn't have a flashcart to test with for the longest time, I couldn't really profile, but the current microcode setup seems to be more than fine.
Purely out of curiosity, as I now own an SC64: is the 64drive abandonware? I tried reaching out via email a couple of times since my 2018 order (receipt #1532132539), and I still don't know if it's even in the backlog or whether I could update the shipping address. You're also on Discord servers, but I didn't want to be pushy.
I don't even mind if it never comes, I'd just like some closure. :p
Thanks again !
Did you see the video of the guy who super-optimized Mario 64 to run at 60 fps? https://youtu.be/t_rzYnXEQlE?si=MpucGm0r_5KN-Nc_
So Kaze is hitting 240k tris/second right?
Recently, Sauraen on YouTube demonstrated their performance profiling of their F3DEX3 optimizations. One thing they could finally do was profile the memory latency, and it is BAD! Out of a frame render time of 50ms, about 30ms is the processors just waiting on the RAM. Essentially, at least in Ocarina of Time, the GPU is idle 60% of the time!
Whole video is fascinating but skip to the 29 minutes mark to see the discussion of this part.
https://www.youtube.com/watch?v=SHXf8DoitGc
Wasn't the thing you put in the slot in front of the cart a RAM extension?
I think you can play Rogue Squadron with and without if you want to compare.
Or do you mean some lower cache level?
That pak added 4MB of extra RAM; OoT and Majora's Mask are like night and day thanks to it.
The N64 had mere kilobytes of texture cache. AFAIK the solution was to stream textures, but it took a while for developers to figure that out.
The N64 had a 4KB texture cache while the PS1 had a 2KB cache. But the N64's mip-mapping requirement meant that it essentially had 2KB plus lower-resolution maps.
The streaming helped it a lot but I think the cost of larger carts was a big drag on what developers could do. It is one thing to stream textures but if you cannot afford the cart size in the first place it becomes merely academic.
I don’t think storage space was an issue for graphics in particular. Remember the base system had 4MB memory and 320x240 resolution
The real problem was different. The PS1 cache was a real cache, managed transparently by the hardware. Textures could take the full 1 MB of VRAM (minus the framebuffer, of course).
In contrast, the N64 had a 4 kB texture RAM. That's it, all your textures for the current mesh had to fit in just 4 kB. If you wanted bigger textures, you had to come up with all sort of programming tricks.
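To make the 4 kB budget concrete, here's a rough back-of-the-envelope sketch of the largest square power-of-two texture that fits at each texel size (ignoring the RDP's other per-format limits and palette storage, so treat it as a ballpark):

```python
TMEM_BYTES = 4 * 1024  # the N64's texture memory

def max_square_texture(bits_per_texel, budget=TMEM_BYTES):
    # Double the side length until the next power-of-two square
    # would no longer fit in the budget.
    side = 1
    while (side * 2) ** 2 * bits_per_texel // 8 <= budget:
        side *= 2
    return side

for bpp in (4, 8, 16, 32):
    s = max_square_texture(bpp)
    print(f"{bpp:2} bpp -> {s}x{s}")
```

Even at 4 bits per texel you top out around 64x64 per load, which is why N64 games lean so heavily on tiny tiled textures stretched by the bilinear filter, or on streaming tricks.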
While this is true, I still think the PlayStation had the most interesting and forward-looking design of its generation, especially considering the constraints. The design was significantly cheaper than both the Saturn and the Nintendo 64; it was fully 3D (compared to the Saturn, for example); using CDs as media was spot-on; and it had the MJPEG decoder, which gave the PlayStation not only significantly higher video quality than its rivals, but also let video be used for backgrounds, for much better quality graphics (see for example the Resident Evil or Final Fantasy series).
I really wanted to see a design inspired by the first PlayStation with more memory (since the low memory compared to its rivals seemed to be an issue, especially in e.g. 2D fighting games, where the number of animations had to be cut a lot compared to the Saturn) and maybe some more hardware accelerators to help fix some of the issues that plagued the platform.
It is not really any more 3D than the Saturn, as it still does texture mapping in 2D space, same as the Saturn. Its biggest advantage when it came to 3D graphics, aside from higher performance, was its UV mapping. They both stretch flat 2D textured shapes around to fake 3D.
The N64 is really far beyond the other two in terms of being "fully 3D", with its fully perspective-correct Z-buffering and texture mapping, let alone mipmapping with bilinear blending and subpixel-correct rasterization.
But the N64 was a more expensive design, it came almost 2 years later, and from an architectural standpoint it also had significant issues (e.g. the texture cache size that someone mentioned above).
This is why I said considering the constraints, I find the first PlayStation to be impressive.
Sure, a year or two back in the 90s was huge. :)
This is very true. I consider the N64 to be the first to use hardware vaguely similar to what the rest of the industry ended up with.
It is a shame that SGI's management didn't see a future in PC 3D accelerator cards; it led to the formation of 3dfx, and with things like that, SGI's value in the market was crushed astoundingly fast. They had the future, but short-term thinking blinded them to the path ahead.
Oh, I was not criticizing the article per se, my apologies if it came across as such; I just thought this piece of information was important to understand why they ended up with such a random mash of chips.
Ah no worries! From my side I was only trying to explain more about the origins of the article, since I see it often mentioned/speculated in many forums.
By the way, I'm always open to criticism! (https://github.com/flipacholas/Architecture-of-consoles/issu...)
Thanks for the link – just opened an issue concerning the font weight and text color on the site.
This is largely incorrect. The Saturn was entirely a Sega of Japan design. There's an interview (https://mdshock.com/2020/06/16/hideki-sato-discussing-the-se...) with the Saturn hardware designer that gives some perspective into why he chose to make the hardware the way he did. Basically, he knew that 3D was the future from the response the PSX was getting, but besides AM2 (the team at Sega that did 3D arcade games like Virtua Fighter, Daytona USA, etc), all of Sega's internal expertise was on traditional 2D sprite-based games. Because of this, he felt the best compromise was to make a console that excelled at 2D games and was workable at 3D games. I think his biggest mistake was that he underestimated how quickly the industry would switch to mainly focusing on 3D.
The actual result of Sega's infighting was far more stupid, IMO. Sega of America wanted a more conservative design than the Saturn, using a Motorola 68020 (successor to the 68000 in the Genesis), which would have lower performance, but developers would be more familiar with the hardware. After they lost this fight, they deemed the Saturn impossible to sell in the US due to its high price.

SOA then designed the 32X, a $200 add-on to the Genesis that used the same SH2 processors as the Saturn but drew graphics entirely in software and overlaid them on top of the Genesis graphics. The initial plan was that the Saturn would remain exclusive to Japan for 2-3 years while the 32X would sell overseas. Sega of America spent a ton of money trying to build interest in the 32X and focused their internal development exclusively on it. However, both developers and the media were completely uninterested in it compared to the Saturn.

After it became evident that the 32X wouldn't hold the market, Sega of America rushed the Saturn to market to draw attention away from the 32X, but had to rely exclusively on Japanese titles (many of which didn't fit the American market) because they'd spent the past year developing 32X titles (the 32X had more cancelled games than released ones). All of this ended up confusing and pissing off developers and consumers.
So that is the background for the 32X. Thanks.
I was on team N, and I was always confused by the weird accessories for the Genesis; the 32X's timing was always one of the most confusing bits, but I'd never actually looked into it.
I generally was too. There were some fun games on the 32X, but I bought it at fire sale prices after it failed.
Unfortunately the combination of the 32X mistake plus the rushed Saturn launch just annoyed all partners, retail and development. It’s likely a big reason the Dreamcast did so poorly.
It was a nice system but Sega was already on their back foot, not many people trusted them, and piracy was way too easy. Their partner in MS wasn’t helpful. And then the PS2 was coming…
The DC was doing well, but the bank account was already overdrawn.
Piracy wasn't a big factor, since very very few people had broadband and CD-R drives in 2000. The attach rate was reportedly better than average. And the MS thing was just the availability of middleware that a few games used.
Had the console continued on, living a full five years or something, piracy would have gotten worse as more and more CD burners became commonplace.
Didn’t they have to pay MS a small license fee for each unit? I’m assuming that was a drag too. Not one that killed it, but just another little kick.
The MilCD exploit was already patched in the last hardware revisions (VA2) so I imagine it would be a bit like the Switch (IIRC) where early models are vulnerable to exploits and more desirable on the second-hand market.
I'm not sure what the agreement with Microsoft was, but it was probably on a per-game basis if the developers wanted to use Windows CE.
Oh I didn’t know it was patched in hardware.
I knew it was per game whether the software actually used the Windows CE stuff. I'm really not sure it was ever used much at all. I know the first version of Sega Rally used it, but it performed so poorly that they later put out an updated version without it to fix the issues. And I'm not sure the bad version even came to the States.
Technically I misspoke. Some VA2 DCs used the older BIOS with the exploit still there, but most did have it patched.
One Reddit thread I see claims that Microsoft actually paid Sega for the Windows CE logo on the front of the system. But as you said, not many games used it.
You COULD d/l a game on 56k; it worked best if you had a dedicated phone line. I believe it was just under 48hrs for a large game… I had almost every game! I loved DC
Railroad Tycoon for the win as a WinCE game. I also didn't mind the web browser and some of the online stuff - I only had dial-up on the Dreamcast; broadband adapters were extremely rare! Plus I didn't have broadband anyway.
Crazy taxi was a real favorite as well. I worked all summer in high school and my first purchase was a Dreamcast. I was the first person I knew in town that got the utopia boot disc, and then on from there. Discjuggler burns for self boots, manual patching - unlocked Japanese games. What a fun time!
I also love DC - enough to be typing this with an arm with 2 DC tattoos :D
But I've never taken it online. I'll be getting a DreamPi at some stage, but sadly shipping to where I am takes it out of "impulse buy" territory.
Perhaps things were different in my neck of the woods, but my recollection is different: By the turn of the century, CD recorders were pretty common amongst even some of the non-geek computer users I knew. Most of the extant drives were not very good, but by then there were also examples of absolutely stellar drives (some of which are still considered good for burning a proper CD-R, even though they're a ~quarter-century old, like the Plextor PR-820).
I don't recall any particular difficulty in downloading games over 56k dialup around Y2K with a modem, though it was certainly a good bit faster with ISDN (and a couple of years later, 2Mbps-ish DOCSIS).
It was not fast with dialup -- it took a day or so, or longer if you slowed it down to avoid what we now call bufferbloat -- but some of us had time, and a ye olde Linux box that was going to be dialed in ~24/7 anyway.
But usually, the route to piracy that I recall being most-utilized back then involved renting a game from the video store for a few dollars and making a copy (or a few, for friends).
I can only speak from my own experience, but I was on dialup until 2002 and had 1 friend with both broadband and a CD-R drive around the DC's lifespan. We burned many DivX files to Video CDs, but no games (his dad brought back a chipped PS1 and lots of burned games from Kosovo though)
Perhaps the availability of burning games wasn't the right lens for me to look at it, but if the attach rate of 8 games per system is true (as I linked to in another reply) then that's similar to the NES and SNES and not at all a factor in the system's demise.
Because the DC's games were on Yamaha 1GB CDs ("GD-ROM") I doubt popping the disc in a PC and running Nero would have been possible. Most dumps were probably made using chipped consoles and a coder's cable or BBA.
And I can only speak for myself, too, but: By Y2K, I had dual-B channel [128kbps] ISDN dialup with a local ISP. (As housing situations changed, it ebbed and flowed a bit after that for me at home -- at one point after ISDN, I was doing MLPPP with three 33.6k modems and hating every minute of it. But by 2001, I had unfettered access to 860/160kbps ADSL at a music studio I helped build that I could drive to in about 20 minutes, any time of day or night -- by then, "broadband" was common in my old turf. And by mid-2002, I had DOCSIS at home. Changes were happening fast back then, for me.)
The Dreamcast did use GD-ROM, which did hold up to around 1GB, and the usual way to rip those was to connect the [rooted/hacked/jailbroke/whatever-the-term-was] DC to the PC, so the Dreamcast could do all of the reading and transfer that data to the PC for processing and burning.
But even though GD-ROM could hold 1GB, a lot of DC games were (often quite a lot) less than ~650MB or ~700MB of actual-data, so they often fit well onto mostly-kinda-standard CD-R blanks of the day without loss of game data.
There were also release groups -- because of course there were -- that specialized partly in shrinking a larger (>650 or >700MB) DC title down to a size that would fit onto a CD-R, or sometimes onto two of them. (That involved Deep Magic, for the time, and implicitly involved a download stage.)
And my friends and family? I had one sibling who danced around with computers, and she had a (flaky AF, but existing) CD burner a year or two before I did, and most of my computer-geek friends were burning discs before Y2K as well -- even if it meant using a communal CD burner at a mutual friend's house.
But again, that's just my story -- which I've simplified to remove some stages from.
I don't doubt your own story even a little bit. Gaming for high-speed internet access, even in properly-large cities, was kind of the wild west around that time, and I agree that CD burning was unusual around that time.
I was a bit early for getting things done fast in my own area of small-town Ohio, and I think I was generally good at exploiting available options.
(Not all ideas were good ideas: I once helped negotiate and implement a deal to run some Ethernet cable overhead across a parking lot to a small ISP next door, so we could have access to a T1 at the shop I was working at, for $40/month. The money was right according to all parties, and we didn't abuse it too bad or too often...until it all got blown up by lightning, with huge losses on both ends of that connection.
The ISP never fully recovered and died absolutely a month or two later.
Which, incidentally, is how I learned to never, ever run Cat5 between two buildings. Woops.)
My own story is complicated by the fact that my family moved trans-Atlantic at the time. In fact, I got a DC in the UK after the PS2 because we moved back and I got a system and games for cheap from a newspaper classified ad, around late 2002.
I suspect my parents didn't feel the need to upgrade from dial-up to DSL or DOCSIS while we were planning a 5000 mile move. We went from 1p-a-minute dial up ("put a timer on the stove, you have 15 minutes!") in 1998 to unlimited (but still with one phone line; drag the cord from the PC to the kitchen and hope Grandma doesn't try to call) a year-ish after, and then got 1.2Mbps DSL once we were in Canada in mid 2002.
My friend down the street upgraded and got a new PC around 1999 or 2000. I don't remember the specs, but it was much nicer than my dad's 300MHz K6-2 with no USB ports. For a while we even needed to use CursorKeys because the V90 modem used the same COM port as the mouse! So I used their PC for Napster, then Kazaa and WinMX, for 128kbps MP3s and "which pixel is Neo?" 700MB copies of The Matrix (sidenote: I warned him against searching for xXx featuring Vin Diesel, but he had to learn the hard way...)
We were aware of "chipped" consoles, and my family even asked the local indie game store if they could do it ("That'z Entertainment" in Lakeside, if any locals are reading this) but my only experiences with game piracy at that point was my dad's friend who had a modded PSX ("holy crap it says SCEA instead of SCEE!") and then the same friend as the paragraph above getting a PSX that his Army dad brought back from Eastern Europe along with Ice Age on DVD.
DC piracy was 100% a thing in its lifetime, but I still don't think it was widespread enough to have harmed the console's chances. The company was just out of cash.
Even years later I was annoyed that the CDI version of Skies of Arcadia (Echelon, maybe?) didn't support the VGA box.
You've travelled a lot more than I have. I've mostly stuck around Ohio.
I suspect that we're about the same age.
Man, com ports and IRQs: I had 14 functional RS-232 ports on one machine, once, between the BocaBoard (with 10P10C jacks), the STB serial card that 3dfx promised to never erase documentation for (it is erased -- *thanks, nVidia*), and the couple of serial ports that were built into a motherboard or multi-IO card at that time.
Once configured to be actually-working, which I did mostly as a dare to myself, I could find no combination of connected serialized stuff that would upset it.
And that was fun having a ridiculous amount of serial ports: I had dumb terminals scattered around, and I had machines connected with SLIP and PLIP. (Ethernet seemed expensive, and "spare laptops" or "closet laptops" were not at all a thing yet.)
Anyhow, piracy: My friends and I were mostly into PSX back then. I may or may not have installed a dozen mod chips for my friends so I could get a discount on my own mod chip. (I was not trying to make money.)
Aaand there may have been quite a lot of piracy. I remember coming home and checking the mailbox to find a copy of Gran Turismo 2, with a hand-written note from a friend who I didn't expect to be able to succeed in duping a disc with Nero, but he'd done it, and hand-delivered it, and it worked. I subsequently played the fuck out of that game.
But you've clearly got a different perspective. And that's interesting to me.
I've never actually-played a CDI system (I do recall poking at them in retail displays), and I don't know what "the VGA box" is in this context. Can you elaborate?
Piracy of CD based games in the late 90s most assuredly did not require broadband or personal ownership of a CD-R drive. A lot of piracy happened via SneakerNet. As long as you knew someone with games (legit or pirated copies) and a CD-R drive you could get your own copies for the cost of a CD-R disc. Every college had at least one person with a CD binder full of CD-Rs of pirated PC and console games. I suspect a majority of high schools at the time did as well. Dorm ResNets were also loaded down with FTPs and SMB shares filled with games, porn, and warez.
Maybe so. I had an Action Replay disc from the time that presumably used MilCD. But according to this link (sadly Fandom), the attach rate was good: https://vgsales.fandom.com/wiki/Software_tie_ratio#Consoles
So SneakerNet and dorm rooms may have been a factor, but not enough of one to kill the system.
In those days the piracy threat was less "players with broadband and CD-R drives are downloading games for free" and more "flea marketeers with CD-R burners are copying games and selling them to players for a fraction of MSRP".
Oh yes that I definitely knew about, the rushed Saturn launch out of nowhere, as well as its early retirement to make room for the Dreamcast, soured a lot of people.
A shame too, the Dreamcast deserved so much better. It was a great system, and pretty prescient too, at least on the GPU side: Sega America wanted to go with 3dfx, Sega Japan ultimately went with PowerVR, 3dfx turned out to be a dead end, while PowerVR endured until fairly recently (if mostly in the mobile / embedded / power efficient space).
I think PowerVR was a good choice even at the time.
In my teens, my older brother worked at a computer shop and was a hardware geek back then, and he had a Matrox M3D. Frankly, compared to something like a Rush or a Riva 128, the M3D was pretty good outside of some blending moire. That I'm talking about visuals vs perf says something... and the Dreamcast got the version AFTER that...
The biggest thing was their Tile based deferred rendering, which made it easy to create an efficient GPU with a relatively low transistor count.
Also, 3dfx's pattern of 'chip per function', aside from its future scaling issues, would have meant a higher BOM.
-----
All of that said, ever wonder what the video game scene, or Nvidia, would be like today if the latter hadn't derped out on their shot at the Dreamcast in an SEC filing, which got them relegated to the video chip for the Pico?
Credit to the PVR in the Dreamcast (and the entire design of the DC): it was a very efficient processor considering the pricing limitations of the unit. I do love that the Saturn absolutely sucked at transparency effects while the PVR was the complete opposite. Just throw the geometry at it in any order (per tile) and it would sort it out and blend it with no problem.
It was efficient but the performance was definitely the lowest of all consoles that generation.
That was really boneheaded on Sega's part. They introduced the DC in Japan first, killing the Saturn which was quite successful there.
Had they waited, or at least launched in NA/EU in 1998, they could have kept money coming in from Saturn in JP while getting back into the market elsewhere.
32x murdered Sega's goodwill. Then the cost of the Saturn led to the legendary "$299" Sony E3 conference... Then Bernie Stolar and his hate for JRPGs...
The MS thing was actually an important PR olive branch after the Saturn.
Saturn had a somewhat deserved bad rep for API/doc issues, and doing 3D was extra painful between the VDP behavior and quads...
Microsoft was touting DirectX at the time. Even with its warts, it was accepted by devs as 'better than what we used to deal with!'.
All of it was an attempt to signal to game developers: 'Look, we know porting stuff to the Saturn sucked. Our API is better, and if you're worried, you're doing more PC ports now anyway, so this is a path.'
If anything, I'd say the biggest tactical 'mistake' there was that in providing that special Windows CE, Microsoft probably got a LOT of feedback and understanding of what console devs want in the process, which probably shaped future DirectX APIs as well as the original XBox.
If the PS1 "$299" conference was the sucker punch, PS2's "Oh it's also a DVD Player" was the coupe-de-grace. I knew a LOT of folks that 'waited' but DVD was the winning factor.
Coup-de-grâce. No e at the end of coup.
Some consider the original Xbox as a sequel to the Dreamcast because it reused some of the principles and had some of the same people working on it. Heck, even the original chunky Xbox controller looks more like the Dreamcast and lesser known Saturn 3D controller than it does like modern Xbox controllers.
Now that's a good interview.
> I think his biggest mistake was that he underestimated how quickly the industry would switch to mainly focusing on 3D.
I think his mistake was a bit more subtle than that. Because he didn't have any experience in 3D or anyone to ask for help, he didn't know which features were important and which could be ignored. And he ended up missing two important features that would have brought the Saturn up to the standard of "workable 3D".
The quads weren't even that big of a problem. Even if the industry did standardise on triangles for 3D hardware, a lot of artist pipelines stick with quads as much as possible.
The first missing feature is texture mapping. Basically the ability to pass in uv coordinates for each vertex (or even just a single uv offset and some slopes for the whole quad). The lack of texture mapping made it very hard to export or convert 3D models from other consoles. Instead, artists had to create new models where each quad always maps to an 8x8 texel quad of pixels.
The second missing feature is alpha blending, or semitransparent quads. The Saturn did support half-transparency, but it only worked for non-distorted sprites, and you really want more options than just 50% or 100%.
With those two features, I think the Saturn would have been a workable 3D console. Still not as good as the playstation, but probably good enough for Sega to stand its ground until the Dreamcast launched.
These two situations are fundamentally related. Most 3D rasterizers, including the PlayStation's, use inverse texture mapping: iterating over framebuffer pixels to find texels to sample. The Saturn uses forward texture mapping: iterating over the texels and drawing them to their corresponding framebuffer pixels.
The choice to use forward mapping has some key design consequences. It isn't practical to implement UV mapping with forward mapping, and quads also become a more natural primitive to use.
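The difference can be sketched in a few lines. This is an illustrative 1-D toy, not an accurate model of any console's hardware: inverse mapping loops over destination pixels and always fills each one, while forward mapping loops over source texels and can leave pixels untouched (or hit them twice) when the mapping stretches or shrinks.

```python
def inverse_map(texture, screen_width):
    """Iterate over screen pixels, sampling the texel each pixel covers."""
    scale = len(texture) / screen_width
    return [texture[int(x * scale)] for x in range(screen_width)]

def forward_map(texture, screen_width):
    """Iterate over texels, writing each to its projected screen pixel."""
    scale = screen_width / len(texture)
    screen = [None] * screen_width
    for u, texel in enumerate(texture):
        screen[int(u * scale)] = texel  # pixels can be hit twice or never
    return screen

tex = ["A", "B", "C", "D"]
print(inverse_map(tex, 6))  # every pixel filled
print(forward_map(tex, 6))  # gaps (None) appear when magnifying
```

Real forward-mapping hardware like VDP1 has to patch those gaps up by repeating texel writes, which is where the overdraw problems discussed below come from.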
I don't consider this to be a significant problem. The checkerboard mesh feature provides a workable pseudo-half-transparent effect, especially with the blurry analog video signals used at the time.
Side note, but forward mapping is also the reason why half-transparency does not work for distorted sprites. Forward mapping means the same pixel may be overdrawn, and if a half-transparent pixel is blended twice, the result is a corrupt pixel.
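A toy per-channel example of that corruption, assuming a simple 50/50 blend: if overdraw blends the same source colour into a pixel twice, the result drifts toward the source instead of staying at the intended 50% mix.

```python
def blend50(dst, src):
    """Naive 50% foreground + 50% background blend on one colour channel."""
    return (dst + src) // 2

background, sprite = 0, 200
once = blend50(background, sprite)   # the intended 50% result
twice = blend50(once, sprite)        # an overdrawn pixel: visibly too bright
print(once, twice)
```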
VDP2 can also provide half-transparent effects in various circumstances - this video provides a comprehensive look at various methods which were used to deliver this effect.
https://www.youtube.com/watch?v=f_OchOV_WDg
Both of these issues can be fixed without switching away from forwards texture mapping.
We have at least one example of this being implemented in real hardware at around the same time: Nvidia's NV1 (their first attempt at a GPU) was a quad-based GPU that used forward texture mapping, and it had both UV mapping and translucent polygons. The Sega Saturn titles that were ported to the NV1 look absolutely great. And the Sega Model 3 also implemented both UV mapping and transparency on quads (I'm just not 100% sure it was using forward texture mapping).
I'm not suggesting the Saturn should have implemented the full perspective-correct quadratic texture mapping that the NV1 had. That would be overkill graphically (especially since the PS1 didn't have perspective-correct texture mapping either), and it probably would have taken up too many transistors.
But even a simple scheme that ignored perspective correctness, where the artist provided UV coords for three of the vertices (and the fourth was implicitly derived by assuming the quad and its UVs were flat), would have provided most of the texture-mapping capability that artists needed. And when using malformed quads to emulate triangles, the three UV coords would match.
And most of the hardware is already there. VDP1 already has hardware to calculate the slopes of all four quad edges from the vertex coords, and an edge walker to walk through all pixels in screen space. It also has similar hardware for calculating the texture-space height/width deltas and walking through texture space. It's just that texture space is assumed to always have slopes of zero.
So those are the only changes we make: VDP1 now calculates proper texture-space slopes for each quad, and the texture-coord walker is updated to handle non-zero slopes. I'm hopeful such an improvement could have been made without massively increasing the transistor count of VDP1, and suddenly the Saturn's texture-mapping capabilities are roughly equal to the PS1's.
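The setup maths this hypothetical scheme needs is tiny (this is a sketch of the proposal above, not actual VDP1 behaviour): given UVs for three corners and the flatness assumption, the implied fourth UV and the per-pixel texture-space slopes fall out of simple subtraction and division.

```python
def quad_uv_setup(uv0, uv1, uv2, screen_w, screen_h):
    """Derive the implied fourth UV and per-pixel texture slopes for a quad,
    assuming its UVs form a flat parallelogram in texture space."""
    # Fourth corner implied by flatness: uv3 = uv1 + uv2 - uv0
    uv3 = (uv1[0] + uv2[0] - uv0[0], uv1[1] + uv2[1] - uv0[1])
    # Texture-space slope per screen pixel along each axis
    du_dx = (uv1[0] - uv0[0]) / screen_w
    dv_dx = (uv1[1] - uv0[1]) / screen_w
    du_dy = (uv2[0] - uv0[0]) / screen_h
    dv_dy = (uv2[1] - uv0[1]) / screen_h
    return uv3, (du_dx, dv_dx), (du_dy, dv_dy)

# A 32x16 pixel quad mapped over a 64x32 texel region:
uv3, slope_x, slope_y = quad_uv_setup((0, 0), (64, 0), (0, 32), 32, 16)
```

The existing edge walker would then just add these non-zero slopes per pixel instead of the hardwired zero slopes.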
> I don't consider this to be a significant problem. The checkerboard mesh feature provides a workable pseudo-half-transparent effect
I think you are underestimating just how useful proper transparency is for early 3D hardware.
(by proper transparency, I mean supporting the add/subtract blend modes)
And I haven't put it on my list of "important missing features" merely because games sometimes want to render transparent objects. As you said, the mesh (checkerboard) mode actually works for a lot of those cases, and if you are doing particles you can often get away with non-distorted quads, which do work with the half-transparent mode (though, from an artistic perspective, particles would look a lot better if they could use the add/subtract modes instead).
No. Proper transparency is on my list because it's an essential building block for emulating advanced features that aren't natively supported on the hardware.
One great example is the environment mapping effect that is found in a large number of ps1 racing games. Those shiny reflections on your car that sell the effect of your car being made out of metal and glass. The ps1 doesn't have native support for environment mapping or any kind of multitexturing, so how do these games do it?
Well, they actually render your car twice. Once with the texture representing the body paint color, and then a second pass with a texture representing the current environment UV mapped over your car. The second pass is rendered with a transparency mode that neatly adds the reflections over the body paint. The player doesn't even notice that transparency was used.
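A minimal per-channel sketch of that second pass (hypothetical values, and 8-bit channels here for readability; the real hardware works on 5-bit channels): the reflection pass simply adds on top of the paint pass, saturating at white.

```python
def add_blend(dst, src):
    """Additive blend with saturation on one 8-bit colour channel."""
    return min(dst + src, 255)

body_paint = 80    # first pass: the car's paint texture
reflection = 100   # second pass: environment texture sampled over the car
final = add_blend(body_paint, reflection)  # paint plus shine
bright = add_blend(200, 100)               # clamps at white instead of wrapping
print(final, bright)
```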
Environment mapping is perhaps the most visually impressive of these advanced rendering tricks using translucency, but you can find dozens of variations spread out though the entire PS1 and N64 libraries.
Probably the most pervasive of these tricks is decals. Decals are great for providing visual variation across large parts of level geometry. Rather than your walls being a single repeating texture, the artist can just slap down random decals every so often: a light switch here, a patch of torn wallpaper there, blood spatter here and graffiti over there. It's the kind of thing the player doesn't really notice except when it's missing.
And decals are implemented with transparency. Just draw the surface with its normal repeating texture, then use transparency to blend the decals over top in a second pass. Sometimes they add, sometimes they subtract, sometimes they just want to skip transparent texels. And because the Saturn doesn't support transparency on 3D quads, it can't do proper decals. So artists need to manually model any decal-like elements into the level geometry itself, which wastes artist time, texture memory and polygons.
For some reason, VDP1 doesn't even support transparent texels on 3D quads, the manual says they only work for non-distorted quads.
> Side note but forward mapping is also the reason why half-transparency does not work for distorted sprites. Forward mapping means that the same pixel may be overdrawn. If a half-transparent pixel is blended twice, the result is a corrupt pixel.
No... This isn't a limitation of forward texture mapping, just a limitation of VDP1's underwhelming implementation of it.
I don't think it would be that hard to adjust the edge walking algorithm to deal with "holes" in a way that avoids double writes.
My gut says this is basically the same problem that triangle-based GPUs run into when two triangles share an edge. The naive approach results in similar issues with both holes and double-written pixels (which breaks transparency). So triangle rasterizers commonly use the top-left rule to ensure every pixel is written exactly once by one of the two triangles based on which side of the edge the triangle is on.
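That rule can be sketched for the shared-edge case (a toy integer rasterizer, not any console's actual hardware): two triangles split a 4x4 quad along its diagonal, and the top-left tie-break ensures every sample inside the quad lands in exactly one triangle, with no gaps and no double-drawn pixels along the shared edge.

```python
def edge(ax, ay, bx, by, px, py):
    """Edge function: positive when (px, py) is on the interior side."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def is_top_left(ax, ay, bx, by):
    # Screen coords with y growing downward: a "top" edge is horizontal
    # heading right, a "left" edge heads upward (toward smaller y).
    return (ay == by and bx > ax) or (by < ay)

def covers(tri, px, py):
    """Pixel is covered if strictly inside, or on a top/left edge."""
    for i in range(3):
        ax, ay = tri[i]
        bx, by = tri[(i + 1) % 3]
        e = edge(ax, ay, bx, by, px, py)
        if e < 0 or (e == 0 and not is_top_left(ax, ay, bx, by)):
            return False
    return True

# Two triangles sharing the diagonal of a 4x4 quad.
t1 = [(0, 0), (4, 0), (0, 4)]
t2 = [(4, 0), (4, 4), (0, 4)]

# Each of the 16 samples is hit by exactly one triangle.
counts = [covers(t1, x, y) + covers(t2, x, y)
          for y in range(4) for x in range(4)]
```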
----------
Though that fix only allows 3D quads to correctly use the half-transparency mode.
I'm really confused as to why VDP1 has such limited blending capabilities. The SNES had proper configurable blending in the early 90s, and VDP2 also has proper blending between its layers.
And it's not like proper blending is only useful for 3D games; it's very useful for 2D games too. Certain effects in cross-platform 2D games can end up looking washed out in the Saturn version because they couldn't use VDP2 for that effect and were limited to VDP1's 50% transparency instead of add/subtract.
VDP1's 50% transparency already pays the cost of a full read-modify-write on the framebuffer, so why not implement the add/subtract modes? A single adder wouldn't be that many transistors. That would be enough to more or less match the PlayStation, which only had four blending modes (50/50, add, subtract, and 25% of background + 100% of foreground).
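Those four blend equations really are cheap to express. A per-channel sketch (clamped to 8 bits here for readability; the real hardware works on 5-bit channels):

```python
def ps1_style_blend(mode, back, front):
    """The four PS1-style blend equations on one colour channel (a sketch)."""
    if mode == 0:
        r = back // 2 + front // 2  # 50% background + 50% foreground
    elif mode == 1:
        r = back + front            # additive
    elif mode == 2:
        r = back - front            # subtractive
    else:
        r = back // 4 + front       # 25% background + 100% foreground
    return max(0, min(r, 255))      # clamp instead of wrapping

print([ps1_style_blend(m, 100, 200) for m in range(4)])
```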
Though, you can go a lot further than just equality with the PS1. The N64 had a fully configurable blender that could take alpha values from either vertex colors or texture colors (or a global alpha value).
> VDP2 can also provide half-transparent effects in various circumstances - this video provides a comprehensive look at various methods which were used to deliver this effect.
> https://www.youtube.com/watch?v=f_OchOV_WDg
Yeah, that's a great video. And it's the main reason why I started digging into the capabilities of the Saturn.
Thanks for the thorough response.
That's a good point which I had not considered. However, just adding support for half-transparency wouldn't have been enough to bring the Saturn up to speed in this regard. Half-transparency is also really damn slow on the Saturn. Quoting the VDP1 User's Manual:
This compounds with VDP1 already having a considerably lower fillrate than the PlayStation's GPU. The PlayStation wasn't just capable of transparency, it was also damn fast at it, so emulating effects with transparency was feasible.
PS2 took this approach to its illogical extreme.
Yeah. You have a point.
My proposed changes only really bring the Saturn's 3D graphics into roughly the same technical capabilities as the Playstation.
They don't do anything to help with the performance issues or the higher manufacturing costs. Or the string of bad moves from SEGA management that alienated everyone.
But an alternative history with these minor changes might have been enough to stop developers complaining about the Saturn's non-standard 3D graphics and keep a healthy market of cross-platform games. It might have also prevented commenters on the internet from making overly broad claims about the Saturn's lack of 3D capabilities.
---------------
Fixing the performance limitations and BoM cost issues would have required a larger redesign.
I'm thinking something along the lines of eliminating most of the specialised 2D capabilities (especially all of VDP2). Unify what's left into a single chip and simplify the five blocks of video memory down to a single block on a 32-bit-wide bus. This would significantly increase bandwidth to the framebuffer, compared to VDP1's framebuffer on its 16-bit bus, while decreasing part count and overall cost.
The increased performance of this new 3D focused VDP should hopefully be powerful enough to emulate the lost VDP2 background functionality as large 3D quads.
I'm surprised forward texture mapping can work at all. What happens if the quad is too big? Does it have gaps?
Forward texture mapping sounds like something I would have imagined as a novice before I learned how inverse texture mapping worked.
It'll draw the same texel to multiple pixels if it has to. One of the Saturn's programming manuals describes, in limited detail, how it attempts to avoid leaving any gaps in polygons.
https://antime.kapsi.fi/sega/files/ST-013-R3-061694.pdf
AFAIK the problems lie in things like clipping/collision detection. Now that you have four points instead of three, there is no guarantee that the polygon is a flat surface.
That's interesting. I always wondered why "polygons" generally ended up being triangles. I guess this is one of the reasons.
That’s one part, and it’s big. I believe the other part is that triangle strips are more efficient because they can share vertices better than quad strips.
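The arithmetic behind that: independent triangles cost 3 vertices each, while a strip reuses the previous two vertices for every new triangle, so n triangles need only n + 2 vertices.

```python
def independent_tri_vertices(n):
    """Vertices submitted when every triangle is sent on its own."""
    return 3 * n

def strip_vertices(n):
    """Vertices submitted for a single triangle strip of n triangles."""
    return n + 2

# A 100-triangle mesh: 300 vertices sent independently vs 102 as one strip.
print(independent_tri_vertices(100), strip_vertices(100))
```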
Of course, but that's not generally true. Why use two triangles for e.g. a rectangular floor? The walls and the ceiling are also rectangular after all. Probably mixing and matching different polygons isn't very efficient either.
About collision: I don't know how it is computed, but if it's by ray casting, that is now also used for lighting effects and sometimes reflections. Which would be another reason to stick to triangles.
That's not a massive problem.
Most games avoid using the same meshes for both rendered graphics and collision detection. Collision detection is usually done against a simplified mesh with far fewer polygons.
Since most games already have two meshes, there isn't a problem making one mesh out of quads and the other out of triangles.
Saturn did support alpha blending on quads (quads are ostensibly just sprites). The problem was that the blending became uneven due to the distortion, i.e. distortion caused the quads to have greater transparency in some parts and lesser transparency in others. This was largely due to developers skewing quads into triangles.
First, it was limited to a single blend equation of 50% foreground + 50% background.
At minimum, it really needed to support the addition/subtraction blend equations.
Second, because it doesn't work correctly for distorted quads, it effectively doesn't exist in the context of the Saturn's 3D capabilities.
It works “correctly” for distorted quads; it's just that pixels overlay as part of the front-to-back rendering of the distortion process, creating combined blend amounts greater than 50% but less than 100%. I.e., literally what I said in my previous comment.
I think we are basically arguing the same thing, but from my point of view I’d say the end result is visually undesirable but technically correct.
The only "correct" that matters here is if it produces a result that's useful for 3D graphics.
The "some pixels get randomly blended multiple times" behaviour is simply not useful. And if it's not useful, then it might as well not exist. And it certainly can't substitute for the more capable blending behaviour that I'm claiming the Saturn should have.
In another comment [1], I go into more detail about both why visually correct (and capable) translucency modes were very important for early 3D consoles and speculate on the minimal modifications to the hardware design of VDP1 to fix translucency for distorted quads, while working within the framework of the existing design.
[1] https://news.ycombinator.com/item?id=39835875
It’s not random. Not even remotely.
It is useful in some scenarios. The problem here is you're describing it as a 3D system, but it really isn't. The whole 3D aspect is more a hack than anything.
YouTube channel GameHut once mentioned that you could do 8 levels of transparency in 3D provided you had no Gouraud shading on the quad. As such it was almost never used, and I believe they used it merely to fade in the level.
That's a VDP2 effect. It can only fade the final framebuffer against other VDP2 layers (and the skybox is implemented with VDP2 layers).
After VDP1 has rendered all quads to the framebuffer (aka VDP2's "sprite data" layer), some pixels in the framebuffer layer can use an alternative color mode. Normal 3D quads (with Gouraud shading) use a 15-bit format with five bits each of red, green and blue representing the final color (and the topmost bit set to one).
Those background elements instead use a paletted color format. The top bit is zero (so VDP2 can tell which format to use on a per-pixel basis), and then there are 3 bits of priority (unused in this case), 3 bits of "color calculation ratio" (which Gamehut used to implement the fade), and a 9-bit color palette index.
So it's not just that you can't use Gouraud shading; these fading objects are also limited to a global total of 512 unique colors, which must be preloaded into VDP2's color RAM (well, I guess you could swap the colors out each frame).
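Packing both pixel formats into 16-bit framebuffer words can be sketched like this (the exact field order within the low 15 bits is illustrative; the key point is the flag bit selecting the interpretation, and that 1 + 3 + 3 + 9 bits fill the word in the paletted case):

```python
def rgb_word(r, g, b):
    """Direct-colour pixel: flag bit set, then 5 bits per colour channel."""
    assert 0 <= r < 32 and 0 <= g < 32 and 0 <= b < 32
    return 0x8000 | (b << 10) | (g << 5) | r

def palette_word(priority, ratio, index):
    """Paletted pixel: flag bit clear, 3-bit priority, 3-bit colour-calc
    ratio, 9-bit palette index."""
    assert 0 <= priority < 8 and 0 <= ratio < 8 and 0 <= index < 512
    return (priority << 12) | (ratio << 9) | index

# The chip checks bit 15 to decide how to interpret each pixel.
print(rgb_word(31, 31, 31) >> 15, palette_word(7, 3, 511) >> 15)
```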
-----
Importantly, since this trick only works against VDP2 background layers, it can't be used to blend two 3D quads against each other.
It also didn’t help that Sony poached a lot of studios with exclusivity deals.
The fact that the Saturn was harder to develop for, had a smaller market share and Sony were paying studios to release on the PlayStation, it’s no wonder Sony won the console wars that generation.
Sony, through their choices, also landed on a bit of luck. They were cheaper than the Saturn, had a machine that was much easier to work with, and were charging developers a lot less in royalties. It also helped that they were out before the N64, with its costly cartridges.
I get what you're saying, but I wouldn't describe that as luck. Their choices weren't made randomly, and they are what led to their success.
interesting. i always thought this was an order from SOJ.
It was. Kalinske tried to push back, but for some reason after all of his success at SoA, SoJ kept undercutting him in the mid 90s with the 32X and early Saturn launch.
I think it is funny that both were wrong. 2D games are still very much being developed and released today, and many outsell 3D games, even AAA 3D games.
People beat Sega up back then, and also now, but I do not think their hardware was that bad. The pricing was a huge issue.
If I were them, I would have NOT released the 32X as a standalone, NOT released the Saturn, and instead focused on a fusion of the two into a standalone cartridge-based system. For everything people criticized the Saturn for, cartridges would have made it work. The N64 showed off the advantages of using cartridges over a CD-based system.
If I were going to release a non-portable gaming system today, I'd build one around cartridges that are PCIe 5.0 x4 based and design it so that a) you can quickly shut off the system and swap the game rather than digging through ads/clunky UI/launcher, b) access and latency times are fast, and c) the hardware is potentially extendable in many different ways.
However, I actually liked the 32X and its games. Maybe that's because I got the 32X on clearance at Walmart for $10 and the games for $1-$2 each, or maybe it was because many of the games were fun and a genuine upgrade over my 16-bit systems.
I don't think the Saturn was bad, but it was overpriced for the time and definitely rushed.
I DO think this rush to make consoles require an internet connection has left a sizable hole in the market, a hole that Nintendo is only half filling. Offline gaming is a thing, and just because a console CAN be connected to the internet doesn't mean it should be. I've been fantasizing about making one in the future. Something that is kind of like the old Steam Machine, but with the aspects I have mentioned. Maybe one day I will. For me it won't be about success, but rather about building something cool.
Anyways, I'm rambling. Have a great night.
they should have focused on supporting the genesis and the saturn for as long as possible.
sega was a dysfunctional business reacting to fickle changes in the market (32x was a response to the Atari Jaguar lol).
probably as they didn't have the strong cash reserves of a nintendo, sony or microsoft.
but ultimately i think they were an arcade business and the transition from arcade to home killed them. the same as snk/neo geo.
While this is entirely an "in retrospect this would have been the best plan!" exercise, the YouTube channel Video Games Esoterica had an interesting idea for an alternative path Sega could have taken.
Namely, lean in hard on the 32X for a year or two to try and slow demand for the PS1 with cheaper hardware, and release the Neptune (a Genesis with a built-in 32X). Then take Panasonic up on its deal and use the M2 platform as the Saturn, releasing it in 1997 with specs far beyond the PS1/N64.
Neat idea but this is all just fantasy stuff at this point.
Thanks for the YT recommendation. Sega alt-history is something I've spent more than a few hours thinking about.
It's hard to really nail down without knowing about chip prices in the day, but my thinking was to wait on the Sega CD until 92 or 93, include a DSP or two for basic 3D operations (like the Sega Virtua Processor in Virtua Racing) and market the CD as allowing for cheaper, bigger games instead of FMV garbage.
Then release the Saturn in 95 as an upgraded version of the same architecture, akin to the SMD/Gen being based on the SMS. Add a newer CPU (SH-3 or PowerPC 603), use an upgraded version of the SMD's graphics chip for 2D (like VDP2 on the Saturn), the 68K for audio, and whatever 3D silicon Sega had developed (probably still quads).
If this was financially / technically possible, it would have staved off Sega's fear about the SMD/Gen falling behind the SNES in 92-95, allowed backwards compatibility, and been less jank to develop for.
This is why I love HN. Busting esoteric misconceptions with detailed knowledge of industry and history. It makes sense why developers hated the Saturn and PSX came out on top. Developer experience is king!
I bought the Saturn on the US launch day and never clearly understood why, for the first maybe 6 months, there were only a handful of titles available. Interesting back story!
The 32X initially sold well, and media coverage was much stronger than the 3DO/Jag/CDi. But SEGA abandoned support for it REALLY quickly. I had a 32X at launch and was excited, but almost all the games sucked. Shadow Squadron is like the only legitimate non-port, original game that's pretty good lol and it came out when the system was about dead.
I think consumers were def skeptical of the 32X despite the marketing, and that was the main thing, in addition to an unimpressive library. Magazine coverage was decent but Gamepro in particular was very negative toward it. Like, Chaotix sucks but the music is amazing, and Gamepro not only trashed the game but gave even the music low marks because "no 32X audio is apparent" lol.
I've looked into it, and from what I can tell, the "3D was added late to the Saturn design" narrative is flawed.
It's commonly cited that VDP2 was added later to give it 3D support. But VDP2 doesn't do 3D at all; it's responsible for the SNES "mode 7" style background layers. If you remove VDP2 (and ignore the fact that VDP2 is responsible for video scanout), the resulting console can still do 3D just fine (many 3D games leave VDP2 almost completely unused). 2D games would take a bit of a quality hit, as they would have to render the background with hundreds of sprites.
If you instead removed VDP1, then all you have left are VDP2's 2D background layers. You don't have 3D and you can't put any sprites on the screen so it's basically useless at 2D games too.
As far as I can tell, the Saturn was always meant to have both VDP1 and VDP2. They were designed together to work in tandem. And I think the intention (from SEGA JP) was always for the design to be a 2D powerhouse with some limited 3D capabilities, as we saw in the final design.
I'm not saying there weren't arguments between SEGA JP and SEGA US. There seems to be plenty of evidence of that. But I don't think they munged the JP and US designs together at the last moment. And the PSX can't have had any influence on the argument, as the Saturn beat the PSX to market in Japan by 12 days.
This is typical of Sega arcade hardware of the era (Model 1 and Model 2); these systems have separate "geometry processor" and "rasterizer" boards, with onboard DSPs. If you squint, the Saturn is what someone might come up with as a cost-optimized version of that architecture.
Yeah. Especially if you rip out VDP2 and some of the more 2D orientated features of VDP1 it looks very much like an attempt at a cost-optimised Model 2.
And if you read the interview that ndiddy linked above (https://mdshock.com/2020/06/16/hideki-sato-discussing-the-se...), it might be accurate to say the Saturn design is what you get when an engineer with no background in 3D graphics (and unable to steal sufficient 3D graphics experience from the Sega Arcade division) attempts to create a cost-optimised Model 2.
I suspect the first pass was just a cost-optimised textured quad renderer, then Sato went back to his 2D graphics experience and extended it into a powerful 2D design.
Is this the PSX[0] you're referring to? I had no idea this existed, or what impact it had on gaming consoles.
Edit (answered): "Why is PlayStation called PSX? Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX)."
[0] https://en.wikipedia.org/wiki/PSX_(digital_video_recorder)
PSX was used to refer to the PlayStation before it was released, and it stuck.
One exception to this is the shmup genre. The Saturn was inundated with Japanese shmups, and many are perfect (or near-perfect) arcade ports.
2D fighters, as well. The port of SFA3 on Saturn trounces the PS1 release, for example.
The latest episode of the excellent video game history podcast They Create Worlds (https://www.theycreateworlds.com/listen) does a good job debunking some of these myths.