
Sega Saturn Architecture – A practical analysis (2021)

nolok
126 replies
1d7h

The article describes it as if the design were surprising with how many chips there were etc., but it's important to understand the context: a complete lack of synergy and a "fight for dominance" between the Japan and USA teams. SEGA JP was making a 2D console, SEGA US was making a 3D console, the JP team was about to win that fight, and then the PSX appeared, so they essentially merged the two together.

You end up with a 2D console with parts and bits of an unfinished 3D console inside it. It makes no sense.

For a tech enthusiast and someone who loves reading dev postmortems, it's glorious. For someone who likes a clean design, it's irksome to no end. For mass gamers of that era, when the big thing was "arcade in your living room", it's a disappointment, and SEGA not knowing which side to focus on didn't help at all.

The Wikipedia article has a lot more details [1]

[1] https://en.wikipedia.org/wiki/Sega_Saturn

flipacholas
61 replies
1d7h

If you check the other articles about the PlayStation [1] and the Nintendo 64 [2], you'll see that the design of a 3D-capable console in the 90s was a significant challenge for every company. Thus, each one proposed a different solution (with different pros and cons), yet all very interesting to analyse and compare. That's the reason this article was written.

[1] https://www.copetti.org/writings/consoles/playstation/

[2] https://www.copetti.org/writings/consoles/nintendo-64/

Grazester
37 replies
1d5h

This should have been no struggle for Sega. They basically invented the modern 3D game and dominated in the arcade with very advanced 3D games at the time. Did they not leverage Yu Suzuki and the AM division when creating the Saturn? Then again rumor has it they were still stuck on 2D for the home market and then saw the PlayStation specs and freaked and ordered 2 of everything in the Saturn.

VyseofArcadia
29 replies
1d4h

In interviews, IIRC, ex-Sega staff have stated that they thought they had one more console generation before a 3D-first console was viable for the home market. Sure, they could do it right then and there, but it would be kind of janky. Consumers would rather have solid arcade-quality 2D games than glitchy home ports of 3D ones. Then Sony decided that the wow factor was worth kind of janky graphics (affine texture mapping, egregious pop-in, only 16-bit color, aliasing out the wazoo, etc.) and the rest is history.

Nintendo managed largely not-janky graphics with the N64, but it did come out about a year and a half to two years after the Saturn and PlayStation.

jandrese
17 replies
1d4h

“Not janky” is a weird way of describing the N64’s graphics. Sure Mario looked good, but have you seen most other games on that platform?

paulryanrogers
3 replies
1d4h

Well it had a proper Z buffer so textures didn't wiggle. Now the fog, draw distance, and texture resolution combined with blurring were terrible.

It was basically the haze console.

01HNNWZ0MV43FF
1 replies
21h28m

Hope it's okay that I just reply to everyone

Wiggling is down to lack of precision and lack of subpixel rendering, unrelated to Z buffering. Z buffers are for hidden surface removal; if you see wiggling on a single triangle floating in a void, it's not a Z buffer problem.

When you see models clipping through themselves because the triangles can't hide each other, that's the lack of Z buffer.
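
A toy illustration of the precision point (made-up numbers, nothing console-specific): with no fractional bits a slowly drifting vertex sits still for a few frames and then hops a whole pixel at once, which is the "wiggle"; with even a few sub-pixel bits the same motion advances smoothly.

    /* Toy sketch: vertex "wiggle" from missing sub-pixel precision.
       Invented values, not real PS1/N64 pipeline code. */
    #include <stdio.h>

    int main(void) {
        for (int frame = 0; frame < 5; frame++) {
            float x = 100.25f + 0.1f * frame;        /* vertex drifting 0.1 px per frame */

            int snapped  = (int)(x + 0.5f);          /* integer-only: moves in 1 px jumps */
            int subpixel = (int)(x * 16.0f + 0.5f);  /* 4 fractional bits: 1/16 px steps  */

            printf("frame %d: snapped=%d  subpixel=%d/16\n", frame, snapped, subpixel);
        }
        return 0;
    }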

paulryanrogers
0 replies
21h13m

Thanks for clarifying. I knew I was getting something wrong, but can never remember all the details. IIRC PS1 also suffered from render order issues that required some workarounds, problems the N64 and later consoles didn't have.

VelesDude
0 replies
14h31m

The lack of media storage was the thing that kind of solidified a lot of those issues. Many who worked on the N64 have said that the texture cache on the system was fine enough for the time. Not great but not terrible. The issue was that you were working within an 8MB or 16MB space for the entire game. 32MB carts were rare and fewer than a dozen games ever used 64MB carts.

JohnBooty
3 replies
1d3h

Yeah. I'm not what one would call a graphics snob, but I found the N64 essentially unplayable even at the time of its release. With few exceptions, nearly every game looked like a pile of blurry triangles running at 15fps.

jordemort
1 replies
1d1h

I always felt like N64 games were doing way too much to look good on the crappy CRTs they were usually hooked up to. The other consoles of the era may have had more primitive GPUs, but for the time I think worse may have actually been better, because developers on other platforms were limited by the hardware in how illegible they could make their games. Pixel artists of the time had learned to lean into and exploit the deficiencies of CRTs, but the same tricks can't really be applied when your texture is going to be scaled and distorted by some arbitrary amount before making it to the screen.

VelesDude
0 replies
14h26m

Part of this was due to Nintendo's TRC. It also didn't help that, due to the complexity of the graphics hardware, most developers were railroaded into using things like the Nintendo-provided microcode just to run the thing decently.

486sx33
0 replies
18h23m

GoldenEye was pretty great for the time!

nebula8804
2 replies
21h39m

Janky is literally the PSX's style due to its lack of floating point capability

[0]:https://youtu.be/x8TO-nrUtSI?t=222

01HNNWZ0MV43FF
1 replies
21h33m

No, it's due to limited precision in the vertices. If you had 64 bit integers you could have 32.32 fixed-point and it would look as good as floating-point.

nebula8804
0 replies
16h4m

did you watch the video?

mrguyorama
2 replies
1d3h

Compare it to the Playstation, which could not manage proper texture projection and also had such poor precision in rasterization that you could watch polygons shimmer as you moved around.

The N64 in comparison had an accurate and essentially modern (well, "modern" before shaders) graphics pipeline. The deficiencies in its graphics were not nearly enough graphics-specific RAM (you only had 4KB total as a texture cache, half that if you were using some features! Though crazy people figured out you could swap in more graphics from the CARTRIDGE if you were careful) and a god-awful bilinear filtering on all output.

cubefox
0 replies
19h32m

well, "modern" before shaders

Interestingly, the N64 actually had some sort of precursor to shaders in the form of the RSP "microcode". Unfortunately there was initially no documentation, so most developers just used the code provided by Nintendo, which wasn't very optimized and didn't include advanced features. Only in recent years have homebrew people really pushed the limits here with "F3DEX3".

and a god awful bilinear filtering on all output.

I think that's a frequent misconception. The texture filtering was fine; it arguably looks significantly worse when you disable it in an emulator or a recompilation project. The only problem was the small texture cache. The filtering had nothing to do with it. Hardware-accelerated PC games at the time also supported texture filtering, but I don't think anyone considered disabling it, as it was an obvious improvement.

But aside from its small texture cache, the N64 also had a different problem related to its main memory bus. This was apparently a major bottleneck for most games, and it wasn't easy to debug at the time, so many games were not properly optimized to avoid the issue and wasted a large part of the frame time waiting for the memory bus. There is a way to debug it with a modern microcode though. This video goes into more detail toward the end: https://youtube.com/watch?v=SHXf8DoitGc

01HNNWZ0MV43FF
0 replies
21h23m

Fun trivia for readers: it isn't even normal 4-tap bilinear filtering, it's 3-tap, resulting in a characteristic triangular blurring that some N64 emulators recreate and some don't. (A PC GPU won't do this without special shaders.)

https://filthypants.blogspot.com/2014/12/n64-3-point-texture...
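
For anyone who wants to poke at it, here's a rough sketch of the idea in C (grayscale texels; the weights follow the commonly cited reconstruction of the N64 filter, and the function names are mine, so treat it as an approximation rather than the exact hardware math):

    /* 4-tap bilinear: blend all four surrounding texels */
    float bilinear4(float t00, float t10, float t01, float t11, float fu, float fv) {
        float top = t00 + fu * (t10 - t00);
        float bot = t01 + fu * (t11 - t01);
        return top + fv * (bot - top);
    }

    /* N64-style 3-tap: blend only the triangle of the texel quad the sample falls in */
    float threepoint(float t00, float t10, float t01, float t11, float fu, float fv) {
        if (fu + fv < 1.0f)   /* lower-left triangle */
            return t00 + fu * (t10 - t00) + fv * (t01 - t00);
        else                  /* upper-right triangle */
            return t11 + (1.0f - fu) * (t01 - t11) + (1.0f - fv) * (t10 - t11);
    }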

VyseofArcadia
1 replies
1d3h

It will always be easy to make 3D games that look bad, but on the N64 games tend to look more stable than PS1 or Saturn games. Less polygon jittering[0], aliasing isn't as bad, no texture warping, higher polygon counts overall, etc.

If you took the same animated scene and rendered it on the PS1 and the N64 side by side, the N64 would look better hands down just because it has an FPU and perspective texture mapping.

[0] Polygon jittering is caused by the PS1 only being capable of integer math, so there is no subpixel rendering and vertices effectively snap to a grid.

01HNNWZ0MV43FF
0 replies
21h29m

You can do subpixel rendering with fixed-point math https://www.copetti.org/writings/consoles/playstation/#tab-5...

I thought the problem was that it only had 12 or 16-bit precision for vertex coords, which is not enough no matter whether you encode it as fixed-point or floating-point. Floats aren't magic.

yellowapple
0 replies
13h0m

In comparison to the other consoles of its generation? It's about as un-janky as things got graphics-wise.

RicoElectrico
10 replies
1d3h

What took so long for Nintendo?

VyseofArcadia
8 replies
1d2h

I'm not 100% sure of the specifics, but Nintendo took a pretty different approach from Sony or Sega at this time. Sony and Sega both rolled their own graphics chips, and both of them made some compromises and strange choices in order to get to market more quickly.

Nintendo instead approached SGI, the most advanced graphics workstation and 3D modeling company in the world at the time, and formed a partnership to scale back their professional graphics hardware to a consumer price point.

Might be one of those instances where just getting something that works from scratch is relatively easy, but taking an existing solution and modifying it to fit a new use case is more difficult.

MBCook
6 replies
1d

The cartridge ended up being a huge sore spot too.

Nintendo wanted it because of the instant access time. That’s what gamers were used to and they didn’t want people to have to wait on slow CDs.

Turns out that was the wrong bet. Cartridges just cost too much and if I remember correctly there were supply issues at various points during the N64 era pushing prices up and volumes down.

In comparison, CDs were absolutely dirt cheap to manufacture. And people quickly fell in love with all the extra stuff that could fit on a disc compared to a small cartridge. There was simply no way anything like Final Fantasy 7 could have ever been done on the N64. Games with FMV sequences, real recorded music, just large numbers of assets.

Even if everything else about the hardware was the same, Nintendo bet on the wrong horse for the storage medium. It turned out the thing they prioritized (access time) was not nearly as important as the things they opted out of (price, storage space).

nebula8804
3 replies
21h28m

There was simply no way anything like Final Fantasy 7 could have ever been done on the N64.

Yes, but I don't see how a game like Ocarina of Time, with its high-speed data streaming, would have been possible without a cartridge. Each format enabled unique gaming experiences that the other typically couldn't replicate exactly.

favorited
2 replies
19h35m

Naughty Dog found a solution - constantly streaming data from the disk, without regard for the hardware's endurance rating:

Andy had given Kelly a rough idea of how we were getting so much detail through the system: spooling. Kelly asked Andy if he understood correctly that any move forward or backward in a level entailed loading in new data, a CD “hit.” Andy proudly stated that indeed it did. Kelly asked how many of these CD hits Andy thought a gamer that finished Crash would have. Andy did some thinking and off the top of his head said “Roughly 120,000.” Kelly became very silent for a moment and then quietly mumbled “the PlayStation CD drive is ‘rated’ for 70,000.”

Kelly thought some more and said “let’s not mention that to anyone” and went back to get Sony on board with Crash.

https://all-things-andy-gavin.com/2011/02/06/making-crash-ba...

nebula8804
1 replies
16h4m

Crash Bandicoot is a VERY different game from Ocarina of Time. They are not comparable at all. They literally had to limit the field of view in order to get anything close to what they were targeting. Have you played the two games? The point still stands: Zelda with its vast open worlds is not feasible on a CD-based console that has a max transfer rate of 300KB/s and the latency of an iceberg.

VelesDude
0 replies
14h7m

What ND did with Crash Bandicoot was really cool to see in action (page in/out data in 64KB chunks based on location) but you are right - this relied on a very strict control of visuals. OoT didn't have this limitation.

rmckayfleming
0 replies
20h56m

Not just dirt cheap, the turnaround time to manufacture was significantly shorter. Sony had an existing CD manufacturing business and could produce runs of discs in the span of a week or so, whereas cartridges typically took months. That was already a huge plus to publishers since it meant they could respond more quickly if a game happened to be a runaway success. With cartridges they could end up undershooting, and losing sales, or overshooting and ending up with expensive, excess inventory.

Then to top it all off, Sony had much lower licensing fees! So publishers got “free” margin to boot. The Playstation was a sweet deal for publishers.

VyseofArcadia
0 replies
23h48m

Tangentially related, but if you haven't already, you should read DF Retro's writeup of the absolutely incredible effort to port the 2 CD game Resident Evil 2 to a single 64MB N64 cartridge: https://www.eurogamer.net/digitalfoundry-2018-retro-why-resi...

Spoilers: it's a shockingly good port.

mairusu
0 replies
21h42m

Nintendo did not approach SGI. SGI was rejected by Sega for the Saturn - Sega felt their offering was too expensive to produce, too buggy at the time despite Sega spending man-hours helping fix hardware issues, and had no chance to make it to market in time for their plans.

For all we know, Nintendo had no plans past the SNES, except for the Virtual Boy. But then again, the Virtual Boy was another case of Nintendo being approached by a company rejected by Sega…

skhr0680
0 replies
1d2h

A heroic attempt to consumerize exotic hardware, and ultimately an unnecessary one, considering the mundane reasons that slowed the N64 down.

The hardware was actually pretty great in the end. The unreleased N64 version of Dinosaur Planet holds up well considering how much more powerful the GameCube was.

/edit

Nintendo were largely the architects of their own misery. First, they set expectations sky high with their “Ultra 64” arcade games, then were actively hostile to developers in multiple ways.

JohnBooty
4 replies
1d3h

    This should have been no struggle for Sega. They basically 
    invented the modern 3D game and dominated in the arcade with 
    very advanced 3D games at the time
Way different challenges!

The Model 2 arcade hardware cost over $15,000 when new in 1993. Look at the Model 1 and Model 2 boards; that's some serious silicon. Multiple layers of PCB stacked with chips. The texture mapping chips came from partnerships with Lockheed Martin and GE. There was no home market for 3D accelerators yet; the only companies doing it were folks creating graphics chips for military training use and high-end CAD work.

https://sega.fandom.com/wiki/Sega_Model_2

https://segaretro.org/Sega_Model_1

Contrast that with the Saturn. Instead of a $15,000 price target they had to design something that they could sell for $399 and wouldn't consume a kilowatt of power.

Although, in the end, I think the main hurdle was a failure to predict the 3D revolution that Playstation ushered in.

RetroTechie
3 replies
1d1h

The Model 2 arcade hardware cost over $15,000 when new in 1993. Look at those Model 1 and Model 2, that's some serious silicon.

That's an even bigger miss on Sega's part then.

Having such kit out in the field should have given Sega good insight into the "what's hot, and what's not" for (near-future) gaming needs.

Which features are essential, what's low hanging fruit, what's nice to have but (too) expensive, performance <-> quality <-> complexity tradeoffs, etc.

Besides having hardware & existing titles to test-run along the lines of "what if we cut this down to... how would it look?"

Not saying Sega should have built a cut-down version of their arcade systems! But those could have provided good guidance & inspiration.

mairusu
1 replies
23h42m

But they had the insight. And the insight they got was that 3D was not there yet for the home market: it was unrealistic to have good 3D for cheap (e.g. no wobbly textures, etc.), as it was still really challenging to have good 3D even on expensive dedicated hardware.

JohnBooty
0 replies
2h3m

Yeah. The 3D revolution was obvious in hindsight, but not so obvious in the mid 1990s. I was a PC gamer as well at the time so even with the benefit of seeing things like DOOM it wasn't necessarily obvious that 2.5D/3D games were going to be popular with the mainstream any time soon.

A lot of casual gamers found early home 3D games kind of confusing and offputting. (Honestly, many still kind of do)

We went from highly evolved, colorful, detailed 2D sprites to 3D graphics that were frankly rather ugly most of the time, with controllers and virtual in-game cameras that tended to be rather janky. Analog controllers weren't even a prevalent thing for consoles at this point.

Obviously in hindsight the Saturn made a lot of bad bets and the Playstation made a lot of winning ones.

JohnBooty
0 replies
2h13m

I dunnnnnnnnno?

The secret to high 3D performance (particularly in those simpler days before advanced shaders and such) wasn't exactly a secret. You needed lots of computing horsepower and lots of memory to draw and texture as many polys as possible.

The arcade hardware was so ridiculous in terms of the number of chips involved, I don't even know how many lessons could be directly carried over. Especially when they didn't design the majority of those chips.

Shrinking that down into a hyper cost optimized consumer device relative to a $15K arcade machine came down to design priorities and engineering chops and Sega just didn't hit the mark.

ac2u
1 replies
1d4h

It's been years since I read the book "Console Wars", but if memory serves me correctly, SGI shopped their tech to SEGA first before Nintendo secured it for the N64.

VelesDude
0 replies
14h4m

Yep, Sega had a look at SGI's offering and rejected it. One of the many reasons they did so was because they thought the cost would be too high due to the die size of the chips.

Kind of funny considering the monstrosity the Saturn ended up becoming.

christkv
14 replies
1d6h

I do wonder what would have happened if the N64 had included a much bigger texture cache. It seemed the tiny size was its biggest con.

mrguyorama
8 replies
1d2h

The other big problem with the N64 was that the RAM had such high latency that it completely undid any benefit from the supposedly higher bandwidth that RDRAM had, and the console was constantly memory-starved.

The RDP could rasterize hundreds of thousands of triangles a second but as soon as you put any texture or shading on them, the memory accesses slowed you right down. UMA plus high latency memory was the wrong move.

In fact, in many situations you can "de-optimize" the rendering to draw and redraw more, as long as it uses less memory bandwidth, and end up with a higher FPS in your game.

mips_r4300i
6 replies
1d2h

That's mostly correct. It is as you say, except that shading and texturing come for free. You may be thinking of Playstation where you do indeed get decreased fillrate when texturing is on.

Now, if you enable 2-cycle mode, the pipeline will recycle the pixel value back into the pipeline for a second stage, which is used for 2 texture lookups per pixel and some other blending options. Otherwise, the RDP is always outputting 1 pixel per clock at 62.5 MHz. (Though it will be frequently interrupted because of RAM contention.) There are faster drawing modes, but they are for drawing rectangles, not triangles. It's been a long time since I've done benchmarks on the pipeline though.

You're exactly right that the UMA plus high latency murders it. It really does. Enable the zbuffer? Now the poor RDP is thrashing read-modify-writes and you only get 8-pixel chunks at a time. Span caching is minimal. Simply using zbuf will torpedo your effective fill rate by 20 to 40 percent. That's why stuff I wrote for it avoided using the zbuffer whenever possible.

The other bandwidth hog was enabling anti-aliasing. AA processing happened in 2 places: first in the triangle drawing pipeline, for interior polygon edges. Secondly, in the VI when the framebuffer gets displayed, it will apply smoothing to the exterior polygon edges based on coverage information stored in the pixels' extra bits.

On average, you get a roughly 15 to 20 percent fillrate boost by turning both those off. If you run only at low-res, it's a bit less since more of your render time is occupied by triangle setup.

mrguyorama
5 replies
23h14m

I was misremembering; I was thinking of instances involving the zbuffer and significant overdraw, as demonstrated by Kaze https://www.youtube.com/watch?v=GC_jLsxZ7nw

Another example from that video: changing a trig function from a lookup table to an evaluated approximation improved performance because it uses less memory bandwidth.
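
Roughly, the trade-off looks like this (a sketch, not Kaze's actual code; the polynomial here is the classic Bhaskara approximation, his version will differ):

    static float sin_lut[256];     /* table walk = a RAM access on a memory-starved system */

    float sin_table(float x) {     /* x in [0, 2*pi) */
        int i = (int)(x * (256.0f / 6.2831853f)) & 255;
        return sin_lut[i];
    }

    float sin_poly(float x) {      /* x in [0, pi]; a few multiplies, no memory traffic */
        float p = x * (3.14159265f - x);
        return 16.0f * p / (49.3480220f - 4.0f * p);   /* 49.348... = 5*pi^2 */
    }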

Was the zbuffer in main memory? Ooof

What's interesting to me is that even Kaze's optimized stuff is around 8k triangles per frame at 30fps. The "accurate" microcode Nintendo shipped claimed about 100k triangles per second. Was that ever achieved, even in a tech demo?

33985868
2 replies
18h37m

There were many microcode versions and variants released over the years. IIRC one of the official figures was ~180k tri/sec.

I could draw a ~167,600 tri opaque model with all features (shaded, lit by three directional lights plus an ambient one, textured, Z-buffered, anti-aliased, one cycle), plus some large debug overlays (anti-aliased wireframes for text, 3D axes, Blender-style grid, almost fullscreen transparent planes & 32-vert rings) at 2 FPS/~424 ms per frame at 640x476@32bpp, 3 FPS/~331ms at 320x240@32bpp, 3 FPS/~309ms at 320x240@16bpp.

That'd be around 400k to 540k tri/sec. Sounds weird, right? But that's extrapolated straight from the CPU counter on real hardware and eyeballing, so it's hard to argue.
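
Spelled out, the extrapolation is just the triangle count divided by the measured frame times:

    167,600 tris / 0.424 s ≈ 395k tri/sec   (640x476 @ 32bpp)
    167,600 tris / 0.331 s ≈ 506k tri/sec   (320x240 @ 32bpp)
    167,600 tris / 0.309 s ≈ 542k tri/sec   (320x240 @ 16bpp)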

I assume the bottleneck at that point is the RSP processing all the geometry: a lot of the triangles will be backface-culled, and because of the sheer density at such a low resolution, comparatively most of them will be drawn in no time by the RDP. Or, y'know, the bandwidth. Haven't measured, sorry.

Performance depends on many variables, one of which is how the asset converter itself can optimise the draw calls. The one I used, a slight variant of objn64, prefers duplicating vertices just so it can fully load the cache in one DMA command (gSPVertex) while also maximising gSP2Triangle commands IIRC (check the source if curious). But there are no doubt many other ways of efficiently loading and drawing meshes, not to mention all the ways you could batch the scene graph for things more complex than a demo.
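
For readers who haven't seen N64 display lists, the batching pattern objn64 aims for looks roughly like this (libultra GBI macros from the F3DEX family; the array names are made up, and the usable cache size depends on the microcode, as noted below):

    Vtx mesh_vtx[32];                           /* vertex data sitting in RDRAM */

    Gfx mesh_dl[] = {
        gsSPVertex(mesh_vtx, 32, 0),            /* one DMA fills the vertex cache         */
        gsSP2Triangles(0, 1, 2, 0, 2, 1, 3, 0), /* then pack two tris per 64-bit command, */
        gsSP2Triangles(4, 5, 6, 0, 6, 5, 7, 0), /* all referencing cache indices 0..31    */
        gsSPEndDisplayList(),
    };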

Anyways, the particular result above was with the low-precision F3DEX2 microcode (gspF3DLX2_Rej_fifo), which doubles the vertex cache size in DMEM from 32 to 64 entries but removes the clipping code: polygons too close to the camera get trivially rejected. The other side effect with objn64 is that the larger vertex cache massively reduces the memory footprint (far less duplication): it might've shaved like 1 MB off the 4 MB of compiled data.

Compared to the full precision F3DEX2, my comment said: `~1.25x faster. ~1.4x faster when maxing out the vertex cache.`.

All the microcodes I used have a 16 KB FIFO command buffer held in RDRAM (as opposed to the RSP's DMEM for XBUS microcodes). It goes like this if memory serves right:

1. CPU starts RSP graphics task with a given microcode and display list to interpret from RAM

2. RSP DMAs display list from RAM to DMEM and interprets it

3. RSP generates RDP commands into a FIFO in either RDRAM or DMEM

4. When output command buffer is full, it waits for the RDP to be ready and then asks it to execute the command buffer

5. The RDP reads the 64-bit commands via either the RDRAM or the cross-bus, the 128-bit internal bus connecting the two, which avoids RDRAM bus contention.

6. Once the RDP is done, go to step 2/3.

To quote the manual:

The size of the internal buffer used for passing RDP commands is smaller with the XBUS microcode than with the normal FIFO microcode (around 1 Kbyte). As a result, when large OBJECTS (that take time for RDP graphics processing) are continuously rendered, the internal buffer fills up and the RSP halts until the internal buffer becomes free again. This creates a bottleneck and can also slow RSP calculations. Additionally, audio processing by the RSP cannot proceed in parallel with the RDP's graphics processing. Nevertheless, because I/O to RDRAM is smaller than with FIFO (around 1/2), this might be an effective way to counteract CPU/RDP slowdowns caused by competition on the RDRAM bus. So when using the XBUS microcode, please test a variety of combinations.

mips_r4300i
1 replies
13h1m

I'm glad someone found objn64 useful :) looking back it could've been optimized better but it was Good Enough when I wrote it. I think someone added png texture support at some point. I was going to add CI8 conversion, but never got around to it.

On the subject of XBUS vs FIFO, I trialled both in a demo I wrote with a variety of loads. Benchmarking revealed that over 3 minutes the two methods came out within about a second of each other. So in my time messing with them I never found XBUS to help with contention. I'm sure in some specific application it might be a bit better than FIFO. By the way, I used a 64k FIFO size, which is huge. I don't know if that gave me better results.

33985868
0 replies
11h16m

Oh, you're the MarshallH? Thanks so much for everything you've done!

I'm just a nobody who wrote DotN64, and contributed a lil' bit to CEN64, PeterLemon's tests, etc.

For objn64, I don't think PNG was patched in. I only fixed a handful of things like a buffer overflow corrupting output by increasing the tmp_verts line buffer (so you can maximise the scale), making BMP header fields 4 bytes as `long` is platform-defined, bumping limits, etc. Didn't bother submitting patches since I thought nobody used it anymore, but I can still do it if anyone even cares.

Since I didn't have a flashcart to test with for the longest time, I couldn't really profile, but the current microcode setup seems to be more than fine.

Purely out of curiosity, as I now own an SC64, is the 64drive abandonware? I tried reaching out via email a couple of times since my 2018 order (receipt #1532132539), and I still don't know if it's even in the backlog or whether I could update the shipping address. You're also on Discord servers but I didn't want to be pushy.

I don't even mind if it never comes, I'd just like some closure. :p

Thanks again !

01HNNWZ0MV43FF
0 replies
21h20m

So Kaze is hitting 240k tris/second right?

VelesDude
0 replies
13h53m

Recently Sauraen on YouTube demonstrated their performance profiling of their F3DEX3 optimizations. One thing they could finally do was profile the memory latency, and it is BAD! On a frame render time of 50ms, about 30ms of that is the processors just waiting on the RAM. Essentially, at least in Ocarina of Time, the GPU is idle 60% of the time!

The whole video is fascinating, but skip to the 29-minute mark to see the discussion of this part.

https://www.youtube.com/watch?v=SHXf8DoitGc

rightbyte
4 replies
1d5h

Wasn't the thing you put in the slot in front of the cart a RAM expansion?

I think you can play Rogue Squadron with and without if you want to compare.

Or do you mean some lower cache level?

skhr0680
3 replies
1d5h

That pak added 4MB of extra RAM; OoT and Majora's Mask are like night and day thanks to it.

The N64 had mere kilobytes of texture cache. AFAIK the solution was to stream textures, but it took a while for developers to figure that out.

VelesDude
2 replies
14h0m

The N64 had a 4KB texture cache while the PS1 had a 2KB cache. But the N64's mip-mapping requirement meant that it essentially had 2KB plus lower-resolution maps.

The streaming helped it a lot but I think the cost of larger carts was a big drag on what developers could do. It is one thing to stream textures but if you cannot afford the cart size in the first place it becomes merely academic.

skhr0680
0 replies
10h54m

I don’t think storage space was an issue for graphics in particular. Remember the base system had 4MB memory and 320x240 resolution

pezezin
0 replies
5h45m

The real problem was different. The PS1 cache was a real cache, managed transparently by the hardware. Textures could take the full 1 MB of VRAM (minus the framebuffer, of course).

In contrast, the N64 had a 4 kB texture RAM. That's it: all your textures for the current mesh had to fit in just 4 kB. If you wanted bigger textures, you had to come up with all sorts of programming tricks.
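
Back-of-the-envelope, for a sense of scale (ignoring N64-specific TMEM banking details):

    4096 bytes / 2 bytes per 16-bit texel = 2048 texels  ->  e.g. a single 64x32 texture
    4096 bytes / 4 bytes per 32-bit texel = 1024 texels  ->  e.g. a single 32x32 texture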

m45t3r
4 replies
1d5h

you'll see that the design of a 3D-capable console in the 90s was a significant challenge for every company.

While this is true, I still think that the PlayStation had the most interesting and forward-looking design of its generation, especially considering the constraints. The design was significantly cheaper than both the Saturn and the Nintendo 64, it was fully 3D (compared to the Saturn, for example), using CDs as the medium was spot-on, and it also had the MJPEG decoder (which allowed the PlayStation not only to have significantly higher video quality than its rivals, but also allowed video to be used for backgrounds for much better quality graphics; see for example the Resident Evil or Final Fantasy series).

I really wanted to see a design inspired by the first PlayStation with more memory (since the low memory compared to its rivals seemed to be an issue, especially in e.g. 2D fighting games, where the number of animations had to be cut a lot compared to the Saturn) and maybe some more hardware accelerators to help fix some of the issues that plagued the platform.

ehaliewicz2
3 replies
14h51m

It is not really any more 3D than the Saturn, as it still does texture mapping in 2D space, same as the Saturn. Its biggest advantage when it came to 3D graphics, aside from higher performance, was its UV mapping. They both stretch flat 2D textured shapes around to fake 3D.

The N64 is really far beyond the other two in terms of being "fully 3D", with its fully perspective-correct Z-buffering and texture mapping, let alone mipmapping with bilinear blending and subpixel-correct rasterization.
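
A sketch of the difference in C, for a single interpolation span (simplified; real hardware does this per edge and per pixel in fixed point, and the names here are mine):

    typedef struct { float z, u, v; } Vert;

    /* PS1/Saturn style: interpolate u,v linearly in screen space (affine) */
    void affine_uv(const Vert *a, const Vert *b, float t, float *u, float *v) {
        *u = a->u + t * (b->u - a->u);
        *v = a->v + t * (b->v - a->v);
    }

    /* N64 style: interpolate u/z, v/z and 1/z, then divide back per pixel
       (perspective-correct) */
    void persp_uv(const Vert *a, const Vert *b, float t, float *u, float *v) {
        float iz = (1.0f / a->z) + t * ((1.0f / b->z) - (1.0f / a->z));
        float uz = (a->u / a->z) + t * ((b->u / b->z) - (a->u / a->z));
        float vz = (a->v / a->z) + t * ((b->v / b->z) - (a->v / a->z));
        *u = uz / iz;
        *v = vz / iz;
    }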

m45t3r
1 replies
7h3m

But the N64 was a more expensive design, and also came almost 2 years later, and from an architectural standpoint it also had significant issues (e.g. the texture cache size that someone mentioned above).

This is why I said considering the constraints, I find the first PlayStation to be impressive.

ehaliewicz2
0 replies
1h54m

Sure, a year or two back in the 90s was huge. :)

VelesDude
0 replies
14h36m

This is very true. I consider the N64 to be the first to use anything that resembles hardware vaguely similar to what the rest of the industry ended up with.

It is a shame that SGI's management didn't see a future in PC 3D accelerator cards; it led to the formation of 3dfx, and with competitors like that, SGI's value in the market was crushed astoundingly fast. They had the future, but short-term thinking blinded them to the path ahead.

nolok
2 replies
1d7h

Oh, I was not criticizing the article per se (my apologies if it came across as such); I just thought this piece of information was important for understanding why they ended up with such a random mash of chips.

polpo
0 replies
1d4h

Thanks for the link – just opened an issue concerning the font weight and text color on the site.

ndiddy
55 replies
1d4h

This is largely incorrect. The Saturn was entirely a Sega of Japan design. There's an interview (https://mdshock.com/2020/06/16/hideki-sato-discussing-the-se...) with the Saturn hardware designer that gives some perspective into why he chose to make the hardware the way he did. Basically, he knew that 3D was the future from the response the PSX was getting, but besides AM2 (the team at Sega that did 3D arcade games like Virtua Fighter, Daytona USA, etc), all of Sega's internal expertise was on traditional 2D sprite-based games. Because of this, he felt the best compromise was to make a console that excelled at 2D games and was workable at 3D games. I think his biggest mistake was that he underestimated how quickly the industry would switch to mainly focusing on 3D.

The actual result of Sega's infighting was far more stupid IMO. Sega of America wanted a more conservative design than the Saturn using a Motorola 68020 (successor to the 68000 in the Genesis), which would have lower performance, but developers would be more familiar with the hardware. After they lost this fight, they deemed the Saturn impossible to sell in the US due to its high price. SOA then designed the 32X, a $200 add-on to the Genesis that used the same SH2 processors as the Saturn but drew graphics entirely in software and overlaid them on top of the Genesis graphics. The initial plan was that the Saturn would remain exclusively in Japan for 2-3 years while the 32X would sell overseas. Sega of America spent a ton of money trying to build interest for the 32X and focused their internal development exclusively on the 32X. However, both developers and the media were completely uninterested in it compared to the Saturn. After it became evident that the 32X wouldn't hold the market, Sega of America rushed the Saturn to market to draw attention away from the 32X, but had to rely exclusively on Japanese titles (many of which didn't fit the American market) because they'd spent the past year developing 32X titles (the 32X had more cancelled games than released ones). All of this ended up confusing and pissing off developers and consumers.

masklinn
23 replies
1d4h

So that is the background for the 32X. Thanks.

I was on team N and I was always confused by the weird accessories of the Genesis, and the 32X’s timing always was one of the most confusing bits, but I’d never actually looked into it.

MBCook
22 replies
1d

I generally was too. There were some fun games on the 32X, but I bought it at fire sale prices after it failed.

Unfortunately the combination of the 32X mistake plus the rushed Saturn launch just annoyed all partners, retail and development. It’s likely a big reason the Dreamcast did so poorly.

It was a nice system but Sega was already on their back foot, not many people trusted them, and piracy was way too easy. Their partner in MS wasn’t helpful. And then the PS2 was coming…

SpecialistK
14 replies
23h12m

The DC was doing well, but the bank account was already overdrawn.

Piracy wasn't a big factor, since very very few people had broadband and CD-R drives in 2000. The attach rate was reportedly better than average. And the MS thing was just the availability of middleware that a few games used.

MBCook
5 replies
22h14m

As the console continued on, if we assume it lived a full five years or something, piracy would have gotten worse as more and more CD burners became commonplace.

Didn’t they have to pay MS a small license fee for each unit? I’m assuming that was a drag too. Not one that killed it, but just another little kick.

SpecialistK
2 replies
21h56m

The MilCD exploit was already patched in the last hardware revisions (VA2) so I imagine it would be a bit like the Switch (IIRC) where early models are vulnerable to exploits and more desirable on the second-hand market.

I'm not sure what the agreement with Microsoft was, but it was probably on a per-game basis if the developers wanted to use Windows CE.

MBCook
1 replies
20h25m

Oh I didn’t know it was patched in hardware.

I knew it was per game whether the software actually used the Windows CE stuff. I'm really not sure it was ever used much at all. I know the first version of Sega Rally used it, but it performed so poorly that they later put out an updated version that didn't, to fix the issues. And I'm not sure the bad version even came to the States.

SpecialistK
0 replies
19h4m

Technically I misspoke. Some VA2 DCs used the older BIOS with the exploit still there, but most did have it patched.

One Reddit thread I see claims that Microsoft actually paid Sega for the Windows CE logo on the front of the system. But as you said, not many games used it.

486sx33
1 replies
18h26m

You COULD d/l a game on 56k; it worked best if you had a dedicated phone line. I believe it was just under 48hrs for a large game… I had almost every game! I loved DC.

Railroad Tycoon for the win as a WinCE game. I also didn't mind the web browser and some of the online stuff - I only had dial-up on the Dreamcast - broadband adapters were extremely rare! Plus I didn't have broadband anyway.

Crazy taxi was a real favorite as well. I worked all summer in high school and my first purchase was a Dreamcast. I was the first person I knew in town that got the utopia boot disc, and then on from there. Discjuggler burns for self boots, manual patching - unlocked Japanese games. What a fun time!

SpecialistK
0 replies
12h35m

I also love DC - enough to be typing this with an arm with 2 DC tattoos :D

But I've never taken it online. I'll be getting a DreamPi at some stage, but sadly shipping to where I am takes it out of "impulse buy" territory.

ssl-3
4 replies
11h40m

Perhaps things were different in my neck of the woods, but my recollection is different: By the turn of the century, CD recorders were pretty common amongst even some of the non-geek computer users I knew. Most of the extant drives were not very good, but by then there were also examples of absolutely stellar drives (some of which are still considered good for burning a proper CD-R, even though they're a ~quarter-century old, like the Plextor PR-820).

I don't recall any particular difficulty in downloading games over 56k dialup around Y2K with a modem, though it was certainly a good bit faster with ISDN (and a couple of years later, 2Mbps-ish DOCSIS).

It was not fast with dialup -- it took a day or so, or longer if you slowed it down to avoid what we now call buffer bloat -- but some of us had time, and a ye olde Linux box that was going to be dialed in ~24/7 anyway.

But usually, the route to piracy that I recall being most-utilized back then involved renting a game from the video store for a few dollars and making a copy (or a few, for friends).

SpecialistK
3 replies
10h59m

I can only speak from my own experience, but I was on dialup until 2002 and had 1 friend with both broadband and a CD-R drive around the DC's lifespan. We burned many DivX files to Video CDs, but no games (his dad brought back a chipped PS1 and lots of burned games from Kosovo though)

Perhaps the availability of burning games wasn't the right lens for me to look at it, but if the attach rate of 8 games per system is true (as I linked to in another reply) then that's similar to the NES and SNES and not at all a factor in the system's demise.

Because the DC's games were on Yamaha 1GB CDs ("GD-ROM") I doubt popping the disc in a PC and running Nero would have been possible. Most dumps were probably made using chipped consoles and a coder's cable or BBA.

ssl-3
2 replies
10h14m

And I can only speak for myself, too, but: By Y2K, I had dual-B channel [128kbps] ISDN dialup with a local ISP. (As housing situations changed, it ebbed and flowed a bit after that for me at home -- at one point after ISDN, I was doing MLPPP with three 33.6k modems and hating every minute of it. But by 2001, I had unfettered access to 860/160kbps ADSL at a music studio I helped build that I could drive to in about 20 minutes, any time of day or night -- by then, "broadband" was common in my old turf. And by mid-2002, I had DOCSIS at home. Changes were happening fast back then, for me.)

The Dreamcast did use GD-ROM, which did hold up to around 1GB, and the usual way to rip those was to connect the [rooted/hacked/jailbroke/whatever-the-term-was] DC to the PC, so the Dreamcast could do all of the reading and transfer that data to the PC for processing and burning.

But even though GD-ROM could hold 1GB, a lot of DC games were (often quite a lot) less than ~650MB or ~700MB of actual-data, so they often fit well onto mostly-kinda-standard CD-R blanks of the day without loss of game data.

There were also release groups -- because of course there were -- that specialized partly in shrinking a larger (>650 or >700MB) DC title down to a size that would fit onto a CD-R, or sometimes onto two of them. (That involved Deep Magic, for the time, and implicitly involved a download stage.)

And my friends and family? I had one sibling who danced around with computers, and she had a (flaky AF, but existing) CD burner a year or two before I did, and most of my computer-geek friends were burning discs before Y2K as well -- even if it meant using a communal CD burner at a mutual friend's house.

But again, that's just my story -- which I've simplified to remove some stages from.

I don't doubt your own story even a little bit. Gaming for high-speed internet access, even in properly-large cities, was kind of the wild west around that time, and I agree that CD burning was unusual around that time.

I was a bit early for getting things done fast in my own area of small-town Ohio, and I think I was generally good at exploiting available options.

(Not all ideas were good ideas: I once helped negotiate and implement a deal to run some Ethernet cable overhead across a parking lot to a small ISP next door, so we could have access to a T1 at the shop I was working at, for $40/month. The money was right according to all parties, and we didn't abuse it too bad or too often...until it all got blown up by lightning, with huge losses on both ends of that connection.

The ISP never fully recovered and died absolutely a month or two later.

Which, incidentally, is how I learned to never, ever run Cat5 between two buildings. Woops.)

SpecialistK
1 replies
9h15m

My own story is complicated by the fact that my family moved trans-Atlantic at the time. In fact, I got a DC in the UK after the PS2 because we moved back and I got a system and games for cheap from a newspaper classified ad, around late 2002.

I suspect my parents didn't feel the need to upgrade from dial-up to DSL or DOCSIS while we were planning a 5000 mile move. We went from 1p-a-minute dial up ("put a timer on the stove, you have 15 minutes!") in 1998 to unlimited (but still with one phone line; drag the cord from the PC to the kitchen and hope Grandma doesn't try to call) a year-ish after, and then got 1.2Mbps DSL once we were in Canada in mid 2002.

My friend down the street upgraded and got a new PC around 1999 or 2000. I don't remember the specs, but it was much nicer than my dad's 300MHz K6-2 with no USB ports. For a while we even needed to use CursorKeys because the V90 modem used the same COM port as the mouse! So I used their PC for Napster, then Kazaa and WinMX, for 128kbps MP3s and "which pixel is Neo?" 700MB copies of The Matrix (sidenote: I warned him against searching for xXx featuring Vin Diesel, but he had to learn the hard way...)

We were aware of "chipped" consoles, and my family even asked the local indie game store if they could do it ("That'z Entertainment" in Lakeside, if any locals are reading this) but my only experiences with game piracy at that point was my dad's friend who had a modded PSX ("holy crap it says SCEA instead of SCEE!") and then the same friend as the paragraph above getting a PSX that his Army dad brought back from Eastern Europe along with Ice Age on DVD.

DC piracy was 100% a thing in its lifetime, but I still don't think it was widespread enough to have harmed the console's chances. The company was just out of cash.

There were also release groups -- because of course there were -- that specialized partly in shrinking a larger (>650 or >700MB) DC title down to a size that would fit onto a CD-R, or sometimes onto two of them. (That involved Deep Magic, for the time, and implicitly involved a download stage.)

Even years later I was annoyed that the CDI version of Skies of Arcadia (Echelon, maybe?) didn't support the VGA box.

ssl-3
0 replies
7h53m

You've travelled a lot more than I have. I've mostly stuck around Ohio.

I suspect that we're about the same age.

Man, com ports and IRQs: I had 14 functional RS-232 ports on one machine, once, between the BocaBoard (with 10P10C jacks), the STB serial card that 3dfx promised to never erase documentation for (it is erased -- *thanks, nVidia*), and the couple of serial ports that were built into a motherboard or multi-IO card at that time.

Once configured to be actually-working, which I did mostly as a dare to myself, I could find no combination of connected serialized stuff that would upset it.

And that was fun having a ridiculous amount of serial ports: I had dumb terminals scattered around, and I had machines connected with SLIP and PLIP. (Ethernet seemed expensive, and "spare laptops" or "closet laptops" were not at all a thing yet.)

Anyhow, piracy: My friends and I were mostly into PSX back then. I may or may not have installed a dozen mod chips for my friends so I could get a discount on my own mod chip. (I was not trying to make money.)

Aaand there may have been quite a lot of piracy. I remember coming home and checking the mailbox to find a copy of Gran Turismo 2, with a hand-written note from a friend who I didn't expect to be able to succeed in duping a disc with Nero, but he'd done it, and hand-delivered it, and it worked. I subsequently played the fuck out of that game.

But you've clearly got a different perspective. And that's interesting to me.

Even years later I was annoyed that the CDI version of Skies of Arcadia (Echelon, maybe?) didn't support the VGA box.

I've never actually-played a CDI system (I do recall poking at them in retail displays), and I don't know what "the VGA box" is in this context. Can you elaborate?

giantrobot
1 replies
18h55m

Piracy wasn't a big factor, since very very few people had broadband and CD-R drives in 2000.

Piracy of CD based games in the late 90s most assuredly did not require broadband or personal ownership of a CD-R drive. A lot of piracy happened via SneakerNet. As long as you knew someone with games (legit or pirated copies) and a CD-R drive you could get your own copies for the cost of a CD-R disc. Every college had at least one person with a CD binder full of CD-Rs of pirated PC and console games. I suspect a majority of high schools at the time did as well. Dorm ResNets were also loaded down with FTPs and SMB shares filled with games, porn, and warez.

SpecialistK
0 replies
18h38m

Maybe so. I had an Action Replay disc from the time that presumably used MilCD. But according to this link (sadly Fandom), the attach rate was good: https://vgsales.fandom.com/wiki/Software_tie_ratio#Consoles

So SneakerNet and dorm rooms may have been a factor, but not enough of one to kill the system.

yellowapple
0 replies
13h5m

Piracy wasn't a big factor, since very very few people had broadband and CD-R drives in 2000.

In those days the piracy threat was less "players with broadband and CD-R drives are downloading games for free" and more "flea marketeers with CD-R burners are copying games and selling them to players for a fraction of MSRP".

masklinn
3 replies
21h43m

the rushed Saturn launch just annoyed all partners, retail and development.

Oh yes, that I definitely knew about; the rushed Saturn launch out of nowhere, as well as its early retirement to make room for the Dreamcast, soured a lot of people.

A shame too; the Dreamcast deserved so much better. It was a great system, and pretty prescient too, at least on the GPU side: Sega America wanted to go with 3dfx, Sega Japan ultimately went with PowerVR; 3dfx turned out to be a dead end, while PowerVR endured until fairly recently (if mostly in the mobile / embedded / power-efficient space).

to11mtm
1 replies
20h32m

I think PowerVR was a good choice even at the time.

In my teens, my older brother worked at a computer shop and was a hardware geek back then, and I ended up with a Matrox M3D; frankly, compared to something like a Rush or Riva 128, the M3D was pretty good outside of some blending moire. That I'm talking about visuals vs perf says something... and the Dreamcast got the version AFTER that...

The biggest thing was their Tile based deferred rendering, which made it easy to create an efficient GPU with a relatively low transistor count.

Also, 3dfx's pattern of 'chip per function', aside from its future scaling issues, would have meant a higher BOM.

-----

All of that said, ever wonder what the video game scene, or NVidia, would be like today if the latter hadn't derped out on their shot at the Dreamcast in an SEC filing, which got them relegated to the video chip for the Pico?

VelesDude
0 replies
20h6m

Credit to the PVR in the Dreamcast (and the entire design of the DC), it was a very efficient processor considering the pricing limitations of the unit. I do love that the Saturn absolutely sucked at transparency effects and the PVR was the complete opposite. Just throw the geometry at it in any order (per tile) and it would sort it out and blend it with no problem.

It was efficient but the performance was definitely the lowest of all consoles that generation.

SpecialistK
0 replies
18h35m

as well as its early retirement to make room for the Dreamcast, soured a lot of people.

That was really boneheaded on Sega's part. They introduced the DC in Japan first, killing the Saturn which was quite successful there.

Had they waited, or at least launched in NA/EU in 1998, they could have kept money coming in from Saturn in JP while getting back into the market elsewhere.

to11mtm
2 replies
20h18m

32x murdered Sega's goodwill. Then the cost of the Saturn led to the legendary "$299" Sony E3 conference... Then Bernie Stolar and his hate for JRPGs...

Their partner in MS wasn’t helpful.

The MS thing was actually an important PR olive branch after the Saturn.

Saturn had a somewhat deserved bad rep for API/doc issues, and doing 3D was extra painful between the VDP behavior and quads...

Microsoft was touting DirectX at the time. Even with its warts, it was accepted by devs as 'better than what we used to deal with!'.

All of it was an attempt to signal to game developers; 'Look we know porting stuff to Saturn sucked, our API is better, but if you're worried, you are doing more PC ports now anyway, so this is a path'.

If anything, I'd say the biggest tactical 'mistake' there was that in providing that special Windows CE, Microsoft probably got a LOT of feedback and understanding of what console devs want in the process, which probably shaped future DirectX APIs as well as the original XBox.

PS2 was coming

If the PS1 "$299" conference was the sucker punch, PS2's "Oh it's also a DVD Player" was the coupe-de-grace. I knew a LOT of folks that 'waited' but DVD was the winning factor.

philistine
0 replies
19h16m

Coup-de-grâce. No e at the end of coup.

hnlmorg
0 replies
19h45m

Some consider the original Xbox a sequel to the Dreamcast because it reused some of the principles and had some of the same people working on it. Heck, even the original chunky Xbox controller looks more like the Dreamcast and the lesser-known Saturn 3D controllers than it does modern Xbox controllers.

phire
18 replies
1d3h

Now that's a good interview.

> I think his biggest mistake was that he underestimated how quickly the industry would switch to mainly focusing on 3D.

I think his mistake was a bit more subtle than that. Because he didn't have any experience in 3D or anyone to ask for help, he didn't know which features were important and which features could be ignored. And he ended up missing roughly two important features that would have brought the Saturn up to the standard of "workable 3D".

The quads weren't even that big of a problem. Even if the industry did standardise on triangles for 3D hardware, a lot of artist pipelines stick with quads as much as possible.

The first missing feature is texture mapping. Basically the ability to pass in uv coordinates for each vertex (or even just a single uv offset and some slopes for the whole quad). The lack of texture mapping made it very hard to export or convert 3D models from other consoles. Instead, artists had to create new models where each quad always maps to an 8x8 texel quad of pixels.

The second missing feature is alpha blending, or semitransparent quads. The Saturn did support half-transparency, but it only worked for non-distorted sprites, and you really want more options than just 50% or 100%.

With those two features, I think the Saturn would have been a workable 3D console. Still not as good as the playstation, but probably good enough for Sega to stand its ground until the Dreamcast launched.

mmaniac
5 replies
1d2h

The quads weren't even that big of a problem. Even if the industry did standardise on triangles for 3D hardware, a lot of artist pipelines stick with quads as much as possible.

The first missing feature is texture mapping. Basically the ability to pass in uv coordinates for each vertex.

These two situations are fundamentally related. Most 3D rasterizers including the Playstation use inverse texture mapping, iterating over framebuffer pixels to find texels to sample. The Saturn uses forward texture mapping, iterating over the texels and drawing them to their corresponding framebuffer pixels.

The choice to use forward mapping has some key design consequences. It isn't practical to implement UV mapping using forward mapping, and quads also become a more natural primitive to use.
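
A toy version of the two loops, for a single 8x8 sprite stretched to 16x16 (the names and the trivial scaling transform are mine, just to show the shape of each approach, not how either chip actually iterates):

    #define TW 8
    #define TH 8
    #define SW 16
    #define SH 16

    unsigned char tex[TH][TW];   /* source texels       */
    unsigned char fb[SH][SW];    /* destination pixels  */

    /* Inverse mapping (PS1 and nearly everything since): walk destination
       pixels and ask which texel covers each one; every pixel is written once. */
    void inverse_map(void) {
        for (int y = 0; y < SH; y++)
            for (int x = 0; x < SW; x++)
                fb[y][x] = tex[y * TH / SH][x * TW / SW];
    }

    /* Forward mapping (Saturn VDP1): walk source texels and compute where
       each one lands. Under distortion a destination pixel can be hit twice
       or skipped, which is why blending a half-transparent pixel twice
       corrupts it. */
    void forward_map(void) {
        for (int v = 0; v < TH; v++)
            for (int u = 0; u < TW; u++)
                fb[v * SH / TH][u * SW / TW] = tex[v][u];
    }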

The second missing feature is alpha blending, or semitransparent quads. The Saturn did support half-transparency, but it only worked for non-distorted sprites, and you really want more options than just 50% or 100%.

I don't consider this to be a significant problem. The checkerboard mesh feature provides a workable pseudo-half-transparent effect, especially with the blurry analog video signals used at the time.

Side note but forward mapping is also the reason why half-transparency does not work for distorted sprites. Forward mapping means that the same pixel may be overdrawn. If a half-transparent pixel is blended twice, the result is a corrupt pixel.

VDP2 can also provide half-transparent effects in various circumstances - this video provides a comprehensive look at various methods which were used to deliver this effect.

https://www.youtube.com/watch?v=f_OchOV_WDg

phire
2 replies
13h11m

Both of these issues can be fixed without switching away from forwards texture mapping.

We have at least one example of this being implemented in real hardware at around the same time: Nvidia's NV1 (their first attempt at a GPU) was a quad-based GPU that used forwards texture mapping. And it had both UV mapping and translucent polygons. The Sega Saturn titles that were ported to the NV1 look absolutely great. And the Sega Model 3 also implemented both UV mapping and transparency on quads (I'm just not 100% sure it was using forwards texture mapping).

I'm not suggesting the Saturn should have implemented the full perspective-correct quadratic texture mapping that the NV1 had. That would be overkill graphically (especially since the PS1 didn't have perspective-correct texture mapping either), and probably would have taken up too many transistors.

But even a simple scheme that ignored perspective correctness, where the artist provided UV coords for three of the vertices (and the fourth was implicitly derived by assuming the quad and its UVs were flat), would have provided most of the texture mapping capabilities that artists needed. And when using malformed quads to emulate triangles, the 3 UV coords would match.

And most of the hardware is already there. VDP1 already has hardware to calculate the slopes of all four quad edges from vertex coords and an edge walker to walk through all pixels in screen space. It also has similar hardware for calculating the texture space height/width deltas and walking through texture space. It's just that texture space is assumed to always have slopes of zero.

So those are the only changes we make. VDP1 now calculates proper texture space slopes for each quad, and the texture coord walker is updated to take non-zero slopes. I'm hopeful such an improvement could have been made without massively increasing the transistor count of VDP1, and suddenly the Saturn's texture mapping capabilities are roughly equal to the PS1's.

> I don't consider this to be a significant problem. The checkerboard mesh feature provides a workable pseudo-half-transparent effect

I think you are underestimating just how useful proper transparency is for early 3D hardware.

(by proper transparency, I mean supporting the add/subtract blend modes)

And I haven't put it on my list of "important missing features" simply because games sometimes want to render transparent objects. As you said, the mesh (checkerboard) mode actually works for a lot of those cases. And if you are doing particles, then you can often get away with using non-distorted quads, which does work with the half-transparent mode. (Though, from an artistic perspective, particles would really look a lot better if they could instead use add/subtract modes.)

No. Proper transparency is on my list because it's an essential building block for emulating advanced features that aren't natively supported on the hardware.

One great example is the environment mapping effect that is found in a large number of PS1 racing games. Those shiny reflections on your car that sell the effect of your car being made out of metal and glass. The PS1 doesn't have native support for environment mapping or any kind of multitexturing, so how do these games do it?

Well, they actually render your car twice. Once with the texture representing the body paint color, and then a second pass with a texture representing the current environment UV mapped over your car. The second pass is rendered with a transparency mode that neatly adds the reflections over the body paint. The player doesn't even notice that transparency was used.
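In blend-equation terms, that second pass is nothing fancier than this (a sketch with made-up colour values):

    def add_blend(dst, src):
        # Additive blend of the environment texel over the already-drawn body
        # paint, clamped per channel.
        return tuple(min(d + s, 255) for d, s in zip(dst, src))

    body_paint = (40, 0, 0)       # dark red pixel already in the framebuffer
    reflection = (90, 90, 110)    # texel sampled from the environment texture
    final = add_blend(body_paint, reflection)  # (130, 90, 110): a shiny highlight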

Environment mapping is perhaps the most visually impressive of these advanced rendering tricks using translucency, but you can find dozens of variations spread throughout the entire PS1 and N64 libraries.

Probably the most pervasive of these tricks are decals. Decals are great for providing visual variation to large parts of level geometry. Rather than your walls being a single repeating texture, the artist can just slap random decals every so often. A light switch here, a patch of torn wallpaper there, blood spatters here and graffiti over there. It's the kind of thing the player doesn't really notice except when they are missing.

And decals are implemented with transparency. Just draw the surface with its normal repeating texture and then use transparency to blend the decals over top in a second pass. Sometimes they Add, sometimes they Subtract. Sometimes they just want to skip transparent texels. And because the Saturn doesn't support transparency on 3D quads, it can't do proper decals. So artists need to manually model any decal-like elements into the level geometry itself, which wastes artist time, wastes texture memory and wastes polygons.

For some reason, VDP1 doesn't even support transparent texels on 3D quads; the manual says they only work for non-distorted quads.

> Side note but forward mapping is also the reason why half-transparency does not work for distorted sprites. Forward mapping means that the same pixel may be overdrawn. If a half-transparent pixel is blended twice, the result is a corrupt pixel.

No... This isn't a limitation of forwards texture mapping. Just a limitation of VDP1's underwhelming implementation of forwards texture mapping.

I don't think it would be that hard to adjust the edge walking algorithm to deal with "holes" in a way that avoids double writes.

My gut says this is basically the same problem that triangle-based GPUs run into when two triangles share an edge. The naive approach results in similar issues with both holes and double-written pixels (which breaks transparency). So triangle rasterizers commonly use the top-left rule to ensure every pixel is written exactly once by one of the two triangles based on which side of the edge the triangle is on.
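For what it's worth, here's a minimal sketch of that fill rule (assuming integer screen coordinates, y growing downwards and clockwise winding; real rasterizers differ in the details):

    def edge(ax, ay, bx, by, px, py):
        # > 0 when (px, py) is on the interior side of edge a->b for a clockwise
        # triangle in y-down screen coordinates.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def is_top_left(ax, ay, bx, by):
        dx, dy = bx - ax, by - ay
        return (dy == 0 and dx > 0) or dy < 0   # a top edge or a left edge

    def covers(tri, px, py):
        # A pixel centre lying exactly on an edge shared by two triangles is
        # claimed by exactly one of them: the one for which that edge is a top
        # or left edge. No holes, no double-written (double-blended) pixels.
        edges = ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0]))
        for (ax, ay), (bx, by) in edges:
            w = edge(ax, ay, bx, by, px, py)
            if w < 0 or (w == 0 and not is_top_left(ax, ay, bx, by)):
                return False
        return True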

----------

Though that fix only allows 3D quads to correctly use the half-transparency mode.

I'm really confused as to why VDP1 has such limited blending capabilities. The SNES had proper configurable blending in the 90s, and VDP2 also has proper blending between its layers.

And it's not like proper blending is only useful for 3D games. Proper blending is very useful for 2D games too, and certain effects in cross-platform 2D games can end up looking washed out on the Saturn version because they couldn't use VDP2 for that effect and were limited to VDP1's 50% transparency instead of add/subtract.

VDP1's 50% transparency already pays the cost of doing a full read-modify-write on the framebuffer, so why not implement the add/subtract modes? A single adder wouldn't be that many transistors. That would be enough to more or less match the PlayStation, which only had four blending modes (50/50, add, subtract, and 100% of background + 25% of foreground).
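For reference, those four equations per colour channel look something like this (a sketch; b is the framebuffer value, f the incoming value, using 8-bit channels for readability even though the real hardware works on 5-bit channels):

    def ps1_blend(mode, b, f):
        # The four semi-transparency equations, per colour channel. b is the
        # framebuffer (background) value, f the incoming (foreground) value.
        if mode == 0:
            out = b // 2 + f // 2   # 50% background + 50% foreground
        elif mode == 1:
            out = b + f             # additive
        elif mode == 2:
            out = b - f             # subtractive
        else:
            out = b + f // 4        # 100% background + 25% foreground
        return max(0, min(255, out))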

Though, you can go a lot further than just equality with the PS1. The N64 had a fully configurable blender that could take alpha values from either vertex colors or texture colors (or a global alpha value).

> VDP2 can also provide half-transparent effects in various circumstances - this video provides a comprehensive look at various methods which were used to deliver this effect.

> https://www.youtube.com/watch?v=f_OchOV_WDg

Yeah, that's a great video. And it's the main reason why I started digging into the capabilities of the Saturn.

mmaniac
1 replies
9h15m

Thanks for the thorough response.

> No. Proper transparency is on my list because it's an essential building block for emulating advanced features that aren't natively supported on the hardware. One great example is the environment mapping effect that is found in a large number of PS1 racing games.

That's a good point which I had not considered. However, just adding support for half-transparency wouldn't have been enough to bring the Saturn up to speed in this regard. Half-transparency is also really damn slow on the Saturn. Quoting the VDP1 User's Manual:

> Color calculation of half-transparency is performed on the pixel data of the original graphic and the pixel data read from the write coordinates. Drawing in this case slows down, so use caution—it takes six times longer than when color calculation is not performed.

This compounds with the Saturn VDP1 already having a considerable reduction in fillrate compared to the Playstation GPU. Playstation wasn't just capable of transparency, it was also damn fast at it, so emulating effects with transparency was feasible.

PS2 took this approach to its illogical extreme.

phire
0 replies
8h10m

Yeah. You have a point.

My proposed changes only really bring the Saturn's 3D graphics into roughly the same technical capabilities as the Playstation.

They don't do anything to help with the performance issues or the higher manufacturing costs. Or the string of bad moves from SEGA management that alienated everyone.

But an alternative history with these minor changes might have been enough to stop developers complaining about the Saturn's non-standard 3D graphics and keep a healthy market of cross-platform games. It might have also prevented commenters on the internet from making overly broad claims about the Saturn's lack of 3D capabilities.

---------------

Fixing the performance limitations and BoM cost issues would have required a larger redesign.

I'm thinking something along the lines of eliminating most specialised 2D capabilities (especially all of VDP2). Unify what's left into a single chip and simplify the five blocks of video memory down to a single block on a 32-bit wide bus. This would significantly increase the bandwidth to the framebuffer, compared to VDP1's framebuffer which uses a 16-bit bus, while decreasing part count and overall cost.

The increased performance of this new 3D focused VDP should hopefully be powerful enough to emulate the lost VDP2 background functionality as large 3D quads.

01HNNWZ0MV43FF
1 replies
21h36m

I'm surprised forward texture mapping can work at all. What happens if the quad is too big? Does it have gaps?

Forward texture mapping sounds like something I would have imagined as a novice before I learned how inverse texture mapping worked.

mmaniac
0 replies
20h45m

It'll draw the same texel multiple times if it has to. One of the Saturn's programming manuals describes in limited detail how it will attempt to avoid leaving any gaps in polygons.

> Polygons contain diagonal lines that may result in pixel dropout (aliasing). When this occurs, holes are anti-aliased. For this reason, some pixels may be written twice, and therefore the results of half-transparency processing as well as other color calculations cannot be guaranteed.

https://antime.kapsi.fi/sega/files/ST-013-R3-061694.pdf

to11mtm
4 replies
20h46m

> The quads weren't even that big of a problem. Even if the industry did standardise on triangles for 3D hardware, a lot of artist pipelines stick with quads as much as possible.

AFAIK the problems lie in things like clipping/collision detection. Now that you have four points instead of three, there is no guarantee that the polygon is a flat surface.
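Three points always define a plane; a fourth has to be checked against it. A quick sketch of that test (a hypothetical helper, not from any particular engine):

    def coplanar(p0, p1, p2, p3, eps=1e-6):
        # Scalar triple product: zero when p3 lies in the plane spanned by
        # (p1 - p0) and (p2 - p0). A triangle never has this problem.
        ax, ay, az = (p1[i] - p0[i] for i in range(3))
        bx, by, bz = (p2[i] - p0[i] for i in range(3))
        cx, cy, cz = (p3[i] - p0[i] for i in range(3))
        nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
        return abs(nx * cx + ny * cy + nz * cz) < eps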

cubefox
2 replies
20h12m

That's interesting. I always wondered why "polygons" generally ended up being triangles. I guess this is one of the reasons.

MBCook
1 replies
16h2m

That’s one part, and it’s big. I believe the other part is that triangle strips are more efficient because they can share vertices better than quad strips.
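The saving is easy to put numbers on (a rough sketch):

    def strip_vertices(n_tris):
        # Each triangle in a strip reuses the previous two vertices, so n
        # triangles only need n + 2 vertices.
        return n_tris + 2

    def independent_vertices(n_tris):
        return 3 * n_tris   # no sharing: three vertices per triangle

    # e.g. 100 triangles: 102 vertices as a strip vs 300 submitted independently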

cubefox
0 replies
8h52m

Of course, but that's not generally true. Why use two triangles for e.g. a rectangular floor? The walls and the ceiling are also rectangular after all. Probably mixing and matching different polygons isn't very efficient either.

About collision: I don't know how it is computed, but if it's by ray casting, that is now also used for lighting effects and sometimes reflections. Which would be another reason to stick to triangles.

phire
0 replies
10h44m

That's not a massive problem.

Most games avoid using the same meshes for both rendered graphics and collision detection. Collision detection is usually done against a simplified mesh with far fewer polygons.

Since most games already have two meshes, there isn't a problem making one mesh out of quads and the other out of triangles.

hnlmorg
4 replies
19h56m

Saturn did support alpha blending on quads (quads are ostensibly just sprites). The problem was the blending became uneven due to the distortion. Ie distortion caused the quads to have greater transparency on some parts and lesser transparency on others. This was largely due to developers skewing quads into triangles.

phire
3 replies
13h8m

First, it was limited to a single blend equation of 50% foreground + 50% background.

At minimum, it really needed to support the addition/subtraction blend equations.

Second, because it doesn't work correctly for distorted quads, it really doesn't exist in the context of "Saturn's 3D capabilities".

hnlmorg
2 replies
10h36m

It works "correctly" for distorted quads; it's just that pixels overlap as part of the front-to-back rendering of the distortion process, thus creating combined blend amounts greater than 50% but less than 100%. I.e. literally what I said in my previous comment.

I think we are basically arguing the same thing, but from my point of view I’d say the end result is visually undesirable but technically correct.

phire
1 replies
10h2m

The only "correct" that matters here is if it produces a result that's useful for 3D graphics.

The "some pixels get randomly blended multiple times" behaviour is simply not useful. And if it's not useful, then it might as well not exist. And it certainly can't substitute for the more capable blending behaviour that I'm claiming the Saturn should have.

In another comment [1], I go into more detail about both why visually correct (and capable) translucency modes were very important for early 3D consoles and speculate on the minimal modifications to the hardware design of VDP1 to fix translucency for distorted quads, while working within the framework of the existing design.

[1] https://news.ycombinator.com/item?id=39835875

hnlmorg
0 replies
1h13m

The "some pixels get randomly blended multiple times" behaviour is simply not useful.

It’s not random. Not even remotely.

> And if it's not useful, then it might as well not exist.

It is useful in some scenarios. The problem here is you're describing it as a 3D system, but it really isn't. The whole 3D aspect is more a hack than anything.

VelesDude
1 replies
20h25m

Youtube channel Gamehut once mentioned that you could do 8 levels of transparency on the 3D provided you had no Gouraud shading on the quad. As such it was almost never used and I believe they used it merely to fade in the level.

phire
0 replies
12h32m

That's a VDP2 effect. It can only fade the final framebuffer against other VDP2 layers (and the skybox is implemented with VDP2 layers).

After VDP1 has rendered all quads to the framebuffer (aka VDP2's "sprite data" layer), some pixels in the framebuffer layer can use an alternative color mode. Normal 3D quads (with Gouraud shading) use a 15-bit format with five bits each of red, green and blue representing the final color (and the topmost bit is set to one).

Those background elements instead use a paletted color format. The top bit is zero (so VDP2 can tell which format to use on a per-pixel basis) and then there are 3 bits of priority (unused in this case), 3 bits of "Color calculation ratio" which Gamehut used to implement the fade, and a 9-bit color palette index.

So it's not just that you can't use Gouraud shading, but these fading objects are limited to a global total of 512 unique colors which must be preloaded into VDP2's color RAM (well, I guess you could swap the colors out each frame).
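Put another way, the per-pixel decision looks roughly like this (a sketch based on the description above; the exact bit positions of the priority and ratio fields are from memory, so treat them as an assumption):

    def decode_sprite_pixel(word):
        # A 16-bit value read from VDP1's framebuffer (VDP2's "sprite data" layer).
        if word & 0x8000:
            # RGB mode: 5 bits per channel, used by normal Gouraud-shaded quads.
            r = word & 0x1F
            g = (word >> 5) & 0x1F
            b = (word >> 10) & 0x1F
            return ("rgb", r, g, b)
        # Palette mode: priority + colour calculation ratio + a 9-bit index into
        # VDP2's colour RAM (512 entries shared by every such pixel).
        priority = (word >> 12) & 0x7   # assumed field positions
        ratio = (word >> 9) & 0x7
        index = word & 0x1FF
        return ("palette", priority, ratio, index)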

-----

Importantly, since this trick only works against VDP2 background layers, it can't be used to blend two 3D quads against each other.

hnlmorg
2 replies
20h3m

It also didn’t help that Sony poached a lot of studios with exclusivity deals.

Given that the Saturn was harder to develop for, had a smaller market share, and Sony were paying studios to release on the PlayStation, it's no wonder Sony won the console wars that generation.

VelesDude
1 replies
14h42m

Sony, through their choices, also landed on a bit of luck. They were cheaper than the Saturn, had a machine that was much easier to work with and were charging developers a lot less in royalties. It also helped that they were out before the N64, which was saddled with the cost of cartridges.

hnlmorg
0 replies
10h31m

I get what you're saying, but I wouldn't describe that as luck. Their choices weren't made randomly and are what led to their success.

mouzogu
1 replies
23h39m

> Sega of America rushed the Saturn to market

Interesting. I always thought this was an order from SOJ.

SpecialistK
0 replies
23h11m

It was. Kalinske tried to push back, but for some reason after all of his success at SoA, SoJ kept undercutting him in the mid 90s with the 32X and early Saturn launch.

eek2121
1 replies
17h37m

I think it is funny that both were wrong. 2D games are still very much being developed and released today, and many outsell 3D games, even AAA 3D games.

People beat Sega up back then, and also now, but I do not think their hardware was that bad. The pricing was a huge issue.

If I were them, I would have NOT released the 32X as a standalone, NOT released the Saturn, and instead focused on a fusion of the two into a standalone cartridge-based system. For everything people criticized the Saturn for, the cartridges made it work. The N64 showed off the advantage of using a cartridge over a CD-based system.

If I were going to release a non-portable gaming system today, I'd build one around cartridges that are PCIE 5.0 x4 based and design it so that a) you can quickly shut off the system and replace the game rather than digging through ads/clunky ui/launcher. b) access and latency times are fast c) the hardware is potentially extendable in many different ways.

However, I actually liked the 32X and its games. Maybe that is because I got the 32X on clearance at Walmart for $10 and the games for $1-$2 each, or maybe it was because many of the games were fun and were a genuine upgrade over my 16-bit systems.

I don't think the Saturn was bad, but it was overpriced for the time and definitely rushed.

I DO think this rush to make consoles require an internet connection has left a sizable hole in the market, a hole that Nintendo is only half filling. Offline gaming is a thing, and just because a console CAN be connected to the internet doesn't mean it should be. I've been fantasizing about making one in the future. Something that is kind of like the old Steam Machine, but with the aspects I have mentioned. Maybe one day I will. For me it won't be about success, but rather about building something cool.

Anyways, I'm rambling. Have a great night.

mouzogu
0 replies
11h31m

they should have focused on supporting the genesis and the saturn for as long as possible.

sega was a dysfunctional business reacting to fickle changes in the market (32x was a response to the Atari Jaguar lol).

probably as they didn't have the strong cash reserves of a nintendo, sony or microsoft.

but ultimately i think they were an arcade business and the transition from arcade to home killed them. the same as snk/neo geo.

VelesDude
1 replies
20h31m

While this is entirely "in retrospect this would have been the best plan!" territory, the YouTube channel Video Games Esoterica had an interesting idea for an alternative path Sega could have taken.

Namely, lean in hard on the 32X for about a year or two to try and slow demand for the PS1 with cheaper hardware. Release the Neptune (Genesis with inbuilt 32X). They then take up Panasonic's deal and use the M2 platform to be the Saturn. Release that in 1997 with specs far beyond the PS1/N64.

Neat idea but this is all just fantasy stuff at this point.

SpecialistK
0 replies
18h25m

Thanks for the YT recommendation. Sega alt-history is something I've spent more than a few hours thinking about.

It's hard to really nail down without knowing about chip prices in the day, but my thinking was to wait on the Sega CD until 92 or 93, include a DSP or two for basic 3D operations (like the Sega Virtua Processor in Virtua Racing) and market the CD as allowing for cheaper, bigger games instead of FMV garbage.

Then release the Saturn in 95 as an upgraded version of the same architecture akin to the SMD/Gen being based on the SMS. Add a newer CPU (SH-3 or PowerPC 603), use an upgraded SMD's graphics chip for 2D (like VDP2 on the Saturn), the 68K for audio, and whatever 3D silicon Sega had developed (probably still quads)

If this was financially / technically possible, it would have staved off Sega's fear about the SMD/Gen falling behind the SNES in 92-95, allowed backwards compatibility, and been less jank to develop for.

spxneo
0 replies
21h49m

This is why I love HN. Busting esoteric misconceptions with detailed knowledge of industry and history. It makes sense why developers hated the Saturn and PSX came out on top. Developer experience is king!

luma
0 replies
1d

I bought the Saturn on the US launch day and never clearly understood why, for the first maybe 6 months, there were only a handful of titles available. Interesting back story!

Ghaleon
0 replies
3h0m

The 32X initially sold well, and media coverage was much stronger than the 3DO/Jag/CDi. But SEGA abandoned support for it REALLY quickly. I had a 32X at launch and was excited, but almost all the games sucked. Shadow Squadron is like the only legitimate non-port, original game that's pretty good lol and it came out when the system was about dead.

I think consumers were def skeptical of the 32X despite the marketing, and that was the main thing, in addition to an unimpressive library. Magazine coverage was decent but Gamepro in particular was very negative toward it. Like, Chaotix sucks but the music is amazing, and Gamepro not only trashed the game but gave even the music low marks because "no 32X audio is apparent" lol.

phire
2 replies
1d3h

I've looked into it, and from what I can tell, the "3D was added late to the Saturn design" narrative is flawed.

It's commonly cited that VDP2 was added later to give it 3D support. But VDP2 doesn't do 3D at all; it's responsible for the SNES "mode 7" style background layers. If you remove VDP2 (and ignore the fact that VDP2 is responsible for video scanout) then the resulting console can still do 3D just fine (many 3D games leave VDP2 almost completely unused). 2D games would take a bit of a quality hit as they would have to render the background with hundreds of sprites.

If you instead removed VDP1, then all you have left are VDP2's 2D background layers. You don't have 3D and you can't put any sprites on the screen so it's basically useless at 2D games too.

As far as I can tell, the Saturn was always meant to have both VDP1 and VDP2. They were designed together to work in tandem. And I think the intention (from SEGA JP) was always for the design to be a 2D powerhouse with some limited 3D capabilities, as we saw in the final design.

I'm not saying there weren't arguments between SEGA JP and SEGA US. There seems to be plenty of evidence of that. But I don't think they munged the JP and US designs together at the last moment. And the PSX can't have had any influence on the argument, as the Saturn beat the PSX to market in Japan by 12 days.

tadfisher
1 replies
16h23m

This is typical of Sega arcade hardware of the era (Model 1 and Model 2); these systems have separate "geometry processor" and "rasterizer" boards, with onboard DSPs. If you squint, the Saturn is what someone might come up with as a cost-optimized version of that architecture.

phire
0 replies
11h30m

Yeah. Especially if you rip out VDP2 and some of the more 2D-orientated features of VDP1, it looks very much like an attempt at a cost-optimised Model 2.

And if you read the interview that ndiddy linked above (https://mdshock.com/2020/06/16/hideki-sato-discussing-the-se...), it might be accurate to say the Saturn design is what you get when an engineer with no background in 3D graphics (and unable to steal sufficient 3D graphics experience from the Sega Arcade division) attempts to create a cost-optimised Model 2.

I suspect the first pass was just a cost-optimised textured quad renderer, then Sato went back to his 2D graphics experience and extended it into a powerful 2D design.

karmakaze
1 replies
1d3h

Is this the PSX[0] you're referring to? I had no idea this existed, or what impact it had on gaming consoles.

Edit (answered): "Why is PlayStation called PSX? Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX)."

[0] https://en.wikipedia.org/wiki/PSX_(digital_video_recorder)

Luc
0 replies
1d3h

PSX was used to refer to the PlayStation before it was released, and it stuck.

dymax78
1 replies
1d4h

> For mass gamers of that era, where the big thing was "arcade in your living room" it's a disapointement....

One exception to this is the shmup genre. The Saturn was inundated with Japanese Shmups and many are perfect (or near perfect) arcade ports.

wk_end
0 replies
1d2h

2D fighters, as well. The port of SFA3 on Saturn trounces the PS1 release, for example.

gxqoz
0 replies
20h55m

The latest episode of the excellent video game history podcast They Create Worlds (https://www.theycreateworlds.com/listen) does a good job debunking some of these myths.

cubefox
30 replies
1d7h

This might be the most complex hardware architecture of a home console ever.

pjmlp
24 replies
1d6h

Dreamcast and PS3 are also quite close in complexity.

masklinn
6 replies
1d4h

The PS3 was incredible, not so much in what it could do as in what IBM managed to make Sony pay for.

kernal
5 replies
22h51m

IBM also made Microsoft pay for the Xenon in the Xbox 360.

masklinn
3 replies
21h53m

Xenon is a pretty standard SMP design, having 3 cores is basically the height of its oddity. It was a fine early-multicore CPU for consoles.

The Cell was very much not that; the SPEs were unnecessarily difficult to use for a console and ultimately a dead end, though they made for great supercomputing elements before GPGPU really took off (some people built supercomputers out of PS3 clusters, as that was literally the cheapest way to get Cells).

The Cell supposedly cost 400 million to develop, a cost largely borne by Sony. MS got IBM to retune the PPE for Xenon, but they'd likely have made do with another IBM core as their base had that not existed.

kernal
2 replies
21h3m

This claim that the SPEs were difficult to use and maximize may have been true in the early years, but all of the major engines were optimized to quickly abstract them. Naughty Dog, a key contributor to the PS3 graphics libraries had optimized them so much that porting PS3 games to the PS4 was "hell" in their words.

> "I wish we had a button that was like 'Turn On PS4 Mode', but no," Druckmann said. "We expected it to be hell, and it was hell. Just getting an image onscreen, even an inferior one with the shadows broken, lighting broken and with it crashing every 30 seconds...that took a long time. These engineers are some of the best in the industry, and they optimized the game so much for the PS3's SPUs specifically. It was optimized on a binary level, but after shifting those things over, you have to go back to the high level, make sure the systems are intact, and optimize it again. I can't describe how difficult a task that is. And once it's running well, you're running the [versions] side by side to make sure you didn't screw something up in the process, like physics being slightly off, which throws the game off, or lighting being shifted and all of a sudden it's a drastically different look. That's not improved any more; that's different. We want to stay faithful while being better."
pjmlp
0 replies
10h9m

Remember that Naughty Dog was acquired by Sony, and thus had access to internal info. Until Sony released Phyre Engine, and GT demoed what was possible with the PS3, most studios were kind of lost targeting it.

masklinn
0 replies
17m

> This claim that the SPEs were difficult to use and maximize may have been true in the early years

The 7th gen appropriately "lasted" 7 years; "the early years" were a ton of it.

> Naughty Dog, a key contributor to the PS3 graphics libraries

ND is a first-party studio; that means access to information and efforts to push the platform that most developers can't justify, especially as third parties would usually need cross-platform support: the PS3 was no PS2.

> so much that porting PS3 games to the PS4 was "hell"

Which proves the point: the PS3's Cell was an expensive, unnecessarily complicated, and divergent dead end.

breadmaster
0 replies
22h10m

And they had Nintendo paying 'em in that generation (and the one before) too.

brezelgoring
3 replies
23h28m

TIL the PlayStation 3 is used (in a grid/cluster of 16) as a viable alternative to supercomputers by physics researchers; this grid calculates how black holes collapse (!). I assume this is a cost-cutting measure by the researchers, and it makes me wonder how much more expensive a 'supercomputer' is and why, if it can be suitably replaced by the console with no games.

xcv123
1 replies
23h0m

That continued with researchers using cheap gaming GPUs for simulations. Today a single Nvidia 4090 GPU exceeds the performance of that PS3 cluster.

pezezin
0 replies
5h9m

"Exceeds" is an understatement. The Cell CPU had a maximum performance of 200 GFLOPS of FP32. A Nvidia 4090 has 73 TFLOPS of FP32, equivalent to 360 PS3.

01HNNWZ0MV43FF
0 replies
21h8m

I think they may have done that because Sony used the PS3 as a loss leader?

tapoxi
0 replies
1d3h

"The Race for a New Game Machine" is an interesting book on the development of Cell, if the author comes off a little annoying at times. It was a neat idea to offload certain operations to the SPEs (glorified vector processors) but they all had their own RAM and communicated via a bus, so you really needed to optimize for it.

msk-lywenn
9 replies
1d6h

I think you mean PS2 and PS3. Dreamcast was rather easy. One might even say, a dream to program for.

SunlitCat
5 replies
1d5h

I wonder how many games utilized Windows CE (which was a thing for it) on the Dreamcast.

Grazester
4 replies
1d5h

Not a whole lot. If you wanted to extract the full power of the system then you needed the native SDK. Sega Rally 2 was ported using CE and it has frame rate issues it really shouldn't have, and this is attributed to CE.

ac2u
3 replies
1d3h

I guess they had a porting job to do since the arcade hardware Sega Rally ran on wasn't Naomi (which was basically a souped-up Dreamcast), and they figured if they used Windows CE then they could use DirectX and get a PC port out of the same effort.

But yeah, for a flagship title they should have gone for a port with the official SDK.

Grazester
1 replies
21h57m

Naomi games ported over to the Dreamcast just fine. It was essentially a Dreamcast with more memory if I'm not mistaken. Sega Rally 2 used the Model 2 board.

tuna74
0 replies
20h15m

Sega Rally 1 used Model 2, Sega Rally 2 used Model 3.

mairusu
0 replies
21h32m

The goal was to prove to third-party developers that it was trivial to port an existing Windows game to Dreamcast. It was sort of a tech demo for the industry.

The end result though was… demonstrating the Windows CE overhead.

masklinn
2 replies
1d6h

PS2 was the tail end of the bespoke hardware era, and Sony gave their chips very marketing-friendly names, but I don't remember the hardware being especially strange. The SDK being ass would be a different issue.

msk-lywenn
0 replies
1d5h

Well the two vector coprocessors (vu0, vu1) each talking to a different processor (cpu or "gpu") was quite weird, imho.

mairusu
0 replies
21h34m

The PS2 was infamously hard to develop for. Most of the bottleneck came from the vector units. Most middleware eventually made working around the strange hardware much easier.

When asked if they were wary about developing on PS2, due to its reputation for being tough to code for, Sega devs famously laughed and replied that they had already mastered the Saturn - how much harder could it get?

p_l
0 replies
1d6h

Not having developed for them, the PS2 seemed the hardest of the later generations, with the PS3 having issues but at least not being designed with hand-written assembly in mind.

masklinn
4 replies
1d6h

The Jaguar was pretty out there.

The thing had two custom RISC chips which could be used as CPU (one with additional GPU capabilities and the other with DSP) plus a 68000 devs were not supposed to use.

MBCook
3 replies
1d4h

Plus the custom chips were buggy!

JohnBooty
2 replies
1d3h

I always wonder what the Jaguar would have been like if not crippled by buggy hardware. Probably still a failure, but I bet the games would have run better.

You know what's also funny? I've never heard anybody complaining about the Saturn being buggy.

It's legendarily hard to code for, but seems like it was at least pretty solid.

Even the Genesis was sort of "buggy." I think there was one particular design choice that crippled digital sound playback. Also the shadow/highlight functionality is kind of weird, not sure if "buggy" is the right word, but weird.

alexisread
1 replies
1d1h

The best thing Atari could have done was release it at launch with the CD; that would have made it much cheaper for devs to launch a game - ROM order pricing would kill many devs. Bundling an SDK would have helped massively as well.

There are lots of quirks with the hardware which point to Atari interfering with the development - the 68K was never supposed to be there (and an 020 would have uncrippled the bus by allowing it to run at full speed, see the arcade board), not using the dual RAM buses, having an object processor (Flare majored on DSP and Blitter, the object processor looks like a 5200/7800/Amiga/Panther throwback mandated by Atari) necessitated having a 2-chip solution where 1-chip would have been faster to develop and better.

Having said that, the Jag VR looked amazing for the time.

JohnBooty
0 replies
3h58m

You seem really knowledgeable. Any insight into why Atari made those fateful decisions?

tosh
20 replies
1d8h

The Sega Saturn had quite a few gems (e.g. Panzer Dragoon Saga, Shining Force III, Burning Rangers, Dragon Force I & II, …) that were never ported or re-made afaiu.

edit: oh, and of course Saturn Bomberman

thevagrant
10 replies
1d7h

Panzer Dragoon made it to 1st gen Xbox iirc.

The Saturn and, following on, the Dreamcast were quite good and deserved more success.

dagw
5 replies
1d7h

Sega created a Panzer Dragoon game for the Xbox (great game), but the original Panzer Dragoon games never got ported as far as I know

Grazester
2 replies
1d5h

When you unlock Pandora's box in Orta you can play the original game...the PC port version that is

hnlmorg
1 replies
1d4h

That’s not PD Saga though

Saga was a totally different game to the 2 rail shooters that preceded it

Grazester
0 replies
1d1h

Oh I know it wasn't Saga. I own every Panzer Dragoon game (but not the remake of the original that came out on the Switch and Playstation 4, however).

fredoralive
0 replies
1d7h

The first game had a PC port, which was later included as a bonus in Panzer Dragoon Orta.

bdw5204
0 replies
1d5h

The original Panzer Dragoon games were remade for modern consoles a few years ago. Saga still hasn't been ported though.

tetraca
0 replies
1d5h

Panzer Dragoon (the rail shooter) might have but not Panzer Dragoon Saga (the RPG). That was never re-released and the source code was lost.

nolok
0 replies
1d7h

The Dreamcast was great, but while the Saturn had some great games, the console itself was really not "quite good" besides as a tech curiosity. It suffered greatly from being two consoles smashed into one.

SuperNinKenDo
0 replies
1d7h

Panzer Dragoon Orta did, along with an (I believe emulated) copy of the original Panzer Dragoon embedded inside, but not Saga.

SuperNinKenDo
6 replies
1d7h

I assume the complexity of the platform contributed to games being rarely ported off of it. In fact, the only games I know to exist on it and other platforms are ports to the Saturn, never the other way around, although maybe someone can correct me.

From what I understand, emulating the platform is still tricky to this day, although there have been some significant advances in the last 10 years.

zilti
1 replies
1d4h

Tomb Raider was ported off the Saturn to other platforms

Narishma
0 replies
23h9m

I'm pretty sure it was developed as a multi-platform game from the start. It only released on the Saturn a couple of weeks earlier than the PC and PS1.

fredoralive
0 replies
1d7h

NiGHTS into Dreams was ported to PlayStation 2, and thence onto PC, PS3 and Xbox 360.

Technically Tomb Raider was out on Saturn in Europe a few weeks before PlayStation and PC, but that's really being silly.

Edit: Forgot I'd already mentioned Panzer Dragoon for PC elsewhere, but there was Sonic R for PC as well, as with Panzer Dragoon, later ports of Sonic R are based on the PC version AFAIK.

Tanoc
0 replies
1d6h

There was a port of Castlevania: Symphony Of The Night made just for the Saturn that never left Japan that included new areas, new items, and a Maria mode. So far as I know nobody's been able to merge the PlayStation or PS Classics version with the content unique to the Saturn version because they're too disparate. There's a few SNK games with content unique to the Saturn like that as well, like Ragnagard and World Heroes Perfect that people want ports of. Or at least the unique content merged into re-releases.

JohnBooty
0 replies
1d3h

Yeah. I think the fact that the Saturn rendered quads instead of triangles made ports really challenging as well.

Plus, the Saturn just wasn't that successful. Games like Panzer Dragoon Saga are legendary but only within niche circles.

flykespice
1 replies
1d3h

Don't forget Virtual Hydlide, if you ignore the abysmal framerate.

VelesDude
0 replies
13h47m

ALL PRAISE VIRTUAL HYDLIDE!!!

glimshe
13 replies
1d7h

The Sega Saturn had a pretty complicated hardware architecture. I can understand that scaling out the game "work" into multiple CPUs and dedicated processors makes sense from a cost-benefit perspective, but I'm sure this contributed to the Saturn's relatively poor sales.

Many people said that ultimately it was hard for companies to justify the investment in learning it all to make games that fully utilize the hardware. Somehow this reminds me of Sid Meier's saying that the player must have fun, not the game developer - and in this case, perhaps the hardware designers were having too much fun!

ZaoLahma
11 replies
1d5h

Growing up in the 90s, it was bizarre to witness the downfall of Sega. Here the Mega Drive (Genesis) was almost as successful as the SNES. Everyone either had a Mega Drive or played it regularly with friends. It was a very popular piece of hardware.

Then the generation after everyone had a Playstation, and I knew of only one kid who ended up with the Saturn. It's so strange considering that the Saturn was released several months ahead of the Playstation here.

I don't know if it was due to the Saturn being seen as the inferior option at the time, pricing, availability or some other factor, but the Playstation absolutely killed it. After that Sega was gone.

Solvency
4 replies
1d5h

i'm 38 and grew up with this lineage:

NES, Genesis, PSX/N64, PS2/XBOX, PS3/XBOX360, XboxOne.

I was fanatic about Genesis because of the big title games like Mortal Kombat, Sonic, etc.

I VORACIOUSLY read gaming mags and the hype around the PSX was massive. It utterly dwarfed Sega. And seeing the first image of Cloud staring up at the Shinra tower captivated me. The games they were touting were just incredible looking.

I was just a kid and marketing won me over.

MBCook
1 replies
1d4h

Sega’s colossal screwup of the launch with a high price, pissing off huge retailers, and then months of no games was a total disaster.

giantrobot
0 replies
22h35m

> pissing off huge retailers

In the US Sega provided early release units to several retailers but notably skipped Best Buy, Walmart, and KB Toys. KB Toys was so mad they didn't carry Saturn stuff at all from that point.

At the time KB Toys was a pretty major retailer for video games, they'd have a presence in malls where a Babbages, EB, or Software Etc (retailers that got early release Saturns) wouldn't be found.

Sega's early release of the Saturn was one of the dumbest own-goals in video games. The Saturn was already going to have a serious struggle against the PlayStation for other reasons, fucking over retailers did absolutely nothing to help Sega.

ajmurmann
0 replies
1d3h

This! I grew up in Europe around the same time. Me and my small friend group hyped ourselves up for the Saturn. We were frustrated how Sega dropped the ball on marketing. Initially there were a few bad ads showing the mediocre Daytona USA and then nothing. Meanwhile Playstation ads were everywhere and were very well done.

As always, bad console sales resulted in a vicious cycle of less attention from 3rd-party devs, resulting in even fewer console sales.

Ghaleon
0 replies
6h17m

Games media were really wined-and-dined by Sony. I remember Ultra Game Players with a pic of someone wearing a memory card around their neck and calling it sexy. Sony really beat SEGA at their own game. As a rabid hardcore videogamer I was miffed why anyone cared about the system, since it didn't have many (any, to me) interesting games until FFVII came out. All my friends were amazed by Guardian Heroes and Dragon Force especially, and I was like, why isn't anyone playing these????

fengb
1 replies
1d

In retrospect, it makes a lot of sense. Genesis was huge in NA/Europe, but it was considered somewhat of a failure in Japan.

Sega has always floundered with its home consoles — they released 3 competitors to the NES after all. Genesis ended up being more of a one-trick pony than any indication of longterm success.

Narishma
0 replies
23h16m

> In retrospect, it makes a lot of sense. Genesis was huge in NA/Europe, but it was considered somewhat of a failure in Japan.

Ironically, it went the opposite way for the Saturn. It was pretty successful in Japan (slightly ahead of the N64) but a complete failure elsewhere.

wk_end
0 replies
1d3h

It wasn't just the Saturn (though issues surrounding its release definitely didn't help) - Sega already looked kind of like bunglers after neither the Sega CD nor 32X really caught on.

jandrese
0 replies
1d4h

Sega rushing to release the Saturn before the PSX was a mistake IMHO. Not only did it beat Sony to market, it beat its own games to market. But mostly it just cost too much, especially compared to the PSX.

Keyframe
0 replies
1d1h

I remember people at first buying the PlayStation since they could buy cheap pirated games with the swap trick and all.

JohnBooty
0 replies
1d3h

I think the missing link in this tale is the failure of the Sega CD and 32X, which really poisoned Sega's core fans in the years before the Saturn.

A lot of diehards (like me) felt really burned by those failures.

masklinn
0 replies
1d4h

> I can understand that scaling out the game "work" into multiple CPUs and dedicated processors makes sense from a cost-benefit perspective

IIRC it was not that: the Saturn was the most expensive to manufacture of the big three, and the need to price-match the PS made it a financial disaster for Sega.

gamepsys
7 replies
1d3h

> Consequently, the VDP1 is designed to use quadrilaterals as primitives, which means that it can only compose models using 4-vertex polygons (sprites).

This gave the 3D Sega Saturn games a more boxy look than PS1 counterparts. Comparing Resident Evil on Saturn and PS1 is a good side by side to see the difference. The overall result is that Sega Saturn games have a unique aesthetic in 90s 3D gaming.

It's also worth highlighting that the Sega Saturn's emulation is far behind other platforms. Perhaps it's the lack of success in the west, paired with the complex architecture.

wk_end
3 replies
1d3h

Saturn emulation is very solid at this point. But yeah, for a long time it was quite poor.

segasaturn
1 replies
22h3m

Saturn Emulation is definitely possible but the amount of complexity in the hardware means that it's much more resource intensive than its 32-bit contemporaries:

https://mednafen.github.io/documentation/ss.html#Section_int...

> Mednafen's Sega Saturn emulation is extremely CPU intensive. The minimum recommended CPU is a quad-core Intel Haswell-microarchitecture CPU with a base frequency of >= 3.3GHz and a turbo frequency of >= 3.7GHz (e.g. Xeon E3-1226 v3), but note that this recommendation does not apply to any unofficial ports or forks, which may have higher CPU requirements.

Those minimum specs are about the same as what's required to emulate a Wii via Dolphin, two generations ahead of the Saturn!

nikanj
0 replies
9h26m

For some perspective, Haswell came out in 2013. Those steep processor requirements aren't that steep in 2024

breadmaster
0 replies
22h16m

Yes, it's really quite good right now, and very accessible under programs like OpenEmu. I have a Saturn hooked up to a CRT still, but the emu doesn't feel very different these days.

spxneo
2 replies
21h45m

Best alternative to emulation: I'm not sure where FPGA is at, but it gives me peace of mind to just mod the console to support SD cards filled with every single game released for that console, picking up the original game from eBay if I really like the title and want to show support.

It's such a hassle to take the CD out of its plastic casing with rubber gloves to preserve value and put it back in each time, but you don't want to trade the original game experience for emulation.

gamepsys
1 replies
19h22m

You are talking about an Optical Disk Emulator, and a few have been created for the Saturn. It takes files from an SD card and supplies the data to the console as if an optical drive was reading the data. I personally own one. Just dropping the correct keyword in case anyone else is curious.

spxneo
0 replies
27m

That's correct. I really think ODEs are the right balance, but soldering was quite off-putting at first. I was concerned about fumes.

End result is you have every single game released on that console playing on the physical hardware.

The next crazy step here would be mods you can do to connect online and play your retro games online somehow (maybe some crazy real time 2d/3d video game segmentation and AI enhancement to generate multiplayer hacked ROMs).

So many games yet so little friends to play with...

thearrow
1 replies
1d6h

Nice analysis! I still have an original Sega Saturn I’ve owned since 1996 that I fire up occasionally for a nostalgia bomb. The thing still runs perfectly, same as the day I unboxed it! They may have ended up with quite a complex hardware architecture, but you’ve gotta love the reliability of the older consoles. The same cannot be said of the more modern consoles I’ve had over the years - burning themselves up or failing in other ways.

Ghaleon
0 replies
6h23m

It's not just old consoles that are reliable, it was SEGA (and Nintendo). When Sony and MS rolled in, their consoles already cut corners on reliability, and here we are today. PSX and PlayStation 2 disc-read errors were extremely common and crippling, even back then. But by the time people noticed they already owned a bunch of games and would just buy a new console lol.

Ghaleon
1 replies
3h15m

In the end, PR and Sony's pockets beat SEGA. That's really it. SEGA had many self-inflicted wounds for sure.

Games: what games set the world on fire on PSX, really? Resident Evil in '96 and FFVII in '97? And the Saturn had killer games, esp in '96. So it's def not the library IMO.

Hard to code for: devs had no problem dealing with the Playstation 2 a gen later, and the DC was easy to utilize but everyone dropped it when SEGA discontinued it, even tho the user base was good (in the USA at least, not sure about EUR).

Consumer goodwill toward SEGA: yeah, but look at the reliability issues with Sony and MS systems. They were pretty bad, esp with the 360, but these didn't hurt their consoles' long-term health at all.

The SEGA CD was not a flop in the States at least. It was always a high-end, kind-of-unnecessary cool product with some great games, but no killer app. It was successful for SEGA. (The 32X WAS a huge eff up tho for everyone involved. But I don't think on a mass-consumer level its brief existence single-handedly crippled the Saturn.)

People will buy anything that's marketed well in the States (can't speak for EUR). The Saturn was marketed like CRAP in the US. SEGA had its head up its ass and threw out all that made the Gen more successful than the SNES.

We can talk tech and minor details about what worked and didn't work for the consoles, but it's really just marketing and no good Sonic at launch (or ever) that doomed it.

lizardking
0 replies
13m

It's fair to say that the consensus is that the PSX had a superior library to the Sega Saturn, particularly in the US. 1997 was a banger of a year - FF7, FF Tactics, Tekken 3, Symphony of the Night, etc.

itomato
0 replies
1d5h

The diversity in consoles reminded me of the diversity in home computers in the waning glory days before PC domination.

Some of the same OEMs and publishers made it through until today.

I’d like to see an infographic and may be so motivated that I make one.

hbn
0 replies
1d3h

Speaking of awkward Sega architecture, MattKC recently did a video[1] on his second channel about the 32X, which if you don't know was a weird module that slotted into the cartridge slot of the Genesis to enable it to play a separate lineup of 32-bit games.

Since it was essentially 2 consoles working in tandem, it was another situation where you had 2 CPUs working together to pump out a video image. He tried to wire up his own video cables and found you could cut out the video signal from one machine and only get the output rendered from the other. The 32X itself would pump out 3D rendering while the Genesis would supply 2D graphics for e.g. menus, HUD, sprites, etc.

[1] https://www.youtube.com/watch?v=rl9fjoolS2s

flykespice
0 replies
1d2h

Nowadays console hardware has gotten boring, lacking the great diversity of the previous generations. It's basically a PC motherboard repurposed.

donatj
0 replies
21h49m

As a lover of the Sega Saturn, I really believe the use of quads over tris really contributed to the Saturn's unique look.

busfahrer
0 replies
21h26m

I love Copetti’s architecture articles, especially the one on PS1, such an interesting early not-quite-3D architecture.

Aissen
0 replies
1d5h

I love Copetti's work (and have previously used it with citation), but it always feels too high-level. But since I know how much work it is to write those, it always feels unfair to ask for more. Anyway, thank you Rodrigo if you're reading this !