When I see pictures and video from the 8- and 16-bit era of Nintendo development, the artists draw their sketches on checked paper and transfer them to workstations with professional monitors.
This doesn’t square with the idea that they optimized for cheap TVs, with rectangular pixels.
yup, honestly this whole take is so tired. It's a factoid pulled out of nowhere.
and the counterexample is easy: the gameboy launched with LCD screens and its games were definitely meant to look blocky. The pokemon games are perhaps the most influential nowadays in terms of pixel art style and show both sides of it: the very blocky style of the overhead view and the more detailed pictures of the pokemon. They also cover the 8-bit and 16-bit eras with the gameboy advance.
Your counterexample makes me think you've misunderstood the point from the start.
Devs design games to look good on the target platform. Obviously you can achieve gradient effects with a CRT and 16-bit color that are very different from a four-tone LCD. Both things can be true.
Edit: And some of them did that very deliberately and documented it. https://news.ycombinator.com/item?id=41134689
the central point of the piece is this:
and my point was that blocky pixel art is a faithful representation of an important percentage of games of the era; even though some other games of the era were not meant to be "blocky", many indeed were, as the gameboy proves.
The author said "often," not "always." The entire point of the article is to explore the details of the phenomenon and show that it's not as clear-cut as people think. What are you disagreeing with?
Nope.
And your gameboy examples are not counter points to the fact that games were designed around a target screen limitation. They weren't designed around a CRT quirk, but they were designed around their own LCD quirk instead.
The first gameboy's screen was painfully bad, with a lot of ghosting during movement.
It was exploited in various ways in games:
https://nerdlypleasures.blogspot.com/2018/03/compatibility-i...
https://nerdlypleasures.blogspot.com/2019/05/screen-persiste...
There was even a game that exploited the screen of the original gameboy to achieve a transparency effect in a spirit (but not implementation) similar to how Sonic devs created transparency on CRTs on the genesis.
https://youtu.be/MytSySMUwv8?t=2892
In this case, artificial flickering doesn't look like flicker on the original gameboy screen because it had a lot of ghosting. But it looks terrible on an emulator. You don't get those effects playing the same games on a modern computer monitor.
The various gameboys, and even the advance, were not as crisp as you remember them to be; their LCDs weren't particularly high quality. None of the original pixel art of those times was meant to look the way it does on the high-contrast, high-luminance, high-resolution LCD or OLED screens of today. That's particularly true as soon as movement is involved: the ghosting was intense, and it was a weakness of early LCDs as a whole, as even the best computer monitors from when LCDs first hit the market looked horrible in motion compared to a CRT or a plasma. So the pixel art era was never about looking at a crisp image.
And many gameboy games look very, very wrong when run crisply on an emulator. They absolutely weren't meant to look like this. That Batman game showing massive flickering around water looked fine on the original gameboy hardware.
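The flicker-transparency trick described above can be sketched numerically. This is a toy model, not the game's actual code: it assumes the panel's ghosting behaves like a simple exponential moving average of recent frames (the `persistence` value is an illustrative guess, not a measured Game Boy response). A sprite drawn only on every other frame settles near a 50% blend on the slow panel, while on a crisp modern screen the same signal would swing fully on and off each frame:

```python
# Toy model: why alternating-frame flicker reads as ~50% transparency
# on a ghosting LCD. Persistence is modeled as an exponential moving
# average of frames -- an assumption for illustration, not hardware data.

def perceived_levels(frames, persistence=0.7):
    """Blend each new frame into what the slow panel still holds."""
    held = frames[0]
    out = []
    for f in frames:
        held = persistence * held + (1 - persistence) * f
        out.append(held)
    return out

background = 0.0  # dark water tile, shown on even frames
sprite = 1.0      # bright sprite, drawn on odd frames only
frames = [sprite if i % 2 else background for i in range(60)]

levels = perceived_levels(frames)
steady = sum(levels[-10:]) / 10        # settles near the 0.5 midpoint
swing = max(levels[-10:]) - min(levels[-10:])  # far less than the raw 1.0 flicker
```

With zero persistence (an emulator on a modern display), `swing` would be the full 1.0, which is exactly the harsh flicker you see around that Batman game's water.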
I never said that CRTs or whatever didn't influence design sometimes. I just wanted to refute the central claim of the piece:
And no, blocky pixel art is a faithful representation of an important percentage of games from the 8-bit to 16-bit generations.
Because those games weren't designed around CRT effects. The exact same art techniques used for 320x200 games have been used for centuries in other media. Pixel art was most like tile mosaics, which also use dithering and have been around for millennia. We designed pixel art that way not because CRTs looked like that, but because we had to pack maximum communication into those pixels. Yes, CRTs had a softening and blurring effect. We didn't really talk about it back then because it was obvious, and it wasn't something we had the ability to turn on or off; it was simply a fact of the technology.
No one designed differently for monochrome monitors with higher resolutions, or for the nascent LCD technologies of the day either. Every single article that says what this article says is literally just inventing a story.
Dithering certainly existed prior to CRTs, but CRTs have a unique property of scanline gaps for vertical sharpening and scanlines/composite video bandwidth for horizontal smoothing. I've found documentation of pixel artists taking CRT television properties into account, even when designing art on higher-resolution (CRT) computer monitors: https://www.tumblr.com/vgdensetsu/179656817318/designing-2d-...
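Since dithering keeps coming up as the technique that predates CRTs, here is a minimal sketch of ordered (Bayer) dithering, the pattern-based variant common in pixel art: a gray level is approximated with only two tones by thresholding against a repeating matrix. The 2x2 matrix and API here are illustrative, not from any particular game's toolchain:

```python
# Sketch of ordered (Bayer) dithering: approximate a gray level with
# only on/off pixels by thresholding against a repeating 2x2 pattern.

BAYER_2X2 = [[0, 2],
             [3, 1]]  # thresholds 0..3, tiled across the image

def dither(gray, width, height):
    """gray in [0, 1] -> 2D grid of 0/1 pixels."""
    level = gray * 4  # scale to the threshold range 0..3
    return [[1 if level > BAYER_2X2[y % 2][x % 2] else 0
             for x in range(width)]
            for y in range(height)]

# A 50% gray comes out as a checkerboard: exactly half the pixels on,
# which a CRT's blur (or a mosaic viewed from a distance) fuses into gray.
grid = dither(0.5, 4, 4)
```

The same idea scales to larger Bayer matrices and to per-channel color thresholds, which is how those gradient effects were built within tiny palettes.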
Even if they didn't explicitly take the tv properties into account, they drew the graphics on CRTs and saw it running on the target machine. And then adjusted it so it looked "better". Even intuitively.
Did you ever play an MS-DOS game in CGA mode? It was not supposed to look like that!
https://www.youtube.com/watch?v=niKblgZupOc
Developers definitely took advantage of everything they could to show more colors on the screen; remember all those demos?
You're missing the fact that many if not most 8/16-bit devs did not make their art on graph paper but on PCs comparable to their target platform.
They did that because it was quicker and easier, not out of a specific intention to make better-fitting art, but it had that effect anyway. The pics of Dracula and the FF6 Siren from the post: there's no way someone plotted those on graph paper before importing them with no changes.
Yeah but you’re talking about primary source documents from real developers, aka knowledge. Can you somehow turn this into a 5 page long intellectually self edifying blog post about your personal brilliance?
How self-edifying would you prefer, on a scale of 1 to Wolfram?
You're thinking of "self-aggrandizing" (and I think doctorpangloss is too, although I'm less certain since they didn't mention Wolfram).
It does square with it. The analogy is in music production. Professional music studios use high end expensive studio monitors with flat frequency response. It's a reference point. From this reference point they can optimize the mix to sound good on other speakers that don't accurately represent the frequency range.
Indeed. Many forms of art happen via an iterative process through many forms to reach their finished state, and things like drawing/viewing pixel art with graph paper or writing out/playing a musical score on paper can be cromulent parts of the journey that the art takes.
Just because someone drafted some pixel art on paper doesn't mean that this must be the final form, and that no revision may ever be done.
And with recorded music in particular: Yeah, a good studio usually has a variety of monitors. Some might be exquisitely flat and unnervingly detailed, some might be capable of effortlessly reaching big-stage PA levels, and some might resemble common consumer gear more than anything else.
If it works well with all of these, then: It might be considered a good mix.
The Yamaha NS-10 is/was a popular speaker in studios not because it was good, but because it represented a common medium-size bookshelf-ish speaker that people might actually use at home where they listened to music, and it was consistent betwixt different studio spaces.
Tooling wasn't static, but improved over the years.
The first games might have been sketched on checked paper and might not have been so CRT-optimized.
Even in the eighties, nothing prevented a Nintendo game development team from using Deluxe Paint on an Amiga (or PC) to draw the graphics. (They also had other specialized systems in Japan for pixeling graphics.)
If you're interested in the software used in Japan for pixel art (or dot art, as they call it) I keep a list, which has 235 entries at the time of writing: https://blog.gingerbeardman.com/2023/10/21/list-of-vintage-j...
Before the 32-bit era, game development tooling was very nonstandard, and a lot of studios rolled their own. For example, Sunsoft's NES graphics/animation software ran on the Famicom Disk System (Japanese-exclusive NES add-on that let the system run software from proprietary floppy disks) so they were absolutely optimizing for consumer TVs with rectangular pixels. https://www.youtube.com/watch?v=O8PR2EShp70
Yeah, they do that. But you're seeing the final form. It was as iterated as source code is. You're seeing the final result that was shaped by the constraints of the medium.
In some cases, developers used grid paper with rectangular cells sized to match the aspect ratio of rectangular pixels on the intended display.
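The rectangular-cell graph paper makes sense once you compute the pixel aspect ratio. As a hedged illustration (a hypothetical helper, and it assumes the framebuffer simply fills a 4:3 screen, which glosses over real overscan and clock-rate details), the classic 320x200 mode works out to pixels that are 5:6, i.e. taller than wide:

```python
from fractions import Fraction

# Illustrative helper: derive the pixel aspect ratio (width/height)
# from the display aspect and framebuffer resolution, assuming the
# image exactly fills a 4:3 screen (ignores overscan and clock rates).

def pixel_aspect(width, height, display=Fraction(4, 3)):
    storage_aspect = Fraction(width, height)
    return display / storage_aspect

par = pixel_aspect(320, 200)  # Fraction(5, 6): pixels taller than wide
```

So a grid drawn with cells in that 5:6 ratio previews on paper roughly what the TV would stretch the art into, which square graph paper would not.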
Most 16-bit games (and many 8-bit) were of course drawn in PC paint programs, not graph paper, for the same reasons that programmers stopped writing their code out in pencil before submitting it.
Some developers used graph paper, other developers drew the graphics using the actual hardware on a TV. Plot the pixels while zoomed in, but see the real thing live on a TV.
To make drawing on actual hardware work, you either used battery-backed memory, or you used a connection to a PC.
Be careful mentioning pixels next to the word 'CRT': you'll get people parroting that CRTs have no real pixels, even though the reality is more complicated than that, and condensing it down to that slogan would be disingenuous.