I learned to program in Borland Turbo C++. One thing from back then that I really miss is how easy it was to do some complex things. I could draw to the screen just by calling geometric shape functions, and it made pictures! I found out that if I drew a shape, then used the xor function to erase it and drew a new but slightly different shape, I could make animations. So making little sprites that looked like they were running, out of only 1000 lines of C++ code, was awesome. Some friends and I got together and made a Final Fantasy-like game using these tricks: hand-crafted sprites and a game world you could walk across. Every map was a whole screen, you'd move to an adjacent map when you hit the sides, and on every step, if you rolled a natural 1, you'd go into a fight with some enemies.
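The reason the XOR trick works can be sketched in miniature (my illustration, not the original Turbo C++ code): XORing the same pixels twice restores the background exactly, so you can move a sprite without ever saving what was underneath it.

```javascript
// Toy 1-byte-per-pixel framebuffer; 0x2a stands in for a background pattern.
const W = 16, H = 8;
const fb = new Uint8Array(W * H).fill(0x2a);

// Draw a filled box by XORing a "color" into the framebuffer.
// Drawing the same box twice is a no-op, because a ^ c ^ c === a.
function xorBox(x, y, w, h, color) {
  for (let j = y; j < y + h; j++)
    for (let i = x; i < x + w; i++)
      fb[j * W + i] ^= color;
}

const before = fb.slice();
xorBox(2, 2, 4, 3, 0xff); // sprite appears
xorBox(2, 2, 4, 3, 0xff); // sprite erased, background intact
console.log(before.every((v, i) => v === fb[i])); // true
```

To animate, each frame you XOR the sprite off at its old position and XOR it on at the new one.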
This was all pretty tedious, but it was a lot of fun for high schoolers. And if we'd known more computer science and software engineering, we probably would have done more, and it would have worked better. But unlike today, we didn't have to learn SFML or ActiveX or OpenGL just to start playing and get stuff working; we could just call circle().
I suspect that if we went looking for an environment where kids could do simple shape programming, we'd find something like that.
The difference seems to be that there's been a greater divergence between "professional" tools and "kids/intro" tools, whereas Turbo C++ (or Turbo Pascal in my day) was kinda both.
I've been struggling with getting my ten-year-old across this gap. He's outgrown Scratch/Roblox, and I don't think PICO-8 is quite the right set of abstractions (no built-in entity system, seriously?); we're working with Godot right now and he's making progress, but it's definitely much more of a learning curve figuring out how to do stuff in a tool that has everything.
I have a 3-year-old and a 1-year-old, but I'm starting to think about this.
If we really think BASIC / Turbo Pascal got this right, why don't we just teach our kids those things? I'm sure we can get the environments running.
I think something like BASIC or Pascal is right. The "hard" part, which I think Borland did so well, was making it so easy to just get things working. A Neo Turbo Pascal where you could type draw(x, y, r, "red") and see a red circle on the screen would be the ideal, without having to mess with a very complicated workflow like in Unity or Unreal.
Python's turtle graphics or Logo is good for this.
Free Logo implementations are available even today.
LOGO is a great take on LISP-like languages overall; unfortunately, it uses dynamic scope. Of course this only comes up in larger programs, but I do wonder if anyone has made a lexically scoped LOGO variant, and what it might be like to teach coding in it.
What is the difference between dynamic and lexical scope? And what are they? I googled it, but could not figure it out on a quick look, at least.
#lazyweb
Dynamic scope means that whoever last bound the variable at runtime determines its value, not whichever outer scope lexically surrounds it.
I guess it would be something like this in pseudocode:
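The snippet seems to have been lost from the comment; one way to sketch the difference, in JavaScript rather than pseudocode (JavaScript itself is lexically scoped, so the dynamic case is simulated with an explicit binding stack):

```javascript
// Lexical scope: f() sees the x from where f was *defined* (the outer one).
let x = 'global';
function f() { return x; }
function g() {
  let x = 'local'; // invisible to f under lexical scoping
  return f();
}
console.log(g()); // 'global'

// Dynamic scope, simulated with a stack of bindings:
// whoever bound x most recently at *call time* wins.
const dyn = [{ x: 'global' }];
function lookup(name) {
  for (let i = dyn.length - 1; i >= 0; i--)
    if (name in dyn[i]) return dyn[i][name];
}
function fDyn() { return lookup('x'); }
function gDyn() {
  dyn.push({ x: 'local' }); // bind x for the duration of this call
  const result = fDyn();
  dyn.pop();
  return result;
}
console.log(gDyn()); // 'local'
```

In a dynamically scoped Logo, a procedure can read (and clobber) variables bound by whoever happened to call it, which is exactly the kind of spooky action that gets confusing in larger programs.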
I mean, PICO-8 is basically that:
https://pico-8.fandom.com/wiki/Circ
But I'm just not sure if pixel-by-pixel drawing is the right abstraction layer at this point in time.
That looks cool to me, thanks for sharing. I think something that works well for kids is instant feedback on their code/tweaks, and it looks like this has it.
PICO-8 sits in an interesting spot: it's not exactly a "toy" language/environment the way Scratch is, nor is it especially geared toward learning. But on the other side of things, it is based on a real programming language (Lua), and there's an interesting scene around it where people flex by pushing it to its limits, making demos, demakes, etc., many of which go far beyond the NES-level capabilities it's meant to have.
I’ve got a 15 yo who decided to start programming in Python using Pythonista on his iPhone. He refuses to take any input from me; just wants to learn on his own. Pythonista comes with some nice game-programming modules. So far he’s shown me a Pong game, 2048 clone, air hockey, and more.
Would Processing[0] be a good fit? It's designed to be easy to learn and use, but powerful enough for professional work. It's very quick to get cool stuff moving on a screen, and the syntax is Java with a streamlined editing environment.
[0] https://processing.org/
Throw him off the deep end into Minecraft modding ;)
If you search around, there are some pretty decent "download IntelliJ and go" things.
The other divergence is in expectations.
A kid in the early to mid-80's could get enough working that they could imagine it possible to match a store bought game.
Once hardware capabilities went up, so did the need for more specialized skills and longer development cycles. Even though it might have still been possible to draw a box on the screen with a single command, the relationship of that box to a valuable outcome was a lot less obvious.
Only if people lose the ability to appreciate simple things. 2048 is a lot of fun. So is Wordle. Or the xor game that made it to HN a week ago.
I didn't see that "xor game" (probably because I was on vacation); would you mind linking it for me?
My primary school kid loved it.
https://news.ycombinator.com/item?id=39323511
Flappy Bird too, for that matter.
tetris, 1948
When I was teaching programming, my go-to was the JavaScript canvas API. It's 2D only, and very simple. And being on the web, once a student has made something, we can host it for them and they can show their friends.
I have a ~20 line html harness which sets up a page with a full screen canvas element and gives you global window width & height variables. That’s all you need to get started. And it’s real JavaScript - so students learn a useful programming language as a result, and the advanced students can go nuts and add sprites, sound and networking if they really want to.
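A harness along those lines might look something like this (a hypothetical sketch of the idea, not the actual template; names like `student.js` are my invention):

```html
<!doctype html>
<html>
<body style="margin: 0; overflow: hidden">
<canvas id="canvas"></canvas>
<script>
  const canvas = document.getElementById('canvas')
  canvas.width = window.innerWidth
  canvas.height = window.innerHeight
  // Globals for student code: a 2D drawing context plus width & height.
  window.ctx = canvas.getContext('2d')
  window.width = canvas.width
  window.height = canvas.height
</script>
<!-- students edit only this file -->
<script src="student.js"></script>
</body>
</html>
```

With something like that in place, a student's first program can be as short as `ctx.fillRect(0, 0, width, height)`.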
Any chance you have an example of what you are talking about?
Another option is to use the p5.js library. They also have a nice online editor, at https://editor.p5js.org/, which makes it easy for students to get up and running quickly.
Yeah, this is the template I use. As I say, it's like ~20 lines of boilerplate:
http://josephg.com/canvas_f.zip
I tell people to leave the html file alone and edit the javascript, using the canvas API to make it draw whatever they want. Canvas supports text, shapes and images and translations / transformations. If they want, I show them how to animate their work as well.
People make fun cursed things like this:
https://home.seph.codes/public/niamh/
At the very least, you can always run Borland Turbo C++ on Windows 2000 in a VM.
There has been a lot of effort put into making kids' versions of technology, which I think is probably the right move for elementary school-aged kids. But once you get into middle and high school, I think there's a "coolness" factor as a kid to using the same stuff the pros use. At that point you're trying to be an adult, you're trying to do stuff the right way, and hey, you might be getting a real job in the field in just a couple of years anyway. So I think there's some value in giving a pro tool a friendlier environment and onboarding process.
I'll most likely pick FreeBASIC or Python for teaching kids simple graphics programming. FreeBASIC has a QB-compatibility mode, so using old QBasic tutorials shouldn't be a problem.
Or Python for 3D stuff.
It's called Processing.
Modern 2D graphics are not based on plotting pixels to a framebuffer, they have "textures" or "surfaces" as a native part of the system and any compositing is done as a separate step. So if anything making "simple sprites" has become a bit easier since you can just think of any composited surface as a more generic version of a hardware sprite.
I'm curious: how does the texturing work in the SDL backend? I haven't read the code, but I'm curious now.
Back in the day I heard from a friend who claimed to have written some 2D framework that was faster than Direct2D; that was a pretty early version of Direct2D, I think.
SDL is a cross platform library, so it works by just using whatever native rendering context is available, abstracted to a common API. In more general terms AFAIK it uses the geometry API under the hood and just pushes triangles/quads to the GPU whenever possible.
Thanks a lot. Is it the same for 2D?
I was referring to 2D. For 3D, SDL2 has OpenGL and Vulkan APIs.
SDL3 is going to have a general purpose 3D API with its own shader language AFAIK.
In my experience, if what you want is to just get a window open and render some sprites, have some basic low level stuff handled and not have your hand held, SDL2 is ridiculously easy for it. There's also Raylib[0] and SFML[1], neither of which I've used but I hear good things about.
[0]https://www.raylib.com/
[1]https://www.sfml-dev.org/
Thanks, I'm not familiar with the graphics internals so sorry for the confusion.
Yeah, I made half of an Ultima spinoff with SDL2, and it's pretty easy. Basically I load a texture that is a spritesheet, and it's trivial to present a tilemap.
I think this is simpler from the POV of making a game engine from scratch, or a game with complex effects and graphics. But is it simpler from the POV of a high schooler who just wants to get some flat-colored shapes on the screen?
Even rendering "flat colored shapes" efficiently can be a bit non-trivial if you expect pixel-perfect results, like you'd get by plotting to an ordinary framebuffer - the GPU's fixed rendering pipeline is not generally built for that. The emerging approach is to use compute shaders, and these are not yet fully integrated with existing programming languages - you can't just edit ordinary C++/Rust code and have it seamlessly compile for CPU and GPU rendering. But we're getting closer to that.
That still reigns as my favorite IDE of all time by a country mile.
Same here. Going into UNIX back in the early 1990s, after using the Borland IDEs across MS-DOS and Windows 3.x (and being aware of their OS/2 versions), felt like time travel back to the genesis of programming, CP/M style.
Thankfully a professor pointed us to XEmacs, with which I managed to get my Borland experience back; it became my UNIX companion until KDevelop, Eclipse, and NetBeans came to the rescue.
Early 90's, but later you had WPE and XWPE.
My first real IDE and still my favorite IDE.
One thing from back then that I really miss is how easy it was to do some complex things.
This might be my biggest disappointment with "modern" programming. I want direct access to the hardware with stuff like $100 1GHz 100+ core CPUs with local memories and true multithreaded languages that use immutability and copy-on-write to implement higher-order methods and scatter-gather arrays. Instead we got proprietary DSP/SIMD GPUs with esoteric types like tensors that require the use of display lists and shaders to achieve high performance.
It comes down to the easy vs simple debate.
Most paradigms today go the "easy" route, providing syntactic sugar and similar shortcuts to work within artificial constraints created by market inefficiencies like monopoly. So we're told that the latency between CPU and GPU is too long for old-fashioned C-style programming. Then we have to manage pixel buffers ourselves. We're limited in the number of layers we can draw or the number of memory locations we can read/write simultaneously (like how old arcade boxes only had so many sprites). The graphics driver we're using may not provide such basic types as GL_LINES. Etc etc etc. This path inevitably leads to cookie-cutter programming and copypasta, causing software to have a canned feel like the old CGI-BIN and Flash Player days.
Whereas the "simple" route would solve actual problems within the runtime so that we can work at a level of abstraction of our choosing. For example, intrinsics and manual management of memory layout under SSE/AltiVec would be replaced by generalized (size-independent) vector operations on any type, with the offsets of variables within classes/structs decided internally. GPUs, FPUs, and even hyperthreading would go away in favor of microcode-defined types and operations on arbitrary bitfields, more akin to something like VHDL/Verilog running on reprogrammable hardware.
The idea being that computers should do whatever it takes to execute users' instructions, rather than forcing users to adapt their mental models to the hardware/software. Cross-platform compilation, emulation, forced hardware upgrades that ignore Turing completeness, vendor/platform lock-in and planned obsolescence are all symptoms of today's "easy" status quo. Whereas we could have the "simple" MIMD transputer I've discussed endlessly in previous comments that just reconfigures itself to run anything we want at the maximum possible speed. More like how a Star Trek computer might run.
In practice that would mean that a naive for-loop on individual bytes written in C would run the same speed as a highly accelerated shader, because the compiler would optimize the intermediate code (i-code) into its dependent operations and distribute computation across a potentially unlimited number of cores, integrating the results to exactly match a single-threaded runtime.
The hoops we have to jump through between conception and implementation represent how far we've diverged from what computing could be. Modern web development, enterprise software, a la carte microservice hordes like AWS that eventually require nearly every service just to work, etc. often create workloads that are 90% friction and 10% results.
Just give me the good old days, where the runtime gave us everything, with no include paths or even compiler flags to worry about, and the compiler stripped out everything we didn't use. THINK C for the Macintosh mostly worked that way, and even Metrowerks CodeWarrior tried to have sane defaults. Before that, the first fast language I used, called Visual Interactive Programming (VIP), gave the programmer everything and the kitchen sink. And HyperCard practically made it its mission in life to free the user of as much programming jargon as possible.
I feel like I got more done between the ages of 12 and 18 than all the years since. And it's not a fleeting feeling.. it's every single day. And forgetting how good things were in order to focus on the task at hand now takes up so much of my psyche that I'm probably less than 10% as productive as I once was.
Microcode? I don't think that's how modern μarch works. You can definitely make modern compute accelerators more like a plain CPU and less bespoke, and this is what folks like Tenstorrent and Esperanto Technologies are working on (building on RISC-V, an outstanding example of "simple" yet effective tech), but a lot of the distinctive feature sets of existing CPUs, GPUs, FPUs, NPUs, etc. are directly wired into the hardware, in a way that can't really be changed.
sounds like you want transmeta and the jvm to have a post-oop baby that natively understands matrix and vector math, simd and massive parallelism.
I kind of envy you for having access to Borland Turbo C++ and learning resources for it as a kid. The closest I could get to it was reading a review on my local computing press. Even assuming I could magically get a copy, I still wouldn't know what to do with it without reading material. And even if I had the reading material, I'm not sure how much I would make out of it with my fledgling knowledge of English at the time.
Borland Turbo C++ (and Pascal as well) had great help documentation. Every function was thoroughly explained, and there were a lot of examples. I learned C just by reading the Turbo C++ help docs. I miss that time.
I wouldn't have had access either, if not for the local pirate scene. Even owners of local software houses doing professional accounting systems didn't mind copying a few disks of software they'd paid for to a kid who knew how to ask. If you got something new, you'd immediately share it with your colleagues.
And everybody kept in mind that if we ever started making money from our hobby, one of our first investments would be into buying properly licensed copies of the tools we used.
good old conio.h and Borland's BGI graphics.h :) SDL is kinda there for that today, albeit not as simple.
xlib > bgi. That thing was so "powerful" it destroyed my monitor using a mode with suspect timings :D
Computer Science classes in Indian schools (fifth to eighth grades in my case) taught programming using Borland Turbo C++. That was back in the 2000s, but I wouldn't be surprised if they still use it today.