Looks incredible! All this needs is a little bit of conversational AI magic in the background to filter and modulate the content according to plain-English student questions, and it's go time.
Note that this was finished in 2019, so now would be the perfect time for someone to polish this up and expand it to the rest of math! Assuming this is threeJS, you could get an open-source file format going for simulations, and even host crowdsourced applications of it to existing popular math textbooks by figure/page number (rough sketch of one possible manifest entry at the end of this comment). I mean, linear algebra is cool, but the market for good free geometry education is limitless.
Does anyone know if the big names in math education offer simulations yet, or is it all animations/images/videos still?
EDIT: definitely ThreeJS — love the vector chapter. What this needs is true spatial computing support - not pages with nested simulations, but site-wide (SPA-wide) simulated objects. What if every student in geometry class could have their own simulation on their Chromebook as they read/follow along? I can’t wait.
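Purely hypothetical, but here's the kind of crowdsourced manifest entry I have in mind for mapping simulations onto existing textbooks. Every field name below is made up for illustration; nothing like it ships with this book:

    // Hypothetical sketch of an open manifest format linking reusable
    // three.js simulations to figures in existing textbooks. All names
    // (SimulationEntry, textbook, figure, etc.) are invented here.
    interface SimulationEntry {
      textbook: { title: string; isbn?: string; edition?: string };
      figure: { page?: number; figureNumber?: string };  // index "by figure/page #"
      scene: string;        // URL of a glTF/GLB or scene description
      controls?: string[];  // parameters a student can manipulate
      license: string;      // crowdsourced content needs a clear license
    }

    const example: SimulationEntry = {
      textbook: { title: "Some Popular Geometry Textbook", edition: "3rd" },
      figure: { page: 142, figureNumber: "5.7" },
      scene: "https://example.org/sims/cross-product.glb",
      controls: ["vectorA", "vectorB"],
      license: "CC-BY-4.0",
    };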
It really pains me to see someone suggesting adding AI to a book like this. Current AIs are infamously bad at math. The last thing we need is ChatGPT misplacing a minus sign and confusing readers or setting back their understanding by weeks.
Please read the comment carefully before parroting a canned retort. It doesn't say what you think it says.
GP's comment has been edited since my post. The original said something like "regenerate diagrams according to student questions". It's obviously a bad idea if you're trying to learn vectors and the entire diagram is flipped over the X axis, for example.
Nonetheless, today's AIs still regularly contradict themselves from one sentence to the next. Even if they're only generating text and "modulating" (which I take to mean rephrasing/summarizing), mistakes can and will happen. I stand by my comment even as it applies to the edited GP.
Filtering/modulating means selecting relevant excerpts. Think "AI for search", not conversational chat generation. This is something LLMs have been exceedingly good at, e.g. in DeepMind's Alpha series of projects.
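To make that concrete, here's a minimal sketch of "filtering" as retrieval over the book's own excerpts. The vectors are assumed to come from whatever embedding model you'd plug in (not shown), and nothing here generates new math; the book's text is returned verbatim:

    // Rank the book's own excerpts against a student question and
    // return the top matches verbatim. The `vector` fields and
    // `questionVector` are assumed to come from an external
    // embedding model (not shown).
    type Excerpt = { section: string; text: string; vector: number[] };

    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    function topExcerpts(questionVector: number[], excerpts: Excerpt[], k = 3): Excerpt[] {
      return [...excerpts]
        .sort((x, y) => cosine(questionVector, y.vector) - cosine(questionVector, x.vector))
        .slice(0, k);
    }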
(Not the person you replied to, but) I just re-read it, and the "canned retort" still looks completely accurate and relevant. Can you elaborate on why you think that AI's (known, admitted, and inherent) propensity for hallucination _wouldn't_ be disastrous in the context of pedagogy?
If the original comment had _just_ proposed to direct students to locations _within_ the original content ("filter"), it would have been less impactful - being directed to the wrong part of a (non-hallucinated) textbook would still be confusing, but in the "this doesn't look right...?" sense, rather than the "this looks plausible (but is actually incorrect)" sense. But given that the comment referred to "conversational AI", and to "modulat[ing]" the content (i.e. _giving_ answers, not just providing pointers to the original content), hallucination is still a problem.
I remember a friend who was reviewing some math before starting grad school being stymied by a typo in her textbook for an inordinate amount of time. It's really vital that instructional materials avoid errors as much as humanly possible. AI right now ain't it.
True, though detecting untrue maths is a key skill. Unlike software, there's no compiler, test suite, or other step to filter out your own mind's flagrant errors.
In a graduate course, we used a horrible Russian translation of
Jacques Neveu, Mathematical Foundations of the Calculus of Probability https://www.amazon.com/dp/B0006BNQSQ
with a huge number of misprints in the formulas. I spent lots of time hunting for those misprints, and I think it really helped me understand and remember the material.
On my last homework the professor omitted a required assumption, and I nevertheless "proved" the false assertion. Extremely embarrassing. When the same thing happened earlier in the semester, I correctly failed to finish the homework problem. I am getting tired, I guess.
You clearly have no idea how effective an interactive conversation with a text can be. An AI doesn't have to be "good at math" to be useful. People (and programs) who are "good at math" are a dime a dozen. To be useful to a student, a language model just has to be good at answering questions about math.
That part works, right now. Try it. Go to ChatGPT4 and pretend you're a student who is having trouble grasping, say, what a determinant is. See how the conversation unfolds, then come back and tell us all how "infamously bad" the experience was. Better still, ask it about something you've had trouble understanding yourself.
The key is to use GPT4, not the free version.
Yes, that makes a big difference.
Many people on HN formed their opinions on the basis of GPT3.x-generation models, though. They asked it a question, they got the nonsensical or hallucinated answer they expected, they drew the conclusion they wanted to draw all along, and by golly, that settles it, once and for all.
The AI isn't doing math; the AI is curating the textbook material. In the same way that you have a host of different faculties enabling you to excel at everything you excel at, there is more to math (and math pedagogy) than arithmetical consistency.
+1 for Spatial Computing here -- I see "immersive" here and just think that 2D animations of 3D concepts, while a good start, leave possibilities on the table. 3D content consumed inside a fully 6DOF animated space is a better environment for transferring meaning. These collections of links could be piped into WebXR with just a little tweaking and really be immersive.
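For an existing three.js scene, the tweak is roughly this (a sketch; import paths vary a bit between three.js releases):

    // Minimal changes to make an existing three.js scene viewable in WebXR.
    import * as THREE from 'three';
    import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.01, 100);
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    renderer.xr.enabled = true;                       // opt the renderer into WebXR
    document.body.appendChild(renderer.domElement);
    document.body.appendChild(VRButton.createButton(renderer));  // adds an "Enter VR" button

    // ...add the chapter's existing meshes to `scene` here...

    // Use the XR-aware animation loop instead of requestAnimationFrame.
    renderer.setAnimationLoop(() => renderer.render(scene, camera));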
Just taking something like the threejs GLTFExporter and combining it with modelviewer.dev on the fly could enable a 'view in AR' button compatible with both SceneViewer and Quick Look (i.e. most mobile devices available today).
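A rough sketch of that flow, with the caveats that GLTFExporter.parse's callback signature has changed across three.js releases, that the page is assumed to have already loaded the modelviewer.dev web component, and that Quick Look on iOS may still want a separate USDZ via model-viewer's ios-src attribute:

    // Export the live three.js scene to a GLB blob and hand it to a
    // <model-viewer> element with AR enabled.
    import { GLTFExporter } from 'three/examples/jsm/exporters/GLTFExporter.js';
    import type { Scene } from 'three';

    function addViewInARButton(scene: Scene): void {
      const exporter = new GLTFExporter();
      exporter.parse(
        scene,
        (result) => {
          const blob = new Blob([result as ArrayBuffer], { type: 'model/gltf-binary' });
          const url = URL.createObjectURL(blob);

          // <model-viewer> is the modelviewer.dev web component,
          // assumed to be loaded elsewhere on the page.
          const viewer = document.createElement('model-viewer');
          viewer.setAttribute('src', url);
          viewer.setAttribute('ar', '');
          viewer.setAttribute('ar-modes', 'webxr scene-viewer quick-look');
          document.body.appendChild(viewer);
        },
        (error) => console.error('glTF export failed', error),
        { binary: true }
      );
    }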