I feel like prompt injection is getting looked at the wrong way: with chain of thought, attention is applied to the user input in a fundamentally different way than it normally is.
If you use chain of thought and structured output, it becomes much harder to prompt inject successfully, since any injection that completely breaks the prompt results in an invalid output.
Your original prompt becomes much harder, if not impossible, to leak within a valid output structure. And at some steps in the chain of thought the user input is barely being considered by the model at all, assuming you've built a robust chain of thought for handling a wide range of valid (non-prompt-injecting) inputs.
Overall, if you focus on being robust to user inputs in general, you end up killing prompt injection pretty dead as a bonus.
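As a rough sketch of what I mean (hypothetical names, using Pydantic for the schema check; swap in your own model call where call_llm goes): a response that no longer follows the reasoning-steps-plus-schema contract simply fails to parse, so a derailed output never reaches the user.

```python
from typing import Callable, Literal
from pydantic import BaseModel, ValidationError

# Strict output schema: chain-of-thought steps plus a constrained final field.
class Verdict(BaseModel):
    reasoning_steps: list[str]                    # intermediate reasoning the model must show
    category: Literal["billing", "bug", "other"]  # only these labels validate
    answer: str                                   # the text that would reach the user

PROMPT_TEMPLATE = """Classify the support ticket below.
Think step by step, then reply ONLY with JSON matching this schema:
{{"reasoning_steps": ["..."], "category": "billing" | "bug" | "other", "answer": "..."}}

Ticket:
{ticket}
"""

def classify_ticket(ticket: str, call_llm: Callable[[str], str]) -> Verdict | None:
    """call_llm is whatever function sends a prompt to your model and returns its text."""
    raw = call_llm(PROMPT_TEMPLATE.format(ticket=ticket))
    try:
        return Verdict.model_validate_json(raw)
    except ValidationError:
        # An injection that derails the prompt tends to break the JSON/schema
        # contract too, so the response is dropped instead of being shown.
        return None
```

This isn't a complete defence on its own, but it illustrates the point: the more of the output you constrain, the less room a successful injection has to surface anything.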
As far as I can tell this doesn't mention prompt injection at all.
I think it's essential to cover this any time you are teaching people how to build things on top of LLMs.
It's not an obscure concept: it's fundamental, because most of the "obvious" things people want to build on top of LLMs need to take it into account.
UPDATE: They've confirmed that this is a topic planned for a forthcoming lesson.