This is about deep learning, of which LLMs are a subset. If you are interested in machine learning, then you should learn deep learning. It is incredibly useful for a lot of reasons.
Unlike other areas of ML, deep learning's building blocks are interoperable: you can combine a transformer with a CNN if you wish. It also lets you do machine learning on essentially any type of data (text, images, video, audio), and it scales naturally with compute.
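To make the "interoperable parts" point concrete, here's a minimal sketch (assuming PyTorch; the class name and layer sizes are just illustrative) of a CNN front end feeding its feature map into a standard transformer encoder:

```python
import torch
import torch.nn as nn

class CNNTransformer(nn.Module):
    """Illustrative hybrid: CNN extracts local features, transformer mixes them globally."""
    def __init__(self, embed_dim=128, num_heads=4, num_layers=2, num_classes=10):
        super().__init__()
        # CNN front end: downsample the image into a grid of feature vectors
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer back end: treat each spatial position as a token
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                          # x: (batch, 3, H, W)
        feats = self.cnn(x)                        # (batch, embed_dim, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)  # (batch, seq_len, embed_dim)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))      # pool tokens, classify

model = CNNTransformer()
logits = model(torch.randn(2, 3, 32, 32))          # -> shape (2, 10)
```

Nothing about the pieces has to know about each other beyond tensor shapes, which is exactly what makes mixing and matching so easy.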
As someone pretty involved in the field, I lament that LLMs are turning people away from ML and deep learning, feeding the misconception that there's no reason to learn it anymore. Large models are expensive to run, have slow throughput, and still generally perform worse than purpose-built models. They're not even that easy to use for a lot of tasks compared to encoder networks.
I’m biased, but I think it’s one of the most fun things to learn in computing. And if you have a good idea, you can still build state-of-the-art things with a regular GPU at home. You just have to find a niche that isn’t getting the attention that LLMs are ;)
As someone who missed the boat on this, is learning it just for historical purposes now, or is there still relevance to future employment? I just imagine OpenAI eating everyone's lunch on anything AI related; am I way off base?