I have continually contended that the terminal is the single thing eternally condemning Linux to <5% market penetration.
It's not just entering text. The entire experience is complicated. It's a Concorde jet cockpit[1] (with no labels, either), when 95% of the population just wants to fly their drone around[2].
[1] https://qph.cf2.quoracdn.net/main-qimg-2566f4c91b894e4169d77...
[2] https://media.thedroningcompany.com/images/tincy/WQZpC56vqMp...
Doesn't Windows, the most popular OS out there, have a cmd.exe terminal thing? Why isn't that harming Windows?
The Windows Command Prompt's default experience is worse than most, if not all, terminals you'll find on Linux systems. Even in a plain TTY, you're bound to find that the shortcuts the author mentioned work like a charm.
To make my cmd.exe bearable, I use https://chrisant996.github.io/clink
tl;dr You're comparing the choice of wheels on a plane to what makes planes sell more.
Essentially nothing in Windows requires cmd.exe. Having a terminal in a desktop OS isn't a problem. Needing one is.
Just like having a GUI isn't a problem for a server OS, but needing one is a problem.
Apple would disagree with you on both counts.
Plus for a long time desktop Windows needed cmd.exe to support login scripts.
Just as people use Linux daily without ever touching the command line, e.g. Android, LG smart TVs (webOS), satellite TV set-top boxes (e.g. Sky Q), home routers, etc.
And if you want to focus on Linux running on laptops, then there are Chromebooks and the old Asus Eee PCs.
The reason desktop Linux isn’t polished is that every time a company invests heavily in desktop Linux, Microsoft undercuts them (like how they sold XP at a loss to thwart Linux in the netbook market). But the fact that Apple could take BSD, Google take Linux, and Nintendo run BSD on some of their consoles really speaks volumes about how there’s nothing technically stopping anyone from running a POSIX platform like Linux while still hiding the command line from regular users.
Though going back to my “Linux isn’t polished” point, I still think Linux+KDE is a lot more polished than modern Windows. But that’s just my biased opinion.
None of those are (used as) general purpose computers though.
Did you read my next paragraph where I acknowledged that point myself and gave some laptop examples too?
Plus I’d argue Android is the average person’s general-purpose computer. At least among the people I know: they use their phones for 99% of things and actively avoid using a laptop as much as they can.
I agree about Linux + KDE being more polished.
For my main point, I suppose I should have specified GNU/Systemd/Linux as needing a CLI, not everything with a Linux kernel. POSIX-style kernel + libc is a very good basis for an OS, and such an OS doesn't need a CLI exposed to the user. It's all the Udev/Systemd/SysVInit & similar stuff that's CLI-only, and desktop Linux tends to require interacting with one or more of those on at least an occasional basis.
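To give a concrete flavor of what I mean (the service names here are just examples), the occasional maintenance tends to look like this:

    systemctl status NetworkManager   # inspect a service's state and recent log lines
    journalctl -b -p err              # errors logged since the current boot
    udevadm monitor                   # watch device add/remove events as they happen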
There are web-based GUIs for systemd (and SysV init too).
But the main reason you don’t see GUIs for those services is because Desktop distros tend to abstract away systemd so you don’t even need to manage it, let alone have a GUI to do so.
Like with Windows, the average user wouldn’t be manually managing what services to start and stop.
And that’s the real crux of things. A lot of the stuff that people say you need a CLI for in Linux are operations that the average user wouldn’t know how, nor want, to do on Windows even with a GUI. They just run a browser, and if the machine goes slow they ask someone technical (friend or shop) to fix it. I know this because I used to be that friend.
So I really don’t think the CLI is what holds back Linux. It’s just that the economics were never there while Microsoft dominated the desktop world. And these days most people use phones and tablets as their general-purpose device, so in a way Linux did eventually win anyway.
Windows now has a new default terminal that is pretty cool - I install MSYS2 and set it up in the new Windows Terminal and it's all good. GNOME Terminal and other terminals on Linux are pretty cool too.
If you want the worst default terminal experience just boot macOS.
Can you explain what's the big appeal of other terminal apps? I have been using Terminal for 21 years without any issue -- but I'm open to trying something else.
Actually I've only this year switched to an "AI enabled" terminal app called Warp.
I’m curious to know what your experience of changing to Warp has been like? Sounds like terminal was working fine for you - was the switch worth the time?
I signed up for the waitlist ages ago and finally got the announcement of Linux support in February but am still yet to try it. Mainly because I’ve had a particularly busy year and I can’t justify fiddling around with my stack just for fun. I have no appetite for the risk I may lose hours to fixing something that goes wrong.
On Warp: After adjusting a couple of minor settings, I found Warp to be worth it. In fact, I at first trialed it 'side by side,' meaning keeping Terminal.app and Warp both open -- and found myself going for the Warp window more often. So it was an easy call for me.
I chose to not use its custom prompt because I wanted things to be more 'stock' in the terminal and not to rely on an app external to the shell for the features -- the only really fancy prompt feature I have is git branch display, and that's already working fine with my existing zsh prompt.
So, anyway, I can vouch that it didn't try to make changes to my environment (other than offering to override the prompt, which I think it does in a way that doesn't touch your .zshrc/.bashrc file anyway).
And for what it does do, I think it's great. Having command outputs separated and in little scrolling panes, really great. And mouse-able, standard text field to edit commands in, also amazing. Say you have a long URL on the clipboard with a {object_id} variable in the middle. You can paste it in, and use the mouse to select "{object_id}" and replace it instead of using arrow keys etc. to manually delete and replace the variable. So with the above efficiency gains it is already pretty cool compared to any kind of terminal app I knew of.
The AI stuff has been great to have as well. It's really convenient to have free access, right in the terminal, to an LLM that I assume has been well prompted to produce shell or scripting code as requested.
One thing I turned off is their recent "Detect natural language automatically" feature. Before, if you wanted to invoke AI, you just did a #comment. So for instance "# docker command to remove exited containers" -- well, they updated it so that even without that "#" it should "just know" from inspecting the command. But I had a problem with it getting confused by a shell command that was also parseable as 2 English words. I think I prefer actually knowing whether I'm commanding a shell or operating an LLM, anyway.
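For reference, the sort of answer that comment-prompt should come back with is something along these lines (both forms are standard Docker):

    docker rm $(docker ps -aq --filter status=exited)   # remove all exited containers
    docker container prune                              # or the shorter built-in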
BTW - it occurs to me that there are also big categories of features it has which I don't even use, such as runbooks that can automate frequently-done tasks, things like that. Those may also factor into your decision.
That's because the overwhelming majority of Windows usage, administration, and programming can be done almost entirely without ever touching the command-line, from Explorer.exe, Task Manager, Edge, Media Player, Paint, and built-in ZIP handling to the huge list of MMC snap-ins[1], and Visual Studio 2022.
For the record, aeroplane cockpits have also gotten considerably simpler in the intervening half-century since Concorde and the first 747s. Here is an Airbus A350 cockpit[2].
[1]: https://serverfault.com/questions/158075/what-are-the-names-...
[2]: https://cdn.airplane-pictures.net/images/uploaded-images/201...
I mean, aesthetically it's simpler sure, but the complexity is still there; it's just now much more digital.
That does not match my experience, even though I once believed that myth too.
At one time I had to install Windows IoT Enterprise on several kinds of embedded computers, all of which had various quirks.
The computers worked fine in Linux, from the first boot attempt, without the need to do anything special, but a customer wanted to have Windows on them.
After installing Windows, there were a lot of problems. For instance, Windows was unbelievably slow because the SSDs had a very low write speed, but they could not be replaced with decent SSDs because the embedded computers were certified for certain applications only with their original components.
Making Windows usable on those computers required a week of tuning and finding various workarounds by searching the Windows Knowledge Base and various Internet forums, where many Windows users had the same complaints as me, but few were able to provide good advice about how to solve the problems.
I was astonished to discover that not one of the workarounds could be done from any graphical interface of Windows; all of them required running, in a cmd window, obscure Microsoft command-line utilities with a lot of magic command-line options, which I did not understand and could not find in the official Windows Knowledge Base, but which were suggested as solutions on various Internet forums, and indeed they worked as desired.
The most popular desktop OS. It is nowhere to be found in the Top 500 supercomputers, is nowhere to be found on smartphones, lags in the Cloud, is nowhere to be found in the billions of appliances, etc.
It is harming Windows. That horrible cmd.exe is one of the reasons Windows lost in all the other markets and has hardly any market share there.
Seriously.
Back in the '90s a lot of movie production was being done on SGI IRIX computers. Jurassic Park themed ones, even. Really spiffy machines with both an excellent GUI and terminal.
Cheap Windows NT boxes with Nvidia GPUs were seen as the best path forward.
SGI gear was also Jurassic Park style, "spare no expense," and was not cheap! Super expensive machines, but people got what they paid for too.
Alias and Maya were ported to Windows, and off to the races, right?
Nope. Unix scripting, that awesome terminal and friends made all the difference in the world.
This was true even for smaller or single-person shops, who would do the work on a now-slower SGI because the user experience was bang on point, and that meant getting the desired outcome the first time, no bullshit.
The minute we had Nvidia drivers, Linux was in to replace the SGI machines.
And the production community realized they could each write or port the tools they were good at, share and share alike (to a point), and all be in business a whole lot cheaper and faster.
That took a couple, maybe a few, years.
The "terminal" and what can happen in one and why really does matter.
Windows PowerShell is pretty OK at this point, but it is still its own thing, not playing very nice with the other kids in the sandbox.
Lol, I always thought Microsoft happening in Redmond was symbolic. It really is!
95% of the population doesn't need to use the terminal.
If you love cars, popping the hood is probably part of the experience. For everyone else, it’s the sign of a very frustrating day ahead.
As computer lovers, I feel we must endeavour to remain mindful that almost everyone else isn’t like us. They’re trying to get a task done. They don’t want to maintain or talk to the computer.
This was pretty much my point. 95% of the population isn't like us and doesn't need to interact with the terminal; thus, thinking the terminal is what's keeping Linux back doesn't make much sense to me.
The larger issue is the times that the Linux UX goes off the rails and the solution turns out to be "You're going to have to bleat this esoteric incantation at the command line." I plugged my laptop into a second monitor the other day and, oops, for whatever Godforsaken reason it didn't auto-detect. Suddenly I'm munging around in `xrandr` to get my screens to work.
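For the curious, "munging around in `xrandr`" looks roughly like this (the output names are machine-specific, so these are only illustrative):

    xrandr                                           # list outputs and available modes
    xrandr --output HDMI-1 --auto --right-of eDP-1   # enable the external screen beside the laptop panel

Not hard once you know it, but no non-technical user should ever have to.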
To its credit, all the major desktop distributions today are making real inroads into narrowing the scope of situations where that happens. This was a much larger concern in the past, when key aspects of the user experience were still in the category of "There was a command-line tool written to control it and that's all you'll ever need, dammit."
The alternative is "it just won't work". You too can live in that world by simply not trying to fix it.
My wife's Macbook won't suspend correctly when it's plugged into a Dell monitor, the monitor going into power save wakes up the laptop. I wish there was an arcane incantation that would fix it, but no, the proprietary walled garden is well locked up, with glossy user-friendly rounded corners everywhere.
This is the most insightful post in the thread -- there's definitely a false dichotomy here. It's not "have to use a terminal" vs "everything works without a need for a terminal ever" -- it's "able to use a terminal to accomplish some goal or recover from Really Bad Thing" vs "Can't Do That Thing / have to factory restore to recover".
Like the way Apple Maps on my iPhone can't save the person to notify when I navigate to Home. I can save a person to be notified to any arbitrary location besides Home. With Home though, the setting simply doesn't save. I'm sure if I had a terminal I could inspect the data involved and manually fix what's wrong, but the 'Apple' or GUI-only approach simply makes that out of the question. Even better, when the corrupted data is cloud-resident I think the answer is often just that it's broken permanently unless you want to create a brand new account.
There are millions of computers out there that just don't work right, or where people can't accomplish some goal they wish they could. That's the flip side of the 'terminal or not' coin. Neither is a good experience for users who aren't technical. For nerds, though, the terminal provides a great deal.
In general, Apple would solve that problem by recommending you plug an Apple computer into an Apple monitor.
The Apple ecosystem is well incentivized to make all of their pieces work with their other pieces. The Windows ecosystem is pretty decently incentivized to make things work together in general (In both directions... Windows suffers in sales if there's some popular hardware it won't work with, and popular hardware suffers in sales if it doesn't work with Windows).
One certainly does run into the occasional Dell-monitor-with-Apple-laptop problem that you then need a couple of insider specialists to address. But I've not really been sold on the notion that the open ecosystem is strictly superior in that sense... in theory it is; in practice you can't be an expert at everything, and there's no guarantee anyone's going to come along and care about how to fix your particular Dell / Linux distro configuration.
How does the monitor thing relate to what I was saying? In my case, I'm saying that Apple's tightly-coupled, app-OS-cloud integration means I have no ability to see what's wrong (and I'm positive the minimum wage phone support guys will not be able to escalate a bug like that to Engineering and get it fixed). My point was even first-party walled garden stuff doesn't work right, and having no access beneath the GUI rules out anyone outside being able to fix it. I get your point though that the 'right person' who could use that access to more obscure things may be rare anyways.
I think my main gripe though is that there simply isn't enough drive to move things out of the terminal and into a GUI.
I really think that a core problem is that once you are in a position where you are contributing to distros, contributing to the kernel, you are well versed and acquainted with the terminal, and you love its incredible power and efficiency. The cockpit is no longer a sci-fi meme, it's a meticulous masterpiece granting a god-like computing interface. Nobody at that level is interested in much GUI-ification, because, holy shit, this CLI is the best! (Never mind that software engineering is dominated by "complexity and control" types vs the "simplicity and automatic" types of the wider population.)
Sorry if I was vague! I was commenting in complete agreement with you.
Computers are complex things. Sometimes things break and you just can't do the task. But for a knowledgeable person, you just "pop the hood", fix the problem, and then get things done. On my iPad, when the Files app freezes, there's nothing to be done other than rebooting to get it working again. On Linux, I could pkill the software, restart it in debug mode, or go find the logs to learn what's wrong. Yes, there's a difference of mindset. But I prefer the "hood" to be there instead of having it be an opaque box.
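To illustrate, the whole "pop the hood" routine is a couple of lines (the app name here is hypothetical):

    pkill -f myapp                     # kill the frozen process
    myapp 2>&1 | tee /tmp/myapp.log    # relaunch it, capturing its output
    journalctl --user -e               # or jump to the end of the session logs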
If you want to get a task done, learn the software that can do it and externalize everything else (IT support). But sometimes that software is the terminal, or the terminal is where the software lives.
if you have a problem with gnu/linux (as opposed to android) you can usually find the solution on stack overflow, server fault, or superuser, but the solution you find there is going to be a textual command line, not a gui. gpt-4 can tell you how to solve the problem, but only using a textual command line, not a gui. text is the most powerful, useful, effective communication technology ever, period. always bet on text: https://graydon2.dreamwidth.org/193447.html
https://news.ycombinator.com/item?id=40909185 documents that this is actually also true for microsoft windows
on the other hand, 'always bet on text' doesn't mean 'always bet on editing your command line with inconsistent control-character commands in a mode where clicking with the mouse doesn't do anything'
But that is what I passionately hate about it.
In GUI environments there are check boxes, buttons, menus, and English labels for everything.
With Linux it's like you are copying down a sacred language and presenting it at the altar with your fingers crossed. You just changed something. It didn't fix the problem. But the change still happened. Can you undo it? Probably not without way more digging. So now you just cross another set of fingers, hoping you didn't break it more, or break something else in the future.
Compare to windows:
I checked this box that says "disable firewall". Then hit "Apply".
That did not fix my problem.
I unchecked the box that says "disable firewall". Then hit "Apply".
The usual solution is to read the manual. The CLI is not an esoteric language. Software does complex things, and you can either go with the GUI fiction, which...does things, or have every option available to do what the software can do. Take find, where the first line of the manual is: find - search for files in a directory hierarchy. Think of however you want to find files inside a specific directory, and find can do it for you and more. Most users will only have a few use cases for finding, and that's what most GUI file explorers offer, but when you need that extra power, it's available to you.
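For instance, a couple of one-liners:

    find ~/Documents -name '*.pdf' -mtime -7   # PDFs modified in the last 7 days
    find . -type d -empty -delete              # remove every empty directory under here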
The solution for your problem is to not do stuff you don't understand. And for the above use case, there's usually a GUI for it. The CLI stuff heavily assume that you know what you're doing.
the cli is just as much of a fiction, just fiction with a different focus
i agree that reading the manual is helpful, but doing stuff you don't understand is necessary to come to understand it, so i don't think it's good advice to avoid it. using the cli for simple things builds the skills you can use for more complex things. the same is true of a gui
(of course there are clis and guis incapable of doing complex things, and those are kind of a dead end)
Only if you're doing an experiment and can constrain accidents. Any other kind of activity requires reading the manual first. Manuals are terse because they're supposed to be a reference, but you can usually find books that ease the way in. And then there's the domain expertise that is required: you need networking knowledge to interact with software like ip, operating-system knowledge to understand what top is showing you, etc. You can't get around that.
The GUI is a fixed canvas already painted by someone else; the CLI lets you write your own poetry.
as i said in my other comment at https://news.ycombinator.com/item?id=40912296, text really helps a lot with constraining accidents
obtaining networking and operating systems knowledge includes as an essential part using software like ip and top (though try htop instead). it's not a strict sequencing but a back-and-forth interplay, synergistic with factors like study and mentorship
as for poetry, there are plenty of clis that aren't very expressive—rt-11, cp/m, grub, ms-dos, and mpv come to mind—and plenty of guis that are very expressive, such as godot, blender, inkscape, labview, solidworks, freecad, and sieuferd. you can write your own poetry as easily in godot as in bash
what would be ideal from my point of view for check boxes, dropdowns, and labels would be if they were a simultaneous alternative view of a command line, or rather a configuration line. an additional simultaneous view would provide a live preview of the result of that line. but you could still type and copy and paste the text
interestingly this is not too far from my usual experience with the command line. i want to compile a target, so i type `make ` and hit tab twice. i see a list of targets and pick one by typing a letter or three and tab, and hit enter. the compiler errors out right away with a c99 construct, so i add `-k` to the command line (^p spc - k ret) to see how widespread the carnage is. it's everywhere, so i add a compiler flag to the options and try again. and in 30 seconds i have a working build. or maybe i need to use a different compiler, or something, but it's easy to return to the fresh unpacked tarball or git checkout
this is close to the opposite extreme from what you describe.
i very rarely do anything in text mode that is in any way hard to undo. shell commands are mostly purely ephemeral: their only effect is some text on your screen, an entry in your history file, and maybe an output file. if i want to change one, i hit ^p and change it before hitting enter. as for configuration changes, i would say that change management is actually the major strength of the text approach: you can copy configuration lines into your notes, comment out old versions in case you want to go back to them, check the whole configuration into git, diff it to see what all has changed, etc. everything can be undone in exactly the same way. everything is a controlled experiment, with the computer itself recording the configuration of each trial automatically and implicitly
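a minimal sketch of what that change management looks like, with paths purely illustrative:

    git init ~/dotfiles && cp ~/.zshrc ~/dotfiles/zshrc
    cd ~/dotfiles && git add zshrc && git commit -m 'known-good baseline'
    # later, after an experiment goes wrong:
    diff ~/dotfiles/zshrc ~/.zshrc    # see exactly what changed
    cp ~/dotfiles/zshrc ~/.zshrc      # and roll it back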
admittedly there are occasional exceptions, like when you're reconfiguring the firewall or upgrading debian to a new release. though current configuration-as-code systems like docker and ansible go a long way to making all that stuff just as recoverable. the server went catatonic? too bad, revert the firewall rule change and reinstall it, and 45 seconds later the problem is fixed
by contrast, it's almost never obvious how to undo clicking on a command button or a menu item. even a dropdown selection is hard to undo: you have to remember what was previously selected
but yeah having to read the manual and slowly piece together a working command or configuration file is definitely worse than having every option documented in the place where you choose it
That's where domain knowledge comes in. If you don't know anything about networking other than selecting the WiFi network and entering the password, you're going to have a hard time interacting with ip. If you don't know anything about codecs, ffmpeg will seem esoteric.
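Once you have that domain knowledge, though, neither tool is arcane; for example (filenames are illustrative):

    ip addr show            # list interfaces and their addresses
    ip route get 8.8.8.8    # which interface and gateway reach this host?
    ffmpeg -i in.mkv -c:v libx264 -crf 23 -c:a copy out.mp4   # re-encode video, copy audio as-is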
More often than not, on macOS and the like, the GUI is reliable and there will be apps for the not-so-common stuff. On Linux, the software (however complex) has already been written in CLI form and works fine. Someone could build an XLD or iTunes equivalent, but they're already happy with their ffmpeg scripts and their MPD setups.
And the key to getting there is usually to get a book about Linux at first, then learn about the software you are using.
Trust me man, I know a few Concorde pilots, and they sit in that cockpit and flip switches and spin the controls such that the plane glides like butter through a hurricane - while I'm over here practically crash-landing at every stop.
There is a chronic issue with skilled Linux users where they keep flexing how good the terminal is (and let's be honest: flexing their fluidity and dexterity with it too) while completely missing the forest for the trees. The terminal sucks because it is difficult to use, with an arduous learning curve.
There is a reason why Android, the most successful Linux "distro" to the point that the sum of all the others is a rounding error, has no user terminal and a robust GUI. People don't even know it's Linux. That's what we need: an open-source Linux distro that people don't even know is Linux.
It also allows you to copypaste the solution, as opposed to following directions for a GUI settings window.
Linux the kernel + some userland without a terminal is used by the majority of people, in the form of Android and Chromebooks.
A terminal is a developer and sysadmin tool. It just so happens that most users of desktop GNU/Linux are developers and sysadmins. They are a few percent of population.
Exactly, so why hasn't the open source community driving free distros taken note of this?
Now more than ever, people want out of Windows, but then they step into the seat of the exit craft and see the above cockpit.
Gnome tries hard to emulate Apple's approach. Results are mixed, but I've seen fans of it.
Elementary OS goes in the "normal users" direction, too.
OTOH if something goes badly wrong and you do need to open the hood and follow an arcane tech-support recipe, you open a command line, be it Linux, macOS, or Windows. (Not Android or iOS though, where you do a reset + restore dance.)
Yes, and here's where they meet the Steam Deck, which did take note and just puts you into a cared-for gaming environment. It can also run a typical desktop. The hardware is proprietary, but the software is (mostly / completely?) free.
Except Classic Mac OS never had this. You'd sometimes need to do arcane rituals like "re-bless the system folder", but they were entirely GUI-based arcane rituals that involved double-clicking icons, opening and closing windows, and dragging files in and out of folders.
There was no other, simpler interface. And the whole system was much simpler (and less capable). The graphical hardware was way simpler and fully in-house, unlike the current GPUs that are third-party and evolve quickly.
Exactly, there was no other, complicated interface. Except in 1991-1995 when System 6 was going to merge with AIX and be A/UX and Apple was going to merge with IBM and it was all briefly very weird.
Text interfaces: streams of discrete objects (characters) with some having special purposes
GUI: a matrix of pixels + a set of rectangles with special properties attached.
If you want to quickly write a program, text is the way to go. Most developers are scratching their own itch and already know the system. They may surface a few GUI settings (if it's GUI software), but no one wants to build a complete GUI ecosystem on top of Linux (unless you go for a restricted version like ChromeOS or Android).
And with scripts, you can quickly write your own software by using existing ones. It can be your very special computing world.
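The classic illustration, assuming some plain-text file to chew on:

    # word-frequency report assembled entirely from existing tools
    tr -cs '[:alpha:]' '\n' < essay.txt |
      tr '[:upper:]' '[:lower:]' |
      sort | uniq -c | sort -rn | head

Nobody wrote a "word frequency" program; five small ones were composed on the spot.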
If the only Linux distributions were in the style of LFS, Gentoo, and Arch*, you would have a point, but as long as Ubuntu, Mint, openSUSE, Fedora, Manjaro, and the myriad other user-friendly distros exist, I fail to see the argument.
Yes, Ubuntu has a Concorde cockpit behind the scenes and yes, I can access it on my Ubuntu work* laptop and I am grateful that I have that choice. I would hate not having that level of control.
Meanwhile, my mother's laptop is also Ubuntu and my girlfriend's is Manjaro, and they are both perfectly happy "flying their drones around" without ever setting foot in the Concorde cockpit.
(*) I use Arch btw
I have used Ubuntu for 2 years now, and it's the motivation for the post.
Ubuntu (or really any distro I am aware of) is great for pensioners (email readers) and power users (career Linux users), with a protracted and hellishly complicated experience in between (the technologically adept but lifelong Windows user).
True: if you just want to fly straight to one of the most popular cities, you can just enter it in the autopilot and it will go there. But if you want to go anywhere else or do anything else, you'd better be good at googling and understanding "how to fly a Concorde".
A better way is to get a book about Linux administration (if you want an in-depth manual) or "How Linux Works" by Brian Ward. Then learn bash.
Windows is pretty much all GUI (I left before getting used to PowerShell), and that works great until you want to do rules-based changes, or manage profiles (without using MDM) with snapshots of changes.
Most software has good manuals, so once you've gotten a bit used to the Linux way, it's quite easy to adjust anything you need. And after a while you find you're mostly using a handful of packages, and you'd have their configuration saved in your dotfiles. As for package suggestions, that's what distros are there for.
Linux is all about reading.
Trust me, I understand that Linux is all about reading.
I also understand that it is perpetually irrelevant in the consumer computer space, despite everyone in that space absolutely hating the dominant OS. A hate that you can watch grow in virtually real time on any social platform. But that hate is focused on pushing the giant back in line, not on abandoning the giant for fresh pastures. You know why?
Because 30 years ago, someone who touched grass realized that consumers cannot, let me repeat: cannot, use an OS that is "all about reading".
Most consumers want appliances, aka a collection of software pre-configured with a few customization options. When the current solutions (iPad, phone, MacBook, PC) fit, it's great. And the current Linux desktop distros are doing great in that regard, except on two points: they do not come pre-installed (with hardware support), and some software doesn't support Linux.
But as soon as you want something custom and do your own configuration, the manuals are required. Or you get someone to do it for you.
Truth, but!
The middle hellish experience being described here is not a bad thing.
Fact is, Ubuntu used as autopilot is pretty great! It is way better than it used to be. Mere mortals can jump on a computer and often get the few things they want done.
What I do with users of that type, and myself depending on my moods and motivations for a particular machine, is treat it like Android.
Find the users an app they can click on, and/or tell them it is not going to happen.
Many will be happy with that.
The ones who are not need help.
Either they grow and become readers, and can pilot the computer properly, or they won't and helping them makes as much sense as their own efforts do.
However, GNU/Linux's better terminal experience compared to Unix could be part of the reason why it wiped the floor with Unix.
When you log into a BSD box, it's like taking a time machine back to the 1980s.
When things work fine, then the average person does not need a terminal.
When you do have to fix something, you need a terminal. On Windows you need to touch the registry, or worse: try attaching a debugger to figure out the reason for a blue screen, and tell me how user-friendly that is.
Finding and changing a cryptic file to fix an issue is troublesome, but if the alternative is "it just works," well... you cannot blame the terminal in that case, but rather the problems that are showing up.
What's condemning Linux to negligible market share is lack of preinstalls. Virtually no one used Windows, either, until Microsoft did everything short of coercing OEMs to bundle and preinstall it during the 3.x era.
I must be doing something wrong, because I can just use Home to go to the beginning of the line and End to go to the end, and I am pretty sure all the things that are easy to type are just as easy to type in GNOME Terminal on Ubuntu or Windows Terminal on Windows.
The only place I need to know a bunch of weird shortcuts to figure things out is in the macOS terminal, where everything is as unintuitive as possible.