I've learned this as "swallowing errors" and IMO it's a poor practice. Not only does it not solve the issue at hand (you cannot print on an xbox), but it actively hides how broken the software is, which makes bug discovery and testing much harder.
This is one thing I like about Go's panic. You're mostly not supposed to use it or recover from it at run time. It serves as a great vehicle to blare loud sirens at testing time that you (the programmer) screwed up (and that's ok, we all do), and it's time to figure out where and how to fix it :)
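For example, a minimal sketch of that style (the function and values here are made up): panic on a condition that can only be a programmer bug, so tests blow up loudly instead of limping along.

    package main

    import "fmt"

    // applyDiscount panics on impossible input: a negative or >100% discount
    // can only come from a programmer bug, so let the sirens blare in tests
    // instead of silently clamping the value and hiding the problem.
    func applyDiscount(price, percent float64) float64 {
        if percent < 0 || percent > 100 {
            panic(fmt.Sprintf("invalid discount percent: %v", percent))
        }
        return price * (100 - percent) / 100
    }

    func main() {
        fmt.Println(applyDiscount(80, 25)) // 60
    }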
PS this analogy works in a lot of domains - If you have actors in a system actively trying to hide their flaws/errors it will be exponentially harder to root them out and solve the issues.
Error design depends on context. For most of the systems I work on (B2B/microservices/API-focused systems), failing fast on any out-of-spec request is the correct design. Windows, however, has focused on bending over backwards to provide compatibility (including memory-patching popular software that was broken, like SimCity https://arstechnica.com/gadgets/2022/10/windows-95-went-the-... ). In the context of a user desktop, where the user has no way to correct the system code, the final experience is more important than correctness in many cases. It doesn't matter who is "in the wrong". Microsoft learned that it doesn't matter why the BSOD occurred, just that bad software/hardware was giving them a bad reputation.
So yeah, fail fast/halt and catch fire on any invalid input/condition is my personal preferred design, but I can see the value in this approach in this environmental context. The important thing here is the context, and not applying either approach dogmatically. Don't take Mr Chen's approach in reactor or HFT designs, for example; it's a fantastic approach for game engines.
It's hard to overstate how hard Microsoft has worked to maintain backwards compatibility.
Recently I had to read an old Access file, and where I work we still keep Office '97 around for this purpose. It is quite amazing that it installs and works just fine on Win 11: Clippy works, and in fact all the other lame-ass things Office '97 does to take over your desktop still work, even if they don't quite visually match the current desktop.
The thing is that Microsoft has a framework built into Windows where they can specify different system behaviors such as that Simcity case where the game didn't work with a new memory allocator so they put in a flag so an application that needs it can get the old memory allocator. They have a very systematic approach to the half-baked and haphazard process of patching the OS so you don't have to patch applications.
Here's a pretty detailed list:
- It is possible to target Windows XP SP3 (released 2008, EOL 2014, 10 years ago) from Windows 11 and Visual Studio 2022[1] using C++23. Windows 2000 can be targeted with a little more setup[2]. Windows 2000 is 24 years old this year.
- It is possible to run a binary written for and compiled in Windows NT 3.1 x86 on a modern 2024 Intel/AMD PC running Windows 11 with absolutely no modifications to said binary whatsoever. Windows NT 3.1 is 31 years old this year.
- It is possible to write one binary and target multiple versions of Windows by simply choosing a Platform Toolset, which is paired with a Visual C/C++ Redistributable.
- Windows has a run-as mode to run programs in what is essentially a super-thin VM; like you mentioned, emulating different iterations' behaviour for a given program.
All four of these are nearly impossible on Linux. The fourth is essentially Docker, which is needed even to target an older glibc (i.e. the equivalent of the first situation). Windows has gone to extreme lengths to not only maintain API compatibility, but ABI compatibility as well. The fact that unmodified binaries from 20, 25, 30 years ago can run without a hitch is a testament to how important backwards/forwards compatibility still is on Windows.
Side tangent: all the criticisms levelled at Windows here and in many hacker fora are limited to its userspace—things like its shell and user-space programs, and trivial complaints like 'Candy Crush', or 'start bar', or 'extra right click', or even 'PowerShell aliased curl to Get-Content, grr'. The userspace changes so often because it can; it is a superficial part of Windows. To the haters: Try actually programming with NT OS primitives and even the (undocumented) NT syscalls. It is as enjoyable as UNIX system programming, and I daresay more productive.
[1]: https://learn.microsoft.com/en-sg/cpp/build/configuring-prog...
[2]: https://building.enlyze.com/posts/modern-visual-studio-meets...
This is great and all until you completely ignored the fact that wine exists and went on to call user space design questions by Microsoft “trivial”. I could also start listing ways that the Linux kernel maintains backwards compatibility with not just software but hardware that’s decades old but the list would get too long. No one is complaining that Microsoft has too much backward compatibility, it’s their utter disregard for user choices and privacy that drives away the hacker community to either Linux or even macOS.
Linux kernel on its own doesn't run software.
But it comes with drivers supporting very old hardware, hence the inclusion.
That alone still doesn't run applications.
GNU/Linux + the linux kernel.
Somehow, it feels like you don't actually care about the point I was making as much as putting me down. I misspoke...barely, yet you're so fixated on pointing it out in the most unhelpful manner. You keep replying so why not try using more than one sentence next time?
Now that you got there, which distribution released in 2023 allows me to run a GNU/Linux binary compiled in 2000, regardless of the distribution it was compiled on back in 2000?
If a binary from 2000 doesn't run, it's because of glibc ABI changes. I still have faint memories of glibc crap happening in the early 2000s. But if the binary from 2000 is statically linked, then the Linux kernel probably runs it fine today.
Which is weird considering the argument you had with others above about how "Linux (kernel) doesn't run software", was it a buildup to convince us that "GNU(glibc)/Linux" is really bad at running old binaries? Because your argument doesn't hold for the Linux kernel itself running statically linked binaries.
What am I chatgpt? Just make your point and tell us which 2000 binaries don't run and we can argue about whether or not that counts as a mark against backwards compatibility.
Also, is this a trick question about binaries that weren't patched for y2k or something?
application /= software. You're moving the (extremely ill-defined) goalposts. Next you'll be arguing that device drivers or the Linux VFS layer aren't "software".
The goal hasn't moved, we are talking about ABI compatibility for a full operating system across multiple decades and generations of operating systems releases, without requiring building those applications from source.
Not a kernel booting directly into an application doing straight syscalls.
Linux runs /sbin/init.
Which you can make any executable you want.
Not to mention initial RAMdisk (loaded by a bootloader like grub) which can be of arbitrary size & loaded up with an arbitrary ton of goodness.
Yeah, which is quite far from a full OS experience providing ABI compatibility across several generations of operating systems.
Linux (kernel) has a userland ABI of its own, which is pretty stable & rarely broken (Linus will probably breathe fire @ any kernel developer who does break user space). So e.g. 10-year-old statically compiled binaries will likely run fine on recent kernels.
But as you state: it's the OS-level ABIs (C library, desktop environments, middleware libraries, etc.) that keep changing constantly, and thus keep breaking pre-compiled binaries.
Source-level API vs. binary ABI stability. Kind of a philosophical debate imho. But sadly, even that source-level API isn't too stable in some circles.
Do you have an actual argument or are you really committing to "nuh-uh"
The most stable API/ABI for the Linux desktop is provided by Wine.
Thankfully the year of the desktop is around the corner, by combining WSL with the original Win32.
Day zero feature availability, without compatibility issues.
I wish they included an 8086 emulator so that old software compiled for DOS would still run. It worked on 32-bit systems until 32-bit support was dropped, which only happened a few years ago; that relied on Intel's virtual 8086 mode, which is not available when the CPU is running in 64-bit (long) mode. Modern computers are fast enough for the emulation overhead to be negligible, even if you don't do any fancy JIT tricks and just go with a switch/case inside a while(true).
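For a sense of scale, the dispatch loop such an emulator needs really is that simple. A toy sketch in Go (made-up opcodes, nothing like the real 8086 instruction set, just the switch-in-a-while(true) shape):

    package main

    import "fmt"

    func main() {
        // A toy machine: a few bytes of "code", one register, a program counter.
        program := []byte{0x01, 0x2A, 0x02, 0x03, 0xFF} // LOAD 42, INC, PRINT, HALT
        var acc byte
        pc := 0

        for { // the while(true)
            op := program[pc]
            pc++
            switch op { // the switch/case dispatch
            case 0x01: // LOAD imm
                acc = program[pc]
                pc++
            case 0x02: // INC
                acc++
            case 0x03: // PRINT
                fmt.Println(acc) // prints 43
            case 0xFF: // HALT
                return
            default:
                panic(fmt.Sprintf("unknown opcode %#x", op))
            }
        }
    }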
I would personally make use of this, I know of a 16-bit program whose latest version was released in the late XP days, so it's not even that old. The idea there was that it was always compatible with DOS, some users might presumably still want to run it on DOS, and there's no point in breaking that compatibility if modern Windows can run it just fine. Then development stopped, 64 bit systems got more popular, and a recompiled version was never released.
I guess the lesson there is that if you're keeping an API for backwards compatibility, some programmers will refuse to switch to a replacement to make their software work on older systems, making the API impossible to remove down the line without breaking modern programs.
At least you can still do this with third-party software like https://github.com/otya128/winevdm I guess? I imagine Microsoft doesn't see the returns for it to develop something they'll have to support for decades more…
This is impressive, but other parts of Windows are so dreary. Installs of apps that throw up all over the disk, Windows Updates that mysteriously fail in unrecoverable ways 87% of the way through and cryptic error codes and procedures to dig yourself out of the jam (before you must reinstall).
This seems like a norm for most operating systems. Linux mixes files from all sorts of apps all over the place, for example, and a `make install` might put files anywhere.
I have unfortunately run into exceptions. I tried to play Neverhood on my Windows install, and it wouldn't start up. Tried the various compatibility modes. No luck. I ended up running it under Wine in Win95 mode (or similar; I don't remember the exact version) on my Fedora desktop and it ran fine.
I haven't tried running too many old programs, though, so I have no sense for how common this might be.
"The car works great technically, people just have trivial complaints about the steering wheel being made of razor blades." That is becoming less and less of an exaggeration as Windows is progressively further enshittified.
NT seems to be a nice OS at the core, but that's more about what it can do and how it is implemented than about how pleasant it is to use. Some of its syscalls are much more convoluted than the UNIX ones. Typical functions take many parameters, some of them complicated on their own, and typical invocations pass a bunch of NULL arguments (you still need to roughly understand what they mean).
It's hard to overstate*
Fixed
I'm curious about this. What do you mean? I remember using Office '97 on a Windows '98 machine, but I don't remember Office trying to take over all my desktop.
It’s close to common sense, end users don’t care which monkey(s) threw in the wrench if they encounter an error, just that some entity did.
The only caveats I can think of are that it must prominently display that it’s running in a “compatibility mode” and that any encrypted subsystems can’t revert to a lower standard of encryption, which may render the application unusable anyways depending on how tightly integrated it is.
But the primary resolution to that problem was to force hardware vendors to write higher quality drivers (or for MS to write good enough default drivers that HW vendors wouldn't need their own), not to hide the BSOD. Technical details were only removed from them 2 decades after MS started fixing drivers.
Right, but if you read the article, the author is talking less about the developer experience and more about the user experience. "blare loud sirens" is great if you're a tester/developer, not so much if you're an end user. When it comes to the end user, "swallowing errors" is preferable to crashing.
That is a dangerous assumption. Unless you know a great deal about every possible use case, you can't know the potential ramifications of incorrect output. Proceeding from invalid state (which would often be the result of swallowing errors) is essentially undefined behavior.
This isn't invalid state. They're not telling the app about a fake corrupt printer; they are using the API contract to represent the truth (there is no printer you can use) in a way the app already has to support.
But I was responding to a general statement.
And I still think you're wrong. If incorrect input can't be handled gracefully in a way that you can be sure nothing bad will happen, it's possible that crashing is the best option.
But I think in most cases that just isn't what's going on. An unsupported API that makes a feature not work is just not a big deal. Lack of support, say, for a cryptographic primitive, could be a big deal, so you might choose to handle that case differently.
And would you treat an API endpoint that's down the same way? Just silently ignore that a feature that's part of the user's workflow isn't actually working, that maybe only half of what was supposed to happen when they pressed a button actually happened?
Yes, and in the case presented in the article -- trying to print on an Xbox -- we really do actually know the potential ramifications of trying to print and then being presented with no printers to print to. Simply: there are no ramifications worth worrying about.
On the other hand, we do know what will probably happen if an undocumented exception gets thrown: the app will crash, possibly causing the user to lose data.
I’m sorry, maybe the article used the wrong example but the main issue comes from the fact that an Xbox app is trying to print something. It should fail, not for the developer experience, but because to start, there is no way the user would want to print something on an Xbox. Something is already really wrong with your app if it tries to do things like this.
Also he says that apps are developed and tested on PCs and that they could print in this context. I don’t know a single thing about Xbox development but I hope you can run/debug them in the Xbox environment (or a simulation).
Let me hope that nobody is running their Xbox apps/games on Windows APIs at development time and releases them on Xbox without further testing.
The point is that the UWP allows running apps developed and tested on PCs on an Xbox. It's for the user's convenience (not having to wait on developers to port to Xbox) as much as the developer's (not having to port to Xbox).
If a user wants to run an app on their Xbox, telling them "no, the developer didn't test this on the Xbox, so I'm not going to let you do that because you might try to print and get confused about it" isn't what the user wants to hear.
When the app tries to print because the developer was "lazy" and didn't test on Xbox, telling them "I'm going to crash your app now because you clicked Print, even though I know you're on an Xbox and I could just ignore that" also isn't helpful to the user.
It's interesting because that's true in some way (in the sense that PC and Xbox are different), and also not true in another (in the sense that an Xbox is, in a way, a PC, only with a different UI paradigm).
So in the latter sense UWP allows developing apps for that universal platform, and it just so happens that some apps are designed, developed, and tested for only one of the systems (PC) they can run on.
In a way I can see working from the Xbox up being a better way to have a robust, secure, uniform platform than the Windows 8 attempt of slapping a secondary paradigm on top of the Windows 1.0 / OS/2 descendants.
I mean, the facility that underpins e.g WSL2 is exactly the same as the one that underpins Xbox game/apps segregation and Quick Resume. In a way the Xbox OS is very much like Qubes OS! In a way the OS UI we see on an Xbox device is a UI for the hypervisor itself.
I would certainly be interested in a "PC" that is so stable, restores state exactly upon updates, "just works", allows playing games, allows running a bunch of Linux VMs, with forever-perfect backwards compatibility across hardware arch changes and OS evolution through virtualisation/emulation, and has a UI that accepts many kinds of inputs and scales from the big screen (gamepad, or that accessibility input device whose name I can't recall) to the desktop (kb+mouse) and possibly laptop or even tablet/phone.
I mean it's not that far fetched (technically) that MS would announce tomorrow that an Xbox can run Linux, Windows 10, or even Windows 3.1 in a VM: all the facilities are there and you can even plug a keyboard + mouse today.
One may philosophically balk at the idea ("How dare you touch at my very open IBM PC! Where are my floating windows! Freedoooom!"), but I think it makes sense technically, and it makes sense as a product, and MS has all the bricks to make it happen.
Part of the context here is that UWP (Universal Windows Platform) is a write-once, run-on-any-Windows-platform target.
This made much more sense when Microsoft had multiple platforms running Windows with just different sets of APIs activated.
At its peak, this was: PC, phone, HoloLens, Xbox.
SMS APIs may only work on phone, spatial APIs may only work on HoloLens, printing may work on several, but not all, targets. There are ways for developers to check which APIs are supported at runtime, but you can still call these APIs since they are part of the UWP surface.
The base philosophy of "if it's wrong it should fail" is primarily for developers, and shouldn't apply to generic customer products.
If it's dangerous, or will cost money, or will have severe adverse effects, I'd see the point. If a credit card transaction is wrong, make it fail.
But short of these extremes, the default should be graceful handling of exceptions that helps the customer's app keep going and deliver some value to the user, even if it's poorly written and mishandling the context.
Even as an end user, I hate when my computer seems to be trying to hide something from me. Even if I can't do anything about it, I want to know it's happening. Don't worry, Microsoft. I'm a grown-up and can handle the bad news. If I'm a layman, maybe I just dismiss the dialog and try again. But if I'm a little more of a power user, maybe I'll look up the error message and see if I can start diagnosing or helping.
If you swallow the error message, I'll have zero idea that something is even going wrong! And almost just as bad: if you put up one of those useless infantilizing "oopsie doopsie computer made a poopsie" error messages, I still won't know what went wrong AND I'm being treated like a moron.
I worked for a software company once where our software basically crashed every 2-3 hours of continuous use due to a huge backlog of technical debt, memory leaks, and years of rushing. My manager's solution to this was not to fix the bugs--it was to build a separate "launcher" process that would detect that the application crashed, eat the error messages, and silently re-launch it hoping the user doesn't notice. Way to treat your users with respect...
There is no error message here - more often than not it's a straight up crash. No HRESULT, no popup, just NullPointerException, straight to jail.
In many cases, crashes like this happen at app startup, so it's not like you learn not to hit a certain button - the app just doesn't work at all, which is an awful UX.
As called out in the article, there are (and always should be) APIs like "IsPrintingEnabled" so that forward-thinking apps can show better UX. These practices aren't for those apps, they're for everyone else.
Also, if your app preserves state well enough that a keep-alive daemon can restore after a crash and the user doesn't notice, that is ABSOLUTELY an improvement in UX over just crashing. Sure, you should still fix the bugs, but don't let perfect be the enemy of good.
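For what it's worth, such a keep-alive wrapper is only a few lines. A crude Go sketch (the binary name is a placeholder); whether it respects the user is a separate question from whether it improves the immediate UX:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        for {
            // "./flaky-app" is a placeholder for the real application binary.
            cmd := exec.Command("./flaky-app")
            if err := cmd.Run(); err != nil {
                log.Println("app crashed, restarting:", err)
                time.Sleep(time.Second) // avoid a tight crash loop
                continue
            }
            return // clean exit: stop supervising
        }
    }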
Maybe then raise an assertion error, which only has an effect in development mode.
Nothing is being compiled here; your customers aren't running debug builds of their apps, and the API code is part of the platform and isn't running the debug bits.
If you read the article, this isn't swallowing errors, it's just returning more-backwards-compatible errors.
I'm annoyed when "modern web apps" (or similar desktop apps) seemingly do nothing when there's some error, you don't know if you need to wait a bit, or click again (great fun when the UI jumps 1ms before your re-click), or full reload/restart ... luckily that's not at all what this article recommends!
It has irked me for decades that when the internet connection fails, all you get is a message that it failed. But what failed?
1. the software on your computer
2. the computer's hardware
3. the ethernet cable or wifi
4. the switch
5. the router
6. the cable modem
7. the internet cable to your house
8. the ISP
9. the remote system you're trying to connect to
Nothing has improved for decades.
Add a parallel step there, somewhere, for DNS. My raspberry pi runs pihole but the hardware is failing somehow so the server crashes so DNS lookups fail. All existing connections are fine, direct IPs are fine, locally cached results are fine, but new lookups fail. It is somewhat fun to watch it happen.
Keep going... ...
Try: ISP router intercepts DNS and drops the record because your company put private addresses on public DNS and the router has rebind protection.
Windows has a tool to diagnose network problems, and it's usually completely useless but I think DNS not working is something it does identify.
This is something I kind of like about the Microsoft APIs. Their error codes aren't always perfect, but at least they give you some indication where things went wrong.
From MSDN (paraphrasing): an HRESULT packs a severity bit (success/failure), a few reserved flag bits, an 11-bit "facility" identifying which part of the system produced the error, and a 16-bit error code.
The reserved bits also include a "customer" bit for non-Microsoft code (which driver vendors and other API producers can use, although I don't know how often they do).
There's a list of common "facilities" here: https://learn.microsoft.com/en-us/openspecs/windows_protocol...
As a regular user, you will see errors like 0x8ACEF00D, but if you decode them, you can get a sense of what part of the system ran into the failure. Compared to the "negative value indicates failure, look up the possible failures and what they mean for every function" approach many other APIs follow, that's a welcome change.
Of course there's no guarantee that Microsoft doesn't return some kind of meaningless internal E_SOMETHING_WENT_WRONG value, but for a lot of APIs, there are details hidden in plain sight. It won't tell you your ISP's fiber has snapped, but it'll tell you if the problem is within the driver, a security limitation, an HTTP error, or a generic network stack issue.
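For illustration, a minimal sketch of that decoding in Go, assuming the standard HRESULT layout (severity in bit 31, customer bit in bit 29, facility in bits 16-26, code in bits 0-15):

    package main

    import "fmt"

    // decode splits an HRESULT into its standard fields: bit 31 = severity
    // (1 means failure), bit 29 = customer bit (set for codes defined outside
    // Microsoft), bits 16-26 = facility, bits 0-15 = the actual error code.
    func decode(hr uint32) (failed, customer bool, facility, code uint16) {
        failed = hr>>31&1 == 1
        customer = hr>>29&1 == 1
        facility = uint16(hr >> 16 & 0x7FF)
        code = uint16(hr & 0xFFFF)
        return
    }

    func main() {
        // 0x80070005: failure, facility 7 (FACILITY_WIN32), code 5 (access denied).
        failed, customer, facility, code := decode(0x80070005)
        fmt.Println(failed, customer, facility, code) // true false 7 5
    }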
HRESULTs are well intentioned. They originated in OS/2; but Microsoft always seemed to use them half-heartedly.
Users should not be seeing "08CAE5D012" error messages. They should be seeing "Connection refused. (08CAE5D012)" messages, or "No connection. (83255321)" messages instead.
There is a non-trivial protocol for translating HRESULTs into error text, but it is often not supported properly by Windows applications, or by Windows OS components.
e.g. DirectX, which returns generic 0x80004005 E_FAIL errors. Or the WinSock APIs, which return HRESULTs but don't provide text message translations; or Microsoft installers that report raw HRESULTs with no attempt to capture the associated error text.
With discipline, the entire system could have worked. But there wasn't, and it didn't.
I've been suffering that TODAY.
"Connection Refused No Further Information"
Well thanks pal.
I was getting the equivalent from my Roku box the other day, something like "no connection to the internet". Sigh.
You're making the assumption that it is all caused by hardware or software malfunction in the sending of an IP packet to the target server. However the first step is usually a DNS lookup. A list would look more like this:
1. DNS lookup fails because the DNS server address configured at the computer OR router is incorrect
2. DNS lookup fails because the configured DNS server is down
3. DNS lookup fails because some firewall is blocking requests to it
4. DNS lookup fails because the network was congested
5. DNS lookup fails because of interference in the Wifi channel from other Wifi networks
6. Etc. Etc.
So indeed, usually when the network is down you only get "DNS lookup failed". Because the actual reason may be complicated. Of course, usually it is due to your computer not being connected to a router so usually the error message hints at that (in layperson's terms: not connected to the internet). So that's why browsers hint at it being your connection, i.e. ethernet or wifi.
But there is no way to make sure. All we know is that DNS seems to fail. The reason it fails could be any part of its configuration which is spread out across physical systems and software components.
The best we can say: probably your ethernet/wifi connection. If we trust that none of the other components are failing then it must be your connection that is failing.
It's like trying to find out what's wrong in 1+2+32+4=10. Sure, it seems like the 32 should be a 3. But maybe the 10 should be a 39. There is no way to tell anymore. All we can do is make an educated guess.
This is the Microsoft way of doing things, and this is why their products like Windows are so crappy: full of bugs, unexpected behaviors, and a big shit-dump of legacy behaviors expected on top of a lot of other legacy behaviors, on top of yet another 1989 legacy behavior that everyone forgot.
Much of the crappy legacy behavior that Microsoft maintains is the fault of other applications, not themselves.
The classic one is the error code for the file open function. Early DOS would return error codes of only 3, 4, 5, and no more. DOS programs would actually just indirect-jump using the error number as an index into a lookup table of addresses. When Microsoft tried to add any error codes (say 6), the program would jump out into hyperspace since 6 was beyond that lookup table and that memory word could be anything. So Microsoft was stuck folding every possible file open error into code 5, and to this day that's why just about any file error in Windows just says "5 Access Denied". And no, Microsoft couldn't add more error codes and just let the applications break, since then nobody would buy the new operating system versions that their programs wouldn't work on.
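A toy sketch (in Go, purely illustrative, not the actual DOS mechanism) of why a new error code breaks a program that indexes a fixed jump table:

    package main

    import "fmt"

    func main() {
        // The application only ever anticipated error codes 3, 4 and 5,
        // so its "jump table" has exactly three entries.
        handlers := []func(){
            func() { fmt.Println("handle error 3: path not found") },
            func() { fmt.Println("handle error 4: too many open files") },
            func() { fmt.Println("handle error 5: access denied") },
        }

        errCode := 6 // a new code introduced by a later OS version

        // Indexing past the table is the analogue of the DOS program jumping
        // into hyperspace; in Go it panics with "index out of range" instead.
        handlers[errCode-3]()
    }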
Well. I have two or three (I am not even sure because for one email address there are probably two accounts, but no way to distinguish between them) MS Teams accounts. There is no painless way to switch between them. Currently there are probably at least three different MS Teams apps installed on my machine, one of them self-installed without my consent (my PC is not managed by an org or domain, it's mine). Switching accounts involves several confusing errors, at least three password inputs, and several complaints by MS Teams that something is wrong, but it won't tell me how to fix it.
Use the browser version, they said. Well, for scheduled meetings it works, but for ad-hoc calls Firefox does not (no reason is provided, but at least it says so outright), while Chrome seems to work but does not transmit my voice and camera. All other meeting software, of course, works.
I am not the youngest anymore and this is my lifelong experience with Microsoft. Its software works barely enough to sell to corporations; poor users have to endure it.
This doesn't pass the sniff test because
1. There's documented error codes that go all the way up to 16k: https://learn.microsoft.com/en-us/windows/win32/debug/system...
2. There's file related error codes way above 5, eg. ERROR_FILE_EXISTS 80 (0x50)
The general gist of your comment is correct though. Suppose windows didn't have file locks before and they were adding it. Returning an error code like FILE_LOCKED or whatever would be much more descriptive, but would also require all existing code to handle this error case. With that in mind returning ACCESS_DENIED makes perfect sense, even if programs aren't using jump tables.
No, it's not. Like what is described in the original post, it is the same spirit. For example, in the case of the printer and the Xbox, they could have an explicit, obvious error for the application. Then it is the task of the application dev to validate their application for each target it will run on. If it does not work, it does not work. Otherwise, you will still have dozens of bugs and unexpected behaviors, and so a crappy application.
And new app developers will have to build upon that. Imagine being the next app developer: you have to take into account that the platform pretends to support printers when that is not the case. So you will do things like: if there is a printer API but I don't see a printer available (/connected), pretend that we can't print and don't show the button... and so on...
If you have doubts about that, you can read the history of the openxml format: https://ooxmlisdefectivebydesign.blogspot.com/2007/08/micros...
This is something that Elixir nailed. The typical idioms for return types are `{:ok, ok_value} | {:error, error_value}`, for which callers can pattern match against and handle appropriately. If fail-fast is desired, many functions will also have a variant signaled by a postfix exclamation mark (e.g., some_function!(...)), that returns `ok_value` or raises an exception.
Agree this is a good pattern. I'm not categorically against throwing errors, but I like returning either the error or the result and then having to check at the caller. I do this in TypeScript a lot.
Any typed language could use this pattern, though with Elixir specifically it was idiomized from the very beginning and all libraries use it, which makes error handling very consistent even when piping operations. The common option to fail fast is also handy when crashing the process is preferred and no matching is required to unwrap the value.
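To make that concrete outside Elixir, a rough Go analogue (hypothetical names; Go's multiple return values stand in for the ok/error tuple, and a panicking "must" variant stands in for the bang function):

    package main

    import (
        "errors"
        "fmt"
    )

    // parsePort plays the role of some_function: return a value or an error,
    // and force the caller to decide what to do with it.
    func parsePort(s string) (int, error) {
        var p int
        if _, err := fmt.Sscanf(s, "%d", &p); err != nil || p < 1 || p > 65535 {
            return 0, errors.New("invalid port: " + s)
        }
        return p, nil
    }

    // mustParsePort plays the role of some_function!: fail fast by panicking.
    func mustParsePort(s string) int {
        p, err := parsePort(s)
        if err != nil {
            panic(err)
        }
        return p
    }

    func main() {
        if p, err := parsePort("8080"); err == nil {
            fmt.Println("listening on", p)
        }
        fmt.Println(mustParsePort("8080"))
    }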
The point of the article, in large part, is that when you’re designing an alternative runtime for a new platform, it is up to you what conditions are considered to be errors. Deciding to make a component of the runtime “inert” exists before and above the decision to make the implementation panic.
In this specific case: it is up to Microsoft to decide what the semantics of “printing on an Xbox” are. It could be an error; or it could be something that “could be supported in theory in the future, but for now has no way to actually accomplish it.” This is a design choice, not a development choice.
After all, in theory, you could print on an Xbox — plug in a USB printer, find a “game” that knows how to print (some enhanced port of a Gameboy game that had GB Printer support, maybe?), and tell it to do so. It’s not necessarily, fundamentally an error that an Xbox game is trying to print. You could define it as an error — but that’s a choice like any other, with trade-offs (in this case, to application portability.) You could define it as asking for the user to choose from no options, as the Xbox actually does. You could have the API lie and say it printed something. You could even actually support printing. These are all choices.
It’s only once you’ve made that choice, defining the semantics of “printing on an Xbox” as an error, that it becomes an implementation/debugging problem if that error isn’t thrown — i.e. gets “swallowed.”
UWP apps that don’t require specifically desktop APIs will run on Xbox — including, for instance, Excel.
Anything wrong with "print to file"? There'd be no dead tree output but (file) output nonetheless & printing as a function should work.
Btw love the title of this article!
Agreed this is akin to
Not at all! It's like porting a program which expects there to be a TCP stack to a system which doesn't have one, and wiring up a component which responds to all HTTP requests with 404 instead of letting it hang on an infinite loop or crash. Say it uses a browser for rendering, but in the original you can also fetch websites, and the assumption is deeply baked into the code.
If your choice is between playing whack-a-mole with all parts of the system which might call out, or just issuing a 404 (after all, if there's no Internet, you're not going to find a web page on it), that's a reasonable way to solve the problem.
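A minimal Go sketch of that idea (the type name is made up): stub out the HTTP layer so every request gets a well-formed 404 instead of a hang or a crash.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // notFoundTransport satisfies http.RoundTripper and answers every request
    // with a well-formed 404, standing in for "there is no network here"
    // without breaking code that expects normal HTTP semantics.
    type notFoundTransport struct{}

    func (notFoundTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        return &http.Response{
            StatusCode: http.StatusNotFound,
            Status:     "404 Not Found",
            Header:     make(http.Header),
            Body:       io.NopCloser(strings.NewReader("")),
            Request:    req,
        }, nil
    }

    func main() {
        client := &http.Client{Transport: notFoundTransport{}}
        resp, err := client.Get("http://example.invalid/page")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // the app sees an ordinary "not found", not a crash
    }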
How is a 404 equivalent to not throwing an error? 404 would be like throwing the error and then not handling the 404.
And the equivalent of hanging and crashing in the xbox example would be hanging and crashing.
Are you serious? I thought a key principle of Go was to handle panics in a layered way at runtime
In my experience that's partly true (it's a judgement call and lies on a spectrum). For example, an HTTP server should handle panics in handler code so as not to crash the other goroutines handling requests. Or maybe a long-running queue worker (e.g. reading off SQS) should recover from a panic in a single goroutine handling a message (assuming it can recover and handle the next message successfully).
But it'd be way out of scope to put a recover block outside every function call just in case it hit a runtime error: index out of range [1] with length . In this way it's much different from try/catch in languages that explicitly call out what they throw.
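A minimal sketch of that middle ground (names are illustrative): recover per message, so one bad message gets logged loudly but doesn't take down the whole worker.

    package main

    import "log"

    // handleMessage is the application logic; it may panic on a bug.
    func handleMessage(msg string) {
        if msg == "" {
            panic("programmer error: empty message reached the handler")
        }
        log.Println("processed:", msg)
    }

    // safeHandle recovers from a panic in a single message so the worker
    // can log it and keep draining the queue.
    func safeHandle(msg string) {
        defer func() {
            if r := recover(); r != nil {
                log.Println("panic while handling message:", r)
            }
        }()
        handleMessage(msg)
    }

    func main() {
        for _, msg := range []string{"a", "", "c"} { // "" simulates a poison message
            safeHandle(msg)
        }
    }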
This is not swallowing errors. This, in the Linux parlance, is not breaking user-space.
There are two ways to handle the situation presented: error out because the machine can never have printers or return an empty list of printers because the machine can never have printers. They're both valid but only one of them doesn't break user-space.
As a Go programmer, how often do you check whether fmt.Print returned an error? You get to assume that stdout and stderr exist for a start, even if your code is being run in an environment where that makes no sense. And what is the correct behavior for your software when IO errors start happening when you print? In a dev environment, probably crash. But in a production environment? A customer's workstation?
The article isn't so much about swallowing exceptions, but more about designing the system so you don't have to raise exceptions.
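For what it's worth, the error is there if you want it; a tiny sketch of what checking it would even look like:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // fmt.Fprintln (like fmt.Println) returns a byte count and an error,
        // but almost nobody checks it; stdout is simply assumed to work.
        if _, err := fmt.Fprintln(os.Stdout, "hello"); err != nil {
            // And what is the right recovery here? Crash? Retry? Ignore?
            fmt.Fprintln(os.Stderr, "could not write to stdout:", err)
        }
    }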
It does actually. This approach ensures that old apps (whose authors never thought it would run on something called an Xbox) will seamlessly run and perform all functions properly except for printing, which isn't supported on Xbox. Panicking here would mean every older app has to update their code to support Xbox.
That kind of thing intuitively sounds bad. But I guess if this were in a space shuttle I would take that back.
Completely disagree. Getting back at the programmer for making a mistake isn't what matters. Presenting the best experience possible to your users is all that matters.
If the user clicks on a print button in an app on the Xbox, and the app crashes (possibly losing some of the user's data), that's a bad experience for the user. Why do that to them just to stick it to the programmer?
And on top of that, now some developer at some company (often not even the person who initially made the bad assumptions about printing) has now been called in on a weekend by their boss to scramble to fix the issue and push out a new release. Why do that to people?
Regardless, also consider the API contract. If the printing API isn't documented to throw any exceptions, and you start doing that, you're breaking the contract. You can't blame the programmer for not considering platforms without printing support; you've documented that API as always returning something without blowing up.
But that's still irrelevant; don't break user code just because you can, or because it's more expedient for you to do so.
Your Go example is completely unrelated; you're talking about making things blow up at testing time, which I agree is the right thing to do. But that's also when you have full control over the code and the testing. If a third-party platform API starts behaving in ways you didn't expect it to behave, that's a whole other thing.
I wonder if Go's panic was inspired by Symbian OS User::Panic, an uncatchable sort of exception that was only used in situations when the programmer screwed up. It would kill the entire thread.
Not limited to testing time, though.
Nah I don't think this counts as swallowing errors.
There's nothing wrong or buggy about the printing API on Xbox returning a list of no printers instead of an exception.
It's essentially the Null Object design pattern. Even Go implements this, in the way that a zero-value list can still be accessed and just behaves as an empty list.
Or in unix how /dev/null exists, which implements the file API but doesn't do anything. This is way nicer than fixing every UNIX program to handle stdout not existing sometimes.
Think of it like an emulator. The goal of DosBox is to provide Doom with the environment it wants so that it will run. That will necessitate some amount of lying.
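To make the Go point above concrete, a minimal sketch: a nil slice already behaves like an empty one, which is exactly the "inert object" idea.

    package main

    import "fmt"

    func main() {
        var printers []string // nil: the "no printers, and none ever possible" case

        // len and range work on a nil slice exactly as on an empty one,
        // so callers don't need a special "printing unsupported" branch.
        fmt.Println(len(printers)) // 0
        for _, p := range printers {
            fmt.Println("found printer:", p) // never runs
        }
    }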