
Just Disconnect the Internet

anticristi
49 replies
12h12m

In Sweden, there is a private network (Sjunet) which is isolated from the Internet. It is used by healthcare providers. Its purpose is to make computers valuable communication devices (I love how the article points this out), but without exposing your hospital IT to the whole Internet. Members of Sjunet are expected to know their networks and keep tight controls on IT.

I guess Sjunet can be seen as an industry-wide air-gapped environment. I'd say it improves security, but at a smaller cost than each organization having its own air-gapped network with a huge allowlist.

actionfromafar
10 replies
11h13m

It's not exactly like just a WAN or intranet over the Internet. It's a separate network with agreed on availability guarantees.

nindalf
9 replies
10h47m

The problem is that you think it’s private but it isn’t. If an attacker wants access they’ll get access. At that point the false sense of security is a hindrance, because systems might not have been secured like they would have been on the public Internet.

jaapz
2 replies
9h55m

Who says they're not securing anything apart from being air-gapped from the internet?

robjan
0 replies
7h40m

It's not necessarily air-gapped. There are many ways to accidentally or deliberately patch the intranet and internet together.

grvbck
0 replies
7h2m

Sjunet is not air-gapped though. Clients can connect via vpn over the internet.

actionfromafar
2 replies
9h7m

It's not only about security but also availability. If the regular Internet goes down for some reason, the private network keeps (or is at least meant to keep) operating.

msla
1 replies
3h23m

So they actually have multiple physical sets of cables?

actionfromafar
0 replies
2h37m

Yes, I think so. There's not much public information, perhaps on purpose.

usrnm
0 replies
10h14m

might not have been secured like they would have been on the public Internet

Yes, because we all know how secure the things on the public Internet are. /s

Nobody's saying that a private network doesn't have to be properly secured; you're fighting a strawman argument.

krab
0 replies
9h57m

Maybe knowing there are many institutions on the network is a good motivation to keep services secure. It's apparent any hospital or vendor may be breached. So if you overcome the false sense of security, the separate network will give you another layer of defense.

clan
0 replies
9h46m

Secure is not a binary term.

If Sjunet is managed as a number of interconnected air-gapped networks then I for sure find that more secure than an Internet-connected network. The attacker surely still has vectors in, but whole classes of common attacks are mitigated.

Even if it is just "one big intranet" it is still better than one big intranet with one really good ((zero) trust me bro!) firewall to the Internet.

Various levels of zero trust principles can easily be applied within Sjunet. That makes it better in my eyes.

For critical infrastructure I find this an important step. In the end security relies on us stupid humans. And it is easier to manage an air gap. It is the number of things we do afterwards to bypass it that is the problem.

The idea of an intranet is still sound. But private does not mean secure. It is just a security layer. The next layer is whether you run it fully open. Are the rooms locked? Do you require 802.1X certificates for connectivity? Are all ports open for all clients/hosts? Do you have a sensible policy for your host configuration? Have you segmented the network even further? Etc. etc.

So your point is still valid for sure! You should secure it like it's on the public Internet, aka a hostile environment. That is the important takeaway.

My point is that it should not be used as an argument against a private network. For large critical infrastructure such as hospitals it makes good sense. It is an added layer for the attacker to overcome - it is not security theater. For some the hassle might not be worth the while, but that is the trade-off, as with all forms of security.

It ain't binary, but discussions often end up like that. Done right it can be additive. Done wrong it just adds pain and agony.

We all dread the security theater. I boldly claim this ain't it.

3np
2 replies
4h13m

You know what I've seen give decision-makers a false sense of security?

"Zero Trust Architecture" and not thinking too deeply about the extent to which you're not actually removing overall trust from the system, just shifting and consolidating much of it from internal employees to external vendors.

I'm not even thinking about CS here. It's curious to see what the implications for individual agency seem to become when the "Zero Trust" story is allowed to play out - not by necessity but because it's "the way we do things now".

(As the wiki page you linked notes, the concept is older and there are certainly valuable lessons there. I am commenting on the "ZTA" trend kicked off by NIST. I bet the NSA are happy about the warm reception of the message from industry...)

marcosdumay
0 replies
19m

In principle, there are many good practices for zero trust architecture that make it viable to have a secure network while keeping it open. And also in principle, even then you'd still not want to make it open, because you gain nothing by it.

In practice, no big company follows any of those practices. So, yeah, anything that's derived from "Zero Trust Architecture" is wrong from its inception.

llm_trw
0 replies
3h0m

I think we saw how it plays out in the last few days.

The worst IT outage ever!

>The worst IT outage so far.

moffkalast
1 replies
6h30m

If you can't trust anything, you can't do anything. The result is that people who actually need to get their job done then circumvent the entire system and reduce security to absolute zero. As much as the average security expert would like to lock everyone in a padded room forever, there needs to be an acceptable trade-off level of safety and usability.

Post-its with passwords are the most classical example, but removing internet access from an entire institution is just gonna lead to people bringing their own mobile networked devices and does honestly sound like a completely braindead idea.

orkoden
0 replies
3h54m

Post-it‘s with passwords aren’t the worst in security. Physical access to the note is required to get the password. One post-it under each keyboard with a different password is better than the same password shared widely.

throw0101a
0 replies
5h11m

I bet that gives hospital IT a false sense of security.

Why?

They can just as effectively use (e.g.) Nessus/Rapid7/Qualsys to do security sweeps of that network as any other.

At my last job we had an IoT HVAC network that we regularly scanned from a dual-homed machine where the on-network devices could not get to the general Internet (no gateway).
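
For what it's worth, even a very small sketch of that kind of sweep is easy to automate from the dual-homed box; the subnet below is a placeholder, and this just wraps plain nmap rather than any of the commercial scanners named above:

    # Sketch: periodic service sweep of an isolated IoT subnet from a dual-homed host.
    import subprocess

    IOT_SUBNET = "10.20.30.0/24"   # placeholder range for the isolated network

    # -sV probes for service versions; results go to an XML file for later review.
    subprocess.run(
        ["nmap", "-sV", "-oX", "iot-sweep.xml", IOT_SUBNET],
        check=True,
    )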

raxxorraxor
0 replies
7h7m

That is a solution for companies like Google or non-essential cloud software provider. For all others serious network segmentation is the safer approach. You could argue that this network is far too large and that is probably true.

There is future tech on ancient software stacks. There is no safe solution to put it on the net directly.

AWS was an example in the article. Easy to get a fixed IP? True. Getting a fixed IP for outgoing traffic? Not that easy anymore - AWS is nice, but for many application it just isn't a solution.

kreddor
10 replies
11h43m

Denmark has something quite similar (Sundhedsdatanettet).

jmnicolas
6 replies
11h17m

Sundhedsdatanettet

What a tongue twister for non danish speaking people :D

skrebbel
2 replies
10h11m

It’s even better when you know that the proper pronunciation is essentially “soondhldlddlnl”

(Source: I speak Danish as a second language. I used to think Georgian was the language with the most consecutive consonants but then I learned how little the Danes respect their vowels so now I know better)

Symbiote
2 replies
11h3m

In English we would put spaces between parts of a "compound" word.

Sundheds data nettet

Sund-hed is "sound-ness" (or even "sound-hood"), i.e. health.

The health data network

ithkuil
0 replies
10h1m

Yep. Not putting spaces in compound words doesn't twist the tongue but twists the eyes!

Eyetwister

mrweasel
2 replies
7h43m

Sundhedsdatanettet actually runs on "public IPs". They aren't public, they aren't routed and they certainly are not connected to the internet, but they do exist within a public range. Not sure why a private range wasn't picked, but I'd guess it's to avoid conflicts with other networks.

myself248
0 replies
6h7m

Could that actually provide a benefit, in that if someone accidentally DOES connect it to the public internet, all sorts of things break immediately and obviously?

If the two networks are entirely separate, and they absolutely must be, then there's no reason for addressing concerns of one to influence the other one iota. (Except that certain OSes might have baked-in assumptions about things like the 127/8 network, so you'd have to work around those.)

anticristi
0 replies
2h25m

Sjunet also uses public IPs, but never exposes them on the Internet. No clue why; probably it turned out to be the easiest way to avoid collisions with the private ranges used at all the member organizations.

sz4kerto
9 replies
11h22m

UK has that (called the HSCN). I don't think it's a good thing. A couple of years ago you had to pay hundreds of dollars for a TLS certificate because there were only a couple of 'approved' certificate providers. It also provides a false sense of security and provides an excuse for bad security policies. The bandwidth is low and expensive.

actionfromafar
5 replies
11h11m

Whether an implementation is bad is orthogonal to whether the idea itself is good.

sz4kerto
2 replies
11h5m

I don't agree fully. If some idea looks really good but implementations tend to be very problematic then the idea is likely presented incompletely or inaccurately, because it carries some hidden/non-apparent risk.

Some good-looking ideas almost always result in beneficial implementations, some good-looking ideas almost always result in bad implementations.

ithkuil
0 replies
10h4m

If all implementations of a "good" idea are bad then that's a strong indication that the "good" idea might have some significant flaws.

If the "good" idea has some bad implementations as well as some good implementations (like the Swedish network example?) then perhaps you shouldn't dismiss the "good" idea so quickly

actionfromafar
0 replies
10h50m

Sure, let's get to concrete things. What is a separate physical network worth, availability wise? Kind of hard to answer. It depends on the threat model. Even geography.

roenxi
1 replies
7h30m

In this case though the two things are closely intertwined. The reason we all use the internet is because it is the most fit-for-purpose network for moving bits around between intranets. If there was a substantially more effective way to do it then it'd be cheaper or better and we'd all migrate to it over time. Countless businesses at all levels of the abstraction stack labour to make the internet cheaper and more convenient (CDNs are unbelievable, I say!).

So people choosing to create a new network are, with high confidence, going to end up with networks that are substantially worse at moving bits around cost effectively than the internet. The reality that they are inconvenient and expensive is built in once the deliberate choice is made to avoid the internet. It might be worth the cost, but the cost comes with the idea.

3np
0 replies
4h0m

Not sure what you are even referring to. Could you be specific? Got examples in mind?

simonjgreen
0 replies
11h9m

It’s not sure it’s quite the same, HSCN does provide border connectivity to Internet as well as a peering exchange. Sjunet on the other hand is an entirely private network with no border connectivity. I have dealt with both.

digging
0 replies
2h48m

provides an excuse for bad security policies

That's a (highly predictable) implementation problem of HSCN, not a problem with the idea. These complaints boil down to the same old thing: stupidly written law setting a (potentially) good policy up for failure.

citrin_ru
0 replies
7h4m

It also provides a false sense of security

The same argument was made against seat belts in cars and bicycle/motorcycle helmets. IMHO this argument is rarely good. A false sense of security should not be addressed by removing protection.

provides an excuse for bad security policies

It should not be used as an excuse, but bad policies in an air-gapped network are less bad than bad policies in an Internet-connected one. I doubt policies will quickly improve as soon as you connect to the Internet.

nox101
3 replies
11h34m

Given the state of IT in healthcare in pretty much every other country, is there any reason to believe "Members of Sjunet are expected to know their networks and keep tight controls on IT" has any meaning? Does the government audit every computer on the network? Are they all updated with the latest patches? Do we know people aren't plugging in random USB devices, etc.?

jmprspret
0 replies
10h40m

Yeah. As someone who has literally been in this industry... as sad as it is, it's a pretty massive ask to expect all healthcare places to have their security "tight". All it takes is one lax clinic or hospital (and truth be told, they are ALL lax in their security in one way or another) for it to come crumbling down.

hulitu
0 replies
11h27m

Are they all updated with the latest patches?

Are the latest patches security updates?

anticristi
0 replies
2h27m

My understanding is that the members need to sign a contract to join Sjunet. I'm not sure of penalties, but being kicked out of Sjunet is likely an incentive for decent IT staffing.

wkat4242
2 replies
11h33m

I kinda wish there was a WAN the way the internet used to be in the 90s. With more hobby stuff, no commercial things and no regulations.

A bit like tor but without all the creepy stuff I guess.

simonjgreen
1 replies
11h6m

There are several overlay WANs for fun and learning. For example, check out DN42.

wkat4242
0 replies
3h21m

Interesting, thanks! I will check it out.

miki123211
0 replies
8h34m

Poland has the little-known "źródło" (meaning "source" in English).

It's a network that interconnects county offices, town halls and such, giving them access to the central databases where citizens' personal information are stored. It's what is used when e.g. changing your address with the government, getting a new ID card, registering a child or marriage etc.

As far as I know, the "Źródło" app runs on separate, "airgapped" computers, with access to the internal network but not the internet, using cryptographic client certificates (via smart cards) for authentication.

jmnicolas
0 replies
11h19m

No computers connected to the internet in Swedish hospitals?

If there are, a bridge could be made willingly or not. OFC it's more secure than everything on the internet.

flumpcakes
47 replies
7h35m

I work in security/systems/ops/etc. and fundamentally disagree with this premise. I understand the author is saying "it's not that easy" and I agree completely with that, but I don't agree that it means you're doing your job well.

Unfortunately the vast majority of people do their jobs poorly. The entire industry is set-up to support people doing their job poorly and to make doing your job well hard.

If I deploy digital signage, the only network access it should have is whitelisted to my servers' IP addresses, and it should only accept signed updates over connections established with certificate pinning.
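
To make that concrete, here is a minimal sketch in Python of what such a pinning check could look like; the host name and the pinned fingerprint are placeholders, not values from any real deployment:

    # Sketch: refuse to talk to any update server whose certificate does not
    # match a fingerprint baked into the device at provisioning time.
    import hashlib
    import socket
    import ssl

    UPDATE_HOST = "updates.example.internal"            # placeholder host name
    PINNED_SHA256 = "replace-with-provisioned-digest"   # placeholder fingerprint

    ctx = ssl.create_default_context()
    with socket.create_connection((UPDATE_HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=UPDATE_HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)     # raw DER bytes
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_SHA256:
                raise ssl.SSLError("certificate fingerprint mismatch, refusing update")
            # only now fetch the update, then check its signature before applying it

Signature verification of the update itself isn't shown; the point is only that the device decides up front exactly which server it will talk to.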

This makes it nearly impossible for a remote attacker to mess with it. Look at the security industry that has exploded from the rise of IoT. There's signage out there (replace with any other IoT/SCADA/deployed device) with open ports and default passwords, I guarantee it.

IoT is just a computer, but it's also a computer that you neglect even more than the servers/virtual machines you're already running poorly.

People don't want to accept this, or even might be affronted by this.

There are some places doing things well - but it's a vast minority of companies out there, because you are not incentivised to do things well.

"Best practices" or following instructions from vendors does not mean you are doing things well. It means you are doing just enough that a vendor can be bothered to support. Which in a lot of cases is unfettered network access.

karmarepellent
27 replies
7h4m

I understand the author is saying "it's not that easy" and I agree completely with that, but I don't agree that it means you're doing your job well.

Could you elaborate what you mean by this? It seems to me that your comment just highlights another set of problems that should (in theory) motivate people to think more clearly about the ways their system communicates with the internet.

I don't see where you disagree with the blog author. Or are you saying that it's fundamentally impossible to improve security in internet-connected systems because people are not equipped to do so?

flumpcakes
22 replies
6h52m

There's no reason for digital signage inside an airport to be connected to the internet (or running enterprise security software either). The author seemingly doesn't agree with this. Hospital computers should not be connected to the internet. If you are receiving real-time updates directly from a vendor, you are connected to the internet.

Ideally updates should come from a central source internal to the organisation that has been vetted and approved by the organisation itself. Clearly CrowdStrike knows this and that's why they offer N, N-1, N-2 updates for their Falcon sensor.

It's easier to remote into a box and just pull updates from the internet though.

Granted I have not had dozens of jobs, but the only place I have worked where security was treated as the first-class issue that it is (and where this type of CrowdStrike incident probably wouldn't have happened) is at one of the largest financial services companies in the world. And it did not hamper development; it actually improved it, because you couldn't make stupid mistakes like relying on externally hosted CDN content for your backend app. But for people that don't do their job well, it's a pain. "Hey, why doesn't my docker image build on a prod machine, why can't I download from Docker Hub?"

horsawlarway
8 replies
5h53m

I don't really think you're speaking from a position that has actually considered the requirements of the system.

Ex - you say this:

Hospital computers should not be connected to the internet.

But then you immediately jump onwards, as though what you've said is obvious common sense - but I don't think it is.

Can you explain to me why you believe hospital computers shouldn't be connected to the internet, and then discuss and weigh the downsides of NOT connecting them?

Because I think that comment exposes the exact mindset that the author was discussing... no obvious appreciation or understanding of the situation, just an ill-informed, off-the-cuff remark.

Can you tell me how you plan to implement cross-facility dosage tracking for patients?

Can you let me know how you're going to send CT scans or x-rays to the correct expert?

Can you tell me how that patient's records are going to be updated, how billing is going to be generated, or how their personal doctor is going to be notified of their condition?

I can think of a LOT of reasons that hospital computers really should be connected to a network. Maybe not every computer, maybe not every network, but even that distinction appears to be far beyond the thought you put into it before immediately saying "Hospital computers should not be connected to the internet".

You're basically making the author's point for him here.

flumpcakes
2 replies
5h6m

You're basically making the author's point for him here.

And this is where I fundamentally disagree with the author. Nothing you've listed requires access to the internet - it requires access to a network.

It's a lot easier to just deploy everything and set the firewall to any-any and go home because it's working.

Like the author says, it's hard and difficult to find the right level, but to scoff at the simplest of advice of "it shouldn't be on the internet" is giving up.

worthless-trash
0 replies
3h41m

it's hard and difficult to find the right level

There is some pretty solid documentation on this, and there has been for some time; the knowledge has simply been lost or discarded because this kind of knowledge was considered 'arcane' or 'restrictive'.

There were times when infrastructure had devel/testing/production environments with staged rollouts and deployments.

Production had only the minimal access, with admin config routable only to a private network, hidden behind the frontend cluster. Things were hard for admins and hackers alike.

There were at one point gated networks and the idea of militarized and demilitarized zones, router-level firewalls, outgoing connection limiting, centralized logging (nah, don't do that, just run your apps on a pod and forget your security; forensic recovery of your app is dead by the next deployment (probably twice over today) already) and many, many more things.

We bought the newthink of 'web security' as the true way to build our infra. When we see it fall apart on a blue Friday afternoon, do we look back to see the bigger picture? No, we can't take responsibility for the weakness, because any suggestion of personal responsibility that requires work is out of the question.

horsawlarway
0 replies
2h31m

And this is where I fundamentally disagree with the author. Nothing you've listed requires access to the internet - it requires access to a network.

Ok - now what? I don't understand the disagreement you seem to think you have.

That network inevitably requires a connection to the outside world or those exact same features I listed above stop working. So you're just shifting blame without an answer...

So continue with your path - now I have an X-Ray machine that's connected to a network. The router on that network still has to connect to the internet to facilitate functional use of the machine, so let's assume crowdstrike is running there - tell me how your advice of "Don't connect it to the internet" is meaningful here?

I have an expert radiologist who I want to confer with on my patients x-ray, he's in a different state - what is your advice? How is my problem solved with a banal "Just don't connect it!"?

bornfreddy
2 replies
5h35m

The answer is simple: security. An attacker needs a way into the machine to control it. Granted, there are some obscure vectors of attack that don't need network connectivity (Stuxnet); however, the game becomes much harder for the attacker when they have no direct connection to the target system.

Many countries solve this by having a separate network for hospitals, but it is not the only way.

In general, it is a trade-off between security and convenience. Yes, you can't send an e-mail without an Internet connection (well... not easily). But do you need to? From the computer that controls the MRI machine? Or is it just easier to say "we need Internet because updates"?

kwhitefoot
1 replies
4h39m

you can't send an e-mail without an Internet connection

Of course you can.

bornfreddy
0 replies
1h1m

To an external recipient? And you misquoted me by removing the end of the sentence in parentheses.

throw0101a
0 replies
5h15m

I can think of a LOT of reasons that hospital computers really should be connected to a network.

Connected to network ≠ connected to Internet.

CapstanRoller
0 replies
3h47m

Most of your questions can be answered with these two weird old tricks: site-to-site VPNs && VLANs

Why has everyone seemingly discarded these ideas or forgotten? Yes, it is a pain to manage. Industry could reduce this pain, but investing in good security isn't profitable.

Blame the profit incentive. Blame the VCs (especially the people who own this website)

jstanley
7 replies
5h24m

Hospital computers should not be connected to the internet.

But then how will doctors google the patients' symptoms?

If your answer is "they should already know all that is required to do their job without looking it up online", then consider whether you hold yourself to the same standard. I don't.

Mistletoe
1 replies
4h0m

Medical care is so bad now that I have had doctors and nurse practitioners look up WebMD while I'm in the room with them.

devbent
0 replies
2h45m

Doctors and nurses have always had large reference tomes that they referred to.

Lists of medication side effects, dosing guidelines and so forth have been common throughout the industry almost since its very inception. Indeed, there are books going back thousands of years across multiple cultures around the world that are just reference guides for medical practitioners.

rightbyte
0 replies
5h7m

They could have designated internet-connected computers, and other computers for admin tasks, processing x-rays, etc.

Having the hospital admin and some machines connected outwards seems like a recipe for killing patients.

mixmastamyk
0 replies
31m

Mobile device, second network, VLAN, etc. Lots of options.

karmarepellent
0 replies
5h12m

It is hard to make assumptions about what parent means by that statement, given that there are degrees in which a system can be "connected to the internet". For example, every request coming in or going out, as well as SSH access, could go through a proxy. I would still call that being "connected to the internet", but it's different from giving your server a public IP address.

flumpcakes
0 replies
5h4m

This information (looking up symptoms) would be better served by a curated internal wiki that all practitioners have access to. Not google.

cowboylowrez
0 replies
3h14m

doctor: heart arrhythmia

google ai: remove heart

doctor: ok lets schedule the operation!

devbent
2 replies
2h47m

Hospital computers should not be connected to the internet.

Scheduling an urgent care appointment is connected to my account at the hospital network. When I step into the hospital and get tests done, everything is automatically uploaded to a web portal where I can view it, and doctors can easily forward my test results to other facilities. A lot of the imaging work is actually done at third-party facilities, but the results still show up in my medical records, presumably having been forwarded.

When my appointment begins my doctor can look up any comments I left when scheduling the appointment so she knows why I'm there.

Is it possible that all the different buildings and facilities that are part of the hospital network I belong to, which extends across multiple counties in my state, could all be running on their own private isolated network that is air-gapped, with medical records manually transferred over to the web portal via sneakernet?

Sure, but no one is going to do that.

reaperducer
1 replies
1h24m

When my appointment begins my doctor can look up any comments I left when scheduling the appointment so she knows why I'm there.

Or, you could… you know… talk to her.

I work for a healthcare company that runs several hospitals and primary care clinics. When you become a patient, you're given a little notebook and a branded pen so that in between appointments, the patient writes down every little health question and problem they have. When the patient shows up for the appointment, the little notebook is reviewed by the doctor.

Convenience should not always trump security.

devbent
0 replies
39m

That sounds like an absolute nightmare. I'd lose the pen and paper within 30 seconds. I much prefer being able to send a quick message to a healthcare provider on my phone and have them get back to me same or next day. Especially for the "should I come in for this rash?" types of questions. (I got MRSA at the gym one time!)

At my child's pediatrician I can upload images through the web portal and have a nurse call me back any time of the day or night. If there are any follow-up questions at my child's next appointment, his doctor has full access to all communications that happened through both text message and the web portal.

This kind of 24-hour digitally connected healthcare access was a huge boon as a first-time parent and made life a lot easier, especially during incidents like when my son woke up at 3:00 in the morning screaming and then projectile vomited 6 ft across his room (which, by the way, was nothing to worry about and apparently completely normal... so long as it just happened once).

karmarepellent
1 replies
6h28m

Just to add to this: Apart from the security aspect of hosting your software dependencies internally, it also gives you the added benefit of better availability and performance.

As you mentioned in your other comment however, this presumes a certain mindset in people where they are willing to plan upfront and are mindful of the dependencies their software needs. As you say, just pulling whatever from Docker hub is certainly easier.

Internally hosted repositories also allow you to pull and install updates at your own pace, possibly days after they have been released upstream. So if a patch is borked you won't be affected.

worthless-trash
0 replies
4h0m

I hate to be that guy but 'back in my day', that was called "testing/production" deployments.

myself248
1 replies
5h51m

I interpreted it as "Software is crap, and it's hard to make crap work offline. The problem is not the offline, it's the crap."

The question is where to lay the blame for the crap, and how to change that.

I would love to see the author's "lists" turned into a table of sorts, and then any given piece of software could be rated by how many situations on each list it works in without modification, works in with trivial config tweaks, works in with more elaborate measures, or cannot work in. Turn the whole table green and your software is more attractive to certain environments.

karmarepellent
0 replies
5h33m

I don't think it's always the software that is to blame. Sure there is software that wants to self-update and only accepts one upstream update source run by the vendor, instead of allowing users to run their own mirror and control update distribution to some extent.

But there are also cases where the software could perfectly well run on air-gapped systems but people are unwilling to put in the work (for some reason or another). For example, everyone could run their own Docker image mirror that only contains images that are actually needed and pulls them from upstream with some delay. Docker allows you to pull images from your own registry. But not everyone is willing to operate their own registry.

flumpcakes
1 replies
6h48m

Or are you saying that it's fundamentally impossible to improve security in internet-connected systems because people are not equipped to do so?

Yes - but I don't think it's that hard. Ninety percent of the work needed to be more secure than most out there is easy. It just requires expertise and for people to change how they work.

Instead people spend $bn on cyber security when you can get 97% of the way there by following good standards and knowing your systems.

I am by no means perfect; I spent all day Friday manually fixing hundreds of machines that had BSOD'd from CrowdStrike. In this case the vendor had made it impossible to do my job well because they offered zero control over how these updates are rolled out - there is no option to put them through QA first. Unlike the sensor itself, which we do roll out gradually after it has been proofed in QA.

karmarepellent
0 replies
6h37m

I agree with you. Unfortunately my place of work has a habit of buying snake-oil security appliances as well, which seems to magically absolve everyone from actually thinking deeply about security themselves.

Rather ironically, said appliance (which basically acts as a man-in-the-middle for remote access) prevents me from going the last mile in securely configuring my systems. I would not be surprised if the appliance self-updates, but I'm not sure.

Regardless, you could make the case that practices like these do not improve overall security, but instead just cost a large amount of money that could go towards hiring three security-minded engineers.

hypeatei
15 replies
7h23m

For IoT in particular, you hit a crossroads where the embedded devs haven't really dealt with advanced security concepts, so you kinda have to micromanage the implementation. And, in small teams it's hard to justify the overhead of managing x509 certs and all the processes that come along with it. Just my personal experience.

kwhitefoot
8 replies
4h42m

Surely white lists and certificate pinning are not advanced security concepts?

pizzafeelsright
3 replies
4h36m

Yes, they are.

Perhaps not in a practical or educational sense, but in the real world of people with non-cryptographic or non-security-related jobs, a certificate is a PITA that goes beyond the functional requirements.

I have seen many insecure building automation systems that are maintained by reclassified HVAC technicians. The movies about hackers taking over an elevator are entirely accurate.

CapstanRoller
2 replies
3h39m

The hassles of cert pinning, etc. should not be laid at the feet of the customer/integrator/whatever. Regardless of whether that person is an HVAC tech who learned about serial ports & busybox yesterday or is a seasoned expert with Ghidra & Wireshark & binwalk.

Companies are being incredibly lazy (at our expense), and the author states this obliquely:

virtually the entire software landscape has been designed with the assumption of internet connectivity

shadowgovt
1 replies
2h56m

The issue is the alternative does not scale.

It's not that companies are being lazy at our expense; it's that nobody wants to pick up the bill. If you write something to work against an online system, the fact it is online implies it adheres to some standard that you can work with, so solving the problem for one online client creates an artifact that is likely applicable to many clients.

Air-gapped systems drift. They get bespoke. They get very out of date. So you have the two practical problems of labor: (a) the product created solves the problem here, today, but nobody else benefits from repurposing that solution and (b) the developer isn't gaining as many transferrable skills for the next gig, and they know it, and so the developers who are willing to do the air-gapped work are harder to find and more expensive.

(I believe this is also the reason you see air-gap a lot more often in government security and banks: they can afford to retain talent past the current project with the certitude there will be more projects in the future).

reaperducer
0 replies
2h18m

The issue is the alternative does not scale.

That's a feature, not a bug.

Almost the entire downfall of the modern tech industry can be attributed to two things: greed, and the fetishization of "scale."

Not everything has to scale. Not everything should scale. Scale is too often used as an excuse to pinch pennies. If your business model only works at massive scale, then your business model might be broken. (Not always, but more often than most people think.)

hypeatei
2 replies
4h25m

Embedded devs can come from a variety of backgrounds (e.g. Electrical engineering) that don't necessarily concern themselves with software security. They're not dumb, it just isn't something they (typically) are knowledgeable in.

simianparrot
1 replies
3h50m

Then they need to learn it. Otherwise they’re being unprofessional and bad at their job.

dsr_
0 replies
3h29m

They were hired by a company which is bad at its job of delivering secure or securable products. The products were purchased by someone bad at their job of selecting secure products. They were deployed by someone who was told that having the signs working ASAP is more important than anything else, so the management is bad at their job of securing the company.

But I won't say that the designing engineer was bad at their job, I would say that the product manager was bad at their job... but probably got promoted, because the company made a bigger profit and delivered faster because security didn't get any attention.

And that's why we need regulation, because "this product is secure" is not easily and cheaply verifiable and carries no penalties for being incorrect. The market can't tell, so everything is a lemon.

michaelt
0 replies
3h30m

The makers of PC BIOSes are arguably the firmware developers who are closest to being normal PC programmers. They've been at it for 40+ years, and they have long provided network-connected features like network boot and remote management.

And yet over 200 motherboards and laptops have their Secure Boot root-of-trust key set to a long-ago-leaked example key from a development kit, named "DO NOT TRUST - AMI Test PK" [1]

The firmware industry at large just ain't good at this stuff.

(Of course from the perspective of the firmware industry, they can make a non-internet-connected heating timer or a washing machine control board that will work fine and reliably with no software updates, for 25+ years - while us PC software cowboys make software so bad crashes are just a fact of life, and bug fix/security updates are a daily occurrence. So the firmware industry isn't all bad - only when they start putting things onto the internet.)

[1] https://news.ycombinator.com/item?id=41071708

throw0101a
2 replies
5h23m

And, in small teams it's hard to justify the overhead of managing x509 certs and all the processes that come along with it. Just my personal experience.

If you're using (say) Python in your client code, call SSLSocket.getpeercert() and check if your company's domain is in the subjectAltName:

* https://docs.python.org/3/library/ssl.html#ssl.SSLSocket.get...

You can ensure it is a valid cert from a valid public CA (like Let's Encrypt) instead of doing your own private CA (which you would specify with SSLContext.load_verify_locations()).
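
Roughly along those lines, a small sketch (the host and company domain below are made up for illustration):

    # Sketch of the getpeercert() / subjectAltName check described above.
    import socket
    import ssl

    HOST = "sign-42.example.com"      # hypothetical device endpoint
    OUR_SUFFIX = ".example.com"       # hypothetical company domain suffix

    ctx = ssl.create_default_context()  # validates against the public CA bundle
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()    # parsed dict; chain already validated
            dns_names = [value for kind, value in cert.get("subjectAltName", ())
                         if kind == "DNS"]
            if not any(name.endswith(OUR_SUFFIX) for name in dns_names):
                raise ssl.CertificateError("peer cert is not for one of our hosts")

The extra suffix check on top of normal hostname verification is what ties the connection to your own domain rather than to just any host with a valid certificate.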

karmarepellent
1 replies
5h14m

I think parent refers to the infrastructure that is required to (automatically?) sign certificates with an internal CA and to manage the distribution of those certificates. I don't think verification is the issue.

hypeatei
0 replies
4h26m

This is correct. You have to consider every step from when the device is manufactured to when something goes catastrophically wrong in the field: all the internal documentation and tools so Joe from support can help customers and Bob in manufacturing can provision devices on his own, all while maintaining controls around that process so nothing gets leaked or abused.

szundi
0 replies
6h1m

Yeah you know, just roll out our MVP, let's see where the business goes with it, and then we'll fix it. Whaat? Budget of fixing it is 2x of the product itself? Hm. Let's have meetings over meetings to postpone the decision until the next one, indefinitely - we cannot really make the decision not to do it of course.

skywhopper
0 replies
2h47m

Sure, for “small teams”. Does that apply to the companies with huge impact from this issue? Is Delta Airlines IT run by a small team? I hope not.

CapstanRoller
0 replies
3h36m

OK, that's fine. Not everyone has to know everything.

So why aren't their employers investing in educating their devs & PMs about security? (rhetorical - we all know why)

zippergz
1 replies
1h29m

A sign connected to the internet but with IP whitelists and cryptographic checks is still CONNECTED TO THE INTERNET. Yeah, it's way safer than the same sign with ports open to the world and no authentication, but you can't treat it as "not connected to the internet." You still have to worry about networking bugs, cryptographic vulnerabilities, configuration errors, and other issues that can allow remote attackers to exploit the system. If you want to make the point, you have to give an example of something that's literally not connected to the internet, not one that's simply locked down better.

Spivak
0 replies
7m

The number of people who are willing and able to build their own disconnected network is vanishingly small, which is the author's point. When deploying "edge" computing like signage, which demands remote administration, telling your customers anything other than "get it connected to the internet and we'll handle it from there" isn't going to go over well.

"Sorry you can't deploy our signs because we haven't deployed our custom LoRa towers in your area" is just gonna get laughs.

jongjong
22 replies
12h29m

'Disconnect from the internet' is a kind of 'Security through obscurity'; which isn't very good security.

It's basically an admission that the software may be full of vulnerabilities and the only way to protect it is to limit its exposure to the outside world.

The root of the problem is that almost all software is poorly designed and full of unnecessary complexity which leaves room for exploitation. Companies don't have a good model for quality software and don't aim for it as a goal. They just pile on layer upon layer of complexity.

Quality software tends to be minimalistic. The code should be so easy to read that an average hacker could hack it in under an hour if there was an issue with it... But if the code is both simple and there is no vulnerability within it, then you can rest assured that there exist no hackers on the face of the earth who can exploit it in unexpected ways.

The attack surface should be crystal clear.

You don't want to play a game of cat and mouse with hackers because it's only a matter of time before you come across a hacker who can surpass your expectations. Also, it's orders of magnitude more work to create complex secure software than it is to create simple secure software.

The mindset to adopt is that bad code deserves to be hacked. Difficulty involved in pulling off the hack is not a factor. It's a matter of time before hackers can disentangle the complexity.

sureIy
7 replies
12h27m

'Security through obscurity'; which isn't very good security.

I never understood this. You never have absolute security, that’s why you must apply the Swiss cheese model. Obscurity is definitely a worthy slice to have. Few people can attack you if you can only be attacked in person.

Animats
6 replies
12h8m

The "Swiss cheese model" worked against amateur attackers. It doesn't hold up against well-funded or patient ones who can work through the holes in each layer. The extreme demo of this was Stuxnet.

DaSHacka
1 replies
11h56m

Hence the purpose behind threat modelling?

All security is really just the swiss-cheese model. Some entities just invest in more slices than others to keep more sophisticated/determined attackers out (such as nation states).

What other practical model is there for security than defense in depth? "Just make 100% bulletproof computers with no faults?"

Animats
0 replies
1h0m

Alternative models:

- Systems that store the code in read-only memory. Example: slot machines.

- Systems with backup systems completely different from the main system, implemented by a different group, and thoroughly tested. Example: Airbus aircraft.

- Systems continuously sanity-checked by hard-wired checkers. Example: Traffic lights.

- Systems where the important computational functions are totally stateless and hardware reset to a ground state for each transaction. Example: #5 Crossbar.

wkat4242
0 replies
11h24m

No but it makes it a hell of a lot harder for them to do it undetected.

There's a reason Stuxnet was an exception. These things are not very common and the only reason we even know about it is because it managed to spread further than its intended target.

lelanthran
0 replies
11h3m

Nothing works against an attack like Stuxnet. This doesn't mean that you should do nothing.

Obscurity is one layer, and it does protect against drive by attacks.

Obscurity as the only layer does not work.

Obscurity as an added layer improves security.

UncleMeat
0 replies
4h51m

Systems security is an economics game. It is valuable to be protected against amateur attackers even if the most extreme state actors can still breach you.

The criticism of "security through obscurity" is specifically Kerckhoffs's principle, which applies to cryptographic systems. It is not an absolute rule outside of that domain.

Pavilion2095
0 replies
10h26m

What is better than the Swiss cheese model or its derivatives? Planes still crash from time to time, but nobody is saying that the model is wrong as the reliability is insane.

fiatpandas
6 replies
12h23m

But if the code is both simple and there is no issue with it, then you can rest assured that there exist no hackers on the face of the earth who can exploit it

Ah yes, security through absolute perfection.

jongjong
4 replies
12h7m

It's difficult to get there but it's often achievable and worthwhile. When a company is worth billions, what's the cost of aiming to reach perfection? What's the cost of not trying?

mewpmewp2
1 replies
11h57m

What is your experience with that perfection? Have you been able to achieve it in a large org?

lelanthran
0 replies
10h28m

Nevermind a large org, I'd be surprised if you can achieve that perfection in tiny software written in the safest languages reviewed by experienced engineers.

kalleboo
0 replies
11h2m

The cost is you lose to your competitor who offers features (complexity) instead of security and now all your effort is for naught.

People talk a lot about security but nobody actually values it. You just send out some Uber Eats coupons or free Credit Protection vouchers and keep on doing what you were doing and in a month everyone has forgotten.

DaSHacka
0 replies
11h53m

I'd argue more aptly: what's the cost when this "perfect" solution inevitably fails? If it was easy (or even _possible_) to make perfect computers, I assure you we already would.

formerly_proven
0 replies
12h10m

security through absolute holistic perfection (STAHP)

m_eiman
0 replies
11h30m

'Disconnect from the internet' is a kind of 'Security through obscurity'; which isn't very good security.

Just think of it as a very efficient firewall.

epigramx
0 replies
11h44m

Give an example of software that has existed for decades and never had an exploit. You might say basic OS tools. The OS might not be secure then; giving internet terminals to everyone in the world is just a stupid oversight. The best is probably a combination: have internet access when needed, but route most of it through an extremely thick layer of firewalls (e.g. for software involving airport security, only system administrators updating stuff should be exposed to the internet).

eimrine
0 replies
12h9m

'Disconnect from the internet' is a kind of 'Security through obscurity'

I disagree with this: no internet is not obscurity, this is more like encapsulation for the sake of having a controllable interface via setters and getters only.

If some computer rules something (something as big as an airport or as tiny as a washing machine), how often does it really need an update of something system-related like the kernel? How many MB of code with potential 0-days are you going to expose to the wild for the sake of that autoupdate?

d4mi3n
0 replies
12h14m

Cybersecurity is mainly a technical application of risk management.

An untrusted network (the internet) is a risk. Removing access from that network is one way to mitigate that risk.

Obscurity doesn’t remove a risk, it just reduces its likelihood. An obscurity approach here would be more akin to changing your SSH port from 22 to some random number rather than blocking SSH entirely.

christianqchung
0 replies
12h25m

I have no experience in cybersecurity, but if the software is possibly full of vulnerabilities and the company is not willing to fix it, why not support disconnecting it if it functions anyway? Analogously, someone who lives in a dangerous neighborhood locks their doors because they can't move somewhere safer, and that's viewed as normal.

bux93
0 replies
12h3m

I don't agree that airgapping is security through obscurity; it's defense in depth, just like putting up a fence around your datacenter. It doesn't solve your insider risk (or 12 foot ladder risk), but it is an additional measure.

PhilipRoman
0 replies
9h52m

I actually agree with this. Of course it's easy to dismiss as "just don't make mistakes", but there is a profound lack of simplicity. For example, a security boundary like ssh or vpn should not have a billion configuration options (or any options for that matter), some less secure than others. It also shouldn't have any complex negotiating before auth. Receive a fixed-size magic + auth key, validate with small formally verified crypto, and if it doesn't match then drop the connection without any IO or other side effect.

But instead we have protocols where the security boundary represents thousands of pages of specifications, parsing of complex structures in elevated context, network requests on behalf of untrusted users, logging without input escaping, and a dozen "unused" extensions added by some company in the 1990s to be backwards compatible with their 5-bit EBCDIC machines.
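
For illustration only, a toy version of that kind of boundary in Python; the sizes, magic value and shared key are invented, and a real design would also bind a challenge or nonce to prevent replay:

    # Toy sketch: read a fixed-size preamble, compare it in constant time,
    # and drop the connection with no output at all if it doesn't match.
    import hashlib
    import hmac
    import socket

    MAGIC = b"EXAMPLE-NET-V1".ljust(16, b"\x00")     # invented 16-byte magic
    SHARED_KEY = b"provisioned-out-of-band-secret"   # invented shared key

    def read_exact(conn: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                break
            buf += chunk
        return buf

    def accept_or_drop(conn: socket.socket) -> bool:
        preamble = read_exact(conn, 48)              # 16-byte magic + 32-byte tag
        if len(preamble) != 48:
            conn.close()
            return False
        magic, tag = preamble[:16], preamble[16:]
        expected = hmac.new(SHARED_KEY, magic, hashlib.sha256).digest()
        ok = hmac.compare_digest(magic, MAGIC) and hmac.compare_digest(tag, expected)
        if not ok:
            conn.close()   # no banner, no error message, no logging of attacker input
            return False
        return True        # only now hand the socket to the real protocol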

andrewstuart
11 replies
11h31m

I don't think systems should not be connected to the Internet.

I did find it surprising however that so many systems shown on TV run Windows.

Digital signage screens, shopping registers, all sorts of stuff that I assumed would be running Linux.

It is surprising to me that systems with functions like a cash register would be doing automatic updates at all.

cqqxo4zV46cp
4 replies
9h19m

Because desktop Linux is an absolute bloody mess and most IT departments are completely justified in not wanting to deal with it?

I’m not saying that Windows is great. I haven’t willingly used it in 15 years. But you can’t keep your head in the sand about the sad state of Linux and anything graphical, especially on esoteric hardware.

POS systems are often effectively Internet-connected, because they need to report stock levels, connect to financial networks, process BNPL applications, etc. It's completely warranted to treat them like 'endpoints', because they are.

bregma
2 replies
7h43m

POS terminals and electronic billboards are not desktops, though, so arguments about desktop software is irrelevant. These are all dedicated application appliances with known, controlled hardware and software constraints. Using a general-purpose desktop designed for corporate executives running Excel and PowerPoint is just the wrong technology choice for such an application. Some kind of specialized Linux-based system, on the other hand, is an excellent choice.

heraldgeezer
0 replies
4h57m

I have seen digital signage just be a PPT file running in full screen though.

Good? No, but that's the reality of things.

duckmysick
0 replies
4h24m

Most of the point of sale systems I've seen run Windows, which means most of the off-the-shelf apps are written for Windows. Even if they are written in Java, they have hard dependencies on Windows.

Using a general-purpose desktop designed for corporate executives running Excel and PowerPoint is just the wrong technology choice for such an application.

Agree, which is why most of the time you use Windows Embedded for Point of Service or Windows IoT Enterprise. Which again, is Windows.

andrewstuart
0 replies
8h32m

I have a lot of experience with Linux running custom builds with Chrome.

I can say it's not easy to configure, but once done it's very stable and simple.

willi59549879
0 replies
10h33m

It surprised me too. Maybe it is because people are just more used to Windows. Or it might be because there is more software geared towards rolling out software updates.

mrweasel
0 replies
7h36m

It is surprising to me that systems with functions like a cash register would be doing automatic updates at all.

Yeah, that's weird; at least do it via some on-premise "proxy". Windows has WSUS and I'd assume that CrowdStrike has something similar. I know that Trend Micro provides, or has provided, an update service allowing customers to roll out patches at their own pace.

Sadly, very few things seem to run correctly without internet access these days. I get the complaint about management and updates for things in people's homes, but if you're an airport, would it be so bad to have critical infrastructure not on the internet? I don't really care if the digital signs run Windows, there are plenty of reasons why you'd choose that, but why run CrowdStrike on those devices? Shouldn't they be read-only anyway?

hnthrow289570
0 replies
5h44m

I don't think many developers are going to be really excited about building signs or kiosks, so they will not be bringing their A-game.

Since MS has a kiosk mode officially, they probably assume either choice is good enough.

heraldgeezer
0 replies
4h58m

Because software is bought from vendors that require Windows. This is often the case with Point of Sale software.

OR the solution is a powerpoint or mp4 file running on a TV for signage.

If every office computer is already Windows, IT has management applications like GPO, SCCM/Intune, or RMMs like Datto/Ninjaone to deploy policy and manage Windows computers remotely. It then makes sense to just keep using those, rather than making a whole new system just for the digital signage computers.

forinti
0 replies
6h7m

A long time ago I built a multimedia kiosk for a retail chain. I used Linux and X without a Window Manager, so my worst case scenario was that the clients would see a gray screen.

I agree that it does not make sense to use Windows for this sort of thing.

ahoka
0 replies
3h55m

It wouldn't be much better if they ran Fedora 14.

creesch
10 replies
10h39m

A bit of a tangent to the subject of the blog, but something that has been bugging me for a while. What's up with all these blogs that choose fonts that are just not that good for readability? In this case, monospace. It's not code, it is not formatted as code, making it a bad choice for comfortable reading.

Are these people not writing blogs to be read?

And just to be ahead of it, just because you are able to read it doesn't mean it wouldn't be easier and more comfortable to read in a more suitable font.

inetknght
5 replies
4h54m

What's up with all these blogs that choose fonts that are just not that good for readability? In this case, monospace. It's not code, it is not formatted as code, making it a bad choice for comfortable reading.

That's a subjective opinion.

I vastly prefer monospaced fonts. They're easier to read!

creesch
4 replies
3h35m

Not really, there have been various studies that have shown that for the majority of people and cases sans serif fonts are the better choice for reading.

There are some exceptions. Obviously, code is one of these, as code is explicitly differently structured. Dyslexia is another one where monospaced fonts might actually increase readability.

But overall they decrease readability compared to other font types.

msla
3 replies
3h19m

Not really, there have been various studies that have shown that for the majority of people and cases sans serif fonts are the better choice for reading.

... so therefore thinline grey-on-gray text is ideal! Good meeting, let's do lunch.

You can nitpick the linked site, but it is amazingly readable compared to sites that feel compelled to adhere to modern fashions, like having blinking, throbbing nonsense in the field of vision making it impossible to concentrate on the actual text, or making the text too small unless you have exactly the same ultra-retina 8K HD phone the author does, or thinking "contrast" is a city in Turkey.

creesch
2 replies
2h34m

Well that is one way to go over the top with a counterargument. I am not advocating for any of that, just that maybe a sans serif would have been a more suitable choice.

msla
1 replies
2h19m

It's odd how you insist that sans serif is more readable when body text in every book (OK, every grown-up book) I've read has been serif, as far as I can remember.

creesch
0 replies
1h48m

This very much feels like an arguing-for-the-sake-of-arguing type of response to me. Given that, what I am typing isn't obscure knowledge in the slightest. Anyway, assuming you are honestly just curious: sans serif has been shown to be the more readable font type on displays. Granted, on modern displays with higher pixel densities that is less important.

Either way, both are a better choice than a monospaced font.

williamdclt
1 replies
7h42m

I always think that if I prefer to enable Firefox's reader view on a blog, they _really_ messed up: a bland, basic, generic styling is preferable to their custom one. It's what happens with most blogs, sadly

creesch
0 replies
7h24m

It is indeed what prompted me to go on this tangent. I see so many blog posts being posted here that are just unnecessarily uncomfortable to read. It's just baffling how a person can spend time and effort to publish something on the internet, clearly wanting people to read it, and then not consider the bare basics.

0898
1 replies
10h27m

It’s a retro nod to newsletters like NTK and Popbitch, I believe.

creesch
0 replies
10h10m

I get that it is an aesthetic choice. One I can't really understand, given that the main purpose of a blog (I think, anyway) is to be read.

fifteen1506
3 replies
9h11m

The author is failing to see a potential solution.

Whitelist all needed IPs for business functionality, enable the whole Internet once every 3 hours for an hour.

Bonus points if you can do it by network segment.

It would be enough to spare half your computers from the CrowdStrike issue, since I believe the update was pulled after an hour.

Will anyone do this? Probably not. But it is worth entertaining as a possibility between fully-on connectivity and fully disconnected.
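
A sketch of the scheduling half of that idea; the actual allow/deny switch is left as a placeholder because it depends entirely on what firewall or edge device you have:

    import time

    CYCLE = 3 * 3600     # repeat every 3 hours
    WINDOW = 1 * 3600    # of which 1 hour is "internet open"

    def set_internet_access(enabled: bool) -> None:
        # Placeholder: push an allow-all policy, or fall back to the
        # business-IP allowlist, using whatever API your gear exposes.
        print("internet", "OPEN" if enabled else "allowlist only")

    while True:
        phase = time.time() % CYCLE
        set_internet_access(phase < WINDOW)
        time.sleep(60)   # re-evaluate once a minute

Staggering the phase per network segment would get you the "bonus points" version.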

vel0city
1 replies
2h28m

Whitelist all needed IPs for business functionality

I really don't like this mentality. The IP I'm serving some service from might change. DNS is a useful thing.

deathanatos
0 replies
36m

Security types love it. But from an infra eng viewpoint, it's an utter pain in the ass, and the thought of "what if the IP changes?" — which inevitably happens — has no process and no plan, and ends up as "manually update O(n) different configurations, of which no list exists, so you'll never know if you got them all."

deathanatos
0 replies
38m

It would be enough

That depends on the phase of your "every 3 hours for an hour" signal and the phase of "the update was pulled after an hour". That's a 33% overlap. Feelin' lucky?

RF_Savage
3 replies
11h51m

There is also hamnet, which runs on the 44Net IP block and is partly internet-routable and partly not.

https://hamnetdb.net/map.cgi

It has interesting limitations due to the amateur radio spectrum used, including a total ban on commercial use.

That is the social contract of the spectrum: you get cheap access to loads of spectrum between 136 kHz and 241 GHz, but you can't make money with it.

wkat4242
2 replies
11h31m

Yeah it's really hard to get an uplink to it though. Even in a big city.

Only in the Netherlands and Germany is it really widespread: https://hamnetdb.net/map.cgi . Here in Spain it's not available anywhere near me.

withinboredom
1 replies
8h36m

With ham radio, it doesn't need to be near, IIRC. It's been a long af time since I've messed about with radio, but I'm pretty sure you'd be able to use the ionosphere as a reflector.

wkat4242
0 replies
6h55m

For this it does need to be near. These are all high-speed connections that need line of sight. Basically those microwave dishes that you see everywhere. Or at least a grid reflector or yagi or something.

With HF, yes, you can use the various atmospheric layers to reflect signals depending on the band, but in those bands the available bandwidth is extremely low (the entire HF range itself is only 30 MHz, and the amateurs only have a few small slices of that). The only practical digital operations there are Morse, RTTY (basically telex) and some obscure, extremely slow, GPS-synced data modes like WSPR and FT8 that are basically for distance bragging rights but don't transmit useful payload.

So in effect, no. In this case line of sight or at least short distances (VHF/UHF) are required.

Also, I don't have space for huge antennas that HF requires as I'm in a small apartment in the middle of a built-up city.

tjoff
1 replies
11h33m

Good article, though I really thought it would be about the other end. You know, hacking movies from the 90s(?), where the good guys face a hacker attack, frantically typing at the keyboard trying to keep the hackers out. It is a losing battle, but just in the nick of time (the progress bar is at 97%) the hero unplugs the power cord or internet cable.

Or, in the case of CrowdStrike: I can imagine support starts to get some calls, and at some point you realize that something has gone horribly wrong. An update, maybe not obvious which, is wreaking havoc. How do you stop it? Have you foreseen this scenario, and do you have a simple switch to stop sending updates?

Or do you cut the internet? Unlike the movies, there isn't a single cord to pull; maybe the servers are in a different building or in some cloud somewhere. They probably have a CDN; can you pull the files quickly?

Now maybe by the time they discovered this it was mostly too late; all online systems might already have gotten the latest update (but even if that is the case, do they know that is the case?).

JKCalhoun
0 replies
5h43m

I have resisted "auto updates" for the OSes of my personal machines. Instead the OS nags me when there is a software update and I just ignore it for a week or so. I assume that any accidentally buggy software update will be found by others (or Apple) first and I will have dodged that particular bullet.

Not air-gap, temporal gap.

renegat0x0
1 replies
11h21m

The connection between clownstrike and cybersecurity is flimsy. This was not an attack.

This was a resource management problem, a process problem.

Meaning: if your processes are invalid, you can also fail in an offline scenario. If you do not treat quality control or testing correctly, you're going to have a bad time.

owl57
0 replies
10h59m

> if your processes are invalid, you can also fail in an offline scenario

Online amplifies failure at least as well as it amplifies success. Offline maintenance is quite unlikely to bluescreen 8 million devices before anyone has time to figure out something's going wrong.

forinti
1 replies
6h10m

After watching a video of a person playing with a McDonald's kiosk, I started doing the same with equipment I found in different places.

One food court had kiosks with Windows and complete access to the Internet. Somebody could download malware and steal credit card data. Every time I used one, I turned it off or left a message on the screen. Eventually they started running them in kiosk mode.

Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.

The third was an interactive display for a brand of beer. This one wasn't going to cause any harm, but I liked to leave Notepad open with "Drink water" on it. Eventually they turned it off. That's one way to fix it, I guess.

remus
0 replies
5h37m

Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.

I don't know the details of how the parking kiosks near me are set up, but I can only assume they're put together really poorly, because once, after mashing buttons in frustration, one started refunding me for tickets that I'd not purchased. You'd have thought "Don't give money to random passers-by" would have been fairly high on the list of requirements, but there we are.

asynchronous
1 replies
12h46m

Great write-up on the issues and challenges with air-gapped and entirely internet-avoidant systems in the modern software world.

gala8y
0 replies
11h13m

I will piggyback on your comment. This is an absolutely stellar example of fine-grained thinking, which incidentally shows the author's huge expertise and real-life experience. Regardless of the type of problem, the amount of thinking put into solving it makes the difference. It is thinking in systems, building complex mental models, and finding edge cases that pays off, regardless of the domain or problem at hand. People tend to halt too soon in building mental models. When you are responsible for any given area, you had better build a fine-grained model. Obviously this is costly in terms of time, money, lost opportunity, ... and there is of course a blurry line where you should stop building out the model's complexity and just implement a solution. Life will come knocking on your door anyway; Aunt Entropy shows up sooner or later.

This is also why almost all news is nonsense to an expert in the given domain. Basically... "It's not that simple."

NoboruWataya
1 replies
7h14m

It seems fairly obvious that an airline reservation system needs to be connected to a network at least; I haven't heard many people claim those should all have been offline. But, for example, I heard stories of lathes in workshops that were disabled by this. You gotta wonder whether they really needed to be online. (I'm sure there are reasons, but they are reasons that should be weighed against the risks.)

Beyond that there are plenty of even more ridiculous examples of things that are now connected to the internet, like refrigerators, kettles, garage doors etc. (I don't know if many, or any, of these things were affected by the CrowdStrike incident, but if not, it's only a matter of time until the next one.)

As for the claim that non-connected systems are "very, very annoying", my experience as a user is that all security is "very, very annoying". 2FA, mandatory password changing, locked down devices, malware scanners, link sanitisers - some of it is necessary, some of it is bullshit (and I'm not qualified to tell the difference), but all of it is friction.

kwhitefoot
0 replies
4h29m

a network

Of course. But not the Internet.

LeifCarrotson
1 replies
2h18m

I'm a controls engineer. I've built hundreds of machines; they do have Ethernet cables for fieldbus networks, but they should never be connected to the Internet.

Every tool and die shop in your neighborhood industrial park contains CNC machines with Ethernet ports that cannot be put on the Internet. Every manufacturing plant with custom equipment, conveyor lines and presses and robots and CNCs and pump stations and on and on, uses PLC and HMI systems that speak Ethernet but are not suitable for exposure to the Internet.

The article says:

In other words, the modern business computer is almost primarily a communications device.

There are not that many practical line-of-business computer systems that produce value without interconnection with other line-of-business computer systems.

which ignores the entirety of the manufacturing sector as well as the electronic devices produced by that sector. Millions of embedded systems and PLCs produce value all day long by checking once every millisecond whether one or more physical or logical digital inputs have changed state, and if so, changing the state of one or more physical or logical digital outputs.

There's no need for the resistance welder whose castings were built more than a century ago, and whose last update was to receive a PLC and a black-and-white screen for recipe configurations in 2003, to be updated with 2024 security systems. You just take your clipboard to it, punch in the targets, and precisely melt some steel.

Typically, you only connect to machines like this by literally picking up your laptop and walking out to the machine with an Ethernet patch cable. If anything more than that is needed, I expect my customers to put them on a firewalled OT network, or to bridge between information technology (IT) and operations technology (OT) with a Tosibox, Ixon, or other SCADA/VPN appliance.

bo1024
0 replies
13m

It's reassuring that such things still exist. My mental model of consumer hardware is that they take devices like the ones you describe, and just add wifi, bluetooth, telemetry, ads, and an app.

tormeh
0 replies
7h13m

I remain unconvinced that you shouldn't air-gap systems because that means you can't use internet-centric development practices. I find this argument absurd. The systems that should have their Ethernet ports epoxied should also never have been programmed using internet-centric development practices in the first place. Your MRI machine fetches JS dependencies from NPM on boot? Straight to jail. Not metaphorically.

rowbin
0 replies
9h50m

> The stronger versions, things from List 1 and List 2, are mostly only seen in defense and intelligence

And I don't think that is enough. I agree that it is easier and sufficient for most systems to just be connected over the internet. But health, aviation and critical infrastructure in general should try to be offline as much as possible. Many of the issues described with being offline stem from having many third-party dependencies (which typically assume internet access). In general, but for critical infrastructure especially, you want as few third-party dependencies as possible. Sure, it's not as easy as saying "we don't want third-party dependencies" and all is well. You'll have to make a lot of sacrifices. But you also have a lot to gain when dramatically decreasing complexity, and not only from a security standpoint. I really do believe there are many cases where it would be better to use a severely limited tech stack (hardware and software) and a data-diode-like approach where necessary.

One of the key headaches mentioned when going offline is TLS. I agree, and I think the solution is to not use TLS at all. Using a VPN inside the air-gapped network should be slightly better. It's still a huge headache and you have to get this right, but being connected to the internet at all times is also a HUGE headache.
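
To make the data-diode idea a bit more concrete, here is a minimal sketch of the software side of such a one-way feed (the address, port, and payload are made up; the real enforcement comes from the hardware): the sender only ever writes UDP datagrams and never reads a reply, so nothing downstream can talk back even at the application layer.

    import json
    import socket
    import time

    DIODE_ADDR = ("10.10.0.2", 5005)   # receiver on the far side of the one-way link

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def push(reading: dict) -> None:
        # Fire-and-forget: no ACKs, no reads, no connection state.
        sock.sendto(json.dumps(reading).encode(), DIODE_ADDR)

    while True:
        push({"sensor": "pump-7", "ts": time.time(), "pressure_kpa": 512.3})
        time.sleep(1)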

readyplayernull
0 replies
2h3m

And I just got this from big bro Google:

[...] With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. [...] Devices in the network use Bluetooth to scan for nearby items.

Full email content:

Find My Device network is coming soon

You can use Find My Device today to locate devices when they’re connected to the internet. With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. You can also find any compatible Fast Pair accessories when they’re disconnected from your device. This includes compatible earbuds and headphones, and trackers that you can attach to your wallet, keys, or bike.

To help you find your items when they’re offline, Find My Device will use the network of over a billion devices in the Android community and store your devices’ recent locations.

How it works

Devices in the network use Bluetooth to scan for nearby items. If other devices detect your items, they’ll securely send the locations where the items were detected to Find My Device. Your Android devices will do the same to help others find their offline items when detected nearby.

Your devices’ locations will be encrypted using the PIN, pattern, or password for your Android devices. They can only be seen by you and those you share your devices with in Find My Device. They will not be visible to Google or used for other purposes.

You’ll get a confirmation email in 3 days when this feature is turned on for your Android devices. Until then, you can opt out of the network through Find My Device on the web. Your choice will apply to all Android devices linked to [email]. After the feature is on, you can manage device participation anytime through Find My Device settings on the device.

Learn more

lokimedes
0 replies
10h42m

That pretty well summed up my time delivering state of the art AI solutions to military customers. 80% of the effort was getting internet-native tooling to work seamlessly in an air-gapped environment.

llm_trw
0 replies
3h3m

The description of updates is painfully true.

A long time ago I worked at a broker trader where all communications, including server communications, had to go through Zscaler as a man in the middle.

What had been routine all of a sudden became impossible.

Turns out that git, apt, pip, cabal and ctan all had different ideas about how easy they should make this. After a month of fighting each of them I gave up. I just downloaded everything from their public FTP repos and built from source over a week. I wish good luck to whoever had to maintain it.
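
For anyone fighting the same setup: git and pip at least can be pointed at a corporate CA bundle fairly cleanly. A rough sketch, where the bundle path is an example and apt, cabal and ctan each need their own, different incantations:

    import subprocess

    CA_BUNDLE = "/etc/ssl/certs/corp-root.pem"   # example path to the proxy's root CA

    # git: trust the corporate CA for all HTTPS remotes
    subprocess.run(["git", "config", "--global", "http.sslCAInfo", CA_BUNDLE], check=True)

    # pip: use the same bundle when downloading packages
    subprocess.run(["pip", "config", "set", "global.cert", CA_BUNDLE], check=True)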

halfcat
0 replies
5m

There are many fundamental assumptions that ought to be challenged like this.

Does a computer that can access your accounting system need to access the internet? Or email?

A user could run two computers, one that’s for internet stuff, and one that does important internal stuff. But that’s a silly idea because it’s costly.

However, we can achieve the same thing with virtualization, where a user’s web browser is running in a container/VM somewhere and if compromised, goes away.

Stuff like this exists throughout society in general. When should a city employee carry a gun? On one end of the spectrum, the SWAT team probably needs guns. On the other end, the guy who put a note on my door that my fence was leaning into the neighbor’s property didn’t have a gun. So the question is, is a traffic stop closer to the SWAT team or to the guy kindly letting me know I’ve violated a city ordinance?

I don’t know why these things don’t get reassessed. Is it that infrastructure is slower to iterate on? Reworking the company’s network infrastructure, or retraining law enforcement departments, is a big, costly undertaking.

gwern
0 replies
1h22m

Marvin Minsky in 1970 (54 years ago) on how you can't just "turn off the X" when it is a powerful economically-valuable pervasive computer system:

"Many computer scientists believe that people who talk about computer autonomy are indulging in a lot of cybernetic hoopla. Most of these skeptics are engineers who work mainly with technical problems in computer hardware and who are preoccupied with the mechanical operations of these machines. Other computer experts seriously doubt that the finer psychic processes of the human mind will ever be brought within the scope of circuitry, but they see autonomy as a prospect and are persuaded that the social impact will be immense.

Up to a point, says Minsky, the impact will be positive. “The machine dehumanized man, but it could rehumanize him.” By automating all routine work and even tedious low-grade thinking, computers could free billions of people to spend most of their time doing pretty much as they d—n please. But such progress could also produce quite different results. “It might happen”, says Herbert Simon, “that the Puritan work ethic would crumble to dust and masses of people would succumb to the diseases of leisure.” An even greater danger may be in man’s increasing and by now irreversible dependency upon the computer

The electronic circuit has already replaced the dynamo at the center of technological civilization. Many US industries and businesses, the telephone and power grids, the airlines and the mail service, the systems for distributing food and, not least, the big government bureaucracies would be instantly disrupted and threatened with complete breakdown if the computers they depend on were disconnected. The disorder in Western Europe and the Soviet Union would be almost as severe. What’s more, our dependency on computers seems certain to increase at a rapid rate. Doctors are already beginning to rely on computer diagnosis and computer-administered postoperative care. Artificial Intelligence experts believe that fiscal planners in both industry and government, caught up in deepening economic complexities, will gradually delegate to computers nearly complete control of the national (and even the global) economy. In the interests of efficiency, cost-cutting and speed of reaction, the Department of Defense may well be forced more and more to surrender human direction of military policies to machines that plan strategy and tactics. In time, say the scientist, diplomats will abdicate judgment to computers that predict, say, Russian policy by analyzing their own simulations of the entire Soviet state and of the personalities—or the computers—in power there. Man, in short, is coming to depend on thinking machines to make decisions that involve his vital interests and even his survival as a species. What guarantee do we base that in making these decisions the machines will always consider our best interests? There is no guarantee unless we provide it, says Minsky, and it will not be easy to provide—after all, man has not been able to guarantee that his own decisions are made in his own best interests. Any supercomputer could be programmed to test important decisions for their value to human beings, but such a computer, being autonomous, could also presumably write a program that countermanded these “ethical” instructions. There need be no question of computer malice here, merely a matter of computer creativity overcoming external restraints."

gizmo
0 replies
9h52m

But that just, you know, scratches the surface. You probably develop and deploy software using a half dozen different package managers with varying degrees of accommodation for operating against private, internal repositories.

That's non-ironically the problem. Current software culture creates "secure software" with a 200-million-line-of-code attack surface and then acts surprised when it blows up spectacularly. We do this because there is effectively no liability for software vendors or for their customers. What software security vendors sell is regulatory compliance, not security.

eqqn
0 replies
8h18m

"Don't worry, the software in question seems to have fallen out of favor and cannot hurt you."

It may not be the software in question, but a proprietary snowflake entitlement-management system, full of black boxes and proprietary voodoo, with no disaster recovery capacity, that would have been considered tech debt a decade ago... disgracefully came into life in the year 2021. It did not gracefully recover from clownstrike, to say the least.

djha-skin
0 replies
4h25m

If you are operating a private network, your internal services probably don't have TLS certificates signed by a popular CA that is in root programs. You will spend many valuable hours of your life trying to remember the default password for the JRE's special private trust store and discovering all of the other things that have special private trust stores, even though your operating system provides a perfectly reasonable trust store that is relatively easy to manage, because of Reasons. You will discover that in some tech stacks this is consistent but in others it depends on what libraries you use.

Oof, I feel this one. I tried to get IntelliJ's JRE trust store to understand that there was a new certificate for Zscaler that it had to use. There were two or three different JDKs to choose from; all of their trust stores were given the new certificate, and it still didn't work, and we didn't know why.
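
For anyone in the same boat, the usual chore looks roughly like this: import the proxy's root cert into the cacerts file of every JDK you can find. A sketch, where the paths and the JDK list are examples and "changeit" is only the stock default store password (IntelliJ also bundles its own runtime with its own cacerts, which is easy to miss):

    import subprocess
    from pathlib import Path

    CA_FILE = "zscaler-root.pem"                      # example path to the proxy's root cert
    JDK_HOMES = [                                     # example list: every JDK the toolchain bundles
        Path("/usr/lib/jvm/java-17-openjdk"),
        Path.home() / ".jdks" / "corretto-11",
    ]

    for jdk in JDK_HOMES:
        cacerts = jdk / "lib" / "security" / "cacerts"
        if not cacerts.exists():                      # older JDK 8 layout
            cacerts = jdk / "jre" / "lib" / "security" / "cacerts"
        subprocess.run(
            [str(jdk / "bin" / "keytool"), "-importcert", "-noprompt",
             "-alias", "corp-proxy-root",
             "-file", CA_FILE,
             "-keystore", str(cacerts),
             "-storepass", "changeit"],               # stock default; yours may differ
            check=False,                              # keep going even if one JDK fails
        )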

RajT88
0 replies
2h53m

My big takeaway is not that "all these systems shouldn't be connected to the internet"; it's a few other things:

1. These systems shouldn't allow outbound network flows. That will stop all auto-updates, which you can then manage via internal distribution channels.

2. Even without that, you can disable auto-updates on many enterprise software products - Windows notably, but also CrowdStrike itself. I heard about CS customers disabling auto-update and doing manual rollouts who were saved by this practice.

3. Tacking on to number 2: a gradual rollout of updates you've done some smoke testing on, just in case. Again, I heard of CS customers who did a gradual rollout and managed to have only a fraction of their machines impacted. (A rough sketch of that kind of rollout gate follows below.)
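
A sketch of such a rollout gate, with made-up ring assignments and bake times; this is scheduling logic you would run yourself, not a feature of any particular vendor's console:

    from datetime import datetime, timedelta, timezone

    # ring -> hours an update must have "baked" before this ring takes it
    BAKE_HOURS = {0: 0, 1: 24, 2: 72}

    def update_allowed(ring, released_at, now=None):
        now = now or datetime.now(timezone.utc)
        return now - released_at >= timedelta(hours=BAKE_HOURS[ring])

    # Example: an update released 30 hours ago reaches rings 0 and 1, not yet ring 2.
    released = datetime.now(timezone.utc) - timedelta(hours=30)
    print([r for r in BAKE_HOURS if update_allowed(r, released)])   # -> [0, 1]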

1970-01-01
0 replies
1h4m

One way to see how she is right is by asking how many layers of 'disconnect from the Internet' you need. Are you expecting a firewall rule of deny-all? Closing all ports on the hosts? Ripping away the TCP/IP stack? Where do you expect the line of success to be? Remember, all traffic is routable.