The six dumbest ideas in computer security (2005)

lobsang
91 replies
2d5h

Maybe I missed it, but I was surprised there was no mention of passwords.

Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.

The first two have obvious consequences (people writing passwords down, reusing the same passwords, adding a 1), leading to the third, which has horrible/confusing UX (no, I don't want to have my phone/random token generator on me any time I try to do something) and defaults to "passwords" anyway.

Please just let me choose a password of greater than X length, containing or not containing any characters I choose. That way I can actually remember it when I'm not using my phone/computer, when I'm in a foreign country, etc.

temporallobe
34 replies
2d3h

I’ve been preaching this message for many years now. For example, password generators basically make keys that can’t be remembered, which has led to the advent of password managers, all protected by a single password. Your single point of failure is now just ONE password, and the consequence is that an attacker who gets it has access to all of your passwords.

The n-tries lockout rule is much more effective anyway, as it breaks the brute-force attack vector in most cases. I am not a cybersecurity expert, so perhaps there are cases where high-complexity, long passwords may make a difference.
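
A minimal sketch of such an n-tries lockout, assuming an in-memory attempt table and made-up limits (a real deployment would persist this state and rate-limit it, so the mechanism can't itself be abused for denial of service):

    import time

    MAX_TRIES = 5            # hypothetical limit
    LOCK_SECONDS = 15 * 60   # hypothetical lockout window

    failures = {}  # username -> (failure count, time of first failure)

    def is_locked(user):
        count, since = failures.get(user, (0, 0.0))
        return count >= MAX_TRIES and time.time() - since < LOCK_SECONDS

    def record_failure(user):
        count, since = failures.get(user, (0, time.time()))
        if time.time() - since > LOCK_SECONDS:
            count, since = 0, time.time()  # stale window, start counting over
        failures[user] = (count + 1, since)

    def record_success(user):
        failures.pop(user, None)  # reset the counter on a good login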

Not to mention MFA makes most of this moot anyway.

nouveaux
15 replies
2d1h

Most of us can't remember more than one password. This means that if one site is compromised, then the attacker now has access to multiple sites. A password manager mitigates this issue.

userbinator
7 replies
1d22h

Vary the password per site based on your own algorithm.

jay_kyburz
5 replies
1d21h

AKA, put the name of the site in the password :)

userbinator
2 replies
1d20h

Not necessarily, but just a pattern that only you would likely remember.

viraptor
0 replies
1d14h

You need a pattern that only you recognise/understand, not just remember. It takes only one leak of your password for service FooBar, looking like "f....b", for an attacker to know what to try on other sites. Patterns that are easy to remember are mostly easy to understand.

ArnoVW
0 replies
1d11h

With LLMs, that sort of approach can be attacked at scale.

Sohcahtoa82
1 replies
1d

"MyPasswordIsSecureDespiteNotBeingComplexBecauseItIsLong_BobsForum" is great until Bob's Forum gets hacked and it turns out that they were storing your password in plain text and your password of "MyPasswordIsSecureDespiteNotBeingComplexBecauseItIsLong_Google" becomes easily guessed.

zzo38computer
0 replies
1d

One way to mitigate such a problem is to use the hash of this text as the password, instead of using the text directly.
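
A sketch of that idea in Python, with a made-up iteration count; note the derived string may contain characters some sites reject, which is its own failure mode complained about elsewhere in this thread:

    import base64
    import hashlib

    def site_password(master, site, length=20):
        # Stretch master secret + site name, so a plaintext leak from one
        # site doesn't reveal the pattern behind all the others.
        key = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 200_000)
        return base64.b85encode(key).decode()[:length]

    print(site_password("correct horse battery staple", "bobsforum.example"))

The master secret is then effectively the password to your "manager", as the reply below points out.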

tshaddox
0 replies
1d3h

That algorithm becomes analogous to the password to your password manager.

soupbowl
3 replies
1d13h

Most people can surely remember beyond one password.

defrost
0 replies
1d13h

Not to mention they're like underpants, you can use the same one forwards, backwards, inside out, and inside out backwards.

chipsrafferty
0 replies
1d13h

Surely not more than 1 or 2

bende511
0 replies
1d13h

They can remember O(1) passwords, but they need O(n) passwords

cardanome
2 replies
1d20h

People used to memorize the phone numbers of all important family members and close friends without much trouble. Anyone without a serious disability should have no trouble memorizing multiple passwords.

Sure, I do use password managers for random sites and services, but I probably have a low double-digit number of passwords memorized for the stuff that matters. Especially for stuff that I want to be able to access in an emergency when my phone/laptop gets stolen.

watwut
0 replies
1d9h

They did not. They had papers with all those numbers written down next to their landline phones. They also had little notebooks they carried everywhere with those numbers written down. You could buy those little notebooks in any store, and they fit into a pocket.

Moreover, those numbers did not change for years and years. Unlike passwords, which change, like, every 3 months.

542354234235
0 replies
1d5h

People used to memorize a few phone numbers, likely fewer than 10, and used notebooks made specifically for writing down phone numbers to keep track of the rest.

Phone numbers of the people you called the most (the 10 you memorized) were overwhelmingly likely to be local numbers, so you were only memorizing a (3-number chunk) + (4-number chunk). Password rules are all over the place. Memorizing numbers, letters, whole words, the capitalization of those letters and words, and special characters, all far longer than ye olde timey phone numbers, is orders of magnitude more difficult.

I have over 100 passwords in my password manager. They are all unique, so if any one is compromised, it is contained. My password manager is protected by strong 2FA, so someone would have to physically interact with my property to gain access. In the real world, there is no scenario where memorizing all your passwords is more secure.

borski
15 replies
2d1h

so your single point of failure is now just ONE password, the consequences of which would be that an attacker would have access to all of your passwords.

Most managers have 2FA, or an offline key, to prevent this issue, and encrypt your passwords at rest so that without that key (and the password) the database is useless.
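
Roughly how that at-rest encryption is keyed, sketched with Python's cryptography package (illustrative only: real managers typically use a memory-hard KDF such as Argon2, and the salt and iteration choices here are assumptions):

    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def open_vault(master_password, salt):
        # Derive the vault key from the master password; without both the
        # password and the salt, the encrypted database is just noise.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return Fernet(base64.urlsafe_b64encode(kdf.derive(master_password.encode())))

    salt = os.urandom(16)                         # stored next to the vault file
    vault = open_vault("hunter2", salt)
    blob = vault.encrypt(b"example.com: s3cret")  # what actually lands on disk
    print(vault.decrypt(blob))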

nottorp
14 replies
2d

and encrypt your passwords at rest

I haven't turned off my desktop this year. How does encryption at rest help?

doubled112
9 replies
2d

My password manager locks when I lock my screen. You can configure it to lock after some time.

The database is encrypted at rest.

necovek
8 replies
1d23h

Locking is not sufficient: it would need to overwrite the memory where passwords were decrypted to. With virtual memory, this becomes harder.

necovek
2 replies
1d7h

On Unix-like systems, KeePass 2.x uses ChaCha20, because Mono does not provide any effective memory protection method.

So only Windows seems to use secure memory protection.

necovek
0 replies
1d

Ok, we seem to be moving the goalposts a bit.

My point is that you need to read up on it to ensure the implementation of memory handling for your password manager is really safe. As you demonstrate yourself, KeePass has different clients with different memory protection profiles which also depends on the system.

compootr
1 replies
1d21h

What's sufficient depends on your threat model.

Normal dude in a secure office? An auto-locking password manager would suffice.

Someone who should be concerned with passwords in memory is someone who believes another party has full physical access to their computer (and can, say, freeze RAM in nitrogen to extract passwords).

My largest concern would be an adversary snatching my phone while my password manager was actively open.

necovek
0 replies
23h56m

Locking a password manager and your computer is certainly good enough in many cases. But gaining access to memory might not need the sophistication of using nitrogen (see eg https://en.m.wikipedia.org/wiki/DMA_attack).

freeone3000
0 replies
1d20h

But still not particularly hard. mmap has a MAP_FIXED flag for this particular reason: overwrite the arena you're decrypting to, and you should be set.
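
In a garbage-collected language you don't control the pages, but the best-effort version of "overwrite the arena" looks something like this sketch (CPython may already have copied the secret elsewhere, so treat it as damage reduction, not a guarantee):

    import ctypes

    def wipe(buf):
        # Zero the bytearray in place so the decrypted secret doesn't
        # linger in memory after the manager locks.
        ctypes.memset((ctypes.c_char * len(buf)).from_buffer(buf), 0, len(buf))

    secret = bytearray(b"decrypted master key")
    wipe(secret)
    assert secret == bytearray(len(secret))  # all zero bytes now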

marshray
3 replies
1d22h

When your old hard drive turns up on ebay.

nottorp
2 replies
1d22h

It's not safe to sell SSDs, is it?

And even if it were, who would buy a used SSD with an unknown amount of its durability gone?

ziml77
0 replies
1d21h

If the data was always encrypted, then simply discarding the keys effectively means the drive is left filled with random data. Also, NVMe drives can be sent the sanitize command, which can erase/overwrite the data across the entire physical drive rather than just what's mapped into the logical view. I believe there are SATA commands to perform similar actions.

justsomehnguy
0 replies
1d11h

It's not safe to sell SSDs, is it?

Bitlocker (or anything comparable) makes it safe, or ATA Secure Erase if you can issue it (not usable for the system drive most of the time) and check it afterwards.

And even if it were, who would buy a used SSD with an unknown amount of its durability gone?

It isn't worth it for a $30 drive, but for the multi-TB ones it's quite common, especially the server-grade ones (look for the PM1723/PM1733).

wruza
0 replies
1d13h

My Bitwarden plugin locks after a few minutes of inactivity. New installations are protected by TOTP. So one would have to be physically at one of my devices within a few minutes after I leave, even with the password. This reduces the pool of possible attackers to a few people I have to trust anyway. Also, I can lock/log out manually if the situation suggests it. Or not log in at all and instead type the password from my phone screen.

I understand the conceptual risk of storing everything behind a single “door”. That’s not ideal. But in practice, circumstances force you to create passwords, expose passwords, reset passwords, so you cannot remember them all. You either write them down (where? how secure?) or resort to having only a few “that you usually use”.

Password managers solve the “where? how secure?” part. They don’t solve security, they help you to not do stupid things under pressure.

unethical_ban
0 replies
1d14h

The one password and the app that uses it are more secure than most other applications. Lockout is just another DoS vector if a bad actor knows usernames.

I love proton pass.

II2II
29 replies
1d23h

Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.

I suspect that rotating passwords was a good idea at the time. There were some pretty poor security practices several decades ago, like sending passwords as clear text, which took decades to resolve. There are also people who like to share passwords like candy. I'm not talking about sharing passwords to a streaming service you subscribe to; I'm talking about sharing access to critical resources with colleagues within an organization. I mean, it's still pretty bad, which is why I disagree with them dismissing end-user education. Sure, some stuff can be resolved via technical means, and they gave examples of that. Yet the social problems (e.g., password sharing) are rarely solvable via technical means.

Jerrrrrrry
17 replies
1d22h

  >I suspect that rotating passwords was a good idea at the time.
yes, when all password hashes were available to all users, and therefore had an expected bruteforce/expiration date.

It is just another evolutionary artifact from a developing technology, compounded by messy humans.

Repeated truisms, especially in compsci, can be dangerous.

NIST has finally understood that complex password requirements decrease security, because nobody is attacking the entropy space - they are attacking the post-it note/notepad text file instead.

This is actually a good example of an opposite case of Chesterton’s Fence

https://fs.blog/chestertons-fence/

marshray
13 replies
1d22h

It's not crazy to want a system to be designed such that it tends to converge to a secure state over time. We still have expiration dates on ID and credit cards and https certificates.

The advantages just didn't outweigh the disadvantages in this scenario.

Jerrrrrrry
10 replies
1d21h

Apples to oranges.

Usernames are public now.

Back then, your username was public, and your password was assumed cracked/public, within a designated time-frame.

Your analogy would hold if, when your cert expires, everyone got to spoof it consequence-free.

freeone3000
9 replies
1d20h

The cert can still be broken. The signatures are difficult, but not impossible, to break: it can and has been done with much older certificates, which means it will likely be doable for current certificates in a few years. In addition, certificate rotation allows for mandatory algorithm updates and certificate transparency logs. CT itself has exposed a few actors breaking the CA/B rules by backdating certificates with weaker encryption standards.

Certificate expiration, and cryptographic key rotation in general, works and is useful.

bawolff
6 replies
1d17h

which means it will likely be doable for current certificates in a few years

It is extremely unlikely that a modern certificate will be broken within a time horizon of a few years through a cryptographic break.

All systems eventually fail, but I expect it will be several decades at the earliest before a modern certificate breaks from a crypto attack.

Keep in mind that MD5 started to be warned against in 1996. It wasn't until 2012 that a malicious attack exploited MD5's weakness. That is 16 years from warning to attack. At this stage we don't even know of any weaknesses in currently used crypto (except quantum stuff).

Rotating certificates is more about guarding against incorrectly issued and compromised certificates.

j16sdiz
5 replies
1d16h

Much modern crypto can be broken by nonce reuse.

Rotating certificates guards against bad PRNGs and the birthday paradox.

bawolff
4 replies
1d11h

I disagree. I don't think rotating certificates would help against birthday attacks or a bad PRNG.

Tbh, I have no idea which part you are attacking with the birthday attack in this specific context. It doesn't seem particularly relevant.

(At the risk of saying something stupid) - I was under the impression RSA does not use nonces, so I don't see how that is relevant for an RSA cert.

For an ECDSA cert, nonce reuse is pretty catastrophic. I fail to see how short-lived certs help, since the old certs don't magically disappear; they still exist and can be used in attacks even after being rotated.
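
For the curious, "pretty catastrophic" is no exaggeration: two ECDSA signatures (r, s1) and (r, s2) over message hashes z1 and z2 that share a nonce leak the private key with a couple of modular inversions. A sketch, for a curve of order n:

    def recover_private_key(r, s1, z1, s2, z2, n):
        # Same nonce k means same r. The signature equations
        #   s1 = (z1 + r*d) / k   and   s2 = (z2 + r*d) / k   (mod n)
        # differ only in z, so subtracting them isolates k, then d.
        k = (z1 - z2) * pow(s1 - s2, -1, n) % n   # recover the nonce
        return (s1 * k - z1) * pow(r, -1, n) % n  # recover the private key d

Which reinforces the point above: rotation doesn't help here, because the leak comes from signatures that already exist.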

marshray
1 replies
18h12m

If properly generated, even the smallest RSA key sizes used in practice are still safe from birthday collisions.

But there have been several high-profile cases of bad RNGs generating multiple certs with RSA keys that had common factors. I think if you were put at risk by such a broken RNG, frequently re-generating your certs would tend to make things worse, not better.

Jerrrrrrry
0 replies
16h42m

Don't be nice, or do it thrice - hash input twice.

freeone3000
1 replies
1d2h

Certs should be checked against a CRL and CT for revocation, and expired certs should never be accepted, for this reason among others.

bawolff
0 replies
17h54m

CT isn't used for revocation. CRLs aren't really a thing in practice. Refusing to accept expired certs is important for other reasons, but it won't save you from a reused ECDSA nonce.

marshray
0 replies
22h45m

Crypto breaks are a concern for sure, but typically the more short-term concern is server compromise. Cert revocation is not reliably checked by all clients, and sites may not even know to revoke it.

So it's essential that if/when a bad guy pops a single server that they don't get a secret that allows them to conduct further attacks against the site for some indefinite period into the future.

watwut
0 replies
1d9h

You had 6 systems to log into, each with different password requirements. You were not supposed to share a password between systems. Each system forced you to change the password on a different schedule. And IT acted angry when you, predictably, forgot a password.

It did not converge to a secure state; it necessarily converged to everyone creating some predictable password system.

II2II
0 replies
1d18h

Expiration dates on passwords are probably a good idea, except that they encourage bad habits from the end user, since the expiration period is typically very short. For example: I don't have much of an issue with the 1-year period at one workplace, but I do have an issue with the 3-month period at another workplace. The other issue is that people have to manage many passwords. Heck, I worked at one place where each employee was supposed to access multiple systems and have a different password on each system. (Never mind all of the passwords they have to manage outside of the workplace.)

Contrast that to the other examples you provided. All of them are typically valid for several years. In two of the cases, people are managing a limited number of pieces of plastic.

n4r9
1 replies
1d8h

yes, when all password hashes were available to all users, and therefore had an expected bruteforce/expiration date.

Pretty much everyone I've spoken to candidly about rotating passwords has said that they use a basic change to derive the next password from the old one. For example, incrementing a number and/or a letter. If that is as common a practice as I suspect, then rotating passwords doesn't add much security. It just means that hackers have to go through a few common manipulation strategies after breaking the hash.
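
A sketch of what "a few common manipulation strategies" means in practice: given one cracked password, the candidates for the rotated one are trivial to enumerate (the rules here are illustrative; real cracking tools ship far larger rule sets):

    import re

    def likely_rotations(old):
        # The classic move: bump a trailing number, or bolt one on.
        m = re.match(r"^(.*?)(\d+)$", old)
        if m:
            stem, num = m.group(1), int(m.group(2))
            return [stem + str(num + i) for i in range(1, 4)]
        return [old + "1", old + "!", old + "2024"]

    print(likely_rotations("flamingmonkey1"))
    # ['flamingmonkey2', 'flamingmonkey3', 'flamingmonkey4']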

Jerrrrrrry
0 replies
20h46m

This is actually the third-order effect of itself, by itself.

Require frequent password changes, humans cheat, boom: your brute-force space just went from 1024 bits to 14, assuming you can onboard a red-team plant far enough to get the template for the default passwords.

If I know _bigcorp_ gives default credentials in the format [First Initial + Middle Initial + month_day], then not only can I piggyback on a trivially created IT/support ticket, I can also just guess that in 60, 90, 120 days your credentials are the same but for the month_day. Even if that's not correct, the search space is reduced by orders of magnitude.

nmadden
0 replies
1d11h

NIST has finally understood that complex password requirements decrease security, because nobody is attacking the entropy space - they are attacking the post-it note/notepad text file instead.

Actually, NIST provides a detailed rationale for their advice [1]. Attackers very much are attacking the entropy space (credential stuffing with cracked passwords is the #1 technique used in breaches). But password change and complexity rules are pointless precisely because they don’t increase the entropy of the passwords. From NIST:

As noted above, composition rules are commonly used in an attempt to increase the difficulty of guessing user-chosen passwords. Research has shown, however, that users respond in very predictable ways to the requirements imposed by composition rules [Policies]. For example, a user that might have chosen “password” as their password would be relatively likely to choose “Password1” if required to include an uppercase letter and a number, or “Password1!” if a symbol is also required.

[1]: https://pages.nist.gov/800-63-3/sp800-63b.html#appA

marshray
10 replies
1d22h

Much of the advice around passwords comes from time-sharing systems and predates the internet.

Rules like "don't write passwords down," "don't show them on the screen", and "change them every N days" all make a lot more sense if you're managing a bank branch's open-plan office with hardwired terminals.

01HNNWZ0MV43FF
9 replies
1d20h

It's funny, writing passwords down is excellent advice on today's Internet.

Physical security is easy, who's gonna see inside your purse? How often does that get stolen? Phones and laptops are high-value targets for thieves, and if they're online there could be a vuln. Paper doesn't have RCE.

(That said, I use KeePass, because it's nice to have it synced and encrypted. These days only my KeePass password is written down.)

II2II
8 replies
1d18h

I was going to say such advice does have its limits. Then I remembered something: even though my current credit card does have a chip, the design suggests that it is primarily intended as a record of a "user id" and "password" (e.g. large, easy-to-read numbers, rather than the embossed numbers intended to make impressions on carbon-copy forms, which typically became impossible to read with wear).

j16sdiz
7 replies
1d16h

Not exactly. Some transactions are cryptographically authenticated. "The algorithm" looks at those bits. Transactions with proper chip authentication are less likely to be flagged as fraud.

viraptor
6 replies
1d14h

Also, embossed numbers are not that common in countries outside the US. For quite a while now, the numbers themselves have also been disappearing from the front. (If you even use the physical card rather than your phone.)

bboygravity
5 replies
1d11h

I remember being mind-blown on my first trip to the US when a taxi driver took my card and literally carbon copied it manually (with a pencil and carbon copy booklet) on the spot.

I had been using my credit card for at least a decade (in Europe) and it had never occurred to me that the embossed letters had any function other than aesthetics.

And the cheques... jaw drop

Zanfa
2 replies
1d9h

I had a food delivery guy use one in the US 2012/2013. It was like seeing a native tribe perform their traditional dance. It still blows my mind that chip + "signature" is a thing in the US. What good is a random indiscernible scribble on a tiny resistive touch screen as far as proving anything?

staunton
1 replies
1d7h

It's quite illegal to fake signatures so it acts as a deterrent. I can't think of anything else...

542354234235
0 replies
1d5h

I could be wrong but I'm pretty sure it is also illegal to steal someone's credit card and use it. If you have already done that, I don't think the idea of scribbling illegally is going to warn anyone off. Chip+PIN is objectively far more secure. People used debit cards with swipe+PIN for decades just fine and chip+PIN is used in many other countries without an issue. It is just silly to keep using signature and acting like it does absolutely anything at all.

chgs
0 replies
1d10h

That was how things worked back in the '80s and '90s; online or chip systems were in place 20 years ago in Europe.

crngefest
7 replies
1d23h

Well, my experience working in the industry is that almost no company uses good security practices or goes beyond some outdated checklists - a huge number wants to rotate passwords, disallow/require special characters, lock out users after X attempts, or disallow users to choose a password they used previously (never understood that one).

I think the number of orgs that follow best practices from NIST etc is pretty low.

Dalewyn
2 replies
1d21h

lock out users after X attempts

Legitimate users usually aren't going to fail more than a couple times. If someone (or something) is repeatedly failing, lock that shit down so a sysadmin can take a look at leisure.

disallow users to choose a password they used previously (never understood that one)

It's so potentially compromised passwords from before don't come back into cycle now.

michaelt
0 replies
1d8h

> Legitimate users usually aren't going to fail more than a couple times.

Have your users authenticate to the wifi with a certificate that expires after 18 months, and you'll find users will reboot a dozen times or so, racking up authentication failures each time, before they call IT support.

marky1991_2
0 replies
1d12h

I fail all the time. Oops, forgot to change my keyboard layout back or 'is it flamingmonkey1, 2, or 3 this time?' (because I have to rotate it every N months and clearly I'm not going to keep generating new passwords that I have to remember, unless the security people really explain why, which they never do), or 'oops, capslock was on', or 'does this password prompt require special characters (is it flamingmonkey1!?) or does it ban them? (or worst of all 'is whatever validates passwords just broken mysteriously and I have to reset my password to fix it?')

There's so many reasons I get passwords wrong. (it doesn't help that work has 4 systems that all use different passwords, all with different requirements).

If you locked me out (without me being able to easily unlock myself), I would immediately consider this an even-more-hostile relationship than normal and would immediately respond in kind.

8xeh
1 replies
1d22h

It's not necessarily the organization's fault. In several companies that I've worked for (including government contractors) we are required to implement "certifications" of one kind or another to handle certain kinds of data, or to get some insurance, or to win some contract.

There's nothing inherently wrong with that, but many of these require dubious "checkbox security" procedures and practices.

Unfortunately, there's no point in arguing with an insurance company or a contract or a certification organization, certainly not when you're "just" the engineer, IT guy, or end user.

There's also little point in arguing with your boss about it either. "Hey boss, this security requirement is pointless because of technical reason X and Y." Boss: "We have to do it to get the million dollar contract. Besides, more security is better, right? What's the problem?"

funnybeam
0 replies
7h11m

I’ve had several companies, including cyber insurers, ask for specific password expiry policies, and when I’ve gone back to them explaining that we don’t expire passwords, referencing the NCSC and NIST advice, all of them have accepted that without argument.

As you say, these are largely box ticking exercises but you don’t have to accept the limited options they give you as long as you can justify your position

jay_kyburz
0 replies
1d21h

disallow users to choose a password they used previously

I think Epic Game Store hit me with that one the other day. Had to add a 1 to the end.

A common pattern for me is that I create an account at home, and make a new secure password.

Then one day I log in at work but don't have the password on me, so I reset it.

Then I try to log in again at home, don't have the password from work, so I try to reset it back to the password I have at home.

ivlad
0 replies
1d17h

disallow users to choose a password they used previously (never understood that one)

That’s because you never responded to an incident where a user changed their compromised password because they were forced to, only to change it back the next day because “it’s too hard to remember a new one”.

belinder
7 replies
2d4h

Minimum length is dumb too because people just append 1 until it fits

rekabis
2 replies
2d4h

I would love to see most drop-in/bolt-on authentication packages (such as DotNet’s Identity system) adopt “bitwise complexity” as the only rule: not based on length or content, only the mathematical complexity of the bits used. KeePass uses this as an estimate of password “goodness”, and it’s altered my entire view of how appropriate any one password can be.
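
A crude version of such an estimate, assuming the attacker brute-forces exactly the character classes the password uses (KeePass's real estimator is smarter, as a reply below notes: it also hunts for dictionary words and patterns):

    import math

    def naive_bits(password):
        # Charset-size model: length * log2(alphabet size). Overestimates
        # badly for dictionary words, hence the pattern-aware estimators.
        alphabet = 0
        if any(c.islower() for c in password): alphabet += 26
        if any(c.isupper() for c in password): alphabet += 26
        if any(c.isdigit() for c in password): alphabet += 10
        if any(not c.isalnum() for c in password): alphabet += 33
        return len(password) * math.log2(alphabet) if alphabet else 0.0

    print(naive_bits("Password1!"))  # ~66 "bits" for a trivially guessed password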

jamesfinlayson
0 replies
1d13h

I'm told that at work we're not allowed to have the same character appear three or more times consecutively in a password (I have never tried).

Terr_
0 replies
1d20h

IIRC the key point there is that it's contextual to whatever generation scheme you used--or at least what method you told it was used--and it assumes the attacker knows the generation scheme.

So "arugula" will score very badly in the context of a passphrase of English words, but scores better as a (supposedly) random assortment of lowercase letters, etc.

hunter2_
2 replies
2d4h

But when someone tries to attack such a password, as long as whatever the user devised isn't represented by an entry in the attack dictionary, the attack strategy falls back to brute force, at which point a repetition scheme is irrelevant to attack time. Granted, if I were creating a repetitive password to meet a length requirement without high mental load, I'd repeat a more interesting part over and over, not a single character.

borski
1 replies
2d1h

Sure. But most people add “111111” or “123456” to the end. That’s why it’s on top of every password list.

hunter2_
0 replies
1d13h

If cracking techniques try those as suffixes concatenated onto short arbitrary stems, which I think is the minimum many people would do, that would be worrisome indeed.

NeoTar
0 replies
1d4h

Undisclosed minimum length is particularly egregious.

It's very frustrating when you've got a secure system and you spend a few minutes thinking up a great, memorable, secure password, then realize that it has too few (or worse, too many!) characters.

Even worse when the length requirements are incompatible with your password generation tool.

red_admiral
1 replies
1d22h

I was going to say passwords too ... but now I think passkeys would be a better candidate for dumbest ideas. For the average user, I expect they will cause no end of confusion.

cqqxo4zV46cp
0 replies
1d18h

That’s just recency bias.

uconnectlol
0 replies
1d18h

Password policies are a joke since you use 5 websites and they will have 5 policies.

1. A bank etc. will not allow special characters, because that's a "hacking attempt". So Firefox's password generator, for example, won't work. The user works around this by typing in suckmyDICK123!! and his password still never gets hacked, because there usually isn't enough brute-force throughput even with 1000 proxies, or you'll just get your account locked forever once someone attempts to log into it 5 times and those 1000 IPs only get between 0.5-3 tries each with today's snakeoil appliances on the network. There's also the fact that most people already know by now that "bots will try your passwords at a superhuman rate". Then there's also the fact that not even one of these password policies stops users from choosing bad passwords. This is simply a case of "responsible" people trying, and wasting tons of time, to solve reality. These people who claim to know better than you have not even thought this out and have definitely not thought about much at all.

2. For everything that isn't your one or two sensitive things, like the bank, you want to use the same password. For example, the 80 games you played for one minute that obnoxiously require making an account (for the bullshit non-game aspects of the game, such as in-game item trading). Most have custom GUIs too, and you can't paste into them. You could use a password manager for these, but why bother. You just use the same pass for all of them.

izacus
0 replies
1d23h

Based on the type of this rant - all security focused, with little thought about the usability of the systems they're talking about - the author would probably be one of those people who mandate password rotation every week with a minimum of 18 characters to "design systems safely by default". Oh, and prevent keyboards from working because they can infect computers via USB or something.

(Yes, I'm commenting on the weird idea about not allowing processes to run without asking - we're now learning from mobile OSes that this isn't practically feasible for a universally useful OS, the kind that drove most of computing growth in the last 30 years.)

ivlad
0 replies
1d17h

Dear user with password “password11111111111”, logging in from a random computer with two password stealers active, from a foreign country, and not willing to use MFA: the incident response team will thank you and prepare a warm welcome when you are back in the office.

Honestly, this comment shows that user education does not work.

hot_gril
0 replies
1d1h

I don't get how it took until the present day for randomly generated asymmetric keys to become somewhat commonly used on the Web etc. in the form of "passkeys" (a confusing name, btw). Password rotation and other rules never worked. Some sites still require a capital letter, number, and symbol, as if 99% of people aren't going to transform "cheese" -> "Cheese1!".
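
The underlying idea is plain challenge/response with an asymmetric keypair; passkeys add ceremony around it, but the shape is roughly this sketch (using Ed25519 from Python's cryptography package). The site only ever stores a public key, so a site breach leaks nothing reusable:

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # stays on your device
    public_key = private_key.public_key()       # all the site ever stores

    challenge = os.urandom(32)               # fresh random bytes from the site
    signature = private_key.sign(challenge)  # proves possession of the key
    public_key.verify(signature, challenge)  # raises InvalidSignature if forged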

cuu508
0 replies
1d10h

That way I can actually remember it when I'm not using my phone/computer, in a foreign country, etc.

I'd be very wary of logging into accounts on any computer/phone other than my own.

JJMcJ
0 replies
1d19h

people writing passwords down

Which is better: a strong password written down (or better yet, stored in a secure password manager), or a weak password committed to memory?

As usual, XKCD has something to say about it: https://xkcd.com/936/

munchausen42
39 replies
1d10h

About 'Default Deny': 'It's not much harder to do than 'Default Permit,' but you'll sleep much better at night.'

Great that you, the IT security person, sleep much better at night. Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department. And, btw, the more annoyed people are, the more likely they are to use workarounds that undermine your IT security concept (e.g., think of the typical 'password1', 'password2', 'password3' passwords when you force users to change their password every month).

So no, good IT security does not just mean unplugging the network cable. Good IT security is invisible and unobtrusive for your users, like magic :)

spogbiper
9 replies
1d3h

Good IT security is invisible and unobtrusive for your users, like magic

Why is this a standard for "good" IT security but not any other security domain? Would you say good airport security must be invisible and magic? Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?

Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.

graemep
6 replies
1d3h

Would you say good airport security must be invisible and magic?

Very possibly. IMO a lot of intrusive airport security is security theatre. Things like intelligence work do a lot more. Other measures we don't notice contribute too, I suspect.

The thing about intrusive security is that attackers know about it and can plan around it.

Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?

No, but they are simple and easy to use, and have rarely stopped me from doing anything I needed to.

Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.

Agree entirely.

Jcowell
5 replies
1d3h

I never quite understood the security theater thing. Isn’t the fact that at each airport you will be scanned and possibly frisked a deterrent? You can't measure the attacks that didn't occur, so the only way to know if it works is to observe a timeline where it doesn’t exist.

graemep
4 replies
1d1h

For one thing the rules adopted vary and different countries do very different things. It struck me once on a flight where at one end liquids were restricted, but shoes were not checked, and at the other we had to take our shoes off but there were no restrictions on liquids.

So an attacker who wanted to use a shoe bomb would do it at one end, and one who wanted to use liquids would do it at the other.

There are also some very weird things, like rules against taking things that look vaguely like weapons. An example in the UK was aftershave bottles that were banned - does this look dangerous to you? https://www.fragrancenet.com/fragrances?f=b-spicebomb

Then there are things you can buy from shops after security that are not allowed if you bring them in before (some sharp things). Then things that are minimal threats (has anyone ever managed to hijack a plane with a small pen knife? I would laugh at someone trying to carjack with one).

the only way to know if it works is to observe a timeline where it doesn’t exist

Absolute proof maybe, but precautions need to be common sense and evidence based.

graemep
1 replies
9h41m

A very small grenade, made of glass, and filled with liquid?

It looks like a grenade in the same way a doll looks like a human being.

Izkata
0 replies
4h28m

In full color vision sure, but not to the machines used to scan the insides of bags. You pretty much just get a silhouette.

nvy
0 replies
22h24m

has anyone ever managed to hijack a plane with small pen knife?

Well, the 9/11 hijackers used box cutters. Might as well be the same thing.

w10-1
0 replies
22h34m

It is the standard for all security domains - police, army, etc.

I would reword it to say that security should work for the oblivious user, and we should not depend on good user behavior (or fail to defend against malicious or negligent behavior).

I would still say the ideal is for the security interface to prevent problems - like having doors so we don't fall out of cars, or ABS to correct brake inputs.

pc86
0 replies
1d3h

If you have two security models that provide identical actual security, and one of them is invisible to the user and the other one is outright user-hostile like the TSA, yes of course the invisible one is better.

pif
8 replies
1d6h

Good IT security is invisible and unobtrusive for your users

I wish more IT administrators would use seat belts and airbags as models of security: they impose a tiny, minor annoyance in everyday usage of your car, but their presence is gold when an accident happens.

Instead, most of them consider it normal to prevent you from working in order to hide their ignorance and lack of professionalism.

thyrsus
5 replies
1d4h

Wise IT admins >know< they are ignorant and design for that. Before an application gets deployed, its requirements need to be learned - and the users rarely know what those requirements are, so cycles of information gathering and specification of permitted behavior ensue. You do not declare the application ready until that process converges, and the business knows and accepts the risks required to operate the application. Few end users know what a CVE is, much less have mitigated them.

I also note that seatbelts and airbags have undergone decades of engineering refinement; give that time to your admins, and your experience will be equally frictionless. Don't expect it to be done as soon as the download finishes.

pif
4 replies
1d1h

I think you are missing the main point of my analogy: seatbelts and airbags work on damage mitigation, while the kind of security that bothers users so much is the one focused on prevention.

Especially in IT, where lives are not at stake, having a good enough mitigation strategy would help enormously in relaxing on the prevention side.

egberts1
1 replies
23h10m

Damage? A pinhole is just as damaging to a corporation: it may result in leakage of password files, sales projections, customer records, and confidential data, and in a mass encampment of external hackers infesting your company's entire networked infrastructure.

account42
0 replies
7h15m

Slowing down everyone is also incredibly damaging to the corporation though. And as others have pointed out might even be counterproductive as workers look for workarounds to route around your restrictions which may come with bigger security issues than you started out with.

ZeoVII
1 replies
23h13m

Depending on your sector, I would argue that in IT, lives can be at stake. Imagine the IT department of a hospital, a power company, or other vital infrastructure.

Most mitigation tends to be in the form of backup and disaster recovery plans, which, when well implemented and executed, can restore everything in less than a day.

The issue is that some threats can lurk for weeks, if not months, before triggering. In a car analogy, it would be like someone sabotaging your airbag and cutting your seatbelt without you knowing. Preventing a crash in the first place is far more effective and way less traumatic. Even if the mitigation strategy allows you to survive the crash, the car could still be totaled. The reputation loss you suffer from having your database breached can be catastrophic.

Izkata
0 replies
17h44m

Prevention in the car analogy would be like adding a breathalyzer and not allowing it to start if the person in the driver's seat fails.

It's been a gimmick idea for decades, but I'm not aware of any car that actually comes with that as a feature. Kinda think there's a reason, given how much friction it would add. I just did a quick search to double-check and found there are add-ons for this, but without even searching for it, most of the results were about how to bypass them.

horsawlarway
1 replies
1d1h

So much this.

There is a default and unremovable contention between usability and security.

If you are "totally safe" then you are also "utterly useless". Period.

I really, really wish most security folks understood and respected the following idea:

"A ship in harbor is safe, but that is not what ships are built for".

Good security is a trade. Always. You must understand when and where you settle based on what you're trying to do.

trey-jones
0 replies
1d1h

Really well put and I always tell people this when talking about security. It's a sliding scale, and if you want your software to be "good" it can't be at either extreme.

eadmund
5 replies
1d4h

Company IT exists to serve the company. It should not cost more than it benefits.

There’s a balancing act. On the one hand, you don’t want a one-week turnaround to open a port; on the other you don’t want people running webservers on their company desktops with proprietary plans coincidentally sitting on them.

graemep
2 replies
1d3h

The biggest problem I can see with default deny is that it makes it far harder to get uptake for new protocols once you get "we only allow ports 80 and 443 through the firewall".

account42
1 replies
7h6m

Which also makes the security benefit moot, as now all malware also knows to use ports 80 and 443.

graemep
0 replies
23m

Yes, I think blocking outgoing connections by port is not the most useful approach, especially for default deny. Blocking incoming makes more sense, and should be default deny with allow for specific ports on specific servers.

cjalmeida
0 replies
1d3h

One-week turnaround to open a port would be a dream in most large companies.

causal
0 replies
1d3h

The problem is that security making things difficult results in employees resorting to workarounds like running rogue webservers to get their jobs done.

If IT security's KPIs are only things like "number of breaches" without any KPIs like "employee satisfaction", security will deteriorate.

clwg
2 replies
1d7h

Good IT security isn't invisible; it's there to prevent people from deploying poorly designed applications that require unfettered open outbound access to the internet. It's there to champion MFA and work with stakeholders from the start of the process to ensure security from the outset.

Mostly, it's there to identify and mitigate risks for the business. Have you considered that all your applications are considered a liability and new ones that deviate from the norm need to be dealt with on a case by case basis?

darby_nine
0 replies
1d4h

I think the idea is that if you don't work with engineering or product, people will perceive you as friction rather than protection. Agreeing on processes to deploy new applications should satisfy both parties without restrictions being perceived as an unexpected problem.

RHSeeger
0 replies
1d3h

But it needs to be a balance. IT policy that costs tremendous amounts of time and resources just isn't viable. Decisions need to be made such that it's possible for people to do their work AND security concerns are addressed; and _both_ sides need to compromise some.

As a simplified example

- You have a client database that has confidential information

- You have some employees that _must_ be able to interact with the data in that database

- You don't want random programs installed on a computer <that has access to that database> to leak the information

You could lock down every computer in the company to not allow application installation. This would likely cause all kinds of problems getting work done.

You could lock down access to the database so nobody has access to it. This also causes all kinds of problems.

You could lock down access to the database to a very specific set of computers and lock down _those_ computers so additional applications cannot be installed on them. This provides something close to a complete lockdown, but with far less impact on the rest of the work.

Sure, it's a stupidly simple example, but it demonstrates the idea that compromises are necessary (for all participants).

cmiles74
1 replies
1d6h

I believe a "default deny" policy for security infrastructure around workstations is a good idea. When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.

That being said, in my opinion, application servers and other public facing infrastructure should definitely be working under a "default deny" policy. I'm having trouble thinking of situations where this wouldn't be the case.

hulitu
0 replies
1d1h

When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.

Many years ago, our company's billing system had a "Waiting for IT" state. They weren't happy.

Some things took _days_ to get fixed.

7bit
1 replies
1d7h

Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department

This is such an uninformed and ignorant opinion.

1. Permission concepts don't always involve IT. In fact, they can be designed by IT without ever involving IT again - such is the case in our company.

2. The privacy department sleeps much better knowing that GDPR violations require an extra, careful action, rather than being a default. Management sleeps better knowing that confidential projects need to be explicitly shared, instead of hoping nobody forgot to deny access for everybody first. Compliance sleeps better because of all of the above. And users know that data they create is private until explicitly shared.

3. Good IT security is not invisible. Entering a password is a visible step. Approving MFA requests is a visible step. Granting access to resources is a visible step. Teaching users how to identify spam and phishing is a visible step. Or teaching them about good passwords.

munchausen42
0 replies
1d2h

hm I don't think that passwords are an example of good IT security. There are much better options like physical tokens, biometric features, passkeys etc. that are less obtrusive and don't require the users to follow certain learned rules and behaviors.

If the security concept is based on educating and teaching people how to behave, it's prone to fail anyway, as there will always be that one uninformed and ignorant person like me who doesn't get the message. As soon as there is one big gaping hole in the wall, the whole fortress becomes useless (case in point: haveibeenpwned.com). Also, good luck teaching everyone in the company how to identify a personalized phishing message crafted by ChatGPT.

For the other two arguments: I don't see how "But we solved it in my company" and "Some other departments also have safety/security-related primary KPIs" justifies that IT security should be allowed to just air-gap the company if it serves these goals.

michaelcampbell
0 replies
6h1m

"If you're able to do your job, security/it/infosec/etc isn't doing theirs." Perhaps necessary at times, but true all too often.

manvillej
0 replies
1d3h

good IT security is invisible, allows me to do everything I need, protects us from every threat, costs nothing, and scales to every possible technology the business buys. /s

lencastre
0 replies
11h23m

That’s what I did with my firewall: all outbound traffic is default deny. Then, as the screaming began, I started opening the necessary ports to designated IPs here and there. Now the screaming is not so frequent. A minor hassle… the tricky one is DNS over HTTPS. That is a whack-a-mole if I ever saw one.

kabouseng
0 replies
1d6h

That's because IT security reports to the C-level, and their KPIs are concerned with security and vulnerabilities, but not with the performance or effectiveness of the personnel.

So every time, if there is a choice, security will be prioritized at the cost of personnel performance / effectiveness. And this is how big corporations become less and less effective to the point where the average employee rarely has a productive day.

delusional
0 replies
1d1h

Meanwhile, the rest of the company is super annoyed because nothing ever works

Who even cares if they're annoyed. The IT security gets to sleep at night, but the entire corporation might be operating illegally because they can't file the important compliance report because somebody fiddled with the firewall rules again.

There is so much more to enterprise security than IT security. Sometimes you don't open a port because "it's the right thing to do" as identified by some process. Sometimes you do it because the alternative RIGHT NOW is failing an audit.

cowboylowrez
0 replies
4h27m

The article is great, but reading some of the anti-security comments is really triggering for me.

TheRealDunkirk
0 replies
1d4h

A friend of mine has trouble running a very important vendor application for his department. It stopped working some time ago, so he opened a ticket with IT. It was so confusing to them that it got to the point where they allowed him to run Microsoft's packet capture on his machine. He followed their instructions and captured what was going on. Despite the capture, they were unable to get it working, so out of frustration he sent the capture to me.

Even though our laptops are really locked down, as a dev I get admin on my machine, and I have MSDN, so I downloaded Microsoft's tool, looked over the capture, and discovered that the application was a client/server implementation ON THE LOCAL MACHINE. The front end was talking over network ports to the back end, which then talked to the vendor's servers. I only recognized this because I had just undergone a lot of pain with my own development workflow after the company started doing "default deny," and it was f*king with me in several ways. Ways that, as you say, I found workarounds for, that they probably aren't aware of.

I told him what to tell IT, and how they could whitelist this application, but he's still having problems. Why am I being vague about the details here? It's not because of confidentiality, though that would apply. No, it's because my friend had been "working with IT" for over a year to get to this point, and THIS WAS TWO YEARS AGO, and I've forgotten a lot of the details. So, to say that it will take "3 extra rounds" is a bit of an understatement when IT starts doing "default deny," at least in legacy manufacturing companies.

dale_glass
19 replies
2d7h

"We're Not a Target" isn't a minor dumb. It's the standpoint of every non-technical person I've ever met. "All I do with my computer is to read cooking recipes and upload cat photos. Who'd want to break in? I'm boring."

The best way I found to change their mind is to make a car analogy. Who'd want to steal your car? Any criminal with a use for it. Why? Because any car is valuable in itself. It can be sold for money. It can be used as a getaway vehicle. It can be used to crash into a jewelry shop. It can be used for a joy ride. It can be used to transport drugs. It can be used to kill somebody.

A criminal stealing a car isn't hoping that there are Pentagon secrets in the glove box. They have a use for the car itself. In the same way, somebody breaking into your computer has uses for the computer itself. They won't say no to finding something valuable, but it's by no means a requirement.

jampekka
18 replies
2d6h

A major dumb is that security people think breaking in is the end of the world. For vast majority of users it's not, and it's a balance between usability and security.

I know it's rather easy to break through a glass window, but I still prefer to see outside. I know I could faff around with multiple locks for my bike, but I'd rather accept some risk of it being stolen for the convenience.

If there's something I really don't want to risk stolen, I can take it into a bank's vault. But I don't want to live in a vault.

dale_glass
8 replies
2d6h

A major dumb is that security people think breaking in is the end of the world. For vast majority of users it's not, and it's a balance between usability and security.

End of the world? No. But it's really, really bad.

When you get your stolen car back, problem over.

But your broken-into system should in most cases be considered forever tainted until fully reinstalled. You can't enumerate badness. That the antivirus got rid of one thing doesn't mean they didn't sneak in something it didn't find. You could still be a DoS node, a CSAM distributor, or a spam sender.

jampekka
5 replies
2d4h

But your broken-into system should in most cases be considered forever tainted until fully reinstalled.

Reinstalling an OS is not really, really bad. It's an inconvenience. Less so than e.g. having to get new cards after a lost wallet or getting a new car.

Security people don't seem to really assess the actual consequences of breaches. Just that they are "really really bad" and have to be protected against at all costs. Often the cost is literally an unusable system.

dave4420
1 replies
2d4h

Is reinstalling the OS enough?

Isn’t there malware around that can store itself in the BIOS or something, and survive an OS reinstall?

Andrex
0 replies
2d1h

It would need to be a zero-day (or close to it), which means nation-state level sophistication.

You can decide for yourself whether to include that in your personal threat analysis.

worik
0 replies
1d22h

Security people don't seem to really assess what are the actual consequences of breaches. Just that they are "really really bad" and

No

Security people are acutely aware of the consequences of a breach.

Look at the catastrophic consequences of the recent wave of ransomware attacks.

Lax security at all levels, victim blaming ("they clicked a link..."), and no consequences that I know of for those responsible for that bad design. Our comrades built those vulnerable systems.

marcosdumay
0 replies
1d23h

Reinstalling an OS is not really, really bad. It's an inconvenience.

Reinstalling an OS is not nearly enough. You have to reinstall all of them, without letting the "dirty" ones contaminate the clean part of your network; you have to re-obtain all of your binaries; and good luck trusting any local source code.

The way most places are organized today, getting computers infected is a potentially unfixable issue.

kazinator
0 replies
2d5h

When you get your stolen car back, problem over.

Not if it contains computers; either the original ones it had before being stolen, or some new, gifted ones you don't know about.

SoftTalker
0 replies
1d22h

When you get your stolen car back, problem over.

But your broken-into system should in most cases be considered forever tainted

Actually this is exactly how stolen cars work. A stolen car that is recovered will have a branded title from then on (at least it will if an insurance company wrote it off).

smokel
2 replies
2d6h

I'm afraid this is a false dichotomy.

People can use HTTPS now instead of HTTP, without degrading usability. This has taken a lot of people a lot of work, but everyone gets to enjoy better security. No need to lock and unlock every REST call as if it were a bicycle.

Also, a hacker will replace the broken glass within milliseconds, and you won't find out it was ever broken.

jampekka
0 replies
2d4h

It shouldn't be a dichotomy, but security zealots not caring about usability or putting the risks in context makes it such.

HTTPS by default is good, especially after Let's Encrypt. Before that it was not worth the hassle/cost most of the time.

E.g. forced MFA everywhere is not good.

Also, a hacker will replace the broken glass within milliseconds, and you won't find out it was ever broken.

This is very rare in practice for normal users. Again, risks in context please.

izacus
0 replies
1d23h

You're ignoring that HTTPS took decades to become the default, thanks to the massive work of a lot of security engineers who UNDERSTOOD that the work and process around certificates was too onerous and hard for users. It took them literally decades of work to get HTTPS cert issuance to such a low-cost process that everyone does it. It *really* cannot be overstated how important that work was.

Meanwhile, other security zealots were just happy to scream at users for not sending 20 forms and thousands of dollars to cert authorities.

Usability matters - and the author of this original rant seems to be one of those security people who don't understand why the systems they're guarding are useful, who uses them, and how they are used. That's the core security cancer still in the wild: security experts not understanding just how transparent security has to be, and that it's sometimes OK to have a less secure system if that means users won't do something worse.

dspillett
2 replies
2d5h

> I know it's rather easy to break through a glass window, but I still prefer to see outside.

Bad analogy. It is not that easy to break modern multi-layer glazing, and it is also a lot easier to break into a computer or account undetected than to break a window, staying unnoticed until it is time to let the user know (for a ransom attempt or other such scheme). Locking your doors is a much better analogy. You don't leave them unlocked in case you forget your keys, do you? That would be a much better analogy for choosing convenience over security in computing.

> I know I could faff with multiple locks for my bike, but I rather accept some risk for it to be stolen for the convenience.

Someone breaking into a computer or account isn't the same as them taking a single object. It is more akin to them getting into your home or office, or on a smaller scale a briefcase. They don't take an object, but they can collect information that will help in future phishing attacks against you and people you care about.

The intruder could also operate from the hacked resource to continue their attack on the wider Internet.

> A major dumb is that security people think breaking in is the end of the world.

The major dumb of thinking like this is that breaking in is often not the end of anything, it can be the start or continuation of a larger problem. Security people know this and state it all the time, but others often don't listen.

jampekka
1 replies
2d2h

The major dumb of thinking like this is that breaking in is often not the end of anything, it can be the start or continuation of a larger problem. Security people know this and state it all the time, but others often don't listen.

This is exactly the counterproductive attitude I criticized. I told you why others don't often listen, but you don't seem to listen to that.

Also, people do listen. They just don't agree.

dspillett
0 replies
9h33m

> Also, people do listen. They just don't agree.

Because the fallout can cause significant problems for others, people not agreeing that online security is relevant to them is like people not agreeing that traffic safety measures (seatbelts, speed limits) are relevant to them, and should IMO command no greater respect.

Maybe being a bit of a dick about it doesn't help much, but being nicer about it doesn't seem to help at all.

kazinator
1 replies
2d5h

but I still prefer to see outside

Steel bars?

dijit
0 replies
2d5h

I think this is why the analogy holds up.

Some situations definitely call for steel bars, some for having no windows at all.

But for you and me, windows are fine, because the value of being inside my apartment is not the same as being in a jeweller's or in a building with good sight-lines to something even more valuable -- and the value of having unrestricted windows is high for us.

gmuslera
0 replies
2d5h

The act of breaking in is not even the end of it. It is not a broken glass that you clearly see and just replace to forget about it. It may be the start of a process, and you don’t know what will happen down the road. But it won’t be something limited to the affected computer or phone.

billy99k
18 replies
2d4h

This is a mostly terrible 19-year-old list.

Here is an example:

"Your software and systems should be secure by design and should have been designed with flaw-handling in mind"

Translation: If we lived in a perfect world, everything would be secure from the start.

This will never happen, so we need to utilize the find and patch technique, which has worked well for the companies that actually patch the vulnerabilities that were found and learn from their mistakes for future coding practices.

The other problem is that most systems are not static. It's not "release a secure system and never update it again." Most applications/systems are updated frequently, which means new vulnerabilities will be introduced.

CookieCrisp
8 replies
2d4h

I agree; it's an example of how, if you say something dumb with enough confidence, a lot of people will think it's smart.

dartos
7 replies
2d4h

The CTO at my last company was like this.

In the same breath he talked about how he wanted to build this "pristine" system with safety and fault tolerance as priorities, and how he wanted to use raw pointers to shared memory to communicate between processes, both of which used multiple threads to read/write to that block of shared memory, because he didn't like how chatty message queues are.

He also didn’t want to use a ring buffer since he saw it as a kind of lock

marshray
4 replies
1d22h

I've had the CTO who was also a frustrated lock-free data structure kernel driver developer too.

Fun times.

dartos
3 replies
1d21h

I forgot to mention that we were building all this in C#, as mandated by Mr CTO.

He also couldn't decide between Windows Server and some RHEL or Debian flavor.

I doubt this guy even knew what a kernel driver was.

He very transparently just discounted anything he didn’t already understand. After poorly explaining why he didn’t like ring buffers, he said we should take inspiration from some system his friend made.

We started reading over the system and it all hinged on a “CircularBuffer” class which was a ring buffer implementation.

utensil4778
2 replies
1d21h

Okay, that would be a normal amount of bonkers thing to suggest in C or another language with real pointers.

But in C#, that is a batshit insane thing to suggest. I'm not even sure it's legal in C# to take a pointer to an arbitrary address outside of your memory. That's... That's just not how this works. That's not how any of this works!

neonsunset
1 replies
1d20h

It is legal to do so. C# pointers == C pointers; C# generics with struct arguments == Rust generics with struct (i.e. not Box<dyn Trait>) arguments, and are monomorphized in the same way.

All of the following works:

    // note: all of this requires an unsafe context; NativeMemory lives in System.Runtime.InteropServices
    byte* stack = stackalloc byte[128];
    byte* malloc = (byte*)NativeMemory.Alloc(128);
    byte[] array = new byte[128];
    fixed (byte* gcheap = array)
    {
        // work with pinned object memory
    }
Additionally, all of the above can be unified with (ReadOnly)Span<byte>:

    var stack = (stackalloc byte[128]); // Span<byte>
    var literal = "Hello, World"u8; // ReadOnlySpan<byte>
    var malloc = NativeMemory.Alloc(128); // void*
    var wrapped = new Span<byte>(malloc, 128);
    var gcheap = new byte[128].AsSpan(); // Span<byte>
Subsequently such a span of bytes (or any other T) can be passed to pretty much anything, e.g. int.Parse, Encoding.UTF8.GetString, socket.Send, RandomAccess.Write(fileHandle, buffer, offset), etc. It can also be sliced in a zero-cost way. Effectively, it is C#'s rendition of Rust's &[T]; C++ has pretty much the same thing and names it std::span<T> as well.

Note that (ReadOnly)Span<T> internally is `ref T _reference` and `int _length`. `ref T` is a so-called "byref", a special type of pointer the GC is aware of, so that if it happens to point to object memory, it will be updated should that object be relocated by the GC. At the same time, a byref can also point to any non-GC-owned memory like the stack or any unmanaged source (malloc, mmap, P/Invoke regular or reverse - think function pointers or C exports with AOT). This allows writing code that uses byref arithmetic, same as with pointers, but without having to pin the object, retaining the ability to implement algorithms that match hand-tuned C++ (e.g. with SIMD) while serving all sources of sequential data.

C# is a language with strong low-level capabilities :)

dartos
0 replies
1d13h

I was surprised to learn c# could do all that, honestly.

Though when you’re doing that much hacking, a lot of the security features and syntax of C# get in the way

SoftTalker
1 replies
1d22h

That sounds pretty deep in the weeds for a CTO. Was it a small company?

dartos
0 replies
1d21h

It was. I was employee number 10. The company had just started and was entirely bankrolled by that CTO.

The CTO sold a software company he bootstrapped in 2008 and afaik has been working as an exec since.

The CEO, a close friend of Mr CTO, said that the system was going to be Mr CTO’s career encore. (Read: they were very full of themselves)

The CIO quit 4 days before I started for, rumor has it, butting heads with the CTO.

Mr CTO ended up firing (with no warning) me and another dev who were vocal about his nonsense. (Out of 5 devs total)

A 3rd guy quit less than a month after.

That’s how my 2024 started

worik
1 replies
1d22h

This is a mostly terrible 19-year old list.

This is an excellent list that is two decades overdue, for some.

software and systems should be secure by design

That should be obvious. But nobody gets rich except by adding features, so this needs to be said over and over again

This will never happen, so we need to utilize the find and patch technique,

Oh my giddy GAD! It is up to *us* to make this happen. Us. The find and patch technique does not work. Secure by design does work. The article had some good examples

Most applications/systems are updated frequently, which means new vulnerabilities will be introduced.

That is only true when we are not allowed to do our jobs. When we are able to act like responsible professionals we can build secure software.

The flaw in the professional approach is how to get over the fact that features sell now, for cash, and building securely adds (a small amount of) cost for no visual benefit

I do not have a magic wand for that one. But we could look to the practices of civil engineers. Bridges do collapse, but they are not as unreliable as software

ang_cire
0 replies
1d21h

The flaw in the professional approach is how to get over the fact that features sell now, for cash, and building securely adds (a small amount of) cost for no visual benefit

Because Capitalism means management and shareholders only care about stuff that does sell now, for cash.

But we could look to the practices of civil engineers

If bridge-building projects were expected to produce profit, and indeed increasing profit over time, with civil engineers making new additions to the bridges to make them more exciting and profitable, they'd be in the same boat we are.

Pannoniae
1 replies
1d23h

*Translation: If we didn't just pile on dependencies upon dependencies, everything would be secure from the start.

Come on. The piss-poor security situation might have something to do with the fact that the vast majority of software is built upon dependencies the authors didn't even look at...

Making quality software seems to be a lost art now.

worik
0 replies
1d22h

Making quality software seems to be a lost art now

No it is not. Lost that is

Not utilised enough....

sulandor
0 replies
2d4h

Though, frequent updates mainly serve to hide unfit engineering practices and encourage unfit products.

The world is not static, but most things have patterns that need to be identified and handled, which takes time that you don't have if you sprint from quick-fix to quick-fix of your MVP.

notagainlol
0 replies
1d16h

I really didn't think "write secure software" would be controversial, but here we are. How is the nihilist defeatism going? I'll get back to you after I clean up the fallout from having my data leaked yet again this week.

msla
0 replies
1d22h

It's also outright stupid. For example, from the section about hacking:

"Timid people could become criminals."

This fully misunderstands hacking, criminality, and human nature, in that criminals go where the money is, you don't need to be a Big Burly Wrestler to point a gun at someone and get all of their money at the nearest ATM, and you don't need to be Snerd The Nerd to Know Computers. It's a mix of idiocy straight out of the stupidest 1980s comedy films.

Also:

"Remote computing freed criminals from the historic requirement of proximity to their crimes."

This is so blatantly stupid it barely bears refutation. What does this idiot think mail enables? We have Spanish Prisoner scams going back centuries, and that's the same scam as the one the 419 mugus are running.

Plus:

Anonymity and freedom from personal victim confrontation increased the emotional ease of crime, i.e., the victim was only an inanimate computer, not a real person or enterprise.

Yeah, criminals will defraud you (or, you know, beat the shit out of you and threaten to kill you if you don't empty your bank accounts) just as easily if they can see your great, big round face. It doesn't matter. They're criminals.

Finally, this:

Your software and systems should be secure by design and should have been designed with flaw-handling in mind.

"Just do it completely right the first time, idiot!" fails to be an actionable plan.

duskwuff
0 replies
1d21h

Steelmanning for a moment: I think what the author is trying to address is overly targeted "patches" to security vulnerabilities which fail to address the faulty design practices which led to the vulnerabilities. An example might be "fixing" cross-site scripting vulnerabilities in a web application by blocking requests containing keywords like "script" or "onclick".

chefandy
0 replies
1d11h

There's definitely a few worthwhile nuggets in there, but at least half of this reads like a cringey tirade you'd overhear at the tail end of the company holiday party from the toasted new helpdesk intern. I'm surprised to see it from a subject matter expert, that he kept it on his website for 20 years, and also that it was so heavily upvoted.

tptacek
9 replies
1d16h

We're doing this again, I see.

https://hn.algolia.com/?q=six+dumbest+ideas+in+computer+secu...

You can pick this apart, but the thing I always want to call out is the subtext here about vulnerability research, which Ranum opposed. At the time (the late 90s and early aughts) Marcus Ranum and Bruce Schneier were the intellectual champions of the idea that disclosure of vulnerabilities did more harm than good, and that vendors, not outside researchers, should do all of that work.

Needless to say, that perspective didn't prove out.

It's interesting that you could bundle up external full-disclosure vulnerability research under the aegis of "hacking" in 2002, but you couldn't do that at all today: all of the Big Four academic conferences on security (and, obviously, all the cryptography literature, though that was true at the time too) host offensive research today.

ericpauley
3 replies
1d11h

Completely agree that offensive research has (for better or for worse) become a mainstay at the major venues.

As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation. Unfortunately vendors are often too unskilled or obstinate to properly respond to disclosure from academics.

For their part academics have room to improve as well. Rather than the pendulum swinging back the other way, I anticipate that the majors will eventually have more involved expectations for reducing harm from disclosures, such as by expanding the scope of the “vendor” to other possible mitigating parties, like OS or Firewall vendors.

bawolff
1 replies
1d11h

As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation.

That assumes that without these disclosures we wouldn't see active exploits. I'm not sure i agree with that. I think bad actors are perfectly capable of finding exploits by themselves. I suspect the total number of active exploits (and especially targeted exploits) would be much higher without these disclosures.

ericpauley
0 replies
1d8h

Both can be true. It’s intellectually lazy to throw up our hands and say attacks would happen anyway instead of doing our best to mitigate harms.

tptacek
0 replies
1d1h

I was going to respond in detail to this, but realized I'd be recapitulating an age-old debate about full- vs. "responsible-" disclosure, and it occurred to me that I haven't been in one of those debates in many years, because I think the issue is dead and buried.

andrecarini
2 replies
1d10h

the Big Four academic conferences on security

Which ones are those?

mici
1 replies
1d8h

IEEE S&P, USENIX Security, ACM CCS, NDSS

ysnp
0 replies
1d3h

Is there a newsletter/e-zine or something similar that specifically follows research presented at these conferences?

ggm
1 replies
1d13h

Maybe they were right for their time? I'm not arguing that; I just posit that post-fact rationalisation of decisions made in the past has to be considered against the evidence available in the past.

The network exploded size-wise, and the number of participants in the field exploded too.

michaelt
0 replies
1d8h

It was 100% a reasonable-sounding theory before we knew any better.

In the real world, if you saw someone going around a car park trying the door of every car, you'd call the cops - not praise them as a security researcher investigating insecure car doors.

And in the imagination of idealists, the idea of a company covering up a security vulnerability or just not bothering to fix it was inconceivable. The problems were instead things like how to distribute the security patches when your customers brought boxed floppy disks from retail stores.

It just turns out that in practice vendors are less diligent and professional than was hoped; the car door handles get jiggled a hundred times a day, the people doing it are untraceable, and the cops can't do anything.

umanghere
7 replies
2d7h

4) Hacking is Cool

Pardon my French, but this is the dumbest thing I have read all week. You simply cannot work on defensive techniques without understanding offensive techniques - plainly put, good luck developing exploit mitigations without having ever written or understood an exploit yourself. That’s how you get a slew of mitigations and security strategy that have questionable, if not negative value.

blablabla123
2 replies
2d7h

That. Also, not educating users is a bad idea, but it also becomes quite clear that the article was written in 2005, when the IT/security landscape was a much different one.

crngefest
1 replies
2d6h

I concur with his views on educating users.

It’s so much better to prevent them from doing unsafe things in the first place; education is a long and hard undertaking, and I see little practical evidence that it works on the majority of people.

But, but, but I really really need to do $unsafething

No in almost all cases you don’t - it’s just taking shortcuts and cutting corners that is the problem here

blablabla123
0 replies
2d4h

The attacks with the biggest impact are usually social engineering attacks though. They can be as simple as shoulder surfing or tailgating, or as advanced as an AI voice scam. These have actually been widely publicized since the early 90s by people like Kevin Mitnick.

TacticalCoder
1 replies
2d6h

I don't think the argument is that dumb. For a start, there's a difference between white hat hackers and black hat hackers. And here he's talking specifically about people who pentest known exploits on broken systems.

Think about it this way: do you think Theo de Raadt (of OpenBSD and OpenSSH fame) spends his time trying to see if Acme Corp is vulnerable to OpenSSH exploit x.y.z, which was patched 3 months ago?

I don't care about attacking systems: it is of very little interest to me. I've done it in the past: it's all too easy, because we live in a mediocre world full of insecure crap. However, I love spending some time making life harder for black hat hackers.

We know what creates exploits and yet people everywhere are going to repeat the same mistakes over and over again.

My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure". That is the mindset we need. But it didn't stop people using Unicode in places where we should never have used it, like in domain names, for example. Then when you test a homoglyphic attack on IDN, it's not "cool". It's lame. It's pathetic. Of course you can do homoglyphic attacks and trick people: an actual security expert (not a pentester testing known exploits on broken configs) warned about that 30 years ago.

There's nothing to "understand" by abusing such an exploit yourself besides "people who don't understand security have made stupid decisions".

OpenBSD and OpenSSH are among the most secure software ever written (even if OpenSSH has had a few issues lately). I don't think Theo de Raadt spends his time pentesting so that he can then write secure software.

What strikes me the most is the mediocrity of most exploits. Exploits that, had the software been written with the mindset of the person who wrote TFA, would for the most part not have been possible.

He is spot on when he says that default permit and enumerate badness are dumb ideas. I think it's worth trying to understand what he means when he says "hacking is not cool".

SoftTalker
0 replies
1d22h

My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure".

The same is true of containers, VMs, sandboxes, etc.

The idea that we all willingly run applications that continuously download and execute code from all over the internet is quite remarkable.

watwut
0 replies
2d1h

You do not have to be able to build an actual SQL injection yourself in order to have properly secured queries. Same with XSS injection. Having rough ideas about attacks is probably necessary, but beyond that you primarily need the discipline and correct frameworks that won't let you shoot yourself in the foot.

klabb3
0 replies
2d5h

Agreed, eyebrows were elevated at this point in the article. If you want to build a good lock, you definitely want to consult the LockPickingLawyer. And it's not just a poor choice of title either:

teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole

Ah yes, I too remember when buffer overflows, XSS, and SQL injections became stale when the world learned about them and they were removed from all code bases, never to be seen again.

Remote computing freed criminals from the historic requirement of proximity to their crimes. Anonymity and freedom from personal victim confrontation increased the emotional ease of crime […] hacking is a social problem. It's not a technology problem, at all. "Timid people could become criminals."

Like any white collar crime then? Anyway, there’s some truth in this, but the analysis is completely off. Remote hacking has lower risk, is easier to conceal, and you can mount many automated attacks in a short period of time. Also, feelings of guilt are often tamed by the victim being an (often rich) organization. Nobody would glorify, justify or brag about deploying ransomware on some grandma. Those crimes happen, but you won’t find them on tech blogs.

kstrauser
7 replies
2d1h

Hacking is cool. Well, gaining access to someone else's data and systems is not. Learning a system you own so thoroughly that you can find ways to make it misbehave to benefit you is. Picking your neighbor's door lock is uncool. Picking your own is cool. Manipulating a remote computer to give yourself access you shouldn't have is uncool. Manipulating your own to let you do things you're not supposed to be able to do is cool.

That exploration of the edges of possibility is what moves the world ahead. I doubt there's ever been a successful human society that praised staying inside the box.

janalsncm
5 replies
1d22h

We can say that committing crimes is uncool but there’s definitely something appealing about knowing how to do subversive things like pick a lock, hotwire a car, create weapons, or run John the Ripper.

It effectively turns you into a kind of wizard, unconstrained by the rules everyone else believes are there.

kstrauser
2 replies
1d21h

Well put. There’s something inherently cool in knowledge you’re not supposed to have.

jay_kyburz
1 replies
1d21h

You have to know how to subvert existing security in order to build better secure systems. You _are_ supposed to know this stuff.

kstrauser
0 replies
1d21h

Some people think we shouldn’t because “what if criminals also learn it?” Uh, they already know the best techniques. You’re right: you and I need to know those things, too, so we can defend against them.

account42
0 replies
6h54m

We can say that committing crimes is uncool

Disagree in general. Laws != morals. Often enough laws are unjust and ignoring them is the cool thing to do.

Sohcahtoa82
0 replies
1d

And sometimes, knowing that information is useful for legit scenarios.

When my grandma was moving across the country to move in with my mom, she got one of those portable on-demand storage things, but she put the key in a box that got loaded inside and didn't realize it until the POD got delivered to my mom's place.

I came over with my lock picks and had it open in a couple minutes.

tinycombinator
0 replies
1d11h

Manipulating a remote computer to give yourself access you shouldn't have can be cool if that computer was used in phone scam centers, holding the private data of countless elderly victims. Using that access to disrupt said scam business could be incredibly cool (and funny).

It could be technically illegal, and would fall under vigilante justice. But we're not talking about legality here, we're talking about "cool": vigilantes are usually seen as "cool" especially when done from a sense of personal justice. Again, not talking about legal or societal justice.

lsb
5 replies
2d12h

Default permit, enumerating badness, penetrate and patch, hacking is cool, educating users, action is better than inaction

Etheryte
4 replies
2d7h

I think commenting short summaries like this is not beneficial on HN. It destroys all nuance, squashes depth out of the discussion, and invites people to comment solely on the subtitles of a full article. That's not the kind of discussion I would like HN to degrade into — if I wanted that, I'd go to Buzzfeed. Instead I hope everyone takes the time to read the article, or at least some of the article, before commenting. Short tldrs don't facilitate that.

smokel
0 replies
2d7h

As much as I agree with this, I must admit that it did trigger me to read the actual article :)

I assume that in a not-so-distant future, we'll get AI-powered summaries of the target page for free, similar to how Wikipedia shows a preview of the target page when hovering over a link.

omoikane
0 replies
2d2h

Looks like lsb was the one who submitted the article, and this comment appears to have been submitted at the same time (based on the same timestamp and consecutive id in the URL), possibly to encourage people to read the article in case the title sounded like clickbait.

chuckadams
0 replies
2d

On one hand I agree; on the other, the article itself is pretty much a bunch of grumpy and insubstantial hot takes…

arcbyte
0 replies
2d5h

While I agree with all the potential downsides you mentioned, I still lean heavily on the side of short summaries being extremely helpful.

This was an interesting title, but having seen the summary and discussion, I'm not particularly keen to read it. In fact I would never have commented on this post except to rebut yours.

nottorp
4 replies
2d1h

#4) Hacking is Cool

Hacking is cool. Why the security theater industry has appropriated "hacking" to mean accessing other people's systems without authorization, I don't know.

kstrauser
1 replies
1d20h

From Britannica: https://www.britannica.com/topic/hacker

Indeed, the first recorded use of the word hacker in print appeared in a 1963 article in MIT’s The Tech detailing how hackers managed to illegally access the university’s telephone network.

I get what you’re saying, but I think we’re tilting at windmills. If “hacker” has a connotation of “breaking in” for 61 years now, then the descriptivist answer is to let it be.

supertrope
0 replies
1d17h

If owners are able to tweak or upgrade the machine themselves, it will hurt sales of next year’s model. If “hacking” helped corporations make money they would spend billions promoting it. The old meaning of hacking has been replaced with “maker.”

Kamq
0 replies
2d

Why the security theater industry has appropriated "hacking" to mean accessing other people's systems without authorization, I don't know.

A lot of early remote access was done by hackers. Same with exploiting vulnerabilities.

One of my favorite is the Robin Hood worm: https://users.cs.utah.edu/~elb/folklore/xerox.txt

TL;DR: Engineers from Motorola exploited a vulnerability to illustrate it, and they did so in a humorous way. Within the tribe of hackers, this is pretty normal; the only difference between that and stealing everything once the vulnerability has been exploited is intent.

Normies only hear about the ones where people steal things. They don't care about the funny kind.

Zak
4 replies
2d5h

I'd drop "hacking is cool" from this list and add "trusting the client".

I've seen an increase in attempts to trust the client lately, from mobile apps demanding proof the OS is unmodified to Google's recent attempt to add similar DRM to the web. If your network security model relies on trusting client software, it is broken.

strangecharm2
3 replies
1d17h

It's not about security, it's about control. Modified systems can be used for nefarious purposes, like blocking ads. And Google wouldn't like that.

Zak
2 replies
1d6h

It's about control for Google and friends. If your bank's app uses SafetyNet, it's probably about some manager's very confused concept of security.

account42
1 replies
6h46m

If your bank's app uses SafetyNet, it's probably about some manager's very confused concept of security.

Or about making the auditor for the government-imposed security certification happy with the least amount of effort. It's always more work to come up with good answers why you are not doing the industry standard thing.

Zak
0 replies
20m

It only became a standard practice because of a misguided desire to rely on trusting the client.

CM30
4 replies
2d4h

I think the main problem is that there's usually an unfortunate trade off between usability and security, and most of the issues mentioned as dumb ideas here come from trying to make the system less frustrating for your average user at the expense of security.

For example, default allow is terrible for security, and the cause of many issues in Windows... but many users don't like the idea of having to explicitly permit every new program they install. Heck, when Microsoft added that confirmation, many considered it terrible design that made the software way more annoying to use.

'Default Permit', 'Enumerating Badness' and 'Penetrate and Patch' are all, unfortunately, defaults because of this. People would rather make it easier/more convenient to use their computer/write software than do what would be best for security.

Personally I'd say that passwords in general are probably one of the dumbest ideas in security though. Like, the very definition of a good password likely means something that's hard to remember, hard to enter on devices without a proper keyboard, and generally inconvenient for the user in almost every way. Is it any wonder that most people pick extremely weak passwords, reuse them for most sites and apps, etc?

But there's no real alternative sadly. Sending links to email means that anyone with access to that compromises everything, though password resets usually mean the same thing anyway. Physical devices for authentication mean the user can't log in from places outside of home that they might want to login from, or they have to carry another trinket around everywhere. And virtually everything requires good opsec, which 99.9% of the population don't really give a toss about...

bpfrh
1 replies
1d23h

Meh, passwords were a good idea for a long time.

For the first 10 (20?) years there were no devices without a good keyboard.

The big problem IMHO was the idea that passwords had to be complicated and long, e.g. random, alphanumeric, with some special chars, and at least 12 characters, while a better solution would have been a few words.

Edit: To be clear, I agree with most of your points about passwords; I just wanted to point out that we often don't appreciate how much tech changed after the smartphone's introduction, and that for the environment before that (computers/laptops) passwords were a good choice.

CM30
0 replies
1d21h

That's a fair point. Originally devices generally had decent keyboards, or didn't need passwords.

The rise of not just smartphones, but tablets, online games consoles, smart TVs, smart appliances, etc had a pretty big impact on their usefulness.

thyrsus
0 replies
1d4h

I recommend using a well-respected browser-based password manager, protected by a strong password, and having it generate strong passwords that you never think of memorizing. Web sites that disable that with JavaScript on the password field should be liable for damages with added penalties - I'm looking at you, banks.

freeone3000
0 replies
1d20h

It’s that insight that brought forward passkeys, which have elements of SSO and 2FA-only logins. Apple has fully integrated them, allowing cloud-synced passkeys: on-device for Apple devices, 2FA-only if you’ve got an Apple device on you. Chrome is also happy to act as a passkey provider. So’s Bitwarden. It can’t be spoofed, can’t be subverted, you choose your provider, and you don’t even have to remember anything, because the site can give you the name of the provider you registered with.

oschvr
3 replies
2d6h

"If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea."

lol'd at the irony of the fact that this was posted here, Hacker News...

smokel
2 replies
2d6h

At the risk of stating the obvious, the word "hacker" has (at least) two distinct meanings. The article talks about people who try to break into systems, and Hacker News is about people who like to hack away at their keyboard to program interesting things.

The world might have been a better place if we used the terms "cracker" and "tinkerer" instead.

jrm4
1 replies
2d

Doubt it; it's utterly naive and wishful thinking to think that those two things are easily separable; it's never as simple as Good Guys wear White, Bad Guys wear Black, which is the level of intelligence this idea operates at.

smokel
0 replies
2d

It might be naive, but I'm not the only one using this distinction. See the "Definitions" section of https://en.wikipedia.org/wiki/Hacker

moring
2 replies
2d6h

Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.

No, it means that you learn practical aspects alongside theory, and that's very useful.

move-on-by
0 replies
2d4h

I also took issue with this point. One does not become an author without first learning how to read. The usefulness of reading has not diminished once you publish a book.

You must learn how known exploits work to be able to discover unknown exploits. When the known exploits are patched, your knowledge of how they occurred has not diminished. You may not be able to use them anymore, but surely that was not the goal in learning them.

Sohcahtoa82
0 replies
1d

Not necessarily.

There are a lot of script kiddies that don't know a damn thing about what TCP is or what an HTTP request looks like, but know how to use LOIC to take down a site.

dvfjsdhgfv
2 replies
2d6h

The cure for "Enumerating Badness" is, of course, "Enumerating Goodness." Amazingly, there is virtually no support in operating systems for such software-level controls.

Really? SELinux and AppArmor have existed since, I don't know, the late nineties? The problem is not that these controls don't exist, it's that they make using your system much, much harder. You will probably spend some time "teaching" them first, then actually enable them, and still fight with them every time you install something or make other changes to your system.

ivlad
1 replies
1d17h

You will probably spent some time "teaching" them first

SELinux has worked well out of the box in RHEL and its derivatives for many years. Your comment shows you did not actually try it.

fight with them every time you install something or make other changes in your system

If you install anything that does not take permissions into account, it will break. Try running nginx with nginx.conf permissions set to 000; you will not be surprised that it does not work.

amelius
2 replies
1d8h

Also needs mention:

- Having an OS that treats users as "suspicious" but applications as "safe".

(I.e., all Unix-like systems)

vrighter
1 replies
1d8h

xdg-desktop-portal was created to allow applications running in a sandbox to access system resources. Nowadays, more and more, regular applications have to pass through this piece of crap to do their usual work - a mechanism which by design was never intended for them, but for flatpakked applications.

Oh the joy of going through the Bluetooth pairing process for my controller, or physically getting up and connecting a physical wire to it, only for the system to wait until I'm in the game and touch the controller, at which point the game immediately hangs because a pop-up appears asking me if I want to give permission to my damn controller to control stuff. Or having to manually reposition my windows every single time, because a window knowing where it is is somehow "insecure".

I'm the one putting the software on there and deciding what to run. If I run it, then it's because I wanted the application to do what it does. If someone else is in the position of running software on my machine, they're already on the other side of the airtight hatchway. They can already give themselves the permissions they need. They can just click yes on any pop up that appears. Yes, the applications should be considered safe. Because the OS cannot possibly make any informed assumptions about what's legitimate and what's malicious.

To me it feels like I can't do certain stuff on my PC because someone else might misuse something on theirs. How is that my problem?

zzo38computer
0 replies
23h58m

xdg-desktop-portal was created to allow applications running in a sandbox to access system resources.

There are many problems with it; I do not use it on my computer. A better sandbox system would be possible, but xdg-desktop-portal is not designed very well.

Oh the joy of going through the bluetooth pairing process for my controller, or physically getting up and connecting a physical wire to it, only for the system to wait until I'm in the game and touch the controller and immediately the game hangs

That is also a problem of bad design. If a permission is required, it should be possible to set up the permissions ahead of time (and to configure it to automatically grant permission if you do not want to restrict it; that could even be the default setting), instead of waiting for it to ask you and hang like that.

Or having to manually reposition my windows every single time, because a window knowing where it is is somehow "insecure"

I would think that the window manager should know where the windows are and automatically position them if you have configured it to remember where they are. (The windows themselves should not usually need to know where they are, since the window manager would handle it instead, and the applications should not need to know what window manager is in use, since different window managers work in different ways, and if the application program assumes it knows how one works then that can be a problem.)

I'm the one putting the software on there and deciding what to run. If I run it, then it's because I wanted the application to do what it does.

Yes, although sometimes you do not want it to do what it does, which is why it should be possible to configure the security, preferably with proxy capabilities.

Because the OS cannot possibly make any informed assumptions about what's legitimate and what's malicious.

I agree, although that is why it must be possible for the operator to specify such things. I think that proxy capabilities would be the way to do it (which, in addition to improving security, also allows more control over the interaction between the programs and other parts of the system).

To me it feels like I can't do certain stuff on my PC because someone else might misuse something on theirs.

Yes, it seems like that, because of the bad design of some programs, protocols, etc.

woodruffw
1 replies
1d22h

Some of this has aged pretty poorly -- "hacking is cool" has, in fact, largely worked out for the US's security community.

hot_gril
0 replies
1d1h

Yeah, we're not gonna convince people in other countries that hacking is uncool. Better to have the advantage.

trey-jones
1 replies
1d1h

Most security-oriented articles are written by extremely security-minded people. These people in my experience ignore the difficulties that a purely security-oriented approach imposes on users of the secure software. I always present security as a sliding scale. On one end "Secure", and on the other "Convenient". Purely Secure software design will almost never have any users (because it's too inconvenient), and purely Convenient software design will ultimately end up the same (because it's not secure enough).

That said, this is a good read for the most part. I heavily, heavily disagree with the notion that trying to write exploits or learn to exploit certain systems as a security professional is dumb (under "Hacking is C00L"). I learned more about security by studying vulnerabilities and exploits and trying to implement my own (white hat!) than I ever did by "studying secure design". As they say, "It takes one to know one." or something.

delusional
0 replies
1d1h

These people in my experience ignore the difficulties that a purely security-oriented approach imposes on users

Your scale analogy is probably more approachable, but I'm a little more combative. I usually start out my arguments with security weenies with something along the lines of "the most secure thing would be to close up shop tomorrow, but we're probably not going to get sign-off on that." After we've had a little chuckle at that, we can discuss the compromise we're going to make.

I've also had some luck with changing the default position on them by asserting that if they tell me no, then I'll just do it without them, and it'll have whatever security I happen to give it. I can always find a way, but they're welcome to guide me to something secure. I try to avoid that though, because it tends to create animosity.

mrbluecoat
1 replies
2d3h

sometime around 1992 the amount of Badness in the Internet began to vastly outweigh the amount of Goodness

I'd be interested in seeing a source for this. Feels a bit like anecdotal hyperbole.

julesallen
0 replies
1d22h

It's a little anecdotal, as nobody was really writing history down at that point, but it feels about the right timing.

The first time the FTP server I ran got broken into was about then. It was a shock: why would some a-hole want to do that? I wasn't aware until one of my users tipped me off a couple of days after the breach. They were sharing warez rather than porn, at least; with the bandwidth we had back then, downloading even crappy postage-stamp 8-bit color videos would take you hours.

When this happened I built the company's first router a few days later and put everything behind it. Before that all the machines that needed Internet access would turn on TCP/IP and we'd give them a static IP from the public IP range we'd secured. Our pipe was only 56k so if you needed it you had to have a really good reason. No firewall on the machines. Crazy, right?

Very different times for sure.

mikewarot
1 replies
1d17h

My "security flavor of the month" is almost universally ignored... Capability Based Security/Multilevel Secure Computing. If it's not ignored, it's mis-understood.

It's NOT the UAC we all grew to hate with Windows 8, et al. It's NOT the horrible mode toggles present on our smartphones. It's NOT AppArmor.

I'm still hoping that Genode (or HURD) makes something I can run as my daily driver before I die.

zzo38computer
0 replies
1d

I also think that capability based security is a good idea, and that proxy capabilities should also be possible. (This would include all I/O, including measuring time.)

But, how the UI is working with capability based security, is a separate issue (although I have some ideas).

(Furthermore, I also think that capability-based security with proxy capabilities can solve some other problems as well (if the system is designed well), including some that are not directly related to security features. It can be used if you want programs to do some things they were not directly designed to do; e.g. if a program is designed to receive audio directly from a microphone, you can instead add special effects in between by using other programs before the program receives the audio data, or use a file on the computer instead (which can be useful in case you do not have a microphone), etc. It can also be used for testing; e.g. to test that a program works correctly on February 29 even if the current date is not February 29, or, if the program does have such a bug, to bypass it by telling that program (and only that program) that the date is not February 29; and you can make fault simulations, etc.)

grahar64
1 replies
1d22h

"Educating users ... If it worked, it would have worked by now"

ang_cire
0 replies
1d20h

It does work, but user training is something that - whether for security or otherwise - is a continuous process. New hires. Training on new technologies. New operating models. Etc, etc, etc...

IT is not static; there is no such thing as a problem that the entire field solves at once, and is forever afterward gone.

When you're training an athlete, you teach them about fitness and diet, which underpins their other skills. And you keep teaching and updating that training, even though "being fit" is ancillary to their actual job (i.e. playing football, gymnastics, etc). Pro athletes have full-time trainers, even though a layman might think, "well haven't they learned how to keep themselves fit by now?"

philipwhiuk
0 replies
2d7h

From the 2015 submission:

Perhaps another 10 years from now, rogue AI will be the primary opponent, making the pro hackers of today look like the script kiddies.

Step on it OpenAI, you've only got 1 year left ;)

dasil003
1 replies
1d23h

I was 5 years into a professional software career when this was written, at this point I suspect I'm about the age of the author at the time of its writing. It's fascinating to read this now and recognize the wisdom coming from experience honed in the 90s and the explosion of the internet, but also the cultural gap from the web/mobile generation, and how experience doesn't always translate to new contexts.

For instance, the first bad idea, Default Permit, is clearly bad in the realm of networking. I might quibble a bit and suggest Default Permit isn't so much an idea as the natural state of when one invents computer networking. But clearly Default Deny was a very very good idea and critical idea necessary for the internet's growth. It makes a lot of sense in the context of global networking, but it's not quite as powerful in other security contexts. For instance, SELinux has never really taken off, largely because it's a colossal pain in the ass and the threat models don't typically justify the overhead.

The other bad idea that stands out is "Action is Better Than Inaction". I think this one shows a very strong big company / enterprise bias more than anything else—of course when you are big you have more to lose and should value prudence. And yeah, good security in general is not based on shiny things, so I don't totally fault the author. That said though, there's a reason that modern software companies tout principles like "bias for action" or "move fast and break things"—because software is malleable and as the entire world population shifted to carrying a smartphone on their person at all times, there was a huge land grab opportunity that was won by those who could move quickly enough to capitalize on it. Granted, this created a lot of security risk and problems along the way, but in that type of environment, adopting a "wait-and-see" attitude can also be an existential threat to a company. At the end of the day though, I don't think there's any rule of thumb for whether action vs inaction is better, each decision must be made in context, and security is only one consideration of any given choice.

worik
0 replies
1d22h

This is true in a very small part of our business.

Most of us are not involved in a "land grab"; in that metaphorical world most of us are past "homesteading" and are paving roads and filling in infrastructure.

Even small companies should take care when building infrastructure

"Go fast and break things" is, was, an irresponsible bad idea. It made Zuck rich, but that same hubris and arrogance is bringing down the things he created

bawolff
1 replies
1d22h

If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea. Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.

I would strongly disagree with that.

You can't defend against something you don't understand.

You definitely shouldn't spend time learning some script-kiddie tool; that is pointless. You should understand how exploits work from first principles. The principles mostly won't change, or at least not very fast, and you need to understand how they work to make systems resistant to them.

One of the worst ideas in computer security in my mind is cargo culting - where people just mindlessly repeat practises thinking it will improve security. Sometimes they don't work because they have been taken out of their original context. Other times they never made sense in the first place. Understanding how exploits work stops this.

strangecharm2
0 replies
1d22h

True security can only come from understanding how your system works. Otherwise, you're just inventing a religion, and doing everything on faith. "We're fine, we update our dependencies." Except you have no idea what's in those dependencies, or how they work. This is, apparently, a controversial opinion now.

Arch-TK
1 replies
1d8h

#3) Penetrate and Patch

This is one of the reasons why I feel my job in security is so unfulfilling.

Almost nobody I work with really cares about getting it right to begin with, designing comprehensive test suites to fuzz or outright prove that things are secure, using designs which rule out the possibility of error.

You get asked: please look at this gigantic piece of software, maybe you get the source code, maybe it's written in Java or C#. Either way, you look at <1% of it, you either find something seriously wrong or you don't[0], you report your findings, maybe the vendor fixes it. Or the vendor doesn't care and the business soliciting the test after purchasing the software from the vendor just accepts the risk, maybe puts in a tissue paper mitigation.

This approach seems so pointless that it's difficult to bother sometimes.

edit:

#4) Hacking is Cool

I think it's good to split unlawful access from security consultancy.

You don't learn nearly as much about how to secure a system if you work solely from the point of view of an engineer designing a system to be secure. You can get much better insight into how to design a secure system if you try to break in. Thinking like a bad actor, learning how exploitation works, etc. These are all things which strictly help.

[0]: It's crazy how often I find bare PKCS#7 padded AES in CBC mode. Bonus points if you either use a "passphrase" directly, or hash it with some bare hash algorithm before using various lengths of the hash for both the key and IV. Extra bonus points if you hard code a "default" password/key and then never override this in the codebase.

regularfry
0 replies
1d7h

It's very easy for a big organisation with a leadership that needs to show it is doing something to pass down a mandate that relies on throwing money at the problem for little tangible benefit. "Everything needs to be pen tested" is the sort of thing that sounds like the right thing to do if you don't understand the problem space; it's exactly as wrong as using lines of code as a productivity metric.

All it does is say, very expensively, "there are no obvious vulnerabilities". If it even manages that. What you want to say is "there are obviously no vulnerabilities", but if you're having to strap that onto a pre-existing bucket of bugs then it's a complete rebuild. And nobody has time for that when there's an angry exec breathing down your neck asking why the product keeps getting hacked.

The fundamental problem is the feature factory model of software development. Treating software design and engineering as a cost to be minimised means that anything in the way of getting New Shiny Feature out of the door is Bad. And that approach, where you separate product design from software implementation, where you control what happens in the organisation with budgetary controls and the software delivery organisation is treated as subordinate because it is framed as pure cost, drives the behaviour you see.

voidUpdate
0 replies
1d10h

I'm not sure if I'm completely misunderstanding #4 or if it's wrong. Pentesting an application is absolutely a good idea, and it's not about "teaching yourself a bunch of exploits and how to use them" in the same way programming isn't just "learning a bunch of sorting algorithms and how to use them". It's about knowing why an exploit works and how it can be adapted to attack something else and find a new vulnerability; then it goes back to the programming side of working out why that vulnerability works and how to fix it.

uconnectlol
0 replies
1d18h

Please wait while your request is being verified...

Speaking of snakeoil

tracerbulletx
0 replies
1d19h

Hacking is cool.

tonnydourado
0 replies
2d4h

I've seen "Penetrate and Patch" play out a lot on software development in general. When a new requirement shows up, or technical debt starts to grow, or performance issues, the first instinct of a lot of people is to try and find the smallest, easiest possible change to achieve the immediate goal, and just move to the next user story.

That's not a bad instinct by itself, but when it's your only approach, it leads to a snowball of problems. Sometimes you have to question the assumptions, to take a step back and try to redesign things, or the new addition just won't fit, and the system just becomes wonkier and wonkier.

teleforce
0 replies
2d1h

This article is quite old and has been submitted probably every year since it was published, with past submissions filling multiple pages.

For modern version and systematic treatment of the subject check out this book by Spafford:

Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us:

https://www.pearson.com/en-us/subject-catalog/p/cybersecurit...

rkagerer
0 replies
1d16h

#7 Authentication via SMS

ricktdotorg
0 replies
1d4h

it's 2024! if you run your own infrastructure in your own DC and your defaults are NOT:

- to heavily VLAN via load type/department/importance/whatever your org prefers

- default denying everything except common infra like DNS/NTP/maybe ICMP/maybe proxy arp/etc between those VLANs

- every proto/port hole poked through is a security-reviewed request

then you are doing it wrong.

"ahh but these ACL request reviews take too long and slow down our devs" -- fix the review process, it can be done.

spend the time on speeding up the security review, not on increasing your infrastructure's attack surface.

motohagiography
0 replies
1d4h

What has changed since 2005 is that these ideas are no longer dumb; they describe the dynamic security teams have to manage now. Previously, security was an engineering and gating problem, when systems were less interdependent and complex; now it's a policing and management problem, where there is a level of pervasive risk that you find ways to extract value from.

I would be interested in whether he still thinks these are true, as if you are doing security today, you are doing exactly these things.

- educating users: absolutely the most effective and highest return tool available.

- default permit: there are almost no problems you can't grow your way out of. there are zero startups, or even companies, that have been killed by breaches.

- enumerating badness: when managing a dynamic, you need measurements. there is never zero badness, that's what makes it valuable. the change in badness over time is a proxy for the performance of your organization.

- penetrate and patch: having that talent on your team yields irreplaceable experience. the only reason most programmers know about stacks and heaps today is from smashing them.

- hacking is cool: 30 years later, what is cooler, hacking or marcus?

michaelmrose
0 replies
1d23h

Educating Users

This actually DOES work; it just doesn't make you immune to trouble, any more than a flu vaccine means nobody gets the flu. If you drill shit into people's heads and coach people who make mistakes, you can decrease the number of people who do dumb, company-destroying things by specifically enumerating the exact things they shouldn't do.

It just can't be your sole line of defense. For instance, if Fred gives his creds out for a candy bar, 2FA keeps those creds from working, and you educate and/or fire Fred, then not only did your second line of defense succeed, your first one is now stronger without Fred.
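
To make the Fred example concrete, here's a minimal sketch of that second line of defense, an RFC 6238 time-based one-time code check using only Python's standard library (the login flow and names are illustrative assumptions, not any particular product):

    import hmac, hashlib, struct, time

    def totp(secret: bytes, now: float, step: int = 30, digits: int = 6) -> str:
        """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time step."""
        counter = struct.pack(">Q", int(now) // step)
        mac = hmac.new(secret, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def login(password_ok: bool, submitted_code: str, secret: bytes) -> bool:
        # Fred's leaked password alone fails here: the attacker doesn't
        # hold the shared secret that generates the current one-time code.
        expected = totp(secret, time.time())
        return password_ok and hmac.compare_digest(submitted_code, expected)

Even after Fred trades his password for a candy bar, the candy-bar buyer still can't log in.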

kazinator
0 replies
2d5h

Penetrate and Patch is a useful exercise, because it lets the IT security team deliver some result and show they have value, in times when nothing bad is happening and everyone forgets they exist.

jrm4
0 replies
2d

Missed the most important.

"You can have meaningful security without skin-in-the-game."

This is literally the beginning and end of the problem.

jojobas
0 replies
1d17h

My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.

19 years later, hacking is still cool.

jibe
0 replies
2d

"We're Not a Target" deserves promotion to major.

jeffrallen
0 replies
2d5h

mjr (as I always knew him from mailing lists and whatnot) seems to have given up on security and enjoys forging metal instead now.

Somewhere in there, security became a suppurating chest wound and put us all on the treadmill of infinite patches and massive downloads. I fought in those trenches for 30 years – as often against my customers (“no you should not put a fork in a light socket. Oh, ow, that looks painful. Here, let me put some boo boo cream on it so you can do it again as soon as I leave.”) as for them. It was interesting and lucrative and I hope I helped a little bit, but I’m afraid I accomplished relatively nothing.

Smart guy, hope he enjoys his retirement.

janalsncm
0 replies
1d21h

hacking is cool

Hacking will always be cool now that there's an entire aesthetic around it: The Matrix, Mr. Robot, even The Social Network.

iandanforth
0 replies
2d7h

And yet the consequence of letting people like this run your security org is that it takes a JIRA ticket and multiple days, weeks, or forever to be able to install 'unapproved' software on your laptop.

Then, if you've got the software you need to do your job, you're stuck in endless cycles of "pause and think", trying to create the mythical "secure by design" software which does not exist. And then you get hacked anyway, because someone got an email (with no attachments) telling them to call the CISO right away, who then helpfully walks them through a security "upgrade" on their machine.

Caveats: Yes there is a balance and log anomaly detection followed by actual human inspection is a good idea!

esjeon
0 replies
1d15h

Educating Users

isn't dumb, because "users" have proven to be the weakest link in the whole security chain. Users must be aware of workplace security, just as they should be trained in workplace safety.

Also, there's no security to deal with if the system is unusable by its users. The trade-off between usability and security is simply unsolvable, and education is a patch for that problem.

dingody
0 replies
1d15h

I don’t entirely agree with the author’s viewpoint on “Hacking is Cool.” There was a time when I thought similarly, believing that “finding some system vulnerabilities is just like helping those system programmers find bugs, and I can never be better than those programmers.” However, I gradually rejected this idea. The appeal of cybersecurity lies in its “breadth” rather than its “depth.” In a specific area, a hacker might never be as proficient as a programmer, but hackers often possess a broad knowledge base across various fields.

A web security researcher might simultaneously discover vulnerabilities in “PHP, JSP, Servlet, ASP.NET, IIS, and Tomcat.” A binary researcher might have knowledge of “Windows, Android, and iOS.” A network protocol researcher might be well-versed in “TCP/IP, HTTP, and FTP” protocols. More likely, a hacker often masters all these basic knowledge areas.

So, what I want to say is that the key is not the technology itself, nor the nitty-gritty of attack and defense for particular vulnerabilities, but the ability to use that wide range of knowledge, and the hacker's reverse thinking, to challenge seemingly sound systems. This is what our society needs, and it is immensely enjoyable.

cwbrandsma
0 replies
2d5h

Penetrate and Patch: because if it doesn't work the first time, then throw everything away, fire the developers, hire new ones, and start over completely.

cratermoon
0 replies
1d14h

The real dumbest idea in computer security is "we'll add security later, after the basic functionality is complete"

chha
0 replies
1d21h

If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea. Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.

If only this were true... Injection has been on the OWASP Top 10 since its inception and is unlikely to go away anytime soon. Learning some techniques can be useful just to do quick assessments of basic attack vectors, and to really understand how you can protect yourself.
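
For example, the injection lesson fits in a few lines; a minimal sketch using Python's sqlite3 (the table and the attacker-controlled input are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    name = "x' OR '1'='1"  # attacker-controlled input

    # Vulnerable: the input is spliced into the SQL text itself, so the
    # quote characters change the meaning of the query:
    #   conn.execute(f"SELECT * FROM users WHERE name = '{name}'")

    # Safe: the value is bound as a parameter and never parsed as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

The technique has barely changed in twenty years, which is rather the point.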

amelius
0 replies
1d8h

"Let's go production with it now and we can secure it later" - no, you won't. A better question to ask yourself is "If we don't have time to do it correctly now, will we have time to do it over once it's broken?"

I guess the idea is generally that if we go into production now we will make profits, and with those profits we can scale and hire real security folks (which may or may not happen).

al2o3cr
0 replies
2d1h

    My guess is that this will extend to knowing not to open weird attachments from strangers.
I've got bad news for ya, 2005... :P

Jean-Papoulos
0 replies
1d11h

As a younger generation of workers moves into the workforce, they will come pre-installed with a healthy skepticism about phishing and social engineering.

hahahahahaha

Hendrikto
0 replies
1d12h

This is full of very bad takes.

I know other networks that it is, literally, pointless to "penetration test" because they were designed from the ground up to be permeable only in certain directions and only to certain traffic destined to carefully configured servers running carefully secured software.

"I don't need to test, because I designed, implemented, and configured my system carefully" might be the worst security take I have ever heard.

[…] hacking is a social problem. It's not a technology problem, at all.

This is security by obscurity. Also, it's not always social: take corporate espionage and nation states, for example.

AtlasBarfed
0 replies
1d23h

#1) Great, least privilege, oh wait, HOW LONG DOES IT TAKE TO OPEN A PORT? HOW MANY APPROVALS? HOW MANY FORMS?

Least privilege never talks about how the security people in charge of granting back permissions do their jobs at an absolute sloth pace.

#2) Ok, what happens when goodness becomes badness, via exploits or internal attacks? How do you know when a good guy becomes corrupted without some enumeration of the behavior of infections?

#3) Is he arguing to NOT patch?

#4) Hacking will continue to be cool as long as modern corporations and governments are oppressive, controlling, invasive, and exploitative. It's why Hollywood loves the Mafia.

#5) ok, correct, people are hard to train at things they really don't care about. But "educating users", if you squint, is "organizational compliance". You know who LOVES compliance checklists? Security folks.

#6) Apparently, there are good solutions in corporate IT, and all new ones are bad.

I'll state it once again: my #1 recommendation to "security" people is PROVIDE SOLUTIONS. Parachuting in with compliance checklists is stupid. PROVIDE THE SOLUTION.

But security people don't want to provide solutions, because they are then REALLY screwed when inevitably the provided solution gets hacked. It's way better to have some endless checklist and shrug if the "other" engineers mess up the security aspects.

And by PROVIDE SOLUTIONS I don't mean "offer the one solution for problem x (keystore, password management), and say fuck you if someone has a legitimate issue with the system you picked". If you can't provide solutions to various needs of people in the org, you are failing.

Corporate security people don't want to ENGINEER things; again, they just want to make compliance PowerPoints for C-suite execs and hang out in their offices.