Article/title is a bit confusing and perhaps borderline clickbait, so...
If I understand correctly it seems like his crime was *using* the exposed database credentials to log in to the third-party database server.
So he wasn't charged for simply "exposing" the credentials as the title says, but actually using them to poke around.
It is basically impossible to know what a system is without accessing it and looking around.
It is like being given a key card for security clearance in a building. You assume any door it opens is a room you're allowed to be in. If security finds you in a room you aren't supposed to be in, is that your fault? Or whoever gave you the card with the wrong clearance level?
Also how about the situation where you open a door, look inside, immediately realize you're not supposed to be there and then report it to security? Should you be punished?
I'll argue the other side:
It's more similar to finding a key hidden under the mat at someone's house. You can then contact the owner and inform them of the security issue, but what you should not do is use the key to open the door, go in, and see whether there is really any harm in you being able to enter. Because you might accidentally achieve exactly what a criminal wants, such as finding a note with a password on it. You can then claim you didn't want to find it, but the fact is that 1) you broke the law by entering and 2) you caused a malicious event (namely, obtaining a password).
You can then pinky swear that you didn't already use it for any further malicious actions but that will be difficult to verify.
If I ever lose my key, I don't want people to enter my house to prove that they can. Inform me of how you obtained the key and I'll change the locks and make sure I don't lose my key again. If you do enter my house, expect me to press charges.
It's not similar at all.
The key (connection string) was already given to him via the app and he was entering the house (database) on a regular basis.
This would be like mistaking a door for the bathroom but finding a closet full of gold instead.
But he had no right to examine the app, or enter the database outside the app. Even if that is a simple task for an expert, it's still an obvious difference between legitimate usage of the app, and illegitimate usage of the database.
Wouldn't that count as reverse engineering, which is often also illegal?
We’re getting downvoted by the “I have no clue about how the criminal justice system works” brigade.
You are getting downvoted by people who understand that, while the situation is nuanced, it is NOT equivalent to flipping over the mat at a random house and entering it.
It may not be totally akin to a security card opening more doors than it should, but it is entirely reasonable to assume that "the key in your copy of the app" is "your personal access key".
Well, apparently the courts think it is equivalent, at least in Germany and the United States.
Picking whichever of these really stupid analogies, including mine, you happen to favor as your lens for choosing a side is arguing entirely in bad faith.
So your argument is that since the courts think a certain way, that we should accept that and move on?
I agree with the courts. Most people do, outside of a vocal minority on this forum.
What is your plan if you disagree?
It's just a fundamental disconnect then, I guess. You and I have a very different understanding of how this "crime" was carried out.
He was charged for connecting to a database with credentials that he was already connecting to before with the application. I could argue that he already had access to the other data because the connection allowed it.
So, that raises the question: what did he actually "break" into, and how is it different from connecting to the database via the app on the same PC? Would you even be able to tell the difference as a sysadmin looking at logs?
If you want to argue about the law, you can! There is this entire profession dedicated to this!
I've never been in a situation where I was given an app binary as a user but where the author intended for me to go find a key in the decompilation so that I could manually access information that wasn't visible in the app.
This seems like it is an unreasonable thing to assume. And courts seem to agree.
Illegal vs legal and reasonable vs unreasonable are not entirely analogous.
There are lots of laws that I think are unreasonable and a lot of legal things that I think are unreasonable.
He had access to the key by looking at the source code. The key wasn't intended to be used by him manually.
I wonder if there was some kind of software license that stated that the developer was giving the user the key but the user wasn't permitted to use it. At least in that case, he would have known the company's intentions.
I don't think we can otherwise know for sure the company's intentions. If I leave the front door to my house open (or if I tape the key to the outside of the door, to further strain the physical key analogy), is it my intention that people just come on in? We have no idea.
Are you given a key if it is contained in the source code of a compiled binary that you are given?
...except it was not the source code. Apparently right there in the application in cleartext. If what the investigator intended didn't matter, then what the vendor intended doesn't matter, either.
He literally opened the application binary in a text editor to look for clues (despite the fact that there are common tools like `strings` for this) and saw the credentials, and since he'd been specifically hired to fix a problem with the database, he used those credentials to connect to the database. `SHOW DATABASES` would be a perfectly normal thing to type at this point, and apparently once he saw that these credentials granted access to everything he immediately stopped and logged out.
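To make that concrete, the entire "attack" amounts to something like this (a sketch only; the binary name, host, user, and password below are all invented):

    # Pull printable strings out of the shipped binary and look for credentials.
    strings MSConnect.exe | grep -i -E 'mysql|password|passwd'

    # Suppose that reveals a host, user, and password. A single diagnostic
    # query is the natural next step for someone debugging a database issue:
    mysql -h db.vendor.example -u shopclient -p'secret' -e 'SHOW DATABASES;'

The moment that listing shows every customer's database rather than just your client's, you know the credentials are global, which is apparently exactly when he stopped.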
If his lawyers had been better, this would never have made it to court. Liability should have fallen on the contract customer, but for certain the design of Modern Solution's software and application was nothing short of wildly irresponsible. If they really face no risk for this, it's time for all German companies to start contracting firms in other countries where idiots aren't allowed to leave thousands of customers' data exposed to the first schlub who happens to notice them.
Well, the key was already “given to him” by the nature of leaving it under the mat.
Oh, but this is a food truck that he’s been visiting on a regular basis so obviously he’s allowed to go in the back door, with full access to all the ingredients and poking around inside the cash register?
Also, if you take someone else’s gold behind a door you unlocked with a key that isn't yours, it is called stealing.
Nothing was "under the mat" since he was already using his PC to connect to the database with those same credentials.
Nothing was stolen, he informed the vendor of the issue immediately.
It wasn't hidden though.
Let's pretend the guy was a pest exterminator, hired to kill some bug infestation. He is given a keycard to access most of the building. As he is hunting down the nest, he finds a hole in the wall. Bugs tend to come through holes in walls, so he goes in to figure out whether what he is looking for originates there.
He enters the room on the other side, and looks around for other holes that might have bugs. He then notices a file with big red letters saying "TOP SECRET". Turns out he accidentally entered the maximum security file room. So now he leaves the room, goes to security and tells them what he found. Then gets arrested for 1 count of trespassing and 1 count of breaking and entering.
How is that fair?
If a key is taped to the outside of the door that it opens, you still can't use it without committing a trespass. Not unless you got authorization to use it from the right person(s) first. Shitty security isn't a legal invitation.
The key was taped to the outside of the door explicitly to be used by the customer though.
No, it wasn't intended for customer use; the customer wasn't supposed to be hitting that API directly. There was reverse engineering that had to happen before recovering the plaintext password. He was already someplace fairly iffy to begin with. It was much more like finding a key while on the front porch of a house you don't own.
Enough with building comparisons.
The vendor put the master keys of the main database in a publicly-accessible package. It’s insane.
Of course the researcher tested the keys, to check whether it was true. If it can be proven that they were used for nefarious purposes, then fine, blame him, but as far as we know, the researcher just used them to gather convincing evidence and left the system.
The company is 100% to blame, and the researcher should get an award encouraging white-hat reporting. Not this.
If you're just a "researcher" and you're not a cop then your job isn't to get convincing evidence of anything.
If you’re a company then your job is to keep our bank accounts safe. See, I can do it too.
Now that we've made clear that no one is doing their job: YES, free citizens should be able to try to poke at the awful security of corporate IT systems and report it in good faith without spending even a second dealing with legal trouble.
No it really doesn't work that way. Companies don't go around testing the physical security of the banks that they use. They rely on the legal system and insurance to deal with any failures that the bank has.
The right to pen test businesses doesn't actually exist.
If that house (the app) was built inside my living room (my phone), then how does the analogy play out?
I think an electronic keypad works better as an example.
You get to a door, and there is a post-it note on it saying "Keycode is 2424".
You know your job is in the building. Your keycard let you into the room with this door. Therefore, if the door led to a "high-security" area, surely they wouldn't put a post-it note there? Maybe, as the bug exterminator or maintenance person, you're expected to be able to enter that room?
Except that this is an API which is not yours, and which you found by reverse engineering. You're constructing an analogy where you're already allowed into the building precisely because that makes the authorization surface much less distinct. That analogy isn't at all obvious to me, and I don't think you've offered any rationale for why it's more appropriate.
Once you start playing around with someone's remote API then I'd strongly advise you to consider it more like a door around the outside of a building. If you want to argue about that with me, then you may wind up arguing about that with a judge.
Hmm, big caveat there is that in most jurisdictions that would only be considered trespass if you’ve been pre-warned that you’re not allowed in.
The “key” analogy though is a bit stretched.
It is clear that here, the credentials were used in a way other than intended.
The question is now rather: "is it reasonable to expect that a person interacting with a website normally would be permitted to use anything sent to them by that website in any way they choose?"
A lot of us here are more tech-minded and would likely say “yes, if you don’t want people using credentials, don’t provide them”. The courts on the other hand may take a different view and say “well we already restrict what people can do with content on a website through copyright laws, this person should not automatically have the right to use those credentials in any way other than the way provided”.
It will become even more complicated if the website has terms of service which clearly state something which has a similar technical meaning to “no trespassing”’s physical meaning.
In that case, it's his job to enter holes.
Nobody asked this developer to go through any holes. If you see a hole in your hotel room leading to another room, you don't go through it to see if there is anything to steal, even if you go through holes a lot for your job. You inform the owner that there is a hole.
His job was to debug the program though, so wouldn't that literally be all about poking around holes?
His job was to solve a problem with a database's log files. He found a second database. "Maybe this second database is causing the duplicate log files?" So he goes in and looks.
It makes sense to me, based on the German article (machine translated to English):
To clear up confusion: the "displayed" in the final sentence is a mistranslation of "anzeigen", which here means pressing charges.
"Thank you for saving us from liability hell and damages by helping us fix our shit. Police officers should arrive at your door shortly."
I hope this company ends up paying dearly for this. Anything else would set a devastating precedent for information security. A business like that has no business staying in business.
Exactly. As the original article states, he just didn't assume he'd stumble onto a sensitive database.
Exactly. Maybe he could just be exploring an API.
It isn't hidden, maybe this is how I'm supposed to get data.
Unfortunately, like many physical security analogies, there's no one correct way of translating the details from the software world to the real world.
I think how close you think those two situations are depends on whether you consider the application to be an agent of the customer (like a personal shopper), or of the shop (like a salesman).
Did:
A - the programmer coerce the application (an agent of the shop) into accessing secret information (breaking into the shop warehouse), or
B - the programmer ask the application (his own agent, a personal shopper) to go and look for interesting things in the database (shop's warehouse) for him, a privilege that the application (personal shopper) was afforded in advance by the shop?
I personally think that A is a dangerous precedent to set for society. Treating any network-bound application as the agent of its creator would mean it was wrong to observe your computer (which you generally use for more than just accessing one online shop), and would therefore effectively kill FOSS.
He didn't ask the application though. He changed the application or used information in the source code in a way that the shop didn't intend. So B is clearly not the case.
But even just assuming that the fact that the app was using a key means that he can try to access other things with it is a dangerous precedent. Can I access your OnlyFans if you give me the password to your Netflix account and you use the same password for both? Can I access the company database directly if the app connects to it? There could be all sorts of confidential info on that server, perhaps the gossip of the customer service people about you or info about other customers.
The word 'house' is doing a lot of illicit work in your metaphor. A better analogy would be if a bunch of families had decided to install CCTV cameras, then got their neighbor Garry to watch them.
Unbeknownst to them, Garry was storing the tapes a lot longer than anybody imagined, was pointing the cameras so they looked into people's bedrooms, and he was selling some metadata to third parties, and he also kept the key to his house in a plant pot by the door.
Some teenagers, messing around, find the key and have a look around.
Suddenly, it's the teenager, not Garry, who's the problem here. That's how data breaches work at the moment. And because it's Garry's 'house', we all think of the company as the victim.
If the mat was in their driveway, maybe.
Finding a key on the floor and then using that key to break into a building is illegal, and for good reason.
It's more like you're given a keycard to a building that says "Floor 20" but then find out it has access to all the floors and telling the building security about it. All they did was 'open' the 'elevator door' here revealing the access they mistakenly had. It doesn't sound like they snooped through other customers' data or downloaded anything.
You're not really given it though. You found it and even though nobody ever asked you to use the key card you used it on the floor that nobody ever asked you to go to.
They weren't not allowed either, though.
Your machine regularly uses that key to access what's behind the door on your behalf; there is no reason you shouldn't be able to access it yourself.
If you don't find the key and realise that it's a lost one leading to a potentially dangerous place, someone else will, and they won't be benevolent.
If you don't know what the key does who are you reporting it to?
If that's unclear, the answer is simple: destroy the key. Otherwise you can try to be a good neighbour and let them know. You do not get to just open random doors and see what's going on.
The real issue here is whether this instance is comparable, not whether opening doors with lost and found keys is a bad thing.
The real difference is whether they 'found' the key or if they were handed it. In this case I'd argue they were handed the key, as there was no plausible protection mechanism preventing them from accessing the key. It wasn't lying around somewhere forgotten or secret, it was in plain sight.
And frankly, we need some good Samaritan laws for cases where someone responsibly discloses a vulnerability without doing further harm; even if what they did was illegal on its own, it certainly should not be in light of the fact that they responsibly disclosed it.
It's not a "lost" key if it's found hardcoded in an easily available place (e.g. an application). It's a negligently placed key leading to a vulnerable place that is going to get into the hands of a malicious person.
US law:
[1] https://www.avvo.com/legal-answers/is-it-breaking-and-enteri...
Yep, what the guy did here seems at worst a misdemeanor, and only because a company's feelings were hurt. The vendor brought charges against an employee of their customer, someone who was paying to use their product.
It's not finding a random key on the floor, unless the key was in his own house, sending data back to a 3rd-party server. The closest paradigm in technology terms is finding an S3 key in a third-party library and then browsing the S3 bucket to see what's in it. Authorization was granted by providing the developer the library; they literally sent him a username and password. If the credentials had been unique to each client, would the person have been charged?
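For what it's worth, the S3 version of that paradigm is a one-liner, and whether running it counts as "authorized" is exactly the question at issue (the bucket name and profile here are made up):

    # Hypothetical: credentials extracted from a vendored library, saved
    # as a local AWS CLI profile, then used to list the bucket contents.
    aws s3 ls s3://vendor-shared-bucket/ --profile creds-from-library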
I don't see why an investigation into excessive logging requires running queries on a database.
If they connect with a desktop application, said application might run some queries against `INFORMATION_SCHEMA` in order to display schemas, tables, and columns. If the investigation is open-ended enough ("we have no idea so just figure it out"), then it might seem reasonable to connect to the database to see what it's about.
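For context, this is the kind of metadata query a typical desktop client fires off automatically on connect, just to populate its schema browser (illustrative only; host and user are invented, and actual clients vary):

    mysql -h db.vendor.example -u shopclient -p -e "
      SELECT table_schema, table_name
      FROM INFORMATION_SCHEMA.TABLES
      WHERE table_schema NOT IN
        ('mysql','information_schema','performance_schema','sys')
      ORDER BY table_schema, table_name;"

So merely opening such a tool against that server would have produced the same cross-customer listing.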
It's already hard to see the malice in their actions but it's harder still when I consider that they immediately alerted the company who made the error. Even more when I consider that the company fixed the error. This developer did the company a favor and they had charges filed against them over it.
This is a high-profile case because they went public with it, but I assure you, stuff like this happens a lot more frequently than you might think (the public can attend court proceedings in that country, but for the vast, vast majority of cases they are neither published nor publicized). It's why anyone remotely familiar with the legal situation (not just in Germany; many countries have similarly broad laws, like the CFAA or the Computer Misuse Act) will tell you to never, ever do vulnerability disclosure yourself to the responsible party.
What should you do instead? How would someone familiar with the legal situation disclose a vulnerability?
It doesn't have to be malicious, it can just be negligent. What he did was essentially digital trespass. He assumed the database only contained data for his client but he knew the database was hosted and operated by the vendor and did not ask for permission to access it. Instead he analyzed the software (in the most basic way, i.e. opening it in a text editor and looking for plaintext strings) to retrieve credentials and used those for accessing the third-party system without permission.
It's of course ridiculous that the police and prosecution called this "decompilation" but I agree with them to a point: the password didn't spontaneously fall in his lap, he deliberately went out to look for it in the software itself, even if he found it in the most trivial way possible. And then he decided to use those credentials to access an external system he must have known did not belong to his client without asking for permission. This rapidly enters grey hat territory and crosses a legal line.
I think it's right to be appalled by the mere act of opening the file in an editor being considered suspect by the prosecution but I also think it's important to understand that it wasn't just the software analysis that got him into trouble, it was the digital trespass that this enabled.
It's rather different. One time I saw that my neighbour had left the key sticking in the door outside. That didn't feel like an invitation to me. Also, garden doors aren't necessarily locked, and I think this is difficult to legislate.
German law applies to TFA, so compare Hausfriedensbruch (trespass, in the criminal code): the adverbs of choice are "widerrechtlich", like undefined behaviour, and "ohne Befugnis", essentially "without permission", e.g. in the case of an unlawful entry by police. The official translation actually distinguishes "unlawful" and, later, "without permission". It always feels to me like it says: illegal entry is illegal. Vandalism uses the same words; section 303a applies them to computer sabotage as "data manipulation".
https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...
https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...
PS: the relevant section is 202a "Data espionage", following another comment.
https://news.ycombinator.com/item?id=39047283
https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...
This deserves further commentary.
In my humble opinion, what really grinds my gears is the abuse of the letter of the law, “circumventing the access protection”. If your fence has gaping holes, it's not a functional fence.
Since this is Hacker News, graffiti "vandalism" is still a good example. The only protection of public-facing walls is law enforcement, which is spotty. Private property such as trains may employ fences and security, which can be circumvented. Train stations and trains in service have to be open anyhow. Terms of Service may explicitly forbid pollution, defacement, or whatever you want to call it (this holds by analogy if you leave logs on the server; my point being, as it were, that security is a process).
The law makes a practical difference for each of these cases, but the spirit of the law is the same in each, and the baseline is that the law is whatever is deemed appropriate by the powers that be: the finder of fact, the population as represented by select individuals, the common joe. This, in turn, is supposed to be enshrined in constitutions of sorts. In sum, "unlawful" ("widerrechtlich" or "unbefugt") derives in different ways from constitutional rights.
In the given case, subsection 202a is based on confidentiality (Art. 10 GG, "privacy of correspondence"), but in my example (guilty as charged) the laws against vandalism are based on property (Art. 14 GG). As a result, your comparison is a type error for me (as is "circumvent", if access control is a process).
https://www.gesetze-im-internet.de/englisch_gg/index.html
Comparative Law is a real thing, by the way, one that is most foreign to me, but I make do.
Graffiti satisfies the criterion of Sachbeschädigung (criminal property damage). Nothing (except some reputation) was damaged by the "hacking" involved here.
Well, depending on what kind of data was stored in the database he accessed, this may constitute a data breach according to privacy law in which the vendor also needed to assess whether the incident needs to be reported to its data subjects (i.e. all customers in the same database). Those could then possibly sue for damages.
Of course if that's the case the vendor would have to be found to be in violation of privacy laws by not using state of the art protections (e.g. not shipping plaintext passwords, not using the same database/credentials for data from different customers) and might be fined for that separately.
The problem with your analogy is that in this case the contractor was specifically hired to figure out why when they opened the door the room inside was filled to the ceiling with cheese. OF COURSE one is going to open the door to try and at least verify that there is in fact a bunch of cheese in there.
What reason do you have to need to know what a system is? Just because you think you have a password for it?
It depends. Do you know you're not supposed to be in that room?
I've run into this exact same situation three times. One was a hard coded SSH key to a root account, two were hard coded passwords.
In all three cases, I simply contacted the vendor, let them know I had this key, coordinated disclosure with them, and then told them what the password was and where I found it.
In all three cases, the disclosure was enough for them to go wide eyed, immediately understand which systems were impacted, and then quickly leave the call to go fix the problem.
There is _zero_ reason for you to _use_ exposed credentials if you find them. It adds nothing to the "security research" you may be doing.
But in this particular case, it sounds like it wasn't known to him that it was an exposed credential. He thought it would just access his own data, so there would have been no reason to report anything to the vendor at that point. The access to protected data here was accidental.
He thought the credentials, which were hardcoded into the app for a MySQL server, somehow accessed only his data?
That's hard to believe. Which seems like the defense understood, because they offered that the "name of the remote database" seemed like it could be related to his customer that he was contracted to.
In the end, he's going to pay 3,000 euro, and made an example of. He could have received 3 years in prison. So slightly unfair to everyone but hardly worth stretching credulity to defend.
Hm, I think I misread the article. It just says it was accessing a database named after the client, not that the user/password suggested being specific to the client. So you're right, it being hard coded then suggests it can likely access all of them. He should have stopped there and reported the issue.
While paying a fine of 3000€ is likely without consequence for most programmers, that's not the only thing that happens. It now shows up on his criminal record and would be considered in any future case against him.
Bullshit. For one thing, he wasn't doing "security research"; he was trying to fix a problem his client was asking him to fix that directly involved the MySQL database in question. He literally stumbled across the security problem by accident. For the other, the vendor should be facing an investigation into exactly why they thought it was a good idea to have thousands of customers and millions of euros "protected" by one single password that was stored in plaintext on thousands of customers' machines. In a number of places that could easily result in criminal liability on their part, which is probably exactly why they contacted the authorities.
...and I'll point out that he didn't actually even have to know what these credentials were. With elevated privs on the local system, one can merrily let the application connect to the database server and then snatch that socket up and do with it whatever they wish and then the same information would have been revealed--that every one of MS's customers could quite readily access all the data of every other customer.
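A sketch of just that observation step (the process name is invented, and actually taking over the socket would need more machinery, but confirming the direct connection doesn't):

    # List the app's open TCP connections; a remote :3306 endpoint shows
    # it talks straight to the vendor's shared MySQL server.
    lsof -nP -i TCP -a -c msconnect

    # Or watch the wire; without TLS the MySQL handshake is readable.
    sudo tcpdump -nn -i any 'tcp port 3306'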
Troy Hunt called that behavior "way too far into the grey for my comfort" in a recent post about the massive Naz.API leak.
Says the guy who gets emailed hacked databases by cybercriminals.
Right, the guy whose job is receiving hacked databases of user credentials from cybercriminals argues that actually using said credentials would be going too far.
In that case the keys he has were definitely not his to use. Here we're looking at someone handed an API key by a company and then using it to access the API.
A lot of this depends on whether you view a phone as a device running third party' programs on behalf of the user, or a device that third parties allow users to run software on on behalf of the third party.
A lot of society is moving towards the latter view, which is of course fundamentally wrong.
Not really. Apparently they interpret the software which has the embedded key as an "agent".
So it's more like you go into a building and get assigned a person who opens the door and retrieves the stuff you want from there for you. Turns out that the person was on drugs and promptly fell asleep ("malfunctioning") and you take the key to get into the room but it turns out you've just witnessed that this company is a huge scam. Now they want to sue you.
Yes, he logged into the server using the credentials embedded in the app. Since the server contained information from other users, this would clearly be some kind of crime if he used this access maliciously, or maybe even if he just logged in knowing that he wasn't supposed to be allowed to.
But I think the salient point here is whether or not he could have known that before logging into the server. Since the credentials are in the app, should he assume that the company's security is so bad that this would give him access to all their customer data? He is obviously allowed to use the app, and the app uses these credentials so it's not too much of a leap for him to think that he should be allowed to use them as well.
Regardless, I think the result of this ruling will clearly be bad for computer security. In the future maybe someone who finds a vulnerability like this won't report it out of fear of legal retribution.
I guess the moral of the story is, if you find hardcoded credentials, immediately inform whoever is in charge without actually using the credentials.
Or can that still get you sued?
I don't think the "hardcoded" part is at issue here. If this wasn't a MySQL database but an API that exposed other customer information, he would have the same moral duty to disclose and the same legal liability, I think.
Maybe not? Again, only speaking to US law, but your intent matters a lot here, and you have more plausible deniability sending API requests than you do making a direct connection to a database.
A direct connection to a database is an API, too. :-)
Not normally. Which matters here.
$15 says there is no law about illegally accessing information systems anywhere in the world that even defines "API".
What's your point? People have been convicted for doing things with actual APIs when prosecutors were able to demonstrate that a reasonable person would have assumed they shouldn't have done those things.
API or not API is a technicality that bothers only tech nerds. The law wouldn't bother with such nuances.
But in this case, it is, since this is the way the client connects to the (database) server here.
My thinking is that prod credentials that you aren't supposed to have were used. So if you've been asked to investigate and see something this glaringly bad, then you need to stop immediately, inform your boss, and get explicit approval before continuing.
I think it all comes down to:
It's not a crime to build a house that has open doors and windows.
But it's certainly a crime to enter one as an uninvited guest, let alone do things that leave traceable logs.
Please don't compare houses with databases; this is pointless, and the discussion usually degrades into house-related tangents.
According to data protection laws, it's certainly a crime to leave some systems unprotected like in this case.
But this is the entire issue. It's common practice for a business to have open doors because they intend for anyone to come inside and patronize their establishment. Some of the businesses are even in residential houses, where the area is zoned for that sort of thing.
The question is what that's supposed to mean for a computer system. Obviously answering requests is the intended purpose of a public-facing internet server, and the general expectation is that if you're not allowed to make a particular request, the server will refuse it. Protocols even have widely supported standards for this, e.g. HTTP 403 Forbidden.
So what are you supposed to make it of it when you issue a well-formed request and the server answers it? The default expectation is naturally that they intended it to, because if it was intended to do otherwise then they'd have configured it to do otherwise. How it responds is how you know if you're allowed to do it.
At some point you may be able to reason out that what's happening is the result of a misconfiguration (exceptional circumstance) instead of the standard expectation (server refuses requests if server operator intended them to be refused), but this may not be obvious to the user until after it has already happened.
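In HTTP terms that feedback loop is explicit; the status code is the only signal a user gets about whether a request is permitted (URL invented):

    # Print just the status code for a well-formed request.
    curl -s -o /dev/null -w '%{http_code}\n' https://shop.example/api/orders/123
    # 200 means the server chose to answer; 403 means it refused.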
Not a lawyer, and certainly not in Germany, but spend a lot of time reading and noodling about this space. There's maybe a reach-y contract lawsuit if you violated reverse engineering terms; it wouldn't win, but it could be annoying and expensive.
Actually using the database creds to the point where you can tell a story about the data in the database though is enough to put you at criminal risk in the US; the DOJ doesn't prosecute good-faith vulnerability research, but depending on the kind of poking you do and the kind of logs you keep of what you find, you can put yourself in a position where your good faith isn't assumed.
Can't you just forget you ever saw the hardcoded credentials? That seems like the safest course of action to me.
I'd probably not say anything, or at most anonymously. I'm not taking the risk, as I stand to gain nothing.
Do it anonymously if you do it at all.
In my opinion this is like filing criminal charges because someone opened a door at the front of your business. Normally what is known to your front end is not sensitive data for the entire user base. So if you take a peek in, it's the same as wondering what the extra front door is at a brick-and-mortar store. You've got the main door with the OPEN sign, and then a plain door that, whoops, is unlocked and has all of your customers' files laying out on tables. At this point you've done nothing wrong. If you start rummaging around, you're outside of plausible deniability.
It's not clear, but what is clear is that cases like this can and often do have a chilling effect on legitimate, well-intentioned reporters of vulnerabilities, which leaves everyone else at even greater risk due to negligence on the part of the company. We should be highly critical of these legal outcomes, particularly when there was no intent to harm.
I think the moral of the story is, if you stumble on such a vuln while working in Germany, in the future it's best practice to sell it on the darknet, since you unknowingly committed the crime anyway. Might as well get paid for it.
Please don't shoot the messenger; I didn't write the stupid law.
I stand by the phrase "Hacking Is Not A Crime".
It's what you do with the data once you have access to it. If you do nothing, it shouldn't be a crime; the crime should be the presumably nefarious usage, if there is any.
You can do a lot of damage by simply accessing data: blackmail, state or industrial espionage, insider trading, HIPAA violations, obtaining signing keys or passwords for lateral movement, etc. All those require additional intent, to be fair, but it's hard to prove intent and much easier to prove access. And there are very few legitimate reasons to access someone else's private data, and many nefarious ones.
You missed the second part, which is that the use of the data is the crime, not the access to it.
I would not recommend trying that defense in a courtroom.
It has been tried, and been successful in many cases (on mobile, so not linking at this time). A proper example is with respect to right-to-repair laws and court cases. This is one of the primary arguments for the ability to hack your own devices, and it comes up in grey/white-hat hacking defenses as well.
Are you hereby giving people permission to hack your devices as long as they only use it to do good?
This is true, but he believed that the database was held exclusively for the client, hence only containing data belonging to the client, who gave him permission to access his data. Apparently the name of the database also seemed to indicate this.
As soon as he then noticed that it contained all the data of all customers, he disconnected.
It doesn't matter what he thought was in the database or what it was for. He knew it was hosted and provided by a third party for use with that third party's software which his client was using.
His crime wasn't accessing the data. His crime was accessing the data in a way he had not been authorized to do. As far as he was concerned, the investigation should have stopped at "there are hardcoded plaintext credentials here". But not only did he then also try if those credentials were correct, he also used those credentials to go spelunking. That's trespass even if he had reason to believe there were no other customers' data on that server.
Was the client not authorized to access their data with the use of the password, which the application managed for them?
The password should have been a per-client password intended to protect the client's data from any foreign access, a key given from the service provider to the client in form of an easily usable application in order for the client to make use of the service.
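As a rough sketch of what per-client isolation could have looked like on the vendor's side (user and schema names invented):

    # One MySQL user per client, confined to that client's own schema.
    mysql -u admin -p -e "
      CREATE USER 'client_4711'@'%' IDENTIFIED BY 'unique-per-client-secret';
      GRANT SELECT, INSERT, UPDATE, DELETE
        ON client_4711_db.* TO 'client_4711'@'%';"
    # A credential leaked from one client's install would then expose only
    # that client's data, not every customer's.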
What should have been is irrelevant to the discussion. Yes, the security practices of the vendor were abhorrent, they jeopardized their customers' users' privacy, and they should be fined by the data protection agency for that. But that has no impact on what the contractor did.
The client was authorized to use the software which accessed the data using the password. The client was clearly not authorized to extract the password, use it to connect to the database manually or through their own software and go spelunking. How could you possibly think that was authorized? Implicit authorization can always only be interpreted in the most limited way. It doesn't matter what he expected to find, it only matters HOW he accessed it. That's just how laws work.
There's nothing confusing about it, it just doesn't frame it in the way you prefer.
From the perspective of the developer, it's natural to assume that the password was in place to prevent non-users from accessing, not legitimate users. After all, the credential wasn't hidden or obscured in any way. When it became clear that users weren't supposed to have access, it was reported to the vendor. Am I missing something here?
On one hand, there's a developer doing their job. On the other, there's another "embarrassed" company retaliating and intimidating would-be bug reporters. It seems crystal clear what's going on.
If someone can suggest a better (more accurate and neutral) title, we can change it above.
(It's best to use a representative phrase from the article body rather than making up new language; that's usually, though not always, possible.)
That isn't hacking, which is what the title implies. Hacking is more involved, an exploitation of systems.
This is just taking the keys and unlocking the door to your benefit.