
Bitwarden Heist – How to break into password vaults without using passwords

walki
113 replies
3d1h

Microsoft's %AppData% directory is a security nightmare in my opinion. Ideally applications should only have access to their own directories in %AppData% by default. I recently came across a Python script on GitHub that lets you decrypt the passwords browsers store locally in their %AppData% directories. Many attacks could be prevented if access to %AppData% were more restricted.

I also found a post from an admin a few days ago asking whether there was a Windows setting to disallow any access to %AppData%. The response was that if access to %AppData% is completely blocked, Windows won't work anymore.

zelon88
71 replies
3d

"AppData" is where user specific application data is supposed to be stored.

"The Registry" is where application configuration is supposed to be stored.

"ProgramData" is where application specific data is supposed to be stored.

"Program Files" is where read-only application binaries and code is supposed to be stored.

It really is a simple concept from a Windows perspective. What ruins everything is overzealous and/or ignorant programmers who don't take any pride in their work, or lack all respect for the user's environment. For example: an .ini file should not be a thing in Windows. That is what the registry is for. But the programmer writes the code for Linux, half-ass ports it to Windows, and leaves the .ini file because his code is more important to him than the end-user's operating system.

There is nothing wrong with AppData permissions. The problem is with the user's understanding of what it is for, and the developer's understanding of how it should be used.

bhdlr
11 replies
2d23h

Can't blame the programmer for that - Windows shouldn't allow the programmer to do stupid shit

EvanAnderson
5 replies
2d23h

The sheer volume of legacy software prevents this from being realistic. Microsoft's commitment to backwards compatibility has reaped rewards for them. Any restrictions would have to come with a user-controllable toggle.

If APIs prevented programmers from doing stupid shit, the devs would encourage end users to blame Windows and, more than likely, turn off the restrictions. (Case in point: User Account Control and making users non-Administrator by default. I've dealt with so much shitty software that opens its install instructions up w/ "Disable UAC and make sure the user has admin rights.")

There has to be a point where you draw the line and say "Dev, grow up and learn about the platform you're using." An app that required users to be root on a Linux machine wouldn't survive community outrage. Windows doesn't have that kind of community. (Try arguing with a vendor about idiot practices in their app and watch their sales gerbil attempt to end-run you to your manager...)

Wowfunhappy
2 replies
2d22h

What if Microsoft limited these APIs to programs with "Compatibility Mode" enabled? (And—this may already be the case, I'm not sure—made it impossible to enable compatibility mode programmatically?)

I feel like this would create a strong incentive for modern software to do things "properly", while still allowing legacy software to run (albeit with a couple of extra clicks).

dessimus
1 replies
2d22h

Look at how long we've been dealing with software that requires Java 6/7/8, and all the security issues that come with that. Servers/appliances with IPMI remote consoles that do not support HTML5. It's easy to say "replace the equipment" but our budgets don't always allow for that.

Wowfunhappy
0 replies
2d19h

I think Microsoft's commitment to backwards compatibility is awesome. But it would still be better to at least get newer apps working the right way. Even in the event those legacy apps remain in use for ~forever, at least there would be fewer of them.

nytesky
1 replies
2d21h

My Steam Microsoft Flight Sim requires admin rights, so clearly this is a lost battle. We just need to have containers for every app.

wongarsu
0 replies
2d21h

We may just get that. Microsoft's attempt to introduce sandboxing with UWP/msix was ignored by developers. Since then MS has added Windows Sandbox to Win 10 Pro and up, essentially disposable VMs for running sketchy software. I wouldn't be surprised if a couple versions down the line we get the option for more permanent app-specific VMs, with integration into the window manager similar to QubesOS. A lot of groundwork for that already exists for WSL2, like more efficient memory use between VMs and shared GPU access.

nulld3v
2 replies
2d23h

I don't think this is fair. Linux and Mac used to operate in generally the same fashion. Only recently have they started sandboxing stuff.

Windows doesn't have the same privileges because they are forced to maintain backwards compatibility.

withinboredom
1 replies
2d22h

It's just as bad there with everyone randomly shoving dot-files in my home directory instead of using ~/.config, ~/.local, ~/.cache, and friends.

Just to name a few in my home dir ... aws, cargo, dotnet, yarn, vscode...

All of these narcissistic tools are pretty annoying.
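
For what it's worth, honoring the XDG base directories is only a few lines of code. A minimal Python sketch (the tool name "exampletool" is made up; the fallbacks are the XDG Base Directory spec defaults):

    import os
    from pathlib import Path

    APP = "exampletool"  # hypothetical tool name

    def xdg_dir(env_var: str, default: str) -> Path:
        # Use the XDG variable if set, otherwise fall back to the spec's default.
        base = os.environ.get(env_var) or (Path.home() / default)
        return Path(base) / APP

    config_dir = xdg_dir("XDG_CONFIG_HOME", ".config")      # settings
    data_dir = xdg_dir("XDG_DATA_HOME", ".local/share")     # persistent data
    cache_dir = xdg_dir("XDG_CACHE_HOME", ".cache")         # disposable cache

    for d in (config_dir, data_dir, cache_dir):
        d.mkdir(parents=True, exist_ok=True)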

whoisthemachine
0 replies
2d21h

40% of those tools are majority controlled by Microsoft...

zelon88
1 replies
2d22h

See, I disagree with that. The computer is an arbitrary command execution machine. It does what you tell it to do. Don't tell the computer to do stupid shit and it won't. There are plenty of valid use cases where you want to use the capability of the computer without some arbitrary OS policy preventing you from doing it "because some programmers are irresponsible."

lxgr
0 replies
2d10h

In a world of various medium-trusted apps that I don’t love but still have to use to get my job (or a bank transfer, etc.) done, that model doesn’t really work for me anymore.

Users aren’t “telling the computer what to do” anymore for the most part, third party app developers are; this puts a lot of responsibility on the OS for protecting the interests of its user against that of a malicious or careless app developer.

Of course I want to be able to fine-tune that protection, but restrictive defaults make sense.

jtriangle
6 replies
2d20h

an .ini file should not be a thing in Windows

Hard, hard disagree there. Having config files available is vastly preferable to using the unmitigated shitshow that is the windows registry. That and a config file at least gives users a prayer at being able to provide some sort of troubleshooting information, and provides savvy users with a way to actually solve problems on their own.

half ass ports it to windows

Redmond, themselves, do all sorts of seemingly 'wrong' things with their directory structure, which tells me the 'free for all' nature of it is intentional, and not wrong at all. It is a terrible structure, it does cause problems, but, that's the conditions you work under while using windows. It's mostly OK in practice, but as bitwarden found out, there are conditions that developers have to account for if you require security and safety.

And factually, your presumed solution of "put things in the right place" is doubly broken, because if one acquires the correct privileges, there is no location on a windows machine where cleartext data is safe. The solution is not "store it in the correct location" the solution is to encrypt sensitive data at rest, regardless of location, which is more or less what bitwarden did. That's the correct strategy, and it's operating system agnostic.

calamari4065
5 replies
2d19h

Agreed. The windows registry needs to be killed with fire.

There's no appreciable difference between the registry and a directory of config files except that instead of an INI parser you have to use the much, much worse WIN32 API.

Editing config files is fairly safe and user-intuitive. Sure you can break something by writing the wrong config file, but you do not risk breaking everything. But clumsy use of regedit does have a chance of totally borking the entire system.

And then you have maniacs who store user data in the registry. I know of at least one game which stores save files in the registry.

I get the intention of the registry, but it's just not fit for purpose. Maybe it was better back in the 90s, but it's just a hellscape now.
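
For a concrete comparison, here's roughly what reading one setting looks like from an INI file versus from the registry, in Python with only the standard library (the paths, section, and value names are made up):

    import configparser
    import winreg  # Windows only

    # INI file: human-readable, diffable, easy to copy between machines.
    cfg = configparser.ConfigParser()
    cfg.read(r"C:\ProgramData\ExampleApp\settings.ini")   # hypothetical path
    ini_theme = cfg.get("display", "theme", fallback="light")

    # Registry: the same information, reached through the winreg/Win32 API instead.
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\ExampleApp") as key:
            reg_theme, _type = winreg.QueryValueEx(key, "Theme")
    except FileNotFoundError:
        reg_theme = "light"

Neither is hard, which is sort of the point: the difference is tooling and blast radius, not capability.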

raggi
3 replies
2d17h

There are real integration challenges with the "simple file approach":

  - File locking and concurrency
  - Atomic writes / moves
  - Realtime change observations

> clumsy use of regedit does have a chance of totally borking the entire system.

So does a clumsy rm -rf, which shows up in stories here far more often than stories of people breaking their registry.

Can you provide a recent reference to someone bricking their system with regedit?
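
To make the atomic-writes item above concrete, the usual file-based answer is write-to-temp-then-rename, roughly like this sketch (the filename is arbitrary):

    import json
    import os
    import tempfile

    def atomic_write_json(path, data):
        # Write to a temp file in the same directory, then rename over the target,
        # so readers never observe a half-written file.
        dir_name = os.path.dirname(path) or "."
        fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(data, f)
                f.flush()
                os.fsync(f.fileno())      # make sure the bytes are on disk
            os.replace(tmp_path, path)    # atomic on POSIX; MoveFileEx on Windows
        except BaseException:
            os.unlink(tmp_path)
            raise

    atomic_write_json("settings.json", {"theme": "dark"})

It works, but it's exactly the kind of thing every app has to remember to do; locking and change notification need yet more machinery.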

garaetjjte
1 replies
2d15h

Then use SQLite.

viraptor
0 replies
2d6h

That solves 1 out of the 3 issues... But at that point, why bother? The registry is a database already.

dangus
0 replies
2d16h

I think you could even make the argument that nobody breaks their registry because nobody wants to mess with something so user-unfriendly. Even the developers making applications tend to stick all their config in .ini files because files are easier for everyone to work with.

axus
0 replies
2d19h

But it's so easy to export all my PuTTY profiles from the SimonTatham registry folder to a .reg file and use on the next computer...

at_a_remove
6 replies
2d23h

I'm not so sure I completely agree with you about .ini files. I rather miss them. Some people have regarded the registry as a mistake, or at least an over-reach. I like the ability to edit .ini files and make them understandable.

Maybe the compromise solution is to put the user-relevant portion of the .ini file in %AppData%.

EvanAnderson
5 replies
2d23h

Please don't use INI files. The registry is infinitely more manageable for sysadmins than INI files. I hate it when your app makes me write scripts to manage settings versus just using the built-in tooling in Group Policy for dealing with the registry. (Yes, yes-- there is tooling in Group Policy Preferences for dealing with INI files. It fails spectacularly on malformed INI files. It has never been reliable in my experience.)

The idea of a centralized, programmatically-accessible configuration store was a good one (albeit this isn't what the registry was "for" originally-- it was just a file-type registry originally). GConf was a similar idea.

Devs misusing the registry to store opaque binary values (especially gigantic ones), accessing it with too high a velocity, and having a less-than-stellar file format have hurt it, for sure. Having few good schema rules or APIs that limited arbitrary developer access didn't help either.

at_a_remove
4 replies
2d23h

Okay, so that's the sysadmin perspective. Tell me about the user perspective.

Then, we should talk about, when they are in conflict, which one comes first.

EvanAnderson
3 replies
2d23h

A dev is going to include UI to manage the settings if non-technical users are expected to modify them. Whether those settings go in an INI or the registry doesn't matter at all for that UI.

Having said that, the level of technical skill required to edit an INI or the registry is about the same. Either way you're talking about a non-technical user descending thru a hierarchy of strange-to-them named containers to get to an arcane-looking location where settings are saved.

The user is going to call me when they have problems. It's easier for everybody if I can just administer the software centrally so they don't have problems to begin with.

pi-e-sigma
2 replies
2d21h

How is the registry going to make that administration any easier? The registry is its own microcosm; it doesn't matter if some setting is in an INI file somewhere on the filesystem or somewhere in the registry.

EvanAnderson
1 replies
2d21h

Sysadmins have great tooling to deal with the registry (Group Policy, Local Group Policy for non-domain machines). The tooling for INI files isn't very good.

sigzero
0 replies
2d16h

I don't know one sysadmin that likes how the registry does things. INI files for configuration are vastly easier to understand and edit. Use the registry for permissions and keep your tooling.

newZWhoDis
5 replies
2d22h

Is this post sarcastic and I’m just missing it?

4 different locations to store program data, some of which are hidden, is freaking stupid design. Like, beyond moronic design.

Everything, and I mean everything, about a program should be in a single folder structure and the OS should by default lock that application to only accessing its own folder unless otherwise granted permission (in a centrally auditable/revocable location).

Applications/ExampleApp/

Should contain everything, and deleting it there should clean it as if it was never installed. If it needs to access something in documents/desktop/etc, the OS should ideally present a file picker to pass in a copy, but applications could request access to a specific path if absolutely necessary. You should also be able to “save to desktop” without the application having read/write access to the desktop/documents.

“Exporting” is the application taking the local copy nested in Applications/ExampleApp/ and passing it to a system save dialog, then the OS can store the file (therefore having permissions) wherever the user wishes in a context menu that’s outside the application’s control (it’s the OS).

The idea that every installed application has wide-open filesystem access to say, all my documents, by default is pure insanity.

thereddaikon
4 replies
2d21h

That makes managing a user's application-specific data difficult though. For one, you have different users' data intermingling, which potentially causes new problems. But on top of that you make managing and backing up that data more difficult. As it works now with AppData, you can back up a user's profile folder under C:\users and get everything they have, assuming they haven't gone out of their way to save data to a strange place. If all data for an app lived in Program Files then backing up and restoring that data becomes much harder.

whoisthemachine
1 replies
2d21h

Ideally a new instance of the application is installed for each user. This also provides better isolation if one user upgrades/removes/breaks their application instance. I, for one, have really come around to the AppImage model [0] in the last couple of years.

[0] https://appimage.org/

thereddaikon
0 replies
2d2h

I don't like the solution being to just make containers out of everything. That introduces its own problems and lets developers be lazy in other ways.

pbhjpbhj
1 replies
2d19h

I guess the OS keeping track of .../programs/NameOfProgram/user settings/NameOfUser is just impossible? Or having an app install create a link in /users/NameOfUser/program-config/NameOfProgram to the config folder is equally impossible magic ...?

thereddaikon
0 replies
2d2h

That's asking a lot of Windows. But as a former sysadmin, that sounds like it would make things harder to manage. So it's linked, but it's not really there, so existing user-data backup automation wouldn't catch it. Sorry, your Outlook PSTs are gone. User data should live with users. The problem isn't with that paradigm. It's that it's abused and wide open.

thereddaikon
4 replies
2d22h

Microsoft themselves don't understand that. Teams installs itself to appdata in its entirety. One full install of teams for each user profile. Keeping it updated across one machine is impossible. How can we expect anyone else to do it right when Microsoft allows its own employees to abuse it?

JackeJR
1 replies
2d16h

Teams was kept in appdata like Chrome so that these programs can update themselves without admin privileges and I suppose that is how they keep users on a recent version.

thereddaikon
0 replies
2d2h

Except it completely backfires when you have a workstation with multiple users who use it infrequently. Existing patching solutions, like Microsoft's own System Center, have a hard time coping with applications that live in AppData. So you end up with 8 instances of Teams on a system, 6 of which are months out of date.

Khaine
0 replies
2d5h

I just hope they kill teams with fire. It is hot garbage.

FuriouslyAdrift
0 replies
2d21h

The original Teams was an Electron app and was stuck with Google's methods.

The new Teams is based on WebView2 and runs from C:\Program Files\WindowsApps\

EvanAnderson
4 replies
3d

As a Windows sysadmin AppData has been an unmitigated shit show forever.

Developers (including those inside Microsoft) don't give a damn about how Microsoft intends anything to work, and AppData has become a dumping ground of software installs to end-run IT departments. A lot of malware dumps into there but good luck limiting execution from that directory hierarchy because all your business-critical end user communication apps live there now too.

The functionality of roaming users profiles (i.e. registry settings "following" you to a different computer, which gives a really slick user experience when it works) was completely ruined by devs dumping piles of small files into "AppData\Roaming" (and completely not understanding that "AppData\Local" even exists, let alone what it's for).

In Windows 2000-land you could redirect AppData to a UNC path and mostly get around this behavior. That's not really "a thing" anymore because you've got apps like Microsoft Teams storing sizable databases in these locations and getting really, really cranky if network connectivity is interrupted.

Windows development betrays its legacy DOS parentage even for devs who never lived thru that era. There were no rules. There was no adult supervision. There was poor documentation of APIs so you just hacked something together that worked well-enough. Periodically Microsoft tries to start over (all the APIs w/ "2" at the end, et. al.) and the cycle repeats.

SSLy
2 replies
2d18h

And conversely, on Linux it's so hard to get every shitty tool to put files in XDG dirs instead of spewing them all over ~.

oneshtein
1 replies
2d11h

A maintainer can patch the package to store its configuration files in an XDG directory, then upstream the patch. If no maintainer has done this, then the problem is deeper.

SSLy
0 replies
2d6h

https://wiki.archlinux.org/title/XDG_Base_Directory#Hardcode... Upstreams all too often reject patches or let them languish.

vladvasiliu
0 replies
2d9h

was completely ruined by devs dumping piles of small files into "AppData\Roaming" (and completely not understanding that "AppData\Local" even exists, let alone what it's for)

As someone who only occasionally uses Windows, I think `%AppData%` sending you to `~\AppData\Roaming` doesn't help.

devwastaken
3 replies
2d22h

"programmers won't use our poorly designed system therefore the programmers are wrong"

Windows registry is in itself insecure. Applications can't own perms to their own entries.

Look at what people are using and optimize for that. Clearly the intended system is wrong, and ego death is necessary to create real fixes.

The easy and expected fix being that applications get perms for their own folder, rejecting 3rd party by default.

The proper larger solution being open code signing. But MS and friends are making big cash so they don't care.

trympet
2 replies
2d22h

Windows registry is in itself insecure. Applications can't own perms to their own entries.

I think registry entries support DACLs, and permissions can be restricted to SIDs or user accounts. I have no first-hand experience with this though; YMMV.

The easy and expected fix being that applications get perms for their own folder, rejecting 3rd party by default.

Back in Windows 8, they launched an app model called UWP or something which does exactly this. Met with lukewarm reception from the industry because (you guessed it!) back compat.

sneak
0 replies
2d16h

Restricting to user accounts is useless. Malware runs as your user.

https://xkcd.com/1200/

rkagerer
0 replies
1d8h

They absolutely support DACLs. For the longest time I prohibited my own user account from modifying a certain registry key to prevent Dropbox from constantly reinstalling unwanted green checkmark overlays.

cameronh90
3 replies
2d22h

The point is that, nowadays, apps should by default be isolated from each other, rather than AppData and HKCU being a free-for-all.

Windows makes it hard to whitelist known-safe apps (there’s WDAC but it’s poorly documented and a PITA) and every program you run has access to everything of importance on your system.

Imagine how upset people would be if it turned out TikTok on your phone can access your entire iCloud Drive and Keychain. Yet we accept this security model on our desktops.

pixl97
2 replies
2d17h

We accept it on the desktop because the desktop app model is from before the internet. There were only 'trusted' applications that had access to all the user's data (and really, most of the time, the entire machine), and there wasn't even the idea of a built-in internet connection at all. In addition, desktop applications are built around the ability to read the user's data files. Desktop users typically want all their Excel files accessible, along with any embedded images from anywhere in their user directory.

For the most part, the changes you'd want to implement for security would ruin the productivity of most of the workflows desktop users have these days, and would take a massive amount of refactoring to get anywhere close to how things work now.

rkagerer
0 replies
1d8h

Older desktop apps also tended to be more trustworthy.

There's so much commodity garbage out there now (e.g. I find it near impossible to find quality ad-free apps on Google Play)

lxgr
0 replies
2d10h

There’s a difference between reading user data (i.e. “My Documents”) and reading other apps’ application data (e.g. Firefox’s cookie jar).

macOS has started disallowing the latter (i.e. restricting access to other sandboxed apps’ files from both sandboxed and unsandboxed apps) more than a decade after the OS was introduced, yet I don’t feel like my productivity has been ruined.

hinkley
2 replies
2d23h

"AppData" is where user specific application data is supposed to be stored.

"ProgramData" is where application specific data is supposed to be stored.

Simple maybe. Coherent, no.

voidfunc
0 replies
2d15h

It's not really any worse than the *nix mess of /bin, /usr, /usr/bin, /usr/local/bin and /opt ... and probably a couple of others I missed.

lysp
0 replies
2d4h

It was confusing for me too.

Basically:

- AppData = User (interactive) application storage
- ProgramData = Service / Background (non-interactive) application storage

hattmall
2 replies
2d23h

So what category do stored browser passwords fall into? Because it sounds like "user specific application data", which goes in AppData, which is the issue. But if that's not correct, which of those locations is?

oefrha
0 replies
2d21h

It should be in AppData. Gp is just a really weird unrelated rant.

ggp: unsandboxed AppData (unsandboxed filesystem in general, really) allowing everyone to read everyone else’s stuff is a security nightmare.

gp: stupid programmers don’t respect Windows’ simple scheme to place data in four different places!

What? Even if everyone places data correctly, they can still read everyone else’s stuff, as long as they belong to the same user. That’s the problem.

JackeJR
0 replies
2d16h

They belong in encrypted user credentials: https://support.microsoft.com/en-us/windows/accessing-creden....

Aerroon
2 replies
2d22h

But why split up the application like that? Why not have a folder for each application that just contains everything?

Everything being in one place by default also means that a user can just copy the entire application folder as a backup.

johnny22
1 replies
2d20h

Because it'd be really nice to have a place with just application data to back up: no configs, no application state. Or alternatively, it's nice to have a place to just factory reset all your app config, but keep all the data.

Aerroon
0 replies
2d2h

But would you trust that you would actually get all the app's configs? I.e. window position and all that.

And would you trust that you would only affect that one application and not any others?

And wouldn't you just uninstall and reinstall the application anyway?

spacephysics
1 replies
2d21h

I agree in a perfect world, but I believe the OS should have a design that “forces” the programmer to maintain the correct abstraction.

Or at least have the override for such abstractions be blatant and explicit if the programmer wants to circumvent them.

And of course, given the age of Windows OS/ecosystem, it’s a pipe dream to have a redesign that isn’t backwards compatible

zelon88
0 replies
2d21h

How do you tell the program what belongs where? How does the OS know that the application is reading a file full of configuration entries that should be in the registry? What is the difference between reading a file full of data and reading a file containing your own configuration?

How does the OS know that the file you're writing to belongs in AppData or not?

To create the system calls for this you would break everything about Windows file permissions. Currently, you interact as a user account. In order to accomplish the real-time heuristics you're proposing, you would also need an application user account in addition to the user's user account.

At what point does the responsibility for knowing how to code fall on the programmer? How much capability are you willing to take away from effective programmers, to artificially protect the ineffective ones from themselves?

jabroni_salad
1 replies
2d20h

You forgot about "my documents", which is of course a great catch-all location for all four types of data you mentioned.

rkagerer
0 replies
1d8h

I actively avoid that dumpster fire. None of my actual documents live in any portion of My Documents.

burnte
1 replies
3d

Thank you! And Program Files is for 64-bit Windows apps, Program Files (x86) is for 32-bit apps, but vendors use both interchangeably and sometimes use both for the same app!

rkagerer
0 replies
1d8h

Don't forget about Wow64 and redirection ;-).

rkagerer
0 replies
1d9h

It really is a simple concept

At first I thought you meant that sarcastically.

Microsoft got overzealous showing off their long file names back when that capability was introduced to their filesystem, and any sense of organization in the OS fell apart after that.

I actually miss .ini files. It was nice being able to keep your software's data alongside it (in a simple folder like C:\Programs\3DS) and made it easier to clean up remnants. I understand what drove the design, but a more sparing and opinionated approach could have produced a much more elegant outcome.

Incidentally, even Microsoft software is wildly inconsistent in how it uses the registry.

jasonjmcghee
0 replies
3d

Making it easier / less work for more devs to do the right thing doesn't seem like an inappropriate request. If users are misusing your system, there are other solutions than RTFM

dvngnt_
0 replies
1d8h

No way is that simple.

Your rules would state that "application specific data" would not reside in AppData, even though those exact terms are there. It's the opposite of self-documenting.

dingnuts
0 replies
2d23h

What ruins everything is overzealous and/or ignorant programmers who don't take any pride in their work

uh you mean overzealous product managers and business owners who never let programmers take their time on anything because quality doesn't matter?

why would I take pride over my employer's property? lol if the code he buys from me is bad, that's his problem, especially since I have to stick to his timelines and am not given sufficient equity and agency to feel ownership over the project.

you know what makes programmers lose their desire to take pride in their work? getting blamed when we're ordered to cut corners, or implement bad designs. fuck right off with that, we're not the ones in power.

dingaling
0 replies
2d21h

Makes /bin/, /usr/bin/ and /opt/ seem simple

callalex
0 replies
2d17h

You are speaking orthogonally to the topic you replied to. The parent wants sandboxing between different programs so that one cannot read another’s data without explicit configuration and consent.

mrguyorama
21 replies
3d

python script on GitHub that allows to decrypt passwords the browser stores locally in their %Appdata% directory.

Yes, otherwise known as "if you run code on your computer, it can run code on your computer".

If a random python program can "decrypt" the passwords, that's not encryption. And browser password management isn't about security, but convenience.

AlienRobot
13 replies
3d

if you run code on your computer, it can run code on your computer

For the love of God will someone please just make a web browser that isn't a web browser and it's just a cross platform multimedia sandbox with a couple of APIs in it, and you can run programs written in rust or something on it, and it doesn't let the programs touch your file system unless it has explicit permission? That would solve 99% of the application use cases. That's literally everything I want. I want the safety of the browser, outside the hell that is web development.

mrguyorama
7 replies
3d

It's called iOS. Browsers are also NOT safe. You know what was safe? Not letting random endpoints ship you code to run. HTML was safe, though implementations at the time likely had security flaws.

You cannot make a Turing-complete language that JIT-compiles into machine code and verify it as "safe". Machine code is not safe, so anything that lets you generate arbitrary machine code cannot be proven to be safe. If you take away the arbitrary machine code generation from JavaScript, it's too slow to run the modern web.

withinboredom
2 replies
2d22h

I still can't use a password manager to keep my apple account secure. You must memorize your password, and be able to type ... uh, I mean, draw, no, write? your password on a watch as well (if you get one of those).

iOS is not exactly safe until I can use it without knowing my apple password.

fragmede
1 replies
2d21h

I don't know my Apple password, it's in 1password. I don't use it on my watch though, I have a PIN there.

withinboredom
0 replies
2d20h

My watch somehow became unpaired from my phone and needs my password. I just ignore the prompt because all attempts to enter the password fail for one reason or another. Even moving my wrist too much or taking too long clears the prompt.

yencabulator
0 replies
2d15h

What part of "cross platform" does iOS match?

semolino
0 replies
2d22h

On a related note, I appreciate the ability to specifically disable JavaScript JIT in GrapheneOS' browser, Vanadium. Theoretically, it's a nice balance of maintaining site compatibility (as opposed to disabling JS entirely) and reducing one's attack surface.

lxgr
0 replies
2d9h

You're skipping over a lot of pragmatic middle ground between "full hardware access" and "verifiably safe" (i.e. formally proven?) here.

An absence of Turing completeness and JIT compilation is neither necessary (see sandboxing) nor sufficient (see various exploits against media codecs, PDF parsers etc.) to ensure safe processing of untrusted data, whether that data happens to be "actual data" or code.

You can make your own life easier or harder with your choice of sandboxing target, though: x86 Win32 binaries are probably harder to sandbox in a working and secure way than e.g. WASM/WASI.

AlienRobot
0 replies
2d14h

Then don't compile it into machine code? The problem is in application development, not low-level programming. If a random person on the internet makes an application, there's a non 0% chance it's malware if you try to run it. It shouldn't be that dangerous. It's ridiculous that it still is that dangerous after decades of desktop computing and the only way to avoid this is anti-virus heuristics.

All we want is to get rid of the possibility of an application developer including evil code.

We could have a fully interpreted language layer running on a platform that never lets application code touch the file system. How do applications do fast stuff like GUI then? You just have a package manager with libraries that can do low-level stuff but are vetted so they don't expose APIs that let application code interact with the file system. That way, in order to exploit a user's computer, you need to exploit a flaw in a library thousands of other programmers use instead of just importing std io.

A lot of security seems geared toward server environments where you are only dealing with code you fully trust in, like the left-pad library. If bad code broke your server, you could really just load a backup. But most people using computers are on their personal computers, a majority of them have no backup, and they are downloading and running random programs all the time. It makes it harder for both desktop application developers and their users if there isn't a sandboxing layer in the middle. It's probably one of the factors that is killing desktop apps in the first place, since most users can trust a website that is an image editor, but fewer would install an image editor because it could contain a cryptominer, or ransomware, or a virus, or whatever.

blep_
3 replies
2d17h

The JVM did that many years ago and nobody liked it. I can't help but think wasm is just the same idea but worse.

yencabulator
2 replies
2d15h

Outside of web applets, set-top boxes, and DVD players, JVM didn't really do much sandboxing. On the desktop or server, it did practically none.

mdaniel
1 replies
2d14h

I think the rest of your sentence was "by default" which is the same thing the comment you're replying to said: "security gets in the way of everything"

One could always launch any java process with java -Djava.security.manager -Djava.security.policy=someURL and it would sandbox a huge number of things (see: https://docs.oracle.com/en/java/javase/17/security/permissio... )

The problem is that defining a reasonable policy for any modern app is a gargantuan pain -- as is the case with any security policy language -- so as the GP said people hated it and now it's dead https://openjdk.org/jeps/411

yencabulator
0 replies
2d14h

I think a key part of solving that is by not thinking of it as a set of security enforcement rules on top of the preexisting platform, but as a new platform (that just runs everywhere). So, instead of ACL listing what files can be accessed, shove it in a sandbox where the app has its own files, and the platform open file dialog enables the user to authorize one-time access to individual files.

You basically can't take a complex thing and write complex security rules for it and expect success & real world adoption.

justincormack
0 replies
2d22h

That's pretty much Wasm with WASI (minus multimedia right now, though).

lukevp
3 replies
3d

Full unrestricted disk access for all users and code isn’t the only way an OS can be designed.

mrguyorama
2 replies
3d

AppData is specifically where apps store data, and there are and were plenty of legitimate examples where you want some code to access data from an app in there.

The entire point is that it is not meant to be a secure location, was never meant to be a secure location, has no intended security features etc. If you store your passwords in a text file on the desktop, that is also insecure but you would be wrong to say Notepad has a security vulnerability. Similarly, if you stored your passwords in the Windows registry unencrypted, that would also be insecure, but does not demonstrate a flaw in the Windows registry.

If you want to be able to leave your secrets in the open without them being compromised, then you encrypt them.

Browser password managers are not secure. That is not Windows' fault.

MiguelX413
1 replies
3d

Regardless, full unrestricted disk access for all users and code is insecure.

mrguyorama
0 replies
2d23h

It isn't full unrestricted disk access for all users and all code. Any OTHER user, or code running with another user's permissions, cannot access YOUR AppData directory. The AppData stuff was the running user's AppData. They already had total control of the user's machine, and in fact, had control of that user's domain administrator! This attack is only possible if you have control of the user's domain administrator AND data access to the user's machine so that you can use both the locally stored Bitwarden data AND the domain's backup decryption keys. The phone OS model wouldn't work here. The security compromise happened when the domain administrator account was breached.

baldfat
2 replies
3d

I tell myself and other people: if you have a password saved in your browser, ask whether you're okay with bad people knowing that password. It also makes it easy for people in authority to get to that password with a simple court order.

mrguyorama
0 replies
3d

Most average people are not sure of password managers because the idea of losing the god password and losing access to EVERYTHING is terrifying, and there is mathematically no way to recover your secrets. Most normal people have lost a password before, so that's something they think about.

Also for most normal people, an unencrypted note on their desktop with plaintext passwords that are DIFFERENT FOR EVERY SITE is STILL more secure than the SOP of using one strong password for everything. For that to be compromised, someone needs to be able to run code on my local machine, in which case, they can just install a keylogger, so encrypted passwords are no increase in security. I genuinely don't care if App1 on my computer can fiddle with App2's bits, because I chose to run App1 and App2, they are trusted.

lxgr
0 replies
2d9h

Passwords saved in browsers for most users only protect access to accounts that are also accessible with a simple court order, though.

Pesthuf
4 replies
2d15h

Is there even a way to opt in to having a secret be accessible only for your process? Like, a way to maybe sign your executable and then use a windows api that then gets "oh. This process is made by the same vendor that created this secret, so it’ll be allowed access".

It’s just ridiculous that the most trivial, unprivileged process can just steal any file and any secret accessible by the user it’s run as. Unless that secret is protected with a key derived from a separate password the user has to put in.

o11c
3 replies
2d14h

I don't think it's possible on Windows.

It's trivial on Unix - just make the program setgid and change the folder permissions to only allow the group. This can be nested, though that requires the relevant program to be aware of the need to walk through several levels; often a symlink can hide that.

Note that when creating such a directory setup, `chown`ing away the user requires a privileged helper utility. But you need to make such utilities anyway so the user can delete such directories.

***

Important note - most other "solutions" only protect you from apps that opt in to security. A proper solution, like this one, protects from all processes running as the user, except the process of note.
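
For the curious, the directory half of that setup is small. A rough sketch (needs root; the group name "vaultgrp" and the path are made up, and the reading program would additionally be installed setgid to that group, e.g. chgrp vaultgrp prog && chmod g+s prog):

    import grp
    import os

    GROUP = "vaultgrp"                      # hypothetical dedicated group
    SECRET_DIR = "/var/lib/example-vault"   # hypothetical location

    gid = grp.getgrnam(GROUP).gr_gid
    os.makedirs(SECRET_DIR, exist_ok=True)
    os.chown(SECRET_DIR, 0, gid)    # owned by root:vaultgrp, not by the login user
    os.chmod(SECRET_DIR, 0o2770)    # setgid dir; only root and the group can enter

Arbitrary code running as the user can't read the directory because the user isn't in the group; only the setgid program gets the group's access.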

viraptor
2 replies
2d6h

Or use selinux/apparmor - those have supported app sandboxing without group tricks for a long time.

o11c
1 replies
2d

Those are useless because they're opt-in, and we can't expect malicious programs to opt in.

There's probably some mandatory mode but since it breaks all sorts of programs nobody can afford to use it.

viraptor
0 replies
1d22h

Apparmor is opt-in so it protects from exploration mostly, but selinux can definitely work with the whole system by default. It's not trivial, but you can at least prevent apps from accessing personal information unless explicitly allowed. I've been using it for years without issues. It really requires only a minimal amount of learning and you don't need to turn it off.

shortsunblack
2 replies
2d22h

Microsoft is trying to do that with msix and a new filesystem driver that transparently restricts file system access to the app. It should land in Windows 11 this year. See https://youtu.be/8T6ClX-y2AE for an explanation of the functionality.

viraptor
0 replies
2d6h

The msix story is really weird/incomplete so far. Let me just leave it at: creating services is part of msix on Windows 11, but not Windows Server. Maybe it will be more than a toy in a few years, but we'll still have to wait for old server versions to get replaced.

hervem
0 replies
2d16h

And App Store-distributed applications, which are isolated by default, also lose some features (custom shortcuts, for example).

giancarlostoro
2 replies
3d

The response was that if access to %Appdata% is completely blocked Windows won't work anymore.

Yikes. I really wish that instead of Microsoft wasting resources on telemetry nonsense, they would focus on optimizing their OS and modernizing some of these blatant security issues.

I guess it won't happen until we have another wave of ransomware or something of the sort.

iudqnolq
1 replies
2d5h

That's like saying Linux doesn't have sandboxed apps because chmod -R 000 /var will break your system. Technically sort of right, but not a useful or interesting observation.

giancarlostoro
0 replies
1d23h

The difference being Windows is a much bigger target for abuse since it's the most commonly used desktop OS, with somewhere around 70% market share or higher, depending on where you get those numbers. It's also used a lot in corporate environments. Linux usage of /var/ and /etc/ differs depending on various factors too... developers / distro maintainers put files in different places.

bhdlr
2 replies
2d23h

So - the moral of the story is to never use Windows?

sjfjsjdjwvwvc
1 replies
2d21h

Or don’t use their „security“ features. AFAICT everything would have been fine if they used a hardware key as second factor.

lxgr
0 replies
2d9h

How do you know which app is accessing your hardware key in the absence of any OS feature mediating access to it?

adontz
1 replies
2d23h

I believe the following is the solution. https://learn.microsoft.com/en-us/windows/win32/secauthz/app... No?

cameronh90
0 replies
2d22h

Essentially yeah but it’s currently opt-in from the app developer. I believe (but may be wrong) that an app which doesn’t implement AppContainer isolation can currently access the data written by apps that do implement it. I think the intention is for it to become the default one day.

AlienRobot
1 replies
3d

There's probably nothing that I hate more in programming than having full access to the file system. Any time I write a program that has to delete a file, I just have it move the file into a trash folder instead, just in case I mess up somewhere and accidentally delete the entire file system.
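
The "move instead of delete" habit is only a few lines, something like this sketch (trash location and naming scheme are arbitrary):

    import shutil
    import time
    from pathlib import Path

    TRASH = Path.home() / ".exampleapp-trash"   # hypothetical per-app trash folder

    def soft_delete(path):
        # Move the file into the trash folder instead of deleting it outright,
        # so a bad path or a buggy loop is recoverable.
        TRASH.mkdir(parents=True, exist_ok=True)
        target = TRASH / f"{int(time.time())}-{Path(path).name}"
        shutil.move(str(path), str(target))
        return target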

pi-e-sigma
0 replies
2d21h
dist-epoch
0 replies
3d

AppData is the Windows equivalent to Linux home directory dotfiles.

Ideally applications should only have access to their own directories

This happens for Windows Store apps, which are sandboxed similarly to mobile phone apps.

WalterBright
20 replies
3d

I've always considered password vaults as a single point of failure that will compromise all of your passwords. I've had lots of intelligent, well-informed programmers argue that my concern is groundless.

firen777
5 replies
2d17h

The way I look at it, a password vault is a single point of failure with a very, VERY tiny attack surface: an attacker would need to directly target you with a sniper rifle to actually hit you (assuming you are not using things like Lastpass. I personally use Keepass and synchronize the local vault across devices using Syncthing). Suffice to say, unless your last name is Snowden, it should not be a concern to you.

Compared to the common way of "managing" passwords (i.e. reusing one password everywhere), that is still a single point of failure. The difference is the attack surface balloons up in proportion to the number of websites you sign up for. And just like a balloon, all it needs is one poke, one website storing your password in plaintext, to blow it all up.

lxgr
4 replies
2d9h

Suffice to say, unless your last name is Snowden, it should not be a concern to you.

I wouldn't be so sure about that. People store banking/payment credentials in them, so there is a large incentive to mount a scalable attack against an even moderately popular password manager. Crypto wallets are a popular target too for the same reason (although the risk is even more immediate there).

gruez
3 replies
2d3h

How are you going to "mount a scalable attack" against a local-only password manager?

lxgr
2 replies
1d23h

Malware targeting unlocked local password managers would be one option.

gruez
1 replies
1d22h

In that case aren't you already hosed because the same malware can steal all your login sessions?

dvngnt_
0 replies
1d7h

no because I'm not logged into all of my accounts at once but if they can open the PW database they can

lordofmoria
4 replies
3d

Everything is a tradeoff - but the basic balance is very strongly in favor of password managers:

1. Without a password manager that is shared on all your devices, you WILL re-use passwords out of frustration.

2. Without a password manager, if you do any sort of regular password sharing with an engineering team, friends & family, you'll resort to pretty insecure channels.

3. True E2E encryption, while still providing some surface area, has proven in the field through multiple pretty bad breaches[1] that it's a security model that holds up under real-world circumstances.

On the flip side, you are right: you are one compromised browser extension / binary away from having your local vault decrypted, and ALL your passwords compromised. But think about this: if someone has this much local access, chances are they can install a keylogger anyway, or read your clipboard, so the real difference is you've conveniently pre-loaded all your sensitive information in one go for the bad actor.

[1]For example: https://blog.lastpass.com/2022/12/notice-of-recent-security-...

WalterBright
3 replies
3d

With a keylogger, you lose passwords you typed in since the keylogger was installed, but that is rarely all of your passwords.

lordofmoria
1 replies
3d

Absolutely agree - that's why I said "so the real difference is you've conveniently pre-loaded all your sensitive information in one go for the bad actor."

kemotep
0 replies
2d19h

The average person usually does the same but without encryption or strong passwords.

I’ll stick to passwords that are impossible to guess and an encrypted vault with multifactor authentication.

edrxty
0 replies
2d22h

Most of these managers support some form of 2fa. I use a yubikey with mine such that if my master password is compromised someone would still need to obtain my security key. You can enroll multiple and keep one in a safe and one or more on your person. It's not perfect, but it prevents the vast majority of huge dragnet style malware attacks and a lot of the targeted ones until you get to the point where someone is trying to hunt you down on the street.

This still leaves a case where someone manages to get the final key out of memory but you're pretty hosed at that point anyway. I'd prefer a system where the yubikey itself is doing the final credential decryption instead of the CPU, unfortunately most people aren't that paranoid though.

tamimio
1 replies
3d

You can use password vaults without creating a single point of failure by enabling 2FA for the accounts in the vault, without storing the keys there. Of course, it would still be bad if the vault was compromised, but it would be unlikely that anyone could access those accounts without accessing your 2FA.

dzhiurgis
0 replies
2d19h

Is there a good solution/mitigation for sharing passwords with 2FA in the vault?

chimprich
1 replies
3d

That's because it is a SPOF. However, a password manager seems to me the best compromise along the security / convenience axes.

I memorise good passwords for a handful of my most critical stuff (and have MFA). They don't go in my password manager.

If my password manager gets compromised then I probably could lose some cash, maybe get embarrassed by being impersonated on social media - it could get very inconvenient but not catastrophic.

NegativeK
0 replies
2d22h

PW managers are SPOF that typically replace a different, worse SPOF: humans trying to remember all of the passwords.

paulpauper
0 replies
3d

If done correctly it works. "Correctly" being the operative word.

lxgr
0 replies
2d9h

I don't think anybody is arguing that password managers are the be-all and end-all of secure user authentication.

But what would you use instead for services that support only password authentication? And even for services with 2FA: If one of the factors is a password, where do you store it?

hypeatei
0 replies
3d

They make it easy to have strong passwords and sync across devices.

You could use a local vault and sync yourself, use a piece of paper in a safe, or use your brain to store them.

All of these come with tradeoffs and their own risks. Pick your poison.

_moof
0 replies
2d3h

Before password managers people used the same password on every site. Vaults being a SPOF is true but not really relevant. They're still an improvement over what people did before.

Sohcahtoa82
0 replies
3d

Without using a vault, people end up re-using passwords or using weak passwords, which is IMO worse.

hypeatei
14 replies
3d2h

I'm glad they made some improvements to security as a result of this finding. This "attack" is still very specialized though and requires local access which (as mentioned) could've exposed the user to keyloggers and other malware.

RedTeamPT
13 replies
3d1h

Yes, it requires an attacker in a powerful position with local access. However, it does not require special privileges or techniques that may trigger endpoint security (such as keyloggers or memory dumping). The only requirements are reading a JSON file and making a single Windows API call to retrieve the key.
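
To make that concrete: DPAPI will decrypt a protected blob for any code running in the user's session, with no prompt and no special privilege. A minimal Python/ctypes sketch of that single call (where the blob comes from, and how Bitwarden stored it, is described in the article; this is just the generic DPAPI call):

    import ctypes
    import ctypes.wintypes as wt

    class DATA_BLOB(ctypes.Structure):
        _fields_ = [("cbData", wt.DWORD), ("pbData", ctypes.POINTER(ctypes.c_ubyte))]

    def dpapi_unprotect(blob: bytes) -> bytes:
        # CryptUnprotectData decrypts as the current user -- no biometrics involved.
        buf = ctypes.create_string_buffer(blob, len(blob))
        data_in = DATA_BLOB(len(blob), ctypes.cast(buf, ctypes.POINTER(ctypes.c_ubyte)))
        data_out = DATA_BLOB()
        ok = ctypes.windll.crypt32.CryptUnprotectData(
            ctypes.byref(data_in), None, None, None, None, 0, ctypes.byref(data_out))
        if not ok:
            raise ctypes.WinError()
        try:
            return ctypes.string_at(data_out.pbData, data_out.cbData)
        finally:
            ctypes.windll.kernel32.LocalFree(data_out.pbData)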

malfist
8 replies
3d1h

Do hardware keyloggers trigger endpoint security?

RedTeamPT
2 replies
3d1h

No, but hardware keyloggers require physical access.

malfist
1 replies
3d

What is the difference between "physical access" and "powerful position with local access"?

Robin_Message
0 replies
2d10h

It's the difference between the evil maid attack (someone sneaks a keylogger into your turned-off machine whilst cleaning your room) vs local privilege escalation (the sysadmin installs a game and now your entire network is owned).

sumedh
1 replies
2d7h

I asked ChatGpt "where can I buy hardware keyloggers"

It just shut me down "I can't assist with that request."

ametrau
0 replies
2d1h
Sohcahtoa82
1 replies
3d

A hardware keylogger has to sit as a MitM between the keyboard and the USB port.

Sufficiently paranoid endpoint security could trip when the keyboard is unplugged and then plugged back in.

dannyw
0 replies
2d15h

That must have a lot of false positives for all but the most paranoid environments.

eddythompson80
0 replies
3d1h

They do not

jabart
2 replies
3d1h

It sounds like this required both local access AND an Active Directory Domain Administrator account (which should have triggered EDR at some point), which is the end game anyway. They just managed to hop out of the AD environment to a non-AD server because of the other password being in this vault. Glad they made it more user-interactive to decrypt.

kadoban
1 replies
3d1h

No, the final one only required local access as the user in question (this is mentioned after the one you're referring to that required AD Domain takeover).

jabart
0 replies
3d

Ah yeah.

1. Off-workstation decrypt using the AD DPAPI backup keys.
2. Local DPAPI list and dump for the Windows Hello biometric key.

hypeatei
0 replies
3d1h

Good point.

nlawalker
11 replies
3d2h

TL;DR: It's definitely interesting, but this is about attacking vaults that have biometric unlock enabled (and are thus stored on disk) on Windows, and it requires workstation access and a Bitwarden design flaw that was fixed in April.

> the attack already assumes access to the workstation of the victim and the Windows domain

The underlying issue has been corrected in Bitwarden v2023.4.0 in April 2023

As it turns out, we were not the first to discover this in March 2023, it had already been reported to Bitwarden through HackerOne.[1]

I could have sworn [1] had a dedicated post here on HN but couldn't find it, it's worth a read too.

[1]: https://hackerone.com/reports/1874155

Dalewyn
10 replies
3d2h

the attack already assumes access to the workstation of the victim

I seldom can take "vulnerabilities" that require physical access seriously, because if a hostile is physically next to my computer I have more pressing concerns than some passwords.

elzbardico
6 replies
3d2h

The problem is that an unsophisticated user doesn't necessarily think like that, and could come to the conclusion that it is not a big deal to leave his workstation unlocked while going to fetch a coffee, after all, well... "I have a password manager, and to have access to it, it requires unlocking". Then some colleague calls them for an ongoing meeting so they can share some insight about some question that was raised in the meeting and so on.

A far-fetched scenario? yes. But if it can happen, it will happen.

eddythompson80
2 replies
3d2h

To this day I don’t understand how “computer repair” shops are in business. When I was a shithead 16 year old I used to work at one. I found it amusing to see what files people deleted before giving us full physical access to their machines. I definitely saw things I shouldn’t have seen. It wasn’t until I saw something illegal that I freaked out and stopped doing it. I was so paranoid that I srm’ed my entire drive and theirs and never mentioned it to anybody. In retrospect I should have, but I was 16 and didn’t know what to do.

Dalewyn
1 replies
3d

For most people (ie: not us), computers are just another household appliance in the same vein as televisions, washing machines, refrigerators, and air conditioners. If it breaks, you get it fixed by a technician or go and get a new replacement.

eddythompson80
0 replies
2d21h

Yet they instinctively understand that they should delete certain files and prepare their computers for repair. We've had many who would walk in asking "hey, my computer is doing X, is this fixable?" We would have no idea, of course. We'd always ask if we could see it, and they would say "I just don't wanna have to get it ready for repair if it wasn't possible."

This is why I totally understand when Apple or MS go overzealous with encryption or T2 or secure boot. Despite "people like us" complaining about it.

Zetobal
1 replies
3d1h

If you have machines that have users logged in, that are unlocked when none of your users are working on them, and that are in reach of a 3rd party, you have bigger problems than this.

ben0x539
0 replies
2d21h

Everybody thinks their machines aren't within reach of a 3rd party until they are!

BobaFloutist
0 replies
3d2h

That unsophisticated user is also likely to have a printed out list of passwords taped to their monitor, or an unprotected excel file labelled "Passwords".

RedTeamPT
1 replies
3d1h

Yes, it requires an attacker in a powerful position but it does not require physical access. Any program that runs in the user's session (without any special privileges) could have autonomously retrieved the biometric key and decrypted the vault without user interaction and without Bitwarden running.

dist-epoch
0 replies
3d

They mentioned not wanting to use keyloggers which would be their standard approach.

gtirloni
0 replies
3d2h

In this case, physical access is very brief and almost imperceptible if you're not paying attention.

It's different from trying to pry open an encrypted hard disk from a laptop or something similar.

You probably won't even know that coworker you trust is compromised and attacked you this way.

guerby
8 replies
3d2h

I wonder if biometric bitwarden unlock on Android has the same kind of issue or not.

harrygeez
5 replies
3d1h

There are a few convenient scapegoats here, but ultimately in this case it is not biometric unlock that enabled this but rather a characteristic of Active Directory's design (I'm not sure I would call it a weakness).

For Android and iOS if you forget your PIN code I believe you are screwed, as in no one can decrypt your device for you.

RedTeamPT
4 replies
3d1h

Actually, it is not just an issue with AD design; the AD design only makes it slightly worse. The underlying issue is that biometrics are not required to retrieve the biometric key from DPAPI: instead of authenticating with Windows Hello, any program could just simply ask DPAPI for the key.

lxgr
2 replies
2d9h

Have you looked into how (whether?) Windows Hello actually checks which app is asking it to perform a private key operation?

On Android, this is tied to the app UID, and on iOS/macOS it's tied (I believe) to the developer team identifier. Hopefully there's a similar mechanism on Windows...?

briHass
1 replies
1d13h

It doesn't, or at least it doesn't for traditional applications. UWP (store apps) might, but I've never seen it.

To be fair, identifying an app when not delivered through some locked down store mechanism is actually problematic. DPAPI is tied to the user/machine account along with additional entropy provided by the application itself. It would be nice if MS added an option for DPAPI to use a hash of the name blessed by a CA in a valid code signing cert. However, that wouldn't matter in this case, since they had domain admin and could easily manipulate the cert store.
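
To illustrate what that application-provided entropy looks like (the values below are made up), it is just a second byte string passed to CryptProtectData; since the OS does not treat it as a secret, any process in the same session that finds it (usually hard-coded in the app or stored next to the blob) can pass the same value to CryptUnprotectData:

  // dpapi_entropy.c -- sketch of per-application "entropy" in DPAPI.
  // Build (MSVC): cl dpapi_entropy.c crypt32.lib
  #include <windows.h>
  #include <dpapi.h>
  #include <stdio.h>

  int main(void)
  {
      BYTE secret[]  = "hypothetical vault unlock key";
      BYTE entropy[] = "hypothetical-app-entropy";   // typically hard-coded in the app

      DATA_BLOB in  = { sizeof(secret),  secret };
      DATA_BLOB ent = { sizeof(entropy), entropy };
      DATA_BLOB enc = { 0, NULL }, dec = { 0, NULL };

      // Protect: ciphertext is bound to the user account plus this entropy.
      if (!CryptProtectData(&in, L"demo", &ent, NULL, NULL, 0, &enc)) return 1;

      // Unprotect: any process running as the same user that knows the entropy
      // (e.g. by pulling it out of the app binary) gets the plaintext back.
      if (CryptUnprotectData(&enc, NULL, &ent, NULL, NULL, 0, &dec)) {
          printf("recovered: %s\n", (char *)dec.pbData);
          LocalFree(dec.pbData);
      }
      LocalFree(enc.pbData);
      return 0;
  }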

lxgr
0 replies
1d7h

Self-signed code signing certificates would seem to be a good compromise (like Android does, for example).

Even a hash over the executable (+loaded DLLs) would work in a pinch. Breaks app binary updates, but for a “stay logged in and unlock via biometrics“ feature (as opposed to “store this credential forever”), that might be acceptable.
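
A rough sketch of that idea (not something DPAPI does today, purely illustrative): have a wrapper compute a digest of its own binary and use that digest as the DPAPI entropy, so only a bit-identical executable can decrypt the blob. BCryptHash and the SHA-256 pseudo-handle used here require Windows 10 or later.

  // exe_hash_entropy.c -- sketch of deriving DPAPI entropy from a hash of the
  // calling executable. DPAPI itself does not do this; this only shows the idea.
  // Build (MSVC): cl exe_hash_entropy.c bcrypt.lib
  #include <windows.h>
  #include <bcrypt.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      // Locate our own binary...
      WCHAR path[MAX_PATH];
      GetModuleFileNameW(NULL, path, MAX_PATH);

      // ...read it into memory (no error handling; fine for a sketch)...
      FILE *f = _wfopen(path, L"rb");
      if (!f) return 1;
      fseek(f, 0, SEEK_END);
      long size = ftell(f);
      fseek(f, 0, SEEK_SET);
      unsigned char *buf = malloc((size_t)size);
      fread(buf, 1, (size_t)size, f);
      fclose(f);

      // ...and hash it. The digest could serve as the DPAPI entropy, binding
      // the blob to this exact binary -- which is also why app updates would
      // break it, as noted above.
      unsigned char digest[32];
      if (BCryptHash(BCRYPT_SHA256_ALG_HANDLE, NULL, 0,
                     buf, (ULONG)size, digest, sizeof(digest)) != 0) return 1;

      for (int i = 0; i < 32; i++) printf("%02x", digest[i]);
      printf("\n");
      free(buf);
      return 0;
  }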

magicalhippo
0 replies
3d1h

My understanding from a quick reading is that Bitwarden essentially used Windows Hello to ask "is the user there?" and, if so, asked DPAPI to give Bitwarden the secret vault credentials, which it happily did because that's its job.

The problem with this was that the vault credentials in DPAPI were not safe from other programs running as the user, nor from domain admins, who could use the recovery key stored on the AD server (which they did in their attack after gaining admin access).

The solution was to use Windows Hello the way it was meant to be used. That is, to store an asymmetric key pair, where the private key is hidden and protected by the biometrics or hardware security key, and use that to encrypt the secret vault credentials before storing them in DPAPI.
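
A minimal sketch of that pattern (not Bitwarden's actual code; the provider name and the key name "DemoVaultKey" are assumptions made for this example): create an asymmetric key in the Windows Hello key storage provider, wrap the vault credentials with the public half, store only the wrapped blob in DPAPI, and let the provider enforce the biometric prompt on the private-key operation when unwrapping.

  // hello_wrap.c -- minimal sketch of the pattern above, NOT Bitwarden's
  // actual code. Assumptions: the Windows Hello ("Passport"/NGC) key store
  // is reachable via NCrypt under the provider name below, and the key name
  // "DemoVaultKey" is invented for this example.
  // Build (MSVC): cl hello_wrap.c ncrypt.lib
  #include <windows.h>
  #include <ncrypt.h>
  #include <stdio.h>

  int main(void)
  {
      NCRYPT_PROV_HANDLE hProv = 0;
      NCRYPT_KEY_HANDLE  hKey  = 0;

      if (NCryptOpenStorageProvider(&hProv,
              L"Microsoft Passport Key Storage Provider", 0) != ERROR_SUCCESS)
          return 1;

      // Create an RSA key pair whose private half never leaves the provider
      // (on later runs the app would NCryptOpenKey the existing key instead).
      if (NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM,
              L"DemoVaultKey", 0, 0) != ERROR_SUCCESS ||
          NCryptFinalizeKey(hKey, 0) != ERROR_SUCCESS)
          return 1;

      // Wrap the vault credentials with the *public* half. Only this wrapped
      // blob -- not the plaintext -- would then be handed to DPAPI for storage.
      BYTE vaultKey[] = "hypothetical vault unlock secret";
      BYTE wrapped[512];
      DWORD cbWrapped = 0;
      if (NCryptEncrypt(hKey, vaultKey, sizeof(vaultKey), NULL,
              wrapped, sizeof(wrapped), &cbWrapped,
              NCRYPT_PAD_PKCS1_FLAG) != ERROR_SUCCESS)
          return 1;

      // Unwrapping needs the private half, so this is where the provider --
      // not the application -- enforces the biometric/PIN verification.
      BYTE plain[512];
      DWORD cbPlain = 0;
      if (NCryptDecrypt(hKey, wrapped, cbWrapped, NULL,
              plain, sizeof(plain), &cbPlain,
              NCRYPT_PAD_PKCS1_FLAG) == ERROR_SUCCESS)
          printf("unwrapped %lu bytes after user verification\n", cbPlain);

      NCryptFreeObject(hKey);
      NCryptFreeObject(hProv);
      return 0;
  }

The difference from the original scheme is that DPAPI now only ever holds ciphertext, so a process that silently pulls the blob out of DPAPI (or a domain admin with the recovery key) still has to get past the provider's user-verification step.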

lxgr
0 replies
2d9h

Probably not, given that Android's security model sandboxes apps and accordingly can identify which one is trying to access a given keychain credential.

RedTeamPT
0 replies
3d1h

Not that we are aware of. The security model of Android and iOS also makes it much easier to implement biometric unlock correctly.

gtirloni
7 replies
3d2h

> As usual, we managed to get administrative access to the domain controller

As usual? Is that the state of Windows Server security these days? I never managed a Windows-based network so I have no idea. I heard about these things back in the 2000's but I'm surprised this is "usual".

jabroni_salad
3 replies
3d1h

Yes. If you have LLMNR and NTLM enabled, unsigned SMB allowed, and unencrypted LDAP binds, then your domain controller can be popped with zero effort using Metasploit.

Legacy protocols can be very sticky, and on most repeat pentest engagements I am able to use the exact same method every time because it never gets addressed. Modern Windows (since roughly the Vista era) will use better protocols out of the box, but will also allow downgrade attacks in the name of compatibility.

Hell, I still find SMBv1 in a lot of places.

charcircuit
1 replies
3d

Why hasn't Microsoft at least sandboxed these protocols if they are so bad in regards to security?

gruez
0 replies
2d3h

It's not that the implementation is broken (e.g. buffer overflows); the protocol itself is fundamentally broken. Sandboxing HTTP isn't going to protect your credit card information, and sandboxing MD5 isn't going to prevent people from finding collisions.

SV_BubbleTime
0 replies
3d

> Hell, I still find SMBv1 in a lot of places.

It cost me thousands of dollars last year to get our MSP to disable SMBv1 and force correct policies. They "needed to audit for a week" to make sure this "didn't break older software". It was annoying that I even had to ask, and that they didn't come to me first saying "We won't support you if you have SMBv1 enabled".

ziddoap
0 replies
3d2h

Well, they're a pentesting company. Getting access to the DC is goal #1 for every engagement they do.

So, I read this to be "as usual for us during our engagements", not "as usual for everyone all the time".

dist-epoch
0 replies
3d

It could be through social engineering which is the easiest way.

SV_BubbleTime
0 replies
3d

I promise, if you are using Windows AD and haven't had a pen test and remediation in the last few years, you would lose to a decent pentesting group.

rdl
2 replies
2d20h

The complexity of deployed identification/auth chains/secrets management/etc. is pretty terrifying; even if you can somehow understand it for one OS and hardware platform, if your service needs to support multiple OSes plus web plus multiple auth technologies plus a recovery path and everything else: dragons.

This is one of the few things cryptocurrency gets right in one specific way better than most other applications -- in most cases, everything is explicitly about operations with a key, and you build up protections on both sides of that. Unfortunately those protections themselves are often inadequate (hence billions of dollars in losses), but it's at least conceptually simpler and potentially could be fixed.

dannyw
1 replies
2d13h

I'm not convinced crypto is inherently less secure; I'd argue it's more secure on average. Data breaches happen every day, whether in financial services or not. The difference is that a breach is catastrophic for crypto, but just bad for most businesses.

lxgr
0 replies
2d9h

Have any of these catastrophic failures happened due to client/wallet-side user confusion, though?

I'm not a big fan of many things in crypto, but what I've seen in terms of "what you see is what you sign", clear user interfaces, secure user verification and confirmation etc. in some popular wallets is something that many existing banks could take a lesson from.

hiatus
2 replies
2d23h

Interestingly, the latest versions of Bitwarden for Mac that are available for download from GitHub no longer work with biometric authentication, requiring the user to download the app from the App Store in order to use that functionality.

Pesthuf
1 replies
2d15h

I wonder why that is. Do App Store Applications get extra privileges?

Why isn’t being signed enough for an application to store secrets only it can access in the keychain?

hiatus
0 replies
1d1h

I too wonder. The App Store version did ask for keychain access for the biometric data, while the non-app store version did not (and never asked for that, to my knowledge).

2bluesc
2 replies
3d2h

This affects Windows only.

Really feel that should've made it into the title; otherwise it feels like clickbait.

selykg
1 replies
3d2h

I worked in managing bug bounty programs at a previous job. If there is one thing I have learned it's that blog posts like this are heavily skewed towards making the problem seem much larger than it is. It's what gets the clicks, so it's not a surprise. It makes dealing with penetration testers and bug bounty participants really stressful and frankly, annoying.

Our policy was that we would be happy if someone were to discuss bounties we paid out, but we wanted the discussion to be fair and accurate. It never really felt like a mutually beneficial relationship. I don't miss that work at all, really, lol.

UberFly
0 replies
2d22h

"BITWARDEN HEIST - HOW TO BREAK INTO PASSWORD VAULTS WITHOUT USING PASSWORDS"

Like this one??

tamimio
1 replies
3d

Ok, but it assumes the domain is compromised, as stated in the article, and if the domain controller is compromised, it's game over for the connected machines; hence these attacks usually focus on domain admin or schema admin. Edit: it seems the second, non-biometric method doesn't need the domain; it does, however, still need that local access.

S-1-5-21-505269936…

Kind of off topic, but around 20 years ago, when I had my first portable hard disk, I used this method: creating these types of folders and remembering the number sequence in a creative way to hide my files when traveling/crossing borders, while putting some decoy files in plain sight, before I knew about or used data encryption. And it worked: I remember the agent taking my HDD, seeing him go through the decoy files, and then returning my HDD normally.

alberth
0 replies
3d

Agreed.

"We recently conducted a penetration test with the goal of compromising the internal network of a client in a Windows environment. As usual, we managed to get administrative access to the domain controller"

This article feels like clickbait; they buried the lede.

southernplaces7
1 replies
2d16h

I've always thought the trust placed in password managers was deeply misplaced. As with any company, it's only a question of time and circumstance until one of them is massively breached, yet right here on HN a whole bunch of people who should know better recommend them as if they were flowers from heaven. Because of course, hey, "it's just convenient".

southernplaces7
0 replies
2d9h

And why would this be downvoted? Is there some specific holy aspect to password management services that makes them immune to being the victims of the same massive data leaks that frequently affect a whole broad range of tech companies?

Aerbil313
1 replies
2d20h

Tangential, what is the state of security on Linux desktop nowadays? Say out-of-the-box Debian 12 using Wayland. Is it still just that nobody is attacking Linux so it's safe?

dharmab
0 replies
2d17h

Out of the box is not enough. Debian has a checklist for building a secure system, as well as some helper tools for configuration: https://www.debian.org/doc/manuals/securing-debian-manual/in...

rmbyrro
0 replies
1d22h

There should be a warning label on Windows like there is for cigarettes.

Every time a user logs in, Microsoft should be obliged by law to show: "Your computer will get cancer if you proceed logging in."

nati0n
0 replies
3d3h

Website overloaded. Archived version here: https://web.archive.org/web/20240103131242/https://blog.redt...

kritr
0 replies
2d23h

Sounds like the bigger issue in this case is that it’s not clear to developers in which cases they can rely on DPAPI to be entirely local, which I assume is what’s needed for password manager style applications.

0xbadcafebee
0 replies
3d1h

tl;dr

  This means that any process that runs as the low-privileged user session can simply ask DPAPI for the credentials to unlock the vault, no questions asked and no PIN or fingerprint prompt required and Windows Hello is not even involved at all. The only caveat is that this does not work for other user accounts.
Yikes

  Bitwarden has since made changes to their codebase to mitigate this particular scenario, which we will quickly summarize in the next section. They have also changed the default setting when using Windows Hello as login feature to require entering the main password at least once when Bitwarden is started.
Phew

Props to the security researchers for finding this bug! It's great that we have the infosec community to help protect us. Feels like one of the few industries whose monetary incentive is to help the public.