Quickemu: Quickly run optimised Windows, macOS and Linux virtual machines

acatton
36 replies
11h33m

Just a security reminder from the last time this got posted[1]

This tool downloads random files from the internet and checks their checksums against other random files from the internet. [2]

This is not the best security practice. (The right security practice would be to commit the GPG keys of the distro developers to the repository, and to check all downloaded files against those keys.)
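
A minimal sketch of that workflow, assuming the distro's signing key ships inside the repo (file names here are illustrative, Ubuntu-style):

  # verify the detached signature on the checksum file using only the
  # keyring committed to the repository, then verify the ISO against it
  gpg --no-default-keyring --keyring ./distro-signing-key.gpg \
      --verify SHA256SUMS.gpg SHA256SUMS
  sha256sum --ignore-missing -c SHA256SUMS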

This is not to downplay the effort that was put into this project to find the correct flags to pass to QEMU to boot all of these.

[1] https://news.ycombinator.com/item?id=28797129

[2] https://github.com/quickemu-project/quickemu/blob/0c8e1a5205...

colejohnson66
15 replies
11h17m

Can someone explain how this is a security problem? While GPG key verification would be the best way to ensure authenticity, it's doing nothing different from what almost everyone does: download the ISO from the distro's own HTTPS site. It then goes beyond what most people do and validates that the hashes match.

st3fan
8 replies
9h58m

Because you wrote HTTPS in italics... HTTPS doesn't mean anything. Both good and bad actors can have perfectly valid HTTPS configured. It is not a good indicator of the trustworthiness of the actual thing you download.

hn_throwaway_99
7 replies
8h47m

HTTPS doesn't mean anything.

That's not accurate at all. HTTPS should mean "we've validated that the content you're receiving comes from the registered domain that you've hit". Yes, it's possible that the domain host itself was compromised, or that the domain owner himself is malicious, but at the end of the day you have to trust the entity you're getting the content from. HTTPS says, importantly, "You're getting the content from whom you think you're getting it from."

electroly
4 replies
8h6m

HTTPS says, importantly, "You're getting the content from whom you think you're getting it from."

You need certificate pinning to know this for sure, due to the existence of MITM HTTPS spoofing in things like corporate firewalls. HTTPS alone isn't enough; you have to confirm the certificate is the one you expected. (You can pin the CA cert rather than the leaf certificate if you want, if you trust the CA; that still prevents MITM spoofing.)
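
For illustration, curl can enforce such a pin at download time; a sketch, where the hash is a placeholder you'd ideally obtain through a separate channel rather than from the connection you're trying to verify:

  # compute the SHA-256 of the server's public key (compare it out of band)
  openssl s_client -connect releases.example.org:443 </dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -binary | base64

  # refuse the download unless the server presents exactly that key
  curl --pinnedpubkey 'sha256//<base64-hash-from-above>' \
    -O https://releases.example.org/distro.iso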

yjftsjthsd-h
2 replies
7h45m

If an attack requires compromising my operating system certificate store, I'm reasonably comfortable excluding it from most of my threat models.

electroly
1 replies
5h15m

Obviously you choose your own relevant threat models, but it's common to do in iOS apps; many apps include it in their threat models. Pinning the CA cert is what Apple recommends to app developers. It's not an unreasonable thing to do.

https://developer.apple.com/news/?id=g9ejcf8y

yjftsjthsd-h
0 replies
4h42m

That link discusses how to do it but not why. The most likely reason that occurs to me is that iOS apps consider the user a potentially hostile actor in their threat model, which is... technically a valid model, but in the context of this thread I don't think that counts as a real concern.

brirec
0 replies
7h54m

I’m not aware of any HTTPS MITM that can function properly without adding its own certificate to the trusted roots on your system (or dismissing a big red warning for every site), so I don’t think certificate pinning is necessary in such an environment (if the concern is MITM by a corporate firewall).

An attacker would still need to either have attacked the domain in question, or be able to forge arbitrary trusted certificates.

st3fan
1 replies
7h33m

Yes, but we abandoned that idea a while ago. There are no more green locks in browsers. Nobody buys those expensive certificates that prove ownership. When you curl something, it doesn't show anything unless the certificate is actually invalid.

You are correct about what it _should mean_, but the reality today is that it doesn't mean anything.

yjftsjthsd-h
0 replies
4h40m

No, it still means that you've connected to the domain that you wanted to connect to and the connection is reasonably resistant to MITM attacks. It doesn't say anything about who controls the domain, but what it provides still isn't nothing.

repelsteeltje
2 replies
10h54m

Absolutely true, but one additional factor (or vector) is that this adds a level of indirection. That is, you're trusting the Quickemu people to take the same diligence you yourself would when downloading an ISO from, say, ubuntu.com, for each and every target you can conveniently install with Quickemu.

It's a subtle difference, but the trust-chain could indeed be (mildly) improved by re-distributing the upstream gpg keys.

Joker_vD
1 replies
9h58m

Eh, you can fetch the GPG keys from some GPG keyserver, it's not like those keys are just random files from the Internet. They're cross-signed, after all!

IshKebab
0 replies
50m

How do you know which keys to get? Let me guess... you read their website.

npteljes
0 replies
6h20m

Getting the signature and the file from the same place is a questionable practice in itself. If the place is hacked, then all the hacker needs to do is hash his own file, which has happened in at least one high-profile case [0]. And this practice doesn't even offer any extra protection if the resource was accessed over HTTPS in the first place.

[0] https://www.zdnet.com/article/hacker-hundreds-were-tricked-i...

JeremyNT
0 replies
9h10m

IMO you're exactly right.

I just looked at the shell script and it's not "random" at all, it's getting both the checksum and the ISO from the official source over TLS.

The only way this technique is going to fail is if the distro site is compromised, their DNS lapses, or if there's a MITM attack combined with an incorrectly issued certificate. GPG would be more robust but it's hardly like what this tool is doing is some unforgivable failure either.

It's not that the OP is wrong but I think they give a really dire view of what's happening here.

EasyMark
0 replies
8h53m

Trust is an input into any security equation. Do you trust all sources of these files? I don't think anyone was challenging GPG.

prmoustache
7 replies
11h14m

Also, the author is typing his user password during a live stream, on a mechanical keyboard, with the microphone on.

bobim
4 replies
10h22m

You mean that the sound of each key is unique and sufficiently different from the others? Or does it have to do with how a person types?

overengineer
1 replies
10h10m
bobim
0 replies
9h7m

I’ll be yodeling while typing from now on. Happy open-spacing everyone.

malux85
0 replies
9h8m

It doesn’t need to be unique, it just needs to leak enough information to decrease the search space enough to where brute force (or other methods) can kick in.

coppsilgold
0 replies
6h16m

Each key will produce a different sound, even if it's just a touch-screen surface keyboard, because each key sits in a different position on the surface and at a different position relative to the microphone - it may just be more difficult and require a higher-quality microphone.

Once you isolate and cluster all the key sounds you end up with a simple substitution cipher that you can crack in seconds.

jampekka
0 replies
8h32m

Poe's law strikes again.

cmiller1
0 replies
10h16m

While this comment doesn't seem 100% serious, I wonder if this kind of attack is made less effective by the trend in mechanical keyboards to isolate the PCB and plate from the case acoustically, e.g. gasket mount, flex cuts, burger mods. In my experience the effect of these changes is that each key sounds more similar to the others, rather than the traditional case-mount setup where each key's sound changes drastically based on its proximity to mounting points.

jvanderbot
6 replies
9h38m

How much of this is outdated practice? Shouldn't TCP/TLS be doing checksum and origin signing already?

In the days of FTP, checksums and GPG were vital. With HTTP over TCP, you need GPG more than checksums, since TCP already handles retries, checksums, etc., but you still want both because of MitM.

But with HTTPS, why does it still matter? It does both integrity verification and signature checks for you.

acatton
5 replies
8h50m

TLS prevents a different kind of attack, the MitM one which you describe.

GPG signing covers this threat model and much more; the additional threats include:

* The server runs vulnerable software and is compromised by script kiddies. They then upload arbitrary packages to the server

* The cloud provider is compromised and attackers take over the server via the cloud provider's admin account.

* Attackers use a vulnerability (in SSH, HTTPd, ...) to upload arbitrary software packages to the server

GPG doesn't protect against the developer machine getting compromised, but it guarantees that what you're downloading has been issued from the developer's machine.

jvanderbot
3 replies
7h48m

I agree, but I think that model of GPG is not how it's used any more. I think nowadays people upload a one-shot CI key, which is used to sign builds. So you're basically saying "The usual machine built this". Which is good information, don't get me wrong, but it's much less secure than "John was logged into his laptop and entered the password for the key that signed this"

So, you're right, that GPG verifies source, whereas TLS verifies distribution. I suppose those can be very different things.

Perhaps counter example: https://launchpad.net/~lubuntu-ci/+archive/ubuntu/stable-bac...

The packages here are from the latest upstream release with WORK IN PROGRESS packaging, built from our repositories on Phabricator. These are going to be manually uploaded to the Backports PPA once they are considered stable.

And presumably "manually" means "signed and uploaded"

spookie
2 replies
2h39m

No established GNU/Linux distribution is going to half ass GPG signing as you've implied.

jvanderbot
1 replies
2h15m

Which part is half ass? Manual or automatic?

spookie
0 replies
1h45m

One-shot CI keys. I guess I shouldn't have used that term; it certainly is more work than doing otherwise.

Nevertheless, their advantages offer nothing of value in this context. At least, I think so. Correct me if I'm wrong.

IshKebab
0 replies
47m

They then upload arbitrary packages to the server

And change the instructions to point to a different GPG key (or none at all).

I think the only situation it possibly helps in is if you are using untrusted mirrors. But then a simple checksum does that too. No need for GPG.

vdaea
1 replies
9h40m

It doesn't download "random files from the internet", it seems to be using original sources only.

TylerE
0 replies
6h55m

If you don't control the source, you can't guarantee that what it points to today is what it points to tomorrow.

password4321
0 replies
9h30m

FWIW:

- Signatures are checked for macOS now

- No signatures are available for Windows

Maybe this year attention from Hacker News will encourage someone to step up and implement signature checking for Linux!

jampekka
0 replies
8h36m

Still orders of magnitude better security practice than using any proprietary software or service.

dncornholio
0 replies
9h51m

My red flag was that there is no explanation of what an "optimised" image is.

_joel
20 replies
10h37m

Shout out to https://virt-manager.org/ - it works much better for me and supports running qemu on remote systems via ssh. I used to use this all the time for managing bunches of disparate VM hosts and local VMs.
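
For anyone who hasn't tried the remote part, it's just a libvirt connection URI (user and host names below are illustrative):

  # manage QEMU/KVM guests on a remote host over SSH, from the local GUI
  virt-manager --connect qemu+ssh://user@vmhost.example/system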

freedomben
3 replies
9h18m

virt-manager is one of the most underrated pieces of software there is. It's a powerhouse and I use it all the time. It is going to expect you to know some basic terminology about VMs, but it reminds me a lot of the old skool GUIs that were packed with features and power.

If your needs are simple or you're less technical with VMs, Gnome Boxes uses the same backend and has a beautiful, streamlined GUI. With the simplicity of course comes less flexibility, but the cool thing is you can actually open Gnome Boxes VMs with virt-manager should you later need to tweak a setting that isn't exposed through Boxes.
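
If I remember correctly, Boxes keeps its VMs in the per-user libvirt session, so pointing virt-manager at that same connection shows them:

  # open the user-session libvirt instance that Gnome Boxes uses
  virt-manager --connect qemu:///session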

buffet_overflow
1 replies
8h37m

I'm so appreciative that virt-manager has a GUI that crafts the XML and then lets you edit it directly. It really eased my transition from beginner to competent user of the program.

beebeepka
0 replies
5h6m

Agreed, it's much better than nothing, though I still don't know how to port forward.

blcknight
0 replies
8h24m

Absolutely love virt-manager. I try gnome-boxes every so often and it just doesn’t compare. I guess its interface is easier for beginners.

thomastjeffery
2 replies
5h57m

It's wild how important and useful a program that does nothing but configuration can be.

Imagine what life would be like if configuration were separated from the software it configures. You could choose your favorite configuration manager and use that, rather than learn how each and every program with a UI reinvented the wheel.

The closest thing we have is text configuration files. Every program that uses them has to choose a specific language, and a specific place to save its configs.

An idea I've been playing with a lot lately is a configuration intermediary. Use whatever language/format you want for the user-facing config UI, and use that data as a single source of truth to generate the software-facing config files.
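
A toy sketch of that intermediary idea in shell (the file names and the envsubst templating step are just assumptions; any renderer would do):

  # settings.env is the single user-facing source of truth, e.g.:
  #   EDITOR_TAB_WIDTH=4
  #   EDITOR_THEME=dark

  # render each program's native config file from a template
  set -a; . ./settings.env; set +a
  mkdir -p ~/.config/foo
  envsubst < foo.conf.template > ~/.config/foo/foo.conf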

3np
1 replies
4h56m

You have some incumbent competition already, in case you're not aware, and I'd say many of these are closer to what you're describing than text configuration files.

You would do well to learn from past and current attempts. This book should be enlightening (and yes, Elektra is very much alive): https://www.libelektra.org/ftp/elektra/publications/raab2017...

It would also be a useful exercise to write a new configuration UI for existing configuration backend(s) (preferably something already in use by software you already want better configuration for) - even if you do end up aiming at your own standard (xkcd.com/927), it should give you some clarity on ways to approach it.

thomastjeffery
0 replies
1h27m

The irony here is that the problem you have proposed - the complexity introduced by creating a new solution - is the same problem that each solution is intended to solve.

That means that any adequate solution should recursively resolve the problem it introduces.

Oh, and also thank you for introducing me to Elektra. That was very helpful of you.

stracer
2 replies
2h44m

Libvirt and virt-manager are just a simplified user interface to the real software, which is qemu (and KVM). They solve pretty trivial problems, like parsing a config file and passing the right options to the qemu binary.

Yes, they have some additional useful administration features like start/stop based on a config file, serial console access, but these are really simple to implement in your own shell scripts. Storage handling in libvirt is horrible, verbose, complex, yet it can't even work with thin LVs or ZFS properly.

Unless you just want to run stuff the standard corporate way and do not care about learning fundamental software like qemu and the shell, or require some obscure feature of libvirt, I recommend using qemu on KVM directly, with your own scripts. You'll learn more about qemu and less about underwhelming Python wrappers, and you'll have more control over your systems.
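
For reference, a bare-bones invocation along those lines (values here are illustrative, not tuned recommendations):

  # create a disk and boot an installer ISO with KVM acceleration and virtio devices
  qemu-img create -f qcow2 disk.qcow2 40G
  qemu-system-x86_64 \
    -enable-kvm -cpu host -smp 4 -m 4G \
    -drive file=disk.qcow2,if=virtio \
    -cdrom ubuntu-24.04-desktop-amd64.iso \
    -nic user,model=virtio-net-pci \
    -display sdl -vga virtio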

Also, IBM/Red Hat seems to have deprecated virt-manager in favour of (of course) a new web interface (Cockpit).

Quickemu seems to be of more interest, as it allows launching a new VM right after a quick look at the examples, without wasting time on learning a big complicated UI.

o11c
0 replies
1h22m

The real advantage to libvirt is that it also works with things other than qemu.

gamepsys
0 replies
1h19m

Quickemu seems to be of more interest, as it allows launching a new VM right after a quick look at the examples, without wasting time on learning a big complicated UI.

Why would anyone want a Qt frontend when you can call a CLI wrapper, or better yet the core binary directly?

jthemenace
2 replies
7h7m

Anyone running virt-manager on a Mac connecting to a headless Linux hypervisor on the same network? I tried installing it through "brew", but was getting many random errors.

I thought about running it over the network using XQuartz, but I'm not sure how maintained / well supported that is anymore.

ghostpepper
0 replies
1h40m

This might not fit your use case but what I do is:

ssh -L 5901:localhost:5901 username@hypervisor

on the hypervisor, start Qemu with -vnc :1

Then open a local VNC client like RealVNC and connect to localhost:1

dishsoap
0 replies
3h13m

I did this several years ago with no issues, though I haven't tried it any time recently.

xmichael909
1 replies
6h24m

I just wish it had a web interface option, but I guess there is always proxmox.

_joel
0 replies
3h39m

Or if you're feeling a little more adventurous https://github.com/retspen/webvirtcloud

mise_en_place
1 replies
5h7m

I just like passing options to QEMU on the command line. This works well for some older OSes like Windows NT on MIPS, or Ultrix.

stracer
0 replies
2h42m

This is the way.

petepete
0 replies
8m

Virt Manager is fantastic. I've used it for more than a decade and it's been rock solid throughout.

Being able to connect to my TrueNAS Scale server and run VMs across the network is the icing on the cake.

heavyset_go
0 replies
15m

I wish Quickemu would make it easier to interface with libvirt, but apparently that's been marked as out of scope for the project.

antongribok
0 replies
9h42m

I know it's not the same thing, but Quickemu happily works over SSH too.

Run it on a remote system via ssh, and it will "X-forward" the QEMU console to my local Wayland session in Fedora.

The first time I ran it, thinking I was running headless, it popped up a window, which was quite surprising. :)
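
In case it helps anyone, the setup is roughly this (a sketch from memory; the VM config name is just an example):

  # on the local machine: connect with X forwarding enabled
  ssh -X me@remote-host

  # on the remote host: launch the VM; its display window shows up locally
  quickemu --vm ubuntu-22.04.conf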

arghwhat
9 replies
11h22m

The convenience of such a tool is great, but it's also ~5000 lines of bash across the two main scripts.

I'd want to vet such a thing before I run it, but I also really don't want to read 5000 lines of bash.

nu11ptr
3 replies
10h10m

Why is it different from any other software just because it is a shell script? Do you read the kernel sources for your OS before running it? Your web browser? My point is not that we should blindly run things, but that we all have criteria for what software we choose to run that typically doesn't rely on being familiar with its source code.

arghwhat
1 replies
9h44m

Well, yes, I read code of (and contribute to) the kernel and web browsers I use, but that's not really relevant.

There's a big difference between "large, structured projects developed by thousands of companies with a clear goal" vs. "humongous shell script by small group that downloads and runs random things from the internet without proper validation".

And my own personal opinion: The venn diagram of "Projects that have trustworthy design and security practices", and "projects that are based on multi-thousand line bash scripts" is two circles, each on their own distinct piece of paper.

(Not trying to be mean to the developers - we all had to build our toolkits from somewhere.)

freedomben
0 replies
9h9m

Heh, this reminds me a bit of when on live television Contessa Brewer tried to dismiss Mo Brooks with "well do you have an economics degree?" and he actually did and responded with "Yes ma'am I do, highest honors" :-D [1]

I have no problem with (and have written a few) giant bash scripts, and I completely agree with you. A giant bash script isn't going to have many eyes on it, whereas a huge project like the kernel is going to get a ton of scrutiny.

[1] https://www.youtube.com/watch?v=5mtQyEd-zS4

hnfong
0 replies
9h53m

I believe GP implicitly assumes that bash (and POSIX-y shell scripting in general) has lots of quirks and footguns (and I generally agree).

After skimming through the source code though, I'd say the concerns are probably overstated.

keyringlight
1 replies
10h32m

I'd say this is a general issue with software: how and what you do to establish trust, and what expectations/responsibilities fall on the developer and the user. The "many eyes make all bugs shallow" phrase does seem to be a bit of a thought-terminating cliché for some users: if it's open to scrutiny then it must be fine, conjuring an image of roaming packs of code auditors inspecting everything (I'd expect those to be more on the malicious side than the benevolent one).

Over on Windows, there's been a constant presence of tweak utilities for decades. They attract people trying to get everything out of their system, on the assumption that 'big corp' developers don't have the motivation to do so and leave universally useful, easy options on the table behind quick config or registry tweaks. One that comes to mind, which I see occasionally, is TronScript; given its history and participation, if I had to bet I'd say it passes the 'sniff test', but it presents itself as automation, abstracting away the details and hoping the authors make good decisions on your behalf. While you could dig into it and research/educate yourself on what is happening and why, for many it might as well be a binary.

I think the only saving grace here is that most of these tools have a limited audience, so they're not worth compromising. When one brand does become widely used, you may get situations like CCleaner from Piriform, which was backdoored in 2017.

wjdp
0 replies
10h24m

Googled that, found the GitHub with a <h1> of

DO NOT DOWNLOAD TRON FROM GITHUB, IT WILL NOT WORK!! YOU NEED THE ENTIRE PACKAGE FROM r/TronScript

I see later it mentions you can check some signed checksums but that doesn't inspire confidence. Very much epitomises the state of Windows tweaky utilities vs stuff you see on other platforms.

jstrieb
1 replies
8h7m

While I agree in general that shell scripts are not usually fun to read, this particular code is really not bad.

Not sure if this will sway you, but for what it's worth, I did read the bash script before running it, and it's actually very well-structured. Functionality is nicely broken into functions, variables are sensibly named, there are some helpful comments, there is no crazy control flow or indirection, and there is minimal use of esoteric commands. Overall this repo contains some of the most readable shell scripts I've seen.

Reflecting on what these scripts actually do, it makes sense that the code is fairly straightforward. At its core it really just wants to run one command: the one to start QEMU. All of the other code is checking out the local system for whether to set certain arguments to that one command, and maybe downloading some files if necessary.

arghwhat
0 replies
6h26m

I do see that it is better structured, but like any other bash script it relies heavily on global variables.

For example, `--delete-vm` is effectively `rm -rf $(dirname ${disk_img})`, but the function takes no arguments. It's getting the folder name from the global variable `$VMDIR`, which is set by the handling of the `--vm` option (another global variable named $VM) to `$(dirname ${disk_img})`, which in turn relies on sourcing a script named `$VM`.

First, when it works, it'll `rm -rf` the parent path of whatever disk_img is set to for that VM, irrespective of whether it exists or is valid, as dirname doesn't check that - it just tries to snip the end of the string. Enter an arbitrary string, and you'll `rm -rf` your current working directory, as `dirname` just returns ".".

Second, it does not handle relative paths. If you pass `--vm somedir/name` with `disk_img` just set to the relative file name, it will not resolve `$VMDIR` relative to "somedir": `dirname` will return ".", resulting in your current working directory being wiped rather than the VM directory (see the sketch below).

Third, you're relying on the flow of global variables across several code paths in a huge bash script, not to mention global variables from a sourced bash script that could accidentally mess up quickemu's state, to protect you against even more broken rm -rf behavior. This is fragile and easily messed up by future changes.

The core functionality of just piecing together a qemu instantiation is an entirely fine and safe use of bash, and the script is well-organized for a bash script... But all the extra functionality makes this convoluted, fragile, and one bug away from rm -rf'ing your home folder.
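
A minimal illustration of the first two points (values are hypothetical):

  disk_img="disk.qcow2"              # a disk_img with no directory component
  VMDIR="$(dirname "${disk_img}")"   # dirname just snips the string: "."
  rm -rf "${VMDIR}"                  # wipes the current working directory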

elheffe80
0 replies
9h30m

Probably going to catch some flack for this comment but... if you are that concerned with it, and have some free time, you could always use ChatGPT to talk about the code. A prompt could be:

"You are a linux guru, and you have extensive experience with bash and all forms of unix/linux. I am going to be pasting a large amount of code in a little bit at a time. Every time I paste code and send it to you, you are going to add it to the previous code and ask me if I am done. When I am done we are going to talk about the code, and you are going to help me break it down and understand what is going on. If you understand you will ask me to start sending code, otherwise ask me any questions before you ask for the code."

I have used this method before for some shorter code (sub 1000 lines, but still longer than the prompt allows) and it works pretty well. I will admit that ChatGPT has been lazy of late, and sometimes I have to specifically tell it not to be lazy and give me the full output I am asking for, but overall it does a pretty decent job of explaining code to me.

steve_rambo
6 replies
11h33m

libvirt ships with virt-install, which also allows for quickly creating and auto-installing Windows and many Linux distributions. I haven't tried it with macOS.

Here's a recent example with Alma Linux:

  $ virt-install --name alma9 --memory 1536 --vcpus 1 --disk path=$PWD/alma9.img,size=20 --cdrom alma9.iso --unattended
Then you go for a coffee, come back and have a fully installed and working Alma Linux VM. To get the list of supported operating systems (which varies with your version of libvirt), use:

  $ osinfo-query os

rwmj
1 replies
10h14m

Also

  $ virt-builder fedora-39
if you wanted a Fedora 39 disk image. (Can be later imported to libvirt using virt-install --import).

stefanha
0 replies
6h5m

virt-builder is awesome for quickly provisioning Linux distros. It skips the installer because it works from template images. You can use virt-builder with virt-manager (GUI) or virt-install (CLI).

mrAssHat
1 replies
11h23m

It is not obvious what the result of this would be. What hostname will it have? How will the disk be partitioned? What packages will be installed? What timezone will be set? What keyboard layout will be set? And so on.

serf
0 replies
11h18m

virt-install can be given all of those parameters as arguments[0], too; parent just didn't post an obnoxiously large shell line to demonstrate.

[0]: https://linux.die.net/man/1/virt-install

presto8
0 replies
9h30m

Does virt-install automatically download the ISOs? When I try it, I get the following message:

    $ virt-install --name alma9 --memory 1536 --vcpus 1 --disk path=$PWD/alma9.img,size=20 --cdrom alma9.iso --unattended
    ERROR    Validating install media 'alma9.iso' failed: Must specify storage creation parameters for non-existent path '/home/foo/alma9.iso'.

JamesonNetworks
0 replies
11h22m

To do this I had to install libosinfo-bin

nightowl_games
6 replies
9h26m

Anyone know if I can legitimately make and submit iPhone builds off a macOS VM?

jmb99
5 replies
9h14m

Technically, yes, probably. You'll be breaking Apple's ToS though, so it depends how big of a fish you are as to whether Apple cares.

xrd
4 replies
7h44m

I don't think you can. IIRC, virtualized macOS machines can't fully install the tools necessary to build software for macOS. For example, I don't believe you will ever be able to sign and staple the app.

I would really love to have someone prove me wrong on this thread but I've never found a solution other than building on MacOS hardware, which is such a pain to maintain.

I have multiple old MacOS machines that I keep in a stable state just so I can be sure I'll be able to build our app. I'm terrified of failure or just clicking the wrong update button.

saagarjha
3 replies
6h43m

You can run codesign just fine in a VM.

xrd
2 replies
6h35m

I really appreciate your comment, I'm hoping I am wrong about my experiences!

But, this is the issue I believe:

https://mjtsai.com/blog/2023/09/15/limitations-on-macos-virt...

(or, the original is here: https://eclecticlight.co/2023/12/26/when-macos-wont-work-wit...)

You cannot log in using an Apple ID. If you can't do that, aren't you prevented from basically doing any kind of stapling and/or retrieving certificates for signing?

I would LOVE to be wrong about this. You've done that?

saagarjha
1 replies
6h3m

This is only true for products based on the Virtualization framework. Intel “Macs” can sign in just fine. (Also, I think you can authenticate things with an API key these days rather than your credentials?)

xrd
0 replies
5h40m

Meaning, Intel VMs? This is great. I'll check it out.

yoyoinbog
5 replies
11h55m

Looks interesting, but would someone be so kind as to point out if there are any advantages for a guy like me who just runs Win 11 in VirtualBox under Ubuntu from time to time?

ge0rg
1 replies
9h41m

Especially regarding GPU acceleration... Running video conferencing inside Windows inside VirtualBox is almost impossible, and even modestly complex GUI apps have significant lag there.

user_7832
0 replies
6h40m

Does qemu allow GPU acceleration while running with a single GPU? From the video on the website it appears so; however, from what I've read (at least with AMD iGPUs) it doesn't seem to work.

xdennis
0 replies
11h5m

If it actually runs macOS then it's a huge advantage over installing in VirtualBox or VMware, where it's very difficult to get it running (you have to patch various things).

prmoustache
0 replies
11h18m

I think it is more an alternative to Gnome Boxes, where the tool takes care of downloading the latest image, in addition to offering a default config specific to that distro/OS and additionally supporting dirty OSes like Windows and macOS.

kxrm
0 replies
11h39m

Hard to answer this question as it largely depends on what you are doing with your VM. This appears to be a wrapper for QEMU and tries to pick reasonable settings to make spinning up new VMs easier.

mihalycsaba
4 replies
11h40m

It's a QEMU wrapper. I don't know how this is useful. It might save you 2 minutes. Maybe more with Windows 11 because of TPM.

spongebobstoes
1 replies
11h35m

Looks like this tries to use better default settings for qemu, which doesn't always have good defaults.

I think that is useful practically, as a learning tool, and as a repository of recommended settings.

wufocaculura
0 replies
11h3m

This is what we are really missing, something like: "here are 'good enough' command line args that you can use to boot $OS with qemu". Quickemu seems to try to help here.

overbytecode
0 replies
11h24m

Quickemu gives me the ability to instantly spin up a full blown VM without fiddling with QEMU configurations, just by telling it what OS I want.

This might be less useful for those who are quite familiar with QEMU, but it’s great for someone like me who isn’t. So this saves me a whole lot more than 2 minutes. And that’s generally what I want from a wrapper: improved UX.
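
For context, the whole flow is roughly the following (quoting from memory of the README, so treat the exact syntax as an assumption):

  quickget ubuntu 22.04              # downloads the ISO and writes ubuntu-22.04.conf
  quickemu --vm ubuntu-22.04.conf    # boots it with sensible QEMU flags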

colejohnson66
0 replies
11h16m

Quickemu is a wrapper for the excellent QEMU that attempts to automatically "do the right thing", rather than expose exhaustive configuration options.

As others have said, it's to get past the awful QEMU configuration step. It makes spinning up a VM as easy as VirtualBox (and friends).

tarruda
3 replies
9h6m

For Linux I highly recommend Incus/LXD. Launching a VM is as simple as

  incus launch images:ubuntu/22.04 --vm my-ubuntu-vm

After launching, access a shell with:

  incus exec my-ubuntu-vm /bin/bash

Incus/LXD also works with system containers.

renonce
2 replies
8h58m

One thing I loved but rarely mentioned is systemd-nspawn. You do `docker create --name ubuntu ubuntu:22.04` and then `docker export ubuntu` to create a tar from an arbitrary docker image. Then you extract that to `/var/lib/machines/ubuntu`. Make sure to choose an image with systemd or install systemd in the container. Finally do `machinectl start ubuntu` and `machinectl shell ubuntu` to get inside.

systemd-nspawn is very simple and lightweight and emulates a real Linux machine very well. You can take an arbitrary root partition based on systemd and boot it using systemd-nspawn and it will just work.
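
Putting the steps above in one place (a sketch; the image name is just an example, and the container needs systemd inside for machinectl to boot it):

  docker create --name ubuntu ubuntu:22.04
  docker export ubuntu > ubuntu.tar
  sudo mkdir -p /var/lib/machines/ubuntu
  sudo tar -xf ubuntu.tar -C /var/lib/machines/ubuntu
  sudo machinectl start ubuntu
  sudo machinectl shell ubuntu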

tarruda
1 replies
8h39m

systemd-nspawn is simple but AFAIK it doesn't do any security other than the kernel namespacing. Docker is even worse because it runs containers as root, which means a rogue process can take over the host very easily.

Incus/LXD runs containers as normal users (by default) and also confines the whole namespace in apparmor to further isolate containerized processes from the host. Apparmor confinement is also used for VMs (the qemu process cannot access anything that is not defined in the whitelist)

viraptor
0 replies
8h35m

Docker runs containers as the user you tell it to. Same with nspawn. There's not much difference between them in that respect.

Nspawn does seccomp-based filtering, similar to the usual systemd services.

ngcc_hk
2 replies
11h20m

Sadly "macOS Monterey, Big Sur, Catalina, Mojave & High Sierra"

Gabrys1
1 replies
11h4m

Why is it sad?

JoachimS
0 replies
10h28m

Probably because the two latest major versions, Ventura (13.x) and Sonoma (14.x), are not included in that list and may not be supported. Patches to older versions may be supported. Apple's patch policy according to Wikipedia:

``` Only the latest major release of macOS (currently macOS Sonoma) receives patches for all known security vulnerabilities.

The previous two releases receive some security updates, but not for all vulnerabilities known to Apple.

In 2021, Apple fixed a critical privilege escalation vulnerability in macOS Big Sur, but a fix remained unavailable for the previous release, macOS Catalina, for 234 days, until Apple was informed that the vulnerability was being used to infect the computers of people who visited Hong Kong pro-democracy websites. ```

0cf8612b2e1e
2 replies
7h8m

Are there any numbers on the performance change vs. naively running a VM? I usually run a Linux guest inside a Linux host and am frequently disappointed by the guest performance. I have never done any research on tuning the VM experience, so I am curious how much I might be missing. 5% faster? 100%?

nickstinemates
1 replies
4h12m

How are you running them? Running KVM/Qemu with appropriate settings gives near metal performance.

0cf8612b2e1e
0 replies
3h58m

virt-manager with a PopOS host, usually an Ubuntu/PopOS guest, on a Ryzen 5500 (or something in that series). I don't know what virt-manager runs under the hood. Again, I've never done anything other than install virt-manager, so I would be happy to read a guide on any recommended configuration settings.

siquick
1 replies
8h16m

Would this be how I get to run PC games on Steam on my Mac?

cassianoleal
0 replies
8h1m

No, that would be either Crossover [0] or Game Porting Toolkit [1] (easily run via Whisky [2]).

[0] https://www.codeweavers.com/crossover

[1] https://www.applegamingwiki.com/wiki/Game_Porting_Toolkit

[2] https://getwhisky.app/

sandbags
1 replies
10h22m

I couldn’t answer this from the site. Will this let me run macOS Catalina on an M2 Mac Studio with usable graphics performance? Because that would give me back a bunch of 32-bit games I didn’t want to give up.

sharikous
0 replies
10h3m

No. It will be slow as hell

But something like El Capitan will be somewhat acceptable, and Lion will be actually usable

nexus6
1 replies
11h53m

Wonder what the difference is with Proxmox and if there’s any optimisation done here that I can manually recreate in my Proxmox environment.

bityard
0 replies
4h33m

This is staggeringly different from Proxmox. Proxmox is made for labs and datacenters that need to host lots of servers as VMs. Quickemu looks like it is mainly geared toward desktop use.

claviola
1 replies
10h52m

UTM[0] does this quite well on macOS. They also have a small gallery[1] of pre-built images.

0. https://mac.getutm.app/

1. https://mac.getutm.app/gallery/

ivanjermakov
0 replies
10h9m

UTM even works on iPads! I was able to run Arch Linux in TTY mode quite well.

https://docs.getutm.app/installation/ios/

tambourine_man
0 replies
9h40m

Does it run natively on Arm (Apple Silicon)? How about the latest versions of macOS? Is there graphic acceleration? How's network handled?

osigurdson
0 replies
8h26m

Something like macOS Parallels would be nice on Linux.

makeworld
0 replies
9h1m

quickemu has been great, really convenient for running a performant Windows VM on my Linux laptop.

itherseed
0 replies
9h26m

Is there something similar to this but for Windows 10 or 11? I want a Windows GUI for QEMU to build some Linux machines. I tried QtEMU but didn't like it. Thanks in advance.