Given that this is owned by Broadcom now, and they are going all in on squeezing every last drop from ESXi and similar offerings, I wonder what's going to happen with Fusion in the future; while for now you only pay for commercial usage, maybe they are going to let it rot over the years until it's no longer cutting-edge software? Or would they keep it as a loss leader?
They're very late to do this IMO, but better late than never. I am certain that VMWare has not been selling many Workstation licenses to personal users (costing ~$300 each) and making the products free gives free advertising and mindshare to VMWare. Visual Studio is a good example of this, Microsoft making Visual Studio free for personal use in 2014 provided a huge boost to a platform that a lot of people had written off as dead, irrelevant, gray corporate software.
What is Visual Studio used for these days? Windows development, of course, and I'm guessing Xbox development too. Anything else?
I'm guessing it's still the IDE of choice of game devs.
It's probably the best C++ IDE out there, with a great debugger. For C# a lot of people prefer Rider, but in terms of free options VS is much better than VS Code.
AFAIK it's also the primary IDE for Sony and Nintendo SDKs
Primary IDE for Unity and Unreal. Microsoft has been extending beyond Microsoft platforms, so I imagine a decent chunk of Visual Studio use is for cross-platform development.
VMWare has not been selling many Workstation licenses to personal users (costing ~$300 each) and making the products free gives free advertising
Not really. VMware Workstation Player had the same engine (but less management functionality), so personal users could actually use a VMware virtualization product. For basic usage (including snapshotting), which fits a non-commercial user, it was a fitting choice.
Therefore, it's good that they're essentially giving more functionality for free, but they did have a free offer before (for non-commercial users).
Except that VMware is owned by Broadcom, which is known for only being interested in the Fortune 500. That doesn't at all apply to Microsoft. No sane person will buy into VMware anymore unless $500k in cash is a rounding error in their budget.
to a platform that a lot of people had written off as dead, irrelevant, gray corporate software.
I don't think many people had written Visual Studio off like that in 2014. Maybe now, given VSCode. But that didn't exist in 2014.
I mysteriously stopped needing vmware or virtualbox the year docker came out, I wonder why...
"7 Ways to Escape a Container" - https://www.panoptica.app/research/7-ways-to-escape-a-contai...
I'm a casual Docker user, ran maybe 30 images my whole life. I've never used any of these flags and didn't know most of them even existed.
Are these serious threats? I mean it seems like common sense that if you give a malicious container elevated privileges, it can do bad stuff.
Is a VM any different? If you create a VM and add your host's / directory as a share with write permissions (allowing the VM to modify your host filesystem/binaries), does that mean VMs are bad at isolation and shouldn't be used? Because that's what these "7 ways to escape a container" look like to me.
Containers are called "Leaky Vessels" for a reason...
"Container Escape: New Vulnerabilities Affecting Docker and RunC" - https://www.paloaltonetworks.com/blog/prisma-cloud/leaky-ves...
VMs offer a much better isolation mode.
Thanks, that link made me much more confident in using Docker.
I mean come on: "Attackers could try to exploit this issue by causing the user to build two malicious images at the same time, which can be done by poisoning the registry, typosquatting or other methods"
So basically ridiculous CVEs that will never affect people not in the habit of building random Dockerfiles off GitHub with 2 stars. Good to know. Only the 1st one isn't dismissable out of hand; I can't tell if it's bogus like the rest.
Those are all explicit privileges given to the guest. You can also "escape" from a VM if you give it access to the Docker daemon on the host...
It's like saying "2 ways to escape an unprivileged user account: 1. type su then the root password, or 2. convince the admin to setuid your shell"
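To make that concrete, here's a minimal sketch using the docker Python SDK (image and commands are just placeholders) contrasting a default container with the kind of explicitly-granted configuration those escape writeups depend on:

    import docker  # pip install docker

    client = docker.from_env()

    # Default case: unprivileged, no host mounts, default capabilities.
    # None of the "7 escapes" apply to a container started like this.
    client.containers.run("alpine", "id", remove=True)

    # "Escapable" case: every one of these is an explicit grant made by
    # whoever starts the container, not something the guest can take.
    client.containers.run(
        "alpine",
        "id",
        privileged=True,   # all capabilities + device access
        pid_mode="host",   # share the host's PID namespace
        # mounting the daemon socket hands the guest root on the host:
        volumes={"/var/run/docker.sock":
                 {"bind": "/var/run/docker.sock", "mode": "rw"}},
        remove=True,
    )

(The CLI equivalent is docker run --privileged --pid=host -v /var/run/docker.sock:/var/run/docker.sock ...)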
What timing. Literally yesterday I was trying to set this up on my Mac and went with UTM instead. It worked excellently for getting Kali Linux up and doing a WLAN USB passthrough.
Fortunately the open-source VM space is the best it has ever been these days. No reason at all to use proprietary crap anymore.
What open source options would you recommend for running Linux and Windows VMs on Windows? I've been unhappy with VirtualBox because the audio quality is abysmal, which is an issue since I use screen-reading software. I'm interested to try out VMware Workstation since its audio support was pretty good many years ago when I used it at a prior job.
Hyper-V? I don't run VMs on Windows, only on Mac and Linux. I'd imagine a first-class hypervisor like Hyper-V is the way to go on that platform. AFAIK it is included with Pro versions of Windows.
I've had a lot of frustration getting the SPICE tools to work with slightly older Windows OSes (Windows 7 in my case), for what it's worth.
Performant 3d acceleration in the guest OS is still quite difficult to find an open source solution for, and linux these days relies heavily on this for window management. Mac hosts at least have ParavirtualizedGraphics, even though I don't think the popular open source clients have support for it yet.
Now go try to download it. All links are broken.
https://knowledge.broadcom.com/external/article?articleNumbe...
https://www.vmware.com/content/vmware/vmware-published-sites...
https://customerconnect.vmware.com/web/vmware/downloads/info...
I spent like 15 minutes trying to find an official download link, even registered a broadcom account but that was a waste of time as well. Ended up finding a working download link from some reddit comment. It seems to contain all versions of fusion, player, workstation and remote console.
I had an expired trial license, after updating it now prompts me that my license is expired, but then seems to work just fine instead of closing haha. Thanks for the link!
Here is a simple KB that describes what to do: http://kb.vmware.com/s/article/97817
Another broken link...
Found a mirror here: https://github.com/201853910/VMwareWorkstation/releases/tag/...
The Windows binaries have valid Authenticode signatures, so at least those haven't been tampered with.
This link works for me just fine:
https://support.broadcom.com/group/ecx/productdownloads?subf...
On Windows (Pro) Hyper-V is free, and quite good. Maybe it's less user friendly.
Windows also has Sandbox (based on container technology), which replaces creating a VM to test some software without affecting the system.
Yeah, Windows Sandbox is pretty great. I use it to test sketchy software when sailing the high seas. And the option to add shared read-only folders from the host OS is nice too.
How do you test sketchy software when Windows Defender is disabled in Windows Sandbox?
Anyone reading this, don't expect a smooth experience for desktop Linux under Hyper-V.
Hyper-V's team only cares about supporting servers. You're not gonna run a full-screen Ubuntu VM without a lot of banging your head against the wall: expect to spend days trawling random GitHub comments and Reddit posts, and to fix it again whenever it breaks.
If you want Ubuntu use Hyper-V Quick Create instead of booting the .iso you downloaded. That takes care of the integration things.
There is also Hyper-V Server 2019 [0], which was free and a standalone OS, unlike the current version. I use that on a 2nd PC; you can also install a full GUI [1] on top of the web admin interface, so it's pretty good actually.
[0] https://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-...
[1] https://gist.github.com/bp2008/922b326bf30222b51da08146746c7...
I'm surprised they're making Workstation and Fusion free, given that they killed off the free ESXi/vSphere hypervisor. Seems strange?
I suspect they knew (and, to be fair, were probably correct) that a decent number of small businesses (and maybe even larger ones) were using ESXi, and they just decided to shut that down in a push to get more licenses.
If my theory is correct, in about two years (if they haven't killed it entirely by then) they'll introduce a "free for homelab use" variant. Maybe.
But Workstation and Fusion were used more by individuals, and as a support tool FOR professionals, so they needed to keep those going; charging $79 for them just wasn't worth the hassle. Notice they're not even selling ANY licenses directly anymore; you have to go through someone else. VMware used to sell directly.
Not just not worth the hassle. The product being free now means that if someone files a bug report or request for enhancement, they can more easily just shrug and say "won't fix."
This.
And even more: the lesser versions were free for everyone to use.
But now, if you are a business you don't have any free offering anymore.
I guess it makes a lot of sense to go only after the ones that can pay; the rest would have "other" ways to run it anyway...
What @bombcar said.
"Free" ESXi is.. well, free. They are converting enterprises to enterprise customers. Some bloke with WKS is not an enterprise customer.
You could use Workstation Pro to directly access VMs running on an ESXi server, so maybe the idea is to make companies more dependent upon the central infrastructure, where they can squeeze.
You could actually cajole the free player to do this with ESXi but it was definitely not license kosher.
I've mixed feelings on this. On one side, I love that it's free, and the annual pricing seems reasonable compared to what it was. On the other, I've been so burned by their pricing for everything else with clients that I'm hesitant to be thankful.
I suppose it's a step in the right direction; bringing back ESXi for homelab users would be a good step too.
bringing back ESXi for homelab users would be a good step too.
It would be interesting if they did. So many have now tried Proxmox and liked it.
Yeah. Once the Proxmox team fixed the recent kernel bug that caused migration hangs with Ryzen/EPYC hardware (~20% of migrations just hung), things have been pretty great.
Presently trialling an HA cluster with it, and likely to deploy that to a local data centre in the next few weeks.
All the homelabbers have moved on to Proxmox VE.
bringing back ESXi for homelab users would be a good step too.
I agree, and frankly I think it was smart that VMware had a free tier for homelab users. It produces new users who can then more easily enter the workforce with ESXi experience they might not otherwise have.
By locking it down and jacking up prices they'll squeeze out more money now, but eventually the market will shift to whatever everyone has the most experience with, which might end up being Proxmox.
Have been a happy Fusion Pro customer for years, including buying upgrades. A user can install 5 seats, which includes both Workstation and Fusion, so I could have it on my Mac desktop, iMac, and Linux desktop. The licence was very clear that it was kosher for commercial use. And it was perpetual: no worrying about annual renewals.
Beware the new “free” licence, which is emphatically not for commercial use. Get caught accidentally using the personal licence and your company will be on the hook to pay whatever Broadcom wants you to pay. Oracle did similar shenanigans with VirtualBox (watch out if you download the extension pack) and Java (watch out if you install a JRE on your desktop and use it to compile/develop certain software!)
This does open a market opportunity for Corel/Parallels which is mostly at feature parity with VMware… the main reason I liked using VMware Fusion was solid integration with ESXi, which also won’t be a concern anymore as with the Broadcom acquisition that’s a platform I’ll be trying to avoid.
They provided the method for continuing to purchase the license (much the same way it was before with digital river). $120 annually.
Controlling/watching for that in a large env is going to be a pita.
Beware the new “free” licence, which is emphatically not for commercial use. Get caught accidentally using the personal licence and your company will be on the hook to pay whatever Broadcom wants you to pay
A common way to do this would be to keep an eye out for large fundraises, and then track back to see if you can catch them using the personal license early on.
The download doesn't seem to work. The VMware download area closed in late April, and I can't seem to download Fusion now. The replacement store doesn't seem to be available yet.
I'm using Parallels and it is great on an Apple Silicon Mac, but I'm a long-time VMware Workstation and Fusion user, so I'd like to try it again.
As a paying Workstation customer, I had to install it from scratch the other day and couldn't find a binary anywhere. I eventually found an old installer on archive.org (!) and settled for that. Grateful to whoever had the foresight to point the Wayback Machine at VMWare's CDN before it was too late.
And we paying customers did buy a perpetual licence. Better archive that download and save your key somewhere…
You have to go over to Broadcom website. The VMware store is still down.
Since Microsoft came up with WSL, I no longer have the need for VMWare Workstation.
This is how products get killed.
I'm the opposite, I need these desktop hypervisors because Hyper-V is trash for anything but a WSL shell or server VM.
I upgraded to Windows 11 for WSLg (figuring it would replace my Linux desktop), and it was buggy trash. You can't even get a high-resolution Ubuntu desktop (from Microsoft themselves, their own Quick Create image!) without jumping through hoops, searching all over Reddit for knowledge obsoleted by the next update, tweaking arcane settings, and running misc PowerShell scripts. To say nothing of the occasional freezes.
By enabling WSL2/WSLg, your Windows host becomes a privileged guest running under the Hyper-V hypervisor. Which means lightweight desktop hypervisors like VirtualBox run like trash.
I ended up removing WSLg/turning Hyper-V off, using Virtualbox for desktop Linux, and using WSL1 (not 2) to have a quick Linux shell without enabling Hyper-V.
I'm now considering Workstation due to the superior graphics in the guest over Virtualbox.
If you are running Windows 10 with the secure kernel, Device Guard, and similar features, these require Hyper-V.
Secondly, Windows 11 doubles down even more on having Hyper-V running, for even more security capabilities.
I also think the future is type 1 hypervisors, and in regards to performance, my computers are beefy enough to hardly notice any major impact.
As for Linux configuration problems: business as usual. There is always something that needs hand-holding, and I have been using distributions since Slackware 2.0 in the summer of 1995.
I also mostly used Virtualbox only when not allowed to use VMWare products, due to cheap project delivery conditions.
Separate note: what do folks use to virtualize Windows on Linux? Is anything good enough to run older games in Windows in a VM? Think Dota 2 (I know it's available for Linux, just using it as a perf reference).
quickemu is great for this
Proxmox can do this. It is free software. It will work best if you pass through your graphics card. I also passed through a USB controller and used my DAC for sound. You can run most versions of Windows in a VM, also macOS and Linux.
Proxmox is completely free to download and deploy, for both commercial and personal use.
My long-term homelab of ESXi 6.x on an Intel NUC with the unlimited-license hack recently blew up (repeated power surges, no UPS). I went with an Intel NUC again but Proxmox instead for the rebuild, and it's been a dream so far. I don't miss ESXi or VMware in general one bit.
This is my setup: NUC + Proxmox + K3s. My goal is to set up a Kubernetes cluster spanning 2 NUCs with K3s to build my playground for any application I decide to develop.
My guess here: the product isn't valuable enough to sell off (see Horizon), the customers who buy the product aren't in their list of 600 accounts that they want to focus on, and the invoice price is a rounding error next to their enterprise offerings.
Broadcom is bloodthirsty. I'd love to say they're doing this out of the goodness of their hearts, but there is little evidence that they have one.
When they say "Free" what they mean is these products are now the walking dead. They are on maintenance only support until any existing commercial contracts expire at which point they will be cancelled.
In this case, I admit that I think it's the right thing to do. These products don't really need to exist as commercial offerings except for a few very niche cases.
It doesn't sound like they're trying to kill them, since they're moving commercial usage to a subscription license.
I'm not sure killing them actually makes a lot of sense - the products apparently share a lot of code with esxi, so it's two products for the R&D of one.
Either Pro is free for now, or Pro is free abandonware. Both cases discourage adoption.
Open-sourcing it would help if there's any desire to keep it alive; otherwise this is a nice gesture, but I would read it as a signal to stay away because it's dead or a trap.
Who is spinning up home test environments looking for decades of support?
My VMs are doing pretty well if they last the quarter.
We have a whole bunch of automation built around VMWare Fusion for creating and deploying macOS VMs (for CI testing). The VMs aren't long-lived, but the code using VMWare Fusion certainly is and it's a nontrivial project to migrate to a different virtualization system. Thankfully for us we were already planning to do that migration before the acquisition.
I'm trying to figure out exactly how Broadcom expects to profit from this, and what this says for their development and support roadmap.
I was wondering if it's related to data gathering for AI.
I love how many people just aren't reading the article.
First of all, it's both Fusion (the Mac software) and Workstation (the Windows one).
Secondly, they're making them free for personal use. They're still paid for commercial use and it's going to be a subscription.
... and Workstation, the Windows one.
Workstation also runs on Linux. :)
As a data point for anyone else running it on Linux, this repo is probably what you should keep an eye on for updated VMware modules that work with newer kernels:
https://github.com/mkubecek/vmware-host-modules
The ones bundled in VMware Workstation officially tend to break in weird ways as new kernels come along. (!)
Orbstack now serves 100% of my virtualization and docker needs on macOS. Hopefully I'll never feel the need to install virtualbox or fusion or parallels ever again.
Orbstack is really nice on an M1. But occasionally I run into the need for a GUI application and then I'm stuck. Is it possible to use Orbstack for that?
Does this offer anything over virt-manager on Linux?
Really good snapshot capabilities, if that's your kind of thing.
Workstation (and Fusion too I think) also provide DirectX 10 + 11 support in Windows VMs.
So if you want to run software that uses that in a Windows VM, you'll need something like Workstation or Fusion, since virt-manager can't (yet) do that.
I don’t trust them anymore
Nobody does. We all moved to other hypervisors now.
I paid for it already lol. Should it not be "broadcom fusion" now? I am already getting emails about migrating my vmware account to broadcom.
Should it not be "broadcom fusion" now?
Given Broadcom's actions they need to rename it to "Broadcom fission" :-P
This is how you know that they're not putting another dime into R&D.
Who cares. They killed ESXi. They are no use to SMBs and people who want to learn virtualization.
I had purchased Fusion Pro recently due to the better graphics support in comparison to VirtualBox. I wonder what the catch is here.
For those who use this product, a query:
I have been using Hyper-V since the early days, and run it on both my development iron (Win11 23H2, heavily castrated) as well as my personal (non-commercial) Win2k22 21H2 Datacentre servers.
Why should I choose VMware over Hyper-V?
Genuinely curious.
Don't let them get away with this; continue to migrate away from it.
I mean... hooray?
It's probably useful for the HomeLab crowd, but when you get an idea and want to scale it for business purposes, you get screwed by their recent commercial market moves.
There's the beginnings of real FLOSS virtualization projects out there. Broadcom will make some money off of the acquisition as measured by quarterly statements, but it's not sustainable over the long run. It's not 2005 anymore. Step on enough toes and the nerds will build their own and give it away for free.
The ransomware epidemics targeting ESXi vulnerabilities probably triggered an exodus to other hypervisors and this could be an attempt to hang on to some users.
Wonder how much tracking shit got crammed in recently
MEH!!! qemu for the win! "Hardware Virtualization Support: Qemu is capable of running virtual machines without hardware virtualization support, also known as software virtualization. On the other hand, VMware Fusion requires hardware-assisted virtualization to run virtual machines efficiently."
Fuck me, I just dropped it to use VBox. I migrated all my VMs.
Shoot. I saw Fusion and thought Autodesk. I’d forgotten that VMware was still a thing.
I used to have a (paid, personal) VMware Workstation licence, but switched to VirtualBox after they stopped updating Workstation and a Windows update stopped it working. I thought it was a decent product.
I'd rather not use an Oracle product (VB) but are there any advantages in switching back? Main use is running Ubuntu VMs on Windows.
This is probably the first step to shutting down development of these products.
First of all, every desktop OS now has its own mature virtualization.
And secondly, Broadcom has no interest in this market.
It's almost impossible to create an account with all of the delays. Even then, I got into a loop in "Trade Compliance Verification" which does not proceed.
If you want to virtualize something with good performance on desktop Windows, you use Hyper-V; if you want to do it on a Mac, you use Apple's Virtualization framework; if you want to do it on Linux, you use KVM.
Desktop virtualization products used to bring the secret sauce with them; now that every OS ships with a well-integrated and well supported type 1 hypervisor they have lost much of the reason for existing. There's only so much UI you can put in front of off-the-shelf os features and still charge hundreds of dollars per year for.
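As a rough illustration of how thin that layer can be, here's a minimal sketch driving the Linux case through the libvirt Python bindings (assumes libvirt-python, a running qemu:///system daemon, and a hypothetical predefined domain name):

    import libvirt  # pip install libvirt-python

    # Connect to the KVM/QEMU hypervisor the distro already ships.
    conn = libvirt.open("qemu:///system")

    # Basic lifecycle management; roughly what a desktop front-end's
    # start/stop buttons boil down to.
    dom = conn.lookupByName("ubuntu-test")  # hypothetical VM name
    if not dom.isActive():
        dom.create()        # power on
    print(dom.state())      # [state, reason] pair
    dom.shutdown()          # graceful ACPI shutdown

    conn.close()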
They still need to. You are glossing over the fact that you need to provide device access, USB access, graphics, and a lot of things that are not necessarily provided by the "native" hypervisor (HyperKit does not do even half of what Parallels does, for instance).
Yeah, if you care about 3D acceleration on a Windows guest and aren't doing pcie passthrough, then KVM sure isn't going to do it. There is a driver in the works, but it's not there yet.
edit: I made a mistake and got confused in my head with qemu and the lack of paravirtualized support. (It does have a PV 3D linux driver, though)
Do any of the commercial hypervisors do that today?
Pretty much all of them do, though the platform support varies by hypervisor/guest OS. Paravirtualized (aka non-passthrough) 3D acceleration has been implemented for well over a decade.
However NVIDIA limits it to datacenter GPUs. And you might need an additional license, not sure about that. In their view it's a product for Citrix and other virtual desktops, not something a normal consumer needs.
Yes and no; you can use GPU partitioning in Hyper-V with consumer cards and Windows 10/11 client on both sides, it’s just annoying to set up, and even then there’s hoops to jump through to get decent performance.
If you don’t need vendor-specific features/drivers, then VMware Workstation (even with Hyper-V enabled) supports proper guest 3D acceleration with some light GPU virtualization, up to DX11 IIRC. It doesn’t see the host’s NVIDIA/AMD/Intel card and doesn’t use that vendor’s drivers, so there’s no datacenter SKU restrictions. (But you are limited to pure DX11 & OpenGL usage, no CUDA etc.)
KVM will happily work with real virtual GPU support from every vendor; it's the vendors (except for intel) that feel the need to artificially limit who is allowed to use these features.
I was mostly hoping qemu would get paravirtualized support some day, because it is leagues ahead of VMware Player in speed. Everyone's hopes are riding on https://github.com/virtio-win/kvm-guest-drivers-windows/pull....
I guess my comments make it sound like I don't appreciate this type of work; I absolutely do. An old friend of mine[1] was responsible for the first 3d support in the vmware svga driver, so this is a space I have been following for literally decades at this point.
I just think it should be the objective of vendors to offer actual GPU virtualization first and to support paravirtualization as an optimization in the cases where it is useful or superior and the tradeoffs are acceptable.
[1] https://scanlime.org/
There has been a driver "in the works" for the past decade. Never coming. MS/Apple do not make it easy anyway.
I didn't say they have no reason to exist. I indicated they are moving towards becoming UI shells around standard OS features and/or other commodity software, which they are. Look at UTM, for instance. Even VMware Workstation and VirtualBox on Windows use HyperV under the hood if you have HyperV or WSL features enabled.
While everyone still seems to be busy disagreeing with me because of <insert favorite feature>, I'll mention that HyperV does have official support for transparent GPU paravirtualization with nvidia cards, and there are plenty of other open projects in the works that strive to "bleed through" graphics/gpu/other hardware acceleration api's from host to guest on other platforms and hypervisors. With vendors finally settling around virtio as somewhat of a 'standard pipe' for this, expect rapid progress to continue.
VirtualBox is consistently (and significantly) slower when it uses HyperV as backend than when it uses its original driver, and many features are not supported at all with HyperV. In fact the GUI actually shows a "tortoise" icon in the status bar when running with HyperV backend.
What is the use case for using Virtualbox with Hyper-V? Why not just use Hyper-V directly?
The use case is mainly interoperability with VirtualBox; they can still keep their own disk/VM formats, guest tools, etc. and use Hyper-V as the 'virtualization engine'. Users that have workflows that call out to VirtualBox can continue to work; a lot of VM image tools (Vagrant, Packer) continue to work, etc.
But yes, of course you can also change your tools to use HyperV directly.
"Having to use HyperV" is not actually anything nefarious as the other comment seems to imply. You can't have two type 1 hypervisors running cooperatively on the bare metal and you cant implement your type 2 hypervisor hooks if you have a type 1 hypervisor running. So if you have enabled HyperV directly or indirectly by using WSL2 or installing any of the container runtime platforms (Docker Desktop et al) that use it, then you have to use HyperV as your hypervisor.
Note this is different than nested virtualization (ESXi on HyperV, etc.) which is supported but a completely different beast.
For the same reason you cannot run Xen and KVM VMs simultaneously on Linux (excepting nested virtualization).
For a start, the list of operating systems Hyper-V supports is an order of magnitude less than what VirtualBox supports. Likewise for emulated hardware, like 3D as mentioned a number of times here. The GUI is also much better on VirtualBox.
And Windows often forces Hyper-V onto you, taking exclusive control of the CPU's virtualization features, thereby forcing VirtualBox to either use Hyper-V as a (terrible) backend... or not run at all.
Why is the Hyper-V backend so much slower?
In truth it's not; however, if the software has to do extra things (for instance, translate I/O calls to a VirtualBox disk format that Hyper-V cannot natively support, or do an extra memcpy on the video framebuffer to get the UI to work), then there will be unavoidable performance impacts. How fast a guest OS "feels" is mostly down to the performance of the virtualized devices, not the overhead of virtualization itself.
GPU-P is a pita to keep updated and is half baked. So many things just don't see or use the Nvidia drivers properly. If you want 60fps, Parsec is the only solution I found for desktop/mobile Nvidia graphics.
If you're accessing Windows, look at enabling RemoteFX with an H.264 hardware stream and YUV444 in RDP. See: https://techcommunity.microsoft.com/t5/security-compliance-a...
Once I discovered that, I haven't looked at Parsec. Moonlight/Sunshine (whatever the pair is) is... terrible. And when I was looking, YUV444 wasn't a feature, or at least not one anybody actually knew how to use.
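For anyone wanting to try this, here's a sketch of setting those options straight in the registry with Python's winreg module. The value names are as I remember them from write-ups of the corresponding Group Policy entries, so treat them as assumptions and double-check against the linked article; run elevated and sign out/in afterwards:

    import winreg

    KEY = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        # Prefer the H.264/AVC 444 (full-chroma YUV444) graphics mode.
        winreg.SetValueEx(key, "AVC444ModePreferred", 0,
                          winreg.REG_DWORD, 1)
        # Prefer hardware (GPU) H.264 encoding for the session stream.
        winreg.SetValueEx(key, "AVCHardwareEncodePreferred", 0,
                          winreg.REG_DWORD, 1)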
Agreed. Virtualized 3d acceleration in particular still has quite a bit of "secret sauce" left in it.
Today this is mostly implemented by having a guest driver pass calls through to a layer on the host that does the actual rendering. While I agree that there is a lot of magic to making such an arrangement work, it's a terrible awful idea to suggest that relying on a vendor's emulation layer is how things should be done today.
Proper GPU virtualization and/or partitioning is the right way to do it and the vendors need to get their heads out of their ass and stop restricting its use on consumer hardware. Intel already does; you can use GVT-g to get guest gpu on any platform that wants to implement it.
So you say having a decoupled arrangement in software (which happens to be a de facto open standard) is a "terrible awful idea" and that instead you should just rely on whatever your proprietary hardware graphics vendor proposes to you? Why?
And that's assuming they propose anything at all.
Even GVT-g breaks every other Linux release, is at risk of being abandoned by Intel (e.g. how they already abandoned the Xen version) or limited to specific CPU market segments, and already has ridiculous limitations such as a limit on the number of concurrent framebuffers AND framebuffer sizes (why? VMware Workstation offers you an infinitely resizable window, does it with 3D acceleration just fine, and I have never been able to tell if they have a limit on the number of simultaneous VMs... ).
In the meanwhile "software-based GPU virtualization" allows me to share GPUs in the host that will never have hardware-based partitioning support (e.g. ANY consumer AMD card), and allows guests to have working 3D by implementing only one interface (e.g. https://github.com/JHRobotics/softgpu for retro Windows) instead of having to implement drivers for every GPU in existence.
Sandboxing, and resource quotas / allocations / reservations.
By itself, a paravirtualized GPU just treats all the userland workloads launched by any given guest onto the GPU as siblings: exactly as if there were no virtualization and you were just running multiple workloads on one host.
And so, just like multiple GPU-using apps on a single non-virtualized host, these workloads will get "thin-provisioned" the resources they need, as they ask for them, with no advance reservation; and workloads may very well end up fighting over those resources, if they attempt to use a lot of them. You're just not supposed to run two things that attempt to use "as much VRAM as possible" at once.
This means that, on a multi-tenant hypervisor host (e.g. the "with GPU" compute machines in most clouds), a paravirtualized GPU would give no protection at all from one tenant using all of a host GPU's resources, leaving none left over for the other guests sharing that host GPU. The cloud vendor would have guaranteed each tenant so much GPU capacity — but that guarantee would be empty!
To enforce multi-tenant QoS, you need hardware-supported virtualization — i.e. the ability to make "all of the GPU" actually mean "some of the GPU", defining how much GPU that is on a per-guest basis.
(And even in PC use-cases, you don't want a guest to be able to starve the host! Especially if you might be running untrusted workloads inside the guest, for e.g. forensic analysis!)
Why does multi-tenant QoS require hardware-supported virtualisation?
An operating system doesn't require virtualisation to manage application resource usage of CPU time, system memory, disk storage, etc – although the details differ from OS to OS, most operating systems have quota and/or prioritisation mechanisms for these – why not for the GPU too?
There is no reason in principle why you can't do that for the GPU too. In fact, there have been a series of Linux cgroup patches going back several years now, to add GPU quotas to Linux cgroups, so you can setup per-app quotas on GPU time and GPU memory – https://lwn.net/ml/cgroups/20231024160727.282960-1-tvrtko.ur... is the most recent I could find (from 6-7 months back), but there were earlier iterations broader in scope, e.g. https://lwn.net/ml/cgroups/20210126214626.16260-1-brian.welt... (from 3+ years ago). For whatever reason none of these have yet been merged to the mainline Linux kernel, but I expect it is going to happen eventually (especially with all the current focus on GPUs for AI applications). Once you have cgroups support for GPUs, why couldn't a paravirtualised GPU driver on a Linux host use that to provide GPU resource management?
And I don't see why it has to wait for GPU cgroups to be upstreamed in the Linux kernel – if all you care about is VMs and not any non-virtualised apps on the same hardware, why couldn't the hypervisor implement the same logic inside a paravirtualised GPU driver?
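To illustrate the kind of interface that implies, here's a sketch using the cgroup v2 CPU controller that exists today as a stand-in for the proposed (not yet merged) GPU controllers. Paths and the group name are illustrative, and it needs root:

    from pathlib import Path

    # Create a control group for one guest's worth of workloads.
    cg = Path("/sys/fs/cgroup/vm-guest-1")  # hypothetical group name
    cg.mkdir(exist_ok=True)

    # cpu.max takes "<quota> <period>" in microseconds: 20ms of CPU
    # time per 100ms period, i.e. a hard 20% cap for the whole group.
    (cg / "cpu.max").write_text("20000 100000")

    # Move the guest's backend process into the group.
    (cg / "cgroup.procs").write_text("1234")  # hypothetical pid

    # The proposed GPU controllers would add analogous knobs (per-group
    # GPU time and GPU memory limits) that a paravirtualized GPU driver
    # on the host could then enforce per guest.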
But "sandboxing" is not a property of hardware-based virtualization. Hardware-based virtualization may even increase your surface attack, not decrease it, as now the guest directly accesses the GPU in some way software does not fully control (and, for many vendors, is completely proprietary). Likewise, resource quotas can be implemented purely in a software manner. Surely an arbitrary program being able to starve the rest of the system UI is a solved problem in platforms these days, otherwise Android/iOS would be unusable... Assuming the GPU's static partitioning is going to prevent this is assuming too much from the quality of most hardware.
And there is an even bigger elephant in the room: most users of desktop virtualization would consider static allocation of _anything_ a bug, not a feature. That's the reason most desktop virtualization wants to do thin provisioning of resources even when it is difficult to do so (e.g. memory). I.e. we are still seeing this from the point of view of server virtualization, which just shows how desktop virtualization and server virtualization have almost diametrically opposed goals.
A soft-GPU driver backed by real hardware "somewhere else" is a beautiful piece of software! While it certainly has applications in virtual machines, and may even be "optimal" for some use cases like desktop gaming, it ultimately doesn't fit the modern definition of "virtualization":
I am talking about virtualization in the sense of being able to divide the hardware resources of a system into isolated domains and give control of those resources to guest operating systems. Passing API calls from guest to host for execution inside of the host domain is not that. A GPU providing a bunch of PCIe virtual functions which are individually mapped to guests interacting directly with the hardware is that.
GPU virtualization should be the base implementation and paravirtualization/HLE/api-passthrough can still sit on top as a fast-path when the compromises of doing it that way can be justified.
Hyper-V doesn't have 3D acceleration, so if you play games or want to use Linux desktops it's pretty bad.
WSL2 seems to virtualize the GPU pretty well, I had an easier time getting my GPU to work for machine learning inside WSL2 than I have with plain Windows and Linux in the past.
It does, but it’s a whole rabbit hole of specialized settings from what I can tell. Toying around with GPU-PV in Hyper-V with a Windows 11 guest was complicated and ultimately had performance & compatibility problems. (With my previous PC even deadlocking when I used the onboard video encoder from within the VM)
Am I the only one who explicitly does not want a type 1 hypervisor on my desktop? Am I outdated?
I like workstation and virtualbox because they're controllable and minimally impactful when I'm not using them.
Installing Hyper-V (and historically even WSL; not sure if it's still the case, but it was never sufficiently explicit) now makes my primary OS a guest, with potential impact on my gaming, multimedia, and other performance (and occasional flaky issues with drivers and whatnot).
Am I the only grouchy geezer here?:-)
I used to worry about this overhead too but this appears to be nothing on modern CPUs. I had minuscule differences here-and-there on Intel 9th gen (9900K) but my current Intel 13th gen (13900K) has zero performance decrease with HV enabled. (At least on any perceptible level)
Apart from Hyper-V and WSL, some Windows 11 security features also depend on virtualization.
Did you measure the performance hit? How often did you encounter driver trouble?
I would never use "performance" and "Hyper-V" in the same sentence.
Hyper-V does not have PCI passthrough, and with that you lost me, while ESXi does. Also, I want to test my multiplatform software on all major OSes (macOS included); ESXi is then the only one that can run Darwin in parallel with the rest.
VMware Workstation still has a massive leg up in 3D (and to some extent, 2D) video acceleration. Many programs need this to run smoothly these days.
If you want to pass through USB, SCSI, or something like that then VMWare Workstation is better than Hyper-V for sure.
Snapshot support in at least Libvirt based KVM still sucks. :(
As in, technically it supports snapshots.
But try to have a tree of different snapshots of a VM based upon different points in time.
Super useful when doing integration work and testing out various approaches that evolve over time as things progress.
With VMware Workstation I've been able to do that for years, for Libvirt based KVM it's not even possible.
To be clear, I really wish it could be done in Libvirt/KVM too. ;)
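For reference, the primitives that do exist in the libvirt Python bindings look like this (a sketch; assumes internal qcow2 snapshots and hypothetical VM/snapshot names). Creating and reverting works; it's managing a branching tree of these over time, the workflow described above, where the tooling falls down:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("integration-vm")  # hypothetical VM

    # Take a named snapshot (internal; disks must be qcow2).
    dom.snapshotCreateXML(
        "<domainsnapshot><name>before-approach-a</name></domainsnapshot>", 0)

    # ...try approach A, then jump back and branch off toward B:
    snap = dom.snapshotLookupByName("before-approach-a")
    dom.revertToSnapshot(snap)
    dom.snapshotCreateXML(
        "<domainsnapshot><name>approach-b</name></domainsnapshot>", 0)

    print(dom.snapshotListNames())  # a flat list; no tree view here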
Worse, it's actively worse VMware on Windows vs Hyper-V.
And if you want to do it on FreeBSD you use bHyve.
It’s not the UI you charge for, it’s the Enterprise Support Plan.
Fusion became basically worthless when you couldn't easily run Intel Windows on a Mac anymore because the underlying processor changed.
... I've had a Fusion licence since v10 was current (~2017) and I don't think I've used it to run Windows even once.
I wonder how common that is though.
In terms of people who might consider Fusion you have:
- People who only use Windows
- People who only use macOS
- People who only use Linux
- People who virtualize Windows on macOS
- People who virtualize Linux on macOS
- People who run FreeBSD or similar on their computers
- People who virtualize FreeBSD or similar on macOS
- People who virtualize various operating systems on Windows
- People who virtualize various operating systems on Linux
- People who virtualize various operating systems on FreeBSD or similar
And I would guess that the largest group of people that use Fusion use it for running Windows in a VM on macOS.
I would guess that the people who develop for Linux servers would mainly use Docker if they run macOS, and that also relies on a VM, but not on Fusion.
x86 Docker on ARM Mac is an insanely complex setup - it runs an ARM Linux VM inside Hypervisor.framework that then uses a Rosetta client via binfmt that <somehow> communicates with the host macOS to set up all the Rosetta specific stuff and prepare the client process to use TSO for fast x86 memory access.
Unfortunately, Apple heavily gates anything Rosetta, I'm amazed Docker got enough coordination done with them - because QEMU didn't, they don't support anything Apple ARM-specific as a result and don't plan to unless Apple significantly opens up access and documentation; TSO for example is gated behind private entitlements.
There's surely no mystery as to how Docker is doing this:
https://developer.apple.com/documentation/virtualization/run...
Yeah that's a "how to use it in the simple case", that's not a "here is how this shit works under the hood so you can use it for more than just running userland processes" and it also doesn't state the limitations (e.g. what instructions are supported and which are not).
The argument given was that VMWare became useless because of the switch to Arm.
There are more Hypervisor managers available on macOS now than there have ever been before - largely because Apple provides the underlying framework to do most of the hard work... but there is clearly significant demand to run VMs on Arm Macs still, regardless of whether that includes running Windows (which does exist for Arm too)
Well, I use Parallels to run a Windows VM for work (on ARM). It's its own little bubble universe, completely isolated from my Mac desktop, but available at a swipe.
I do use Fusion as well (on my laptop), and have a Windows VM there as well, but solely to run older games. Works fine.
I had Fusion and ran Windows with it early on (it could even play some games!) and since I had it, I used it for Linux and some other things.
Those are now done with an old ESXi box or other forms of VMs. Maybe I should look into the various VM options still, but I don't have any pressing needs.
What about people who virtualize various operating systems on macOS? That was my entire team at a prior engagement (at Microsoft, as it happens…). I suspect it’s a large number, developers tend to like macOS, so if you’re making a cross platform application and want to be able to test anything at all, you need a VM.
Can’t even run x86 Linux on Mac right now
Linux has always been a bit of an easier deal because you can (often, not always) just get a version for ARM that is "close enough".
If my case that isn’t close as we have deps that aren’t ported to arm
UTM supports Rosetta in Linux VMs: https://docs.getutm.app/advanced/rosetta/
OS still needs to be ARM, as far as I know, but you can then use Rosetta to speed-up x86_64 Linux binaries.
Docker Desktop also uses this to run x86_64 Docker images, and in many cases performance is quite close to the native ARM binaries, but this heavily depends on the workload.
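If you want to see it in action, a small sketch with the docker Python SDK (assumes Docker Desktop on an Apple Silicon Mac with the Rosetta option enabled in its settings):

    import docker  # pip install docker

    client = docker.from_env()

    # Ask for the amd64 variant explicitly; Docker Desktop runs it in
    # its ARM Linux VM, with Rosetta translating via binfmt_misc.
    out = client.containers.run(
        "alpine", "uname -m",
        platform="linux/amd64",
        remove=True,
    )
    print(out.decode())  # prints: x86_64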
qemu works as well as it always did—which is to say slow as hell but good enough for automation in a pinch
Worthless for some use cases but there are reasons to run Mac-on-Mac vms, including testing, development, and security (isolation). The first two also apply to some folks (maybe not many) for Linux VMs.
Workstation was already rotting for all intents and purposes. Likely Fusion was also rotting since the switch to ARM but never tried it.
The entire space ("desktop virtualization") is dead. Even VirtualBox which I praised a year ago seems to be slowing down.
This likely just poisons the well for a market they had all but abandoned.
VirtualBox was never good; it always felt half-baked.
Years ago (pre-Oracle) it was enough though: I have fond memories of using it with Vagrant and being happy.
Yes, when it was Sun VirtualBox I remember it was a favorite for testing out other operating systems for free with a simple UI. It wasn't the most powerful or flexible, but it's what was recommended if you wanted to (for example) try Ubuntu on your Windows host without dual boot or using another disk, etc.
My point is that at least their paid support fixed the things I asked them to fix; I cannot say the same of VMware where support was already non-existent a couple years ago (I stopped using them the moment someone here in HN said the entire Workstation staff had been fired and replaced with skeleton overseas crew, and this was way before Broadcom).
Pretty much all desktop virtualization/VDI/etc. products have been de-emphasized by essentially everybody, except to the degree that they're a largely free byproduct of server virtualization. I doubt any company is devoting more than a minimal number of resources to these products; maybe Apple more than others. Red Hat, for example, even sunsetted its "traditional" enterprise virtualization product in favor of KubeVirt on OpenShift. And its VDI product was pretty much abandoned years ago.
There are new VDI contenders still coming up though. This caught my attention recently (due to trialling Proxmox for another purpose):
https://www.youtube.com/watch?v=tLK_i-TQ3kQ
There's clearly still demand for VDI solutions too. A recent example:
https://forum.proxmox.com/threads/vdi-solution-for-proxmox.1...
I don’t know anything about high-end workstations really. But I wonder if the whole ecosystem is in a rough spot generally? Seems like cloud tooling is always getting easier.
Shame really, people do fun stuff with excess compute.