I know they’re ex-Sun, but is there any real technical benefit for choosing not-Linux (for their business value prop)?
I know of the technical benefits of illumos over linux, but does that actually matter to the customers who are buying these? Aren’t they opening a whole can of worms for ideology/tradition that won’t sell any more computers?
As someone who runs Linux container workloads, the fact that this is fundamentally not-Linux (yes I know it runs Linux binaries unmodified) would be a reason against buying it, not for.
It's not like we specifically say "oh btw there's illumos inside and that's why you should buy the rack." It's not a customer-facing detail of the product. I'm sure most will never even know that this is the case.
What customers do care about is that the rack is efficient, reliable, suits their needs, etc. Choosing illumos instead of Linux here is a choice made to help effectively deliver on that value. This does not mean that you couldn't build a similar product on top of Linux inherently, by the way, just that we decided illumos was more fit for purpose.
This decision was made with the team, in the form of an RFD[1]. It's #26, though it is not currently public. The two choices that were seriously considered were KVM on Linux, and bhyve on illumos. It is pretty long. In the end, a path must be chosen, and we chose our path. I do not work on this part of the product, but I haven't seen any reason to believe it has been a hindrance, and probably is actually the right call.
I am curious why, if you feel like elaborating. EDIT: oh just saw your comment down here: https://news.ycombinator.com/item?id=39180814
1: https://rfd.shared.oxide.computer/
A team should always pick the tools they are most familiar with. They will always get better results with those than with something they understand less. With this in mind, using their own stack is a perfectly adequate choice. Factors outside their team will determine if that works out in the long term.
A handful of the team are more familiar with Illumos and the next hundred people they hire after that will be more familiar with Linux.
If your hiring decisions are always based on what people are currently familiar with, you'll always be stuck in the past. You may not even be able to use present day tooling and systems because they could be too new to hire people for.
You're much better off hiring people who are capable of learning, and then giving them the opportunities to learn and advance their knowledge and skills.
Everyone is capable of learning. I can hire someone who is capable of learning Japanese. They can then try to teach the rest of the team Japanese. Does that mean it's a good idea to switch all our internal docs to Japanese? Maybe if I was building a startup in Japan. Similarly, writing internal docs in English for a startup in Japan would be of equal difficulty and value. Hooray, we're learning! And struggling more than needed to build a product.
You're better off hiring experienced people who are highly productive. If they're highly productive with one stack, it makes no sense to change their stack so they're no longer productive, or hiring people who aren't familiar with it and waiting for them to become productive.
There's nothing wrong with using old, well established things. They're quite often better than new things. As long as they're still supported, just use whatever builds a working product. It's the end product that matters.
The difference between Japanese and English is much, much bigger than the difference between one Unix OS and one Unix-like OS. This is a remarkably disingenuous argument. If you really don't understand the difference in scope, there's no point in discussing anything with you because you've managed to disprove your opening sentence with yourself as the counterexample.
You're welcome to see it that way if you want. But if you think you can get to know a completely new kernel, OS, etc in a short amount of time, backwards and forwards, you're equally as disingenuous. You can get by editing a few lines in a pinch, but you could equally just learn a few Japanese phrases. Proper understanding requires a deep knowledge that comes from practice and experience with subtle complexity and context.
(Japanese isn't so radically different from English, it mostly just has more words for more contexts. In many ways it's simpler than English. It would be harder to go from Java to Haskell, with their many different language paradigms)
A lot of people out there claim to know Linux, yet few can prove it. OTOH, if they gain a cult following with lots of people using their stack, those people might become more familiar with their stack than most Linux people are with theirs. They could grow a captive base of prospective hires.
That's not the big concern though. The big concern is whether vendor integration and certification becomes a stumbling block. You can hire any monkey to write good-enough code, but that doesn't give you millions in return. Partnerships with vendors and compliance certifications can give you hundreds of millions. The harder that is, the farther the money is. A totally custom, foreign stack can make it harder, or not; it depends how they allocate their human capital and business strategy, whether they can convince vendors to partner, and clients to buy in. Anything very different is a risk that's hard to ignore.
To be clear, we had already hired people with deep familiarity with Linux at the time this decision was made. In particular, Laura Abbott, as one example.
It is true that the number of developers who know Linux is larger than the number who know illumos. But the same is true of developers who know C versus those who know Rust. Just like some folks need to be onboarded to Rust, some will need to be onboarded to illumos. That is of course part of the tradeoff.
As someone who has known UNIX since 1993, starting with Xenix: many who are familiar with Linux are actually familiar with a specific Linux distribution, as the Linux wars took over from the UNIX wars.
That being the case, knowing yet another UNIX cousin isn't that big a deal.
I do not personally agree with this. I do think that familiarity is a factor to consider, but would not give it this degree of importance.
It also was not discussed as a factor in the RFD.
The Linux vs. Illumos decision seems to be downstream of a more fundamental decision to make VMs the narrow waist of the Oxide system. That's what I'm curious about.
Especially since Oxide has a big fancy firmware stack. I would expect this stack to be able to do an excellent job of securely allocating bare-metal (i.e. VMX root on x86 or EL2 if Oxide ever goes ARM) resources.
This would allow workloads on Oxide to run their own VMs, to safely use PCIe devices without dealing with interrupt redirection, etc.
I'm not affiliated with Oxide but I don't think you can put Crucible and VPC/OPTE in firmware. Without a DPU those components have to run in the hypervisor.
Possibly not.
But I do wonder why cloud and cloud-like systems aren’t more aggressive about splitting the infrastructure and tenant portions of each server into different pieces of hardware, e.g. a DPU. A DPU could look like a PCIe target exposing NVMe and a NIC, for example.
Obviously this would be an even more custom design than Oxide currently has, but Oxide doesn’t seem particularly shy about such things.
It would be great if that RFD became public someday, if that's possible of course, especially since it's a long read.
If you're running in one of the big 3 cloud providers, the bottom-level hypervisors are not-linux. This is equivalent. Are you anti-AWS or anti-Azure for the same reason?
This is the substrate upon which you will run any virtualized infrastructure.
Small note, that's not true for Google Cloud, which runs on top of Linux, though modified.
Disclaimer: Former Googler, Cloud Support
Another Xoogler here: any idea what they mean by it's not Linux at the bottom for other providers? Like, surely it's _some_ common OS? Either my binaries wouldn't run or AWS is reimplementing Linux so they can, which seems odd.
Or are they just saying that the VM my binary runs on might be some predictable Linux version, but the underlying thing launching the VM could be anything?
Old AWS used to be Xen; Nitro AFAIK uses a customised VMM, and I don't recall whether it's a custom OS or hosted on top of something.
Azure is Hyper-V underneath IIRC, a custom variant at least (remember Windows Server Nano? IIRC it was the closest you could get to running it), with sometimes weird things like network cards running Linux and integrating with Windows' built-in SDN facility.
The rest of the bigger ones are mainly Linux with occasional Xen and such, but sometimes you can encounter non-trivial VMware deployments.
Nitro is supposed to be this super customized version of KVM.
Correct, the hypervisor isn't running Linux.
I think the only provider where that would make sense would be Microsoft, where they have their own OS.
Azure runs a version of Windows, see:
https://techcommunity.microsoft.com/t5/windows-os-platform-b...
When your programs are running on a VM, the linux that loads and runs your binaries is not at the bottom; that linux image runs inside a virtual machine which is constructed and supervised by a hypervisor which sits underneath it all. That hypervisor may run on the bare machine (or what passes for a bare machine what with all the sub-ring-zero crud out there), or may run on top of another OS which could be linux or something else. And even if there is linux in the middle and linux at the bottom they could be completely different versions of linux from releases made years apart.
> Or are they just saying that the VM my binary runs on might be some predictable Linux version, but the underlying thing launching the VM could be anything?
Yup. eg with Xen the hypervisor wasn't Linux, even if the privileged management VM (dom0) was Linux (or optionally NetBSD in the early days). The very small Xen hypervisor running on the bare metal was not a general purpose OS, and didn't expose any interface itself - it was well hidden and relied on dom0 for administration.
As I understand it, there's linux running on the Google Cloud hardware but the virtualized networking and storage stacks in Google Cloud are google proprietary and largely bypass linux -- in the case of networking see the "Snap: a Microkernel Approach to Host Networking" paper.
In contrast, it appears that Oxide is committing to open-source the equivalent pieces of their virtualization platform.
I don't know about EC2, but Lambda and Fargate are presumably Firecracker, which is Linux KVM.
AWS "Nitro" hypervisor which powers EC2 is their (very customized) KVM.
https://docs.aws.amazon.com/whitepapers/latest/security-desi...
I suspect a lot of people would (irrationally) freak out if they saw how the public cloud works because it's so different from "best practices". Oxide would probably trigger people less if they never mentioned Illumos but that's not really an option when it's open source.
Linux is a nightmare in the embedded/appliance space because one ends up just having platform engineers who spend their day fixing problems with the latest kernels, drivers, core libraries, etc, that the actual application depends on.
Or one goes the route of 99% of the IoT/etc vendors, and never update the base OS and pray that there aren't any active exploits targeting it.
This is why a lot of medium-sized companies cried about Centos, which allowed them to largely stick to a fairly stable platform that was getting security updates without having to actually pay/run a full blown RHEL/etc install. Every ten years or so they had to revisit all the dependencies, but that is a far easier problem than dealing with a year or two update cycle, which is too short when the qualification timeframe for some of these systems is 6+ months long.
So, this is almost exclusively a Linux problem; any of the *BSD/etc. alternatives give you almost all of what Linux provides without this constant breakage.
This is a really, really good point -- and is a result of the model of Linux being only a kernel (and not system libraries, commands, etc.). It means that any real use of Linux is not merely signing up for kernel maintenance (which itself can be arduous) but also must make decisions around every other aspect of the system (each with its own communities, release management, etc.). This act is the act of creating a distribution -- and it's a huge burden to take on. Both illumos and the BSD derivatives make this significantly easier by simply including much more of the system within their scope: they are not merely kernels, but also system libraries and commands.
This weighed heavily in our own calculus, so I'm glad you brought it up!
Given the limited resources of the dev team, it may lead to limited support of the system outside of the narrow set of officially supported/certified hardware, with that support falling behind on modern hardware, as happened with Sun, and, as a result, vendor lock-in to overpriced and low-performing hardware.
There is a reason that, back in Solaris development, there was a joke about embedding the Linux kernel as a universal driver for the Solaris kernel in order to get reasonable support for the hardware out there.
Well, they aren't burdened by having to make their own processors, like Sun had to do, or their own fully custom chips in general. They just have to support the selection of hardware they pick, and they have complete oversight of what hardware runs on their racks. So I'm not sure the Sun comparison is relevant here, since they can still pick top-of-the-line hardware. Just not any hardware.
Any issues with funding or whatever, and their customers would get locked in on yesterday's "top of the line hardware" (reminds me of how Oracle used lawyers to force HP to continue supporting Itanic). Sun was a 50K-person company, and they struggled to support even a reasonably wide set of hardware. Vendor lock-in is like a Newton's law in this industry.
The Oxide hw is using available AMD SKUs for CPU.
This is less of an issue for us at Oxide, since we control the hardware (and it is all modern hardware; just a relatively small subset of what exists out there). Part of Sun's issue was that it was tied not just to a software ecosystem, but also to an all-but-proprietary hardware architecture and surrounding platform. Sun eventually tried to move beyond SPARC and SBus/MBus, but they really only succeeded in the latter, not the former.
CentOS wasn’t used in embedded systems.
Even Windows was and is used substantially in embedded systems.
I know about that. This is a special edition for embedded though. But CentOS is news to me. CentOS was targeted for servers.
Arista EOS is definitely CentOS Linux release 7.9.2009 (AltArch) based.
Sure it was. So is RHEL.
Embedded isn't limited to devices equal or less powerful / expensive than the Raspberry Pi.
Interesting that you bring up the embedded/appliance space, as I have noticed there are plenty of FOSS alternatives coming up whose key features are not being Linux-based and not using GPL-derived licenses.
FreeRTOS, Nuttx, Zephyr, mbed, Azure RTOS,...
Aren't they also ex-Joyent? Joyent ran customer VMs in prod on Illumos for many years so there's a lot of experience there.
bcantrill used to work at Sun then became CTO at Joyent, so the reason why Joyent ran Illumos is probably the same reason as why Oxide is, because Cantrill likes it and judges that it's a good fit for what they are doing.
As I elaborated above, bcantrill did not decree that we must use illumos. Technical decisions are not handed down from above at Oxide.
I saw your comment[1] after I wrote mine. I'm not saying that he's forcing you guys to use it (that would not be a good way of being a CTO at a start-up…), but that doesn't prevent him from advocating for solutions he believes in.
Would you say that Oxide would have chosen Illumos if he wasn't part of the company?
[1]: https://news.ycombinator.com/item?id=39180706
(I work at Oxide.)
Bryan is just one out of several illumos experts here. If none of those were around, sure, maybe we wouldn't have picked illumos -- but then we'd be unrecognizably different.
I came into Oxide with a Linux background and zero knowledge of illumos. Learning about DTrace especially has been great.
I'd like to learn DTrace (especially after the recent 20yr podcast episode), but I worry it'll never make into mainstream Linux debugging, and hence only useful for more niche jobs.
Your concern is completely reasonable -- a thing I'd add though is that both Windows and macOS have DTrace support.
I was excited, but it looks like both MacOS and Windows require special admin permissions for my laptop that I doubt my work would approve (completely reasonable to require this, it just makes it unusable for me).
I don't know how to respond to this question, because to me it reads like "if things were completely different, what would they be like?" I have no idea if you could even argue that a company could be the same company with different founders.
What I can say is that this line of questioning still makes me feel like you're implying that this choice was made simply based on preference. It was not. I am employee #17 at Oxide, and the decision still wasn't made by the time I joined. But again, the choice was made based on a number of technical factors. The RFD wasn't even authored by Bryan, but instead by four other folks at Oxide. We all (well, everyone who wanted to, I say "we" because I in fact did) wrote out the pros and cons of both, and we weighed it like we would weigh any technical decision: that is, not as a battle of sports teams, but as a "hey we need to drive some screws: should we use a screwdriver, a hammer, or something else?" sort of nuts-and-bolts engineering decision.
I'm not saying otherwise.
In fact, when I wrote my original comment, I actually rewrote it multiple time to be sure it wouldn't suggest I was thinking it was some sort of irrational decision (that's why I added the “it's a good fit for what they are doing”), but given your reaction it looks like I failed. Written language is hard, especially in a foreign language, sorry about that.
It's all good! I re-wrote what I wrote multiple times as well. Communication is hard. I appreciate you taking the effort, sorry to have misunderstood.
Heck, there's a great little mistake of communication in the title: this isn't just "intended" to power the rack, it does power the rack! But they said that because we said that in the README, because that line in the README was written before it ended up happening. Oops!
Many people, including part of the founding team, are ex-Joyent, yes. Some also worked at Sun, on the operating systems that illumos is ultimately derived from.
The main drawbacks to me are
1. No support for nested virtualization, so running a vm inside your vm is not available. This prevents use of projects such as kubevirt or firecracker on a Linux guest, and WSL2 on a Windows guest.
2. No GPU support
If the base hypervisor was Linux, it would be way more capable for users it seems. I also wonder if internally Linux is used for development of the platform itself so they can create "virtual" racks to dogfood the product without full blown physical racks.
With all that said, I do not know the roadmap, and admittedly there are already quite a few existing platforms built on KVM, so as their hypervisor improves and becomes more capable it could potentially become a strategic advantage.
Developers at Oxide work on whatever platform they'd like, as long as they can do their work. I will say I am in the minority as a Windows user though, most are on some form of Unix.
So one of the reasons why Rust is such an advantage for us is its strong cross-platform support: you can run a simulated version of the control plane on Mac, Linux, and Illumos, without a physical rack. The non-simulated version must run on Helios. [1]
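To give a flavor of what that looks like, here's a purely illustrative sketch (not code from omicron; the module and function names are made up) of the kind of cfg-based split that lets the same caller drive a real backend on illumos and a simulated one on a Mac or Linux dev box:

    // Hypothetical sketch; names are invented, not omicron's.
    #[cfg(target_os = "illumos")]
    mod backend {
        // On Helios/illumos, this would talk to the real hypervisor machinery.
        pub fn create_instance(name: &str) -> Result<(), String> {
            println!("creating real instance '{name}'");
            Ok(())
        }
    }

    #[cfg(not(target_os = "illumos"))]
    mod backend {
        // On macOS/Linux dev machines, stand up a simulated instance instead.
        pub fn create_instance(name: &str) -> Result<(), String> {
            println!("creating simulated instance '{name}'");
            Ok(())
        }
    }

    fn main() {
        // Callers are identical on every platform; cargo selects the backend.
        backend::create_instance("demo").expect("instance creation failed");
    }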
That said we do have a rack in the office (literally named dogfood) that employees can use for various things if they wish.
1: https://github.com/oxidecomputer/omicron?tab=readme-ov-file#...
Interesting thanks for the insight.
Now I'm imagining Helios inside WSI - Windows Subsystem for illumos
You're welcome. I will give you one more fun anecdote here: when I came to Oxide, nobody in my corner of the company was using Windows. And Hubris and Humility almost Just Worked: we had one build system issue that was using strings instead of the path APIs, but as soon as I fixed those, it all worked. bcantrill remarked that if you had gone back in time and told him long ago that some of his code would Just Work on Windows, he would have called you a liar, and it's one of the things that validates our decision to go with Rust over C as the default language for development inside Oxide.
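For anyone wondering what "strings instead of the path APIs" looks like in practice, here's a tiny illustrative sketch (not the actual Hubris build code): hand-gluing paths with "/" is the classic source of Windows breakage, whereas std::path handles separators for you:

    use std::path::{Path, PathBuf};

    fn main() {
        // Fragile: hard-codes a Unix-style separator via string formatting.
        let fragile = format!("{}/{}", "target", "app.bin");

        // Portable: Path::join applies the platform's separator rules.
        let portable: PathBuf = Path::new("target").join("app.bin");

        println!("fragile:  {fragile}");
        println!("portable: {}", portable.display());
    }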
That would be pretty funny, ha! IIRC something about simulated omicron doesn't work inside WSL, but since I don't work on it actively, I haven't bothered to try and patch that up. I think I tried one time, I don't remember specifically what the issue was, as I don't generally use WSL for development, so it's a bit foreign to me as well.
Man, you can't let Bryan live that one down, can you?
:)
I didn't bother to git blame the code, I myself do this from time to time :)
I mean... WSL2 is just hyperv with some integration glue, and illumos isn't Linux but unix is unix; that might well be doable.
How is Oxide for GPU-heavy workloads?
There are no GPUs in the rack, so pretty bad, haha.
We certainly understand that there's space in the market for a GPU-focused product, but that's a different one than the one we're starting the company off with. There's additional challenge with how we as a company desire openness, and GPUs are incredibly proprietary. We'll see what the future brings. Luckily for us many people still desire good old classic CPU compute.
Would pass-through to VM work?
At $work I'm running SmartOS servers with GPU passing to a ubuntu bhyve for the occasional CUDA compute and it works wonderfully. Wonder if similar could be possible with Helios?
The software interface isn't the problem: the problem is that there are no physical GPUs in the product. There's nothing to pass through.
Is it that it runs Linux binaries unmodified, or that it runs and manages VMs which run Linux, and as an end-user that's what you run your software in?
As far as I recall it's not a VM. They run in "LX Branded Zones" which does require a Linux userland so that the binaries can find their libraries etc but Zones are more like "better cgroups than cgroups, a decade earlier" than VMs.
No, it's a VM, running a bhyve-based hypervisor, Propolis.[0] LX branded zones were/are great -- but for absolute fidelity one really needs VMs.
[0] https://github.com/oxidecomputer/propolis
Do you have a solution for running containers (Kubernetes, etc)? Are you spinning up a Linux VM to run the containers in there, doing VM per container, or something else?
Customers can decide, I would assume. Most likely you install some Kubernetes and then just have multiple VMs distributed across the rack, and then run multiple Pods on each node.
VM per container seems like a waste unless you need that extra isolation.
I wondered if there was any support for running containers built in - something like EKS/AKS/GKE/Cloud Run/etc - but looking at the docs it appears not.
I agree that VM per container can be wasteful - though something like Firecracker at least helps with start time.
From the podcast it seems that they want to deliver a minimal viable product. Their primary customers already have a lot of their own higher-level stack.
They might get into adding more higher-level software eventually, depending on what customers want.
It runs VMs -- so it doesn't just run Linux binaries unmodified, it runs Linux kernels unmodified (and, for that matter, Windows, FreeBSD, OpenBSD, etc.).
Keep in mind that Helios is really just an implementation detail of the rack; like Hubris[0], it's not something visible to the user or to applications. (The user of the rack provisions VMs.)
As for why an illumos derivative and not something else, we expanded on this a bit in our Q&A when we shipped our first rack[1] -- and we will expand on it again in the (recorded) discussion that we will have later today.[2]
[0] https://hubris.oxide.computer/
[1] https://www.youtube.com/watch?v=5P5Mk_IggE0&t=2556s
[2] https://mastodon.social/@bcantrill/111840269356297809
Perhaps you could talk a bit about the distributed storage based on Crucible with ZFS as the backing storage tonight. I would really love to hear some of the details and challenges there.
Yes! Crucible[0] is on our list of upcoming episodes. We can touch on it tonight, but it's really deserving of its own deep dive!
[0] https://github.com/oxidecomputer/crucible
The timing of your podcast is the least convenient thing ever for us poor Europeans. And then the brutal wait the next day until it's uploaded.
The only thing I miss about Twitter Spaces is that you could listen the morning after.
Yes (hello from Czechia), however there will always be somebody for whom this is inconvenient. Also, I have to confess I was at times so immersed in other work that I only made a few Oxide and Friends live. I might stay up tonight.
I am looking forward to the Crucible episode. It sounds like it could be a startup on its own; it wouldn't be the first distributed file/storage system company.
Do you have the same gut reaction to ESXi?
I sure do. We've finally got to a place where we don't need weird hardware tricks to containerize workloads -- this is why a lot of shops pursue docker-like ops for production. When I buy hardware, long-term maintenance is a factor, and when my whole operations fleet relies on ESX, or in this case a Solaris fork, I'm now beholden to one company for support at that layer. Buying a rack of Supermicro gear and running RHEL or SLES with containerized orchestration on top means I can, in a pinch, hire experts anywhere to work on my systems.
I have no reason to believe Oxide would be anything but responsive and effective in supporting their systems, but introducing bespoke software this deep in the stack severely curtails my options if things get bad.
I can somewhat see your point, but in my experience you can't rely on RHEL or whatever vendor Linux to correctly bring up random OEM hardware. You will slowly discover all of the quirks, like it didn't initialize the platform EDAC the way you expected, or it didn't resolve some weird IRQ issue, etc. Nothing about my experience leads me to believe Linux will JFW on a given box, so I don't feel like Linux has an advantage in this regard, or that niche operating systems have a disadvantage. Certainly I feel like a first-party OS from the hardware vendor is going to have a lot of advantages.
I think the value proposition they're offering is a carefully integrated system where everything has been thoroughly engineered/tested to work with everything else, down to writing custom firmware to guarantee that it's all ship-shape, so that customers don't have to touch any of the innards, and will probably just treat them as a black box. It seems like it's chock-full of stuff that they custom-built and that nobody else would be familiar with, by design. If that's not what you want, this probably isn't the product for you.
This has been / will be the market education challenge; it's the same one Joyent had with SmartOS. They're correctly pointing out that the end user or operator will basically never interact with this layer, but it does cause some knee-jerk reactions. All that said, there are some pretty great technical benefits to using illumos-derived systems, not the least of which is the team's familiarity and ability to do real diagnosis on production issues. I won't put words in anyone's mouth, but I suspect that's going to be critical for them as they support customer deployments w/o direct physical access.
Seems strange to me too, but it sounds like the end users basically never interact with this - it's just firmware humming along in the background. As long as it's open source and reasonably well documented, it's already light-years ahead of what else is out there.
It seems healthy to have options, almost like the universe is healing a bit after Oracle bought Sun. I can't imagine better hands bringing the Oxide system together than that team. As an engineer who works entirely with Linux these days, I pine for the days of another strong Unix in the mix to run high-value workloads on. Comparing openvswitch on Linux to, say, the Crossbow SDN facility on Solaris, I'd take Crossbow any day. Nothing "wrong" with Linux, but it is sorely lacking in "master plan" levels of cohesion, with all the tooling taking its own path, often bringing complexity that then requires yet more complicated tooling on top to abstract it away.
As far as performance and feature set, probably not anymore (I would have answered differently 10 years ago, and if I am wrong today would love to be educated about it).
However, if we are considering code quality, which I consider important if you are actually going to be maintaining it yourself as oxide will have to do since they need customizations, then most of the proprietary Unix sources are just superior imo. That is, they have better organization, more consistency in standards, etc. The BSDs are slightly better in this regard as well, it really isn't a proprietary vs open source issue, it's more about the insane size of the Linux kernel project making strict standards enforcement difficult if not impossible the further you get from the very core system components.
Regardless of them being ex-Sun (and I am not ex-Sun), if I needed a custom OS for a product I was working on, Linux would be close to the last Unix-based OS source tree I would try to do it with, only after all other options failed for whatever reason. And that's not even taking into account the licensing, which is a whole other can of worms.
As a customer, I expect most of the technical advantages come from basically being a downstream consumer of ZFS. From the perspective of a developer/maintainer of an OS, DTrace and ZFS are large technical wins. Part of the overall value proposition of Oxide is "correctness". You get an OS/hardware stack that is designed to work together. You get 20 years of cruft thrown out. You get a lot of tooling, APIs, etc. written in a performant, memory-safe language (Rust). Also you get a really fantastic podcast about the whole process. And as a customer you get a company that understands their stack from driver to VM and has a ton of internal expertise debugging production problems.
Their customers run virtualised OSes on top of this.
This is no different from Azure Host OS, Bottlerocket, Flatcar or whatever.
This matters to them, as knowing the whole stack (some of the kernel code is still theirs from the Sun days) and making it available matters to the customers that want source code access for security assessment reasons.
I think it's a good idea to have more choice, especially in OSS. A Linux mono culture isn't any better than a chromium mono culture. They might be able to do stuff that just isn't practical if they stuck with Linux. They are also probably more familiar with illumos, or at least familiar enough to know that they can use it to do more than with linux
In one podcast, the reason given was staff familiarity and owning the full stack, not just the kernel I believe.
Not everything needs to be linux. Besides, if monocultures are supposed to be harmful, why is linux being thrown to everything nowadays? Very dangerous to have a single point of failure in (critical) applications.
Perhaps Illumos is particularly well suited for a Hypervisor/Cloud platform due to work upstreamed by Joyent originally for SmartOS?