Well, it was nice while it lasted! HashiCorp always felt like a company made by actual engineers, not "bean counters". Now it will just be another cog in the IBM machine, slowly grinding it down, removing everything attractive, just like RedHat and CentOS.
Hopefully this will create a new wave of innovation, and someone will create something to replace the monopoly on IaC that IBM now owns.
Hashi code, such as Terraform, was (is) an amazing example of a good reference Go codebase. It was very hard for me to get into Go because, outside of the language trivia and hype, it was hard to learn about the patterns and best practices needed for building even a mid-sized application.
That's interesting. I found Go to be a very productive and easy language, coming from Typescript.
But I had a similar experience to yours with PHP, I just couldn't get into it.
I find the claims that Go is easy just wrong. It's actually a harder language to write in because without discipline, you are going to end up maintaining massive amounts of boilerplate.
That's from someone who did a bunch - Perl, Ruby, Python, Java, C++, Scala.
Syntax is one thing, assembling an application with maintainable code is something else.
What in particular did you find difficult building a maintainable codebase in Golang? Not quite understanding the boilerplate reference.
Code generation in Golang is something I've found removed a lot of boilerplate.
I am not used to writing code where 2/3 of it is "if err" statements.
Also, refactoring my logging statements so I could see the chain of events seemed like work I rarely had to do in other languages.
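Something like this hypothetical snippet (all names made up) shows the shape being described, where most of the function body is error plumbing rather than logic:

```go
package main

import (
	"fmt"
	"os"
)

// readConfig is a made-up helper illustrating how much of a typical
// Go function ends up being "if err" checks rather than logic.
func readConfig(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", fmt.Errorf("opening config: %w", err)
	}
	defer f.Close()

	buf := make([]byte, 1024)
	n, err := f.Read(buf)
	if err != nil {
		return "", fmt.Errorf("reading config: %w", err)
	}
	return string(buf[:n]), nil
}

func main() {
	if _, err := readConfig("/nonexistent"); err != nil {
		fmt.Println("error:", err)
	}
}
```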
It's a language whose designers - with ALL due respect - clearly have not built a modern large application in decades.
I understand where you are coming from, but I actually like the explicit error handling in Golang. Things being explicit reduces complexity for me a lot and I find it easier to spot and resolve potential issues. It's definitely something that I can understand not working for everyone.
I agree on the logging point, but my experience was that explicit error handling combined with good test coverage meant we rarely got into non-deterministic situations where we relied extensively on logging to resolve things. But we also went through several iterations of tuning how we logged errors. It's definitely a rough edge in what is readily available in the language.
This sounds a lot like Apple user arguments about the iPhone 1 missing copy & paste over a decade ago.
I am very pedantic about checking responses for errors, but in my experience, when working with a team on an existing project, people notoriously forget to check the result. TBH it is a pain to essentially keep repeating the boilerplate `if err != nil ...`.
What's worse is that even documentation skips checks. For example the `Close()` method. It almost always returns an error, but I've almost never seen anyone check it.
The reason is that if you want to use `defer` (which most people do), you end up with very ugly code.
The alternative would be to make sure you place (and properly handle the error from) Close in multiple places, but then you risk missing one.
Another solution would be using `goto` the way it is used in the Linux kernel, but some people have a big problem with that. I had a boss who was religiously against goto (and did not seem to understand Dijkstra's argument) and asked me to remove it even though it made the code more readable.
I think go makes more sense if you imagine spending more time reading MRs and code than writing it.
Standard go error handling maximises for locality. You don't see many "long range" effects where you have to go and read the rest of the code to understand what's going to happen. Ideally everything you need is in the diff in front of you.
Stuff like defer() schedules de-alloc "near" where things get allocated, so you don't have to think about conditionals. If an MR touches only part of a large function you don't have to read the whole thing and understand the control flow.
The relative lack of abstraction limits the "infrastructure" / DSLs that ICs can create which renders code impenetrable to an outside reader. In a lot of C++ codebases you basically can't read an MR without digging into half the program because what looks like a for loop is calling down into a custom iterator, or someone has created a custom allocator or _something_ that means code which looks simple has surprising behaviour.
A partial solution for that problem is to have a LOT of tests, but it manifests in other ways, e.g. figuring out the runtime complexity of a random snippet of C++ can be surprisingly hard without reading a lot of the program.
I personally find these things make go MRs somewhat easier to review than in other languages. IMHO people complaining "it's more annoying to write" (lacking stronger abstractions available in many other languages) are correct but that's not the whole story.
P.S: For Close(), you're right that most examples skip checking the error and maybe it would be better if they didn't. It only costs a few lines to have a function that takes anything Closable and logs an error (usually not much else you can do) but people like to skip that in examples.
Thanks for the Close() example, that's a nice solution, although would it work if you wanted to handle an error (not just log it)?
I'm assuming you're comparing to exceptions.
I don't know about that. I think this relies on discipline of the software engineer. I can see for example someone who is strict and only uses exceptions on failures and returns normal responses during usual operation.
With Go you can use errors.Is and errors.As which take away that locality. Or what's worse, you could have someone actually react based on the string of the error message (although with some packages, this might be the only way).
I still see your point though, but I also think Rust implemented what Go was trying to do.
You get a Result type, which you can match to get the data and check the error; you can also pass it downward (yes, this takes away that locality, but then the compiler will warn you if you have a new unhandled error downstream), or you can choose to unwrap without checking the error, which will trigger a panic on error.
Good points, I think it's fair to claim Result and Option are technically better (when combined with the necessary language features and compile-time checks).
Re: Close() errors yeah most times you would be better off writing the code in place if you really need to handle them. You can make a little helper if you find yourself repeating the same dance a lot. Usually there's not much you can do about close errors though.
Not really understanding the iPhone reference or how it relates here.
Sounds like the problem you have with the error checking relates more to development practice of colleagues than the language.
We used defer frequently. Never considered it ugly.
My experience is that 'goto' (hypothesising here, as I haven't used it) and exception handling that is expected to be caught at boundary points in a codebase can be elegant, but both need careful thought and design. They can hide all sorts of issues and lead to a lot of spurious error handling by those who don't understand the intent. That's the biggest issue I have with implicit (magical) error handling: too many people do it poorly.
Everything is explicit until someone decides to introduce a panic() somewhere... (I get that this exists in more or less any language)
That said, in practice I see it following a similar philosophy to java checked exceptions, just with worse semantics.
Personally, I don't like high-boilerplate languages because they train me to start glossing over code, and it's harder for me to keep context when faced with a ton of boilerplate.
I don't hate go. I don't love it either. It's really good at a few things (static binaries, concurrency, backwards and forwards compatibility). I hate the lack of a fully-fleshed out standard library, the package management system is still a bit wonky (although much improved), and a few other aesthetic or minor gripes.
That said there's no language I really love, save maybe Kotlin, which has the advantage of the superb Java standard library, without all the structural dogma that used to (or still does) plague the language (OOP only, one public class per file, you need to make an anonymous interface to pass around functions, oh wait now we have a streaming API but it's super wonky with almost C++-like compilation error messages, hey null pointers are a great idea right oh wait no okay just toss some Lombok annotations everywhere).
End of the day though a lot of talented people are golang first and sometimes you just gotta go where the industry does regardless of personal preference. There's a reason scientists are still using FORTRAN after all these years, and why so much heavy math is done in python of all things (yeah yeah I know Cython is a thing and end of the day numpy etc abstract a lot of it out of the way, but a built in csv and json module combined with the super easy syntax made it sticky for data scientists for a reason)
Yes, because other languages just hide errors from the user.
I think the reason people find Go a bit annoying with the error condition is because Go actually treats errors as a primary concern, not an afterthought like Python or Java do.
I assume you're talking about languages with exceptions when saying "other language just hide errors from the user." I think that's a gross over-simplification of exception-based error handling. I generally do prefer explicit, but there are plenty of cases where exceptions are clearly elegant and more understandable.
My preference is a language like Elixir where most methods have an error-code returning version and a ! version that might raise an exception. Then you (the programmer) can choose what you need. If you're writing a controller method that is for production important code, use explicit. If you're writing tests and just want to catch and handle any exception and log it, use exceptions. Or whatever makes the most sense in each situation.
I've never gotten the explicit argument. Java checked exceptions are also part of the function signature/interface and nothing prevents one from making a language where all exceptions are checked then just doing
I get at the end of the day it's all semantics, but personally I kinda like the error-specific syntax. If you want to do the normal return path, that's fine, but I prefer the semantics of Rust's Result type (EITHER a result OR an error may be set).To each their own, it's not something I really worry about.
Yeah same, Go's explicit argument never resonated with me either. In Elixir it's similar to a Result type, being a tuple such as either `{:ok, return_val}` or `{:err, err_msg}`, which is perfect for using with `case` or `with` depending on your situation.
You can't hide an exception if it crashes your program. You can definitely ignore a return from a function, essentially swallowing it. It's the definition of an anti-pattern.
I prefer to handle errors than ignore them. "If err" is actually one of the best things about Go
In most web applications I write, I have one error-handling block.
Access forbidden? Log a warning and show a 403 page. Is it JSON? Then return JSON.
Exception-handling in general is a pretty small part of most applications. In Go, MOST of the application is error-handling, often just duplicate code that is a nightmare to maintain. I just don't get why people insist it's somehow better, after we "evolved" from the brute-force way.
Errors usually happen during IO, but not in the main business logic and those two can be neatly separated.
But if you are coming from Java I can understand that the single error-handling block is more comfortable; coming from JavaScript/TypeScript, it's much easier to check `if err != nil` than to debug errors I forgot to handle at runtime.
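For comparison, the "one error-handling block" idea can be approximated in Go by having handlers return errors and centralizing the mapping to HTTP statuses — a sketch with hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// errForbidden is a hypothetical sentinel error.
var errForbidden = errors.New("forbidden")

// statusFor is the single decision point: every handler returns an error
// and this one function maps it to a status, much like the single
// exception-handling block described above.
func statusFor(err error) int {
	switch {
	case err == nil:
		return http.StatusOK
	case errors.Is(err, errForbidden):
		return http.StatusForbidden
	default:
		return http.StatusInternalServerError
	}
}

// appHandler adapts error-returning handlers to net/http.
type appHandler func(w http.ResponseWriter, r *http.Request) error

func (h appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if err := h(w, r); err != nil {
		http.Error(w, err.Error(), statusFor(err))
	}
}

func main() {
	fmt.Println(statusFor(errForbidden)) // prints 403
}
```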
disagree. k8s is written in it just fine. plus, tons of other modern large applications in enterprise settings
K8s was famously written in Go by ex-Java developers, and the code base was full of Java patterns.
Which kind of proves my point. Even Google struggled to write clean, idiomatic Go.
Not the parent, but I find that doing dependency injection or defensive programming results in a lot of boilerplate. Custom error types are extremely wordy. The language also doesn't allow for storing metadata with types, only on struct fields as tags, which seriously hampers the ability to generate code. For example, you can't really express that an integer inside a slice of slices needs validation metadata. You'll need to describe your data structure externally (OpenAPI, JSON Schema, etc.) and then generate code from that.
My experience of Golang is that dependency injection doesn't really have much benefit. It felt like a square peg in a round hole exercise when my team considered it. The team was almost exclusively Java/Typescript Devs so it was something that we thought we needed but I don't believe we actually missed once we decided to not pursue it.
If you are looking at OpenAPI in Golang I can recommend having a look at https://goa.design/. It's a DSL that generates OpenAPI specs and provides an implementation of the endpoints described. Can also generate gRPC from the same definitions.
We found this removed the need to write almost all of the API layer and a lot of the associated validation. We found the generated code including the server element to be production ready from the get go.
For OpenTofu specifically, having DI for implementing state encryption would have been really nice. If you look at the PR, a lot of code needed to be touched because the code was entirely procedural. Of course, one could just make a global variable, but that is all sorts of awful and makes the code really hard to test efficiently. But then again, this is a 300k-line project, which in my opinion is way beyond what Go is really good for. ContainerSSH with 35k+ lines was already way too big.
Out of interest what language do you think would have been more appropriate and why?
For that size of codebase I'd have thought code structure and modularisation would be more important than language choice.
I wish I had an answer to that, but I don't know. I only worked on projects of comparable size in Go, Java and PHP. Java was maybe the best for abstractions (big surprise), but it really doesn't lend itself to system-level stuff.
Not a gopher by any stretch, but to my way of thinking code generation is literally boilerplate, that's why its generated. Or does Go have some metaprogramming facilities I'm unaware of?
I took the comment to relate to writing boilerplate.
So unrelated to generated code, if that makes sense. The generated code I'm sure had lots of boilerplate, it's just not code we needed to consider when developing.
After having written probably over 100k lines of Go code, my impression is that Go is simple, but not easy. The language has very few features to learn, but that results in a lot of boilerplate code and there are more than a few footguns buried in the language itself. (My favorite [1])
I find it very hard to write expressive, easy to read code and more often than not I see people using massive switch-case statements and other, hard to maintain patterns instead of abstracting away things because it's so painful to create abstractions. (The Terraform/OpenTofu codebase is absolutely guilty of this btw, there is a reason why it's over 300k lines of code. There is a lot of procedural code in there with plenty of hidden global scope, so getting anything implemented that touches multiple parts typically requires a lot of contortions.)
It's not a bad language by any stretch, but there are things it is good at and things it is not really suited for.
[1]: https://gist.github.com/janosdebugs/f0a3b91a0a070ffb067de4dc...
Is it because secondSlice is a reference (pointer?) to firstSlice?
Yes-ish? Slices are this weird construct where they sometimes behave like references and sometimes not. When I read the explanation, it always makes sense, but when using them it doesn't. For me the rule is: don't reuse slices and don't modify them unless you are the "owner" of the slice. Appending to a slice that was returned to you from a function is usually a pretty good way to have a fun afternoon debugging.
It's because slices have underlying arrays which define their capacity (cap(s)).
Both slices start out having the same underlying (bigger) array - so appending to one slice can affect the other one.
In the "bonus" part, though, the appends outgrew the original array, so new underlying arrays were allocated (i.e. the slices stopped sharing the same backing array).
Thanks for the heads-up, janosdebugs :)
Slices are structures that hold a pointer to the array, a length, and a capacity!
So, when you slice a slice, if you perform an array operation like “append” while there is existing capacity, it will use that array space for the new value.
When the sliced value is assigned to another variable, it’s not a pointer that’s copied, it’s a new slice value (with the old length). So, this new value thinks it has capacity to overwrite that last array value - and it does.
So, that also overwrites the other slice’s last value.
If you append again, though, you get a (new) expanded array. It’s easier to see with more variables as demonstrated here: https://go.dev/play/p/AZR5E5ALnLR
(Sorry for formatting issues in that link, on phone)
Check out this post for more details: https://go.dev/blog/slices-intro
I’ve always found that the Go language is simple in all the ways that don’t matter.
(In contrast to languages like Haskell and Clojure, which are simple in most of the ways that matter.)
Compilation speed matters, among other things, and monomorphization is often costly.
I don’t understand people’s beef with IBM. They have been responsible for incredible R&D within computing. I even LIKE redhat/fedora!
HashiCorp had already been sold out since waaaay before this acquisition and I also don’t understand why their engineers are seen as “special”…
People's beef here with IBM is they don't make shiny phones and laptops and don't create hip jobs where you're paid 500k+ to "change the world" by selling ads or making the 69th messaging app.
They just focus on tried and tested boring SW that big businesses find useful and that's not popular on HN which is more startup and disruption focused.
You have obviously never been the victim of IBM's consulting arm. I caution anyone against buying anything IBM now. Absolute nightmare to work with.
IBM’s consulting arm was finally so radioactive that they spun it out into a new company (Kyndryl). What I’ve seen is that customers still have a low opinion of the new company and they continue to refer to it as IBM.
Kyndryl is IBM??
Yes, and you wouldn't believe how bad they are. We had multiple incidents where colleagues had to explain basic stuff to them and hold their hands. I was in a couple of calls with their engineers and those instantly reduced my impostor syndrome.
I worked for several years with IBM solutions and the like. They ended up opening nearshore centers in Europe to "sell" "local" resources, but it was just seconded Indian employees from the upper caste, billed at higher rates than us because they were "IBM experts".
or just work anywhere within IBM
This is unnecessarily dismissive.
While Hashicorp hasn’t been exciting for a while, I fail to see how an acquisition from IBM will invigorate excitement, much less even a neutral reaction from many developers.
Hashicorp had a huge hand in defining and popularizing the modern DevOps procedures we now declare as best practices. That’s a torch to hold that would be very difficult for a business like IBM.
Perhaps I missed some things, but the core of Ansible feels like it's continuing its path to be much less of a priority than the paid value-adds. I can't help but think the core of Hashicorp's products will go down this path, hence my pessimism.
Do you mean Terraform, not Ansible?
IBM owns Ansible, redserk is saying Terraform will go a similar route. Although I don't see what they mean by core being lower priority than paid. The paid features are all available for free via AWX, which is the open source upstream of the paid product AAP.
Red Hat's business model is "Hellware"--the open source versions are designed to be incredibly difficult to install/manage/upgrade or without any kind of stability that you're forced to pay for their versions.
No, it is not. HN has both a "greybeard" audience that will cheer in "Go boring tech" posts and a "hipster" audience that is heavily start-up and disruption focused, as GP was saying. When talking about IBM and acquisitions or similar topics, it's usually the second audience that speaks more.
That's not to say some acquisitions don't really kill the product, but you don't need to be as big and old as IBM to do that.
Nah dude. Their internal business is a dinosaur in both girth and age. If they estimate 2 years for you, put away budget for 10. And all you're gonna get is excuses and blame.
My beef with IBM as someone who worked for a company they acquired is that they would interfere with active deals that I was working on, force us to stand down while IBM tried to sell some other bullshit, then finally “allow us” to follow up with the customer once it’s too late, and the customer decided to move on to something else. Repeatedly.
Fuck IBM.
There are a number of valid criticisms about IBM
IBM repeatedly cleaning house of anyone approaching (let alone in or even rarely beyond) middle age is abhorrent.
It's funny to characterise people's beef with IBM as that they're boring, old, and stale when IBM are apparently allergic to anyone over 40.
Also their consultants have been some of the most weaponised incompetence laden, rude, and entitled idiots I've ever had the sincere displeasure to deal with.
IBM are an embarrassment to their own legacy imo.
IBM was taken over by bean counters years ago. There were researchers and others who would literally make themselves scarce, or find a way to avoid the bean counters when they walked through IBM's research labs (like Almaden Research Center) years ago. I heard this years back from multiple people working on contracts there, mainly academics.
Also, IBM has been extremely ageist in their "layoff" policies. They also have declined in quality by outsourcing to low cost/low skill areas.
I knew a guy who was laid off from IBM specifically for being older, which came out years later as part of the class action lawsuit...
There is a now-defunct column, written by multiple writers under the same name, that did a great exposé on IBM and age discrimination, but I don't want to give said column its due since the columnist had other issues.
If it's really their due, you should give it to them. This value system where you have to punish people if they don't have the "right" views needs to stop. Would you like someone to do that to you? If they did good work, it doesn't get infected by whatever "issues" they had.
Like Bourbaki? Or they all happened to share a name?
IBM has always been a punching bag.
I had been wondering who would buy HCP. I sort of figured it was going to be AWS, Google, or Azure, and then I figured the other vendors were going to have support removed (maybe gradually, maybe not).
It could have been worse: It could have been Oracle.
One of the reasons I left when I did was that it was starting to get really obvious that an acquisition was likely and I desperately did not want my work e-mail address to end in oracle.com.
IBM took away the ability of CentOS to be a free and trivial to swap-in alternative to the paid product RedHat Enterprise. That RedHat was already in financial trouble due to self-cannibalizing their own paid product is irrelevant; emotionally, “IBM” – not “RedHat” – made the decision to stop charging $0 for their custom enterprise patchsets and release trains, and so IBM will always be the focus of community ire about RedHat’s acquisition.
I expect, like RedHat, that the Hashicorp acquisition will result in a lot of startups that do not need enterprise-grade products shifting away from “anything Hashicorp offers that needs to charge money for Hashicorp to stay revenue-positive” and towards “any and all free alternatives that lower the opex of a business”, along with derogatory comments about IBM predictably assigning a non-$0 price for Hashicorp’s future work output.
* Red Hat wasn't ever "in financial trouble" -- their revenue line was up-and-to-the-right for a ridiculous number of consecutive quarters. Even when they missed overall earnings estimates, it was rarely by much and they still usually beat EPS estimates for the quarter.
* IBM had little to do with Red Hat's maneuvers around CentOS (I worked at Red Hat for several years and still have friends there, and nothing anybody there said publicly about CentOS in 2020 or 2023 was materially different from things people there were saying about it internally in 2012). Some people have tried to blame IBM for a general culture shift but as far as I've seen, every bit of the CentOS debacle was laid squarely at the feet of Red Hat staff by most in this industry -- as it should have been, since most of those involved were employed there well before IBM bought the company.
IBM's reputation as an aging dinosaur was well-earned long before it bought Red Hat, and continues to be earned outside it. That earned reputation was why they bought RHT in the first place: IBM Cloud market share was (and still is) declining and they wanted a jumpstart in both revenue and engineering credibility from OpenShift in particular.
Watson
It was special when Mitchell Hashimoto was still at the helm.
I never worked there, but I worked at a security company that hired a bunch of ex-IBM X-Force security guys, and they hated IBM with a passion.
Self selection, to be sure, but their beefs were mostly about the crushing bureaucracy that was imposed on what was supposed to be a nimble type domain; (network) security is, after all, mostly leapfrog with the black hats.
IBM is to software as Boeing is to planes.
I will not be taking questions ;-)
I have the "honor" of getting to use IBM $PRODUCT at $COMPANY.
- It uses some form of consensus algorithm between all nodes that somehow manages to randomly get the whole cluster into a non-working state simply by existing, requiring manual reboots
- Patches randomly introduce new features, often with breaking changes to current behaviour
- Patches tend to break random different things, and even the patches for those patches often don't work
- For some reason the process for applying updates randomly changes every couple of patches, making automation all but impossible
- The support doesn't know how $PRODUCT works, which leads to us explaining to them how it actually does things
- It is ridiculously expensive, both in hardware and licensing costs
All of this has been going on for years without any sign of improvement, to the point that $COMPANY now avoids IBM wherever possible.
I just got to spin down a bunch of infra that was originally in Softlayer, which IBM acquired years ago. IBM were terrible to work with, they frequently crashed services by evacuating VMs from hosts and then not powering them back up, and only notifying us long after our own monitoring detected it. Won't miss that.
You talk about beef, look at what they did with the Phoenix project for the Canadian Government. They are not the same IBM they were 50 years ago. Now they are a consulting firm, and a shitty one that employs cheap labor.
https://news.ycombinator.com/item?id=15303555
A lot of the people I respected from Heroku went there; glad they got a chance to use their skills to build something useful and profitable, gladder still that they got their payout.
Sadly I echo your sentiment about the future, as someone who has heard second-hand about the quality of work at modern Redhat.
I am wondering how many more rounds of consolidation are left until there is no more space to innovate and we only have ossified rent-seeking entities in the IT space.
Heh at “got their payout”. HashiCorp IPO’d at $80, employees are locked up for 6 months. This sale is at $35.
They IPO'd in 2021.
Yes. And many of the Heroku employees you speak of would have got RSUs that owed taxes on an $80 basis, been trading far below that for most of that time, and now have a maximum expected value of $35.
This is not a pay day for many people. Anybody who got a pay day were those that could liquidate in the IPO.
Yeah okay, if you had 0.15% stock you're still out with $10M.
Smaller and bigger percentages will differ, but that's retirement money for hundreds and hundreds of people, unless you intend to live in a very high-CoL area. Also, most of them will likely have to keep working there for years before cashing out further millions.
You've misunderstood my point. RSUs became taxable at the $80 stock price for many. Depending on where you're based, that could mean you owe(d) anywhere from $22 to $38 per share in taxes. At the top end of that range, if you're still holding any stock, this acquisition has just permanently crystalised a capital loss for you. There's no upside that gets you above what you owe/paid in taxes.
There are many many people who made a loss on this, even before the acquisition announcement.
Also I think your ownership % is way off. There's a pretty small group of people, most of them the earliest employees + execs, who would have got out with $10M. HashiCorp currently has thousands of employees and would have churned through thousands more over the years.
I don't know how pre-public to IPO RSUs work but let's do some math assuming IPO day is "day when RSUs vest":
IPO day and you get 1000 RSUs unlocked/vested. Share price is $80. You made 80k gains. For simplicity let's say you owed 40K in taxes.
One of two things happens:
- HashiCorp auto-sells to cover and you get 500 fewer shares.
- You pay your taxes on your own and earmark $40K.
Let's pick the easy one: If Hashicorp sold for you that day you are now sitting on 500 shares with a cost basis of $80.
Let's go to today: IBM buys and the person held. 500 shares are now worth $35, so the value is $17,500.
You cash out -- getting 17,500 in your account, and a capital loss of $22,500.
Sure, 17K isn't as cool as 40K, but the person still "made money" just _less_. You make it sound like this person is now "underwater" because they had a capital loss.
=====
And kids at home, this is why you sell some/all of your RSUs as you get them. No one company should be more than 15% of your portfolio. Even the one you work at.
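Running the comment's simplified numbers (the 50% flat tax and sell-to-cover at the vest price are the comment's assumptions, not real tax rules):

```go
package main

import "fmt"

// sellToCoverOutcome reruns the arithmetic above: RSUs vest at one price,
// a flat tax bill is covered by selling shares at that price, and the
// remainder is held to today's price.
func sellToCoverOutcome(vested int, vestPx, taxRate, todayPx float64) (held int, proceeds, loss float64) {
	tax := float64(vested) * vestPx * taxRate // tax owed at vest
	held = vested - int(tax/vestPx)           // shares left after selling to cover
	proceeds = float64(held) * todayPx        // value of held shares today
	loss = float64(held)*vestPx - proceeds    // cost basis minus proceeds
	return held, proceeds, loss
}

func main() {
	held, proceeds, loss := sellToCoverOutcome(1000, 80, 0.5, 35)
	fmt.Printf("held=%d proceeds=$%.0f capital loss=$%.0f\n", held, proceeds, loss)
	// held=500 proceeds=$17500 capital loss=$22500
}
```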
I don't need to make any assumptions about anything here; other former colleagues have gone through the specifics in other replies. Nothing is auto-sold at IPO to cover taxes, and a maximum of 10% of what had vested was allowed to be sold before the 6-month lockup expired. There were only a few weeks before a combination of a trading blackout window, the lockup, and the market crash conspired to make it easy to be underwater if you hadn't elected to sell everything you could coming into the IPO.
_A lot_ of people ended up with a loss.
I’m pretty sure U.S. law requires companies to withhold at 22% (or optionally higher) for any bonus/non-salary payments, which includes RSU vesting. Companies can choose to either “sell to cover” or just issue a proportionally lower amount of shares (e.g. you vested 1000 shares but only 780 show up in your brokerage account).
The problem occurs when 22% isn’t enough, which is often the case.
Ok -- I need your help. I'm missing something here.
People got RSUs. They owed tax on said RSUs. The tax cannot be higher than the value of the RSU at the time of vest.
If people did not have enough cash to pay their tax bill, and did not sell enough RSUs to get cash to pay said tax bill, then yes, I can see those people "with a loss" because they had a "surprise" tax bill, RSUs price went down and a cash problem now. Is this what you mean happened?
They shouldn't have had to "sell everything" -- at most like 50%.
I'm arguing with you here because this stuff is complex, and many people shy away from trying to understand it, and that's a huge disservice for those in our industry.
For anyone reading along -- It's as simple as this: understand the tax implications of the assets you own, pay your taxes.
The taxes are computed using the IPO price, not the price at opening or closing on the first day of trading.
IPO price was $35.
What are you talking about? The December 2021 IPO price was $80.
IPO price was $80. Briefly touched slightly above $100, and then crashed with the rest of the market and has spent most of its time since below $30.
What? What's their strike price? If they are above the sale price their return is 0.
RSUs are regular shares, folks with options would have a different story.
It's a little more complicated than that.
First of all your percentage of ownership is unrealistic. I joined in November 2019 and got a grant of a few thousand RSUs that fully vested before I left, and that I still have most of, plus I bought some shares in a few rounds of our ESPP when that became available -- as of today I have just under 5,000 shares. HashiCorp has nearly 200 million shares issued, so I own a hair over .0025% of the company. Really early employees got relatively big blocks of options but nobody I knew well there, even employees there long enough to be in that category (and there were very few of them still around by December 2021), was looking at "fuck-you money" just from the IPO.
Second, the current price isn't the whole story for employees. I had RSUs because of when I joined so the story might have been different for earlier employees who had options, but I don't think it differs in ways that matter for this discussion. As background for others:
* On IPO day in December 2021, 10% of our vested RSUs were "unlocked" -- a bit of an unusual deal where we could sell those shares immediately (or at any later time). Note "vested" there -- if you had joined the day before the IPO and not vested any RSUs yet, nothing unlocked for you. (Most of the time, as I understand it, you don't have any unlocked shares as an employee when your company IPOs -- you get to watch the stock price do whatever it does, usually go down a lot, for six months to a year.)
* At a later date, if some criteria were met (which were both a report of quarterly earnings coming out and some specific financial metrics I forget), an additional tranche of vested shares (I think an additional 15%) unlocked -- I believe this was targeted at June 2022 and did happen on schedule.
* After 1 year, everything vested unlocked.
At the moment of the IPO the price was $80, but it initially climbed into the $90's pretty fast. At one point, during intraday trading, it actually (very briefly) broke just above $100.
So, if you were aware ahead of time that the normal trajectory of stock post-IPO is down, and if you put in the right kind and size of limit orders, and if you were lucky enough to not overestimate the limit and end up not selling anything at all, then you could sell enough shares while it was up to cover the taxes on all of it and potentially make a little money over that. I was that lucky, and managed to hit all of those conditions while selling almost all of my unlocked shares (I even managed to sell a small block of shares at $100), plus my entire first post-IPO vesting block, and ended up with enough to cover the taxes on the whole ball of already-vested shares, plus a few grand left over. Since then, I haven't sold any shares except for what was automatically sold at each of my RSU vesting events.
For RSUs not yet vested at the IPO, the IPO price didn't matter because they sold a tranche of each new vesting block at market price to cover the taxes on them when they vested -- you could end up owing additional taxes but only, as I understand it, if the share price rose between vesting and sale of the remaining shares in the block, so you would inherently have the funds to pay the taxes on the difference. (And if the price fell in that time, you could correspondingly claim a loss to reduce your taxes owed.)
There were a fair number of people who held onto all their shares till it was way down, though, and had to sell a lot to cover their tax bill in early 2022 -- I think if you waited that long you had to sell pretty much all your unlocked shares because the price was well down by tax time (it bottomed out under $30 in early March 2022, then rose for awhile till it was back up over $55 right before tax day, so again, if you were lucky and bet on the timing right, you didn't end up too bad off, but waiting till the day before April 15 was not something I bet a lot of people felt comfortable doing while they were watching the price slide below $50 in late February). I even warned one of the sales reps I worked with, while the price was still up, about the big tax bill he should prepare for, and he was certain I was wrong and that he would only be taxed when he sold, and only on the sale price. (He was of course wrong, but I tried...)
The June unlock was pretty much irrelevant for me because by that point the share price was down under $30 -- it spent the whole month of June after the first week under $35. The highest it went between June 30, 2022 and today was $44.34. The entire last year it's only made it above $35 on three days, and only closed above $35 on one of them. I figured long-term the company was likely to eventually either become profitable or get bought, and in either case the price would bump back up.
I was thinking about cutting my losses and cashing out entirely when it dropped below $30 after the June layoffs, and again in November when it was below $20, and then yet again when I left the company in January of this year, but the analyst consensus seemed to be around $32-34 through all of that so I held on -- kinda glad I did now instead of selling at the bottom.
... Barely any employees could have that much stock. There are 2,200 employees per the most recent data I see. Even if the outstanding shares were 100% employee owned, a uniform allocation would give each of them at most 0.045%. Obviously, the shares are not uniformly distributed across employees, nor is HashiCorp 100% employee owned.
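A quick back-of-envelope check of that upper bound (using the rough 2,200-employee figure quoted in the thread, not exact filings data):

```python
# Back-of-envelope check of the uniform-allocation bound.
employees = 2200
uniform_pct = 100 / employees     # percent of the company per employee

print(round(uniform_pct, 3))
```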
Wow IBM got quite the discount!
The stock was at $31. The $80 level was just shortly after the IPO. They paid fair market price.
It always amazes me how people play telephone about Red Hat and how bad the quality of life supposedly is post-IBM.
When they show the service awards they don’t even cover 5 years because they don’t have all day.
If it was so bad then you wouldn’t see engineers with 10, 15, or 20 years experience staying there. They already got their money from the IBM purchase so if it were bad then they would leave.
Oh but they don’t innovate anymore.
Summit is coming. Let’s see what gets announced and then a live demo.
Patents are a stronger signal of a company focused on financial engineering than a company focused on innovation.
Every big, old, stagnant company is full of lifers who won’t move on for any number of reasons. The pay is good enough, at least it’s stable, the devil you know is better than the devil you don’t, yada yada yada. There are people in my life who work in jobs like that. They will openly admit that it sucks, but they are risk averse due to a combination of personality and family circumstances, so they stick it out. Their situation sucks, and they assume everything else sucks too. And often, because they’ve only worked in one place so long, they have a hard time finding other opportunities due to a combination of overly narrow experience and ageism.
The movie Office Space is about exactly the sort of company that is filled with lifers who hate their jobs but stay on the path of least resistance.
(I know absolutely nothing about working at Red Hat, so I’m not trying to make a specific claim about them. But I’ve known people in this situation at IBM and other companies that are too big for their own good.)
I too know several lifers at IBM. One thing I've realized is that staying loyal to a company over several years won't save you from ageism.
Your best defense against ageism may be to save more than 50% of your tech income for about 20 years, then move into management and build empires until the music stops.
Red Hat Principal Consultant here, July will be 7 years at the company for me.
Before IBM purchase: travel to clients, build and/or fix their stuff, recommend improvements
After IBM purchase: travel to clients, build and/or fix their stuff, recommend improvements
At least on my side of the aisle I haven't noticed any notable changes in my day to day work for Red Hat. IBM has been very light touch on our consulting services.
Our current economic model kind of depends on the idea that we can always disrupt the status quo with American free-market ingenuity once it begins to stagnate, but maybe we have reached the limits of what Friedman's system can do or account for.
Regarding Red Hat, I dearly hope someone will replace the slow complicated mess that is ansible. It's crazy that this seems to be the best there is...
Why slow and complicated?
We're just starting to implement it and we've only heard good things about it.
Ansible is great if you have workflows where sysadmins SSH to servers manually. It can pretty much take that workflow and automate it.
The problem is it doesn’t go much beyond that, so you’re limited by SSH roundtrip latency and it’s a pain to parallelize (you end up either learning lots of options or mitogen can help). However fundamentally you’re still SSHing to machines, when really at scale you want some kind of agent on the machine (although ansible is a reasonable way to bootstrap something else).
When I managed a large fleet of EC2 instances running CentOS I had Ansible running locally on each machine via a cron job. I only used remote SSH to orchestrate deployments (stop service, upgrade, test, put back in service).
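One common way to set that up is a cron entry that runs the playbook against the local machine, no SSH involved. A sketch (the playbook path, schedule, and log location are illustrative assumptions, not the commenter's actual setup):

```shell
# Hypothetical crontab entry: apply a local playbook every 30 minutes.
# "-i localhost," defines an inline one-host inventory; "-c local"
# runs directly against this machine instead of connecting over SSH.
*/30 * * * * ansible-playbook -i localhost, -c local /etc/ansible/site.yml >> /var/log/ansible-local.log 2>&1
```

`ansible-pull` is another stock option for this pattern: it fetches the playbook from a git repository first, then applies it locally.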
I wrote a tool similar to ansible in the old days. We both started about the same time, so wasn't really a goal to compete with it. Later I noticed they had some type of funding from Red Hat, which dulled my enthusiasm a bit. Then Docker/containers started hitting it big and I figured it would be the end of the niche and stopped.
Interesting that folks are still using it, though I'm not sure of the market share.
Saltstack is IMO superior to Ansible. It uses ZMQ for command and control. You can write everything in python if you want but the default is YAML + JINJA2. And is desired state not procedural.
Not used it for about 5 years, and I think they got bought by VMware IIRC. The only downside is that Ansible won the mindshare, so you're gonna be more on your own when it comes to writing esoteric formulas.
IDK about this. In 2018 I was in a position to pay for their services. They asked for a stupid amount of money and got none because they asked so much.
Can't remember what the exact numbers were, but it felt like ElasticSearch or Oracle.
Same. And I didn't feel like we were getting anything for that crazy money aside from "support" (which management wanted, pre-IPO, to make a bunch of security audits seem easier). We preferred to stick with our own tooling and services that we built around Vault (for example) than use the official enterprise stuff. Same goes for Terraform today: I don't feel like we need Terraform Cloud, when we've got other options in that space, including home-grown tooling.
Vault's client-based pricing was (is) the worst thing about selling it. When I was there, nobody in sales liked it except the SEs and account reps dealing with the largest customers (and those customers loved it because it actually saved them a substantial amount of money over other vendors' models like per-use or per-secret). All the customers except those very largest ones hated it. The repeated response from those who believed in the client-based pricing model, to those of us pointing out the issues with it, was essentially "if your customers don't like it, they must not understand it because you aren't doing a good enough job explaining it".
What I thought we really needed was a "starter/enterprise" dual-model pricing structure, so that smaller customers could get pricing in some unit they could understand and budget for, that would naturally and predictably grow as they grew, to a point where it would actually be beneficial to them to switch to client-based pricing -- but there seemed to be a general reluctance to have anything but a single pricing model for any of our products.
But it's even more expensive now! There's no limit!
Same. I wanted to pay them for their features, but the pricing was such that I actually thought it was a gag or a troll at first and laughed. When I realized they were serious, I was like Homer fading into the bushes.
Inability to price things correctly sounds exactly like engineer behavior to me…
The timing of this acquisition, and the FTC's ban on non-compete agreements is perfect.
Usually during an acquisition like this, the key staff are paid out after two years on board the new company. So not a non-compete, but an incentive to stay and get their payout.
Most staff with no equity will leave quickly of course, so the invalidity of non compete will definitely help those souls.
"golden handcuffs" they call them.
Ban isn’t yet in effect and would have started discussions a while back. Plus, FTC ban is already being litigated by business groups, unsurprisingly.
My personal opinion is it was a company for crack monkeys. Consul, Vault and Packer have been nothing but pain and misery for me over the last few years. The application of these technologies has been nothing but a loss of ROI and sanity on a promise.
And don't get me started on Terraform, which is a promise but rarely delivers. It's bad enough that a whole ecosystem appeared around it (like terragrunt) to patch up the holes in it.
When a massive ecosystem springs up around a product, that means it’s wildly successful, actually.
The person you are replying to made no statement about the success of the product. Success and pita-ness are completely orthogonal.
Yeah I'm not saying it's not successful. It's just shit!
I see this as an opportunity. Not to replace HashiCorp's products - OpenTofu and OpenBao are snapping up most of the mindshare for now - but to build another OSS-first developer darling company.
Onboardbase is a great alternative to HashiCorp Vault.
https://onboardbase.com/
Btw. OpenTofu 1.7.0 is coming out next week, which is the first release that contains meaningful Tofu-exclusive features! We just released the release candidate today.
State encryption, provider-defined functions on steroids, removed blocks, and a bunch more things are coming, see our docs for all the details[0].
We've also had a fun live-stream today, covering the improvements we're bringing to provider-defined functions[1].
[0]: https://opentofu.org/docs/next/intro/whats-new/
[1]: https://www.youtube.com/watch?v=6OXBv0MYalY
It was this, but hasn’t been for a couple of years at least. The culture really shifted once it was clear the pivot to becoming a SaaS-forward company wasn’t taking off. As soon as the IPO happened and even a little bit before, it felt like the place was being groomed down from somewhere unique and innovative to a standardized widget that would be attractive to enterprise-scale buyers like VMware or IBM.
What we are seeing with VC driven "innovation", is only going to get worse when the Linux/BSD founders generation is gone.
i can only speak to the early days (joined around 11 folks), but the engineers then were top tier and hungry to build cool shit. A few years later (as an outsider) seemed innovation had slowed substantially. i still know there are great folks there, but has felt like HashiCorp’s focus lately has been packaging up all their tools into a cohesive all-in-one solution (this was actually Atlas in the early days) and figuring out their story around service lifecycle with experiments like Waypoint (Otto in the early days). IBM acquisition is likely best outcome.
Honestly, Mitchell should still be very proud of what he built and the legacy of Hashicorp. Sure, the corp has taken a different direction lately but thanks to the licenses of the Hashicorp family of software, it's almost entirely available for forking and re-homing by the community that helped build it up to this point. E.g. opentofu and openbao. I'm sure other projects may follow and the legacy will endure, minus (or maybe not, you never know) contributions from the company they built to try to monetize and support that vision.