It probably would be better to switch the link from the X post to the Vox article [0].
From the article:
“““
It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
”””
[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
So part of their compensation for working is equity, and when they leave they have to sign an additional agreement in order to keep their previously earned compensation? How is this legal? Might as well tell them they have to give all their money back too.
What's the consideration for this contract?
That OpenAI are institutionally unethical. That such a young company can become rotten so quickly can only be due to leadership instruction or leadership failure.
Look at Sam Altman's career and tweets. He's a clown at best, and at worst he's a manipulative crook who only cares about his own enrichment and uses pro-social ideas to give himself a veneer of trustworthiness.
Sounds awfully similar to that other South African emerald mine inheritor tech mogul.
Please. Elon's track record of taking Tesla from the concept car stage to current mass production levels, and of building SpaceX from scratch, is hardly comparable to Altman's track record.
SpaceX didn’t start from scratch. Their initial designs were based on NASA designs. Stop perpetuating the “genius engineer” myth around Elon Musk.
I feel like Steve Jobs also fits this category if we are going to talk about people who aren't really worthy of genius title and used other people's accomplishments to reach their goals.
We all know it was the engineers who made the iPhone possible.
The people downvoting have never read the Isaacson book obviously.
More like ppl on this site know and respect Jobs for his talent as a revolutionary product-manager-style CEO that brought us the iPhone and the subsequent mobile era of computing.
Mobile era of computing would have happened just as much if Jobs had never lived.
To be fair, who else could have gone toe-to-toe with the telecom incumbents? Jobs almost didn't succeed at that.
Jobs was a bully through and through.
Someone far more deserving of the title, Dennis Ritchie, died a week after Jobs' stupidity caught up with him. So much attention went to Jobs, who didn't really deserve it, and so little to Dennis Ritchie, who made such a profound impact on the tech world and society in general.
Altman is riding a new tech wave, and his team has a couple of years' head start. Musk's reusable rockets were conceptualized a long time ago (Tintin's Destination Moon dates back to 1953) and could have become a reality several decades ago.
You're seriously trying to take his credit away for reusable rockets with "nuh uh, it was in sci-fi first"? Wow.
"A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”
No, in fact I'm praising Musk for his project management abilities and his ability to take risks.
https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X
"Quotation is a serviceable substitute for wit." - Oscar Wilde
What's wrong with weakness? Does it make you feel contempt?
By that logic nothing has started from scratch.
SpaceX is still the only company with reusable rockets. NASA only dreams about it and can't even make a regular rocket launch on time.
“If you wish to make an apple pie from scratch You must first invent the universe”
…no one “started from scratch", the sum of all knowledge is built on prior foundations.
But he is a manager, not an engineer, although he sells himself as such. He keeps smart, capable folks around, abuses most of them pretty horribly, and when he intervenes with products it's hit and miss. For example, the latest Tesla Model 3 changes must have been a pretty major fuckup, and there is no way he didn't sign off on it all.
Plus all the self-driving lies, and more lies well within fraud territory at this point. Not even going into his sociopathic personality, massive childish ego, and apparent 'daddy issues', which in men manifest exactly like him. He is not in day-to-day control of SpaceX, and it shows.
"A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”
As is repeatedly spamming the same pasta
You’re confusing mommy and daddy issues. Mommy issues is what makes fash control freaks.
Indeed, at least Elon and his teams actually accomplished something worthwhile compared to Altman.
And don't forget StarLink that revolutionized satellite communications.
I disagree. If you watch some long form interviews with Elon, you’ll see that he cares a lot about the truth. Sam doesn’t give me that impression.
You mean the guy who's infamous for lying? The guy who claimed his car would be fully self-driving more than a decade ago, and it still isn't? The guy who tweeted "funding secured" and faced multiple fraud charges?
Tbh, he wasn’t convicted as far as I know.
But yes, he’s overly optimistic with timelines. He says so himself.
The first time someone is "overly optimistic with a timeline", you should forgive them.
The tenth time, you should have the good sense to realize that they're full of shit and either a habitual liar or utterly incompetent.
Realists are incapable of pushing frontiers.
If you are doing something that has been done before, hire a realist. Your project will ship on time and within budget. If you are doing something that hasn't been done before, you need an optimist. Partly because the realists run for the hills -- they know the odds and the odds are bad -- but also because their hedging behavior will turn your small chance of success into zero chance of success. On these projects, optimism doesn't guarantee success, but pessimism/realism does guarantee failure.
So no, I am not scandalized to find that the world's biggest innovator (I hate his politics, but this is simply the truth) is systematically biased towards optimism. It's not surprising, it is inevitable.
The Wright Brothers took a risk and built the first planes, but they didn't have to lie that their planes had already left the ground before they did. They didn't claim "it will fly a year from now"; they just rebuilt it over and over until it flew.
They were optimistic and yet they found a way to be optimistic without claiming anything untruthful.
Clément Ader, on the other hand, claimed that his invention flew, and was ridiculed when he couldn't prove it.
One look at their works and it's clear who influenced modern planes, and who didn't.
The Wright Brothers are infamous for failing to industrialize their invention -- something that notoriously requires investors and hype. Perhaps they wouldn't have squandered their lead if they had been a bit more public with their hopes and dreams.
I want what you’re smoking
Yes, he is largely incompetent but with a great nose for picking up good companies: https://news.ycombinator.com/item?id=40066514
He may be the second richest but he still doesn’t seem competent enough to provide remotely reasonable estimates.
That, or he’s just a straight up liar who knows the things he says are never going to happen.
Which would you rather it be?
Didn't he call the cave diver a pedo, and claim the guy who attacked Pelosi's husband was in a gay relationship with him?
He doesn't seem to have much of a filter because of his Asperger's, but I think he genuinely believed those things. And they are more on the level of calling people names on the playground anyway. In the grand scheme of things, those are pretty shallow "lies".
I have multiple relatives on the spectrum. None of them baselessly accuse strangers of being pedophiles.
It's not Musk's lack of filter that makes him unhinged and dangerous. It's that he's deeply stupid, insecure, racist, enamored of conspiracy theories, and powerful.
I figure it's the chronic drug abuse and the constant affirmation he receives from his internet fanboys and the enabler yes-men on his board who are financially dependent on him. He doesn't ever receive pushback from anyone, so he gets more and more divorced from reality.
He's 52. And running multiple companies. Aspergers is not a justification for his shitty behavior (and blaming this behavior on Aspergers harms perception of people with Aspergers)
Oh so it’s ok to lie and call people a pedophile (which is far beyond playground name-calling; from a famous person a statement like that actually carries a lot of weight) if you genuinely believe it and have Asperger’s?
Those might explain his behavior, but it does not excuse it.
I'm no fan of Sam Altman, but between the two, Elon lies much more often. He's lied about FSD for years, lied about not selling his Tesla stock, lied about "robotaxis" for years, lied about the Roadster for years, lied about "funding secured" for Tesla, lied about his Twitter free speech ethos, spreads random lies about people he doesn't like, and so much more. The guy is a compulsive liar.
https://elonmusk.today/
I'm starting to think the relatives of South African emerald mine owners might not be the best people to trust...
You are not responsible for the sins of your father regardless of how seriously fucked in the head he is.
No but there is the old nature versus nurture debate. If you're raised in a home with a parent who has zero qualms about exploiting human suffering for profit, that's probably going to have an impact, right?
What are you implying here? The answer to the nature vs. nurture debate is "both", see "epigenetics" for more.
When considering the influence of a parent with morally reprehensible behavior, it's important to recognize that the environment a child grows up in can indeed have a profound impact on their development. Children raised in households where unethical behaviors are normalized may adopt some of these behaviors themselves, either through direct imitation or as a response to the emotional and psychological environment. However, it is equally possible for individuals to reject these influences.
Furthermore, while acknowledging the potential impact of a negative upbringing, it is critical to avoid deterministic assumptions about individuals. People are not simply products of their environment; they possess agency and the capacity for change, and we need to realize that not all individuals perceive and respond to environmental stimuli in the same way. Personal experiences, cognitive processes, and emotional responses can lead to different interpretations and reactions to similar environmental conditions. Therefore, while the influence of a parent's actions cannot be dismissed, it is neither fair nor accurate to presume that an individual will inevitably follow in their footsteps.
As for epigenetics: it highlights how environmental factors can influence gene expression, adding a layer of complexity to how we understand the interaction between genes and environment. While the environment can modify gene expression, individuals may exhibit different levels of susceptibility or resistance to these changes based on genetic variability.
The crux of your thesis is a legal point of view, not a scientific one. It's a relic from when Natural Philosophy was new and hip, and fundamentally obviated by leaded gasoline. Discussing free will in a biological context is meaningless because the concept is defined by social coercion. It's the opposite of slavery.
From a game theory perspective, it can make sense to punish future generations to prevent someone from YOLO'ing at the end of their life. But that only works if they actually care about their children, so perhaps it should be, "you are less responsible for the sins of your father the more seriously fucked in the head he is."
Lmao no point in worrying about AI spreading FUD when people do it all by themselves.
You know what AI is actually gonna be useful for? AR source attachments to everything that comes out of our monkey mouths, or a huge floating [no source] over someone's head.
Realtime factual accuracy checking pls I need it.
Who designs the training set for your putative "fact checker" AI?
If it comes packaged with the constant barrage of ridicule and abuse from others for daring to be slightly wrong about something, then nobody might as well talk at all.
Are you saying that Altman has family that did business in South African emerald mines? I can't find info about this
No. Some dude that launches rockets did, though.
They are referring to Elon Musk.
It didn't take long to drag Elon into this thread. The bitterness and cynicism is unreal.
Indeed. I’ve heard first hand accounts that would make it impossible for me to trust him. He’s very good at the game. But I’d not want to touch him with a barge pole.
Any stories or events you can talk about? It sounds interesting
The New Yorker piece is pretty terrifying, and manages to be so while bending over backwards to present both sides, if not maybe even suck up to SV a bit. Certainly no one forced Altman to say on the record that Ice Nine in the water glass was what he had planned for anyone who crossed him, and no one forced pg to say, likewise on the record, that "Sam's real talent is becoming powerful" or something to that effect.
It pretty much goes downhill from there.
For anyone else like me who hasn't read Kurt Vonnegut, but does know about different ice states (e.g. Ice IX):
"Ice Nine" is a fictional assassination device that makes you turn into ice after consuming ice (?) https://en.m.wikipedia.org/wiki/Ice-nine
"Ice IX" (ice nine) is Ice III at a low enough temperature and high enough pressure to be proton-ordered https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases
So here, Sam Altman is stating a death threat.
It's more than just a death threat, the person killed in such a manner would surely generate a human-sized pile of Ice 9, which would pose a much greater threat to humanity than any AGI.
If we're seriously entertaining this off-handed remark as a measure of Altman's true character, it means not only would he be willing to murder an adversary, but he'd be willing to risk all humanity to do it.
What I take away from this remark is that Altman is a nerd, and I look forward to seeing a shaky cell-phone video of him reciting one of the calypsos of Bokonon while dressed as a cultist at a SciFi convention.
Oh okay, I didn't really grok that implication from my brief scan of the wiki page. Didn't realize it was a cascading all-water-into-Ice-Nine thing.
Just to clarify, in the book it's basically just "a form of ice that stays ice even when warm". It was described as an abandoned project by the military to harden mud for infantrymen to cross. Just like regular ice crystals, the ice-nine crystal pattern "spreads" across water, but without any need for chilling, e.g. body-temperature water freezes. It becomes a "Midas touch" problem for anyone dealing with it.
Holy shit I thought he was just good at networking, but it sounds like we have a psychopath in charge of the AI revolution. Fantastic.
The government is behind it all. Here are a bunch of graduate related talks that don't talk about CS, AI, but instead math and social control: https://videos.ahp-numerique.fr/w/p/2UzpXdhJbGRSJtStzVWon9?p...
“Sam is extremely good at becoming powerful” was the quote, which has a distinctly different ring to it. Not that this diminishes from the overall creep factor.
Article: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Paul Graham fired Sam Altman from YC on the spot for "loss of trust". Full details unknown.
The story of the "YC mafia" takeover of Conde Nast era reddit as summarized by ex-ceo Yishan who resigned after tiring of Altman's constant Machiavelli machinations is also hilarious and foreshadowing of future events[0]. I'm sure by the time Altman resigned from the Reddit board OpenAI had long incorporated the entire corpus into ChatGPT already.
At the moment all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there. I mentioned this in a comment elsewhere, but Peter Hintjens' (ZeroMQ, RIP) book called "The Psychopath Code"[1] is rather on point in this context. He notes that psychopaths are attracted to groups that have assets and no defenses, i.e. non-profits:
If a group has assets and no defenses, it is inevitable [a psychopath] will invade the group. There is no "if" here. Indeed, you may see several psychopaths striving for advantage...[the psychopath] may be a founder, yet that is rare. If he is a founder, someone else did the hard work. Look for burned-out skeletons in the closet...He may come with grand stories, yet only by his own word. He claims authority from his connections to important people. He spends his time in the group manipulating people against each other. Or, he is absent on important business...His dominance is not earned, yet it is tangible...He breaks the social conventions of the group. Social humans feel fear and anxiety when they do this. This is a dominance mask.
A group of nerds that want to get shit done and work on important problems, who are primed to be optimistic and take what people say to their face at face value, and don't want to waste time with "people problems" are susceptible to these types of characters taking over.
[0] https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
[1]https://hintjens.gitbooks.io/psychopathcode/content/chapter4...
the name OpenAI itself reminds me every day of this.
I knew their vision of open source AI wouldn't last but it surprised me how fast it was.
That vision, if it was ever there, died before ChatGPT was released. It was just a hiring scheme to attract researchers.
pg calls sama ‘naughty’. I call him ‘dangerous’.
I'm still finding it difficult to understand how their move away from the non-profit mission was legal. Initially, you assert that you are a mission-driven non-profit, a claim that attracts talent, capital, press, partners, and users. Then, you make a complete turnaround and transform into a for-profit enterprise. Why this isn't considered fraud is beyond me.
My understanding is that there were two corporate entities, one of which was always for-profit.
It was impractical from the start; they had to pivot before they were able to get an LLM proper out (before ~anyone had heard of them).
It’s “Open” as in “open Pandora’s box”, not “open source”. Always has been.
Well, more than 90% of OpenAI employees backed him up when the board fired him. Maybe he's not the clown you claim he is.
Or they didn’t want the company, their job, and all of their equity to evaporate
Well, if he's a clown then his departure should cause the opposite, no? And you're right, more than 90% of them said we don't want the non-profit BS and openness. We want a unicorn tech company that can make us rich. Good for them.
Disclaimer: I'm Sam's best friend from kindergarten. Just joking, never met the guy and have no interest in openai beyond being a happy customer (who will switch in a heartbeat to the competitors' if they give me a good reason to)
Nope, not even close to necessarily true.
Sure, good for them! Dissolve the company and its charter, give the money back to the investors who invested under that charter, and go raise money for a commercial venture.
People are self-motivated more often than not.
The startup world (as the artistic world, the sports world, etc) values healthy transgression of the rules
But the line between healthy and unlawful transgression can be a thin line
The startup world values transgression of the rules.
I'm surprised at such a mean comment and lots of follow-ups with agreement. I don't know Sam personally, I've only heard him here and there online from before OpenAI days and all I got was a good impression. He seems smart and pretty humble. Apart from all openai drama which I don't know enough to have an opinion, past-openai he also seems to be talking with sense.
Since so many people took time to put him down here, can anybody provide some explanation to me? Preferably not just about how closed OpenAI is, but specifically about Sam. He is in a pretty powerful position and maybe I'm missing some info.
People who have worked with him have publicly called him a manipulative liar:
https://www.reddit.com/r/OpenAI/comments/1804u5y/former_open...
I fear your characterization diminishes the real risk: he's incredibly well resourced, well-connected and intelligent while being utterly divorced from the reality of the majority he threatens. People like him and Peter Thiel are not simple crooks or idiots - they truly believe in their convictions. This is far scarier.
Social engineering has been a thing well before computers and the internet...
Many easily fooled rubes believe that veneer, so I guess it's working for him.
We already know there's been a leadership failure due to the mere existence of the board weirdness last year; if there has been any clarity to that, I've missed it for all the popcorn gossiping related to it.
Everyone, including the board's own chosen replacements for Altman, siding with Altman seems to me incompatible with his current leadership being the root cause of the current discontent… so I'm blaming Microsoft, who were the moustache-twirling villains when I was a teen.
Of course, thanks to the NDAs hiding information, I may just be wildly wrong.
Everyone? What about the board that fired him, and all of those who’ve left the company? It seems to me more like those people are leaving who are rightly concerned about the direction things are going, and those people are staying who think that getting rich outweighs ethical – and possibly existential – concerns. Plus maybe those who still believe they can effect a positive change within the company. With regard to the letter – it’s difficult to say how many of the undersigned simply signed because of social pressure.
I meant of the employees, obviously not the board.
Also excluded: all the people who never worked there who think Altman is weird, Elon Musk who is suing them (and probably the New York Times on similar grounds), and the protestors who dropped leaflets on one of his public appearances.
Happened after those events; at the time it was so close to literally every employee signing the letter saying "bring Sam back or we walk" that the rest can be assumed to have been off sick that day, even despite the reputation the US has for very limited holidays and getting people to use those holidays for sick leave.
Obviously so, I'm only asserting that this doesn't appear to be due to Altman, despite him being CEO.
("Appear to be" is of course doing some heavy lifting here: unless someone wants to literally surveil the company and publish the results, and expect that to be illegal because otherwise it makes NDAs pointless, we're all in the dark).
It's hard to gauge exactly how much credence to put in that letter due to the gag contracts.
How much of it was support for Altman, how much was opposition to the extremely poorly explained board decisions, and how much was pure self-interest due to stock options?
I think when a company chooses secrecy, they abandon much of the benefit of the doubt. I don't think there is any basis for absolving Altman.
Clearly by design.
The most dishonest leadership.
To borrow the catchphrase of one of my favorite hackers ever: “correct”.
In the past a lot of options would expire if you didn't exercise them within, e.g., 90 days of leaving. And exercising could be really expensive.
Speculation: maybe the options they earn when they work there have some provision like this. In return for the NDA the options get extended.
Options aren't vested equity though.
... They definitely can be. When I worked for a small biotech company all of my options had a tiered vesting schedule.
Options aren't equity; they're only the option to buy equity at a specified price. Vesting just means you can actually buy the shares at the set strike price.
For example, you may join a company and be given options to buy 10,000 shares at $5 each with a 2-year vesting schedule. They may begin vesting immediately, meaning 1/24th of the total options vest each month (about 417 shares). It's also common for a cliff up front where no options vest until you've been with the company for, say, 6 or 12 months.
Until an option vests you don't own anything. Once it vests, you still have to buy the shares by exercising the option at the $5 per share price. When you leave, most companies have a deadline on the scale of a few months where you have to either buy all vested shares or forfeit them and lose the stock options.
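To make that concrete, here's a rough sketch of the arithmetic in Python. The numbers are hypothetical (a real grant agreement controls the details), and real schedules often vest in whole-share increments with the remainder trued up at the end:

```python
# Sketch of a monthly vesting schedule (hypothetical grant).
total_options = 10_000
strike = 5.00          # exercise price per share, in dollars
vesting_months = 24    # 2-year schedule, vesting monthly

per_month = total_options // vesting_months  # ~416 options vest each month

def vested_after(months_employed, cliff_months=0):
    """Options vested after a given tenure; nothing vests before the cliff."""
    if months_employed < cliff_months:
        return 0
    return min(months_employed, vesting_months) * per_month

# After 12 months with no cliff, roughly half the grant has vested...
vested = vested_after(12)
# ...but exercising still costs real cash up front:
exercise_cost = vested * strike
print(vested, exercise_cost)  # 4992 24960.0
```

With a 6-month cliff, `vested_after(3, cliff_months=6)` is 0: leave before the cliff and you walk away with nothing.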
Options can vest as do stock grants as well.
Unless I'm mistaken, the difference is that grants vest into actual shares while options only vest into the opportunity to buy the shares at a set price.
Part of my hiring bonus when joining one of the big tech companies were stock grants. As they vested I owned shares directly and could sell them as soon as they vested if I wanted to.
I also joined a couple of startups later in my career and was given options as a hiring incentive. I never exercised the vested options, so I never owned any shares at all, and I lost the options 30-90 days after leaving the company. Grants are different: I'd have taken those shares with me without having to pay for them; they would have been directly my shares.
Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.
Possession is 90% of ownership
Banks and trading houses are kind of the exception in that regard. I pay my bank monthly for my mortgage, and thus I live in a house that the bank could repossess if they so choose.
The phrase really should be about force rather than possession. Possession only really makes a difference when there's no power imbalance.
Banks have the legal authority to take the home I possess if I don't meet the terms of our contract. Hell, I may own my property outright but the government can still claim eminent domain and take it from me anyway.
Among equals, possession may matter. When one side can force you to comply, possession really is only a sign that the one with power is currently letting you keep it.
Looks like I used the wrong term there, sorry. I was referring to Cede & Co, and in the moment assumed they could be considered a clearing house. It is technically called a certificate depository, sorry for the confusion there.
Cede & Co technically owns most of the stock certificates today [1]. If I buy a share of stock I end up actually owning an IOU for a stock certificate.
You can actually confirm this yourself if you own any stock. Call the broker that manages your account and ask whose name is on the stock certificate. It definitely isn't your name. You'll likely get confused or unclear answers, but if you're persistent enough you will indeed find that the certificate is almost certainly in the name of Cede & Co, and there is no certificate in your name, likely no share identifier assigned to you either. You just own the promise of a share, which ultimately isn't a problem unless something massive breaks (at which point we have problems anyway).
[1] https://en.m.wikipedia.org/wiki/Cede_and_Company
The last time I did this I didn't have to buy all of the shares.
I think they mean that you had to buy all the ones you wanted to keep.
That is tautological... You buy what you want to own???
The point being made is that it isn’t all or nothing, you can buy half the vested options and forfeit the rest, should you want to.
Wait, wait. Who is on first?
We’d usually point people here to get a better overview of how options work:
https://carta.com/learn/equity/stock-options/
There can be an advantage to not exercising: exercising triggers a taxable event (the IRS will want a cut of the difference between your exercise price and the current valuation), and it requires you to commit real money to buy shares that may never be worth anything.
And there are advantages to exercising: many (most?) companies take back unexercised options a few weeks/months after you leave, and exercising starts the capital gains clock, so you can end up paying a lower capital gains rate when you eventually sell.
You need to understand all of this before you make the choice that's right for you.
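To put rough numbers on that tradeoff (all figures made up, and the actual tax treatment depends on the option type and your jurisdiction; this just shows the shape of the decision):

```python
# Back-of-the-envelope exercise math (hypothetical numbers).
options = 10_000
strike = 5.00    # price you pay per share when exercising
fmv = 50.00      # company's current fair-market value per share

exercise_cost = options * strike            # real cash you must commit
taxable_spread = options * (fmv - strike)   # the "paper gain" that can be
                                            # taxed even though the shares
                                            # may be impossible to sell yet

print(exercise_cost)   # 50000.0
print(taxable_spread)  # 450000.0
```

That $450k of paper gain can create a tax bill on money you haven't received, which is exactly why exercising private-company options can be so risky.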
Re-read the post you’re replying to. They said options are not vested equity, which they aren’t. You still need to exercise an option that has vested to purchase the equity shares.
They did not say “options cannot get granted on a tiered vesting schedule”, probably because that isn’t true, as options can be granted with a tiered vesting schedule.
They aren't equity no matter what though?
They can be vested, I realize that.
My unreliable memory is Altman was ( once? ) in favor of extending the period for exercising options. I could be wrong of course but it is consistent with my impression that making other people rich is among his motivations. Not the only one of course. But again I could be wrong.
Wouldn't be too surprised if he changed his mind since then. He is in a very different position now!
Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.
https://en.wikipedia.org/wiki/Peppercorn_(law)
There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.
Right, but the employee would be able to refuse the consideration, and thus the contract, and the state of affairs wouldn't change. They would be free to say whatever they wanted.
If they refuse the contract then they lose out on their options vesting. Basically, OpenAI's contracts work like this:
Employment Contract the First:
We are paying you (WAGE) for your labor. In addition you also will be paid (OPTIONS) that, after a vesting period, will pay you a lot of money. If you terminate this employment your options are null and void unless you sign Employment Contract the Second.
Employment Contract the Second:
You agree to shut the fuck up about everything you saw at OpenAI until the end of time and we agree to pay out your options.
Both of these have consideration and as far as I'm aware there's nothing in contract law that requires contracts to be completely self-contained and immutable. If two parties agree to change the deal, then the deal can change. The problem is that OpenAI's agreements are specifically designed to put one counterparty at a disadvantage so that they have to sign the second agreement later.
There is an escape valve in contract law for "nobody would sign this" kinds of clauses, but I'm not sure how you'd use it. The legal term of art that you would allege is that the second contract is "unconscionable". But the standard of what counts as unconscionable in contract law is extremely high, because otherwise people would wriggle out of contracts the moment that what seemed like favorable terms turned unfavorable. Contract law doesn't care if the deal is fair (that's the FTC's job), it cares about whether or not the deal was agreed to.
Who would sign a contract to willfully give away their options?
The same sort of person who would sign a contract agreeing that in order to take advantage of their options, they need to sign a contract with unclear terms at some point in the future if they leave the company.
Bear in mind there are actually three options, one is signing the second contract, one is not signing, and the other is remaining an employee.
Is it even a valid contract clause to tie the value of something to a future, completely unknown agreement? (Or yes, it's valid, and it means that savvy folks should treat it as zero.)
(though most likely the NDA and everything is there from day 1 and there's no second contract, no?)
Say that you had been working at Reddit for quite a number of years and all your original options had vested and you had exercised them. Then, since Reddit went public, you would now easily be able to sell your stock, or keep it if you want. So then you wouldn't need to sign the second contract. Unless of course you had gotten new options that hadn't vested yet.
Maybe. But whether the employee can refuse the gag has nothing to do at all with the legal doctrine that requires consideration.
Ok but peppercorn or not, what’s the consideration?
Getting a certain amount (according to their vesting schedule) of stock options, which are worth a substantial amount of money and thus clearly are "good and valuable consideration".
The original stock and vesting agreement that was part of their original compensation probably says that you have to be currently employed by OpenAI for the vesting schedule to apply. So in that case the consideration of this new agreement is that they get to keep their vesting schedule running even though they are no longer employees.
but can they simply leave with the already vested options/stock? are there clawback provisions in the initial contract?
That's the case in many common/similar agreements, but the OpenAI agreement is different because it's specifically clawing back already vested equity. In this case, I think the consideration would be the company allowing transfer of the shares / allowing participation in buyback events. Otherwise until the company goes public there's no way for the employees to cash out without consent of the company.
"I'll pay you a dollar to shut up"
"Deal"
Unfortunately this is how most startup equity agreements are structured. They include terms that let the company cancel options that haven’t been exercised for [various reasons]. Those reasons are very open ended, and maybe they could be challenged in a court, but how can a low level employee afford to do that?
I don’t know of any other such agreements that allow vested equity to be revoked, as the other person said. That doesn’t sound very vested to me. But we already knew there are a lot of weird aspects to OpenAI’s semi-nonprofit/semi-for-profit approximation of equity.
As far as I know it’s part of the stock plan for most startups. There’s usually a standard clause that covers this, usually with phrasing that sounds reasonable (like triggering if company policy is violated or is found to have been violated in the past). But it gives the company a lot of power in deciding if that’s the case.
I assume it’s agreed to at time of employment? Otherwise you’re right that it doesn't make sense
Why do you assume this if it is said here and in the article that they had to sign something at the time of the departure from the company?
They earned wages and paid taxes on them. Anything on top is just the price they're willing to accept in exchange for their principles.
How do you figure that they should pay an additional price (their principle/silence) for this equity when they've supposedly earned it during their employment (assuming this was not planned when they got hired, since they make them sign new terms at the time of their departure)?
Yeah you don't have to sign anything to quit. Ever. No new terms at that time, sorry.
There is usually a carrot along with the stick.
I guess there are indeed countries where this is illegal. Funny that it seems to be legal in the land of the free (speech).
I think we should have the exit agreement (if any) included and agreed to as part of the signing the employment contract.
It's also really weird equity: you don't get an ownership stake in the company but rather profit-sharing units. If OpenAI ever becomes profitable (color me skeptical), you can indeed get rich as an employee. The other trigger is "achieving AGI", as defined by sama (presumably). And while you wait for these dubious events to occur you work insane hours for a mediocre cash salary.
The thing is that this is a private company, so there is no public market to provide liquidity. The company can make itself the sole source of liquidity, at its option, by placing sell restrictions on the grants. Toe the line, or you will find you never get to participate in a liquidity event.
There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.
[0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...
In the initial hiring agreement, this would be stated and the employee would have to agree to signing such form if they are to depart
I'm guessing unvested equity is being treated separately from other forms of compensation. Normally, leaving a company loses the individual all rights to unvested options. Here the consideration is that options are retained in exchange for silence.
Perhaps they are stock options and leaving without signing would make them evaporate, but signing turns them back into long-lasting options?
I would guess it’s a bonus and part of their bonus structure and they agreed to the terms of any exit/departure, when they sign their initial contract.
I’m not saying it’s right or that I agree with it, however.
For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.
There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.
I'm not sure this is true. If everything people do can be done so much more cheaply that it's almost free, that would be good for us, since we're the buyers as well as the workers.
However, I also doubt the premise.
Why would you need buyers if AI can create anything you desire?
In an ideal world where gpus are a commodity yes. Btw at least today ai is owned/controlled by the rich and powerful and that's where majority of the research dollars are coming from. Why would they just relinquish ai so generously?
With an ever-expanding AI everything should be quickly commoditized, including reductions in the energy needed to run AI, and energy itself (i.e. viable commercial fusion or otherwise).
Where are you getting energy and land from for these AI's to consume and turn into goods?
Moreover, if someone built such a magically powerful AI as you've described, the number one thing some rich, controlling asshole with more AI than you would do is create an army and take what they want, because AI does nothing to solve human greed.
Bingo.
The whole justification for keeping consumers happy or healthy goes right out the window.
Same for human workers.
All that matters is that your robots and AIs aren't getting smashed by their robots and AIs.
Doesn't this tend to become "they're almost free to produce" with the actual pricing for end consumers not becoming cheaper? From the point of view of the sellers just expanding their margins instead.
I'm sure businesses will capture some of the value, but is there any reason to assume they'll capture all or even most of it?
Over the last ~ 50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1] and real personal (not household) income is up 150%[2].
It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!
Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.
[0] https://fred.stlouisfed.org/series/OPHNFB [1] https://dqydj.com/sp-500-profit-margin/ [2] https://fred.stlouisfed.org/series/MEPAINUSA672N
This does not make sense to me. While a higher profit margin is a signal to others that they can earn money by selling equivalent goods and services at lower prices, it is not inevitable that they will be able to. And even if they are, it behooves a seller to take advantage of the higher margins while they can.
Earning less money now in the hopes of competitors being dissuaded from entering the market seems like a poor strategy.
Wait, what? I was just listening to the former chief economist of the Bank of England going on about how terrible productivity (in the UK) is.
So who is right?
If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer just as they refuse to relinquish it to more competent people. For the vast majority of people, AI not only doesn't pose a risk but will only help reveal the incompetence of the ruling class.
The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as "more smart".
You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.
This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from further increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of there being a higher "class" of intelligence is magical thinking, as it implies there could be superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.
It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.
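The linear-compute point can be made concrete using brute-force search as the worst case (a sketch; real solvers beat brute force on many instances, but the exponential wall is the same asymptotically):

```python
import math

# For brute-force search over n boolean variables (2**n candidates), a
# k-fold increase in compute only buys about log2(k) extra variables at
# the same time budget.

def extra_variables(compute_multiplier: float) -> float:
    return math.log2(compute_multiplier)

print(extra_variables(1000))   # ~10 extra variables for 1000x the compute
print(extra_variables(10**9))  # ~30 extra for a billion-fold increase
```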
What does P=NP have to do with anything? Humans are incomparably smarter than other animals. There is no intelligence test a healthy human would lose to another animal. What is going to happen when agentic robots ascend to this level relative to us? This is what the GP is talking about.
Succeeding at intelligence tests is not the same thing as succeeding at survival, though. We have to be careful not to ascribe magical powers to intelligence: like anything else, it has benefits and tradeoffs and it is unlikely that it is intrinsically effective. It might only be effective insofar that it is built upon an expansive library of animal capabilities (which took far longer to evolve and may turn out to be harder to reproduce), it is likely bottlenecked by experimental back-and-forth, and it is unclear how well it scales in the first place. Human intelligence may very well be the highest level of intelligence that is cost-effective.
To every other mammal, reptile, and fish humans are the intelligence explosion. The fate of their species depends on our good will since we have so utterly dominated the planet by means of our intelligence.
Moreover, human intelligence is tied to the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you, and your intelligence ceases to exist.
Since we don't have the level of AGI we're discussing here yet, it's hard to say what it will look like in its implementation, but I find it hard to believe it would mimic the human model of its intelligence being tied to one body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of super intelligent bees.
Look up where the people in power got their college degrees from and then look up the SAT scores of admitted students from those colleges.
More than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die, if AGI is possible it won't be achieved any time in the foreseeable future, and certainly it will not emerge from quadratic time brute force on a fraction of text and images scraped from the internet.
Clearly we don’t know when/if AGI would happen, but the expectations of many people working in the field is it will arrive in what qualifies as ‘near future’. It probably won’t result from just scaling LLMs, but then that’s why there’s a lot of researchers trying to find the next significant advancement, in parallel with others trying to commercially exploit LLMs.
The same way that the expectation of many people working within the self-driving field in 2016 was that level 5 autonomy was right around the corner.
Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).
Sure, but blanket pessimism isn't very insightful either. I'll use the same example you did: self-driving. The public (or "median nerd") consensus has shifted from "right around the corner" (when it struggled to lane-follow if the paint wasn't sharp) to "it's a scam and will never work," even as it has taken off with the other types of AI and started hopping hurdles every month that naysayers said would take decades. Negotiating right-of-way, inferring intent, handling obstructed and ad-hoc roadways... the nasty intractables turned out to not be intractable, but sentiment has not caught up.
For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.
Pessimism isn't insight. There is no substitute for the hard work of evaluation.
they think this because it serves their interests of attracting an enormous amount of attention and money to an industry that they seek to make millions of dollars personally from.
My money is firmly on environmental/climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like an AGI possibly could.
It was the expectation of many people in the field in the 1980s, too
Ah yes, the “our brains are somehow inherently special” coalition. Hand-waving the capabilities of LLM as dumb math while not having a single clue about the math that underlies our own brains’ functionality.
I don’t know if you’re conflating capability with consciousness but frankly it doesn’t matter if the thing knows it’s alive if it still makes everyone obsolete.
Can higher-level former employees with more at stake pool together compensation for lower-level ones with much less at stake, so that they can speak out? Obviously they may not be privy to some things, but there's likely a lot to go around.
This is bizarre. Someone hands you a contract as you're leaving a company and if you refuse to agree to whatever they dreamt up and sign the company takes back the equity you earned? That can't be legal
It might be that they agree to it initially when hired, so it doesn't matter if they sign something when they leave.
Agreements with surprise terms that only get detailed later tend not to be very legal.
How do you know there isn't a very clear term in the employment agreement stating that upon termination you'll be asked to sign an NDA on these terms?
Unless the terms of the NDA are provided upfront, that sounds sketch AF.
"I agree to follow unspecified terms in perpetuity, or return the pay I already earned" doesn't vibe with labor laws.
And if those NDA terms were already in the contract, there would be no need to sign them upon exit.
If the NDA terms were agreed in an employment contract they would no longer be valid upon termination of that contract.
Plenty of contracts have survivorship clauses. In particular, non-disclosure clauses and IP rights are the ones to most commonly survive termination.
Why not just get it signed then? Your signing to agree to sign later?
One particularly sus term in my employment agreement is that I adhere to all corporate policies. Guess how many of those there are, how often they're updated, and if I've ever read them!
Doesn't even have to be a surprise. Pretty much every startup employment agreement in existence gives the company ("at the board's sole discretion") the right to repurchase your shares upon termination of employment. OpenAI's PPUs are worth $0 until they become profitable. Guess which right they'll choose to exercise if you don't sign the NDA?
Who would accept shares as valuable if the contract said they can be repurchased from you at a price of $0? This can't be it.
It can. There are many ways to make the number go to zero.
I don't think the right to repurchase is routine. It was a scandal a few years ago when it turned out that Skype did that. https://www.forbes.com/sites/dianahembree/2018/01/10/startup...
Hard to evaluate this without access to the documents. But in CA, agreements cannot be conditioned on the payment of previously earned wages.
Equity adds a wrinkle here, but I suspect if the effect of canceling equity is to cause a forfeiture of earned wages, then ultimately whatever contract is signed under that threat is void.
Well some rich ex-openAI person should test this theory. Only way to find out. I’m sure some of them are rich.
It’s not even equity. OpenAI is a nonprofit.
They’re profit participation units and probably come with a few gotchas like these.
The argument would be that it's coercive. And it might be, and they might be sued over it and lose. Basically the incentives all run strongly in OpenAI's favor. They're not a public company, vested options aren't stock and can't be liquidated except with "permission", which means that an exiting employee is probably not going to take the risk and will just sign the contract.
I have some experience with rich people who think they can just put whatever they want in contracts and then stare at you until you sign it because you are physically dependent on eating food every day.
Turns out they're right, they can put whatever they want in a contract. And again, they are correct that their wage slaves will 99.99% of the time sign whatever paper is pushed in front of them with the words "as a condition of your continued employment, [...]".
But also it turns out that just because you signed something doesn't mean that's it. My friends (all of us young twenty-something software engineers much more familiar with transaction isolation semantics than with contract law) consulted with an attorney.
The TLDR is that:
- nothing in contract law is in perpetuity
- there MUST be consideration for each side (where "consideration" means getting something. something real. like USD. "continued employment" is not consideration.)
- if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.
- and when it comes to employers and employees, the employee had damn well better be getting a good deal out of it, especially if you are trying to prevent the employee (or ex-employee) from working.
A common pattern ended up emerging: our employer would put something perpetual in the contract and offer no consideration. Our attorney would tell us this isn't even a valid contract and not to worry about it. Or the employer would offer an employee some nominal amount of USD in severance and put something perpetual into the contract. Our attorney would tell us the judge would likely use the "blue pencil rule" to add in "for a period of one year", or that it would be prorated based on the amount of money they were given relative to their former salary.
(I don't work there anymore, naturally).
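The proration the attorney described amounts to simple arithmetic; a purely illustrative sketch (hypothetical numbers, not legal advice):

```python
# How a judge might prorate a restriction's duration by the severance
# actually paid relative to the former salary -- illustrative only.

def prorated_months(severance_usd: float, annual_salary_usd: float) -> float:
    """Months of restriction 'paid for' by the severance."""
    return 12 * severance_usd / annual_salary_usd

print(prorated_months(25_000, 300_000))   # 1.0 month
print(prorated_months(150_000, 300_000))  # 6.0 months
```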
Even lowest level fast food workers can choose a different employer. An engineer working at OpenAI certainly has a lot of opportunities to choose from. Even when I only had three years in the industry, mid at best, I asked to change the contract I was presented with because non-compete was too restrictive — and they did it. The caliber of talent that OpenAI is attracting (or hopes to attract) can certainly do this too.
I am typically not willing to bet I can get back under health insurance for my family within the next 0-4 weeks. And paying for COBRA on a family plan is basically like going from earning $X/mo to drawing $-X/mo.
The perversely capitalistic healthcare system in the US is perhaps the number one reason why US employers have so much more power over their employees than their European counterparts.
Only thanks to a recent FTC rule are non-competes now invalid. In the most egregious cases, bartenders and servers were prohibited from finding another job in the same industry for two years.
Isn't that the reason more competent lawyers put in the royal lives clause[1]? It specifies that the contract is valid until 21 years after the death of the last currently living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly extend beyond the lifetime of any currently employed person.
1. https://en.wikipedia.org/wiki/Royal_lives_clause
I know little about law, but isn't this completely ludicrous? Assuming you know a bit more (or someone else here does), I have a few questions:
Would any non-corrupt judge consider this to be done in bad faith?
How is this different if we use an ancient sea turtle (or some other long-lived organism) instead of the current royal family baby? I guess my point is: couldn't it be anything that would likely outlive the employee?
It's a standard legal thing to accommodate a rule that you can't write a perpetual contract, it has to have a term delimited by the life of someone alive plus some limited period.
A case where it obviously makes sense is something like a covenant between two companies; whose life would be relevant there, if both parties want the contract to last a long time and have to pick one? The CEOs? Employees? Shareholders? You could easily have a situation where the company gets sold and they all leave, but the contract should still be relevant, and now it depends on the lives of people who are totally unconnected to the parties. Just makes things difficult. Using a monarch and his currently living descendants is easy.
I'm not sure how relevant it is in a more employer employee context. But it's a formalism to create a very long contract that's easy to track, not a secret trick to create a longer contract than you're normally allowed to. An employer asking an employee to agree to it would have no qualms asking instead for it to last the employee's life, and if the employee's willing to sign one then the other doesn't seem that much more exploitative.
Why would anyone want to work at such a horrible company?
Money
This is all basically true, but the problem is that retaining an attorney to confidently represent you for such a negotiation is a proposition with $10k table stakes (probably $15k+ these days with Trumpflation), and much more if the company sticks to their guns and doesn't settle (which is much more likely when the company is holding the cards and you have to go on the offensive). The cost isn't necessarily outright prohibitive in the context of surveillance industry compensation, but it is still a chunk of change and likely to give most people pause when the alternative is to just go with the flow and move on.
Personally I'd say there needs to be a general restriction against including blatantly unenforceable terms in a contract document, especially unilateral "terms". The drafter is essentially pushing incorrect legal advice.
He gets my respect for that one: publicly acknowledging why he was leaving, and calling out their pantomime. I don't know how much the equity would be for each employee (the article suggests millions, but that may skew by role), and I don't know if I would just be like the rest, keeping my lips tight for fear of forfeiting the equity.
It takes a man of real principle to stand up against that and tell them to keep their money if they can't speak ill of a potentially toxic work environment.
Incidentally, that's what Grigory Perelman, the mathematician who rejected the Fields Medal and the $1M prize that came with it, did.
It wasn't a matter of an NDA either; it was a move to make his message heard (TL;DR: "publish or perish" rat race that the academia has become is antithetical to good science).
He was (and still is) widely misunderstood in that move, but I hope people would see it more clearly now.
The enshittification processes of academic and corporate structures are not entirely dissimilar, after all, as money is at the core of corrupting either.
I think, when making a gesture, you need to consider its practical impact, which includes whether and how it will be understood (or not).
In the OpenAI case, the gesture of "forgoing millions of dollars" directly makes you able to do something you couldn't - speak about OpenAI publicly. In the Grigory Perelman case, obviously the message was far less clear to most people (I personally have heard of him turning down the money before and know the broad strokes of his story, but had no idea that that was the reason).
Consider this:
1. If he didn't turn down the money, you wouldn't have heard of him at all;
2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people who are in that position heard the message loud and clear.
3. On a very basic level, it's very easy to understand that there's gotta be something wrong with the award if a deserving recipient turns it down. What exactly is wrong is left as an exercise to the reader — as you'd expect of a mathematician like Perelman.
Quote (from [1]):
From the few public statements made by Perelman and close colleagues, it seems he had become disillusioned with the entire field of mathematics. He was the purest of the purists, consumed with his love for mathematics, and completely uninterested in academic politics, with its relentless jockeying for position and squabbling over credit. He denounced most of his colleagues as conformists. When he opted to quit professional mathematics altogether, he offered this confusing rationale: “As long as I was not conspicuous, I had a choice. Either to make some ugly thing or, if I didn’t do this kind of thing, to be treated as a pet. Now when I become a very conspicuous person, I cannot stay a pet and say nothing. That is why I had to quit.”
This explanation is confusing only to someone who has never tried to get a tenured position in academia.
Perelman was one of the few people to not only give the finger to the soul-crushing, dehumanizing system, but to also call it out in a way that stung.
He wasn't the only one; but the only other person I can think of is Alexander Grothendieck [2], who went as far as declaring that publishing any of his work would be against his will.
Incidentally, both are of Russian-Jewish origin/roots, and almost certainly autistic.
I find their views very understandable and relatable, but then again, I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).
[1] https://nautil.us/purest-of-the-purists-the-puzzling-case-of...
[2] https://en.wikipedia.org/wiki/Alexander_Grothendieck
I think this is probably not true.
This is a great point and you're probably right.
Really? What do you do nowadays?
(I glanced at your bio and website and you seem to be doing interesting things, I've also dabbled in Computational Geometry and 3d printing.)
Perelman provided a proof of the Poincaré Conjecture, which had stumped mathematicians for a century.
It was also one of the seven Millennium Problems https://www.claymath.org/millennium-problems/, and as of 2024, the only one to be solved.
Andrew Wiles became pretty well known after proving Fermat's Last Theorem, despite there not being a financial reward.
Perelman's point is absolutely clear if you listen to him, he's disgusted by the way credit is apportioned in mathematics, doesn't think his contribution is any greater just because it was the last one, and wants no part of the prize he considers tainted.
Compared to what seemed like their original charter, with non-profit structure and all, now it seems like a rather poisonous place.
They will have many successes in the short run, but their long-run future suddenly looks a little murky.
It could work like academia or finance: poisonous environment (it is said), but ambitious enough people still go in to try their luck.
"finance": A bit of a broad brush, don't you think? Is working at a Landsbank or Sparkasse in Germany really so "poisonous"?
Yes, of course, narrow that down to the crazy wolf-of-wall-street subset.
They extracted a lot of value from researchers during their ‘open’ days, but it’s depleted now, so of course they move on to the next source of value. sama is going AGI or bust with a very rational position of ‘if somebody has AGI, I’d rather it was me’ except I don’t like how he does it one bit, it’s got a very dystopian feel to it.
Similar points made here, if anyone is interested in signing: https://www.openailetter.org/
In my experience, and that of others I know, agreements of this kind are generally used to hide/cover-up all kinds of malfeasance. I think that agreements of this kind are highly unethical and should be illegal.
Many years ago I signed an NDA/non-disparagement agreement as part of a severance package when I was fired from a startup for political reasons. I didn't want to sign it... but my family needed the money and I swallowed my pride. There was a lot of unethical stuff going on within the company in terms of fiduciary responsibility to investors and the BoD. The BoD eventually figured out what was going on and "cleaned house".
With OpenAI, I am concerned this is turning into huge power/money grab with little care for humanity... and "power tends to corrupt and absolute power corrupts absolutely".
The power grab happened a while ago (the shenanigans concerning the board) and is now complete. Care for humanity was just marketing or a cute thought at best.
Maybe humanity will survive long enough that a company "caring about humanity" becomes possible. I'm not saying it's not worth trying or aspiring to such ideals, but everyone should be extremely surprised if any organization managed to resist such amounts of money and maintain any goal or ideal whatsoever...
Well, one problem is what does ‘caring for humanity’ even mean, concretely?
One could argue it would mean pampering it.
One could also argue it could be a Skynet analog doing the equivalent of a God Emperor-style Golden Path to ensure humanity is never going to be dumb enough to allow an AGI that power again.
Assuming humanity survives the second one, it has a lot higher chance of actually benefiting humanity long term too.
In the EU all of these are mostly illegal and void, or strictly limited. You have to pay a good salary for the whole duration (up to two years), and notify the employee months before they leave. Almost immediately if they are fired.
Sound like a better solution?
That could very well be the case; OpenAI made quite a few opaque decisions/changes not too long ago.
In all likelihood, they are illegal, just that no one has challenged them yet. I can’t imagine a sane court backing up the idea that a person can be forbidden to talk about something (not national security related) for the rest of their lives.
If the original agreement offered equity that vests, then suddenly another future agreement can potentially revoke that vested equity? It makes no sense unless somehow additional conditions were attached to the vested equity in the original agreement.
And almost all equity agreements do exactly that - give the company right of repurchase. If you've ever signed one, go re-read it. You'll likely see that clause right there in black and white.
They give the company the right to repurchase unvested (but exercised) shares, not vested options. At least the ones I’ve signed.
For companies unlisted on stock exchanges the options are then worthless.
These were profit sharing units vs options.
It seems very off to me that they don't give you the NDA before you sign the employment contract, and instead give it to you at the time of termination when you can simply refuse to sign it.
It seems that standard practice would dictate that you sign an NDA before even signing the employment contract.
That's probably because the company closed after hiring them
They have multiple NDA's, including ones that are signed before joining the company [1].
[1] https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s...
Perfect! So it's so incredibly overreaching that any judge in California would deem the entire NDA unenforceable..
Either that or, in your effort to overstate a point, you exaggerated in a way that undermines the point you were trying to make.
Lots of companies try and impose things on their employees which a judge would obviously rule to be unlawful. Sometimes they just don’t think through it carefully; other times, it’s a calculated decision that few employees will care enough to actually get the issue in front of a judge in the first place. Especially relevant for something like a non disclosure agreement, where no judge is likely to have the opportunity to declare it unenforceable unless the company tries to enforce it on someone who fights back.
Maybe it's unenforceable, but they can make it very expensive for anyone to find out in more ways than one.
I wonder if employees rallying for Altman when the board was trying to fire him were obligated to do it by some secret agreement.
Even without explicit clauses, it's likely they feared the loss of a (perceived) great man would impact their equity -- regardless of his character. Sadly there is too much faith in these Jobs-esque 'great' men to drive innovation. It's a social illness IMO.
It's a taught ideology/theory, the great man theory: https://en.m.wikipedia.org/wiki/Great_man_theory
So if I am a competitor I just need to pay a current employee like 2-3M to break their golden handcuffs and then they can freely start singing.
Not to seem combative, but that assumes that what they share would be advantageous enough to justify the costs... On the other hand, I'm thinking if I'm paying them to disclose all proprietary technology and research for my product, that would definitely make it worthwhile.
This should not be legal.
It doesn't even make logical sense. If someone asks you about the NDA what are you supposed to say? "I can neither confirm nor deny the existence of said NDA" is pretty much confirmation of the NDA!
It is always so impressive to see what US law allows.
This would not only be viewed as unethical in Germany; I could see a CEO going to prison for such a thing.
Please stop with these incorrect generalizations. Hush agreements are definitely allowed in Germany as well, part of golden parachutes usually.
I know a manager for an EV project at a big German auto company who also had to sign one when he was let go and was compensated handsomely to keep quiet and not say a word or face legal consequences.
IIRC he got ~12 months wages. After a year of not doing anything at work anyway. Bought a house in the south with it. Good gig.
(This comment was posted to https://news.ycombinator.com/item?id=40394778 before we merged that thread hither.)
Thank you, @dang! On top of things, as usual.
This sounds very illegal, how is California allowing this?
Nobody has challenged it in court.
They are also directly incentivized not to talk shit about a company they have a lot of stock in.
Isn't such a contract completely unenforceable in the US? I can't sign a contract with a private party that says I won't consult a lawyer for legal advice, for example.
When YCR HARC folded, Sam had everyone sign a non-disclosure/anti-disparagement agreement to keep their computer. I thought it was odd, and the only reason I can even say this is that I bought the iMac I was using before the option became available. Still, I had nothing bad to disclose, so it would have saved me some money.
Then lower-level employees who don't have as much at stake could open up. Formers who have much larger stakes could compensate these lower-level formers for forgoing any upside. Now, sure, maybe they don't have the same inside information, but I bet there's lots of scuttlebutt to go around.
Yet another ding against the "Open" character of the company.
This is the kind of thing a cult demands of its followers, or an authoritarian government demands of its citizens. I don't know why people would think it's okay for a business to demand this from its employees.
So much for open in open ai. I have no idea why HN jerks off to Altman. He's just another greedy exec incapable of seeing things past his shareholder value fetish.
Even if the NDA were not a thing, publicly revealing a past company's trade secrets would render any of them unemployable.
They can’t lose their already vested options for refusing to sign an NDA upon departure. Maybe they are offered additional grants or expedited vesting of the remaining options.
I hope I’m still around when some of these guys reach retirement age and say “fuck it, my family pissed me off” and write tell-all memoirs.