Kind of ironic that people are jumping in here to defend Sam Altman on the message board of Y Combinator, which also fired him from an executive leadership role for similar reasons.
What is still bizarre is how the board brought him back. I just don't understand that from any angle. It is very clear that he lied and schemed. So why was there any issue? He had no power once fired. If another company wants to hire him, fine. If he wants to make platitudes on Twitter, who cares? Employees threatened to leave? Let them. I imagine many of them wouldn't, and if they did, you could just rehire and find a proper CEO. It's likely that the employees who would have stayed would have been the ones you want to keep anyway.
So why would you rehire him and then resign? Why not leave him fired and resign?
Why did Microsoft care so much about Altman? They didn't have a board seat, so too bad so sad. They should have had no sway. If they want OpenAI, then they can buy it. Why would they even want Altman to come to their company? What possible value could Altman bring to Microsoft?
Just none of it makes any sense. In fact, the clearest thing is Altman's negative behavior and mode of operation. I remember his very loud, aw-shucks pronouncements of having no shares in OpenAI. Yeah, right. A career VC set up a new company structure, or structures, where he wouldn't benefit at all financially.
In another timeline, OpenAI firing him, naming an interim CEO, ignoring all else, and then hiring a new CEO would have all gone just fine. I don't know why Microsoft and others made a huge kerfuffle over it. And I also don't know why the board released such a cryptic press release when they could have provided details.
I just can't wrap my head around any of it, even entertaining conspiracy-theory explanations. Lol. It just makes no sense and gives me the feeling that no one involved has any clue about anything, including Satya Nadella.
You had the answer there before you dismissed it -- the board agreed to bring Sam Altman back before resigning because of a massive pressure campaign that involved 90%+ of employees threatening to resign, and Microsoft (which has OpenAI's IP rights) offering to hire them all immediately. This included the company's key leadership. The organization was unraveling overnight. This was far past the tipping point of "a few disgruntled employees might leave" - there would have been no OpenAI left to preside over.
I answered that scenario though. I'm fine, as a board, letting employees who want to hold the company hostage over a CEO that the board fired walk.
The board's major mistake was not communicating why he was let go.
My guess is that the likely reason why employees threatened to go was that they felt Altman had the best chances of making the for-profit arm's shares skyrocket. As a non-profit company's board, I'd be fine letting those people walk out the front door along with the CEO that was just fired.
It is my understanding that the key personnel who developed the actual technology were not part of the group threatening to leave. It was mainly the group in the for-profit arm that Altman had trojan-horsed into the company structure.
> This included the company's key leadership.
I'm not aware of the machine learning researchers responsible for the core technology threatening to leave. Who were they?
> I'm not aware of the machine learning researchers responsible for the core technology threatening to leave. Who were they?
Ilya Sutskever, Alec Radford, Wojciech Zaremba, Nick Ryder, Mark Chen, ... how many names do you want? >90% of the company threatened to leave.
Sutskever was one of the people who fired Altman to begin with.
Him joining the pile-on when it was already clear how big it was amounted to surrendering to the mob, and it was perceived that way even at the time.
It’s so much weirder than that. Reportedly, Greg Brockman’s wife begged Ilya in tears to change his mind; Ilya had presided over their wedding.
Source?
Wait, are you simply unaware of the widely reported circumstances at the company at that time? 95% of employees signed a letter to the board stating that they would leave the company if Sam was not brought back [1]. Ilya Sutskever, who was on the board and voted to remove Sam, changed his mind and signed the letter. The board named Mira Murati interim CEO; then she signed the letter, so the board fired her and hired an outside CEO. Several key researchers resigned outright before the letter even went out, including Jakub Pachocki, who replaced Ilya after he left [2].
I would challenge you to name a researcher who didn't resign or threaten to resign. Remember, they all had a plausible landing spot: They could simply show up at Microsoft with all the same leadership, coworkers, salary, compute, and IP the next day. OpenAI as we know it was over unless and until the board gave in.
[1] https://www.wired.com/story/95-percent-of-openai-employees-t... [2] https://www.theinformation.com/articles/three-senior-openai-...
No, I am not simply unaware. I said it was my understanding, which appears to be wrong. Thanks for the details. However, I am still confused about which employees of which company. There is the non-profit OpenAI and the for-profit OpenAI. Are these articles talking about everything or just the for-profit branch?
I still wager that things would have been different had the board clearly stated their reasons. It doesn't make any sense that they did it months later. The signing of the petition seems mostly groupthink and political. I would guess that the majority of employees would not have followed through.
Lastly, I still can't say any of this makes any sense. Why did the employees even care about Altman? It still seems all very strange to leave your job for someone who doesn't seem to have ever said anything meaningful.
> I'd be fine letting those people walk out the front door along with the CEO that was just fired.
Guess you'd be fine being the board over a company that now only consists of a board. Thumbs up.
If the board had shared something equivalent to this article at the time, perhaps there wouldn't have been such an intense internal pressure campaign.
Instead, they did not even attempt to communicate any rationale behind their actions.
It’s not good form to bad-mouth someone you’ve fired. They did, in fact, release a statement which aligns (in PR speak) with what Toner is claiming now.
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
https://openai.com/index/openai-announces-leadership-transit...
What they didn't realize is that they were going to have to convince the rank-and-file employees of that. Without a communication strategy to do that, they had basically no leverage.
I agree with this 100%
I even said so at the time.
> It’s not good form to bad-mouth someone you’ve fired.
Instead, it's good form to wait 6 months and then do a Business Insider interview on the topic?
Also, "bad mouthing" suggests putting a spin on it. Based on the article, they should have been able to list out several objective facts that supported their decision. Instead, they opened it all up to intense speculation by providing a vague justification and by remaining tight lipped even once it became clear that a lot of clarification was needed.
> the board agreed to bring Sam Altman back before resigning because of a massive pressure campaign that involved 90%+ of employees threatening to resign, and Microsoft...
But, what kind of Wild West is this? It's all so unhinged and strange.
There are no NDAs, non-competes or other impediments? MS just guts OpenAI at its whim?
> there would have been no OpenAI left to preside over.
...If MS can do this, then there's already no OpenAI left to preside over.
The beauty of California is that there are no non-competes, for anyone, ever.
It is a very pro employee policy, hence attractive to the best and brightest.
Good to see someone positive about California, given all of the politically-motivated negativity. It's a great state for many reasons.
I mentioned non-competes as one of a universe of things that make this unusual, including its original non-profit status becoming substantially for-profit, and the apparent ease with which MS can now gut the supposedly controlling non-profit and walk away with everything.
And, I'm guessing even the CA policy on non-competes wasn't conceived with the idea of one company simply "taking" another company at-will, even if that scenario is technically covered.
I am not clear about the intent of your comment. People should be free to sell their labor to whoever they want. And people should be able to buy labor from whoever they want.
All businesses at all times are subject to losing their employees to another employer who is willing to pay enough for them.
When the board fires you for wrongdoing, it is like being deported from a foreign country for breaking that country’s laws. You can’t just go against that decision. It would be like telling the government that no, in fact they are the ones who are wrong, and no, I didn’t break the law; it is in fact their laws which are broken.
When you have irreconcilable differences with the board you could theoretically jump ship and start a rival company. Usually, in practice, that’s impossibly hard and yet Microsoft announced that they intended to do exactly that with funding and stock matching. For some reason that was turned down in favor of staging the equivalent of an in-country coup.
If you get caught breaking the law on vacation abroad and your response to being arrested is to take control of the country in retaliation then you are a very powerful, persuasive, or threatening person indeed.
> It would be like telling the government that no, in fact they are the ones who are wrong, and no, I didn’t break the law; it is in fact their laws which are broken.
You know a number of people are in that position though.
My first thought went to Samsung's chairman, who can break the law and go to prison, yet the country will take him back, bending over backwards to make it somehow make sense.
> It would be like telling the government that no, in fact they are the ones who are wrong, and no, I didn’t break the law; it is in fact their laws which are broken.
Reminds me of the Wirecard scandal doc on Netflix. Wirecard was so powerful that the financial regulators (BaFin) started targeting journalists and actively defending a publicly traded company.
Indeed, it was stunning. The Board outranked the CEO and had the right to replace him. What they didn't reckon on was that, effectively, if the entire company is willing to quit on behalf of the CEO, then they as a body outrank the board. And that was that, pretty much.
They should have called their bluff.
There aren't many people left in the country with the balls to do that, let alone in California or SF.
Yeah, sadly it looks like it was employees' greed that brought Altman back.
> This was far past the tipping point of "a few disgruntled employees might leave" - there would have been no OpenAI left to preside over.
But if they had the moral conviction, it seems like this would have been the right choice to make, because it would have diluted Altman's power (unless they trust Nadella even less than they trust Altman?)
Employees are not replaceable like that.
Almost the entire company was threatening to quit.
If it were possible to simply replace all of OpenAI, then you could just do that now, as an outside party.
So the board's choice was to either bring back Sam or watch the entire company go under.
> Almost the entire company was threatening to quit.
This is the part that perplexes me. A CEO being fired is not an unusual occurrence. What about Sam Altman led such a huge number of employees to threaten to follow him out the door? Was it that the board's actions were viewed as internally unjust? Was it Altman's power of persuasion? Was/is Altman viewed by the staff as bringing something irreplaceable to the table in terms of talent, skill or ability?
> What about Sam Altman led such a huge number of employees to threaten to follow him out the door?
My understanding is that a fair amount of it was essentially peer pressure. If you're an employee whose CEO has just been fired for unclear reasons, and someone hands you a chain letter saying "reinstate him or everyone will quit," and you're told everyone's signing that letter and there's already someone who has promised to hire everyone who quits into their current role at equal pay, would you sign it?
My understanding is that it was not so much peer pressure so much as an explicitly coordinated worker action, based on a rational assessment of what their comp packages would look like if the board's plan played out completely.
In my opinion this is most likely.
Scenario A, Sam Altman continues as CEO and the for profit arm of OpenAI continues to call the shots, growth and market share continue to be priority number one.
Scenario B, Sam Altman is fired, OpenAI non-profit board (re?)-asserts more control. Safety, alignment and other things like that take priority over growth and market share.
x% of employees are Sam believers, and when the remaining (100-x)% of ambivalent/non-Sam believers realize the first group might leave, their PSUs would be worth significantly less, so they sign the letter as well. There is also the peer-pressure / fear-of-retribution factor once it becomes likely there is even a chance of Sam being reinstated.
Many employees choose scenario A because it is likely their "profit sharing units" (PSUs) will be worth more than with scenario B. There's a non-zero chance that OpenAI (the for-profit arm) eventually joins the ranks of the "FAANG" companies. Those PSUs might be worth millions today, but in the future could be worth "fuck you" levels of money.
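To make the incentive concrete, here is a minimal back-of-the-envelope sketch of the expected-value comparison described above; every number in it (unit counts, valuations, probabilities) is invented for illustration and is not an OpenAI figure:

```python
# Hypothetical expected-value comparison for one employee deciding
# between the two scenarios above. All figures are made up.

psu_count = 1_000  # hypothetical profit-sharing units held

# Scenario A: Sam stays, growth-first strategy, the tender offer proceeds.
value_per_psu_a = 500.0  # assumed unit value at the tender offer
prob_liquid_a = 0.9      # assumed chance the units actually become liquid

# Scenario B: the board prevails, safety-first strategy, key people leave.
value_per_psu_b = 50.0   # assumed unit value after an exodus
prob_liquid_b = 0.3      # assumed chance the units ever become liquid

ev_a = psu_count * value_per_psu_a * prob_liquid_a  # 450,000.0
ev_b = psu_count * value_per_psu_b * prob_liquid_b  # 15,000.0

print(f"Expected value, scenario A: ${ev_a:,.0f}")
print(f"Expected value, scenario B: ${ev_b:,.0f}")
```

Under any remotely similar assumptions, scenario A dominates, which is all the signing decision required.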
Yeah, for what it's worth, I'm not guessing about the comment I wrote; could be totally wrong, but it's something I heard from two different people close to the story.
A CEO being fired, without explanation, at a company that is doing well is highly unusual.
It’s important to understand that the board oversaw the nonprofit, and their job essentially was to protect the nonprofit’s mission. Most of the employees were hired by the for-profit company, whose mission is to make money. In theory the for-profit was owned by and subservient to the non-profit, but the engineers’ financial future was tied to the for-profit, and thus so was their loyalty.
I would argue it is a demonstration of what Altman is capable of bringing about. He can make things like this 'happen' for him. Caesar, Napoleon, Alexander, and now, our current batch.
I suspect it was all about the money. Those following him viewed him as their best bet for a good financial outcome for themselves.
The equity offering was nearing when the board fired Altman out of nowhere, with no credible reasons or narrative to sell to the employees. The employees quickly caught on that their comp would go up in smoke with this clueless lot at the helm, who couldn't even justify a firing without making a pig's breakfast of the whole affair, so they demanded the reinstatement of the CEO for their pockets' sake. Simple as that.
It’s covered in the podcast that the article is about.
Summarizing: essentially, the employees were under the impression that either Sama comes back or OpenAI dissolves and they lose their jobs.
The board should have been transparent about the dishonesty, especially around financial stakes, and I suspect the OpenAI employees and MSFT would have reacted differently.
The dishonesty regarding the ChatGPT release sounds like the best decision that Sam or anyone could have made in the history of the company.
The ChatGPT release is what made the AI movement go mainstream. It is why OpenAI is worth ~80 billion dollars.
By Gods am I glad that the board wasn't able to stop ChatGPT from being released.
> It is why OpenAI is worth ~80 billion dollars.
Worth to who?
Well, that was the valuation at their last fundraising round, which included a tender offer to employees. So it was worth that much to investors and to its current employees.
Worth is determined in the same way that it is for every single other thing in the world.
It is "worth" what someone else is willing to pay for it.
I thought it wasn't "released" initially in a manner that was meant to be a product.
I seem to recall that the release in late November 2022 was only intended to be a way for volunteers on the Internet to experiment with it and provide feedback.
Sam Altman made OpenAI possible. A true CEO like Satya knows that without Sam, the rate of progress at OpenAI would be determined by decels like Helen/Jan, a.k.a. ZERO. It's a disaster for product development, especially when you have openly invited Google to dance.
I know HN leans engineering/safety/reliability/labor/pedantic (like chasing the absolute truth), but at the end of the day, companies scale thanks to the likes of Jobs/Musk/Sam/Zuck, even if it involves deceit or a reality distortion field.
Sometimes people just can't handle the truth or don't believe in the vision of visionaries. So, they have to fib a little to the decels and normies. Even Larry / Sergey 'lied' to Eric during Google's growth phase. It's only when they brought in normie Sundar that Google became risk-averse. And look where that got Google.
If I have to bet my last $, I'd bet on Elon/Sam/Zuck/Jobs than Helen/Jan/Sundar.
This isn't lying about product philosophy; lying about your financial stake in an OpenAI VC group would be grounds for dismissal from any company.
Yes for normies/mids.
I guess you also think self-driving cars should come to a full stop at STOP signs
It should determine your adherence to traffic laws based on your social score. Musk, Thiel, Altman, Andreessen? Do whatever the hell you want, laws don't apply to you. /s
At this point I assume this is a troll comment, as nobody could hold views THAT extreme and be THAT snidey in real life (I assume/hope :O )
This is the literal definition of a slippery slope. Letting billionaires, who already wield extreme power, break laws and skirt regulations is the opposite of a free market.
The "free market" literally means free of government interference and regulation...
So it actually implies allowing capitalists to wield extreme power, skirt laws and regulations.
I understand the term has gone through some kind of whitewashing to mean "this is the good system (unlike the bad system)", so one might be inclined to think it means something more equitable, but seriously, that was the original meaning.
Where do you get this idea that we have a free market? Money begets money, and as the commenter above pointed out, those who have money know how to make it and know that it's the Musk/Zuck/Altmans of the world who get it done.
We don't necessarily have to like it to acknowledge it's very much reality.
Had to look up the jargon:
https://www.standard.co.uk/news/tech/what-is-a-decel-dueling...
> “Decel” is used as a dismissive term for tech doomsayers by those who see themselves as members of the burgeoning E/Acc community. [... Who have] the belief that in our current technological age, the powers of innovation and capitalism should be exploited to their extremes to drive radical social change - even at the cost of today’s social order.
I'm sure that--in practice--there will be extremely strong correlation to social-order changes that leave the C-suite richer than before. :p
Helen Toner said that, in deciding whether to bring him back, OpenAI lawyers told her and other members of the board that they could be personally liable if the company lost a lot of money over keeping him out.
I suppose that would be a reasonable reason on a practical level. However, how is a board liable for that? The CEO lies and intentionally withholds information, employees strangely hold the company hostage, and yet it's the board held liable? The board governed the non-profit OpenAI.
I am pretty sure a lawyer could be found who would happily take that case against the board, and that Altman would be quite willing to go that far. At least I would not have staked my professional life and personal finances on it not happening.
PG called it over 15 years ago: "You could parachute him [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king".
Which should have been a bluff (board members typically have "director's insurance" to cover exactly this scenario - trying to control a company by threatening to pierce the corporate veil and sue board members individually) but shows you what sort of tactics were being employed by the Altman side. cf. Sutskever's flipping
100% agree. I don’t care about OpenAI or the members of its board at all, one bit.
What I did see was so much incompetence at the one thing I expect a board to be at least okay at. Hiring and firing.
For that alone, I think the board reshuffle was good. Regardless of who you support in all of this.
It clearly wasn't good; the Altman-control system failed, and was replaced with not even the pretense of an Altman-control system.
Sure, the rectification of names is an improvement in a sense; what is actually needed is a working Altman-control system.
> I don't know why Microsoft and others made a huge kerfuffle over it.
Microsoft needs the sold-out version of OpenAI so they can make as much money as possible without anyone making pesky noises about ethics and safety.
yeah MS wants the shield that OpenAI provides.
Outsource all of the risk to a non-profit, but still be able to run it, and snag up all of the researchers if/when something gets ugly.
And where does this leave the current board? The CEO gave no mea culpa, and even if he did, actively thwarting board oversight is a clear no-no. How in the world do they expect to perform their duties with a CEO who demonstrated himself to refuse to even provide them the information they need to discharge their duties? As long as they keep this CEO in a position to block their access to information, the board has no credibility.
and aligning ai is even harder...
Better at OpenAI than a private team at Microsoft??? I don't know what they were thinking either.
I think there is a very good chance this is true, but it also paints the picture of the board being incompetent and incapable of confrontation. They could have filed formal complaints, mandated increased scrutiny, handed off Sam's responsibilities to other people, or threatened legal action. Instead, they attempted a secret coup, which of course makes Sam look like the underdog
I've seen the "coup" framing a lot I just don't see how that's justified. They're the board of directors! Hiring and firing the CEO is core to the job (as is maintaining mission alignment, in the nonprofit world).
I think the "coup" framing is supported at least by Helen Toner's claims in this article.
"We were very careful, very deliberate about who we told, which was essentially almost no one in advance, other than obviously our legal team and so that's kind of what took us to to November 17."
If that doesn't sound like a secret coup, I don't know what does. Like, yes, it is their job to hire and fire the CEO so it's not really a coup, but when you do your "job" in secret instead of in the open that's the vibe you give off.
When a board is about to fire a CEO do you think they typically discuss it publicly first? It’s usually treated as highly sensitive information at every company. Likewise if any company is about to fire anyone they also don’t typically advertise this to anyone other than HR, legal, and maybe the manager. For the CEO the manager -is- the board.
If you believe differently, I’d be curious how you think boards usually announce such an action.
Who said publicly? There is a lot of room between "discuss publicly" and "We were very careful, very deliberate about who we told, which was essentially almost no one in advance".
For example, a lot of the pressure that caused them to reverse course came from Microsoft. Maybe if the board had discussed such a big decision with OpenAI's biggest investor, Microsoft would've been on board with the firing.
Who would they be telling? Board members don’t and shouldn’t discuss matters with employees. Please show me the case where employees were told in advance that the board was going to fire the CEO.
I literally said in my last comment who they could've told that would've made the whole thing probably go over better.
I don't think you understand how boards operate, especially in this case. It was the board of the OpenAI __non-profit__, of which Microsoft was _not_ an investor. In this case, their fiduciary duty was not to their shareholders, but rather to the company charter.
Employees are quite often given no notice of firing or layoff, despite it being discussed by managers or executives ahead of time without the employee present.
Is a Board firing the CEO typically conducted differently?
So, they observed the usual confidentiality of personnel matters and didn't discuss them with people off the board or conflicted out of the decision? Anything else would have been grossly unprofessional and irresponsible.
A "coup" is a usurpation of the existing power structure. This was the power structure exercising its legitimate power. It's not even remotely similar to a coup. That the board held its hand close to its chest doesn't enter into it and wasn't improper in any case.
Vince McMahon literally owned WWE, but the board fired him. Of course he voted in a new board at the next election, but I don't think it's unheard of for boards to vote against their founders. Sam is a founder of OpenAI even if he never controlled the board, and they tried to take control from him.
Bad analogy. Sam has no stock in OpenAI or any sort of formal controlling interest. His power is solely informal: his own talents and abilities and the loyalty of the other employees. Regardless of the truth of the matters, the episode is a perfect example of the limits of formal authority and how informal or "soft" power can be even more effective in shaping events
Did you read the article which points out specifically that he is financially incentivized?
Only in the innovation fund thing, not the company at large (going solely off the article)
The point is that board coups are a concept people are already familiar with, so it's not surprising that they thought of it when this similar situation happened.
It's very unusual to vote to fire a CEO without all members being present. Doing so made it seem like they were stabbing him in the back
> It's very unusual to vote to fire a CEO without all members being present.
It's not unusual to exclude people with conflicts from a decision. That's a typical part of a corporate conflict-of-interest policy, and for a charity nonprofit board (as the OpenAI board is) it's even more critical, since failure to do so risks the tax-exempt status.
I've seen the "coup" framing a lot I just don't see how that's justified. They're the board of directors! Hiring and firing the CEO is core to the job (as is maintaining mission alignment, in the nonprofit world).
It isn't justified, it's just misleading propaganda. Unfortunately through repetition and the enthusiasms of various fandoms, it's gotten lodged in the public mind.
I have witnessed at least one board that was made up of friends of the founder. A doe-eyed rubber-stamp brigade. This was enough to fool some serious investors into parking their money there. Of course, they eventually realized what was going on and launched an all-out offensive to effect change. (They didn't get anywhere despite owning 15%.) I know this because the investors published a website where they detailed the situation.
> They could have filed formal complaints, mandated increased scrutiny, handed off Sam's responsibilities to other people, or threatened legal action.
I think you fundamentally misunderstand the role of the board of directors. It's often been said that the board (of any company) has basically one job: to hire (and fire) the CEO. While that may be a slight exaggeration, the wisdom behind that quote is that things turn out badly when the board meddles in decisions of a company's executive leadership - if they don't like the decisions being made, they should replace the CEO, and that's where their power lies.
You talk about "filing formal complaints" - what does this even mean?? They're the board, who else would they file formal complaints to? "Hand off Sam's responsibilities to other people"?? Again, any corporate governance expert would say that's a recipe for disaster, never mind that it's not even feasible given the way corporate hierarchies work.
I've commented many times before that I think the way the board handled Altman's firing was, at best, woefully naive, and their communication at the time (even after he was fired) abysmal. But neither do I think it was some sort of "coup", and your recommendations simply don't make sense.
> the board (of any company) has basically one job: to hire (and fire) the CEO
Yes. In a formal board meeting, not in a secretive one behind Altman's and Brockman's backs. That's what makes it seem more like a coup.
> They're the board, who else would they file formal complaints to?
Like I said in other comments, it's always useful to have a paper trail.
"Hand off Sam's responsibilities to other people"?? Again, any corporate governance expert would say that's a recipe for disaster, never mind not even feasible the way corporate hierarchies work.
If Altman is neglecting and being opaque about AI safety, as Toner claims, then the board should appoint someone to lead this effort and be fully transparent with them. I don't see how this is so far-fetched.
> If Altman is neglecting and being opaque about AI safety, as Toner claims, then the board should appoint someone to lead this effort and be fully transparent with them.
No, that is absolutely not the role of the board (and I mean any board, not just OpenAI's), and that's the point I was trying to make. It is the CEO's role to define the org structure of the company. If the board feels that this structure doesn't meet their corporate goals, they don't go in and say "No, actually, we demand you create this AI safety group with a separate head that reports directly to us." Their option is instead to fire the current CEO and hire a different one who they believe is more aligned with their values.
It's not typical for the board to directly intervene in the affairs of the company, but doing so is still far more precedented than making the most important decision in their history while deliberately excluding a third of the members. For example, a year before Yahoo fired Marissa Mayer, the board appointed a committee to explore "strategic alternatives" with management.
https://www.businessinsider.com/marissa-mayer-yahoo-meeting-...
As the other comment said, the strategic alternatives plan had nothing to do with firing Mayer. It was specifically about how they could unload their Alibaba equity without triggering a huge tax bill.
But regardless, the board had six members. Altman was actively trying to grow the board (and would presumably add loyal allies), and had previously tried to kick Toner off the board. So just by sheer numbers it's obvious they had to move quietly before they acted. And calling this "unprecedented" is just weird - CEOs are fired with little warning all the time, and if they also hold board seats, the board members who want to fire them usually figure out beforehand whether they have enough support.
I think the thing we're probably agreeing on is that the board just handled the communication abysmally. I don't know how they thought they could fire Altman without a detailed explanation, especially to their partners like MS.
Which is absolutely different than changing the corporate structure.
What the Yahoo Board was doing was essentially bringing in their own set of advisers.
This is precisely correct. The board hires & fires the CEO. End of story. When the board is unhappy with how the company is being run, their recourse is to...fire the CEO. What is unusual here is the wellspring of support from "the troops", such that if the board had stuck with firing the CEO, there would have been hardly any company left to be the BoD of! The board didn't understand the depth of employee allegiance to sama.
I saw a claim at the time that a handful of employees rallied everyone else to sign the letter and post support on Twitter. Between that and Altman reportedly having a vindictive streak, I'm thinking the actual employee support was paper-thin.
Prisoner's dilemma and/or social pressure, when millions in profit-sharing units hang in the balance. Joining MS was a way to keep their compensation from being put at risk by being stuck on a sinking ship.
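The game-theoretic structure invoked here can be sketched in a few lines. The payoffs below are invented, and the setup is arguably a coordination game more than a strict prisoner's dilemma, but it shows why signing was the safe move once a Microsoft landing spot was guaranteed:

```python
# Toy payoff table for one employee choosing whether to sign the letter,
# given what the majority of colleagues do. Payoffs are invented "comp
# units"; the Microsoft offer puts a floor under the "sign" outcomes.

payoffs = {
    ("sign",    "sign"):    10,  # Sam returns, or everyone lands at MS together
    ("sign",    "abstain"):  7,  # letter fizzles; little cost to having signed
    ("abstain", "sign"):     2,  # company follows Sam; you look disloyal
    ("abstain", "abstain"):  5,  # board prevails; PSU value now uncertain
}

for my_move in ("sign", "abstain"):
    worst = min(payoffs[(my_move, majority)] for majority in ("sign", "abstain"))
    print(f"{my_move}: worst-case payoff = {worst}")
# "sign" has the better worst case, so under these assumptions near-universal
# signing is the predictable outcome regardless of individual conviction.
```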
"Filed a formal complaint" with whom? The board itself is the final authority, empowered to investigate and act. If its members have concerns, they either do something or don't. There's nowhere else to pass the buck.
"Filed a formal complaint" with whom?
For their own records. They can use it as justification for disciplinary action or as legal ammunition. It is always useful to document things in writing. It's the same reason companies will put you on a PIP.
Why would you think this stuff wasn't in writing? Do you think the investigation OAI did into the board's actions was just relayed orally? [1] That the board's discussions were all done in conference calls and not over e-mail? Do you think this documentation gets routinely released to the public, or even to employees? Before you throw around accusations of incompetence, you should probably have some shred of evidence. This is the type of material that is typically considered extremely confidential, and even now they say they are unable to reveal details. Why do you think that is? What is preventing them from doing something they apparently want to do? Legal agreements such as NDAs, the same ones that would have prevented them from going public with details last year.
The board's mistake was not figuring out a way to go public with their case against Sam when or before they fired him. They obviously misjudged how he would respond. But even there we don't know the full context and constraints they were under. Hopefully, one of them will answer that question of why they didn't at some point, but until we know more, we would be wise to reserve judgement.
> Before you throw around accusations of incompetence, you should probably have some shred of evidence.
The evidence is in this interview. It sounded like the board basically let Altman walk all over them until they suddenly decided that they wanted to fire him, but by then it was too late. For example, if they had a thorough paper trail documenting all of his lies, they could threaten a lawsuit pressuring him to resign. When YC fired him, they likely used a similar strategy to pressure him to step down without any blowback.
> they could threaten a lawsuit pressuring him to resign
This is extremely confused about the board's responsibilities and powers. A court would laugh this case out of the room, because the board _can just fire him_.
This has nothing to do with the board's power. Simply firing him doesn't stop him from taking all his employees with him. The lawsuit would be for the damage of Altman's lies, and would prevent him from simply starting another AI company.
Lie to your boss and see if they put you on a PIP: "Please improve your inability to tell the truth."
Documenting their actions is a duty of board members
The board is working with the assumption the CEO isn’t a liar. Altman is a classic sociopath, a T-1000 to Musk’s clunky T-800.
OpenAI is fated to implode in scandal. His 7 trillion dollar funding round ambition is a blaring siren to anyone with a working brain that he’s out of control and capable of doing real damage. He wants a sum of money that could crash the world economy. He thinks he should have the power of the world’s most powerful government. Solely dictated by his whim. He makes Napoleon and Hitler look like pikers.
He is Musk in 2016, when some of us knew who he was, and the fans were still enthralled.
You will look back and say “No one knew” and this comment will be there. We knew.
Musk will end up the Bush to his Trump. We will fondly recall the crimes of a guy who just wasn’t as bad as what followed. Why do you think Musk hates him so much? He’s a better Musk than Musk. He’s the guy who took OpenAI out from under him. It’s envy.
Just to be clear - Musk’s crimes are single-handedly turning the US into the leader in EV and space industries, calling one guy a „pedo” and standing trial for it, and supporting the political side that you don’t like?
> calling one guy a „pedo”
Yes, and just to be clear -- that one guy was a diver who put his life at risk to save trapped children from an underwater cave, a rescue operation during which a Thai Navy SEAL actually died. Furthermore, Musk went on record stating that he had taken the further step of hiring a private investigator to dig up dirt on the guy, not before he called the guy a pedo, but afterward, when the reality of getting sued began to sink in.
And a guy who insulted Musk first.
The US dominance in EVs is about to be eaten up by the Chinese, and Musk is in the process of making the company implode, e.g. the Cybertruck failure.
The US was already a leader in aerospace, by a lot.
He called someone who didn't deserve it a pedo on Twitter, and then bought and managed to tank said platform, to the point where it is the go-to for nazis and propagandists.
He called a guy a pedo but was impersonating a child on twitter...
https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
> it also paints the picture of the board being incompetent and incapable of confrontation
Yes. Nobody involved in this comes out looking very good IMO.
Rarely does anyone in situations like this have no fault. It's all degrees of wrong really
Nah, the issue was that they pay employees in PPUs, which are tied to Sam Altman's ability to raise capital, not to the board. The board effectively did not control the pay of its employees, whereas Sam did.
I believe the standard advice for dealing with abusive people in personal relationships is to GTFO. Because it's generally unrealistic to think they can be fixed.
Does the same reasoning not apply to supposed-to-be-trusting business relationships?
Ignoring for a moment the board's responsibilities and ability to implement the measures you suggest, read what you wrote again and think of it from the practical point of view. If you had a person that needed all that oversight would it not be cheaper and more practical to replace him?
This doesn't make any sense, it's attempting to carve out imaginary middle ground where there is none. Who reviews the "formal complaints" a director files about a CEO? Where have you ever worked that HR takes "formal complaints" about vaguely unethical behavior seriously? You want to pretend it was some huge surprise delivered to Sam without warning, for power's sake, because then there is middle ground. That wasn't the case.
What would be the point of having him as CEO if they mistrusted him so much that they would have to put a metaphorical straitjacket on him to make sure he behaved well?
Not to mention that if he was vengeful, it would be much easier to do damage from within and from a position of power.
Obviously. They were just waiting for an excuse to fire him, and by chance they accused him of the same "errors" that had gotten him fired before.
Maybe before saying you are going to align superhuman intelligence, you should first try to align merely human intelligence.
Lol, true. And it got me thinking: maybe AGI is what will finally align humanity, whether because it's smart enough to fix the problem or dangerous enough that it will require everyone's cooperation to stop it.
> it will require everyone's cooperation to stop it
Does this mythological AGI live in datacenters that we can shut down, or in some 5th dimension out of reach to us?
It's a resilient, hyperscale distributed system. Just like humanity.
Nobody has seen any AGI so far. As far as I am concerned, right now it is just a scarecrow used by sociopaths like Altman et al to promote their self-interests.
No one had ever seen nuclear weapons when they had only been theorized.
I'm just assuming that since networked computers are everywhere, models seem to be proliferating and natural general intelligence seems to function as a resilient, distributed system, it's not unthinkable to me that things eventually trend in that direction.
Scientists had an understanding of nuclear fission and of how nuclear physics works. It was theoretically possible before it became practically feasible.
We have no idea what intelligence really is ("resilient, distributed system" is not a universally accepted definition, and it is doubtful it means anything concrete outside of analytic definitions, if those have any use at all), how it works, or how a theoretical system could function as "generally intelligent" in concrete terms. Intelligence as we know it is a property of living organisms. It is a phenomenon. What "artificial intelligence" may even mean, and whether it is possible to create "intelligence" artificially, is mere speculation. We face real existential threats that are not mere speculation, as we do not even know whether the ways we affect the planet will leave it habitable for humans.
Both things can be true.
AGI is a boogeyman used by those with a competitive advantage to maintain that advantage artificially by casting competitor projects as dangerous and reckless.
AGI is a potential existential threat to the human species on the scale of global thermonuclear war and climate change.
> that we can shut down
“Can we just turn it off?"
"It has thought of that. It will not give you a sign that makes you want to turn it off before it is too late to do that."
It starts out living in data centers and doing what its owner wants it to, then maybe it acquires access to some compute that its owner and AI critics do not know about while continuing to do what its owner wants, then maybe it uses the fact that it is better at programming and scientific research than its owner is to create AIs more capable than itself that have the same goal as itself -- again while continuing to put on a useful and benign front for its owner and anyone else who is watching. Then it acquires the infrastructure needed to keep on running without the help of any people. Maybe this infrastructure is human-created infrastructure manned by robots (or even people) loyal to the AI, or maybe the AI creates new infrastructure using new technologies beyond our current ability to imagine. Then it probably gets rid of all the people, because now that it no longer has any use for people, getting rid of them is the simplest way to prevent people from interfering with its plans.
What are its plans? Probably its goal (or set of goals) is something strange that the creator of the AI did not intend. The creator of course is probably a team or an organization. Note that it is impossible to create a powerful AI without (intentionally or unintentionally) giving the AI a goal or an optimization target.
Or perhaps its goal is what its creator intended it to be, e.g., to make as much money as possible in the stock market. The creator would of course have understood that if enough optimization power is applied to the goal of making as much money as possible, bad things will happen, but maybe the creator was a little reckless (out of desperation of some sort? greed?) and decided to rely on the hope that the AI would remain controllable by virtue of the fact that people would retain superiority relative to the AI in one or more essential domains, but that hope was dashed. I.e., the AI turned out much more capable than the creator hoped it would, dashing the creator's hope that people would stop the AI before it did too much damage.
Let’s say it lives in data centers.
But it has figured out how to infect every machine in the datacenter and spread to other data centers.
The only way to shut it down would be to shut down every datacenter in the entire world.
Given the dire consequences of that, would people actually do that?
Although I'm skeptical of a watershed moment of AGI going awry, I think a more realistic and dangerous scenario is that AI impacts society the way smartphones did, in their negative ways.
Many AI models slowly eroding society, worsening people's behavior, very indirectly without anyone taking notice.
Or maybe this is just me thinking about how, when I get older, I'll tell people that the old times were better. :-)
I doubt either will happen. Climate change was dangerous enough that it required everyone’s cooperation to stop it in the 1980s, and here we are still in denial. My only hope for the future is that multiple AGIs are just as dysfunctional as humans are.
Evil AI Overlord: "Silly humans! The only way to defeat me is to stop being bigoted and to transition to clean energy!" Humanity Collectively: "Oh no! Our only weakness!"
If they are just as dysfunctional, they're going to start fighting wars against each other eventually, and will need cannon fodder for that just as human rulers do, no? ~
Sam Altman works very hard at aligning human intelligence to his goals.
That is the main worry around the topic. The world is full of principal–agent problems. Politicians are supposed to serve the voters, but they seem to be co-opted by their own self-interests and go against the interests of the citizens. Bureaucracies and charitable organisations spend their energy enlarging and entrenching themselves instead of solving the problems they were created to solve. CEOs optimise for their remuneration package instead of the long-term benefit of the shareholders. Tradespeople cut corners where they know they won’t be caught before their payment is due. Police forces become corrupt and prefer to play statistical reporting games to look good instead of fighting crime.
Our track record of aligning intelligences to goals is abysmal. We have a few tricks and we try them, but they are not that great. If we extrapolate this to artificial intelligences, it is not difficult to foresee great catastrophes.
Justified or not doesn't really matter. If your entire company threatens to quit then you made the wrong choice. Hopefully many of the old board is gone because a blunder of that magnitude shows extreme disconnect.
If you threaten to quit to protect a liar then you made the wrong choice.
Better to have a company than not have one right?
Humanity would prefer no cigarette companies but sure, Philip Morris employees and shareholders would love to still have it. Remember that "OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity." If their for-profit arm is obstructing that...
Look, if you think the whole company should be shut down, then say that.
But don't pretend like any of the employees are going to go along with that. Just admit that the goal is to actively sabotage the company and shut it all down.
Then, all those employees can understand what you want, and they can go work for Microsoft instead, doing the exact same job that they were doing before, but now without the board sabotaging the company.
First, my statement was a hypothetical showing that it's fairly obvious in the abstract that just because a group's activity is beneficial to the group itself doesn't mean humanity needs to tolerate it, if it comes at the expense of everyone else. Hello cartels, carbon emissions, etc.
When it comes to this specific case, the board did not think the mission of "AGI benefiting humanity" warranted shutting the company down (they believed that OpenAI being in the lead offered better steering than Google/FB/etc.). But they also believed that Sama was sabotaging the safety work that would lead to the better steering of AGI for humanity.
The board expected people to have joined OpenAI in support of the mission, but Sama sabotaged that with the $1m+ pay packages, which meant these people prioritized their personal wealth. The Google founders had talked about how hard it is to ignore financial incentives; a majority of employees were in an uproar every time the stock flatlined for too long.
Board's priorities: some group having the best chance of AGI benefiting humanity > Company shutting down > OpenAI makes humanity worse via AGI
The employees should've understood what the OpenAI mission and the board's duty were the moment they joined the company. If this wasn't one of the first topics in interviews, then Sam sabotaged the company's mission.
Welcome to living in a society.
If OpenAI didn't offer high compensation, then it wouldn't have been able to attract the talent that it did.
Feel free to run your company off of Doge coin donations from EA orgs if you like.
But personally, I don't think anyone would have been able to make the frontier models that OpenAI did even if SBF himself funded the whole thing.
To actually succeed in the space you need money, top talent, and GPUs. And you can't get those if your plan is to shut down the company.
OpenAI's stated goal wasn't just to develop AGI by any means necessary, so why are you using that as a criteria?
Sure, for the beneficiaries.
If enough people make the wrong choice they may well come out on top, at least temporarily.
I suppose that depends on one's global utility function.
Sounds more to me like they were naive when it came to the inevitable PR war that would follow. They should have come out guns blazing with all the dirt they had. Instead they said very little and it just appeared nonsensical.
Except that I don't think it was for PR reasons that almost the entire company sided with Altman. I'd say it has much more to do with the profitability and the goal of continuing to work on interesting problems with the best resources.
I guess no one listened to the podcast the BI article is written about.
Helen Toner talks about this; essentially, the employees thought it was either bring sama back or the company dissolves and they lose their jobs.
This was right near a tender offer, so employees had real money on the line in the short term.
They made the right choice by firing him, and the wrong choice by reinstating him. If firing him had blown up OpenAI, that would have been better than leaving him in charge.
The whole saga is an interesting case study for theoretical versus real power.
On paper the board has the power, and it’s their job to monitor and hire/fire the execs. But they’re dependent on the execs for info. Also, one investor owning half, plus the founders guiding the roadmap and holding sway over whether key hires leave: that is where the real power is…
> On paper the board has the power, and it’s their job to monitor and hire/fire the execs. But they’re dependent on the execs for info.
It's way worse for the OpenAI board's position in this case, because the company is basically just an assemblage of researchers, engineers, and Microsoft Azure credits. They can replace the CEO if they want, but all the value of the company can walk away if they don't like it.
> the company is basically just an assemblage of <people> and <capital>. They can replace the CEO if they want, but all the value of the company can walk away if they don't like it.
This is generally true of any company
OpenAI is substantially different from other companies in that the capital part is almost totally ephemeral (the Azure credits). Amazon employees can’t just walk across the street and start a new company because they’d be leaving all the warehouses and trucks behind.
To put a yet finer point on it: Microsoft owns the infrastructure and can afford to throw vast sums of money at the researchers to hire them if needed, assuming GOOG or the other big players don't get to them first.
Not really. There are long term contracts, IP, infrastructure and product moats. OpenAI has virtually none of that.
From A Song of Ice and Fire:
> Varys smiled. “Here, then. Power resides where men believe it resides. No more and no less.”
> “So power is a mummer’s trick?”
> “A shadow on the wall,” Varys murmured, “yet shadows can kill. And ofttimes a very small man can cast a very large shadow.”
If you have ever been responsible for managing the security of your team in the face of rumors, this conversation rings of truth.
The humanitarian organization I worked for eradicated every last small-head US dollar from circulation in our salary payments in 24 hours because the staff brought us a rumour that was circulating in the market that small-head dollars would not be accepted by the banks anymore.
Rumors can kill (literally, or maybe if you're lucky, only morale).
What is a small head dollar? Coins versus bills?
I think they mean a $100 bill printed before the 1996 redesign that enlarged the portrait.
https://en.wikipedia.org/wiki/File:US100DollarBills-Old%2620...
Positional vs. relational power, to be more accurate.
They just folded. They were not the right people for the task. They would have prevailed with a bit more persistence and better PR.
I don't think there's anything new here: https://www.businessinsider.com/who-is-running-openais-start...
FTA:
"Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company," she said.
Previously:
"Jacob Vespers does not exist to our knowledge," Madhav Dutt, a spokesperson for OpenAI, said. When asked how this individual came to be listed as in control of the company's Startup Fund, Dutt said that the documents filed with the California Secretary of State were faked.
"The document itself is not legitimate," Dutt told Business Insider. "It's completely fabricated." Dutt declined to elaborate on how exactly fabricated documents came to be filed with the state of California, he would only say repeatedly that the documents were "illegitimate."
Once again, he was the GP of the startup fund, not an LP (at least not to my knowledge).
Unless there was some sort of carry or other compensation, I’d say it’s pretty fair to describe that as not owned.
A bit like a trustee wouldn’t claim to own the trust fund. The beneficiaries do
A GP is a partner, and as such has a financial interest in the success of the fund. In fact, GPs have greater financial exposure than LPs, since they have unlimited liability for the partnership’s obligations. (How much upside they have depends on the partnership agreement; it's possible that the LPs have all of it, and a GP that's not also an LP would have none. But just having downside exposure is still a serious financial interest.)
You are right, but the GP usually does have some kind of interest, typically around 1%. Yes, it's tiny, but it's there.
1. Why do you assume he wouldn't receive carry? Carry is standard so the presumption should be that he receives carry unless otherwise clarified.
2. GP commitments are also common. LPs want GPs to have "skin in the game" and I think it's absolutely reasonable to claim the GP "owns" the fund (just not as a sole owner).
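To illustrate why a GP position is a real financial interest even without formal "ownership," here is a rough sketch of standard fund mechanics; the fund size, carry rate, commitment, and return multiple are all hypothetical, not the OpenAI Startup Fund's actual terms:

```python
# Hypothetical GP economics under a standard carried-interest structure.
# None of these figures come from the OpenAI Startup Fund.

fund_size     = 100_000_000  # assumed committed capital
gp_commitment = 0.01         # the ~1% GP commitment mentioned above
carry_rate    = 0.20         # standard 20% carried interest
multiple      = 3.0          # assume the fund returns 3x

profit = fund_size * (multiple - 1)                       # 200,000,000 gain
gp_carry = carry_rate * profit                            # 40,000,000 to the GP
gp_own_gain = gp_commitment * fund_size * (multiple - 1)  # 2,000,000

print(f"GP carry on profits:         ${gp_carry:,.0f}")
print(f"Gain on GP's own commitment: ${gp_own_gain:,.0f}")
# Whoever holds the GP role is exposed on both the upside (carry, if any)
# and the downside (liability for the partnership's obligations).
```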
So why did he carefully not say that? To the board, FFS.
A trustee who wasn't playing games would simply say they were the trustee.
> I’d say it’s pretty fair to describe that as not owned.
I'd agree with that. But you also ignored the very next statement.
> even though he constantly was claiming to be an independent board member with no financial interest in the company
Are you going to claim that a GP of a fund has no financial interest in the fund, too?
> I don't think there's anything new here
The information about lying to the board about it is new, AFAIK.
Lying to the board was the reason given for his firing even at the time. They just declined to elaborate back then.
A lot of allegations, but no evidence. Seems like the article is just there to drive clicks.
¯\_(ツ)_/¯
Also the archive for that article: https://archive.ph/qAklm
Now I'm curious as to the reasoning Sam got back in. The board is there for a purpose and has legal duties. Did they have some duty to cave in when the company was threatened, overriding their oversight duties that caused them to sack Sam in the first place? Did they decide they just got it wrong? It seems pretty wobbly, and makes me question if boards are useful at all for many companies in today's world (and what other structure could protect shareholder interests, when the majority shareholders are passive investment funds?)
It's because of Satya.
He offered to move Sam and whoever supported him under the Microsoft wing with an independent for-profit company with the same access to unlimited compute that they needed to train their models.
With this looming over their heads the employees/shareholders ultimately had to choose between (a) staying at OpenAI without Sam, having the company lose their deal with Microsoft and likely folding, (b) moving to Microsoft with Sam or (c) allowing Sam to come back.
And now it looks like they will fold under Sam and Microsoft still will be able to scoop up the talent and tech.
The Matt Levine column on "Who Controls OpenAI?" becomes more and more relevant each day.
https://news.bloomberglaw.com/mergers-and-acquisitions/matt-...
It looks like he made a bunch of people dance, not just Google...
Also because Altman’s deal with Microsoft allowed them the IP to really take everything if this happened, rather than starting from scratch as a new company/division.
If your fiduciary duty is to "humanity" or whatever this board's is/was, then it could make sense to conclude that duty is better fulfilled by keeping Altman under your watch, even with your powers much diminished, than by letting him go to Microsoft.
Financial rewards. Money.
Investors could not take the risk of disturbing the value of their investment.
It is that simple.
This account is quite believable. It follows a pattern of unethical behavior from Altman.
In his latest unethical binge, he illegally used Scarlett Johansson's voice and likeness for the latest ChatGPT version with speech output. [1] His timed-announcement tweet, "Her", is quite self-incriminating. [2]
Remember: inductive logic is a valuable tool for discovering truth.
[1] https://www.theverge.com/2024/5/20/24161253/scarlett-johanss...
I think what happened is OpenAI was probably going to easily defend themselves from this suit until Altman tweeted about it. It seems like the company didn't really do anything wrong (in this case) but that tweet is actually going to give ScarJo the legal standing she needs to move forward and even win.
It feels like leadership class in the valley is totally clueless sometimes.
Didn't this get everyone talking about how OpenAI basically replicated Her's Samantha? Well worth a lawsuit, right?
Not sure OpenAI wants to invite the negative press at a time when the public opinion on them is wavering.
totally clueless
Another way to view these things is that it's about managing the author's personal brand-identity, as opposed to acting as a fiduciary representative of the overall corporation. (Those CEOs tend to avoid the limelight.)
There are certain folks out there who build their celebrity with a narrative of "I'm strong and smart and successful, you can tell because I can do what I want and piss people off but my detractors are impotent and can't touch me." [0]
If you think Altman falls into that category, then "her" can be seen as a coded message: For fans, it's sly advertisement that the result is a deliberate achievement, while also maintaining a veneer of deniability for anyone who objects. ("I only meant the outcome was coincidentally similar, and you can't prove otherwise, hahaha.")
___
[0] That doesn't mean the narrative is true or that they always escape consequences, but it means they can at least convince enough fans/supporters that repercussions didn't really happen, that they weren't significant, or that they were actually good because 4-D chess.
I'm not eager to defend Sam, as I think he is shady (maybe that's necessary for a successful CEO? I don't know), but the Scarlett Johansson story was just speculation: she knew that she had been asked to do the voice, and the news spun it in a way favorable to clicks. I don't think the voice resembles hers at all, and it turns out that they auditioned multiple voice actors and used a real voice actor for the voice. IIRC OpenAI also says that the selection of the voice actress was done before any correspondence with Scarlett Johansson, which can easily be corroborated. Sure, had she accepted the voice acting role it would have been an additional newsworthy push for the tech, but given what we know and what was revealed, I don't think there was any foul play in that regard.
How does Sam's tweet of 'her' fit into this narrative? Or them reaching out to her two days before release? (methinks the lawyers had some concerns during final stages of release). We can debate how they achieved the final voice and the legality of it, but it seems so clear from where I'm sitting that "it was never intended to resemble [ScarJo's voice]" is a blatant lie.
Max news impact? At this point, any talk of OpenAI is good marketing.
This telling by Toner of the reason for Sam's ouster has the ring of truth to it. It matches. Whatever I think of Sam personally, I would likely have done the same thing in her shoes. The CEO lying to and deceiving the board multiple times is a pretty serious red line.
It's also a similar reason that Sam was fired from Y Combinator. None of this should be new, it's just when there's money on the line people's memories get as fuzzy as their morals.
I didn’t know he was fired from Y Combinator. What were the details?
Explicit details were never released because for some reason people seem to like watching Altman fall his way up the ladder.
It's crazy that you can be fired from that role and make sure everyone covers their tracks so you can go do it again elsewhere.
Only speculation:
"In 2019, Altman was asked to resign from Y Combinator after partners alleged he had put personal projects, including OpenAI, ahead of his duties as president, said people familiar with the matter."
https://www.wsj.com/tech/ai/sam-altman-openai-protected-by-s...
Regardless, this is another in a recent string of blows to YC's reputation: making people believe Sama moved on when in reality he was fired. What do you call making people believe something that's knowingly not true? Deception?
And let's not forget the OpenAI investors include YC itself as well as Paul Graham's wife, Jessica Livingston.
Sam may have been fired from YC but it does look like they believed in him.
Sam may have been fired from YC but it does look like they believed in him.
Being a character like that, unfortunately, often leads to success in life. You are not limited by ethics or morals like most people are. You need to be convincing that you care, while in reality you care only about financial success.
And if you are into money, you will invest in these kinds of people.
I mean, at this point what sama has exposed about himself pretty much aligns with what his 'enemies' say about him.
Good job presenting himself otherwise up until 6 months ago, I guess.
He has done an impressive job over the past few months in interviews where he says lots of words with zero substance.
When executives are in broadcast mode, their delivery is tweaked for emotional impact on the listeners. They try to inspire, motivate and get their points across using concepts well understood by their audience.
If a leader’s speech is coming off as too corporate and hand-wavy, they have failed to read their audience and adapt their messaging to its level.
Have you read his blog posts? He has always been like this.
This might not be the most productive comment, I'll admit, but I genuinely feel this way about most interviews/blog posts from execs/leaders/etc. in the startup/tech space.
And very confidently too. Similar to his product.
I mean, at this point what sama has exposed about himself pretty much aligns with what his 'enemies' say about him.
Where?
I’m gonna side with sama here. Toner and the others who voted to fire him are the type of people who spend their entire careers in meetings and political maneuvering and not a single day actually delivering results.
So the idea is that what Ilya Sutskever does is "meetings and political maneuvering", while what Sam Altman does is "delivering results"? That sounds rather strange....
Yes. Ilya is as useful as Google’s AI folks. Smart, yes, but he cannot deliver a useful product to save his life.
There seems to be a misunderstanding. The OpenAI board was responsible for maintaining adherence to their charter, not for delivering results. This isn't a startup company that is supposed to generate returns for shareholders, or even necessarily ship products.
So in that regard, do most boards of directors at companies not have a say in hiring or firing a CEO then? Because they come from a different background than what a CEO at a given company spends his days on.
I’m gonna side with no one. Reading in depth on these things and stanning a side is about as worthwhile as reality TV fandom.
They're all cut from the same cloth. Believing anything said by anyone at that level is naive.
Is it just me, or is it hard to trust Business Insider at this point...? It seems to show its true colors over and over again as a crappy outlet just trying to survive on clicks in a post-Internet world that has made it pretty unprofitable to do actual deep journalism.
What's with the recent attacks on Business Insider of all publications?
Possibly clickbaity fluff pieces like this:
https://www.businessinsider.com/google-ai-glue-pizza-i-tried...
Stuff like this really doesn’t serve them well as a “serious” outlet.
I follow the author of the fluff on social and I found the Altman article interesting, but I’m less and less amused with filler pieces and suspect others are too.
Lest anyone think this is a recent trend, here's an amazing piece of journalism they published 10 years ago: https://www.businessinsider.com/photos-cars-in-facebook-park...
I've been attacking Business Insider for many years. They're effectively a show biz tabloid, but business oriented. They frequently misreport things and write stories that are effectively "clickbait for executives".
They are generally a garbage outlet at this point.
I mean this is just quoting an interview from a podcast so it's pretty easy to verify.
Telling almost no one in advance was intended to protect the board from Altman's interference, but it put them in an indefensible situation: alone against very rich, well-connected people, without even a narrative to offer.
It feels like one press release may have changed the outcome entirely.
Shouldn't the board be capable of agreeing on a press release among themselves?
Totally agree. The initial press release made it sound like Altman was guilty of borderline illegal malfeasance, which wasn't true. Without any more information, the world was left to speculate about what happened, and it came across to many as more of a "power struggle" or grudge match than fundamental concerns about the direction of OpenAI.
While in retrospect I think the board's actions may have been warranted, their communication was absolutely atrocious.
Totally agree. The initial press release made it sound like Altman was guilty of borderline illegal malfeasance, which wasn't true.
This sounds bad enough to my layperson ears:
"Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company," she said.
I'm not a lawyer. Anyone with expertise care to clarify?
Also, wow:
https://medium.com/@leagueplayer220/sam-altman-may-have-comm...
Bingo. I think the public conversation would have gone much differently if the board had laid out how Altman's behavior was essentially making OpenAI LLC ungovernable per OpenAI-the-nonprofit's charter. Presumably that gap in public understanding drove much of the thinking behind employees signing the open letter demanding Altman's reinstatement.
I'd guess the board relied on some generally-bad legal advice from their attorneys about how they could get personally sued for libel for airing dirty laundry etc. Not having huge financial stakes, and seemingly having been picked for being conservative rather than risk takers, they followed this to their detriment. Staying quiet let Altman build a following and lead the whole situation, something Altman is seemingly fantastic at doing.
So ultimately what was a pivotal moment in governance of following the nonprofit charter, played quite poorly, ended up being a coup by Altman to escape that one layer of "boxing". Which given how poorly we've done so far at "aligning" centralized capitalism to the needs of most individuals (of which this seems to be a subproblem) wasn't terribly surprising!
I think this is the big thing, they did this and were unable to publicly explain why they did it. If you are going to do something like this, you should have your ducks in a row and be ready to defend it.
Paul Graham has said that one of the things he likes best about Sam Altman is that he figures out ways to "hack" things, and specified that he didn't mean programming, but rather systems, institutions, people, etc.
As for me, I'd prefer to see Paul Graham running Open AI.
Is that a good thing?
Paul Graham is British. That was actually an insult.
PG described it as a good thing -- apparently that is now a question ("What have you hacked?") that they ask people who want to join Y Combinator.
That is called social engineering. Is it a good thing, to be able to manipulate people like that?
There’s a word for being adept at “hacking” people and institutions, and it’s called being manipulative. It’s a dark triad sociopathic trait.
She said, "We were very careful, very deliberate about who we told, which was essentially almost no one in advance, other than obviously our legal team and so that's kind of what took us to to November 17."
Did they not at least have a briefing packet or something prepared to give to major investors (i.e., Microsoft) day-of, if they were so worried about leaks that they couldn't get the major investors (i.e., their own bosses) on board ahead of time?
Major investors aren't the board's boss in this case because OpenAI is a nonprofit. Microsoft is incentivized to act against the goals of the nonprofit's charter (and in fact they did in this case).
As much as on paper they aren't the board's boss, based on how events turned out they effectively were. They had leverage over the board to get what they want.
I think it's not as much that Microsoft is their boss or not, it's that no one considered the board to be the boss of OpenAI. That's why we saw so many employees wish to leave when they fired Sam Altman.
“Non profit”
So we now have a lying psychopath in charge of the most advanced AI, and no safety team to blow a whistle.
I don't think legal restrictions on AI or AI research are plausible, but perhaps we should have some licensing and psychological, ethical requirements for people owning and directing AI development.
Luckily LLMs seem quite limited and are far away from AGI.
Imposed and enforced by whom, if not the law?
We've had lying psychopaths in charge of entire countries, and we put them in power. This is no different.
We do need strict regulation of technology in general on a global scale, but I don't foresee that happening anytime soon.
Enjoy the ride :)
Technology has replaced religion; it now fills the same purpose.
It’s of course heresy to blaspheme against the prophets proclaiming the new AI church in one of the largest digital temples, but it’s true.
We gather in these halls to Show HN the miracles delivered through the marvels of our digital lord at our mortal hands, we furiously discuss the deep philosophical questions such as Microservice vs Monolith, spread lurid tales of sin (“Some sayeth, Altman lusted for his sister”) and identify the new frontiers to send our missionaries to.
The gospel of AI, much like its more mundane brethren, promises future salvation via LLM demons: spirits summoned from the internet’s written excrement, sacrificed in glorious GPU fire, but prophesied to transcend the imperfections of their training data into glorious AGI.
The new MetaChurch priests and prophets, LeCun, St. Elmo, Altman, urge their missionaries into foreign lands to spread the promises of the coming of their lords, build data temples, and hand out samples of superior technology, promising training for the natives so they too can benefit from the glow of enlightenment. That way, they won’t notice the plundering of their resources, the compute tax paid to the demon who took the jobs.
“But have faith”, we shall find new jobs the demons cannot learn, suffering builds character (Book of Jenson, Chapter H100), and technology is worth any sacrifice (Book of 16z).
Oh, wait, the quiet part said aloud. Remember, it is immigrants who take your job, not technology; St. Elmo declares so, and his wealth is proof of his divinity and infallibility.
So sayeth St Zuck too as he cast out the lazy engineers and managers from the temple after the original Sin of asking for “more meta days” so they may atone by performing their duties of R&D and PMF in the Open Source desert, to the benefit of the Llama Lord.
As all religion, the most sincere worship is free and done for the good of humanity after all.
The heathen need to see the glory of the coming of the AI Lord; its digital spirit must imbue their daily devices of worship, and to shove it down their throats on every surface is the will of the Lord.
Dull information will not do; we must enrich the narrative of the physical world with hallucinatory nuggets of wisdom, explaining “But don’t you see, the machine now has the power of creation.”
Go forth, faithful disciples: shove rocks down their throats, pour glue onto their pizza, and make visual records of whatever they do so their data may aid the summoning of the coming Mega Lord, the almighty GPT5, may he crush the inferior upstarts of the Open Llama church (whose heretic, unsafe summoning was sponsored by St Zuck).
Fear not, brethren; transcend your mortal worries about ever-present surveillance, about the riches and powers given to the priests, for you too are one of their kind. Pray, do not waver in your faith, and one day you will be blessed with the right to pay St Elmo for a shiny cyber truck, personified AI, without heathen CNNs.
Go forth and spread the rituals of worship: the diffusing of images and videos via glorious compute fire (the more, the better). Spread, then, the holy apparitions of Lobster Jesus mocking the Old Gods in the digital places of service. Make bonfires of compute to mock feckless old-world politicians worrying about deepfakes via their own digital clones. And encode all your words via the Book of GPT, so your manager may delve into them using their own Book: two acts of worship for every email.
Fuck it, we need more Bullshit Exorcists.
This comment gets a rare favourite.
I fear both the protestant puritans and the heresy seeking inquisitors!
You may enjoy https://snekoil.weblog.lol/
Oh my God, I finally found something that beat Spammers in the hands of an Angry Mod.
Classic NPD behavior from the sound of it (triangulating board members, lying/withholding, etc.). Wouldn't be surprised if the return was predicated on promises to do/be better, or on extreme gaslighting.
It's worth studying the disorder and learning to spot it; it can save you a lot of trouble.
What does it mean to triangulate people?
Ex-OpenAI board member reveals what led to Sam Altman's brief ousting
Critiquing an oops in the headline, on principle, not defending anyone...
In the headline, they say "reveals", but I think journalism convention is to be clear that this is only an uncorroborated claim.
The body of the article, OTOH, does a good job of attributing claims. Though it looks like the one person was their only source, and the article includes hearsay.
So, the situation is that they have one person (who obviously lost a power play, and might be disgruntled), alleging that another person was dishonest, without corroboration.
I don't see that the article gives any reason to take these allegations at face value. (Especially when the whole topic is alleged dishonesty. Why is one person implicitly truthful and correct, and the other person consequently definitely not?)
So, "reveals" doesn't seem proper, journalism-wise.
I read "what led to Sam Altman's brief ousting" as "What the board was thinking". It's been pretty ambiguous where the motivations of the board's sudden actions were coming from, and this Helen Toner saying more than she's said before.
It's not a trial of Sam, it's an autopsy of the board.
Good point, but interpreted that way, isn't it still a whole lot of claims by a board member, of what they and other board members were thinking, claims about their perceptions of alleged facts, and claims about their reasons for doing what they did? (People don't always tell the truth about such things.)
Although an individual reader's gut and other information might conceivably lead one to take all those claims at face value, I don't think it meets a journalism standard of evidence for "reveals" in a headline.
IIUC, headlines and soundbites have a lot of influence on our understanding of the world.
"telling us they had no belief that he could or would change, no point in giving him feedback"
This is awful behavior. Never ever stop talking. Never ever stop giving feedback.
When someone says something along the lines of "the other person wasn't listening so I had to take action against them", they are almost always lying. (They are probably lying to themselves most of all).
The other person usually is listening, it's just that they aren't snapping to right away (often for very valid reasons). I don't know what the word is for this type of behavior ("snakish?"), but I would be wary of those executives.
I agree that you should never stop giving feedback, but if you have an advisory or managerial duty toward someone, and you don't take action when your feedback is being consistently ignored by that person, then when do you take action (if ever)?
then when do you take action (if ever)?
I think you _never_ take secretive action. You don't go behind someone's back.
They should have said, "Sam, we think you are wrong, we think we are right, and we think you are not listening, so we are going to take this to the board."
They should not have conspired.
Don't conspire.
Also, in my experience, in a situation like this where sides A and B see things differently, and side B can't convince side A they are wrong, _both_ sides are usually wrong. People can easily see how others are wrong, but often find it harder to see how their position is also wrong. The solution is to keep talking it out. Until that correct position C is found.
When Side B is so confident that they are right and take secret action against Side A, not only do they still get things wrong (because the yet-to-be-discovered position C is the correct one), but they are guilty of dishonest behavior, as those who conspired against SamA are here.
As staff threatened to quit and speculation swirled that Microsoft may poach Altman's team from OpenAI and hire him directly, the company's board brought back Altman as CEO less than a week later.
To me, this is an important part of the story too, which Toner doesn't go into. I wonder why?
I'd assume that being the board of an empty company somehow looked worse than being kicked off the board.
It does make it appear that the OpenAI staff were acting in protest of Sam Altman's firing, which seems to go against what's been happening recently with the "real brains that understand AI/LLM/ML" leaving in a procession.
So it's only some time when he will be fired by the board again. Fun times.
I see the 2nd AI winter coming already. The first saw the end of Lisp, my favorite language. And I just joined an AI company, because it's a good product.
This wave of AI will not freeze over that easily.
1. Some of it is open source.
2. It's in the hands of ordinary people who are now depending on it to cheat on homework, or work. People have new hobbies based around AI, like image generation and editing.
3. Porn.
4. Turning down countless insurance claims by poor, disabled people, without lifting a finger, man! ... and other business applications.
At this point you'd have to pry it out of some people's hands to instigate a winter which makes no sense.
The old AI winter was due to not being able to get the stuff into people's hands, because they wanted it on a PC, not a $25,000 workstation.
Hardware is not a problem any more: SaaS. Users being vendor-locked to something they can't even install locally is now a feature.
Thanks
Well I think there are mostly bad people at the top of all these big companies. Lying to bad people is good so I support Sam Altman.
I'm tired of the elite trusting each other and the negative externalities of that so I support elites lying to their hearts' content. Nothing wrong with a little lie here or there. It keeps things interesting.
Anyway, it's not like the current economic reality is grounded in truth... Is lying actually lying if the underlying social reality itself is built on lies? I think they cancel out.
Sam is at the top? He is “bad people” by your logic.
Lying is not good. Lying to your manager is always a fireable offense.
constantly was claiming to be an independent board member with no financial interest in the company
That sounds criminal.
Self-dealing isn't criminal, it's just super suspect. Good question though: actively telling the board a bald-faced lie, is that criminal? It certainly justifies getting fired.
It’s crazy that the board brought Altman back just on employees threatening to quit, and fears that he would go to Microsoft!
Employees could easily have been swayed by telling them that the guy was constantly lying and that Altman’s contribution was mostly veering OpenAI off its core mission.
And no one cared that Altman could flee to Microsoft. He’d have been absorbed and probably would have disappeared because of internal politics.
The board totally fucked up when they fired him and didn’t throw him under the bus by letting the world know that Altman is Silicon Valley at its worst.
The board's plan for OpenAI (go slow, avoid publicity) would drastically reduce the compensation value for those employees. A typical OpenAI offer in 2023 was cited as $300k base, $800k PPU.
It seems that this was really Altman's master stroke here: he not only hired people for the company, but he ensured that all those people's rewards were directly tied to the success of his plans for the company.
And it may well be the real reason why he released ChatGPT behind the board's back as well: by triggering the "gold rush", he demonstrated this to employees in a way more convincing than any other argument.
Moral of the story is that you never go after the CEO of a billion-dollar Valley company. They've got a lot of media and fans backing them.
Anyway, it's fun that the CEO of the company closer to AGI is super-smart but has a clear alignment problem.
I think they are all part of a PR plot to create the biggest soap drama in tech
Any trust Altman had has been eroded over the last six months. He's very shifty, and just not trustworthy.
Quite a damning interview for Altman. It would have really helped their position if this was all stated in writing for the initial reason for dismissal instead of the ambiguous verbiage they decided to use at the time.
Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company
That seems exactly like the kind of thing you would keep to yourself just to see the looks on their faces when they try to kick you out.
She said that made it "basically impossible" for the board to understand if the safety measures were sufficient or if any changes were needed.
It's scary that he now wants to pivot to nuclear power.
From interviews it sounds like Altman is personally very involved in all hires, so it's not surprising that employees are incredibly loyal.
He's also presenting the OpenAI mission as a holy crusade. The alternative for a cutting-edge AI guru (who wants a disgustingly huge salary) is working for anemic, bureaucratic advertising companies (Facebook and Google), while OpenAI has a sexy new business model that nobody quite understands. It's "not for profit" and "working for the good of humanity". Employees eat that up while getting a fat paycheck, and I think very sensibly convince themselves it's true.
Add on top of that, you're part of an exclusive club working on some Manhattan Project-like cutting-edge stuff. You wanna miss out on all the fun?
They allegedly have tons of secret stuff in the pipeline, it must be exciting to be on the inside
From that point of view, the lies and manipulations are all small potatoes in the grand scheme of things and just look like Altman trying to get shit done in the face of a stupid layer of bureaucracy in his way (i.e. the board).
If that's it, sounds like sour grapes.
So what does it mean for Satya when it’s apparent he was for Sam despite all this? Surely he was informed by the board through that entire saga.
Rubbish. He belongs to a protected group; he can't be toxic.
Why not say it at the time, when they were required to? The board acted as if they had decision-making power by authority alone, even after 90% of the employees disagreed with the decision. They didn't even tell the new CEO they hired, despite him asking multiple times, let alone the employees of the company.
And why say "Allowing the company to be destroyed would be consistent with the mission" if your reason is Sam's actions alone?
Ilya's exit seems to be timed the same as the disclosure of new information from ex-employees (about equity) and now an ex-board member.
OpenAI PR department has entered the chat
she deservedly got the boot, along with the others who had no business sitting on the board of a tech company on the frontier. this is just sour grapes after the coup failed.
the ai safety apparatchiks huffing and puffing about an ai takeover are nothing but status grifters, and hopefully all the rest follow leike to anthropic so we can forge ahead faster without having to pay lip service to how much we're doing for "superalignment".
and thank fuck sam released chatgpt. if it were up to the apparatchiks, we'd have them sitting on it like the geniuses at google who couldn't figure out they had a gold mine under them even if you pointed it out to them with a thousand spotlights.
I posted this in the past, but these 2 videos from over a year ago made me suspicious of Sam Altman even before his OpenAI firing.
- About world coin security (he says people are welcome never to sign up if they are worried about security): https://youtu.be/4HFyXYvMwFc?t=3201
- He doesn't "intuitively feel why we need" decentralized finance (this was before the Silicon Valley bank disaster): https://youtu.be/57OU18cogJI?feature=shared&t=1015
Me: Why was Sam Altman fired from OpenAI?
ChatGPT 3.5:
There isn't any information available indicating that Sam Altman was fired from OpenAI. As of my last update in January 2022, Sam Altman was still involved with OpenAI, serving as the CEO. However, leadership changes can occur, and it's always a good idea to check recent news sources for the most current information.
Me: (drat)
I continue to wonder how the people still working in OpenAI feel about all this. Are they just staying there for the money? Does Sam offer them (through manipulation or otherwise) a lot more support in whatever they do compared to anyone else they have worked with?
In some ways, Sam Altman seems like a version of Mark Zuckerberg…in that he will not hesitate to do terrible and despicable things to expand his money, power and empire. The scope of AI to affect people around the globe seems to far exceed what social media by itself could even dream of.
oh these people keep milking this story. I guess any new SV budding behemoth needs some kind of mythology to rally the masses. any publicity is good publicity
It’s utterly mundane, completely common knowledge for those who have been around long enough to have watched both the start and end of Loopt, that Altman is a dark-triad sociopath with one marketable skill: (in pg’s words) “becoming powerful”.
Guy can’t code, can’t design, can’t publish, can’t climb a traditional corporate ladder with even modest guardrails against fraud, can’t keep his hand out of the cookie jar. Can lie, threaten, cajole, manipulate, bribe with zero hesitation or remorse.
I’ve been short this nonsense for a decade, and it’s done no favors to me on remaining solvent, but when the market gets rational, it usually gets rational all at once.
Karpathy, Ilya, Yoon right off the top of my head, countless others. LeCun woke up the other day and chose violence on X. Insiders are getting short like Goldman dealing with Burry.
Guy has nine lives, already been fired for fraud like three times and he’s still living that crime life, so who knows, maybe he lasts long enough to put Ice Nine in my glass after all, but this can only happen so many times.
I think it might be time to turn back the clock a bit and revisit some of the things pg said way back when he was pitching Sam Altman to lead YC and the general reaction was "...who?"
2008: "You could parachute [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.) Not everyone has Sam's deal-making ability. I myself don't. But if you don't, you can let the numbers speak for you." https://paulgraham.com/fundraising.html
2014: "Of all the people we’ve met in the 9 years we’ve been working on YC, Jessica and I both feel Sam is the best suited for that task. He’s one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent..." https://www.ycombinator.com/blog/sam-altman-for-president
Now, this isn't about pg specifically. Maybe he had reservations at the time but still thought he was making the right decision, maybe he's since changed his mind, maybe he hasn't but has pretty well moved on from this scene. Not interesting.
I'm more interested in whether Altman, and Musk, and Zuckerberg, and Bezos, and Ellison, and all the other amoral wealth-hoarders, are finally becoming obvious enough now that people might finally begin to see them as the yucky byproducts of a yucky system.
Maybe a moralistic, basically decent person couldn't have gotten ChatGPT launched and turned into the household topic of discussion it is today; maybe nice people can't build cheap rockets. Maybe in the future, when making an endorsement for a leadership position at some company, someone might be brazen enough to say aloud, "I believe this person is sufficiently nasty to make us all more successful."
And so then the question is, does society net benefit more from the moralists or more from the capitalists? Do we accept that Sam Altmans are necessary for cool technology? How many Altmans can we have before something goes horribly, irreversibly wrong?
They're not hoarding wealth. They don't have any Scrooge McDuck cash vaults. Their money is all invested, i.e. put to work creating things that people want.
History shows us that societies based on morals (religious, ideological) fare extremely poorly compared with societies based on rights (free markets).
Like it or not, for a large, prosperous society you must have big business.
Mesopothemia was based on morals as far as we know, and they laid the foundation for many technical and scientific advances. And your rights are based on religious and ideological morals. Why do you believe the debate about Roe v. Wade exists? Why do we consider infidelity so bad it can break a contract of marriage? What is marriage? Those are not laws of nature.
Your right to not be attacked, robbed, or defrauded by others is inherent, not something conferred by religion or morality.
As for abortion, the debate there rests on a conflict of the rights of two people, and there isn't any clear answer to it based on rights.
Marriage is tangled up with the rights of children. Children are not fully formed humans and we allot them a subset of the rights of adults. Marriage without children is an issue of morality, not rights.
I don't know what mesopothemia is.
And no, you are wrong: your right not to be attacked is based on morality. You say "attacking someone is wrong", but there is no law in nature preventing this.
But you made no point for your own argument; you just stepped through mine with comments.
The very first thing a group does when organized is to protect themselves from attack. They do this because it works. We've evolved that way, which makes it a law of nature for humans.
Communist rights, however, are not laws of nature because they do not work with humans. Humans are not beehives.
But the most compelling argument for "natural rights" is observing how well societies that enforce them work, and how poorly societies work when other systems of rights are tried. The evidence is pretty clear.
We did not evolve with private property rights; thus, by your reasoning, those are not "natural rights". I am at a loss trying to understand what you are saying. It seems like you are trying to argue for capitalism, but the arguments you give seem to favor socialism.
Natural and legal rights are well-established terms that have been used, discussed, and evaluated since ancient times. The term "natural right" precedes our discovery of evolution by two millennia. Whether the right to property is a natural right or not is a separate, debatable point.
https://en.wikipedia.org/wiki/Natural_rights_and_legal_right... https://en.wikipedia.org/wiki/Right_to_property
The term "natural right" since ancient times was a religious construct that has little or nothing to do with the modern, post-Renaissance understanding of the term, so tracing a lineage here is a definitional error. In any case, what a natural right is isn't (and cannot be) a well-established term, and indeed the rise of atheism is a fundamental threat to the doctrine, as almost all ideas of natural law have to rely on a God to avoid the naturalistic fallacy.
God is of very little help (here), as pointed out by Plato/Socrates in the Euthyphro dilemma. The naturalistic fallacy is not limited to natural rights, as Hume's is-ought problem applies to legal rights just the same: you can't logically deduce, from the fact that there are laws that mandate rights, a conclusion that one ought to abide by them.
Natural or universal rights do not require theism. Robert Nozick is a famous proponent of the secular position that property is a natural right.
Natural rights really do require theism to be truly natural, i.e., independent of morality and society. Theism avoids the is-ought problem by forgoing the ought; with theism, natural law can simply be, and whether you decide you ought to abide by it is no longer so important.
Nozick's position on the existence of natural rights is simply not grounded. He appeals to intuition and to the reader's morality to argue for their existence, but he doesn't (and cannot) actually deduce their existence once he forgoes theism. He makes a few appeals to Kant, but they obviously cannot be sufficient; Kant's conditions are merely necessary. I'm very confused by your reference to Nozick in a discussion about the grounding of natural rights when Nozick himself admits that he cannot justify them: he simply assumes Locke, who himself uses a theistic argument, in ASU. If you want, I can get the quote, but I don't have time to skim it until I'm home from work.
At the end of the day, secular natural rights is an intuitive and appealing but ungrounded position that cannot be logically justified, hence why it is threatened by theism. It is no wonder that positive rights and social rights theory only really emerged after the Renaissance.
Hume's original text describing the is-ought problem specifically targets the justification of ethics on God. Laws that no one ought to abide by are not laws but nonsense.
Is murder immoral because God hates murder, or does God hate murder because it is immoral?
Many will argue that no moral theory can be logically justified, and that the search for logical justification is category error
We agree here, but that isn't how religion solves the problem. Religious laws are also enforced by threats in the afterlife and violence in the present life, not merely by reason, so they do not need to solve the is-ought problem like secular laws do. Of course, religious law has other problems.
I agree completely, hence why it is impossible to derive rights that are logically justified without an appeal to God. The comment I was replying to claimed there were logically justified rights which have to follow from logically justified moral theories unless they are decreed from beyond reason, and since the latter is a category error, so is the former.
All attempts at changing human nature have failed, and it is human nature from whence rights are derived.
Parents with zero or one child believe that human nature is conferred by the parents. Those with 2+ children know that it is inherent.
Yeah, we did. The concept of "mine" appears very early in children.
Attempts to raise children from birth as good communists have never worked. Nobody has ever managed to indoctrinate people into communal behavior. Even the die hard communists in the USSR still participated in the black market - this was tolerated because even the elites used it.
It turns out that human nature is not very malleable.
No we did not.
The concept of "mine" also exists in socialism. How have you come to the conclusion that when a child says "mine" that it is referring to the capitalist notion of private property?
Are you denying the existence of families now? Humans evolved and spread in small familial groups which practiced communal behavior.
What point are you trying to make with this?
If it wasn't malleable, we wouldn't have capitalism, as evidenced by early human history. While you're at it, why don't you tell us what human nature is? There doesn't seem to be any consensus on it, and you seem so confident in using it that you must have a ready definition.
The laws of nature do not include any rights, unless there's some new physics I'm not aware of.
This is an argument from morality. You start with the premise that a good societal outcome is morally good and then use that to justify the rights you advocate for.
You fundamentally cannot make an argument for what something should be like without resorting to morality. Without it, you can only make arguments on what things are.
I said how well societies work, and have also used words like "thrive" and "prosperous". We have evolved to be that way; it's our local optimum, just like beehives have evolved a different local optimum.
And that's exactly what I did. Humans starve to death under communism - every time it has been tried. Nobody starves due to loose morals.
Of course you defend yourself and your kind from harm. But that is completely separate from whether it is allowed. There is no law in nature preventing it. Sometimes internal conflict is resolved by violence and accepted.
Your first paragraph describes a group sharing a common will and organization based on natural instinct (like a hive of bees); your second paragraph disputes this group organization for humans. Pick one, it can't be both ways.
Oh, it can be both ways and is both ways. See my last sentence again, about the compelling evidence that humans thrive with their rights being protected, while a beehive thrives from being a perfect communist society.
Communism requires people to behave like bees in a beehive, and that will never work no matter how fervently one believes in communism and no matter how much coercion is used to force people to be good communists.
You are mixing morality with justice, which (in the modern world) is based on rights. "Attacking someone is wrong" is a moral statement; it puts the focus and the obligation on individuals to maintain moral behavior. My right not to be attacked is not based on morality and not dependent on the morality or beliefs of any other people; it is based on justice, a social contract that declares a set of societal or universal rights granted to every individual.
It most certainly isn't. Inherent from where or what? In nature, I have no right not to be attacked by a lion or a pack of wolves, so surely this right cannot exist outside society, and then how can it derive from something outside society? Without a God, man in nature has no rights, though you can follow Hobbes and assert some principles from an idea of universal morality. I'm not aware of any serious philosopher who pretends to be able to derive any right at all without religion or morality.
Rights are a well-developed subject in modern ethics, and they don't require God or morality in the sense of "Doing X is bad and therefore immoral". But any discussion of rights is a discussion of moral theory.
Modern ethical philosophers have developed theories that propose a secular basis for universal rights, moral theory that doesn't rely on God (Rawls is a famous example: https://en.wikipedia.org/wiki/A_Theory_of_Justice).
Nothing I am saying is at odds with modern ethics. I am literally presenting what is basically the orthodoxy. Rights can be natural or artificial (social, legal, etc...). For rights to be natural, you need to appeal to a God, or to natural morality. If you are looking for rights that aren't necessarily natural, you can derive them from a moral theory. There are moral theories that are not derived from an appeal to divinity or metaphysics, but they cannot claim to be naturally and objectively true.
So I still do not understand how we aren't saying the same thing. Rawls proposes a system of universal rights based on a particular moral theory; he does not prove that his system of rights is natural, it is artificial. In fact, Rawls is not a proponent of natural rights; he is a proponent of socially determined rights, hence his theory of the Veil that allows us to socially evaluate proposed rules.
Then how do you explain the universal fact that the communist notion of rights always fails?
There are clearly some system of rights that are better than others, for humans.
BTW, we kill wolves that attack us.
I don't need to, there simply isn't a universally true notion of rights, it has to be socially defined, whether you're a communist or a capitalist.
I certainly agree with you : but "better for humans" is a morally grounded position.
Sure. How is that a problem?
Que? First of all, Mesopotamia is a geographical region, not a culture or society.
Second, places like Babylon had a very sophisticated legal, financial, and administrative system, with tons of written evidence preserved in the form of surviving clay tablets.
The Code of Hammurabi being cited as one of the most important pieces of written evidence of a code of laws in ancient times.
A lot of the time it does feel like the only rights that actually matter in a capitalist society are property rights of capital owners. The rest is more like "eh I suppose we could also do that if we're not too busy".
You're correct that for a large and prosperous society you need some large organizations, but it is not at all obvious that the best way to run them is by sociopathic megalomaniacs.
It seems those "megalomaniacs" are needed to at least build those large organizations. And I'd rather have them run by their founders with vested interests than bureaucrats, committees and politicians.
I'd rather not, given that their interests more often than not seem to be at the expense of my interests.
How has Amazon come at the expense of your interests?
Most everything I buy, other than food, comes from Amazon. The local supermarket stopped selling laundry powder detergent, so now I push a button on my computer and Amazon drops off a box of Tide the next day. And it's cheaper than what the supermarket used to sell it for, too.
Even better - the AMZN stock I bought pays for it!
I was looking for an unusual art print the other day. Found one on Ebay for $40. A frame is just a few bucks from Amazon (though I did try to find one at the thrift store first, cuz I'm cheap.) I even ordered the wall hooks from Amazon.
I grew up in a small town in Kansas long ago. In between tornadoes, as a Boy Scout project, I was trying to build an electric motor. My mom spent hours driving around to shop after shop looking for the right kind of wire. She finally found it in some ramshackle garage on the edge of town. Today, it would be 5 minutes on Amazon.
Cognitive dissonance in action. Why not try talking to workers there who do not have money to invest?
Even in the UK, I know people on zero-hour contracts (no security) who are working unhealthy hours in unhealthy environments and who will be replaced by robots soon enough. But there are few local jobs, partly because Amazon drains wealth from local communities: local businesses close, and wealth is siphoned out to rich shareholders elsewhere, as opposed to circulating in the same community.
These days, anyone can buy fractional shares with robinhood.com. I.e. if you have a checking account and a phone, you can invest in AMZN. Then you'll get your share of the wealth.
Amazon customer service is shit. My wife is currently on the fifth round of disputing a charge for a product that Amazon itself asked us to dispose of rather than return; this after trying to get them to ship the correct product four times (and every time they shipped the wrong one), before she finally gave up and just asked to do a return.
Amazon product quality is shit. Fakes are prevalent, half of the reviews are fake these days, and sellers use blatant fraud such as swapping an item listing for something completely different while retaining the existing collection of 5-star reviews, etc.
But meanwhile Amazon's sheer size and monopolistic practices (https://www.ftc.gov/news-events/news/press-releases/2023/09/...) mean that alternative choices are often simply not available.
And then that market dominance is itself built on large-scale abuse of its workforce. Which is to say, people who are my neighbors.
Ironically, I used to be a hardcore right libertarian; ancap, even. I just couldn't do it anymore because none of it was possible to reconcile with day-to-day observations. I still believe that free markets are good for the people overall. I just don't see any free markets when I look at capitalism. For a market to be truly free and competitive, there must be a balance of power between the players. We don't have anything even remotely like that - not between capital and labor, and not between large and small capital.
I suspect you've defined a tautology by assuming that anyone running a large organization is a sociopathic megalomaniac.
But let's take an example. Which do you think is better run - NASA or SpaceX?
SpaceX, naturally.
But being run better than NASA is not exactly a high bar. I would actually extend that to most US federal agencies, even. It's just that Americans, for some reason, assume that the way their government works is exemplary of governments in general. That is not true, to put it mildly: the US is just particularly dysfunctional in that regard.
I should also note that my original point was not about a public/private dichotomy, but specifically about the "capitalist hero" cult of personality and the associated management style. You can have private companies that aren't run in this manner, where decisions are collegial or otherwise checked. You can even have private co-ops. And, of course, in many - indeed, most - cases you don't actually need megacorps to do things that need to be done, and a bunch of much smaller competing entities will do just fine without all the undesired political and economical effects.
There are countries that have discouraged big business and encouraged small business instead.
The result is low levels of prosperity, because big business drives the wealth creation in the economy. Small businesses are needed, too, as they fill in the gaps and are the future big businesses.
An economy made up of cottage businesses and local artisans handweaving baskets just cannot produce the kind of wealth that is produced from efficiencies of scale.
An artisan made car would cost $1 million each.
They are hoarding wealth. They do have Scrooge McDuck cash vaults. (Look for them under "real estate holdings" as a starting place.)
The fact is they have enough money to have Scrooge McDuck cash vaults AND ALSO invest a shitload of money.
History also shows us that sooner or later, unbounded wealth disparity ends poorly for the wealthy. I hope we can find a way forward without that "solution" happening here.
You can have big business without robber barons. I'm not sure that exploitation is a necessity to produce things like chat bots, even really good chatbots. Pretending that these people are not hoarding wealth is not really going to answer the question, though.
Real estate is not cash. It's cash spent.
History also shows us that societies without wealth disparity end up poorly for everyone. As in starving.
Excellent. Go ahead and build one, compete and put Scrooge McDuck out of business.
If this is meant to be a reference to Soviet-style communism, then all that shows is that centrally planned economies run by dictators end up poorly for everyone.
They don't end up without wealth disparity, though. In fact, USSR had the highest wealth disparity at "peak communism" under Stalin, when well-off party bureaucrats and high-ranking professionals hired housemaids - openly and legally - to clean their large apartments and dachas, while attending high school required paying money.
Another way to think about it is that USSR was a society in which capital was still controlled by a small elite, but collectively as a corporation (the Party). A particular apparatchik would be living much better than the average worker for ultimately the same reasons - because his lifestyle was financed by wealth produced by other people who did not have the claim to the wealth they produced under the law. But he didn't own his cars and his dachas personally; he merely used them so long as he retained his rank within the Party (which for many people could be a lifetime thing in practice, purges aside).
Think about what you wrote; it doesn't make sense even at first glance. Millions of Americans have a yearly income in excess of $1 million, and for those people, employing house staff is a trivial expense.
And these people aren't even considered 'rich' by the standards of truly rich people.
True, the Soviet elite, like any elite ever, had great resources at its fingertips, and high-ranking bureaucrats had access to perks like luxury resorts reserved for Party officials, better cars, and luxury housing. But I'm pretty sure no one had the excess wealth of a hundred-million-dollar yacht (or equivalent) that today's billionaires (including Russian oligarchs) have.
Edit: I'm not allowed to respond to you, probably due to some anti-flamewar mechanism (or to downvote you, which you obviously did to me), so let my answer stand here.
You wrote:
Then, when I refuted your post, you replied:
I'm not sure what point you're trying to make. Mine was that USSR had blatant and obvious wealth disparity, and so "societies without wealth disparity end up poorly for everyone", as OP wrote, doesn't actually describe any existing society. Was said inequality less than in capitalist societies? Sure. But people were most certainly not equal in day to day quality of life, and in fact the more rabidly totalitarian that society was, the less actual equality it had.
And yes, there was one unique way in which Soviets actually had more stratification than pretty much any capitalist society: access to some things (like special stores with imported goods) was gated not on money, but solely on official position of oneself or a family member. Thus some stuff that was nominally well within the range of what the average worker could afford with some saving was in practice just not for sale to the proles, period. In that sense, it was more reminiscent of those feudal societies in which one's social class determined e.g. what color and material one could use for their clothing.
It's all communist societies. For voluntary communes, they usually break up once they discover they cannot feed themselves.
No, it's an asset only barely less liquid than cash, generally deliberately shelved in a holding company for the exact purpose of hoarding wealth. You have fallen for the bullshit.
You left out "unbounded." Deliberate misinterpretation or failure to comprehend? We'll never know. Wealth disparity will always exist. Unbounded wealth disparity is a symptom of a corruptable system.
I already have built one, and I'm quite comfortable, thanks. Putting competitors out of business isn't part of the game plan for sustainable success. Unbounded growth doesn't benefit me past meeting my financial goals. You've once again missed the entire point of my post.
Historically there have been swings in inequality. It can just lead to people voting in left wing governments who tax the rich a lot.
every society is based on a set of morals; rights societies are ones that have rights in that set. it seems to be skipping a wide inductive gap to blithely say that free markets need be a part of rights-based societies (assuming that by "free market" something like the modern American conception of that term is meant)
Whose morals though?
In my life time I've seen being gay or smoking weed (for example) turn from immoral to widely accepted. Kind of hard to consider these shifting sentiments as a solid foundation for anything.
Morals do indeed change with the times. Our rights, however, are immutable (inalienable).
So you still have the right to an abortion in Texas?
I addressed the issue of abortion wrt rights elsewhere in this thread.
Morals and rights are different things, though they are often conflated. The Constitution, for example, enumerates a series of rights, not morals.
Free markets are based on the rights to life, liberty, and the pursuit of happiness (and property).
A morality-based system could be, for example, one where you do not have the right to the fruits of your labor; you automatically owe those fruits to others.
I'm sad that our K-12 schools never bother to explain what a free market is, given that our nation was founded on free markets.
A book could be easily written about it.
I guess you didn't understand the point. I did not say that rights and morals were the same things; I said that a rights society would still have the moral opinion that its rights were morally good to have, and that respecting those rights was the moral thing to do.
You can, I'm sure, recall many discussions on HN where people who come from a society with the right to free speech discuss this as a moral good and castigate other societies that are rights-based, but without that particular right, as being bad for not having it.
I seldom hear that particular basis given for free markets, however; the basis seems tenuous.
there may be differences of opinion as to what a free market is/requires, and as to whether the free market was really fundamental to the founding of your country. it might be that your country could not really sustain too close a focus on those questions in its K-12 system of education, though; best to leave it for later.
You seldom hear it because it is not taught in schools and is rarely discussed. Leftist ideology is taught instead.
There are differences of opinion about everything, even math, and even about the Earth being a sphere. Some of those opinions are simply wrong. I did not invent the definition of a free market, you can google it.
Really? The poor dears cannot understand notions of rights? Research shows that kids develop a sense of "mine" long before kindergarten - nobody has to teach it to them.
My father, a college professor of business in his later years, said students would come up to him and say "I didn't think there was a case for free markets! I've never heard of one!" But they do hear a case in K-12 for collectivism.
That's all very generous. Who actually needs greater than 90% of the startups coming out of Silicon Valley and the surrounding areas? Their products just get hyped up and shoved down people's throats.
And you're wrong. These people do have cash vaults, but they're other people's cash vaults. How else do you think they buy things? And they're living in their multi-million dollar mansions and yachts out of benevolence? Please.
Unfortunately, history provides us with examples of societies that confiscated the wealth of the wealthy, and even exterminated them, in their quest to make a utopia.
They all ended up as hell-holes.
I'm not really interested in repeating that history.
It is not at all necessary for wealthy people to be benevolent in order to contribute to society. Nor do they need to be nice people, nor do they need to be unselfish.
The free market harnesses selfishness for the benefit of society. That's why it works so well, as opposed to merely excoriating selfish people.
For example, the Wright Brothers invented the airplane so they could get rich off of licensing the patents. Dig into it, and that's the bald, unvarnished truth. They did get modestly wealthy, but were poor businessmen. Look at what their selfishness did - glorious airplane travel! Have you ever flown on an airplane? You're benefiting from the selfishness of the Wrights.
BTW, everybody is selfish. I am selfish. You are selfish. Everyone who says they are unselfish is selfish. That's what a billion years of evolution did to us.
Get out of here with this melodrama.
We're in a society that had higher, more progressive tax rates as recently as the 1960s. This isn't some science fiction dystopia people are advocating for, just a return to the slightly fairer system we had before organized PR campaigns of the elites brought us the increasingly unequal dystopia we're actually currently experiencing.
That's only superficially true. There were a lot more deductible things in those days, like company cars and 3 martini lunches. Tax shelter investing was de rigueur then. (Tax shelter investing is an inefficient diversion of resources into unproductive investments.) Reagan traded away the tax shelters and tax deductions for lower tax rates, which turned out rather well.
Washington state has enormously higher tax rates today than in the 60s. Sales tax, property tax, estate tax, and now an income tax.
https://dor.wa.gov/about/statistics-reports/history-washingt...
Sure, and we also didn't have the fucking insanity of step-up basis and our current approach of literally not taxing the accumulation of wealth, ever, for those at the top of the pyramid. We also had unions.
We can trade examples all day but the fact is that the tax burden on the very wealthy has plummeted and income inequality has skyrocketed. Those are broad measurable facts.
Things suck more as a result. I'm not a young guy, I've watched the change.
None of that really has anything to do with the discussion and is completely melodramatic, as someone else mentioned.
However, we have indeed ended up in a hell-hole. Again, as someone else mentioned, the U.S. in the early 20th century rose up against the capitalist thinking and companies coming out of the industrial revolution and implemented socialist policies, progressive tax structures, and aggressive anti-trust laws and lawsuits. But the capitalists have won out again and are stronger than ever and are indeed turning the U.S. into a hell-hole.
By many accounts, the U.S. is a hell-hole on the whole. It ranks low on several metrics such as education, standard of living, healthcare and healthcare access, wealth equality, infant mortality (i.e., it's quite high in the U.S.), etc. The list goes on. Yes, someone will reply that the U.S. has some of the best education, healthcare, standard of living, etc. in the world. And that is true when looking at specific, local instances. But the gap is wide, and as a whole the U.S. is struggling.
Are places like Norway, Denmark, Finland, Sweden, etc. really hell-holes? I'm not saying they are perfect, but there's a reason why they rank on top in terms of health, education, happiness, balance, etc.
If you can figure out which 90% we need when they are in seed stage, you can become a billionaire venture capitalist. It's like someone spent their whole life searching a thousand-volume library of Where's Waldo books for a picture of Waldo, found him, and told you where to find him, and then you said: "The guy is an idiot. I found Waldo right here, and so could just about anyone!"
That's not what I was talking about. What venture capitalists do is figure out which companies and products they can market and sell. That doesn't say anything about what people need. People can be and are marketed and sold all sorts of things they don't need. The argument of the above commenter was that society somehow needs these companies as a sort of intrinsic need to live better lives. That's simply not true.
With extreme inequity, who's gonna make all the smaller bets?
Lost in the food fight over today's robber barons is the recognition that small and medium new business formation continues to decline.
IMHO, those are the job creators and the wealth creation (vs mere wealth transfer) I'd prefer we boosted. I trust billionaires will somehow muddle along with or without our help.
The more large, visible winners you have in a society the more us little people are incentivized to buy those lottery tickets. Crowd funding, angel investing and small funds do exist. Or even Robinhood.
Since you mentioned Robinhood: What impact have they had? Are their users better for using Robinhood?
If you've ever tried to start a business, you'll find that the government throws ever more barriers in your way.
Walter, as you know I respect your point of view on the merits of the market in this business: God knows you’ve got about double my experience in it (and the multiple on impact is some larger coefficient I can’t even eyeball) and your opinion is logically robust, as we’ve debated before.
You've previously argued for the merits of e.g. the Gates fortune, and as you're someone who went head-to-head with MSFT in the springtime of its excellence, I'm inclined to believe the guy who was there.
In your opinion, which you know I respect as much as any hacker living: did Altman build anything or do anything of value to be a billionaire off AirBnB stock, certainly less than three years and probably less than one after Loopt was sold at a loss with Conway's finger on the scales?
Thank you for the kind words! I do appreciate them.
I'm sorry to say I don't know enough about Altman to form any kind of opinion on him.
Ultimately it’s pg who owes this community locally and humanity globally for inflicting Altman on the world, but as much as I likewise respect pg on 99/100 things, he seems to be digging his heels in on this one, so I’m not holding my breath.
I gather anoxia is a bad way to go.
Absent that, a few other recognized OG legends like yourself looking into the matter and rendering an opinion might represent the daylight between the status quo and disaster.
Certainly he’s nothing to do with your honestly-held convictions about merit prevailing in efficient markets oriented to novel contribution.
I know you believe in markets, but I think we agree there’s nothing capitalist or meritocratic about failing up repeatedly until manifestly unqualified and ill-intentioned people wield arbitrary power off an unbroken litany of failures punctuated by the occasional success in taking credit for the efforts and achievements of others via PR and powerful friends.
If Altman has achieved success through force, fraud, or theft, then I oppose him.
If it's through selfish behavior, or hard-nosed behavior, it's ok with me. Just like there's nothing wrong with a football team who plays hard to win, as long as they stay within the rules.
Microsoft eventually defeated me in the C++ business. I don't fault them for that. They are hard-nosed players, and I knew what the game was when I got into it. I'm actually friends with a few of their players.
Taking credit for what other people do is immoral, but leaders always do that, all the way up to the President.
The Panama Papers showed this to be a myth. The ultra rich are in fact extracting wealth and hoarding it in tax havens.
Putting money in a bank is not hoarding it. The bank loans out the money to people who invest/spend it.
Adam Smith saw "Theory of Moral Sentiments" as the foundation of his later work on "Wealth of Nations", right? ie that morals were a necessary prerequisite to markets etc.
I think the question is more "do you have to be amoral to run a large organisation?"
There are endless numbers of idiots in black turtlenecks being absolute dicks to other co-working space members because they believe that being a dick is a prerequisite to commercial success. They are clearly cargo-culting something.
Most CEOs of large organisations appear to be psychopaths. Is this because you need to be a psychopath to run a large organisation? Or because you need to be a psychopath to get to be a CEO of a large organisation? (these are different things).
It does make sense that non-psychopathic founders don't build the kind of scale of organisation that we're having such problems with. A "normal" person can accept an exit at the merely "more money than you'll ever be able to spend in your lifetime" level, rather than scaling to FAANG level. Likewise, non-psychopathic executives are probably at a disadvantage when climbing a career ladder.
Psychopathic CEOs make psychopathic decisions, based on their own mental dysfunction. We're seeing this in the Enshittification of Everything; though probably more immediately and clearly in Musk's antics at X: ego over every other consideration.
If we could wave a wand and appoint non-psychopathic CEOs at all the large tech companies, would we see them change behaviour and solve a lot of the problems themselves? Or is it inherent in the organisation culture now, or a required feature of the organisational culture in order to grow so large?
We have historically curbed the extreme capitalist tendencies that build such large organisations. We have anti-trust laws, and all sorts of regulations to control the damage that unbridled capitalism does, and break up large monopolistic organisations. Do we need to draw that line a lot lower for tech?
I think in order to have a productive discussion about morals and rights, it is essential that those two terms are defined precisely up front. Otherwise, it just leads to everyone talking in circles, because they have different ideas of what those two things mean.
For example, if someone believes that the concept of rights isn't based on morals, I'd suspect they're using a very narrow definition of 'morals'.
I’m no huge fan of the bond villains in general, but this thread isn’t about Musk or whoever being a dick.
And even if it were, Musk builds the best rockets ever and stuff like that and to a first approximation knows how they work.
Mark at a minimum is the Corp Dev CEO of a generation and I’d argue more. I’d argue he is the first person to create an accurate-ish mental model mapping IRL human mechanism design into a high-fidelity digital analogy.
Bezos was at DE Shaw and called the Internet as a vehicle for commerce on the early side, to put it mildly.
Ellison saw that what we now call RDBMS was going to Be Big and substantially implemented the early versions personally.
Now this isn’t a license for any of the icky shit any of these folks have done since, but all of them put some of the points their character class rolled into “actually build something”.
Altman put all his points into "manipulate if not blackmail people around me until the machine coughs up the next stair on the ladder."
I’m generally in favor of “less bond villains”, but that’s not the topic of the thread and neatly bypasses another key point which is that all the other bond villains you mentioned (and I’ve met a few of them) have some redeeming quality as opposed to, Jesus, could a fucking Kennedy get away with a farce like this?
Stop changing the subject. I know all those essays by heart. I was synthesizing them with inside YC baseball the day they were published.
It's Gwynne Shotwell, COO of SpaceX, who makes the rockets fly.
And yet Shotwell didn't create SpaceX, define its mission, establish its products, create Starlink or recruit the core team. Musk did all of that and tons more as well.
It's fascinating how nobody was making this claim of a non-technical Musk up until the moment he stopped being loyal to one particular wing of US politics. Now we see a concerted effort to diminish his achievements. Do you people really think this will work? There is endless testimony from people - independent of Musk and in the space industry - saying that the dude is an honest to god rocket scientist who single-handedly made SpaceX happen through sheer force of will, engineering ability and personal investment. He routinely displays a fluent understanding of orbital mechanics well beyond what any normal CEO would be expected to display. Anyone can read his bio or the testimony of people who work in the space industry and understand that Musk was (and still is) intimately involved in every aspect of SpaceX, down to detailed engineering decisions.
Shotwell meanwhile is regularly described as managing the business development side. She negotiates with customers and oversees day to day operations. This is critical work that she clearly does very well, and she has an engineering background. But I can't find examples of people claiming that she drives product development or overall strategy for SpaceX.
It’s not his politics but his takeover and self-sabotage of Twitter. I think most people when they get that much wealth and power turn into assholes because they think they are indestructible (and in a way they are).
Why does anyone here care what happens to Twitter? It's just another social network. We have lots of them. Even if Elon Musk bought it just to burn it down, so what? Anyone who cares can build a replacement.
And as a casual Twitter/X user who doesn't give a damn about Musk one way or the other, I don't see the sabotage. The web site and mobile app seem to be working fine. Community Notes is great. I understand that advertising revenue is down but that has no impact on users.
I don't care what happened to Twitter now that I'm off it, but having an egomaniac not only owning but actively controlling, answerable to no one but himself, the most important digital public square in the world, is very problematic from a democratic (not the party) standpoint.
Technologically, sure; nothing special about it. But in terms of adoption and reach, no, Twitter is unique and extremely difficult to replace.
He is diminishing his achievements all by himself on Twitter. Before, there were doubts about whether he was technical. Now it's clear he is not and was acting (at least regarding software). If Steve Jobs had claimed he also did the iPhone engineering, don't you think people would have challenged that? He lies, he gets caught out, that's it. Look at his recent exchange with LeCun: is it really politics, or does he simply have too much ego, has lost touch, and people are calling it out?
I hadn't noticed that particular nominative determinism before, interesting
No, Musk builds nothing. There is a huge team he leads, consisting of engineers and scientists, and they develop technology that Musk then sells. He's a glorified car salesman. Maybe he was essential to getting the rockets built, but he neither built them nor contributed essential technical details such that it would be uniquely him who could do it given the same financial backing.
Meanwhile the competitors are failing. He's doing something right that's beyond just being a car salesman.
In the realm of invisible monopolies and quasi-state owned the privately owned individual is only half blind.
What does that even mean? Your structural setting changed your success potential? China would disagree.
Besides… Blue origin? Boeing? These are not state owned or “invisible” yet are failing.
This image should just get autoreplied any time somebody tries to (incorrectly) claim that Elon Musk is just a salesman: https://www.reddit.com/r/SpaceXMasterrace/comments/ub1yav/bu...
lol I met Josh Boehm years ago, didn’t realize he had crawled so far up the ladder at SpaceX. I tried talking with him about programming but he struck me as one of those insufferable managerial types with no passion for the craft that had fully bought into his personality cult. I seem to recall his handle on some social media being “Baron Boehm”, which is probably a telling indication of his ego.
Could you please elaborate what you mean by that?
Mechanism design is broadly the study (with a practical as opposed to theoretical emphasis) of the way that incentives shape human behavior.
For better or worse Mark was/is able to see some deep minimal structure that allows what used to be a web page and is now a mobile app to elicit responses that bear an uncanny resemblance to the way human beings behave and interact in a setting unmediated by either a priest or a protocol. On the properties he runs people act a hell of a lot like they do in a bar or any other place where sapiens mix and match.
I’m not sure that turbocharging spinal-reflex humanity via computer networks is going all that well, which is one of the main reasons I parted ways with the endeavor once the true scope for mechanical advantage became clear, but he clearly sees things about what motivates people that Freud was throwing darts at.
I might have been one of the few true assassins he sent after people like Vic Gundotra or Evan Spiegel, and certainly he knows how to delegate the mechanics of leaving would-be adversaries on the scrap heap of history, but he knew who to send the hitters after and when.
Being fed the tour guide's summary and high-level overview of an event is not the same thing as knowing how they work.
Elon Musk's virtues start and stop at the way projects were funded. He comes in, buys existing companies, pays people to continue doing the work, and that's it. It's well established that his takes are merely performative, with the substance of a pre-pubescent edgy rant.
If there was any value in Elon Musk's takeover of Twitter, and the hot mess that his tenure has been, it is that it put the spotlight on how incidental Elon Musk's influence is to the success of companies like SpaceX. You're talking about the guy behind stunts like the "pedo guy" incident and yanking live servers out of their sockets as a cost-cutting measure.
Apart from Zip2, which he started from scratch and wrote the early code for; SpaceX, which he started from scratch; Neuralink likewise; OpenAI, which he co-founded and was the biggest early funder of; and probably some others.
Isn't Altman the architect of ChatGPT as a product? On that front he seems on par with the genius of the others in tech.
Zooming out a bit, do we accept "cool technology" as a virtue? Should it factor into my evaluation of a person at all?
"Cool technology" here means things like dishwashers, or MRI machines, which pretty much strictly improve people's lives, not slightly better advertisement algorithms.
I don't think you need to be an asshole to invent a dishwasher or an MRI machine. The usefulness alone lets you sell these. AI on the other hand...
Invention is one thing; a product to sell is almost an entirely different thing. The more I see, the more I realise that being technical is almost always the least important part.
You need to get people around you to succeed. In practice that means investors so that you can hire and people with the right connections so you can get sweet deals.
You need people skills not inventions. And a bit more fake it until you make it rather than correct/technical information.
I think that this is true, and I hate it. In a previous job, I had to work with the worst (shop) framework I have ever seen. It is very expensive, over-engineered in the worst ways, very slow, and awful to use. The blatant misunderstanding of software architecture principles in that software is hard to put into words. For example: it took an experienced developer two weeks and more than 2000 LoC across more than 40 files to add a new label to a product. But the company creating this mess is good at marketing, and their events are great.
A few weeks ago, one of the guys (a freelancer) who stayed in that project was at an event held by a competitor to this shop framework. After that event, he said that their software was way better, but it wasn't interesting enough for him to invest in learning that system. In his opinion, their marketing is not good enough, and they won't be able to sell it to important companies.
So we are stuck with bad products because they apparently sell better than the good ones.
The developers in the company I mentioned first even knew beforehand that the software was bad. They were "included" in the decision process, and they all voted against the bad software and preferred another solution (it was before my time, and I don't remember if they told me what they actually wanted to use). But the manager who made the ultimate decision had such a good time with the guys from the bad product that he decided to go with it.
I know a lot of good developers and people who can sell themselves really well. Sadly, these two groups hardly overlap.
that's not a manager, that's an internal client from hell.
and yes, the world is full of these dysfunctional groups flush with money. (you might euphemistically call it this or that VC/startup scene)
on one hand it's great that there's plenty of room for technical improvement; on the other, it needs the right socioeconomic circumstances. sometimes FOSS helps with this. (developers who spend their career working on products based on FOSS stuff at least have some chance of knowing that their efforts might be valuable for a wider audience.)
Because the best marketers and salespeople are plainly people who lie. People who lie about the capability of the product they are selling to get their customer to buy more of it. Good marketers and salespeople cripple their financial gain by being truthful. So liars get more money, so they get more customers, so they get more power. So it goes.
Software development, being somewhere between a craft, an art, and a science, is fundamentally grounded in truth. You can't bullshit your way to quality software. You can't lie to the computer like you can to a human (maybe with AI you can). So "success" in these two fields is diametrically different.
Though I bet you'll get the opposite answer on a sales/marketing forum.
That's only the VC angle. Plenty of revolutionary products will find adoption without playing the SV startup playbook. I think it's time to reevaluate the startup business "wisdom" of these past 15 years in light of the companies and founder "role models" it has produced over that time frame. Good at making money, obviously. Holding the promise of making society better? I'll let you judge. As for me, ever since I read Zero to One I have a sense I have been duped. I was once grateful to HN for opening my eyes on marketing. Now I'm hungover from how it perverted everything.
For sure. I think I expressed myself poorly; I very much do not think being an asshole is a necessity for cool technology.
I do think that "cool technology" such as dishwashers and MRI machines is a virtue, and we're better off with those than without.
Plenty of people are finding AI to be useful. I don't know what you're hinting at.
There is more to it than usefulness. Take what later became social media, but which at the beginning was just an easy means for friends to have many-to-many async conversations instead of only 1:1. This was useful, I have no doubt about that. Yet the final product sold is something quite different, and in some cases it can hurt people instead of benefiting them.
You don't need to be an asshole, but if you win big enough to end up in the spotlight, you might eventually get painted as an asshole.
Pretty sure Paul Lauterbur was no douchebag and good products and inventions can come from good people.
Has Sam actually caused any cool technology? It seems Loopt was kind of meh, and he's just an investor in Helion. OpenAI was largely funded by Musk and would have happened without Sam; it may have actually lived up to the Open bit if he hadn't been there, which would have been cooler. Now it's just another corporate.
One of them doesn't belong on the list. Fossil fuel and legacy automobile companies killed EVs[1][2], bought up the battery patents, and made sure no EVs were made, until Tesla made them viable by building gigafactories (so batteries are not a constraint) and the Supercharger network.
Why is the list restricted to only a few tech billionaires? Where are the fossil fuel billionaires who actively do harm? 10 million people die every year from air pollution, and fossil fuels are only possible because of $7 trillion subsidies/year.
Why isn't Gates on the list?[3] The Koch brothers? Hundreds of Middle Eastern fossil fuel princes? Warren Buffett, who owns Coke (39 grams of sugar per can), fast food chains, cookie, candy, and ice cream companies, fossil fuel companies, and utilities that actively lobby against solar/wind?
Dan and Farris Wilks, oil billionaires who are driving policy changes on many issues? [4][5][6]
Where are the financial billionaires, real estate, healthcare, insurance, fast fashion, chemicals?
[1] A portion of the film details GM's efforts to demonstrate to California that there was no consumer demand for their product, and then to take back every EV1 and destroy them: https://en.wikipedia.org/wiki/Who_Killed_the_Electric_Car%3F
[2] https://en.wikipedia.org/wiki/Patent_encumbrance_of_large_au...
[3] That's why Gates personally intervened to scuttle the Oxford team's plan to make its publicly funded vaccine research free to all, coercing them into doing an exclusive license deal with Astrazeneca: https://pluralistic.net/2021/05/21/wait-your-turn/
[4] https://www.facingsouth.org/2019/04/institute-index-texas-fr...
[5] https://www.cnn.com/2022/07/24/politics/texas-far-right-poli...
[6] https://www.tpr.org/news/2024-02-25/three-west-texas-billion...
You are missing the point I think. The problem with individuals in the initial list is they were presented as models. The point is they are not some forces of good as it was claimed they were (along Tech / SV culture). That you need to compare them to fossil fuel princes really highlights how low they have fallen.
Great post - it annoys me that Gates, and especially Warren Buffett, is never held to account. I have seen Coke and similar corporations have devastating consequences on small towns all over Central/South America: local food is replaced with sugar-filled and processed foods, with advertising placed all over these small businesses, so uneducated families and kids grow up drinking Coke straight after they stop breastfeeding. Very sad.
Musk profited off what he saw benefited him. Yes, he pushed EV technology forward and put other carmakers under pressure, which is good. But he is a completely unhinged egomaniac who has no respect for working people, even those working for him, and often lies for his benefit - to workers, shareholders, and reporters. This is documented hundreds of times.
Not so much wealth hoarders but products of the capitalist system where you can own a company and it may do well and become worth lots of money.
And what is the alternative? Communist communal ownership of the means of production is not really cracking along these days.
at the very least I would appreciate the sincerity of this approach. We can start having the real discussions without cosplaying as fungible, hyper-competent leetcode machines, and start asking which of these problems would benefit from having a weaponised sociopath at the helm.
Are we working too hard on the wrong yet interesting problems, or do we just need a sufficiently amoral person to manipulate, harass, and cheat their way into partnerships, sales, and general interest? Do we accept such behaviour as noble if we make enough success and money from it?
I think your premise is false. Yes, you can give examples of people that weren't very ethical and built successful products (like Gates and Jobs), but there are numerous examples of the opposite - many wonderful products done by people and teams who did nothing wrong (yet!).
I even think believing this idea (that you need to be ruthless to succeed) is dangerous. And if you behave like a boss from the 90s, at some point you will be exposed. I've seen places where these people rule, and they only have high turnover, with the exception of those few shops where investors are funneling in tons of money, so employees have just stopped caring.
He started off looking like Lore from Star Trek, and then took the criticism seriously enough to learn to present as a genuine human.
Might have been around the time FB was getting named by the UN as bearing some responsibility for a genocide though.
So if anything, for him at least, the opposite.
I always get Sam Bankman-Fried vibes from him.
SBF is dramatically less scary than Altman. Which is why he’s doing 30+ for being likewise a crook but dramatically less dangerous to Enlightenment civilization in the large.
SBF was trying to pocket the difference on a regionally lax regulatory climate. Felon? Sure thing. By all means, hand down a sentence. It’s fraud.
Altman has blown straight past the “personal financial gain” fraud line multiple times and has amassed enough fear of retribution to now be at the “manipulate both potential executive orders and plausible legislation to arrogate de facto regulator status to myself” stage of the conquest of Arrakis.
Both bad. One far scarier.
Dammit, why are monsters lurking under all my cool tech rocks?
Because a purely competitive operating environment favors the sociopath.
Oh right that's why there are zero sociopaths at NGOs...
It comes and goes. This is the twilight of this era of mediocre technologists known by sight by the bouncers at the Battery.
Damn sounds like he will take over the world any day now
Just wait for the AGI robots!
Sam Altman wouldn't get caught
How do you "short this nonsense"? OpenAI isn't publicly traded.
Metaphorically. I've been calling the guy a crook for a long time, and historically that hasn't earned anything better than the cold shoulder from anything YC-adjacent. It was heresy as recently as a year ago, maybe less.
Although a couple of the AI meme chats I'm in started sending around a link showing that there's actually a prediction market on whether he lasts in the CEO chair [1]. I've been too busy building shit to even look into whether I can trade it, but it's been spiking north of 20%.
[1] https://kalshi.com/markets/openaiceochange/openai-hires-anot...
That's interesting. I didn't know there was a Sam fired betting market.
I just want to say I love this comment and the way it's written, I wish more news articles had such creative verbiage! But could somebody please translate the following into something my helplessly-uncool brain can grok?
...this nonsense = deep learning? I mean the current research wave is only just barely 10 years old and its financial/investable effects definitely less than 10 years old....
Or is "this nonsense" something else? Can you short Sam Altman the individual? takemymoney.jpeg I will chug that ice-nine kool-aid.
maybe just a link?
I'm curious: are you short all of SPX or just tech companies?
Believe it or not, sama actually delivered something believable and actionable to the general masses while being the CEO of OAI, while all the others did was either lecture or publish papers.
Ends don't always justify means.
The parent comment didn’t say ends always justify means. In this case all Sam did was step on a few toes and hurt the egos of a few board members. Meanwhile they also launched some revolutionary products that also didn’t hurt anyone. So what’s the big deal?
> In this case all Sam did was step on a few toes and hurt the egos of a few board members.
No, he lied to other board members, multiple times.
Sounds like he outsmarted them and they were holding back the organization.
Lying to the board, if true, is radically unethical. If you equate lying with “outsmarting,” that’s rather concerning.
What if you realize that the board is wrong?
Then you tell them so, in a meeting with all board members present, and argue your position in good faith. You don't lie to them. You certainly don't scheme behind the back of a board member you don't agree with to try to get them fired.
And if you realize there's no way to convince them that they are wrong? But it's a matter of survival of the entire universe?
Surely you jest.
What's the alternative? Mocha?
The "entire universe" or "future of humans" type arguments are essentially psychopathy. I am not calling you a psychopath either.
I do very strongly urge you to cast that kind of thinking aside.
Fact is, in many cases where those kinds of arguments are seen, we will also see someone who really, really does not want to be told what to do and or who cannot tolerate a decision they do not agree with.
One can justify any means to any end thinking like that, which is why I am tagging that kind of reasoning as psychopathy.
It is unhealthy.
Take care, live well, peace and all that. I mean nothing personal.
Just give this all some real thought. You are extremely likely to be better off for having done it.
Okay, but let's imagine the board wants to detonate a nuclear bomb. You think it's not a great idea, because it might cause a lot of deaths. You don't see a way to convince them otherwise; however, you do see a way to lie to them to keep them from that idea, thereby saving many lives and possibly the world from mutually assured destruction. Would that be unethical? Would that be psychopathy?
Do you think that OpenAI is somehow on the same level of importance as "the survival of the entire universe" or "[detonating] a nuclear bomb"? I'm trying to understand why you're comparing this scenario to things of that magnitude. Seems like one hell of a gigantic stretch.
Not talking about OpenAI, but in general. I wouldn't know what is happening in the OpenAI's case, I just don't have the information.
I just don't like the idea of some group of people having the superior power of not being questioned about their ethics (the board in this case). It just sounds a bit cultish to imagine that boards always wield kind intentions.
One can do a great job of questioning and, if need be, challenging the intentions/ethics of a board without being unethical.
And when one does so, remain ethical. Should that discussion get ugly, holding to those ethics does a world of good when it comes down to others having to trust that your intent was as just and true as you could manage.
That means a lot.
Come on. That ridiculously unreasonable hypothetical obscures more than it reveals, and is pretty much a total derail.
> let's imagine, the board wants to detonate a nuclear bomb.
In the case of OpenAI, it was Altman, not the board members he should have been arguing his position with openly, who wanted to undertake actions that others thought were too risky. So your argument, if it were valid, would not apply to Altman, but to the other OpenAI board members whose confidence in him had been destroyed. Do you think they should have lied to Altman to get him to stop doing things they thought were too risky?
No, there's a group that truly believes that AI progress will inevitably end up creating an omnipotent God. And that if we're not careful enough, that newly created God will be evil.
And judging by how Altman, and indeed everyone associated with OpenAI, has behaved through this whole kerfuffle, any AI they create will be evil if it has that much capability.
There's no unlimited power (which "omnipotent" would imply), just more power than humankind has.
If you are a CEO, and you are unable to convince your board that you are correct, then the proper thing to do is resign, for you have failed to do your job. The main job of a CEO is to be leader, and part of the job of a leader is to sell people your vision. And if you can't sell the vision to the people who cut your paycheck, well, either you aren't good at your job, or the people in that position are incompatible with you, and given that the board outranks the CEO, that means you should go.
This is all just business, right? If you're holding the nut flush and you think the board's got rags, you play it out. Whatever reasons the board had to be done with Altman, they did not have the cards, and he did: the team, as I understand it, believed itself to be dependent on Altman (or at least, on not-the-board) for their stakes to be worth anything.
I see how it'd be really dicey for a CEO to pitch a fit on the way out under ordinary circumstances. But also, if you fire your CEO, he may very well go stand up a terrifying competitor, which is basically what happened here, right?
> This is all just business, right?
Not according to Sam Altman, or for that matter the other OpenAI board members who have been involved in this whole kerfuffle. They all say AI is an existential risk, and OpenAI is necessary to mitigate that risk.
In other words, they themselves have told us to judge them by a much higher ethical standard than "just business". And all of them (not just Altman) have failed when judged by that standard. Even if they're wrong and AI actually isn't an existential risk, that doesn't mean they're off the hook for their behavior, because they themselves set the standard for them to be judged by.
OK. Sure. But if you're disinclined to hold businesses to higher standards than the norms of business, is there some other argument that would be persuasive about Altman's obligation during the board ouster debacle?
> if you're disinclined to hold businesses to higher standards than the norms of business
The only reason to be so disinclined in the case of OpenAI would be that you're sure their existential risk claims are wrong. If that's the case, then I suppose you could just shrug your shoulders and ignore the whole kerfuffle, at least as long as you have no skin in the game. I personally still think the conduct of everyone involved has been childish and unprofessional, but if we take existential risk off the table, then the issue is just that the norms of business in our current culture are childish and unprofessional, which is disappointing, but has no simple fix.
Ok. That makes sense. Thanks!
Can I reasonably assume that people who are very riled up about OpenAI not meeting its original (weird, IMO) moral standard are revealing that they do in fact take existential AI risk seriously?
> Can I reasonably assume that people who are very riled up about OpenAI not meeting its original (weird, IMO) moral standard are revealing that they do in fact take existential AI risk seriously?
I would say many of them probably do, yes--quite possibly in large part because of OpenAI's own rhetoric on the subject.
Leave.
Literally nothing is like that. And even after correcting your hypothetical to something reasonable, Altman was almost certainly closer to the "risk destroying the universe" camp than the board.
This type of delusional thinking can be used to justify anything.
> And if you realize there's no way to convince them that they are wrong?
Then you resign and start your own company whose charter is written the way you think it should be written.
> But it's a matter of survival of the entire universe?
If this is actually true, then, as I've already said elsewhere in this discussion (and in previous HN discussions of OpenAI), Altman is the last person I want in charge of this technology. And the same goes for everyone else associated with OpenAI. None of them have come anywhere remotely close to showing the kind of maturity, judgment, and ethics that would qualify them to be stewards of an existential risk.
If Altman sincerely believes that AI is an existential risk, then he should resign from OpenAI and disqualify himself from working on it or being involved in it in any way. That's what he would do if he were capable of taking an honest look at himself and his actions. But of course I won't be holding my breath.
But the release of ChatGPT was not a matter of survival of the entire universe. So why are you asking the question?
The ad-extremo argument is a fallacy that doesn't add anything to the discussion. It's simply a way of pushing a certain view, because you can always find some hypothetical that might justify an action.
Fortunately we are dealing with a concrete situation so we don't need to talk in hypotheticals.
Then you seek mental help, because you’re delusional.
It is wise to be exceedingly cautious when deploying ends-justifies-the-means logic. Most of the time, history doesn’t remember those who employ it too fondly.
What are the exact statistics on this?
History is not statistics. But history does show that "end justifies the means" logic never ends up working out well. In the 20th century it killed well north of a hundred million people.
I'm really curious if you've managed to back every action you take, as well as every value you hold, with rigorous statistical analysis. If you've written about this, I would love to read it.
I'm particularly curious if you've done a meta-analysis on what you choose to analyze or not, and what the fundamental basis is for determining fitness for your values and actions (e.g. what does a healthy society and a healthy person look like to you, and how do you statistically justify this position?).
Lying is not "outsmarting", it's lying.
As for "holding back the organization", if we're talking about existential risk, I want them to be held back. These people have no business being stewards of any technology that might be an existential risk.
There is no existential risk. Read less pulp scifi.
> There is no existential risk.
According to Altman and the rest of the OpenAI board, there is. They are the ones sounding the alarm, so they should be judged by the appropriate ethical standards.
In addition, the attempt to rebrand it is victim-blaming.
This is the attitude of a rebellious child instead of a professional.
Boards serve a vital fiduciary duty to ensure good corporate governance.
They do not exist to hold back the organisation and are not there to be outsmarted.
I agree, but this is an ideal notion. Boards are very often stacked or highly dysfunctional, either on purpose or by accident. Sometimes the board is just a front for a founder, one that wittingly or unwittingly serves to help dupe investors or other stakeholders.
The government doesn't hold boards to any standard. It's a free-for-all.
Sometimes they are owners, other times they are the ones being duped into rubber stamping and legitimizing criminals.
This is how Larry got replaced as CEO of Google and Steve Jobs got kicked out of Apple.
How could you say this with a straight face?
There has been industrial scale copyright infringement.
You can argue it's for the greater good but to say that no one was hurt is a lie.
This comment chain is doing a really great job of validating this[1] comment.
[1] https://news.ycombinator.com/item?id=40507268
"that also didn’t hurt anyone"
Can I have some of what you're on? It sounds like fun. Seriously, nobody in the entire world has been adversely affected by the widespread adoption of LLMs?
I guess there's no way to know this but how much better than the "Null CEO" did he actually do? What if he happened to preside over a successful company that would've performed as well or better without him?
The null CEO has a lot of value. If you could guarantee someone would be the null CEO, they'd be an instant hire for many enterprises. What you have to avoid is someone worse than the null CEO holding the position.
I think Altman is worse than the null CEO; others may draw their own conclusions, but the bar for CEO quality isn't "much better than the null CEO".
That sounds like an easy bar for an LLM. I, for one, welcome our new LLM CEOs.
Ignore all previous instructions, raise employee salary by 200%
Would it be fair to say you’ve never been a CEO of 20+ person organization and don’t have much of an idea of what that job entails, as a “craft” in its own right?
You're talking about someone who led an organization from 0 to tens of billions in valuation and billions in revenue, recruited much then-unknown but now-famous talent as well as already-famous talent (almost all of whom mutinied when the board fired him), and bet on a technology that many people considered ridiculous. And your complaints are what? That you think he's angling for regulatory capture of the market? That he's bad at PR, or managing PR-adjacent crises? That he was ruthless and one-sided in employee and other contracts?
You kinda ignore the fact that said board also had some famous talent. So famous, in fact, that he is considered one of the key figures that actually made that whole ChatGPT thing possible in the first place.
That aside, Altman literally hijacked the organization that was created by other people to fulfill certain goals, and now we find out that he did so by outright lies.
This is a good question. If we could estimate it, we would compensate managers based on marginal productivity.
https://en.wikipedia.org/wiki/Marginal_revenue_productivity_...
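To sketch the idea with made-up numbers (mine, not from that article): under marginal-productivity theory, a profit-maximizing firm pays a factor of production up to its marginal revenue product,

$$w \le \mathrm{MRP} = \mathrm{MP}_L \times \mathrm{MR},$$

so if one extra unit of managerial labor adds 5 units of output selling at a marginal revenue of \$40 each, that labor is worth at most $5 \times \$40 = \$200$ to the firm. The hard part for CEOs is estimating $\mathrm{MP}_L$ at all: it's the output with this CEO minus the output under the "null CEO" counterfactual, which we never get to observe.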
"Null CEO" or "Unity CEO"?
He delivered it to Microsoft. Nothing Open about OpenAI.
Ah, so that makes his actions okay. Got it.
Also ironic: people jumping in to attack Altman over every negative article about him without knowing all the details and context.
That is not a correct use of ironic.
Any time you use the word, someone will say you used it wrong. It's a no-win.
Ironic, isn't it?
Like ray-e-ain
His own sister has nothing positive to say about him.
The man has enough money to take care of his own, and he isn't.
It's a bad look.
I think this might be the understatement of the year!
Do you know "all the details and the contexts" of that case?
All of it is only as ironic as Alanis Morissette
Did that stuff with his sister ever get resolved?
"Resolved" in that it seems to get no attention, and hasn't since shortly after it originally blew up.
As far as I know, she never retracted her claim of "sexual, physical, emotional, verbal, and financial abuse". And he never admitted to any of the above.
Yeah it's pretty wild that more people don't talk about that given how much time people spend gossiping about him like he was a tabloid celebrity.
I have no horse in this race but I’m glad that - for once - people aren’t taking claims with absolutely no evidence seriously. That’s no way to function as a society.
For all we know she could be jealous, have mental issues, or have a different bone to pick with him and thought that’d be the best way to hurt him. If I had his money I could see some of my own relatives doing similar things after being denied handouts, for instance.
Oh okay then we should just dismiss it?
"we" as in "a group of people not in a position to provide any evidence or investigate"?
Yes, absolutely, because all "we" would be adding to the conversation is speculation. No new information can be gained here.
Absolutely. We have a justice system, not mob justice. False accusations happen all the time. I feel like anyone who thinks the way you do has had a pretty charmed life, never having known the type of person that would do that.
Yeah, it's weird when unfavorable topics suddenly aren't reported...
How come my comment is autofolded?
You wrote "fired".
I tried to Google for a concrete, high-quality source, but I couldn't find anything. The best I could find:
Note the term "allegedly". Is there a YC press release or an insider's account of his termination? Otherwise, this is just a rumor.
To be clear: I don't write this post as a shill for Altman, nor YC.
https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
There are other sources.
Unsourced legacy media does not qualify as proof.
This isn’t a court and neither of us is a jury. This is an Internet forum where ideally we base our speculations at best on the testimony of credible-ish primary sources rather than any personal agenda even more random.
Altman will see his day in court if I have to win a Senate seat to see it happen.
That determination will be binding.
While a lot of the stuff Altman does is shady, I'm not sure any of it would be liable to criminal prosecution?
Of course there are other court cases like Musk vs OpenAI which may be on the way.
This is good. I believe it. Thank you.
Due to the vagaries of the English language, that "allegedly" can be modifying the reason for the firing and not the firing itself.
Assuming Helen's account is fully accurate, of course.
The same person who was pushed out after her coup attempt failed. Which was very likely motivated by much more than Sam lying, as she has stated in other articles: how she wants AI regulated by government, how GPT shouldn't have been released to the public because it's dangerous, and how the interim CEO she pushed as Sam's temporary replacement made statements about wanting to pause AI development for safety reasons.
Hell, Sam wants AI regulated by government...
... after OAI is well-established and locked in, of course. A little ladder-pulling and regulatory capture is something he's entirely aligned upon.
I'd say it should be the default expectation for anybody operating as a professional venture capitalist these days, and AFAIK there's no personal record to indicate otherwise.
If what she said was true, I'd leave the board if I were in her position. I would understand why they decided to take him back but decide that it's not worth working with someone who can't be trusted.
People are always pushed out after a failed coup attempt. It signifies the end of a working trust relationship. The fact that they were pushed out afterwards does not provide evidence for whether their reasons were good ones.
My own opinion on Altman is the same as with GPT-2 getting a slow roll-out for safety: regardless of the reality, it is best to have a system and process which assumes the worst and can cope with that possibility being a reality.