Ilya Sutskever starts Safe Superintelligence Inc

eigenvalue
33 replies
1h9m

Glad to see Ilya is back in a position to contribute to advancing AI. I wonder how they are going to manage to pay the kinds of compensation packages that truly gifted AI researchers can make now from other companies that are more commercially oriented. Perhaps they can find people who are ideologically driven and/or are already financially independent. It's also hard to see how they will be able to access enough compute now that others are spending many billions to get huge new GPU data centers. You sort of need at least the promise/hope of future revenue in a reasonable time frame to marshal the kinds of resources it takes to really compete today with big AI super labs.

PheonixPharts
9 replies
47m

compensation packages that truly gifted AI researchers can make now

I guess it depends on your definition of "truly gifted" but, working in this space, I've found that there is very little correlation between comp and quality of AI research. There are absolutely some brilliant people working for big names and making serious money, but there are also plenty of really talented people working for smaller startups doing incredible work but getting paid less, academics making very little, and even the occasional "hobbyist" making nothing and churning out great work while hiding behind an anime girl avatar.

OpenAI clearly has some talented people, but there's also a bunch of the typical "TC optimization" crowd in there these days. The fact that so many were willing to resign with sama if necessary appears to be largely because they were more concerned with losing their nice compensation packages than with any obsession with doing top-tier research.

kccqzy
4 replies
35m

Two people I knew recently left Google to join OpenAI. They were solid L5 engineers on the verge of being promoted to L6, and their TC is now $900k. And they are not even doing AI research, just general backend infra. You don't need to be gifted, just good. And of course I can't really fault them for joining a company for the purpose of optimizing TC.

raydev
0 replies
1m

their TC is now $900k

As a community we should stop throwing numbers around like this when more than half of this number is speculative. You shouldn't be able to count it as "total compensation" unless you are compensated.

ilrwbwrkhv
0 replies
14m

Google itself is now filled with TC-optimizing folks, just one level lower than the ones at OpenAI.

iknownthing
0 replies
14m

Seems like you need to have been working at a place like Google too

almostgotcaught
0 replies
8m

their TC is now $900k.

Everyone knows that OpenAI TC is heavily weighted by RSUs that themselves are heavily weighted by hopes and dreams.

auggierose
1 replies
8m

TC optimization being tail call optimization?

klyrs
0 replies
0m

You don't get to that level by thinking about code...

a-dub
0 replies
31m

"...even the occasional "hobbyist" making nothing and churning out great work while hiding behind an anime girl avatar."

the people i often have the most respect for.

015a
0 replies
36m

Definitely true of even normal software engineering. My experience has been the opposite of expectations: TC creep has infected the industry to an irreparable degree, and the most talented people I've ever worked around or with are in boring, medium-sized enterprises in the midwestern US or Australia. You'll probably never hear of them, and every big tech company would absolutely love to hire them but just can't figure out an interview process that tells them apart from the TC grifters.

TC is actually totally uncorrelated with the quality of talent you can hire, beyond some low number that pretty much any funded startup could pay. Businesses hate to hear this, because money is easy to turn the dial up on; but most have no idea how to turn the dial up on what really matters to high talent individuals. Fortunately, I doubt Ilya will have any problem with that.

vasco
5 replies
56m

At the end game, a "non-safe" superintelligence seems easier to create, so like any other technology, some people will create it (even if just because they can't make it safe). And in a world with multiple superintelligent agents, how can the safe ones "win"? It seems like a safe AI is at an inherent disadvantage for survival.

arbuge
4 replies
50m

The current intelligences of the world (us) have organized their civilization in such a way that conforming members of society are the norm and criminals are the outcasts. Certainly not a perfect system, but something along those lines for the most part.

I like to think AGIs will decide to do that too.

vundercind
0 replies
21m

Our current meatware AGIs (corporations) are lawless as fuck and have effectively no ethics at all, which doesn’t bode well.

soulofmischief
0 replies
22m

It's a beautiful system, wherein "criminality" can be used to label and control any and all persons who disagree with the whim of the incumbent class.

Perhaps this isn't a system we should be trying to emulate with a technology that promises to free us of our current inefficiencies or miseries.

insane_dreamer
0 replies
26m

I disagree that civilization is organized along the lines of conforming and criminals. Rather, I would argue that the current intelligences of the world have primarily organized civilization in such a way that a small percentage of its members control the vast majority of all human resources, and the bottom 50% control almost nothing [0].

I would hope that AGI would prioritize humanity itself, but since it's likely to be created and/or controlled by a subset of that same very small percentage of humans, I'm not hopeful.

[0] https://en.wikipedia.org/wiki/Wealth_inequality_in_the_Unite...

Filligree
0 replies
42m

They well may; the problem is ensuring that humanity also survives.

imbusy111
4 replies
1h4m

Last I checked, researcher salaries haven't even reached software engineer levels.

shadow28
3 replies
59m

The kind of AI researchers being discussed here likely make an order of magnitude more than run of the mill "software engineers".

imbusy111
1 replies
58m

You're comparing top names with run of the mill engineers maybe, which isn't fair.

And maybe you need to discover talent rather than buy talent from the previous generation.

shadow28
0 replies
53m

AI researchers at top firms make significantly more than software engineers at the same firms, though (granted, the difference is likely not an order of magnitude in this case).

Q6T46nT668w6i3m
0 replies
47m

Unless you know something I don't, that's not the case. It also makes sense: engineers are far more portable, and scarcity isn't an issue (many ML PhDs find engineering positions).

Q6T46nT668w6i3m
3 replies
55m

Academic compensation is different from what you'd find elsewhere on Hacker News. Likewise, academic performance is evaluated differently than what you'd expect as a software engineer. Ultimately, everyone cares about scientific impact, so academic compensation relies on name and recognition far more than money. Personally, I care about the performance of the researchers (i.e., their publications), the institution's larger research program (and its resources), and the institution's commitment to my research (e.g., fellowships and tenure). I want to do science for my entire career, so I prioritize longevity rather than a quick buck.

I’ll add, the lack of compute resources was a far worse problem early in the deep learning research boom, but the market has adjusted and most researchers are able to be productive with existing compute infrastructure.

eigenvalue
2 replies
49m

But wouldn't the focus on "safety first" sort of preclude them from giving their researchers the unfettered right to publish their work however and whenever they see fit? Isn't the idea to basically try to solve the problems in secret and only release things when they have high confidence in the safety properties?

If I were a researcher, I think I'd care more about ensuring that I get credit for any important theoretical discoveries I make. This is something that LeCun is constantly stressing and I think people underestimate this drive. Of course, there might be enough researchers today who are sufficiently scared of bad AI safety outcomes that they're willing to subordinate their own ego and professional drive to the "greater good" of society (at least in their own mind).

FeepingCreature
1 replies
26m

If you're working on superintelligence I don't think you'd be worried about not getting credit due to a lack of publications, of all things. If it works, it's the sort of thing that gets you in the history books.

eigenvalue
0 replies
16m

Not sure about that. It might get Ilya in the history books, and maybe some of the other high profile people he recruits early on, but a junior researcher/developer who makes a high impact contribution could easily get overlooked. Whereas if that person can have their name as lead author on a published paper, it makes it much easier to measure individual contributions.

esafak
2 replies
1h3m

I think they will easily find enough capable altruistic people for this mission.

EncomLab
1 replies
5m

I mean SBF was into Altruism - look how that turned out....

esafak
0 replies
2m

So what? He was a phony.

paxys
0 replies
48m

They will be able to pay their researchers the same way every other startup in the space is doing it – by raising an absurd amount of money.

neural_thing
0 replies
26m

Daniel Gross (with his partner Nat Friedman) invested $100M into Magic alone.

I don't think SSI will struggle to raise money.

ldjkfkdsjnv
0 replies
36m

Are you seriously asking how the most talented AI researcher of the last decade will be able to recruit other researchers? Ilya saw the potential of deep learning way before other machine learning academics.

insane_dreamer
0 replies
1h3m

Perhaps they can find people who are ideologically driven

given the nature of their mission, this shouldn't be too terribly difficult; many gifted researchers do not go to the highest bidder

aresant
0 replies
31m

My guess is they will work on a protocol to drive safety, with the view that every material player will use it (or be regulated into using it), which could lead to a very robust business model.

I assume that OpenAI and others will support this effort (and the comp, training, etc.), and that they will be very well positioned to offer comparable $$$ packages, leverage resources, and so on.

frenchie4111
29 replies
1h10m

I am not on the bleeding edge of this stuff. I wonder though: How could a safe super intelligence out compete an unrestricted one? Assuming another company exists (maybe OpenAI) that is tackling the same goal without spending the cycles on safety, what chance do they have to compete?

Retr0id
10 replies
1h8m

the first step of safe superintelligence is to abolish capitalism

cscurmudgeon
4 replies
39m

Why does everything have to do with capitalism nowadays?

Racism, unsafe roads, hunger, bad weather, good weather, stubbing toes on furniture, etc.

Don't believe me?

See https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Are there any non-capitalist utopias out there without any problems like this?

Retr0id
2 replies
37m

This is literally a discussion on allocation of capital, it's not a reach to say that capitalism might be involved.

cscurmudgeon
1 replies
24m

Right, so you draw a line from that to abolishing capitalism.

Is that the only solution here? We need to destroy billions of lives so that we can potentially prevent "unsafe" super intelligence?

Let me guess, your cure for cancer involves abolishing humanity?

Should we abolish governments when some random government goes bad?

Retr0id
0 replies
20m

"Abolish" is hyperbole.

Insufficiently regulated capitalism fails to account for negative externalities. Much like a Paperclip Maximising AI.

One could even go as far as saying AGI alignment and economic resource allocation are isomorphic problems.

jdthedisciple
0 replies
15m

To be honest, these search results being months apart show quite the opposite of what you're saying...

Even though I agree with your general point.

next_xibalba
1 replies
1h4m

That’s the first step towards returning to candlelight. So it isn’t a step toward safe super intelligence, but it is a step away from any super intelligence. So I guess some people would consider that a win.

yk
0 replies
53m

Not sure if you want to share the capitalist system with an entity that outcompetes you by definition. Chimps don't seem to do too well under capitalism.

ganyu
1 replies
1h3m

One can already see the beginning of AI enslaving humanity through the establishment. Companies that work on AI get more investment and those that don't get kicked out of the game. Those who employ AI get more investment and those who pay humans lose the market's confidence. People lose jobs and birth rates drop sharply while AI thrives. Tragic.

nemo44x
0 replies
43m

So far it is only people telling AI what to do. When we reach the day where it is commonplace for AI to tell people what to do, then we are possibly in trouble.

speed_spread
0 replies
57m

And then seize the means of production.

weego
8 replies
53m

Honestly, what does it matter? We're many lifetimes away from anything. These people are trying to define concepts that don't apply to us or to what we're currently capable of.

AI safety / AGI anything is just a form of tech philosophy at this point and this is all academic grift just with mainstream attention and backing.

mhardcastle
4 replies
41m

This goes massively against the consensus of experts in this field. The modal AI researcher believes that "high-level machine intelligence", roughly AGI, will be achieved by 2047, per the survey below. Given the rapid pace of development in this field, it's likely that timelines would be shorter if this were asked today.

https://www.vox.com/future-perfect/2024/1/10/24032987/ai-imp...

enragedcacti
0 replies
7m

I don't understand how you got 2047. For the 2022 survey:

    - "How many years until you expect: - a 90% probability of HLMI existing?" 
    mode: 100 years
    median: 64 years

    - "How likely is it that HLMI exists: - in 40 years?"
    mode: 50%
    median: 45%
And from the summary of results: "The aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059"

ein0p
0 replies
31m

I am in the field. The consensus is made up by a few loudmouths. No serious front line researcher I know believes we’re anywhere near AGI, or will be in the foreseeable future.

Retr0id
0 replies
38m

Reminds me of what they've always been saying about nuclear fusion.

usrnm
0 replies
38m

We're many lifetimes away from anything

ENIAC was built in 1945; that's roughly a lifetime ago. Just think about it.

criddell
0 replies
43m

Ilya the grifter? That’s a take I didn’t expect to see here.

ToValueFunfetti
0 replies
40m

Many lifetimes? As in upwards of 200 years? That's wildly pessimistic if so- imagine predicting today's computer capabilities even one lifetime ago

slashdave
1 replies
34m

Since no one knows how to build an AGI, hard to say. But you might imagine that more restricted goals could end up being easier to accomplish. A "safe" AGI is more focused on doing something useful than figuring out how to take over the world and murder all the humans.

cynusx
0 replies
9m

Hinton's point does make sense though.

Even if you focus an AGI on producing more cars for example, it will quickly realize that if it has more power and resources it can make more cars.

rafaelero
0 replies
40m

It's probably not possible, which makes all these initiatives painfully naive.

mark_l_watson
0 replies
30m

That is a very good question. In a well-functioning democracy, a government should apply a thin layer of fair rules that are uniformly enforced. I am an old man, but when I was younger, I recall that we sort of had this in the USA.

I don't think that corporations left on their own will make safe AGI, and I am skeptical that we will have fair and technologically sound legislation - look at some of the anti-cryptography and anti-privacy laws raising their ugly heads in Europe as an example of government ineptitude and corruption. I have been paid to work in the field of AI since 1982, and all of my optimism is for AI systems that function in partnership with people, and I expect continued rapid development of agents based on LLMs, RL, etc. I think that AGIs as seen in the Terminator movies are far into the future, perhaps 25 years?

lmaothough12345
0 replies
45m

Not with that attitude

llamaimperative
0 replies
54m

It can't. Unfortunately.

People spend so much time thinking about the systems (the models) themselves, and not enough about the system that builds the systems. The behaviors of the models will be driven by the competitive dynamics of the economy around them, and yeah, that's a big, big problem.

hackerlight
0 replies
24m

This is not a trivial point. Selective pressures will push AI towards unsafe directions due to arms race dynamics between companies and between nations. The only way, other than global regulation, would be to be so far ahead that you can afford to be safe without threatening your own existence.

cynusx
0 replies
10m

Not on its own but in numbers it could.

Similar to how law-abiding citizens turn on law-breaking citizens today or more old-fashioned, how religious societies turn on heretics.

I do think the notion that humanity will be able to manage superintelligence just through engineering and conditioning alone is naive.

If anything there will be a rogue (or incompetent) human who launches an unconditioned superintelligence into the world in no time and it only has to happen once.

It's basically Pandora's box.

alecco
0 replies
5m

The problem is the training data. If you take care of alignment at that level, the performance is as good as an unrestricted model's, except for the things you removed, like making explosives or ways to commit suicide.

But that costs almost as much as training on the data: hundreds of millions. And I'm sure this will be the new "secret sauce" for Microsoft/Meta/etc. And sadly nobody is sharing their synthetic data.
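
For illustration only, a minimal sketch of what "alignment at the data level" could look like (my own toy example, not alecco's actual pipeline; the topic list and helper names are made up):

    UNSAFE_TOPICS = ("synthesizing explosives", "methods of suicide")  # hypothetical blocklist

    def is_safe(doc: str) -> bool:
        # naive substring check; a real pipeline would use trained classifiers, not keywords
        text = doc.lower()
        return not any(topic in text for topic in UNSAFE_TOPICS)

    def filter_corpus(docs: list[str]) -> list[str]:
        # keep only documents that pass the safety filter before pretraining
        return [d for d in docs if is_safe(d)]

The cost he mentions comes from running this kind of curation (and generating synthetic replacements) over an entire pretraining corpus, not from the filtering logic itself.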

dougb5
18 replies
34m

Building safe superintelligence (SSI) is the most important technical problem of our time.

Call me a cranky old man, but the superlatives in these sorts of announcements really annoy me. I want to ask: Have you surveyed every problem in the world? Are you aware of how much suffering there is outside of your office and how unresponsive it has been so far to improvements in artificial intelligence? Are you really saying that there is a nice total ordering of problems by importance to the world, and that the one you're interested in also happens to be at the top?

wffurr
3 replies
15m

I think the idea is that a safe super intelligence would help solve those problems. I am skeptical because the vast majority are social coordination problems, and I don’t see how a machine intelligence no matter how smart can help with that.

rubyfan
0 replies
7m

So instead of a super intelligence either killing us all or saving us from ourselves, we’ll just have one that can be controlled to extract more wealth from us.

mdp2021
0 replies
0m

I am skeptical because the vast majority are social coordination problems, and I don’t see how

Leadership.

azinman2
0 replies
6m

Exactly. Or who gets the results of its outputs. How do we prioritize limited compute?

TaupeRanger
3 replies
16m

Trying to create "safe superintelligence" before creating anything remotely resembling or approaching "superintelligence" is like trying to create "safe Dyson sphere energy transport" before creating a Dyson Sphere. And the hubris is just a cringe inducing bonus.

deegles
1 replies
4m

'Fearing a rise of killer robots is like worrying about overpopulation on Mars.' - Andrew Ng

yowlingcat
0 replies
0m

[delayed]

moralestapia
0 replies
0m

It would be more akin to creating a "safe Dyson sphere", though.

If your hypothetical Dyson sphere (work in progress) has a big chance to do more harm than good, why build it in the first place?

Actually, I think the whole safety proposal should be thought of from that point of view. How do we make "X" more beneficial than detrimental for humans?

Congrats, Ilya. Eager to see what comes out of SSI.

jiveturkey
1 replies
12m

exactly. and define safe. eg, is it safe (ie dereliction) to _not_ use ai to monitor dirty bomb threats? or, more simply, CSAM?

cwillu
0 replies
5m

In the context of super-intelligence, “safe” has been perfectly well defined for decades: “won't ultimately result in everyone dying or worse”.

You can call it hubris if you like, but don't pretend like it's not clear.

xanderlewis
0 replies
11m

It certainly is the most important technical problem of our time, if we end up developing such a system.

That conditional makes all the difference.

mdp2021
0 replies
1m

It says «technical» problem, and probably implies that other technical problems could dramatically benefit from such an achievement.

maximinus_thrax
0 replies
19m

the superlatives in these sorts of announcements really annoy me

I've noticed this as well and they're making me wear my tinfoil hat more often than usual. I feel as if all of this (ALL OF IT) is just a large-scale distributed PR exercise to maintain the AI hype.

jetrink
0 replies
14m

To a technoutopian, scientific advances, and AI in particular, will one day solve all other human problems, create heaven on earth, and may even grant us eternal life. It's the most important problem in the same way that Christ's second coming is important in the Christian religion.

almogo
0 replies
14m

Technical. He's saying it's the most important technical problem of our time.

TideAd
0 replies
1m

Yes, they see it as the top problem, by a large margin.

If you do a lot of research about the alignment problem you will see why they think that. In short it's "extremely high destructive power" + "requires us to solve 20+ difficult problems or the first superintelligence will wreck us"

Starlevel004
0 replies
11m

This all makes more sense when you realise it's Calvinism for programmers.

GeorgeTirebiter
0 replies
9m

C'mon. This one-pager is a recruiting document. One wants 'true believer' (intrinsically motivated) employees to execute the mission. Give Ilya some slack here.

dontreact
7 replies
1h12m

How are they gonna pay for their compute costs to get to the frontier? Seems hard to attract enough investment while almost explicitly promising no return.

neuralnetes-COO
3 replies
1h7m

6-figure free compute credits from every major cloud provider to start

bps4484
1 replies
53m

6 figures would pay for a week for what he needs. Maybe less than a week

neuralnetes-COO
0 replies
32m

I don't believe ssi.inc's main objective is training expensive models, but rather to create SSI.

CaveTech
0 replies
1h5m

5 minutes of training time should go far

jhickok
1 replies
1h7m

Wonder if funding could come from profitable AI companies like Nvidia, MS, Apple, etc, sort of like Apache/Linux foundation.

visarga
0 replies
1h3m

I was actually expecting Apple to get their hands on Ilya. They also have the privacy theme in their branding, and Ilya might help that image, but he also has the chops to catch up to OpenAI.

imbusy111
0 replies
1h3m

What if there are other ways to improve intelligence other than throwing more money at running gradient descent?

choxi
7 replies
1h7m

based on the naming conventions established by OpenAI and StabilityAI, this may be the most dangerous AI company yet

lawn
3 replies
51m

Being powerful is like being a lady. If you have to tell people you are, you aren’t. – Margaret Thatcher

shafyy
2 replies
48m

Or: "Any man who must say, "I am the King", is no true king." - Tywin Lannister

righthand
0 replies
22m

"Inspiration from someone who doesn't exist and therefor accomplished nothing." - Ficto the advice giver

malermeister
0 replies
56m

"Definitely-Won't-Go-Full-Skynet-AI" was another name in consideration.

kirth_gersen
0 replies
59m

Wow. Read my mind. I was just thinking, "I hope this name doesn't age poorly and become terribly ironic..."

fiatpandas
0 replies
53m

Ah yes, the AI Brand Law: the meaning of adjectives in your company name will invert within a few years of launch.

seydor
6 replies
49m

I know it is a difficult subject, but whichever country gets access to this superintelligence will certainly use it for "safety" reasons. Sutskever has lived in Israel and now has a team there, but Israel doesn't strike me as a state that can be trusted with the safety of the world. (Many of the AI business leaders are Jewish, but I'm not sure if they have half their team there.)

The US, on the other hand, is a known quantity when it comes to policing the world.

Ultimately the only safe AI is going to be the open one, and it will probably have a stupid name

JanSt
4 replies
39m

Good old antisemitism

sgt
2 replies
29m

I hope you realize that blaming opinions on antisemitism will - ironically - cause or contribute to antisemitism in the long run.

JanSt
1 replies
20m

That's a terrible argument. If you're questioning the safety of AI because there are so many Jews in it, that's just antisemitism.

I'm also not causing racism by calling out racist acts as such.

sgt
0 replies
4m

I don't blame you for reading the original comment like that, since he explicitly mentioned "Jewish". However, the way I read it was rather as criticism of what the Israeli state is doing, led by Netanyahu etc.

dudeinhawaii
0 replies
3m

I don't think this was antisemitism per se. It was a condemnation of the Jewish state of Israel, which is openly associated with the religion and ethnicity. The comment struck me more as "Israel doesn't always play by the rules, why would we trust AI built there to play by the rules".

I don't share this opinion, but I think you're throwing out rage bait rather than engaging on the topic or discussion.

I think it's a valid criticism or point to bring up even if it's phrased/framed somewhat poorly.

kolkalbijyomo
0 replies
27m

al apchem ve'al hamatchem... kol kalb bij yomo. (Roughly: "In spite of your wrath and fury... every dog has its day.")

ofou
5 replies
1h3m

At this point, all the computing power is concentrated among various companies such as Google, Facebook, Microsoft, Amazon, Tesla, etc.

It seems to me it would be much safer and more intelligent to create a massive model and distribute the benefits among everyone. Why not use a P2P approach?

wizzwizz4
2 replies
43m

Backprop is neither commutative nor associative.
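
A toy sketch of the ordering problem (my own illustration, not from the parent comment): with SGD, each gradient is evaluated at the current weights, so applying two peers' updates in different orders yields different results.

    def grad(w, t):
        # gradient of the toy loss (w - t)^2
        return 2.0 * (w - t)

    def sgd_step(w, t, lr=0.1):
        return w - lr * grad(w, t)

    w0 = 0.0
    w_ab = sgd_step(sgd_step(w0, 1.0), -3.0)  # peer A's update, then peer B's
    w_ba = sgd_step(sgd_step(w0, -3.0), 1.0)  # peer B's update, then peer A's
    print(w_ab, w_ba)  # roughly -0.44 vs -0.28: the two orders disagree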

ofou
1 replies
32m

What do you mean? There are a bunch of proofs of concept such as Hydra, peer-nnet, Learnae, and so on.

nvy
1 replies
50m

In my area, internet and energy are insanely expensive and that means I'm not at all willing to share my precious bandwidth or compute just to subsidize someone generating Rule 34 porn of their favorite anime character.

I don't seed torrents for the same reason. If I lived in South Korea or somewhere that bandwidth was dirt cheap, then maybe.

ofou
0 replies
37m

There is a way to achieve load balancing, safety, and distribution effectively. The models used by Airbnb, Uber, and Spotify have proven to be generally successful. Peer-to-peer (P2P) technology is the future; even in China, people are streaming videos using this technology, and it works seamlessly. I envision a future where everyone joins the AI revolution with an iPhone, with both training and inference distributed in a P2P manner. I wonder why no one has done this yet.

RcouF1uZ4gsC
5 replies
1h11m

The site is a good example of Poe's Law.

If I didn't know that it was real, I would have thought it was parody.

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

Do you have to have a broken bone to join?

Apparently, grammatical nuances are not an area of focus for safety, unless they think that a broken team ("cracked") is an asset in this area.

selimthegrim
0 replies
1h8m

This is probably his former Soviet Union English showing up where he meant to say "crack", unless he thinks people being insane is an asset.

novia
0 replies
10m

To me this is a good indication that the announcement was written by a human and not an LLM

lowkey_
0 replies
1h8m

Cracked, especially in saying "cracked engineers", refers to really good engineers these days. It's cracked as in broken in a good way, so over-powered that it's unfair.

ctxc
0 replies
1h2m

Too snarky...anyway, "crack" means "exceptional" in some contexts. I've seen footballers using it a lot over the years (Neymar, Messi etc) fwiw.

Just realized - we even say "it's not all it's cracked up to be" as a negative statement, which would imply "cracked up" is positive.

Glant
0 replies
1h3m

Maybe they're using the video gaming version of cracked, which means you're really good.

modeless
4 replies
1h12m

This makes sense. Ilya can probably raise practically unlimited money on his name alone at this point.

I'm not sure I agree with the "no product until we succeed" direction. I think real world feedback from deployed products is going to be important in developing superintelligence. I doubt that it will drop out of the blue from an ivory tower. But I could be wrong. I definitely agree that superintelligence is within reach and now is the time to work on it. The more the merrier!

visarga
3 replies
1h8m

I have a strong intuition that chat logs are actually the most useful kind of data. They contain many LLM outputs followed by implicit or explicit feedback, from humans, from the real world, and from code execution. Scaling this feedback to 180M users and 1 trillion interactive tokens per month like OpenAI is a big deal.

slashdave
1 replies
33m

Except LLMs are a distraction from AGI

sfink
0 replies
7m

That doesn't necessarily imply that chat logs are not valuable for creating AGI.

You can think of LLMs as devices to trigger humans to process input with their meat brains and produce machine-readable output. The fact that the input was LLM-generated isn't necessarily a problem; clearly it is effective for the purpose of prodding humans to respond. You're training on the human outputs, not the LLM inputs. (Well, more likely on the edge from LLM input to human output, but close enough.)

modeless
0 replies
1h3m

Yeah, similar to how Google's clickstream data makes their lead in search self-reinforcing. But chat data isn't the only kind of data. Multimodal will be next. And after that, robotics.

gibsonf1
4 replies
57m

Given that GenAI is a statistical approach from which intelligence does not emerge, as ample experience proves, does this new company plan to take a more human approach to simulating intelligence instead?

mdp2021
0 replies
41m

more human approach to simulating intelligence

What about a more rational approach to implementing it instead.

(Which was not excluded from past plans: they admittedly just did not know the formula, and explored emergence. But the next efforts will have to go in the direction of attempting actual intelligence.)

localfirst
0 replies
21m

We need new math to do what you are thinking of. Highly probable word slot machine is the best we can do right now.

ilrwbwrkhv
0 replies
2m

This. As I wrote in another comment, people fall for marketing gimmicks easily.

alextheparrot
0 replies
38m

Glibly, I’d also love your definition of the education system writ large.

insane_dreamer
3 replies
54m

I understand the concern that a "superintelligence" will emerge that will escape its bounds and threaten humanity. That is a risk.

My bigger, and more pressing worry, is that a "superintelligence" will emerge that does not escape its bounds, and the question will be which humans control it. Look no further than history to see what happens when humans acquire great power. The "cold war" nuclear arms race, which brought the world to the brink of (at least partial) annihilation, is a good recent example.

Quis custodiet ipsos custodes? ("Who will guard the guards themselves?") -- That is my biggest concern.

Update: I'm not as worried about Ilya et al. as I am about commercial companies (including the formerly "open" OpenAI) discovering AGI.

mark_l_watson
0 replies
17m

+1 truth.

The problem is not just governments; I am also concerned about large organized crime organizations and corporations.

I think I am on the losing side here, but my hopes are all for open source, open weights, and effective AI assistants that make peoples’ jobs easier and lives better. I would also like to see more effort shifted from LLMs back to RL, DL, and research on new ideas and approaches.

ilrwbwrkhv
0 replies
4m

There is no "superintelligence" or "AGI".

People are falling for marketing gimmicks.

These models will remain in the word-vector-similarity phase forever. Until we understand consciousness, we will not crack AGI, and then it won't take brute-forcing large swaths of data, but tiny amounts.

So there is nothing to worry about. These "apps" might be as popular as Excel, but will go no further.

gavin_gee
0 replies
1m

This.

Every nation-state will be in the game. Private enterprise will be in the game. Bitcoin-funded individuals will be in the game. Criminal enterprises will be in the game.

How does one company building a safe version stop that?

If I have access to hardware and data, how does a safety layer get enforced? Regulations are for organizations that care about public perception, the law, and stock prices. Criminals and nation-states are not affected by these things.

It seems to me enforcement is likely only possible at the hardware layer, which means the safety mechanisms need to be enforced throughout the hardware supply chain for training or inference. Do you really think the Chinese government or the US government won't ignore this if it's in their interest?

gnicholas
3 replies
41m

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

Can someone explain how their singular focus means they won't have product cycles or management overhead?

mike_d
1 replies
33m

Don't hire anyone who is a certified scrum master or has an MBA and you tend to be able to get a lot done.

gnicholas
0 replies
30m

This would work for very small companies...but I'm not sure how one can avoid product cycles forever, even without scrum masters and the like. More to the point, how can you make a good product without something approximating product cycles?

paxys
0 replies
27m

Product cycles – we need to launch feature X by arbitrary date Y, and need to make compromises to do so.

Management overhead – product managers, project managers, several layers of engineering managers, directors, VPs...all of whom have their own dreams and agendas and conflicting priorities.

A well funded pure research team can cut through all of this and achieve a ton. If it is actually run that way, of course. Management politics ultimately has a way of creeping into every organization.

ctxc
3 replies
1h0m

What exactly is "safe" in this context, can someone give me an eli5?

If it's "taking over the world" safe, does it not mean that this is a part of AGI?

kouteiheika
1 replies
52m

What exactly is "safe" in this context, can someone give me an eli5?

In practice it essentially means the same thing as for most other AI companies - censored, restricted, and developed in secret so that "bad" people can't use it for "unsafe" things.

novia
0 replies
15m

The people who advocate censorship of AGIs annoy the hell out of the AI safety people who actually care about existential risk.

insane_dreamer
0 replies
19m

Good Q. My understanding of "safe" in this context is a superintelligence that cannot escape its bounds. But that's not to say that's Ilya's definition.

TIPSIO
3 replies
1h0m

Would you rather your future overlords be "The Safe Company" or "The Open Company"?

emestifs
2 replies
56m

Galaxy Brain: TransparentAI

TIPSIO
1 replies
53m

Maybe I'm just old and grumpy, but I can't shake the feeling that the real most dangerous thing about AGI/ASI is the centralization of its power (if it is ever achieved).

Everyone is just fiending for their own version of it.

emestifs
0 replies
50m

You're not old or grumpy, you're just stating the quiet part out loud. It's the same game, but now with 100% more AI.

nuz
2 replies
7m

Quite impressive how many AI companies Daniel Gross has had a hand in lately. Carmack, this, lots of other promising companies. I expect him to be quite a big player once some of these pay off in 10 years or so.

brcmthrowaway
1 replies
5m

What's Carmack?

thih9
0 replies
1m

John Carmack, the game developer who co-founded id Software and served as Oculus’s CTO, is working on a new venture — and has already attracted capital from some big names.

Carmack said Friday his new artificial general intelligence startup, called Keen Technologies (perhaps a reference to id’s “Commander Keen“), has raised $20 million in a financing round from former GitHub CEO Nat Friedman and Cue founder Daniel Gross.

https://techcrunch.com/2022/08/19/john-carmack-agi-keen-rais...

mikemitchelldev
2 replies
1h8m

Do you find the name "Safe Superintelligence" to be an instance of virtue signalling? Why or why not?

nemo44x
1 replies
38m

Yes, they might as well have named it "Woke AI". It implies that other AIs aren't safe or something and that they and they alone know what's best. Sounds religious, or like it comes from the same place religious righteousness comes from, if anything. They believe they are the "good guys" in their worldview or something.

I don't know if any of that is true about them, but their name and statement invoke this.

viking123
0 replies
12m

AI safety is a fraud on similar level as NFTs. Massive virtue signalling.

fnordpiglet
2 replies
54m

“””We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”””

Cracked indeed

hbarka
1 replies
25m

The phrase ‘crack team’ has military origins.

AnimalMuppet
0 replies
16m

"Cracked team" has rather different connotations.

faraaz98
2 replies
1h11m

Daniel Levy

Like the Tottenham Hotspur owner??

WmWsjA6B29B4nfk
1 replies
1h4m

cs.stanford.edu/~danilevy

mi_lk
0 replies
29m

Thanks, was wondering the same thing about Hotspur guy lol

bongwater_OS
2 replies
38m

Remember when OpenAI was focusing on building "open" AI? This is a cool mission statement but it doesn't mean anything right now. Everyone loves a minimalist HTML website and guarantees of safety, but who knows what this is actually going to shake out to be.

kumarm
1 replies
30m

Isn't Ilya out of OpenAI partly because it left the "Open" part of OpenAI behind?

Dr_Birdbrain
0 replies
6m

No, lol—Ilya liked ditching the “open” part, he was an early advocate for closed-source. He left OpenAI because he was concerned about safety, felt Sam was moving too fast.

blixt
2 replies
1h13m

Kind of sounds like OpenAI when it started, so will history repeat itself? Nonetheless, excited to see what comes out of it.

insane_dreamer
0 replies
24m

It wasn't that OpenAI was open as in "open source" but rather that its stated mission was to research AI such that all could benefit from it (open), as well as to ensure that it could not be controlled by any one player, rather than to develop commercial products to sell and make a return on (closed).

aridiculous
2 replies
1h3m

Surprising to see Gross involved. He seems to be pretty baked into the YC world, which usually means "very commercially oriented".

notresidenter
0 replies
1h0m

His latest project (https://pioneer.app/) recently (this year I think) got shut down. I guess he's pivoting.

AlanYx
0 replies
59m

It does say they have a business model ("our business model means safety, security, and progress are all insulated from short-term commercial pressures"). I imagine it's some kind of patron model that requires a long-term commitment.

aresant
2 replies
1h3m

Prediction - the business model becomes an external protocol - similar to SSL - that the litany of AI companies working to achieve AGI will leverage (or be regulated into using).

From my hobbyist knowledge of LLMs and compute, this is going to be a terrifically complicated problem, but barring a defined protocol & standard there's no hope that "safety" is going to be executed as a product layer, given all the different approaches.

Ilya seems like he has both the credibility and the engineering chops to be in a position to execute this, and I wouldn't be surprised to see OpenAI / MSFT / and other players be early investors / customers / supporters.

cherioo
1 replies
24m

I like your idea. But on the other hand, training an AGI and then having a layer on top "aligning" the AGI sounds super dystopian, and like a good plot for a movie.

exe34
0 replies
15m

the aligning means it should do what the board of directors wants, not what's good for society.

typon
1 replies
51m

I know Ilya is Israeli, but them having an office in Tel Aviv completely turns me off from this announcement. Sad to see a company that is supposed to be the ethical replacement for the current crop of AI companies basing itself in a state currently engaged in mass atrocities.

kolkalbijyomo
0 replies
12m

Sounds like you were off for a while now... and always remember: kol kalb bij yomo ("every dog has its day").

sam0x17
1 replies
29m

Well, with an office in Tel Aviv they should have ample case studies in ethics and modern genocide a few miles away that should be useful for training an SSI :/

kolkalbijyomo
0 replies
10m

We know genocidal maniacs like you, sam... and to them we say: kol kalb bij yomo ("every dog has its day").

rafaelero
1 replies
41m

Oh god, one more Anthropic that thinks it's noble not pushing the frontier.

Dr_Birdbrain
0 replies
5m

But Anthropic produces very capable models?

klankbrouwerij
1 replies
21m

SSI, a very interesting name for a company advancing AI! "Solid State Intelligence" or SSI was also the name of the malevolent entity described in the biography of John C. Lilly [0][1]. It was a network of "computers" (computation-capable solid state systems) that was first engineered by humans and then developed into something autonomous.

[0] https://en.wikipedia.org/wiki/John_C._Lilly

[1] http://johnclilly.com/

sgd99
0 replies
20m

SSI here is "Safe Superintelligence Inc."

heydrichr
1 replies
13m

Three Jews start a company with the initials SS. Love it.

GeorgeTirebiter
0 replies
6m

SSI technically. Super-Sized, you know?

hbarka
1 replies
34m

Interesting choice of name. It’s like safe-super-weapon.

seydor
0 replies
31m

Defensive nukes

ebilgenius
1 replies
45m

Incredible website design, I hope they keep the theme. With so many AI startups going with wacky, overwhelming, animated WebGL/Three.js website designs, the simplicity here is a stark contrast.

blixt
0 replies
35m

Probably Daniel Gross picked it up from Nat Friedman?

1. Nat Friedman has this site: https://nat.org/

2. They made this together: https://nfdg.com/

3. And then this: https://andromeda.ai/

4. Now we have https://ssi.inc/

If you look at the (little) CSS in all of the above sites you'll see there's what seems to be a copy/paste block. The Nat and SSI sites even have the same "typo" indentation.

MeteorMarc
1 replies
1h12m

Does this mean they will not instantiate a super AI unless it is mathematically proven that it is safe?

visarga
0 replies
1h6m

But any model, no matter how safe it was in training, can still be prompt hacked, or fed dangerous information to complete nefarious tasks. There is no safe model by design. Not to mention that open weights models can be "uncensored" with ease.

zb3
0 replies
35m

All the safety freaks should join this and leave OpenAI alone.

ysky
0 replies
15m

This is funny. The foundations don't seem safe to begin with... it may be safe with conditions, or safe as in "safety" of some at the expense of others.

xoac
0 replies
3m

“Safe”. These people market themselves as protecting you from a situation which will not come very soon if at all, while all working towards a very real situation of AI just replacing human labor with a shittier result. All that while making themselves quite rich. Just another high-end tech scam.

udev4096
0 replies
1h11m

Love the plain html

tsunamifury
0 replies
57m

The problem is that Ilya's behavior at times came across as very unhinged and cult-like. And while his passions are clear and maybe good, his execution often makes him come off as someone you wouldn't want in charge of safety.

tarsinge
0 replies
6m

I'm still unconvinced safety is a concern at the model level. Any software wrongly used can be dangerous, e.g. Therac-25, 737 MAX, the Fujitsu UK Post scandal... Also, maybe I spent too much time in the cryptocurrency space, but it doesn't help that the prefix "Safe" has been associated with scams like SafeMoon.

surfingdino
0 replies
16m

The NetBSD of AI? /s

sreekotay
0 replies
1h12m

Can't wait for OpenSSL and LibreSSL...

sidcool
0 replies
1h12m

Good to see this. I hope they have enough time to create something before the big 3 reach AGI.

shudza
0 replies
30m

This won't age well.

shnkr
0 replies
1h7m

A tricky situation now for OpenAI engineering to decide between good and evil.

sgd99
0 replies
18m

I love this: "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

renegade-otter
0 replies
17m

"and our business model means..."

Forgive my cynicism - but "our business model" means you are going to get investors, and those investors will want results, and they will be up your ass 24/7, and then your moral compass, if any, will inevitably just be broken down like a coffee bean in a burr grinder.

And in the middle of this hype cycle, when literally hundreds of billions are on the line, there is just no chance.

I am not holding my breath while waiting for a "Patagonia of AI" to show up.

paul7986
0 replies
58m

I bet Google or Apple or Amazon will become their partners, like MS is to OpenAI.

outside1234
0 replies
20m

Is Safe the new Open that is promptly dropped once traction is achieved?

nashashmi
0 replies
48m

If everyone is creating AGI, another AI company will just create another AGI. There is no such thing as SAFE AGI.

I feel like this 'safe' word is another word for censorship. Like Google search results have become censored.

localfirst
0 replies
24m

This feels awfully similar to Emad and Stability in the beginning, when there were a lot of expectations and hype. Ultimately they could not make a buck to cover the costs. I'd be curious to see what comes out of this, however; we are not seeing leaps and bounds with new LLM iterations, so I wonder if there is something else in store.

jdthedisciple
0 replies
21m

Imagine people 50 years ago founding "Safe Personal Computer Inc".

Enough said...

jdthedisciple
0 replies
22m

Any usage of the word "safe" without an accompanying precise definition of it is utter null and void.

intellectronica
0 replies
13m

How long until Elon sues them to remove "safe" from their name? ;)

instagraham
0 replies
56m

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

well, that's some concrete insight into whatever happened at OpenAI. kinda obvious though in hindsight I guess.

habryka
0 replies
16m

"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."

That sounds like a weird kind of lip service to safety. It really seems to assume you can just make these systems safe while you are going as fast as possible, which seems unlikely.

ffhhj
0 replies
1h2m

Building safe superintelligence (SSI) is the most important technical problem of our time.

Isn't this a philosophical/psychological problem instead? Technically it's solved: just censor any response that doesn't match a list of curated categories, until a technician whitelists it. But the technician could be confronted with a compelling "suicide song":

https://en.wikipedia.org/wiki/Gloomy_Sunday
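
As a purely illustrative sketch of that "curated categories plus technician whitelist" idea (the category names and the classifier are hypothetical stand-ins, not a real moderation API):

    ALLOWED_CATEGORIES = {"coding", "cooking", "general_knowledge"}  # curated list
    technician_whitelist: set[int] = set()  # hashes of responses a human has approved

    def classify(response: str) -> str:
        # stand-in for a real content classifier (a model in practice)
        return "general_knowledge"

    def moderate(response: str) -> str:
        # release the response only if it falls in an allowed category
        # or a technician has explicitly whitelisted it
        if classify(response) in ALLOWED_CATEGORIES or hash(response) in technician_whitelist:
            return response
        return "[withheld pending technician review]"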

fallat
0 replies
1h13m

This is how you web

earhart
0 replies
20m

Anyone know how to get mail to join@ssi.inc to not bounce back as spam? :-) (I promise, I'm not a spammer! Looks like a "bulk sender bounce" -- maybe some relay?)

deadeye
0 replies
9m

Oh goodness, just what the world needs. Another self-righteous AI, something nobody actually wants.

cyptus
0 replies
10m

what website could >ilya< possibly make? love it!!!

atleastoptimal
0 replies
46m

Probably the best thing he could do.

artninja1988
0 replies
1h2m

aiming to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services.

Who is going to fund such a venture based on blind faith alone? Especially if you believe in the scaling-hypothesis type of AI research where you spend billions on compute, this seems bound to fail once the AI hype dies down and raising money becomes a bit harder.

UncleOxidant
0 replies
11m

Didn't OpenAI start with these same goals in mind?

AbstractH24
0 replies
1h5m

Is OpenAI on a path to becoming the MySpace of generative AI?

Either the Facebook of this era has yet to present itself or it’s Alphabet/DeepMind