
After AI beat them, professional Go players got better and more creative

dtnewman
68 replies
1d

Just look to Chess. The top players today are way better than any of the greats before, because they can train against computers and know exactly where they failed. That said, because they've gotten so good, chess at the top levels is pretty boring... it's hard to come up with a unique strategy so players tend to be defensive. Lots of ties.

On the other hand, chess is more popular than ever. It's huge in high schools. I see people playing it everywhere. I know that for me, I love being able to play a game and then view the computer analysis afterwards and see exactly what I did wrong (granted, sometimes a move can be good for a computer who will know how to follow through on the next 10 moves, but not necessarily good for me... but most of the time I can see where I made a mistake when the computer points it out).

Side note: I play on Lichess and it's great. Is there an equivalent app for Go?

Taek
28 replies
1d

I think you would see fewer ties if players got 0.2 points each for draws instead of 0.5 points each for draws.

It lowers the cost of a risky strategy (a loss gives up only 0.2 pts instead of 0.5 relative to taking an easy draw) and it makes the rewards much greater... a single win and 4 losses scores the same as 5 draws.

You won't see players making intentional draws anymore either.
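
A quick sketch of the arithmetic (plain Python, using the two draw values discussed in this thread) makes the incentive shift concrete:

```python
# Compare the standard scoring (win 1 / draw 0.5 / loss 0) with the proposed
# win 1 / draw 0.2 / loss 0 over a five-game stretch.

def total(wins, draws, losses, draw_value):
    return wins * 1.0 + draws * draw_value + losses * 0.0

for draw_value in (0.5, 0.2):
    risky = total(1, 0, 4, draw_value)   # one win, four losses
    safe = total(0, 5, 0, draw_value)    # five draws
    print(f"draw={draw_value}: risky strategy {risky} vs all draws {safe}")

# draw=0.5: risky strategy 1.0 vs all draws 2.5  -> drawing is clearly safer
# draw=0.2: risky strategy 1.0 vs all draws 1.0  -> the gamble no longer costs anything
```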

neysofu
18 replies
23h3m

Another possible solution would be to simply... remove draws from the game. Instead of checkmating, the goal becomes to capture the opponent's king.

Needless to say, no one likes this idea because it throws out of the window centuries of game theory. Endgames would be completely different. I'm not convinced it would be a less interesting game, though.

acchow
6 replies
22h23m

Instead of checkmating, the goal becomes to capture the opponent's king.

These are the same.

neysofu
5 replies
22h20m

They are not – if the goal becomes to capture the king, and check-related rules are removed from the game, stalemates become impossible.

dudinax
4 replies
21h34m

I'm not sure stalemate accounts for most draws at the highest level.

Getting rid of check does make for a better game at beginner levels.

It's both easier to teach and leads to exciting finishes as noobs hang their king.

doctor_phil
2 replies
15h1m

There are a lot of endgames that are drawn because of stalemate though. Many pawn endgames ( e.g. pawn and king against lone king) are drawn because of stalemate, but would be a win in most cases if stalemate didn't exist.

eterm
0 replies
13h46m

You likely didn't mean to imply all king+pawn vs king endgames are drawn, but to clarify for the layman reading, many are winnable.

It depends on the locations of the kings relative to the pawn ( you generally want it in-front of your own pawn ), and the concept of opposition.

acchow
0 replies
1h33m

How can there be a win when you hit a stalemate? The players keep repeating the same cycle of moves for years until one dies of old age?

charlysl
0 replies
14h10m

Trying to force or avoid stalemate is a huge motive in top level endgames though regardless of whether they actually end in stalemate or not.

btilly
3 replies
22h53m

And what happens if you wind up with king and rook vs king and rook?

Some positions simply do not allow for a win. Yes, you could say do it on time. But then it becomes about mechanical dexterity as people try to be faster than their opponent in a pointless piece shuffle.

neysofu
2 replies
22h22m

Yeah, I didn't think it through. I'd imagine such a rule change would still make draws significantly less likely though, right?

reaperman
0 replies
3h16m

The proposed rule change doesn't make sense, so I can't say what the ramifications would be. Charitably - you're now suggesting that others invent and propose a rule-change which makes draws significantly less likely.

There are no "on-board" rule changes you can make which won't destroy the game of chess. Any rule-changes have to be "meta" changes affecting points in a tournament, the ELO system, or ways to encourage players to play a wider variety of opponents. That's why everyone's talking about what it might look like to modify the points system in tournaments, because it's the most practical thing to actually change.

acchow
0 replies
1h32m

Can you provide an example of a draw/stalemate that would result in a winner with your rule change?

BurningFrog
2 replies
22h57m

There are many situations when this is for all practical purposes impossible.

For example a King vs King endgame. Even really weak players will never accidentally put their king next to the opponent's.

debugnik
0 replies
20h28m

Even worse, it's an illegal move to leave your own king in check, if I recall correctly, so that simply can't happen, not even by accident. The only possible outcome for king vs king is a draw. Unless we were to modify even more rules, of course.

whimsicalism
0 replies
22h56m

At least as described, it would not be sufficient to remove draws from the game, but it would remove stalemates.

wh0knows
0 replies
22h47m

If you have insufficient material how can you capture the king? Checkmate is by definition one move before forced capture of the king, the game doesn’t change by making it end one move later.

thom
0 replies
10h29m

I think you’re arguing for the abolition of stalemate (and certain kinds of pins), and that’s totally reasonable. This doesn’t solve drawishness in general though.

paulddraper
0 replies
16h48m

Capturing the king changes nothing, except stalemate.

(Which affects some draws but not most.)

dfan
6 replies
23h35m

One issue with this is that it encourages collusion. If you're a top GM playing someone of equal skill, it's +EV to agree to flip a coin beforehand to determine who will win (and then play a fake game) rather than playing it for real.

Some chess tournaments have experimented with giving 1/3 point for draws instead of 1/2 and it didn't really change much. Mostly it acted as a tiebreaker, which you could have done by just using "most wins" as a tiebreaker anyway.

My favorite idea (not mine) for creating decisive results in chess is that when a draw is agreed, you switch sides and start a new game, but don't reset the clocks.

paulddraper
4 replies
16h49m

But most tournaments don't have players playing each other an even number of times.

Any sport can have a thrown match. A la Rocky.

worddepress
3 replies
10h49m

The difference here is you are not throwing a match for outside money. You are actually doing something in your interest and probably not against the rules (??) so you are just playing the game (the new game) as intended.

Might be an interesting variant of chess where 2 players just decide how much of the point they get each via negotiation, and if they disagree, they go to "court" by playing the chess game.

michaelt
2 replies
10h25m

Why would it be in your interest to intentionally lose and get 0 points?

reaperman
0 replies
3h21m

It only makes sense if you are playing multiple games against the same opponent. Let's say you get 2 points for a win, 0.5 points for a draw, and 0 points for a loss. If you draw both games you both get 1 point. But if you win one and lose the other, you'd each earn 2 points instead.

TheCleric
0 replies
6h4m

Because it averages out. If it's a true coin flip, half the time you'll get 1 point and half the time you'll get 0, so the expected value is 0.5 points per game. A draw (at 0.2 points per draw) would only yield you 0.2 points per game. So if there's a good chance you'd draw anyway, the coin flip is the higher payout.

masklinn
0 replies
10h30m

“Some chess tournaments” doesn’t change ingrained habits: if players are training for and in the mindset of drawing for safety, they’re not going to flip on a dime unless the incentives are massive.

whimsicalism
0 replies
23h51m

I think it is somewhat intrinsic to chess that it makes sense to go for draws as Black in top-tier play.

BurningFrog
0 replies
23h40m

Football (Soccer) did something similar.

Before that it was 2 points for a win, 1 point each for a draw.

In 1981 they made it 3 points for a win, and the sport has had substantially more offensive play since.

tptacek
6 replies
1d

online-go.com

anononaut
5 replies
1d

OGS is definitely the best server in the West. It deserves all the patronage it gets and more. I wish the AGA was more supportive of it rather than KGS.

naet
2 replies
23h57m

KGS is still pretty great.

OGS might be more accessible to new players with one click sign in and a better web app, but I think KGS has a higher population of true dan+ strength players, and has a stronger "culture" around community reviews and studying.

It used to be even better, but there are fewer people playing on KGS now than at its previous peak.

tptacek
1 replies
23h47m

I found the culture on OGS, particularly wrt moderation, to be pretty great (as a newcomer, and I exclusively play 9x9).

I've read about KGS but I've never figured out how to engage with it. (I know it to be the OG, srs bzns venue, though).

slowmovintarget
0 replies
21h31m

For KGS you had to bring your own client, like cgoban, if memory serves. They have a web client now, but the Java client was what I played on for a while.

Alex3917
1 replies
23h5m

I wish the AGA was more supportive of it rather than KGS.

Out of curiosity, why do you like OGS more? I find the UX of KGS to be way more intuitive.

anononaut
0 replies
4h48m

I find everything about OGS superior to KGS except for the quantity of strong players. In particular, its active development, community involvement, and modern tooling all make it more appealing. KGS is very closed source, running ancient software on ancient hardware, very static, very DIY and individual. It's about the game of go and little else. That is admirable in some regards, but not suitable for a growing niche community that is Western go.

jsheard
6 replies
1d

Just look to Chess. The top players today are way better than any of the greats before, because they can train against computers and know exactly where they failed.

AlphaGo isn't available for anyone to train against like Stockfish is though, what are Go players using? Has another powerful Go engine been developed since then?

espadrine
2 replies
1d

KataGo is an open-source engine derived from AlphaGo's approach, but with a number of tricks so that it trained faster: https://katagotraining.org/

It likely surpasses AlphaGo, and just like Stockfish, it speaks a standard protocol that can hook into many user interface apps: https://github.com/lightvector/KataGo?tab=readme-ov-file#gui...

From those technologies, also came an interesting visualisation of how human players changed their habits following AlphaGo: https://drive.google.com/file/d/16-ntvk3D1_pgjJ7u64t4jMYMh0z...
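
To make the protocol point concrete: where chess engines speak UCI, Go engines including KataGo speak the text-based GTP protocol over stdin/stdout, so any frontend (or a few lines of script) can drive them. A minimal sketch, assuming KataGo is installed; the model and config file names below are placeholders, not real paths:

```python
import subprocess

# Start KataGo in GTP mode (the model and config paths here are placeholders).
engine = subprocess.Popen(
    ["katago", "gtp", "-model", "model.bin.gz", "-config", "gtp.cfg"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def gtp(command):
    """Send one GTP command; a GTP response is terminated by a blank line."""
    engine.stdin.write(command + "\n")
    engine.stdin.flush()
    response = []
    while (line := engine.stdout.readline()).strip():
        response.append(line)
    return "".join(response)

print(gtp("boardsize 19"))
print(gtp("play black Q16"))   # place a black stone on a 4-4 point
print(gtp("genmove white"))    # ask the engine for its move
gtp("quit")
```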

mafuy
1 replies
12h49m

It likely surpasses AlphaGo

If I remember correctly, a recent estimate is around +1000 Elo compared to AlphaGo.
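
For a sense of scale, the standard Elo expectancy formula (a property of the rating system itself, not a claim about these specific engines) converts a rating gap into an expected score:

```python
def expected_score(rating_gap):
    """Expected score of the higher-rated side under the standard Elo model."""
    return 1 / (1 + 10 ** (-rating_gap / 400))

print(round(expected_score(1000), 4))  # 0.9968 -> roughly 997 wins per 1000 games
```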

SonOfLilit
2 replies
1d

We use KataGo and sometimes Leela Zero (which is a replication of the AlphaGo Zero paper). KataGo was trained with more knowledge of the game (feature engineering and loss engineering), so it trained faster. It was also trained on different board sizes and to play to get a good result even when it's already behind or ahead.

KaTrain is a good frontend.

kadoban
1 replies
1d

KataGo was trained with more knowledge of the game (feature engineering and loss engineering), so it trained faster.

Not really important to your point, but it's not really just that it uses more game knowledge. Mostly it's that a small but dedicated community (especially lightvector) worked hard to build on what AlphaGo and LeelaZero did.

Lightvector is a genius and put a lot of effort into KataGo. It wasn't just add some game knowledge and that's it. https://github.com/lightvector/KataGo?tab=readme-ov-file#tra... has a bunch of info if you're interested.

SonOfLilit
0 replies
23h44m

I wasn't at all trying to say his work was simple. I was trying to say "DeepMind were trying to build an AI that gets good at games without anything in its structure being specialized for the game; lightvector asked what if we did specialize the model for Go". And he did some wonderfully clever things.

hibikir
5 replies
23h58m

The defensiveness has absolutely nothing to do with better computers and the improvements in play that came with them, but with tournaments where risk taking is an economic disaster. As others have said, there aren't massive numbers of ties in the candidates tournament, because the difference in value between being first and second is so massive that if you aren't first, you are last.

Compare this to regular high level chess in the Grand Chess Tour: It's where most of your money is going to come from if you are a top player. Invitation to the tour as a regular is by rating, and there's enough money at the bottom of the tour that the difference between qualifying or not is massive. Therefore, the most important thing is to stay on the tour. Lose 20 points of rating, and barring Rex Sinquefield deciding to sponsor your life out of the goodness of his heart, you might as well spend time coaching, because there are so few tournaments where there's a lot of money.

This also shows in the big difficulties for youngsters that reach 2650 or so: They are only going to find good enough opponents to move up quickly in a handful of events a year where people with higher ratings end up risking their rating against them. See how something like the US championship is a big risk for the top US professionals, because all the young players that show up are at least 50 points underrated, if not more.

This is what causes draws, not computer prep. Anand was better at just drawing every game in every tournament back when he was still on the tour, and yet computers were far worse than today, especially with opening theory.

jimbokun
1 replies
16h40m

So they need to mandate a promotion and relegation system for the top levels. Force players in the top flight to beat at least some of their opponents, or get replaced by top players in the next lower tiers.

I think that would increase spectator interest even more. In football, relegation battles can be almost as compelling as the title race.

doetoe
0 replies
16h7m

Another thing that was done in football, and could be done in chess as well to reduce the number of draws, was to keep a draw at 1 point but raise a win from 2 points to 3. That change came in the 90s (earlier in England).

ummonk
0 replies
23h7m

And it simply doesn’t have to be this way. The top tournaments could just use a prior qualification tournament with an open Swiss. Then invite the top finishers from the open Swiss to participate in the round robin. Can reserve an invitational wildcard or two but the rest should have to earn their place.

santoshalper
0 replies
22h54m

Very insightful!

pmontra
0 replies
10h47m

I think that tennis solved the problem by not using an Elo-based score but giving points for the number of rounds a player wins in a tournament. The most important tournaments give more points. All points are lost after one year. Of course tennis and chess differ in a fundamental way: there is no draw in tennis and tournaments are basically never round robins. The ATP Finals have a couple of round robins before the semifinals. They give points for the wins.

So maybe in chess they could give points for each win, less than half of those points for a draw, zero for a loss.

Tradition is very important, so they should keep the Elo rating and keep updating it according to who wins against whom, but qualification for tournaments and seeding (if that's a thing in chess) would be based on the other score. There could be wild cards to let some strong or popular players play even if they don't have a good score. Tennis pro associations have provisions in place for players that are forced to miss tournaments because of injuries, etc.
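
A rough sketch of the rolling, tennis-style scoring described above; the tier names, point values, and draw fraction are invented here purely for illustration:

```python
from datetime import date, timedelta

# Hypothetical per-win point values by tournament tier (invented numbers).
WIN_POINTS = {"major": 100, "regular": 40}
DRAW_FRACTION = 0.4   # "less than half of those points for a draw"

def ranking_points(results, today):
    """results: list of (date, tier, wins, draws); points expire after one year."""
    cutoff = today - timedelta(days=365)
    total = 0.0
    for played_on, tier, wins, draws in results:
        if played_on >= cutoff:
            total += wins * WIN_POINTS[tier] + draws * WIN_POINTS[tier] * DRAW_FRACTION
    return total

results = [
    (date(2024, 1, 10), "major", 5, 3),    # recent: counts
    (date(2022, 6, 1), "regular", 7, 2),   # older than a year: expired
]
print(ranking_points(results, date(2024, 4, 5)))  # 5*100 + 3*40 = 620.0
```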

fryz
2 replies
1d

FWIW, I find the classical chess tournaments with the super GMs to be fairly interesting, if only because the focus of the games is more about the metagame than about the game itself.

The article linked at the bottom of the source is a WSJ piece about how Magnus beats the best players because of the "human element".

A lot about the games today is about opening preparation, where the goal is to out-prepare and surprise your opponent by studying opening lines and esoteric responses (an area where computer play has drastically opened up new fields). Similarly, during the middle/end-games, the best players will try to force uncomfortable decisions on their opponents, knowing what positions their opponents tend not to prefer. For example, in round 1 of the Candidates, Fabiano took Hikaru into a position that had very little in the way of aggressive counter-play, effectively taking away a big advantage that Hikaru would otherwise have had.

Watching these games feels somewhat akin to watching generals develop strategies trying to outmaneuver their counterparts on the other side, taking into consideration their strengths and weaknesses as much as the tactics/deployment of troops/etc.

zer0-c00l
1 replies
23h41m

https://online-go.com/ is the easiest place to get started as a western beginner. The far more active go servers are Asian and have a higher barrier to entry in terms of registration, downloading the client, and dealing with poor localization. (Fox Weiqi, Tygem, etc.)

dunefox
0 replies
12h42m

OGS is great. There's an android app as well.

Bootvis
1 replies
1d

Second round of the Candidates tournament played Friday had 4/4 decisive games[1]. In general, a tie might be the most common result but even at the highest level there tend to be chances for both sides.

[1]: https://lichess.org/broadcast/fide-candidates-2024--open/rou...

mtlmtlmtlmtl
0 replies
23h49m

It's really up to the players. SuperGMs these days are somewhat addicted to draws because it's a very safe result in a tournament setting and in terms of rating. Therefore these players tend to favour less risky and more calculable openings. They care more about avoiding a loss than they do about winning.

The idea that the large number of draws is because players are so strong now is mostly a myth. It's really just psychology and game theory at work.

For a perfect illustration of all my points, look at Aronian vs Grischuk from the 2018 candidates tournament. Here both players chose to play into complications, and the resulting game was wildly complex, with both players making several suboptimal moves simply because the position was just too complex even for two of the strongest calculators in the game at the time.

And in the end, they still ended up constructing a draw by repetition when all 3 results were still possible. Both players had good winning chances, yet the fear of losing finally overtook them and they collectively bailed out of the game.

It's not that players are now so strong it's almost impossible to win, the players just aren't as willing to seek out the necessary positions.

yeellow
0 replies
23h56m

I recommend GoQuest (mobile app), and playing 9x9 go. I used to play on KGS, but it is less crowded now (the problem is that there are too many servers: OGS, IGS, Tygem, WBaduk, etc. and no one dominates, so you wait for a game, you need a rating, etc. Most are not very modern, mobile unfriendly, etc.). Also 19x19 takes too much time for me compared to chess; 9x9 is perfect, and GoQuest has many active players: after a few seconds you get a match (they offer 13x13 and 19x19, but those are less active I suppose).

veunes
0 replies
3h3m

The ability to train against powerful computer programs has indeed elevated the level of play in many games.

timetraveller26
0 replies
1d

Don't know about go, but Lishogi is Lichess for shogi (Japanese chess)

thatswrong0
0 replies
1d

I wish Chess960 was more popular for this exact reason. It’s super fun to watch and play compared to normal Chess… basically all I do with my friends

ricefield
0 replies
14h51m

OGS is the closest thing I’ve found to lichess but it’s quite good! https://online-go.com/

pushedx
0 replies
23h32m

KGS is where I used to play, this is the homepage:

https://www.gokgs.com/

and this is the web client:

https://shin.gokgs.com/

The homepage hasn't had a redesign since 2007 at the latest, but the community is great and there are top players on there.

Pet_Ant
0 replies
23h47m

That said, because they've gotten so good, chess at the top levels is pretty boring

Yeah, I feel the same thing about Magic formats when the pros play. When a format is new, people are still discovering it, and they have to rely on their gut and make educated guesses. That's when it's fun to play and watch.

Buttons840
0 replies
14h15m

One nice thing about Go is there are no ties. This is offset, though, by how boring the endgames are and by having to count. Chess has explosive and exciting endings; Go just kind of fizzles out at some point.

Angostura
0 replies
1d

The Queen's Gambit turned quite a few of my daughter's friends on to chess

lordnacho
23 replies
1d

This is the tip of the iceberg, right? It's foreshadowing AI helping experts become better. I can see it happening in a lot of creative fields, including software. Perhaps this is where it really pulls the experts away from the juniors, because only experts will be able to judge whether the AI has helped them create something actually good.

gcanko
10 replies
1d

It's exactly like the invention of agriculture. Not having to hunt for food gave more opportunities for intellectual pursuits because of having more free time.

rwbt
8 replies
1d

I'm skeptical of this argument. It gave free time to some people i.e. the landed gentry but also created the toiling peasants and a hierarchical civilization.

ohyes
6 replies
23h52m

Toiling peasants had more free time than we do today.

ummonk
0 replies
23h0m

The amount of toiling they could do without dying was calorically limited. Having lethargy induced by a shortage of food doesn’t necessarily mean a preferable lifestyle.

rwbt
0 replies
23h48m

Hunter gatherers had more leisure than farming peasants. Surely, one can spot the trend.

nicklecompte
0 replies
21h18m

"Who's going to teach those idle medieval peasants to read, Ben? Augustine-man?!"

brailsafe
0 replies
22h20m

Pfft, disagree. Got laid off a year ago and have had nearly 100% free time since

TulliusCicero
0 replies
23h37m

[citation needed]

TulliusCicero
0 replies
21h7m

As I understand it, this is largely inaccurate. People just read "days off" as "more time", even though peasant farmers would need to engage in a lot of labor around the farm or household even on "days off" (your cows and chickens don't care that you're on vacation).

Of course people still do some chores today even on days off, but it's a lot less than you need to do on a farm, ask basically any farmer.

choilive
0 replies
1d

Many other types of governance were enabled by the agricultural revolution, not just feudalism.

rcxdude
0 replies
12h6m

No, it just meant you could support more people on the same amount of land. Non-mechanised agriculture is very labour-intensive compared to hunting.

gwern
5 replies
18h59m

It's foreshadowing AI helping experts become better.

The humans are still way worse than the Go programs. People are still willing to pay them to play a game as entertainment. Are lots of people willing to pay you to do whatever it is you do even when AIs do it much better, out of sheer entertainment value & sentimentality? If they are willing to pay you in particular, how many other people like you are they also willing to pay for, and is that number much greater than or much less than the current number paid to do it?

nottorp
4 replies
15h21m

People are also cheering for Usain Bolt or whoever is the speediest runner this year, in spite of being able to outrun him by simply getting into a car...

sandspar
1 replies
14h9m

I think about this at the gym. I feel very proud of myself when I can go from being 1% as strong as a front-end loader to being 1.1% as strong. The king of my gym, the most high status guy there, is up to maybe 3%.

nottorp
0 replies
10h44m

That 0.1% is a 10% improvement for you, which is great progress!

gwern
1 replies
6h20m

"If they are willing to pay you in particular, how many other people like you are they also willing to pay for, and is that number much greater than or much less than the current number paid to do it?"

There are not many Usain Bolts out there: just one, as a matter of fact. Who is Usain Bolt #100? Or #1000?

nottorp
0 replies
3h13m

A good bunch of the people he beat were also paid.

Arrath
1 replies
20h47m

Maybe I'm too small minded but I would love to see AI like this enhance...well, the AI in games in general. I long for the day where I no longer play Civilization or an RTS against AI that has perfect knowledge, or is given handicaps to allow it to be competitive.

sandspar
0 replies
16h46m

Flight simulators are set to benefit from AI. Imagine talking to an AI Air Traffic Control that understands natural language. Imagine walking down the aisle of your plane and overhearing AI people's conversations.

veunes
0 replies
2h58m

Absolutely! We're just beginning to scratch the surface of how AI can augment and enhance human expertise in various fields

jasonfarnon
0 replies
21h48m

I can't think of any reason we should be so lucky that AI will have a ceiling somewhere between human "juniors" and human "experts"

asolove
0 replies
1d

Go is a constructed game with a precise definition of the rules and victory.

The real challenge with AI helping experts is whether it can correctly help them balance their own value function for what "better" means. And whether we can still train human experts who can think about that independently with good judgement, if we've automated away, with black boxes they can't interrogate, the things that beginners would normally do to train their judgement.

Will be interesting for sure.

LouisSayers
0 replies
18h54m

I'd say GPT4 has definitely helped me become a better programmer - I'm able to ask it questions, learn how I can refactor my code better, or approach a problem in a way I might not have considered.

It does hit its limits, but it's been so useful - it's a funny cycle of training AI and having it train us, a great symbiotic relationship.

matthest
18 replies
1d

Entertainment is one industry that will survive post-AI.

We're still going to want to watch humans play sports, music and video games. We're going to want to watch humans act, cook food, and make vlogs.

The chess industry is growing rapidly, even though it has already been conquered by AI: https://www.einpresswire.com/article/649379223/chess-market-...

bongodongobob
8 replies
23h5m

As a lifelong musician, AI music has reached a level where it's able to write absolute bangers AND soulful music with feeling. Last weekend I listened to AI music all weekend long in absolute shock.

People will still want to go to concerts, but a lot of that music will either be written by, or inspired by AI.

This is one that I generated that has me convinced anyway. I made about 100 versions in different styles, but this makes my hair stand up.

https://app.suno.ai/song/77d97c83-8633-47d2-80b2-fe47952a6bc...

And a stoner rock banger:

https://app.suno.ai/song/2071317f-a5ba-4f1f-b77b-048d6ff03a9...

I mean, even if you don't think it's perfect, if you re-record those, no one's going to know it's AI.

tyg13
5 replies
22h46m

Am I stupid, or is this just AI covering Blackbird by The Beatles in a couple different styles? Is this supposed to be an example of AI writing "absolute bangers" or "soulful music with feeling" because I just don't see it.

It seems like the exact "paint-by-numbers" stuff that we always see with AI: it's capable of taking existing art and mashing it together, but there's nothing interesting or novel being created here. It's the pinnacle of carbon-copy, technically impressive but still soulless art.

If you played me either of these songs without telling me they were AI, I would think: OK, weird cover of Blackbird? Not particularly moving, but perhaps a stepping off point to something more. What would interest me more than the music itself is the story behind why the song was covered, who was performing it, what were their intended emotions in creating it? And of course, since it's a cover, if they had any original material of their own. Knowing that it's just an audio file being synthesized by an AI takes all of the enjoyment out of it for me (not that it was particularly good to begin with).

Music is not simply good because of the sound entering your ears. The story, the artist behind it, the intended emotion and artistry is part of the experience. AI does not deliver on this, and I doubt it ever will, because human connection is what underlies those extra-musical qualities. I care about the people who made the music; I will never care about the machine.

bongodongobob
4 replies
18h57m

Those are two examples I thought I'd give rather than linkbombing.

Yes, I chose the lyrics to Blackbird because they are short, it generates in 2:00 chunks, and they don't have obvious meter. It's not like a limerick or something. I wanted to test the musical capabilities, I wasn't interested in generating lyrics, though that can obviously be done.

As someone who has studied and played music professionally my whole life, this isn't paint by number, at least in the way you think it is.

Most music is very similar. You can write 90% of most popular music using the 6 diatonic trichords, and 80% probably with only 4. There are also only so many ways to rearrange those.

This is not using a book of chord changes or melodies. I can promise you that after listening to 100s of versions of Blackbird, Yesterday, and the ABCs, I do not see the sort of pull-the-changes-from-a-database behavior that you're implying. In fact, I'd wager I could probably find a dozen songs that use the exact same chord changes and you wouldn't even have realized it. I know because it's a great wedding gig trick, and we'd do it all the time.

I don't know what changes you think were carbon copied in the first link. It shows really good songwriting skills. Use of tension and surprise. "It sounds like everything else in that genre." Yes, that's what a genre is.

It also shows skill at vocal phrasing. This is really difficult and makes or breaks any song you try to write.

Explaining how difficult it is to write good melodies while putting a hard syllabic constraint on it is beyond the scope here. It's not easy or obvious and usually sounds like shit. Like, try to sing Sweet Home Alabama along with the chord changes at the end of "Layla". After doing essentially that for years (that's exactly what songwriting is) I doubt I could do it and not have it sound idiotic. This thing can do that.

Here's an example of it mimicking Russian to English phrasing: https://app.suno.ai/song/0d2d817f-4bf3-4837-85ab-9ff13abe9b4...

David Bowi-ish phrasing: https://app.suno.ai/song/a6b9e419-2e0f-4208-b66e-929b2076d96...

West Coast Hip-Hop banger alert if you're over 35 https://app.suno.ai/song/82e05d5b-7e0f-4715-8ed4-5f3d22fb81d...

Acid Funk https://app.suno.ai/song/de174b94-1758-4fae-b497-93b79f384a2...

Like I said, I have about 100 of these. This shit is nuts.

Here's some ABC's

Bossa Nova https://app.suno.ai/song/0fe61c85-62be-4327-9f05-5b0865353a6...

90's Grunge https://app.suno.ai/song/321a48e1-611b-4ec6-af0e-6e88815621c...

Baroque https://app.suno.ai/song/68257dae-031f-4910-a013-9bc0281cee2...

If you're thinking "Oh that just sounds like the Brandenburg Concertos" YEAH. THAT'S MY POINT. This level of mimicry is brand new. I've never seen anything close to doing anything like this. If you have, I'm all ears.

Now since you think originality is important, here is a poem I wrote for a girl I dated that worked at a coffee shop. Never intended it to be a song, but I love it.

https://app.suno.ai/song/9346c871-5e7f-439a-9134-d876aca7086...

https://app.suno.ai/song/2be696db-3b3b-48df-9075-66d3388cc11...

When I showed all this stuff to my musician friends this weekend (some who contract with Disney, Netflix, write scores, actively touring, etc), the reactions were actual tears, complete disbelief, shock, and existential dread.

$1000 says there's no way you could tell me whether this is a Chopin original or not played amongst others, and that's my point. And if you say "Well sure, but that's not really that impressive because computers", frankly, you don't know what you're talking about and have already made your mind up.

https://app.suno.ai/song/bab48740-b977-4ee3-bdb8-0bc85995047...

sandspar
2 replies
13h39m

Thanks for the write up. Where do you see this going? For example, what will Spotify's front page look like in two years?

bongodongobob
1 replies
12h30m

I think Spotify will start to fade away to personalized music, or adapt to it and become a search engine of generated music.

My mom likes Kansas, Chicago, early Genesis, ELO, and Rush. She's babysitting her granddaughter. I made this for her tonight in the last two hours, splicing and nudging the prompt.

70s Progressive ABCs:

https://app.suno.ai/song/d8211841-2c75-4c8d-ba01-4cff9d35dcb...

SunghoYahng
0 replies
1h3m

That makes no sense. People don't care that much about how good music or songs are. If they did, why do you think the songs on the best playlists keep changing? What matters is being able to share the same knowledge and talk about that music or song with someone. So AI-generated music can have no value. In music, narrative isn't something supplementary. People want narratives, and music is the medium for that. (But outside of Spotify, maybe this won't be the case. People aren't interested in the narrative of such music as background music or OSTs.)

csa
0 replies
15h9m

Thank you for all of this.

circlefavshape
1 replies
8h55m

Holy crap

Lifelong musician and songwriter here too. That first one is astounding - the singing :o

Any chance you'd share the prompt? Even privately ... fromhn at demersal dot net

bongodongobob
0 replies
8h26m

The prompt was just the lyrics and "Piano singer/songwriter" for the genre. That's it. This was probably the best out of 8 tries. I bought a stupid amount of credits if you want me to run anything through it, shoot me a message.

suyash
4 replies
1d

That's the next one on the chopping block. Wait till Sora and related services come out: it's all going to be digitally generated and will look just as real, so yes, it will be content about humans, but without humans doing/creating much of it.

tylerchilds
3 replies
23h13m

as both an engineer and an entertainer, perception is reality. on the one hand, a computer system could 1:1 recreate a stunt a human did and elevate it to a stunt a human couldn’t do.

people witness entertainment to trick their minds, and the disbelief and astonishment in the human condition hinges on “there’s no way _i_ could do that” and once they know they literally could not have done it, they’re not impressed. they may pay to be fooled once, but never twice.

there’s a market for what you’re talking about, but that space is b2b and not c2c, which is where entertainment money flows.

tl;dr dollar for dollar, ai vs taylor swift, taylor swift wins every time, no contest.

hackable_sand
2 replies
21h0m

they’re not impressed

This is subjective. Impressive things impress. It can be the hand-drawn impossible stunts from the 30's, or a CGI stunt from 2019.

tylerchilds
1 replies
18h40m

totally agree, but what i’m getting at is the creative core of an expression. there’s an aspect of a piece that impresses and i’m claiming that exists as something the viewer appreciates and imitates in their mind’s eye.

as a programmer, i create many things that impress people, but when i show them the methods of that creation, i can palpably feel their excitement wane as they lose interest in the nuance of my execution.

beyond the surface, there’s an aspect of being impressed that is also the desire to take part in the recreation of it all.

my claim in the first person, “i’m visually impressed by many image generators, but i’m not interested in fiddling with knobs, buttons, and strings to recreate the image in my mind’s eye”

hackable_sand
0 replies
16h33m

I just think that art economy should not be beholden to critical consensus.

Art is a continuum of multimedia. At some abstraction you will be able to submit your piece into the collective and draw on increasingly precise inspo.

ultra_nick
0 replies
1d

There are content farms on Facebook churning out fake grandkids for old folks to fawn over.

mike_hearn
0 replies
13h17m

Really? I'd think it's the opposite, I'm expecting non-sports entertainment to be largely AI dominated within 20 years.

High budget movies already often have fully CGI characters in which the only human element is the voice, and now that AI voice warping is nearly perfect, it's an obvious move to eliminate the continuity risks by making voice actors fully interchangeable, or even fully synthetic.

And then even in non-Pixar style movies, many scenes use CGI body doubles for stunts, fully CGI scenes and so on. The human actors often don't appear in all their own scenes, or they're even brought back from death to keep working.

So that leaves things like music, sports, etc. Some music genres totally defocus the humans, like electronic music. They go via frequently changed pseudonyms and you never really see them perform in person. Sports I can see remaining fully human.

chasd00
0 replies
23h55m

humans will always have contests to see which human or group of humans is the best at something and it will always be entertaining to watch the contests. op is right.

Ekaros
0 replies
15h55m

I am not sure why I would care about cooking, for one. If AI can recreate a video with the same recipe, but I could set constraints on ingredients, time, techniques and such, it would likely be the preferable option for many, myself included.

Human acting? Eeh, good enough AI and video could do it for me. Same goes for vlogs. Content tailored for me would trump any existing vlogger.

Mtinie
12 replies
1d

This supports my hypothesis about human-created art, post-AI.

People are deeply concerned about how their livelihoods and identities will survive the next few years. I get it, and while there’s certainly a level of existential dread that feels reasonable, I don’t see many people yet discussing what the visual arts industries will look like on the other side.

If Go play is in any way a creative exercise—which I’ve heard it is—then I’m super interested to see the state of humans in the arts 24 months from now.

smokel
8 replies
1d

Most of contemporary art is unaffected by the current AI craze.

On one hand, the art world has been steadily pushing boundaries since the 19th century, and computer technology is just one blip on the vast radar of interesting subjects (other fashionable ones being gender, colonialist history, social practices, and physical properties of paint).

On the other hand, art is mostly created by artists who were professionally trained as artists, i.e. not as scientists. Knowledge about computer technology is typically rather limited with both artists and collectors, leading to fairly bland stuff, or properly misguided hypes such as NFTs.

Mtinie
5 replies
23h19m

Most of contemporary art is unaffected by the current AI craze.

The illustrators and digital artists I know would generally disagree.

As an abstract painter, I agree with you.

Significant genre specificity.

cageface
2 replies
19h46m

Most of the AI generated "art" I've seen I'd classify as more craft than actual art. Art is supposed to express something and communicate and so far I haven't seen any AI art that really moves me or says anything insightful about experience or existence.

When I scroll through the latest highly rated work on Midjourney, for example, I'm reminded mostly of tacky poster shop stuff.

Mtinie
1 replies
19h30m

That’s fair but I don’t understand the relevance to my comment.

Illustrators and graphic artists are in a hard spot given the commercial work they do. I’ve worked in this industry in the past and can attest that many of the contracts I executed on cared less about the originality of the output and more about the specifics of the prompt.

Generative AI is enough for a lot of clients, if the price is right, even when the output is subjectively bad.

cageface
0 replies
19h28m

Yes sure the kind of art that pays the rent for commercial artists is definitely seriously threatened by AI.

I'm inclined to say that AI companies should have to pay for the training data they use although that does seem to mean only companies with billion dollar warchests can train AIs.

smokel
0 replies
14h37m

I totally agree. There are some confusing differences in definitions of art :)

I feel sorry for the illustrators, and wonder how they'll be able to sustain their creative passions.

numpad0
0 replies
10h10m

I doubt illustrators and digital artists (and their patrons) actually disagree with that line; they just hate AI image generator outputs and want them taken down. The amount of unintended strong negative sentiment a work incites isn't a proxy for its artistic value. I don't get why it's assumed to be one, left and right.

Mtinie
1 replies
20h48m

On the other hand, art is mostly created by artists who were professionally trained as artists, i.e. not as scientists. Knowledge about computer technology is typically rather limited with both artists and collectors, leading to fairly bland stuff, or properly misguided hypes such as NFTs.

Having reread this section, and considered it, I’m going to hard disagree. You are severely underrating the technical capabilities of artists. Historical and contemporary.

smokel
0 replies
14h40m

Note the "mostly" and "typically". There are some amazing artists who really know what they're doing.

The chances of an artist having a PhD in machine learning and a masters at Goldsmiths are statistically speaking very low.

jsheard
2 replies
1d

There is a key difference in the way these models are trained - Chess and Go have clearly defined win conditions, so a model can be taught to explore the possibility space and try to reach victory by any means necessary, potentially with strategies which have never been seen before. With art on the other hand there is no objective measure of quality, so the models are instead taught to treat already existing art as the benchmark to strive towards, making them trite by nature.

As I see it AI can absolutely find innovative solutions, but only if you can clearly and explicitly define the problem it needs to solve.

Mtinie
0 replies
23h9m

I use diffusion models and other generative tools to give me inspiration for works. While these aren’t solutions, per se, the tools do help me define (and refine) my approaches and offer visual options to consider.

CamperBob2
0 replies
22h25m

With art on the other hand there is no objective measure of quality, so the models are instead taught to treat already existing art as the benchmark to strive towards, making them trite by nature.

Isn't this reminiscent of the arguments that were made at the dawn of photography as an art form? Some were afraid that portraiture was finished as an art form, but we got Impressionism, Cubism, and a host of other innovative forms to take its place. Never mind that portraiture was not in fact killed by photography, nor was any other visual form.

Others swore that cameras and film would never be valid implements of art, but they got awfully quiet when Adams and Weston and others showed up on the scene, and you don't hear much from them at all these days.

If nobody was afraid of AI -- if nobody was screaming bloody murder about how urgent it was to stop it -- only then could we safely say that it will have no role or relevance in art.

tptacek
11 replies
1d

When you read Go strategy resources, you see a lot of things divided into what best practices were before AlphaGo and what they are now. It's a whole big thing.

It is still the case, though, that AI dominates humans at Go; humans didn't get so creative about the game that they put AI back on its heels (though some did discover exploitable AI "strategy bugs").

paulcole
5 replies
1d

This is true in Scrabble as well.

When I was playing seriously there were strong players who had played a ton over the board and had deep intuition about what made plays good and what made plays bad. In the late 1990s/early 2000s there started to be a lot more in the way of computer simulation and analysis, and some very strong computer players.

One (general) example was that older players liked the idea of making longer plays using more tiles to "win" a race to the S and blank tiles (the best tiles in the bag). Computer simulations generally show that turnover (as this is called) isn't optimal and you're better off holding strong combinations of letters rather than playing them off hoping to draw something better.

Now younger players are better than ever because all of their training came with the help of computer analysis and simulation.

Of course in Scrabble a huge part of it comes down to just memorizing the words in the dictionary.

cdelsolar
4 replies
23h40m

AI doesn't dominate people in Scrabble though. The best humans are better than the best AI.

samatman
2 replies
23h30m

I wouldn't have expected that. Is it just a relative lack of interest in building an AI which can dominate Scrabble?

It's a partial-information game, but the search space can't be as big as Go, and an AI has an advantage over human players in that the entire set of valid words can be encoded into a trie or some other efficient data structure; it's never going to forget a word or think it can play one that isn't valid.

My intuition is that AI should be able to crush the best human players at this point in time, but I'm open to being corrected on that if there's some aspect of the game which I'm not modeling correctly.
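
A toy version of that word-list structure (a plain trie over a tiny, made-up lexicon; real Scrabble engines typically use compressed variants like a DAWG or GADDAG, but the principle is the same):

```python
def build_trie(words):
    """Build a nested-dict trie; '$' marks the end of a valid word."""
    root = {}
    for word in words:
        node = root
        for letter in word:
            node = node.setdefault(letter, {})
        node["$"] = True
    return root

def is_valid(trie, word):
    """Exact lookup: the engine can never misremember whether a word is playable."""
    node = trie
    for letter in word:
        if letter not in node:
            return False
        node = node[letter]
    return "$" in node

lexicon = build_trie(["QI", "ZA", "JO", "QUIXOTRY"])      # toy word list
print(is_valid(lexicon, "QI"), is_valid(lexicon, "QAT"))  # True False
```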

ultrasaurus
1 replies
23h13m

That strikes me as odd too -- but it might be because searching a dictionary is such an obvious computer advantage that it's not interesting to optimize. There are only 10 articles on arxiv.org that mention Scrabble vs 100s on Chess

https://arxiv.org/search/?query=scrabble&searchtype=all&sour...

paulcole
0 replies
23h0m

Just finding the highest scoring word won’t make you all that good. If you played the highest scoring play available to you each turn you wouldn’t be that strong of a player. Maybe around top 200-500 or so in the US I would guess? And it’d be a super exploitable strategy by a decent player.

The reason is that you need to apply some rules, like when to trade vs. making a play, balancing consonants and vowels for future plays, what parts of the board are too dangerous to make certain plays in, etc. It’s because of the distribution of unplayed tiles, the high-scoring spots on the board, and the 50-point bonus for using all of your tiles.

Because of that, generally, you’ll do better by building towards a 50-point bonus play every 3 or 4 turns than by maxing your score on each turn.

I’d be curious about letting a human player play with the assistance of the best bot available and seeing how much better that would make them. I guess part of the issue though is that in a 13-play game maybe 3 plays are meaningfully difficult. So it’d take a while to see if the human is improving on the bot or not.

paulcole
0 replies
23h20m

I’ll admit I’m out of the loop but how many people in the world today do you think can beat BestBot over a significantly long series? Do you think there’s going to be a bot that dominates people in series like that?

I’ve been following Mack Meller’s YT channel and he’s getting beaten pretty handily in his series.

I’d put the over/under on people playing today who would beat BestBot in a 100-game series at say 3.5. What side would you take?

nicklecompte
1 replies
21h10m

The "strategy bugs" are a symptom of a more general shortcoming and why 2024 AI is still basically dumber than a mouse.

Keep in mind that if you had a variation of Go where there was a "hole" in the middle of the board, both Lee Sedol and a competent amateur would be able to play competent "Doughnut Go" without any prior experience. But AlphaGo and its successors would certainly make a ton of dumb unforced errors unless it practiced at least a few hundred games. (I am basing this observation on similar experiments with a Breakout AI, not sure if these experiments have been done with Go.)

Mammals, including humans, have advanced brains because we evolved to solve weird and unexpected problems with moderate reliability, not to optimize well-known benchmarks with high reliability. (This is also why plants are green instead of black.) By contrast, AlphaGo is a machine designed to solve a highly specific problem. The whole point of machines is that they dominate humans at specific tasks, otherwise we would just use a human. But we don't describe bulldozers as "superhuman" unless we're being intentionally obscure; the same should apply to AI. Otherwise we risk assuming the AI is capable of things it probably can't do without retraining.

sandspar
0 replies
14h2m

"General-purpose" computer seems like a misnomer.

pa7ch
0 replies
1d

Agreed, but I still think humans should get a little more credit for winning against AI no matter how. It's a competitive game with very simple and clear rules. A hole in AI strategy is a hole, even if quickly patched!

I am still so impressed that Lee Sedol beat Alpha Go 1 game out of 5 way back when AI made its breakout. I was sad he felt so sheepish afterward for losing. In hindsight, I think it was an amazing accomplishment even if today an AI could beat Shin Jin-Seo (#1 player) 100 out of 100 times!

SonOfLilit
0 replies
1d

You also see a similar division between the 19th century and the 20th century, when a player called Go Seigen changed the way the game is played even more, I feel, than AI did (but don't take my word for it; at 7kyu I'm far from qualified to understand how professionals play).

Alex3917
0 replies
22h10m

When you read Go strategy resources, you see a lot of things divided into what best practices were before AlphaGo and what they are now. It's a whole big thing.

Yes and no. The biggest takeaway from AI is that learning all the joseki doesn't actually matter that much, which has freed up players (except for the pros) to spend more of their time focusing on the more fun and interesting parts of the game.

There are a lot of videos showing what josekis and strategies the AI recommends, but as a human you're likely not going to be any better off following them. This is for the same reason why AI analysis of fights is largely useless. That is, the reason why you lost the big fight (and the game) isn't that you didn't find that one obscure 9P move that could have saved you, but rather that you let yourself get cut 50 moves earlier. But the AI will never show you the move where you got cut as the reason why you lost the game; it will only show you the one random move that you'd never in a million years actually be able to find.

This video from Shygost sums up the most important strategy stuff that you actually need to know in order to get strong: https://www.youtube.com/watch?v=ig8cWuDSHTg

timetraveller26
8 replies
1d

I watched the AlphaGo doc and it was really shocking to me when one of the top go players decided to retire because the game was meaningless now that computers could beat anybody.

it's good seeing that that wasn't the case for all players.

Solvency
6 replies
1d

i don't get it, this applies to every single game. you can't beat aimbots in fps, you can even rig any game bot to play perfectly.

that's why you play against HUMANS.

gensym
2 replies
1d

I enjoy rock climbing even though I'll never be as good as a monkey or a mountain goat.

krisoft
1 replies
23h13m

Yeah but the difference is that you were always worse than a monkey or a mountain goat. And those are all worse than someone riding a helicopter to the top. So you never had to change your mindset in this regard.

Imagine if you grew up where there are no monkeys, mountain goats or helicopter rides to the top. You never heard of them; they are not a thing in your world. And you put in hard work to become a very very good rock climber. You kinda fancy yourself an apex climber. Maybe your mate George is a bit faster than you sometimes, but sometimes you are faster than him. Sometimes Sarah beats the both of you, but sometimes you beat her times. You are kinda up there with the best as far as you know. And then suddenly someone brings a monkey to your rock climbing gym and the monkey smokes all of you. It climbs walls much better than you ever could. Now you have to adapt. Will you change your viewpoint and start seeing yourself as “best among humans” and keep competing like that? Or will you see yourself as “clearly outcompeted so badly I might even give up”? Some people will go one way, some the other way. And then the new generation will grow up with the knowledge of monkeys, and they all naturally will be the first kind of people, who understand that they can’t be the best, only the best among their class.

Go players had their “the first monkey shows up in the climbing gym” moment during our lifetime. That is why you see some of the players react like that. That is a very different world from rock climbing where everyone already knows about monkeys, and mountain goats and helicopter rides since forever. Every person currently climbing rocks started climbing with the existence of monkeys, mountain goats and helicopters already incorporated into their thinking way before they climbed their first wall.

hackable_sand
0 replies
20h47m

What is your goal when competing?

At the highest level of play, exploration of the game space eclipses point capture.

Points don't matter because it's play, so your emotional attachment to the outcome is a measure of maturity.

Play with infinite boundaries is pointless, so they let us explore freely without having emotional stake in arbitrary accumulation.

To play with an asymmetrical opponent should be some parts learning with sparse competition to test your knowledge of finite play space.

krisoft
1 replies
23h34m

this applies to every single game

But it did not used to apply for every single game. In fact in Go it was famously true just a few years ago that the best humans were waay waay better than even the best computers.

At uni I hung out with Go players and they had all kinds of theories about why Go is particularly hard for computers. Some of them were quite well reasoned, and some were just a bunch of magical thinking. What was not in doubt is that even at that admittedly medium level, the players were better than the computers they had access to.

Just a few years ago it went from “we are competing to be the best Go players period” to “we are competing to be the best human Go players”. That is a change in mindset, and it seems at least for that one particular player they couldn’t make the change.

Imagine that you grew up in a world where aimbots are just worse than mediocre players, always. And you build up your personal motivations with this fact. And then suddenly the aimbots get better than even the best players. Some systems of motivation will crumble when this happens. Some will manage to adapt. This is just how it is.

hinkley
0 replies
20h16m

A lot of people thought we’d get to 2030 at least before the humans lost. More people thought 2035 or even 2040.

anononaut
0 replies
1d

Pros saw the writing on the wall, but remember that no bot was even particularly close to pro strength until 2016. Go was dogmatically described as still being decades away from bots being able to play competitively. For some professionals like Lee Sedol, the burning desire was to play the best games and best moves possible. For such an abstract game about intuition, seeing it finally be dominated by machines could understandably be crushing.

fjfaase
0 replies
23h37m

Lee Sedol continued playing until the summer of 2019, more than three years after the match. He quickly dropped in strength while other players who were already stronger than him during the challenge kept rising in strength, surpassing all previous players according to the graph shown at the Go Ratings website. https://www.goratings.org/en/history/

Just like in mathematics, many professional Go players peak before 40, after which they slowly become weaker and weaker.

kccqzy
8 replies
1d

Shin et al calculate about 40 percent of the improvement came from moves that could have been memorized by studying the AI. But moves that deviated from what the AI would do also improved, and these “human moves” accounted for 60 percent of the improvement.

I don't often play Go myself but a number of my friends do. Among non-professional players, it is really common to see gameplay that is not as exciting as before, because there's now an easy way: just memorize and copy what the AI does. I don't doubt that professional players still have a ton of creativity, but a lot of non-pros don't really have much creativity, and the whole game becomes memorizing and replicating AI moves.

thomasahle
5 replies
1d

a lot of non-pros don't really have too much creativity and the whole game becomes memorizing and replicating AI moves.

That makes no sense. After 10-20 moves you are surely in a position that has never been played before. How do you memorize moves after that?

hibikir
1 replies
23h42m

You'd be surprised. Joseki are corner shapes, which might interact with other corners in the medium to long run, but whose interactions are way too difficult for any human to understand well. Therefore, you have 4 corners, and it's quite likely that you'll see 4 joseki getting played in any game. Joseki sequences have been studied for a long time, so they can be relatively long: Say, 15+ moves of an avalanche joseki, memorized by both players, and that's just one corner. So even before computers were any good, you could still see pretty iffy players using memorized patterns in every corner for a total way past 20 moves.

roenxi
0 replies
13h52m

If they are iffy players they'll use 4 memorised sequences then enter the mid-game with a losing position. Playing out memorised sequences without considering the interactions the corners have on each other is one of the weights keeping amateurs from moving up to higher ranks.

If you are playing someone who is worse at fighting, then playing good-enough joseki and making up any theoretical difference in the middle game is a fine strategy. But even choosing good-enough joseki requires thought (or instinct) that goes beyond what can reasonably be called memorising. It is critical to recognise when a framework is getting too big and invade before the opportune moment passes.

As thomasahle notes, pretty much every game is unique and a memorised sequence unbacked by an algorithm cannot hope to be optimal.

datameta
1 replies
1d

Perhaps sub-positions still repeat with some regularity? Meaning subsets of the board. I have never played Go though; I've only seen the board and read the rules.

mafuy
0 replies
12h22m

Absolutely so. There are corner patterns but also side patterns and other patterns; all of them are 'joseki', known and often played sequences.

These can be memorized. But that's almost useless without understanding why these patterns are good. You still need to pick a pattern that works well with the rest of the board, in particular which conditions elsewhere on the board influence it (ladders, ko) and you need to know all the punishments for deviations. That's really difficult and often subtle, and it is where strong players as well as AI easily outplay someone who just mindlessly memorized patterns.

tasuki
0 replies
23h1m

The accepted approach used to be that the direction of play mattered. Now the AI has told us that no, just get locally-even results in all corners and you're fine. I never would've guessed!

csa
0 replies
23h36m

Among non-professional players, it is really common to see game play being not as exciting as before because there's now an easy way: just memorize and copy what the AI does

This is just… not true.

Unless one is playing at high dan ranks, it’s trivially easy to induce a “memorized sequence” that your opponent either will not have memorized or that will leave them in a situation they don’t understand well enough to capitalize on.

The “slack moves” in the openings that pros talk about are often worth 1.5 points or less (often a fraction of a point), and that assumes pro-level follow up.

This pro-level follow up is laughably rare outside of strong amateur dan levels and pro levels (and even within those ranks there are substantial differences).

anononaut
0 replies
1d

Before that, weak amateurs were just replicating human joseki. That's nothing new. They definitely give a player a good start, but knowing which to use and when, and of course how to follow up until the game is over is no simple task. It also happens to be the case that AlphaGo, KataGo etc. prefer simplifying the board state. Remove complexity and win only by a thin margin, because that's all that's needed. Memorizing AI preferences is much easier than some of these highly complicated joseki.

intuitionist
5 replies
1d

The blog doesn’t say anything about how this “decision quality” metric is calculated… but presumably it’s using very similar Go evaluation functions to the ones used in the superhuman AI players, right? I think it’s highly unsurprising that humans would improve by that metric — they’re learning from the machine, so of course the machine likes it.

Also, most things in life are not two-player zero-sum games where you can construct an evaluation function and build a “decision quality” metric out of it. So I’m not sure what the takeaway should be in those cases.

SonOfLilit
3 replies
1d

Computers are so much better than humans at Go that the metric for board evaluation applies better to human games than it does to computer games. Just as I'm better at evaluating code written by GPT than code written by senior developers.

Otherwise you'd see players who don't train with computers winning in tournaments against those who do.

intuitionist
2 replies
1d

Yeah, I don’t think the metric is wrong or bad, just that it’s not telling us anything special. Or maybe it’s telling us something about Go AIs (that the “insights” they have are human-comprehensible) but it’s not at all clear that this fully generalizes.

visarga
0 replies
23h56m

Of course it doesn't work that easily in other fields. Basically the Go board is an environment, and the AI model learns by creating its own experiences in that environment.

This can be applied to other kinds of environments, such as code execution, simulations, video games, human-AI chat rooms or robots. But each environment has its own complexity, and searching for good strategies can take a long time. It's the same with scientific research: you've got to validate the theory in the world.

SonOfLilit
0 replies
23h42m

It sounds from your analysis that "better as evaluated by AI" would be true even if it wasn't really objectively better. All I'm saying is that yes, it does mean objectively better in this case.

omoikane
0 replies
21h5m

"Decision quality" appears to be average difference in winning probability between player's move versus a move by Alpha Go (Leela Zero). It's on page 16 of:

https://doi.org/10.2139/ssrn.3893835

Where it says "Measuring the quality of moves".

I found this via citation #31 of: https://arxiv.org/abs/2311.11388

Which was referenced by the second graph in the blog post (they link to Nature, but the paper at Arxiv appears to be the same).
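
For intuition, here is a minimal sketch (not the paper's code) of how such a per-move metric could be computed. It assumes a hypothetical engine wrapper; engine.best_move, engine.win_prob and apply_move are placeholder names for illustration, not a real KataGo or Leela Zero API:

    # Hypothetical sketch: "decision quality" as the average win-probability gap
    # between the engine's preferred move and the move the human actually played.
    def decision_quality(played_moves, engine):
        gaps = []
        for position, human_move in played_moves:
            best_move = engine.best_move(position)                    # engine's top choice (assumed API)
            p_best = engine.win_prob(apply_move(position, best_move))
            p_human = engine.win_prob(apply_move(position, human_move))
            gaps.append(p_best - p_human)                             # 0.0 means the human matched the engine
        return sum(gaps) / len(gaps)                                  # smaller gap = higher decision quality

A gap near zero over many moves would indicate play the engine considers near-optimal, which is roughly what the blog's decision-quality plots appear to track over the years.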

akira2501
5 replies
1d

The other possibility is that it destroyed the incidental dogma that tends to build up in these types of games and human activities. This is why I like the "hacker ethos" as much as I do, it tends to eschew things like "accepted" dogma in order to find additional performance that other people were just leaving on the table out of polite comfort.

JustLurking2022
3 replies
1d

The dogma generally becomes accepted because it outperforms other known strategies. In a game like Go, that could previously take a while because there are so many possible follow-ups that it takes time to accumulate enough data on whether a new strategy is actually decisively better, or just worse but over-performing because it's less known.

There's a big difference between those two and "the hacker ethos" will lead to a lot of the latter. However, now computers can simulate enough games to give a relatively high degree of confidence that a variation in strategy is truly better.

Izkata
2 replies
23h48m

I don't know how it's developed since, but from what I remember that was how it started - the AIs weren't following the standard moves (joseki) that we'd built up over centuries and human players were thrown off by the nonstandard responses that were working better than expected.

foota
1 replies
20h28m

I wonder if AI could be built to continually adapt, so that instead of playing an optimal strategy, it chooses between various suboptimal strategies. If humans train to play against the optimal strategy, then maybe the AI could do better by playing in suboptimal but less expected ways.

mafuy
0 replies
12h38m

This is already happening. The point differences for sometimes huge deviations are minuscule, so they are worthwhile to have in one's repertoire. The same is mostly true for purely human games, too: these are trick moves.

coef2
0 replies
23h32m

So the progress of human proficiency in Go, and our collective advancement over time, is hindered by dogmatic rules introduced along the way. These rules predispose players toward specific strategies and consequently limit the scope of our creative potential within the game. In contrast, AI algorithms operate without such biases and offer a unique advantage in overcoming these limitations. They essentially inspire us to break out of established patterns (or local minima) of play and broaden the range of our strategic moves.

bobogei81123
4 replies
21h12m

Back when I was a kid learning Go, I was taught that the kick joseki (a standard sequence of moves, similar to a chess opening) [1] is a bad move, and you were considered to be trolling (and the teacher would not be pleased) if you played a 3-3 invasion [2] during the opening phase. These are all vindicated thanks to the AI and played pretty commonly nowadays. AI definitely helped eliminate many dogmas and myths in Go.

[1] https://senseis.xmp.net/?44PointLowApproach#toc6

[2] https://senseis.xmp.net/?33PointInvasion#toc2

lawn
1 replies
10h56m

I'll suggest that a 3-3 invasion is still a bad move for amateurs because they don't follow it up correctly and it may hamper their learning.

Wildgoose
0 replies
9h23m

Agreed. With a 3-3 you are trading away influence in favour of hard territory. AI is happy to do that very early because AI can effectively destroy influence. Human players need to learn to enter 3-3 at "the last possible moment". That requires judgement.

falserum
0 replies
15h17m

The 3-3 invasion takes territory at the expense of influence (future potential).

I think I improved a lot when I stopped 3-3ing (it opened up different style of game for me)

Noobies love the 3-3 (I definitely did), because it's a kind of simple and familiar move. (Especially at the start of the game, when the board is empty, there is a gazillion possibilities, most of them unknown and possibly risky.)

Even without discouraging the 3-3, I would still recommend starting out without it, to learn that other way of play (if for nothing else, to deal with 3-3 invasions).

chewxy
0 replies
16h45m

you have to admit that 3-3 invasion is pretty annoying to handle. AI is way too aggro and people are learning to be as aggressive.

yinser
3 replies
1d

Computers were always going to be better at searching large trees, now they can help steer new heuristics for human players.

SonOfLilit
2 replies
1d

I'm not sure a computer without search defeats the best human playing without any search, but I know it defeats 1-dan players (very smart people who put in 5-10 years of deliberate practice) even when those players are allowed to use as much search as they are able.

mafuy
1 replies
12h9m

In, I believe, the AlphaGo Zero paper, the pure policy network without tree search was estimated to be near pro level. KataGo is likely much better. I'm 3 dan (yet not smart) and I will definitely lose to it even with some handicap >:-(

SonOfLilit
0 replies
4h46m

Thanks. I remembered 1 dan, but I didn't remember whether it was regular or pro.

WalterBright
3 replies
19h31m

A new species arrives on the chess plain. Humans learn and adapt.

veunes
1 replies
2h44m

AI actually does the same... learns and adapts

WalterBright
0 replies
1h46m

I know. It will decide our fate in a microsecond.

sandspar
0 replies
13h22m

I could see AI being similar to the domestication of wolves. Humans encounter superior species, we co-opt the species to both species' benefit. Maybe AI would benefit from keeping us around. For example, we could test paths for it. Keeping a suboptimal pet like us for testing eliminates weak strategies, like a QA team finding bugs. It strengthens approaches by exposing flaws early.

usgroup
2 replies
1d

Can I rephrase this? "Professional Go players finally have software good enough to beat them, as a result of which they got better by using the software".

ajkjk
1 replies
1d

No? The article makes the point explicitly that they did not only get better by using the software; they're also better at playing moves the computer does not play.

SonOfLilit
0 replies
1d

It makes the point that they learned not only by memorization. They still learned it by using the software.

ordu
2 replies
18h0m

Michael Abrash in his Graphics Programming Black Book described something similar with regard to optimization. People become stuck at some point, when they confuse "it is good enough" with "it is the best possible result". But if some event makes them seriously doubt that it is the best possible result, they can do wonders, like going from "this is the fastest code possible" to making it 10x faster.

Just knowing that you could do better is a big deal, but if you have an AI showing you how to do better, then further perfection will become inevitable.

sandspar
1 replies
13h50m

Happens in sports. Roger Bannister broke the 4-minute mile in 1954. Before this it was thought to be impossible. Within 3 years, 16 other runners also broke the 4-minute barrier. Their equipment was the same as before; it was a mental thing. Arnold Schwarzenegger tells about a similar thing in weightlifting. Pat Casey broke the 500 lbs bench press record in 1956. Bench press records have since doubled, now exceeding 1,000 lbs in some categories.

stvltvs
0 replies
11h6m

To compare apples to apples, the raw world record (unaided by special equipment) is only 782 lbs, not over 1000. Your point is still a good one, just unnecessarily overstated because benching 782 lbs is damn impressive!

baobabKoodaa
2 replies
19h33m

I really enjoyed the upbeat positive outlook of the article.

Unfortunately, as an ex poker pro, I find it hard to imagine that AI "lifts people up" in domains like games. Sandholm's bots pretty much destroyed poker.

sinuhe69
1 replies
11h14m

Poker is all about “taking the emotion out of the game”, isn’t it? In such cases, what can beat a machine? Doesn't a machine naturally have the best “poker face”?

baobabKoodaa
0 replies
3h17m

The idea of "poker face" isn't really relevant as most people have a "good enough" pokerface. The idea of looking into your opponent's eyes and just feeling out what they have is something from movies, not real life (I'm sure you can find exceptions to the rule, but this is true in the general case).

What can beat a machine? Another machine, sadly. We are well past the point where machines surpass humans in most poker variants.

andrewstuart
2 replies
23h45m

If I was a professional player of any sort of game that AI can play then I would never play against AI.

Just be a human, play against other humans. Who cares what AI can do?

veunes
0 replies
2h41m

In order to get better you need to practice more, and AI can be there for you all the time.

bongodongobob
0 replies
23h33m

People who want to get better care. Everyone who plays chess uses AI to improve.

Art9681
2 replies
1d

When AI beats Go players, they roll up their sleeves and practice their passion and try to get better.

When AI beats Hollywood...

anononaut
0 replies
1d

I see the point you're making, and well made, but I think it also highlights a distinction in problems between the two. In the case of Go, people who want to play Go are the main motivating force. In the case of movies, I don't give a damn about Hollywood, the money, the studios, the IP, the actors. I only care about the quality of the film. Maybe the auteur, if there is one.

AI changing conceptions about chess or Go is very different from generative AI, which can radically change the means of how something is produced. I'm still going to play Go because I love it. Meanwhile, I would happily cut out film studios (as we know them) if it meant I got to watch quality cinema.

Tenoke
0 replies
1d

The reward in Hollywood and most professional fields is about who makes the best/most cost efficient product no matter what.

If the same was true for chess or go then players would be using computer assistance in every top level game.

lvl102
1 replies
1d

Pro players train with AI and you can often see “blue dot” moves in tournament settings.

anononaut
0 replies
1d

It's become standard in Go broadcasting to show some AI bot's win/loss confidence percentage for a given board state. It was fascinating for a few years, but now I feel like it takes away from some of the magic of watching pro-level play.

bravura
1 replies
23h55m

1) I would really be interested in broad brush strokes to understand how go theory has expanded.

2) I really wish we could shake the ant farm in both chess and Go the way Fischer random chess does. There's something nice about not having to memorize openings.

bongodongobob
0 replies
23h37m

At the same time, the familiarity of openings is nice.

Imagine completely random WoW battlegrounds. Part of the fun is knowing the territory and strategies rather than having to make them up from scratch each game.

zerocrates
0 replies
1d

Just finally getting around to reading/finishing my copy of Seven Games by Oliver Roeder, which covers checkers, chess, go, backgammon, poker, Scrabble and bridge, and the efforts to get computers to win at or solve each.

A common theme is the effects of the computers on the human players in elevating (but maybe also homogenizing) play.

yieldcrv
0 replies
20h10m

After a few years, the weakest professional players were better than the strongest players before AI. The strongest players pushed beyond what had been thought possible.

Human Instrumentality Project

veunes
0 replies
2h40m

AI is an incredible tool for getting better in many fields.

ummonk
0 replies
23h14m

The article is misleading regarding the history of chess. Magnus excepted, most top players did adopt a colder, more calculating, material-focused chess style that mimicked Deep Blue and subsequent chess computers. It was only with the success of AlphaGo and LC0 that top chess players started playing a more creative style again, playing various wing pawn advances, as well as being more willing to give up material for nebulous initiative or positional advantages.

tutfbhuf
0 replies
13h26m

A few months later Bannister was no longer the only runner to do a 4-minute mile. These days, high schoolers do it.

Wait. I know where you are coming from, but this is simply not true.

suyash
0 replies
1d

It's just a little boost. AI will keep getting better and better at a faster pace, and humans will have to figure out a different strategy altogether.

seoulmetro
0 replies
22h13m

Is this a surprise? The best people at any craft learn from the people that beat them.

mark_l_watson
0 replies
22h18m

Seven years ago I took remote Go playing lessons from a South Korean professional player. I stopped after about 5 months and started using CS Pro Go on my iPad Pro; it has a nice teaching feature of rating every one of my moves, so after a game I can see where my biggest mistakes were. This is different from pro players learning surprising new strategies; for me it is just nice to use.

idkdotcom
0 replies
1d

Go is a finite search game. So is chess.

Equating intelligence to being good at these games is as silly as equating intelligence to being good at solving differential equations. Computers have bested humans at solving differential equations for many decades now. Nobody said "gee, humans are now stupid".

AI, as a knowledge field, is biased by the notion that all that matters when it comes to intelligence is that computers beat humans at Go or Chess.

caligarn
0 replies
13h41m

I wonder if the same thing is happening in chess.

WalterBright
0 replies
19h30m

I played chess poorly because I'm lazy, so instead of thinking about my next move I'd think about writing a program to pick the move for me.

Lacerda69
0 replies
1d

AI will force artists to learn how to draw perfect hands as every artwork with bad hands will be instantly flagged as generated.

1-6
0 replies
18h6m

I bet modern-day Go players have become more stereotypical in their moves. The only parallel I can draw is from professional StarCraft players who stopped doing very exotic moves because they're usually blocked by players who've seen them all.