BrainGPT turns thoughts into text

joenot443
36 replies
1d1h

Ground Truth: Bob attended the University of Texas at Austin where he graduated, Phi Beta Kappa with a Bachelor’s degree in Latin American Studies in 1973, taking only two and a half years to complete his work, and obtaining generally excellent grades.

Predict: was the University of California at Austin in where he studied in Beta Kappa in a degree of degree in history American Studies in 1975. and a one classes a half years to complete the degree. and was a excellent grades.

Wow. That seems comparable to the rudimentary _voice_ to text systems of the 70s and 80s. The brain interface is quickly leaving the realm of sci-fi and becoming a reality. I’m still not sure how I feel about it.

varispeed
18 replies
1d1h

Well you are going to have a brain scanning device directly linked to your social credit score.

That's the future.

WendyTheWillow
8 replies
1d

No, it’s not. Good lord…

jprete
7 replies
1d

There are already businesses tracking their employees' fitness for insurance purposes.

https://www.washingtonpost.com/business/economy/with-fitness...

EDIT: There's also a national legislative proposal to mandate that all cars have a system to monitor their drivers and lock them out on signs of intoxication.

https://www.npr.org/2021/11/09/1053847935/congress-cars-drun...

ceejayoz
6 replies
1d

The fix here is banning these sorts of potentially abusive uses, not hoping the technology itself doesn't develop.

jprete
5 replies
1d

I would agree if I didn't think there were really strong incentives and precedents for abuse of the technology.

valine
2 replies
1d

We have laws that prevent people from being subjected to brain surgery against their will. The credit score concept is ridiculous.

The real battle will be with law enforcement who get a warrant to look at your brain in an MRI.

redeeman
0 replies
17h23m

yeah, and then we remember the videos from the good old down under where the cops assaulted parents, chased kids around, pinned them down, and injected them with experimental drugs that turned out to cause worse outcomes for their age group... this was a couple of years ago.

Jensson
0 replies
22h52m

You don't need brain surgery or an MRI to scan a brain, this just uses an EEG.

squigz
0 replies
23h58m

There's really strong incentives to abuse any technology or system that gives people more power. This doesn't just apply to cutting-edge computer science like mind-reading, but to even our basic institutions like law and government; yet most people would agree the solution isn't to basically give up and hope for the best, but to be vigilant and fight back against that abuse.

ceejayoz
0 replies
1d

There absolutely are, but when’s the last time that stopped us advancing new tech?

MoSattler
4 replies
1d

First use will be for criminal suspects, to "save lives". Then its use slowly expands from there.

blindriver
2 replies
1d

"For the children" is the first excuse usually.

fortran77
0 replies
23h4m

Exactly! Strap it on anyone who has to work with children to see if they ever have any untoward thoughts.... Then move on to everyone else.

Octabrain
0 replies
8h29m

"To fight terrorism" usually is the second.

e2le
0 replies
1d

I'm sure among the first applications of this technology will be to scan user thoughts for evidence of CSAM.

garbagewoman
1 replies
1d

Why are you so certain that’s the future?

squigz
0 replies
14h53m

Because spouting FUD is easier than actually doing anything.

alternatex
0 replies
22h30m

Being banned in the EU as we speak.

6510
0 replies
1d

For a while, eventually we will become so suggestible you'd wish you were special enough to have a score.

PaulScotti
11 replies
1d

Guys, Figure 1 is not real results; it's an illustration of the "goal" of the paper. The real results are in Table 3, and they are much worse.

explaininjs
8 replies
23h32m

Interesting ploy. Present far-better-than-achieved results right on the front page with no text to explain their origin^, but make them poor enough in quality that it seems as if they might be real.

^ "Overall illustration of translate EEG waves into text through quantised encoding." doesn't count.

mike_hearn
6 replies
22h54m

Urgh. And it gets worse from there. The bugs list on the repo has a closed and locked bug report from someone claiming that their code is using teacher forcing!

https://github.com/duanyiqun/DeWave/issues/1

In a normal recurrent neural network, the model predicts one token at a time. It predicts a token, that token is appended to the total prediction so far, and the result is fed back into the model to generate the next token. In other words, the network generates all the predictions itself based on its own previous outputs and the other inputs (brainwaves in this case), meaning that a bad prediction can send the entire thing off track.

With teacher forcing that isn't the case. All the tokens up to the point being predicted are taken from the ground-truth sequence, so the model is never exposed to its own previous errors. But in a real system you don't have access to the ground truth, so this is not feasible in reality.
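
To make the distinction concrete, here's a toy sketch (a hypothetical stand-in model, not the paper's code) of why teacher-forced evaluation can look far better than real generation:

```python
# Toy illustration (a hypothetical stand-in model, not the paper's code) of
# why teacher-forced evaluation looks far better than free-running generation.

def predict_next(context, ground_truth):
    # Stand-in "model": predicts the correct next token whenever its context
    # matches the ground truth so far, except at step 2 where it always errs.
    step = len(context)
    if context == ground_truth[:step] and step != 2:
        return ground_truth[step]
    return "<err>"  # any deviation derails it

truth = ["the", "cat", "sat", "on", "the", "mat"]

# Teacher forcing: the context is always the ground-truth prefix.
teacher_forced = [predict_next(truth[:i], truth) for i in range(len(truth))]

# Free-running (model.generate-style): the context is the model's own output.
free_running = []
for _ in range(len(truth)):
    free_running.append(predict_next(free_running, truth))

print(teacher_forced)  # one isolated error at step 2
print(free_running)    # the same error derails everything after it
```

Under teacher forcing the stand-in model scores 5/6; generating freely, the single early mistake cascades and it scores 2/6, which is why the two evaluation modes aren't interchangeable.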

The other repo says:

"We have written a corrected version to use model.generate to evaluate the model, the result is not so good"

but they don't give examples.

This problem completely invalidates the paper's results. It is awful that they have effectively hidden the issue by locking the thread in which it was reported. It's also kind of nonsensical that people doing such advanced ML work claim they accidentally confused model.forward() with model.generate(). I mean, I'm not an ML researcher and might have mangled the description of teacher forcing, but even I know these aren't the same thing at all.

oeeker
1 replies
17h19m

how could such a thing get published?

heyoni
0 replies
14h4m

My guess is that reproducibility is hard when it comes to AI

chpatrick
1 replies
20h37m

So instead of generating the next token from its own previous predictions (which is what it would do in real life), the code they used for the evaluation actually predicts from the ground truth?

ghayes
0 replies
19h39m

Which would basically turn the model into a plainly normal LLM without any need for utilizing the brainwave inputs, right?

iamleppert
0 replies
19h7m

You’d be shocked how common this is in academia. Most of the time it goes undetected because the people writing the checks can’t be bothered to understand.

AndrewKemendo
0 replies
19h35m

This is a super important point and I think warrants a letter to the editor

godelski
0 replies
17h9m

What's interesting to me is that apparently a lot of people see nothing wrong with this[0]. That whole thread is wild and I'm just showing a small portion.

Also, @dang, can we ban links to iflscience? They're a trash publication that entirely relies on clickbaity misrepresentations of research works. There is __always__ a better source that can be used.

[0] https://news.ycombinator.com/item?id=38565424

seydor
0 replies
7h13m

Why is it such a "pattern" in these brain-computer papers that the authors keep making wild clickbait claims? Last year it was the DishBrain paper, which caused a lot of reactions because it referred to the tiny system as "sentient" (https://hal.science/hal-04012408)

This year it is "Brainoware", which is claimed to do speech recognition, and now this.

oldesthacker
0 replies
22h42m

The results in Table 3 are not really exciting. Could this change with 100 times more data? The key novelty in this particular application is the quantized variational encoder used "to derive discrete codex encoding and align it with pre-trained language models."
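
For what it's worth, the core of that quantization step can be sketched in a few lines (a toy nearest-codebook lookup with made-up values, not the paper's actual encoder):

```python
# Toy sketch (made-up 1-D codebook, not the paper's encoder) of the VQ idea:
# map each continuous feature to the index of its nearest codebook entry,
# producing discrete tokens that a pre-trained language model can consume.

codebook = [0.0, 1.0, 2.5, 4.0]  # hypothetical learned code values

def quantize(features):
    # Nearest-neighbour lookup into the codebook for each feature.
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - f))
            for f in features]

eeg_features = [0.2, 2.4, 3.9, 1.1]  # made-up continuous EEG features
tokens = quantize(eeg_features)
print(tokens)  # discrete "codex" indices: [0, 2, 3, 1]
```

In the real model the codebook entries are learned vectors rather than scalars, but the discretization principle is the same.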

seydor
0 replies
1d

I’m still not sure how I feel about it.

Sir, let us read that for you

samstave
0 replies
23h55m

this podcast is excellent at discussing the future we are racing into.

https://www.youtube.com/watch?v=OSV7cxma6_s

"Peter Diamandis, the futurist to watch as all of these technologies advance with unimaginable speed, is going to blow your mind and help you imagine new possibilities and opportunities for your healthspan."
nextworddev
0 replies
1d1h

The “Matrix” stack is really shaping up recently /s

derefr
0 replies
1d

Seems like it could work a lot better still, very quickly, just by merging the trained model with an LLM trained on the language they expect the person to be thinking in. I.e., try to get an equilibrium between the "bottom-up processing" of what the thought-to-text model believes the person "is thinking", and the "top-down processing" of what the grammar model believes the average person "would say next" given all the conversation so far. (Just like a real neocortex!)

Come to think, you could even train the LLM with a corpus of the person's own transcribed conversations, if you've got it. Then it'd be serving almost exactly the function of predicting "what that person in particular would say at this point."

Maybe you could even find some additional EEG-pad locations that could let you read out the electrical consequences of AMPAR vs NMDAR agonism within the brain; determine from that how much the person is currently relying on their own internal top-down speech model vs using their own internal bottom-up processing to form a weird novel statement they've never thought before; and use this info to weight the level of influence the thought-to-text model has vs the LLM on the output.
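
The merging idea in the first paragraph resembles what speech recognition calls "shallow fusion": weight the decoder's per-token distribution by an LM prior. A minimal sketch under stated assumptions (toy probabilities, a hypothetical `fuse` helper, `lam` as the assumed weighting knob):

```python
# Minimal sketch of "shallow fusion" under stated assumptions: both models
# expose per-token log-probabilities over a shared vocabulary, and `lam`
# weights the LM prior. The numbers and the `fuse` helper are hypothetical.
import math

def fuse(decoder_logprobs, lm_logprobs, lam=1.0):
    # Add the weighted log-probs, then renormalize over the shared vocab.
    combined = {tok: decoder_logprobs[tok] + lam * lm_logprobs[tok]
                for tok in decoder_logprobs}
    z = math.log(sum(math.exp(v) for v in combined.values()))
    return {tok: v - z for tok, v in combined.items()}

decoder = {"cat": math.log(0.4), "car": math.log(0.6)}  # noisy EEG evidence
lm      = {"cat": math.log(0.9), "car": math.log(0.1)}  # grammar prior

fused = fuse(decoder, lm)
best = max(fused, key=fused.get)
print(best)  # the LM prior flips the noisy decoder's pick to "cat"
```

Tuning `lam` is exactly the bottom-up vs top-down balance described above: lam=0 trusts only the brain signal, large lam trusts only the language prior.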

api
0 replies
16h25m

Just be sure to only ever use open source or paid commercial grade tech. I’m sure someone will release a “free” BCI that spies on you as much as possible.

giancarlostoro
22 replies
1d1h

This is very impressive and useful, and horrifying all at once.

I imagine it would help a stroke patient; I also imagine it would give out unfiltered thoughts, which might be troublesome.

rvnx
12 replies
1d1h

I agree sadly :(

You're right, this is why in year 2200, your job application is going to be fast-tracked by analyzing your thoughts directly.

If you have a Neuralink, no problems, you can directly upload a trace of thoughts.

In case you have wrong thoughts, don't worry, we have rehabilitation school, which can alter your state of mind.

Don't forget to be happy, it's forbidden to be sad.

Also, this is read-only for now, but what about writing?

This could open new possibilities as well (real-life Matrix?)

Oh by the way, did you hear about Lightspeed Briefs ?

==

All that being said, it's great research and going to be useful. Just the potential of abuse from politics is huge over the long-term.

derefr
6 replies
1d

If you have a Neuralink, no problems, you can directly upload a trace of thoughts.

Except that someone with a jailbroken Neuralink could upload a filtered and arbitrarily-modified thought trace, getting ahead of all those plebs. Cyberpunk! :)

Y_Y
5 replies
1d

Just think a virus, you know they're not going to be correctly sanitizing their inputs.

drexlspivey
4 replies
22h7m

just think of Robert’); DROP TABLE candidates;

thfuran
3 replies
20h8m

Who?

selcuka
2 replies
18h27m
thfuran
1 replies
17h38m

I seem to have forgotten who we're talking about. I really dropped the ball. On the table.

selcuka
0 replies
13h6m

I seem to have missed the joke. :/

SubiculumCode
3 replies
1d1h

When your bosses require you to wear one of these while working from home.

rvnx
0 replies
1d1h

To stay focused and analyze your pattern. Oh, so that's what they meant by "Attention Is All You Need".

fragmede
0 replies
1d

you mean I get to bill the client for all the hours I spend thinking about their problem, which includes while I'm sleeping? sign me up!

dexterdog
0 replies
21h22m

Or you just zone out and let them use your brain for the work day and you take nothing with you at the end of the day. At that point it's just Severance, but with the perk of working from home.

wruza
0 replies
10h8m

Hopefully in 2200 we’ll be long gone from all important fields in favor of new tech species who lack all the bs humans inherently bear on each other. If not, it's our own fault. Our species is a thin layer of culture on top of the “who’s the most dominant ape here” game.

Jensson
4 replies
23h57m

Imagine putting these on presidential candidates as they debate or when they try to explain a bill, it could massively improve democracy and ensure the people know what they actually vote for.

thfuran
2 replies
23h51m

Yes, imagine the glorious future of politicians who have no thoughts beyond the repeatedly coached answers to various talking points.

d-lisp
0 replies
21h31m

Finally Plato's philosopher "King" is an AI.

ComodoHacker
0 replies
22h32m

Suddenly they all vigorously turn pro-privacy.

d-lisp
0 replies
21h32m

Yes, and then only politicians who truly know how to lie get elected.

da_chicken
1 replies
1d

Yeah I can imagine law enforcement and employers are going to love this.

As much as this is an unimaginable positive benefit to people who are locked in, this is definitely one of those stories that makes me think "Stop inventing the Torment Nexus!"

Jensson
0 replies
23h50m

Yeah I can imagine law enforcement and employers are going to love this.

They will hate it. Lies always benefit those with power more than those without: when the police lied about you, there wasn't much you could do before; now you could demand they get their thoughts read.

notnmeyer
0 replies
1d1h

unfiltered thoughts

not far off from existing issues like some forms of Tourette’s.

ninjaa
0 replies
18h38m

I am happy, though, that we may finally talk about what our unfiltered thoughts are, how much we are expected to control or curate them, and how to do so in a psychologically helpful way.

notnmeyer
17 replies
1d1h

pretty interesting but with how much current llms get wrong or hallucinate i’d be pretty wary of trusting the output, at least currently.

amazing to think of where this could be in 10 or 20 years.

admax88qqq
13 replies
1d1h

Combine hallucinations with police adopting this as the new polygraph and this could take a pretty bad turn.

Cool tech though, lots of positive applications too.

Sanzig
5 replies
1d1h

If noninvasive mind reading ever becomes practical, we need to recognize the right to refuse a brain scan as a universal human right. Additionally, it should be banned from being used as evidence in the courtroom.

Unfortunately there will be authoritarian regimes that will use and abuse this type of tech, but we need to take a firm stand against it in liberal democracies at the very least.

jprete
2 replies
1d

That's not sufficient; it needs to be actually banned for any use resembling employment purposes, because otherwise people will be easily pressured into it by the incentives of businesses that want their employees to be worker bees. Just look at how many businesses try to force people to waive their right to a trial as a condition of being a customer!

mentos
0 replies
1d

I have a feeling that by the time this is fully fleshed out AI will have taken all the jobs anyways.

Sanzig
0 replies
1d

Agreed.

swayvil
1 replies
1d

What if it went the opposite way?

What if perfect brainreaders/liedetectors became as common as smartphones.

Used on everybody all the time. From politicians and cops to schoolkids and your own siblings.

What would be an optimistic version of that?

Sanzig
0 replies
23h58m

I don't think there is one, not for a version of humanity that is even remotely recognizable at least. We are not ready to hear each other's internal monologues.

Most people have intrusive thoughts, some people (like those with OCD, for example) have really frequent and distressing intrusive thoughts. What are you going to think of the OCD sufferer in the cubicle next to you who keeps inadvertently broadcasting intrusive thoughts about violently stabbing you to death? Keep in mind, they will never act on those thoughts, they are simply the result of some faulty brain wiring and they are even more disgusted about them than you are. What are you going to think when you find out your sister-in-law had an affair with a coworker ten years ago, because her mind wandered there while you were having coffee with her?

Humanity does not even come close to having the level of understanding and compassion needed to prevent total chaos in a world like that. People naïvely believed that edgy or embarrassing social media posts made by millennials in the late 2000s wouldn't be a big deal, because we'd all figure out that everyone is imperfect and the person you were 10 years ago is not the person you are today. Nope, if anything the opposite has happened: it's now a widely accepted practice to go on a fishing expedition through someone's social media history to find something compromising to shame them with. Now imagine that, but applied to mind reading. No, that's not a future we can survive as a species, at least not without radical changes in our approaches to dealing with each other.

spookybones
3 replies
1d1h

I wonder if a subject has to train it first, such as by reading a bunch of prompts while trying to imagine them. Or, are our linguistic neural networks all very similar? If the former is true, it would at least be a bit harder to work as a polygraph. You wouldn't be able to just strap on the helmet and read someone's thoughts accurately.

dexwiz
0 replies
1d1h

I wonder if you could develop techniques to combat it, like a psychic nail in the shoe. Or maybe an actual nail. How useful is a mind reader when all it reads is “PAIN!”

admax88qqq
0 replies
1h1m

and read someone's thoughts *accurately*

Sadly, accuracy has historically not been a big concern of polygraphs and much of forensic "science".

RaftPeople
0 replies
23h24m

Yes it requires training for each individual. In addition, they tested using a trained model from one person to try to decode a different person and the results were no better than chance.

They also said that the person must cooperate for the decoding to work, meaning the person could reduce the decoding accuracy by thinking of specific things (e.g. counting).

CORRECTION: The paper I read was not the correct paper; ignore this comment. The actual paper states that the model is transferable across subjects.

andy99
1 replies
1d

Police are far down the list of realistic concerns.

- insurance discount if you wear this while driving

- remote work offered as a "perk" as long as you wear it

- the "alladvantage ecg helmet" that pays you to wear it around while you're shown advertising

- to augment one of those video interviews where you have to answer questions and a computer screens your behavior

That's all stuff that more or less already exists, and it's much more likely to be the form that the abuse of this technology takes

popcalc
0 replies
1d

Eventually it will become affordable for parents, evangelical churches, and spouses.

codedokode
0 replies
1d1h

Why only police? Install mind-readers at every home.

brookst
2 replies
1d1h

You’re saying this brand new experimental technology may be imperfect?

dang
0 replies
1d1h

I completely understand the reflex against shallow dismissal of groundbreaking work, but please don't respond by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html

ShamelessC
0 replies
1d1h

Yeah... the research here doesn't even claim to have no hallucinations. It seems to be exciting largely _despite_ hallucinations, because it clearly does occasionally guess the correct words. They mention lots of issues, but as long as it passes peer review, it seems like a massive step forward.

https://arxiv.org/pdf/2309.14030v2.pdf

mikpanko
17 replies
20h57m

I did a PhD in brain-computer interfaces, including EEG and implanted electrodes. BCI research to a big extent focuses on helping paralyzed individuals regain communication.

Unfortunately, EEG doesn’t provide a sufficient signal-to-noise ratio to support good communication speeds outside of labs with Faraday cages and days/weeks of de-noising, including removing eye-movement artifacts from the recordings. This is a physical limit due to attenuation of the brain’s electrical fields outside the skull, which is hard to overcome. For example, all commercial “mind-reading” toys actually work off head and eye muscle signals.
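
For a rough sense of scale, SNR is usually quoted in decibels; a toy calculation with hypothetical amplitudes (illustrative numbers, not real EEG measurements) shows how a signal weaker than the ambient interference goes negative in dB:

```python
# Toy calculation (hypothetical amplitudes, not real EEG measurements) of the
# signal-to-noise ratio in decibels: SNR_dB = 10 * log10(P_signal / P_noise).
import math

def snr_db(signal_power, noise_power):
    return 10 * math.log10(signal_power / noise_power)

# Assume a ~10 uV cortical signal at the scalp vs ~50 uV ambient interference;
# power scales with the square of amplitude.
scalp_signal_power = 10.0 ** 2   # 100 uV^2
ambient_noise_power = 50.0 ** 2  # 2500 uV^2

print(round(snr_db(scalp_signal_power, ambient_noise_power), 1))  # -14.0 dB
```

A negative dB figure means the interference dominates the signal, which is why shielding (the Faraday cage) and heavy averaging/de-noising are needed before the brain signal is usable.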

Implanted electrodes provide better signal but are many iterations away from becoming viable commercially. Signal degrades over months as the brain builds scar tissue around electrodes and the brain surgery is obviously pretty dangerous. Iteration cycles are very slow because of the need for government approval for testing in humans (for a good reason).

If I wanted to help a paralyzed friend who could only move his/her eyes, I would definitely focus on eye-tracking tech. It hands-down beats all BCIs I’ve heard of.

daniel_iversen
5 replies
19h38m

What are your thoughts on Elon’s Neuralink? Also, do you have an opinion on whether good AI algorithms (like in the article) can help filter out or parse a lot of the noise?

laserbeam
1 replies
14h3m

In my understanding, Neuralink is just a research project that Musk invested in and did some PR for. I wouldn't read into it more than that. Like any other similar BCI research project, feel free to ignore it until papers are published; that is, unless you are involved in the field.

yreg
0 replies
9h51m

In my understanding it is a research project that has a lot more funding than others in the field and therefore it might be better positioned for a breakthrough. Am I mistaken?

dsr_
1 replies
15h50m

It was a little bundle of what looked like thin, glisteningly blue threads, lying in a shallow bowl; a net, like something you'd put on the end of a stick and go fishing for little fish in a stream. She tried to pick it up; it was impossibly slinky and the material slipped through her fingers like oil; the holes in the net were just too small to put a finger-tip through. Eventually she had to tip the bowl up and pour the blue mesh into her palm. It was very light. Something about it stirred a vague memory in her, but she couldn't recall what it was. She asked the ship what it was, via her neural lace.

That is a neural lace, it informed her. ~ A more exquisite and economical method of torturing creatures such as yourself has yet to be invented.

wizzwizz4
0 replies
4h40m

Excession, by Iain M. Banks.

NeuroCoder
0 replies
2h15m

The problem with using AI to filter and denoise is that the things we clearly know to be unwanted noise are already removed quickly through other means (I can run the fully automated part of processing EEG data in under an hour with my code). The laborious part is quality control around more subjective things; research is still figuring out what is important.

drzzhan
3 replies
20h25m

What is its signal-to-noise ratio? Sorry, I don't know much about the field, but that sounds like something that could shut down ideas like "we can put EEG into a transformer and it will work". So may I ask what reference papers I need to know on this?

IshKebab
1 replies
20h6m

Signal to noise ratio is a very basic thing; you can Google it.

phanimahesh
0 replies
19h2m

They weren't asking what SNR is, but what the typical SNR for EEGs in real-world circumstances is.

Even if they were, count them among the lucky 10000 [1]; explaining is nicer. Soon, with AI-generated spam, it feels like searching for anything is going to be worse than useless.

1: https://xkcd.com/1053/

southerntofu
0 replies
19h58m

Not from that field, but "reading" the brain means electromagnetism. In real life, EM interference is everywhere: lights, electric devices, cellphone towers... EVERYWHERE. The parent meant that brain waves are weak compared to all the surrounding interference, except when a lab's Faraday cage blocks outside interference; then the brain becomes "loud" enough to be read.

https://en.wikipedia.org/wiki/Signal-to-noise_ratio

https://en.wikipedia.org/wiki/Faraday_cage

logtempo
2 replies
17h9m

Recently, a Swiss-French team made communication between the brain and the legs possible, and the device looked relatively mature. I think the patient had damaged nerves in the vertebral column. What do you think about it? It looked like a promising development.

https://actu.epfl.ch/news/thought-controlled-walking-again-a...

fragmede
1 replies
15h27m

Not OP, but that article's use of the word "implant" implies a much more invasive device, which means a much better signal. Additionally, the output, while still complex, is far from the level of decoding of thought presented here. Thus, while still impressive, it is less of a leap from existing technology and much more within the realm of what we know is very much possible.

logtempo
0 replies
6h13m

Yes, there is a part that is under the skull, if I understand it correctly ("at the surface of the cortex"). It claims to be the first success in the world, so it is indeed impressive! https://youtu.be/6Ee3kZIW5fk

AndrewKemendo
1 replies
19h38m

I just did a two day ambulatory eeg and noted anytime I did anything that would be electrically noisy.

For example going through a metal detector or handling a phone.

Unsurprisingly one of their biggest sources of noise is handling a plugged in phone.

I think something like an EEG Faraday beanie would actually work, and adding accessory egocentric video would allow doctors to filter a lot of the noise out.

caycep
0 replies
10h22m

also, just blinking or tensing muscles on your scalp

teaearlgraycold
0 replies
20h24m

I think VR headsets will then become medical devices soon enough

bbor
0 replies
7h35m

...This seems like a really, really confident dismissal of a new technology as impossible, which of course this forum loves (b/c GPT). This very paper seems like pretty damn strong evidence that perhaps the signal-to-noise ratio of EEG might be coming down as we get better algorithms.

ctoth
16 replies
1d1h

"Seriously, what were these researchers thinking? This 'BrainGPT' thing is a disaster waiting to happen. Ching-Ten Lin and his team of potential civilization destroyers at the University of Technology Sydney might be patting themselves on the back for this, but did they stop to think about the real-world implications? We're talking about reading thoughts—this isn't sci-fi, it's real, and it's terrifying. Where's the line? Today it's translating thoughts for communication, tomorrow it could be involuntary mind-reading. This could end privacy as we know it. We need to slam the brakes on this, and fast. It's not just irresponsible; it's playing with fire, and we're all at risk of getting burned.

Like, accurate brain readers are right under DWIM guns in the pantheon of things thou mustn't build!

digdigdag
3 replies
1d

Why not? There are perfectly legitimate uses for this kind of technology. This would be a godsend for those suffering from paralysis and nervous system disorders, allowing them to communicate with their loved ones.

Yes, the CIA, DARPA, et al. will be all over this (unsurprisingly, if not already), but this is a sacrifice worth making for this kind of technology.

ctoth
2 replies
1d

How many people in the whole world are paralyzed or locked in? Ten thousand? Less?

How many people in the whole world are tinpot authoritarian despots just looking for an excuse who would just love to be able to look inside your mind?

Somehow, I imagine the first number is dramatically dwarfed by the second number.

This is a technology that, once it is invented, will find more and more and more and more uses.

We need to make sure you don't spill corporate secrets, so we will be mandating that all workers wear this while in the office.

Oh no, we've just had a leak, we're gonna have to ask that if you want to work here you must wear this brain buddy home! For the good of the company.

And so on.

I'm blind, but if you offered to cure my blindness with the side effect that nobody could ever hide under the cover of darkness ( I donno, electronic eyes of some kind? Go with the metaphor!) I would still not take it.

zamadatix
0 replies
23h28m

All this choice guarantees is that new technology will always be used for bad things first. It holds no sway over whether someone will do something bad with the technology; after all, it's not just "good people" who are capable of advancing it. See the atomic bomb vs the atomic power plant.

What's important is how we prepare for and handle inevitable change. Hoping no negative change comes about if we just stay the same is a far worse game.

ctoth
0 replies
1d

The other thing you people are missing is how technology compounds. You don't need to have people come in to the police station to have their thoughts reviewed when everyone is assigned an LLM at birth to watch over their thoughts in loving grace and maybe play a sound when they have the wrong one.

notfed
2 replies
1d

I'm optimistically going to assume that model training is per-brain, and can't cross over to other brains. Am I wrong? God I hope I'm not wrong.

rgarrett88
0 replies
1d

4.4 Cross-Subject Performance

Cross-subject performance is of vital importance for practical usage. We further provide a comparison with both baseline methods and a representative meta-learning (DA/DG) method, MAML [9], which is widely used in cross-subject problems in EEG classification.

Table 2: Cross-subject performance average decreasing comparison on 18 human subjects, where MAML denotes the method with MAML training. The metric is the lower the better.

  Calib data | Method            | Eye fixation −∆(%)↓     | Raw EEG waves −∆(%)↓
             |                   | B-2   B-4   R-P   R-F   | B-2   B-4   R-P   R-F
  ×          | Baseline          | 3.38  2.08  2.14  2.80  | 7.94  5.38  6.02  5.89
  ✓          | Baseline+MAML [9] | 2.51  1.43  1.08  1.23  | 6.86  4.22  4.08  4.79
  ×          | DeWave            | 2.35  1.25  1.16  1.17  | 6.24  3.88  3.94  4.28
  ✓          | DeWave+MAML [9]   | 2.08  1.25  1.16  1.17  | 6.24  3.88  3.94  4.28

In Table 2, we compare with MAML by reporting the average performance drop ratio between within-subject and cross-subject translation metrics on 18 human subjects, on both eye-fixation sliced features and raw EEG waves. We compare DeWave with the baseline under both direct testing (without Calib data) and with MAML (with Calib data). The DeWave model shows superior performance in both settings. To further illustrate the performance variance on different subjects, we train the model using only the data from subject YAG and test the metrics on all other subjects. The results are illustrated in Figure 4 (a radar chart), showing that the performance is stable across different subjects.

Looks like it crosses over. That's wild.

exabyte
0 replies
1d

My intuition is that it's per-brain at least in the beginning, but with enough individual data, won't you have a model that can generalize pretty well over similar cultures? Maybe more so for the sheep. Just speculating... who knows!

alentred
2 replies
1d

What is the alternative? Hide the research papers in a cabinet and never talk about it? How long would it be before another team achieves the same result? Trying to keep it under wraps would only increase the chance of this technology being abused, but now unbeknownst to the general public.

Basically, are you proposing to ban some fields of research because the results can be abused? Anything can be abused, from the social care system to scientific breakthroughs. What society should do is control the abuse, not stop the progress. Not even because of ethics, where opinions diverge, but because stopping progress is virtually impossible.

ctoth
0 replies
1d

Look up the history of biotechnology, and the intentional way it has been treated (one might reasonably say suppressed), for some examples of how this has been managed previously. Yes, sometimes you can just decide "we're not gonna research that today." When you start sitting down and building the thing that fits on the head, that's where you say "nope, we're doing that thing we shouldn't do, let's not do it."

There is actually a line. You can actually decide not to cross it.

Aerbil313
0 replies
23h31m

The alternative was to never pursue and invent organization-dependent[1,2] technology in the first place. The dynamics of the macro-system of {human biology + technology + societal dynamics} are so predictable and deterministic that it's argued[3] if there were any entity that is intelligent, replicating and has a self-preservation instinct instead of humans (aliens, intelligent Von Neumann probes, doesn't matter) the path of technological progress which humanity is currently experiencing wouldn't change. That is, the increasing restrictions on the autonomy of individuals and invasion of privacy with the increasing convenience of life and a more efficient civilization.

Ted Kaczynski pretty much predicted the current state of affairs all the way back at 1970s. [1]

Thankfully the world is not infinite so humankind cannot continue this situation for too long. The first Earth Overshoot Day was 31 December 1971, it was August 2 this year.[4] The effects of the nearing population collapse can be easily seen today in the increasing worldwide inflation, interest rates and hostility as the era of abundance comes to an end and resources get scarcer and scarcer. It's important to note that the technological prowess of humanity was only due to having access to basically unlimited energy for decades, not due to some perceived human ingenuity, which can save humankind from extinction-level threats. In fact, humans are pretty incapable of understanding world-scale events and processes and acting accordingly[5], which is another primary reason to not have left the simple non-technological world which the still non-evolved primate-like human brain could intuitively understand.

1: Refer to the manifesto "Industrial Revolution and Its Consequences".

2: Organization-dependent technology: Technology which requires organized effort, as opposed to small scale technology which a single person can produce himself with correct knowledge.

3: By Kaczynski, in the book Anti-Tech Revolution. Freely available online.

4: Biological overshoot occurs when demands placed on an ecosystem by a species exceeds the carrying capacity. Earth Overshoot Day is the day when humanity's demand on nature exceeds Earth's biocapacity. Humanity was able to continue its survival due to phantom carrying capacity.

5: Just take a look at the collective response of humanity to climate change.

drdeca
1 replies
1d

What does “DWIM” mean in this context? My first thought is “do what I mean”, but I suspect that isn’t what you meant.

ctoth
0 replies
1d

DWIM does in fact mean Do what I mean, a DWIM gun is basically like the Imperius curse. Can't remember if I got it from @cstross or Vinge.

ulf-77723
0 replies
1d

Exactly. Dangerous technology. Reminds me of dystopian sci-fi like inception or minority report.

First thing that came to my mind was an airport check. “Oh, you want to enter this country? Just use this device for a few minutes, please”

How about courts and testimony?

This tech will be used against you faster than you will recognize. Later on one will ask, why people let it happen.

lebean
0 replies
1d

Don't worry. It doesn't actually work lol

im3w1l
0 replies
23h46m

Thing is, it's not possible to stop it. Technology has advanced far enough, all the pieces are in place, so it's inevitable that someone will make this. What we should ask is rather how we can cope with its existence.

arlecks
0 replies
1d

If you're referencing the AI safety discussion, there's obviously the fundamental difference between this and a technology with the potential of autonomous, exponential runaway.

dexwiz
13 replies
1d

Everyone in this thread immediately went to mind readers as interrogation. But what about introspection? Many forms of teaching and therapy exist because we are incapable of self-analyzing in a completely objective way.

Being able to analyze your thought patterns outside your own head could lead to all sorts of improvements. You could find which teaching techniques are actually the most effective. You could objectively find when you are most and least focused. You could pinpoint when anxious thoughts began and their trigger. And best of all, you could do this personally, with a partner, or in a group based on your choice.

Also, you can give someone an fMRI as a brain-scanning polygraph today. But there are still a ton of questions about its legitimacy.

https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?art...

demondemidi
5 replies
19h29m

Everyone in this thread immediately went to mind readers as interrogation.

Yes.

It's kind of hard to think about the upside of no longer having private thoughts.

I'm amazed you can go right to its benefits without realizing the universe-sized hole in the ethics of this.

But then again, that's techbro nature.

dexwiz
4 replies
18h42m

Every technology has its ups and downs. The same nitrates can grow plants or blow up a building. While blind optimism is bad, it’s depressing how negative discussion surrounding new technology has become. People have become literal luddites.

selcuka
2 replies
18h31m

Every technology has its ups and downs. The same nitrates can grow plants or blow up a building.

This is like saying "thermonuclear weapon tech has its ups and downs". It is technically correct, but we are talking about the (lack of) balance here. Not being able to think privately could be the end of civilisation as we know it.

dexwiz
1 replies
16h59m

Nuclear technology does have its ups and downs as in power generation, medical imaging, and advanced physics experiments versus thermonuclear weapons. If anything you are only highlighting the negative variant in your argument.

selcuka
0 replies
12h25m

Nuclear technology does have its ups and downs

Nuclear technology is a superset of nuclear weapon technology. I am not saying all neuroscience is bad, just this specific topic.

samus
0 replies
17h40m

People have always been luddites in the sense that they want technology used for their benefit, not against them. And it's hard to be optimistic about the impact of new technology considering that the last 20 years saw the normalization of mass surveillance of most of the population. Not to forget the negative impact of the technological developments of the last 200 years on the global environment.

electrondood
4 replies
1d

Being able to analyze your thought patterns outside your own head could lead to all sorts of improvements.

Typing in a journal text file for 15 minutes every morning is already a thing... and it's free.

wruza
1 replies
10h24m

I found it completely useless in my therapy sessions. These trains of thought are more like hallucination than real thoughts, because you think different at writing than during the day. I'm not even sure if keeping a diary makes you understand yourself better or just become more coherent with your delusions.

__MatrixMan__
0 replies
1h34m

you think different at writing than during the day

This might not be everybody. I don't feel that way.

keeping a diary makes you understand yourself better or just become more coherent with your delusions

That might be true, but once they're coherent are they still delusions? I call those plans.

dexwiz
0 replies
23h58m

Thoughts are fleeting. 15 minutes could be filled with hundreds or thousands of distinct concepts. Not to mention active recording is different from passive observation.

__MatrixMan__
0 replies
14h33m

Yes, but it could be expensive.

wruza
0 replies
10h30m

Automatic logs would be cool. It’s not only introspection itself that is hard but also that you have to remember to introspect and write down events for further analysis. Assuming you can trust the precision.

MadSudaca
0 replies
1d

Fear is a strong emotion, and while we know little of what we may gain from this, we know a lot of what we stand to lose.

chaosmachine
8 replies
1d1h

Aside from all the horrific implications, this enables something very cool: two-way telepathic communication.

Think your message, think "send", hear responses via earbud. With voice cloning, you even get the message in the sender's voice. Totally silent and invisible to outside observers.

pants2
3 replies
1d1h

Invisible except for the 72 EEG probes strapped to your head.

RobertDeNiro
1 replies
1d

These are also wet electrodes meaning you need to apply gel to every single one. You’ll notice that the person wearing it is also not blinking or using any facial muscles, as that activity would completely throw off the very weak brain signals.

airstrike
0 replies
1d

Sounds like they'd benefit from being in a sensory deprivation pool to enhance the quality of the signal!

https://i.stack.imgur.com/0Rtya.png

dexwiz
0 replies
1d1h

For now. Modern antennas are amazing. Maybe you could beamform from a lower number of devices.

djaro
0 replies
21h50m

I would never use this because I cannot 100% control my thoughts (i.e. intrusive thoughts, songs stuck in head, secrets)

derefr
0 replies
1d

hear responses via earbud

Maybe that's not even necessary.

I'd be very curious to see the results of trying to use the hardware in this system as a set of transducers — i.e. running the ML model here in reverse from a target text, and then pushing the resulting bottom-level electrical signals as trans-cranial direct-current stimulation (tDCS) signals back through the EEG pads.

How interesting would it be, if this resulted in a person hearing the text as a verbal thought in their own mental voice?

d-lisp
0 replies
21h35m

Twenty years ago I couldn't even imagine that I would find smartphones to be somewhat boring. Twenty years ago, I was finding GameBoy color to be the coolest stuff in the world.

PsOne's Tomb Raider seemed hi-res before hi-res even existed; I thought we were at the peak of gaming.

Apple Vision Pro wants to make computers spatial, we find telepathy cool.

I would love to code by the sole action of my mind while running in the forest or scuba diving, 10 seconds here and there.

I would love to receive a drawing made in the mind of someone else, to see it appear in front of me and to be able to share it with others around me : "-Hey, look at what Julia did."

And again, that's exactly what happens already but in a more immediate manner; replace smartphone with mind, screen with environment and you're in that futuristic world.

It feels like this is cool because of novelty, but then wouldn't it be cool to go back to punching code on cards, or writing lines with ed on a terminal ?

A few years ago I went from music production in a DAW to ten synthesizers (70-84 era) with a tape machine : way cooler, never going back.

But do I produce as fast as before ?

Nope

Here is what I think : I want the possibility of writing code with my mind and virtual floating screens only because of _one thing_ (apart from the initial first few days of new=cool).

I want this to work less, or more exactly to be less at work.

But you know how it will be; you will be asked to produce more work. And this will become mandatory to work by the sole power of your mind, with 5 or 6 virtual screens around you.

And that's all, until a new invention seems cool to you.

SV_BubbleTime
0 replies
1d1h

Be careful what you wish for. The unintended consequences of this are going to exceed imagination.

opdahl
6 replies
1d

It's crazy to me that someone has developed a technology that literally reads people's minds fairly accurately and it's just a semi-popular post on Hacker News.

empath-nirvana
1 replies
1d

Do people not think of _anything_ while they're reading besides the text that they're reading? I think of all kinds of other stuff while I'm reading books.

d-lisp
0 replies
21h27m

Reading a whole page without actually reading it, effectively letting your eyes run through its lines while thinking about something completely different, is peak literature.

fragmede
0 replies
15h31m

IFLScience does a lot of science cheerleading, and I love them for it, but my (I can't speak for others) enthusiasm is tempered by the number of times limitless nuclear power, cures for cancer, and flying cars have been promised. That's not to say this isn't impressive - it's nothing short of mind-blowing - but I'm not expert enough to evaluate how real this result is. I'll note that technology to communicate with coma patients has been showcased before and turned out to be a hoax. Not saying this is, but, again, it tempers my enthusiasm.

callalex
0 replies
23h9m

Well, the results marketed by this study are vastly overstated, bordering on unethical lying. Figure 1 is literally just made up. See discussion here: https://news.ycombinator.com/item?id=38674971

ShamelessC
0 replies
1d

It’s at the top of the front page now fyi

edit: and it’s sliding down again. Your comment will be relevant again shortly ha

RobertDeNiro
0 replies
1d

Anyone familiar with Brain computer interfaces would not be surprised by this article. People have been capturing brain waves for a while and using it for all sorts of experiments. This is just an extension of what has been done before. It’s still not applicable to anything outside of a lab setting.

karaterobot
5 replies
23h52m

While it’s not the first technology to be able to translate brain signals into language, it’s the only one so far to require neither brain implants nor access to a full-on MRI machine.

I wonder whether, in a decade or two, if sensor technology gets good enough that you don't even need to wear a cap, there'll be people saying "obviously you don't have any reasonable expectation of not having your thoughts read in a public space, don't be ridiculous". What I mean is, we tend to normalize surveillance technology, and I wonder if there's any practical limit to how far that can go.

SoftTalker
2 replies
16h28m

We are still operating computers the same way we did in the 1970s: keyboards and screens. I'm not holding my breath.

quickthrower2
1 replies
14h46m

No we are not. Voice and touchscreens now for much computer usage.

SoftTalker
0 replies
14h3m

Touchscreens are effectively keyboards, just glossy flat ones.

Voice works as a very inferior substitute when hands-free operation is of overriding importance.

simcop2387
0 replies
22h11m

I think this is when we start wearing tin foil hats

MrGinkgo
0 replies
16h7m

swagempire
2 replies
1d

Now...1984 REALLY begins...

demondemidi
1 replies
19h28m

This is way beyond "1984". At least the book had the sanctuary of the mind as a place to hide.

swagempire
0 replies
13h29m

Right that's a good point.

hyperific
2 replies
1d1h

Reminds me of DARPA "Silent Talk" from 14 years ago. The objective was to "allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals"

https://www.engadget.com/2009-05-14-darpa-working-on-silent-...

lamerose
0 replies
23h33m

Subvocal speech recognition has been going just as long.

baby
0 replies
1d

Dragon ball did this way before

ecolonsmak
2 replies
1d

With half of individuals reportedly having no internal monologue, would this be useless with them? Or just render unintelligible results?

klabb3
0 replies
23h56m

I’m pretty sure I’m one of them so I’m surprised reading these comments assume everyone thinks in words. I’m sure you can do a best effort projection of thoughts to words but it’d be extremely reductive, at least for me.

Jensson
0 replies
23h48m

Given that LLMs can learn to translate between languages based on just having lots of related tokens without any explanations I'd bet they could translate those thoughts to words even if the person doesn't think of them as words.

Would probably take more to get data from such people though. From people with an inner monologue you could just make them read text, record that, and then you can follow their inner monologues.

reqo
1 replies
1d1h

I bet this will make Neuralink useless! It would be great for the poor animals getting operated on!

d-lisp
0 replies
21h23m

Neuralink also claims to be able to help people with motion related disabilities, which is at least some good thing.

dmd
1 replies
1d

Similar work for turning thoughts into images: https://medarc-ai.github.io/mindeye/

lamerose
0 replies
22h29m

fMRI-to-image

Not so impressive compared to EEG.

chpatrick
1 replies
1d1h

Must be great for interrogation.

d-lisp
0 replies
21h22m

Thought hold-up also ?

Jensson
1 replies
22h53m

Can we train an LLM based on brainwaves rather than written text? Seems to be closer to how we actually think and thus should enable the LLM to learn to think rather than just learn to mimic the output.

For example, when writing we have often gone down many thought paths, evaluated each, and backtracked, but none of that is left in the text an LLM trains on today. Recording brainwaves and training on that is probably the best training data we could get for LLMs.

Getting that data wouldn't be much harder than paying humans to solve problems with these hats on recording their brainwaves.

ComodoHacker
0 replies
22h41m

On the other hand, the main practical feature of a language is its astronomical SNR, which brain waves lack, to say the least. This allows LLMs to be trained on texts instead of millions of live people. Just imagine the number of parameters and compute resources required for the model to be useful to more than one human.

waihtis
0 replies
1d1h

now's a good time to get into meditation, lest you want the advertisers to read your unfiltered thoughts!

turing_complete
0 replies
8h32m

Why not link to the paper instead of this shitty website? https://arxiv.org/abs/2309.14030v2

swayvil
0 replies
1d

A lie detector?

If it can extract words from my grey spaghetti then maybe it can extract my intention too.

That's probably incredibly obvious and I'm silly for even bringing it up.

smusamashah
0 replies
1d1h

The article and the video didn't explicitly say how many words/min they were doing. If the video was not just a demo (like Google's) then it's very impressive on speed alone.

odyssey7
0 replies
1d

I wonder if a-linguistic thought could work too. Maybe figure out what your dog is thinking or dreaming about, based on a dataset of signals associated with their everyday activities.

It seems like outputting a representation of embodied experience would be a difficult challenge to get right and interpret, though perhaps a dataset of signals associated with embodied experiences could more readily be robustly annotated with linguistic descriptions using a vision-to-language model, so that the canine mind reader could predict and output those linguistic descriptions instead.

Imagine knowing the specific park your dog wants to go to, or the subtle early signs of an illness or injury they're noticing, or what treat your dog wants you to buy.

motohagiography
0 replies
17h11m

Maybe I'm missing something huge here but a blind controlled demo where the subject committed words to paper and then compared the results afterwards would be persuasive. Unfortunately, the demo as presented in the article seemed achievable by professional magicians and mentalists.

I'm sure we're close to brain interfaces, but something seems off about this one.

Let's say a couple years from now, someone invents an airport scanner that "detects" evil thoughts, except there is no way to verify it, and no accountability for false negatives. The result is whatever the operator says it is. If enough people accept it enough to not resist it, and even turn on the ones who are detected by it, it doesn't matter what's real because it's just a participatory ritual of sympathetic magic. I feel like there are examples of similar dynamics in recent memory.

lamerose
0 replies
23h9m

This is from a paper published back in September btw: https://arxiv.org/pdf/2309.14030.pdf

lamerose
0 replies
23h57m

Seems like it could just be getting at some phonetic encoding, or even raw audio information. The grammatical and vocab transformations could be accounted for by an imperfect decoder.

jcims
0 replies
21h22m

I’ve been wondering lately about the role of language in the mind and if we might in the future develop a successor that optimizes for how our brains work.

iaseiadit
0 replies
1d

How long from reading thoughts to writing thoughts?

hliyan
0 replies
14h7m

It might be far more effective to capture subvocalisations using surface electrodes and then running those signals through machine learning: https://annals-csis.org/Volume_11/drp/pdf/153.pdf

chucke1992
0 replies
23h3m

Sword Art Online when?

anonytrary
0 replies
19h2m

Using EEG to predict thought is like looking at the clouds in Mumbai to predict the clouds in Austin. The electrical signal from individual neurons are lost in a sea of large-scale oscillations, which are further blurred by the layers of bone, muscle, and tissue that separate the device from the brain. Bitrate is like 1 bit per second, completely insufficient for most use-cases.

amrrs
0 replies
1d

FYI The base model that this one uses had some bug in their code which had inflated their baseline results. They are investigating the issues - https://github.com/duanyiqun/DeWave/issues/1

amelius
0 replies
1d

Can it read passwords?

I'm guessing it would be worse at reading passwords like "784&Ghkkr!e" than "horse staple battery ..."
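For what it's worth, the trade-off in that guess can be made concrete with a back-of-the-envelope entropy estimate (the pool and wordlist sizes below are common illustrative values, not anything from the article):

```python
import math

# Back-of-the-envelope password entropy, assuming the attacker knows the
# generation scheme. Pool sizes are illustrative.
def entropy_bits(pool_size: int, length: int) -> float:
    return length * math.log2(pool_size)

# 11 characters drawn uniformly from ~94 printable ASCII characters:
print(round(entropy_bits(94, 11), 1))   # 72.1 bits
# 4 words drawn uniformly from a 7776-word Diceware-style list:
print(round(entropy_bits(7776, 4), 1))  # 51.7 bits
```

So the word-based passphrase is easier to subvocalize, but a uniformly random character string of similar typing length carries more entropy; either way, whatever you mentally rehearse while typing is what a decoder would see.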

Nasrudith
0 replies
15h15m

I'm a bit confused by the eavesdropping on thoughts conclusions being drawn. Did I misinterpret something, or wouldn't this be (currently) quite limited for "mind-reading" if it only picks up signals from reading and learns them. There is a difference between mapping "inner voice" and whatever is read.

Granted there may be chains of assumptions of advancement involved between them into actual thought-reading, but we've seen before how the devil has hid in those details in advancement chains. See all of the problems with autonomous vehicles in practice. People kind of suck at predicting what problems will be easy to solve.

INTPenis
0 replies
8h57m

    And if my thought-dreams could be seen
    They’d probably put my head in a guillotine
    But it’s alright, Ma, it’s life, and life only

DerSaidin
0 replies
1d

https://youtu.be/crJst7Yfzj4

Not sure on the accuracy in these examples, but this video may be showing the words/min speed of the system.