New theory claims to unite Einstein's gravity with quantum mechanics

rhelz
57 replies
2d17h

The various standard kilograms held by the national bodies, and the prototype standard kilogram in Paris, were measured very precisely over the past hundred years, and their weights did seem to diverge mysteriously.

dataflow
53 replies
2d17h

What made them mysterious? Aren't there always some fraction of atoms or molecules with sufficiently high energy to escape the object?

mvdtnz
36 replies
2d16h

The fact that you, a random internet guy, know this makes me believe that the world's top experts on measuring this exact thing probably thought of it, and if there is a mysterious divergence this probably doesn't account for it.

sam0x17
14 replies
2d16h

I find that it saves time to just start by assuming the experts haven't thought of X because of how many times I've seen assuming that they _have_ thought of X turn out to be a poor assumption, across many domains.

Widdershin
11 replies
2d13h

Can you give any examples of times you’ve easily anticipated X when a whole field of subject matter experts have demonstrably overlooked it?

dataflow
3 replies
2d11h

This isn't to agree with the parent comment, but wouldn't this situation itself be an answer to your question (assuming the claim is true)? Laymen like me easily anticipated mass divergence, but purportedly scientists have been surprised by it.

sumtechguy
0 replies
2d3h

I think the surprising bit is that they measured it, saw it happen, and do not have an exact reason. Anything else is a good guess. Of those, people have plenty.

monktastic1
0 replies
2d7h

This comment chain is getting circular. We can't use this as an example for itself by assuming that it is true.

cwillu
0 replies
2d7h

The procedure of multiple weights being calibrated against a single standard is _predicated_ on anticipated mass divergence.

The mystery being discussed is that, even after the obvious sources of error are allowed for, there is still a discrepancy, and it's not easy to determine how much of that discrepancy is with the weights being recalibrated vs the test standard they're being calibrated to. None of which is shocking to anyone involved, just puzzling.

simiones
2 replies
2d11h

I don't really agree with the OP, but I do think there is at least one, possibly two such examples. The pretty clear one is nutrition: the vast majority of studies and recommendations made over the years are pure bullshit, and quite transparently so. They either study a handful of people in detail, or a huge swathe of population in aggregate, and get so many confounding variables that there is 0 explanatory power in any of them. This is quite obvious to anyone, but the field keeps churning out papers and making official recommendations as if they know anything more about nutrition than "missing certain key nutrients can cause certain diseases, like scurvy for missing vitamin C".

sam0x17
0 replies
5h4m

Nutrition in particular is a scenario where major corporations willfully hid research about sugar and the like for years and years and funded research attacking fat content instead, which it turns out is actually pretty benign. Perfect example.

digging
0 replies
2d3h

Is that an example of "the experts didn't actually think of [simple explanation]" though?

anymouse123456
2 replies
2d9h

Can't speak for OP, but I've had more than a few similar experiences (from both sides of the fence FWIW).

I can think of one example in software deployment frequency. The observation (many years ago) was that it's painful and risky (therefore, expensive) to deploy software, so we should do it as infrequently as the market will allow.

Many companies used to be on annual release schedules, some even longer. Many organizations still resist deploying software more than every couple/few weeks.

~15 years ago, I was working alongside the other (obviously ignorant) people who believed that when something is painful, slow and repetitive, it should be automated. We believed that software deployment should happen continuously as a total non-event.

I've had to debate this subject with "experts" over and over and over again, and I've never met a single person who, once migrated, wanted to go back to the nightmare of slow, periodic software deployments.

rtsil
0 replies
2d4h

That's more "the experts had a (wrong) opinion on something" than "the experts overlooked something obvious". They didn't overlook it, they thought about it and came to a conclusion.

And if by "many years ago" you refer to a period where software deployment was mostly offline and through physical media, then it was indeed painful and risky (and therefore expensive). The experts weren't wrong back then.

ndriscoll
0 replies
2d9h

I don't see why a slow deployment cadence is a nightmare. When I've worked in that setting, it mostly didn't matter to me when something got deployed. When it did (e.g. because something was broken), we had a process in place to deploy only high priority fixes between normal releases.

Computers mostly just continue to work when you don't change anything, so that meant after the first week or so after a release, the chance of getting paged dropped dramatically for 3 months.

sam0x17
0 replies
5h17m

* [insert every example of "15 year old unknown vulnerability in X found" here]

* have to be a bit vague here, but while working as a research scientist for the US Department of Defense I regularly witnessed and occasionally took part in scenarios where a radical idea turned "expert advice" on its head, or some applied thing completely contradicted established theoretical models in a novel or interesting way. Consistently, the barrier to such advancements was always "experts" telling you that your thing should not / could not work, blocking your efforts, withholding funding, etc., only to be proven wrong. Far too many experts care more about maintaining the status quo than actually advancing the field, and a concerning number are actually on the payroll of various corporations or private interests to actively prevent such advancements.

* over the last 30 years in the AI field, there have been a few major inflection points: Yann LeCun's convolutional neural networks and his more general idea that ANNs stood to gain something by loosely mimicking the complexity of the human brain, for which he was originally ridiculed and largely ignored by the scientific community until convolution revolutionized computer vision; and the rise of large language models, which came out of natural language processing, a whole branch of AI research that had been disregarded for decades and was definitely not seen as a thing that might ever come close to something like AGI.

* going back further in history there are plenty of examples, like quantum mechanics turning the relativistic model on its head, Galileo, etc etc. The common theme is a bunch of conservative, self-described experts scoffing at something that ends up completely redefining their field and making them ultimately look pretty silly and petty. This happens so frequently in history that I think it should just be assumed at all times in all fields, as this dynamic is one of the few true constants throughout history. No one is an expert, no one has perfect knowledge of everything, and the next big advancement will be something that contradicts conventional wisdom.

Admittedly, I derived these beliefs from some of the Socratic teachings I received very early in life, around 6th grade or so back in the late 90s, but they have continually borne fruit for me. Question everything. When debugging, question your most basic assumptions first: "is it plugged in?", etc.

It's sort of at a point these days where if you want to find a fruitful research idea, probably best to just browse through conventional wisdom on your topic and question the most fundamental assumptions until you find something fishy.

bookmark1231
1 replies
2d15h

That’s incredibly specious reasoning.

justinclift
0 replies
2d14h

Probably gets a better result than the alternative though. ;)

dataflow
11 replies
2d16h

I figured as much, but that's kinda why I'm asking what made it mysterious. I actually hadn't heard of it being seen as mysterious until today - that comment is the first time I've seen that. Until today I thought it was unsurprising for scientists.

yetihehe
9 replies
2d13h

> I figured as much, but that's kinda why I'm asking what made it mysterious.

You expect simple explanations to VERY complex problems from random internet commenters. I think you expect too much.

quietbritishjim
4 replies
2d12h

I think the real question is: where is the evidence that scientists find it mysterious? That doesn't need any sort of explanation, just a reference for a claim made by a commenter. As with the other commenter, I've heard about the mass divergence before, but never that anyone found it surprising.

gumby
3 replies
2d12h

As has even been discussed on HN over the years (!), the cause is unknown.* This is one of the motivations to replace the official kilogram with a definition based on measurable natural quantities, as had been done with the second and the meter. It took until 2018 to do so: https://www.nist.gov/si-redefinition/kilogram-introduction

* it’s not like there aren’t good theories but you can’t experiment directly on “the” kg because you can’t risk changing it!

dataflow
2 replies
2d11h

I'm not seeing the cause mentioned as unknown in that link, nor would the cause being unknown imply anything mysterious, would it?

Like if you find that your car has broken down after 200k miles, you might not be able to determine the cause, but it wouldn't exactly be some kind of mysterious physical phenomenon that would puzzle scientists. Obviously, something wore out. Why/how is this any different?

gumby
1 replies
2d10h

Did you click on the other kilogram links (or the other metric links, which are also interesting)?

A useful paragraph is the following: "The trend during the past century had been for most of BIPM's official copies to gain mass relative to the IPK, although by somewhat different amounts, averaging around 50 micrograms (millionths of a gram) over 100 years. But an alternative explanation is that the IPK was losing mass relative to its copies. Even more likely, it was a combination of both."

Those pages are for the lay audience but you can do your own web search (probably even an algolia search of HN) to know more.

In general, it's often easy to come up with an explanation of some new phenomenon, but to answer the question you still have to do science...which often simply confirms intuition but not always. My comment was less on the reason and more on the difficulty of doing experiments on what was literally the kg, no matter how it fluctuated.

DFHippie
0 replies
2d8h

My theory: People are treating them with different degrees of reverence. They are careful to polish anything off the IPK, causing it to lose weight gradually through abrasion. The others gain weight through random adhesions, perhaps oils picked up by gloves used to handle them.

I'm just some random guy on the internet, I know. It's fun to have theories!

quickthrower2
1 replies
2d12h

Not on HN: sometimes some gems of comments are made by experts.

yetihehe
0 replies
2d12h

Sometimes: when you least expect them. Not expecting them is a good strategy here.

vanderZwan
0 replies
2d12h

I don't disagree with you, but on the other hand: if people claim something in a discussion thread we need to be able to ask them to back those claims up. Otherwise anyone can just assert anything.

Sure, the explanation might not fit in the margins here, but then a link to a source that does attempt to explain it would be fine too.

gcr
0 replies
2d7h

One reason why questions like this are important is because it gives the audience around the asker an opportunity to learn something new. Isn't it important to be able to express curiosity, especially for the benefit of others?

Guvante
0 replies
2d6h

50 µg over 130 years is way too fast.

> The reason for this drift has eluded physicists who have dedicated their careers to the SI unit of mass. No plausible mechanism has been proposed to explain either a steady decrease in the mass of the IPK, or an increase in that of its replicas dispersed throughout the world

https://en.m.wikipedia.org/wiki/International_Prototype_of_t...
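
For scale, a hedged back-of-envelope (my own arithmetic, not from the thread; it assumes pure platinum, though the IPK is actually 90% Pt / 10% Ir) of what a ~50 µg drift over roughly a century implies in atoms:

    # Python: what atom flux would a 50 microgram drift over ~100 years be?
    N_A = 6.022e23                        # Avogadro's number, atoms/mol
    M_Pt = 195.08                         # g/mol, platinum
    drift_g = 50e-6                       # 50 micrograms, in grams
    seconds = 100 * 365.25 * 24 * 3600    # ~100 years

    atoms = drift_g / M_Pt * N_A          # ~1.5e17 atoms
    rate = atoms / seconds                # ~5e7 atoms per second
    print(f"{atoms:.1e} atoms total, ~{rate:.0e} atoms/s")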

ramblerman
4 replies
2d16h

I've heard of an appeal to authority, but an appeal to ignorance is new.

OP's question is not ridiculous, nor does he suggest it to counter the "mysterious" claims

lordnacho
3 replies
2d14h

Not new, Efficient Markets Hypothesis is like this. "I haven't heard of a way to make money from trading so there isn't one" is my caricature of it.

cnity
2 replies
2d10h

Isn't it more like: "exploitation of a market inefficiency renders the market efficient over time", or something to that effect? Or rather, (others) making money from trading is what makes it hard (for you) to make money from trading.

lordnacho
1 replies
2d7h

Sure, depends on how you encounter it. Most often on forums someone will show up wanting to hear about how to write a strategy, and they are rebuffed with "nobody would ever publish that".

cnity
0 replies
2d6h

I love encountering nay-sayers because it means either:

1. I am wrong and could learn something new.

2. They are wrong and their opposition fills me with defiant determination.

The difficulty comes when you view something as case 2 but it is actually case 1 and you embarrass yourself. I do this all the time.

satvikpendem
1 replies
2d16h

They're not being arrogant, as it seems like the parent is genuinely curious as to why such weights diverged, as am I. It is still a good question to ask, since I also have not heard a good answer to that question.

iainmerrick
0 replies
2d12h

Right -- and they're not even asking for the answer, they're just asking what the interesting mysterious phenomenon is.

notjoemama
0 replies
2d9h

Can they perform the calculations to estimate the fluctuations? Can they write an informal explanation about what happens to the gluons? I think the experience of seeing a factoid on the internet is being given too much weight here.

NoLsAfterMid
0 replies
2d11h

Did you ever pause to think "is this comment worth posting"? Maybe you should.

irjustin
7 replies
2d15h

> Aren't there always some fraction of atoms or molecules with sufficiently high energy to escape the object?

Can someone help me understand this? I thought that things were supposed to be stable given the elements used for le grand k [0]; that they don't decay on their own, and that it would have to be another mechanism that explains the divergence?

[0] https://en.wikipedia.org/wiki/International_Prototype_of_the...

pants2
6 replies
2d15h

In a system of particles, due to random Brownian motion and the nature of Gaussian energy distributions, a small number of particles occasionally gain significantly higher energy than average through statistical fluctuations. Maybe enough to escape the binding energy of le grand k. However I would think this is a very very small effect in platinum.
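
For a sense of just how small, a minimal sketch of the escape fraction via a Boltzmann factor (the numbers are assumed, approximate values: platinum's cohesive energy ~5.8 eV/atom, kT ~0.0259 eV at room temperature):

    import math

    # Fraction of atoms with thermal energy above the binding energy,
    # estimated from the Boltzmann factor exp(-E/kT).
    E_escape_eV = 5.8     # assumed cohesive energy of platinum, approx.
    kT_eV = 0.0259        # thermal energy at ~300 K

    fraction = math.exp(-E_escape_eV / kT_eV)
    print(f"~{fraction:.0e}")   # ~6e-98 -- "very very small" indeed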

irjustin
4 replies
2d14h

Sure, radioactivity.

But Le grand K is made of naturally occurring platinum, of which most of the isotopes are observationally stable [0], and the one isotope that isn't has a half-life of 6.5×10^11 years and only makes up 0.012% [1].

So yeah I don't buy the unstable explanation to even begin to show the divergence, let alone "weight gain".

[0] https://en.wikipedia.org/wiki/Stable_nuclide#Still-unobserve...

[1] https://en.wikipedia.org/wiki/Isotopes_of_platinum
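
To put a number on that, a hedged back-of-envelope (my own arithmetic; assumes pure Pt, and notes that Pt-190 alpha-decays, so only the alpha particle actually leaves the mass):

    import math

    # Mass loss rate from Pt-190 alpha decay in 1 kg of platinum,
    # using the abundance and half-life cited above.
    N_A = 6.022e23
    atoms_Pt = 1000 / 195.08 * N_A            # atoms in 1 kg of Pt
    atoms_190 = atoms_Pt * 1.2e-4             # 0.012% natural abundance
    decays_per_yr = atoms_190 * math.log(2) / 6.5e11   # ~4e8 per year
    alpha_mass_kg = 6.64e-27                  # mass of the escaping alpha
    loss_kg_per_yr = decays_per_yr * alpha_mass_kg
    print(f"~{loss_kg_per_yr:.0e} kg/yr")     # ~3e-18 kg/yr, vs ~5e-10 kg/yr observed drift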

[Edit] Wait are you describing evaporation?

bowsamic
2 replies
2d14h

Yes, he's describing evaporation. Here's a thread about it, basically the answer is "we don't know" https://physics.stackexchange.com/questions/77130/why-does-p...

dataflow
1 replies
2d10h

Thank you, that link (particularly [1]) is basically the kind of explanation I was looking for in my initial comment.

[1] https://physics.stackexchange.com/a/784886

imglorp
0 replies
2d7h

There's so many variables there! What kind of purity could 1889 achieve? What kind of uniformity between the N samples? How do they know the different cleaning procedures (!) are not adding mass by leaving something behind? If the samples are known to be oxidizing, why leave them in air? Why are there multiple elbows in that graph around 1950 -- surely another procedural change but again not uniformly applied?

Maybe these aren't the best sources of historic mass data.

shwouchk
0 replies
2d14h

Edit: yes

Y_Y
0 replies
2d12h

Perhaps you mean Boltzmann distributions?

dotnet00
4 replies
2d16h

If it were simply a matter of losing particles, I'd expect that one of the many very capable metrology labs around the world would've devised an appropriate experiment, e.g. storing two exact same masses at slightly different temperatures near absolute zero and characterizing the difference in mass (since presumably the higher temperature sample would lose slightly more mass if it were down to high energy particles).

A test for this theory would have to involve a means of accounting for these kinds of fluctuations.

dataflow
1 replies
2d16h

I mean I just gave one example of how it might lose mass. There are other obvious ways (like when they're picked up etc.). My point wasn't to present a hypothesis for people like myself here to rebut. I can do that just fine myself. I was trying to understand what about this has been confusing for scientists.

chasd00
0 replies
2d8h

> like when they're picked up etc.

Wouldn’t just moving an object change its mass? Any change in potential energy would mean a change in mass from good old E=mc^2
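
For a sense of scale, a minimal sketch (my own round numbers): lifting 1 kg by 1 m adds m*g*h of potential energy, whose mass equivalent is minuscule:

    # Mass equivalent of the potential energy gained by lifting 1 kg by 1 m.
    m, g, h = 1.0, 9.81, 1.0        # kg, m/s^2, m (assumed round values)
    c = 299_792_458.0               # speed of light, m/s
    delta_m = (m * g * h) / c**2    # E = m*c^2, rearranged for mass
    print(f"~{delta_m:.1e} kg")     # ~1.1e-16 kg, far below any balance's precision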

mcpackieh
0 replies
2d11h

Make sure to store them very deep underground down a mineshaft, so random cosmic rays don't ever knock a few atoms loose.

analog31
0 replies
2d4h

Look for material condensed on the inside of the jar.

marcosdumay
1 replies
2d8h

It's well known that they tend to gain weight, not lose it.

And yeah, the mystery is just what mechanism exactly affected each one. (AKA, not much of a mystery.) The possibilities are very well understood.

dataflow
0 replies
2d8h

> And yeah, the mystery is just what mechanism exactly affected each one. (AKA, not much of a mystery.) The possibilities are very well understood.

Ahh, that makes more sense. Thanks!

analog31
0 replies
2d4h

If the object is evaporating, look for material condensed on the inside of the jar.

pegasus
0 replies
2d12h

I'm sure that if they suspected that variance in mass could support their theory, they would have investigated that possibility instead of proposing a new test.

pavon
0 replies
2d4h

If I understand correctly this theory would manifest itself as variation in measuring the same standard kilogram over and over (to a precision beyond what we are currently capable of), whereas divergence between different standard kilograms could have other unrelated causes.

dang
0 replies
2d1h

(We detached this subthread from https://news.ycombinator.com/item?id=38527238.)

hn_throwaway_99
27 replies
2d17h

> A second paper, published simultaneously in Nature Communications and led by Professor Oppenheim's former Ph.D. students, looks at some of the consequences of the theory, and proposes an experiment to test it: to measure a mass very precisely to see if its weight appears to fluctuate over time.

Definitely don't know enough to comment about the overall proposal, but given how many grand physics theories over the past couple decades have been untestable, it's really nice to see the proponents proposing a test.

szundi
13 replies
2d16h

Untestable theories are not theories - I mean, for what purpose and why care?

mr_mitm
4 replies
2d12h

Some theories are only untestable for practical reasons, like needing a particle accelerator the size of the solar system. That doesn't make them unscientific or "not theories". That is not the kind of falsifiability that Karl Popper had in mind.

leptons
1 replies
2d3h

Yes, it does. The word "theory" has a specific meaning. What this article is about is really a hypothesis, not a theory. I saw no evidence or data to back up what they think might be happening, thus not a theory.

"In scientific reasoning, a hypothesis is constructed before any applicable research has been done. A theory, on the other hand, is supported by evidence: it's a principle formed as an attempt to explain things that have already been substantiated by data."

Most people in this thread, as well as the writers of the article seem to be confused about the meaning of the word theory.

mr_mitm
0 replies
2d2h

Maybe that should give you a hint that professional working scientists don't care much for dictionary definitions.

Besides, I was commenting on the topic of falsifiability. To be a scientific hypothesis or theory it must be falsifiable - in principle, not practically. At least that's my claim in the context of Popper. Evidence didn't even enter the discussion.

j-krieger
1 replies
2d

Do you have any examples? I'd like to read up!

mr_mitm
0 replies
1d11h

This guy is exploring ultra large colliders:

https://arxiv.org/abs/2106.02048

> The CCM will be an important stepping stone toward an ultimate Planck-scale collider, with a centre-of-mass energy of ∼10^16 TeV, that would require a minimum size equal to a tenth of the distance from Earth to Sun.

Cthulhu_
4 replies
2d12h

Yeah they are, they just cannot be verified - or cannot be verified yet.

The Higgs boson was theorised in 1964 and was untestable at the time, but that theory got the funding together to create one of the largest and most expensive scientific instruments ever built in order to prove it.

Nuclear fusion was theorised in 1915, again untestable at the time, and look how far we've come.

The above comment is an anti-science comment if I've ever seen one.

thereddaikon
3 replies
2d7h

Usually when people say something is untestable they don't mean like the Higgs boson. It was testable, they just needed the proper tools. This was always known and was being actively worked towards. An untestable theory is one that cannot be tested regardless of the tools, due to the nature of the theory. Multiverse is an example of an untestable theory. There is no way we know of to actually prove or disprove it. And it's fair to question their legitimacy as theories. As testing a hypothesis is the core action of the scientific method, it logically follows that anything untestable is fundamentally not science and therefore not a theory.

snewman
2 replies
2d5h

Isn't the term for this "unfalsifiable"?

thereddaikon
0 replies
2d3h

In this case I think multiverse is both. Testability and falsifiability are closely related, but it's possible to make a testable statement that can't be falsified; in most examples I can think of, the test can't be exhaustive.

defgeneric
0 replies
2d3h

A theory can be untestable but also falsifiable. For example, while the multiverse theory/interpretation may be untestable, it could be immediately falsified if QM were falsified.

Popper's falsifiability criterion was more aimed at theories like Marxism, where it seemed that within the theory there was always something available to account for an exception or challenge.

dragoncrab
0 replies
2d12h

Not reflecting on this particular theory, but in general history is full of theories which were not testable at their birth, where engineering and the evolution of the theory eventually found a way to test them, even if it took a century.

Black holes, the Higgs boson and neutrinos are a few popular examples.

dotnet00
0 replies
2d15h

The other big theories aren't completely untestable, the issue is basically that the models have a lot of parameters which can often be tuned in a way that leaves a possibility for the general theory to still work, and testing them to the point that any corrections seem too unreasonable is even more difficult and expensive.

colordrops
0 replies
2d16h

Path finding

nojvek
7 replies
2d6h

Love that there is a test.

Some of the theories, like gravitational waves and the Higgs boson, took almost half a century to be confirmed by experimentation.

The hard part about this is building tools that can precisely measure mass fluctuations with extremely low error bars at picosecond timescales.

Because we need to both validate the theory and that the device is correctly measuring.

So much of science became only possible because of better instruments. Be it microscopes to see the tiniest phenomena or telescopes to observe the furthest phenomena.

edgefield
4 replies
2d4h

I’m not a physicist much less an astrophysicist and so take what I’m about to say with a hefty grain of salt. But I wonder if this new approach can also explain observations of distant galaxies. Distant galaxies either redshift because they’re moving away faster over time due to dark energy or redshift because their mass is changing. Can this new theory help explain why older galaxies might lose increasing mass over time?

thfuran
2 replies
1d19h

> redshift because their mass is changing.

Why would that cause red shift?

tycho-newman
1 replies
1d17h

Losing energy shifts the frequency of your emissions towards red.

Basically, cooling down is the same thing as moving farther away from an inertial reference point.

mr_mitm
0 replies
1d

Are you thinking of thermal radiation? And what does losing mass have to do with cooling down?

We observe redshift in the spectrum. Hydrogen emits radiation in very specific frequencies, which are redshifted. This cannot be explained by losing mass, heat or energy.

slim
0 replies
1d17h

I thought the redshift is due to the Doppler effect and the universe's expansion (planets are running away from any observer in the universe)

jakubmazanec
1 replies
2d2h

> So much of science became only possible because of better instruments.

I would argue a stronger claim: experimental confirmation of theories and better measurements must always bootstrap each other. The history of temperature has many examples [1].

[1] https://psychology.okstate.edu/faculty/jgrice/psyc4333/Therm...

thfuran
0 replies
1d19h

Surely someone has at some point come up with a theory that was testable with then-current equipment.

guenthert
1 replies
2d12h

> it's really nice to see the proponents proposing a test.

Is it feasible though? I wonder whether that test is not just outside of what can be technically realized today, but whether Heisenberg's uncertainty principle gets in the way.

mikro2nd
0 replies
2d12h

Well,... measuring gravitational waves was outside of the technically realisable for a long time, but a couple of decades of trial and refinement and it got realised.

Animats
1 replies
2d16h

It's good that it's testable. This is one of those ideas which either wins a Nobel Prize or is totally bogus. So what does it take to test it?

stjohnswarts
0 replies
2d8h

Bogus usually implies something meant as a fake or counterfeit. I think a better way to describe it would be "incorrect"; I don't think these scientists are trying to fake anything.

dotnet00
0 replies
2d16h

Yeah, since untestable proposals for this kind of theory are basically the norm, the fact that they have a proposed test made me think it was worth sharing.

hoseja
17 replies
2d14h

What awful AI-generated illustrations.

Anyway, is there any justification for treating spacetime as a smooth manifold, except for mathematical tractability?

vore
7 replies
2d14h

They're not AI generated; there are literally artist credits in the picture captions.

hoseja
5 replies
2d14h

Perhaps they're post-edited somewhat but there isn't a way you're convincing me those are fully handmade. Look at the nonsensically warped tiles. Look at the rods suspending some of the globes but not others in a random fashion, look at the illegible "writing".

spuz
2 replies
2d11h

I think you're right. I don't really have a problem with AI generated art being used in articles like this. After all, what kind of alternative visual can you put alongside an article about fundamental theories of nature? If AI art helps to reduce the cost of scientific journalism (without decreasing the quality) then I'm ok with it.

Having said that, to produce art that actually works is clearly not easy. If you've ever used Midjourney or DALL-E, you'll know it's a challenge to get the tool to output exactly what you have in mind. It's also clearly hard to produce images that are physically realistic. If mismatched tiles and unrealistic perspectives are noticeable then they'll be a distraction and detract from the reader's experience.

hoseja
0 replies
2d11h

But these add nothing to the article but confusion. That's not how an interference experiment works. That's not any sort of weighing device. Could have been generic galaxy-brain stock art and I'd be less annoyed.

CaptainFever
0 replies
2d11h

Oof yeah, I just saw the article pics. I don't have an issue with AI assistance, but the final output here was just kind of incoherent and confusing. Like an unrelated stock photo. I understand that it is hard to control but that just means that the artist needed to do a better job to make something more coherent, more informative and less generic.

Unless yeah as you said, it's just too abstract to put anything coherent, in which case I guess it works (though the mass weighing image surely could have been better than some generic sci fi thing)?

jug
1 replies
2d13h

Yeah, and they have the default DALL-E 3 style written all over them. Isaac Young may be credited but I think Isaac is using DALL-E 3, and I hope they are aware they have hired a "prompt wizard". :)

fl7305
0 replies
2d10h

I think they even put the prompt as the image caption?

fl7305
0 replies
2d10h

Doesn't the image caption contain a prompt?

When I use it in DALL-E 3, I get similar images.

"The image depicts an experiment in which heavy particles(illustrated as the moon), cause an interference pattern (a quantum effect), while also bendingspacetime. The hanging pendulums depict the measurement of spacetime. The actual experiment istypically performed using Carbon-60, one of the largest known molecules. The UCL calculationindicates that the experiment should also be performed using higher density atoms such as gold. The other two images represent the two experiments proposed by the UCL group, both of whichconstrain any theory where spacetime is treated classically. One is the weighing of a mass, the otheris an interference experiment."

staunton
5 replies
2d14h

Yes. GR does it and GR gives good descriptions for a lot of observed phenomena.

hoseja
4 replies
2d13h

Except, you know, all the elementary particles. I have this intrusive thought that those are just bits of non-smoothly kinked/knotted spacetime, but have neither the physics background nor a sense of where to even start to look further into that.

showlife
1 replies
2d3h

Maybe what you're looking for is a "geon"?

https://en.wikipedia.org/wiki/Geon_(physics)

hoseja
0 replies
1d11h

> held together in a confined region by the gravitational attraction of its own field energy

Hm, not quite, they'd be held together topologically but the https://en.wikipedia.org/wiki/Geometrodynamics is interesting.

fsckboy
1 replies
2d12h

> Yes. GR does it and GR gives good descriptions for a lot of observed phenomena.

> Except, you know, all the elementary particles.

except, you know, all the elementary particles don't do a good job of explaining what GR explains

TheOtherHobbes
0 replies
2d12h

Hardly an expert, but it seems more likely to me that GR is the smoothed-out limit of a lot of much smaller stuff happening, than QM is somehow low-level wrinkles in an otherwise perfectly smooth spacetime.

I don't think a smooth anything is physically tractable. You need infinite resolution for perfect smoothness - essentially an infinite amount of information at every point.

Except there are no points. So somehow you have equations that define curvature floating in some kind of metaphysical space which somehow gets mapped to observable phenomena.

And if it's a noisy smoothness - where does the noise come from?

ben_w
1 replies
2d11h

I'm more annoyed with the captions.

> … Carbon-60, one of the largest known molecules.

hoseja
0 replies
2d11h

Eh, C₆₀, one of the largest molecules with observed quantum interference, close enough.

denton-scratch
0 replies
2d9h

The illustrations looked like some sort of steampunk drawings from a fantasy book.

yathaid
15 replies
2d14h

The article mentions the 5000:1 odds bet, but reading the text of the bet itself is hilarious:

>> Whereas Carlo Rovelli and Geoff Penington firmly believe that gravity is better described by a quantum theory.

>> And whereas Jonathan Oppenheim is more sympathetic to other possibilities (i.e. doesn’t have a clue) ...

[1] - https://www.ucl.ac.uk/oppenheim/pub/quantum_vs_classical_bet...

bjornsing
6 replies
2d7h

Interesting that three so obviously intelligent people could get the odds of the bet wrong: it’s not a 1:5000 bet, it’s a 2:5000 bet a.k.a. a 1:2500 bet.

jessriedel
4 replies
2d6h

I mean clearly if it's a mistake this is a matter of terminology not intelligence, but could you explain how it's wrong?

If A wins, B pays A $1. If B wins, A pays B $N. For the bet to be fair (zero expectation value), the probability that A wins needs to be p=N/(N+1) and that B wins needs to be 1-p=1/(N+1). My understanding of what "odds" are is the ratio of the outcome probabilities: p:(1-p) = p/(1-p) = N/1 = N:1. (This is why the "log odds" are log(p/(1-p)).)
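
Spelling that out (a sketch of the same algebra, with p the probability that A wins):

    % Zero-expected-value ("fair") condition for the N:1 bet described above:
    % A wins with probability p and receives 1; B wins and receives N.
    \[
      \mathbb{E}[\text{A's payoff}] = p \cdot 1 - (1-p) \cdot N = 0
      \;\Longrightarrow\; p = \frac{N}{N+1},
      \qquad
      \text{odds} = \frac{p}{1-p} = N : 1 .
    \]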

Do they use a different notion of "odds" in sports gambling?

bjornsing
3 replies
2d6h

Turns out the only interesting thing is how sloppy my reading is (and how quick I am to blame others for that sloppiness…).

I simply missed the second “each” below:

> Should space-time be shown to be quantum, the loser will give to each of the winners, one ITEM of their choice. If the alternative hypothesis is deemed to be correct, the losers will each give 5,000 ITEMS to the winner.
seeknotfind
2 replies
2d5h

Given this each, I'm surprised the headline isn't 10,000:1 bet.

Also, who wants 10,000 <$20 items? It's a cool payout, but if I wanted to spend ~$200,000, I wouldn't do it in 10,000 pieces.

sgt101
0 replies
2d5h

I think, reading it, it could be £1000 worth of wine. For example, two cases of decent Gevrey-Chambertin; I could bring myself to accept such a thing.

dash2
0 replies
2d4h

It's not $20. It's 20 British pence - about 25 cents.... "Examples include some crisps, a bazinga ball, a small amount of olive oil, balsamic vinegar, or wine."

nabakin
0 replies
1d23h

I don't think so. It's 5000 from each loser and there are two potential losers so 5000*2=10000 and 2:10000 is the same as 1:5000.

bilekas
4 replies
2d11h

> a small amount of olive oil, balsamic vinegar, or wine

I think we can guess who wrote up the contract!

denton-scratch
2 replies
2d10h

The contract doesn't specify what kind of balsamic vinegar. Supermarket balsamic is just a mixture of colourings and flavourings. Proper solera balsamic is sipped as an aperitif, and is much too expensive for me to have ever tasted.

dash2
0 replies
2d4h

Why not have both: salt 'n' vinegar flavour crisps?

GuB-42
0 replies
2d9h

It doesn't say the amount either, just that each item must be worth less than 20 British pence.

So it can be one drop of "Aceto Balsamico Tradizionale di Modena DOP", which is the real deal. Prices start at around 50€ for 100mL, but it can easily go into the 100s if you want the more premium ones (25 years). A single drop would fit the 20 pence bill.
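
A quick sanity check on the single-drop claim (all numbers below are my assumptions):

    # Cost of one drop of 50 EUR / 100 mL balsamic vinegar, in pence.
    price_eur_per_ml = 50.0 / 100.0    # entry price cited above
    drop_ml = 0.05                     # a typical drop, assumption
    eur_to_gbp = 0.86                  # rough early-2021 rate, assumption
    pence = price_eur_per_ml * drop_ml * eur_to_gbp * 100
    print(f"~{pence:.1f}p per drop")   # ~2p, comfortably under the 20p cap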

DrBazza
0 replies
2d11h

Crisps though!

> Should space-time be shown to be quantum, the loser will give to each of the winners, one ITEM* of their choice. If the alternative hypothesis is deemed to be correct, the losers will each give 5,000 ITEMS to the winner.

> * ITEM is defined to be a object of the winner’s choice, worth no more than 20 British pence on January 21st, 2021. Examples include some crisps, a bazinga ball, a small amount of olive oil, balsamic vinegar, or wine

3cats-in-a-coat
2 replies
2d9h

It's heartwarming to see these people have fun while still sticking to serious work. Everything should be like that.

trinsic2
1 replies
2d7h

When it comes to science, only the British seem to be able to do that.

n4r9
0 replies
2d5h

Rovelli is Italian, and I think Penington is American and Oppenheim is Canadian.

lostdog
10 replies
2d16h

Is there a decent explanation of what the conflict is about, that is understandable with a non-physics-major college level maths background?

selecsosi
3 replies
2d16h

Being _hyper_ reductionist: the standard model of physics, which describes, to an extremely high degree, the interaction (creation, elimination, and chance of one thing turning into another) of particles that constitute "stuff" (matter / energy), does not account for the interaction of matter via gravitation.

When most things interact with each other, they create "particles" to exchange force / energy (in the case here, much of this is actually virtual particle interaction / creation). An example is two magnets interacting with each other: the repulsion/attraction between the magnets and their magnetic fields is carried by force carriers best described as "virtual photons": https://en.wikipedia.org/wiki/Virtual_photon

The whole standard model is like this (protons/neutrons exchange gluons via the strong nuclear force).

For gravity, there isn't a good "particle" we've found that can accurately describe how gravity behaves. It's good to note here that a "particle" is what is used to "quantize" something, and so quantum mechanics is the study of how these subatomic particles interact and are created when things like protons are smashed together in a particle accelerator.

Gravity, on the other hand, behaves more like a classical theory (in the sense that it is a field, rather than a discrete quantized energy exchange). In the parlance of the field, mass appears to cause a curvature in spacetime according to the rules (quite accurately predicted) of general relativity.

This appears to be some sort of reconciliation by acknowledgment of the different behaviors of the models (GR vs QM), and it chooses a path to verify this by looking at the breakdown of matter (in this case, entangled quantum particles) in an asymptotic limit that might be imposed on a quantized theory when it is subatomically reconciled against a continuous field like gravity.

(My own notes) This might be indicating that there is a continuous nature (or at least continuous at the scales that these subatomic particles experience) such that when the particles interact with the continuous field, their movement creates some snapback of the field (think of how movement in water creates microscopic cavitation) which disrupts what would be a continuous laminar flow in the water case, creating macroscopic turbulence (in this case disturbing the entangled particles)

* Edit: a little clarity / notes on the SM/QM/GR overlap

sroussey
1 replies
2d15h

Coming from an engineering background, is there something about this that might prove useful?

Like electron tunneling that made a good amplifier (transistor)?

Could this “snapback” be utilized in interesting ways?

loosak
0 replies
2d13h

One day, Sir, you may tax it...

junon
0 replies
2d15h

Wow, thank you for this very clear explanation. :)

tsimionescu
1 replies
2d13h

User selecsosi gave a beautiful explanation, but perhaps it's a bit too detailed.

Trying to give a slightly more high-level idea: the conflict is simply that if you introduce the gravitational effects (spacetime bending) from general relativity into the equations of quantum mechanics, you get infinities all over the place. This happens because in QM particles are not localized; they exist as a probability wave with certain values at any space and time, but the effects of their mass at all those spaces and times add up to infinity if you just apply the calculations from relativity, in a way that we haven't been able to solve.

This is all very roughly speaking - the exact reasons are quite technical and I don't understand them myself. Electromagnetism and other interactions had similar issues, but we found mathematical solutions to fix them, and those solutions we know just don't work with the equations for gravitational interactions.

GoblinSlayer
0 replies
2d13h

Infinities appear if particles are assumed to be zero size: they become black holes with infinities at the event horizon.

whatshisface
0 replies
2d16h

Gravitons don't work as force carriers in quantum field theory because the sums describing them diverge even worse than other sums describing the three other forces; sums which also diverge but can be corrected.

psc
0 replies
2d12h

ScienceClic on youtube has some really great videos, the first half of the string theory video [1] does a good job framing the conflict at a high level, string theory being one attempt to resolve the problem.

It doesn't go into math but mentions that in the standard model, interactions are local, which for example means something like photon emission is instantaneous. When applied to gravity/gravitons, you get infinities in the math. This sort of makes sense to me, but I have yet to find anything that does an understandable job explaining how/why exactly the math breaks down.

There's a series on the maths of general relativity [2] which I haven't completed but I'm hoping will provide a bit more background to understand the math.

Also, the article mentions that the new theory was motivated by solving the black hole information paradox [3], which there's a good video [4] about as well

[1] https://www.youtube.com/watch?v=n7cOlBxtKSo

[2] https://www.youtube.com/watch?v=xodtfM1r9FA&list=PLu7cY2CPiR...

[3] https://en.wikipedia.org/wiki/Black_hole_information_paradox

[4] https://www.youtube.com/watch?v=isezfMo8kWQ

mvdtnz
0 replies
2d16h

Depends how much time you want to spend on it. I first learned about this topic in Brian Greene's book The Elegant Universe back in 2005 or so. At the time there was a glut of popular science books on the topic (Lee Smolin had an excellent book titled The Trouble with Physics which made a strong counterargument to Greene's string-theory-heavy title).

If you want to go much deeper, Roger Penrose wrote The Road to Reality which is a much much harder read (I was never able to complete it after 3 separate attempts).

empath-nirvana
0 replies
2d7h

The curvature of space time depends on the distribution of mass and energy within that space time. If you're modeling general relativity classically with quantized mass and energy at very small scales, stuff gets "weird". If a particle isn't in a well-defined location with a well-defined energy, then what is its effect on the curvature of space time around it?

maaaaattttt
9 replies
2d15h

This is a question I've been asking myself (probably due to a lack of understanding and knowledge) and this theory seems to hint at something similar: why do we assume spacetime to be homogeneous in its nature? Here they seem to say that on some quantum level spacetime could vary drastically. Couldn't then dark matter be explained by some variations on a large scale? Like giant "wrinkles" in spacetime? They would then behave as pseudo black holes, or more like "black trenches", that lead to the structures we see today in the universe. The structures seem to have more mass than what we can actually see, but are in fact just placed in locations that behave differently gravity-wise?

samus
1 replies
2d15h

Science has fared quite well so far with the assumption that the laws of physics are largely the same no matter where and when we are. There would have to be very strong evidence to throw that out. And the immediate next problem is that it actually wouldn't explain much. The immediate next question would be "why is spacetime behaving this way".

maaaaattttt
0 replies
2d14h

Ultimately it's just a way to say, "well, there is simply more gravity there". I was also thinking of the "why", and I'd say it's because when the universe expands, or initially expanded, it didn't happen homogeneously. In my opinion it's more realistic that spacetime isn't homogeneous than that it is.

jerf
1 replies
2d7h

"Here they seem to say that on some quantum level spacetime could vary drastically."

This is not a result of this theory. Our best current theories say this too: https://en.wikipedia.org/wiki/Quantum_foam Spacetime getting distinctly different at very small scales is a vital part of all modern theories and I doubt it's going away. Even if spacetime is shown to be continuous itself, which I tend to personally doubt [1], getting close to the scales where the uncertainty principle becomes a major influence rather than an almost academic curiosity will inevitably have major impacts on the behavior of things.

"Like giant "wrinkles" in spacetime?"

Giant wrinkles in space time don't act like dark matter. They either smooth themselves out basically instantly (on cosmological scales) or form something other than a giant wrinkle, such as a cosmic string: https://en.wikipedia.org/wiki/Cosmic_string

In fact one of the low-key mysteries extant in the universe today is that our best theories say cosmic strings really ought to exist, but we don't see them. Though I think the confidence in our theories being pushed to that extent is low enough that this is not generally considered the biggest problem, and very unlikely to be the thing that cracks the mystery, so it isn't something that gets talked about a lot. So you could say that we actually don't "expect" space time to be homogeneous and one of the mysteries is why it observationally is!

[1]: https://news.ycombinator.com/item?id=38433917

maaaaattttt
0 replies
2d

Thank you for the link to cosmic strings! This is actually exactly what I had in mind when, as you quoted, I was mentioning wrinkles and “black trenches”.

So, you don’t think structures like galaxy superclusters are a second hand visualization of the existence of cosmic strings? I know these formations could be explained by the effect of gravity of just visible matter. But maybe it’s both. Like a valley could exist because of a river and a river could exist because of a valley.

Since this concept has been around for so long I have some reading to do to understand why cosmic strings are not a possible explanation for dark matter. To me high density and invisible fit the bill for dark matter quite well.

Edit: never mind, continued reading the article and saw this

> in the past it was thought that their gravity could have been responsible for the original clumping of matter into galactic superclusters. It is now calculated that their contribution to the structure formation in the universe is less than 10%.

zmgsabst
0 replies
2d13h

I have a similar question:

What happens to quantum tangles[0] that get inflated? — do they melt or is it possible for them to absorb energy in a hard to interact with state?

Eg, is the reason dark matter looks like a braid that it is? — and inflated foam turned into a foamy-web?

[0] - https://en.wikipedia.org/wiki/Anyon

vanderZwan
0 replies
2d12h

If you look at the various ways people have tried to interpret thermodynamics and the observation that our local universe started in a state of low entropy (not exactly spacetime, I know, but it's related to the problem), then you'll see that a variation of your question has been very thoroughly considered already.

Namely, the possibility of the low-entropy start of our observable universe being merely a local statistical fluke on a cosmic scale across infinite time and space.

The problem is that this often leads to weird paradoxes, like the Boltzmann brain universe[0], and from what I understand it's still not entirely clear if that's just a sign we're doing statistics wrong or if there's something missing in the theory of cosmology.

Personally, I'm wondering if the issue isn't that these theories are assuming randomness, but in reality the next "state" of the universe depends on the previous state, so it's actually pseudo-random. But that sounds like too obvious an explanation so that has probably been ruled out somehow.

Anyway, just to be explicit about it again: what I'm talking about relates to thermodynamics and entropy. Your question is about spacetime and the laws of physics, which is something else.

[0] https://en.wikipedia.org/wiki/Boltzmann_brain

psc
0 replies
2d14h

I've had the same question and I don't have the answer, but what you're asking about might be referred to as the cosmological principle

https://en.wikipedia.org/wiki/Cosmological_principle

Seems like so far our observations are that the universe is pretty homogeneous. Cosmological principle being the default position, my best understanding is that it's just a lack of evidence otherwise:

> The End of Greatness is an observational scale discovered at roughly 100 Mpc (roughly 300 million light-years) where the lumpiness seen in the large-scale structure of the universe is homogenized and isotropized in accordance with the Cosmological Principle. At this scale, no pseudo-random fractalness is apparent.

https://en.wikipedia.org/wiki/Observable_universe#End_of_Gre...

colechristensen
0 replies
2d14h

If there is some reason over here isn’t the same as over there then there will be a mechanism as to why that is the case and you will want an explanation.

You want your physics to apply everywhere and that’s why you start with the assumption that the same laws, constants, etc apply everywhere.

There’s no “ope stuff is just different over there” in physics, you look for those inconsistencies and then you try to explain them.

Root_Denied
0 replies
2d14h

Small nitpick on choice of words here - "homogeneous" probably isn't the right word to use; instead, the correct term is probably "invariant".

The universe appears homogeneous at various scales, meaning it's self-same within some amount of tolerance across itself. The distribution is homogeneous. That's different from invariance, the assumption that a variable (in this case an intrinsic property such as mass) remains unchanged over time for a given subject of study, such as a particle.

Even if a given property can vary from subject to subject (even wildly), the sum of all subjects could still appear homogeneous in their distribution.

> why do we assume spacetime to be homogeneous in its nature?

Because its distribution appears to be consistent, at least as far as we can measure. That's not the same thing as objects at the quantum scale being invariant (or variable) in some way that we currently think is the opposite.

> Couldn't then dark matter be explained by some variations on a large scale? Like giant "wrinkles" in spacetime?

Considering the scales at which these fluctuations would have to occur for us to have not already measured them, I'm doubtful they could build up to anything like that, even in aggregate and random patterns.

> They would then behave as pseudo black holes, or more like "black trenches", that lead to the structures we see today in the universe. The structures seem to have more mass than what we can actually see, but are in fact just placed in locations that behave differently gravity-wise?

I don't think I'm really qualified to answer on how possible this interpretation is, but with the confirmation of gravitational waves a few years back we confirmed another piece of Einstein's theories, and we seem to have a good grasp on the observational effects of gravity at least. There was also a recent announcement about mapping the Gravitational Wave Background [0] using pulsars spread across the galaxy that was super cool; you might be interested in that, as I'd say it's related to this question.

The real answer is that theories like this require quite a bit of work to manipulate existing and complex math, and that you'd need someone who can translate an idea like yours into a mathematical model that fits with our observational data. That's an exceedingly difficult thing to do, as evidenced by the last 70ish years of physics.

[0]: https://www.space.com/gravitational-wave-background-universe...

maho
6 replies
2d10h

After skimming the second paper, I still don't understand how precision mass measurements come into play here. They mention Cavendish-type measurements, but those are used for measuring the gravitational constant. Of course, you can turn the formula around, plug an unknown mass into the apparatus and then call it a mass measurement, but it's going to be a very imprecise measurement. A Penning trap can give you 11 to 12 significant digits -- a Cavendish-type measurement could give you maybe 5 or so, I think.

Or is it because the Penning trap measures "inertial mass" but they really want a measurement of "gravitational mass"? But wouldn't inertial mass fluctuate the same way?

Timeroot
4 replies
2d8h

(I went to a talk by Oppenheim on this topic a couple weeks ago.) The idea is that gravity, as a force, only operates classically. More precisely: there is a classical state describing the curvature of spacetime, and then a quantum state describing the configuration of particles on that spacetime. But then, that quantum state needs to affect the classical state again (mass bends space), which would usually lead to the classical half becoming quantum and entangled with the other half.

You can keep the classical half (the shape of spacetime) classical, if the effect of the quantum part is partially stochastic. There's a minimum amount of random noise you need for it to be mathematically consistent. So, you set up an experiment where a particle is acting on another via gravity. There's a quantity of noise you should expect to see in the gravitational force.

"Inertial Mass=Gravitational Mass" now only holds on average. The gravitational mass will effectively have a Brownian noise term added in.

a_cardboard_box
2 replies
2d5h

If this hypothesis is true, would it give us a way to distinguish many-worlds vs. wave function collapse? If the many "worlds" are all interacting with a single classical spacetime, we should be able to measure the gravity of other worlds, right? I'm not a physicist, but that sounds a lot like dark matter to me.

layer8
0 replies
2d

There is no implication that the classical spacetime wouldn’t still split into branches, possibly with different variations in the stochastic effect from each other.

a_cardboard_box
0 replies
1d20h

Answering my own question based on Oppenheim's lecture[1]: In his theory, the stochastic interaction between quantum and classical states necessarily causes the wave function to collapse, so it is not compatible with many-worlds. Being able to measure the gravity of other quantum worlds is something predicted by semiclassical gravity, which Oppenheim calls "complete nonsense".

[1] https://www.youtube.com/watch?v=sde7k3jJp5E

Workaccount2
0 replies
2d7h

So if I am understanding this correctly:

Quantum particles can effect change in (curve) spacetime without direct quantum action, if you sprinkle a bit of randomness into the (quantum acting on spacetime) effect?

codethief
0 replies
2d1h

Wouldn't it be enough to measure the (fluctuation of the) total gravitational force ( = gravitational constant times mass) exerted on the second particle, in order to draw conclusions about the nature of the gravitational force at small scales?

zehaeva
5 replies
2d5h

Holy hell, talk about burying the lede here! The last sentence of the whole article is:

> The new theory allows for information to be destroyed, due to a fundamental breakdown in predictability.

Are you kidding me?!

nine_k
4 replies
2d5h

Is there something fundamental that breaks when information is allowed to be destroyed?

mackman
1 replies
2d5h

Wouldn't that prevent time from going in reverse?

nine_k
0 replies
2d4h

Isn't the growth of entropy preventing this already at the macro scale? Isn't it similar to the loss of information, because it's a transition from a less-probable state to a more-probable state?

zehaeva
0 replies
1d6h

I mean, yes, things like The Arrow of Time, Entropy, our entire understanding of black holes, most of quantum mechanics.

Yeah, if information can be destroyed then a lot of physics needs to be redone.

digging
0 replies
2d4h

I mean essentially, quantum physics only works if information can't be destroyed.

throwaway4PP
4 replies
2d8h

Exciting times! Any physicist here aware of how these two papers would impact the applicability or validity of Modified Newtonian Dynamics (MOND)?

Background: my lay understanding of MOND is that it modifies the gravitational interaction parameter over cosmological distances. The force exerted on objects due to gravity is currently accepted to scale with distance in a certain manner (e.g. 1/distance^2), while MOND postulates a different scaling relation (e.g. 1/distance^3). Those are just examples, not actual values. The currently accepted gravitational force scaling is what gives rise to the need for dark matter, and the corresponding lambda cold dark matter (LCDM) theory. Of course, we have not been able to observe dark matter, which is a problem for the theory. That is what has given rise to MOND, amongst other things. There are prominent, esteemed physicists who have recognized many issues with LCDM, some of which are addressed by MOND (https://astro.uni-bonn.de/~pavel/kroupa_SciLogs.html). A rough numerical sketch of the scaling difference follows below.
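
For a feel of how the scaling difference plays out, here is a sketch (my own illustration, using the common deep-MOND limit a = sqrt(a_N * a0) rather than a full interpolation function; the galaxy mass is a round-number assumption):

    import numpy as np

    # Rough sketch: circular orbital speed around a point mass under
    # Newtonian gravity vs. the deep-MOND limit a = sqrt(a_N * a0).
    # a0 is Milgrom's acceleration scale; the crossover between the two
    # regimes is ignored here for simplicity.
    G = 6.674e-11            # m^3 kg^-1 s^-2
    M = 1.989e30 * 1e11      # ~10^11 solar masses, a galaxy-scale mass
    a0 = 1.2e-10             # m/s^2, Milgrom's constant

    r = np.logspace(19, 22, 4)          # radii in metres
    a_newton = G * M / r**2             # Newtonian acceleration
    a_mond = np.sqrt(a_newton * a0)     # deep-MOND regime

    v_newton = np.sqrt(a_newton * r)    # falls off as 1/sqrt(r)
    v_mond = np.sqrt(a_mond * r)        # flat: (G*M*a0)**0.25

    for ri, vn, vm in zip(r, v_newton, v_mond):
        print(f"r = {ri:.1e} m: v_Newton = {vn/1e3:7.1f} km/s, "
              f"v_MOND = {vm/1e3:7.1f} km/s")

The Newtonian speed keeps dropping with radius, while the deep-MOND speed flattens out, which is the behaviour flat galaxy rotation curves motivated.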

previous HN posts with interesting discussions / links re: MOND

https://news.ycombinator.com/item?id=37012052

https://news.ycombinator.com/item?id=33261981

https://news.ycombinator.com/item?id=23982814

zehaeva
1 replies
2d5h

I am pretty sure that MOND has been completely ruled out[0] given some recent evidence. And by more than 5 sigma.

[0]https://bigthink.com/hard-science/dark-matter-alternative-mo...

chorsestudios
0 replies
1d4h

The linked article does not completely rule out MOND. It's a great article, but it even mentions several difficulties that complicate the observations.

Timeroot
1 replies
2d7h

It's of no relevance to MOND, at least none that I can see.

MOND gives a different scaling relation, and therefore contradicts general relativity. Its goal is to explain the effects we associate with dark matter, without the need for dark matter.

General relativity is (we are pretty sure) inconsistent with quantum field theory. String theory tries to fix the issue by replacing the particles in field theory with strings. Oppenheim is trying to fix it by treating general relativity as a classical phenomenon that lives "outside" of quantum field theory.

They're trying to solve different problems. And, Oppenheim's classical gravity picture could be used just as well with MOND instead of standard general relativity, if that's what you wanted.

MOND is getting less popular every year as evidence for dark matter piles up. The Bullet Cluster is a particular instance where we can actually "see" the dark matter flying around, in a way MOND couldn't hope to explain. LIGO has also given us a lot of confidence we have the right theory of gravity, at least up to the quantum scale.

naasking
0 replies
2d5h

    The Bullet Cluster is a particular instance where we can actually "see" the dark matter flying around, in a way MOND couldn't hope to explain

This is not correct. In fact, LCDM can't even explain the Bullet Cluster [1]. The evidence is not so favourable to LCDM over MOND [2] when taken as a whole.

More recent observations of wide binary stars disfavour MOND more strongly, but the classic reasons you cite are not valid ones.

[1] https://arxiv.org/abs/0704.0381

[2] https://arxiv.org/abs/2110.06936

Edit: See for instance what Milgrom said about the Bullet Cluster back in 2006:

http://astroweb.case.edu/ssm/mond/moti_bullet.html

kristov
4 replies
2d10h

I wonder: does this theory explain any currently unexplained phenomenon? It may be testable, but that doesn't guarantee it is correct. There could be another undiscovered theory that also passes these tests. It feels weird to make up a theory and then make up some tests to validate it, without mentioning the actual problem the theory is trying to resolve. It seems physicists just don't like there being two separate models, one for large things and one for small things, but it could be that the universe is just like that, right?

simiones
1 replies
2d9h

    It seems physicists just don't like there being two separate models, one for large things and one for small things, but it could be the universe is just like that, right?

No, it couldn't be just like that. There could exist a common umbrella theory that uses GR for certain objects and QM for others, but it would have to be a new theory that introduces a crucial piece of new information: the nature of the separation between the two.

Right now, all of the laws of QM say that it applies just as much to the motion of the Sun as to the motion of an electron. GR says the same. And yet, GR's predictions for how an electron behaves are clearly wrong. QM's predictions for how the Sun moves are closer to being correct (that is, QM + SR + Newtonian gravity), but it is hard to measure.

A unified theory has to either correct the equations of QM, GR, or both; or to add explicit boundaries to each of them. A theory that would say "an object moves according to the laws of QM if it's 0.00000001g or lower, and according to the laws of GR if it's any higher" (or any other distinguishing characteristic) would be an entirely new theory.

Interestingly, this new theory being proposed in the articles actually does something slightly in this vein: it says "QM applies to the electromagnetic, weak, and strong interactions, and GR applies to the mass/gravity interaction". It then finds a new way to account for the known inconsistencies between the math of each, apparently by getting rid of the assumption that a particle has a fixed mass (and thus fixed space-time curvature in GR).

kristov
0 replies
2d3h

Thank you!

naijaboiler
0 replies
2d10h

Theories like that are never "correct". They just explain things and fit data better than other/previous theories. Correct is not what I would go for; "best we currently have" sounds more like it.

mettamage
0 replies
2d10h

Disclaimer: not a physicist.

It seems to me that one issue with falsification is this: if you're exactly right about how the universe works, it's hard to know that you are exactly right. So I imagine we'd keep putting resources into it (to some extent) to see if anything remains unexplained. If the universe is just like that, we'd have a long way to go to find out.

I do think that physicists are bothered by the idea that current theories can't explain everything they're observing. So whether the universe is or isn't like that, more research is definitely needed.

inasio
2 replies
2d16h

I was really hoping to find concrete details of the experiment they propose to validate or refute their theory. I hope it's something achievable, or at least not ridiculously far from what we can currently measure.

pmontra
0 replies
2d14h

The experiments are described in the second paper, at https://www.nature.com/articles/s41467-023-43348-2

fgoesbrrr
0 replies
2d16h

They mention 20 years in the article.

bawolff
2 replies
2d13h

    and this scale can be determined by another experiment where we test how long we can put a heavy atom in superposition of being in two different locations."

I'm a physics noob, but does this imply the theory helps explain why we usually only see quantum effects on the microscopic scale, since it says something about how heavy objects can't be put in superposition?

codethief
0 replies
1d13h

Yes, at least that's my understanding.

bheadmaster
0 replies
2d13h

I'm a physics noob too, but from what I understand, quantum effects don't happen on the macroscopic scale because of high temperature and a lack of "structure". Every interaction of particles seems to change the state ("collapse the wave function") of the particles, so such a chaotic environment makes things start behaving in a predictable way. Sort of like how balls falling through a Galton board converge to a Gaussian (normal) distribution: https://en.wikipedia.org/wiki/Galton_board

In low enough temperature and regular enough structure, even macroscopic objects can be entangled (or, I assume, show other quantum effects): https://www.scientificamerican.com/article/scientists-supers...
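
As an aside, that Galton-board convergence is easy to reproduce numerically; here's a tiny sketch (my own toy, just summing independent left/right bounces):

    import numpy as np

    # Tiny Galton-board sketch: each ball takes `rows` independent
    # left/right bounces; its final bin is the sum of the bounces.
    # By the central limit theorem the bin counts approach a Gaussian.
    rng = np.random.default_rng(0)
    rows, balls = 12, 100_000
    bins = rng.integers(0, 2, size=(balls, rows)).sum(axis=1)

    counts = np.bincount(bins, minlength=rows + 1)
    for k, c in enumerate(counts):
        print(f"bin {k:2d}: {'#' * (c // 500)}")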

amai
2 replies
2d5h

I believe gravity can be both quantizable and non-quantizable. Gravitons could be like phonons (quantized sound waves) and only exist in the solid state of spacetime. In the fluid state of spacetime, gravitons wouldn't exist (similar to the fact that phonons can only exist in solids, not in fluids).

digging
1 replies
2d3h

What are the solid and fluid states of spacetime?

amai
0 replies
2d1h

One could imagine that at a very low Unruh temperature, spacetime changes phase and becomes "solid". A low Unruh temperature means very low acceleration/gravitation. This might explain dark matter.

Similar things are discussed in the literature under https://en.wikipedia.org/wiki/Superfluid_vacuum_theory and https://penntoday.upenn.edu/news/physicist-theorizes-dark-ma...
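
For a sense of the scales involved, a quick calculation of the standard Unruh temperature T = ħa / (2πck_B) at a few accelerations (my own numbers; the MOND-like scale a0 is included just for comparison):

    import math

    # Unruh temperature T = hbar * a / (2 * pi * c * k_B) for a few
    # accelerations, down to the MOND-like scale a0 ~ 1.2e-10 m/s^2.
    hbar = 1.054571817e-34   # J s
    c = 2.99792458e8         # m/s
    k_B = 1.380649e-23       # J/K

    for a in (9.81, 1.0, 1.2e-10):
        T = hbar * a / (2 * math.pi * c * k_B)
        print(f"a = {a:.2e} m/s^2  ->  T_Unruh = {T:.2e} K")

Even at everyday accelerations the Unruh temperature is absurdly tiny, which is why any such phase transition would have to sit at extraordinarily low temperatures.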

qwertywert_
1 replies
2d6h

So to unify the theories they are actually keeping them separate?

codethief
0 replies
1d13h

In a way, yes, but not exactly. As far as I understand the idea is that gravity induces the collapse in quantum mechanics, which would be a huge conceptual change (and IMO a very satisfying solution to the measurement problem).

Moreover, according to the article this collapse does not preserve the key property of quantum mechanical time evolution: unitarity (aka "preservation of past information" aka "predictability of the future").

    The new theory allows for information to be destroyed, due to a fundamental breakdown in predictability.

jeisc
1 replies
2d14h

Our present-day understanding of the universe is rudimentary at best, and will be rocked by each new device invented to reveal its nature to us. When we explode a nuclear bomb here, how far away in the universe would it be visible or detectable? If there were a highly developed civilization, wouldn't they be listening for this background noise, to detect when a civilization had reached the nuclear point and used it to explode a bomb?

tsimionescu
0 replies
2d13h

Nuclear explosions on Earth, while very powerful, are not even a blip compared to the random noise in the amount of energy that the Sun gives out. You would have to measure the Earth itself very, very carefully to notice them at all.
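
A back-of-the-envelope comparison (my rough numbers: the largest-ever test, Tsar Bomba, released about 2.1e17 J, and the Sun radiates about 3.8e26 W):

    # Rough comparison: the largest nuclear test vs. the Sun's output.
    bomb_energy = 2.1e17       # J, ~50 megatons (Tsar Bomba)
    solar_luminosity = 3.8e26  # W, i.e. J/s

    t = bomb_energy / solar_luminosity
    # The Sun radiates one bomb's worth of energy in under a nanosecond,
    # so a distant observer would see the bomb vanish into solar noise.
    print(f"Sun emits one bomb-equivalent every {t:.2e} s")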

defrost
1 replies
2d16h

Jonathan Oppenheim:

    If you'd like to attend a public lecture (virtual or in London) about the postquantum theory of classical gravity, or contribute a donation to the group's research program, click "Donate" to add your email address to our list.
https://www.ucl.ac.uk/oppenheim/

https://www.ucl.ac.uk/oppenheim/research.html

See Also:

Gravitationally induced decoherence vs space-time diffusion: testing the quantum nature of gravity (March 2022) https://arxiv.org/pdf/2203.01982.pdf

Is it time to rethink quantum gravity? (March 2023) https://arxiv.org/pdf/2310.12221.pdf

Certhas
0 replies
2d12h

This is all surprisingly substantive.

That said, there are many results that connect gravitational and thermodynamic phenomena. It would be good to understand how these look from the perspective of this theory.

Edit: just started reading, so if this is explained somewhere would love to get a pointer.

curation
1 replies
2d7h

What if universality is a name for a kind of antagonism? What comes first is not a wholeness that is disturbed, but difference itself. Reality itself is unfinished and built in parallax. This new theory alongside current metaphysical theories of everything is inspiring.

VHRanger
0 replies
2d7h

Why is this comment at the top of the discussion thread?

We're talking about the validity of the mathematical theory, not new-age woo-woo metaphysical theories.

I_am_tiberius
1 replies
2d15h

So in summary, if right, it means that spacetime is not as predictable as we thought.

quickthrower2
0 replies
2d12h

Almost as if it is a machine learning simulation :-)

8bitsrule
1 replies
2d14h

A wonderful, bold, and colorful idea, already thought through carefully. And 5000:1 odds! It'll be even more interesting to see how many dozens of minds find further testable consequences. Is it too much to hope that some untestable ideas will deservedly die?

hallway_monitor
0 replies
2d14h

That which can be destroyed by the truth should be.

stjohnswarts
0 replies
2d7h

So can anyone explain why this will possibly take 20 years to prove/disprove? We don't currently have the level of accuracy necessary to test it, but might if we work through the issues over the next 20 years or so?

sheepscreek
0 replies
1d

Did anyone else find the AI generated "art" more distracting than helpful?

netfortius
0 replies
2d13h

I am a huge fan of Carlo Rovelli, and read all his books to date, so I am siding with him on this one.

jug
0 replies
2d13h

Honestly at this point, who knows. Let's find out. This is even a testable theory so that already puts it ahead of a host of others.

berniedurfee
0 replies
1d23h

Were the illustrations AI-generated, I wonder? They look very interesting and abstract, but still made to fit the narrative.

FrustratedMonky
0 replies
2d9h

Any physicist here that can tell if this is the big breakthrough to a theory of everything?

Or just hype? Sorry, gun-shy about believing big breakthroughs this year.

Quantum Gravity and Testable? Seems huge on the surface.

If it can be tested, then this is already worlds ahead of string theory.