
Nature retracts paper that claimed adult stem cells could become any type of cell

BenFranklin100
61 replies
22h13m

I have long maintained that the NIH should set aside 25% of its budget to fund spot-checking of the research output of the remaining 75%. Even if this funding is only sufficient to review a handful of papers each year, it stands to have a transformative effect on the level of hype and p-hacking in many fields, and could nearly eliminate the rare cases of actual fraud. It would also improve the overall quality and reliability of the literature.

faeriechangling
13 replies
21h57m

It could also stand to go out of its way to give more prestige to people who manage to get influential papers retracted.

HPsquared
6 replies
20h55m

Unfortunately there isn't a neat mechanism like short-selling.

supriyo-biswas
5 replies
16h40m

Income tax departments in my country give a small reward (3-5% of the amount in question) if you report tax fraud.

Maybe we could have something like that, where a vigilance department receives a small amount of money paid out of the penalty imposed on the researcher for bad/fraudulent research.

thechao
2 replies
14h44m

See, that’s weird to me? Wouldn’t the best reward be more like 99 or 100%? The gov’t’s already out the money; the incentive is to prevent tax fraud.

supriyo-biswas
1 replies
14h25m

The tax department is, in fact, not "out the money" in that sense, and tries to recover the money first, with some penalties added.

They're incredibly empowered to conduct raids and seize property and other belongings, if it comes down to that and the sum involved is large enough.

NullPrefix
0 replies
12h57m

> They're incredibly empowered to conduct raids and seize property and other belongings, if it comes down to that and the sum involved is large enough.

But only if that sum involved is not large enough for the owner to afford proper defense lawyers.

t0bia_s
1 replies
12h46m

Denouncing civil disobedience is a typical method for sustaining a totalitarian regime. Where are you from?

strangattractor
3 replies
20h51m

Unfortunately 'prestige' is bestowed by the author's peers - police internal affairs detectives don't have much prestige within the police department.

tomrod
2 replies
17h4m

That's not an accurate comparison, in my experience. Police operate as a thin blue line. Academics operate as jackals and hyenas in the Serengeti, and many would eviscerate literally anyone if they could get another grant. Best admired from afar.

supriyo-biswas
0 replies
16h37m

It doesn't change the fact that internal vigilance departments are often strangled: their funding gets cut (often on the grounds that they haven't done much), and roadblocks are put up that keep information from flowing to them.

The second one is a little less relevant for published research, although it could take a different form if implemented in academia.

bee_rider
0 replies
15h56m

They should give these grants to near-retirement academics. Here’s 100k, hire a handful of students and settle some grudges.

spamizbad
1 replies
21h14m

This is already happening in academia.

passwordoops
0 replies
19h20m

You're going to have to provide evidence, as in actual bona fide promotions handed out, or tenure committees expressly spelling out and counting this as a factor in their decisions.

epistasis
13 replies
20h17m

Do you really think that the current situation poses a >25% cost on scientific productivity? Do you think your system would be able to recapture that?

That assessment does not match up with what any practicing scientist thinks is even within the realm of possibility for harm to science.

Reading these conversations is like listening to C-suite execs at big companies talk about what employees are getting away with via work at home policies.

BenFranklin100
4 replies
19h47m

We can quibble over the number; maybe it is as low as 10%. The cost to reproduce a study will be significantly higher than to produce it in the first place, due to differences in expertise, equipment, and so forth. I estimate at least a 10X factor.

And I am intimately familiar with what researchers 'get away with' while 'working at home'. As a researcher who tried to reproduce several research papers only to discover the original scientists were wildly exaggerating their claims or cleverly disguising fundamental shortcomings, I can assure you the cost is quite high to the scientific community, well in excess of 25% of the annual $48B NIH budget.

I hold a healthy disdain for my fellow scientists. The only way to get them to play by the rules in my view is to have a threat of a research audit hanging over them.

nextos
2 replies
19h27m

This is the key part: even legitimate findings tend to be exaggerated. A great step forward would be to distribute grants to reproduce new key findings.

For example, the NIH could identify the top findings from 2024 that need to be reproduced, and seek expressions of interest to reproduce these and/or other important findings identified by applicants. Perhaps it could also reach an agreement with top journals to publish replications as a new article type, linked to the original article, just as they do with comments and news & views.

It would instantly make those publishing super edgy findings much more careful, just in case, and things would become more efficient.

BenFranklin100
1 replies
19h12m

I love the idea of putting each year's top papers under the microscope. Present-day hucksters publishing in Nature will think thrice about making unsupported claims. Further, the automatic reproduction of the research will serve as cement for the fundamental building blocks of science.

nextos
0 replies
16h38m

Yes, exactly. It'd act as a deterrent for anyone considering exaggerating or misrepresenting findings. It'd also lead to quicker dissemination of true findings.

Currently, academic publications are in a bit of a market-for-lemons situation [1], where the sellers (authors) have much more information than the buyers (readers, funders).

Time to change that.

[1] https://en.wikipedia.org/wiki/The_Market_for_Lemons

hoseja
0 replies
11h1m

> The cost to reproduce a study will be significantly higher than to produce it in the first place

wdym? You're on the happy path when reproducing; the cost of the original study includes all the failed attempts.

j-wags
3 replies
19h54m

> Do you really think that the current situation poses a >25% cost on scientific productivity? Do you think your system would be able to recapture that?

Yes and yes. I'm 6 years past defending my PhD and I have low confidence in being able to reproduce results from papers in my field (computational biophysics).

I was recently at an industry-heavy biophysics conference that ran a speed dating event, and my conversation starter was "what fraction of papers in our field do you trust?". I probably talked to ~20 people, with a median response of ~25%.

Even a tiny amount invested in reproduction studies and accountability would go a long way. Most papers in _computational_ biophysics still don't publish usable code and data.

BenFranklin100
2 replies
19h39m

It’s bad enough that too often I trust companies over academics nowadays. At the end of the day, a company has to answer to the customer. If what they offer doesn’t actually work, they go out of business. Academics often don’t have to answer to anyone. Just be smart and make the paper look good while being careful not to do something that could get you nailed for outright fraud.

mangamadaiyan
0 replies
15h46m

Actually, being answerable to the customer is not universally true.

Typically, most companies are answerable to the investors and shareholders. Customers usually don't figure in the equation.

SJC_Hacker
0 replies
16h37m

> If what they offer doesn't actually work, they go out of business

If the founders/early investors have already made sure they aren't the ones left without a chair when the music stops, it doesn't matter.

See: Theranos.

passwordoops
2 replies
19h15m

In the 2000s a friend worked at a top 10 pharma company with the fun job of reproducing results from genetics papers... 25% were reproducible. The rest were not, even after communicating with the authors to get conditions as close as possible.

So, yes, the current situation can safely be assumed to pose at least a 25% cost on science. And "productivity" is the wrong term here. The harm of fraudulent/bad science runs much deeper than productivity

epistasis
1 replies
15h16m

I don't know about your friend, but there was an opinion piece, not a scientific study, that reported about that number of "landmark" cancer research papers being "not reproducible" in industry hands.

However, it did not include methods; it didn't say how they were not reproducible (as in, which figure or effect).

The closest thing to a definition of "reproducible" was a footnote on a table defining it as "sufficient to drive a drug development program," which is not at all the same thing as reproducible.

Which is to say, I'm skeptical of these anecdotes.

passwordoops
0 replies
8h33m

Here is an overview of the three papers of the "Reproducibility Project: Cancer Biology" (1). The detailed results are in references 2-3.

And if you've only ever encountered a single opinion piece on the reproducibility problem in biology/pre-clinical research, then I highly recommend you do a targeted keyword search.

(1) https://elifesciences.org/articles/75830

lastiteration
0 replies
20h13m

Most stuff in papers can't be replicated so you can't really trust anything and are forced to see what actually works and is worth building upon. This is very expensive both in time and money.

epgui
9 replies
22h10m

I think if we devoted only 1% to this it would be a huge improvement.

ta988
8 replies
21h56m

Agreed, once you know what to look for and how to reproduce it. But what do you do if you can't reproduce? That may mean the original paper doesn't disclose everything (maliciously or not, but malicious is REALLY frequent) or that the authors missed an important factor (sometimes doing a reaction in a slightly scratched glass container will change the outcome entirely).

To come back to the malicious part, for many researchers, not publishing the exact way they do things is part of how they protect themselves from people reproducing their work. Some do it for money (they want to start a business from that research), others to avoid competition, others because they believe they own the publicly funded research...

jltsiren
7 replies
21h18m

And sometimes you fail to reproduce something because you failed to do it properly. I don't know how often that happens in the field or in the lab, but it's extremely common on the computational side.

Very often, the thing you are trying to reproduce isn't exactly the same as what was published. You have to adapt the instructions to your specific case, which can easily go wrong. Or maybe you made a mistake in following the instructions. Or maybe you mixed up the instructions for two different cases, because you didn't fully understand the subtleties of the topic. Or maybe you made a mistake in translating the provided scripts to your institute's computational environment.

subroutine
6 replies
19h52m

Part of the problem is that methods sections in contemporary journals do not provide enough information for exact replication, and in the most egregious cases let authors write stuff like "cultured cells prepared according to prevailing standards".

pas
0 replies
16h15m

From the site: "One of the problems was room temperature was too low elsewhere. Tumors don't grow when mice are chilly because the vessels vasoconstrict; it cuts off the blood supply and drugs don't circulate."

That means there are important validation/verification steps left out of the whole process. Sure, it's impossible to give every detail, and naturally there's always a time constraint, but if there's a hypothesis of action it needs to be verified. (Again easier said than done.)

epgui
1 replies
16h28m

In biochem there are sometimes a lot of skills involved, to the point where it's almost magical, intangible qualities that make an experiment succeed. Especially for more manual work.

For my master's research I spent 6 years refining a super-niche technique until I was able to reproduce my own work.

ta988
0 replies
4h25m

Did you end up publishing the details on how you did it?

ta988
0 replies
4h26m

No. If you want, you can attach virtually unlimited supplementary information to your publication. It is really about a mix of doing as little as possible and hiding details so competitors can't do it.

blincoln
0 replies
3h6m

That's awful. In any field, a writeup of a discovery should include enough information for a peer of the author to reproduce the results. Ideally, it would include enough detail for an enthusiastic amateur to reproduce the results.

This is how we write pen testing reports at work. A pen testing report written that way ~20 years ago is one of the things that got me interested in pen testing. But I apply it to all of my technical writing.

If lack of reproducibility in science is as big a problem as it seems to be, maybe journals should impose a randomized "buddy system" where getting a paper published is conditional on agreeing to repeat at least 2-3 other experiments performed by peers. Have 3 peer researchers/labs repeat the work. If at least 2/3 are successful, publish the paper. If not, the original researchers can revise their instructions once or twice to account for things the peers did differently because the original instructions didn't discuss them.

Hopefully needing to depend on the other organizations for future peer review would be sufficient to keep everyone honest, but maybe throw in a secret "we know this is reproducible" and a secret "we know this is not reproducible" set of instructions every once in a while, and ban organizations from the journal if they fail more than a few of those.

For corner cases that require something truly impractical for even a peer to reproduce independently ("our equipment included the Large Hadron Collider and a neutral particle beam cannon in geosynchronous orbit"), the researchers that want to publish can supervise the peers as the peers reproduce the work using the original equipment.

This would obviously be costly up front, but I think it would be less costly in the big picture than thousands of scientists basing decades of research on a single inaccurate paper.

I also think that forcing different teams to work together might help build a collaborative culture instead of the hostile one that's described elsewhere in this discussion, but maybe I'm overly optimistic.

Helmut10001
7 replies
12h47m

But who could do the spot-checking? In many fields, there is only one team or a few authors capable of understanding, reproducing, or verifying certain research. In a two-party situation, if you pay the opposing team, they have incentives to exaggerate doubts. Often research papers are written in such a confusing way that it is impossible or very expensive to reproduce or verify the results (e.g. repeat a year-long study).

I think it would be better if there were incentives that rewarded quality over quantity. At the moment, my university always says that quality is of the utmost importance, but then threatens to terminate my job if I cannot publish x number of papers in a given year.

logicchains
3 replies
12h0m

> ...there is only one team or a few authors capable of understanding, reproducing, or verifying certain research

If the research isn't documented well enough to reproduce/verify, then the paper shouldn't pass review in the first place. The NIH could make it a condition of funding that papers are detailed enough to be reproducible.

Helmut10001
2 replies
11h19m

My sense is that this would prevent 98% of papers in the social sciences from ever being published. How do you decide whether to renew a researcher's position when the average time to get a single paper through peer review is 5+ years (my average is 2-3 contract renewals per year)? This is not compatible with today's pace.

I am not saying that all this is a desirable situation. It is very unfortunate, and I wish there was an easy solution. My first research paper took 5 major revisions and 6 years to get through peer review. All the reviewers criticized was the wording and my unwillingness to conform to the accepted views in that particular community; I almost lost my job over this, but once the paper was accepted, it won several awards. All of this leads me to believe that peer review is very subjective and prone to error, and I don't have a solution for that.

nimish
0 replies
1h4m

The goal of science isn't to publish papers, it's to investigate hypotheses. If 98% of social science papers cannot be replicated then that's an indictment of social science, not the requirement to replicate.

I suspect a lot of "hard" science papers would be caught as well so it's a necessary quality control method

concordDance
0 replies
11h7m

> My sense is that this would prevent 98% of papers in the social sciences from ever being published.

Given the quality of the social science papers I've read this seems like it would be a good thing IFF the 98% cut were the bottom 98%.

hoseja
0 replies
11h11m

Public research that nobody else can do should be considered utterly worthless. Publication should be predicated on independent replication. Papers are supposed to be instructions for replication! If it's impossible to replicate from the paper, the paper fails its primary purpose IMO.

devoutsalsa
0 replies
8h50m

Replication is one of the central issues in any empirical science. To confirm results or hypotheses by a repetition procedure is at the basis of any scientific conception. A replication experiment to demonstrate that the same findings can be obtained in any other place by any other researcher is conceived as an operationalization of objectivity. It is the proof that the experiment reflects knowledge that can be separated from the specific circumstances (such as time, place, or persons) under which it was gained.

https://en.wikipedia.org/wiki/Replication_crisis

JPLeRouzic
0 replies
12h19m

> Often research papers are written in such a confusing way that it is impossible or very expensive to reproduce or verify the results (e.g. repeat a year-long study)

From an economic perspective, is this a very desirable situation?

moneywoes
5 replies
18h32m

what’s the incentive for them?

ptero
2 replies
17h59m

The NIH (like many other grant-giving sources) is a government agency; they will do this if told to by the government.

SJC_Hacker
1 replies
16h34m

The issue is the incentive for the PI (professor).

"I replicated X's work" or even "I was unable to replicate X's work" isn't exactly a career maker

Although I guess you could get the few staff scientists at the NIH to handle it.

Some older professors who already have tenure might be willing to help out, if they don't already have much else on their plate

hoseja
0 replies
10h52m

There are oodles of people doing rote labwork/innovation-less but nonetheless skilled scientific-like work. How are they motivated? Academia's perverse reputational economy is the aberration, not the norm.

AlexCoventry
0 replies
18h21m

Perhaps it could be mandated by legislation or executive order.

ThrowawayTestr
4 replies
21h49m

If I was rich I'd start a journal solely dedicated to publishing papers trying to recreate other papers.

nextos
0 replies
16h32m

As I said in another comment, I think it'd be much easier to convince Nature to create a new article type, Replication Study.

Nature, like other top journals, does not publish results that merely report already-published findings. Replication Studies would be the exception. These would provide independent validation of a recent prominent article.

People would think twice before misrepresenting findings in top journals, and good ideas would spread out more quickly.

317070
1 replies
21h23m

Wait, you don't need to be rich. With pay-to-publish (also known as open access), it is an absolute goldmine.

People generally don't want to do the work of editing and publishing, or lack the academic knowhow to do it. But if that is not an issue, I don't think money will be an issue either.

caddemon
0 replies
20h33m

There isn't enough incentive currently to publish reproductions; starting a new journal using the same general publishing model isn't going to change that. But with money to burn you could add some incentive, or you could at least do things to improve publication quality, like actually paying for good peer reviewers.

staunton
1 replies
10h54m

I think part of the reason why nobody seems serious about replication is that it's actually mostly not worth it.

Most research is useless and pointless, with only a few exceptions. We don't have a way to figure out which topics are the exceptions, so someone has to do the research. It's not worth it (or rather: extremely high financial risk) for companies or individuals to do it, so governments have to fund it.

At this point, the current amount of fraud does not justify replicating even 1% of studies. We would get less scientific advancement in total. The current situation likely does justify some small investments in shaping incentives.

sgt101
0 replies
8h44m

>It's not worth it (or rather: extremely high financial risk) for companies or individuals to do it

The problem is that it's hard to reliably capture value from research. A good example is LLMs. If OpenAI, Google, Meta and AWS had been able to build a wall around GPT-3.5 Turbo and above models, then I expect they could have captured all the value of the research effort. As it is, I don't think that is/will be the case; it's almost too easy to replicate, as Mistral have shown. Note: I'm not saying it's trivial or something, but if you spend a few $million on it you can get close enough, and then spending a few $million more will get you all the way. Also, I am not talking about building a frontier model today (which requires $100 million or so and some difficult skills/organisation) but rather a model in, say, 3 years' time with the frontier performance of today's models.

ptero
0 replies
17h54m

Yes, and also require publishing the full methodology for any article funded by public money. This should make verification easier and cheaper.

Not necessarily published as part of the paper; a link to a separate document is fine.

_DeadFred_
0 replies
20h33m

Better yet, have bounties, the way the tech world has bug bounties.

gershy
35 replies
20h51m

It would be so interesting if we came to a consensus that "cascading deletes" should apply to research papers. If a paper is retracted 20+ years later and 4,500 papers cite it, those citing papers should be retracted too, non-negotiably, in cascading fashion. Perhaps such a practice could lead to better research by escalating the consequences of fraud.
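
Concretely, the "cascading deletes" here are just a breadth-first walk of the citation graph. A minimal sketch in Python, where the graph and paper names are invented for illustration (a real system would pull the cited-by map from a citation database such as Crossref or OpenAlex):

    from collections import deque

    # Hypothetical map from a paper to the papers that cite it.
    cited_by = {
        "retracted-paper": ["A", "B"],
        "A": ["C", "D"],
        "B": [],
        "C": [],
        "D": ["E"],
        "E": [],
    }

    def cascade_retract(root):
        """Retract the root, then every paper that cites a retracted
        paper, transitively and 'non-negotiably'."""
        retracted = {root}
        queue = deque([root])
        while queue:
            paper = queue.popleft()
            for citing in cited_by.get(paper, []):
                if citing not in retracted:
                    retracted.add(citing)
                    queue.append(citing)
        return retracted

    print(sorted(cascade_retract("retracted-paper")))
    # ['A', 'B', 'C', 'D', 'E', 'retracted-paper']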

epistasis
6 replies
20h36m

This is a completely bonkers idea that would accomplish nothing positive and would mostly erase tons of good science.

The idea of punishing third parties for a citation is weird. If I quote somebody who lied, I'm at fault? Seriously?

EnigmaFlare
3 replies
20h11m

You might not be at fault but your work depends on that wrong work, so your work is probably wrong too and readers should be aware of that. If it doesn't depend on it, then don't cite it! People cite the most ridiculous crap, especially in introductions listing common sense background knowledge with a random citation for every fact. That stuff doesn't really affect the paper so it could just be couched in one big "in my opinion" instead.

epistasis
1 replies
19h46m

> but your work depends on that wrong work, so your work is probably wrong

No, absolutely not, that's pure fallacy.

There might be some small subset of citations that work like a mathematical proof, but how many of these 4500 citations could you find that operate that way?

nyssos
0 replies
12h35m

> There might be some small subset of citations that work like a mathematical proof

And even then, you're just weakening the result, not throwing it out entirely: instead of a proof of X that cites a proof of Y, you have a proof that Y implies X.

mkl
0 replies
19h36m

Academic papers have to cite related research to situate their contribution, even if they're not directly building on that research. When researchers can't reproduce a paper's results, they have to cite that paper when reporting that, or no one will know what they're talking about and the bad paper cannot be refuted. The whole system also needs many compare and contrast citations that aren't built on directly or at all, so you know what a paper is doing and not doing.

pessimizer
1 replies
20h6m

The priority isn't about punishing you, or about your feelings or career at all. It's about the science.

If you cite something that turns out to be garbage, I'd imagine the procedure would be to remove the citation and to remove anything in the paper that depends on it, and to resubmit. If your paper falls apart without it, then it should be binned.

epistasis
0 replies
14h59m

I can't think of a single paper that would fall apart due to any of its cited papers being retracted. What field of science operates that way?

Science papers are novel contributions of data, and sometimes of purely computational methods. A data paper will stand on its own. A method paper will usually (or at least should) operate across multiple data sets to compare performance, or if it only uses a single dataset, it's going to be a very well-tested dataset.

If MNIST turned out to be retracted, would we have to remove all the papers that used it over the years? That's about as deep as a citation can get into being fundamental and integral to a paper. And even in that case, nearly any paper operating on that dataset will also be using other datasets. Sure, ignore a paper that only evaluates on a single retracted dataset, but why even bother retracting, as the paper would be ignored anyway; what significant paper would use a single benchmark?

But 99.9% of citations have less bearing on a paper than being a fundamental dataset used to evaluate the claims in the paper. And those citations are inherently well-tested work product already.

So if people actually care about science, they would never even propose such a scheme. They would bother to at least understand what a citation was first.

arp242
5 replies
20h38m

The question is how many of the citations are actually in support? As in: some might be citations in the form of "Donald Duck's research on coin polishing[1] is not considered due to the controversial nature". Or even "examples of controversial papers on coin polishing include the work of Donald Duck[1]".

I don't think "number of citations" typically makes this distinction?

Also for some papers the citation doesn't really matter, and you can exclude the entire thing without really affecting the paper.

Regardless, this seems like a nice idea on the face of it, but practically I foresee a lot of potential problems if done "non-negotiably".

EnigmaFlare
2 replies
20h28m

I love the idea. It would also dampen the tendency to over-cite, and disincentivize citation rings. But mainly encourage researchers to actually evaluate the papers they're citing instead of just cherry picking whatever random crap they can find to support their idea.

Maybe negative citations could be categorized separately by the authors, not count towards the cited paper's citation count, and be ignored for cascading retractions.

If the citation doesn't materially affect the paper, the author can re-publish it with that removed.

arp242
0 replies
20h9m

> If the citation doesn't materially affect the paper, the author can re-publish it with that removed.

This paper is 22 years old. Some authors have retired. Some are dead.

I really think that, at the very least, it needs a quick sniff test. That is boring, uninteresting work, and with 4,500 citations it will take some effort, but that's why we pay the journals big bucks. Otherwise it's just going to be the academic variant of the Scunthorpe problem.

And/or do something more fine-grained than a binary retraction, such as adding in a clear warning that a citation was retracted and telling readers to double-check that citation specifically.

QuesnayJr
0 replies
11h47m

If you are cherry-picking cites that agree with you, that is a much bigger scandal than you citing a paper that ended up being retracted 22 years later. The point of citations is to cite the relevant literature, pro and con.

mcmoor
1 replies
17h47m

I guess those kinds of citations should be put in a different category that doesn't increase the citation count of the referenced paper, and in turn its prestige. These kinds of citations shouldn't do that anyway.

So now if you want to cite some paper you have to decide which papers you'd live and die with, and consequently your paper's prestige will depend on how many other papers want to live and die with yours.

schoen
0 replies
17h43m

I guess you can have something like a nofollow attribute

https://en.wikipedia.org/wiki/Nofollow

although the incentives will be more confusing.

There's an argument to be made that citing something to disagree with it should increase its prestige but not its credibility (to the extent that those can be separated): you're agreeing that it's important.

neilv
4 replies
20h32m

> If a paper is retracted 20+ years later and 4,500 papers cite it, those citing papers should be retracted too, non-negotiably, in cascading fashion.

Imagine you're reading a research paper, and each citation of a retracted paper has a bright red indicator.

Cites of papers that cite retracted papers get orange. Higher degrees of separation might get Yellow.

Would that, plus recalculating the citation graph points system, implement the "cascading deletes" you had in mind?

It could be a trivial feature of hypertext, like we arguably should be using already. (Or one could even kludge it into viewers for the anachronistic PDF.)
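
A rough sketch of how those colors could be computed, assuming each paper's outgoing reference list is available (the graph and names are invented for illustration):

    from collections import deque

    # Hypothetical map from a paper to the papers it cites.
    cites = {
        "P1": ["retracted-X"],  # cites a retracted paper directly
        "P2": ["P1"],           # one step removed
        "P3": ["P2", "P4"],     # two steps removed
        "P4": [],               # no path to a retracted paper
    }
    RETRACTED = {"retracted-X"}
    COLORS = {1: "red", 2: "orange", 3: "yellow"}

    def taint_distance(paper):
        """Minimum citation hops from `paper` to any retracted paper,
        found breadth-first; None if no retracted paper is reachable."""
        queue = deque([(paper, 0)])
        seen = {paper}
        while queue:
            node, dist = queue.popleft()
            for ref in cites.get(node, []):
                if ref in RETRACTED:
                    return dist + 1
                if ref not in seen:
                    seen.add(ref)
                    queue.append((ref, dist + 1))
        return None

    for p in ["P1", "P2", "P3", "P4"]:
        d = taint_distance(p)
        print(p, "clean" if d is None else COLORS.get(d, "pale yellow"))
    # P1 red, P2 orange, P3 yellow, P4 clean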

hex4def6
1 replies
19h38m

Riffing on this,

I wonder if you could assign a citation tree score to each first-level citation.

For example, I cite papers A,B,C,D. Paper A cites papers 1,2,3,4. Paper 1 cites a retracted paper, plus 3 good ones.

We could say "Paper 1" was 0.75, or 75% 'truthy'. "Paper A" would be 3 x 1.0 + 1 x 0.75 = 3.75, so 3.75/4 = 93.75% truthy, and so on.

Basically, the deeper in the tree that the retracted paper is, the less impact it propagates forth.

Maybe you could multiply each citation by its impact factor at the top-level paper.

At the top level, you'd see:

Paper A = 93.75% truthy, impact factor 100 -> 93.75 / 100 pts

Paper B = 100% truthy, IPF 10 -> 10/10 pts

Paper C = 3/4 pts

Paper D = 1/1 pts

Total = 107.75 / 115 pts ≈ 94% truthy citation list

If a paper has an outsized impact factor, it gets weighted more heavily, since presumably the community has put more stock in it.
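
A small sketch of that recursion, with an invented citation graph and impact factors just to make the arithmetic concrete:

    # A paper's 'truthy' score is the mean score of the papers it cites;
    # retracted papers score 0, papers with no listed citations score 1.
    cites = {
        "paper-1": ["retracted", "good-a", "good-b", "good-c"],
        "paper-A": ["paper-1", "good-d", "good-e", "good-f"],
    }
    RETRACTED = {"retracted"}

    def truthy(paper, seen=frozenset()):
        if paper in RETRACTED:
            return 0.0
        refs = cites.get(paper, [])
        if not refs or paper in seen:  # leaf paper, or cycle guard
            return 1.0
        seen = seen | {paper}
        return sum(truthy(r, seen) for r in refs) / len(refs)

    print(truthy("paper-1"))  # 0.75
    print(truthy("paper-A"))  # (0.75 + 3 * 1.0) / 4 = 0.9375

    # Top level: weight each first-level citation by its impact factor,
    # so the community's trust in a paper amplifies its influence.
    impact = {"paper-A": 100, "good-d": 10}
    score = sum(truthy(p) * f for p, f in impact.items()) / sum(impact.values())
    print(round(100 * score, 1))  # 94.3, the weighted 'truthy' percentage

The deeper in the tree a retraction sits, the more the averaging dilutes it, which matches the intuition above.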

aeronaut80
0 replies
16h30m

Thus incentivizing authors to add citations to established papers for no reason other than to increase their own trust score. Which already happens to a degree but this would magnify that tenfold.

armchairhacker
1 replies
20h7m

That would be overwhelming and coarse. You wouldn't know if an orange or yellow paper actually relies on the retracted citations or just mentions them in passing, unless you dig through the paper to figure this out yourself, and most people won't do that.

I think a better method would be for someone to look over each paper that cites a retracted paper, see which parts of it depend on the retracted data, and cut and/or modify those parts (perhaps highlight in red) to show they were invalidated. Then if there’s a lot of or particularly important cut or modified parts, do this for the papers that cite the modified paper, and so on.

This may also be tedious. But you can have people who aren't the original authors do it (ideally people who like to look for retracted data), and you can pay them full-time for it. Then the researchers who work full-time reading papers and writing new ones can dedicate much less of their time to questioning the legitimacy of what they read and amending what they've written long ago.

neilv
0 replies
19h37m

I don't know which way would be better, since I don't know the subtleties of citations in different fields. I'll just note that automatically applying this modest taint to papers that cite retracted papers is some incentive for the person to be discerning in what they cite.

Of course, some papers pretty much have to be cited, because they're obviously very relevant, and you just have to risk an annoying red mark appearing in your paper if that mandatory citation is ever retracted.

But citations that are more discretionary or political, in some subfields (e.g., you know someone from that PI's lab is going to be a reviewer), if you think their pettiness might be matched by the sloppiness/sketchiness of their work, then maybe you don't give them that citation, after all.

If this means everyone in a field has incentive for citations to become lower-risk for this embarrassing taint, then maybe that field starts taking misconduct and reviewing more seriously.

CoastalCoder
3 replies
20h48m

I suspect this would have some unintended consequences, not all good.

thaumasiotes
2 replies
20h46m

Like what? Currently, there are no consequences when a paper is retracted. If we retracted more papers, what would the difference be?

dyauspitr
1 replies
20h41m

Like very valid research being lost because they mention a retracted paper for some minor point that doesn’t really have a major impact on the final results.

thaumasiotes
0 replies
19h30m

That's already something that doesn't happen to blatantly invalid research which is retracted directly. What are you worried about?

Neywiny
2 replies
19h53m

Jumping in with the others: this is not good. When I've written papers in the past and used peer-reviewed, trusted journals, what else am I supposed to do? Recreate every experiment and analysis all the way down? Even if it's an entirely software project, where maybe one could do that, presumably the code itself could be maliciously wrong. You'd have to check way too much to make this productive.

BalinKing
1 replies
17h29m

> Recreate every experiment and analysis all the way down?

If an experiment or analysis is reliant on the correctness of a retracted paper, then shouldn't it need to be redone? In principle this seems reasonable to me—is there something I'm missing?

EDIT: Maybe I misunderstood... is your point that the criterion of "cites a retracted paper" is too vague on its own to warrant redoing all downstream experiments?

Neywiny
0 replies
16h33m

I think usually there's too much building off of each other for this. Standing on the shoulders of giants and whatnot. To me that's the purpose of society and human evolution, but I won't get preachy. I didn't read the stem cell paper, but I'll use it as an example. Let's say the stem cell paper says "stem cells are one type of cell from the human body", citing some paper that first found stem cells. Maybe that paper cited the paper that first found any cells. And that one cited a paper about the molecular makeup of some part of the cell. And that cited a paper about what it means for an atom to be in a molecule. And that cited some paper about how atoms can contain electrons, and then that electrons are particles and waves.

I think, personally, it's unrealistic to expect every researcher who mentions anything that has an electron in it (aka most things) to need to recreate the double-slit experiment. Or to harvest the stem cells themselves instead of buying them from trusted suppliers. Yes, as I type this out, I do see that if more re-experimenting were done it would help detect fraud. But crucially, it really doesn't matter what an electron is to people determining that stem cells are in humans. The "non-negotiably" is what worries me. There should be some negotiation: "hey, your paper uses this debunked article. You have x days to find another, proven paper that supports the argument, or remove the argument entirely, or we'll retract your paper as well." I think that's valid. Especially since the fraud here wouldn't be impacting the author using the bad paper (most of the time, I would imagine) but rather the ones writing the paper. I would hesitate to believe that people faking such crucial, potentially lifesaving research care that some nobody they'll never meet might be upset their paper doesn't make it.

I think really what I'd like to see instead is more checking done at the peer review stage. To me that's the whole point of the journal. I'm biased on this, having been rejected during the peer review stage and disliking how expensive journal articles can get, but at the end of the day, that's the point of them. They should be doing everything in their power to ensure that the research is accurate. And if we can't trust that, what's the point of the journals at all? May as well just go on blogs or something.

not2b
1 replies
17h30m

This comment suggests a lack of understanding of the role of references in papers. They aren't like lemmas in proofs. Often an author will reference a work that tried a different approach to solve the same problem the authors are trying to solve, and whether that other paper is problematic or not has nothing to do with the correctness of the paper that refers to the other work.

Now, it's possible that in a particular case, paper B assumes the correctness of a result in paper A and depends on it. But that isn't going to be the case with most references.

oopsallmagic
0 replies
14h23m

If there were grant money for incorrectly claiming "this other thing that isn't a computer behaves just like a computer", well, we wouldn't need VCs anymore.

throwawaymaths
0 replies
11h23m

What if the citation is "I believe this preceding study to be grievously flawed and possibly fraudulent [ref]"?

mhandley
0 replies
19h20m

Just because you cite a paper doesn't mean you agree with it. At least in CS, often you're citing a paper because you're suggesting problems with it, or because your solution works better. Cascading deletes don't really help here - they'd just encourage you not to criticise weaknesses of earlier work, which is the opposite of what you're trying to achieve.

jfengel
0 replies
20h45m

That would certainly lead to people checking their references better. But a lot of references are just in passing, and don't materially affect the paper citing it.

One would hope that if some work really did materially depend on a bogus paper, then they would discover the error sooner rather than later.

demondemidi
0 replies
20h27m

Cascading invalidation. I don't think it should disappear; I think it should be put in deep storage for future researchers doing studies on misinformation propagation.

armchairhacker
0 replies
20h21m

It probably makes sense to look over papers that cite retracted papers and see if any part of them rely on the invalidated results. But unless the entire paper is worthless without them, it shouldn’t be outright retracted.

How many papers entirely depend on the accuracy of one cited experiment (even if the experiment is replicated)?

Vt71fcAqt7
0 replies
19h35m

Most citations are just noting previous work. Here are some papers citing the retracted one (selected randomly):

> Therefore, MSC-based bone regeneration is considered an optimal approach [53]. [0]

> MSC-subtypes were originally considered to contain pluripotent developmental capabilities (79,80). [1]

Both these examples give a single passing mention of the article. It makes no sense for thousands of researchers to go out and remove these citations. Realistically, you can't expect people to perform every experiment they read about before they cite it. Meanwhile, there has been a lot of development in this field despite the retracted paper.

[0] https://www.mdpi.com/2073-4409/8/8/886

[1] https://www.tandfonline.com/doi/full/10.3402/jev.v4.30087

QuesnayJr
0 replies
12h22m

This is not at all what a citation means. If someone writes a math paper with a correct result, and the proof is wrong, then you cite that paper to give a corrected proof. If someone writes a math paper where a result itself is incorrect, then you cite that paper to give your counterexample. A citation just means the paper is related, not that it's right or you agree with it.

EasyMark
0 replies
14h33m

Depends on the paper; it would still require review mechanisms. "Nuke it from orbit" is an overreaction to this, as the debunked paper may play very little part other than as a reference.

chrbr
20 replies
23h15m

Unfamiliar with academia here, and I can't quite figure it out from TFA - does a retraction always imply wrongdoing, instead of mere "wrongness?" Or are papers sometimes retracted for being egregiously wrong, even if their methods were not intentionally misleading?

ta988
3 replies
21h52m

No, many honest researchers retract their own papers because they found a problem that cannot be solved by publishing a correction/erratum (a kind of mini publication that corrects the original work). It is extremely bad to use the number of retracted papers as a judging factor for a researcher. Using the number of papers retracted for fraud (fabrication of images or data, stealing work, plagiarism...) is a different matter. Self-plagiarism is a slightly different case with a much broader grey area.

kenjackson
2 replies
21h25m

I actually retracted one of my papers. It was before it was published, but after I had submitted it. I had discovered a flaw in my methodology the night before that did have a material impact on the results. I was so stressed out for 24 hours until I spoke to my advisor.

My advisor was very chill about it. He said that retractions aren't a big deal and was glad I spotted the issue sooner rather than later.

I corrected the experimental methodology and while the results weren't quite as good, they were still quite good and I got published with the correct results.

dekhn
0 replies
18h14m

Minor detail: I believe this would be called "withdrawing" a paper rather than retracting it, as it had not been published yet.

CoastalCoder
0 replies
20h42m

> I corrected the experimental methodology and while the results weren't quite as good, they were still quite good and I got published with the correct results.

I disagree. Your new results were much better, because they were sound.

Very well done.

fredgrott
3 replies
23h11m

Okay, I can answer this. Papers are never retracted for a theory being proven wrong, and they are always retracted when wrongdoing is found. This is why the high-level research stuff always has researchers recording their data and notes. Before computers (my experience, early 1990s), we had to record everything in a notebook and sign it.

delusional
1 replies
23h6m

There's a difference between "theory proven wrong" and "proof being wrong". A finding that Theory A is wrong is still a valid finding. A wrong finding about Theory A is just a lie, it carries no value, and should thus be retracted.

mjn
0 replies
22h26m

In practice, wrong findings that aren't due to misconduct and aren't very recent are usually not retracted though. It's just considered part of the history of science that some old papers have proofs or results now known to be false. It is pretty common in mathematics, for example, for people to discover (and publish) errors they found in old proofs, without the journal going back and retracting the old proof. A famous example is Hilbert's (incorrect) sketch of a proof for the continuum hypothesis [1].

[1] https://mathoverflow.net/questions/272028/hilberts-alleged-p...

cycomanic
0 replies
22h28m

That statement is wrong. Papers do get retracted because a major innocent error is found. This often happens at the request of the authors (typically with an explanation from them). See the comment a bit further up for an example.

WhitneyLand
3 replies
22h45m

Well, consider this:

- The overall retraction rate is 4 in 10,000.

- Most researchers go their entire career without a retraction

- She now has 4.

teekert
0 replies
22h27m

Having been in academia, having felt the pressure, knowing reproduction is not sexy and takes time away from "actual experiments", knowing some theories or groups have cult-like status, knowing that not having papers means not getting a PhD despite working hard and being smart, knowing that this is (experienced as) very unfair, etc... I'm very sure that 4 in 10,000 is the tip of the iceberg.

We need more reproduction. Or have some rule: Check all assumptions. Yes, it's a lot of work, but man will it save a lot of fake stuff from getting out there and causing a lot of useless work.

olddustytrail
0 replies
21h42m

Having considered it I reckon it could be due to some systemic abuse of the process. Or it could be that she is working in a field where there is a high uncertainty rate.

Why don't you explicitly state which you think it is?

crazygringo
0 replies
21h49m

Not being familiar with her, that isn't telling me anything.

It seems like you're implying she's written exceptionally shoddy papers.

But on the other hand she could also just be exceptionally honest -- one of the very few researchers to retract papers later on when they realize they weren't accurate, as opposed to the 99+% of researchers that wouldn't bother.

Also I would imagine that retraction rates might vary tremendously among fields and subfields. Imagine if a whole subfield had all its results based on a scientific technique believed to be accurate, and then the technique was discovered to be flawed? But the retractions wouldn't have anything to do with honesty or quality of the researchers.

So I'm gonna need more context here.

semi-extrinsic
1 replies
23h8m

I've certainly seen papers retracted over copyright/IP issues with images or other details. Funnily enough, this doesn't mean the article goes away, just that it gets covered with a "Retracted" watermark.

Retractions are primarily associated with wrongdoing, but are sometimes also issued for "honest mistakes". If so it's typically with a very clear explanation, like in the link below.

https://journals.plos.org/ploscompbiol/article?id=10.1371/an...

jhbadger
0 replies
21h46m

Also, in biomedical research papers can get retracted if they can't show the subjects consented to have their samples (e.g. removed tumors) used in research even if the science itself is sound.

gumby
0 replies
22h57m

The article said there was no finding that the primary author did anything wrong, but that the original photos were no longer available, so the paper could not be corrected.

NOTE: I DON'T FOLLOW THIS WORK CLOSELY: I am not sure there are any successful programs using pluripotent somatic (adult) stem cells, if they even really exist, though there's lots of successful work with differentiated stem cells. So I think there's an unstated subtext, as you surmise.

This paper was very important and eagerly received because the GW Bush administration had banned federal funding for research using foetal stem cells as a sop to the religious right (all that work moved to Singapore and China, and continued in Europe).

f6v
0 replies
22h24m

> Or are papers sometimes retracted for being egregiously wrong, even if their methods were not intentionally misleading?

There could be a mistake the authors made which led to a wrong interpretation. Someone might then write another article commenting on that mistake and its wrong conclusions, but that wouldn't be a reason for retraction. Something has to be incredibly wrong for the authors or the journal to retract. Retractions due to fraud are much more common.

epgui
0 replies
22h8m

Retractions don’t imply wrongdoing, but they are not that common so they look very bad.

delusional
0 replies
23h7m

Academic research rarely (if ever) cares about "intentions" of the authors. I'd say papers are exclusively retracted for being "egregiously wrong" (or at least not trustworthy), and never for any "wrongdoing". The wrongdoing just happens to be a pretty good indicator that the conclusions probably aren't trustworthy.

bagels
0 replies
22h7m

If wrongdoing is the same as intentional deceit, I would guess there are some that were not intentional, but instead driven by incompetence or simple mistakes.

Fraudulent/doctored images don't fall into the incompetence/mistake category though.

Some types of mistakes/incompetence: improperly applied statistics, poor experiment design, faulty logic, mistakes in data collection.

CJefferson
0 replies
11h7m

I have a friend who got their paper retracted, because it turns out they had made a big mistake in implementing an algorithm -- so big that after fixing it, the results entirely disappeared.

In that case, the retraction didn't really get any publicity, and I'm actually proud of them for doing it, as many people wouldn't bother.

However, in practice I would say the majority of retractions are for wrongdoing on the part of the authors.

I wish, particularly in the age of the modern internet, that it were easier for authors to attach extras to old papers -- I have old papers where I would like to say "there is a typo in Table 2.3", and most journals have basically no way of doing this. I'm not retracting the paper over that, of course! This is one advantage of arXiv: you can upload small fixes.

arjvik
11 replies
23h37m

> Verfaillie agreed with the retraction. She now has four retractions, by our count.

Maybe I just don't understand biology, but there seems to be something up here.

bpodgursky
9 replies
23h35m

Probably too many grad students with high expectations and no careful review. The PI will be "on" the paper but will only do a cursory review.

o238jro2j5
5 replies
23h21m

Can confirm. My PI didn't review anything. Sent their journal reviews to students and told us to sign their name at the bottom. Straight up told us to falsify our results on more than one occasion (I refused). I reported them to admin. Admin didn't investigate, didn't even contact the witnesses I named, and gave the professor tenure. This was at UT Austin about ten years ago. Academia is broken.

alan-hn
4 replies
21h41m

Name and shame

j-krieger
3 replies
21h23m

I wouldn’t dare in their place. Academia is tiny.

alan-hn
1 replies
21h21m

I know, I'm in academia. But this silence is why these things keep happening. PIs hold power over their students to keep them in line, through letters of recommendation to networking and post doc/job offers. We need to work to correct that.

rf15
0 replies
17h50m

Imagine how much research is held back or misguided because of this... I'm happy I'm out, tbh; a good R&D team seems so much easier to work in.

CoastalCoder
0 replies
20h38m

Damn, academia needs a #MeToo moment.

I find this vaguely reminiscent of Hollywood's casting couches.

akira2501
0 replies
23h15m

You can pay for an education or you can pay for a degree. It seems like some people don't mind getting the latter when they aimed to get the former.

Metacelsus
0 replies
23h28m

And this is why it's expected that the PI will take responsibility for any papers published by their lab – if the PI isn't doing their job, they should face the consequences.

(note I wrote "should", not "will")

SketchySeaBeast
0 replies
21h52m

> Maybe I just don't understand biology, but there seems to be something up here.

If I had a nickel for every time I've heard that.

Turing_Machine
6 replies
23h30m

I realize that this probably wouldn't fit in the title, but this is about a specific kind of adult stem cell, not stem cells in general.

It's trivially obvious that some kinds of stem cell can become any type of cell, given that we all had our beginnings as a single cell.

CodeWriter23
5 replies
20h34m

> It's trivially obvious that some kinds of stem cell can become any type of cell, given that we all had our beginnings as a single cell.

It’s not that obvious, as the brain grows, certain kind of cells die off and never come back. For example at 4-5 years of age being able to speak different phonenes is lost due to mass die off of a certain type of brain cell.

Could be the same for the pair of cells that start a human life. Once their purpose is served they may never exist again.

rf15
1 replies
17h48m

Do you have a source on the phonemes situation? It's the first time I've heard of it, and I'm pretty sure I've seen people learn them past that point. I know they struggle with it, but, as an example, even Japanese people can learn the effective difference between l and r given enough effort.

CodeWriter23
0 replies
9h32m

It’s why certain ethnicities cannot pronounce words in their non-native language. Like people from the US not being able to make certain consonant sounds in German or Chinese being unable to pronounce the R consonant. So go look it up yourself.

Turing_Machine
1 replies
15h1m

It’s not that obvious, as the brain grows, certain kind of cells die off and never come back.

I'm not sure what difference that would make. Those brain cells (and all the other brain cells that don't die off) were still formed by successive divisions of the single cell that resulted from sperm meeting egg. Therefore, that original cell is capable of producing any cell type found in the body.

CodeWriter23
0 replies
9h31m

By your logic, every cell in the human body is a brain cell.

burning_hamster
0 replies
8h38m

You are referring to the critical period [1] of (second) language acquisition, which is generally thought to end with the onset of puberty [2]. Neurodevelopmentally, this period coincides with extensive synaptic and dendritic pruning and increased myelination (of axons) [3], which result in the loss of some connections and the strengthening and acceleration of others. Cell loss is not thought to be a major driver of brain maturation, nor is it thought to occur more frequently during this time window.

[1] https://en.wikipedia.org/wiki/Critical_period_hypothesis

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5857581/

[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3982854/

renewiltord
5 replies
22h37m

This is a big problem with Belgian scientists. Their culture puts a lot of pressure on publishing and so on so they tend to falsify flashy results over just doing the science.

rafram
2 replies
22h31m

The authors were all employed by the University of Minnesota Medical School, and I think only one is Belgian. Not sure how much Belgian science culture has to do with it.

renewiltord
0 replies
22h25m

PI is Belgian. That culture seeps through to the lab. It's a risk to science.

f6v
0 replies
22h21m

I hope you realize there are many different labs with different attitudes. I have one of my degrees from Ghent, and what you describe never came up. AFAIK there isn't even a requirement anymore for a PhD student to publish a paper in order to graduate.

JonChesterfield
0 replies
22h0m

This statement holds up pretty well if you drop the country from it. Academia follows incentive structures just like everything else.

Say you were a software engineer who was paid by how often you shipped code with a nice title but you didn't have to give people the binaries so noone ever ran them. That is, the difference between nice documentation about code that never quite existed and scruffy documentation about code that does really useful things is you get money for the first and fired for the second.

Academia isn't quite that extreme but it does have incentives pointed in that direction.

neilv
3 replies
21h4m

I personally know two people who complained about fabrications, and both had to restart their careers, no longer pursuing research.

I've heard stories from others, such as when a fabrication was known to students in a lab, and of some playing along with it anyway.

We routinely hear on HN of fabrications discovered in journal publications.

Exactly how bad is the problem? What's the prevalence, scale, and impact?

What are the causes, and how does society address the problem?

izacus
0 replies
8h25m

I don't think there's a single academic research lab out there that doesn't heavily doctor their data to make it publishable to some extent. The best of them "merely" cherrypick data. The pressures of academia make being honest an extremely bad career choice that can end up with your unemployment.

Publish or perish. You can't publish if your results aren't good.

coolhand2120
0 replies
16h34m

https://en.wikipedia.org/wiki/Replication_crisis

In some fields more than half of the research is somehow not reproducible. Some is attributed to fraud, some incompetence. As a whole it makes science produced by these fields worse than a coin flip. Psychology is by far the worst culprit.

We're at an inflection point in history where the scientific method dictates we shouldn't trust many fields that use the title "science".

caddemon
0 replies
20h23m

Straight up fabrication is more common than we'd hope, but probably not systemically threatening. I'm much more concerned about how poorly replication goes even when authors are not malicious/generally following the "status quo" for methodology in their subfield.

throwitaway222
2 replies
21h14m

Nothing should be publishable to the public until it is vetted.

mkl
0 replies
19h19m

I'm guessing you consider "vetting" to be confirming as correct, rather than the vetting that referees already do? In that case, how would anybody know about it to vet it? As soon as it's communicated to other researchers to work on it's effectively public, as trying to keep all research secret like that would lead to enormous amounts of duplicated effort and slow scientific progress almost to a halt. Besides, some research (especially theoretical) takes years or decades to be confirmed and accepted, and some research is reporting extremely rare things that may be unconfirmable but will eventually, because they were published, be part of a larger theory.

Kalanos
0 replies
5h22m

Research is composed of peer-reviewed observations; we attempt to triangulate truths.

rolph
2 replies
23h9m

from the submission;

1- "The errors the authors corrected “do not alter the conclusions of the Article,” they wrote in the notice."

2- "the Blood paper contained falsified images, but Verfaillie was not responsible for the manipulations. Blood retracted the article in 2009 at the request of the authors. "

3- "The university found “no breach of research integrity in the publications investigated.” "

4- "The notice mentions two image duplications Bik wrote about on PubPeer. Because the authors could not retrieve the original images, it states:

    the Editors no longer have confidence that the conclusion that multipotent adult progenitor cells (MAPCs) engraft in the bone marrow is supported.

    Given the concerns above the Editors no longer have confidence in the reliability of the data reported in this article."

bbarnett
0 replies
22h9m

I think the grant application provided a strong case of benefit from the fraud, likely why it succeeded.

(I agree... fraud is fraud, and should be handled with criminal law)

purpleblue
2 replies
22h7m

Does this mean that stem cells cannot become any type of cell? Certainly this should have been easy to test over the last 22 years?

jjw1414
0 replies
21h52m

It's important to distinguish between the types of stem cells when referring to pluripotency (i.e., the ability for a cell to differentiate into almost any cell type in the body). Embryonic stem cells are considered pluripotent. Adult stem cells are more correctly referred to as "multipotent" in that they can be coaxed into differentiating into other cell types, but typically into cell types close to their own lineage.

brightball
2 replies
19h41m

Is there a site that tracks retractions?

brightball
0 replies
5h7m

Exactly like that. Thank you.

abdullahkhalids
2 replies
23h18m

So, according to more credible research, what type of other cells can stem cells turn into?

rolph
0 replies
22h55m

That depends on the origin, developmental state, and biochemical history of the stem cell in question.

globalise83
1 replies
23h25m

Most-cited retracted paper ever is quite a claim to fame!

kstrauser
0 replies
20h15m

Citation needed.

crakenzak
1 replies
11h43m

So, can adult stem cells become any type of cell or not?

fnord77
0 replies
21h20m

There used to be so much hype about the potential for stem cells in medicine.

JackFr
0 replies
21h36m

Doesn't this paper make a fairly straightforward claim, which is either true or not? Hasn't there been any further research in the past 22 years to either effectively support or undermine the conclusions?

Harmohit
0 replies
20h23m

We badly need a "prestigious" journal devoted to publishing reproductions of other studies. Moreover, we need to change our perception of what makes a good scientist. Doing novel research is awesome and great, but reproducing other people's work is also important - it is a fundamental pillar of science.

The problem is even more pronounced as certain experiments require ever more specialized and expensive equipment.