published in Nature in 2006
So around 18 years ago. That's a long time for researchers who believed this paper's conclusions to be going down the wrong rabbit hole. What a huge waste of effort, and of the lives of those with Alzheimer's.
If one wants less fraud, change the incentives.
1. Grants and faculty positions should be less dependent on number of publications, and more on the quality of publications.
2. Place more emphasis on basic research and less on translational research. The latter encourages hyperbole and p-hacking. In the late 90s, the NIH began strongly emphasizing that all research should have immediate practical value. This is not reasonable, and as a result researchers grossly exaggerate the impact of their work, and some even engage in outright fraud.
How do you determine quality? Right now, publishing below Q2, or even below Q1 in some cases, is the same as not publishing at all. I've seen grants that only accept D1 papers. As a curiosity, Gregor Mendel's original work was published in a small, newly created local Brno journal. It was cited three times in the following 35 years. By all metrics, it was low-quality work. Only 40 years after publication was it rediscovered as a foundational work.
That's the clean part. I've also seen papers published well above their merit just because the authors know the editors, or because the paper comes from a reputable lab so it must be good. The opposite is also true: if your work is from a small or unknown lab, or goes against the grain, you'll be lucky to get published at all.
If money is the problem, maybe money is the solution?
Like opening a betting market for study replication. If no methodological errors are found and the study can be successfully replicated, the authors get a percentage of the pool. The replication effort is run by a red team that gets paid the same percentage of the pool regardless of outcome, so their incentive is to pick studies that attract a lot of bets.
This would incentivize scrutinizing big findings like the one in the OP, where a failed replication would pay out big, but it would also act as a force for unearthing dark-horse findings in journals off the beaten path, where a successful replication would pay out big.
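The payout mechanics described above could be sketched as follows. This is a toy illustration only: the function name, the 10% cuts, and the settlement rules are all invented for the example, not anything proposed in the thread, and a real market would need far more careful mechanism design.

```python
def settle_pool(pool, replicated, author_cut=0.10, red_team_cut=0.10):
    """Split a betting pool once a replication attempt resolves.

    pool         -- total money wagered on this study
    replicated   -- True if the study replicated with no methodological errors
    author_cut   -- share paid to the original authors, only on success
    red_team_cut -- share paid to the red team regardless of outcome
    """
    # The red team is paid either way, so its only incentive is to pick
    # studies that attract large pools, not to reach a particular verdict.
    payouts = {"red_team": pool * red_team_cut}
    remaining = pool - payouts["red_team"]
    if replicated:
        payouts["authors"] = pool * author_cut
        remaining -= payouts["authors"]
    # Whatever is left is divided among bettors who picked the right side.
    payouts["winning_bettors"] = remaining
    return payouts

print(settle_pool(100_000, replicated=True))
```

Note how the red team's fixed cut decouples its pay from the verdict, which is the core of the incentive argument above.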
How do you determine quality?
Certainly not by citation counts. Citations have more to do with the author's social or scientific network than with the worth of the paper.
It was cited three times in the following 35 years. By all metrics, it was a low quality work.
It was not low quality work (although it did spark quite a bit of controversy due to its perceived or actual issues). It was just an article written by an unknown author in an unknown journal.
FYI: Some have suggested that Mendel's data might be too perfect, indicating possible manipulation or fraud. (1) Mendel's results are unusually close to the expected ratios of 3:1 for dominant and recessive traits. Some argue that real-world data often show more variation. (2) In 1936, statistician Ronald A. Fisher analyzed Mendel's data and suggested it was "too good to be true." Fisher believed the results might have been altered or selectively reported to better match Mendel's hypothesis. (3) Despite these concerns, many of Mendel's experiments have been replicated, and his fundamental findings remain valid. Most scientists believe any discrepancies in Mendel's data were not due to intentional fraud but possibly unconscious bias or error.
Also, the opposite is true: your work is from a small or unknown lab, or goes against the grain, and you'll be lucky if you get published at all.
Unfortunately, many papers that come from obscure labs and go against the grain are both bad and wrong. It's a hard problem.
How do we measure the quality of publications? I know there have been attempts to do so based on journal impact factor and citation counts, but those metrics are also manipulated.
I know we would have to change a lot from how it is set up today, but here are some ideas:
- How many times have the results been reproduced by others?
- Add another layer of blindness: the person doing the lab work is not the person crunching the results. Two different groups could even crunch the numbers, with all groups unknown to each other.
- Avoid p-hacking with predetermined (and registered!) p-values.
- Register the experiment, with its hypothesis, before the experiment starts.
- Register authors as they are added and removed, and make that history publicly available.
- All results have to be uploaded to a website before publication.
- The method of calculation has to be public on a website before publication.
So a high-quality paper is one where:
- the experiment was logged in advance
- the history of authors is known
- there is a distinct separation between experimenter and cruncher
- the public can get the results themselves and run the same analysis to confirm (or not!) the results
- the experiment is repeated and confirmed by others.

Even if the first experiment is bad or a fraud and the second one doesn't confirm it, a third experiment could be the tiebreaker. It would also be easier to trace whether it was the lab or the cruncher that made a mistake or committed fraud.
Overall, we have to incentivize good science.
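The checklist above could be reduced to a toy scoring function like the sketch below. The criteria names and the equal weighting are my own simplification for illustration; nothing in the thread prescribes them.

```python
# Hypothetical checklist criteria, paraphrasing the proposal above.
CRITERIA = [
    "preregistered",            # experiment logged in advance
    "author_history_public",    # additions/removals tracked publicly
    "blinded_analysis",         # experimenter separated from "cruncher"
    "data_public",              # public can rerun the same analysis
    "independently_replicated", # repeated and confirmed by others
]

def quality_score(paper):
    """Fraction of the checklist a paper satisfies (0.0 to 1.0)."""
    return sum(bool(paper.get(c)) for c in CRITERIA) / len(CRITERIA)

print(quality_score({"preregistered": True, "data_public": True}))
```

Whether a single scalar like this is meaningful is exactly the open question; the sketch just shows that the proposal is mechanically checkable, unlike citation counts.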
Many of these ideas have been tried already. Unfortunately they don't work.
1. Pre-registration. Great idea, it's often being done. Doesn't work because universities don't enforce it. They'd have to proactively check that papers match their pre-registrations and then fire people when they don't.
2. Reproducibility. Nobody gets funded to do this but even if they did, there are lots of bogus studies that can easily be reproduced. The problems are in the methods rather than the data. If you specify a logically invalid or unscientific method, then people following your instructions will calculate the same results and still be wrong.
3. Blindness/splitting work up. This is already how many papers are made and academics turn it around as a defense. Notice how in every single case where this stuff happens, the people whose names are on the paper claim the fraud was done by someone else and they had no idea. Universities invariably accept this claim without question.
4. All results have to be uploaded before publication. Did you mean raw data? Results are the primary content of the paper, so it's unclear what this would mean. Researchers in some fields heavily resist publishing raw data for various (bad) reasons, like academic culture rewarding papers but not data collection efforts, so they're incentivized to squeeze as many papers out of a dataset as possible before revealing it to competitors. In a few fields (like climatology) they often refuse to publish raw data because they're afraid of empowering people who might check their papers for errors, who they see as a sort of shadowy enemy.
5. Authorship history. Which frauds are you thinking of that this would fix?
I spent a lot of lockdown time looking at this question. You've listed five ideas, I churned through maybe 15-20. None of them work. On investigation they've usually been tried before and academics immediately work around them. Science is littered with integrity theatre in which systems are put in place in response to some fraud, and they appear to be operating on the surface, but nothing is actually being checked or enforced.
> Overall, we have to incentivize good science.
I'm by now 95% convinced the only way to do this is for scientific research to be done commercially. No government or charitable funding of science at all. As long as science is being firehosed with money by people who don't actually use or consume the resulting papers, the incentive will always be to provide the appearance of doing science without actually doing it. It's just so much more effective for fundraising. Commercial science that's purchased by someone has a customer who will actually try to check they got what they paid for at some point, and can vote with their wallet if they didn't. Also legal protections and systems to stop fraud in markets are well developed. Notice that Elizabeth Holmes went to prison for defrauding investors. If she'd done the same thing at a university she'd have gotten away with it scot free.
If she'd done the same thing at a university she'd have gotten away with it scot free.
But she didn't though. She was rejected from academia, which is why she turned to private capital. Her fraud worked on private investors, not government funding agencies. She couldn't even convince her professor to get behind her idea, but private funds threw millions of dollars at her.
I haven't read her bio, so would be interested to know more about her being rejected from academia. The official story is that she dropped out during her undergrad and immediately formed a company. Or are you referring to her professors telling her they didn't think her idea would work? If so then they were right, but her profs are not the people handing out grants. There's no sign she wouldn't have got a grant for such a thing given that government funding is justified by the fact that it can fund long shot ideas that some say won't work, and people doing things that clearly can't work regularly get grants.
Are you arguing that Alzheimer's research isn't basic research? That it's got so much fraud because people demand immediate practical value from it? How does that square with the fact that they've been researching it for decades, haven't found a cure or even a solid understanding of the cause, and still get funded? Surely this is an exemplar of basic research?
'Basic research' has a specific meaning in the natural sciences, which, I suspect, is how the OP was using the phrase.
Interesting. What's the definition they use?
If the authors are pushing a novel or shaky hypothesis that purports to provide a potential cure (as they often do), then yes, this might not fall under basic research. For instance, arguing that a particular pathway is the cause of AD without first characterizing the pathway and how it affects humans can waste tens of millions of dollars and cost years.
In the balance between trying to quickly develop a therapeutic and understanding the basic mechanisms driving the disease etiology, we see a lot of research groups with underpowered studies playing fast and loose and hyping their research. Is it outright fraud? Rarely. But it's not good science either.
Why not criminal charges for fraud? This kind of stuff is used to obtain funding in the first place (defrauding the funding agency), it's theft (stealing funds that would otherwise be allocated to competitors), and it breaks the public trust, since these funds are usually allocated by government agencies.
As it is now, bad researchers get rewarded and good researchers who speak up get pushed out. One of the most despicable people I've ever met is a long-term "successful" academic researcher at a major university.
E.g., return to the rigor and quality-aspects of research standards c. 1940 before it declined into the Sears, Roebuck and Co. model we find today.
The commercialization of academia rewards churn and volume rather than $upporting (a semi-orthogonal constraint) and promoting landmark hard-work that takes longer to gather data or develop.
Perhaps data-oriented research needs to consider extraordinary rigor as the new normal, perhaps with internal and external audits to assure standards and practices are sound.
I'm amazed at how this one specific, easily detectable type of fraud is so common. One has to wonder about all the other, less obvious, ways of committing fraud and how common they must be.
This is why you're going to see an explosion of fraud cases stemming from the 90s and early-to-mid 00s. That was the period where PCs were widespread so it was pretty easy to copy words and images, but actually looking at any individual set of words or images and asking "Were these copied from somewhere?" was much more difficult. A lot of people copied/manipulated then because they thought it would be too hard to catch them. Well, technology caught up.
Just look at the multiple plagiarism cases against the former Harvard president. It was clear, at least to me, she copied liberally more out of laziness/lack of confidence because she didn't think she would get caught for small phrases here and there. I mean, who goes through all of the trouble to write a dissertation and then plagiarizes the acknowledgements???
In the Claudine Gay case, she didn't actually steal anyone's work in any of the publicized examples I saw. She clearly attributed the ideas and statements she was using, but then sometimes proceeded to paraphrase them too closely, without using quotation marks. You could argue that it's technically plagiarism, but morally, it isn't. The reason she got nailed was because billionaire donors were upset that she wasn't cracking down on pro-Palestinian student protesters.
Ironically, the guy who led the charge against Gay, Bill Ackman, is married to a celebrity academic who committed real, bona fide plagiarism. Once that came out, Ackman suddenly had all sorts of excuses for his wife's actual misconduct, which makes it clear that he never actually cared about the issue of plagiarism. He just wants kids who protest against the war in Gaza to get expelled.
I agree with everything you've said about Ackman's rationale and actions.
I don't necessarily agree with your overall characterization of Gay's plagiarism. While some of it is clearly of the kind you cite (e.g. she's clearly referencing other work in a lot of her analysis, so the fact that she doesn't reword some phrases a little more seems like a very minor transgression to me), there are other cases that are more than just sloppiness and are outright weird, like the acknowledgements issue. This opinion article from Ruth Marcus (a generally left-leaning writer) of the Washington Post highlighted the issues very well IMO: https://archive.vn/h8lqM
I don't think anyone cares what is written in acknowledgment sections. That's where people thank their mom and dad. If they thank their mom and dad in exactly the same way as someone else, who cares?
Ruth Marcus may be liberal, but she's also extremely pro-Israeli. The entire motivation for going after Claudine Gay was that she didn't stop students from protesting against Israel's war in Gaza, so Marcus' political stance on that issue likely colors her view on the plagiarism accusations.
If they thank their mom and dad in exactly the same way as someone else, who cares?
Because it's pathetic? Again, this wasn't just one instance of sloppy plagiarism, this was a common thread throughout her extremely thin academic career. And Gay wasn't just some random undergrad, she was the president of what was supposed to be one of the most prestigious universities in the world! The plagiarism scandal was just more proof that Gay was woefully unqualified for her job.
As I said at the top, the plagiarism accusations were extremely thin, and nobody cares about how you word your acknowledgment.
The bottom line is that if Claudine Gay had cracked down hard on students for protesting against Israel's war in Gaza, she would never have faced these accusations, and she would still be president of the university.
I don't particularly care if she was a great president or not. I find it extremely unsettling that a few billionaires were able to oust the president of Harvard, as a way of forcing their own politics on the institution. Since her ousting, Harvard has moved hard against student protesters. Harvard got the message loud and clear, and that message had nothing to do with plagiarism.
I disagree; I inspected Gay's writing closely and compared it to the original and from what I can tell, she had specific intent to spend the least amount of effort and skill to make something that barely passed the threshold for being publishable, or she lacked the skill entirely to write her own unique text.
I expect somebody who reaches her level of achievement to adhere to the rules of academic plagiarism to the letter.
Was it as easily detectable in 2006?
Sure. They're not using advanced ML to find these. Just visual inspection and the kind of transforms you could do in Photoshop in 1991.
I know two people, including my own mother, who died of Alzheimer's, and one who has it right now. Who knows where we would be right now if we hadn't gone down the wrong track with this fraudulent paper? Things like this are why trust in scientists is at an all-time low. There's corruption at every single layer of our society, and I don't know what we can do to make honor and honesty top moral goals for citizens again.
This amyloid theory has been completely debunked at this point, but it wasted almost two decades of progress. It's so infuriating, but I don't see this stopping because there's no accountability.
Everyone in power is in on it for revenue maximization. Treating someone at an early stage is never the goal; making money and winning awards is the goal. The FDA is 100% in on it.
Treating someone at an early stage is never the goal
This claim is just an opinion. I bet that if we check the real statistics there are lots of cases that show exactly the opposite behavior.
making money and winning awards is the goal
We should remember that making money and winning awards are perfectly legitimate goals, as long as the work is done honestly.
Working with Alzheimer's patients probably ranks among the most emotionally difficult and depressing work I could imagine. We should stop treating scientists as missionary figures born to save all of us for free.
Those who don't even see the problem are already corrupted. Early stage treatments at scale can be very possible with cheap preventative bloodwork and inexpensive small molecules. Instead, we have no diagnostics until it's already too late, and then we have 50K/year "treatments" that don't even work. The ones you referred to are not scientists; they're scientific mercenaries.
Billionaires and powerful politicians (and their parents and other loved ones) die of Alzheimer's too.
If it was just a big cartel conspiracy, you would see a different disease pattern when comparing the elites to everyone else.
Just because they're billionaires does not mean that they're smart enough to improve the human condition, even their own. They're billionaires because they are good at making money, often by exploiting the environment and/or humans. It doesn't imply that they know how to spend it well even to save themselves in the future.
I think we're already well into looking at Alzheimer's as an auto-immune response by the brain.
So far, it appears to be a degenerative process with manifold causes still to be elucidated. Perhaps in ~50 years, there will be a wider corpus of knowledge and treatments to attack more of them.
Of note: Cancer and Alzheimer’s disease inverse relationship: an age-associated diverging derailment of shared pathways Molecular Psychiatry vol. 26, pp. 280–295 (2021)
this is why trust in scientists are at an all-time low
People need to stop making excuses to avoid science, use their brain cells, and develop some critical sense. When you feel afraid of science, stop and remember that the alternative is worse.
Karen Ashe attended Harvard for both her undergrad and MD. With all the academic fraud committed by Harvard connected people that has come to light in the past few years, one really has to wonder what is up with that institution (and higher ed more generally).
Harvard wants academic celebrities. It's not enough to do solid work, you need awards, plaudits, research that punches through the noise, articles in the mainstream press, and so on. Harvard is a brand first and foremost. If what you want most in life is to be a tenured Harvard professor in your chosen field, you need to publish research that makes a big splash.
The most surefire way to do this was to simply commit fraud rather than conduct experiments and pray you get lucky and find an outlier result worthy of significant rewards. We are witnessing the fallout now as people no longer assume honesty and integrity, and begin to analyze the actual papers for accuracy.
Indeed. Another example of this is Forbes featuring outlier successful people on its front cover who have later turned out to be involved in criminal activities or scandals. (1) Elizabeth Holmes, founder of Theranos, who was once celebrated as a tech visionary but later convicted of fraud. (2) Sam Bankman-Fried, founder of FTX, who faced multiple fraud charges following the collapse of his cryptocurrency exchange.
Forbes is just another junk brand looking to turn a profit. With Harvard and other higher education institutions there is the expectation that they are vehement proponents of scientific rigor.
It was her "protégé", Sylvain Lesné, that did the dirty work. And apparently has been rewarded with continuing employment.
Perhaps it was Ashe's Harvard education that led her to not consider double-checking the work...
Edit to add: IIRC, it was also the University of Minnesota that was responsible for the "dietary fat is bad" advice that helped make obesity an American epidemic...
I met a Harvard grad the other day. I was astounded at how below average this person was. They had just gotten their PhD. However, it was in a soft science.
I asked him for advice on a problem in his field, and he basically couldn't answer. He talked about his degrees.
I think this person must have cheated. It's the only thing I can imagine.
Harvard makes the news. Sadly it happens everywhere [1]
[1] https://www.nature.com/articles/d41586-023-03974-8 (apologies if you don't have access to Nature)
Neuroscientist Karen Ashe plans to retract her team’s landmark Alzheimer’s paper after acknowledging that it contains manipulated images. The 2006 study, which suggested that the disease could be caused by an amyloid beta protein, has been cited nearly 2,500 times. “I had no knowledge of any image manipulations in the published paper until it was brought to my attention two years ago,” Ashe wrote on the discussion site PubPeer, adding that she stands by the paper’s conclusions.
Scientists are divided over whether the problems with the paper undermine the dominant, yet controversial, theory that beta-amyloid plaques are a root cause of Alzheimer’s disease.
Scientists are divided over whether the problems with the paper undermine the dominant, yet controversial, theory that beta-amyloid plaques are a root cause of Alzheimer’s disease.
How could it not at least be a wake up call that studies that use this paper as a base could be fundamentally flawed?
Part of the problem is that studying the brain is hard. We have lots of in vitro or mouse models that tell a story, but nothing is conclusive. There have been many orthogonal studies which can replicate some of the associations, but nothing that can precisely point out underlying cause and effect. Yet people are suffering today, so researchers are following the best leads we have.
The best leads we have are faked images?
I get that it is complicated but fraud in one area (images) makes the hard data extremely suspect.
The problem is that the altered images represent a still plausible theory. There is a reason why they were believed in the first place. It isn’t that these false images are negative evidence. Instead, they don’t help or refute the underlying hypothesis at all.
The real question (that I'm not sure about) is whether anyone else has been able to replicate the original experiment.
There have been many subsequent studies which generate evidence in support of the underlying idea.
As they say, it only takes one result to disprove a theory, but nobody has been able to do so to date.
Too late; the tens of thousands of papers citing this paper and drawing conclusions based on it won't be retracted. Nor will papers based on papers based on this paper.
It's dubious that a lot of the papers citing this paper are actually drawing conclusions based on this paper. Per Derek Lowe [1]:
I could be wrong about this, but from this vantage point the original Lesné paper and its numerous follow-ups have largely just given people in the field something to point at when asked about the evidence for amyloid oligomers directly affecting memory. [...] The expressions in the literature about the failure to find *56 (as in the Selkoe lab’s papers) did not de-validate the general idea for anyone - indeed, Selkoe’s lab has been working on amyloid oligomers the whole time and continues to do so.
[1] https://www.science.org/content/blog-post/faked-beta-amyloid...
It's interesting that Lowe said this:
> When I review a paper, I freely admit that I am generally not thinking "What if all of this is based on lies and fakery?" It's not the way that we tend to approach scientific manuscripts. Rather, you ask whether the hypothesis is a sound one and if it was tested in a useful way: were the procedures used sufficient to trust the results, and were these results good enough to draw conclusions that can in turn be built upon by further research?
I have, over time, come to treat every paper I read as being based on lies and fakery (or incompetence, unconscious bias, or intentional omission of key details), and I work to convince myself that the paper is not fraudulent or false. That is, my null hypothesis is that published work is wrong.
After chatting with many people about this, I've found that most people default to believing a paper is right, and if the figures and conclusion agree with their bias, they just move on, believing the paper to be true. I've been guilty of this in the past as well.
There should be some sort of "taint" checking that will indicate how many retracted papers any given one is based on. It shouldn't be too difficult since publications are highly structured.
With LLMs we can read all the papers at once and flag the most strongly derived for further review.
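The "taint" check described above doesn't even need an LLM for the structural part: given citation edges and a list of retracted papers, it's a graph traversal. Here's a minimal sketch; the function name and the toy dictionaries are invented for illustration, and real citation data (e.g. from a service like Crossref or OpenAlex) would replace them.

```python
from collections import deque

def tainted_ancestry(paper, cites, retracted):
    """Return the set of retracted papers reachable from `paper`
    by following citation edges, directly or transitively.

    cites     -- dict mapping paper ID -> list of cited paper IDs
    retracted -- set of paper IDs known to be retracted
    """
    seen = {paper}
    found = set()
    queue = deque(cites.get(paper, []))
    while queue:
        ref = queue.popleft()
        if ref in seen:
            continue  # citation graphs can share ancestors; visit once
        seen.add(ref)
        if ref in retracted:
            found.add(ref)
        queue.extend(cites.get(ref, []))
    return found

# Toy citation chain: C cites B, B cites A, and A was retracted.
cites = {"C": ["B"], "B": ["A"], "A": []}
print(tainted_ancestry("C", cites, retracted={"A"}))
```

The hard part, as the sibling comment notes, is that merely citing a retracted paper doesn't mean a result depends on it; deciding whether a citation is load-bearing is where automated reading would actually have to earn its keep.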
The only papers that are likely to draw conclusions directly from the cited papers are meta-analysis and reviews. Any pure research will have its own hypothesis, its own experiments, its own results, and its own conclusions. Most papers will simply cite it as background or related works. Even if the evidence in the cited work is doctored, the hypothesis can still be true and future papers based on it can still find valid, positive results.
This paper being manipulated doesn't imply anything about the hypothesis or any subsequent studies - it simply fails to be useful as any form of evidence.
One red flag should be that nowhere in this news article is the reader made aware of the exact nature of the manipulated images or their implications. If you go to the linked PubPeer page, you'll find why: it's much less dramatic, and all the findings were also replicated following this inquiry to show that any image alteration, which might have been made for editorial purposes, does not affect the conclusions...
Drama drama drama, feed the people more drama... sigh
“All the findings were replicated” is a claim by the accused, which is disputed by the researcher who originally found the issues, and he detailed all the contradicting claims right in that thread https://pubpeer.com/publications/8FF7E6996524B73ACB4A9EF5C0A.... Image alteration “that might have been made for editorial purposes” is a laughable euphemism for fraud, even the accused didn’t dare to use that phrasing. Not sure what’s in it for you to seriously misrepresent scientific fraud.
One red flag should be that nowhere in this news article is the reader made aware of the exact nature of the manipulated images or their implications.
Because that was covered in detail when the manipulations were first reported and those articles are linked to in the above article. This is just reporting on the resulting retraction two years after that initial report.
it's much less dramatic and all the findings were also replicated following this inquiry to show that any image alteration, that might have been made for editorial purposes, does not affect conclusions
Other groups had issues replicating the results with the same oligomer (often just chalked up to its instability), it's not like someone just happened to stumble upon these manipulations casually. This retraction only happened because Nature rejected the author's attempt to publish a correction. This whole thing is a black mark on Nature's record as well so if it really was just some minor change to make a picture look prettier for publishing purposes, I doubt they would have insisted on this action
I disagree with this assessment. If Bik sees "shockingly blatant" copying, it's almost certain the author (or one of the authors) specifically, with intent, committed fraud. The other main explanation is incompetence (it's not impossible to misattribute a specific figure if your data handling is poor).
There's a strong incentive to get a high citation count for your papers. This encourages behavior like the manipulations we're seeing here, on the part of fraudsters. But it also fails to incentivize caution on the part of researchers who cite existing works. If there were a "bamboozled count" that showed how many times a researcher cited a work that was later retracted, that would incentivize people to be a little more cautious, and perhaps avoid citing work of people who are suspected to be fudging the numbers.
While we're at it, let's also add a "rickrolled counter" for people who open the link and close it in less than 10 seconds. That should incentivize people to be a little more cautious, and perhaps avoid clicking on links without first double-checking the URL.
This is basically the academic version of that.
Making the system more elaborate would also make it more complex, and people would just find more complex ways to game it. It might even make things worse.
I am sure most citations are not by people who suspected the numbers were fudged.
If anything, the way to go is to stop using flawed metrics. Most people want to do a good job and build a good reputation. Incentives just distort this.
Elisabeth Bik has a twitter where, at least in the past, she exposed a bunch of questionable things in papers (lots of manipulated images or images copied from other papers)
https://x.com/MicrobiomDigest/status/902016709019672577
These efforts are only going to accelerate as AI is applied; new and exciting frauds will be discovered.
Yes, but AI tools also make it easier for bad actors to generate false images that are harder to detect as false going forward. The ones that get caught now seem to be incredibly simplistic manipulations.
Ehh maybe, the images will be too perfect. I don't think they can model every source of bias, that would be a huge training problem.
That is also true. There are decades of fraudulent papers out there to find from the before times, so it'll still be useful.
They should be sent to prison for this, as should all fraudulent researchers big and small.
I may be alone in this, but I think threats like jail time for fraudulent research would have the side effect of discouraging researchers who don't even engage in that stuff. I think pulling out the pitchforks is a misguided idea, and it's a shame that it's so common today.
On one hand, these people have a big responsibility to do good work and stay out of trouble for the sake of the ones they're helping (i.e. Alzheimer's patients), but on the other hand their impact depends a lot on their mental/emotional state -- if you put more bs in their mental buffer, it's naturally going to mess with them and lead to a decline in quality. They have to focus.
Does that same reasoning apply to people who work in finance or accounting?
Distrust in science is at an all time high and these frauds cause incalculable damage to scientific integrity. They are literally poisoning the well of human knowledge. That should be considered a high crime.
Lesné, who did not reply to requests for comment, remains a UMN professor and receives National Institutes of Health funding. The university has been investigating his work since June 2022. A spokesperson says UMN recently told Nature it had reviewed two images in question, and “has closed this review with no findings of research misconduct pertaining to these figures.” The statement did not reference several other questioned figures in the same paper. UMN did not comment on whether it had reached conclusions about other Lesné papers with apparently doctored images.
Horrifying. 2 years investigating and they’ve commented on two images?
It's standard. Universities always drag these things out for years, and often acquit either without explanation or with an explanation like "this unfortunate event was caused merely by enormous incompetence", even when that can't possibly be true.
Universities cannot/will not investigate their own staff properly.
No one (institutional or human), anywhere, can investigate themselves competently.
It should never be acceptable.
I still firmly believe a large portion of Alzheimer's cases are due to candida or other forms of gut dysbiosis, leading to the breakdown of the blood-brain barrier. Candida then enters the brain and the glial cells break down the candida producing amyloid plaques. But the root cause is from the gut.
The microbiome is significant. Could it not also be olfactory?
Olfactory in what way? I think toxins could definitely get in by breathing it in, e.g. like mold, which is a generally underestimated, but in my opinion significant source of complex chronic conditions. So for sure those are possibilities too.
I suspect Alzheimer's (which is a subset of dementia) is a cluster of diseases, similar to cancer. I think a large subset is due to candida, but I wouldn't be surprised if mold and other environmental toxins cause other subsets.
I have been reading Outlive by Dr. Peter Attia. I believe this research is used in the book to talk about Alzheimer's. I'm guessing the book will have to be revised. Gonna have to check in the acknowledgements of the book now.
This paper is not cited in the book thankfully.
I actually know one of the guys who faked this data from a young age. He was basically losing his mind because of rage issues due to his extreme phobia of being poor/ falling into poverty. It’s interesting to me how these companies basically played into his fear of losing his funding to push this BS.
People have been using these mice as a mouse model of Alzheimer's for decades, so there must be something here. It's not like the whole research direction was "wrong"
This article contains the actual alleged doctored images:
https://www.science.org/content/article/potential-fabricatio...
This person Lesné shouldn’t have a job at a university after this. In fact, I strongly believe this person should be behind bars for having swindled the government and tax payers out of many millions of dollars in wasted grant money. The incentives for perpetuating this kind of fraudulent research have to be utterly devastating to those who do it to even hope for a deterrence against future malfeasance.
This paper is certainly shameful.
But one should not exaggerate the impact it had. The strongest support for the amyloid hypothesis of Alzheimer's comes from multiple sources of evidence that are completely unrelated to this paper.
The problem is we don't know how many of those other lines of evidence are also fraudulent.
What we do know is that finding academic fraud is like shooting fish in a barrel. It's way too easy even for totally unfunded volunteers. The situation in science right now is reminiscent of the situation in computer security circa 1999. Back then there were so many easily exploited RCE vulns in foundational infrastructure that "hacked by teenagers" became a TV trope. Almost anyone could sit down and find a way to hack into a network without much effort. Vendors engaged in a lot of screaming and denial, sometimes even attacking the people who found exploits for them.
But, the industry changed. White hats became widespread, companies hired those ex-teenage hackers in their thousands. They put every employee through trainings. Then when that wasn't enough they started paying out millions in bug bounties, buying sandboxing companies (like Google did for Chrome) and more. Companies did this because bad security was hurting their reputations and in a competitive market that is an exploitable weakness. It was a form of self defense.
Academic science is excluded from market mechanisms by legal fiat, supposedly on the basis that this will yield better research outcomes. What it's actually done is shield universities from any need to make hard changes. Their reputation is tanking but they just don't seem to care and why should they? They'll get grants from the NIH anyway because they're all as bad as each other, and nobody in politics is talking about a total defunding of the sector yet.
Unfortunately even if the grants were cut off tomorrow they probably would still find it hard to change. It's becoming apparent that if a culture tolerates fraud for decades then eventually the leadership of these institutions are people who got there through fraud. How are universities or journals going to reform when the people who run them know full well they can't crack down on fraud without exposing their own career to audit risk?
This to me just seems untrue. What is your basis for this claim? There is plenty of research privately funded by corporations, some of which is very influential. Often this work is published by university researchers. Ask any university researcher about the numerous compliance courses we all have to take about funding and conflict of interest.
It is true that the biggest funders (NSF, NIH) are not market-focused, but for good reason. The market does not prioritize the public good. I know first hand -- my son has a rare disease (1 out of 20,000 people). There are many drug companies putting vast resources into drug development in the hopes of a huge payoff. In reality this benefits a small number of people (I remain grateful for how improvements have helped us). I'm grateful our major scientific funding bodies are not swayed entirely by market influences because it would lead to us focusing on a narrow set of scientific problems which would ultimately limit the way it helps the public good.
In any event, I work in biomedical research. I think your diagnosis (incentives, process) is correct, but the way you discuss the attitudes and motives of researchers is wrong-headed.
You say:
You're talking about hundreds of thousands of researchers as if they're all psychotic citation fanatics with no care for truth. That is not reality. I think the kind of psychotic, data-manipulating researcher who would put people's health and lives at risk for citations -- or fabricate data sets out of thin air -- is vanishingly rare. We can point to a handful of them -- the author of this paper, and the Daniel Arielys and Francesca Ginos of the world -- but there are tens of thousands of people in every field working on research in good faith, with utmost care. The vast, vast majority never have any scandal, never get caught up in data manipulation, and so on.
No field I know of out-right tolerates fraud (and I follow all the retraction stories fairly closely). I think the closest we get to "toleration" is researchers dealing with scientific problems who more or less say "we're not going to publicly flay you but behind your back we're all going to know what you did and your future is limited when it comes to big grants, prestigious invitations, and so on." People who are credibly accused of fraud become pariahs and often targets of scorn not only within the research community but in the press and wider community.
The most serious issue IMO is not outright fraud but poor norms around scientific practice, leading to p-hacking, HARKing, and other "forking paths" problems. Calling that type of behavior "fraudulent" is perhaps justifiable under some ways of thinking, but I think the word fraud mischaracterizes what is going on. There are, in fact, many serious efforts to root out this type of behavior and put in transparency rules to open up research to scrutiny, including among funders like the NIH.
> What is your basis for this claim?
Well, you answered your own question: "It is true that the biggest funders (NSF, NIH) are not market-focused." A lot of research is funded by taxes. That's an exclusion from market mechanisms. They don't have to convince the actual consumers of the research to buy it, we are all collectively forced to buy it by law.
> No field I know of out-right tolerates fraud
I know quite a few such fields, so we might have a different definition of "tolerate". After all this story contains the following paragraph:
“It’s unfortunate that it has taken 2 years to make the decision to retract,” says Donna Wilcock, an Indiana University neuroscientist and editor of the journal Alzheimer’s & Dementia. “The evidence of manipulation was overwhelming.”
We're talking about a retraction here, which is the weakest response possible. So ... it took two years of "investigation" to do nearly nothing, after other people did all the investigative work for free, and one of the authors continues to be employed with no consequences whatsoever even though his co-author admitted the figures were tampered with. I'd argue this is what institutional tolerance of fraud looks like.
There are no "consumers" who "buy" basic research.
You're talking about fundamental research into basic scientific questions as if it were the same as potato-chip manufacturing.
The "consumer" who "buys" research in a commercial setting is usually the executive who is funding the department. In that context stuff like p-hacking, HARKing etc doesn't happen much because at some point your boss's boss is going to read your internal paper and notice that your claimed discovery has nothing to do with what you were originally asked to investigate. In academia that doesn't matter, it still counts as a discovery because nobody is really checking your pre-registrations. In corporate research it'll either be checked by the people paying your salary, or at a larger scale, it'll be checked by a regulator who is forcing you to pre-register your clinical trials.
There's no "consumer" who "buys" research into whether protons decay. That's basic research that will only occur if funded by national agencies that are not motivated by return on investment.
As for commercial research, similar problems of fraud exist as in academic research. Instead of prestige, the motivations are things like bonuses and promotions.
Academics actually care a great deal about fraud, funding agencies hate fraud and punish it, journals hate fraud - everyone dislikes it. Competing labs have every incentive to catch fraud conducted by their rivals. The idea that academia is rife with fraud and that nobody cares is just not true.
There will always be a certain level of fraud not just in research, but in every economic sector, every intellectual pursuit, and every sport, commercial or not. There's no system that will perfectly eliminate it, but because of its empiricism and openness, science is fairly good at correcting itself over time.
Physics gets large amounts of commercial investment. Quantum computers have been invested in heavily by the private sector for years. The defense world invests in laser research, etc. Plenty of examples out there. You can always find a topic you personally feel is super important even though it has no practical applications today, but such arguments are unfalsifiable. I can just as easily argue [more] proton decay research doesn't matter [yet] and you can't prove I'm wrong, so it's not worth going there.
The same problems exist but not on the same scale. If your employer discovers you committed fraud to get a promotion you will certainly be fired and quite probably be taken to court by their legal department. If the fraud is at the level of the company they risk destruction and imprisonment by the government. It's not like in academia where they'll sit on it for years and then, maybe, request a little notice to be put on the paper's web page - all without the government even noticing let alone caring. The huge difference in consequences yields different risk/reward ratios and that's reflected in how often these problems are found.
> Academics actually care a great deal about fraud, funding agencies hate fraud and punish it, journals hate fraud - everyone dislikes it.
Do they? How is the co-author of this paper still employed if funding agencies punish it? Where are the university funded research-police departments? Why do we keep hearing cases like this Alzheimer's one and why are there never any announcements by Vice Chancellors about doubling investment into fraud investigations as a consequence? Why is a research audit not something that these fraudsters fear? How is it the case that publishers discover after the fact that dozens of their journals have been completely compromised by paper mills, instead of it being discovered via some more active process before publication occurs?
> science is fairly good at correcting itself over time
Absolutely not. If science was good at self correcting it wouldn't take nearly two decades for someone to notice that a widely cited paper was forged, and the people who notice these things wouldn't need to be anonymous. But they do. Look at the Gino case. She launched a massively well funded lawsuit against the people claiming she engaged in fraud. That's the exact opposite of self correction.
Almost all funding for basic research, including for physics, comes from governments. There are a few niche areas of basic research, like quantum computing, that also attract commercial investment, and there is plenty of applied research, but governments are almost the only game in town for basic research.
I think fraud is actually a much larger problem in commercial research, because the incentives to cheat are much stronger. There's real money at play. Theranos was a massive fraud. There's plenty of fraud and misconduct in the pharma industry. And when it comes to academia, the fields that have the most commercial potential (like biomedical research) have the worst problems with fraud.
Is the co-author guilty of fraud?
It's the funding agencies, the journals and other academics that are most involved in finding fraud.
Because there's a huge volume of research, and some small percentage of it will be fraudulent.
That might be a bad allocation of resources. If fraud is a rare phenomenon and isn't severely impacting a field, then the current level of investigations might be sufficient. Add to that the fact that the way most of these frauds are uncovered is by competing researchers.
You've diagnosed the issue then recommended leeches and bleeding.
'Market mechanisms' already exist - no company is precluded from doing research. We can discuss the deficit in research investment that short-termist 'market mechanisms' have led to in contemporary corporations, but it's absurd to claim that the market can or should replace academic research.
It's pretty clear that in practice the 'publish or perish' academic culture fostered by the financialisation of the university, the rise of the adjunct and the death of tenure have contributed heavily to the falsification of research. In practice encouraging replication, self directed research and significantly more 'pure' 'blue sky' research than governments and universities are inclined to do, would significantly alleviate this problem.
Academia needs far less exposure to 'market mechanisms'. They're perverse incentives for good science.
To say there's a deficit in research spending pre-supposes there's a correct absolute level of resource allocation that is easily discoverable. But the reason we need markets in the first place is that governments aren't able to discover the right levels of resource allocation on their own. That's why the USSR ended up with lots of steel foundries but no software companies, nor even any competitor to the internet.
If anything research is clearly over-funded right now, especially in some fields like medical research. That's why so many scientists turn to fraud in desperation. There just aren't enough genuine discoveries to go round.
> encouraging replication, self directed research and significantly more 'pure' 'blue sky' research
Replication is the opposite of self directed blue sky research, so how do you suggest it be encouraged whilst simultaneously reducing the power of the money givers even further? BTW, many bogus papers will replicate whilst still being wrong. Replicability is not a synonym for good science.
Really, this is a very common claim that needs to go away. The solution to fraud and incompetence is not to give incompetent fraudsters even more freedom and money.
Markets don't discover "the right" (i.e. social welfare maximizing) resource allocation; they discover, under ideal conditions, a Pareto-optimal allocation. This is a very weak condition: "Jeff Bezos owns everything" is Pareto-optimal. You need redistribution if you want to turn that into a socially optimal one.
In realistic conditions, they don't even do that: market mechanisms will reliably underinvest in things with positive externalities, and the larger the externality the greater the degree of misallocation. The positive externalities of research are enormous compared to the value captured by the researchers.
No market system anywhere ends up with one person owning everything. That is the standard under non-market systems, though.
> The positive externalities of research
... are only relevant if it's actually research, and not something that looks a lot like research without actually being so.
In reality, it is in non-market economies that you have a few owning everything: North Korea (the Kim family), USSR (the party), etc.
And a world class propaganda game. Go onto mainstream social media, pick a science fan at random out of the crowd and pick an argument with them about science. The average fan would have you believe that science is just short of perfect.
In a sense it's hard to blame these people, considering what two years of the media machine's full court press of pro-science culture war propaganda during a global pandemic does to a mind of average intelligence.
I mean, I suppose "winning" is important folks, but is losing your soul in the process worth it?
Maybe it should become best practice to publish honeypots. Anyone who claims to reproduce these papers or cites them gets autolisted for fraud.
Retracted and sham papers are actually a major problem. An estimated 10,000 papers were retracted in 2023 (https://www.theguardian.com/science/2024/feb/03/the-situatio...) and that is thought to be only a small percentage.
Not everything gets retracted either. There's a surprisingly deep rot in many parts of science. There are strong incentives to publish, and a lot of the methods you can use to inflate statistical significance (i.e. p-hacking) are hard to distinguish from publication bias and other innocent explanations for falling outside of statistical expectations.
Preregistration might help, but it doesn't really address the misaligned incentives that are at the heart of academic fraud.
Even articles that publish legit findings tend to embellish data. I do this for a living, I often try to reproduce prominent results, and I regularly see things that are too good to be true. This is bad because it pushes everyone to do the same, as reviewers are now used to seeing perfect and pristine data.
I have been asked to manipulate data a few times in my career. I have always refused, but this came at the cost of internal fights, getting excluded from other projects for being "too idealistic", or missed promotions. Incentives are just perverse. Fraud and dishonesty are rewarded, pretty depressing.
Tragedy of the Commons Ruins Everything Around Me.
Everyone wants answers, so anyone that provides an answer is elevated, independent of if the answer is right or not. To wit: the current AI push.
Clips from The Big Short surfaced in my YouTube feed recently, and the way you worded this reminded me of the scene with the rating agency.
Is scientific research and publishing headed for its own CDS/MBS-esque implosion?
I think academic research is becoming very inefficient, and traditional Academia might eventually become stagnant. If you don't play the game I described above, it is really hard to stay afloat. I guess industrial labs, where incentives are better aligned, might become more attractive. I have seen lots of prominent scientists moving into industrial labs recently, which would be something hard to imagine even a few years back.
Thank you for doing the right thing.
Yep. Worse, p-hacking can be done by accident. I mean, the term implies intent, but a dogshit null hypothesis is problematic regardless of whether it is dogshit on purpose or merely due to lack of skill on the part of the researcher. Either way, it pumps publication numbers and dumps publication quality. If 100% of researchers were 100% honest, we would still see this effect boost low-quality research.
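The multiple-comparisons effect behind accidental p-hacking is easy to demonstrate. Here is a minimal sketch (illustrative only; the parameter choices are mine, and real p-hacking also involves correlated tests, optional stopping, and flexible analysis choices): if a researcher tests many independent hypotheses on pure-noise data at alpha = 0.05, the chance of at least one "significant" finding grows rapidly.

```python
# Sketch: how many null-only "studies" report at least one significant
# result, purely by testing multiple hypotheses at alpha = 0.05?
import random

random.seed(0)
ALPHA = 0.05
K = 20           # hypotheses tested per study, all true nulls
TRIALS = 10_000  # simulated studies

false_positive_studies = 0
for _ in range(TRIALS):
    # Under the null, each p-value is uniform on [0, 1).
    p_values = [random.random() for _ in range(K)]
    if min(p_values) < ALPHA:
        false_positive_studies += 1

rate = false_positive_studies / TRIALS
print(f"Analytic:  {1 - (1 - ALPHA) ** K:.3f}")  # about 0.64
print(f"Simulated: {rate:.3f}")
```

With 20 tests, roughly two out of three all-noise studies will have something "significant" to report, which is why even honest researchers inflate the literature when the null hypotheses are poorly chosen.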
This isn't a big deal since no one reads most of those papers; it's mostly an invisible sacrifice to the metrics gods. Very prominent papers like the one discussed, on the other hand, have much bigger consequences.
Some other fraud in the same subfield led to an amoral douchebag becoming President of Stanford, where he destroyed the campus culture (read https://www.palladiummag.com/2022/06/13/stanfords-war-on-soc... to verify) and then supported DEI run amok (read https://www.insidehighered.com/news/2023/01/11/amid-backlash... to see some of the fallout from that).
This is in addition to wasting a big chunk of the billions spent each year on Alzheimer's research on false leads (see https://www.alz.org/news/2024/congress-bipartisan-funding-al... for a source on the billions).
Yes, very serious consequences indeed. (Note, I did my best to back up my statements with high quality unbiased factual references. Please read them before disagreeing with my description of the result.)
Maybe retraction should be done in a competitive fashion, with negative points awarded to researchers, universities, and journals.
That's kinda misleading. Not every retracted paper contains doctored results. There are just a ton of mistakes as well.
I’m 100% in agreement that there is a massive reproducibility crisis in science and that the publish-or-perish model is broken.
But, for completeness, paper retractions can happen for many reasons, not all of them nefarious, though it could be that most retractions are from authors trying to game the system and getting called out. For example, if the terms of using a certain data set change, you could be required to retract your paper and remove that data from the analysis.
That number alone doesn't say much. Yes it sounds like a lot in absolute terms, but consider that four million papers were published in 2023 alone, and a bit less than 70 million since 1996 [1].
[1] https://www.scimagojr.com/worldreport.php
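Taking both figures above at face value, the implied retraction rate is small. A quick sanity check:

```python
# Rough proportion implied by the figures above (both are estimates).
retracted_2023 = 10_000
published_2023 = 4_000_000

rate = retracted_2023 / published_2023
print(f"{rate:.2%}")  # 0.25%
```

That is one retraction per 400 papers published in the same year, though as noted elsewhere in the thread, retractions are thought to catch only a fraction of the problematic papers.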
Not only effort: millions and millions of public and private dollars wasted on followup research on a false premise, all so that an author could get citations. What a cost.
Playing devil's advocate: perhaps the authors strongly believed in the method but have failed to produce strong enough evidence. Then the goal was to renew financing in hopes of getting somewhere with the research, not to just get citations.
This of course is pure speculation
I think the point you're making is that there need not have been any malicious intent, and I accept that, but I don't really consider this a devil's advocate position, since strongly believing something will work is still not grounds enough to lie about outcomes.
Fraud is still fraud no matter what color you paint it.
It's really not an excuse. This has directly led to decades of wasted time/money/effort searching for a cure of a very prevalent disease.
It does lend to an interesting thought experiment, though.
Imagine the researchers were right after all. They just needed more time, and so they faked some results. We might have ended up with an underdog/feel-good movie about how the researchers bravely persevered in spite of the drying up of their funding.
I'm sympathetic to that: the pressure to have produced results must have been very, very heavy.
What’s the difference?
I don't think "Fraud" counts as a devil's advocate position?
Not so bad because they were just trying to secure funding that might have gone to research not based on fraud?
This kind of fraud should carry a prison sentence.
I wonder what the effects on retractions would be though. Some researchers might go down fighting or refuse to cooperate with investigators.
Absolutely. It’s not just a bad path, it’s taken researchers down a road that robbed approaches that could have helped people of resources because they lied.
A lot of science is fake, if not most of it. I learned the hard way in academia that it really seems the majority of PhD level researchers are there because they are willing to stretch the truth to "tell a good story".
Talk about a sweeping conclusion based on n=1! Very unscientific ;)
Sounds ready to publish
Google Scholar says it's been cited 3,455 times. (Having a quick look at the top three results showed the paper was cited together with other papers in the same area, though.)
https://scholar.google.com/scholar?cites=1621513420842042156...