
Author-paid publication fees corrupt science and should be abandoned

drgo
46 replies
3d15h

The entire research field urgently needs major reform. There are so many corrupting incentives for its practitioners that most research findings (at least in my field: medicine) are suspect. Most researchers have neither the time nor the energy to work on the important problems, to perfect their craft, and to understand the limitations of scientific methods. The metrics used in evaluating grant applications and academic promotion push researchers to work on short-term, sensationalist, and easy-to-publish ideas. This is sad, because if there was ever a time we needed solid, unimpeachable science, it is now.

ilrwbwrkhv
23 replies
3d12h

Yes, you are absolutely right, and this in turn has led to people disbelieving science; issues that were once bipartisan, such as climate change, have now been termed conspiracies by people on the right. This has only happened because people no longer trust scientists and science.

zaik
14 replies
3d10h

I'm not sure about the direction of causality here. Anti-vax people or climate change deniers also object to well-replicated results and would believe what they want to believe anyway.

vixen99
9 replies
3d7h

Can you please define terms so that I can understand exactly what you mean? Meanwhile I decided to check Wikipedia. https://en.wikipedia.org/wiki/Climate_change_denial says

Climate change denial (also global warming denial) is a form of science denial characterized by rejecting, refusing to acknowledge, disputing, or fighting the scientific consensus on climate change.

I cannot assume this corresponds with your own use of the term, which is why I suggest defining terms. But I'm wondering if anyone reading this sees a slight problem in the Wiki wording? Let's assume any disputation will have to include peer-reviewed evidence if it's to be seriously considered, though according to Wiki it won't be, whatever the evidence. This is science, according to an anonymous editor of Wikipedia? As described by Wiki, this is a religion.

dekken_
8 replies
3d7h

The fact that there is even a term like "climate change denial" means that it is feasible to deny "the science" (science is about doubt), because climate scientists actually can't prove anything, fundamentally. They only have stats and statistical models, and at our technological level there is no way to model even a single mole of carbon-12 (12 grams of atoms), so you think they can model an entire planetary system? Accurately? No, it's not possible.

They also like to muddy the waters: climates change, and climates aren't even a thing but an aggregate, obviously. What's being questioned is anthropogenic climate change, and whether there is any certainty that it will cause disasters any more than if there were still 300 ppm CO2 in the atmosphere. Maybe 400 ppm is bad; it doesn't seem so though, and we are living the experiment.

BUT also, I would bet air conditioners are far worse for the planet than 400 ppm CO2, even if they act together. ACs are absolutely abysmal thermodynamically: heating up one space to make another space temporarily colder, with a high likelihood of CO2 release.

Slight tangent. Have you heard the sixth Dimmu Borgir album? Good stuff. https://www.youtube.com/watch?v=TFRwNYPsHwY

I wonder whether it will generate any discussion, or just be castigated via downvotes; your choice.

MaxBarraclough
3 replies
3d4h

The fact that there is even a term like "climate change denial" means that it is feasible to deny "the science" (science is about doubt), because climate scientists actually can't prove anything, fundamentally.

There are people out there denying the shape of the planet, even though it has been observed directly.

dekken_
2 replies
3d4h

Am I talking about those people?

You're conflating things that are empirically verifiable with things that are not.

MaxBarraclough
1 replies
3d4h

I was responding to what you wrote:

The fact that there is even a term like "climate change denial" means that it is feasible to deny "the science"

This is clearly untrue. Some people, such as the 'flat Earthers', deny plain facts.

dekken_
0 replies
3d3h

Yeah, good point. I'll try to do better.

zaik
1 replies
3d6h

"Psychology can't produce real results because we can't even model a single brain cell perfectly. They only have stats, and stat models."

dekken_
0 replies
3d6h

You want people to think you're a specious disingenuous person? It's of your own doing.

acidburnNSA
1 replies
3d4h

We also can't model every single one of the atoms splitting in a nuclear reactor core, but thanks to stats and models, we can predict how they behave to very high precision.

Knowing whether or not anthropogenic greenhouse gas emissions impact global climate can be reasonably achieved with some modeling assumptions.

We may not be able to predict the weather in each location for the indefinite future, but we certainly can and do understand the general impact of bringing billions of tonnes of new greenhouse gas into a system.

kstrauser
0 replies
3d2h

Similarly, “We can’t even model the inside of a proton. How can you possibly know when a pot will boil?”

Turns out you can get pretty far modeling the macro behavior without perfect knowledge of the micro details.
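The point being made here, that ensemble behavior is predictable even when individual components are not, is just the law of large numbers. A minimal Python sketch (toy numbers chosen for illustration, not a physical model):

```python
import random

random.seed(0)

# Each "particle" gets an energy drawn from a distribution; any
# individual draw is unpredictable (exponential draws vary wildly).
def particle_energy():
    return random.expovariate(1.0)  # distribution mean = 1.0

# A single particle tells us almost nothing...
print(particle_energy())

# ...but the ensemble average over a million particles is tightly
# constrained around the distribution mean, without modeling any
# single particle in detail.
n = 1_000_000
mean_energy = sum(particle_energy() for _ in range(n)) / n
print(round(mean_energy, 3))  # very close to 1.0
```

The spread of the sample mean shrinks like 1/sqrt(n), which is why statistical models of reactors, boiling pots, or climates can be precise about aggregates while saying nothing about individual atoms.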

pessimizer
0 replies
3d6h

Anti-vax people or climate change deniers also object to well-replicated results and would believe what they want to believe anyways.

Accusing your enemy of being beyond argument is just an underhanded way to avoid making an argument.

There will always be bad beliefs, we will not always know at the time which beliefs are bad, and the response to what we think are bad beliefs should be good science, not massive fraud studies on hydroxychloroquine published in one of the most prominent journals in the history of medicine.

mike_hearn
0 replies
3d8h

Replicable is not a synonym for correct, even though some discourse about the science crisis treats it that way. It's necessary but not sufficient. The academic literature is full of studies that will replicate every time but whose methods lack rigor or are just outright invalid. The classic correlation/causation fallacy pops up everywhere, to pick one example. A paper that shows a correlation and then declares a causation will replicate just fine but still yield wrong conclusions.

jononor
0 replies
3d9h

It does give them some ammo for low-brow dismissal of the entire field of science, which the more extreme may use to convince the more reasonable people. Even though the argument is rarely made in good faith, we do need some strategy for dealing with it...

Being perfect is not really a realistic option, though; mistakes will be made in an endeavour as complex as science. But if we can no longer legitimately say "it's only _a few_ bad apples", then we must improve things drastically, or risk losing all credibility.

fluidcruft
0 replies
2d4h

There's an interesting documentary about flat-earthers called "Behind the Curve" that gets into individual flat-earther origin stories. The interesting thing is that they are not stupid; it's more that they have learned not to trust people in positions of authority on topics of science (often due to an experience in grade school).

Towards the end there are a few funded research projects that are eerily more similar to the social dynamics of actual "legit" academic science than the filmmakers probably realize.

Ar-Curunir
5 replies
3d11h

Declining trust in science is not because of lack of open-access publications.

The right is not ignoring research because it is behind a paywall or because it hasn’t been reproduced. In fact, the right has chosen to willingly ignore research that has been painstakingly reproduced, all in service of the god of greed and ideology.

DataDive
4 replies
3d4h

In my opinion the declining trust in science is due to science being far more political, far less objective, and less validated than in the past.

llamaimperative
0 replies
3d4h

Yeah definitely dude. Climate disbelievers are actually doing good faith deep dives into the studies and identifying specific, real problems, and pushing for better science to be done.

This is evidenced by… wait… what exactly?

kergonath
0 replies
2d23h

How is science political? I am happy to work from that premise, but it needs more than an assertion to be taken seriously.

My experience is that people complaining about science being partisan (which is what they tend to mean by "political") do not understand much about science or how it works. I am willing to believe that you are not one of those people, but what you write is a bit close. So, could you give some particular examples or facts rather than empty phrases?

josephg
0 replies
3d

Do you have any evidence for this?

I suspect in the 60s and 70s science was (in the public imagination) wrapped up in a positive story of the future, full of space travel and prosperity for all. And now science is wrapped in a story (at least for some) of doom and gloom. Climate wars, or creeping authoritarian governments telling people what to think.

I suspect the experience of actual scientists, in labs trying things out, hasn’t changed anywhere near as much as the public’s stories of science have changed.

Ar-Curunir
0 replies
3d2h

Which era of science was apolitical, exactly? The one where Galileo was prosecuted by the church? Or the one where the US military invested billions of dollars into early computer science? Or the one where nearly every physicist was staunchly anti-proliferation?

Science has never been, and will never be, apolitical. It is just that the politics of some scientists do not align with yours or with some politicians', and so you're noticing the difference.

wakawaka28
0 replies
3d12h

What if science and politics really are corrupt enough to corroborate the conspiracy theories? We need an International Journal of Conspiracy Studies right now!

Mountain_Skies
0 replies
3d11h

Funny how the unabashedly partisan are the ones who make partisan statements in the same breath as their denouncements of partisanship.

rscho
8 replies
3d6h

In clinical medicine, the system needs no reform, because almost nobody's goal is to do research. Clinical research is pretty unique in that it serves almost entirely hierarchical advancement rather than science. Be honest: when was the last time you saw a clinical researcher formulate a consistent research hypothesis and build a solid project around it? Almost all projects are basically: 'I'll test that with homemade shitty stats because it's trendy!'

So IMO, the first reform to conduct is to turn clinical researchers into real scientists. That won't be easy.

kstrauser
2 replies
3d2h

I was told long ago that the first rule of evaluating a medical treatment is to ignore all the clinical studies because nothing useful comes out of them.

I don’t know if that’s actually the case, but that seems to be the prevailing thought. Which is awful, because it should be the opposite: clinical studies should be the gold standard. They’re often conducted by people who are better doctors than scientists, though. A zero-blind trial of n=4 might perhaps not be the definitive study the doctor had hoped for.

(Not to pick on doctors. My wife’s one. Software developers often fall into the same “I’m good at this one hard in-demand skill so I must be good at all of them” trap.)

p00dles
1 replies
2d9h

To ignore _all_ clinical studies because “nothing useful comes out of them” seems extreme. Yes, there are poorly run studies, and conclusions can be inappropriately drawn from studies (ex: when a study wasn’t meant to answer certain questions).

However, well-run randomized controlled trials (RCTs) in medicine absolutely exist and can be a very strong form of evidence when interpreted correctly. This is especially true in comparison to other disciplines (economics, etc.) where such trials are impossible and researchers wish they had a research tool as valuable as an RCT.

kstrauser
0 replies
2d3h

Fair. It was mainly in the context of “clinical studies show that unlikely thing X cures Y”. No randomized double-blind trials? Sure.

firejake308
1 replies
3d5h

Interested to hear what you mean when you say "turn clinical researchers into real scientists." Are you thinking of expanding the MSTP? Or just better statistical literacy for physicians? Or something else entirely?

rscho
0 replies
3d4h

I don't know of any magic formula. We'd have to force the change somehow, because it won't correct itself. But the path is unclear, if there is a path at all. The current situation is that most clinical research is performed by MDs/MD-PhDs with no scientific background other than what is taught in medical/doctoral school, i.e. they lack the basics, and they are usually more interested in advancing their careers than in advancing science. That said, systems shape people, and not the other way around.

drgo
1 replies
2d12h

I am an MD/PhD, and I agree that a lot of the research done by clinicians is self-serving and useless. But most medical/health research is not done by physicians, who are typically quite busy and generally not motivated to do anything that involves statistics and takes time and effort. Also, most clinician-led research is simply ignored, because it tends to be overridden by the results of large clinical trials usually conducted by professional scientists.

Where I see major problems are two fields. The first is epidemiology (which receives about 30% of research funding), where most of the research is done by non-clinicians and often involves data dredging and post-hoc theorizing, motivated by easy access to large amounts of pre-collected data and the availability of statistical software that permits fitting black-box models that typically show whatever results are good for publication. This field is the source of the contradictory results we read every day in the news, where something and its opposite are both good and bad for every kind of imaginable illness or symptom.

The other is a lot of basic sciences research, which typically represents about 60% of all funded research. The great majority of that research is baseless, as was shown by efforts to replicate many of the findings. Billions of dollars are spent searching for genetic and other causes of illness using small, poorly designed studies that can neither prove nor disprove anything. So the problem is much bigger than just poorly trained and busy clinicians trying to prove a pet hypothesis. The problem is everywhere.

rscho
0 replies
2d11h

I also agree with what you wrote here. But I have much more experience with clinical research, and know less about other fields.

jpeloquin
0 replies
3d

So IMO, the first reform to conduct is to turn clinical researchers into real scientists. That won't be easy.

Mostly, there are not enough hours in a life to be both a good doctor and a good scientist. The few MD-PhDs who give equal weight to both areas are both brilliant and extremely driven. Multidisciplinary teams seem to have a better chance at success.

I guess if the scientific education were done early (undergraduate at the latest), dual training could work. Once medical training and practice start, there isn't a lot of time left over. And early / non-practicing science education, for whatever reason, doesn't seem to be very effective.

EvgeniyZh
5 replies
3d7h

What kind of reform?

sova
2 replies
3d2h

Get paid to answer important questions instead of getting paid to provide sought-after results. I think that's the end-point; the "how we get there" is now the question!

EvgeniyZh
1 replies
3d

Who decides what's important?

sova
0 replies
17h13m

Great question. Who decides currently?

paretoer
1 replies
3d3h

I don't personally see a solution other than student loan money being much harder to get.

There is so much bad research because there are too many researchers.

For myself, I think I am smart enough to have worked on a PhD in economics but not smart enough to have done anything worthwhile or to have advanced the field. I am certain I would not have. Then, in the face of massive student loan debt, I would have produced junk research and tried to game the system to make the investment worthwhile. The incentives in the current system are all ridiculous for most.

Cutting student loan money in order to produce better scientific research would be a good way for a politician to optimize for losing an election.

Personally, I don't think there is a way out. If anything it probably just gets worse.

EvgeniyZh
0 replies
3d

There are countries where student loans are not a thing and the science there arguably faces the same problem.

A PhD is not paid for by the student, even in the US; it's all scholarships.

hedora
3 replies
3d2h

I think the scale of the problem varies wildly between different fields of science. For instance, in computer science, double-blind reviewing is the norm, and the biggest issue is that there is an incentive to produce more papers with smaller contributions, instead of a mixture of larger, better papers and 1-2 column notes that describe important but simple results (such notes used to be published routinely).

In other fields, not only are submissions not anonymous, but editors sometimes override the reviewers (in both directions), for what appear to be political purposes.

Der_Einzige
1 replies
3d2h

Uhh, the elephant in the room for CS/ML publications is that a lot of people hardcore lie on their results page, since almost no one tries to reproduce anything.

I've witnessed folks straight out of their undergrad with literally one workshop paper at an ACL-track conference get thrown $150K starting offers by no-name startups. Imagine how important it must be to get your first NeurIPS main-conference publication. The ROI likely amounts to $300K+ per year, so if anything, I'm surprised that cheating/lies in the academy aren't even more brazen. Dynamics like this are why both Stanford's and Harvard's presidents had to step down for gross academic plagiarism/misconduct.

1. https://www.science.org/doi/10.1126/science.359.6377.725 (2018)

2. https://www.technologyreview.com/2020/11/12/1011944/artifici... (2020)

3. https://www.nature.com/articles/d41586-023-03817-6 (2023, Nature)

godelski
0 replies
3d1h

I've seen it happen even when source is available. I've also seen a lot of really terrible math mistakes in ML because people don't understand the underlying assumptions of the metrics they use.

The rot is everywhere because people forgot the goal. We get so caught up in the metrics used to judge us that we forget what they were proxies for in the first place.

jltsiren
0 replies
2d22h

Double blind reviewing is not the norm in computer science. You will sometimes see it, but it depends on the subfield, the specific conference/journal, and even the year. In my experience as an author/reviewer, the submissions have been anonymous maybe 10% of the time.

And the biggest issue I see in CS is the focus on gatekeeping. When many (most?) papers are published in conferences with a single round of reviews, peer reviewing becomes more about accepting/rejecting the paper than improving it. The average review I receive in bioinformatics is more useful to me than the average review in CS.

sa-code
0 replies
2d11h

What are the current corrupting incentives? And what would an ideal research world look like?

nine_k
0 replies
3d10h

It looks eerily similar to the problems that happen in businesses when all the executives care about is stock price growth in the current quarter.

aborsy
0 replies
3d10h

It has become a game of reputation, citations and the number of papers. It works the same way that a company does, with the actual work being done by students and postdocs and citations accumulated by management (a lot of them don’t do research, rather spend time putting their names in different papers, collecting more and more students, and playing academic politics). After a while, they become a luminary and collect awards.

godelski
24 replies
3d9h

There are so many worse things. Like how people think research can be measured by citations and h-indexes.

Yes, these things help, but if you're serious about hiring or promoting, just read the god damn papers. The numbers are no substitute, and it's insane that so many people outsource their thinking to numbers when there's such rich data within arm's reach. No wonder people have hacked the metrics; no one vets them -__-
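For context, the h-index being criticized here is trivial to compute, which is arguably part of its appeal as a shortcut; a quick sketch of the standard definition:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Very different publication records can produce similar numbers,
# which is exactly why the metric is a lossy summary.
print(h_index([10, 8, 5, 4, 3]))   # 4
print(h_index([100, 90, 80, 2]))   # 3
```

Note that a record with three papers of 80+ citations each scores lower here than a record of five modestly cited papers, illustrating how the single number hides the shape of the work.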

As for journals and conferences? Fuck 'em. Post to arXiv or, better, OpenReview. Engage with people who actually care about your work. The whole purpose of publishing is to make your work available and to communicate with your peers. We've solved that. We don't need them. (Well, it's still nice to physically meet with people. The conferences can stay, but they should focus on that, not on being "arbiters of truth".)

JohnKemeny
23 replies
3d6h

You don't strike me as someone who has hired a lot of faculty members... If you're hiring in your own field (which you should), you don't have to read the papers. You're already familiar with most of them, and judging by the venues they're published in, you also know their "level", so to speak.

As for open review ... You also don't strike me as someone who has reviewed a lot of papers.

I've served on several program committees, and reviewed hundreds of papers. I'm happy with blind reviews.

beedeebeedee
11 replies
3d4h

If you're hiring in your own field (which you should), you don't have to read the papers. You're already familiar with most of them, and judging by the venues they're published in, you also know their "level", so to speak.

You're judging by reputation rather than substance. That opens up the possibility of manipulating reputation to achieve career goals instead of doing the hard work.

jltsiren
5 replies
2d22h

Hard work doesn't scale. If you are already an expert in the field, you need a few months to properly evaluate someone's work. And if there are 100 candidates for the position (which is pretty common), you have to spend decades evaluating them.

In practice, you reject the vast majority of the candidates by reputation, publication venues, paper counts, grants and awards, and similar metrics. And then, if you are particularly interested, you may devote a few days on superficial reading of the works of the shortlisted candidates.

godelski
4 replies
2d18h

  > Hard work doesn't scale
Not with the current framing but you contradict yourself.

It's true that there are hundreds of candidates, but that's... scale. Getting a PhD is no easy task. And I'm not saying you should abandon the metrics altogether; that would be a naive misinterpretation, especially given what I made explicit. Instead, consider that the effort in evaluation should scale with the role. Maybe you can reject 80% of candidates by citations and indices alone. Do you miss some gold candidates? Probably, but it's fuzzy. For your remaining 20% you can consider their niches, funding capacity, and read at least the abstracts of their top works. But in your final selection you should be nuanced and read a selected list of their works (not all; let the candidate tell you).

Is it hard? Hell yeah. But why should something being hard stop us from doing it? That's called giving up. It's a crazy mentality to have as a researcher.

  > you may devote a few days on superficial reading of the works of the shortlisted candidates.
So it's okay to do this with a paper when we review for publication, but it's too much work when hiring someone who's going to be your colleague for years to come?

Come on, I can't be the only one that thinks this is ludicrous.

jltsiren
3 replies
2d14h

Evaluation by reputation and metrics is the norm. Evaluation by the quality of your work is the exception. You have to be particularly good by reputation and metrics in order to reach the stage where your work may be evaluated.

And even when your work is evaluated, it's based on superficial reading. The people reading it (who are hopefully experts in the subfield) don't have the time to become experts in that specific niche to judge it properly. What they do is similar to peer review. And like all peer review, it's far more useful for spotting weak points and suggesting improvements than for ranking people/works.

godelski
2 replies
2d1h

  > Evaluation by reputation and metrics is the norm. Evaluation by the quality of your work is the exception
You don't see a problem with this? Because I sure do.

  > And even when your work is evaluated, it's based on superficial reading. The people reading it 
Ditto

This is exactly what I'm complaining about and saying we should do better.

Just because things suck doesn't mean you need to defend them. We can in fact change things.

jltsiren
1 replies
2d

I don't see a problem that could be solved within the current constraints. If we could convince taxpayers and private funders to double or triple the funding for academic research, many things could be done better. But today we have too many people trying to do a career in research and too little funding for them. Most issues in the academia arise from that.

godelski
0 replies
1d11h

Then look for ways to solve it. I clearly didn't present a full solution and I don't have one. But the biggest issue is people being apathetic or trying to say there is no problem[0].

We're researchers. We solve crazy hard problems all the time. I'm not saying our "house" needs to be completely in order, but right now it seems to be in dismay. I'm certain we can solve this. We put a fucking man on the moon, we found the Higgs and gravitational waves, we made computers talk, and we rid the world of deadly diseases. Surely we can solve a problem like this. If anything, those should be reasons to motivate us to solve this, so we can do more! But I do think a problem is how people interpret economics. Maybe a research paper doesn't directly lead to a trillion dollar company, but those companies certainly rely on a lot of publicly funded research. The returns on investment are pretty high and that's worthwhile.

[0] I believe some feel the need to defend a system due to their success in it. This needs to be disentangled. A critique on the system is not a critique on "you" or "your work".

kergonath
2 replies
2d23h

You're judging by reputation rather than substance.

Is there an alternative, though? When hiring someone we use their degrees, experience, and credentials, all of which are indirect ways of estimating competence. We cannot realistically expect interviewers to know all about someone’s career.

It’s the same thing for careers evolution. HR people don’t have time or skills to know about the science, and good scientists already spend way too much time reviewing articles or grant proposals, and sitting in various boards and panels. There is no need for more administrative work.

I don’t think the current systems are ideal, but I think improving them by fixing incentives would be more efficient (and that’s not only for scientists; it’s the same for any major organisation or company).

godelski
0 replies
2d18h

  > Is there an alternative, though?
Yes

I said "read the god damn papers", but this is a condensed version of saying: be more nuanced and familiar with the person's work. Metrics are signals, not answers. They are always fuzzy, and we need to recognize that and bring it into the core of how we evaluate. You'll never be perfect, and while we know not to let perfection stand in the way of good enough, neither should you let the lack of a global optimum stand in the way of progress. There's always room to improve.

At the heart of academia we push human knowledge forward, even if just by little nudges at a time. But this means the environment changes as we succeed. What worked in the past may not work moving forward. And as we all should be intimately familiar with, given enough time Goodhart's Law always wins.

But there are so many problems with becoming over-reliant upon metrics. A metric is not a static measurement; it is a degrading one, because we get lost seeking success for ourselves, since doing so makes it easier to do the work we want to do (and ideally have more impact on humanity). Because of this, the best way to advance your career is not to continue researching but to grow the size of your lab and get grad students and postdocs to research. You need to become a PI, whose job is political and about funding. This person does need a strong PhD background, but it's also silly that it's extremely hard to find a path where your primary job after getting a PhD is... research...

But pushing the edge of what we know is fucking hard. It should be no surprise that it is hard to evaluate the performance of people trying to do this. So forgive me if I think that, in addition to using metrics like citations, venue pubs, and h-indexes, you can take the time to read your colleagues' papers. I'm really not asking for much here.

One of the things I also find rotten in academia is this isolation. It's weird that we don't know, intimately, the work of those in our department. We should be reading their papers in the first place! I'm glad academia encourages collaboration with groups outside your institution, but it shouldn't come at the cost of in-house collaboration. Both are valuable.

There's just so much more and I think it's why many are willing to tear down the system and start over. I'd rather not, but things are too broken for patchwork. We don't need to tear it down, but we do need to do some serious renovation.

I encourage everyone to think deeply about the problem, even if you don't think there is one (especially if you don't!). Address the underlying assumptions that no one says. Consider when they hold and when they don't. Then ask if this works. If you believe so, great! I'd love to hear that opinion. But I find it absurd we can't spend a little time questioning the vehicle we are all riding in. It should be questioned frequently and regularly. Because maintenance is far easier than fixing something when it's broken.

beedeebeedee
0 replies
2d18h

@kergonath and @jltsiren in the sibling comment: you're both describing the outcome of a system with vastly disproportionate numbers of applicants and opportunities. I believe we should continue producing intellectuals who are passionate about the sciences, but we need to provide them with enough opportunities so there isn't pressure to game the system (because there is only opportunity for a few). Ultimately, if we don't do anything about it, this problem just seems like the neglected by-product of thinly veiled money laundering from the government to financial institutions (via university administrators and student loans).

Ar-Curunir
1 replies
3d2h

Unsurprisingly, in the vast majority of cases, substance and reputation tend to be at least loosely correlated, at least in academia. There are high-profile exceptions, but they are only high-profile because they are indeed exceptional.

beedeebeedee
0 replies
3d2h

Perhaps, but it not only deviates from the practice of science but creates the very conditions that science was developed to overcome, i.e., social status accepted as the driver of research rather than the pursuit of knowledge. It might be considered OK for financial institutions to make decisions on whom to fund or partner with based on status signals, but it is galling to think that is how our scientific institutions work. That practice seems about as scientific as Scientology.

almostgotcaught
5 replies
2d23h

I'm happy with blind reviews.

Of course you're happy as the reviewer. I would be too if I could criticize completely anonymously and have it counted. Me as the reviewee? Not so much.

I always thought it was funny: in the US, it's a fundamental right to confront your accusers (6th Amendment). Ostensibly this is to prevent laundered dictatorial rule. And in academia, anonymous criticism is the norm, lol. Makes you wonder, doesn't it?

kergonath
4 replies
2d23h

Of course you're happy as the reviewer. I would be too if I could criticize completely anonymously and have it counted. Me as the reviewee? Not so much.

I would support double-blind reviews in both roles. I don't think it's fair to evaluate a manuscript on its author line-up. It's irrelevant, and it encourages arse-kissing to get big names on papers, and reviewers that are too deferential or too combative just because of their personal history with one of the authors. I don't think that is healthy or productive, even when I am one of the authors.

What is your perspective, why would you not like it?

almostgotcaught
2 replies
2d21h

Ask yourself why juries and prosecutors and judges and police are not anonymous but voting is.

What is your perspective, why would you not like it?

Authors should be hidden from reviewers. Reviewers should be made known to authors. Why? When some reviewer dings me because he/she doesn't understand a fundamental concept, or some technical detail, or because I didn't cite their work, I should be able to have some/any recourse. Right now these people are completely shielded by anonymity.

kergonath
1 replies
1d20h

Ask yourself why juries and prosecutors and judges and police are not anonymous but voting is.

Jurors are anonymous when it’s needed, for the same reasons voting is secret: to avoid coercion, blackmail, and retribution. We should want to keep reviewers anonymous for the same reasons. Prosecutors and judges are not, of course. And I don’t need to ask myself, all of this is obvious.

Authors should be hidden from reviewers.

So we…half agree? As I said, I think it would be much healthier to have double-blind reviews, because what's being reviewed is the work and not the authors.

Why? When some reviewer dings me because he/she doesn't understand a fundamental concept, or some technical detail, or because I didn't cite their work, I should be able to have some/any recourse. Right now these people are completely shielded by anonymity.

And for good reasons. When I reviewed a paper from a big lab as a post-doc, it would have been impossible to do it properly if I had to fear the professor’s retaliation. Anonymity is a shield to avoid direct confrontation.

Rogue or stubborn reviewers are why we need more than one of them and editor discretion. The editors are not stupid, they know these patterns.

almostgotcaught
0 replies
1d19h

It's pointless debating people who benefit from the current system. All I'll say is I'm glad I graduated and went into industry, so I never have to deal with another review again.

godelski
0 replies
1d11h

I'm torn on blind reviews. I like them in theory, but I'm not convinced they work well in practice, and at scale I think weird things happen.

Review works well at small scales because people are more passionate about the work than about the money or career, though those matter. There's better accountability in smaller communities because members rely on one another. But blind review is often a farce there, since you recognize one another's writing, subtopics, niche, whatever.

At scale, review sucks because there is no accountability. Venues reject based on a percentage of submissions, not on the quality of that year's work. It's insane that it's a zero-sum game. And at scale everyone is overburdened and quality control goes down, so malicious/lazy reviewers are even incentivized: they gain time by being lazy, they face less scrutiny when rejecting, and they increase the likelihood of their own work being admitted when doing so (every work can be legitimately criticized, so a reject is ALWAYS defensible, even if seen as harsh; this isn't true for accepts). But also, at scale, blind review is weird. It exists for small labs but not for big ones.

So I don't know. I think many have forgotten the point of review: that we should all be on the same team (team science), and that competition should be friendly. Reviews should leave the authors feeling like they know what they can improve on, not like the reviewer read a different paper, or only looked at one figure (and got it wrong) and dismissed the rest (that's not a review). That wastes lots of time and money, and it compounds the problem as papers get recycled through the slot machine.

So I don't know. But I do know the point of publishing is to communicate with peers. And this can be done by open publishing. Sure, this might make the signals of citation counts and paper counts noisier, but I think they're already pretty noisy even if we don't want to admit it. And maybe it's not a bad thing for those to be noisy; I think we rely too heavily on them currently. Maybe if it were clearer that the noise existed, we could be more nuanced in our evaluations, which is what's frequently lacking these days. Who knows. But I do know the system today is not one I want to continue being in, and this sentiment is far from uncommon. So something needs to happen.

godelski
2 replies
3d1h

You sound like Reviewer 2: speaking from authority and giving criticism with little substance.

  > You're already familiar with most of them
In what way? Their prestige? If I went about outsourcing my opinions of people to others, I'd hold many criminals and objectionable people in high regard while also looking down on many great men. Even if this works 80% of the time, you trust *but verify*. Have some actual confidence in yourself. Stop outsourcing your intellectualism.

  > I've served on several program committees
Then maybe stop complaining and start critiquing.

You speak with an air of authority, as if we should trust you, but give no actual reason why. Use your experience and expertise to educate us. Don't just say I'm wrong, say why I'm wrong. Convince me you're right not through ethos, but logos. This is what everyone hates about Reviewer 2. Don't become him.

I believe you've also missed the substance of what I originally suggested. You're so caught up in the standard way of doing things that you injected it into my comment without even noticing. You forgot the point of all of this. It isn't to climb the ladder of academia, get grants, and all that. It is about advancing science and moving humanity forward. We have no meaningful way to measure this at our disposal. You can't measure it by citations or indices, by publications or venues. Again, they can be a signal (I'm not saying to throw out the baby with the bathwater), but I don't believe you're naive enough to be unaware of how frequently progress is made by those who fail on those measures, how frequently it comes from a dark horse. That should be enough to suggest there's more to it than what we usually consider. Academia has always been about challenging the status quo, even if just to verify it. The problem with academia is that it was created to give refuge to those who want to spend their lives dreaming, but has become so structured you can only dream a certain way. I get why: it's hard to differentiate useless pipe dreams from those that push us forward. But in creating the structure to prevent people from taking a free ride, you created the monster you tried to stop. Only now it has more power.

So don't just wave your hands dismissively; elaborate. Be academic, not arrogant.

JohnKemeny
1 replies
3d

> You're already familiar with most of them

In what way? Their prestige?

In the sense that I've read most of their papers, I've even reviewed several of them.

I've literally just handed in a report for an expert committee to hire a new faculty member, and I had read papers from all applicants that were relevant and reviewed for more than half of them.

> I've served on several program committees

Then maybe stop complaining and start critiquing.

I'm not complaining about anything. I just tried to lay out my point of view.

And yes, I'm aware of the painful measures that citations and impact factors are, but, like any other HN reader, I'm aware of Goodhart's law.

My point is that we're (I'm) not using citations or publication points to rank researchers. We know them.

godelski
0 replies
2d1h

  > In the sense that I've read most of their papers, I've even reviewed several of them.
Which means you.... Read the papers...

So what was the point of your comments?

fraserphysics
0 replies
2d22h

I too have served on faculty search committees and reviewed scores of papers. Many members of the committees only counted publications. I read some papers and commented on them without much effect.

Der_Einzige
0 replies
3d2h

That you're struck that this is someone who "doesn't hire faculty members" is exactly the best thing.

The academy is so corrupt it should be "shattered into a million pieces and scattered into the wind". H-index ego, p-hacking, lazy lies because "no one tries to reproduce", citation cartels, institutionalized systemic academic dishonesty/plagiarism at the highest levels (presidents of top universities and likely their underlings), "Reviewer #2", etc. It's all bunk, and most of it is designed to take bright-eyed Ph.D.s and grind their souls to bits before they can realize that they can both out-earn and out-produce (in an objective sense) most of their professors by simply leaving the academy.

Being ineligible for tenure should be a badge of honor, as the whole system should be abolished and the world would be far better for it - particularly, science would advance far faster.

Far more of HN needs to read Paul Feyerabend and actually internalize what "epistemological anarchism" means. The academy does not hold any particularly valid claim to being closer to observing "objective truth" than anyone else. The allegory of the cave/divided line (the primary justifications for a lot of academic wankery) was one of the original lies made by a nascent and authoritarian philosopher class. We should have abandoned these ideas long ago.

Oh, and if you leave the STEM fields, the academy goes into full-on "charlatan" mode. I go into most social sciences and find a wasteland of footnotes on Foucault and Lacan. That the Situationist International was so successful that the academy is still choked by an obsession with French fashionable nonsense is an indictment of the whole of the academy in the West. It's all intellectually bankrupt.

max_
18 replies
3d12h

Why can't researchers just publish something on a pre-print like ArXiv?

Why do they have to go to Journals that require payment for both readers & authors?

The machine learning (ML) community pretty much gave up on the journal model, and most of the groundbreaking papers are available for free on arXiv.

Why are these scientists complaining? Is it just a prestige seeking thing?

underbooter
7 replies
3d12h

They always have been able to. It's called a blog, personal or company.

But everyone knows why "serious research" isn't mostly published on blogs:

Corporations like Elsevier have successfully executed takeovers of research centers (universities) and made journal submission a mandatory rite of passage for academics joining the state-mandated academia funnel. Don't publish? No postdoc for you. No professorship for you. No grant approval or news coverage.

Do publish in an academic journal? All the work you did, all the IP you invented, is assigned to the university and/or the grant funder. You're basically a non-shareholder: a contractor.

Researchers who do publish on their own tend to be viewed as cranks, since they generally don't use "journalese" and aren't required to cite from the same pool of "officially published" articles. Consequently, they also can't really get cited outside of the blogosphere - a blog post isn't "legitimate literature."

Researchers who don't publish in official journals and are labeled cranks generally can't afford to do research long term.

So how do you divorce yourself from academia?

Start a research company or project

Get funding via grants, product sales, donations

Use your research to directly build products and get a return on R&D invested

Decide to not publish your research in the open because doing so would take away the information asymmetry keeping you ahead of the competition

Oops, you are no longer publishing.

mapt
4 replies
3d7h

I lost a lot of respect for science when my advisor explained that it didn't matter that the cutting-edge work I wanted to extend was being done by Some Guy On A Forum; I wasn't allowed to cite them. And a pity too, because it was probably correct.

BeetleB
2 replies
3d2h

I lost a lot of respect for science when my advisor explained that it didn't matter that the cutting-edge work I wanted to extend was being done by Some Guy On A Forum; I wasn't allowed to cite them

My strong guess is that your advisor was wrong (or you misunderstood him). I've often seen citations to random web sites. Even citations to conversations. The purpose of citation is to give credit, not rack up points.

Perhaps he was saying it was too risky to base your work on some blog post, because that work was not yet "established": it hadn't been published.

kergonath
1 replies
2d23h

I think it depends on how the reference is used. Something like “here is a proof or an explanation, it’s great and I am going to repeat it here but it came from that website over there so don’t think I came up with it” is very different from “there is a proof over there so I will accept it as true and you should too” (i.e., how citations tend to be used most of the time, unfortunately).

Even as a referee I would be happy with the former, provided that there is a permanent link or a pdf of the webpage in the supplementary material. I really would not let the latter fly.

BeetleB
0 replies
2d14h

is very different from “there is a proof over there so I will accept it as true and you should too” (i.e., how citations tend to be used most of the time, unfortunately).

I have seen exactly this. A (famous) professor at my university had a manuscript for a textbook that he had been working on for years, but had not yet published. A number of people wrote papers citing the (unpublished) book.

Of course, this is a bit of an outlier as the author had over the years given people drafts of his book.

Still, I would say that as long as the referee can access the web site and verify it, it should be allowed (and I'll still assert that it has been allowed on numerous occasions). May vary with discipline, though.

elashri
0 replies
3d5h

To be honest, and judging by the amount and quality of emails I receive weekly from people claiming to have a discovery or sharing the latest unification theory, this is a safe assumption. The problem peer review solves is not wasting your valuable, limited time checking whether something follows reasonable scientific methods. It does not tell you whether the work is wrong or correct.

I read about 30 papers weekly (below average) and spend two days of my week reading them. Without any filter it would be a much worse situation: I would have to read far more just to find that many of them were written by cranks.

Someone could say we can pay people to maintain some sort of verified aggregate feed of articles. Yes, that is possible, and congratulations: you just reinvented the peer-review system.

danaris
1 replies
3d4h

Corporations like Elsevier have successfully executed takeovers of research centers (universities) and made journal submission a mandatory rite of passage for academics joining the state-mandated academia funnel.

I...think the causality on this is wrong.

"Publish or perish" long predates Elsevier's rapacious consumption of journals. The reason they did that in the first place is because they already knew it was, in effect, a captive audience.

underbooter
0 replies
2d23h

At the present moment it doesn't much matter how it started.

Before Elsevier, journal articles or "letters" were just correspondence between individual researchers. The big publishers got into it later.

bruce511
3 replies
3d12h

It's a filter thing.

Free platforms let anyone post anything without any filter. Clearly that results in a lot of rubbish being included.

As the signal-to-noise ratio decreases, the value of the platform decreases. I can publish anything I like on my blog, but my blog has no reputation compared to a million other blogs.

Of course filtering is expensive, biased, sometimes wrong, limits access to information and so on. The current academic system is pretty "broken".

But just making all articles free to post and free to read isn't the solution. We have that already, and it's terrible. (Facebook, YouTube, et al.)

Ultimately we still need filtering, and that's the hard problem that needs solving.

bjornsing
1 replies
3d11h

Filtering should happen after publication, and be built into platforms like ArXiv.

bruce511
0 replies
2d23h

I would suggest that it's hard to do that as an afterthought. If the internet has shown us anything, it's that public curation of material seldom allows quality to float to the top.

And when filtering, or curating, is exercised then the argument swings between "untrue" on one side and "censored" on the other.

Simply saying "it should all be open" without acknowledging, or worse without understanding, the real harm that approach could bring is to bring an overly simplistic solution to a hard problem.

matwood
0 replies
3d6h

It's a filter thing.

I'm reminded of this quote from Ben Thompson:

In short, the analog world was defined by scarcity, which meant distribution of scarce goods was the locus of power; the digital world is defined by abundance, which means discovery of what you actually want to see is the locus of power. The result is that consumers have access to anything, which is to say that nothing is special; everything has been flattened.

Instead of filter, I tend to say curate. There are so many papers being created that no institution has the time to vet and curate them all. They want to use open-access repositories and journals, but they don't want to bring hundreds of thousands of unvetted papers and books into their local catalogs.

There's also the prestige of certain journals, but like you touched on, at a meta level that's just curation. Citations and other scorings are attempts to address this issue, but those end up so heavily weighted towards older publications that it's hard for new, possibly better research to bubble up.

mike_hearn
2 replies
3d8h

The ML community is much less dependent on government granting agencies than most other fields. A typical ML paper on arXiv will have at least one and often many corporate funded researchers. Problems like h-index hacking are a disease created by grant giving: take away the government grants and people lose the incentive to engage in that kind of scientific fraud, as they'd only be defrauding their own employers. Who, you can bet, actually are reading the papers.

elashri
0 replies
3d5h

take away the government grants and people lose the incentive to engage in that kind of scientific fraud

And the incentives and resources to engage in scientific activities in case of other fields.

dotnet00
0 replies
3d4h

On the other hand, the ML community is especially big on papers with cherry picked results and things that can't be replicated.

kergonath
0 replies
2d23h

Why can't researchers just publish something on a pre-print like ArXiv?

In my field, Arxiv has too many really suboptimal manuscripts, too much garbage, and is not well indexed. Looking for decent stuff is hard. I would really like our institution’s library to pay someone to sift through Arxiv full time and compile lists of decent articles regularly. Like how they used to do when you had to physically go to a library to even get the titles of published articles. This was enough of a hassle to make bibliography lists valuable. But yeah in the meantime, it’s too expensive in terms of wasted time compared to even Google Scholar.

PeterisP
0 replies
3d1h

It's a funding thing - if you don't publish in highly-filtered places, you don't get grants so you can't do science; no one will pay for you, your students and your equipment if you only publish on ArXiv.

EEBio
0 replies
3d12h

Grant application and advancement in scientific careers. As a junior researcher, you either publish in impactful peer reviewed journals or you don’t get your PhD. As a senior researcher, you either publish in impactful peer reviewed journals or your PhD student doesn’t get the PhD degree (this hurts both of you). Moreover, you won’t get grants so you can’t even hire a PhD student in the first place.

While scientists’ prestige might be a part of the equation, it’s mostly academic leadership and funding bodies that have brought us here.

CobrastanJorji
9 replies
3d12h

The remedy is to abandon author-paid OA publishing and seek less harmful alternatives.

That's just "No remedy is known" wearing a suit.

Y_Y
8 replies
3d11h

To paraphrase Linus:

"Only wimps use traditional journals. REAL men just upload their important stuff on arXiv and let the rest of the world peer review it."

otherme123
6 replies
3d10h

Our group always pre-publishes to arXiv, as a way of saying "we did this first/already", because we usually suspect, and half the time are correct, that our competitors are on similar research. But as soon as we arXiv, we start the hunt to publish with as high an Impact Factor as we can. Why? Almost 100% of grant applications are going to evaluate you by the weight of your publications, and some of them don't even consider indexed publications below Quartile 1 or 2. The IF of the journal is treated as a direct proxy for the quality of your work.

In our experience, arXiv doesn't produce peer review at all. You get some Twitter posts, messages of congratulations, at most a couple of questions. But nothing like the average peer seriously reviewing your paper.

I wouldn't pay a dime to publish if my salary weren't affected (e.g., if I were Linus). I would submit to arXiv, GitHub, Twitter, or even a "Tell HN", and probably reach a larger audience. In fact, journals ask you to promote your papers on your own social networks, because they know their reach doesn't matter unless they are Science or Nature. But sadly that's not how the scientific world works.

Y_Y
2 replies
3d10h

You're totally right. In fact that's one of the main reasons I gave up on an academic career. My collaborators who still rely on grants are miserable, and with good reason.

matwood
0 replies
3d5h

The whole grant driven, non-profit world is really an odd place. It's hard to explain to people who haven't experienced it first hand.

mapt
0 replies
3d7h

Dramatically increasing the amount of scientific research grant funding nationwide would be a natural response to this part of the demographic transition and to the idea of promoting college education, but instead we have an academic pyramid scheme that drives huge amounts of raw researcher labor off a cliff and into Starbucks.

raister
1 replies
3d3h

I agree that being the first is important. I have had a paper reviewed and then rejected, only to see the same idea elsewhere a while later. I cannot say the reviewer "had the same idea", nor prove it, but it is known to happen. Putting it on arXiv remedies that. However, my personal issue is that a journal might not be keen to publish your paper after learning it was publicly available elsewhere, as they prize "innovation" and "being the first" as well. It's a very difficult problem, indeed.

otherme123
0 replies
3d2h

What we did in the past was contact the editor first and ask if they had any problem with it being pre-published. Some do (most of them, in our experience), some don't. The paper always changes between arXiv and the journal, so the source of truth is the journal.

I have had a paper reviewed, then rejected, only to see the same idea elsewhere a while after

I've seen worse. A grant rejected by the grant reviewer, only to be re-submitted, slightly changed, by a friend of the reviewer (this is a small world, we know each other) the next year. The inner workings of science suck; most of us keep doing it because we are like monks: we love God (Science), but we hate the Church.

PeterisP
0 replies
3d1h

In some fields, having work pre-published pretty much disqualifies it from ever being published in a way that "counts": almost all high-impact-factor journals will only consider unpublished research. So placing it on arXiv means the research will never be considered 'quality' by the formal criteria, and the work is wasted from the perspective of fulfilling obligations towards a funding source that required Q1 publications.

kergonath
0 replies
2d22h

The problem is that the rest of the world does not really like reviewing stuff. A few people hunt for fraud, which is a great service, but that's about it.

elashri
8 replies
3d17h

Publishers need money to pay for typesetters, proof-readers, and editors

Maybe in the past, but most journals now will return the manuscript to the author or charge for revisions and any typesetting service.

The margin in the academic publishing industry is one of the highest of any industry, for a reason.

But I agree with the author that open-access publishing incentivizes publishing more papers, even if they are of lower quality. It also harms the reputation of the journal.

The elephant in the room that people usually avoid is that this is a political problem. You cannot rely on publication counts and journals' impact factors in grant selections and academic career evaluations and expect to solve this problem. No scientist is going to choose to publish an article on their website over a journal if it will not only not help but actively harm their career. Yes, people put papers on arXiv, but most of those who want to continue in academia will still submit them to a traditional journal.

And how would we fix that? Through policy reforms of funding agencies' and academic institutions' evaluation methodologies.

throwaway14356
6 replies
3d13h

So scientists created this new publishing platform called the WWW, but other scientists with no such expertise pretend it hurts their reputation to use it? That seems rather comical. Do they perhaps have a constructive argument to offer?

If they desire a harsh, rigorous review mechanism, it doesn't seem very hard or expensive to build. Anonymous reviewers, top-notch reviewers; they can have many tiers.

It isn't the journal that has prestige but the track record of its reviewers.

Mirror the topics on a discussion platform with separate sections for different skill levels. Heavy contributors earn bragging rights.

Mirror the topics on a wiki and have experts approve edits. There is prestige in just approving contributions and having conversations.

A chat client is also imaginable where one approves/adds contacts by type of publication.

Each can run rare expensive advertising that lives up to unreasonable demands.

But no? That would hurt the reputation - just because?

whatshisface
2 replies
3d12h

It is in nobody's individual interest to switch from an established journal to a new one before at least half of the rest have switched, so nobody does it. Scientists are also being squeezed within an inch of their lives by the career advancement / funding system leaving no time for the organization of this kind of change.

RobotToaster
1 replies
3d10h

It's basically the prisoner's dilemma. Everyone knows what would be best if everyone did it, but doing it while others don't has a negative payoff.

throwaway14356
0 replies
1d2h

Religions also attempt to explain the universe. The problem begins when new theories, new observations, and new stories can no longer be added. It must, after all, support some form of acceptance initially.

The edge science is supposed to have is obvious.

Let me make a gesture with my finger in the atmosphere and say: on with it already! Forwards!

this one is funny

https://www.journal-of-nuclear-physics.com

jpeloquin
1 replies
2d23h

The problem with gold OA is the proliferation of low-quality spam, and the recommended remedy is to restore exclusivity (raise the barrier to entry) without restoring excessive (monopolistic, exploitative) journal subscription fees. The article's recommendation to build up society journals has a decent chance of accomplishing this. It won't prevent spam from getting published, but the only essential requirement is to keep the spam easily identifiable (e.g., published by MDPI) so it can be safely ignored. The www seems to facilitate the spam business model, so I don't think the spam will go away entirely.

throwaway14356
0 replies
19h14m

Assuming you want to get rid of the fees entirely: people will have to look at the spam and divide it into more and less spammy. Others will have to do the same with the less spammy, and so on. Eventually there is room for more accomplished moderators, until those with the greatest prestige filter down to a tiny subset. You assign weights to the experts so that with each level of review the good half rises in rank much more than in the previous round.

At first you do it by committee, but eventually the rating for each scientist can be derived from the importance of their publications.
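This divide-and-re-weight scheme can be made concrete with a toy sketch. To be clear, this is a speculative illustration: the function name, the +1/-1 vote encoding, the keep ratio, and the 1.1/0.95 reputation update factors are all my assumptions, not anything the commenter specified.

```python
def weighted_filter(papers, reviews, reputation, keep_ratio=0.5):
    """One moderation round over a batch of submissions.

    papers:     list of paper ids
    reviews:    dict mapping (reviewer, paper) -> +1 (keep) or -1 (spam)
    reputation: dict mapping reviewer -> current weight
    Returns (kept, new_reputation).
    """
    # A paper's score is the reputation-weighted sum of its votes.
    score = {p: 0.0 for p in papers}
    for (reviewer, paper), vote in reviews.items():
        if paper in score:
            score[paper] += reputation[reviewer] * vote

    # The "good half" advances to the next, more selective tier.
    ranked = sorted(papers, key=lambda p: score[p], reverse=True)
    cutoff = max(1, int(len(ranked) * keep_ratio))
    kept = set(ranked[:cutoff])

    # Reviewers who agreed with the round's outcome gain weight;
    # the rest lose a little. Repeated over many rounds, prestige
    # concentrates in a small subset of consistently good reviewers.
    new_reputation = dict(reputation)
    for (reviewer, paper), vote in reviews.items():
        agreed = (vote > 0) == (paper in kept)
        new_reputation[reviewer] *= 1.1 if agreed else 0.95
    return kept, new_reputation
```

Run over repeated rounds, reviewers whose judgments keep matching the consensus accumulate weight, which is one way the "prestige filters down to a tiny subset" effect could emerge.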

I don't think it needs to be free or even cheap. Subscriptions to the archives (and research data) could be more expensive, and the money can be divided according to reputation points.

Free access can be earned by the sweat of your brow: publish worthy papers and write reviews that closely match those of other reviewers.

Universities should be able to publish enough quality material to get paid. Have the grown-ups pay for juniors' education again. It was a good idea back then; it is a good idea now.

rscho
0 replies
3d6h

You are clearly unfamiliar with biomed research. Journals do have prestige by themselves by now. And it's not so much the reviewers, but rather the acceptance rate on a cursory read by a (sometimes anonymous) editor, that maintains it.

godelski
0 replies
3d1h

Funny thing is, I've had a paper rejected where the only complaints were about a few spelling mistakes and grammar. They didn't say where, but I guess that's enough.

vouaobrasil
6 replies
3d17h

The problem is not really author-paid fees. It's the fact that paper count and citation count are so important in the first place.

Actually, if you want to fix the problem, peer review should be rewarded rather than publications, with accurate and unbiased evaluations. Publication count should not be nearly as valued, and peer review should not be anonymous.

knighthack
1 replies
3d16h

accurate peer-review and unbiased evaluations

And how do you even guarantee this, 'accurate' and 'unbiased'? Any form of peer review will always have a slant, no matter how 'objective' it is intended to be.

Until that can be ascertained, I don't see how peer review is fundamentally superior. I would even argue that peer review can be detrimental to true science, especially those on the frontier.

vouaobrasil
0 replies
3d7h

And how do you even guarantee this, 'accurate' and 'unbiased'? Any form of peer review will always have a slant, no matter how 'objective' it is intended to be.

Peer review of peer review. Instead of a couple of scientists looking at it, the peer-review process should be entirely open. So other people can scrutinize it.

bachmeier
1 replies
3d15h

peer-review should not be anonymous

How do you prevent movement to an equilibrium where reviewers give favorable reviews to the reviewers of their own papers?

vouaobrasil
0 replies
3d7h

How do you prevent movement to an equilibrium where reviewers give favorable reviews to the reviewers of their own papers?

Peer-review of peer review. If all peer review is public, then people can analyse the trends.

RobotToaster
1 replies
3d10h

What about those who peer review papers that are later retracted?

vouaobrasil
0 replies
3d7h

Well, if the paper used falsified data, the authors should be punished. We can't expect all falsified data to be found, as some people are really good at falsification.

drdeca
6 replies
3d15h

A thought I had: what if there were a service which maintained a list of arxiv identifiers (or, maybe also allow doi identifiers, but only if some extra conditions are met) that met their standards for notability/interesting-ness/quality (each URL could maybe be accompanied by the title and a list of the authors of the work).

And suppose that all this organization published, was this list of URLs (and titles and names).

It seems conceivable that, if their list became respected, getting one's pdf on this list could, maybe (this seems a stretch), be prestigious.

Now, if they accepted submissions of URLs to be considered for inclusion in a future issue of their public list of URLs, if being on this list was prestigious, they would presumably be flooded with submissions, and need to reduce the number of submissions they received somehow. Then, I imagine them charging some fee for submitting a URL for them to consider for evaluation.

Of course, they might include a few documents that no one submitted, in order to make sure the list remains useful by having most of the most relevant papers in whatever field, in order to maintain prestige.

Conceivably there could be multiple such organizations doing this for different fields and subfields.

If this happened, it would seem to be essentially a replication of the phenomenon of author-paid publication fees in academic journals, but without relying on copyright or actually doing the publishing of the articles.

wakawaka28
3 replies
3d12h

Who would pay for an article they could get for free? Who would pay to subscribe to a journal that just aggregates open-access articles?

amatic
1 replies
3d9h

The scientific community, or the ministry of science of some country, or a university, might find that paying for peer review, for example, is more effective in promoting good science than paying for publication.

wakawaka28
0 replies
2d19h

Perhaps that should also be compensated but you're talking about another cost associated with publication (and even non-publication). Since there's no guarantee that reviewers even read the crap they are assigned to read, I don't think paying them is the best approach. It would perhaps be better to publish the names of the reviewers. And if you reject a paper, you have to go on the record with your gripes. I'm sure these policies would slow reviews down an awful lot though. Reviews are only supposed to uncover blatant errors I suppose, and not offer definitive endorsement of results.

drdeca
0 replies
3d2h

In what I was imagining, the income was from people paying to be considered for inclusion on the list. Reading the list is free.

jbaber
0 replies
3d4h

This is great! I'm trying to think of any disadvantage.

biomcgary
6 replies
3d17h

Publish or perish is yet another example of Goodhart's Law (https://en.wikipedia.org/wiki/Goodhart's_law).

Paper count is a terribly easy metric to game when used to evaluate academic output because publishing low quality papers doesn't count against the authors.

We can fix the problem in an unbiased manner by using ChatGPT to quantify article novelty, impact, and quality. /s

_aavaa_
2 replies
3d16h

I almost missed the /s in my anger while reading that last line. It's too close to being real...

_aavaa_
0 replies
2d19h

They say AI scientist, but it looked more like the NN Evolution by LLM Selection.

OvbiousError
2 replies
3d10h

You also need citations though, and those don't come with low quality papers, nor will they help your reputation. To get citations people have to actually read your stuff.

inglor_cz
1 replies
3d10h

Or organize a fraudulent citation ring.

dguest
0 replies
3d10h

Some scientific collaborations put thousands of people on every paper, publish around 100 papers a year, and cite at least 10 of their own papers in each one.

To some, that might be "a fraudulent citation ring". To others it's just business as usual.

Of course a lot of academia has caught on to the ruse, but the mitigations are pretty ugly: some funding agencies and universities just stopped counting papers from large experiments as valid scientific contributions.

h4ck_th3_pl4n3t
4 replies
3d10h

Great idea.

Now realize that academia gets more funding with more citations, and that's why research in a capitalist society relies on successful research and not on failure (which objectively should be about 50% of results).

Then iterate on the corruption idea and realize that the only way to fix this on a societal level is unconditional basic income.

Only with basic income and legally forbidden sponsorships can you guarantee objectivity.

fdej
3 replies
3d10h

There's no a priori reason why the expected success rate of research projects should be 50% and not, say, 1% or 99%.

h4ck_th3_pl4n3t
2 replies
3d10h

There's no a priori reason why the expected success rate of research projects should be 50% and not, say, 1% or 99%.

Humans are really bad at predicting the future, so I'd argue that the large majority should be between 49% and 51%. A 99% success rate would imply that humans can predict the future, which I think is pretty much the opposite of how science should work.

frankling_
0 replies
3d8h

If you're able to predict the future with 50% accuracy, you should start filling out some lottery tickets.

Ekaros
0 replies
3d8h

I would actually put it at 60-80%, as there are plenty of ideas that should work. It is not really a random search, but applying existing knowledge or expectations to some different area. If doing one thing with one material works, I would fully expect that in many cases doing the same thing with another, closely related thing would also work.

A lot of research is an iterative process, and there you can pursue the iterations that you know could work and already ignore the things that certainly won't.

wmf
2 replies
3d12h

Speaking as a published scientist, I never thought open access was supposed to "put an end to the era of impact-chasing, false-positives, and unpublished truths." The point of open access is to stop paywalling science by parasitic publishers and it accomplishes that. I also didn't predict that a flood of pay-to-publish journals would lower their standards, but I only pay attention to the top conferences anyway.

MichaelRo
0 replies
3d12h

@wmf has the correct answer.

The original argument is utopian (it can never happen in a free society) and basically proposes going back to an era of very few scientists, like, I dunno, when most people were illiterate peasants and the sea of ignorance was dotted here and there with a Euclid or a Fibonacci. Which sounds good in theory if all you had were Einsteins and Newtons and Kelvins, but in practice at least some of the available places would be lost to nepotism, and most of the approved scientists, while genuine in their effort, would still produce mediocre results. Science results, like LLM training, depend on scale.

So I would argue rather, let anyone who wants to do science, do it. The cashier lady and plumber guy are unlikely to wanna do research but out of those who do, eventually something's gonna come out. Only it's not clear straight from the start out of whom.

DataDive
0 replies
3d4h

I never thought "open access" meant that an author would need to pay $10K to be published in Nature... that is a lot more parasitic than having a library pay subscription fees.

wakawaka28
2 replies
3d12h

I think this can be solved (or improved) by making the option to pay publication fees available only after the paper is accepted. In fact, it could even be extended to after the official publication. If the author then wants the article to be free for readers, they can pay, having already secured publication. If this were also combined with an anonymizing factor to hide the identity of the author from the publisher, it would be perfect. But I know that part is impractical.

gapan
0 replies
3d5h

Am I misunderstanding something? Isn't that the way it is already with most open access journals? Apart from hiding the author's identity from the publisher of course.

dotnet00
0 replies
3d4h

My understanding is that's how it already works, at least that's how it's been for the journals I've published in. When you submit the paper you simply promise to pay if the paper is accepted. Then peer review happens and if you pass, they'll ask you to pay.

jillesvangurp
1 replies
3d10h

The solution is on the funding side and the reader side, not the publisher side. Scientists have to publish in order to get other scientists to read their work and (hopefully) refer to it in their own work.

This business of referring to other scientific work is what drives academic careers and the allocation of research budgets. Most of the better universities look at metrics that take this into account. The problem is all the second rate universities haggling for funding trying to boost their metrics.

Promoting scientific work is essentially an SEO problem. How do you get other scientists to take your work seriously? Well, they need to find your work when they are doing research for their own work.

Getting the PDF distributed is the easy part of the problem. That's a solved problem since the nineties. It doesn't really cost anything worth talking about. It's not $0, but close enough that the difference isn't really worth talking about. Publishers don't matter for distribution any more; it's all digital now. Any website that Google can index will do for distribution.

The go-to scientific SEO process hasn't really changed in centuries. You submit your article to some exclusive group of researchers, anonymous peer review is arranged, and if deemed acceptable the article is included in some publication. Which these days just means they put it on their website. Actual printed journals are a side show. Some publishers still do it. But mostly it's a waste of paper.

The process beyond that is simple good old SEO. You drive traffic by promoting your work at conferences, workshops, writing books, by talking to others, etc. This is also how you become part of these exclusive groups. You'll be invited to take part in peer reviewing, editorial work, conference organizing, etc. And in return, you are expected to refer back to work of others in the same group.

The odd thing is that scientists don't really do any traditional SEO. They aren't buying keywords from Google. They aren't messing with their meta data. And they aren't optimizing their pdfs for readability, AB testing various click baity titles. Nor are they using analytics tools to measure how other scientists engage with their work. They aren't running social media campaigns either. This is super odd. You do all this work and then you throw it over the fence and hope for the best. Why?! This is stupid.

The value add of publishing to some paid publisher scam journal that nobody reads and where the peer review is basically bribing the "editor" is of course very low. The key problem is that universities don't scrutinize their metrics enough. That's because they aren't independent. Like researchers, their funding depends on public sources that look at metrics to judge how well the money is being spent. That process is super corrupt and bureaucratic. A lack of standards and integrity in this process is the root cause of the problem.

Scientists referring to work from dubious publications is a giant red flag that they are manipulating their metrics. That giant red flag needs to be taken into account during grant applications for public funding. This isn't rocket science. Like plagiarism is frowned upon, referring to work from disreputable sources should be frowned upon as well. It signals scientists trying to mess with the numbers and not taking their own work and integrity seriously.

This should be flagged by editors of reputable publications because it threatens their own reputability. Grant application reviewers should be flagging this as well. Metrics should be created to automate this. Simply flag some publications and calculate the least reputable authors based on their citation habits referring to work in these publications. That will sort things out in no time. If you can calculate an impact factor, you can also calculate a reputability factor.
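The proposed metric is easy to sketch. A minimal, purely illustrative version of such a "reputability factor" might score each author by the fraction of their citations pointing at flagged venues (the venue names and the scoring function here are made up; real bibliometrics would need normalization by field, career stage, and co-authorship):

```python
# Hypothetical sketch of the "reputability factor" idea: given a list of
# flagged (dubious) venues and an author's outgoing citations, score the
# author from 1.0 (no citations to flagged venues) down to 0.0 (all
# citations go to flagged venues). Purely illustrative, not a real API.
from collections import Counter

FLAGGED_VENUES = {"Intl J of Pay-to-Publish", "Predatory Letters"}

def reputability(citations):
    """citations: list of venue names cited across an author's papers."""
    if not citations:
        return 1.0  # no citations, nothing suspicious to penalize
    counts = Counter(citations)
    flagged = sum(n for venue, n in counts.items() if venue in FLAGGED_VENUES)
    return 1.0 - flagged / len(citations)

# 1 of 4 citations goes to a flagged venue -> score 0.75
print(reputability(["Nature", "Predatory Letters", "Nature", "Science"]))
```

Like the impact factor, the interesting part isn't the arithmetic but who maintains the flag list and how it's kept honest.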

MarkusQ
0 replies
3d2h

The go-to scientific SEO process hasn't really changed in centuries. You submit your article to some exclusive group of researchers, anonymous peer review is arranged, and if deemed acceptable the article is included in some publication.

Barely 50 years, actually. Peer review is a latecomer to the party[1] and is actually part of the problem. It's fundamentally gatekeeping and (along with an analogous process in funding) serves to slow/prevent paradigm shifts by locking in popular models at the expense of creative thinking (e.g. the amyloid hypothesis[2]). It's not part of the scientific process, it's part of a concerted effort to tame and regularize it.

[1] https://mitcommlab.mit.edu/broad/commkit/peer-review-a-histo...

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5652035/

OvbiousError
1 replies
3d10h

I published papers in the past. Are the fees (too) high? Probably. Should publishing be free? Good luck with that, especially now with LLMs.

I (my university) paid the journals, review was tough and I sank a ton of work in both the first draft and the revisions. And then upon acceptance the papers went both into the journal and to arxiv.

Reading the new papers on arxiv was also my start of the day. That is only possible if that list is heavily moderated, there were around 30-40 of which I would read/skim 1-5.

Luc
0 replies
3d10h

It's almost like you think the fees paid by your university went to reviewers and moderators?

That's not how academic publishers make billions of dollars of profit each year. They don't pay those.

ynniv
0 replies
3d3h

  - Academia has become a paper throughput competition
  - Open access journals make it easier to game this system
  - Ergo, open access journals are bad
Well, he's right that one of those things is a serious problem that needs to be addressed.

typedef_struct
0 replies
3d2h

Oh science. Peer review, the ultimate "LGTM", stands in for replication.

throwawaymaths
0 replies
3d15h

PLOS journals are some of the least bad journals, and IIRC they're entirely author-paid.

spiritplumber
0 replies
3d10h

Publications Georg, who lives in a cave and publishes over 10000 papers a year, is an outlier and should not have been counted.

netcan
0 replies
3d9h

Was open access actually expected to "put an end to the era of impact-chasing, false-positives, and unpublished truths?"

Generally speaking, I don't see how the goings-on of the second tier matter.

kkfx
0 replies
3d10h

Next topic: how to move past the economic religion, because the current mania for making everything tradable and measurable in money units actually IS A FORM OF RELIGION. A useful religion, of course, for those who know how to master it, but a religion nonetheless, with seriously damaging effects on society and human development.

achillesheels
0 replies
3d14h

Sorry but I have to disagree. It’s the only way I was able to be read.

DataDive
0 replies
3d4h

Fundamentally, the problem is that today science is framed as nothing but a competition, an Olympic event that produces winners we celebrate and losers, rather than as cooperation.

If you got the grant, you won; if you did not, you lost. Losing means you cannot hire graduate students; you cannot perform your job at all!

Now consider that around 15% of grants get funded, and it takes about six months to determine whether you got that grant or not.

The desperate competition for money overshadows everything in science.

Science is often thought of and presented as huge leaps, when in reality, it is an iterative process that builds on small advances.