I guess I'll bite - what am I looking at here?
An (agarose?) gel.
There are small wells at one end. You insert a small amount of dyed, DNA-containing solution into each. Apply an electrical potential across the gel. The DNA gradually moves along. Smaller DNA fragments move faster. So, at a given time, you can coarsely measure the fragment size of a given sample. Your absolute scale is given by "standards", aka "ladders", which are samples containing fragments of multiple known sizes.
The paper authors cheated (allegedly) by copy + pasting images of the gel. This is what was caught, so it implies they may have made up some or all results in this and other papers.
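(To make the "ladder gives you the absolute scale" part concrete: fragment size vs. migration distance is roughly log-linear, so you can interpolate a sample's size against the ladder. A minimal Python sketch, with made-up ladder values:)

    import numpy as np

    # Hypothetical ladder: known fragment sizes (bp) and their measured
    # migration distances (mm) on this gel -- values are made up.
    ladder_sizes_bp = np.array([10000, 5000, 2000, 1000, 500, 250])
    ladder_dist_mm = np.array([12.0, 18.5, 27.0, 34.0, 42.5, 51.0])

    def estimate_size_bp(sample_dist_mm):
        # Interpolate log10(size) linearly against migration distance.
        log_size = np.interp(sample_dist_mm, ladder_dist_mm,
                             np.log10(ladder_sizes_bp))
        return 10 ** log_size

    print(round(estimate_size_bp(30.0)))  # rough size of a band that ran 30 mm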
Close - this is a SDS-PAGE gel, and you run it using proteins. The bands in the first two rows are from a western blot (gel is transferred to a membrane), where you use antibodies against those specific proteins to detect them. The Pon S row is Ponceau S, a dye that non-specifically detects all proteins - so it's used as a loading control, to make sure that the same amount of total protein is loaded in each lane of the gel.
Is it conceivable that the control was run once because the key result came from the same run? I can see a reviewer asking for it in all three figures, whereas they may have drafted it only in one.
The horizontal label is fine; it says Pon S in all images. (I guess a wrong label would be obvious for specialists to detect.)
The problem is the vertical labels:
In Figure 1e it says: "MT1+2", "MT2" and "MT1"
In Figure 3a it says: "5'-CR1", "CR2" and "3'-UTR"
In Figure 3b it says: "CR2", "CR3" and "CR4"
Based on the images, it is inconceivable that these are from the same run (see the dramatically different levels of TRF-S in each gel; one column/lane = one sample). This isn't something that would be included because of a reviewer - loading controls are required to meaningfully interpret the results (i.e. the data is useless without such a control).
Additional context for anyone speculating about OP's intentions: within the academic world there was a major scandal where a semi-famous researcher was exposed for faking decades of data (Google: Pruitt). Ever since, people have been hungry for more drama of the same shape.
This is protein on a western blot but the general idea is the same.
I love HN - thanks!
Faked scientific results.
what happens to people who do this? are they shunned forever from scientific endeavors? isn't this the ultimate betrayal of what a scientist is supposed to do?
if caught and it's unignorable, usually they say "oops, we made a minor unintentional mistake while preparing the data for publication, but the conclusion is still totally valid"
generally, no consequences
There's a difference between having your results on black plastic cookware be off by several factors due to an "innocent" math mistake and deliberately reusing results to fraudulently mislead people by faking the data.
Most people only remember the initial publication and the noise it makes. The updates/retractions generally are not remembered, resulting in the same "generally, no consequences", but the details matter.
The people in the area remember (probably because they wasted 3 months trying to extend/reproduce the result [1]). They may stop citing them.
In my area we have a few research groups that are very trustworthy, and it's safe to try to combine their results with one of our ideas to get a new result. Other groups have a mixed history of dubious results: they don't lie, but they cherry-pick too much, so their results may not be generalizable enough to use as a foundation for our research.
[1] Exact reproductions are difficult to publish, but if you reproduce a result and add a twist, it may be good enough to be published.
This is a general issue with interpreting scientific papers: the people who specialize in the area will generally have a good idea about the plausibility of the result and the general reputation of the authors, but outsiders often lack that completely, and it's hard to think of a good way to really make that information accessible.
(And I think part of the general blowback against the credibility of science amongst the public is because there's been a big emphasis in popular communication that "peer reviewed paper == credible", which is an important distortion from the real message "peer reviewed paper is the minimum bar for credible", and high-profile cases of incorrect results or fraud are obvious problems with the first statement)
I completely agree. When I see a post here I have no idea if it's a good journal or a crackpot journal [1]. The impact factor is sometimes useful, but the typical level in each area is very different. (In math, a usual value is about 1, but in biology it's about 5.)
Also, many sites just copy & paste the press release from the university, which often has a lot of exaggerations, and sometimes they add a few more.
[1] If the journal has too many single author articles, it's a big red flag.
Yes, I think science communication is also a big part of the problem. It's a hard one to do right but easy to do wrong, and few journalists care or have the resources to do it right (and the end result tends to be less appealing, because there's a lot less certainty involved).
Horseshit. All of the following scientists were caught outright faking results and as a result were generally removed from science.
Jan Hendrik Schön (he was even stripped of his PhD, which is not possible in most jurisdictions). He made up over 200 papers about organic semiconductors.
Victor Ninov who lied about creating like 4 different elements
Hwang Woo-suk who faked cloning humans and other mammals, lied about the completely unethical acquisition of human egg cells, and literally had the entire Korean government attempting to prevent him from being discredited, and was caught primarily because his papers were reusing pictures of cells. Hilariously, his lab successfully cloned a dog which was considered difficult at the time.
Pons and Fleischmann didn't do any actual fraud. They were merely startlingly incompetent, incurious, and arrogant. They still never did real research again.
This guy made some videos about it
https://m.youtube.com/@PeteJudo1/videos
I've always wondered about gel image fraud -- what's stopping fraudulent researchers from just running a dummy gel for each fake figure? If you just loaded some protein with a similar MW / migration / concentration as the one you're trying to spoof, the bands would look more or less indistinguishable. And because it's a real unique band (just with the wrong protein), you wouldn't be able to tell it's been faked using visual inspection.
Perhaps this is already happening, and we just don't know it... For this reason I've always thought gel images were more susceptible to fraud than other commonly faked images (NMR / MS spectra etc., which are harder to spoof).
Gel electrophoresis data or Western/Southern/Northern blots are not hard to fake. Nobody seeing the images can tell what you put into each pocket of your gel. And for the blots nobody can tell which kind of antibody you used. It's still not totally effortless to fake as you have to find another protein with the right weight, this is not necessarily something you have just lying around.
I'd also suspect that fraud does not necessarily start at the beginning of the experiments, but might happen at a later stage when someone realizes their results didn't turn out as expected or wanted. At that point you already did the gels and it might be much more convenient to just do image manipulation.
Something like NMR data is certainly much more difficult to fake convincingly, especially if you'd have to provide the original raw datasets at publication (which unfortunately isn't really happening yet).
Or, from my own experience, you suddenly realize you forgot to take a picture of the gel (or lost it?) and all you have are the shitty ones.
Shifting the topic from research misconduct to good laboratory practices, I don't really understand how someone would forget to take pictures of their gels often enough that they would feel it necessary to fake data. (I think you're recounting something you saw someone else do, so this isn't criticizing you.) The only reason to run the experiment is to collect data. If there's no data in hand, why would they think the experiment was done? Also, they should be working from a written protocol or a short-form checklist so each item can be ticked off as it is completed. And they should record where they put their data and other research materials in their lab notebook, and copy any work (data or otherwise) to a file server or other redundant storage, before leaving for the day. So much has to go wrong to get to research misconduct and fraud from the starting point of a little forgetfulness.
I mean, I've seen people deliberately choose to discard their data and keep no notes, even when I offered to give them a flash drive with their data on it, so I understand that this sort of thing happens. It's still senseless.
Isn't this the plot of pretty much every movie about science research fraud? When Richard Kimble was chasing his one-armed man, it led to the doctor using the same data to make the research look good. I know this is not the only example.
You switched the samples! In the pathology reports! Did you kill Lentz too!?
"Whats stopping?" nothing, and that is why it is happening constantly. A larger and larger portion of scientific literature is riddled with these fake studies. I've seen it myself and it is going to keep increasing as long as the number of papers published is the only way to get ahead.
They have a playlist of 3500 videos showing images like this one
https://youtube.com/playlist?list=PLlXXK20HE_dV8rBa2h-8P9d-0...
I was curious how the video creators were able to generate so many videos in such a short timeframe. It looks like it might be automated with this tech: https://rivervalley.io/products/research-integrity
Very cool. I wish these guys would have a podcast discussing high-profile papers, how influential they are, what sorts of projects have been built on top of them, and then be like "uh oh, it looks like our system detected something strange about the results".
I wish wish wish there was something similar for computer science. If I got paid for every paper that looked interesting but could not be replicated, I would be rich.
There is so little content and context to this link that it is essentially flame war bait in a non-expert forum like HN.
I smell this too, especially with the editorialized HN title that contains the word "mRNA".
The title was edited, supposedly by HN moderators, after I posted it. I actually ran into this YouTube channel and thought it was very interesting, since I didn't realize academia makes so many mistakes all the time. https://news.ycombinator.com/item?id=42728742
For reference, the title of the paper this appeared in is "Novel RNA- and FMRP-binding protein TRF2-S regulates axonal mRNA transport and presynaptic plasticity"
Google Scholar reports 43 citations: https://scholar.google.com/scholar?q=Novel+RNA-and+FMRP-bind...
The images still seem to be visible in both PubMed and Nature versions.
PubMed version: https://pubmed.ncbi.nlm.nih.gov/26586091/
Nature version: https://www.nature.com/articles/ncomms9888
Nature version (PDF): https://www.nature.com/articles/ncomms9888.pdf
Just for context:
The senior author is Mark Mattson: one of the world’s most highly cited neuroscientists with amazing productivity and large lab while at NIH when this work was done.
https://scholar.google.com/citations?user=N3ObarMAAAAJ&hl=en...
Mattson is well known as a biohacker and an expert on intermittent fasting and its health benefits.
https://en.wikipedia.org/wiki/Mark_Mattson
He retired from the National Institute on Aging in 2019 and is now at Johns Hopkins University. Still an active researcher.
https://nihrecord.nih.gov/2019/08/23/mattson-expert-brain-ag...
Not just the same bands, but the same noise and artifacts too. Did they copy-paste the data?
Here's me, clicking and expecting to read about someone fleecing Spotify by setting up fake bands.
Whereas actually Spotify funds artificial bands because they're more profitable
https://harpers.org/archive/2025/01/the-ghosts-in-the-machin...
The news here is that modern pop music has become so same-same that people can't tell "AI"-generated music from real music.
tbf, I don't think any of these are pop songs. It's ambient music and lofi chill stuff.
If you just looked at all the undergrads trying to find ways to cheat on their homework, exams, and job interviews, it'd be easy to imagine that university lab science conducted by those same people is also full of cheating whenever they thought they could get away with it.
But I've wondered whether maybe some of the fabrications are just sloppy work tracking so many artifacts.
You might be experienced enough with computers to have filing conventions and workflow tools, around which you could figure out how to accurately keep track of numerous lab equipment artifacts, including those produced by multiple team members, and have traceability from publication figures all the way to original imaging or data. But is this something everyone involved in a university lab would be able to do reliably?
I'm sure there's a lot of dishonesty going on, because people going into the hard sciences can be just as shitty as your average Leetcode Cadet. But maybe some genuine scientists could use better computer tools and skills?
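One low-tech way to get that kind of traceability, as a sketch only: hash every raw file into a manifest when it comes off the instrument, so any published figure can later be checked against an original. The directory name below is hypothetical.

    import hashlib, json, pathlib

    def build_manifest(raw_dir, out_file="manifest.json"):
        # Record a SHA-256 hash for every raw file so figures can be traced
        # back to the exact bytes that came off the instrument.
        manifest = {}
        for path in sorted(pathlib.Path(raw_dir).rglob("*")):
            if path.is_file():
                manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
        pathlib.Path(out_file).write_text(json.dumps(manifest, indent=2))
        return manifest

    # build_manifest("lab_data/2015-06_gels")  # hypothetical directory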
Would this imply that someone faked data in a paper they published?
Hard to explain how else it could happen.
Could this be a repeat of the Xerox image duplication bug? https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
In different documents?
any reason Hanlon's razor doesn't apply here? honest question, I'm just a regular 4-year-degree off-to-work guy
"Adequately" is doing a lot of heavy lifting in Hanlon's Razor. A good corollary to keep in mind is "Never attribute to stupidity what is better explained by malice." I usually apply this to politics, but science publishing is 90% politics, so it still fits.
Yeah, I have mixed feelings about Hanlon's razor. Giving people the benefit of the doubt is good, and some people don't do it enough, but there are also a lot of people who overextend the benefit of the doubt to the point that they're almost doing damage control for fraudsters.
There are perverse incentives in scientific publishing, and there are not many alternative explanations.
So sick of Hanlon's Razor. It's just a gift to the actually-malicious. If the outcome is the same then intentions don't matter.
I consider it a reminder to stop and think before getting swept up in outrage.
Sure, bad actors will maintain plausible deniability, but I would rather let some people slide than get worked up over mistakes or misunderstandings.
Letting the people slide is not the same thing as letting the action/outcome slide. I do think it's reasonable to let intent inform one's feelings toward the person, but if it's easy to accidentally do fraudulent science then the system should still be criticized and the systemic problem should still be addressed.
IMO it’s only applicable to humans. Hierarchies attract malicious actors.
> So sick of Hanlon's Razor. It's just a gift to the actually-malicious. If the outcome is the same then intentions don't matter.
I think that's only true for a single incident. If someone does injury to me, I'm just as injured whether they were malicious or incompetent, but mitigation strategies for future interactions are different.
Here's how the razor applies: There is no real malice behind all the fraud in science publications. The authors aren't usually out to specifically harm others.
However, in the long run it is stupid, for two and a half reasons:
- it reduces people's trust in science, because it becomes obvious we cannot trust the scientists, which in the long run will reduce public funding for the grift
- it causes misallocation of funds by people misled by the grift, and this may lead to actual harm for you (e.g., what if you develop Alzheimer's but there is no cure because you lied about the causes 20 years ago?)
1/2- there is a chance that you will get caught, and like the former president of Stanford, not be allowed to continue bilking the gullible. This only gets half a point because the repercussions are generally not immediate and definitely not devastating to those who do it skillfully.
The former president of Stanford is the CEO of Xaira now.
The opportunity here is to automate detection of fake data used in papers.
It could be hard to do without access to data and costly integration. And like shorting, the difficulty is how to monetize. It could also be easy to game. Still...
The nice thing about the business is that the market (publishing) is flourishing. Not sure about the state of the art or availability of such services.
For sales: run it on recent publications, and quietly ping the editors with findings and a reasonable price.
Unclear though whether to brand in a user-visible way (i.e., where the journal would report to readers that you validate their stuff). It could drive uptake, but a glaring false negative would be a risk.
Structurally, perhaps should be a non-profit (which of course can accumulate profits at will). Does YC do deals without ownership, e.g., with profit-sharing agreements?
Elizabeth Bik (who is known for submitting such reports to journals) has a nice interview about this problem[0], which covers software as well.
> After I raised my concerns about 4% of papers having image problems, some other journals upped their game and have hired people to look for these things. This is still mainly being done I believe by humans, but there is now software on the market that is being tested by some publishers to screen all incoming manuscripts. The software will search for duplications but can also search for duplicated elements of photos against a database of many papers, so it’s not just screening within a paper or across two papers or so, but it is working with a database to potentially find many more examples of duplications. I believe one of the software packages that is being tested is Proofig.
Proofig makes a lot of claims but they also list a lot of journals: https://www.proofig.com/
[0]: https://thepublicationplan.com/2022/11/29/spotting-fake-imag...
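No idea what Proofig or similar tools actually do internally, but the textbook way to flag shared regions between two figure panels is keypoint matching. A rough sketch with OpenCV (file names are hypothetical, the distance cutoff is arbitrary):

    import cv2

    def looks_duplicated(img_path_a, img_path_b, min_matches=25):
        # Count strong ORB keypoint matches between two figure panels;
        # many close matches suggest a shared/copied region.
        a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
        b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=2000)
        _, des_a = orb.detectAndCompute(a, None)
        _, des_b = orb.detectAndCompute(b, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
        strong = [m for m in matches if m.distance < 40]  # arbitrary cutoff
        return len(strong) >= min_matches

    # looks_duplicated("fig1e_blot.png", "fig3a_blot.png")  # hypothetical files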
At least this paper has only 43 citations over the last 10 years, which is really nothing for Nature, which means it's basically irrelevant. (Obviously it is still a good idea to identify cheaters.)
Ooh, I love that this website exists, and major props to whoever made that visualization!
The image with meaningless blotches, technical diagrams and implied dubiousness feels like the beginning of a "please check and comment" meme.
Is there an obvious way to tell that these are exactly the same? Or is this a pixel level comparison that is not mentioned?
There's a video that's quite convincing: https://youtu.be/K0Xio5yo_x8
It inverts the second image and passes the first and third images under it, and when there is a complete overlap the combined images make a nearly perfectly gray rectangle, showing that they cancel out.
Look at the "scratch" on the right end of the leftmost dash. That "noise" shouldn't be replicated, right?
Try looking at the artifacts, not the actual bands. There's a little black hairline on the top right corner of the leftmost band, and a similar line toward the left of the middle band.
The page has another comment with an animation where they're overlaying the images to show how similar (same?) they are.
The linked video makes it pretty clear by subtracting one image from the other and showing the difference: https://www.youtube.com/watch?v=K0Xio5yo_x8
Ironically there was a whole post about basically exactly this the other day: https://news.ycombinator.com/item?id=42655870
In any image manipulation program with layers, like Photoshop, you put the suspect images on top of one another, use filters to subtract one layer from the other (I'm not sure which filter operation works best; it might be multiply or divide), and then work to align the two layers. Differences and similarities become extremely obvious.
You can also get the raw pixel information by converting to a bitmap and comparing values, but it's easier visually because it's pretty trivial for a simple image modification to change all of the pixel values but still have the same image.
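If you'd rather do the subtraction in code than in Photoshop, a minimal sketch (assuming the two crops are already the same size and roughly aligned; alignment is the fiddly part, and the file names are placeholders):

    import numpy as np
    from PIL import Image

    def difference_image(path_a, path_b, out_path="diff.png"):
        # Absolute per-pixel difference; a near-uniform, near-black result
        # means the two crops are (near-)identical.
        a = np.asarray(Image.open(path_a).convert("L"), dtype=np.int16)
        b = np.asarray(Image.open(path_b).convert("L"), dtype=np.int16)
        diff = np.abs(a - b).astype(np.uint8)
        Image.fromarray(diff).save(out_path)
        print("mean abs difference:", diff.mean())  # ~0 for a straight copy-paste

    # difference_image("band_crop_fig1e.png", "band_crop_fig3a.png")  # placeholder crops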
Automated experiment verification and auditing is desperately needed. Something as simple as submitting EXIF data + archiving at time of capture, for crying out loud.
An imgur for scientific photos with hash-based search or something. We have the technology for this.
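For the hash-based search part, an exact hash breaks the moment anyone re-crops or re-compresses, so in practice you'd want a perceptual hash. A minimal dHash sketch (file names are placeholders):

    from PIL import Image

    def dhash(path, size=8):
        # Difference hash: shrink, grayscale, compare adjacent pixels
        # -> a 64-bit fingerprint that survives re-compression and resizing.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | int(left > right)
        return bits

    def hamming(h1, h2):
        return bin(h1 ^ h2).count("1")  # small distance = likely the same image

    # hamming(dhash("upload.png"), dhash("archive_candidate.png"))  # placeholder files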
Pruitt? Is that you?
Copypasta.
damn you spotify … :)
Can someone change the title to:
"Comment on Nature paper on 2015 mRNA paper suggests data re-used in different contexts"
The current title would suggest music to most lay-people.
Even for people familiar with the field this title is a bit hard to parse at first without context. "bands" really needs either gels or gel electrophoresis as context.
Agreed
Disagreed. Title is fine.
As someone clueless about music and mRNA I've got to say this wouldn't help me much.
“We are no longer called Sonic Death Monkey. We are on the verge of being called Kathleen Turner Overdrive, however this evening we will be Barry Jive, and the Uptown Five.”
Ok, we've changed it. Submitted title was "Same three bands appear in three different presentations with different labels".
picture (the submitter) had the right idea—it's often better to take a subtitle or a representative sentence from the article when an original title isn't suitable for whatever reason, but since in this case it's ambiguous, we can change it.
If there's a better phrase from the article itself, we can change it again.
Thanks :)
>> "Same three bands appear in three different presentations with different labels"
This has the makings of a Highlander episode. Three groups of immortals forming bands in different generations.