In fraud we trust: top 5 cases of misconduct in university research.
May 9, 2019 by Rene Cantu

There’s a thin line between madness and immorality. The idea of the “mad scientist” has taken on a charming, even glorified image in popular culture. From the campy portrayal of Nikola Tesla in the first issue of Superman, to Dr. Frankenstein, to Dr. Emmett Brown of Back to the Future, there’s no question Hollywood has softened the idea of the mad scientist. So I will not paint the scientists involved in these five cases of research fraud as such. The immoral actions of these researchers affected not just their own lives, but also the lives and careers of innocent students, patients, and colleagues. Academic fraud is not only a crime; it is a threat to the intellectual integrity upon which the evolution of knowledge rests. It also compromises the integrity of the institution, which takes a blow to its reputation for allowing academic misconduct to go unnoticed on its watch. Here you will find five of the most notorious recent cases of fraud in university research.
Fraud in Psychology Research
In 2011, Dutch psychologist Diederik Stapel was found to have committed academic fraud in a number of publications over the course of ten years, spanning three universities: the University of Groningen, the University of Amsterdam, and Tilburg University.
Among the dozens of studies in question, he most notably falsified data in research on racial stereotyping and on the effects of advertisements on personal identity. The journal Science published one such study, which claimed that people were more likely to stereotype and discriminate against members of another race in a chaotic, messy environment than in an organized, structured one. Another Stapel study claimed that the average person judged job applicants with male voices to be more competent. Both of these studies were later found to be contaminated with false, manipulated data.
Psychologists who examined Stapel’s publications found that his work did not stand up to scrutiny. Moreover, they concluded that Stapel had taken advantage of a loose system, under which researchers were able to work in almost total secrecy and quietly manipulate data to reach their conclusions with little fear of being contested. Newspapers all over the world had reported on Stapel’s research. He had also supervised more than a dozen doctoral theses, all of which have been rendered invalid, thereby compromising the integrity of his former students’ degrees.
“I have failed as a scientist and a researcher. I feel ashamed for it and have great regret,” Stapel told the New York Times.
Duke University Cancer Research Fraud
In 2010, Dr. Anil Potti left Duke University after allegations of research fraud surfaced. The fraud came in waves. First, Dr. Potti flagrantly lied about being a Rhodes Scholar to obtain hundreds of thousands of dollars in grant money from the American Cancer Society. Then he was caught outright falsifying data after one of his theories of personalized cancer treatment was disproven. The theory was meant to justify clinical trials involving more than a hundred patients; once it was disproven, the trials could not legitimately continue, so Dr. Potti falsified data to keep them running and to obtain further funding.
Over a dozen papers that he published were retracted from various medical journals, including the New England Journal of Medicine.
Dr. Potti had been working on personalized cancer treatment, which he hailed as “the holy grail of cancer.” Many people’s bodies fail to respond to traditional cancer treatments; personalized treatments offer hope because they are tailored to each patient’s unique constitution and tumor type. Because of this, patients flocked to Duke to register for the trials, and were even told there was an 80% chance of finding the right drug for them. Trusting Dr. Potti’s trials and drugs because of this renewed hope, many of these patients instead suffered unusual side effects such as blood clots and damaged joints. Trial participants later filed a lawsuit against Duke, alleging that the institution had administered improper chemotherapy.
Duke settled these lawsuits with the families of the patients.
Plagiarism in Kansas
Mahesh Visvanathan and Gerald Lushington, two computer scientists at the University of Kansas, admitted to accusations of plagiarism. They had copied large chunks of their research from the works of other scientists in their field. The plagiarism was so pervasive that even the summary statement of their presentation was lifted from another scientist’s article in a renowned journal.
Visvanathan and Lushington oversaw a program at the University of Kansas in which researchers reviewed and processed large amounts of data for DNA analysis. In this case, Visvanathan committed the plagiarism and Lushington knowingly refrained from reporting it to the university.
Columbia University Research Misconduct
In 2010, Bengü Sezen was finally caught falsifying data after ten years of continuous fraud. Her fraudulent activity was so blatant that she even invented fictitious people and organizations in an effort to support her research results. Sezen was found guilty of committing over 20 acts of research misconduct, and about ten of her research papers were retracted for plagiarism and outright fabrication.
Sezen’s doctoral thesis was fabricated entirely in order to produce her desired results. Her misconduct also damaged the careers of other young scientists who worked with her and who dedicated large portions of their graduate careers to trying to reproduce her results.
Columbia University moved to revoke her Ph.D. in chemistry, and Sezen fled the country during the investigation.
Penn State Fraud
In 2012, Craig Grimes ripped off the U.S. government to the tune of $3 million. He pleaded guilty to wire fraud, money laundering, and making false statements to obtain grant money.
Grimes bamboozled the National Institutes of Health (NIH) and the National Science Foundation (NSF) into granting him $1.2 million for research on measuring gases in blood, which helps detect disorders in infants. The U.S. Attorney’s Office revealed that Grimes never carried out this research and instead spent the majority of the funds on personal expenses. On top of that $1.2 million, Grimes falsified information to obtain another $1.9 million in grant money through the American Recovery and Reinvestment Act. A federal judge sentenced Grimes to 41 months in prison and ordered him to pay back over $660,000 to Penn State, the NIH, and the NSF.
CAREER FEATURE | 25 August 2023
‘Gagged and blindsided’: how an allegation of research misconduct affected our lab
By Anne Gulland, a freelance science writer in London.
In May 2019, a phone call to Ram Sasisekharan from a reporter at The Wall Street Journal triggered a chain of events that stalled the bioengineer’s research, decimated his laboratory group and, he says, left him unable to help find treatments for emerging infectious diseases during a global pandemic.
doi: https://doi.org/10.1038/d41586-023-02711-5
Annual Review of Ethics Case Studies
What Are Research Ethics Cases?
For additional information, please visit Resources for Research Ethics Education
Research Ethics Cases are a tool for discussing scientific integrity. Cases are designed to confront the readers with a specific problem that does not lend itself to easy answers. By providing a focus for discussion, cases help staff involved in research to define or refine their own standards, to appreciate alternative approaches to identifying and resolving ethical problems, and to develop skills for dealing with hard problems on their own.
Research Ethics Cases for Use by the NIH Community
- Theme 23 – Authorship, Collaborations, and Mentoring (2023)
- Theme 22 – Use of Human Biospecimens and Informed Consent (2022)
- Theme 21 – Science Under Pressure (2021)
- Theme 20 – Data, Project and Lab Management, and Communication (2020)
- Theme 19 – Civility, Harassment and Inappropriate Conduct (2019)
- Theme 18 – Implicit and Explicit Biases in the Research Setting (2018)
- Theme 17 – Socially Responsible Science (2017)
- Theme 16 – Research Reproducibility (2016)
- Theme 15 – Authorship and Collaborative Science (2015)
- Theme 14 – Differentiating Between Honest Discourse and Research Misconduct and Introduction to Enhancing Reproducibility (2014)
- Theme 13 – Data Management, Whistleblowers, and Nepotism (2013)
- Theme 12 – Mentoring (2012)
- Theme 11 – Authorship (2011)
- Theme 10 – Science and Social Responsibility, continued (2010)
- Theme 9 – Science and Social Responsibility - Dual Use Research (2009)
- Theme 8 – Borrowing - Is It Plagiarism? (2008)
- Theme 7 – Data Management and Scientific Misconduct (2007)
- Theme 6 – Ethical Ambiguities (2006)
- Theme 5 – Data Management (2005)
- Theme 4 – Collaborative Science (2004)
- Theme 3 – Mentoring (2003)
- Theme 2 – Authorship (2002)
- Theme 1 – Scientific Misconduct (2001)
For Facilitators Leading Case Discussion
For the sake of time and clarity of purpose, it is essential that one individual have responsibility for leading the group discussion. As a minimum, this responsibility should include:
- Reading the case aloud.
- Defining, and re-defining as needed, the questions to be answered.
- Encouraging discussion that is “on topic”.
- Discouraging discussion that is “off topic”.
- Keeping the pace of discussion appropriate to the time available.
- Eliciting contributions from all members of the discussion group.
- Summarizing both majority and minority opinions at the end of the discussion.
How Should Cases Be Analyzed?
Many of the skills necessary to analyze case studies can become tools for responding to real world problems. Cases, like the real world, contain uncertainties and ambiguities. Readers are encouraged to identify key issues, make assumptions as needed, and articulate options for resolution. In addition to the specific questions accompanying each case, readers should consider the following questions:
- Who are the affected parties (individuals, institutions, a field, society) in this situation?
- What interest(s) (material, financial, ethical, other) does each party have in the situation? Which interests are in conflict?
- Were the actions taken by each of the affected parties acceptable (ethical, legal, moral, or common sense)? If not, are there circumstances under which those actions would have been acceptable? Who should impose what sanction(s)?
- What other courses of action are open to each of the affected parties? What is the likely outcome of each course of action?
- For each party involved, what course of action would you take, and why?
- What actions could have been taken to avoid the conflict?
Is There a Right Answer?
Acceptable Solutions
Most problems will have several acceptable solutions or answers, but it will not always be the case that a perfect solution can be found. At times, even the best solution will still have some unsatisfactory consequences.
Unacceptable Solutions
While more than one acceptable solution may be possible, not all solutions are acceptable. For example, obvious violations of specific rules and regulations or of generally accepted standards of conduct would typically be unacceptable. However, it is also plausible that blind adherence to accepted rules or standards would sometimes be an unacceptable course of action.
Ethical Decision-Making
It should be noted that ethical decision-making is a process rather than a specific correct answer. In this sense, unethical behavior is defined by a failure to engage in the process of ethical decision-making. It is always unacceptable to have made no reasonable attempt to define a consistent and defensible basis for conduct.

NIH Extramural Nexus

Test Your Knowledge – Interactive Video of Research Misconduct Case Studies
What are some red flags that may help you avoid research misconduct? Research Integrity Officers from the HHS Office of Research Integrity (ORI) and NIH answer this question and more during our recent Research Misconduct & Detrimental Research Practices event.
In this interactive session, experts break down several case studies and take questions from the audience, explaining the Public Health Service (PHS) regulations on handling allegations and the responsibilities of institutions receiving PHS funds. Tune in to the recording to join the conversation and check your knowledge of the ethical conduct of research!
Explanations of Research Misconduct, and How They Hang Together
- Open access
- Published: 19 May 2021
- Volume 52, pages 543–561 (2021)
Tamarinde Haven and René van Woudenberg
In this paper, we explore different possible explanations for research misconduct (especially falsification and fabrication), and investigate whether they are compatible. We suggest that to explain research misconduct, we should pay attention to three factors: (1) the beliefs and desires of the misconductor, (2) contextual affordances, and (3) unconscious biases or influences. We draw on the three narratives (individual, institutional, system of science) of research misconduct proposed by Sovacool to review six different explanations. Four theories start from the individual: Rational Choice Theory, Bad Apple Theory, General Strain Theory and Prospect Theory. Organizational Justice Theory focuses on institutional factors, while New Public Management targets the system of science. For each theory, we illustrate the kinds of facts that must be known in order for explanations based on them to have minimal plausibility. We suggest that none can constitute a full explanation. Finally, we explore how the different possible explanations interrelate. We find that they are compatible, with the exception of explanations based on Rational Choice Theory and Prospect Theory, which are incompatible with one another. For illustrative purposes we examine the case of Diederik Stapel.
1 Introduction
Over the past few years, interest in research misconduct has substantially increased (Gunsalus 2019). While not everyone agrees about what should be labeled a research misbehavior, there is general consensus on what has been called research misconduct: falsification, fabrication and plagiarism (FFP) (Lafollette 2000; Steneck 2006). This consensus is reflected in codes of conduct, both national and international (ECoC 2017; NCCRI 2018).
This paper has a twofold aim: first, to explore and discuss a number of possible explanations of research misconduct; and second, to use this as a case study for the more philosophical question of how these different explanations relate to one another: are they compatible, or are they not?
This paper potentially has practical relevance in that explanations of research misconduct can be expected to give a handle on what can be done to prevent research misconduct. This being said, this paper focuses on explanation, not prevention.
The paper is organized as follows. In Sect. 2 we describe various types of research misconduct, and describe one actual case for concreteness’ sake, as well as for the sake of future reference. Section 3 discusses what to expect from an explanation. Section 4 presents and discusses a number of explanations of research misconduct and explores what needs to be known if those explanations are to have some minimum level of credibility. In Sect. 5 we discuss the more philosophical question of how these explanations hang together. We conclude with some overall remarks.
2 Research Misconduct
The most extreme kinds of research misbehaviors—fabrication, falsification, and plagiarism (FFP)—are at the same time not the most frequent ones (Martinson et al. 2005; Fanelli 2009). Much more frequent are the numerous ‘minor offences’, the many cases of ‘sloppy science’, the ‘questionable research practices’ (QRPs) (Steneck 2006). According to recent surveys, examples of frequent QRPs are: failing to report all dependent measures that are relevant for a finding (Fiedler and Schwarz 2016); insufficient supervision of junior co-workers (Haven et al. 2019); selective citing to enhance one’s own findings or conviction; and not publishing a ‘negative’ study (Bouter et al. 2016; Maggio et al. 2019). Despite their presumed frequency, assessment of the wrongness of the QRPs can be less than straightforward. Here, context, extent and frequency matter. The wrongness of FFP is more evident and codes of conduct are typically developed in order to prevent these. (For an excellent overview of different reasons for using a wide or narrow concept of research misconduct, see Faria 2018.)
The reason why research misconduct needs to be prevented is somewhat different for falsification and fabrication compared to plagiarism. Whereas falsification and fabrication distort the creation of scientific knowledge, plagiarism need not distort the field nor hamper its progress. Plagiarism fails to connect the knowledge to its proper origin, but it need not distort scientific knowledge per se (Steneck 2006; Fanelli 2009). Also, explanations for plagiarism can be expected to differ from explanations for falsification and fabrication. Some plagiarism, for example, is committed by non-fluent English authors who borrow well-written sentences or even entire paragraphs for their own work, which is an explanation that is not available for cases of falsification and fabrication. We therefore focus on the latter two.
For illustrative purposes, we will examine a case of actual research misconduct in order to review the applicability of explanatory theories of research misconduct. We chose the case of Diederik Stapel for two main reasons. First, because his fraud has been established beyond reasonable doubt. Second, because there is sufficient publicly available information about the case: information about the committees’ way of assessing the case, as well as about Stapel’s own responses and reflections on his case. The more details of a case that are available, the better we can discuss the explanatory power of the theories we shall review. With the disclaimer that it is not our aim to provide an explanation of Stapel’s fraudulent behavior, and that others have produced interesting accounts of it (for example, see Abma 2013; Zwart 2017), we now offer a very brief description of the Stapel case.
Diederik Stapel was a professor of cognitive and social psychology. His research included topics such as the influence of power on morality, the influence of stereotyping and advertisements on self-perception and performance, and other eye-catching topics (Vogel 2011). He was an established figure whose findings often appeared in national and international newspapers. Stapel was accused of data falsification by three whistleblowers from within Tilburg University, where he was employed in 2011, the year the case became public. In total, three committees investigated whether Stapel’s work at the University of Amsterdam, the University of Groningen and, finally, Tilburg University was indeed fraudulent (Levelt Committee, Noort Committee, Drenth Committee 2012). The committees established that, whilst the studies were carefully designed in consultation with collaborators, Stapel fabricated the data sets from scratch. In another variant, the data were gathered but altered by Stapel after a student-assistant had forwarded them to him. Finally, Stapel had at times reached out to colleagues, inviting them to use some data he claimed to have ‘lying around’.
Stapel has admitted that he engaged in these practices. The committees concluded that Stapel intentionally falsified and fabricated data. None of Stapel’s co-authors were found to have collaborated with him in this regard. We will provide more information about the case as we proceed.
3 What to Expect From an Explanation
It is fair to say that currently, we have no single unifying theory of explanation (Woodward starts his book with a similar remark; see Woodward (2003)). What we have is a wide assortment of ideas that are all claimed to be at least sometimes relevant for understanding explanation. One idea is that explanation is closely linked with causation: an explanation of X can be achieved by pinpointing the causal factors relevant to X. Another is that it is closely linked to laws: an explanation of X is achieved by referring to laws under which X can be subsumed. Yet another idea is that explanation is linked with unification: an explanation of the phenomena X, Y and Z is achieved by showing that X, Y and Z are special cases of a more general phenomenon GP. A further idea is that explanation sometimes has to do with reasons (as opposed to causes): an explanation of a person’s action A is achieved by citing her reasons, i.e. her beliefs and desires, for doing A. In the social and behavioural sciences, this idea is sometimes coupled with the idea mentioned above that explanation is linked with laws. This approach to explaining human behaviour aims to formulate empirical generalizations of the form: if person P desires D, and believes that action A is the most efficient means of attaining D, then P does A. The hope is that such generalizations can be improved so as to state genuine laws, laws that enable prediction. Whether this hope is a realistic one need not detain us here. The important point to note is that reference to a person’s reasons often has explanatory force.
However, it is often not just a person’s reasons that have explanatory force; they often have it in conjunction with what we shall call “affordances”: the specific situations in which a person acted and in which certain possibilities are open to him. The explanation of the fact that A shot B cannot consist of merely citing A’s desire that B be dead and his belief that pulling the trigger was a way to attain that goal. A factor in the explanation should surely be the availability of a gun to A. The availability of the gun is a contextual affordance for A.
We should add that some behaviors can be explained independently of the actor’s reasons, and independently even of the actor’s being aware of displaying those behaviors. There are unconscious influences on human behavior, like the biases and heuristics that psychologists have been researching, and reference to them can also do explanatory work (see Gilovich 1991; Kahneman 2011).
To conclude: if we want to explain cases of research misconduct, we should pay attention, among possibly other factors, to the following:
I: the desires and beliefs of the misconductor, meaning his or her (motivating) reasons;
II: the contextual affordances available to the misconductor;
III: unconscious influences.
In an actual case of misconduct, all these factors may be at work. We should therefore heed the distinction between partial and full explanations. A full explanation of an event specifies all the factors that jointly guarantee the occurrence of the event. A partial explanation, by contrast, specifies a factor, or several factors, that facilitate the occurrence of the event, but do not guarantee it. It remains an open question (for us at least) whether full explanations of human behavior are even possible.
Explanations in the social sciences can take various forms. One that will figure quite prominently in our discussion is the inference to the best explanation (IBE). A key feature of IBEs is that the factor doing explanatory work is not directly observed, but concluded to.
4 Explanations of Research Misconduct
In a helpful article, Benjamin Sovacool (2008) distinguishes three ‘narratives’ about research misconduct: one in terms of (1) impure individuals, another in terms of (2) the failures of this-or-that particular university or research institute, and yet another in terms of (3) the corrupting structure of the practice of modern science as such—three narratives that he suggests are incommensurable. Even if these narratives do not explain in any straightforward way individual cases of research misconduct, they are helpful for two reasons.
First, narratives can deliver cognitive goods that are distinct from explanations—they can provide understanding. And, as Peter Lipton (2009) has argued, there can be understanding without explanation. Even if we have no explanation of Stapel’s fraudulent behavior, it does give insight into the whole affair if the evidence indicates that Stapel was only one bad apple, or if it indicates that the institute at which he worked was failing in important respects, or if the whole structure of science turns out to be corruptive. Second, Sovacool’s narratives are helpful as they do point to places we could look for explanations. For example, the narrative that a case of research misconduct is due to an impure individual (and not a failing research institute, nor something like the corruptive structure of science as such) does not explain in any detail why Stapel engaged in the misbehavior he did, but the narrative (if true) does point to what is needed for such an explanation: the nature of his impure character needs to be understood, so that we can see how Stapel’s specific impurity led to the misbehaviors that made him notorious. Likewise, the narrative that the misconduct is due to a failing research institute does not explain Stapel’s behavior, but it does point (if true) to where to look for an explanation: to the operative rules and procedures of the institute, perhaps, to its ‘culture’ or ‘climate’ (‘there was an atmosphere of terror’), etc.
Of course, things get complicated here. For if Stapel’s misbehaviors are due exclusively to factors covered in the narratives about the institutions he was part of (or about the structure of science as such), then we should expect other members of those institutions to have displayed similar misbehaviors—which, as far as we know, they have not. And this is a reason for thinking that Stapel’s misbehaviors are due not exclusively to institutional failings, but also, say, to personal impurities like character flaws. The distinction between partial and full explanations is a recognition of this complication.
We draw attention to the fact that whereas explanations under Sovacool’s first narrative will typically refer to type I and III factors (beliefs and desires; unconscious influences), explanations under his second and third narratives will refer to type II factors (contextual affordances). Since all these factors possibly, and likely, can play a role in cases of research misconduct, we need not assume that the explanations under Sovacool’s three narratives are per se incommensurable, if that entails they are incompatible. In fact, as we will argue in Sect. 5, most of these explanations are compatible with each other, as they are partial at best.
To conclude: Sovacool’s narratives do not offer explanations of cases of research misconduct, but they point to where to look for explanations. We discuss six different (types of) theories that might help explain research misconduct. Our aim here is to specify what we need to know about a specific case in order for such explanations to get a good start. Whether they are credible is a further issue. We begin with four theories that fall under Sovacool’s first kind of narrative.
4.1 First: Rational Choice Theory
Sometimes labeled ‘rational choice theory’, this theory has its origins in economics. It starts from an individual who is portrayed as rationally considering different options to tackle a particular problem. Rational Choice says that an individual actor faced with a risky outcome selects the behavioral action that maximizes anticipated payoffs, where the utility of each behavior is weighted by the probability of its occurrence. The domain of the utility function is absolute benefits and costs. The individual weighs the costs and benefits attached to each option, makes the calculation, and on that basis makes a decision. This theory, which refers to type I factors only (beliefs and desires), is appealed to in the research integrity literature by Wible (1992) as well as by Lacetera and Zirulia (2011).
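To fix ideas, the theory’s decision rule can be written in standard expected-utility notation. The following formalization is our illustrative gloss, not notation taken from the paper:

\[
EU(a) \;=\; \sum_{i} p_i(a)\, u(o_i), \qquad a^{*} \;=\; \arg\max_{a}\, EU(a)
\]

Here \(a\) ranges over the available actions (say, cheating versus playing fair), the \(o_i\) are the possible outcomes of an action, \(p_i(a)\) their probabilities, and \(u\) the agent’s utility function over absolute benefits and costs. On this reading, a researcher commits misconduct exactly when \(EU(\text{cheat}) > EU(\text{play fair})\)—for instance, when the perceived probability of detection is low and the payoff of extra publications is high.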
Suppose we apply this theory to Stapel’s case. We will first describe what we think needs to be the case if this theory is going to provide an adequate explanation of his misconduct. Next we discuss whether (we know) these things are indeed the case.
If this theory is to explain Stapel’s misconduct, we should envisage Stapel as a rational agent who is calculating the costs and benefits, i.e. the utility, of cheating compared to playing it fair (i.e. observing the rules and principles that we now find in the numerous Codes of Responsible Conduct of Research). The benefits of (undetected) cheating probably include: more publications (or: more publications with outcomes that would be considered remarkable), which would contribute to greater prestige, which would increase the chances of obtaining more research funds, which would mean gaining more visibility, power and influence. The costs of cheating probably include: the fear of being found out (and fear of whatever else is set in motion by it: retraction of publications, loss of research funds, loss of prestige, loss of job, etc.), which means that one must always be on one’s guard; loss of self-respect; not contributing to the (great!) cause of science. The costs of playing it fair probably include: often having research results that are not significant and/or interesting, which decreases the likelihood of one’s research being published, which decreases the chances of getting research funds, and of making an impact. The benefits of playing it fair include: doing what one, from a moral point of view, ought to do, behaving in a responsible way (and virtue is its own reward, as the proverbial wisdom has it); increasing the chance of having research results that are genuine contributions to the cause of science; increasing the chance of receiving recognition that is based on substance.
If rational choice theory is going to give an adequate explanation of the falsifications and fabrications committed by Stapel, he must have engaged in a cost/benefit analysis of the cheating option as compared to the playing fairly option—and on that basis have decided that falsification and fabrication ‘pay’.
Is there any evidence that Stapel did engage in a cost/benefit analysis of this sort? There are two main types of possible evidence here: the misconduct investigation reports and Stapel’s own accounts. From the report (Levelt et al. 2012) on Stapel’s misconduct, we could deduce that the costs—at least, the fear of being found out—seemed low: “It was easy for researchers to go their own way. Nobody looked over their shoulder…” (p. 39). Stapel’s own account also points in this direction: “So when I started to fake my studies, it was very, very easy. I was always alone. Nobody checked my work; everyone trusted me. I did everything myself. Everything. I thought up the idea, I did the experiment, and I evaluated the results. I was judge, jury, and executioner. I did everything myself because I wanted to be one of the elite, to create great things and score lots of major publications” (p. 118–119).
Yet, it remains somewhat questionable whether Stapel actually engaged in a cost/benefit analysis. But this does not mean that the rational choice theory explanation is false or wrong. Stapel’s engaging in such an analysis is at least a possible outcome of a rational choice IBE, for his fabrications and falsifications may be best explained by his having made a cost/benefit analysis. Whether it indeed is the best explanation, depends, of course, on the strength of alternative explanations. Moreover, as we noted, explanations can be partial . Rational choice theory, then, may offer only a part of a full (or fuller) explanation. As a matter of fact, this IBE, even if it is correct, can at best be a partial explanation only. For, as we suggested in the previous section, there must be contextual affordances (so type II factors), in this case: structures and systems that allow for the possibility of falsification and fabrication. And these affordances fall outside the scope of rational choice theory, as do type III factors.
4.2 Second: Bad Apple Theories
Like rational choice theory, this theory too has its roots in economics. Here, the individual is depicted as someone with a flawed (moral) character. This flawed character is subsequently causally linked to corrupt acts. Greed is sometimes deemed to be an element in a flawed character. An example of a full-scale faulty character is what the literature has labelled the Machiavellian personality type, which deems that the prestige associated with a particular goal justifies any means to attain it, even if those means would be seen as unethical. Hren et al. (2006) studied Machiavellianism in relation to moral reasoning, and Tijdink et al. linked personality types such as the Machiavellian character to research misbehaviour (Tijdink et al. 2016). Bad apple theories refer to type I factors only—to reasons that motivate certain characters to behave in certain ways.
If we apply this theory to Stapel and ask what should be the case if bad apple theories are to provide an adequate explanation of his misconduct, it is clear that he needs to have, or at the time have had, a flawed moral character—a Machiavellian personality type, for example, or some other flawed moral character.
Is there evidence that Stapel had a flawed moral character at the time—evidence coming from psychologists and psychiatrists, for example, who have done something like a personality analysis on him? The only evidence that would point in that direction appears in Stapel’s own book (Stapel 2014): “It takes strong legs to carry the weight of all that success. My legs were too weak. I slipped to the floor, while others—maybe wobbling, maybe with a stick to lean on—managed to stay upright. I wanted to do everything, to be good at everything. I wasn’t content with my averageness; I shut myself away, suppressed my emotions, pushed my morality to one side, and got drunk on success and the desire for answers and solutions.” (p. 148, emphasis original). Yet this one passage seems insufficient as a basis for a solid psychological verdict on his character, and as far as we know we have nothing else to go on that is publicly available and would reliably demonstrate a flawed character.
Note that when we refer to a flawed character, we do not mean to insinuate that Stapel had no moral awareness whatsoever. The report (Levelt et al. 2012) on his misconduct explicitly mentions that he taught the research ethics course. Stapel’s account (Stapel 2014) confirms this: “I’m the teacher for the research ethics course, in which I get to discuss all the dilemmas with which I’m confronted every day, and for which I always make the wrong choice.” (p. 129).
Even if we have no solid basis to draw a conclusion about Stapel’s moral character, this doesn’t mean a bad apple explanation can be ruled out. For it is possible to make an IBE, based on a bad apple theory, to the effect that Stapel’s fraudulent conduct is best explained by the fact that he had, at the time, a flawed (moral) character. Whether this is really the best explanation, depends, again, on the strength of the available alternatives.
It seems clear, however, that bad apple theories, even insofar as they are correct, cannot give us a full explanation of Stapel’s misconduct. For there must be contextual affordances that allow flawed moral characters to commit acts of fabrication and falsification—and these are part of a full(er) explanation of the misconducts at hand.
4.3 Third: General Strain Theory
Another theory that could be grouped under the individual narrative is General Strain Theory (henceforth: GST), as originally developed by Agnew (1992), who worked in the sociology of crime. GST sees misconduct as originating in stress or strain. These states of stress and strain bring about a negative emotional state in the researcher, like anger, sadness or depression—which are, broadly speaking, type I factors. As a third step, GST posits that the behavioral strategies researchers adhere to in order to cope with these negative states differ, and, importantly, strategies may include deviant behavior (in our case: research misconduct). This theory has been proposed as playing a role in explaining research misbehavior by Martinson et al. (2010), is put forth in the National Academies’ report Fostering Integrity in Research (NASEM 2017), and recently came forward in research by Holtfreter et al. (2019), who asked US scientists what factors they believed play a role in research misconduct.
If this theory is to do explanatory work, we need to know whether Stapel faced prolonged stressful situations, so prolonged that they put him in a persistent negative state. The report on Stapel’s misconduct is silent on this issue. In his book, Stapel himself, though, talks of a persistent state of stress he experienced: “Nothing relaxes me any more… but I feel stressed and restless. I want everything, and everything has to happen now. I want out. I don’t want to have to write papers any more. I want to start over, get away from this fantasy world I’ve created, get out of this system of lies and half-truths, to another city, another job” (Stapel 2014, 131). However, he experienced this after he got into the habit of altering his data.
GST presupposes that behavioral strategies to cope with negative emotional states differ. Thus, whereas Stapel’s colleagues facing similar strains found other ways to cope, he turned to deviant behavior. But this is also a caveat: what exactly made Stapel turn to deviant strategies? Perhaps his environment was crucially different in some way, which fueled his urge to create spectacular results? In any case, GST can thus, at best, be a partial explanation. That is not to say that GST can be ruled out entirely, as it is possible, via an IBE, that his misconduct could be explained by GST—whether that is also the best explanation depends on the explanatory force of the alternative theories.
4.4 Fourth: Prospect Theory
The final theory that we shall consider under Sovacool’s first narrative is prospect theory. The roots of prospect theory lie in the psychology of risk, but the theory has also been used in behavioral economics. In their study of risky choice, Kahneman and Tversky (1979; Kahneman 2003) found that individuals are more strongly motivated by fear of loss than potential gain, and are inclined to avoid risk when faced with potential gains, yet seek risk when faced with potential losses. Bearing in mind that the reference point of the individual researcher matters (their context—whether that is one in which the researcher is faced with potential losses or gains), prospect theory would predict that researchers faced with potentially losing their job, tenure or other meaningful resources would be more prone to take risks, or in our case, to engage in research misconduct, than colleagues who face no such threats. This theory refers to type I and II factors, as the behavioral tendencies involved may, but need not, go unnoticed by the subject. The National Academies’ report Fostering Integrity in Research offers this as a possible explanation in its chapter on the causes of deviance (NASEM 2017).
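The loss–gain asymmetry can be made precise with the value function of Tversky and Kahneman. The sketch below is our illustration, using their commonly cited 1992 parameter estimates; it is not a formula given in this paper:

\[
v(x) \;=\;
\begin{cases}
x^{\alpha} & \text{if } x \ge 0,\\
-\lambda\,(-x)^{\beta} & \text{if } x < 0,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
\]

Outcomes \(x\) are coded as gains or losses relative to the researcher’s reference point, and the loss-aversion coefficient \(\lambda > 1\) makes losses loom roughly twice as large as equal-sized gains. Concavity for gains and convexity for losses yield exactly the prediction used here: risk aversion when facing potential gains, and risk seeking when a job, tenure or other meaningful resources stand to be lost.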
For this theory to explain Stapel’s deeds, we need to know whether, at the point in time when he falsified or fabricated datasets, he was faced with the threat of losing his job, or tenure, or other meaningful resources. In addition, it would be useful to know if the opposite situation occurred, where Stapel was faced with a potential gain, perhaps greater chance of having his research accepted in a high-impact journal through the risky behavior of falsifying his data, and decided against it.
Stapel’s book contains a passage of his reflection that reads: “There was no pressure, no power politics, no need to produce patents or pills, to compete in the marketplace or make a pile of money. It was always purely academic, scientific research, which makes any form of cheating even harder to understand.” (2014, 188). Another passage seems to point more at the potential for gain as a driving force: “I couldn’t resist the temptation to go a step further. I wanted it so badly. I wanted to belong, to be part of the action, to score. I really, really wanted to be really, really good. I wanted to be published in the best journals and speak in the largest room at conferences.” (p. 102–103).
The report (Levelt et al. 2012) does not provide direct information on these issues, but it does detail that 55 of Stapel’s publications rested on falsified or fabricated data. Even if we put aside the idea that different papers can be based on the same dataset, how often can one be faced with potentially losing their job, tenure or another meaningful resource? It seems likely that there were other factors at play, too. Again, that is not to say that prospect theory cannot be an explanation for research misconduct, but that it can at best be a partial explanation. And even if, in Stapel’s case, there was no direct evidence that he feared losing his job, this potential threat could be inferred via an IBE. This in turn sparks the question whether it is also the best explanation, given its competitors.
We now move on to consider a theory that aims to explain misconduct by referring to the institutions and organizations in which the perpetrator works, and thereafter to a theory that aims to explain it by referring to the structure of the practice of modern science in general. Explanations based on these theories refer to type II factors, contextual affordances.
4.5 Fifth: Organizational Culture Theories
These theories find their roots in organizational psychology. They have in common that they consider people as working in an organization with a specific culture and a particular structure, and argue that these have an effect on individuals and their behavior. An assumption underlying these theories is that there is a causal path from a certain organizational culture, to a particular mental state, to an individual’s behavior.
One particular organizational culture theory, called organizational justice theory, is based on the idea that people who perceive themselves to be treated fairly by their organization behave more fairly themselves. Conversely, when the organizational procedures are perceived as unfair, people are more likely to engage in acts that make up for the perceived unfairness, e.g. falsifying or fabricating their data. Martinson and colleagues (Vries et al. 2006; Crain et al. 2013) have investigated this theory, and they report that researchers who perceived their treatment as unfair were more likely to engage in research misconduct.
There are various ways in which the organization can influence the behavior of researchers, and the organization itself is not immune to external influences. The Institute of Medicine’s (IOM) report Integrity in Scientific Research: Creating an Environment that Promotes Responsible Conduct (IOM 2002) conceptualized the research organization as an open systems model. Within the organizational structure itself, there are policies and procedures in place that influence researchers, and within the organizational processes the IOM report emphasizes the role of leadership and supervision. These last two are especially important, as studies on the organizational climate in academic and other settings found that organizational leadership, ethics debates and ethical supervision were associated with an ethical climate. The system is open in that it produces various outputs in the form of papers and other research-related activities that in turn influence organizational inputs through funding and human resources, which in turn influence the organization again.
Another idea is that the organizational dynamics themselves can take such a form that everyone in the organization begins to engage in questionable practices. This type of unethical conduct may then become so frequent that it slowly becomes the normal way of conducting research.
If we apply this theory to the misconduct of Stapel and ask what should be the case if his misconduct is to be adequately explained by it, we must say that the culture and structure of the organizations he was employed by somehow induced his conduct. Either there should be indications that he was mistreated by his organizations, or there should be evidence that his work environment was perverted altogether. Delving deeper: is there information available on their policies, the degree to which leadership emphasized integrity, or whether open debates about integrity issues were a regular occurrence? There must, perhaps, have been reward systems in place that triggered misconduct, or some element of an organization’s culture that did the trick.
So, if such an explanation is to work for the Stapel case, what we need is insight into the culture and structure of the organizations that he worked with. Stapel seems to believe that culture played a role (Stapel 2014, 171): “I’m not the only bad guy, there’s a lot more going on, and I’ve been just a small part of a culture that makes bad things possible.” Even if there was no direct evidence available about Stapel’s research culture, it might be possible to make an IBE here too: from his misconduct we can draw conclusions suggesting a bad organizational culture and bad organizational structures—the latter explaining the occurrence of the former.
Interestingly, the report (Levelt et al. 2012) about Stapel’s misconduct devotes an entire chapter to the culture in which his fraud took place. It is described as “a culture in which scientific integrity is not held in high esteem” (p. 33), and “even in the absence of fraud in the strict sense, there was a general culture of careless, selective and uncritical handling of research and data.” (p. 47). This may prompt one to believe that the culture indeed played a role in fostering Stapel’s fraudulent behavior. However, the report (Levelt et al. 2012) presents culture as an explanation for why the fraud could be sustained for so long—“The Committees are of the opinion that this culture partly explains why the fraud was not detected earlier.” (p. 47)—not as one that brought about the fraud. Of course, this does not preclude the organizational culture from being a potential explanatory factor in the origination of the misconduct as well.
Are there indications that Stapel was structurally undervalued by his respective organizations and treated unfairly? The information in the report (Levelt et al. 2012) points in the opposite direction: “These more detailed local descriptions also reveal Mr Stapel’s considerably powerful position, at any rate within the University of Groningen and even more so within Tilburg University. At the University of Amsterdam he already enjoyed a reputation as a ‘golden boy’.” (p. 38). To our knowledge, there is no public evidence of a culture that treated researchers unfairly, or that suggests Stapel’s deeds could be interpreted as a means to make up for perceived unfairness done to him.
Can we know enough about the organization’s culture and the structures of the units Stapel belonged to? Perhaps we can. But even if we do, the organizational culture explanation can at best be a partial one. For many other individuals who worked in the same organization have not (we assume this to be so) committed acts of fabrication and falsification. For this reason we may think of an organization’s climate and structure as contextual affordances that do not forestall misconduct, and do not cause it either, but do enable it.
Until a certain stage of investigation, it is possible to propose an organizational culture explanation of Stapel’s behavior, namely as long as we have no evidence that any of the other explanations even partly explain it. At a later stage of the investigation, however, it should be possible to have more direct access to the organizational culture, as it should in principle be observable.
4.6 Sixth: Ethos of Public Administration
Ethos of public administration theories, at times labelled Taylorism or New Public Management (NPM) theories, have their roots in economics and, applied to research misconduct, fall under Sovacool’s third kind of narrative. These theories center on a complex set of ideas and concepts: specialization, command, unity, efficiency and atomism. The ideas that connect these concepts are, firstly, that individuals are naturally isolated from one another and that only an organization, through a chain of command and a sense of mission, can unify individuals into a single, efficient and rational working unit; and secondly, that individuals tend toward laziness and selfishness and are not interested in any social good beyond their own individual good, so that organizational unity and discipline must always be maintained.
The perverting influences of NPM or Taylorism on the academic system can be expressed through different phenomena that Halffman and Radder (2015) eloquently captured in their Academic Manifesto. They describe, among other phenomena, the “measurability for accountability” (p. 167), meaning the obsession with output quantifiers, be it publication indices, metrics, or impact factors. They also elaborate on the “permanent competition under the pretense of ‘quality’” (p. 168), referring to the ‘hypercompetition’ in which researchers compete against each other for funding in a ‘winner takes all’ system, where it is the junior staff who do the bulk of the work, faced with temporary contracts and poor career opportunities (Halffman and Radder 2015).
Now, this extreme emphasis on effectiveness and performance can come at the cost of neglecting ethical issues and crowding out the values that motivate professional behavior and institute the organization’s mission. When this happens, it can lead to corrupt individuals. Overman et al. (2016) seem to subscribe to this proposition when they write: “Academic misconduct is considered to be the logical behavioral consequence of output-oriented management practices, based on performance incentives.” (p. 1140).
If this theory is going to explain Stapel’s misconduct, what should be the case is that he worked in an organization with a strong focus on performance and output in a way that crowds out values and the acknowledgement thereof. Perhaps he started out with an intrinsic desire to do good research. However, the more his work’s merit was determined by performance indicators and the more the focus was put on effectiveness, the more this intrinsic motivation was replaced by a desire to do good according to these performance indicators—to be effective and publish lots of papers. In addition, the emphasis on these performance incentives shifted attention away from responsible conduct of research.
Is there evidence that Stapel worked in such a system? Overman and colleagues describe that performance indicators have indeed become more prominent at academic institutions in the Netherlands (drawing on research by Teelken (2015)). Do we have evidence that an increased emphasis on performance accounts for Stapel's actions? His own account acknowledges the pressures in contemporary science: "Science is an ongoing conflict of interests. Scientists are … all in competition with each other to try and produce as much knowledge as possible in as short a time, and with as little money, as possible, and they try to achieve this goal by all means possible. They form partnerships with business, enter the commercial research market, and collect patents, publications, theses, subsidies, and prizes." (Stapel 2014, 189–190).
Perhaps we should consider the role of these performance indicators, plus the reality of hypercompetition, as biasing Stapel's view of research. Under their influence, he unconsciously focused more and more on effectiveness at the expense of ethical conduct; at some point, effectiveness itself became his main desire. One is reminded of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure".
However, we are again left with the question of why these indicators biased Stapel, and not his peers, towards extreme efficiency. Maybe his affordances differed from those of his peers, but these fall outside the scope of this theory. Hence the ethos of public administration, or NPM, even if it yields an acceptable explanation of misconduct, is best thought of as a partial one.
As with the other theories, even if (so far) there is no direct evidence that a case of scientific fraud was caused (at least in part) by excessive emphasis on effectiveness and performance indicators, such excessive emphasis could be inferred, indirectly, via an IBE, in which case the question arises whether it is also the best explanation, given its competitors.
5 Are the Different Explanations of Research Misconduct Compatible?
Having discussed six explanations of research misconduct, and having explicated what, for each of them, needs to be the case if they are to be accurate, if only partial, explanations, we now address the second question of this paper: how do these explanations relate to each other? Two different explanations of the same phenomenon, E1 and E2, can be compatible, or they can be incompatible. And if they are compatible, further qualifications can be added: for example, that E1 and E2 "add up", or that they reinforce each other, or that one weakens the other. We will see examples of each of these sorts of relationships below.
Given that we have six explanations on our hands, there are 15 pairs of explanations to consider. We can reduce this number, because each of the four explanations under the first narrative is by its nature compatible with the explanations under the second (institutional) and third (system of science) narratives. This is in the nature of the case, as the first narrative focuses on qualities of the misconductor, and the latter two on contextual affordances, none of which, we suggested, constitutes a full explanation. We do not want to make this point only at this abstract level, but want to offer one illustration. Consider bad apple explanations and organizational culture explanations. It would seem that such explanations (of the same behavior) are at least compatible. If cheating can be adequately, if only partly, explained by reference to the ill treatment that the cheater has suffered at an earlier stage, then this explanation can be augmented by the additional explanation that the cheater has a failed moral character. And if cheating can adequately, if only partly, be explained by reference to the culture of the organization in which the cheater worked, then this explanation too can be augmented by the additional explanation that the cheater has a failed moral character. So these explanations are at least compatible. At least, for it is possible (and plausible) that these explanations reinforce each other in this way: failed moral characters will tend to make organizational cultures bad, and bad organizational cultures will tend to make moral characters fail. Failed moral characters in organizations with a bad culture will tend to feel at home, like fish in water. Applied to Stapel: his misconduct can be explained by reference to his failed moral character (to akrasia perhaps), but also by reference to the culture of the organizations with which he worked. And the two explanations can reinforce each other, as bad characters breed bad cultures, and bad cultures breed bad characters.
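To make the bookkeeping explicit, the grouping just described can be written as a worked count, using only the numbers given in the text:

$$\binom{6}{2} \;=\; 15 \;=\; \underbrace{4 \times 2}_{\text{narrative 1 vs. narratives 2 and 3}} \;+\; \underbrace{1}_{\text{narrative 2 vs. narrative 3}} \;+\; \underbrace{\binom{4}{2}}_{\text{within narrative 1}} \;=\; 8 + 1 + 6.$$

The eight pairs in the first group, and the single pair in the second, are compatible by the nature of the case; the six pairs within the first narrative remain to be checked one by one, as is done below.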
As is in the nature of the case, the explanations under the second and third narratives (Organizational Culture explanations and NPM explanations) are compatible as well. The point can also be made more concretely. Since NPM will foster a particular kind of culture within an organization, and since a particular kind of culture will be especially sensitive to the downsides of NPM, explanations of misconduct that refer to culture and to NPM are compatible, and they even reinforce each other. Applied to Stapel's case: his misconduct can be explained, partly, by reference to organizational culture, and this explanation can be augmented (making for a more complete explanation) by reference to the downsides of NPM, the two reinforcing each other.
Since the first narrative covers four explanations, there are six pairs to check for compatibility. The first pair we consider is Rational Choice explanations and Bad Apple explanations. We may feel pulled in two directions here. Suppose someone is a bad apple, i.e. displays a defective moral character (perhaps the person suffers from akrasia). Then we may think that his choice can never be rational, because his defective moral character prevents him from making such a choice. On the other hand, if making a rational choice consists of weighing the costs and benefits of an action against those of alternative actions, then it would seem that someone with a defective moral character can engage in rational choice making as well, even if the outcome of the calculation is not what we would like it to be. Since rational choice theory works with formal ("means-end") rationality, a rational choice explanation is compatible with a bad apple explanation of the same behavior. Applied to the Stapel case: an explanation of his misconduct in terms of character flaws (like akrasia) is compatible with the claim that his choice to cheat was the outcome of a rational cost–benefit analysis.
The second pair of explanations we consider is Rational Choice and General Strain. This pair puts before us the question of whether strain and stress prevent a person from making a rational choice. On the face of it, stress and strain may lead a person to select a goal that he would not have selected in their absence; and given the goal, he may have calculated the means to attain it. Alternatively, a person may have set himself a goal, while stress and strain influence the calculation of the means to attain it. The influence may be that certain means become live options that were dead, or that live options die. But given the options, a stressed person may still make what he thinks is a fair calculation, fair not in a moral but in a formal sense. Either way, the explanations based on Rational Choice and General Strain are compatible. Applied to the Stapel case: stress and strain may have led him to set the goal of achieving high-profile publications, and rational choice deliberation suggested to him that fabrication and falsification were the ways to attain that goal. Alternatively, Stapel had set himself the goal of achieving high-profile publications, and strain and stress led him to calculate that fabrication and falsification were the best ways to attain the goal.
Third, Rational Choice and Prospect Theory, by contrast, do not deliver compatible explanations. The former assumes that an actor will always seek maximal gains based on the probability of occurrence, while the latter says that fear of loss tends to be a much stronger motivator of behavior than the potential for gain, and that individuals tend toward risk aversion when confronted with potential gains but toward risk seeking when trying to avoid potential losses.
Applied to Stapel: Rational Choice explains his fraudulent behavior by reference to a rational calculation he made so as to maximize gains, while Prospect Theory predicts that, given Stapel's stable job situation (he had a tenured position with no fear of losing it), he would have been less likely to make the risky choices that he did make.
The incompatibility should come as no surprise, as Prospect Theory was expressly developed as an alternative to Rational Choice (Thomas and Loughran 2014).
General Strain and Bad Apple approaches are compatible. If stress and strain induce deviant behavior, they do so in virtuous persons and bad apples alike. Strain explanations and bad apple explanations may even reinforce each other, in that it is plausible to think that bad apples make even worse choices if they also experience stress and strain, and that strained persons make worse choices if they have flawed moral characters. Applied to Stapel: if he had a flawed moral character, he may already have been open to cheating; if he was also under stress and strain, the cheating option may have become even more salient.
Prospect Theory and Bad Apple theory are also compatible. As indicated, Prospect Theory predicts that people faced with the prospect of losing their job or other meaningful resources will be more inclined to take risks; if this holds, it holds for bad and good apples alike. The two explanations of behavior it suggests can both be correct, if only partially. Applied to our case: if we counterfactually assume that Stapel's position was at stake, and also that he had a flawed moral character, then both these factors can be referred to for explanatory purposes, and both explanations can be correct.
The sixth and final pair to consider is Prospect Theory and General Strain. Strain and stress may be real in a person who, when faced with serious loss of meaningful resources, is more prone to take risks than when not so faced. In that situation, two explanations of the person's behavior, each based on its own theory, can both be true, and hence compatible. However, if a person experiences stress and strain while there is no threat of loss of meaningful resources, the two theories pull in opposite directions: Prospect Theory tells us that people confronting potential gains are risk-averse, so the impulse to deviant behavior generated by strain and stress would be mitigated by the impulse to risk aversion. In that case we might say that the two theories are still compatible, but that the explanations do not reinforce each other, nor do they add up; rather, the one weakens the other, in the sense that the effect one theory predicts does not occur to the degree it would have in the absence of the other effect. If we again assume that Stapel was experiencing stress and strain (which already motivated him towards deviant behavior) and was also facing the threat of losing meaningful resources (which inclined him to take more risks than he would otherwise have taken), then the explanations reinforce each other. But if he was experiencing stress and strain while there was no fear of losing meaningful resources, then the theories lead us to expect deviant behavior to a lesser degree than if there had also been a threat of loss.
6 Concluding Remarks
We have discussed six explanations of research misconduct, and how they relate to each other. We argued that most of the theories are compatible with each other, with the exception of Rational Choice and Prospect Theory. Suppose now we concentrate on explanations that are compatible. Can we conclude that those pairs offer full explanations? For a number of reasons we cannot. First, we have only looked at pairs among the six theories we have discussed; triplets of them may offer fuller explanations, and quartets even fuller ones. Second, there are explanations of research misconduct that we have not discussed, but that can be added to the fold. Footnote 13 Third, a large body of research on research misconduct takes the form of correlation 'theories' that map significant correlations between (some measure of) research misconduct and some other factor of interest. Of course, correlation does not equal causation. Taking this one step further: on a narrow reading of theory ("an idea that is used to account for a situation") it seems incorrect to speak of correlation theories at all. Correlations map temporal co-occurrence beyond some degree of doubt; the idea or link that is to explain this co-occurrence is often thought up post hoc as a rationalization, and is not (yet) a fully-fledged theory.
However, that does not render correlational research meaningless for explaining research misconduct. Similar to narratives, correlational research results deliver cognitive goods—they give knowledge about factors that in some way play into the misconduct. Along that same line of reasoning, they serve as a pointer for further theorizing that may at some point be formalized into a theory.
Still, we are left with the question of whether it is sensible to suppose that, drawing on all correlational research and supplemented with the types of theories reviewed here, one can fully explain research misconduct. There seem to be two avenues to take, both reconcilable with what we argued above, and both connected to one's stance on free will. Either one believes that humans are free, and that this renders some part of their behavior (especially complex behaviors, like research misconduct) inexplicable. Or one believes that humans are not free and that scholars have simply not yet found the (final) key to the explanatory puzzle. It seems natural to think that this key, if it exists, is to be found somewhere along the lines of unconscious factors that influence human behavior, such as biases or heuristics. We tend to the first view.
A further point we would like to make is that although this paper is focused on theories coined to explain falsification and fabrication, these theories also seem relevant when explaining lesser trespasses, such as questionable research practices (QRPs). In fact, for those QRPs that teeter on the edge of falsification, such as p-hacking or HARKing (hypothesizing after the results are known), it seems natural to suspect that applying the theories reviewed here to explain their occurrence will run into problems similar to those we encountered when trying to explain research misconduct. And since explanations of research misbehavior (here encompassing both FFP and QRPs) feed into our ideas about its prevention, extending our theories and models of explanation may help us prevent it.
A related point is that although various theories have been used to explain the research misbehavior of individual scientists, our discussion brought to light that for such explanations to have some minimal level of plausibility, we need to know quite a bit about the personal situation of the researcher, as well as her contextual affordances at the institutional level. The suggestion of our paper is that such knowledge is not easily obtained.
Our final point concerns the role of the Stapel case in our discussion. It should be clear that we have not tried to offer the fullest possible explanation of his fraudulent behavior. We have used Stapel merely to illustrate the kinds and amounts of facts that should be known if an explanation of research misconduct, based on any of the six theories discussed in this paper, is to have minimal plausibility.
We note that reasons can serve different roles: they can be motivating and they can, even at the same time, be normative. P's motivating reasons are the reasons for which P did A: the considerations in light of which P did what she did, and that motivated her to do A. Normative reasons are the reasons that P would cite in favor of her action A, reasons that would show that A was the sensible, or right, thing to do. This way of making the distinction is borrowed from Dancy (2000). Anscombe (2005) offers a subtle analysis of the notion of "explaining behavior".
Note that the factors we describe seem to match up with what has been termed levels of explanation, e.g. an explanation using desires and beliefs would be an explanation on the personal level, etc. (see Owens 1989). Yet, we will not focus on the question of whether an explanation on one level is more fundamental than an explanation on another; our aim is merely to assess the plausibility of the explanations and whether they are compatible with each other.
See Lipton (2008, Ch. 4). Standard examples of IBEs are the doctor's inference that his patient has measles, since this is the best explanation of the symptoms; and the astronomer's inference to the existence and motion of Neptune, since that is the best explanation of the observed perturbations of Uranus.
Our search for theories was guided by a similar endeavor of Gjalt de Graaf's (2007), in which he discusses a number of theories that purport to explain corrupt or fraudulent behavior in public administration, such as taking bribes. We supplemented his list with additional theories where relevant. Interestingly, de Graaf also included correlation 'theories', but as these are not theories proper (they do not contain an idea about the explanatory mechanism), we do not review them in depth but briefly elaborate on their relevance in the concluding section.
Note that the (types of) theories de Graaf (2007) reviewed are theories of human behavior that come from different fields, such as economics, sociology and criminology, and that seem to work on different levels. Hence these theories often apply to terrains beyond fraud in public administration; they may not even have been designed to explain such fraud, but have been appealed to in order to explain it. In a similar vein, we explore whether the theories that have been appealed to in order to explain research misconduct are actually applicable and compatible.
We are aware that rational choice theory is sometimes used as a general paradigm that is not to be applied to individual cases, because the notion of a "rational choice" is deemed to be no more than a useful theoretical fiction. We side with those authors who have used rational choice theory to shed light on individual cases of human behavior.
What does it mean to be a 'rational' agent in this case? In jurisprudence, an important consideration for holding someone accountable is whether that person had the right mentality, or mens rea. The four generally distinguished levels of mentality are purpose, knowledge, recklessness and negligence. Each of these corresponds to a different extent to which the researcher, in our case Stapel, could be held accountable for his deeds, with purpose being the highest level of accountability. Stapel's case maps most closely onto this level: in his own writings, he is explicit about his intention to deceive others. The mental state of knowledge would look something like this: a colleague of Stapel had reasonably strong doubts about Stapel's conduct, but decided to work with him regardless. Recklessness could perhaps be applied to cases of falsification where a researcher runs a data analysis she does not fully understand and reports a significant result regardless. The level of negligence does not seem to fit our case, as one is unlikely to engage in misconduct out of negligence. So if one applies rational choice theory to cases of misconduct, one should be clear about whether the conduct was purposeful, knowing, reckless, or negligent, as rational choice theory seems more apt to explain cases where the trespasser had a mens rea of purpose or knowledge than cases of recklessness or negligence.
Note that this concerns Brown's (2014) translation of Stapel's 2012 autobiographical book. Caution is needed when interpreting these statements, as the book is arguably oratio pro domo. Stapel seems to acknowledge this; his foreword reads: "This is my own, personal, selective, biased story about my downfall." (p. iii). Like Zwart (2017), our primary concern is not whether "the autobiographical account actually corresponds with the facts… but rather what can be learned and gained from this ego-document" (pp. 211–212).
Although we mostly discuss moral character flaws, it seems plausible that intellectual character flaws, such as insouciance, play a similar role; the insouciance example is taken from Cassam (1992).
The organization is often studied through the organizational culture (the values, beliefs, and norms that help shape members' behavior) and the organizational climate, defined as "the shared meaning organizational members attach to the events, policies, practices, and procedures they experience and the behaviors they see being rewarded, supported, and expected" (Schneider et al. 2013, 115). We will look at both in our consideration of organizational justice theory, but as policies and procedures are more observable than values and beliefs, we will focus more heavily on the former when reviewing the available empirical materials.
It can be hard, in the case of academic research, to pinpoint the boundary at which the culture ends and the outside begins, which can be seen as a caveat of applying organizational justice theory to research misconduct. Sometimes we speak of the research culture in, say, psychology, referring to the scientific field at large. Relatedly, internal means of promotion or tenure are influenced by review committees for papers and grants, which would traditionally be placed outside of the organizational culture (see also Martinson et al. 2010). Nevertheless, it seems reasonable to suppose that an individual researcher is most profoundly influenced by their local climate: by the policies that directly apply to them and by the practices they see their colleagues engage in and be rewarded for.
De Graaf's (2007) fourth type of theory is the theory of clashing moral values. The idea builds on Sellin (1983): particular values that are held in high regard in the private sphere may lead to behaviors that are undesirable in the public or work sphere. Davis (2003) applied this to research misconduct cases: take a researcher who comes from a culture where scientific productivity is the holy grail. After the researcher has worked for some time in a culture where adherence to ethical practice is regarded as pivotal, some of the researcher's behaviors may be regarded as fraudulent. Davis argues that this can be fixed by subjecting the researcher to ethical training or by developing widely endorsed codes of conduct. If this theory is to stand its ground, Stapel should have been (successfully) socialized in a culture that held values which clash with ethical practice in extremely high regard. We think the analogy to research misconduct does not work here. Which value in the private sphere is supposed to do the explanatory work? If we look at Davis's example, it is far from obvious that adherence to ethical practice clashes with productivity. We regard clashing moral values as a nonstarter in the case of research misconduct.
See for example Rajah-Kanagasabai and Roberts (2015), who use the theory of planned behavior to explain research misconduct in students. Because our review focused on misconduct among academic researchers, and the theory had not been applied outside the student context, we chose not to review the theory of planned behavior in depth here. Alternatively, Hackett (1994) reviewed anomie as a possible explanation for researchers engaging in research misconduct, but he dismissed it so persuasively that we chose not to review it here.
Abma, R. (2013). De publicatiefabriek. Over de betekenis van de affaire-Stapel (p. 183). Nijmegen: Van Tilt Uitgeverij.
Agnew, R. (1992). Foundation for a general strain theory of crime and delinquency. Criminology, 30, 47–87.
ALLEA (All European Academies). (2017). The European code of conduct for research integrity. Berlin: All European Academies.
Anscombe, G. E. M. (2005). The causation of action. In M. Geach & L. Gormally (Eds.), Human life, action and ethics (St. Andrews Studies in Philosophy and Public Affairs). Exeter: Imprint Academic.
Bouter, L. M., Tijdink, J., Axelsen, N., Martinson, B. C., & ter Riet, G. (2016). Ranking major and minor research misbehaviors: Results from a survey among participants of four world conferences on research integrity. Research Integrity and Peer Review, 1(17), 1–8.
Cassam, Q. (1992). Vices of the mind. Oxford: Oxford University Press.
Crain, L. A., Martinson, B. C., & Thrush, C. R. (2013). Relationships between the Survey of Organizational Research Climate (SORC) and self-reported research practices. Science and Engineering Ethics, 19(3), 835–850.
Dancy, J. (2000). Practical reality. Oxford: Oxford University Press.
Davis, M. S. (2003). The role of culture in research misconduct. Accountability in Research, 10(3), 189–201.
De Graaf, G. (2007). Causes of corruption: Towards a contextual theory of corruption. Public Administration Quarterly, 31, 39–86.
De Vries, R., Anderson, M. S., & Martinson, B. C. (2006). Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics, 1(1), 43–50.
Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5), e5738.
Faria, R. (2018). Research misconduct as white-collar crime: A criminological approach. Cham: Palgrave Macmillan.
Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science, 7(1), 45–52.
Gilovich, T. (1991). How we know what isn't so: The fallibility of human reason in everyday life. New York: Free Press.
Gunsalus, C. K. (2019). Make reports of research misconduct public. Nature, 570, 7.
Hackett, E. J. (1994). A social control perspective on scientific misconduct. Journal of Higher Education, 65(3), 242–260.
Halffman, W., & Radder, H. (2015). The academic manifesto: From an occupied to a public university. Minerva, 53(2), 165–187.
Haven, T. L., Tijdink, J. K., Pasman, H. R., Widdershoven, G., ter Riet, G., & Bouter, L. M. (2019). Researchers' perceptions of research misbehaviours: A mixed methods study among academic researchers in Amsterdam. Research Integrity and Peer Review, 4(25), 1–12.
Holtfreter, K., Reisig, M. D., Pratt, T. C., & Mays, R. D. (2019). The perceived causes of research misconduct among faculty members in the natural, social, and applied sciences. Studies in Higher Education, 1–13.
Hren, D., Vujaklija, A., Ivanišević, R., Knežević, J., Marušić, M., & Marušić, A. (2006). Students' moral reasoning, Machiavellianism and socially desirable responding: Implications for teaching ethics and research integrity. Medical Education, 40(3), 269–277.
Institute of Medicine. (2002). Integrity in scientific research: Creating an environment that promotes responsible conduct. Washington, DC: National Academy of Sciences.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5), 1449–1475.
Kahneman, D. (2011). Thinking, fast and slow. London: Penguin Books.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Lacetera, N., & Zirulia, L. (2011). The economics of scientific misconduct. Journal of Law, Economics, and Organization, 27(3), 568–603.
Lafollette, M. C. (2000). The evolution of the "scientific misconduct" issue: An historical overview. Proceedings of the Society for Experimental Biology and Medicine, 224(4), 211–215.
Levelt Committee, Noort Committee, Drenth Committee. (2012). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel.
Lipton, P. (2008). Inference to the best explanation (2nd ed.). London and New York: Routledge.
Lipton, P. (2009). Understanding without explanation. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives. Pittsburgh: University of Pittsburgh Press.
Maggio, L., Dong, T., Driessen, E., & Artino, A. (2019). Factors associated with scientific misconduct and questionable research practices in health professions education. Perspectives on Medical Education, 8(2), 74–82.
Martinson, B. C., Anderson, M. S., & de Vries, R. (2005). Scientists behaving badly. Nature, 435(7043), 737–738.
Martinson, B. C., Crain, L. A., De Vries, R., & Anderson, M. S. (2010). The importance of organizational justice in ensuring research integrity. Journal of Empirical Research on Human Research Ethics, 5(3), 67–83.
National Academies of Sciences, Engineering, and Medicine. (2017). Fostering integrity in research. Washington, DC: The National Academies Press. https://doi.org/10.17226/21896
Netherlands code of conduct for research integrity. (2018).
Overman, S., Akkerman, A., & Torenvlied, R. (2016). Targets for honesty: How performance indicators shape integrity in Dutch higher education. Public Administration, 94(4), 1140–1154.
Owens, D. (1989). Levels of explanation. Mind, 98, 57–79.
Rajah-Kanagasabai, C. J., & Roberts, L. D. (2015). Predicting self-reported research misconduct and questionable research practices in university students using an augmented theory of planned behavior. Frontiers in Psychology, 6, 535.
Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64(1), 361–388.
Sellin, T. (1983). Culture and conflict. New York: Social Science Research Council.
Sovacool, B. K. (2008). Exploring scientific misconduct: Isolated individuals, impure institutions, or an inevitable idiom of modern science? Journal of Bioethical Inquiry, 5(4), 271–282.
Stapel, D. (2014). Faking science: A true story of academic fraud (N. J. L. Brown, Trans.). https://bit.ly/3tLfkCr
Steneck, N. (2006). Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics, 12(1), 53–74.
Teelken, C. (2015). Hybridity, coping mechanisms, and academic performance management: Comparing three countries. Public Administration, 93(2), 307–323.
Thomas, K. J., & Loughran, T. A. (2014). Rational choice and prospect theory. In G. Bruinsma & D. Weisburd (Eds.), Encyclopedia of criminology and criminal justice. New York: Springer.
Tijdink, J. K., Bouter, L. M., Veldkamp, C. L. S., Van De Ven, P. M., Wicherts, J. M., & Smulders, Y. M. (2016). Personality traits are associated with research misbehavior in Dutch scientists: A cross-sectional study. PLoS ONE, 11(9), 1–12.
Vogel, G. (2011). Psychologist accused of fraud on astonishing scale. Science, 334(6056), 579.
Wible, J. R. (1992). Fraud in science: An economic approach. Philosophy of the Social Sciences, 22(1), 5–27.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.
Zwart, H. (2017). Tales of research misconduct: A Lacanian diagnostics of integrity challenges in science novels (pp. 1–254). Cham: Springer.
Acknowledgements
TH and RvW would like to thank the reviewers and the members of the Theoretical Philosophy reading group for their critical yet constructive comments on earlier versions of this paper.
Author information
Authors and Affiliations
Department of Philosophy, Vrije Universiteit, De Boelelaan 1105, 1081 HV, Amsterdam, The Netherlands
Tamarinde Haven & René van Woudenberg
Corresponding author
Correspondence to René van Woudenberg.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/
About this article
Haven, T., van Woudenberg, R. Explanations of Research Misconduct, and How They Hang Together. J Gen Philos Sci 52, 543–561 (2021). https://doi.org/10.1007/s10838-021-09555-5
Accepted: 12 January 2021
Published: 19 May 2021
Issue Date: December 2021
Keywords: Research misconduct; Explanations; Research integrity
Research article | Open access | Published: 30 April 2021
A scoping review of the literature featuring research ethics and research integrity cases
Anna Catharina Vieira Armond (ORCID: orcid.org/0000-0002-7121-5354), Bert Gordijn, Jonathan Lewis, Mohammad Hosseini, János Kristóf Bodnár, Soren Holm & Péter Kakuk
BMC Medical Ethics, volume 22, Article number: 50 (2021)
Abstract
Background
The areas of Research Ethics (RE) and Research Integrity (RI) are rapidly evolving. Cases of research misconduct, other transgressions related to RE and RI, and forms of ethically questionable behaviors have been frequently published. The objective of this scoping review was to collect RE and RI cases, analyze their main characteristics, and discuss how these cases are represented in the scientific literature.
Methods
The search included cases involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework. A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without language or date restrictions. Data relating to the articles and the cases were extracted from the case descriptions.
Results
A total of 14,719 records were identified, and 388 items were included in the qualitative synthesis. The papers contained 500 case descriptions. After applying the eligibility criteria, 238 cases were included in the analysis. In the case analysis, fabrication and falsification were the most frequently tagged violations (44.9%). Non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), followed by patient safety issues (11.1%) and plagiarism (6.9%). 80.8% of cases were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from the Social Sciences, 2.1% from Engineering and Technology, and 1.3% from the Humanities. Paper retraction was the most prevalent sanction (45.4%), followed by exclusion from funding applications (35.5%).
Conclusions
Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show an overrepresentation of biomedical research cases relative to other scientific fields, compared to its proportion in scientific publications. The cases mostly involve fabrication, falsification, and patient safety issues. This finding could have a significant impact on the academic representation of misbehaviors. The predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations, and from recently emerging forms of misbehavior.
Background
There has been an increase in academic interest in research ethics (RE) and research integrity (RI) over the past decade. This is due, among other reasons, to the changing research environment with new and complex technologies, increased pressure to publish, greater competition in grant applications, increased university-industry collaborative programs, and growth in international collaborations [1]. In addition, part of the academic interest in RE and RI is due to highly publicized cases of misconduct [2].
There is a growing body of published RE and RI cases, which may contribute to public attitudes regarding both science and scientists [3]. Different approaches have been used to analyze RE and RI cases. Studies focusing on files of the Office of Research Integrity (ORI) [2], retracted papers [4], quantitative surveys [5], data audits [6], and media coverage [3] have been conducted to understand the context, causes, and consequences of these cases.
Analyses of RE and RI cases often influence policies on the responsible conduct of research [1]. Moreover, details about cases facilitate a broader understanding of issues related to RE and RI and can drive interventions to address them. Currently, there are no comprehensive studies that have collected and evaluated the RE and RI cases available in the academic literature. This review has been developed by members of the EnTIRE consortium to generate information on the cases that will be made available on the Embassy of Good Science platform (www.embassy.science). Two separate analyses have been conducted. The first uses the identified research articles to explore how the literature presents cases of RE and RI, in relation to year of publication, country, article genre, and the violation involved. The second uses the cases extracted from the literature in order to characterize the cases and analyze them with respect to the violations involved, sanctions, and field of science.
Methods
This scoping review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and the PRISMA Extension for Scoping Reviews (PRISMA-ScR). The full protocol was pre-registered and is available at https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5bde92120&appId=PPGMS
Eligibility
Articles with non-fictional case(s) involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework were included. Cases unrelated to scientific activities, research institutions, or academic or industrial research and publication were excluded, as were articles that did not contain a substantial description of the case.
A normative framework consists of explicit rules, formulated in laws, regulations, codes, and guidelines, as well as implicit rules, which structure local research practices and influence the application of explicitly formulated rules. Therefore, if a case involves a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework, it does so on the basis of explicit and/or implicit rules governing RE and RI practice.
Search strategy
A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without any language or date restrictions. Two parallel searches were performed with two sets of medical subject heading (MeSH) terms, one for RE and another for RI. The parallel searches generated two sets of data, thereby enabling us to analyze and further investigate the overlaps in, differences in, and evolution of, the representation of RE and RI cases in the academic literature. The terms used in the first search were: (("research ethics") AND (violation OR unethical OR misconduct)). The terms used in the parallel search were: (("research integrity") AND (violation OR unethical OR misconduct)). The search strategy's validity was tested in a pilot search, in which different keyword combinations and search strings were used, and the abstracts of the first hundred hits in each database were read (Additional file 1).
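As an aside for readers who wish to re-run the PubMed leg of this strategy programmatically, the sketch below shows one way to issue the two query strings via Biopython's Entrez interface. It is an illustration only, not the tooling actually used for this review; the contact email is a placeholder, and the retmax ceiling reflects NCBI's cap on a single esearch call.

```python
# Minimal sketch: running the review's two search strings against PubMed
# via Biopython's Entrez interface. Assumptions (not from the paper): the
# contact email is a placeholder required by NCBI's usage policy, and a
# single esearch call returns at most 10,000 IDs, so very large result
# sets would need paging or the Entrez history server.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder contact address

QUERIES = {
    "RE": '("research ethics") AND (violation OR unethical OR misconduct)',
    "RI": '("research integrity") AND (violation OR unethical OR misconduct)',
}

def search_pubmed(term, retmax=10000):
    """Return the list of PubMed IDs matching a query string."""
    handle = Entrez.esearch(db="pubmed", term=term, retmax=retmax)
    record = Entrez.read(handle)
    handle.close()
    return list(record["IdList"])

if __name__ == "__main__":
    ids = {label: set(search_pubmed(q)) for label, q in QUERIES.items()}
    # Combine the parallel searches and drop duplicates, as the review does.
    combined = ids["RE"] | ids["RI"]
    print(len(ids["RE"]), len(ids["RI"]), "combined:", len(combined))
```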
After searching the databases with these two search strings, the titles and abstracts of the extracted items were read independently by three contributors (ACVA, PK, and KB), and articles that could potentially meet the inclusion criteria were identified. After the independent reading, the three contributors compared their results to determine which studies were to be included in the next stage. In case of disagreement, items were reassessed in order to reach a consensus. Subsequently, the qualified items were read in full.
Data extraction
The data extraction process was divided among three assessors (ACVA, PK and KB). Each list of extracted data generated by one assessor was cross-checked by the other two. In case of any inconsistencies, the case was reassessed to reach a consensus. The following categories were employed to analyze the data of each extracted item (where available): (I) author(s); (II) title; (III) year of publication; (IV) country (according to the first author's affiliation); (V) article genre; (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification) [7]; (X) types of violation (see below); (XI) case description; and (XII) consequences for persons or institutions involved in the case.
Two sets of data were created after the data extraction process. One set was used for the analysis of articles and their representation in the literature, and the other set was created for the analysis of cases. In the set for the analysis of articles, all eligible items, including duplicate cases (cases found in more than one paper, e.g. Hwang case, Baltimore case) were included. The aim was to understand the historical aspects of violations reported in the literature as well as the paper genre in which cases are described and discussed. For this set, the variables of the year of publication (III); country (IV); article genre (V); and types of violation (X) were analyzed.
For the analysis of cases, all duplicated cases and cases that did not contain enough information about particularities to differentiate them from others (e.g. names of the people or institutions involved, country, date) were excluded. In this set, prominent cases (i.e. those found in more than one paper) were listed only once, generating a set containing solely unique cases. These additional exclusion criteria were applied to avoid multiple representations of cases. For the analysis of cases, the variables: (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification); (X) types of violation; (XI) case details; and (XII) consequences for persons or institutions involved in the case were considered.
Article genre classification
We used ten categories to capture the differences in genre. We included a case description in the "news" genre if the case was published in the news section of a scientific journal or newspaper. Although we did not develop a search strategy for newspaper articles, some of them (e.g. from the New York Times) are indexed in scientific databases such as PubMed. The same method was used to allocate case descriptions to the "editorial", "commentary", "misconduct notice", "retraction notice", "review", "letter" and "book review" genres. We applied the "case analysis" genre if a case description included a normative analysis of the case, and the "educational" genre when a case description was incorporated to illustrate RE and RI guidelines or institutional policies.
Categorization of violations
For the extraction process, we used the articles' own terminology for the violations/ethical issues involved in the event (e.g. plagiarism, falsification, ghost authorship, conflict of interest, etc.) to tag each article. Where the terminology was incompatible with the case description, further categories were added to the original terminology for the same case. Subsequently, the resulting list of terms was standardized using the list of major and minor misbehaviors developed by Bouter and colleagues [8]. This list consists of 60 items classified into four categories: study design, data collection, reporting, and collaboration issues (Additional file 2). A sketch of this two-step tagging procedure is given below.
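The following sketch illustrates the two-step tagging just described. The mappings are hypothetical fragments for illustration only; the actual standardization used the full 60-item list, and the category assignments below are not taken from that list.

```python
# Minimal sketch of the two-step tagging procedure: an article's own
# violation terms are first mapped to standardized items, which in turn
# belong to one of four categories. Both dictionaries are illustrative
# fragments, not the actual 60-item list of Bouter and colleagues.
RAW_TO_STANDARD = {
    "data fabrication": "fabrication",
    "falsified figures": "falsification",
    "ghost authorship": "authorship issues",
    "no rec approval": "non-adherence to laws and regulations",
}

STANDARD_TO_CATEGORY = {  # illustrative assignments only
    "fabrication": "reporting",
    "falsification": "reporting",
    "authorship issues": "collaboration issues",
    "non-adherence to laws and regulations": "data collection",
}

def tag_case(raw_terms):
    """Map an article's own violation terms to (standard item, category) pairs."""
    tags = set()
    for term in raw_terms:
        item = RAW_TO_STANDARD.get(term.lower())
        if item is not None:
            tags.add((item, STANDARD_TO_CATEGORY[item]))
    return tags

print(tag_case(["Data fabrication", "Ghost authorship"]))  # two (item, category) pairs
```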
Results
Systematic search
A total of 11,641 records were identified through the RE search and 3,078 through the RI search. The results of the parallel searches were combined and the duplicates removed. The remaining 10,556 records were screened; at this stage, 9,750 items were excluded because they did not fulfill the inclusion criteria, and 806 items were selected for full-text reading. Subsequently, 388 articles were included in the qualitative synthesis (Fig. 1).

Fig. 1 Flow diagram
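As an arithmetic aside, the flow numbers reconcile as follows (the duplicate count and the number excluded at full text are implied by the reported totals rather than stated explicitly):

$$11{,}641 + 3{,}078 = 14{,}719 \ \text{identified}; \qquad 14{,}719 - 10{,}556 = 4{,}163 \ \text{duplicates removed};$$
$$10{,}556 - 9{,}750 = 806 \ \text{read in full}; \qquad 806 - 388 = 418 \ \text{excluded at full text}.$$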
Of the 388 articles, 157 were identified only via the RE search, 87 only via the RI search, and 144 via both search strategies. The eligible articles contained 500 case descriptions, which were used for the article analysis. 256 of these case descriptions discussed the same 50 cases. The Hwang case was the most frequently described, discussed in 27 articles, and the top 10 most described cases were found in 132 articles (Table 1).
For the analysis of cases, 206 duplicates (41.2% of the case descriptions) were excluded, and 56 cases (11.2%) were excluded for not providing enough information to distinguish them from other cases, resulting in 238 eligible cases.
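As a quick check on the arithmetic, the case set shrinks from the 500 descriptions as follows:

$$500 - \underbrace{206}_{41.2\%\ \text{duplicates}} - \underbrace{56}_{11.2\%\ \text{insufficient detail}} = 238 \ \text{eligible cases}.$$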
Analysis of the articles
The categories used to classify the violations include those that pertain to the different kinds of scientific misconduct (falsification, fabrication, plagiarism), detrimental research practices (authorship issues, duplication, peer review, errors in experimental design, and mentoring), and "other misconduct" (according to the definitions of the National Academies of Sciences, Engineering, and Medicine [1]). Each case could involve more than one type of violation, and the majority of cases presented more than one violation or ethical issue, with a mean of 1.56 violations per case description. Figure 2 presents the frequency of each violation tagged to the articles. Falsification and fabrication were the most frequently tagged violations, accounting respectively for 29.1% and 30.0% of the taggings (n = 780) and involved in 46.8% and 45.4% of the articles (n = 500 case descriptions). Problems with informed consent represented 9.1% of the taggings and 14% of the articles, followed by patient safety (6.7% and 10.4%) and plagiarism (5.4% and 8.4%). Detrimental research practices, such as authorship issues, duplication, peer review, errors in experimental design, mentoring, and self-citation, were mentioned cumulatively in 7.0% of the articles.

Fig. 2 Tagged violations from the article analysis
Analysis of the cases
Figure 3 presents the frequency and percentage of each violation found in the cases. Each case could include more than one item from the list; the 238 cases were tagged 305 times, with a mean of 1.28 items per case. Fabrication and falsification were the most frequently tagged violations (44.9% of taggings), involved in 57.7% of the cases (n = 238). Non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), involved in 20.2% of the cases. Patient safety issues were the third most frequently tagged (11.1%), involved in 14.3% of the cases, followed by plagiarism (6.9% and 8.8%). The list of major and minor misbehaviors [8] classifies the items into study design, data collection, reporting, and collaboration issues; 56.0% of the tagged violations involved issues in reporting, 16.4% in data collection, 15.1% in collaboration, and 12.5% in study design. Items from the original list that do not appear in the results were not involved in any of the collected cases.

Fig. 3 Major and minor misbehavior items from the analysis of cases
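As an arithmetic aside, the two reported means are simply taggings divided by units of analysis:

$$\frac{780\ \text{taggings}}{500\ \text{case descriptions}} = 1.56, \qquad \frac{305\ \text{taggings}}{238\ \text{unique cases}} \approx 1.28.$$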
Article genre
The articles were mostly classified as "news" (33.0%), followed by "case analysis" (20.9%), "editorial" (12.1%), "commentary" (10.8%), "misconduct notice" (10.3%), "retraction notice" (6.4%), "letter" (3.6%), "educational paper" (1.3%), "review" (1%), and "book review" (0.3%) (Fig. 4). The articles classified as "news" and "case analysis" predominantly covered prominent cases. Items classified as "news" often followed the investigation findings step by step as a case progressed, which might explain the genre's high prevalence. The case analyses mainly comprised normative assessments of prominent cases. The misconduct and retraction notices included the largest number of unique cases, although a relatively large portion of the retraction and misconduct records could not be included because of insufficient case details. The articles classified as "editorial", "commentary" and "letter" also included unique cases.

Fig. 4 Article genre of included articles
Article analysis
The dates of the eligible articles range from 1983 to 2018, with notable peaks between 1990 and 1996, most probably associated with the Gallo [9] and Imanishi-Kari [10] cases, and around 2005 with the Hwang [11], Wakefield [12], and CNEP trial [13] cases (Fig. 5). The trend line shows an increase in the number of articles over the years.

Fig. 5 Frequency of articles according to the year of publication
Case analysis
The dates of the included cases range from 1798 to 2016. Two cases occurred before 1910, one in 1798 and the other in 1845. Figure 6 shows the number of cases per year from 1910 onwards. The curve begins to rise in the early 1980s, reaching its highest frequency in 2004, with 13 cases.

Fig. 6 Frequency of cases per year
Geographical distribution
The first analysis concerned the authors' affiliations and the corresponding author's address. Where an article listed more than one country in the affiliations, only the first author's location was considered. Eighty-one articles were excluded because the authors' affiliations were not available, leaving 307 articles in the analysis. The articles originated from 26 different countries (Additional file 3). Most of the articles emanated from the USA and the UK (61.9% and 14.3% of articles, respectively), followed by Canada (4.9%), Australia (3.3%), China (1.6%), Japan (1.6%), Korea (1.3%), and New Zealand (1.3%). Some of the most discussed cases occurred in the USA: the Imanishi-Kari, Gallo, and Schön cases [9, 10]. Intensely discussed cases are also associated with Canada (Fisher/Poisson and Olivieri cases), the UK (Wakefield and CNEP trial cases), South Korea (Hwang case), and Japan (RIKEN case) [12, 14]. In terms of percentages, North America and Europe stand out in the number of articles (Fig. 7).

Fig. 7 Percentage of articles and cases by continent
The case analysis involved the location where the case took place, taking into account the institutions involved. For cases involving more than one country, all the countries were considered. Three cases were excluded from the analysis due to insufficient information. In the case analysis, 40 countries were involved in 235 different cases (Additional file 4). Our findings show that most of the reported cases occurred in the USA and the United Kingdom (59.6% and 9.8% of cases, respectively). In addition, a number of cases occurred in Canada (6.0%), Japan (5.5%), China (2.1%), and Germany (2.1%). In terms of percentages, North America and Europe stand out in the number of cases (Fig. 7). To enable comparison, we additionally collected the number of published documents by country, available from the SCImago Journal & Country Rank [16]; these numbers correspond to documents published from 1996 to 2019. The USA occupies first place in the number of documents, with 21.9%, followed by China (11.1%), the UK (6.3%), Germany (5.5%), and Japan (4.9%).
Field of science
The cases were classified according to field of science. Four cases (1.7%) could not be classified due to insufficient information. Where information was available, 80.8% of cases were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from the Social Sciences, 2.1% from Engineering and Technology, and 1.3% from the Humanities (Fig. 8). Additionally, we retrieved the number of published documents by scientific field, available from SCImago [16]. Of the total number of scientific publications, 41.5% are related to the natural sciences, 22% to engineering, 25.1% to the health and medical sciences, 7.8% to the social sciences, 1.9% to the agricultural sciences, and 1.7% to the humanities.

Fig. 8 Field of science from the analysis of cases
Sanctions
This variable aimed to collect information on possible consequences and sanctions imposed by funding agencies, scientific journals and/or institutions. 97 cases could not be classified due to insufficient information; 141 cases were included. Each case could potentially include more than one outcome. Most cases (45.4%) involved paper retraction, followed by exclusion from funding applications (35.5%) (Table 2).
Discussion
RE and RI cases have been increasingly discussed publicly, affecting public attitudes towards scientists and raising awareness of ethical issues, violations, and their wider consequences [5]. Different approaches have been applied in order to quantify and address research misbehaviors [5, 17, 18, 19]. However, most cases are investigated confidentially, and the findings remain undisclosed even after the investigation [19, 20]. This study therefore aimed to collect the RE and RI cases available in the scientific literature, understand how the cases are discussed, and identify the potential of case descriptions to raise awareness of RE and RI.
We collected and analyzed 500 detailed case descriptions from 388 articles, and our results show that they mostly relate to extensively discussed, notorious cases. Approximately half of all case descriptions concerned cases mentioned in at least two different articles, and the top ten most commonly mentioned cases were discussed in 132 articles.
The prominence of certain cases in the literature, based on the number of duplicated case descriptions we found (e.g. the Hwang case), can be explained by the type of article in which cases are discussed and the type of violation involved. In the article genre analysis, 33% of the cases were described in the news sections of scientific publications. Our findings show that almost all article genres discuss the cases that are new and in vogue: once a case appears in the public domain, it is intensely discussed in the media and by scientists, and some prominent cases have been discussed for more than 20 years (Table 1).

Misconduct and retraction notices were exceptions in the article genre analysis, as they presented mostly unique cases. The misconduct notices were mainly found in the NIH repository, which is indexed in the searched databases. Some federal funding agencies, like the NIH, usually publicize investigation findings associated with the research they fund; the results derived from the NIH repository also explain the large proportion of articles from the US (61.9%). However, in some cases only a few details are provided. For cases that have not received federal funding and have not been reported to federal authorities, the investigation is conducted by local institutions, and the reporting of findings depends on each institution's policy and willingness to disclose information [21].

The other exception involves retraction notices. Despite the existence of ethical guidelines [22], there is no uniform, common approach to how a journal should report a retraction. The Retraction Watch website suggests two lists of information that should be included in a retraction notice to satisfy the minimum and optimum requirements [22, 23]. As well as disclosing the reason for the retraction and information regarding the retraction process, optimal notices should include: (I) the date when the journal was first alerted to potential problems; (II) details regarding institutional investigations and associated outcomes; (III) the effects on other papers published by the same authors; (IV) statements about more recent replications, only if and when these have been validated by a third party; (V) details regarding the journal's sanctions; and (VI) details regarding any lawsuits that have been filed regarding the case.

The lack of transparency and information in retraction notices was also noted in studies that collected and evaluated retractions [24]. According to Resnik and Dinse [25], retraction notices related to cases of misconduct tend to avoid naming the specific violation involved: they found that only 32.8% of notices identified the actual problem, such as fabrication, falsification, or plagiarism, while 58.8% reported the case as replication failure, loss of data, or error. Potential explanations for euphemisms and vague claims in retraction notices authored by editors include the possibility of legal action from the authors, honest or self-reported errors, and a lack of resources to conduct thorough investigations. The lack of transparency can also be explained by the conflicts of interest of the article's author(s), since notices are often written by the authors of the retracted article.
The analysis of violations/ethical issues shows the dominance of fabrication and falsification cases, which explains the high prevalence of prominent cases. Non-adherence to laws and regulations (REC approval, informed consent, and data protection) was the second most prevalent issue, followed by patient safety, plagiarism, and conflicts of interest. For most of the five most frequently tagged violations, prevalence was higher in the case analysis than in the analysis of articles involving the same violations. The exception was fabrication and falsification, which represented 45% of the tagged violations in the case analysis but 59.1% in the article analysis. This disproportion indicates a predilection for publishing discussions of fabrication and falsification over other serious violations. Complex cases involving these violations make good headlines, following a common pattern of writing about cases that catch the public's and the media's attention [26]. The way cases of RE and RI violations are explored in the literature gives the sense that only a few scientists are "bad apples" and that they are usually discovered, investigated, and sanctioned accordingly, implying that the integrity of science in general remains relatively untouched by these violations. However, studies on the determinants of misconduct show that scientific misconduct is a systemic problem, involving not only individual but also structural and institutional factors, and that a combined effort is necessary to change this scenario [27, 28].
Analysis of cases
A notable increase in RE and RI cases occurred in the 1990s, followed by a gradual rise until approximately 2006. This result is in agreement with studies that evaluated paper retractions [24, 29]. Although our study did not focus only on retractions, the trend is similar. This increase in cases should not be attributed solely to the growth in the number of publications, since studies of retractions show that the percentage of retractions due to fraud has increased almost tenfold since 1975, relative to the total number of articles. Our results also show a gradual reduction in the number of cases from 2011 and a sharper drop in 2015. However, this reduction should be interpreted cautiously, because many investigations take years to complete and to have their findings disclosed. ORI has reported that from 2001 to 2010 its investigations took an average of 20.48 months, with a maximum investigation time of more than 9 years [24].
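A minimal sketch of the reasoning behind this caution, using the ORI average investigation time cited above; the yearly counts and the end of the observation window are invented for illustration and are not our data:

```python
# Hypothetical yearly case counts from a literature sample.
cases_per_year = {2009: 30, 2010: 28, 2011: 25, 2013: 18, 2015: 9}

AVG_INVESTIGATION_MONTHS = 20.48  # ORI average, 2001-2010 [24]
LAST_SEARCH_YEAR = 2016           # assumed end of the observation window

def provisional(year, last_year=LAST_SEARCH_YEAR,
                lag_months=AVG_INVESTIGATION_MONTHS):
    """Flag years whose counts are likely understated because
    investigations starting then may not yet be disclosed."""
    return (last_year - year) * 12 < lag_months

for year, n in sorted(cases_per_year.items()):
    note = " (provisional: within average investigation lag)" if provisional(year) else ""
    print(f"{year}: {n}{note}")
```

Under these assumptions, the most recent year is flagged as provisional, illustrating why an apparent drop at the end of the series cannot be read as a real decline.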
The countries from which most cases were reported were the USA (59.6%), the UK (9.8%), Canada (6.0%), Japan (5.5%), and China (2.1%). When analyzed by continent, the highest percentage of cases took place in North America, followed by Europe, Asia, Oceania, Latin America, and Africa. The predominance of cases from the USA is predictable, since the country publishes more scientific articles than any other, accounting for 21.8% of total documents according to SCImago [16]. However, the same interpretation does not apply to China, which occupies second position in the ranking with 11.2%. These differences in geographical distribution were also found in a study that collected published research on research integrity [30]. The results of Aubert Bonn and Pinxten (2019) show that studies from the United States accounted for more than half of their sample, and although China is one of the leaders in scientific publications, it represented only 0.7% of the sample. Our findings can also be explained by our search strategy, which included only keywords in English. Since the majority of RE and RI cases are investigated and have their findings disclosed locally, the use of English keywords and terms in the search strategy is a limitation. Moreover, our findings do not allow us to draw inferences about the incidence or prevalence of misconduct around the world. Instead, they show where there is a culture of publicly disclosing information and openly discussing RE and RI cases in English-language documents.
Scientific field analysis
The results show that 80.8% of reported cases occurred in the medical and health sciences, whilst only 1.3% occurred in the humanities. This disciplinary difference has also been observed in studies on research integrity climates. A study conducted by Haven and colleagues [28] associated seven subscales of research climate with disciplinary field. The subscales included: (1) Responsible Conduct of Research (RCR) resources, (2) regulatory quality, (3) integrity norms, (4) integrity socialization, (5) supervisor/supervisee relations, (6) (lack of) integrity inhibitors, and (7) expectations. The results, based on the seven subscale scores, show that researchers from the humanities and social sciences have the lowest perception of the RI climate, whereas the natural sciences expressed the highest, followed by the biomedical sciences. There are also significant differences in the depth and extent of the regulatory environments of different disciplines (e.g. the existence of laws, codes of conduct, policies, relevant ethics committees, or authorities). These findings corroborate our results, as those areas of science most familiar with RI tend to explore the subject further and, consequently, are more likely to publish case details. Although the volume of published research in each area also influences the number of cases, the predominance of medical and health sciences cases is not aligned with trends in the volume of published research. According to the SCImago Journal & Country Rank [16], the natural sciences occupy first place in the number of publications (41.5%), followed by the medical and health sciences (25.1%), engineering (22%), the social sciences (7.8%), and the humanities (1.7%). Moreover, biomedical journals are overrepresented among the top scientific journals by impact factor, and these journals usually have clear policies on research misconduct. High-impact journals have higher visibility and scrutiny and are consequently more likely to have been the subject of misconduct investigations. Additionally, the most well-known general medical journals, including the NEJM, The Lancet, and the BMJ, employ journalists to write their news sections. Since these journals have the resources to produce extensive news sections, it is more likely that medical cases will be discussed.
Violations analysis
In the analysis of violations, cases were categorized into major and minor misbehaviors. Most cases involved data fabrication and falsification, followed by cases involving non-adherence to laws and regulations, patient safety, plagiarism, and conflicts of interest. When classified by category, 12.5% of the tagged violations involved issues in study design, 16.4% in data collection, 56.0% in reporting, and 15.1% in collaboration. Approximately 80% of the tagged violations involved serious research misbehaviors, based on the ranking of research misbehaviors proposed by Bouter and colleagues. However, as demonstrated in a meta-analysis by Fanelli (2009), most self-declared cases involve questionable research practices: 33.7% of scientists admitted to questionable research practices regarding their own behavior, and 72% reported them when asked about the behavior of colleagues. This contrasts with admission rates of 1.97% (own behavior) and 14.12% (colleagues' behavior) for fabrication, falsification, and plagiarism. However, Fanelli's meta-analysis does not cover research misbehavior in its wider sense but focuses on behaviors that bias research results (i.e. fabrication and falsification, intentional non-publication of results, biased methodology, and misleading reporting). In our study, the majority of cases involved FFP (66.4%). Overrepresentation of some types of violations, and underrepresentation of others, might lead to misguided efforts, as cases that receive intense publicity eventually influence policies relating to scientific misconduct and RI [20].
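As an illustration of how such category percentages are obtained, the sketch below tallies hypothetical violation tags. The counts are invented to approximately mirror the proportions reported above; they are not our dataset.

```python
from collections import Counter

# Hypothetical tagged violations, one tag per violation found in a case.
tags = (
    ["study design"] * 25
    + ["data collection"] * 33
    + ["reporting"] * 112
    + ["collaboration"] * 30
)

counts = Counter(tags)
total = sum(counts.values())

# Percentage of all tagged violations falling in each category.
for category, n in counts.most_common():
    print(f"{category}: {n} ({100 * n / total:.1f}%)")
```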

Sanctions analysis
The five most prevalent outcomes were paper retraction, followed by exclusion from funding applications, exclusion from service or position, dismissal and suspension, and paper correction. This result is similar to that found by Redman and Merz [31], who collected data from misconduct cases provided by the ORI. Moreover, their results show that fabrication and falsification cases are 8.8 times more likely than others to result in funding exclusions; such cases also received, on average, 0.6 more sanctions per case. Punishments for misconduct remain under discussion, ranging from the criminalization of more serious forms of misconduct [32] to social punishments, such as those recently introduced by China [33]. The most common sanction identified by our analysis, paper retraction, is consistent with the most prevalent types of violation, namely falsification and fabrication.
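The "8.8 times more likely" figure is a rate ratio between two groups of cases. A minimal sketch with invented counts, chosen only so the ratio comes out near 8.8; these are not Redman and Merz's data:

```python
# Hypothetical 2x2 summary: funding exclusions among
# fabrication/falsification (FF) cases vs. all other cases.
ff_cases, ff_excluded = 60, 44        # invented counts
other_cases, other_excluded = 60, 5   # invented counts

rate_ff = ff_excluded / ff_cases          # 0.733
rate_other = other_excluded / other_cases # 0.083

rate_ratio = rate_ff / rate_other
print(f"FF cases are {rate_ratio:.1f}x as likely to receive a funding exclusion")
```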
Publicizing scientific misconduct
The lack of publicly available summaries of misconduct investigations makes it difficult to share experiences and to evaluate the effectiveness of policies and training programs. Publicizing scientific misconduct can have serious consequences and creates a stigma around those involved; publicized allegations can damage the reputation of the accused even when they are later exonerated [21]. Thus, for published cases, it is the responsibility of authors and editors to determine whether the names of those involved should be disclosed. On the one hand, disclosing the names of those involved may encourage others in the community to uphold good standards. On the other hand, someone who has made a mistake should have the opportunity to defend his or her reputation. Regardless of whether a person's name is withheld or disclosed, case reports have an important educational function and can help guide RE- and RI-related policies [34]. A recent paper by Gunsalus [35] proposes a three-part approach to strengthen transparency in misconduct investigations: the first part consists of a checklist [36]; the second suggests that an external peer reviewer be involved in investigative reporting; and the third calls for the publication of the peer reviewer's findings.
Limitations
One possible limitation of our study is the search strategy. Although we conducted pilot searches and sensitivity tests to reach the most feasible and precise search strategy, we cannot exclude the possibility of having missed important cases. The use of English keywords is a further limitation: since most investigations are performed locally and published in local repositories, our search only allowed us to access cases from English-speaking countries or cases discussed in academic publications written in English. Additionally, the published cases are not representative of all instances of misconduct, since most are never discovered, and of those discovered, not all are fully investigated or have their findings published. The lack of information in the extracted case descriptions is another limitation that affects the interpretation of our results. In our review, only 25 retraction notices contained sufficient information to be included in our analysis in conformance with the inclusion criteria. Although our search strategy did not focus specifically on retraction and misconduct notices, we believe that, had sufficiently detailed information been available in such notices, the search strategy would have identified them.
Conclusions
Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show an overrepresentation of biomedical research cases relative to other scientific fields when compared with the volume of publications produced by each field. Moreover, published cases mostly involve fabrication, falsification, and patient safety issues. The predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations and ethical issues, as well as from recently emerging forms of misbehavior, thereby skewing the academic representation of RE and RI issues.
Availability of data and materials
This review has been developed by members of the EnTIRE project in order to generate information on the cases that will be made available on the Embassy of Good Science platform (www.embassy.science). The dataset supporting the conclusions of this article is available in the Open Science Framework (OSF) repository at https://osf.io/3xatj/?view_only=313a0477ab554b7489ee52d3046398b9 .
National Academies of Sciences, Engineering, and Medicine. Fostering Integrity in Research. Washington (DC): National Academies Press; 2017.
Davis MS, Riske-Morris M, Diaz SR. Causal factors implicated in research misconduct: evidence from ORI case files. Sci Eng Ethics. 2007;13(4):395–414. https://doi.org/10.1007/s11948-007-9045-2 .
Ampollini I, Bucchi M. When public discourse mirrors academic debate: research integrity in the media. Sci Eng Ethics. 2020;26(1):451–74. https://doi.org/10.1007/s11948-019-00103-5 .
Hesselmann F, Graf V, Schmidt M, Reinhart M. The visibility of scientific misconduct: a review of the literature on retracted journal articles. Curr Sociol. 2017;65(6):814–45. https://doi.org/10.1177/0011392116663807 .
Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–8. https://doi.org/10.1038/435737a .
Loikith L, Bauchwitz R. The essential need for research misconduct allegation audits. Sci Eng Ethics. 2016;22(4):1027–49. https://doi.org/10.1007/s11948-016-9798-6 .
OECD. Revised field of science and technology (FoS) classification in the Frascati manual. Working Party of National Experts on Science and Technology Indicators; 2007. p. 1–12.
Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 .
Greenberg DS. Resounding echoes of Gallo case. Lancet. 1995;345(8950):639.
Dresser R. Giving scientists their due. The Imanishi-Kari decision. Hastings Center Rep. 1997;27(3):26–8.
Hong ST. We should not forget lessons learned from the Woo Suk Hwang’s case of research misconduct and bioethics law violation. J Korean Med Sci. 2016;31(11):1671–2. https://doi.org/10.3346/jkms.2016.31.11.1671 .
Opel DJ, Diekema DS, Marcuse EK. Assuring research integrity in the wake of Wakefield. BMJ (Clinical research ed). 2011;342(7790):179. https://doi.org/10.1136/bmj.d2 .
Wells F. The Stoke CNEP Saga: did it need to take so long? J R Soc Med. 2010;103(9):352–6. https://doi.org/10.1258/jrsm.2010.10k010 .
Normile D. RIKEN panel finds misconduct in controversial paper. Science. 2014;344(6179):23. https://doi.org/10.1126/science.344.6179.23 .
Wager E. The Committee on Publication Ethics (COPE): Objectives and achievements 1997–2012. La Presse Médicale. 2012;41(9):861–6. https://doi.org/10.1016/j.lpm.2012.02.049 .
SCImago. SJR — SCImago Journal & Country Rank [Portal]. n.d. http://www.scimagojr.com . Accessed 3 Feb 2021.
Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009;4(5):e5738. https://doi.org/10.1371/journal.pone.0005738 .
Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics. 2006;12(1):53–74. https://doi.org/10.1007/PL00022268 .
DuBois JM, Anderson EE, Chibnall J, Carroll K, Gibb T, Ogbuka C, et al. Understanding research misconduct: a comparative analysis of 120 cases of professional wrongdoing. Account Res. 2013;20(5–6):320–38. https://doi.org/10.1080/08989621.2013.822248 .
National Academy of Sciences, National Academy of Engineering, and Institute of Medicine Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press; 1992.
Bauchner H, Fontanarosa PB, Flanagin A, Thornton J. Scientific misconduct and medical journals. JAMA. 2018;320(19):1985–7. https://doi.org/10.1001/jama.2018.14350 .
COPE Council. COPE Guidelines: Retraction Guidelines. 2019. https://doi.org/10.24318/cope.2019.1.4 .
Retraction Watch. What should an ideal retraction notice look like? 2015, May 21. https://retractionwatch.com/2015/05/21/what-should-an-ideal-retraction-notice-look-like/ .
Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci USA. 2012;109(42):17028–33. https://doi.org/10.1073/pnas.1212247109 .
Resnik DB, Dinse GE. Scientific retractions and corrections related to misconduct findings. J Med Ethics. 2013;39(1):46–50. https://doi.org/10.1136/medethics-2012-100766 .
de Vries R, Anderson MS, Martinson BC. Normal misbehavior: scientists talk about the ethics of research. J Empir Res Hum Res Ethics JERHRE. 2006;1(1):43–50. https://doi.org/10.1525/jer.2006.1.1.43 .
Sovacool BK. Exploring scientific misconduct: isolated individuals, impure institutions, or an inevitable idiom of modern science? J Bioethical Inquiry. 2008;5(4):271. https://doi.org/10.1007/s11673-008-9113-6 .
Haven TL, Tijdink JK, Martinson BC, Bouter LM. Perceptions of research integrity climate differ between academic ranks and disciplinary fields: results from a survey among academic researchers in Amsterdam. PLoS ONE. 2019;14(1):e0210599. https://doi.org/10.1371/journal.pone.0210599 .
Trikalinos NA, Evangelou E, Ioannidis JPA. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. J Clin Epidemiol. 2008;61(5):464–70. https://doi.org/10.1016/j.jclinepi.2007.11.019 .
Aubert Bonn N, Pinxten W. A decade of empirical research on research integrity: What have we (not) looked at? J Empir Res Hum Res Ethics. 2019;14(4):338–52. https://doi.org/10.1177/1556264619858534 .
Redman BK, Merz JF. Scientific misconduct: do the punishments fit the crime? Science. 2008;321(5890):775. https://doi.org/10.1126/science.1158052 .
Bülow W, Helgesson G. Criminalization of scientific misconduct. Med Health Care Philos. 2019;22(2):245–52. https://doi.org/10.1007/s11019-018-9865-7 .
Cyranoski D. China introduces “social” punishments for scientific misconduct. Nature. 2018;564(7736):312. https://doi.org/10.1038/d41586-018-07740-z .
Bird SJ. Publicizing scientific misconduct and its consequences. Sci Eng Ethics. 2004;10(3):435–6. https://doi.org/10.1007/s11948-004-0001-0 .
Gunsalus CK. Make reports of research misconduct public. Nature. 2019;570(7759):7. https://doi.org/10.1038/d41586-019-01728-z .
Gunsalus CK, Marcus AR, Oransky I. Institutional research misconduct reports need more credibility. JAMA. 2018;319(13):1315–6. https://doi.org/10.1001/jama.2018.0358 .
Acknowledgements
The authors wish to thank the EnTIRE research group. The EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) aims to create an online platform that makes RE+RI information easily accessible to the research community. The EnTIRE Consortium is composed of VU Medical Center Amsterdam, gesinn.it GmbH & Co. KG, KU Leuven, University of Split School of Medicine, Dublin City University, Central European University, University of Oslo, University of Manchester, and the European Network of Research Ethics Committees.
Funding
The EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 741782. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author information
Authors and Affiliations
Department of Behavioural Sciences, Faculty of Medicine, University of Debrecen, Móricz Zsigmond krt. 22. III. Apartman Diákszálló, Debrecen, 4032, Hungary
Anna Catharina Vieira Armond & János Kristóf Bodnár
Institute of Ethics, School of Theology, Philosophy and Music, Dublin City University, Dublin, Ireland
Bert Gordijn, Jonathan Lewis & Mohammad Hosseini
Centre for Social Ethics and Policy, School of Law, University of Manchester, Manchester, UK
Center for Medical Ethics, HELSAM, Faculty of Medicine, University of Oslo, Oslo, Norway
Center for Ethics and Law in Biomedicine, Central European University, Budapest, Hungary
Péter Kakuk
Contributions
All authors (ACVA, BG, JL, MH, JKB, SH and PK) developed the idea for the article. ACVA, PK, JKB performed the literature search and data analysis, ACVA and PK produced the draft, and all authors critically revised it. All authors have read and approved the manuscript.
Corresponding author
Correspondence to Anna Catharina Vieira Armond.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no conflict of interest.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Additional file 1. Pilot search and search strategy.
Additional file 2. List of major and minor misbehavior items (developed by Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 ).
Additional file 3. Table containing the number and percentage of countries included in the analysis of articles.
Additional file 4. Table containing the number and percentage of countries included in the analysis of cases.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Armond, A.C.V., Gordijn, B., Lewis, J. et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics 22, 50 (2021). https://doi.org/10.1186/s12910-021-00620-8
Received: 06 October 2020
Accepted: 21 April 2021
Published: 30 April 2021
DOI: https://doi.org/10.1186/s12910-021-00620-8
Keywords
- Research ethics
- Research integrity
- Scientific misconduct

National Academy of Sciences (US), National Academy of Engineering (US) and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press (US); 1992.

4 Misconduct in Science—Incidence and Significance
Estimates reported in government summaries, research studies, and anecdotal accounts of cases of confirmed misconduct in science in the United States range between 40 and 100 cases during the period from 1980 to 1990. 1 The range reflects differences in the definitions of misconduct in science, uncertainties about the basis for “confirmed” cases, the time lag between the occurrence and disclosure of some cases, and potential overlap between government summaries (which are anonymous) and cases identified by name in the research literature.
When measured against the denominator of the number of research awards or research investigators, the range of misconduct-in-science cases cited above is small. 2 Furthermore, less than half of the allegations of misconduct received by government agencies have resulted in confirmed findings of misconduct in science. For example, after examining 174 case files of misconduct in science in the period from March 1989 through March 1991, the Office of Scientific Integrity in the Public Health Service found evidence of misconduct in fewer than 20 cases, although 56 investigations, mostly conducted by universities, were still under way (Wheeler, 1991).
However, even infrequent incidents of misconduct in science raise serious questions among scientists, research sponsors, and the public about the integrity of the research process and the stewardship of federal research funds.
INCIDENCE OF MISCONDUCT IN SCIENCE—PUBLISHED EVIDENCE AND INFORMATION
The incidence of misconduct in science and the significance of several confirmed cases have been topics of extensive discussion. Measures of the incidence of misconduct in science include (1) the number of allegations and confirmations of misconduct-in-science cases recorded and reviewed by government agencies and research institutions and (2) data and information presented in analyses, surveys, other studies, and anecdotal reports.
Some observers have suggested that incidents of misconduct in science are underreported. It may be difficult for co-workers and junior scientists, for example, graduate students and postdoctoral fellows, to make allegations of misconduct in science because of lack of supporting evidence and/or fear of retribution. The significant professional discrimination and economic loss experienced by whistle-blowers as a result of reporting misconduct are well known and may deter others from disclosing wrongdoing in the research environment.
Government Statistics on Misconduct in Science
Owing to differing perspectives on the role of government and research institutions in addressing misconduct in science, and to discrepancies between the number of allegations received by government offices, the number of open cases, and the number of cases of misconduct in science confirmed by research institutions or government agencies, many questions remain to be answered. These areas of uncertainty and disagreement inhibit the resolution of issues such as:
- identifying the specific practices that fit legal definitions of misconduct in science;
- agreeing on standards for the evidence necessary to substantiate a finding of misconduct in science;
- clarifying the extent to which investigating panels can or should consider the intentions of the accused person in reaching a finding of misconduct in science;
- assessing the ability of research institutions and government agencies to discharge their responsibilities effectively and handle misconduct investigations appropriately;
- determining the frequency with which misconduct occurs;
- achieving consensus on the penalties that are likely to be imposed by diverse institutions for similar types of offenses; and
- evaluating the utility of allocating substantial amounts of public and private resources to handle allegations, only a few of which may result in confirmed findings of misconduct.
The absence of publicly available summaries of the investigation and adjudication of incidents of misconduct in science inhibits scholarly efforts to examine how prevalent misconduct in science is and to evaluate the effectiveness of governmental and institutional treatment and prevention programs.
As a result, analyses of and policies related to misconduct in science are often influenced by information derived from a small number of cases that have received extensive publicity. The panel has not seen evidence that would help determine whether these highly publicized cases are representative of the broader sample of allegations or confirmed incidents of misconduct in science. One trend should be emphasized, however. The highly publicized cases often involve charges of falsification and fabrication of data, but the large majority of cases of confirmed misconduct in science have involved plagiarism (NSF, 1991a; Wheeler, 1991). Possible explanations for this trend are that plagiarism is more clearly identifiable by the complainants and more easily proved by those who investigate the complaint.
Five semiannual reports prepared by the National Science Foundation's Office of Inspector General (NSF 1989c; 1990a,b; 1991a,c) and a 1991 annual report prepared by the Office of Scientific Integrity Review of the Department of Health and Human Services (DHHS, 1991b) are the first systematic governmental efforts to analyze characteristics of a specific set of cases of misconduct in science. Although the treatment of some individual cases reported in these summaries has been the subject of debate and controversy, the panel commends these analyses as initial efforts and suggests that they receive professional review and revisions, if warranted.
National Science Foundation
The National Science Foundation's (NSF's) Office of Inspector General (OIG) received 41 allegations of misconduct in science in FY 1990 and reviewed another group of 6 allegations received by NSF prior to 1990 (NSF, 1990b). 3 From this group of 47 allegations, OIG closed 21 cases by the end of FY 1990. In three cases NSF made findings of misconduct in science; in another four cases, NSF accepted institutional findings of misconduct in science. NSF officials caution that, in their view, future cases may result in a larger percentage of confirmed findings of misconduct because many of the open cases raise complicated issues that require more time to resolve. 4
The panel matched the 41 allegations reviewed by NSF in FY 1990 against the definitions of misconduct in science used by NSF at that time (Table 4.1).
Table 4.1 Allegations of Misconduct in Science Reviewed in FY 1990 by the National Science Foundation.
The NSF's Office of the Director recommended the most serious penalty (debarment for 5 years) in a case involving charges of repeated incidents of sexual harassment, sexual assault, and threats of professional and academic blackmail by a co-principal investigator on NSF-funded research (NSF, 1990b, p. 21). Following an investigation that involved extensive interviews and affidavits, NSF's OIG determined that “no federal criminal statutes were violated . . . [but that] the pattern and effect of the co-principal investigator's actions constituted a serious deviation from accepted research practices” (NSF, 1990b, p. 21). NSF's OIG further determined that these incidents were “an integral part of this individual's performance as a researcher and research mentor and represented a serious deviation from accepted research practices” (p. 27). However, reports of this particular case have caused some scientists to express concern that the scope of the definition of misconduct in science may be inappropriately broadened into areas designated by the panel as “other misconduct,” such as sexual harassment.
Department of Health and Human Services
In FY 1989 and FY 1990, following the creation of the Office of Scientific Integrity (OSI), the Department of Health and Human Services (DHHS) received a total of 155 allegations of misconduct in science, many of which had been under review from earlier years by various offices within the Public Health Service (PHS). 5 In April 1991, OSI reported that since its formation it had closed about 110 cases, most of which did not result in findings of misconduct in science.
The Office of Scientific Integrity Review (OSIR), in the office of the assistant secretary for health, reviewed 21 reports of investigations of misconduct in science in the period from March 1989 to December 1990, some of which involved multiple charges. 6 The cases reviewed by OSIR had been forwarded to that office by OSI and had completed both an inquiry and an investigation stage. Findings of misconduct in science, involving 16 individuals, were made in 15 of the reports of investigations reviewed by OSIR. The OSIR's summary of findings is given in Table 4.2.
Table 4.2 Findings of Misconduct in Science in Cases Reviewed by the Office of Scientific Integrity Review, Department of Health and Human Services, March 1989 to December 1990.
The OSIR recommended debarment in six cases, the most extreme administrative sanction available short of referral to the Justice Department for criminal prosecution. Actions to recover PHS grant funds were undertaken in two cases.
Consequences of Confirmed Misconduct
Confirmed findings of misconduct in science can result in governmental penalties, such as dismissal or debarment, whereby individuals or institutions can be prohibited from receiving government grants or contracts on a temporary or permanent basis (42 C.F.R. 50). An individual who presents false information to the government in any form, including a research proposal, employment application, research report, or publication, may be subject to prosecution under the federal false statements statute (18 U.S.C. 1001). At least one case of criminal prosecution against a research scientist, for example, rested on evidence that the scientist had provided false research information in research proposals and progress reports to a sponsoring agency. 7 Similar prosecutions have occurred in connection with some pharmaceutical firms or contract laboratories that provided false test data in connection with licensing or government testing requirements (O'Reilly, 1990).
Government regulations on misconduct in science provide a separate mechanism through which individuals and institutions can be subjected to government penalties and criminal prosecution if they misrepresent information from research that is supported by federal funds, even if the information is not presented directly to government officials. Research institutions and scientific investigators who apply for and receive federal funds are thus expected to comply with high standards of honesty and integrity in the performance of their research activities.
Government Definitions of Misconduct in Science—Ambiguity in Categories
The PHS's misconduct-in-science regulations apply to research sponsored by all PHS agencies, including the National Institutes of Health, the Alcohol, Drug Abuse, and Mental Health Administration, the Centers for Disease Control, the Food and Drug Administration, and the Agency for Health Care Policy and Research. The PHS defines misconduct in science as “fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting, or reporting research. It does not include honest error or honest differences in interpretations or judgments of data” (DHHS, 1989a, p. 32447). 8
The PHS's definition does not further define fabrication, falsification, plagiarism, or other serious deviations from commonly accepted research practices. The ambiguous scope of this last category is a topic of major concern to the research community because of the perception that it could be applied inappropriately in cases of disputed scientific judgment.
The first annual report of the DHHS's OSIR suggests the types of alleged misconduct in science that might fall within the scope of this category (DHHS, 1991b):
- Misuse by a journal referee of privileged information contained in a manuscript,
- Fabrication of entries or misrepresentation of the publication status of manuscripts referenced in a research bibliography,
- Failure to perform research supported by a PHS grant while stating in progress reports that active progress has been made,
- Improper reporting of the status of subjects in clinical research (e.g., reporting the same subjects as controls in one study and as experimental subjects in another),
- Preparation and publication of a book chapter listing co-authors who were unaware of being named as co-authors,
- Selective reporting of primary data,
- Unauthorized use of data from another investigator's laboratory,
- Engaging in inappropriate authorship practices on a publication and failure to acknowledge that data used in a grant application were developed by another scientist, and
- Inappropriate data analysis and use of faulty statistical methodology.
The panel points out that most of the behaviors described above, such as the fabrication of bibliographic material or falsely reporting research progress, are behaviors that fall within the panel's definition of misconduct in science proposed in Chapter 1.
The NSF's definition (NSF, 1991b) is broader than that used by the PHS 9 and extends to nonresearch activities supported by the agency, such as science education. NSF also includes in its definition of misconduct in science acts of retaliation against any person who provides information about suspected misconduct and who has not acted in bad faith.
The panel believes that behaviors such as repeated incidents of sexual harassment, sexual assault, or professional intimidation should be regarded as other misconduct, not as misconduct in science, because these actions (1) do not require expert knowledge to resolve complaints and (2) should be governed by mechanisms that apply to all institutional members, not just those who receive government research awards. Practices such as inappropriate authorship, in the panel's view, should be regarded as questionable research practices, because they do not fit within the rationale for misconduct in science as defined by the panel in Chapter 1.
The investigation of questionable research practices as incidents of alleged misconduct in science, in the absence of consensus about the nature, acceptability, and damage that questionable practices cause, can do serious harm to individuals and to the research enterprise. Institutional or regulatory efforts to determine "correct" research methods or analytical practices, without sustained participation by the research community, could encourage orthodoxy and rigidity in research practice and cause scientists to avoid novel or unconventional research paradigms. 10
Reports from Local Institutional Officials
Investigatory Reports
Government regulations currently require local institutions to notify the sponsoring agency if they intend to initiate an investigation of an allegation of misconduct in science. The institutions are also required to submit a report of the investigation when it is completed. These reports, in the aggregate, may provide a future source of evidence regarding the frequency with which misconduct-in-science cases are handled by local institutions.
Although some investigatory reports have been released on an ad hoc basis, research scientists generally do not have access to comprehensive summaries of the investigatory reports prepared or reviewed by government agencies. The absence of such summaries impedes informed analysis of misconduct in science and inhibits the exchange of information and experience among institutions about factors that can contribute to or prevent misconduct in science.
Other Institutional Reports
The perspectives and experiences of institutional officials in handling allegations of misconduct in science are likely in the future to be important sources of information about the incidence of misconduct. This body of experience is largely undocumented, and most institutions do not maintain accessible records on their misconduct cases because of concerns about individual privacy and confidentiality, as well as concerns about possible institutional embarrassment, loss of prestige, and lawsuits.
The DHHS's regulations now require grantee institutions to provide annual reports of aggregate information on allegations, inquiries, and investigations, along with annual assurances that the institutions have an appropriate administrative process for handling allegations of misconduct in science (DHHS, 1989a). The institutional reports filed in early 1991 were not available for this study. These institutional summaries could eventually provide an additional source of evidence regarding how frequently misconduct in science addressed at the local level involves biomedical or behavioral research. If the reports incorporate standard terms of reference, are prepared in a manner that facilitates analysis and interpretation, and are accessible to research scientists, they could provide a basis for making independent judgments about the effectiveness of research institutions in handling allegations of misconduct in science. The NSF's regulations do not require an annual report from grantee institutions.
International Studies
Cases of misconduct in science have been reported and confirmed in other countries. The editor of the British Medical Journal reported in 1988 that in the 1980s at least five cases of misconduct by scientists had been documented in Britain and five cases had been publicly disclosed in Australia (Lock, 1988b, 1990). As a result of a "nonsystematic" survey of British medical institutions, scientists, physicians, and editors of medical journals, Lock cited at least another 40 unreported cases.
There has been at least one prominent case of misconduct in science in India recently (Jayaraman, 1991). Several cases of misconduct in science and academic plagiarism have been recorded in Germany (Foelsing, 1984; Eisenhut, 1990).
Analyses, Surveys, and Other Reports
Hundreds of articles on misconduct in science have been published in the popular and scholarly literature over the past decade. The study panel's own working bibliography included over 1,100 such items.
Although highly publicized reports about individual misconduct cases have appeared with some frequency, systematic efforts to analyze data on cases of misconduct in science have not attracted significant interest or support within the research community until very recently. Research studies have been hampered by the absence of information and statistical data, lack of rigorous definitions of misconduct in science, the heterogeneous and decentralized nature of the research environment, the complexity of misconduct cases, and the confidential and increasingly litigious nature of misconduct cases (U.S. Congress, 1990b; AAAS-ABA, 1989).
As a result, only a small number of confirmed misconduct cases have been the subject of scholarly examination. The results of these studies are acknowledged by their authors to be subject to statistical bias; the sample, which is drawn primarily from public records, may or may not be representative of the larger pool of cases or allegations. Preliminary studies have focused primarily on questions of range, prevalence, incidence, and frequency of misconduct in science. There has been little effort to identify patterns of misconduct or questionable practices in science. Beyond speculation, very little is known about the etiology, dynamics, and consequences of misconduct in science. The relationship of misconduct in science to factors in the contemporary research environment, such as the size of research teams, financial incentives, or collaborative research efforts, has not been systematically evaluated and is not known.
Woolf Analysis
Patricia Woolf of Princeton University, a member of this panel, has analyzed incidents of alleged misconduct publicly reported from 1950 to 1987 (Woolf, 1981, 1986, 1988a).
Woolf examined 26 cases of misconduct identified as having occurred or been detected in the period from 1980 to 1987, the majority of which (22 cases) were in biomedical research. Her analysis indicated that 11 of the institutions associated with the 26 cases were prestigious schools and hospitals, ranked in the top 20 in the Cole and Lipton (1977) evaluation of reputation. Woolf found that a “notable percentage” of the individuals accused of misconduct were from highly regarded institutions: “seven graduated from the top twenty schools” (Woolf, 1988a, p. 79), as ranked by reputation, an important finding that deserves further analysis. She also suggested that because cases of misconduct are often handled locally, the total number of cases is likely to be larger than reported in the public record (Woolf, 1988a).
The types of alleged misconduct reported in the cases analyzed by Woolf, some of which involved more than one type, included plagiarism (4 cases); falsification, fabrication, and forgery of data (12 cases); and misrepresentation and other misconduct (12 cases). She suggested that “plagiarism is almost certainly under-represented in this survey, as it appears to be handled locally and without publicity whenever possible” (Woolf, 1988a, p. 83).
Woolf identified several important caveats, noted below, that still apply to all systematic efforts to analyze the characteristics and demography of misconduct in science (Woolf, 1988a, p. 76):
- Small number of instances. There are not enough publicly known cases to draw statistically sound conclusions or make significant generalizations, and those that are available are a biased sample of the population of actual cases.
- Blurred categories. It is not possible in all cases to cleanly separate misconduct in science from falsification in drug trials or laboratory tests. Similarly, one person may indulge in plagiarism, fabrication, and falsification.
- Incomplete information. Some information about reported instances is not yet available.
- Variety of sources. The sources of information (for Woolf's analysis) include public accounts, such as newspaper reports, as well as original documents and interview material. They are not all equally reliable with regard to dates and other minor details.
- Unclear resolution. Disputed cases that have nevertheless been “settled” are included (in Woolf's analysis). In some highly publicized cases of alleged misconduct in science, the accused scientist has not admitted, and may have specifically denied, misconduct in science.
OSIR Analysis
The DHHS's OSIR prepared a first annual report in early 1991 that analyzed data associated with investigations of misconduct in science reviewed by that office in the period March 1989 through December 1990 (DHHS, 1991b). The report examined misconduct investigations carried out by research institutions and by the OSI.
Seniority of Subjects of Misconduct Cases in Woolf and OSIR Analyses. Both Woolf and the OSIR examined the rank of individuals who have been the subjects of misconduct-in-science cases. Although some have speculated that junior scientists might be more likely to engage in misconduct in science, both Woolf's analysis and the OSIR's analysis suggest that misconduct in science "did not occur primarily among junior scientists or trainees" (DHHS, 1991b, p. 7). Their preliminary studies suggest that the incidence of misconduct is likely to be greater among senior scientists (Table 4.3), a finding that deserves further analysis.
Table 4.3 Academic Ranks of Subjects in Confirmed Cases of Misconduct in Science.
Detection of Misconduct in Science in Woolf and OSIR Analyses. Woolf and the OSIR examined processes used to detect incidents of confirmed or suspected misconduct in science and also analyzed the status of individuals who disclosed these incidents (Table 4.4 and Table 4.5). Their analyses indicate that existing channels within the peer review process and research institutions do provide information about misconduct in science. Initial reports were often made by supervisors, collaborators, or subordinates who were in direct contact with the individual suspected of misconduct. These findings contradict opinions that checks such as peer review, replication of research, and journal reviews do not help identify instances of misconduct.
Table 4.4 Primary Sources of Detection of Alleged Misconduct (1980 to 1987).
Table 4.5 Status of Individual Bringing Allegations.
However, the panel notes that supervisors, colleagues, and subordinate personnel may report misconduct in science at their peril. The honesty of individuals who hold positions of respect or prestige cannot be easily questioned. It can be particularly deleterious for junior or temporary personnel to make allegations of misconduct by their superiors. Students, research fellows, and technicians can jeopardize current positions, imperil progress on their research projects, and sacrifice future recommendations from their research supervisors by making allegations of misconduct by their co-workers.
The Acadia Institute Survey
One provocative study of university officers' experience with misconduct in science is a 1988 survey of 392 deans of graduate studies from institutions affiliated with the Council of Graduate Schools (CGS). 11 12 The survey was conducted with support from NSF and the American Association for the Advancement of Science. Approximately 75 percent (294) of the graduate deans responded to the survey.
The Acadia Institute survey data indicate that 40 percent (118) of the responding graduate deans had received reports of possible faculty misconduct in science during the previous 5 years. Two percent (6) had received more than five reports. These figures suggest that graduate deans have a significant chance of becoming involved in handling an allegation of misconduct in science.
The survey shows that about 190 allegations of misconduct in science were addressed by CGS institutions over the 5-year period (1983 to 1988) reported in the survey. It is not known whether any or all of these allegations were separately submitted to government offices concerned with misconduct in science during this time period, although overlap is likely.
The Acadia Institute survey also suggests, not surprisingly, that allegations of misconduct in science are associated with institutions that receive significant amounts of external research funding. As noted in the NSF's OIG summary report of the Acadia Institute survey: “Of the institutions receiving more than $50 million in external research funding annually, 69 percent [36] had been notified of possible faculty misconduct. Among institutions receiving less than $5 million, only 19 percent [14] had been so notified” (NSF, 1990d, pp. 2-3).
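The association can be read off the notification rates per funding tier. In the sketch below, the notified counts come from the survey as quoted, while the tier denominators are back-calculated approximations for illustration only:

```python
# Acadia Institute survey figures: institutions notified of possible
# faculty misconduct, by external research funding tier. Tier sizes
# are inferred from the quoted percentages and are approximate.
tiers = {
    ">$50M": {"notified": 36, "institutions": 52},  # ~69%
    "<$5M":  {"notified": 14, "institutions": 74},  # ~19%
}

for tier, d in tiers.items():
    pct = 100 * d["notified"] / d["institutions"]
    print(f"{tier}: {d['notified']}/{d['institutions']} notified ({pct:.0f}%)")
```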
When asked about cases of verified misconduct by their faculties during the previous 5 years, 20 percent (59) of all the responding graduate deans indicated such instances. Among universities with over $50 million per year in external funding (about 55 institutions fell within this category in 1988), 41 percent (20) had some verified misconduct, according to responses of graduate deans participating in the Acadia Institute survey. The actual number of cases associated with these percentages, which is small, is consistent with the panel's observation that the total number of confirmed cases of misconduct in science is very small. Nevertheless, reports indicating that prestigious research institutions consistently receive, and confirm, allegations of misconduct in science are disturbing.
Other Reports
Bechtel and Pearson. Bechtel and Pearson (1985) examined both the question of prevalence of misconduct in science and the concept of deviant behavior by scientists as part of a larger exploration of “elite occupational deviance” that included white collar crime. The authors reviewed 12 cases of misconduct in science, drawn from reports in the popular and scientific press in the 1970s and early 1980s. They found that available evidence was inadequate to support accurate generalizations about how widespread misconduct in science might be. As to the causes of deviant behavior, the authors concluded that “in the debate between those who favor individualistic explanations based on psychological notions of emotional disturbance, and the critics of big science who blame the increased pressures for promotion, tenure, and recognition through publications, one tends to see greater merit in the latter” (p. 244). They suggested that further systematic examination is required to determine the appropriate balance between individual and structural sources of deviant behavior.
Sigma Xi Study. As part of a broader survey it conducted in 1988, Sigma Xi, the honor society for scientific researchers in North America, asked its members to respond to the following statement: “Excluding gross stupidities and/or minor slip ups that can be charitably dismissed (but not condoned), I have direct knowledge of fraud (e.g., falsifying data, misreporting results, plagiarism) on the part of a professional scientist.” 13
Respondents were asked to rank their agreement or disagreement with the statement on a five-point scale. The survey was mailed to 9,998 members of the society; about 38 percent responded (which indicates a possible source of bias).
Although 19 percent of the Sigma Xi respondents indicated that they had direct knowledge of fraud by a scientist, it is not certain from the survey whether direct knowledge meant personal experience with or simply awareness of scientific fraud. It is also possible that some respondents were referring to identical cases, and respondents may have reported knowledge of cases gained secondhand. Furthermore, it is not clear what information can be gained by having respondents rank “direct knowledge” on a five-point scale of agreement and disagreement.
Additional Information. Estimates about the incidence of misconduct in science have ranged from editorial statements that the scientific literature is “99.9999 percent pure” to reader surveys published in scientific journals indicating that significant numbers of the respondents have had direct experience with misconduct of some sort in science. 14 The broad variance in these estimates has not resolved uncertainties about the frequency with which individuals or institutions actually encounter incidents of misconduct in science.
In March 1990, the NSF's OIG reported that, based on a comprehensive review of the results from past surveys that attempted to measure the incidence of misconduct in science, “the full extent of misconduct is not yet known” (NSF, 1990d, p. 9). The NSF reports found that only a few quantitative studies have examined the extent of misconduct in science and that prior survey efforts had poor response rates, asked substantively different questions, and employed varying definitions of misconduct. These efforts have not yielded a database that would provide an appropriate foundation for findings and conclusions about the extent of misconduct in science and engineering. 15
FINDINGS AND CONCLUSIONS
The panel found that existing data are inadequate to draw accurate conclusions about the incidence of misconduct in science or of questionable research practices. The panel points out that the number of confirmed cases of misconduct in science is low compared to the level of research activity in the United States. However, as with all forms of misconduct, underreporting may be significant; federal agencies have only recently imposed procedural and reporting requirements that may yield larger numbers of reported cases. The possibility of underreporting can neither be dismissed nor confirmed at this time. More research is necessary to determine the full extent of misconduct in science.
Regardless of the incidence, the panel emphasizes that even infrequent cases of misconduct in science are serious matters. The number of confirmed incidents of misconduct in science, together with the possibility of underreporting and the results presented in some preliminary studies, indicate that misconduct in science is a problem that cannot be ignored. The consequences of even infrequent cases of misconduct in science require that attention be given to appropriate methods of treatment and prevention.
1. Reports of cases involving findings of misconduct in science were provided to the panel by DHHS and NSF. These reports indicate a total of 15 cases of findings of misconduct in science by DHHS in the period from March 1989 to December 1990 and 3 cases of findings of misconduct in science by NSF in the period from July 1989 to September 1990. See NSF (1990b) and DHHS (1991b). Information was also provided in a personal communication from Donald Buzzelli, staff associate, OIG, NSF, February 1, 1991.
Congressional testimony by and telephone interviews with NIH and ADAMHA officials indicated that in the period from 1980 to 1987, roughly 17 misconduct cases handled by these agencies resulted in institutional findings of research misconduct, some of which are included in the Woolf analysis discussed below. During this same period, NSF made findings of misconduct in science in seven cases. See the testimony of Katherine Bick and Mary Miers in U.S. Congress (1989a); see also Woolf (1988a).
The report by Woolf (1988a) identified 40 publicly reported cases of alleged misconduct in science in the period from 1950 to 1987, many of which involved confirmed findings of misconduct. Another two dozen or so cases of alleged misconduct in science were reported in congressional hearings in the 1980s. Some of the cases discussed in congressional hearings and in the Woolf analysis are included in the NSF and DHHS reports mentioned above. Some cases discussed in congressional hearings are still open, and the remainder have been closed without an institutional finding of misconduct in science.
The estimate of confirmed cases of misconduct in science does not include cases in which research institutions have made findings of misconduct, unless these cases are included in the Woolf analysis or the congressional hearings mentioned above. During the time of this study, there were no central records for institutional reports on misconduct in science that would indicate the frequency with which these organizations found allegations to have merit.
Finally, several authors have reviewed selected cases of misconduct in science, both contemporary and historical. The most popular accounts are a book by Broad and Wade (1982), who cite 34 cases of “known or suspected cases of scientific fraud” ranging from “ancient Greece to the present day”; a book by Klotz (1985); and one by Kohn (1986), who cites 24 cases of “known or suspected misconduct.” These texts, and the government reports, congressional hearings, and Woolf analysis cited above, discuss many of the same cases.
2. The preamble to the PHS's 1989 regulations for scientific misconduct notes that “reported instances of scientific misconduct appear to represent only a small fraction of the total number of research and research training awards funded by the PHS” (DHHS, 1989a, p. 32446). The preamble to the NSF's 1987 misconduct regulations states that “NSF has received relatively few allegations of misconduct or fraud occurring in NSF-supported research or . . . proposals” (NSF, 1987, p. 24466).
Furthermore, according to the National Library of Medicine, during the 10-year period from 1977 to 1986, about 2.8 million articles were published in the world's biomedical literature. The number of articles retracted because of the discovery of fraud or falsification of data was 41, less than 0.002 percent of the total. See Holton (1988), p. 457.
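The cited rate is easy to verify from the two numbers given above; a one-line check (illustrative arithmetic only):

```python
# Verify the cited retraction rate: 41 retractions out of ~2.8 million
# biomedical articles published worldwide from 1977 to 1986 (Holton, 1988).
articles = 2_800_000
retracted = 41
print(f"{retracted / articles:.6%}")  # ~0.001464%, i.e., below 0.002 percent
```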
3. Analyses of the NSF's experience are complicated by the fact that different offices have held authority for handling research misconduct cases. Prior to the creation of the OIG in March 1989, this authority was assigned to the NSF's Division of Audit and Oversight. The OIG “inherited” approximately 19 case files, and it received 6 new allegations of research misconduct during FY 1989. NSF officials reported in 1987 that NSF had examined 12 charges of research misconduct, 7 of which were found to be warranted, of which 3 were considered minor violations. See Woolf (1988a).
4. Personal communication, OIG, NSF, February 1, 1991.
5. Personal communication, Jules Hallum, director, OSI, February 27, 1991.
6. Four of these investigations were conducted by the PHS. Sixteen were conducted by outside, primarily grantee, institutions. One additional investigation was an intramural case within the PHS.
7. See the documentation regarding the case of psychologist Stephen Breuning as detailed in the DHHS's Report and Recommendations of a Panel to Investigate Allegations of Scientific Misconduct under Grants MH-32206 and MH-37449, April 20, 1987.
8. The definition excludes violations of regulations that govern human or animal experimentation, financial or other record-keeping requirements, or the use of toxic or hazardous substances. It applies to individuals or institutions that apply for as well as those that receive extramural research, research-training, or research-related grants or cooperative agreements under the PHS, and to all intramural PHS research. In the proposed rule, the PHS's definition of misconduct included a second clause referring to “material failure to comply with federal requirements that uniquely relate to the conduct of research.” This clause was eliminated in the misconduct definition adopted in the final rule (DHHS, 1989a) to avoid duplicate reporting of violations of research regulations involving animal and human subjects, since these areas are covered by existing regulations and policies.
9. In the commentary accompanying its final rule, NSF (1987) noted that several letters on the proposed rule had commented that the proposed definition was too vague or overreaching. The NSF's 1987 definition originally included two clauses in addition to those in the PHS misconduct definition: “material failure to comply with federal requirements for protection of researchers, human subjects, or the public or for ensuring the welfare of laboratory animals” and “failure to meet other material legal requirements governing research” (NSF, 1987, p. 24468). These categories were removed in 1991 when the regulations were amended.
10. In a “Dear Colleague Letter on Misconduct” issued on August 16, 1991, the NSF's OIG stated, “The definition is not intended to elevate ordinary disputes in research to the level of misconduct and does not contemplate that NSF will act as an arbitrator of mere personality clashes or technical disputes between researchers.”
11. K. Louis, J. Swazey, and M. Anderson, University Policies and Ethical Issues in Research and Graduate Education: Results of a Survey of Graduate School Deans, preliminary report (Bar Harbor, Me.: Acadia Institute, November 1988). The survey was published as Swazey et al. (1989).
12. It should be noted that the survey instrument used by the Acadia Institute did not define “research misconduct,” but instead left that term open to the interpretation of the respondents. In some parts of the survey, “plagiarism” was distinguished from “research misconduct.”
13. Sigma Xi (1989), as summarized in NSF (1990d), pp. 4-5.
14. Cited in Woolf (1988a), p. 71. She quotes an editorial by Koshland (1987) for the first figure and a survey by St. James-Roberts (1976b) for the latter.
15. See Tangney (1987) and Davis (1989). See also St. James-Roberts (1976a). The reader survey reported in St. James-Roberts (1976b) received 204 questionnaire replies. Ninety-two percent of the respondents reported direct or indirect experience with “intentional bias” in research findings. The source of knowledge of bias was primarily from direct contact (52 percent). Forty percent reported secondary sources (information from colleagues, scientific grapevine, media) as the basis for their knowledge.
See also Industrial Chemist (1987a,b). The editors expressed surprise at the high level of responses: 28.4 percent of the 290 respondents indicated that they faked a research result often or occasionally.
National Academy of Sciences, National Academy of Engineering, and Institute of Medicine Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process, Volume I. Washington, DC: National Academies Press; 1992. Chapter 4, Misconduct in Science—Incidence and Significance.
NIH Grants Conference PreCon Event: Research Misconduct & Detrimental Research Practices
What is research misconduct and what are some detrimental research practices? Research Integrity Officers from the HHS Office of Research Integrity (ORI) and NIH will help answer these questions and more, as they discuss Public Health Service (PHS) regulations on handling research misconduct allegations, responsibilities of an institution receiving PHS funds, and red flags that may help to avoid misconduct in research. Experts will explore interpersonal, institutional, and professional responsibilities in the overall ethical conduct of research during presentations, case studies, and discussions with the audience.
Virtual Event Overview
Friday, October 14, 2022, 2:00 PM – 4:00 PM ET. One-time registration required at NIHGrantsConference.vfairs.com (if you have already registered on the NIH Grants Conference & PreCon Events website, you do not need to register again). Once registered, you can join live events and explore the NIH Exhibit Hall with additional resources at Institute, Center, & Office booths.
Topics include:
- Public Health Service (PHS) regulations on handling research misconduct allegations;
- Responsibilities of an institution receiving PHS funds;
- Red flags that may help to avoid misconduct in research.
Event Resources:
- Session Recording – Research Misconduct & Detrimental Research Practices (YouTube)
- Session Transcript - Research Misconduct & Detrimental Research Practices (Word)
- Presentation (PowerPoint)
Upcoming Events
- Feb. 1-2 Virtual 2023 NIH Grants Conference: Funding, Policies, & Processes
Past PreCon Events
Recording available 5-7 business days following the event.
- Aug. 25 NIH Loan Repayment Programs: Supporting the Next Generation of Researchers
- Sept. 15 Navigating Early Career Funding Opportunities
- Nov. 9 International Collaborations: Policies, Processes, & Partnerships
- Dec. 6 & 7 Human Subjects Research: Policies, Clinical Trials, & Inclusion
*This event is scheduled in the Eastern Time Zone.
Welcome & Overview
Presenter: Patricia Valdez, Ph.D., Chief Extramural Research Integrity Officer, Office of Extramural Research (OER), NIH
Presenter: Ranjini Ambalavanar, Ph.D., Scientist Investigator, Office of Research Integrity (ORI), Health and Human Services (HHS)
- PHS definition of research misconduct
- Community Responsibility
- Research misconduct and regulations
- The role of HHS ORI in research misconduct cases
- Examples of data falsification and/or fabrication
- Process for identifying and dealing with research misconduct allegations
- When to contact the funding agency
- Red flags and how to avoid misconduct
- Other integrity concerns and detrimental research practices that negatively impact research and PHS funding
ORI & NIH Case Studies: What Do You Think?
Presenter: Ranjini Ambalavanar, Ph.D., Scientist Investigator, Office of Research Integrity (ORI), Health and Human Services (HHS)
Potential Topics of Discussion:
- Research Misconduct Proceedings
- ORI and NIH Administrative Actions
- Research Misconduct with Additional Compliance Concerns
Acknowledgments & Closing
"office hours" - monday, oct. 17, 2022 .
- 9:00 AM - 4:00 PM ET ORI Expert Hours
- 12:00 PM - 5:00 PM ET NIH Expert Hours
In addition to the Research Misconduct & Detrimental Research Practices PreCon Event , you have the opportunity to schedule a 1:1 meeting with the presenters during specified Office Hours on Monday, October 17. During these 20-minute chats, you can interact and connect with experts who can provide additional guidance and answers to your questions. Limited availability! Visit the "See Something, Say Something!" Booth to schedule your appointment! Note: Only 1 appointment per person.
Presenter Bios
Ranjini Ambalavanar, Ph.D., Scientist-Investigator, Division of Investigative Oversight, Office of Research Integrity, HHS
Ranjini Ambalavanar, Ph.D., is a Scientist-Investigator in the Division of Investigative Oversight (DIO), Office of Research Integrity (ORI). She conducts oversight reviews of cases of research misconduct in Public Health Service (PHS)-funded research at US institutions. ORI promotes integrity in biomedical research supported by the PHS and oversees investigations of research misconduct cases.
Prior to joining the ORI in May 2009, Dr. Ambalavanar was a faculty member at the University of Maryland Dental School. She received her Ph.D. in neuroscience from the University of Liverpool, UK, and completed postdoctoral training at Cambridge University, UK, and at NINDS, NIH (Bethesda, MD, USA). Her research explored the neural mechanisms of chronic cutaneous and deep tissue pain involving muscles and joints, including chronic craniofacial pain disorders, and she has published many peer-reviewed articles and invited book chapters in her field.
Elyse Sullivan, Ph.D., Communications Strategist, Content Development Team Lead, Division of Communications and Outreach, Office of Extramural Research, NIH
Elyse Sullivan, Ph.D., is a communications strategist and the Content Development Team Lead in the Division of Communications and Outreach (DCO) within the Office of Extramural Research (OER). She works with subject-matter experts to disseminate important grants process and policy information for both external and internal audiences by developing websites, blogs, newsletters, and multimedia training tools. Dr. Sullivan received her PhD in Neuroscience from the University of Maryland, Baltimore, where she studied translational electrophysiological biomarkers in schizophrenia.
Patricia Valdez, Ph.D., NIH Extramural Research Integrity Officer, Office of Extramural Programs, NIH
Patricia Valdez, Ph.D., serves as the NIH Extramural Research Integrity Officer in the Office of Extramural Programs (OEP), in the Office of Extramural Research (OER). In this role, she is responsible for training NIH Extramural staff and Research Integrity Officers on handling allegations of research misconduct in NIH-funded extramural activities and for performing the initial review and referral of allegations to the appropriate oversight agencies.
Prior to joining the OER, Dr. Valdez was the Manager of Publication Ethics for the American Society for Biochemistry and Molecular Biology (ASBMB), where she handled all allegations of scientific misconduct in ASBMB journals, including the Journal of Biological Chemistry.
Dr. Valdez received her Ph.D. in molecular and cell biology from the University of California, Berkeley, where she studied T cell development. She carried out her postdoctoral training in the Immunology Discovery department at Genentech, where she focused on both basic research and pre-clinical drug development. Dr. Valdez continued her research as an NIH Intramural Staff Scientist in the NIAID Laboratory of Clinical Infectious Disease.
Engage and Connect
Want to learn more about Research Misconduct and Integrity before attending the conference?
- Welcome to Research Training
- HHS Office of Research Integrity
- NIH Office of Research Integrity
- Maintaining Confidentiality in Peer Review
- Managing Conflicts of Interest in Peer Review
- Research Misconduct
- All About Grants Podcasts (coming soon)
- Definitions
- Requirements for Making a Finding of Research Misconduct
- NIH Process for Handling Research Misconduct Allegations
- What Happens if there is a Finding of Research Misconduct?
- What should you do if you Suspect Research Misconduct?
- PHS Administrative Action Bulletin Board
- PHS Research Misconduct Case Summaries
- NIH Peer Review: Grants and Cooperative Agreements
- Open Mike Blog: Breaches of Peer Review Integrity
- Open Mike Blog: Working Together to Protect the Integrity of NIH-funded Research

Event Questions and Special Requests: [email protected] (Submit no less than 3 days prior to the event, if possible.)
Technical Issues: [email protected]
Conducting pre-award subrecipient risk assessments for your research institution
Pre-award subrecipient risk assessments are proactive evaluations that higher education institutions conduct to gauge the experience, compliance history, and overall reliability of potential research project partners before entering into a grant or funding agreement. These assessments help institutions identify and mitigate risks, supporting efficient resource allocation and successful collaboration in research projects and other initiatives.
Why conduct risk assessments?
Recipients of federal awards are required by Uniform Guidance 2 CFR 200.332 to conduct pre-award subrecipient risk assessments. These assessments help an institution surface potential legal, reputational and other issues at a prospective partner's institution, and they should be completed before entering into any agreement so that appropriate partners are identified early enough to avoid delays in project initiation. Identifying the risk level associated with potential partners matters not only for compliance but also for safeguarding project success: risk assessments help evaluate the capabilities and technical expertise of potential partners, minimizing the risk of project failures or delays. In some cases, the results may reveal that entering into an agreement with a potential partner is simply too risky.
What areas should be assessed?
Key items to assess when performing a risk assessment are financial stability, organizational capacity, compliance and legal issues, risk management policies and general structure of the federal award. The Federal Demonstration Partnership (FDP) provides a helpful template of a Risk Assessment Questionnaire that touches on all the aforementioned areas. While this is a widely accepted standard template, institutions may decide to add additional questions or areas to be assessed based on their own unique needs.
Once risks have been identified through the assessment, it is important to develop a comprehensive risk mitigation plan. Partners deemed high risk (e.g., partners with single audit findings, or partners inexperienced in administering federal funds) may warrant closer monitoring; more extensive or frequent reporting and specific conditions attached to the pass-through of federal funds are examples of effective mitigation measures.
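To make the tiering step concrete, here is a minimal sketch of how an institution might score a completed questionnaire into risk tiers. The fields, weights, and thresholds are hypothetical illustrations keyed to the assessment areas named above; they are not drawn from the FDP template or Uniform Guidance, and a real program would calibrate them to its own risk appetite.

```python
from dataclasses import dataclass

# Hypothetical pre-award screening sketch. Field names, weights, and
# thresholds are invented for illustration only.

@dataclass
class SubrecipientAssessment:
    has_single_audit_findings: bool      # compliance and legal issues
    years_administering_fed_funds: int   # organizational capacity
    financially_stable: bool             # financial stability
    has_written_risk_policies: bool      # risk management policies
    award_amount: float                  # general structure of the award

def risk_tier(a: SubrecipientAssessment) -> str:
    """Sum weighted red flags and map the total to a monitoring tier."""
    score = 0
    score += 3 if a.has_single_audit_findings else 0
    score += 2 if a.years_administering_fed_funds < 2 else 0
    score += 3 if not a.financially_stable else 0
    score += 1 if not a.has_written_risk_policies else 0
    score += 1 if a.award_amount > 250_000 else 0
    if score >= 5:
        return "high (closer monitoring, frequent reporting, special conditions)"
    return "medium (periodic reporting)" if score >= 2 else "low (standard terms)"

# Example: a new partner with audit findings on a large pass-through award.
print(risk_tier(SubrecipientAssessment(True, 1, True, False, 300_000)))
# -> high (closer monitoring, frequent reporting, special conditions)
```

In practice, the resulting tier would simply drive which mitigation measures, such as reporting frequency or special award conditions, get written into the subaward agreement.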
Baker Tilly can help
Baker Tilly can review an institution's subrecipient risk management procedures and deliver recommendations to address gaps within the subrecipient risk management process, along with supporting deliverables from our review.
Questions? Contact us!
Case study: subrecipient risk assessment in action.

Client background
A research organization was looking to strengthen its internal controls over subrecipient risk assessments and subrecipient monitoring to meet compliance with sponsor and federal regulations and requirements.
Baker Tilly solution
Baker Tilly was engaged to conduct a comprehensive review of the organization's sponsored research infrastructure. As part of the review, Baker Tilly examined the risk management policies and procedures in place to identify and mitigate risks when evaluating partners' eligibility. Key process owners were interviewed and documentation was reviewed to determine risk appetite, document current policies and procedures, and clarify roles and responsibilities.
Results achieved
Baker Tilly identified gaps within the client's policies and procedures. Our review noted that the organization's risk management process lacked clearly defined roles and responsibilities and had inadequate controls in place to identify and mitigate risk. Our team provided recommendations and enhancement opportunities not only to strengthen the risk management process and achieve compliance with federal regulations, but also to position the organization to identify and mitigate subrecipient risks promptly.
For more information on pre-award subrecipient risk assessments, or to learn more about how Baker Tilly’s higher education internal audit specialists can help your institution, contact our team.
Misconduct case studies.
Examples of cases of misconduct investigated by MRCP(UK).

A science experiment in the sky attempts to unravel the mysteries of contrails
IN THE SKIES OVER MONTANA ― It was a six-hour flight that officially didn't go anywhere, but it could help usher in a new chapter of aviation sustainability.
A high-altitude series of science experiments over the course of a few weeks in October tested different kinds of aviation fuel and studied the effects of contrails – those thin, wispy, cloud-like lines you sometimes see behind planes. USA TODAY got a front-row seat to the cutting-edge research a few days before the mission concluded on Nov. 1.
The flights were carried out by NASA’s DC-8 flying laboratory out of the Armstrong Flight Research Center in California while it tailed a brand-new Boeing 737 MAX 10, which will eventually join United Airlines’ fleet.
USA TODAY joined one of the DC-8 flights from Paine Field, north of Seattle, at the invitation of NASA, Boeing and United and saw a promising glimpse of what eco-conscious travelers might be able to expect in the future, if the research delivers on its promise.
“We think from the modelers that the contrails have a bigger climate impact today than all the aviation CO2 that’s been emitted in the last 100 years,” Rich Moore, principal investigator from NASA Langley said before the flight took off. “We’ve accumulated a certain amount of CO2 in the atmosphere and that’s a greenhouse gas, so there’s some warming associated with that … (But) the modelers are telling us that those clouds are also having a warming effect overall and that the effect is bigger.”
How do contrails affect the environment?
It’s not totally clear, and that’s part of why this research is important. Scientists do have a basic understanding that contrails (short for condensation trails, which form partly around the tiny soot particles that jet engines emit), like clouds, trap heat and reflect it back toward the Earth’s surface, but the extent and severity of the contrail effect isn’t fully understood. The research is multi-faceted: it looks at what conditions contribute to contrail formation and at contrails' composition, knowledge that can help clarify their effect on the climate and inform the development of new aviation technologies.
What was the flight like?
NASA's DC-8 trailed Boeing's 737 MAX 10 and wove in and out of its wake for about six hours on Oct. 23. That allowed sensors onboard the flying lab to capture emissions data from the Boeing's engines, as well as compare that airflow to the surrounding environment. Scientists analyzed the emissions from the Boeing aircraft as it alternated between 100% sustainable aviation fuel and low-sulfur traditional petroleum-based type A jet fuel, with the help of an array of sensors onboard the DC-8.
Boeing's 737 MAX fleet uses a new generation of highly efficient engines, the LEAP-1B, manufactured by CFM. The engines emit about 15% less carbon dioxide than earlier generations of 737 power plants, according to SAFRAN, one of the companies in the CFM consortium.
How are contrails being studied?
If an observer on the ground had been able to see through the cloud cover, they would have seen two planes circling for a few hours at 35,000 feet or so above Montana. Onboard, however, the passengers and cargo they carried were the vanguard of aviation's more sustainable future.
“We want to be able to sample the emissions from the aircraft ahead of us in order to understand, at cruise conditions, what are the particles and gasses coming out of the engines,” Moore said. “We want to understand the effect of those particles on the contrails under a variety of different environmental conditions.”
The technology already seems to be achieving at least some of its goals, according to the researchers.
“The lean burn combustors on the LEAP-1B are incredibly low-sooting,” Moore said, explaining that engine particle emissions are a crucial part of contrail formation. “It’s observable at cruise, the absence of soot particle emissions from this engine.”
How can the industry reduce contrails?
Moore said that during another flight, a chase plane was able to capture photographs showing the DC-8, which first entered service in 1969, producing contrails, while the new Boeing 737, with its ultra-efficient CFM LEAP-1B engines and more streamlined design, had no such streamers behind it. Boeing and NASA did not make those photos available for publication.
A full analysis of the test flights’ results is still at least a few months away, but researchers onboard and other industry partners are excited about what they’ve seen so far.
“It is a super wonky and complicated science challenge but I’m excited for the amount we’re learning,” Lauren Riley, United Airlines’ chief sustainability officer, told USA TODAY. “The industry is committed to doing the right thing.”
According to Moore, sustainable aviation fuel combined with highly efficient engines made the formation of contrails less likely than with earlier aviation technologies.
It's complicated, though.
Contrails are more likely to form when airplane engines have higher emissions, but that's not the only factor involved in their formation. Environmental factors like humidity play a role, too. Cleaner jet fuels and more efficient engines make contrails less likely to form, but they can still show up even with the most cutting-edge airplane and engine technology.
The experts emphasized that cleaner fuels are an important goal for the industry's sustainability, and that they have an added benefit of making contrail formation less likely, which probably further reduces the warming effect of flying.
“We’re still seeing contrails under some conditions, but we’re having to tease out the effects of the fuels and the environmental conditions, and that will be something the team will be working on over the coming weeks and months,” Moore said. “We have our work cut out for us. The emissions are just that low; we’re measuring very small signals above the atmospheric background. The fact that it’s that hard is a good thing for this technology and these fuels going forward, even if it makes our job as scientists a little harder.”
Will contrails research change passenger flights?
It probably won't mean anything noticeable to flyers right away, but the goal of this research is to make travel better for the environment. The overall aim of the airline industry is to be carbon neutral by 2050, and understanding contrails and their impact is part of a broader sustainability push by the aviation sector.
Many of those onboard the NASA flight said the research will inform the next generation of airplane technology and will help the industry move toward its sustainability goals.
“This is really allowing us to tune our technologies, to make the right technology choices to put out the products that are really maximized for sustainability, that are really best suited to where the market needs and wants to go,” David Ostdiek, a representative from GE, which is part of the CFM engine manufacturing consortium, said during a post-flight briefing.
Longer term, this research will also likely contribute to better contrail forecasting tools and could lead to a better understanding of the tradeoffs between fuel efficient routings, which reduce CO2 emissions, and “clean” routings, which reduce contrail formation.
“How and where do contrails form, but secondarily, what are the additional benefits of sustainable aviation fuels in mitigating (the formation and effect of contrails)?” Riley said. “There’s so much that’s still unknown and we have to take those steps forward rooted in science.”
Zach Wichter is a travel reporter for USA TODAY based in New York. You can reach him at [email protected]

Definition of Research Misconduct
Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.
(a) Fabrication is making up data or results and recording or reporting them.
(b) Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.
(c) Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit.
(d) Research misconduct does not include honest error or differences of opinion.
