20 Most Unethical Experiments in Psychology

Humanity often pays a high price for progress and understanding — at least, that seems to be the case in many famous psychological experiments. Human experimentation remains one of the most contentious topics in psychology. While some famous experiments in psychology have left test subjects only temporarily distressed, others have left their participants with life-long psychological issues. In either case, it’s easy to ask the question: “What’s ethical when it comes to science?” Then there are the experiments that involve children, animals, and test subjects who are unaware they’re being experimented on. How far is too far, if the result means a better understanding of the human mind and behavior? We think we’ve found 20 answers to that question with our list of the most unethical experiments in psychology.

  • Emma Eckstein
  • Electroshock Therapy on Children
  • Operation Midnight Climax
  • The Monster Study
  • Project MKUltra
  • The Aversion Project
  • Unnecessary Sexual Reassignment
  • Stanford Prison Experiment
  • Milgram Experiment
  • The Monkey Drug Trials
  • Facial Expressions Experiment
  • Little Albert
  • Bobo Doll Experiment
  • The Pit of Despair
  • The Bystander Effect
  • Learned Helplessness Experiment
  • Racism Among Elementary School Students
  • UCLA Schizophrenia Experiments
  • The Good Samaritan Experiment
  • Robbers Cave Experiment

30 Most Unethical Psychology Human Experiments

Disturbing human experiments aren’t something the average person thinks too much about. Rather, the progress achieved in the last 150 years of human history is an accomplishment we’re reminded of almost daily. Achievements made in biomedicine and the field of psychology mean that we no longer need to worry about things like deadly diseases or the treatment of masturbation as a form of insanity. For better or worse, we have developed more effective ways to gather information, treat skin abnormalities, and even kill each other. But what we are not constantly reminded of are the human lives that have been damaged or lost in the name of this progress. The following is a list of the 30 most disturbing human experiments in history.

30. The Tearoom Sex Study

Sociologist Laud Humphreys often wondered about the men who commit impersonal sexual acts with one another in public restrooms. He wondered why “tearoom sex” — fellatio in public restrooms — led to the majority of homosexual arrests in the United States. Humphreys decided to become a “watchqueen” (the person who keeps watch and coughs when a cop or stranger gets near) for his Ph.D. dissertation at Washington University. Throughout his research, Humphreys observed hundreds of acts of fellatio and interviewed many of the participants. He found that 54% of his subjects were married, and 38% were very clearly neither bisexual nor homosexual. Humphreys’ research shattered a number of stereotypes held by both the public and law enforcement.

29. Prison Inmates as Test Subjects

In 1951, Dr. Albert M. Kligman, a dermatologist at the University of Pennsylvania and future inventor of Retin-A, began experimenting on inmates at Philadelphia’s Holmesburg Prison. As Kligman later told a newspaper reporter, “All I saw before me were acres of skin. It was like a farmer seeing a field for the first time.” Over the next 20 years, inmates willingly allowed Kligman to use their bodies in experiments involving toothpaste, deodorant, shampoo, skin creams, detergents, liquid diets, eye drops, foot powders, and hair dyes. Though the tests required constant biopsies and painful procedures, none of the inmates experienced long-term harm.

28. Henrietta Lacks

In 1951, Henrietta Lacks, a poor, uneducated African-American woman from Baltimore, was the unwitting source of cells which were then cultured for the purpose of medical research. Though researchers had tried to grow cells before, Henrietta’s were the first successfully kept alive and cloned. Henrietta’s cells, known as HeLa cells, have been instrumental in the development of the polio vaccine, cancer research, AIDS research, gene mapping, and countless other scientific endeavors. Henrietta died penniless and was buried without a tombstone in a family cemetery. For decades, her husband and five children were left in the dark about their wife and mother’s amazing contribution to modern medicine.

27. Project QKHILLTOP

In 1954, the CIA developed an experiment called Project QKHILLTOP to study Chinese brainwashing techniques, which it then used to develop new methods of interrogation. Leading the research was Dr. Harold Wolff of Cornell University Medical School. After requesting that the CIA provide him with information on imprisonment, deprivation, humiliation, torture, brainwashing, hypnosis, and more, Wolff’s research team began to formulate a plan through which they would develop secret drugs and various brain-damaging procedures. According to a letter he wrote, in order to fully test the effects of the harmful research, Wolff expected the CIA to “make available suitable subjects.”

26. Stateville Penitentiary Malaria Study

During World War II, malaria and other tropical diseases were impeding the efforts of the American military in the Pacific. In order to get a grip on the problem, the Malaria Research Project was established at Stateville Penitentiary in Joliet, Illinois. Doctors from the University of Chicago exposed 441 volunteer inmates to bites from malaria-infected mosquitos. Though one inmate died of a heart attack, researchers insisted his death was unrelated to the study. The widely praised experiment continued at Stateville for 29 years, and included the first human test of Primaquine, a medication still used in the treatment of malaria and Pneumocystis pneumonia.

25. Emma Eckstein and Sigmund Freud

Despite seeking the help of Sigmund Freud for vague symptoms like stomach ailments and slight depression, 27-year-old Emma Eckstein was “treated” by the Austrian doctor for hysteria and excessive masturbation, a habit then considered dangerous to mental health. Emma’s treatment included a disturbing experimental surgery in which she was anesthetized with only a local anesthetic and cocaine before the inside of her nose was cauterized. Not surprisingly, Emma’s surgery was a disaster. Whether Emma was a legitimate medical patient or a source of more amorous interest for Freud, as a recent movie suggests, Freud continued to treat Emma for three years.

24. Dr. William Beaumont and the Stomach

In 1822, a fur trader on Mackinac Island in Michigan was accidentally shot in the stomach and treated by Dr. William Beaumont. Despite dire predictions, the fur trader survived — but with a hole (fistula) in his stomach that never healed. Recognizing the unique opportunity to observe the digestive process, Beaumont began conducting experiments. Beaumont would tie food to a string, then insert it through the hole in the trader’s stomach. Every few hours, Beaumont would remove the food to observe how it had been digested. Though gruesome, Beaumont’s experiments led to the worldwide acceptance that digestion was a chemical, not a mechanical, process.

23. Electroshock Therapy on Children

In the 1960s, Dr. Lauretta Bender of New York’s Creedmoor Hospital began what she believed to be a revolutionary treatment for children with social issues — electroshock therapy. Bender’s methods included interviewing and analyzing a sensitive child in front of a large group, then applying a gentle amount of pressure to the child’s head. Supposedly, any child who moved with the pressure was showing early signs of schizophrenia. Said to have had a misunderstood childhood herself, Bender was reportedly unsympathetic to the children in her care. By the time her treatments were shut down, Bender had used electroshock therapy on over 100 children, the youngest of whom was age three.

22. Project Artichoke

In the 1950s, the CIA’s Office of Scientific Intelligence ran a series of mind control projects in an attempt to answer the question “Can we get control of an individual to the point where he will do our bidding against his will and even against fundamental laws of nature?” One of these programs, Project Artichoke, studied hypnosis, forced morphine addiction, drug withdrawal, and the use of chemicals to incite amnesia in unwitting human subjects. Though it was eventually shut down in the mid-1960s, the project opened the door to extensive research on the use of mind control in field operations.

21. Hepatitis in Mentally Disabled Children

In the 1950s, Willowbrook State School, a New York state-run institution for mentally handicapped children, began experiencing outbreaks of hepatitis. Due to unsanitary conditions, it was virtually inevitable that these children would contract hepatitis. Dr. Saul Krugman, sent to investigate the outbreak, proposed an experiment that would assist in developing a vaccine. However, the experiment required deliberately infecting children with the disease. Though Krugman’s study was controversial from the start, critics were eventually silenced by the permission letters obtained from each child’s parents. In reality, offering one’s child to the experiment was oftentimes the only way to guarantee admittance into the overcrowded institution.

20. Operation Midnight Climax

Initially established in the 1950s as a sub-project of a CIA-sponsored mind-control research program, Operation Midnight Climax sought to study the effects of LSD on individuals. In San Francisco and New York, unconsenting subjects were lured to safehouses by prostitutes on the CIA payroll, unknowingly given LSD and other mind-altering substances, and monitored from behind one-way glass. Though the safehouses were shut down in 1965, when it was discovered that the CIA was administering LSD to human subjects, Operation Midnight Climax was a theater for extensive research on sexual blackmail, surveillance technology, and the use of mind-altering drugs in field operations.

19. Study of Humans Accidentally Exposed to Fallout Radiation

The 1954 “Study of Response of Human Beings exposed to Significant Beta and Gamma Radiation due to Fall-out from High-Yield Weapons,” known better as Project 4.1, was a medical study conducted by the U.S. on residents of the Marshall Islands. When the Castle Bravo nuclear test resulted in a yield larger than originally expected, the government instituted a top-secret study to “evaluate the severity of radiation injury” to those accidentally exposed. Though most sources agree the exposure was unintentional, many Marshallese believed Project 4.1 was planned before the Castle Bravo test. In all, 239 Marshallese were exposed to significant levels of radiation.

18. The Monster Study

In 1939, University of Iowa researchers Wendell Johnson and Mary Tudor conducted a stuttering experiment on 22 orphan children in Davenport, Iowa. The children were separated into two groups, the first of which received positive speech therapy where children were praised for speech fluency. In the second group, children received negative speech therapy and were belittled for every speech imperfection. Normal-speaking children in the second group developed speech problems which they then retained for the rest of their lives. Terrified by the news of human experiments conducted by the Nazis, Johnson and Tudor never published the results of their “Monster Study.”

17. Project MKUltra

Project MKUltra is the code name of a CIA-sponsored research operation that experimented in human behavioral engineering. From 1953 to 1973, the program employed various methodologies to manipulate the mental states of American and Canadian citizens. These unwitting human test subjects were plied with LSD and other mind-altering drugs, hypnosis, sensory deprivation, isolation, verbal and sexual abuse, and various forms of torture. Research occurred at universities, hospitals, prisons, and pharmaceutical companies. Though the project sought to develop “chemical […] materials capable of employment in clandestine operations,” Project MKUltra was ended by a Congress-commissioned investigation into CIA activities within the U.S.

16. Experiments on Newborns

In the 1960s, researchers at the University of California began an experiment to study changes in blood pressure and blood flow. The researchers used 113 newborns ranging in age from one hour to three days old as test subjects. In one experiment, a catheter was inserted through the umbilical arteries and into the aorta. The newborn’s feet were then immersed in ice water for the purpose of testing aortic pressure. In another experiment, up to 50 newborns were individually strapped onto a circumcision board, then tilted so that their blood rushed to their heads and their blood pressure could be monitored.

15. The Aversion Project

In 1969, during South Africa’s detestable Apartheid era, thousands of homosexuals were handed over to the care of Dr. Aubrey Levin, an army colonel and psychologist convinced he could “cure” homosexuals. At the Voortrekkerhoogte military hospital near Pretoria, Levin used electroconvulsive aversion therapy to “reorientate” his patients. Electrodes were strapped to a patient’s upper arm with wires running to a dial calibrated from 1 to 10. Homosexual men were shown pictures of a naked man and encouraged to fantasize, at which point the patient was subjected to severe shocks. When Levin was warned that he would be named an abuser of human rights, he emigrated to Canada, where he went on to work at a teaching hospital.

14. Medical Experiments on Prison Inmates

Perhaps one benefit of being an inmate at California’s San Quentin prison is the easy access to acclaimed Bay Area doctors. But if that’s the case, then a downside is that these doctors also have easy access to inmates. From 1913 to 1951, Dr. Leo Stanley, chief surgeon at San Quentin, used prisoners as test subjects in a variety of bizarre medical experiments. Stanley’s experiments included sterilization and potential treatments for the Spanish Flu. In one particularly disturbing experiment, Stanley performed testicle transplants on living prisoners using testicles from executed prisoners and, in some cases, from goats and boars.

13. Sexual Reassignment

In 1965, Canadian David Peter Reimer was born biologically male. But at seven months old, his penis was accidentally destroyed during an unconventional circumcision by cauterization. John Money, a psychologist and proponent of the idea that gender is learned, convinced the Reimers that their son would be more likely to achieve a successful, functional sexual maturation as a girl. Though Money continued to report only success over the years, David’s own account insisted that he had never identified as female. He spent his childhood teased, ostracized, and seriously depressed. At age 38, David committed suicide by shooting himself in the head.

12. Effect of Radiation on Testicles

Between 1963 and 1973, dozens of Washington and Oregon prison inmates were used as test subjects in an experiment designed to test the effects of radiation on testicles. Bribed with cash and the suggestion of parole, 130 inmates willingly agreed to participate in the experiments conducted by the University of Washington on behalf of the U.S. government. In most cases, subjects were zapped with over 400 rads of radiation (the equivalent of 2,400 chest x-rays) in 10-minute intervals. Only much later did the inmates learn that the experiments were far more dangerous than they had been told. In 2000, the former participants received a $2.4 million class-action settlement from the University.

11. Stanford Prison Experiment

Conducted at Stanford University from August 14-20, 1971, the Stanford Prison Experiment was an investigation into the causes of conflict between military guards and prisoners. Twenty-four male students were chosen and randomly assigned the roles of prisoners and guards. They were then situated in a specially designed mock prison in the basement of the Stanford psychology building. Those subjects assigned to be guards enforced authoritarian measures and subjected the prisoners to psychological torture. Surprisingly, many of the prisoners accepted the abuses. Though the experiment exceeded the expectations of all of the researchers, it was abruptly ended after only six days.

10. Syphilis Experiments in Guatemala

From 1946 to 1948, the United States government, Guatemalan president Juan José Arévalo, and some Guatemalan health ministries cooperated in a disturbing human experiment on unwitting Guatemalan citizens. Doctors deliberately infected soldiers, prostitutes, prisoners, and mental patients with syphilis and other sexually transmitted diseases in an attempt to track their untreated natural progression. Though the subjects were treated with antibiotics, the experiment resulted in at least 30 documented deaths. In 2010, the United States made a formal apology to Guatemala for its involvement in these experiments.

9. Tuskegee Syphilis Study

In 1932, the U.S. Public Health Service began working with the Tuskegee Institute to track the natural progression of untreated syphilis. Six hundred poor, illiterate, male sharecroppers were found and hired in Macon County, Alabama. Of the 600 men, 399 had previously contracted syphilis, and none of them were told they had a life-threatening disease. Instead, they were told they were receiving free healthcare, meals, and burial insurance in exchange for participating. Even after penicillin was proven an effective cure for syphilis in 1947, the study continued until 1972. In addition to the original subjects, victims of the study included wives who contracted the disease and children born with congenital syphilis. In 1997, President Bill Clinton formally apologized to those affected by what is often called the “most infamous biomedical experiment in U.S. history.”

8. Milgram Experiment

In 1961, Stanley Milgram, a psychologist at Yale University, began a series of social psychology experiments that measured the willingness of test subjects to obey an authority figure. Conducted only three months after the start of the trial of German Nazi war criminal Adolf Eichmann, Milgram’s experiment sought to answer the question, “Could it be that Eichmann and his million accomplices in the Holocaust were just following orders?” In the experiment, two participants (one secretly an actor and one an unwitting test subject) were separated into two rooms where they could hear, but not see, each other. The test subject would then read a series of questions to the actor, punishing each wrong answer with an electric shock. Though many people would indicate their desire to stop the experiment, almost all subjects continued when they were told they would not be held responsible, or that there would not be any permanent damage.

7. Infected Mosquitos in Towns

In 1956 and 1957, the United States Army conducted a number of biological warfare experiments on the cities of Savannah, Georgia and Avon Park, Florida. In one such experiment, millions of infected mosquitos were released into the two cities, in order to see if the insects could spread yellow fever and dengue fever. Not surprisingly, hundreds of residents contracted illnesses that included fevers, respiratory problems, stillbirths, encephalitis, and typhoid. In order to photograph the results of their experiments, Army researchers pretended to be public health workers. Several people died as a result of the research.

6. Human Experimentation in the Soviet Union

Beginning in 1921 and continuing for most of the 20th century, the Soviet Union employed poison laboratories known as Laboratory 1, Laboratory 12, and Kamera as covert research facilities of the secret police agencies. Prisoners from the Gulags were exposed to a number of deadly poisons, the purpose of which was to find a tasteless, odorless chemical that could not be detected post mortem. Tested poisons included mustard gas, ricin, digitoxin, and curare, among others. Men and women of varying ages and physical conditions were brought to the laboratories and given the poisons as “medication,” or as part of a meal or drink.

5. Human Experimentation in North Korea

Several North Korean defectors have described witnessing disturbing cases of human experimentation. In one alleged experiment, 50 healthy women prisoners were given poisoned cabbage leaves — all 50 women were dead within 20 minutes. Other described experiments include the practice of surgery on prisoners without anesthesia, purposeful starvation, beating prisoners over the head before using the zombie-like victims for target practice, and chambers in which whole families are murdered with suffocation gas. It is said that each month, a black van known as “the crow” collects 40-50 people from a camp and takes them to an unknown location for experiments.

4. Nazi Human Experimentation

Over the course of the Third Reich and the Holocaust, Nazi Germany conducted a series of medical experiments on Jews, POWs, Romani, and other persecuted groups. The experiments were conducted in concentration camps, and in most cases resulted in death, disfigurement, or permanent disability. Especially disturbing experiments included attempts to genetically manipulate twins; bone, muscle, and nerve transplantation; exposure to diseases and chemical gases; sterilization; and anything else the infamous Nazi doctors could think up. After the war, these crimes were tried as part of the Nuremberg Trials and ultimately led to the development of the Nuremberg Code of medical ethics.

3. Unit 731

From 1937 to 1945, the Imperial Japanese Army ran a covert biological and chemical warfare research program called Unit 731. Based in the large city of Harbin, Unit 731 was responsible for some of the most atrocious war crimes in history. Chinese and Russian subjects — men, women, children, infants, the elderly, and pregnant women — were subjected to experiments which included the removal of organs from a live body, amputation for the study of blood loss, germ warfare attacks, and weapons testing. Some prisoners even had their stomachs surgically removed and their esophagus reattached to their intestines. Many of the scientists involved in Unit 731 rose to prominent careers in politics, academia, business, and medicine.

2. Radioactive Materials in Pregnant Women

Shortly after World War II, with the impending Cold War forefront in the minds of Americans, many medical researchers were preoccupied with the idea of radioactivity and chemical warfare. In an experiment at Vanderbilt University, 829 pregnant women were given “vitamin drinks” they were told would improve the health of their unborn babies. Instead, the drinks contained radioactive iron, and the researchers were studying how quickly the radioisotope crossed into the placenta. At least seven of the babies later died from cancers and leukemia, and the women themselves experienced rashes, bruises, anemia, loss of hair and teeth, and cancer.

1. Mustard Gas Tested on American Military

In 1943, the U.S. Navy exposed its own sailors to mustard gas. Officially, the Navy was testing the effectiveness of new clothing and gas masks against the deadly gas that had proven so terrifying in the First World War. The worst of the experiments occurred at the Naval Research Laboratory in Washington. Seventeen- and 18-year-old boys were approached after eight weeks of boot camp and asked if they wanted to participate in an experiment that would help shorten the war. Only when the boys reached the Research Laboratory were they told the experiment involved mustard gas. The participants, almost all of whom suffered severe external and internal burns, were ignored by the Navy and, in some cases, threatened with the Espionage Act. In 1991, the reports were finally declassified and taken before Congress.

What Are The Top 10 Unethical Psychology Experiments?

  • By Cliff Stamp, BS Psychology, MS Rehabilitation Counseling
  • Published September 9, 2019
  • Last Updated November 13, 2023
  • Read Time 8 mins

Like all sciences, psychology relies on experimentation to validate its hypotheses. This puts researchers in a bind: experimentation requires manipulating one set of variables, and manipulating human beings can be unethical and has the potential to cause outright harm. Today, experiments that involve human beings must meet high standards of safety and security for all research participants. But while 21st-century ethical and safety standards protect people from the potential ill effects of experiments and studies, conditions just a few decades ago were far from ideal and were in many cases outright harmful.

The Top 10 Unethical Psychology Experiments

10. The Stanford Prison Experiment (1971). In August 1971, Dr. Philip Zimbardo of Stanford University began a Navy-funded experiment examining the effects of power dynamics between prison guards and prisoners. It took only six days for the experiment to collapse. Participants absorbed their roles so completely that the “guards” began psychologically tormenting the prisoners, while the prisoners became aggressive toward the guards and refused to comply with requests. By the second day, prisoners were refusing to obey the guards, and the guards had begun threatening prisoners with violence, going far beyond their instructions. By the sixth day, guards were harassing the prisoners physically and mentally, and some had physically harmed prisoners. Zimbardo stopped the experiment at that point.

9. The Monster Study (1939). The Monster Study is a prime example of an unethical psychology experiment on humans that changed the world. Wendell Johnson, a psychologist at the University of Iowa, conducted an experiment on stuttering using 22 orphans. His graduate student, Mary Tudor, carried out the experiment while Johnson supervised her work. She divided the 22 children into two groups, each a mixture of children with and without speech problems. One group received encouragement and positive feedback, but the other was ridiculed for any speech problems, including non-existent ones. The children who were ridiculed naturally made no progress, and some of the orphans with no speech problems developed those very problems.

The study continued for six months and caused lasting, chronic psychological issues for some of the children. The study caused so much harm that some of the former subjects secured a monetary award from the University of Iowa in 2007 due to the harm they’d suffered.

8. The Milgram Obedience Experiment (1961). After the horrors of the Second World War, psychological researchers like Stanley Milgram wondered what made average citizens act like those in Germany who had committed atrocities. Milgram wanted to determine how far people would go in carrying out actions that might be detrimental to others if they were ordered or encouraged to do so by an authority figure. The Milgram experiment exposed the tension between obedience to the orders of an authority figure and personal conscience.

In each experiment, Milgram designated three people as either a teacher, learner, or experimenter. The “learner” was an actor planted by Milgram who stayed in a room separate from the experimenter and teacher. The teacher attempted to teach the learner small sets of word associations. When the learner got a pair wrong, the teacher was instructed to deliver an electric shock to the learner; in reality, no shock was given. The learner pretended to be in increasingly greater amounts of distress. When some teachers expressed hesitation about increasing the level of the shocks, the experimenter encouraged them to continue.

Many of the subjects (the teachers) experienced severe and lasting psychological distress. The Milgram experiment has become a byword for well-intentioned psychological experiments gone wrong.

8. David Reimer (1967–1977). When David Reimer was eight months old, his penis was seriously damaged during a failed circumcision. His parents contacted John Money, a professor of psychology and pediatrics at Johns Hopkins and a researcher in the development of gender. Because David had an identical twin brother, Money viewed the situation as a rare opportunity to conduct his own experiment into the nature of gender, and he advised Reimer’s parents to have David sexually reassigned as a girl. Money’s theory was that gender was a completely sociological construct, influenced primarily by nurture rather than nature. Money was catastrophically wrong.

Reimer never identified as female and developed a strong psychological attachment to being male. At age 14, Reimer’s parents told him the truth about his history, and he elected to return to a male identity. Although he later had surgery to reverse the initial sex reassignment, he suffered from profound depression related to his sex and gender issues and committed suicide in 2004. Money’s desire to test his controversial theory on a human being without consent cost someone his life.

7. Landis’ Facial Expressions Experiment (1924). In 1924, at the University of Minnesota, Carney Landis designed an experiment to investigate whether different people display similar facial expressions while experiencing common emotions.

Landis chose students as participants and exposed them to different situations intended to prompt emotional reactions. To elicit revulsion, he ordered the participants to behead a live rat. One-third of the participants refused, while the other two-thirds complied, at the cost of considerable trauma to themselves and to the rats. This unethical experiment is one of many reasons institutional review boards were created and why policies governing experiments on humans have changed so drastically.

6. The Aversion Project (1970s and 80s). During the apartheid years in South Africa, doctors in the South African military tried to “cure” homosexuality in conscripts by forcing them to undergo electroshock therapy and chemical castration. The military also forced gay conscripts to undergo sex-change operations. This was one segment of a secret military program, headed by Dr. Aubrey Levin, that sought to study and eliminate homosexuality in the military as recently as 1989. Apart from several cases of lesbian soldiers who were abused, most of the roughly 900 victims were very young male conscripts between 16 and 24 years old. Astoundingly, Dr. Levin relocated to Canada, where he worked until he was sent to prison for assaulting a patient.

5. Monkey Drug Trials (1969). The Monkey Drug Trials experiment was ostensibly meant to test the effects of illicit drugs on monkeys. Given that monkeys and humans have similar reactions to drugs, and that animals have long been part of medical experiments, on its face the experiment might not look too bad. It was actually horrific. Monkeys and rats learned to self-administer a range of drugs, including cocaine, amphetamines, codeine, morphine, and alcohol. The animals suffered horribly, tearing their fingers off, clawing away their fur, and in some cases breaking bones while attempting to break out of their cages. The experiment had no purpose other than to re-validate findings that had been confirmed many times before.

4. Little Albert (1920). John Watson, the founder of the psychological school of behaviorism, believed that behaviors were primarily learned. Anxious to test his hypothesis, he used an infant known as “Little Albert” as an experimental subject. For several months he exposed the child to a laboratory rat, which caused no fear response in the boy. Then, each time the child was exposed to the rat, Watson struck a steel bar with a hammer, scaring the little boy and causing a fear response. Because the appearance of the rat had become associated with the loud noise, Little Albert grew afraid of the rat. The fear was a condition that needed to be undone, but the boy left the facility before Watson could remedy things.

3. Harlow’s Pit of Despair (1970s). Harry Harlow, a psychologist at the University of Wisconsin-Madison, was a researcher in the field of maternal bonding. To investigate the effects of attachment on development, he took young monkeys that had already bonded with their mothers and isolated them. He kept them completely alone in a vertical chamber that prevented all contact with other monkeys. The monkeys developed no social skills and became completely unable to function as normal rhesus monkeys. These controversial psychology experiments were not only staggeringly cruel but revealed no data that wasn’t already known.

2. Learned Helplessness (1965). In 1965, Doctors Martin Seligman and Steven Maier investigated the concept of learned helplessness. Three sets of dogs were placed in harnesses. The first group were control subjects; nothing happened to them. Dogs in the second group received electric shocks, but they could stop the shocks by pressing a lever. Dogs in the third group were shocked as well, but their levers did not stop the shocks. Next, the psychologists placed the dogs in an open box they could easily jump out of. Even though they received shocks, the dogs from the third group didn’t leap out of the box. They had developed learned helplessness: an inability to take successful action to change a bad situation.

1. The Robbers Cave Experiment (1954). Although the Robbers Cave Experiment is much less disturbing than some of the others on this list, it’s still a good example of the need for informed consent. In 1954, Muzafer Sherif, a psychologist interested in group dynamics and conflict, brought a group of preteen boys to a summer camp. He divided them into two groups and engaged the boys in competitions. However, Sherif manipulated the outcomes of the contests, keeping the results close for each group. Then he gave the boys tasks to complete as a unified group, with everyone working together. The conflicts that had arisen when the boys were competing vanished when they worked as one large group.

The Stanford Prison Experiment

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Cara Lustik is a fact-checker and copywriter.

In 1971, psychologist Philip Zimbardo and his colleagues set out to create an experiment that looked at the impact of becoming a prisoner or prison guard. The Stanford Prison Experiment, also known as the Zimbardo Prison Experiment, went on to become one of the best-known (and most controversial) studies in psychology's history.

The study has long been a staple in textbooks, articles, psychology classes, and even movies, but recent criticisms have called the study's scientific merits and value into question.

Purpose of the Stanford Prison Experiment

Zimbardo was a former classmate of the psychologist Stanley Milgram. Milgram is best known for his famous obedience experiment.

Zimbardo was interested in expanding upon Milgram's research. He wanted to further investigate the impact of situational variables on human behavior.

The researchers wanted to know how the participants would react when placed in a simulated prison environment.

The researchers wondered if physically and psychologically healthy people who knew they were participating in an experiment would change their behavior in a prison-like setting.

Participants in the Stanford Prison Experiment

The researchers set up a mock prison in the basement of Stanford University's psychology building. They selected 24 undergraduate students to play the roles of both prisoners and guards.

The participants were chosen from a larger group of 70 volunteers because they had no criminal background, lacked psychological issues, and had no significant medical conditions. The volunteers agreed to participate for a one- to two-week period in exchange for $15 a day.

The Setting and Procedures

The simulated prison included three six-by-nine-foot prison cells. Each cell held three prisoners and included three cots.

Other rooms across from the cells were utilized for the jail guards and warden. One tiny space was designated as the solitary confinement room, and yet another small room served as the prison yard.

The 24 volunteers were then randomly assigned to either the prisoner group or the guard group. Prisoners were to remain in the mock prison 24 hours a day during the study.

Guards were assigned to work in three-man teams for eight-hour shifts. After each shift, guards were allowed to return to their homes until their next shift.

Researchers were able to observe the behavior of the prisoners and guards using hidden cameras and microphones.

Results of the Stanford Prison Experiment

So what happened in the Zimbardo experiment? While the experiment was originally slated to last 14 days, it had to be stopped after just six due to what was happening to the student participants. The guards became abusive, and the prisoners began to show signs of extreme stress and anxiety.

Some of these included:

  • While the prisoners and guards were allowed to interact in any way they wanted, the interactions were hostile or even dehumanizing.
  • The guards began to behave in ways that were aggressive and abusive toward the prisoners while the prisoners became passive and depressed.
  • Five of the prisoners began to experience severe negative emotions, including crying and acute anxiety, and had to be released from the study early.

Even the researchers themselves began to lose sight of the reality of the situation. Zimbardo, who acted as the prison superintendent, overlooked the abusive behavior of the guards until graduate student Christina Maslach voiced objections to the conditions in the simulated prison and the morality of continuing the experiment.

One possible explanation for the results of this experiment is the idea of deindividuation, which states that being part of a large group can make us more likely to perform behaviors we would otherwise not do on our own.

The experiment became famous and was widely cited in textbooks and other publications. According to Zimbardo and his colleagues, the Stanford Prison Experiment demonstrated the powerful role that the situation can play in human behavior.

Because the guards were placed in a position of power, they began to behave in ways they would not usually act in their everyday lives or other situations. The prisoners, placed in a situation where they had no real control, became submissive and depressed.

In 2011, the Stanford Alumni Magazine featured a retrospective of the Stanford Prison Experiment in honor of the experiment’s 40th anniversary. The article contained interviews with several people involved, including Zimbardo and other researchers as well as some of the participants in the study.

Richard Yacco, one of the prisoners in the experiment, suggested that the experiment demonstrated the power that societal roles and expectations can play in a person's behavior.

In 2015, the experiment became the topic of a feature film titled The Stanford Prison Experiment that dramatized the events of the 1971 study.

Criticisms of the Stanford Prison Experiment

In the years since the experiment was conducted, there have been a number of critiques of the study. Some of these include:

Ethical Issues

The Stanford Prison Experiment is frequently cited as an example of unethical research. The experiment could not be replicated by researchers today because it fails to meet the standards established by numerous ethical codes, including the Ethics Code of the American Psychological Association.

Lack of Generalizability

Other critics suggest that the study lacks generalizability due to a variety of factors. The unrepresentative sample of participants (mostly white and middle-class males) makes it difficult to apply the results to a wider population.

Lack of Realism

The Zimbardo Prison Experiment is also criticized for its lack of ecological validity. Ecological validity refers to the degree of realism with which a simulated experimental setup matches the real-world situation it seeks to emulate.

While the researchers did their best to recreate a prison setting, it is simply not possible to perfectly mimic all of the environmental and situational variables of prison life. Because there may have been factors related to the setting and situation that influenced how the participants behaved, it may not really represent what might happen outside of the lab.

Recent Criticisms

More recent examination of the experiment's archives and interviews with participants have revealed major issues with the research's design, methods, and procedures that call the study's validity, value, and even authenticity into question.

These reports, including examinations of the study's records and new interviews with participants, have also cast doubt on some of the key findings and assumptions about the study.

Among the issues described:

  • One participant, for example, has suggested that he faked a breakdown so that he could leave the experiment because he was worried about failing his classes.
  • Other participants also reported altering their behavior in a way designed to "help" the experiment.
  • Evidence also suggests that the experimenters encouraged the behavior of the guards and played a role in fostering the abusive actions of the guards.

In 2019, the journal American Psychologist published an article debunking the famed experiment, detailing its lack of scientific merit, and concluding that the Stanford Prison Experiment was "an incredibly flawed study that should have died an early death."

In a statement posted on the experiment's official website, Zimbardo maintains that these criticisms do not undermine the main conclusion of the study—that situational forces can alter individual actions both in positive and negative ways.

Why was Zimbardo's experiment unethical?

Zimbardo's experiment was unethical due to a lack of fully informed consent, abuse of participants, and lack of appropriate debriefings. More recent findings suggest there were other significant ethical issues that compromise the experiment's scientific standing, including the fact that experimenters may have encouraged abusive behaviors.

The Stanford Prison Experiment is well known both in and out of the field of psychology. While the study has long been criticized for many reasons, more recent criticisms of the study's procedures shine a brighter light on the experiment's scientific shortcomings.

Stanford University Libraries. The Stanford Prison Experiment: 40 years later.

Stanford Prison Experiment. 2. Setting up.

Zimbardo P, Haney C, Banks WC, Jaffe D. The Stanford Prison Experiment: A simulation study of the psychology of imprisonment. Stanford University, Stanford Digital Repository; 1971.

Sommers T. An interview with Philip Zimbardo. The Believer.

Ratnesar R. The menace within. Stanford Magazine.

Horn S. Landmark Stanford Prison Experiment criticized as a sham. Prison Legal News.

Bartels JM. The Stanford Prison Experiment in introductory psychology textbooks: A content analysis. Psychology Learning & Teaching. 2015;14(1):36-50. doi:10.1177/1475725714568007

American Psychological Association. Ecological validity.

Blum B. The lifespan of a lie. Medium.

Le Texier T. Debunking the Stanford Prison Experiment. American Psychologist. 2019;74(7):823-839. doi:10.1037/amp0000401

Stanford Prison Experiment. Philip Zimbardo's response to recent criticisms of the Stanford Prison Experiment.

Three Psychology Experiments That Pushed the Limit of Ethics

By Ruairi J Mackenzie

Last month, a team of volunteers emerged from 40 days of isolation in the Lombrives cave in south-west France. This ordeal was part of a scientific study called the Deep Time experiment. The volunteers were tasked with going without sunlight, phones or clocks for the duration of the experiment. The study aimed to understand how the human brain would be affected as it lost its grasp on time and space. Putting human subjects, even volunteers, through this might seem ethically dubious, but pales in comparison to the questions raised by these three studies.

The ultimate isolation: Hebb’s “pathological boredom”

In 2008, the BBC attempted to re-enact Hebb's experiments. Credit: BBC Horizon

Hebb and his team confined student volunteers to small, monotonous cubicles and wanted to see how this environment, one that wasn’t entirely devoid of sensory information but that was incredibly monotonous and boring, would affect them. Initially, the participants, who were all university students, thought about their results, or the papers they had due. But after a while, their minds instead drifted onto memories from their childhood. Eventually, most of the participants reported that they became unable to think about anything for any length of time. These details were reported by Hebb’s collaborator Woodburn Heron in an article published in Scientific American.

The subjects also showed impaired mental performance, registering lower results on tests of mental arithmetic and word association. Most strikingly of all, the subjects reported that, despite the near-total absence of sensory stimulation, they experienced an array of hallucinations, including one participant who saw endless images of babies. These hallucinations, which Heron compared to the effects of the hallucinogenic drug mescaline, grew in complexity over time – one participant eventually reported “a procession of squirrels with sacks over their shoulders marching ‘purposefully’ across the visual field.” The hallucinations were accompanied by sounds and even sensations across the volunteers’ bodies. Summing up these weird and distressing effects, Heron concluded that “a changing sensory environment seems essential for human beings.”

How malleable is our willpower?

The Facebook study: how contagious are emotions?

The study in question was Facebook’s 2014 “emotional contagion” experiment, in which researchers altered the balance of positive and negative posts in hundreds of thousands of users’ news feeds to test whether emotions spread between users. The study sparked huge controversy when it emerged that the only consent Facebook had sought was users signing up for the platform. Facebook’s Data Use Policy, the company said, gave it all the permission it needed to play around with users’ feeds. Whilst the findings were interesting, the ethically dubious way in which the study was conducted makes for uncomfortable reading. The ethical quandaries involved, and the reasons why the study was able to bypass certain regulations, were summed up by bioethicist Michelle N. Meyer in an article for WIRED.

The Psychology Behind Unethical Behavior

  • Merete Wedell-Wedellsborg

Understanding it can help keep your worst impulses in check.

Leaders are often faced with ethical conundrums. So how can they determine when they’re inching toward dangerous territory? There are three main psychological dynamics that lead to crossing moral lines. First, there’s omnipotence: when someone feels so aggrandized and entitled that they believe the rules of decent behavior don’t apply to them. Second, consider cultural numbness: when others play along and gradually begin to accept and embody deviant norms. Finally, when people don’t speak up because they are thinking of more immediate rewards, we see justified neglect. There are several strategies leaders can use to counter these dynamics, including relying on a group of trusted peers to keep you in check, keeping a list of things you will never do for profit, and looking out for ways you explain away borderline actions.

On a warm evening after a strategy off-site, a team of executives arrives at a well-known local restaurant. The group is looking forward to having dinner together, but the CEO is not happy about the table and demands a change. “This isn’t the one that my assistant usually reserves for me,” he says. A young waiter quickly finds the manager, who explains that there are no other tables available.


  • Merete Wedell-Wedellsborg runs her own business psychology practice with clients in the financial, pharmaceutical, and defense sectors, as well as family offices. Merete holds a Ph.D. in Business Economics from Copenhagen Business School and an M.A. in Psychology from University of Copenhagen (Clinical Psychology). She is the author of the book Battle Mind: How to Navigate in Chaos and Perform Under Pressure.


These 1950s experiments showed us the trauma of parent-child separation. Now experts say they’re too unethical to repeat—even on monkeys.

A childhood without affection can be devastating, even if basic needs are met.

By Eleanor Cummins | Published Jun 22, 2018 7:00 PM EDT


John Gluck’s excitement about studying parent-child separation quickly soured. He’d been thrilled to arrive at the University of Wisconsin at Madison in the late 1960s, his spot in the lab of renowned behavioral psychologist Harry Harlow secure. Harlow had cemented his legacy more than a decade earlier when his experiments showed the devastating effects of broken parent-child bonds in rhesus monkeys. As a graduate student researcher, Gluck would use Harlow’s monkey colony to study the impact of such disruption on intellectual ability.

Gluck found academic success, and stayed in touch with Harlow long after graduation. His mentor even sent Gluck monkeys to use in his own laboratory. But in the three years Gluck spent with Harlow—and the subsequent three decades he spent as a leading animal researcher in his own right—his concern for the well-being of his former test subjects overshadowed his enthusiasm for animal research.

Separating parent and child, he’d decided, produced effects too cruel to inflict on monkeys.

Since the 1990s, Gluck’s focus has been on bioethics; he’s written research papers and even a book about the ramifications of conducting research on primates. Along the way, he has argued that continued lab experiments testing the effects of separation on monkeys are unethical. Many of his peers, from biology to psychology, agree. And while the rationale for discontinuing such testing has many factors, one reason stands out. The fundamental questions we had about parent-child separation, Gluck says, were answered long ago.


The first insights into attachment theory began with careful observations on the part of clinicians.

Starting in the 1910s and peaking in the 1930s, doctors and psychologists actively advised parents against hugging, kissing, or cuddling children on the assumption that such fawning attention would condition children to behave in a manner that was weak, codependent, and unbecoming. This theory of “behaviorism” was derived from work like Ivan Pavlov’s classical conditioning experiments on dogs and the research of Harvard psychologist B.F. Skinner, who believed free will to be an illusion. Applied in the context of the family unit, this research seemed to suggest that forceful detachment on the part of ma and pa was an essential ingredient in creating a strong, independent future adult. Parents were simply there to provide structure and essentials like food.

But after the end of World War II, doctors began to push back. In 1946, Dr. Benjamin Spock (no relation to Star Trek’s Mr. Spock) authored the international bestseller Baby and Child Care, which sold 50 million copies in Spock’s lifetime. The book, which was based on his professional observation of parent-child relationships, advised against the behaviorist theories of the day. Instead, Spock implored parents to see their children as individuals in need of customized care—and plenty of physical affection.

At the same time, the British psychiatrist John Bowlby was commissioned to write the World Health Organization’s Maternal Care and Mental Health report. Bowlby had gained renown before the war for his systematic study of the effects of institutionalization on children, from long-term hospital stays to childhoods confined to orphanages.

Published in 1951, Bowlby’s lengthy two-part document focused on the mental health of homeless children. In it, he brought together anecdotal reports and descriptive statistics to paint a portrait of the disastrous effects of the separation of children from their caretakers and the consequences of “deprivation” on both the body and mind. “Partial deprivation brings in its train acute anxiety, excessive need for love, powerful feelings of revenge, and, arising from these last, guilt and depression,” Bowlby wrote. Like Spock’s work, this research countered behaviorist theories that structure and sustenance were all a child needed. Orphans were certainly fed, but in most cases they lacked love. The consequences, Bowlby argued, were dire—and long-lasting.

The evidence of the near-sanctity of parent-child attachment was growing thanks to the careful observation of experts like Spock and Bowlby. Still, many experts felt one crucial piece of evidence was missing: experimental data. Since the Enlightenment, scientists have worked to refine their methodology in the hopes of producing the most robust observations about the natural world. Randomized, controlled trials were developed in the late 1800s, and in the 20th century they came to be seen as the “gold standard” for research—a conviction that more or less continues to this day.

While Bowlby had clinically derived data, he knew that to advance his ideas in the wider world he would need data from a lab. But by 1947, the scientific establishment required informed consent for research participants (though notable cases like the Tuskegee syphilis study violated such rules into at least the 1970s). As a result, no one would condone forcibly separating parents and children for research purposes. Fortunately, Bowlby’s transatlantic correspondent, Harry Harlow, had another idea.


Over the course of his career, Harlow conducted countless studies of primate behavior and published more than 300 research papers and books. Unsurprisingly, in a 2002 ranking of the impact of 20th-century psychologists, the American Psychological Association named him the 26th most cited researcher of the era, below B.F. Skinner (1) but above Noam Chomsky (38). But the ethically fraught experiments that cemented his status in Psychology 101 textbooks for good began in earnest only in the 1950s.

Around the time Bowlby published the WHO report, Harlow began to push the psychological limits of monkeys in myriad ways—all in the name of science. He surgically altered their brains or beamed radiation through their skulls to cause lesions, and then watched the neurological effects, according to a 1997 paper by Gluck that spans history, biography, and ethics. He forced some animals to live in “deep, wedge-shaped, stainless steel chambers… graphically called the ‘pit of despair'” in order to study the effect of such solitary confinement on the mind, Gluck wrote. But Harlow’s most well-known study, begun in the 1950s and carefully documented in pictures and videos made available to the public, centered around milk.

To test the truth of the behaviorist’s claims that things like food mattered more than affection, Harlow set up an experiment that allowed baby monkeys, forcibly separated from their mothers at birth, to choose between two fake surrogates. One known as the “iron maiden” was made only of wire, but had bottles full of milk protruding from its metal chest. The other was covered in a soft cloth, but entirely devoid of food. If behaviorists were right, babies should choose the surrogate who offered them food over the surrogate who offered them nothing but comfort.

As Spock or Bowlby may have predicted, this was far from the case.

“Results demonstrated that the monkeys overwhelmingly preferred to maintain physical contact with the soft mothers,” Gluck wrote. “It also was shown that the monkeys seemed to derive a form of emotional security by the very presence of the soft surrogate that lasted for years, and they ‘screamed their distress’ in ‘abject terror’ when the surrogate mothers were removed from them.” They visited the iron maiden when they were too hungry to avoid her metallic frame any longer.

As anyone in behavioral psychology will tell you, Harlow’s monkey studies are still considered foundational for the field of parent-child research to this day. But his work is not without controversy. In fact, it never has been. Even when Harlow was conducting his research, some of his peers criticized the experiments, which they considered cruel to the animals and degrading to the scientists who executed them. The chorus of dissenting voices is not new; it’s merely grown.

Animal research today is more carefully regulated by individual institutions, professional organizations like the American Psychological Association and legislation like the Federal Animal Welfare Act. Many activists and scholars argue research on primates should end entirely and that experiments like Harlow’s should never be repeated. “Academics should be on the front lines of condemning such work as well, for they represent a betrayal of the basic notions of dignity and decency we should all be upholding in our research, especially in the case of vulnerable populations in our samples—such as helpless animals or young children,” psychologist Azadeh Aalai wrote in Psychology Today .

Animal studies have not disappeared. Research on attachment in monkeys continues at the University of Wisconsin at Madison. But animal studies have declined. New methods—or, depending on how you look at it, old methods—have filled the void. Natural experiments and epidemiological studies, similar to the kind Bowlby employed, have added new insight into the importance of “tender age” attachment.

Romanian orphanages, which came to international attention after the fall of Nicolae Ceaușescu’s regime, have served as such a study site. The facilities, which have been described as “slaughterhouses of the soul,” have historically had great disparities between the number of children and the number of caregivers (25 or more kids to one adult), meaning few if any children received the physical or emotional care they needed. Many of the children who were raised in these environments have exhibited mental health and behavioral disorders as a result. The deprivation has even had a physical effect, with neurological research showing a dramatic reduction in the literal size of their brains and low levels of brain activity as measured by electroencephalography, or EEG, machines.

Similarly, epidemiological research has tracked the trajectories of children in the foster care system in the United States and parts of Europe to see how they differ, on average, from youths in a more traditional home environment. These studies have shown that the risks of mental disorders, suicidal ideation and attempts, and obesity are elevated among these children. Many of these health outcomes appear to be even worse among children in an institutional setting, like a Romanian orphanage, than among children placed in foster care, which typically offers kids more individualized attention.


Scientists rarely say no to more data. After all, the more observations and perspectives we have, the better we understand a given topic. But alternatives to animal models are under development and epidemiological methodologies are only growing stronger. As a result, we may be able to set some kinds of data—data collected at the expense of humans or animals—aside.

When it comes to lab experiments on parent-child attachment, we may know everything we need to know—and have for more than 60 years. Gluck believes that testing attachment theory at the expense of primates should have ended with Harry Harlow. And he continues to hope people will come to see the irony inherent in harming animals to prove, scientifically, that human children deserve compassion.

“Whether it is called mother-infant separation, social deprivation, or the more pleasant sounding ‘nursery rearing,'” Gluck wrote in a New York Times op-ed in 2016, “these manipulations cause such drastic damage across many behavioral and physiological systems that the work should not be repeated.”



How the Classics Changed Research Ethics

Some of history’s most controversial psychology studies helped drive extensive protections for human research participants. Some say those reforms went too far.



Photo above: In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer “guards” became abusive to the “prisoners,” famously leading one prisoner into a fit of sobbing. Photo credit:   PrisonExp.org

Nearly 60 years have passed since Stanley Milgram’s infamous “shock box” study sparked an international focus on ethics in psychological research. Countless historians and psychology instructors assert that Milgram’s experiments—along with studies like the Robbers Cave and Stanford prison experiments—could never occur today; ethics gatekeepers would swiftly bar such studies from proceeding, recognizing the potential harms to the participants. 

But the reforms that followed some of the 20th century’s most alarming biomedical and behavioral studies have overreached, many social and behavioral scientists complain. Studies that pose no peril to participants confront the same standards as experimental drug treatments or surgeries, they contend. The institutional review boards (IRBs) charged with protecting research participants fail to understand minimal risk, they say. Researchers complain they waste time addressing IRB concerns that have nothing to do with participant safety. 

Several factors contribute to this conflict, ethicists say. Researchers and IRBs operate in a climate of misunderstanding, confusing regulations, and a systemic lack of ethics training, said APS Fellow Celia Fisher, a Fordham University professor and research ethicist, in an interview with the Observer . 

“In my view, IRBs are trying to do their best and investigators are trying to do their best,” Fisher said. “It’s more that we really have to enhance communication and training on both sides.” 

‘Sins’ from the past  

Modern human-subjects protections date back to the 1947 Nuremberg Code, the response to Nazi medical experiments on concentration-camp internees. Those ethical principles, which no nation or organization has officially accepted as law or official ethics guidelines, emphasized that a study’s benefits should outweigh the risks and that human subjects should be fully informed about the research and participate voluntarily.  

See the 2014 Observer cover story by APS Fellow Carol A. Tavris, “ Teaching Contentious Classics ,” for more about these controversial studies and how to discuss them with students.

But the discovery of U.S.-government-sponsored research abuses, including the Tuskegee syphilis experiment on African American men and radiation experiments on humans, accelerated regulatory initiatives. The abuses investigators uncovered in the 1970s, 80s, and 90s—decades after the experiments had occurred—heightened policymakers’ concerns “about what else might still be going on,” George Mason University historian Zachary M. Schrag explained in an interview. These concerns generated restrictions not only on biomedical research but on social and behavioral studies that pose a minute risk of harm.  

“The sins of researchers from the 1940s led to new regulations in the 1990s, even though it was not at all clear that those kinds of activities were still going on in any way,” said Schrag, who chronicled the rise of IRBs in his book  Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009.  

Accompanying the medical research scandals were controversial psychological studies that provided fodder for textbooks, historical tomes, and movies.  

  • In the early 1950s, social psychologist Muzafer Sherif and his colleagues used a Boy Scout camp called Robbers Cave to study intergroup hostility. They randomly assigned preadolescent boys to one of two groups and concocted a series of competitive activities that quickly sparked conflict. They later set up a situation that compelled the boys to overcome their differences and work together. The study provided insights into prejudice and conflict resolution but generated criticism because the children weren’t told they were part of an experiment. 
  • In 1961, Milgram began his studies on obedience to authority by directing participants to administer increasing levels of electric shock to another person (a confederate). To Milgram’s surprise, more than 65% of the participants delivered the full voltage of shock (which unbeknownst to them was fake), even though many were distressed about doing so. Milgram was widely criticized for the manipulation and deception he employed to carry out his experiments. 
  • In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer “guards” became abusive to the “prisoners,” famously leading one prisoner into a fit of sobbing. 

Western policymakers created a variety of safeguards in the wake of these psychological studies and other medical research. Among them was the Declaration of Helsinki, an ethical guide for human-subjects research developed by the Europe-based World Medical Association. The U.S. Congress passed the National Research Act of 1974, which created a commission to oversee participant protections in biomedical and behavioral research. And in the 90s, federal agencies adopted the Federal Policy for the Protection of Human Subjects (better known as the Common Rule), a code of ethics applied to any government-funded research. IRBs review studies through the lens of the Common Rule. After that, social science research, including studies in social psychology, anthropology, sociology, and political science, began facing widespread institutional review (Schrag, 2010).  

Sailing Through Review

Psychological scientists and other researchers who have served on institutional review boards provide these tips to help researchers get their studies reviewed swiftly.  

  • Determine whether your study qualifies for minimal-risk exemption from review. Online tools are even in development to help researchers self-determine exempt status (Ben-Shahar, 2019; Schneider & McCutcheon, 2018). 
  • If you’re not clear about your exemption, research the regulations to understand how they apply to your planned study. Show you’ve done your homework and have developed a protocol that is safe for your participants.  
  • Consult with stakeholders. Look for advocacy groups and representatives from the population you plan to study. Ask them what they regard as fair compensation for participation. Get their feedback about your questionnaires and consent forms to make sure they’re understandable. These steps help you better show your IRB that the population you’re studying will find the protections adequate (Fisher, 2022). 
  • Speak to IRB members or staff before submitting the protocol. Ask them their specific concerns about your study, and get guidance on writing up the protocol to address those concerns. Also ask them about expected turnaround times so you can plan your submission in time to meet any deadlines associated with your study (e.g., grant application deadlines).  

Ben-Shahar, O. (2019, December 2). Reforming the IRB in experimental fashion. The Regulatory Review. University of Pennsylvania. https://www.theregreview.org/2019/12/02/ben-shahar-reforming-irb-experimental-fashion/

Fisher, C. B. (2022). Decoding the ethics code: A practical guide for psychologists (5th ed.). Sage Publications.

Schneider, S. L., & McCutcheon, J. A. (2018). Proof of concept: Use of a wizard for self-determination of IRB exempt status. Federal Demonstration Partnership. http://thefdp.org/default/assets/File/Documents/wizard_pilot_final_rpt.pdf

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inducing physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” Fisher explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support?  

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science  paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009).   

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

“You can have IRB approval and later decide to make a nominal change to the protocol (a frequent one is to add a new assistant to the project or to increase the sample size),” he wrote. “It can take over a month to get approval. In the meantime, nothing can move forward and the students sit around waiting.” 

Not all researchers view institutional review as a roadblock. Psychological scientist Nathaniel Herr, who runs American University’s Interpersonal Emotion Lab and has served on the school’s IRB, says the board effectively collaborated with researchers to ensure that study designs were safe and that participant privacy was appropriately protected. 

“If the IRB that I operated on saw an issue, they shared suggestions we could make to overcome that issue,” Herr said. “It was about making the research go forward. I never saw a project get shut down. It might have required a significant change, but it was often about confidentiality and it’s something that helps everybody feel better about the fact we weren’t abusing our privilege as researchers. I really believe it [the review process] makes the projects better.” 

Some universities—including Fordham University, Yale University, and The University of Chicago—even have social and behavioral research IRBs whose members include experts optimally equipped to judge the safety of a psychological study, Fisher noted. 

Training gaps  

Institutional review is beset by a lack of ethics training in research programs, Fisher believes. While students in professional psychology programs take accreditation-required ethics courses in their doctoral programs, psychologists in other fields have no such requirement. In these programs, ethics training is often limited to an online program that provides, at best, a perfunctory overview of federal regulations. 

“It gives you the fundamental information, but it has nothing to do with our real-world deliberations about protecting participants,” she said. 

Additionally, harm to a participant is difficult to predict. As sociologist Martin Tolich of University of Otago in New Zealand wrote, the Stanford prison study had been IRB-approved. 

“Prediction of harm with any certainty is not necessarily possible, and should not be the aim of ethics review,” he argued. “A more measured goal is the minimization of risk, not its eradication” (Tolich, 2014). 

Fisher notes that scientists aren’t trained to recognize and respond to adverse events when they occur during a study. 

“To be trained in research ethics requires not just knowing you have to obtain informed consent,” she said. “It’s being able to apply ethical reasoning to each unique situation. If you don’t have the training to do that, then of course you’re just following the IRB rules, which are very impersonal and really out of sync with the true nature of what we’re doing.” 

Researchers also raise concerns that, in many cases, the regulatory process harms vulnerable populations rather than safeguarding them. Fisher and psychological scientist Brian Mustanski of the University of Illinois at Chicago wrote in 2016, for example, that the review panels may be hindering HIV prevention strategies by requiring researchers to get parental consent before including gay and bisexual adolescents in their studies. Under that requirement, youth who are not out to their families get excluded. Boards apply those restrictions even in states permitting minors to get HIV testing and preventive medication without parental permission—and even though federal rules allow IRBs to waive parental consent in research settings (Mustanski & Fisher, 2016). 

IRBs also place counterproductive safety limits on suicide and self-harm research, watching for any sign that a participant might need to be removed from a clinical study and hospitalized. 

“The problem is we know that hospitalization is not the panacea,” Fisher said. “It stops suicidality for the moment, but actually the highest-risk period is 3 months after the first hospitalization for a suicide attempt. Some of the IRBs fail to consider that a non-hospitalization intervention that’s being tested is just as safe as hospitalization. It’s a difficult problem, and I don’t blame them. But if we have to take people out of a study as soon as they reach a certain level of suicidality, then we’ll never find effective treatment.” 

Communication gaps  

Supporters of the institutional review process say researchers tend to approach the IRB process too defensively, overlooking the board’s good intentions.  

“Obtaining clarification or requesting further materials serve to verify that protections are in place,” a team of institutional reviewers wrote in an editorial for  Psi Chi Journal of Psychological Research . “If researchers assume that IRBs are collaborators in the research process, then these requests can be seen as prompts rather than as admonitions” (Domenech Rodriguez et al., 2017). 

Fisher agrees that researchers’ attitudes play a considerable role in the conflicts that arise over ethics review. She recommends researchers develop each protocol with review-board questions in mind (see sidebar). 

“For many researchers, there’s a disdain for IRBs,” she said. “IRBs are trying their hardest. They don’t want to reject research. It’s just that they’re not informed. And sometimes if behavioral scientists or social scientists are disdainful of their IRBs, they’re not communicating with them.” 

Some researchers are building evidence to help IRBs understand the level of risk associated with certain types of psychological studies.  

  • In a study involving more than 500 undergraduate students, for example, psychological scientists at the University of New Mexico found that the participants were less upset than expected by questionnaires about sex, trauma, and other sensitive topics. This finding, the researchers reported in Psychological Science, challenges the usual IRB assumption about the stress that surveys on sex and trauma might inflict on participants (Yeater et al., 2012). 
  • A study involving undergraduate women indicated that participants who had experienced child abuse, although more likely than their peers to report distress from recalling the past as part of a study, were also more likely to say that their involvement in the research helped them gain insight into themselves and hoped it would help others (Decker et al., 2011). 
  • A multidisciplinary team, including APS Fellow R. Michael Furr of Wake Forest University, found that adolescent psychiatric patients showed a drop in suicide ideation after being questioned regularly about their suicidal thoughts over the course of 2 years. This countered concerns that asking about suicidal ideation would trigger an increase in such thinking (Mathias et al., 2012). 
  • A meta-analysis of more than 70 participant samples—totaling nearly 74,000 individuals—indicated that people may experience only moderate distress when discussing past traumas in research studies. They also generally might find their participation to be a positive experience, according to the findings (Jaffe et al., 2015). 

The takeaways  

So, are the historians correct? Would any of these classic experiments survive IRB scrutiny today? 

Reexaminations of those studies make the question arguably moot. Recent revelations about some of these studies suggest that scientific integrity concerns may taint the legacy of those findings as much as their impact on participants did (Le Texier, 2019; Resnick, 2018; Perry, 2018).  

Also, not every aspect of the controversial classics is taboo in today’s regulatory environment. Scientists have won IRB approval to conceptually replicate both the Milgram and Stanford prison experiments (Burger, 2009; Reicher & Haslam, 2006). They simply modified the protocols to avert any potential harm to the participants. (Scholars, including Zimbardo himself, have questioned the robustness of those replication findings [Elms, 2009; Miller, 2009; Zimbardo, 2006].) 

Many scholars believe there are clear and valuable lessons from the classic experiments. Milgram’s work, for instance, can inject clarity into pressing societal issues such as political polarization and police brutality . Ethics training and monitoring simply need to include those lessons learned, they say. 

“We should absolutely be talking about what Milgram did right, what he did wrong,” Schrag said. “We can talk about what we can learn from that experience and how we might answer important questions while respecting the rights of volunteers who participate in psychological experiments.”  


References   

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1–11. https://doi.org/10.1037/a0010932

Ceci, S. J., & Bruck, M. (2009). Do IRBs pass the minimal harm test? Perspectives on Psychological Science, 4(1), 28–29. https://doi.org/10.1111/j.1745-6924.2009.01084.x

Decker, S. E., Naugle, A. E., Carter-Visscher, R., Bell, K., & Seifer, A. (2011). Ethical issues in research on sensitive topics: Participants’ experiences of stress and benefit. Journal of Empirical Research on Human Research Ethics: An International Journal, 6(3), 55–64. https://doi.org/10.1525/jer.2011.6.3.55

Domenech Rodriguez, M. M., Corralejo, S. M., Vouvalis, N., & Mirly, A. K. (2017). Institutional review board: Ally not adversary. Psi Chi Journal of Psychological Research, 22(2), 76–84. https://doi.org/10.24839/2325-7342.JN22.2.76

Elms, A. C. (2009). Obedience lite. American Psychologist, 64(1), 32–36. https://doi.org/10.1037/a0014473

Fisher, C. B., True, G., Alexander, L., & Fried, A. L. (2009). Measures of mentoring, department climate, and graduate student preparedness in the responsible conduct of psychological research. Ethics & Behavior, 19(3), 227–252. https://doi.org/10.1080/10508420902886726

Jaffe, A. E., DiLillo, D., Hoffman, L., Haikalis, M., & Dykstra, R. E. (2015). Does it hurt to ask? A meta-analysis of participant reactions to trauma research. Clinical Psychology Review, 40, 40–56. https://doi.org/10.1016/j.cpr.2015.05.004

Le Texier, T. (2019). Debunking the Stanford Prison experiment. American Psychologist, 74(7), 823–839. http://dx.doi.org/10.1037/amp0000401

Mathias, C. W., Furr, R. M., Sheftall, A. H., Hill-Kapturczak, N., Crum, P., & Dougherty, D. M. (2012). What’s the harm in asking about suicide ideation? Suicide and Life-Threatening Behavior, 42(3), 341–351. https://doi.org/10.1111/j.1943-278X.2012.0095.x

Miller, A. G. (2009). Reflections on “Replicating Milgram” (Burger, 2009). American Psychologist, 64(1), 20–27. https://doi.org/10.1037/a0014407

Mustanski, B., & Fisher, C. B. (2016). HIV rates are increasing in gay/bisexual teens: IRB barriers to research must be resolved to bend the curve. American Journal of Preventive Medicine, 51(2), 249–252. https://doi.org/10.1016/j.amepre.2016.02.026

Perry, G. (2018). The lost boys: Inside Muzafer Sherif’s Robbers Cave experiment. Scribe Publications.

Reicher, S., & Haslam, S. A. (2006). Rethinking the psychology of tyranny: The BBC prison study. British Journal of Social Psychology, 45, 1–40. https://doi.org/10.1348/014466605X48998

Resnick, B. (2018, June 13). The Stanford prison experiment was massively influential. We just learned it was a fraud. Vox. https://www.vox.com/2018/6/13/17449118/stanford-prison-experiment-fraud-psychology-replication

Schrag, Z. M. (2010). Ethical imperialism: Institutional review boards and the social sciences, 1965–2009. Johns Hopkins University Press.

Tolich, M. (2014). What can Milgram and Zimbardo teach ethics committees and qualitative researchers about minimal harm? Research Ethics, 10(2), 86–96. https://doi.org/10.1177/1747016114523771

Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23(7), 780–787. https://doi.org/10.1177/0956797611435131

Zimbardo, P. G. (2006). On rethinking the psychology of tyranny: The BBC prison study. British Journal of Social Psychology, 45, 47–53. https://doi.org/10.1348/014466605X81720


About the Author

Scott Sleek is a freelance writer in Silver Spring, Maryland, and the former director of news and information at APS.




Harvard professor who studies dishonesty is accused of falsifying data


Juliana Kim


Francesca Gino has been teaching at Harvard Business School for 13 years. Credit: Maddie Meyer/Getty Images

Francesca Gino, a prominent professor at Harvard Business School known for researching dishonesty and unethical behavior, has been accused of submitting work that contained falsified results.

Gino has authored dozens of captivating studies in the field of behavioral science, consulting for some of the world's biggest companies, like Goldman Sachs and Google, and dispensing advice through news outlets like The New York Times, The Wall Street Journal and even NPR.

But over the past two weeks, several people, including a colleague, came forward with claims that Gino tampered with data in at least four papers.


Gino is currently on administrative leave. Harvard Business School declined to comment on when that decision was made as well as the allegations in general.

In a statement shared on LinkedIn , the professor said she was aware of the claims but did not deny or admit to any wrongdoing.

"As I continue to evaluate these allegations and assess my options, I am limited into what I can say publicly," Gino wrote on Saturday. "I want to assure you that I take them seriously and they will be addressed."

The scandal was first reported by The Chronicle of Higher Education earlier this month. According to the news outlet, over the past year, Harvard had been investigating a series of papers involving Gino.


The university found that in a 2012 paper, it appeared someone had added and altered figures in its database, Max H. Bazerman, a Harvard Business School professor who collaborated with Gino in the past, told The Chronicle.

The study itself looked at whether honesty in tax and insurance paperwork differed between participants who were asked to sign truthfulness declarations at the top of the page versus at the bottom. The Proceedings of the National Academy of Sciences , which had published the research , has retracted it.

Shortly after the story, DataColada, a group of three investigators, came forward with similar accusations. After examining a number of Gino's works, the team said they found evidence of fraud spanning more than a decade, most recently in 2020.


"Specifically, we wrote a report about four studies for which we had accumulated the strongest evidence of fraud. We believe that many more Gino-authored papers contain fake data. Perhaps dozens," DataColada wrote.

The group said they shared their concerns with Harvard Business School in 2021.

Gino has contributed to more than a hundred academic articles on topics such as entrepreneurial success and promoting trust in the workforce, and just last year she published a study titled "Case Study: What's the Right Career Move After a Public Failure?"


Introduction: Case Studies in the Ethics of Mental Health Research

Joseph Millum

Clinical Center Department of Bioethics/Fogarty International Center, National Institutes of Health, Bethesda, MD

This collection presents six case studies on the ethics of mental health research, written by scientific researchers and ethicists from around the world. We publish them here as a resource for teachers of research ethics and as a contribution to several ongoing ethical debates. Each consists of a description of a research study that was proposed or carried out and an in-depth analysis of the ethics of the study.

Building Global Capacity in Mental Health Research

According to the World Health Organization (WHO), there are more than 450 million people with mental, neurological, or behavioral problems worldwide ( WHO, 2005a ). Mental health problems are estimated to account for 13% of the global burden of disease, principally from unipolar and bipolar depression, alcohol and substance-use disorders, schizophrenia, and dementia. Nevertheless, in many countries, mental health is accorded a low priority; for example, a 2005 WHO analysis found that nearly a third of low-income countries that reported a mental health budget spent less than 1% of their total health budget on mental health ( WHO, 2005b ).

Despite the high burden of disease and some partially effective treatments that can be implemented in countries with weaker healthcare delivery systems ( Hyman et al., 2006 ), there exist substantial gaps in our knowledge of how to treat most mental health conditions. A 2007 Lancet Series entitled Global Mental Health claimed that the “rudimentary level of mental health-service research programmes in many nations also contributes to poor delivery of mental health care” ( Jacob et al., 2007 ). Its recommendations for mental health research priorities included research into the effects of interactions between mental health and other health conditions ( Prince et al., 2007 ), interventions for childhood developmental disabilities ( Patel et al., 2007 ), cost-effectiveness analysis, the scaling up of effective interventions, and the development of interventions that can be delivered by nonspecialist health workers ( Lancet Global Mental Health Group, 2007 ). All of these priorities require research in environments where the prevailing health problems and healthcare services match those of the populations the research will benefit, which suggests that research must take place all around the world. Similarly, many of the priorities identified by the Grand Challenges in Mental Health Initiative require focus on local environments, cultural factors, and the health systems of low- and middle-income countries. All the challenges “emphasize the need for global cooperation in the conduct of research” ( Collins et al., 2011 ).

Notwithstanding the need for research that is sensitive to different social and economic contexts, the trend of outsourcing medical research to developing countries shows no sign of abating ( Thiers et al., 2008 ). Consequently, a substantial amount of mental health research will, in any case, take place in low- and middle-income countries, as well as rich countries, during the next few years.

The need for local research and the continuing increase in the international outsourcing of research imply that there is a pressing need to build the capacity to conduct good quality mental health research around the world. However, the expansion of worldwide capacity to conduct mental health research requires more than simply addressing low levels of funding for researchers and the imbalance between the resources available in rich and poor countries. People with mental health disorders are often thought to be particularly vulnerable subjects. This may be a product of problems related to their condition, such as where the condition reduces the capacity to make autonomous decisions. It may also result from social conditions because people with mental disorders are disproportionately likely to be poor, are frequently stigmatized as a result of their condition, and may be victims of human rights abuses ( Weiss et al., 2001 ; WHO, 2005a ). As a result, it is vitally important that the institutional resources and expertise are in place for ensuring that this research is carried out ethically.

Discussion at a special session at the 7th Global Forum on Bioethics in Research revealed a perception that many mental health researchers are not very interested in ethics, as well as a lack of ethics resources directly related to their work. This collection of case studies in the ethics of mental health research responds to that gap.

This collection comprises six case studies written by contributors from around the world ( Table 1 ). Each describes a mental health research study that raised difficult ethical issues, provides background and analysis of those issues, and draws conclusions about the ethics of the study, including whether it was ethical as it stood and, if not, how it ought to have been amended. Three of the case studies are written by scientists who took part in the research they analyzed. For these cases, we have asked scholars independent of the research to write short commentaries on them. It is valuable to hear how the researchers themselves grapple with the ethical issues they encounter, as well as to hear the views of people with more distance from the research enterprise. Some of the ethical issues raised here have not been discussed before in the bioethics literature; others are more common concerns that have not received much attention in the context of international research. The case studies are intended both to expand academic discussion of some of the key questions related to research into mental health and to serve as a resource for teaching ethics.

Case studies are an established teaching tool. Ethical analyses of such cases demonstrate the relevance of ethics to the actual practice of medical research and provide paradigmatic illustrations of the application of ethical principles to particular research situations. Concrete cases help generate and guide discussion and assist students who have trouble dealing with ethical concepts in abstraction. Through structured discussion, ethical development and decision-making skills can be enhanced. Moreover, outside of the teaching context, case study analyses provide a means to generate and focus debate on the relevant ethical issues, which can both highlight their importance and help academic discussion to advance.

People working in mental health research can benefit most from case studies that are specific to mental health. Even though, as outlined below, many of the same ethical problems arise in mental health research as elsewhere, the details of how they arise are important. For example, the nature of depression and the variation in effectiveness of antidepressive medication make a difference to how we should assess the ethics of placebo-controlled trials for new antidepressants. Moreover, seeing how familiar ethical principles are applied to one's own research specialty makes it easier to think about the ethics of one's own research. The cases in this collection highlight the commonalities and the variation in the ethical issues facing researchers in mental health around the world.

The current literature contains some other collections of ethics case studies that may be useful to mental health researchers. I note four important collections here, to which interested scholars may want to refer. Lavery et al.'s (2007) Ethical Issues in International Bio-medical Research provides in-depth analyses of ethically problematic research, mostly in low- and middle-income countries, although none of these cases involve mental health. Cash et al.'s (2009) Casebook on Ethical Issues in International Health Research also focuses on research in low- and middle-income countries, and several of the 64 short case descriptions focus on populations with mental health problems. Two further collections focus on mental health research, in particular. Dubois (2007) and colleagues developed short and longer US-based case studies for teaching as part of their “Ethics in Mental Health Research” training course. Finally, Hoagwood et al.'s (1996) book Ethical Issues in Mental Health Research with Children and Adolescents contains a casebook of 61 short case descriptions, including a few from outside the United States and Western Europe. For teachers and academics in search of more case studies, these existing collections should be very useful. Here, we expand on the available resources with six case studies from around the world with extended ethical analyses.

The remainder of this introduction provides an overview of some of the most important ethical issues that arise in mental health research and describes some of the more significant ethics guidance documents that apply.

Ethical Issues in Mental Health Research

The same principles can be applied in assessing the ethics of mental health research as to other research using human participants ( Emanuel et al., 2000 ). Concerns about the social value of research, risks, informed consent, and the fair treatment of participants all still apply. This means that we can learn from the work done in other areas of human subjects research. However, specific research contexts make a difference to how the more general ethical principles should be applied to them. Different medical conditions may require distinctive research designs, different patient populations may need special protections, and different locations may require researchers to respond to study populations who are very poor and lack access to health care or to significant variations in regulatory systems. The ethical analysis of international mental health research therefore needs to be tailored to its particularities.

Each case study in this collection focuses on the particular ethical issues that are relevant to the research it analyzes. Nevertheless, some issues arise in multiple cases. For example, questions about informed consent arise in the context of research with stroke patients, with students, and with other vulnerable groups. To help the reader compare the treatment of an ethical issue across the different case studies, the ethical analyses use the same nine headings to delineate the issues they consider. These are social value, study design, study population, informed consent, risks and benefits, confidentiality, post-trial obligations, legal versus ethical obligations, and oversight.

Here, I focus on five of these ethical issues as they arise in the context of international mental health research: (1) study design, (2) study population, (3) risks and benefits, (4) informed consent, and (5) post-trial obligations. I close by mentioning some of the most important guidelines that pertain to mental health research.

Study Design

The scientific design of a research study determines what sort of data it can generate. For example, the decision about what to give participants in each arm of a controlled trial determines what interventions the trial compares and what questions about relative safety and efficacy it can answer. What data a study generates makes a difference to the ethics of the study because research that puts human beings at risk is ethically justified in terms of the social value of the knowledge it produces. It is widely believed that human subject research without any social value is unethical and that the greater the research risks to participants, the greater the social value of the research must be to compensate ( Council for International Organizations of Medical Sciences [CIOMS], 2002 ; World Medical Association, 2008 ). However, changing the scientific design of a study frequently changes what happens to research participants, too. For example, giving a control group in a treatment trial an existing effective treatment rather than placebo makes it more likely that their condition will improve but may expose them to adverse effects they would not otherwise experience. Therefore, questions of scientific design can be ethically very complex because different possible designs are compared both in terms of the useful knowledge they may generate and their potential impact on participants.

One of the more controversial questions of scientific design concerns the standard of care that is offered to participants in controlled trials. Some commentators argue that research that tests therapeutic interventions is only permissible if there is equipoise concerning the relative merits of the treatments being compared, that is, there are not good reasons to think that participants in any arm of the trial are receiving inferior treatment ( Joffe and Truog, 2008 ). If there is not equipoise, the argument goes, then physician-researchers will be breaching their duty to give their patients the best possible care ( Freedman, 1987 ).

The Bucharest Early Intervention Project (BEIP) described in the case study by Charles Zeanah was a randomized controlled trial comparing foster care with institutional care in Bucharest, Romania. When designing the BEIP, the researchers wrestled with the issue of whether there was genuine equipoise regarding the relative merits of institutional and foster care. One interpretation of equipoise is that it exists when the professional community has not reached consensus about the better treatment ( Freedman, 1987 ). Childcare professionals in the United States were confident that foster care was superior, but there was no such confidence in Romania, where institutional care was the norm. Which, then, was the relevant professional community?

The equipoise requirement is justified by reference to the role morality of physicians: for a physician to give her patient treatment that she knows to be inferior would violate principles of therapeutic beneficence and nonmaleficence. As a result, the equipoise requirement has been criticized for conflating the ethics of the physician-patient relationship with the ethics of the researcher-participant relationship ( Miller and Brody, 2003 ). According to Miller and Brody (2003) , provided that other ethical requirements are met, including an honest null hypothesis, it is not unethical to assign participants to receive treatment regimens known to be inferior to the existing standard of care.

A subset of trial designs that violate equipoise are placebo-controlled trials of experimental treatments for conditions for which proven effective treatments already exist. Here, there is not equipoise because some participants will be assigned to placebo treatment, and ex hypothesi there already exists treatment that is superior to placebo. Even if we accept Miller and Brody's (2003) argument and reject the equipoise requirement, there remain concerns about these placebo-controlled trials. Providing participants with less effective treatment than they could get outside of the trial constitutes a research risk because trial participation makes them worse off. Moreover, on the face of it, a placebo-controlled trial of a novel treatment of a condition will not answer the most important scientific question about the treatment that clinicians are interested in: is this new treatment better than the old one? Consequently, in situations where there already exists a standard treatment of a condition, it has generally been considered unethical to use a placebo control when testing a new treatment, rather than using the standard treatment as an active-control ( World Medical Association, 2008 ).

Some psychiatric research provides scientific reasons to question a blanket prohibition on placebo-controlled trials when an effective intervention exists. For example, it is not unusual for antidepressive drugs to fail to show superiority to placebo in any given trial. This means that active-control trials may seem to show that an experimental drug is equivalent in effectiveness to the current standard treatment, when the explanation for their equivalence may, in fact, be that neither was better than placebo. Increasing the power of an active-control trial sufficiently to rule out this possibility may require an impractically large number of subjects and will, in any case, put a greater number of subjects at risk ( Carpenter et al., 2003 ; Miller, 2000 ). A 2005 trial of risperidone for acute mania conducted in India ( Khanna et al., 2005 ) was criticized for unnecessarily exposing subjects to risk ( Basil et al., 2006 ; Murtagh and Murphy, 2006 ; Srinivasan et al., 2006 ). The investigators' response to criticisms adopted exactly the line of argument just described:

A placebo group was included because patients with mania generally show a high and variable placebo response, making it difficult to identify their responses to an active medication. Placebo-controlled trials are valuable in that they expose the fewest patients to potentially ineffective treatments. In addition, inclusion of a placebo arm allows a valid evaluation of adverse events attributable to treatment v. those independent of treatment. ( Khanna et al., 2006 )
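To make the sample-size side of this argument concrete, the sketch below applies the standard normal-approximation formula for a two-arm comparison of means, n = 2(z_{1-α/2} + z_{1-β})² / d² per arm, where d is the standardized effect size the trial must detect. The effect sizes used are illustrative assumptions, not figures from the risperidone trial: the expected difference between a new drug and placebo is typically much larger than the difference between two active drugs, which is what drives the sample-size gap.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm comparison of means,
    using the normal-approximation formula n = 2 * (z_a + z_b)^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # quantile corresponding to desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Illustrative (assumed) standardized effect sizes:
# drug vs. placebo difference d = 0.5; drug vs. active comparator d = 0.1.
print(n_per_arm(0.5))   # ~63 per arm for a placebo-controlled superiority trial
print(n_per_arm(0.1))   # ~1570 per arm to distinguish two active drugs
```

Under these assumed values, ruling out a small difference between two active drugs requires roughly 25 times as many participants per arm as demonstrating superiority to placebo, which is the practical force of the investigators' argument about exposing the fewest patients to potentially ineffective treatment.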

Concerns about the standard of care given to research participants are exacerbated in trials in developing countries, like India, where research participants may not have access to treatment independent of the study. In such cases, potential participants may have no real choice but to join a placebo-controlled trial, for example, because that is the only way they have a chance to receive treatment. In the Indian risperidone trial, the issue of exploitation is particularly stark because it seemed to some that participants were getting less than the international best standard of care, in order that a pharmaceutical company could gather data that was unlikely to benefit many Indian patients.

This is just one way in which trial design may present ethically troubling risks to participants. Other potentially difficult designs include washout studies, in which participants discontinue use of their medication, and challenge studies, in which psychiatric symptoms are experimentally induced ( Miller and Rosenstein, 1997 ). In both cases, the welfare of participants may seem to be endangered ( Zipursky, 1999 ). A variant on the standard placebo-controlled trial design is the withdrawal design, in which everyone starts the trial on medication, the people who respond to the medication are then selected for randomization, and half of those responders are randomized to placebo. This design was used by a Japanese research team to assess the effectiveness of sertraline for depression, as described by Shimon Tashiro and colleagues in this collection. The researchers regarded this design as more likely to benefit the participants because, for legal reasons, sertraline was being tested in Japan despite its proven effectiveness in non-Japanese populations. Tashiro and colleagues analyze how the risks and benefits of a withdrawal design compare with those of standard placebo-controlled trials and consider whether the special regulatory context of Japan makes a difference.
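As a rough illustration of the participant flow in a randomized-withdrawal design, the following minimal simulation uses assumed response and relapse rates (they are not data from the Japanese sertraline study described by Tashiro and colleagues): everyone starts on the active drug, and only those who respond are randomized, half to continue the drug and half to switch to placebo.

```python
import random

random.seed(0)

def withdrawal_trial(n_enrolled, response_rate, relapse_on_placebo, relapse_on_drug):
    """Minimal sketch of a randomized-withdrawal design with assumed rates."""
    # Open-label phase: everyone receives the active drug; identify responders.
    responders = [i for i in range(n_enrolled) if random.random() < response_rate]
    # Randomization phase: half of the responders are switched to placebo.
    random.shuffle(responders)
    half = len(responders) // 2
    placebo_arm, drug_arm = responders[:half], responders[half:]
    return {
        "enrolled": n_enrolled,
        "randomized": len(responders),
        "exposed_to_placebo": len(placebo_arm),
        "relapses_placebo": sum(random.random() < relapse_on_placebo for _ in placebo_arm),
        "relapses_drug": sum(random.random() < relapse_on_drug for _ in drug_arm),
    }

print(withdrawal_trial(n_enrolled=200, response_rate=0.5,
                       relapse_on_placebo=0.4, relapse_on_drug=0.15))
```

Under these assumptions, only about a quarter of those enrolled are ever assigned to placebo, and all of them have already shown a response to the drug, which is one sense in which such a design can be seen as more favorable to participants than a standard placebo-controlled trial; the trade-off, analyzed by Tashiro and colleagues, is the risk of relapse among responders who are withdrawn from effective treatment.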

Study Population

The choice of study population implicates considerations of justice. The Belmont Report, which lays out the ethical foundations for the United States system for ethical review of human subject research, says:

Individual justice in the selection of subjects would require that researchers … should not offer potentially beneficial research only to some patients who are in their favor or select only “undesirable” persons for risky research. Social justice requires that distinction be drawn between classes of subjects that ought, and ought not, to participate in any particular kind of research, based on the ability of members of that class to bear burdens and on the appropriateness of placing further burdens on already burdened persons. ( National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978 )

Two distinct considerations are highlighted here. The first (“individual justice”) requires that the researchers treat people equally. Morally irrelevant differences between people should not be the basis for deciding whom to enroll in research. For example, it would normally be unjust to exclude women from a phase 3 trial of a novel treatment of early-stage Alzheimer disease, given that they are an affected group. Some differences are not morally irrelevant, however. In particular, there may be scientific reasons for choosing one possible research population over another, and there may be risk-related reasons for excluding certain groups. For example, a functional magnetic resonance imaging study in healthy volunteers to examine the acute effects of an antianxiety medication might reasonably exclude left-handed people because their brain structure is different from that of right-handed people, and a study of mood that required participants to forego medication could justifiably exclude people with severe depression or suicidal ideation.

The second consideration requires that we consider how the research is likely to impact “social justice.” Social justice refers to the way in which social institutions distribute goods, like property, education, and health care. This may apply to justice within a state ( Rawls, 1971 ) or to global justice ( Beitz, 1973 ). In general, research will negatively affect social justice when it increases inequality, for example, by making people who are already badly off even worse off. The quotation from the Belmont Report above suggests one way in which research might violate a requirement of social justice: people who are already badly off might be asked to participate in research and so be made worse off. For example, a study examining changes in the brain caused by alcohol abuse that primarily enrolled homeless alcoholics from a shelter near the study clinic might only put at further risk this group who are already very badly off. An alternative way in which research can promote justice or injustice is through its results. Research that leads to the development of expensive new attention deficit hyperactivity disorder medication is likely to do little, if anything, to make the world more just. Research on how to improve the cognitive development of orphaned children in poor environments (like the BEIP) is much more likely to improve social justice.

This last point suggests a further concern about fairness—exploitation—that frequently arises in the context of international collaborative research in developing countries. Exploitation occurs, roughly, when one party takes “unfair advantage” of the vulnerability of another. This means that the first party benefits from the interaction and does so to an unfair extent ( Wertheimer, 1996 ). These conditions may be met in international collaborative research when the burdens of research fall disproportionately on people and institutions in developing countries, but the benefits of research, such as access to new treatments, accrue to people in richer countries. A number of case studies in this collection raise this concern in one way or another. For example, Virginia Rodriguez analyzes a proposed study of the genetic basis of antisocial personality disorder run by US researchers but carried out at sites in several Latin American countries. One of the central objections raised by one of the local national research ethics committees with regard to this study was that there appeared to be few, if any, benefits for patients and researchers in the host country.

Risks and Benefits

Almost all research poses some risk of harm to participants. Participants in mental health research may be particularly susceptible to risk in several ways. First, and most obviously, they may be physically or psychologically harmed as a result of trial participation. For example, an intervention study of an experimental antipsychotic may result in some serious adverse effects for participants who take the drug. Less obvious but still very important are the potential effects of stopping medication. As mentioned above, some trials of psychoactive medications require that patients stop taking the medications that they were on before the trial (e.g., the Japanese withdrawal trial). Stopping their medication can lead to relapse, to dangerous behavior (like attempted suicide), and could mean that their previous treatment regimen is less successful when they attempt to return to it. Participants who were successfully treated during a trial may suffer similar harms after the trial ends if they do not have access to treatment outside of it. This is much more likely to happen in research conducted with poor populations, such as the Indian mania patients.

The harms resulting directly from research-related interventions are not the only risk to participants in mental health research. Participation can also increase the risks of psychosocial harms, such as being identified by one's family or community as having a particular condition. Such breaches of confidentiality need not involve gross negligence on the part of researchers. The mere fact that someone regularly attends a clinic or sees a psychiatrist could be sufficient to suggest that they have a mental illness. In other research, the design makes confidentiality hard to maintain. For example, the genetic research described by Rodriguez involved soliciting the enrollment of the family members of people with antisocial personality disorder.

The harm from a breach of confidentiality is exacerbated when the condition studied or the study population is stigmatized. Both of these were true in the case Sana Loue describes in this collection. She studied the co-occurrence of severe mental illnesses and human immunodeficiency virus risk in African-American men who have sex with men. Not only was there shame attached to the conditions under study, such that they were euphemistically described in the advertisements for the research, but also many of the participants were men who had heterosexual public identities.

Informed Consent

Many people with mental disorders retain the capacity (ability) and competence (legal status) to give informed consent. Conversely, potential participants without mental problems may lack or lose capacity (and competence). Nevertheless, problems with the ability to consent remain particularly pressing with regard to mental health research. This is partly a consequence of psychological conditions that reduce or remove the ability to give informed consent. To study these conditions, it may be necessary to use participants who have them, which means that alternative participants who can consent are, in principle, not available. This occurred in the study of South African stroke patients described by Anne Pope in this collection. The researcher she describes wanted to compare the effectiveness of exercises designed to help patients whose ability to communicate was compromised by their stroke. Given their communication difficulties and the underlying condition, there would inevitably be questions about their capacity. Whether it is permissible to enroll people who cannot give informed consent into a study depends on several factors, including the availability of alternative study populations, the levels of risk involved, and the possible benefits to participants in comparison with alternative health care they could receive.

In research that expects to enroll people with questionable capacity to consent, it is wise to institute procedures for assessing the capacity of prospective participants. There are two general strategies for making these assessments. The first is to conduct tests that measure the general cognitive abilities of the person being assessed, as an IQ test does. If she has the ability to perform these sorts of mental operations sufficiently well, it is assumed that she also has the ability to make autonomous decisions about research participation. A Mini-Mental State Examination might be used to make this sort of assessment ( Kim and Caine, 2002 ). The second capacity assessment strategy focuses on a prospective participant's understanding and reasoning with regard to the specific research project they are deciding about. If she understands that project and what it implies for her and is capable of articulating her reasoning about it, then it is clear that she is capable of consenting to participation, independent of her more general capacities. This sort of assessment requires questions that are tailored to each specific research project and cannot be properly carried out unless the assessor is familiar with that research.

Where someone lacks the capacity to give consent, sometimes a proxy decision maker can agree to trial participation on her behalf. In general, proxy consent is not equivalent to individual consent: unless the proxy was expressly designated to make research decisions by the patient while capacitated, the proxy lacks the power to exercise the patient's rights. As a result, the enrollment of people who lack capacity is only acceptable when the research poses a low net risk to participants or holds out the prospect of benefiting them. When someone has not designated a proxy decision maker for research, it is common to allow the person who has the power to make decisions about her medical care also to make decisions about research participation. However, because medical care is directed at the benefit of the patient, but research generally is not aimed at the benefit of participants, the basis for this assumption is unclear. Its legal basis may be weak, too. For example, in her discussion of research on South African stroke patients, Pope notes the confusion surrounding the legality of surrogate decision makers, given that the South African constitution forbids proxy decision making for adults (unless they have court-appointed curators), but local and international guidance documents seem to assume it.

Although it is natural to think of the capacity to give consent as an all-or-nothing phenomenon, it may be better conceptualized as domain-specific. Someone may be able to make decisions about some areas of her life, but not others. This fits with assumptions that many people make in everyday life. For example, a 10-year-old child may be deemed capable of deciding what clothes she will wear but may not be capable of deciding whether to visit the dentist. The capacity to consent may admit of degrees in another way, too. Someone may have diminished capacity to consent but still be able to make decisions about their lives if given the appropriate assistance. For example, a patient with mild dementia might not be capable of deciding on his own whether he should move in with a caregiver, but his memory lapses during decision making could be compensated for by having his son present to remind him of details relevant to the decision. The concept of supported decision making has been much discussed in the literature on disability; however, its application to consent to research has received little attention ( Herr, 2003 ; United Nations, 2007 ).

The ability to give valid informed consent is the aspect of autonomy that is most frequently discussed in the context of mental health research, but it is not the only important aspect. Several of the case studies in this collection also raise issues of voluntariness and coercion. For example, Douglas Wassenaar and Nicole Mamotte describe a study in which professors enrolled their students, which raises the question of the vulnerability of student subjects to pressure. Here, there is both the possibility of explicit coercion and the possibility that students will feel pressure even from well-meaning researchers. For various reasons, including dependence on caregivers or healthcare professionals and the stigma of their conditions, people with mental illnesses can be particularly vulnerable to coercion.

Post-Trial Obligations

The obligations of health researchers extend past the end of their study. Participants' data remain in the hands of researchers after their active involvement in a study is over, and patients with chronic conditions who enroll in clinical trials may leave them still in need of treatment.

Ongoing confidentiality is particularly important when studying stigmatized populations (such as men who have sex with men as discussed by Sana Loue) or people with stigmatizing conditions (such as bipolar disorder). In research on mental illnesses, as with many medical conditions, it is now commonplace for researchers to collect biological specimens and phenotypic data from participants to use in future research (such as genome-wide association studies). The collection of data and biological specimens for future research raises additional confidentiality challenges because confidentiality must then be guaranteed over a long period of time, frequently with different research groups making use of the samples.

Biobanking also generates some distinctive ethical problems of its own. One concerns how consent to the future use of biological specimens should be obtained. Can participants simply give away their samples for use in whatever future research may be proposed, or do they need to have some idea of what this research might involve in order to give valid consent? A second problem, which arises particularly in transnational research, concerns who should control the ongoing use of the biobank. Many researchers think that biological samples should not leave the country in which they were collected, and developing country researchers worry that they will not be allowed to do research on the biobanks that end up in developed countries. This was another key concern with the proposed study in Latin America.

In international collaborative research, further questions arise as a result of the disparities between developing country participants and researchers and developed country sponsors and researchers. For example, when clinical trials test novel therapies, should successful therapies be made available after the trial? If they should, who is responsible for ensuring their provision, to whom should they be provided, and in what does providing them consist? In the case of chronic mental illnesses like depression or bipolar disorder, patient-participants may need maintenance treatment for the rest of their lives and may be at risk if treatment is stopped. This suggests that the question of what happens to them after the trial must at least be considered by those who sponsor and conduct the trial and the regulatory bodies that oversee it. Exactly on whom obligations fall remains a matter of debate ( Millum, 2011 ).

Ethics Guidelines

A number of important policy documents are relevant to the ethics of research into mental disorders. The WMA's Declaration of Helsinki and the CIOMS' Ethical Guidelines for Biomedical Research both consider research on individuals whose capacity and/or competence to consent is impaired. They agree on three conditions: a) research on these people is justified only if it cannot be carried out on individuals who can give adequate informed consent, b) consent to such research should be obtained from a proxy representative, and c) the goal of such research should be the promotion of the health of the population that the research participants represent ( Council for International Organizations of Medical Sciences, 2002 ; World Medical Association, 2008 ). In addition, with regard to individuals who are incapable of giving consent, Guideline 9 of CIOMS states that interventions that do not “hold out the prospect of direct benefit for the individual subject” should generally involve no more risk than their “routine medical or psychological examination.”

In 1998, the US National Bioethics Advisory Commission (NBAC) published a report entitled Research Involving Persons with Mental Disorders That May Affect Decision-making Capacity ( National Bioethics Advisory Commission, 1998 ). As the title suggests, this report concentrates on issues related to the capacity or competence of research participants to give informed consent. Its recommendations are largely consistent with those made in the Declaration of Helsinki and CIOMS, although it is able to devote much more space to detailed policy questions (at least in the United States context). Two domains of more specific guidance are of particular interest. First, the NBAC report considers the conditions under which individuals who lack the capacity to consent may be enrolled in research posing different levels of risk and supplying different levels of expected benefits to participants. Second, it provides some analysis of who should be recognized as an appropriate proxy decision maker (or “legally authorized representative”) for participation in clinical trials.

Finally, the World Psychiatric Association's Madrid Declaration gives guidelines on the ethics of psychiatric practice. This declaration may have implications for what is permissible in psychiatric research, insofar as the duties of psychiatrists as personal physicians are also duties of psychiatrists as medical researchers. It also briefly considers the ethics of psychiatric research, although it notes only the special vulnerability of psychiatric patients as a concern distinctive of mental health research ( World Psychiatric Association, 2002 ).

The opinions expressed are the author's own. They do not reflect any position or policy of the National Institutes of Health, U.S. Public Health Service, or Department of Health and Human Services.

Disclosure: The author declares no conflict of interest.

  • Basil B, Adetunji B, Mathews M, Budur K. Trial of risperidone in India—concerns. Br J Psychiatry. 2006;188:489–490.
  • Beitz C. Political theory and international relations. Princeton, NJ: Princeton University Press; 1973.
  • Carpenter WT, Appelbaum PS, Levine RJ. The Declaration of Helsinki and clinical trials: A focus on placebo-controlled trials in schizophrenia. Am J Psychiatry. 2003;160:356–362.
  • Cash R, Wikler D, Saxena A, Capron A, editors. Casebook on ethical issues in international health research. Geneva, Switzerland: World Health Organization; 2009. Available at: http://whqlibdoc.who.int/publications/2009/9789241547727_eng.pdf.
  • Collins PY, Patel V, Joestl SS, March D, Insel TR, Daar AS, on behalf of the Scientific Advisory Board and the Executive Committee of the Grand Challenges on Global Mental Health. Grand challenges in global mental health. Nature. 2011;475:27–30.
  • Council for International Organizations of Medical Sciences. The international ethical guidelines for biomedical research involving human subjects. 2002. Available at: http://www.cioms.ch/publications/layout_guide2002.pdf. Accessed January 31, 2012.
  • DuBois JM. Ethical research in mental health. New York, NY: Oxford University Press; 2007.
  • Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283:2701–2711.
  • Freedman B. Equipoise and the ethics of clinical research. N Engl J Med. 1987;317:141–145.
  • Herr SS. Self-determination, autonomy, and alternatives for guardianship. In: Herr SS, Gostin LO, Koh HH, editors. The human rights of persons with intellectual disabilities: Different but equal. Oxford, UK: Oxford University Press; 2003. pp. 429–450.
  • Hoagwood K, Jensen PS, Fisher CB. Ethical issues in mental health research with children and adolescents. Mahwah, NJ: Lawrence Erlbaum Associates; 1996.
  • Hyman S, Chisholm D, Kessler R, Patel V, Whiteford H. Mental disorders. In: Jamison DT, Breman JG, Measham AR, Alleyne G, Claeson M, Evans DB, Jha P, Mills A, Musgrove P, editors. Disease control priorities in developing countries. 2nd ed. Washington, DC; New York, NY: Oxford University Press and the World Bank; 2006. Chapter 31.
  • Jacob KS, Sharan P, Mirza I, Garrido-Cumbrera M, Seedat S, Mari JJ, Sreenivas V, Saxena S. Global Mental Health 4: Mental health systems in countries: Where are we now? Lancet. 2007;370:1061–1077.
  • Joffe S, Truog RD. Equipoise and randomization. In: Emanuel EJ, Grady C, Crouch RA, Lie RK, Miller FG, Wendler D, editors. The Oxford textbook of clinical research ethics. Oxford, UK: Oxford University Press; 2008. pp. 245–260.
  • Khanna S, Vieta E, Lyons B, Grossman F, Eerdekens M, Kramer M. Risperidone in the treatment of acute mania: Double-blind, placebo-controlled study. Br J Psychiatry. 2005;187:229–234.
  • Khanna S, Vieta E, Lyons B, Grossman F, Kramer M, Eerdekens M. Trial of risperidone in India—concerns. Authors' reply. Br J Psychiatry. 2006;188:491.
  • Kim SYH, Caine ED. Utility and limits of the Mini Mental State Examination in evaluating consent capacity in Alzheimer's disease. Psychiatr Serv. 2002;53:1322–1324.
  • Lancet Global Mental Health Group. Global Mental Health 6: Scale up services for mental disorders: A call for action. Lancet. 2007;370:1241–1252.
  • Lavery J, Grady C, Wahl ER, Emanuel EJ, editors. Ethical issues in international biomedical research: A casebook. New York, NY: Oxford University Press; 2007.
  • Miller FG, Rosenstein DL. Psychiatric symptom-provoking studies: An ethical appraisal. Biol Psychiatry. 1997;42:403–409.
  • Miller FG. Placebo-controlled trials in psychiatric research: An ethical perspective. Biol Psychiatry. 2000;47:707–716.
  • Miller FG, Brody H. A critique of clinical equipoise: Therapeutic misconception in the ethics of clinical trials. Hastings Cent Rep. 2003;33:19–28.
  • Millum J. Post-trial access to antiretrovirals: Who owes what to whom? Bioethics. 2011;25:145–154.
  • Murtagh A, Murphy KC. Trial of risperidone in India—concerns. Br J Psychiatry. 2006;188:489.
  • National Bioethics Advisory Commission. Research involving persons with mental disorders that may affect decision-making capacity. 1998. Available at: http://bioethics.georgetown.edu/nbac/capacity/toc.htm. Accessed January 31, 2012.
  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: DHEW Publication No. (OS) 78-0012; 1978.
  • Patel V, Araya R, Chatterjee S, Chisholm D, Cohen A, De Silva M, Hosman C, McGuire H, Rojas G, van Ommeren M. Global Mental Health 3: Treatment and prevention of mental disorders in low-income and middle-income countries. Lancet. 2007;370:991–1005.
  • Prince M, Patel V, Saxena S, Maj M, Maselko J, Phillips MR, Rahman A. Global Mental Health 1: No health without mental health. Lancet. 2007;370:859–877.
  • Rawls J. A theory of justice. Cambridge, MA: Harvard University Press; 1971.
  • Srinivasan S, Pai SA, Bhan A, Jesani A, Thomas G. Trial of risperidone in India—concerns. Br J Psychiatry. 2006;188:489.
  • Thiers FA, Sinskey AJ, Berndt ER. Trends in the globalization of clinical trials. Nature Rev Drug Discov. 2008;7:13–14.
  • United Nations. Legal capacity and supported decision-making. In: From exclusion to equality: Realizing the rights of persons with disabilities. Geneva, Switzerland: United Nations; 2007. Available at: http://www.un.org/disabilities/default.asp?id=242.
  • Weiss MG, Jadhav S, Raguram R, Vounatsou P, Littlewood R. Psychiatric stigma across cultures: Local validation in Bangalore and London. Anthropol Med. 2001;8:71–87.
  • Wertheimer A. Exploitation. Princeton, NJ: Princeton University Press; 1996.
  • World Health Organization. Glaring inequalities for people with mental disorders addressed in new WHO effort. 2005a. Available at: http://www.who.int/mediacentre/news/notes/2005/np14/en/index.html. Accessed January 31, 2012.
  • World Health Organization. Mental Health Atlas: 2005. 2005b. Available at: http://www.who.int/mental_health/evidence/mhatlas05/en/index.html. Accessed January 31, 2012.
  • World Medical Association. The Declaration of Helsinki: Ethical principles for medical research involving human subjects. 2008. Available at: http://www.wma.net/en/30publications/10policies/b3/index.html. Accessed January 31, 2012.
  • World Psychiatric Association. Madrid Declaration on Ethical Standards for Psychiatric Practice. 2002. Available at: http://www.wpanet.org/detail.php?section_id=5&content_id=48. Accessed January 31, 2012.
  • Zipursky RB. Ethical issues in schizophrenia research. Curr Psychiatry Rep. 1999;1:13–19.

Unethical human research in the field of neuroscience: a historical review

DOI: 10.1007/s10072-018-3245-1

Understanding the historical foundations of ethics in human research is key to illuminating future human research and clinical trials. This paper gives an overview of the most remarkable unethical human research and of how past misconduct helped develop ethical guidelines on human experimentation, such as the Nuremberg Code of 1947 following WWII. Unethical research in the field of neuroscience also proved to be incredibly distressing: participants were often left with life-long cognitive disabilities. This emphasizes the importance of implementing strict rules and ethical guidelines in neuroscience research that protect participants and respect their dignity. The experiments conducted by the Nazis in concentration camps during WWII are probably the most inhumane and brutal ever conducted. The Nuremberg Code of 1947, one of the few positive outcomes of the Nazi experiments, is often considered the first document to set out ethical regulations of human research. It consists of numerous necessary criteria; to highlight a few: the subject must give informed consent, there must be a concrete scientific basis for the experiment, and the experiment should yield positive results that cannot be obtained in any other way. In the end, we must remember that the interest of the patient must always prevail over the interests of science and society.

Keywords: Ethics; History; Neuroscience; Unethical research.


15 Famous Experiments and Case Studies in Psychology

Psychology has seen thousands upon thousands of research studies over the years. Most of these studies have helped shape our current understanding of human thoughts, behavior, and feelings.

The studies in this list are considered classic examples of psychological case studies and experiments, and they are still taught in introductory psychology courses to this day.

Some studies, however, were so shocking and controversial that you may well wonder why they were conducted at all. Imagine participating in an experiment for a small reward or extra class credit, only to be left scarred for life. These kinds of studies, however, paved the way for a more ethical approach to studying psychology and for the implementation of research standards such as the use of debriefing in psychology research .

Case Study vs. Experiment

Before we dive into the list of the most famous studies in psychology, let us first review the difference between case studies and experiments.

  • A case study is an in-depth study and analysis of an individual, group, community, or phenomenon. The results of a case study cannot be applied to the whole population, but they can provide insights for further studies.
  • It often uses qualitative research methods such as observations, surveys, and interviews.
  • It is often conducted in real-life settings rather than in controlled environments.
  • An experiment is a type of study carried out on a sample of participants, often randomly assigned to conditions, and its results can usually be generalized to the wider population.
  • It often uses quantitative research methods that rely on numbers and statistics.
  • It is conducted in controlled environments, wherein some things or situations are manipulated.

See Also: Experimental vs Observational Studies

Famous Experiments in Psychology

1. The Marshmallow Experiment

Psychologist Walter Mischel conducted the marshmallow experiment at Stanford University from the 1960s to the early 1970s. It was a simple test that aimed to examine the connection between delayed gratification and success in life.

The instructions were fairly straightforward: children ages 4-6 were presented with a marshmallow on a table and told that they would receive a second one if they could wait 15 minutes without eating the first.

About one-third of the 600 participants succeeded in delaying gratification to receive the second marshmallow. Mischel and his team followed up on these participants in the 1990s, learning that those who had the willpower to wait for a larger reward experienced more success in life in terms of SAT scores and other metrics.

This case study also supported self-control theory , a theory in criminology that holds that people with greater self-control are less likely to end up in trouble with the law!

The classic marshmallow experiment, however, was challenged by a 2018 replication study done by Tyler Watts and colleagues.

This more recent experiment had a larger group of participants (900) and a better representation of the general population in terms of race and ethnicity. In this study, the researchers found that the ability to wait for a second marshmallow did not depend on willpower alone but more so on the economic background and social status of the participants, as the sketch below illustrates.
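The methodological point is one of statistical control: an apparent effect of delay ability shrinks once background variables enter the model. The sketch below uses purely synthetic data, with made-up variable names and coefficients chosen for illustration rather than values from Watts and colleagues' study, to show how an ordinary least squares estimate changes when a confounder is included.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 900

# Synthetic data: family background drives both waiting time and later outcomes.
ses = rng.normal(size=n)                        # socioeconomic background
delay = 0.8 * ses + rng.normal(size=n)          # minutes the child waits
achievement = 1.0 * ses + 0.05 * delay + rng.normal(size=n)

def delay_coefficient(predictors):
    """OLS coefficient on delay, with delay as the first predictor column."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, achievement, rcond=None)
    return round(beta[1], 2)

print(delay_coefficient([delay]))        # inflated "effect" of delay on its own
print(delay_coefficient([delay, ses]))   # shrinks toward 0.05 once background is controlled
```

Because background drives both variables in the synthetic data, the naive coefficient on delay is several times larger than the small true value of 0.05, and it falls back toward it once background is controlled, which mirrors the logic of the replication's conclusion.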

2. The Bystander Effect

In 1964, Kitty Genovese was murdered in the neighborhood of Kew Gardens, New York. It was reported that there were up to 38 witnesses and onlookers in the vicinity of the crime scene, but nobody did anything to stop the murder or call for help.

This tragedy was the catalyst that inspired social psychologists Bibb Latane and John Darley to investigate the phenomenon called the bystander effect or bystander apathy.

Subsequent investigations showed that this story was exaggerated and inaccurate, as there were actually only about a dozen witnesses, at least two of whom called the police. But the case of Kitty Genovese led to various studies that aimed to shed light on the bystander phenomenon.

Latane and Darley tested bystander intervention in an experimental study . Participants were asked to answer a questionnaire inside a room, and they would either be alone or with two other participants (who were actually actors or confederates in the study). Smoke would then come out from under the door. The reaction time of participants was tested — how long would it take them to report the smoke to the authorities or the experimenters?

The results showed that participants who were alone in the room reported the smoke faster than participants who were with two passive others. The study suggests that the more onlookers are present in an emergency situation, the less likely someone would step up to help, a social phenomenon now popularly called the bystander effect.

3. Asch Conformity Study

Have you ever made a decision against your better judgment just to fit in with your friends or family? The Asch Conformity Studies will help you understand this kind of situation better.

In this experiment, groups of participants were shown a standard line alongside three numbered comparison lines of different lengths and asked to identify which comparison line matched the standard. However, only one true participant was present in each group; the rest were actors, most of whom deliberately gave the wrong answer.

Results showed that many participants went along with the group's wrong answer at least once, even though they knew the correct one. When asked why they gave the wrong answer, they said that they didn't want to be branded as strange or peculiar.

This study goes to show that there are situations in life when people prefer fitting in over being right. It also shows that there is power in numbers: a group's decision can overwhelm a person and make them doubt their own judgment.

4. The Bobo Doll Experiment

The Bobo Doll Experiment was conducted by Dr. Albert Bandura, the proponent of social learning theory .

Back in the 1960s, the Nature vs. Nurture debate was a popular topic among psychologists. Bandura contributed to this discussion by proposing that human behavior is mostly influenced by environmental rather than genetic factors.

In the Bobo Doll Experiment, children were divided into three groups: one group was shown a video in which an adult acted aggressively toward the Bobo Doll, the second group was shown a video in which an adult played with the Bobo Doll, and the third group served as the control group and was shown no video.

The children were then led to a room with different kinds of toys, including the Bobo Doll they had seen in the video. Results showed that children tended to imitate the adults in the video: those who were shown the aggressive model acted aggressively toward the Bobo Doll, while those who were shown the passive model displayed less aggression.

While the Bobo Doll Experiment can no longer be replicated because of ethical concerns, it laid the foundations of social learning theory and helped us understand the degree of influence adult behavior has on children.

5. Blue Eye / Brown Eye Experiment

Following the assassination of Martin Luther King Jr. in 1968, third-grade teacher Jane Elliott conducted an experiment in her class. Although not a formal experiment in controlled settings, A Class Divided is a good example of a social experiment to help children understand the concept of racism and discrimination.

The class was divided into two groups: blue-eyed children and brown-eyed children. For one day, Elliott gave preferential treatment to her blue-eyed students, giving them more attention and pampering them with rewards. The next day, it was the brown-eyed students’ turn to receive extra favors and privileges.

As a result, whichever group of students was given preferential treatment performed exceptionally well in class, had higher quiz scores, and recited more frequently; students who were discriminated against felt humiliated, answered poorly on tests, and became uncertain about their answers in class.

This study is now widely taught in sociocultural psychology classes.

6. Stanford Prison Experiment

One of the most controversial and widely cited studies in psychology is the Stanford Prison Experiment , conducted by Philip Zimbardo in the basement of the Stanford psychology building in 1971. The study examined whether abusive behavior in prisons is driven by the situation or by the personality traits of the prisoners and prison guards.

The participants in the experiment were college students who were randomly assigned as either a prisoner or a prison guard. The prison guards were then told to run the simulated prison for two weeks. However, the experiment had to be stopped in just 6 days.

The prison guards abused their authority and harassed the prisoners through verbal and physical means. The prisoners, on the other hand, showed submissive behavior. Zimbardo decided to stop the experiment because the prisoners were showing signs of emotional and physical breakdown.

Although the experiment wasn’t completed, the results strongly showed that people can easily get into a social role when others expect them to, especially when it’s highly stereotyped .

7. The Halo Effect

Have you ever wondered why toothpastes and other dental products are endorsed in advertisements by celebrities more often than dentists? The Halo Effect is one of the reasons!

The Halo Effect shows how one favorable attribute of a person can gain them positive perceptions in other attributes. In the case of product advertisements, attractive celebrities are also perceived as intelligent and knowledgeable of a certain subject matter even though they’re not technically experts.

The Halo Effect originated in a classic study done by Edward Thorndike in the early 1900s. He asked military commanding officers to rate their subordinates based on different qualities, such as physical appearance, leadership, dependability, and intelligence.

The results showed that high ratings of a particular quality influence the ratings of other qualities, producing a halo effect of overall high ratings. The opposite also applied: a negative rating in one quality correlated with negative ratings in other qualities.

Later experiments on the halo effect, conducted in various formats, supported Thorndike's original findings. This phenomenon suggests that our perception of other people's overall personality is hugely influenced by a single quality that we focus on.
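In data terms, a halo effect shows up as a strong positive correlation between ratings of qualities that should be largely unrelated. The snippet below computes a Pearson correlation on a small set of invented ratings (hypothetical numbers for illustration, not Thorndike's data); it uses statistics.correlation, which requires Python 3.10 or later.

```python
from statistics import correlation

# Hypothetical ratings (1-10) of ten subordinates on two nominally unrelated qualities.
physical_appearance = [8, 9, 6, 7, 5, 9, 4, 8, 6, 7]
rated_intelligence = [8, 9, 5, 7, 5, 8, 4, 9, 6, 6]

# A Pearson correlation close to 1 between unrelated qualities is the
# statistical signature of a halo effect in rating data.
print(round(correlation(physical_appearance, rated_intelligence), 2))
```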

8. Cognitive Dissonance

There are experiences in our lives when our beliefs and behaviors do not align with each other and we try to justify them in our minds. This is cognitive dissonance , which was studied in an experiment by Leon Festinger and James Carlsmith back in 1959.

In this experiment, participants had to go through a series of boring and repetitive tasks, such as spending an hour turning pegs on a wooden board. After completing the tasks, they were paid either $1 or $20 to tell the next participants that the tasks were extremely fun and enjoyable. Afterwards, participants were asked to rate the experiment. Those who were given $1 rated the experiment as more interesting and fun than those who received $20.

The results showed that those who received a smaller incentive to lie experienced cognitive dissonance — $1 wasn’t enough incentive for that one hour of painstakingly boring activity, so the participants had to justify that they had fun anyway.

Famous Case Studies in Psychology

9. Little Albert

In 1920, behaviourist theorists John Watson and Rosalie Rayner experimented on a 9-month-old baby to test the effects of classical conditioning in instilling fear in humans.

The study was highly controversial and is now featured in psychology textbooks and syllabi as a classic example of unethical research done in the name of science.

In one of the experiments, Little Albert was presented with a harmless stimulus, a white rat, which he wasn't scared of at first. But every time Little Albert touched the white rat, the researchers would make a frightening noise by striking a steel bar with a hammer. After about six pairings, Little Albert learned to fear the rat even without the noise.

Through classical conditioning , Little Albert developed a fear of the rat and even generalized his fear to other stimuli that had never been paired with the noise during the experiment.

10. Phineas Gage

Phineas Gage is such a celebrity in Psych 101 classes, even though the way he rose to popularity began with a tragic accident. He was a resident of Central Vermont and worked in the construction of a new railway line in the mid-1800s. One day, an explosive went off prematurely, sending a tamping iron straight into his face and through his brain.

Gage survived the accident, fortunately, something that is considered a medical feat even to this day. He even managed to find work as a stagecoach driver after the accident. However, his family and friends reported that his personality changed so much that “he was no longer Gage” (Harlow, 1868).

New evidence on the case of Phineas Gage has since come to light, thanks to modern scientific studies and medical tests. However, there are still plenty of mysteries revolving around his brain damage and subsequent recovery.

11. Anna O.

Anna O., a social worker and feminist of German Jewish descent, was one of the first patients to receive psychoanalytic treatment.

Her real name was Bertha Pappenheim and she inspired much of Sigmund Freud’s works and books on psychoanalytic theory, although they hadn’t met in person. Their connection was through Joseph Breuer, Freud’s mentor when he was still starting his clinical practice.

Anna O. suffered from paralysis, personality changes, hallucinations, and rambling speech, but her doctors could not find the cause. Joseph Breuer was then called to her house and treated her with what she herself dubbed the “talking cure”, an early forerunner of psychoanalysis.

Breuer would encourage Anna O. to say whatever came to her mind, such as her thoughts, feelings, and childhood experiences. It was noted that her symptoms subsided when she talked things out.

However, Breuer later referred Anna O. to the Bellevue Sanatorium, where she recovered and went on to become a renowned writer and advocate for women and children.

12. Patient HM

H.M., or Henry Gustav Molaison, was a severe amnesiac who had been the subject of countless psychological and neurological studies.

Henry was 27 when he underwent brain surgery to treat the epilepsy he had experienced since childhood. In an unfortunate turn of events, the surgery left him with severe amnesia: he lost memories from the years before the operation, and his brain became unable to store new long-term memories.

He was then regarded as someone living solely in the present, forgetting an experience as soon as it happened and only remembering bits and pieces of his past. Over the years, his amnesia and the structure of his brain had helped neuropsychologists learn more about cognitive functions .

Suzanne Corkin, a researcher, writer, and good friend of H.M., recently published a book about his life. Entitled Permanent Present Tense , this book is both a memoir and a case study following the struggles and joys of Henry Gustav Molaison.

13. Chris Sizemore

Chris Sizemore gained celebrity status in the psychology community when she was diagnosed with multiple personality disorder, now known as dissociative identity disorder.

Sizemore had several alter egos, which included Eve Black, Eve White, and Jane. Various papers about her stated that these alter egos were formed as a coping mechanism against the traumatic experiences she underwent in her childhood.

Sizemore said that although she eventually succeeded in unifying her alter egos into one dominant personality, there were periods of her past that were experienced by only one of them. For example, she said her husband had married her Eve White alter ego, not her.

Her story inspired her psychiatrists to write a book about her, entitled The Three Faces of Eve , which was then turned into a 1957 movie of the same title.

14. David Reimer

When David was just 8 months old, he lost his penis because of a botched circumcision operation.

Psychologist John Money then advised Reimer’s parents to raise him as a girl instead, naming him Brenda. His gender reassignment was supported by subsequent surgery and hormonal therapy.

Money described Reimer’s gender reassignment as a success, but problems started to arise as Reimer was growing up. His boyishness was not completely subdued by the hormonal therapy. When he was 14 years old, he learned about the secrets of his past and he underwent gender reassignment to become male again.

Reimer became an advocate for children going through the same difficult situation he had been through. He took his own life at the age of 38.

15. Kim Peek

Kim Peek was the inspiration behind Rain Man , an Oscar-winning movie about an autistic savant character played by Dustin Hoffman.

The movie was released in 1988, a time when autism wasn’t widely known and acknowledged yet. So it was an eye-opener for many people who watched the film.

In reality, Kim Peek was a non-autistic savant. He was exceptionally intelligent despite the brain abnormalities he was born with. He was like a walking encyclopedia, knowledgeable about travel routes, US zip codes, historical facts, and classical music. He also read and memorized approximately 12,000 books in his lifetime.

This list of experiments and case studies in psychology is just the tip of the iceberg! There are still countless interesting psychology studies that you can explore if you want to learn more about human behavior and dynamics.

You can also conduct your own mini-experiment or participate in a study conducted in your school or neighborhood. Just remember that there are ethical standards to follow so as not to repeat the lasting physical and emotional harm done to Little Albert or the Stanford Prison Experiment participants.

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70 (9), 1–70. https://doi.org/10.1037/h0093718

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. The Journal of Abnormal and Social Psychology, 63 (3), 575–582. https://doi.org/10.1037/h0045925

Elliott, J., Yale University., WGBH (Television station : Boston, Mass.), & PBS DVD (Firm). (2003). A class divided. New Haven, Conn.: Yale University Films.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58 (2), 203–210. https://doi.org/10.1037/h0041593

Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). A study of prisoners and guards in a simulated prison. Naval Research Review , 30 , 4-17.

Latane, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of Personality and Social Psychology, 10 (3), 215–221. https://doi.org/10.1037/h0026570

Mischel, W. (2014). The Marshmallow Test: Mastering self-control. Little, Brown and Co.

Thorndike, E. (1920) A Constant Error in Psychological Ratings. Journal of Applied Psychology , 4 , 25-29. http://dx.doi.org/10.1037/h0071663

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of experimental psychology , 3 (1), 1.


John Money Gender Experiment: Reimer Twins


The John Money Experiment involved David Reimer, a twin boy raised as a girl following a botched circumcision. Money asserted gender was primarily learned, not innate.

However, David struggled with his female identity and transitioned back to male in adolescence. The case challenged Money’s theory, highlighting the influence of biological sex on gender identity.

  • David Reimer: David was born in 1965; he had an identical (monozygotic) twin brother. When he was 8 months old, his penis was irreparably damaged during a botched circumcision.
  • His parents contacted John Money, a psychologist who was developing a theory of gender neutrality. His theory claimed that a child would take on the gender identity he or she was raised with rather than the gender identity corresponding to the child's biological sex.
  • David's parents brought him up as a girl, and Money wrote extensively about the case, claiming it supported his theory. However, Brenda, as he was renamed, suffered from severe psychological and emotional difficulties, and in his teens, when he found out what had happened, he reverted to living as a boy.
  • This case study supports the influence of testosterone on gender development: it suggests that David's brain development was influenced by the presence of this hormone and that its effect on gender identity was stronger than the influence of social factors.

What Did John Money Do to the Twins?

David Reimer was an identical twin boy born in Canada in 1965. When he was 8 months old, his penis was irreparably damaged during a botched circumcision.

John Money, a psychologist from Johns Hopkins University, had a prominent reputation in the field of sexual development and gender identity.

David’s parents took David to see Dr. Money at Johns Hopkins Hospital in Baltimore where he advised that David be “sex reassigned” as a girl through surgical, hormonal, and psychological treatments.

John Money believed that gender identity is primarily learned through one’s upbringing (nurture) as opposed to one’s inborn traits (nature). He proposed that gender identity could be changed through behavioural interventions, and he advocated that gender reassignment was the solution for treating any child with intersex traits or atypical sex anatomies.

Dr. John Money argued that it’s possible to habilitate a baby with a defective penis more effectively as a girl than a boy.

At the age of 22 months, David underwent extensive surgery in which his testes and penis were surgically removed and rudimentary female genitals were constructed.

David’s parents raised him as a female and gave him the name Brenda (this name was chosen to be similar to his birth name, Bruce). David was given estrogen during adolescence to promote the development of breasts.

He was forced to wear dresses and was directed to engage in typical female norms, such as playing with dolls and mingling with other girls.

Throughout his childhood, David was never informed that he was biologically male and that he was an experimental subject in a controversial investigation to bolster Money’s belief in the theory of gender neutrality – that nurture, not nature, determines gender identity and sexual orientation.

David’s twin brother, Brian, served as the ideal control because the brothers had the same genetic makeup, but one was raised as a girl and the other as a boy. Money continued to see David and Brian for consultations and check ups annually.

During these check-ups, Money would force the twins to rehearse sexual acts and inspect one another’s genitals. On some occasions, Money would even photograph the twins doing these exercises. Money claimed that childhood sexual rehearsal play was important for healthy childhood sexual exploration.

David later recalled that Money would respond with anger and verbal abuse whenever the twins resisted participation.

Money (1972) reported on Reimer’s progress as the “John/Joan case” to keep David’s identity anonymous. Money described David’s transition as successful.

He claimed that David behaved like a little girl and did not display any of the boyish mannerisms of his twin brother, Brian. Money published these data to reinforce his theory of gender malleability and to support his claim that gender identity is primarily learned.

In reality, though, David was never happy as a girl. He rejected his female identity and experienced severe gender dysphoria. He complained to his parents and teachers that he felt like a boy and refused to wear dresses or play with dolls.

He was severely bullied at school and experienced suicidal depression throughout adolescence. Upon learning the truth about his birth and sex of rearing from his father at the age of 15, David assumed a male gender identity, calling himself David.

David Reimer underwent treatments to reverse the reassignment, including testosterone injections and surgeries to remove his breasts and reconstruct a penis.

David married a woman named Jane at the age of 22 and adopted three children.

Dr. Milton Diamond, a psychologist and sexologist at the University of Hawaii and a longtime academic rival of Money, met with David to discuss his story in the mid-1990s.

Diamond and Sigmundson (1997) brought David’s experiences to international attention by reporting the true outcome of the case, in the hope of preventing physicians from making similar decisions when treating other infants. Diamond helped debunk Money’s theory that gender identity could be entirely shaped by upbringing and intervention.

David continued to suffer from psychological trauma throughout adulthood due to Money’s experiments and his harrowing childhood experiences. David endured unemployment, the death of his twin brother Brian, and marital difficulties.

At the age of 38, David committed suicide.

David’s case became the subject of multiple books, magazine articles, and documentaries. It brought attention to the complexities of gender identity and called into question the ethics of sex reassignment of infants and children.

Originally, Money’s view of gender malleability dominated the field because his initial reports described David’s reassignment as a success. However, this view was discredited once the truth about David came to light.

His case led to a decline in the number of sex reassignment surgeries performed on unambiguously male (XY) infants with a micropenis or other congenital malformations, and it cast further doubt on the malleability of gender and sex.

At present, however, the clinical literature is still deeply divided on the best way to manage cases of intersex infants.

Colapinto, J. (2000). As nature made him: The boy who was raised as a girl. New York, NY: HarperCollins.

Colapinto, J. (2018). As nature made him: The boy who was raised as a girl. Langara College.

Diamond, M., & Sigmundson, H. K. (1997). Sex reassignment at birth: Long-term review and clinical implications. Archives of Pediatrics & Adolescent Medicine, 151(3), 298-304.

Money, J., & Ehrhardt, A. A. (1972). Man & woman, boy & girl: The differentiation and dimorphism of gender identity from conception to maturity. Baltimore, MD: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Money, J. (1994). The concept of gender identity disorder in childhood and adolescence after 39 years. Journal of Sex & Marital Therapy, 20(3), 163-177.
