
The scientific method

Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.



Using the Scientific Method to Solve Problems

How the scientific method and reasoning can help simplify processes and solve problems.

By the Mind Tools Content Team

The processes of problem-solving and decision-making can be complicated and drawn out. In this article we look at how the scientific method, along with deductive and inductive reasoning can help simplify these processes.


‘It is a capital mistake to theorize before one has information. Insensibly one begins to twist facts to suit our theories, instead of theories to suit facts.’ Sherlock Holmes

The Scientific Method

The scientific method is a process used to explore observations and answer questions. Originally used by scientists looking to prove new theories, its use has spread into many other areas, including that of problem-solving and decision-making.

The scientific method is designed to eliminate the influences of bias, prejudice and personal beliefs when testing a hypothesis or theory. It has developed alongside science itself, with origins going back to the 13th century. The scientific method is generally described as a series of steps.

  • observations/theory
  • hypothesis
  • testing
  • analysis
  • explanation/conclusion

The first step is to develop a theory about the particular area of interest. A theory, in the context of logic or problem-solving, is a conjecture or speculation about something that is not necessarily fact, often based on a series of observations.

Once a theory has been devised, it can be questioned and refined into more specific hypotheses that can be tested. The hypotheses are potential explanations for the theory.

The testing and subsequent analysis of these hypotheses will eventually lead to a conclusion that can prove or disprove the original theory.

Applying the Scientific Method to Problem-Solving

How can the scientific method be used to solve a problem, such as a color printer that is not working?

1. Use observations to develop a theory.

In order to solve the problem, it must first be clear what the problem is. Observations made about the problem should be used to develop a theory. In this particular problem the theory might be that the color printer has run out of ink. This theory is developed as the result of observing the increasingly faded output from the printer.

2. Form a hypothesis.

Note down all the possible reasons for the problem. In this situation they might include:

  • The printer is set up as the default printer for all 40 people in the department and so is used more frequently than necessary.
  • There has been increased usage of the printer due to non-work related printing.
  • In an attempt to reduce costs, poor quality ink cartridges with limited amounts of ink in them have been purchased.
  • The printer is faulty.

All these possible reasons are hypotheses.

3. Test the hypothesis.

Once as many hypotheses (or reasons) as possible have been thought of, each one can be tested to discern if it is the cause of the problem. An appropriate test needs to be devised for each hypothesis. For example, it is fairly quick to ask everyone to check the default settings of the printer on each PC, or to check if the cartridge supplier has changed.

4. Analyze the test results.

Once all the hypotheses have been tested, the results can be analyzed. The type and depth of analysis will be dependent on each individual problem, and the tests appropriate to it. In many cases the analysis will be a very quick thought process. In others, where considerable information has been collated, a more structured approach, such as the use of graphs, tables or spreadsheets, may be required.

5. Draw a conclusion.

Based on the results of the tests, a conclusion can then be drawn about exactly what is causing the problem. The appropriate remedial action can then be taken, such as asking everyone to amend their default print settings, or changing the cartridge supplier.

Inductive and Deductive Reasoning

The scientific method involves the use of two basic types of reasoning, inductive and deductive.

Inductive reasoning makes a conclusion based on a set of empirical results. Empirical results are the product of the collection of evidence from observations. For example:

‘Every time it rains the pavement gets wet, therefore rain must be water’.

There has been no scientific determination in the hypothesis that rain is water; it is purely based on observation. The formation of a hypothesis in this manner is sometimes referred to as an educated guess. An educated guess, whilst not based on hard facts, must still be plausible, and consistent with what we already know, in order to present a reasonable argument.

Deductive reasoning can be thought of most simply in terms of ‘If A and B, then C’. For example:

  • if the window is above the desk, and
  • the desk is above the floor, then
  • the window must be above the floor

It works by building on a series of conclusions, which results in one final answer.
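The chain of deductions in that example can be sketched in a few lines of code. This is only an illustration of the transitive step, not anything from the article; the objects and relation are taken directly from the window/desk/floor example above.

```python
# A tiny sketch of the deductive chain above: if "A is above B" and "B is above C",
# then we may conclude "A is above C". The relations listed are just the example from the text.
above = {("window", "desk"), ("desk", "floor")}

def is_above(a, c):
    # Either a direct relation, or one deduced through an intermediate object.
    intermediates = {x for _, x in above}
    return (a, c) in above or any((a, b) in above and is_above(b, c) for b in intermediates)

print(is_above("window", "floor"))   # True: deduced from the two premises
```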

Social Sciences and the Scientific Method

The scientific method can be used to address any situation or problem where a theory can be developed. Although more often associated with natural sciences, it can also be used to develop theories in social sciences (such as psychology, sociology and linguistics), using both quantitative and qualitative methods.

Quantitative information is information that can be measured, and tends to focus on numbers and frequencies. Typically quantitative information might be gathered by experiments, questionnaires or psychometric tests. Qualitative information, on the other hand, is based on information describing meaning, such as human behavior, and the reasons behind it. Qualitative information is gathered by way of interviews and case studies, which are possibly not as statistically accurate as quantitative methods, but provide a more in-depth and rich description.

The resultant information can then be used to prove, or disprove, a hypothesis. Using a mix of quantitative and qualitative information is more likely to produce a rounded result based on the factual, quantitative information enriched and backed up by actual experience and qualitative information.

In terms of problem-solving or decision-making, for example, the qualitative information is that gained by looking at the ‘how’ and ‘why’, whereas quantitative information would come from the ‘where’, ‘what’ and ‘when’.

It may seem easy to come up with a brilliant idea, or to suspect what the cause of a problem may be. However, things can get more complicated when the idea needs to be evaluated, or when there may be more than one potential cause of a problem. In these situations, the use of the scientific method, and its associated reasoning, can help the user come to a decision, or reach a solution, secure in the knowledge that all options have been considered.


PrepScholar


The 6 Scientific Method Steps and How to Use Them


When you’re faced with a scientific problem, solving it can seem like an impossible prospect. There are so many possible explanations for everything we see and experience—how can you possibly make sense of them all? Science has a simple answer: the scientific method.

The scientific method is a method of asking and answering questions about the world. These guiding principles give scientists a model to work through when trying to understand the world, but where did that model come from, and how does it work?

In this article, we’ll define the scientific method, discuss its long history, and cover each of the scientific method steps in detail.

What Is the Scientific Method?

At its most basic, the scientific method is a procedure for conducting scientific experiments. It’s a set model that scientists in a variety of fields can follow, going from initial observation to conclusion in a loose but concrete format.

The number of steps varies, but the process begins with an observation, progresses through an experiment, and concludes with analysis and sharing data. One of the most important pieces of the scientific method is skepticism: the goal is to find truth, not to confirm a particular thought. That requires reevaluation and repeated experimentation, as well as examining your thinking through rigorous study.

There are in fact multiple scientific methods, as the basic structure can be easily modified. The one we typically learn about in school is the basic method, based in logic and problem solving, typically used in “hard” science fields like biology, chemistry, and physics. It may vary in other fields, such as psychology, but the basic premise of making observations, testing, and continuing to improve a theory from the results remains the same.


The History of the Scientific Method

The scientific method as we know it today is based on thousands of years of scientific study. Its development goes all the way back to ancient Mesopotamia, Greece, and India.

The Ancient World

In ancient Greece, Aristotle devised an inductive-deductive process , which weighs broad generalizations from data against conclusions reached by narrowing down possibilities from a general statement. However, he favored deductive reasoning, as it identifies causes, which he saw as more important.

Aristotle wrote a great deal about logic and many of his ideas about reasoning echo those found in the modern scientific method, such as ignoring circular evidence and limiting the number of middle terms between the beginning of an experiment and the end. Though his model isn’t the one that we use today, the reliance on logic and thorough testing are still key parts of science today.

The Middle Ages

The next big step toward the development of the modern scientific method came in the Middle Ages, particularly in the Islamic world. Ibn al-Haytham, a physicist from what we now know as Iraq, developed a method of testing, observing, and deducing for his research on vision. Al-Haytham was critical of Aristotle’s lack of inductive reasoning, which played an important role in his own research.

Other scientists, including Abū Rayhān al-Bīrūnī, Ibn Sina, and Robert Grosseteste also developed models of scientific reasoning to test their own theories. Though they frequently disagreed with one another and Aristotle, those disagreements and refinements of their methods led to the scientific method we have today.

Following those major developments, particularly Grosseteste’s work, Roger Bacon developed his own cycle of observation (seeing that something occurs), hypothesis (making a guess about why that thing occurs), experimentation (testing that the thing occurs), and verification (an outside person ensuring that the result of the experiment is consistent).

After joining the Franciscan Order, Bacon was granted a special commission to write about science; typically, Friars were not allowed to write books or pamphlets. With this commission, Bacon outlined important tenets of the scientific method, including causes of error, methods of knowledge, and the differences between speculative and experimental science. He also used his own principles to investigate the causes of a rainbow, demonstrating the method’s effectiveness.

Scientific Revolution

Throughout the Renaissance, more great thinkers became involved in devising a thorough, rigorous method of scientific study. Francis Bacon brought inductive reasoning further into the method, whereas Descartes argued that the laws of the universe meant that deductive reasoning was sufficient. Galileo’s research was also inductive reasoning-heavy, as he believed that researchers could not account for every possible variable; therefore, repetition was necessary to eliminate faulty hypotheses and experiments.

All of this led to the birth of the Scientific Revolution, which took place during the sixteenth and seventeenth centuries. In 1660, a group of philosophers and physicians joined together to work on scientific advancement. After approval from England’s crown, the group became known as the Royal Society, which helped create a thriving scientific community and an early academic journal to help introduce rigorous study and peer review.

Previous generations of scientists had touched on the importance of induction and deduction, but Sir Isaac Newton proposed that both were equally important. This contribution helped establish the importance of multiple kinds of reasoning, leading to more rigorous study.

As science began to splinter into separate areas of study, it became necessary to define different methods for different fields. Karl Popper was a leader in this area—he established that science could be subject to error, sometimes intentionally. This was particularly tricky for “soft” sciences like psychology and social sciences, which require different methods. Popper’s theories furthered the divide between sciences like psychology and “hard” sciences like chemistry or physics.

Paul Feyerabend argued that Popper’s methods were too restrictive for certain fields, and followed a less restrictive method hinged on “anything goes,” as great scientists had made discoveries without the Scientific Method. Feyerabend suggested that throughout history scientists had adapted their methods as necessary, and that sometimes it would be necessary to break the rules. This approach suited social and behavioral scientists particularly well, leading to a more diverse range of models for scientists in multiple fields to use.


The Scientific Method Steps

Though different fields may have variations on the model, the basic scientific method is as follows:

#1: Make Observations 

Notice something, such as the air temperature during the winter, what happens when ice cream melts, or how your plants behave when you forget to water them.

#2: Ask a Question

Turn your observation into a question. Why is the temperature lower during the winter? Why does my ice cream melt? Why does my toast always fall butter-side down?

This step can also include doing some research. You may be able to find answers to these questions already, but you can still test them!

#3: Make a Hypothesis

A hypothesis is an educated guess of the answer to your question. Why does your toast always fall butter-side down? Maybe it’s because the butter makes that side of the bread heavier.

A good hypothesis leads to a prediction that you can test, phrased as an if/then statement. In this case, we can pick something like, “If toast is buttered, then it will hit the ground butter-first.”

#4: Experiment

Your experiment is designed to test whether your prediction about what will happen is true. A good experiment will test one variable at a time. For example, we’re trying to test whether butter weighs down one side of toast, making it more likely to hit the ground first.

The unbuttered toast is our control. If we determine the chance that a slice of unbuttered toast, marked with a dot, will hit the ground on a particular side, we can compare those results to our buttered toast to see if there’s a correlation between the presence of butter and which way the toast falls.

If we decided not to toast the bread, that would be introducing a new question—whether or not toasting the bread has any impact on how it falls. Since that’s not part of our test, we’ll stick with determining whether the presence of butter has any impact on which side hits the ground first.

#5: Analyze Data

After our experiment, we discover that both buttered toast and unbuttered toast have a 50/50 chance of hitting the ground on the buttered or marked side when dropped from a consistent height, straight down. It looks like our hypothesis was incorrect—it’s not the butter that makes the toast hit the ground in a particular way, so it must be something else.
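To make that comparison concrete, here is a minimal sketch of how the tallying might look. All of the counts are invented for illustration; nothing here comes from an actual experiment.

```python
# Tally of a hypothetical toast-drop experiment. The counts are invented;
# the point is comparing the two proportions, not the specific numbers.
drops = 20
buttered_down = 10      # buttered slices that landed butter-side down
unbuttered_down = 11    # unbuttered (dot-marked) slices that landed marked-side down

print(f"Buttered:   {buttered_down / drops:.0%} landed butter-side down")
print(f"Unbuttered: {unbuttered_down / drops:.0%} landed marked-side down")
# Similar proportions suggest the butter is not what decides which side lands first.
```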

Since we didn’t get the desired result, it’s back to the drawing board. Our hypothesis wasn’t correct, so we’ll need to start fresh. Now that you think about it, your toast seems to hit the ground butter-first when it slides off your plate, not when you drop it from a consistent height. That can be the basis for your new experiment.

#6: Communicate Your Results

Good science needs verification. Your experiment should be replicable by other people, so you can put together a report about how you ran your experiment to see if other people’s findings are consistent with yours.

This may be useful for class or a science fair. Professional scientists may publish their findings in scientific journals, where other scientists can read and attempt their own versions of the same experiments. Being part of a scientific community helps your experiments be stronger because other people can see if there are flaws in your approach—such as if you tested with different kinds of bread, or sometimes used peanut butter instead of butter—that can lead you closer to a good answer.


A Scientific Method Example: Falling Toast

We’ve run through a quick recap of the scientific method steps, but let’s look a little deeper by trying again to figure out why toast so often falls butter side down.

#1: Make Observations

At the end of our last experiment, where we learned that butter doesn’t actually make toast more likely to hit the ground on that side, we remembered that the times when our toast hits the ground butter side first are usually when it’s falling off a plate.

The easiest question we can ask is, “Why is that?”

We can actually search this online and find a pretty detailed answer as to why this is true. But we’re budding scientists—we want to see it in action and verify it for ourselves! After all, good science should be replicable, and we have all the tools we need to test out what’s really going on.

Why do we think that buttered toast hits the ground butter-first? We know it’s not because it’s heavier, so we can strike that out. Maybe it’s because of the shape of our plate?

That’s something we can test. We’ll phrase our hypothesis as, “If my toast slides off my plate, then it will fall butter-side down.”

Just seeing that toast falls off a plate butter-side down isn’t enough for us. We want to know why, so we’re going to take things a step further—we’ll set up a slow-motion camera to capture what happens as the toast slides off the plate.

We’ll run the test ten times, each time tilting the same plate until the toast slides off. We’ll make note of each time the butter side lands first and review the footage to see what’s going on.

When we review the footage, we’ll likely notice that the bread starts to flip when it slides off the edge, changing how it falls in a way that didn’t happen when we dropped it ourselves.

That answers our question, but it’s not the complete picture —how do other plates affect how often toast hits the ground butter-first? What if the toast is already butter-side down when it falls? These are things we can test in further experiments with new hypotheses!

Now that we have results, we can share them with others who can verify our results. As mentioned above, being part of the scientific community can lead to better results. If your results were wildly different from the established thinking about buttered toast, that might be cause for reevaluation. If they’re the same, they might lead others to make new discoveries about buttered toast. At the very least, you have a cool experiment you can share with your friends!

Key Scientific Method Tips

Though science can be complex, the benefit of the scientific method is that it gives you an easy-to-follow means of thinking about why and how things happen. To use it effectively, keep these things in mind!

Don’t Worry About Proving Your Hypothesis

One of the important things to remember about the scientific method is that it’s not necessarily meant to prove your hypothesis right. It’s great if you do manage to guess the reason for something right the first time, but the ultimate goal of an experiment is to find the true reason for your observation to occur, not to prove your hypothesis right.

Good science sometimes means that you’re wrong. That’s not a bad thing—a well-designed experiment with an unanticipated result can be just as revealing, if not more, than an experiment that confirms your hypothesis.

Be Prepared to Try Again

If the data from your experiment doesn’t match your hypothesis, that’s not a bad thing. You’ve eliminated one possible explanation, which brings you one step closer to discovering the truth.

The scientific method isn’t something you’re meant to do exactly once to prove a point. It’s meant to be repeated and adapted to bring you closer to a solution. Even if you can demonstrate truth in your hypothesis, a good scientist will run an experiment again to be sure that the results are replicable. You can even tweak a successful hypothesis to test another factor, such as if we redid our buttered toast experiment to find out whether different kinds of plates affect whether or not the toast falls butter-first. The more we test our hypothesis, the stronger it becomes!

What’s Next?

Want to learn more about the scientific method? These important high school science classes will no doubt cover it in a variety of different contexts.

Test your ability to follow the scientific method using these at-home science experiments for kids !

Need some proof that science is fun? Try making slime!


Melissa Brinks graduated from the University of Washington in 2014 with a Bachelor's in English with a creative writing emphasis. She has spent several years tutoring K-12 students in many subjects, including in SAT prep, to help them prepare for their college education.



Chemistry LibreTexts

1.3: The Scientific Method - How Chemists Think


Learning Objectives

  • Identify the components of the scientific method.

Scientists search for answers to questions and solutions to problems by using a procedure called the scientific method. This procedure consists of making observations, formulating hypotheses, and designing experiments, which leads to additional observations, hypotheses, and experiments in repeated cycles (Figure \(\PageIndex{1}\)).

Figure \(\PageIndex{1}\): The repeating cycle of observations, hypotheses, and experiments in the scientific method.

Step 1: Make observations

Observations can be qualitative or quantitative. Qualitative observations describe properties or occurrences in ways that do not rely on numbers. Examples of qualitative observations include the following: "the outside air temperature is cooler during the winter season," "table salt is a crystalline solid," "sulfur crystals are yellow," and "dissolving a penny in dilute nitric acid forms a blue solution and a brown gas." Quantitative observations are measurements, which by definition consist of both a number and a unit. Examples of quantitative observations include the following: "the melting point of crystalline sulfur is 115.21° Celsius," and "35.9 grams of table salt—the chemical name of which is sodium chloride—dissolve in 100 grams of water at 20° Celsius." For the question of the dinosaurs’ extinction, the initial observation was quantitative: iridium concentrations in sediments dating to 66 million years ago were 20–160 times higher than normal.

Step 2: Formulate a hypothesis

After deciding to learn more about an observation or a set of observations, scientists generally begin an investigation by forming a hypothesis, a tentative explanation for the observation(s). The hypothesis may not be correct, but it puts the scientist’s understanding of the system being studied into a form that can be tested. For example, the observation that we experience alternating periods of light and darkness corresponding to observed movements of the sun, moon, clouds, and shadows is consistent with either one of two hypotheses:

  • Earth rotates on its axis every 24 hours, alternately exposing one side to the sun.
  • The sun revolves around Earth every 24 hours.

Suitable experiments can be designed to choose between these two alternatives. For the disappearance of the dinosaurs, the hypothesis was that the impact of a large extraterrestrial object caused their extinction. Unfortunately (or perhaps fortunately), this hypothesis does not lend itself to direct testing by any obvious experiment, but scientists can collect additional data that either support or refute it.

Step 3: Design and perform experiments

After a hypothesis has been formed, scientists conduct experiments to test its validity. Experiments are systematic observations or measurements, preferably made under controlled conditions, that is, under conditions in which a single variable changes.

Step 4: Accept or modify the hypothesis

A properly designed and executed experiment enables a scientist to determine whether or not the original hypothesis is valid. If the hypothesis is valid, the scientist can proceed to step 5. In other cases, experiments often demonstrate that the hypothesis is incorrect or that it must be modified and requires further experimentation.

Step 5: Development into a law and/or theory

More experimental data are then collected and analyzed, at which point a scientist may begin to think that the results are sufficiently reproducible (i.e., dependable) to merit being summarized in a law, a verbal or mathematical description of a phenomenon that allows for general predictions. A law simply states what happens; it does not address the question of why.

One example of a law, the law of definite proportions, discovered by the French scientist Joseph Proust (1754–1826), states that a chemical substance always contains the same proportions of elements by mass. Thus, sodium chloride (table salt) always contains the same proportion by mass of sodium to chlorine, in this case 39.34% sodium and 60.66% chlorine by mass, and sucrose (table sugar) is always 42.11% carbon, 6.48% hydrogen, and 51.41% oxygen by mass.
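As a quick check, the sodium chloride percentages follow directly from the standard atomic masses of sodium and chlorine. A short sketch of the arithmetic:

```python
# Mass percentages of sodium chloride from standard atomic masses (g/mol).
m_na = 22.99   # sodium
m_cl = 35.45   # chlorine
total = m_na + m_cl

print(f"Na: {100 * m_na / total:.2f}% by mass")   # ~39.34%
print(f"Cl: {100 * m_cl / total:.2f}% by mass")   # ~60.66%
```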

Whereas a law states only what happens, a theory attempts to explain why nature behaves as it does. Laws are unlikely to change greatly over time unless a major experimental error is discovered. In contrast, a theory, by definition, is incomplete and imperfect, evolving with time to explain new facts as they are discovered.

Because scientists can enter the cycle shown in Figure \(\PageIndex{1}\) at any point, the actual application of the scientific method to different topics can take many different forms. For example, a scientist may start with a hypothesis formed by reading about work done by others in the field, rather than by making direct observations.

Example \(\PageIndex{1}\)

Classify each statement as a law, a theory, an experiment, a hypothesis, or an observation.

  • Ice always floats on liquid water.
  • Birds evolved from dinosaurs.
  • Hot air is less dense than cold air, probably because the components of hot air are moving more rapidly.
  • When 10 g of ice were added to 100 mL of water at 25°C, the temperature of the water decreased to 15.5°C after the ice melted.
  • The ingredients of Ivory soap were analyzed to see whether it really is 99.44% pure, as advertised.

Solution

  • This is a general statement of a relationship between the properties of liquid and solid water, so it is a law.
  • This is a possible explanation for the origin of birds, so it is a hypothesis.
  • This is a statement that tries to explain the relationship between the temperature and the density of air based on fundamental principles, so it is a theory.
  • The temperature is measured before and after a change is made in a system, so these are observations.
  • This is an analysis designed to test a hypothesis (in this case, the manufacturer’s claim of purity), so it is an experiment.

Exercise \(\PageIndex{1}\) 

Classify each statement as a law, a theory, an experiment, a hypothesis, a qualitative observation, or a quantitative observation.

  • Measured amounts of acid were added to a Rolaids tablet to see whether it really “consumes 47 times its weight in excess stomach acid.”
  • Heat always flows from hot objects to cooler ones, not in the opposite direction.
  • The universe was formed by a massive explosion that propelled matter into a vacuum.
  • Michael Jordan is the greatest pure shooter to ever play professional basketball.
  • Limestone is relatively insoluble in water, but dissolves readily in dilute acid with the evolution of a gas.

The scientific method is a method of investigation involving experimentation and observation to acquire new knowledge, solve problems, and answer questions. The key steps in the scientific method include the following:

  • Step 1: Make observations.
  • Step 2: Formulate a hypothesis.
  • Step 3: Test the hypothesis through experimentation.
  • Step 4: Accept or modify the hypothesis.
  • Step 5: Develop into a law and/or a theory.


A Problem-Solving Experiment

Using Beer’s Law to Find the Concentration of Tartrazine

The Science Teacher—January/February 2022 (Volume 89, Issue 3)

By Kevin Mason, Steve Schieffer, Tara Rose, and Greg Matthias


A problem-solving experiment is a learning activity that uses experimental design to solve an authentic problem. It combines two evidence-based teaching strategies: problem-based learning and inquiry-based learning. The use of problem-based learning and scientific inquiry as an effective pedagogical tool in the science classroom has been well established and strongly supported by research (Akinoglu and Tandogan 2007; Areepattamannil 2012; Furtak, Seidel, and Iverson 2012; Inel and Balim 2010; Merritt et al. 2017; Panasan and Nuangchalerm 2010; Wilson, Taylor, and Kowalski 2010).

Floyd James Rutherford, the founder of the American Association for the Advancement of Science (AAAS) Project 2061, once underscored, “To separate conceptually scientific content from scientific inquiry is to make it highly probable that the student will properly understand neither” (1964, p. 84). A more recent study using randomized control trials showed that teachers who used an inquiry and problem-based pedagogy for seven months improved student performance in math and science (Bando, Nashlund-Hadley, and Gertler 2019). A problem-solving experiment uses problem-based learning by posing an authentic or meaningful problem for students to solve and inquiry-based learning by requiring students to design an experiment to collect and analyze data to solve the problem.

In the problem-solving experiment described in this article, students used Beer’s Law to collect and analyze data to determine if a person consumed a hazardous amount of tartrazine (Yellow Dye #5) for their body weight. The students used their knowledge of solutions, molarity, dilutions, and Beer’s Law to design their own experiment and calculate the amount of tartrazine in a yellow sports drink (or citrus-flavored soda).

According to the Next Generation Science Standards, energy is defined as “a quantitative property of a system that depends on the motion and interactions of matter and radiation with that system” ( NGSS Lead States 2013 ). Interactions of matter and radiation can be some of the most challenging for students to observe, investigate, and conceptually understand. As a result, students need opportunities to observe and investigate the interactions of matter and radiation. Light is one example of radiation that interacts with matter.

Light is electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle. When light interacts with matter, light can be reflected at the surface, absorbed by the matter, or transmitted through the matter (Figure 1). When a single beam of light enters a substance perpendicularly (at a 90° angle to the surface), the amount of reflection is minimal. Therefore, the light will either be absorbed by the substance or be transmitted through the substance. When a given wavelength of light shines into a solution, the amount of light that is absorbed will depend on the identity of the substance, the thickness of the container, and the concentration of the solution.

Figure 1. Light interacting with matter. (Retrieved from https://etorgerson.files.wordpress.com/2011/05/light-reflect-refract-absorb-label.jpg)

Beer’s Law states the amount of light absorbed is directly proportional to the thickness and concentration of a solution. Beer’s Law is also sometimes known as the Beer-Lambert Law. A solution of a higher concentration will absorb more light and transmit less light ( Figure 2 ). Similarly, if the solution is placed in a thicker container that requires the light to pass through a greater distance, then the solution will absorb more light and transmit less light.

Figure 2. Light transmitted through a solution. (Retrieved from https://media.springernature.com/original/springer-static/image/chp%3A10.1007%2F978-3-319-57330-4_13/MediaObjects/432946_1_En_13_Fig4_HTML.jpg)
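The proportionality in Beer’s Law can be illustrated with a short sketch. The molar absorptivity used here is a made-up illustrative value, not a measured one for any real dye.

```python
# Beer's Law: A = epsilon * b * C
# epsilon = molar absorptivity (M^-1 cm^-1), b = path length (cm), C = concentration (M).
def absorbance(epsilon, b_cm, conc_molar):
    return epsilon * b_cm * conc_molar

eps = 2.0e4    # hypothetical molar absorptivity
path = 1.0     # a standard 1 cm cuvette
for c in (1e-5, 2e-5, 4e-5):
    print(f"C = {c:.0e} M -> A = {absorbance(eps, path, c):.2f}")
# Doubling the concentration (or the path length) doubles the absorbance.
```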

Figure 3. Definitions of key terms.

Absorbance (A) – the process of light energy being captured by a substance

Beer’s Law (Beer-Lambert Law) – the absorbance (A) of light is directly proportional to the molar absorptivity (ε), thickness (b), and concentration (C) of the solution (A = εbC)

Concentration (C) – the amount of solute dissolved per amount of solution

Cuvette – a container used to hold a sample to be tested in a spectrophotometer

Energy (E) – a quantitative property of a system that depends on motion and interactions of matter and radiation with that system (NGSS Lead States 2013).

Intensity (I) – the amount or brightness of light

Light – electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle

Molar Absorptivity (ε) – a property that represents the amount of light absorbed by a given substance per molarity of the solution and per centimeter of thickness (M⁻¹ cm⁻¹)

Molarity (M) – the number of moles of solute per liters of solution (Mol/L)

Reflection – the process of light energy bouncing off the surface of a substance

Spectrophotometer – a device used to measure the absorbance of light by a substance

Tartrazine – widely used food and liquid dye

Transmittance (T) – the process of light energy passing through a substance

The amount of light absorbed by a solution can be measured using a spectrophotometer. The solution of a given concentration is placed in a small container called a cuvette. The cuvette has a known thickness that can be held constant during the experiment. It is also possible to obtain cuvettes of different thicknesses to study the effect of thickness on the absorption of light. The key definitions of the terms related to Beer’s Law and the learning activity presented in this article are provided in Figure 3 .

Overview of the problem-solving experiment

In the problem presented to students, a 140-pound athlete drinks two bottles of yellow sports drink every day ( Figure 4 ; see Online Connections). When she starts to notice a rash on her skin, she reads the label of the sports drink and notices that it contains a yellow dye known as tartrazine. While tartrazine is safe to drink, it may produce some potential side effects in large amounts, including rashes, hives, or swelling. The students must design an experiment to determine the concentration of tartrazine in the yellow sports drink and the number of milligrams of tartrazine in two bottles of the sports drink.

While a sports drink may have many ingredients, the vast majority of ingredients—such as sugar or electrolytes—are colorless when dissolved in water solution. The dyes added to the sports drink are responsible for the color of the sports drink. Food manufacturers may use different dyes to color sports drinks to the desired color. Red dye #40 (allura red), blue dye #1 (brilliant blue), yellow dye #5 (tartrazine), and yellow dye #6 (sunset yellow) are the four most common dyes or colorants in sports drinks and many other commercial food products ( Stevens et al. 2015 ). The concentration of the dye in the sports drink affects the amount of light absorbed.

In this problem-solving experiment, the students used the previously studied concept of Beer’s Law—using serial dilutions and absorbance—to find the concentration (molarity) of tartrazine in the sports drink. Based on the evidence, the students then determined if the person had exceeded the maximum recommended daily allowance of tartrazine, given in mg/kg of body mass. The learning targets for this problem-solving experiment are shown in Figure 5 (see Online Connections).

Pre-laboratory experiences

A problem-solving experiment is a form of guided inquiry, which will generally require some prerequisite knowledge and experience. In this activity, the students needed prior knowledge and experience with Beer’s Law and the techniques in using Beer’s Law to determine an unknown concentration. Prior to the activity, students learned how Beer’s Law is used to relate absorbance to concentration as well as how to use the equation M₁V₁ = M₂V₂ to determine concentrations of dilutions. The students had a general understanding of molarity and using dimensional analysis to change units in measurements.
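For readers who want the dilution arithmetic spelled out, here is a minimal sketch of the M₁V₁ = M₂V₂ calculation. The target concentration and final volume are illustrative choices, not values prescribed by the activity.

```python
# Dilution arithmetic: M1 * V1 = M2 * V2
def stock_volume_needed(m_stock, m_target, v_final_ml):
    """Volume of stock (mL) to dilute to v_final_ml at concentration m_target."""
    return m_target * v_final_ml / m_stock

m_stock = 0.01   # 0.01 M tartrazine stock used in the activity
v_needed = stock_volume_needed(m_stock, 1e-4, 100.0)
print(f"Dilute {v_needed:.2f} mL of stock to 100 mL for a 1 x 10^-4 M solution")  # 1.00 mL
```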

The techniques for using Beer’s Law were introduced in part through a laboratory experiment using various concentrations of copper sulfate. A known concentration of copper sulfate was provided and the students followed a procedure to prepare dilutions. Students learned the technique for choosing the wavelength that provided the maximum absorbance for the solution to be tested (λmax), which is important for Beer’s Law to create a linear relationship between absorbance and solution concentration. Students graphed the absorbance of each concentration in a spreadsheet as a scatterplot and added a linear trend line. Through class discussion, the teacher checked for understanding in using the equation of the line to determine the concentration of an unknown copper sulfate solution.

After the students graphed the data, they discussed how the R² value related to the data set used to construct the graph. After completing this experiment, the students were comfortable making dilutions from a stock solution, calculating concentrations, and using the spectrophotometer to use Beer’s Law to determine an unknown concentration.
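The calibration-curve step the students practiced (graphing absorbance against known concentration, fitting a linear trendline, and inverting the line to estimate an unknown) can be sketched as follows. Every reading below is invented for illustration.

```python
import numpy as np

# Calibration curve: fit absorbance vs. known concentration, then invert the line
# to estimate an unknown concentration. All numbers are hypothetical.
conc = np.array([1e-5, 2e-5, 4e-5, 8e-5])      # known dilution concentrations (M)
absorb = np.array([0.11, 0.21, 0.42, 0.83])    # measured absorbances (hypothetical)

slope, intercept = np.polyfit(conc, absorb, 1)  # linear trendline: A = slope*C + intercept
r = np.corrcoef(conc, absorb)[0, 1]
print(f"A = {slope:.3e} * C + {intercept:.4f}   (R^2 = {r**2:.4f})")

a_unknown = 0.35                                 # absorbance of an unknown sample (hypothetical)
c_unknown = (a_unknown - intercept) / slope
print(f"Estimated unknown concentration: {c_unknown:.2e} M")
```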

Introducing the problem

After the initial experiment on Beer’s Law, the problem-solving experiment was introduced. The problem presented to students is shown in Figure 4 (see Online Connections). A problem-solving experiment provides students with a valuable opportunity to collaborate with other students in designing an experiment and solving a problem. For this activity, the students were assigned to heterogeneous or mixed-ability laboratory groups. Groups should be diversified based on gender; research has shown that gender diversity among groups improves academic performance, while racial diversity has no significant effect ( Hansen, Owan, and Pan 2015 ). It is also important to support students with special needs when assigning groups. The mixed-ability groups were assigned intentionally to place students with special needs with a peer who has the academic ability and disposition to provide support. In addition, some students may need additional accommodations or modifications for this learning activity, such as an outlined lab report, a shortened lab report format, or extended time to complete the analysis. All students were required to wear chemical-splash goggles and gloves, and use caution when handling solutions and glass apparatuses.

Designing the experiment

During this activity, students worked in lab groups to design their own experiment to solve a problem. The teacher used small-group and whole-class discussions to help students understand the problem. Students discussed what information was provided and what they need to know and do to solve the problem. In planning the experiment, the teacher did not provide a procedure and intentionally provided only minimal support to the students as needed. The students designed their own experimental procedure, which encouraged critical thinking and problem solving. The students needed to be allowed to struggle to some extent. The teacher provided some direction and guidance by posing questions for students to consider and answer for themselves. Students were also frequently reminded to review their notes and the previous experiment on Beer’s Law to help them better use their resources to solve the problem. The use of heterogeneous or mixed-ability groups also helped each group be more self-sufficient and successful in designing and conducting the experiment.

Students created a procedure for their experiment with the teacher providing suggestions or posing questions to enhance the experimental design, if needed. Safety was addressed during this consultation to correct safety concerns in the experimental design or provide safety precautions for the experiment. Students needed to wear splash-proof goggles and gloves throughout the experiment. In a few cases, students realized some opportunities to improve their experimental design during the experiment. This was allowed with the teacher’s approval, and the changes to the procedure were documented for the final lab report.

Conducting the experiment

A sample of the sports drink and a 0.01 M stock solution of tartrazine were provided to the students. There are many choices of sports drinks available, but it is recommended that the ingredients are checked to verify that tartrazine (yellow dye #5) is the only colorant added. This will prevent other colorants from affecting the spectroscopy results in the experiment. A citrus-flavored soda could also be used as an alternative because many sodas have tartrazine added as well. It is important to note that tartrazine is considered safe to drink, but it may produce some potential side effects in large amounts, including rashes, hives, or swelling. A list of the materials needed for this problem-solving experiment is shown in Figure 6 (see Online Connections).

This problem-solving experiment required students to create dilutions of known concentrations of tartrazine as a reference to determine the unknown concentration of tartrazine in a sports drink. To create the dilutions, the students were provided with a 0.01 M stock solution of tartrazine. The teacher purchased powdered tartrazine, available from numerous vendors, to create the stock solution. The 0.01 M stock solution was prepared by weighing 0.534 g of tartrazine and dissolving it in enough distilled water to make a 100 ml solution. Yellow food coloring could be used as an alternative, but it would take some research to determine its concentration. Since students have previously explored the experimental techniques, they should know to prepare dilutions that are somewhat darker and somewhat lighter in color than the yellow sports drink sample. Students should use five dilutions for best results.
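A molar mass of roughly 534 g/mol for tartrazine is implied by those figures (0.534 g in 100 mL giving 0.01 M); a quick check of the arithmetic:

```python
# Check of the stock-solution preparation: 0.534 g of tartrazine in 100 mL of water.
mass_g = 0.534
molar_mass = 534.4   # g/mol for tartrazine (approximate, implied by the figures above)
volume_l = 0.100     # 100 mL

molarity = (mass_g / molar_mass) / volume_l
print(f"Stock concentration: {molarity:.4f} M")   # ~0.0100 M
```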

Typically, a good range for the yellow sports drink is standard dilutions ranging from 1 × 10⁻³ M to 1 × 10⁻⁵ M. The teacher may need to caution the students that if a dilution is too dark, it will not yield good results and will lower the R² value. Students that used very dark dilutions often realized that eliminating that data point created a better linear trendline, as long as it didn’t reduce the data set to fewer than four points. Some students even tried to use the 0.01 M stock solution without any dilution. This was much too dark. The students needed to do substantial dilutions to get the solutions in the range of the sports drink.

After the dilutions are created, the absorbance of each dilution was measured using a spectrophotometer. A Vernier SpectroVis (~$400) spectrophotometer was used to measure the absorbance of the prepared dilutions with known concentrations. The students adjusted the spectrophotometer to use different wavelengths of light and selected the wavelength with the highest absorbance reading. The same wavelength was then used for each measurement of absorbance. A wavelength of 650 nanometers (nm) provided an accurate measurement and good linear relationship. After measuring the absorbance of the dilutions of known concentrations, the students measured the absorbance of the sports drink with an unknown concentration of tartrazine using the spectrophotometer at the same wavelength. If a spectrophotometer is not available, a color comparison can be used as a low-cost alternative for completing this problem-solving experiment ( Figure 7 ; see Online Connections).

Analyzing the results

After completing the experiment, the students graphed the absorbance and known tartrazine concentrations of the dilutions on a scatterplot to create a linear trendline. In this experiment, absorbance was the dependent variable, which should be graphed on the y-axis. Some students mistakenly reversed the axes on the scatterplot. Next, the students used the graph to find the equation for the line. Then, the students solved for the unknown concentration (molarity) of tartrazine in the sports drink given the linear equation and the absorbance of the sports drink measured experimentally.

To answer the question posed in the problem, the students also calculated the maximum amount of tartrazine that could be safely consumed by a 140 lb. person, using the information given in the problem. A common error in solving the problem was not converting the units of volume given in the problem from ounces to liters. With the molarity and volume in liters, the students then calculated the mass of tartrazine consumed per day in milligrams. A sample of the graph and calculations from one student group are shown in Figure 8 . Finally, based on their calculations, the students answered the question posed in the original problem and determined if the person’s daily consumption of tartrazine exceeded the threshold for safe consumption. In this case, the students concluded that the person did NOT consume more than the allowable daily limit of tartrazine.

Figure 8. Sample graph and calculations from a student group.
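An end-to-end sketch of that calculation chain is shown below. Every measured value, the bottle size, and the intake limit are hypothetical placeholders; the real numbers come from the students’ own data and from the problem handout linked under Online Connections.

```python
# End-to-end analysis sketch: calibration line -> drink molarity -> mg per day -> compare to a limit.
# All inputs are hypothetical placeholders, not values from the article.
slope, intercept = 1.05e4, 0.002   # calibration trendline A = slope*C + intercept (hypothetical)
a_drink = 0.12                     # measured absorbance of the sports drink (hypothetical)

c_drink = (a_drink - intercept) / slope     # molarity of tartrazine in the drink (M)
molar_mass = 534.4                          # g/mol, tartrazine
volume_l = 2 * 20 * 0.0296                  # two 20 oz bottles per day, converted to liters (assumed size)
mg_per_day = c_drink * volume_l * molar_mass * 1000

body_kg = 140 * 0.4536                      # 140 lb athlete converted to kilograms
limit_mg_per_day = 7.5 * body_kg            # assumed safe-intake threshold in mg/kg/day (placeholder)

print(f"Consumed: {mg_per_day:.1f} mg/day   Limit: {limit_mg_per_day:.0f} mg/day")
print("Exceeds the limit" if mg_per_day > limit_mg_per_day else "Within the limit")
```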

Communicating the results

After conducting the experiment, students reported their results in a written laboratory report that included the following sections: title, purpose, introduction, hypothesis, materials and methods, data and calculations, conclusion, and discussion. The laboratory report was assessed using the scoring rubric shown in Figure 9 (see Online Connections). In general, the students did very well on this problem-solving experiment. Students typically scored a three or higher on each criterion of the rubric. Throughout the activity, the students successfully demonstrated their ability to design an experiment, collect data, perform calculations, solve a problem, and effectively communicate those results.

This activity is authentic problem-based learning in science as the true concentration of tartrazine in the sports drink was not provided by the teacher or known by the students. The students were generally somewhat biased as they assumed the experiment would result in exceeding the recommended maximum consumption of tartrazine. Some students struggled with reporting that the recommended limit was far higher than the two sports drinks consumed by the person each day. This allows for a great discussion about the use of scientific methods and evidence to provide unbiased answers to meaningful questions and problems.

The most common errors in this problem-solving experiment were calculation errors, with the most common being calculating the concentrations of the dilutions (perhaps due to the use of very small concentrations). There were also several common errors in communicating the results in the laboratory report. In some cases, students did not provide enough background information in the introduction of the report. When the students communicated the results, some students also failed to reference specific data from the experiment. Finally, in the discussion section, some students expressed concern or doubts in the results, not because there was an obvious error, but because they did not believe the level consumed could be so much less than the recommended consumption limit of tartrazine.

The scientific study and investigation of energy and matter are salient topics addressed in the Next Generation Science Standards ( Figure 10 ; see Online Connections). In a chemistry classroom, students should have multiple opportunities to observe and investigate the interaction of energy and matter. In this problem-solving experiment students used Beer’s Law to collect and analyze data to determine if a person consumed an amount of tartrazine that exceeded the maximum recommended daily allowance. The students correctly concluded that the person in the problem did not consume more than the recommended daily amount of tartrazine for their body weight.

In this activity students learned to work collaboratively to design an experiment, collect and analyze data, and solve a problem. These skills extend beyond any one science subject or class. Through this activity, students had the opportunity to do real-world science to solve a problem without a previously known result. Designing an experiment may be difficult for students who are accustomed to being given an experimental procedure in their previous science classes. However, because they sometimes struggled to design their own experiment and perform the calculations, students also learned to persevere in collecting and analyzing data to solve a problem, which is a valuable life lesson for all students. ■

Online Connections

The Beer-Lambert Law at Chemistry LibreTexts: https://bit.ly/3lNpPEi

Beer’s Law – Theoretical Principles: https://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/beers1.htm

Beer’s Law at Illustrated Glossary of Organic Chemistry: http://www.chem.ucla.edu/~harding/IGOC/B/beers_law.html

Beer Lambert Law at Edinburgh Instruments: https://www.edinst.com/blog/the-beer-lambert-law/

Beer’s Law Lab at PhET Interactive Simulations: https://phet.colorado.edu/en/simulation/beers-law-lab

Figure 4. Problem-solving experiment problem statement: https://bit.ly/3pAYHtj

Figure 5. Learning targets: https://bit.ly/307BHtb

Figure 6. Materials list: https://bit.ly/308a57h

Figure 7. The use of color comparison as a low-cost alternative: https://bit.ly/3du1uyO

Figure 9. Summative performance-based assessment rubric: https://bit.ly/31KoZRj

Figure 10. Connecting to the Next Generation Science Standards : https://bit.ly/3GlJnY0

Kevin Mason ( [email protected] ) is Professor of Education at the University of Wisconsin–Stout, Menomonie, WI; Steve Schieffer is a chemistry teacher at Amery High School, Amery, WI; Tara Rose is a chemistry teacher at Amery High School, Amery, WI; and Greg Matthias is Assistant Professor of Education at the University of Wisconsin–Stout, Menomonie, WI.

Akinoglu, O., and R. Tandogan. 2007. The effects of problem-based active learning in science education on students’ academic achievement, attitude and concept learning. Eurasia Journal of Mathematics, Science, and Technology Education 3 (1): 77–81.

Areepattamannil, S. 2012. Effects of inquiry-based science instruction on science achievement and interest in science: Evidence from Qatar. The Journal of Educational Research 105 (2): 134–146.

Bando R., E. Nashlund-Hadley, and P. Gertler. 2019. Effect of inquiry and problem-based pedagogy on learning: Evidence from 10 field experiments in four countries. The National Bureau of Economic Research 26280.

Furtak, E., T. Seidel, and H. Iverson. 2012. Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research 82 (3): 300–329.

Hansen, Z., H. Owan, and J. Pan. 2015. The impact of group diversity on class performance. Education Economics 23 (2): 238–258.

Inel, D., and A. Balim. 2010. The effects of using problem-based learning in science and technology teaching upon students’ academic achievement and levels of structuring concepts. Pacific Forum on Science Learning and Teaching 11 (2): 1–23.

Merritt, J., M. Lee, P. Rillero, and B. Kinach. 2017. Problem-based learning in K–8 mathematics and science education: A literature review. The Interdisciplinary Journal of Problem-based Learning 11 (2).

NGSS Lead States. 2013. Next Generation Science Standards: For states, by states. Washington, DC: National Academies Press.

Panasan, M., and P. Nuangchalerm. 2010. Learning outcomes of project-based and inquiry-based learning activities. Journal of Social Sciences 6 (2): 252–255.

Rutherford, F.J. 1964. The role of inquiry in science teaching. Journal of Research in Science Teaching 2 (2): 80–84.

Stevens, L.J., J.R. Burgess, M.A. Stochelski, and T. Kuczek. 2015. Amounts of artificial food dyes and added sugars in foods and sweets commonly consumed by children. Clinical Pediatrics 54 (4): 309–321.

Wilson, C., J. Taylor, and S. Kowalski. 2010. The relative effects and equity of inquiry-based and commonplace science teaching on students’ knowledge, reasoning, and argumentation. Journal of Research in Science Teaching 47 (3): 276–301.



Module 7: Exponents

Problem Solving With Scientific Notation

Learning Outcome

  • Solve application problems involving scientific notation

A water molecule.

Solve Application Problems

Learning rules for exponents seems pointless without context, so let us explore some examples of using scientific notation that involve real problems. First, let us look at an example of how scientific notation can be used to describe real measurements.

Several red blood cells.

One of the most important parts of solving a “real-world” problem is translating the words into appropriate mathematical terms and recognizing when a well-known formula may help. Here is an example that requires you to find the density of a cell given its mass and volume. Cells are not visible to the naked eye, so their measurements, when described with scientific notation, involve negative exponents.

Human cells come in a wide variety of shapes and sizes. The mass of an average human cell is about [latex]2\times10^{-11}[/latex] grams. [1] Red blood cells are one of the smallest types of cells [2] , clocking in at a volume of approximately [latex]10^{-6}\text{ meters }^3[/latex]. [3] Biologists have recently discovered how to use the density of some types of cells to indicate the presence of disorders such as sickle cell anemia or leukemia. [4]  Density is calculated as [latex]\frac{\text{ mass }}{\text{ volume }}[/latex]. Calculate the density of an average human cell.

Read and Understand:  We are given an average cellular mass and volume as well as the formula for density. We are looking for the density of an average human cell.

Define and Translate:   [latex]m=\text{mass}=2\times10^{-11}[/latex], [latex]v=\text{volume}=10^{-6}\text{ meters}^3[/latex], [latex]\text{density}=\frac{\text{ mass }}{\text{ volume }}[/latex]

Write and Solve:  Use the quotient rule to simplify the ratio.

[latex]\text{density}=\frac{2\times10^{-11}\text{ grams}}{10^{-6}\text{ meters}^3}=2\times10^{-11-\left(-6\right)}\frac{\text{grams}}{\text{meters}^3}=2\times10^{-5}\frac{\text{grams}}{\text{meters}^3}[/latex]

If scientists know the density of healthy cells, they can compare the density of a sick person’s cells to that healthy value to rule out or test for disorders or diseases that may affect cellular density.

The average density of a human cell is [latex]2\times10^{-5}\frac{\text{ grams }}{\text{ meter }^3}[/latex]
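As a quick numerical check of the example above, Python’s e-notation maps directly onto scientific notation. This is a minimal sketch using the values from the problem:

```python
# Cell-density check: density = mass / volume, with values in scientific notation.
mass_g = 2e-11        # grams, average human cell
volume_m3 = 1e-6      # cubic meters (value given in the problem)

density = mass_g / volume_m3
print(f"density = {density:.1e} g/m^3")   # 2.0e-05 g/m^3
```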

The following video provides an example of how to find the number of operations a computer can perform in a very short amount of time.

Light traveling from the sun to the earth.

In the next example, you will use another well known formula, [latex]d=r\cdot{t}[/latex], to find how long it takes light to travel from the sun to Earth. Unlike the previous example, the distance between the earth and the sun is massive, so the numbers you will work with have positive exponents.

The speed of light is [latex]3\times10^{8}\frac{\text{ meters }}{\text{ second }}[/latex]. If the sun is [latex]1.5\times10^{11}[/latex] meters from earth, how many seconds does it take for sunlight to reach the earth?  Write your answer in scientific notation.

Read and Understand:  We are looking for how long—an amount of time. We are given a rate which has units of meters per second and a distance in meters. This is a [latex]d=r\cdot{t}[/latex] problem.

Define and Translate: 

[latex]\begin{array}{l}d=1.5\times10^{11}\\r=3\times10^{8}\frac{\text{ meters }}{\text{ second }}\\t=\text{ ? }\end{array}[/latex]

Write and Solve:  Substitute the values we are given into the [latex]d=r\cdot{t}[/latex] equation. We will work without units to make it easier. Often, scientists will work with units to make sure they have made correct calculations.

[latex]\begin{array}{c}d=r\cdot{t}\\1.5\times10^{11}=3\times10^{8}\cdot{t}\end{array}[/latex]

Divide both sides of the equation by [latex]3\times10^{8}[/latex] to isolate  t.

[latex]\begin{array}{c}\frac{1.5\times10^{11}}{3\times10^{8}}=\frac{3\times10^{8}}{3\times10^{8}}\cdot{t}\end{array}[/latex]

On the left side, you will need to use the quotient rule of exponents to simplify, and on the right, you are left with  t. 

[latex]\begin{array}{c}\text{ }\\\left(\frac{1.5}{3}\right)\times\left(\frac{10^{11}}{10^{8}}\right)=t\\\text{ }\\\left(0.5\right)\times\left(10^{11-8}\right)=t\\0.5\times10^3=t\end{array}[/latex]

This answer is not in scientific notation, so we will move the decimal one place to the right, which means we must decrease the exponent by one to compensate.

[latex]0.5\times10^3=5.0\times10^2=t[/latex]

The time it takes light to travel from the sun to Earth is [latex]5.0\times10^2[/latex] seconds, or in standard notation, [latex]500[/latex] seconds.  That is not bad considering how far it has to travel!
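The same kind of check works for the light-travel example; this sketch simply divides the given distance by the given rate:

```python
# Sunlight travel-time check: t = d / r.
distance_m = 1.5e11          # meters, sun to Earth
speed_m_per_s = 3e8          # meters per second, speed of light

t = distance_m / speed_m_per_s
print(f"t = {t:.1e} s")      # 5.0e+02 s, i.e., 500 seconds
```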

Scientific notation was developed to assist mathematicians, scientists, and others when expressing and working with very large and very small numbers. Scientific notation follows a very specific format in which a number is expressed as the product of a number greater than or equal to one and less than ten times a power of [latex]10[/latex]. The format is written [latex]a\times10^{n}[/latex], where [latex]1\leq{a}<10[/latex] and n is an integer. To multiply or divide numbers in scientific notation, you can use the commutative and associative properties to group the exponential terms together and apply the rules of exponents.
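The multiply-and-divide rules in the paragraph above can also be made explicit in code by keeping numbers as (coefficient, exponent) pairs instead of floats. The sketch below is an illustration of the rule, not a library routine:

```python
# Work with numbers as (coefficient, exponent) pairs: multiply/divide the
# coefficients, add/subtract the exponents, then renormalize so that the
# coefficient satisfies 1 <= |coefficient| < 10.
def normalize(coeff, exp):
    while abs(coeff) >= 10:
        coeff /= 10
        exp += 1
    while 0 < abs(coeff) < 1:
        coeff *= 10
        exp -= 1
    return coeff, exp

def multiply(a, b):
    return normalize(a[0] * b[0], a[1] + b[1])

def divide(a, b):
    return normalize(a[0] / b[0], a[1] - b[1])

print(multiply((1.5, 11), (3, 8)))   # (4.5, 19)
print(divide((1.5, 11), (3, 8)))     # (5.0, 2), as in the sunlight example
```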

  • Orders of magnitude (mass). (n.d.). Retrieved May 26, 2016, from https://en.wikipedia.org/wiki/Orders_of_magnitude_(mass) ↵
  • How Big is a Human Cell? ↵
  • How big is a human cell? - Weizmann Institute of Science. (n.d.). Retrieved May 26, 2016, from http://www.weizmann.ac.il/plants/Milo/images/humanCellSize120116Clean.pdf ↵
  • Grover, W. H., Bryan, A. K., Diez-Silva, M., Suresh, S., Higgins, J. M., & Manalis, S. R. (2011). Measuring single-cell density. Proceedings of the National Academy of Sciences, 108(27), 10992-10996. doi:10.1073/pnas.1104651108 ↵
  • Revision and Adaptation. Provided by : Lumen Learning. License : CC BY: Attribution
  • Application of Scientific Notation - Quotient 1 (Number of Times Around the Earth). Authored by : James Sousa (Mathispower4u.com) for Lumen Learning. Located at : https://youtu.be/san2avgwu6k . License : CC BY: Attribution
  • Application of Scientific Notation - Quotient 2 (Time for Computer Operations). Authored by : James Sousa (Mathispower4u.com) for Lumen Learning. Located at : https://youtu.be/Cbm6ejEbu-o . License : CC BY: Attribution
  • Unit 11: Exponents and Polynomials, from Developmental Math: An Open Program. Provided by : Monterey Institute of Technology and Education. Located at : http://nrocnetwork.org/dm-opentext . License : CC BY: Attribution


Encyclopedia of Science Education, pp. 786–790

Problem Solving in Science Learning

Edit Yerushalmi & Bat-Sheva Eylon

Introduction: Problem Solving in the Science Classroom

Problem solving plays a central role in school science, serving both as a learning goal and as an instructional tool. As a learning goal, problem-solving expertise is considered as a means of promoting both proficiency in solving practice problems and competency in tackling novel problems, a hallmark of successful scientists and engineers. As an instructional tool, problem solving attempts to situate the learning of scientific ideas and practices in an applicative context, thus providing an opportunity to transform science learning into an active, relevant, and motivating experience. Problem solving is also frequently a central strategy in the assessment of students’ performance on various measures (e.g., mastery of procedural skills, conceptual understanding, as well as scientific and learning practices).

A problem is often defined as an unfamiliar task that requires one to make judicious decisions when searching through a problem space with the intent of devising a sequence of actions to reach a certain goal. Problem space is defined as “an individual’s representation of the objects in the problem situation, the goal of the problem, and the actions that can be performed in working on the problem” (Greeno and Simon 1984, p. 591). This exploratory nature of problem solving mandates an innovative, iterative, and adaptive process. In contrast, in an exercise the solvers apply a preset procedure, with which they are acquainted, to reach the problem’s goal, and therefore the solvers’ choices are limited (Reif 2008). Whether a task serves as a problem or an exercise for a particular solver depends on the prior knowledge of the solver.

School science problems share common traits that stem from their being grounded in a scientific knowledge domain: prediction is derived from well-specified premises and requires precision; real-world phenomena are simplified and their description is reduced to a few variables. Where appropriate, school science problems require the use of formal, explicit rules that are part of a scientific theory. These rules must be interpreted unambiguously, as compared with the plausible, implicit knowledge structures that serve in everyday reasoning. However, the problems typically used in science classrooms vary greatly along dimensions such as authenticity (i.e., abstract vs. real-world context), specifications of known and target variables (i.e., well vs. ill structured), duration (a few minutes to several months), complexity (i.e., the amount of intermediate variables), and ownership (i.e., the problem defined by the teacher or by the student). At one end of the problem spectrum, one can find the abstract, well-structured, short, and simple problems that are commonly found in traditional textbooks. At the other end, one can find design or inquiry projects that introduce a real-world context, are ill structured and complex, and involve the students themselves who define the project goals and who engage in extended investigations that they have roles in determining.

Domain-Specific Knowledge and Problem Solving

The relevance of research on problem solving to science instruction increased in the 1970s, when researchers began studying problems anchored in specific knowledge domains (e.g., chess, physics, and medicine). Some of this research focused on short and well-structured problems encountered in science classrooms, all of which require well-defined problem-solving procedures. This research originally took an information processing perspective. It involved two central methodologies: the analysis of “think aloud” protocols of both more and less experienced subjects (described in this research as “experts vs. novices”) in a specific domain to determine how each group approached problem solving and the construction of computer programs to simulate and validate theories of human behavior in solving problems. Research on experts and novices has underscored the importance of domain-specific knowledge in problem solving, as well as the role of problem solving in developing domain-specific knowledge and general problem-solving methods.

Experts’ and Novices’ Approaches to Problem Solving

One aspect in which experts and novices in a certain domain differ is their prior knowledge structures and how they use these. When approaching problems, both experts and novices rely on specific cues (e.g., keywords, idealized objects, and previously encountered contexts) in the problem situation to automatically retrieve domain-relevant information. However, experts store in their memory hierarchical structures (schemas) of domain-specific knowledge (declarative and procedural) that allow them to use cues in problems to quickly encode information (chunking) and to reliably retrieve schemas. Experts’ schemas revolve around in-depth features (e.g., underlying physics principles), in comparison with novices’ problem schemas, which rely on surface features (e.g., “pulleys”) and are fragmented. However, other researchers (Smith et al. 1993 ) who have taken a constructivist, “knowledge-in-pieces” view of learning argue that novices abstract deep structures of intuitive ideas rather than representations organized around concrete surface features.

When solving problems, both experts and novices were found to use heuristics – general but weak methods to enhance their search process, such as a means-ends analysis, starting from the problem’s goal and working backward to the given problem situation, recursively identifying the gap between the problem or goal and the current state and by taking actions to bridge this gap. However, unlike novices, in describing a problem, experts devote considerable time to qualitative analysis of the problem situation before turning to a quantitative representation (Reif 2008 ). More specifically, experts often make simplifying assumptions, construct a pictorial or diagrammatic representation, and map the problem situation to appropriate theoretical models by retrieving effective representations that derive from the experts’ larger and better organized knowledge base. In addition, experts differ from novices in their approach towards constructing the solution: while novices approach problems in a haphazard manner (e.g., plugging numbers into formulae), experts devote sufficient time to effectively plan a strategy for constructing a solution by devising useful subproblems. Experts also have better metacognitive skills and spend more time than do novices in monitoring their progress: they evaluate their solutions in light of task constraints, reflect on their former analysis and decisions, and revise their choices accordingly as necessary.
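The gap-reduction idea behind means-ends analysis can be sketched in a few lines of code. The toy snippet below is only a forward, greedy caricature of that idea, with made-up states and operators; it is not drawn from the literature discussed here:

```python
# Schematic means-ends-style search: repeatedly measure the gap between the
# current state and the goal and apply whichever operator most reduces it.
def means_ends_analysis(start, goal, operators, distance, max_steps=100):
    state = start
    trace = [state]
    for _ in range(max_steps):
        if state == goal:
            return trace
        # Choose the operator that leaves the smallest remaining gap.
        candidates = [op(state) for op in operators]
        state = min(candidates, key=lambda s: distance(s, goal))
        trace.append(state)
    return trace

# Toy usage: reach a target number from 1 using doubling and incrementing.
ops = [lambda x: x * 2, lambda x: x + 1]
print(means_ends_analysis(1, 10, ops, distance=lambda s, g: abs(g - s)))
```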

Problem Solving as a Learning Process

Problem solving may trigger a process of conceptual change, leading learners to develop scientifically acceptable domain-specific declarative knowledge. Namely, learners can refine and elaborate their conceptual understanding by engaging in problem solving in a deliberate manner: articulating how they apply domain concepts and principles, acknowledging conflicts between their ideas and counterevidence, and searching for mechanisms to resolve these conflicts.

Similarly, successful learners were found also to work in a deliberate manner when learning from worked-out examples. Worked-out examples in standard textbooks frequently omit information justifying the solution steps. Research (Chi 2000 ) indicates that learners tend to generate content-relevant articulations (self-explanations) to make sense of solution steps. Successful problem solvers generate more self-explanations and, in particular, principle-based ones. More specifically, they tend to relate the solution steps to the domain principles or elaborate on these principles. Self-explanations serve learning by either bridging the gaps that correspond to the omissions in the solution or by a self-repair process in which solvers attempt to resolve a conflict between the scientific models conveyed by a worked-out example and the possibly flawed mental model held by the solver.

Acquiring domain-specific procedural knowledge takes place when the problem solver, applying general but weak search methods, identifies successful domain-specific procedures that are stored for future use. These acquisition processes play an important role in designing e-tutors for problem solving. Another mechanism for acquiring domain-specific knowledge is the use of analogical reasoning in solving analogous problems that may be similar in one of two ways:

They may be similar in the material properties shared by the two problem scenarios (e.g., the heart and a pump). Devising an analogous scenario to an original problem scenario can help the solver to focus on relevant variables and identify a strategy to solve the original problem. Instructors have used intermediate scenarios, termed bridging analogies, to support the process.

Another type of similarity between two problems refers to the scientific concepts and principles that the solver employs to solve the problems. Solvers often rely on the procedure used in solving a previous problem (i.e., a source problem) and map it to construct a solution to a target problem (e.g., resolving the forces in a static equilibrium problem, where various force agents are considered).

Identifying clearly the similarities as well as the differences between analogous problems can help students to acquire domain-specific procedural and declarative knowledge.

Research underscores several factors affecting learning through problem solving:

Metacognition relates to the extent to which the solver purposefully pursues learning within the problem-solving process, accompanied by an awareness of one’s own beliefs and goals.

Epistemological beliefs, such as one’s expectation to engage in a deliberate search process or merely to retrieve an algorithm to solve a problem, affect what learners notice and think about when they act.

The sociocultural nature of learning highlights the contribution of cooperative learning that engages students in discussing and arguing their ideas when solving problems together.

Cognitive load (Paas et al. 2010 ) results from the limitations of working memory, impeding meaningful learning when the solver has to process simultaneously vast amounts of information.

Instructional Methods for Fostering Problem Solving and Conceptual Change

The traditional teaching of science problem solving involves a considerable amount of drill and practice. Research suggests that these practices do not lead to the development of expert-like problem-solving strategies and that there is little correlation between the number of problems solved (exceeding 1,000 problems in one specific study) and the development of a conceptual understanding.

Cognitive apprenticeship underlies many instructional strategies that have been found to promote expert-like problem solving. During problem solving, learners interact with their peers and with their instructor and reflect on the connections between their existing ideas and practices and those that more closely characterize experts’ culture of practice and decision-making. The roles of the teacher are (a) modeling, explicating the tacit problem-solving processes of the expert, (b) coaching learners as they engage in scaffolded problem-solving, and (c) fading, gradually decreasing this support until the learners can work independently.

Two complementary mechanisms of scaffolding problem-solving have been suggested (Reiser 2004 ): structuring and problematizing. Structuring a task refers to reducing its complexity and limiting the choices of the problem solver. Problematizing directs one’s attention to aspects that one might otherwise overlook. Instruction should be balanced between structuring and problematizing so that tasks will be manageable to learners yet remain challenging and engaging. These mechanisms can be carried out for the range of problems described in the introduction.

The following are some examples of structuring methods that have been shown to be effective:

Worked-out examples can be used to “model” the tacit goals and reasoning underlying the problem-solving strategies of experts, which later can be used in students’ solutions. To narrow the problem space and minimize the cognitive load, researchers use methods such as labeling solution steps in terms of “subgoals.” This approach has also been implemented in e-tutors that follow a reciprocal teaching approach, where computers and students alternately coach each other.

Tools that assist learners in applying heuristics when seeking solutions. For example, inquiry or design maps recommended for guiding a particular design or an inquiry project serve as visual aids that may help learners to decompose open-ended problems and track what steps they have taken in a problem solution.

Explicit verbal reminders (prompts) can induce problem solvers to formulate self-explanations and to self-repair their understanding.

Like structuring, problematizing also takes different forms along the problem spectrum previously mentioned. For example, researchers have pointed out the instructional value of problem situations that elicit intuitive ideas and result in confusion (e.g., qualitative conceptual problems) to trigger argumentative discussion. Problematizing in this context refers to techniques used to stimulate argumentation, such as asking students to express their ideas, encouraging them to identify gaps in their understanding and by providing the requisite time and social atmosphere so that they can work to resolve any disagreements. Incorrect examples (e.g., the teacher providing solutions she/he knows to be erroneous) were found efficient in triggering reflection and in highlighting critical features that distinguish between scientific and lay interpretations of the scientific concepts and principles involved.

Programs such as “systematic inventive thinking” (SIT), based on analyzing many patents, illustrate the interplay between structuring and problematizing in promoting creativity when solving ill-defined problems such as design projects. In such programs learners study methods for functional analysis of systems and systematic problem-solving strategies for carrying out divergent and convergent thinking.

Part and parcel of the design of instruction is the careful choice of assessment tasks and scoring methods that align with the learning goals. These choices have a strong impact on the problem-solving practices that will take place in the classroom.

Cross-References

Authentic Science

Cognitive Demand

Competence in Science

Conceptual Change in Learning

Information Processing and the Learning of Science

Inquiry, Learning Through

Prior Knowledge

Problem Solving in Science, Assessment of the Ability to

Problem-Based Learning (PBL)

Chi MTH (2000) Self-explaining expository texts: the dual processes of generating inferences and repairing mental models. In: Glaser R (ed) Advances in instructional psychology. Lawrence Erlbaum Associates, Mahwah, pp 161–238


Greeno JG, Simon HA (1984) Problem solving and reasoning. In: Atkinson RC, Herrnstein R, Lindzey G, Lute RD (eds) Stevens’ handbook of experimental psychology. Wiley, New York

Paas F, van Gog T, Sweller J (2010) Cognitive load theory: new conceptualizations, specifications, and integrated research perspectives. Educ Psychol Rev 22:115–121


Reif F (2008) Applying cognitive science to education: thinking and learning in scientific or other complex domains. MIT Press, Cambridge

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13:273–304

Smith JP, DiSessa AA, Roschelle J (1993) Misconceptions reconceived: a constructivist analysis of knowledge in transition. J Learn Sci 3:115–163


Author information

Authors and Affiliations

The Weizmann Institute of Science, Rehovot, Israel

Edit Yerushalmi

The Science Teaching Department, The Weizmann Institute of Science, Rehovot, Israel

Bat-Sheva Eylon


Corresponding author

Correspondence to Bat-Sheva Eylon .

Editor information

Editors and Affiliations

Emeritus Professor of Science and Technology Education, Faculty of Education Monash University, Clayton, VIC, Australia

Richard Gunstone


Copyright information

© 2015 Springer Science+Business Media Dordrecht

About this entry

Cite this entry.

Yerushalmi, E., Eylon, BS. (2015). Problem Solving in Science Learning. In: Gunstone, R. (eds) Encyclopedia of Science Education. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-2150-0_129


DOI : https://doi.org/10.1007/978-94-007-2150-0_129

Published : 04 January 2015

Publisher Name : Springer, Dordrecht

Print ISBN : 978-94-007-2149-4

Online ISBN : 978-94-007-2150-0

eBook Packages : Humanities, Social Sciences and Law



The Math You Need, When You Need It

math tutorials for students majoring in the earth sciences

Scientific Notation - Practice Problems

Solving Earth Science Problems With Scientific Notation

Introductory Problems

These problems cover the fundamentals of writing scientific notation and using it to understand relative size of values and scientific prefixes.

Problem 1: The distance to the moon is 238,900 miles. Write this value in scientific notation.

`238","900 mi = 2.389 xx 10^5 mi`

Problem 2: One mile is 1609.34 meters. What is the distance to the moon in meters using scientific notation?

`1609.34 m/(mi) xx 238","900 mi` = 384,400,000 m = `3.844 xx 10^8 m`

Notice in the above unit conversion the 'mi' units cancel each other out because 'mi' is in the denominator for the first term and the numerator for the second term


Problem 4: The atomic radius of a magnesium atom is approximately 1.6 angstroms, which is equal to 1.6 × 10^-10 meters (m). How do you write this length in standard form?

 0.00000000016 m  

Fissure A = 40,000 m = `4 xx 10^4 m`; Fissure B = 5,000 m = `5 xx 10^3 m`

This shows fissure A is larger (by almost 10 times!). The shortcut to answer a question like this is to look at the exponent. If both coefficients are between 1-10, then the value with the larger exponent is the larger number.

Problem 6: The amount of carbon in the atmosphere is 750 petagrams (Pg). One petagram equals 1 × 10^15 grams (g). Write out the amount of carbon in the atmosphere in (i) scientific notation and (ii) standard decimal format.

(i) In scientific notation: `750 xx 10^15 g = 7.5 xx 10^17 g`

(ii) The exponent is a positive number, so the decimal moves to the right to write the value in standard decimal format:

750,000,000,000,000,000 g

Advanced Problems

Scientific notation is used in solving these earth and space science problems, which are provided as worked examples. Be forewarned that these problems move beyond this module and require some facility with unit conversions, rearranging equations, and algebraic rules for multiplying and dividing exponents. If you can solve these, you've mastered scientific notation!

Problem 7: Calculate the volume of water (in cubic meters and in liters) falling on a 10,000 km^2 watershed from 5 cm of rainfall.

`10,000  km^2 = 1 xx 10^4  km^2`

5 cm of rainfall = `5 xx 10^0 cm`

Let's start with meters as the common unit and convert to liters later. There are 1 × 10^3 m in a km and area is km × km (km^2), therefore you need to convert from km to m twice:

`1 xx 10^3 m/(km) * 1 xx 10^3 m/(km) = 1 xx 10^6 m^2/(km)^2`

`1 xx 10^6 m^2/(km)^2 * 1 xx 10^4 km^2 = 1 xx 10^10 m^2` for the area of the watershed.

For the amount of rainfall, you should convert from centimeters to meters:

`5 cm * (1 m)/(100 cm)= 5 xx 10^-2 m`

`V = A * d`

When multiplying terms with exponents, you can multiply the coefficients and add the exponents:

`V = 1 xx 10^10 m^2 * 5 xx 10^(-2) m = 5 xx 10^8 m^3`

Given that there are 1 × 10^3 liters in a cubic meter we can make the following conversion:

`1 xx 10^3 L/m^3 * 5 xx 10^8 m^3 = 5 xx 10^11 L`

Step 5. Check your units and your answer - do they make sense?           
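As a sanity check on Problem 7, here is the same computation in code, working entirely in base SI units:

```python
# Watershed volume check (Problem 7).
area_m2 = 1e4 * (1e3) ** 2        # 10,000 km^2 converted to m^2 -> 1e10 m^2
rain_m = 5e-2                      # 5 cm of rainfall in meters

volume_m3 = area_m2 * rain_m       # 5e8 m^3
volume_L = volume_m3 * 1e3         # 5e11 L

print(f"{volume_m3:.1e} m^3 = {volume_L:.1e} L")
```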

Problem 8: What is the mass of a spherical dust grain with a radius of `2 xx 10^(-6) m` and a density of 3300 kg/m^3? Start with the formula for the volume of a sphere:

`V = 4/3 pi r^3`

Using this equation, plug in the radius (r) of the dust grains.

`V = 4/3 pi (2 xx10^(-6))^3m^3`

Notice the (-6) exponent is cubed. When you raise a power to a power, you multiply the two exponents.

`V = 4/3 pi (8 xx10^(-18)m^3)`

Then, multiply the cubed radius by pi and 4/3:

`V = 3.35 xx 10^(-17) m^3`

`m = 3.35 xx 10^(-17) m^3 * 3300 (kg)/m^3`

Notice in the equation above that the m 3 terms cancel each other out and you are left with kg

`m = 1.1 xx 10^(-13) kg`

Barnard nebula

Next, estimate the total mass of dust in a spherical nebula, such as the Barnard nebula, with a radius of `2.325 xx 10^(15) m`, if the nebula contains an average of 0.001 dust grains per cubic meter. First, calculate the volume of the nebula:

`V = 4/3 pi (2.325 xx10^(15) m)^3`

`V = 5.26 xx10^(46) m^3`

Number of dust grains = `5.26 xx10^(46) m^3 xx 0.001` grains/m^3

Number of dust grains = `5.26xx10^43 "grains"`

Total mass = `1.1xx10^(-13) (kg)/("grains") * 5.26xx10^43 "grains"`

Notice in the equation above the 'grains' terms cancel each other out and you are left with kg

Total mass = `5.79xx10^30 kg`
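A quick numerical check of the last two problems (dust-grain mass and total nebula mass), using the values given above:

```python
# Dust-grain and nebula mass check.
import math

r_grain = 2e-6                                   # m, grain radius
grain_volume = 4 / 3 * math.pi * r_grain ** 3    # ~3.35e-17 m^3
grain_mass = grain_volume * 3300                 # kg, density 3300 kg/m^3

r_nebula = 2.325e15                              # m, nebula radius
nebula_volume = 4 / 3 * math.pi * r_nebula ** 3  # ~5.26e46 m^3
n_grains = nebula_volume * 0.001                 # 0.001 grains per m^3

total_mass = n_grains * grain_mass               # ~5.8e30 kg
print(f"{total_mass:.2e} kg")
```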



Example Physics Problems and Solutions


Learning how to solve physics problems is a big part of learning physics. Here’s a collection of example physics problems and solutions to help you tackle problem sets and understand concepts and how to work with formulas:

Physics Homework Tips Physics homework can be challenging! Get tips to help make the task a little easier.

Unit Conversion Examples

There are now too many unit conversion examples to list in this space. This Unit Conversion Examples page is a more comprehensive list of worked example problems.

Newton’s Equations of Motion Example Problems

Equations of Motion – Constant Acceleration Example This equations of motion example problem consists of a sliding block under constant acceleration. It uses the equations of motion to calculate the position and velocity at a given time and the time and position at a given velocity.

Equations of Motion Example Problem – Constant Acceleration This example problem uses the equations of motion for constant acceleration to find the position, velocity, and acceleration of a braking vehicle.

Equations of Motion Example Problem – Interception

This example problem uses the equations of motion for constant acceleration to calculate the time needed for one vehicle to intercept another vehicle moving at a constant velocity.

well drop setup illustration

Vertical Motion Example Problem – Coin Toss Here’s an example applying the equations of motion under constant acceleration to determine the maximum height, velocity and time of flight for a coin flipped into a well. This problem could be modified to solve any object tossed vertically or dropped off a tall building or any height. This type of problem is a common equation of motion homework problem.
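To give a flavor of what that example involves, here is a minimal sketch of the vertical-motion calculation with an assumed initial speed (the value is a placeholder, not the one used in the linked problem):

```python
# Vertical motion under constant acceleration: v = v0 - g*t, h = v0*t - g*t**2/2.
g = 9.81            # m/s^2
v0 = 5.0            # m/s, assumed initial speed of the flipped coin

t_peak = v0 / g                                  # time when velocity reaches zero
h_max = v0 * t_peak - 0.5 * g * t_peak ** 2      # maximum height above launch point

print(f"peak after {t_peak:.2f} s at height {h_max:.2f} m")
```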

Projectile Motion Example Problem This example problem shows how to find different variables associated with parabolic projectile motion.

Accelerometer

Accelerometer and Inertia Example Problem Accelerometers are devices to measure or detect acceleration by measuring the changes that occur as a system experiences an acceleration. This example problem uses one of the simplest forms of an accelerometer, a weight hanging from a stiff rod or wire. As the system accelerates, the hanging weight is deflected from its rest position. This example derives the relationship between that angle, the acceleration and the acceleration due to gravity. It then calculates the acceleration due to gravity of an unknown planet.

Weight In An Elevator Have you ever wondered why you feel slightly heavier in an elevator when it begins to move up? Or why you feel lighter when the elevator begins to move down? This example problem explains how to find your weight in an accelerating elevator and how to find the acceleration of an elevator using your weight on a scale.

Equilibrium Example Problem This example problem shows how to determine the different forces in a system at equilibrium. The system is a block suspended from a rope attached to two other ropes.


Equilibrium Example Problem – Balance This example problem highlights the basics of finding the forces acting on a system in mechanical equilibrium.

Force of Gravity Example This physics problem and solution shows how to apply Newton’s equation to calculate the gravitational force between the Earth and the Moon.

Coupled Systems Example Problems

Atwood Machine

Coupled systems are two or more separate systems connected together. The best way to solve these types of problems is to treat each system separately and then find common variables between them.

Atwood Machine The Atwood Machine is a coupled system of two weights sharing a connecting string over a pulley. This example problem shows how to find the acceleration of an Atwood system and the tension in the connecting string.

Coupled Blocks – Inertia Example This example problem is similar to the Atwood machine except one block is resting on a frictionless surface perpendicular to the other block. This block is hanging over the edge and pulling down on the coupled string. The problem shows how to calculate the acceleration of the blocks and the tension in the connecting string.
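For a taste of the Atwood machine result, the sketch below uses the standard ideal-pulley formulas with hypothetical masses (these are not the values from the linked example):

```python
# Ideal Atwood machine (massless, frictionless pulley):
# a = (m1 - m2) * g / (m1 + m2), tension T = m2 * (g + a), with m1 the heavier mass.
g = 9.81
m1, m2 = 2.0, 1.5            # kg, hypothetical masses

a = (m1 - m2) * g / (m1 + m2)
T = m2 * (g + a)

print(f"a = {a:.2f} m/s^2, T = {T:.2f} N")
```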

Friction Example Problems

friction slide setup

These example physics problems explain how to calculate the different coefficients of friction.

Friction Example Problem – Block Resting on a Surface

Friction Example Problem – Coefficient of Static Friction

Friction Example Problem – Coefficient of Kinetic Friction

Friction and Inertia Example Problem

Momentum and Collisions Example Problems

Desktop Momentum Balls Toy

These example problems show how to calculate the momentum of moving masses.

Momentum and Impulse Example Finds the momentum before and after a force acts on a body and determines the impulse of the force.

Elastic Collision Example Shows how to find the velocities of two masses after an elastic collision.

It Can Be Shown – Elastic Collision Math Steps Shows the math to find the equations expressing the final velocities of two masses in terms of their initial velocities.

Simple Pendulum Example Problems


These example problems show how to use the period of a pendulum to find related information.

Find the Period of a Simple Pendulum Find the period if you know the length of a pendulum and the acceleration due to gravity.

Find the Length of a Simple Pendulum Find the length of the pendulum when the period and acceleration due to gravity are known.

Find the Acceleration due to Gravity Using A Pendulum Find ‘g’ on different planets by timing the period of a known pendulum length.
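All three pendulum examples rest on one relationship, T = 2π√(L/g); the sketch below shows it rearranged both ways, with an assumed length:

```python
# Simple-pendulum relationships: T = 2*pi*sqrt(L/g), so g = L*(2*pi/T)**2.
import math

L = 1.0                          # m, assumed pendulum length
g = 9.81                         # m/s^2

T = 2 * math.pi * math.sqrt(L / g)
g_from_T = L * (2 * math.pi / T) ** 2     # recover g from a measured period

print(f"T = {T:.2f} s, recovered g = {g_from_T:.2f} m/s^2")
```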

Harmonic Motion and Waves Example Problems

Hooke's Law Forces

These example problems all involve simple harmonic motion and wave mechanics.

Energy and Wavelength Example This example shows how to determine the energy of a photon of a known wavelength.

Hooke’s Law Example Problem An example problem involving the restoring force of a spring.

Wavelength and Frequency Calculations See how to calculate wavelength if you know frequency and vice versa, for light, sound, or other waves.
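A minimal sketch of the wave and photon relationships used in the examples above, c = λf and E = hf, with an assumed wavelength:

```python
# Wave and photon relationships.
c = 3.0e8            # m/s, speed of light
h = 6.626e-34        # J*s, Planck constant
wavelength = 500e-9  # m (green light, assumed)

frequency = c / wavelength          # Hz
energy = h * frequency              # J per photon

print(f"f = {frequency:.2e} Hz, E = {energy:.2e} J")
```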

Heat and Energy Example Problems

Heat of Fusion Example Problem Two example problems using the heat of fusion to calculate the energy required for a phase change.

Specific Heat Example Problem This is actually 3 similar example problems using the specific heat equation to calculate heat, specific heat, and temperature of a system.

Heat of Vaporization Example Problems Two example problems using or finding the heat of vaporization.

Ice to Steam Example Problem Classic problem melting cold ice to make hot steam. This problem brings all three of the previous example problems into one problem to calculate heat changes over phase changes.
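The ice-to-steam calculation chains sensible and latent heats; the sketch below uses standard approximate constants and an assumed mass and starting temperature, not the numbers from the linked problem:

```python
# Ice-to-steam sketch: warm ice to 0 C, melt it, warm the water to 100 C, vaporize it.
m = 0.025                      # kg (25 g of ice, hypothetical)
c_ice, c_water = 2090, 4186    # J/(kg*K), specific heats
L_fusion, L_vapor = 3.34e5, 2.26e6   # J/kg, latent heats

q = (m * c_ice * 10            # assumed -10 C ice up to 0 C
     + m * L_fusion            # melt the ice
     + m * c_water * 100       # 0 C water up to 100 C
     + m * L_vapor)            # vaporize the water

print(f"total heat ~ {q/1000:.1f} kJ")
```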

Charge and Coulomb Force Example Problems

Setup diagram of Coulomb's Law Example Problem.

Electrical charges generate a coulomb force between themselves proportional to the magnitude of the charges and inversely proportional to the square of the distance between them.

Coulomb’s Law Example This example problem shows how to use the Coulomb’s Law equation to find the charges necessary to produce a known repulsive force over a set distance.

Coulomb Force Example This Coulomb force example shows how to find the number of electrons transferred between two bodies to generate a set amount of force over a short distance.
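A minimal sketch of a Coulomb’s law calculation with assumed charges and separation (not the values from the linked examples):

```python
# Coulomb's law: F = k * |q1*q2| / r**2.
k = 8.99e9           # N*m^2/C^2, Coulomb constant
q1 = q2 = 1e-6       # C (two 1 microcoulomb charges, hypothetical)
r = 0.05             # m, separation

F = k * abs(q1 * q2) / r ** 2
print(f"F = {F:.2f} N")     # ~3.6 N
```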


Scientific Discovery

Scientific discovery is the process or product of successful scientific inquiry. Objects of discovery can be things, events, processes, causes, and properties as well as theories and hypotheses and their features (their explanatory power, for example). Most philosophical discussions of scientific discoveries focus on the generation of new hypotheses that fit or explain given data sets or allow for the derivation of testable consequences. Philosophical discussions of scientific discovery have been intricate and complex because the term “discovery” has been used in many different ways, both to refer to the outcome and to the procedure of inquiry. In the narrowest sense, the term “discovery” refers to the purported “eureka moment” of having a new insight. In the broadest sense, “discovery” is a synonym for “successful scientific endeavor” tout court. Some philosophical disputes about the nature of scientific discovery reflect these terminological variations.

Philosophical issues related to scientific discovery arise about the nature of human creativity, specifically about whether the “eureka moment” can be analyzed and about whether there are rules (algorithms, guidelines, or heuristics) according to which such a novel insight can be brought about. Philosophical issues also arise about the analysis and evaluation of heuristics, about the characteristics of hypotheses worthy of articulation and testing, and, on the meta-level, about the nature and scope of philosophical analysis itself. This essay describes the emergence and development of the philosophical problem of scientific discovery and surveys different philosophical approaches to understanding scientific discovery. In doing so, it also illuminates the meta-philosophical problems surrounding the debates, and, incidentally, the changing nature of philosophy of science.

Contents

  • 1. Introduction
  • 2. Scientific Inquiry as Discovery
  • 3. Elements of Discovery
  • 4. Pragmatic Logics of Discovery
  • 5. The Distinction Between the Context of Discovery and the Context of Justification
  • 6.1 Discovery as Abduction
  • 6.2 Heuristic Programming
  • 7. Anomalies and the Structure of Discovery
  • 8.1 Discoverability
  • 8.2 Preliminary Appraisal
  • 8.3 Heuristic Strategies
  • 9.1 Kinds and Features of Creativity
  • 9.2 Analogy
  • 9.3 Mental Models
  • 10. Machine Discovery
  • 11. Social Epistemology and Discovery
  • 12. Integrated Approaches to Knowledge Generation
  • Other Internet Resources
  • Related Entries

1. Introduction

Philosophical reflection on scientific discovery occurred in different phases. Prior to the 1930s, philosophers were mostly concerned with discoveries in the broad sense of the term, that is, with the analysis of successful scientific inquiry as a whole. Philosophical discussions focused on the question of whether there were any discernible patterns in the production of new knowledge. Because the concept of discovery did not have a specified meaning and was used in a very wide sense, almost all discussions of scientific method and practice could potentially be considered as early contributions to reflections on scientific discovery. In the course of the 18 th century, as philosophy of science and science gradually became two distinct endeavors with different audiences, the term “discovery” became a technical term in philosophical discussions. Different elements of scientific inquiry were specified. Most importantly, during the 19 th century, the generation of new knowledge came to be clearly and explicitly distinguished from its assessment, and thus the conditions for the narrower notion of discovery as the act or process of conceiving new ideas emerged. This distinction was encapsulated in the so-called “context distinction,” between the “context of discovery” and the “context of justification”.

Much of the discussion about scientific discovery in the 20 th century revolved around this distinction. It was argued that conceiving a new idea is a non-rational process, a leap of insight that cannot be captured in specific instructions. Justification, by contrast, is a systematic process of applying evaluative criteria to knowledge claims. Advocates of the context distinction argued that philosophy of science is exclusively concerned with the context of justification. The assumption underlying this argument is that philosophy is a normative project; it determines norms for scientific practice. Given this assumption, only the justification of ideas, not their generation, can be the subject of philosophical (normative) analysis. Discovery, by contrast, can only be a topic for empirical study. By definition, the study of discovery is outside the scope of philosophy of science proper.

The introduction of the context distinction and the disciplinary distinction between empirical science studies and normative philosophy of science that was tied to it spawned meta-philosophical disputes. For a long time, philosophical debates about discovery were shaped by the notion that philosophical and empirical analyses are mutually exclusive. Some philosophers insisted, like their predecessors prior to the 1930s, that the philosopher’s tasks include the analysis of actual scientific practices and that scientific resources be used to address philosophical problems. They maintained that it is a legitimate task for philosophy of science to develop a theory of heuristics or problem solving. But this position was the minority view in philosophy of science until the last decades of the 20 th century. Philosophers of discovery were thus compelled to demonstrate that scientific discovery was in fact a legitimate part of philosophy of science. Philosophical reflections about the nature of scientific discovery had to be bolstered by meta-philosophical arguments about the nature and scope of philosophy of science.

Today, however, there is wide agreement that philosophy and empirical research are not mutually exclusive. Not only do empirical studies of actual scientific discoveries in past and present inform philosophical thought about the structure and cognitive mechanisms of discovery, but works in psychology, cognitive science, artificial intelligence and related fields have become integral parts of philosophical analyses of the processes and conditions of the generation of new knowledge. Social epistemology has opened up another perspective on scientific discovery, reconceptualizing knowledge generation as group process.

Prior to the 19 th century, the term “discovery” was used broadly to refer to a new finding, such as a new cure, an unknown territory, an improvement of an instrument, or a new method of measuring longitude. One strand of the discussion about discovery dating back to ancient times concerns the method of analysis as the method of discovery in mathematics and geometry, and, by extension, in philosophy and scientific inquiry. Following the analytic method, we seek to find or discover something – the “thing sought,” which could be a theorem, a solution to a geometrical problem, or a cause – by analyzing it. In the ancient Greek context, analytic methods in mathematics, geometry, and philosophy were not clearly separated; the notion of finding or discovering things by analysis was relevant in all these fields.

In the ensuing centuries, several natural and experimental philosophers, including Avicenna and Zabarella, Bacon and Boyle, the authors of the Port-Royal Logic and Newton, and many others, expounded rules of reasoning and methods for arriving at new knowledge. The ancient notion of analysis still informed these rules and methods. Newton’s famous thirty-first query in the second edition of the Opticks outlines the role of analysis in discovery as follows: “As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths … By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis” (Newton 1718, 380, see Koertge 1980, section VI). Early modern accounts of discovery captured knowledge-seeking practices in the study of living and non-living nature, ranging from astronomy and physics to medicine, chemistry, and agriculture. These rich accounts of scientific inquiry were often expounded to bolster particular theories about the nature of matter and natural forces and were not explicitly labeled “methods of discovery ”, yet they are, in fact, accounts of knowledge generation and proper scientific reasoning, covering topics such as the role of the senses in knowledge generation, observation and experimentation, analysis and synthesis, induction and deduction, hypotheses, probability, and certainty.

Bacon’s work is a prominent example. His view of the method of science as it is presented in the Novum Organum showed how best to arrive at knowledge about “form natures” (the most general properties of matter) via a systematic investigation of phenomenal natures. Bacon described how first to collect and organize natural phenomena and experimentally produced facts in tables, how to evaluate these lists, and how to refine the initial results with the help of further trials. Through these steps, the investigator would arrive at conclusions about the “form nature” that produces particular phenomenal natures. Bacon expounded the procedures of constructing and evaluating tables of presences and absences to underpin his matter theory. In addition, in his other writings, such as his natural history Sylva Sylvarum or his comprehensive work on human learning De Augmentis Scientiarium , Bacon exemplified the “art of discovery” with practical examples and discussions of strategies of inquiry.

Like Bacon and Newton, several other early modern authors advanced ideas about how to generate and secure empirical knowledge, what difficulties may arise in scientific inquiry, and how they could be overcome. The close connection between theories about matter and force and scientific methodologies that we find in early modern works was gradually severed. 18 th - and early 19 th -century authors on scientific method and logic cited early modern approaches mostly to model proper scientific practice and reasoning, often creatively modifying them ( section 3 ). Moreover, they developed the earlier methodologies of experimentation, observation, and reasoning into practical guidelines for discovering new phenomena and devising probable hypotheses about cause-effect relations.

It was common in 20 th -century philosophy of science to draw a sharp contrast between those early theories of scientific method and modern approaches. 20 th -century philosophers of science interpreted 17 th - and 18 th -century approaches as generative theories of scientific method. They function simultaneously as guides for acquiring new knowledge and as assessments of the knowledge thus obtained, whereby knowledge that is obtained “in the right way” is considered secure (Laudan 1980; Schaffner 1993: chapter 2). On this view, scientific methods are taken to have probative force (Nickles 1985). According to modern, “consequentialist” theories, propositions must be established by comparing their consequences with observed and experimentally produced phenomena (Laudan 1980; Nickles 1985). It was further argued that, when consequentialist theories were on the rise, the two processes of generation and assessment of an idea or hypothesis became distinct, and the view that the merit of a new idea does not depend on the way in which it was arrived at became widely accepted.

More recent research in history of philosophy of science has shown, however, that there was no such sharp contrast. Consequentialist ideas were advanced throughout the 18 th century, and the early modern generative theories of scientific method and knowledge were more pragmatic than previously assumed. Early modern scholars did not assume that this procedure would lead to absolute certainty. One could only obtain moral certainty for the propositions thus secured.

During the 18 th and 19 th centuries, the different elements of discovery gradually became separated and discussed in more detail. Discussions concerned the nature of observations and experiments, the act of having an insight and the processes of articulating, developing, and testing the novel insight. Philosophical discussion focused on the question of whether and to what extent rules could be devised to guide each of these processes.

Numerous 19 th -century scholars contributed to these discussions, including Claude Bernard, Auguste Comte, George Gore, John Herschel, W. Stanley Jevons, Justus von Liebig, John Stuart Mill, and Charles Sanders Peirce, to name only a few. William Whewell’s work, especially the two volumes of Philosophy of the Inductive Sciences of 1840, is a noteworthy and, later, much discussed contribution to the philosophical debates about scientific discovery because he explicitly distinguished the creative moment or “happy thought” as he called it from other elements of scientific inquiry and because he offered a detailed analysis of the “discoverer’s induction”, i.e., the pursuit and evaluation of the new insight. Whewell’s approach is not unique, but for late 20 th -century philosophers of science, his comprehensive, historically informed philosophy of discovery became a point of orientation in the revival of interest in scientific discovery processes.

For Whewell, discovery comprised three elements: the happy thought, the articulation and development of that thought, and the testing or verification of it. His account was in part a description of the psychological makeup of the discoverer. For instance, he held that only geniuses could have those happy thoughts that are essential to discovery. In part, his account was an account of the methods by which happy thoughts are integrated into the system of knowledge. According to Whewell, the initial step in every discovery is what he called “some happy thought, of which we cannot trace the origin; some fortunate cast of intellect, rising above all rules. No maxims can be given which inevitably lead to discovery” (Whewell 1996 [1840]: 186). According to Whewell, an “art of discovery” in the sense of a teachable and learnable skill does not exist. The happy thought builds on the known facts, but it is impossible to prescribe a method for having happy thoughts.

In this sense, happy thoughts are accidental. But in an important sense, scientific discoveries are not accidental. The happy thought is not a wild guess. Only the person whose mind is prepared to see things will actually notice them. The “previous condition of the intellect, and not the single fact, is really the main and peculiar cause of the success. The fact is merely the occasion by which the engine of discovery is brought into play sooner or later. It is, as I have elsewhere said, only the spark which discharges a gun already loaded and pointed; and there is little propriety in speaking of such an accident as the cause why the bullet hits its mark.” (Whewell 1996 [1840]: 189).

Having a happy thought is not yet a discovery, however. The second element of a scientific discovery consists in binding together—“colligating”, as Whewell called it—a set of facts by bringing them under a general conception. Not only does the colligation produce something new, but it also shows the previously known facts in a new light. Colligation involves, on the one hand, the specification of facts through systematic observation, measurements and experiment, and on the other hand, the clarification of ideas through the exposition of the definitions and axioms that are tacitly implied in those ideas. This process is extended and iterative. The scientists go back and forth between binding together the facts, clarifying the idea, rendering the facts more exact, and so forth.

The final part of the discovery is the verification of the colligation involving the happy thought. This means, first and foremost, that the outcome of the colligation must be sufficient to explain the data at hand. Verification also involves judging the predictive power, simplicity, and “consilience” of the outcome of the colligation. “Consilience” refers to a higher range of generality (broader applicability) of the theory (the articulated and clarified happy thought) that the actual colligation produced. Whewell’s account of discovery is not a hypothetico-deductive one: it is essential that the outcome of the colligation be inferable from the data prior to any testing (Snyder 1997).

Whewell’s theory of discovery clearly separates three elements: the non-analyzable happy thought or eureka moment; the process of colligation, which includes the clarification and explication of facts and ideas; and the verification of the outcome of the colligation. His position that the philosophy of discovery cannot prescribe how to think happy thoughts has been a key element of 20th-century philosophical reflection on discovery. In contrast to many 20th-century approaches, Whewell’s philosophical conception of discovery also comprises the processes by which the happy thoughts are articulated. Similarly, the process of verification is an integral part of discovery. The procedures of articulation and test are both analyzable according to Whewell, and his conceptions of colligation and verification serve as guidelines for how the discoverer should proceed. To verify a hypothesis, the investigator needs to show that it accounts for the known facts, that it foretells new, previously unobserved phenomena, and that it can explain and predict phenomena which are explained and predicted by a hypothesis that was obtained through an independent happy thought-cum-colligation (Ducasse 1951).

Whewell’s conceptualization of scientific discovery offers a useful framework for mapping the philosophical debates about discovery and for identifying major issues of concern in 20th-century discussions. First, until the late 20th century, most philosophers operated with a notion of discovery narrower than Whewell’s: the scope of the term “discovery” was limited either to the first of these elements, the “happy thought”, or to the happy thought and its initial articulation. In this narrower conception, what Whewell called “verification” is not part of discovery proper. Second, until the late 20th century, there was wide agreement that the eureka moment, narrowly construed, is an unanalyzable, even mysterious leap of insight. The main disagreements concerned the question of whether the process of developing a hypothesis (the “colligation” in Whewell’s terms) is, or is not, a part of discovery proper, and, if it is, whether and how this process is guided by rules. Much of the controversy in the 20th century about the possibility of a philosophy of discovery can be understood against the background of this disagreement about whether the process of discovery includes the articulation and development of a novel thought. Philosophers also disagreed on the issue of whether it is a philosophical task to explicate these rules.

In early 20th-century logical empiricism, the view that discovery is or at least crucially involves a non-analyzable creative act of a gifted genius was widespread. Alternative conceptions of discovery, especially in the pragmatist tradition, emphasize that discovery is an extended process, i.e., that the discovery process includes the reasoning processes through which a new insight is articulated and further developed.

In the pragmatist tradition, the term “logic” is used in the broad sense to refer to strategies of human reasoning and inquiry. While the reasoning involved does not proceed according to the principles of demonstrative logic, it is systematic enough to deserve the label “logical”. Proponents of this view argued that traditional (here: syllogistic) logic is an inadequate model of scientific discovery because it misrepresents the process of knowledge generation as grossly as the notion of an “aha moment”.

Early 20th-century pragmatic logics of discovery can best be described as comprehensive theories of the mental and physical-practical operations involved in knowledge generation, as theories of “how we think” (Dewey 1910). Among the mental operations are classification, determination of what is relevant to an inquiry, and the conditions of communication of meaning; among the physical operations are observation and (laboratory) experiments. These features of scientific discovery are either not or only insufficiently represented by traditional syllogistic logic (Schiller 1917: 236–7).

Philosophers advocating this approach agree that the logic of discovery should be characterized as a set of heuristic principles rather than as a process of applying inductive or deductive logic to a set of propositions. These heuristic principles are not understood to show the path to secure knowledge. Heuristic principles are suggestive rather than demonstrative (Carmichael 1922, 1930). One recurrent feature in these accounts of the reasoning strategies leading to new ideas is analogical reasoning (Schiller 1917; Benjamin 1934, see also section 9.2 .). However, in academic philosophy of science, endeavors to develop more systematically the heuristics guiding discovery processes were soon eclipsed by the advance of the distinction between contexts of discovery and justification.

The distinction between “context of discovery” and “context of justification” dominated and shaped the discussions about discovery in 20th-century philosophy of science. The context distinction separates the generation of a new idea or hypothesis from the defense (test, verification) of it. As the previous sections have shown, distinguishing among different elements of scientific inquiry has a long history, but in the first half of the 20th century this distinction turned into a powerful demarcation criterion between “genuine” philosophy and other fields of science studies, and it came to shape philosophy of science accordingly. The boundary between context of discovery (the de facto thinking processes) and context of justification (the de jure defense of the correctness of these thoughts) was now understood to determine the scope of philosophy of science, whereby philosophy of science is conceived as a normative endeavor. Advocates of the context distinction argue that the generation of a new idea is an intuitive, nonrational process; it cannot be subject to normative analysis. Therefore, the study of scientists’ actual thinking can only be the subject of psychology, sociology, and other empirical sciences. Philosophy of science, by contrast, is exclusively concerned with the context of justification.

The terms “context of discovery” and “context of justification” are often associated with Hans Reichenbach’s work. Reichenbach’s original conception of the context distinction is quite complex, however (Howard 2006; Richardson 2006). It does not map easily on to the disciplinary distinction mentioned above, because for Reichenbach, philosophy of science proper is partly descriptive. Reichenbach maintains that philosophy of science includes a description of knowledge as it really is. Descriptive philosophy of science reconstructs scientists’ thinking processes in such a way that logical analysis can be performed on them, and it thus prepares the ground for the evaluation of these thoughts (Reichenbach 1938: § 1). Discovery, by contrast, is the object of empirical—psychological, sociological—study. According to Reichenbach, the empirical study of discoveries shows that processes of discovery often correspond to the principle of induction, but this is simply a psychological fact (Reichenbach 1938: 403).

While the terms “context of discovery” and “context of justification” are widely used, there has been ample discussion about how the distinction should be drawn and what its philosophical significance is (cf. Kordig 1978; Gutting 1980; Zahar 1983; Leplin 1987; Hoyningen-Huene 1987; Weber 2005: chapter 3; Schickore and Steinle 2006). Most commonly, the distinction is interpreted as a distinction between the process of conceiving a theory and the assessment of that theory, specifically the assessment of the theory’s epistemic support. This version of the distinction is not necessarily interpreted as a temporal distinction. In other words, it is not usually assumed that a theory is first fully developed and then assessed. Rather, generation and assessment are two different epistemic approaches to theory: the endeavor to articulate, flesh out, and develop its potential and the endeavor to assess its epistemic worth. Within the framework of the context distinction, there are two main ways of conceptualizing the process of conceiving a theory. The first option is to characterize the generation of new knowledge as an irrational act, a mysterious creative intuition, a “eureka moment”. The second option is to conceptualize the generation of new knowledge as an extended process that includes a creative act as well as some process of articulating and developing the creative idea.

Both of these accounts of knowledge generation served as starting points for arguments against the possibility of a philosophy of discovery. In line with the first option, philosophers have argued that neither is it possible to prescribe a logical method that produces new ideas nor is it possible to reconstruct logically the process of discovery. Only the process of testing is amenable to logical investigation. This objection to philosophies of discovery has been called the “discovery machine objection” (Curd 1980: 207). It is usually associated with Karl Popper’s Logic of Scientific Discovery .

The initial stage, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man—whether it is a musical theme, a dramatic conflict, or a scientific theory—may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge. This latter is concerned not with questions of fact (Kant’s quid facti?), but only with questions of justification or validity (Kant’s quid juris?). Its questions are of the following kind. Can a statement be justified? And if so, how? Is it testable? Is it logically dependent on certain other statements? Or does it perhaps contradict them? […] Accordingly I shall distinguish sharply between the process of conceiving a new idea, and the methods and results of examining it logically. As to the task of the logic of knowledge—in contradistinction to the psychology of knowledge—I shall proceed on the assumption that it consists solely in investigating the methods employed in those systematic tests to which every new idea must be subjected if it is to be seriously entertained. (Popper 2002 [1934/1959]: 7–8)

With respect to the second way of conceptualizing knowledge generation, many philosophers argue in a similar fashion that because the process of discovery involves an irrational, intuitive process, which cannot be examined logically, a logic of discovery cannot be construed. Other philosophers turn against the philosophy of discovery even though they explicitly acknowledge that discovery is an extended, reasoned process. They present a meta-philosophical objection, arguing that a theory of articulating and developing ideas is not a philosophical but a psychological or sociological theory. In this perspective, “discovery” is understood as a retrospective label, which is attributed as a sign of accomplishment to some scientific endeavors. Sociological theories acknowledge that discovery is a collective achievement and the outcome of a process of negotiation through which “discovery stories” are constructed and certain knowledge claims are granted discovery status (Brannigan 1981; Schaffer 1986, 1994).

The impact of the context distinction on 20th-century studies of scientific discovery and on philosophy of science more generally can hardly be overestimated. The view that the process of discovery (however construed) is outside the scope of philosophy of science proper was widely shared amongst philosophers of science for most of the 20th century. As the previous section showed, there were some attempts to develop logics of discovery in the 1920s and 1930s, especially in the pragmatist tradition. But for several decades, the context distinction dictated what philosophy of science should be about and how it should proceed. The dominant view was that theories of mental operations or heuristics had no place in philosophy of science and that, therefore, discovery was not a legitimate topic for philosophy of science. Until the last decades of the 20th century, there were few attempts to challenge the disciplinary distinction tied to the context distinction. Only during the 1970s did the interest in philosophical approaches to discovery begin to increase again. But the context distinction remained a challenge for philosophies of discovery.

There are several lines of response to the disciplinary distinction tied to the context distinction. Each of these lines of response opens a philosophical perspective on discovery. Each proceeds on the assumption that philosophy of science may legitimately include some form of analysis of actual reasoning patterns as well as information from empirical sciences such as cognitive science, psychology, and sociology. All of these responses reject the idea that discovery is nothing but a mystical event. Discovery is conceived as an analyzable reasoning process, not just as a creative leap by which novel ideas spring into being fully formed. All of these responses agree that the procedures and methods for arriving at new hypotheses and ideas are no guarantee that the hypothesis or idea that is thus formed is necessarily the best or the correct one. Nonetheless, it is the task of philosophy of science to provide rules for making this process better. All of these responses can be described as theories of problem solving, whose ultimate goal is to make the generation of new ideas and theories more efficient.

But the different approaches to scientific discovery employ different terminologies. In particular, the term “logic” of discovery is sometimes used in a narrow sense and sometimes broadly understood. In the narrow sense, “logic” of discovery is understood to refer to a set of formal, generally applicable rules by which novel ideas can be mechanically derived from existing data. In the broad sense, “logic” of discovery refers to the schematic representation of reasoning procedures. “Logical” is just another term for “rational”. Moreover, while each of these responses combines philosophical analyses of scientific discovery with empirical research on actual human cognition, different sets of resources are mobilized, ranging from AI research and cognitive science to historical studies of problem-solving procedures. Also, the responses parse the process of scientific inquiry differently. Often, scientific inquiry is regarded as having two aspects, viz. generation and assessments of new ideas. At times, however, scientific inquiry is regarded as having three aspects, namely generation, pursuit or articulation, and assessment of knowledge. In the latter framework, the label “discovery” is sometimes used to refer just to generation and sometimes to refer to both generation and pursuit.

One response to the challenge of the context distinction draws on a broad understanding of the term “logic” to argue that we cannot but admit a general, domain-neutral logic if we do not want to assume that the success of science is a miracle (Jantzen 2016) and that a logic of scientific discovery can be developed ( section 6 ). Another response, drawing on a narrow understanding of the term “logic”, is to concede that there is no logic of discovery, i.e., no algorithm for generating new knowledge, but that the process of discovery follows an identifiable, analyzable pattern ( section 7 ).

Others argue that discovery is governed by a methodology . The methodology of discovery is a legitimate topic for philosophical analysis ( section 8 ). Yet another response assumes that discovery is or at least involves a creative act. Drawing on resources from cognitive science, neuroscience, computational research, and environmental and social psychology, philosophers have sought to demystify the cognitive processes involved in the generation of new ideas. Philosophers who take this approach argue that scientific creativity is amenable to philosophical analysis ( section 9.1 ).

All these responses assume that there is more to discovery than a eureka moment. Discovery comprises processes of articulating, developing, and assessing the creative thought, as well as the scientific community’s adjudication of what does, and does not, count as “discovery” (Arabatzis 1996). These are the processes that can be examined with the tools of philosophical analysis, augmented by input from other fields of science studies such as sociology, history, or cognitive science.

6. Logics of discovery after the context distinction

One way of responding to the demarcation criterion described above is to argue that discovery is a topic for philosophy of science because it is a logical process after all. Advocates of this approach to the logic of discovery usually accept the overall distinction between the two processes of conceiving and testing a hypothesis. They also agree that it is impossible to put together a manual that provides a formal, mechanical procedure through which innovative concepts or hypotheses can be derived: There is no discovery machine. But they reject the view that the process of conceiving a theory is a creative act, a mysterious guess, a hunch, a more or less instantaneous and random process. Instead, they insist that both conceiving and testing hypotheses are processes of reasoning and systematic inference, that both of these processes can be represented schematically, and that it is possible to distinguish better and worse paths to new knowledge.

This line of argument has much in common with the logics of discovery described in section 4 above but it is now explicitly pitched against the disciplinary distinction tied to the context distinction. There are two main ways of developing this argument. The first is to conceive of discovery in terms of abductive reasoning ( section 6.1 ). The second is to conceive of discovery in terms of problem-solving algorithms, whereby heuristic rules aid the processing of available data and enhance the success in finding solutions to problems ( section 6.2 ). Both lines of argument rely on a broad conception of logic, whereby the “logic” of discovery amounts to a schematic account of the reasoning processes involved in knowledge generation.

One argument, elaborated prominently by Norwood R. Hanson, is that the act of discovery—here, the act of suggesting a new hypothesis—follows a distinctive logical pattern, which is different from both inductive logic and the logic of hypothetico-deductive reasoning. The special logic of discovery is the logic of abductive or “retroductive” inferences (Hanson 1958). The argument that plausible, promising scientific hypotheses are devised through acts of abductive inference goes back to C.S. Peirce. This version of the logic of discovery characterizes reasoning processes that take place before a new hypothesis is ultimately justified. The abductive mode of reasoning that leads to plausible hypotheses is conceptualized as an inference beginning with data or, more specifically, with surprising or anomalous phenomena.

In this view, discovery is primarily a process of explaining anomalies or surprising, astonishing phenomena. The scientists’ reasoning proceeds abductively from an anomaly to an explanatory hypothesis in light of which the phenomena would no longer be surprising or anomalous. The outcome of this reasoning process is not one single specific hypothesis but the delineation of a type of hypotheses that is worthy of further attention (Hanson 1965: 64). According to Hanson, the abductive argument has the following schematic form (Hanson 1960: 104):

  • Some surprising, astonishing phenomena p1, p2, p3… are encountered.
  • But p1, p2, p3… would not be surprising were an hypothesis of H’s type to obtain. They would follow as a matter of course from something like H and would be explained by it.
  • Therefore there is good reason for elaborating an hypothesis of type H—for proposing it as a possible hypothesis from whose assumption p1, p2, p3… might be explained.
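
Read simply as a filter on candidate hypothesis types rather than as a formal calculus, the schema can be given a toy computational rendering. The sketch below is purely illustrative and not part of Hanson’s account; the phenomena, the candidate types, and the expectations attached to them are invented placeholders.

```python
# Illustrative sketch of the retroductive schema (not from Hanson's text):
# given surprising phenomena, flag the hypothesis *types* under which they
# would follow "as a matter of course" as worth elaborating.

surprising_phenomena = {"p1", "p2", "p3"}  # hypothetical anomalies

# Hypothetical candidate hypothesis types, each listed with the phenomena
# that would be expected (unsurprising) if a hypothesis of that type held.
candidate_types = {
    "H_type_A": {"p1", "p2", "p3"},
    "H_type_B": {"p1"},
}

def worth_elaborating(expected_by_type, phenomena):
    """Return the hypothesis types that would render all phenomena expected."""
    return [h for h, expected in expected_by_type.items()
            if phenomena <= expected]

print(worth_elaborating(candidate_types, surprising_phenomena))  # ['H_type_A']
```

As the objections discussed below make clear, such a filter will typically let through more than one hypothesis type, so it cannot by itself settle which hypothesis should be developed.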

Drawing on the historical record, Hanson argues that several important discoveries were made relying on abductive reasoning, such as Kepler’s discovery of the elliptic orbit of Mars (Hanson 1958). It is now widely agreed, however, that Hanson’s reconstruction of the episode is not a historically adequate account of Kepler’s discovery (Lugg 1985). More importantly, while there is general agreement that abductive inferences are frequent in both everyday and scientific reasoning, these inferences are no longer considered as logical inferences. Even if one accepts Hanson’s schematic representation of the process of identifying plausible hypotheses, this process is a “logical” process only in the widest sense whereby the term “logical” is understood as synonymous with “rational”. Notably, some philosophers have even questioned the rationality of abductive inferences (Koehler 1991; Brem and Rips 2000).

Another argument against the above schema is that it is too permissive. There will be several hypotheses that are explanations for phenomena p1, p2, p3…, so the fact that a particular hypothesis explains the phenomena is not a decisive criterion for developing that hypothesis (Harman 1965; see also Blackwell 1969). Additional criteria are required to evaluate the hypothesis yielded by abductive inferences.

Finally, it is worth noting that the schema of abductive reasoning does not explain the very act of conceiving a hypothesis or hypothesis-type. The processes by which a new idea is first articulated remain unanalyzed in the above schema. The schema focuses on the reasoning processes by which an exploratory hypothesis is assessed in terms of its merits and promise (Laudan 1980; Schaffner 1993).

In more recent work on abduction and discovery, two notions of abduction are sometimes distinguished: the common notion of abduction as inference to the best explanation (selective abduction) and creative abduction (Magnani 2000, 2009). Selective abduction—the inference to the best explanation—involves selecting a hypothesis from a set of known hypotheses. Medical diagnosis exemplifies this kind of abduction. Creative abduction, by contrast, involves generating a new, plausible hypothesis. This happens, for instance, in medical research, when the notion of a new disease is articulated. However, it is still an open question whether this distinction can be drawn, or whether there is a more gradual transition from selecting an explanatory hypothesis from a familiar domain (selective abduction) to selecting a hypothesis that is slightly modified from the familiar set and to identifying a more drastically modified or altered assumption.

Another recent suggestion is to broaden Peirce’s original account of abduction and to include not only verbal information but also non-verbal mental representations, such as visual, auditory, or motor representations. In Thagard’s approach, representations are characterized as patterns of activity in neural populations (see also section 9.3 below). The advantage of the neural account of human reasoning is that it covers features such as the surprise that accompanies the generation of new insights or the visual and auditory representations that contribute to it. Surprise, for instance, could be characterized as resulting from rapid changes in activation of the node in a neural network representing the “surprising” element (Thagard and Stewart 2011). If all mental representations can be characterized as patterns of firing in neural populations, abduction can be analyzed as the combination or “convolution” (Thagard) of patterns of neural activity from disjoint or overlapping patterns of activity (Thagard 2010).
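
As a rough illustration of what the “convolution” of activity patterns might amount to computationally, the sketch below circularly convolves two activation vectors, an operation used in the vector-symbolic models that this line of work draws on. The vectors, their dimensionality, and the choice of circular convolution are assumptions made for illustration; this is not a reconstruction of Thagard and Stewart’s model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512                                          # arbitrary dimensionality
pattern_a = rng.normal(0, 1 / np.sqrt(dim), dim)   # stand-in activity pattern
pattern_b = rng.normal(0, 1 / np.sqrt(dim), dim)   # stand-in activity pattern

def convolve(a, b):
    """Circular convolution: one way to combine two distributed patterns
    into a new pattern of the same dimensionality."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

combined = convolve(pattern_a, pattern_b)
# The combined pattern is nearly uncorrelated with both inputs, which is one
# way of modeling the novelty of the resulting representation.
print(np.corrcoef(combined, pattern_a)[0, 1], np.corrcoef(combined, pattern_b)[0, 1])
```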

The concern with the logic of discovery has also motivated research on artificial intelligence at the intersection of philosophy of science and cognitive science. In this approach, scientific discovery is treated as a form of problem-solving activity (Simon 1973; see also Newell and Simon 1971), whereby the systematic aspects of problem solving are studied within an information-processing framework. The aim is to clarify with the help of computational tools the nature of the methods used to discover scientific hypotheses. These hypotheses are regarded as solutions to problems. Philosophers working in this tradition build computer programs employing methods of heuristic selective search (e.g., Langley et al. 1987). In computational heuristics, search programs can be described as searches for solutions in a so-called “problem space” in a certain domain. The problem space comprises all possible configurations in that domain (e.g., for chess problems, all possible arrangements of pieces on a chessboard). Each configuration is a “state” of the problem space. There are two special states, namely the goal state, i.e., the state to be reached, and the initial state, i.e., the configuration at the starting point from which the search begins. There are operators, which determine the moves that generate new states from the current state. There are path constraints, which limit the permitted moves. Problem solving is the process of searching for a solution to the problem of how to generate the goal state from an initial state. In principle, all states can be generated by applying the operators to the initial state, then to the resulting state, until the goal state is reached (Langley et al. 1987: chapter 9). A problem solution is a sequence of operations leading from the initial to the goal state.

The basic idea behind computational heuristics is that rules can be identified that serve as guidelines for finding a solution to a given problem quickly and efficiently by avoiding undesired states of the problem space. These rules are best described as rules of thumb. The aim of constructing a logic of discovery thus becomes the aim of constructing a heuristics for the efficient search for solutions to problems. The term “heuristic search” indicates that in contrast to algorithms, problem-solving procedures lead to results that are merely provisional and plausible. A solution is not guaranteed, but heuristic searches are advantageous because they are more efficient than exhaustive random trial and error searches. Insofar as it is possible to evaluate whether one set of heuristics is better—more efficacious—than another, the logic of discovery turns into a normative theory of discovery.
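
A minimal sketch of this framework, under invented assumptions: the states, operators, path constraint, and heuristic below define a toy problem space and are not drawn from any of the programs discussed in this literature.

```python
import heapq

# Toy problem space: states are integers, the initial state is 1, the goal
# state is 14, and the operators are "add 3" and "double". The heuristic (a
# rule of thumb, not a guarantee) prefers states that are closer to the goal.
INITIAL, GOAL = 1, 14
OPERATORS = [("add 3", lambda s: s + 3), ("double", lambda s: s * 2)]

def heuristic(state):
    return abs(GOAL - state)

def best_first_search(limit=1000):
    """Greedy heuristic search: always expand the most promising state and
    return the sequence of operators leading from the initial to the goal state."""
    frontier = [(heuristic(INITIAL), INITIAL, [])]
    visited = set()
    while frontier and limit > 0:
        _, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        if state in visited or state > 10 * GOAL:  # simple path constraint
            continue
        visited.add(state)
        for name, op in OPERATORS:
            new_state = op(state)
            heapq.heappush(frontier, (heuristic(new_state), new_state, path + [name]))
        limit -= 1
    return None

# Prints a (not necessarily shortest) operator sequence leading from 1 to 14.
print(best_first_search())
```

Because the heuristic is only a rule of thumb, the path found is plausible but not guaranteed to be optimal, which is precisely the trade-off between heuristic and exhaustive search described above.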

Arguably, because it is possible to reconstruct important scientific discovery processes with sets of computational heuristics, the scientific discovery process can be considered as a special case of the general mechanism of information processing. In this context, the term “logic” is not used in the narrow sense of a set of formal, generally applicable rules to draw inferences but again in a broad sense as a label for a set of procedural rules.

The computer programs that embody the principles of heuristic searches in scientific inquiry simulate the paths that scientists followed when they searched for new theoretical hypotheses. Computer programs such as BACON (Simon et al. 1981) and KEKADA (Kulkarni and Simon 1988) utilize sets of problem-solving heuristics to detect regularities in given data sets. The program would note, for instance, that the values of a dependent term are constant or that a set of values for a term x and a set of values for a term y are linearly related. It would thus “infer” that the dependent term always has that value or that a linear relation exists between x and y. These programs can “make discoveries” in the sense that they can simulate successful discoveries such as Kepler’s third law (BACON) or the Krebs cycle (KEKADA).
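
The following sketch illustrates this kind of regularity detection on a small invented data set. It checks only the two heuristics mentioned above (constant value and linear relation) and is not a reconstruction of BACON or KEKADA.

```python
# Hypothetical data: values of an independent term x and a dependent term y.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.5, 4.5, 6.5, 8.5]

TOL = 1e-9

def detect_regularity(xs, ys):
    """Apply two simple BACON-style heuristics to a data set:
    (1) is the dependent term constant? (2) are x and y linearly related?"""
    if max(ys) - min(ys) < TOL:
        return f"y is constant at {ys[0]}"
    # Fit y = a*x + b from the first two points and check the remaining ones.
    a = (ys[1] - ys[0]) / (xs[1] - xs[0])
    b = ys[0] - a * xs[0]
    if all(abs(y - (a * x + b)) < TOL for x, y in zip(xs, ys)):
        return f"y is linear in x: y = {a}*x + {b}"
    return "no regularity found by these heuristics"

print(detect_regularity(xs, ys))  # y is linear in x: y = 2.0*x + 0.5
```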

Computational theories of scientific discovery have helped identify and clarify a number of problem-solving strategies. An example of such a strategy is heuristic means-ends analysis, which involves identifying specific differences between the present situation and the goal situation and searching for operators (processes that will change the situation) that are associated with the differences that were detected. Another important heuristic is to divide the problem into sub-problems and to begin solving the one with the smallest number of unknowns to be determined (Simon 1977). Computational approaches have also highlighted the extent to which the generation of new knowledge draws on existing knowledge that constrains the development of new hypotheses.
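
A toy rendering of means-ends analysis, under the simplifying assumptions that a situation is a set of features and that each operator is annotated with the single difference it removes; the features and operators are invented for illustration.

```python
# Hypothetical situations represented as sets of features; each operator is
# tagged with the single difference (missing feature) it is known to remove.
current_situation = {"data collected"}
goal_situation = {"data collected", "data cleaned", "model fitted"}

operators = {
    "clean the data": "data cleaned",
    "fit the model": "model fitted",
}

def means_ends_step(current, goal, operators):
    """Pick an operator associated with one of the differences between the
    current situation and the goal situation."""
    differences = goal - current
    for name, removes in operators.items():
        if removes in differences:
            return name, removes
    return None

while current_situation != goal_situation:
    step = means_ends_step(current_situation, goal_situation, operators)
    if step is None:
        break  # no operator addresses the remaining differences
    name, gained = step
    current_situation = current_situation | {gained}
    print("applied:", name)
```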

As accounts of scientific discoveries, the early computational heuristics have some limitations. Compared to the well-defined problem spaces of computational heuristics, the complex problem spaces for scientific problems are often ill defined, and the relevant search space and goal state must be delineated before heuristic assumptions can be formulated (Bechtel and Richardson 1993: chapter 1). Because a computer program requires data from actual experiments, the simulations cover only certain aspects of scientific discoveries; in particular, the program cannot determine by itself which data are relevant, which data to relate, and what form of law it should look for (Gillies 1996). However, as a consequence of the rise of so-called “deep learning” methods in data-intensive science, there is renewed philosophical interest in the question of whether machines can make discoveries (section 10).

7. Anomalies and the emergence of discoveries

Many philosophers maintain that discovery is a legitimate topic for philosophy of science while abandoning the notion that there is a logic of discovery. One very influential approach is Thomas Kuhn’s analysis of the emergence of novel facts and theories (Kuhn 1970 [1962]: chapter 6). Kuhn identifies a general pattern of discovery as part of his account of scientific change. A discovery is not a simple act, but an extended, complex process, which culminates in paradigm changes. Paradigms are the symbolic generalizations, metaphysical commitments, values, and exemplars that are shared by a community of scientists and that guide the research of that community. Paradigm-based, normal science does not aim at novelty but instead at the development, extension, and articulation of accepted paradigms. A discovery begins with an anomaly, that is, with the recognition that the expectations induced by an established paradigm are being violated. The process of discovery involves several aspects: observations of an anomalous phenomenon, attempts to conceptualize it, and changes in the paradigm so that the anomaly can be accommodated.

It is the mark of success of normal science that it does not make transformative discoveries, and yet such discoveries come about as a consequence of normal, paradigm-guided science. The more detailed and the better developed a paradigm, the more precise are its predictions. The more precisely the researchers know what to expect, the better they are able to recognize anomalous results and violations of expectations:

novelty ordinarily emerges only for the man who, knowing with precision what he should expect, is able to recognize that something has gone wrong. Anomaly appears only against the background provided by the paradigm. (Kuhn 1970 [1962]: 65)

Drawing on several historical examples, Kuhn argues that it is usually impossible to identify the very moment when something was discovered or even the individual who made the discovery. Kuhn illustrates these points with the discovery of oxygen (see Kuhn 1970 [1962]: 53–56). Oxygen had not been discovered before 1774 and had been discovered by 1777. Even before 1774, Lavoisier had noticed that something was wrong with phlogiston theory, but he was unable to move forward. Two other investigators, C. W. Scheele and Joseph Priestley, independently identified a gas obtained from heating solid substances. But Scheele’s work remained unpublished until after 1777, and Priestley did not identify his substance as a new sort of gas. In 1777, Lavoisier presented the oxygen theory of combustion, which gave rise to a fundamental reconceptualization of chemistry. But according to this theory as Lavoisier first presented it, oxygen was not a chemical element. It was an atomic “principle of acidity” and oxygen gas was a combination of that principle with caloric. According to Kuhn, all of these developments are part of the discovery of oxygen, but none of them can be singled out as “the” act of discovery.

In pre-paradigmatic periods or in times of paradigm crisis, theory-induced discoveries may happen. In these periods, scientists speculate and develop tentative theories, which may lead to novel expectations and to experiments and observations designed to test whether these expectations can be confirmed. Even though no precise predictions can be made, phenomena that are thus uncovered are often not quite what had been expected. In these situations, the simultaneous exploration of the new phenomena and articulation of the tentative hypotheses together bring about discovery.

In cases like the discovery of oxygen, by contrast, which took place while a paradigm was already in place, the unexpected becomes apparent only slowly, with difficulty, and against some resistance. Only gradually do the anomalies become visible as such. It takes time for the investigators to recognize “both that something is and what it is” (Kuhn 1970 [1962]: 55). Eventually, a new paradigm becomes established and the anomalous phenomena become the expected phenomena.

Recent studies in cognitive neuroscience of brain activity during periods of conceptual change support Kuhn’s view that conceptual change is hard to achieve. These studies examine the neural processes that are involved in the recognition of anomalies and compare them with the brain activity involved in the processing of information that is consistent with preferred theories. The studies suggest that the two types of data are processed differently (Dunbar et al. 2007).

8. Methodologies of discovery

Advocates of the view that there are methodologies of discovery use the term “logic” in the narrow sense of an algorithmic procedure for generating new ideas. But like the AI-based theories of scientific discovery described in section 6, methodologies of scientific discovery interpret the concept “discovery” as a label for an extended process of generating and articulating new ideas and often describe the process in terms of problem solving. In these approaches, the distinction between the context of discovery and the context of justification is challenged because the methodology of discovery is understood to play a justificatory role. Advocates of a methodology of discovery usually rely on a distinction between different justification procedures: justification involved in the process of generating new knowledge and justification involved in testing it. Consequential or “strong” justifications are methods of testing. The justification involved in discovery, by contrast, is conceived as generative (as opposed to consequential) justification (section 8.1) or as weak (as opposed to strong) justification (section 8.2). Again, some terminological ambiguity exists because according to some philosophers there are three contexts, not two: only the initial conception of a new idea (the creative act) is the context of discovery proper, and between it and justification there exists a separate context of pursuit (Laudan 1980). But many advocates of methodologies of discovery regard the context of pursuit as an integral part of the process of justification. They retain the notion of two contexts and re-draw the boundaries between the contexts of discovery and justification as they were drawn in the early 20th century.

The methodology of discovery has sometimes been characterized as a form of justification that is complementary to the methodology of testing (Nickles 1984, 1985, 1989). According to the methodology of testing, empirical support for a theory results from successfully testing the predictive consequences derived from that theory (and appropriate auxiliary assumptions). In light of this methodology, justification for a theory is “consequential justification,” the notion that a hypothesis is established if successful novel predictions are derived from the theory or claim. Generative justification complements consequential justification. Advocates of generative justification hold that there exists an important form of justification in science that involves reasoning to a claim from data or previously established results more generally.

One classic example of a generative methodology is the set of Newton’s rules for the study of natural philosophy. According to these rules, general propositions are established by deducing them from the phenomena. The notion of generative justification seeks to preserve the intuition behind classic conceptions of justification by deduction. Generative justification amounts to the rational reconstruction of the discovery path in order to establish the claim’s discoverability had the researchers known what is known now, regardless of how the claim was in fact first thought of (Nickles 1985, 1989). The reconstruction demonstrates in hindsight that the claim could have been discovered in this manner had the necessary information and techniques been available. In other words, generative justification—justification as “discoverability” or “potential discovery”—justifies a knowledge claim by deriving it from results that are already established. While generative justification need not retrace exactly those steps of the discovery path that were actually taken, it is a better representation of scientists’ actual practices than consequential justification because scientists tend to construct new claims from available knowledge. Generative justification is a weaker version of the traditional ideal of justification by deduction from the phenomena. Justification by deduction from the phenomena is complete if a theory or claim is completely determined from what we already know. The demonstration of discoverability results from the successful derivation of a claim or theory from the most basic and most solidly established empirical information.

Discoverability as described in the previous paragraphs is a mode of justification. Like the testing of novel predictions derived from a hypothesis, generative justification begins when the phase of finding and articulating a hypothesis worthy of assessing is drawing to a close. Other approaches to the methodology of discovery are directly concerned with the procedures involved in devising new hypotheses. The argument in favor of this kind of methodology is that the procedures of devising new hypotheses already include elements of appraisal. These preliminary assessments have been termed “weak” evaluation procedures (Schaffner 1993). Weak evaluations are relevant during the process of devising a new hypothesis. They provide reasons for accepting a hypothesis as promising and worthy of further attention. Strong evaluations, by contrast, provide reasons for accepting a hypothesis as (approximately) true or confirmed. Both “generative” and “consequential” testing as discussed in the previous section are strong evaluation procedures. Strong evaluation procedures are rigorous and systematically organized according to the principles of hypothesis derivation or H-D testing. A methodology of preliminary appraisal, by contrast, articulates criteria for the evaluation of a hypothesis prior to rigorous derivation or testing. It aids the decision about whether to take that hypothesis seriously enough to develop it further and test it. For advocates of this version of the methodology of discovery, it is the task of philosophy of science to characterize sets of constraints and methodological rules guiding the complex process of prior-to-test evaluation of hypotheses.

In contrast to the computational approaches discussed above, strategies of preliminary appraisal are not regarded as subject-neutral but as specific to particular fields of study. Philosophers of biology, for instance, have developed a fine-grained framework to account for the generation and preliminary evaluation of biological mechanisms (Darden 2002; Craver 2002; Bechtel and Richardson 1993; Craver and Darden 2013). Some philosophers have suggested that the phase of preliminary appraisal be further divided into two phases, the phase of appraising and the phase of revising. According to Lindley Darden, the phases of generation, appraisal and revision of descriptions of mechanisms can be characterized as reasoning processes governed by reasoning strategies. Different reasoning strategies govern the different phases (Darden 1991, 2002; Craver 2002; Darden 2009). The generation of hypotheses about mechanisms, for instance, is governed by the strategy of “schema instantiation” (see Darden 2002). The discovery of the mechanism of protein synthesis involved the instantiation of an abstract schema for chemical reactions: reactant 1 + reactant 2 = product. The actual mechanism of protein synthesis was found through specification and modification of this schema.
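
In the barest terms, schema instantiation can be pictured as filling the placeholder roles of an abstract schema with domain-specific entities. The sketch below is a toy data-structure example with hypothetical fillers loosely inspired by the protein synthesis case; it is not a reconstruction of Darden’s analysis or of the actual mechanism.

```python
# An abstract schema with placeholder roles, and a toy instantiation of it.
abstract_schema = "{reactant_1} + {reactant_2} -> {product}"

# Hypothetical fillers, loosely inspired by the protein synthesis case.
instantiation = {
    "reactant_1": "amino acids",
    "reactant_2": "template",
    "product": "polypeptide",
}

# Instantiating the schema yields a candidate description of the mechanism,
# to be specified and modified further in the light of experimental findings.
print(abstract_schema.format(**instantiation))  # amino acids + template -> polypeptide
```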

Such strategies are not deemed necessary for discovery, nor are they prescriptions for biological research. Rather, these strategies are deemed sufficient for the discovery of mechanisms. The methodology of the discovery of mechanisms is an extrapolation from past episodes of research on mechanisms and the result of a synthesis of rational reconstructions of several of these historical episodes. The methodology of discovery is weakly normative in the sense that the strategies for the discovery of mechanisms that were successful in the past may prove useful in future biological research (Darden 2002).

As philosophers of science have again become more attuned to actual scientific practices, interest in heuristic strategies has also been revived. Many analysts now agree that discovery processes can be regarded as problem solving activities, whereby a discovery is a solution to a problem. Heuristics-based methodologies of discovery are neither purely subjective and intuitive nor algorithmic or formalizable; the point is that reasons can be given for pursuing one or the other problem-solving strategy. These rules are open and do not guarantee a solution to a problem when applied (Ippoliti 2018). On this view, scientific researchers are no longer seen as Kuhnian “puzzle solvers” but as problem solvers and decision makers in complex, variable, and changing environments (Wimsatt 2007).

Philosophers of discovery working in this tradition draw on a growing body of literature in cognitive psychology, management science, operations research, and economics on human reasoning and decision making in contexts with limited information, under time constraints, and with sub-optimal means (Gigerenzer & Sturm 2012). Heuristic strategies characterized in these studies, such as Gigerenzer’s “tools-to-theories heuristic”, are then applied to understand scientific knowledge generation (Gigerenzer 1992, Nickles 2018). Other analysts specify heuristic strategies in a range of scientific fields, including climate science, neurobiology, and clinical medicine (Gramelsberger 2011, Schaffner 2008, Gillies 2018). Finally, in analytic epistemology, formal methods are developed to identify and assess distinct heuristic strategies currently in use, such as Bayesian reverse engineering in cognitive science (Zednik and Jäkel 2016).

As the literature on heuristics continues to grow, it has become clear that the term “heuristics” is itself used in a variety of different ways. (For a valuable taxonomy of meanings of “heuristic,” see Chow 2015, see also Ippoliti 2018.) Moreover, as in the context of earlier debates about computational heuristics, debates continue about the limitations of heuristics. The use of heuristics may come at a cost if heuristics introduce systematic biases (Wimsatt 2007). Some philosophers thus call for general principles for the evaluation of heuristic strategies (Hey 2016).

9. Cognitive perspectives on discovery

The approaches to scientific discovery presented in the previous sections focus on the adoption, articulation, and preliminary evaluation of ideas or hypotheses prior to rigorous testing, not on how a novel hypothesis or idea is first thought up. For a long time, the predominant view among philosophers of discovery was that the initial step of discovery is a mysterious intuitive leap of the human mind that cannot be analyzed further. More recent accounts of discovery informed by evolutionary biology also do not explicate how new ideas are formed. The generation of new ideas is akin to random, blind variations of thought processes, which have to be inspected by the critical mind and assessed as neutral, productive, or useless (Campbell 1960; see also Hull 1988), but the key processes by which new ideas are generated are left unanalyzed.

With the recent rapprochement among philosophy of mind, cognitive science and psychology and the increased integration of empirical research into philosophy of science, these processes have been submitted to closer analysis, and philosophical studies of creativity have seen a surge of interest (e.g. Paul & Kaufman 2014a). The distinctive feature of these studies is that they integrate philosophical analyses with empirical work from cognitive science, psychology, evolutionary biology, and computational neuroscience (Thagard 2012). Analysts have distinguished different kinds and different features of creative thinking and have examined certain features in depth, and from new angles. Recent philosophical research on creativity comprises conceptual analyses and integrated approaches based on the assumption that creativity can be analyzed and that empirical research can contribute to the analysis (Paul & Kaufman 2014b). Two key elements of the cognitive processes involved in creative thinking that have been in the focus of philosophical analysis are analogies ( section 9.2 ) and mental models ( section 9.3 ).

General definitions of creativity highlight novelty or originality and significance or value as distinctive features of a creative act or product (Sternberg & Lubart 1999, Kieran 2014, Paul & Kaufman 2014b, although see Hills & Bird 2019). Different kinds of creativity can be distinguished depending on whether the act or product is novel for a particular individual or entirely novel. Psychologist Margaret Boden distinguishes between psychological creativity (P-creativity) and historical creativity (H-creativity). P-creativity is a development that is new, surprising and important to the particular person who comes up with it. H-creativity, by contrast, is radically novel, surprising, and important—it is generated for the first time (Boden 2004). Further distinctions have been proposed, such as anthropological creativity (construed as a human condition) and metaphysical creativity, a radically new thought or action in the sense that it is unaccounted for by antecedents and available knowledge, and thus constitutes a radical break with the past (Kronfeldner 2009, drawing on Hausman 1984).

Psychological studies analyze the personality traits and behavioral dispositions of creative individuals that are conducive to creative thinking. They suggest that creative scientists share certain distinct personality traits, including confidence, openness, dominance, independence, and introversion, as well as arrogance and hostility. (For overviews of recent studies on personality traits of creative scientists, see Feist 1999, 2006: chapter 5.)

Recent work on creativity in philosophy of mind and cognitive science offers substantive analyses of the cognitive and neural mechanisms involved in creative thinking (Abrams 2018, Minai et al 2022) and critical scrutiny of the romantic idea of genius creativity as something deeply mysterious (Blackburn 2014). Some of this research aims to characterize features that are common to all creative processes, such as Thagard and Stewart’s account according to which creativity results from combinations of representations (Thagard & Stewart 2011, but see Pasquale and Poirier 2016). Other research aims to identify the features that are distinctive of scientific creativity as opposed to other forms of creativity, such as artistic creativity or creative technological invention (Simonton 2014).

Many philosophers of science highlight the role of analogy in the development of new knowledge, whereby analogy is understood as a process of bringing ideas that are well understood in one domain to bear on a new domain (Thagard 1984; Holyoak and Thagard 1996). An important source for philosophical thought about analogy is Mary Hesse’s conception of models and analogies in theory construction and development. In this approach, analogies are similarities between different domains. Hesse introduces the distinction between positive, negative, and neutral analogies (Hesse 1966: 8). If we consider the relation between gas molecules and a model for gas, namely a collection of billiard balls in random motion, we will find properties that are common to both domains (positive analogy) as well as properties that can only be ascribed to the model but not to the target domain (negative analogy). There is a positive analogy between gas molecules and a collection of billiard balls because both the balls and the molecules move randomly. There is a negative analogy between the domains because billiard balls are colored, hard, and shiny but gas molecules do not have these properties. The most interesting properties are those properties of the model about which we do not know whether they are positive or negative analogies. This set of properties is the neutral analogy. These properties are the significant properties because they might lead to new insights about the less familiar domain. From our knowledge about the familiar billiard balls, we may be able to derive new predictions about the behavior of gas molecules, which we could then test.
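
The billiard-ball example can be tabulated in a short sketch. The property sets below are simplified and invented for illustration; the point is only to show how the positive, negative, and neutral analogies fall out of what is taken to be known about model and target.

```python
# Simplified, invented property sets for the billiard-ball model of a gas.
model_properties = {"moves randomly", "collides with other particles", "colored",
                    "hard", "shiny", "transfers momentum on impact"}
known_true_of_gas = {"moves randomly", "collides with other particles"}
known_false_of_gas = {"colored", "hard", "shiny"}

positive_analogy = model_properties & known_true_of_gas
negative_analogy = model_properties & known_false_of_gas
neutral_analogy = model_properties - known_true_of_gas - known_false_of_gas

print("positive:", positive_analogy)
print("negative:", negative_analogy)
# The neutral analogy is where new, testable predictions about the gas may come from.
print("neutral:", neutral_analogy)
```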

Hesse offers a more detailed analysis of the structure of analogical reasoning through the distinction between horizontal and vertical analogies between domains. Horizontal analogies between two domains concern the sameness or similarity between properties of both domains. If we consider sound and light waves, there are similarities between them: sound echoes, light reflects; sound is loud, light is bright; both sound and light are detectable by our senses. There are also relations among the properties within one domain, such as the causal relation between sound and the loud tone we hear and, analogously, between physical light and the bright light we see. These relations are vertical analogies. For Hesse, vertical analogies hold the key for the construction of new theories.

Analogies play several roles in science. Not only do they contribute to discovery but they also play a role in the development and evaluation of scientific theories. Current discussions about analogy and discovery have expanded and refined Hesse’s approach in various ways. Some philosophers have developed criteria for evaluating analogy arguments (Bartha 2010). Other work has identified highly significant analogies that were particularly fruitful for the advancement of science (Holyoak and Thagard 1996: 186–188; Thagard 1999: chapter 9). The majority of analysts explore the features of the cognitive mechanisms through which aspects of a familiar domain or source are applied to an unknown target domain in order to understand what is unknown. According to the influential multi-constraint theory of analogical reasoning developed by Holyoak and Thagard, the transfer processes involved in analogical reasoning (scientific and otherwise) are guided or constrained in three main ways: 1) by the direct similarity between the elements involved; 2) by the structural parallels between source and target domain; as well as 3) by the purposes of the investigators, i.e., the reasons why the analogy is considered. Discovery, the formulation of a new hypothesis, is one such purpose.
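
Schematically, the three constraints can be pictured as jointly scoring competing candidate analogies. The sketch below uses invented candidates, scores, and weights, and it replaces the constraint-satisfaction networks used in this research program with a simple weighted sum, so it should be read only as an illustration of the idea.

```python
# Hypothetical candidate analogies for one target problem, each scored (0-1)
# on the three constraints of the multi-constraint theory.
candidates = {
    "water waves -> sound": {"similarity": 0.6, "structure": 0.8, "purpose": 0.9},
    "billiard balls -> sound": {"similarity": 0.5, "structure": 0.3, "purpose": 0.4},
}

# Invented weights; in the actual models the constraints interact in a
# constraint-satisfaction network rather than being summed.
weights = {"similarity": 1.0, "structure": 1.0, "purpose": 1.0}

def score(constraint_scores):
    return sum(weights[c] * v for c, v in constraint_scores.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best, round(score(candidates[best]), 2))  # water waves -> sound 2.3
```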

“In vivo” investigations of scientists reasoning in their laboratories have not only shown that analogical reasoning is a key component of scientific practice, but also that the distance between source and target depends on the purpose for which analogies are sought. Scientists trying to fix experimental problems draw analogies between targets and sources from highly similar domains. In contrast, scientists attempting to formulate new models or concepts draw analogies between less similar domains. Analogies between radically different domains, however, are rare (Dunbar 1997, 2001).

In current cognitive science, human cognition is often explored in terms of model-based reasoning. The starting point of this approach is the notion that much of human reasoning, including probabilistic and causal reasoning as well as problem solving takes place through mental modeling rather than through the application of logic or methodological criteria to a set of propositions (Johnson-Laird 1983; Magnani et al. 1999; Magnani and Nersessian 2002). In model-based reasoning, the mind constructs a structural representation of a real-world or imaginary situation and manipulates this structure. In this perspective, conceptual structures are viewed as models and conceptual innovation as constructing new models through various modeling operations. Analogical reasoning—analogical modeling—is regarded as one of three main forms of model-based reasoning that appear to be relevant for conceptual innovation in science. Besides analogical modeling, visual modeling and simulative modeling or thought experiments also play key roles (Nersessian 1992, 1999, 2009). These modeling practices are constructive in that they aid the development of novel mental models. The key elements of model-based reasoning are the call on knowledge of generative principles and constraints for physical models in a source domain and the use of various forms of abstraction. Conceptual innovation results from the creation of new concepts through processes that abstract and integrate source and target domains into new models (Nersessian 2009).

Some critics have argued that despite the large amount of work on the topic, the notion of mental model is not sufficiently clear. Thagard seeks to clarify the concept by characterizing mental models in terms of neural processes (Thagard 2010). In his approach, mental models are produced through complex patterns of neural firing, whereby the neurons and the interconnections between them are dynamic and changing. A pattern of firing neurons is a representation when there is a stable causal correlation between the pattern or activation and the thing that is represented. In this research, questions about the nature of model-based reasoning are transformed into questions about the brain mechanisms that produce mental representations.

The above sections again show that the study of scientific discovery integrates different approaches, combining conceptual analysis of processes of knowledge generation with empirical work on creativity, drawing heavily and explicitly on current research in psychology and cognitive science, and on in vivo laboratory observations, as well as brain imaging techniques (Kounios & Beeman 2009, Thagard & Stewart 2011).

Earlier critics of AI-based theories of scientific discoveries argued that a computer cannot devise new concepts but is confined to the concepts included in the given computer language (Hempel 1985: 119–120). It cannot design new experiments, instruments, or methods. Subsequent computational research on scientific discovery was driven by the motivation to contribute computational tools to aid scientists in their research (Addis et al. 2016). It appears that computational methods can be used to generate new results leading to refereed scientific publications in astrophysics, cancer research, ecology, and other fields (Langley 2000). However, the philosophical discussion has continued about the question of whether these methods really generate new knowledge or whether they merely speed up data processing. It is also still an open question whether data-intensive science is fundamentally different from traditional research, for instance regarding the status of hypothesis or theory in data-intensive research (Pietsch 2015).

In the wake of recent developments in machine learning, some older discussions about automated discovery have been revived. The availability of vastly improved computational tools and software for data analysis has stimulated new discussions about computer-generated discovery (see Leonelli 2020). It is largely uncontroversial that machine learning tools can aid discovery, for instance in research on antibiotics (Stokes et al. 2020). The notion of “robot scientist” is mostly used metaphorically, and the vision that human scientists may one day be replaced by computers – by successors of the laboratory automation systems “Adam” and “Eve”, allegedly the first “robot scientists” – is evoked in writings for broader audiences (see King et al. 2009, Williams et al. 2015, for popularized descriptions of these systems), although some interesting ethical challenges do arise from “superhuman AI” (see Russell 2021). It also appears that, on the notion that products of creative acts are both novel and valuable, AI systems should be called “creative”, an implication that not all analysts will find plausible (Boden 2014).

Philosophical analyses focus on various questions arising from the processes involving human-machine complexes. One issue relevant to the problem of scientific discovery arises from the opacity of machine learning. If machine learning indeed escapes human understanding, how can we be warranted to say that knowledge or understanding is generated by deep learning tools? Might we have reason to say that humans and machines are “co-developers” of knowledge (Tamaddoni-Nezhad et al. 2021)?

New perspectives on scientific discovery have also opened up in the context of social epistemology (see Goldman & O’Connor 2021). Social epistemology investigates knowledge production as a group process, specifically the epistemic effects of group composition in terms of cognitive diversity and unity and social interactions within groups or institutions such as testimony and trust, peer disagreement and critique, and group justification, among others. On this view, discovery is a collective achievement, and the task is to explore how assorted social-epistemic activities or practices have an impact on the knowledge generated by groups in question. There are obvious implications for debates about scientific discovery of recent research in the different branches of social epistemology. Social epistemologists have examined individual cognitive agents in their roles as group members (as providers of information or as critics) and the interactions among these members (Longino 2001), groups as aggregates of diverse agents, or the entire group as epistemic agent (e.g., Koons 2021, Dragos 2019).

Standpoint theory, for instance, explores the role of outsiders in knowledge generation, considering how the sociocultural structures and practices in which individuals are embedded aid or obstruct the generation of creative ideas. According to standpoint theorists, people with standpoint are politically aware and engaged individuals outside the mainstream. Because people with standpoint have different experiences and access to different domains of expertise than most members of a culture, they can draw on rich conceptual resources for creative thinking (Solomon 2007).

Social epistemologists examining groups as aggregates of agents consider to what extent diversity among group members is conducive to knowledge production and whether and to what extent beliefs and attitudes must be shared among group members to make collective knowledge possible (Bird 2014). This is still an open question. Some formal approaches to model the influence of diversity on knowledge generation suggest that cognitive diversity is beneficial to collective knowledge generation (Weisberg and Muldoon 2009), but others have criticized the model (Alexander et al. 2015; see also Thoma 2015 and Pöyhönen 2017 for further discussion).

This essay has illustrated that philosophy of discovery has come full circle. Philosophy of discovery has once again become a thriving field of philosophical study, now intersecting with, and drawing on philosophical and empirical studies of creative thinking, problem solving under uncertainty, collective knowledge production, and machine learning. Recent approaches to discovery are typically explicitly interdisciplinary and integrative, cutting across previous distinctions among hypothesis generation and theory building, data collection, assessment, and selection; as well as descriptive-analytic, historical, and normative perspectives (Danks & Ippoliti 2018, Michel 2021). The goal no longer is to provide one overarching account of scientific discovery but to produce multifaceted analyses of past and present activities of knowledge generation in all their complexity and heterogeneity that are illuminating to the non-scientist and the scientific researcher alike.

  • Abraham, A. 2019, The Neuroscience of Creativity, Cambridge: Cambridge University Press.
  • Addis, M., Sozou, P.D., Gobet, F. and Lane, P. R., 2016, “Computational scientific discovery and cognitive science theories”, in Mueller, V. C. (ed.) Computing and Philosophy , Springer, 83–87.
  • Alexander, J., Himmelreich, J., and Thompson, C., 2015, “Epistemic Landscapes, Optimal Search, and the Division of Cognitive Labor”, Philosophy of Science , 82: 424–453.
  • Arabatzis, T. 1996, “Rethinking the ‘Discovery’ of the Electron,” Studies in History and Philosophy of Science Part B Studies In History and Philosophy of Modern Physics , 27: 405–435.
  • Bartha, P., 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press.
  • Bechtel, W. and R. Richardson, 1993, Discovering Complexity , Princeton: Princeton University Press.
  • Benjamin, A.C., 1934, “The Mystery of Scientific Discovery ” Philosophy of Science , 1: 224–36.
  • Bird, A. 2014, “When is There a Group that Knows? Distributed Cognition, Scientific Knowledge, and the Social Epistemic Subject”, in J. Lackey (ed.), Essays in Collective Epistemology , Oxford: Oxford University Press, 42–63.
  • Blackburn, S. 2014, “Creativity and Not-So-Dumb Luck”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn. https://doi.org/10.1093/acprof:oso/9780199836963.003.0008.
  • Blackwell, R.J., 1969, Discovery in the Physical Sciences , Notre Dame: University of Notre Dame Press.
  • Boden, M.A., 2004, The Creative Mind: Myths and Mechanisms , London: Routledge.
  • –––, 2014, “Creativity and Artificial Intelligence: A Contradiction in Terms?”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays (New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0012 .
  • Brannigan, A., 1981, The Social Basis of Scientific Discoveries , Cambridge: Cambridge University Press.
  • Brem, S. and L.J. Rips, 2000, “Explanation and Evidence in Informal Argument”, Cognitive Science , 24: 573–604.
  • Campbell, D., 1960, “Blind Variation and Selective Retention in Creative Thought as in Other Knowledge Processes”, Psychological Review , 67: 380–400.
  • Carmichael, R.D., 1922, “The Logic of Discovery”, The Monist , 32: 569–608.
  • –––, 1930, The Logic of Discovery , Chicago: Open Court.
  • Chow, S. 2015, “Many Meanings of ‘Heuristic’”, British Journal for the Philosophy of Science , 66: 977–1016
  • Craver, C.F., 2002, “Interlevel Experiments, Multilevel Mechanisms in the Neuroscience of Memory”, Philosophy of Science Supplement , 69: 83–97.
  • Craver, C.F. and L. Darden, 2013, In Search of Mechanisms: Discoveries across the Life Sciences , Chicago: University of Chicago Press.
  • Curd, M., 1980, “The Logic of Discovery: An Analysis of Three Approaches”, in T. Nickles (ed.) Scientific Discovery, Logic, and Rationality , Dordrecht: D. Reidel, 201–19.
  • Danks, D. & Ippoliti, E. (eds.) 2018, Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , New York: Oxford University Press.
  • –––, 2002, “Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward/Backward Chaining”, Philosophy of Science , 69: S354-S65.
  • –––, 2009, “Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness and Incorrectness”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 43–55.
  • Dewey, J. 1910, How We Think . Boston: D.C. Heath
  • Dragos, C., 2019, “Groups Can Know How”, American Philosophical Quarterly , 56: 265–276.
  • Ducasse, C.J., 1951, “Whewell’s Philosophy of Scientific Discovery II”, The Philosophical Review , 60(2): 213–34.
  • Dunbar, K., 1997, “How scientists think: On-line creativity and conceptual change in science”, in T.B. Ward, S.M. Smith, and J. Vaid (eds.), Conceptual Structures and Processes: Emergence, Discovery, and Change , Washington, DC: American Psychological Association Press, 461–493.
  • –––, 2001, “The Analogical Paradox: Why Analogy is so Easy in Naturalistic Settings Yet so Difficult in Psychological Laboratories”, in D. Gentner, K.J. Holyoak, and B.N. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science , Cambridge, MA: MIT Press.
  • Dunbar, K, J. Fugelsang, and C Stein, 2007, “Do Naïve Theories Ever Go Away? Using Brain and Behavior to Understand Changes in Concepts”, in M. Lovett and P. Shah (eds.), Thinking with Data: 33rd Carnegie Symposium on Cognition , Mahwah: Erlbaum, 193–205.
  • Feist, G.J., 1999, “The Influence of Personality on Artistic and Scientific Creativity”, in R.J. Sternberg (ed.), Handbook of Creativity , New York: Cambridge University Press, 273–96.
  • –––, 2006, The psychology of science and the origins of the scientific mind , New Haven: Yale University Press.
  • Gillies D., 1996, Artificial intelligence and scientific method . Oxford: Oxford University Press.
  • –––, 2018 “Discovering Cures in Medicine” in Danks, D. & Ippoliti, E. (eds.), Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 83–100.
  • Goldman, Alvin & O’Connor, C., 2021, “Social Epistemology”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2021/entries/epistemology-social/>.
  • Gramelsberger, G. 2011, “What Do Numerical (Climate) Models Really Represent?” Studies in History and Philosophy of Science 42: 296–302.
  • Gutting, G., 1980, “Science as Discovery”, Revue internationale de philosophie , 131: 26–48.
  • Hanson, N.R., 1958, Patterns of Discovery , Cambridge: Cambridge University Press.
  • –––, 1960, “Is there a Logic of Scientific Discovery?”, Australasian Journal of Philosophy , 38: 91–106.
  • –––, 1965, “Notes Toward a Logic of Discovery”, in R.J. Bernstein (ed.), Perspectives on Peirce. Critical Essays on Charles Sanders Peirce , New Haven and London: Yale University Press, 42–65.
  • Harman, G.H., 1965, “The Inference to the Best Explanation”, Philosophical Review , 74.
  • Hausman, C. R. 1984, A Discourse on Novelty and Creation , New York: SUNY Press.
  • Hempel, C.G., 1985, “Thoughts in the Limitations of Discovery by Computer”, in K. Schaffner (ed.), Logic of Discovery and Diagnosis in Medicine , Berkeley: University of California Press, 115–22.
  • Hesse, M., 1966, Models and Analogies in Science , Notre Dame: University of Notre Dame Press.
  • Hey, S. 2016 “Heuristics and Meta-heuristics in Scientific Judgement”, British Journal for the Philosophy of Science , 67: 471–495
  • Hills, A., Bird, A. 2019, “Against Creativity”, Philosophy and Phenomenological Research , 99: 694–713.
  • Holyoak, K.J. and P. Thagard, 1996, Mental Leaps: Analogy in Creative Thought , Cambridge, MA: MIT Press.
  • Howard, D., 2006, “Lost Wanderers in the Forest of Knowledge: Some Thoughts on the Discovery-Justification Distinction”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 3–22.
  • Hoyningen-Huene, P., 1987, “Context of Discovery and Context of Justification”, Studies in History and Philosophy of Science , 18: 501–15.
  • Hull, D.L., 1988, Science as Practice: An Evolutionary Account of the Social and Conceptual Development of Science , Chicago: University of Chicago Press.
  • Ippoliti, E. 2018, “Heuristic Logic. A Kernel” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 191–212
  • Jantzen, B.C., 2016, “Discovery without a ‘Logic’ would be a Miracle”, Synthese , 193: 3209–3238.
  • Johnson-Laird, P., 1983, Mental Models , Cambridge: Cambridge University Press.
  • Kieran, M., 2014, “Creativity as a Virtue of Character,” in E. Paul and S. B. Kaufman (eds.), The Philosophy of Creativity: New Essays . Oxford: Oxford University Press, 125–44
  • King, R. D. et al. 2009, “The Automation of Science”, Science 324: 85–89.
  • Koehler, D.J., 1991, “Explanation, Imagination, and Confidence in Judgment”, Psychological Bulletin , 110: 499–519.
  • Koertge, N. 1980, “Analysis as a Method of Discovery during the Scientific Revolution” in Nickles, T. (ed.) Scientific Discovery, Logic, and Rationality vol. I, Dordrecht: Reidel, 139–157
  • Koons, J.R. 2021, “Knowledge as a Collective Status”, Analytic Philosophy , https://doi.org/10.1111/phib.12224
  • Kounios, J. and Beeman, M. 2009, “The Aha! Moment : The Cognitive Neuroscience of Insight”, Current Directions in Psychological Science , 18: 210–16.
  • Kordig, C., 1978, “Discovery and Justification”, Philosophy of Science , 45: 110–17.
  • Kronfeldner, M. 2009, “Creativity Naturalized”, The Philosophical Quarterly 59: 577–592.
  • Kuhn, T.S., 1970 [1962], The Structure of Scientific Revolutions , 2 nd edition, Chicago: The University of Chicago Press; first edition, 1962.
  • Kulkarni, D. and H.A. Simon, 1988, “The processes of scientific discovery: The strategy of experimentation”, Cognitive Science , 12: 139–76.
  • Langley, P., 2000, “The Computational Support of Scientific Discovery”, International Journal of Human-Computer Studies , 53: 393–410.
  • Langley, P., H.A. Simon, G.L. Bradshaw, and J.M. Zytkow, 1987, Scientific Discovery: Computational Explorations of the Creative Processes , Cambridge, MA: MIT Press.
  • Laudan, L., 1980, “Why Was the Logic of Discovery Abandoned?” in T. Nickles (ed.), Scientific Discovery (Volume I), Dordrecht: D. Reidel, 173–83.
  • Leonelli, S. 2020, “Scientific Research and Big Data”, The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/science-big-data/>
  • Leplin, J., 1987, “The Bearing of Discovery on Justification”, Canadian Journal of Philosophy , 17: 805–14.
  • Longino, H. 2001, The Fate of Knowledge , Princeton: Princeton University Press
  • Lugg, A., 1985, “The Process of Discovery”, Philosophy of Science , 52: 207–20.
  • Magnani, L., 2000, Abduction, Reason, and Science: Processes of Discovery and Explanation , Dordrecht: Kluwer.
  • –––, 2009, “Creative Abduction and Hypothesis Withdrawal”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer.
  • Magnani, L. and N.J. Nersessian, 2002, Model-Based Reasoning: Science, Technology, and Values , Dordrecht: Kluwer.
  • Magnani, L., N.J. Nersessian, and P. Thagard, 1999, Model-Based Reasoning in Scientific Discovery , Dordrecht: Kluwer.
  • Michel, J. (ed.) 2021, Making Scientific Discoveries. Interdisciplinary Reflections , Brill | mentis.
  • Minai, A., Doboli, S., Iyer, L. 2022 “Models of Creativity and Ideation: An Overview” in Ali A. Minai, Jared B. Kenworthy, Paul B. Paulus, Simona Doboli (eds.), Creativity and Innovation. Cognitive, Social, and Computational Approaches , Springer, 21–46.
  • Nersessian, N.J., 1992, “How do scientists think? Capturing the dynamics of conceptual change in science”, in R. Giere (ed.), Cognitive Models of Science , Minneapolis: University of Minnesota Press, 3–45.
  • –––, 1999, “Model-based reasoning in conceptual change”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery , New York: Kluwer, 5–22.
  • –––, 2009, “Conceptual Change: Creativity, Cognition, and Culture ” in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 127–66.
  • Newell, A. and H. A Simon, 1971, “Human Problem Solving: The State of the Theory in 1970”, American Psychologist , 26: 145–59.
  • Newton, I. 1718, Opticks; or, A Treatise of the Reflections, Inflections and Colours of Light , London: Printed for W. and J. Innys, Printers to the Royal Society.
  • Nickles, T., 1984, “Positive Science and Discoverability”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984: 13–27.
  • –––, 1985, “Beyond Divorce: Current Status of the Discovery Debate”, Philosophy of Science , 52: 177–206.
  • –––, 1989, “Truth or Consequences? Generative versus Consequential Justification in Science”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1988, 393–405.
  • –––, 2018, “TTT: A Fast Heuristic to New Theories?” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 213–244.
  • Pasquale, J.-F. de and Poirier, P. 2016, “Convolution and Modal Representations in Thagard and Stewart’s Neural Theory of Creativity: A Critical Analysis ”, Synthese , 193: 1535–1560
  • Paul, E. S. and Kaufman, S. B. (eds.), 2014a, The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.001.0001.
  • –––, 2014b, “Introducing: The Philosophy of Creativity”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays (New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0001.
  • Pietsch, W. 2015, “Aspects of Theory-Ladenness in Data-Intensive Science”, Philosophy of Science 82: 905–916.
  • Popper, K., 2002 [1934/1959], The Logic of Scientific Discovery , London and New York: Routledge; original published in German in 1934; first English translation in 1959.
  • Pöyhönen, S. 2017, “Value of Cognitive Diversity in Science”, Synthese , 194(11): 4519–4540. doi:10.1007/s11229–016-1147-4
  • Pulte, H. 2019, “‘‘Tis Much Better to Do a Little with Certainty’: On the Reception of Newton’s Methodology”, in The Reception of Isaac Newton in Europe , Pulte, H, and Mandelbrote, S. (eds.), Continuum Publishing Corporation, 355–84.
  • Reichenbach, H., 1938, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge , Chicago: The University of Chicago Press.
  • Richardson, A., 2006, “Freedom in a Scientific Society: Reading the Context of Reichenbach’s Contexts”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 41–54.
  • Russell, S. 2021, “Human-Compatible Artificial Intelligence”, in Human Like Machine Intelligence , Muggleton, S. and Charter, N. (eds.), Oxford: Oxford University Press, 4–23
  • Schaffer, S., 1986, “Scientific Discoveries and the End of Natural Philosophy”, Social Studies of Science , 16: 387–420.
  • –––, 1994, “Making Up Discovery”, in M.A. Boden (ed.), Dimensions of Creativity , Cambridge, MA: MIT Press, 13–51.
  • Schaffner, K., 1993, Discovery and Explanation in Biology and Medicine , Chicago: University of Chicago Press.
  • –––, 2008 “Theories, Models, and Equations in Biology: The Heuristic Search for Emergent Simplifications in Neurobiology”, Philosophy of Science , 75: 1008–21.
  • Schickore, J. and F. Steinle, 2006, Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer.
  • Schiller, F.C.S., 1917, “Scientific Discovery and Logical Proof”, in C.J. Singer (ed.), Studies in the History and Method of Science (Volume 1), Oxford: Clarendon, 235–89.
  • Simon, H.A., 1973, “Does Scientific Discovery Have a Logic?”, Philosophy of Science , 40: 471–80.
  • –––, 1977, Models of Discovery and Other Topics in the Methods of Science , Dordrecht: D. Reidel.
  • Simon, H.A., P.W. Langley, and G.L. Bradshaw, 1981, “Scientific Discovery as Problem Solving”, Synthese , 47: 1–28.
  • Smith, G.E., 2002, “The Methodology of the Principia ”, in G.E. Smith and I.B. Cohen (eds), The Cambridge Companion to Newton , Cambridge: Cambridge University Press, 138–73.
  • Simonton, D. K., “Hierarchies of Creative Domains: Disciplinary Constraints on Blind Variation and Selective Retention”, in Paul, E. S. and Kaufman, S. B. (eds), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn. https://doi.org/10.1093/acprof:oso/9780199836963.003.0013
  • Snyder, L.J., 1997, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • Solomon, M., 2009, “Standpoint and Creativity”, Hypatia : 226–37.
  • Sternberg, R J. and T. I. Lubart, 1999, “The concept of creativity: Prospects and paradigms,” in R. J. Sternberg (ed.) Handbook of Creativity , Cambridge: Cambridge University Press, 3–15.
  • Stokes, D., 2011, “Minimally Creative Thought”, Metaphilosophy , 42: 658–81.
  • Tamaddoni-Nezhad, A., Bohan, D., Afroozi Milani, G., Raybould, A., Muggleton, S., 2021, “Human–Machine Scientific Discovery”, in Human Like Machine Intelligence , Muggleton, S. and Charter, N., (eds.), Oxford: Oxford University Press, 297–315
  • Thagard, P., 1984, “Conceptual Combination and Scientific Discovery”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984(1): 3–12.
  • –––, 1999, How Scientists Explain Disease , Princeton: Princeton University Press.
  • –––, 2010, “How Brains Make Mental Models”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Science & Technology , Berlin and Heidelberg: Springer, 447–61.
  • –––, 2012, The Cognitive Science of Science , Cambridge, MA: MIT Press.
  • Thagard, P. and Stewart, T. C., 2011, “The AHA! Experience: Creativity Through Emergent Binding in Neural Networks”, Cognitive Science , 35: 1–33.
  • Thoma, Johanna, 2015, “The Epistemic Division of Labor Revisited”, Philosophy of Science , 82: 454–472. doi:10.1086/681768
  • Weber, M., 2005, Philosophy of Experimental Biology , Cambridge: Cambridge University Press.
  • Whewell, W., 1996 [1840], The Philosophy of the Inductive Sciences (Volume II), London: Routledge/Thoemmes.
  • Weisberg, M. and Muldoon, R., 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science , 76: 225–252. doi:10.1086/644786
  • Williams, K. et al. 2015, “Cheaper Faster Drug Development Validated by the Repositioning of Drugs against Neglected Tropical Diseases”, Journal of the Royal Society Interface 12: 20141289. http://dx.doi.org/10.1098/rsif.2014.1289.
  • Zahar, E., 1983, “Logic of Discovery or Psychology of Invention?”, British Journal for the Philosophy of Science , 34: 243–61.
  • Zednik, C. and Jäkel, F. 2016 “Bayesian Reverse-Engineering Considered as a Research Strategy for Cognitive Science”, Synthese , 193, 3951–3985.

A Holistic Approach to Solving Problems

Posted: April 3, 2024

Leveraging emerging technologies to support people and the planet

Photo credit: Jens Magnusson / Ikon Images

As the world grapples with the unprecedented challenges of climate change, the alarming loss of biodiversity and the constraints imposed by limited resources, it becomes increasingly clear that our agricultural systems can provide solutions that support food security, human well-being and environmental health.

In Penn State's College of Agricultural Sciences, we embrace the responsibility of developing and fostering agricultural systems that safeguard the delicate balance between production and conservation. With our college's unique history and diverse capabilities, we are positioned to address these challenges by taking a holistic approach through Technologies for Agriculture and Living Systems.

This new initiative builds on the strengths and opportunities provided by the landscapes, communities and people of Pennsylvania. The initiative will address our needs in Pennsylvania and beyond by fostering the development and expansion of emerging and advanced technologies, including robotics, sensors, artificial intelligence (AI), genomics and gene editing. Our goal is to provide tools and resources to support growers, conservationists, urban planners and the public, thereby promoting sustainable food production, human and animal health, and effective management and conservation of biodiversity.

With the rapid pace of advancements in computer science and engineering — and our increasing understanding of the dynamics of microbes, plants, animals and ecosystems — it is becoming possible to adapt these technologies from their current applications on large-scale farms to a broader range of contexts, including mid- and small-scale operations, forests, watersheds, cities and backyards.

Pennsylvania's diverse agricultural, natural and urban landscapes and Penn State's significant investments in interdisciplinary research, education and community engagement set the stage for Pennsylvania to be a global leader in this holistic approach that has the potential to transform agricultural production, natural resource conservation, community health and workforce development for the next generation of agricultural leaders.

Today, faculty in the college are amplifying their expertise by designing and employing these new technologies to monitor, manage and model agricultural production, the spread of invasive species and diseases, biodiversity loss, livestock health, forest health, water and habitat quality, human nutrition, and food access systems.

Alongside expanding our research expertise, the college takes a multipronged approach to workforce education in technology and AI applications for living systems. One outstanding example of this is a novel, interdisciplinary training program for graduate students that brings together faculty from the colleges of Agricultural Sciences, Engineering, and Earth and Mineral Sciences through the support of a $3 million National Science Foundation Research Traineeship grant awarded to Penn State for the project titled "Interdisciplinary Studies in Entomology, Computer Science and Technology network," or INSECT NET. At the core of this project is developing graduate courses, certificates and internships. Importantly, the programming, engineering and AI approaches leveraged in the program also can be readily adapted to K-12 education, including 4-H, thereby allowing us to expand workforce development in the commonwealth.

Above all, the core of this vision lies in collaboration. Our college has a rich history of convening and facilitating diverse audiences, tackling complex and new challenges, and creating effective solutions. As we continue to build toward this vision, we enthusiastically encourage like-minded partners from academia, industry and government to join us on this journey as we elevate agricultural systems for the betterment of the planet.

Christina Grozinger, Publius Vergilius Maro Professor of Entomology at Penn State, leads the research arm of the Technologies for Agriculture and Living Systems Initiative.


HBR On Leadership podcast series

Do You Understand the Problem You’re Trying to Solve?

To solve tough problems at work, first ask these questions.


Problem solving skills are invaluable in any job. But all too often, we jump to find solutions to a problem without taking time to really understand the dilemma we face, according to Thomas Wedell-Wedellsborg , an expert in innovation and the author of the book, What’s Your Problem?: To Solve Your Toughest Problems, Change the Problems You Solve .

In this episode, you’ll learn how to reframe tough problems by asking questions that reveal all the factors and assumptions that contribute to the situation. You’ll also learn why searching for just one root cause can be misleading.

Key episode topics include: leadership, decision making and problem solving, power and influence, business management.

HBR On Leadership curates the best case studies and conversations with the world’s top business and management experts, to help you unlock the best in those around you. New episodes every week.

  • Listen to the original HBR IdeaCast episode: The Secret to Better Problem Solving (2016)
  • Find more episodes of HBR IdeaCast
  • Discover 100 years of Harvard Business Review articles, case studies, podcasts, and more at HBR.org .

HANNAH BATES: Welcome to HBR on Leadership , case studies and conversations with the world’s top business and management experts, hand-selected to help you unlock the best in those around you.

Problem solving skills are invaluable in any job. But even the most experienced among us can fall into the trap of solving the wrong problem.

Thomas Wedell-Wedellsborg says that all too often, we jump to find solutions to a problem – without taking time to really understand what we’re facing.

He’s an expert in innovation, and he’s the author of the book, What’s Your Problem?: To Solve Your Toughest Problems, Change the Problems You Solve .

  In this episode, you’ll learn how to reframe tough problems, by asking questions that reveal all the factors and assumptions that contribute to the situation. You’ll also learn why searching for one root cause can be misleading. And you’ll learn how to use experimentation and rapid prototyping as problem-solving tools.

This episode originally aired on HBR IdeaCast in December 2016. Here it is.

SARAH GREEN CARMICHAEL: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Sarah Green Carmichael.

Problem solving is popular. People put it on their resumes. Managers believe they excel at it. Companies count it as a key proficiency. We solve customers’ problems.

The problem is we often solve the wrong problems. Albert Einstein and Peter Drucker alike have discussed the difficulty of effective diagnosis. There are great frameworks for getting teams to attack true problems, but they’re often hard to do daily and on the fly. That’s where our guest comes in.

Thomas Wedell-Wedellsborg is a consultant who helps companies and managers reframe their problems so they can come up with an effective solution faster. He asks the question “Are You Solving The Right Problems?” in the January-February 2017 issue of Harvard Business Review. Thomas, thank you so much for coming on the HBR IdeaCast .

THOMAS WEDELL-WEDELLSBORG: Thanks for inviting me.

SARAH GREEN CARMICHAEL: So, I thought maybe we could start by talking about the problem of talking about problem reframing. What is that exactly?

THOMAS WEDELL-WEDELLSBORG: Basically, when people face a problem, they tend to jump into solution mode too rapidly, and very often that means that they don’t really understand, necessarily, the problem they’re trying to solve. And so, reframing is really a– at heart, it’s a method that helps you avoid that by taking a second to go in and ask two questions, basically saying, first of all, wait. What is the problem we’re trying to solve? And then crucially asking, is there a different way to think about what the problem actually is?

SARAH GREEN CARMICHAEL: So, I feel like so often when this comes up in meetings, you know, someone says that, and maybe they throw out the Einstein quote about how, if you have an hour for problem solving, you should spend 55 minutes finding the problem. And then everyone else in the room kind of gets irritated. So, maybe just give us an example of maybe how this would work in practice in a way that would not, sort of, set people’s teeth on edge, like oh, here Sarah goes again, reframing the whole problem instead of just solving it.

THOMAS WEDELL-WEDELLSBORG: I mean, you’re bringing up something that’s, I think is crucial, which is to create legitimacy for the method. So, one of the reasons why I put out the article is to give people a tool to say actually, this thing is still important, and we need to do it. But I think the really critical thing in order to make this work in a meeting is actually to learn how to do it fast, because if you have the idea that you need to spend 30 minutes in a meeting delving deeply into the problem, I mean, that’s going to be uphill for most problems. So, the critical thing here is really to try to make it a practice you can implement very, very rapidly.

There’s an example that I would suggest memorizing. This is the example that I use to explain very rapidly what it is. And it’s basically, I call it the slow elevator problem. You imagine that you are the owner of an office building, and that your tenants are complaining that the elevator’s slow.

Now, if you take that problem framing for granted, you’re going to start thinking creatively around how do we make the elevator faster. Do we install a new motor? Do we have to buy a new lift somewhere?

The thing is, though, if you ask people who actually work with facilities management, well, they’re going to have a different solution for you, which is put up a mirror next to the elevator. That’s what happens is, of course, that people go oh, I’m busy. I’m busy. I’m– oh, a mirror. Oh, that’s beautiful.

And then they forget time. What’s interesting about that example is that the idea with a mirror is actually a solution to a different problem than the one you first proposed. And so, the whole idea here is once you get good at using reframing, you can quickly identify other aspects of the problem that might be much better to try to solve than the original one you found. It’s not necessarily that the first one is wrong. It’s just that there might be better problems out there to attack, ones that mean we can do things much faster, cheaper, or better.

SARAH GREEN CARMICHAEL: So, in that example, I can understand how A, it’s probably expensive to make the elevator faster, so it’s much cheaper just to put up a mirror. And B, maybe the real problem people are actually feeling, even though they’re not articulating it right, is like, I hate waiting for the elevator. But if you let them sort of fix their hair or check their teeth, they’re suddenly distracted and don’t notice.

But if you have, this is sort of a pedestrian example, but say you have a roommate or a spouse who doesn’t clean up the kitchen. Facing that problem and not having your elegant solution already there to highlight the contrast between the perceived problem and the real problem, how would you take a problem like that and attack it using this method so that you can see what some of the other options might be?

THOMAS WEDELL-WEDELLSBORG: Right. So, I mean, let’s say it’s you who have that problem. I would go in and say, first of all, what would you say the problem is? Like, if you were to describe your view of the problem, what would that be?

SARAH GREEN CARMICHAEL: I hate cleaning the kitchen, and I want someone else to clean it up.

THOMAS WEDELL-WEDELLSBORG: OK. So, my first observation, you know, that somebody else might not necessarily be your spouse. So, already there, there’s an inbuilt assumption in your question around oh, it has to be my husband who does the cleaning. So, it might actually be worth, already there to say, is that really the only problem you have? That you hate cleaning the kitchen, and you want to avoid it? Or might there be something around, as well, getting a better relationship in terms of how you solve problems in general or establishing a better way to handle small problems when dealing with your spouse?

SARAH GREEN CARMICHAEL: Or maybe, now that I’m thinking that, maybe the problem is that you just can’t find the stuff in the kitchen when you need to find it.

THOMAS WEDELL-WEDELLSBORG: Right, and so that’s an example of a reframing, that actually why is it a problem that the kitchen is not clean? Is it only because you hate the act of cleaning, or does it actually mean that it just takes you a lot longer and gets a lot messier to actually use the kitchen, which is a different problem. The way you describe this problem now, is there anything that’s missing from that description?

SARAH GREEN CARMICHAEL: That is a really good question.

THOMAS WEDELL-WEDELLSBORG: Basically, asking about other factors that we are not talking about right now. And I say that because, when given a problem, people tend to delve deeper into the detail. What often is missing is actually an element outside of the initial description of the problem that might be really relevant to what’s going on. Like, why does the kitchen get messy in the first place? Is it something about the way you use it or your cooking habits? Is it because the neighbor’s kids, kind of, use it all the time?

Very often there might be issues that you’re not really thinking about when you first describe the problem that actually have a big effect on it.

SARAH GREEN CARMICHAEL: I think at this point it would be helpful to maybe get another business example, and I’m wondering if you could tell us the story of the dog adoption problem.

THOMAS WEDELL-WEDELLSBORG: Yeah. This is a big problem in the US. If you work in the shelter industry, basically because dogs are so popular, more than 3 million dogs every year enter a shelter, and currently only about half of those actually find a new home and get adopted. And so, this is a problem that has persisted. It’s been, like, a structural problem for decades in this space, until the last three years, when people found new ways to address it.

So a woman called Lori Weise, who runs a rescue organization in South LA, actually went in and challenged the very idea of what we were trying to do. She said, no, no. The problem we’re trying to solve is not about how to get more people to adopt dogs. It is about keeping the dogs with their first family so they never enter the shelter system in the first place.

In 2013, she started what’s called a Shelter Intervention Program that basically works like this. If a family comes and wants to hand over their dog, these are called owner surrenders. It’s about 30% of all dogs that come into a shelter. All they would do is go up and ask, if you could, would you like to keep your animal? And if they said yes, they would try to fix whatever problem was making them give the dog up.

And sometimes that might be that they moved into a new building. The landlord required a deposit, and they simply didn’t have the money to put down a deposit. Or the dog might need a $10 rabies shot, but they didn’t know how to get access to a vet.

And so, by instigating that program, just in the first year, basically the amount of dollars they spent per animal helped went from something like $85 down to around $60. Just an immediate impact, and her program is now being supported by the ASPCA, which is one of the big animal welfare organizations, and it’s being rolled out to various other places.

And I think what really struck me with that example was this was not dependent on having the internet. This was not, oh, we needed to have everybody mobile before we could come up with this. This, conceivably, we could have done 20 years ago. Only, it only happened when somebody, like in this case Lori, went in and actually rethought what the problem they were trying to solve was in the first place.

SARAH GREEN CARMICHAEL: So, what I also think is so interesting about that example is that when you talk about it, it doesn’t sound like the kind of thing that would have been thought of through other kinds of problem solving methods. There wasn’t necessarily an After Action Review or a 5 Whys exercise or a Six Sigma type intervention. I don’t want to throw those other methods under the bus, but how can you get such powerful results with such a very simple way of thinking about something?

THOMAS WEDELL-WEDELLSBORG: That was something that struck me as well. This, in a way, reframing and the idea that problem diagnosis is important is something we’ve known for a long, long time. And we actually have built some tools to help out. If you work with this professionally, you are familiar with, like, Six Sigma, TRIZ, and so on. You mentioned 5 Whys. A root cause analysis is another one that a lot of people are familiar with.

Those are all good tools, and they’re definitely better than nothing. But what I noticed when I worked with companies applying them was that those tools tend to make you dig deeper into the first understanding of the problem you have. In the elevator example, people start asking, well, is it the cable strength, or is it the capacity of the elevator? They kind of get caught up in the details.

That, in a way, is a bad way to work on problems because it really assumes that there’s like a, you can almost hear it, a root cause. That you have to dig down and find the one true problem, and everything else was just symptoms. That’s a bad way to think about problems because problems tend to be multicausal.

There tend to be lots of causes or levers you can potentially press to address a problem. And if you think there’s only one, that that’s the one right problem, that’s actually a dangerous way to think. And so I think that’s why this is a method I’ve worked with over the last five years, trying to basically refine how to make people better at this, and the key tends to be this thing about shifting out and saying, is there a totally different way of thinking about the problem versus getting too caught up in the mechanistic details of what happens.

SARAH GREEN CARMICHAEL: What about experimentation? Because that’s another method that’s become really popular with the rise of Lean Startup and lots of other innovation methodologies. Why wouldn’t it have worked to, say, experiment with many different types of fixing the dog adoption problem, and then just pick the one that works the best?

THOMAS WEDELL-WEDELLSBORG: You could say in the dog space, that’s what’s been going on. I mean, there is, in this industry and a lot of, it’s largely volunteer driven. People have experimented, and they found different ways of trying to cope. And that has definitely made the problem better. So, I wouldn’t say that experimentation is bad, quite the contrary. Rapid prototyping, quickly putting something out into the world and learning from it, that’s a fantastic way to learn more and to move forward.

My point is, though, that I feel we’ve come to rely too much on that. There’s like, if you look at the start up space, the wisdom is now just to put something quickly into the market, and then if it doesn’t work, pivot and just do more stuff. What reframing really is, I think of it as the cognitive counterpoint to prototyping. So, this is really a way of seeing very quickly, like not just working on the solution, but also working on our understanding of the problem and trying to see is there a different way to think about that.

If you only stick with experimentation, again, you tend to sometimes stay too much in the same space trying minute variations of something instead of taking a step back and saying, wait a minute. What is this telling us about what the real issue is?

SARAH GREEN CARMICHAEL: So, to go back to something that we touched on earlier, when we were talking about the completely hypothetical example of a spouse who does not clean the kitchen–

THOMAS WEDELL-WEDELLSBORG: Completely, completely hypothetical.

SARAH GREEN CARMICHAEL: Yes. For the record, my husband is a great kitchen cleaner.

You started asking me some questions that I could see immediately were helping me rethink that problem. Is that kind of the key, just having a checklist of questions to ask yourself? How do you really start to put this into practice?

THOMAS WEDELL-WEDELLSBORG: I think there are two steps in that. The first one is just to make yourself better at the method. Yes, you should kind of work with a checklist. In the article, I kind of outlined seven practices that you can use to do this.

But importantly, I would say you have to consider that as, basically, a set of training wheels. I think there’s a big, big danger in getting caught in a checklist. This is something I work with.

My co-author Paddy Miller, it’s one of his insights. That if you start giving people a checklist for things like this, they start following it. And that’s actually a problem, because what you really want them to do is start challenging their thinking.

So the way to handle this is to get some practice using it. Do use the checklist initially, but then try to step away from it and try to see if you can organically make– it’s almost a habit of mind. When you run into a colleague in the hallway and she has a problem and you have five minutes, like, delving in and just starting asking some of those questions and using your intuition to say, wait, how is she talking about this problem? And is there a question or two I can ask her about the problem that can help her rethink it?

SARAH GREEN CARMICHAEL: Well, that is also just a very different approach, because I think in that situation, most of us can’t go 30 seconds without jumping in and offering solutions.

THOMAS WEDELL-WEDELLSBORG: Very true. The drive toward solutions is very strong. And to be clear, I mean, there’s nothing wrong with that if the solutions work. So, many problems are just solved by oh, you know, oh, here’s the way to do that. Great.

But this is really a powerful method for those problems where either it’s something we’ve been banging our heads against tons of times without making progress, or when you need to come up with a really creative solution. When you’re facing a competitor with a much bigger budget, and you know, if you solve the same problem later, you’re not going to win. So, that basic idea of taking that approach to problems can often help you move forward in a different way than just like, oh, I have a solution.

I would say there’s also, there’s some interesting psychological stuff going on, right? Where you may have tried this, but if somebody tries to serve up a solution to a problem I have, I’m often resistant towards them. Kind of like, no, no, no, no, no, no. That solution is not going to work in my world. Whereas if you get them to discuss and analyze what the problem really is, you might actually dig something up.

Let’s go back to the kitchen example. One powerful question is just to say, what’s your own part in creating this problem? It’s very often, like, people, they describe problems as if it’s something that’s inflicted upon them from the external world, and they are innocent bystanders in that.

SARAH GREEN CARMICHAEL: Right, or crazy customers with unreasonable demands.

THOMAS WEDELL-WEDELLSBORG: Exactly, right. I don’t think I’ve ever met an agency or consultancy that didn’t, like, gossip about their customers. Oh, my god, they’re horrible. That, you know, classic thing, why don’t they want to take more risk? Well, risk is bad.

It’s their business that’s on the line, not the consultancy’s, right? So, absolutely, that’s one of the things when you step into a different mindset and kind of, wait. Oh yeah, maybe I actually am part of creating this problem in a sense, as well. That tends to open some new doors for you to move forward, in a way, with stuff that you may have been struggling with for years.

SARAH GREEN CARMICHAEL: So, we’ve surfaced a couple of questions that are useful. I’m curious to know, what are some of the other questions that you find yourself asking in these situations, given that you have made this sort of mental habit that you do? What are the questions that people seem to find really useful?

THOMAS WEDELL-WEDELLSBORG: One easy one is just to ask if there are any positive exceptions to the problem. So, was there day where your kitchen was actually spotlessly clean? And then asking, what was different about that day? Like, what happened there that didn’t happen the other days? That can very often point people towards a factor that they hadn’t considered previously.

SARAH GREEN CARMICHAEL: We got take-out.

THOMAS WEDELL-WEDELLSBORG: So, that is your solution. Take-out from [INAUDIBLE]. That might have other problems.

Another good question, and this is a little bit more high level. It’s actually more making an observation about labeling how that person thinks about the problem. And what I mean by that is, we have problem categories in our head. So, let’s say that you describe a problem to me and say, well, we have a really great product, and it’s much better than our previous product, but people aren’t buying it. I think we need to put more marketing dollars into this.

Now you can go in and say, that’s interesting. This sounds like you’re thinking of this as a communications problem. Is there a different way of thinking about that? Because you can almost tell how, the second you say communications, there are some ideas about how you solve a communications problem. Typically with more communication.

And what you might do is go in and suggest, well, have you considered that it might be, say, an incentive problem? Are there incentives on behalf of the purchasing manager at your clients that are obstructing you? Might there be incentive issues with your own sales force that makes them want to sell the old product instead of the new one?

So literally, just identifying what type of problem this person thinks it is, and is there a different potential way of thinking about it? Might it be an emotional problem, a timing problem, an expectations management problem? Thinking about what label, what type of problem, that person is thinking of it as.

SARAH GREEN CARMICHAEL: That’s really interesting, too, because I think so many of us get requests for advice that we’re really not qualified to give. So, maybe the next time that happens, instead of muddying my way through, I will just ask some of those questions that we talked about instead.

THOMAS WEDELL-WEDELLSBORG: That sounds like a good idea.

SARAH GREEN CARMICHAEL: So, Thomas, this has really helped me reframe the way I think about a couple of problems in my own life, and I’m just wondering. I know you do this professionally, but is there a problem in your life that thinking this way has helped you solve?

THOMAS WEDELL-WEDELLSBORG: I’ve, of course, I’ve been swallowing my own medicine on this, too, and I think I have, well, maybe two different examples, and in one case somebody else did the reframing for me. But in one case, when I was younger, I often kind of struggled a little bit. I mean, this is my teenage years, kind of hanging out with my parents. I thought they were pretty annoying people. That’s not really fair, because they’re quite wonderful, but that’s what life is when you’re a teenager.

And one of the things that struck me, suddenly, and this was kind of the positive exception was, there was actually an evening where we really had a good time, and there wasn’t a conflict. And the core thing was, I wasn’t just seeing them in their old house where I grew up. It was, actually, we were at a restaurant. And it suddenly struck me that so much of the sometimes, kind of, a little bit, you love them but they’re annoying kind of dynamic, is tied to the place, is tied to the setting you are in.

And of course, if– you know, I live abroad now, if I visit my parents and I stay in my old bedroom, you know, my mother comes in and wants to wake me up in the morning. Stuff like that, right? And it just struck me so, so clearly that it’s– when I change this setting, if I go out and have dinner with them at a different place, that the dynamic, just that dynamic disappears.

SARAH GREEN CARMICHAEL: Well, Thomas, this has been really, really helpful. Thank you for talking with me today.

THOMAS WEDELL-WEDELLSBORG: Thank you, Sarah.  

HANNAH BATES: That was Thomas Wedell-Wedellsborg in conversation with Sarah Green Carmichael on the HBR IdeaCast. He’s an expert in problem solving and innovation, and he’s the author of the book, What’s Your Problem?: To Solve Your Toughest Problems, Change the Problems You Solve .

We’ll be back next Wednesday with another hand-picked conversation about leadership from the Harvard Business Review. If you found this episode helpful, share it with your friends and colleagues, and follow our show on Apple Podcasts, Spotify, or wherever you get your podcasts. While you’re there, be sure to leave us a review.

We’re a production of Harvard Business Review. If you want more podcasts, articles, case studies, books, and videos like this, find it all at HBR dot org.

This episode was produced by Anne Saini, and me, Hannah Bates. Ian Fox is our editor. Music by Coma Media. Special thanks to Maureen Hoch, Adi Ignatius, Karen Player, Ramsey Khabbaz, Nicole Smith, Anne Bartholomew, and you – our listener.

See you next week.

Welcome to the daily solving of our PROBLEM OF THE DAY with Yash Dwivedi. We will discuss the entire problem step-by-step and work towards developing an optimized solution. This will not only help you brush up on your concepts of Arrays but also build up your problem-solving skills.

In this problem, we are given an array nums of n positive integers. Find the minimum number of operations required to modify the array so that its elements are in strictly increasing order (nums[i] < nums[i+1]). Changing a number to any value greater or smaller than the original counts as one operation. Note: array elements can become negative after applying an operation.

Example:
Input: n = 6, nums = [1, 2, 3, 6, 5, 4]
Output: 2
Explanation: By decreasing 6 by 2 and increasing 4 by 2, nums becomes [1, 2, 3, 4, 5, 6], which is strictly increasing.

Give the problem a try before going through the video. All the best!!!

Problem Link: https://www.geeksforgeeks.org/problems/convert-to-strictly-increasing-array3351/1
Solution IDE Link: https://ide.geeksforgeeks.org/online-cpp-compiler/8ed95be1-40c2-44d3-861c-bd46148e787f
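One way to approach it (a sketch of the standard O(n log n) idea, not necessarily the exact solution walked through in the video): an element nums[i] can be left unchanged only if the values nums[i] - i are non-decreasing across the kept positions, so the answer is n minus the length of the longest non-decreasing subsequence of nums[i] - i. The function name minOperations below is illustrative and is not the exact signature the GeeksforGeeks judge expects.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of changes needed to make nums strictly increasing.
// An element can stay only if nums[i] - i is non-decreasing on the kept
// positions; every other element must be changed, costing one operation each.
int minOperations(const vector<long long>& nums) {
    vector<long long> tails;  // smallest tail of a non-decreasing subsequence of each length
    for (size_t i = 0; i < nums.size(); ++i) {
        long long key = nums[i] - static_cast<long long>(i);
        // upper_bound (not lower_bound) keeps equal values, i.e. a NON-decreasing subsequence
        auto it = upper_bound(tails.begin(), tails.end(), key);
        if (it == tails.end()) tails.push_back(key);
        else *it = key;
    }
    return static_cast<int>(nums.size() - tails.size());
}

int main() {
    vector<long long> nums = {1, 2, 3, 6, 5, 4};
    cout << minOperations(nums) << "\n";  // prints 2 for the sample above
    return 0;
}
```

For the sample, nums[i] - i is [1, 1, 1, 3, 1, -1]; the longest non-decreasing subsequence has length 4, so the minimum number of operations is 6 - 4 = 2, matching the expected output.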

IMAGES

  1. What Is Problem-Solving? Steps, Processes, Exercises to do it Right

  2. 39 Best Problem-Solving Examples (2024)

  3. Scientific Method: Definition and Examples

  4. the scientific method problem

  5. PPT

  6. PPT

VIDEO

  1. Vocabulary About Scientific Problem-Solving Preview 2, LevelG. i-Ready Answers

  2. The Importance of Good Science Education with Matt Beall

  3. Problem Solving Method & Checklist: Sample Problem

  4. || Chemistry || Science || Empirical formula molecular formula NEET JEE PNST Organic chemistry

  5. Managerial Problem: Methods of Problem Solving

  6. Scientific Problem-Solving: GPT-4 VISION (multimodal)

COMMENTS

  1. The scientific method (article)

    The scientific method. At the core of biology and other sciences lies a problem-solving approach called the scientific method. The scientific method has five basic steps, plus one feedback step: Make an observation. Ask a question. Form a hypothesis, or testable explanation. Make a prediction based on the hypothesis.

  2. Using the Scientific Method to Solve Problems

    The scientific method is a process used to explore observations and answer questions. Originally used by scientists looking to prove new theories, its use has spread into many other areas, including that of problem-solving and decision-making. The scientific method is designed to eliminate the influences of bias, prejudice and personal beliefs ...

  3. PDF The Scientific Method: Steps & Examples Example #1: Cinderella

    THE SCIENTIFIC METHOD: STEPS & EXAMPLES Anyone can think like a scientist by using common sense and paying attention to SIX careful steps. 1. State the Problem or Question to be answered ... A hypothesis is an explanation for your question or problem (Step 1) that can be tested using evidence. Important: a hypothesis must be testable—in other ...

  4. 1.2: Scientific Approach for Solving Problems

    In doing so, they are using the scientific method. 1.2: Scientific Approach for Solving Problems is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts. Chemists expand their knowledge by making observations, carrying out experiments, and testing hypotheses to develop laws to summarize their results and ...

  5. The 6 Scientific Method Steps and How to Use Them

    The one we typically learn about in school is the basic method, based in logic and problem solving, typically used in "hard" science fields like biology, chemistry, and physics. It may vary in other fields, such as psychology, but the basic premise of making observations, testing, and continuing to improve a theory from the results remain ...

  6. Identifying a Scientific Problem

    Identifying a Problem. Picture yourself playing with a ball. You throw it up, and you watch it fall back down. You climb up a tree, and you let the ball roll off a branch and then down to the ...

  7. 1.12: Scientific Problem Solving

    The scientific method, as developed by Bacon and others, involves several steps: Ask a question - identify the problem to be considered. Make observations - gather data that pertains to the question. Propose an explanation (a hypothesis) for the observations. Make new observations to test the hypothesis further.

  8. 1.3: The Scientific Method

    The scientific method is a method of investigation involving experimentation and observation to acquire new knowledge, solve problems, and answer questions. The key steps in the scientific method include the following: Step 1: Make observations. Step 2: Formulate a hypothesis. Step 3: Test the hypothesis through experimentation.

  9. Solving Everyday Problems with the Scientific Method

    Supplementary. This book describes how one can use The Scientific Method to solve everyday problems including medical ailments, health issues, money management, traveling, shopping, cooking, household chores, etc. It illustrates how to exploit the information collected from our five senses, how to solve problems when no information is available ...

  10. Chapter 6: Scientific Problem Solving

    Exercise. Explain how you would solve these problems using the four steps of the scientific process. Example: The fire alarm is not working. Answer: 1) Observe/Define the problem: it does not beep when I push the button. 2) Hypothesis: it is caused by a dead battery. 3) Test: try a new battery.

  11. Problem Solving With Scientific Notation

    Think About It. Match each length in the table with the appropriate number of meters described in scientific notation below. One of the most important parts of solving a "real" problem is translating the words into appropriate mathematical terms, and recognizing when a well known formula may help. Here's an example that requires you to ...

  12. A Problem-Solving Experiment

    A problem-solving experiment is a learning activity that uses experimental design to solve an authentic problem. It combines two evidence-based teaching strategies: problem-based learning and inquiry-based learning. The use of problem-based learning and scientific inquiry as an effective pedagogical tool in the science classroom has been well established and strongly supported by research ...

  13. Problem Solving With Scientific Notation

Scientific notation follows a very specific format in which a number is expressed as the product of a number greater than or equal to one and less than ten times a power of 10. The format is written a × 10^n, where 1 ≤ a < 10 and n is an integer. To multiply or divide numbers in scientific notation, you can use the ...
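    As a quick illustration (an added worked example, not taken from the linked article): (3 × 10^4) × (2 × 10^3) = (3 × 2) × 10^(4+3) = 6 × 10^7. If the leading factor ends up outside the range 1 ≤ a < 10, renormalize it; for example, 60 × 10^6 becomes 6 × 10^7.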

  14. The Scientific Method: What Is It?

    The scientific method is a step-by-step problem-solving process. These steps include: ... Here's an everyday example of how you can apply the scientific method to understand more about your world ...

  15. STEM Problem Solving: Inquiry, Concepts, and Reasoning

    Balancing disciplinary knowledge and practical reasoning in problem solving is needed for meaningful learning. In STEM problem solving, science subject matter with associated practices often appears distant to learners due to its abstract nature. Consequently, learners experience difficulties making meaningful connections between science and their daily experiences. Applying Dewey's idea of ...

  16. Problem Solving in Science Learning

    The traditional teaching of science problem solving involves a considerable amount of drill and practice. Research suggests that these practices do not lead to the development of expert-like problem-solving strategies and that there is little correlation between the number of problems solved (exceeding 1,000 problems in one specific study) and the development of a conceptual understanding.

  17. Scientific Notation

    Advanced Problems. Scientific notation is used in solving these earth and space science problems and they are provided to you as an example. Be forewarned that these problems move beyond this module and require some facility with unit conversions, rearranging equations, and algebraic rules for multiplying and dividing exponents.

  18. Scientific method

The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century (for notable practitioners in previous centuries, see history of scientific method). The scientific method involves careful observation coupled with rigorous scepticism, because cognitive assumptions can distort the interpretation of the ...

  19. Solving Scientific Problems by Asking Diverse Questions

Solving Scientific Problems by Asking Diverse Questions. Gravity has been apparent for thousands of years: Aristotle, for example, proposed that objects fall to settle into their natural place in the 4th century BC. But it was not until the late 1600s, when Isaac Newton explained gravity using mathematical equations, that we really understood the ...

  20. Scientific Problem Solving

    Scientific Problem Solving is a lab course. The idea is to train scientific problem solving skills through exercise & self-analysis. Puzzles and challenging problems matched to aspects of each theme provide a fun and productive path to improving problem solving skills. While play and practice create an important foundation, active learning ...

  21. Example Physics Problems and Solutions

    Heat of Vaporization Example Problems. Two example problems using or finding the heat of vaporization. Ice to Steam Example Problem. Classic problem melting cold ice to make hot steam. This problem brings all three of the previous example problems into one problem to calculate heat changes over phase changes.

  22. Scientific Discovery

Computational theories of scientific discovery have helped identify and clarify a number of problem-solving strategies. An example of such a strategy is heuristic means-ends analysis, which involves identifying specific differences between the present situation and the goal situation and searching for operators (processes that will change the situation ...

  23. Identifying problems and solutions in scientific text

Introduction. Problem solving is generally regarded as the most important cognitive activity in everyday and professional contexts (Jonassen 2000). Many studies on formalising the cognitive process behind problem-solving exist, for instance (Chandrasekaran 1983). Jordan argues that we all share knowledge of the thought/action problem-solution process involved in real life, and so our writings ...

  24. A Holistic Approach to Solving Problems (Ag Science Magazine)

    One outstanding example of this is a novel, interdisciplinary training program for graduate students that brings together faculty from the colleges of Agricultural Sciences, Engineering, and Earth and Mineral Sciences through the support of a $3 million National Science Foundation Research Traineeship grant awarded to Penn State for the project ...

  25. Do You Understand the Problem You're Trying to Solve?

    To solve tough problems at work, first ask these questions. Problem solving skills are invaluable in any job. But all too often, we jump to find solutions to a problem without taking time to ...

  26. PROBLEM OF THE DAY : 05/04/2024

In this problem, we are given an array nums of n positive integers. Find the minimum number of operations required to modify the array such that the array elements are in strictly increasing order (nums[i] < nums[i+1]). Changing a number to any value greater or smaller than the original number is counted as one operation.