7.3 Problem-Solving

Learning objectives.

By the end of this section, you will be able to:

  • Describe problem-solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

   People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

The study of human and animal problem solving has provided much insight into our conscious experience and has led to advances in computer science and artificial intelligence. Much of cognitive science today examines how we consciously and unconsciously make decisions and solve problems. For instance, when confronted with a large amount of information, how do we decide on the most efficient way of sorting and analyzing it all in order to find what we are looking for, as in the visual search paradigms of cognitive psychology? Or, when a piece of machinery is not working properly, how do we organize our approach to the issue and determine what the cause of the problem might be? How do we sequence the necessary procedures and focus attention on what is important in order to solve problems efficiently? In this section we will discuss some of these issues and examine processes related to human, animal, and computer problem solving.

PROBLEM-SOLVING STRATEGIES

   When people are presented with a problem—whether it is a complex mathematical problem or a broken printer—how do they solve it? Before finding a solution, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

Problems themselves can be classified into two categories: ill-defined and well-defined problems (Schacter, 2009). Ill-defined problems lack clear goals, solution paths, or expected solutions, whereas well-defined problems have specific goals, clearly defined solution paths, and clear expected solutions. Problem solving often incorporates pragmatics (logical reasoning) and semantics (interpretation of the meanings behind the problem), and in many cases it also requires abstract thinking and creativity in order to find novel solutions. Within psychology, problem solving refers to a motivational drive for reaching a definite “goal” from a present situation or condition that is either not moving toward that goal, is distant from it, or requires more complex logical analysis for finding a missing description of conditions or steps toward that goal. Processes related to problem solving include problem finding, also known as problem analysis; problem shaping, in which the problem is organized; generating alternative strategies; implementing attempted solutions; and verifying the selected solution. Methods of studying problem solving within psychology include introspection, behavior analysis and behaviorism, simulation, computer modeling, and experimentation.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (table below). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

   Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
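To make this concrete, the recipe-doubling example from the start of this section can be written as a short algorithm. The following is a minimal sketch in Python (our illustration, not part of the original text; the ingredient amounts are invented for the example): the same fixed steps applied to the same input produce the same output every time.

    # A recipe is a mapping from ingredient names to amounts.
    def double_recipe(ingredients):
        # An algorithm: fixed, step-by-step instructions that
        # produce the same result every time they are performed.
        return {name: amount * 2 for name, amount in ingredients.items()}

    pizza_dough = {"flour_g": 500, "water_ml": 325, "yeast_g": 7, "salt_g": 10}
    print(double_recipe(pizza_dough))
    # -> {'flour_g': 1000, 'water_ml': 650, 'yeast_g': 14, 'salt_g': 20}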

A heuristic is another type of problem-solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
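For the curious, the same working-backwards computation can be sketched in a few lines of Python (our illustration; the calendar date and the 45-minute traffic buffer are assumed figures, not ones given in the problem):

    from datetime import datetime, timedelta

    # Work backwards from the goal state (seated by 3:30 PM),
    # subtracting each earlier step from the target time.
    arrival_goal = datetime(2024, 6, 1, 15, 30)    # 3:30 PM; the date is arbitrary
    drive_time   = timedelta(hours=2, minutes=30)  # D.C. to Philadelphia, no traffic
    traffic_pad  = timedelta(minutes=45)           # assumed buffer for I-95 backups

    departure = arrival_goal - drive_time - traffic_pad
    print(departure.strftime("%I:%M %p"))          # 12:15 PM

Without the traffic buffer, the same subtraction gives a 1:00 PM departure.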

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Further problem-solving strategies have been identified (listed below) that incorporate flexible and creative thinking in order to reach solutions efficiently.

Additional Problem-Solving Strategies:

  • Abstraction – refers to solving the problem within a model of the situation before applying it to reality.
  • Analogy – is using a solution that solves a similar problem.
  • Brainstorming – refers to collecting and analyzing a large number of possible solutions, especially within a group of people, then combining and developing them until an optimal solution is reached.
  • Divide and conquer – breaking down large, complex problems into smaller, more manageable problems (see the merge-sort sketch after this list).
  • Hypothesis testing – a method used in experimentation in which an assumption about what will happen in response to manipulating an independent variable is made, and the effects of the manipulation are analyzed and compared to the original hypothesis.
  • Lateral thinking – approaching problems indirectly and creatively by viewing the problem in a new and unusual light.
  • Means-ends analysis – analyzing the difference between the current situation and the goal, then choosing actions in a series of smaller steps that move closer to the goal.
  • Method of focal objects – putting seemingly non-matching characteristics of different procedures together to make something new that will get you closer to the goal.
  • Morphological analysis – analyzing the outputs of and interactions of many pieces that together make up a whole system.
  • Proof – trying to prove that a problem cannot be solved. The point where the proof fails becomes the starting point for solving the problem.
  • Reduction – transforming the problem into a similar problem for which a solution exists.
  • Research – using existing knowledge or solutions to similar problems to solve the problem.
  • Root cause analysis – trying to identify the cause of the problem.
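As a concrete illustration of the divide-and-conquer strategy above, here is a minimal Python sketch (ours, not part of the original text) of merge sort, which splits a large sorting problem into two half-sized problems, solves each recursively, and then combines the solved halves:

    def merge_sort(items):
        # Base case: a list of zero or one items is already sorted.
        if len(items) <= 1:
            return items
        # Divide: split the problem into two smaller problems.
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Combine: merge the two sorted halves into one sorted list.
        merged = []
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

    print(merge_sort([7, 3, 9, 1, 4]))  # -> [1, 3, 4, 7, 9]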

The strategies listed above offer a short summary of the methods we use in working toward solutions, and they also demonstrate how the mind works when faced with barriers that prevent goals from being reached.

One example of means-ends analysis can be found in the Tower of Hanoi paradigm. This paradigm can also be modeled as a word problem, as demonstrated by the Missionary-Cannibal Problem:

Missionary-Cannibal Problem

Three missionaries and three cannibals are on one side of a river and need to cross to the other side. The only means of crossing is a boat, and the boat can only hold two people at a time. Your goal is to devise a set of moves that will transport all six of the people across the river, keeping in mind the following constraint: The number of cannibals can never exceed the number of missionaries in any location. Remember that someone will also have to row the boat back across each time.

Hint : At one point in your solution, you will have to send more people back to the original side than you just sent to the destination.
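Cognitive scientists and computer scientists often model puzzles like this one as a search through a space of states. The following Python sketch (our illustration, not from the original text) solves the word problem by breadth-first search; it assumes the standard formulation, in which a river bank with no missionaries on it is safe no matter how many cannibals are there.

    from collections import deque

    # A state is (missionaries_on_left, cannibals_on_left, boat_on_left).
    def safe(m, c):
        # Each bank is safe if it has no missionaries or at least
        # as many missionaries as cannibals.
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    def solve():
        start, goal = (3, 3, True), (0, 0, False)
        boatloads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (missionaries, cannibals)
        frontier, seen = deque([(start, [start])]), {start}
        while frontier:
            (m, c, boat), path = frontier.popleft()
            if (m, c, boat) == goal:
                return path
            for dm, dc in boatloads:
                sign = -1 if boat else 1            # the boat carries people across
                nm, nc = m + sign * dm, c + sign * dc
                nxt = (nm, nc, not boat)
                if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))

    for state in solve():
        print(state)  # one line per crossing, from (3, 3, True) to (0, 0, False)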

The actual Tower of Hanoi problem consists of three rods sitting vertically on a base and a number of disks of different sizes that can slide onto any rod. The puzzle starts with the disks in a neat stack, in ascending order of size, on one rod, with the smallest at the top, making a conical shape. The objective of the puzzle is to move the entire stack to another rod while obeying the following rules:

  • 1. Only one disk can be moved at a time.
  • 2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod.
  • 3. No disk may be placed on top of a smaller disk.


  Figure 7.02. Steps for solving the Tower of Hanoi in the minimum number of moves when there are 3 disks.
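The minimum-move solution illustrated in the figure follows a classic recursive insight: to move n disks, first move the top n − 1 disks to the spare rod, then move the largest disk to the target rod, then restack the n − 1 disks on top of it. A minimal Python sketch (our illustration, not part of the original text):

    def hanoi(n, source="A", target="C", spare="B"):
        # Recursive rule: clear the n-1 smaller disks out of the way,
        # move the largest disk, then restack the smaller disks on it.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)

    hanoi(3)  # prints the minimum 2**3 - 1 = 7 moves for three disks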


Figure 7.03. Graphical representation of nodes (circles) and moves (lines) of Tower of Hanoi.

The Tower of Hanoi is frequently used as a psychological technique to study problem solving and procedure analysis. A variation of the Tower of Hanoi, known as the Tower of London, has been developed and has become an important tool in the neuropsychological diagnosis of executive function disorders and their treatment.

GESTALT PSYCHOLOGY AND PROBLEM SOLVING

As you may recall from the sensation and perception chapter, Gestalt psychology describes whole patterns, forms, and configurations of perception and cognition, such as closure, good continuation, and figure-ground. In addition to studying patterns of perception, Wolfgang Kohler, a German Gestalt psychologist, traveled to the Spanish island of Tenerife in order to study animal behavior and problem solving in the anthropoid ape.

As an interesting side note to Kohler’s studies of chimp problem solving, Dr. Ronald Ley, professor of psychology at State University of New York, provides evidence in his book A Whisper of Espionage (1990) suggesting that while collecting data on Tenerife in the Canary Islands between 1914 and 1920 for what would later become The Mentality of Apes (1925), Kohler was also an active spy for the German government, alerting Germany to ships that were sailing around the Canary Islands. Ley suggests that his investigations in England, Germany, and elsewhere in Europe confirm that Kohler served the German military by building, maintaining, and operating a concealed radio that contributed to Germany’s war effort, acting as a strategic outpost in the Canary Islands that could monitor naval activity approaching the North African coast.

While stranded on the island over the course of World War I, Kohler applied Gestalt principles to animal perception in order to understand how animals solve problems. He recognized that the apes on the island also perceive relations between stimuli and the environment in Gestalt patterns and understand these patterns as wholes, as opposed to pieces that make up a whole. Kohler based his theories of animal intelligence on the ability to understand relations between stimuli, and he spent much of his time on the island investigating what he described as insight, the sudden perception of useful or proper relations. In order to study insight in animals, Kohler would present problems to chimpanzees by hanging bananas or some other food so that it was suspended higher than the apes could reach. Within the room, Kohler would arrange a variety of boxes, sticks, or other tools the chimpanzees could use, combining them in patterns or organizing them in a way that would allow the apes to obtain the food (Kohler & Winter, 1925).

While viewing the chimpanzees, Kohler noticed one chimp that was more efficient at solving problems than some of the others. The chimp, named Sultan, was able to use long poles to reach through bars and organize objects in specific patterns to obtain food or other desirables that were originally out of reach. In order to study insight within these chimps, Kohler would remove objects from the room to systematically make the food more difficult to obtain. As the story goes, after Kohler removed many of the objects Sultan was used to using to obtain the food, Sultan sat down and sulked for a while, and then suddenly got up and went over to two poles lying on the ground. Without hesitation, Sultan put one pole inside the end of the other, creating a longer pole that he could use to obtain the food, demonstrating an ideal example of what Kohler described as insight. In another situation, Sultan discovered how to stand on a box to reach a banana that was suspended from the rafters, illustrating Sultan’s perception of relations and the importance of insight in problem solving.

Grande (another chimp in the group studied by Kohler) builds a three-box structure to reach the bananas, while Sultan watches from the ground. Insight, sometimes referred to as an “Ah-ha” experience, was the term Kohler used for the sudden perception of useful relations among objects during problem solving (Kohler, 1927; Radvansky & Ashcraft, 2013).

Solving puzzles.

   Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (see figure) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
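If you are curious how a computer might approach the same puzzle, here is a minimal Python sketch (our illustration, not part of the original text) of a backtracking solver for a 4×4 grid: it tries each digit in the first empty cell, recurses, and undoes the choice on a dead end. The starting digits below match the blue “given” digits described in the answer at the end of this section; 0 marks an empty cell.

    grid = [[3, 0, 0, 2],
            [0, 4, 1, 0],
            [0, 3, 2, 0],
            [4, 0, 0, 1]]

    def ok(r, c, d):
        # A digit is allowed if it does not already appear in the
        # cell's row, column, or 2x2 box.
        if d in grid[r] or d in (row[c] for row in grid):
            return False
        br, bc = 2 * (r // 2), 2 * (c // 2)   # top-left corner of the 2x2 box
        return all(grid[br + i][bc + j] != d for i in range(2) for j in range(2))

    def solve():
        for r in range(4):
            for c in range(4):
                if grid[r][c] == 0:
                    for d in (1, 2, 3, 4):
                        if ok(r, c, d):
                            grid[r][c] = d
                            if solve():
                                return True
                            grid[r][c] = 0    # dead end: backtrack
                    return False
        return True                            # no empty cells remain

    solve()
    print(grid)  # the completed grid (the answer appears at the end of this section)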

How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this section.)

   Here is another popular type of puzzle (figure below) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

Did you figure it out? (The answer is at the end of this section.) Once you understand how to crack this puzzle, you won’t forget.

   Take a look at the “Puzzling Scales” logic puzzle below (figure below). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

A puzzle involving a scale is shown. At the top of the figure it reads: “Sam Loyd’s Puzzling Scales.” The first row of the puzzle shows a balanced scale with 3 blocks and a top on the left and 12 marbles on the right. Below this row it reads: “Since the scales now balance.” The next row of the puzzle shows a balanced scale with just the top on the left, and 1 block and 8 marbles on the right. Below this row it reads: “And balance when arranged this way.” The third row shows an unbalanced scale with the top on the left side, which is much lower than the right side. The right side is empty. Below this row it reads: “Then how many marbles will it require to balance with that top?”

What steps did you take to solve this puzzle? You can read the solution at the end of this section.

Pitfalls to problem solving.

   Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is a tendency to persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

   Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in the table below.

Were you able to determine how many marbles are needed to balance the scales in the figure below? You need nine. Were you able to solve the problems in the figures above? Here are the answers.
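To see where nine comes from, translate each balanced scale into an equation, letting b stand for the weight of one block and t for the weight of the top, both measured in marbles:

    3b + t = 12          (first scale: three blocks and a top balance 12 marbles)
    t = b + 8            (second scale: the top balances one block and 8 marbles)
    3b + (b + 8) = 12    (substitute the second equation into the first)
    4b = 4, so b = 1     (each block weighs one marble)
    t = 1 + 8 = 9        (the top balances nine marbles)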

The first puzzle is a Sudoku grid of 16 squares (4 rows of 4 squares). Half of the numbers were supplied to start the puzzle and are colored blue, and half have been filled in as the puzzle’s solution and are colored red. The numbers in each row of the grid, left to right, are as follows. Row 1: blue 3, red 1, red 4, blue 2. Row 2: red 2, blue 4, blue 1, red 3. Row 3: red 1, blue 3, blue 2, red 4. Row 4: blue 4, red 2, red 3, blue 1.

The second puzzle consists of 9 dots arranged in 3 rows of 3 inside of a square. The solution, four straight lines made without lifting the pencil, is shown as a red line with arrows indicating the direction of movement. In order to solve the puzzle, the lines must extend beyond the borders of the box. The four connecting lines are drawn as follows. Line 1 begins at the top left dot, proceeds through the middle and right dots of the top row, and extends to the right beyond the border of the square. Line 2 extends from the end of line 1, through the right dot of the horizontally centered row, through the middle dot of the bottom row, and beyond the square’s border, ending in the space beneath the left dot of the bottom row. Line 3 extends from the end of line 2 upwards through the left dots of the bottom, middle, and top rows. Line 4 extends from the end of line 3 through the middle dot in the middle row and ends at the right dot of the bottom row.

   Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.

References:

Openstax Psychology text by Kathryn Dumper, William Jenkins, Arlene Lacombe, Marilyn Lovett and Marion Perlmutter licensed under CC BY v4.0. https://openstax.org/details/books/psychology

Review Questions:

1. A specific formula for solving a problem is called ________.

a. an algorithm

b. a heuristic

c. a mental set

d. trial and error

2. Solving the Tower of Hanoi problem tends to utilize a  ________ strategy of problem solving.

a. divide and conquer

b. means-end analysis

d. experiment

3. A mental shortcut in the form of a general problem-solving framework is called ________.

4. Which type of bias involves becoming fixated on a single trait of a problem?

a. anchoring bias

b. confirmation bias

c. representative bias

d. availability bias

5. Which type of bias involves relying on a false stereotype to make a decision?

6. Wolfgang Kohler analyzed behavior of chimpanzees by applying Gestalt principles to describe ________.

a. social adjustment

b. student load payment options

c. emotional learning

d. insight learning

7. ________ is a type of mental set where you cannot perceive an object being used for something other than what it was designed for.

a. functional fixedness

c. working memory

Critical Thinking Questions:

1. What is functional fixedness and how can overcoming it help you solve problems?

2. How does an algorithm save you time and energy when solving a problem?

Personal Application Question:

1. Which type of bias do you recognize in your own decision making processes? How has this bias affected how you’ve made decisions in the past, and how can you use your awareness of it to improve your decision-making skills in the future?


Glossary:

algorithm:  problem-solving strategy characterized by a specific set of instructions

anchoring bias:  faulty heuristic in which you fixate on a single aspect of a problem to find a solution

availability heuristic:  faulty heuristic in which you make a decision based on information readily available to you

confirmation bias:  faulty heuristic in which you focus on information that confirms your beliefs

functional fixedness:  inability to see an object as useful for any other use other than the one for which it was intended

heuristic:  mental shortcut that saves time when solving a problem

hindsight bias:  belief that the event just experienced was predictable, even though it really wasn’t

mental set:  continually using an old solution to a problem without results

problem-solving strategy:  method for solving problems

representative bias:  faulty heuristic in which you stereotype someone or something without a valid basis for your judgment

trial and error:  problem-solving strategy in which multiple solutions are attempted until the correct one is found

working backwards:  heuristic in which you begin to solve a problem by focusing on the end result


Modularity of Mind

The concept of modularity has loomed large in philosophy of psychology since the early 1980s, following the publication of Fodor’s landmark book The Modularity of Mind (1983). In the decades since the term ‘module’ and its cognates first entered the lexicon of cognitive science, the conceptual and theoretical landscape in this area has changed dramatically. Especially noteworthy in this respect has been the development of evolutionary psychology, whose proponents adopt a less stringent conception of modularity than the one advanced by Fodor, and who argue that the architecture of the mind is more pervasively modular than Fodor claimed. Where Fodor (1983, 2000) draws the line of modularity at the relatively low-level systems underlying perception and language, post-Fodorian theorists such as Sperber (2002) and Carruthers (2006) contend that the mind is modular through and through, up to and including the high-level systems responsible for reasoning, planning, decision making, and the like. The concept of modularity has also figured in recent debates in philosophy of science, epistemology, ethics, and philosophy of language—further evidence of its utility as a tool for theorizing about mental architecture.

1. What is a mental module?


In his classic introduction to modularity, Fodor (1983) lists nine features that collectively characterize the type of system that interests him. In original order of presentation, they are:

  • Domain specificity
  • Mandatory operation
  • Limited central accessibility
  • Fast processing
  • Informational encapsulation
  • ‘Shallow’ outputs
  • Fixed neural architecture
  • Characteristic and specific breakdown patterns
  • Characteristic ontogenetic pace and sequencing

A cognitive system counts as modular in Fodor’s sense if it is modular “to some interesting extent,” meaning that it has most of these features to an appreciable degree (Fodor, 1983, p. 37). This is a weighted most, since some marks of modularity are more important than others. Informational encapsulation, for example, is more or less essential for modularity, as well as explanatorily prior to several of the other features on the list (Fodor, 1983, 2000).

Each of the items on the list calls for explication. To streamline the exposition, we will cluster most of the features thematically and examine them on a cluster-by-cluster basis, along the lines of Prinz (2006).

Encapsulation and inaccessibility. Informational encapsulation and limited central accessibility are two sides of the same coin. Both features pertain to the character of information flow across computational mechanisms, albeit in opposite directions. Encapsulation involves restriction on the flow of information into a mechanism, whereas inaccessibility involves restriction on the flow of information out of it.

A cognitive system is informationally encapsulated to the extent that in the course of processing a given set of inputs it cannot access information stored elsewhere; all it has to go on is the information contained in those inputs plus whatever information might be stored within the system itself, for example, in a proprietary database. In the case of language, for example:

A parser for [a language] L contains a grammar of L . What it does when it does its thing is, it infers from certain acoustic properties of a token to a characterization of certain of the distal causes of the token (e.g., to the speaker’s intention that the utterance should be a token of a certain linguistic type). Premises of this inference can include whatever information about the acoustics of the token the mechanisms of sensory transduction provide, whatever information about the linguistic types in L the internally represented grammar provides, and nothing else . (Fodor, 1984, pp. 245–246; italics in original)

Similarly, in the case of perception—understood as a kind of non-demonstrative (i.e., defeasible, or non-monotonic) inference from sensory ‘premises’ to perceptual ‘conclusions’—the claim that perceptual systems are informationally encapsulated is equivalent to the claim that “the data that can bear on the confirmation of perceptual hypotheses includes, in the general case, considerably less than the organism may know” (Fodor, 1983, p. 69). The classic illustration of this property comes from the study of visual illusions, which tend to persist even after the viewer is explicitly informed about the character of the stimulus. In the Müller-Lyer illusion, for example, the two lines continue to look as if they were of unequal length even after one has convinced oneself otherwise, e.g., by measuring them with a ruler (see Figure 1, below).


Figure 1 . The Müller-Lyer illusion .

Informational encapsulation is related to what Pylyshyn (1984, 1999) calls cognitive impenetrability. But the two properties are not the same; instead, they are related as genus to species. Cognitive impenetrability is a matter of encapsulation relative to information stored in central memory, paradigmatically in the form of beliefs and utilities. But a system could be encapsulated in this respect without being encapsulated across the board. For example, auditory speech perception might be encapsulated relative to beliefs and utilities but unencapsulated relative to vision, as suggested by the McGurk effect (see below, §2.1). Likewise, a system could be unencapsulated relative to beliefs and utilities yet encapsulated relative to perception; it’s plausible that central systems have this character, insofar as their operations are sensitive only to post-perceptual, propositionally encoded information. Strictly speaking, then, cognitive impenetrability is a specific type of informational encapsulation, albeit a type with special architectural significance. Lacking this feature means failing the encapsulation test, the litmus test of modularity. But systems with this feature might still fail the test, due to information seepage of a different (i.e., non-central) sort.

The flip side of informational encapsulation is inaccessibility to central monitoring. A system is inaccessible in this sense if the intermediate-level representations that it computes prior to producing its output are inaccessible to consciousness, and hence unavailable for explicit report. In effect, centrally inaccessible systems are those whose internal processing is opaque to introspection. Though the outputs of such systems may be phenomenologically salient, their precursor states are not. Speech comprehension, for example, likely involves the successive elaboration of myriad representations (of various types: phonological, lexical, syntactic, etc.) of the stimulus, but of these only the final product—the representation of the meaning of what was said—is consciously available.

Mandatoriness, speed, and superficiality. In addition to being informationally encapsulated and centrally inaccessible, modular systems and processes are “fast, cheap, and out of control” (to borrow a phrase from roboticist Rodney Brooks). These features form a natural trio, as we’ll see.

The operation of a cognitive system is mandatory just in case it is automatic, that is, not under conscious control (Bargh & Chartrand, 1999). This means that, like it or not, the system’s operations are switched on by presentation of the relevant stimuli and those operations run to completion. For example, native speakers of English cannot hear the sounds of English being spoken as mere noise: if they hear those sounds at all, they hear them as English. Likewise, it’s impossible to see a 3D array of objects in space as 2D patches of color, however hard one may try.

Speed is arguably the mark of modularity that requires least in the way of explication. But speed is relative, so the best way to proceed here is by way of examples. Speech shadowing is generally considered to be very fast, with typical lag times on the order of about 250 ms. Since the syllabic rate of normal speech is about 4 syllables per second, this suggests that shadowers are processing the stimulus in syllable-length bits—probably the smallest bits that can be identified in the speech stream, given that “only at the level of the syllable do we begin to find stretches of wave form whose acoustic properties are at all reliably related to their linguistic values” (Fodor, 1983, p. 62). Similarly impressive results are available for vision: in a rapid serial visual presentation task (matching picture to description), subjects were 70% accurate at 125 ms. exposure per picture and 96% accurate at 167 ms. (Fodor, 1983, p. 63). In general, a cognitive process counts as fast in Fodor’s book if it takes place in a half second or less.

A further feature of modular systems is that their outputs are relatively ‘shallow’. Exactly what this means is unclear. But the depth of an output seems to be a function of at least two properties: first, how much computation is required to produce it (i.e., shallow means computationally cheap); second, how constrained or specific its informational content is (i.e., shallow means informationally general) (Fodor, 1983, p. 87). These two properties are correlated, in that outputs with more specific content tend to be more costly for a system to compute, and vice versa. Some writers have interpreted shallowness to require non-conceptual character (e.g., Carruthers, 2006, p. 4). But this conflicts with Fodor’s own gloss on the term, in which he suggests that the output of a plausibly modular system such as visual object recognition might be encoded at the level of ‘basic-level’ concepts, like DOG and CHAIR (Rosch et al., 1976). What’s ruled out here is not concepts per se, then, but highly theoretical concepts like PROTON, which are too informationally specific and too computationally expensive to meet the shallowness criterion.

All three of the features just discussed—mandatoriness, speed, and shallowness—are associated with, and to some extent explicable in terms of, informational encapsulation. In each case, less is more, informationally speaking. Mandatoriness flows from the insensitivity of the system to the organism’s utilities, which is one dimension of cognitive impenetrability. Speed depends upon the efficiency of processing, which positively correlates with encapsulation in so far as encapsulation tends to reduce the system’s informational load. Shallowness is a similar story: shallow outputs are computationally cheap, and computational expense is negatively correlated with encapsulation. In short, the more informationally encapsulated a system is, the more likely it is to be fast, cheap, and out of control.

Dissociability and localizability. To say that a system is functionally dissociable is to say that it can be selectively impaired, that is, damaged or disabled with little or no effect on the operation of other systems. As the neuropsychological record indicates, selective impairments of this sort have frequently been observed as a consequence of circumscribed brain lesions. Standard examples from the study of vision include prosopagnosia (impaired face recognition), achromatopsia (total color blindness), and akinetopsia (motion blindness); examples from the study of language include agrammatism (loss of complex syntax), jargon aphasia (loss of complex semantics), anomia (loss of object words), and dyslexia (impaired reading and writing). Each of these disorders has been found in otherwise cognitively normal individuals, suggesting that the lost capacities are subserved by functionally dissociable mechanisms.

Functional dissociability is associated with neural localizability in a strong sense. A system is strongly localized just in case it is (a) implemented in neural circuitry that is both relatively circumscribed in extent (though not necessarily in contiguous areas) and (b) dedicated to the realization of that system alone. Localization in this sense goes beyond mere implementation in local neural circuitry, since a given bit of circuitry could subserve more than one cognitive function (Anderson, 2010). Proposed candidates for strong localization include systems for color vision (V4), motion detection (MT), face recognition (fusiform gyrus), and spatial scene recognition (parahippocampal gyrus).

Domain specificity . A system is domain specific to the extent that it has a restricted subject matter, that is, the class of objects and properties that it processes information about is circumscribed in a relatively narrow way. As Fodor (1983) puts it, “domain specificity has to do with the range of questions for which a device provides answers (the range of inputs for which it computes analyses)” (p. 103): the narrower the range of inputs a system can compute, the narrower the range of problems the system can solve—and the narrower the range of such problems, the more domain specific the device. Alternatively, the degree of a system’s domain specificity can be understood as a function of the range of inputs that turn the system on, where the size of that range determines the informational reach of the system (Carruthers, 2006; Samuels, 2000).

Domains (and by extension, modules) are typically more fine-grained than sensory modalities like vision and audition. This seems clear from Fodor’s list of plausibly domain-specific mechanisms, which includes systems for color perception, visual shape analysis, sentence parsing, and face and voice recognition (Fodor, 1983, p. 47)—none of which correspond to perceptual or linguistic faculties in an intuitive sense. It also seems plausible, however, that the traditional sense modalities (vision, audition, olfaction, etc.), and the language faculty as a whole, are sufficiently domain specific to count as displaying this particular mark of modularity (McCauley & Henrich, 2006).

Innateness . The final feature of modular systems on Fodor’s roster is innateness, understood as the property of “develop[ing] according to specific, endogenously determined patterns under the impact of environmental releasers” (Fodor, 1983, p. 100). On this view, modular systems come on-line chiefly as the result of a brute-causal process like triggering, rather than an intentional-causal process like learning. (For more on this distinction, see Cowie, 1999; for an alternative analysis of innateness, based on the notion of canalization, see Ariew, 1999.) The most familiar example here is language, the acquisition of which occurs in all normal individuals in all cultures on more or less the same schedule: single words at 12 months, telegraphic speech at 18 months, complex grammar at 24 months, and so on (Stromswold, 1999). Other candidates include visual object perception (Spelke, 1994) and low-level mindreading (Scholl & Leslie, 1999).

2. Modularity, Fodor-style: A modest proposal

The hypothesis of modest modularity, as we shall call it, has two strands. The first strand of the hypothesis is positive. It says that input systems, such as systems involved in perception and language, are modular. The second strand is negative. It says that central systems, such as systems involved in belief fixation and practical reasoning, are not modular.

In this section, we assess the case for modest modularity. The next section (§3) will be devoted to discussion of the hypothesis of massive modularity, which retains the positive strand of Fodor’s hypothesis while reversing the polarity of the second strand from negative to positive—revising the concept of modularity in the process.

The positive part of the modest modularity hypothesis is that input systems are modular. By ‘input system’ Fodor (1983) means a computational mechanism that “presents the world to thought” (p. 40) by processing the outputs of sensory transducers. A sensory transducer is a device that converts the energy impinging on the body’s sensory surfaces, such as the retina and cochlea, into a computationally usable form, without adding or subtracting information. Roughly speaking, the product of sensory transduction is raw sensory data. Input processing involves non-demonstrative inferences from this raw data to hypotheses about the layout of objects in the world. These hypotheses are then passed on to central systems for the purpose of belief fixation, and those systems in turn pass their outputs to systems responsible for the production of behavior.

Fodor argues that input systems constitute a natural kind, defined as “a class of phenomena that have many scientifically interesting properties over and above whatever properties define the class” (Fodor, 1983, p. 46). He argues for this by presenting evidence that input systems are modular, where modularity is marked by a cluster of psychologically interesting properties—the most interesting and important of these being informational encapsulation, as discussed in §1. In the course of that discussion, we reviewed a representative sample of this evidence, and for present purposes that should suffice. (Readers interested in further details should consult Fodor, 1983, pp. 47–101.)

Fodor’s claim about the modularity of input systems has been disputed by a number of philosophers and psychologists (Churchland, 1988; Arbib, 1987; Marslen-Wilson & Tyler, 1987; McCauley & Henrich, 2006). The most wide-ranging philosophical critique is due to Prinz (2006), who argues that perceptual and linguistic systems rarely exhibit the features characteristic of modularity. In particular, he argues that such systems are not informationally encapsulated. To this end, Prinz adduces two types of evidence. First, there appear to be cross-modal effects in perception, which would tell against encapsulation at the level of input systems. The classic example of this, also from the speech perception literature, is the McGurk effect (McGurk & MacDonald, 1976). Here, subjects watching a video of one phoneme being spoken (e.g., /ga/) dubbed with a sound recording of a different phoneme (/ba/) hear a third, altogether different phoneme (/da/). Second, he points to what look to be top-down effects on visual and linguistic processing, the existence of which would tell against cognitive impenetrability, i.e., encapsulation relative to central systems. Some of the most striking examples of such effects come from research on speech perception. Probably the best-known is the phoneme restoration effect, as in the case where listeners ‘fill in’ a missing phoneme in a spoken sentence ( The state governors met with their respective legi*latures convening in the capital city ) from which the missing phoneme (the /s/ sound in legislatures ) has been deleted and replaced with the sound of a cough (Warren, 1970). By hypothesis, this filling-in is driven by listeners’ understanding of the linguistic context.

How convincing one finds this part of Prinz’s critique, however, depends on how convincing one finds his explanation of these effects. The McGurk effect, for example, seems consistent with the claim that speech perception is an informationally encapsulated system, albeit a system that is multi-modal in character (cf. Fodor, 1983, p.132n.13). If speech perception is a multi-modal system, the fact that its operations draw on both auditory and visual information need not undermine the claim that speech perception is encapsulated. Other cross-modal effects, however, resist this type of explanation. In the double flash illusion, for example, viewers shown a single flash accompanied by two beeps report seeing two flashes (Shams et al., 2000). The same goes for the rubber hand illusion, in which synchronous brushing of a hand hidden from view and a realistic-looking rubber hand seen at the usual location of the hand that was hidden gives rise to the impression that the fake hand is real (Botvinick & Cohen, 1998). With respect to phenomena of this sort, unlike the McGurk effect, there is no plausible candidate for a single, domain-specific system whose operations draw on multiple sources of sensory information.

Regarding phoneme restoration, it could be that the effect is driven by listeners’ drawing on information stored in a language-proprietary database (specifically, information about the linguistic types in the lexicon of English), rather than higher-level contextual information. Hence, it’s unclear whether the case of phoneme restoration described above counts as a top-down effect. But not all cases of phoneme restoration can be accommodated so readily, since the phenomenon also occurs when there are multiple lexical items available for filling in (Warren & Warren, 1970). For example, listeners fill the gap in the sentences The *eel is on the axle and The *eel is on the orange differently—with a /wh/ sound and a /p/ sound, respectively—suggesting that speech perception is sensitive to contextual information after all.

A further challenge to modest modularity, not addressed by Prinz (2006), comes from evidence that susceptibility to the Müller-Lyer illusion varies by both culture and age. For example, it appears that adults in Western cultures are more susceptible to the illusion than their non-Western counterparts; that adults in some non-Western cultures, such as hunter-gatherers from the Kalahari Desert, are nearly immune to the illusion; and that within (but not always across) Western and non-Western cultures, pre-adolescent children are more susceptible to the illusion than adults are (Segall, Campbell, & Herskovits, 1966). McCauley and Henrich (2006) take these findings as showing that the visual system is diachronically (as opposed to synchronically) penetrable, in that how one experiences the illusion-inducing stimulus changes as a result of one’s wider perceptual experience over an extended period of time. They also argue that the aforementioned evidence of cultural and developmental variability in perception militates against the idea that vision is an innate capacity, that is, the idea that vision is among the “endogenous features of the human cognitive system that are, if not largely fixed at birth, then, at least, genetically pre-programmed” and “triggered, rather than shaped, by the newborn’s subsequent experience” (p. 83). However, they also issue the following caveat:

[N]othing about any of the findings we have discussed establishes the synchronic cognitive penetrability of the Müller-Lyer stimuli. Nor do the Segall et al. (1966) findings provide evidence that adults’ visual input systems are diachronically penetrable. They suggest that it is only during a critical developmental stage that human beings’ susceptibility to the Müller-Lyer illusion varies considerably and that that variation substantially depends on cultural variables. (McCauley & Henrich, 2006, p. 99; italics in original)

As such, the evidence cited can be accommodated by friends of modest modularity, provided that allowance is made for the potential impact of environmental, including cultural, variables on development—something that most accounts of innateness make room for.

A useful way of making this point invokes Segal’s (1996) idea of diachronic modularity (see also Scholl & Leslie, 1999). Diachronic modules are systems that exhibit parametric variation over the course of their development. For example, in the case of language, different individuals learn to speak different languages depending on the linguistic environment in which they grew up, but they nonetheless share the same underlying linguistic competence in virtue of their (plausibly innate) knowledge of Universal Grammar. Given the observed variation in how people see the Müller-Lyer illusion, it may be that the visual system is modular in much the same way, with its development constrained by features of the visual environment. Such a possibility seems consistent with the claim that input systems are modular in Fodor’s sense.

Another source of difficulty for proponents of input-level modularity is neuroscientific evidence against the claim that perceptual and linguistic systems are strongly localized. Recall that for a system to be strongly localized, it must be realized in dedicated neural circuitry. Strong localization at the level of input systems, then, entails the existence of a one-to-one mapping between input systems and brain structures. As Anderson (2010, 2014) argues, however, there is no such mapping, since most cortical regions of any size are deployed in different tasks across different domains. For instance, activation of the fusiform face area, once thought to be dedicated to the perception of faces, is also recruited for the perception of cars and birds (Gauthier et al., 2000). Likewise, Broca’s area, once thought to be dedicated to speech production, also plays a role in action recognition, action sequencing, and motor imagery (Tettamanti & Weniger, 2006). Functional neuroimaging studies generally suggest that cognitive systems are at best weakly localized, that is, implemented in distributed networks of the brain that overlap, rather than discrete and disjoint regions.

Arguably the most serious challenge to modularity at the level of input systems, however, comes from evidence that vision is cognitively penetrable, and hence, not informationally encapsulated. The concept of cognitive penetrability, originally introduced by Pylyshyn (1984), has been characterized in a variety of non-equivalent ways (Stokes, 2013), but the core idea is this: A perceptual system is cognitively penetrable if and only if its operations are directly causally sensitive to the agent’s beliefs, desires, intentions, or other nonperceptual states. Behavioral studies purporting to show that vision is cognitively penetrable date back to the early days of New Look psychology (Bruner and Goodman, 1947) and continue to the present day, with renewed interest in the topic emerging in the early 2000s (Firestone & Scholl, 2016). It appears, for example, that vision is influenced by an agent’s motivational states, with experimental subjects reporting that desirable objects look closer (Balcetis & Dunning, 2010) and ambiguous figures look like the interpretation associated with a more rewarding outcome (Balcetis & Dunning, 2006). In addition, vision seems to be influenced by subjects’ beliefs, with racial categorization affecting reports of the perceived skin tone of faces even when the stimuli are equiluminant (Levin & Banaji, 2006), and categorization of objects affecting reports of the perceived color of grayscale images of those objects (Hansen et al., 2006).

Skeptics of cognitive penetrability point out, however, that experimental evidence for top-down effects on perception can be explained in terms of effects of judgment, memory, and relatively peripheral forms of attention (Firestone & Scholl, 2016; Machery, 2015). Consider, for example, the claim that throwing a heavy ball (vs. a light ball) at a target makes the target look farther away, evidence for which consists of subjects’ visual estimates of the distance to the target (Witt, Proffitt, & Epstein, 2004). While it is possible that the greater effort involved in throwing the heavy ball caused the target to look farther away, it is also possible that the increased estimate of distance reflected the fact that subjects in the heavy ball condition judged the target to be farther away because they found it harder to hit (Firestone & Scholl, 2016). Indeed, reports by subjects in a follow-up study who were explicitly instructed to make their estimates on the basis of visual appearances only did not show the effect of effort, suggesting that the effect was post-perceptual (Woods, Philbeck, & Danoff, 2009). Other purported top-down effects on perception, such as the effect of golfing performance on size and distance estimates of golf holes (Witt et al., 2008), can be explained as effects of spatial attention, such as the fact that visually attended objects tend to appear larger and closer (Firestone & Scholl, 2016). These and related considerations suggest that the case for cognitive penetrability—and by extension, the case against low-level modularity—is weaker than its proponents make it out to be.

I turn now to the dark side of Fodor’s hypothesis: the claim that central systems are not modular.

Among the principal jobs of central systems is the fixation of belief, perceptual belief included, via non-demonstrative inference. Fodor (1983) argues that this sort of process cannot be realized in an informationally encapsulated system, and hence that central systems cannot be modular. Spelled out a bit further, his reasoning goes like this:

  1. Central systems are responsible for belief fixation.
  2. Belief fixation is isotropic and Quinean.
  3. Isotropic and Quinean processes cannot be carried out by informationally encapsulated systems.
  4. Belief fixation cannot be carried out by an informationally encapsulated system. [from 2 and 3]
  5. Modular systems are informationally encapsulated.
  6. Belief fixation is not modular. [from 4 and 5]
  7. Central systems are not modular. [from 1 and 6]

The argument here contains two terms that call for explication, both of which relate to the notion of confirmation holism in the philosophy of science. The term ‘isotropic’ refers to the epistemic interconnectedness of beliefs in the sense that “everything that the scientist knows is, in principle, relevant to determining what else he ought to believe. In principle, our botany constrains our astronomy, if only we could think of ways to make them connect” (Fodor, 1983, p. 105). Antony (2003) presents a striking case of this sort of long-range interdisciplinary cross-talk in the sciences, between astronomy and archaeology; Carruthers (2006, pp. 356–357) furnishes another example, linking solar physics and evolutionary theory. On Fodor’s view, since scientific confirmation is akin to belief fixation, the fact that scientific confirmation is isotropic suggests that belief fixation in general has this property.

A second dimension of confirmation holism is that confirmation is ‘Quinean’, meaning that:

[T]he degree of confirmation assigned to any given hypothesis is sensitive to properties of the entire belief system … simplicity, plausibility, and conservatism are properties that theories have in virtue of their relation to the whole structure of scientific beliefs taken collectively. A measure of conservatism or simplicity would be a metric over global properties of belief systems. (Fodor, 1983, pp. 107–108; italics in original)

Here again, the analogy between scientific thinking and thinking in general underwrites the supposition that belief fixation is Quinean.

Both isotropy and Quineanness are features that preclude encapsulation, since their possession by a system would require extensive access to the contents of central memory, and hence a high degree of cognitive penetrability. Put in slightly different terms: isotropic and Quinean processes are ‘global’ rather than ‘local’, and since globality precludes encapsulation, isotropy and Quineanness preclude encapsulation as well.

By Fodor’s lights, the upshot of this argument—namely, the nonmodular character of central systems—is bad news for the scientific study of higher cognitive functions. This is neatly expressed by his “First Law of the Non-Existence of Cognitive Science,” according to which “[t]he more global (e.g., the more isotropic) a cognitive process is, the less anybody understands it” (Fodor, 1983, p. 107). His grounds for pessimism on this score are twofold. First, global systems are unlikely to be associated with local brain architecture, thereby rendering them unpromising objects of neuroscientific study:

We have seen that isotropic systems are unlikely to exhibit articulated neuroarchitecture. If, as seems plausible, neuroarchitecture is often a concomitant of constraints on information flow, then neural equipotentiality is what you would expect in systems in which every process has more or less uninhibited access to all the available data. The moral is that, to the extent that the existence of form/function correspondence is a precondition for successful neuropsychological research, there is not much to be expected in the way of a neuropsychology of thought (Fodor, 1983, p. 127).

Second, and more importantly, global processes are resistant to computational explanation, making them unpromising objects of psychological study:

The fact is that—considerations of their neural realization to one side—global systems are per se bad domains for computational models, at least of the sort that cognitive scientists are accustomed to employ. The condition for successful science (in physics, by the way, as well as psychology) is that nature should have joints to carve it at: relatively simple subsystems which can be artificially isolated and which behave, in isolation, in something like the way that they behave in situ. Modules satisfy this condition; Quinean/isotropic-wholistic-systems by definition do not. If, as I have supposed, the central cognitive processes are nonmodular, that is very bad news for cognitive science (Fodor, 1983, p. 128).

By Fodor’s lights, then, considerations that militate against high-level modularity also militate against the possibility of a robust science of higher cognition—not a happy result, as far as most cognitive scientists and philosophers of mind are concerned.

Gloomy implications aside, Fodor’s argument against high-level modularity is difficult to resist. The main sticking points are these: first, the negative correlation between globality and encapsulation; second, the positive correlation between encapsulation and modularity. Putting these points together, we get a negative correlation between globality and modularity: the more global the process, the less modular the system that executes it. As such, there seem to be only three ways to block the conclusion of the argument:

  • Deny that central processes are global.
  • Deny that globality and encapsulation are negatively correlated.
  • Deny that encapsulation and modularity are positively correlated.

Of these three options, the second seems least attractive, as it seems something like a conceptual truth that globality and encapsulation pull in opposite directions. The first option is slightly more appealing, but only slightly. The idea that central processes are relatively global, even if not quite as global as confirmation in science, is hard to deny. And that is all the argument really requires.

That leaves the third option: denying that modularity requires encapsulation. This is, in effect, the strategy pursued by Carruthers (2006). More specifically, Carruthers draws a distinction between two kinds of encapsulation: ‘narrow-scope’ and ‘wide-scope’. A system is narrow-scope encapsulated if it cannot draw on any information held outside of it in the course of its processing. This corresponds to encapsulation as Fodor uses the term. By contrast, a system that is wide-scope encapsulated can draw on exogenous information during the course of its operations—it just cannot draw on all of that information. (Compare: “No exogenous information is accessible” vs. “Some exogenous information is not accessible.”) This is encapsulation in a weaker sense of the term than Fodor’s. Indeed, Carruthers’s use of the term ‘encapsulation’ in this context is a bit misleading, insofar as wide-scope encapsulated systems count as unencapsulated in Fodor’s sense (Prinz, 2006).

Dropping the (narrow-scope) encapsulation requirement on modules raises a number of issues, not the least of which is that it reduces the power of modularity hypotheses to explain functional dissociations at the system level (Stokes & Bergeron, 2015). That said, if modularity requires only wide-scope encapsulation, then Fodor’s argument against central modularity no longer goes through. But given the importance of narrow-scope encapsulation to Fodorian modularity, all this shows is that central systems might be modular in a non-Fodorian way. The original argument that central systems are not Fodor-modular—and with it, the motivation for the negative strand of the modest modularity hypothesis—stands.

3. Post-Fodorian modularity

According to the massive modularity hypothesis, the mind is modular through and through, including the parts responsible for high-level cognitive functions like belief fixation, problem-solving, planning, and the like. Originally articulated and advocated by proponents of evolutionary psychology (Sperber, 1994, 2002; Cosmides & Tooby, 1992; Pinker, 1997; Barrett, 2005; Barrett & Kurzban, 2006), the hypothesis has received its most comprehensive and sophisticated defense at the hands of Carruthers (2006). Before proceeding to the details of that defense, however, we need to consider briefly what concept of modularity is in play.

The main thing to note here is that the operative notion of modularity differs significantly from the traditional Fodorian one. Carruthers is explicit on this point:

[If] a thesis of massive mental modularity is to be remotely plausible, then by ‘module’ we cannot mean ‘Fodor-module’. In particular, the properties of having proprietary transducers, shallow outputs, fast processing, significant innateness or innate channeling, and encapsulation will very likely have to be struck out. That leaves us with the idea that modules might be isolable function-specific processing systems, all or almost all of which are domain specific (in the content sense), whose operations aren’t subject to the will, which are associated with specific neural structures (albeit sometimes spatially dispersed ones), and whose internal operations may be inaccessible to the remainder of cognition. (Carruthers, 2006, p. 12)

Of the original set of nine features associated with Fodor-modules, then, Carruthers-modules retain at most only five: dissociability, domain specificity, automaticity, neural localizability, and central inaccessibility. Conspicuously absent from the list is informational encapsulation, the feature most central to modularity in Fodor’s account. What’s more, Carruthers goes on to drop domain specificity, automaticity, and strong localizability (which rules out the sharing of parts between modules) from his initial list of five features, making his conception of modularity even more sparse (Carruthers, 2006, p. 62). Other proposals in the literature are similarly permissive in terms of the requirements a system must meet in order to count as modular (Coltheart, 1999; Barrett & Kurzban, 2006).

A second point, related to the first, is that defenders of massive modularity have chiefly been concerned to defend the modularity of central cognition, taking for granted that the mind is modular at the level of input systems. Thus, the hypothesis at issue for theorists like Carruthers might be best understood as the conjunction of two claims: first, that input systems are modular in a way that requires narrow-scope encapsulation; second, that central systems are modular, but only in a way that does not require this feature. In defending massive modularity, Carruthers focuses on the second of these claims, and so will we.

The centerpiece of Carruthers (2006) consists of three arguments for massive modularity: the Argument from Design, the Argument from Animals, and the Argument from Computational Tractability. Let’s briefly consider each of them in turn.

The Argument from Design is as follows:

  • Biological systems are designed systems, constructed incrementally.
  • Such systems, when complex, need to be organized in a pervasively modular way, that is, as a hierarchical assembly of separately modifiable, functionally autonomous components.
  • The human mind is a biological system, and is complex.
  • Therefore, the human mind is (probably) massively modular in its organization. (Carruthers, 2006, p. 25)

The crux of this argument is the idea that complex biological systems cannot evolve unless they are organized in a modular way, where modular organization entails that each component of the system (that is, each module) can be selected for change independently of the others. In other words, the evolvability of the system as a whole requires the independent evolvability of its parts. The problem with this assumption is twofold (Woodward & Cowie, 2004). First, not all biological traits are independently modifiable. Having two lungs, for example, is a trait that cannot be changed without changing other traits of an organism, because the genetic and developmental mechanisms underlying lung numerosity causally depend on the genetic and developmental mechanisms underlying bilateral symmetry. Second, there appear to be developmental constraints on neurogenesis which rule out changing the size of one brain area independently of the others. This in turn suggests that natural selection cannot modify cognitive traits in isolation from one another, given that evolving the neural circuitry for one cognitive trait is likely to result in changes to the neural circuitry for other traits.

A further worry about the Argument from Design concerns the gap between its conclusion (the claim that the mind is massively modular in organization) and the hypothesis at issue (the claim that the mind is massively modular simpliciter). The worry is this. According to Carruthers, the modularity of a system implies the possession of just two properties: functional dissociability and inaccessibility of processing to external monitoring. Suppose that a system is massively modular in organization. It follows from the definition of modular organization that the components of the system are functionally autonomous and separately modifiable. Though functional autonomy guarantees dissociability, it’s not clear why separate modifiability guarantees inaccessibility to external monitoring. According to Carruthers, the reason is that “if the internal operations of a system (e.g., the details of the algorithm being executed) were available elsewhere, then they couldn’t be altered without some corresponding alteration being made in the system to which they are accessible” (Carruthers, 2006, p. 61). But this is a questionable assumption. On the contrary, it seems plausible that the internal operations of one system could be accessible to a second system in virtue of a monitoring mechanism that functions the same way regardless of the details of the processing being monitored. At a minimum, the claim that separate modifiability entails inaccessibility to external monitoring calls for more justification than Carruthers offers.

In short, the Argument from Design is susceptible to a number of objections. Fortunately, there’s a slightly stronger argument in the vicinity of this one, due to Cosmides and Tooby (1992). It goes like this:

  • The human mind is a product of natural selection.
  • In order to survive and reproduce, our human ancestors had to solve a number of recurrent adaptive problems (finding food, shelter, mates, etc.).
  • Since adaptive problems are solved more quickly, efficiently, and reliably by modular systems than by non-modular ones, natural selection would have favored the evolution of a massively modular architecture.
  • Therefore, the human mind is (probably) massively modular.

The force of this argument depends chiefly on the strength of the third premise. Not everyone is convinced, to put it mildly (Fodor, 2000; Samuels, 2000; Woodward & Cowie, 2004). First, the premise exemplifies adaptationist reasoning, and adaptationism in the philosophy of biology has more than its share of critics. Second, it is doubtful whether adaptive problem-solving in general is easier to accomplish with a large collection of specialized problem-solving devices than with a smaller collection of general problem-solving devices with access to a library of specialized programs (Samuels, 2000). Hence, insofar as the massive modularity hypothesis postulates an architecture of the first sort—as evolutionary psychologists’ ‘Swiss Army knife’ metaphor of the mind implies (Cosmides & Tooby, 1992)—the premise seems shaky.

A related argument is the Argument from Animals. Unlike the Argument from Design, this argument is never explicitly stated in Carruthers (2006). But here is a plausible reconstruction of it, due to Wilson (2008):

  • Animal minds are massively modular.
  • Human minds are incremental extensions of animal minds.
  • Therefore, human minds are (probably) massively modular.

Unfortunately for friends of massive modularity, this argument, like the argument from design, is vulnerable to a number of objections (Wilson, 2008). We’ll mention two of them here. First, it’s not easy to motivate the claim that animal minds are massively modular in the operative sense. Though Carruthers (2006) goes to heroic lengths to do so, the evidence he cites—e.g., for the domain specificity of animal learning mechanisms, à la Gallistel, 1990—adds up to less than what’s needed. The problem is that domain specificity is not sufficient for Carruthers-style modularity; indeed, it is not even one of the central characteristics of modularity in Carruthers’ account. So the argument falters at the first step. Second, even if animal minds are massively modular, and even if single incremental extensions of the animal mind preserve that feature, it’s quite possible that a series of such extensions of animal minds might have led to its loss. In other words, as Wilson (2008) puts it, it can’t be assumed that the conservation of massive modularity is transitive. And without this assumption, the argument from animals can’t go through.

Finally, we have the Argument from Computational Tractability (Carruthers, 2006, pp. 44–59). For the purposes of this argument, we assume that a mental process is computationally tractable if it can be specified at the algorithmic level in such a way that the execution of the process is feasible given time, energy, and other resource constraints on human cognition (Samuels, 2005). We also assume that a system is encapsulated if in the course of its operations the system lacks access to at least some information exogenous to it.

  • The mind is computationally realized.
  • All computational mental processes must be tractable.
  • Tractable processing is possible only in encapsulated systems.
  • Hence, the mind must consist entirely of encapsulated systems.
  • Hence, the mind is (probably) massively modular.

There are two problems with this argument, however. The first problem has to do with the third premise, which states that tractability requires encapsulation, that is, the inaccessibility of at least some exogenous information to processing. What tractability actually requires is something weaker, namely, that not all information is accessed by the mechanism in the course of its operations (Samuels, 2005). In other words, it is possible for a system to have unlimited access to a database without actually accessing all of its contents. Though tractable computation rules out exhaustive search, for example, unencapsulated mechanisms need not engage in exhaustive search, so tractability does not require encapsulation. The second problem with the argument concerns the last step. Though one might reasonably suppose that modular systems must be encapsulated, the converse doesn’t follow. Indeed, Carruthers (2006) makes no mention of encapsulation in his characterization of modularity, so it’s unclear how one is supposed to get from a claim about pervasive encapsulation to a claim about pervasive modularity.
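To make Samuels’s point concrete, here is a minimal sketch (illustrative only; the data and names are hypothetical): a mechanism can have unrestricted access to a large memory store while consulting only a single entry per query, so unencapsulated access does not force exhaustive search.

```python
# Toy illustration: access-to-all does not mean accessing all.
beliefs = {f"fact_{i}": i % 7 for i in range(1_000_000)}  # large central store

def exhaustive_lookup(key):
    # Intractable style: scan every entry, though only one is relevant.
    return next(v for k, v in beliefs.items() if k == key)

def indexed_lookup(key):
    # Tractable style: the same unencapsulated store, but a hash lookup
    # touches a single entry.
    return beliefs[key]

assert exhaustive_lookup("fact_42") == indexed_lookup("fact_42")
```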

All in all, then, compelling general arguments for massive modularity are hard to come by. This is not yet to dismiss the possibility of modularity in high-level cognition, but it invites skepticism, especially given the paucity of empirical evidence directly supporting the hypothesis (Robbins, 2013). For example, it has been suggested that the capacity to think about social exchanges is subserved by a domain-specific, functionally dissociable, and innate mechanism (Stone et al., 2002; Sugiyama et al., 2002). However, it appears that deficits in social exchange reasoning do not occur in isolation, but are accompanied by other social-cognitive impairments (Prinz, 2006). Skepticism about modularity in other areas of central cognition, such as high-level mindreading, also seems to be the order of the day (Currie & Sterelny, 2000). The type of mindreading impairments characteristic of Asperger syndrome and high-functioning autism, for example, co-occur with sensory processing and executive function deficits (Frith, 2003). In general, there is little in the way of neuropsychological evidence to support the idea of high-level modularity.

Just as there are general theoretical arguments for massive modularity, there are general theoretical arguments against it. One argument takes the form of what Fodor (2000) calls the ‘Input Problem’. The problem is this. Suppose that the architecture of the mind is modular from top to bottom, and the mind consists entirely of domain-specific mechanisms. In that case, the outputs of each low-level (input) system will need to be routed to the appropriately specialized high-level (central) system for processing. But that routing can only be accomplished by a domain-general, non-modular mechanism—contradicting the initial supposition. In response to this problem, Barrett (2005) argues that processing in a massively modular architecture does not require a domain-general routing device of the sort envisaged by Fodor. An alternative solution, Barrett suggests, involves what he calls ‘enzymatic computation’. In this model, low-level systems pool their outputs together in a centrally accessible workspace where each central system is selectively activated by outputs that match its domain, in much the same way that enzymes selectively bind with substrates that match their specific templates. Like enzymes, specialized computational devices at the central level of the architecture accept a restricted range of inputs (analogous to biochemical substrates), perform specialized operations on that input (analogous to biochemical reactions), and produce outputs in a format useable by other computational devices (analogous to biochemical products). This obviates the need for a domain-general (hence, non-modular) mechanism to mediate between low-level and high-level systems.
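The enzymatic model lends itself to a toy illustration. The sketch below compresses Barrett’s proposal considerably, and all names and domains are hypothetical: each specialized device itself selects, from a shared pool, only the representations matching its template, so no domain-general router inspects or dispatches content.

```python
# Toy sketch of 'enzymatic computation' (hypothetical names and domains).
workspace = [
    {"domain": "faces", "data": "angry expression"},
    {"domain": "speech", "data": "rising intonation"},
    {"domain": "faces", "data": "direct gaze"},
]

def face_appraisal(item):
    return f"threat appraisal of {item['data']}"

def prosody_analysis(item):
    return f"prosodic analysis of {item['data']}"

def run_module(domain, operation):
    # The device binds only to items fitting its template, the way an
    # enzyme binds only to substrates fitting its active site.
    return [operation(item) for item in workspace if item["domain"] == domain]

print(run_module("faces", face_appraisal))
print(run_module("speech", prosody_analysis))
```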

A second challenge to massive modularity is posed by the ‘Domain Integration Problem’ (Carruthers, 2006). The problem here is that reasoning, planning, decision making, and other types of high-level cognition routinely involve the production of conceptually structured representations whose content crosses domains. This means that there must be some mechanism for integrating representations from multiple domains. But such a mechanism would be domain general rather than domain specific, and hence, non-modular. Like the Input Problem, however, the Domain Integration Problem is not insurmountable. One possible solution is that the language system has the capacity to play the role of content integrator in virtue of its capacity to transform conceptual representations that have been linguistically encoded (Hermer & Spelke, 1996; Carruthers, 2002, 2006). On this view, language is the vehicle of domain-general thought.

Empirical objections to massive modularity take a variety of forms. To start with, there is neurobiological evidence of developmental plasticity, a phenomenon that tells against the idea that brain structure is innately specified (Buller, 2005; Buller and Hardcastle, 2000). However, not all proponents of massive modularity insist that modules are innately specified (Carruthers, 2006; Kurzban, Tooby, and Cosmides, 2001). Furthermore, it’s unclear to what extent the neurobiological record is at odds with nativism, given the evidence that specific genes are linked to the normal development of cortical structures in both humans and animals (Machery & Barrett, 2006; Ramus, 2006).

Another source of evidence against massive modularity comes from research on individual differences in high-level cognition (Rabaglia, Marcus, & Lane, 2011). Such differences tend to be strongly positively correlated across domains—a phenomenon known as the ‘positive manifold’—suggesting that high-level cognitive abilities are subserved by a domain-general mechanism, rather than by a suite of specialized modules. There is, however, an alternative explanation of the positive manifold. Since post-Fodorian modules are allowed to share parts (Carruthers, 2006), the correlations observed may stem from individual differences in the functioning of components spanning multiple domain-specific mechanisms.

Interest in modularity is not confined to cognitive science and the philosophy of mind; it extends well into a number of allied fields. In epistemology, modularity has been invoked to defend the legitimacy of a theory-neutral type of observation, and hence the possibility of some degree of consensus among scientists with divergent theoretical commitments (Fodor, 1984). The ensuing debate on this issue (Churchland, 1988; Fodor, 1988; McCauley & Henrich, 2006) holds lasting significance for the general philosophy of science, particularly for controversies regarding the status of scientific realism. Relatedly, evidence of the cognitive penetrability of perception has given rise to worries about the justification of perceptual beliefs (Siegel, 2012; Stokes, 2012). In ethics, evidence of this sort has been used to cast doubt on ethical intuitionism as an account of moral epistemology (Cowan, 2014). In philosophy of language, modularity has figured in theorizing about linguistic communication, for example, in relevance theorists’ suggestion that speech interpretation, pragmatic warts and all, is a modular process (Sperber & Wilson, 2002). It has also been used to demarcate the boundary between semantics and pragmatics, and to defend a notably austere version of semantic minimalism (Borg, 2004). Though the success of these deployments of modularity theory is subject to dispute (e.g., see Robbins, 2007, for doubts about the modularity of semantics), their existence testifies to the relevance of the concept of modularity to philosophical inquiry in a variety of domains.

  • Anderson, M. L., 2010. Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33: 245–313.
  • –––, 2014. After Phrenology: Neural Reuse and the Interactive Brain, Cambridge, MA: MIT Press.
  • Antony, L. M., 2003. Rabbit-pots and supernovas: On the relevance of psychological data to linguistic theory. In A. Barber (ed.), Epistemology of Language, Oxford: Oxford University Press, pp. 47–68.
  • Arbib, M., 1987. Modularity and interaction of brain regions underlying visuomotor coordination. In J. L. Garfield (ed.), Modularity in Knowledge Representation and Natural-Language Understanding, Cambridge, MA: MIT Press, pp. 333–363.
  • Ariew, A., 1999. Innateness is canalization: In defense of a developmental account of innateness. In V. G. Hardcastle (ed.), Where Biology Meets Psychology, Cambridge, MA: MIT Press, pp. 117–138.
  • Balcetis, E. and Dunning, D., 2006. See what you want to see: Motivational influences on visual perception. Journal of Personality and Social Psychology, 91: 612–625.
  • –––, 2010. Wishful seeing: More desired objects are seen as closer. Psychological Science, 21: 147–152.
  • Bargh, J. A. and Chartrand, T. L., 1999. The unbearable automaticity of being. American Psychologist, 54: 462–479.
  • Barrett, H. C., 2005. Enzymatic computation and cognitive modularity. Mind & Language, 20: 259–287.
  • Barrett, H. C. and Kurzban, R., 2006. Modularity in cognition: Framing the debate. Psychological Review, 113: 628–647.
  • Borg, E., 2004. Minimal Semantics, Oxford: Oxford University Press.
  • Bruner, J. and Goodman, C. C., 1947. Value and need as organizing factors in perception. Journal of Abnormal and Social Psychology, 42: 33–44.
  • Buller, D., 2005. Adapting Minds, Cambridge, MA: MIT Press.
  • Buller, D. and Hardcastle, V. G., 2000. Evolutionary psychology, meet developmental neurobiology: Against promiscuous modularity. Brain and Mind, 1: 302–325.
  • Carruthers, P., 2002. The cognitive functions of language. Behavioral and Brain Sciences, 25: 657–725.
  • –––, 2006. The Architecture of the Mind, Oxford: Oxford University Press.
  • Churchland, P., 1988. Perceptual plasticity and theoretical neutrality: A reply to Jerry Fodor. Philosophy of Science, 55: 167–187.
  • Coltheart, M., 1999. Modularity and cognition. Trends in Cognitive Sciences, 3: 115–120.
  • Cosmides, L. and Tooby, J., 1992. Cognitive adaptations for social exchange. In J. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind, Oxford: Oxford University Press, pp. 163–228.
  • Cowan, R., 2014. Cognitive penetrability and ethical perception. Review of Philosophy and Psychology, 6: 665–682.
  • Cowie, F., 1999. What’s Within? Nativism Reconsidered, Oxford: Oxford University Press.
  • Currie, G. and Sterelny, K., 2000. How to think about the modularity of mind-reading. Philosophical Quarterly, 50: 145–160.
  • Firestone, C. and Scholl, B. J., 2016. Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39.
  • Fodor, J. A., 1983. The Modularity of Mind, Cambridge, MA: MIT Press.
  • –––, 1984. Observation reconsidered. Philosophy of Science, 51: 23–43.
  • –––, 1988. A reply to Churchland’s “Perceptual plasticity and theoretical neutrality.” Philosophy of Science, 55: 188–198.
  • –––, 2000. The Mind Doesn’t Work That Way, Cambridge, MA: MIT Press.
  • Frith, U., 2003. Autism: Explaining the Enigma, 2nd edition, Malden, MA: Wiley-Blackwell.
  • Gauthier, I., Skudlarski, P., Gore, J. C., and Anderson, A. W., 2000. Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3: 191–197.
  • Hansen, T., Olkkonen, M., Walter, S., and Gegenfurtner, K. R., 2006. Memory modulates color appearance. Nature Neuroscience, 9: 1367–1368.
  • Hermer, L. and Spelke, E. S., 1996. Modularity and development: The case of spatial reorientation. Cognition, 61: 195–232.
  • Kurzban, R., Tooby, J., and Cosmides, L., 2001. Can race be erased? Coalitional computation and social categorization. Proceedings of the National Academy of Sciences, 98: 15387–15392.
  • Levin, D. and Banaji, M., 2006. Distortions in the perceived lightness of faces: The role of race categories. Journal of Experimental Psychology: General, 135: 501–512.
  • Machery, E., 2015. Cognitive penetrability: A no-progress report. In J. Zeimbekis and A. Raftopoulos (eds.), The Cognitive Penetrability of Perception: New Philosophical Perspectives, Oxford: Oxford University Press.
  • Machery, E. and Barrett, H. C., 2006. Debunking Adapting Minds. Philosophy of Science, 73: 232–246.
  • Marslen-Wilson, W. and Tyler, L. K., 1987. Against modularity. In J. L. Garfield (ed.), Modularity in Knowledge Representation and Natural-Language Understanding, Cambridge, MA: MIT Press.
  • McCauley, R. N. and Henrich, J., 2006. Susceptibility to the Müller-Lyer illusion, theory-neutral observation, and the diachronic penetrability of the visual input system. Philosophical Psychology, 19: 79–101.
  • McGurk, H. and Macdonald, J., 1976. Hearing lips and seeing voices. Nature, 264: 746–748.
  • Pinker, S., 1997. How the Mind Works, New York: W. W. Norton & Company.
  • Prinz, J. J., 2006. Is the mind really modular? In R. Stainton (ed.), Contemporary Debates in Cognitive Science, Oxford: Blackwell, pp. 22–36.
  • Pylyshyn, Z., 1984. Computation and Cognition, Cambridge, MA: MIT Press.
  • –––, 1999. Is vision continuous with cognition? The case for cognitive penetrability of vision. Behavioral and Brain Sciences, 22: 341–423.
  • Rabaglia, C. D., Marcus, G. F., and Lane, S. P., 2011. What can individual differences tell us about the specialization of function? Cognitive Neuropsychology, 28: 288–303.
  • Ramus, F., 2006. Genes, brain, and cognition: A roadmap for the cognitive scientist. Cognition, 101: 247–269.
  • Robbins, P., 2007. Minimalism and modularity. In G. Preyer and G. Peter (eds.), Context-Sensitivity and Semantic Minimalism, Oxford: Oxford University Press, pp. 303–319.
  • –––, 2013. Modularity and mental architecture. WIREs Cognitive Science, 4: 641–649.
  • Rosch, E., Mervis, C., Gray, W., Johnson, D., and Boyes-Braem, P., 1976. Basic objects in natural categories. Cognitive Psychology, 8: 382–439.
  • Samuels, R., 2000. Massively modular minds: Evolutionary psychology and cognitive architecture. In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind, Cambridge: Cambridge University Press, pp. 13–46.
  • –––, 2005. The complexity of cognition: Tractability arguments for massive modularity. In P. Carruthers, S. Laurence, and S. Stich (eds.), The Innate Mind: Structure and Contents, Oxford: Oxford University Press, pp. 107–121.
  • Scholl, B. J. and Leslie, A. M., 1999. Modularity, development and ‘theory of mind’. Mind & Language, 14: 131–153.
  • Segal, G., 1996. The modularity of theory of mind. In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind, Cambridge: Cambridge University Press, pp. 141–157.
  • Segall, M., Campbell, D., and Herskovits, M. J., 1966. The Influence of Culture on Visual Perception, New York: Bobbs-Merrill.
  • Shams, L., Kamitani, Y., and Shimojo, S., 2000. Illusions: What you see is what you hear. Nature, 408: 788.
  • Siegel, S., 2012. Cognitive penetrability and perceptual justification. Noûs, 46: 201–222.
  • Spelke, E., 1994. Initial knowledge: Six suggestions. Cognition, 50: 435–445.
  • Sperber, D., 1994. The modularity of thought and the epidemiology of representations. In L. A. Hirschfeld and S. A. Gelman (eds.), Mapping the Mind, Cambridge: Cambridge University Press, pp. 39–67.
  • –––, 2002. In defense of massive modularity. In I. Dupoux (ed.), Language, Brain, and Cognitive Development, Cambridge, MA: MIT Press, pp. 47–57.
  • Sperber, D. and Wilson, D., 2002. Pragmatics, modularity and mind-reading. Mind & Language, 17: 3–23.
  • Stokes, D., 2012. Perceiving and desiring: A new look at the cognitive penetrability of experience. Philosophical Studies, 158: 479–492.
  • –––, 2013. Cognitive penetrability of perception. Philosophy Compass, 8: 646–663.
  • Stokes, D. and Bergeron, V., 2015. Modular architectures and informational encapsulation: A dilemma. European Journal for the Philosophy of Science, 5: 315–338.
  • Stone, V. E., Cosmides, L., Tooby, J., Kroll, N., and Knight, R. T., 2002. Selective impairment of reasoning about social exchange in a patient with bilateral limbic system damage. Proceedings of the National Academy of Sciences, 99: 11531–11536.
  • Stromswold, K., 1999. Cognitive and neural aspects of language acquisition. In E. Lepore and Z. Pylyshyn (eds.), What Is Cognitive Science?, Oxford: Blackwell, pp. 356–400.
  • Sugiyama, L. S., Tooby, J., and Cosmides, L., 2002. Cross-cultural evidence of cognitive adaptations for social exchange among the Shiwiar of Ecuadorian Amazonia. Proceedings of the National Academy of Sciences, 99: 11537–11542.
  • Tettamanti, M. and Weniger, D., 2006. Broca’s area: A supramodal hierarchical processor? Cortex, 42: 491–494.
  • Warren, R. M., 1970. Perceptual restoration of missing speech sounds. Science, 167: 392–393.
  • Warren, R. M. and Warren, R. P., 1970. Auditory illusions and confusions. Scientific American, 223: 30–36.
  • Wilson, R. A., 2008. The drink you’re having when you’re not having a drink. Mind & Language, 23: 273–283.
  • Witt, J. K., Linkenauger, S. A., Bakdash, J. Z., and Proffitt, D. R., 2008. Putting to a bigger hole: Golf performance relates to perceived size. Psychonomic Bulletin and Review, 15: 581–585.
  • Witt, J. K., Proffitt, D. R., and Epstein, W., 2004. Perceiving distances: A role of effort and intent. Perception, 33: 577–590.
  • Woods, A. J., Philbeck, J. W., and Danoff, J. V., 2009. The various perceptions of distance: An alternative view of how effort affects distance judgments. Journal of Experimental Psychology: Human Perception and Performance, 35: 1104–1117.
  • Woodward, J. F. and Cowie, F., 2004. The mind is not (just) a system of modules shaped (just) by natural selection. In C. Hitchcock (ed.), Contemporary Debates in Philosophy of Science, Malden, MA: Blackwell, pp. 312–334.

Clinical problem solving and diagnostic decision making: selective review of the cognitive literature

  • Arthur S Elstein, professor (aelstein{at}uic.edu)
  • Alan Schwartz, assistant professor of clinical decision making
  • Department of Medical Education, University of Illinois College of Medicine, Chicago, IL 60612-7309, USA
  • Correspondence to: A S Elstein

This is the fourth in a series of five articles

This article reviews our current understanding of the cognitive processes involved in diagnostic reasoning in clinical medicine. It describes and analyses the psychological processes employed in identifying and solving diagnostic problems and reviews errors and pitfalls in diagnostic reasoning in the light of two particularly influential approaches: problem solving 1,2,3 and decision making. 4,5,6,7,8 Problem solving research was initially aimed at describing reasoning by expert physicians, to improve instruction of medical students and house officers. Psychological decision research has been influenced from the start by statistical models of reasoning under uncertainty, and has concentrated on identifying departures from these standards.

Summary points

  • Problem solving and decision making are two paradigms for psychological research on clinical reasoning, each with its own assumptions and methods
  • The choice of strategy for diagnostic problem solving depends on the perceived difficulty of the case and on knowledge of content as well as strategy
  • Final conclusions should depend both on prior belief and strength of the evidence
  • Conclusions reached by Bayes's theorem and clinical intuition may conflict
  • Because of cognitive limitations, systematic biases and errors result from employing simpler rather than more complex cognitive strategies
  • Evidence based medicine applies decision theory to clinical diagnosis

Problem solving

Diagnosis as selecting a hypothesis.

The earliest psychological formulation viewed diagnostic reasoning as a process of testing hypotheses. Solutions to difficult diagnostic problems were found by generating a limited number of hypotheses early in the diagnostic process and using them to guide subsequent collection of data. 1 Each hypothesis can be used to predict what additional findings ought to be present if it were true, and the diagnostic process is a guided search for these findings. Experienced physicians form hypotheses and their diagnostic plan rapidly, and the quality of their hypotheses is higher than that of novices. Novices struggle to develop a plan and some have difficulty moving beyond collection of data to considering possibilities.

It is possible to collect data thoroughly and nevertheless ignore, misunderstand, or misinterpret some findings; it is equally possible for a clinician to be too economical in collecting data and yet to interpret accurately what is available. Accuracy and thoroughness are analytically separable.

Pattern recognition or categorisation

Expertise in problem solving varies greatly between individual clinicians and is highly dependent on the clinician's mastery of the particular domain. 9 This finding challenges the hypothetico-deductive model of clinical reasoning, since both successful and unsuccessful diagnosticians use hypothesis testing. It appears that diagnostic accuracy does not depend as much on strategy as on mastery of content. Further, the clinical reasoning of experts in familiar situations frequently does not involve explicit testing of hypotheses. 3 10 , 11 , 12 Their speed, efficiency, and accuracy suggest that they may not even use the same reasoning processes as novices. 11 It is likely that experienced physicians use a hypothetico-deductive strategy only with difficult cases and that clinical reasoning is more a matter of pattern recognition or direct automatic retrieval. What are the patterns? What is retrieved? These questions signal a shift from the study of judgment to the study of the organisation and retrieval of memories.

Problem solving strategies

  • Hypothesis testing
  • Pattern recognition (categorisation):
      • By specific instances
      • By general prototypes

Viewing the process of diagnosis as assigning a case to a category brings some other issues into clearer view. How is a new case categorised? Two competing answers to this question have been put forward and research evidence supports both. Category assignment can be based on matching the case to a specific instance (“instance based” or “exemplar based” recognition) or to a more abstract prototype. In the former, a new case is categorised by its resemblance to memories of instances previously seen. 3 11 This model is supported by the fact that clinical diagnosis is strongly affected by context—for example, the location of a skin rash on the body—even when the context ought to be irrelevant. 12

The prototype model holds that clinical experience facilitates the construction of mental models, abstractions, or prototypes. 2 13 Several characteristics of experts support this view—for instance, they can better identify the additional findings needed to complete a clinical picture and relate the findings to an overall concept of the case. These features suggest that better diagnosticians have constructed more diversified and abstract sets of semantic relations, a network of links between clinical features and diagnostic categories. 14
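The difference between the two models can be made concrete with a small sketch (hypothetical feature vectors and categories, not drawn from this article): exemplar-based categorisation matches a new case to the most similar stored instance, whereas prototype-based categorisation matches it to a per-category average.

```python
import math

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance between feature vectors

# Remembered cases: (feature vector, diagnostic category). Illustrative only.
cases = [((1.0, 0.2), "A"), ((0.9, 0.3), "A"), ((0.2, 1.0), "B"), ((0.3, 0.8), "B")]

def exemplar_classify(x):
    # Assign the category of the single most similar remembered instance.
    return min(cases, key=lambda c: dist(x, c[0]))[1]

def prototype_classify(x):
    # Compare against one abstract prototype (the mean vector) per category.
    groups = {}
    for vec, cat in groups_source():
        groups.setdefault(cat, []).append(vec)
    means = {cat: tuple(sum(v) / len(vs) for v in zip(*vs)) for cat, vs in groups.items()}
    return min(means, key=lambda cat: dist(x, means[cat]))

def groups_source():
    return cases

print(exemplar_classify((0.8, 0.4)), prototype_classify((0.8, 0.4)))  # A A
```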

The controversy about the methods used in diagnostic reasoning can be resolved by recognising that clinicians approach problems flexibly; the method they select depends upon the perceived characteristics of the problem. Easy cases can be solved by pattern recognition: difficult cases need systematic generation and testing of hypotheses. Whether a diagnostic problem is easy or difficult is a function of the knowledge and experience of the clinician.

The strategies reviewed are neither proof against error nor always consistent with statistical rules of inference. Errors that can occur in difficult cases in internal medicine include failure to generate the correct hypothesis; misperception or misreading the evidence, especially visual cues; and misinterpretations of the evidence. 15 16 Many diagnostic problems are so complex that the correct solution is not contained in the initial set of hypotheses. Restructuring and reformulating should occur as data are obtained and the clinical picture evolves. However, a clinician may quickly become psychologically committed to a particular hypothesis, making it more difficult to restructure the problem.

Decision making

Diagnosis as opinion revision.

From the point of view of decision theory, reaching a diagnosis means updating opinion with imperfect information (the clinical evidence). 8 17 The standard rule for this task is Bayes's theorem. The pretest probability is either the known prevalence of the disease or the clinician's subjective impression of the probability of disease before new information is acquired. The post-test probability, the probability of disease given new information, is a function of two variables, pretest probability and the strength of the evidence, measured by a “likelihood ratio.”
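The odds form of Bayes's theorem makes this computation straightforward. The following minimal sketch (illustrative numbers, not taken from the article) converts the pretest probability to odds, multiplies by the likelihood ratio, and converts back:

```python
def posttest_probability(pretest_p, likelihood_ratio):
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# A finding with likelihood ratio 9 raises a 10% pretest probability to 50%:
print(posttest_probability(0.10, 9))  # 0.5
```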

Bayes's theorem tells us how we should reason, but it does not claim to describe how opinions are revised. In our experience, clinicians trained in methods of evidence based medicine are more likely than untrained clinicians to use a Bayesian approach to interpreting findings. 18 Nevertheless, probably only a minority of clinicians use it in daily practice and informal methods of opinion revision still predominate. Bayes's theorem directs attention to two major classes of errors in clinical reasoning: in the assessment of either pretest probability or the strength of the evidence. The psychological study of diagnostic reasoning from this viewpoint has focused on errors in both components, and on the simplifying rules or heuristics that replace more complex procedures. Consequently, this approach has become widely known as “heuristics and biases.” 4 19

Errors in estimation of probability

Availability —People are apt to overestimate the frequency of vivid or easily recalled events and to underestimate the frequency of events that are either very ordinary or difficult to recall. Diseases or injuries that receive considerable media attention are often thought of as occurring more commonly than they actually do. This psychological principle is exemplified clinically in the overemphasis of rare conditions, because unusual cases are more memorable than routine problems.

Representativeness —Representativeness refers to estimating the probability of disease by judging how similar a case is to a diagnostic category or prototype. It can lead to overestimation of probability either by causing confusion of post-test probability with test sensitivity or by leading to neglect of base rates and implicitly considering all hypotheses equally likely. This is an error, because if a case resembles disease A and disease B equally, and A is much more common than B, then the case is more likely to be an instance of A. Representativeness is associated with the “conjunction fallacy”—incorrectly concluding that the probability of a joint event (such as the combination of findings to form a typical clinical picture) is greater than the probability of any one of these events alone.
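A small illustrative computation (hypothetical numbers) shows why base rates matter even when resemblance is equal:

```python
# If a case resembles disease A and disease B equally (equal likelihoods),
# the posterior still favours the more common disease by the prevalence ratio.
p_A, p_B = 0.01, 0.0001   # disease A is 100 times more prevalent than B
likelihood = 0.9          # P(this presentation | disease), same for both

posterior_ratio = (likelihood * p_A) / (likelihood * p_B)
print(posterior_ratio)    # 100.0 -- the case is 100 times more likely to be A
```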

Heuristics and biases

  • Availability
  • Representativeness
  • Probability transformations
  • Effect of description detail
  • Conservatism
  • Anchoring and adjustment
  • Order effects

Standard decision theory assumes that probabilities enter into decision making as they are, untransformed from the ordinary probability scale. Prospect theory was formulated as a descriptive account of choices involving gambling on two outcomes, 20 and cumulative prospect theory extends the theory to cases with multiple outcomes. 21 Both prospect theory and cumulative prospect theory propose that, in decision making, small probabilities are overweighted and large probabilities underweighted, contrary to the assumption of standard decision theory. This “compression” of the probability scale explains why the difference between 99% and 100% is psychologically much greater than the difference between, say, 60% and 61%. 22
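One way to see the compression is to evaluate a probability weighting function. The sketch below uses a common one-parameter form from the cumulative prospect theory literature (Tversky & Kahneman, 1992); the functional form and the gamma value are illustrative assumptions, not taken from this article:

```python
def weight(p, gamma=0.61):
    # Inverse-S-shaped weighting: overweights small p, underweights large p.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(weight(0.01))                 # ~0.055: small probabilities overweighted
print(weight(0.99))                 # ~0.91: large probabilities underweighted
print(1 - weight(0.99))             # ~0.088: the 99% -> 100% jump feels large
print(weight(0.61) - weight(0.60))  # ~0.006: the 60% -> 61% jump feels tiny
```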

Support theory

Support theory proposes that the subjective probability of an event is inappropriately influenced by how detailed the description is. More explicit descriptions yield higher probability estimates than compact, condensed descriptions, even when the two refer to exactly the same events. Clinically, support theory predicts that a longer, more detailed case description will be assigned a higher subjective probability of the index disease than a brief abstract of the same case, even if they contain the same information about that disease. Thus, subjective assessments of events, while often necessary in clinical practice, can be affected by factors unrelated to true prevalence. 23

Errors in revision of probability

In clinical case discussions, data are presented sequentially, and diagnostic probabilities are not revised as much as is implied by Bayes's theorem 8 ; this phenomenon is called conservatism. One explanation is that diagnostic opinions are revised up or down from an initial anchor, which is either given in the problem or subjectively formed. Final opinions are sensitive to the starting point (the “anchor”), and the shift (“adjustment”) from it is typically insufficient. 4 Both biases will lead to collecting more information than is necessary to reach a desired level of diagnostic certainty.

It is difficult for everyday judgment to keep separate accounts of the probability of a disease and the benefits that accrue from detecting it. Probability revision errors that are systematically linked to the perceived cost of mistakes show the difficulties experienced in separating assessments of probability from values, as required by standard decision theory. There is a tendency to overestimate the probability of more serious but treatable diseases, because a clinician would hate to miss one. 24

Bayes's theorem implies that clinicians given identical information should reach the same diagnostic opinion, regardless of the order in which information is presented. However, final opinions are also affected by the order of presentation of information. Information presented later in a case is given more weight than information presented earlier. 25
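By contrast, the normative rule is order-invariant, since the posterior depends only on the product of the likelihood ratios. A minimal demonstration with illustrative numbers:

```python
def update(p, lr):
    # One Bayesian revision step in odds form.
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

findings = [4.0, 0.5, 2.5]  # likelihood ratios for three findings

p = 0.2
for lr in findings:
    p = update(p, lr)
print(p)

p2 = 0.2
for lr in reversed(findings):
    p2 = update(p2, lr)
print(p2)  # same posterior: order of presentation should not matter
```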

Other errors identified in data interpretation include simplifying a diagnostic problem by interpreting findings as consistent with a single hypothesis, forgetting facts inconsistent with a favoured hypothesis, overemphasising positive findings, and discounting negative findings. From a Bayesian standpoint, these are all errors in assessing the diagnostic value of clinical evidence—that is, errors in implicit likelihood ratios.

Educational implications

Two recent innovations in medical education, problem based learning and evidence based medicine, are consistent with the educational implications of this research. Problem based learning can be understood as an effort to introduce the formulation and testing of clinical hypotheses into the preclinical curriculum. 26 The theory of cognition and instruction underlying this reform is that since experienced physicians use this strategy with difficult problems, and since practically any clinical situation selected for instructional purposes will be difficult for students, it makes sense to provide opportunities for students to practise problem solving with cases graded in difficulty. The finding of case specificity showed the limits of teaching a general problem solving strategy. Expertise in problem solving can be separated from content analytically, but not in practice. This realisation shifted the emphasis towards helping students acquire a functional organisation of content with clinically usable schemas. This goal became the new rationale for problem based learning. 27

Evidence based medicine is the most recent, and by most standards the most successful, effort to date to apply statistical decision theory in clinical medicine. 18 It teaches Bayes's theorem, and residents and medical students quickly learn how to interpret diagnostic studies and how to use a computer based nomogram to compute post-test probabilities and to understand the output. 28

We have selectively reviewed 30 years of psychological research on clinical diagnostic reasoning. The problem solving approach has focused on diagnosis as hypothesis testing, pattern matching, or categorisation. The errors in reasoning identified from this perspective include failure to generate the correct hypothesis; misperceiving or misreading the evidence, especially visual cues; and misinterpreting the evidence. The decision making approach views diagnosis as opinion revision with imperfect information. Heuristics and biases in estimation and revision of probability have been the subject of intense scrutiny within this research tradition. Both research paradigms understand judgment errors as a natural consequence of limitations in our cognitive capacities and of the human tendency to adopt short cuts in reasoning.

Both approaches have focused more on the mistakes made by both experts and novices than on what they get right, possibly leading to overestimation of the frequency of the mistakes catalogued in this article. The reason for this focus seems clear enough: from the standpoint of basic research, errors tell us a great deal about fundamental cognitive processes, just as optical illusions teach us about the functioning of the visual system. From the educational standpoint, clinical instruction and training should focus more on what needs improvement than on what learners do correctly; to improve performance requires identifying errors. But, in conclusion, we emphasise, firstly, that the prevalence of these errors has not been established; secondly, we believe that expert clinical reasoning is very likely to be right in the majority of cases; and, thirdly, despite the expansion of statistically grounded decision supports, expert judgment will still be needed to apply general principles to specific cases.

Series editor J A Knottnerus

Preparation of this review was supported in part by grant RO1 LM5630 from the National Library of Medicine.

Competing interests None declared.





Problem-Solving Strategies


A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (Table 7.2). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
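As a concrete illustration (not drawn from the text), binary search is an algorithm in exactly this sense: a fixed, step-by-step procedure that yields the same correct answer every time it is run on the same input.

```python
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                 # found: return the index
        if sorted_items[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # not present

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```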

A heuristic is another type of problem-solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989; a short sketch contrasting a heuristic with an algorithm follows the list):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment
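To make the contrast with an algorithm concrete, here is a minimal sketch (an invented illustration, not from the text) of a classic rule of thumb: when making change, always grab the largest coin that fits. It is fast and usually good enough, but unlike an algorithm it carries no guarantee, as the second, made-up coin system shows.

```python
# A heuristic for making change: repeatedly take the largest coin that fits.
# Fast and usually right, but it comes with no guarantee of the best answer.
def greedy_change(amount, coins):
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(30, [25, 10, 5, 1]))  # [25, 5] -- happens to be optimal
print(greedy_change(30, [25, 10, 1]))     # [25, 1, 1, 1, 1, 1] -- six coins,
                                          # even though [10, 10, 10] needs three
```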

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
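Worked out, the arithmetic runs backwards from the goal: arriving by 3:30 PM minus 2.5 hours of driving puts the latest departure at 1:00 PM, and any cushion for traffic moves it earlier. A minimal sketch (the 30-minute buffer and the calendar date are invented for illustration):

```python
# Working backwards from the goal state (the 3:30 PM arrival) to the start.
from datetime import datetime, timedelta

arrival = datetime(2024, 6, 1, 15, 30)   # goal: seated by 3:30 PM (date arbitrary)
drive = timedelta(hours=2, minutes=30)   # travel time without traffic
buffer = timedelta(minutes=30)           # assumed cushion for I-95 backups

departure = arrival - drive - buffer
print(departure.strftime("%I:%M %p"))    # 12:30 PM
```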

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Everyday Connection

Solving puzzles.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( Figure 7.7 ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
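For readers who want the rules stated precisely, here is a minimal checker for the 4×4 variant described above (the solved grid is a made-up example, not the puzzle in Figure 7.7). Requiring each of the digits 1 through 4 exactly once in every row, column, and box automatically makes each group total 10.

```python
# Verify a solved 4x4 sudoku: every row, column, and 2x2 box must contain
# each of the digits 1-4 exactly once (and therefore sum to 10).
def is_valid(grid):
    groups = [row for row in grid]                                # rows
    groups += [[grid[r][c] for r in range(4)] for c in range(4)]  # columns
    for br in (0, 2):                                             # 2x2 boxes
        for bc in (0, 2):
            groups.append([grid[br + dr][bc + dc]
                           for dr in (0, 1) for dc in (0, 1)])
    return all(sorted(g) == [1, 2, 3, 4] for g in groups)

solved = [[1, 2, 3, 4],
          [3, 4, 1, 2],
          [2, 1, 4, 3],
          [4, 3, 2, 1]]
print(is_valid(solved))  # True
```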

Here is another popular type of puzzle ( Figure 7.8 ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

Take a look at the “Puzzling Scales” logic puzzle below ( Figure 7.9 ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but they just need to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past, even when it is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. Duncker (1945) conducted foundational research on functional fixedness. He created an experiment in which participants were given a candle, a book of matches, and a box of thumbtacks. They were instructed to use those items to attach the candle to the wall so that it did not drip wax onto the table below. Participants had to overcome functional fixedness to solve the problem ( Figure 7.10 ). During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Link to Learning

Check out this Apollo 13 scene about NASA engineers overcoming functional fixedness to learn more.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be exploiting your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3.

Watch this teacher-made music video about cognitive biases to learn more.

Were you able to determine how many marbles are needed to balance the scales in Figure 7.9 ? You need nine. Were you able to solve the problems in Figure 7.7 and Figure 7.8 ? Here are the answers ( Figure 7.11 ).


Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • Authors: Rose M. Spielman, William J. Jenkins, Marilyn D. Lovett
  • Publisher/website: OpenStax
  • Book title: Psychology 2e
  • Publication date: Apr 22, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/psychology-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/psychology-2e/pages/7-3-problem-solving


Module 7: Thinking, Reasoning, and Problem-Solving

This module is about how a solid working knowledge of psychological principles can help you to think more effectively, so you can succeed in school and life. You might be inclined to believe that—because you have been thinking for as long as you can remember, because you are able to figure out the solution to many problems, because you feel capable of using logic to argue a point, because you can evaluate whether the things you read and hear make sense—you do not need any special training in thinking. But this, of course, is one of the key barriers to helping people think better. If you do not believe that there is anything wrong, why try to fix it?

The human brain is indeed a remarkable thinking machine, capable of amazing, complex, creative, logical thoughts. Why, then, are we telling you that you need to learn how to think? Mainly because one major lesson from cognitive psychology is that these capabilities of the human brain are relatively infrequently realized. Many psychologists believe that people are essentially “cognitive misers.” It is not that we are lazy, but that we have a tendency to expend the least amount of mental effort necessary. Although you may not realize it, it actually takes a great deal of energy to think. Careful, deliberative reasoning and critical thinking are very difficult. Because we seem to be successful without going to the trouble of using these skills well, it feels unnecessary to develop them. As you shall see, however, there are many pitfalls in the cognitive processes described in this module. When people do not devote extra effort to learning and improving reasoning, problem solving, and critical thinking skills, they make many errors.

As is true for memory, if you develop the cognitive skills presented in this module, you will be more successful in school. It is important that you realize, however, that these skills will help you far beyond school, even more so than a good memory will. Although it is somewhat useful to have a good memory, ten years from now no potential employer will care how many questions you got right on multiple choice exams during college. All of them will, however, recognize whether you are a logical, analytical, critical thinker. With these thinking skills, you will be an effective, persuasive communicator and an excellent problem solver.

The module begins by describing different kinds of thought and knowledge, especially conceptual knowledge and critical thinking. An understanding of these differences will be valuable as you progress through school and encounter different assignments that require you to tap into different kinds of knowledge. The second section covers deductive and inductive reasoning, which are processes we use to construct and evaluate strong arguments. They are essential skills to have whenever you are trying to persuade someone (including yourself) of some point, or to respond to someone’s efforts to persuade you. The module ends with a section about problem solving. A solid understanding of the key processes involved in problem solving will help you to handle many daily challenges.

7.1. Different kinds of thought

7.2. Reasoning and Judgment

7.3. Problem Solving

READING WITH PURPOSE

Remember and understand.

By reading and studying Module 7, you should be able to remember and describe:

  • Concepts and inferences (7.1)
  • Procedural knowledge (7.1)
  • Metacognition (7.1)
  • Characteristics of critical thinking:  skepticism; identify biases, distortions, omissions, and assumptions; reasoning and problem solving skills  (7.1)
  • Reasoning:  deductive reasoning, deductively valid argument, inductive reasoning, inductively strong argument, availability heuristic, representativeness heuristic  (7.2)
  • Fixation:  functional fixedness, mental set  (7.3)
  • Algorithms, heuristics, and the role of confirmation bias (7.3)
  • Effective problem solving sequence (7.3)

By reading and thinking about how the concepts in Module 7 apply to real life, you should be able to:

  • Identify which type of knowledge a piece of information is (7.1)
  • Recognize examples of deductive and inductive reasoning (7.2)
  • Recognize judgments that have probably been influenced by the availability heuristic (7.2)
  • Recognize examples of problem solving heuristics and algorithms (7.3)

Analyze, Evaluate, and Create

By reading and thinking about Module 7, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  • Use the principles of critical thinking to evaluate information (7.1)
  • Explain whether examples of reasoning arguments are deductively valid or inductively strong (7.2)
  • Outline how you could try to solve a problem from your life using the effective problem solving sequence (7.3)

7.1. Different kinds of thought and knowledge

  • Take a few minutes to write down everything that you know about dogs.
  • Do you believe that:
  • Psychic ability exists?
  • Hypnosis is an altered state of consciousness?
  • Magnet therapy is effective for relieving pain?
  • Aerobic exercise is an effective treatment for depression?
  • UFOs from outer space have visited earth?

On what do you base your belief or disbelief for the questions above?

Of course, we all know what is meant by the words think and knowledge. You probably also realize that they are not unitary concepts; there are different kinds of thought and knowledge. In this section, let us look at some of these differences. If you are familiar with these different kinds of thought and pay attention to them in your classes, it will help you to focus on the right goals, learn more effectively, and succeed in school. Different assignments and requirements in school call on you to use different kinds of knowledge or thought, so it will be very helpful for you to learn to recognize them (Anderson et al., 2001).

Factual and conceptual knowledge

Module 5 introduced the idea of declarative memory, which is composed of facts and episodes. If you have ever played a trivia game or watched Jeopardy on TV, you realize that the human brain is able to hold an extraordinary number of facts. Likewise, you realize that each of us has an enormous store of episodes, essentially facts about events that happened in our own lives. It may be difficult to keep that in mind when we are struggling to retrieve one of those facts while taking an exam, however. Part of the problem is that, in contradiction to the advice from Module 5, many students continue to try to memorize course material as a series of unrelated facts (picture a history student simply trying to memorize history as a set of unrelated dates without any coherent story tying them together). Facts in the real world are not random and unorganized, however. It is the way that they are organized that constitutes a second key kind of knowledge, conceptual.

Concepts are nothing more than our mental representations of categories of things in the world. For example, think about dogs. When you do this, you might remember specific facts about dogs, such as they have fur and they bark. You may also recall dogs that you have encountered and picture them in your mind. All of this information (and more) makes up your concept of dog. You can have concepts of simple categories (e.g., triangle), complex categories (e.g., small dogs that sleep all day, eat out of the garbage, and bark at leaves), kinds of people (e.g., psychology professors), events (e.g., birthday parties), and abstract ideas (e.g., justice). Gregory Murphy (2002) refers to concepts as the “glue that holds our mental life together” (p. 1). Very simply, summarizing the world by using concepts is one of the most important cognitive tasks that we do. Our conceptual knowledge  is  our knowledge about the world. Individual concepts are related to each other to form a rich interconnected network of knowledge. For example, think about how the following concepts might be related to each other: dog, pet, play, Frisbee, chew toy, shoe. Or, of more obvious use to you now, how these concepts are related: working memory, long-term memory, declarative memory, procedural memory, and rehearsal? Because our minds have a natural tendency to organize information conceptually, when students try to remember course material as isolated facts, they are working against their strengths.
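The “interconnected network” metaphor can be made literal: cognitive psychologists often model conceptual knowledge as a graph of linked nodes. A toy sketch (the concepts and links are invented for illustration):

```python
# A toy semantic network: concepts as nodes, associations as links.
concept_network = {
    "dog":      ["pet", "play", "chew toy"],
    "pet":      ["dog"],
    "play":     ["dog", "Frisbee"],
    "Frisbee":  ["play"],
    "chew toy": ["dog", "shoe"],
    "shoe":     ["chew toy"],
}

def related(start, depth=2):
    """Collect every concept reachable from start within depth links."""
    found, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {n for f in frontier for n in concept_network[f]} - found
        found |= frontier
    return found - {start}

# Two links out from "dog" reaches the whole toy network
print(related("dog"))  # e.g. {'pet', 'play', 'chew toy', 'Frisbee', 'shoe'}
```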

One last important point about concepts is that they allow you to instantly know a great deal of information about something. For example, if someone hands you a small red object and says, “here is an apple,” they do not have to tell you, “it is something you can eat.” You already know that you can eat it because it is true by virtue of the fact that the object is an apple; this is called drawing an  inference , assuming that something is true on the basis of your previous knowledge (for example, of category membership or of how the world works) or logical reasoning.

Procedural knowledge

Physical skills, such as tying your shoes, doing a cartwheel, and driving a car (or doing all three at the same time, but don’t try this at home) are certainly a kind of knowledge. They are procedural knowledge, the same idea as procedural memory that you saw in Module 5. Mental skills, such as reading, debating, and planning a psychology experiment, are procedural knowledge, as well. In short, procedural knowledge is the knowledge how to do something (Cohen & Eichenbaum, 1993).

Metacognitive knowledge

Floyd used to think that he had a great memory. Now, he has a better memory. Why? Because he finally realized that his memory was not as great as he once thought it was. Because Floyd eventually learned that he often forgets where he put things, he finally developed the habit of putting things in the same place. (Unfortunately, he did not learn this lesson before losing at least 5 watches and a wedding ring.) Because he finally realized that he often forgets to do things, he finally started using the To Do list app on his phone. And so on. Floyd’s insights about the real limitations of his memory have allowed him to remember things that he used to forget.

All of us have knowledge about the way our own minds work. You may know that you have a good memory for people’s names and a poor memory for math formulas. Someone else might realize that they have difficulty remembering to do things, like stopping at the store on the way home. Others still know that they tend to overlook details. This knowledge about our own thinking is actually quite important; it is called metacognitive knowledge, or metacognition. Like other kinds of thinking skills, it is subject to error. For example, in unpublished research, one of the authors surveyed about 120 General Psychology students on the first day of the term. Among other questions, the students were asked to predict their grade in the class and report their current Grade Point Average. Two-thirds of the students predicted that their grade in the course would be higher than their GPA. (The reality is that at our college, students tend to earn lower grades in psychology than their overall GPA.) Another example: Students routinely report that they thought they had done well on an exam, only to discover, to their dismay, that they were wrong (more on that important problem in a moment). Both errors reveal a breakdown in metacognition.

The Dunning-Kruger Effect

In general, most college students probably do not study enough. For example, using data from the National Survey of Student Engagement, Fosnacht, McCormack, and Lerma (2018) reported that first-year students at 4-year colleges in the U.S. averaged less than 14 hours per week preparing for classes. The typical suggestion is that you should spend two hours outside of class for every hour in class, or 24 – 30 hours per week for a full-time student. Clearly, students in general are nowhere near that recommended mark. Many observers, including some faculty, believe that this shortfall is a result of students being too busy or lazy. Now, it may be true that many students are too busy, with work and family obligations, for example. Others are not particularly motivated in school, and therefore might correctly be labeled lazy. A third possible explanation, however, is that some students might not think they need to spend this much time. And this is a matter of metacognition. Consider the scenario that we mentioned above, students thinking they had done well on an exam only to discover that they did not. Justin Kruger and David Dunning examined scenarios very much like this in 1999. Kruger and Dunning gave research participants tests measuring humor, logic, and grammar. Then, they asked the participants to assess their own abilities and test performance in these areas. They found that participants in general tended to overestimate their abilities, already a problem with metacognition. Importantly, the participants who scored the lowest overestimated their abilities the most. Specifically, students who scored in the bottom quarter (averaging in the 12th percentile) thought they had scored in the 62nd percentile. This has become known as the Dunning-Kruger effect. Many individual faculty members have replicated these results with their own students on their course exams, including the authors of this book. Think about it. Some students who just took an exam and performed poorly believe that they did well before seeing their score. It seems very likely that these are the very same students who stopped studying the night before because they thought they were “done.” Quite simply, it is not just that they did not know the material. They did not know that they did not know the material. That is poor metacognition.

In order to develop good metacognitive skills, you should continually monitor your thinking and seek frequent feedback on the accuracy of your thinking (Medina, Castleberry, & Persky, 2017). For example, in your classes, get in the habit of predicting your exam grades. As soon as possible after taking an exam, try to find out which questions you missed and try to figure out why. If you do this soon enough, you may be able to recall the way it felt when you originally answered the question. Did you feel confident that you had answered the question correctly, even though it turned out you were wrong? Then you have just discovered an opportunity to improve your metacognition. Be on the lookout for that feeling and respond with caution.

concept: a mental representation of a category of things in the world

Dunning-Kruger effect: individuals who are less competent tend to overestimate their abilities more than individuals who are more competent do

inference: an assumption about the truth of something that is not stated. Inferences come from our prior knowledge and experience, and from logical reasoning

metacognition: knowledge about one’s own cognitive processes; thinking about your thinking

Critical thinking

One particular kind of knowledge or thinking skill that is related to metacognition is  critical thinking (Chew, 2020). You may have noticed that critical thinking is an objective in many college courses, and thus it could be a legitimate topic to cover in nearly any college course. It is particularly appropriate in psychology, however. As the science of (behavior and) mental processes, psychology is obviously well suited to be the discipline through which you should be introduced to this important way of thinking.

More importantly, there is a particular need to use critical thinking in psychology. We are all, in a way, experts in human behavior and mental processes, having engaged in them literally since birth. Thus, perhaps more than in any other class, students typically approach psychology with very clear ideas and opinions about its subject matter. That is, students already “know” a lot about psychology. The problem is, “it ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so” (Ward, quoted in Gilovich 1991). Indeed, many of students’ preconceptions about psychology are just plain wrong. Randolph Smith (2002) wrote a book about critical thinking in psychology called  Challenging Your Preconceptions,  highlighting this fact. On the other hand, many of students’ preconceptions about psychology are just plain right! But wait, how do you know which of your preconceptions are right and which are wrong? And when you come across a research finding or theory in this class that contradicts your preconceptions, what will you do? Will you stick to your original idea, discounting the information from the class? Will you immediately change your mind? Critical thinking can help us sort through this confusing mess.

But what is critical thinking? The goal of critical thinking is simple to state (but extraordinarily difficult to achieve): it is to be right, to draw the correct conclusions, to believe in things that are true and to disbelieve things that are false. We will provide two definitions of critical thinking (or, if you like, one large definition with two distinct parts). First, a more conceptual one: Critical thinking is thinking like a scientist in your everyday life (Schmaltz, Jansen, & Wenckowski, 2017).  Our second definition is more operational; it is simply a list of skills that are essential to be a critical thinker. Critical thinking entails solid reasoning and problem solving skills; skepticism; and an ability to identify biases, distortions, omissions, and assumptions. Excellent deductive and inductive reasoning, and problem solving skills contribute to critical thinking. So, you can consider the subject matter of sections 7.2 and 7.3 to be part of critical thinking. Because we will be devoting considerable time to these concepts in the rest of the module, let us begin with a discussion about the other aspects of critical thinking.

Let’s address that first part of the definition. Scientists form hypotheses, or predictions about some possible future observations. Then, they collect data, or information (think of this as making those future observations). They do their best to make unbiased observations using reliable techniques that have been verified by others. Then, and only then, they draw a conclusion about what those observations mean. Oh, and do not forget the most important part. “Conclusion” is probably not the most appropriate word because this conclusion is only tentative. A scientist is always prepared for the possibility that someone else might come along and produce new observations that would require a new conclusion to be drawn. Wow! If you like to be right, you could do a lot worse than using a process like this.

A Critical Thinker’s Toolkit 

Now for the second part of the definition. Good critical thinkers (and scientists) rely on a variety of tools to evaluate information. Perhaps the most recognizable tool for critical thinking is  skepticism (and this term provides the clearest link to the thinking like a scientist definition, as you are about to see). Some people intend it as an insult when they call someone a skeptic. But if someone calls you a skeptic, if they are using the term correctly, you should consider it a great compliment. Simply put, skepticism is a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided. People from Missouri should recognize this principle, as Missouri is known as the Show-Me State. As a skeptic, you are not inclined to believe something just because someone said so, because someone else believes it, or because it sounds reasonable. You must be persuaded by high quality evidence.

Of course, if that evidence is produced, you have a responsibility as a skeptic to change your belief. Failure to change a belief in the face of good evidence is not skepticism; skepticism has open mindedness at its core. M. Neil Browne and Stuart Keeley (2018) use the term weak sense critical thinking to describe critical thinking behaviors that are used only to strengthen a prior belief. Strong sense critical thinking, on the other hand, has as its goal reaching the best conclusion. Sometimes that means strengthening your prior belief, but sometimes it means changing your belief to accommodate the better evidence.

Many times, a failure to think critically or weak sense critical thinking is related to a  bias , an inclination, tendency, leaning, or prejudice. Everybody has biases, but many people are unaware of them. Awareness of your own biases gives you the opportunity to control or counteract them. Unfortunately, however, many people are happy to let their biases creep into their attempts to persuade others; indeed, it is a key part of their persuasive strategy. To see how these biases influence messages, just look at the different descriptions and explanations of the same events given by people of different ages or income brackets, or conservative versus liberal commentators, or by commentators from different parts of the world. Of course, to be successful, these people who are consciously using their biases must disguise them. Even undisguised biases can be difficult to identify, so disguised ones can be nearly impossible.

Here are some common sources of biases:

  • Personal values and beliefs.  Some people believe that human beings are basically driven to seek power and that they are typically in competition with one another over scarce resources. These beliefs are similar to the world-view that political scientists call “realism.” Other people believe that human beings prefer to cooperate and that, given the chance, they will do so. These beliefs are similar to the world-view known as “idealism.” For many people, these deeply held beliefs can influence, or bias, their interpretations of such wide ranging situations as the behavior of nations and their leaders or the behavior of the driver in the car ahead of you. For example, if your worldview is that people are typically in competition and someone cuts you off on the highway, you may assume that the driver did it purposely to get ahead of you. Other types of beliefs about the way the world is or the way the world should be, for example, political beliefs, can similarly become a significant source of bias.
  • Racism, sexism, ageism and other forms of prejudice and bigotry.  These are, sadly, a common source of bias in many people. They are essentially a special kind of “belief about the way the world is.” These beliefs—for example, that women do not make effective leaders—lead people to ignore contradictory evidence (examples of effective women leaders, or research that disputes the belief) and to interpret ambiguous evidence in a way consistent with the belief.
  • Self-interest. When particular people benefit from things turning out a certain way, they can sometimes be very susceptible to letting that interest bias them. For example, a company that will earn a profit if it sells its product may have a bias in the way that it gives information about the product. A union that will benefit if its members get a generous contract might have a bias in the way it presents information about salaries at competing organizations. (Note that our inclusion of examples describing both companies and unions is an explicit attempt to control for our own personal biases). Home buyers are often dismayed to discover that they purchased their dream house from someone whose self-interest led them to lie about flooding problems in the basement or back yard. This principle, the biasing power of self-interest, is likely what led to the famous phrase Caveat Emptor (let the buyer beware).

Knowing that these types of biases exist will help you evaluate evidence more critically. Do not forget, though, that people are not always keen to let you discover the sources of biases in their arguments. For example, companies or political organizations can sometimes disguise their support of a research study by contracting with a university professor, who comes complete with a seemingly unbiased institutional affiliation, to conduct the study.

People’s biases, conscious or unconscious, can lead them to make omissions, distortions, and assumptions that undermine our ability to correctly evaluate evidence. It is essential that you look for these elements. Always ask, what is missing, what is not as it appears, and what is being assumed here? For example, consider this (fictional) chart from an ad reporting customer satisfaction at 4 local health clubs.

[Chart: reported customer satisfaction at four local health clubs; no vertical scale shown]

Clearly, from the results of the chart, one would be tempted to give Club C a try, as customer satisfaction is much higher than for the other 3 clubs.

There are so many distortions and omissions in this chart, however, that it is actually quite meaningless. First, how was satisfaction measured? Do the bars represent responses to a survey? If so, how were the questions asked? Most importantly, where is the missing scale for the chart? Although the differences look quite large, are they really?

Well, here is the same chart, with a different scale, this time labeled:

[Chart: the same customer satisfaction data, replotted with a labeled scale]

Club C is not so impressive any more, is it? In fact, all of the health clubs have customer satisfaction ratings (whatever that means) between 85% and 88%. In the first chart, the entire scale of the graph included only the percentages between 83 and 89. This “judicious” choice of scale—some would call it a distortion—and omission of that scale from the chart make the tiny differences among the clubs seem important, however.
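The distortion is easy to reproduce. In the sketch below (satisfaction values invented to match the 85–88% band described above), the same data are plotted twice, once on an honest 0–100 scale and once on the truncated 83–89 window; only the second makes Club C look special.

```python
# Same data, two scales: a truncated axis manufactures a dramatic difference.
import matplotlib.pyplot as plt

clubs = ["A", "B", "C", "D"]
satisfaction = [86, 85, 88, 87]   # invented values within a 3-point band

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(clubs, satisfaction)
ax1.set_ylim(0, 100)              # full scale: the bars look nearly identical
ax1.set_title("Full scale (0-100)")
ax2.bar(clubs, satisfaction)
ax2.set_ylim(83, 89)              # truncated scale: Club C appears to tower
ax2.set_title("Truncated scale (83-89)")
plt.tight_layout()
plt.show()
```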

Also, in order to be a critical thinker, you need to learn to pay attention to the assumptions that underlie a message. Let us briefly illustrate the role of assumptions by touching on some people’s beliefs about the criminal justice system in the US. Some believe that a major problem with our judicial system is that many criminals go free because of legal technicalities. Others believe that a major problem is that many innocent people are convicted of crimes. The simple fact is, both types of errors occur. A person’s conclusion about which flaw in our judicial system is the greater tragedy is based on an assumption about which of these is the more serious error (letting the guilty go free or convicting the innocent). This type of assumption is called a value assumption (Browne and Keeley, 2018). It reflects the differences in values that people develop, differences that may lead us to disregard valid evidence that does not fit in with our particular values.

Oh, by the way, some students probably noticed this, but the seven tips for evaluating information that we shared in Module 1 are related to this. Actually, they are part of this section. The tips are, to a very large degree, a set of ideas you can use to help you identify biases, distortions, omissions, and assumptions. If you do not remember this section, we strongly recommend you take a few minutes to review it.

skepticism: a way of thinking in which you refrain from drawing a conclusion or changing your mind until good evidence has been provided

bias: an inclination, tendency, leaning, or prejudice

  • Which of your beliefs (or disbeliefs) from the Activate exercise for this section were derived from a process of critical thinking? If some of your beliefs were not based on critical thinking, are you willing to reassess these beliefs? If the answer is no, why do you think that is? If the answer is yes, what concrete steps will you take?

7.2 Reasoning and Judgment

  • What percentage of kidnappings are committed by strangers?
  • Which area of the house is riskiest: kitchen, bathroom, or stairs?
  • What is the most common cancer in the US?
  • What percentage of workplace homicides are committed by co-workers?

An essential set of procedural thinking skills is  reasoning , the ability to generate and evaluate solid conclusions from a set of statements or evidence. You should note that these conclusions (when they are generated instead of being evaluated) are one key type of inference that we described in Section 7.1. There are two main types of reasoning, deductive and inductive.

Deductive reasoning

Suppose your teacher tells you that if you get an A on the final exam in a course, you will get an A for the whole course. Then, you get an A on the final exam. What will your final course grade be? Most people can see instantly that you can conclude with certainty that you will get an A for the course. This is a type of reasoning called  deductive reasoning , which is defined as reasoning in which a conclusion is guaranteed to be true as long as the statements leading to it are true. The three statements can be listed as an  argument , with two beginning statements and a conclusion:

Statement 1: If you get an A on the final exam, you will get an A for the course

Statement 2: You get an A on the final exam

Conclusion: You will get an A for the course

This particular arrangement, in which true beginning statements lead to a guaranteed true conclusion, is known as a  deductively valid argument . Although deductive reasoning is often the subject of abstract, brain-teasing, puzzle-like word problems, it is actually an extremely important type of everyday reasoning. It is just hard to recognize sometimes. For example, imagine that you are looking for your car keys and you realize that they are either in the kitchen drawer or in your book bag. After looking in the kitchen drawer, you instantly know that they must be in your book bag. That conclusion results from a simple deductive reasoning argument. In addition, solid deductive reasoning skills are necessary for you to succeed in the sciences, philosophy, math, computer programming, and any endeavor involving the use of logic to persuade others to your point of view or to evaluate others’ arguments.

Cognitive psychologists, and before them philosophers, have been quite interested in deductive reasoning, not so much for its practical applications, but for the insights it can offer them about the ways that human beings think. One of the early ideas to emerge from the examination of deductive reasoning is that people learn (or develop) mental versions of rules that allow them to solve these types of reasoning problems (Braine, 1978; Braine, Reiser, & Rumain, 1984). The best way to see this point of view is to realize that there are different possible rules, and some of them are very simple. For example, consider this rule of logic:

p or q

not p

therefore q

Logical rules are often presented abstractly, as letters, in order to imply that they can be used in very many specific situations. Here is a concrete version of the same rule:

I’ll either have pizza or a hamburger for dinner tonight (p or q)

I won’t have pizza (not p)

Therefore, I’ll have a hamburger (therefore q)

This kind of reasoning seems so natural, so easy, that it is quite plausible that we would use a version of this rule in our daily lives. At least, it seems more plausible than some of the alternative possibilities—for example, that we need to have experience with the specific situation (pizza or hamburger, in this case) in order to solve this type of problem easily. So perhaps there is a form of natural logic (Rips, 1990) that contains very simple versions of logical rules. When we are faced with a reasoning problem that maps onto one of these rules, we use the rule.

But be very careful; things are not always as easy as they seem. Even these simple rules are not so simple. For example, consider the following rule. Many people fail to realize that this rule is just as valid as the pizza or hamburger rule above.

if p, then q

not q

therefore, not p

Concrete version:

If I eat dinner, then I will have dessert

I did not have dessert

Therefore, I did not eat dinner

The simple fact is, it can be very difficult for people to apply rules of deductive logic correctly; as a result, they make many errors when trying to do so. Is this a deductively valid argument or not?

Students who like school study a lot

Students who study a lot get good grades

Jane does not like school

Therefore, Jane does not get good grades

Many people are surprised to discover that this is not a logically valid argument; the conclusion is not guaranteed to be true from the beginning statements. Although the first statement says that students who like school study a lot, it does NOT say that students who do not like school do not study a lot. In other words, it may very well be possible to study a lot without liking school. Even people who sometimes get problems like this right might not be using the rules of deductive reasoning. Instead, they might just be making judgments based on examples they know, in this case, remembering instances of people who get good grades despite not liking school.

Making deductive reasoning even more difficult is the fact that there are two important properties that an argument may have. One, it can be valid or invalid (meaning that the conclusion does or does not follow logically from the statements leading up to it). Two, an argument (or more correctly, its conclusion) can be true or false. Here is an example of an argument that is logically valid, but has a false conclusion (at least we think it is false).

Either you are eleven feet tall or the Grand Canyon was created by a spaceship crashing into the earth.

You are not eleven feet tall

Therefore the Grand Canyon was created by a spaceship crashing into the earth

This argument has the exact same form as the pizza or hamburger argument above, making it deductively valid. The conclusion is so false, however, that it is absurd (of course, the reason the conclusion is false is that the first statement is false). When people are judging arguments, they tend not to observe the difference between deductive validity and the empirical truth of statements or conclusions. If the elements of an argument happen to be true, people are likely to judge the argument logically valid; if the elements are false, they will very likely judge it invalid (Markovits & Bouffard-Bouchard, 1992; Moshman & Franks, 1986). Thus, it seems a stretch to say that people are using these logical rules to judge the validity of arguments. Many psychologists believe that most people actually have very limited deductive reasoning skills (Johnson-Laird, 1999). They argue that when faced with a problem for which deductive logic is required, people resort to some simpler technique, such as matching terms that appear in the statements and the conclusion (Evans, 1982). This might not seem like a problem, but consider: if reasoners believe that the elements are true and they happen to be wrong, they would believe that they are using a form of reasoning that guarantees a correct conclusion and yet still be wrong.
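Deductive validity can in fact be checked mechanically for simple propositional arguments: a form is valid exactly when no assignment of truth values makes every premise true and the conclusion false. A minimal sketch (our own illustration, not from the text):

```python
# Brute-force test of deductive validity: search all truth assignments for
# a counterexample (all premises true, conclusion false).
from itertools import product

def is_deductively_valid(premises, conclusion):
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False               # counterexample found: invalid
    return True                        # no counterexample: valid

# "p or q; not p; therefore q" (the pizza-or-hamburger form): valid
print(is_deductively_valid(
    [lambda p, q: p or q, lambda p, q: not p],
    lambda p, q: q))                   # True

# "if p then q; not p; therefore not q" (a tempting but invalid form)
print(is_deductively_valid(
    [lambda p, q: (not p) or q,        # "if p then q" as a material conditional
     lambda p, q: not p],
    lambda p, q: not q))               # False
```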

deductive reasoning: a type of reasoning in which the conclusion is guaranteed to be true any time the statements leading up to it are true

argument: a set of statements in which the beginning statements lead to a conclusion

deductively valid argument: an argument for which true beginning statements guarantee that the conclusion is true

Inductive reasoning and judgment

Every day, you make many judgments about the likelihood of one thing or another. Whether you realize it or not, you are practicing inductive reasoning on a daily basis. In inductive reasoning arguments, a conclusion is likely whenever the statements preceding it are true. The first thing to notice about inductive reasoning is that, by definition, you can never be sure about your conclusion; you can only estimate how likely the conclusion is. For example, the available evidence may lead you to focus on Memory Encoding and Recoding when you study for the exam, but it is possible the instructor will ask more questions about Memory Retrieval instead. Unlike deductive reasoning, the conclusions you reach through inductive reasoning are only probable, not certain. That is why scientists consider inductive reasoning weaker than deductive reasoning. But imagine how hard it would be for us to function if we could not act unless we were certain about the outcome.

Inductive reasoning can be represented as logical arguments consisting of statements and a conclusion, just as deductive reasoning can be. In an inductive argument, you are given some statements and a conclusion (or you are given some statements and must draw a conclusion). An argument is  inductively strong   if the conclusion would be very probable whenever the statements are true. So, for example, here is an inductively strong argument:

  • Statement #1: The forecaster on Channel 2 said it is going to rain today.
  • Statement #2: The forecaster on Channel 5 said it is going to rain today.
  • Statement #3: It is very cloudy and humid.
  • Statement #4: You just heard thunder.
  • Conclusion (or judgment): It is going to rain today.

Think of the statements as evidence, on the basis of which you will draw a conclusion. So, based on the evidence presented in the four statements, it is very likely that it will rain today. Will it definitely rain today? Certainly not. We can all think of times that the weather forecaster was wrong.

A true story: Some years ago, a psychology student was watching a baseball playoff game between the St. Louis Cardinals and the Los Angeles Dodgers. A graphic on the screen had just informed the audience that the Cardinal at bat, (Hall of Fame shortstop) Ozzie Smith, a switch hitter batting left-handed for this plate appearance, had never, in nearly 3000 career at-bats, hit a home run left-handed. The student, who had just learned about inductive reasoning in his psychology class, turned to his companion (a Cardinals fan) and smugly said, “It is an inductively strong argument that Ozzie Smith will not hit a home run.” He turned back to face the television just in time to watch the ball sail over the right field fence for a home run. Although the student felt foolish at the time, he was not wrong. It was an inductively strong argument; 3000 at-bats is an awful lot of evidence suggesting that the Wizard of Oz (as he was known) would not be hitting one out of the park (think of each at-bat without a home run as a statement in an inductive argument). Sadly (for the die-hard Cubs fan and Cardinals-hating student), despite the strength of the argument, the conclusion was wrong.

Given the possibility that we might draw an incorrect conclusion even with an inductively strong argument, we really want to be sure that we do, in fact, make inductively strong arguments. If we judge something probable, it had better be probable. If we judge something nearly impossible, it had better not happen. Think of inductive reasoning, then, as making reasonably accurate judgments of the probability of some conclusion given a set of evidence.

We base many decisions in our lives on inductive reasoning. For example:

Statement #1: Psychology is not my best subject

Statement #2: My psychology instructor has a reputation for giving difficult exams

Statement #3: My first psychology exam was much harder than I expected

Judgment: The next exam will probably be very difficult.

Decision: I will study tonight instead of watching Netflix.

Some other examples of judgments that people commonly make in a school context include judgments of the likelihood that:

  • A particular class will be interesting/useful/difficult
  • You will be able to finish writing a paper by next week if you go out tonight
  • Your laptop’s battery will last through the next trip to the library
  • You will not miss anything important if you skip class tomorrow
  • Your instructor will not notice if you skip class tomorrow
  • You will be able to find a book that you will need for a paper
  • There will be an essay question about Memory Encoding on the next exam

Tversky and Kahneman (1983) recognized that there are two general ways that we might make these judgments; they termed them extensional (i.e., following the laws of probability) and intuitive (i.e., using shortcuts or heuristics, see below). We will use a similar distinction between Type 1 and Type 2 thinking, as described by Keith Stanovich and his colleagues (Evans and Stanovich, 2013; Stanovich and West, 2000). Type 1 thinking is fast, automatic, effortless, and emotional. In fact, it is hardly fair to call it reasoning at all, as judgments just seem to pop into one’s head. Type 2 thinking, on the other hand, is slow, effortful, and logical. So obviously, it is more likely to lead to a correct judgment, or an optimal decision. The problem is, we tend to over-rely on Type 1. Now, we are not saying that Type 2 is the right way to go for every decision or judgment we make. It seems a bit much, for example, to engage in a step-by-step logical reasoning procedure to decide whether we will have chicken or fish for dinner tonight.

Many bad decisions in some very important contexts, however, can be traced back to poor judgments of the likelihood of certain risks or outcomes that result from the use of Type 1 when a more logical reasoning process would have been more appropriate. For example:

Statement #1: It is late at night.

Statement #2: Albert has been drinking beer for the past five hours at a party.

Statement #3: Albert is not exactly sure where he is or how far away home is.

Judgment: Albert will have no difficulty walking home.

Decision: He walks home alone.

As you can see in this example, the three statements backing up the judgment do not really support it. In other words, this argument is not inductively strong because it is based on judgments that ignore the laws of probability. What are the chances that someone facing these conditions will be able to walk home alone easily? And one need not be drunk to make poor decisions based on judgments that just pop into our heads.

The truth is that many of our probability judgments do not come very close to what the laws of probability say they should be. Think about it. In order for us to reason in accordance with these laws, we would need to know the laws of probability, which would allow us to calculate the relationship between particular pieces of evidence and the probability of some outcome (i.e., how much likelihood should change given a piece of evidence), and we would have to do these heavy math calculations in our heads. After all, that is what Type 2 requires. Needless to say, even if we were motivated, we often do not even know how to apply Type 2 reasoning in many cases.
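For reference, the standard formalization of “how much likelihood should change given a piece of evidence” is Bayes’ rule. A minimal sketch with invented numbers, applied to the earlier rain example:

```python
# Bayes' rule: update the probability of a hypothesis H after evidence E.
# P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|not H) * P(not H)]
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Invented numbers: 30% prior chance of rain; forecasters say "rain" on 80%
# of rainy days but also on 20% of dry days. One "rain" forecast arrives:
print(bayes_update(0.30, 0.80, 0.20))  # ~0.63: the evidence raises the estimate
```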

So what do we do when we don’t have the knowledge, skills, or time required to make the correct mathematical judgment? Do we hold off and wait until we can get better evidence? Do we read up on probability and fire up our calculator app so we can compute the correct probability? Of course not. We rely on Type 1 thinking. We “wing it.” That is, we come up with a likelihood estimate using some means at our disposal. Psychologists use the term heuristic to describe the type of “winging it” we are talking about. A heuristic is a shortcut strategy that we use to make some judgment or solve some problem (see Section 7.3). Heuristics are easy and quick; think of them as the basic procedures that are characteristic of Type 1. They can absolutely lead to reasonably good judgments and decisions in some situations (like choosing between chicken and fish for dinner). They are, however, far from foolproof. There are, in fact, quite a lot of situations in which heuristics can lead us to make incorrect judgments, and in many cases the decisions based on those judgments can have serious consequences.

Let us return to the activity that begins this section. You were asked to judge the likelihood (or frequency) of certain events and risks. You were free to come up with your own evidence (or statements) to make these judgments. This is where a heuristic crops up. As a judgment shortcut, we tend to generate specific examples of those very events to help us decide their likelihood or frequency. For example, if we are asked to judge how common, frequent, or likely a particular type of cancer is, many of our statements would be examples of specific cancer cases:

Statement #1: Andy Kaufman (comedian) had lung cancer.

Statement #2: Colin Powell (US Secretary of State) had prostate cancer.

Statement #3: Bob Marley (musician) had skin and brain cancer.

Statement #4: Sandra Day O’Connor (Supreme Court Justice) had breast cancer.

Statement #5: Fred Rogers (children’s entertainer) had stomach cancer.

Statement #6: Robin Roberts (news anchor) had breast cancer.

Statement #7: Bette Davis (actress) had breast cancer.

Judgment: Breast cancer is the most common type.

Your own experience or memory may also tell you that breast cancer is the most common type. But it is not (although it is common). Actually, skin cancer is the most common type in the US. We make the same types of misjudgments all the time because we do not generate the examples or evidence according to their actual frequencies or probabilities. Instead, we have a tendency (or bias) to search for the examples in memory; if they are easy to retrieve, we assume that they are common. To rephrase this in the language of the heuristic, events seem more likely to the extent that they are available to memory. This bias has been termed the availability heuristic (Tversky & Kahneman, 1974).

The fact that we use the availability heuristic does not automatically mean that our judgment is wrong. The reason we use heuristics in the first place is that they work fairly well in many cases (and, of course, that they are easy to use). So, the easiest examples to think of sometimes are the most common ones. Is it more likely that a member of the U.S. Senate is a man or a woman? Most people have a much easier time generating examples of male senators. And as it turns out, the U.S. Senate has many more men than women (74 to 26 in 2020). In this case, then, the availability heuristic would lead you to make the correct judgment; it is far more likely that a senator would be a man.

In many other cases, however, the availability heuristic will lead us astray. This is because events can be memorable for many reasons other than their frequency. Section 5.2, Encoding Meaning, suggested that one good way to encode the meaning of some information is to form a mental image of it. Thus, information that has been pictured mentally will be more available to memory. Indeed, an event that is vivid and easily pictured will trick many people into supposing that type of event is more common than it actually is. Repetition of information will also make it more memorable. So, if the same event is described to you in a magazine, on the evening news, on a podcast that you listen to, and in your Facebook feed, it will be very available to memory. Again, the availability heuristic will cause you to misperceive the frequency of these types of events.

Most interestingly, information that is unusual is more memorable. Suppose we give you the following list of words to remember: box, flower, letter, platypus, oven, boat, newspaper, purse, drum, car. Very likely, the easiest word to remember would be platypus, the unusual one. The same thing occurs with memories of events. An event may be available to memory because it is unusual, yet the availability heuristic leads us to judge that the event is common. Did you catch that? In these cases, the availability heuristic makes us think the exact opposite of the true frequency. We end up thinking something is common because it is unusual (and therefore memorable). Yikes.

The misapplication of the availability heuristic sometimes has unfortunate results. For example, if you went to K-12 school in the US over the past 10 years, it is extremely likely that you have participated in lockdown and active shooter drills. Of course, everyone is trying to prevent the tragedy of another school shooting. And believe us, we are not trying to minimize how terrible the tragedy is. But the truth of the matter is, school shootings are extremely rare. Because the federal government does not keep a database of school shootings, the Washington Post has maintained its own running tally. Between 1999 and January 2020 (the date of the most recent school shooting with a death in the US as of the time this paragraph was written), the Post reported that a total of 254 people had died in school shootings in the US. Not 254 per year, 254 total. That is an average of 12 per year. Of course, that is 254 people who should not have died (particularly because many were children), but in a country with approximately 60,000,000 students and teachers, this is a very small risk.
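To make the arithmetic explicit, here is a quick back-of-the-envelope calculation using the figures just quoted:

\[
\frac{254 \text{ deaths}}{21 \text{ years}} \approx 12 \text{ deaths per year}, \qquad \frac{12}{60{,}000{,}000} = 0.0000002 \text{ per person per year,}
\]

or about a 1-in-5,000,000 annual risk.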

But many students and teachers are terrified that they will be victims of school shootings because of the availability heuristic. It is so easy to think of examples (they are very available to memory) that people believe the event is very common. It is not. And there is a downside to this. We happen to believe that there is an enormous gun violence problem in the United States. According to the Centers for Disease Control and Prevention, there were 39,773 firearm deaths in the US in 2017, and 60% of those deaths were suicides. Fifteen of the deaths were in school shootings, according to the Post. When people pay attention to the school shooting risk (low), they often fail to notice the much larger risk.

And examples like this are by no means unique. The authors of this book have been teaching psychology since the 1990s. We have been able to make the exact same arguments about the misapplication of the availability heuristic and keep them current by simply swapping out the “fear of the day.” In the 1990s it was children being kidnapped by strangers (“stranger danger”), despite the facts that kidnappings accounted for only 2% of the violent crimes committed against children and that only 24% of kidnappings were committed by strangers (US Department of Justice, 2007). This fear overlapped with the fear of terrorism that gripped the country after the 2001 terrorist attacks on the World Trade Center and US Pentagon and still plagues the population of the US somewhat in 2020. After a well-publicized, sensational act of violence, people are extremely likely to increase their estimates of the chances that they, too, will be victims of terror. Think about the reality, however. In October of 2001, a terrorist mailed anthrax spores to members of the US government and a number of media companies. A total of five people died as a result of this attack. The nation was nearly paralyzed by the fear of dying from the attack; in reality the probability of an individual person dying was 0.00000002.
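That last probability is simple division, assuming (our assumption, not stated in the text) a 2001 US population of roughly 285 million:

\[
P(\text{death}) \approx \frac{5}{285{,}000{,}000} \approx 0.00000002,
\]

about one chance in fifty million.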

The availability heuristic can lead you to make incorrect judgments in a school setting as well. For example, suppose you are trying to decide if you should take a class from a particular math professor. You might try to make a judgment of how good a teacher she is by recalling instances of friends and acquaintances making comments about her teaching skill. You may have some examples that suggest that she is a poor teacher very available to memory, so on the basis of the availability heuristic you judge her a poor teacher and decide to take the class from someone else. What if, however, the instances you recalled were all from the same person, and this person happens to be a very colorful storyteller? The subsequent ease of remembering the instances might not indicate that the professor is a poor teacher after all.

Although the availability heuristic is obviously important, it is not the only judgment heuristic we use. Amos Tversky and Daniel Kahneman examined the role of heuristics in inductive reasoning in a long series of studies. Kahneman received a Nobel Prize in Economics for this research in 2002, and Tversky would have certainly received one as well if he had not died of melanoma at age 59 in 1996 (Nobel Prizes are not awarded posthumously). Kahneman and Tversky demonstrated repeatedly that people do not reason in ways that are consistent with the laws of probability. They identified several heuristic strategies that people use instead to make judgments about likelihood. The importance of this work for economics (and the reason that Kahneman was awarded the Nobel Prize) is that earlier economic theories had assumed that people do make judgments rationally, that is, in agreement with the laws of probability.

Another common heuristic that people use for making judgments is the representativeness heuristic (Kahneman & Tversky, 1973). Suppose we describe a person to you. He is quiet and shy, has an unassuming personality, and likes to work with numbers. Is this person more likely to be an accountant or an attorney? If you said accountant, you were probably using the representativeness heuristic. Our imaginary person is judged likely to be an accountant because he resembles, or is representative of the concept of, an accountant. When research participants are asked to make judgments such as these, the only thing that seems to matter is the representativeness of the description. For example, if told that the person described is in a room that contains 70 attorneys and 30 accountants, participants will still assume that he is an accountant.
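A worked example, with illustrative numbers of our own rather than figures from the study, shows why ignoring the base rate is a mistake. Suppose the description really is four times as likely to fit an accountant as an attorney. Bayes' rule still requires weighting that likelihood ratio by the 30:70 base rate:

\[
\frac{P(\text{accountant} \mid D)}{P(\text{attorney} \mid D)} = 4 \times \frac{30}{70} \approx 1.7, \qquad P(\text{accountant} \mid D) = \frac{120}{190} \approx 0.63.
\]

Even granting the description strong diagnostic value, the probability that the man is an accountant is only about 63%, far from the near-certainty that pure representativeness suggests.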

inductive reasoning: a type of reasoning in which we make judgments about likelihood from sets of evidence

inductively strong argument: an inductive argument in which the beginning statements lead to a conclusion that is probably true

heuristic: a shortcut strategy that we use to make judgments and solve problems. Although they are easy to use, they do not guarantee correct judgments and solutions

availability heuristic: judging the frequency or likelihood of some event type according to how easily examples of the event can be called to mind (i.e., how available they are to memory)

representativeness heuristic: judging the likelihood that something is a member of a category on the basis of how much it resembles a typical category member (i.e., how representative it is of the category)

Type 1 thinking: fast, automatic, and emotional thinking

Type 2 thinking: slow, effortful, and logical thinking

  • What percentage of workplace homicides are co-worker violence?

Many people get these questions wrong. The answers are 10%; stairs; skin; 6%. How close were your answers? Explain how the availability heuristic might have led you to make the incorrect judgments.

  • Can you think of some other judgments that you have made (or beliefs that you have) that might have been influenced by the availability heuristic?

7.3 Problem Solving

  • Please take a few minutes to list a number of problems that you are facing right now.
  • Now write about a problem that you recently solved.
  • What is your definition of a problem?

Mary has a problem. Her daughter, ordinarily quite eager to please, appears to delight in being the last person to do anything. Whether getting ready for school, going to piano lessons or karate class, or even going out with her friends, she seems unwilling or unable to get ready on time. Other people have different kinds of problems. For example, many students work at jobs, have numerous family commitments, and are facing a course schedule full of difficult exams, assignments, papers, and speeches. How can they find enough time to devote to their studies and still fulfill their other obligations? Speaking of students and their problems: Show that a ball thrown vertically upward with initial velocity v0 takes twice as much time to return as to reach the highest point (from Spiegel, 1981).
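For the curious, the physics exercise yields to a short derivation. Taking upward as positive and writing g for the acceleration due to gravity, the velocity is zero at the highest point, and the height returns to zero at the end of the flight:

\[
v(t) = v_{0} - g t = 0 \;\Rightarrow\; t_{\text{up}} = \frac{v_{0}}{g}; \qquad h(t) = v_{0} t - \tfrac{1}{2} g t^{2} = 0 \;\Rightarrow\; t_{\text{return}} = \frac{2 v_{0}}{g} = 2\, t_{\text{up}}.
\]

The return time is exactly twice the time to the peak, as the exercise claims.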

These are three very different situations, but we have called them all problems. What makes them all the same, despite the differences? A psychologist might define a problem as a situation with an initial state, a goal state, and a set of possible intermediate states. Somewhat more meaningfully, we might consider a problem a situation in which you are here, in one state (e.g., the daughter is always late), you want to be there, in another state (e.g., the daughter is not always late), and there is no obvious way to get from here to there. Defined this way, each of the three situations we outlined can now be seen as an example of the same general concept, a problem. At this point, you might begin to wonder what is not a problem, given such a general definition. It seems that nearly every non-routine task we engage in could qualify as a problem. As long as you realize that problems are not necessarily bad (it can be quite fun and satisfying to rise to the challenge and solve a problem), this may be a useful way to think about it.

Can we identify a set of problem-solving skills that would apply to these very different kinds of situations? That task, in a nutshell, is a major goal of this section. Let us try to begin to make sense of the wide variety of ways that problems can be solved with an important observation: the process of solving problems can be divided into two key parts. First, people have to notice, comprehend, and represent the problem properly in their minds (called  problem representation ). Second, they have to apply some kind of solution strategy to the problem. Psychologists have studied both of these key parts of the process in detail.

When you first think about the problem-solving process, you might guess that most of our difficulties would occur because we are failing in the second step, the application of strategies. Although this can be a significant difficulty much of the time, the more important source of difficulty is probably problem representation. In short, we often fail to solve a problem because we are looking at it, or thinking about it, the wrong way.

problem :  a situation in which we are in an initial state, have a desired goal state, and there is a number of possible intermediate states (i.e., there is no obvious way to get from the initial to the goal state)

problem representation :  noticing, comprehending and forming a mental conception of a problem

Defining and Mentally Representing Problems in Order to Solve Them

So, the main obstacle to solving a problem is that we do not clearly understand exactly what the problem is. Recall the problem with Mary’s daughter always being late. One way to represent, or to think about, this problem is that she is being defiant. She refuses to get ready in time. This type of representation or definition suggests a particular type of solution. Another way to think about the problem, however, is to consider the possibility that she is simply being sidetracked by interesting diversions. This different conception of what the problem is (i.e., different representation) suggests a very different solution strategy. For example, if Mary defines the problem as defiance, she may be tempted to solve the problem using some kind of coercive tactics, that is, to assert her authority as her mother and force her to listen. On the other hand, if Mary defines the problem as distraction, she may try to solve it by simply removing the distracting objects.

As you might guess, when a problem is represented one way, the solution may seem very difficult, or even impossible. Seen another way, the solution might be very easy. For example, consider the following problem (from Nasar, 1998):

Two bicyclists start 20 miles apart and head toward each other, each going at a steady rate of 10 miles per hour. At the same time, a fly that travels at a steady 15 miles per hour starts from the front wheel of the southbound bicycle and flies to the front wheel of the northbound one, then turns around and flies to the front wheel of the southbound one again, and continues in this manner until he is crushed between the two front wheels. Question: what total distance did the fly cover?

Please take a few minutes to try to solve this problem.

Most people represent this problem as a question about a fly because, well, that is how the question is asked. The solution, using this representation, is to figure out how far the fly travels on the first leg of its journey, then add this total to how far it travels on the second leg of its journey (when it turns around and returns to the first bicycle), then continue to add the smaller distance from each leg of the journey until you converge on the correct answer. You would have to be quite skilled at math to solve this problem, and you would probably need some time and pencil and paper to do it.

If you consider a different representation, however, you can solve this problem in your head. Instead of thinking about it as a question about a fly, think about it as a question about the bicycles. They are 20 miles apart, and each is traveling 10 miles per hour. How long will it take for the bicycles to reach each other? Right, one hour. The fly is traveling 15 miles per hour; therefore, it will travel a total of 15 miles back and forth in the hour before the bicycles meet. Represented one way (as a problem about a fly), the problem is quite difficult. Represented another way (as a problem about two bicycles), it is easy. Changing your representation of a problem is sometimes the best—sometimes the only—way to solve it.
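In symbols, the two representations give the same answer at very different cost. The bicycle representation is one line:

\[
t = \frac{20 \text{ mi}}{(10 + 10) \text{ mph}} = 1 \text{ h}, \qquad d_{\text{fly}} = 15 \text{ mph} \times 1 \text{ h} = 15 \text{ mi}.
\]

The fly representation requires summing an infinite series: the first leg covers 12 miles (a 20-mile gap closes at 15 + 10 = 25 mph, taking 0.8 h), and each later leg turns out to be one fifth as long as the one before, so

\[
d_{\text{fly}} = 12\left(1 + \tfrac{1}{5} + \tfrac{1}{25} + \cdots\right) = 12 \cdot \tfrac{5}{4} = 15 \text{ mi}.
\]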

Unfortunately, however, changing a problem’s representation is not the easiest thing in the world to do. Often, problem solvers get stuck looking at a problem one way. This is called  fixation . Most people who represent the preceding problem as a problem about a fly probably do not pause to reconsider, and consequently change, their representation. A parent who thinks her daughter is being defiant is unlikely to consider the possibility that her behavior is far less purposeful.

Problem-solving fixation was examined by a group of German psychologists called Gestalt psychologists during the 1930s and 1940s. Karl Duncker, for example, discovered an important type of failure to take a different perspective called functional fixedness. Imagine being a participant in one of his experiments. You are asked to figure out how to mount two candles on a door and are given an assortment of odds and ends, including a small empty cardboard box and some thumbtacks. Perhaps you have already figured out a solution: tack the box to the door so it forms a platform, then put the candles on top of the box. Most people are able to arrive at this solution. Imagine a slight variation of the procedure, however. What if, instead of being empty, the box had matches in it? Most people given this version of the problem do not arrive at the solution given above. Why? Because it seems to people that when the box contains matches, it already has a function; it is a matchbox. People are unlikely to consider a new function for an object that already has a function. This is functional fixedness.

Mental set is a type of fixation in which the problem solver gets stuck using the same solution strategy that has been successful in the past, even though the solution may no longer be useful. It is commonly seen when students do math problems for homework. Often, several problems in a row require the reapplication of the same solution strategy. Then, without warning, the next problem in the set requires a new strategy. Many students attempt to apply the formerly successful strategy on the new problem and therefore cannot come up with a correct answer.

The thing to remember is that you cannot solve a problem unless you correctly identify what it is to begin with (initial state) and what you want the end result to be (goal state). That may mean looking at the problem from a different angle and representing it in a new way. The correct representation does not guarantee a successful solution, but it certainly puts you on the right track.

A bit more optimistically, the Gestalt psychologists discovered what may be considered the opposite of fixation, namely insight. Sometimes the solution to a problem just seems to pop into your head. Wolfgang Köhler examined insight by posing many different problems to chimpanzees, principally problems pertaining to their acquisition of out-of-reach food. In one version, a banana was placed outside of a chimpanzee’s cage and a short stick inside the cage. The stick was too short to retrieve the banana, but was long enough to retrieve a longer stick also located outside of the cage. This second stick was long enough to retrieve the banana. After trying, and failing, to reach the banana with the shorter stick, the chimpanzee would make a couple of random-seeming attempts, react with some apparent frustration or anger, then suddenly rush to the longer stick, the correct solution fully realized at this point. This sudden appearance of the solution, observed many times with many different problems, was termed insight by Köhler.

Lest you think insight pertains to chimpanzees only, Karl Duncker demonstrated in the 1930s that children also solve problems through insight. More importantly, you have probably experienced insight yourself. Think back to a time when you were trying to solve a difficult problem. After struggling for a while, you gave up. Hours later, the solution just popped into your head, perhaps when you were taking a walk, eating dinner, or lying in bed.

fixation :  when a problem solver gets stuck looking at a problem a particular way and cannot change his or her representation of it (or his or her intended solution strategy)

functional fixedness :  a specific type of fixation in which a problem solver cannot think of a new use for an object that already has a function

mental set :  a specific type of fixation in which a problem solver gets stuck using the same solution strategy that has been successful in the past

insight :  a sudden realization of a solution to a problem

Solving Problems by Trial and Error

Correctly identifying the problem and your goal for a solution is a good start, but recall the psychologist’s definition of a problem: it includes a set of possible intermediate states. Viewed this way, a problem can be solved satisfactorily only if one can find a path through some of these intermediate states to the goal. Imagine a fairly routine problem, finding a new route to school when your ordinary route is blocked (by road construction, for example). At each intersection, you may turn left, turn right, or go straight. A satisfactory solution to the problem (of getting to school) is a sequence of selections at each intersection that allows you to wind up at school.

If you had all the time in the world to get to school, you might try choosing intermediate states randomly. At one corner you turn left, the next you go straight, then you go left again, then right, then right, then straight. Unfortunately, trial and error will not necessarily get you where you want to go, and even if it does, it is not the fastest way to get there. For example, when a friend of ours was in college, he got lost on the way to a concert and attempted to find the venue by choosing streets to turn onto randomly (this was long before the use of GPS). Amazingly enough, the strategy worked, although he did end up missing two out of the three bands who played that night.
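To see just how inefficient pure trial and error can be, here is a minimal Python sketch, entirely our own illustration rather than anything from the text. It wanders a 10-by-10 grid of intersections at random until it stumbles onto the school; the grid size, coordinates, and step cap are all hypothetical.

import random

# Hypothetical setup: intersections form a 10x10 grid; you start at (0, 0)
# and the school is at (9, 9). At each intersection you pick a direction
# completely at random, the trial-and-error strategy described above.
def random_route(start=(0, 0), goal=(9, 9), grid=10, max_steps=100_000):
    x, y = start
    steps = 0
    while (x, y) != goal and steps < max_steps:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), grid - 1)  # turns that run off the map are wasted
        y = min(max(y + dy, 0), grid - 1)
        steps += 1
    return steps

print(random_route())  # usually hundreds or thousands of steps for an 18-step trip

Like our friend's concert adventure, the random walker does eventually arrive, but only after wasting most of the evening.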

Trial and error is not all bad, however. B. F. Skinner, a prominent behaviorist psychologist, suggested that people often behave randomly in order to see what effect the behavior has on the environment and what subsequent effect this environmental change has on them. This seems particularly true for the very young person. Picture a child filling a household’s fish tank with toilet paper, for example. To a child trying to develop a repertoire of creative problem-solving strategies, an odd and random behavior might be just the ticket. Eventually, the exasperated parent hopes, the child will discover that many of these random behaviors do not successfully solve problems; in fact, in many cases they create problems. Thus, one would expect a decrease in this random behavior as a child matures. You should realize, however, that the opposite extreme is equally counterproductive. If children become too rigid, never trying anything unexpected and new, their problem-solving skills can become too limited.

Effective problem solving seems to call for a happy medium that balances well-founded old strategies with the exploration of new territory. The individual who recognizes a situation in which an old problem-solving strategy would work best, and who can also recognize a situation in which a new, untested strategy is necessary, is halfway to success.

Solving Problems with Algorithms and Heuristics

For many problems there is a possible strategy available that will guarantee a correct solution. For example, think about math problems. Math lessons often consist of step-by-step procedures that can be used to solve the problems. If you apply the strategy without error, you are guaranteed to arrive at the correct solution to the problem. This approach is called using an  algorithm , a term that denotes the step-by-step procedure that guarantees a correct solution. Because algorithms are sometimes available and come with a guarantee, you might think that most people use them frequently. Unfortunately, however, they do not. As the experience of many students who have struggled through math classes can attest, algorithms can be extremely difficult to use, even when the problem solver knows which algorithm is supposed to work in solving the problem. In problems outside of math class, we often do not even know if an algorithm is available. It is probably fair to say, then, that algorithms are rarely used when people try to solve problems.

Because algorithms are so difficult to use, people often pass up the opportunity to guarantee a correct solution in favor of a strategy that is much easier to use and yields a reasonable chance of coming up with a correct solution. These strategies are called  problem solving heuristics . Similar to what you saw in section 6.2 with reasoning heuristics, a problem solving heuristic is a shortcut strategy that people use when trying to solve problems. It usually works pretty well, but does not guarantee a correct solution to the problem. For example, one problem solving heuristic might be “always move toward the goal” (so when trying to get to school when your regular route is blocked, you would always turn in the direction you think the school is). A heuristic that people might use when doing math homework is “use the same solution strategy that you just used for the previous problem.”
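To make the contrast in the two preceding paragraphs concrete, here is a hedged Python sketch on a toy street map of our own invention. Breadth-first search plays the role of an algorithm: exhaustive, and guaranteed to find a route whenever one exists. "Always move toward the goal" is implemented as a greedy heuristic: quick, but it can dead-end. The map, the node numbering, and the use of numeric distance to the goal are illustrative assumptions, not anything from the text.

from collections import deque

def bfs_route(edges, start, goal):
    # Algorithm: explore every intersection in order of distance from the start.
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # guaranteed to be a shortest route
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # exhaustive, so None really means no route exists

def greedy_route(edges, start, goal, max_steps=20):
    # Heuristic: at each intersection, take whichever neighbor looks closest
    # to the goal. Fast, but nothing guarantees it will ever get there.
    path = [start]
    while path[-1] != goal and len(path) < max_steps:
        options = edges.get(path[-1], [])
        if not options:
            return None  # dead end; the heuristic simply fails here
        path.append(min(options, key=lambda n: abs(n - goal)))
    return path if path[-1] == goal else None

# A map where the "obvious" direction is a trap: node 2 looks closer to 4.
edges = {0: [1, 2], 1: [3], 2: [], 3: [4]}
print(bfs_route(edges, 0, 4))     # [0, 1, 3, 4] -- the algorithm's guarantee
print(greedy_route(edges, 0, 4))  # None -- the heuristic dead-ends at 2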

By the way, we hope these last two paragraphs feel familiar to you. They seem to parallel a distinction that you recently learned. Indeed, algorithms and problem-solving heuristics are another example of the distinction between Type 1 thinking and Type 2 thinking.

Although it is probably not worth describing a large number of specific heuristics, two observations about heuristics are worth mentioning. First, heuristics can be very general, or they can be very specific, pertaining to a particular type of problem only. For example, “always move toward the goal” is a general strategy that you can apply to countless problem situations. On the other hand, “when you are lost without a functioning GPS, pick the most expensive car you can see and follow it” is specific to the problem of being lost. Second, not all heuristics are equally useful. One heuristic that many students know is “when in doubt, choose c for a question on a multiple-choice exam.” This is a dreadful strategy because many instructors intentionally randomize the order of answer choices. Another test-taking heuristic, somewhat more useful, is “look for the answer to one question somewhere else on the exam.”

You really should pay attention to the application of heuristics to test taking. Imagine that while reviewing your answers for a multiple-choice exam before turning it in, you come across a question for which you originally thought the answer was c. Upon reflection, you now think that the answer might be b. Should you change the answer to b, or should you stick with your first impression? Most people will apply the heuristic “stick with your first impression.” What they do not realize, of course, is that this is a very poor strategy (Lilienfeld et al., 2009). Most of the errors on exams come on questions that were answered wrong originally and were not changed (so they remain wrong). There are many fewer errors where we change a correct answer to an incorrect answer. And, of course, sometimes we change an incorrect answer to a correct answer. In fact, research has shown that it is more common to change a wrong answer to a right answer than vice versa (Bruno, 2001).

The belief in this poor test-taking strategy (stick with your first impression) is based on the  confirmation bias   (Nickerson, 1998; Wason, 1960). You first saw the confirmation bias in Module 1, but because it is so important, we will repeat the information here. People have a bias, or tendency, to notice information that confirms what they already believe. Somebody at one time told you to stick with your first impression, so when you look at the results of an exam you have taken, you will tend to notice the cases that are consistent with that belief. That is, you will notice the cases in which you originally had an answer correct and changed it to the wrong answer. You tend not to notice the other two important (and more common) cases, changing an answer from wrong to right, and leaving a wrong answer unchanged.

Because heuristics by definition do not guarantee a correct solution to a problem, mistakes are bound to occur when we employ them. A poor choice of a specific heuristic will lead to an even higher likelihood of making an error.

algorithm: a step-by-step procedure that guarantees a correct solution to a problem

problem solving heuristic: a shortcut strategy that we use to solve problems. Although they are easy to use, they do not guarantee correct judgments and solutions

confirmation bias: people’s tendency to notice information that confirms what they already believe

An Effective Problem-Solving Sequence

You may be left with a big question: If algorithms are hard to use and heuristics often don’t work, how am I supposed to solve problems? Robert Sternberg (1996), as part of his theory of what makes people successfully intelligent (Module 8), described a problem-solving sequence that has been shown to work rather well:

  • Identify the existence of a problem.  In school, problem identification is often easy; problems that you encounter in math classes, for example, are conveniently labeled as problems for you. Outside of school, however, realizing that you have a problem is a key difficulty that you must get past in order to begin solving it. You must be very sensitive to the symptoms that indicate a problem.
  • Define the problem.  Suppose you realize that you have been having many headaches recently. Very likely, you would identify this as a problem. If you define the problem as “headaches,” the solution would probably be to take aspirin or ibuprofen or some other anti-inflammatory medication. If the headaches keep returning, however, you have not really solved the problem—likely because you have mistaken a symptom for the problem itself. Instead, you must find the root cause of the headaches. Stress might be the real problem. For you to successfully solve many problems it may be necessary for you to overcome your fixations and represent the problems differently. One specific strategy that you might find useful is to try to define the problem from someone else’s perspective. How would your parents, spouse, significant other, doctor, etc. define the problem? Somewhere in these different perspectives may lurk the key definition that will allow you to find an easier and permanent solution.
  • Formulate a strategy.  Now it is time to begin planning exactly how the problem will be solved. Is there an algorithm or heuristic available for you to use? Remember, heuristics by their very nature do not guarantee success, so occasionally they will fail to solve the problem. One point to keep in mind is that you should look for long-range solutions, which are more likely to address the root cause of a problem than short-range solutions.
  • Represent and organize information.  Similar to the way that the problem itself can be defined, or represented in multiple ways, information within the problem is open to different interpretations. Suppose you are studying for a big exam. You have chapters from a textbook and from a supplemental reader, along with lecture notes that all need to be studied. How should you (represent and) organize these materials? Should you separate them by type of material (text versus reader versus lecture notes), or should you separate them by topic? To solve problems effectively, you must learn to find the most useful representation and organization of information.
  • Allocate resources.  This is perhaps the simplest principle of the problem-solving sequence, but it is extremely difficult for many people. First, you must decide whether time, money, skills, effort, goodwill, or some other resource would help to solve the problem. Then, you must make the hard choice of deciding which resources to use, realizing that you cannot devote maximum resources to every problem. Very often, the solution to a problem is simply to change how resources are allocated (for example, spending more time studying in order to improve grades).
  • Monitor and evaluate solutions.  Pay attention to the solution strategy while you are applying it. If it is not working, you may be able to select another strategy. Another fact you should realize about problem solving is that it never really ends; solving one problem frequently brings up new ones. Good monitoring and evaluation of your problem solutions can help you to anticipate and get a jump on solving the inevitable new problems that will arise.

Please note that this is an effective problem-solving sequence, not the effective problem-solving sequence. Just as you can become fixated and end up representing the problem incorrectly or trying an inefficient solution, you can become stuck applying the problem-solving sequence in an inflexible way. Clearly there are problem situations that can be solved without using these skills in this order.

Additionally, many real-world problems may require that you go back and redefine a problem several times as the situation changes (Sternberg et al., 2000). For example, consider the problem with Mary’s daughter one last time. At first, Mary did represent the problem as one of defiance. When her early strategy of pleading and threatening punishment was unsuccessful, Mary began to observe her daughter more carefully. She noticed that, indeed, her daughter’s attention would be drawn away by an irresistible distraction or book. Armed with this new representation of the problem, she began a new solution strategy: reminding her daughter every few minutes to stay on task and telling her that if she was ready before it was time to leave, she could return to the book or other distraction at that time. Fortunately, this strategy was successful, so Mary did not have to go back and redefine the problem again.

Pick one or two of the problems that you listed when you first started studying this section and try to work out the steps of Sternberg’s problem solving sequence for each one.


Introduction to Psychology Copyright © 2020 by Ken Gray; Elizabeth Arnott-Hill; and Or'Shaundra Benson is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Problem-Solving Strategies and Obstacles

From deciding what to eat for dinner to considering whether it's the right time to buy a house, problem-solving is a large part of our daily lives. Learn some of the problem-solving strategies that exist and how to use them in real life, along with ways to overcome obstacles that are making it harder to resolve the issues you face.

What Is Problem-Solving?

In cognitive psychology, the term 'problem-solving' refers to the mental process that people go through to discover, analyze, and solve problems.

A problem exists when there is a goal that we want to achieve but the process by which we will achieve it is not obvious to us. Put another way, there is something that we want to occur in our life, yet we are not immediately certain how to make it happen.

Maybe you want a better relationship with your spouse or another family member but you're not sure how to improve it. Or you want to start a business but are unsure what steps to take. Problem-solving helps you figure out how to achieve these desires.

The problem-solving process involves:

  • Discovery of the problem
  • Deciding to tackle the issue
  • Seeking to understand the problem more fully
  • Researching available options or solutions
  • Taking action to resolve the issue

Before problem-solving can occur, it is important to first understand the exact nature of the problem itself. If your understanding of the issue is faulty, your attempts to resolve it will also be incorrect or flawed.

Problem-Solving Mental Processes

Several mental processes are at work during problem-solving. Among them are:

  • Perceptually recognizing the problem
  • Representing the problem in memory
  • Considering relevant information that applies to the problem
  • Identifying different aspects of the problem
  • Labeling and describing the problem

Problem-Solving Strategies

There are many ways to go about solving a problem. Some of these strategies might be used on their own, or you may decide to employ multiple approaches when working to figure out and fix a problem.

Algorithms

An algorithm is a step-by-step procedure that, by following certain "rules," produces a solution. Algorithms are commonly used in mathematics to solve division or multiplication problems. But they can be used in other fields as well.
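As a minimal, hypothetical illustration of what "following the rules always produces the answer" means, here is grade-school division reduced to its simplest algorithmic form, repeated subtraction, in Python; the function and its name are our own toy example, not from the article.

# Division by repeated subtraction: for any positive integers, the same
# fixed rules always terminate with the correct quotient and remainder.
def divide(dividend, divisor):
    quotient, remainder = 0, dividend
    while remainder >= divisor:  # rule: keep subtracting until you no longer can
        remainder -= divisor
        quotient += 1
    return quotient, remainder

print(divide(17, 5))  # (3, 2), because 17 = 3 * 5 + 2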

In psychology, algorithms can be used to help identify individuals with a greater risk of mental health issues. For instance, research suggests that certain algorithms might help us recognize children with an elevated risk of suicide or self-harm.

One benefit of algorithms is that they guarantee an accurate answer. However, they aren't always the best approach to problem-solving, in part because detecting patterns can be incredibly time-consuming.

There are also concerns when algorithms rely on machine learning, a branch of artificial intelligence (AI), such as whether these models can accurately predict human behavior.

Heuristics

Heuristics are shortcut strategies that people can use to solve a problem at hand. These "rule of thumb" approaches allow you to simplify complex problems, reducing the total number of possible solutions to a more manageable set.

If you find yourself sitting in a traffic jam, for example, you may quickly consider other routes, taking one to get moving once again. When shopping for a new car, you might think back to a prior experience when negotiating got you a lower price, then employ the same tactics.

While heuristics may be helpful when facing smaller issues, major decisions shouldn't necessarily be made using a shortcut approach. Heuristics also don't guarantee an effective solution, such as when trying to drive around a traffic jam only to find yourself on an equally crowded route.

Trial and Error

A trial-and-error approach to problem-solving involves trying a number of potential solutions to a particular issue, then ruling out those that do not work. If you're not sure whether to buy a shirt in blue or green, for instance, you may try on each before deciding which one to purchase.

This can be a good strategy to use if you have a limited number of solutions available. But if there are many different choices available, narrowing down the possible options using another problem-solving technique can be helpful before attempting trial and error.

Insight

In some cases, the solution to a problem can appear as a sudden insight. You are facing an issue in a relationship or your career when, out of nowhere, the solution appears in your mind and you know exactly what to do.

Insight can occur when the problem in front of you is similar to an issue that you've dealt with in the past. You may not recognize what is occurring, though, since the underlying mental processes that lead to insight often happen outside of conscious awareness.

Research indicates that insight is most likely to occur during times when you are alone—such as when going on a walk by yourself, when you're in the shower, or when lying in bed after waking up.

How to Apply Problem-Solving Strategies in Real Life

If you're facing a problem, you can implement one or more of these strategies to find a potential solution. Here's how to use them in real life:

  • Create a flow chart. If you have time, you can take advantage of the algorithm approach to problem-solving by sitting down and making a flow chart of each potential solution, its consequences, and what happens next.
  • Recall your past experiences. When a problem needs to be solved fairly quickly, heuristics may be a better approach. Think back to when you faced a similar issue, then use your knowledge and experience to choose the best option possible.
  • Start trying potential solutions. If your options are limited, start trying them one by one to see which solution is best for achieving your desired goal. If a particular solution doesn't work, move on to the next.
  • Take some time alone. Since insight is often achieved when you're alone, carve out time to be by yourself for a while. The answer to your problem may come to you, seemingly out of the blue, if you spend some time away from others.

Obstacles to Problem-Solving

Problem-solving is not a flawless process, as there are a number of obstacles that can interfere with our ability to solve a problem quickly and efficiently. These obstacles include:

  • Assumptions: When dealing with a problem, people can make assumptions about the constraints and obstacles that prevent certain solutions. Thus, they may not even try some potential options.
  • Functional fixedness: This term refers to the tendency to view problems only in their customary manner. Functional fixedness prevents people from fully seeing all of the different options that might be available to find a solution.
  • Irrelevant or misleading information: When trying to solve a problem, it's important to distinguish between information that is relevant to the issue and irrelevant data that can lead to faulty solutions. The more complex the problem, the easier it is to focus on misleading or irrelevant information.
  • Mental set: A mental set is a tendency to only use solutions that have worked in the past rather than looking for alternative ideas. A mental set can work as a heuristic, making it a useful problem-solving tool. However, mental sets can also lead to inflexibility, making it more difficult to find effective solutions.

How to Improve Your Problem-Solving Skills

In the end, if your goal is to become a better problem-solver, it's helpful to remember that this is a process. Thus, if you want to improve your problem-solving skills, following these steps can help lead you to your solution:

  • Recognize that a problem exists. If you are facing a problem, there are generally signs. For instance, if you have a mental illness, you may experience excessive fear or sadness, mood changes, and changes in sleeping or eating habits. Recognizing these signs can help you realize that an issue exists.
  • Decide to solve the problem. Make a conscious decision to solve the issue at hand. Commit to yourself that you will go through the steps necessary to find a solution.
  • Seek to fully understand the issue. Analyze the problem you face, looking at it from all sides. If your problem is relationship-related, for instance, ask yourself how the other person may be interpreting the issue. You might also consider how your actions might be contributing to the situation.
  • Research potential options. Using the problem-solving strategies mentioned, research potential solutions. Make a list of options, then consider each one individually. What are some pros and cons of taking the available routes? What would you need to do to make them happen?
  • Take action. Select the best solution possible and take action. Action is one of the steps required for change. So, go through the motions needed to resolve the issue.
  • Try another option, if needed. If the solution you chose didn't work, don't give up. Either go through the problem-solving process again or simply try another option.

You can find a way to solve your problems as long as you keep working toward this goal—even if the best solution is simply to let go because no other good solution exists.

Sarathy V. Real world problem-solving. Front Hum Neurosci. 2018;12:261. doi:10.3389/fnhum.2018.00261

Dunbar K. Problem solving. A Companion to Cognitive Science. 2017. doi:10.1002/9781405164535.ch20

Stewart SL, Celebre A, Hirdes JP, Poss JW. Risk of suicide and self-harm in kids: The development of an algorithm to identify high-risk individuals within the children's mental health system. Child Psychiat Human Develop. 2020;51:913-924. doi:10.1007/s10578-020-00968-9

Rosenbusch H, Soldner F, Evans AM, Zeelenberg M. Supervised machine learning methods in psychology: A practical introduction with annotated R code. Soc Personal Psychol Compass. 2021;15(2):e12579. doi:10.1111/spc3.12579

Mishra S. Decision-making under risk: Integrating perspectives from biology, economics, and psychology. Personal Soc Psychol Rev. 2014;18(3):280-307. doi:10.1177/1088868314530517

Csikszentmihalyi M, Sawyer K. Creative insight: The social dimension of a solitary moment. In: The Systems Model of Creativity. 2015:73-98. doi:10.1007/978-94-017-9085-7_7

Chrysikou EG, Motyka K, Nigro C, Yang SI, Thompson-Schill SL. Functional fixedness in creative thinking tasks depends on stimulus modality. Psychol Aesthet Creat Arts. 2016;10(4):425-435. doi:10.1037/aca0000050

Huang F, Tang S, Hu Z. Unconditional perseveration of the short-term mental set in chunk decomposition. Front Psychol. 2018;9:2568. doi:10.3389/fpsyg.2018.02568

National Alliance on Mental Illness. Warning signs and symptoms.

Mayer RE. Thinking, problem solving, cognition. 2nd ed.

Schooler JW, Ohlsson S, Brooks K. Thoughts beyond words: When language overshadows insight. J Exp Psychol Gen. 1993;122:166-183. doi:10.1037/0096-3445.2.166

By Kendra Cherry, MSEd, a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Means-Ends Planning: An Example Soar System

Aladin Akyürek

Part of the book series: Studies in Cognitive Systems (COGS, volume 10)


Although Soar is intended to cover the full range of weak problem-solving methods, often hypothesized in cognitive science as basic for all intelligent agents, earlier attempts to add means-ends analysis to Soar’s repertoire of methods have not been particularly successful or convincing. Considering its psychological significance, stipulated by Newell and Simon (1972), it seems essential that Soar, when taken as a general cognitive architecture, should allow means-ends analysis to arise naturally from its own structure. This paper presents a planner program that interleaves a difference reduction process with Soar’s default mechanism for operator subgoaling, which it modifies in order to map means-ends analysis onto Soar. The scheme advanced is shown to produce “macro-operators” of a novel kind, called macrochunks, which may have important implications for explaining routine behavior. The approach taken and the problems that had to be dealt with in implementing this planner are treated in detail. Also, SoarL, a language used for state representations, is reviewed with respect to the frame problem.

  • Goal Condition
  • Problem Space
  • Frame Problem
  • Knowledge Compilation



Akyürek, A. (1992). On a computational model of human planning. In J. A. Michon & A. Akyürek (Eds.), Soar: A cognitive architecture in perspective (pp. 81–108). Dordrecht, The Netherlands: Kluwer.

Anderson, J. R. (1986). Knowledge compilation: The general learning mechanism. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (Vol. II, pp. 289–310). Los Altos, CA: Morgan Kaufmann.

Atwood, M. E., & Polson, P. G. (1976). A process model for water jug problems. Cognitive Psychology, 8, 191–216.

Carbonell, J. G. (1983). Learning by analogy: Formulating and generalizing plans from past experience. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (pp. 137–161). Los Altos, CA: Morgan Kaufmann.

Dawson, C., & Siklóssy, L. (1977). The role of preprocessing in problem solving systems. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence (pp. 465–471). San Mateo, CA: Morgan Kaufmann.

DeJong, G., & Mooney, R. (1986). Explanation-based learning: An alternative view. Machine Learning, 1, 145–176.

Ernst, G. W. (1969). Sufficient conditions for the success of GPS. Journal of the Association for Computing Machinery, 16, 517–533.

Fikes, R. E., Hart, P. E., & Nilsson, N. J. (1972). Learning and executing generalized robot plans. Artificial Intelligence, 3, 251–288.

Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189–208.

Hayes, P. J. (1973). The frame problem and related problems in artificial intelligence. In A. Elithorn & D. Jones (Eds.), Artificial and human thinking (pp. 45–59). Amsterdam: Elsevier.

Jeffries, R., Polson, P. G., & Razran, L. (1977). A process model for missionaries-cannibals and other river-crossing problems. Cognitive Psychology, 9, 412–440.

Korf, R. E. (1985). Learning to solve problems by searching for macro-operators. Boston, MA: Pitman.

Knoblock, C. A. (1989). Learning hierarchies of abstraction spaces. In Proceedings of the Sixth International Workshop on Machine Learning (pp. 241–245). San Mateo, CA: Morgan Kaufmann.

Laird, J. E. (1984). Universal subgoaling (Tech. Rep. CMU-CS-84-129). Pittsburgh, PA: Carnegie Mellon University, Department of Computer Science. [Also available as part of Laird, J., Rosenbloom, P., & Newell, A. (1986). Universal subgoaling and chunking: The automatic generation and learning of goal hierarchies (pp. 1–131). Boston, MA: Kluwer.]

Laird, J. E., Congdon, C. B., Altmann, E., & Swedlow, K. (1990). Soar user’s manual: Version 5.2 (Tech. Rep. CMU-CS-90-179). Pittsburgh, PA: Carnegie Mellon University, School of Computer Science.

Laird, J., & Newell, A. (1983). A universal weak method (Tech. Rep. CMU-CS-83-141). Pittsburgh, PA: Carnegie Mellon University, Department of Computer Science.

Laird, J. E., & Rosenbloom, P. S. (1990). Integrating execution, planning, and learning in Soar for external environments. In Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 1022–1029). San Mateo, CA: Morgan Kaufmann.

Laird, J. E., Rosenbloom, P. S., & Newell, A. (1984). Towards chunking as a general learning mechanism. In Proceedings of the Fourth National Conference on Artificial Intelligence (pp. 188–192). San Mateo, CA: Morgan Kaufmann.

Laird, J. E., Rosenbloom, P. S., & Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.

Laird, J., Swedlow, K., Altmann, E., Congdon, C. B., & Wiesmeyer, M. (1989). Soar user’s manual: Version 4.5 . Pittsburgh, PA: Carnegie Mellon University, School of Computer Science.

McDermott, D. (1987). AI, logic, and the frame problem. In F. M. Brown (Ed.), The frame problem in artificial intelligence (pp. 105–118). Los Altos, CA: Morgan Kaufmann.

McDermott, D. (1978). Planning and acting. Cognitive Science, 2, 71–109.

Minton, S. (1988). Learning search control knowledge: An explanation-based approach . Boston, MA: Kluwer.


Minton, S., Knoblock, C. A., Kuokka, D. R., Gil, Y., Joseph, R. L., & Carbonell, J. G. (1989). PRODIGY 2.0: The manual and tutorial (Tech. Rep. CMU-CS-89-146). Pittsburgh, PA: Carnegie-Mellon University, School of Computer Science.

Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47–80.

Newell, A. (1980). Reasoning, problem solving, and decision processes: The problem space as a fundamental category. In R. S. Nickerson (Ed.), Attention and performance (Vol. 8, pp. 693–718). Hillsdale, NJ: Erlbaum.

Newell, A. (1990). Unified theories of cognition . Cambridge, MA: Harvard University Press.

Newell, A. (1992). Unified theories of cognition and the role of Soar. In J. A. Michon & A. Akyürek (Eds.), Soar: A cognitive architecture in perspective (pp. 25–79). Dordrecht, The Netherlands: Kluwer.

Newell, A., Rosenbloom, P. S., & Laird, J. E. (1989). Symbolic architectures for cognition. In M. I. Posner (Ed.), Foundations of cognitive science (pp. 93–131). Cambridge, MA: MIT Press.

Newell, A., Shaw, J. C., & Simon, H. A. (1962). The processes of creative thinking. In H. E. Gruber, G. Terrell, & M. Wertheimer (Eds.), Contemporary approaches to creative thinking (pp. 63–119). New York: Atherton Press.

Newell, A., & Simon, H. A. (1963). GPS, a program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 279–293). New York: McGraw-Hill. (Original work published 1961)

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Newell, A., Yost, G., Laird, J. E., Rosenbloom, P. S., & Altmann, E. (1990). Formulating the problem space computational model . Paper presented at the 25th Anniversary Symposium. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Nilsson, N. J. (1980). Principles of artificial intelligence. Los Altos, CA: Morgan Kaufmann.

Russell, S. J. (1989). Execution architectures and compilation. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (pp. 15–20). San Mateo, CA: Morgan Kaufmann.

Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5, 115–135.

Sandewall, E. J. (1969). A planning problem solver based on look-ahead in stochastic game trees. Journal of the Association for Computing Machinery, 16, 364–382.

Schubert, L. (1990). Monotonic solution of the frame problem in the situation calculus: An efficient method for worlds with fully specified actions. In H. E. Kyburg, R. P. Loui, & G. N. Carlson (Eds.), Knowledge representation and defeasible reasoning (pp. 23–67). Dordrecht, The Netherlands: Kluwer.

Sussman, G. J. (1975). A computer model of skill acquisition . New York: American Elsevier.

Tambe, M., Newell, A., & Rosenbloom, P. S. (1990). The problem of expensive chunks and its solution by restricting expressiveness. Machine Learning , 5 ,299–348.

Tenenberg, J. D. (1988). Abstraction in planning (Tech. Rep. No. 250). Rochester, NY: University of Rochester. Computer Science Department.

Unruh, A., & Rosenbloom, P. S. (1989). Abstraction in problem solving and learning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (pp. 681–687). San Mateo. CA: Morgan Kaufmann.

Unruh, A., & Rosenbloom, P. S. (1990). Two new weak method increments for abstraction . Manuscript submitted for publication.

Wilkins, D. E. (1988). Practical planning: Extending the classical AI planning paradigm . San Mateo,CA: Morgan Kaufmann.

Woods, W. A. (1975). What’s in a link: Foundations for semantic networks. In D. G. Bobrow & A. Collins (Eds.). Representation and understanding: Studies in cognitive science (pp. 35–82). Orlando, FL: Academic Press.


9.2: Hypothesis Testing Problem Solving Steps


Now that we have some background on setting up hypotheses and finding critical regions, we introduce the steps needed for every hypothesis testing procedure. Hypothesis testing is based directly on sampling theory and the probabilities \(P(\mbox{test statistic} \mid H_{0})\) that the sampling theory gives. Here are the steps we will follow:

  • Hypotheses: Formulate \(H_{0}\) and \(H_{1}\), and state which hypothesis is the claim.
  • Critical statistic: Find the critical values and regions (use tables of \(z\), \(t\), \(\chi^2\), etc. values).
  • Test statistic: Compute the test statistic from your data; it summarizes your data in one number. The \(p\)-value follows from the test statistic.
  • Decision: If the test statistic falls in the critical region (rejection region), reject \(H_{0}\). (This decision can also be made using the \(p\)-value.)
  • Interpretation: Summarize the results in a sentence and/or present a graphic or table.

The definition of a \(p\)-value is covered below. For now, you should know that a computer program such as SPSS will give you a \(p\)-value but not a critical statistic, so there is no Step 2 if you use SPSS.

A generic test statistic may be defined by:

\[\mbox{test value} = \frac{\mbox{(observed value)} - \mbox{(expected }H_{0} \mbox{ value)}}{\mbox{standard error}}.\]

The numerator represents a signal or an effect. The denominator represents noise. Not all test statistics will have this form (e.g. some \(\chi^{2}\) test statistics), but all test statistics represent a signal-to-noise ratio. Much of the tabular output of SPSS gives the numerator and denominator of this generic form with or without the corresponding test statistic.
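To make the five steps concrete, here is a minimal Python sketch of a two-tailed one-sample \(z\)-test. The numbers (hypothesized mean 100, sample mean 104, \(\sigma = 15\), \(n = 36\)) are illustrative values chosen for this example, not taken from the text; only the Python standard library is used.

```python
# Minimal sketch of the five hypothesis-testing steps, using a two-tailed
# one-sample z-test with made-up numbers (illustrative only).
from math import sqrt
from statistics import NormalDist

norm = NormalDist()  # standard normal distribution

# Step 1 -- Hypotheses: H0: mu = 100 vs. H1: mu != 100 (H1 is the claim).
mu0 = 100
xbar, sigma, n = 104.0, 15.0, 36   # hypothetical sample summary
alpha = 0.05

# Step 2 -- Critical statistic: two-tailed z critical values at alpha = 0.05.
z_crit = norm.inv_cdf(1 - alpha / 2)             # about 1.96

# Step 3 -- Test statistic: (observed - expected under H0) / standard error.
se = sigma / sqrt(n)                             # the "noise"
z = (xbar - mu0) / se                            # the signal-to-noise ratio
p_value = 2 * (1 - norm.cdf(abs(z)))             # two-tailed p-value

# Step 4 -- Decision: reject H0 if the test statistic lands in the
# critical (rejection) region; equivalently, if p_value < alpha.
reject = abs(z) > z_crit

# Step 5 -- Interpretation.
print(f"z = {z:.2f}, critical value = +/-{z_crit:.2f}, p = {p_value:.4f}")
print("Reject H0." if reject else "Fail to reject H0.")
```

Here \(z = (104 - 100)/(15/\sqrt{36}) = 1.6\), which falls inside \(\pm 1.96\), so \(H_{0}\) is not rejected; a program such as SPSS would report only the \(p\)-value (about 0.11), which leads to the same decision.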

Piaget’s Theory and Stages of Cognitive Development


Key Takeaways

  • Jean Piaget is famous for his theories regarding changes in cognitive development that occur as we move from infancy to adulthood.
  • Cognitive development results from the interplay between innate capabilities (nature) and environmental influences (nurture).
  • Children progress through four distinct stages, each representing different cognitive abilities and ways of understanding the world: the sensorimotor stage (birth to 2 years), the preoperational stage (2 to 7 years), the concrete operational stage (7 to 11 years), and the formal operational stage (11 years and beyond).
  • A child’s cognitive development is not just about acquiring knowledge; the child also has to develop, or construct, a mental model of the world, which is referred to as a schema.
  • Piaget emphasized the role of active exploration and interaction with the environment in shaping cognitive development, highlighting the importance of assimilation and accommodation in constructing mental schemas.

Stages of Development

Jean Piaget’s theory of cognitive development suggests that children move through four different stages of intellectual development which reflect the increasing sophistication of children’s thought.

Each child goes through the stages in the same order (but not all at the same rate), and child development is determined by biological maturation and interaction with the environment.

At each stage of development, the child’s thinking is qualitatively different from the other stages, that is, each stage involves a different type of intelligence.

Although no stage can be missed out, there are individual differences in the rate at which children progress through stages, and some individuals may never attain the later stages.

Piaget did not claim that a particular stage was reached at a certain age – although descriptions of the stages often include an indication of the age at which the average child would reach each stage.

The Sensorimotor Stage

Ages: Birth to 2 Years

The first stage is the sensorimotor stage , during which the infant focuses on physical sensations and learning to coordinate its body.


Major Characteristics and Developmental Changes:

  • The infant learns about the world through their senses and through their actions (moving around and exploring their environment).
  • During the sensorimotor stage, a range of cognitive abilities develop. These include: object permanence; self-recognition (the child realizes that other people are separate from them); deferred imitation; and representational play.
  • These abilities relate to the emergence of the general symbolic function, which is the capacity to represent the world mentally.
  • At about 8 months, the infant will understand the permanence of objects – that they still exist even when out of sight – and will search for them when they disappear.

During the beginning of this stage, the infant lives in the present. It does not yet have a mental picture of the world stored in its memory; therefore, it does not have a sense of object permanence.

If it cannot see something, then it does not exist. This is why you can hide a toy from an infant, while it watches, but it will not search for the object once it has gone out of sight.

The main achievement during this stage is object permanence – knowing that an object still exists, even if it is hidden. It requires the ability to form a mental representation (i.e., a schema) of the object.

Towards the end of this stage the general symbolic function begins to appear where children show in their play that they can use one object to stand for another. Language starts to appear because they realise that words can be used to represent objects and feelings.

The child begins to be able to store information that it knows about the world, recall it, and label it.

Individual Differences

  • Cultural Practices: In some cultures, babies are carried on their mothers’ backs throughout the day. This constant physical contact and varied stimuli can influence how a child perceives their environment and their sense of object permanence.
  • Gender Norms: Toys assigned to babies can differ based on gender expectations. A boy might be given more cars or action figures, while a girl might receive dolls or kitchen sets. This can influence early interactions and sensory explorations.


The Preoperational Stage

Ages: 2 – 7 Years

Piaget’s second stage of intellectual development is the preoperational stage . It takes place between 2 and 7 years. At the beginning of this stage, the child does not use operations, so the thinking is influenced by the way things appear rather than logical reasoning.

A child cannot conserve, which means that the child does not understand that quantity remains the same even if the appearance changes.

Furthermore, the child is egocentric; he assumes that other people see the world as he does. This has been shown in the three mountains study.

As the preoperational stage develops, egocentrism declines, and children begin to enjoy the participation of another child in their games; “let’s pretend” play becomes more important.


Toddlers often pretend to be people they are not (e.g. superheroes, policemen), and may play these roles with props that symbolize real-life objects. Children may also invent an imaginary playmate.

  • Toddlers and young children acquire the ability to internally represent the world through language and mental imagery.
  • During this stage, young children can think about things symbolically. This is the ability to make one thing, such as a word or an object, stand for something other than itself.
  • A child’s thinking is dominated by how the world looks, not how the world is. It is not yet capable of logical (problem-solving) type of thought.
  • Moreover, the child has difficulties with class inclusion; he can classify objects but cannot include objects in sub-sets, which involves classifying objects as belonging to two or more categories simultaneously.
  • Children at this stage also demonstrate animism. This is the tendency for the child to think that non-living objects (such as toys) have life and feelings like a person’s.

By 2 years, children have made some progress toward detaching their thoughts from the physical world. However, they have not yet developed the logical (or “operational”) thought characteristic of later stages.

Thinking is still intuitive (based on subjective judgments about situations) and egocentric (centered on the child’s own view of the world).

Individual Differences

  • Cultural Storytelling: Different cultures have unique stories, myths, and folklore. Children from diverse backgrounds might understand and interpret symbolic elements differently based on their cultural narratives.
  • Race & Representation: A child’s racial identity can influence how they engage in pretend play. For instance, a lack of diverse representation in media and toys might lead children of color to recreate scenarios that don’t reflect their experiences or background.


The Concrete Operational Stage

Ages: 7 – 11 Years

By the beginning of the concrete operational stage , the child can use operations (a set of logical rules) so they can conserve quantities, realize that people see the world in a different way (decentring), and demonstrate improvement in inclusion tasks. Children still have difficulties with abstract thinking.


  • During this stage, children begin to think logically about concrete events.
  • Children begin to understand the concept of conservation; understanding that, although things may change in appearance, certain properties remain the same.
  • During this stage, children can mentally reverse things (e.g., picture a ball of plasticine returning to its original shape).
  • During this stage, children also become less egocentric and begin to think about how other people might think and feel.

The stage is called concrete because children can think logically much more successfully if they can manipulate real (concrete) materials or pictures of them.

Piaget considered the concrete stage a major turning point in the child’s cognitive development because it marks the beginning of logical or operational thought. This means the child can work things out internally in their head (rather than physically try things out in the real world).

Children can conserve number (age 6), mass (age 7), and weight (age 9). Conservation is the understanding that something stays the same in quantity even though its appearance changes.

But operational thought is only effective here if the child is asked to reason about materials that are physically present. Children at this stage will tend to make mistakes or be overwhelmed when asked to reason about abstract or hypothetical problems.

Individual Differences

  • Cultural Context in Conservation Tasks: In a society where resources are scarce, children might demonstrate conservation skills earlier due to the cultural emphasis on preserving and reusing materials.
  • Gender & Learning: Stereotypes about gender abilities, like “boys are better at math,” can influence how children approach logical problems or classify objects based on perceived gender norms.


The Formal Operational Stage

Ages: 12 and Over

The formal operational period begins at about age 11. As adolescents enter this stage, they gain the ability to think in an abstract manner, the ability to combine and classify items in a more sophisticated way, and the capacity for higher-order reasoning.


Adolescents can think systematically and reason about what might be as well as what is (not everyone achieves this stage). This allows them to understand politics, ethics, and science fiction, as well as to engage in scientific reasoning.

Adolescents can deal with abstract ideas: e.g. they can understand division and fractions without having to actually divide things up, and solve hypothetical (imaginary) problems.

  • Concrete operations are carried out on things whereas formal operations are carried out on ideas. Formal operational thought is entirely freed from physical and perceptual constraints.
  • During this stage, adolescents can deal with abstract ideas (e.g. no longer needing to think about slicing up cakes or sharing sweets to understand division and fractions).
  • They can follow the form of an argument without having to think in terms of specific examples.
  • Adolescents can deal with hypothetical problems with many possible solutions. For example, if asked ‘What would happen if money were abolished in one hour’s time?’, they could speculate about many possible consequences.

From about 12 years children can follow the form of a logical argument without reference to its content. During this time, people develop the ability to think about abstract concepts, and logically test hypotheses.

This stage sees the emergence of scientific thinking, formulating abstract theories and hypotheses when faced with a problem.

Individual Differences

  • Culture & Abstract Thinking: Cultures emphasize different kinds of logical or abstract thinking. For example, in societies with a strong oral tradition, the ability to hold complex narratives might develop prominently.
  • Gender & Ethics: Discussions about morality and ethics can be influenced by gender norms. For instance, in some cultures, girls might be encouraged to prioritize community harmony, while boys might be encouraged to prioritize individual rights.


Piaget’s Theory

  • Piaget’s theory places a strong emphasis on the active role that children play in their own cognitive development.
  • According to Piaget, children are not passive recipients of information; instead, they actively explore and interact with their surroundings.
  • This active engagement with the environment is crucial because it allows them to gradually build their understanding of the world.

1. How Piaget Developed the Theory

Piaget was employed at the Binet Institute in the 1920s, where his job was to develop French versions of questions on English intelligence tests. He became intrigued with the reasons children gave for their wrong answers to the questions that required logical thinking.

He believed that these incorrect answers revealed important differences between the thinking of adults and children.

Piaget branched out on his own with a new set of assumptions about children’s intelligence:

  • Children’s intelligence differs from an adult’s in quality rather than in quantity. This means that children reason (think) differently from adults and see the world in different ways.
  • Children actively build up their knowledge about the world . They are not passive creatures waiting for someone to fill their heads with knowledge.
  • The best way to understand children’s reasoning is to see things from their point of view.

Piaget did not want to measure how well children could count, spell or solve problems as a way of grading their I.Q. What he was more interested in was the way in which fundamental concepts like the very idea of number, time, quantity, causality, justice, and so on emerged.

Piaget studied children from infancy to adolescence using naturalistic observation of his own three babies and sometimes controlled observation too. From these, he wrote diary descriptions charting their development.

He also used clinical interviews and observations of older children who were able to understand questions and hold conversations.

2. Piaget’s Theory Differs From Others In Several Ways:

Piaget’s (1936, 1950) theory of cognitive development explains how a child constructs a mental model of the world. He disagreed with the idea that intelligence was a fixed trait, and regarded cognitive development as a process that occurs due to biological maturation and interaction with the environment.

Children’s ability to understand, think about, and solve problems in the world develops in a stop-start, discontinuous manner (rather than through gradual changes over time).

  • It is concerned with children, rather than all learners.
  • It focuses on development, rather than learning per se, so it does not address learning of information or specific behaviors.
  • It proposes discrete stages of development, marked by qualitative differences, rather than a gradual increase in number and complexity of behaviors, concepts, ideas, etc.

The goal of the theory is to explain the mechanisms and processes by which the infant, and then the child, develops into an individual who can reason and think using hypotheses.

To Piaget, cognitive development was a progressive reorganization of mental processes as a result of biological maturation and environmental experience.

Children construct an understanding of the world around them, then experience discrepancies between what they already know and what they discover in their environment.

Piaget claimed that knowledge cannot simply emerge from sensory experience; some initial structure is necessary to make sense of the world.

According to Piaget, children are born with a very basic mental structure (genetically inherited and evolved) on which all subsequent learning and knowledge are based.

3. Schemas

Schemas are the basic building blocks of such cognitive models, and enable us to form a mental representation of the world.

Piaget (1952, p. 7) defined a schema as: “a cohesive, repeatable action sequence possessing component actions that are tightly interconnected and governed by a core meaning.”

In simpler terms, Piaget called the schema the basic building block of intelligent behavior – a way of organizing knowledge. Indeed, it is useful to think of schemas as “units” of knowledge, each relating to one aspect of the world, including objects, actions, and abstract (i.e., theoretical) concepts.

Wadsworth (2004) suggests that schemata (the plural of schema) be thought of as “index cards” filed in the brain, each one telling an individual how to react to incoming stimuli or information.

When Piaget talked about the development of a person’s mental processes, he was referring to increases in the number and complexity of the schemata that a person had learned.

When a child’s existing schemas are capable of explaining what it can perceive around it, it is said to be in a state of equilibrium, i.e., a state of cognitive (i.e., mental) balance.

Operations are more sophisticated mental structures which allow us to combine schemas in a logical (reasonable) way.

As children grow they can carry out more complex operations and begin to imagine hypothetical (imaginary) situations.

Apart from the schemas we are born with, schemas and operations are learned through interaction with other people and the environment.


Piaget emphasized the importance of schemas in cognitive development and described how they were developed or acquired.

A schema can be defined as a set of linked mental representations of the world, which we use both to understand and to respond to situations. The assumption is that we store these mental representations and apply them when needed.

Examples of Schemas

A person might have a schema about buying a meal in a restaurant. The schema is a stored form of the pattern of behavior which includes looking at a menu, ordering food, eating it and paying the bill.

This is an example of a schema called a “script.” Whenever they are in a restaurant, they retrieve this schema from memory and apply it to the situation.

The schemas Piaget described tend to be simpler than this – especially those used by infants. He described how – as a child gets older – his or her schemas become more numerous and elaborate.

Piaget believed that newborn babies have a small number of innate schemas – even before they have had many opportunities to experience the world. These neonatal schemas are the cognitive structures underlying innate reflexes. These reflexes are genetically programmed into us.

For example, babies have a sucking reflex, which is triggered by something touching the baby’s lips. A baby will suck a nipple, a comforter (dummy), or a person’s finger. Piaget, therefore, assumed that the baby has a “sucking schema.”

Similarly, the grasping reflex which is elicited when something touches the palm of a baby’s hand, or the rooting reflex, in which a baby will turn its head towards something which touches its cheek, are innate schemas. Shaking a rattle would be the combination of two schemas, grasping and shaking.

4. The Process of Adaptation

Piaget also believed that a child developed as a result of two different influences: maturation, and interaction with the environment. The child develops mental structures (schemata) which enable him to solve problems in the environment.

Adaptation is the process by which the child changes its mental models of the world to match more closely how the world actually is.

Adaptation is brought about by the processes of assimilation (making sense of new experiences using existing schemata) and accommodation (changing existing schemata in order to deal with new experiences).

The importance of this viewpoint is that the child is seen as an active participant in its own development rather than a passive recipient of either biological influences (maturation) or environmental stimulation.

When our existing schemas can explain what we perceive around us, we are in a state of equilibration. However, when we meet a new situation that we cannot explain, disequilibrium results; this is an unpleasant sensation which we try to escape, and it gives us the motivation to learn.

According to Piaget, reorganization to higher levels of thinking is not accomplished easily. The child must “rethink” his or her view of the world. An important step in the process is the experience of cognitive conflict.

In other words, the child becomes aware that he or she holds two contradictory views about a situation and they both cannot be true. This step is referred to as disequilibrium .


Jean Piaget (1952; see also Wadsworth, 2004) viewed intellectual growth as a process of adaptation (adjustment) to the world. This happens through assimilation, accommodation, and equilibration.

To get back to a state of equilibration, we need to modify our existing schemas to learn and adapt to the new situation.

This is done through the processes of accommodation and assimilation . This is how our schemas evolve and become more sophisticated. The processes of assimilation and accommodation are continuous and interactive.

5. Assimilation

Piaget defined assimilation as the cognitive process of fitting new information into existing cognitive schemas, perceptions, and understanding. Overall beliefs and understanding of the world do not change as a result of the new information.

Assimilation occurs when a new experience is not very different from previous experiences of a particular object or situation; we assimilate the new situation by adding information to an existing schema.

This means that when you are faced with new information, you make sense of this information by referring to information you already have (information processed and learned previously) and trying to fit the new information into the information you already have.

  • Imagine a young child who has only ever seen small, domesticated dogs. When the child sees a cat for the first time, they might refer to it as a “dog” because it has four legs, fur, and a tail – features that fit their existing schema of a dog.
  • A person who has always believed that all birds can fly might label penguins as birds that can fly. This is because their existing schema or understanding of birds includes the ability to fly.
  • A 2-year-old child sees a man who is bald on top of his head and has long frizzy hair on the sides. To his father’s horror, the toddler shouts “Clown, clown” (Siegler et al., 2003).
  • If a baby learns to pick up a rattle he or she will then use the same schema (grasping) to pick up other objects.

6. Accommodation

Accommodation occurs when the new experience is very different from what we have encountered before, so we need to change our schemas in a radical way or create a whole new schema.

Psychologist Jean Piaget defined accommodation as the cognitive process of revising existing cognitive schemas, perceptions, and understanding so that new information can be incorporated.

This happens when the existing schema (knowledge) does not work, and needs to be changed to deal with a new object or situation.

In order to make sense of some new information, you actually adjust information you already have (schemas you already have, etc.) to make room for this new information.

  • A baby tries to use the same schema for grasping to pick up a very small object. It doesn’t work. The baby then changes the schema by now using the forefinger and thumb to pick up the object.
  • A child may have a schema for birds (feathers, flying, etc.) and then they see a plane, which also flies, but would not fit into their bird schema.
  • In the “clown” incident, the boy’s father explained to his son that the man was not a clown and that even though his hair was like a clown’s, he wasn’t wearing a funny costume and wasn’t doing silly things to make people laugh. With this new knowledge, the boy was able to change his schema of “clown” and make this idea fit better to a standard concept of “clown”.
  • A person who grew up thinking all snakes are dangerous might move to an area where garden snakes are common and harmless. Over time, after observing and learning, they might accommodate their previous belief to understand that not all snakes are harmful.

7. Equilibration

Piaget believed that all human thought seeks order and is uncomfortable with contradictions and inconsistencies in knowledge structures. In other words, we seek “equilibrium” in our cognitive structures.

Equilibrium occurs when a child’s schemas can deal with most new information through assimilation. However, an unpleasant state of disequilibrium occurs when new information cannot be fitted into existing schemas through assimilation.

Piaget believed that cognitive development did not progress at a steady rate, but rather in leaps and bounds. Equilibration is the force which drives the learning process as we do not like to be frustrated and will seek to restore balance by mastering the new challenge (accommodation).

Once the new information is acquired, the process of assimilation with the new schema will continue until the next time we need to make an adjustment to it.

Equilibration is a regulatory process that maintains a balance between assimilation and accommodation to facilitate cognitive growth. Think of it this way: We can’t merely assimilate all the time; if we did, we would never learn any new concepts or principles.

Everything new we encountered would just get put in the same few “slots” we already had. Neither can we accommodate all the time; if we did, everything we encountered would seem new; there would be no recurring regularities in our world. We’d be exhausted by the mental effort!
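Read as an informal algorithm, the assimilation–accommodation loop lends itself to a toy simulation. The Python sketch below is purely illustrative – Piaget never formalized schemas computationally, and the Schema class, the feature-overlap test, and the 0.5 threshold are all invented here for demonstration:

```python
# Toy model of assimilation, accommodation, and equilibration, assuming a
# schema is just a label plus a set of observed features (a deliberate
# simplification of Piaget's concept, for illustration only).

class Schema:
    def __init__(self, label, features):
        self.label = label
        self.features = set(features)

    def fits(self, observed, threshold=0.5):
        """A new experience 'fits' if it shares enough features (arbitrary threshold)."""
        overlap = len(self.features & set(observed))
        return overlap / len(self.features) >= threshold

def encounter(schemas, label, observed):
    for schema in schemas:
        if schema.fits(observed):
            # Assimilation: fold the new experience into an existing schema.
            schema.features |= set(observed)
            return f"assimilated into '{schema.label}'"
    # Disequilibrium: nothing fits, so accommodate by building a new schema.
    schemas.append(Schema(label, observed))
    return f"accommodated: new schema '{label}'"

schemas = [Schema("dog", {"four legs", "fur", "tail", "barks"})]
print(encounter(schemas, "cat", {"four legs", "fur", "tail", "meows"}))
print(encounter(schemas, "bird", {"feathers", "wings", "flies"}))
```

Running it reproduces the “cat called dog” error from the assimilation examples above: the cat shares enough features with the dog schema to be assimilated under it, while the bird shares none and forces accommodation, i.e., a new schema.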


Applications to Education

Think of old black and white films that you’ve seen in which children sat in rows at desks, with ink wells, would learn by rote, all chanting in unison in response to questions set by an authoritarian old biddy like Matilda!

Children who were unable to keep up were seen as slacking and would be punished by variations on the theme of corporal punishment. Yes, it really did happen and in some parts of the world still does today.

Piaget is partly responsible for the change that occurred in the 1960s and for your relatively pleasurable and pain-free school days!


“Children should be able to do their own experimenting and their own research. Teachers, of course, can guide them by providing appropriate materials, but the essential thing is that in order for a child to understand something, he must construct it himself, he must re-invent it. Every time we teach a child something, we keep him from inventing it himself. On the other hand that which we allow him to discover by himself will remain with him visibly”. Piaget (1972, p. 27)

Plowden Report

Piaget (1952) did not explicitly relate his theory to education, although later researchers have explained how features of Piaget’s theory can be applied to teaching and learning.

Piaget has been extremely influential in developing educational policy and teaching practice. For example, a review of primary education by the UK government in 1966 was based strongly on Piaget’s theory. The result of this review led to the publication of the Plowden Report (1967).

In the 1960s the Plowden Committee investigated the deficiencies in education and decided to incorporate many of Piaget’s ideas into its final report published in 1967, even though Piaget’s work was not really designed for education.

The report makes three Piaget-associated recommendations:
  • Children should be given individual attention and it should be realized that they need to be treated differently.
  • Children should only be taught things that they are capable of learning.
  • Children mature at different rates and the teacher needs to be aware of the stage of development of each child so teaching can be tailored to their individual needs.

The report’s recurring themes are individual learning, flexibility in the curriculum, the centrality of play in children’s learning, the use of the environment, learning by discovery and the importance of the evaluation of children’s progress – teachers should “not assume that only what is measurable is valuable.”

Discovery learning – the idea that children learn best through doing and actively exploring – was seen as central to the transformation of the primary school curriculum.

How to teach

Within the classroom, learning should be student-centered and accomplished through active discovery learning. The role of the teacher is to facilitate learning rather than to provide direct tuition.

Because Piaget’s theory is based upon biological maturation and stages, the notion of “readiness” is important. Readiness concerns when certain information or concepts should be taught.

According to Piaget’s theory, children should not be taught certain concepts until they have reached the appropriate stage of cognitive development.

According to Piaget (1958), assimilation and accommodation require an active learner, not a passive one, because problem-solving skills cannot be taught, they must be discovered.

Therefore, teachers should encourage the following within the classroom:
  • Educational programs should be designed to correspond to Piaget’s stages of development. Children in the concrete operational stage should be given concrete means to learn new concepts e.g. tokens for counting.
  • Devising situations that present useful problems, and create disequilibrium in the child.
  • Focus on the process of learning, rather than the end product of it. Instead of checking if children have the right answer, the teacher should focus on the student’s understanding and the processes they used to get to the answer.
  • Child-centered approach. Learning must be active (discovery learning). Children should be encouraged to discover for themselves and to interact with the material instead of being given ready-made knowledge.
  • Accepting that children develop at different rates so arrange activities for individual children or small groups rather than assume that all the children can cope with a particular activity.
  • Using active methods that require rediscovering or reconstructing “truths.”
  • Using collaborative, as well as individual activities (so children can learn from each other).
  • Evaluate the level of the child’s development so suitable tasks can be set.
  • Adapt lessons to suit the needs of the individual child (i.e. differentiated teaching).
  • Be aware of the child’s stage of development (testing).
  • Teach only when the child is ready, i.e., when the child has reached the appropriate stage of development.
  • Providing support for the “spontaneous research” of the child.
  • Educators may use Piaget’s stages to design age-appropriate assessment tools and strategies.

Classroom Activities

Sensorimotor Stage (0-2 years):

Although most kids in this age range are not in a traditional classroom setting, they can still benefit from games that stimulate their senses and motor skills.

  • Object Permanence Games: Play peek-a-boo or hide toys under a blanket to help babies understand that objects still exist even when they can’t see them.
  • Sensory Play: Activities like water play, sand play, or playdough encourage exploration through touch.
  • Imitation: Children at this age love to imitate adults. Use imitation as a way to teach new skills.

Preoperational Stage (2-7 years):

  • Role Playing: Set up pretend play areas where children can act out different scenarios, such as a kitchen, hospital, or market.
  • Use of Symbols: Encourage drawing, building, and using props to represent other things.
  • Hands-on Activities: Children should interact physically with their environment, so provide plenty of opportunities for hands-on learning.
  • Egocentrism Activities: Use exercises that highlight different perspectives. For instance, having two children sit across from each other with an object in between and asking them what the other sees.

Concrete Operational Stage (7-11 years):

  • Classification Tasks: Provide objects or pictures to group, based on various characteristics.
  • Hands-on Experiments: Introduce basic science experiments where they can observe cause and effect, like a simple volcano with baking soda and vinegar.
  • Logical Games: Board games, puzzles, and logic problems help develop their thinking skills.
  • Conservation Tasks: Use experiments to showcase that quantity doesn’t change with alterations in shape, such as the classic liquid conservation task using different shaped glasses.

Formal Operational Stage (11 years and older):

  • Hypothesis Testing: Encourage students to make predictions and test them out.
  • Abstract Thinking: Introduce topics that require abstract reasoning, such as algebra or ethical dilemmas.
  • Problem Solving: Provide complex problems and have students work on solutions, integrating various subjects and concepts.
  • Debate and Discussion: Encourage group discussions and debates on abstract topics, highlighting the importance of logic and evidence.
  • Feedback and Questioning: Use open-ended questions to challenge students and promote higher-order thinking. For instance, rather than asking, “Is this the right answer?”, ask, “How did you arrive at this conclusion?”

While Piaget’s stages offer a foundational framework, they are not universally experienced in the same way by all children.

Social identities play a critical role in shaping cognitive development, necessitating a more nuanced and culturally responsive approach to understanding child development.

Piaget’s stages may manifest differently based on social identities like race, gender, and culture:
  • Race & Teacher Interactions: A child’s race can influence teacher expectations and interactions. For example, racial biases can lead to children of color being perceived as less capable or more disruptive, influencing their cognitive challenges and supports.
  • Racial and Cultural Stereotypes: These can affect a child’s self-perception and self-efficacy. For instance, stereotypes about which racial or cultural groups are “better” at certain subjects can influence a child’s self-confidence and, subsequently, their engagement in that subject.
  • Gender & Peer Interactions: Children learn gender roles from their peers. Boys might be mocked for playing “girl games,” and girls might be excluded from certain activities, influencing their cognitive engagements.
  • Language: Multilingual children might navigate the stages differently, especially if their home language differs from their school language. The way concepts are framed in different languages can influence cognitive processing. Cultural idioms and metaphors can shape a child’s understanding of concepts and their ability to use symbolic representation, especially in the pre-operational stage.

Curriculum Development

According to Piaget, children’s cognitive development is determined by a process of maturation which cannot be altered by tuition, so education should be stage-specific.

For example, a child in the concrete operational stage should not be taught abstract concepts and should be given concrete aid such as tokens to count with.

According to Piaget, children learn through the processes of accommodation and assimilation, so the role of the teacher should be to provide opportunities for these processes to occur, such as new material and experiences that challenge the children’s existing schemas.

Furthermore, according to this theory, children should be encouraged to discover for themselves and to interact with the material instead of being given ready-made knowledge.

Curricula need to be developed that take into account the age and stage of thinking of the child. For example, there is no point in teaching abstract concepts such as algebra or atomic structure to children in primary school.

Curricula also need to be sufficiently flexible to allow for variations in the ability of different students of the same age. In Britain, the National Curriculum and Key Stages broadly reflect the stages that Piaget laid down.

For example, egocentrism dominates a child’s thinking in the sensorimotor and preoperational stages. Piaget would therefore predict that using group activities would not be appropriate since children are not capable of understanding the views of others.

However, Smith et al. (1998), point out that some children develop earlier than Piaget predicted and that by using group work children can learn to appreciate the views of others in preparation for the concrete operational stage.

The National Curriculum emphasizes the need to use concrete examples in the primary classroom.

Shayer (1997) reported that abstract thought was necessary for success in secondary school (and co-developed the CASE system of teaching science). Recently, the National Curriculum has been updated to encourage the teaching of some abstract concepts towards the end of primary education, in preparation for secondary courses (DfEE, 1999).

Child-centered teaching is regarded by some as a child of the ‘liberal sixties.’ In the 1980s the Thatcher government introduced the National Curriculum in an attempt to move away from this and bring more central government control into the teaching of children.

So, although the British National Curriculum in some ways supports the work of Piaget, (in that it dictates the order of teaching), it can also be seen as prescriptive to the point where it counters Piaget’s child-oriented approach.

However, it does still allow for flexibility in teaching methods, allowing teachers to tailor lessons to the needs of their students.

Social Media (Digital Learning)

Jean Piaget could not have anticipated the expansive digital age we now live in.

Today, knowledge dissemination and creation are democratized by the Internet, with platforms like blogs, wikis, and social media allowing for vast collaboration and shared knowledge. This development has prompted a reimagining of the future of education.

Classrooms, traditionally seen as primary sites of learning, are being overshadowed by the rise of mobile technologies and platforms like MOOCs (Passey, 2013).

The millennial generation, defined as the first to grow up with cable TV, the internet, and cell phones, relies heavily on technology.

They view it as an integral part of their identity, with most using it extensively in their daily lives, from keeping in touch with loved ones to consuming news and entertainment (Nielsen, 2014).

Social media platforms offer a dynamic environment conducive to Piaget’s principles. These platforms allow for interactions that nurture knowledge evolution through cognitive processes like assimilation and accommodation.

They emphasize communal interaction and shared activity, fostering both cognitive and socio-cultural constructivism. This shared activity promotes understanding and exploration beyond individual perspectives, enhancing social-emotional learning (Gehlbach, 2010).

A standout advantage of social media in an educational context is its capacity to extend beyond traditional classroom confines. These platforms can foster more inclusive learning, bridging diverse learner groups.

This inclusivity can equalize learning opportunities, potentially diminishing biases based on factors like race or socio-economic status, resonating with Kegan’s (1982) concept of “recruitability.”

However, there are challenges. While the potential of social media in learning is vast, its practical application necessitates intention and guidance. Cuban, Kirkpatrick, and Peck (2001) note that certain educators and students are hesitant about integrating social media into educational contexts.

This hesitancy can stem from technological complexities or potential distractions. Yet, when harnessed effectively, social media can provide a rich environment for collaborative learning and interpersonal development, fostering a deeper understanding of content.

In essence, the rise of social media aligns seamlessly with constructivist philosophies. Social media platforms act as tools for everyday cognition, merging daily social interactions with the academic world, and providing avenues for diverse, interactive, and engaging learning experiences.

Applications to Parenting

Parents can use Piaget’s stages to have realistic developmental expectations of their children’s behavior and cognitive capabilities.

For instance, understanding that a toddler is in the pre-operational stage can help parents be patient when the child is egocentric.

Play Activities

Recognizing the importance of play in cognitive development, many parents provide toys and games suited for their child’s developmental stage.

Parents can offer activities that are slightly beyond their child’s current abilities, leveraging Vygotsky’s concept of the “Zone of Proximal Development,” which complements Piaget’s ideas.

  • Peek-a-boo: Helps with object permanence.
  • Texture Touch: Provide different textured materials (soft, rough, bumpy, smooth) for babies to touch and feel.
  • Sound Bottles: Fill small bottles with different items like rice, beans, and bells, and have children shake and listen to the different sounds.
  • Memory Games: Using cards with pictures, place them face down, and ask children to find matching pairs.
  • Role Playing and Pretend Play: Let children act out roles or stories that enhance symbolic thinking. Encourage symbolic play with dress-up clothes, playsets, or toy cash registers. Provide prompts or scenarios to extend their imagination.
  • Story Sequencing: Give children cards with parts of a story and have them arrange the cards in the correct order.
  • Number Line Jumps: Create a number line on the floor with tape. Ask children to jump to the correct answer for math problems.
  • Classification Games: Provide a mix of objects and ask children to classify them based on different criteria (e.g., color, size, shape).
  • Logical Puzzle Games: Games that involve problem-solving using logic, such as simple Sudoku puzzles or logic grid puzzles.
  • Debate and Discussion: Provide a topic and let children debate the pros and cons. This promotes abstract thinking and logical reasoning.
  • Hypothesis Testing Games: Present a scenario and have children come up with hypotheses and ways to test them.
  • Strategy Board Games: Games like chess, checkers, or Settlers of Catan can help in developing strategic and forward-thinking skills.

Critical Evaluation

  • The influence of Piaget’s ideas on developmental psychology has been enormous. He changed how people viewed the child’s world and their methods of studying children.

He was an inspiration to many who came after and took up his ideas. Piaget’s ideas have generated a huge amount of research which has increased our understanding of cognitive development.

  • Piaget (1936) was one of the first psychologists to make a systematic study of cognitive development. His contributions include a stage theory of child cognitive development, detailed observational studies of cognition in children, and a series of simple but ingenious tests to reveal different cognitive abilities.
  • His ideas have been of practical use in understanding and communicating with children, particularly in the field of education (re: Discovery Learning). Piaget’s theory has been applied across education.
  • According to Piaget’s theory, educational programs should be designed to correspond to the stages of development.
  • Are the stages real? Vygotsky and Bruner would rather not talk about stages at all, preferring to see development as a continuous process. Others have queried the age ranges of the stages. Some studies have shown that progress to the formal operational stage is not guaranteed.

For example, Keating (1979) reported that 40-60% of college students fail at formal operational tasks, and Dasen (1994) states that only one-third of adults ever reach the formal operational stage.

The fact that the formal operational stage is not reached in all cultures and not all individuals within cultures suggests that it might not be biologically based.

  • According to Piaget, the rate of cognitive development cannot be accelerated, as it is based on biological processes. However, direct tuition can speed up development, which suggests that it is not entirely based on biological factors.
  • Because Piaget concentrated on the universal stages of cognitive development and biological maturation, he failed to consider the effect that the social setting and culture may have on cognitive development.

Cross-cultural studies show that the stages of development (except the formal operational stage) occur in the same order in all cultures suggesting that cognitive development is a product of a biological process of maturation.

However, the age at which the stages are reached varies between cultures and individuals which suggests that social and cultural factors and individual differences influence cognitive development.

Dasen (1994) cites studies he conducted in remote parts of the central Australian desert with 8-14-year-old Indigenous Australians. He gave them conservation of liquid tasks and spatial awareness tasks. He found that the ability to conserve came later in the Aboriginal children, between ages of 10 and 13 (as opposed to between 5 and 7, with Piaget’s Swiss sample).

However, he found that spatial awareness abilities developed earlier amongst the Aboriginal children than the Swiss children. Such a study demonstrates cognitive development is not purely dependent on maturation but on cultural factors too – spatial awareness is crucial for nomadic groups of people.

Vygotsky , a contemporary of Piaget, argued that social interaction is crucial for cognitive development. According to Vygotsky the child’s learning always occurs in a social context in cooperation with someone more skillful (MKO). This social interaction provides language opportunities and Vygotsky considered language the foundation of thought.

  • Piaget’s methods (observation and clinical interviews) are more open to biased interpretation than other methods. Piaget made careful, detailed naturalistic observations of children, and from these, he wrote diary descriptions charting their development. He also used clinical interviews and observations of older children who were able to understand questions and hold conversations.

Because Piaget conducted the observations alone the data collected are based on his own subjective interpretation of events. It would have been more reliable if Piaget conducted the observations with another researcher and compared the results afterward to check if they are similar (i.e., have inter-rater reliability).

Although clinical interviews allow the researcher to explore data in more depth, the interpretation of the interviewer may be biased.

For example, children may not understand the question/s, they have short attention spans, they cannot express themselves very well, and may be trying to please the experimenter. Such methods meant that Piaget may have formed inaccurate conclusions.

  • As several studies have shown, Piaget underestimated the abilities of children because his tests were sometimes confusing or difficult to understand (e.g., Hughes, 1975).

Piaget failed to distinguish between competence (what a child is capable of doing) and performance (what a child can show when given a particular task). When tasks were altered, performance (and therefore competence) was affected. Therefore, Piaget might have underestimated children’s cognitive abilities.

For example, a child might have object permanence (competence) but still not be able to search for objects (performance). When Piaget hid objects from babies, he found that it wasn’t until after nine months that they looked for them.

However, Piaget relied on manual search methods – whether or not the child physically searched for the object.

Later, researchers such as Baillargeon and DeVos (1991) reported that infants as young as four months looked longer at a moving carrot that didn’t do what they expected, suggesting they had some sense of permanence; otherwise they wouldn’t have had any expectation of what it should or shouldn’t do.

  • The concept of schema is incompatible with the theories of Bruner (1966) and Vygotsky (1978). Behaviorism would also refute Piaget’s schema theory because a schema cannot be directly observed, as it is an internal process; therefore, behaviorists would claim it cannot be objectively measured.
  • Piaget studied his own children and the children of his colleagues in Geneva to deduce general principles about the intellectual development of all children. His sample was very small and composed solely of European children from families of high socio-economic status. Researchers have, therefore, questioned the generalisability of his data.
  • For Piaget, language is considered secondary to action, i.e., thought precedes language. The Russian psychologist Lev Vygotsky (1978) argues that the development of language and thought go together and that the origin of reasoning has more to do with our ability to communicate with others than with our interaction with the material world.

Piaget’s Theory vs Vygotsky

Piaget maintains that cognitive development stems largely from independent explorations in which children construct knowledge of their own.

Vygotsky, in contrast, argues that children learn through social interactions, building knowledge by learning from more knowledgeable others such as peers and adults. In other words, Vygotsky believed that culture affects cognitive development.

These factors lead to differences in the education style they recommend: Piaget would argue for the teacher to provide opportunities that challenge the children’s existing schemas and for children to be encouraged to discover for themselves.

Alternatively, Vygotsky would recommend that teachers assist the child to progress through the zone of proximal development by using scaffolding.

However, both theories view children as actively constructing their own knowledge of the world; they are not seen as just passively absorbing knowledge.

They also agree that cognitive development involves qualitative changes in thinking, not only a matter of learning more things.

What is cognitive development?

Cognitive development is how a person’s ability to think, learn, remember, problem-solve, and make decisions changes over time.

This includes the growth and maturation of the brain, as well as the acquisition and refinement of various mental skills and abilities.

Cognitive development is a major aspect of human development, and both genetic and environmental factors heavily influence it. Key domains of cognitive development include attention, memory, language skills, logical reasoning, and problem-solving.

Various theories, such as those proposed by Jean Piaget and Lev Vygotsky, provide different perspectives on how this complex process unfolds from infancy through adulthood.

What are the 4 stages of Piaget’s theory?

Piaget divided children’s cognitive development into four stages; each of the stages represents a new way of thinking and understanding the world.

He called them (1) sensorimotor intelligence, (2) preoperational thinking, (3) concrete operational thinking, and (4) formal operational thinking. Each stage is correlated with an age period of childhood, but only approximately.

According to Piaget, intellectual development takes place through stages that occur in a fixed order and which are universal (all children pass through these stages regardless of social or cultural background).

Development can only occur when the brain has matured to a point of “readiness”.

What are some of the weaknesses of Piaget’s theory?

Cross-cultural studies show that the stages of development (except the formal operational stage) occur in the same order in all cultures, suggesting that cognitive development is a product of a biological maturation process.

However, the age at which the stages are reached varies between cultures and individuals, suggesting that social and cultural factors and individual differences influence cognitive development.

What are Piaget’s concepts of schemas?

Schemas are mental structures that contain all of the information relating to one aspect of the world around us.

According to Piaget, we are born with a few primitive schemas, such as sucking, which give us the means to interact with the world.

These are physical, but as the child develops, they become mental schemas. These schemas become more complex with experience.

References

Baillargeon, R., & DeVos, J. (1991). Object permanence in young infants: Further evidence. Child Development, 1227-1246.

Bruner, J. S. (1966). Toward a theory of instruction. Cambridge, MA: Belknap Press.

Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38(4), 813-834.

Dasen, P. (1994). Culture and cognitive development from a Piagetian perspective. In W. J. Lonner & R. S. Malpass (Eds.), Psychology and culture (pp. 145-149). Boston, MA: Allyn and Bacon.

Gehlbach, H. (2010). The social side of school: Why teachers need social psychology. Educational Psychology Review, 22, 349-362.

Hughes, M. (1975). Egocentrism in preschool children. Unpublished doctoral dissertation. Edinburgh University.

Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books.

Keating, D. (1979). Adolescent thinking. In J. Adelson (Ed.), Handbook of adolescent psychology (pp. 211-246). New York: Wiley.

Kegan, R. (1982). The evolving self: Problem and process in human development. Harvard University Press.

Nielsen. (2014). Millennials: Technology = social connection. http://www.nielsen.com/content/corporate/us/en/insights/news/2014/millennials-technology-social-connection.html

Passey, D. (2013). Inclusive technology enhanced learning: Overcoming cognitive, physical, emotional, and geographic challenges. Routledge.

Piaget, J. (1932). The moral judgment of the child. London: Routledge & Kegan Paul.

Piaget, J. (1936). Origins of intelligence in the child. London: Routledge & Kegan Paul.

Piaget, J. (1945). Play, dreams and imitation in childhood. London: Heinemann.

Piaget, J. (1957). Construction of reality in the child. London: Routledge & Kegan Paul.

Piaget, J., & Cook, M. T. (1952). The origins of intelligence in children. New York, NY: International University Press.

Piaget, J. (1981). Intelligence and affectivity: Their relationship during child development (T. A. Brown & C. E. Kaegi, Trans. & Eds.). Annual Reviews.

Plowden, B. H. P. (1967). Children and their primary schools: A report (Research and Surveys). London, England: HM Stationery Office.

Siegler, R. S., DeLoache, J. S., & Eisenberg, N. (2003). How children develop. New York: Worth.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wadsworth, B. J. (2004). Piaget’s theory of cognitive and affective development: Foundations of constructivism. New York: Longman.

Further Reading

  • BBC Radio Broadcast about the Three Mountains Study
  • Piagetian stages: A critical review
  • Bronfenbrenner’s Ecological Systems Theory



Ch 2: Psychological Research Methods

Children sit in front of a bank of television screens. A sign on the wall says, “Some content may not be suitable for children.”

Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

The topic of violence in the media today is contentious. Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of quickly changing technologies, questions about the effects of media continue to emerge. Is it okay to talk on a cell phone while driving? Are headphones good to use in a car? What impact does text messaging have on reaction time while driving? These are the types of questions that psychologist David Strayer asks in his lab.

Watch this short video to see how Strayer utilizes the scientific method to reach important conclusions regarding technology and driving safety.

You can view the transcript for “Understanding driver distraction” here.

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.

Introduction to the Scientific Method

Learning objectives.

  • Explain the steps of the scientific method
  • Describe why the scientific method is important to psychology
  • Summarize the processes of informed consent and debriefing
  • Explain how research involving humans or animals is regulated

Photograph of the word “research” from a dictionary, with a pen pointing at the word.

Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

The Scientific Process

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see the behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Process of Scientific Research

Flowchart of the scientific method. It begins with make an observation, then ask a question, form a hypothesis that answers the question, make a prediction based on the hypothesis, do an experiment to test the prediction, analyze the results, determine whether the hypothesis is supported or not, then report the results.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the hypothesis is supported, find more evidence or look for counter-evidence
  • If the hypothesis is not supported, create a new hypothesis or try again
  • Draw conclusions and repeat–the scientific method is never-ending, and no result is ever considered perfect

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.

Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests.

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the theory to form a hypothesis,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can, in principle, be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or conduct a physical experiment that would show the hypothesis to be unsupported. Falsifiable does not mean false: a hypothesis need not actually be shown to be false, only be testable in such a way that future evidence could disprove it.

To determine whether a hypothesis is supported or not supported, psychological researchers must conduct hypothesis testing using statistics. Hypothesis testing is a set of statistical procedures for assessing how likely the observed results would be if chance alone were at work. If hypothesis testing reveals that results were “statistically significant,” there is support for the hypothesis, and the researchers can be reasonably confident that their result was not due to random chance. If the results are not statistically significant, the researchers’ hypothesis was not supported.
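To make the logic of significance testing concrete, here is a minimal sketch in Python using an independent-samples t-test. The exam scores and the conventional .05 threshold are illustrative assumptions, not data from any actual study:

```python
# A minimal sketch of hypothesis testing: do two groups differ more than
# chance alone would predict? The scores below are made-up illustrative data.
from scipy import stats

group_a = [84, 88, 75, 91, 86, 79, 90, 85]   # e.g., students who studied all night
group_b = [72, 80, 68, 77, 74, 70, 81, 69]   # e.g., students who did not study

t_statistic, p_value = stats.ttest_ind(group_a, group_b)

# By convention, p < .05 is treated as "statistically significant": the observed
# difference would be unlikely if chance alone were at work.
if p_value < 0.05:
    print(f"Statistically significant difference (p = {p_value:.4f})")
else:
    print(f"No statistically significant difference (p = {p_value:.4f})")
```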

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 5). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and they influenced the development of many modern forms of therapy.

(a) A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Link to Learning

Why the scientific method is important for psychology.

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which introduces the background information and outlines the hypotheses; a Methods section, which outlines the specifics of how the experiment was conducted to test the hypothesis; a Results section, which includes the statistics that tested the hypothesis and states whether it was supported or not; and a Discussion and Conclusion, which state the implications of finding support for, or no support for, the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Ethics in Research

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB) . The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 6). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

A photograph shows a group of people seated around tables in a meeting room.

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 7). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals that tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently their children born from their wives) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?

A photograph shows a person administering an injection.

Learn more about the Tuskegee Syphilis Study on the CDC website .

Research Involving Animal Subjects

A photograph shows a rat.

Researchers who work with animal subjects are not immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental protocols provide for the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

Introduction to Approaches to Research

Learning objectives.

  • Differentiate between descriptive, correlational, and experimental research
  • Explain the strengths and weaknesses of case studies, naturalistic observation, and surveys
  • Describe the strengths and weaknesses of archival research
  • Compare longitudinal and cross-sectional approaches to research
  • Explain what a correlation coefficient tells us about the relationship between variables
  • Describe why correlation does not mean causation
  • Describe the experimental process, including ways to control for bias
  • Identify and differentiate between independent and dependent variables

Three researchers review data while talking around a microscope.

Psychologists use descriptive, experimental, and correlational methods to conduct research. Descriptive, or qualitative, methods include the case study, naturalistic observation, surveys, archival research, longitudinal research, and cross-sectional research.

Experiments are conducted in order to determine cause-and-effect relationships. In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not. The researcher then measures the changes that are produced in the dependent variable in each group. Once data are collected from both groups, they are analyzed statistically to determine whether there are meaningful differences between the groups.

When scientists passively observe and measure phenomena it is called correlational research. Here, psychologists do not intervene and change behavior, as they do in experiments. In correlational research, they identify patterns of relationships, but usually cannot infer what causes what. Importantly, a correlation describes the relationship between exactly two variables at a time.

Watch It: More on Research

If you enjoy learning through lectures and want an interesting and comprehensive summary of this section, then click on the YouTube link to watch a lecture given by MIT Professor John Gabrieli. Start at the 30:45 mark and watch through the end to hear examples of actual psychological studies and how they were analyzed. Listen for references to independent and dependent variables, experimenter bias, and double-blind studies. In the lecture, you’ll learn about breaking social norms, “WEIRD” research, why expectations matter, how a warm cup of coffee might make you nicer, why you should change your answer on a multiple choice test, and why praise for intelligence won’t make you any smarter.

You can view the transcript for “Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011” here.

Descriptive Research

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Other approaches involve interactions between the researcher and the individuals being studied, ranging from a series of simple questions to extensive, in-depth interviews to well-controlled experiments.

The three main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called descriptive, or qualitative, studies. These studies are used to describe general or specific behaviors and attributes that are observed and measured. In the early stages of research it might be difficult to form a hypothesis, especially when there is not any existing literature in the area. In these situations designing an experiment would be premature, as the question of interest is not yet clearly defined as a hypothesis. Often a researcher will begin with a non-experimental approach, such as a descriptive study, to gather more information about the topic before designing an experiment or correlational study to address a specific hypothesis. Descriptive research is distinct from correlational research, in which psychologists formally test whether a relationship exists between two or more variables. Experimental research goes a step further beyond descriptive and correlational research and randomly assigns people to different conditions, using hypothesis testing to make inferences about how these conditions affect behavior. It aims to determine if one variable directly impacts and causes another. Correlational and experimental research both typically use hypothesis testing, whereas descriptive research does not.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data were collected.

Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in the text, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.

The three main types of descriptive studies are naturalistic observation, case studies, and surveys.

Naturalistic Observation

If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?

This is very similar to the phenomenon mentioned earlier in this module: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about hand washing, we have other options available to us.

Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation: observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).

A photograph shows two police cars driving, one with its lights flashing.

It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway (Figure 9).

It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The anthropologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa (Figure 10). As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).

(a) A photograph shows Jane Goodall speaking from a lectern. (b) A photograph shows a chimpanzee’s face.

The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.

The major downside of naturalistic observation is that such studies are often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s hand washing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher you have no control over when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.

Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation, a procedure developed by Mary Ainsworth (you will read more about this in the module on lifespan development). The Strange Situation is used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.

Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
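As a rough illustration, inter-rater reliability is often quantified with a statistic such as Cohen’s kappa, which corrects the raw agreement between two observers for the agreement expected by chance. The observation codes below are hypothetical:

```python
# A minimal sketch of inter-rater reliability using Cohen's kappa.
# Two observers independently classify the same ten playground episodes.
from sklearn.metrics import cohen_kappa_score

observer_1 = ["aggressive", "prosocial", "prosocial", "aggressive", "prosocial",
              "prosocial", "aggressive", "prosocial", "prosocial", "prosocial"]
observer_2 = ["aggressive", "prosocial", "aggressive", "aggressive", "prosocial",
              "prosocial", "aggressive", "prosocial", "prosocial", "aggressive"]

# Kappa of 1.0 means perfect agreement; 0.0 means agreement no better than chance.
kappa = cohen_kappa_score(observer_1, observer_2)
print(f"Cohen's kappa: {kappa:.2f}")
```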

Case Studies

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Surveys

Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally (Figure 11). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.

Surveys allow researchers to gather data from larger samples than may be afforded by other research methods. A sample is a subset of individuals selected from a population, which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population.
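The logic of sampling can be sketched in a few lines of code; the population and sample sizes below are arbitrary numbers chosen for illustration:

```python
# A minimal sketch of drawing a simple random sample from a population.
import random

random.seed(42)                            # fixed seed for a reproducible example
population = list(range(10_000))           # every individual of interest (hypothetical)
sample = random.sample(population, k=200)  # subset drawn without replacement

# Random selection gives every individual an equal chance of inclusion,
# which is what lets researchers generalize from sample to population.
print(len(sample), sample[:5])
```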

A sample online survey reads, “Dear visitor, your opinion is important to us. We would like to invite you to participate in a short survey to gather your opinions and feedback on your news consumption habits. The survey will take approximately 10-15 minutes. Simply click the “Yes” button below to launch the survey. Would you like to participate?” Two buttons are labeled “yes” and “no.”

Surveys have both strengths and weaknesses in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.

Another potential weakness of surveys is something we touched on earlier in this chapter: people don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.

Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins, Ruppel, Kizer, Yehl, and Griffin (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).


Archival Research

(a) A photograph shows stacks of paper files on shelves. (b) A photograph shows a computer.

In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.

Longitudinal and Cross-Sectional Research

Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.

Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of observing a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is limited by differences between generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences that make different generations of individuals different from one another.
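The difference between the two designs is easiest to see in the shape of the data each one produces. The sketch below uses hypothetical dietary data:

```python
# A minimal sketch contrasting longitudinal and cross-sectional data.
import pandas as pd

# Longitudinal: the SAME participants are measured at ages 20, 30, and 40.
longitudinal = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "age":         [20, 30, 40, 20, 30, 40],
    "veg_servings_per_day": [2.0, 2.5, 3.0, 1.0, 1.5, 2.5],
})

# Cross-sectional: DIFFERENT participants, each measured once, compared by age group.
cross_sectional = pd.DataFrame({
    "participant": [101, 102, 103, 104, 105, 106],
    "age_group":   [20, 20, 30, 30, 40, 40],
    "veg_servings_per_day": [2.0, 1.0, 2.5, 1.5, 3.0, 2.5],
})

# Within-person change over time (longitudinal):
print(longitudinal.groupby("age")["veg_servings_per_day"].mean())

# Between-group differences at one moment (cross-sectional); any difference
# may reflect cohort effects as well as age itself.
print(cross_sectional.groupby("age_group")["veg_servings_per_day"].mean())
```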

To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.

Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.

Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) (Figure 13).

A photograph shows pack of cigarettes and cigarettes in an ashtray. The pack of cigarettes reads, “Surgeon general’s warning: smoking causes lung cancer, heart disease, emphysema, and may complicate pregnancy.”

As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.

Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, attrition rates, or reductions in the number of research participants due to dropouts, are quite high in longitudinal studies and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants, fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.

Correlational Research

Did you know that as sales in ice cream increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime, do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream sales and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.

It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
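A short simulation makes the point vividly. In the sketch below the data are fabricated so that temperature drives both ice cream sales and crime; the two outcome variables then correlate strongly with each other even though neither causes the other:

```python
# A minimal sketch of a confounding variable: temperature drives both
# ice cream sales and crime in this simulated data set.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(0, 35, size=365)                  # daily temperature (simulated)
ice_cream_sales = 10 * temperature + rng.normal(0, 30, 365)
crime_rate = 2 * temperature + rng.normal(0, 10, 365)

# Ice cream sales and crime are strongly correlated...
print(f"r(ice cream, crime) = {np.corrcoef(ice_cream_sales, crime_rate)[0, 1]:.2f}")

# ...but only because both track the confound we built in (temperature).
print(f"r(temperature, crime) = {np.corrcoef(temperature, crime_rate)[0, 1]:.2f}")
```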

Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.

A photograph shows a bowl of cereal.

Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations? For example, someone at a healthy weight may be more likely to regularly eat a healthy breakfast than someone who is obese or who avoids meals in an attempt to diet (Figure 15). While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.

Watch this clip from Freakonomics for an example of how correlation does not indicate causation.

You can view the transcript for “Correlation vs. Causality: Freakonomics Movie” here.

Illusory Correlations

The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 16).

A photograph shows the moon.

There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.

Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias . Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).

We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about, and what can be done in the future to combat it?

Experiments

To establish causality, researchers conduct experiments and use the resulting data to test an experimental hypothesis.

In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon (Figure 17).

A photograph shows a child pointing a toy gun.

These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference: the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since experimental manipulation is the only difference between the experimental and control groups, we can be sure that any differences between the two are due to experimental manipulation rather than chance.

In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch nonviolent television programming for the same amount of time and then measure their violent behavior in the same way.

We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we may also choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.
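To see how an operational definition removes ambiguity, here is a minimal Python sketch; the behavior codes are hypothetical illustrations, not taken from any actual study.

```python
# A minimal sketch of an operational definition expressed in code.
# The behavior codes below are hypothetical, for illustration only.

# Operational definition: a "violent act" is a kick, punch, or shove.
VIOLENT_ACTS = {"kick", "punch", "shove"}

def count_violent_acts(observed_behaviors):
    """Count how many observed behaviors meet the operational definition."""
    return sum(1 for act in observed_behaviors if act in VIOLENT_ACTS)

# Hypothetical observation log for one child during the playground hour
log = ["run", "punch", "laugh", "kick", "share toy", "shove"]
print(count_violent_acts(log))  # 3
```

Because the definition is written down explicitly, any observer coding the same log would arrive at the same count.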

Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts is recorded.

Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This situation is a single-blind study, meaning that one group (the participants) is unaware of which group they are in (experimental or control) while the researcher who developed the experiment knows which participants are in each group.

A photograph shows three glass bottles of pills labeled as placebos.

In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.

The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.

Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (Figure 18).

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (Figure 19). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.

A box labeled “independent variable: type of television programming viewed” contains a photograph of a person shooting an automatic weapon. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: violent behavior displayed” and has a photograph of a child pointing a toy gun.

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants so we need to determine who to include. Participants are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 20). If possible, we should use a random sample (there are other types of samples, but for the purposes of this section, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.

In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders is a very large population, so we need to be more specific; instead we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders who we want to participate in our experiment.

In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.
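To make the idea of a simple random sample concrete, here is a minimal Python sketch using only the standard library; the roster size and student IDs are hypothetical.

```python
import random

# Hypothetical roster: every fourth grader in the city, numbered 1 to 5,000.
population = list(range(1, 5001))

# Simple random sample: every student has an equal chance of being chosen.
sample = random.sample(population, k=200)
print(len(sample))  # 200 participants, drawn without replacement
```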

(a) A photograph shows an aerial view of crowds on a street. (b) A photograph shows a small group of children.

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment , all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.
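The “statistical software” here can be as simple as a shuffle. Below is a minimal sketch, assuming the 200 hypothetical participant IDs from the sampling step above.

```python
import random

participants = list(range(1, 201))  # hypothetical IDs for the 200 children

random.shuffle(participants)             # put the IDs in a random order
experimental_group = participants[:100]  # will watch the violent program
control_group = participants[100:]       # will watch the nonviolent program
```

Because the shuffle is random, every child is equally likely to land in either group.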

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Introduction to Statistical Thinking

Psychologists use statistics to assist them in analyzing data and to describe more precisely whether results are statistically significant. Analyzing data using statistics enables researchers to find patterns, make claims, and share their results with others. In this section, you’ll learn about some of the tools that psychologists use in statistical analysis.

  • Define reliability and validity
  • Describe the importance of distributional thinking and the role of p-values in statistical inference
  • Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions
  • Describe the basic structure of a psychological research article

Interpreting Experimental Findings

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, if chance alone were at work, a difference this large would be expected in fewer than 5 out of 100 repetitions of the experiment.
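As a rough illustration of this 5-percent criterion, the sketch below runs a two-sample t-test, one common choice of analysis (the examples later in this chapter use simulation instead). The counts of violent acts are invented for demonstration; a real analysis would depend on the study’s actual design and data.

```python
from scipy import stats

# Invented counts of violent acts per child (for illustration only).
experimental = [5, 7, 6, 8, 4, 9, 7, 6]  # watched violent programming
control      = [3, 2, 4, 3, 5, 2, 4, 3]  # watched nonviolent programming

# How likely is a group difference this large if only chance were at work?
t_stat, p_value = stats.ttest_ind(experimental, control)
print(p_value, p_value < 0.05)  # p below .05 counts as significant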

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Reporting Research

When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.

A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.

Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.

Dig Deeper: The Vaccine-Autism Myth and the Retraction of Published Studies

Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, several peer-reviewed publications published research making these claims. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.

A published piece of work can be retracted when data are called into question because of falsification, fabrication, or serious research design problems. Once a study is retracted, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (Figure 21). For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.

A photograph shows a child being given an oral vaccine.

Reliability and Validity

Dig Deeper: Everyday Connection: How Valid Is the SAT?

Standardized tests like the SAT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT for admission, this high degree of predictive validity might be comforting.

However, the emphasis placed on SAT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that the SAT is a biased test that places minority students at a disadvantage and unfairly reduces the likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the SAT’s ability to predict the GPA of first-year college students is grossly exaggerated; in fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).

In 2014, College Board president David Coleman expressed his awareness of these problems, recognizing that college success is more accurately predicted by high school grades than by SAT scores. To address these concerns, he has called for significant changes to the SAT exam (Lewin, 2014).

Statistical Significance

Coffee cup with heart-shaped cream inside.

Does drinking coffee actually increase your life expectancy? One study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women’s chances were 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit? We will explore these results in more depth in the next section about drawing conclusions from statistics. Modern society has become awash in studies such as this; you can read about several such studies in the news every day.

Conducting such a study well, and interpreting its results, requires understanding the basic ideas of statistics, the science of gaining insight from data. The key components of a statistical investigation are:

  • Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals? Were changes made to the participants’ coffee habits during the course of the study?
  • Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers?
  • Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance?
  • Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that the coffee drinking is the cause of the decreased risk of death?)

Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of overall statistical investigation. In this section, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about.

Distributional Thinking

When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. Let’s take a look at an example.

Example 1 : Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Figure 23.

Table showing patients’ reading levels and pamphlets’ readability levels.

  • Data vary. More specifically, values of a variable (such as the reading level of a cancer patient or the readability level of a cancer pamphlet) vary.
  • Analyzing the pattern of variation, called the distribution of the variable, often reveals insights.

Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 24.

Bar graph showing that the reading level of pamphlets is typically higher than the reading level of the patients.

Figure 24 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
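A minimal sketch of this comparison, using invented grade-level data (the actual Short, Moriarty, & Cooley values are not reproduced here), shows how matching medians can hide a mismatch between distributions.

```python
# Invented grade-level data for illustration; not the published values.
patient_levels  = [4, 5, 6, 7, 9, 9, 10, 11, 12, 12]  # patients' reading levels
pamphlet_levels = [7, 8, 9, 9, 10, 11, 12]            # pamphlets' readability

def median(xs):
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

print(median(patient_levels), median(pamphlet_levels))  # 9.0 9 -- centers agree

# The distributions still misalign: some patients read below the level of
# even the most readable pamphlet.
below = sum(level < min(pamphlet_levels) for level in patient_levels)
print(below / len(patient_levels))  # 0.3 -- these patients would need help
```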

Finding Significance in Data

Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population? Let’s take a look at another example.

Example 2 : In a study reported in the November 2007 issue of Nature , researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with.

The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy. One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged it so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand?

Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many of the variables that might affect the responses as possible. It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process.

Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.

If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value represents the likelihood that results at least as extreme as those observed would occur by chance alone. Within psychology, the most common standard for p-values is “p < .05”. What this means is that there is less than a 5% probability that results this extreme would happen just by random chance if no real effect were present; when results clear this bar, we say they show statistical significance.

So, in the study above, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy.

If we compare the p-value to a cut-off value, like 0.05, we see that the p-value is smaller. Because the p-value is smaller than the cut-off value, we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.
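The coin-toss probability described above can be computed exactly in a few lines of Python:

```python
from math import comb

# Chance of 14 or more heads in 16 fair coin tosses: the probability that
# 14+ of 16 infants would pick the helper toy if each choice were 50/50.
n = 16
p_value = sum(comb(n, k) for k in range(14, n + 1)) / 2**n
print(round(p_value, 4))  # 0.0021, matching the value reported above
```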

Drawing Conclusions from Statistics

Generalizability

Photo of a diverse group of college-aged students.

One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample ) from a much larger group of individuals (the population ) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.

Example 3: The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.

In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size), a quantity known as the margin of error. A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
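Here is a minimal sketch of that margin-of-error arithmetic; the text above rounds the roughly 3.2-point margin to 3 points to get the 80.6%–86.6% interval.

```python
from math import sqrt

n = 977              # respondents for this GSS item
p_hat = 817 / n      # observed proportion, about 83.6%

margin = 1 / sqrt(n)  # rough rule of thumb: about 0.032, or 3 points
print(f"{p_hat:.1%} +/- {margin:.1%}")                   # 83.6% +/- 3.2%
print(f"({p_hat - margin:.1%}, {p_hat + margin:.1%})")   # rough 95% interval
```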

The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance of the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.

Cause and Effect

In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how were the groups formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?

Example 4 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 26, where higher scores indicate more creativity.

A dot plot of creativity scores, which vary between 5 and 27, grouped by the type of motivation each person was given, either extrinsic or intrinsic.

In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?

Figure 26 reveals that both motivation groups saw considerable variability in creativity scores, and these scores have considerable overlap between the groups. In other words, it’s certainly not always the case that those with intrinsic motivations have higher creativity scores than those with extrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)

The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores in the groups. We can measure variability with statistics using, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large.

We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.

But does this always work? Not necessarily; by “luck of the draw” the groups may be a little different prior to answering the motivation survey. So then the question is, is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem was going to get the same creativity score no matter which group they were assigned to, that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?

We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on index cards, shuffling the cards, and then dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 27 shows the results from 1,000 such hypothetical random assignments for these scores.

Standard distribution in a typical bell curve.

Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.
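Below is a sketch of this shuffling procedure in Python. Because the raw scores are not reproduced in this section, the data are simulated from the reported group means and standard deviations; a real analysis would shuffle the actual 47 scores.

```python
import random

random.seed(1)  # make the simulation reproducible

# Simulated scores (not the actual data): 24 intrinsic, 23 extrinsic subjects.
intrinsic = [random.gauss(19.88, 4.40) for _ in range(24)]
extrinsic = [random.gauss(15.74, 5.25) for _ in range(23)]
observed_diff = sum(intrinsic) / 24 - sum(extrinsic) / 23

# Permutation test: shuffle all 47 scores, re-deal them into groups of
# 24 and 23, and record how often chance alone matches the observed gap.
scores = intrinsic + extrinsic
extreme = 0
for _ in range(1000):
    random.shuffle(scores)
    diff = sum(scores[:24]) / 24 - sum(scores[24:]) / 23
    if diff >= observed_diff:
        extreme += 1

print(extreme / 1000)  # approximate p-value
```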

Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.

Close-up photo of mathematical equations.

Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.

So where does this leave us with regard to the coffee study mentioned previously (in which Freedman, Park, Abnet, Hollenbeck, and Sinha (2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying, and women a 15% lower chance, than those who drank none)? We can answer many of the questions:

  • This was a 14-year study conducted by researchers at the National Cancer Institute.
  • The results were published in the June issue of the New England Journal of Medicine, a respected, peer-reviewed journal.
  • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
  • About 52,000 people died during the course of the study.
  • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
  • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
  • Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
  • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.

This study needs to be reviewed in the larger context of similar studies and consistency of results across studies, with the constant caution that this was not a randomized experiment. Whereas a statistical analysis can still “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.

Explore these outside resources to learn more about applied statistics:

  • Video about p-values:  P-Value Extravaganza
  • Interactive web applets for teaching and learning statistics
  • Inter-university Consortium for Political and Social Research, where you can find and analyze data.
  • The Consortium for the Advancement of Undergraduate Statistics
  • Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented?
  • Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not.

How to Read Research

In this course and throughout your academic career, you’ll be reading journal articles (articles written by experts and published in peer-reviewed journals) and reports that explain psychological research. It’s important to understand the format of these articles so that you can read them strategically and understand the information presented. Scientific articles vary in content and structure, depending on the type of journal to which they will be submitted. Psychological articles and many papers in the social sciences follow the writing guidelines and format dictated by the American Psychological Association (APA). In general, the structure follows: abstract, introduction, methods, results, discussion, and references.

  • Abstract: the abstract is a concise summary of the article. It summarizes the most important features of the manuscript, providing the reader with a global first impression of the article. It is generally just one paragraph that explains the experiment as well as a short synopsis of the results.
  • Introduction: this section provides background information about the origin and purpose of the experiment or study. It reviews previous research and presents existing theories on the topic.
  • Method: this section covers the methodologies used to investigate the research question, including the identification of participants, materials, and procedures. It should be sufficiently detailed to allow for replication.
  • Results: the results section presents the key findings of the research, including reference to indicators of statistical significance.
  • Discussion: this section provides an interpretation of the findings, states their significance for current research, and derives implications for theory and practice. Alternative interpretations for findings are also provided, particularly when it is not possible to determine the directionality of the effects. In the discussion, authors also acknowledge the strengths and limitations/weaknesses of the study and offer concrete directions for future research.

Watch this 3-minute video for an explanation of how to read scholarly articles. Look closely at the example article shared just before the two-minute mark.

https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/

Practice identifying these key components in the following experiment: Food-Induced Emotional Resonance Improves Emotion Recognition.

In this chapter, you learned to

  • define and apply the scientific method to psychology
  • describe the strengths and weaknesses of descriptive, experimental, and correlational research
  • define the basic elements of a statistical investigation

Putting It Together: Psychological Research

Psychologists use the scientific method to examine human behavior and mental processes. Some of the methods you learned about include descriptive, experimental, and correlational research designs.

Watch the CrashCourse video to review the material you learned, then read through the following examples and see if you can come up with your own design for each type of study.

You can view the transcript for “Psychological Research: Crash Course Psychology #2” here (opens in new window).

Case Study: a detailed analysis of a particular person, group, business, event, etc. This approach is commonly used to learn more about rare examples with the goal of describing that particular thing.

  • Ted Bundy was one of America’s most notorious serial killers; he murdered at least 30 women and was executed in 1989. Dr. Al Carlisle evaluated Bundy when he was first arrested and conducted a psychological analysis of how Bundy’s sexual fantasies developed and merged into reality (Ramsland, 2012). Carlisle believes that a gradual evolution of three processes guided his actions: fantasy, dissociation, and compartmentalization (Ramsland, 2012). Read Imagining Ted Bundy (http://goo.gl/rGqcUv) for more information on this case study.

Naturalistic Observation: a researcher unobtrusively collects information without the participant’s awareness.

  • Drain and Engelhardt (2013) observed the evoked and spontaneous communicative acts of six nonverbal children with autism. Each of the children attended a school for children with autism and was in a different class. They were observed for 30 minutes of each school day. By observing these children without their knowing, the researchers were able to see true communicative acts without any external influences.

Survey: participants are asked to provide information or responses to questions on a survey or structured assessment.

  • Educational psychologists can ask students to report their grade point average and what, if anything, they eat for breakfast on an average day. A healthy breakfast has been associated with better academic performance (DiGangi, 1999).
  • Anderson (1987) examined the relationship between uncomfortably hot temperatures and aggressive behavior in two studies of violent and nonviolent crime. Based on previous research by Anderson and Anderson (1984), it was predicted that violent crimes would be more prevalent during hotter times of the year and in hotter years in general. The study confirmed this prediction.

Longitudinal Study: researchers recruit a sample of participants and track them for an extended period of time.

  • In a study of a representative sample of 856 children, Eron and his colleagues (1972) found that a boy’s exposure to media violence at age eight was significantly related to his aggressive behavior ten years later, after he graduated from high school.

Cross-Sectional Study: researchers gather participants from different groups (commonly different ages) and look for differences between the groups.

  • In 1996, Russell surveyed people of varying age groups and found that people in their 20s tend to report being more lonely than people in their 70s.

Correlational Design: two different variables are measured to determine whether there is a relationship between them.

  • Thornhill et al. (2003) had people rate how physically attractive they found other people to be. They then had them separately smell t-shirts those people had worn (without knowing which clothes belonged to whom) and rate how good or bad their body odor was. They found that the more attractive someone was, the more pleasant their body odor was rated to be.
  • Clinical psychologists can test a new pharmaceutical treatment for depression by giving some patients the new pill and others an already-tested one to see which is the more effective treatment.

American Cancer Society. (n.d.). History of the cancer prevention studies. Retrieved from http://www.cancer.org/research/researchtopreventcancer/history-cancer-prevention-study

American Psychological Association. (2009). Publication Manual of the American Psychological Association (6th ed.). Washington, DC: Author.

American Psychological Association. (n.d.). Research with animals in psychology. Retrieved from https://www.apa.org/research/responsible/research-animals.pdf

Arnett, J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63(7), 602–614.

Barton, B. A., Eldridge, A. L., Thompson, D., Affenito, S. G., Striegel-Moore, R. H., Franko, D. L., . . . Crockett, S. J. (2005). The relationship of breakfast and cereal consumption to nutrient intake and body mass index: The national heart, lung, and blood institute growth and health study. Journal of the American Dietetic Association, 105(9), 1383–1389. Retrieved from http://dx.doi.org/10.1016/j.jada.2005.06.003

Chwalisz, K., Diener, E., & Gallagher, D. (1988). Autonomic arousal feedback and emotional experience: Evidence from the spinal cord injured. Journal of Personality and Social Psychology, 54, 820–828.

Dominus, S. (2011, May 25). Could conjoined twins share a mind? New York Times Sunday Magazine. Retrieved from http://www.nytimes.com/2011/05/29/magazine/could-conjoined-twins-share-a-mind.html?_r=5&hp&

Fanger, S. M., Frankel, L. A., & Hazen, N. (2012). Peer exclusion in preschool children’s play: Naturalistic observations in a playground setting. Merrill-Palmer Quarterly, 58, 224–254.

Fiedler, K. (2004). Illusory correlation. In R. F. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgment and memory (pp. 97–114). New York, NY: Psychology Press.

Frantzen, L. B., Treviño, R. P., Echon, R. M., Garcia-Dominic, O., & DiMarco, N. (2013). Association between frequency of ready-to-eat cereal consumption, nutrient intakes, and body mass index in fourth- to sixth-grade low-income minority children. Journal of the Academy of Nutrition and Dietetics, 113(4), 511–519.

Harper, J. (2013, July 5). Ice cream and crime: Where cold cuisine and hot disputes intersect. The Times-Picayune. Retrieved from http://www.nola.com/crime/index.ssf/2013/07/ice_cream_and_crime_where_hot.html

Jenkins, W. J., Ruppel, S. E., Kizer, J. B., Yehl, J. L., & Griffin, J. L. (2012). An examination of post 9-11 attitudes towards Arab Americans. North American Journal of Psychology, 14, 77–84.

Jones, J. M. (2013, May 13). Same-sex marriage support solidifies above 50% in U.S. Gallup Politics. Retrieved from http://www.gallup.com/poll/162398/sex-marriage-support-solidifies-above.aspx

Kobrin, J. L., Patterson, B. F., Shaw, E. J., Mattern, K. D., & Barbuti, S. M. (2008). Validity of the SAT for predicting first-year college grade point average (Research Report No. 2008-5). Retrieved from https://research.collegeboard.org/sites/default/files/publications/2012/7/researchreport-2008-5-validity-sat-predicting-first-year-college-grade-point-average.pdf

Lewin, T. (2014, March 5). A new SAT aims to realign with schoolwork. New York Times. Retreived from http://www.nytimes.com/2014/03/06/education/major-changes-in-sat-announced-by-college-board.html.

Lowry, M., Dean, K., & Manders, K. (2010). The link between sleep quantity and academic performance for the college student. Sentience: The University of Minnesota Undergraduate Journal of Psychology, 3(Spring), 16–19. Retrieved from http://www.psych.umn.edu/sentience/files/SENTIENCE_Vol3.pdf

McKie, R. (2010, June 26). Chimps with everything: Jane Goodall’s 50 years in the jungle. The Guardian. Retrieved from http://www.theguardian.com/science/2010/jun/27/jane-goodall-chimps-africa-interview

Offit, P. (2008). Autism’s false prophets: Bad science, risky medicine, and the search for a cure. New York: Columbia University Press.

Perkins, H. W., Haines, M. P., & Rice, R. (2005). Misperceiving the college drinking norm and related problems: A nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. J. Stud. Alcohol, 66(4), 470–478.

Rimer, S. (2008, September 21). College panel calls for less focus on SATs. The New York Times. Retrieved from http://www.nytimes.com/2008/09/22/education/22admissions.html?_r=0

Rothstein, J. M. (2004). College performance predictions and the SAT. Journal of Econometrics, 121, 297–317.

Rotton, J., & Kelly, I. W. (1985). Much ado about the full moon: A meta-analysis of lunar-lunacy research. Psychological Bulletin, 97(2), 286–306. doi:10.1037/0033-2909.97.2.286

Santelices, M. V., & Wilson, M. (2010). Unfair treatment? The case of Freedle, the SAT, and the standardization approach to differential item functioning. Harvard Education Review, 80, 106–134.

Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51, 515–530.

Tuskegee University. (n.d.). About the USPHS Syphilis Study. Retrieved from http://www.tuskegee.edu/about_us/centers_of_excellence/bioethics_center/about_the_usphs_syphilis_study.aspx.

CC licensed content, Original

  • Psychological Research Methods. Provided by : Karenna Malavanti. License : CC BY-SA: Attribution ShareAlike

CC licensed content, Shared previously

  • Psychological Research. Provided by : OpenStax College. License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-introduction .
  • Why It Matters: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/introduction-15/
  • Introduction to The Scientific Method. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-the-scientific-method/
  • Research picture. Authored by : Mediterranean Center of Medical Sciences. Provided by : Flickr. License : CC BY: Attribution   Located at : https://www.flickr.com/photos/mcmscience/17664002728 .
  • The Scientific Process. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-the-scientific-process/
  • Ethics in Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/ethics/
  • Ethics. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-4-ethics . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction .
  • Introduction to Approaches to Research. Provided by : Lumen Learning. License : CC BY-NC-SA: Attribution NonCommercial ShareAlike   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-approaches-to-research/
  • Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011. Authored by : John Gabrieli. Provided by : MIT OpenCourseWare. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : https://www.youtube.com/watch?v=syXplPKQb_o .
  • Paragraph on correlation. Authored by : Christie Napa Scollon. Provided by : Singapore Management University. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/research-designs?r=MTc0ODYsMjMzNjQ%3D . Project : The Noba Project.
  • Descriptive Research. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-clinical-or-case-studies/
  • Approaches to Research. Authored by : OpenStax College.  License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-2-approaches-to-research
  • Analyzing Findings. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-3-analyzing-findings . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction.
  • Experiments. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Research Review. Authored by : Jessica Traylor for Lumen Learning. License : CC BY: Attribution Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Introduction to Statistics. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-statistical-thinking/
  • histogram. Authored by : Fisher’s Iris flower data set. Provided by : Wikipedia. License : CC BY-SA: Attribution-ShareAlike   Located at : https://en.wikipedia.org/wiki/Wikipedia:Meetup/DC/Statistics_Edit-a-thon#/media/File:Fisher_iris_versicolor_sepalwidth.svg .
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman. Provided by : California Polytechnic State University, San Luis Obispo. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike . License Terms : http://nobaproject.com/license-agreement   Located at : http://nobaproject.com/modules/statistical-thinking . Project : The Noba Project.
  • Drawing Conclusions from Statistics. Authored by: Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-drawing-conclusions-from-statistics/
  • The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License: CC BY: Attribution
  • How to Read Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/how-to-read-research/
  • What is a Scholarly Article? Kimbel Library First Year Experience Instructional Videos. 9. Authored by:  Joshua Vossler, John Watts, and Tim Hodge.  Provided by : Coastal Carolina University  License :  CC BY NC ND:  Attribution-NonCommercial-NoDerivatives Located at :  https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/
  • Putting It Together: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/putting-it-together-psychological-research/
  • Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:

All rights reserved content

  • Understanding Driver Distraction. Provided by : American Psychological Association. License : Other. License Terms: Standard YouTube License Located at : https://www.youtube.com/watch?v=XToWVxS_9lA&list=PLxf85IzktYWJ9MrXwt5GGX3W-16XgrwPW&index=9 .
  • Correlation vs. Causality: Freakonomics Movie. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=lbODqslc4Tg.
  • Psychological Research – Crash Course Psychology #2. Authored by : Hank Green. Provided by : Crash Course. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=hFV71QPvX2I .

Public domain content

  • Researchers review documents. Authored by : National Cancer Institute. Provided by : Wikimedia. Located at : https://commons.wikimedia.org/wiki/File:Researchers_review_documents.jpg . License : Public Domain: No Known Copyright

Empirical: grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing

Theory: well-developed set of ideas that propose an explanation for observed phenomena

Hypothesis: (plural: hypotheses) tentative and testable statement about the relationship between two or more variables

Replicable: an experiment must be replicable by another researcher

Predictive: implies that a theory should enable us to make predictions about future events

Falsifiable: able to be disproven by experimental results

implies that all data must be considered when evaluating a hypothesis

Institutional Review Board (IRB): committee of administrators, scientists, and community members that reviews proposals for research involving human participants

Informed consent: process of informing a research participant about what to expect during an experiment, any risks involved, and the implications of the research, and then obtaining the person’s consent to participate

Deception: purposely misleading experiment participants in order to maintain the integrity of the experiment

Debriefing: when an experiment involved deception, participants are told complete and truthful information about the experiment at its conclusion

Institutional Animal Care and Use Committee (IACUC): committee of administrators, scientists, veterinarians, and community members that reviews proposals for research involving non-human animals

Descriptive research: research studies that do not test specific relationships between variables

Correlational research: research investigating the relationship between two or more variables

Experimental research: research method that uses hypothesis testing to make inferences about how one variable impacts and causes another

Naturalistic observation: observation of behavior in its natural setting

Generalize: inferring that the results for a sample apply to the larger population

Observer bias: when observations may be skewed to align with observer expectations

Inter-rater reliability: measure of agreement among observers on how they record and classify a particular event

Case study: observational research study focusing on one or a few people

Survey: list of questions to be answered by research participants—given as paper-and-pencil questionnaires, administered electronically, or conducted verbally—allowing researchers to collect data from a large number of people

Sample: subset of individuals selected from the larger population

Population: overall group of individuals that the researchers are interested in

Archival research: method of research using past records or data sets to answer various research questions, or to search for interesting patterns or relationships

Longitudinal research: studies in which the same group of individuals is surveyed or measured repeatedly over an extended period of time

Cross-sectional research: compares multiple segments of a population at a single time

Attrition: reduction in number of research participants as some drop out of the study over time

Correlation: relationship between two or more variables; when two variables are correlated, one variable changes as the other does

Correlation coefficient: number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r

Positive correlation: two variables change in the same direction, both becoming either larger or smaller

Negative correlation: two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation

Cause-and-effect relationship: changes in one variable cause the changes in the other variable; can be determined only through an experimental research design

Confounding variable: unanticipated outside factor that affects both variables of interest, often giving the false impression that changes in one variable causes changes in the other variable, when, in actuality, the outside factor causes changes in both variables

Illusory correlation: seeing relationships between two things when in reality no such relationship exists

Confirmation bias: tendency to ignore evidence that disproves ideas or beliefs

Experimental group: group designed to answer the research question; experimental manipulation is the only difference between the experimental and control groups, so any differences between the two are due to experimental manipulation rather than chance

Control group: serves as a basis for comparison and controls for chance factors that might influence the results of the study—by holding such factors constant across groups so that the experimental manipulation is the only difference between groups

Operational definition: description of what actions and operations will be used to measure the dependent variables and manipulate the independent variables

Experimenter bias: researcher expectations skew the results of the study

Single-blind study: experiment in which the researcher knows which participants are in the experimental group and which are in the control group

Double-blind study: experiment in which both the researchers and the participants are blind to group assignments

Placebo effect: people's expectations or beliefs influencing or determining their experience in a given situation

Independent variable: variable that is influenced or controlled by the experimenter; in a sound experimental study, the independent variable is the only important difference between the experimental and control group

Dependent variable: variable that the researcher measures to see how much effect the independent variable had

Participants: subjects of psychological research

Random sample: subset of a larger population in which every member of the population has an equal chance of being selected

Random assignment: method of experimental group assignment in which all participants have an equal chance of being assigned to either group

Reliability: consistency and reproducibility of a given result

Validity: accuracy of a given result in measuring what it is designed to measure

Statistical analysis: determines how likely any difference between experimental groups is due to chance

p-value: statistical probability that represents the likelihood that experimental results happened by chance
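
To make the last two definitions concrete, here is a minimal sketch in Python using the SciPy library's pearsonr function, which computes exactly this kind of coefficient and p-value; the sleep and exam numbers below are invented purely for illustration.

    # A minimal sketch: computing a correlation coefficient (r) and its p-value.
    # The data below are invented for illustration only.
    from scipy import stats

    hours_slept = [6, 7, 5, 8, 7, 6, 9, 5]          # hypothetical nightly sleep
    exam_scores = [71, 80, 62, 88, 78, 70, 93, 65]  # hypothetical exam results

    r, p = stats.pearsonr(hours_slept, exam_scores)

    print(f"correlation coefficient r = {r:.2f}")  # always between -1 and +1
    print(f"p-value = {p:.4f}")  # likelihood the result arose by chance alone

Note that even a large positive r here would show only that the two variables change together; as the definitions above emphasize, a cause-and-effect relationship can be established only with an experimental research design.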

Psychological Science is the scientific study of mind, brain, and behavior. We will explore what it means to be human in this class. It has never been more important for us to understand what makes people tick, how to evaluate information critically, and why history matters. Psychology can also help you in your future career; indeed, there are very few jobs out there with no human interaction!

Because psychology is a science, we analyze human behavior through the scientific method. There are several ways to investigate human phenomena, such as observation and experiments. We will discuss the basics of each, along with their pros and cons! We will also dig deeper into the important ethical guidelines that psychologists must follow in order to do research. Lastly, we will briefly introduce ourselves to statistics, the language of scientific research. While reading these chapters, try to find examples of material that fit the themes of the course.

To get us started:

  • The study of the mind moved away from introspection toward reaction-time studies as we learned more about empiricism.
  • Psychologists work in careers outside of the typical “clinician” role. We advise in human factors, education, policy, and more!
  • When completing an observational study, psychologists work to aggregate common themes to explain the behavior of the group (sample) as a whole. In doing so, we still allow for normal variation within the group!
  • The IRB and IACUC are important in ensuring ethics are maintained for both human and animal subjects.

Psychological Science: Understanding Human Behavior Copyright © by Karenna Malavanti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Bruce Tulgan, JD

How Managers Can Improve Team Problem-Solving

Teaching good problem-solving means learning from previous solutions.

Posted March 28, 2024 | Reviewed by Ray Parker

  • We can access vast information online, but critical thinking skills are still essential.
  • The key to improving team problem-solving is providing reliable resources you trust.
  • Build a library of problem-solving resources, including creating step-by-step instructions and checklists.


By now, it is a hackneyed truth about today’s world that we all have endless amounts of information at our fingertips, available instantly, all the time. We have multiple competing answers to any question on any subject—more answers than an entire team, let alone an individual, could possibly master in a lifetime. The not-quite-as-obvious punchline is this: There has been a radical change in how much information a person needs to keep inside their head versus how much can remain accessible through their fingertips.

Nobody should be so short-sighted or so old-fashioned as to write off the power of being able to fill knowledge gaps on demand. Yet this phenomenon is often blamed for a growing critical-thinking skills gap experienced in many organizations today.

Many people today are simply not in the habit of really thinking on their feet. Without a lot of experience puzzling through problems, it should be no surprise that so many people are often puzzled when they encounter unanticipated problems.

Here’s the thing: Nine times out of ten, you don’t need to make important decisions on the basis of your own in-the-moment judgment. You are much better off if you can rely on the accumulated experience of the organization in which you are working, much as we rely on the accumulated information available online.

The key is ensuring that your direct reports are pulling from sources of information and experience they and the organization can trust.

The first step to teaching anybody the basics of problem-solving is to anticipate the most common recurring problems and prepare ready-made solutions. It may seem counterintuitive, but problem-solving skills aren’t built by reinventing the wheel: by learning and implementing ready-made solutions, employees absorb a great deal about the anatomy of a good solution. This puts them in a much better position to improvise when they encounter a truly unanticipated problem.

The trick is to capture best practices, turn them into standard operating procedures, and deploy them to your team for use as job aids. This can be as simple as an “if, then” checklist (a short illustrative sketch follows the list below):

  • If A happens, then do B.
  • If C happens, then do D.
  • If E happens, then do F.
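
To make this concrete, here is a minimal sketch in Python of how such an “if, then” job aid might be encoded so that each known condition maps to one ready-made action; the conditions and actions are hypothetical placeholders, not steps from any real procedure.

    # A minimal sketch of an "if, then" job aid: each known condition maps
    # to a ready-made action. All conditions and actions here are hypothetical.
    JOB_AID = {
        "password expired": "reset it through the self-service portal",
        "application freezes on launch": "restart it and install pending updates",
        "shared file will not open": "check the file format and restore the backup copy",
    }

    def suggest_action(observed_condition: str) -> str:
        """Return the ready-made action for a known condition, or flag the
        situation as a truly unanticipated problem that needs fresh judgment."""
        return JOB_AID.get(
            observed_condition,
            "no ready-made solution: escalate, solve it, then add the new case",
        )

    print(suggest_action("password expired"))          # known case: apply the SOP
    print(suggest_action("vendor API is returning errors"))  # unanticipated: escalate

The design choice mirrors the point above: the lookup table is the organization’s accumulated experience, and the fallback branch is where genuine on-your-feet problem-solving begins; each escalated case becomes a candidate for a new entry.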

Here are seven tips to help you build a library of problem-solving resources for your team:

1. Break things down and write them out. Start with what you know. Break down the task or project into a list of step-by-step instructions, incorporating any resources or job aids you currently use. Then, take each step further by breaking it down into a series of concrete actions. Get as granular as you possibly can—maybe even go overboard a little. It will always be easier to remove unnecessary steps from your checklist than to add in necessary steps later.

2. Follow your instructions as if you were a newbie. Once you have a detailed, step-by-step outline, try using it as though you were totally new to the task or project. Follow the instructions exactly as you have written them: Avoid subconsciously filling in any gaps with your own expertise. Don't assume that anything goes without saying, especially if the task or project is especially technical or complex. As you follow your instructions, make corrections and additions as you go. Don't make the mistake of assuming you will remember to make necessary corrections or additions later.

3. Make final edits. Follow your updated and improved instructions one final time. Make any further corrections or additions as necessary. Include as many details as possible for and between each step.

4. Turn it into a checklist. Now, it's time to translate your instructions into a checklist format. Checklists are primarily tools of mindfulness: They slow us down and focus us on the present actions under our control. Consider whether the checklist will be more helpful if it is phrased in past or present tense. Who will be using the checklist? What information do they need to know? How much of the checklist can be understood at a glance?

5. Get outside input. Ask someone to try to use your checklist and see if it works for them. Get their feedback about what was clear, what was unclear, and why. Ask about any questions they had that weren't answered by the checklist. Solicit other suggestions, thoughts, or improvements you may not have considered. Incorporate their input and then repeat the process with another tester.


6. Use your checklist. Don't simply create your checklist for others and then abandon it. Use it in your own work going forward, and treat it as a living document. Make clarifying notes, additions, and improvements as the work naturally changes over time. Remember, checklists are tools of mindfulness. Use them to tune in to the work you already do and identify opportunities for growth and improvement.

7. Establish a system for saving drafts, templates, and examples of work that can be shared with others. Of course, checklists are just one type of shareable job aid. Sharing examples of your own previous work, or the work of another team member, is another useful way to help someone jumpstart a new task or project. This can be anything from final products to drafts, sketches, templates, or even videos.


Bruce Tulgan, JD, is the founder and CEO of RainmakerThinking and the author of The Art of Being Indispensable at Work.


Working With Your Hands Is Good for Your Brain

Activities like writing, gardening and knitting can improve your cognition and mood. Tapping, typing and scrolling? Less so.


By Markham Heid

The human hand is a marvel of nature. No other creature on Earth, not even our closest primate relatives, has hands structured quite like ours, capable of such precise grasping and manipulation.

But we’re doing less intricate hands-on work than we used to. A lot of modern life involves simple movements, such as tapping screens and pushing buttons, and some experts believe our shift away from more complex hand activities could have consequences for how we think and feel.

“When you look at the brain’s real estate — how it’s divided up, and where its resources are invested — a huge portion of it is devoted to movement, and especially to voluntary movement of the hands,” said Kelly Lambert, a professor of behavioral neuroscience at the University of Richmond in Virginia.

Dr. Lambert, who studies effort-based rewards, said that she is interested in “the connection between the effort we put into something and the reward we get from it” and that she believes working with our hands might be uniquely gratifying.

In some of her research on animals, Dr. Lambert and her colleagues found that rats that used their paws to dig up food had healthier stress hormone profiles and were better at problem solving compared with rats that were given food without having to dig.

She sees some similarities in studies on people, which have found that a whole range of hands-on activities — such as knitting, gardening and coloring — are associated with cognitive and emotional benefits, including improvements in memory and attention, as well as reductions in anxiety and depression symptoms.

These studies haven’t determined that hand involvement, specifically, deserves the credit. The researchers who looked at coloring, for example, speculated that it might promote mindfulness, which could be beneficial for mental health. Those who have studied knitting said something similar. “The rhythm and repetition of knitting a familiar or established pattern was calming, like meditation,” said Catherine Backman, a professor emeritus of occupational therapy at the University of British Columbia in Canada who has examined the link between knitting and well-being.

However, Dr. Backman said the idea that working with one’s hands could benefit a person’s mind and wellness seems plausible. Hands-on tasks that fully engage our attention — and even mildly challenge us — can support learning, she added.

Dr. Lambert has another hypothesis. “With depression, people experience something called learned helplessness, where they feel like it doesn’t matter what they do, nothing ever works,” she said. She believes that working with one’s hands is stimulating to the brain, and that it could even help counteract this learned helplessness. “When you put in effort and can see the product of that, like a scarf you knitted, I think that builds up a sense of accomplishment and control over your world,” she said.

Some researchers have zeroed in on the possible repercussions of replacing relatively complicated hand tasks with more basic ones.

In a small study of university students published in January, Norwegian researchers compared the neurological effects of writing by hand with typing on a keyboard. Handwriting was associated with “far more elaborate” brain activity than keyboard writing, the researchers found.

“With handwriting, you have to form these intricate letters by making finely controlled hand and finger movements,” said Audrey van der Meer, one of the authors of that study and a professor of psychology at the Norwegian University of Science and Technology. Each letter is different, she explained, and requires a different hand action.

Dr. Van der Meer said that the act of forming a letter activates distinctive memories and brain pathways tied to what that letter represents (such as the sound it makes and the words that include it). “But when you type, every letter is produced by the same very simple finger movement, and as a result you use your whole brain much less than when writing by hand,” she added.

Dr. Van der Meer’s study is the latest in a series of research efforts in which she and her colleagues have found that writing and drawing seem to engage and exercise the brain more than typing on a keyboard. “Skills involving fine motor control of the hands are excellent training and superstimulation for the brain,” she said. “The brain is like a muscle, and if we continue to take away these complex movements from our daily lives — especially fine motor movements — I think that muscle will weaken.” While more research is needed, Dr. Van der Meer posits that understimulation of the brain could ultimately lead to deficits in attention, memory formation and problem solving.

But as with knitting and coloring, some experts question the underlying mechanisms at play.

“With some of this research, I think it’s hard to dissociate whether it’s the physical movement of the hands that’s producing a benefit, or whether it’s the concentration or novelty or cognitive challenge involved,” said Rusty Gage, a professor at the Salk Institute for Biological Studies in San Diego.

Dr. Gage studies how certain activities can stimulate the growth of new cells in the brain. “I think if you’re doing complex work that involves making decisions and planning, that may matter more than whether you’re using your hands,” he said.

That said, the benefits of many hands-on activities aren’t in doubt. Along with gardening and handicrafts, research has found that pursuits like making art and playing a musical instrument also seem to do us some good.

“You know, we evolved in a three-dimensional world, and we evolved to interact with that world through our hands,” Dr. Lambert said. “I think there are a lot of reasons why working with our hands may be prosperous for our brains.”
