hypothetico-deductive method


hypothetico-deductive method, procedure for the construction of a scientific theory that will account for results obtained through direct observation and experimentation and that will, through inference, predict further effects that can then be verified or disproved by empirical evidence derived from other experiments.

An early version of the hypothetico-deductive method was proposed by the Dutch physicist Christiaan Huygens (1629–95). The method generally assumes that properly formed theories are conjectures intended to explain a set of observable data. These hypotheses, however, cannot be conclusively established until the consequences that logically follow from them are verified through additional observations and experiments. The method treats theory as a deductive system in which particular empirical phenomena are explained by relating them back to general principles and definitions. However, it rejects the claim of Cartesian mechanics that those principles and definitions are self-evident and valid; it assumes that their validity is determined only by the exact light their consequences throw on previously unexplained phenomena or on actual scientific problems.

Perspect Med Educ 7(2), April 2018


Teaching clinical reasoning through hypothetico-deduction is (slightly) better than self-explanation in tutorial groups: An experimental study

Ahmed Al Rumayyan, Reem Al Subait, Ghassan Al Ghamdi, Moeber Mohammed Mahzari, Tarig Awad Mohamed, Jerome I. Rotgans, Mustafa Donmez, Silvia Mamede, and Henk G. Schmidt

Author affiliations:

1 College of Medicine, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia

2 Imperial College London, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore

3 Department of General Practice, Erasmus Medical Center, Rotterdam, The Netherlands

4 Institute of Medical Education Research Rotterdam, Erasmus Medical Center, Rotterdam, The Netherlands

5 Department of Psychology, Erasmus University Rotterdam, Rotterdam, The Netherlands

6 Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore

Self-explanation while individually diagnosing clinical cases has proved to be an effective instructional approach for teaching clinical reasoning. The present study compared the effects on diagnostic performance of self-explanation in small groups with the more commonly used hypothetico-deductive approach.

Second-year students from a six-year medical school in Saudi Arabia (39 males; 49 females) worked in small groups on seven clinical vignettes (four criterion cases representing cardiovascular diseases and three ‘fillers’, i.e. cases of other unrelated diagnoses). The students followed different approaches to work on each case depending on the experimental condition to which they had been randomly assigned. Under the self-explanation condition, students provided a diagnosis and a suitable pathophysiological explanation for the clinical findings whereas in the hypothetico-deduction condition students hypothesized about plausible diagnoses for signs and symptoms that were presented sequentially. One week later, all students diagnosed eight vignettes, four of which represented cardiovascular diseases. A mean diagnostic accuracy score (range: 0–1) was computed for the criterion cases. One-way ANOVA with experimental condition as between-subjects factor was performed on the mean diagnostic accuracy scores.

Students in the hypothetico-deduction condition outperformed those in the self-explanation condition (hypothetico-deduction: mean = 0.22, standard deviation = 0.14; self-explanation: mean = 0.17, standard deviation = 0.12; F(1, 88) = 4.90, p = 0.03, partial η² = 0.06).

Conclusions

Students in the hypothetico-deduction condition performed slightly better on a follow-up test involving similar cases, possibly because they were allowed to formulate more than one hypothesis per case during the learning phase.

What this paper adds

Medical education places much value on the development of students’ diagnostic competence. Many schools now offer clinical reasoning courses early in the curriculum, but there is little empirical research on the approaches commonly used for the teaching of clinical reasoning. This experiment compared the effectiveness of two teaching approaches: self-explanation and hypothetico-deduction. The latter asks students to hypothesize about plausible diagnoses for clinical findings that are presented sequentially. Despite being very common, its effectiveness has rarely been investigated. The hypothetico-deduction approach worked slightly better than self-explanation to foster students’ diagnostic performance. Possible explanations for the findings are discussed.

Introduction

The acquisition of competence in the skill of diagnostic reasoning is perhaps the most important task a medical student is confronted with, a task that is fraught with difficulties. Not only does the student have to learn to distinguish between more than 700 different diseases; these diseases also tend to present in quite idiosyncratic ways in patients. In addition, contextual influences, such as time pressure [1], patients' disruptive behaviours [2] and a variety of cognitive biases such as availability bias [3], seem to add to the difficulty of arriving at the right diagnosis. The teaching of clinical reasoning is therefore an inherently challenging endeavour.

Teaching clinical reasoning has traditionally been left to the clinical rotations, intuitively the best place to learn these skills. However, research findings and anecdotal evidence suggest that this is no longer true to the extent it once was [4]. Supervision and feedback are often suboptimal in clinical rotations, and students tend to be exposed to a patient population that does not replicate the range of health problems that they will encounter in professional life [5]. In response to these developments, medical schools have begun to establish clinical reasoning courses earlier in the curriculum, during which students become acquainted with the art and science of diagnostic reasoning by practising with clinical problems. Early examples employed simulated patients for this task. More recent additions involve the use of high-fidelity virtual patients presented online. Both are, however, expensive to develop and execute and have uncertain advantages over paper vignettes [6, 7]. Written clinical cases have therefore been extensively used, with a variety of instructional approaches being employed to teach clinical reasoning.

Schmidt and Mamede have recently reviewed paper-based approaches that are currently used (or proposed) [4]. They distinguish between approaches on the basis of several dimensions, one of which is of interest for the present study: a distinction between cases unfolding in a sequential fashion (the ‘serial-cue’ approach) and ‘whole-case’ approaches. The basic difference between these two approaches is whether the case information is disclosed step by step or the entire case is available from the start. The former (‘serial-cue’) involves ‘hypothetico-deduction’, a way of reasoning resembling the diagnostic process of physicians [8]. Information about the patient becomes available only sequentially in the course of a student's engagement with a case. Usually the patient's chief complaint is presented first, and the students propose diagnostic hypotheses and deduce potential consequences, in terms of findings they would expect if the hypotheses were correct. Additional information is disclosed as the students progress through this process [9–11]. The ‘whole-case’ approach, on the other hand, presents the case in full before students become involved with it. Schmidt and Mamede's review [4] seems to suggest that whole-case approaches are generally more effective than serial-cue approaches. However, their evidence was based on a limited number of studies [12].

Among the whole-case approaches, a rather promising method in the teaching of clinical reasoning is self-explanation [ 13 ]. Chamberland and colleagues [ 14 , 15 ] presented cases to advanced students and asked them, in addition to diagnosing these cases, to explain the signs and symptoms in terms of their underlying pathophysiology. A control group was simply asked to diagnose the same cases. The aim was to investigate whether self-explanation would foster students’ ability to distinguish between diseases that could explain a particular clinical presentation (for example, possible diagnoses for a patient presenting with chest pain and shortness of breath). The assumption underlying self-explanation was that by reactivating pathophysiological knowledge previously learned, the pathophysiological explanation would act as the underlying fabric more clearly tying together the signs and symptoms of these cases [ 16 ]. This would in turn lead to better diagnostic performance on similar cases presented at a later date. Chamberland found evidence showing just that, however only for cases that the students were not very familiar with.

The Chamberland studies presented cases to individual students to assess their impact. Such an approach, while theoretically important, is not amenable to introduction into an actual medical curriculum. In the Chamberland studies, for instance, each student had his or her own facilitator, who was tasked with encouraging the student to think aloud while dealing with the case. In actual programs, however, students would probably be practising in the presence of peers or in small groups. What would happen if groups of students were to work on cases? There is much evidence that having groups of students collaborate adds to the individual members’ learning and performance [ 17 ]. Such superior performance emerges because groups encourage individual students to elaborate on their prior knowledge (which facilitates further learning) and in addition to learn from each other.

The purpose of the present study, then, was to assess the effects of a self-explanation approach in small groups, relative to a hypothetico-deductive approach, on students’ performance in the diagnosis of the same clinical cases. Based on the previous studies by Chamberland and a study by Nendaz [ 12 ], our hypothesis was that the self-explanation group would do better than the hypothetico-deduction group on a test with similar cases. To test this hypothesis, groups of six students either processed seven cases through self-explanation or via hypothetico-deduction. One week later, they were presented with eight new cases (four of which were directly relevant to the cases processed during the previous learning phase) which they had to diagnose. The mean number of accurate diagnoses was taken as a measure of the quality of learning taking place in the learning phase.

Methods

The study consisted of two phases: a learning phase and a delayed diagnostic performance test administered 1 week later. In the learning phase, participants in small groups of approximately six discussed and diagnosed seven clinical cases under two different experimental conditions. Students were randomly assigned to the conditions of the experiment. Students in the hypothetico-deduction condition were presented with case information in a sequential fashion, had to provide tentative hypotheses, test these hypotheses as more information became available, and discuss their findings in small groups. The students in the self-explanation condition were presented with the whole case and were asked to explain the signs and symptoms in terms of their underlying pathophysiology in small groups, and provide a diagnosis as well.

The test required candidates to diagnose a set of eight new clinical cases, of which four criterion cases represented new exemplars of the clinical presentations encountered in the learning phase and four represented “fillers” (cases of different diseases used to decrease the chance that participants would easily recognize the new set of cases as representing the diseases seen in the learning phase).

Participants

All 188 second-year medical students at King Saud bin Abdulaziz University Medical College, in Riyadh, Saudi Arabia, a six-year medical school, were invited to participate in this study. We recruited Year 2 students because, at this point in their training, they have been exposed to theoretical knowledge about diseases but have not yet seen any patients. Written consent was obtained from all students involved. They were promised that data would be analyzed anonymously.

Ethical approval for the study was given by King Abdullah International Medical Research Center (KAIMRC) Riyadh, Kingdom of Saudi Arabia. The study was carried out in accordance with the Declaration of Helsinki.

Two different sets of clinical cases were used in the study, one for each phase (see Tab. 1 for the diagnoses involved). Each case consisted of a half-page description of a patient's medical history, present complaints, findings of a physical examination and results of laboratory tests. See Tab. 2 for an example of such a case. The cases were based on real patients and had been used in previous studies [15]. Some of the cases involved cardiovascular diseases; the others involved unrelated diseases (filler cases). The former were the criterion cases considered in the primary analysis (because the instructional approaches aim at increasing students' ability to distinguish between diseases that are part of the differential diagnosis of a particular clinical presentation). The latter were included to reinforce the idea that both the learning and the test phase were clinical reasoning exercises, and are therefore not relevant for the primary analysis.

Diagnoses of the cases used in the different phases of the study

Learning phase:
  • Case 1.1—Acute myocardial infarction with heart failure
  • Case 1.2—Community-acquired pneumonia (Filler)
  • Case 1.3—Aortic stenosis with heart failure
  • Case 1.4—Nephrotic syndrome (Filler)
  • Case 1.5—Hypertensive cardiomyopathy
  • Case 1.6—Acute viral hepatitis (Filler)
  • Case 1.7—Alcoholic cardiomyopathy

Test phase:
  • Case 2.0—Stomach cancer (Filler)
  • Case 2.1—Chronic CAD, with decompensated heart failure by anaemia
  • Case 2.2—Acute pyelonephritis (Filler)
  • Case 2.3—Chronic mitral insufficiency with secondary heart failure
  • Case 2.4—Meningoencephalitis (Filler)
  • Case 2.5—Hypertensive cardiomyopathy
  • Case 2.6—Acute appendicitis
  • Case 2.7—Viral myocarditis
  • Case 2.8—Rheumatoid arthritis (Filler)

A case of acute myocardial infarction with heart failure

History: A 59-year-old businessman presents in the emergency department with severe dyspnoea. For the last 2 months, the patient has noted increasing shortness of breath: at first on climbing the stairs, and since last week at the least effort. The last two nights were particularly difficult, the patient experiencing shortness of breath even when lying down which forced him to sleep sitting up in a chair. He did not notice any cough or sputum. He used a salbutamol inhaler, which he uses as needed for asthma, without result. In the last 24 h he has also noted 4–5 episodes of tightness of the chest, of moderate intensity, lasting 5 to 10 min. No palpitations or syncope. He had a cold last week, which resolved spontaneously. Medical history: Hypertension for some 20 years, apparently well controlled with diltiazem 240 mg daily. Seasonal asthma, for which he periodically takes steroids, using a dosing inhaler, and salbutamol. The patient smokes ½ pack of cigarettes/day; he reports a healthy diet.

Physical examination: BP 100/60, steady pulse 105/min; the patient is clammy; RR 28/min, dyspnoea at rest with saturation of 88% on arrival—ambient air—and 92% using nasal cannula at 2 l/min; oral temperature 36.5 °C. Jugular veins not distended. Heart sounds are normal, with presence of a B3. Presence of a systolic murmur noted, 2/6 at the apex radiating towards the armpit. On pulmonary examination, crackles noted bilaterally in the lower thirds and wheezes noted on expiration. The abdomen is normal. The lower limbs are normal.

Laboratory results: Blood count, electrolytes, creatinine and glycaemia are normal. The ECG shows q waves (inferior) and inversion of the T wave from V2 to V6 with displacement of 2 mm in V3, V4, V5. Elevated troponins, 0.12. Chest X-ray showed perihilar haze, septal lines and a slight right pleural effusion.

Learning phase. The learning phase required the students to diagnose seven clinical cases. The cases were presented through PowerPoint slides in one of two randomized orders. Participants were randomly assigned to either the self-explanation condition or the hypothetico-deduction condition by using the list of students enrolled in the second year of the program. Subsequently, they were subdivided into groups of six, each with a facilitator who was a member of the academic staff. Prior to the study, the facilitators attended a 2-hour training session aimed at familiarising them with the study and ensuring uniformity of the procedure. The facilitator's task was to take care that the procedure as described below was followed meticulously. He or she did not provide feedback or otherwise interfere with the learning. Each student was also presented with a response booklet with blank pages in which he or she was asked to make notes.

In the self-explanation condition, once the case was presented, the students were given the following instructions: 1) Please read the case quickly. 2) Write down here one or more diagnoses that come to mind. 3) Write down in bullet points which pathophysiological process may have caused the signs and symptoms in this case. 4) Now discuss your ideas about the pathophysiology of the case with your colleagues. 5) What is your final diagnosis? The first three steps were taken individually. In step 4, students had to explain to each other how the signs and symptoms in the case were produced by the underlying pathophysiology. In step 5, they were to agree on a most likely diagnosis. After having reached an agreement, the next case was presented on screen. Students did not receive feedback. The steps taken individually required written responses, whereas the other steps demanded only verbal reporting.

In the hypothetico-deduction condition, each case was presented in sequential fashion: history, physical examination, and laboratory test results would appear only after students followed the relevant parts of the instructions: History: 1) Write down here one or more diagnoses that come to mind while reading the history. 2) What further information would you need to test these diagnostic hypotheses? 3) Now discuss your ideas with your colleagues. Physical examination: 4) Write down here one or more diagnoses that come to mind while reading the physical examination information. 5) What further information would you need to test these diagnostic hypotheses? 6) Now discuss your ideas with your colleagues. Laboratory tests: 7) Write down here one or more diagnoses that come to mind while reading the laboratory data. 8) What is your final diagnosis? 9) Now discuss this conclusion with your colleagues. Steps 3, 6, and 9 required students to discuss ideas with their colleagues; the other steps were taken individually. As in the self-explanation condition, the steps taken individually required written responses, whereas the other steps demanded only verbal reporting. After completing a case, the next case was presented sequentially. Students were allowed to take as much time as they needed, but facilitators were instructed to spend no more than an hour on the seven cases. A maximum time was set per case in each condition; no significant differences in time emerged between the conditions.

Test phase. One week after the training phase, the students received, under examination conditions, a booklet with eight cases, four of which described a cardiovascular condition (criterion cases) and five were filler cases. Students were requested to read each case and write down the most likely diagnosis. At the end of the test phase, students were debriefed with regard to the purpose of the experiment.

Data analysis

The diagnoses provided by the participants for the criterion cases in the test phase were evaluated as correct, partially correct or incorrect, receiving scores of 1, 0.5, or 0, respectively. The diagnosis was considered correct whenever the core correct diagnosis of the case was provided (e.g. ‘myocarditis’ in a case of viral myocarditis). When the core diagnosis was not given, but one component of the diagnosis was mentioned, the diagnosis was considered partially correct (e.g. ‘mitral insufficiency’ in a case of chronic mitral insufficiency with secondary heart failure). When the participant's response did not fall into one of these categories, the diagnosis was considered incorrect. Three experts in internal medicine (G.A.C., M.M.M., and M.D.) independently evaluated participants' responses for each case. Responses had previously been transcribed from the booklets to Excel sheets so that evaluators were not aware of the experimental condition under which the diagnoses had been provided. Their evaluations agreed for 89% of the diagnoses; discrepancies were resolved by discussion.

For each participant, the scores obtained on the four cases of cardiovascular diseases were averaged. An ANOVA (significance level: 0.05) with experimental condition (self-explanation versus hypothetico-deduction condition) as between-subjects factor was conducted. This analysis tested the hypothesis that self-explanation while solving clinical cases would foster learning and would lead to better diagnostic performance on the test.
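As a rough illustration of this analysis (a minimal sketch, not the authors' code; the file name and column names are hypothetical), the per-participant averaging and one-way ANOVA could be computed along these lines:

    # Minimal sketch of the analysis described above (hypothetical data layout).
    # Expects one row per participant per criterion case, with columns:
    # participant_id, condition ("HD" or "SE"), score (1, 0.5 or 0).
    import pandas as pd
    from scipy import stats

    responses = pd.read_csv("criterion_case_scores.csv")  # hypothetical file

    # Average the four criterion-case scores per participant (range 0-1).
    accuracy = (responses.groupby(["participant_id", "condition"])["score"]
                .mean().reset_index())

    hd = accuracy.loc[accuracy["condition"] == "HD", "score"]
    se = accuracy.loc[accuracy["condition"] == "SE", "score"]

    # One-way ANOVA with condition as the between-subjects factor.
    f_stat, p_value = stats.f_oneway(hd, se)
    print(f"HD: mean={hd.mean():.2f}, SD={hd.std(ddof=1):.2f}; "
          f"SE: mean={se.mean():.2f}, SD={se.std(ddof=1):.2f}; "
          f"F(1, {len(hd) + len(se) - 2})={f_stat:.2f}, p={p_value:.3f}")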

Results

Fifty-nine (out of 188) students declined to participate, and 41 students either failed to complete all the phases or provided insufficient data to be included. Eventually, 49 female and 39 male students (mean age 22.1 years, standard deviation 1.98) participated in the study.

Tab.  3 contains the descriptive statistics of the experiment.

Means and standard deviations of diagnostic accuracy scores under the conditions of the experiment (self-explanation versus hypothetico-deduction) for male and female participants

Experimental condition    Mean    Standard deviation
Hypothetico-deduction     0.22    0.1445
Self-explanation          0.17    0.1243
Total                     0.20    0.1388

A univariate analysis of variance was conducted with experimental condition as independent variable and diagnostic accuracy as the dependent variable. Students in the hypothetico-deduction condition performed better than those in the self-explanation condition, F (1, 86) = 4.20, p  = 0.04. The effect size was small (Cohen’s d  = 0.38) [ 18 ].

No differences in performance were observed on the filler cases, F (1, 86) = 0.91, p  = 0.76, Cohen’s d  = 0.05, suggesting that the groups were indeed similar and randomization was successful.

Discussion

The purpose of the present experiment was to study the effectiveness of self-explanation of clinical cases in small tutorial groups relative to a hypothetico-deductive approach. Our hypothesis was that the self-explanation approach would yield higher gains because it enables students to activate previously acquired pathophysiological knowledge that would create coherence among the signs and symptoms to be explained and therefore facilitate subsequent diagnosis of similar cases [15, 19]. To study this hypothesis, we required students to work in small groups on seven cases and either provide a suitable pathophysiological explanation for each of them, in addition to a diagnosis, or hypothesize about signs and symptoms presented sequentially. One week later, all students received the same set of eight new cases and were required to provide a diagnosis.

Contrary to expectation, the students who were asked to engage in hypothetico-deduction, a task very similar to the task of the physician, performed significantly better than the self-explanation group. The effect was small, but it should be realized that it emerged even though the two approaches were employed in a single session and with only a small number of cases. In real educational programs, the approaches would be employed repeatedly throughout a series of sessions and cases, with a potentially larger effect. This is somewhat surprising because self-explanation, and other interventions that aim at elaborating or strengthening a person's knowledge base, are usually successful in doing so. Since arriving at a correct diagnosis is a knowledge-based activity, self-explanation should be expected to be helpful. This finding also seems to contradict previous findings by Chamberland and colleagues [15]. They found self-explanation to be the superior approach when measuring performance on a set of new cases at a later point in time. However, their learning phase entailed the interaction between a single student, rather than a group of students, and a facilitator. In addition, their control condition was not asked to process the cases sequentially, but to provide a best diagnosis based on engagement with the whole case.

A number of possible explanations for these divergent findings present themselves.

First, some facilitators reported that students in the self-explanation groups had difficulty coming up with explanations incorporating mechanisms or principles underlying the signs and symptoms in the cases. It seemed that they had already forgotten much of the basic science they learned previously or had difficulty applying pathophysiology to actual clinical cases. This may be a reason that the self-explanation condition did not reach its full potential: it simply insufficiently activated relevant knowledge to strengthen the students’ knowledge base.

Second, the hypothetico-deductive condition encouraged students to explicitly consider more than one hypothesis, while the self-explanation condition did not. Since the cases in the learning phase and the test phase were not identical, the chances are that those who were hypothesizing about possible diagnoses also considered one or more diagnoses that returned in the test phase, giving them a slight edge over the self-explanation group. On the other hand, in the studies by Chamberland and colleagues [15] and those of Mamede and colleagues [20, 21], the knowledge elicitation procedures excelled in particular with transfer cases, that is, cases that were in the same domain (for instance, cardiovascular disease) but had different diagnoses. To be fair, it has to be noted, however, that their studies did not include a comparison with a hypothetico-deduction condition.

A third factor possibly favouring the hypothetico-deduction condition is that our experimental setup forced us to provide both groups with the same patient information, even if students in the hypothetico-deduction condition did not ask for that information. In real life, as in most educational settings, hypothetico-deduction is driven by the informational needs as seen by the doctor or student engaged in diagnosing a case. The problem-solver receives only the information he or she has asked for. Nendaz has demonstrated that, when doctors and students diagnose clinical vignettes using the hypothetico-deductive approach, they perform less well than groups who receive the whole case [12]. Our hypothetico-deduction condition may have profited from receiving all the information, even the information it did not ask for.

A fourth issue limiting our study is the surprisingly low performance of all our groups. With mean scores around 0.20 on a scale between 0 and 1, our participants’ achievements were well under the achievements of students in similar studies [ 20 , 21 ]. Again, this may indicate that our participants simply did not yet have sufficient knowledge to deal with the cases, and therefore those who produced, perhaps haphazardly, more different hypotheses during the learning phase, had a slight edge over those who did not. More research is clearly necessary here.

It should be highlighted that many of these limitations can be seen as a side effect of our attempt at increasing ecological validity. We opted for comparing the two approaches under conditions that would closely simulate those encountered in an actual medical program: students worked in small groups with different facilitators. In doing so, we may have gained in validity, but strict control over the discussion in the groups was not possible. This comes as the unavoidable trade-off between ecological validity and experimental control.

In conclusion, the much-used hypothetico-deductive method for teaching clinical reasoning did relatively well in our study. Tentative explanations have been raised but further research is required to explore which approach works better and under which conditions. New methods, such as self-explanation, need further scrutiny.

Acknowledgements

The authors wish to thank the following tutors of King Saud bin Abdulaziz University of Health Sciences for their contributions to the study: Saima Aamir, Hatouf Sukkarieh, Nosheen Mahmood, Amal Abdulrahim, Anum Yaqoob, Saima Ejaz, Suha Althubaiti, Mohammed Bargo, Mohammed Abdelrahim, Hisham Elaagip, Mohammed Eltoum, Amir Shouk, and Saleh Sayed.

Biographies

is professor in the Department of Pediatric Neurology, and dean of the College of Medicine, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia.

is lecturer in the Department of Medical Education, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia.

is lecturer in the Department of Medical Education, College of Medicine, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia.

is assistant professor in the Department of Emergency Medicine, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia.

is assistant professor in the Department of Endocrinology, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia.

is assistant professor in the Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore.

is lecturer in the Department of General Practice, Erasmus Medical Center, Erasmus University Rotterdam, the Netherlands.

is associate professor in the Institute of Medical Education Research Rotterdam, Erasmus Medical Center, and in the Department of Psychology, Erasmus University Rotterdam.

is visiting professor in the King Saud bin Abdulaziz University of Health Sciences, College of Medicine, Saudi Arabia; visiting professor in the Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; and professor in the Institute of Medical Education Research Rotterdam, Erasmus Medical Center, Rotterdam, the Netherlands.

Conflict of interest

A. Al Rumayyan, N. Ahmed, R. Al Subait, G. Al Ghamdi, M. Mohammed Mahzari, T. Awad Mohamed, J.I. Rotgans, M. Donmez, S. Mamede and H.G. Schmidt declare that they have no competing interests.

Institution to which the work should be attributed: College of Medicine, King Saud bin Abdulaziz University of Health Sciences, Riyadh, Saudi Arabia.

Availability of data and materials

Data are available for sharing upon request to the first author.

THE HYPOTHETICO-DEDUCTIVE METHOD

  • It was proposed by the Austrian philosopher Karl Popper.
  • It is a typical version of the scientific method.
  • It has seven steps.

  • Identify a broad problem area
  • Define the problem statement
  • Develop hypotheses
  • Determine measures
  • Data collection
  • Data analysis
  • Interpretation of data

What are the steps in hypothetico-deductive research according to the new version of the book? Is it really different from the old version? If so, explain the differences from your own point of view.

The Hypothetico-Deductive Method

The seven steps involved in the hypothetico-deductive method of research, derived from the building blocks of science, are listed below:

A drop in sales, incorrect accounting results, low-yielding investments, disinterestedness of employees in their work, and the like could attract the attention of the manager and prompt a research project.

  • Scientific research starts with a definite aim or purpose.
  • A problem statement states the general objective of the research.
  • The network of associations between the problem and the variables that affect it is identified.
  • A scientific hypothesis must meet two requirements:
  • The hypothesis must be testable.
  • The hypothesis must be falsifiable (a hypothesis can never be conclusively proved; it is accepted only until it is disproved).

Determine Measures

  • The variables in the theoretical framework should be measurable in some way.
  • Some variables, such as employee unresponsiveness, cannot be measured quantitatively; such variables need to be operationalized.
  • Measurement of variables is discussed in Ch. 6 & 7.
  • Data with respect to each variable in the hypothesis need to be obtained.

There are two types of data:

-Quantitative data

-Qualitative data

  • Data Analysis
  • In this step, the data gathered are statistically analyzed to see if the hypotheses that were generated have been supported.
  • Analyses of both quantitative and qualitative data can be done to determine if certain relations are important.
  • Qualitative data refer to information gathered through interviews and observations. These data usually concern things that cannot be physically measured, such as feelings and attitudes.
  • Quantitative data refer to information gathered about objects that can be physically measured. The researcher could obtain these data through the company records, government statistics, or any formal records.
  • Now we must decide whether our hypotheses are supported by interpreting the results of the data analysis.
  • Based on these results, the researcher would make recommendations in order to solve the problem in hand.

Example 2.2 of the Application of the Hypothetico-Deductive Method

  • Observation of the CIO Dilemma

The Chief Information Officer (CIO) of a firm observes that the newly installed Management Information System (MIS) is not being used by middle managers as much as was originally expected.

“There is surely a problem here,” the CIO exclaims.

Example 2.2 (cont.)

  • Information Gathering through Informal Interviews

- Talking to some of the middle-level managers, the CIO finds that many of them have very little idea as to what MIS is all about, what kinds of information it could provide, and how to access it and utilize the information.

  • Obtaining More Information through Literature Survey

- The CIO immediately uses the Internet to explore further information on the lack of use of MIS in organizations.

- The search indicates that many middle-level managers are not familiar with operating personal computers.

- Lack of knowledge about what MIS offers is also found to be another main reason why some managers do not use it.

  • Formulating a Theory

- Based on all this information, the CIO develops a theory incorporating all the relevant factors contributing to the lack of access to the MIS by managers in the organization.

  • Hypothesizing

From such a theory, the CIO generates various hypotheses for testing, one among them being:

- Knowledge of the usefulness of MIS would help managers to put it to greater use.

  • Data Collection

The CIO then develops a short questionnaire on the various factors theorized to influence the use of the MIS by managers, such as:

  • The extent of knowledge of what MIS is
  • What kinds of information MIS provides
  • How to gain access to the information
  • The level of comfort felt by managers in using computers in general
  • How often managers have used the MIS in the preceding 3 months.

The CIO then analyzes the data obtained through the questionnaire to see what factors prevent the managers from using the system.

Based on the results, the CIO deduces or concludes that managers do not use the MIS owing to certain factors (a minimal sketch of such an analysis follows this example).

  • These deductions help the CIO to take the necessary actions to solve the problem, which might include, among other things:

- Organizing seminars for training managers on the use of computers, and

- Familiarizing them with the MIS and its usefulness.
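As a minimal sketch of the kind of analysis this example describes (not part of the original text; the file name, column names and data are all hypothetical), the questionnaire responses could be examined like this:

    # Minimal sketch: correlate the theorized factors with reported MIS use
    # (hypothetical questionnaire data, purely for illustration).
    import pandas as pd

    survey = pd.read_csv("mis_survey.csv")  # hypothetical file of manager responses

    factors = [
        "knowledge_of_mis",            # extent of knowledge of what MIS is
        "knows_information_provided",  # what kinds of information MIS provides
        "knows_how_to_access",         # how to gain access to the information
        "comfort_with_computers",      # comfort using computers in general
    ]

    # Correlate each factor with how often the MIS was used in the last 3 months.
    correlations = survey[factors].corrwith(survey["mis_use_last_3_months"])
    print(correlations.sort_values(ascending=False))

Factors showing the weakest association with use would then point the CIO towards the remedial actions listed above.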


Hypothetico-Deductive Method

The hypothetico-deductive method is one of the mainstays of scientific research, often regarded as the only 'true' scientific research method.

This area fuels intense debate and discussion between many fields of scientific specialization.

Concisely, the method involves the traditional steps of observing the subject, in order to elaborate upon an area of study. This allows the researcher to generate a testable and realistic hypothesis .

The hypothesis must be falsifiable by recognized scientific methods but can never be fully confirmed, because refined research methods may disprove it at a later date.

From the hypothesis , the researcher must generate some initial predictions, which can be proved, or disproved, by the experimental process . These predictions must be inherently testable for the hypothetico-deductive method to be a valid process.


For example, trying to test the hypothesis that God exists would be difficult, because there is no scientific way to evaluate it.


Generating and Analyzing the Data

The next stage is to perform the experiment, obtaining statistically testable results that can be analyzed to determine whether the hypothesis has validity or little foundation. This experiment must involve some manipulation of variables to allow the generation of analyzable data.

Finally, statistical tests will confirm whether the predictions are correct or not. This method is usually so rigorous that it is rare for a hypothesis to be completely proved, but some of the initial predictions may be correct and will lead to new areas of research and refinements of the hypothesis.
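As a hedged illustration of this step (the numbers are entirely hypothetical and not tied to any study mentioned here), a simple two-group comparison might look like:

    # Minimal sketch of a statistical test of a prediction (hypothetical data).
    from scipy import stats

    control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]    # hypothetical measurements
    treatment = [4.9, 5.1, 4.6, 5.3, 4.8, 5.0, 4.7, 5.2]  # hypothetical measurements

    # Independent-samples t-test: does the manipulated variable shift the outcome?
    t_stat, p_value = stats.ttest_ind(treatment, control)

    # A small p-value supports the prediction derived from the hypothesis,
    # but it never proves the hypothesis outright.
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")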


Assessing the Validity of the Hypothesis

Proving and confirming a hypothesis is never a clear-cut and definitive process. Statistics is a science based on probability, and however strong the results generated, there is always a chance of experimental error.

In addition, there may be another unknown reason that explains the results. Most theories, however solid the proof, develop and evolve over time, changing and adapting as new research refines the known data.

A hypothesis can never be proved with complete certainty, but after a process of debate and retesting of the results it may become a scientific assumption. Science is built upon these 'paradigms', and even commonly accepted views may prove to be inaccurate upon further exploration.

A false hypothesis does not necessarily mean that the area of research is now closed or incorrect. The experiment may not have been accurate enough, or there may have been some other contributing error.

This is why the hypothetico-deductive method relies on initial predictions; very few hypotheses, if the research is thorough, are completely wrong as they generate new directions for future research.


Martyn Shuttleworth (Oct 10, 2008). Hypothetico-Deductive Method. Retrieved Aug 04, 2024 from Explorable.com: https://explorable.com/hypothetico-deductive-method




7 - The hypothetico-deductive method

from II - Inductive and hypothetico-deductive methods

As the name indicates, there are at least two parts to the hypothetico-deductive (h-d) method: a hypothetico part, in which a hypothesis or theory, arising from whatever source, is proposed for test, and a deductive part, in which test consequences are drawn from the hypotheses. Unmentioned in the name of the method is a crucial third part, in which the deduced consequences are compared with experiment, or with what we can observe. The consequences pass or fail when the comparison is made. In some cases the hypothesis might be invented to account for some already known fact(s); it is then tested by deducing further consequences from it, which are then subject to test. An important question arises about how the pass or fail verdict is transmitted back to the hypothesis; this creates problems for the h-d method, as will be seen. The test consequences need not be obtained only by deduction; if the hypotheses are statistical then the consequences are inferred by non-deductive, or inductive, reasoning. So a better name might be the hypothetico-inferential method, to cover both cases of deductive and non-deductive inference.

The method has had a long history from the time of Plato, when it went by other names in his dialogues, such as “the method of hypothesis”. It was applied to science in medieval times and since then has had a long involvement with scientific method. It became central in the nineteenth-century debate over method between Whewell and J. S. Mill. Some say its day has now passed and its involvement in methodology is largely over. The task of this chapter is to spell out the nature of this method, and its strengths and weaknesses.


Robert Nola, University of Auckland. ‘The hypothetico-deductive method’, in Theories of Scientific Method. Online publication: 05 February 2013. Chapter DOI: https://doi.org/10.1017/UPO9781844653881.008


Chapter 4: Theory in Psychology

4.3 Using Theories in Psychological Research

Learning Objectives

  • Explain how researchers in psychology test their theories, and give a concrete example.
  • Explain how psychologists reevaluate theories in light of new results, including some of the complications involved.
  • Describe several ways to incorporate theory into your own research.

We have now seen what theories are, what they are for, and the variety of forms that they take in psychological research. In this section we look more closely at how researchers actually use them. We begin with a general description of how researchers test and revise their theories, and we end with some practical advice for beginning researchers who want to incorporate theory into their research.

Theory Testing and Revision

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method (although this term is much more likely to be used by philosophers of science than by scientists themselves). A researcher begins with a set of phenomena and either constructs a theory to explain or interpret them or chooses an existing theory to work with. He or she then makes a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researcher then conducts an empirical study to test the hypothesis. Finally, he or she reevaluates the theory in light of the new results and revises it if necessary. This process is usually conceptualized as a cycle because the researcher can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As Figure 4.5 “Hypothetico-Deductive Method Combined With the General Model of Scientific Research in Psychology” shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the book—creating a more detailed model of “theoretically motivated” or “theory-driven” research.

Figure 4.5 Hypothetico-Deductive Method Combined With the General Model of Scientific Research in Psychology


Together they form a model of theoretically motivated research.

As an example, let us return to Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This leads to social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969). The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory.

Constructing or Choosing a Theory

Along with generating research questions, constructing theories is one of the more creative parts of scientific research. But as with all creative activities, success requires preparation and hard work more than anything else. To construct a good theory, a researcher must know in detail about the phenomena of interest and about any existing theories based on a thorough review of the literature. The new theory must provide a coherent explanation or interpretation of the phenomena of interest and have some advantage over existing theories. It could be more formal and therefore more precise, broader in scope, more parsimonious, or it could take a new perspective or theoretical approach. If there is no existing theory, then almost any theory can be a step in the right direction.

As we have seen, formality, scope, and theoretical approach are determined in part by the nature of the phenomena to be interpreted. But the researcher’s interests and abilities play a role too. For example, constructing a theory that specifies the neural structures and processes underlying a set of phenomena requires specialized knowledge and experience in neuroscience (which most professional researchers would acquire in college and then graduate school). But again, many theories in psychology are relatively informal, narrow in scope, and expressed in terms that even a beginning researcher can understand and even use to construct his or her own new theory.

It is probably more common, however, for a researcher to start with a theory that was originally constructed by someone else—giving due credit to the originator of the theory. This is another example of how researchers work collectively to advance scientific knowledge. Once they have identified an existing theory, they might derive a hypothesis from the theory and test it or modify the theory to account for some new phenomenon and then test the modified theory.

Deriving Hypotheses

Again, a hypothesis is a prediction about a new phenomenon that should be observed if a particular theory is accurate. Theories and hypotheses always have this if-then relationship. “ If drive theory is correct, then cockroaches should run through a straight runway faster, and a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in Chapter 2 “Getting Started in Research” and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this is an interesting question on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991). Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the number of examples they bring to mind and the other was that people base their judgments on how easily they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Evaluating and Revising Theories

If a hypothesis is confirmed in a systematic empirical study, then the theory has been strengthened. Not only did the theory make an accurate prediction, but there is now a new phenomenon that the theory accounts for. If a hypothesis is disconfirmed in a systematic empirical study, then the theory has been weakened. It made an inaccurate prediction, and there is now a new phenomenon that it does not account for.

Although this seems straightforward, there are some complications. First, confirming a hypothesis can strengthen a theory but it can never prove a theory. In fact, scientists tend to avoid the word “prove” when talking and writing about theories. One reason for this is that there may be other plausible theories that imply the same hypothesis, which means that confirming the hypothesis strengthens all those theories equally. A second reason is that it is always possible that another test of the hypothesis or a test of a new hypothesis derived from the theory will be disconfirmed. This is a version of the famous philosophical “problem of induction.” One cannot definitively prove a general principle (e.g., “All swans are white.”) just by observing confirming cases (e.g., white swans)—no matter how many. It is always possible that a disconfirming case (e.g., a black swan) will eventually come along. For these reasons, scientists tend to think of theories—even highly successful ones—as subject to revision based on new and unexpected observations.

A second complication has to do with what it means when a hypothesis is disconfirmed. According to the strictest version of the hypothetico-deductive method, disconfirming a hypothesis disproves the theory it was derived from. In formal logic, the premises “if A then B ” and “not B ” necessarily lead to the conclusion “not A .” If A is the theory and B is the hypothesis (“if A then B ”), then disconfirming the hypothesis (“not B ”) must mean that the theory is incorrect (“not A ”). In practice, however, scientists do not give up on their theories so easily. One reason is that one disconfirmed hypothesis could be a fluke or it could be the result of a faulty research design. Perhaps the researcher did not successfully manipulate the independent variable or measure the dependent variable. A disconfirmed hypothesis could also mean that some unstated but relatively minor assumption of the theory was not met. For example, if Zajonc had failed to find social facilitation in cockroaches, he could have concluded that drive theory is still correct but it applies only to animals with sufficiently complex nervous systems.
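The inference pattern being appealed to here is modus tollens, which can be written out explicitly. In the schematic below, A stands for the theory and B for the hypothesis derived from it, matching the letters used in the paragraph above.

```latex
% Modus tollens applied to theory testing: if theory A implies hypothesis B,
% and B is disconfirmed, then (on the strictest reading) A is false.
\[
\frac{A \rightarrow B \qquad \neg B}{\therefore \neg A}
\]
```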

This does not mean that researchers are free to ignore disconfirmations of their theories. If they cannot improve their research designs or modify their theories to account for repeated disconfirmations, then they eventually abandon their theories and replace them with ones that are more successful.

Incorporating Theory Into Your Research

It should be clear from this chapter that theories are not just “icing on the cake” of scientific research; they are a basic ingredient. If you can understand and use them, you will be much more successful at reading and understanding the research literature, generating interesting research questions, and writing and conversing about research. Of course, your ability to understand and use theories will improve with practice. But there are several things that you can do to incorporate theory into your research right from the start.

The first thing is to distinguish the phenomena you are interested in from any theories of those phenomena. Beware especially of the tendency to “fuse” a phenomenon to a commonsense theory of it. For example, it might be tempting to describe the negative effect of cell phone usage on driving ability by saying, “Cell phone usage distracts people from driving.” Or it might be tempting to describe the positive effect of expressive writing on health by saying, “Dealing with your emotions through writing makes you healthier.” In both of these examples, however, a vague commonsense explanation (distraction, “dealing with” emotions) has been fused to the phenomenon itself. The problem is that this gives the impression that the phenomenon has already been adequately explained and closes off further inquiry into precisely why or how it happens.

As another example, researcher Jerry Burger and his colleagues were interested in the phenomenon that people are more willing to comply with a simple request from someone with whom they are familiar (Burger, Soroka, Gonzago, Murphy, & Somervell, 1999). A beginning researcher who is asked to explain why this is the case might be at a complete loss or say something like, “Well, because they are familiar with them.” But digging just a bit deeper, Burger and his colleagues realized that there are several possible explanations. Among them are that complying with people we know creates positive feelings, that we anticipate needing something from them in the future, and that we like them more and follow an automatic rule that says to help people we like.

The next thing to do is turn to the research literature to identify existing theories of the phenomena you are interested in. Remember that there will usually be more than one plausible theory. Existing theories may be complementary or competing, but it is essential to know what they are. If there are no existing theories, you should come up with two or three of your own—even if they are informal and limited in scope. Then get in the habit of describing the phenomena you are interested in, followed by the two or three best theories of it. Do this whether you are speaking or writing about your research. When asked what their research was about, for example, Burger and his colleagues could have said something like the following:

It’s about the fact that we’re more likely to comply with requests from people we know [the phenomenon]. This is interesting because it could be because it makes us feel good [Theory 1], because we think we might get something in return [Theory 2], or because we like them more and have an automatic tendency to comply with people we like [Theory 3].

At this point, you may be able to derive a hypothesis from one of the theories. At the very least, for each research question you generate, you should ask what each plausible theory implies about the answer to that question. If one of them implies a particular answer, then you may have an interesting hypothesis to test. Burger and colleagues, for example, asked what would happen if a request came from a stranger whom participants had sat next to only briefly, did not interact with, and had no expectation of interacting with in the future. They reasoned that if familiarity created liking, and liking increased people’s tendency to comply (Theory 3), then this situation should still result in increased rates of compliance (which it did). If the question is interesting but no theory implies an answer to it, this might suggest that a new theory needs to be constructed or that existing theories need to be modified in some way. These would make excellent points of discussion in the introduction or discussion of an American Psychological Association (APA) style research report or research presentation.

When you do write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Key Takeaways

  • Working with theories is not “icing on the cake.” It is a basic ingredient of psychological research.
  • Like other scientists, psychologists use the hypothetico-deductive method. They construct theories to explain or interpret phenomena (or work with existing theories), derive hypotheses from their theories, test the hypotheses, and then reevaluate the theories in light of the new results.
  • There are several things that even beginning researchers can do to incorporate theory into their research. These include clearly distinguishing phenomena from theories, knowing about existing theories, constructing one’s own simple theories, using theories to make predictions about the answers to research questions, and incorporating theories into one’s writing and speaking.
  • Practice: Find a recent empirical research report in a professional journal. Read the introduction and highlight in different colors descriptions of phenomena, theories, and hypotheses.

Burger, J. M., Soroka, S., Gonzago, K., Murphy, E., & Somervell, E. (1999). The effect of fleeting attraction on compliance to requests. Personality and Social Psychology Bulletin, 27 , 1578–1586.

Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61 , 195–202.

Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach. Journal of Personality and Social Psychology, 13 , 83–92.

  • Research Methods in Psychology. Provided by : University of Minnesota Libraries Publishing. Located at : http://open.lib.umn.edu/psychologyresearchmethods/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Hypothetico-deductive Method


The hypothetico-deductive (HD) method, sometimes called the scientific method, is a cyclic pattern of reasoning and observation used to generate and test proposed explanations (i.e., hypotheses and/or theories) of puzzling observations in nature. The goal of the method is to derive useful knowledge – in the sense that causes are determined such that reliable predictions about future events can be made. The term "method" may be somewhat misleading as use of the HD method does not ensure success. The method may fail for a variety of reasons, not least that the "correct" causal explanation may not occur to the scientist, that effective ways of testing proposed causes may not be devised, and that proposed tests may not be feasible with available technology or funding. The following seven steps and four inferences are involved:

Scientists undertake explorations that lead to puzzling observations. For example, in 1610, Galileo used his newly invented telescope to observe...

Source: Lawson, A.E. (2015). Hypothetico-deductive Method. In: Gunstone, R. (ed.), Encyclopedia of Science Education. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-2150-0_260

2.4 Developing a Hypothesis

Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition. He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A hypothesis, on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study. They are developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often but not always derived from theories. So a hypothesis is often a prediction based on a theory, but some hypotheses are a-theoretical; only after a set of observations has been made is a theory developed. This is because theories are broad in nature and they explain larger bodies of data. So if our research question is really original then we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses always have this  if-then  relationship. “ If   drive theory is correct,  then  cockroaches should run through a straight runway faster, and a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter  and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this  question  is an interesting one  on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [1] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the  number  of examples they bring to mind and the other was that people base their judgments on how  easily  they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method  (although this term is much more likely to be used by philosophers of science than by scientists themselves). A researcher begins with a set of phenomena and either constructs a theory to explain or interpret them or chooses an existing theory to work with. He or she then makes a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researcher then conducts an empirical study to test the hypothesis. Finally, he or she reevaluates the theory in light of the new results and revises it if necessary. This process is usually conceptualized as a cycle because the researcher can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As  Figure 2.2  shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook—creating a more detailed model of “theoretically motivated” or “theory-driven” research.

Figure 2.2 Hypothetico-Deductive Method Combined With the General Model of Scientific Research in Psychology. Together they form a model of theoretically motivated research.

As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [2]. The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory. (In later studies, Zajonc also showed that the same drive-theory effects appear in humans; Zajonc & Sales, 1966 [3].)
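As a concrete illustration of how such a directional prediction can be checked against data, here is a small Python sketch. The completion times and labels are invented for illustration, and the simple pattern check is an assumption of this sketch, not Zajonc's actual data or analysis.

```python
# Invented mean escape times (in seconds) for the four cells of a Zajonc-style design.
# Drive theory predicts that an audience speeds up the easy task (facilitation)
# and slows down the difficult task (inhibition).
times = {
    ("easy", "alone"): 40.0,
    ("easy", "audience"): 33.0,    # faster with an audience
    ("hard", "alone"): 110.0,
    ("hard", "audience"): 130.0,   # slower with an audience
}

facilitation = times[("easy", "audience")] < times[("easy", "alone")]
inhibition = times[("hard", "audience")] > times[("hard", "alone")]

if facilitation and inhibition:
    print("The observed pattern matches the prediction derived from drive theory.")
else:
    print("The observed pattern does not match the prediction; the theory is weakened.")
```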

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research will not only give you guidance in coming up with experiment ideas and possible projects, but it will also lend legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher rather than limit you in developing your own ideas.

Characteristics of a Good Hypothesis

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable . We must be able to test the hypothesis using the methods of science and if you’ll recall Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false. Second, a good hypothesis must be  logical. As described above, hypotheses are more than just a random guess. Hypotheses should be informed by previous theories or observations and logical reasoning. Typically, we begin with a broad and general theory and use  deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use  inductive reasoning  which involves using specific observations or research findings to form a more general hypothesis. Finally, the hypothesis should be  positive.  That is, the hypothesis should make a positive statement about the existence of a relationship or effect, rather than a statement that a relationship or effect does not exist. As scientists, we don’t set out to show that relationships do not exist or that effects do not occur so our hypotheses should not be worded in a way to suggest that an effect or relationship does not exist. The nature of science is to assume that something does not exist and then seek to find evidence to prove this wrong, to show that really it does exist. That may seem backward to you but that is the nature of the scientific method. The underlying reason for this is beyond the scope of this chapter but it has to do with statistical theory.
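The "positive statement" requirement is easiest to see in how a hypothesis is actually evaluated: the statistical test starts from a null hypothesis of no effect and asks whether the data provide evidence against it. The sketch below uses invented scores, and the choice of an independent-samples t-test and a 0.05 threshold are illustrative assumptions, not something prescribed by the text.

```python
# Null hypothesis: there is no difference between the groups.
# Positive (research) hypothesis: the treatment group scores higher than the control group.
from scipy import stats

control = [12, 15, 11, 14, 13, 12, 16, 13]      # invented scores
treatment = [16, 18, 15, 17, 19, 16, 18, 17]    # invented scores

t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance threshold
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null of no effect; the data support the hypothesis.")
else:
    print(f"p = {p_value:.3f}: insufficient evidence against the null of no effect.")
```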

Key Takeaways

  • A theory is broad in nature and explains larger bodies of data. A hypothesis is more specific and makes a prediction about the outcome of a particular study.
  • Working with theories is not “icing on the cake.” It is a basic ingredient of psychological research.
  • Like other scientists, psychologists use the hypothetico-deductive method. They construct theories to explain or interpret phenomena (or work with existing theories), derive hypotheses from their theories, test the hypotheses, and then reevaluate the theories in light of the new results.
  • Practice: Find a recent empirical research report in a professional journal. Read the introduction and highlight in different colors descriptions of theories and hypotheses.
  • Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.
  • Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach. Journal of Personality and Social Psychology, 13, 83–92.
  • Zajonc, R. B., & Sales, S. M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2, 160–168.


Hypothetico-deductive (HD) method

Chapter 4: Methodology
4.4 Research Design and Methods
4.4.2 Hypothetico-deductive (HD) Method

As explained above, in line with a positivist research approach, the overall method for addressing the research objectives and answering the research questions followed the HD method in the application of a number of distinct stages which were adapted from Jankowicz (2005) and Blaikie (2010), as follows:

1. Putting forward a tentative idea, a premise, a hypothesis (a testable proposition about the relationship between two or more concepts or variables) or set of hypotheses to form a theory. Through critical analysis of the strategic pay concept, a conceptual framework was developed, modelling the relationships between pay practices, business strategy, employment group, industry sector and organisation size and HR performance outcomes.

2. Using existing literature, deduce a testable proposition or number of propositions. Ten hypotheses were developed based on the theoretical propositions that pay practices have an effect on HR performance outcomes, pay will be dependent on organisational contingencies and pay aligned with organisational contingencies will have a positive effect on HR performance outcomes.

3. Examining the premises and the logic of the argument that produced them, comparing this argument with existing theories to see if it offers an advance in understanding. An iteration between literature and the developing model took place and clear potential for an advance in knowledge and understanding emerged relating to the interaction of variables as well as the integration of universalistic and alignment approaches to strategic pay.

4. Testing the premises by collecting appropriate data to measure the concepts or variables and analysing them. The variables 'pay practices', 'business strategy', 'employment group', 'industry sector', 'organisation size' and 'HR performance outcomes' were operationally defined. Quantitative data were collected using an organisational-level survey questionnaire and analysed using a range of statistical tests.

5. Drawing implications for the verification or falsification of the theory. Conclusions were reached using deductive and inductive reasoning relating to verification of hypothetical statements. Implications for the validity of the strategic pay model were drawn and an extended strategic pay framework developed.

4.4.3 Sampling

4.4.3.1 Population and sampling frame

The population of interest to this study is UK private sector organisations. The strategic pay concept has been largely developed by United States (US) researchers and theorists; a contribution of this research project is the testing of the strategic pay model within a UK context, on organisations operating in the UK (although not necessarily UK-owned).

Because ‘business strategy’ as defined by Miles and Snow (Miles et al., 1978; 1984) and Porter (1980, 2004) is a key variable, it was decided to admit data solely from private sector organisations as opposed to public sector and third sector (charity and not-for-profit) organisations which, while they may have business-like strategies, do not fit the standard business strategy typologies well. While the CIPD Reward Management Survey collects data from all three organisational groups, for the purposes of this study only private sector data was analysed.

In order to collect organisational-level data from a sample of UK private sector organisations, there needed to be representative survey respondents within each organisation who could act as a proxy respondent on behalf of their organisation (Lavrakas, 2008). Balkin and Gomez-Mejia (1990) and Montemayor (1996) both used senior HR professionals as informants because they were likely to be intimately involved in formulating pay practices as well as possessing knowledge of business and pay strategy. In this research study too, participants were required to possess a high level of knowledge and understanding of the technicalities of pay practice in order for them to respond accurately to survey questions; they also needed to have knowledge of, and access to data on, organisational operations, business strategy and HR outcomes. Practising HR/reward professionals within each organisation were therefore the target proxy respondents because they were likely to have both the required knowledge and access to data.

Having identified a very broad scope for the population to be researched, it was necessary to obtain a representative sample (De Vaus, 2002). The Chartered Institute of Personnel and Development (CIPD) is the professional body for HR practitioners in the UK which had a membership of approximately 135,000 in 2012 (CIPD, 2013). Approximately 14,000 of these CIPD members had responsibility for pay and reward management and this large body of reward professionals provided an appropriate sampling frame from which to draw a sample (De Vaus, 2002). Saunders et al. (2015) stress the importance of accurate and up-to-date information when using membership databases such as the one held by CIPD. While the possibility of the database containing out-of-date contact details or email addresses was evident, the fact that to retain current CIPD membership (and therefore appear on the current membership database) individuals were required to subscribe at least annually, meant there was a likelihood that the contact information would be largely accurate.

4.4.3.2 Sample biases

4.4.3.2.1 Sample selection bias

The selection of private sector organisations via the CIPD membership database of HR/reward professionals as representative of the population of UK private sector organisations was made with some caution. There is the clear possibility of sample selection bias (Heckman, 1979; Berk, 1983) where the sample selected is not representative of the entire population because the sample is drawn from a sampling frame that differs from the population. And indeed, a crucial difference between the entire population of private UK organisations and the ones in the sample obtained is that the organisations in this study all employed HR/reward professionals who were CIPD members whereas there are many organisations in the UK private sector who do not. Indeed, it is reasonable to assume that there are many organisations which do not employ HR professionals at all. It is possible that the presence of HR professionals and CIPD members in many organisations materially influenced some of the variables in this study (e.g. pay practices and HR outcomes). It is worth then assessing the likely impact of this possible bias. The aim of this research is to evaluate the extent of strategically aligned pay practices and their impact on HR outcomes and it is arguable that organisations employing HR practitioners who are members of the professional body, and indeed are actively engaging in research conducted by the professional body, are more likely to be at the forefront of normative strategic pay practice than organisations within the general population. Therefore, the risk is that results may overstate the incidence of strategic pay practices and this has to be considered in the final analysis of results. The corollary of this argument is that if strategic pay is not being practised in the sample organisations, it is perhaps less likely to be practised in the population of UK organisations.

4.4.3.2.2 Non-response bias

The entire sampling frame of 14,000 CIPD members with responsibility for pay/reward was invited to participate in the survey in 2012. This was standard practice for CIPD research where inclusive, open invitations to participate were preferred to some form of probability sampling technique which would have limited the number of opportunities organisations had to participate in research. In addition, it was recognised that the complexity and depth of the questionnaire was likely to dissuade or exempt some potential respondents and so a large sample was contacted in anticipation of a poor response rate. Indeed, the 2011 CIPD survey which acted as a pilot (see section 4.4.4.2. below) showed a response rate of 1.98% could be expected from a similar sized sample. In essence, the sampling strategy was to aim primarily for high quality responses and it was accepted that a likely consequence would be a low response rate.

Of course, this approach came with some disadvantages, primarily that, although all organisations had an equal chance of participating, the final sample was comprised of organisations that had self-selected into participation. This had the potential to be problematic as it could have created non-response bias in the results, i.e. the responses of those that responded compared with those that did not respond could have been different and this could have influenced the end results (Cascio, 2012). The difference between non-respondents and respondents within the sampling frame is quite difficult to assess. Non-response could have been due to a number of passive factors such as unavailability or incorrect contact details or more salient, active non-response issues such as perceptions of sharing bad practice or even fear of reprisals (Thompson and Surface, 2007). Because the researcher's access to the CIPD membership database was restricted due to data protection, it was not possible to run tests for non-response bias (e.g. ANOVA) between responding and non-responding organisations. Although this issue does need to be factored in to the final assessment of results, similar studies (e.g. Delery and Doty, 1996; Lepak and Snell, 2002) found no significant differences between respondents and non-respondents, suggesting that this may not be a significant problem in studies of this type.

4.4.3.3 Generalisability

Taking the potential for sample biases into account, it is not reasonable to claim that the results of this study are generalisable to all UK private sector organisations. However, it is possible to generalise these results to organisations operating in the UK private sector that have internal HR/pay expertise. While clearly a limitation of generalisability, this focused applicability nevertheless allows for generalisation to a wide range of organisations; indeed, WERS and CIPD data suggest nearly a third of workplaces in the UK have an HR specialist present (Brown, Bryson, Forth and Whitfield, 2009; CIPD, 2014). Furthermore, as noted above, it might be reasonable to assume that organisations with HR expertise are more likely to be practising strategic pay than those with none, given the increasing professionalisation of the function and its emphasis on 'strategic' practices (CIPD, 2014). It is among these organisations that any effects should be evident, and it is on this basis that the study contributes to knowledge about relationships between pay, strategy and performance.

4.4.3.4 Sample size

The minimum recommended sample size for this study was based on an assessment of the required degree of accuracy for the sample and the extent to which there is variation in the population regarding key variables (De Vaus, 2002). On the basis of the first consideration alone, accuracy, in aiming for a 95% confidence level with a sampling error of 5% (i.e. 95% confidence that the results in the population will be the same as in the sample plus or minus 5%) a minimum sample of 400 would be required (Ibid.). However, the second factor to be considered is the degree of diversity in key variables in the study and this can influence the minimum required sample size (Saunders et al., 2015). The relevant variables for this study are industry sector (manufacturing / production or private sector services) and organisation size (SME or large). As will be explained below, business strategy was measured on a scored scale (1-5) and employment group was measured on an intra-organisation basis (i.e. most organisations contained both employee groups rather than one or the other) and therefore calculations relating to these variables were not influenced by sample size. Data from the 2011 CIPD Reward Management Survey, which acted as a pilot study, was used to establish the likely proportions of industry sector and organisation size categories. 2011 data showed the split of 24% manufacturing / production to 76% private sector services and 27% SMEs to 73% large companies from a total of 182 private sector respondents. These figures were used to calculate the minimum sample required following Saunders et al.'s (2015, p.704) formula:

n = p% × q% × (z ÷ e%)²

(where n is the minimum sample size required, p% is the percentage belonging to the specified category, q% is the percentage not belonging to the specified category, z is the z value corresponding to the level of confidence required (always 1.96 for a 95% confidence level), and e% is the margin of error required)

So, for industry sector the calculation of minimum required sample size was:

n = 24% × 76% × (1.96 ÷ 5)² = 1824 × (0.392)² = 1824 × 0.154 = 281

And for organisation size the calculation of minimum required sample size was:

n = 27% × 73% × (1.96 ÷ 5)² = 1971 × (0.392)² = 1971 × 0.154 = 304

Therefore, a figure of approximately 300 was considered an appropriate overall sample size for this study which would provide a 95% confidence level with a sampling error of 5%.
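As an illustration of this calculation, the following short Python sketch implements the Saunders et al. formula with the 2011 pilot proportions quoted above. The sketch is not part of the original survey methodology; the function name and the decision to round up to a whole respondent are illustrative choices.

```python
# A minimal sketch of the minimum-sample-size formula quoted above:
# n = p% x q% x (z / e%)^2, with p and q expressed as percentages (e.g. 24 and 76).
import math

def minimum_sample_size(p_percent: float, z: float = 1.96, e_percent: float = 5.0) -> int:
    """Minimum sample size for a category occurring in p_percent of the population."""
    q_percent = 100.0 - p_percent                  # percentage not in the category
    n = p_percent * q_percent * (z / e_percent) ** 2
    return math.ceil(n)                            # round up to a whole respondent

# 2011 pilot proportions used in the text:
print(minimum_sample_size(24))  # industry sector (24% manufacturing/production): 281
print(minimum_sample_size(27))  # organisation size (27% SMEs): 303
                                # (the thesis, rounding 0.392 squared to 0.154, reports 304)
```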

4.4.4 Data collection

The main data collection phase took place with the CIPD Reward Management Survey 2012 which was used to gather a quantitative dataset using a closed-ended, web-based questionnaire. The survey collected organisational-level data on: industry sector, size (employee numbers) and geographical ownership; business strategy; employee demographics; pay and benefits practices by employee category; pay transparency and HR outcomes. Not all data collected was relevant to the thesis but was a necessary part of CIPD’s benchmarking research report (the full survey instrument is reproduced in Appendix B).

4.4.4.1 CIPD Reward Management Surveys

The Chartered Institute of Personnel and Development (CIPD) have been annually surveying their professional membership and publishing the results in survey reports since 2001. In late 2010 the CIPD contracted the University of Bedfordshire (and later London Metropolitan University) to produce survey reports bringing the expertise of an academic team (see Appendix A for team membership in 2012). The researcher was the lead in terms of questionnaire design (developing an existing survey instrument); data analysis and interpretation; and survey report writing with the other academic members of the team and the CIPD’s senior adviser on performance and reward management contributing advice and guidance as well as input to the final published report (e.g. foreword / conclusion). The output of this collaborative project was the production of survey reports from 2011-17 (albeit with changing team personnel). The involvement of the researcher and academic team brought a more theoretical perspective to the research with the intention of examining pay issues in greater depth, particularly the relationships between strategy, pay and HR outcomes. During completion of the 2011 survey report it became clear that there was an opportunity for a notable contribution to theoretical and empirical studies of strategic pay, but that a project of such depth was beyond the scope of the annual CIPD survey reports. Nevertheless, the potential for the CIPD reward management data set to be expanded to collect relevant data was clear. The researcher, independently of the CIPD survey project but within the remit of the contract for services between the Universities and CIPD (see Appendix A4 and A.5), embarked upon the present study.

4.4.4.2 Development of 2012 CIPD survey

The 2011 CIPD Reward Management Survey was based on previous iterations of CIPD’s Reward Management Surveys and provided data on pay practices, industry sector, organisation size and employee category. The researcher added to - and amended - previous 2011 survey questions to compile the 2012 survey instrument, for example by gathering data relating to different aspects of pay practice, transparency, business strategy and HR outcomes. The 2011 survey participants’ feedback regarding user-friendliness helped to shape the wording of the 2012 survey questions as well as the sequence and layout of this survey instrument. The 2011 survey was also used to facilitate initial development of some of the elements of the theoretical framework of the thesis and to provide information to calculate the required minimum sample size necessary for the 2012 data collection forming the basis of this research.

4.4.4.3 Variable definitions and measures

The definition of variables and how they will be measured is a fundamental aspect of the HD method outlined above (Jankowicz, 2005). The variables that form each element of the theoretical framework detailed in Chapter 3 have been defined and operationalised based on a synthesis of the theoretical and empirical studies examined. The following sub-sections each relate to the main variables in this study: pay practices, business strategy, industry sector, organisation size, employment group and HR performance outcomes.

4.4.4.3.1 Pay practices

Pay practices were categorised as either experiential or algorithmic based on relevant literature, as presented in Table 3.3 in Chapter 3. Table 4.1 shows these categorisations, provides some tighter definitions and indicates in the final column the relevant survey question used to gather data (see Appendix B for all survey questions).

Table 4.1 Experiential and algorithmic pay configurations (columns: experiential pay | algorithmic pay | survey question)

  • Broadbanding or job family structures | Narrow-graded pay structures or pay …
  • Low vertical pay dispersion | High vertical pay dispersion | Q31
  • Above market pay (upper quartile or decile of market) | At or below market pay (median, lower quartile or decile of market) | Q30
  • Organisation's 'ability to pay' for pay level determination and reviews | Q6 & Q8
  • Market rates to determine pay*; market rates to progress pay; movement in market rates, and recruitment and retention as pay review factors | Job evaluation to determine pay* | Q6, Q7, Q8
  • Performance, skills, competencies or employee value / retention as criteria for pay progression | Length of service as a criterion for pay progression | Q7
  • Individual base pay rates / salaries | Collective bargaining | Q5 & Q6
  • Extensive performance-related reward (PRR); extensive employee coverage of PRR schemes | Minimal or no performance-related reward (PRR); minimal employee coverage of PRR schemes | Q9, Q12
  • Performance-related scheme types: combination performance-related schemes (org./group/indiv.), individual bonus / cash incentives, merit pay, gainsharing, goal-sharing, profit-sharing, piece rates, sales commission | Q10, Q11
  • Open pay | Pay secrecy | Q32
  • Long-term pay (share schemes / long- …

Note. * Q6 of the CIPD Reward Management Survey (Appendix B) asked which factor is most important in determining pay. The CIPD response options were 'market rates (with JE)' or 'market rates (without JE)'; therefore, when testing for the variable 'job evaluation', 'market rates (with JE)' was used.

The categorisation of pay practices as either experiential or algorithmic was largely based on the framework as originally conceived by Balkin and Gomez-Mejia (1990) and Gomez-Mejia and Balkin (1992) incorporating elements from the strategic / traditional pay model charted by Lawler (1990) and Schuster and Zingheim (1992) as well as amendments and additions from Hambrick and Snow (1989); Boyd and Salamin (2001); Heneman and Dixon (2001); Allen and Helms (2002); Yanadori and Marler (2006); Chen and Jermias (2014); and Tenhiälä and Laamanen (2016).

Some of these categorisations are therefore slightly different from the original Balkin and Gomez-Mejia (1990) groupings. The most obvious one is the categorisation of ‘above market pay’ as an experiential pay practice and ‘at or below market pay’ as an algorithmic pay practice. As detailed in Chapter 3, Miles and Snow (1984) and Balkin and Gomez-Mejia (1990) propose that above market pay is included in the algorithmic configuration because it is a consequence of the low-road organisation’s emphasis on internal equity, minimal risk-sharing and high job security. More recent studies by Boyd and Salamin (2001) and Yanadori and Marler (2006) however find that it is instead high-road firms that pay above the market. This supports Hambrick and Snow’s (1989) argument that because low-road firms prioritise cost-control and minimising costs, they are more likely to pay at or below market pay whereas because working for a high-road firm means less security and higher risk, higher salaries are needed to attract and retain the talent required to pursue a strategy of innovation or quality. It is this argument that has led to the decision to place ‘above market pay’ in the experiential configuration and ‘at or below market pay’ in the algorithmic configuration but with an acknowledgement that this is a contestable categorisation.

Similarly, the categorisation of the use of competency pay as an experiential pay practice was not straightforward. For Lawler (1990), rewarding the development and demonstration of organisationally desired competencies is a key aspect of strategic pay and was thus included in the experiential pay configuration. However, competencies also feature as an element of the internal/make HR configurations outlined by Delery and Doty (1996) and Youndt and Snell (2004) because these systems are intended to develop competencies internally rather than acquiring them from the market; competency pay could therefore easily sit within the algorithmic pay configuration.

Both of the above cases are examples of slightly unclear alignments with arguments in the literature for inclusion in either pay category. It is therefore only possible to describe the configurations presented in Table 3.4, Chapter 3 as 'broadly aligned', recognising that there is a …



Hypothetico-deductive Method


Meaning of Hypothetico-deductive Method

The hypothetico-deductive method, also called the H-D method, is an approach to research, or a procedure for constructing a scientific theory, that accounts for results obtained through direct observation and experimentation and that, through inference, predicts further effects which can then be verified or disproved by empirical evidence derived from other experiments.

An early version of the hypothetico-deductive method was put forward by the Dutch physicist Christiaan Huygens (1629–95). The method generally assumes that properly formed theories are conjectures proposed to explain a set of observable data. On this view, such hypotheses cannot be conclusively established until the consequences that logically follow from them are verified through further observations and experiments.

The hypothetico-deductive method is based on the idea of falsification and is an alternative to inductivism. It treats theory as a deductive system in which particular empirical phenomena are explained by relating them back to general principles and definitions, and it assumes that the validity of a theory is judged only by the light its consequences throw on previously unexplained phenomena or on actual scientific problems.

The philosopher Karl Popper argued that a scientific theory can never be confirmed as correct by induction, because no amount of evidence guarantees that contrary evidence will not be found. Instead, Popper held that proper science proceeds by deduction and involves the process of falsification. Falsification is a distinctive feature of hypothesis testing: it consists of deriving a prediction from a theory and then searching for contrary cases through experiments or observations. This methodology proposed by Popper is commonly known as the hypothetico-deductive method.

According to this method, scientific inquiry proceeds by formulating a hypothesis and then subjecting it to rigorous tests against observable empirical data. The hypothetico-deductive method is thus one of the cornerstones of scientific research and is often regarded as the only truly scientific research method.

Unlike the deductive method, the hypothetico-deductive method places greater emphasis on rejecting an incorrect or unverifiable hypothesis and on putting effort into forming a revised hypothesis that is testable. It therefore proposes an ever-improving cycle of hypothesis formation, refutation and revision that continues until the hypothesis is verified.

The hypothetico-deductive method confirms a theory when the gap between prediction and observation is small and disconfirms it when the gap is large. Much of the focus of scientific methodology is on reducing the gap between predictions and observations.

Steps in Hypothetico-deductive Method

The hypothetico-deductive method begins with the postulation of a hypothesis about some general phenomenon. From the stated hypothesis, initial predictions are generated, and these can be proved or disproved through experiment. These predictions must be intrinsically testable for the hypothetico-deductive method to be a valid process.

The stated hypothesis must therefore be testable and pragmatic. The proposed hypothesis is tested through analysis of data collected from observation and other data-collection techniques. If the hypothesis is refuted, a revised hypothesis is formed and the same process is repeated. If the hypothesis is confirmed, the theory is confirmed. For this reason, the method is regarded as a scientific method of research. The following chart shows the process of the hypothetico-deductive method of study.

Steps involved in the hypothetico-deductive method

The chart above shows the process of the hypothetico-deductive method of scientific study. The process largely parallels the deductive method, but the most significant break between the two methods occurs at the point where a stated hypothesis is rejected. The deductive method does not call for revising the rejected hypothesis and re-running the test; the hypothetico-deductive method, by contrast, advocates revising the hypothesis and repeating the testing process until the hypothesis is verified.

The hypothetico-deductive method is widely regarded as a genuinely scientific research method in which testable hypotheses are set up and then judged against empirical data and observation. Falsification is central to the method: it involves stating a specific prediction and then seeking contrary evidence through experiments or observations. Under this model, a hypothesis is used to generate predictions, an experiment is conducted, and the consequences are derived and tested against the hypothesis. If the hypothesis survives the test, it supports the theory; if it is falsified, the cycle of observation and experimentation is repeated.
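To make this hypothesise-test-revise loop concrete, here is a minimal Python sketch that is not drawn from the source text. The example (a hypothesis about how often a coin lands heads), the sample size, the tolerance used as a crude test, and the revision rule are all illustrative assumptions.

```python
# A toy version of the cycle described above: state a hypothesis, derive a prediction,
# test it against (simulated) observations, and revise the hypothesis if it is refuted.
import random

random.seed(42)

def run_experiment(n_flips=200, true_p_heads=0.65):
    """Simulate an experiment: count heads in n_flips tosses of a biased coin."""
    return sum(random.random() < true_p_heads for _ in range(n_flips))

def test_prediction(heads, n_flips, predicted_p, tolerance=0.07):
    """Crude test: is the observed proportion of heads close to the predicted one?"""
    observed_p = heads / n_flips
    return abs(observed_p - predicted_p) <= tolerance, observed_p

predicted_p = 0.5  # initial hypothesis: "the coin is fair"

for attempt in range(3):
    heads = run_experiment()
    retained, observed_p = test_prediction(heads, 200, predicted_p)
    status = "retained" if retained else "refuted"
    print(f"attempt {attempt + 1}: predicted {predicted_p:.2f}, "
          f"observed {observed_p:.2f}, hypothesis {status}")
    if retained:
        break
    predicted_p = observed_p  # revise the hypothesis and test it again on new data
```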



  14. Theory Construction Methodology: A Practical Framework for Building

    Even a cursory look through methodology textbooks in psychology shows that an endorsement of the hypothetico-deductive method is widespread in the discipline; the idea that hypothesis testing should be psychologists' primary focus when it comes to the scientific method, or even that science is defined by hypothesis testing, is deeply ...

  15. Hypothetico-deductive (HD) method

    4.4.2 Hypothetico-deductive (HD) method As explained above, in line with a positivist research approach, the overall method for addressing the research objectives and answering the research questions followed the HD method in the application of a number of distinct stages which were adapted from Jankowicz (2005) and Blaikie (2010), as follows:

  16. Hypothetico

    The first is that there is an important similarity between this issue and the issue of the strength of hypothetico-deductive reasoning in general. Of course, views on hypothetico-deduction in science span the range from utter rejection (e.g., Popper) to claims that it is the central form of reasoning in science (e.g., [Lawson, 2000]).

  17. DOCX 1. Develop a sample research proposal

    1. Develop a sample research proposal . to represent. the various steps of hypothetico- deductive method. (10 Marks) Ans 1. Introduction. Science depends upon numerous foundations that incorporate the processes of reasoning, logic, and values to perform research. Depending on the research methods, the research base is scientific reasoning.

  18. Hypothetico-Deductive Method in Business Research

    It involves making a specific statement and then finding contrary evidence through experiments or observations. This is known as the hypothetico-deductive method. Scientific Method. Being scientific in nature, the aforesaid method is common to all disciplines like economics, physics or biochemistry.

  19. Deductive, Inductive and Hypothetico-deductive methods

    There are three methods of scientific enquiry namely inductive method, deductive method and hypothetico-deductive method. 1) Deductive reasoning (general to specific): This method of reasoning was developed by Aristotle. It uses general principles to predict specific results. Deductive reasoning starts from a general principle or theory, and ...

  20. Hypothetico-deductive Method-Microeconomic Study

    This methodology proposed by Popper is commonly known as the hypothetico-deductive method. According to this method, the scientific inquisition proceeds by formulating a hypothesis and then falsified by vigorous tests on visible empirical data. Thus the hypothetico-deductive is one of the keystone foundations of scientific research and is ...