Scientific Thinking and Reasoning
Philosophy of Science, November 2012
University of Maryland, College Park; Carnegie Mellon University
Developing Scientific Thinking and Research Skills Through the Research Thesis or Dissertation
- First Online: 22 September 2019
Gina Wisker
This chapter explores the higher-level scientific thinking skills that research students need to develop during their research learning journeys towards the dissertation or thesis at postgraduate level, and towards the final-year undergraduate (Australian honours year) dissertation. A model of four quadrants is introduced. Practice- and experience-informed examples show how higher-order skills can be realised and embedded so that they become established ways of thinking, researching, creating, and expressing knowledge and understanding.
Author information
Gina Wisker, University of Brighton, Brighton, UK; University of Johannesburg, Johannesburg, South Africa

Editor information
Mari Murtonen, Faculty of Education and Culture, Tampere University, Tampere, Finland
Kieran Balloo, Department of Higher Education, University of Surrey, Guildford, UK
Copyright information
© 2019 The Author(s)
About this chapter
Wisker, G. (2019). Developing Scientific Thinking and Research Skills Through the Research Thesis or Dissertation. In: Murtonen, M., Balloo, K. (eds) Redefining Scientific Thinking for Higher Education. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-24215-2_9
DOI: https://doi.org/10.1007/978-3-030-24215-2_9
Published: 22 September 2019
Publisher: Palgrave Macmillan, Cham
Print ISBN: 978-3-030-24214-5
Online ISBN: 978-3-030-24215-2
5 Scientific Thinking
Learning Objectives
- Describe the principles of the scientific method and explain its importance in conducting and interpreting research.
- Differentiate laws from theories and explain how research hypotheses are developed and tested.
- Identify the role of the research hypothesis in psychological research.
Psychologists aren’t the only people who seek to understand human behavior and solve social problems. Philosophers, religious leaders, and politicians, among others, also strive to provide explanations for human behavior. But psychologists believe that research is the best tool for understanding human beings and their relationships with others. Rather than accepting the claim of a philosopher that people do (or do not) have free will, a psychologist would collect data to empirically test whether or not people are able to actively control their own behavior. Rather than accepting a politician’s contention that creating (or abandoning) a new center for mental health will improve the lives of individuals in the inner city, a psychologist would empirically assess the effects of receiving mental health treatment on the quality of life of the recipients. The statements made by psychologists are empirical, which means they are based on systematic collection and analysis of data.
The Scientific Method
All scientists (whether they are physicists, chemists, biologists, sociologists, or psychologists) are engaged in the basic processes of collecting data and drawing conclusions about those data. The methods used by scientists have developed over many years and provide a common framework for developing, organizing, and sharing information. The scientific method is the set of assumptions, rules, and procedures scientists use to conduct research.
In addition to requiring that science be empirical, the scientific method demands that the procedures used be objective, or free from the personal bias or emotions of the scientist. The scientific method describes how scientists collect and analyze data, how they draw conclusions from data, and how they share data with others. These rules increase objectivity by placing data under the scrutiny of other scientists and even the public at large. Because data are reported objectively, other scientists know exactly how the scientist collected and analyzed the data. This means that they do not have to rely only on the scientist’s own interpretation of the data; they may draw their own, potentially different, conclusions.
The scientific method is an iterative process. It typically begins with a hypothesis: an educated, testable guess about how variables are related. Research studies are then designed to test the hypothesis. The results inform the researchers how behaviors may be predicted or explained, and they loop back to modify the hypothesis if necessary. With an updated hypothesis, researchers continue to employ the scientific process to conduct further experiments.
Figure 2.1 The scientific process employed by psychologists
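The hypothesize-test-revise cycle described above can be sketched in code. This is a toy simulation, not part of the chapter: the "experiment," its underlying rate, and the revision rule are all invented for illustration.

```python
import random

random.seed(42)  # make the illustrative simulation repeatable

def run_experiment(n_trials=1000):
    """Simulate data collection: each trial supports the prediction
    with a fixed underlying probability unknown to the researcher."""
    true_rate = 0.62
    supporting = sum(random.random() < true_rate for _ in range(n_trials))
    return supporting / n_trials

def scientific_cycle(initial_hypothesis, rounds=5):
    """Test a predicted support rate, revising the hypothesis toward
    the observed rate until prediction and data agree."""
    hypothesis = initial_hypothesis
    for _ in range(rounds):
        observed = run_experiment()
        if abs(observed - hypothesis) < 0.02:
            break                 # data consistent with the prediction
        hypothesis = observed     # results loop back to modify the hypothesis
    return hypothesis

refined = scientific_cycle(initial_hypothesis=0.50)
print(round(refined, 2))
```

After a few rounds the hypothesis converges on the rate the (simulated) data keep producing, mirroring how repeated studies refine a prediction.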
Most new research is designed to replicate—that is, to repeat, add to, or modify—previous research findings. The scientific method therefore results in an accumulation of scientific knowledge through the reporting of research and through other scientists' additions to and modifications of these reported findings.
Laws and Theories as Organizing Principles
One goal of research is to organize information into meaningful statements that can be applied in many situations. Principles that are so general as to apply to all situations in a given domain of inquiry are known as laws. There are well-known laws in the physical sciences, such as the law of gravity and the laws of thermodynamics, and there are some universally accepted laws in psychology, such as the law of effect and Weber’s law. But because laws are very general principles and their validity has already been well established, they are themselves rarely directly subjected to scientific testing.
The next step down from laws in the hierarchy of organizing principles is theory. A theory is an integrated set of principles that explains and predicts many, but not all, observed relationships within a given domain of inquiry. One example of an important theory in psychology is the stage theory of cognitive development proposed by the Swiss psychologist Jean Piaget. The theory states that children pass through a series of cognitive stages as they grow, each of which must be mastered in succession before movement to the next cognitive stage can occur. This is an extremely useful theory in human development because it can be applied to many different content areas and can be tested in many different ways.
Good theories have four important characteristics. First, good theories are general, meaning they summarize many different outcomes. Second, they are parsimonious, meaning they provide the simplest possible account of those outcomes. The stage theory of cognitive development meets both of these requirements. It can account for developmental changes in behavior across a wide variety of domains, and yet it does so parsimoniously—by hypothesizing a simple set of cognitive stages. Third, good theories provide ideas for future research. The stage theory of cognitive development has been applied not only to learning about cognitive skills but also to the study of children’s moral (Kohlberg, 1966) and gender (Ruble & Martin, 1998) development.
Finally, good theories are falsifiable (Popper, 1959), which means the variables of interest can be adequately measured and the relationships between the variables that are predicted by the theory can be shown through research to be incorrect. The stage theory of cognitive development is falsifiable because the stages of cognitive reasoning can be measured and because if research discovers, for instance, that children learn new tasks before they have reached the cognitive stage hypothesized to be required for that task, then the theory will be shown to be incorrect.
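Falsifiability can be illustrated with a toy check. The stage numbers and the observation below are hypothetical inventions, not Piaget's actual claims or data; the point is only that a theory making measurable predictions can be refuted by a single well-measured counterexample.

```python
# Hypothetical mapping: task -> minimum cognitive stage the theory requires
stage_required = {"object permanence": 1, "conservation": 3, "abstract logic": 4}

def theory_predicts_mastery(task, child_stage):
    """The stage theory predicts a child masters a task only at
    or beyond the required stage."""
    return child_stage >= stage_required[task]

# A single (invented) observation: a stage-2 child masters conservation.
observation = {"task": "conservation", "child_stage": 2, "mastered": True}

# The theory is falsified if the child mastered a task it says they could not.
falsified = observation["mastered"] and not theory_predicts_mastery(
    observation["task"], observation["child_stage"]
)
print(falsified)  # True: one measured counterexample is enough to refute
```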
No single theory is able to account for all behavior in all cases. Rather, theories are each limited in that they make accurate predictions in some situations or for some people but not in other situations or for other people. As a result, there is a constant exchange between theory and data: Existing theories are modified on the basis of collected data, and the newly modified theories then make new predictions that are tested by new data, and so forth. When a better theory is found, it will replace the old one. This is part of the accumulation of scientific knowledge.
The Research Hypothesis
Theories are usually framed too broadly to be tested in a single experiment. Therefore, scientists use a more precise statement of the presumed relationship among specific parts of a theory—a research hypothesis—as the basis for their research. A research hypothesis is a specific and falsifiable prediction about the relationship between or among two or more variables, where a variable is any attribute that can assume different values among different people or across different times or places. The research hypothesis states the existence of a relationship between the variables of interest and the specific direction of that relationship. For instance, the research hypothesis “Using marijuana will reduce learning” predicts that there is a relationship between a variable “using marijuana” and another variable called “learning.” Similarly, in the research hypothesis “participating in psychotherapy will reduce anxiety,” the variables that are expected to be related are “participating in psychotherapy” and “level of anxiety.”
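A directional hypothesis like "participating in psychotherapy will reduce anxiety" can be checked against sample data by examining the sign of the correlation between the two measured variables. The numbers below are invented purely for illustration.

```python
# Invented sample: psychotherapy hours accrued vs. anxiety rating (1-10)
hours = [0, 2, 4, 6, 8, 10, 12, 14]
anxiety = [9, 8, 8, 6, 5, 5, 3, 2]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(hours, anxiety)
print(f"r = {r:.2f}")  # a negative r is consistent with the directional hypothesis
```

A strongly negative r is consistent with (though, on its own, does not prove) the hypothesized direction: more therapy hours, lower anxiety.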
When stated in an abstract manner, the ideas that form the basis of a research hypothesis are known as conceptual variables. Conceptual variables are abstract ideas that form the basis of research hypotheses. Sometimes the conceptual variables are rather simple—for instance, “age,” “gender,” or “weight.” In other cases, the conceptual variables represent more complex ideas, such as “anxiety,” “cognitive development,” “learning,” “self-esteem,” or “sexism.”
The first step in testing a research hypothesis involves turning the conceptual variables into measured variables, which are variables consisting of numbers that represent the conceptual variables. For instance, the conceptual variable “participating in psychotherapy” could be represented as the measured variable “number of psychotherapy hours the patient has accrued,” and the conceptual variable “using marijuana” could be assessed by having the research participants rate, on a scale from 1 to 10, how often they use marijuana or by administering a blood test that measures the presence of the chemicals in marijuana.
Psychologists use the term operational definition to refer to a precise statement of how a conceptual variable is turned into a measured variable. The relationship between conceptual and measured variables in a research hypothesis is diagrammed in Figure 2.2 “Diagram of a Research Hypothesis.” The conceptual variables are represented within circles at the top of the figure, and the measured variables are represented within squares at the bottom. The two vertical arrows, which lead from the conceptual variables to the measured variables, represent the operational definitions of the two variables. The arrows indicate the expectation that changes in the conceptual variables (psychotherapy and anxiety in this example) will cause changes in the corresponding measured variables. The measured variables are then used to draw inferences about the conceptual variables.
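The conceptual-to-measured mapping can be made concrete as a small data structure. The class and field names here are illustrative assumptions, echoing the psychotherapy example above rather than any standard API.

```python
from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    """Links an abstract conceptual variable to the concrete
    measurement that stands in for it in a study."""
    conceptual_variable: str
    measured_variable: str
    scale: str

definitions = [
    OperationalDefinition(
        conceptual_variable="participating in psychotherapy",
        measured_variable="number of psychotherapy hours accrued",
        scale="hours, 0 or more",
    ),
    OperationalDefinition(
        conceptual_variable="anxiety",
        measured_variable="self-reported anxiety rating",
        scale="1-10 rating",
    ),
]

for d in definitions:
    print(f"{d.conceptual_variable} -> {d.measured_variable} ({d.scale})")
```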
Table 2.1, “Examples of the Operational Definitions of Conceptual Variables That Have Been Used in Psychological Research,” lists some potential operational definitions of conceptual variables that have been used in psychological research. As you read through this list, note that in contrast to the abstract conceptual variables, the measured variables are very specific. This specificity is important for two reasons. First, more specific definitions mean that there is less danger that the collected data will be misunderstood by others. Second, specific definitions will enable future researchers to replicate the research.
Table 2.1 Examples of the Operational Definitions of Conceptual Variables That Have Been Used in Psychological Research
Conceptual variables listed: aggression; interpersonal attraction; employee satisfaction; decision-making skills; depression.
Introduction to Psychology Copyright © 2022 by LOUIS: The Louisiana Library Network is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Scientific Method
Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science ). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).
Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.
While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.
The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist we need to be about method. Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.
1. Overview and organizing themes
2. Historical review: Aristotle to Mill
3. Logic of method and critical responses
3.1 Logical constructionism and operationalism
3.2 H-D as a logic of confirmation
3.3 Popper and falsificationism
3.4 Meta-methodology and the end of method
4. Statistical methods for hypothesis testing
5.1 Creative and exploratory practices
5.2 Computer methods and the ‘new ways’ of doing science
6.1 “The scientific method” in science education and as seen by scientists
6.2 Privileged methods and ‘gold standards’
6.3 Scientific method in the court room
6.4 Deviating practices
7. Conclusion
Other internet resources
Related entries

1. Overview and organizing themes
This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.
The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.
Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.
Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.
Section 3 turns to 20th century debates on scientific method. In the second half of the 20th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was that very few philosophers any longer argued for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.
In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.
As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.
2. Historical review: Aristotle to Mill

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences. [ 1 ]
We begin with a point made by Laudan (1968) in his historical survey of scientific method:
Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)
To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).
Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible ( The Republic , 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature ( Metaphysics Z , in Barnes 1984).
Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics , Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science ( epistêmê ) is a body of properly arranged knowledge or learning—the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality ).
In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon . This title would be echoed in later works on scientific reasoning, such as the Novum Organum of Francis Bacon and the Novum Organon Renovatum of William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/synthesis, non-ampliative/ampliative, or even confirmation/verification. The basic idea is that there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.
The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics.) During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1564), and Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and the best rules for its application. [ 2 ] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.
During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16th–18th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists ; Boyle ; Henry More ; Galileo ).
In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone.) The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.
Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon ).
It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks , this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia . [ 3 ] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World ( Principia , Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy .)
To his list of methodological prescriptions should be added Newton’s famous phrase “ hypotheses non fingo ” (commonly translated as “I frame no hypotheses”.) The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and the editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1712–1764). The emphasis was often as much on the character of the scientist as on their process, a character which is still commonly assumed: humble in the face of nature, not beholden to dogma, obeying only his eyes, and following the truth wherever it leads. It was Voltaire (1694–1778) and du Châtelet (1706–1749) who were most influential in propagating this vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton , Leibniz , Descartes , Boyle , Hume , enlightenment , as well as Shank 2008 for a historical overview.)
Not all 18th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as on the Newtonians’ over-emphasis on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley ; David Hume ; Hume’s Newtonianism and Anti-Newtonianism ). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.
The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced—hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell .)
Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasising the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a fore-runner of the hypothetico-deductivist view, seem to have under-estimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Down-playing the discovery phase would come to characterize methodology of the early 20 th century (see section 3 ).
Mill, in his System of Logic , put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which law of laws holds is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes—now commonly known as Mill’s methods. These five methods look for circumstances which are common across instances of the phenomenon of interest, circumstances which are absent when the phenomenon is absent, or circumstances with which the phenomenon varies together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors ( System of Logic (1843); see the entry on Mill). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill).
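Mill’s first two methods, agreement and difference, lend themselves to a toy computational sketch. The encoding below (each case as a set of circumstance labels plus a flag for whether the phenomenon occurred, and all function and data names) is purely illustrative and not anything Mill himself specifies:

```python
# Illustrative sketch of Mill's methods of agreement and difference.
# A "case" is a (set-of-circumstances, phenomenon-occurred?) pair.

def method_of_agreement(cases):
    """Circumstances common to every case in which the phenomenon occurs."""
    positives = [set(circs) for circs, occurred in cases if occurred]
    if not positives:
        return set()
    common = positives[0]
    for circs in positives[1:]:
        common &= circs  # keep only circumstances shared by all positive cases
    return common

def method_of_difference(cases):
    """Of those common circumstances, keep the ones absent from every
    case in which the phenomenon does not occur."""
    candidates = method_of_agreement(cases)
    for circs, occurred in cases:
        if not occurred:
            candidates -= set(circs)  # rule out circumstances present without the phenomenon
    return candidates

# Hypothetical toy data: germination as the phenomenon of interest.
cases = [
    ({"heat", "moisture", "seed"}, True),
    ({"heat", "seed", "light"},    True),
    ({"heat", "light"},            False),
]
print(method_of_agreement(cases))   # circumstances common to all positive cases
print(method_of_difference(cases))  # of those, absent from the negative case
```

Real uses of eliminative induction face exactly the worries Mill’s critics raised: the answer is only as good as the candidate circumstances one thought to record.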
3. Logic of method and critical responses
The quantum and relativistic revolutions in physics in the early 20th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.
Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence, on the other. By and large, for most of the 20th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.
Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system—that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force) would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy William Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalisation of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (measuring large distances like light years).
Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods are instead recast in methodological roles. Measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se , but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4 . [ 4 ]
Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory laden. Theory is required to make any observation, therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science .) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.
The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2 , this method had been advanced by Whewell in the 19th century, as well as Nicod (1924) and others in the 20th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’s inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation ). Hempel described Semmelweis’s procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed the test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.
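The procedure Hempel describes can be caricatured as a small testing loop. Everything below (the function names, the toy hypothesis and observations) is a hypothetical sketch of the logical shape of H-D testing, not a model of Semmelweis’s actual reasoning:

```python
def hd_test(hypothesis, deduce_implications, observe):
    """Hypothetico-deductive testing in caricature.

    `deduce_implications` stands in for deducing test implications from
    the hypothesis; `observe` reports whether an implication is in fact
    observed. A hypothesis that survives is only supported, never
    proven: confirming its test implications does not verify it."""
    for implication in deduce_implications(hypothesis):
        if not observe(implication):
            return "rejected"   # a failed test implication refutes the hypothesis
    return "supported"          # all implications held: some support, not proof

# Hypothetical toy example: hypothesis H implies observations o1 and o2.
implications = {"H": ["o1", "o2"]}
observed = {"o1": True, "o2": False}

result = hd_test("H", lambda h: implications[h], lambda o: observed[o])
print(result)  # H is rejected, since its test implication o2 was not observed
```

The asymmetry is visible in the two return paths: one failed implication suffices to reject, while no number of successes proves the hypothesis.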
Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality. )
Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.
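The asymmetry Popper exploits is just the contrast between a valid and an invalid propositional inference, which can be written out explicitly (with H a hypothesis and O an observational consequence deduced from it):

```latex
% Falsification: modus tollens, deductively valid.
% If H then O; not-O; therefore not-H.
\[
(H \rightarrow O) \wedge \neg O \;\vDash\; \neg H
\]
% "Confirmation": affirming the consequent, deductively invalid.
% If H then O; O is observed; H does not follow.
\[
(H \rightarrow O) \wedge O \;\nvDash\; H
\]
```

However much confirming evidence accumulates, the second schema never becomes valid, which is why corroboration on Popper’s account measures survival of tests rather than probability of truth.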
Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction from that between science and metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.
A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This, Popper argued, is what happens in pseudo-science, where ad hoc theories appear capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory were it observed (Popper called these the hypothesis’s potential falsifiers); it is also crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.
The more potential falsifiers a hypothesis has, the more falsifiable it is, and the more it claims. Conversely, hypotheses without falsifiers claim very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus, which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile these views by blurring the distinction between the falsifiable and the not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).
From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.
Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:
History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)
The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle .) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.
The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.
An important by-product of normal science is the accumulation of puzzles which cannot be solved with the resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time, or place.
Feyerabend also identified the aim of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).
An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology .) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it therefore, explanatory appeals to scientific method were not empirically grounded.
A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be a close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results .)
By the close of the 20th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.
Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.
Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19th century, criteria for the rejection of outliers proposed by Peirce by the mid-19th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce ).
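The Method of Least Squares can be illustrated with a minimal sketch (the data points here are hypothetical, chosen only for illustration): fitting a straight line by choosing the coefficients that minimize the sum of squared residuals.

```python
# Illustrative sketch of the Method of Least Squares: fit y = a + b*x by
# minimizing the sum of squared residuals sum_i (y_i - a - b*x_i)^2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # noisy observations of roughly y = 1 + 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form solution of the normal equations for a straight line.
s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
s_xx = sum((x - mean_x) ** 2 for x in xs)
b = s_xy / s_xx          # slope
a = mean_y - b * mean_x  # intercept
print(f"intercept ≈ {a:.2f}, slope ≈ {b:.2f}")
```

The estimates land close to the underlying line despite the observational noise, which is precisely the sense in which least squares measures, and manages, the uncertainty of observations.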
These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of action that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, were the hypothesis true. In contrast, on Neyman and Pearson’s view, the consequence of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error to decide whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it was.
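The type I / type II distinction can be made concrete with a small simulation (the coin-flip scenario, decision rule, and all numbers are hypothetical, chosen for illustration): a fixed rejection rule has some probability of rejecting a true null hypothesis and some probability of retaining a false one, and the Neyman–Pearson point is that both rates matter when choosing the rule.

```python
import random

# Hypothetical illustration of Neyman-Pearson type I / type II errors.
# H0: a coin is fair (p = 0.5); H1: it is biased towards heads (p = 0.7).
# Decision rule (chosen arbitrarily here): reject H0 if >= 60 heads in 100 flips.

def reject_h0(p, n=100, threshold=60, rng=None):
    rng = rng or random
    heads = sum(rng.random() < p for _ in range(n))
    return heads >= threshold

rng = random.Random(0)
trials = 10_000
# Type I error rate: rejecting H0 although it is true (p really is 0.5).
type_1 = sum(reject_h0(0.5, rng=rng) for _ in range(trials)) / trials
# Type II error rate: retaining H0 although H1 is true (p really is 0.7).
type_2 = sum(not reject_h0(0.7, rng=rng) for _ in range(trials)) / trials
print(f"estimated type I rate:  {type_1:.3f}")
print(f"estimated type II rate: {type_2:.3f}")
```

Raising the threshold would lower the type I rate but raise the type II rate; which trade-off is acceptable depends, as Neyman and Pearson argued, on the consequences of each kind of error.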
Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis. Others, such as Jeffrey (1956) and Levi (1960) disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.
In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism, which understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism, which instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present.
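The Bayesian update rule is short enough to state in full. A minimal sketch, with all credences hypothetical and chosen purely for illustration:

```python
# Minimal sketch of Bayesian belief revision via Bayes' theorem:
#   P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | ~H) * P(~H).
# All numbers below are hypothetical degrees of belief, for illustration only.

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior credence in H after the evidence E is observed."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

prior = 0.2            # P(H): initial credence in the hypothesis
p_e_given_h = 0.9      # P(E | H): the hypothesis strongly predicts E
p_e_given_not_h = 0.3  # P(E | ~H): E is less likely if H is false

posterior = bayes_update(prior, p_e_given_h, p_e_given_not_h)
print(f"credence after observing E: {posterior:.3f}")
```

Because the hypothesis predicts the evidence better than its negation does, observing the evidence raises the scientist's credence in it; this is the quantified analogue of confirmation in the Bayesian framework.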
Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by their various proponents, and their relations to earlier criticisms of attempts at defining scientific method are seen differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast, and the reader is referred to the entries on Bayesian epistemology and confirmation .
5. Method in Practice
Attention to scientific practice, as we have seen, is not itself new. However, the recent turn to practice in the philosophy of science can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context-specific problem-solving procedures, and takes methodological analyses to be at once descriptive, critical, and advisory (see Nickles 1987 for an exposition of this view). The following subsections survey some of these practice focuses; here we turn fully to topics rather than chronology.
5.1 Creative and exploratory practices
A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 2 ) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to psychology and sociology of science; these are also important aspects of scientific practice which philosophy of science should address (see also the entry on scientific discovery ). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.
Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation, and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaption of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that
creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)
Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is
the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)
Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) present science as problem solving and investigate scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.
Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However, the difference between theory-driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory-driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.
The development of high throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data, and these new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).
5.2 Computer methods and ‘new ways’ of doing science
The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?
Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.
The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
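The verification step can be sketched with a toy simulation (the model and numbers here are assumed examples, not drawn from the text): numerically integrating dy/dt = -k*y with Euler's method and checking that the numerical answer converges to the model's known analytic solution.

```python
import math

# Sketch of verification for a simple simulation (assumed example).
# Model: dy/dt = -k*y, integrated numerically by Euler's method.
# Verification asks: does the numerical scheme correctly approximate the
# model's equations?  Here we can check against the analytic solution
# y(t) = y0 * exp(-k*t).  (Validation -- whether dy/dt = -k*y adequately
# represents the real system -- would instead require comparison with data.)

def euler_decay(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)  # one Euler step of dy/dt = -k*y
    return y

y0, k, t = 1.0, 0.5, 2.0
exact = y0 * math.exp(-k * t)
for steps in (10, 100, 1000):
    approx = euler_decay(y0, k, t / steps, steps)
    print(f"{steps:5d} steps: error = {abs(approx - exact):.2e}")
```

The error shrinking as the step size decreases is evidence that the equations are being correctly approximated; it says nothing about whether those equations are the right model of the target system, which is exactly the verification/validation distinction.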
A number of issues related to computer simulations have been raised. The identification of validation and verification as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissert 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.
For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science ).
In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008, Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merit of data-driven and hypothesis-driven research (for samples, see e.g. Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data .
6. Discourse on scientific method
Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that either convey the legend of a single, universal method characteristic of all science, or grant a particular method or set of methods privileged status as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or for justifying the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic for scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science ) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.
One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002). [ 5 ] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four- or five-step procedure starting from observations and description of a phenomenon, progressing over formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, and analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).
Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in The Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of
(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)
Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence, from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.
Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures. [ 6 ] However, just as often scientists have come to the same conclusion as recent philosophy of science that there is not any unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how
The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)
Interview studies with scientists on their conception of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019, Hangel & Schickore 2017).
Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.
Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activities that are valuable only insofar as they fuel hypothesis-driven research.
In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Method, Results, Analysis, Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).
Philosophical positions on the scientific method have also made it into the court room, especially in the US where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. A key case is Daubert vs Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to works of Popper and Hempel the court stated that
ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)
But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific, as indicated by a special methodology, the court produced an inconsistent mixture of Popper’s and Hempel’s philosophies, and this later led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).
The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential definitions of misconduct in science, adopted in the US in 1989, characterized misconduct as
fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community. (Code of Federal Regulations, part 50, subpart A, August 8, 1989, italics added)
However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Science stated in their report Responsible Science (1992) that it
wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)
This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).
The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.
One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century, in which the specificity of scientific knowledge was seen in its absolute certainty, established by proof from evident axioms; next was a phase up to the mid-19th century, in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now, in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168), and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore poses the question of the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse.
Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.
Another, similar approach has been offered by Haack (2003). Like Hoyningen-Huene, she sets off from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method, and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards and there is something epistemologically special about science, even though the Old Deferentialists pictured this wrongly. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.
- Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
- Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
- Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired Magazine , 16(7).
- Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
- Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
- Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
- Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
- Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
- Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
- Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
- Bloor, D., 1991, Knowledge and Social Imagery , Chicago: University of Chicago Press, 2nd edition.
- Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
- Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
- –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
- Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
- –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
- Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
- –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
- Carrol, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
- Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
- Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
- Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press
- Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
- Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
- Dupré, J., 2004, “Miracle of Monism ”, in Naturalism in Question , Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
- Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
- Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
- Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
- Feyerabend, P., 1978, Science in a Free Society , London: New Left Books
- –––, 1988, Against Method , London: Verso, 2nd edition.
- Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
- Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
- Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
- Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
- Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
- Goodman, N., 1965, Fact, Fiction, and Forecast , Indianapolis: Bobbs-Merrill.
- Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
- –––, 2003, Defending science—within reason , Amherst: Prometheus.
- –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law , 5, available online. doi:10.5840/jpsl2005513
- –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66-S73.
- –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
- Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science , 25(6): 766–791
- Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
- Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
- –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
- –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
- –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
- Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
- Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
- Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
- –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
- Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
- Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
- Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
- Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
- ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online , accessed August 13 2014
- Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
- Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
- Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
- Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and Biomedical Sciences , 43: 52–57.
- Kuhn, T.S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press
- Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts , Princeton: Princeton University Press, 2nd edition.
- Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
- Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
- Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
- Levi, I., 1960, “Must the scientist make value judgments?”, Journal of Philosophy , 57(11): 345–357.
- Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
- Lipton, P., 2004, Inference to the Best Explanation , London: Routledge, 2nd edition.
- Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
- Mazzocchi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO Reports , 16: 1250–1255.
- Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
- McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
- Medawar, P.B., 1963/1996, “Is the scientific paper a fraud”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
- Mill, J.S., 1963, Collected Works of John Stuart Mill , J. M. Robson (ed.), Toronto: University of Toronto Press
- NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
- Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
- –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
- Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation , I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
- –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
- Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
- Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
- Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
- Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
- –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
- –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
- Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
- O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
- O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
- Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
- Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
- Parascandola, M., 1998, “Epidemiology—2nd-Rate Science”, Public Health Reports , 113(4): 312–320.
- Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
- –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
- Pearson, K., 1892, The Grammar of Science , London: J.M. Dents and Sons, 1951.
- Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
- Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
- Popper, K.R., 1959, The Logic of Scientific Discovery , London: Routledge, 2002
- –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
- –––, 1985, Unended Quest: An Intellectual Autobiography , La Salle: Open Court Publishing Co..
- Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
- Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly , 45(3): 341–376
- Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
- Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
- Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
- Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
- Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
- Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
- Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
- –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
- –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
- Sober, E., 2008, Evidence and Evolution. The logic behind the science , Cambridge: Cambridge University Press
- Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
- Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
- –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
- Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
- Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
- Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
- Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
- Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
- Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
- Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
- Harvey, W., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus , in On the Motion of the Heart and Blood in Animals , R. Willis (trans.), Buffalo: Prometheus Books, 1993.
- Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
- Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646
- Blackmun opinion , in Daubert v. Merrell Dow Pharmaceuticals (92–102), 509 U.S. 579 (1993).
al-Kindi | Albert the Great [= Albertus magnus] | Aquinas, Thomas | Arabic and Islamic Philosophy, disciplines in: natural philosophy and natural science | Arabic and Islamic Philosophy, historical and methodological topics in: Greek sources | Arabic and Islamic Philosophy, historical and methodological topics in: influence of Arabic and Islamic Philosophy on the Latin West | Aristotle | Bacon, Francis | Bacon, Roger | Berkeley, George | biology: experiment in | Boyle, Robert | Cambridge Platonists | confirmation | Descartes, René | Enlightenment | epistemology | epistemology: Bayesian | epistemology: social | Feyerabend, Paul | Galileo Galilei | Grosseteste, Robert | Hempel, Carl | Hume, David | Hume, David: Newtonianism and Anti-Newtonianism | induction: problem of | Kant, Immanuel | Kuhn, Thomas | Leibniz, Gottfried Wilhelm | Locke, John | Mill, John Stuart | More, Henry | Neurath, Otto | Newton, Isaac | Newton, Isaac: philosophy | Ockham [Occam], William | operationalism | Peirce, Charles Sanders | Plato | Popper, Karl | rationality: historicist theories of | Reichenbach, Hans | reproducibility, scientific | Schlick, Moritz | science: and pseudo-science | science: theory and observation in | science: unity of | scientific discovery | scientific knowledge: social dimensions of | simulations in science | skepticism: medieval | space and time: absolute and relational space and motion, post-Newtonian theories | Vienna Circle | Whewell, William | Zabarella, Giacomo
Copyright © 2021 by Brian Hepburn <brian.hepburn@wichita.edu> and Hanne Andersen <hanne.andersen@ind.ku.dk>
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054
Microbial Biotechnology , 16(10), October 2023 (PMC10527184)
Science, method and critical thinking
Antoine Danchin
School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, Hong Kong University, Pokfulam, Hong Kong, China
Science is founded on a method based on critical thinking. A prerequisite for this is not only a sufficient command of language but also the comprehension of the basic concepts underlying our understanding of reality. This constraint implies an awareness of the fact that the truth of the World is not directly accessible to us, but can only be glimpsed through the construction of models designed to anticipate its behaviour. Because the relationship between models and reality rests on the interpretation of founding postulates and instantiations of their predictions (and is therefore deeply rooted in language and culture), there can be no demarcation between science and non‐science. However, critical thinking is essential to ensure that the link between models and reality is gradually improved, building on what has already been established, thus guaranteeing that science progresses on this basis and excluding any form of relativism.
Science accepts that we can reach the truth of the World only via the creation of models. The method by which this is done, based on critical thinking, is embedded in the scientific method, named here the Critical Generative Method.
Before illustrating the key requirements for critical thinking, one point must be made clear from the outset: thinking involves using language, and the depth of thought is directly related to the ‘active’ vocabulary (Magyar, 1942) used by the thinker. A recent study of young students in France showed that a significant percentage of the population had a very limited vocabulary, an unfortunate situation shared by many countries (Fournier & Rakocevic, 2023). This omnipresent fact, which precludes any attempt to improve critical thinking in the general population, is very visible in a great many texts published on social networks. It is all the more concerning because science uses a vocabulary that lies well beyond that available to most people. For example, a word such as ‘metabolism’ is generally not understood. As a consequence, it is essential to agree on a minimal vocabulary before teaching paths to critical thinking. This may look trivial, but it is an essential prerequisite. Typically, words such as ‘analysis’ and ‘synthesis’ must be understood (and the idea of what a ‘concept’ is, is not widely shared). It must also be remembered that in the most creative periods of science, new scientific vocabulary was coined as neologisms built from Ancient Greek, and for a good reason: a considerable advantage of that unwritten rule is that it makes scientific objects and concepts prominent for scientists from all over the world, while precluding implicit domination by any one country over the others when science is at stake (Iliopoulos et al., 2019). Unfortunately, and this demonstrates how the domination of an ignorant subset of the research community gains ground, this rule is now seldom followed. This also highlights the lack of extensive scientific background of the majority of researchers: the creation of new words now follows the rule of the self‐assertive.
Interestingly, the very observation that a neologism in a scientific paper does not follow the traditional rule provides us with a critical way to identify either ignorance of the scientific background of the work or the presence in the text of hidden agendas that have nothing to do with science.
In practice, the process of critical thinking ought to begin with a step similar to the ‘due diligence’ required of investors when they study whether or not to invest in a start‐up company. The first expected action should be ‘verify’, ‘verify’, ‘verify’… any statement which is used as a basis for the reasoning that follows. This requires not only understanding what is said or written (hence the importance of language), but also checking the origins of the statement: investigating who is involved, and making sure that the historical context is well known.
Of course, nobody has complete knowledge of everything, not even anything in fact, which means that at some point people have to accept that they will base their reasoning on some kind of ‘belief’. This inevitable imperative forces future scientists asking a question about reality to resort to a set of assertions called ‘postulates’ in conventional science, that is, beliefs temporarily accepted without further discussion but understood as such. The way in which postulates are formulated is therefore key to their subsequent role in science. Similarly, the fact that they are temporary is essential to understanding their role. A fundamental feature of critical thinking is to be able to identify these postulates and then remember that they are provisional in nature. When needed this enables anyone to return to the origins of reasoning and then decide whether it is reasonable to retain the postulates or modify or even abandon them.
Here is an example, illustrated with the famous greenhouse effect that keeps our planet from being a snowball (Arrhenius, 1896). Note that understanding this phenomenon requires a fair amount of basic physics, as well as a trait that is often forgotten: common sense. There is no doubt that carbon dioxide is a greenhouse gas (this is based on well‐established physics which, nevertheless, the majority must accept as a postulate, as they would not be able to demonstrate it). However, a straightforward question arises, which is almost never asked in proper detail. There are many gases in the atmosphere, and the obvious preliminary question is what they all are, and what each contributes to the greenhouse effect. A fraction of the general public partially understands this as a question about the contribution of methane, and sometimes N2O and ozone. However, this is far from enough, because the gas which contributes the most to the greenhouse effect on our planet is … water vapour (about 60% of the total effect: https://www.acs.org/climatescience/climatesciencenarratives/its‐water‐vapor‐not‐the‐co2.html)! This fact is seldom highlighted. Yet it is extremely important, because water is such a strange molecule. Around 300 K, water can evolve rapidly to form a liquid, a gas or a solid (ice). The transitions between these states (only the gas has a greenhouse effect, while the water droplets in clouds generally have a cooling effect) mean that water cannot directly control the Earth's temperature. Worse, these phase transitions amplify the fluctuations around a given temperature, generally in a feedforward way. We know the situation in deserts very well: the night temperature is very low and the daytime temperature very high. This explains why ‘global warming’ (i.e. an upward shift in the average temperature of the planet) is also accompanied by an amplification of weather extremes.
It is quite remarkable that the role of water, though well established, is not part of popular knowledge. Standard 'due diligence' would have made this knowledge widely shared.
Another straightforward example of the need for a clear knowledge of the thought of our predecessors follows. When we see expressions such as 'paradigm change', 'change of paradigm', 'paradigm shift' or 'shift of paradigm' (12,424 articles listed in PubMed as of June 26, 2023), we should be aware that the subject of these articles has nothing to do with a paradigm shift, simply because such a change in paradigm is extremely rare, occurring at intervals of centuries at best (Kuhn, 1962 ). Worse, the use of the word implies that the authors of these works have most probably never read Thomas Kuhn's work, and are merely repeating fashionable hearsay. As a consequence, critical thinking should lead authentic scientists to set all these works aside before developing their investigation further (Figure 1 ).
Number of articles identified in the PubMed database with the keywords 'paradigm change' or 'change of paradigm' or 'paradigm shift' or 'shift of paradigm'. Before 1993 very few such articles were published, and they generally report information consistent with the Kuhnian view of scientific revolutions. Between 1993 and 2000 a looser, metaphorical use of the term paradigm begins to appear. Since then the word has become fashionable while entirely losing its original meaning, betraying a lack of epistemological knowledge. This example of common behaviour illustrates the decadence of contemporary science.
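A count like the one in the caption can be obtained from NCBI's Entrez E-utilities. Here is a minimal sketch that only builds the `esearch` query URL for the four phrases; actually fetching it, and restricting the search to particular fields or years, is left to the reader.

```python
# Sketch: build an NCBI E-utilities esearch URL that returns the PubMed
# record count for the four "paradigm" phrases. No network access here.
from urllib.parse import urlencode

PHRASES = ["paradigm change", "change of paradigm",
           "paradigm shift", "shift of paradigm"]

def pubmed_count_url(phrases):
    # OR the quoted phrases together into a single PubMed search term.
    term = " OR ".join(f'"{p}"' for p in phrases)
    params = urlencode({"db": "pubmed", "term": term, "rettype": "count"})
    return ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
            + params)

print(pubmed_count_url(PHRASES))
```

Fetching the resulting URL returns a small XML document whose `Count` element is the number of matching articles; adding a date range to the term would reproduce the per-year curve in Figure 1.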
This being understood, we can now explore the general way science proceeds. This was previously discussed at a conference meant to explain the scientific method to an audience of Chinese philosophers, anthropologists and scientists, held at Sun Yat Sen (Zhong Shan) University in Canton (Guangzhou) in 1991. The discussion is expanded in The Delphic Boat (Danchin, 2002 ). For a variety of reasons, it would be useful to anticipate the future of our world. This raises an unlimited number of questions, and the aim of the scientific method is to try to answer them. The way in which questions emerge is a subject in itself. It is not addressed here, but it too should be the subject of critical thinking (Yanai & Lercher, 2019 ).
The basis for scientific investigation accepts that, while the truth of the world exists in itself ('relativism' is foreign to scientific knowledge, as science keeps building its progress on previous knowledge, even when changing its paradigms), we can only access it through the mediation of a representation. This was extensively debated 2500 years ago, when science and philosophy shaped the common endeavour meant to generate knowledge (Frank, 1952 ). It was already apparent then that we cannot escape this omnipresent limitation of human rationality, as Xenophanes of Colophon explicitly stated at the time [discussed in Popper, 1968 ]. The limitation comes from an inevitable constraint: contrary to what many keep saying, data do not speak . Reality must be interpreted within the frame of a particular representation that critical thinking aims at making visible. A sentence that we all forget to reject, such as 'results show…', is meaningless: results are interpreted as meaning this or that.
Accepting this limitation is a difficult part of scientific judgement. Yet the quality of thought progresses as the understanding of this constraint becomes more effective: to answer our questions we have to build models of the world, and be satisfied with this perspective. It is through our models of the world that we are able to explore it and act upon it. We can even become creators of new behaviours of reality, including new artefacts such as a laser beam, a physics-based device that is unlikely to exist in the universe except in places where agents with abilities similar to ours exist. Indeed, to create models is to introduce a distance, a mediation through some kind of symbolic coding (via the construction of a model), between ourselves and the world. It is worth pointing out that this feature highlights how science builds its strength from its very radical weakness: knowing that it is incapable, in principle, of attaining truth. Fortunately, we do not have to begin with a tabula rasa . Science keeps progressing. The ideas and models we have received from our predecessors form the basis of our first representation of the world. The critical question we all face, then, is: how well do these models match reality? How do they fare in answering our questions?
Many, over time, think they have achieved ultimate understanding of reality (or force others to think so) and abide by the knowledge reached at the time, precluding any progress. A few persist in asking questions about what remains enigmatic in the way things behave. Until fairly recently (and this can still be seen in the fashion for 'organic' things, or in the idea, similar to that of the animating 'phlogiston' of early modern chemistry, that things spontaneously organize themselves in certain elusive circumstances usually represented by fancy mathematical models), things were thought to combine four elements, fire, air, water and earth, in a variety of proportions and combinations. In China, wood, a fifth element with some link to life, was added to the list. Later on, the world was assumed to result from the combination of 10 categories (Danchin, 2009 ). It took time to develop a physics of reality involving space, time, mass and energy, and what this means is still far from fully understood. How, in our times when the successes of the applications of science are so prominent, is it still possible to question generally accepted knowledge, and to progress in the construction of a new representation of reality?
This is where critical thinking comes in. The first step must be to try to simplify the problem, to abstract from the blurred set of inherited ideas a few foundational concepts that will not immediately be called into question, at least as a preliminary stage of investigation. We begin by isolating a phenomenon whose apparent clarity contrasts with its environment. A key point in the process is to be aware that the links between correlation and causation are not trivial (Altman & Krzywinski, 2015 ). Confusing the two is probably the major anti-science behaviour preventing the development of knowledge. In our time, a better understanding of what causality is has become essential to understanding the present development of Artificial Intelligence (Schölkopf et al., 2021 ), as this is directly linked to the process of rational decision-making (Simon, 1996 ).
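The correlation-versus-causation point can be made concrete with a small numerical sketch of my own (not from the text): two variables that never influence each other, but both depend on a hidden confounder, end up strongly correlated.

```python
# Illustration of confounding: x and y have no causal link to each other,
# yet both are driven by a hidden common cause z, so they correlate strongly.
import random
import statistics as st

random.seed(0)
z = [random.gauss(0, 1) for _ in range(10_000)]      # hidden common cause
x = [zi + random.gauss(0, 0.3) for zi in z]          # effect of z only
y = [zi + random.gauss(0, 0.3) for zi in z]          # effect of z only

def pearson(a, b):
    """Pearson correlation coefficient using population statistics."""
    ma, mb = st.fmean(a), st.fmean(b)
    cov = st.fmean((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / (st.pstdev(a) * st.pstdev(b))

print(f"corr(x, y) = {pearson(x, y):.2f}")  # strong, with no causal link
```

The correlation is high because both variables inherit the fluctuations of z; an intervention on x would leave y unchanged, which is exactly what the correlation alone cannot reveal.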
Subsequently, a set of undisputed rules, phenomenological criteria and postulates is associated with the phenomenon. It temporarily constitutes the founding dogma of the theory, made up of the phenomenon of interest, the postulates, the model, and the conditions and results of its application to reality. This epistemological attitude can legitimately be described as 'dogmatic', and it remains unchanged for a long time in the progression of scientific knowledge. This is well illustrated by the fact that the word 'dogma', a religious word par excellence, is often misused when referring to a scientific theory. Many still refer, for example, to 'the central dogma of molecular biology' to describe the rules for rewriting the genetic program from DNA to RNA and then to proteins (Crick, 1970 ). Of course, critical thinking understands that this is no dogma: variations on the theme are omnipresent, as seen for instance in the role of the enzyme reverse transcriptase, which allows RNA to be rewritten into a DNA sequence.
Yet, whereas isolating postulates is an important step, it does not by itself permit explanations or predictions. To go further, one must therefore initiate a constructive process. The essential step is the constitution of a model (or, in weaker instances, a simulation) of the phenomenon (Figure 2 ).
The Critical Generative Method. Science is based on the premise that, while we can search for the truth of reality, attaining it is in principle impossible. The only way out is to build models of reality ('realistic models') and find ways to compare their outcome to the behaviour of reality [see an explicit example for genome sequences in Hénaut et al., 1996 ]. The ultimate model is a mathematical one, but this is rarely achievable. Other models are based on simulations, that is, models that mimic the behaviour of reality without trying to propose an explanation of that behaviour. A primitive attempt at this endeavour is seen when people manipulate figurines hoping that this will anticipate the behaviour of their environment (e.g. 'voodoo'). This is also frequent in borderline science (Friedman & Brown, 2018 ).
To this aim, the postulates are interpreted in the form of entities (concrete or abstract) or of relationships between entities, which are then manipulated by an independent set of processes. The perfect stage, generally considered the ultimate one, associates the manipulation of abstract entities, interpreting postulates into axioms and definitions that can be manipulated according to the rules of logic. The construction of a model therefore begins with a process of abstraction , which allows one to go from the postulates to the axioms. Quite often, however, one will not be able to axiomatize the postulates. It will only be possible to represent them using analogies involving the founding elements of another phenomenon, better known and considered analogous. One may also change the scale of a phenomenon (as when mock-ups are used as models). In these families of approaches, the model is a simulation. For example, it is possible to simulate an electromagnetic phenomenon using a hydrodynamic one [for a general example in physics see Vives & Ricou, 1985 ]. In recent times the simulation is generally performed numerically, using (super)computers [e.g. at the mesoscopic scale typical of cells (Huber & McCammon, 2019 )]. While all these approaches have important implications, for diagnostics for example, they are generally purely phenomenological and descriptive. Critical thinking understands this, despite the general tendency to mistake the mimic for what it represents. Recent artificial intelligence approaches that use 'neural networks' are not, at least for the time being, models of the brain.
However useful and effective, the simulation of a phenomenon is clearly an admission of failure. A simulation represents behaviour that conforms to reality, but does not explain it. Yet science aims to do more than simply represent a phenomenon; it aims to anticipate what will happen in the near and distant future. To get closer to the truth, we need to understand and explain, that is, reduce the representation to elementary principles that are as simple and as few as possible, in order to escape the omnipresent anecdotes that parasitize our vision of the future. In the case of the study of genomes, for example, this will lead us to question their origin and evolution. It will also require us to understand the formal nature of the control processes (of which feedback, for example, is one) that they encode. As soon as possible, therefore, we would like to translate the postulates that enabled the model's construction into well-formed statements that will constitute the axioms and definitions of an explanatory model. At a later stage, the axioms and definitions will be linked together to create a demonstration leading to a theorem or, more often than not, a simple conjecture.
When based on mathematics, the model is made up of its axioms and definitions, and of the demonstrations and theorems it conveys. It is an entirely autonomous entity, which can only be justified by its own rules. To be valid, it must necessarily be true according to the rules of mathematical logic. Here, then, is an essential truth criterion, but one that can say nothing about the truth of the phenomenon. A key feature of critical thinking is the understanding that the truth of the model is not the truth of the phenomenon. Conflating these two truths, as is common in magical thinking, often results in the model (identified with a portion of the world) being given a sacred value, and changes the role of the scientist into that of a priest.
Having started from the phenomenon of interest to build the model, we now need to return from the model to the real world. A process symmetrical to the one that provided the basis for the model, an instantiation of the conclusions summarized in the theorem, is now required. This can take the form of predictions, observations or experiments, of which at least two broad types can be identified. The predictions are either existential (the objects, processes or relations predicted by the instantiation of the theorem must be discovered) or phenomenological, and therefore subject to verification and refutation. An experimental set-up must then be constructed to explore what has been predicted by the instantiations of the model's theorems and to support or falsify the predictions. In the case of hypotheses based on genes, for example, this leads to experiments with synthetic biology constructs (Danchin & Huang, 2023 ), where genes are replaced by counterparts, even ones made of atoms that differ from the canonical ones.
The reaction of reality, either to simple (passive) observation or to the observation of phenomena triggered by the experiments, validates the model and measures the degree of adequacy between model and reality. The process follows a constructive path when the model's shortcomings are identified, and when the predicted new objects are discovered and must now be included in further models of reality. The falsification of certain instantiated conclusions thus becomes a major driving force for bringing the model in line with reality. This part of the thought process is essential to escape an infinite regression of confirmation experiments, one after the other, ad infinitum. Identifying this type of situation, based on the understanding that the behaviour of the model is not reality but an interpretation of reality, is essential to promote critical thinking.
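The predict-compare-revise loop described above can be caricatured in a few lines of code. This is a toy sketch under invented assumptions (a one-parameter linear model confronted with observations that actually contain an offset), not the author's method: a failed comparison sends us back to revise a postulate of the model.

```python
# Toy version of the validate-or-falsify loop: compare model predictions
# with observations; a large mismatch falsifies the model and forces a
# revision of its postulates. All data and models are invented.
def predict_linear(x, slope):
    return slope * x

def falsified(model, data, tolerance=0.5):
    """True if any prediction misses its observation by more than tolerance."""
    return any(abs(model(x) - y) > tolerance for x, y in data)

# 'Reality' actually follows y = 2x + 3; the first model postulates no offset.
data = [(x, 2 * x + 3) for x in range(5)]

model_a = lambda x: predict_linear(x, slope=2)      # postulate: no offset
model_b = lambda x: predict_linear(x, slope=2) + 3  # revised postulate

print("model A falsified:", falsified(model_a, data))  # True
print("model B falsified:", falsified(model_b, data))  # False
```

The point of the caricature is the direction of the arrow: the observations do not "speak"; it is the mismatch between interpretation and data that drives us back to the postulates.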
It must also be stressed that, of course, the burden of proving the model's adequacy to reality belongs to the authors of the model. It would be both contrary to the simplest rules of logic (proof of non-existence is only possible for finite sets) and totally inefficient, as well as sterile, to produce an unfalsifiable model. This is indeed a critical way to identify the many pretenders who plague science. They are easy to recognize, since they identify themselves precisely by asking others: 'repeat my experiments and show me that they are wrong!'. Unfortunately, this old conjuring trick is still widespread, especially in a world dominated by mass media looking for scoops, not for truth.
When certain predictions of the model are not verified, critical thinking forces us to study its relationship with reality, and we must proceed in reverse, following the path that led to these inadequate predictions (Figure 2 ). In this reverse process, we go backwards until we reach the postulates on which the model was built, at which point we modify, refine and, if necessary, change them. The explanatory power of the model increases each time we can reduce the number of postulates on which it is built. This is another way of developing critical thinking skills: the more factors underlying an explanation, the less reliable the model. As an example from molecular biology, the selective model used by Monod and coworkers to account for allostery (Monod et al., 1965 ) used far fewer adjustable parameters than Koshland's induced-fit model (Koshland, 1959 ).
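The parsimony principle invoked here, fewer adjustable parameters for comparable fit, is exactly what an information criterion such as AIC formalizes. The sketch below uses made-up residual sums of squares and parameter counts (labels like "MWC-like" are illustrative only) to show the penalty at work.

```python
# Hedged numerical sketch of parsimony via the Akaike information criterion:
# with comparable fit, the model with more adjustable parameters scores
# worse (higher AIC). The RSS values and parameter counts are invented.
import math

def aic(n, rss, k):
    """AIC for a least-squares fit: n points, residual sum of squares rss,
    k adjustable parameters (lower is better)."""
    return n * math.log(rss / n) + 2 * k

n = 50
aic_simple = aic(n, rss=4.1, k=3)   # few-parameter model, e.g. MWC-like
aic_complex = aic(n, rss=4.0, k=8)  # many-parameter model, slightly better fit

print(f"simple : {aic_simple:.1f}")
print(f"complex: {aic_complex:.1f}")
```

With these invented numbers the slightly better fit of the complex model does not compensate for its five extra parameters, so the simpler model is preferred, mirroring the allostery example in the text.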
In real-life situations, this reverse path is long and difficult to follow. Resistance to changing the model is quickly organized, if only because, lacking critical thinking, its creators cannot help thinking that the model manifests, rather than represents, the truth of the world. It is only natural, then, to think that the lack of predictive power is due not primarily to the model's inadequacy, but to the inappropriate way in which its broad conclusions have been instantiated. This corresponds, in effect, to a stage where formal terms have been interpreted in terms of real behaviour, which involves a great deal of fine-tuning. Because it is inherently difficult to identify the inadequacy of the model or of its links with the phenomenon of interest, a model often persists, sometimes for a very long time, despite numerous signs of imperfection.
During this critical process, the very nature of the model is questioned, and its construction, the meaning it represents, is clarified and refined under the constraint of contradictions. The very terms of the instantiations of predictions, or of the abstraction of the founding postulates, are made finer and finer. This is why the dogmatic stage plays such an essential role: a model that was too inadequate would have been quickly discarded and could not have generated and advanced knowledge, whereas a succession of improvements leads to an ever finer understanding, and hence a better representation, of the phenomenon of interest. Then comes a time when the very axioms on which the model is based are called into question, when the most recent abstractions made from the initial postulates undermine them. This is of course very rare and difficult, and it is the source of those genuine scientific revolutions, those paradigm shifts (to use Thomas Kuhn's word), from which new models are born, develop and die, based on assumptions that differ profoundly from those of their predecessors. This manifests an ultimate, but extremely rare, success of critical thinking.
A final comment. Karl Popper, in his Logik der Forschung ( The Logic of Scientific Discovery ), tried to show that there is a demarcation separating science from non-science (Keuth and Popper, 1934 ). It results from the implementation of a refutation process, which he named falsification, sufficient to tell the observer that a model is failing. However, as displayed in Figure 2 , refutation does not bear directly on the model of interest, but on the interpretation of its predictions . This means that while science is associated with a method, its implementation in practice is variable and its borders fuzzy. In fact, trying to match models with reality allows us to progress by producing better adequacy with reality (Putnam, 1991 ). Nevertheless, because the separation between models and reality rests on interpretations (processes rooted in culture and language), establishing an explicit demarcation is impossible. This intrinsic difficulty, associated with a property that we could name 'context associated with a research programme' (Lakatos, 1976 , 1978 ), shows that the demarcation between science and non-science is dominated by a particular currency of reality, which we have to consider under the name information , using the word with all its common (and accordingly fuzzy) connotations, and which operates in addition to the standard categories of mass, energy, space and time.
The first attempts to solve contradictions between model predictions and observed phenomena do not immediately discard the model, as Popper would have it. The common practice is for the authors of a model to re-interpret the instantiation process that coupled the theorem to reality. Typically: 'the exception proves the rule', or 'this is not exactly what we meant, we need to focus more on this or that feature', and so on. This polishing step is essential: it allows the frontiers of the model and its associated phenomena to be defined as accurately as possible. It marks the moment when technically arid efforts, such as defining a proper nomenclature or a database schema, have a central role. Contrary to the hopes of Popper, who sought a principle, based on refutation, telling us whether a particular creation of knowledge can be named Science, there is no ultimate demarcation between science and non-science. Then comes a time when, despite all efforts to reconcile predictions and phenomena, the inadequacy between model and reality becomes insoluble. Assuming no mistake in the demonstration (within the model), this contradiction implies that we need to reconsider the axioms and definitions upon which the model has been constructed. This is the time when critical thinking becomes imperative.
AUTHOR CONTRIBUTIONS
Antoine Danchin: Conceptualization (lead); writing – original draft (lead); writing – review and editing (lead).
CONFLICT OF INTEREST STATEMENT
This work belongs to efforts pertaining to epistemological thinking and does not imply any conflict of interest.
PERSPECTIVE article
Supporting Early Scientific Thinking Through Curiosity
- Curry School of Education and Human Development, University of Virginia, Charlottesville, VA, United States
Curiosity and curiosity-driven questioning are important for developing scientific thinking and, more generally, interest and motivation to pursue scientific questions. Curiosity has been operationalized as a preference for uncertainty ( Jirout and Klahr, 2012 ), and engaging in inquiry, an essential part of scientific reasoning, generates high levels of uncertainty ( Metz, 2004 ; van Schijndel et al., 2018 ). This perspective piece begins by discussing mechanisms through which curiosity can support learning and motivation in science, including motivating information-seeking behaviors, gathering information in response to curiosity, and promoting deeper understanding through connection-making related to addressing information gaps. The second part of the article discusses a recent theory of how to promote curiosity in schools in relation to early childhood science reasoning. Finally, potential directions for research on the development of curiosity and curiosity-driven inquiry in young children are discussed. Although quite a bit is known about the development of children’s question asking specifically, and there are convincing arguments for developing scientific curiosity to promote science reasoning skills, there are many important areas for future research to address how to effectively use curiosity to support science learning.
Scientific Thinking and Curiosity
Scientific thinking is a type of knowledge seeking involving intentional information seeking, including asking questions, testing hypotheses, making observations, recognizing patterns, and making inferences ( Kuhn, 2002 ; Morris et al., 2012 ). Much research indicates that children engage in this information-seeking process very early on through questioning behaviors and exploration. In fact, children are quite capable and effective in gathering needed information through their questions, and can reason about the effectiveness of questions, use probabilistic information to guide their questioning, and evaluate who they should question to get information, among other related skills (see Ronfard et al., 2018 for review). Although formal educational contexts typically give students questions to explore or steps to follow to “do science,” young children’s scientific thinking is driven by natural curiosity about the world around them, and the desire to understand it and generate their own questions about the world ( Chouinard et al., 2007 ; Duschl et al., 2007 ; French et al., 2013 ; Jirout and Zimmerman, 2015 ).
What Does Scientific Curiosity Look Like?
Curiosity is defined here as the desire to seek information to address knowledge gaps resulting from uncertainty or ambiguity ( Loewenstein, 1994 ; Jirout and Klahr, 2012 ). Curiosity is often seen as ubiquitous within early childhood. Simply observing children can provide numerous examples of the bidirectional link between curiosity and scientific reasoning, such as when curiosity about a phenomenon leads to experimentation, which, in turn, generates new questions and new curiosities. For example, an infant drops a toy to observe what will happen. When an adult stoops to pick it up, the infant becomes curious about how many times an adult will hand it back before losing interest. Or, a child might observe a butterfly over a period of time, and wonder why it had its wings folded or open at different points, how butterflies fly, why different butterflies are different colors, and so on (see Figure 1 ). Observations lead to theories, which may be immature, incomplete, or even inaccurate, but so are many early scientific theories. Importantly, theories can help identify knowledge gaps, leading to new instances of curiosity and motivating children’s information seeking to acquire new knowledge and, gradually, correct misconceptions. Like adults, children learn from their experiences and observations and use information about the probability of events to revise their theories ( Gopnik, 2012 ).
Figure 1. A child looks intently at a butterfly, becoming curious about the many things she wonders based on her observations.
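The claim that children use probabilistic information to revise their theories can be illustrated with a one-line Bayesian update. The scenario and all numbers below are invented for illustration: a child's strong prior that a dropped toy will always be handed back is weakened by one disconfirming observation.

```python
# Illustrative Bayesian belief revision (scenario and numbers invented):
# a strong prior theory is weakened by evidence unlikely under that theory.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the theory after one observation."""
    num = prior * p_evidence_if_true
    den = num + (1 - prior) * p_evidence_if_false
    return num / den

belief = 0.9  # prior: "the adult will always hand the toy back"
# Observation: the adult ignores the toy, which is unlikely if the theory holds.
belief = bayes_update(belief, p_evidence_if_true=0.1,
                      p_evidence_if_false=0.8)
print(f"belief after disconfirming evidence: {belief:.2f}")
```

One surprising observation drops the belief sharply but does not erase it, matching the description of early theories as incomplete yet revisable in the light of new evidence.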
Although this type of reasoning is especially salient in science, curiosity can manifest in many different types of information seeking in response to uncertainty, and is similar to critical thinking in other domains of knowledge and to active learning and problem solving more generally ( Gopnik, 2012 ; Klahr et al., 2013 ; Saylor and Ganea, 2018 ). The development of scientific thinking begins as the senses develop and begin providing information about the world ( Inhelder and Piaget, 1958 ; Gopnik et al., 1999 ). When they are not actively discouraged, children need no instruction to ask questions and explore, and the information they get often leads to further information seeking. In fact, observational research suggests that children can ask questions at the rate of more than 100 per hour ( Chouinard et al., 2007 )! Although the adults in a child’s life might tire of what seems like relentless questioning ( Turgeon, 2015 ), even young children can modify their beliefs and learn from the information they receive ( Ronfard et al., 2018 ). More generally, children seek to understand their world through active exploration, especially in response to recognizing a gap in their understanding ( Schulz and Bonawitz, 2007 ). The active choice of what to learn, driven by curiosity, can provide motivation and meaning to information and instill a lasting positive approach to learning in formal educational contexts.
How Does Curiosity Develop and Support Scientific Thinking?
There are several mechanisms through which children’s curiosity can support the development and persistence of scientific thinking. Three of these are discussed below, in sequence: that curiosity can (1) motivate information-seeking behavior, which leads to (2) question-asking and other information-seeking behaviors, which can (3) activate related previous knowledge and support deeper learning. Although we discuss these as independent, consecutive steps for the sake of clarity, it is much more likely that curiosity, question asking and information seeking, and cognitive processing of information and learning are all interrelated processes that support each other ( Oudeyer et al., 2016 ). For example, information seeking that is not a result of curiosity can lead to new questions, and as previous knowledge is activated it may influence the ways in which a child seeks information.
Curiosity as a Motivation for Information Seeking
Young children’s learning is driven by exploration to make sense of the world around them (e.g., Piaget, 1926 ). This exploration can result from curiosity ( Loewenstein, 1994 ; Jirout and Klahr, 2012 ) and lead to active engagement in learning ( Saylor and Ganea, 2018 ). In the example given previously, the child sees that some butterflies have open wings and some have closed wings, and may be uncertain about why, leading to more careful observations that provide potential for learning. Several studies demonstrate that the presence of uncertainty or ambiguity leads to higher engagement ( Howard-Jones and Demetriou, 2009 ) and more exploration and information seeking ( Berlyne, 1954 ; Lowry and Johnson, 1981 ; Loewenstein, 1994 ; Litman et al., 2005 ; Jirout and Klahr, 2012 ). For example, when children are shown ambiguous demonstrations for how a novel toy works, they prefer and play longer with that toy than with a new toy that was demonstrated without ambiguity ( Schulz and Bonawitz, 2007 ). Similar to ambiguity, surprising or unexpected observations can create uncertainty and lead to curiosity-driven questions or explanations through adult–child conversations ( Frazier et al., 2009 ; Danovitch and Mills, 2018 ; Jipson et al., 2018 ). This curiosity can promote lasting effects; Shah et al. (2018) show that young children’s curiosity, reported by parents at the start of kindergarten, relates to academic school readiness. In one of the few longitudinal studies including curiosity, research shows that parents’ promotion of curiosity early in childhood leads to science intrinsic motivation years later and science achievement in high school ( Gottfried et al., 2016 ). More generally, curiosity can provide a remedy to boredom, giving children a goal to direct their behavior and the motivation to act on their curiosity ( Litman and Silvia, 2006 ).
Curiosity as Support for Directing Information-Seeking Behavior
Gopnik et al. (2015) suggest that adults are efficient in their attention allocation, developed through extensive experience, but this attentional control comes at the cost of missing much of what is going on around them that is unrelated to their goals. Children have less experience and skill in focusing their attention, and more exploration-oriented goals, resulting in more open-ended exploratory behavior but also more distraction. Curiosity can help focus children’s attention on the specific information being sought (e.g., Legare, 2014 ). For example, when 7–9-year-old children completed a discovery-learning task in a museum, curiosity was related to more efficient learning: more curious children were quicker and learned more from similar exploration than less-curious children ( van Schijndel et al., 2018 ). Although children are quite capable of using questions to express curiosity and request specific information ( Berlyne, 1954 ; Chin and Osborne, 2010 ; Jirout and Zimmerman, 2015 ; Kidd and Hayden, 2015 ; Luce and Hsi, 2015 ), these skills can and should be strategically supported, as question asking plays a fundamental role in science and is important to develop ( Chouinard et al., 2007 ; Dewey, 1910 ; National Governors Association, 2010 ; American Association for the Advancement of Science [AAAS], 1993 ; among others). Indeed, the National Research Council (2012) science education standards include question asking as the first of eight scientific and engineering practices that span all grade levels and content areas.
Children are proficient in requesting information from quite early ages ( Ronfard et al., 2018 ). Yet, there are limitations to children’s question asking; it can be “inefficient.” For example, to identify a target object from an array, young children often ask confirmation questions or make guesses rather than using more efficient “constraint-seeking” questions ( Mills et al., 2010 ; Ruggeri and Lombrozo, 2015 ). However, this behavior is observed in highly structured problem-solving tasks, during which children likely are not very curious. In fact, if the environment contains other things that children are curious about, it could be more efficient to use a simplistic strategy, freeing up cognitive resources for the true target of their curiosity. More research is needed to better understand children’s use of curiosity-driven questioning behavior as well as exploration, but naturalistic observations show that children do ask questions spontaneously to gain information, and that their questions (and follow-up questions) are effective in obtaining desired information ( Nelson et al., 2004 ; Kelemen et al., 2005 ; Chouinard et al., 2007 ).
Curiosity as Support for Deeper Learning
Returning to the definition of curiosity as information seeking to address knowledge gaps, becoming curious, by definition, involves the activation of previous knowledge, which enhances learning (VanLehn et al., 1992; Conati and Carenini, 2001). The active learning that results from curiosity-driven information seeking involves meaningful cognitive engagement and constructive processing that can support deeper learning (Bonwell and Eison, 1991; King, 1994; Loyens and Gijbels, 2008). The constructive process of seeking information to generate new thinking or new knowledge in response to curiosity is a more effective means of learning than simply receiving information (Chi and Wylie, 2014). Even if information is simply given to a child as a result of their asking a question, the mere process of recognizing a gap in one’s knowledge and forming a question activates relevant previous knowledge and leads to more effective storage of the new information within a meaningful mental representation; the generation of the question is a constructive process in itself. Further, learning more about a topic allows children to better recognize their related knowledge and information gaps (Danovitch et al., 2019). This metacognitive reasoning supports learning through the processes of activating, integrating, and inferring involved in the constructive nature of curiosity-driven information seeking (Chi and Wylie, 2014). Consistent with this theory, Lamnina and Chase (2019) showed that higher curiosity, which increased with the amount of uncertainty in a task, related to greater transfer of middle school students’ learning about specific science topics.
Promoting Curiosity in Young Children
Curiosity is rated by early childhood educators as “very important” or “essential” for school readiness and considered to be even more important than discrete academic skills like counting and knowing the alphabet ( Heaviside et al., 1993 ; West et al., 1993 ), behind only physical health and communication skills in importance ( Harradine and Clifford, 1996 ). Engel (2011 , 2013) finds that curiosity declines with development and suggests that understanding how to promote or at least sustain it is important. Although children’s curiosity is considered a natural characteristic that is present at birth, interactions with and responses from others can likely influence curiosity, both at a specific moment and context and as a more stable disposition ( Jirout et al., 2018 ). For example, previous work suggests that curiosity can be promoted by encouraging children to feel comfortable with and explore uncertainty ( Jirout et al., 2018 ); experiences that create uncertainty lead to higher levels of curious behavior (e.g., Bonawitz et al., 2011 ; Engel and Labella, 2011 ; Gordon et al., 2015 ).
One strategy for promoting curiosity is through classroom climate: children should feel safe and encouraged to be curious, and exploration and questions should be valued (Pianta et al., 2008). This is accomplished by de-emphasizing being “right” or all-knowing, and instead embracing uncertainty and gaps in one’s own knowledge as opportunities to learn. Another strategy is to support the information-seeking behaviors that children use to act on their curiosity. There are several specific strategies that may promote children’s curiosity (see Jirout et al., 2018, for additional strategies), including:
1. Encourage and provide opportunities for children to explore and “figure out,” emphasizing the value of the process (exploration) over the outcome (new knowledge or skills). Children cannot explore if opportunities are not provided to them, and they will not ask questions if they do not feel that their questions are welcomed. Even if opportunities and encouragement are provided, the fear of being wrong can keep children from trying to learn new things ( Martin and Marsh, 2003 ; Martin, 2011 ). Active efforts to discover or “figure out” are more effective at supporting learning than simply telling children something or having them practice learned procedures ( Schwartz and Martin, 2004 ). Children can explore when they have guidance and support to engage in think-aloud problem solving, instead of being told what to try or getting questions answered directly ( Chi et al., 1994 ).
2. Model curiosity for children, allowing them to see that others have things that they do not know and want to learn about, and that others also enjoy information-seeking activities like asking questions and researching information. Technology makes information seeking easier than it has ever been. For example, children are growing up surrounded by internet-connected devices (more than 8 per capita in 2018), and asking questions is reported to be one of the most frequent uses of smart speakers ( NPR-Edison Research Spring, 2019 ). Observing others seeking information as a normal routine can encourage children’s own question asking ( McDonald, 1992 ).
3. Children spontaneously ask questions, but adults can encourage deeper questioning by using explicit prompts and then supporting children to generate questions ( King, 1994 ; Rosenshine et al., 1996 ). This is different from asking “Do you have any questions?”, which may elicit a simple “yes” or “no” response from the child. Instead, asking “What questions do you have?” is more likely to provide a cue for children to practice analyzing what they do not know and generating questions. The ability to evaluate one’s knowledge develops through practice, and scaffolding this process by helping children recognize questions to ask can effectively support development ( Kuhn and Pearsall, 2000 ; Chin and Brown, 2002 ).
4. Other methods to encourage curiosity include promoting and reinforcing children’s thinking about alternative ideas, which could also support creativity. Part of being curious is recognizing questions that can be asked, and if children understand that there are often multiple solutions or ways to do something, they will be more likely to explore to learn “how we know and why we believe; e.g., to expose science as a way of knowing” (Duschl and Osborne, 2002, p. 40). Children who learn to “think outside the box” will question what they and others know and better understand the dynamic nature of knowledge, supporting a curious mindset (Duschl and Osborne, 2002).
Although positive interactions can promote and sustain curiosity in young children, curiosity can also be suppressed or discouraged through interactions that emphasize performance or a focus on explicit instruction ( Martin and Marsh, 2003 ; Martin, 2011 ; Hulme et al., 2013 ). Performance goals, which focus on demonstrating the attainment of a skill, can lead to lower curiosity, as learners avoid distraction or risk to achieving the goal ( Hulme et al., 2013 ). Mastery goals, which focus on understanding and the learning process, support learning for its own sake ( Ames, 1993 ). When children are older and attend school, they experience expectations that prioritize performance metrics over academic and intellectual exploration, such as tests and state-standardized assessments, which discourages curiosity ( Engel, 2011 ; Jirout et al., 2018 ). In my own recent research, we observed a positive association between teachers’ use of mastery-focused language and their use of curiosity-promoting instructional practices in preschool math and science lessons ( Jirout and Vitiello, 2019 ). Among 5th graders, student ratings of teacher emphasis on standardized testing were associated with lower observed curiosity-promotion by teachers ( Jirout and Vitiello, 2019 ). It is likely that learning orientations influence children’s curiosity even before children begin formal schooling, and de-emphasizing performance is a way to support curiosity.
In summary, focusing on the process of “figuring out” something children do not know, modeling and explicitly prompting exploration and question asking, and supporting metacognitive and creative thinking are all ways to promote curiosity and support effective cognitive engagement during learning. These methods are consistent with inquiry-based and active learning, which both are grounded in constructivism and information gaps similar to the current operationalization of curiosity ( Jirout and Klahr, 2012 ; Saylor and Ganea, 2018 ; van Schijndel et al., 2018 ). Emphasizing performance, such as academic climates focused on teaching rote procedures and doing things the “correct” way to get the right answer, can suppress or discourage curiosity. Instead, creating a supportive learning climate and responding positively to curiosity are likely to further reinforce children’s information seeking, and to sustain their curiosity so that it can support scientific thinking and learning.
Conclusion: a Call for Research
In this article, I describe evidence from the limited existing research showing that curiosity is important and relates to science learning, and I suggest several mechanisms through which curiosity can support science learning. The general perspective presented here is that science learning can and should be supported by promoting curiosity, and I provide suggestions for promoting (and avoiding the suppression of) curiosity in early childhood. However, much more research is needed to address the complex challenge of educational applications of this work. Specifically, the suggested mechanisms through which curiosity promotes learning need to be studied to tease apart questions of directionality, the influence of related factors such as interest, the impact of context and learning domain on these relations, and the role of individual differences. Both the influence of curiosity on learning and effective ways to promote it likely change in interesting and important ways across development, and research is needed to understand this development, especially through studying change in individuals over time. Finally, it is important to acknowledge that learning does not happen in isolation, and one’s culture and environment have important roles in shaping one’s development. Thus, application of research on curiosity and science learning must include studies of the influence of social factors such as socioeconomic status and contexts, the influence of peers, teachers, parents, and others in children’s environments, and the many ways that culture may play a role, both in the broad values and beliefs instilled in children and the adults interacting with them, and in the influences of behavior expectations and norms. For example, parents across cultures might respond differently to children’s questions, so cross-cultural differences in questions likely indicate something other than differences in curiosity ( Ünlütabak et al., 2019 ).
Although curiosity likely promotes science learning across cultures and contexts, the ways in which it does so and effective methods of promoting it may differ, which is an important area for future research to explore. Despite the benefits I present, curiosity seems to be rare or even absent from formal learning contexts ( Engel, 2013 ), even as children show curiosity about things outside of school ( Post and Walma van der Molen, 2018 ). Efforts to promote science learning should focus on the exciting potential for curiosity in supporting children’s learning, as promoting young children’s curiosity in science can start children on a positive trajectory for later learning.
Ethics Statement
Written informed consent was obtained from the individual(s) and/or minor(s)’ legal guardian/next of kin for the publication of any potentially identifiable images or data included in this article.
Author Contributions
JJ conceived of the manuscript topic and wrote the manuscript.
Funding

This publication was made possible through the support of grants from the John Templeton Foundation, the Spencer Foundation, and the Center for Curriculum Redesign. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation or other funders.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
American Association for the Advancement of Science [AAAS] (1993). Benchmarks for Science Literacy. Oxford: Oxford University Press.
Ames, C. (1993). Classrooms: goals, structures, and student motivation. J. Educ. Psychol. 84, 261–271. doi: 10.1037/0022-0663.84.3.261
Berlyne, D. E. (1954). An experimental study of human curiosity. Br. J. Psychol. 45, 256–265. doi: 10.1111/j.2044-8295.1954.tb01253.x
Bonawitz, E., Shafto, P., Gweon, H., Goodman, N. D., Spelke, E., and Schulz, L. (2011). The double-edged sword of pedagogy: instruction limits spontaneous exploration and discovery. Cognition 120, 322–330. doi: 10.1016/j.cognition.2010.10.001
Bonwell, C. C., and Eison, J. A. (1991). Active Learning: Creating Excitement in the Classroom. 1991 ASHE-ERIC Higher Education Reports. ERIC Clearinghouse on Higher Education. Washington, DC: The George Washington University.
Chi, M. T. H., Leeuw, N. D., Chiu, M.-H., and Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cogn. Sci. 18, 439–477. doi: 10.1207/s15516709cog1803_3
Chi, M. T. H., and Wylie, R. (2014). The ICAP framework: linking cognitive engagement to active learning outcomes. Educ. Psychol. 49, 219–243. doi: 10.1080/00461520.2014.965823
Chin, C., and Brown, D. E. (2002). Student-generated questions: a meaningful aspect of learning in science. Int. J. Sci. Educ. 24, 521–549. doi: 10.1080/09500690110095249
Chin, C., and Osborne, J. (2010). Supporting argumentation through students’ questions: case studies in science classrooms. J. Learn. Sci. 19, 230–284. doi: 10.1080/10508400903530036
Chouinard, M. M., Harris, P. L., and Maratsos, M. P. (2007). Children’s questions: a mechanism for cognitive development. Monogr. Soc. Res. Child Dev. 72, i–129.
Conati, C., and Carenini, G. (2001). “Generating tailored examples to support learning via self-explanation,” in Proceedings of IJCAI’01, 17th International Joint Conference on Artificial Intelligence , Seattle, WA, 1301–1306.
Danovitch, J. H., Fisher, M., Schroder, H., Hambrick, D. Z., and Moser, J. (2019). Intelligence and neurophysiological markers of error monitoring relate to Children’s intellectual humility. Child Dev. 90, 924–939. doi: 10.1111/cdev.12960
Danovitch, J. H., and Mills, C. M. (2018). “Understanding when and how explanation promotes exploration,” in Active Learning from Infancy to Childhood: Social Motivation, Cognition, and Linguistic Mechanisms , eds M. M. Saylor and P. A. Ganea (Berlin: Springer), 95–112. doi: 10.1007/978-3-319-77182-3_6
Dewey, J. (1910). How We Think. Lexington, MA: D.C. Heath and Company. doi: 10.1037/10903-000
Duschl, R. A., and Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Stud. Sci. Educ. 38, 39–72. doi: 10.1080/03057260208560187
Duschl, R. A., Schweingruber, H. A., and Shouse, A. W. (eds) (2007). Taking Science to School: Learning and Teaching Science in Grades K-8. Washington, DC: The National Academies Press. doi: 10.17226/11625
Engel, S. (2011). Children’s need to know: curiosity in schools. Harv. Educ. Rev. 81, 625–645. doi: 10.17763/haer.81.4.h054131316473115
Engel, S. (2013). The case for curiosity. Educ. Leadersh. 70, 36–40.
Engel, S., and Labella, M. (2011). Encouraging exploration: the effects of teaching behavior on student expressions of curiosity, as cited in Engel, S. (2011). Children’s Need to Know: curiosity in Schools. Harv. Educ. Rev. 81, 625–645. doi: 10.17763/haer.81.4.h054131316473115
Frazier, B. N., Gelman, S. A., and Wellman, H. M. (2009). Preschoolers’ search for explanatory information within adult–child conversation. Child Dev. 80, 1592–1611. doi: 10.1111/j.1467-8624.2009.01356.x
French, L. A., and Woodring, S. D. (2013). Science Education in the Early Years. Handbook of Research on the Education of Young Children. Available online at: http://www.taylorfrancis.com/ (accessed February 29, 2020).
Gopnik, A. (2012). Scientific thinking in young children: theoretical advances, empirical research, and policy implications. Science 337, 1623–1627. doi: 10.1126/science.1223416
Gopnik, A., Griffiths, T. L., and Lucas, C. G. (2015). When younger learners can be better (or at least more open-minded) than older ones. Curr. Dir. Psychol. Sci. 24, 87–92. doi: 10.1177/0963721414556653
Gopnik, A., Meltzoff, A. N., and Kuhl, P. K. (1999). The Scientist in the Crib: Minds, Brains, and How Children Learn. New York, NY: William Morrow & Co.
Gordon, G., Breazeal, C., and Engel, S. (2015). Can children catch curiosity from a social robot? Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction , New York, NY, 91–98. doi: 10.1145/2696454.2696469
Gottfried, A. E., Preston, K. S. J., Gottfried, A. W., Oliver, P. H., Delany, D. E., and Ibrahim, S. M. (2016). Pathways from parental stimulation of children’s curiosity to high school science course accomplishments and science career interest and skill. Int. J. Sci. Educ. 38, 1972–1995. doi: 10.1080/09500693.2016.1220690
Harradine, C. C., and Clifford, R. M. (1996). When are children ready for kindergarten? Views of families, kindergarten teachers, and child care providers. Paper Presented at the Annual Meeting of the American Educational Research Association , New York, NY.
Howard-Jones, P. A., and Demetriou, S. (2009). Uncertainty and engagement with learning games. Inst. Sci. 37, 519–536. doi: 10.1007/s11251-008-9073-6
Heaviside, S., Farris, E., and Carpenter, J. M. (1993). Public School Kindergarten Teachers’ Views on Children’s Readiness for School. US Department of Education, Office of Educational Research and Improvement, National Center for Education Statistics.
Hulme, E., Green, D. T., and Ladd, K. S. (2013). Fostering student engagement by cultivating curiosity: fostering student engagement by cultivating curiosity. New Dir. Stud. Serv. 2013, 53–64. doi: 10.1002/ss.20060
Inhelder, B., and Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence: An Essay on the Construction of Formal Operational Structures. London: Routledge.
Jipson, J. L., Labotka, D., Callanan, M. A., and Gelman, S. A. (2018). “How conversations with parents may help children learn to separate the sheep from the goats (and the Robots),” in Active Learning from Infancy to Childhood: Social Motivation, Cognition, and Linguistic Mechanisms , eds M. M. Saylor and P. A. Ganea (Berlin: Springer), 189–212. doi: 10.1007/978-3-319-77182-3_11
Jirout, J., and Klahr, D. (2012). Children’s scientific curiosity: in search of an operational definition of an elusive concept. Dev. Rev. 32, 125–160. doi: 10.1016/j.dr.2012.04.002
Jirout, J., and Vitiello, V. (2019). Curiosity in the classroom through supportive instruction. Paper Presented at the SRCD Biennial Meeting, Baltimore, MD.
Jirout, J., Vitiello, V., and Zumbrunn, S. (2018). “Curiosity in schools,” in The New Science of Curiosity , ed. G. Gordon (Hauppauge, NY: Nova).
Jirout, J., and Zimmerman, C. (2015). “Development of science process skills in the early childhood years,” in Research in Early Childhood Science Education , eds K. Cabe Trundle and M. Saçkes (Berlin: Springer), 143–165. doi: 10.1007/978-94-017-9505-0_7
Kelemen, D., Callanan, M. A., Casler, K., and Pérez-Granados, D. R. (2005). Why things happen: teleological explanation in parent-child conversations. Dev. Psychol. 41, 251–264. doi: 10.1037/0012-1649.41.1.251
Kidd, C., and Hayden, B. Y. (2015). The psychology and neuroscience of curiosity. Neuron 88, 449–460. doi: 10.1016/j.neuron.2015.09.010
King, A. (1994). Guiding knowledge construction in the classroom: effects of teaching children how to question and how to explain. Am. Educ. Res. J. 31, 338–368. doi: 10.2307/1163313
Klahr, D., Matlen, B., and Jirout, J. (2013). “Children as scientific thinkers,” in Handbook of the Psychology of Science , eds G. Feist and M. Gorman (New York, NY: Springer), 223–248.
Kuhn, D. (2002). “What is scientific thinking, and how does it develop?” in Blackwell Handbook of Childhood Cognitive Development , ed. U. Goswami (Oxford: Blackwell Publishing.), 371–393. doi: 10.1002/9780470996652.ch17
Kuhn, D., and Pearsall, S. (2000). Developmental Origins of Scientific Thinking. J. Cogn. Dev. 1, 113–129. doi: 10.1207/S15327647JCD0101N_11
Lamnina, M., and Chase, C. C. (2019). Developing a thirst for knowledge: how uncertainty in the classroom influences curiosity, affect, learning, and transfer. Contemp. Educ. Psychol. 59:101785. doi: 10.1016/j.cedpsych.2019.101785
Legare, C. H. (2014). The contributions of explanation and exploration to children’s scientific reasoning. Child Dev. Perspect. 8, 101–106. doi: 10.1111/cdep.12070
Litman, J., Hutchins, T., and Russon, R. (2005). Epistemic curiosity, feeling-of-knowing, and exploratory behaviour. Cogn. Emot. 19, 559–582. doi: 10.1080/02699930441000427
Litman, J. A., and Silvia, P. J. (2006). The latent structure of trait curiosity: evidence for interest and deprivation curiosity dimensions. J. Pers. Assess. 86, 318–328. doi: 10.1207/s15327752jpa8603_07
Loewenstein, G. (1994). The psychology of curiosity: a review and reinterpretation. Psychol. Bull. 116, 75–98. doi: 10.1037/0033-2909.116.1.75
Lowry, N., and Johnson, D. W. (1981). Effects of controversy on epistemic curiosity, achievement, and attitudes. J. Soc. Psychol. 115, 31–43. doi: 10.1080/00224545.1981.9711985
Loyens, S. M., and Gijbels, D. (2008). Understanding the effects of constructivist learning environments: introducing a multi-directional approach. Inst. Sci. 36, 351–357. doi: 10.1007/s11251-008-9059-4
Luce, M. R., and Hsi, S. (2015). Science-relevant curiosity expression and interest in science: an exploratory study. Sci. Educ. 99, 70–97. doi: 10.1002/sce.21144
Martin, A. J. (2011). Courage in the classroom: exploring a new framework predicting academic performance and engagement. Sch. Psychol. Q. 26, 145–160. doi: 10.1037/a0023020
Martin, A. J., and Marsh, H. W. (2003). Fear of Failure: Friend or Foe? Aust. Psychol. 38, 31–38. doi: 10.1080/00050060310001706997
McDonald, J. P. (1992). Teaching: Making Sense of an Uncertain Craft. New York, NY: Teachers College Press.
Metz, K. E. (2004). Children’s understanding of scientific inquiry: their conceptualization of uncertainty in investigations of their own design. Cogn. Instr. 22, 219–290. doi: 10.1207/s1532690xci2202_3
Mills, C. M., Legare, C. H., Bills, M., and Mejias, C. (2010). Preschoolers use questions as a tool to acquire knowledge from different sources. J. Cogn. Dev. 11, 533–560. doi: 10.1080/15248372.2010.516419
Morris, B. J., Croker, S., Masnick, A., and Zimmerman, C. (2012). “The emergence of scientific reasoning,” in Current Topics in Children’s Learning and Cognition , eds H. Kloos, B. J. Morris, and J. L. Amaral (Rijeka: IntechOpen). doi: 10.5772/53885
National Governors Association (2010). Common Core State Standards. Washington, DC: National Governors Association.
National Research Council (2012). A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Washington, DC: National Academy Press.
Nelson, D. G. K., Chan, L. E., and Holt, M. B. (2004). When children ask, “What is it?” what do they want to know about artifacts? Psychol. Sci. 15, 384–389. doi: 10.1111/j.0956-7976.2004.00689.x
NPR-Edison Research Spring (2019). The Smart Audio Report. Available online at: https://www.nationalpublicmedia.com/uploads/2019/10/The_Smart_Audio_Report_Spring_2019.pdf (accessed February 23, 2020).
Oudeyer, P.-Y., Gottlieb, J., and Lopes, M. (2016). Intrinsic motivation, curiosity, and learning: theory and applications in educational technologies. Prog. Brain Res. 229, 257–284. doi: 10.1016/bs.pbr.2016.05.005
Piaget, J. (1926). The Thought and Language of the Child. New York, NY: Harcourt, Brace, and Company.
Pianta, R. C., La Paro, K. M., and Hamre, B. K. (2008). Classroom Assessment Scoring SystemTM: Manual K-3. Baltimore, MD: Paul H Brookes Publishing.
Post, T., and Walma van der Molen, J. H. (2018). Do children express curiosity at school? Exploring children’s experiences of curiosity inside and outside the school context. Learn. Cult. Soc. Interact. 18, 60–71. doi: 10.1016/j.lcsi.2018.03.005
Ronfard, S., Zambrana, I. M., Hermansen, T. K., and Kelemen, D. (2018). Question-asking in childhood: a review of the literature and a framework for understanding its development. Dev. Rev. 49, 101–120. doi: 10.1016/j.dr.2018.05.002
Rosenshine, B., Meister, C., and Chapman, S. (1996). Teaching students to generate questions: a review of the intervention studies. Rev. Educ. Res. 66, 181–221. doi: 10.2307/1170607
Ruggeri, A., and Lombrozo, T. (2015). Children adapt their questions to achieve efficient search. Cognition 143, 203–216. doi: 10.1016/j.cognition.2015.07.004
Saylor, M. M., and Ganea, P. A. (eds) (2018). Active Learning from Infancy to Childhood: Social Motivation, Cognition, and Linguistic Mechanisms. Berlin: Springer. doi: 10.1007/978-3-319-77182-3
Schulz, L. E., and Bonawitz, E. B. (2007). Serious fun: preschoolers engage in more exploratory play when evidence is confounded. Dev. Psychol. 43, 1045–1050. doi: 10.1037/0012-1649.43.4.1045
Schwartz, D. L., and Martin, T. (2004). Inventing to prepare for future learning: the hidden efficiency of encouraging original student production in statistics instruction. Cogn. Inst. 22, 129–184. doi: 10.1207/s1532690xci2202_1
Shah, P. E., Weeks, H. M., Richards, B., and Kaciroti, N. (2018). Early childhood curiosity and kindergarten reading and math academic achievement. Pediatr. Res. 84, 380–386. doi: 10.1038/s41390-018-0039-3
Turgeon, W. C. (2015). The art and danger of the question: its place within philosophy for children and its philosophical history. Mind Cult. Act. 22, 284–298. doi: 10.1080/10749039.2015.1079919
Ünlütabak, B., Nicolopoulou, A., and Aksu-Koç, A. (2019). Questions asked by Turkish preschoolers from middle-SES and low-SES families. Cogn. Dev. 52:100802. doi: 10.1016/j.cogdev.2019.100802
van Schijndel, T. J. P., Jansen, B. R. J., and Raijmakers, M. E. J. (2018). Do individual differences in children’s curiosity relate to their inquiry-based learning? Int. J. Sci. Educ. 40, 996–1015. doi: 10.1080/09500693.2018.1460772
VanLehn, K., Jones, R. M., and Chi, M. T. H. (1992). A model of the self-explanation effect. J. Learn. Sci. 2, 1–59. doi: 10.1207/s15327809jls0201_1
West, J., Hausken, E. G., and Collins, M. (1993). Readiness for Kindergarten: Parent and Teacher Beliefs. Statistics in Brief. Available online at: https://eric.ed.gov/?id=ED363429 (accessed February 29, 2020).
Keywords : curiosity, scientific reasoning, scientific thinking, information seeking, exploration, learning
Citation: Jirout JJ (2020) Supporting Early Scientific Thinking Through Curiosity. Front. Psychol. 11:1717. doi: 10.3389/fpsyg.2020.01717
Received: 28 February 2020; Accepted: 23 June 2020; Published: 05 August 2020.
Copyright © 2020 Jirout. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jamie J. Jirout, [email protected]
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
Scientific Method Steps in Psychology Research
Steps, Uses, and Key Terms
How do researchers investigate psychological phenomena? They utilize a process known as the scientific method to study different aspects of how people think and behave.
When conducting research, the scientific method steps to follow are:
- Observe what you want to investigate
- Ask a research question and make predictions
- Test the hypothesis and collect data
- Examine the results and draw conclusions
- Report and share the results
This process not only allows scientists to investigate and understand different psychological phenomena but also provides researchers and others a way to share and discuss the results of their studies.
Generally, there are five main steps in the scientific method, although some may break down this process into six or seven steps. An additional step in the process can also include developing new research questions based on your findings.
What Is the Scientific Method?
What is the scientific method and how is it used in psychology?
The scientific method consists of five steps. It is essentially a step-by-step process that researchers can follow to determine if there is some type of relationship between two or more variables.
By knowing the steps of the scientific method, you can better understand the process researchers go through to arrive at conclusions about human behavior.
Scientific Method Steps
While research studies can vary, these are the basic steps that psychologists and scientists use when investigating human behavior.
The following are the scientific method steps:
Step 1. Make an Observation
Before a researcher can begin, they must choose a topic to study. Once an area of interest has been chosen, the researchers must then conduct a thorough review of the existing literature on the subject. This review will provide valuable information about what has already been learned about the topic and what questions remain to be answered.
A literature review might involve looking at a considerable amount of written material from both books and academic journals dating back decades.
The relevant information collected by the researcher will be presented in the introduction section of the final published study results. This background material will also help the researcher with the first major step in conducting a psychology study: formulating a hypothesis.
Step 2. Ask a Question
Once a researcher has observed something and gained some background information on the topic, the next step is to ask a question. The researcher will form a hypothesis, which is an educated guess about the relationship between two or more variables.
For example, a researcher might ask a question about the relationship between sleep and academic performance: Do students who get more sleep perform better on tests at school?
In order to formulate a good hypothesis, it is important to think about different questions you might have about a particular topic.
You should also consider how you could investigate the causes. Falsifiability is an important part of any valid hypothesis. In other words, if a hypothesis is false, there must be a way for scientists to demonstrate that it is false.
Step 3. Test Your Hypothesis and Collect Data
Once you have a solid hypothesis, the next step of the scientific method is to put this hunch to the test by collecting data. The exact methods used to investigate a hypothesis depend on exactly what is being studied. There are two basic forms of research that a psychologist might utilize: descriptive research or experimental research.
Descriptive research is typically used when it would be difficult or even impossible to manipulate the variables in question. Examples of descriptive research include case studies, naturalistic observation, and correlational studies. Phone surveys that are often used by marketers are one example of descriptive research.
Correlational studies are quite common in psychology research. While they do not allow researchers to determine cause-and-effect, they do make it possible to spot relationships between different variables and to measure the strength of those relationships.
Experimental research is used to explore cause-and-effect relationships between two or more variables. This type of research involves systematically manipulating an independent variable and then measuring the effect that it has on a defined dependent variable.
One of the major advantages of this method is that it allows researchers to actually determine if changes in one variable actually cause changes in another.
While psychology experiments are often quite complex, even a simple experiment allows researchers to determine cause-and-effect relationships between variables. Most simple experiments use a control group (participants who do not receive the treatment) and an experimental group (participants who do receive the treatment).
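A minimal sketch of this two-group design, with invented scores and group sizes, might look like the following (the independent variable is group membership; the dependent variable is the score):

```python
import statistics

# Invented test scores for a simple two-group experiment.
control_group = [70, 72, 68, 71, 69, 73]       # did not receive the treatment
experimental_group = [78, 75, 80, 77, 79, 74]  # received the treatment

# Compare the dependent variable (mean score) across the two groups.
difference = statistics.mean(experimental_group) - statistics.mean(control_group)
print(f"Mean difference between groups: {difference:.1f}")
```

On its own, a mean difference is only descriptive; whether it is statistically meaningful is addressed in the analysis step below.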
Step 4. Examine the Results and Draw Conclusions
Once a researcher has designed the study and collected the data, it is time to examine this information and draw conclusions about what has been found. Using statistics, researchers can summarize the data, analyze the results, and draw conclusions based on this evidence.
So how does a researcher decide what the results of a study mean? Not only can statistical analysis support (or refute) the researcher’s hypothesis; it can also be used to determine if the findings are statistically significant.
When results are said to be statistically significant, it means that it is unlikely that these results are due to chance.
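One simple way to estimate whether a difference could be due to chance is a permutation test. The sketch below (all data invented) shuffles the group labels many times and counts how often a difference at least as large as the observed one arises by chance alone:

```python
import random
import statistics

random.seed(42)  # make the shuffles reproducible

# Invented scores for a treated and an untreated group of six participants.
treated = [78, 75, 80, 77, 79, 74]
untreated = [70, 72, 68, 71, 69, 73]
observed = statistics.mean(treated) - statistics.mean(untreated)

# Permutation test: if the treatment had no effect, the group labels are
# arbitrary, so reshuffling them shows how big a difference chance produces.
pooled = treated + untreated
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(p_value)  # a small value means the result is unlikely to be chance
```

A p-value below the conventional 0.05 threshold is what researchers typically mean when they call a result statistically significant.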
Based on these observations, researchers must then determine what the results mean. In some cases, an experiment will support a hypothesis, but in other cases, it will fail to support the hypothesis.
So what happens if the results of a psychology experiment do not support the researcher's hypothesis? Does this mean that the study was worthless?
Just because the findings fail to support the hypothesis does not mean that the research is not useful or informative. In fact, such research plays an important role in helping scientists develop new questions and hypotheses to explore in the future.
After conclusions have been drawn, the next step is to share the results with the rest of the scientific community. This is an important part of the process because it contributes to the overall knowledge base and can help other scientists find new research avenues to explore.
Step 5. Report the Results
The final step in a psychology study is to report the findings. This is often done by writing up a description of the study and publishing the article in an academic or professional journal. The results of psychological studies can be seen in peer-reviewed journals such as Psychological Bulletin, the Journal of Social Psychology, Developmental Psychology, and many others.
The structure of a journal article follows a specified format that has been outlined by the American Psychological Association (APA). In these articles, researchers:
- Provide a brief history and background on previous research
- Present their hypothesis
- Identify who participated in the study and how they were selected
- Provide operational definitions for each variable
- Describe the measures and procedures that were used to collect data
- Explain how the information collected was analyzed
- Discuss what the results mean
Why is such a detailed record of a psychological study so important? By clearly explaining the steps and procedures used throughout the study, other researchers can then replicate the results. The editorial process employed by academic and professional journals ensures that each article that is submitted undergoes a thorough peer review, which helps ensure that the study is scientifically sound.
Once published, the study becomes another piece of the existing puzzle of our knowledge base on that topic.
Here is a review of some key terms and definitions that you should be familiar with:
- Falsifiable: The variables can be measured so that if a hypothesis is false, it can be proven false
- Hypothesis: An educated guess about the possible relationship between two or more variables
- Variable: A factor or element that can change in observable and measurable ways
- Operational definition: A full description of exactly how variables are defined, how they will be manipulated, and how they will be measured
Uses for the Scientific Method
The goals of psychological studies are to describe, explain, predict and perhaps influence mental processes or behaviors. In order to do this, psychologists utilize the scientific method to conduct psychological research. The scientific method is a set of principles and procedures that are used by researchers to develop questions, collect data, and reach conclusions.
Goals of Scientific Research in Psychology
Researchers seek not only to describe behaviors and explain why these behaviors occur; they also strive to create research that can be used to predict and even change human behavior.
Psychologists and other social scientists regularly propose explanations for human behavior. On a more informal level, people make judgments about the intentions, motivations , and actions of others on a daily basis.
While the everyday judgments we make about human behavior are subjective and anecdotal, researchers use the scientific method to study psychology in an objective and systematic way. The results of these studies are often reported in popular media, which leads many to wonder just how or why researchers arrived at the conclusions they did.
Examples of the Scientific Method
Now that you're familiar with the scientific method steps, it's useful to see how each step could work with a real-life example.
Say, for instance, that researchers set out to discover what the relationship is between psychotherapy and anxiety.
- Step 1. Make an observation: The researchers choose to focus their study on adults ages 25 to 40 with generalized anxiety disorder.
- Step 2. Ask a question: The question they want to answer in their study is: Do weekly psychotherapy sessions reduce symptoms in adults ages 25 to 40 with generalized anxiety disorder?
- Step 3. Test your hypothesis: Researchers collect baseline data on participants' anxiety symptoms. They work with therapists to create a consistent program that all participants undergo: group 1 attends therapy once per week, whereas group 2 does not attend therapy.
- Step 4. Examine the results: Participants record their symptoms and any changes over a period of three months. After this period, people in group 1 report significant improvements in their anxiety symptoms, whereas those in group 2 report no significant changes.
- Step 5. Report the results: Researchers write a report that includes their hypothesis, information on participants, variables, procedure, and conclusions drawn from the study. In this case, they say that "Weekly therapy sessions are shown to reduce anxiety symptoms in adults ages 25 to 40."
Of course, there are many details that go into planning and executing a study such as this. But this general outline gives you an idea of how an idea is formulated and tested, and how researchers arrive at results using the scientific method.
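The analysis in steps 3 and 4 of this example can be sketched in Python. All of the numbers below (symptom scores, group sizes) are invented for illustration:

```python
import statistics

# Hypothetical anxiety-symptom scores before and after three months
# (higher = more severe); all values are invented.
therapy_pre = [30, 28, 32, 27, 31]    # group 1: weekly psychotherapy
therapy_post = [22, 21, 25, 20, 23]
control_pre = [29, 31, 28, 30, 27]    # group 2: no therapy
control_post = [28, 30, 29, 31, 27]

def mean_change(pre, post):
    """Average change in symptom score; negative means improvement."""
    return statistics.mean(after - before for before, after in zip(pre, post))

print("Therapy group change:", mean_change(therapy_pre, therapy_post))
print("Control group change:", mean_change(control_pre, control_post))
```

In this invented data set the therapy group improves while the control group barely changes, which is the pattern the researchers would then test for statistical significance before reporting.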
Erol A. How to conduct scientific research? Noro Psikiyatr Ars. 2017;54(2):97-98. doi:10.5152/npa.2017.0120102
University of Minnesota. Psychologists use the scientific method to guide their research.
Shaughnessy JJ, Zechmeister EB, Zechmeister JS. Research Methods in Psychology. New York: McGraw Hill Education; 2015.
By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."