Research Methods in Library and Information Science

Submitted: 28 October 2016 Reviewed: 23 March 2017 Published: 28 June 2017

DOI: 10.5772/intechopen.68749

From the Edited Volume

Qualitative versus Quantitative Research

Edited by Sonyel Oflazoglu

Library and information science (LIS) is a very broad discipline, which uses a wide range of constantly evolving research strategies and techniques. The aim of this chapter is to provide an updated view of research issues in library and information science. A stratified random sample of 440 articles published in five prominent journals was analyzed and classified to identify (i) research approach, (ii) research methodology, and (iii) method of data analysis. For each variable, a coding scheme was developed, and the articles were coded accordingly. A total of 78% of the articles reported empirical research; the remaining 22% were classified as non-empirical research papers. The five most popular topics were “information retrieval,” “information behavior,” “information literacy,” “library services,” and “organization and management.” An overwhelming majority of the empirical research articles employed a quantitative approach. Although the survey emerged as the most frequently used research strategy, there is evidence that the number and variety of research methodologies have increased. There is also evidence that qualitative approaches are gaining increasing importance and have a role to play in LIS, while mixed methods have not yet gained enough recognition in LIS research.

  • library and information science
  • research methods
  • research strategies
  • data analysis techniques
  • research articles

Author Information

Aspasia Togia*

  • Department of Library Science & Information Systems, Technological Educational Institute (TEI) of Thessaloniki, Greece

Afrodite Malliari

  • DataScouting, Thessaloniki, Greece

*Address all correspondence to: [email protected]

1. Introduction

Library and information science (LIS), as its name indicates, is a merging of librarianship and information science that took place in the 1960s [ 1 , 2 ]. LIS is a field of both professional practice and scientific inquiry. As a field of practice, it includes the profession of librarianship as well as a number of other information professions, all of which assume the interplay of the following:

information content,

the people who interact with the content, and

the technology used to facilitate the creation, communication, storage, or transformation of the content [ 3 ].

The disciplinary foundation of LIS, which began in the 1920s, aimed at providing a theoretical foundation for the library profession. LIS has evolved in close relationship with other fields of research, especially computer science, communication studies, and cognitive sciences [ 4 ].

The connection of LIS with professional practice, on the one hand, and with other research fields, on the other, has influenced its research orientation and the development of methodological tools and theoretical perspectives [ 5 ]. Research problems are diverse, depending on the research direction, local trends, etc. Most of them relate to professional practice, although there are theoretical research questions as well. LIS research strives to address important information issues, such as those of “ information retrieval, information quality and authenticity, policy for access and preservation, the health and security applications of data mining ” (p. 3) [ 6 ]. The research is multidisciplinary in nature, and it has been heavily influenced by research designs developed in the social, behavioral, and management sciences and, to a lesser extent, by the theoretical inquiry adopted in the humanities [ 7 ]. Methods used in information retrieval research have been adapted from computer science. The emergence of evidence‐based librarianship in the late 1990s brought a positivist approach to LIS research, since it incorporated many of the research designs and methods used in clinical medicine [ 7 , 8 ]. In addition, LIS has developed its own methodological approaches, a prominent example of which is bibliometrics. Bibliometrics, which can be defined as “ the use of mathematical and statistical methods to study documents and patterns of publication ” (p. 38) [ 9 ], is a native research methodology that has been used extensively outside the field, especially in science studies [ 10 ].

Library and information science research has often been criticized as being fragmentary, narrowly focused, and oriented to practical problems [ 11 ]. Many authors have noticed limited use of theory in published research and have advocated greater use of theory as a conceptual basis in LIS research [ 4 , 11 – 14 ]. Feehan et al. [ 13 ] claimed that LIS literature has not evolved enough to support a rigid theoretical basis of its own. Jarvelin and Vakkari [ 15 ] argued that LIS theories are usually vague and conceptually unclear, and that research in LIS has been dominated by a paradigm which “ has made little use of such traditional scientific approaches as foundations and conceptual analysis, or of scientific explanation and theory formulation ” (p. 415). This lack of theoretical contributions may be associated with the fact that LIS emanated from professional practice and is therefore closely linked to practical problems such as the processing and organization of library materials, documentation, and information retrieval [ 15 , 16 ].

In this chapter, after briefly discussing the role of theory in LIS research, we provide an updated view of research issues in the field that will help scholars and students stay informed about topics related to research strategies and methods. To accomplish this, we describe and analyze patterns of LIS research activity as reflected in prominent library journals. The analysis of the articles highlights trends and recurring themes in LIS research regarding the use of multiple methods, the adoption of qualitative approaches, and the employment of advanced techniques for data analysis and interpretation [ 17 ].

2. The role of theory in LIS research

The presence of theory is an indication of research eminence and respectability [ 18 ], as well as a feature of a discipline’s maturity [ 19 , 20 ]. Theory has been defined in many ways. “ Any of the following have been used as the meaning of theory: a law, a hypothesis, group of hypotheses, proposition, supposition, explanation, model, assumption, conjecture, construct, edifice, structure, opinion, speculation, belief, principle, rule, point of view, generalization, scheme, or idea ” (p. 309) [ 21 ]. A theory can be described as “ a set of interrelated concepts, definitions, and propositions that explains or predicts events or situations by specifying relations among variables ” [ 22 ]. According to Babbie [ 23 ], theory is “ a systematic explanation for the observed facts and laws that related to a particular aspect of life ” (p. 49). It is “ a multiple‐level component of the research process, comprising a range of generalizations that move beyond a descriptive level to a more explanatory level ” [ 24 ] (p. 319). The role of theory in the social sciences is, among other things, to explain and predict behavior, be usable in practical applications, and guide research [ 25 ]. According to Smiraglia [ 26 ], theory does not exist in a vacuum but in a system that explains the domains of human actions, the phenomena found in these domains, and the ways in which they are affected. He maintains that theory is developed by systematically observing phenomena, either in the positivist empirical research paradigm or in the qualitative hermeneutic paradigm. Theory is used to formulate hypotheses in quantitative research and to confirm observations in qualitative research.

Glazier and Grover [ 24 ] proposed a model for theory‐building in LIS called “circuits of theory.” The model includes a taxonomy of theory, developed earlier by the authors [ 11 ], and the critical social and psychological factors that influence research. The purpose of the taxonomy was to demonstrate the relationships among the concepts of research, theory, paradigms, and phenomena. Phenomena are described as “ events experienced in the empirical world ” (p. 230) [ 11 ]. Researchers assign symbols (digital or iconic representations, usually words or pictures) to phenomena, and meaning to symbols, and then they conceptualize the relationships among phenomena and formulate hypotheses and research questions. “ In the taxonomy, empirical research begins with the formation of research questions to be answered about the concepts or hypotheses for testing the concepts within a narrow set of predetermined parameters ” (p. 323) [ 24 ]. Various levels of theories, with implications for research in library and information science, are described. The first theory level, called substantive theory , is defined as “ a set of propositions which furnish an explanation for an applied area of inquiry ” (p. 233) [ 11 ]. In fact, it may not be viewed as a theory but rather be considered as a research hypothesis that has been tested or even a research finding [ 16 ]. The next level of theory, called formal theory , is defined as “ a set of propositions which furnish an explanation for a formal or conceptual area of inquiry, that is, a discipline ” (p. 234) [ 11 ]. Substantive and formal theories together are usually considered as “middle range” theory in the social sciences. Their difference lies in the ability to structure generalizations and the potential for explanation and prediction. The final level, grand theory , is “ a set of theories or generalizations that transcend the borders of disciplines to explain relationships among phenomena ” (p. 321) [ 24 ]. According to the authors, most research generates substantive level theory, or, alternatively, researchers borrow theory from the appropriate discipline, apply it to the problem under investigation, and reconstruct the theory at the substantive level. Next in the hierarchy of theoretical categories is the paradigm , which is described as “ a framework of basic assumptions with which perceptions are evaluated and relationships are delineated and applied to a discipline or profession ” (p. 234) [ 11 ]. Finally, the most significant theoretical category is the world view , which is defined as “ an individual’s accepted knowledge, including values and assumptions, which provide a ‘filter’ for perception of all phenomena ” (p. 235) [ 11 ]. All the previous categories contribute to shaping the individual’s worldview. In the revised model, which places more emphasis on the impact of the social environment on the research process, research and theory building are surrounded by a system of three basic contextual modules: the self, society, and knowledge, both discovered and undiscovered. The interactions and dialectical relationships of these three modules affect the research process and create a dynamic environment that fosters theory creation and development. The authors argue that their model will help researchers build theories that enable generalizations beyond the conclusions drawn from empirical data [ 24 ].

In an effort to propose a framework for a unified theory of librarianship, McGrath [ 27 ] reviewed research articles in the areas of publishing, acquisitions, classification and knowledge organization, storage, preservation and collection management, library collections, and circulations. In his study, he included articles that employed explanatory and predictive statistical methods to explore relationships between variables within and between the above subfields of LIS. For each paper reviewed, he identified the dependent variable, significant independent variables, and the units of analysis. The review displayed explanatory studies “ in nearly every level, with the possible exception of classification, while studies in circulation and use of the library were clearly dominant. A recapitulation showed that a variable at one level may be a unit of analysis at another, a property of explanatory research crucial to the development of theory, which has been either ignored or unrecognized in LIS literature ” (p. 368) [ 27 ]. The author concluded that “explanatory and predictive relationships do exist and that they can be useful in constructing a comprehensive unified theory of librarianship” (p. 368) [ 27 ].

Recent LIS literature provides several analyses of theory development and use in the field. In a longitudinal analysis of information needs and uses of literature, Julien and Duggan [ 28 ] investigated, among other things, to what extent LIS literature was grounded in theory. Articles “ based on a coherent and explicit framework of assumptions, definitions, and propositions that, taken together, have some explanatory power ” (p. 294) were classified as theoretical articles. Results showed that only 18.3% of the research studies identified in the sample of articles examined were theoretically grounded.

Pettigrew and McKechnie [ 29 ] analyzed 1160 journal articles published between 1993 and 1998 to determine the level of theory use in information science research. In the absence of a singular definition of theory that would cover all the different uses of the term in the sample of articles, they operationalized “theory” according to authors’ use of the term. They found that 34.1% of the articles incorporated theory, with the largest percentage of theories drawn from the social sciences. Information science itself was the second most important source of theories. The authors argued that this significant increase in theory use in comparison to earlier studies could be explained by the research‐oriented journals they selected for examination, the sample time, and the broad way in which they defined “theory.” With regard to this last point, that is, their approach of identifying theories only if the author(s) describe them as such in the article, Pettigrew and McKechnie [ 29 ] observed significant differences in how information science researchers perceive theory:

Although it is possible that conceptual differences regarding the nature of theory may be due to the different disciplinary backgrounds of researchers in IS, other themes emerged from our data that suggest a general confusion exists about theory even within subfields. Numerous examples came to light during our analysis in which an author would simultaneously refer to something as a theory and a method, or as a theory and a model, or as a theory and a reported finding. In other words, it seems as though authors, themselves, are sometimes unsure about what constitutes theory. Questions even arose regarding whether the author to whom a theory was credited would him or herself consider his or her work as theory (p. 68).

Kim and Jeong [ 16 ] examined the state and characteristics of theoretical research in LIS journals between 1984 and 2003. They focused on the “theory incident,” which is described as “an event in which the author contributes to the development or the use of theory in his/her paper.” Their study adopted Glazier and Grover’s [ 24 ] model of “circuits of theory.” Substantive level theory was operationalized to a tested hypothesis or an observed relationship, while both formal and grand level theories were identified when they were named as “theory,” “model,” or “law” by authors other than those who had developed them. Results demonstrated that the application of theory was present in 41.4% of the articles examined, signifying a significant increase in the proportion of theoretical articles as compared to previous studies. Moreover, it was evident that both theory development and theory use had increased by the year. Information seeking and use, and information retrieval, were identified as the subfields with the most significant contribution to the development of the theoretical framework.

In a more in‐depth analysis of theory use, Kumasi et al. [ 30 ] qualitatively analyzed the extent to which theory is meaningfully used in scholarly literature. For this purpose, they developed a “theory talk” coding scheme, which included six analytical categories describing how theory is discussed in a study. The intensity of theory talk in the articles was described across a continuum from minimal (e.g., theory is discussed in the literature review and not mentioned later) through moderate (e.g., multiple theories are introduced but without discussing their relevance to the study) to major (e.g., theory is employed throughout the study). Their findings seem to support the opinion that “ LIS discipline has been focused on the application of specific theoretical frameworks rather than the generation of new theories ” (p. 179) [ 30 ]. Another point the authors made was about the multiple terms used in the articles to describe theory. Words such as “framework,” “model,” or “theory” were used interchangeably by scholars.

It is evident from the above discussion that the treatment of theory in LIS research covers a spectrum of intensity, from marginal mentions to theory revising, expanding, or building. Recent analyses of the published scholarship indicate that the field has not been very successful in contributing to existing theory or producing new theory. In spite of this, one may still assert that LIS research employs theory, and, in fact, there are many theories that have been used or generated by LIS scholars. However, “ calls for additional and novel theory development work in LIS continue, particularly for theories that might help to address the research practice gap ” (p. 12) [ 31 ].

3. Research strategies in LIS

3.1. Surveys of research methods

LIS is a very broad discipline, which uses a wide range of constantly evolving research strategies and techniques [ 32 ]. Various classification schemes have been developed to analyze methods employed in LIS research (e.g., [ 13 , 15 , 17 , 33 – 35 , 38 ]). Back in 1996, in the “research record” column of the Journal of Education for Library and Information Science, Kim [ 36 ] synthesized previous categories and definitions and introduced a list of research strategies, including data collection and analysis methods. The listing included four general research strategies: (i) theoretical/philosophical inquiry (development of conceptual models or frameworks), (ii) bibliographic research (descriptive studies of books and their properties as well as bibliographies of various kinds), (iii) R&D (development of storage and retrieval systems, software, interfaces, etc.), and (iv) action research (solving problems and bringing about change in organizations). Strategies are then divided into quantitative‐ and qualitative‐driven. The first category includes descriptive studies, predictive/explanatory studies, bibliometric studies, content analysis, and operations research studies. Qualitative‐driven strategies include the following: case study, biographical method, historical method, grounded theory, ethnography, phenomenology, symbolic interactionism/semiotics, sociolinguistics/discourse analysis/ethnographic semantics/ethnography of communication, and hermeneutics/interpretive interactionism (pp. 378–380) [ 36 ].

Systematic studies of research methods in LIS started in the 1980s and several reviews of the literature have been conducted over the past years to analyze the topics, methodologies, and quality of research. One of the earliest studies was done by Peritz [ 37 ] who carried out a bibliometric analysis of the articles published in 39 core LIS journals between 1950 and 1975. She examined the methodologies used, the type of library or organization investigated, the type of activity investigated, and the institutional affiliation of the authors. The most important findings were a clear orientation toward library and information service activities, a widespread use of the survey methodology, a considerable increase of research articles after 1960, and a significant increase in theoretical studies after 1965.

Nour [ 38 ] followed up on Peritz’s [ 37 ] work and studied research articles published in 41 selected journals during the year 1980. She found that survey and theoretical/analytic methodologies were the most popular, followed by bibliometrics. Comparing these findings to those made by Peritz [ 37 ], Nour [ 38 ] found that the amount of research continued to increase, but the proportion of research articles to all articles had been decreasing since 1975.

Feehan et al. [ 13 ] described how LIS research published during 1984 was distributed over various topics and what methods had been used to study these topics. Their analysis revealed a predominance of survey and historical methods and a notable percentage of articles using more than one research method. Following a different approach, Enger et al. (1989) focused on the statistical methods used by LIS researchers in articles published during 1985 [ 39 ]. They found that only one out of three of the articles reported any use of statistics. Of those, 21% used descriptive statistics and 11% inferential statistics. In addition, the authors found that researchers from disciplines other than LIS made the highest use of statistics and LIS faculty showed the highest use of inferential statistics.

An influential work, against which later studies have been compared, is that of Jarvelin and Vakkari [ 15 ], who studied LIS articles published in 1985 in order to determine how research was distributed over various subjects, what approaches had been taken by the authors, and what research strategies had been used. The authors later replicated their study to include older research published between 1965 and 1985 [ 40 ]. The main finding of these studies was that the trends and characteristics of LIS research remained more or less the same over the aforementioned period of 20 years. The most common topics were information service activities and information storage and retrieval. Empirical research strategies were predominant, and of them, the most frequent was the survey. Kumpulainen [ 41 ], in an effort to provide a continuation of Jarvelin and Vakkari’s [ 15 ] study, analyzed 632 articles sampled from 30 core LIS journals with respect to various characteristics, including topics, aspect of activity, research method, data selection method, and data analysis techniques. She used the same classification scheme, and she selected the journals based on a slightly modified version of Jarvelin and Vakkari’s [ 15 ] list. Library services and information storage and retrieval emerged again as the most common subjects approached by the authors, and survey was the most frequently used method.

More recent studies of this nature include those conducted by Koufogiannakis et al. [ 42 ], Hildreth and Aytac [ 43 ], Hider and Pymm [ 32 ], and Chu [ 17 ]. Koufogiannakis et al. [ 42 ] examined research articles published in 2001 and found that the majority of them were questionnaire‐based descriptive studies. Comparative, bibliometric, content analysis, and program evaluation studies were also popular. Information storage and retrieval emerged as the predominant subject area, followed by library collections and management. Hildreth and Aytac [ 43 ] presented a review of the library research published between 2003 and 2005, with special focus on methodology issues and the quality of published articles by both practitioners and academic scholars. They found that most research was descriptive and that the most frequent method for data collection was the questionnaire, followed by content analysis and interviews. With regard to data analysis, more researchers used quantitative methods and considerably fewer used qualitative‐only methods, whereas 61 out of 206 studies included some kind of qualitative analysis, raising the total percentage of qualitative methods to nearly 50%. With regard to the quality of published research, the authors argued that “ the majority of the reports are detailed, comprehensive, and well‐organized ” (p. 254) [ 43 ]. Still, they noticed that the majority of reports did not mention the critical issues of research validity and reliability, nor did they indicate study limitations or future research recommendations. Hider and Pymm [ 32 ] described a content analysis of LIS literature “ which aimed to identify the most common strategies and techniques employed by LIS researchers carrying out high‐profile empirical research ” (p. 109). Their results suggested that while researchers employed a wide variety of strategies, they mostly used surveys and experiments. They also observed that although quantitative research accounted for more than 50% of the articles, there was an increase in the use of more sophisticated qualitative methods. Chu [ 17 ] analyzed the research articles published between 2001 and 2010 in three major journals and reported the following most frequent research methods: theoretical approach (e.g., conceptual analysis), content analysis, questionnaire, interview, experiment, and bibliometrics. Her study showed an increase in both the number and variety of research methods but a lack of growth in the use of qualitative research or in the adoption of multiple research methods.

In summary, the literature shows a continued interest in the analysis of published LIS research. Approaches include focusing on particular publication years, geographic areas, journal titles, aspects of LIS, and specific characteristics, such as subjects, authorship, and research methods. Despite the abundance of content analyses of LIS literature, the findings are not easily comparable due to differences in the number and titles of journals examined, in the types of the papers selected for analysis, in the periods covered, and in classification schemes developed by the authors to categorize article topics and research strategies. Despite the differences, some findings are consistent among all studies:

Information seeking, information retrieval, and library and information service activities are among the most common subjects studied,

Descriptive research methodologies based on surveys and questionnaires predominate,

Over the years, there has been a considerable increase in the array of research approaches used to explore library issues, and

Data analysis is usually limited to descriptive statistics, including frequencies, means, and standard deviations.

3.2. Data collection and analysis

Articles published between 2011 and 2016 were obtained from the following journals: Library and Information Science Research, College & Research Libraries, Journal of Documentation, Information Processing & Management, and Journal of Academic Librarianship ( Table 1 ). These five titles were selected as data sources because they have the highest 5‐year impact factors among the journals classified in Ulrich’s Serials Directory under the “Library and Information Sciences” subject heading. From the journals selected, only full‐length articles were collected. Editorials, book reviews, letters, interviews, commentaries, and news items were excluded from the analysis. This selection process yielded 1643 articles. A stratified random sample of 440 articles was chosen for in‐depth analysis ( Table 2 ). For the purpose of this study, five strata, corresponding to the five journals, were used. The sample size was determined using a 4% margin of error and a 95% confidence level.

| | Libr & Inf Sci Res | Coll & Res Libr | J Doc | Inf Proc & Manag | J Acad Libr |
|---|---|---|---|---|---|
| Scope | The research process in library and information science as well as research findings and, where applicable, their practical applications and significance | All fields of interest and concern to academic and research libraries | Theories, concepts, models, frameworks, and philosophies related to documents and recorded knowledge | Theory, methods, or application in the field of information science | Problems and issues germane to college and university libraries |
| Publisher | Elsevier | ACRL | Emerald | Elsevier | Elsevier |
| Start year | 1979 | 1939 | 1945 | 1963 | 1975 |
| Frequency | Quarterly | Bi-monthly | Bi-monthly | Bi-monthly | Bi-monthly |
| 5-year impact factor | 1.981 | 1.617 | 1.480 | 1.468 | 1.181 |

Table 1.

Profile of the journals.

| Titles | Total number of articles | Articles selected |
|---|---|---|
| Libr & Inf Sci Res | 214 | 57 |
| Coll & Res Libr | 233 | 62 |
| J of Docum | 304 | 81 |
| Inf Proc & Manag | 432 | 116 |
| J Acad Libr | 460 | 123 |

Table 2.

Journal titles.
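As a side note on the sampling step, the short sketch below shows how a sample of roughly 440 articles follows from a 4% margin of error and 95% confidence level once a finite population correction is applied, and how proportional allocation across the five journal strata reproduces the figures in Table 2. The specific formulas (normal approximation with worst-case p = 0.5 and proportional allocation) are standard survey-sampling choices assumed here for illustration; the authors do not spell out their exact calculation.

```python
import math

# Full-length articles per journal, 2011-2016 (Table 2).
population = {
    "Libr & Inf Sci Res": 214,
    "Coll & Res Libr": 233,
    "J of Docum": 304,
    "Inf Proc & Manag": 432,
    "J Acad Libr": 460,
}
N = sum(population.values())  # 1643 articles in total

# Sample size for estimating a proportion: 95% confidence (z = 1.96),
# 4% margin of error, worst-case p = 0.5, finite population correction.
z, e, p = 1.96, 0.04, 0.5
n0 = z**2 * p * (1 - p) / e**2          # about 600 for an unlimited population
n = math.ceil(n0 / (1 + (n0 - 1) / N))  # about 440 once corrected for N = 1643
print("required sample size:", n)

# Proportional allocation across the five journal strata.
for journal, size in population.items():
    print(journal, round(n * size / N))
```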

Each article was classified as either research or theoretical. Articles that employed a specific research methodology and presented specific findings of original studies performed by the author(s) were considered research articles. The kind of study may vary (e.g., it could be an experiment, a survey, etc.), but in all cases, raw data had been collected and analyzed, and conclusions were drawn from the results of that analysis. Articles reporting research in system design or evaluation in the information systems field were also regarded as research articles . On the other hand, works that reviewed theories, theoretical concepts, or principles, discussed topics of interest to researchers and professionals, or described research methodologies were regarded as theoretical articles [ 44 ] and were classified in the non‐empirical‐research category. This category also included literature reviews and articles describing a project, a situation, a process, etc.

Each article was classified into a topical category according to its main subject. The articles classified as research were then further explored and analyzed to identify (i) research approach, (ii) research methodology, and (iii) method of data analysis. For each variable, a coding scheme was developed, and the articles were coded accordingly. The final list of the analysis codes was extracted inductively from the data itself, using as reference the taxonomies utilized in previous studies [ 15 , 32 , 43 , 45 ]. Research approaches “ are plans and procedures for research ” (p. 3) [ 46 ]. Research approaches can generally be grouped as qualitative, quantitative, and mixed methods studies. Quantitative studies aim at the systematic empirical investigation of quantitative properties or phenomena and their relationships. Qualitative research can be broadly defined as “ any kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification ” (p. 17) [ 47 ]. It is a way to gain insights through discovering meanings and explaining phenomena based on the attributes of the data. In mixed model research, quantitative and qualitative approaches are combined within or across the stages of the research process. It was beyond the scope of this study to identify in which stages of a study—data collection, data analysis, and data interpretation—the mixing was applied or to reveal the types of mixing. Therefore, studies using both quantitative and qualitative methods, irrespective of whether they describe if and how the methods were integrated, were coded as mixed methods studies.

Research methodologies , or strategies of inquiry, are types of research models “ that provide specific direction for procedures in a research design ” (p. 11) [ 46 ] and inform the decisions concerning data collection and analysis. A coding schema of research methodologies was developed by the authors based on the analysis of all research articles included in the sample. The methodology classification included 12 categories ( Table 3 ). Each article was classified into one category for the variable research methodology . If more than one research strategy was mentioned (e.g., experiment and survey), the article was classified according to the main strategy.

| Research methodology | Description |
|---|---|
| Action research | Systematic procedure for collecting information about and subsequently improving a particular situation in a setting where there is a problem needing a solution or change |
| Bibliometrics | “A series of techniques that seeks to quantify the process of written communication” (Ikpaahindi, 1985). The most common type of bibliometric research is citation analysis |
| Case study | In-depth exploration of an activity, an event, a program, etc., usually using a variety of data collection procedures |
| Content analysis | Analysis (qualitative or quantitative) of secondary text or visual material |
| Ethnography | Study of the behavior, actions, etc. of a group in a natural setting |
| Experiment | Pre-experimental designs, quasi-experiments, and true experiments aimed at investigating relationships between variables and establishing possible cause-and-effect relationships |
| Grounded theory | The development of a theory “of a process, action, or interaction grounded in the views of participants” (Creswell, 2014, p. 87) |
| Mathematical method | Studies employing mathematical analysis (e.g., integrals) |
| Phenomenological | The study of the lived experiences of individuals about a phenomenon (Creswell, 2009) |
| Secondary data analysis | Use of existing data (e.g., circulation statistics, institutional repository data, etc.) to answer the research question(s) |
| Survey | Descriptive research method used to “describe the characteristics of, and make predictions about, a population” (“LARKS: Librarian and Researcher Knowledge Space,” 2017) |
| System and software analysis/design | Development and experimental evaluation of tools, techniques, systems, etc. related to information retrieval and related areas |

Table 3.

Coding schema for research methodologies.
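To make the coding process concrete, the sketch below shows one way a coded record could be represented, using the controlled vocabularies described in this section and in Table 3. The data structure, field names, and example values are hypothetical illustrations, not the instrument the authors actually used.

```python
from dataclasses import dataclass
from typing import Optional

# Controlled vocabularies drawn from Section 3.2 and Table 3.
APPROACHES = {"quantitative", "qualitative", "mixed methods"}
METHODOLOGIES = {
    "action research", "bibliometrics", "case study", "content analysis",
    "ethnography", "experiment", "grounded theory", "mathematical method",
    "phenomenological", "secondary data analysis", "survey",
    "system and software analysis/design",
}
ANALYSES = {
    "descriptive statistics", "inferential statistics",
    "qualitative data analysis", "experimental evaluation", "other methods",
}

@dataclass
class CodedArticle:
    """One coded record per article; the field names are illustrative only."""
    journal: str
    topic: str
    empirical: bool
    approach: Optional[str] = None     # coded only for research articles
    methodology: Optional[str] = None  # the main strategy if several are reported
    analysis: Optional[str] = None

    def __post_init__(self) -> None:
        # Research articles must carry valid codes for all three variables.
        if self.empirical:
            assert self.approach in APPROACHES
            assert self.methodology in METHODOLOGIES
            assert self.analysis in ANALYSES

# A hypothetical record, just to show how the codes combine.
example = CodedArticle(
    journal="J Acad Libr",
    topic="information literacy",
    empirical=True,
    approach="mixed methods",
    methodology="survey",
    analysis="qualitative data analysis",
)
```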

Methods of data analysis refer to the techniques used by the researchers to explore the original data and answer their research problems or questions. Data analysis for quantitative research involves statistical analysis and interpretation of figures and numbers. In qualitative studies, on the other hand, data analysis involves identifying common patterns within the data and interpreting their meanings. The array of data analysis methods included the following categories:

Descriptive statistics,

Inferential statistics,

Qualitative data analysis,

Experimental evaluation, and

  Other methods.

Descriptive statistics are used to describe the basic features of the data in a study. Inferential statistics investigate questions, models, and hypotheses. Mathematical analysis refers to mathematical functions, etc., used mainly in bibliometric studies to answer research questions associated with citation data. Qualitative data analysis is the range of processes and procedures used for the exploration of qualitative data, from coding and descriptive analysis to the identification of patterns and themes and the testing of emergent findings and hypotheses. It was used in this study as an overarching term encompassing various types of analysis, such as thematic analysis, discourse analysis, or grounded theory analysis. The class experimental evaluation was used for system and software analysis and design studies, which assess a newly developed algorithm, tool, method, etc. by performing experiments on selected datasets. In these cases, “experiments” differ from the experimental designs in the social sciences. Methods that did not fall into one of these categories (e.g., mathematical analysis, visualization, or benchmarking) were classified as other methods . If both descriptive and inferential statistics were used in an article, only the inferential statistics were recorded. In mixed methods studies, each method was recorded in the order in which it was reported in the article.

Ten percent of the articles were randomly selected and used to establish inter‐rater reliability and provide basic validation of the coding schema. Cohen’s kappa was calculated for each coded variable. The average Cohen’s kappa value was κ = 0.60, p < 0.001 (the highest was 0.63 and the lowest was 0.59), indicating substantial agreement [ 48 ]. The coding disparities across raters were discussed, and the final codes were determined via consensus.
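For readers unfamiliar with the statistic, Cohen's kappa can be computed directly from two raters' code assignments for the same articles; a minimal sketch using scikit-learn follows. The labels are invented for illustration and are not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Research-approach codes assigned independently by two raters to the
# same five articles (invented labels, for illustration only).
rater_a = ["quantitative", "qualitative", "quantitative", "mixed methods", "quantitative"]
rater_b = ["quantitative", "quantitative", "quantitative", "mixed methods", "qualitative"]

# Kappa corrects the raw agreement rate for agreement expected by chance.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")
```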

3.3. Results

3.3.1. Topic

Table 4 presents the distribution of articles over the various topics, for each of which a detailed description is provided. The five most popular topics of the papers in the total sample of 440 articles were “information retrieval,” “information behavior,” “information literacy,” “library services,” and “organization and management.” These areas cover over 60% of all topics studied in the papers. The least‐studied topics (covered in less than eight papers) fall into the categories of “information and knowledge management,” “library information systems,” “LIS theory,” and “infometrics.”

| Topic | Description | % |
|---|---|---|
| Information retrieval | Theory, algorithms, and experiments in information retrieval, issues related to data mining, and knowledge discovery | 21.6 |
| Information behavior | Interaction of individuals with information sources. Topics such as information access, information needs, information seeking, and information use are included here | 15.0 |
| Information literacy | Issues related to information literacy and bibliographic instruction (methods, assessment, competences and skills, attitudes, etc.) | 9.5 |
| Library services | Issues related to different library services, such as circulation, reference services, ILL, digital services, etc., including innovative programs and services | 9.3 |
| Organization and management | Elements of library management and administration, such as staffing, budget, financing, etc. and issues related to the assessment of library services, standards, etc. | 7.3 |
| Scholarly communication | Issues related to different aspects of scholarly communication, such as publishing, open access, analysis of literature, methods, and techniques for the evaluation and impact of scientific research (e.g., journal rankings, bibliometric indices, etc.) | 5.7 |
| Digital libraries and metadata | Issues related to digital collections, digital libraries, institutional repositories, design and use of metadata, as well as data management and curation activities | 4.3 |
| Knowledge organization | Processes (e.g., cataloguing, subject analysis, indexing and classification) and knowledge and information organization systems (e.g., classification systems, lists of subject headings, thesauri, ontologies) | 4.3 |
| Library collections | Development and evaluation of all types of library collections, including special collections. Issues related to e-resources (e-books, e-journals, etc.), including their use, evaluation, management, etc. | 3.9 |
| Library personnel | Issues related to library personnel (qualifications, professional development, professional experiences, etc.) | 3.6 |
| Research in LIS | Issues related to research methods employed in LIS research as well as librarians' engagement in research activities | 3.0 |
| Social media | Issues related to social media (Facebook, Twitter, blogs, etc.) and their use by both libraries and library users | 2.5 |
| Spaces and facilities | Library buildings, library as place | 2.0 |
| Information/knowledge management | Issues related to the process of finding, selecting, organizing, disseminating, and transferring information and knowledge | 1.6 |
| Library information systems | Issues related to different aspects of information systems, such as OPAC, ILS, etc. Design, content, and usability of library websites | 1.6 |
| LIS theory | Issues related to theoretical aspects of LIS and theoretical studies on the transmission, processing, utilization, and extraction of information | 1.6 |
| Infometrics | The use of mathematical and statistical methods in research related to information. Bibliometrics and webometrics are included here | 1.1 |
| Other | Topics that could not be classified anywhere else and were represented by a minimal number of articles (e.g., information history, faculty-librarian cooperation) | 2.0 |
| Total | | 100 |

Table 4.

Article topics.

Figure 1 shows how the top five topics are distributed across journals. As expected, the topic “information retrieval” has higher publication frequencies in Information Processing & Management, a journal focusing on system design and issues related to the tools and techniques used in storage and retrieval of information. “Information literacy,” “information behavior,” “library services,” and “organization and management” appear to be distributed almost proportionately in College & Research Libraries. “Information literacy” seems to be a more preferred topic in the Journal of Academic Librarianship, while “information behavior” is more popular in the Journal of Documentation and Library & Information Science Research.

Figure 1. Distribution of topics across journals.

3.3.2. Research approach and methodology

Of all the articles examined, 343 (78% of the sample) reported empirical research. The remaining 22% (N = 97) were classified as non‐empirical research papers. Research articles were coded as quantitative, qualitative, or mixed methods studies. An overwhelming majority (70%) of the empirical research articles employed a quantitative research approach. Qualitative and mixed methods research was reported in 21.6 and 8.5% of the articles, respectively ( Figure 2 ).

Figure 2. Research approach.

Table 5 presents the distribution of research approaches over the five most popular topics. The quantitative approach clearly prevails in all topics, especially in information retrieval research. However, qualitative designs seem to be gaining acceptance in all topics (except information retrieval), while in information behavior research, quantitative and qualitative approaches are almost evenly distributed. Mixed methods were quite frequent in information literacy and information behavior studies and less popular in the other topics.

| Topics | Mixed methods | Qualitative | Quantitative |
|---|---|---|---|
| Information behavior | 14.0% | 40.4% | 45.6% |
| Information literacy | 17.6% | 26.5% | 55.9% |
| Information retrieval | 0.0% | 0.0% | 100.0% |
| Library services | 3.6% | 39.3% | 57.1% |
| Organization and management | 4.8% | 23.8% | 71.4% |

Table 5.

Topics across research approach.

The most frequently used research strategy was the survey, accounting for almost 37% of all research articles, followed by system and software analysis and design, a strategy used in this study specifically for research in information systems [ 15 ]. This result is influenced by the fact that Information Processing & Management addresses issues at the intersection between LIS and computer science, and the majority of its articles present the development of new tools, algorithms, methods and systems, and their experimental evaluation. The third‐ and fourth‐ranking strategies were content analysis and bibliometrics. Case study, experiment, and secondary data analysis were represented by 15 articles each, while the rest of the techniques were underrepresented with considerably fewer articles ( Table 6 ).

| Research methodology | % |
|---|---|
| Survey | 37.0 |
| System and software analysis/design | 26.8 |
| Content analysis | 9.6 |
| Bibliometrics | 6.4 |
| Case study | 4.4 |
| Experiment | 4.4 |
| Secondary data analysis | 4.4 |
| Grounded theory | 2.6 |
| Phenomenological | 2.0 |
| Ethnography | 1.5 |
| Action research | 0.6 |
| Mathematical method | 0.3 |
| Total | 100.0 |

Table 6.

Research methodologies.

3.3.3. Methods of data analysis

Table 7 displays the frequencies for each type of data analysis.

| Method | % |
|---|---|
| Descriptive statistics | 28.4 |
| Inferential statistics | 18.5 |
| Qualitative data analysis | 27.1 |
| Experimental evaluation | 24.7 |
| Other methods | 1.3 |
| Total | 100 |

Table 7.

Method of data analysis.

Almost half of the empirical research papers examined reported some use of statistics. Descriptive statistics, such as frequencies, means, or standard deviations, were used more frequently than inferential statistics, such as ANOVA, regression, or factor analysis. Nearly one‐third of the articles employed some type of qualitative data analysis, either as the only method or, in mixed methods studies, in combination with quantitative techniques.

3.4. Discussions and conclusions

The patterns of LIS research activity as reflected in the articles published between 2011 and 2016 in five well‐established, peer‐reviewed journals were described and analyzed. LIS literature addresses many and diverse topics. Information retrieval, information behavior, and library services continue to attract the interest of researchers, as they are core areas in library science. Information retrieval was rated as one of the most popular areas of interest in research articles published between 1965 and 1985 [ 40 ]. According to Dimitroff [ 49 ], information retrieval was the second most popular topic in the articles published in the Bulletin of the Medical Library Association, while Cano [ 50 ] argued that LIS research produced in Spain from 1977 to 1994 was mostly centered on information retrieval and library and information services. In addition, Koufogiannakis et al. [ 42 ] found that information access and retrieval was the domain with the most research, and in Hildreth and Aytac’s [ 43 ] study, most articles dealt with issues related to users (needs, behavior, information seeking, etc.), services, and collections. The present study provides evidence that the amount of research in information literacy is increasing, presumably due to the growing importance of information literacy instruction in libraries. In recent years, librarians have taken on a growing educational role, engaging more and more actively in teaching and learning processes, a trend that is reflected in the research output.

With regard to research methodologies, the present study seems to confirm the well‐documented predominance of the survey in LIS research. According to Dimitroff [ 49 ], the percentage related to use of survey research methods reported in various studies varied between 20.3 and 41.5%. Powell [ 51 ], in a review of the research methods appearing in LIS literature, pointed out that survey had consistently been the most common type of study in both dissertations and journal articles. Survey was reported as the most widely used research design by Jarvelin and Vakkari [ 40 ], Crawford [ 52 ], Hildreth and Aytac [ 43 ], and Hider and Pymm [ 32 ]. The majority of articles examined by Koufogiannakis et al. [ 42 ] were descriptive studies using questionnaires/surveys. In addition, survey methods represented the largest proportion of methods used in the information behavior articles analyzed by Julien et al. [ 53 ]. There is no doubt that survey has been used more than any other method in LIS research. As Jarvelin and Vakkari [ 15 ] put it, “it appears that the field is so survey‐oriented that almost all problems are seen through a survey viewpoint” (p. 416). Much of the survey’s popularity can be ascribed to its being a well‐known, well‐understood, easily conducted, and inexpensive method whose results are easy to analyze [ 41 , 42 ]. However, our findings suggest that while the survey ranks high, a variety of other methods have also been used in the research articles. Content analysis emerged as the third most frequent strategy, a finding similar to those of previous studies [ 17 , 32 ]. Although content analysis was not regarded by LIS researchers as a favored research method until recently, its popularity seems to be growing [ 17 ].

Quantitative approaches, which dominate, tend to rely on frequency counts, percentages, and descriptive statistics used to describe the basic features of the data in a study. Fewer studies used advanced statistical analysis techniques, such as t‐tests, correlation, and regression, while there were some examples of more sophisticated methods, such as factor analysis, ANOVA, MANOVA, and structural equation modeling. Researchers engaging in quantitative research designs should consider the use of inferential statistics, which enable generalization from the sample being studied to the population of interest and, if used appropriately, are very useful for hypothesis testing. In addition, multivariate statistics are suitable for examining the relationships among variables, revealing patterns, and understanding complex phenomena.
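As one concrete illustration of the kind of inferential technique being recommended, a chi-square test of independence could be used to ask whether research approach is associated with topic. The contingency counts below are hypothetical, included only so the sketch runs end to end; they are not figures from this study.

```python
from scipy.stats import chi2_contingency

# Hypothetical article counts by topic (rows) and research approach
# (columns: mixed methods, qualitative, quantitative).
counts = [
    [8, 23, 26],   # information behavior
    [6, 9, 19],    # information literacy
    [1, 11, 16],   # library services
]

# H0: research approach is independent of topic.
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```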

The findings also suggest that qualitative approaches are gaining increasing importance and have a role to play in LIS studies. These results are comparable to the findings of Hider and Pymm [ 32 ], who observed significant increases in qualitative research strategies in contemporary LIS literature. Descriptions of qualitative analysis varied widely, reflecting the diverse perspectives, analysis methods, and levels of depth of analysis. Commonly used terms in the articles included coding, content analysis, thematic analysis, thematic analytical approach, theme, or pattern identification. One could argue that the efforts made to encourage and promote qualitative methods in LIS research [ 54 , 55 ] have made some impact. However, qualitative research methods do not seem to be adequately utilized by library researchers and practitioners, despite their potential to offer far more illuminating ways to study library‐related issues [ 56 ]. LIS research has much to gain from the interpretive paradigm underpinning qualitative methods. This paradigm assumes that social reality is

the product of processes by which social actors together negotiate the meanings for actions and situations; it is a complex of socially constructed meanings. Human experience involves a process of interpretation rather than sensory, material apprehension of the external physical world and human behavior depends on how individuals interpret the conditions in which they find themselves. Social reality is not some ‘thing’ that may be interpreted in different ways, it is those interpretations (p. 96) [ 57 ].

As stated in the introduction of this chapter, library and information science focuses on the interaction between individuals and information. In every area of LIS research, the connection of factors that lead to and influence this interaction is increasingly complex. Qualitative research searches for “ all aspects of that complexity on the grounds that they are essential to understanding the behavior of which they are a part ” (p. 241) [ 59 ]. Qualitative research designs can offer a more in‐depth analysis of library users, their needs, attitudes, and behaviors.

The use of mixed methods designs was found to be rather rare. While Hildreth and Aytac [ 43 ] found higher percentages of studies using combined methods in data analysis, our results are analogous to those reported by Fidel [ 60 ]. In fact, as in her study, only a few of the articles analyzed referred to mixed methods research by name, a finding indicating that “ the concept has not yet gained recognition in LIS research ” (p. 268). Mixed methods research has become an established research approach in the social sciences, as it minimizes the weaknesses of quantitative and qualitative research alone and allows researchers to investigate phenomena more completely [ 58 ].

In conclusion, there is evidence that LIS researchers employ a large number and wide variety of research methodologies. Each research approach, strategy, and method has its advantages and limitations. If the aim of the study is to confirm hypotheses about phenomena or measure and analyze the causal relationships between variables, then quantitative methods might be used. If the research seeks to explore, understand, and explain phenomena then qualitative methods might be used. Researchers can consider the full range of possibilities and make their selection based on the philosophical assumptions they bring to the study, the research problem being addressed, their personal experiences, and the intended audience for the study [ 46 ].

Taking into consideration the increasing use of qualitative methods in LIS studies, an in‐depth analysis of papers using qualitative methods would be interesting. A future study presenting and analyzing the different research strategies and types of analysis used in qualitative research could help LIS practitioners understand the benefits of qualitative analysis.

Mixed methods used in LIS research papers could be analyzed in future studies in order to identify in which stages of a study (data collection, data analysis, or data interpretation) the mixing was applied and to reveal the types of mixing.

As far as quantitative research methods, which predominate in LIS research, are concerned, it would be interesting to identify systematic relationships among more than two variables, such as authors’ affiliation, topic, and research strategies, and to create homogeneous groups using multivariate data analysis techniques.

  • 1. Buckland MK, Liu ZM. History of information science. Annual Review of Information Science and Technology. 1995; 30 :385-416
  • 2. Rayward WB. The history and historiography of information science: Some reflections. Information Processing & Management. 1996; 32 (1):3-17
  • 3. Wildemuth BM. Applications of Social Research Methods to Questions in Information and Library Science. Westport, CT: Libraries Unlimited; 2009
  • 4. Hjørland B. Theory and metatheory of information science: A new interpretation. Journal of Documentation. 1998; 54 (5):606-621. DOI: http://doi.org/10.1108/EUM0000000007183
  • 5. Åström F. Heterogeneity and homogeneity in library and information science research. Information Research [Internet]. 2007 [cited 23 April 2017]; 12 (4): poster colisp01 [3 p.]. Available from: http://www.informationr.net/ir/12-4/colis/colisp01.html
  • 6. Dillon A. Keynote address: Library and information science as a research domain: Problems and prospects. Information Research [Internet]. 2007 [cited 23 April 2017]; 12 (4): paper colis03 [6 p.]. Available from: http://www.informationr.net/ir/12-4/colis/colis03.html
  • 7. Eldredge JD. Evidence‐based librarianship: An overview. Bulletin of the Medical Library Association. 2000; 88 (4):289-302
  • 8. Bradley J, Marshall JG. Using scientific evidence to improve information practice. Health Libraries Review. 1995; 12 (3):147-157
  • 9. Bibliometrics. In: International Encyclopedia of Information and Library Science. 2nd ed. London, UK: Routledge; 2003. p. 38
  • 10. Åström F. Library and Information Science in context: The development of scientific fields, and their relations to professional contexts. In: Rayward WB, editor. Aware and Responsible: Papers of the Nordic‐International Colloquium on Social and Cultural Awareness and Responsibility in Library, Information and Documentation Studies (SCARLID). Oxford, UK: Scarecrow Press; 2004. pp. 1-27
  • 11. Grover R, Glazier J. A conceptual framework for theory building in library and information science. Library and Information Science Research. 1986; 8 (3):227-242
  • 12. Boyce BR, Kraft DH. Principles and theories in information science. In: Williams ME, editor. Annual Review of Information Science and Technology. Medford, NJ: Knowledge Industry Publications; 1985. pp. 153-178
  • 13. Feehan PE, Gragg WL, Havener WM, Kester DD. Library and information science research: An analysis of the 1984 journal literature. Library and Information Science Research. 1987; 9 (3):173-185
  • 14. Spink A. Information science: A third feedback framework. Journal of the American Society for Information Science. 1997; 48 (8):728-740
  • 15. Jarvelin K, Vakkari P. Content analysis of research articles in Library and Information Science. Library and Information Science Research. 1990; 12 (4):395-421
  • 16. Kim SJ, Jeong DY. An analysis of the development and use of theory in library and information science research articles. Library and Information Science Research. 2006; 28 (4):548-562. DOI: http://doi.org/10.1016/j.lisr.2006.03.018
  • 17. Chu H. Research methods in library and information science: A content analysis. Library & Information Science Research. 2015; 37 (1):36-41. DOI: http://doi.org/10.1016/j.lisr.2014.09.003
  • 18. Van Maanen J. Different strokes: Qualitative research in the administrative science quarterly from 1956 to 1996. In: Van Maanen J, editor. Qualitative Studies of Organizations. Thousand Oaks, CA: SAGE; 1998. pp. ix‐xxxii
  • 19. Brookes BC. The foundations of information science Part I. Philosophical aspects. Journal of Information Science. 1980; 2 (3/4):125-133
  • 20. Hauser L. A conceptual analysis of information science. Library and Information Science Research. 1988; 10 (1):3-35
  • 21. McGrath WE. Current theory in Library and Information Science. Introduction. Library Trends. 2002; 50 (3):309-316
  • 22. Theory and why it is important - Social and behavioral theories - e-Source Book - OBSSR e-Source [Internet]. Esourceresearch.org. 2017 [cited 23 April 2017]. Available from: http://www.esourceresearch.org/eSourceBook/SocialandBehavioralTheories/3TheoryandWhyItisImportant/tabid/727/Default.aspx
  • 23. Babbie E. The practice of social research. 7th ed. Belmont, CA: Wadsworth; 1995
  • 24. Glazier JD, Grover R. A multidisciplinary framework for theory building. Library Trends. 2002; 50 (3):317-329
  • 25. Glaser B, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. New Brunswick: Aldine Transaction; 1999
  • 26. Smiraglia RP. The progress of theory in knowledge organization. Library Trends. 2002; 50 :330-349
  • 27. McGrath WE. Explanation and prediction: Building a unified theory of librarianship, concept and review. Library Trends. 2002; 50 (3):350-370
  • 28. Julien H, Duggan LJ. A longitudinal analysis of the information needs and uses literature. Library & Information Science Research. 2000; 22 (3):291-309. DOI: http://doi.org/10.1016/S0740‐8188(99)00057‐2
  • 29. Pettigrew KE, McKechnie LEF. The use of theory in information science research. Journal of the American Society for Information Science and Technology. 2001; 52 (1):62-73. DOI: http://doi.org/10.1002/1532‐2890(2000)52:1<62::AID‐ASI1061>3.0.CO;2‐J
  • 30. Kumasi KD, Charbonneau DH, Walster D. Theory talk in the library science scholarly literature: An exploratory analysis. Library & Information Science Research. 2013; 35 (3):175-180. DOI: http://doi.org/10.1016/j.lisr.2013.02.004
  • 31. Rawson C, Hughes‐Hassell S. Research by Design: The promise of design‐based research for school library research. School Libraries Worldwide. 2015; 21 (2):11-25
  • 32. Hider P, Pymm B. Empirical research methods reported in high‐profile LIS journal literature. Library & Information Science Research. 2008; 30 (2):108-114. DOI: http://doi.org/10.1016/j.lisr.2007.11.007
  • 33. Bernhard P. In search of research methods used in information science. Canadian Journal of Information and Library Science. 1993; 18 (3):1-35
  • 34. Blake VLP. Since Shaughnessy. Collection Management. 1994; 19 (1‐2):1-42. DOI: http://doi.org/10.1300/J105v19n01_01
  • 35. Schlachter GA. Abstracts of library science dissertations. Library Science Annual. 1989; 1 :1988-1996
  • 36. Kim MT. Research record. Journal of Education for Library and Information Science. 1996; 37 (4):376-383
  • 37. Peritz BC. The methods of library science research: Some results from a bibliometric survey. Library Research. 1980; 2 (3):251-268
  • 38. Nour MM. A quantitative analysis of the research articles published in core library journals of 1980. Library and Information Science Research. 1985; 7 (3):261-273
  • 39. Enger KB, Quirk G, Stewart JA. Statistical methods used by authors of library and information science journal articles. Library and Information Science Research. 1989; 11 (1):37-46
  • 40. Jarvelin K, Vakkari P. The evolution of library and information science 1965-1985: A content analysis of journal articles. Information Processing and Management. 1993; 29 (1):129-144
  • 41. Kumpulainen S. Library and information science research in 1975: Content analysis of the journal articles. Libri. 1991; 41 (1):59-76
  • 42. Koufogiannakis D, Slater L, Crumley E. A content analysis of librarianship research. Journal of Information Science. 2004; 30 (3):227-239. DOI: http://doi.org/10.1177/0165551504044668
  • 43. Hildreth CR, Aytac S. Recent library practitioner research: A methodological analysis and critique on JSTOR. Journal of Education for Library and Information Science. 2007; 48 (3):236-258
  • 44. Gonzales‐Teruel A, Abad‐Garcia MF. Information needs and uses: An analysis of the literature published in Spain, 1990‐2004. Library and Information Science Research. 2007; 29 (1):30-46
  • 45. Luo L, Mckinney M. JAL in the past decade: A comprehensive analysis of academic library research. The Journal of Academic Librarianship. 2015; 41 :123-129. DOI: http://doi.org/10.1016/j.acalib.2015.01.003
  • 46. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 3rd ed. Thousand Oaks, CA: SAGE; 2009
  • 47. Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: SAGE Publications; 1990
  • 48. Neuendorf KA. The Content Analysis Guidebook. 2nd ed. Thousand Oaks, CA: SAGE Publications; 2016
  • 49. Dimitroff A. Research for special libraries: A quantitative analysis of the literature. Special Libraries. 1995; 86 (4):256-264
  • 50. Cano V. Bibliometric overview of library and information science research in Spain. Journal of the American Society for Information Science. 1999; 50 (8):675-680. DOI: http://doi.org/10.1002/(SICI)1097‐4571(1999)50:8<675::AID‐ASI5>3.0.CO;2‐B
  • 51. Powell RR. Recent trends in research: A methodological essay. Library & Information Science Research. 1999; 21 (1):91-119. DOI: http://doi.org/10.1016/S0740‐8188(99)80007‐3
  • 52. Crawford GA. The research literature of academic librarianship: A comparison of college & Research Libraries and Journal of Academic Librarianship. College & Research Libraries. 1999; 60 (3):224-230. DOI: http://doi.org/10.5860/crl.60.3.224
  • 53. Julien H, Pecoskie JJL, Reed K. Trends in information behavior research, 1999-2008: A content analysis. Library & Information Science Research. 2011; 33 (1):19-24. DOI: http://doi.org/10.1016/j.lisr.2010.07.014
  • 54. Fidel R. Qualitative methods in information retrieval research. Library and Information Science Research. 1993; 15 (3):219-247
  • 55. Hernon P, Schwartz C. Reflections (editorial). Library and Information Science Research. 2003; 25 (1):1-2. DOI: http://doi.org/10.1016/S0740‐8188(02)00162‐7
  • 56. Priestner A. Going native: Embracing ethnographic research methods in libraries. Revy. 2015; 38 (4):16-17
  • 57. Blaikie N. Approaches to social enquiry. Cambridge: Polity; 1993
  • 58. Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004; 33 (7):14-26
  • 59. Westbrook L. Qualitative research methods: A review of major stages, data analysis techniques, and quality controls. Library & Information Science Research. 1994; 16 (3):241-254
  • 60. Fidel R. Are we there yet?: Mixed methods research in library and information science. Library and Information Science Research. 2008; 30 (4):265-272. DOI: http://doi.org/10.1016/j.lisr.2008.04.001

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Organizing Your Social Sciences Research Paper: Writing a Case Study


The term case study refers to both a method of analysis and a specific research design for examining a problem, both of which are used in most circumstances to generalize across populations. This tab focuses on the latter--how to design and organize a research paper in the social sciences that analyzes a specific case.

A case study research paper examines a person, place, event, phenomenon, or other type of subject of analysis in order to extrapolate  key themes and results that help predict future trends, illuminate previously hidden issues that can be applied to practice, and/or provide a means for understanding an important research problem with greater clarity. A case study paper usually examines a single subject of analysis, but case study papers can also be designed as a comparative investigation that shows relationships between two or among more than two subjects. The methods used to study a case can rest within a quantitative, qualitative, or mixed-method investigative paradigm.

Case Studies. Writing@CSU. Colorado State University; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; “What is a Case Study?” In Swanborn, Peter G. Case Study Research: What, Why and How? London: SAGE, 2010.

How to Approach Writing a Case Study Research Paper

General information about how to choose a topic to investigate can be found under the " Choosing a Research Problem " tab in this writing guide. Review this page because it may help you identify a subject of analysis that can be investigated using a single case study design.

However, identifying a case to investigate involves more than choosing the research problem . A case study encompasses a problem contextualized around the application of in-depth analysis, interpretation, and discussion, often resulting in specific recommendations for action or for improving existing conditions. As Seawright and Gerring note, practical considerations such as time and access to information can influence case selection, but these issues should not be the sole factors used in describing the methodological justification for identifying a particular case to study. Given this, selecting a case includes considering the following:

  • Does the case represent an unusual or atypical example of a research problem that requires more in-depth analysis? Cases often represent a topic that rests on the fringes of prior investigations because the case may provide new ways of understanding the research problem. For example, if the research problem is to identify strategies to improve policies that support girls' access to secondary education in predominantly Muslim nations, you could consider using Azerbaijan as a case study rather than selecting a more obvious nation in the Middle East. Doing so may reveal important new insights into recommending how governments in other predominantly Muslim nations can formulate policies that support improved access to education for girls.
  • Does the case provide important insight or illuminate a previously hidden problem? In-depth analysis of a case can be based on the hypothesis that the case study will reveal trends or issues that have not been exposed in prior research or will reveal new and important implications for practice. For example, anecdotal evidence may suggest drug use among homeless veterans is related to their patterns of travel throughout the day. Assuming prior studies have not looked at individual travel choices as a way to study access to illicit drug use, a case study that observes a homeless veteran could reveal how issues of personal mobility choices facilitate regular access to illicit drugs. Note that it is important to conduct a thorough literature review to ensure that your assumption about the need to reveal new insights or previously hidden problems is valid and evidence-based.
  • Does the case challenge and offer a counter-point to prevailing assumptions? Over time, research on any given topic can fall into a trap of developing assumptions based on outdated studies that are still applied to new or changing conditions or the idea that something should simply be accepted as "common sense," even though the issue has not been thoroughly tested in practice. A case may offer you an opportunity to gather evidence that challenges prevailing assumptions about a research problem and provide a new set of recommendations applied to practice that have not been tested previously. For example, perhaps there has been a long practice among scholars to apply a particular theory in explaining the relationship between two subjects of analysis. Your case could challenge this assumption by applying an innovative theoretical framework [perhaps borrowed from another discipline] to the study of a case in order to explore whether this approach offers new ways of understanding the research problem. Taking a contrarian stance is one of the most important ways that new knowledge and understanding develops from existing literature.
  • Does the case provide an opportunity to pursue action leading to the resolution of a problem? Another way to think about choosing a case to study is to consider how the results from investigating a particular case may result in findings that reveal ways in which to resolve an existing or emerging problem. For example, studying the case of an unforeseen incident, such as a fatal accident at a railroad crossing, can reveal hidden issues that could be applied to preventative measures that contribute to reducing the chance of accidents in the future. In this example, a case study investigating the accident could lead to a better understanding of where to strategically locate additional signals at other railroad crossings in order to better warn drivers of an approaching train, particularly when visibility is hindered by heavy rain, fog, or at night.
  • Does the case offer a new direction in future research? A case study can be used as a tool for exploratory research that points to a need for further examination of the research problem. A case can be used when there are few studies that help predict an outcome or that establish a clear understanding about how best to proceed in addressing a problem. For example, after conducting a thorough literature review [very important!], you discover that little research exists showing the ways in which women contribute to promoting water conservation in rural communities of Uganda. A case study of how women contribute to saving water in a particular village can lay the foundation for understanding the need for more thorough research that documents how women in their roles as cooks and family caregivers think about water as a valuable resource within their community throughout rural regions of east Africa. The case could also point to the need for scholars to apply feminist theories of work and family to the issue of water conservation.

Eisenhardt, Kathleen M. “Building Theories from Case Study Research.” Academy of Management Review 14 (October 1989): 532-550; Emmel, Nick. Sampling and Choosing Cases in Qualitative Research: A Realist Approach. Thousand Oaks, CA: SAGE Publications, 2013; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Seawright, Jason and John Gerring. "Case Selection Techniques in Case Study Research." Political Research Quarterly 61 (June 2008): 294-308.

Structure and Writing Style

The purpose of a paper in the social sciences designed around a case study is to thoroughly investigate a subject of analysis in order to reveal a new understanding about the research problem and, in so doing, contribute new knowledge to what is already known from previous studies. In applied social sciences disciplines [e.g., education, social work, public administration, etc.], case studies may also be used to reveal best practices, highlight key programs, or investigate interesting aspects of professional work. In general, the structure of a case study research paper is not all that different from a standard college-level research paper. However, there are subtle differences you should be aware of. Here are the key elements to organizing and writing a case study research paper.

I.  Introduction

As with any research paper, your introduction should serve as a roadmap for your readers to ascertain the scope and purpose of your study . The introduction to a case study research paper, however, should not only describe the research problem and its significance, but you should also succinctly describe why the case is being used and how it relates to addressing the problem. The two elements should be linked. With this in mind, a good introduction answers these four questions:

  • What was I studying? Describe the research problem and describe the subject of analysis you have chosen to address the problem. Explain how they are linked and what elements of the case will help to expand knowledge and understanding about the problem.
  • Why was this topic important to investigate? Describe the significance of the research problem and state why a case study design and the subject of analysis that the paper is designed around is appropriate in addressing the problem.
  • What did we know about this topic before I did this study? Provide background that helps lead the reader into the more in-depth literature review to follow. If applicable, summarize prior case study research applied to the research problem and why it fails to adequately address the research problem. Describe why your case will be useful. If no prior case studies have been used to address the research problem, explain why you have selected this subject of analysis.
  • How will this study advance new knowledge or new ways of understanding? Explain why your case study will be suitable in helping to expand knowledge and understanding about the research problem.

Each of these questions should be addressed in no more than a few paragraphs. Exceptions to this can be when you are addressing a complex research problem or subject of analysis that requires more in-depth background information.

II.  Literature Review

The literature review for a case study research paper is generally structured the same as it is for any college-level research paper. The difference, however, is that the literature review is focused on providing background information and  enabling historical interpretation of the subject of analysis in relation to the research problem the case is intended to address . This includes synthesizing studies that help to:

  • Place relevant works in the context of their contribution to understanding the case study being investigated . This would include summarizing studies that have used a similar subject of analysis to investigate the research problem. If there is literature using the same or a very similar case to study, you need to explain why duplicating past research is important [e.g., conditions have changed; prior studies were conducted long ago, etc.].
  • Describe the relationship each work has to the others under consideration that informs the reader why this case is applicable . Your literature review should include a description of any works that support using the case to study the research problem and the underlying research questions.
  • Identify new ways to interpret prior research using the case study . If applicable, review any research that has examined the research problem using a different research design. Explain how your case study design may reveal new knowledge or a new perspective or that can redirect research in an important new direction.
  • Resolve conflicts amongst seemingly contradictory previous studies . This refers to synthesizing any literature that points to unresolved issues of concern about the research problem and describing how the subject of analysis that forms the case study can help resolve these existing contradictions.
  • Point the way in fulfilling a need for additional research . Your review should examine any literature that lays a foundation for understanding why your case study design and the subject of analysis around which you have designed your study may reveal a new way of approaching the research problem or offer a perspective that points to the need for additional research.
  • Expose any gaps that exist in the literature that the case study could help to fill . Summarize any literature that not only shows how your subject of analysis contributes to understanding the research problem, but how your case contributes to a new way of understanding the problem that prior research has failed to do.
  • Locate your own research within the context of existing literature [very important!] . Collectively, your literature review should always place your case study within the larger domain of prior research about the problem. The overarching purpose of reviewing pertinent literature in a case study paper is to demonstrate that you have thoroughly identified and synthesized prior studies in the context of explaining the relevance of the case in addressing the research problem.

III.  Method

In this section, you explain why you selected a particular subject of analysis to study and the strategy you used to identify and ultimately decide that your case was appropriate in addressing the research problem. The way you describe the methods used varies depending on the type of subject of analysis that frames your case study.

If your subject of analysis is an incident or event. In the social and behavioral sciences, the event or incident that represents the case to be studied is usually bounded by time and place, with a clear beginning and end and with an identifiable location or position relative to its surroundings. The subject of analysis can be a rare or critical event or it can focus on a typical or regular event. The purpose of studying a rare event is to illuminate new ways of thinking about the broader research problem or to test a hypothesis. Critical incident case studies must describe the method by which you identified the event and explain the process by which you determined the validity of this case to inform broader perspectives about the research problem or to reveal new findings. However, the event does not have to be rare or uniquely significant to support new thinking about the research problem or to challenge an existing hypothesis. For example, Walo, Bull, and Breen conducted a case study to identify and evaluate the direct and indirect economic benefits and costs of a local sports event in the City of Lismore, New South Wales, Australia. The purpose of their study was to provide new insights from measuring the impact of a typical local sports event that prior studies could not measure well because they focused on large "mega-events." Whether the event is rare or not, the methods section should include an explanation of the following characteristics of the event: a) when did it take place; b) what were the underlying circumstances leading to the event; c) what were the consequences of the event.

If your subject of analysis is a person. Explain why you selected this particular individual to be studied and describe what experience he or she has had that provides an opportunity to advance new understandings about the research problem. Mention any background about this person which might help the reader understand the significance of his/her experiences that make them worthy of study. This includes describing the relationships this person has had with other people, institutions, and/or events that support using him or her as the subject for a case study research paper. It is particularly important to differentiate the person as the subject of analysis from others and to succinctly explain how the person relates to examining the research problem.

If your subject of analysis is a place. In general, a case study that investigates a place suggests a subject of analysis that is unique or special in some way and that this uniqueness can be used to build new understanding or knowledge about the research problem. A case study of a place must not only describe its various attributes relevant to the research problem [e.g., physical, social, cultural, economic, political, etc.], but you must state the method by which you determined that this place will illuminate new understandings about the research problem. It is also important to articulate why a particular place as the case for study is being used if similar places also exist [i.e., if you are studying patterns of homeless encampments of veterans in open spaces, why study Echo Park in Los Angeles rather than Griffith Park?]. If applicable, describe what type of human activity involving this place makes it a good choice to study [e.g., prior research reveals Echo Park has more homeless veterans].

If your subject of analysis is a phenomenon. A phenomenon refers to a fact, occurrence, or circumstance that can be studied or observed but whose cause or explanation is in question. In this sense, a phenomenon that forms your subject of analysis can encompass anything that can be observed or presumed to exist but is not fully understood. In the social and behavioral sciences, the case usually focuses on human interaction within a complex physical, social, economic, cultural, or political system. For example, the phenomenon could be the observation that many vehicles used by ISIS fighters are small trucks with English language advertisements on them. The research problem could be that ISIS fighters are difficult to combat because they are highly mobile. The research questions could be how and by what means are these vehicles used by ISIS being supplied to the militants and how might supply lines to these vehicles be cut? How might knowing the suppliers of these trucks from overseas reveal larger networks of collaborators and financial support? A case study of a phenomenon most often encompasses an in-depth analysis of a cause and effect that is grounded in an interactive relationship between people and their environment in some way.

NOTE:   The choice of the case or set of cases to study cannot appear random. Evidence that supports the method by which you identified and chose your subject of analysis should be linked to the findings from the literature review. Be sure to cite any prior studies that helped you determine that the case you chose was appropriate for investigating the research problem.

IV.  Discussion

The main elements of your discussion section are generally the same as any research paper, but centered around interpreting and drawing conclusions about the key findings from your case study. Note that a general social sciences research paper may contain a separate section to report findings. However, in a paper designed around a case study, it is more common to combine a description of the findings with the discussion about their implications. The objectives of your discussion section should include the following:

Reiterate the Research Problem/State the Major Findings Briefly reiterate the research problem you are investigating and explain why the subject of analysis around which you designed the case study was used. You should then describe the findings revealed from your study of the case using direct, declarative, and succinct proclamation of the study results. Highlight any findings that were unexpected or especially profound.

Explain the Meaning of the Findings and Why They are Important Systematically explain the meaning of your case study findings and why you believe they are important. Begin this part of the section by repeating what you consider to be your most important or surprising finding first, then systematically review each finding. Be sure to thoroughly extrapolate what your analysis of the case can tell the reader about situations or conditions beyond the actual case that was studied while, at the same time, being careful not to misconstrue or conflate a finding that undermines the external validity of your conclusions.

Relate the Findings to Similar Studies No study in the social sciences is so novel or possesses such a restricted focus that it has absolutely no relation to previously published research. The discussion section should relate your case study results to those found in other studies, particularly if questions raised from prior studies served as the motivation for choosing your subject of analysis. This is important because comparing and contrasting the findings of other studies helps to support the overall importance of your results and it highlights how and in what ways your case study design and the subject of analysis differs from prior research about the topic.

Consider Alternative Explanations of the Findings It is important to remember that the purpose of social science research is to discover and not to prove. When writing the discussion section, you should carefully consider all possible explanations for the case study results, rather than just those that fit your hypothesis or prior assumptions and biases. Be alert to what the in-depth analysis of the case may reveal about the research problem, including offering a contrarian perspective to what scholars have stated in prior research.

Acknowledge the Study's Limitations You can state the study's limitations in the conclusion section of your paper but describing the limitations of your subject of analysis in the discussion section provides an opportunity to identify the limitations and explain why they are not significant. This part of the discussion section should also note any unanswered questions or issues your case study could not address. More detailed information about how to document any limitations to your research can be found here .

Suggest Areas for Further Research Although your case study may offer important insights about the research problem, there are likely additional questions related to the problem that remain unanswered or findings that unexpectedly revealed themselves as a result of your in-depth analysis of the case. Be sure that the recommendations for further research are linked to the research problem and that you explain why your recommendations are valid in other contexts and based on the original assumptions of your study.

V.  Conclusion

As with any research paper, you should summarize your conclusion in clear, simple language; emphasize how the findings from your case study differ from or support prior research and why. Do not simply reiterate the discussion section. Provide a synthesis of key findings presented in the paper to show how these converge to address the research problem. If you haven't already done so in the discussion section, be sure to document the limitations of your case study and needs for further research.

The function of your paper's conclusion is to: 1)  restate the main argument supported by the findings from the analysis of your case; 2) clearly state the context, background, and necessity of pursuing the research problem using a case study design in relation to an issue, controversy, or a gap found from reviewing the literature; and, 3) provide a place for you to persuasively and succinctly restate the significance of your research problem, given that the reader has now been presented with in-depth information about the topic.

Consider the following points to help ensure your conclusion is appropriate:

  • If the argument or purpose of your paper is complex, you may need to summarize these points for your reader.
  • If prior to your conclusion, you have not yet explained the significance of your findings or if you are proceeding inductively, use the conclusion of your paper to describe your main points and explain their significance.
  • Move from a detailed to a general level of consideration of the case study's findings that returns the topic to the context provided by the introduction or within a new context that emerges from your case study findings.

Note that, depending on the discipline you are writing in and your professor's preferences, the concluding paragraph may contain your final reflections on the evidence presented applied to practice or on the essay's central research problem. However, the nature of being introspective about the subject of analysis you have investigated will depend on whether you are explicitly asked to express your observations in this way.

Problems to Avoid

Overgeneralization One of the goals of a case study is to lay a foundation for understanding broader trends and issues applied to similar circumstances. However, be careful when drawing conclusions from your case study. They must be evidence-based and grounded in the results of the study; otherwise, it is merely speculation. Looking at a prior example, it would be incorrect to state that a factor in improving girls' access to education in Azerbaijan and the policy implications this may have for improving access in other Muslim nations is due to girls' access to social media if there is no documentary evidence from your case study to indicate this. There may be anecdotal evidence that retention rates were better for girls who were on social media, but this observation would only point to the need for further research and would not be a definitive finding if this was not a part of your original research agenda.

Failure to Document Limitations No case is going to reveal all that needs to be understood about a research problem. Therefore, just as you have to clearly state the limitations of a general research study , you must describe the specific limitations inherent in the subject of analysis. For example, the case of studying how women conceptualize the need for water conservation in a village in Uganda could have limited application in other cultural contexts or in areas where fresh water from rivers or lakes is plentiful and, therefore, conservation is understood differently than preserving access to a scarce resource.

Failure to Extrapolate All Possible Implications Just as you don't want to over-generalize from your case study findings, you also have to be thorough in the consideration of all possible outcomes or recommendations derived from your findings. If you do not, your reader may question the validity of your analysis, particularly if you failed to document an obvious outcome from your case study research. For example, in the case of studying the accident at the railroad crossing to evaluate where and what types of warning signals should be located, it would be an obvious oversight to fail to take speed limit signage into consideration alongside the warning signals. When designing your case study, be sure you have thoroughly addressed all aspects of the problem and do not leave gaps in your analysis.

Case Studies. Writing@CSU. Colorado State University; Gerring, John. Case Study Research: Principles and Practices. New York: Cambridge University Press, 2007; Merriam, Sharan B. Qualitative Research and Case Study Applications in Education. Rev. ed. San Francisco, CA: Jossey-Bass, 1998; Miller, Lisa L. “The Use of Case Studies in Law and Social Science Research.” Annual Review of Law and Social Science 14 (2018): TBD; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Putney, LeAnn Grogan. "Case Study." In Encyclopedia of Research Design, Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE Publications, 2010), pp. 116-120; Simons, Helen. Case Study Research in Practice. London: SAGE Publications, 2009; Kratochwill, Thomas R. and Joel R. Levin, editors. Single-Case Research Design and Analysis: New Development for Psychology and Education. Hillsdale, NJ: Lawrence Erlbaum Associates, 1992; Swanborn, Peter G. Case Study Research: What, Why and How? London: SAGE, 2010; Yin, Robert K. Case Study Research: Design and Methods. 6th edition. Los Angeles, CA: SAGE Publications, 2014; Walo, Maree, Adrian Bull, and Helen Breen. “Achieving Economic Benefits at Local Events: A Case Study of a Local Sports Event.” Festival Management and Event Tourism 4 (1996): 95-106.

Writing Tip

At Least Five Misconceptions about Case Study Research

Social science case studies are often perceived as limited in their ability to create new knowledge because they are not randomly selected and findings cannot be generalized to larger populations. Flyvbjerg examines five misunderstandings about case study research and systematically "corrects" each one. To quote, these are:

  • Misunderstanding 1: General, theoretical (context-independent) knowledge is more valuable than concrete, practical (context-dependent) knowledge.
  • Misunderstanding 2: One cannot generalize on the basis of an individual case; therefore, the case study cannot contribute to scientific development.
  • Misunderstanding 3: The case study is most useful for generating hypotheses; that is, in the first stage of a total research process, whereas other methods are more suitable for hypotheses testing and theory building.
  • Misunderstanding 4: The case study contains a bias toward verification, that is, a tendency to confirm the researcher’s preconceived notions.
  • Misunderstanding 5: It is often difficult to summarize and develop general propositions and theories on the basis of specific case studies [p. 221].

While writing your paper, think introspectively about how you addressed these misconceptions because to do so can help you strengthen the validity and reliability of your research by clarifying issues of case selection, the testing and challenging of existing assumptions, the interpretation of key findings, and the summation of case outcomes. Think of a case study research paper as a complete, in-depth narrative about the specific properties and key characteristics of your subject of analysis applied to the research problem.

Flyvbjerg, Bent. “Five Misunderstandings About Case-Study Research.” Qualitative Inquiry 12 (April 2006): 219-245.


The Case Study as Research Method: A Practical Handbook

Qualitative Research in Accounting & Management

ISSN : 1176-6093

Article publication date: 21 June 2011

Scapens, R.W. (2011), "The Case Study as Research Method: A Practical Handbook", Qualitative Research in Accounting & Management , Vol. 8 No. 2, pp. 201-204. https://doi.org/10.1108/11766091111137582

Emerald Group Publishing Limited

Copyright © 2011, Emerald Group Publishing Limited

This book aims to provide case‐study researchers with a step‐by‐step practical guide to “help them conduct the study with the required degree of rigour” (p. xi).

It seeks to “demonstrate that the case study is indeed a scientific method” (p. 104) and to show “the usefulness of the case method as one tool in the researcher's methodological arsenal” (p. 105). The individual chapters cover the various stages in conducting case‐study research, and each chapter sets out a number of practical steps which have to be taken by the researcher. The following are the eight stages/chapters and, in brackets, the number of steps in each stage:

  • 1. Assessing appropriateness and usefulness (4).
  • 2. Ensuring accuracy of results (21).
  • 3. Preparation (6).
  • 4. Selecting cases (4).
  • 5. Collecting data (7).
  • 6. Analyzing data (4).
  • 7. Interpreting data (3).
  • 8. Reporting results (4).

It is particularly noticeable that ensuring accuracy of results has by far the largest number of steps – 21, compared to seven or fewer in the other stages. This reflects Gagnon's concern to demonstrate the scientific rigour of case‐study research. In the foreword, he explains that the book draws on his experience in conducting his own PhD research, which was closely supervised by three professors, one of whom was inclined towards quantitative research. Consequently, his research was underpinned by the principles and philosophy of quantitative research. This is clearly reflected in the approach taken in this book, which seeks to show that case‐study research is just as rigorous and scientific as quantitative research, and it can produce an objective and accurate representation of the observed reality.

There is no discussion of the methodological issues relating to the use of case‐study research methods. This is acknowledged in the foreword, although Gagnon refers to them as philosophical or epistemological issues (p. xii), as he tends to use the terms methodology and method interchangeably – as is common in quantitative research. Although he starts (step 1.1) by trying to distance case and other qualitative research from the work of positivists, arguing that society is socially constructed, he nevertheless sees social reality as objective and independent of the researcher. So for Gagnon, the aim of case research is to accurately reflect that reality. At various points in the book the notion of interpretation is used – evidence is interpreted and the (objective) case findings have to be interpreted.

So although there is a distancing from positivist research (p. 1), the approach taken in this book retains an objective view of the social reality which is being researched; a view which is rather different to the subjective view of reality taken by many interpretive case researchers. This distinction between an objective and a subjective view of the social reality being researched – and especially its use in contrasting positivist and interpretive research – has its origins in the taxonomy of Burrell and Morgan (1979). Although there have been various developments in the so‐called “objective‐subjective debate”, and recently some discussion in relation to management accounting research (Kakkuri‐Knuuttila et al., 2008; Ahrens, 2008), this debate is not mentioned in the book. Nevertheless, it is clear that Gagnon is firmly in the objective camp. In a recent paper, Johnson et al. (2006, p. 138) provide a more contemporary classification of the different types of qualitative research. In their terms, the approach taken in this book could be described as neo‐empiricist – an approach which they characterise as “qualitative positivists”.

The approach taken in this handbook leaves case studies open to the criticisms that they are a small sample, and consequently difficult to generalise, and to arguments that case studies are most appropriate for exploratory research which can subsequently be generalised through quantitative research. Gagnon explains that this was the approach he used after completing his thesis (p. xi). The handbook only seems to recognise two types of case studies, namely exploratory and raw empirical case studies – the latter being used where “the researcher is interested in a subject without having formed any preconceived ideas about it” (p. 15) – which has echoes of Glaser and Strauss (1967). However, limiting case studies to these two types ignores other potential types; in particular, explanatory case studies, which are where interpretive case‐study research can make important contributions (Ryan et al., 2002).

This limited approach to case studies comes through in the practical steps which are recommended in the handbook, and especially in the discussion of reliability and validity. The suggested steps seem to be designed to keep very close to the notions of reliability and validity used in quantitative research. There is no mention of the recent discussion of “validity” in interpretive accounting research, which emphasises the importance of authenticity and credibility and their implications for writing up qualitative and case‐study research ( Lukka and Modell, 2010 ). Although the final stage of Gagnon's handbook makes some very general comments about reporting the results, it does not mention, for example, Baxter and Chua's (2008) paper in QRAM which discusses the importance of demonstrating authenticity, credibility and transferability in writing qualitative research.

Despite Gagnon's emphasis on traditional notions of reliability and validity the handbook provides some useful practical advice for all case‐study researchers. For example, case‐study research needs a very good research design; case‐study researchers must work hard to gain access to and acceptance in the research settings; a clear strategy is needed for data collection; the case researcher should create field notes (in a field notebook, or otherwise) to record all the thoughts, ideas, observations, etc. that would not otherwise be collected; and the vast amount of data that case‐study research can generate needs to be carefully managed. Furthermore, because of what Gagnon calls the “risk of mortality” (p. 54) (i.e. the risk that access to a research site may be lost – for instance, if the organisation goes bankrupt) it is crucial for some additional site(s) to be selected at the outset to ensure that the planned research can be completed. This is what I call “insurance cases” when talking to my own PhD students. Interestingly, Gagnon recognises the ethical issues involved in doing case studies – something which is not always mentioned by the more objectivist type of case‐study researchers. He emphasises that it is crucial to honour confidentiality agreements, to ensure data are stored securely and that commitments are met and promises kept.

There is an interesting discussion of the advantages and disadvantages of using computer methods in analysing data (in stage 6). However, the discussion of coding appears to be heavily influenced by grounded theory, and is clearly concerned with producing an accurate reflection of an objective reality. In addition, Gagnon's depiction of case analysis is overly focussed on content analysis – possibly because it is a quantitative type of technique. There is no reference to the other approaches available to qualitative researchers. For example, there is no mention of the various visualisation techniques set out in Miles and Huberman (1994) .
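As a concrete, deliberately simplified illustration of the kind of computer-assisted coding and content analysis discussed here, the following sketch tallies how often hypothetical codes occur across interview transcripts. The coding scheme and the transcript snippets are invented for illustration and are not taken from Gagnon's book.

```python
# Hypothetical sketch of keyword-based content-analysis coding across transcripts.
from collections import Counter

coding_scheme = {
    "cost": ["budget", "cost", "funding"],
    "training": ["training", "workshop", "skills"],
}

transcripts = [
    "The budget cuts forced us to cancel the staff training workshop.",
    "Funding for new skills development remains a concern.",
]

counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for code, keywords in coding_scheme.items():
        if any(keyword in lowered for keyword in keywords):
            counts[code] += 1  # count at most one hit per transcript per code

print(counts)  # Counter({'cost': 2, 'training': 2})
```

Dedicated qualitative data analysis software supports far richer, researcher-driven coding than this; the sketch only makes concrete what a purely frequency-oriented, content-analysis view of coding looks like.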

To summarise, Gagnon's book is particularly useful for case‐study researchers who see the reality they are researching as objective and researcher independent. However, this is a sub‐set of case‐study researchers. Although some of the practical guidance offered is relevant for other types of case‐study researchers, those who see multiple realities in the social actors and/or recognise the subjectivity of the research process might have difficulty with some of the steps in this handbook. Gagnon's aim to show that the case study is a scientific method gives the handbook a focus on traditional (quantitatively inspired) notions of rigour and validity, and a tendency to ignore (or at least marginalise) other types of case study research. For example, the focus on exploratory cases, which need to be supplemented by broad based quantitative research, overlooks the real potential of case study research, which lies in explanatory cases. Furthermore, Gagnon is rather worried about participant research, as the researcher may play a role which is “not consistent with scientific method” (p. 42), and which may introduce researcher bias and thereby damage “the impartiality of the study” (p. 53). Leaving aside the philosophical question about whether any social science research, including quantitative research, can be impartial, this stance could severely limit the potential of case‐study research and it would rule out both the early work on the sociology of mass production and the recent calls for interventionist research. Clearly, there could be a problem where a researcher is trying to sell consulting services, but there is a long tradition of social researchers working within organisations that they are studying. Furthermore, if interpretive research is to be relevant for practice, researchers may have to work with organisations to introduce new ideas and new ways of analysing problems. Gagnon would seem to want to avoid all such research – as it would not be “impartial”.

Consequently, although there is some good practical advice for case study researchers in this handbook, some of the recommendations have to be treated cautiously, as it is a book which sees case‐study research in a very specific way. As mentioned earlier, in the foreword Gagnon explicitly recognises that the book does not take a position on the methodological debates surrounding the use of case studies as a research method, and he says that “The reader should therefore use and judge this handbook with these considerations in mind” (p. xii). This is very good advice – caveat emptor.

Ahrens, T. (2008), “A comment on Marja‐Liisa Kakkuri‐Knuuttila, Kari Lukka and Jaakko Kuorikoski”, Accounting, Organizations and Society, Vol. 33 Nos 2/3, pp. 291-7.

Baxter, J. and Chua, W.F. (2008), “The field researcher as author‐writer”, Qualitative Research in Accounting & Management, Vol. 5 No. 2, pp. 101-21.

Burrell, G. and Morgan, G. (1979), Sociological Paradigms and Organizational Analysis, Heinemann, London.

Glaser, B.G. and Strauss, A.L. (1967), The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine, New York, NY.

Johnson, P., Buehring, A., Cassell, C. and Symon, G. (2006), “Evaluating qualitative management research: towards a contingent criteriology”, International Journal of Management Reviews, Vol. 8 No. 3, pp. 131-56.

Kakkuri‐Knuuttila, M.‐L., Lukka, K. and Kuorikoski, J. (2008), “Straddling between paradigms: a naturalistic philosophical case study on interpretive research in management accounting”, Accounting, Organizations and Society, Vol. 33 Nos 2/3, pp. 267-91.

Lukka, K. and Modell, S. (2010), “Validation in interpretive management accounting research”, Accounting, Organizations and Society, Vol. 35, pp. 462-77.

Miles, M.B. and Huberman, A.M. (1994), Qualitative Data Analysis: A Source Book of New Methods, 2nd ed., Sage, London.

Ryan, R.J., Scapens, R.W. and Theobald, M. (2002), Research Methods and Methodology in Finance and Accounting, 2nd ed., Thomson Learning, London.


Towards automated analysis of research methods in library and information science


Ziqi Zhang, Winnie Tam, Andrew Cox; Towards automated analysis of research methods in library and information science. Quantitative Science Studies 2021; 2 (2): 698–732. doi: https://doi.org/10.1162/qss_a_00123


Previous studies of research methods in Library and Information Science (LIS) lack consensus in how to define or classify research methods, and there have been no studies on automated recognition of research methods in the scientific literature of this field. This work begins to fill these gaps by studying how the scope of “research methods” in LIS has evolved, and the challenges in automatically identifying the usage of research methods in LIS literature. We collected 2,599 research articles from three LIS journals. Using a combination of content analysis and text mining methods, a sample of this collection is coded into 29 different concepts of research methods and is then used to test a rule-based automated method for identifying research methods reported in the scientific literature. We show that the LIS field is characterized by the use of an increasingly diverse range of methods, many of which originate outside the conventional boundaries of LIS. This implies increasing complexity in research methodology and suggests the need for a new approach towards classifying LIS research methods to capture the complex structure and relationships between different aspects of methods. Our automated method is the first of its kind in LIS, and sets an important reference for future research.

Research methods are one of the defining intellectual characteristics of an academic discipline ( Whitley, 2000 ). Paradigmatic fields use a settled range of methods. Softer disciplines are marked by greater variation, more interdisciplinary borrowing, and novelty. In trying to understand our own field of Library and Information Science (LIS) better, a grasp of the changing pattern of methods can tell us much about the character and directions of the subject. LIS employs an increasingly diverse range of research methods as the discipline becomes increasingly entwined with other subjects, such as health informatics (e.g., Lustria, Kazmer et al., 2010 ), and computer science (e.g., Chen, Liu, & Ho, 2013 ). As a result of a wish to understand these patterns, a number of studies have been conducted to investigate the usage and evolution of research methods in LIS. Many of these ( Bernhard, 1993 ; Blake, 1994 ; Chu, 2015 ; Järvelin & Vakkari, 1990 ) aim to develop a classification scheme of commonly used research methods in LIS, whereas some ( Hider & Pymm, 2008 ; VanScoy & Fontana, 2016 ) focus on comparing the usage of certain methods (e.g., qualitative vs. quantitative), or recent trends in the usage of certain methods ( Fidel, 2008 ; Grankikov, Hong et al., 2020 ).

However, we identify several gaps in the literature on research methods in LIS. First, there is an increasing need for an updated view of how the scope of "research methods" in LIS has evolved. On the one hand, as we shall learn from the literature review, despite continuous interest in this research area, there remains a lack of consensus on the terminology and classification of research methods (Ferran-Ferrer, Guallar et al., 2017; Risso, 2016). Some (Hider & Pymm, 2008; Järvelin & Vakkari, 1990) classify methods from different angles that form a hierarchy, while others (Chu, 2015; Park, 2004) define a flat structure of methods. In reporting their methods, scholars also take different approaches: some describe their work in terms of data collection methods, while others define it through modes of analysis. This "lack of consensus" is therefore difficult to resolve; it reflects that LIS is not a paradigmatic discipline in which there is agreement on how knowledge is built. Rather, the field sustains a number of incommensurable viewpoints about the definition of method.

On the other hand, as our results will show, the growth of artificial intelligence (AI) and Big Data research in the last decade has led to a significant increase in data-driven research published in LIS that reaches into these fast-growing disciplines. As a result, the conventional scope and definitions of LIS research methods have difficulty accommodating this new work. For example, many of the articles published around AI and Big Data topics are difficult to fit into the categories of methods defined in Chu (2015).

The implication of the above situation is that it becomes extremely challenging for researchers (particularly those new to LIS) to develop and maintain an informed view of the research methods used in the field. Second, there is an increasing need for automated methods that can support the analysis of research methods in LIS, as the number of publications and the range of research methods both increase rapidly. However, we find no work in this direction in LIS to date. Although such work has already been attempted in other disciplines, such as Computer Science (Augenstein, Das et al., 2017) and Biomedicine (Hirohata, Okazaki et al., 2008), there is nothing comparable in LIS. Studies in those other fields have focused on automatically identifying the use of research methods and their parameters (e.g., data collected, experiment settings) from the scientific literature, and have proved to be an important means for the effective archiving and timely summarizing of research. The need for providing structured access to the content of scientific literature is also articulated in Knoth and Herrmannova's (2014) concept of "semantometrics." We see a pressing need for conducting similar research in LIS. However, due to the complexity of defining and agreeing on a classification of LIS research methods, we anticipate that the task of automated analysis will face many challenges. Therefore, a first step in this direction would be to gain an in-depth understanding of such technical challenges. This work therefore addresses two research questions:

How has the scope of “research methods” in LIS evolved, compared to previous definitions of this subject?

To what extent can we automatically identify the usage of research methods in LIS literature, and what are the challenges?

Our work makes two contributions. First, we review existing definitions and the scope of "research methods" in LIS, and discuss their limitations in the context of the increasingly multidisciplinary nature and diversification of research methods used in this domain. Following on from this, we propose an updated classification of LIS research methods based on an analysis of the past 10 years' publications from three primary journals in this field. Although this does not address many of the limitations in the status quo of the definition and classification of LIS research methods, it reflects significant changes that deviate from previous findings and highlights issues that need to be addressed in future research in this direction. Second, we conduct the first study of automated methods for identifying research methods from LIS literature. To achieve this, we develop a data set containing human-labeled scientific publications according to our new classification scheme, and a text mining method that automatically recognizes these labels. Our experiments reveal that, compared to other disciplines where automated classification of this kind is well established, the task in LIS is extremely challenging, and a significant amount of work remains to be done and coordinated by different parties to improve the performance of the automated method. We discuss these challenges and potential ways to address them to inform future research in this direction.

The remainder of this paper is structured as follows. We discuss related work in the next section, followed by a description of our method. We then present and discuss our results and the limitations of this study, with concluding remarks in the final section.

2. Related Work

We discuss related work in two areas. First, we review studies of research methods in LIS. We do not cover research in similar directions within other disciplines, as research methods can differ significantly across different subject fields. Second, we discuss studies of automated methods for information extraction (IE) from scholarly data. We will review work conducted in other disciplines, particularly from Computer Science and Biomedicine, because significant progress has been made in these subject fields and we expect to learn from and generalize methods developed in these areas to LIS.

2.1. Studies of Research Methods in LIS

Chu (2015) surveyed pre-2013 studies of research methods in LIS; these are summarized in Table 1. To avoid repetition, we only present an overview of this survey and refer readers to her work for details. Järvelin and Vakkari (1990) conducted the first study on this topic and proposed a framework that contains "research strategies" (e.g., historical research, survey, qualitative strategy, evaluation, case or action research, and experiment) and "data collection methods" (e.g., questionnaire, interview, observation, thinking aloud, content analysis, and historical source analysis). This framework was widely adopted and revised in later studies. For example, Kumpulainen (1991) showed that 51% of studies belonged to "empirical research," where "interview and questionnaire" (combined) was the most popular data collection method, and 48% were nonempirical research and contained no identifiable methods of data collection. Bernhard (1993) defined 13 research methods in a flat structure. Some of these have a connection to the five research strategies of Järvelin and Vakkari (1990) (e.g., "experimental research" to "empirical research"), and others would have been categorized as "data collection methods" by Järvelin and Vakkari (e.g., "content analysis," "bibliometrics," and "historical research"). Other studies that proposed flat structures of method classification include Blake (1994), who introduced a classification of 13 research methods largely resembling those in Bernhard (1993), and Park (2004), who identified 17 research methods when comparing research methods curricula in Korean and U.S. universities. The author identified new methods such as "focus group" and "field study," possibly indicating the changing scene in LIS. Hider and Pymm (2008) conducted an analysis that categorized articles from 20 LIS journals into the classification scheme defined by Järvelin and Vakkari (1990). They showed that "survey" remained the predominant research strategy but that there had been a notable increase in "experiment." Fidel (2008) examined the use of "mixed methods" in LIS. She proposed a definition of "mixed method" and distinguished it from other concepts that are often misused as "mixed methods" in this field. Overall, only a very small percentage of LIS literature (5%) used "mixed methods" defined in this way. She also highlighted that in LIS, researchers often do not use the term mixed methods to describe their work.

Table 1. A summary of literature on the studies of research methods in LIS

Study | Sample | Key findings
Järvelin and Vakkari (1990) | 833 articles from 37 journals in 1985 | A classification scheme consisting of five "research strategies" and seven "data collection methods"
Kumpulainen (1991) | 632 articles from 30 LIS journals in 1975 | 51% "empirical research," 48% "nonapplicable," 13% "historical method," 11% "questionnaire and interview"
Bernhard (1993) | Journals, theses, textbooks, and reference sources in LIS | 13 research methods; some relate to the "research strategies" whereas others relate to the "data collection methods" in Järvelin and Vakkari (1990)
Blake (1994) | LIS dissertations between 1975 and 1989 | 13 research methods, most of which are similar to Bernhard (1993)
Park (2004) | 71 syllabi of Korean and U.S. universities between 2001 and 2003 | 17 research methods, some not reported before (e.g., field study, focus group)
Fidel (2008) | 465 articles from LIS journals between 2005 and 2006 | Only 5% used "mixed methods," whereas many that claimed to do so actually used "multiple methods" or "two approaches"
Hider and Pymm (2008) | 834 articles from 20 LIS journals in 2005 | Based on the Järvelin and Vakkari classification, "survey" remained the predominant "research strategy" and "experiment" had increased significantly
Chu (2015) | 1,162 articles from LIS journals between 2001 and 2010 | A classification that extends earlier work in this area; "survey" no longer dominating; instead, "content analysis," "experiment," and "theoretical approach" became more popular
VanScoy and Fontana (2016) | 1,362 journal articles published between 2000 and 2009 | A classification scheme similar to previous work; the majority of research was "quantitative," with "descriptive studies" based on "surveys" most common
Ferran-Ferrer et al. (2017) | 580 Spanish LIS journal articles between 2012 and 2014 | Proposed nine "research methods" and 13 "techniques." "Descriptive research" was the most used "research method," and "content analysis" was the most used "technique"
Togia and Malliari (2017) | 440 LIS journal articles between 2011 and 2016 | A classification of 12 "research methods" similar to that in Chu (2015). "Survey" remained the dominant method
Grankikov et al. (2020) | 386 LIS journal articles between 2015 and 2018 | Showed an increase in the use of "mixed methods" in this field

Drawing conclusions from the literature, Chu (2015) highlighted several patterns from the studies of research methods in LIS. First, researchers in LIS are increasingly using more sophisticated methods and techniques instead of the commonly used survey or historical method of the past. Methods such as experiments and modeling were on the rise. Second, there has been an increase in the use of qualitative approaches compared with the past, such as in the field of Information Retrieval. Building on this, Chu (2015) conducted a study of 1,162 research articles published from 2001 to 2010 in three major LIS journals—the largest collection spanning the longest time period in previous studies. She proposed a classification of 17 methods that largely echo those suggested before. However, some new methods included were “research journal/diary” and “webometrics” (e.g., link analysis, altmetrics). The study also showed that “content analysis,” “experiment,” and “theoretical approach” overtook “survey” and “historical method” to secure the dominant position among popular research methods used in LIS.

Since Chu (2015), a number of studies have been conducted on the topic of research methods in LIS, generally using a similar approach: Research articles published in major LIS journals are sampled and manually coded into a classification scheme, typically based on those proposed earlier. We summarize a number of these studies below. VanScoy and Fontana (2016) focused on reference and information service (RIS) literature, a subfield of LIS. Over 1,300 journal articles were first separated into research articles (i.e., empirical studies) and those that were not research. Research articles were then coded into 13 research methods that can be broadly divided into "qualitative," "quantitative," and "mixed" methods. Again, these are similar to the previous literature, but add new categories such as "narrative analysis" and "phenomenology." The authors showed that most of the RIS research was quantitative, with "descriptive methods" based on survey questionnaires being the most common. Ferran-Ferrer et al. (2017) studied a collection of Spanish LIS journal articles and showed that 68% were empirical research. They developed a classification scheme that defines nine "research methods" and 13 "techniques." Categories that differ from previous studies include "log analysis" and "text interpretation"; however, the exact difference between these concepts was not clearly explained. Togia and Malliari (2017) coded 440 LIS journal articles into a classification of 12 "research methods" similar to that of Chu (2015). However, in contrast to Chu, they showed that "survey" remained in the dominant position. Grankikov et al. (2020) studied the use of "mixed methods" in LIS literature. In contrast to Fidel (2008), they concluded that the use of "mixed methods" in LIS has been on the rise.

In addition to work within LIS, there has been work more widely in the social sciences to produce typologies of methodology (e.g., Luff, Byatt, & Martin, 2015). This update to an earlier seminal work by Durrant (2004) introduces a rather comprehensive typology of methodology, differentiating research design, data collection, data quality, and data analysis, among other categories. While offering a detailed approach for the gamut of social science methods, it does not represent the full range of methods in use in LIS, which draws on approaches beyond the social sciences. Thus, while it contributed to the development of our own taxonomy, this work could serve only as a useful input.

In summary, the literature shows a continued interest in the study of research methods in LIS over the last two decades. However, there remains significant inconsistency in the interpretation of the terminologies used to describe research methods, and in the different categorizations of research methods. This "lack of consensus" was discussed in Risso (2016) and VanScoy and Fontana (2016). Risso (2016) highlighted that, first, studies of LIS research methods take different perspectives that can reflect research subareas within this field, the delimitation of the object of study, or different ways of considering and approaching it. Second, a severe problem is the lack of category definitions in the different research method taxonomies proposed in the literature; as a result, some categories are difficult to distinguish from each other. VanScoy and Fontana (2016) pointed out that existing methodology categorizations in LIS are difficult to use, due to "conflation of research design, data collection, and data analysis methods," "ill-defined categories," and "extremely broad 'other' categories." For example, whereas Chu (2015) proposed a classification primarily based on data collection techniques, methods such as "bibliometrics" and "webometrics" are arguably not data collection methods, and were classified as "techniques" or "methods" in Ferran-Ferrer et al. (2017). Conversely, "survey," "interview," and "observation" are mixed with "content analysis" and "experiment" and all considered "techniques" by Ferran-Ferrer et al. (2017). In terms of the disagreement on the use of hierarchy, many authors have adopted a simple flat structure (e.g., Bernhard, 1993; Chu, 2015; Hider & Pymm, 2008; Park, 2004), whereas others introduced simple but inconsistent hierarchies (e.g., "research strategies" vs. "data collection methods" in Järvelin and Vakkari (1990) and "qualitative" vs. "quantitative" in VanScoy and Fontana (2016)). While intuitively we may argue that a sensible approach is to split methods primarily into data collection and analysis methods, the examples above suggest that this is not a view that commands consensus.

We argue that this issue reflects the ambiguity and complexity of the research methods used in LIS. As a result, the same data can be analyzed in different ways that reflect different conceptual stances. Adding to this is the lack of consistency among authors in reporting their methods: some define their work in terms of data collection methods, others through modes of analysis. For this reason, we argue that it is intrinsically difficult, if not impossible, to fully address these issues with a single universally agreed LIS research method definition and classification. Nevertheless, it remains imperative for researchers to gain an updated view of the evolution and diversification of research methods in this field, and to appreciate the different viewpoints from which they can be structured.

2.2. Automated Information Extraction from Scholarly Data

IE is the task of automatically extracting structured information from unstructured or semistructured documents. There has been increasing research in IE from the scientific literature (or "scholarly data") in recent decades, due to the rapid growth of the literature and the pressing need to effectively index, retrieve, and analyze such data (Nasar, Jaffry, & Malik, 2018). Nasar et al. (2018) reviewed recent studies in this area and classified them into two groups: those that extract metadata about an article, and those that extract key insights from its content. Research in this area has been conducted predominantly in the computer science, medical, and biology domains. We present an overview of these studies below.

Metadata extraction may target "descriptive" metadata that are often used for discovery and indexing, such as title, author, keywords, and references; "structural" metadata that describe how an article is organized, such as the section structure; and "administrative" metadata for resource management, such as file type and size. A significant number of studies in this area focus on extracting information from citations (Alam, Kumar et al., 2017), or on header-level metadata extraction from articles (Wang & Chai, 2018). The first targets information in individual bibliographic entries, such as the author names (first name, last name, initial), title of the article, journal name, and publisher. The second targets information usually found on the title page of an article, such as title, authors, affiliations, emails, publication venue, keywords, and abstract. Thanks to the continuous interest in the computer science, medical, and biology domains, several gold-standard data sets have been curated over the years to benchmark IE methods developed for such tasks. For example, the CORA data set (Seymore, McCallum, & Rosenfeld, 1999) was developed based on a collection of computer science research articles, and consists of both a set for header metadata extraction (935 records) and a set for citation extraction (500 records). The FLUX-CiM data set (Cortez, da Silva et al., 2007) is a data set for citation extraction, containing over 2,000 bibliography entries for computer science and health science. The UMASS data set consists of bibliographic information from 5,000 research papers in four major domains: physics, mathematics, computer science, and quantitative biology.

According to Nasar et al. (2018) , key-insights extraction refers to the extraction of information within an article’s text content. The types of such information vary significantly. They are often ad hoc and there is no consensus on what should be extracted. However, typically, this can include mentions of objectives, hypothesis, method, related work, gaps in research, result, experiment, evaluation criteria, conclusion, limitations of the study, and future work. Augenstein et al. (2017) and QasemiZadeh and Schumann (2016) proposed more fine-grained information units for extraction, such as task (e.g., “machine learning,” “data mining”), process (i.e., solutions of a problem, such as algorithms, methods and tools), materials (i.e., resources studied in a paper or used to solve the problem, such as “data set,” “corpora”), technology, system, tool, language resources (specific to computational linguistics), model, and data item metadata. The sources of such information are generally considered to be either sentence- or phrase-level, where the first aims to identify sentences that may convey the information either explicitly or implicitly, and the second aims to identify phrases or words that explicitly describe the information (e.g., “CNN model” in “The paper proposes a novel CNN model that works effectively for text classification”).

Studies of key-insight extraction are also limited to computer science and medical domains. Due to the lack of consensus over the task definition, which is discussed above, different data sets have been created focusing on different tasks. Hirohata et al. (2008) created a data set of 51,000 abstracts of published biomedical research articles, and classified individual sentences into objective, method, result, conclusion, and none. Teufel and Moens (2002) coded 80 computational linguistics research articles into different textual zones that describe, for example, background, objective, method, and related work. Liakata, Saha et al. (2012) developed a corpus of 256 full biochemistry/chemistry articles which are coded at sentence-level for 11 categories, such as hypothesis, motivation, goal, and method. Dayrell, Candido et al. (2012) created a data set containing abstracts from Physical Sciences and Engineering and Life and Health Sciences (LH). Sentences were classified into categories such as background, method, and purpose. Ronzano and Saggion (2015) coded 40 articles of the computer imaging domain and classified sentences into similar categories. Gupta and Manning (2011) pioneered the study of phrase-level key-insight extraction. They created a data set of 474 abstracts of computational linguistics research papers, and annotated phrases that describe three general levels of concepts: “focus,” which describes an article’s main contribution; “technique,” which mentions a method or a tool used in an article; and “domain,” which explains the application domain of a paper, such as speech recognition. Augenstein et al. (2017) created a data set of computational linguistics research articles that focus on phrase-level insights. Phrases indicating a concept of task, process, and material are annotated within 500 article abstracts. QasemiZadeh and Schumann (2016) annotated “terms” in 300 abstracts of computational linguistics papers. The categories of these terms are more fine grained, but some are generic, such as spatial regions, temporal entities, and numbers. Tateisi, Ohta et al. (2016) annotated a corpus of 400 computer science paper abstracts for relations, such as “apply-to” (e.g., a method applied to achieve certain purpose) and “compare” (e.g., a method is compared to a baseline).

In terms of techniques, the state of the art has mostly used either rule-based methods or machine learning. With rule-based methods, rules are coded into programs to capture recurring patterns in the data. For example, words such as "results," "experiments," and "evaluation" are often used to signal results in a research article, and phrases such as "we use" and "our method" are often used to describe methods (Hanyurwimfura, Bo et al., 2012; Houngb & Mercer, 2012). With machine learning methods, a human-annotated data set containing a large number of examples is first created, and is subsequently used to "train" and "evaluate" machine learning algorithms (Hirohata et al., 2008; Ronzano & Saggion, 2015). Such algorithms consume low-level features, usually designed by domain experts (e.g., words, word sequences (n-grams), part of speech, word shape (capitalized, lowercase, etc.), and word position), to discover patterns that may help capture the type of information to be extracted.
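
To make the contrast concrete, the following is a minimal sketch of a feature-based sentence classifier of the kind described above, using word n-grams as a stand-in for the richer hand-crafted features (part of speech, word shape, position). The toy sentences, labels, and model choice are illustrative assumptions, not taken from the cited studies.

    # Minimal sketch: classifying sentences into rhetorical categories
    # (objective/method/result), as in key-insight extraction work.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "We propose a novel ranking algorithm for scholarly search.",
        "The experiments show a 5% improvement over the baseline.",
        "Our goal is to understand how researchers report their methods.",
    ]
    train_labels = ["method", "result", "objective"]

    # word n-grams stand in for richer hand-crafted features
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_sentences, train_labels)
    print(model.predict(["We evaluate our approach on a sample of coded articles."]))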

In summary, although there has been a plethora of studies on IE from the scientific literature, these have been limited to only a handful of disciplines, and none has studied the problem in LIS. Existing methods will not be directly applicable to our problem for a number of reasons. First, previous work that extracts "research methods" only aims to identify the sentence or phrase that mentions a method (i.e., sentence- or phrase-level extraction), but does not recognize the actual method used. This is a different task, because the same research method may be referred to in different ways (e.g., "questionnaire" and "survey" may indicate the same method). Previous work also expects the research methods to be explicitly mentioned, which is not always true in LIS. Studies that use, for example, "content analysis," "ethnography," or "webometrics" may not even use these terms to explain their methods: instead of stating "a content analysis approach is used," many papers may only state "we analyzed and coded the transcripts…." For these reasons, a different approach needs to be taken, and a deeper understanding of these challenges, as well as the extent to which they can be dealt with, will add significant value for future research in this area.

3. Methodology

We describe our method in four parts. First, we explain our approach to data collection. Second, we describe an exploratory study of the data set, with the goal of developing a preliminary view of the possible research methods mentioned in our data set. Third, guided by the literature and informed by the exploratory analysis, we propose an updated research method classification scheme. Instead of attempting to address the intrinsically difficult problem of defining a classification hierarchy, our proposed scheme will adopt a flat structure. Our focus will be the change in the scope of research methods (e.g., where previous classification schemes need a revision). Finally, we describe how we develop the first automated method for the identification of research methods used in LIS studies.

3.1. Data Collection

Our data collection was subject to the following criteria. First, we select scientific publications from popular journals that are representative of LIS. Second, we use data that are machine readable, such as an XML format that preserves all the structural information of an article, rather than PDFs. This is because we want to be able to process the text content of each article, and OCR from PDFs is known to create noise in the converted text (Nasar et al., 2018). Finally, we select data from the same or similar sources reported in the previous literature so that our findings can be directly compared with earlier studies. This may allow us to discover trends in LIS research methods.

Thus, building on Chu (2015), we selected research articles published between January 1, 2008 and December 31, 2018 in Journal of Documentation (JDoc), Journal of the American Society for Information Science & Technology (JASIS&T; now Journal of the Association for Information Science and Technology), and Library & Information Science Research (LISR). These are among the core journals in LIS and were also used in Chu (2015), thus allowing us to make a direct comparison against earlier findings. We used the CrossRef API to fetch the XML copies of these articles, and only kept articles that describe empirical research. This was identified using the category label assigned to each article by its journal. However, we noticed a significant degree of inter- and intrajournal inconsistency in terms of how articles are labeled. Briefly, each journal used between 14 and 19 categories to label its articles. There appear to be repetitions in these categories within each journal, and a lack of consensus on how each journal categorizes its articles. We show details of this later in the results section. For JDoc, we included 381 (out of 508 articles published in this period) articles labeled as "research article" and "case study." For JASIS&T, we included 1,837 "research articles" (out of 2,150). For LISR, we included 382 "research articles" and "full length articles (FLA)." This created a data set of 2,599 research articles, more than twice as many as in Chu (2015).
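
As an illustration of this step, the sketch below lists journal articles from the period via the CrossRef REST API, filtered by publication date and type. It is a minimal sketch, not the pipeline actually used: the ISSN shown is assumed to be LISR's, and retrieval of the full-text XML itself depends on publisher arrangements and is not shown.

    # Minimal sketch: listing 2008-2018 journal articles for one journal via the
    # CrossRef REST API (metadata only; full-text XML retrieval is not shown).
    import requests

    BASE = "https://api.crossref.org/journals/{issn}/works"

    def list_articles(issn):
        cursor = "*"
        while True:
            resp = requests.get(
                BASE.format(issn=issn),
                params={
                    "filter": "from-pub-date:2008-01-01,until-pub-date:2018-12-31,type:journal-article",
                    "rows": 200,
                    "cursor": cursor,
                },
                timeout=30,
            )
            resp.raise_for_status()
            message = resp.json()["message"]
            items = message.get("items", [])
            if not items:
                break
            for item in items:
                yield item  # each record carries DOI, title, dates, etc.
            cursor = message["next-cursor"]

    # The ISSN below is assumed to be that of Library & Information Science Research
    for record in list_articles("0740-8188"):
        print(record["DOI"], record.get("title", [""])[0])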

The XML versions of research articles allow programmatic access to the structured content of the articles, such as the title, authors, abstract, sections of main text, subsections, and paragraphs. We extract this structured content from each article for automated analysis later. However, it is worth noting that different publishers have adopted different XML templates to encode their data, which created obstacles during data processing.

3.2. Exploratory Analysis

To support our development of the classification scheme, we begin by undertaking an exploratory analysis of our data set to gain a preliminary understanding of the scope of methods potentially in use. For this, we use a combination of clustering and terminology extraction methods. VOSviewer ( Van Eck & Waltman, 2010 ), a bibliometric software tool, is used to identify keywords from the publication data sets and their co-occurrence network within the three journals. Our approach consisted of three steps detailed below.

First, for each article, we extract the text content that most likely contains descriptions of its methodology (i.e., the "methodology text"). For this, we combine text content from the title, keywords, abstract, and also the methodology section (if available) of each article. To extract the methodology section from an article, we use a rule-based method to automatically identify the section that describes the research methods (i.e., the "methodology section"). This is done by extracting all level 1 sections in an article together with their section titles, and then matching a list of keywords against these section titles. If a section title contains any one of these keywords, we consider that section to be the methodology section. The keywords include "methodology, development, method, procedure, design, study description, data analysis/study, the model." Note that although these keywords are frequently seen in methodology section titles, we do not expect them to identify all variations of such section titles, nor can we expect every article to have a methodology section. However, we do not need to fully recover them, as long as we have a sufficiently large sample to inform our development of the classification scheme later on. This method identified methodology sections from 290 (out of 381), 1,283 (out of 1,837), and 346 (out of 383) of the JDoc, JASIS&T, and LISR articles, respectively. Still, there remains significant variation in how researchers name their methodology section. We show this later in the results section. When the methodology section cannot be identified by our method, we use the title, keywords, and abstract of the article only. We apply this process to each article in each journal, creating three corpora.
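
The following is a minimal sketch of this rule-based section finder. It assumes the level 1 sections have already been parsed from the XML into (title, text) pairs, and it reads the compound keyword "data analysis/study" as the two variants "data analysis" and "data study"; both are assumptions of the sketch rather than details stated above.

    # Minimal sketch: return the first level 1 section whose title contains a
    # methodology-related keyword, or None if no such section exists.
    METHOD_SECTION_KEYWORDS = [
        "methodology", "development", "method", "procedure", "design",
        "study description", "data analysis", "data study", "the model",
    ]

    def find_methodology_section(sections):
        """sections: list of (section_title, section_text) pairs."""
        for title, text in sections:
            title_lower = title.lower()
            if any(kw in title_lower for kw in METHOD_SECTION_KEYWORDS):
                return text
        return None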

Second, we import each corpus into VOSviewer (version 1.6.14) and use its text-mining function to extract important terms and create clusters based on co-occurrences of the terms. VOSviewer uses natural language processing algorithms in the process of identifying terms, involving steps such as copyright statement removal, sentence detection, part-of-speech tagging, noun phrase identification, and noun phrase unification. The extracted noun phrases are then treated as term candidates. Next, the number of articles in which a term occurs is counted (i.e., its document frequency, or DF). Binary counting is chosen to avoid the analysis being skewed by terms that are very frequent within single articles. We then select the top 60% of relevant terms ranked by document frequency, and exclude those with a DF of less than 10. These terms are used to support the development of the classification scheme.
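
Outside VOSviewer, the binary-counting and thresholding step could be approximated roughly as in the sketch below; VOSviewer's own relevance scoring is more involved, so this is only an approximation, and the noun phrase extraction itself is assumed to have been done already.

    # Rough approximation of the term selection step: binary document frequency,
    # a minimum DF of 10, and keeping the top 60% of the remaining terms.
    from collections import Counter

    def select_terms(doc_terms, min_df=10, keep_fraction=0.6):
        """doc_terms: iterable of per-article term collections."""
        df = Counter()
        for terms in doc_terms:
            df.update(set(terms))  # binary counting: one vote per article
        frequent = [(t, c) for t, c in df.items() if c >= min_df]
        frequent.sort(key=lambda tc: tc[1], reverse=True)
        return frequent[: int(len(frequent) * keep_fraction)]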

To facilitate our coders in their task, the terms are further clustered into groups using the clustering function in VOSviewer. Briefly, the algorithm starts by creating a keyword network based on the co-occurrence frequencies within the title, abstract, keyword list, and methodology section. It then uses a technique that is a variant of the modularity function by Newman and Girvan (2004) and Newman (2004) for clustering the nodes in a network. Details of this algorithm can be found in Van Eck and Waltman (2014) . We expect terms related to the same or similar research methods to form distinct clusters. Thus, by creating these clusters, we seek to gain some insight into the methods they may represent.
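
A rough equivalent of this clustering step, for readers without VOSviewer, is sketched below using networkx's greedy modularity communities on a term co-occurrence network. This approximates, rather than reproduces, the Newman-Girvan-style modularity variant used by VOSviewer.

    # Approximate sketch: build a term co-occurrence network and cluster it with
    # a modularity-based community detection algorithm.
    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def cluster_terms(doc_terms, selected_terms):
        """doc_terms: per-article term collections; selected_terms: terms kept earlier."""
        selected = set(selected_terms)
        graph = nx.Graph()
        for terms in doc_terms:
            present = sorted(set(terms) & selected)
            for a, b in combinations(present, 2):
                # accumulate co-occurrence counts as edge weights
                weight = graph.get_edge_data(a, b, default={}).get("weight", 0) + 1
                graph.add_edge(a, b, weight=weight)
        return [set(c) for c in greedy_modularity_communities(graph, weight="weight")]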

The term lists and their cluster memberships for the three journals are presented to the coders, who are asked to manually inspect them and consider them in their development of the classification scheme below.

3.3. Classification Scheme

Our development of the classification of research methods is based on a deductive approach informed by the previous literature and our exploratory analysis. A sample of around 110 articles (the "shared sample") was randomly selected from each of the three journals to be coded by three domain experts. To define "research methods," we asked all coders to create a flat classification of methods, primarily following the scheme proposed by Chu (2015) for reference. They could identify multiple methods for an article, and when this was the case, they were asked to identify the "main" (i.e., "first" as in Chu) method and other "secondary" methods (i.e., second, third, etc. in Chu). While Chu (2015) took a view focusing on data collection methods, we asked coders to consider both modes of analysis and data collection methods as valid candidates, as in Kim (1996). We did not ask coders to explicitly separate analysis from data collection, because (as reflected in our literature review) there is disagreement on how different methods are classified from these angles.

Coders were asked to reuse the methods in Chu's classification where possible. They were also asked to refer to the term lists extracted before, to look for terms that may support existing theory, or terms that may indicate new methods not present in Chu's classification. When no codes from Chu's model could be used, they were asked to discuss and create appropriate new codes, informed in particular by the term lists. Once the codes were finalized, the coders split the remaining data equally for coding. An inter-annotator agreement (kappa statistic) of 86.7 was obtained on the shared sample when considering only the main method identified.
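
As a simple illustration of the agreement check, the sketch below computes Cohen's kappa on the main-method labels assigned by two coders over the shared sample, using scikit-learn; with three coders, agreement could be reported pairwise or with Fleiss' kappa. The labels shown are made up for illustration.

    # Illustrative sketch: pairwise Cohen's kappa on main-method labels.
    from sklearn.metrics import cohen_kappa_score

    coder_a = ["questionnaire", "interview", "bibliometrics", "content analysis"]
    coder_b = ["questionnaire", "interview", "bibliometrics", "questionnaire"]

    print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.3f}")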

One issue at the beginning of the coding process was the notable duplication and overlap among the methods reported in the existing literature, as well as among those proposed by the coders. Using Chu's scheme as an example, ethnography often involves participant observation, whereas bibliometrics may use methods such as link analysis (as part of webometrics). Another issue was the confusion of "topic" and "method." For example, an article could clearly discuss a bibliometrics study, but it could be debatable whether it uses a "bibliometrics" method. To resolve these issues, coders were asked to apply two principles. The first was to distinguish the goal of an article from the means implemented to achieve it. The second was to treat the main method as the one that generally takes up the larger part of the text. Examples are provided later in the results section.

During the coding process, coders were also asked to document the keywords that they found to be indicative of each research method. For example, "content analysis" and "inter coder/rater reliability" are often seen in articles that use the "content analysis" method, whereas "survey," "Likert," "sampling," and "response rate" are often seen in articles that use "questionnaire." Note, however, that it is not possible to create an exhaustive vocabulary for all research methods. Many keywords can be ambiguous, and some research methods may have only a very limited set of keywords. Nevertheless, these keywords form an important resource for the automated method proposed below. Our proposed method classification contains 29 methods. These, together with their associated keywords, are shown and discussed later in the results section.

3.4. Information Extraction of Research Methods

In this section, our goal is to develop automated IE methods that are able to determine the type of research method(s) that are used by a research article. As discussed before, this is different from the large number of studies on key-insights extraction that are already conducted in other disciplines. First, previous studies aim to classify text segments (e.g., sentences, phrases) within a research article into broad categories including “methods,” without identifying what the methods are. As we have argued, these are two different tasks. Second, compared to the types of key insights for extraction, our study tackles a significantly larger number of fine-grained tasks—29 research methods. This implies that our task is much more challenging and that previous methods will not be directly transferable.

As our study is the first to tackle this task in LIS, we opt for a rule-based method for two reasons. First, compared to machine learning methods, rule-based methods have been found to offer better interpretability and flexibility when requirements are unclear (Chiticariu, Li, & Reiss, 2013). This is particularly important for studies in new domains. Second, despite increasing interest in machine learning-based methods, Nasar et al. (2018) showed that they do not have a clear advantage over rule-based methods. In addition, we focus on a rather narrow target: identifying the single main method used. Note that this does not imply an assumption that each article uses only one method; it is rather a built-in limitation of our IE method. The reasons, as we shall discuss in more detail later, are twofold. On the one hand, almost every article will mention multiple methods, but it is extremely difficult to determine automatically which are actually used for conducting the research and which are not. On the other hand, as per Chu (2015), articles that report using multiple methods remain a small fraction (e.g., 23% for JDoc, 13% for JASIS&T, and 18% for LISR in 2009–2010). With these in mind, it is extremely easy for automated methods to make false positive extractions of multiple methods. Therefore, our aim here is to explore the feasibility and understand the challenges of achieving our goal, rather than to maximize the potential performance of the automated method.

We used a smaller sample of 30 coded articles to develop the rule-based method, with the remaining 300 for evaluation later on. Generally, our method searches the keywords (as explained before) associated with each research method within the restricted sections of an article. The method receiving the highest frequency will be considered to be the main research method used in that study. As we have discussed previously, many of these keywords can be ambiguous, but we hypothesize that by restricting our search within specific contexts, such as the abstract or the methodology section, there will be a higher possibility of recovering true positives. Figure 1 shows the overall workflow of our method, which will be explained in detail below.

Figure 1. Overview of the IE method for research method extraction.

3.4.1. Text content extraction

In this step, we aim to extract the text content from the parts of an article that are most likely to mention the research methods used. We focus on three parts: the title of an article, its abstract, and the methodology section, if available. Titles and abstracts can be directly extracted from our data set following the XML structures. For methodology sections, we use the same method introduced before for identifying them.

3.4.2. Keywords/keyphrase matching

In this step, we aim to look up the keywords/keyphrases (to be referred to uniformly as "keywords" below) associated with each research method within the text elements identified above. For each research method, and for each associated keyword, we count its frequency within each of the identified text elements. Note that the inflectional forms of these keywords (e.g., plural forms) are also searched. Then we sum the frequencies of all matched keywords for each research method within each text element to obtain a score for that research method within that text element. We denote this as freq(m, text_i), where m denotes one of the research methods and text_i denotes the text extracted from part i of the article, with i ∈ {title, abstract, methodsection}.

3.4.3. Match selection

In this step, we aim to determine the main research method used in an article based on the matches found before. Given the set of matched research methods for a particular type of text element, that is, for a set {freq(m_1, text_i), freq(m_2, text_i), …, freq(m_k, text_i)} where i is fixed, we simply choose the method with the highest frequency. As an example, if "content analysis" and "interview" have frequencies of 5 and 3, respectively, in the abstract of an article, we select "content analysis" to be the method detected from the abstract of that paper. Next, we select the research method based on the following priority: title > abstract > methodology section. In other words, if a research method is found in the title, abstract, and methodology section of an article, we choose only the one found in the title. Following the example above, if "content analysis" is the most frequent method based on the abstract of an article, and "questionnaire" is the one selected for its methodology section, we choose "content analysis" to be the research method used by the study. If none of the research methods are found in any of the three text elements, we consider the article to be "theoretical." If multiple methods are found to tie based on our method, then the one appearing earlier in the text will be chosen to be the main method.
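
The keyword matching and match selection steps can be summarized in the sketch below. The keyword dictionary is truncated to two methods for brevity, and inflected forms are approximated with an optional plural suffix rather than full morphological analysis; both simplifications are ours.

    # Condensed sketch of Sections 3.4.2-3.4.3: keyword frequencies per text
    # element, then selection with priority title > abstract > methodology
    # section, ties broken by the earliest occurrence in the text.
    import re

    METHOD_KEYWORDS = {
        "content analysis": ["content analysis", "inter coder reliability", "krippendorff"],
        "questionnaire": ["questionnaire", "survey", "likert", "respondent", "response rate"],
        # ... the remaining methods and keywords are omitted here
    }

    def method_scores(text):
        """Return {method: (total keyword frequency, earliest match offset)}."""
        scores = {}
        lowered = text.lower()
        for method, keywords in METHOD_KEYWORDS.items():
            freq, first = 0, len(lowered)
            for kw in keywords:
                for m in re.finditer(r"\b" + re.escape(kw) + r"(?:es|s)?\b", lowered):
                    freq += 1
                    first = min(first, m.start())
            if freq:
                scores[method] = (freq, first)
        return scores

    def main_method(title, abstract, methodology_section):
        for text in (title, abstract, methodology_section):
            if not text:
                continue
            scores = method_scores(text)
            if scores:
                # highest frequency wins; earlier position breaks ties
                return max(scores, key=lambda m: (scores[m][0], -scores[m][1]))
        return "theoretical"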

3.4.4. Evaluation

Given a particular type of research method in the data set, the number of research articles that reported using that method is the "total actual positives," and the number predicted by the IE method is the "total predicted positives." The intersection of the two is the "true positives." Because the problem is cast as a classification task, and in line with work in this direction in other disciplines, we treat Precision and Recall with equal weights in computing F1. We also compute the "micro" average of Precision, Recall, and F1 over the entire data set across all research methods, where the "true positives," "total predicted positives," and "total actual positives" are simply the sums of the corresponding values for each research method in the data set.
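
Under the simplifying assumption that each article carries exactly one gold label and one predicted label (the single main method), the micro-averaged scores reduce to the sketch below.

    # Sketch of the micro-averaged evaluation: counts are summed over all method
    # classes before computing Precision, Recall, and F1.
    def micro_prf(gold, predicted):
        """gold, predicted: dicts mapping article id -> main method label."""
        tp = sum(1 for doc, label in predicted.items() if gold.get(doc) == label)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1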

4. Findings

4.1. Data Collection

As mentioned previously, we notice a significant degree of inter- and intrajournal inconsistency in how different journals categorize their articles. We show the details in Table 2 .

Table 2. Different categorizations of published articles by the three journals

JASIS&T (count) | JDoc (count) | LISR (count)
Research article (1837) | Research paper (370) | FLA (350)
Brief communication (115) | Conceptual paper (121) | EDI (40)
Letter to the editor (65) | Review (75) | research-article (32)
Editorial (31) | Secondary article (52) | e-review (23)
Advances in information science (31) | Literature review (16) | ANN (12)
Erratum (17) | Viewpoint (14) | BRV (11)
In this issue (13) | Editorial (11) | e-non-article (11)
Perspectives on design: information technologies and creative practice (12) | Case study (11) | IND
Opinion paper (10) | Article | e-conceptual-paper
Opinion | General view | SCO
AIS review | Book review | EDB
Review | Technical paper | review-article
Opinion piece | Guest editorial | E-literature review
Depth review | List of referees 2013 | REV
Guest editorial | | ERR
| | PRP
| | COR
| | DIS
| | PUB

First, there is a lack of definition of these categorization labels from official sources, and many of the labels are not self-explanatory. For example, it is unclear why fine-grained JASIS&T labels such as "advances in information science" and "AIS review" deserve to be separate categories, or what "technical paper" and "secondary article" entail in JDoc. For LISR, which uses mostly acronym codes to label its articles, we were unable to find a definition of these codes.

Second, different journals have used a different set of labels to categorize their articles. While the three journals appear to include some types that are the same, some of these are named in different ways (e.g., “opinion paper” in JASIS&T and “viewpoint” in JDoc). More noticeable is the lack of consensus in their categorization labels. For example, only JASIS&T has “brief communication,” only JDoc has “secondary article,” and only LISR has “non-article.”

A more troubling issue is the intrajournal inconsistency. Each journal has used a large set of labels, many of which appear to be redundant. For example, in JASIS&T, "opinion paper," "opinion," and "opinion piece" seem to refer to the same type, and "depth review" and "AIS review" seem to be part of "review." In JDoc, "general review" and "book review" seem to be part of "review," and "article" seems to be too broad a category. In LISR, it is unclear why "e-review" is needed in addition to "review-article." Also, note that for many categories there are only a handful of articles, an indication that those labels may no longer be used, or were even created in error.

4.2. Exploratory Analysis

Figures 2–4 visualize the clusters of methodology-related keywords found in the articles from each of the three journals. All three journals show a clear pattern of three separated large clusters. For LISR, the three clusters emerge as follows: one (green) centers on "interview," with keywords such as "interviewee," "theme," and "transcript"; one (red) centers on "questionnaire," with keywords such as "survey," "respondent," and "scale"; and one (blue) contains miscellaneous keywords, many of which seem to correlate weakly with studies of the scientific literature (e.g., keywords such as "author," "discipline," and "article") or bibliometrics generally.

Figure 2. Cluster of terms extracted from the LISR corpus (top 454 terms ranked by frequency extracted from the entire corpus of 382 articles). Font size indicates frequency of the keyword.

Figure 3. Cluster of terms extracted from the JDoc corpus (top 451 terms ranked by frequency extracted from the entire corpus of 381 articles). Font size indicates frequency of the keyword.

Figure 4. Cluster of terms extracted from the JASIS&T corpus (top 2,027 terms ranked by frequency extracted from the entire corpus of 1,837 articles). Font size indicates frequency of the keyword.

For JDoc, the two clusters around “interview” (green) and “questionnaire” (blue) are clearly visible. In contrast to LISR, the third cluster (red) features keywords that are often indicative of statistical methods, algorithms, and use of experiments. Overall, the split of the clusters seems to indicate the separation of methods that are typically qualitative (green and blue) and quantitative (red).

The clusters from JASIS&T differ more markedly from those of LISR and JDoc and also have clearer boundaries. One cluster (red) appears to represent methods based on "interview" and "survey"; one (green) features keywords indicative of bibliometrics studies; and one (blue) has keywords often seen in studies using statistical methods, experiments, or algorithms. Comparing the three journals, we see a similar focus of methodologies between LISR and JDoc, but quite different patterns in JASIS&T. The latter appears to be more open to quantitative and data science research.

4.3. Classification Scheme

Table 3 displays our proposed method classification scheme, together with references to previous work where appropriate, and keywords that were indicative of the methods. Notice that some of the keywords are selected based on the clusters derived from the exploratory studies. Also, the keywords are by no means a comprehensive representation of the methods, but only serve as a starting point for this type of study. In the following we define some of the methods in detail and explain their connection to the literature.

Table 3. The proposed research method classification scheme

  
Method | Definition | Indicative keywords
bibliometrics | Same as Chu | impact factor, scientometric, bibliometric, citation analysis, h-index…
content analysis | Same as Chu | content analysis, inter coder reliability, inter annotator agreement, krippendorff
delphi study | Same as Chu | delphi study
ethnography/field study | Traditional ethnographic studies, excluding those done in a digital context (see "digital ethnography" below) | Hammersley, participant observation, ethnography, ethnographic, ethnographer…
experiment | Classic experimental studies, not the generalized concept as per Chu | dependent variable, independent variable, experiment
focus group | Same as Chu | focus group
historical method | Same as Chu | historical method
interview | Same as Chu | interview, interviewed, interviewer, interviewee, interviewing
observation | Same as Chu | observation
questionnaire | Same as Chu | respondent, questionnaire, survey, Likert, surveyed…
research diary/journal | Same as Chu | diary study, cultural probe
think aloud protocol | Same as Chu | think aloud
transaction log analysis | Same as Chu | log analysis/technique
theoretical studies | Same as Chu | Studies that cannot be classified into any of the other method categories
webometrics | Same as Chu | webometrics, cybermetrics, link analysis

agent based modeling/simulation | Studies that use computational modeling methods for the purpose of simulation | agent model/modeling, multi-agents
annotation | Studies that focus on using human users to create coded data | annotation, tagging
classification | Studies that focus on developing computational classification techniques | classification, classify
clustering | Studies that focus on developing computational clustering techniques | cluster, clustering
comparative evaluation | Studies that follow systematic evaluation procedures to compare different methods | comparative evaluation, evaluative studies
document analysis | Studies that analyze secondary document collections (e.g., historical policy documents, transcripts) with a critical close reading | document/textual analysis, document review
information extraction | Studies that develop computational methods for the purpose of extracting structured information from texts | named entity recognition, NER, relation extraction
IR related indexing/ranking/query methods | Studies that develop methods with a goal to improve search results | learning to rank, term weighting, indexing method, query expansion, question answering
mixed method | | cresswell and plano clark, mixed method
digital ethnography | Studies applying ethnography to the digital context | digital ethnography, netnography, netnographic
network analysis | Studies that apply network theories with a focus to understand the properties of social networks | network analysis/study
statistical methods | Studies of correlations between variables, hypothesis testing; proposing new statistical metrics that quantify certain problems other than bibliometrics. This category excludes comparisons based on simple descriptive statistics | correlation, logistic test, t-test, chi-square, hypothesis test…
topic modeling | Studies that develop computational topic modeling methods | topic model, topic modeling, LDA
user task based study | Studies that require human users to carry out certain tasks (sometimes using a system) to produce data for further analysis | user study, user analysis
  

Our study was able to reuse most of the codes from Chu (2015). We split Chu's "ethnography/field study" into two categories: "ethnography/field study," which refers to traditional ethnographic research (e.g., using participant observation in real-world settings), and "digital ethnography," which refers to the use of ethnographic methods in the digital world, including work following Kozinets' (2010) suggestions for "netnography," an influential branch of this work.

The major change we have introduced concerns the "experiment" category. Chu (2015) argued for a renewed perspective on "experiment," in the sense that it refers to a broad range of studies where "new procedures (e.g., key-phrase extraction), algorithms (e.g., search result ranking), or systems (e.g., digital libraries)" are created and subsequently evaluated. This differs from the classic "experimental design" as per Campbell and Stanley (1966). However, we argue that this is an overgeneralization: Chu showed that more than half of the articles from JASIS&T used this method, and such a broad category is less useful because it hides the complex, multidisciplinary nature of LIS. Therefore, in our classification, we use "experiment" to refer to the classic "experimental design" method and introduce a more fine-grained list of methods that would have been classified as "experiment" by Chu. These include "agent based modeling/simulation," "classification," "clustering," "information extraction," "IR related indexing/ranking/query methods," and "topic modeling," all of which focus on developing procedures or algorithms (rather than the simple application of such techniques for a different purpose) that are often subject to systematic evaluation; and "comparative evaluation," which focuses on following scientific experimental protocols to systematically compare and evaluate a set of methods.

Further, we added methods that do not necessarily overlap with Chu's classification. For example, "annotation" refers to studies that involve users annotating or coding certain content, with the coding frame or the coded content being the primary output of the study. "Document analysis" refers to studies that analyze a collection of documents (e.g., government policy papers) or media items (e.g., audio or video data) to discover patterns and insights. "Mixed methods" is added, as studies such as Grankikov et al. (2020) revealed an upward trend in the usage of this research method in LIS. Note that in this context, "mixed methods" follows Fidel's (2008) definition: research that combines data collection methods in a particular sequence for a particular reason, rather than any research that happens to involve multiple forms of data. "Statistical methods" has a narrow scope encompassing studies of correlation between variables or hypothesis testing, as well as those that propose metrics to quantify certain problems. This excludes metrics specifically targeting the bibliometrics domain (e.g., h-index), as the level of complexity and the extent of effort devoted to that area justify its being an independent umbrella term that encompasses various statistical metrics. "Statistical methods" also excludes generic comparisons based on descriptive statistics, which are very common (and thus can be overgeneralizing) in quantitative research; moreover, the majority of computational methods for classification, clustering, or regression are, in a more general sense, statistically based. Finally, "user task based studies" refers to systematic methods that involve human users undertaking certain tasks following certain (often different) processes, with the goal of comparing their behaviors or evaluating the processes.

Revisiting the issue of duplication and overlap often seen in the scope of LIS research methods discussed before, we use examples to illustrate how our classification should be applied to avoid this issue. In Table 4, the articles by Zuccala, van Someren, and van Bellen (2014), Wallace, Gingras, and Duhon (2008), Denning, Soledad, and Ng (2015), and Solomon and Björk (2012) all study bibliometrics problems, but their main research method is classified differently under our scheme. Zuccala et al. (2014) focus on developing a classifier to automatically categorize sentences in reviews by their scholarly credibility and writing style. The article studies a problem of a bibliometric nature and used human coders to annotate training data; however, its ultimate goal is to develop and evaluate a classifier, which is the focus of the majority of the text. Therefore, the main research method is considered to be "classification," "annotation" may be considered a secondary research method, and "bibliometrics" is more appropriate as the topic of the study. Wallace et al. (2008) follows a similar pattern, where the content is dominated by technical details of how the "network analysis" method is constructed and applied to bibliometrics problems. Denning et al. (2015) describes a tool whose core method is the formulation of a statistical indicator, which the authors propose to measure book readability; its main method therefore qualifies under "statistical methods." Solomon and Björk (2012) use descriptive statistics to compare open access journals. By definition, we do not classify such an approach as "statistical methods," but it can be argued that the authors used certain metrics to quantify a specific bibliometrics problem, and we therefore label its main method as "bibliometrics." As for our own article, we consider both "content analysis" and "classification" to be our main methods, and "annotation" a secondary method, because it serves the purpose of content analysis and of creating training data for classification. "Bibliometrics" is more appropriate as the topic rather than the method we use, because our work adapts generic methods to bibliometric problems.

Example articles and how their main research method will be coded under our scheme

A machine-learning approach to coding book reviews as quality indicators: Toward a theory of megacitation    Classification 
A new approach for detecting scientific specialties from raw cocitation networks    Network analysis 
A readability level prediction tool for K-12 books    Statistical methods 
A study of open access journals using article processing charges    Bibliometrics 

Figure 5 compares the distribution of different research methods found in the samples of the three journals. We notice several patterns. First, compared to JDoc and LISR, work published in JASIS&T has a clear emphasis on a wider range of computational methods. This is consistent with findings from Chu (2015). Second, JASIS&T also has a substantial focus on bibliometrics research, which lacks representation in JDoc and LISR. Third, in JDoc and LISR, questionnaire and interview instead remain the most dominant research methods. These findings resonate with those from our exploratory analysis. Fourth, for all three journals, a noticeable fraction of published work (between 10% and 18%) is of a theoretical nature, where no data collection or analysis methods are documented. Finally, we could not identify studies using "webometrics" as a method, although many may qualify under such a topic; however, they often use other methods (e.g., content analysis of web collections, annotation of web content) to study a webometrics problem.

Distribution of research methods found in the samples of the three journals. The y-axis indicates percentages represented by a method within a specific journal collection.


4.4. Information Extraction of Research Methods

We evaluate our IE method using 300 articles from the coded sample data 6 (disjoint from the smaller set used for developing the method), and present the Precision, Recall, and F1 scores below. As mentioned before, we only evaluate the main method extracted by the IE process, using Eqs. 1–3. We then show the common errors made by our method.
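Equations 1–3 are not reproduced in this excerpt. As a sketch of their likely form, consistent with the later footnote on micro-averaging (each article receives exactly one predicted main method), they would follow the standard definitions:

P = \frac{\text{\# correctly predicted main methods}}{\text{\# total predicted positives}}, \quad
R = \frac{\text{\# correctly predicted main methods}}{\text{\# total actual positives}}, \quad
F_1 = \frac{2PR}{P + R}

Because each article is assigned exactly one main method, the total predicted positives equal the total actual positives, which is why the micro-averaged P, R, and F1 reported below coincide.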

4.4.1. Overview of Precision, Recall, and F1

Table 5 shows the Precision, Recall, and F1 of our IE method obtained on the annotated samples from the three journals. Overall, the results show that the task is a very challenging one, as our method obtained rather poor results on most of the research methods. Across the different journals, and considering the size of the sample, our method performed consistently on "interview," "questionnaire," and "bibliometrics." Given the nature of our method (i.e., keyword lookup), this suggests that terminology related to these research methods may be used more often in nonambiguous contexts. On average, our IE method achieves a micro-average F1 of 0.783 on JDoc, 0.811 on LISR, and 0.610 on JASIS&T. State-of-the-art methods on key-insight extraction generally achieve an F1 of between 0.03 (Lin, Ng et al., 2010) and 0.53 (Kovačević, Konjović et al., 2012) on tasks related to "research methods" at either sentence or phrase level. Note that these figures should not be compared directly as-is, because the task we deal with is different: We aim to identify specific methods, whereas the previous studies only aim to determine whether a specific piece of text describes a research method or not.
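For illustration only, a minimal keyword-lookup detector in the spirit of this approach might look like the sketch below. The keyword lists are a small, hypothetical subset of those in the appendix, and the actual implementation also inspects the methodology section and applies further rules.

import re
from collections import Counter
from typing import Optional

# Hypothetical subset of the per-method keyword lists from the appendix.
METHOD_KEYWORDS = {
    "interview": ["interview", "interviewee", "interviewer"],
    "questionnaire": ["questionnaire", "survey", "respondent", "likert"],
    "bibliometrics": ["bibliometric", "citation analysis", "h-index", "scientometric"],
    "classification": ["classifier", "classification", "classify"],
}

def detect_main_method(text: str) -> Optional[str]:
    """Return the method whose keywords occur most often in the text, or None."""
    text = text.lower()
    counts = Counter()
    for method, keywords in METHOD_KEYWORDS.items():
        for kw in keywords:
            counts[method] += len(re.findall(r"\b" + re.escape(kw) + r"\w*", text))
    best, freq = counts.most_common(1)[0]
    return best if freq > 0 else None  # no match: fall back to manual coding

# Example: an abstract mentioning semi-structured interviews maps to "interview".
print(detect_main_method("We conducted semi-structured interviews with 20 librarians."))

Such frequency-based lookup is exactly what makes the errors discussed in Section 4.4.3 possible, since a keyword can dominate an article without describing its main method.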

Precision (P), Recall (R) and F1 on the three journals. "–" indicates that no articles were classified under a method by the coders and our method did not predict that method for any article. For the absolute number of instances for each method, see Figure 5.

  
Method JDoc (P / R / F1) LISR (P / R / F1) JASIS&T (P / R / F1) 
bibliometrics 0.917 1.00 0.957 1.00 0.833 0.909 0.846 0.846 0.846 
content analysis 0.857 0.462 0.600 0.462 0.462 0.462 0.500 0.667 0.571 
delphi study 1.00 1.00 1.00 — — — 
ethnography/field study 0.875 0.875 0.875 0.250 1.00 0.400 — — — 
experiment 1.00 0.714 0.833 0.400 0.667 0.500 0.071 0.250 0.111 
focus group 0.500 0.500 0.500 0.667 0.750 0.706 — — — 
historical method 1.00 1.00 1.00 — — — — — — 
interview 0.793 0.821 0.807 0.735 0.781 0.758 1.00 0.500 0.667 
observation 0.375 1.00 0.545 0.444 0.500 0.471 
questionnaire 0.600 0.750 0.667 0.827 0.915 0.869 0.333 0.667 0.444 
research diary/journal – – – – – – 
think aloud protocol – – – 0.250 1.00 0.400 – – – 
transaction log analysis – – – – – – 
webometric – – – – – – – – – 
theoretical studies 0.310 0.923 0.462 0.474 0.857 0.610 0.476 0.769 0.588 
  
 
agent based modeling – – – – – – 1.00 1.00 1.00 
annotation – – – – – – 
classification 0.143 1.00 0.250 – – – 0.467 0.583 0.519 
clustering – – – 0.833 1.00 0.909 
comparative evaluation – – – – – – 
document analysis 1.00 0.200 0.333 1.00 0.600 0.750 
information extraction – – – – – – 1.00 0.250 0.400 
IR related indexing/ranking/query methods – – – – – – 0.67 0.25 0.363 
mixed method 0.667 1.000 0.800 – – – – – – 
digital ethnography 1.00 1.00 1.00 – – – – – – 
network analysis 1.00 1.00 1.00 0.750 0.333 0.462 
statistical methods 0.167 1.00 0.286 0.500 0.444 0.471 
topic modeling – – – – – – 1.00 0.500 0.667 
user task based study 1.00 0.500 0.667 
Micro-average 0.783 0.783 0.783 0.811 0.811 0.811 0.610 0.610 0.610 
  

4.4.2. Impact of the quality of the abstract

We conducted further analysis to investigate the quality of abstracts and its impact on our IE method. This includes three types of analysis. To begin with, we disabled the "methodology section" extraction component in our method and retested it on the same data set, excluding articles whose methods can only be identified from the methodology section. The results are shown in Table 6. On average, we obtained a noticeable improvement on the JDoc data set, but not on LISR or JASIS&T. Among the three journals, JDoc is the only one that enforces a structured abstract. Arguably, this ensures consistency and quality in writing the abstracts, from which our IE method may have benefited.

Precision (P), Recall (R) and F1 on the three journals when the text from the methodology section (if available) is ignored. "–" indicates that no articles were classified under a method by the coders and our method did not predict that method for any article. Bold indicates better results whereas underline indicates worse results compared to Table 5. For the absolute number of instances for each method, see Figure 5.

  
Method JDoc (P / R / F1) LISR (P / R / F1) JASIS&T (P / R / F1) 
bibliometrics 0.917 1.00 0.957 1.00 0.833 0.909 0.846 0.846 0.846 
content analysis   0.462   0.444 0.363 0.400 0.500 0.667 0.571 
delphi study 1.00 1.00 1.00 – – – 
ethnography/field study 0.875 0.875 0.875 0.250 1.00 0.400 – – – 
experiment 1.00 0.714 0.833   0.500   0.071 0.250 0.111 
focus group 0.500 0.500 0.500       – – – 
historical method 1.00 1.00 1.00 – – – – – – 
interview   0.821     0.766   1.00 0.500 0.667 
observation   1.00     0.429   
questionnaire 0.600 0.750 0.667   0.891   0.333 0.667 0.444 
research diary/journal – – – – – – 
think aloud protocol – – –   1.00   – – – 
transaction log analysis – – – – – – 
webometric – – – – – – – – – 
theoretical studies   0.923   0.457   0.604 0.476 0.769 0.588 
  
 
agent based modeling – – – – – – 1.00 1.00 1.00 
annotation – – – – – – 
classification   1.00   – – – 0.467 0.583 0.519 
clustering – – – 0.833 1.00 0.909 
comparative evaluation – – – – – – 
document analysis 1.00     1.00 0.500 0.667 
information extraction – – – – – – 1.00 0.250 0.400 
IR related indexing/ranking/query methods – – – – – – 0.67 0.25 0.363 
mixed method 0.667 1.000 0.800 – – – – – – 
digital ethnography 1.00 1.00 1.00 – – – – – – 
network analysis       1.00 1.00 1.00 0.750 0.333 0.462 
statistical methods 0.167 1.00 0.286 0.500 0.444 0.471 
topic modeling – – – – – – 1.00 0.500 0.667 
user task based study 1.00 0.500 0.667 
             0.610 0.610 0.610 
  

To verify this, we conducted the second type of analysis. We asked coders to revisit the articles they coded and identify the percentage of articles for which they were unable to confidently identify the main method without going to the full text. This provides an alternative and more direct view of the quality of abstracts from the three journals, without any bias from the IE method. The figures are 5%, 6%, and 12% for JASIS&T, JDoc, and LISR respectively. This shows that, to a human reader, both JDoc and JASIS&T abstracts are comparatively more explicit than LISR abstracts when it comes to explaining their methods, which may be an indication of better quality abstracts. To some extent, this is consistent with the pattern we observed from the previous analysis. The quality of JASIS&T abstracts, however, does not translate to better performance of our IE method when focusing on the abstracts only. This could be partially attributed to the wider diversity of methods noted in JASIS&T articles (Figure 5), as well as to the more implicit description of many of those methods compared with LISR and JDoc. For example, none of the articles using "comparative evaluation" used the keywords shown in Table 3. Instead, they used generic words that, if included, could have significantly increased false positives (e.g., "compare" and "evaluate" are typically used but are nondiscriminative for identifying studies that solely focus on comparative evaluations). Similarly, only one article using "user task based studies" used our proposed keywords. We will return to this issue in later sections.

Our third type of analysis involves studying the association between the length of an abstract and its quality, and subsequently (and potentially) its impact on our IE method. We notice that the three journals have different limits on the length of abstracts: 150 words for LISR, 250 for JDoc, and 200 for JASIS&T. We do not hypothesize a correlation between an abstract's length and its clarity (and hence its quality), as this can be argued from contradictory angles. On the one hand, one may argue that a shorter length forces authors to be more explicit about their methodology; on the other hand, one could also argue that a shorter length may result in more ambiguity, as authors have little space to explain their approach clearly. Instead, we started by analyzing the distribution of abstract lengths in our data sets across the three journals. We wrote a program that counts the number of words in each abstract, where words are delimited by white space characters only. We made a surprising finding, as shown in Figure 6: a very large proportion of articles did not comply with the abstract length limit.
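A minimal sketch of such a count, assuming the abstracts are available as plain strings keyed by article identifier (the identifiers below are placeholders), is:

# Count whitespace-delimited words in each abstract.
abstracts = {
    "doi-placeholder-1": "This study investigates the information behaviour of ...",
    "doi-placeholder-2": "We report a survey of academic library users ...",
}

word_counts = {doi: len(text.split()) for doi, text in abstracts.items()}
for doi, n in sorted(word_counts.items()):
    print(doi, n)

str.split() with no argument splits on runs of any whitespace, which matches the simple definition of a word used here.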

Distribution of abstract length across the three different journals.


Figure 6 suggests that at least 50% of articles in our JASIS&T and LISR data sets exceeded the abstract word limits. The situation for JDoc is not much better. Across all three journals, there are also very long abstracts that are almost double the word limit 8, and there is a noticeable number of articles with very short abstracts, such as those containing fewer than 100 words: 1 for JDoc, 34 for LISR, and 14 for JASIS&T. Overall, we do not see significantly different patterns in the distributions across the three journals. We further manually inspected a sample of 20 articles from each journal to investigate whether there were any patterns in terms of the publication year of the articles that exceeded the word limit, because we were uncertain whether the abstract word limit had changed during the history of each journal. Again, we could not find any consistent patterns. For JDoc, the distribution is 2010 (3), 2011 (3), 2013 (4), 2014 (1), 2015 (2), 2016 (2), 2017 (2), and 2018 (3). For LISR, the distribution is 2010 (5), 2011 (1), 2012 (4), 2013 (2), 2014 (1), 2015 (4), 2016 (2), and 2018 (1). For JASIS&T, the distribution is 2010 (3), 2011 (4), 2012 (2), 2013 (1), 2014 (4), 2015 (2), 2016 (1), 2017 (2), and 2018 (1). Articles exceeding the abstract length limit can be found in any year in all three journals. For these reasons, we argue that there is no strong evidence of any association between abstract length and the performance of our IE method. However, the lack of compliance with the journal requirements is rather concerning. While the quality of abstracts may be a factor that affects our method, it is worth noting that our method for detecting the methodology section has its limitations. Some articles do not have an explicit "methodology" section; instead, they may describe different parts of their method in several top-level sections (e.g., see Saarikoski, Laurikkala et al., 2009). Some may have a "methodology" section that is a subsection of a top-level section (e.g., the method section is within the "Case Study" section in Freeburg, 2017). A manual inspection of 50 annotated samples revealed that this method failed to identify the methodology section in 10% of articles; in other words, the method has a 10% error rate. Thus, arguably, with a more reliable method for finding methodology sections, or more generally content sections that describe methodology, our IE method could perform better.
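As an illustration of why such failures occur, a simple heading-based detector of the kind described, sketched here under the assumption that each article is available as a list of (heading, text) pairs for its top-level sections only, could be:

import re

# Headings that plausibly introduce a methodology section; a heuristic, not an exhaustive list.
METHOD_HEADING = re.compile(r"\b(methodology|methods?|research design|study design)\b", re.IGNORECASE)

def find_methodology_section(sections):
    """Return the text of the first top-level section whose heading looks methodological."""
    for heading, text in sections:
        if METHOD_HEADING.search(heading):
            return text
    return None  # roughly 10% of our sample fell through to a case like this

sections = [("Introduction", "..."), ("Case Study", "... 2.1 Methodology ..."), ("Results", "...")]
print(find_methodology_section(sections))  # None: the method text is nested under "Case Study"

Because only top-level headings are inspected, method descriptions nested one level deeper, as in the Freeburg (2017) example, are missed.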

4.4.3. Error analysis

To further understand the challenges of this task, we analyzed all errors made by our IE method and explain them below. Of the errors, 67% 9 are due to keywords being used in different contexts than expected. For example, we define "classification" to be methods that use computational approaches for classifying data. However, the keywords "classify" or "classification" are also used frequently in work that may use, for example, content analysis or document analysis to study library classification systems. A frequent error of this type occurs when a method is mentioned as future or previous work, such as in "In future studies, e.g., families' focus-group interviews could bring new insights." Some 10% of errors are due to ambiguity of the keywords themselves. For example, "bibliometrics" was incorrectly identified as the research method from the sentence "This paper combines practices emerging in the arts and humanities with research evaluation from a scientometric perspective…". A further 33% of errors are due to a lack of keywords, or to a method being mentioned only implicitly so that it can be inferred solely from reading the context. We discussed "comparative evaluation" and "user task based studies" as examples before. Further examples include "information extraction," which is a very broad topic for which it is difficult to include all possible keywords, and "document analysis," which is particularly difficult to capture because researchers rarely use distinctive keywords to describe such studies. In all these cases, a great deal of inference with background knowledge is required.

We discuss the lessons learned from this work with respect to our research questions, as well as limitations of our work.

5.1. Research Method Classification

Our first research question concerns the evolution of “research methods” in LIS. We summarize three key points below.

First, following a deductive coding process informed by the literature as well as our data analysis, we developed a classification scheme that largely extends that of Chu (2015). In particular, we refined Chu's "experiment" category to include a range of methods that are based on computational approaches and used in the creation of procedures, algorithms, or systems. These are often found in work belonging to the "new frontier" of LIS (i.e., work that often crosses boundaries with other disciplines, such as information retrieval, data mining, human computer interaction, and information systems). We also added new categories that were not included in the classification schemes of earlier studies. Overall, we believe that our significantly wider classification scheme indicates an increasing trend of diversification and interdisciplinary research in LIS. This could be seen as a strength, in terms of LIS drawing fruitfully on a wide range of fields and influences from the humanities, the social sciences, and the sciences. It does not suggest a field moving towards the mature position of paradigmatic consensus, but it could be seen to reflect a healthy dynamism. More troubling might be the extent to which novelty comes largely from computational methods, suggesting a discipline without a long history of development and whose direction is subordinate to that of another.

Second, coming with this widening scope is the increasing complexity of defining "research methods." While our proposed classification scheme remains a flat structure, as is the case for the majority of studies in this area, we acknowledge that the LIS community may benefit from a hierarchical classification that reflects different perspectives on research methodology. However, as we have discussed at length earlier on, it has been difficult to achieve consensus, simply because researchers in different traditions view methodology differently and use terminology differently. Although it was not an aim of this study, we anticipate that this can be partially addressed by developing a framework for defining and classifying LIS research methods from multiple, complementary perspectives. For example, a study should have a topic (e.g., "bibliometrics" could be both a method and a topic), could use certain modes of analysis and data collection methods (resonating with the "research strategy" and "data collection method" model by Järvelin and Vakkari (1990)), and could adopt a certain methodological stance (e.g., mixed methods, multimethods, quantitative) based on the mode of analysis (resonating with that of Hider and Pymm (2008)).
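Purely as an illustration of what such a multi-perspective record might look like (the field names below are our own assumptions, not an established standard), a study could be described along several axes at once:

from dataclasses import dataclass, field
from typing import List

@dataclass
class MethodProfile:
    """One article described from multiple, complementary methodological perspectives."""
    topic: str                                                 # e.g., "bibliometrics" as a topic
    data_collection: List[str] = field(default_factory=list)  # e.g., ["annotation"]
    mode_of_analysis: List[str] = field(default_factory=list) # e.g., ["classification"]
    methodological_stance: str = "quantitative"                # e.g., "mixed methods", "qualitative"

example = MethodProfile(
    topic="bibliometrics",
    data_collection=["annotation"],
    mode_of_analysis=["classification", "content analysis"],
)
print(example)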

However, there are significant hurdles to achieving this goal. As suggested by Risso (2016), LIS needs to disambiguate and clearly define different categories of "methods" (e.g., to address issues such as "citation analysis" being treated as both a research strategy and a data collection method in Järvelin and Vakkari (1990)). Further, there is a need to regularly update the framework to accommodate the evolution of the LIS discipline (Ferran-Ferrer et al., 2017). For this, automated IE methods may be useful in coping with the growing amount of literature. Also, significant effort needs to be devoted to encouraging the adoption of such standards. Last, but not least, researchers should be encouraged to share their coding frames and the data they coded as examples for future reference. Data sharing has been an obvious gap in LIS research on research methods, compared to other disciplines such as Computer Science and Biomedicine.

Third, there is a clear pattern of different methodological emphases in the articles published by the three journals. While JDoc and LISR appear to publish more work that uses "conventional" LIS research methods, JASIS&T appears to be more open to work that uses a diverse range of methods that are experimental in nature and more commonly seen in other disciplines. This pattern may reflect the different scopes of these journals. For example, LISR explicitly states that it "does not normally publish technical information science studies … or most bibliometric studies," whereas JASIS&T "focuses on the production, …, use, and evaluation of information and on the tools and techniques associated with these processes." JDoc's scope description, however, is less indicative of its methodological emphasis, as it states "… welcome submissions exploring topics where concepts and models in the library and information sciences overlap with those in cognate disciplines." This difference in scope and aims had an impact on our exploratory analysis and, therefore, on our resulting classification scheme. However, this should not be considered a limitation of our approach. If an LIS journal expands its scope to cover such a diverse range of fields, then we argue there is a need to develop a more fine-grained classification that better reflects this trend.

5.2. Automated Extraction of Research Methods

Our IE method for detecting the research methods used in a study is the first in LIS. Similar to earlier studies on key-insight extraction from scientific literature, we found this task particularly challenging. Although our method is based on simple rules, we believe it is still representative of the state of the art. This is because, on the one hand, its average performance over all methods is comparable to figures previously reported in similar tasks, even if our task is arguably more difficult. On the other hand, research so far cannot show a clear advantage of more complex methods such as machine learning over rule-based ones. The typical errors we found from our method will be equally challenging for typical machine learning-based methods.

Overall, our method achieved reasonable performance on only a few methods (i.e., “interview,” “questionnaire,” and “bibliometrics”), whereas its performance on most methods is rather unsatisfactory. Compared to work in a similar direction from other disciplines, we argue that research on IE of research methods from the LIS literature will need to consider unique challenges. The first is the unique requirement of the task. As we discussed before, existing IE methods in this area only aim to identify the sentence or phrase that mentions a method (i.e., sentence- or phrase-level of extraction), but not to recognize the actual method used. This is not very useful when our goal is to understand the actual method adopted by a study, which may mention other methods for the purposes of comparison, discussion, and references. This implies a formulation of the task beyond the “syntactic” level to the “semantic” level, where the automated IE method needs not only to identify mentions of methods in text, but also to understand the context in which they appear to derive their meanings (e.g., recall the examples we have shown in the error analysis section).

Adding to the above (i.e., the second challenge) is the complexity in defining and classifying LIS “research methods,” as we have discussed in the previous section. The need for taking a multiperspective view and identifying not only the main but also secondary methods only escalates the level of difficulty for IE. Also, there is the lack of standard terminology to describe LIS methods. For example, from our own process of eliciting research methods, we discovered methods that are difficult to identify by keywords, such as “mixed methods” and “document analysis.”

Finally, researchers may need to cope with varying degrees of quality in research article abstracts. This is particularly important because, as we have shown, our method can benefit from well-structured abstracts. In Computer Science for example, IE of research methods has mostly focused on abstracts ( Augenstein et al., 2017 ) because they are generally deemed to be of high quality and information rich. In the LIS domain, however, we have noticed issues such as how journal publishers differ in terms of enforcing structured abstracts, and that not every study would clearly describe their method in the abstracts ( Ferran-Ferrer et al., 2017 ).

All these challenges mean that feature engineering—a crucial step for IE of research methods from texts—will be very challenging in the LIS discipline. We discuss some possibilities that may partially address this in the following section.

5.3. Other Issues

During our data collection and analysis, we discovered issues with how journal publishers categorize their articles. We have shown an extensive degree of intra- and interjournal inconsistency, as well as a lack of guidance on how to interpret these categories. This undoubtedly created difficulties for our data collection process and potential uncertainties in the quality of our data set, and will remain an obstacle for future research in this area. We therefore urge the journal publishers to be more transparent about their article categorization system, and to work on improving the quality of their categorization. It might also be useful for publishers to offer common guidelines on describing methods in abstracts and to prompt peer reviewers to examine keywords and abstracts with this in mind.

Our further analysis of the abstract lengths showed a significant extent of noncompliance, as many articles (around, or even exceeding, 50%) are published with an abstract exceeding the word limit, and a small number of articles had a very short abstract. While we were unable to confirm the association between the length of the abstracts and the performance of our IE method, such inconsistency could arguably be considered as a quality issue for the journal.

5.4. Limitations of This Study

First, our proposed classification scheme remains a flat structure, and as we discussed above, it may need to be further developed into a hierarchy to better reflect different perspectives on research methods. Some may also argue that our classification diverges from the core research methods used in LIS. Due to the multidisciplinary nature of LIS, do we really need to integrate method classifications that conventionally belong to other disciplines? Would it be better to simply use the classification schemes from those disciplines when a study crosses those disciplines? These are the questions that we do not have answers to but deserve a debate given the multidisciplinary trend in LIS.

Second, our automated IE method for extracting research methods leaves substantial room for improvement. Similar to the previous work on key-insight extraction, we have taken a classification-based approach. Our method is based on keyword lookup, which is prone to ambiguity arising from both context and terminology, as we have discussed. As a result, its performance is still unsatisfactory. We envisage an alternative approach to be sentence- or paragraph-level classification that focuses on sentences or paragraphs from certain areas of a paper only, such as the abstract or the methodology section, when available. The idea is that sentences or paragraphs from such content may describe the method used and, compared to simple keyword lookup, provide additional context for interpretation. However, this creates a significant challenge for data annotation, because machine learning methods require a large number of examples (training data) to learn from, and for this particular task there will be a very large number of categories that need examples. We therefore urge researchers in LIS to make a collective effort towards data annotation, sharing, and reuse.
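As a sketch of the envisaged sentence-level alternative, assuming a labelled set of sentences were available (the training examples below are invented, and scikit-learn is used purely for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training sentences labelled with the method they describe.
train_sentences = [
    "We conducted semi-structured interviews with 20 librarians.",
    "A questionnaire was distributed to 300 undergraduate students.",
    "We trained a classifier to label citation sentences automatically.",
]
train_labels = ["interview", "questionnaire", "classification"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(train_sentences, train_labels)

print(model.predict(["An online questionnaire was completed by 150 respondents."]))

The point of the sketch is the shape of the pipeline rather than its accuracy: with realistic training data, the surrounding sentence provides the context that bare keyword lookup lacks.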

Also, our IE method only targets a single, main research method from each article. Detecting multiple research methods may be necessary but will be even more challenging, as features that are usually effective for detecting single methods (e.g., frequency) will be unreliable, and it requires a more advanced level of “comprehension” by the automated method. In addition, existing IE methods only identify the research methods themselves but overlook other parameters of the methods that may also be very interesting. For example, new researchers to LIS may want to know what a reasonable sample size is when a questionnaire is used, whether the sample size has an impact on citation statistics, or what methods are often “mixed” in a mixed method research. Addressing these issues will be beneficial to the LIS research community, but remains a significant challenge to be tackled in the future.

Finally, our work has focused on the LIS discipline. Although this offers unique value compared to the existing work on IE of research methods, which predominantly covers Computer Science and Biomedicine, the question remains as to how the method can generalize to other social science disciplines or the humanities. For example, our study shows that among the three journals, between 13% and 21% of articles are theoretical studies (Figure 5). However, methods commonly used in the humanities (e.g., hermeneutics) would not be described in the same manner as empirical studies in LIS. This means that our IE method, if applied to those disciplines, could misclassify some studies that use traditional humanities methods as nonempirical, even though their authors might consider them to be empirical. Nevertheless, LIS is marked by considerable innovation in methods. This reflects wider pressures for more interdisciplinary studies to address complex social problems, as well as individual researchers' motives to innovate in methods to achieve novelty. These factors are by no means confined to LIS, and we can anticipate that they will make the classification of methods in soft and applied disciplines equally challenging. Therefore, something may be learned from this study by those working in other fields.

The field of LIS is becoming increasingly interdisciplinary, as we see a growing number of publications that draw on theory and methods from other subject areas. This leads to increasingly diverse research methods being reported in the field. A deep understanding of these methods is of crucial interest to researchers, especially those who are new to the field. While there have been studies of research methods in LIS in the past, there is a lack of consensus on the classification and definition of research methods in LIS, and there have been no studies of automated analysis of the research methods reported in the literature. The latter has been recognized as being of paramount importance and has attracted significant effort in fields that have witnessed substantial growth of scientific literature, a situation that LIS is also undergoing.

Set in this context, this work analyzed a large collection of LIS literature published in three representative journals to develop a renewed perspective of research method classification in LIS, and to carry out an exploratory study into automated methods—to the best of our knowledge, the first of this nature in LIS—for analyzing the research methods reported in scientific publications. We discovered critical insights that are likely to impact the future studies of research methods in this field.

In terms of research method classification, we showed a widening scope of research methodology in LIS, as we see a substantial number of studies that cross disciplines such as information retrieval, data mining, human computer interaction, and information systems. The implications are twofold. First, conventional methodology classifications defined by the previous work can be too broad, as certain methodological categories (e.g., “experiment”) would include a significant number of studies and are too generic to differentiate them. Second, there is the increasing complexity of defining “research method,” which necessitates a hierarchically structured classification scheme that reflects different perspectives of research methodology (e.g., data collection method, analysis method, and methodological stance). Additionally, we also showed that different journals appear to have a different methodological focus, with JASIS&T being the most open to studies that are more quantitative, or algorithm and experiment based.

In terms of the automated method for method analysis, we tackled the task of identifying specific research methods used in a study, one that is novel compared to the previous work in other fields. Our method is based on simple rule-based keyword lookup, and worked well for a small number of research methods. However, overall, the task remains extremely challenging for recognizing the majority of research methods. The reasons are mainly due to language ambiguity, which results in challenges in feature engineering. Our data are publicly available and will encourage further studies in this direction.

Further, our data collection process revealed data quality issues reflecting an extensive degree of intra- and interjournal inconsistency with regards to how journal publishers organize their articles when making their data available for research. This data quality issue can discourage interest and effort in studies of research methods in the LIS field. We therefore urge journal publishers to address these issues by making their article categorization system more transparent and consistent among themselves.

Our future work will focus on a number of directions. First, we aim to progress towards developing a hierarchical, structured method classification scheme reflecting different perspectives in LIS. This will address the limitations of our current, flat method classification scheme proposed in this work. Second, as discussed before, we aim to further develop our automated method by incorporating more complex features that may improve its accuracy and enabling it to capture other aspects of research methods, such as the data sets involved and their quantity.

Ziqi Zhang: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Visualization, Writing—original draft. Winnie Tam: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing—review & editing. Andrew Cox: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Writing—review & editing.

The authors have no competing interests.

No funding was received for this research.

The data are available at Zenodo ( https://doi.org/10.5281/zenodo.4486156 ).

https://www.crossref.org/services/metadata-delivery/ , last retrieved in March 2020.

Their plural forms are also considered.

https://www.vosviewer.com/ . Last accessed May 2020.

All available codes are defined at: https://www.elsevier.com/__data/assets/text_file/0005/275666/ja5_art550_dtd.txt . However, no explanation of these codes can be found. A search on certain Q&A platforms found “FLA” to be “Full Length Article.”

Only up to five examples are shown. For the full list of keywords, see supplementary material in the appendix.

Data can be downloaded at https://doi.org/10.5281/zenodo.4486156 .

Average P, R, and F1 are identical because we are evaluating micro-average over all classes. Also the method predicts only one class for each article; therefore in Eqs. 1 and 2 , #total predicted positives = #total actual positives = #articles in the collection.

Examples: 10.1108/JD-10-2012-0138, 10.1108/00220410810912415, 10.1002/asi.21694.

More than one error category can be associated with each article.

Keywords associated with each research method

  
bibliometrics impact factor, scientometric, bibliometric, citation analysis/impact/importance/counts/index/report/window/rate/pattern/distributions/score/network, citation-based index, h-index, hindex, citers, citees, bibliometric indicator, leydesdorff, altmetrics 
content analysis content analysis, inter coder reliability, inter annotator agreement, krippendorff 
delphi study delphi study 
ethnography/field study Hammersley, participant observation, ethnography, ethnographic, ethnographer, field note, rich description 
experiment dependent variable, independent variable, experiment, experimental 
focus group focus group 
historical method historical method 
interview interview, interviewed, interviewer, interviewee, interviewing, transcript 
observation observation 
questionnaire respondent, questionnaire, survey, Likert, surveyed, sampling, response rate 
research diary/journal diary study, cultural probe 
think aloud protocol think aloud 
transaction log analysis log analysis/technique 
webometrics webometrics, cybermetrics, link analysis 
  
 
agent based modeling/simulation agent model/modeling, multi-agents 
annotation annotation, tagging 
classification classification, classify, classifier 
clustering cluster, clustering 
comparative evaluation comparative evaluation, evaluative studies 
document analysis document/textual analysis, document review 
information extraction named entity recognition, NER, relation extraction 
IR related indexing/ranking/query methods Learning to rank, term weighting, indexing method, query expansion, question answering 
mixed method cresswell and plano clark, mixed method 
digital ethnography digital ethnography, netnography, netnographic 
network analysis network analysis/study 
statistical methods correlation, logistic test, t-test, chi-square, hypothesis test, null hypothesis, dependence test 
topic modeling topic model, topic modeling, LDA 
user task based study user study, user analysis 
  


The case study approach

Sarah Crowe

1 Division of Primary Care, The University of Nottingham, Nottingham, UK

Kathrin Cresswell

2 Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, UK

Ann Robertson

3 School of Health in Social Science, The University of Edinburgh, Edinburgh, UK

Anthony Avery

Aziz Sheikh

The case study approach allows in-depth, multi-faceted explorations of complex issues in their real-life settings. The value of the case study approach is well recognised in the fields of business, law and policy, but somewhat less so in health services research. Based on our experiences of conducting several health-related case studies, we reflect on the different types of case study design, the specific research questions this approach can help answer, the data sources that tend to be used, and the particular advantages and disadvantages of employing this methodological approach. The paper concludes with key pointers to aid those designing and appraising proposals for conducting case study research, and a checklist to help readers assess the quality of case study reports.

Introduction

The case study approach is particularly useful to employ when there is a need to obtain an in-depth appreciation of an issue, event or phenomenon of interest, in its natural real-life context. Our aim in writing this piece is to provide insights into when to consider employing this approach and an overview of key methodological considerations in relation to the design, planning, analysis, interpretation and reporting of case studies.

The illustrative 'grand round', 'case report' and 'case series' have a long tradition in clinical practice and research. Presenting detailed critiques, typically of one or more patients, aims to provide insights into aspects of the clinical case and, in doing so, illustrate broader lessons that may be learnt. In research, the conceptually-related case study approach can be used, for example, to describe in detail a patient's episode of care, explore professional attitudes to and experiences of a new policy initiative or service development or more generally to 'investigate contemporary phenomena within its real-life context' [1]. Based on our experiences of conducting a range of case studies, we reflect on when to consider using this approach, discuss the key steps involved and illustrate, with examples, some of the practical challenges of attaining an in-depth understanding of a 'case' as an integrated whole. In keeping with previously published work, we acknowledge the importance of theory to underpin the design, selection, conduct and interpretation of case studies [2]. In so doing, we make passing reference to the different epistemological approaches used in case study research by key theoreticians and methodologists in this field of enquiry.

This paper is structured around the following main questions: What is a case study? What are case studies used for? How are case studies conducted? What are the potential pitfalls and how can these be avoided? We draw in particular on four of our own recently published examples of case studies (see Tables 1, 2, 3 and 4) and those of others to illustrate our discussion [3-7].

Example of a case study investigating the reasons for differences in recruitment rates of minority ethnic people in asthma research[ 3 ]

Minority ethnic people experience considerably greater morbidity from asthma than the White majority population. Research has shown however that these minority ethnic populations are likely to be under-represented in research undertaken in the UK; there is comparatively less marginalisation in the US.
To investigate approaches to bolster recruitment of South Asians into UK asthma studies through qualitative research with US and UK researchers, and UK community leaders.
Single intrinsic case study
Centred on the issue of recruitment of South Asian people with asthma.
In-depth interviews were conducted with asthma researchers from the UK and US. A supplementary questionnaire was also provided to researchers.
Framework approach.
Barriers to ethnic minority recruitment were found to centre around:
 1. The attitudes of the researchers towards inclusion: The majority of UK researchers interviewed were generally supportive of the idea of recruiting ethnically diverse participants but expressed major concerns about the practicalities of achieving this; in contrast, the US researchers appeared much more committed to the policy of inclusion.
 2. Stereotypes and prejudices: We found that some of the UK researchers' perceptions of ethnic minorities may have influenced their decisions on whether to approach individuals from particular ethnic groups. These stereotypes centred on issues to do with, amongst others, language barriers and lack of altruism.
 3. Demographic, political and socioeconomic contexts of the two countries: Researchers suggested that the demographic profile of ethnic minorities, their political engagement and the different configuration of the health services in the UK and the US may have contributed to differential rates.
 4. Above all, however, it appeared that the overriding importance of the US National Institutes of Health's policy to mandate the inclusion of minority ethnic people (and women) had a major impact on shaping the attitudes, and in turn the experiences, of US researchers; the absence of any similar mandate in the UK meant that UK-based researchers had not been forced to challenge their existing practices and were hence unable to overcome any stereotypical/prejudicial attitudes through experiential learning.

Example of a case study investigating the process of planning and implementing a service in Primary Care Organisations[ 4 ]

Health workforces globally need to reorganise and reconfigure in order to meet, in an efficient and sustainable manner, the challenges posed by the increasing numbers of people living with long-term conditions. Through studying the introduction of General Practitioners with a Special Interest in respiratory disorders, this study aimed to provide insights into this important issue by focusing on community respiratory service development.
To understand and compare the process of workforce change in respiratory services and the impact on patient experience (specifically in relation to the role of general practitioners with special interests) in a theoretically selected sample of Primary Care Organisations (PCOs), in order to derive models of good practice in planning and the implementation of a broad range of workforce issues.
Multiple-case design of respiratory services in health regions in England and Wales.
Four PCOs.
Face-to-face and telephone interviews, e-mail discussions, local documents, patient diaries, news items identified from local and national websites, national workshop.
Reading, coding and comparison progressed iteratively.
 1. In the screening phase of this study (which involved semi-structured telephone interviews with the person responsible for driving the reconfiguration of respiratory services in 30 PCOs), the barriers of financial deficit, organisational uncertainty, disengaged clinicians and contradictory policies proved insurmountable for many PCOs to developing sustainable services. A key rationale for PCO re-organisation in 2006 was to strengthen their commissioning function and those of clinicians through Practice-Based Commissioning. However, the turbulence, which surrounded reorganisation was found to have the opposite desired effect.
 2. Implementing workforce reconfiguration was strongly influenced by the negotiation and contest among local clinicians and managers about "ownership" of work and income.
 3. Despite the intention to make the commissioning system more transparent, personal relationships based on common professional interests, past work history, friendships and collegiality, remained as key drivers for sustainable innovation in service development.
It was only possible to undertake in-depth work in a selective number of PCOs and, even within these selected PCOs, it was not possible to interview all informants of potential interest and/or obtain all relevant documents. This work was conducted in the early stages of a major NHS reorganisation in England and Wales and thus, events are likely to have continued to evolve beyond the study period; we therefore cannot claim to have seen any of the stories through to their conclusion.

Table 3. Example of a case study investigating the introduction of electronic health records[ 5 ]

Context: Healthcare systems globally are moving from paper-based record systems to electronic health record systems. In 2002, the NHS in England embarked on the most ambitious and expensive IT-based transformation in healthcare in history, seeking to introduce electronic health records into all hospitals in England by 2010.
Objectives: To describe and evaluate the implementation and adoption of detailed electronic health records in secondary care in England and thereby provide formative feedback for local and national rollout of the NHS Care Records Service.
Study design: A mixed-methods, longitudinal, multi-site, socio-technical collective case study.
Cases: Five NHS acute hospital and mental health Trusts that have been the focus of early implementation efforts.
Data collection: Semi-structured interviews, documentary data and field notes, observations and quantitative data.
Analysis: Qualitative data were analysed thematically using a socio-technical coding matrix, combined with additional themes that emerged from the data.
Key findings:
 1. Hospital electronic health record systems have developed and been implemented far more slowly than was originally envisioned.
 2. The top-down, government-led, standardised approach needed to evolve to admit more variation and greater local choice for hospitals in order to support local service delivery.
 3. A range of adverse consequences were associated with the centrally negotiated contracts, which excluded the hospitals in question.
 4. The unrealistic, politically driven timeline (implementation over 10 years) was found to be a major source of frustration for developers, implementers and healthcare managers and professionals alike.
Limitations: We were unable to access details of the contracts between government departments and the Local Service Providers responsible for delivering and implementing the software systems. This, in turn, made it difficult to develop a holistic understanding of some key issues impacting on the overall slow roll-out of the NHS Care Records Service. Early adopters may also have differed in important ways from NHS hospitals that planned to join the National Programme for Information Technology and implement the NHS Care Records Service at a later point in time.

Table 4. Example of a case study investigating the formal and informal ways students learn about patient safety[ 6 ]

Context: There is a need to reduce the disease burden associated with iatrogenic harm and, considering that healthcare education represents perhaps the most sustained patient safety initiative ever undertaken, it is important to develop a better appreciation of the ways in which undergraduate and newly qualified professionals receive and make sense of the education they receive.
Objectives: To investigate the formal and informal ways pre-registration students from a range of healthcare professions (medicine, nursing, physiotherapy and pharmacy) learn about patient safety in order to become safe practitioners.
Study design: Multi-site, mixed-method collective case study.
Cases: Eight case studies (two for each professional group) were carried out in educational provider sites considering different programmes, practice environments and models of teaching and learning.
Data collection: Structured in phases relevant to the three knowledge contexts:
 • Documentary evidence (including undergraduate curricula, handbooks and module outlines), complemented with a range of views (from course leads, tutors and students) and observations in a range of academic settings.
 • Policy and management views of patient safety and influences on patient safety education and practice. NHS policies included, for example, implementation of the National Patient Safety Agency's guidance, which encourages organisations to develop an organisational safety culture in which staff members feel comfortable identifying dangers and reporting hazards.
 • The cultures to which students are exposed, i.e. patient safety in relation to day-to-day working. NHS initiatives included, for example, a hand-washing initiative or the introduction of infection control measures.
Key findings:
 1. Practical, informal learning opportunities were valued by students. On the whole, however, students were neither exposed to nor engaged with important NHS initiatives such as risk management activities and incident reporting schemes.
 2. NHS policy appeared to have been taken seriously by course leaders. Patient safety materials were incorporated into both formal and informal curricula, albeit largely implicitly rather than explicitly.
 3. Resource issues and peer pressure were found to influence safe practice. Variations were also found to exist in students' experiences and the quality of the supervision available.
Limitations: The curriculum and organisational documents collected differed between sites, which possibly reflected gatekeeper influences at each site. The recruitment of participants for focus group discussions proved difficult, so interviews or paired discussions were used as a substitute.

What is a case study?

A case study is a research approach that is used to generate an in-depth, multi-faceted understanding of a complex issue in its real-life context. It is an established research design that is used extensively in a wide variety of disciplines, particularly in the social sciences. A case study can be defined in a variety of ways (Table 5), the central tenet being the need to explore an event or phenomenon in depth and in its natural context. It is for this reason sometimes referred to as a "naturalistic" design; this is in contrast to an "experimental" design (such as a randomised controlled trial) in which the investigator seeks to exert control over and manipulate the variable(s) of interest.

Table 5. Definitions of a case study

The table presents definitions of the case study given by Stake[ 8 ] (p. 237), Yin[ 1 , 27 , 28 ] (Yin 1999 p. 1211; Yin 1994 p. 13; Yin 2009 p. 18), Miles and Huberman[ 23 ] (p. 25), Green and Thorogood[ 29 ] (p. 284) and George and Bennett[ 12 ] (p. 17).

Stake's work has been particularly influential in defining the case study approach to scientific enquiry. He has helpfully characterised three main types of case study: intrinsic, instrumental and collective[ 8 ]. An intrinsic case study is typically undertaken to learn about a unique phenomenon. The researcher should define the uniqueness of the phenomenon, which distinguishes it from all others. In contrast, the instrumental case study uses a particular case (some of which may be better than others) to gain a broader appreciation of an issue or phenomenon. The collective case study involves studying multiple cases simultaneously or sequentially in an attempt to generate a still broader appreciation of a particular issue.

These are not, however, necessarily mutually exclusive categories. In the first of our examples (Table 1), we undertook an intrinsic case study to investigate the issue of recruitment of minority ethnic people into the specific context of asthma research studies, but it developed into an instrumental case study through seeking to understand the issue of recruitment of these marginalised populations more generally, generating a number of findings that are potentially transferable to other disease contexts[ 3 ]. In contrast, the other three examples (see Tables 2, 3 and 4) employed collective case study designs to study the introduction of workforce reconfiguration in primary care, the implementation of electronic health records into hospitals, and the ways in which healthcare students learn about patient safety considerations[ 4 - 6 ]. Although our study focusing on the introduction of General Practitioners with Specialist Interests (Table 2) was explicitly collective in design (four contrasting primary care organisations were studied), it was also instrumental in that this particular professional group was studied as an exemplar of the more general phenomenon of workforce redesign[ 4 ].

What are case studies used for?

According to Yin, case studies can be used to explain, describe or explore events or phenomena in the everyday contexts in which they occur[ 1 ]. These can, for example, help to understand and explain causal links and pathways resulting from a new policy initiative or service development (see Tables 2 and 3, for example)[ 1 ]. In contrast to experimental designs, which seek to test a specific hypothesis through deliberately manipulating the environment (for example, in a randomised controlled trial, giving a new drug to randomly selected individuals and then comparing outcomes with controls),[ 9 ] the case study approach lends itself well to capturing information on more explanatory 'how', 'what' and 'why' questions, such as 'how is the intervention being implemented and received on the ground?'. The case study approach can offer additional insights into what gaps exist in its delivery or why one implementation strategy might be chosen over another. This in turn can help develop or refine theory, as shown in our study of the teaching of patient safety in undergraduate curricula (Table 4)[ 6 , 10 ]. Key questions to consider when selecting the most appropriate study design are whether it is desirable, or indeed possible, to undertake a formal experimental investigation in which individuals and/or organisations are allocated to an intervention or control arm, or whether the wish is to obtain a more naturalistic understanding of an issue. The former is ideally studied using a controlled experimental design, whereas the latter is more appropriately studied using a case study design.

Case studies may be approached in different ways depending on the epistemological standpoint of the researcher, that is, whether they take a critical (questioning one's own and others' assumptions), interpretivist (trying to understand individual and shared social meanings) or positivist approach (orientating towards the criteria of natural sciences, such as focusing on generalisability considerations) (Table 6). Whilst such a schema can be conceptually helpful, it may be appropriate to draw on more than one approach in any case study, particularly in the context of conducting health services research. Doolin has, for example, noted that in the context of undertaking interpretative case studies, researchers can usefully draw on a critical, reflective perspective which seeks to take into account the wider social and political environment that has shaped the case[ 11 ].

Table 6. Examples of epistemological approaches that may be used in case study research

Critical. Characteristics: involves questioning one's own assumptions, taking into account the wider political and social environment, and interpreting the limiting conditions in relation to power and control that are thought to influence behaviour. Criticisms: it can possibly neglect other factors by focussing only on power relationships and may give the researcher a position that is too privileged. Key references: Howcroft and Trauth[ 30 ], Blakie[ 31 ], Doolin[ 11 , 32 ], Bloomfield and Best[ 33 ].

Interpretivist. Characteristics: involves understanding meanings/contexts and processes as perceived from different perspectives, trying to understand individual and shared social meanings; focus is on theory building. Criticisms: often difficult to explain unintended consequences, and criticised for neglecting surrounding historical contexts. Key references: Stake[ 8 ], Doolin[ 11 ].

Positivist. Characteristics: involves establishing in advance which variables one wishes to study and seeing whether they fit with the findings; focus is often on testing and refining theory on the basis of case study findings. Criticisms: it does not take into account the role of the researcher in influencing findings. Key references: Yin[ 1 , 27 , 28 ], Shanks and Parr[ 34 ].

How are case studies conducted?

Here, we focus on the main stages of research activity when planning and undertaking a case study; the crucial stages are: defining the case; selecting the case(s); collecting and analysing the data; interpreting data; and reporting the findings.

Defining the case

Carefully formulated research question(s), informed by the existing literature and a prior appreciation of the theoretical issues and setting(s), are all important in appropriately and succinctly defining the case[ 8 , 12 ]. Crucially, each case should have a pre-defined boundary which clarifies the nature and time period covered by the case study (i.e. its scope, beginning and end), the relevant social group, organisation or geographical area of interest to the investigator, the types of evidence to be collected, and the priorities for data collection and analysis (see Table 7)[ 1 ]. A theory-driven approach to defining the case may help generate knowledge that is potentially transferable to a range of clinical contexts and behaviours; using theory is also likely to result in a more informed appreciation of, for example, how and why interventions have succeeded or failed[ 13 ].

Table 7. Example of a checklist for rating a case study proposal[ 8 ]

Clarity: Does the proposal read well?
Integrity: Do its pieces fit together?
Attractiveness: Does it pique the reader's interest?
The case: Is the case adequately defined?
The issues: Are major research questions identified?
Data Resource: Are sufficient data sources identified?
Case Selection: Is the selection plan reasonable?
Data Gathering: Are data-gathering activities outlined?
Validation: Is the need and opportunity for triangulation indicated?
Access: Are arrangements for start-up anticipated?
Confidentiality: Is there sensitivity to the protection of people?
Cost: Are time and resource estimates reasonable?

For example, in our evaluation of the introduction of electronic health records in English hospitals (Table 3), we defined our cases as the NHS Trusts that were receiving the new technology[ 5 ]. Our focus was on how the technology was being implemented. However, if the primary research interest had been on the social and organisational dimensions of implementation, we might have defined our case differently as a grouping of healthcare professionals (e.g. doctors and/or nurses). The precise beginning and end of the case may however prove difficult to define. Pursuing this same example, when does the process of implementation and adoption of an electronic health record system really begin or end? Such judgements will inevitably be influenced by a range of factors, including the research question, theory of interest, the scope and richness of the gathered data and the resources available to the research team.

Selecting the case(s)

The decision on how to select the case(s) to study is a very important one that merits some reflection. In an intrinsic case study, the case is selected on its own merits[ 8 ]. The case is selected not because it is representative of other cases, but because of its uniqueness, which is of genuine interest to the researchers. This was, for example, the case in our study of the recruitment of minority ethnic participants into asthma research (Table 1), as our earlier work had demonstrated the marginalisation of minority ethnic people with asthma, despite evidence of disproportionate asthma morbidity[ 14 , 15 ]. In another example of an intrinsic case study, Hellström et al.[ 16 ] studied an elderly married couple living with dementia to explore how dementia had impacted on their understanding of home, their everyday life and their relationships.

For an instrumental case study, selecting a "typical" case can work well[ 8 ]. In contrast to the intrinsic case study, the particular case which is chosen is of less importance than selecting a case that allows the researcher to investigate an issue or phenomenon. For example, in order to gain an understanding of doctors' responses to health policy initiatives, Som undertook an instrumental case study interviewing clinicians who had a range of responsibilities for clinical governance in one NHS acute hospital trust[ 17 ]. Sampling a "deviant" or "atypical" case may however prove even more informative, potentially enabling the researcher to identify causal processes, generate hypotheses and develop theory.

In collective or multiple case studies, a number of cases are carefully selected. This offers the advantage of allowing comparisons to be made across several cases and/or replication. Choosing a "typical" case may enable the findings to be generalised to theory (i.e. analytical generalisation) or to test theory by replicating the findings in a second or even a third case (i.e. replication logic)[ 1 ]. Yin suggests two or three literal replications (i.e. predicting similar results) if the theory is straightforward and five or more if the theory is more subtle. However, critics might argue that selecting 'cases' in this way is insufficiently reflexive and ill-suited to the complexities of contemporary healthcare organisations.

The selected case study site(s) should allow the research team access to the group of individuals, the organisation, the processes or whatever else constitutes the chosen unit of analysis for the study. Access is therefore a central consideration; the researcher needs to come to know the case study site(s) well and to work cooperatively with them. Selected cases need to be not only interesting but also hospitable to the inquiry[ 8 ] if they are to be informative and answer the research question(s). Case study sites may also be pre-selected for the researcher, with decisions being influenced by key stakeholders. For example, our selection of case study sites in the evaluation of the implementation and adoption of electronic health record systems (see Table 3) was heavily influenced by NHS Connecting for Health, the government agency that was responsible for overseeing the National Programme for Information Technology (NPfIT)[ 5 ]. This prominent stakeholder had already selected the NHS sites (through a competitive bidding process) to be early adopters of the electronic health record systems and had negotiated contracts that detailed the deployment timelines.

It is also important to consider in advance the likely burden and risks associated with participation for those who (or the site(s) which) comprise the case study. Of particular importance is the obligation for the researcher to think through the ethical implications of the study (e.g. the risk of inadvertently breaching anonymity or confidentiality) and to ensure that potential participants/participating sites are provided with sufficient information to make an informed choice about joining the study. The outcome of providing this information might be that the emotive burden associated with participation, or the organisational disruption associated with supporting the fieldwork, is considered so high that the individuals or sites decide against participation.

In our example of evaluating implementations of electronic health record systems, given the restricted number of early adopter sites available to us, we sought purposively to select a diverse range of implementation cases among those that were available[ 5 ]. We chose a mixture of teaching, non-teaching and Foundation Trust hospitals, and examples of each of the three electronic health record systems procured centrally by the NPfIT. At one recruited site, it quickly became apparent that access was problematic because of competing demands on that organisation. Recognising the importance of full access and co-operative working for generating rich data, the research team decided not to pursue work at that site and instead to focus on other recruited sites.

Collecting the data

In order to develop a thorough understanding of the case, the case study approach usually involves the collection of multiple sources of evidence, using a range of quantitative (e.g. questionnaires, audits and analysis of routinely collected healthcare data) and, more commonly, qualitative techniques (e.g. interviews, focus groups and observations). The use of multiple sources of data (data triangulation) has been advocated as a way of increasing the internal validity of a study (i.e. the extent to which the method is appropriate to answer the research question)[ 8 , 18 - 21 ]. An underlying assumption is that data collected in different ways should lead to similar conclusions, and approaching the same issue from different angles can help develop a holistic picture of the phenomenon (Table 2)[ 4 ].

Brazier and colleagues used a mixed-methods case study approach to investigate the impact of a cancer care programme[ 22 ]. Here, quantitative measures were collected with questionnaires before, and five months after, the start of the intervention; these did not yield any statistically significant results. Qualitative interviews with patients, however, helped provide an insight into potentially beneficial process-related aspects of the programme, such as greater perceived patient involvement in care. The authors reported how this case study approach revealed a number of contextual factors likely to influence the effectiveness of the intervention, which would have been unlikely to emerge from quantitative methods alone.

In collective or multiple case studies, data collection needs to be flexible enough to allow a detailed description of each individual case to be developed (e.g. the nature of different cancer care programmes), before considering the emerging similarities and differences in cross-case comparisons (e.g. to explore why one programme is more effective than another). It is important that data sources from different cases are, where possible, broadly comparable for this purpose even though they may vary in nature and depth.

Analysing, interpreting and reporting case studies

Making sense and offering a coherent interpretation of the typically disparate sources of data (whether qualitative alone or together with quantitative) is far from straightforward. Repeated reviewing and sorting of the voluminous and detail-rich data are integral to the process of analysis. In collective case studies, it is helpful to analyse data relating to the individual component cases first, before making comparisons across cases. Attention needs to be paid to variations within each case and, where relevant, the relationship between different causes, effects and outcomes[ 23 ]. Data will need to be organised and coded to allow the key issues, both derived from the literature and emerging from the dataset, to be easily retrieved at a later stage. An initial coding frame can help capture these issues and can be applied systematically to the whole dataset with the aid of a qualitative data analysis software package.
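
To make the bookkeeping behind an initial coding frame concrete, the sketch below shows how a frame might be applied systematically across a set of documents. The codes, keywords and document identifiers are invented for illustration; in practice codes are applied judgementally by researchers, usually within a qualitative data analysis package, rather than by keyword matching alone.

```python
from collections import Counter

# Hypothetical initial coding frame: code name -> indicative keywords.
# This only illustrates the retrieval mechanics; it is not the coding
# frame used in any of the studies discussed above.
CODING_FRAME = {
    "access": ["access", "gatekeeper", "recruitment"],
    "workforce": ["staff", "clinician", "role"],
    "technology": ["system", "software", "record"],
}

def code_document(text: str) -> Counter:
    """Count how often each code's keywords occur in one document."""
    text = text.lower()
    counts = Counter()
    for code, keywords in CODING_FRAME.items():
        counts[code] = sum(text.count(keyword) for keyword in keywords)
    return counts

def code_corpus(documents: dict[str, str]) -> dict[str, Counter]:
    """Apply the same frame to every document so coded material can be retrieved later."""
    return {doc_id: code_document(text) for doc_id, text in documents.items()}

if __name__ == "__main__":
    docs = {
        "interview_01": "The clinician said access to the new record system was slow.",
        "interview_02": "Staff described their role in supporting recruitment.",
    }
    for doc_id, counts in code_corpus(docs).items():
        print(doc_id, dict(counts))
```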

The Framework approach is a practical approach to managing and analysing large datasets, comprising five stages (familiarisation; identifying a thematic framework; indexing; charting; and mapping and interpretation), and is particularly useful if time is limited, as was the case in our study of recruitment of South Asians into asthma research (Table 1)[ 3 , 24 ]. Theoretical frameworks may also play an important role in integrating different sources of data and examining emerging themes. For example, we drew on a socio-technical framework to help explain the connections between different elements - technology, people, and the organisational settings within which they worked - in our study of the introduction of electronic health record systems (Table 3)[ 5 ]. Our study of patient safety in undergraduate curricula drew on an evaluation-based approach to design and analysis, which emphasised the importance of the academic, organisational and practice contexts through which students learn (Table 4)[ 6 ].
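
As a rough illustration of the charting stage of the Framework approach, the indexed material can be thought of as a case-by-theme matrix. The sketch below, with invented cases and themes rather than material from the studies above, shows the kind of structure that supports the later mapping and interpretation stage.

```python
import pandas as pd

# Hypothetical framework matrix: rows are cases (or respondent groups),
# columns are themes from the thematic framework, cells hold summarised,
# indexed material rather than raw transcript text.
charted = pd.DataFrame(
    {
        "barriers_to_recruitment": ["practical concerns", "language issues"],
        "researcher_attitudes": ["supportive in principle", "committed to inclusion"],
    },
    index=["UK_researchers", "US_researchers"],
)

# Mapping and interpretation then work across rows and columns,
# e.g. comparing how one theme plays out across the different cases.
print(charted)
print(charted.loc[:, "researcher_attitudes"])
```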

Case study findings can have implications both for theory development and theory testing. They may establish, strengthen or weaken historical explanations of a case and, in certain circumstances, allow theoretical (as opposed to statistical) generalisation beyond the particular cases studied[ 12 ]. These theoretical lenses should not, however, constitute a strait-jacket and the cases should not be "forced to fit" the particular theoretical framework that is being employed.

When reporting findings, it is important to provide the reader with enough contextual information to understand the processes that were followed and how the conclusions were reached. In a collective case study, researchers may choose to present the findings from individual cases separately before amalgamating across cases. Care must be taken to ensure the anonymity of both case sites and individual participants (if agreed in advance) by allocating appropriate codes or withholding descriptors. In the example given in Table 3, we decided against providing detailed information on the NHS sites and individual participants in order to avoid the risk of inadvertent disclosure of identities[ 5 , 25 ].

What are the potential pitfalls and how can these be avoided?

The case study approach is, as with all research, not without its limitations. When investigating the formal and informal ways undergraduate students learn about patient safety (Table 4), for example, we rapidly accumulated a large quantity of data. The volume of data, together with the time restrictions in place, impacted on the depth of analysis that was possible within the available resources. This highlights a more general point: the temptation to collect as much data as possible should be avoided, and adequate time needs to be set aside for the analysis and interpretation of what are often highly complex datasets.

Case study research has sometimes been criticised for lacking scientific rigour and providing little basis for generalisation (i.e. producing findings that may be transferable to other settings)[ 1 ]. There are several ways to address these concerns, including: the use of theoretical sampling (i.e. drawing on a particular conceptual framework); respondent validation (i.e. participants checking emerging findings and the researcher's interpretation, and providing an opinion as to whether they feel these are accurate); and transparency throughout the research process (see Table 8)[ 8 , 18 - 21 , 23 , 26 ]. Transparency can be achieved by describing in detail the steps involved in case selection, data collection, the reasons for the particular methods chosen, and the researcher's background and level of involvement (i.e. being explicit about how the researcher has influenced data collection and interpretation). Seeking potential, alternative explanations, and being explicit about how interpretations and conclusions were reached, help readers to judge the trustworthiness of the case study report. Stake provides a critique checklist for a case study report (Table 9)[ 8 ].

Table 8. Potential pitfalls and mitigating actions when undertaking case study research

Potential pitfall: Selecting/conceptualising the wrong case(s), resulting in a lack of theoretical generalisations. Mitigating action: Developing in-depth knowledge of theoretical and empirical literature, justifying choices made.
Potential pitfall: Collecting large volumes of data that are not relevant to the case, or too little to be of any value. Mitigating action: Focus data collection in line with research questions, whilst being flexible and allowing different paths to be explored.
Potential pitfall: Defining/bounding the case. Mitigating action: Focus on related components (either by time and/or space); be clear what is outside the scope of the case.
Potential pitfall: Lack of rigour. Mitigating action: Triangulation, respondent validation, the use of theoretical sampling, transparency throughout the research process.
Potential pitfall: Ethical issues. Mitigating action: Anonymise appropriately, as cases are often easily identifiable to insiders; obtain informed consent of participants.
Potential pitfall: Integration with theoretical framework. Mitigating action: Allow for unexpected issues to emerge and do not force fit; test out preliminary explanations; be clear about epistemological positions in advance.

Table 9. Stake's checklist for assessing the quality of a case study report[ 8 ]

1. Is this report easy to read?
2. Does it fit together, each sentence contributing to the whole?
3. Does this report have a conceptual structure (i.e. themes or issues)?
4. Are its issues developed in a serious and scholarly way?
5. Is the case adequately defined?
6. Is there a sense of story to the presentation?
7. Is the reader provided some vicarious experience?
8. Have quotations been used effectively?
9. Are headings, figures, artefacts, appendices, indexes effectively used?
10. Was it edited well, then again with a last minute polish?
11. Has the writer made sound assertions, neither over- nor under-interpreting?
12. Has adequate attention been paid to various contexts?
13. Were sufficient raw data presented?
14. Were data sources well chosen and in sufficient number?
15. Do observations and interpretations appear to have been triangulated?
16. Is the role and point of view of the researcher nicely apparent?
17. Is the nature of the intended audience apparent?
18. Is empathy shown for all sides?
19. Are personal intentions examined?
20. Does it appear individuals were put at risk?

Conclusions

The case study approach allows, amongst other things, critical events, interventions, policy developments and programme-based service reforms to be studied in detail in a real-life context. It should therefore be considered when an experimental design is either inappropriate to answer the research questions posed or impossible to undertake. Considering the frequency with which implementations of innovations are now taking place in healthcare settings and how well the case study approach lends itself to in-depth, complex health service research, we believe this approach should be more widely considered by researchers. Though inherently challenging, the research case study can, if carefully conceptualised and thoughtfully undertaken and reported, yield powerful insights into many important aspects of health and healthcare delivery.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AS conceived this article. SC, KC and AR wrote this paper with GH, AA and AS all commenting on various drafts. SC and AS are guarantors.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/11/100/prepub

Acknowledgements

We are grateful to the participants and colleagues who contributed to the individual case studies that we have drawn on. This work received no direct funding, but it has been informed by projects funded by Asthma UK, the NHS Service Delivery Organisation, NHS Connecting for Health Evaluation Programme, and Patient Safety Research Portfolio. We would also like to thank the expert reviewers for their insightful and constructive feedback. Our thanks are also due to Dr. Allison Worth who commented on an earlier draft of this manuscript.

1. Yin RK. Case study research, design and method. 4th ed. London: Sage Publications Ltd.; 2009.
2. Keen J, Packwood T. Qualitative research; case study evaluation. BMJ. 1995;311:444–446.
3. Sheikh A, Halani L, Bhopal R, Netuveli G, Partridge M, Car J, et al. Facilitating the Recruitment of Minority Ethnic People into Research: Qualitative Case Study of South Asians and Asthma. PLoS Med. 2009;6(10):1–11.
4. Pinnock H, Huby G, Powell A, Kielmann T, Price D, Williams S, et al. The process of planning, development and implementation of a General Practitioner with a Special Interest service in Primary Care Organisations in England and Wales: a comparative prospective case study. Report for the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO); 2008. http://www.sdo.nihr.ac.uk/files/project/99-final-report.pdf
5. Robertson A, Cresswell K, Takian A, Petrakaki D, Crowe S, Cornford T, et al. Prospective evaluation of the implementation and adoption of NHS Connecting for Health's national electronic health record in secondary care in England: interim findings. BMJ. 2010;341:c4564.
6. Pearson P, Steven A, Howe A, Sheikh A, Ashcroft D, Smith P; the Patient Safety Education Study Group. Learning about patient safety: organisational context and culture in the education of healthcare professionals. J Health Serv Res Policy. 2010;15:4–10. doi:10.1258/jhsrp.2009.009052.
7. van Harten WH, Casparie TF, Fisscher OA. The evaluation of the introduction of a quality management system: a process-oriented case study in a large rehabilitation hospital. Health Policy. 2002;60(1):17–37. doi:10.1016/S0168-8510(01)00187-7.
8. Stake RE. The art of case study research. London: Sage Publications Ltd.; 1995.
9. Sheikh A, Smeeth L, Ashcroft R. Randomised controlled trials in primary care: scope and application. Br J Gen Pract. 2002;52(482):746–751.
10. King G, Keohane R, Verba S. Designing Social Inquiry. Princeton: Princeton University Press; 1996.
11. Doolin B. Information technology as disciplinary technology: being critical in interpretative research on information systems. Journal of Information Technology. 1998;13:301–311. doi:10.1057/jit.1998.8.
12. George AL, Bennett A. Case studies and theory development in the social sciences. Cambridge, MA: MIT Press; 2005.
13. Eccles M; the Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implementation Science. 2006;1:1–8. doi:10.1186/1748-5908-1-1.
14. Netuveli G, Hurwitz B, Levy M, Fletcher M, Barnes G, Durham SR, Sheikh A. Ethnic variations in UK asthma frequency, morbidity, and health-service use: a systematic review and meta-analysis. Lancet. 2005;365(9456):312–317.
15. Sheikh A, Panesar SS, Lasserson T, Netuveli G. Recruitment of ethnic minorities to asthma studies. Thorax. 2004;59(7):634.
16. Hellström I, Nolan M, Lundh U. 'We do things together': A case study of 'couplehood' in dementia. Dementia. 2005;4:7–22. doi:10.1177/1471301205049188.
17. Som CV. Nothing seems to have changed, nothing seems to be changing and perhaps nothing will change in the NHS: doctors' response to clinical governance. International Journal of Public Sector Management. 2005;18:463–477. doi:10.1108/09513550510608903.
18. Lincoln Y, Guba E. Naturalistic inquiry. Newbury Park: Sage Publications; 1985.
19. Barbour RS. Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? BMJ. 2001;322:1115–1117. doi:10.1136/bmj.322.7294.1115.
20. Mays N, Pope C. Qualitative research in health care: Assessing quality in qualitative research. BMJ. 2000;320:50–52. doi:10.1136/bmj.320.7226.50.
21. Mason J. Qualitative researching. London: Sage; 2002.
22. Brazier A, Cooke K, Moravan V. Using Mixed Methods for Evaluating an Integrative Approach to Cancer Care: A Case Study. Integr Cancer Ther. 2008;7:5–17. doi:10.1177/1534735407313395.
23. Miles MB, Huberman M. Qualitative data analysis: an expanded sourcebook. 2nd ed. CA: Sage Publications Inc.; 1994.
24. Pope C, Ziebland S, Mays N. Analysing qualitative data. Qualitative research in health care. BMJ. 2000;320:114–116. doi:10.1136/bmj.320.7227.114.
25. Cresswell KM, Worth A, Sheikh A. Actor-Network Theory and its role in understanding the implementation of information technology developments in healthcare. BMC Med Inform Decis Mak. 2010;10(1):67. doi:10.1186/1472-6947-10-67.
26. Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358:483–488. doi:10.1016/S0140-6736(01)05627-6.
27. Yin R. Case study research: design and methods. 2nd ed. Thousand Oaks, CA: Sage Publishing; 1994.
28. Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34:1209–1224.
29. Green J, Thorogood N. Qualitative methods for health research. 2nd ed. Los Angeles: Sage; 2009.
30. Howcroft D, Trauth E. Handbook of Critical Information Systems Research, Theory and Application. Cheltenham, UK; Northampton, MA, USA: Edward Elgar; 2005.
31. Blakie N. Approaches to Social Enquiry. Cambridge: Polity Press; 1993.
32. Doolin B. Power and resistance in the implementation of a medical management information system. Info Systems J. 2004;14:343–362. doi:10.1111/j.1365-2575.2004.00176.x.
33. Bloomfield BP, Best A. Management consultants: systems development, power and the translation of problems. Sociological Review. 1992;40:533–560.
34. Shanks G, Parr A. Positivist, single case study research in information systems: A critical analysis. In: Proceedings of the European Conference on Information Systems. Naples; 2003.

  • Open access
  • Published: 23 September 2024

Home Team Effect and Opinion Network after the Sewol Ferry Disaster: A mixed-method study of the influence of symbol and feedback on liberal versus conservative newspapers’ negative opinions

  • Ki Woong Cho 1  

Humanities and Social Sciences Communications, volume 11, Article number: 1250 (2024)

  • Cultural and media studies
  • Politics and international relations
  • Social policy

Theory-based studies of how symbols and feedback shape opinion are lacking. Acknowledging that the media express their stance and opinion, and that negative opinions are critical to policy change, this paper fills the gap in the literature by illustrating and comparing the effects of emotional and cognitive symbols, and of positive and negative feedback, on liberal and conservative newspapers' negative opinions of South Korean President Park Geun-hye's administration (the Park administration) after the Sewol Ferry sank. The study used qualitative and quantitative methods to analyze archival data, including 424 newspaper editorials and economic data, published from April to December 2014. Multiple regression analyses were conducted following a content analysis of the newspaper editorials, and network analysis was used to analyze the data. The results mostly supported the hypotheses that symbols and feedback affect negative opinion in political discourse, with new findings that deviate from the existing theories. Emotional symbols exerted a stronger influence on negative opinion than cognitive symbols, regardless of the newspaper's stance. The political system's response to positive and negative feedback was not uniform; rather, it varied with the situation and the newspaper's perspective. Under the conservative administration, the liberal newspaper responded to symbols and feedback more sensitively than the conservative one. The conservative newspaper expressed more lenient negative opinions towards the conservative administration than the liberal newspaper, supporting the home team effect. These findings have practical and theoretical implications for future studies, highlighting the application of opinion networks in social science.


Introduction

Some disasters, like Hurricane Sandy in the northeastern United States, can be politicized in what is called "disaster politicizations" (Chung 2013). Since opinion is critical to policy, politicians or bureaucrats who face negative opinions from the media are more likely to try to effect policy change to avoid the repercussions of negative publicity, interacting with people through the media rather than face-to-face (Berelson et al. 1954).

Although a disaster does not usually become a major nationwide political issue over whether and how the government helps weak and helpless victims, the Sewol Ferry Disaster became a political issue because of the numerous student victims and the upcoming election. Indeed, a disaster does not usually become politicized precisely because of the victims' weakness and helplessness (Schneider 1995); the Sewol Ferry Disaster is therefore unique in this sense and, unlike other disasters, has remained a frequently discussed politicized disaster for many years. Public attention to the Sewol Ferry issue also took longer to subside (Cho and Jung 2019). It therefore remains important to study the Sewol Ferry Disaster.

News articles tend to contain political bias that shapes opinion. The media's negative opinion influences voters' views and policy change. Political media thus affect how politicians and policy entrepreneurs generate symbols, while the political system responds to big events with feedback. Through disaster politicization, the Sewol Ferry Disaster changed the culture of disaster management and raised its importance (Chae 2017; Cho 2017; Cho and Park 2023; Chung 2020). Following the Sewol Ferry Disaster, the culture of and perspective on safety and disaster management changed and developed comprehensively. The Sewol Ferry Disaster thus informs further investigation of how the media respond to disaster politicization depending on symbols, feedback, and the media's political stance. This study employed an opinion network as an innovative way to describe opinion. Through this analysis of the opinion network, the study evaluated the effects of symbols and feedback on policy stakeholders' negative opinions of the Sewol Ferry Disaster, taking the newspapers' ideologies into account.
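
The study does not publish its analysis code, but the general idea of an opinion network (nodes for the actors or targets discussed in editorials, edges weighted by how often two targets attract negative opinion in the same editorial) can be sketched as follows. All node names and weights below are illustrative assumptions, not data from the study.

```python
import networkx as nx

# Illustrative opinion network: nodes are actors/targets named in editorials,
# edge weights count how often two targets are criticised in the same editorial.
# All names and weights here are invented for illustration.
edges = [
    ("Park administration", "coast guard", 12),
    ("Park administration", "ferry operator", 8),
    ("coast guard", "ferry operator", 5),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Simple descriptive measures one might report for such a network.
strength = dict(G.degree(weight="weight"))  # total negative co-mentions per node
density = nx.density(G)                     # how connected the network is overall

for node, s in strength.items():
    print(f"{node}: strength = {s}")
print(f"network density = {density:.2f}")
```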

Background: Sewol Ferry Disaster

South Korea has long had a centralized bureaucracy while recognizing citizens' and the media's rights to expression (Cho and Jung 2019). Many issues arising from policies and disasters have emerged and stabilized over time as a result of media reports. Liberal newspapers tend to support liberal administrations, while conservative newspapers tend to support conservative administrations. In response to citizens asking the central government to address problems or the causes of disasters and issues, the Korean administration, governments, and policy actors have enacted policies by creating or reorganizing agencies, increasing budgets, drafting acts, and scapegoating the individuals or organizations in charge when they encounter new problems (Cho 2017).

The Sewol Ferry Disaster occurred under the conservative administration on April 16, 2014. Due to the South Korean government's poorly performed rescue efforts, the disaster claimed the lives of 295 people, including 246 high school students (Hwang 2015). When the Sewol Ferry Disaster occurred, the victims' innocence and helplessness induced empathy in people, who wanted to help them. Facing a local election on June 4, politicians and the populace turned this disaster into a hot political issue. Opposition parties tried to use this tragedy to criticize the conservative Park administration, while the Park administration (the president, the ruling party, and the Blue House Footnote 1 ) tried to deny its responsibility. Many Korean media responded by focusing on the victims and their families. The liberal and conservative populace and media expressed different opinions offline and online.

Few empirical studies have investigated symbols and feedback from different perspectives. Given the lack of studies on disaster politicization, on newspapers' differing perspectives, on the symbol in the multiple streams approach (MSA), on feedback in punctuated equilibrium theory (PET), and on bias understood through the home team effect (HTE), this study aimed to investigate, qualitatively and quantitatively, how symbols and feedback influenced negative opinion and how newspapers with different ideologies responded to symbols and feedback in their negative opinions during the Sewol Ferry Disaster.

This paper provides an overview of the theoretical perspectives, followed by a case study of the events of the Sewol Ferry Disaster and the regression analysis evaluating the influence of symbols and feedback on newspapers’ negative opinions. The study compares liberal and conservative newspapers’ negative opinions of President Park’s administration to illustrate the HTE.
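
The exact model specification is introduced later in the paper; purely as an illustration of the kind of regression implied here, the sketch below fits an ordinary least squares model of editorial-level negative opinion on symbol counts and feedback indicators. All variable names and values are hypothetical and generated at random, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical editorial-level dataset; values are simulated for illustration.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "emotional_symbols": rng.poisson(3, n),    # count of emotional symbols per editorial
    "cognitive_symbols": rng.poisson(2, n),    # count of cognitive symbols per editorial
    "positive_feedback": rng.normal(0, 1, n),  # e.g. standardised policy-response indicator
    "negative_feedback": rng.normal(0, 1, n),
    "liberal_paper": rng.integers(0, 2, n),    # 1 = liberal newspaper, 0 = conservative
})
df["negative_opinion"] = (
    0.5 * df["emotional_symbols"] + 0.2 * df["cognitive_symbols"]
    + 0.3 * df["positive_feedback"] + rng.normal(0, 1, n)
)

model = smf.ols(
    "negative_opinion ~ emotional_symbols + cognitive_symbols "
    "+ positive_feedback + negative_feedback + liberal_paper",
    data=df,
).fit()
print(model.summary())
```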

Theoretical background

Policy entrepreneurs utilize symbols to diminish the ambiguity of disaster politics, and the political system responds to them with feedback, while the media describe them according to their political stances. The three theories reviewed in the following sections address all of these issues. When the disaster occurred, people were interested in why it happened and how the media and newspapers described the situation. Thus, MSA, PET, and HTE were appropriate theories for addressing the disaster and the media response. MSA and PET are commonly used when accidents or events lead to sudden changes in policy outcomes (Cho 2017). MSA describes how policy entrepreneurs utilize symbols to gain media attention, while PET shows how feedback influences the media's attention and organizations' budget responses. When explaining politically controversial issues, the HTE is an appropriate lens for understanding the media's perspective on such issues.

Symbol and feedback in the policy process

Emotional and cognitive symbols for converging three streams in the multiple streams approach (MSA)

Symbols play an important role in the policy process, as they portray a simple message or idea and offer an argument to boost opinion cognitively and emotionally. During the policy process, politicians need to employ symbols to establish their legitimacy and integrity (Edelman 1964: 190). Kingdon (1984) Footnote 2 argued that new policies do not emerge merely because the time is right; rather, the three streams (the problems, politics, and policy streams) usually align within a window of opportunity (Ruvalcaba-Gomez et al. 2023). Problems linger and receive attention depending on how participants frame or define them. Crises and disasters often elevate the standing of particular problems, competing for the attention of policymakers and citizens. A policy stream exists and rises to the fore when the policy window opens, and it is presented as a resolution to an identified problem based on its advocates' pet solution Footnote 3 . Various actors shape and modify proposed policies, waiting for the right time to emerge as their pet solution to a problem. The politics stream Footnote 4 is activated when policymakers are motivated and able to advance a particular policy. Using symbols, policy entrepreneurs boost the public opinion of controversial proposals to couple Footnote 5 the streams and change policy when a policy window opens. The policy window Footnote 6 is opened when the problems, policy, and politics streams align (Burgess 2006; Rinfret 2011) in response to the occurrence of a focusing event (Simon 2010). Following MSA, the Sewol Ferry Disaster presents a unique opportunity to examine how symbols can couple the streams and change policy.

Symbol Footnote 7 . Symbols occur in uncertain situations without certain meaning, while signs have a direct meaning (Veraksa 2013 ). When a policy window opens or needs to open, symbols are used to attract attention in MSA (Eckersley and Lakoma 2022 ; Ruvalcaba-Gomez et al. 2023 ; Zahariadis 2007 ). Carroll ( 1985 ) mentioned that policy entrepreneurs and policymakers utilize symbols with specific emotional repercussions and cognitive referents (Zahariadis 2007 ). The opponents of certain issues create new symbols that prevent an unfavorable issue from emerging (Bachrach and Baratz 1963 ; Bennett 1988 ; Schattschneider 1960 ). Thus, using familiar and positive or, alternatively, hostile or skeptical symbols facilitates mass reaction (Cobb et al. 1976 ). Although political symbols are neither immediate nor unanimous, they bring about change (Edelman 1964 ).

Symbols are messages embedded in the news. They are employed to expand individuals' understanding of an issue (Birkland 1997). Employing symbols helps entrepreneurs couple the three streams by reaching more people, evoking compelling emotions, and explaining their proposal efficiently (Zahariadis 2007). During ambiguity, political manipulation facilitates symbolic politics while commonality is condensed in the symbols (Zahariadis 2007). The media use a symbol at a particular time depending on the "dominant culture" (Riffe et al. 2005: 11). These symbols are also broadcast via the media to boost emotion. Ultimately, this emotional arousal brings about confrontational policy adoption (Zahariadis 2005). The examples above show how symbols boost opinion via the media and how policy entrepreneurs facilitate the development of their policy solutions.

The symbol affects people’s emotions and cognitions. Many researchers have suggested that a symbol has different functions (Veraksa 2013 ). Scholars have commonly employed emotional and cognitive functions (Elder and Cobb 1983 ; Veraksa 2013 ; Zahariadis 2007 ; Zahariadis 2014 ). Emotional symbols influence people’s positive and negative states and feelings (e.g., fear, love, joy, hate, and sorrow) Footnote 8 rather than appealing to their reason. Emotionally, symbols are sometimes used to define an issue (Cobb et al. 1976 ). The emotional function of a symbol delivers tension in uncertainty (Veraksa 2013 ). The cognitive symbol also provides information. The symbol functions cognitively by facilitating an individual’s interpretation capacity (Veraksa, 2013 ). Cognitive symbols influence people’s perception of mental reasoning Footnote 9 via simple information.

Positive and negative feedback of the political system in the Punctuated Equilibrium Theory (PET)

Public policy feedback affects future policymaking through enhanced incentives and the value of political actors (Béland 2010 ; Larsen 2019 ). Positive feedback boosts the policy change, while negative feedback reduces the change (Cho and Jung 2019 ). This study also used positive and negative feedback from the PET to examine policy developments in the aftermath of the Sewol Ferry Disaster. PET is appropriate for understanding radical changes in individual policy systems after stable periods (Baumgartner and Jones 1991 ).

One of the main concepts of PET, which has its origins in evolutionary biology (Gould and Eldredge 1993; Jones and Baumgartner 2012), Footnote 10 is a sudden change in a stable equilibrium in policy, referred to as a policy monopoly Footnote 11 . Only a few people, such as bureaucrats, politicians, and interest groups, can access the policy process. "Politics of the policy monopoly, incrementalism, a widely accepted supportive image Footnote 12 , and negative feedback" (Baumgartner et al. 2014: 67) maintain the politics of equilibrium in subsystems. The policy monopoly suppresses change with negative feedback, although this does not always work (Baumgartner et al. 2014).

The political shock or a policy entrepreneur usually provides positive feedback at the domestic level (Joly and Richter 2023). "Positive feedback occurs when a change … causes future changes to be amplified" (Baumgartner et al. 2014: 64). Political movements, improvement of the government agenda, and positive feedback cause the politics of punctuation (Baumgartner et al. 2014). Meanwhile, proponents and opponents focus on different sets of images (Baumgartner and Jones 1993). When opponents compare new images with previous images, the policy monopoly will collapse (Baumgartner et al. 2014). In other words, even though a single image is broadly accepted and supported under the policy monopoly, if opponents raise a new image, the policy monopoly starts to collapse (Baumgartner and Jones 1991, 1993). A political agenda supports the shift of public policies to a new equilibrium through positive feedback, while negative feedback dampens the changes (True et al. 2007). This amplification process overcomes the cognitive and institutional friction Footnote 13 embedded within government (Jones et al. 2003; John and Jennings 2010), resulting in a large change that corrects the deficiencies of prior policy modification (Joly and Richter 2023).

When a policy monopoly collapses after an abrupt shock event, new policy actors dispute the rule to rearrange the balance of power (Baumgartner et al. 2014 ). After the shock event, the positive feedback that facilitates policy change drives the resulting policy change, reaching a new equilibrium along with negative feedback. However, people have difficulties discerning positive and negative feedback (Cho and Jung 2019 ).

Feedback. In PET, scholars assume that the "political system" (Cairney and Heikkila 2014: 367) reacts to feedback and returns to stability even after a drastic change. Feedback occurs continuously during the policy process and provides information to improve performance (Chung et al. 2019). Feedback is a critical learning activity in the policy system (Chung et al. 2019). The characteristics of the political or policy system are the main factors affecting policy change (Chung et al. 2019). Thus, policy change occurs in response to how the political system addresses support for and opposition to policy (Chung et al. 2019).

Positive and negative feedback affect change, and when feedback occurs, determining whether it is positive or negative is critical (Larsen 2019). Political systems maintain their stability after major periodic changes through positive and negative feedback (Baumgartner and Jones 1991; Cairney and Heikkila 2014). Policy feedback effects indicate the direction of mobilization (Larsen 2019). Large-scale change arises from positive feedback (Baumgartner et al. 2014), which disrupts the status quo by creating new images and mobilizing additional participants (van den Dool and Li 2023). In PET, positive feedback refers to influences or processes that tend to create change or speed it up. Footnote 14 Thus, once change emerges, positive feedback amplifies future change (Baumgartner et al. 2014). During this positive process, an expanding political issue pushes policy toward a new equilibrium (True et al. 2007).

Negative feedback, on the other hand, refers to influences that reinforce the status quo and resist or slow change. Under a policy monopoly, policy usually remains stable because of negative feedback. Negative feedback tends to diminish policy change (True et al. 2007), supporting the system’s stability (Baumgartner et al. 2014) when new issues go unnoticed or processes remain inefficient (van den Dool and Li 2023). When opponents raise problems and voice negative opinions, the policy monopoly collapses. However, positive and negative feedback do not necessarily work against each other, which makes it difficult to determine whether a feedback effect increases or decreases change over time (Larsen 2019).

Scholars have demonstrated punctuation, the central concept of PET, in different ways (Cho and Jung 2019). Punctuation is typically depicted in two dimensions, with time on the X-axis and the punctuation measure on the Y-axis (Baumgartner et al. 2009; John and Jennings 2010; John 2006). In the literature, policy punctuation is commonly captured in two ways (Cho and Jung 2019): through budgets (Jones and Baumgartner 2005; Jones et al. 1998; Jones et al. 2003; Jones et al. 2009; Robinson et al. 2014; True 2000) and through attention, measured by word frequency (Fowler et al. 2017), speeches (Baumgartner et al. 2009; John and Jennings 2010), media coverage (John 2006), court decisions (Robinson 2013; Wood 2006), bills and laws (Baumgartner et al. 2009), coverage of lawmaking and executive orders (Jones et al. 2003), and Google Trends (Cho and Jung 2019). In addition, this study introduces negative opinion as another way to visualize attention after the Sewol Ferry Disaster.
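As a minimal illustration of how such a punctuation plot is conventionally constructed (time on the X-axis, an attention measure on the Y-axis), the sketch below plots weekly editorial counts with Python's matplotlib. The counts are hypothetical placeholders, not the study's data.

```python
# Sketch: a conventional punctuation plot -- time on the X-axis, an attention
# measure (here, weekly editorial counts) on the Y-axis, so that a spike after
# a shock event is visible. All counts are hypothetical.
import matplotlib.pyplot as plt

weeks = list(range(1, 21))
editorial_counts = [1, 0, 2, 1, 1, 0, 1, 2,   # stable pre-disaster period
                    18, 15, 12, 10, 8, 7, 6,  # punctuation after the shock
                    5, 4, 4, 3, 3]            # gradual return toward stability

plt.plot(weeks, editorial_counts, marker="o")
plt.xlabel("Week")
plt.ylabel("Editorials mentioning the issue")
plt.title("Hypothetical punctuation of media attention")
plt.show()
```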

Influence of liberal vs. conservative newspapers’ opinions on the Home Team Effect (HTE)

Opinion, particularly negative opinion, significantly influences politicians. Support and demands shape the political system and its policies (Easton 1966). Media outlets generally respond to policy by reporting and analyzing issues of public concern (Barnes et al. 2008), and their responses in turn influence policies (Harris et al. 2001; Kim et al. 2002; McCombs 2004; McCombs and Shaw 1972; Scheufele and Tewksbury 2007). Following a tragedy or disaster, media outlets attend more to negative than to positive opinions, and their responses to such events depend on their political stance and ideology. Among the various media, the newspaper is a “workhorse” of the public information system and a moving force, because most information originates from newspapers (Cutlip et al. 2006: 255). Newspapers serve their audiences’ needs and satisfy their motivations (Ryu 2018), and they are commonly used to compare liberal and conservative political orientations (Valente et al. 2023).

Furthermore, newspapers tend to adopt a partisan perspective on controversial issues (Fico and Soffin 1995; Ryu 2018). The political and ideological perspectives of some newspapers are salient to their liberal or conservative audiences (Baek 1997; Kim and Choi 2016; Kwon 2016), especially in Korean editorials. Despite pursuing objectivity and striving to present balanced perspectives, newspapers are biased by their political stance (Kim 2018). Although most media claim to be fair referees (Kang 1999), the incumbent administration and government can influence the media’s perceptions and opinions. When a newspaper expresses its opinion on politically controversial issues after a disaster, it may facilitate policy change and respond differently from other outlets. That is, a newspaper expresses opinions in line with its audience’s stance, because the audience prefers news consistent with its own preferences and perspectives. Therefore, this study employed the HTE.

HTE is a useful tool for scholars to predict or explain people’s behavior (Uribe et al. 2021; Van Reeth 2019). People generally support their preferred team (Erhardt 2023). The HTE, which holds that individuals who support the incumbent government tend to perceive it as less corrupt than opposition supporters do (Anderson and Tverdova 2003; Blais et al. 2017), can explain this phenomenon. Footnote 15 “Motivated political reasoning” explains that “directional goals lead people to access and accept information which justifies their conclusion” (Blais et al. 2010: 2), which biases individuals’ perceptions and opinions (Kunda 1990, 1999; Taber et al. 2001). Newspapers are for-profit firms motivated to provide slanted stories based on readers’ preferences (Gentzkow and Shapiro 2010). Partisan predisposition influences subjective political judgment (Blais et al. 2010), and much research has examined partisanship’s influence on political reasoning (Anduiza et al. 2013; Blais et al. 2010).

Media and opinion in the policy process

The news media play a crucial role in producing the language used to critique political bodies. Since people pay attention selectively (Iyengar and Kinder 1987), the mass media boost political agendas (Thesen 2013) by selecting and emphasizing certain issues (Mazzoleni and Schulz 1999; Thesen 2013). Audiences thus evaluate leaders’ performance using the specific information the media provide (Scheufele and Tewksbury 2007). Rather than reporting situations objectively, the media shape public knowledge of disaster recovery and public perceptions of its successes and failures (Harris et al. 2001; Schneider 1995).

Opinions are useful for analyzing the underlying structure of the social networks from which opinion networks arise (Zhou et al. 2009). An opinion, the expression of “an individual’s attitudes, beliefs, or values” (Beebe and Beebe 2006: 179), can induce and foretell policy change. Liu (2011) defined an opinion, in the context of opinion mining Footnote 16 practices, as “a positive or negative sentiment, view, attitude, emotion, or appraisal about an entity or an aspect of the entity from an opinion holder” (Liu 2011: 15) (see Appendix 2 for more detail). Politicians and administrations create policies that reflect the opinions of the media and citizens. The media’s agenda influences the voters’ agenda, and increased media coverage of an issue raises its priority among the public (Cutlip et al. 2006). Surges of media attention, interest, concern, and stronger opinion enhance a policy’s importance and increase the likelihood of policy change (Baumgartner and Jones 1993; Cutlip et al. 2006; McCombs and Shaw 1972; Wanta et al. 2004). Footnote 17

Furthermore, opposing parties often emphasize bad news to place responsibility for faults on the government (Thesen 2013). The media, in turn, express positive and negative opinions by describing positive and negative situations, and they use surveys to gauge public opinion and determine the percentage of individuals who support or oppose an issue or candidate (Cutlip et al. 2006). Administrations and politicians are concerned about negative opinions (Thesen 2013) because negative opinions are critical to their election prospects and shape perceptions of their performance. The government bears greater responsibility because of the pressure to react to the news (Thesen 2013). For this reason, negative opinion is a critical part of the media’s political message, and opinion can serve as an alternative way to show the punctuation of attention.

A negative opinion, which is also a main focus of newspaper reporting, matters for politicians’ and bureaucrats’ reelection. Groups outside the government express grievances to advance their interests (Cobb et al. 1976). Even after the public agenda becomes a formal one, grievance (Cobb et al. 1976), or negative opinion, remains important because advocacy groups continue to emphasize it during agenda setting, policy formulation, and decision-making (Simon 2010). Furthermore, elected officials and politicians heed their constituents’ opinions and grievances to secure reelection (Weaver 1986). The media, consciously and subconsciously, express positive and negative opinions by describing positive and negative situations, which means that newspapers tend to voice negative opinions by reporting negative situations and stories. Studying the influence of positive and negative coverage of foreign nations, Wanta et al. (2004) showed that negative coverage shapes the audience’s perceptions. This explains why the political system and administration try to reduce negative coverage: the more negative the opinion of the president’s administration, the more likely policy change becomes. By extension, I assume that more negative opinions of an administration influence citizens’ perceptions and thereby affect support for, and the electoral prospects of, the presidential and legislative parties.

Gaps identified from the theoretical background

Although MSA considers symbols in the policy process, studies of symbols remain limited, and it is difficult to know how different symbols affect newspapers’ opinions. Although various studies have utilized PET, they have not fully applied PET terminology (Cho and Jung 2019). Most PET studies have focused on punctuation rather than on positive and negative feedback (Cho and Jung 2019), even though feedback is a key concept (Baumgartner et al. 2014). Moreover, previous PET studies have not clearly shown whether positive and negative feedback change attention or negative opinion, and the discrepancy between the theoretical and empirical effects of positive and negative feedback (Larsen 2019) requires further empirical testing. In terms of the HTE, newspapers differ in political ideology. Media can be biased (Kim 2018) and respond to government policy by explaining a situation from their own perspective and delivering either a positive or a negative opinion to the public. Policies influence the media’s opinion, and the media’s political preference in turn shapes their perspectives through their political judgment and opinions.

Accordingly, policymakers and entrepreneurs employ symbols, and the political system responds to feedback, both of which influence the media’s negative opinion after a disaster. Opinion is critical for policymakers, politicians, and even laypersons to understand policy change and further developments in their society. Additionally, under a conservative administration, liberal and conservative newspapers respond to these post-disaster settings and environments by expressing positive and negative opinions. However, descriptive studies of symbols and feedback across different political ideologies are lacking, so a mixed-methods design is needed to produce more descriptive and reliable accounts of symbols and feedback. In sum, this study combined a case study with a regression analysis of an opinion network to assess how the media’s negative opinion changed in response to emotional and cognitive symbols and to positive and negative feedback, depending on each outlet’s political viewpoint (see Fig. 1).

figure 1

Conceptual Framework : Mixed Method on Sewol Ferry Disaster Editorial Data from Liberal and Conservative Newspapers.

As shown in Fig. 1, three hypotheses and theoretical propositions Footnote 18 address this study’s research question. These propositions and hypotheses capture, qualitatively and quantitatively, the emotional and cognitive symbols from MSA, the positive and negative feedback in PET, and the different political ideologies from the HTE. Thus, after conducting the case study based on the three propositions, I ran a regression analysis on negative opinion to test the three hypotheses, revealing the effects of symbols and feedback and the different responses associated with the newspapers’ political perspectives.

Method and data

To increase the integrity of the study, I used triangulation, drawing on multiple theories, data sources (Creswell 2013), and analyses because of the uniqueness of the opinion network. I conducted the analysis based on theoretical propositions and modified theory in light of the case study findings (Cavaye 1996). I carried out a qualitative case study grounded in theoretical propositions (Cavaye 1996; Cho and Jung 2019; Deslatte et al. 2023) and performed a content analysis of the newspaper editorials, describing how liberal and conservative newspapers employed symbols to criticize the Park administration and how they responded to each feedback event with negative opinions. However, a case study is weaker than a quantitative description of symbols and feedback in newspapers with different political ideologies. To overcome this limitation, I compared the negative opinions of the liberal and conservative newspapers using regression analysis and opinion network centrality.

Method and research design: opinion network

An opinion network combines network analysis and opinion mining to depict the relationships between the opinions of actors. Social network analysis Footnote 19 can be applied to data obtained from opinion mining by measuring the relationships between actors (Wasserman and Faust 1994) via content analysis. Footnote 20 Merging network analysis, opinion mining, and content analysis yields an opinion network that describes and visualizes the positive and negative opinions pushing for policy change, because it captures both intensity and direction. Opinion networks have been used to describe diverse networks (e.g., Blex and Yasseri 2022; Ibitoye and Onifade 2022) of public opinion in data analysis (e.g., Wang et al. 2019). Despite seminal studies on opinion networks (Bamakan et al. 2019; Zhou et al. 2009), they have been understudied and underutilized in social science compared with other network approaches.

Opinion mining studies the positive or negative opinions (sentiments) (Liu 2012) Footnote 21 within a message. Essentially, opinion mining focuses on the relationships among agencies, organizations, and individuals (Liu 2012) and evaluates the “emotion associated with attitude objects,” such as products, people, or issues (Krippendorff 2013: 245). Analyses of this type have been conducted in international affairs and in text mining with computer programming, and opinion mining is often part of commercial analysis (Adedoyin-Olowe et al. 2014), for instance, to gather consumers’ opinions of an item or service on a given website. People, and even objects, are interrelated and form opinions about one another. Direction and intensity are the most common ways to evaluate people’s feelings and their depth (Cutlip et al. 2006). The direction of an opinion is an “evaluative quality of predisposition,” positive or negative, depending on the public’s evaluation (Cutlip et al. 2006: 206), while intensity refers to the strength of that direction (Cutlip et al. 2006). However, opinion mining and network analysis alone do not simultaneously show the opinion, the relationship, and the direction between nodes.

Opinion networks capture the positive and negative relationships and their direction Footnote 22 between nodes by adding weights to the text data. They can depict common relationships in text after content analysis, overcoming the time constraints associated with interviewing people to collect data, and they allow the positive and negative opinions of nodes to be visualized with direction. In an opinion network, the nodes are the subjects or objects of opinion in sentences. Ultimately, the opinion network helps people see relationships and their directions, unlike other network analyses (e.g., the semantic network Footnote 23). Thus, an opinion network can show positive and negative opinions between people or objects, with links (edges or ties) representing positive or negative opinions with direction and weight between nodes (see Appendix 1 for more detail).
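As a minimal sketch of how such a signed, directed, weighted network could be assembled in Python with networkx, consider the example below. The edge list, node names, and weights are hypothetical illustrations, not the study's coded data.

```python
# Minimal sketch of an opinion network: a directed graph whose edges carry a
# signed weight (negative = negative opinion, positive = positive opinion).
# The edge list below is hypothetical, for illustration only.
import networkx as nx

# (source, target, signed weight) -- e.g., an editorial (source) expressing an
# opinion about an actor (target); -2/-1 = negative, +1/+2 = positive.
edge_list = [
    ("liberal_editorial", "Park_administration", -2),
    ("liberal_editorial", "victims_families", +2),
    ("conservative_editorial", "KCG", -1),
    ("conservative_editorial", "victims_families", +1),
]

G = nx.DiGraph()
for source, target, weight in edge_list:
    # If the same pair appears repeatedly, accumulate the signed weights.
    if G.has_edge(source, target):
        G[source][target]["weight"] += weight
    else:
        G.add_edge(source, target, weight=weight)

# Each edge now records both direction (who opines about whom) and
# valence/intensity (whether positively or negatively, and how strongly).
for u, v, data in G.edges(data=True):
    print(f"{u} -> {v}: {data['weight']}")
```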

I collected archival data from web searches, government records, and websites. The geographical scope of the study was South Korea, and the analysis covered the period from April 2014 to December 2014. I analyzed the data based on a chain of evidence, Footnote 24 theoretical propositions (Cho and Jung 2019; Deslatte et al. 2023; Jung et al. 2023), and a systematic protocol to ensure the reliability of the case study (Yin 2014). I then conducted the regression analysis with opinion network data derived from the content analysis of editorials in the Hankyoreh (liberal) and Chosun (conservative) Footnote 25 newspapers to evaluate their different perspectives. Footnote 26 South Korean editorials closely reflect the perspectives and opinions of their newspapers. Footnote 27 These newspapers usually publish three editorials daily on popular and critical topics from the previous day, so editorials provide condensed information on selected topics.

Using the search term “Sewol Ferry,” I collected 192 online editorials from the Chosun and 232 from the Hankyoreh published from April 16, 2014, to December 31, 2014. Specifically, I applied hand-coded opinion mining and content analysis via an edge list (see Appendix 1). Footnote 28 The data covered 38 weeks, and each node and link was extracted from each sentence. The intercoder reliability of the content analysis reached 82.99%. Footnote 29
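The reported intercoder reliability figure is presumably a simple percent-agreement statistic; a hedged sketch of how such a figure could be computed from two coders' sentence-level codes is shown below. The code vectors are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: intercoder reliability as simple percent agreement between two
# coders' sentence-level codes (e.g., -1 negative, 0 neutral, +1 positive).
# The two code lists are hypothetical placeholders.
coder_1 = [-1, -1, 0, +1, -1, 0, +1, -1]
coder_2 = [-1,  0, 0, +1, -1, 0, +1, -1]

agreements = sum(a == b for a, b in zip(coder_1, coder_2))
percent_agreement = 100 * agreements / len(coder_1)
print(f"Percent agreement: {percent_agreement:.2f}%")  # 87.50% for this toy data
```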

Propositions

The three theoretical propositions are presented below. Proposition 1, concerning symbols from MSA, considers the problem, politics, and policy streams that existed before the Sewol Ferry Disaster. After the disaster, many people blamed the Park administration, and policy entrepreneurs coupled the three streams to push their pet solutions using symbols. The feedback in PET addressed by proposition 2 reflects the policy monopoly before the Sewol Ferry Disaster and the blameless image of the Park administration. After the disaster, however, the political system produced positive and negative feedback across multiple venues, shifting the Sewol Ferry Disaster to macro-level politics as positive feedback and negative opinion increased. After negative feedback, the punctuated attention stabilized. Meanwhile, consistent with proposition 3, responses to the symbols and feedback differed depending on the newspapers’ perspectives, as expected from the HTE: the conservative newspaper appeared more lenient than the liberal newspaper under the conservative administration.

Pre-disaster

Before the Sewol Ferry Disaster, problems could already be detected in the government’s weak supervision and loosely implemented marine safety laws, including lax supervision of overcrowded vessels, poorly equipped safety facilities, renovation of ship structures, and skipped ship inspections (Cho 2017). Regarding the policy stream, the South Korean government and politicians dealt with problems by increasing budgets, creating special acts or organizations (such as task-related ministries and commissions), or redirecting blame toward or away from a person in charge (Cho and Jung 2019); Korean policy entrepreneurs treat these as popular pet solutions (Cho and Jung 2019). Concerning the politics stream, the ruling and opposing parties and their respective associates were at odds even before the Sewol Ferry Disaster. Immediately after the presidential election, scandals surfaced showing that the National Intelligence Service had manipulated public opinion via social media, posting pro-government replies to support the then ruling party aligned with President Park’s administration. Footnote 30 When the ferry sank, questions about President Park’s legitimacy emerged: civil society questioned the legitimacy of her administration, and anti-government citizens distrusted both the Park administration and the election result.

Under the policy monopoly, marine bureaucrats, politicians, and interest groups controlled access to the marine policy process, while the public showed little interest in the marine industry or its safety protocols. People in Korea easily forget the lessons of previous disasters and accidents. At the beginning of President Park’s administration, the government restructured itself and created the Ministry of Security and Public Administration (MOSPA) by merging several organizations and placing the word “security” before “public administration” to highlight its commitment to safety (Chae 2014; Cho 2017; Cho and Jung 2019). Only a few politicians, bureaucrats, and interest groups decided most issues and policies. When the disaster struck, the liberal and conservative publics responded actively, and liberal and conservative media described the tragedy. These stable, incremental conditions then shifted to non-incremental change in South Korea.

Post-disaster

The Sewol Ferry Disaster spurred the entire country to advocate for policy change. The media began pointing out the problems and causes of the disaster, focusing on the victims’ families, the high school victims, and the survivors while blaming the President, the Blue House, the ruling party, the government, bureaucrats, the Korean Coast Guard (KCG), and persons connected to the accident, expressing diverse opinions (Cho 2017). Civil society organizations and politicians, acting as policy entrepreneurs, consoled the victims (symbols) and spread criticism and negative opinions of President Park’s administration for the lax supervision and loosely implemented marine law that they believed led to the failed rescue efforts, while trying to resolve the problems with their pet solutions. In the end, the national mood was so despondent that public employees and some citizens curtailed, postponed, or even canceled leisure and recreational activities and ceremonies to grieve for the deceased students, the other passengers, and their relatives and families (Cho and Jung 2019).

Increasing negative opinion with symbol from MSA

The deaths of young people aboard the ship and the families they left behind became symbols evoking purity, innocence, and helplessness, which heightened the public’s emotions and, together with positive opinions of the victims, facilitated policy change. Symbols related to the victims of the Sewol Ferry Disaster and their families intensified citizens’ and liberals’ negative opinions of the Park administration and facilitated the coupling of the three streams: problems, policies, and politics. Most victims were poor high school students from Ansan, an impoverished region (Cho and Jung 2019). The liberal and conservative media and the opposition parties influenced policy change by broadcasting many touching stories about the victims, their families, the survivors, and the missing bodies recovered after the disaster (symbols). Newspapers’ positive opinions of the victims, including condolences, pity, sympathy, comfort, and commiseration, served as emotional symbols because they concisely expressed the dominant public mood in South Korea after the Sewol Ferry Disaster. Intermittent media accounts spotlighting the number of bodies found (cognitive symbols) kept the fading memory of the sunken ship alive. These symbols, deployed by policy entrepreneurs and the media, increased negative opinion and facilitated policy change by attacking the Park administration (see Tables 1 and 2).

Punctuation after increasing negative opinion following the feedback in PET

Meanwhile, the media and opponents highlighted the sunken Sewol Ferry’s problems and the government’s poor performance, in contrast to MOSPA’s earlier safety image. This led to the collapse of the policy monopoly. The Sewol Ferry Disaster caught the attention of many groups and citizens and shifted the issue from subsystem politics to the macro-political stage. Figures 2 to 4 show how budgets and attention (newspaper editorials and Google Trends) changed drastically in response to the disaster, illustrating the punctuation.

figure 2

Increasing Budgets of Public Order and Safety (Cho and Jung, 2019 ).

With budget and attention as the two ways to visualize punctuation, Fig. 2 shows a large change in the budget for Public Order and Safety. Focusing on safety alone, without the public order component, the safety-relevant budget increased by 17.9% (Heo 2014), illustrating a vast punctuation in the safety-relevant budget. Footnote 31 The 17.9% increase Footnote 32 exceeded Wildavsky’s (1984) 10% and Kemp’s (1982) ±10% criteria, the threshold criteria for punctuation. However, this budget change does not fully capture the change in people’s attention.
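The comparison against the ±10% criterion amounts to a simple percentage-change check, as sketched below. The budget figures are placeholders chosen so that the change matches the 17.9% reported in the text; they are not the actual budget data.

```python
# Sketch of the punctuation check applied to budget data: compute the
# year-over-year percentage change and compare it with a +/-10% threshold
# (Wildavsky 1984; Kemp 1982). The budget figures are hypothetical.
PUNCTUATION_THRESHOLD = 10.0  # percent

def is_punctuation(previous_budget: float, current_budget: float) -> bool:
    change = 100 * (current_budget - previous_budget) / previous_budget
    print(f"Budget change: {change:.1f}%")
    return abs(change) > PUNCTUATION_THRESHOLD

# Hypothetical safety-relevant budget (arbitrary units) before/after the disaster.
print(is_punctuation(100.0, 117.9))  # 17.9% > 10% -> True
```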

This study also used a change in attention as another way to view punctuation. Figures 3 and 4 show that the attention of the liberal and conservative newspapers, and of people’s Google searches, to the Sewol Ferry Disaster changed abruptly in response to the sinking (Cho and Jung 2019), demonstrating the punctuation of media attention.

figure 3

The Number of Liberal ( Hankyoreh ) vs. Conservative ( Chosun ) Newspaper Editorials After Sewol Ferry Disaster (Cho and Jung 2019 ).

figure 4

Google Trends for “Sewol Ferry” Searches in South Korea. https://trends.google.com/trends/explore?geo=KR&q=sewol%20ferry .

These drastic increases in budget and attention (Figs. 2–4) contributed to the collapse of the policy monopoly among bureaucrats, politicians, and interest groups in the marine industry and increased other groups’ and citizens’ interest in the policy after the Sewol Ferry Disaster.

Pet solutions and feedback

Pet solutions: Politicians also employed their traditional pet solutions, which led to the resignation of Prime Minister Chung Hong-won, the pursuit of Mr. Yoo, the reorganization of the government structure, the dismantlement of the KCG as a scapegoat, the drafting of new legislation, the creation of an investigative commission, and increased budgets to prevent similar accidents (Cho and Jung 2019). These pet solutions (committees, reorganization, scapegoating, and drafting acts) were meant to console the victims and to stabilize negative opinion and change. After the June 4 local elections, the three Sewol Acts Footnote 33 were passed, following long and tedious debate and compromise between the ruling and opposing parties, to prevent future disasters. Subsequently, the effort to retrieve the missing bodies ended, and the Ministry of Public Safety and Security (MPSS) was established (negative feedback).

Feedback: From the PET perspective, the political system provided both positive and negative feedback that reinforced and undermined policy change (Cho and Jung 2019; Larsen 2019) regarding the Sewol Ferry sinking. Two responses originally intended as negative feedback to reduce change failed and turned positive (Cho and Jung 2019). One was the resignation of Prime Minister Chung. President Park’s administration tried to deflect criticism when Prime Minister Chung resigned (negative feedback), accepting responsibility for the accident three weeks after the ferry sank. To make matters worse, neither of the two nominees for prime minister was appointed, owing to ethical concerns, which increased criticism of President Park’s administration (positive feedback). The controversy ended with the resigned Prime Minister reinstated in his position 12 weeks after the disaster. This failure converted the negative feedback into positive feedback.

The other negative feedback was associated with the pursuit of Mr. Yoo. The president charged the owner of the Sewol Ferry’s operating company, Yoo Byung-eun, with crimes because he had ordered the reconstruction of the Sewol Ferry before the accident; many observers presumed that the remodeled ship had been loaded above its capacity, one of the main contributors to the accident. President Park’s ruling party, the government, and the news media portrayed Yoo Byung-eun as cunning. The government planned to recover the cost of the rescue operation by fining Yoo and his family, while the ruling party drafted the Yoo Byung-eun Law (Kim 2014). However, the government failed to find him, and his decomposed body was later discovered and initially mistaken for that of a homeless person, undermining confidence in the Park administration’s abilities. Thus, the Park administration’s intended negative feedback, the pursuit of Mr. Yoo, converted into positive feedback guiding new policies (Cho and Jung 2019).

Subsequently, President Park’s administration decided to dismantle the KCG on May 19, 2014, due to the KCG’s poor performance in rescuing people and created the MPSS to maintain stability and absorb responsibilities from the KCG (positive feedback). After the June 4 local election, long debates on the Sewol Special Act included a special investigation by the committee comprising liberal and conservative politicians, which lasted until the end of September. Ultimately, they passed the three Sewol Acts on November 7, 2014, which could be viewed as positive feedback confirming the change (Cho and Jung 2019 ). In the meantime, officials announced the end of the mission to retrieve the bodies on November 11, 2014, while the Korean government established the MPSS, which merged both KCG and the National Emergency Management Agency, on November 19, 2014. These events could be described as negative feedback supporting new safety.

HTE from the content analysis

Meanwhile, the liberal and conservative newspapers’ reactions to each feedback and symbol varied (see Tables 1 and 2 ).

Tables 1 and 2 show the newspapers’ varied opinions and responses to the symbols and feedback. The liberal newspaper tended to blame the conservative Park administration more and to express more positive opinions of and condolences to the victims, their families, and the survivors than the conservative newspaper did. This also implies that the conservative newspaper, expressing less negative opinion, was more lenient toward the conservative Park administration than the liberal newspaper.

Regression analysis and opinion network

The previous section presented a case study and content analysis of the newspapers’ responses to different types of symbols and feedback, depending on their stance, after the Sewol Ferry Disaster. This motivated the opinion network and regression analyses, which can uncover a more rigorous relationship, because a case study cannot assess how each outlet’s negative opinion reflected the symbols and feedback while controlling for other variables. To increase reliability, validity, and objectivity, I further examined whether the negative opinions of newspapers with different ideologies reflected symbols and feedback, using regression analysis and opinion networks.

Hypothesis 1

Korean citizens expressed condolences, pity, and sympathy and comforted the victims and their families, especially the high school victims and survivors, after the Sewol Ferry Disaster. Symbols convert complex ideas into simple ones by condensing a situation emotionally and cognitively. Innocent, helpless, and blameless victims are symbolic during disasters (Schneider 1995). Just as a person, word, phrase, or speech can be a symbol (Edelman 1964; Schneider 1995), mentioning the victims delivered a symbolic message quickly and effectively during the Sewol Ferry Disaster. Society’s consensus or division on issues is intertwined with people’s emotions, which is a useful starting point for analyzing a mass response (Edelman 1964). During the disaster, the victims, their families, the high school victims, and the survivors were offered condolences, pity, sympathy, and comfort and thus represented a symbolic element of innocence and helplessness, expressed as positive opinions that heightened people’s emotions, such as affective states and feelings. Meanwhile, the media reported the number of retrieved bodies to the public. This study treated the number of recovered missing bodies as a cognitive symbol because people can easily grasp such information. Expressing condolences to the victims, their families, and the survivors (emotional symbols) and sporadically discovering the victims’ bodies (cognitive symbols) triggered people’s emotions and cognition. Policy entrepreneurs used these symbols to further blame President Park’s administration and to couple the streams (Zahariadis 2007).

Accordingly, hypothesis 1 investigates how emotional and cognitive symbols (the victims) influenced the negative opinion of President Park’s administration, which sought to avoid blame as policy change was initiated during agenda setting and policy formulation. I assumed that symbols, as one of policy entrepreneurs’ manipulation strategies, play an important role in blaming President Park’s administration, and I tested the influence of emotional and cognitive symbols on the negative opinion expressed by the liberal and conservative newspapers pushing for policy change, given that symbols have both emotional and cognitive effects.

Hypothesis 1.1: Emotional symbols (positive opinion of the victims) increase the negative opinion of President Park’s administration .

Hypothesis 1.2: Cognitive symbols (the number of retrieved bodies) increase the negative opinion of President Park’s administration .

Hypothesis 2

The political system experienced and responded to both positive and negative feedback: the responses of the Park administration, politicians, policy actors, and the media to the Sewol Ferry Disaster produced both. Considering that negative feedback reduces negative opinion by stabilizing the system and restoring equilibrium, while positive feedback increases negative opinion by amplifying public attention, hypothesis 2 examines how positive and negative feedback influenced the negative opinions of the liberal and conservative newspapers.

Hypothesis 2.1: The resignation of Prime Minister Chung (positive feedback) increased the negative opinion of President Park’s administration .

Hypothesis 2.2: Chasing Mr. Yoo (positive feedback) increased the negative opinion of President Park’s administration .

Hypothesis 2.3: Announcing the Dismantling of KCG (positive feedback) increased the negative opinion of President Park’s administration .

Hypothesis 2.4: Passing Bills and Ending Mission (positive and negative feedback) did not influence the negative opinion of President Park’s administration because both feedback responses were canceled out .

Hypothesis 2.5: Establishing MPSS (negative feedback) decreased the negative opinion of President Park’s administration .

Hypothesis 3

Considering the prior results, I compared how newspapers with different ideologies (liberal and conservative) responded to emotional and cognitive symbols and to positive and negative feedback under the conservative administration. From the HTE perspective, newspapers can be biased: under a conservative administration, the liberal newspaper would be expected to express negative opinions of the administration more frequently than the conservative newspaper. Expanding on hypotheses 1 and 2 and the case study, I proposed the following:

Hypothesis 3.1: Under the conservative administration, the conservative newspaper is more lenient towards the conservative Park administration and thus less likely to express a negative opinion of President Park’s administration when responding to emotional and cognitive symbols than the liberal newspaper .

Hypothesis 3.2: Under the conservative administration, the conservative newspaper is more lenient towards the conservative Park administration and thus less likely to express a negative opinion of President Park’s administration when responding to positive and negative feedback than the liberal newspaper .

Variable and measurement

Dependent variable: negative opinion as a forecast of policy change.

For the dependent variable, this study analyzed the negative opinions of the agencies and actors after the Sewol Ferry Disaster because such opinions can influence policy change and politicians’ reelection and reputation. Owing to the poor performance of the government and the relevant agencies and people, negative opinions dominated the entire nation. The negative opinion of President Park’s administration provoked policy change, as described in the case study above. Citizens, media, and social advocates offered condolences, which served as symbols, to the victims of the Sewol Ferry Disaster, while the three Sewol Acts and other policies or feedback responses were created to address grievances; these influenced the negative opinions of the liberal and conservative newspapers. Accordingly, this study treated the magnitude of negative opinion as an indicator of policy change after the Sewol Ferry Disaster. I used centrality Footnote 34 [the degree centrality of each node (in-degree Footnote 35 and sum of weights)] to measure certain variables. For the measurements of all variables, see Table 3.
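As a hedged sketch of how a node's weighted in-degree (the sum of incoming signed weights) could be extracted from a weekly opinion network with networkx, consider the example below. The node name and weekly edge list are hypothetical, not the study's coded data.

```python
# Sketch: measuring a node's negative opinion as its weighted in-degree, i.e.,
# the sum of signed weights on edges pointing to that node in a given week.
# The weekly edge list and node name are hypothetical.
import networkx as nx

def weighted_in_degree(edge_list, node):
    G = nx.DiGraph()
    G.add_weighted_edges_from(edge_list)  # (source, target, signed weight)
    # in_degree with weight="weight" sums the signed weights of incoming edges.
    return G.in_degree(node, weight="weight") if node in G else 0.0

week_edges = [
    ("liberal_editorial", "Park_administration", -2),
    ("conservative_editorial", "Park_administration", -1),
    ("liberal_editorial", "victims_families", +2),
]
print(weighted_in_degree(week_edges, "Park_administration"))  # -3
```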

Control variables

Regarding the control variables, the government influenced the degree of blame imposed on President Park’s administration. However, the government as a whole, apart from the heads of ministries, did not represent President Park’s administration, because most bureaucrats were not affiliated with President Park or with political parties. I therefore separated the negative opinion of the government and used it as a control variable: in South Korea, government employees are prohibited by law from joining political parties. In particular, the KCG, a government agency staffed by government employees, bore the greatest responsibility for the accident, so I also separated the KCG from the government and controlled for it to examine the differing negative opinions of the Park administration in the liberal and conservative newspapers. In addition, economic factors could have influenced the blame attributed to President Park’s administration, so I controlled for them.

Furthermore, partisan shifts also influence policy change (John 2006), so I controlled for the partisan shift before and after the June 4 local election. I used the measurements in Table 3 to operationalize these control variables.
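A hedged sketch of the kind of weekly OLS specification implied by the hypotheses is shown below, assuming the opinion network measures have been assembled into a table. The file name, column names, and model form are hypothetical placeholders consistent with the variables described above, not the study's exact specification.

```python
# Hedged sketch: weekly negative opinion of the Park administration regressed
# on symbol measures, feedback-period dummies, and controls, estimated
# separately for each newspaper. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_opinion_network.csv")  # hypothetical assembled dataset

formula = (
    "neg_opinion_park ~ emotional_symbol + cognitive_symbol"
    " + fb_pm_resignation + fb_yoo_pursuit + fb_kcg_dismantle"
    " + fb_bills_mission + fb_mpss"
    " + neg_opinion_gov + neg_opinion_kcg + kospi + post_election"
)

for paper in ("Hankyoreh", "Chosun"):
    model = smf.ols(formula, data=df[df["newspaper"] == paper]).fit()
    print(paper)
    print(model.summary())
```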

Results and findings

Table 4 describes the data. I collected the data from newspapers, government documents, the Korea Composite Stock Price Index (KOSPI), and editorials from the websites of the Hankyoreh and Chosun newspapers, and then conducted the content analysis with a second coder (see Appendix 1 for more detail). Because the dependent variable was negative opinion, all values were zero or below. These values show that the conservative newspaper blamed President Park’s administration less than the liberal one, and that the conservative newspaper used emotional symbols less than the liberal newspaper (see Table 4), consistent with the previous analysis (see Table 1).

I analyzed the data in light of prior studies, as shown in Table 5. Overall, emotional symbols had more influence on the negative opinion of the liberal newspaper than on that of the conservative newspaper, while cognitive symbols had no effect on negative opinion in either newspaper. Additionally, the negative opinion associated with positive and negative feedback varied with the situation.

Explanations of emotional and cognitive symbols from MSA influence on negative opinion

For the Hankyoreh and Chosun newspapers, Table 5 shows that emotional symbols played a significantly larger role than cognitive symbols in blaming President Park’s administration, regardless of the newspaper’s perspective. Footnote 36 This indicates that the media reacted more sensitively to emotional than to cognitive symbols. Figure 5 illustrates that the Hankyoreh’s negative opinion dropped slightly below –0.4, while the Chosun’s decreased to around –0.3. These findings support the conclusion that emotional symbols, rather than cognitive symbols, influenced the negative opinion of President Park’s administration, particularly in the case of the liberal Hankyoreh. Regarding hypothesis 3, under the conservative administration the liberal newspaper was more likely to employ symbols than the conservative one, implying that the conservative newspaper was more lenient.

figure 5

Scatter Plot of Emotional Symbols and a Negative Opinion of the Park Administration.

Explanations of positive and negative feedback in PET and its effects on negative opinion

The results in Table 5 show how the liberal and conservative newspapers responded to the positive and negative feedback under the conservative administration. The liberal Hankyoreh (then aligned with the opposition) reacted more sensitively to the positive and negative feedback than the conservative Chosun (then aligned with the ruling party). Footnote 37 The conservative newspaper did, however, respond to the resignation of Prime Minister Chung. The newspapers’ different responses imply that they were not objective and further suggest that positive and negative feedback do not always work in unison. Although positive feedback should theoretically facilitate change, the prime minister’s resignation, originally intended as negative feedback to reduce negative opinion, failed, and once it converted into positive feedback it increased the negative opinion of both newspapers regardless of their perspective. Footnote 38 This positive feedback, transformed from negative feedback, thus boosted the negative opinion of President Park’s administration in both newspapers.

On the other hand, the positive feedback related to Mr. Yoo, which may also have been intended to reduce negative opinion, did not influence either newspaper’s negative opinion of President Park’s administration. Footnote 39 In addition, the announcement of the dismantling of the KCG exacerbated only the liberal newspaper’s negative opinion of President Park’s administration. For the passing of the bills (the three Sewol Acts) and the ending of the mission, where positive and negative feedback were mixed, negative feedback tended to outweigh positive feedback in the liberal newspaper. Footnote 40 Establishing the MPSS might not have operated as pure negative feedback, because some issues surrounding the Sewol Ferry Disaster were resolved while new issues emerged. Furthermore, although some members of the public argued that the Sewol Ferry issue remained critical, most people had come to find it tedious and stale.

Comparing the symbol and feedback influence on negative opinion from the HTE perspective

Table 6 shows that the conservative newspaper was more lenient than the liberal one when criticizing the Park administration.

Regarding hypothesis 3, the liberal newspaper was more likely than the conservative one to employ the symbols under the conservative administration. The results imply that emotional and cognitive symbols and positive and negative feedback do not always provoke the same response once each outlet’s ideology is considered, as shown in Tables 5 and 6. Emotional symbols raised negative opinions of President Park’s administration, and regarding their magnitude, while both newspapers blamed the Park administration, the liberal Hankyoreh expressed stronger negative views than the conservative Chosun. This also shows that the liberal newspaper, unlike the conservative one, relied on emotional rather than cognitive symbols to blame the conservative President Park’s administration.

Tables 5 and 6 show that under the conservative administration, positive feedback could increase the momentum of policy change by boosting the liberal newspaper’s negative opinion more than the conservative newspaper’s. Both tables show that the liberal and conservative newspapers criticized failed feedback, such as the prime minister’s resignation. Footnote 41 Furthermore, the liberal newspaper responded to the dismantling of the KCG and the establishment of the MPSS with significantly more negative opinion than the conservative newspaper, indicating that the liberal newspaper was more sensitive than the conservative newspaper during the conservative administration. These results generally support the HTE argument that people with opposing ideologies tend to oppose the other side’s policies more strongly; in other words, the conservative newspaper was more lenient than the liberal newspaper under the conservative administration.

In addition, the control variable analysis supported this study’s argument. Interestingly, for the control variables in Table 5, the conservative newspaper blamed the KCG rather than the conservative Park administration, while the liberal newspaper blamed the government. The government is often equated with the conservative Park administration, even though not all government employees are partisans, so the government could serve as an alternative target for the liberal newspaper opposing the conservative Park administration. This implies that the liberal newspaper blamed the government as an entity linked to the Park administration. The conservative newspaper, by contrast, directed its negative opinion at the KCG, suggesting that it did not hold the Park administration directly responsible for the Sewol Ferry Disaster; instead, it attributed responsibility to the KCG, which had the primary and direct obligation to rescue the students from the sinking ship but failed. Blaming the KCG instead of the Park administration indicates that the conservative newspaper supported the conservative Park administration, confirming the HTE.

Alternative analysis with opinion network

I added an alternative example to visualize changes in negative opinion and thereby aid understanding of the opinion network. To visualize hypothesis 2.4 in Table 5, Fig. 6 illustrates how the magnitude of the Hankyoreh’s negative opinion of President Park decreased after the passing of the bills and the ending of the body-retrieval mission. The figure depicts how the liberal newspaper’s negative opinion declined under the conservative administration, consistent with the HTE, using the opinion network to show its response to the feedback. This helps scholars and laypersons grasp the change more easily without a case study or regression analysis.

figure 6

Example of Negative Opinion of President Park in Liberal Newspaper: Differences before and after the Passing of the Bills and Ending the Mission.
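As a minimal sketch of how a figure like Fig. 6 could be drawn, the example below colors edges by the sign of their weight and scales edge widths by intensity, using networkx and matplotlib. The graph, node names, and layout are hypothetical illustrations, not the study's actual network.

```python
# Sketch: drawing an opinion network with edge colors keyed to opinion sign
# (red = negative, blue = positive) and widths proportional to intensity.
# The example graph is hypothetical.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Hankyoreh", "President_Park", -3),
    ("Hankyoreh", "victims_families", +2),
    ("Hankyoreh", "KCG", -1),
])

pos = nx.spring_layout(G, seed=1)
colors = ["red" if d["weight"] < 0 else "blue" for _, _, d in G.edges(data=True)]
widths = [abs(d["weight"]) for _, _, d in G.edges(data=True)]

nx.draw_networkx_nodes(G, pos, node_color="lightgray")
nx.draw_networkx_labels(G, pos, font_size=8)
nx.draw_networkx_edges(G, pos, edge_color=colors, width=widths, arrows=True)
plt.axis("off")
plt.show()
```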

Discussion and conclusion

This study revealed the mechanisms underlying the use of symbols in the MSA, feedback in the PET, and the newspapers’ reactions to the policy process in line with the HTE. I used qualitative and quantitative evidence to describe the use of symbols and feedback in newspapers with different ideological perspectives after a controversial disaster. Although newspapers influence the opinion networks of agencies and actors throughout policy processes, studies of symbols in MSA and of positive and negative feedback in PET using opinion networks are limited, and how symbols and feedback operate according to a newspaper’s political ideology has not been studied in public policy from the HTE perspective. The qualitative case study told the story of the Sewol Ferry Disaster through symbols and feedback, while the quantitative opinion network analysis described how symbols and feedback functioned depending on the newspaper’s ideology. Together, the qualitative and quantitative analyses clarified how media with different ideological perspectives react to symbols and feedback in their negative opinions.

Overall, the qualitative and quantitative results supported the propositions and hypotheses concerning the Sewol Ferry Disaster and also identified aspects that did not support them. The case study, regression analysis, and opinion network analysis yielded consistent results. In terms of the MSA, the findings confirmed that policy entrepreneurs relied primarily on emotional rather than cognitive symbols to intensify blame on President Park’s administration. The study also revealed that the liberal newspaper responded to the symbols more sensitively and that emotional symbols were more influential than cognitive symbols, regardless of the newspaper. These results underscore the importance of emotion in raising policy actors’ negative opinions and imply that the masses tend to respond to impressive speech rather than facts (Edelman 1964), with emotional symbols boosting negative opinions more than cognitive symbols.

Regarding the positive and negative feedback in PET, the opinion network showed that the liberal newspaper reacted to the Sewol Ferry sinking more sensitively than the conservative newspaper. Even with the same feedback, negative opinions varied with the newspapers’ perspectives, suggesting that a newspaper opposed to the current administration tends to be more sensitive than one that supports it. In theory, positive feedback facilitates change by increasing negative opinion, while negative feedback thwarts change by decreasing it. However, this argument held mostly for the liberal newspaper under the conservative administration, and the feedback effect depended on the situation: the failure of negative feedback could turn into positive feedback through criticism. This is consistent with the claims that positive and negative feedback have no definite effect because of their time variance (Larsen 2019) and that they are difficult to discern (Cho and Jung 2019). This study therefore suggests reconsidering these issues.

Regarding the HTE, the conservative newspaper was more lenient than the liberal one in responding to symbols and feedback under the conservative administration. Although both newspapers pursued objectivity, their use of symbols and their responses to feedback did not always move in tandem as expected; if they had been unbiased and objective, their responses to the symbols and feedback should have been the same. The findings imply that the newspapers’ symbol use and feedback responses differed according to their stances, consistent with the HTE’s main argument. Thus, by employing the opinion network and content analysis, the research findings extend the application of MSA, PET, and HTE within policy theory.

This research focused on the symbols and feedback employed in politics, using opinion networks. The results can help researchers determine and empirically test how symbols and feedback influence negative opinions (Larsen 2019). Ultimately, this study used the opinion network, with its nodes and links, to describe the differing responses of media with different ideologies. The findings provide qualitative and quantitative evidence for the dynamic effects of symbols and feedback on negative opinions within the MSA and PET frameworks. Additionally, the study added negative opinion as another way to depict attention: the opinion network made it possible to measure attention and opinion quantitatively and to visualize opinion and punctuation alongside budgets and attention.

Additionally, regarding the data collection method, I assembled the opinion network data by merging opinion mining and network analysis with protocol-based content analysis, which can contribute to the work of scholars and practitioners. This data collection method helps scholars study media and government documents, and the opinion network can visualize opinion change in published text. It could also be applied in tools that graphically depict the positive and negative relationships between actors in news content, so that readers from all walks of life could easily and quickly grasp the basic relationships between actors, agendas, and issues in political and practical contexts without reading the entire text.

This study implies that public policy scholars and practitioners can apply this method to account for the media’s responses while considering the administration’s and the newspapers’ political stances. Policy actors must consider the symbols and feedback surrounding a policy, and the media’s stance, when introducing or implementing policies.

Future studies could analyze other controversial issues and their influence on the opinions of media beyond newspapers. The media shape how the public, or constituents, evaluate the government’s performance. Although this study examined newspapers’ responses, future studies could employ opinion networks and big data to evaluate policy responses or other issues from the standpoint of other media (social media, online websites, blogs, replies, and all online text), especially as newspaper audiences shrink and become less influential. Future research could also investigate the effects of other symbols and examine whether feedback effects cancel out, why they occur, when they change, and how positive and negative feedback can be discerned across ideologies, countries, languages, or geographical areas.

Data availability

The datasets used and analyzed in the current study are available in the editorial sections of the Hankyoreh’s and Chosun Ilbo’s newspaper repositories at https://www.hani.co.kr and https://www.chosun.com, respectively.

The Blue House is the Korean equivalent of the White House.

Drawing on the Garbage Can Model proposed earlier by Cohen et al. (1972), John Kingdon (1984) introduced the MSA to describe the emergence of new political policies.

Pet solution means advocates’ preferred resolution to problems (Bérut 2023 ; Kingdon 1995 ; Zahariadis 2014 ).

The politics stream is composed of three elements, “national mood, pressure group campaigns, and administrative or legislative turnover” (Zahariadis 2007: 73; Zahariadis 2014: 34).

“Coupling” the streams is a public policy term implying the merging and manipulation of the three streams to bring about change.

Kingdon ( 1995 ) defined a policy window as an “opportunity for advocates of proposals to push their pet solutions, or to push attention to their special problems” (Kingdon, 1995 : 165).

According to Murray Edelman, as stated in his seminal text, The Symbolic Uses of Politics , “man is constantly creating and responding to symbols” (Edelman 1964 : 178). Edelman mentioned that a “symbol can be understood as a way of organizing a repertory of cognition into meaning” (Edelman 1971 : 34). As long as a symbol represents beyond objective content with a special meaning or value about an issue, symbol can be anything, “a word, a phrase, a gesture, an object, a person, an event” (Elder and Cobb 1983 ; Schneider 1995 : 12), “acts, speeches and gestures” (Edelman 1964 : 188) with embedded meaning or value about an issue. A symbol makes an idea easy to understand and transmits it by way of a “short cut” and “stereotyped” image, such as pictures and TV images (Birkland 1997 : 12). This helps symbols define an issue (Cobb et al. 1976 ).

Emotion. http://www.dictionary.com/browse/emotion Accessed 8 September 2023

Cognitive. http://www.dictionary.com/browse/cognitive Accessed 8 September 2023

The main concept of “punctuated equilibria” originated from the “discontinuous tempos of change” (Gould and Eldredge 1977 : 145).

A policy monopoly is defined as a situation in which only a few relevant actors can access the policy process, and it is associated with a single, widely acknowledged image that supports the policy (Baumgartner et al. 2014). In a policy monopoly, “established interests tend to dampen departures from inertia until policy mobilization” occurs (Baumgartner et al. 2014: 67), and the monopoly rests on the supporting policy image (Baumgartner and Jones 1993). When a broken policy monopoly brings about a shift in how the issue is defined, the issue is forced to move from the subsystem to macro politics (True et al. 2007).

PET relies on the policy image mechanism as one way to understand policy change (Baumgartner et al. 2014). A policy image consists of mixed components of “empirical information and emotive appeals” (Baumgartner et al. 2014: 66) associated with “core political values,” communicated to the public with simplicity and directness (Baumgartner and Jones 1993: 5-7; Baumgartner et al. 2014: 64).

Friction is another indicator of policy change: it delays reactions to issues until the pressure builds enough to overcome institutional resistance, which leads to punctuation (Baumgartner et al. 2014).

This is also called “feeding frenzy” and “bandwagon effect” (Baumgartner et al. 2014 : 64–65).

Media in China tend to exhibit a “home bias” by acting as pro-government channels that support domestic industries (Kim 2018: 954). Conservative newspapers tend to hold more negative views on immigrant issues than liberal newspapers (Valente et al. 2023).

The essential objective of opinion mining is to extract the orientation and strength of sentiments from subjective data, because this sentiment information shapes the opinion network (Zhou et al. 2009).

In addition, more coverage of a topic enhances its perceived importance (McCombs and Shaw 1972; Wanta et al. 2004). Baumgartner and Jones (1993) also noted that people expect policy to change to some extent following increased media attention.

In a deductive approach, theoretical propositions are derived from theoretical models (Cavaye 1996). The case study followed the researchers’ theoretical propositions to guide logical prediction and conclusions (Yin 2014), which is useful for theory development (Yin 2014) based on the tested propositions (Cavaye 1996; Cho and Jung 2019).

Network analysis investigates the relationship between senders and receivers, that is, between starting and ending points connected by a direction, although it can also represent relationships without opinion directions.

Most messages are assumed to reflect the sender’s psychological state (Riffe et al. 2014). Thus, content analysis has been employed in journalism, sociology, political science (Rowling et al. 2011), psychology, and economics (Riffe et al. 2014). Content analysis can use different kinds of rhetorical objects as data (e.g., interviews, letters, diaries, journals, school essays, and newspaper stories) (Zullow et al. 1988). Because these measures are unobtrusive, they do not suffer the limitations inherent in questionnaire or survey data collection.

Opinion mining is also known as “sentiment analysis” (Liu 2012: 1). Sentiment is closely related to opinion: sentiments can be individual evaluations, “usually measurements of positive or negative affect of one person for another” (Wasserman and Faust 1994: 37). Opinion mining has also been referred to as “sentiment analysis, opinion extraction, sentiment mining, subjectivity analysis, affect analysis, emotion analysis, review mining, etc.,” with marginally different tasks (Liu 2012: 1).

Opinion network is based on the “implicit opinion orientation” in online context, unlike the traditional social network connecting people through “explicit” relationships, such as friendship, consumption, and employment (Zhou et al. 2009 : 266).

Unlike the opinion network, the semantic network identifies the “most frequently occurring words in a text and determines the pattern of similarity based on their co-occurrence” (Doerfel and Barnett 1999: 592), without capturing positive or negative relationships.

The chain of evidence covers “the links—showing how findings come from the data that were collected and in turn from the guidelines in the case study protocol and from the original research questions—that strengthen the reliability of a case study’s research procedures” (Yin 2014 : 38).

In the many studies on newspapers’ political-ideological perspectives in South Korea, scholars have commonly used the Hankyoreh and the Chosun as the representative liberal and conservative newspapers, respectively (Baek 1997; Kim and Choi 2016; Kwon 2016).

I did not have to collect data from all media sources because studies indicate a high degree of similarity in issue content across different media outlets (Thesen 2013 ; Vliegenthart and Walgrave 2008 ). Since my interest here was not to compare different media, two representative newspaper sources with different political ideologies were sufficient to compare the effects of their different ideologies on the negative opinion.

Editorials in Korean and U.S. newspapers are written differently. Editorials in the U.S. do not necessarily share the same perspective as the newspaper’s articles. However, newspaper editorials in South Korea generally share the perspectives of the articles, because editors and editorial writers are all considered representatives of their newspapers.

“An edge list is a two-column list of the two nodes that are connected in a network” (Shizuka Lab 2013 ).
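To make this structure concrete, here is a minimal sketch in Python of such an edge list as used in this kind of coding (the rows, node names, and weights are hypothetical illustrations, not actual coded data):

```python
# A minimal, hypothetical edge list: each row is (sender, receiver, opinion weight).
# The +1/-1 weights follow the coding scheme described in the protocol; these rows
# are invented for illustration only.
edge_list = [
    ("Editorial", "Government", -1),  # a negative opinion directed at the government
    ("Editorial", "Victims",    +1),  # a positive opinion (e.g., offering condolences)
    ("Citizens",  "President",  -1),
]

for sender, receiver, weight in edge_list:
    print(f"{sender} -> {receiver}: {weight:+d}")
```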

Although coding can be automated, a computer’s reasoning limitations can introduce errors; thus, this study conducted the coding manually, which took several months. Two coders coded the data: the first was the author of this manuscript, and the second was a visiting scholar who majored in law and had served as a judge in South Korea. To boost data accuracy, the two coders added or corrected missing nodes and opinions by mutual agreement after checking intercoder reliability. In the end, 100% intercoder reliability was achieved after adjusting codes through discussion, which improved data accuracy and increased the number of observations without missing nodes and opinions.

Media, diversity, and content manipulation. https://freedomhouse.org/report/freedom-net/2015/south-korea Accessed 22 February 2016.

Furthermore, other budgets increased as well. For instance, local offices of education raised their safety budgets, and the Ministry of Oceans and Fisheries’ budget increased after the Sewol Ferry Disaster (Cho 2017; Park 2014).

However, a few opponents have argued that most of the budget increase derived from increased social overhead capital investment, such as road maintenance and dam construction, rather than from the expansion of safety projects and personnel (People’s Solidarity for Participatory Democracy 2014).

The Sewol Special Act (act to investigate the causes of the disaster for prevention of future disasters), the Government Organization Act, and the Yoo Byung-eun Act (Act on Regulation and Punishment of Criminal Proceeds Concealment).

Centrality “is a commonly used conceptual tool for exploring actor roles in social networks, and ‘degree centrality’ refers to the number of ties a node has to other nodes” (Carr et al. 2017: 217; Wasserman and Faust 1994). Essentially, higher centrality means more ties with other actors (nodes in the newspaper editorials). Wasserman and Faust (1994) defined actor centrality simply: “central actors must be the most active in the sense that they have the most ties to other actors in the network or graph” (Wasserman and Faust 1994: 178). The actor-level degree centrality index is C_D(n_i) = d(n_i) (Wasserman and Faust 1994). Because this measure varies with the group size g, the standardized measure used in the Netminer program and in this study is C′_D(n_i) = d(n_i)/(g − 1) (Wasserman and Faust 1994).

In-degree is “the number of arcs terminating at n_i (a node)” (Wasserman and Faust 1994: 126). This means that a greater absolute value of centrality represents a higher magnitude of opinion.
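As a rough illustration of these two definitions (not a reproduction of the Netminer computation), the standardized degree centrality C′_D(n_i) = d(n_i)/(g − 1) and a signed, weighted in-degree can be computed directly from an edge list; all values below are hypothetical:

```python
from collections import defaultdict

# Hypothetical signed edges: (sender, receiver, opinion weight in {+1, -1, +2, -2}).
edges = [
    ("Editorial", "Government", -1),
    ("Editorial", "Government", -2),
    ("Editorial", "Victims",    +1),
    ("Citizens",  "Government", -1),
]

nodes = {n for s, r, _ in edges for n in (s, r)}
g = len(nodes)  # group size

degree = defaultdict(int)       # number of ties touching each node
signed_in = defaultdict(float)  # sum of opinion weights received (signed in-degree)

for sender, receiver, weight in edges:
    degree[sender] += 1
    degree[receiver] += 1
    signed_in[receiver] += weight

for n in sorted(nodes):
    std_degree = degree[n] / (g - 1)  # C'_D(n_i) = d(n_i) / (g - 1)
    print(f"{n}: standardized degree = {std_degree:.3f}, signed in-degree = {signed_in[n]:+.1f}")
```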

Cognitive symbols were not as significant because newspapers did not focus on President Park’s regime at the beginning of the Sewol Ferry Disaster, when many bodies were being retrieved and it was not yet clear who was responsible. Furthermore, the media could not show the retrieved bodies, because broadcasting them could be considered ethically inappropriate and disturbing.

This is because the conservative newspaper’s blame of President Park’s regime was already too low to show the effect of the positive and negative feedback. The numbers in Table 4 show that positive feedback about President Park’s regime worked better in the liberal newspaper than in the conservative newspaper, and that the conservative newspaper was less sensitive in blaming the regime: the magnitude of its worst negative opinion was already low (−0.051), reflecting its overall negative opinion of President Park’s regime. By contrast, Hankyoreh’s lowest negative opinion magnitude (−0.095) was almost twice that of Chosun, and the mean magnitude of the liberal newspaper’s negative opinion was more than double that of the conservative one (−0.019 vs. −0.007) (see Table 4). This means that the effect of positive and negative feedback was less pronounced in the Chosun than in the Hankyoreh, which also supports hypothesis 3.

In the end, President Park gave up on choosing a new prime minister and declined Prime Minister Chung’s resignation, keeping him in office until the Sewol Ferry Disaster issue calmed down.

Interestingly, in the conservative newspaper it was marginally significant (†), which might mean that the conservative newspaper agreed that Mr. Yoo was a criminal in order to reduce the negative opinion of the conservative regime, supporting the HTE stance.

While the combination of positive and negative feedback through the passing of the bills and the ending of the body retrieval mission decreased the liberal newspaper’s negative opinion, the same strategy did not influence the conservative newspaper. This might imply that negative feedback could overtake positive feedback: although the three Sewol Acts were passed, they were not immediately implemented, and most debates ended as too much time had passed and other issues emerged.

Although the resignation of the prime minister was less significant in the liberal newspaper, its magnitude was higher than in the conservative one.

The unit for collecting data was the sentence; I assumed that one sentence asserts one opinion from a single opinion holder (Liu 2012). However, sentences varied in structure, including simple, compound, complex, and compound-complex forms. To deal with these complex situations, this study collected the data through content analysis following the protocol.

Sentiment classification was employed to determine positive and negative reviews of items and entities (Liu 2007; Pang and Lee 2008). I created an example of the sentiment classification after reading the editorials and conducting a mock analysis.

The topic of editorials, pictures of editorials, and descriptions of pictures were weighted double, with a value of +2 for positive opinion or −2 for negative opinion.

The same nodes were named differently depending on the sentence, context, and time. Thus, I standardized the nodes’ names based on a node example.

In general, however, different node labels were recognized as the same node (such as president, she/her, President Park, or President Park Geun-hye) based on the context. The sample nodes instructed the coders how to code such variants, for example as “President Park.”

Ole Holsti’s (1969) formula: reliability = 2M / (N1 + N2), where M is the number of coding decisions on which the two coders agreed, N1 is the number of items coded by Coder 1, and N2 is the number of items coded by Coder 2.
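A minimal sketch of this calculation in Python (the item identifiers and codes below are hypothetical, not the study’s actual coding decisions):

```python
# Holsti's (1969) formula: reliability = 2M / (N1 + N2), where M is the number of
# coinciding coding decisions and N1, N2 are the numbers of items coded by each coder.
# The two dictionaries are invented examples keyed by a sentence identifier.
coder1 = {"ed5_s10": ("I", "it", -1), "ed5_s11": ("They", "it", +1), "ed5_s12": ("Editor", "Gov", -1)}
coder2 = {"ed5_s10": ("I", "it", -1), "ed5_s11": ("They", "it", +1), "ed5_s12": ("Editor", "Gov", +1)}

m = sum(1 for k in coder1 if k in coder2 and coder1[k] == coder2[k])  # coincident codings
reliability = 2 * m / (len(coder1) + len(coder2))
print(f"Holsti intercoder reliability = {reliability:.4f}")  # 2*2 / (3+3) ≈ 0.6667 here
```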

This is based on the main cells for the Senders, Receivers, Content, and Level from the content analysis.

The unit of analysis in the regression was one week, since disseminating an issue takes time: an issue is usually written about, discussed, and spread after it is revealed or announced.
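Because the week is the unit of analysis, the weekly centrality scores can be arranged as a short time series and fed into an ordinary least-squares regression. The sketch below is only a structural illustration: the variable names are hypothetical placeholders and the numbers are synthetic, not the study’s measures from Table 3.

```python
import numpy as np
import statsmodels.api as sm  # assumes statsmodels is installed

rng = np.random.default_rng(0)
n_weeks = 38  # weekly observations, as in the study period

# Hypothetical weekly predictors (placeholders, not the actual Table 3 measures).
emotional_symbol = rng.integers(0, 2, n_weeks)   # 1 if an emotional symbol appeared that week
positive_feedback = rng.integers(0, 2, n_weeks)  # 1 if positive feedback occurred that week

# Synthetic dependent variable standing in for weekly negative-opinion centrality.
negative_opinion = (-0.02
                    - 0.01 * emotional_symbol
                    + 0.01 * positive_feedback
                    + rng.normal(0, 0.005, n_weeks))

X = sm.add_constant(np.column_stack([emotional_symbol, positive_feedback]))
print(sm.OLS(negative_opinion, X).fit().summary())
```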

Examples are as follows: “less or decreased negative; more or increased positive; decreased quantity of negative potential items (NPI); large, larger, or increased quantity of positive potential items (PPI); desirable fact; within the desired value range; produce a large quantity of or more resource; produce no, little, or less waste; consume no, little, or less resource; consume a large quantity of or more waste” (Liu 2012 : 55-56).

Examples are as follows: “less or decreased positive; more or increase negative; decreased quantity of PPI; large, larger, or increased quantity of NPI; undesirable fact; deviate from the desired value range; produce no, little, or less resource; produce some or more waste; consume a large quantity of more resource; consume no, little, or less waste” (Liu 2012 : 55-56).

Adedoyin-Olowe M, Gaber MM, Stahl F (2014). A survey of data mining techniques for social network analysis. J Data Mining Digit Human http://arxiv.org/abs/1312.4617 Accessed 24 March 2024

Amiri H, Chua TS (2012) Sentiment classification using the meaning of words. In Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence

Anderson CJ, Tverdova YV (2003) Corruption, political allegiances, and attitudes toward government in contemporary democracies. Am J political Sci 47(1):91–109


Anduiza E, Gallego A, Muñoz J (2013) Turning a blind eye: experimental evidence of partisan bias in attitudes toward corruption. Comp Political Stud 46(12):1664–1692

Bachrach P, Baratz M (1963) Decision and non-decisions: an analytical framework. Am Political Sci Rev 57:632–642

Bamakan SMH, Nurgaliev I, Qu Q (2019) Opinion leader detection: a methodological review. Expert Syst Appl 115:200–222

Baumgartner FR, Brouard S, Grossman E (2009) Agenda-setting dynamics in France: revisiting the ‘partisan hypothesis’. Fr Politics 7(2):75–95

Baumgartner FR, Jones BD (1991) Agenda dynamics and policy subsystems. J Politics 53(4):1044–1074

Baumgartner FR, Jones BD (1993) Agendas and instability in American politics. University of Chicago Press, Chicago


Baumgartner FR, Jones BD, Mortensen PB (2014) Punctuated equilibrium theory: explaining stability and change in public policymaking. In: Sabatier PA, Weible CM (eds.) Theories of the policy process , 3rd edition. Westview Press, Boulder, CO, pp. 59–103

Barnes MD, Hanson CL, Novilla LM, Meacham AT, McIntyre E, Erickson BC (2008) Analysis of media agenda setting during and after Hurricane Katrina: implications for emergency preparedness, disaster response, and disaster policy. Am J Public Health 98(4):604–610


Baek SG (1997) Types of gossipped news and its deep structure in the Korean newspaper’s coverage of the 15th general election for National Assembly. Korean J Journal. Commun Stud 41(2):41–107

Beebe SA, Beebe SJ (2006) Public speaking: An audience-centered approach (6th ed.). Pearson/Allyn and Bacon, Boston

Béland D (2010) Reconsidering policy feedback: how policies affect politics. Adm Soc 42(5):568–590

Bennett WL (1988) News: The politics of illusion. Longman, New York

Berelson B, Lazarsfeld PF, McPhee WN (1954) Voting. University of Chicago Press, Chicago

Bérut C (2023) The chemical framework: Exploring Europeanisation in French, Austrian, and Irish eHealth policy processes. Gov 36(4):1063–1081

Birkland TA (1997) After disaster: Agenda setting, public policy, and focusing events. Georgetown University Press, Washington

Blais A, Gidengil E, Fournier P, Nevitte N, Everitt J, Kim J (2010) Political judgments, perceptions of facts, and partisan effects. Elect Stud 29(1):1–12

Blais A, Gidengil E, Kilibarda A (2017) Partisanship, information, and perceptions of government corruption. Int J Public Opin Res 29(1):95–110

Blex C, Yasseri T (2022) Positive algorithmic bias cannot stop fragmentation in homophilic networks. J Math Sociol 46(1):80–97


Burgess C (2006) Multiple streams and policy community dynamics: The 1990 NEA Independent Commission. J Arts Manag Law Soc 36(2):104–126

Butts CT (2008) Social network analysis with sna. J Stat Softw 24(6):1–51

Cairney P, Heikkila T (2014) A comparison of theories of the policy process. In: Sabatier PA, Weible CM (eds.) Theories of the policy process, 3rd edition. Westview Press, Boulder, CO, pp. 363–390

Carr JB, Hawkins CV, Westberg DE (2017) An exploration of collaboration risk in joint ventures: perceptions of risk by local economic development officials. Econ Dev Q 31(3):210–227

Carroll JM (1985) What’s in a Name? An Essay in the Psychology of Reference. WH Freeman, New York

Cavaye AL (1996) Case study research: a multi‐faceted research approach for IS. Inf Syst J 6(3):227–242

Chae SH (2014) South Korean leader accepts resignation of Premier over Ferry Disaster. https://www.nytimes.com/2014/04/28/world/asia/south-korean-premier-resigns-over-ferry-disaster.html Accessed 15 October 2023

Chae J (2017) A study on the safety culture in Korea after the Sewol Ferry Disaster. Crisisonomy 13(8):191–206

Cho KW (2017) Political and policy responses to the Sewol Ferry Disaster: Examining change through multiple theory lenses. (Unpublished doctoral dissertation). Florida State University, Tallahassee, United States

Cho KW, Jung K (2019) Illuminating the Sewol Ferry Disaster using the institutional model of punctuated equilibrium theory. Soc Sci J 56:288–303

Cho KW, Park D (2023) Emergency management policy issues during and after COVID-19: focusing on South Korea. J Contemp East Asia 22(1):49–81

Chung J (2013) Conflict management during disasters. Korea Institution of Public Administration, Seoul, South Korea

Chung J (2020) Meaning and implication from the 4th National Basic Safety Management Plan. National Territory . 28–32

Chung CK, Choi JW, Lee SW, Jung JK (2019) Theories of policy sciences. Daemyung, Seoul, South Korea

Cobb R, Ross JK, Ross MH (1976) Agenda building as a comparative political process. Am Political Sci Rev 70(1):126–138

Cohen MD, March JG, Olsen JP (1972) A garbage can model of organizational choice. Adm Sci Q 17:1–25

Creswell JW (2013) Qualitative inquiry and research design: Choosing among five approaches. Sage, Thousand Oaks

Cutlip SM, Center AH, Broom GM (2006) Effective public relations (9th ed.). Pearson Education Inc, Upper Saddle River

Deslatte A, Koebele EA, Wiechman A (2023) Climate-adaptive collective action and ambiguity: Tracing institutional processes in urban water management. Paper presented at the 80th Annual Midwest Political Science Association Conference

Doerfel ML, Barnett GA (1999) A semantic network analysis of the International Communication Association. Hum Commun Res 25(4):589–603

Easton D (1966) Categories for the systems analysis of politics. In Easton, D (ed.), Varieties of Political Theory , Prentice Hall, Englewood Cliff

Eckersley P, Lakoma K (2022) Straddling multiple streams: focusing events, policy entrepreneurs and problem brokers in the governance of English fire and rescue services. Policy Stud 43(5):1001–1020

Edelman M (1964) The symbolic uses of politics. University of Illinois Press, Urbana

Edelman M (1971) Politics as Symbolic Action. Academic Press, New York

Elder CD, Cobb RW (1983) The political uses of symbols. Longman, New York

Erhardt J (2023) Political support through representation by the government: evidence from Dutch panel data. Swiss Political Sci Rev 29:202–222

Fowler L, Neaves TT, Terman JN, Cosby AG (2017) Cultural penetration and punctuated policy change: explaining the evolution of US Energy Policy. Rev Policy Res 34(4):559–577

Fico F, Soffin S (1995) Fairness and balance of selected newspaper coverage of controversial national, state, and local issues. Journal Mass Commun Q 72(3):621–633

Gentzkow M, Shapiro JM (2010) What drives media slant? Evidence from US daily newspapers. Econometrica 78(1):35–71

Gould SJ, Eldredge N (1977) Punctuated equilibria: the tempo and mode of evolution reconsidered. Paleobiology 3(2):115–151

Gould SJ, Eldredge N (1993) Punctuated equilibrium comes of age. Nature 366(18):223–227


Harris P, Kolovos I, Lock A (2001) Who sets the agenda?: An analysis of agenda setting and press coverage in the 1999 Greek European elections. Eur J Mark 35(9/10):1117–1135

Holsti OR (1969) Content analysis for the social sciences and humanities. Addison-Wesley, Reading, MA

Heo J (2014) Without mentioning Sewol Ferry … “17.9% Safety Budget Increase” https://www.joongang.co.kr/article/16268575#home Accessed 15 October 2023

Hwang KM (2015) Looking back on Sewol Ferry Disaster. http://www.koreatimes.co.kr/www/news/nation/2015/04/181176314.html Accessed 6 February 2019

Ibitoye AO, Onifade OF (2022) Social opinion network analytics in community based customer churn prediction. J Big Data 4(2):87–95

Iyengar S, Kinder DR (1987) News that matters: Television and American opinion. University of Chicago Press, Chicago

John P (2006) Explaining policy change: the impact of the media, public opinion and political violence on urban budgets in England. J Eur Public Policy 13(7):1053–1068

John P, Jennings W (2010) Punctuations and turning points in British politics: the policy agenda of the Queen’s speech, 1940–2005. Br J Political Sci 40(3):561–586

Jones BD, Baumgartner FR (2005) The politics of attention: how government prioritizes problems. University of Chicago Press

Jones BD, Baumgartner FR (2012) From there to here: punctuated equilibrium to the general punctuation thesis to a theory of government information processing. Policy Stud J 40(1):1–20

Jones BD, Baumgartner FR, True JL (1998) Policy punctuations: US budget authority, 1947–1995. J Politics 60(1):1–33

Jones BD, Sulkin T, Larsen HA (2003) Policy punctuations in American political institutions. Am Political Sci Rev 97(1):151–169

Jones BD, Baumgartner FR, Breunig C, Wlezien C, Soroka S, Foucault M, François A, Green‐Pedersen C, Koski C, John P, Mortensen PB, Varone F, Walgrave S (2009) A general empirical law of public budgets: a comparative analysis. Am J Political Sci 53(4):855–873

Joly J, Richter F (2023) The calm before the storm: a punctuated equilibrium theory of international politics. Policy Stud J 51(2):265–282

Jung H, Cho KW, Yang K, Kim SY, Liu Y (2023) A comparative analysis of government responses to COVID-19 in the United States, China, and South Korea: Lessons from the early stage of the pandemic. Korea Observer 54(1):29–58

Kang H (1999) Subjectivity and objectivity of press. Media Soc 26:113–145

Kemp KA (1982) Instability in budgeting for federal regulatory agencies. Soc Sci Q 63(4):643–660

Kingdon JW (1984) Agendas, alternatives, and public policies. Little, Brown & Company, Boston, MA

Kingdon JW (1995) Agendas, alternatives, and public policies. (2nd ed.). Harper Collin, New York

Kim D, Choi M (2016) Semantic network analysis on suicide reporting from 2005 to 2014: The Chosun Il-bo and the Hankyoreh. Korean J Journal Commun Stud 60(2):178–208

Kim SE (2018) Media bias against foreign firms as a veiled trade barrier: evidence from Chinese newspapers. Am Political Sci Rev 112(4):954–970

Kim SH, Scheufele DA, Shanahan J (2002) Think about it this way: attribute agenda-setting function of the press and the public’s evaluation of a local issue. Journal Mass Commun Q 79(1):7–25

Kim SK (2014) [South Korea Best Law Award] Yoo Byung-eun Act drafted by Kim, Jae eun. http://the300.mt.co.kr/newsView.html?no=2015012210557655979 Accessed 19 June 2023

Krippendorff K (2013) Content analysis: an introduction to its methodology. Sage, Thousand Oaks

Kwon H (2016) A semantic network analysis of newspaper coverage of the 20th general election in Korea: comparing conservative and progressive newspapers. J Political Commun 42:39–87

Kunda Z (1990) The case for motivated reasoning. Psychol Bull 108(3):480–498


Kunda Z (1999) Social cognition: making sense of people. MIT press, Cambridge


Larsen EG (2019) Policy feedback effects on mass publics: a quantitative review. Policy Stud J 47(2):372–394

Liu B (2007) Web data mining: exploring hyperlinks, contents, and usage data. Springer Science & Business Media

Liu B (2011) Sentiment analysis and opinion mining. http://www.cs.uic.edu/~liub/FBS/Sentiment-Analysis-tutorial-AAAI-2011.pdf Accessed 9 September 2023

Liu B (2012) Sentiment Analysis and Opinion Mining. Morgan & Claypool Publisher

Mazzoleni G, Schulz W (1999) “Mediatization” of politics: a challenge for democracy? Political Commun 16(3):247–261

McCombs ME, Shaw DL (1972) The agenda-setting function of mass media. Public Opin Q 36(2):176–187

McCombs ME (2004) Setting the agenda: The mass media and public opinion. Blackwell, Malden

Pang B, Lee L (2008) Opinion mining and sentiment analysis. Found Trends Inf Retr 2(1-2):1–135

Park S (2014) Ministry of Ocean and Fisheries confirms 2016 budget. http://www.nocutnews.co.kr/news/4336010 Accessed 14 June 2023

People’s Solidarity for Participatory Democracy (2014) Embarrassed bare face of 17.9% increasing safety budget. http://www.ohmynews.com/NWS_Web/View/at_pg.aspx?CNTN_CD=A0002045333 Accessed 25 April 2016

Riffe D, Lacy S, Fico F (2005) Analyzing media messages. Using quantitative content analysis in research. Lawrence Erlbaum Associates, Mawah

Riffe D, Lacy S, Fico F (2014) Analyzing Media Messages Using Quantitative Content Analysis in Research, 3rd edition. Routledge, New York

Rinfret SR (2011) Behind the shadows: Interests, influence, and the US Fish and Wildlife Service. Hum Dimens Wildl 16(1):1–14

Robinson R (2013) Punctuated equilibrium and the supreme court. Policy Stud J 41(4):654–681

Robinson SE, Flink CM, King CM (2014) Organizational history and budgetary punctuation. J Public Adm Res Theory 24(2):459–471

Rowling CM, Jones TM, Sheets P (2011) Some dared call it torture: cultural resonance, Abu Ghraib, and a selectively echoing press. J Commun 61(6):1043–1061

Ruvalcaba-Gomez EA, Criado JI, Gil-Garcia JR (2023) Analyzing open government policy adoption through the multiple streams framework: the roles of policy entrepreneurs in the case of Madrid. Public Policy Adm 38(2):233–264

Ryu C (2018) The influence of newspaper user’s perceived news fairness and credibility on newspaper satisfaction. Kookmin Soc Sci Rev 30(2):73–101

Schattschneider EE (1960) The Semisovereign People. Holt, Rinehart and Winston, New York

Scheufele DA, Tewksbury D (2007) Framing, agenda setting, and priming: the evolution of three media effects models. J Commun 57(1):9–20

Schneider SK (1995) Flirting with disaster: Public management in crisis situations. ME Sharpe, Armonk, NY

Shizuka Lab (2013) Importing data for social network analysis. http://www.shizukalab.com/toolkits/sna/sna_data Accessed 20 November 2015

Simon CA (2010) Public policy: Preferences and outcomes. Longman Pearson Education

Taber CS, Lodge M, Glathar J (2001) The motivated construction of political judgments. In: Kuklinski JH (ed.), Citizens and Politics: Perspectives from Political Psychology. Cambridge University Press, New York, pp. 198–226

Thesen G (2013) When good news is scarce and bad news is good: government responsibilities and opposition possibilities in political agenda‐setting. Eur J Political Res 52(3):364–389

True JL, Jones BD, Baumgartner FR (2007) Punctuated equilibrium theory: explaining stability and change in policymaking. In: Sabatier PA (ed.), Theories of the policy process. Westview Press, Boulder, CO, pp. 155–187

True JL (2000) Avalanches and incrementalism: making policy and budgets in the United States. Am Rev Public Adm 30(1):3–18

Uribe R, Buzeta C, Manzur E, Alvarez I (2021) Determinants of football TV audience: the straight and ancillary effects of the presence of the local team on the FIFA world cup. J Bus Res 127:454–463

Valente A, Tudisca V, Pelliccia A, Cerbara L, Caruso MG (2023) Comparing liberal and conservative newspapers: diverging narratives in representing migrants? J Immigr Refugee Stud 21(3):411–427

van den Dool A, Li J (2023) What do we know about the punctuated equilibrium theory in China? A systematic review and research priorities. Policy Stud J 51:283–305

Van Reeth D (2019) Forecasting Tour de France TV audiences: a multi-country analysis. Int J Forecast 35(2):810–821

Veraksa AN (2013) Symbol as a cognitive tool. Psychol Russia: State Art 6(1):57–65

Vliegenthart R, Walgrave S (2008) The contingency of intermedia agenda setting: a longitudinal study in Belgium. Journal Mass Commun Q 85(4):860–877

Wang G, Chi Y, Liu Y, Wang Y (2019) Studies on a multidimensional public opinion network model and its topic detection algorithm. Inf Process Manag 56(3):584–608

Wanta W, Golan G, Lee C (2004) Agenda setting and international news: media influence on public perceptions of foreign nations. Journal Mass Commun Q 81(2):364–377

Wasserman S, Faust K (1994) Social network analysis: Methods and applications. Cambridge University Press

Weaver RK (1986) The politics of blame avoidance. J Public Policy 6(4):371–398

Wildavsky AB (1984) The Politics of the Budgetary Process, 4th edition. Little, Brown & Company, Boston

Wood RS (2006) The dynamics of incrementalism: subsystems, politics, and public lands. Policy Stud J 34(1):1–16

Yin RK (2014) Case study research: Design and methods. Sage Publications, Thousand Oaks

Zahariadis N (2005) Essence of Political Manipulation: Emotion, Institutions, and Greek Foreign Policy. Peter Lang, New York

Zahariadis N (2007) The Multiple streams framework, In: Sabatier PA (ed.), Theories of the policy process. Westview Press, Boulder, pp. 65–92

Zahariadis N (2014) Ambiguity and multiple streams. In: Sabatier PA, Weible CM (eds.) Theories of the policy process, 3rd edition. Westview Press, Boulder, pp. 25–58

Zhou H, Zeng D, Zhang C (2009). Finding leaders from opinion networks. In 2009 IEEE International Conference on Intelligence and Security Informatics, pp. 266–268

Zullow HM, Oettingen G, Peterson C, Seligman ME (1988) Pessimistic explanatory style in the historical record: CAVing LBJ, presidential candidates, and East versus West Berlin. Am Psychol 43(9):673–682


Acknowledgements

Some parts of this study were derived from Ki Woong Cho’s (Cho 2017 ) doctoral dissertation. This article was developed after it was presented at the Next Generation Public Administration Scholar Seminar in 2019 held by Korean Association for Public Administration (KAPA), the Korean Association for Local Government and Administration Studies in 2018, the American Society for Public Administration (ASPA)’s 4th Annual International Young Scholars Workshop in Public Policy and Administration Research in 2015, and the Northeast Conference on Public Administration (NECoPA) Conference in 2015.

Author information

Authors and affiliations.

Department of Public Administration, College of Social Science, Jeonbuk National University, 567 Baekje‑daero, Deokjin‑gu, Jeonju‑si, Jeonbuk State, 54896, Republic of Korea

Ki Woong Cho


Contributions

The first and corresponding author (single author) wrote all the content and conducted all analyses presented in this article. The author of this article conducted the entire study, including its conception, design, data acquisition for analysis, data analysis and interpretation, article drafting, revising, and final submission and approval, and is responsible for the accuracy and integrity of this work.

Corresponding author

Correspondence to Ki Woong Cho .

Ethics declarations

Competing interests.

The author declares no competing interests.

Ethical approval and informed consent

This article does not include any data from human participants or human subjects. Thus, informed consent statements are unnecessary.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Opinion Network Coding Procedures

Following the protocol, I conducted the content analysis of the opinion network using the Hankyoreh’s and Chosun’s newspaper website repositories. After reading the editorials and identifying opinion sentences, the coder performed content analysis, coding each opinion sentence by filling in each cell of Table 7 in Appendix 1. The data were collected as an edge list following the format in Table 7 in Appendix 1. This procedure allowed me to conduct the case study and to calculate centrality for the regression analysis.

Step 1: Making an edge list from the content analysis

Based on Liu ( 2012 ), the procedure involved collecting data for the opinion network as follows: The first step was to “classify whether a sentence expresses an opinion” (Liu 2012 : 37). While reading a text in the editorial, the coder identified sentences that express a negative or positive opinion. The basic unit for collecting data was the meaning of a sentence or phrase. Footnote 42

The second step involved classifying “opinion sentences into positive and negative” (Liu 2012: 37). To determine positive and negative opinions, I chose sentiment classification Footnote 43 to assign “a positive, negative or neutral label to a piece of text based on its overall opinion” (Amiri and Chua 2012: 39) (see Appendix 2 for more detail). After identifying positive and negative sentences, the coders coded them, ignoring neutral sentences unrelated to this study’s research questions. Each opinion’s basic absolute magnitude was 1: the link between sender and receiver had a value of +1 for a positive opinion or −1 for a negative opinion (Zhou et al. 2009) in a sentence or phrase. Footnote 44

I added a third step, which involved identifying senders and receivers in the opinion network to determine the relationships between nodes. The coders coded the nodes based on the original sentences. By reading the editorials and conducting a mock content analysis, I created a list of sample nodes (actors or vertices, referred to as senders and receivers in this study) Footnote 45 for consistency and improved coding. This node list shows how the same nodes were written in the original context and specifies how to handle them. Footnote 46 Coders followed these three steps to fill out the blanks in Table 7 in Appendix 1.

After coding, the coders checked intercoder reliability in line with Holsti’s (1969) formula, Footnote 47 which reached 82.99%. In the end, I coded 8107 relationships (4796 from the Hankyoreh, representing the liberal newspaper, and 3311 from the Chosun, representing the conservative newspaper). Subsequently, I aggregated the relationships and calculated centrality separately on a weekly basis Footnote 48 for 38 weeks for each newspaper, extracting degree centrality for the dependent and independent variables.
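As a hedged sketch of what Step 1 produces (a sentence-level record turned into a weighted edge), the code below applies the ±1 scheme with the double weight for editorial topics and pictures noted in Footnote 44; every record, name, and file below is hypothetical:

```python
import csv
from dataclasses import dataclass

@dataclass
class OpinionRecord:
    """One coded opinion sentence: who expresses it, toward whom, and with what sign."""
    sender: str
    receiver: str
    polarity: int              # +1 positive, -1 negative
    is_topic_or_picture: bool  # editorial topic/picture/picture description => double weight
    newspaper: str             # e.g., "liberal" or "conservative"
    week: int                  # week since the disaster (1..38)

    @property
    def weight(self) -> int:
        return self.polarity * (2 if self.is_topic_or_picture else 1)

# Hypothetical coded records (not actual coded data).
records = [
    OpinionRecord("Editorial", "Government", -1, True,  "liberal",      1),
    OpinionRecord("Editorial", "Victims",    +1, False, "liberal",      1),
    OpinionRecord("Editorial", "Government", -1, False, "conservative", 1),
]

# Write the edge list (sender, receiver, weight) used for the network analysis.
with open("edge_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sender", "receiver", "weight", "newspaper", "week"])
    for r in records:
        writer.writerow([r.sender, r.receiver, r.weight, r.newspaper, r.week])
```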

Step 2: The case study based on the edge list from content analysis

Ultimately, the edge list of the opinion network from the content analysis can describe all relationships between the nodes in the newspaper editorials. These data were used to conduct the case study after separating the liberal and conservative newspapers. I categorized and analyzed receivers and content (positive and negative opinions) to describe the liberal and conservative newspapers’ positive and negative opinions of the Park administration and the victims (see Table 1), as well as the editorials’ responses to the emotional and cognitive symbols and to positive and negative feedback (see Table 2).

Step 3: Calculating the centrality and conducting the regression analysis

To convert the edge list into matrix form, I used Netminer 4, which was also used to draw the network figures and to calculate each node’s centrality in the opinion network. After obtaining the centrality scores and the other variables’ measurements, I conducted the regression analysis (see the measurement section, Table 3, and the results in Tables 4 and 5).
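The study performed this step with Netminer 4. As a rough open-source illustration of the same idea (not the actual Netminer procedure), the sketch below reads the hypothetical edge_list.csv from the Step 1 sketch, groups edges by newspaper and week, and computes a weighted in-degree standardized by (g − 1), which could then serve as the weekly dependent and independent variables for the regression:

```python
import csv
from collections import defaultdict

import networkx as nx  # illustrative substitute for Netminer 4

# Group the hypothetical edges by (newspaper, week).
groups = defaultdict(list)
with open("edge_list.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["newspaper"], int(row["week"]))
        groups[key].append((row["sender"], row["receiver"], float(row["weight"])))

# For each newspaper-week, build a directed multigraph and compute the weighted
# in-degree of every node, standardized by (g - 1) as in Wasserman and Faust (1994).
weekly_centrality = {}
for (paper, week), edges in groups.items():
    G = nx.MultiDiGraph()
    G.add_weighted_edges_from(edges)
    g = G.number_of_nodes()
    weekly_centrality[(paper, week)] = {
        node: (G.in_degree(node, weight="weight") / (g - 1)) if g > 1 else 0.0
        for node in G.nodes
    }

for key, centrality in sorted(weekly_centrality.items()):
    print(key, centrality)
```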

Example of opinion network

The following example facilitates an understanding of the opinion network. For instance, the statement “I hate it,” as the 10th sentence expressing an opinion in the 5th editorial of an “A” newspaper published on April 17, 2016, after the 1st week of the Sewol Ferry Disaster, was coded as in Table 8 in Appendix 1. Ultimately, this reflects “I → it” (negative opinion). Additionally, “They like it,” as the 11th opinion sentence in the same editorial, was coded as in Table 8 in Appendix 1, reflecting “They → it” (positive opinion) (see Fig. 7 in Appendix 1).

Fig. 7: A Sample Opinion Network from Appendix Table 8.

The numerous relationships between nodes accumulate each week to form the opinion networks (e.g., between Citizens and the President); they are summed and depicted in Fig. 8 in Appendix 1.

Fig. 8: An Example of Negative Opinion on “It” Opinion Network.

What is Positive and Negative Opinion?

< Positive Footnote 49 >

Expressing a positive opinion was considered and counted as positive. Positive adjectives or adverbs in factual sentences were considered positive opinions.

Offering condolences or crying for victims, because these reflect positive feelings toward them

Expressing concern and sympathy

Facilitating encouragement

< Negative Footnote 50 >

Expressing a negative opinion was considered and counted as negative. Negative adjectives or adverbs in factual sentences were considered negative opinions.

Responsibilities that the government should or should not have implemented (Thesen 2013 )

Editors’ suggestions on what one should do or what should have been done.

Boosting responsibility by suggesting alternative solutions and methods

Assigning responsibility

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Cho, K.W. Home Team Effect and Opinion Network after the Sewol Ferry Disaster: A mixed-method study of the influence of symbol and feedback on liberal versus conservative newspapers’ negative opinions. Humanit Soc Sci Commun 11, 1250 (2024). https://doi.org/10.1057/s41599-024-02773-4


Received: 13 August 2022

Accepted: 30 January 2024

Published: 23 September 2024

DOI: https://doi.org/10.1057/s41599-024-02773-4


Postmortem CT and autopsy findings in an elevator-related death: a case report

  • Images in Forensics
  • Open access
  • Published: 20 September 2024


  • Giovanni Aulino (ORCID: orcid.org/0000-0003-3071-6504),
  • Michele Rega,
  • Vittoria Rossi,
  • Massimo Zedda &
  • Antonio Oliva


Elevator-related fatalities and injuries are rarely discussed. Falls have been identified as the first cause of mortality in the majority of these accidents. Evidence suggests that many elevator accidents may be attributed to inadequate equipment maintenance or malfunctions of the devices. This study examines a case involving an elevator maintenance worker found within an elevator shaft, using postmortem computed tomography (PMCT) along with a full autopsy. The autopsy revealed that the cause of death was severe polytrauma resulting from dragging, compression, and crushing mechanisms, which resulted in a dislocated skull and multiple thoraco-abdominal injuries, including exposed organs and viscera. Detailed examination identified a cranio-encephalic crush, leading to a significant alteration in the physiognomy of the facial structures. Additionally, PMCT revealed complex spinal fractures, such as a Jefferson fracture and a complete Chance fracture at the D6 vertebra, accompanied by spinal deviation proximal to the fracture site. Autopsy findings corroborated these PMCT results. A multidisciplinary approach, including PMCT, is proposed as a strategic method for the comprehensive reconstruction of such accidents, facilitating the collection of extensive data.


Introduction

Elevator-related fatalities and injuries are infrequently addressed in literature. Between 1992 and 2009, there were 443 reported fatalities associated with elevators in the United States [ 1 ]. Conversely, injuries attributed to elevators are believed to occur more frequently within the United States, impacting both children and the elderly [ 2 , 3 ]. In cases involving fall-related trauma, PMCT has facilitated a more accurate reconstruction of the accident and provided a detailed characterization of the injuries sustained, as corroborated by subsequent autopsy examinations [ 4 ].

In our institution, PMCT is utilized in specific forensic cases, including those involving extensive carbonization, gunshot wounds, and drowning. This technique was applied to assess a workplace accident involving an elevator, in which the victim was found motionless within the elevator shaft [ 5 , 6 , 7 , 8 , 9 ]. The injuries sustained from the precipitating event were significantly overshadowed by the presence of severe polytrauma, which led to cranio-encephalic displacement and multiple thoracic and abdominal injuries, including exposed organs and viscera. Consequently, this study aims to provide a comprehensive description of the fractures sustained, integrating evidence from the accident scene with radiological and autopsy findings.

Case report

Case presentation.

A 35-year-old elevator maintenance worker was discovered in a prone position at the base of the elevator shaft where he had conducted maintenance the preceding day. He had been reported missing since the previous evening. The individual was found wearing a bloodstained shirt and trousers that were partially lowered to just above the knees, along with work shoes, of which only the left shoe was worn. Additionally, longitudinal streaks of blood were noted on the walls of the upper compartment at the scene.

External examination

Upon examination of the head, a cranio-encephalic crush was identified, characterized by significant disruption of the facial structure, including disrupted eyes and destruction of the bony elements of the cranial vault. Comminuted fragments of cerebral parenchyma were also present (Fig.  1 a-c).

Fig. 1: External examination and PMCT findings of the craniofacial region. a, b, c: Cranio-encephalic injury characterized by significant loss of facial structure, including ocular rupture and destruction of the bony components of the cranial vault. d, e: PMCT 3D-rendered images depicting the bone structure of the craniofacial area.

Two parallel ecchymotic and excoriated bands were observed: one in the mid-sternal region and another laterally adjacent, each approximately 1 cm thick and diffusely over the anterior thorax and abdomen. Additionally, lacerated and contused wounds affecting the anterior trunk were observed (Fig.  2 a).

Fig. 2: External examination findings of the thoraco-abdominal region. a: Two parallel ecchymotic and excoriated bands measuring 1 cm in thickness, located in the mid-sternal region and adjacent laterally. b: A lacerated-contused wound with exposure of underlying musculoskeletal tissue and homolateral lung parenchyma in the right thoracic region. c: A lacerated-contused wound with a longer transverse axis, exposing abdominal viscera in the left flank.

In the right thoracic region, a lacerated contusion was identified, associated with multiple dislodged rib fractures and leakage of lung parenchyma. A similarly sized injury was found on the left flank, with exposure of the abdominal viscera (Fig.  2 b, c). Finally, a full-thickness fracture of the thoracic vertebra was documented (Fig.  3 a).

Fig. 3: External examination and PMCT findings of the spinal column. a: Full-thickness fracture of the thoracic spine. b: PMCT 3D-rendered image of the spinal column’s bone tissue. c: Frontal view of the spine as observed on the PMCT scan.

Post-mortem CT

Prior to the autopsy, and following the external examination of the decedent, PMCT was conducted using a Somatom Sensation 16 CT scanner (Siemens®, Munich, Germany). The examination settings included 140 kVp, 160 mAs, 24-mm feed/rotation, 1-mm slice collimation, 1-mm slice width, and reconstruction kernels of 10, 30, 40, 70, and 80.

Subsequent to data processing, axial, coronal, and sagittal two-dimensional (2D) reconstructions, along with a three-dimensional (3D) volume rendering (VR) and shaded surface display (SSD), were performed. The resultant images were reviewed by a board-certified radiologist with over 10 years of experience in forensic imaging. The findings from the external examination were communicated to the radiologist prior to the review.

PMCT confirmed fractures in the craniofacial region (Fig.  1 d, e) and revealed complex fractures of the spine, including a Jefferson fracture and a complete Chance fracture at the D6 level, with spinal deviation proximal to the fracture (Fig.  3 b, c). Numerous fractures were also identified in all four limbs as well as in the thoracic cage. Additionally, a diaphragmatic rupture and mediastinal dislocation were noted.

Autopsy findings

All radiological findings were corroborated by the forensic autopsy. The macroscopic examination revealed multiple lacerations of the thoracic and abdominal viscera. On sectioning, pronounced pallor of the organs was observed. Moreover, no significant findings were identified that would suggest the presence of underlying pathologies capable of contributing to the death.

Discussion

Elevator accidents, while infrequent, can lead to fatalities and severe injuries [ 10 ]. Despite being one of the safest forms of transportation, the high volume of elevator traffic can contribute to serious incidents [ 2 ]. According to McCann, 20.5% of elevator passengers were not engaged in work at the time of their accidents, while 20.1% were performing work-related duties, such as clerical, stock handling, and janitorial tasks. Notably, the remaining cases (59.4%) involved construction workers who were in or near elevator shafts [ 1 ].

Reconstructing elevator-related accidents is crucial for both safety improvement and legal accountability [ 4 ]. The majority of these accidents can often be attributed to inadequate maintenance or malfunctioning equipment [ 6 ]. The nature of injuries sustained in elevator incidents varies according to the specific circumstances surrounding each case. Prahlow et al. highlighted that falls from heights were the leading cause of elevator-related fatalities, followed closely by severe asphyxia, crushing injuries, and pressure-related injuries, which placed third [ 11 ].

In this case, the cause of death was determined to be severe polytrauma resulting from dragging, compression, and crushing injuries that led to a dislocated skull and multiple thoraco-abdominal injuries, exposing internal organs and viscera. The most plausible scenario suggests that the victim experienced compression between the elevator shaft wall and the elevator, followed by a fall that resulted in further crushing as the elevator descended.

Evidence supporting this hypothesis includes blood found on the wall adjacent to the elevator, facial lacerations, longitudinal bruising on the anterior chest, and multiple fractures identified through PMCT. PMCT has demonstrated not only exceptional sensitivity and specificity in recognizing and classifying various types of fractures but also the ability to identify injuries to soft tissues and organs [ 12 , 13 , 14 , 15 , 16 ].

A systematic review of 15 studies comparing PMCT and autopsy findings in cases of traumatic death indicated an agreement rate ranging from 50 to 100% in determining the cause of death, with enhanced concordance noted specifically in gunshot-related fatalities [ 17 ]. Additionally, a subsequent large-scale study revealed an almost perfect correlation between PMCT and autopsy results in the detection of craniofacial injuries and gunshot-related deaths [ 18 ]. Thus, PMCT was instrumental in characterizing fractures that would have been challenging to analyze during a standard autopsy, thereby providing critical insights into the dynamics of the accident [ 19 ]. Indeed, PMCT has enabled the description and reconstruction of fractures in the craniofacial region and has revealed complex spinal fractures, including a Jefferson fracture and a complete Chance fracture at the D6 level, along with spinal deviation proximal to the fracture.

The autopsy further confirmed the absence of any pre-existing pathological conditions that could have contributed to the victim’s injuries. To maximize data collection, a multidisciplinary approach incorporating PMCT is essential [ 20 ]. Nevertheless, the autopsy remains a key component in establishing the cause of death and ruling out any underlying health issues that may have played a role in the injuries sustained.

In conclusion, the circumstances of the accident and the height of the fall significantly influence the severity of injuries and the likelihood of a fatal outcome in elevator incidents. Precise reconstruction of these accidents is essential for forensic investigations, as it aids in understanding the dynamics involved, especially in industrial contexts. Additionally, PMCT has proven to be a valuable, rapid, and non-invasive tool for the documentation and reconstruction of traumatic injuries, enhancing the overall forensic analysis.

Data availability

My manuscript has associated data in a data repository.

McCann M. Deaths and injuries involving elevators and escalators (2013). https://www.cpwr.com/sites/default/files/publications/elevator_escalator_BLSapproved_2.pdf (accessed 17 Aug 2019).

O’Neil J, Steele GK, Huisingh C, Smith GA. Elevator-related injuries to children in the United States, 1990 through 2004. Clin Pediatr (Phila). 2007;46(7):619–25. https://doi.org/10.1177/0009922807300232 .


Steele GK, O’Neil J, Huisingh C, Smith GA. Elevator-related injuries to older adults in the United States, 1990 to 2006. J Trauma. 2010;68(1):188–92. https://doi.org/10.1097/TA.0b013e3181b2302b .

Jacobsen C, Schön CA, Kneubuehl B, Thali MJ, Aghayev E. Unusually extensive head trauma in a hydraulic elevator accident: post-mortem MSCT findings, autopsy results and scene reconstruction. J Forensic Leg Med. 2008;15(7):462–6. https://doi.org/10.1016/j.jflm.2008.03.006 .

Cascini F, Polacco M, Cittadini F, Paliani GB, Oliva A, Rossi R. Post-mortem computed tomography for forensic applications: a systematic review of gunshot deaths. Med Sci Law. 2020;60(1):54–62. https://doi.org/10.1177/0025802419883164 .

Coty JB, Nedelcu C, Yahya S, Dupont V, Rougé-Maillart C, Verschoore M, Ridereau Zins C, Aubé C. Burned bodies: post-mortem computed tomography, an essential tool for modern forensic medicine. Insights Imaging. 2018;9(5):731–43. https://doi.org/10.1007/s13244-018-0633-2 .


Hourscht C, Christe A, Diers S, Thali MJ, Ruder TD. Learning from the living to diagnose the dead - parallels between CT findings after survived drowning and fatal drowning. Forensic Sci Med Pathol. 2019;15(2):249–51. https://doi.org/10.1007/s12024-018-0081-9 .

Cittadini F, Polacco M, D’Alessio P, Tartaglione T, De Giorgio F, Oliva A, Zobel B, Pascali VL. Virtual autopsy with multidetector computed tomography of three cases of charred bodies. Med Sci Law. 2010;50(4):211–6. https://doi.org/10.1258/msl.2010.010116 .

Oliva A, Grassi S, Zedda M, Calistri L, Cazzato F, Masini V, Polacco M, Maiolatesi F, Bianchi I, Defraia B, Grifoni R, Filograna L, Natale L, Focardi M, Pinchi V. Forensic significance and inferential value of PMCT features in charred bodies: a bicentric study. Forensic Imaging. 2024;37:200590. https://doi.org/10.1016/j.fri.2024.200590 .


Khaji A, Ghodsi SM. Trend of elevator-related accidents in Tehran. Arch Bone Jt Surg. 2014;2(2):117–20.


Prahlow JA, Ashraf Z, Plaza N, Rogers C, Ferreira P, Fowler DR, Blessing MM, Wolf DA, Graham MA, Sandberg K, Brown TT, Lantz PE. Elevator-related deaths. J Forensic Sci. 2020;65(3):823–32. https://doi.org/10.1111/1556-4029.14235 .

Di Paolo M, Maiese A, dell’Aquila M, Filomena C, Turco S, Giaconi C, Turillazzi E. Role of post mortem CT (PMCT) in high energy traumatic deaths. Clin Ter. 2020;171(6):e490–500. https://doi.org/10.7417/CT.2020.2263 .

Henningsen MJ, Larsen ST, Jacobsen C, Villa C. Sensitivity and specificity of post-mortem computed tomography in skull fracture detection-a systematic review and meta-analysis. Int J Legal Med. 2022;136(5):1363–77. https://doi.org/10.1007/s00414-022-02803-3 .

Mondello C, Baldino G, Bottari A, Sapienza D, Perri F, Argo A, Asmundo A, Ventura Spagnolo E. The role of PMCT for the assessment of the cause of death in natural disaster (landslide and flood): a sicilian experience. Int J Legal Med. 2022;136(1):237–44. https://doi.org/10.1007/s00414-021-02683-z .

Ampanozi G, Halbheer D, Ebert LC, Thali MJ, Held U. Postmortem imaging findings and cause of death determination compared with autopsy: a systematic review of diagnostic test accuracy and meta-analysis. Int J Legal Med. 2020;134(1):321–37. https://doi.org/10.1007/s00414-019-02140-y .

Kranioti EF, Nathena D, Spanakis K, Karantanas A, Bouhaidar R, McLaughlin S, Thali MJ, Ampanozi G. Unenhanced PMCT in the diagnosis of fatal traumatic brain injury in a charred body. J Forensic Leg Med. 2021;77:102093. https://doi.org/10.1016/j.jflm.2020.102093 .

Scholing M, Saltzherr TP, Fung Kon Jin PH, Ponsen KJ, Reitsma JB, Lameris JS, Goslings JC. The value of postmortem computed tomography as an alternative for autopsy in trauma victims: a systematic review. Eur Radiol. 2009;19(10):2333–41. https://doi.org/10.1007/s00330-009-1440-4 .

Article   CAS   PubMed   PubMed Central   Google Scholar  

Le Blanc-Louvry I, Thureau S, Duval C, Papin-Lefebvre F, Thiebot J, Dacher JN, Gricourt C, Touré E, Proust B. Post-mortem computed tomography compared to forensic autopsy findings: a French experience. Eur Radiol. 2013;23(7):1829–35. https://doi.org/10.1007/s00330-013-2779-0 .

Filograna L, Manenti G, Grassi S, Zedda M, Ryan Collen P, Floris R, Oliva A. Health Technology Assessment (HTA) of virtual autopsy through PMCT with particular focus on Italy. Forensic Imaging. 2022;30:200516. https://doi.org/10.1016/j.fri.2022.200516 .

Filograna L, Manenti G, Micillo A, Chirico F, Carini A, Gigliotti PE, Floris R, Malizia A, Oliva A. Post-mortem imaging: a tool to improve post-mortem analysis and case management during terrorist attacks. Forensic Imaging. 2023;34:200551. https://doi.org/10.1016/j.fri.2023.200551 .

Download references

Acknowledgements

We would like to extend our heartfelt gratitude to Dr. Valentina Masini for her exceptional work in reconstructing the PMCT images for this study. Her expertise and meticulous attention to detail significantly enhanced the quality of our analysis. We appreciate her collaborative spirit and contributions, which were invaluable to the success of our research.

Funding

Open access funding provided by Università Cattolica del Sacro Cuore within the CRUI-CARE Agreement. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Department of Health Surveillance and Bioethics, Section of Legal Medicine, Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Francesco Vito, 1, Rome, 00168, Italy

Giovanni Aulino, Michele Rega, Vittoria Rossi, Massimo Zedda & Antonio Oliva

Contributions

Giovanni Aulino: Conceptualization, Methodology, Writing – original draft. Michele Rega: Writing – original draft, Writing – review & editing. Vittoria Rossi: Writing – original draft, Writing – review & editing. Massimo Zedda: Writing – original draft, Writing – review & editing. Antonio Oliva: Conceptualization, Writing – original draft, Writing – review & editing. All authors approved the final version of the manuscript.

Corresponding author

Correspondence to Giovanni Aulino.

Ethics declarations

Competing interests

The authors declare they have no conflict of interest.

Article classification

10: Forensics.

10.010: Pathology.

10.190: Autopsy.

Ethical approval and consent to participate

This article does not contain any studies with animals. Informed consent was not required because this was a judicial autopsy case, and the report does not contain personal data. In any case, all data are covered by Italian law (Data Protection Authority, Official Gazette no. 72 of March 26, 2012) governing the use of data for scientific research purposes.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Aulino, G., Rega, M., Rossi, V. et al. Postmortem CT and autopsy findings in an elevator-related death: a case report. Forensic Sci Med Pathol (2024). https://doi.org/10.1007/s12024-024-00896-3

Download citation

Accepted: 09 September 2024

Published: 20 September 2024

DOI: https://doi.org/10.1007/s12024-024-00896-3

Keywords

  • Forensic radiology
  • Elevator accident
