The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research

  • Original Paper
  • Open access
  • Published: 25 March 2022
  • Volume 66, pages 616–630 (2022)


  • Ismail Celik (ORCID: orcid.org/0000-0002-5027-8284),
  • Muhterem Dindar,
  • Hanni Muukkonen &
  • Sanna Järvelä


This study provides an overview of research on teachers’ use of artificial intelligence (AI) applications and machine learning methods to analyze teachers’ data. Our analysis showed that AI offers teachers several opportunities for improved planning (e.g., by defining students’ needs and familiarizing teachers with such needs), implementation (e.g., through immediate feedback and teacher intervention), and assessment (e.g., through automated essay scoring) of their teaching. We also found that teachers have various roles in the development of AI technology. These roles include acting as models for training AI algorithms and participating in AI development by checking the accuracy of AI automated assessment systems. Our findings further underlined several challenges in AI implementation in teaching practice, which provide guidelines for developing the field.


Introduction

Artificial intelligence (AI) has been penetrating our everyday lives in various ways, such as through web search engines, mobile apps, and healthcare systems (Sánchez-Prieto et al., 2020). The swift advancement of AI technologies also has important implications for learning and teaching. In fact, AI-supported instruction is expected to transform education (Zawacki-Richter et al., 2019). Thus, considerable investments have been made to integrate AI into teaching and learning (Cope et al., 2020). A significant challenge in the effective integration of AI into teaching and learning, however, is the profit orientation of most current AI applications in education. AI developers often know little about the learning sciences and lack the pedagogical knowledge needed for the effective implementation of AI in teaching (Luckin & Cukurova, 2019). Moreover, AI developers often fail to consider the expectations of AI end-users in education, that is, of teachers (Cukurova & Luckin, 2018; Luckin & Cukurova, 2019). Teachers are considered among the most crucial stakeholders in AI-based teaching (Seufert et al., 2020), so their views, experiences, and expectations need to be considered for the successful adoption of AI in schools (Holmes et al., 2019). Specifically, to make AI pedagogically relevant, the advantages it offers teachers and the challenges teachers face in AI-based teaching need to be better understood. However, little attention has been paid to AI-based education from the perspective of teachers. Moreover, teachers’ skills in the pedagogical use of AI and their roles in the development of AI have been somewhat overlooked in the literature (Langran et al., 2020; Seufert et al., 2020). To address these research gaps, this study explores the promises and challenges of AI in teaching practice that have surfaced in research. Since the field of AI-based instruction is still developing, this study can contribute to the development of comprehensive AI-based instruction systems that allow teachers to participate in the design process.

Educational Use of Artificial Intelligence

There have been several waves of emerging educational technologies over the past few decades, and now there is artificial intelligence (AI; Bonk & Wiley, 2020). The term artificial intelligence was coined in 1956 by John McCarthy (Russel & Norvig, 2010). Baker and Smith (2019) pointed out that AI does not refer to a single technology but is defined as “computers [that] perform cognitive tasks, usually associated with human minds, particularly learning and problem-solving” (p. 10). AI is a general term that covers diverse analytical methods, which can be classified as machine learning, neural networks, and deep learning (Aggarwal, 2018). Machine learning is defined as the capacity of a computer algorithm to learn from data and make decisions without being explicitly programmed (Popenici & Kerr, 2017). Although numerous machine learning models exist, the two most widely used are supervised and unsupervised learning models (Alloghani et al., 2020). Supervised machine learning algorithms build a model from labeled sample data (training data), while unsupervised machine learning algorithms learn from unlabeled data (Alenezi & Faisal, 2020). In other words, an unsupervised model works on its own to uncover patterns that were formerly undetected by humans.
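To make the distinction concrete, the following is a minimal, self-contained sketch in plain Python (hypothetical data, standard library only, not drawn from any reviewed study): a supervised nearest-centroid classifier learns from labeled examples, while a tiny two-cluster k-means finds structure in the same points without any labels.

```python
def _dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def _mean(points):
    """Centroid of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def fit_supervised(points, labels):
    """Supervised: build one centroid per class from labeled training data."""
    by_label = {}
    for p, y in zip(points, labels):
        by_label.setdefault(y, []).append(p)
    return {y: _mean(ps) for y, ps in by_label.items()}

def predict_supervised(model, p):
    """Assign the label of the nearest class centroid."""
    return min(model, key=lambda y: _dist2(p, model[y]))

def cluster_unsupervised(points, iters=10):
    """Unsupervised: two-cluster k-means on unlabeled points."""
    c0, c1 = points[0], points[-1]  # naive initialization
    for _ in range(iters):
        g0 = [p for p in points if _dist2(p, c0) <= _dist2(p, c1)]
        g1 = [p for p in points if _dist2(p, c0) > _dist2(p, c1)]
        c0 = _mean(g0) if g0 else c0
        c1 = _mean(g1) if g1 else c1
    return c0, c1

# Hypothetical data: two well-separated groups of points.
pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
model = fit_supervised(pts, ["low", "low", "low", "high", "high", "high"])
print(predict_supervised(model, (0.5, 0.5)))  # classified using the labels
print(cluster_unsupervised(pts))              # groups found without labels
```

The supervised model needs the human-provided labels to learn anything, whereas k-means recovers the same two groups from the coordinates alone, which is the sense in which it "performs on its own."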

AI is used in education in different ways. For instance, AI is integrated into several instructional technologies such as chatbots (Clark, 2020), intelligent tutoring, and automated grading systems (Heffernan & Heffernan, 2014). These AI-based systems offer several opportunities to all stakeholders throughout the learning and instructional process (Chen et al., 2020). Previous research on the educational use of AI has demonstrated AI’s support for student collaboration and the personalization of learning experiences (Luckin et al., 2016), scheduling of learning activities and adaptive feedback on learning processes (Koedinger et al., 2012), reducing teachers’ workload in collaborative knowledge construction (Roll & Wylie, 2016), predicting the probability of learners dropping out of school or being admitted into school (Popenici & Kerr, 2017), profiling students’ backgrounds (Cohen et al., 2017), monitoring student progress (Gaudioso et al., 2012; Swiecki et al., 2019), and summative assessment such as automated essay scoring (Okada et al., 2019; Vij et al., 2020; Yuan et al., 2020). Despite these opportunities, the educational use of AI lags behind expectations, unlike in other sectors (e.g., finance and health). To achieve successful AI implementation in education, various stakeholders, specifically teachers, should participate in AI creation, development, and integration (Langran et al., 2020; Qin et al., 2020).

The Roles of Teachers in AI-based Education

The evolution of education towards digital education does not imply that fewer teachers will be needed in the future (Dillenbourg, 2016). Instead of speculating about whether AI will replace teachers, it is more reasonable to understand the advantages that AI offers teachers and how these advantages can change teachers’ roles in the classroom (Hrastinski et al., 2019). Salomon (1996) demonstrated this during the early stages of educational technology by pointing out the need to consider how learning occurs through and with computers. As for AI, Holstein et al. (2019) suggested that in the future, AI-based machines can help teachers perform what Dillenbourg (2013) emphasized as their orchestrator role in the learning and teaching process. For AI to be able to truly help teachers in this way, however, it must first learn effective orchestration of learning and teaching from teachers’ data. This is because effective teaching depends on teachers’ capability to implement appropriate pedagogical methods in their instruction (Tondeur et al., 2020), and their pedagogically meaningful and productive teaching incidents can serve as models for AI-based educational systems (Prieto et al., 2018). That is, the data collected from learning settings orchestrated by teachers form the foundation of AI-based teaching. For example, the data may help researchers to understand when and how teaching is progressing effectively (Luckin & Cukurova, 2019; Luckin et al., 2016). Because teachers’ role in providing data on the features of effective learning is crucial for the development of AI algorithms, we investigated the kinds of data collected from teachers and teachers’ roles in the creation of AI algorithms.

To effectively integrate AI-based education in schools, teachers must be empowered to implement such integration by endowing them with the requisite knowledge, skills, and attitudes (Häkkinen et al., 2017 ; Kirschner, 2015 ; Seufert et al., 2020 ). However, teachers’ AI-related skills have not yet been sufficiently defined because the potential of AI in education has not yet been fully exploited (Luckin et al., 2016 ). To explore teachers’ AI-related knowledge, skills, and attitudes, their engagement with AI-based systems within their teaching setting has to be investigated in detail (Dillenbourg, 2016 ; Seufert et al., 2020 ). Therefore, in this study, we reviewed empirical research on how teachers interacted with AI-based systems and how they participated in the development of AI-based education systems. We believe that our synthesis of empirical research on the topic will contribute to the identification of AI-related teaching skills and the effective implementation of AI-based education in schools with the support of teachers.

This study explored the perspective and roles of teachers in AI-based research through a systematic review of the latest research on the topic. Our specific research questions (RQs) are as follows:

RQ1—What was the distribution over time of the studies that examined teachers’ AI use?

RQ2—What data were collected from teachers in the studies on AI-based education?

RQ3—What were the roles of teachers in AI-based research?

RQ4—What advantages did AI offer teachers?

RQ5—What challenges did teachers face when using AI for education?

RQ6—Which AI methods were utilized in AI-based research that teachers participated in?

Table 1 below lists these RQs with their corresponding rationales.

Manuscript Search and Selection Criteria

In reviews of research, several methods are used to select the studies to be reviewed. Studies published in important journals of a given domain are selected from databases such as ProQuest (Heitink et al., 2016), the Education Resources Information Center (ERIC), and the Social Science Citation Index (SSCI) (Akçayır & Akçayır, 2017; Kucuk et al., 2013). For this review, we selected English-language scientific studies on teachers’ AI use that were published in journals indexed in the Web of Science (WoS) database in the 20 years up to 14 September 2020. We used this database because the field tags (e.g., the topic and research area) of the studies are easy to access from WoS (Luor et al., 2008). We used the following search terms: “artificial intelligence,” “deep learning,” “reinforcement learning,” “supervised learning,” “unsupervised learning,” “neural network,” “ANN,” “natural language processing,” “fuzzy logic,” “decision trees,” “ensemble,” “Bayesian,” “clustering,” and “regularization.” To narrow the search, we combined these with “teacher,” “teacher education,” “teacher professional development,” “K-12,” “middle school*,” “high school*,” “elementary school*,” and “kindergarten*.” We selected the search terms based on the main concepts of AI in education in past studies and literature reviews (Baran, 2014; Zawacki-Richter et al., 2019). Figure 1 presents our study search procedure.

Fig. 1 Flow chart for the selection of articles
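The combination of the two term groups can be sketched programmatically. The snippet below is our own illustration, not the authors’ reported query: the exact field tag (`TS=`) and quoting conventions are assumptions about Web of Science topic-search syntax, with the AI terms OR-ed into one group and AND-ed with the teacher/school terms.

```python
# Hedged reconstruction of a WoS-style boolean topic query from the two term
# groups listed in the text. Field tags and quoting are our assumption.
ai_terms = [
    "artificial intelligence", "deep learning", "reinforcement learning",
    "supervised learning", "unsupervised learning", "neural network", "ANN",
    "natural language processing", "fuzzy logic", "decision trees",
    "ensemble", "Bayesian", "clustering", "regularization",
]
context_terms = [
    "teacher", "teacher education", "teacher professional development",
    "K-12", "middle school*", "high school*", "elementary school*",
    "kindergarten*",
]

def or_group(terms):
    """Join terms into a parenthesized OR group, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = f"TS={or_group(ai_terms)} AND TS={or_group(context_terms)}"
print(query)
```

Building the string this way makes the search reproducible and easy to extend when new AI-related terms emerge.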

In our first search, we found 751 studies. Next, we checked whether they met our inclusion and exclusion criteria. Our inclusion criteria were as follows: (a) empirical studies on AI in pre-service and in-service teacher education and on in-service teachers’ use of AI; (b) studies on AI applications and algorithms (e.g., personal tutors, automated scoring, and personal assistants; decision trees and artificial neural networks) for teaching or for analyzing teachers’ data; and (c) studies on data collected from in-service K-12 teachers or pre-service teachers. We excluded editorials, reviews, and studies conducted at the higher education level. After we applied these criteria, 44 articles remained for inclusion in this study.

Data Coding and Analysis

The publication year of the articles was noted to determine the distribution of the studies over time (RQ1). For RQ2, the following categories and category numbers were assigned to the data collected from teachers in previous AI-based research: self-report (1), video (2), interview (3), observation (4), feedback/discourse (5), grading (6), eye tracking (7), audiovisual/accelerometry (8), and log file (9). We qualitatively analyzed the content of the 44 articles to determine teachers’ roles in AI-based instruction (RQ3) and the advantages and challenges of AI for teachers (RQ4 and RQ5, respectively). We coded the studies not with a preliminary or template coding scheme, which would have unnecessarily constrained them by fitting them into pre-determined codes (Şimşek & Yıldırım, 2011), but with an open coding process (Akçayır & Akçayır, 2017; Williamson, 2015), which followed these steps: (1) familiarize with the whole set of articles; (2) choose a document at random, consider its primary meaning, and note your thoughts in the margin; (3) list all thoughts on the subject, combine similar ones, create three columns for key, unique, and leftover thoughts, and put each thought in the appropriate column; (4) code the text; (5) find the most illustrative phrases for the thoughts and turn them into categories; (6) decide on an abbreviation for each category and alphabetize the abbreviations; (7) incorporate the final codes and perform the initial analysis; and (8) recode the studies if needed. To classify the AI methods (RQ6), we used previous literature reviews of AI use in diverse areas such as higher education, medicine, and business (Borges et al., 2020; Contreras & Vehi, 2018). To ensure the reliability of the coding process, we used investigator triangulation (Denzin, 2017). Accordingly, the first author coded the articles separately and then shared the codes with the second author. We negotiated disagreements by checking the code list and the relevant studies, and we updated and renamed some categories. Finally, we recoded the studies using the final code list.

Results and Discussion

Distribution of the Studies

(RQ1—What was the distribution over time of the studies that examined teachers’ AI use?)

Our analysis indicated that the first study on teachers’ AI use was published in 2004. Of the 44 studies we reviewed, 22 were published in 2018 or later. The usage of educational AI applications has been forecasted to increase (Qin et al., 2020; Zawacki-Richter et al., 2019), and such an increase is reflected in our finding that the publication of studies on AI-based teaching picked up after 2017. Figure 2 presents the research trend on AI and teachers.

Fig. 2 Number of articles published by year

Figure 2 further indicates that research on teachers’ AI use in education has intensified in the last four years. This implies that AI-based instruction by teachers is likely to become more common in the near future. Supporting this, a review of the literature on the topics “AI” and “education” showed that studies published between 2015 and 2019 accounted for 70% of all such studies indexed in Web of Science and Google Scholar since 2010 (Chen et al., 2020). The availability of AI technologies, and the capacity of educational software companies to create AI-based applications, are increasing rapidly all over the world (Renz & Hilbig, 2020). Accordingly, it seems likely that teachers’ use of AI in the teaching process will grow and that more studies will be conducted on this topic.

On the other hand, there are still fewer studies on AI use in education than in other areas such as medicine and business (Borges et al., 2020 ; Luckin & Cukurova, 2019 ). The educational technology (EdTech) market is growing much more slowly than other markets with respect to the dynamics of digital transformation. One of the reasons for this is the resistance of decision-makers such as educators, teachers, and traditional textbook publishers to the use of AI (EdTechXGlobal Report, 2016 ). Considering this resistance, it can be argued that more AI research is needed to show the pedagogical uses of AI in instructional processes and to speed up the uptake of AI technologies in education.

Data Types Collected from Teachers

(RQ2—What data were collected from teachers in the studies on AI-based education?)

Self-reported data were the most common data collected from teachers in the AI-based education studies. The researchers collected self-reported data to predict teacher-related variables such as engagement, performance, and teaching quality. In these studies, machine learning algorithms were used instead of conventional regression analysis to reveal nonlinear relationships between variables of teaching practice. For instance, Wang et al. (2020) collected data from 165 early childhood teachers to better understand indicators of quality teacher–child interaction. Similarly, in Yoo and Rho (2020), teachers’ self-reported job satisfaction was predicted with a machine learning technique. In some AI studies, teachers’ grades of student assignments or essays were used to train AI algorithms. For example, Yuan et al. (2020), in developing an automated scoring approach, needed expert teachers’ grades to validate their AI-based scoring system. A notable finding from our review is that self-reported data accounted for nearly 44% of all data obtained from teachers (Fig. 3).

Fig. 3 Data types collected from teachers

In 11 of the studies that we reviewed, teachers provided more than one type of data. The data were mostly collected during or after teachers’ instruction. Our review findings highlight the crucial role of teachers in the instructional process (e.g., Huang et al., 2010; Lu, 2019; McCarthy et al., 2016; Pelham et al., 2020). For example, Schwarz et al. (2018) presented an online learning environment that uses machine learning to warn teachers about learners’ critical moments in collaborative learning. In their study, they observed how the teacher guided several groups at different times in a mathematics classroom. In addition to observations, they collected interview data from the teachers about the effectiveness of the online environment. Our review also indicates a significant gap in physiological data collection in AI studies with teachers. Only one of the studies we reviewed collected physiological data, namely eye-tracking and audiovisual/accelerometry data from sensors worn by the teachers (Prieto et al., 2018). In fact, physiological data can be considered relevant and useful for providing process-oriented, objective metrics regarding the critical moments that impact the quality of teaching or learning in an educational activity (Järvelä et al., 2021).

The Roles of Teachers in AI-based Research

(RQ3—What were the roles of teachers in AI-based research?)

Our open-coding analysis indicates that teachers have seven roles in AI research. These roles and their descriptions are shown in Table 2. As seen from the table, teachers participated in AI research as models to train AI algorithms. This was found to be the most common role of teachers in AI-based instruction (f = 18). This finding underlines the pivotal role of teachers in the development of AI-based education systems. For instance, Kelly et al. (2018) conducted a study to train AI algorithms to automatically detect teachers’ authentic questions in real-life classrooms. During the training of the AI algorithms, the teachers’ effective authentic questions were fed to the AI system as features. Following the training, the researchers tested the system in a different classroom and found that it successfully identified authentic questions.

Another role that teachers had in AI research was providing big data to AI systems to enable forecasts about teachers’ professional development. In this line of research, teachers mostly provided data that AI systems used to predict professional development variables such as job satisfaction, performance, and engagement. For example, in one study, 10,642 teachers answered a survey (Buddhtha et al., 2019), and AI was then used to determine predictors of teacher engagement. As in other areas, big data have played an important role in education, and teachers are considered among the most important sources of such data (Ruiz-Palmero et al., 2020). Our findings imply that AI can effectively inform teachers about their professional development.

This study also found that teachers involved in AI research provided input information on students’ characteristics for AI-based implementations. For example, Nikiforos et al. (2020) investigated the automatic detection of learners’ aggressive behavior in a virtual learning community. The AI system utilized teacher observations of students’ behavioral characteristics to predict which students were more likely to bully others in the online community. Our review further revealed that teachers have taken on the role of grading assignments and essays to test the accuracy of AI algorithms in grading student performance. In such studies, the accuracy rate of the AI-based assessment was determined against experienced teachers’ assessments (Bonneton-Botté et al., 2020; Gaudioso et al., 2012; McCarthy et al., 2016; Yuan et al., 2020).

In some AI-based education studies, teachers determined the criteria for some components of AI-based systems and assessments. For example, Huang et al. (2010) investigated the effect of ICT Literacy, a machine learning-based learning assistance tool. In their study, experienced teachers guided the AI system by defining the criteria for effective and timely feedback. In some studies, teachers also provided pedagogical guidance on the selection of materials for AI-based implementations. For example, Fitzgerald et al. (2015) utilized AI to present learning content with varying degrees of text complexity to early-grade students. In their exploration of early-grade text complexity features, text complexity in the AI system was determined based on teachers’ pedagogical guidance. Furthermore, teachers commented on the usability and design of AI-based technologies (Burstein et al., 2004). Finally, our results revealed a notable absence of pre-service teachers as participants in AI use studies. That is, there were no studies in which pre-service teachers actively participated in or interacted with AI technologies.

Advantages of AI for Teachers

(RQ4—What advantages did AI offer teachers?)

We found several advantages of AI from our review of selected empirical studies on teachers’ AI use. The open coding revealed three categories of AI advantages: planning, implementation, and assessment (see Table 3 ).

The advantages of AI related to planning involved receiving information on students’ backgrounds and assisting teachers in deciding on the learning content during lesson planning. In one study, an AI system provided teachers with background information on students’ risk factors for delinquency, such as aggression (Pelham et al., 2020). In terms of assisting teachers in planning learning content, Dalvean and Enkhbayar (2018) used machine learning to classify the readability of English fiction texts. The results of their study suggested that the classification can help English teachers plan course contents with readability features in mind (Table 4).

Implementation

According to our review (see Table 3), the most prominent advantage of AI was timely monitoring of learning processes (f = 12). For example, Su et al. (2014) developed a sensor-based learning concentration detection system using AI in a classroom environment. The system allowed teachers to monitor the degree of students’ concentration on lesson activities. Such AI-based monitoring can help teachers provide immediate feedback (Burstein et al., 2004; Huang et al., 2010, 2011) and quickly perform the necessary interventions (Nikiforos et al., 2020; Schwarz et al., 2018). For instance, teachers were able to discover critical moments in group learning and provide adaptive interventions for all the groups (Schwarz et al., 2018). Hence, AI systems can decrease teachers’ teaching burden by providing them with feedback and assisting them with planning interventions and monitoring students. Several studies particularly emphasized these contributions (Lu, 2019; Ma et al., 2020). We therefore assume that a reduced teaching load may be another significant advantage of AI systems in education. For example, researchers reported that teachers benefitted from an AI-based peer tutor recommender system and saved time for other activities (Ma et al., 2020).

Our findings further revealed that AI can enable teachers to select or adapt the optimum learning activity based on AI feedback. For example, in Bonneton-Botté et al. (2020), teachers decided to implement exercises such as writing letters and numbers for students with a low graphomotor level based on the feedback they received from AI. According to our synthesis, AI can also make the teaching process more interesting for teachers. Teachers reported that AI tutors facilitated enjoyable teaching experiences for them by breaking the monotony in the classroom (McCarthy et al., 2016). We also found that AI algorithms can increase opportunities for teacher–student interaction by capturing and analyzing data from productive moments (Lamb & Premo, 2015) and tracking student progress (Farhan et al., 2018).

According to our review, AI helps teachers with exam automation and essay scoring and with decision-making on student performance. An automated essay scoring system can not only significantly advance the effectiveness of essay scoring but also make scoring more objective (Yuan et al., 2020). This is why researchers are interested in using AI affordances to build automated assessment systems. An important utility of AI-based applications in the context of assessment is detecting plagiarism in student essays (Dawson et al., 2020). Several existing AI-based systems (e.g., Turnitin) allow teachers to check the authenticity of essays submitted by students in graduate courses (Alharbi & Al-Hoorie, 2020). This can be considered an important utility of AI in student assessment. We coded seven studies on the advantage of exam automation and essay scoring. Six of these studies investigated the scoring of student-related outcomes (Annabestani et al., 2020; Huang et al., 2010; Tepperman et al., 2010; Yuan et al., 2020; Vij et al., 2020; Yang, 2012), and one study used AI-based systems to score teachers’ open-ended responses in order to assess usable mathematics teaching knowledge (Kersting et al., 2014). We suggest that more studies be conducted on the automatic scoring of teacher-related variables such as technological and pedagogical knowledge. Considering that classroom video analysis (CVA) assessment is capable of scoring and assessing teacher knowledge (Kersting et al., 2014), CVA can be used in both in-service and pre-service teacher education, particularly with micro-teaching methods. For example, natural language processing methods (Bywater et al., 2019) can utilize existing CVA scoring schemes to detect teachers’ verbal communication patterns in conveying instructional content to students. Furthermore, machine vision methods (Ozdemir & Tekin, 2016) can be applied to teachers’ video recordings to observe patterns in their body posture. Such methods may provide valuable feedback to novice teachers on developing their teaching skills.
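The core automated-scoring idea discussed above can be illustrated with a deliberately tiny sketch (hypothetical essays and grades, plain Python; none of the reviewed systems work this simply): teacher-assigned grades train a one-feature linear model over a crude surface feature of the text, and the same teacher grades would serve as the benchmark for validating its predictions.

```python
def word_count(essay):
    """One crude surface feature; real scoring systems use many richer ones."""
    return len(essay.split())

def fit_1d(xs, ys):
    """Ordinary least squares for y = a*x + b on a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training essays with teacher-assigned grades (0-100).
essays = [
    "short answer",
    "a somewhat longer answer with more detail",
    "a much longer and considerably more developed answer with several supporting points",
]
grades = [40, 70, 95]

a, b = fit_1d([word_count(e) for e in essays], grades)
score = a * word_count("a new essay of moderate length to score") + b
print(round(score, 1))
```

The sketch also makes the validity risk discussed later tangible: a model trained only on length would happily reward long but meaningless essays, which is why teacher validation of automated scores matters.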

AI could also help provide teachers feedback on the effectiveness of their instructional practice (Farhan et al., 2018 ; Lamb & Premo, 2015 ). Teachers’ pedagogically meaningful teaching aspects can be modeled automatically using multiple data sources and AI (Dillenbourg, 2016 ; Prieto et al., 2018 ). Through these models, teachers can improve their instructional practices. Besides, the pedagogically effective models can train AI algorithms to make them more sophisticated.

AI technologies were also used to predict or assess teacher performance and outcomes. Researchers predicted pre-service and in-service teachers’ professional development outcomes, such as course achievement, using machine learning algorithms, which are beneficial in revealing complex and nonlinear relationships. Seven studies collected data from in-service teachers, while two obtained data from pre-service teachers (Akgün & Demir, 2018; Demir, 2015).

In addition, Cohen et al. ( 2017 ) conducted a study on a sample with autism spectrum disorder and another sample without. The results revealed that a machine learning tool can provide accurate and informative data for diagnosing autism spectrum disorder. In the study of Cohen et al., teachers commented on the accuracy of the tool.

Figure 4 illustrates the roles of teachers in AI research and the advantages of AI for teachers. It shows both what AI systems expect from teachers and what opportunities AI offers them.

Fig. 4 Advantages of AI and teacher roles in AI research

Challenges in AI Use by Teachers

(RQ5—What challenges did teachers face when using AI for education?)

The challenges in teachers’ use of AI are summarized in Table 3. One of the most frequently observed challenges is the limited technical capacity of AI. For example, AI may not be effective at scoring material that combines text with graphics or figures. Fitzgerald et al. (2015) reported that an AI-based system failed to assess the complexity of texts when they included images. The limited reliability of AI algorithms was found to be another considerable challenge. Automated writing evaluation technologies that use AI algorithms therefore have to be improved to provide trustworthy evaluations for teachers (Qian et al., 2020). The inefficiency of AI systems in assessment and evaluation relates more to validity than to reliability: AI-based scoring may sometimes improperly evaluate performance (Lu, 2019). Our review further indicated that AI systems may be so context-dependent that using them in varying educational settings can be challenging. For example, an AI algorithm designed to detect specific behavior in a specific online learning environment cannot work across different languages (Nikiforos et al., 2020). Such limitations can also stem from cultural differences.

The lack of technological knowledge among teachers (Chiu & Chai, 2020) and the lack of technical infrastructure in schools (McCarthy et al., 2016) are two other challenges in integrating AI into education. It has also been reported that AI-based feedback is sometimes slow, which can lead to teachers becoming bored with using AI (McCarthy et al., 2016). Although adaptive and personalized feedback is important for reducing teachers’ workload, AI systems are not always capable of giving different kinds of feedback based on students’ needs (Burstein et al., 2004). Therefore, AI systems currently fall short of meeting teachers’ needs for effective feedback (Fig. 5).

Fig. 5 AI methods in the reviewed studies

AI Methods in Research

(RQ6—Which AI methods were utilized in AI-based research that teachers participated in?)

We coded the AI methods in the studies, following previous reviews (Borges et al., 2020; Contreras & Vehi, 2018; Saa et al., 2019). Artificial neural networks (ANN) appeared to be the most used (f = 16) AI method in the education studies involving teachers. ANN is a machine learning method that is widely used in business, economics, engineering, and higher education (Musso et al., 2013). In our review, ANN was also commonly applied to data sourced from teachers. For example, Alzahrani et al. (2020) investigated the relationship between thermal comfort and teacher performance by using ANN to analyze data on teachers’ productivity and classroom temperature. Decision trees, another machine learning algorithm, were also frequently utilized in the reviewed studies. For instance, Gaudioso et al. (2012) used decision tree algorithms to support teachers in detecting moments in which students were having problems in an adaptive educational system. Similar to our findings, a review of machine learning methods for predicting university students’ academic performance found that the decision tree algorithm was the most commonly used (Saa et al., 2019).
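To make the decision-tree approach concrete, the sketch below illustrates the core idea behind such classifiers: choosing a split that minimizes Gini impurity. It is not taken from any reviewed study; the feature (failed attempts), the labels, and the threshold are all invented for illustration, in the spirit of systems like the one described by Gaudioso et al. (2012) that flag students who may be struggling.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Find the threshold on a single feature that minimizes the
    weighted Gini impurity of the two resulting branches."""
    best_threshold, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_threshold, best_score = t, score
    return best_threshold, best_score

# Hypothetical training data: number of failed attempts on an exercise
# (feature) and whether the teacher later marked the student as
# "struggling" (label). Entirely invented for this sketch.
failed_attempts = [0, 1, 1, 2, 3, 4, 5, 6]
struggling = [0, 0, 0, 0, 1, 1, 1, 1]

threshold, impurity = best_split(failed_attempts, struggling)
print(f"flag students with more than {threshold} failed attempts "
      f"(weighted Gini {impurity:.2f})")
# → flag students with more than 2 failed attempts (weighted Gini 0.00)
```

A real system would of course induce a full tree over many features (e.g., with the CART algorithm) rather than a single split, but the impurity-minimizing split shown here is the building block that makes the resulting rules interpretable to teachers.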

In our review, we also investigated the subject domains of teachers’ AI-based instruction. Studies with teachers from various domains accounted for 16% of all research (see Fig. 6). These studies generally had larger sample sizes than studies with teachers from a single domain (e.g., Buddhtha et al., 2019). Primary education and English language teaching appeared to be the domains where teachers use AI the most. Studies on automated essay scoring and adaptive feedback were typically conducted in English language courses. We found that 46% of all the reviewed studies were performed in fields related to science, technology, engineering, and mathematics (STEM), whereas a much smaller percentage were performed in the social sciences and early childhood fields combined. This might be because teachers in STEM fields are more accustomed to technology use (Chai et al., 2020).

figure 6

Distribution of studies by subject domain

Conclusions and Future Research

Due to growing interest in AI, the number of studies on teachers’ use of AI has been increasing in recent years. As AI continues to gain popularity in education, more research will undoubtedly focus on AI use in teachers’ instruction. Our synthesis of relevant studies shows that there has been little interest in investigating AI in pre-service teacher education. Hence, we recommend more empirical studies on pre-service teachers’ AI use. Developing AI awareness and skills among pre-service teachers may facilitate better adoption of AI-based teaching in future classrooms. As Valtonen et al. (2021) have shown, teachers’ and students’ use of emerging technologies can make a major contribution to the development of 21st-century practices in schools.

Another gap we found in our review is the limited variety of methods and data channels used in AI-based systems. AI-based systems in education do not yet exploit the potential of multimodal data. Most of the AI applications that teachers use rely only on self-reported and/or observation data, whereas different data modalities can create more opportunities to understand teaching and learning processes (Järvelä & Bannert, 2021). Enriching AI systems with other data types (e.g., physiological data) may give a better understanding of the different layers of teaching and learning and thus help teachers to plan effective learning interventions, provide timely feedback, and conduct more accurate assessments of students’ cognitive and emotional states during instruction. Utilizing multimodal data can help to model more efficient and effective AI systems for education. We therefore conclude that further work is necessary to improve the capabilities of AI systems with multimodal data.

Our review revealed that teachers have limited involvement in the development of AI-based education systems. Although in some studies, experienced teachers were recruited to train AI algorithms, further efforts are needed to involve a wider population of teachers in developing AI systems. Such involvement should go beyond training AI algorithms and involve teachers in the crucial decision-making processes on how (not) to develop AI systems for better teaching. For their part, AI developers and software companies should consider involving teachers in the development process to a greater extent.

This study showed that AI has generally been reported as beneficial to teachers’ instruction. Teachers can take advantage of AI in their planning, implementation, and assessment work. AI assists them in identifying their students’ needs so that they can determine the most suitable learning content and activities for their students. During activities such as collaborative tasks, AI helps teachers monitor their students in a timely manner and give them immediate feedback (e.g., Swiecki et al., 2019). After instruction, AI-based automated scoring systems can help teachers with assessment (e.g., Kersting et al., 2014). These advantages mainly reduce teachers’ workload and help them focus their attention on critical issues such as timely intervention and assessment (Vij et al., 2020). However, many of the reviewed studies were conducted to predict outcome variables (e.g., performance, engagement, and job satisfaction) through machine learning algorithms (Yoo & Rho, 2020). More studies are needed to enable AI systems to provide information and feedback on how learning processes temporally unfold during teachers’ instruction. Teachers will then be able to interact with such AI systems and better understand the opportunities they offer.

This study revealed several limitations and challenges of AI for teachers’ use such as its limited reliability, technical capacity, and applicability in multiple settings. Future empirical research is necessary to address the challenges reported in this study. We conclude that developing AI systems that are technically and pedagogically capable of contributing to quality education in diverse learning settings is yet to be achieved. To achieve this objective, multidisciplinary collaboration between multiple stakeholders (e.g., AI developers, pedagogical experts, teachers, and students) is crucial. We hope that this review will serve as a springboard for such collaboration.

*References marked with an asterisk indicate the articles included in the review

Aggarwal, C. C. (2018). Neural networks and deep learning . Springer. https://doi.org/10.1007/978-3-319-94463-0

Akçayır, M., & Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educational Research Review, 20 , 1–11. https://doi.org/10.1016/j.edurev.2016.11.002

*Akgün, E., & Demir, M. (2018). Modeling course achievements of elementary education teacher candidates with artificial neural networks.  International Journal of Assessment Tools in Education ,  5 (3), 491–509. https://doi.org/10.21449/ijate.444073

Alenezi, H. S., & Faisal, M. H. (2020). Utilizing crowdsourcing and machine learning in education: Literature review.  Education and Information Technologies , 1-16. https://doi.org/10.1007/s10639-020-10102-w

Alharbi, M. A., & Al-Hoorie, A. H. (2020). Turnitin peer feedback: Controversial vs. non-controversial essays. International Journal of Educational Technology in Higher Education, 17 , 1–17. https://doi.org/10.1186/s41239-020-00195-1

Alloghani, M., Al-Jumeily, D., Mustafina, J., Hussain, A., & Aljaaf, A. J. (2020). A systematic review on supervised and unsupervised machine learning algorithms for data science. In  Supervised and Unsupervised Learning for Data Science  (pp. 3–21). Springer, Cham. https://doi.org/10.1007/978-3-030-22475-2_1

Alzahrani, H., Arif, M., Kaushik, A., Goulding, J., & Heesom, D. (2020). Artificial neural network analysis of teachers’ performance against thermal comfort. International Journal of Building Pathology and Adaptation . https://doi.org/10.1108/IJBPA-11-2019-0098

Annabestani, M., Rowhanimanesh, A., Mizani, A., & Rezaei, A. (2020). Fuzzy descriptive evaluation system: Real, complete and fair evaluation of students. Soft Computing, 24 (4), 3025–3035. https://doi.org/10.1007/s00500-019-04078-0

Baker, T., & Smith, L. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges . Retrieved from Nesta Foundation website: https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf

Baran, E. (2014). A review of research on mobile learning in teacher education. Journal of Educational Technology & Society, 17 (4), 17–32.

*Bonneton-Botté, N., Fleury, S., Girard, N., Le Magadou, M., Cherbonnier, A., Renault, M., ... & Jamet, E. (2020). Can tablet apps support the learning of handwriting? An investigation of learning outcomes in kindergarten classroom.  Computers & Education ,  151 , 103831. https://doi.org/10.1016/j.compedu.2020.103831

Borges, A. F., Laurindo, F. J., Spínola, M. M., Gonçalves, R. F., & Mattos, C. A. (2020). The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions.  International Journal of Information Management , 102225. https://doi.org/10.1016/j.ijinfomgt.2020.102225

Bonk, C. J., & Wiley, D. A. (2020). Preface: Reflections on the waves of emerging learning technologies. Educational Technology Research and Development, 68 (4), 1595–1612. https://doi.org/10.1007/s11423-020-09809-x

Buddhtha, S., Natasha, C., Irwansyah, E., & Budiharto, W. (2019). Building an artificial neural network with backpropagation algorithm to determine teacher engagement based on the indonesian teacher engagement index and presenting the data in a Web-Based GIS. International Journal of Computational Intelligence Systems, 12 (2), 1575–1584. https://doi.org/10.2991/ijcis.d.191101.003

Burstein, J., Chodorow, M., & Leacock, C. (2004). Automated essay evaluation: The Criterion online writing service. Ai Magazine, 25 (3), 27–27. https://doi.org/10.1609/aimag.v25i3.1774

Bywater, J. B., Chiu J. l., Hong J., & Sankaranarayanan,V. (2019). The teacher responding tool: Scaffolding the teacher practice of responding to student ideas in mathematics classrooms.  Computers & Education   139 , 16-30. https://doi.org/10.1016/j.compedu.2019.05.004

Chai, C. S., Jong, M., & Yan, Z. (2020). Surveying Chinese teachers’ technological pedagogical STEM knowledge: A pilot validation of STEM-TPACK survey. International Journal of Mobile Learning and Organisation, 14 (2), 203–214. https://doi.org/10.1504/IJMLO.2020.106181

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8 , 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510

Chiu, T. K., & Chai, C. S. (2020). Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability, 12 (14), 5568. https://doi.org/10.3390/su12145568

Clark, D. (2020).  Artificial ıntelligence for learning: How to use AI to support employee development . Kogan Page Publishers.

*Cohen, I. L., Liu, X., Hudson, M., Gillis, J., Cavalari, R. N., Romanczyk, R. G., ... & Gardner, J. M. (2017). Level 2 Screening with the PDD Behavior Inventory: Subgroup Profiles and Implications for Differential Diagnosis.  Canadian Journal of School Psychology ,  32 (3-4), 299-315. https://doi.org/10.1177/0829573517721127

Contreras, I., & Vehi, J. (2018). Artificial intelligence for diabetes management and decision support: Literature review. Journal of Medical Internet Research, 20 (5), e10775. https://doi.org/10.2196/10775

Cope, B., Kalantzis, M., & Searsmith, D. (2020). Artificial intelligence for education: Knowledge and its assessment in AI-enabled learning ecologies.  Educational Philosophy and Theory , 1–17.

Cukurova, M., & Luckin, R. (2018). Measuring the impact of emerging technologies in education: A pragmatic approach. Springer, Cham. https://discovery.ucl.ac.uk/id/eprint/10068777

*Dalvean, M., & Enkhbayar, G. (2018). Assessing the readability of fiction: a corpus analysis and readability ranking of 200 English fiction texts* 4.  Linguistic Research ,  35 , 137–170. https://doi.org/10.17250/khisli.35.201809.006

Dawson, P., Sutherland-Smith, W., & Ricksen, M. (2020). Can software improve marker accuracy at detecting contract cheating? A pilot study of the Turnitin authorship investigate alpha. Assessment & Evaluation in Higher Education, 45 (4), 473–482.

*Demir, M. (2015). Predicting pre-service classroom teachers’ civil servant recruitment examination’s educational sciences test scores using artificial neural networks.  Educational Sciences: Theory & Practice ,  15 (5). Retrieved from https://doi.org/10.12738/estp.2015.5.0018

Denzin, N. K. (2017).  The research act: A theoretical introduction to sociological methods . Transaction publishers.

Dillenbourg, P. (2013). Design for classroom orchestration. Computers & Education, 69, 485–492. https://doi.org/10.1016/j.compedu.2013.04.013 .

Dillenbourg, P. (2016). The evolution of research on digital education. International Journal of Artificial Intelligence in Education, 26 (2), 544–560. https://doi.org/10.1007/s40593-016-0106-z

EdTechXGlobal. (2016). EdTechXGlobal report 2016—Global EdTech industry report: a map for the future of education and work. Retrieved from http://ecosystem.edtechxeurope.com/2016-edtech-report

Farhan, M., Jabbar, S., Aslam, M., Ahmad, A., Iqbal, M. M., Khan, M., & Maria, M. E. A. (2018). A real-time data mining approach for interaction analytics assessment: IoT based student interaction framework. International Journal of Parallel Programming, 46 (5), 886–903. https://doi.org/10.1007/s10766-017-0553-7

Fitzgerald, J., Elmore, J., Koons, H., Hiebert, E. H., Bowen, K., Sanford-Moore, E. E., & Stenner, A. J. (2015). Important text characteristics for early-grades text complexity. Journal of Educational Psychology, 107 (1), 4. https://doi.org/10.1037/a0037289

Gaudioso, E., Montero, M., & Hernandez-Del-Olmo, F. (2012). Supporting teachers in adaptive educational systems through predictive models: A proof of concept. Expert Systems with Applications, 39 (1), 621–625. https://doi.org/10.1016/j.eswa.2011.07.052

Häkkinen, P., Järvelä, S., Mäkitalo-Siegl, K., Ahonen, A., Näykki, P., & Valtonen, T. (2017). Preparing teacher students for 21st century learning practices (PREP 21): A framework for enhancing collaborative problem solving and strategic learning skills. Teachers and Teaching: Theory and Practice, 23 (1), 25–41. https://doi.org/10.1080/13540602.2016.1203772

Heffernan, N. T., & Heffernan, C. L. (2014). The ASSISTments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. International Journal of Artificial Intelligence in Education, 24 (4), 470–497. https://doi.org/10.1007/s40593-014-0024-x

Heitink, M. C., Van der Kleij, F. M., Veldkamp, B. P., Schildkamp, K., & Kippers, W. B. (2016). A systematic review of prerequisites for implementing assessment for learning in classroom practice. Educational Research Review, 17 , 50–62. https://doi.org/10.1016/j.edurev.2015.12.002

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and Implications for Teaching and Learning . Center for Curriculum Redesign.

Holstein, K., McLaren, B. M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity.  Journal of Learning Analytics ,  6 (2), 27–52. https://doi.org/10.18608/jla.2019.62.3

Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., ... & Utterberg, M. (2019). Critical imaginaries and reflections on artificial intelligence and robots in post digital K-12 education.  Post digital Science and Education ,  1 (2), 427-445. https://doi.org/10.1007/s42438-019-00046-x

*Huang, C. J., Liu, M. C., Chang, K. E., Sung, Y. T., Huang, T. H., Chen, C. H., ... & Chang, T. Y. (2010). A learning assistance tool for enhancing ICT literacy of elementary school students.  Journal of Educational Technology & Society ,  13 (3), 126-138.

Huang, C. J., Wang, Y. W., Huang, T. H., Chen, Y. C., Chen, H. M., & Chang, S. C. (2011). Performance evaluation of an online argumentation learning assistance agent. Computers & Education, 57 (1), 1270–1280. https://doi.org/10.1016/j.compedu.2011.01.013

Järvelä, S. & Bannert, M. (2021). Temporal and adaptive processes of regulated learning – What can multimodal data tell? Learning and Instruction , 72, https://doi.org/10.1016/j.learninstruc.2019.101268

Järvelä, S., Malmberg, J., Haataja, E., Sobocinski, M., & Kirschner, P. A. (2021). What multimodal data can tell us about the students’ regulation of their learning process.  Learning and Instruction ,  101203 . https://doi.org/10.1016/j.learninstruc.2019.04.004

Kelly, S., Olney, A. M., Donnelly, P., Nystrand, M., & D’Mello, S. K. (2018). Automatically measuring question authenticity in real-world classrooms. Educational Researcher, 47 (7), 451–464. https://doi.org/10.3102/0013189X18785613

Kersting, N. B., Sherin, B. L., & Stigler, J. W. (2014). Automated scoring of teachers’ open-ended responses to video prompts: Bringing the classroom-video-analysis assessment to scale. Educational and Psychological Measurement, 74 (6), 950–974. https://doi.org/10.1177/0013164414521634

Kirschner, P. A. (2015). Do we need teachers as designers of technology enhanced learning? Instructional Science, 43 (2), 309–322. https://doi.org/10.1007/s11251-015-9346-9

Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The Knowledge-Learning-Instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36 (5), 757–798. https://doi.org/10.1111/j.1551-6709.2012.01245.x

Kucuk, S., Aydemir, M., Yildirim, G., Arpacik, O., & Goktas, Y. (2013). Educational technology research trends in Turkey from 1990 to 2011. Computers & Education, 68 , 42–50. https://doi.org/10.1016/j.compedu.2013.04.016

Lamb, R., & Premo, J. (2015). Computational modeling of teaching and learning through application of evolutionary algorithms. Computation, 3 (3), 427–443. https://doi.org/10.3390/computation3030427

Langran, E., Searson, M., Knezek, G., & Christensen, R. (2020). AI in Teacher Education. In  Society for Information Technology & Teacher Education International Conference  (pp. 735–740). Association for the Advancement of Computing in Education (AACE). https://www.learntechlib.org/p/215821/

Lu, X. (2019). An empirical study on the artificial intelligence writing evaluation system in China CET. Big Data, 7 (2), 121–129. https://doi.org/10.1089/big.2018.0151

Luckin, R., & Cukurova, M. (2019). Designing educational technologies in the age of AI: A learning sciences-driven approach. British Journal of Educational Technology, 50 (6), 2824–2838. https://doi.org/10.1111/bjet.12861

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education . Pearson Education.

Luor, T., Johanson, R. E., Lu, H. P., & Wu, L. L. (2008). Trends and lacunae for future computer assisted learning (CAL) research: An assessment of the literature in SSCI journals from 1998–2006. Journal of the American Society for Information Science and Technology, 59 (8), 1313–1320. https://doi.org/10.1002/asi.20836

Ma, Z. H., Hwang, W. Y., & Shih, T. K. (2020). Effects of a peer tutor recommender system (PTRS) with machine learning and automated assessment on vocational high school students’ computer application operating skills. Journal of Computers in Education, 7 (3), 435–462. https://doi.org/10.1007/s40692-020-00162-9

McCarthy, T., Rosenblum, L. P., Johnson, B. G., Dittel, J., & Kearns, D. M. (2016). An artificial intelligence tutor: A supplementary tool for teaching and practicing braille. Journal of Visual Impairment & Blindness, 110 (5), 309–322. https://doi.org/10.1177/0145482X1611000503

Musso, M. F., Kyndt, E., Cascallar, E. C., & Dochy, F. (2013). Predicting general academic performance and ıdentifying the differential contribution of participating variables using artificial neural networks.  Frontline Learning Research ,  1 (1), 42–71. https://doi.org/10.14786/flr.v1i1.13

Nikiforos, S., Tzanavaris, S., & Kermanidis, K. L. (2020). Virtual learning communities (VLCs) rethinking: Influence on behavior modification—bullying detection through machine learning and natural language processing. Journal of Computers in Education, 7 , 531–551. https://doi.org/10.1007/s40692-020-00166-5

Okada, A., Whitelock, D., Holmes, W., & Edwards, C. (2019). e-Authentication for online assessment: A mixed-method study. British Journal of Educational Technology, 50 (2), 861–875.

Ozdemir, O., & Tekin, A. (2016). Evaluation of the presentation skills of the pre-service teachers via fuzzy logic. Computers in Human Behavior, 61 , 288–299. https://doi.org/10.1016/j.chb.2016.03.013

Pelham, W. E., Petras, H., & Pardini, D. A. (2020). Can machine learning improve screening for targeted delinquency prevention programs? Prevention Science, 21 (2), 158–170. https://doi.org/10.1007/s11121-019-01040-2

Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12 (1), 1–13. https://doi.org/10.1186/s41039-017-0062-8

Prieto, L. P., Sharma, K., Kidzinski, Ł, Rodríguez-Triana, M. J., & Dillenbourg, P. (2018). Multimodal teaching analytics: Automated extraction of orchestration graphs from wearable sensor data. Journal of Computer Assisted Learning, 34 (2), 193–203. https://doi.org/10.1111/jcal.12232

Qian, L., Zhao, Y., & Cheng, Y. (2020). Evaluating China’s automated essay scoring system iWrite. Journal of Educational Computing Research, 58 (4), 771–790. https://doi.org/10.1177/0735633119881472

Qin, F., Li, K., & Yan, J. (2020). Understanding user trust in artificial intelligence-based educational systems: Evidence from China. British Journal of Educational Technology, 51 (5), 1693–1710. https://doi.org/10.1111/bjet.12994

Renz, A., & Hilbig, R. (2020). Prerequisites for artificial intelligence in further education: Identification of drivers, barriers, and business models of educational technology companies. International Journal of Educational Technology in Higher Education, 17 , 1–21. https://doi.org/10.1186/s41239-020-00193-3

Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26 (2), 582–599. https://doi.org/10.1007/s40593-016-0110-3

Ruiz-Palmero, J., Colomo-Magaña, E., Ríos-Ariza, J. M., & Gómez-García, M. (2020). Big data in education: Perception of training advisors on its use in the educational system. Social Sciences, 9 (4), 53. https://doi.org/10.3390/socsci9040053

Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach . Pearson Education.

Saa, A. A., Al-Emran, M., & Shaalan, K. (2019). Factors affecting students’ performance in higher education: A systematic review of predictive data mining techniques. Technology, Knowledge and Learning, 24 (4), 567–598. https://doi.org/10.1007/s10758-019-09408-7

Salomon, G. (1996). Studying novel learning environments as patterns of change. In S. Vosiniadou, E. De Corte, R. Glaser & H. Mandl (Eds.). International Perspectives on the design of Technology Supported Learning. NJ: Lawrence Erlbaum Associates.

Swiecki, Z., Ruis, A. R., Gautam, D., Rus, V., & Williamson Shaffer, D. (2019). Understanding when students are active-in-thinking through modeling-in-context. British Journal of Educational Technology, 50 (5), 2346–2364. https://doi.org/10.1111/bjet.12869

Sánchez-Prieto, J. C., Cruz-Benito, J., Therón Sánchez, R., & García Peñalvo, F. J. (2020). Assessed by machines: Development of a TAM-based tool to measure ai-based assessment acceptance among students. International Journal of Interactive Multimedia and Artificial Intelligence, 6 (4), 80–86. https://doi.org/10.9781/ijimai.2020.11.009

Schwarz, B. B., Prusak, N., Swidan, O., Livny, A., Gal, K., & Segal, A. (2018). Orchestrating the emergence of conceptual learning: A case study in a geometry class. International Journal of Computer-Supported Collaborative Learning, 13 (2), 189–211. https://doi.org/10.1007/s11412-018-9276-z

Seufert, S., Guggemos, J., & Sailer, M. (2020). Technology-related knowledge, skills, and attitudes of pre-and in-service teachers: The current situation and emerging trends. Computers in Human Behavior, 115 , 106552. https://doi.org/10.1016/j.chb.2020.106552

Şimşek, H., & Yıldırım, A. (2011). Qualitative research methods in social sciences . Seçkin Publishing.

Su, Y. N., Hsu, C. C., Chen, H. C., Huang, K. K., & Huang, Y. M. (2014). Developing a sensor-based learning concentration detection system. Engineering Computations., 31 (2), 216–230. https://doi.org/10.1108/EC-01-2013-0010

Tepperman, J., Lee, S., Narayanan, S., & Alwan, A. (2010). A generative student model for scoring word reading skills. IEEE Transactions on Audio, Speech, and Language Processing, 19 (2), 348–360. https://doi.org/10.1109/TASL.2010.2047812

Tondeur, J., Scherer, R., Siddiq, F., & Baran, E. (2020). Enhancing pre-service teachers’ technological pedagogical content knowledge (TPACK): A mixed-method study. Educational Technology Research and Development, 68 (1), 319–343. https://doi.org/10.1007/s11423-019-09692-1

Valtonen, T., Hoang, N., Sointu, E., Näykki, P., Virtanen, A., Pöysä-Tarhonen, J., Häkkinen, P., Järvelä, S., Mäkitalo, K., & Kukkonen, J. (2021). How pre-service teachers perceive their 21st-century skills and dispositions: A longitudinal perspective. Computers in Human Behavior, 116 , 106643. https://doi.org/10.1016/j.chb.2020.106643

Vij, S., Tayal, D., & Jain, A. (2020). A machine learning approach for automated evaluation of short answers using text similarity based on WordNet graphs. Wireless Personal Communications, 111 (2), 1271–1282. https://doi.org/10.1007/s11277-019-06913-x

Wang, S., Hu, B. Y., & LoCasale-Crouch, J. (2020). Modeling the nonlinear relationship between structure and process quality features in Chinese preschool classrooms. Children and Youth Services Review, 109 , 104677. https://doi.org/10.1016/j.childyouth.2019.104677

Williamson, M. (2015). “I wasn’t reinventing the wheel, just operating the tools”: The evolution of the writing processes of online first-year composition students (unpublished doctorial dissertation) . Arizona State University.

Yang, C. H. (2012). Fuzzy fusion for attending and responding assessment system of affective teaching goals in distance learning. Expert Systems with Applications, 39 (3), 2501–2508. https://doi.org/10.1016/j.eswa.2011.08.102

*Yoo, J. E., & Rho, M. (2020). Exploration of predictors for Korean teacher job satisfaction via a machine learning technique, Group Mnet.  Frontiers in psychology ,  11 , 441. https://doi.org/10.3389/fpsyg.2020.00441

*Yuan, S., He, T., Huang, H., Hou, R., & Wang, M. (2020). Automated Chinese essay scoring based on deep learning.  CMC-Computers Materials & Continua ,  65 (1), 817–833. https://doi.org/10.32604/cmc.2020.010471

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 39. https://doi.org/10.1186/s41239-019-0171-0

Open Access funding provided by University of Oulu including Oulu University Hospital.

Author information

Authors and Affiliations

Learning and Learning Processes Research Unit, Faculty of Education, University of Oulu, 90014, Oulu, Finland

Ismail Celik & Hanni Muukkonen

Learning and Educational Technology Research Unit, Faculty of Education, University of Oulu, 90014, Oulu, Finland

Muhterem Dindar & Sanna Järvelä

Corresponding author

Correspondence to Ismail Celik .

Ethics declarations

Human and Animal Rights

There were no human participants and/or animals.

Informed Consent

This study is a literature review; therefore, no informed consent was needed.

Conflict of Interest

There are no potential conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Celik, I., Dindar, M., Muukkonen, H. et al. The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends 66 , 616–630 (2022). https://doi.org/10.1007/s11528-022-00715-y

Download citation

Accepted : 07 March 2022

Published : 25 March 2022

Issue Date : July 2022

DOI : https://doi.org/10.1007/s11528-022-00715-y

  • Artificial intelligence in education
  • Systematic review
  • Teacher professional development
  • Technology integration

Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to [email protected]. Read more from this blog.

Research Studies That Teachers Can Get Behind

Today’s post is the latest in a series on research findings that could be useful to teachers.

Reading Motivation

Julia B. Lindsey is a foundational literacy expert and the author of the Scholastic title, Reading Above the Fray: Reliable, Research-based Routines for Developing Decoding Skills :

Lately, I’ve noticed that many conversations about children’s foundational reading skills are disconnected from conversations about children’s motivation to read. When foundational skills and reading are mentioned together, there seem to be two very different positions: Either, there’s a concern that motivation will get in the way of teaching skills, or a concern that focusing on skills will harm children’s motivation.

The simple fact is that these shouldn’t be either-or conversations, and many of us worry about both foundational skills and long-term reading motivation. So, what should educators focus on the most in the early years? Just like many areas of reading instruction, we can look to the research to understand more about the relationship between foundational skills and reading motivation.

A recent meta-analysis of motivation and reading achievement found that, on average across more than 132 studies, early reading was a stronger predictor of later motivation than early motivation was of later reading (Toste et al., 2020). In other words, children’s skill in reading is likely to help drive their motivation over time, but their motivation may not drive their growth in skill to the same degree over time. Highly motivated young readers need knowledge and skills, not only motivation, to drive their continued growth in reading.

This idea has recently been confirmed and extended. Just last year, researchers published a study investigating the literacy skills over time of several thousand twins (van Bergen et al., 2022). Among other findings, researchers found that early literacy skills impacted later literacy enjoyment, but early enjoyment did not impact skill. Strong, early skills are likely to lead to motivation and enjoyment over time. But, motivation without support to develop children’s skills is unlikely to lead to long-term skill or enjoyment.

How can we navigate conversations about foundational skills and reading motivation? We can acknowledge these critical research findings that tell us supporting young readers in acquiring excellent skills is likely a powerful way to support their long-term motivation. Though we can certainly continue to address children’s motivation in other research-based ways throughout their reading lives, it is critical to know: Skills are not a motivation killer; they are a motivation driver!

Teaching ELLs

Irina McGrath, Ph.D., is an assistant principal at Newcomer Academy in the Jefferson County district in Kentucky and the president of KYTESOL. She is also an adjunct professor at the University of Louisville, Indiana University Southeast, and Bellarmine University. She is a co-creator of the ELL2.0 site that offers free resources for teachers of English learners:

The topic of retaining new learning in a second language deserves greater attention than it has received so far. Retention refers to one’s ability to remember learning over time and recall it when necessary, which can be difficult. Humans tend to forget information easily.

German psychologist Hermann Ebbinghaus discovered in the 1880s that information is quickly forgotten without any reinforcement or connections to prior knowledge. In just one hour, people can forget about 56 percent of the information, approximately 66 percent after one day, and as much as 75 percent after six days. Various factors influence these percentages, including one’s prior knowledge of the topic, the difficulty of the material, the initial degree of learning, and the learning strategies used.
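Ebbinghaus’s percentages imply retention fractions of roughly 44 percent after one hour, 34 percent after one day, and 25 percent after six days. As a rough illustration, a log-time decay model can be fit by hand to those three points (the coefficients below are illustrative approximations, not values from Ebbinghaus):

```python
import math

# Retention = 1 - fraction forgotten, from the percentages quoted above:
# ~56% forgotten after one hour, ~66% after one day, ~75% after six days.
observed = {1: 0.44, 24: 0.34, 144: 0.25}  # hours elapsed -> fraction retained

def retention(hours, a=0.44, b=0.0315):
    """Rough log-time decay fit to the three points above (illustrative only)."""
    return a - b * math.log(hours)

for hours, actual in observed.items():
    print(f"{hours:>4} h: model {retention(hours):.2f}, quoted {actual:.2f}")
```

The fit is loose at six days, which is consistent with the caveat above that prior knowledge, material difficulty, and learning strategies all shift these percentages.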

The learning process in a second language can be a challenging task, as students not only have to comprehend new concepts and ideas but also do so in a non-native language, which adds an additional layer of difficulty to retaining information.

Fortunately, researchers are making progress in identifying ways to support English learners in retaining and recalling information in their second or third language. An increasing number of studies are now focusing on specific strategies to enhance retention among ELs. One such effective strategy is the use of mnemonic devices, which have been found to improve ELs’ vocabulary retention by an average of 9 percent (Hill, 2022).

A 2021 study by Karatas, Özemir, and Ullman demonstrated that when students studied vocabulary words in their second language and utilized memory-enhancement techniques like spacing and retrieval practice, they experienced significant improvements in both learning and retention of the new vocabulary words.

Spacing allows learners to study across multiple sessions instead of cramming information into a single session, facilitating continuous built-in review and reducing the risk of learning burnout. On the other hand, retrieval practice involves active recall of information rather than passive engagement, such as quietly reviewing or rereading learned materials.
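As a concrete illustration of spacing, an expanding-interval review schedule might look like the sketch below. The doubling pattern is an illustrative choice, not a schedule prescribed by the study cited above:

```python
def review_schedule(start_day=0, first_gap=1, factor=2, sessions=5):
    """Days on which to review material, with gaps that expand each time.

    The doubling gaps are an illustrative choice; the study cited above
    tested spacing generally, not this particular schedule.
    """
    days, gap, day = [], first_gap, start_day
    for _ in range(sessions):
        day += gap
        days.append(day)
        gap *= factor
    return days

# Study on day 0, then review on days 1, 3, 7, 15, and 31.
print(review_schedule())
```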

Recognizing the power of research-based strategies that promote retention of new learning in English learners is key to ensuring that valuable instructional time is not wasted and information does not fade away into the depths of forgetfulness.

Supporting Student Home Languages

Stephanie Dewing, Ph.D., is an associate professor of clinical education and the chair of the bilingual authorization program at the University of Southern California’s Rossier School of Education. A former classroom teacher in Ecuador and the United States, Stephanie specializes in language and literacy development and dual-language instruction, with an emphasis on newcomers:

Did you know that multilingualism can delay the onset of Alzheimer’s and dementia by up to five years?! This is just one of the many benefits associated with having a bilingual or multilingual brain. As a language teacher, I often get asked by other educators and multilingual families if it is best to use English only at home and in school so students do not get “confused” between the different languages. Thanks to decades of research, we can confidently answer this question: no. It’s best to maintain the home language(s)! The brain is an amazing organ that, over time, will figure out which language is most appropriate to use with which person and in which context. ¿Increíble, no?

Several studies have found that the development of the first language, or L1, is beneficial to the development of English and other subsequent languages. For example, Umansky and Reardon (2014) did a longitudinal study over 12 years that looked at the reclassification patterns of Latino English learners. Reclassification is when students who are identified as English learners demonstrate proficiency in English based on state exams and other bodies of evidence. What they found was that those who were enrolled in dual-language programs tended to reclassify a bit slower at the elementary level, but by the end of high school, they had a higher likelihood of becoming proficient in English and reclassifying (Umansky & Reardon, 2014).

The benefits of focusing on L1 acquisition in the early grades were undeniable. Riches and Genesee (2006) also argued for the importance of early literacy experiences and found through their research that those who develop literacy in their first language(s) develop skills that transfer to literacy development in English (or other additional languages). In fact, English learners who had developed literacy in their L1 were found to progress more quickly and successfully in English literacy development than those who had no prior L1 literacy.

One study even found that L1 reading ability was the best predictor of L2, or second-language, reading achievement in later grades (Riches & Genesee, 2006). Having bilingual or multilingual repertoires from which to draw is an asset, which means that whenever the opportunity presents itself, we should encourage families to maintain their home languages and engage in literacy-based activities in those languages.

Knowing which resources are available, such as print resources in different languages at local libraries, multilingual apps or websites, audiobooks in other languages, etc., can empower educators and families to reach this goal.

Finally, it is important to note the role that time plays in these studies. Language development takes time, and our patience and support are essential. When we give this incredible process the time, attention, and recognition it deserves (and start early!), we are not only setting our multilingual learners up for greater success, but we are helping to shape a more global, multilingual world, which is something to celebrate!

Thanks to Julia, Irina, and Stephanie for contributing their thoughts!

Today’s post answered this question:

What are one to three research findings that you think teachers should know about but that you also think that many of them do not?

Part One in this series featured responses from Ron Berger, Wendi Pillars, and Marina Rodriguez.

In Part Two , Erica Silva, Min Oh, and Marilyn Chu contributed their answers.

Consider contributing a question to be answered in a future post. You can send one to me at [email protected]. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer remaining anonymous and have a pseudonym in mind.

You can also contact me on Twitter at @Larryferlazzo.

Just a reminder: you can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 12 years of this blog, you can see a categorized list here.

The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.


Teacher and Teaching Effects on Students’ Attitudes and Behaviors

David Blazar

Harvard Graduate School of Education

Matthew A. Kraft

Brown University

Abstract

Research has focused predominantly on how teachers affect students’ achievement on tests despite evidence that a broad range of attitudes and behaviors are equally important to their long-term success. We find that upper-elementary teachers have large effects on self-reported measures of students’ self-efficacy in math, and happiness and behavior in class. Students’ attitudes and behaviors are predicted by teaching practices most proximal to these measures, including teachers’ emotional support and classroom organization. However, teachers who are effective at improving test scores often are not equally effective at improving students’ attitudes and behaviors. These findings lend empirical evidence to well-established theory on the multidimensional nature of teaching and the need to identify strategies for improving the full range of teachers’ skills.

1. Introduction

Empirical research on the education production function traditionally has examined how teachers and their background characteristics contribute to students’ performance on standardized tests ( Hanushek & Rivkin, 2010 ; Todd & Wolpin, 2003 ). However, a substantial body of evidence indicates that student learning is multidimensional, with many factors beyond their core academic knowledge as important contributors to both short- and long-term success. 1 For example, psychologists find that emotion and personality influence the quality of one’s thinking ( Baron, 1982 ) and how much a child learns in school ( Duckworth, Quinn, & Tsukayama, 2012 ). Longitudinal studies document the strong predictive power of measures of childhood self-control, emotional stability, persistence, and motivation on health and labor market outcomes in adulthood ( Borghans, Duckworth, Heckman, & Ter Weel, 2008 ; Chetty et al., 2011 ; Moffitt et al., 2011 ). In fact, these sorts of attitudes and behaviors are stronger predictors of some long-term outcomes than test scores ( Chetty et al., 2011 ).

Consistent with these findings, decades worth of theory also have characterized teaching as multidimensional. High-quality teachers are thought and expected not only to raise test scores but also to provide emotionally supportive environments that contribute to students’ social and emotional development, manage classroom behaviors, deliver accurate content, and support critical thinking ( Cohen, 2011 ; Lampert, 2001 ; Pianta & Hamre, 2009 ). In recent years, two research traditions have emerged to test this theory using empirical evidence. The first tradition has focused on observations of classrooms as a means of identifying unique domains of teaching practice ( Blazar, Braslow, Charalambous, & Hill, 2015 ; Hamre et al., 2013 ). Several of these domains, including teachers’ interactions with students, classroom organization, and emphasis on critical thinking within specific content areas, aim to support students’ development in areas beyond their core academic skill. The second research tradition has focused on estimating teachers’ contribution to student outcomes, often referred to as “teacher effects” ( Chetty, Friedman, & Rockoff, 2014 ; Hanushek & Rivkin, 2010 ). These studies have found that, as with test scores, teachers vary considerably in their ability to impact students’ social and emotional development and a variety of observed school behaviors ( Backes & Hansen, 2015 ; Gershenson, 2016 ; Jackson, 2012 ; Jennings & DiPrete, 2010 ; Koedel, 2008 ; Kraft & Grace, 2016 ; Ladd & Sorensen, 2015 ; Ruzek et al., 2015 ). Further, weak to moderate correlations between teacher effects on different student outcomes suggest that test scores alone cannot identify teachers’ overall skill in the classroom.

Our study is among the first to integrate these two research traditions, which largely have developed in isolation. Working at the intersection of these traditions, we aim both to minimize threats to internal validity and to open up the “black box” of teacher effects by examining whether certain dimensions of teaching practice predict students’ attitudes and behaviors. We refer to these relationships between teaching practice and student outcomes as “teaching effects.” Specifically, we ask three research questions:

  • To what extent do teachers impact students’ attitudes and behaviors in class?
  • To what extent do specific teaching practices impact students’ attitudes and behaviors in class?
  • Are teachers who are effective at raising test-score outcomes equally effective at developing positive attitudes and behaviors in class?

To answer our research questions, we draw on a rich dataset from the National Center for Teacher Effectiveness of upper-elementary classrooms that collected teacher-student links, observations of teaching practice scored on two established instruments, students’ math performance on both high- and low-stakes tests, and a student survey that captured their attitudes and behaviors in class. We used this survey to construct our three primary outcomes: students’ self-reported self-efficacy in math, happiness in class, and behavior in class. All three measures are important outcomes of interest to researchers, policymakers, and parents ( Borghans et al., 2008 ; Chetty et al., 2011 ; Farrington et al., 2012 ). They also align with theories linking teachers and teaching practice to outcomes beyond students’ core academic skills ( Bandura, Barbaranelli, Caprara, & Pastorelli, 1996 ; Pianta & Hamre, 2009 ), allowing us to test these theories explicitly.

We find that upper-elementary teachers have substantive impacts on students’ self-reported attitudes and behaviors in addition to their math performance. We estimate that the variation in teacher effects on students’ self-efficacy in math and behavior in class is of similar magnitude to the variation in teacher effects on math test scores. The variation of teacher effects on students’ happiness in class is even larger. Further, these outcomes are predicted by teaching practices most proximal to these measures, thus aligning with theory and providing important face and construct validity to these measures. Specifically, teachers’ emotional support for students is related both to their self-efficacy in math and happiness in class. Teachers’ classroom organization predicts students’ reports of their own behavior in class. Errors in teachers’ presentation of mathematical content are negatively related to students’ self-efficacy in math and happiness in class, as well as students’ math performance. Finally, we find that teachers are not equally effective at improving all outcomes. Compared to a correlation of 0.64 between teacher effects on our two math achievement tests, the strongest correlation between teacher effects on students’ math achievement and effects on their attitudes or behaviors is 0.19.

Together, these findings add further evidence for the multidimensional nature of teaching and, thus, the need for researchers, policymakers, and practitioners to identify strategies for improving these skills. In our conclusion, we discuss several ways that policymakers and practitioners may start to do so, including through the design and implementation of teacher evaluation systems, professional development, recruitment, and strategic teacher assignments.

2. Review of Related Research

Theories of teaching and learning have long emphasized the important role teachers play in supporting students’ development in areas beyond their core academic skill. For example, in their conceptualization of high-quality teaching, Pianta and Hamre (2009) describe a set of emotional supports and organizational techniques that are equally important to learners as teachers’ instructional methods. They posit that, by providing “emotional support and a predictable, consistent, and safe environment” (p. 113), teachers can help students become more self-reliant, motivated to learn, and willing to take risks. Further, by modeling strong organizational and management structures, teachers can help build students’ own ability to self-regulate. Content-specific views of teaching also highlight the importance of teacher behaviors that develop students’ attitudes and behaviors in ways that may not directly impact test scores. In mathematics, researchers and professional organizations have advocated for teaching practices that emphasize critical thinking and problem solving around authentic tasks ( Lampert, 2001 ; National Council of Teachers of Mathematics [NCTM], 1989 , 2014 ). Others have pointed to teachers’ important role of developing students’ self-efficacy and decreasing their anxiety in math ( Bandura et al., 1996 ; Usher & Pajares, 2008 ; Wigfield & Meece, 1988 ).

In recent years, development and use of observation instruments that capture the quality of teachers’ instruction have provided a unique opportunity to examine these theories empirically. One instrument in particular, the Classroom Assessment Scoring System (CLASS), is organized around “meaningful patterns of [teacher] behavior…tied to underlying developmental processes [in students]” ( Pianta & Hamre, 2009 , p. 112). Factor analyses of data collected by this instrument have identified several unique aspects of teachers’ instruction: teachers’ social and emotional interactions with students, their ability to organize and manage the classroom environment, and their instructional supports in the delivery of content ( Hafen et al., 2015 ; Hamre et al., 2013 ). A number of studies from developers of the CLASS instrument and their colleagues have described relationships between these dimensions and closely related student attitudes and behaviors. For example, teachers’ interactions with students predict students’ social competence, engagement, and risk-taking; teachers’ classroom organization predicts students’ engagement and behavior in class ( Burchinal et al., 2008 ; Downer, Rimm-Kaufman, & Pianta, 2007 ; Hamre, Hatfield, Pianta, & Jamil, 2014 ; Hamre & Pianta, 2001 ; Luckner & Pianta, 2011 ; Mashburn et al., 2008 ; Pianta, La Paro, Payne, Cox, & Bradley, 2002 ). With only a few exceptions (see Downer et al., 2007 ; Hamre & Pianta, 2001 ; Luckner & Pianta, 2011 ), though, these studies have focused on pre-kindergarten settings.

Additional content-specific observation instruments highlight several other teaching competencies with links to students’ attitudes and behaviors. For example, in this study we draw on the Mathematical Quality of Instruction (MQI) to capture math-specific dimensions of teachers’ classroom practice. Factor analyses of data captured both by this instrument and the CLASS identified two teaching skills in addition to those described above: the cognitive demand of math activities that teachers provide to students and the precision with which they deliver this content ( Blazar et al., 2015 ). Validity evidence for the MQI has focused on the relationship between these teaching practices and students’ math test scores ( Blazar, 2015 ; Kane & Staiger, 2012 ), which makes sense given the theoretical link between teachers’ content knowledge, delivery of this content, and students’ own understanding ( Hill et al., 2008 ). However, professional organizations and researchers also describe theoretical links between the sorts of teaching practices captured on the MQI and student outcomes beyond test scores ( Bandura et al., 1996 ; Lampert, 2001 ; NCTM, 1989 , 2014 ; Usher & Pajares, 2008 ; Wigfield & Meece, 1988 ) that, to our knowledge, have not been tested.

In a separate line of research, several recent studies have borrowed from the literature on teachers’ “value-added” to student test scores in order to document the magnitude of teacher effects on a range of other outcomes. These studies attempt to isolate the unique effect of teachers on non-tested outcomes from factors outside of teachers’ control (e.g., students’ prior achievement, race, gender, socioeconomic status) and to limit any bias due to non-random sorting. Jennings and DiPrete (2010) estimated the role that teachers play in developing kindergarten and first-grade students’ social and behavioral outcomes. They found within-school teacher effects on social and behavioral outcomes that were even larger (0.21 standard deviations [sd]) than effects on students’ academic achievement (between 0.12 sd and 0.15 sd, depending on grade level and subject area). In a study of 35 middle school math teachers, Ruzek et al. (2015) found small but meaningful teacher effects on students’ motivation between 0.03 sd and 0.08 sd among seventh graders. Kraft and Grace (2016) found teacher effects on students’ self-reported measures of grit, growth mindset and effort in class ranging between 0.14 and 0.17 sd. Additional studies identified teacher effects on students’ observed school behaviors, including absences, suspensions, grades, grade progression, and graduation ( Backes & Hansen, 2015 ; Gershenson, 2016 ; Jackson, 2012 ; Koedel, 2008 ; Ladd & Sorensen, 2015 ).

To date, evidence is mixed on the extent to which teachers who improve test scores also improve other outcomes. Four of the studies described above found weak relationships between teacher effects on students’ academic performance and effects on other outcome measures. Compared to a correlation of 0.42 between teacher effects on math versus reading achievement, Jennings and DiPrete (2010) found correlations of 0.15 between teacher effects on students’ social and behavioral outcomes and effects on either math or reading achievement. Kraft and Grace (2016) found correlations between teacher effects on achievement outcomes and multiple social-emotional competencies were sometimes non-existent and never greater than 0.23. Similarly, Gershenson (2016) and Jackson (2012) found weak or null relationships between teacher effects on students’ academic performance and effects on observed school behaviors. However, correlations from two other studies were larger. Ruzek et al. (2015) estimated a correlation of 0.50 between teacher effects on achievement versus effects on students’ motivation in math class. Mihaly, McCaffrey, Staiger, and Lockwood (2013) found a correlation of 0.57 between middle school teacher effects on students’ self-reported effort versus effects on math test scores.

Our analyses extend this body of research by estimating teacher effects on additional attitudes and behaviors captured by students in upper-elementary grades. Our data offer the unique combination of a moderately sized sample of teachers and students with lagged survey measures. We also utilize similar econometric approaches to test the relationship between teaching practice and these same attitudes and behaviors. These analyses allow us to examine the face validity of our teacher effect estimates and the extent to which they align with theory.

3. Data and Sample

Beginning in the 2010–2011 school year, the National Center for Teacher Effectiveness (NCTE) engaged in a three-year data collection process. Data came from participating fourth- and fifth-grade teachers (N = 310) in four anonymous, medium to large school districts on the East Coast of the United States who agreed to have their classes videotaped, complete a teacher questionnaire, and help collect a set of student outcomes. Teachers were clustered within 52 schools, with an average of six teachers per school. While NCTE focused on teachers’ math instruction, participants were generalists who taught all subject areas. This is important, as it allowed us to isolate the contribution of individual teachers to students’ attitudes and behaviors, which is considerably more challenging when students are taught by multiple teachers. It also suggests that the observation measures, which assessed teachers’ instruction during math lessons, are likely to capture aspects of their classroom practice that are common across content areas.

In Table 1 , we present descriptive statistics on participating teachers and their students. We do so for the full NCTE sample, as well as for a subsample of teachers whose students were in the project in both the current and prior years. This latter sample allowed us to capture prior measures of students’ attitudes and behaviors, a strategy that we use to increase internal validity and that we discuss in more detail below. 2 When we compare these samples, we find that teachers look relatively similar with no statistically significant differences on any observable characteristic. Reflecting national patterns, the vast majority of elementary teachers in our sample are white females who earned their teaching credential through traditional certification programs. (See Hill, Blazar, & Lynch, 2015 for a discussion of how these teacher characteristics were measured.)

Participant Demographics

                                                            Full      Attitudes and      p-Value on
                                                            Sample    Behaviors Sample   Difference
Teachers
  Male                                                      0.16      0.16               0.949
  African-American                                          0.22      0.22               0.972
  Asian                                                     0.03      0.00               0.087
  Hispanic                                                  0.03      0.03               0.904
  White                                                     0.65      0.66               0.829
  Mathematics Coursework (1 to 4 Likert scale)              2.58      2.55               0.697
  Mathematical Content Knowledge (standardized scale)       0.01      0.03               0.859
  Alternative Certification                                 0.08      0.08               0.884
  Teaching Experience (years)                               10.29     10.61              0.677
  Value Added on High-Stakes Math Test (standardized scale) 0.01      0.00               0.505
  Observations                                              310       111
Students
  Male                                                      0.50      0.49               0.371
  African American                                          0.40      0.40               0.421
  Asian                                                     0.08      0.07               0.640
  Hispanic                                                  0.23      0.20               0.003
  White                                                     0.24      0.28               <0.001
  FRPL                                                      0.64      0.59               0.000
  SPED                                                      0.11      0.09               0.008
  LEP                                                       0.20      0.14               <0.001
  Prior Score on High-Stakes Math Test (standardized scale) 0.10      0.18               <0.001
  Prior Score on High-Stakes ELA Test (standardized scale)  0.09      0.20               <0.001
  Observations                                              10,575    1,529

Students in our samples look similar to those in many urban districts in the United States, where roughly 68% are eligible for free or reduced-price lunch, 14% are classified as in need of special education services, and 16% are identified as limited English proficient; roughly 31% are African American, 39% are Hispanic, and 28% are white ( Council of the Great City Schools, 2013 ). We do observe some statistically significant differences between student characteristics in the full sample versus our analytic subsample. For example, the percentage of students identified as limited English proficient was 20% in the full sample compared to 14% in the sample of students who ever were part of analyses drawing on our survey measures. Although variation in samples could result in dissimilar estimates across models, the overall character of our findings is unlikely to be driven by these modest differences.

3.1. Students’ Attitudes and Behaviors

As part of the expansive data collection effort, researchers administered a student survey with items (N = 18) that were adapted from other large-scale surveys including the TRIPOD, the MET project, the National Assessment of Educational Progress (NAEP), and the Trends in International Mathematics and Science Study (TIMSS) (see Appendix Table 1 for a full list of items). Items were selected based on a review of the research literature and identification of constructs thought most likely to be influenced by upper-elementary teachers. Students rated all items on a five-point Likert scale where 1 = Totally Untrue and 5 = Totally True.

We identified a parsimonious set of three outcome measures based on a combination of theory and exploratory factor analyses (see Appendix Table 1 ). 3 The first outcome, which we call Self-Efficacy in Math (10 items), is a variation on well-known constructs related to students’ effort, initiative, and perception that they can complete tasks. The second related outcome measure is Happiness in Class (5 items), which was collected in the second and third years of the study. Exploratory factor analyses suggested that these items clustered together with those from Self-Efficacy in Math to form a single construct. However, post-hoc review of these items against the psychology literature from which they were derived suggests that they can be divided into a separate domain. As above, this measure is a school-specific version of well-known scales that capture students’ affect and enjoyment ( Diener, 2000 ). Both Self-Efficacy in Math and Happiness in Class have relatively high internal consistency reliabilities (0.76 and 0.82, respectively) that are similar to those of self-reported attitudes and behaviors explored in other studies ( Duckworth et al., 2007 ; John & Srivastava, 1999 ; Tsukayama et al., 2013 ). Further, self-reported measures of similar constructs have been linked to long-term outcomes, including academic engagement and earnings in adulthood, even conditioning on cognitive ability ( King, McInerney, Ganotice, & Villarosa, 2015 ; Lyubomirsky, King, & Diener, 2005 ).
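Internal consistency reliabilities like the 0.76 and 0.82 reported here are conventionally computed as Cronbach’s alpha. A minimal sketch of that computation, using made-up item responses rather than the study’s data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale; `items` is one list of scores per item."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # per-student totals
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical responses from five students to three 5-point Likert items:
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 3, 4, 1]]
print(round(cronbach_alpha(items), 2))  # -> 0.93
```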

The third and final construct, which we call Behavior in Class, consists of three items (internal consistency reliability is 0.74). Higher scores reflect better, less disruptive behavior. Teacher reports of students’ classroom behavior have been found to relate to antisocial behaviors in adolescence, criminal behavior in adulthood, and earnings ( Chetty et al., 2011 ; Segal, 2013 ; Moffitt et al., 2011 ; Tremblay et al., 1992 ). Our analysis differs from these other studies in the self-reported nature of the behavior outcome. That said, other studies also drawing on elementary school students found correlations between self-reported and either parent- or teacher-reported measures of behavior that were similar in magnitude to correlations between parent and teacher reports of student behavior ( Achenbach, McConaughy, & Howell, 1987 ; Goodman, 2001 ). Further, other studies have found correlations between teacher-reported behavior of elementary school students and either reading or math achievement ( r = 0.22 to 0.28; Miles & Stipek, 2006 ; Tremblay et al., 1992 ) similar to the correlation we find between students’ self-reported Behavior in Class and our two math test scores ( r = 0.24 and 0.26; see Table 2 ). Together, this evidence provides both convergent and consequential validity evidence for this outcome measure. For all three of these outcomes, we created final scales by reverse coding items with negative valence and averaging raw student responses across all available items. 4 We standardized these final scores within years, given that, for some measures, the set of survey items varied across years.
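The scale construction just described (reverse-code negatively worded items, average the raw responses, then standardize within year) can be sketched as follows; the item names and ratings are hypothetical, not the study’s actual survey items:

```python
from statistics import mean, pstdev

def score_scale(responses, negative_items, scale_max=5, scale_min=1):
    """Average one student's Likert ratings after reverse-coding negative items."""
    vals = [(scale_max + scale_min - r) if item in negative_items else r
            for item, r in responses.items()]
    return mean(vals)

def standardize(scores):
    """Z-score a list of final scale scores (apply separately within each year)."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical 3-item behavior scale with one negatively worded item:
student = {"follows_directions": 4, "disrupts_class": 2, "stays_on_task": 5}
print(score_scale(student, negative_items={"disrupts_class"}))  # (4 + 4 + 5) / 3
```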

Descriptive Statistics for Students' Academic Performance, Attitudes, and Behaviors

|  | Mean | SD | Internal Consistency Reliability | High-Stakes Math Test | Low-Stakes Math Test | Self-Efficacy in Math | Happiness in Class | Behavior in Class |
|---|---|---|---|---|---|---|---|---|
| High-Stakes Math Test | 0.10 | 0.91 | -- | 1.00 |  |  |  |  |
| Low-Stakes Math Test | 0.61 | 1.1 | 0.82 | 0.70 | 1.00 |  |  |  |
| Self-Efficacy in Math | 4.17 | 0.58 | 0.76 | 0.25 | 0.22 | 1.00 |  |  |
| Happiness in Class | 4.10 | 0.85 | 0.82 | 0.15 | 0.10 | 0.62 | 1.00 |  |
| Behavior in Class | 4.10 | 0.93 | 0.74 | 0.24 | 0.26 | 0.35 | 0.27 | 1.00 |

For high-stakes math test, reliability varies by district; thus, we report the lower bound of these estimates. Self-Efficacy in Math, Happiness in Class, and Behavior in Class are measured on a 1 to 5 Likert Scale. Statistics were generated from all available data.

3.2. Student Demographic and Test Score Information

Student demographic and achievement data came from district administrative records. Demographic data include gender, race/ethnicity, free- or reduced-price lunch (FRPL) eligibility, limited English proficiency (LEP) status, and special education (SPED) status. These records also included current- and prior-year test scores in math and English Language Arts (ELA) on state assessments, which we standardized within districts by grade, subject, and year using the entire sample of students.

The project also administered a low-stakes mathematics assessment to all students in the study. Internal consistency reliability is 0.82 or higher for each form across grade levels and school years ( Hickman, Fu, & Hill, 2012 ). We used this assessment in addition to high-stakes tests given that teacher effects on two outcomes that aim to capture similar underlying constructs (i.e., math achievement) provide a unique point of comparison when examining the relationship between teacher effects on student outcomes that are less closely related (i.e., math achievement versus attitudes and behaviors). Indeed, students’ high- and low-stakes math test scores are correlated more strongly ( r = 0.70) than any other two outcomes (see Table 1 ). 5

3.3. Mathematics Lessons

Teachers’ mathematics lessons were captured over a three-year period, with an average of three lessons per teacher per year. 6 Trained raters scored these lessons on two established observational instruments, the CLASS and the MQI. Analyses of these same data show that items cluster into four main factors ( Blazar et al., 2015 ). The two dimensions from the CLASS instrument capture general teaching practices: Emotional Support focuses on teachers’ interactions with students and the emotional environment in the classroom, and is thought to increase students’ social and emotional development; and Classroom Organization focuses on behavior management and productivity of the lesson, and is thought to improve students’ self-regulatory behaviors ( Pianta & Hamre, 2009 ). 7 The two dimensions from the MQI capture mathematics-specific practices: Ambitious Mathematics Instruction focuses on the complexity of the tasks that teachers provide to their students and their interactions around the content, thus corresponding to the set of professional standards described by NCTM (1989 , 2014 ) and many elements contained within the Common Core State Standards for Mathematics ( National Governors Association Center for Best Practices, 2010 ); Mathematical Errors identifies any mathematical errors or imprecisions the teacher introduces into the lesson. Both dimensions from the MQI are linked to teachers’ mathematical knowledge for teaching and, in turn, to students’ math achievement ( Blazar, 2015 ; Hill et al., 2008 ; Hill, Schilling, & Ball, 2004 ). Correlations between dimensions range from roughly 0 (between Emotional Support and Mathematical Errors ) to 0.46 (between Emotional Support and Classroom Organization ; see Table 3 ).

Descriptive Statistics for CLASS and MQI Dimensions

|  | Mean | SD | Adjusted Intraclass Correlation | Emotional Support | Classroom Organization | Ambitious Mathematics Instruction | Mathematical Errors |
|---|---|---|---|---|---|---|---|
| Emotional Support | 4.28 | 0.48 | 0.53 | 1.00 |  |  |  |
| Classroom Organization | 6.41 | 0.39 | 0.63 | 0.46 | 1.00 |  |  |
| Ambitious Mathematics Instruction | 1.27 | 0.11 | 0.74 | 0.22 | 0.23 | 1.00 |  |
| Mathematical Errors | 1.12 | 0.09 | 0.56 | 0.01 | 0.09 | −0.27 | 1.00 |

Intraclass correlations were adjusted for the modal number of lessons. CLASS items (from Emotional Support and Classroom Organization) were scored on a scale from 1 to 7. MQI items (from Ambitious Instruction and Errors) were scored on a scale from 1 to 3. Statistics were generated from all available data.

We estimated reliability for these metrics by calculating the amount of variance in teacher scores that is attributable to the teacher (the intraclass correlation [ICC]), adjusted for the modal number of lessons. These estimates are: 0.53, 0.63, 0.74, and 0.56 for Emotional Support, Classroom Organization, Ambitious Mathematics Instruction , and Mathematical Errors , respectively (see Table 3 ). Though some of these estimates are lower than conventionally acceptable levels (0.7), they are consistent with those generated from similar studies ( Kane & Staiger, 2012 ). We standardized scores within the full sample of teachers to have a mean of zero and a standard deviation of one.
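The adjustment for the modal number of lessons is the standard Spearman-Brown step applied to a single-lesson ICC: averaging over k lessons raises reliability because lesson-to-lesson noise averages out. A minimal sketch (the variance components are illustrative, not the study's):

```python
def single_lesson_icc(teacher_var: float, lesson_var: float) -> float:
    """Share of score variance attributable to the teacher, for a single lesson."""
    return teacher_var / (teacher_var + lesson_var)

def adjusted_icc(icc: float, k: int) -> float:
    """Spearman-Brown: reliability of a teacher score averaged over k lessons."""
    return (k * icc) / (1 + (k - 1) * icc)

# Hypothetical variance components for one observation dimension
icc1 = single_lesson_icc(teacher_var=0.20, lesson_var=0.55)
print(round(adjusted_icc(icc1, k=3), 2))
```

With these hypothetical components, a per-lesson ICC of about 0.27 rises to roughly 0.52 when scores are averaged over three lessons, which is in the range of the adjusted ICCs reported above.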

4. Empirical Strategy

4.1. Estimating Teacher Effects on Students’ Attitudes and Behaviors

Like others who aim to examine the contribution of individual teachers to student outcomes, we began by specifying an education production function model of each outcome for student i in district d , school s , grade g , class c with teacher j at time t :

OUTCOME_idsgjct = f(A_it−1) + βX_it + γX̄_itc + τ_dgt + µ_j + δ_jc + ε_idsgjct (1)

OUTCOME_idsgjct is used interchangeably for both math test scores and students’ attitudes and behaviors, which we modeled in separate equations as a cubic function, f(·), of students’ prior achievement, A_it−1 , in both math and ELA on the high-stakes district tests 8 ; demographic characteristics, X_it , including gender, race, FRPL eligibility, SPED status, and LEP status; these same test-score variables and demographic characteristics averaged to the class level, X̄_itc ; and district-by-grade-by-year fixed effects, τ_dgt , that account for scaling of the high-stakes tests. The residual portion of the model can be decomposed into a teacher effect, µ_j , which is our main parameter of interest and captures the contribution of teachers to student outcomes above and beyond factors already controlled for in the model; a class effect, δ_jc , which is estimated by observing teachers over multiple school years; and a student-specific error term, ε_idsgjct . 9

The key identifying assumption of this model is that teacher effect estimates are not biased by non-random sorting of students to teachers. Recent experimental ( Kane, McCaffrey, Miller, & Staiger, 2013 ) and quasi-experimental ( Chetty et al., 2014 ) analyses provide strong empirical support for this claim when student achievement is the outcome of interest. However, much less is known about bias and sorting mechanisms when other outcomes are used. For example, it is quite possible that students were sorted to teachers based on their classroom behavior in ways that were unrelated to their prior achievement. To address this possibility, we made two modifications to equation (1) . First, we included school fixed effects, ω_s , to account for sorting of students and teachers across schools. This means that estimates rely only on within-school variation, which has been common practice in the literature estimating teacher effects on student achievement. In their review of this literature, Hanushek and Rivkin (2010) propose ignoring the between-school component because it is “surprisingly small” and because including this component leads to “potential sorting, testing, and other interpretative problems” (p. 268). Other recent studies estimating teacher effects on student outcomes beyond test scores have used this same approach ( Backes & Hansen, 2015 ; Gershenson, 2016 ; Jackson, 2012 ; Jennings & DiPrete, 2010 ; Ladd & Sorensen, 2015 ; Ruzek et al., 2015 ). Another important benefit of using school fixed effects is that this approach minimizes the possibility of reference bias in our self-reported measures ( West et al., 2016 ; Duckworth & Yeager, 2015 ). Differences in school-wide norms around behavior and effort may change the implicit standard of comparison (i.e., reference group) that students use to judge their own behavior and effort.

Restricting comparisons to other teachers and students within the same school minimizes this concern. As a second modification for models that predict each of our three student survey measures, we included OUTCOME it −1 on the right-hand side of the equation in addition to prior achievement – that is, when predicting students’ Behavior in Class , we controlled for students’ self-reported Behavior in Class in the prior year. 10 This strategy helps account for within-school sorting on factors other than prior achievement.

Using equation (1) , we estimated the variance of µ j , which is the stable component of teacher effects. We report the standard deviation of these estimates across outcomes. This parameter captures the magnitude of the variability of teacher effects. With the exception of teacher effects on students’ Happiness in Class , where survey items were not available in the first year of the study, we included δ jc in order to separate out the time-varying portion of teacher effects, combined with peer effects and any other class-level shocks. The fact that we are able to separate class effects from teacher effects is an important extension of prior studies examining teacher effects on outcomes beyond test scores, many of which only observed teachers at one point in time.

Following Chetty et al. (2011) , we estimated the magnitude of the variance of teacher effects using a direct, model-based estimate derived via restricted maximum likelihood estimation. This approach produces a consistent estimator for the true variance of teacher effects ( Raudenbush & Bryk, 2002 ). Calculating the variation across individual teacher effect estimates using Ordinary Least Squares regression would bias our variance estimates upward because it would conflate true variation with estimation error, particularly in instances where only a handful of students are attached to each teacher. Alternatively, estimating the variation in post-hoc predicted “shrunken” empirical Bayes estimates would bias our variance estimate downward, in proportion to the amount of measurement error (Jacob & Lefgren, 2005).
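The upward bias described above can be seen directly: with n students per teacher, the variance of raw teacher means is roughly σ²_teacher + σ²_ε/n, conflating true variation with estimation error. A simulation sketch using a method-of-moments correction (a simpler analogue of, not the REML estimator the study uses):

```python
import numpy as np

rng = np.random.default_rng(42)
n_teachers, n_students = 300, 20
true_sd_teacher, sd_student = 0.15, 1.0

# Simulate outcomes: teacher effect plus student-level noise
teacher_effects = rng.normal(0.0, true_sd_teacher, size=n_teachers)
outcomes = teacher_effects[:, None] + rng.normal(0.0, sd_student, size=(n_teachers, n_students))

# Naive: variance across raw teacher means (signal plus estimation error)
naive_var = outcomes.mean(axis=1).var(ddof=1)

# Moment correction: subtract the estimation-error variance, sigma^2 / n
error_var = outcomes.var(axis=1, ddof=1).mean() / n_students
corrected_var = max(naive_var - error_var, 0.0)

print(round(naive_var ** 0.5, 3), round(corrected_var ** 0.5, 3))
```

The naive SD comes out well above the true 0.15 while the corrected SD recovers it approximately, illustrating why a direct model-based variance estimate is preferable to the spread of individual OLS estimates.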

4.2. Estimating Teaching Effects on Students’ Attitudes and Behaviors

We examined the contribution of teachers’ classroom practices to our set of student outcomes by estimating a variation of equation (1) :

OUTCOME_idsgjct = f(A_it−1) + βX_it + γX̄_itc + τ_dgt + λOBSERVATION̂_j,−t + µ_j + δ_jc + ε_idsgjct (2)

This multi-level model includes the same set of control variables as above in order to account for the non-random sorting of students to teachers and for factors beyond teachers’ control that might influence each of our outcomes. We further included a vector of teacher j ’s observation scores, OBSERVATION̂_j,−t . The coefficients on these variables, λ, are our main parameters of interest and can be interpreted as the change in standard deviation units for each outcome associated with exposure to teaching practice one standard deviation above the mean.

One concern when relating observation scores to student survey outcomes is that they may capture the same behaviors. For example, teachers may receive credit on the Classroom Organization domain when their students demonstrate orderly behavior. In this case, we would have the same observed behaviors on both the left and right side of our equation relating instructional quality to student outcomes, which would inflate our teaching effect estimates. A related concern is that the specific students in the classroom may influence teachers’ instructional quality ( Hill et al., 2015 ; Steinberg & Garrett, 2016 ; Whitehurst, Chingos, & Lindquist, 2014 ). While the direction of bias is not as clear here – as either lesser- or higher-quality teachers could be sorted to harder to educate classrooms – this possibility also could lead to incorrect estimates. To avoid these sources of bias, we only included lessons captured in years other than those in which student outcomes were measured, denoted by −t in the subscript of OBSERVATION̂_j,−t . To the extent that instructional quality varies across years, using out-of-year observation scores creates a lower-bound estimate of the true relationship between instructional quality and student outcomes. We consider this an important tradeoff to minimize potential bias. We used predicted shrunken observation score estimates that account for the fact that teachers contributed different numbers of lessons to the project, and fewer lessons could lead to measurement error in these scores ( Hill, Charalambous, & Kraft, 2012 ). 11

An additional concern for identification is the endogeneity of observed classroom quality. In other words, specific teaching practices are not randomly assigned to teachers. Our preferred analytic approach attempted to account for potential sources of bias by conditioning estimates of the relationship between one dimension of teaching practice and student outcomes on the three other dimensions. An important caveat here is that we only observed teachers’ instruction during math lessons and, thus, may not capture important pedagogical practices teachers used with these students when teaching other subjects. Including dimensions from the CLASS instrument, which are meant to capture instructional quality across subject areas ( Pianta & Hamre, 2009 ), helps account for some of this concern. However, given that we were not able to isolate one dimension of teaching quality from all others, we consider this approach as providing suggestive rather than conclusive evidence on the underlying causal relationship between teaching practice and students’ attitudes and behaviors.

4.3. Estimating the Relationship Between Teacher Effects Across Multiple Student Outcomes

In our third and final set of analyses, we examined whether teachers who are effective at raising math test scores are equally effective at developing students’ attitudes and behaviors. To do so, we drew on equation (1) to estimate µ̂_j for each outcome and teacher j . Following Chetty et al. (2014) , we used post-hoc predicted “shrunken” empirical Bayes estimates of µ̂_j derived from equation (1) . Then, we generated a correlation matrix of these teacher effect estimates.
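Empirical Bayes "shrinkage" scales each teacher's raw estimate toward the grand mean by its reliability, so noisier estimates (fewer students) are pulled harder toward zero. A minimal sketch with hypothetical variance components:

```python
def eb_shrink(raw_estimate: float, teacher_var: float, error_var: float, n_students: int) -> float:
    """Shrink a raw teacher effect toward the grand mean (0) by its reliability.

    reliability = signal variance / (signal variance + estimation-error variance),
    where the estimation-error variance falls with the number of students observed.
    """
    reliability = teacher_var / (teacher_var + error_var / n_students)
    return reliability * raw_estimate

# Hypothetical numbers: the same raw estimate, shrunk differently by class size
print(round(eb_shrink(0.30, teacher_var=0.02, error_var=0.80, n_students=40), 3))
print(round(eb_shrink(0.30, teacher_var=0.02, error_var=0.80, n_students=5), 3))
```

The second call shrinks far more aggressively, which is also why (as noted in Section 4.1) the spread of shrunken estimates understates the true variance of teacher effects.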

Despite attempts to increase the precision of these estimates through empirical Bayes estimation, estimates of individual teacher effects are measured with error that will attenuate these correlations ( Spearman, 1904 ). Thus, if we were to find weak to moderate correlations between different measures of teacher effectiveness, this could indicate multidimensionality or could result from measurement challenges, including the reliability of individual constructs ( Chin & Goldhaber, 2015 ). For example, prior research suggests that different tests of students’ academic performance can lead to different teacher rankings, even when those tests measure similar underlying constructs ( Lockwood et al., 2007 ; Papay, 2011 ). To address this concern, we focus our discussion on relative rankings in correlations between teacher effect estimates rather than their absolute magnitudes. Specifically, we examine how correlations between teacher effects on two closely related outcomes (e.g., two math achievement tests) compare with correlations between teacher effects on outcomes that aim to capture different underlying constructs. In light of research highlighted above, we did not expect the correlation between teacher effects on the two math tests to be 1 (or, for that matter, close to 1). However, we hypothesized that these relationships should be stronger than the relationship between teacher effects on students’ math performance and effects on their attitudes and behaviors.

5.1. Do Teachers Impact Students’ Attitudes and Behaviors?

We begin by presenting results of the magnitude of teacher effects in Table 4 . Here, we observe sizable teacher effects on students’ attitudes and behaviors that are similar to teacher effects on students’ academic performance. Starting first with teacher effects on students’ academic performance, we find that a one standard deviation difference in teacher effectiveness is equivalent to a 0.17 sd or 0.18 sd difference in students’ math achievement. In other words, relative to an average teacher, teachers at the 84th percentile of the distribution of effectiveness move the median student up to roughly the 57th percentile of math achievement. Notably, these findings are similar to those from other studies that also estimate within-school teacher effects in large administrative datasets ( Hanushek & Rivkin, 2010 ). This suggests that our use of school fixed effects with a more limited number of teachers observed within a given school does not appear to overly restrict our identifying variation. In Online Appendix A , where we present the magnitude of teacher effects from alternative model specifications, we show that results are robust to models that exclude school fixed effects or replace school fixed effects with observable school characteristics. Estimated teacher effects on students’ self-reported Self-Efficacy in Math and Behavior in Class are 0.14 sd and 0.15 sd, respectively. The largest teacher effects we observe are on students’ Happiness in Class , of 0.31 sd. Given that we do not have multiple years of data to separate out class effects for this measure, we interpret this estimate as the upper bound of true teacher effects on Happiness in Class. Rescaling this estimate by the ratio of teacher effects with and without class effects for Self-Efficacy in Math (0.14/0.19 = 0.74; see Online Appendix A ) produces an estimate of stable teacher effects on Happiness in Class of 0.23 sd, still larger than effects for other outcomes.
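The percentile translation above follows from the standard normal CDF: a median student who gains 0.17-0.18 sd lands at roughly Φ(0.18) of the distribution. A quick check using only the standard library:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# A teacher one SD more effective than average raises the median student
# by ~0.17-0.18 sd; express the result as a percentile
for effect in (0.17, 0.18):
    print(round(100 * normal_cdf(effect)))   # -> 57 in both cases
```

The same transformation gives the "84th percentile" teacher: one standard deviation above the mean of the teacher effectiveness distribution, since Φ(1) ≈ 0.84.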

Teacher Effects on Students' Academic Performance, Attitudes, and Behaviors

|  | Teachers | Students | SD of Teacher-Level Variance |
|---|---|---|---|
| High-Stakes Math Test | 310 | 10,575 | 0.18 |
| Low-Stakes Math Test | 310 | 10,575 | 0.17 |
| Self-Efficacy in Math | 108 | 1,433 | 0.14 |
| Happiness in Class | 51 | 548 | 0.31 |
| Behavior in Class | 111 | 1,529 | 0.15 |

Notes: Cells contain estimates from separate multi-level regression models.

All effects are statistically significant at the 0.05 level.

5.2. Do Specific Teaching Practices Impact Students’ Attitudes and Behaviors?

Next, we examine whether certain characteristics of teachers’ instructional practice help explain the sizable teacher effects described above. We present unconditional estimates in Table 5 Panel A, where the relationship between one dimension of teaching practice and student outcomes is estimated without controlling for the other three dimensions. Thus, cells contain estimates from separate regression models. In Panel B, we present conditional estimates, where all four dimensions of teaching quality are included in the same regression model. Here, columns contain estimates from separate regression models. We present all estimates as standardized effect sizes, which allows us to make comparisons across models and outcome measures. Unconditional and conditional estimates generally are quite similar. Therefore, we focus our discussion on our preferred conditional estimates.
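The difference between unconditional and conditional estimates can be illustrated with a toy regression: when practice dimensions are correlated (as Emotional Support and Classroom Organization are here, r ≈ 0.46), the coefficient on one dimension can change once the others enter the model. A sketch with simulated data (not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Two correlated, standardized teaching-practice scores
emotional_support = rng.normal(size=n)
classroom_org = 0.5 * emotional_support + rng.normal(scale=0.87, size=n)

# Outcome loads positively on one dimension and negatively on the other
outcome = 0.3 * emotional_support - 0.2 * classroom_org + rng.normal(size=n)

def ols_coefs(predictors, y):
    """OLS slopes (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

unconditional = ols_coefs([classroom_org], outcome)[0]                      # Panel A style
conditional = ols_coefs([classroom_org, emotional_support], outcome)[0]     # Panel B style
print(round(unconditional, 2), round(conditional, 2))
```

In this simulation the unconditional coefficient on the organization score is pulled toward zero because it absorbs part of the positive emotional-support effect, mirroring the sensitivity of the Classroom Organization estimate for Happiness in Class discussed below.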

Teaching Effects on Students' Academic Performance, Attitudes, and Behaviors

|  | High-Stakes Math Test | Low-Stakes Math Test | Self-Efficacy in Math | Happiness in Class | Behavior in Class |
|---|---|---|---|---|---|
| Panel A: Unconditional Estimates |  |  |  |  |  |
| Emotional Support | 0.012 (0.013) | 0.018 (0.014) | 0.142 (0.031) | 0.279 (0.082) | 0.039 (0.027) |
| Classroom Organization | −0.017 (0.014) | −0.010 (0.014) | 0.065 (0.038) | 0.001 (0.090) | 0.081 (0.033) |
| Ambitious Mathematics Instruction | 0.017 (0.015) | 0.021 (0.015) | 0.077 (0.036) | 0.082 (0.068) | 0.004 (0.032) |
| Mathematical Errors | −0.027 (0.013) | −0.009 (0.014) | −0.107 (0.030) | −0.164 (0.076) | −0.027 (0.027) |
| Panel B: Conditional Estimates |  |  |  |  |  |
| Emotional Support | 0.015 (0.014) | 0.020 (0.015) | 0.135 (0.034) | 0.368 (0.090) | 0.030 (0.030) |
| Classroom Organization | −0.022 (0.014) | −0.018 (0.015) | −0.020 (0.042) | −0.227 (0.096) | 0.077 (0.036) |
| Ambitious Mathematics Instruction | 0.014 (0.015) | 0.019 (0.016) | −0.006 (0.040) | 0.079 (0.068) | −0.034 (0.036) |
| Mathematical Errors | −0.024 (0.013) | −0.005 (0.014) | −0.094 (0.033) | −0.181 (0.081) | −0.009 (0.029) |
| Teacher Observations | 196 | 196 | 90 | 47 | 93 |
| Student Observations | 8,660 | 8,660 | 1,275 | 517 | 1,362 |

In Panel A, cells contain estimates from separate regression models. In Panel B, columns contain estimates from separate regression models, where estimates are conditioned on other teaching practices. All models control for student and class characteristics, school fixed effects, and district-by-grade-by-year fixed effects, and include teacher random effects. Models predicting all outcomes except for Happiness in Class also include class random effects.

We find that students’ attitudes and behaviors are predicted by both general and content-specific teaching practices in ways that generally align with theory. For example, teachers’ Emotional Support is positively associated with the two closely related student constructs, Self-Efficacy in Math and Happiness in Class . Specifically, a one standard deviation increase in teachers’ Emotional Support is associated with a 0.14 sd increase in students’ Self-Efficacy in Math and a 0.37 sd increase in students’ Happiness in Class . These findings make sense given that Emotional Support captures teacher behaviors such as their sensitivity to students, regard for students’ perspective, and the extent to which they create a positive climate in the classroom. As a point of comparison, these estimates are substantively larger than those between principal ratings of teachers’ ability to improve test scores and their actual ability to do so, which fall in the range of 0.02 sd and 0.08 sd ( Jacob & Lefgren, 2008 ; Rockoff, Staiger, Kane, & Taylor, 2012 ; Rockoff & Speroni, 2010 ).

We also find that Classroom Organization , which captures teachers’ behavior management skills and productivity in delivering content, is positively related to students’ reports of their own Behavior in Class (0.08 sd). This suggests that teachers who create an orderly classroom likely create a model for students’ own ability to self-regulate. Despite this positive relationship, we find that Classroom Organization is negatively associated with Happiness in Class (−0.23 sd), suggesting that classrooms overly focused on routines and management may come at a cost to students’ enjoyment in class. At the same time, this is one instance where our estimate is sensitive to whether or not other teaching characteristics are included in the model. When we estimate the relationship between teachers’ Classroom Organization and students’ Happiness in Class without controlling for the three other dimensions of teaching quality, this estimate approaches 0 and is no longer statistically significant. 12 We return to a discussion of the potential tradeoffs between Classroom Organization and students’ Happiness in Class in our conclusion.

Finally, we find that the degree to which teachers commit Mathematical Errors is negatively related to students’ Self-Efficacy in Math (−0.09 sd) and Happiness in Class (−0.18 sd). These findings illuminate how a teacher’s ability to present mathematics with clarity and without serious mistakes is related to their students’ perceptions that they can complete math tasks and their enjoyment in class.

Comparatively, when predicting scores on both math tests, we only find one marginally significant relationship – between Mathematical Errors and the high-stakes math test (−0.02 sd). For two other dimensions of teaching quality, Emotional Support and Ambitious Mathematics Instruction , estimates are signed the way we would expect and with similar magnitudes, though they are not statistically significant. Given the consistency of estimates across the two math tests and our restricted sample size, it is possible that non-significant results are due to limited statistical power. 13 At the same time, even if true relationships exist between these teaching practices and students’ math test scores, they likely are weaker than those between teaching practices and students’ attitudes and behaviors. For example, we find that the 95% confidence intervals relating Emotional Support to Self-Efficacy in Math [0.068, 0.202] and Happiness in Class [0.162, 0.544] do not overlap with the 95% confidence intervals for any of the point estimates predicting math test scores. We interpret these results as an indication that very little still is known about how specific classroom teaching practices are related to students’ math achievement. 14

In Online Appendix B , we show that results are robust to a variety of different specifications, including (1) adjusting observation scores for characteristics of students in the classroom, (2) controlling for teacher background characteristics (i.e., teaching experience, math content knowledge, certification pathway, education), and (3) using raw out-of-year observation scores (rather than shrunken scores). This suggests that our approach likely accounts for many potential sources of bias in our teaching effect estimates.

5.3. Are Teachers Equally Effective at Raising Different Student Outcomes?

In Table 6 , we present correlations between teacher effects on each of our student outcomes. The fact that teacher effects are measured with error makes it difficult to estimate the precise magnitude of these correlations. Instead, we describe relative differences in correlations, focusing on the extent to which teacher effects within outcome type – i.e., teacher effects on the two math achievement tests or effects on students’ attitudes and behaviors – are similar or different from correlations between teacher effects across outcome type. We illustrate these differences in Figure 1 , where Panel A presents scatter plots of these relationships between teacher effects within outcome type and Panel B does the same across outcome type. Recognizing that not all of our survey outcomes are meant to capture the same underlying construct, we also describe relative differences in correlations between teacher effects on these different measures. In Online Appendix C , we find that an extremely conservative adjustment that scales correlations by the inverse of the square root of the product of the reliabilities leads to a similar overall pattern of results.
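The adjustment described above is the classical correction for attenuation (Spearman, 1904): divide the observed correlation by the square root of the product of the two reliabilities. A sketch with hypothetical inputs:

```python
def disattenuate(r_observed: float, reliability_x: float, reliability_y: float) -> float:
    """Correct an observed correlation for measurement error in both measures."""
    return r_observed / (reliability_x * reliability_y) ** 0.5

# Hypothetical: an observed r = 0.49 between two teacher-effect estimates,
# each measured with reliability 0.7
print(round(disattenuate(0.49, 0.7, 0.7), 2))
```

Because the correction divides every correlation by a factor below one, it inflates all estimates proportionally, which is why the relative pattern of correlations (within versus across outcome type) is preserved.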

Figure 1. Scatter plots of teacher effects across outcomes. Solid lines represent the best-fit regression line.

Correlations Between Teacher Effects on Students' Academic Performance, Attitudes, and Behaviors

|  | High-Stakes Math Test | Low-Stakes Math Test | Self-Efficacy in Math | Happiness in Class | Behavior in Class |
|---|---|---|---|---|---|
| High-Stakes Math Test | 1.00 |  |  |  |  |
| Low-Stakes Math Test | 0.64 (0.04) | 1.00 |  |  |  |
| Self-Efficacy in Math | 0.16 (0.10) | 0.19 (0.10) | 1.00 |  |  |
| Happiness in Class | −0.09 (0.14) | −0.21 (0.14) | 0.26~ (0.14) | 1.00 |  |
| Behavior in Class | 0.10 (0.10) | 0.12 (0.10) | 0.49 (0.08) | 0.21 (0.14) | 1.00 |

Standard errors in parentheses. See Table 4 for sample sizes used to calculate teacher effect estimates. The sample for each correlation is the minimum number of teachers between the two measures.

Examining the correlations of teacher effect estimates reveals that individual teachers vary considerably in their ability to impact different student outcomes. As hypothesized, we find the strongest correlations between teacher effects within outcome type. Similar to Corcoran, Jennings, and Beveridge (2012) , we estimate a correlation of 0.64 between teacher effects on our high- and low-stakes math achievement tests. We also observe a strong correlation of 0.49 between teacher effects on two of the student survey measures, students’ Behavior in Class and Self-Efficacy in Math . Comparatively, the correlations between teacher effects across outcome type are much weaker. Examining the scatter plots in Figure 1 , we observe much more dispersion around the best-fit line in Panel B than in Panel A. The strongest relationship we observe across outcome types is between teacher effects on the low-stakes math test and effects on Self-Efficacy in Math ( r = 0.19). The lower bound of the 95% confidence interval around the correlation between teacher effects on the two achievement measures [0.56, 0.72] does not overlap with the 95% confidence interval of the correlation between teacher effects on the low-stakes math test and effects on Self-Efficacy in Math [−0.01, 0.39], indicating that these two correlations are substantively and statistically significantly different from each other. Using this same approach, we also can distinguish the correlation describing the relationship between teacher effects on the two math tests from all other correlations relating teacher effects on test scores to effects on students’ attitudes and behaviors. We caution against placing too much emphasis on the negative correlations between teacher effects on test scores and effects on Happiness in Class ( r = −0.09 and −0.21 for the high- and low-stakes tests, respectively). 
Given the limited precision of these estimates, we cannot reject the null hypothesis of no relationship, nor rule out weak positive or negative correlations among these measures.
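Confidence intervals for correlations of this kind can be reproduced approximately with Fisher's z transformation. The sketch below assumes n = 310 teachers and r = 0.64, so it will not exactly match intervals built from the reported standard errors:

```python
import math

def fisher_ci(r: float, n: int, z_crit: float = 1.96):
    """Approximate 95% CI for a correlation via Fisher's z transformation."""
    z = math.atanh(r)               # transform to an approximately normal scale
    se = 1.0 / math.sqrt(n - 3)     # standard error of z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

low, high = fisher_ci(0.64, n=310)
print(round(low, 2), round(high, 2))   # roughly (0.57, 0.70)
```

The interval is asymmetric around r on the correlation scale because the transformation stretches values near ±1, which matters most for strong correlations like the one between the two math tests.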

Although it is useful to make comparisons between the strength of the relationships between teacher effects on different measures of students’ attitudes and behaviors, measurement error limits our ability to do so precisely. At face value, we find correlations between teacher effects on Happiness in Class and effects on the two other survey measures ( r = 0.26 for Self-Efficacy in Math and 0.21 for Behavior in Class ) that are weaker than the correlation between teacher effects on Self-Efficacy in Math and effects on Behavior in Class described above ( r = 0.49). One possible interpretation of these findings is that teachers who improve students’ Happiness in Class are not equally effective at raising other attitudes and behaviors. For example, teachers might make students happy in class in unconstructive ways that do not also benefit their self-efficacy or behavior. At the same time, these correlations between teacher effects on Happiness in Class and the other two survey measures have large confidence intervals, likely due to imprecision in our estimate of teacher effects on Happiness in Class . Thus, we are not able to distinguish either correlation from the correlation between teacher effects on Behavior in Class and effects on Self-Efficacy in Math .

6. Discussion and Conclusion

6.1. Relationship Between Our Findings and Prior Research

The teacher effectiveness literature has profoundly shaped education policy over the last decade and has served as the catalyst for sweeping reforms around teacher recruitment, evaluation, development, and retention. However, by and large, this literature has focused on teachers’ contribution to students’ test scores. Even research studies such as the Measures of Effective Teaching project and new teacher evaluation systems that focus on “multiple measures” of teacher effectiveness ( Center on Great Teachers and Leaders, 2013 ; Kane et al., 2013 ) generally attempt to validate other measures, such as observations of teaching practice, by examining their relationship to estimates of teacher effects on students’ academic performance.

Our study extends an emerging body of research examining the effect of teachers on student outcomes beyond test scores. In many ways, our findings align with conclusions drawn from previous studies that also identify teacher effects on students’ attitudes and behaviors ( Jennings & DiPrete, 2010 ; Kraft & Grace, 2016 ; Ruzek et al., 2015 ), as well as weak relationships between different measures of teacher effectiveness ( Gershenson, 2016 ; Jackson, 2012 ; Kane & Staiger, 2012 ). To our knowledge, this study is the first to identify teacher effects on measures of students’ self-efficacy in math and happiness in class, as well as on a self-reported measure of student behavior. These findings suggest that teachers can and do help develop attitudes and behaviors among their students that are important for success in life. By interpreting teacher effects alongside teaching effects, we also provide strong face and construct validity for our teacher effect estimates. We find that improvements in upper-elementary students’ attitudes and behaviors are predicted by general teaching practices in ways that align with hypotheses laid out by instrument developers ( Pianta & Hamre, 2009 ). Findings linking errors in teachers’ presentation of math content to students’ self-efficacy in math, in addition to their math performance, also are consistent with theory ( Bandura et al., 1996 ). Finally, the broad data collection effort from NCTE allows us to examine relative differences in relationships between measures of teacher effectiveness, thus avoiding some concerns about how best to interpret correlations that differ substantively across studies ( Chin & Goldhaber, 2015 ). We find that correlations between teacher effects on student outcomes that aim to capture different underlying constructs (e.g., math test scores and behavior in class) are weaker than correlations between teacher effects on two outcomes that are much more closely related (e.g., math achievement).

6.2. Implications for Policy

These findings can inform policy in several key ways. First, our findings may contribute to the recent push to incorporate measures of students’ attitudes and behaviors – and teachers’ ability to improve these outcomes – into accountability policy (see Duckworth, 2016 ; Miller, 2015 ; Zernike, 2016 for discussion of these efforts in the press). After passage of the Every Student Succeeds Act (ESSA), states now are required to select a nonacademic indicator with which to assess students’ success in school ( ESSA, 2015 ). Including measures of students’ attitudes and behaviors in accountability or evaluation systems, even with very small associated weights, could serve as a strong signal that schools and educators should value and attend to developing these skills in the classroom.

At the same time, like other researchers ( Duckworth & Yeager, 2015 ), we caution against a rush to incorporate these measures into high-stakes decisions. The science of measuring students’ attitudes and behaviors is relatively new compared to the long history of developing valid and reliable assessments of cognitive aptitude and content knowledge. Most existing measures, including those used in this study, were developed for research purposes rather than large-scale testing with repeated administrations. Open questions remain about whether reference bias substantially distorts comparisons across schools. Similar to previous studies, we include school fixed effects in all of our models, which helps reduce this and other potential sources of bias. However, as a result, our estimates are restricted to within-school comparisons of teachers and cannot be applied to inform the type of across-school comparisons that districts typically seek to make. There also are outstanding questions regarding the susceptibility of these measures to “survey” coaching when high-stakes incentives are attached. Such incentives likely would render teacher or self-assessments of students’ attitudes and behaviors inappropriate. Some researchers have started to explore other ways to capture students’ attitudes and behaviors, including objective performance-based tasks and administrative proxies such as attendance, suspensions, and participation in extracurricular activities ( Hitt, Trivitt, & Cheng, 2016 ; Jackson, 2012 ; Whitehurst, 2016 ). This line of research shows promise but still is in its early phases. Further, although our modeling strategy aims to reduce bias due to non-random sorting of students to teachers, additional evidence is needed to assess the validity of this approach. 
Without first addressing these concerns, we believe that adding untested measures into accountability systems could lead to superficial and, ultimately, counterproductive efforts to support the positive development of students’ attitudes and behaviors.

An alternative approach to incorporating teacher effects on students’ attitudes and behaviors into teacher evaluation may be through observations of teaching practice. Our findings suggest that specific domains captured on classroom observation instruments (i.e., Emotional Support and Classroom Organization from the CLASS and Mathematical Errors from the MQI) may serve as indirect measures of the degree to which teachers impact students’ attitudes and behaviors. One benefit of this approach is that districts commonly collect related measures as part of teacher evaluation systems ( Center on Great Teachers and Leaders, 2013 ), and such measures are not restricted to teachers who work in tested grades and subjects.

Similar to Whitehurst (2016) , we also see alternative uses of teacher effects on students’ attitudes and behaviors that fall within and would enhance existing school practices. In particular, measures of teachers’ effectiveness at improving students’ attitudes and behaviors could be used to identify areas for professional growth and connect teachers with targeted professional development. This suggestion is not new and, in fact, builds on the vision and purpose of teacher evaluation described by many other researchers ( Darling-Hammond, 2013 ; Hill & Grossman, 2013 ; Papay, 2012 ). However, in order to leverage these measures for instructional improvement, we add an important caveat: performance evaluations – whether formative or summative – should avoid placing teachers into a single performance category whenever possible. Although many researchers and policymakers argue for creating a single weighted composite of different measures of teachers’ effectiveness ( Center on Great Teachers and Leaders, 2013 ; Kane et al., 2013 ), doing so likely oversimplifies the complex nature of teaching. For example, a teacher who excels at developing students’ math content knowledge but struggles to promote joy in learning or students’ own self-efficacy in math is a very different teacher than one who is middling across all three measures. Looking at these two teachers’ composite scores would suggest they are similarly effective. A single overall evaluation score lends itself to a systematized process for making binary decisions such as whether to grant teachers tenure, but such decisions would be better informed by recognizing and considering the full complexity of classroom practice.

We also see opportunities to maximize students’ exposure to the range of teaching skills we examine through strategic teacher assignments. Creating a teacher workforce skilled in most or all areas of teaching practice is, in our view, the ultimate goal. However, this goal likely will require substantial changes to teacher preparation programs and curriculum materials, as well as new policies around teacher recruitment, evaluation, and development. In middle and high schools, content-area specialization or departmentalization often is used to ensure that students have access to teachers with skills in distinct content areas. Some, including the National Association of Elementary School Principals, also see this as a viable strategy at the elementary level ( Chan & Jarman, 2004 ). Similar approaches may be taken to expose students to a collection of teachers who together can develop a range of academic skills, attitudes and behaviors. For example, when configuring grade-level teams, principals may pair a math teacher who excels in her ability to improve students’ behavior with an ELA or reading teacher who excels in his ability to improve students’ happiness and engagement. Viewing teachers as complements to each other may help maximize outcomes within existing resource constraints.

Finally, we consider the implications of our findings for the teaching profession more broadly. While our findings lend empirical support to research on the multidimensional nature of teaching ( Cohen, 2011 ; Lampert, 2001 ; Pianta & Hamre, 2009 ), we also identify tensions inherent in this sort of complexity and potential tradeoffs between some teaching practices. In our primary analyses, we find that high-quality instruction around classroom organization is positively related to students’ self-reported behavior in class but negatively related to their happiness in class. Our results here are not conclusive, as the negative relationship between classroom organization and students’ happiness in class is sensitive to model specification. However, if there indeed is a negative causal relationship, it raises questions about the relative benefits of fostering orderly classroom environments for learning versus supporting student engagement by promoting positive experiences with schooling. Our own experience as educators and researchers suggests this need not be a fixed tradeoff. Future research should examine ways in which teachers can develop classroom environments that engender both constructive classroom behavior and students’ happiness in class. As our study draws on a small sample of students who had current and prior-year scores for Happiness in Class , we also encourage new studies with greater statistical power that may be able to uncover additional complexities (e.g., non-linear relationships) in these sorts of data.

Our findings also demonstrate a need to integrate general and more content-specific perspectives on teaching, a historical challenge in both research and practice ( Grossman & McDonald, 2008 ; Hamre et al., 2013 ). We find that both math-specific and general teaching practices predict a range of student outcomes. Yet, particularly at the elementary level, teachers’ math training often is overlooked. Prospective elementary teachers often gain licensure without taking college-level math classes; in many states, they do not need to pass the math sub-section of their licensure exam in order to earn a passing grade overall ( Epstein & Miller, 2011 ). Striking the right balance between general and content-specific teaching practices is not a trivial task, but it likely is a necessary one.

For decades, efforts to improve the quality of the teacher workforce have focused on teachers’ abilities to raise students’ academic achievement. Our work further illustrates the potential and importance of expanding this focus to include teachers’ abilities to promote students’ attitudes and behaviors that are equally important for students’ long-term success.

Supplementary Material

Acknowledgments

The research reported here was supported in part by the Institute of Education Sciences, U.S. Department of Education, through Grant R305C090023 to the President and Fellows of Harvard College to support the National Center for Teacher Effectiveness. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education. Additional support came from the William T. Grant Foundation, the Albert Shanker Institute, and Mathematica Policy Research’s summer fellowship.

Appendix Table 1

Factor Loadings for Items from the Student Survey

| | Year 1, Factor 1 | Year 1, Factor 2 | Year 2, Factor 1 | Year 2, Factor 2 | Year 3, Factor 1 | Year 3, Factor 2 |
|---|---|---|---|---|---|---|
| Eigenvalue | 2.13 | 0.78 | 4.84 | 1.33 | 5.44 | 1.26 |
| Proportion of Variance Explained | 0.92 | 0.34 | 0.79 | 0.22 | 0.82 | 0.19 |
| Self-Efficacy in Math | | | | | | |
| I have pushed myself hard to completely understand math in this class | 0.32 | 0.18 | 0.43 | 0.00 | 0.44 | −0.03 |
| If I need help with math, I make sure that someone gives me the help I need. | 0.34 | 0.25 | 0.42 | 0.09 | 0.49 | 0.01 |
| If a math problem is hard to solve, I often give up before I solve it. | −0.46 | 0.01 | −0.38 | 0.28 | −0.42 | 0.25 |
| Doing homework problems helps me get better at doing math. | 0.30 | 0.31 | 0.54 | 0.24 | 0.52 | 0.18 |
| In this class, math is too hard. | −0.39 | −0.03 | −0.38 | 0.22 | −0.42 | 0.16 |
| Even when math is hard, I know I can learn it. | 0.47 | 0.35 | 0.56 | 0.05 | 0.64 | 0.02 |
| I can do almost all the math in this class if I don't give up. | 0.45 | 0.35 | 0.51 | 0.05 | 0.60 | 0.05 |
| I'm certain I can master the math skills taught in this class. | | | 0.53 | 0.01 | 0.56 | 0.03 |
| When doing work for this math class, focus on learning not time work takes. | | | 0.58 | 0.09 | 0.62 | 0.06 |
| I have been able to figure out the most difficult work in this math class. | | | 0.51 | 0.10 | 0.57 | 0.04 |
| Happiness in Class | | | | | | |
| This math class is a happy place for me to be. | | | 0.67 | 0.18 | 0.68 | 0.20 |
| Being in this math class makes me feel sad or angry. | | | −0.50 | 0.15 | −0.54 | 0.16 |
| The things we have done in math this year are interesting. | | | 0.56 | 0.24 | 0.57 | 0.27 |
| Because of this teacher, I am learning to love math. | | | 0.67 | 0.26 | 0.67 | 0.28 |
| I enjoy math class this year. | | | 0.71 | 0.21 | 0.75 | 0.26 |
| Behavior in Class | | | | | | |
| My behavior in this class is good. | 0.60 | −0.18 | 0.47 | −0.42 | 0.48 | −0.37 |
| My behavior in this class sometimes annoys the teacher. | −0.58 | 0.40 | −0.35 | 0.59 | −0.37 | 0.61 |
| My behavior is a problem for the teacher in this class. | −0.59 | 0.39 | −0.38 | 0.60 | −0.36 | 0.57 |

Notes: Estimates drawn from all available data. Loadings of roughly 0.4 or higher are highlighted to identify patterns.

1 Although student outcomes beyond test scores often are referred to as “non-cognitive” skills, our preference, like others ( Duckworth & Yeager, 2015 ; Farrington et al., 2012 ), is to refer to each competency by name. For brevity, we refer to them as “attitudes and behaviors,” which closely characterizes the measures we focus on in this paper.

2 Analyses below include additional subsamples of teachers and students. In analyses that predict students’ survey response, we included between 51 and 111 teachers and between 548 and 1,529 students. This range is due to the fact that some survey items were not available in the first year of the study. Further, in analyses relating domains of teaching practice to student outcomes, we further restricted our sample to teachers who themselves were part of the study for more than one year, which allowed us to use out-of-year observation scores that were not confounded with the specific set of students in the classroom. This reduced our analysis samples to between 47 and 93 teachers and between 517 and 1,362 students when predicting students’ attitudes and behaviors, and 196 teachers and 8,660 students when predicting math test scores. Descriptive statistics and formal comparisons of other samples show similar patterns as those presented in Table 1 .

3 We conducted factor analyses separately by year, given that additional items were added in the second and third years to help increase reliability. In the second and third years, each of the two factors has an eigenvalue above one, a conventionally used threshold for selecting factors ( Kline, 1994 ). Even though the second factor consists of three items that also have loadings on the first factor between 0.35 and 0.48 – often taken as the minimum acceptable factor loading ( Field, 2013 ; Kline, 1994 ) – this second factor explains roughly 20% more of the variation across teachers and, therefore, has strong support for a substantively separate construct ( Field, 2013 ; Tabachnick & Fidell, 2001 ). In the first year of the study, the eigenvalue on this second factor is less strong (0.78), and the two items that load onto it also load onto the first factor.
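The retention rule described in this footnote (keep factors whose eigenvalue exceeds one; Kline, 1994) can be illustrated on a toy correlation matrix. This is a minimal sketch of the eigenvalue criterion, not the study's actual factor analysis; the six-item matrix below is invented, with four items forming a strong cluster and two forming a weaker one.

```python
import numpy as np

# Toy correlation matrix for six survey items: items 1-4 co-vary strongly
# (one construct), items 5-6 form a weaker second cluster.
# Values are illustrative only, not drawn from the NCTE survey data.
R = np.array([
    [1.0, 0.6, 0.6, 0.5, 0.1, 0.1],
    [0.6, 1.0, 0.6, 0.5, 0.1, 0.1],
    [0.6, 0.6, 1.0, 0.5, 0.1, 0.1],
    [0.5, 0.5, 0.5, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1, 1.0, 0.5],
    [0.1, 0.1, 0.1, 0.1, 0.5, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]   # eigvalsh returns ascending; reverse
retained = (eigenvalues > 1.0).sum()        # eigenvalue-greater-than-one rule
print(np.round(eigenvalues, 2))
print(f"Factors with eigenvalue > 1: {retained}")  # two clusters -> 2 factors
```

Here two eigenvalues exceed one, matching the two planted clusters; in the paper's second and third years, the same rule supports two factors per outcome domain.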

4 Depending on the outcome, between 4% and 8% of students were missing a subset of items from survey scales. In these instances, we created final scores by averaging across all available information.

5 Coding of items from both the low- and high-stakes tests also identify a large degree of overlap in terms of content coverage and cognitive demand ( Lynch, Chin, & Blazar, 2015 ). All tests focused most on numbers and operations (40% to 60%), followed by geometry (roughly 15%), and algebra (15% to 20%). By asking students to provide explanations of their thinking and to solve non-routine problems such as identifying patterns, the low-stakes test also was similar to the high-stakes tests in two districts; in the other two districts, items often asked students to execute basic procedures.

6 As described by Blazar (2015), capture occurred with a three-camera, digital recording device and lasted between 45 and 60 minutes. Teachers were allowed to choose the dates for capture in advance and directed to select typical lessons and exclude days on which students were taking a test. Although it is possible that these lessons differed from a teacher’s general instruction, teachers did not have any incentive to select lessons strategically, as no rewards or sanctions were involved with data collection or analyses. In addition, analyses from the MET project indicate that teachers are ranked almost identically when they choose lessons themselves compared to when lessons are chosen for them ( Ho & Kane, 2013 ).

7 Developers of the CLASS instrument identify a third dimension, Classroom Instructional Support . Factor analyses of data used in this study showed that items from this dimension formed a single construct with items from Emotional Support ( Blazar et al., 2015 ). Given theoretical overlap between Classroom Instructional Support and dimensions from the MQI instrument, we excluded these items from our work and focused only on Classroom Emotional Support.

8 We controlled for prior-year scores only on the high-stakes assessments and not on the low-stakes assessment for three reasons. First, including prior low-stakes test scores would reduce our full sample by more than 2,200 students. This is because the assessment was not given to students in District 4 in the first year of the study (N = 1,826 students). Further, an additional 413 students were missing fall test scores given that they were not present in class on the day it was administered. Second, prior-year scores on the high- and low-stakes test are correlated at 0.71, suggesting that including both would not help to explain substantively more variation in our outcomes. Third, sorting of students to teachers is most likely to occur based on student performance on the high-stakes assessments since it was readily observable to schools; achievement on the low-stakes test was not.

9 An alternative approach would be to specify teacher effects as fixed, rather than random, which relaxes the assumption that teacher assignment is uncorrelated with factors that also predict student outcomes ( Guarino, Maxfield, Reckase, Thompson, & Wooldridge, 2015 ). Ultimately, we prefer the random effects specification for three reasons. First, it allows us to separate out teacher effects from class effects by including a random effect for both in our model. Second, this approach allows us to control for a variety of variables that are dropped from the model when teacher fixed effects also are included. Given that all teachers in our sample remained in the same school from one year to the next, school fixed effects are collinear with teacher fixed effects. In instances where teachers had data for only one year, class characteristics and district-by-grade-by-year fixed effects also are collinear with teacher fixed effects. Finally, and most importantly, we find that fixed and random effects specifications that condition on students’ prior achievement and demographic characteristics return almost identical teacher effect estimates. When comparing teacher fixed effects to the “shrunken” empirical Bayes estimates that we employ throughout the paper, we find correlations between 0.79 and 0.99. As expected, the variance of the teacher fixed effects is larger than the variance of teacher random effects, differing by the shrinkage factor. When we instead calculate teacher random effects without shrinkage by averaging student residuals to the teacher level (i.e., “teacher average residuals”; see Guarino et al, 2015 for a discussion of this approach) they are almost identical to the teacher fixed effects estimates. Correlations are 0.99 or above across outcome measures, and unstandardized regression coefficients that retain the original scale of each measure range from 0.91 sd to 0.99 sd.
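The "shrinkage factor" that separates the empirical Bayes estimates from teacher fixed effects or raw teacher average residuals can be sketched directly. The variance components below are invented for illustration; in practice they would come from the fitted multilevel model.

```python
# Illustrative empirical Bayes shrinkage of a teacher's average student
# residual. The variance components are assumptions for this sketch.
sigma2_teacher = 0.04   # between-teacher variance of the outcome
sigma2_eps = 0.36       # residual (within-teacher) variance

def shrink(avg_residual, n_students):
    """Shrink a teacher's average student residual toward zero.

    The reliability weight grows with class size: noisy estimates based
    on few students are pulled strongly toward the grand mean.
    """
    reliability = sigma2_teacher / (sigma2_teacher + sigma2_eps / n_students)
    return reliability * avg_residual

# Same raw estimate, different class sizes:
print(round(shrink(0.20, 5), 3))    # prints 0.071: heavy shrinkage
print(round(shrink(0.20, 100), 3))  # prints 0.183: little shrinkage
```

With large classes the shrunken and unshrunken estimates nearly coincide, which is consistent with the high correlations (0.79 to 0.99) between the fixed-effect and empirical Bayes estimates reported above.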

10 Adding prior survey responses to the education production function is not entirely analogous to doing so with prior achievement. While achievement outcomes have roughly the same reference group across administrations, the surveys do not. This is because survey items often asked about students’ experiences “in this class.” All three Behavior in Class items and all five Happiness in Class items included this or similar language, as did five of the 10 items from Self-Efficacy in Math . That said, moderate year-to-year correlations of 0.39, 0.38, and 0.53 for Self-Efficacy in Math , Happiness in Class , and Behavior in Class , respectively, suggest that these items do serve as important controls. Comparatively, year-to-year correlations for the high- and low-stakes tests are 0.75 and 0.77.

11 To estimate these scores, we specified the following hierarchical linear model separately for each school year: $OBSERVATION_{lj,-t} = \gamma_j + \varepsilon_{ljt}$. The outcome is the observation score for lesson $l$ from teacher $j$ in years other than $t$; $\gamma_j$ is a random effect for each teacher, and $\varepsilon_{ljt}$ is the residual. For each domain of teaching practice and school year, we utilized standardized estimates of the teacher-level residual as each teacher’s observation score in that year. Thus, scores vary across time. In the main text, we refer to these teacher-level residuals as $\widehat{OBSERVATION}_{lj,-t}$ rather than $\hat{\gamma}_j$ for ease of interpretation for readers.
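The out-of-year construction in this footnote amounts to leave-one-year-out scoring. A simplified sketch, using a plain mean of a teacher's lessons from other years rather than the standardized random-effect estimate; the teacher labels, years, and scores below are hypothetical.

```python
# Illustrative lesson-level observation scores, keyed by (teacher, year).
# All names and values are hypothetical.
scores = {
    ("T1", 2011): [2.4, 2.6, 2.5],
    ("T1", 2012): [2.8, 3.0],
    ("T2", 2011): [1.9, 2.1],
    ("T2", 2012): [2.0, 2.2, 2.1],
}

def out_of_year_score(teacher, year):
    """Mean of a teacher's lesson scores from all years other than `year`,
    so the measure is not confounded with the current set of students."""
    pooled = [s for (t, y), lessons in scores.items()
              if t == teacher and y != year
              for s in lessons]
    return sum(pooled) / len(pooled)

print(round(out_of_year_score("T1", 2012), 2))  # prints 2.5 (2011 lessons only)
```

Because each year's score excludes that year's lessons, scores vary across time for the same teacher, as the footnote notes.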

12 One explanation for these findings is that the relationship between teachers’ Classroom Organization and students’ Happiness in Class is non-linear. For example, it is possible that students’ happiness increases as the class becomes more organized, but then begins to decrease in classrooms with an intensive focus on order and discipline. To explore this possibility, we first examined the scatterplot of the relationship between teachers’ Classroom Organization and teachers’ ability to improve students’ Happiness in Class. Next, we re-estimated equation (2) including a quadratic, cubic, and quartic specification of teachers’ Classroom Organization scores. In both sets of analyses, we found no evidence for a non-linear relationship. Given our small sample size and limited statistical power, though, we suggest that this may be a focus of future research.
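The polynomial check described in this footnote can be sketched with ordinary least squares on simulated data. This is not the paper's estimation; the data below are simulated under a purely linear relationship, so the higher-degree terms should improve fit only marginally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: standardized Classroom Organization scores (x) and
# estimated effects on students' Happiness in Class (y), generated here
# from a purely linear model plus noise.
x = rng.normal(size=200)
y = -0.1 * x + rng.normal(scale=0.5, size=200)

# Fit nested polynomial specifications: linear, quadratic, cubic, quartic.
for degree in (1, 2, 3, 4):
    X = np.column_stack([x**d for d in range(degree + 1)])  # intercept included
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(degree, round(float(rss[0]), 2))
# If the higher-degree terms barely reduce the residual sum of squares,
# there is little evidence of a non-linear relationship.
```

In the simulated linear-only world, the residual sum of squares is essentially flat across specifications, which is the pattern the authors report for their data.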

13 In similar analyses in a subset of the NCTE data, Blazar (2015) did find a statistically significant relationship between Ambitious Mathematics Instruction and the low-stakes math test of 0.11 sd. The 95% confidence interval around that point estimate overlaps with the 95% confidence interval relating Ambitious Mathematics Instruction to the low-stakes math test in this analysis. Estimates of the relationship between the other three domains of teaching practice and low-stakes math test scores were of smaller magnitude and not statistically significant. Differences between the two studies likely emerge from the fact that we drew on a larger sample with an additional district and year of data, as well as slight modifications to our identification strategy.

14 When we adjusted p -values for estimates presented in Table 5 to account for multiple hypothesis testing using both the Šidák and Bonferroni algorithms ( Dunn, 1961 ; Šidák, 1967 ), relationships between Emotional Support and both Self-Efficacy in Math and Happiness in Class , as well as between Mathematical Errors and Self-Efficacy in Math remained statistically significant.
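The Šidák and Bonferroni adjustments cited here are simple enough to state directly. A minimal sketch; m = 10 tests is an arbitrary illustration, not the number of hypotheses tested in the paper.

```python
def bonferroni(p, m):
    """Bonferroni-adjusted p-value for one of m tests (Dunn, 1961)."""
    return min(1.0, m * p)

def sidak(p, m):
    """Sidak-adjusted p-value, slightly less conservative (Sidak, 1967)."""
    return 1.0 - (1.0 - p) ** m

# A raw p-value that survives adjustment across m = 10 tests ...
print(round(bonferroni(0.004, 10), 3))  # prints 0.04  -> still < 0.05
print(round(sidak(0.004, 10), 3))       # prints 0.039 -> still < 0.05
# ... and one that does not:
print(round(bonferroni(0.02, 10), 3))   # prints 0.2
```

Relationships whose adjusted p-values stay below the significance threshold, as in the footnote, remain statistically significant after accounting for multiple comparisons.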

Contributor Information

David Blazar, Harvard Graduate School of Education.

Matthew A. Kraft, Brown University.

  • Achenbach TM, McConaughy SH, Howell CT. Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity. Psychological Bulletin. 1987;101(2):213.
  • Backes B, Hansen M. Teach for America impact estimates on nontested student outcomes. Working Paper 146. Washington, DC: National Center for Analysis of Longitudinal Data in Education Research; 2015. Retrieved from http://www.caldercenter.org/sites/default/files/WP&%20146.pdf
  • Bandura A, Barbaranelli C, Caprara GV, Pastorelli C. Multifaceted impact of self-efficacy beliefs on academic functioning. Child Development. 1996:1206–1222.
  • Baron J. Personality and intelligence. In: Sternberg RJ, editor. Handbook of human intelligence. New York: Cambridge University Press; 1982. pp. 308–351.
  • Blazar D. Effective teaching in elementary mathematics: Identifying classroom practices that support student achievement. Economics of Education Review. 2015;48:16–29.
  • Blazar D, Braslow D, Charalambous CY, Hill HC. Attending to general and content-specific dimensions of teaching: Exploring factors across two observation instruments. Working Paper. Cambridge, MA: National Center for Teacher Effectiveness; 2015. Retrieved from http://scholar.harvard.edu/files/david_blazar/files/blazar_et_al_attending_to_general_and_content_specific_dimensions_of_teaching.pdf
  • Borghans L, Duckworth AL, Heckman JJ, Ter Weel B. The economics and psychology of personality traits. Journal of Human Resources. 2008;43(4):972–1059.
  • Burchinal M, Howes C, Pianta R, Bryant D, Early D, Clifford R, Barbarin O. Predicting child outcomes at the end of kindergarten from the quality of pre-kindergarten teacher-child interactions and instruction. Applied Developmental Science. 2008;12(3):140–153.
  • Center on Great Teachers and Leaders. Databases on state teacher and principal policies. 2013. Retrieved from http://resource.tqsource.org/stateevaldb
  • Chan TC, Jarman D. Departmentalize elementary schools. Principal. 2004;84(1):70–72.
  • Chetty R, Friedman JN, Hilger N, Saez E, Schanzenbach D, Yagan D. How does your kindergarten classroom affect your earnings? Evidence from Project STAR. Quarterly Journal of Economics. 2011;126(4):1593–1660.
  • Chetty R, Friedman JN, Rockoff JE. Measuring the impacts of teachers I: Evaluating bias in teacher value-added estimates. American Economic Review. 2014;104(9):2593–2632.
  • Chin M, Goldhaber D. Exploring explanations for the “weak” relationship between value added and observation-based measures of teacher performance. Working Paper. Cambridge, MA: National Center for Teacher Effectiveness; 2015. Retrieved from http://cepr.harvard.edu/files/cepr/files/sree2015_simulation_working_paper.pdf?m=1436541369
  • Cohen DK. Teaching and its predicaments. Cambridge, MA: Harvard University Press; 2011.
  • Corcoran SP, Jennings JL, Beveridge AA. Teacher effectiveness on high- and low-stakes tests. Unpublished manuscript; 2012. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.269.5537&rep=rep1&type=pdf
  • Council of the Great City Schools. Beating the odds: Analysis of student performance on state assessments results from the 2012–2013 school year. Washington, DC: Author; 2013.
  • Darling-Hammond L. Getting teacher evaluation right: What really matters for effectiveness and improvement. New York: Teachers College Press; 2013.
  • Diener E. Subjective well-being: The science of happiness and a proposal for a national index. American Psychologist. 2000;55(1):34–43.
  • Downer JT, Rimm-Kaufman S, Pianta RC. How do classroom conditions and children's risk for school problems contribute to children's behavioral engagement in learning? School Psychology Review. 2007;36(3):413–432.
  • Duckworth A. Don’t grade schools on grit. The New York Times. 2016 Mar 26. Retrieved from http://www.nytimes.com/2016/03/27/opinion/sunday/dont-grade-schools-on-grit.html
  • Duckworth AL, Peterson C, Matthews MD, Kelly DR. Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology. 2007;92(6):1087–1101.
  • Duckworth AL, Quinn PD, Tsukayama E. What No Child Left Behind leaves behind: The roles of IQ and self-control in predicting standardized achievement test scores and report card grades. Journal of Educational Psychology. 2012;104(2):439–451.
  • Duckworth AL, Yeager DS. Measurement matters: Assessing personal qualities other than cognitive ability for educational purposes. Educational Researcher. 2015;44(4):237–251.
  • Dunn OJ. Multiple comparisons among means. Journal of the American Statistical Association. 1961;56(293):52–64.
  • Epstein D, Miller RT. Slow off the mark: Elementary school teachers and the crisis in science, technology, engineering, and math education. Washington, DC: Center for American Progress; 2011.
  • The Every Student Succeeds Act. Public Law 114-95, 114th Cong., 1st sess. 2015 Dec 10. Available at https://www.congress.gov/bill/114th-congress/senate-bill/1177/text
  • Farrington CA, Roderick M, Allensworth E, Nagaoka J, Keyes TS, Johnson DW, Beechum NO. Teaching adolescents to become learners: The role of non-cognitive factors in shaping school performance, a critical literature review. Chicago: University of Chicago Consortium on Chicago School Reform; 2012.
  • Field A. Discovering statistics using IBM SPSS statistics. 4th ed. London: SAGE Publications; 2013.
  • Gershenson S. Linking teacher quality, student attendance, and student achievement. Education Finance and Policy. 2016;11(2):125–149.
  • Goodman R. Psychometric properties of the strengths and difficulties questionnaire. Journal of the American Academy of Child & Adolescent Psychiatry. 2001;40(11):1337–1345.
  • Grossman P, McDonald M. Back to the future: Directions for research in teaching and teacher education. American Educational Research Journal. 2008;45:184–205.
  • Guarino CM, Maxfield M, Reckase MD, Thompson PN, Wooldridge JM. An evaluation of Empirical Bayes’ estimation of value-added teacher performance measures. Journal of Educational and Behavioral Statistics. 2015;40(2):190–222.
  • Hafen CA, Hamre BK, Allen JP, Bell CA, Gitomer DH, Pianta RC. Teaching through interactions in secondary school classrooms: Revisiting the factor structure and practical application of the classroom assessment scoring system–secondary. The Journal of Early Adolescence. 2015;35(5–6):651–680.
  • Hamre B, Hatfield B, Pianta R, Jamil F. Evidence for general and domain-specific elements of teacher–child interactions: Associations with preschool children's development. Child Development. 2014;85(3):1257–1274.
  • Hamre BK, Pianta RC. Early teacher–child relationships and the trajectory of children's school outcomes through eighth grade. Child Development. 2001;72(2):625–638.
  • Hamre BK, Pianta RC, Downer JT, DeCoster J, Mashburn AJ, Jones SM, Brackett MA. Teaching through interactions: Testing a developmental framework of teacher effectiveness in over 4,000 classrooms. The Elementary School Journal. 2013;113(4):461–487.
  • Hanushek EA, Rivkin SG. Generalizations about using value-added measures of teacher quality. American Economic Review. 2010;100(2):267–271.
  • Hickman JJ, Fu J, Hill HC. Technical report: Creation and dissemination of upper-elementary mathematics assessment modules. Princeton, NJ: Educational Testing Service; 2012.
  • Hill HC, Blazar D, Lynch K. Resources for teaching: Examining personal and institutional predictors of high-quality instruction. AERA Open. 2015;1(4):1–23.
  • Hill HC, Blunk ML, Charalambous CY, Lewis JM, Phelps GC, Sleep L, Ball DL. Mathematical knowledge for teaching and the mathematical quality of instruction: An exploratory study. Cognition and Instruction. 2008;26(4):430–511.
  • Hill HC, Charalambous CY, Kraft MA. When rater reliability is not enough: Teacher observation systems and a case for the generalizability study. Educational Researcher. 2012;41(2):56–64.
  • Hill HC, Grossman P. Learning from teacher observations: Challenges and opportunities posed by new teacher evaluation systems. Harvard Educational Review. 2013;83(2):371–384.
  • Hill HC, Schilling SG, Ball DL. Developing measures of teachers’ mathematics knowledge for teaching. Elementary School Journal. 2004;105:11–30.
  • Hitt C, Trivitt J, Cheng A. When you say nothing at all: The predictive power of student effort on surveys. Economics of Education Review. 2016;52:105–119.
  • Ho AD, Kane TJ. The reliability of classroom observations by school personnel. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2013.
  • Jackson CK. Non-cognitive ability, test scores, and teacher quality: Evidence from ninth grade teachers in North Carolina. NBER Working Paper No. 18624. Cambridge, MA: National Bureau for Economic Research; 2012.
  • Jacob BA, Lefgren L. Can principals identify effective teachers? Evidence on subjective performance evaluation in education. Journal of Labor Economics. 2008;20(1):101–136.
  • Jennings JL, DiPrete TA. Teacher effects on social and behavioral skills in early elementary school. Sociology of Education. 2010;83(2):135–159.
  • John OP, Srivastava S. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In: Handbook of personality: Theory and research. 2nd ed. 1999. pp. 102–138.
  • Kane TJ, McCaffrey DF, Miller T, Staiger DO. Have we identified effective teachers? Validating measures of effective teaching using random assignment. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2013.
  • Kane TJ, Staiger DO. Gathering feedback for teaching. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2012.
  • King RB, McInerney DM, Ganotice FA, Villarosa JB. Positive affect catalyzes academic engagement: Cross-sectional, longitudinal, and experimental evidence. Learning and Individual Differences. 2015;39:64–72.
  • Kline P. An easy guide to factor analysis. London: Routledge; 1994.
  • Kraft MA, Grace S. Teaching for tomorrow’s economy? Teacher effects on complex cognitive skills and social-emotional competencies. Working Paper. Providence, RI: Brown University; 2016. Retrieved from http://scholar.harvard.edu/files/mkraft/files/teaching_for_tomorrows_economy_-_final_public.pdf
  • Koedel C. Teacher quality and dropout outcomes in a large, urban school district. Journal of Urban Economics. 2008;64(3):560–572.
  • Ladd HF, Sorensen LC. Returns to teacher experience: Student achievement and motivation in middle school. Working Paper No. 112. Washington, DC: National Center for Analysis of Longitudinal Data in Education Research; 2015. Retrieved from http://www.caldercenter.org/sites/default/files/WP%20112%20Update_0.pdf
  • Lampert M. Teaching problems and the problems of teaching. Yale University Press; 2001. [ Google Scholar ]
  • Lockwood JR, McCaffrey DF, Hamilton LS, Stecher B, Le V, Martinez JF. The sensitivity of value-added teacher effect estimates to different mathematics achievement measures. Journal of Educational Measurement. 2007; 44 (1):47–67. [ Google Scholar ]
  • Luckner AE, Pianta RC. Teacher-student interactions in fifth grade classrooms: Relations with children's peer behavior. Journal of Applied Developmental Psychology. 2011; 32 (5):257–266. [ Google Scholar ]
  • Lynch K, Chin M, Blazar D. Working Paper. Cambridge, MA: National Center for Teacher Effectiveness; 2015. Relationship between observations of elementary teacher mathematics instruction and student achievement: Exploring variability across districts. [ Google Scholar ]
  • Lyubomirsky S, King L, Diener E. The benefits of frequent positive affect: Does happiness lead to success? Psychological Bulletin. 2005; 131 (6):803–855. [ PubMed ] [ Google Scholar ]
  • Mashburn AJ, Pianta RC, Hamre BK, Downer JT, Barbarin OA, Bryant D, Howes C. Measures of classroom quality in prekindergarten and children's development of academic, language, and social skills. Child Development. 2008; 79 (3):732–749. [ PubMed ] [ Google Scholar ]
  • Mihaly K, McCaffrey DF, Staiger DO, Lockwood JR. A composite estimator of effective teaching. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2013. [ Google Scholar ]
  • Miles SB, Stipek D. Contemporaneous and longitudinal associations between social behavior and literacy achievement in a sample of low-income elementary school children. Child Development. 2006; 77 (1):103–117. [ PubMed ] [ Google Scholar ]
  • Miller CC. Why what you learned in preschool is crucial at work. The New York Times. 2015 Oct 16; Retrieved from http://www.nytimes.com/2015/10/18/upshot/how-the-modern-workplace-has-become-more-like-preschool.html?_r=0 .
  • Moffitt TE, Arseneault L, Belsky D, Dickson N, Hancox RJ, Harrington H, Houts R, Poulton R, Roberts BW, Ross S. A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences. 2011; 108 (7):2693–2698. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • National Council of Teachers of Mathematics. Curriculum and evaluation standards for school mathematics. Reston, VA: Author; 1989. [ Google Scholar ]
  • National Council of Teachers of Mathematics. Principles to actions: Ensuring mathematical success for all. Reston, VA: Author; 2014. [ Google Scholar ]
  • National Governors Association Center for Best Practices. Common core state standards for mathematics. Washington, DC: Author; 2010. [ Google Scholar ]
  • Papay JP. Different tests, different answers: The stability of teacher value-added estimates across outcome measures. American Educational Research Journal. 2011; 48 (1):163–193. [ Google Scholar ]
  • Papay JP. Refocusing the debate: Assessing the purposes and tools of teacher evaluation. Harvard Educational Review. 2012; 82 (1):123–141. [ Google Scholar ]
  • Pianta RC, Hamre BK. Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher. 2009; 38 (2):109–119. [ Google Scholar ]
  • Pianta R, La Paro K, Payne C, Cox M, Bradley R. The relation of kindergarten classroom environment to teacher, family, and school characteristics and child outcomes. Elementary School Journal. 2002; 102 :225–38. [ Google Scholar ]
  • Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. Second. Thousand Oaks, CA: Sage Publications; 2002. [ Google Scholar ]
  • Rockoff JE, Speroni C. Subjective and objective evaluations of teacher effectiveness. American Economic Review. 2010:261–266. [ Google Scholar ]
  • Rockoff JE, Staiger DO, Kane TJ, Taylor ES. Information and employee evaluation: evidence from a randomized intervention in public schools. American Economic Review. 2012; 102 (7):3184–3213. [ Google Scholar ]
  • Ruzek EA, Domina T, Conley AM, Duncan GJ, Karabenick SA. Using value-added models to measure teacher effects on students’ motivation and achievement. The Journal of Early Adolescence. 2015; 35 (5–6):852–882. [ Google Scholar ]
  • Segal C. Misbehavior, education, and labor market outcomes. Journal of the European Economic Association. 2013; 11 (4):743–779. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Šidák Z. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association. 1967; 62 (318):626–633. [ Google Scholar ]
  • Spearman C. “General Intelligence,” objectively determined and measured. The American Journal of Psychology. 1904; 15 (2):201–292. [ Google Scholar ]
  • Steinberg MP, Garrett R. Classroom composition and measured teacher performance: What do teacher observation scores really measure? Educational Evaluation and Policy Analysis. 2016; 38 (2):293–317. [ Google Scholar ]
  • Tabachnick BG, Fidell LS. Using multivariate statistics. 4. New York: Harper Collins; 2001. [ Google Scholar ]
  • Todd PE, Wolpin KI. On the specification and estimation of the production function for cognitive achievement. The Economic Journal. 2003; 113 (485):F3–F33. [ Google Scholar ]
  • Tremblay RE, Masse B, Perron D, LeBlanc M, Schwartzman AE, Ledingham JE. Early disruptive behavior, poor school achievement, delinquent behavior, and delinquent personality: Longitudinal analyses. Journal of Consulting and Clinical Psychology. 1992; 60 (1):64. [ PubMed ] [ Google Scholar ]
  • Tsukayama E, Duckworth AL, Kim B. Domain-specific impulsivity in school-age children. Developmental Science. 2013; 16 (6):879–893. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • U.S. Department of Education. A blueprint for reform: Reauthorization of the elementary and secondary education act. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development; 2010. [ Google Scholar ]
  • Usher EL, Pajares F. Sources of self-efficacy in school: Critical review of the literature and future directions. Review of Educational Research. 2008; 78 (4):751–796. [ Google Scholar ]
  • West MR, Kraft MA, Finn AS, Martin RE, Duckworth AL, Gabrieli CF, Gabrieli JD. Promise and paradox: Measuring students’ non-cognitive skills and the impact of schooling. Educational Evaluation and Policy Analysis. 2016; 38 (1):148–170. [ Google Scholar ]
  • Whitehurst GJ. Hard thinking on soft skills. Brookings Institute; Washington, DC: 2016. Retrieved from http://www.brookings.edu/research/reports/2016/03/24-hard-thinking-soft-skills-whitehurst . [ Google Scholar ]
  • Whitehurst GJ, Chingos MM, Lindquist KM. Evaluating teachers with classroom observations: Lessons learned in four districts. Brown Center on Education Policy at the Brookings Institute; Washington, DC: 2014. Retrieved from http://www.brookings.edu/~/media/research/files/reports/2014/05/13-teacher-evaluation/evaluating-teachers-with-classroom-observations.pdf . [ Google Scholar ]
  • Wigfield A, Meece JL. Math anxiety in elementary and secondary school students. Journal of Educational Psychology. 1988; 80 (2):210. [ Google Scholar ]
  • Zernike K. Testing for joy and grit? Schools nationwide push to measure students’ emotional skills. The New York Times. 2016 Feb 29; Retrieved from http://www.nytimes.com/2016/03/01/us/testing-for-joy-and-grit-schools-nationwide-push-to-measure-students-emotional-skills.html?_r=0 .

Good Teaching Is Not Just About the Right Practices

In a series of interviews with master teachers, a reporter finds that certain intangible qualities matter more than the best tactics. 

Good teaching isn’t about following a “rigid list of the most popular evidence-based tools and strategies,” veteran high school English teacher Renee Moore tells Kristina Rizga for The Atlantic’s On Teaching series. The most effective teaching tools, Moore suggests, are intangible qualities that directly address the fundamental human needs of a diverse classroom community—traits like empathy, kindness, and a deep respect for the lives and interests of individual students.

Working from a place of caring, Rizga reports, the best teachers establish deep connections with students, and then build up to a “daily commitment to bringing in well-considered, purposeful practices and working child by child.” For master teachers, then, the person precedes the pedagogy—and finding the right mix of practices, at least to some extent, is contingent on knowing what each child needs.

Rizga travelled across the country for two years for the series, interviewing some of America’s most accomplished veteran teachers in an effort to collect their wisdom and discover “what has helped them bring out the best in their students.” The result is an edifying collection of stories that touch on issues from race and culture to advice about how to teach remotely.

We pulled out some of the most constructive, foundational ideas that informed teacher mindsets through decades of work in the classroom, and helped them inspire even the most reticent students to grow and learn.

WORKING CHILD BY CHILD

Part of getting to know students, says high school English teacher Pirette McKamey, involves watching and listening as students speak in class or in the hallway, and observing how they express themselves in their work. “Every time a student does an assignment, they are communicating something about their thinking,” says McKamey, who is now the principal at Mission High School in San Francisco. “There are so many opportunities to miss certain students and not see them, not hear them, shut them down.”

It also means finding opportunities to connect with each child individually. Moore recalls a 17-year-old student who, in spite of excelling in math class, struggled with writing in her English class. After spending time with the child after school, she found he lit up when discussing sports and family—subjects she encouraged him to write about, resulting in more complex, lively writing. She also recorded their conversations and asked the student to transcribe the recordings—without worrying too much about spelling and grammar—an exercise that allowed him to see proof of his “capacity for unique ideas and analysis,” and opened the door for Moore to begin teaching him grammar and composition. The student became the first of his six siblings to graduate with a high school diploma.

The experience “taught me the power of getting to know your students well enough to teach,” says Moore, illuminating the powerful but not always intuitive connection between relationship-building and improving academic outcomes. Instead of designing pedagogy around individual student needs, “we’re shuffling kids through a system designed on a factory model, and we often give up too soon, because they don’t get to grade level by the time the system says they should. When they don’t, we say they’re not ready to learn or are hopeless. But they are just not on our schedule; it has nothing to do with their innate potential or ability.”

When Moore surveyed her students for a research project in 2000 about best practices for teaching English, students confirmed what she’d long suspected: They learned best when teachers “saw and heard them as individuals, helped them understand their strengths, and connected what they were learning with their future ambitions.” When, instead of recognizing and supporting student effort, teachers focused on minor issues like lateness or poor grammar, students reported feeling discouraged.

REFLECTING ON CLASSROOM PRACTICE

Finding time and head space for reflection—especially after teaching all day, grading assignments, fielding student and family queries, and preparing for the next day’s lessons—is challenging but absolutely essential to good teaching. It’s also not just about reflecting on your pedagogy.

McKamey got in the habit of spending her commute going over what she’d observed about each student that day. “She noted, for example, any body language that might indicate disengagement, like expressionless faces, or heads on desks,” writes Rizga. She also tracked student engagement, going over in her mind instances when she saw, for example, students chatting spontaneously about assignments, or doing extra work. “The next day, McKamey would synthesize what she’d observed, and adjust her lesson plans for the day ahead.”

LEARNING FROM COLLEAGUES

When thinking about productive relationships, teachers should think laterally too: acknowledging and tapping into the strengths of colleagues is a hallmark of master teachers. Peer networks allow educators to learn from each other, enrich their practice, and access a valuable support network that helps teachers feel connected and more likely to stay in the field.

For many seasoned educators, peer networks are “the main mechanism for transferring collective wisdom and acquiring tacit knowledge that can’t be learned by reading a book or listening to a lecture—skills such as designing a strong lesson plan with precise pacing, rhythm, and clear focus, for instance, or building positive relationships among students,” Rizga writes in another piece in the collection.

“When they struggled—and all of them told me they did—they conferred with colleagues at the school, or teachers in professional associations, or online communities. And together, these teacher groups acted intentionally to identify the challenges students were facing and come up with personalized plans,” Rizga reports.

THE VALUE OF TEAM PLANNING

When teachers were able to share insights and intentionally plan together, they collaborated across academic subjects in new and creative ways, Rizga writes, coming up with valuable lessons and programs that were “more likely to be culturally specific, speaking to the realities of their students’ lives.”

Former high school English teacher Judith Harper, for example, worked with her teaching colleagues in Mesa, Arizona, to help boost students’ public speaking, interviewing, and college-essay-writing skills. Many of her students came from “working-class and Latino families who didn’t always speak English at home,” and building these skills opened up new opportunities for them. Rebecca Palacios, an early-childhood educator in Corpus Christi, Texas, worked with her teaching colleagues to launch a coaching program to help the Latino parents of her preschool students learn how to support their children’s reading skills at home.

  • Writing groups for research students Writing groups for research students
  • Facilities and resources
  • Audio-visual services Audio-visual services
  • Theses collection Theses collection
  • Student life
  • Orientation Orientation
  • Student Ambassadors
  • Become a student ambassador Become a student ambassador
  • Student Engagement Office (SEO) Student Engagement Office (SEO)
  • International Student Peer Support (ISPS) International Student Peer Support (ISPS)
  • Contacts for students Contacts for students
  • Events Events
  • Categories Categories
  • Welcome message Welcome message
  • Senior Leadership Team Senior Leadership Team
  • Schools, research centres and clinics
  • Curriculum, Teaching and Inclusive Education Curriculum, Teaching and Inclusive Education
  • Education, Culture and Society Education, Culture and Society
  • Educational Psychology and Counselling Educational Psychology and Counselling
  • Monash Krongold Clinic Monash Krongold Clinic
  • Rankings and reputation Rankings and reputation
  • Our vision and strategy Our vision and strategy
  • Alumni Alumni
  • Our prize partners Our prize partners
  • Donations and bequests Donations and bequests
  • Our campuses Our campuses
  • Contact the Faculty Contact the Faculty
  • Skip to content
  • Skip to navigation

How teachers can use research effectively in their classroom


It’s important for teachers to be able to use the latest evidence from research in their classroom practice, but how can they use that research well to create meaningful impact?

Researchers from the Monash Q project provide some tips and resources for educators.

Almost any professional accreditation document or school improvement framework includes an expectation that teachers use research in their practice. At first glance, these statements appear simple; however, the process of sourcing, interpreting and using research in practice is more complex.

At the Monash Q Project, we are interested in understanding how teachers use research and supporting them to use it well. In a recent survey of nearly 500 Australian educators, we found:

  • 85% believe using research will improve student outcomes,
  • 35% do not know where to find relevant research,
  • 32% do not feel confident to analyse and interpret research, and
  • 44% do not feel confident to judge the quality of research.

This article discusses four key considerations for using research well in the classroom, along with initial resources and practical guides to support teachers to engage with research.

1. Research comes from a variety of sources

The educators in our survey told us about the challenges they face in accessing research. For instance, 68% indicated they didn’t have sufficient access to research, and 76% couldn’t keep up with new and emerging research.

While issues of access cannot be easily resolved without system-wide changes, there are a number of tricks that can make accessing research easier.

Resources for teachers:

  • An increasing number of research articles are now available open-access (no paywall). Many educational research databases, such as ERIC and Informit, host open-access research – just click “Full-text available” when searching.
  • Teacher Education Review is a fortnightly podcast focusing on sharing recent and topical educational research.
  • Education Research Reading Room is a podcast that explores different education theories or research-based strategies each month.
  • Teacher Magazine produces several different podcasts, including: The Research Files, Teacher Staffroom, and Behaviour Management.

Check out our Q Data Insight for a round-up of ideas.

  • In partnership with Behaviour Works Australia, we developed a 3-step process for finding research, which is explained in our Q Behavioural Insight.


2. Not all research is created equal

Before you consider implementing research in your classroom, you need to evaluate and assess the research to determine whether it is suitable for your context. The educators in our survey were more confident to critique research in this manner if they were school leaders, held postgraduate qualifications and/or had more than five years of experience.

However, as research can be valuable for all educators, there are a number of guides to scaffold the assessment process.

  • Research studies are often riddled with complex terminology. The newly formed Australian Education Research Organisation (AERO) has developed Key Concepts Explained, a page which clearly and concisely explains common research jargon.
  • It can be difficult to know what questions to ask when examining a piece of research. AITSL recently published a comprehensive step-by-step guide on how to examine, critique and interpret research, which has also been summarised in a 1-page document.

State and territory education departments also publish their own evidence guides:

  • High Impact Teaching Strategies (VIC)
  • What Works Best (NSW)
  • Great Teaching By Design (ACT)
  • Standards of Evidence (QLD)

3. Research is not a ‘magic pill’

Research cannot simply be dropped into your classroom to solve all of your problems. After critiquing and interpreting the research, teachers should spend some time developing a plan to adapt the research, trial it in their classroom and then reflect on whether it worked or not.

For the educators in our survey, the most important considerations to keep in mind were whether the research had directly and sufficiently addressed a pre-identified problem (1st ranked importance) and whether it was compatible with their current teaching practices (2nd ranked importance).

  • AERO has developed a Research Reflection Guide that outlines key questions for teachers to ask when planning, implementing, evaluating and reflecting on research-informed changes to practice.
  • Evidence for Learning’s Implementation Guide provides evidence-based recommendations on effective implementation within schools.
  • For a narrative example of how one teacher found and adapted research to suit his small, regional primary school in Queensland, visit Michael’s Q Narrative.
  • For a case study of how two school leaders sourced and implemented research in their large Catholic co-educational secondary school in Victoria, visit Vaughan and Kendall’s Q Narrative.


4. Using research is not an isolated activity

Teachers don’t have to embark on the journey of engaging with research alone. In fact, 76% of educators in our survey used research as a prompt to discuss best practices with their colleagues.

Our most recent Q Data Insight also highlighted how collaborative learning environments can support educators’ beliefs about research as well as their capacities to source, critique and implement it.

  • To learn more about how collaboration can support teachers to use evidence better, take a look at this Collaboration research summary from AITSL and the Q Project.
  • Accessing Research Q Conversation and supporting slide deck.
  • Collaborative Research Use Q Conversation and supporting slide deck.
  • For a narrative case study about how two teachers collaborated to implement a research-based intervention in a small, regional primary school in Queensland, visit Penny and Julie’s Q Narrative.

Final thoughts

These four considerations aim to provide educators with a springboard to explore how they can use research well so it has a meaningful impact in their classrooms. We hope that the resources provided assist teachers to:

  • explore research in various ways,
  • interpret and critique its findings,
  • thoughtfully adapt it to suit their classroom, and
  • engage with others throughout this process.

These suggestions provide an important first step, but there are important systemic issues that also need to be addressed, such as the lack of dedicated time to engage with research.

These concerns are continuing to drive our work at the Monash Q Project, and we explore them in an upcoming discussion paper.

For more information about the Monash Q Project, visit our website or join the conversation with us on Twitter @MonashQProject.

With thanks to the additional Q Project researchers who also contributed to this article: Mandy Salisbury, Joanne Gleeson, and Connie Cirkony.


Further reading

  • Four ways to support teachers to use research in their practice
  • How do Australian teachers feel about their work in 2022?
  • The role of research in the professional development of graduate teachers
  • Bringing future of education into the classroom
  • What do Australians really think about teachers and schools?


Scholars Crossing


Doctoral Dissertations and Projects

Examining the Impact of Special Education Teacher Attrition on Student Performance: A Causal-Comparative Study

Kiandra Dane Jones, Liberty University

School of Education

Doctor of Philosophy in Education (PhD)

Rebecca Lunde

Keywords: attrition, prosocial classroom, student achievement, students with disabilities, burnout

Disciplines

Education | Special Education and Teaching

Recommended Citation

Jones, Kiandra Dane, "Examining the Impact of Special Education Teacher Attrition on Student Performance: A Causal-Comparative Study" (2024). Doctoral Dissertations and Projects . 6047. https://digitalcommons.liberty.edu/doctoral/6047

The purpose of this quantitative, causal-comparative study was to determine whether there was a difference in the achievement of Georgia special education students on the Ninth Grade Literature and American Literature Georgia Milestones Tests between school districts with high and low special education teacher (SET) attrition rates. By providing quantifiable data on the impact of teacher burnout on student achievement, the study supports the literature documenting the consequences of rising teacher attrition. Participants included Georgia Milestones student achievement data from approximately 180 Georgia school districts from 2019–2022; the state’s SET attrition data accounts for approximately 114,800 teachers. Two independent samples t-tests were conducted to determine whether student achievement scores differed between districts with high and low teacher attrition rates. The researcher found no significant difference in Georgia Literature Milestones achievement scores between districts with high and low SET attrition rates. Results from this study may assist leaders by providing a different perspective and strategic approach when seeking to improve Georgia Milestones student achievement.
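The abstract reports two independent samples t-tests comparing achievement between high- and low-attrition districts. As a minimal sketch of that statistic (using invented district-level scores for illustration, not the study's data), the pooled-variance t can be computed as follows:

```python
import math
from statistics import mean, variance

def independent_t(sample_a, sample_b):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = mean(sample_a), mean(sample_b)
    v1, v2 = variance(sample_a), variance(sample_b)  # sample variances (n - 1 denominator)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

# Hypothetical district mean scale scores (illustrative only):
high_attrition = [478, 482, 475, 490, 469]
low_attrition = [481, 488, 479, 493, 472]
t, df = independent_t(high_attrition, low_attrition)
# A small |t| at these degrees of freedom would not reach significance,
# mirroring the study's finding of no significant group difference.
```

In practice a library routine such as `scipy.stats.ttest_ind` would also return the p-value; the null hypothesis is that the two district groups have equal mean achievement.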


Is Higher Education Ignoring Inequality and Failing Disadvantaged Students?


In his new book, BU’s Anthony Abraham Jack says that colleges’ commendable efforts to admit more disadvantaged students aren’t matched by support for those students once they’re in the alien culture of higher education.

New book from BU’s Anthony Abraham Jack says colleges admit students from diverse backgrounds, but forget cultural support

Rich Barlow, Michael D. Spencer

As an Amherst College freshman in the early 2000s, Anthony Abraham Jack had to deal with a problem that never blips affluent students’ radars: the closing of the dining hall during spring break. Coming from a low-income Miami family, Jack couldn’t afford the plane fare home and was staying on campus. How would he eat? He lucked out, getting a job at the college’s gym to earn money for meals—and for his mother, who asked him for whatever he could spare to help pay bills back home.

“While Amherst had opened its doors to welcome poor students like me, they forgot to keep the doors open for those of us who couldn’t afford to leave,” Jack writes in his new book Class Dismissed (Princeton University Press). As part of his research for the book, Jack, faculty director of Boston University’s Newbury Center , which supports first-generation college students, interviewed 125 Harvard University undergraduates, “from families across the economic spectrum,” to reveal the daunting problems—aggravated, but not created by COVID-19 campus closures—that still confront disadvantaged students. 

While commendably diversifying admissions beyond traditionally wealthy, white students, Jack writes, academia too often fails to help disadvantaged students—for whom college often seems as culturally alien as Mars—succeed in school and after graduation: “It is not just about financial aid. Colleges remain woefully unprepared to support the students who make it in.”

The Brink spoke with Jack, a BU Wheelock College of Education & Human Development associate professor of higher education leadership, about Class Dismissed and how universities can better support students from disadvantaged backgrounds.

Jack will read from Class Dismissed on Saturday, September 28 from 3 to 4:30 pm in the George Sherman Union Conference Auditorium, 775 Commonwealth Ave., as part of Alumni Weekend.

With Anthony Abraham Jack

The Brink: Can you summarize the major inequalities that colleges ignored before COVID, and which linger after the pandemic?

Jack: One of the biggest is that universities take a very hands-off approach on how students get into campus jobs. This is a problem because [for] many jobs on campus, you can only apply to them if you know the professor. For a lot of these jobs, like teaching assistant, it’s about who you know. Students who are comfortable gaining rapport with the faculty member are much more likely to not only apply for that job, but to [also] become a course assistant or research assistant. Who is more comfortable engaging with faculty? More privileged students, or students for whom college is not new [to their families]. They are more likely to apply for those jobs and get those jobs. Lower-income students disproportionately are more likely to withdraw from faculty. To not feel comfortable with faculty members means that you are not likely to get one of those jobs.

Why does this matter? Well, some jobs only give you a paycheck. If you’re working as a barista, as grounds crew, you get a paycheck at the end of the week or every two weeks. If you’re working as a research assistant, you get a paycheck and a letter of recommendation. In not just addressing the immediate needs of today, but [also in] taking concrete steps to have a better shot at a career after college, the jobs are unequal.

COVID revealed what I say is a segregated labor market on campus. My research was the first to actually make explicit just how different the work experiences of lower-income versus more affluent students are on campus. COVID closures led to a removal of almost all the work hours for lower-income students. You had to be on campus and present to do them. But if you were a research assistant or course assistant, your services were actually needed in higher demand.

Unless we understand how students are funneled into different parts of campus, we will still be ignorant of just how our practices exacerbate the inequality in their lives. Lower-income students not only have to work because their financial aid letter says so, but [also] to support their families back home.


The Brink: Campus work is a ticket to their future?

It absolutely is. You can work for four years as a barista, as a groundskeeper, and your contact with university officials can be very minimal. If you work as a research assistant for the same faculty member, that faculty member has four years of interactions—research, travel, asking you to babysit their cat when they go away. You become someone in their life. [Students say], “This person took me under their wing when I went to office hours, they offered me a job in their lab, and that’s how I got interested in science.” 

The Brink: Your book mentions it’s not enough to admit a diverse class; you have to get them through to get their degrees. Why are elite schools better at ensuring that disadvantaged students graduate? 

Elite schools have money to remove hurdles out of students’ way to graduation. Things that trip students up at Bunker Hill [Community College], or at UMass Amherst, are not as prevalent here at BU, Harvard, Amherst. When we think about housing, about the gap between your financial aid package and tuition, about commuters to campus, those are things that we have known for years that lower the odds of graduation. The longer your commute, the less connected you are to the university, the more likely you are to step out of study or drop out. The more debt you have to take on, [the more] you might think that it’s not worthwhile. The more you work off campus, with your schedule constantly changing, you are more likely to have to make the choice between going to work and going to class. Students at elite schools are the traditional age, 18 to 22. They are least likely to have kids, they are least likely to have familial responsibilities that require them to be away from campus.

But I don’t believe that graduation rates tell the whole story. You can have a school that has a 98 percent graduation rate, but the trajectory after college is divided by class, where your wealthier students, five years out, are on the path they want to be on, but your lower-income students are not—because wealthier students are more likely to have letters of recommendation and the connections introducing them to headhunters and businesses. [Disadvantaged students] who were uncomfortable in college may have a good GPA, but they don’t have a network who can vouch for them and be able to support them. You need eight letters of recommendation for the Rhodes Scholarship. You can have two students, one with a 4.0 GPA and one a 3.8 GPA. But that 4.0 student has never connected with faculty members, has never been to office hours; that 3.8 student has been the research assistant for two faculty members, has babysat or pet-sat for [professors], is actually embedded within a group of faculty and staff. That person is more likely to get the nod.

Some people say: Why study elite schools? Haven’t they gotten undue attention? As a sociologist, I push back against that, because so much mobility literature was not about elite schools. I want to study two things at one time: the mobility prospects of disadvantaged youth and the gap between proclamation and practice. How are you putting [disadvantaged students’ admission] into action? Are you just opening the doors, or opening the doors and changing your daily practices to prepare yourself for who you are now welcoming in?

The Brink: Poverty hurts students long before they reach college. Are some inequalities beyond the ability of academia to fix?

It is not that I’m calling for colleges to fix the structural inequalities. I am asking them to prepare themselves. For those who have lived in poverty’s long shadow, that is incredibly important.  If you are going to admit more lower-income students and go to those new zip codes to get students, you know that they’re bright, they’re ready to do the academic work. But the calls that those students get in the middle of the night are very different from their middle-class peers. If you’re going to go to a place that suffered tremendously due to the opioid epidemic, if you are going to recruit students who come from neighborhoods where joblessness is a modal life event for adults, if you are going to communities where disadvantage is the norm, you have to make sure that your resources on campus can help students bridge the gap. 

The Brink: What solutions would be most effective in helping marginalized students?

One of the things I suggest in the book is: Why are the offices of career services, fellowships, internships, and on-campus employment typically in separate places? We know from research that contact breeds trust, and trust breeds use of an office. Northeastern [University] has all of their employment-related offices in one office, so that students, from their first year to their last year, come to the same place with all questions related to work. We don’t want social class to dictate what opportunities students view as for them or not for them. It’s about demystifying the hidden curriculum.

You [also] have to make sure that your mental health services are prepared to help students deal with that gap. It’s not just about what [services] you have, it’s about who is carrying out therapy on campus, and what training and skill sets they have. Quite frankly, a lot of campuses don’t know how to deal with and support students who come from a more diverse class. I’ve heard Native students say mental health services are not aware of how to deal with the trauma and the legacy of settler colonialism. Mental health services know how to help students through traditional life events for someone who’s [age] 18 to 21: the death of a grandparent, divorced parents. But when it comes to problems that are located in [a particular] place—like what happens on reservations, what happens in inner cities, and what happens in rural parts of America—mental health services are not prepared to deal with a lot of those. Right now, therapists are helping you like: “Let’s develop study skills and help you get through the semester, let’s help you focus on the present.” We’re talking about the legacy of generations of trauma and exploitation and exclusion that many students carry with them on their way to college, and health services are not prepared to deal with that. Do they have specific training to deal with colonialism? There are counselors who are trained to understand racial trauma and the experiences of students who are the children of immigrants.

The Brink: Your book talks about diversifying mental health professionals.

We need people who are more aware of the structural inequalities, and especially violence that happens in the country. It’s one thing to see it every day; you learn how to navigate it from [age] zero to 18. But it’s when you come to campus you have this freedom to actually think through things. But you sometimes need guidance on how to do that. 

This interview was edited for brevity and clarity.


Jack’s research for Class Dismissed was funded by Harvard University and its Presidential Initiative on Harvard & the Legacy of Slavery.



Graphic shows a possible future General Electric jet engine with exposed fan blades in front of a cut-away-interior view of its core mechanisms -- all part of NASA's HyTEC research project.

NASA, GE Aerospace Advancing Hybrid-Electric Airliners with HyTEC

A.55 Decadal Survey Incubation Program: Science and Technology Date Change for Preproposal Telecon

A.55 Decadal Survey Incubation Program: Science and Technology Date Change for Preproposal Telecon

research articles for teachers

Reinventing the Clock: NASA’s New Tech for Space Timekeeping

This image — developed by a team of artists from the Advanced Concepts Lab at NASA’s Langley Research Center — features astronauts performing science on the surface of the Moon and Mars. The team developed the image with a blend of digital 2D illustration and 3D techniques to mimic a retro science fiction painting.

NASA Moon to Mars Architecture Art Challenge

High school students sit with their backs to the camera as they watch a large screen displaying a white extravehicular activity suit being tested

Bring NASA Into Your Classroom This Fall Through Virtual Experiences

Madyson Knox experiments with UV-sensitive beads.

How Do I Navigate NASA Learning Resources and Opportunities?

A 3D printer at RPM Innovations’ facility additively manufactures a funnel-shaped aerospike rocket engine nozzle

Printed Engines Propel the Next Industrial Revolution

Workers truck the HTV-1 to Vehicle Assembly Building (VAB)

15 Years Ago: Japan launches HTV-1, its First Resupply Mission to the Space Station

A close up image of a set of massive solar arrays measuring about 46.5 feet (14.2 meters) long and about 13.5 feet (4.1 meters) high on NASA’s Europa Clipper spacecraft inside the agency’s Payload Hazardous Servicing Facility at Kennedy Space Center in Florida.

La NASA invita a los medios al lanzamiento de Europa Clipper

A man supporting the installation of the X-59 ejection seat.

El X-59 de la NASA avanza en las pruebas de preparación para volar

Technicians tested deploying a set of massive solar arrays

La NASA invita a creadores de las redes sociales al lanzamiento de la misión Europa Clipper

NASA, US Department of Education Bring STEM to After-School Programs

Abbey A. Donaldson

NASA Headquarters

NASA and the U.S. Department of Education are teaming up to engage students in science, technology, engineering, and math (STEM) education during after-school hours. The interagency program, 21st Century Community Learning Centers, aims to reach approximately 1,000 students at more than 60 sites across 10 states.

“Together with the Education Department, NASA aims to create a brighter future for the next generation of explorers,” said NASA Deputy Administrator Pam Melroy. “We are committed to supporting after-school programs across the country with the tools they need to engage students in the excitement of NASA. Through STEM education investments like this, we aspire to ignite curiosity, nurture potential, and inspire our nation’s future researchers, explorers, and innovators.”

On Monday, NASA and the Education Department kicked off the program at the Wheatley Education Campus in Washington. Students had an opportunity to hear about the interagency collaboration from Kris Brown, deputy associate administrator, NASA’s Office of STEM Engagement, and Cindy Marten, deputy secretary, Education Department, as well as participate in an engineering design challenge.

“The 21st Century Community Learning Centers will provide a unique opportunity to inspire students through hands-on learning and real-world problem solving,” said Brown. “By engaging in learning opportunities with NASA scientists and engineers, students will not only develop the critical thinking and creativity needed to tackle the challenges of tomorrow, but also discover the joy of learning.”

“Through this collaboration between the U.S. Department of Education and NASA, we are unlocking limitless opportunities for students to explore, innovate, and thrive in STEM fields,” said Marten. “The 21st Century Community Learning Centers play a pivotal role in making this vision a reality by providing essential after-school programs that ignite curiosity and empower the next generation of thinkers, problem-solvers, and explorers. Together, we are shaping the future of education and space exploration, inspiring students to reach for the stars.”

NASA’s Glenn Research Center in Cleveland will provide NASA-related content and academic projects for students, in-person staff training, continuous program support, and opportunities for students to engage with NASA scientists and engineers. Through engineering design challenges, students will use their creativity, critical thinking, and problem-solving skills to help solve real-world challenges that NASA engineers and scientists may face.

In May 2023, NASA and the Education Department signed a Memorandum of Understanding, strengthening collaboration between the two agencies, and expanding efforts to increase access to high-quality STEM and space education to students and schools across the nation. NASA Glenn signed a follow-on Space Act Agreement in 2024 to support the 21st Century Community Learning Centers. The program, managed by the Education Department and funded by Congress, is the only federal funding source dedicated exclusively to afterschool programs.

Learn more about how NASA’s Office of STEM Engagement is inspiring the next generation of explorers at:

https://www.nasa.gov/stem

Abbey Donaldson Headquarters, Washington 202-269-1600 [email protected]

Jacqueline Minerd Glenn Research Center, Cleveland 216-433-6036 [email protected]

Related Terms

  • Glenn Research Center
  • Opportunities For Educators to Get Involved
  • Opportunities For Students to Get Involved

COMMENTS

  1. The 10 Most Significant Education Studies of 2021

    The Surprising Power of Pretesting. Asking students to take a practice test before they've even encountered the material may seem like a waste of time—after all, they'd just be guessing. But new research concludes that the approach, called pretesting, is actually more effective than other typical study strategies.

  2. Journal of Teacher Education: Sage Journals

    The mission of the Journal of Teacher Education, the flagship journal of AACTE, is to serve as a research forum for a diverse group of scholars invested in the preparation and continued support of teachers who can have a significant voice in discussions and decision-making. Issues covered include preparing teachers to effectively address the needs of marginalized youth; program design and ...

  3. Improving 21st-century teaching skills: The key to effective 21st

    Western education research suggests that teachers who are provided with specific feedback and opportunities to practice these changes in the classroom are able to increase the effectiveness of their teaching (Allen et al., 2011; Jones et al., 2013; Rivers et al., 2013). However, there are worthy examples from LMIC contexts as well.

  4. Full article: Good teachers are always learning

    Equally, in the USA, Kincheloe (2003) advocated for teachers' research of their own practice as a 'path to (their own professional) empowerment' with potential to mitigate against mediocrity in education caused by 'top-down standards and the desecration of teachers' (p.5). He proposed that 'Teachers as researchers could develop and implement a curriculum connected to the ...

  5. Effective Teacher Professional Development: New Theory and a Meta

    This investment has resulted in a marked increase in the number of rigorous studies quantifying the impact of different approaches to teacher PD on the quality of teaching, as reflected in pupil learning (Edovald & Nevill, 2021; Hedges & Schauer, 2018). In 2007, a review by Yoon et al. found just 9 such studies; in 2016, a review by Kennedy found 28 such studies; and in 2019, Lynch et al. found ...

  6. Full article: Teachers and teaching: (re)thinking professionalism

    Introduction and background. Teachers and their practice have been, and continue to be, important sites of critical research. From teacher-related policy, to pedagogy, professionalism and training (to name a few), the study of teachers and teaching has been critically examined within and across a variety of empirical sites, theoretical perspectives, and methodological approaches.

  7. Full article: Teacher education effectiveness as an emerging research

    Teacher education as an object for effectiveness examination. Research on teacher education effectiveness might be viewed in analogy to the research on teachers' effectiveness: Empirical educational research on teachers' effectiveness has largely focused on effective teaching (e.g. Hattie, 2012; Kyriakides et al., 2013; Muijs & Reynolds, 2005; Seidel & Shavelson ...

  8. The Promises and Challenges of Artificial Intelligence for Teachers: a

    This study provides an overview of research on teachers' use of artificial intelligence (AI) applications and machine learning methods to analyze teachers' data. Our analysis showed that AI offers teachers several opportunities for improved planning (e.g., by defining students' needs and familiarizing teachers with such needs), implementation (e.g., through immediate feedback and teacher ...

  9. Using Research to Improve Teaching

    Teachers and researchers should work collaboratively to improve student learning. Though researchers in higher education typically conduct formal research and publish their work in journal articles, it's important for teachers to also see themselves as researchers. They engage in qualitative analysis while circulating the room to examine and ...

  10. Research Studies That Teachers Can Get Behind

    Keep talented teachers and unlock student success with strategic planning based on insights from Apple Education and educational leaders.

  11. Stress, Burnout, Anxiety and Depression among Teachers: A Scoping

    The research also shows that teachers are not the only exception regarding experiencing a poor workplace environment which may lead to increased anxiety and depression [122,123]. Improving teachers' workplace environments may, therefore, reduce the prevalence of anxiety and depression among teachers. Anxiety has also been linked to stressors ...

  12. PDF Addressing Teacher Retention within the First Three to Five Years of

    teachers across all subject areas, grade levels, and disciplines. This research study explored ways in which the school district and its school-based administrators can increase teacher retention within their schools. Qualitative methodology was used to conduct interviews among ten participants and was based on the grounded theory framework.

  13. The JOURNAL OF TEACHER ACTION RESEARCH

    The Journal of Teacher Action Research is an international journal that publishes peer-reviewed articles written by teachers and researchers to inform classroom practice. The journal serves as a practical medium to read and publish classroom-based research. Our review process differs from other journals because we also look for potential.

  14. The Development of Teacher Burnout and the Effects of Resource Factors

    1. Introduction. Teacher burnout is a psychological syndrome that teachers experience in response to chronic job stress, and includes emotional exhaustion (EE), depersonalization (DP), and reduced personal accomplishment (PA). EE refers to feelings of overextending and draining emotional resources, while DP refers to negative, callous, or unfeeling responses to the job, and PA refers to ...

  15. Understanding Teacher Self-Efficacy to Address Students' Social

    Research in mental health and well-being in the workplace, including research conducted with teachers (Eddy et al., 2020), has demonstrated the reliability and validity of single-item measures, which support the application of research to practical settings (Ahmad et al., 2014). Quantitative and qualitative data were gathered simultaneously.

  16. Teacher and Teaching Effects on Students' Attitudes and Behaviors

    Abstract. Research has focused predominantly on how teachers affect students' achievement on tests despite evidence that a broad range of attitudes and behaviors are equally important to their long-term success. We find that upper-elementary teachers have large effects on self-reported measures of students' self-efficacy in math, and ...

  17. Full article: The power of teacher feedback in affecting student

    Together, these studies of student perspectives of teacher feedback helped fill the research gap of feedback mechanisms for improving student learning and performance. See also Figure 1, trying to present an overall picture of the key variables examined in the six studies and mapped into the feedback ecological model (adapted from Yang et al ...

  18. Good Teaching Is Not Just About the Right Practices

    Good teaching isn't about following a "rigid list of the most popular evidence-based tools and strategies," veteran high school English teacher Renee Moore tells Kristina Rizga for The Atlantic's On Teaching series. The most effective teaching tools, Moore suggests, are intangible qualities that directly address the fundamental human needs of a diverse classroom community—traits like ...

  19. How teachers can use research effectively in their classroom

    This article discusses four key considerations for using research well in the classroom, along with initial resources and practical guides to support teachers to engage with research. 1. Research comes from a variety of sources. The educators in our survey told us about the challenges they face in accessing research.

  20. Examining the Impact of Special Education Teacher Attrition on Student

    The purpose of this quantitative, causal-comparative study was to determine if there was a difference between the achievement of Georgia special education students on the Ninth Grade Literature and American Literature Georgia Milestones Test in school districts with high and low special education teacher (SET) attrition rates. This study provided quantifiable data that measured the impact of ...

  21. Is Higher Education Ignoring Inequality and Failing Disadvantaged

    The Brink: Can you summarize the major inequalities that colleges ignored before COVID, and which linger after the pandemic? Jack: One of the biggest is that universities take a very hands-off approach on how students get into campus jobs. This is a problem because [for] many jobs on campus, you can only apply to them if you know the professor. For a lot of these jobs, like teaching assistant ...

  22. Full article: Reviews of teaching methods

    The overview format. This study is situated within the frames of a research project with the overall aim of increasing and refining our knowledge about teaching and teaching research (Hirsh & Nilholm, 2019; Roman, Sundberg, Hirsh, Nilholm, & Forsberg, 2018). In order to clarify the context in which the present study has emerged, a brief description of starting points and ...

  23. NASA, US Department of Education Bring STEM to After-School Programs

    Office of STEM Engagement Deputy Associate Administrator Kris Brown, right, and U.S. Department of Education Deputy Secretary Cindy Marten, left, watch as a student operates a robot during a STEM event to kickoff the 21st Century Community Learning Centers NASA and U.S. Department of Education partnership, Monday, Sept. 23, 2024, at Wheatley Education Campus in Washington.

  24. US-China research has given Beijing's military technology a boost

    Other measures include those to curb Beijing's influence on U.S. college campuses and to revive a Trump-era program meant to root out China's spying and theft of intellectual property at American ...

  25. Family Engagement in Schools: Parent, Educator, and Community

    Studies of family engagement in children's education reveal large associations between family engagement and success for students. Family engagement improves classroom dynamics and increases teacher expectations, student-teacher relationships, and cultural competence, regardless of students' age groups (Boberiene, 2013). While research supports the educational association between family ...

  26. Blomstedt named chief university lobbyist

    Matt Blomstedt, the Cornhusker State's former commissioner of education and a lifelong Nebraskan with decades of experience in education and public policy, has been named the University of Nebraska's next associate vice president for government relations. ... He has also served as a research analyst for the Legislature's Education ...

  27. Full article: A qualitative study of primary teachers' classroom

    Teacher feedback is, particularly from a symbolic interactionist view, a complex interactional pattern between teachers and students, meaning that it is crucial to examine students' perspectives to better understand teacher feedback. However, social research is always an issue of, and limited to, perspectives (Charon 2007).

  28. 2024 Most Affordable Online Bachelor's in Social Work ...

    This article aims to alleviate those concerns by providing a comprehensive overview of the 2024 Most Affordable Online Bachelor's in Social Work Degree Programs Ranking in Kentucky. Created by the Research.com team of data scientists, this ranking is designed to empower students to make informed decisions about their education.

  29. Full article: Initial teacher education is not the problem: retaining

    Research on RRR education in Australia consistently demonstrates that schools located further away from major cities face greater challenges in recruiting and retaining staff (Knipe & Bottrell, 2023). These schools are often staffed by beginning teachers who are offered financial incentives to attract them.