9. Early warning systems and indicators of dropping out of upper secondary school: the emerging role of digital technologies

Alex J. Bowers
Teachers College, Columbia University
United States

Students failing to graduate from upper secondary school (high school) is an issue globally. While the overall OECD average graduation rate from upper secondary school is 81%, there is large variation across national contexts, with lows at 60% graduation by age 25 in Mexico and highs around 90% in Greece, the Republic of Korea (South Korea), and Slovenia (OECD, 2019[1]). Failing to graduate from upper secondary school (known as dropping out) is well known to be linked with a wide array of negative student life outcomes, such as lower degree attainment, lower lifetime earnings, higher incarceration rates, as well as negative health effects (Belfield and Levin, 2007[2]; Lyche, 2010[3]; Rumberger, 2011[4]; Rumberger et al., 2017[5]). Thus, ensuring that students graduate from upper secondary school is a high priority across education systems. This entails predicting early on which students are most likely to experience challenges in the schooling process that may lead to dropping out. With an accurate prediction and early warning, students may be provided additional resources that can help promote their persistence and success (Bowers and Zhou, 2019[6]). The field of education early warning systems and early warning indicators (EWS/EWI) is a recently emerging domain (Allensworth, 2013[7]; Balfanz and Byrnes, 2019[8]; Carl et al., 2013[9]; Davis, Herzog and Legters, 2013[10]; Frazelle and Nagel, 2015[11]; Kemple, Segeritz and Stephenson, 2013[12]; Mac Iver, 2013[13]; McMahon and Sembiante, 2020[14]) that is focused on providing actionable predictors of students failing to graduate from upper secondary school (Allensworth, Nagaoka and Johnson, 2018[15]). 
However, to date, while industries outside of education continually leverage the technologies emerging from the domains of big data, data science, data analytics, and machine learning (Piety, Hickey and Bishop, 2014[16]), the application of these technologies to dropout early warning systems has only recently come to the fore (Agasisti and Bowers, 2017[17]; Baker et al., 2020[18]; Bowers et al., 2019[19]). The purpose of this chapter is to discuss the present state of the field of dropout prediction and early warning systems and indicators. It focuses on the application of current innovations across the technology domains of machine learning, data analytics, and data science pattern analytics to discuss promising developments in this area. As the United States context has been the focus for much of the research in this field at the upper secondary level, I focus primarily on United States studies, informed by studies from other countries where possible. I conclude with a look forward at next-stage developments and an eye to the future.

Across the research literature on dropping out, there is a focus on creating what have come to be known as Early Warning Systems (EWS) and Early Warning Indicators (EWI). An EWS is intended to provide actionable predictors of student challenge to help focus the efforts of a school system on specific interventions, systems, and student persistence and achievement (Allensworth, Nagaoka and Johnson, 2018[15]; Balfanz and Byrnes, 2019[8]; Mac Iver and Messel, 2013[20]; Davis, Herzog and Legters, 2013[10]; McMahon and Sembiante, 2020[14]). As a form of personalisation in education using data (Agasisti and Bowers, 2017[17]), an EWS collects EWIs under a single system which is designed to more efficiently allocate the limited resources of schools towards specific areas of challenge for students who are identified as highly likely to drop out (also known as “at-risk” students) (Carl et al., 2013[9]; Dynarski et al., 2008[21]; Dynarski and Gleason, 2002[22]; Mac Iver, 2013[13]; Rumberger et al., 2017[5]; Stuit et al., 2016[23]). Taking a positive orientation towards student persistence, much of the research that has come out of the Chicago city school system in the United States, which is a large and diverse urban education system, refers to these indicators as “on-track” indicators for graduation (Allensworth, 2013[7]; Allensworth and Easton, 2005[24]; Allensworth and Easton, 2007[25]; Hartman et al., 2011[26]; Kemple, Segeritz and Stephenson, 2013[12]). Importantly, rather than focus on student context, background, and demographic factors, such as family socio-economic status, which are highly related to student persistence (Rumberger, 2011[4]), these systems focus on predictors and indicators that are malleable so that schools can intervene and provide support to students (McMahon and Sembiante, 2020[14]).

However, much remains to be learned about exactly which indicators are the most accurate and predictive for which systems, how to provide that information for action and evidence-based practice in schools, and then what those schools can do with that information. For example, in a recent study that randomly assigned 73 schools in the United States to use an early warning intervention and monitoring system, after one year, treatment schools saw a decrease in at-risk student chronic absences and course failures, but there was no effect on suspensions, low grade point averages, or student credit accrual (Faria et al., 2017[27]). Student credit accrual is seen as a central early outcome of these studies as a direct indicator of continued student persistence and a positive graduation trajectory. Similarly, in a recent randomised controlled experiment of 41 high schools in the United States in which treatment schools used a half-time staff member to monitor ninth grade early warning indicators and provide student supports, treatment schools decreased chronic absenteeism yet had no significant differences in student course failures or credits earned (Mac Iver et al., 2019[28]). One additional difficulty is that relevant early warning indicators may vary across cultures (Box 9.1 and Box 9.2). Thus, while much remains to be learned, the early warning systems and indicators domain is an intriguing and growing research and practice domain that attempts to identify students at risk of dropping out and positively intervene to support student persistence, using the most recent data mining techniques.

A central concern across the early warning systems and indicators domain is the accuracy of indicators used to predict the probability of a student dropping out. While many studies examine a range of predictors using correlation, logistic regression, or similar types of statistics, and then report which variables are most significant in predicting student graduation or dropping out (Allensworth and Easton, 2007[25]; Balfanz, Herzog and Mac Iver, 2007[33]; Bowers, 2010[34]), recent research has begun to focus on comparing the accuracy of predictors using signal detection theory (Bowers, Sprott and Taff, 2013[35]; Bowers and Zhou, 2019[6]). In signal detection theory (Swets, 1988[36]; Swets, Dawes and Monahan, 2000[37]), all potential predictor variables are compared using a Receiver Operating Characteristic (ROC) plot, which plots the sensitivity of a predictor (its true-positive rate) against 1 − specificity (its false-positive rate), as one wishes for a predictor of an event to identify all of the cases who experience the event (here, dropping out) while not misidentifying cases who do not experience it (Bowers and Zhou, 2019[6]). For example, a dropout predictor may be very specific, in that nearly all of the students with a given combination of factors do drop out, but not sensitive, in that only a small fraction of the students in the sample who eventually drop out exhibit that combination of factors.
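The trade-off described above can be made concrete with a short calculation. The sketch below, written in Python on invented data (none of the numbers come from the cited studies), computes the sensitivity and specificity of a dropout flag at a given risk-score threshold; sweeping the threshold traces out the ROC curve.

```python
import numpy as np

# Invented data: 1 = dropped out, 0 = graduated, with an illustrative
# risk score from some predictor (higher = more at risk).
dropped_out = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
risk_score = np.array([0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.1, 0.1, 0.05, 0.0])

def sensitivity_specificity(y, score, threshold):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    of the rule 'flag the student if score >= threshold'."""
    flagged = score >= threshold
    tp = np.sum(flagged & (y == 1))   # dropouts correctly flagged
    fn = np.sum(~flagged & (y == 1))  # dropouts missed
    tn = np.sum(~flagged & (y == 0))  # graduates correctly not flagged
    fp = np.sum(flagged & (y == 0))   # graduates wrongly flagged
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(dropped_out, risk_score, threshold=0.5)
```

Plotting sensitivity against 1 − specificity for every possible threshold yields the ROC curve used to compare predictors in this literature.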

Using these aspects of signal detection theory, Bowers, Sprott and Taff (2013) compared 110 dropout predictors identified from across the literature, and demonstrated that the vast majority of the predictors were no better than a random guess. However, the authors did identify two specific sets of early warning indicators that were more accurate than the others.

The first was the Chicago early warning indicator. Based on over a decade of research from the Chicago urban school system in the United States (Allensworth, 2013[7]; Allensworth and Easton, 2005[24]; Allensworth and Easton, 2007[25]; Allensworth et al., 2014[38]; Allensworth, Nagaoka and Johnson, 2018[15]), the “Chicago on-track” indicator was identified as the most accurate cross-sectional dropout predictor (cross-sectional meaning that the data for the predictor come from a single year), focusing on ninth grade students’ low or failing grades in core subjects such as mathematics or English, and their accumulated credits. As a cross-sectional indicator that is very accessible for educators, the Chicago on-track indicator provides a strong means to use an EWI with higher accuracy than comparable cross-sectional indicators, combining information that already exists in the school’s education data management system. Nevertheless, while the Chicago on-track indicator has higher accuracy than other cross-sectional indicators, the same study by Bowers, Sprott and Taff (2013) identified considerably more accurate dropout predictors that rely on long-term longitudinal data.
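As a concrete illustration, a widely cited operationalisation of the Chicago on-track indicator flags a ninth grader as on-track if they have earned enough credits for promotion (commonly at least five full-year course credits) and have no more than one semester F in a core subject (Allensworth and Easton, 2005[24]). A minimal sketch in Python, with the thresholds treated as illustrative defaults that any system would calibrate locally rather than as a canonical definition:

```python
def on_track(credits_earned, core_semester_fs,
             min_credits=5.0, max_core_fs=1):
    """Simplified Chicago-style on-track flag for a grade 9 student.

    credits_earned:   full-year course credits accumulated in grade 9
    core_semester_fs: number of semester Fs in core subjects
                      (e.g. mathematics, English)
    The thresholds are illustrative defaults, not a canonical definition.
    """
    return credits_earned >= min_credits and core_semester_fs <= max_core_fs

on_track(5.5, 1)   # on-track: enough credits, only one core F
on_track(5.5, 2)   # off-track: too many core-course failures
on_track(4.0, 0)   # off-track: insufficient credit accrual
```

The appeal of the indicator is visible in the sketch: it needs only two values that already sit in a school’s data management system.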

In their comparison across 110 predictors of dropping out, the second set of most accurate early warning indicators was generated by Growth Mixture Modelling (GMM), the one method that stood out as consistently providing the most accurate predictors of dropping out (Bowers, Sprott and Taff, 2013[35]). This technique has the ability to identify significantly different student data patterns over time (see Box 9.3). Three Growth Mixture Modelling analyses were identified as predicting dropping out with extremely high accuracy in comparison to all other predictors (Bowers, Sprott and Taff, 2013[35]). First, analysing a sample of more than 10 000 students from Québec (Canada) from ages 12 through 16, the authors examined student engagement with school over time, such as attendance, discipline, and subject enjoyment and interest, and identified specific low engagement trajectories predictive of dropping out (Janosz et al., 2008[39]). Second, a study using mathematics achievement data from thousands of secondary school students in the United States identified specific growth or decline trajectories that proved predictive of dropout (Muthén, 2004[40]). And third, chief among these studies with high accuracy prediction was a study of trajectories of non-cumulative grade point averages (Bowers and Sprott, 2012[41]).

In their study, Bowers and Sprott (2012[41]) examined the non-cumulative grade point averages of over 5 000 students in the United States over the first three semesters of high school: ninth grade semester one, ninth grade semester two, and tenth grade semester one. A non-cumulative grade point average is a student’s average grade across all subjects within a single grade level or semester, rather than accumulated across years. The authors argued that these grade data are important to examine as school systems globally assign and collect teacher-assigned grades on a regular basis, yet historically this dataset has rarely been leveraged as important data within some education policy systems (Bowers, 2009[48]; Bowers, 2011[49]; Bowers, 2019[50]; Bowers and Sprott, 2012[41]; Brookhart, 2015[51]). A focus on non-cumulative GPAs is also an improvement over cumulative GPA, as the variance in student data year-over-year can then be used to capture different trajectory types (Bowers, 2007[52]).

Bowers and Sprott (2012[41]) identified four significantly different trajectories of non-cumulative GPA: 1) students whose grades decline over this time period; 2) students whose grades start relatively low and rise slowly over time; 3) students who make up the majority and have flat grades through time near the average; and 4) students who have high grades throughout the time period. The main finding was that while the first two groups made up only 25% of the sample, they accounted for over 90% of all of the students who dropped out (Bowers and Sprott, 2012[41]). The majority of the dropouts experienced the low grades and slowly rising trajectory: their grades were going up, but apparently not fast enough. Students with the declining grades trajectory represented a much smaller fraction of dropouts. Together with previous studies using teacher-assigned grades (Allensworth and Luppescu, 2018[53]; Allensworth, Nagaoka and Johnson, 2018[15]; Battin-Pearson et al., 2000[54]; Bowers, 2010[34]; Bowers, 2010[55]; Finn, 1989[56]; Hargis, 1990[57]), these findings confirmed the strong predictive validity of grades on long-term student outcomes such as persistence in school and graduation. In spite of grades having a reputation in some of the psychometrics literature for being less reliable than standardised tests (Brookhart, 2015[51]), the research over the past 100 years has demonstrated that grades measure both academic achievement and a student’s ability to negotiate the social processing of school, which is highly related to school engagement, persistence, and later life outcomes (Bowers, 2009[48]; 2011[49]; 2019[50]; Brookhart et al., 2016[58]; Kelly, 2008[59]; Willingham, Pollack and Lewis, 2002[60]).
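To illustrate the kind of pattern these models detect, the sketch below approximates a growth mixture analysis by fitting a Gaussian mixture to invented three-semester non-cumulative GPA vectors (a full GMM would estimate latent intercept and slope growth factors; the simplification, and all of the data, are assumptions for illustration only):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Invented non-cumulative GPAs over three semesters
# (grade 9 sem. 1, grade 9 sem. 2, grade 10 sem. 1) for two trajectory types:
# declining grades versus high, flat grades.
declining = np.array([3.0, 2.3, 1.6]) + rng.normal(0, 0.15, (40, 3))
high_flat = np.array([3.5, 3.5, 3.5]) + rng.normal(0, 0.15, (60, 3))
gpas = np.vstack([declining, high_flat])

# Fit a two-component mixture over the trajectories and assign each
# student to the most likely trajectory class.
gmm = GaussianMixture(n_components=2, random_state=0).fit(gpas)
labels = gmm.predict(gpas)
```

In a real analysis one would compare solutions with different numbers of classes using fit statistics such as the BIC, and then test whether class membership predicts dropping out.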

In a follow-up study, using Latent Class Analysis (LCA) and a large United States nationally generalisable sample of students who drop out of school, Bowers and Sprott (2012[61]) further confirmed these two types of students who dropped out, and identified the remaining small group that accounts for less than 10% of the dropouts. LCA, like Growth Mixture Modelling, is a form of mixture modelling and allows for the identification of significantly different types of responders across a set of survey items (Collins and Lanza, 2010[62]; Masyn, 2011[44]; Muthén, 2004[40]; Vermunt and Magidson, 2002[47]), identifying a typology of responders. Here, the authors identified a three-group typology of dropouts, which corresponded to the previous dropout typology research and identified the proportions of each type across the United States (Bowers and Sprott, 2012[61]).
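LCA has no single standard implementation in general-purpose machine-learning libraries, but its estimation logic is compact. The sketch below is a minimal EM algorithm for a two-class latent class model over invented binary survey items (the data, class structure, and all parameter values are assumptions for illustration, not a reproduction of the cited analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented survey data: 200 students x 5 yes/no engagement items.
# Two latent classes: one answers mostly "yes" (1), the other mostly "no" (0).
true_theta = np.array([[0.9] * 5, [0.1] * 5])
z = np.repeat([0, 1], 100)
Y = (rng.random((200, 5)) < true_theta[z]).astype(float)

def lca_em(Y, k=2, n_iter=200, seed=1):
    """Minimal EM estimation for a latent class model with binary items."""
    rng = np.random.default_rng(seed)
    n, j = Y.shape
    pi = np.full(k, 1.0 / k)                 # latent class proportions
    theta = rng.uniform(0.25, 0.75, (k, j))  # item-response probabilities
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each respondent
        log_p = np.log(pi) + Y @ np.log(theta).T + (1 - Y) @ np.log(1 - theta).T
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class proportions and item probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ Y) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r.argmax(axis=1)

pi, theta, labels = lca_em(Y)
```

In practice, researchers estimate such models with purpose-built software that also reports fit statistics and handles label switching; the sketch only shows why LCA can recover a typology of responders from item-response patterns.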

First, 38% of the students who dropped out represented the Jaded dropout type: these students correspond to the traditional “maladaptive” conception of a dropout found in previous research, in that they do not like school, do not think that teachers are there for them, and have a generally jaded conception of the schooling process. These are students with low and declining grades. These students return to schooling the least and have low long-term outcomes.

Second, the Quiet dropout type amounted to about 53% of all dropouts. It corresponded to students with low and slowly rising grades. This was a significant finding as these students correspond to the majority of students who drop out, but are rarely identified by schooling systems as “at risk” because they generally like school, are connected to school, and have grades that are slowly rising – although their grades do not rise fast enough for them to eventually pass all of their classes and graduate.

The third type was the Involved dropout type, who accounted for about 9% of the dropouts. These students were highly involved in school, had generally high grades, and returned the most over time to complete their degrees and go on to post-secondary education. Hypothesised previously as students who are “lost at the last minute” (Menzer and Hampel, 2009[63]), these dropouts, while accounting for the smallest proportion of dropouts, would seem to be the first set of dropouts an intervention would be attempted on, as they persist the longest before dropping out, and often drop out due to either a significant life event (such as pregnancy or family mobility) or to the discovery of a mistake in their transcript and an unexpected need to take an additional course.

As noted across these studies (Bowers and Sprott, 2012[41]; 2012[61]; Bowers, Sprott and Taff, 2013[35]), this typology perspective for dropouts is a significant advance, as previously dropping out of school or being “at risk” was seen as a single monolithic category. The single-category perspective leads to a naïve view in designing interventions for students who drop out, and may be contributing to the difficulty in finding consistent intervention effects in randomised controlled trials (RCTs) focused on dropouts (Agodini and Dynarski, 2004[64]; Freeman and Simonsen, 2015[65]). For example, given the findings above, RCTs that design interventions to reconnect students with schooling, assuming the traditional Jaded conception of dropping out, address only about one-third of the students who actually drop out, and overlook the vast majority, who are Quiet and Involved dropouts. Thus, it is not surprising that many experiments have struggled to demonstrate treatment effects, as to date dropout intervention RCTs have lacked a direct focus on creating very different interventions for the three different types of dropouts (McMahon and Sembiante, 2020[14]; Sansone, 2019[66]). For example, the Jaded students may need reconnection with schooling, while the Quiet students may benefit from additional academic instruction and tutoring, and the Involved students from counselling on life events, enrolment, and early transcript requirement audits (Bowers and Sprott, 2012[61]).

Many of the studies on early warning systems and indicators estimate and evaluate a set of variables to predict dropping out by viewing the likelihood of students dropping out as a single category. As noted in the dropout predictor accuracy literature (Bowers, Sprott and Taff, 2013[35]) as well as the early warning system literature (McMahon and Sembiante, 2020[14]), it is the attention to the typology theory of dropping out that significantly contributes to the jump in accuracy for the group of Growth Mixture Modelling dropout predictors presented above, which identify more than 90% of the students who drop out. Additionally, perhaps just as important is the longitudinal nature of these accurate predictors, which captures significantly different student trajectories over time. A central theory in the dropout literature that is often overlooked in some of the early warning systems and indicators literature (McMahon and Sembiante, 2020[14]) is what has been termed the “life-course perspective” (Alexander, Entwisle and Kabbani, 2001[67]; Dupéré et al., 2018[68]; Dupéré et al., 2015[69]; Finn, 1989[56]; Pallas, 2003[70]). Here, student failure to complete upper secondary school is seen as a long-term process in which, rather than a single event happening in time, multiple stressors accumulate over time and eventually reach a threshold at which the student no longer attends school. Capturing this long-term process is therefore important for accurately predicting early which students may drop out. Growth Mixture Models (GMM) provide a useful means to analyse exactly this type of data, relying on long-term over-time data rather than single time-point predictors, which leads to the increased accuracy of growth mixture model-based predictors (Bowers, Sprott and Taff, 2013[35]).

In a study that used the full state-level dataset of all students over multiple years from the state of Wisconsin in the United States to examine a wide range of available individual variables for predicting dropping out, Knowles (2015[71]) used multiple data mining and machine-learning techniques. However, none proved more accurate than the growth mixture model predictors. Indeed, the analysis considered neither the longitudinal nature of the data nor a typology perspective, which are the core innovation and value added of Growth Mixture Models. Similarly, using the United States nationally generalisable High School Longitudinal Study of 2009 (HSLS:2009) of more than 21 000 high school students, Sansone (2019[66]) focused on single-year grade 9 data and applied a similar set of machine learning and logistic regression models to a wide range of student achievement, behaviour, and attitudinal data. The study obtained accuracy findings similar to Knowles (2015[71]), with again none of the models as accurate as the growth mixture model predictors (Sansone, 2019[66]). Interestingly, a recent study that included more than 1 million students from three United States states (Massachusetts, North Carolina, and Washington) from 1998-2013 examined the accuracy of simply using single time-point mathematics and English reading and writing standardised state test scores from grade 3 or grade 8 to predict dropping out in high school (Goldhaber, Wolff and Daly, 2020[72]). They found that their predictor accuracy was about the same as the Chicago on-track indicator, missing 25% or more of the students who eventually drop out. 
Looking beyond the United States context, a study using single-year data from hundreds of thousands of students in Guatemala and Honduras (Adelman et al., 2018[73]), along with a range of variables such as student achievement and student, school, and community demographics, obtained predictor accuracy in a similar range to Knowles (2015[71]) and Sansone (2019[66]) using simple logistic regression methods. This highlights again the strength of the Growth Mixture Modelling method, but also that the chosen data analysis method and indicators matter more for the accuracy of dropout predictors than the size or comprehensiveness of the dataset.

Together, these findings from the early warning systems and indicators literature indicate a growing usefulness of emerging technologies to identify accurate predictors of dropping out of upper secondary school that focus on groups of patterns of individuals’ data over time. While the research domain overall is still relatively new with much work yet to be done, the emerging and growing domains of pattern analytics, data analytics, data science, learning analytics, educational data mining, and machine learning (Koedinger et al., 2015[74]; Piety, 2019[75]) provide an attractive opportunity for researchers, education practitioners, and policymakers. They can take advantage of these new technologies to augment the ability of their current education systems to use the data already collected in schools in new ways that help inform decision making and instructional improvement (Agasisti and Bowers, 2017[17]; Baker and Inventado, 2014[76]; Bienkowski, Feng and Means, 2012[77]; Bowers, 2017[78]). As noted in this literature, these techniques “make visible data that have heretofore gone unseen, unnoticed, and therefore unactionable” (Bienkowski, Feng and Means, 2012[77], p. ix). In an effort to inform early warning systems and indicators research and practice, I argue here for increased attention to the opportunities that these techniques can provide to not only increase the accuracy of the indicators in early warning systems, but to increase the usefulness and actionable information provided to educators to take action in informed ways.

For example, a domain in the dropout research that could be informed through the application of data analytic technology is the issue of dropout versus school discharge (also known as pushout). This domain is one area that has received little attention in the EWS/EWI research, but is a well-known issue in the more general dropout literature. Historically, given the life-course typology perspective noted above, dropping out of school is theorised as a long-term continuous process of multiple accumulating stressors that may affect students in different ways (Alexander, Entwisle and Kabbani, 2001[67]; Dupéré et al., 2018[68]; Dupéré et al., 2015[69]). Drawing on this type of theory to match the realities of the dropout process, through the analysis techniques noted above, may additionally contribute to the accuracy of predictors that incorporate this information, such as the growth mixture model predictors. However, a much smaller body of research has demonstrated that, rather than dropping out being a singular voluntary process, some students who fail to graduate from secondary school leave voluntarily while others are involuntarily discharged (Riehl, 1999[79]; Rumberger and Palardy, 2005[80]). Thus, there is some evidence, especially under accountability pressures, that some schools may discharge low performing students by encouraging them to leave the system through a variety of techniques, in an effort to increase the average test performance of the school (Rumberger and Palardy, 2005[80]). One process where this takes place is through transferring low performing students so as to avoid the students taking the mandatory tests on which school accountability is based. With enough successive transfers from school to school, the students end up leaving the system (Riehl, 1999[79]). This behaviour by school decision makers is highly problematic, unethical, and does not support student success. 
It should thus also be detected as early as possible, with appropriate follow-on management and policy changes and professional development to refocus schools on student persistence and supports. Similar types of data analytics to those described above could be applied to data from large schooling systems to identify whether this discharge practice is taking place. Pattern analytic techniques such as longitudinal typology analysis are well suited to detecting whether students are leaving the system through a process that differs substantively from that of voluntary dropouts, especially when deep sets of longitudinal data across entire systems are available. Specifically, typology analytics such as latent class analysis and Growth Mixture Modelling may be quite useful in identifying these types of school behaviour. Thus, not only are these digital technologies helpful in informing early warning systems and indicators research and practice, but they can also be used within and across systems to detect many different types of patterns.

To date, while the research across this domain continues to develop, there are a range of pattern analytic techniques that are being developed to further inform EWS/EWI. This is what the remainder of this section will focus on.

Recent research has begun to expand the number of techniques available for early warning systems and indicators research and practice as researchers continue to explore the possibilities of using analytics from the data mining, data science, and longitudinal modelling fields (Agasisti and Bowers, 2017[17]; Piety, 2019[75]), including decision trees, longitudinal cluster analysis visualisations, and time-nested survival and hazard models.

First, Classification And Regression Trees (CART) are a family of models well known to be helpful for both researchers and decision makers, as decision trees provide empirically identified cut points, priorities, and weights on variables related to an outcome. As a form of data mining, the output of a decision tree is a figure that partitions and prioritises the most important variables identified by the algorithm in predicting the outcome, gives a cut point on each variable if it is continuous, and then proceeds to branches in the tree, showing the next set of highest priority variables, and so on (Breiman et al., 1993[81]; Quinlan, 1993[82]; Quinlan, 1990[83]). CART analysis has been used in education research and policy to identify predictors of achievement and test score performance (Koon and Petscher, 2015[84]; Koon, Petscher and Foorman, 2014[85]; Martínez Abad and Chaparro Caso López, 2017[86]), and, less often, to predict school completion and dropout. For example, Baker et al. (2020[18]) used regression trees to analyse almost 5 000 students’ data from Texas in the United States across 23 sets of variable features, confirming much of the literature on the predictors of dropping out as well as identifying additional interesting predictors, such as dress code violations and the number of times a student absence was corrected to present. In another example in dropout prediction, Soland (2013[87]; 2017[88]) used large generalisable datasets in the United States and over 40 variables to identify that college and career aspirations were important variables to include in dropout early warning prediction, in addition to grade point average, standardised test scores, and teacher expectations.
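The mechanics can be sketched with scikit-learn on invented data (the features, ground-truth rule, and thresholds below are assumptions, not findings from the cited studies); the printed tree shows the empirically chosen split variables and cut points that make CART output readable for decision makers:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
gpa = rng.uniform(0.0, 4.0, n)
absences = rng.integers(0, 40, n).astype(float)

# Invented ground-truth rule purely for illustration:
# dropout risk when low GPA is combined with high absences.
dropped = ((gpa < 2.0) & (absences > 15)).astype(int)

X = np.column_stack([gpa, absences])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, dropped)

# The printed tree lists split variables and cut points branch by branch.
print(export_text(tree, feature_names=["gpa", "absences"]))
```

Because the algorithm reports explicit cut points (here, approximately the GPA and absence thresholds baked into the invented rule), the resulting figure is directly interpretable by practitioners in a way that many other machine-learning models are not.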

Indeed, researchers using machine-learning decision trees have shown strong predictive accuracy for dropout across country contexts, including Mexico (Márquez-Vera, Morales and Soto, 2013[89]), Denmark (Sara et al., 2015[90]) and the Republic of Korea (South Korea) (Chung and Lee, 2019[91]). In Mexico, the study examined data from 670 middle school students from Zacatecas and included more than 70 predictors, finding that regression trees had the highest accuracy of predicting eventual school dropout (Márquez-Vera et al., 2013[92]; Márquez-Vera, Morales and Soto, 2013[89]).

Second, the random “forest” is another powerful data analysis technique based on decision trees. As implied by the name, a random forest contains many regression “trees”: it estimates an ensemble of many CART-style trees, each grown on a random bootstrap sample of the data while considering random subsets of the predictors at each split, and then aggregates the trees’ predictions by voting or averaging (Breiman, 2001[93]). In Denmark, using a large sample of over 72 000 students, multiple machine-learning procedures were examined for their accuracy in predicting dropout. The analysis was based on student data, including grades, absences, and missing assignments, as well as school size and enrolment and community demographics (Sara et al., 2015[90]). Random forest techniques outperformed CART accuracy. Similarly, using random forests to analyse data from over 165 000 high school students in South Korea, Chung and Lee (2019[91]) included a range of predictor variables such as absences, lateness to school or class, and amount of time on personal or extracurricular activities, clubs and volunteer work, and found strong accuracy in predicting dropout. Using the same dataset, Lee and Chung (2019[94]) noted the extreme imbalance in South Korean dropout data, as the percentage of students who drop out is quite small, which can create problems for many standard regression tree-based machine-learning algorithms. To address this, the authors included a correction for this imbalance issue, demonstrating improvement in accuracy for both regression tree and random forest algorithms (Lee and Chung, 2019[94]).
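A brief sketch of the approach with scikit-learn, on invented and deliberately imbalanced data (only a few percent of positives), using a class-weight correction as one general example of the kind of imbalance adjustment described above (the specific correction used in the cited study is not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
# Invented signal: only a small share of students drop out (y = 1),
# driven here by the first feature.
y = (X[:, 0] > 1.9).astype(int)

# class_weight="balanced" reweights the rare dropout class so the trees
# do not simply learn to predict "graduates" for everyone.
forest = RandomForestClassifier(
    n_estimators=100, class_weight="balanced", random_state=0
).fit(X, y)

# Recall on the minority (dropout) class: the share of actual dropouts
# the model flags.
minority_recall = recall_score(y, forest.predict(X))
```

With rare outcomes such as dropout, overall accuracy is misleading (always predicting “graduates” is already highly accurate), which is why imbalance corrections and minority-class recall matter in this literature.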

The use of random forest machine-learning algorithms has also recently been examined with United States data. In a study analysing data from a mid-Atlantic regional school district of about 11 000 students (Aguiar et al., 2015[95]), the authors included a range of predictors such as student achievement, attendance, and mobility. Rather than predict the actual dropout event, the authors predicted whether students fell within the top 10th percentile of their at-risk ranking, making it difficult to compare the results to other contexts or studies. More recently, a study using random forest machine learning and data from millions of students across the United States (Christie et al., 2019[96]) analysed a model that included over 70 student predictors, including student achievement, attendance, behaviour, discipline, and course credits, among many others. Unfortunately, while the authors demonstrate high accuracy of their model in predicting dropout, they do not list the 70 predictors, the algorithm, or how the predictors are weighted or used in the model, making it difficult to evaluate or use the results in further research.

Interestingly, across each of these studies using random forest machine learning, overall prediction accuracy was comparable to the growth mixture model predictors noted above, at 80-90%. Nevertheless, a consistent critique of such machine-learning models as random forests (Villagrá-Arnedo et al., 2017[97]) is that they are difficult to interpret and act upon: exactly how the code and algorithm work is not apparent or simple to report, and across so many predictors, knowing how to use the prediction and intervene is not straightforward (Knowles, 2015[71]). While a model may be accurate in predicting dropout, if the algorithm is overly complex, hidden, or unreported, it is more difficult to understand and implement models across contexts and, perhaps more importantly, to design and test effective interventions.

A persistent issue across the dropout EWS/EWI prediction domain is the need to unpack and open algorithms and predictions, examining not only the code but also the patterns identified through the method (Agasisti and Bowers, 2017[17]; Bowers et al., 2019[19]). This allows experts not only to see inside the prediction algorithm and code to understand how it works, but also to move beyond summarising students in overall averages, prediction scores, or at-risk status categories (Hawn Nelson et al., 2020[98]), viewing instead the entire life course of each individual student through the system provided to them, using the data available (Bowers, 2010[34]). One method proposed to address these issues is the visual data analytic technology of hierarchical cluster analysis (HCA) heat maps. Adapted from the big data taxonomy and bioinformatics domains (Bowers, 2010[34]; Eisen et al., 1998[99]; Wilkinson and Friendly, 2009[100]), but rarely used in education research (Kinnebrew, Segedy and Biswas, 2014[101]; Lee et al., 2016[102]; Moyer-Packenham et al., 2015[103]), cluster analysis heat maps visualise clusters of patterns of individual student data and link that information to overall student outcomes, providing a means to examine not only clusters of patterns but also the variance across clusters, students, and variables. For example, Figure 9.1 from Bowers (2010[34]) applies HCA heat maps to the entire longitudinal grading history of 188 students from two small school districts in the United States. It patterns the data across all of the students, visualises every individual course grade across all subjects from kindergarten through grade 12, and links them to overall outcomes such as graduation or dropout, as well as taking a college placement exam (in this case the ACT).
Importantly for the visualisation, each student’s data is represented in a “heat map” in which higher grades in each subject are a hotter red and lower grades a colder blue, replacing numbers with blocks of colour.

Hierarchical cluster analysis, which draws on a long history of cluster analytic techniques from data mining (Bowers, 2007[52]; Romesburg, 1984[104]; Wilkinson and Friendly, 2009[100]), provides a useful means of bringing organisation and pattern analysis to education data. Historically, education data is represented in data files with rows of students organised by either student name or identification number, and then columns of variables. When cluster analysis is applied to this type of data, rather than students being proximal to each other in an order of rows based on name or identification number, the similarity in the pattern of the variables included in the columns is used to reorganise the list, with indications of how similar or dissimilar every student is to their proximal cluster pattern in relation to all other clustered rows of students (Bowers, 2010[34]). The heat map then provides a visual means to display all of the data, and as similar rows are next to each other, blocks of colours form across the dataset, allowing researchers and decision makers to view the entire dataset and all students at the same time, linked to overall outcomes (for a review, see Bowers, 2010[34]). In comparison, more traditional ways of viewing data in education, such as bar graphs or line plots of student data, require a selection of variables to be displayed, and even for just 100 students can become uninterpretable with so many lines or bars. Rather than summarising student data to overall averages across only a small selection of variables, the HCA heat map analysis displays every student in the dataset along with every data point, patterned in a way that provides a means to examine each cluster linked to outcomes.
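The row-reordering step at the heart of an HCA heat map can be sketched as follows, with hypothetical grade data. This is not the code from Bowers (2010[34]); SciPy’s hierarchical clustering routines are used purely for illustration.

```python
# A minimal sketch of the row-reordering behind an HCA heat map: rows
# (students) are reordered so that students with similar grade patterns
# become adjacent, the property the visualisation relies on.
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(1)
# 20 students x 8 courses: two hypothetical latent groups, consistently
# high grades versus grades that decline over time
high = rng.normal(3.5, 0.3, size=(10, 8))
declining = np.linspace(3.5, 1.5, 8) + rng.normal(0, 0.3, size=(10, 8))
grades = np.vstack([high, declining])
rng.shuffle(grades)  # scramble the original row order (e.g. by student ID)

# Ward linkage on the rows, then read off the dendrogram leaf order
order = leaves_list(linkage(grades, method="ward"))
reordered = grades[order]  # rows now grouped by similarity of pattern
print(reordered.shape)
```

After reordering, the matrix can be rendered as a heat map (hotter colours for higher grades), so blocks of similar longitudinal grading patterns become visible and can be linked to outcome columns such as graduation.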

For the Bowers (2010[34]) study, each student’s grades over their entire history in their K-12 education system were patterned and linked to their overall K-12 outcomes. Importantly, specific clusters of student longitudinal grading patterns could be identified that were previously unknown to the schooling system, such as a cluster of students who obtained high grades until grades 3 and 4, after which their grades fell, matching the patterns of low-graded students, who more often drop out (see Figure 9.1, bottom, green annotated cluster). Conversely, one of the smallest clusters was a group of students who had low grades for the first few years of schooling, but whose grades rose quickly near the end of elementary school and who all graduated (see Figure 9.1, top, yellow annotated cluster). For both of these examples, all individual students in each cluster can be identified in the visualisation along with all of their actual data. This provides a unique opportunity for decision makers to see the overall scope of data that is important to upper secondary school graduation, patterned over time and across students. In addition to the overall patterns, HCA heat maps allow decision makers to focus on specific students to gain a much deeper understanding of the lived experience of that student through the system provided to them. This technology thus serves to help create more actionable interventions that can potentially be personalised to each student based on how the student has progressed through the system provided to them.

Across these types of data mining and visual data analytic techniques, a central limitation is that the models are trained on specific school, district or state data but designed to provide actionable information about the broader education system. These data mining models historically are limited in their ability to generalise to a larger population beyond the training dataset. Indeed, recent early warning systems and indicators research across three school districts in the state of Ohio in the United States demonstrated different results across each district (Stuit et al., 2016[23]). This prompted the authors to recommend that researchers and practitioners validate findings from studies from outside of their system using the same variables but including the local system data, and then compare the findings. As the authors note:

Given the variability across school districts… in consistency of predicting failure to graduate on time, and in relative accuracy of indicators to predict failure to graduate on time, the findings suggest that it is important for school districts to examine and analyze their own student-level data in order to develop their own early warning system. (p.ii)

Similarly, at a larger scale, in a recent study that included student academic performance, attendance, and behaviour data from over 1 million students in the United States to predict dropout, when the data mining prediction algorithm was applied to 30 target districts outside of the dataset, “none of the… models did as well as their average when applied to the target districts” (p.735) (Coleman, Baker and Stephenson, 2019[105]). Thus, a current central limitation of data mining techniques in dropout prediction is generalisability beyond the training dataset, yet this work remains critically important for the school systems included in the training data. For school and system leaders, education data science techniques, such as data mining, pattern analysis, data visualisation, and prediction accuracy analysis, are important recent innovations that can help support their decisions throughout the education system for actionable data use (Agasisti and Bowers, 2017[17]; Bowers, 2010[55]; Krumm, Means and Bienkowski, 2018[106]; Bowers, 2017[78]; Piety, 2019[75]). As noted in the recent literature on the intersection of data science, evidence-based improvement cycles, and school leadership, termed Education Leadership Data Analytics (ELDA) (Bowers et al., 2019[19]), this work includes:

ELDA practitioners working collaboratively with schooling system leaders and teachers to analyze, pattern, and visualize previously unknown patterns and information from the vast sets of data collected by schooling organisations, and then integrate findings in easy to understand language and digital tools into collaborative and community building evidence-based improvement cycles with stakeholders. (p.8).

Thus, this pattern analytic work within each school district and system is important to help inform decision making (Bowers, 2017[78]; Mandinach and Schildkamp, 2020[107]).

Nevertheless, this type of local replication and comparison of models can only take place if the methods and algorithms used are publicly available and open access. Following similar calls across multiple domains in research, health, industry, and government in which large datasets are analysed in similar ways to make recommendations (Stodden et al., 2016[108]; Wachter and Mittelstadt, 2019[109]), there are recent calls for transparency and open access publication of all algorithms in education. While education data must be private and confidential, if an algorithm makes a recommendation, prediction, or decision for students, teachers, or schools, it is ethical that the code and algorithm be made public and open access, free of hidden proprietary features (Agasisti and Bowers, 2017[17]; Bowers et al., 2019[19]). This open publication of, and access to, algorithms in education decision making helps prevent issues noted in other sociological data science and data mining domains, such as bank loans and incarceration recommendations. Hidden algorithms can produce unintended consequences, with bias and inequities in the algorithms and the outcomes they generate going unidentified (Benjamin, 2019[110]; Hawn Nelson et al., 2020[98]; O’Neil, 2016[111]), whereas open algorithms with the appropriate data can be tested for bias and fairness (Corbett-Davies and Goel, 2018[112]; d’Alessandro, O’Neil and LaGatta, 2017[113]; Dudik et al., n.d.[114]; Loukina, Madnani and Zechner, 2019[115]; Zehlike et al., 2017[116]).

Recently, to help local school system practitioners assess the accuracy of their early warning systems and indicators, Bowers and Zhou (2019[6]) provided a guide for Receiver Operating Characteristic analysis. As noted above, Receiver Operating Characteristic analysis allows for the comparison of the accuracy of different predictors of an outcome, with the technique of receiver operating characteristic area under the curve (ROC AUC) providing a continuous measurement comparison of accuracy along with a statistical test of the difference in accuracy between two predictors (Bowers and Zhou, 2019[6]). However, as data practitioners in schooling systems come to their jobs through many different and idiosyncratic career paths (Bowers, 2017[78]; Bowers et al., 2019[19]), it is rare that they have received training in how to compare the accuracy of different predictors of educational outcomes. Additionally, in many instances, busy education data practitioners working on early warning systems and indicators may not have the time or training to generate the code for this type of EWI accuracy assessment and comparison. This issue is made more difficult in that few research studies in education currently publish their full algorithm and code to allow for replication and practitioner use (Bowers et al., 2019[19]; Knowles, 2015[71]). To address this issue, Bowers and Zhou (2019[6]) used large, comprehensive, nationally generalisable United States datasets that are open and public to provide a guide and walkthrough for education data practitioners wishing to apply ROC AUC analysis to the EWIs within their EWS, assessing the accuracy of each indicator and statistically comparing levels of accuracy in predicting education outcomes.
Importantly, the study provides not only a guide demonstrating ROC AUC indicator accuracy analysis across a wide range of education outcomes, from dropout and upper secondary school completion to post-secondary enrolment and completion, among others (Bowers and Zhou, 2019[6]), but also supplementary materials that include all of the code, in the open source R statistical language, for each table, equation, and figure in the study. This type of study helps to promote the sharing of algorithmic resources across contexts, the testing of the code, comparison of results, and application to local educational communities for decision making (Bowers et al., 2019[19]).
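The Bowers and Zhou (2019[6]) materials are written in R; as a hedged illustration of the core comparison, the sketch below computes and compares the ROC AUC of two hypothetical indicators in Python on simulated data, using a simple bootstrap in place of a formal statistical test of the AUC difference (such as the DeLong test used in the ROC literature).

```python
# Sketch: comparing the accuracy of two hypothetical early warning
# indicators (e.g. GPA and attendance) in predicting graduation.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
graduated = rng.binomial(1, 0.8, size=n)
# Indicator 1 tracks the outcome closely; indicator 2 is noisier
gpa = graduated + rng.normal(0, 0.5, size=n)
attendance = graduated + rng.normal(0, 1.5, size=n)

auc_gpa = roc_auc_score(graduated, gpa)
auc_att = roc_auc_score(graduated, attendance)

# Simple bootstrap distribution of the difference between the two AUCs
diffs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    if graduated[idx].min() == graduated[idx].max():
        continue  # a resample must contain both classes
    diffs.append(roc_auc_score(graduated[idx], gpa[idx])
                 - roc_auc_score(graduated[idx], attendance[idx]))

print(round(auc_gpa, 2), round(auc_att, 2))
```

If the bootstrap interval of the AUC difference excludes zero, one indicator can be said to be significantly more accurate than the other, the kind of comparison the guide walks practitioners through.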

Across the early warning systems and indicators research and practice in reducing dropouts, emerging technologies from pattern analytics, data mining, learning analytics, and machine learning are providing an opportunity to expand what is known about how to accurately predict student outcomes and then provide interventions that help support student success. However, predictor identification and accuracy analysis is only one small part of a much larger system.

Adapting the central figure from a paper by engineers at Google Inc., who note that machine learning is only a very small piece of a system in which “the required surrounding infrastructure is vast and complex” (p.4) (Sculley et al., 2015[117]), Figure 9.2 places the issue of “predictor identification” in the larger context and systems of preventing student dropout. Multiple pieces work together throughout the process, starting on the left of Figure 9.2 with collecting, cleaning, and managing a wide range of student and school data over time in collaboration with community members; providing that information to stakeholders through the early warning system and dashboard; and combining it with input from students, teachers, and family members. This then motivates inferences about student challenges and success in the system which, when combined with appropriate and available resources, can be used to tailor interventions to student needs or modify current supports offered to all through the education system. Ultimately, as noted on the right of Figure 9.2, information can be gathered to feed back into the system and help support continuous improvement.

While my focus throughout this discussion has been on the specifics of early warning indicators and their accuracy, the point that these indicators are only one small part of the vast system in education organisations that can be leveraged to help support student success is exemplified by work in the Chicago context in the United States (Allensworth, Nagaoka and Johnson, 2018[15]). As noted above, the Chicago on-track indicator is a well-known and fairly accurate cross-sectional early warning indicator of students dropping out of school. Over the last two decades, the city of Chicago has seen a dramatic increase in graduation rates (Allensworth et al., 2016[118]), from 52.4% in 1998 to above 90% by 2019 (Issa, 2019[119]). However, as stated by researchers in Chicago, finding more accurate on-track indicators and providing them in an early warning system did not by itself cause this improvement: accurate indicators are necessary but not sufficient to address dropping out of school. Rather, the early warning system is a small piece of a much larger suite of systems, combined with educator action, that provides useful data to educators who then tailor interventions for students and create or modify current systems to help support student persistence. As noted by Allensworth (2013[7]):

Ten years ago, addressing high school dropout rates seemed like an intractable problem… Those were the days before wide access to student-level data systems. Now that educators can track students’ progress through school, they have early warning indicators that are available, highly predictive of when students begin high school, and readily available to high school practitioners. Furthermore, the factors that are most directly tied to eventual graduation are also the factors that are most malleable through school practices—student attendance and effort in their courses. Not only can students be identified for intervention and support, but schools can use patterns in the indicators to address structural issues that make it more difficult for students to graduate (p.68-69) (Allensworth, 2013[7]).

In order to be a core component of an effective strategy supporting student persistence and graduation, early warning systems must have indicators that embody four central tenets. Here I propose these as “The Four A’s of Early Warning Indicators”: outcome predictors must be Accurate, Accessible, Actionable, and Accountable:

  • Accurate in that the predictor actually identifies the outcome at some earlier time point, which is most easily assessed using accuracy metrics such as the receiver operating characteristic area under the curve (ROC AUC) as discussed above.

  • Accessible in that the predictor is easy to understand and open to investigation. Accessible does not mean simple, but rather that the algorithm can be accessed, examined, and understood. Accessible is the opposite of proprietary, hidden, or machine-learned algorithms that obfuscate how the prediction takes place; an accessible algorithm is instead open, public, and understandable.

  • Actionable in that the predictor can be used to take action to help tailor interventions, or modify the current system and organisation to address systemic issues. Actionable early warning indicators rely on predictors that are recent or real-time, malleable, and under the influence of stakeholders, in opposition to predictors that students, teachers, administrators, and family and community members have no control over.

  • Accountable in that predictors are regularly checked for bias, are held up to and inspected by the communities for which they are predicting, and are regularly audited and critiqued to examine the extent of algorithmic bias and promote fairness. Accountable early warning indicators include the community in the design and application of the predictors to issues of concern to the community, designing and using predictors in collaboration with the communities the system is designed to serve.

Of the four A’s, Accountable may be the most important aspect to consider for early warning systems and indicators. When accountability is addressed well, the other three A’s of Accurate, Accessible, and Actionable become clearer. Indeed, accountability in algorithmic prediction is a growing area of concern globally. In discussing the legal issues around data privacy and algorithmic prediction in the European Union and the United States, Wachter and Mittelstadt (2019[109]) note:

Unfortunately, there is little reason to assume that organizations will voluntarily offer full explanations covering the process, justification for, and accuracy of algorithmic decision making unless obliged to do so. These systems are often highly complex, involve (sensitive) personal data, and use methods and models considered to be trade secrets… An explanation might inform the individual about the outcome or decision and about underlying assumptions, predictions, or inferences that led to it. It would not, however, ensure that the decision, assumption, prediction, or inference is justified. In short, explanations of a decision do not equal justification of an inference or decision. Therefore, if the justification of algorithmic decisions is at the heart of calls for algorithmic accountability and explainability… Individual-level rights are required that would grant data subjects the ability to manage how privacy-invasive inferences are drawn, and to seek redress against unreasonable inferences when they are created or used to make important decisions. (p.503-505) (Wachter and Mittelstadt, 2019[109]).

This issue is made more problematic by the known racial, ethnic, and community bias of recent algorithmic prediction and recommendation systems in health care, finance, policing, and incarceration. As noted recently by Benjamin (2019[110]), writing in the journal Science:

Data used to train automated systems are typically historic and, in the context of health care, this history entails segregated hospital facilities, racist medical curricula, and unequal insurance structures, among other factors. Yet many industries and organizations well beyond health care are incorporating automated tools, from education and banking to policing and housing, with the promise that algorithmic decisions are less biased than their human counterpart. But human decisions comprise the data and shape the design of algorithms, now hidden by the promise of neutrality and with the power to unjustly discriminate at a much larger scale than biased individuals (p.422) (Benjamin, 2019[110]).

Thus, accountability to the community it serves is a core issue for education early warning systems. This has resulted in recent calls, in education data use and predictive systems, for an expanded and central role of the community in the planning, design, testing, and use of these data systems (Bowers et al., 2019[19]; Hawn Nelson et al., 2020[98]; Mandinach and Schildkamp, 2020[107]). These recommendations encourage the community to be an equal participant in the evidence-use cycle in collaboration with researchers, teachers, and administrators, helping to inform how early warning systems and indicators are designed, what the inferences about the outcomes are, and how those inferences will be used in positive and supportive ways, with specific and actionable steps (Hawn Nelson et al., 2020[98]).

Given the current state of the research, three areas of interest to advance early warning indicator research and practice can be highlighted:

First, an issue across the early warning indicator research domain is the replication of accuracy results and algorithms, and the testing of new predictors across multiple contexts and datasets. Each study that identifies an early warning indicator typically analyses the data, notes the accuracy metrics (or not), and then encourages application of the identified indicator to practice. Yet, especially for machine-learning algorithms, but also for all indicators, there is a constant need to replicate each indicator across contexts, confirm accuracy and test for bias to provide a baseline comparison, and then innovate on what is already known. Thus, more studies of early warning indicators should replicate the previously known most accurate indicators for an outcome with the new dataset, following recent examples (Bowers, Sprott and Taff, 2013[35]; Coleman, Baker and Stephenson, 2019[105]; Knowles, 2015[71]), report the receiver operating characteristic area under the curve (ROC AUC) numbers, compare those numbers to any new analytics, and then publish the code open access (Agasisti and Bowers, 2017[17]; Bowers et al., 2019[19]). To help encourage this type of code sharing and replication across datasets, implementing “FAIR” data standards in this domain would dramatically spur innovation by encouraging de-identified datasets and algorithms that are Findable, Accessible, Interoperable, and Reusable (Austin et al., 2017[120]; Singh et al., 2019[121]). For a detailed summary of the FAIR data standards, see Austin et al. (2017[120]).

Second, the vast majority of EWIs within most EWSs are cross-sectional, single time-point variables, many of which are collected in lower secondary school. However, as shown above, the most accurate predictors use longitudinal data, examining the trajectories of students over time. Thus, a future direction in this domain is to include ever more longitudinal and time-nested data, starting much earlier than secondary school. However, time as a variable can create many problems for both traditional statistics and data mining, as a student’s data over time are dependent on earlier and later time points, violating the independence assumption of regression-based statistics. Additionally, dropout data has another time-dependent issue: as time moves forward, the sample dynamically changes as students drop out and “leave” the dataset. This time-nested longitudinal data dependency, at both the individual level and the sample level, can present difficulties to researchers and practitioners looking to apply methods from the cross-sectional literature. While the dropout research to date has rarely confronted this issue, a few studies have worked to apply models drawn from epidemiology, in which these longitudinal time-dependent conditional data issues are ever present (Bowers, 2010[55]; Lamote et al., 2013[122]). Interestingly, these issues can be modelled quite well using survival modelling techniques, specifically discrete-time hazard models (Singer and Willett, 2003[123]). As Bowers (2010[55]) shows, the hazard of dropping out is time dependent, and estimating that hazard depends on the sample of remaining students. When this is taken into account, multiple predictors of interest can then be tested for when they exert the largest effects on the hazard of dropping out at specific time points.
This focus on hazard of dropping out at different time points can potentially inform interventions, as one intervention that might work early in secondary school may have no effect later, as well as the opposite.
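A minimal sketch of the discrete-time hazard approach, under the usual person-period formulation (Singer and Willett, 2003[123]): each simulated student contributes one row per time period until dropping out or reaching the end of the window, and a logistic regression with period dummies estimates a time-varying baseline hazard. All data and parameter values here are hypothetical.

```python
# Sketch of a discrete-time hazard model on simulated person-period data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_students, periods = 2000, 4  # e.g. grades 9-12
base_hazard = np.array([0.02, 0.10, 0.06, 0.03])  # assumed time-varying risk
risk = rng.normal(size=n_students)  # one student-level predictor

rows, events = [], []
for i in range(n_students):
    for t in range(periods):
        h = min(base_hazard[t] * np.exp(0.8 * risk[i]), 1.0)
        dropped = rng.random() < h
        rows.append(np.append(np.eye(periods)[t], risk[i]))  # period dummies
        events.append(int(dropped))
        if dropped:
            break  # student leaves the risk set, shrinking the sample

X, y = np.array(rows), np.array(events)
model = LogisticRegression(fit_intercept=False).fit(X, y)
# Per-period baseline hazards recovered from the period-dummy coefficients
hazards = 1 / (1 + np.exp(-model.coef_[0][:periods]))
print(np.round(hazards, 3))
```

Because each student leaves the risk set at dropout, the dynamically shrinking sample noted above is handled naturally, and the estimated per-period hazards show when risk peaks, which is the information needed to time interventions.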

Third, a well-known issue with dropping out in the United States specifically is what has been termed “dropout factories” (Balfanz et al., 2010[124]; Balfanz and Legters, 2006[125]; Balfanz and West, 2009[126]). This term acknowledges that across a country there may be specific schools that account for a large proportion of students who drop out; in the United States, some schools graduate only 50% of their students. These “dropout factories” indicate that dropping out is to some extent located at the school and schooling system level of analysis. This implies a multilevel modelling framework of appropriately nesting students within schools (Hox, 2010[127]; Raudenbush and Bryk, 2002[128]) and then estimating the variance in dropping out that is attributable to the student level and the school level (Lamote et al., 2013[122]), an issue implied in the dropout versus discharge example above (Rumberger and Palardy, 2005[80]). For early warning systems and indicators research, a productive future area of research is using these hierarchical modelling frameworks in conjunction with the pattern and predictive analyses described above to provide actionable information. For example, multilevel growth mixture modelling or multilevel latent class analysis may show which schools have different proportions of students from different groups within a dropout typology. In an example of the importance of taking into account the dependent nature of time-nested data, Lamote et al. (2013[122]) used a multilevel discrete-time hazard model and demonstrated the importance of taking longitudinal student mobility across schools into account in the EWI literature. Thus, the school level is an important variable to include in EWS/EWI predictor research. If specific types of dropping out are co-located in a specific school building, then school management and policy can focus on the specific organisation of that building to help improve student outcomes for its community.
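As a back-of-the-envelope illustration of the school-level location of dropout (hypothetical numbers throughout), the sketch below simulates a handful of “dropout factory” schools and decomposes the variance in dropping out between and within schools. A full analysis would instead fit a multilevel model, such as a random-intercept logistic regression, as in Lamote et al. (2013[122]).

```python
# Sketch: how much of the variance in dropping out sits at the school
# level when a few schools account for a large share of dropouts?
import numpy as np

rng = np.random.default_rng(4)
n_schools, n_per = 50, 200
# Hypothetical school-level dropout probabilities: most schools ~8%,
# five "dropout factories" at ~50%
school_p = np.full(n_schools, 0.08)
school_p[:5] = 0.5
dropout = rng.binomial(1, np.repeat(school_p, n_per))

groups = np.repeat(np.arange(n_schools), n_per)
school_means = np.array(
    [dropout[groups == s].mean() for s in range(n_schools)])

between = school_means.var()  # variance across school dropout rates
total = dropout.var()
icc = between / total          # crude intraclass correlation
print(round(icc, 2))
```

A non-trivial intraclass correlation here signals that dropout risk clusters by school, motivating the multilevel nesting of students within schools described above rather than treating all students as a single pooled sample.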

In conclusion, there have been many recent advances in education early warning systems and indicators. Thanks to data analytics techniques, some early warning systems predict dropout with an accuracy above 80-90%. However, specific domains remain in need of further research and application to practice. For example, most techniques relying on machine learning and data mining do not easily transfer from one educational context to another. More worrisome, many early warning indicators are still grossly inaccurate.

The four A’s of early warning indicators of Accurate, Accessible, Actionable and Accountable provide a useful framework for advancing the field of predictive analytics and algorithms for helping support education outcomes. In the end, the technology of early warning systems and indicators is only a small piece of the much larger system of data use in schools, which includes a strong role and voice of the community, ethical use of data and algorithms throughout the process, and a continual focus on how to support student success through providing individual interventions and addressing system-level offerings and policies. Developing techniques and tools that make data actionable is just one of the steps towards effective actions supporting learning improvement and student success, but a promising one that digitalisation and innovations in data mining and data analytics should make sustainable in the near future.


[73] Adelman, M. et al. (2018), “Predicting school dropout with administrative data: new evidence from Guatemala and Honduras”, Education Economics, Vol. 26/4, pp. 356-372, https://doi.org/10.1080/09645292.2018.1433127.

[17] Agasisti, T. and A. Bowers (2017), “Data Analytics and Decision-Making in Education: Towards the Educational Data Scientist as a Key Actor in Schools and Higher Education Institutions”, in Johnes, G. et al. (eds.), Handbook on the Economics of Education, Edward Elgar Publishing, Cheltenham, UK, https://doi.org/10.7916/D8PR95T2.

[64] Agodini, R. and M. Dynarski (2004), “Are experiments the only option? A look at dropout prevention programs”, The Review of Economics and Statistics, Vol. 86/1, pp. 180-194.

[95] Aguiar, E. et al. (2015), Who, when, and why: a machine learning approach to prioritizing students at risk of not graduating high school on time, ACM, Poughkeepsie, New York, https://doi.org/10.1145/2723576.2723619.

[67] Alexander, K., D. Entwisle and N. Kabbani (2001), “The dropout process in life course perspective: Early risk factors at home and school”, The Teachers College Record, Vol. 103/5, pp. 760-822.

[7] Allensworth, E. (2013), “The Use of Ninth-Grade Early Warning Indicators to Improve Chicago Schools”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/1, pp. 68-83, https://doi.org/10.1080/10824669.2013.745181.

[25] Allensworth, E. and J. Easton (2007), What matters for staying on-track and graduating in Chicago public high schools: A close look at course grades, failures, and attendance in the freshman year, The University of Chicago, http://www.consortium-chicago.org.

[24] Allensworth, E. and J. Easton (2005), The on-track indicator as a predictor of High School graduation, http://www.consortium-chicago.org/publications/p78.html.

[38] Allensworth, E. et al. (2014), Looking Forward to High School and College: Middle Grade Indicators of Readiness in Chicago Public Schools, http://ccsr.uchicago.edu/sites/default/files/publications/Middle%20Grades%20Report.pdf.

[118] Allensworth, E. et al. (2016), High school graduation rates through two decades of district change: The influence of policies, data records, and demographic shifts., http://consortium.uchicago.edu/sites/default/files/publications/High%20School%20Graduation%20Rates-Jun2016-Consortium.pdf.

[53] Allensworth, E. and S. Luppescu (2018), Why do students get good grades, or bad ones? The influence of the teacher, class, school, and student, https://consortium.uchicago.edu/sites/default/files/publications/Why%20Do%20Students%20Get-Apr2018-Consortium.pdf.

[15] Allensworth, E., J. Nagaoka and D. Johnson (2018), High School Graduation and College Readiness Indicator Systems: What We Know, What We Need to Know, https://consortium.uchicago.edu/sites/default/files/publications/High%20School%20Graduation%20and%20College-April2018-Consortium.pdf.

[120] Austin, C. et al. (2017), “Key components of data publishing: using current best practices to develop a reference model for data publishing”, International Journal on Digital Libraries, Vol. 18/2, pp. 77-92, https://doi.org/10.1007/s00799-016-0178-2.

[18] Baker, R. et al. (2020), “Predicting K-12 Dropout”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 25/1, pp. 28-54, https://doi.org/10.1080/10824669.2019.1670065.

[124] Balfanz, R. et al. (2010), Building a grad nation: Progress and challenge in ending the high school dropout epidemic, https://www.americaspromise.org/resource/building-grad-nation-progress-challenge-ending-high-school-dropout-epidemic-november-2010 (accessed on 18 January 2021).

[8] Balfanz, R. and V. Byrnes (2019), “Early Warning Indicators and Intervention Systems: State of the Field”, in Fredricks, J., A. Reschly and S. Christenson (eds.), Handbook of Student Engagement Interventions, Elsevier, https://doi.org/10.1016/b978-0-12-813413-9.00004-8.

[33] Balfanz, R., L. Herzog and D. Mac Iver (2007), “Preventing Student Disengagement and Keeping Students on the Graduation Path in Urban Middle-Grades Schools: Early Identification and Effective Interventions”, Educational Psychologist, Vol. 42/4, pp. 223-235, https://doi.org/10.1080/00461520701621079.

[125] Balfanz, R. and N. Legters (2006), “Closing ’dropout factories’: The graduation-rate crisis we know, and what can be done about it”, Education Week, Vol. 25, pp. 42-43.

[126] Balfanz, R. and T. West (2009), Raising graduation rates: A series of data briefs: Progress toward increasing national and state graduation rates, https://new.every1graduates.org/raising-graduation-rates/ (accessed on 18 January 2021).

[54] Battin-Pearson, S. et al. (2000), “Predictors of early high school dropout: A test of five theories”, Journal of Educational Psychology, Vol. 92/3, pp. 568-582, https://doi.org/10.1037/0022-0663.92.3.568.

[2] Belfield, C. and H. Levin (2007), “The education attainment gap: Who’s affected, how much, and why it matters”, in Belfield, C. and H. Levin (eds.), The price we pay: Economic and social consequences of inadequate education, Brookings Institution Press, Washington D.C.

[110] Benjamin, R. (2019), “Assessing risk, automating racism”, Science, Vol. 366/6464, pp. 421-422, https://doi.org/10.1126/science.aaz3873.

[77] Bienkowski, M., M. Feng and B. Means (2012), Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief, http://www.ed.gov/edblogs/technology/files/2012/03/edm-la-brief.pdf.

[50] Bowers, A. (2019), “Towards Measures of Different and Useful Aspects of Schooling: Why Schools Need Both Teacher Assigned Grades and Standardized Assessments”, in Brookhart, S. and J. McMillan (eds.), Classroom Assessment as Educational Measurement, National Council on Measurement in Education (NCME) Book Series, Routledge, New York.

[78] Bowers, A. (2017), “Quantitative Research Methods Training in Education Leadership and Administration Preparation Programs as Disciplined Inquiry for Building School Improvement Capacity”, Journal of Research on Leadership Education, Vol. 12/1, pp. 72-96, https://doi.org/10.1177/1942775116659462.

[49] Bowers, A. (2011), “What’s in a grade? The multidimensional nature of what teacher-assigned grades assess in high school”, Educational Research and Evaluation, Vol. 17/3, pp. 141-159, https://doi.org/10.1080/13803611.2011.597112.

[34] Bowers, A. (2010), “Analyzing the longitudinal K-12 grading histories of entire cohorts of students: Grades, data driven decision making, dropping out and hierarchical cluster analysis”, Practical Assessment, Research and Evaluation, Vol. 15/7, pp. 1-18, http://pareonline.net/pdf/v15n7.pdf.

[55] Bowers, A. (2010), “Grades and Graduation: A Longitudinal Risk Perspective to Identify Student Dropouts”, The Journal of Educational Research, Vol. 103/3, pp. 191-207, https://doi.org/10.1080/00220670903382970.

[48] Bowers, A. (2009), “Reconsidering grades as data for decision making: more than just academic knowledge”, Journal of Educational Administration, Vol. 47/5, pp. 609-629, https://doi.org/10.1108/09578230910981080.

[52] Bowers, A. (2007), Grades and data driven decision making: Issues of variance and student patterns, Michigan State University, East Lansing, http://files.eric.ed.gov/fulltext/ED538574.pdf.

[19] Bowers, A. et al. (2019), Education leadership data analytics (ELDA): A white paper report on the 2018 ELDA Summit, Teachers College, Columbia University, https://doi.org/10.7916/d8-31a0-pt97.

[41] Bowers, A. and R. Sprott (2012), “Examining the Multiple Trajectories Associated with Dropping Out of High School: A Growth Mixture Model Analysis”, The Journal of Educational Research, Vol. 105/3, pp. 176-195, https://doi.org/10.1080/00220671.2011.552075.

[61] Bowers, A. and R. Sprott (2012), “Why Tenth Graders Fail to Finish High School: A Dropout Typology Latent Class Analysis”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 17/3, pp. 129-148, https://doi.org/10.1080/10824669.2012.692071.

[35] Bowers, A., R. Sprott and S. Taff (2013), “Do we know who will drop out? A review of the predictors of dropping out of high school: Precision, sensitivity and specificity”, The High School Journal, Vol. 96/2, pp. 77-100, http://muse.jhu.edu/journals/high_school_journal/v096/96.2.bowers.html.

[42] Bowers, A. and B. White (2014), “Do Principal Preparation and Teacher Qualifications Influence Different Types of School Growth Trajectories in Illinois? A Growth Mixture Model Analysis.”, Journal of Educational Administration, Vol. 52/5, pp. 705-736.

[6] Bowers, A. and X. Zhou (2019), “Receiver Operating Characteristic (ROC) Area Under the Curve (AUC): A Diagnostic Measure for Evaluating the Accuracy of Predictors of Education Outcomes”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 24/1, pp. 20-46, https://doi.org/10.1080/10824669.2018.1523734.

[93] Breiman, L. (2001), “Random forests”, Machine Learning, Vol. 45/1, pp. 5-32.

[81] Breiman, L. et al. (1993), Classification and regression trees, Routledge, New York.

[51] Brookhart, S. (2015), “Graded Achievement, Tested Achievement, and Validity”, Educational Assessment, Vol. 20/4, pp. 268-296, https://doi.org/10.1080/10627197.2015.1093928.

[58] Brookhart, S. et al. (2016), “A Century of Grading Research”, Review of Educational Research, Vol. 86/4, pp. 803-848, https://doi.org/10.3102/0034654316672069.

[9] Carl, B. et al. (2013), “Theory and Application of Early Warning Systems for High School and Beyond”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/1, pp. 29-49, https://doi.org/10.1080/10824669.2013.745374.

[96] Christie, S. et al. (2019), Machine-Learned School Dropout Early Warning at Scale, Paper presented at the 12th International Conference on Educational Data Mining, Montreal, Canada, https://www.infinitecampus.com/pdf/Machine-learned-School-Dropout-Early-Warning-at-Scale.pdf.

[91] Chung, J. and S. Lee (2019), “Dropout early warning systems for high school students using machine learning”, Children and Youth Services Review, Vol. 96, pp. 346-353, https://doi.org/10.1016/j.childyouth.2018.11.030.

[105] Coleman, C., R. Baker and S. Stephenson (2019), “A Better Cold-Start for Early Prediction of Student At-Risk Status in New School Districts”, Paper presented at the 12th International Conference on Educational Data Mining (EDM 2019).

[62] Collins, L. and S. Lanza (2010), Latent Class and Latent Transition Analysis: With Applications in the Social, Behavioral, and Health Sciences, Wiley, Hoboken, NJ.

[112] Corbett-Davies, S. and S. Goel (2018), The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, Stanford University.

[113] d’Alessandro, B., C. O’Neil and T. LaGatta (2017), “Conscientious Classification: A Data Scientist’s Guide to Discrimination-Aware Classification”, Big Data, Vol. 5/2, pp. 120-134, https://doi.org/10.1089/big.2016.0048.

[10] Davis, M., L. Herzog and N. Legters (2013), “Organizing Schools to Address Early Warning Indicators (EWIs): Common Practices and Challenges”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/1, pp. 84-100, https://doi.org/10.1080/10824669.2013.745210.

[114] Dudik, M. et al. (n.d.), fairlearn: A Python package to assess and improve fairness of machine learning models, Microsoft, Redmond, WA, https://fairlearn.github.io/.

[68] Dupéré, V. et al. (2018), “High School Dropout in Proximal Context: The Triggering Role of Stressful Life Events”, Child Development, Vol. 89/2, pp. e107-e122, https://doi.org/10.1111/cdev.12792.

[69] Dupéré, V. et al. (2015), “Stressors and Turning Points in High School and Dropout”, Review of Educational Research, Vol. 85/4, pp. 591-629, https://doi.org/10.3102/0034654314559845.

[21] Dynarski, M. et al. (2008), Dropout prevention: A practice guide, http://ies.ed.gov/ncee/wwc/pdf/practiceguides/dp_pg_090308.pdf.

[22] Dynarski, M. and P. Gleason (2002), “How can we help? What we have learned from recent federal dropout prevention evaluations”, Journal of Education for Students Placed at Risk, Vol. 2002/1, pp. 43-69, https://doi.org/10.1207/S15327671ESPR0701_4.

[99] Eisen, M. et al. (1998), “Cluster analysis and display of genome-wide expression patterns”, Proceedings of the National Academy of Sciences, Vol. 95, pp. 14863-14868, http://rana.lbl.gov/papers/Eisen_PNAS_1998.pdf.

[27] Faria, A. et al. (2017), Getting students on track for graduation: Impacts of the Early Warning Intervention and Monitoring System after one year, https://ies.ed.gov/ncee/edlabs/projects/project.asp?projectID=388.

[56] Finn, J. (1989), “Withdrawing from school”, Review of Educational Research, Vol. 59/2, pp. 117-142.

[11] Frazelle, S. and A. Nagel (2015), A practitioner’s guide to implementing early warning systems, http://ies.ed.gov/ncee/edlabs/regions/northwest/pdf/REL_2015056.pdf.

[65] Freeman, J. and B. Simonsen (2015), “Examining the Impact of Policy and Practice Interventions on High School Dropout and School Completion Rates”, Review of Educational Research, Vol. 85/2, pp. 205-248, https://doi.org/10.3102/0034654314554431.

[72] Goldhaber, D., M. Wolff and T. Daly (2020), Assessing the Accuracy of Elementary School Test Scores as Predictors of Students’ High School Outcomes: CALDER Working Paper No. 235-0520, https://caldercenter.org/sites/default/files/CALDER%20WP%20235-0520_0.pdf.

[57] Hargis, C. (1990), Grades and grading practices: Obstacles to improving education and helping at-risk students, Charles C. Thomas, Springfield.

[26] Hartman, J. et al. (2011), Applying an on-track indicator for high school graduation: Adapting the Consortium on Chicago School Research indicator for five Texas districts, http://ies.ed.gov/ncee/edlabs/regions/southwest/pdf/REL_2011100.pdf.

[98] Hawn Nelson, A. et al. (2020), A Toolkit for Centering Racial Equity Throughout Data Integration, https://www.aisp.upenn.edu/wp-content/uploads/2020/05/AISP-Toolkit_5.27.20.pdf.

[127] Hox, J. (2010), Multilevel Analysis, Routledge, https://doi.org/10.4324/9780203852279.

[29] India AI (2019), AI is being used to identify potential school dropout rate in Andhra Pradesh, https://indiaai.gov.in/case-study/ai-is-being-used-to-identify-potential-school-dropout-rate-in-andhra-pradesh (accessed on 29 April 2021).

[32] Interview between Pasi Silander and Stéphan Vincent-Lancrin (2021), Private communication between Pasi Silander, City of Helsinki, and Stéphan Vincent-Lancrin, OECD.

[119] Issa, N. (2019), “Chicago high school dropout rate hits all-time low, CPS says”, Chicago Sun-Times, https://chicago.suntimes.com/2019/8/22/20828653/cps-chicago-public-schools-dropout-rate (accessed on 18 January 2021).

[39] Janosz, M. et al. (2008), “School engagement trajectories and their differential predictive relations”, Journal of Social Issues, Vol. 64/1, pp. 21-40.

[59] Kelly, S. (2008), “What Types of Students’ Effort Are Rewarded with High Marks?”, Sociology of Education, Vol. 81/1, pp. 32-52, https://doi.org/10.1177/003804070808100102.

[12] Kemple, J., M. Segeritz and N. Stephenson (2013), “Building On-Track Indicators for High School Graduation and College Readiness: Evidence from New York City”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/1, pp. 7-28, https://doi.org/10.1080/10824669.2013.747945.

[101] Kinnebrew, J., J. Segedy and G. Biswas (2014), “Analyzing the temporal evolution of students’ behaviors in open-ended learning environments”, Metacognition and Learning, Vol. 9/2, pp. 187-215, https://doi.org/10.1007/s11409-014-9112-4.

[71] Knowles, J. (2015), “Of Needles and Haystacks: Building an Accurate Statewide Dropout Early Warning System in Wisconsin”, Journal of Educational Data Mining, Vol. 7/3, pp. 18-67, http://www.educationaldatamining.org/JEDM/index.php/JEDM/article/view/JEDM082.

[74] Koedinger, K. et al. (2015), “Data mining and education”, Wiley Interdisciplinary Reviews: Cognitive Science, Vol. 6/4, pp. 333-353, https://doi.org/10.1002/wcs.1350.

[84] Koon, S. and Y. Petscher (2015), Comparing methodologies for developing an early warning system, http://ies.ed.gov/ncee/edlabs/regions/southeast/pdf/REL_2015077.pdf.

[85] Koon, S., Y. Petscher and B. Foorman (2014), Using evidence-based decision trees instead of formulas to identify at-risk readers (REL 2014-036).

[106] Krumm, A., B. Means and M. Bienkowski (2018), Learning Analytics Goes to School, Routledge, New York, https://doi.org/10.4324/9781315650722.

[122] Lamote, C. et al. (2013), “Dropout in secondary education: an application of a multilevel discrete-time hazard model accounting for school changes”, Quality and Quantity, Vol. 47/5, pp. 2425-2446.

[76] Larusson, J. and B. White (eds.) (2014), Educational data mining and learning analytics, Springer, New York, http://link.springer.com/chapter/10.1007%2F978-1-4614-3305-7_4.

[102] Lee, J. et al. (2016), Hierarchical Cluster Analysis Heatmaps and Pattern Analysis: An Approach for Visualizing Learning Management System Interaction Data, Paper presented at the International Conference of Educational Data Mining (EDM), Raleigh, NC, http://www.educationaldatamining.org/EDM2016/proceedings/paper_34.pdf.

[94] Lee, S. and J. Chung (2019), “The Machine Learning-Based Dropout Early Warning System for Improving the Performance of Dropout Prediction”, Applied Sciences, Vol. 9/15, p. 3093, https://doi.org/10.3390/app9153093.

[115] Loukina, A., N. Madnani and K. Zechner (2019), “The many dimensions of algorithmic fairness in educational applications”, Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, https://doi.org/10.18653/v1/w19-4401.

[3] Lyche, C. (2010), “Taking on the Completion Challenge: A Literature Review on Policies to Prevent Dropout and Early School Leaving”, OECD Education Working Papers, No. 53, OECD Publishing, Paris, https://dx.doi.org/10.1787/5km4m2t59cmr-en.

[13] Mac Iver, M. (2013), “Early Warning Indicators of High School Outcomes”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/1, pp. 1-6, https://doi.org/10.1080/10824669.2013.745375.

[20] Mac Iver, M. and M. Messel (2013), “The ABCs of Keeping On Track to Graduation: Research Findings from Baltimore”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/1, pp. 50-67.

[28] Mac Iver, M. et al. (2019), “An Efficacy Study of a Ninth-Grade Early Warning Indicator Intervention”, Journal of Research on Educational Effectiveness, Vol. 12/3, pp. 363-390, https://doi.org/10.1080/19345747.2019.1615156.

[107] Mandinach, E. and K. Schildkamp (2020), “Misconceptions about data-based decision making in education: An exploration of the literature”, Studies in Educational Evaluation, https://doi.org/10.1016/j.stueduc.2020.100842.

[92] Márquez-Vera, C. et al. (2013), “Predicting student failure at school using genetic programming and different data mining approaches with high dimensional and imbalanced data”, Applied Intelligence, Vol. 38/3, pp. 315-330, https://doi.org/10.1007/s10489-012-0374-8.

[89] Márquez-Vera, C., C. Morales and S. Soto (2013), “Predicting School Failure and Dropout by Using Data Mining Techniques”, IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, Vol. 8/1, pp. 7-14, https://doi.org/10.1109/rita.2013.2244695.

[43] Martin, D. and T. von Oertzen (2015), “Growth Mixture Models Outperform Simpler Clustering Algorithms When Detecting Longitudinal Heterogeneity, Even With Small Sample Sizes”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 22/2, pp. 264-275, https://doi.org/10.1080/10705511.2014.936340.

[86] Martínez Abad, F. and A. Chaparro Caso López (2017), “Data-mining techniques in detecting factors linked to academic achievement”, School Effectiveness and School Improvement, Vol. 28/1, pp. 39-55, https://doi.org/10.1080/09243453.2016.1235591.

[44] Masyn, K. (2011), Latent class analysis and finite mixture modeling, Oxford University Press, Oxford.

[14] McMahon, B. and S. Sembiante (2020), “Re‐envisioning the purpose of early warning systems: Shifting the mindset from student identification to meaningful prediction and intervention”, Review of Education, Vol. 8/1, pp. 266-301, https://doi.org/10.1002/rev3.3183.

[63] Menzer, J. and R. Hampel (2009), “Lost at the last minute”, Phi Delta Kappan, Vol. 90/9, pp. 660-664.

[103] Moyer-Packenham, P. et al. (2015), “Examining Patterns in Second Graders’ Use of Virtual Manipulative Mathematics Apps through Heatmap Analysis”, International Journal of Educational Studies in Mathematics, Vol. 2/2, pp. 1-16, https://doi.org/10.17278/ijesim.2015.02.004.

[31] MSV, J. (2016), “How Microsoft is Making Big Impact with Machine Learning”, Forbes, https://www.forbes.com/sites/janakirammsv/2016/07/30/how-microsoft-is-making-big-impact-with-machine-learning/?sh=784705a02f16 (accessed on 29 April 2021).

[40] Muthén, B. (2004), “Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data”, in Kaplan, D. (ed.), The Sage handbook of quantitative methodology for the social sciences, Sage Publications, Thousand Oaks, CA, http://www.statmodel.com/papers.shtml.

[1] OECD (2019), Education at a Glance 2019: OECD Indicators, OECD Publishing, Paris, https://dx.doi.org/10.1787/f8d7880d-en.

[111] O’Neil, C. (2016), Weapons of math destruction: How big data increases inequality and threatens democracy, Broadway Books, New York.

[70] Pallas, A. (2003), “Educational transitions, trajectories, and pathways”, in Mortimer, J. and M. Shanahan (eds.), Handbook of the life course, Kluwer Academic/Plenum Publishers, New York.

[75] Piety, P. (2019), “Components, Infrastructures, and Capacity: The Quest for the Impact of Actionable Data Use on P–20 Educator Practice”, Review of Research in Education, Vol. 43/1, pp. 394-421, https://doi.org/10.3102/0091732x18821116.

[16] Piety, P., D. Hickey and M. Bishop (2014), Educational data sciences: Framing emergent practices for analytics of learning, organizations, and systems, ACM.

[82] Quinlan, J. (1993), C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA.

[83] Quinlan, J. (1990), “Probabilistic decision trees”, in Kodratoff, Y. and R. Michalski (eds.), Machine learning, Morgan Kaufmann, San Francisco (CA), https://doi.org/10.1016/B978-0-08-051055-2.50011-0.

[45] Ram, N. and K. Grimm (2009), “Methods and Measures: Growth mixture modeling: A method for identifying differences in longitudinal change among unobserved groups”, International Journal of Behavioral Development, Vol. 33/6, pp. 565-576, https://doi.org/10.1177/0165025409343765.

[128] Raudenbush, S. and A. Bryk (2002), Hierarchical linear models: Applications and data analysis methods (2nd ed.), Sage, Thousand Oaks, CA.

[79] Riehl, C. (1999), “Labeling and letting go: An organizational analysis of how high school students are discharged as dropouts”, in Pallas, A. (ed.), Research in sociology of education and socialization, JAI Press, New York.

[104] Romesburg, H. (1984), Cluster analysis for researchers, Lifetime Learning Publications, Belmont, CA.

[4] Rumberger, R. (2011), Dropping Out: Why Students Drop Out of High School and What Can Be Done About It, Harvard University Press, Cambridge, Mass.

[5] Rumberger, R. et al. (2017), Preventing dropout in secondary schools (NCEE 2017-4028), https://ies.ed.gov/ncee/wwc/Docs/PracticeGuide/wwc_dropout_092617.pdf.

[80] Rumberger, R. and G. Palardy (2005), “Test Scores, Dropout Rates, and Transfer Rates as Alternative Indicators of High School Performance”, American Educational Research Journal, Vol. 42/1, pp. 3-42, https://doi.org/10.3102/00028312042001003.

[66] Sansone, D. (2019), “Beyond Early Warning Indicators: High School Dropout and Machine Learning”, Oxford Bulletin of Economics and Statistics, Vol. 81/2, pp. 456-485, https://doi.org/10.1111/obes.12277.

[90] Sara, N. et al. (2015), High-School Dropout Prediction Using Machine Learning: A Danish Large-scale Study, Paper presented at the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium.

[117] Sculley, D. et al. (2015), “Hidden technical debt in machine learning systems”, Paper presented at Advances in Neural Information Processing Systems.

[123] Singer, J. and J. Willett (2003), Applied Longitudinal Data Analysis, Oxford University Press, https://doi.org/10.1093/acprof:oso/9780195152968.001.0001.

[121] Singh, L. et al. (2019), “NSF BIGDATA PI Meeting - Domain-Specific Research Directions and Data Sets”, ACM SIGMOD Record, Vol. 47/3, pp. 32-35, https://doi.org/10.1145/3316416.3316425.

[88] Soland, J. (2017), “Combining Academic, Noncognitive, and College Knowledge Measures to Identify Students Not on Track For College: A Data-Driven Approach”, Research & Practice in Assessment, Vol. 12/Summer 2017, pp. 5-19, http://www.rpajournal.com/dev/wp-content/uploads/2017/07/Summer_2017.pdf#page=5.

[87] Soland, J. (2013), “Predicting High School Graduation and College Enrollment: Comparing Early Warning Indicator Data and Teacher Intuition”, Journal of Education for Students Placed at Risk (JESPAR), Vol. 18/3-4, pp. 233-262, https://doi.org/10.1080/10824669.2013.833047.

[108] Stodden, V. et al. (2016), “Enhancing reproducibility for computational methods”, Science, Vol. 354/6317, pp. 1240-1241, https://doi.org/10.1126/science.aah6168.

[23] Stuit, D. et al. (2016), Identifying early warning indicators in three Ohio school districts, http://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/REL_2016118.pdf.

[36] Swets, J. (1988), “Measuring the accuracy of diagnostic systems”, Science, Vol. 240/4857, pp. 1285-1293, https://doi.org/10.1126/science.3287615.

[37] Swets, J., R. Dawes and J. Monahan (2000), “Psychological Science Can Improve Diagnostic Decisions”, Psychological Science in the Public Interest, Vol. 1/1, pp. 1-26, https://doi.org/10.1111/1529-1006.001.

[30] The Wire (2016), Aadhaar in Andhra: Chandrababu Naidu, Microsoft Have a Plan For Curbing School Dropouts, https://thewire.in/politics/aadhaar-in-andhra-chandrababu-naidu-microsoft-have-a-plan-for-curbing-school-dropouts (accessed on 29 April 2021).

[47] Vermunt, J. and J. Magidson (2002), “Latent class cluster analysis”, in Hagenaars, J. and A. McCutcheon (eds.), Applied latent class analysis, Cambridge University Press.

[46] Vermunt, J., B. Tran and J. Magidson (2008), “Latent class models in longitudinal research”, in Menard, S. (ed.), Handbook of longitudinal research: Design, measurement and analysis, Elsevier, Burlington, MA, http://www.statisticalinnovations.com/articles/VermuntTranMagidson2006.pdf.

[97] Villagrá-Arnedo, C. et al. (2017), “Improving the expressiveness of black-box models for predicting student performance”, Computers in Human Behavior, Vol. 72, pp. 621-631, https://doi.org/10.1016/j.chb.2016.09.001.

[109] Wachter, S. and B. Mittelstadt (2019), “A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI”, Columbia Business Law Review, Vol. 2019/2, p. 494.

[100] Wilkinson, L. and M. Friendly (2009), “The History of the Cluster Heat Map”, The American Statistician, Vol. 63/2, pp. 179-184, https://doi.org/10.1198/tas.2009.0033.

[60] Willingham, W., J. Pollack and C. Lewis (2002), “Grades and test scores: Accounting for observed differences”, Journal of Educational Measurement, Vol. 39/1, pp. 1-37, http://www.jstor.org/stable/1435104.

[116] Zehlike, M. et al. (2017), FA*IR: A Fair Top-k Ranking Algorithm, ACM, Singapore, Singapore, https://doi.org/10.1145/3132847.3132938.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2021

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at https://www.oecd.org/termsandconditions.