14. Mexico: an innovative approach to the evaluation of learning outcomes
Mexico has implemented policies to improve the quality of its higher education for over two decades, spanning four federal administrations. The evaluation of learning outcomes is one such policy. The institution analysed in this chapter promotes a culture of quality that allows different perspectives and tools to be applied in the evaluation process. The results presented here focus on evaluating higher-order cognition through performance tasks and constructed-response testing. The CLA+ test was applied in programmes that currently lack exit exams, attracting broad participation across many disciplines and campuses of one public state university. One main result corroborates the existing gap between better-performing students at metropolitan campuses and lower-performing students at socio-economically disadvantaged regional sites. However, certain educational programmes run counter to this disparity, achieving better results than their context would predict. Deepening our understanding of the specific contexts that produce these different results will enable us to learn from the most effective practices and improve learning outcomes.
The higher education system in Mexico is complex, owing to the country’s social and regional diversity. Learning outcomes should therefore be analysed with reference to the national context, the educational system as a whole, and the variables endogenous to each educational level and subsystem. The worldwide economic recession and COVID-19 have left half of Mexico’s population in poverty, with predictable increases in educational, digital, cognitive and human inequality. The challenges have become more acute for higher education institutions (HEIs) across a range of issues: governance; the acquisition and fair, transparent distribution of financial resources; quality; and the development of capabilities for consolidating achievements and successfully overcoming new problems.
Mexico is a country of great wealth and diversity. Its gross domestic product (GDP) ranks as the world’s 15th highest (World Bank, 2021[1]). Nevertheless, it suffers from enormous inequality: two-thirds of its wealth is concentrated in 10% of the nation’s families (ECLAC, 2017[2]), placing it among the 25% of nations with the highest levels of inequality in the world (Oxfam, 2018[3]). Poverty is an inescapable reality, responsible for a growing and worrisome chasm of inequality. Presenting the 2020 report on Latin America, the director of the Economic Commission for Latin America and the Caribbean (ECLAC) stated that, owing to the COVID-19 crisis, Mexico now ranks fourth in the region in the number of inhabitants living in poverty and extreme poverty (Villanueva, 2021[4]). The report states that from 2019 to 2020, poverty increased by 9.1 percentage points, affecting 50.6% of the population or 63.8 million Mexicans; extreme poverty grew by 7.7 percentage points to reach 18.3% of the population, equivalent to 23.2 million people (ECLAC, 2021[5]). According to the World Bank (2020[6]), extreme poverty is defined as living on less than USD 1.90 per day.
This economic disparity has repercussions in education. In 2020, 49.3% of the Mexican population above the age of 15 had only a basic level of education (INEGI, 2020a[7]). The rate of illiteracy in Mexico is 4.7%, nearly 6 million people (INEGI, 2020b[8]). The panorama is bleaker still when factors such as reading habits and comprehension are taken into account: of the population able to read, only one-fourth fully comprehend what they read (INEGI, 2020c[9]). The results of the Programme for International Student Assessment (PISA) are well known. Although Mexico’s general results for 2018 in mathematics, reading and science fell below the mean of member nations of the Organisation for Economic Co-operation and Development (OECD), its results have been stable since 2003, and this stability can mask a trend toward narrowing disparities: “The point level achieved by at least 90% of the students in Mexico improved by 5 points for each three-year period, on average, in each of the three principal areas” (OECD, 2018[10]). Additionally, only 23% of the population between the ages of 25 and 34 holds a higher education degree, while the average for OECD countries is 44% (OECD, 2019a[11]).
Educational disparity amplifies digital disparity and vice versa. Recent results of a study on the impact of COVID-19 on education (INEGI, 2021[12]) reveal a decrease in matriculation of 5.2 million students across all education levels for the 2020/21 academic year. The reasons vary, but chief among them are: a dislike of online classes; the lack of a computer or Internet connectivity; the unemployment of a head of household; and the need to take up paid work. The pandemic made clear the need to improve digital literacy and active teaching methods. The United Nations Development Programme (UNDP) calculated a reduction of 15.6% in undergraduate matriculations for 2020/21, from more than 3.8 million students in 2019/20 to just over 3.2 million projected for 2020/21 (UNDP, 2020[13]). This is a drop of more than half a million students, according to statistics from the Ministry of Public Education (SEP, Secretaría de Educación Pública). The digital divide also affects education directly: in Mexico only 56.4% of homes have Internet connectivity (INEGI, 2020d[14]), and only 11.2% of higher education students have a desktop computer, with most relying instead on laptops (55.7%) or smartphones (31.8%).
These educational and digital gaps imply a cognitive disparity. A low level of literacy prevents information that is easily accessed and viewed on various media from being absorbed, interpreted and skilfully utilised, which in turn constrains how people think and act. As Pozo points out: “… whoever cannot access different cultural forms of symbolic representation (numerical, artistic, scientific, graphic, etc.) is socially, economically and culturally impoverished, as well as overwhelmed, confused and disconcerted by the avalanche of information they cannot translate into knowledge” (Pozo, 2006[15]).
This is the context in which higher education in Mexico has developed. It is a complex system owing to its size, regional diversity and 13 subsystems. The subsystems include state and national public universities; intercultural institutions; technological and polytechnical institutions; solidarity-based and private institutions; decentralised federal technical institutes; public research centres; and public and private schools for training elementary school teachers. This complexity generates factors that complicate governance, financing and institutional capacities.
According to the SEP’s General Directorate of Educational Planning, Programming and Statistics (Dirección General de Planeación, Programación y Estadística Educativa [The Mexican Ministry of Public Education’s General Directorate of Educational Planning, Programming and Statistics], 2020[16]), total enrolment in higher education for 2019/20 was above 4.9 million students, with 64% in public schools versus 36% in private ones. Among the different subsystems, the combination of public universities and technological institutes represented almost 60% of total enrolment and 92% of the public education subtotal. Enrolment at state universities surpassed 1.25 million or 43% of the entire system of public universities and institutes; that of the federal universities – slightly over half a million students – represents 20% of the total. There are a total of 591 public universities and institutions in Mexico, of which 341 are universities and 250 are technological institutes. These HEIs are crucial to the system and they face tremendous challenges in guaranteeing educational excellence.
For over two decades, national policies have been geared toward increasing access to higher education. These have resulted in a gross coverage rate of 38.4%, which is 10 percentage points below the Latin American average and more than 30 percentage points below the average of OECD member nations (ANUIES, 2018[17]). Although access has increased, the question of educational quality must also be considered. Ferreyra et al. (2017[18]) report that the rate of access to higher education in Latin America for people between the ages of 18 and 24 has increased considerably, and they estimate that approximately 3 million students – 45% of the total increase in enrolment – come from underprivileged backgrounds and are therefore academically less prepared. Their report points to a dropout rate of 35% during the first year, associated with the low level of development of these students’ cognitive abilities.
The system also “lacks diversity of fields and levels of study”, whether in terms of specific disciplines or interdisciplinary areas: “More than one-third of the students are enrolled in business administration and law” (OECD, 2019a[11]). It also needs to improve both general and discipline-specific competencies, and to foster, through active learning methods, the “soft skills” that employers value so highly.
Over the course of four administrations, Mexico has implemented quality education policies with strong support from the National Association of Universities and Institutions of Higher Education (ANUIES [acronyms are from the Spanish organisational names]). Public state universities have responded positively, not least because they have been able to access extra funding to strengthen institutions and infrastructure, provide student grants for travel and research, and carry out research, although many of these funding programmes have recently ended. The first step was to introduce design plans for higher education and then to initiate processes of external review. Emblematic institutions are the Inter-institutional Committees for the Evaluation of Higher Education (CIEES); the Council for the Accreditation of Higher Education (COPAES) and its affiliated organisations; the National Centre for the Evaluation of Higher Education (CENEVAL); the now-defunct National Institute for the Evaluation of Education (INEE); and the individual state commissions for higher education planning (COEPES) set up across the country.
Quality-improvement policies implemented by federal administrations two decades ago created the political context for evaluating learning outcomes by means of the CLA+ test. Evaluation processes had already been introduced at this institution, with positive repercussions for admissions and graduation requirements and for the evaluation and accreditation of its educational offerings. Meanwhile, among international trends in education, the evaluation of learning outcomes emerged as a third-generation indicator in quality accreditation processes. The institution’s administrators took part in many initiatives that built knowledge about evaluation approaches and instruments, which helped facilitate community participation in new forms of evaluating educational results such as CLA+.
The institution in question is typical of the subsystem of public state universities. Although it enjoys administrative autonomy, its funding comes principally from the state and federal governments. It is a massive institution in terms of enrolment, with campuses in the state’s capital city and at various regional locations throughout the state. While it carries out many functions – teaching, research and extension – its particular emphasis is on teaching. Students seeking admission must take an aptitude exam, which is considered together with their high school grade averages. Admission is granted as a function of the available places in each educational programme (EP) and the points obtained in the selection process.
This institution initiated its external evaluation processes in 1991, shortly after the creation of the CIEES at the national level. By 2004 it had formalised its policy through the establishment of an institutional fund to defray the costs of evaluation, accreditation and learning outcome tests. Three years later, it constituted the Committee of Peers for Institutional Self-Evaluation, and by 2012 it had begun accreditation processes through international organisations.
The institution began evaluating learning outcomes in 2005 as part of its policies for improving quality. It did so initially through the General Graduation Exam for Undergraduate Degrees (EGEL), which is implemented nationwide by the CENEVAL. When the Undersecretary for Higher Education of the SEP later implemented the OECD feasibility study Assessment of Higher Education Learning Outcomes (AHELO) at a nationwide level, this institution participated.
The AHELO study showed that although the measurement of general abilities is a good indicator of workplace applicability, it is important to combine it, when possible, with an evaluation of specific abilities. It also showed that the testing exercise awakened enthusiasm in the academic community, which could be parlayed into teaching and future evaluations. One activity resulting from this institution’s participation in the AHELO study was a series of workshops on performance-based tasks offered to teachers by the Council for Aid to Education (CAE), based in the United States. These workshops formed the basis for a pilot study at four regional campuses with low learning outcome results. The intention was that first-year students would take foundation courses to bring them up to an acceptable level, with lessons concentrated on performance-based tasks to develop their cognitive abilities. The results were positive and were published in Rosas and Silva (2019[19]).
In addition, this institution participated in a nationwide project convened by the SEP’s office of the Director General of Higher University Training. The project, called Development and Evaluation of Competencies for Learning in Higher Education (DESCAES, in Spanish), measured HEIs’ value-added in competencies for managing information, problem solving, communication, metacognition and self-regulation. The tools were designed in 2014 and 2015; they were first applied in 2016, with a second application in 2019.
The arguments for continuing evaluation of learning outcomes by means of the CLA+ test were:
1. The EGEL does not include tests for all of the institution’s educational programmes. At the time, 71 of its 214 programmes had no related EGEL test.
2. There are 17 undergraduate programmes that are multidisciplinary or interdisciplinary in nature and cannot be properly evaluated by EGEL, which is structured for a single discipline.
3. More than 40% of undergraduate programmes had not been evaluating their outcomes, resulting in two situations:
a) Comparisons between the institution’s departments were not reliable, because at some campuses the EGEL was applied in only one or two programmes while at other centres it was applied in every programme.
b) Some area co-ordinators commented that the workload was unfairly distributed between programmes subjected to learning outcome evaluations and those that were not.
4. The cost of CLA+ testing applied across the board would be comparable to that of the EGEL. Consequently, adopting the CLA+ model would be a better institutional investment for evaluating the totality of undergraduate programmes.
5. The CLA+ offers the possibility of conducting value-added studies, as sketched below. The institution could carry these out without exorbitant cost if entry tests were administered by the CAE: the entry tests would be covered by students’ fees, while the exit exams would be absorbed by the university’s institutional fund.
Not all of the university’s campus centres were willing to participate, especially those where practically all programmes already had EGEL testing. There was also a lack of enthusiasm even at centres where most programmes lacked EGEL testing, as they would have to set up the structures and logistics for CLA+ testing. The final agreement was that programmes without EGEL testing would participate, and a fund was authorised to initiate the new testing model.
Participation varied at each campus where the test was implemented. Three testing sessions per semester were carried out with a total of 8 577 tests, of which 2 176 were administered to graduating students. The rest were administered to newly matriculated first-year students, with the idea of later administering exit exams to evaluate the value-added of their educational programmes. Table 14.1 shows data from the testing.
The implementation process comprised the following stages:
1. Establishing cognitive laboratories for translating and culturally adapting tests
2. Ensuring a sufficient number of implementation co-ordinators and test application personnel, as well as training for them
3. Receiving test results from the CAE and statistically processing them for the governing board’s evaluation and ratification
There were some objections to the CLA+ testing, which had to do with extra workload and limited personnel. This was especially true at regional sites with fewer personnel. There were also concerns over the high cost of evaluating learning outcomes. This would require governmental funding, which can suffer from a lack of continuity when changes in administrations occur.
On the whole, however, the participants gave a generally positive opinion of the CLA+ implementation. For the governing board, the test introduces new forms of evaluation, with the eventual possibility of comparing student performance with that of institutions in other countries. It also provides evaluation for programmes that had lacked EGEL testing. Logistical co-ordinators and administering professors felt that the new test awakened enthusiasm among the students who took it.
Finally, students who responded to the brief survey about the CLA+ test were generally positive about it. Nevertheless, some responded that they did not exert themselves fully because the tests were not obligatory and did not count in their programme’s curriculum. Others stated that the test required a lot of thought and proved stressful for that reason.
Though sampling was not carried out to guarantee representativeness, the number of tests applied across a wide variety of campus sites and educational programmes did confirm disparities between metropolitan and regional campus sites. While access to higher education has improved for a greater number of young people from these underserved regional sectors, their lower levels of cognitive performance jeopardise their ability to remain in school and successfully conclude their studies. The disparity is reflected across the range of educational programmes (EP), too. However, some results show that context is not destiny, and others push us to reflect on the very design of the educational programmes themselves and the ways in which the material is taught.
Our analysis of learning outcomes involved estimating probability density functions (PDFs) and comparing densities at a 95% level of confidence. We selected this analytical option because, in many cases, a simple comparison of averages is inadequate: it presupposes a normality that does not always hold, and summary measures such as the coefficient of variation or effect size can be insufficient for characterising both the central tendency and the variation of the scores of interest. It is important to point out that these results correspond only to the exit exams of graduating students and are not representative of the institution’s global performance, given that representative samples were not selected at any level.
We begin our analysis with the general results represented in Figure 14.1, which shows four graphs of the probability densities of the scores obtained by participating students: for the entire database (upper panel) and, separately (lower panels), for the students who took the CLA+ test during the second semester of 2017 (Figure 14.1, 17B), the first semester of 2018 (Figure 14.1, 18A) and the second semester of 2018 (Figure 14.1, 18B).
The dotted vertical lines in Figure 14.1 mark the skill levels for the abilities evaluated, together with the percentage of students who reached each level: 2% reached “Advanced”, and the levels of greatest probability were “Basic”, with 37%, and “Proficient”, with 31%. There were no significant differences in the competencies observed across the three academic semesters during which testing was performed.
Results from other tests given by the institution, such as admissions aptitude testing and EGEL exit exams, show that the performance of students at metropolitan campus sites is generally higher than that of students at the rural, regional centres. We therefore proceeded to examine the data broken down by type of campus site. Figure 14.2 shows that students at metropolitan sites outperformed those at regional sites at the levels of “Proficient”, “Accomplished” and “Advanced”, while students at regional sites were more likely than those at metropolitan sites to test at the levels of “Basic” and “Below Basic”.
This difference can be attributed to various factors, ranging from socio-economic strata to levels of educational supply and demand. Socio-economic factors include the strata of students’ families and the availability of highly skilled staff and well-equipped libraries and computer centres at the different campus sites. Regarding supply-and-demand issues, if the number of students who want to go on to higher education does not exceed the number of admissions slots, a given campus site might well admit all comers. But when demand for admission outstrips the supply of available spaces, the possibility of selecting higher ranking students exists, which translates into better academic profiles for the average student at those sites.
Additionally, to implement a comparative analysis between the 14 institutional dependencies – five metropolitan, eight regional and one virtual site – we ordered them by descending average performance, where 1 denotes the highest average score and 14 the lowest. Figure 14.3 shows the performance of students at the metropolitan sites. The baseline taken was the point level of students from site Metro_1 (the institutional dependency with the highest average), contrasted with students from sites Metro_2 (Figure 14.3A), Metro_3 (Figure 14.3B), Metro_5 (Figure 14.3C) and Metro_10 (Figure 14.3D). To highlight the differences observed in Figure 14.3, Table 14.2 shows the percentages of students at each mastery level for each dependency.
In Figure 14.3 and Table 14.2, we see that the students at Metro_1 show better performance than those at Metro_2, Metro_3, Metro_5 and Metro_10, principally in the categories “Accomplished” and “Advanced”. In the category “Proficient”, Metro_3 and Metro_5 have better results than Metro_1, Metro_1 and Metro_2 show similar results, and Metro_1 is considerably higher than Metro_10. That Metro_1 has the best performance is consistent with the EGEL results for the rest of that site’s educational programmes, but it is also a site with far greater demand than supply, which explains why the students admitted there have much better developed cognitive abilities. The case of Metro_5 reflects only one of its educational programmes, which has no related EGEL test; however, Metro_5 behaves similarly to Metro_1 in terms of the students admitted and their performance as measured on the EGEL. Metro_3 is the only site at which practically all educational programmes applied the CLA+ test, because only two of its programmes apply EGEL tests to graduating students. The disciplines offered at this site correspond principally to the field of the arts, but demand for entry also exceeds the supply of available places, which guarantees a selection of students with better profiles. While Metro_10 reflects performance levels similar to those of the regional sites – in some cases below some of them – this site participated with only two of its programmes; the rest of its programmes apply the EGEL, with better performance than was found in the two programmes measured here. Additionally, because of the type of programmes it offers, it routinely admits all aspirants, regardless of their admission-testing performance.
Figure 14.4 compares the performance of metropolitan site Metro_1 (the better performer) with that of Virtual_4 (a virtual site for online learning). At the “Advanced” level, the highest scores were attained by the students of Virtual_4, while the students of Metro_1 had greater probability at the levels “Proficient” and “Accomplished”. At the levels of “Below Basic” and “Basic”, students at Virtual_4 were more likely to appear than students at Metro_1. This result could be explained by the fact that the online site admits practically all applicants, including many mature students with a high degree of self-directedness.
Although performance among the regional sites is quite similar, Figure 14.5 shows the probability densities of the points obtained by students at the regional sites with the best (Regional_6) and worst (Regional_13 and Regional_14) test results. The greatest number of results in the category “Below Basic” occurs at site Regional_14 (Figure 14.5A), with approximately 50% of students testing at that lowest of the five categories. Regional_13 and Regional_14 show similar results (Figure 14.5B), with the greatest proportion of students appearing on the left-hand side of the graph. These sites admit practically all aspirants, regardless of their admissions test scores; this is especially true of the area where Regional_13 is located, a region including some municipalities with 30-40% of their population living in extreme poverty.
In the combined view of Figure 14.6, we see at a glance the contrasts between students at Metro_1 (the highest average of metropolitan sites), Metro_10 (the lowest average of metropolitan sites), Regional_6 (the highest average of regional sites), and Regional_14 (the lowest average of regional sites). The graph in Figure 14.6C shows the similarity in performance between the lowest metropolitan site (Metro_10) and the highest regional site (Regional_6). This is reinforced by the graph in Figure 14.6A, which shows the disparity between the highest and lowest levels of the metropolitan sites. In Figure 14.6B we can compare the highest averages of the best metropolitan and regional sites; this graph shows comparable behaviour with that shown in Figure 14.6A. The greatest disparity is revealed in Figure 14.6D, in which the highest metropolitan site (Metro_1) is shown against the lowest regional site (Regional_14).
We know that at the micro level particular conditions exist in educational programmes (EP) that may make each site’s general performance better or worse. As with the sites, we ordered the educational programmes by descending average performance, where EP1 has the highest average score and EP52 the lowest. We then made comparisons by EP for the metropolitan sites. Figure 14.7 presents a comparison of the data from EP1, EP2, EP10, EP16 and EP46. In Figure 14.7A we see broadly parallel results: although the percentages of students in EP1 exceed those of EP2 at the levels “Accomplished” and “Advanced”, it is worth noting that both EPs are given at the same site (Metro_1) and belong to the same field of knowledge, Science. Figure 14.7B shows the similarity in results between EP10 (Services field) and EP16 (Science field), which are also given at the same site (Metro_3). Figure 14.7C shows EP10 (Services field) at site Metro_3 alongside EP46 (Science field) at site Metro_2, revealing that EP46, even though it is offered at a metropolitan site, performs similarly to the lowest-level EPs among the regional sites. Figure 14.7D underscores the enormous difference in results between EP1 and EP46, despite both being taught at the metropolitan sites with the highest and second-highest overall point averages among all participating sites (Metro_1 and Metro_2, respectively).
In Figure 14.8 we compare performance by EP across all types of sites for the following programmes: EP1, EP9, EP38, EP46 and EP52. As above, programme numbers reflect average scores, with EP1 the highest and EP52 the lowest.
In Figure 14.8D, what stands out is the difference between EP1 (metropolitan site, Science field) and EP52 (regional site, Agriculture field). More than 50% of EP52 students are concentrated at the lowest level, “Below Basic”, while the inverse is true of EP1, where over 50% of students fall within the three highest levels: “Proficient”, “Accomplished” and “Advanced”. Figure 14.8A reveals that although EP1 shows better performance than EP9 (regional site, Services field), the difference is not overwhelming, despite the existing disparities between metropolitan and regional sites. Figure 14.8B compares EP46 (metropolitan site, Science field), which had the lowest average among all EPs given at metropolitan sites, with EP52 (regional site, Agriculture field), which had the lowest average among those given at the regional sites. Despite both being the lowest average of their respective site type, the disparity between geographic locations remains an important factor. Finally, Figure 14.8C compares the results of one EP in the Agriculture field that is offered at both metropolitan and regional sites. Although performance at both locations was low, it was not as low at the metropolitan site as at the regional one.
Even though this is not a comparative study, our analysis of students’ CLA+ responses revealed important differences in performance across campuses and educational programmes. These differences can be attributed to several factors. The first is geographical location, which confirms that the economic gap is superimposed on the educational gap. Social and economic conditions differ markedly between regional and metropolitan sites, and in some cases the gap between high-income and low-income families is stark. Students at regional campuses, where the economic context is more precarious, obtained lower scores on average, while students at metropolitan campuses obtained higher ones. The same pattern generally holds in the programme-level analysis: the EPs at metropolitan sites occupy the top positions in the ranking, and although some EPs at regional campuses achieve a high rank, these are exceptions.
A second factor is the supply of and demand for these programmes. At the metropolitan sites, demand generally exceeds supply, so campuses can select the students with the highest admission scores. Nevertheless, the results also show, as an exception, that some regional EPs performed well in ways not closely related to the availability of places.
Our analysis of these responses could not resolve certain questions, such as the causes of the performance differences between metropolitan campuses Metro_1 and Metro_10, or between regional and metropolitan EPs. Looking ahead, however, these cues make it possible to design comparative studies that explore the causes of such differences in depth; this could become an important tool for improving performance in every EP and at every campus.
Despite the gap between metropolitan and regional sites, other variables exist that help explain the best and worst performances in educational programmes, and we consider it indispensable to undertake further, deeper studies in order to fully understand them. The variables could relate to the very design of the educational programmes: this is a provisional conclusion that may be drawn from the testing results of the EP taught at five different sites, all of which demonstrated poor performance, albeit with marginally less dismal results at the metropolitan site. The variables may also include the preparation of faculty, given that regional sites geographically closer to the metropolitan area derive some benefit from metropolitan-based faculty members, who generally have no difficulty commuting an hour or so to teach some courses at nearby regional sites. Additionally, the other resources available to the educational programmes require analysis in order to understand the degree to which they may have influenced the learning outcomes. Finally, understanding the results obtained for each particular ability will enable the development of a plan for intervening to improve instruction.
Our position is that every evaluation involves lessons learnt: once we detect aspects that are not working at even a minimum level, it clearly becomes necessary to intervene in order to improve them. Such intervention, however, is not possible without first learning more about the specific contexts involved. The current evaluation has given us a guideline for beginning studies of the value added by higher education. Admissions testing is already in place, and we are now at the point of requiring exit testing. Although there is uncertainty because of current economic restrictions, we know there is a positive disposition among the institution’s directors toward continuing evaluations of learning outcomes. What remains is to ensure that the testing continues, that deeper studies are undertaken of the educational programmes analysed here, and that institutional mechanisms are found to guarantee the continuity of learning outcome evaluations.
References
[17] ANUIES (2018), Visión y acción 2030 Propuesta de la ANUIES para renovar la educación superior en México [Vision and Action 2030: Proposal of the National Association of Universities and Institutions of Higher Learning for renovating higher education in Mexico], ANUIES, México, http://www.anuies.mx/media/docs/avisos/pdf/VISION_Y_ACCION_2030.pdf.
[16] Dirección General de Planeación, Programación y Estadística Educativa [The Mexican Ministry of Public Education’s General Directorate of Educational Planning, Programming and Statistics] (2020), Principales cifras del sistema educativo nacional [Principal Statistics of the National Education System], https://www.planeacion.sep.gob.mx/Doc/estadistica_e_indicadores/principales_cifras/principales_cifras_2019_2020_bolsillo.pdf.
[5] ECLAC (2021), Social Panorama of Latin America: 2020, United Nations Economic Commission for Latin America and the Caribbean, Santiago, http://www.cepal.org/sites/default/files/publication/files/46688/S2100149_en.pdf.
[2] ECLAC (2017), Social Panorama for Latin America: 2016, United Nations Economic Commission for Latin America and the Caribbean, Santiago, http://www.cepal.org/sites/default/files/publication/files/41599/S1700566_en.pdf.
[18] Ferreyra, M. et al. (2017), At a Crossroads: Higher Education in Latin America and the Caribbean, World Bank, Washington, DC, https://doi.org/10.1596/978-1-4648-1014-5.
[12] INEGI (2021), Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED), Presentación de resultados [Survey to Measure the Impact of COVID-19 on Education (ECOVID-ED): Outcomes Report], Instituto Nacional de Estadística y Geografía [National Institute of Statistics and Geography], https://www.inegi.org.mx/contenidos/investigacion/ecovided/2020/doc/ecovid_ed_2020_presentacion_resultados.pdf (accessed on 18 March 2021).
[8] INEGI (2020b), Analfabetismo [Illiteracy], http://cuentame.inegi.org.mx/poblacion/analfabeta.aspx?tema=P.
[7] INEGI (2020a), Características educativas de la población [Educational characteristics of the population], https://inegi.org.mx/temas/educacion/.
[14] INEGI (2020d), “Estadística a propósito del día mundial del internet (17 de mayo) datos nacionales” [Statistics for World Internet Day (17 May): national data], Comunicado de prensa núm. 216/20 [press release no. 216/20], http://www.inegi.org.mx/contenidos/saladeprensa/aproposito/2020/eap_internet20.pdf (accessed on 5 August 2022).
[20] INEGI (2020f), Población [Population], http://www.inegi.org.mx/temas/estructura/.
[9] INEGI (2020c), press release no. 158/20, http://www.inegi.org.mx/contenidos/saladeprensa/boletines/2020/EstSociodemo/MOLEC2019_04.pdf (accessed on 23 April 2020).
[10] OECD (2018), Panorama de la Educación 2017 [Education at a Glance 2017], OECD, Paris, http://www.oecd.org/education/skills-beyond-school/EAG2017CN-Mexico-Spanish.pdf.
[11] OECD (2019a), Higher Education in Mexico: Labour Market Relevance and Outcomes, Higher Education, OECD Publishing, Paris, https://doi.org/10.1787/9789264309432-en.
[21] OECD (2019b), The Future of Mexican Higher Education: Promoting Quality and Equity, Reviews of National Policies for Education, OECD Publishing, Paris, https://doi.org/10.1787/9789264309371-en.
[3] Oxfam (2018), México justo: políticas públicas contra la desigualdad [A Fair Mexico: Public Policies against Inequality], https://www.oxfammexico.org/mexico-justo-politicas-publicas-contra-la-desigualdad-0/.
[15] Pozo, J. (2006), Adquisición de conocimiento, Segunda Edicion [Acquisition of Knowledge, second edition], Morata, Madrid, https://download.e-bookshelf.de/download/0003/5619/65/L-G-0003561965-0006870423.pdf.
[19] Rosas, P. and J. Silva (eds.) (2019), Habilidades cognitivas y desempeño en el pregrado universitario [Cognitive Abilities and Performance in Undergraduate University Studies], ANUIES, Mexico City.
[13] UNDP (2020), Desarrollo Humano y COVID-19 en México: Desafíos para una recuperación sostenible [Human Development and COVID-19 in Mexico: Challenges for a sustainable recovery], UNDP, http://www.mx.undp.org/content/mexico/es/home/library/poverty/desarrollo-humano-y-covid-19-en-mexico-.html.
[4] Villanueva, D. (2021), México, entre los países de AL con más pobres por pandemia: Cepal [Mexico among the Latin American countries with the most poverty because of the pandemic: ECLAC], http://www.jornada.com.mx/notas/2021/03/04/economia/mexico-entre-los-paises-de-al-con-mas-pobres-por-pandemia-cepal/.
[1] World Bank (2021), Banco de datos: Indicadores del desarrollo mundial (DataBank: World Development Indicators [database]), https://databank.bancomundial.org/reports.aspx?source=2&series=NY.GDP.MKTP.CD&country=# (accessed on 18 March 2021).
[6] World Bank (2020), COVID-19 to Add as Many as 150 Million Extreme Poor by 2021, press release no. 2021/024/DEC-GPV, http://www.bancomundial.org/es/news/press-release/2020/10/07/covid-19-to-add-as-many-as-150-million-extreme-poor-by-2021 (accessed on 18 March 2021).