1. Assessment and recommendations

This chapter presents an overview of the main findings and recommendations resulting from the OECD review of the external quality assurance system in the federal higher education system in Brazil. The analysis has assessed the relevance, effectiveness and efficiency of the external quality assurance procedures applicable to undergraduate and postgraduate programmes and higher education institutions (HEIs) in the federal higher education system. The chapter summarises findings and recommendations in relation to the different components of the external quality assurance system: a) procedures to regulate the “market entry” of new HEIs and new undergraduate programmes; b) procedures for the ongoing monitoring and evaluation of existing undergraduate programmes and related feedback and corrective measures; c) external quality assurance procedures governing academic postgraduate programmes; d) ongoing monitoring and evaluation of higher education institutions (institutional evaluation); and e) the governance and administrative bodies and arrangements that have been created to implement and oversee the processes above.


1.1. Focus of this chapter

A review of procedures for external quality assurance in the federal higher education system in Brazil

This chapter presents an overview of the main findings and recommendations resulting from the OECD review of the procedures for external quality assurance in the federal higher education system in Brazil. In line with the terms of reference agreed with the Brazilian authorities at the start of the project, the analysis has assessed the relevance, effectiveness and efficiency of the external quality assurance procedures applicable to undergraduate and postgraduate programmes and higher education institutions (HEIs) in the federal higher education system in Brazil.

A focus on relevance, effectiveness and efficiency

Specifically, the terms of reference ask the team to analyse the effectiveness and efficiency of the different aspects of existing quality assurance systems in: a) ensuring minimum quality standards in educational provision; b) providing differentiated measurement of quality (between types of provision and levels of quality offered); and c) promoting improvement of quality and quality-oriented practices in HEIs (quality enhancement). On the basis of this analysis, the OECD review team was invited to provide recommendations for improving the system.

An analysis structured around the main components of the system

In light of the organisation of external quality assurance in the federal higher education system, the review has analysed the different functions of the existing quality system in turn:

  1. First, procedures in place to regulate the “market entry” of new HEIs and new undergraduate programmes.

  2. Second, the procedures for the ongoing monitoring and evaluation of existing undergraduate programmes and related feedback and corrective measures.

  3. Third, the external quality procedures governing the “market entry” and periodic evaluation of academic postgraduate programmes, coordinated by the Foundation for the Coordination of Improvement of Higher Education Personnel (CAPES).

  4. Fourth, the ongoing monitoring and evaluation of higher education institutions (institutional evaluation) and related feedback and corrective measures.

  5. Finally, the governance and administrative bodies and arrangements that have been created to implement and oversee the above processes.

1.2. External quality assurance in Brazil: evaluation, regulation and supervision

Established systems of external quality assurance for undergraduate programmes, postgraduate provision and higher education institutions

Brazil has well-established systems in place at national level to regulate the operation of public and private higher education providers and assess and monitor the quality of their teaching and learning activities. There are distinct external quality assurance systems for the undergraduate (ISCED 6) and postgraduate (ISCED 7-8) levels of education. Both systems involve the external evaluation of individual higher education programmes, while the evaluation of higher education institutions (HEIs) is coupled with the processes for quality review for undergraduate provision.

Most higher education in Brazil falls under the regulatory responsibility of the federal government and a large proportion of enrolment is in the private sector

The external quality processes for HEIs and undergraduate programmes apply to higher education providers that are legally considered to be part of the “federal higher education system”. This comprises federal public HEIs and all private HEIs in the country. Of the roughly 2 400 HEIs in Brazil, 92% (federal public and private) fall under the regulatory responsibility of the federal government and these institutions, together, account for 91% of undergraduate enrolment. 75% of total undergraduate enrolment in Brazil is in the private sector. The remaining 9% of undergraduate enrolment is in public state and municipal institutions, which are subject to regulation by state governments, not the federal authorities. The system of quality assurance for academic postgraduate programmes, implemented by CAPES, applies to all higher education providers in the country, including state and municipal institutions.

The quality assurance system for higher education institutions and undergraduate programmes involves regulation, evaluation and supervision

For HEIs and undergraduate programmes, the current system of quality assurance was established by legislation adopted in 2004 (Presidência da República, 2004[1]). This establishes a system based on the legal regulation of institutions and undergraduate courses, currently undertaken by the Secretariat for Regulation and Supervision of Higher Education (SERES) within the Ministry of Education (MEC). SERES grants official accreditation to HEIs and authorisation and recognition to individual undergraduate programmes. It does so on the basis of the results of quality evaluations of institutions and programmes, which form part of the National System of Higher Education Evaluation (SINAES) and are coordinated by the Anísio Teixeira National Institute for Educational Studies and Research (INEP), a semi-autonomous agency under the responsibility of MEC. INEP uses a combination of on-site peer reviews, results from the National Examination of Student Performance (ENADE) and programme-level quality indicators to evaluate HEIs and undergraduate programmes. SERES also uses the results of ongoing evaluation and monitoring of HEIs and programmes by INEP to inform its supervision of the federal higher education system and, in some cases, impose remedial measures and sanctions on HEIs that perform poorly.

CAPES evaluates academic postgraduate provision through a system of peer review, the results of which have far-reaching consequences

The evaluation of academic postgraduate programmes (Master’s, Professional Master’s and Doctoral programmes), undertaken by CAPES, involves an elaborate system of peer review, organised by 49 discipline-specific field committees. The field committees approve all new academic postgraduate programmes and undertake evaluations of existing programmes on a four-year cycle. The results of the CAPES evaluations are fundamental for the approval of new academic postgraduate programmes, the continued operation of existing programmes and the allocation of public money for researcher training. Programmes which fail to achieve minimum standards in CAPES evaluations lose funding and the official validity of their qualifications, and are forced to close.

1.3. Regulating market entry: new institutions and undergraduate programmes

Main findings

A system of institutional accreditation and programme-level recognition is used to regulate market entry for HEIs and undergraduate programmes, mostly affecting the private sector

Private higher education institutions are required to obtain formal external institutional accreditation (credenciamento) from MEC to allow them to begin operation and may only initially be established as teaching institutions, with the formal status of “college” (faculdade). Once established, private institutions may transition to the status of “university centre” (larger teaching institutions) or a fully-fledged “university” (institutions with teaching and research activity). This is possible if the institutions meet certain criteria related to the number of programmes they offer, the qualification and employment status of their staff and, for universities, research activity, and if they successfully complete a process of re-accreditation. In 2017, MEC processed over 200 requests for institutional accreditation from private institutions.

Public HEIs may be established with any institutional form and are accredited automatically by their acts of establishment. The establishment of new federal public institutions is rare, but, from a legal perspective, such institutions can be created without any requirement to undergo an initial external evaluation.

As a general rule, colleges in the federal higher education system, which are almost exclusively private, are required to obtain formal authorisation from SERES (autorização) to start new undergraduate programmes. As discussed below, colleges with adequate institutional quality ratings may be exempted from the authorisation process in certain circumstances. Institutions with the status of university centre and university have a greater degree of autonomy and are not generally required to obtain authorisation in advance to start new undergraduate programmes, but must notify SERES of the creation of all new programmes. Universities and university centres do require prior authorisation to start new programmes in medical fields and law. In 2017, MEC processed nearly 1 600 requests for authorisation for new classroom-based and distance undergraduate programmes.

All HEIs in the federal system, whatever their legal form, are required to submit new programmes to an external quality assurance process called “recognition” (reconhecimento), once half of total teaching hours for the first student cohort have been completed (in the second or third year, for example). All new programmes offered by HEIs need to complete the recognition process successfully for the degrees they award to be valid nationally in Brazil. In practice, most of the programmes undergoing initial recognition are also in new or expanding private sector institutions. Federal public universities, as well-established institutions, already active in a broad range of study fields, create fewer new programmes and are thus less involved in processes of programme recognition.

Decisions about accreditation, authorisation and recognition are informed by the results of on-site peer evaluations coordinated by INEP

SERES makes decisions regarding the accreditation of institutions, and the authorisation and recognition of programmes, based on the results of on-site peer evaluations of the institutions and programmes in question. Two or three external evaluators undertake review visits, using evaluation criteria and scoring detailed in evaluation templates (“instruments”) specific to each process.

For the accreditation process, which applies only to new private institutions, the evaluation template is organised around five thematic axes, and assesses the proposed institution against 45 qualitative indicators, each of which is evaluated on a five-point scale. The principal foci are the Institutional Development Plan (30%), planned academic policies (20%), planned management policies (20%) and infrastructure (20%). The final score generated by this evaluation, on a scale of one to five, is referred to as the “institutional score” or Conceito Institucional (CI). Institutions need to score at least three to receive accreditation from SERES. Institutional accreditation for colleges is valid for three to five years, depending on the CI score they receive. After this period, colleges must formally undergo a process of re-accreditation (see discussion of institutional re-accreditation below).
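The weighting logic described above can be illustrated with a short calculation. The sketch below is indicative only: the axis names and the 30/20/20/20 weights are taken from the text, while the 10% residual weight assigned to the remaining axis and the rounding rule used to produce the final one-to-five CI score are assumptions for illustration.

```python
# Illustrative sketch of the institutional score (CI) calculation.
# The 30/20/20/20 weights are from the text; the 10% residual weight
# for the remaining axis and the rounding rule are assumptions.

AXIS_WEIGHTS = {
    "institutional_development_plan": 0.30,
    "academic_policies": 0.20,
    "management_policies": 0.20,
    "infrastructure": 0.20,
    "other": 0.10,  # assumed residual weight
}

def institutional_score(axis_scores: dict) -> int:
    """Combine per-axis scores (each 1-5) into a CI on a 1-5 scale."""
    weighted = sum(AXIS_WEIGHTS[axis] * score
                   for axis, score in axis_scores.items())
    # Assumed rounding to the nearest whole score, clamped to 1-5.
    return max(1, min(5, round(weighted)))

def eligible_for_accreditation(ci: int) -> bool:
    """Institutions need a CI of at least three to receive accreditation."""
    return ci >= 3
```

For example, an institution scoring four on its development plan and infrastructure and three elsewhere would, under these assumptions, obtain a CI of four and therefore qualify for accreditation.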

For the processes of programme-level authorisation and recognition, the on-site evaluation instruments used by external reviewers are nearly identical. The evaluation template for authorisation focuses on planned inputs (teaching staff, infrastructure, etc.) and activities (pedagogical processes, support to students, etc.) linked to the programme. There are around 50 qualitative indicators in the template. These are assessed by the external evaluation team appointed by INEP, taking into account programme documents, discussions with proposed staff and visits to the facilities planned for the programme. The criteria in the template for recognition focus on the real inputs and activities involved in the new programme, once the first cohort of students has completed half of their study hours.

SERES can impose sanctions on HEIs if recently established programmes receive negative evaluation results in the process of recognition

If the result of the on-site evaluation at the stage of recognition is negative (a score of two or less), SERES requires the HEI to draw up a “Commitment Protocol”, which sets out how the quality problems detected will be addressed within a 12-month timeframe. If it considers there is an immediate risk for students, SERES can legally impose one or more sanctions on the HEI providing the programme, including suspension of the right to recruit new students. This rarely happens in practice. At the end of the period established by the Commitment Protocol, the programme is subject to another on-site inspection by INEP evaluators. If it still fails to meet minimum quality requirements, SERES can launch a “sanctioning procedure”, which may entail the same range of sanctions. For serious cases in private institutions, the relevant legislation allows for the withdrawal of institutional accreditation, which would effectively lead to the closure of the institution. Again, in practice, such cases are rare. Legally, some of the sanctions provided for in the legislation can be applied to public institutions, but the legal status of these institutions as public bodies means they may not have their institutional accreditation withdrawn.

Recent changes have removed the requirement for colleges to seek prior authorisation to start new programmes in specific cases

The authorisation of new programmes proposed by colleges is a risk-adjusted process. Recent changes to the regulatory regime allow colleges to obtain authorisation for new courses under certain circumstances, without undergoing an on-site inspection. Colleges with the minimum institutional quality score (CI) of three can start up to three new programmes a year without on-site reviews, provided they already have officially recognised (i.e. quality assured) programmes in the same disciplinary field. Colleges with institutional quality scores of four and five can create more new programmes a year in fields where they already operate. In these cases, HEIs must still request authorisation, but the procedure is based exclusively on a desk-based analysis by SERES of the programme documents submitted by the HEI.
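The risk-adjusted logic described above can be sketched as a simple eligibility check. This is an illustrative reading of the rules in the text, not the legal test itself: the minimum CI of three, the requirement for a recognised programme in the same field and the annual limit of three programmes for colleges with a CI of three are stated in the text, while the higher limits assumed for CI scores of four and five are placeholders.

```python
def desk_based_authorisation_allowed(ci: int,
                                     has_recognised_programme_in_field: bool,
                                     new_programmes_this_year: int) -> bool:
    """Illustrative check of whether a college may obtain authorisation for
    a new programme on a desk-based analysis alone (no on-site review).

    The limit of three programmes per year for CI = 3 is from the text;
    the limits for CI = 4 and CI = 5 are assumed placeholders only.
    """
    if ci < 3 or not has_recognised_programme_in_field:
        return False
    annual_limits = {3: 3, 4: 5, 5: 10}  # CI 4/5 limits are assumptions
    return new_programmes_this_year < annual_limits[min(ci, 5)]
```

Under this reading, a college with a CI of three that has already started three programmes in a year would require a full on-site review for any further programme, even in a field where it operates.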

The procedures for institutional accreditation have been effective in ensuring compliance with basic standards and not hindered expansion of the system

In contrast to some other countries in the Latin America and Caribbean region, compliance with Brazil’s system of institutional accreditation appears to be nearly universal. Private institutions do not frequently operate without institutional accreditation. The requirements of institutional accreditation appear to be sufficiently rigorous to limit fraudulent or grossly unqualified private institutions from entering the higher education marketplace. Moreover, accreditation requirements do not appear to have created excessive barriers to the market entry of private higher education providers. Brazil’s higher education system has grown swiftly over the last decade, and private sector institutions have provided the majority of new study places.

Some cases of fraud do exist, while information on the accreditation status of institutions is not as accessible and transparent as it could be

Nonetheless, there are examples of accredited higher education institutions offering programmes that are not authorised, and of organisations that are not accredited higher education institutions offering fraudulent diplomas. While the Ministry’s e-MEC platform provides a single national registry of accredited institutions and authorised programmes, it is primarily an administrative database. Incidents of allegedly fraudulent provision suggest that not all students have ready access to information that allows them to confirm the validity of the institutions and programmes in which they plan to study. While the layout and functionality of the e-MEC site are not designed for use by students and their families, the information contained in the system could easily be exploited as part of a more user-friendly information service.

Despite some strengths, there are concerns about the rules governing distance education providers and programmes

Distance education now accounts for almost 20% of total enrolment in Brazil, with over 90% provided by the private sector. Private distance education institutions and the programmes they provide are subject to the same procedures for institutional accreditation and programme-level authorisation and recognition as providers of traditional classroom-based higher education. A limited number of qualitative indicators relating specifically to distance education have been incorporated into the evaluation templates used for accreditation, authorisation and recognition, covering pedagogical approaches, digital technologies and infrastructure. For example, evaluators are called on to consider the capacity of teaching staff and assistant tutors to support and mentor the number of students proposed for each programme (the proposed study places). Brazilian legislation requires distance education programmes to respect the requirements of national curriculum guidelines (DCNs), for fields where these exist, and distance programmes have hitherto mostly been blended programmes, with some face-to-face instruction and assessments, often conducted in decentralised distance education learning centres (referred to as “poles”). Internationally, blended programmes have been shown to be more effective than fully online programmes (Escueta et al., 2017[2]).

Recent legislative changes have made it easier for private higher education providers to establish large numbers of distance education “poles” (up to 250 a year), in multiple locations, without the need for the facilities in each location to be inspected by INEP evaluators. Some stakeholders in Brazil are concerned that this will promote the uncontrolled expansion of distance education, without adequate quality guarantees. Furthermore, the specific evaluation criteria for distance education institutions and programmes used currently are few in number and underdeveloped in light of the risks associated with this kind of provision (limited staff-student interaction, the risk students are isolated, the challenges of organising fair and rigorous assessments and examinations, etc.).

The system for programme-level authorisation and recognition creates additional guarantees of minimum quality standards

The formal requirement for all courses to obtain official recognition in the early stages of their operation provides a basic guarantee of the quality of programmes. The procedures in place force higher education providers to reflect seriously on the design of the programmes they are providing and put in place a range of documents and processes – described in the Programme Pedagogical Project (PPC) – that should contribute positively to the delivery of relevant programmes meeting minimum quality criteria. Nevertheless, the factors verified through the on-site evaluation at the stage of recognition are all conditions for the delivery of quality programmes, rather than indicators of the initial performance of the programmes in question (student progression and performance, for example). Moreover, the processes used to evaluate the quality of new and recently created programmes are subject to various lines of criticism.

There have been concerns about the profile and objectivity of the review commissions undertaking on-site reviews

Representatives of private institutions consulted by the OECD review team complained that the composition of the reviewer pool used to implement on-site reviews is often skewed towards public universities, while institutional representatives more generally argued that those who are called upon to carry out reviews sometimes lack expertise concerning the programme under review. There is also concern about the subjectivity or unreliability of qualitative assessments. The revised process of on-site review for programme authorisation and recognition, as amended in late 2017, asks reviewers to make qualitative judgements on a five-point Likert scale, using pre-formulated judgement criteria. Despite the attempts by INEP to formulate the judgement criteria clearly, these scales still leave considerable room for interpretation. They call upon reviewers to make distinctions that are likely to be inconsistent between individuals. The OECD review team was told by campus officials that the same programme offered on different campuses with otherwise near-identical supply conditions received different marks from on-site reviewers.

The set of indicators used in the evaluation templates and the weight accorded to different topics are not optimal

The on-site evaluation instrument used by external reviewers for the process of programme recognition assigns 40% of its weight to assessment of the teaching staff attached to the programme. This reflects the fact that the staff are working at the time of recognition, so the composition of the teaching workforce can be judged more accurately than during the previous process of authorisation, where this is used. However, the judgement criteria reward the presence of full-time staff with doctoral degrees and attach little value to professional experience, thus disadvantaging professionally oriented programmes. At the same time, relatively little weight is attached to assessment of the pedagogical and didactic approaches implemented by the programme, despite their crucial role in supporting students to acquire relevant learning outcomes.

The on-site evaluation templates now make special provision for the authorisation and recognition of distance education courses. However, 45 out of 55 indicators in the templates are applicable to both classroom-based and distance programmes. The specific indicators of programme quality related to curriculum, instruction, learning support, and assessment in distance programmes are less developed than those used in accreditation systems in other OECD and partner countries, including the United States. Developing appropriate measures of quality that reflect the specific characteristics of distance education is, however, a challenge shared by many higher education systems.

Finally, on-site visits carried out for programme recognition permit higher education institutions to award degrees without providing evidence about the initial performance of the programme, such as rates of attrition among its students in the first years of operation. Additionally, the process of recognition does not systematically elicit information from the students whom the programmes serve, or external stakeholders who have experience of working with the programme and its students, such as public sector employers and private firms which provide internships.

The processes of programme-level authorisation and recognition are administratively burdensome for HEIs and INEP

The OECD review team heard frequent criticisms from institutional representatives of the delay and burden associated with the on-site review process for authorisation and recognition. INEP and SERES argue that the situation has improved in the last two years. In particular, they point to the fact that HEIs that have received adequate quality scores (a CI of three or above) are exempted from on-site reviews at the stage of authorisation for programmes in fields where they already have courses (within certain limits). They argue that the most recent regulatory changes in Decree 9 235/2017 reduce burden for institutions with an established quality record, allowing them to create additional study places more easily, for example.

While there has indeed been a shift in the regulatory approach, the market entry process for new undergraduate programmes in the federal higher education system remains administratively burdensome for HEIs and the evaluation agency (INEP), when compared to equivalent processes in many OECD countries. In Brazil, despite the recent changes, all new programmes are required to go through the recognition process, with on-site reviews that depend on peer review and are logistically complex to organise.

This contrasts with the situation in many OECD and partner countries, where HEIs can create new programmes and issue valid diplomas without prior programme-level authorisation. In these and other systems, authorities also often link quality review procedures more closely to risk of poor quality than is the case in the Brazilian system, with less complex procedures in place for institutions that can demonstrate they present a lower quality risk. Although the large private higher education sector in Brazil creates specific risks, which are not found in all higher education systems, there is certainly scope for Brazil to draw on risk management practice in other quality assurance systems.

Key recommendations

1. Improve the reliability and visibility of information about institutions’ accreditation status to ensure students and families are well informed

Although MEC, with the support of evaluations coordinated by INEP, regulates the entry of new institutions into the Brazilian higher education marketplace more comprehensively than in other systems undergoing rapid expansion, the quality assurance system is not fully effective in preventing fraudulent and unauthorised provision. The first line of defence against unaccredited higher education providers is students themselves. Informed students understand which institutions are and are not accredited, and why this matters to them, and are able to identify and avoid unaccredited institutions. In principle, comprehensive information about accredited institutions and recognised programmes is available through the online e-MEC platform. However, e-MEC is not a user-friendly source of accreditation information. More accessible public Internet resources found in other higher education systems could serve as references for the Brazilian authorities in this regard. In the medium term, the aim should be to develop a comprehensive online portal providing students and prospective students not only with programme-level information on quality assurance results, but also on issues such as graduation rates and graduate employment outcomes (see discussion on programme indicators below).

2. Over time, increase the focus on institutions as units of evaluation in the external quality assurance system to reduce burden, while maintaining effectiveness

Despite attempts to address concerns about the composition of review commissions and reduce requirements for authorisation in some cases, the Brazilian system of programme review at market entry remains complex and burdensome and may not represent the best use of the country’s resources. Programme-focused regulatory decisions – for new and existing programmes – account for more than 10 000 of the 12 000 acts that SERES handles annually. The Brazilian system of quality assurance currently focuses proportionally more effort on the programme level than on the institutional level as a unit of evaluation and monitoring. Permitting HEIs with demonstrated capacity to assume responsibility for the quality of the programmes that they offer and to become “self-accrediting institutions”, following rigorous institutional reviews, could significantly reduce the burden of programme approval through authorisation and recognition. It would also allow attention to be focused on programmes that present greater quality risks in institutions not granted self-accrediting status. Quality guarantees could be maintained across the system by an enhanced system of programme-level monitoring indicators and a more rigorous and comprehensive process of institutional re-accreditation.

3. In the near term, take steps to improve the evaluation process for programmes that remain subject to programme-level authorisation and recognition

The OECD review team sees a clear case for maintaining programme-level authorisation and strict market entry requirements at programme level for HEIs that lack a strong track record of good quality provision and are not able to demonstrate adequate capacity to self-accredit their own programmes. It is thus important to increase the effectiveness of these processes in promoting quality practices for institutions that remain subject to programme-level authorisation and/or recognition. Priorities for improving current practice in the short-term include:

  • Further improving the criteria used to select and assign peer reviewers for on-site reviews to increase the fit between reviewer expertise and programme review responsibilities.

  • Continuing and increasing efforts to improve the training of peer reviewers, with a view to improving the reliability and impartiality of scoring.

  • Increasing the weight attached to the organisation and implementation of teaching and learning in the evaluation instrument for recognition, reflecting the importance of these factors for students.

  • In cooperation with international peers, refining and expanding the specific indicators used for the evaluation of distance education programmes, so that these address the particular risks associated with this type of provision. This should consider how best to evaluate decentralised distance education centres (“poles”).

  • Using the recently introduced process of feedback about the performance of peer reviewers to monitor and revise selection and training.

4. In the longer term, take steps to reduce further the burden and improve the effectiveness of quality assurance processes for programmes outside self-accrediting institutions

In the longer term, two issues should be considered in particular. First, the procedures for on-site visits could be fundamentally reformed. Responsibility for reviewing institutional infrastructure and basic institutional policies could be assigned to a well-trained and professionalised inspectorate. The expert judgement of academic peers (who currently review all aspects of institutions and programmes) could then be applied to a more limited set of indicators than at present, focused on core teaching and learning activities. A sequenced process of accreditation and authorisation could be implemented in which a professional inspectorate initially carried out its work, and academic peers would be engaged only for institutions and programmes that have passed a first stage of review. Second, it will be important to identify ways in which more extensive, quantitative and comparable information about intermediate programme performance can be incorporated into the process of programme recognition. Examples include student attrition from programmes, and student feedback concerning the teaching and learning environment.

1.4. Assuring and promoting quality for existing undergraduate programmes

Main findings

Student testing, programme indicators and on-site reviews all play a part in the ongoing quality assurance of undergraduate programmes

Once undergraduate programmes have been recognised, they are subject to an ongoing cycle of evaluation coordinated by INEP on behalf of the Ministry of Education. As currently designed, this cycle involves the collection and collation, by INEP, of programme-level data, including the results of the programme’s students in a national assessment of learning outcomes, to create a composite indicator of programme performance every three years. As a general rule, in cases where programmes receive low ratings on this composite indicator, a new on-site programme review is undertaken by external evaluators. The results of a programme in relation to the composite indicator and, where used, the on-site review determine whether or not SERES renews its official recognition, thus guaranteeing that the diplomas awarded by the programme retain national validity.

ENADE is a set of tests used to measure the performance of students in undergraduate programmes

Each year, students graduating from undergraduate programmes registered in a particular set of disciplinary fields are required to take a mandatory competence assessment – the National Examination of Student Performance (ENADE). Disciplines are assigned to three broad groups, with disciplines in group I evaluated one year, group II the year after and group III the year after that, meaning each discipline is subject to ENADE every three years. The ENADE tests contain a general competence assessment, common to exams in all fields in a single year, and a discipline-specific component. There are currently separate ENADE tests for 88 study fields, with discipline-specific questions for each test developed by multiple academics selected by INEP. In addition, all students participating in ENADE are required to complete a student feedback questionnaire providing biographical information and a personal assessment of their programme. The average results obtained by students in each programme are used to calculate an “ENADE score” on a five-point scale for that programme. These scores are published and also feed into the composite indicator of programme quality discussed below.

The objectives established for ENADE by the legislator are unrealistic

The objectives of ENADE, as currently formulated, are unrealistic. The legislation establishing SINAES requires ENADE to measure students’ performance in relation to the content of relevant national curriculum guidelines, their ability to analyse new information and their wider understanding of themes outside the scope of their programme. The requirement to measure understanding of unspecified “themes outside the scope of the programme” – which has given rise to the general competence assessment in ENADE – is inherently problematic because it is so general and the knowledge and skills assessed, by definition, are not part of the programme’s core intended learning outcomes. It is thus unclear how those running programmes could be expected to equip students with such a range of unspecified knowledge and skills, or why they should be held accountable for students not having these competences at the end of their studies.

As ENADE is a written examination of restricted duration, it is also impossible for it to measure the full range of learning outcomes that any adequately formulated curriculum guidelines should contain. Moreover, because ENADE is presented as measuring students’ learning outcomes in relation to the National Curriculum Guidelines for undergraduate programmes, there is a risk that the content of ENADE (in a given year or over several years) comes to be seen to define what is important in the National Curriculum Guidelines. If ENADE is to be maintained, Brazil’s legislators and quality assurance authorities need to provide a more credible account of what it can realistically achieve and how risks for innovation and responsiveness can be mitigated.

There are significant weaknesses in the way ENADE is currently designed and implemented, which undermine its ability to generate reliable information on student performance and programme quality

The OECD review team considers that there are at least five principal weaknesses in the way ENADE is currently designed and implemented:

  1. The first problem relates to the participation of students and their motivation to make an effort in the test. A proportion of the students who should take the test each year do not do so: across years, between 10% and 15% of students registered for the test fail to turn up on the day. Moreover, there are concerns among stakeholders in Brazil that some HEIs seek to avoid registering a proportion of their students for ENADE. At the same time, ENADE is a high-stakes exam for HEIs (as it is used in the quality rating of their programmes), but a low-stakes exam for students. Although attendance is compulsory, ENADE scores have no effect on students’ academic records and there is evidence that a significant proportion of students do not complete large parts of the test. Evidence from other OECD and partner countries suggests that where the results of tests have no real consequences for students, student motivation and performance suffer. Low student motivation is likely to have negative implications for the validity of ENADE results as an accurate reflection of the learning outcomes of students.

  2. A second concern relates to the development, selection and use of test items for each ENADE test. At present, there is no robust methodology to ensure that the difficulty of each test item is taken into account in the composition of the test and thus that a) tests in the same field are of equivalent difficulty between ENADE cycles and b) tests in different fields are of a broadly similar level of complexity. This means that it is not possible to compare the raw results or the Conceito ENADE between years or between disciplines. A related question is whether the number of discipline-specific items included in ENADE (30) is adequate to generate a reliable indication of students’ learning outcomes from an undergraduate programme. The answer almost certainly depends on what the exam is for. A robustly designed examination with only 30 items may be able to provide a general indication of a student’s level of knowledge and competencies in a specific disciplinary field. However, such a test is unlikely to provide reliable evidence of students’ performance in specific sub-fields or aspects of the curriculum, which limits its usefulness as a tool to help teaching staff and institutions improve the design of their programmes.

  3. A third problem is that no explicit quality thresholds or expected minimum levels of performance are set for ENADE tests. Without tests of a comparable standard of difficulty and without defined quality thresholds (pass, good, excellent, etc.), ENADE scores are simply numbers, reported according to their position in relation to all other scores in the same test. It is thus impossible to know if students in programmes that achieve 50% or 60% in ENADE are performing well or poorly.

  4. A fourth problem relates specifically to the design of the general competencies (formação geral) component of ENADE. This is currently composed of general knowledge questions regarding current affairs and social issues, including two questions that call for short discursive answers. However, unless all undergraduate programmes have knowledge of current affairs and social issues as explicit intended learning outcomes – which is not the case – it is unreasonable to judge individual programmes on students’ performance in these areas.

  5. A final issue is that the standardisation of ENADE scores compounds the lack of transparency about what ENADE results really mean. Raw marks are converted to a five-point scale based on the standard distribution of scores in a single subject in a given year. As tests may vary in difficulty and cohorts obtain very different distributions of scores, where a programme falls on a standard distribution of the scores for all programmes says little about the actual quality of the programme in question.
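The standardisation mechanic described in the final point can be illustrated with a short sketch. The cut-points and scores below are hypothetical, not INEP’s actual algorithm; the point they illustrate is that a norm-referenced band reflects a programme’s position relative to its cohort, not any absolute quality standard.

```python
from statistics import mean, pstdev

def standardised_bands(raw_scores, cuts=(-1.0, -0.5, 0.5, 1.0)):
    """Map raw programme averages to 1-5 bands via z-scores.

    Illustrative only: the cut-points are hypothetical. The band a
    programme receives depends on the distribution of scores in that
    year's test, not on a defined quality threshold.
    """
    mu, sigma = mean(raw_scores), pstdev(raw_scores)
    bands = []
    for s in raw_scores:
        z = (s - mu) / sigma
        # Position relative to peers, not absolute performance,
        # determines the band.
        bands.append(1 + sum(z >= c for c in cuts))
    return bands

# The same raw score of 60 is mid-table in a strong cohort but
# top-band in a weak one, even though the students' actual
# performance is identical.
strong_cohort = standardised_bands([80, 70, 60, 50, 40])  # 60 -> band 3
weak_cohort = standardised_bands([60, 50, 40, 30, 20])    # 60 -> band 5
```

This is why, as argued above, a band of three on a norm-referenced scale cannot by itself tell students or regulators whether a programme meets an adequate standard.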

These reliability issues and the limited use of ENADE results by HEIs for quality improvement call into question the resources dedicated to the exam

The results of ENADE are used by INEP and SERES for regulatory purposes, as discussed below. However, institutions consulted by the OECD review team report that they do not use ENADE results in efforts to improve the design and content of programmes. Representatives of institutions consistently indicated that they did not see ENADE as providing useful feedback to help them improve their programmes. Although the OECD review team does not have access to a detailed breakdown of the costs of implementing ENADE, these account for a substantial part of INEP’s budget for the evaluation of higher education, which amounted to over BRL 118 million (USD 30.7 million) in 2017. It is questionable whether the quality and usefulness of the results achieved with the exam as currently configured justify the investment of public resources committed.

The Preliminary Course Score (CPC) is used as a composite indicator of programme quality

To monitor programme performance, INEP currently uses a set of indicators comprising a) measures of student performance and assumed learning gain (based on ENADE test results); b) the profile of the teaching staff associated with the programme and; c) feedback from students about teaching and learning, infrastructure and other factors from the questionnaires they complete in advance of taking the ENADE test. When new ENADE results are available for each programme, after each three-year cycle of testing, INEP calculates a programme score – the Preliminary Course Score (CPC). Programmes that score below three out of five on the CPC are systematically subject to on-site inspections by external review commissions, with a positive evaluation score (a Course Score – Conceito de Curso (CC) – of three or above) a pre-requisite for renewal of their official recognition. Programmes that score three or above on the CPC generally have their recognition renewed automatically by SERES, without having to undergo an on-site inspection.

The CPC is an unreliable measure of quality, lacking transparency

The CPC attributes 35% of its total weight to an “Indicator of difference between observed and expected performance” or IDD. This is calculated by comparing each student’s actual results in ENADE with the performance that would be expected given their previous performance in the national high school leaving exam, ENEM. The combination of the boldness of the underlying assumptions about the predictive value of ENEM results for the performance of undergraduate students; the poor reliability of ENADE results in the first place; and the potential influence of factors outside the control of the programme on student performance means that the IDD provides only limited information on programme quality.
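The logic of the IDD can be sketched as a simple value-added residual. The actual INEP calculation is more elaborate; the single-predictor regression and the scores below are a hypothetical stand-in used only to show why noise in either test, or any factor outside the programme’s control, feeds directly into the “value added” number.

```python
def fit_line(x, y):
    """Ordinary least squares with one predictor (entry scores)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def idd_like(enem, enade):
    """Residual of observed exit performance over the level predicted
    from entry scores - an IDD-style 'difference between observed and
    expected performance'. Illustrative only: any measurement error in
    the exit test appears here as apparent programme quality.
    """
    slope, intercept = fit_line(enem, enade)
    return [y - (slope * x + intercept) for x, y in zip(enem, enade)]

# Three students with hypothetical ENEM entry and ENADE exit scores:
residuals = idd_like([500, 600, 700], [40, 50, 70])
```

By construction the residuals are centred on zero across the cohort, so the indicator is relative: it ranks programmes against the regression line rather than against any absolute standard of learning gain.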

Moreover, it is widely accepted that the weightings attributed to the different indicators in the CPC are arbitrary, with no discernible scientific basis (TCU, 2018[3]). This further compounds the lack of transparency about what the scores attributed to courses really mean in practice for students, families and society at large. It is positive that the CPC sets out to include indicators of the teaching process (through the imperfect proxy of teaching staff status); qualitative feedback from students (the main beneficiaries of the system) and measures of student learning outcomes. It does not, however, contain a measure of the attrition rate of students (what proportion of students entering a programme complete it) or the subsequent employment outcomes of students.

The principle of using indicators to identify “at risk” programmes and target finite resources for on-site inspections makes sense, especially in a system as large as Brazil’s. However, the CPC does not provide a reliable mechanism to identify poorly performing courses. The absence of quality thresholds in ENADE and the standardisation processes used to create the ENADE score, combined with the weaknesses of the IDD, mean it is far from clear whether a CPC score of three represents an adequate standard of quality or not. A reform of the monitoring indicators used and the way they are combined is necessary.

Site visits are undertaken for programmes that perform poorly on the CPC

When programmes are identified through the CPC as performing poorly – often meaning they have poor relative performance in ENADE – they are subject to an on-site inspection by external evaluators, coordinated by INEP. The evaluators assess the supply conditions for the programme using the same evaluation instrument that was already used for programme recognition (reconhecimento). The results of the new on-site inspections are used by SERES as a basis for decisions on the renewal of programmes’ official recognition. The evaluation attributes a new quality score – an updated Conceito de Curso (CC) – that effectively replaces the CC attributed at the time of initial recognition and exists alongside the CPC score in the e-MEC system.

These site visits use a review template and scoring system that do not focus on identifying the causes of poor performance in the CPC and do not consider graduation rates and graduate destinations

The on-site visits for renewal of recognition, as currently organised, recheck compliance with basic standards that were already verified during the initial on-site visit for the recognition of the programme. The evaluation instrument for recognition and renewal of recognition places a 40% weighting on the category “teaching staff” and 30% on “infrastructure”, with just 30% attributed to teaching and learning policies and practices. The indicators and judgement criteria relating to teaching staff mostly focus on the qualifications and experience of the individuals in question, with only three indicators dealing with the activities (atuação) of staff or their interaction with each other.

As such, the renewal-of-recognition on-site reviews do not focus strongly on the teaching and learning-related factors that might be expected to have the greatest influence on student performance and quality. Frequently, it appears that programmes which score poorly on the CPC subsequently achieve a higher score on the CC (TCU, 2018[3]), thus nominally recovering a higher quality rating. It is understandable that the CPC and the inspections leading to the CC can generate different values, as they measure almost entirely different things. A greater focus on teaching activities and greater attention to outputs and outcomes (attrition rates, learning outcomes, graduation rates and employment outcomes) would make the evaluation instrument more effective in identifying the real causes of poor performance.

More generally, the objective of targeting on-site inspections on weakly performing programmes has advantages, as the systematic use of periodic on-site inspections for all existing programmes in Brazil would almost certainly be unfeasible for logistical and financial reasons. However, it also means programme-level site visits at this stage in the evaluative process always have a punitive character and that peer reviewers are not exposed to good practice in well-established programmes, which could inform their judgements about, and recommendations to, poorly performing programmes.

Key recommendations

1. Undertake a thorough assessment of the objectives, costs and benefits of large-scale student testing as part of the quality assurance system

Officially, ENADE currently seeks to assess students’ acquisition of the knowledge and skills specified in the relevant National Curriculum Guidelines (DCN) or the equivalent documents for Advanced Technology Programmes, as well as their understanding of unspecified “themes outside the specific scope” of their programme. This is an unrealistic objective that no standardised test could achieve. Moreover, as discussed in the preceding analysis, the current design and implementation of the ENADE tests are characterised by significant weaknesses. At present, ENADE results are used extensively as a basis for regulatory decisions (renewal of programme recognition), but are not used by institutions and teachers to identify areas where their programmes need to be strengthened.

The OECD team believes that, in its current form, ENADE does not represent an effective use of public resources. As such, as a basis for decisions on the future of the system, a thorough reflection is needed about the objectives of large-scale student testing in Brazilian higher education and the costs and benefits of different approaches to implementing it. The main questions to answer are:

  1. Can an improved version of ENADE, addressing the current design and implementation weaknesses noted in this report, be implemented and generate reliable information about the quality of undergraduate programmes?

  2. Could the information about the quality of programmes generated by a revised ENADE be provided by other, potentially more readily available, indicators? What is the specific and unique added value of ENADE results?

  3. If a revised version of ENADE does indeed have the potential to generate valuable information that cannot be obtained from other sources, does the value of this information justify the costs of implementing ENADE? How can the costs of implementation be minimised, while still allowing ENADE to generate reliable and useful results?

The OECD team believes two factors should be considered in particular. First, for ENADE to have the greatest possible added value, it needs to be able to provide reliable information that can help teachers and institutions to identify areas of weakness in their programmes (in terms of knowledge coverage or skills development). ENADE results cannot simply be a blunt indicator used to inform the regulatory process, as other indicators, such as graduation rates or employment outcomes could be used for this purpose. Second, the current requirement to apply the ENADE test to all programmes every three years increases the fixed cost of implementing the system. It is important to consider whether sampling techniques could be deployed to reduce costs, while maintaining reliability.

2. If a reformed version of ENADE is retained, ensure the objectives set for the exam are more realistic

If the decision is taken to maintain a revised version of ENADE, it is crucial to ensure the objectives set for it in the relevant legislation and implementing decisions are realistic and clearly formulated. The objective of a reformed ENADE could be to provide:

  • An indication – rather than a comprehensive picture – of the performance level of students in relation to intended learning outcomes, as one indicator, alongside others, in a comprehensive system of external quality evaluation and;

  • Data on student performance that can be used directly by teachers and institutions in identifying weaknesses in their programmes as a basis for improvement (quality enhancement).

To achieve these objectives, the test should focus on measuring knowledge and skills that programmes explicitly set out to develop in their students. This means abandoning claims to measure abstract general knowledge with no direct link to the programme and focusing on a) selected discipline-specific knowledge and skills and b) generic competencies that can realistically be developed in an undergraduate programme. The latter category might include critical thinking and problem solving, which can, in principle, be assessed using discipline-specific test items.

3. Improve the design of ENADE tests to ensure they generate more reliable information on learning outcomes that can also be used by teachers and HEIs

If maintained, ENADE tests should be designed in a more rigorous way to ensure that they are of comparable levels of difficulty within subjects from one year to the next and that tests for different disciplines are of equivalent difficulty for equivalent qualifications (bachelor’s, advanced technology programme, etc.). This may require a shift from classical test theory to item-response theory. As part of this process, performance thresholds and grades should be established clearly in advance. The objective should be to provide students and programmes with easily understood and usable grade-point averages and grade distributions. The approaches to both test design and performance thresholds used by CENEVAL in Mexico or testing organisations in the United States might provide valuable inspiration on how a revised form of the ENADE tests could be developed. It is important for INEP to draw on the expertise of other organisations involved in standardised testing internationally in the development of new approaches and test formats, to ensure it benefits from a wide range of expertise.
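The shift towards item-response theory mentioned above can be illustrated with its simplest variant, the one-parameter logistic (Rasch) model, in which the probability of a correct answer depends only on the gap between a student’s ability and an item’s calibrated difficulty, both expressed on a common scale. This is an illustrative textbook formula, not a description of any existing INEP methodology.

```python
import math

def p_correct(ability, difficulty):
    """Rasch (1PL) model: probability that a student of a given
    ability answers an item of a given calibrated difficulty
    correctly. Both parameters sit on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student whose ability equals the item difficulty has a 50%
# chance of success; harder items lower that probability.
# Because calibrated item difficulties are comparable across test
# forms, ability estimates from forms of different raw difficulty
# can be placed on the same scale - the property that comparisons
# between ENADE cycles currently lack.
```

The design choice this enables is item banking: once items are calibrated, each year’s test can be assembled to a target difficulty profile, making results comparable between cycles and, with defined thresholds, criterion-referenced rather than purely norm-referenced.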

4. Explore ways to make the results of ENADE matter for students

If maintained, ENADE needs to be made into a higher-stakes exam for students, so that they make an effort to demonstrate the actual level of knowledge and skills they possess. Currently, it is difficult to make ENADE results count towards individuals’ degree scores, not only because of institutional autonomy, but because only every third cohort has to take ENADE, and including ENADE in degree results may be perceived as unfair to the cohorts required to take the test. As a minimum, the ENADE score could be included in the student’s diploma supplement. Alternatively, ENADE could be made into a curriculum component for the years in which, or – in the case of sampling – for the students to whom, it is administered, with the requirement that institutions administer an equivalent test to students in other years. It is not yet clear whether this would be legally possible.

5. Introduce a new indicator dashboard, with a broader range of measures, to monitor programme performance and identify “at risk” programmes

The use of the Preliminary Course Score (CPC) cannot be justified in its current form for the reasons discussed above. However, systematic programme level data are a crucial tool for monitoring a system as diverse and variable in quality as Brazil’s. The most promising option would be to include a broader set of more transparent indicators in an ongoing monitoring system, with thresholds established to indicate “at risk” performances on different indicators. This information could then be used to inform regulatory decisions and feed into subsequent evaluation steps (such as on-site reviews). The system should apply to all programmes, with data obtained from institutions and other sources, as appropriate, and consolidated in a renewed version of e-MEC.

Such a system could use a more diverse set of indicators of teaching staff, real (not standardised) ENADE results (based on established performance thresholds), an indicator of drop-out rates and, when possible through linking data sources using the national identity number (CPF), information on employment rates and earnings. Indicators of the socio-economic profile of students could be included in the system, with higher tolerances for issues like drop-out or ENADE performance for programmes with intakes from lower socio-economic groups. Such variation in tolerances should be limited, as all students should be expected to reach minimum standards and all programmes should be expected to retain a certain proportion of their students. A revised form of the IDD could potentially be maintained alongside the other indicators in the indicator dashboard, provided its status as a proxy for expected performance and its limitations are made clear, and its weight in the overall monitoring system is reduced.

The OECD review team understands that INEP is already planning (October 2018) to “disaggregate” the components of the CPC and complement these with additional indicators to inform the regulatory process. It is hoped that this recommendation will support that process.

6. As part of a new system of institutional accreditation, exclude institutions with demonstrated internal quality assurance capacity from on-site programme reviews for the duration of their accreditation period

There is scope to exempt institutions that have a track record of good performance and can demonstrate a high level of internal quality assurance capacity from systematic ongoing programme-level review. As discussed below, this would require existing systems for institutional accreditation and re-accreditation to be strengthened. If problems were identified through programme indicators in the indicator dashboard, such institutions would be responsible for addressing these issues internally. Addressing poor quality would become a key focus of institutional review, and poor performance or failure to address problems adequately could lead to institutions losing self-accrediting status in the subsequent round of institutional review. This move would further reduce the burden of external programme-level reviews for renewal of recognition (as well as the initial recognition process).

7. Maintain programme-level supervision for other institutions, with targeted on-site reviews for poorly performing programmes and randomly selected high-performing programmes

For the remaining institutions, programme-level review would be maintained. The new programme-level indicator dashboard (which would cover all programmes, including those in self-accrediting institutions) would allow poorly performing programmes to be identified and would replace the current CPC system. If annually collected data on completion rates and employment outcomes were included in the dashboard, alongside input indicators and periodic results from a reformed ENADE, this would allow more effective continuous monitoring of programmes. Problematic programmes could first be called upon to submit an improvement plan that could be assessed remotely, largely in line with current supervision procedures. SERES, or a future quality assurance agency (see below), could decide on timeframes for improvement and whether and when an on-site visit would be required. It is crucial that SERES, or a successor agency, have the capacity to close poorly performing programmes rapidly where programme indicators fail to improve without clear justification and where evaluators give a negative assessment following an on-site inspection.

However, while targeting of resources is important, the risk of evaluators only being exposed to poor quality programmes – and thus lacking good reference points – needs to be addressed. As such, it is recommended that reviewers also take part in reviews of randomly selected programmes that obtain good scores in relation to monitoring indicators – potentially including programmes in “self-accrediting” institutions - to allow them to gain more insights into the range of practices and performance that exists in their field in the country.

8. Develop a separate evaluation instrument for on-site reviews of established programmes

The current process for on-site reviews of established undergraduate programmes uses the same evaluation and judgement criteria as the instrument for programme recognition (which occurs when the first student cohort has completed between half and three-quarters of the programme). This instrument pays insufficient attention to programme outputs and outcomes (notably the results of (a revised) ENADE, attrition and graduation rates and employment outcomes) and to the teaching and student support practices that would be expected to have the greatest influence on these outputs and outcomes. A new instrument should thus be developed for on-site reviews of established programmes, which places most emphasis on these factors. The earlier suggestion for an inspectorate to examine infrastructure and basic institutional policies would mean that site visits by peer reviewers should focus exclusively on the learning environment and possible causes of poor outputs and outcomes.

1.5. Assuring the quality of postgraduate education

Main findings

A dedicated system for the evaluation of academic postgraduate programmes

The system of external quality assurance for academic postgraduate education in Brazil began in its current form in 1998. It evaluates and regulates academic (stricto sensu) Master’s programmes and doctorates. In Brazil, stricto sensu Master’s courses – including so-called “Professional Master’s” – are widely understood as the first stage in an academic or research career – a situation that is largely a reflection of the relatively recent expansion of doctoral education in the country. In many other OECD higher education systems, Master’s programmes, where they exist, are seen primarily as an extension and deepening of undergraduate education, preparing graduates for a wide range of high-skill jobs in the economy.

The CAPES evaluation system includes a specific approval process for new courses (APCN), designed to ensure only academic teams with demonstrated expertise, a proven track record of quality research and adequate facilities are authorised to provide academic postgraduate education. Course proposals are assessed by a field committee composed of academic peers from the field in which the course seeks to operate. Following a standard assessment and validation process, new courses are formally approved if they score at least three on a nominal scale of one to five. Every four years, CAPES implements a comprehensive evaluation of all academic postgraduate programmes that have already been accredited and been in operation sufficiently long for students to have produced academic results. The results of this evaluation – attributed through scores on a scale of one to seven - allow programmes to continue operating or, in case of poor performance, lead to withdrawal of funds and recognition for the diplomas they award. This effectively means programmes that fail the CAPES evaluation are forced to close.

The system for approval of new programmes sets a high bar for entry to the system, but there is scope to review the balance of quality indicators used

The APCN process consciously sets a comparatively high bar for entry into the system of academic postgraduate training and for the creation of doctoral training provision in programmes that already operate at Master’s level. In so doing, it seeks to maintain high minimum standards for postgraduate education, protect students against poor quality provision and ensure efficient targeting of public funding. During the review visits, the OECD team noted a high degree of support for the principle of maintaining a high threshold for entry into the academic postgraduate education system.

The criteria examined in the process for approval of new courses cover a wide range of the variables that might reasonably be expected in an ex-ante assessment of a proposed postgraduate programme. However, the current evaluation system attaches comparatively limited attention to the relevance of new courses to national or regional needs and developing knowledge areas; to the design of the training programme; and to support and personal development opportunities offered to students. Although the coherence of the proposed course with the Institutional Development Plan (PDI) of the host institution is assessed, there is no explicit assessment of the relevance of the course to the needs of Brazil, in terms of knowledge development and highly qualified human resources. Similarly, there is little obvious room in the evaluation templates currently used to assess how the training programme will help to develop students’ knowledge and skills and monitor their progress.

The four-year periodic reviews involve resource-intensive review of staff outputs, while neglecting training conditions, student output and graduate destinations

Every four years, the field committees draw on information on staff, students, graduates and details of scientific outputs reported by each postgraduate programme through the online Sucupira platform as a basis for their assessment of each programme. The quality of student publications and of the academic output of staff in academic journals is assessed using a standard classification of publication “vehicles”, recorded in an online database called Qualis. The assessment of books and book chapters, which is undertaken by physically reviewing in depth a sample of publications for each programme, represents one of the largest calls on the time of members of some field committees (notably in the humanities, social sciences and some of the hard sciences).

The set of indicators used in the CAPES four-year evaluations covers many of the key variables that would widely be assumed to contribute to high quality postgraduate provision. It is positive that the evaluation grid, under different headings, takes into account factors such as staff-to-student ratios, time to graduation and cooperation networks with external research and non-academic organisations, for example.

However, the most striking feature of the four-year reviews is the strong focus on the scientific output of the academic staff involved in the programmes being evaluated. The CAPES evaluation is – nominally at least – an evaluation of postgraduate training programmes, not a research performance evaluation. As such, it is hard to justify the weight and resources the system allocates to assessing the performance of staff, rather than the performance of students and the outcomes of graduates. Although there is some attempt in the current CAPES system to assess the destinations of graduates from programmes, this aspect of programme performance is not currently addressed adequately.

The reliance on peer review will make the system harder to scale as postgraduate education expands, while inbreeding creates risks for objectivity and quality

Despite the strengths of the current division of responsibilities within the CAPES evaluation system, the system relies heavily on the voluntary contribution of academic staff organised in discipline-specific field committees. Although the academics involved in the CAPES evaluation process consulted by the OECD review team felt that the time and effort required of them by the current system for approval of new programmes remained reasonable, they highlighted that the CAPES system as a whole is becoming unmanageable for field committees as the number of postgraduate programmes increases.

The OECD review team understands that no assessment of the value of the time dedicated to the evaluation of courses by academic staff in the field committees – and thus also of the cost to their home institutions – is currently available. Given the comparatively rapid expansion of postgraduate provision in Brazil in recent years and the related increase in the number of proposals for new courses, it will be important to develop a better understanding of the number of person-hours used in the evaluation process and the associated costs.

The reliance on disciplinary committees composed exclusively of Brazilian academics also risks creating an excessively narrow academic focus in evaluations. While scientific excellence and traditional measures of academic output remain the basis for postgraduate education, it is important to complement this with perspectives from outside academia, to ensure that the development of postgraduate education responds to broader national and regional needs. Moreover, the current process for the evaluation of new courses involves limited or no direct interaction between those proposing the new courses and those evaluating the proposals.

A second key issue with the staffing of CAPES evaluation processes is the risk of endogamy (inbreeding). Even in a country the size of Brazil – particularly given the relatively small size of its postgraduate training system – the number of established academics in a given field of study is limited. The number working in very high-quality departments and programmes at an international level is even smaller. As such, there is a risk that the people making judgements on whether or not a given programme is of international standard have close connections with the programmes they are judging. Moreover, the comparatively small pool of evaluators and their backgrounds may lead the evaluation process to reward programmes that reproduce existing models of education, rather than innovate.

Key recommendations

1. Adjust the weighting of evaluation criteria in assessment of new courses to focus more on relevance, training and continuous improvement

The OECD review team considers the current evaluation process for new courses could be improved by adopting the following modifications:

  • Revise the structure of the evaluation fiche for new courses to create a more transparent structure that follows the intervention logic for postgraduate training programmes, moving from inputs to outputs with a clearly explained rationale for each indicator used.

  • Include a separate section in the evaluation fiche on the relevance of the programme to national development needs, taking into consideration the development of new scientific areas and the knowledge and skills required for the further development of the private and public sectors in the country, including in natural sciences, social sciences and the arts.

  • Increase the weight attached in the evaluation of new courses to the training dimension of programmes and support provided to students, with an assessment of the likely capacity of the programme to equip students with relevant research and transversal skills.

  • Include a more explicit requirement for a programme development plan for all new programmes approved, setting out specific and measurable goals over time.

2. Bring additional perspectives into the evaluation of new programmes

To bring a broader range of perspectives to the process and potentially promote innovation and inter-disciplinary cooperation, CAPES should involve one or more academics from other academic fields in the field committees undertaking the assessment of new courses. In addition, to bring in expertise and perspectives from outside the academic community, CAPES should consider appointing specialists in economic development and the evolution of skills and knowledge requirements, as well as representatives of the private economy and the wider public sector to the Scientific and Technical Council (CTC-ES). If implemented effectively, this could ensure that final decisions on programme approval take into account broader national needs and developments.

3. Maintain programme-level accreditation in the medium-term, but consider the long-term desirability of transitioning to institutional self-accreditation for established institutions and programmes

Brazil’s postgraduate education system has grown rapidly in recent years and might still be considered to be in a phase of consolidation, when compared to postgraduate education systems in many other OECD and partner countries. In the medium term, it therefore makes sense to maintain course-level accreditation, to maintain oversight of the continued development of the system and ensure the promotion of quality. In the longer term, it could be possible to move to a system of institutional self-accreditation linked to a strengthened model of institutional accreditation. This would allow universities to start academic postgraduate programmes if they met certain criteria in terms of staff and profile and had been judged to have strong institutional quality systems in an institutional quality review (see below). The provision of publicly funded scholarships and additional programme funding should certainly remain dependent on positive external evaluation of the programme, in line with practice in many OECD systems.

4. Clarify the objectives of periodic evaluations and rebalance the focus of evaluation criteria to include greater focus on student outputs and outcomes

The periodic (four-year) evaluations of postgraduate programmes currently devote disproportionate attention and resources to assessing the outputs of academic staff. CAPES evaluations should focus on assessing the conditions for, and performance of, postgraduate training, not the research output of academic departments. The OECD review team therefore recommends increasing the weight attributed to educational processes, student outputs and employment outcomes, and reducing the weight attributed to staff outputs. This would make it possible to reduce the time and resources allocated to the assessment of staff output, by assessing only a limited sample of research output. The Qualis system for journal rankings should also be reviewed, to introduce more uniformity in the classification of journals between knowledge fields. CAPES should also consider whether it is feasible to include interviews with course and programme coordinators systematically as part of the periodic assessment of courses and programmes, to gain additional insights into the operation and performance of the programme and to answer questions arising from documentary evidence.

5. Ensure those judging whether programmes are of international standing really have an international perspective

Given Brazil’s aspiration to develop a world-class postgraduate training system, it would be valuable to gain an international perspective on the programmes judged nationally to be among the best in the country. The OECD review team therefore recommends that CAPES systematically involve non-Brazilian academics in the assessment of programmes pre-selected by field committees as candidates for being programmes of international quality or excellence. In light of the number of programmes involved, it is likely to be most feasible to concentrate this international involvement on programmes proposed for the top score of seven. It may be possible to organise international peer review committees, who are able to review synthesised information about the programmes under review in English or Spanish, and potentially conduct group interviews remotely or in person with programme coordinators.

6. Undertake evaluations of specific components of the CAPES system and aspects of academic postgraduate provision as inputs to future policy

The OECD review team identified two specific issues where further information and analysis appears to be required in order to plan future policy for academic postgraduate education in Brazil, and its external quality assurance:

  • First, the full costs associated with the current system of external peer review are a “black box”. Peer review is inherently time-consuming and therefore expensive. The time academic staff spend involved in peer review is time they are not dedicating to their core activities of teaching, research and engagement with society. In order to help plan the future development of the system of peer review, CAPES should undertake an assessment of the cost of the time used by members of the field committees in the evaluation process, including the unit cost per programme evaluation.

  • Second, there is a wider question relating to the future of academic (stricto sensu) Master’s programmes. It would be valuable to undertake a systematic evaluation of the role of Master’s education in Brazil, including a specific focus on the profile and effectiveness of the professional Master’s programmes created in recent years. This evaluation should consider, in particular, the destinations of previous graduates from these programmes and the views of the academic community and private and public sector employers on the relevance and future role for Master’s level education in Brazil.

1.6. Assuring the quality of higher education institutions

Main findings

HEIs are also subject to monitoring and periodic re-accreditation

Legally, both private and federal public institutions are subject to periodic re-accreditation (recredenciamento), based on on-site reviews coordinated by INEP. For private institutions, successful re-accreditation is a prerequisite for continued operation (although “de-accreditation” is rare). For federal public institutions, the process is essentially no more than a formality, as they cannot have their accreditation removed. The period for which (re-)accreditation is valid varies depending on the organisational status of the institution and the institutional quality score (CI) already awarded to it. Universities and university centres are only re-accredited every eight to ten years, while colleges must be re-accredited at least every five years. In addition, institutions are subject to annual monitoring, based on the average performance of their programmes in relation to SINAES programme-level indicators and the results of CAPES evaluations for stricto sensu postgraduate programmes. The weighted averages of the Preliminary Course Score (CPC) and, where applicable, the scores attributed by CAPES for new and existing postgraduate programmes are used to produce an overall score for each institution called the “General Course Index” (IGC).
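As an illustration of the aggregation logic behind the IGC, the sketch below computes an enrolment-weighted average of programme scores. The function names, the weighting scheme and the example figures are hypothetical illustrations; the official INEP formula differs in its details (for example, in how CAPES scores on the one-to-seven scale are converted before aggregation).

```python
def weighted_average(scores_and_weights):
    """Enrolment-weighted mean of (score, enrolment) pairs."""
    total_weight = sum(w for _, w in scores_and_weights)
    return sum(s * w for s, w in scores_and_weights) / total_weight

def general_course_index(undergrad, postgrad=None):
    """Illustrative IGC-style aggregate: combine undergraduate CPC scores
    and, where applicable, postgraduate scores, each weighted by enrolment.
    This is a simplified sketch, not the official INEP calculation."""
    combined = list(undergrad) + list(postgrad or [])
    return round(weighted_average(combined), 2)

# Hypothetical institution: three undergraduate programmes (CPC, enrolment)
# and one postgraduate programme (score already on a 1-5 scale, enrolment).
igc = general_course_index(
    undergrad=[(3.2, 400), (4.1, 250), (2.8, 150)],
    postgrad=[(4.5, 60)],
)
```

Because the aggregate is enrolment-weighted, a large weak programme pulls the institutional score down more than a small strong programme pulls it up, which is one reason the IGC clusters in a narrow band for large institutions.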

But institutional review plays a far less prominent role than in many other systems of external quality assurance

While the letter of the law governing quality assurance in higher education in Brazil accords a central role to institutional autonomy and self-evaluation, the practical implementation of the SINAES imposes a complex system of external programme-level scrutiny on a three-year cycle. For institutions that perform poorly in ENADE and on the CPC, this leads to regular programme-level inspections, using prescriptive processes that limit the room for manoeuvre of institutions. For institutions that tend to perform well in relation to ENADE and the CPC, particularly universities and university centres, which are only subject to institutional review every eight to ten years, on-site evaluations by external reviewers are comparatively infrequent.

There are few incentives for institutions in this position to develop strong internal quality assurance systems that go beyond the minimum requirements imposed by the legislation, or to promote quality enhancement internally on a continual basis. Interviews conducted by the OECD review team in several institutions suggest that Internal Evaluation Commissions (CPAs) focus primarily on ensuring compliance with SINAES rules and delivering data to INEP, rather than developing internal quality systems tailored to institutional needs or promoting innovation and quality improvements. This contrasts with the situation in many European countries and in the United States, where institutional review and evaluation of internal quality procedures form the core of external quality assurance practices.

The General Course Index (IGC) provides limited signals about institutional quality

The IGC score – also calculated on a scale of one to five – is used by external bodies and the media in reporting on the quality of higher education in Brazil. The IGC is widely perceived as a visible public signal of institutional quality, which institutions themselves feature in advertising. The real signal value of the IGC as a quality indicator for consumers is limited, however. While IGC scores range, in principle, from one to five, scores of one are virtually unknown, and nearly all scores cluster at values of three and four. In 2016, 93% of universities and 96% of university centres received scores of three or four. Setting aside the validity or reliability of the IGC, it is clear that its discriminating power for non-college institutions is low. Although the reputational effects of the IGC can be important, it is not an indicator that is likely to have an impact on how institutions understand and manage the quality of the education they provide. The IGC does not introduce new performance information for institutional leaders.

On-site re-accreditation reviews do not consider evidence of institutional performance and attach little weight to the quality of internal quality processes and their practical implementation

Like the other on-site review processes (such as recognition), the re-accreditation review process coordinated by INEP (potentially only every ten years) is focused on inputs and processes, rather than outputs or performance, and reviewers are responsible for scoring qualitative indicators on a five-point scale. Given that the process of re-accreditation necessarily focuses on institutions that are already operating, with graduating students and graduates, there is scope to include greater consideration of outputs (graduates and evidence of their learning outcomes) and outcomes (graduate destinations) in the institutional assessments at this stage. The current evaluation instrument for institutional re-accreditation devotes comparatively little attention (in terms of the number of indicators and judgement criteria) or weight to assessment of the internal evaluation capacity of institutions.

Owing to the schedules for re-accreditation, the institutional quality score awarded through re-accreditation processes (CI) is not calculated and reported on an annual basis, but rather with a periodicity that may range from three to ten years. In light of its infrequency, and perhaps because it is not linked to student outcomes as observed in ENADE, the CI score appears to function solely as a regulatory input, and not as a public signal of institutional quality.

Key recommendations

1. Reduce the period of re-accreditation for universities and university centres

Universities in some of the best-regarded higher education systems in the world must undergo external institutional reviews every four, five or six years. This is the case in the United Kingdom, the Netherlands and Sweden, for example. The current eight- to ten-year accreditation periods for universities and university centres mean these institutions have few incentives to develop robust institutional quality mechanisms, and problems in institutional quality management may go undetected for long periods. Instead of the current system, institutions with demonstrated internal quality capacity could be rewarded through dispensation from some or all aspects of programme-level review, subject to successful re-accreditation on a five or six-year cycle (see below).

2. Reduce the weight attached in institutional re-accreditation reviews to input and process indicators that measure basic supply conditions for higher education

There is scope to rebalance the weights attributed to the evaluation indicators used at the stage of institutional re-accreditation, away from inputs and towards processes and outputs. A first step is to remove indicators that measure basic supply conditions for higher education, such as infrastructure and equipment and general management policies. The availability of suitable infrastructure for each undergraduate programme is verified through the programme-level recognition and renewal of recognition processes, while some of the most general institutional policies are unlikely to change – or need to change – considerably over time. It is therefore wasteful to devote resources to re-evaluating and re-scoring these kinds of variables through the re-accreditation review. The inclusion of these indicators also reduces the proportional weight attributed to factors that it is important to verify in re-accreditation, such as educational results and institutional performance.

3. Increase the weight attributed to outputs and outcomes

Evidence about educational results and institutional performance is neglected in the current system of institutional re-accreditation. While processes of accreditation cannot take into account programmatic and institutional performance, re-accreditation can – but does not. Institutions should be able to graduate most students who begin their studies, and they should do so in a timely way. Those who graduate should be able to find employment, preferably in fields related to their area of study – and most certainly so if their studies have a career orientation – whether accounting, civil engineering, or nursing.

Quantitative programme and institutional indicators should ideally focus on the outputs and outcomes of higher education, while on-site reviews conducted by peers would helpfully focus on the inputs and processes that generate the outputs and outcomes observed in indicators. For example, indicators focused on outputs or outcomes, such as graduation rates, would be complemented by an on-site review process that examines the conditions that affect variation in these rates. These conditions include student advice and mentoring processes; how institutions identify students at risk of falling behind or dropping out; and the social or psychological, and academic support services provided to students at risk.

4. Increase incentives for institutions to take a strategic view of quality

The processes of institutional quality assurance do not encourage institutions to take a truly strategic and institution-wide view of quality. The IGC generates a score that is an aggregation of programme-level results. However, it does not generate a score that has been demonstrated to be useful in differentiating levels of institutional performance or providing actionable feedback to institutions. Institutional Development Plans (PDIs), in their current form, do not appear to provide an opportunity for institutions to take a comprehensive and strategic view of their institution, its profile and the quality of its educational programmes. It would be valuable to provide incentives for institutions to develop more meaningful PDIs with a stronger focus on how quality across a range of dimensions can be maintained and enhanced. One way to do this is to make the assessment of internal quality policies and practice a much bigger part of the re-accreditation process, through greater weighting in the relevant evaluation instrument for on-site reviews.

5. Move to a system where institutions that can demonstrate strong internal quality assurance capacity and a proven record of delivering quality can accredit (authorise and recognise) their own programmes

Finally, current processes do not permit higher education institutions to demonstrate that they have the capacity to take responsibility for quality and should therefore be authorised to act as self-accrediting organisations, permitted to create, revise and eliminate programmes on their own initiative, as happens in other higher education systems around the world. The process of re-accreditation – specifically, the resulting CI score – changes the periodicity of institutional reviews, but it does not alter the level of responsibility that institutions are permitted to exercise. If accountability for institutional quality is to be joined to institutional responsibility for the quality of programmes, re-accreditation will need to be a very different and more robust process than at present. Examples of such differentiated models – where some institutions are subject to programme-level review and others are accorded self-accrediting status on the basis of rigorous institutional review – exist in other systems of higher education and could serve as inspiration for Brazil.

1.7. Governance of external quality assurance

Main findings

The current governance landscape for quality assurance, involving SERES, CONAES, INEP and CAPES has some strengths

There are important strengths to the governance and implementation of quality assurance in Brazil. For example, INEP is recognised internationally as a leading public agency for educational assessment. Its wide experience with large-scale assessment and its capacity to manage data collection systems provides the nation’s higher education quality assurance system with a high level of competence. CONAES has succeeded in attracting experts to its council, and through them has been able to mobilise higher education research from across the nation to inform the further development of SINAES.

The basic legitimacy of external quality assurance is not questioned and the system has developed significant experience and capacity in evaluation

The basic legitimacy and integrity of the quality assurance system is widely accepted across the higher education system, by public and private institutions alike, and by representatives of academic staff and the administrators and owners of higher education institutions. In the course of its implementation, SINAES has used a range of evaluation techniques – including self-assessment, peer review, and external review grounded in student assessment – that has been widely welcomed. Moreover, some higher education institutions in Brazil now closely monitor the experience of their students and their readiness to participate in external assessments. Others are making efforts to use compulsory self-assessment and peer review processes as opportunities for improvement, and to engage their university communities broadly in the assurance of quality.

…but the current system of governance faces three main challenges

There are three fundamental challenges facing the institutions of quality assurance that merit attention and improvement.

  1. First, the design of quality assurance institutions creates conflicting responsibilities for the Ministry of Education. MEC establishes, funds, and steers the federal university system, through its Secretariat for Higher Education (SESu). At the same time, it is responsible, through SERES and, indirectly, INEP, for evaluating their performance and for regulatory actions concerning the programmes they offer. These conflicting responsibilities lead the nation’s higher education institutions, especially its private institutions, to view the Ministry as a champion of one sector, rather than a neutral arbiter among all.

  2. Second, while CONAES is responsible for providing guidance and feedback on the functioning of SINAES, it is not properly resourced and organised to do so. CONAES does not have its own professional staff or a dedicated budget, and lacks the capacity to undertake the sort of detailed and sustained analytical work that is needed to evaluate how SINAES is working. Instead, it depends upon the input of the implementing bodies whose work it is supposed to supervise and guide, most especially INEP. This dependence is exacerbated by the participation of the implementing bodies on the council itself. It lacks sufficiently wide input – from professional bodies, employer associations, and other centres of government – to take into account the broader social responsibilities of higher education.

  3. Finally, in most higher education systems, responsibility for promoting and sharing quality improvement practices lies with bodies outside of government - with associations that represent sub-sectors (such as research, confessional, or polytechnic universities), and with bodies that represent professional groups within higher education institutions, including institutional research, curriculum design, assessment, and quality assurance. The review found few examples of the engagement of equivalent bodies in Brazil in research, advocacy, and training in support of quality improvement, and little attention on the part of public authorities to their potentially important role.

The federal system of quality assurance does not apply to all higher education providers in Brazil

As noted earlier, the systems for external quality assurance of HEIs and undergraduate programmes analysed in this report apply only to private HEIs and federal public HEIs. State and municipal public institutions – which account for almost 10% of enrolment - are not subject to SINAES, but rather to state-level regulatory and quality assurance rules. Although this situation reflects the constitutional distribution of competences in the Brazilian state, which allow considerable autonomy to states and municipalities, it leads to a fragmented system and means there is no single national benchmark of higher education quality. A single quality reference framework would make external quality assurance for higher education more transparent and understandable for students and their families.

Key recommendations

1. Create an independent quality assurance agency

To address the conflicting responsibilities of MEC – or indeed any future ministry responsible for higher education - Brazilian authorities should consider creating an independent quality assurance body that stands outside the Ministry, in line with practice in many OECD and partner countries. This agency would take the lead in implementing the reformed system of quality assurance proposed in this report. Good international models of bodies with strong legal, financial, and administrative independence exist. In systems with a similar legal tradition to Brazil, such agencies include, for example, Portugal’s Agency for the Assessment and Accreditation of Higher Education (A3ES).

The work to design and create any new agency for quality assurance in Brazil will need to address some key questions:

  • Which existing functions should be transferred to the new agency? In principle, the new agency would combine the evaluation functions coordinated by INEP’s higher education evaluation directorate (DAES) and the regulatory and supervisory roles of SERES. The changes to the overall model of regulation, evaluation and supervision proposed in this report – such as increased focus on institutional review, reduced numbers of programme-level reviews, a reformed ENADE and a new indicator dashboard - will affect requirements for staff in different roles. The advantages and disadvantages of creating specific evaluation units for different sets of disciplines (natural sciences, social sciences etc.) should be considered. Such units, integrated within the agency, could potentially allow evaluation to be better tailored to individual disciplines and work more closely with the discipline-specific CAPES evaluations.

  • Should some tasks be devolved to decentralised offices in the states? The current system of quality assurance in the federal higher education system is highly centralised, with all evaluation and regulation activities coordinated from Brasília. Devolving responsibility to regional departments might theoretically allow a more differentiated approach to quality assurance, with better consideration of the large regional differences in Brazil. However, in the view of the OECD team, distinct quality assurance procedures in different parts of the country would risk creating a two- (or multi-)tier system and undermining national recognition of quality standards. It could be possible, however, to establish regional offices to house professional inspectorates to undertake inspection of infrastructure and institutional management, freeing academic peer reviewers to focus on assessment of academic performance, potentially remotely (see above). The costs of the current system of peer review and the potential costs of a permanent inspectorate would need to be assessed in detail.

  • How should the new agency be funded? The current system of external quality assurance in Brazil is funded by a combination of public resources (paying the salaries of public servants, for example) and fees paid by institutions for evaluation activities. Quality assurance agencies in a number of systems, including the Portuguese example mentioned above, are funded primarily through fees from institutions. To ensure efficient use of public resources, this should be the long-term aim in Brazil. A thorough analysis will be required to determine the costs of a new agency and the level of fees needed to finance its operation.

The OECD team recognises that there is an existing proposal to create a National Institute for the Supervision and Evaluation of Higher Education (INSAES), which was introduced as a draft bill in Congress in 2012 (Congresso Nacional, 2012[4]), but not pursued. This initiative also effectively proposed a merger of the functions of SERES and INEP, but was criticised for its potential cost and limited added value. The OECD team believes that a new agency would be the most effective way to implement a reformed system of external quality assurance. The reforms proposed in this report are vital to improve the effectiveness and efficiency of the system, and any future agency must be designed to operate as efficiently as possible and with limited direct public subsidy.

2. Strengthen CONAES

To ensure that the quality assurance agency has an advisory council that brings a wide social vision to its work, CONAES, after substantial modification, could take on this responsibility. CONAES would be a council with members holding fixed and staggered terms to ensure their independence from government, and would encompass balanced representation from students, public and private sector employers, instructors from public and private higher education institutions, higher education administrators, leading researchers, and the senior policy official in MEC with responsibility for taking a comprehensive view of higher education.

3. Restructure the government departments that are responsible for higher education

MEC – or any future ministry responsible for higher education – can support the improvement of quality assurance by restructuring its responsibilities for higher education. This could entail creating a post for a principal policy officer who takes a comprehensive and strategic view of the entire Brazilian higher education system – a post the Ministry presently lacks. Units organised along sectoral lines, for example, could support the work of this senior official. These might include groups responsible for (a) federal universities; (b) private universities; (c) technical higher education; and (d) coordination with state and municipal higher education institutions. This scheme of organisation would benefit the nation’s quality assurance system by supporting a strategic and comprehensive vision for the higher education system, by clarifying the role of private provision within the system, and by encouraging continued differentiation of institutions and policies.

4. Incentivise the development of expertise in quality assurance in sector organisations

In monitoring and evaluating the nation’s quality assurance system, a reconstituted quality assurance agency and advisory council (i.e. CONAES) should focus on supporting the development of quality-enhancing organisations outside of government. For example, it could support collaboration among state and national networks of institutional self-evaluation committees (CPAs), so that they share experiences of quality management and improvement practices with one another.

5. Explore how a reformed external quality assurance system could also apply to state and municipal institutions

A single system of external quality assurance applying to all higher education institutions in the country would be more transparent for students and the public than the current co-existence of a large federal system and individual systems for state and municipal institutions in each state. The federal and state authorities, working with the higher education sector, should explore how – and under what conditions – a reformed federal system of quality assurance could be applied to state and municipal institutions, while respecting the distribution of competences enshrined in the constitution of the Union.


[4] Congresso Nacional (2012), Projeto de lei: cria o Instituto Nacional de Supervisão e Avaliação da Educação Superior - INSAES, e dá outras providências (Draft law creating the National Institute for the Supervision and Evaluation of Higher Education - INSAES, and other measures), Secretaria de Assuntos Parlamentares, http://www.planalto.gov.br/ccivil_03/Projetos/PL/2012/msg398-31agosto2012.htm (accessed on 30 November 2018).

[2] Escueta, M. et al. (2017), “Education Technology: An Evidence-Based Review”, NBER Working Paper Series, No. 23744, National Bureau of Economic Research, Cambridge, http://www.nber.org/papers/w23744 (accessed on 23 November 2018).

[1] Presidência da República (2004), Lei no 10 861 de 14 de Abril 2004, Institui o Sistema Nacional de Avaliação da Educação Superior – SINAES e dá outras providências (Law 10 861 establishing the National System for the Evaluation of Higher Education - SINAES and other measures), http://www.planalto.gov.br/ccivil_03/_ato2004-2006/2004/lei/l10.861.htm (accessed on 10 November 2018).

[3] TCU (2018), RA 01047120170 Relatório de auditoria operacional. Atuação da Secretaria de Regulação e Supervisão da Educação Superior do Ministério da Educação - Seres/Mec e do Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira - Inep nos processos de regulação, supervisão e avaliação dos cursos superiores de graduação no país. (Operational audit report: Activities of SERES/MEC and INEP related to regulation, supervision and evaluation of higher education programmes in the country), Tribunal de Contas da União, Brasília, https://tcu.jusbrasil.com.br/jurisprudencia/582915007/relatorio-de-auditoria-ra-ra-1047120170/relatorio-582915239?ref=juris-tabs (accessed on 13 November 2018).
