The publication of the United Nations’ Sustainable Development Goals (SDGs) in 2015 created a need for a means of assessing progress toward their achievement. Among the 17 goals of the 2030 Agenda for Sustainable Development is SDG 4: Quality Education, which calls on nations to “ensure inclusive and equitable quality education and promote lifelong learning opportunities for all” (Goal 4, 2015). Although the United Nations’ 193 member states have mutually agreed to increase educational access and quality globally, there is no consensus about how to measure the achievement of that goal. Indeed, there is no catch-all solution to the question of evaluating equity and access in education (Edwards, 2016). In the absence of a single, cross-nationally comparable metric, international tests such as the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS) have been used to rank the cognitive skills and academic achievement of youth in participating countries. Although data from these tests are compared and ranked internationally, only one-quarter of the United Nations’ member states participate in and report on TIMSS, and even fewer participate in PISA. The current metrics therefore cannot evaluate the attainment of SDG 4 across all 193 member states of the United Nations. At best, they reduce global achievement to the accomplishment of a fraction of the globe.
To overcome the lack of a single test measuring global progress toward access to high-quality education for all, some academics and policy-makers support the use of globally linked national metrics. For example, Hanushek argues that the cognitive skills measured in tests such as TIMSS and PISA are necessary for an educated labor force equipped to foster economic development (CIES Symposium, 2016; Hanushek, 2016). Others suggest that globally linked, cross-national indicators of educational attainment such as PISA and TIMSS provide data that help bring attention and international aid to education as a means for international development (CIES Symposium, 2016; Mundy, 2016). In response to the need to measure global educational quality and access, Silvia Montoya, the director of the UNESCO Institute for Statistics, is concerned with harmonizing local- and regional-level metrics and reaching a global consensus about the basic literacy and numeracy skills that, once attained, would signal educational quality (CIES Symposium, 2016).
Video: Silvia Montoya explains that the harmonization of current metrics is needed.
However, the practical problem of measuring inclusive and equitable education globally should not require cross-national rankings. Instead, locally generated metrics should be applied in the context where they were gathered in order to improve national educational quality. In other words, locally generated measures of educational attainment could inform contextualized social and pedagogical changes for targeted educational improvement. Although SDG 4 calls for global attainment by 2030, until there is a way to compare national progress toward the achievement of SDG 4 without ranking nations hierarchically, the current metrics will only offer contextualized snapshots of student achievement and should not be cross-nationally ranked.
Hanushek argues that globally linked, cross-nationally comparable metrics provide a vision and model to “have-not nations” of what is possible (CIES Symposium, 2016).
Video: Eric Hanushek calls for the use of global learning metrics to show “have-not” nations what is possible.
Yet, in doing so, such rankings serve to perpetuate the notion of the superiority of some nations over others. The problem of hierarchical comparisons is compounded by current metrics, such as PISA, that provide a myopic view of the performance of a small sample of primarily high-income countries. Despite the absence of the majority of nations from the current metrics, it is assumed that the high-performing, high-income, highly developed countries are blueprints for the future of low-income, aid-dependent nations.
An additional attraction of the cross-national comparison of globally linked national metrics is that, according to Mundy, such comparisons are a way to highlight that learning outcomes are not equitably distributed. The end goal of drawing attention to learning-outcome inequities would be to target the distribution of resources to the countries most in need (CIES Symposium, 2016; Mundy, 2016). Nonetheless, such distributive justice could also occur within national contexts. In this case, if student success measures indicate that a greater investment in education is necessary, governments could respond by reallocating funds or increasing taxes to invest in human capital development through their national education systems. The use of locally generated metrics for national distributive justice could curtail international aid that is provided based on the assumption that donors know what aid-dependent countries need (Moyo, 2009). If global learning metrics were to justify an increase in aid with oppressive repayment conditions, the abuse of metrics for the maintenance of aid-dependent national education systems could do more harm than good.
In an attempt to simplify the complex task of measuring the success of SDG 4, the UNESCO Institute for Statistics aims to harmonize local and regional metrics. However, instead of harmonizing metrics globally, nations should strive to humanize metrics locally. It is important to re-center the human, the individual student, in the discussion about potential avenues for increased educational quality. Measures of student achievement are rich, individual-level data that are aggregated, abstracted, and decontextualized to become national rankings. These national rankings, such as those generated from PISA data, add credence to the argument that top-performing national education systems are models of what is possible for lower-ranking and non-participating nations. As a result, top-performing nations model “best practices” that can supposedly be transferred to “have-not” nations. Pedagogical changes based on decontextualized national rankings distort the student- and classroom-level data that could provide contextualized solutions aimed at improving educational quality.
The local need for pedagogical improvement is, therefore, in tension with the notion that national rankings inform best practices in education for economic development. Just as the current metrics do not adequately measure SDG 4 attainment, the current system does not adequately address a diverse conceptualization of what constitutes quality education. For this reason, the reproduction of a one-size-fits-all education model that can be assessed by a simple metric will not suffice.
Edwards, D. (2016). Are global learning metrics desirable? That depends on what decision they are attempting to inform. Retrieved from https://education.asu.edu/sites/default/files/ps_david_edwards.pdf
CIES Symposium. Edwards, D., Hanushek, E. A., Montoya, S., & Mundy, K. (2016, November). Are global learning metrics desirable? In I. Silova (Chair), The possibility and desirability of global learning metrics: Comparative perspectives on education research, policy and practice. Inaugural symposium of the Comparative and International Education Society, Scottsdale, AZ.
Goal 4: Sustainable Development Knowledge Platform. (2015). Retrieved December 5, 2016, from https://sustainabledevelopment.un.org/sdg4
Hanushek, E. A. (2016). Are global learning metrics desirable? Retrieved from https://education.asu.edu/sites/default/files/ps_eric_hanushek.pdf
Moyo, D. (2009). Dead Aid: Why aid is not working and how there is a better way for Africa. New York, NY: Farrar, Straus and Giroux.
Mundy, K. (2016, November 9). Setting the stage for the CIES Symposium on Global Learning Metrics (Karen Mundy). FreshEd with Will Brehm. Podcast retrieved from http://www.freshedpodcast.com/karenmundy/