The case for a New Global Edtech Readiness Index

August 14, 2019 — Many middle- and low-income countries around the world are preparing to significantly increase their investments in the use of educational technologies — or have already begun to do so.

How might these countries, at a very high level, measure and track key components of their edtech investments and compare what they measure against what is happening in other countries in order to better understand what is working, and what isn’t?

Let’s be clear: Countries should ultimately measure the impact of their edtech-related investments against their educational goals (‘improved student learning’, for example — whatever that may mean). But as countries plan and roll out large national edtech initiatives, a set of interim measures might be rather useful to measure related progress (or lack of it) on the input side.

As part of such an effort, it might be rather helpful to adopt some standardized general measures, so as to allow for benchmarking what is happening in a given country against situations in other countries and to set related targets that are globally comparable.

More broadly: By articulating and highlighting a set of ‘indicators’ as part of a new global edtech readiness index, it might be possible to shape and influence high level discussions within education ministries, broadening related conversations beyond a traditional focus on just buying more (and more) hardware.

Unless you are the sort of person who questions the value of trying to measure almost anything in education (in which case, you should probably just stop reading at this point), all of this probably seems rather reasonable.

Whether you are an evangelical enthusiast for, or diehard skeptic against, investments in educational technologies, having data to support or confirm your biases and beliefs might be rather useful.

And if your view of such things is a bit more nuanced, having data — especially data that can be compared to what is happening in other places — to help inform your thinking and actions would be rather useful as well.

Around 15 years ago, the UNESCO Institute for Statistics (UIS) recognized this challenge — and opportunity — and proposed a set of “ICT/education indicators” to help policymakers fill related knowledge gaps.

Over many years, with the help of statistical agencies and education ministries in over a dozen countries, UIS led a process that defined, debated and field-tested related indicators in a variety of high-, middle- and low-income countries around the world.

The UIS Guide to Measuring Information and Communication Technologies in Education appeared in 2009 and quickly set the global standard for the collection of globally comparable data sets related to ‘edtech’, and was used in related official data collection efforts in many countries.

A lot can happen in ten years. Just as investments in educational technologies started to explode in many middle and low income countries around the world, the UIS ICT/education indicator initiative was phased out, a victim of budget cuts and other pressing priorities. Many of the technology tools and related practices defined and explored as part of the UIS ICT/education indicators work have changed, new ones have emerged, and the collective understanding and belief about what’s important when it comes to technology use in education has changed in important ways as well.

Might it be worth reviving some elements of this effort, updating and adapting them to focus on a few key measures related to a country’s perceived ‘readiness’ to utilize educational technologies within its education system?

Earlier this year, I was contacted ‘out of the blue’ by three different countries about to engage in massive new national edtech projects (more devices for more kids, more bandwidth for schools, more training programs to promote ‘digital literacy’, more digital textbooks — the usual stuff).

All asked:

  • Is there some way we can compare our current situation to that of other countries?
  • This could help inform us as we set some related investment targets.
  • In the long term, we are interested in the impact on ‘student learning’, but as we work toward that ultimate goal, it would be useful to have clarity on some of the things we should be measuring along the way related to our investments in educational technologies.

At the same time:

International donor agencies like the World Bank are considering ambitious ‘moonshot’ initiatives that will include significant investments in school connectivity and the development of a variety of ‘digital skills’ for young people.

  • They wonder: How might related ‘progress’ be measured and tracked over time?

And:

Some large philanthropies are exploring if there are critical gaps in national and local edtech-related ‘ecosystems’ that aren’t being filled through existing public or private investments.

  • They ask things like: How might we quickly gauge whether the lion’s share of edtech-related investments is going into buying hardware and software, while critical complementary investments in the capacity and skills of teachers and students to utilize increasingly available technology tools are not being made?

If such gaps exist, perhaps philanthropic monies can help to fill in some of them, and/or help build a case that others should do so?

A natural inclination is to measure things that are most easily counted. When it comes to the use of education technologies, people typically measure the number of digital devices for use by students, for example, or available bandwidth in schools.

Such things aren’t always that easy to count in practice, as it turns out, but they can in the end be measured, and it isn’t too difficult to convince people that doing so is worthwhile.

That said, there is little compelling evidence to suggest that the mere availability of devices and connectivity alone has a positive impact on student learning. While many people argue that related investments are necessary in the 21st century, only the most extreme techno-utopians would argue that they are sufficient.

Given the amount of money being spent on large scale edtech projects around the world, it would (presumably) be broadly useful to be able to track and compare what is happening as a result.

If related investments in digital infrastructure in education systems are all that are tracked and compared across countries, however, there is a real danger that policymakers will largely focus their attention on such measures, ignoring other ones that may be equally — or perhaps even more — important in the end.

One can imagine a scenario, for example, where a policymaker in Country X can proudly proclaim that her country jumped 42 places in an international survey of ‘digital infrastructure for education’ (and that neighboring countries did not) … an achievement that obscures the fact that there is little likelihood that related investments will make tangible positive impacts on what students actually learn — unless some other things happen as well.

With this in mind, implicitly ‘suggesting’ some additional measures via an edtech readiness index could help highlight the need for a number of complementary investments *if* investments in devices and connectivity are to serve as building blocks for other activities more integral and fundamental to teaching and learning.

It might be, for example, that investments in things like digital education content, ‘human capacity’ (i.e. the development of skills by teachers and students to use technology tools effectively) and the capacity of an overall education system (both at a policy and implementation level) are important complements to investments in digital infrastructure. If so, might it not be useful to measure them in some way as well?
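To make the idea of such an index a bit more concrete, one way to combine measures like these is a simple weighted average of pillar-level scores, with low pillar scores flagging areas where complementary investments may be missing. The sketch below is purely illustrative: the pillar names, weights, scores and threshold are all assumptions for the sake of example, not a proposed index design.

```python
# Illustrative sketch only: how a composite 'edtech readiness' score might be
# computed from pillar-level scores. Pillars, weights, and the threshold are
# hypothetical placeholders, not a proposed design.

# Equal weights across four hypothetical pillars (must sum to 1.0).
PILLARS = {
    "digital_infrastructure": 0.25,  # devices, connectivity
    "digital_content": 0.25,         # digital learning materials
    "human_capacity": 0.25,          # teacher and student skills
    "system_capacity": 0.25,         # policy and implementation capacity
}

def readiness_score(pillar_scores: dict) -> float:
    """Weighted average of pillar scores, each on a 0-100 scale."""
    return sum(PILLARS[p] * pillar_scores[p] for p in PILLARS)

def weak_pillars(pillar_scores: dict, threshold: float = 40.0) -> list:
    """Pillars scoring below the threshold: possible missing preconditions."""
    return [p for p in PILLARS if pillar_scores[p] < threshold]

# A hypothetical country with strong infrastructure but weak complements --
# the scenario the post warns a pure infrastructure survey would obscure.
country_x = {
    "digital_infrastructure": 85.0,
    "digital_content": 50.0,
    "human_capacity": 30.0,
    "system_capacity": 35.0,
}

print(readiness_score(country_x))  # 50.0
print(weak_pillars(country_x))     # ['human_capacity', 'system_capacity']
```

Even this toy version shows why a composite matters: a middling overall score paired with specific weak pillars tells a very different story than a single ‘devices per student’ count.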

At a high level, this is the thinking behind a growing movement to create some sort of new ‘global edtech readiness index’, comprising a limited set of key indicators that could help education policymakers, and decision makers at other organizations committed to support national education systems (in other public institutions, as well as in the private and non-profit sectors, in community organizations and academia), monitor and track related progress and better assess whether complementary investments might be useful or necessary.

By including indicators beyond things related to simple (if expensive) infrastructure-related investments (i.e. the stuff that people usually measure), such an edtech readiness index could signal to decision makers key elements of a ‘broader approach to edtech’ and help track progress related to investments in these elements over time.

While scoring highly on such an index would offer no guarantee that the desired impact on student learning would be achieved, low scores might suggest that some of the vital preconditions for impact are not sufficiently in evidence — in which case, it might be worthwhile to reconsider whatever is being planned.

Follow-on posts will explore some potential principles that could inform the creation of a new global edtech readiness index, what components of such an index might look like, and examine the case for and against creating such an index in more detail.
