Image by Des Syafrizal for USAID
Blog · 27 June 2023

Artificial intelligence for public good

DI's Claudia Wells explains how artificial intelligence (AI) could be used to enhance data systems in low- and middle-income countries while outlining the ethical considerations that must be taken into account when harnessing this technology.

Written by Claudia Wells

Director of Data & Evidence

The discussion on the impacts and benefits of artificial intelligence (AI) for social good is becoming increasingly dichotomised. But is the ongoing ‘Good vs Bad’ argument really helpful in progressing the dialogue to a place where we can understand and agree on the priorities for development data?

What are the risks and benefits?

In recent months we have heard AI experts warn about the existential risk posed by digital intelligence. On the other hand, there are calls from the data for development space to advance our understanding of the opportunities AI presents in low- and middle-income countries. If there is one thing we can all rely on, it is that it is very hard (perhaps impossible) to pause our human instinct for curiosity and progress.

Where do we stand on AI?

So, what is my take on this? I see this as a really exciting time for the development data field. As a firm supporter and member of the data for good community, I recognise the huge potential of AI to process and analyse vast and growing amounts of data. As a champion of inclusive data processes and data values, I also see risks that must be addressed if the AI revolution is to truly benefit the people who need it most. And as an ex-government statistician, I know how important foundational administrative systems, like civil registration and vital statistics (CRVS), are as an enabler for novel methodologies.

CRVS systems are the backbone of a formalised economy, vital for effective service delivery and for supporting democracy. When you combine datasets that count every birth, register every death and classify cause of death, with other administrative data sources you have a powerful set of information that has the potential to directly save lives and protect livelihoods. In my role as a government statistician, I worked with a team to explore these datasets and identify deaths related to drug poisoning. In partnership with leading academics, we produced evidence that a commonly used prescription drug in the UK was causing deaths from poisoning even in small doses. This evidence led directly to changes in prescribing policy in the UK, US and across Europe, which have been estimated to save hundreds of lives a year. So, for me the question isn’t whether data can save lives – I know it can. It is more a question of how we harness novel technologies and methods to maximise the potential of data for all.


How can AI improve data on people?

When it comes to data for development, data on people is critical, particularly in relation to the public sector. This means comprehensive administrative systems that support service delivery, representative national surveys on specific topics and robust census data that counts everyone. Private-sector data of course has a role, but it will never be a substitute for nationally owned, officially produced and collected data. The benefits of deploying AI into these ecosystems to produce insights from unstructured information are easy to see, especially with the advances in language-based models such as ChatGPT. However, the risks of models that predict outcomes (especially in the absence of robust foundational data) are also becoming more apparent. Where AI is used to create data, good-quality data is a must – both predictor data and outcome data, which together enable models to learn the patterns that link the two. AI-generated data should not be seen as a replacement for other sources. Instead, we need to approach this from the angle of joining up datasets to provide innovative solutions and gain a more comprehensive and inclusive picture of people's lived experiences.
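To make the predictor/outcome point concrete, here is a minimal sketch (not a DI method, and with purely illustrative numbers): a model can only learn the pattern linking predictors and outcomes if both sides are recorded with reasonable quality. The example fits a simple one-variable least-squares line from paired records; the variable names and figures are invented for illustration.

```python
# Toy sketch: a predictive model needs BOTH predictor data and outcome data.
# Here, ordinary least squares fits y ≈ a + b*x from paired observations.

def fit_line(xs, ys):
    """One-variable ordinary least squares; returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical paired data (illustrative only): immunisation coverage (%)
# as predictor, under-five deaths per 1,000 as outcome.
coverage = [40, 55, 60, 75, 90]
deaths = [80, 65, 60, 45, 30]

a, b = fit_line(coverage, deaths)
print(round(a, 1), round(b, 1))  # → 120.0 -1.0
```

Drop either column – the predictors or the recorded outcomes – and there is nothing for the model to learn from, which is why AI-generated data cannot substitute for the foundational sources it is trained on.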

In many low-income countries, foundational administrative data simply doesn’t exist – births aren’t registered, deaths aren’t recorded and progress through school isn’t monitored. Where data does exist, our research shows that, despite a few notable exceptions such as the District Health Information Software (DHIS2), most administrative systems still deliver incomplete, inaccurate data. Many systems rely on paper-based forms and registration books located in remote areas. While progress is being made towards digital transformation, data systems lack sufficient ICT infrastructure, mature bureaucratic standards and norms, and technical and human capacity. The development of these systems and the digitisation of the data they hold is critical. AI methods, such as natural language processing to speed up the coding of data, could be a real game changer in terms of accelerating digital transformation in many countries. Layering the use of new technologies over an administrative backbone, such as imaging software to detect skin cancer on a health information management system, could revolutionise data use for service delivery.
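The coding step mentioned above can be sketched as follows. This is a deliberately simplified stand-in: a real system would use a trained NLP classifier rather than keyword rules, and the keyword lists here are invented placeholders (the codes loosely follow ICD-10 conventions, with `R99` as the catch-all for ill-defined causes).

```python
# Illustrative sketch only: automating the coding of free-text
# cause-of-death entries, the kind of task NLP models can accelerate.
# Keyword rules are simplified placeholders, not a real coding standard.

CODE_KEYWORDS = {
    "J18": ["pneumonia", "lung infection"],
    "A09": ["diarrhoea", "gastroenteritis"],
    "X44": ["poisoning", "overdose"],
}

def code_cause_of_death(free_text: str) -> str:
    """Map a free-text cause-of-death entry to a code; 'R99' if unmatched."""
    text = free_text.lower()
    for code, keywords in CODE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return code
    return "R99"

records = [
    "Severe pneumonia following influenza",
    "Accidental overdose of prescription medication",
    "Cause not established",
]
print([code_cause_of_death(r) for r in records])  # → ['J18', 'X44', 'R99']
```

Even this crude rule-based version shows the appeal: entries that would take a human coder minutes each can be triaged in bulk, leaving specialists to review only the ambiguous `R99` cases.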

How do we ensure ethical data production and use?

As the use of new and emerging technologies continues to advance – creating new opportunities for development – we must recognise that there is a risk of exacerbating existing power imbalances due to underlying digital inequality. Closing the digital divide will ensure the AI revolution can accelerate social inclusion, which is why digital transformation and digital literacy must be a focus for development actors. This requires us to understand the complexities of the communities we aim to serve and the reasons why technological advances in the data for development space cannot happen in isolation. We must also address the questions of ethical and responsible data production and use. This is where data governance is key. As my colleague Karen Bett from GPSDD says in her recent data digest: “…inclusive data is analogous to governance. Good governance promotes equity and social justice by expanding representation and promoting equal access to opportunities.”

As a community, we need to tackle the big challenges so that ethical and legislative considerations keep up with the advances of AI technology. This includes protecting individual data; understanding ownership and transparency of data and the underlying AI algorithms; and considering monetisation of data and how data can be made available for public good. It is crucial that bias in algorithms is tackled so that marginalised groups are not excluded. To prevent new and novel methods like AI becoming extractive and prescriptive, data processes must be inclusive, transparent and accountable. This will allow people to shape how they are represented and to have confidence in using and engaging with data, knowing that it will be used.

Importantly, all this work needs to happen as we invest long-term in the skills and expertise of people: from those collecting, processing and analysing data, to those using data in novel ways, to citizens and communities. In order for novel solutions to be sustainable, both the data and the algorithms they use need to be owned at the national and local level. Solutions need to be maintained, updated with the latest data, tested against real outcomes and recalibrated. Without this, models quickly become out of date, unreliable and ultimately unusable additions to data graveyards (places where well-intentioned and meticulously collected information remains unused and “goes to die”).

What is DI’s role?

Since 2011, DI has focused on the role of national and subnational data in informing decisions that impact poverty eradication. We have explored the political economy of data, argued for a more inclusive approach to framing the national statistical system, and championed the importance of both institutional and systems interoperability. We believe our three biggest learnings need to be taken into the AI data for development space:

  1. Governments need to own their own problems and solutions. They need to own their own data and – in the case of AI – the algorithms that run the models, on behalf of their citizens. And for this to be sustainable they need to finance and resource it in the long term.
  2. This is all about people. Counting people – people living in poverty, left behind and dispossessed – so that they do indeed count is the primary challenge. And people – from primary health care assistants to chief statisticians – are at the heart of the solution.
  3. We need to learn how to create a perpetual virtuous cycle at all levels, where better use of data leads to demands for more and better-quality data, which in turn leads to demands for greater use of data.

At DI, we are committed to playing our part in ensuring data systems are in place for AI to meet its potential for making real change to the lives of people experiencing the greatest marginalisation. We challenge the AI community to work with and support the development of foundational data systems. A start could be to work in countries where digital public goods like DHIS2 have already been embedded. We are exploring ways that AI could be applied, and are keen to support conversations on AI with communities so their voices are heard. We are developing ideas on how AI can be used in our own work and are also looking at how it could be applied to support the development of existing data systems. We are keen to work with organisations that see the opportunities of AI and want to ensure those benefits reach those most marginalised. If we are to achieve our aim of reducing inequality, technology must be developed in a way that, at the very least, benefits all. At the most, it has the power to disproportionately benefit those who are already furthest behind.

If you see opportunities to work with us or if you would like to contribute to our discussions on AI, foundational data and data for development, please share your thoughts and opinions by email ([email protected]) or on Twitter (@StatsClaudie).