
Climate finance: Earning trust through consistent reporting: Chapter 5

Conclusion and recommendations


Conclusion

To have a chance of meeting the goals of the Paris Agreement, all countries need to dramatically scale up investment in climate action, and those in a position to support others need to do so. However, the current system does not provide consistent financial reporting across time or across providers, which limits our ability to hold providers to account. This needs to be addressed for the UNFCCC’s New Collective Quantified Goal on Climate Finance to be successful.

It is possible that some of the problems that afflicted the previous US$100 billion goal will be less important for the new goal. Countries have already made substantial efforts to redefine a wide range of activities as climate finance, and the scope for doing so further may now be limited, meaning that reported increases are more likely to be genuine. While in 2009 there was little attention or discussion around the use of markers, or what should count more generally, there has since been 15 years of scrutiny and debate that has at least highlighted some of the core issues.

Despite this, changes are clearly still needed. Some are politically contentious and reaching agreement on them will be a considerable challenge. But our analysis suggests a number of technical solutions that could facilitate the production of more useful climate finance figures, strengthening accountability and impact.


Recommendations

1. Greater transparency

First, this report echoes previous calls[1] for greater transparency in what is reported as climate finance to the UNFCCC. As shown in Chapter 3, it is essentially impossible to match projects across the UNFCCC and OECD databases for most countries. Moreover, the project information contained within the common tabular format (CTF – the standardised tables in which countries report climate finance to the UNFCCC) is insufficient to understand why a project has been included.

  • Links to documentation: The CTF should include a separate column for links to project documentation, and countries should be required to provide these. For most projects, detailed information exists but is hard to find, and this inhibits accountability. In response to a review of its Biennial Report,[2] Japan commented that “the list of projects is too long to provide the requested links” because doing so would be too time-consuming. However, for most countries, a large share of climate finance comprises a handful of large projects. Requiring documentation links for projects over, say, US$50 million would go a long way towards increasing understanding of climate finance.
  • Reporting the climate share of projects counted: Countries reporting on a case-by-case basis should be required to report what percentage of the total project value has been counted as climate finance. The fact that this is unavailable makes it impossible to judge the claims being made: whether a figure listed in UNFCCC submissions represents 5% or 100% of a project’s total value has a large impact on how we should view it.
  • Standard, stricter definition of ‘commitment’: When reporting to the UNFCCC, only commitments with firm written obligations should be eligible to be counted. This would promote harmonisation with the CRS data and increase trust that commitments will eventually be translated into actual spending.
  • Track disbursements separately: Ultimately, what we care about is how much finance actually gets delivered. Projects that providers have committed to can be cancelled for justifiable reasons, and delays are inevitable. But given the short deadlines for meeting climate goals, it is essential that commitments are turned into completed projects. As with ODA, it should be a requirement of the Enhanced Transparency Framework for countries to track disbursements against commitments made, where possible.
  • Require project codes: Many providers have already started to provide project codes that allow linking between UNFCCC data and the CRS data, including the UK, Japan, Denmark, Canada, Germany, Norway and EU Institutions. This needs to be standard practice.
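
To illustrate what shared codes enable – using invented project codes, column names and figures rather than the actual UNFCCC or CRS schemas – a common code would allow the two datasets to be joined directly, so that the share of each project counted as climate finance can be derived rather than guessed. A minimal sketch:

```python
# Illustrative sketch only: matching UNFCCC-reported climate finance to CRS
# entries via a shared project code. Codes, column names and figures are
# invented for the example, not the real UNFCCC or CRS schemas.
import pandas as pd

unfccc = pd.DataFrame({
    "project_code": ["AAA-2021-001", "BBB-2021-002"],
    "reported_climate_finance_usd_m": [180.0, 2.5],
})

crs = pd.DataFrame({
    "project_code": ["AAA-2021-001", "BBB-2021-002"],
    "total_commitment_usd_m": [420.0, 2.5],
    "rio_marker_mitigation": [1, 2],
})

# One row per project, with the UNFCCC figure alongside the CRS commitment.
matched = unfccc.merge(crs, on="project_code", how="left")

# With both figures on one row, the share of each project counted as climate
# finance follows directly, addressing the climate-share point above.
matched["climate_share"] = (
    matched["reported_climate_finance_usd_m"] / matched["total_commitment_usd_m"]
)
print(matched)
```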

2. Climate finance assessments should be as granular as possible

The more aggregated the level at which a project’s climate focus is assessed, the harder it is to ensure that the climate finance counted reflects the true nature of the project. This is especially true at the commitment stage, when full project details may not yet be available and changes to the project are still possible, which can prevent a more granular assessment. This emphasises the importance of reporting fully on both commitments and disbursements, as discussed in our first recommendation on transparency.

3. Parties should consider novel techniques to ease capacity constraints

One of the most common concerns during our interviews with experts was a lack of capacity. Some countries outlined that they did not have sufficient capacity to review all projects, and others said that a lack of resources was the main barrier to adopting a different system. While analysing the climate focus of projects at a more granular level is clearly preferable, it also requires greater resources, which is a barrier for countries with smaller teams, especially in cases where countries check the marking of every single project.

However, checking every project manually may not be necessary. Advances in techniques for analysing text data could provide a way of automating much of the review process. One option would be for individual countries to train an open-source model (such as those used in the analysis in Chapter 3) on projects that have already been through the review process, and to use this to predict the appropriate marking of new projects. The majority are likely to be predicted with a high degree of certainty (for example, “Solar panel project reducing coal use in India” or “Support for border control in Niger” have obvious markings of principal and no mitigation focus respectively). These can be passed through automatically, allowing reviewers to focus on edge cases. Most processes involve entering projects into dedicated systems, so having these systems automatically flag projects whose predicted marking is out of step with the proposed one should be possible.
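
As a minimal sketch of what such a triage step could look like – assuming a simple open-source text classifier rather than the specific models used in Chapter 3, and with illustrative markings and an arbitrary confidence threshold – a country could train on its already-reviewed projects and flag only uncertain or conflicting predictions for human review:

```python
# Illustrative sketch only: a simple classifier trained on projects that have
# already been through the review process. The markings, example data and
# confidence threshold are assumptions made for the purpose of the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: descriptions and agreed markings of reviewed projects.
reviewed_descriptions = [
    "Solar panel project reducing coal use in India",
    "Support for border control in Niger",
    # ... in practice, thousands of previously reviewed projects
]
reviewed_markings = ["principal", "not climate"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(reviewed_descriptions, reviewed_markings)

def needs_human_review(description, proposed_marking, threshold=0.85):
    """Flag a project if the model is unsure, or disagrees with the proposed marking."""
    probabilities = model.predict_proba([description])[0]
    predicted = model.classes_[probabilities.argmax()]
    return probabilities.max() < threshold or predicted != proposed_marking
```

The threshold controls how much of the portfolio is routed to reviewers: a higher threshold means more manual checks, while a lower one leans more heavily on the model.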

This is not entirely ‘game-proof’: project designers may be able to alter the language used in project descriptions to make it more likely that the model classifies the project as climate finance. But this would be more difficult than simply adding relevant keywords, as natural language processing (NLP – machine learning techniques for analysing text) can pick up cues from context to give a more nuanced assessment. For example, if a project is in a sector rarely associated with climate action, then so long as the rest of the description accurately reflects the project, this may be a sufficiently strong cue to override the presence of climate-related words.

Some countries have already experimented with automating aspects of the review system, for example flagging project descriptions that contain certain key words, but there is scope to build on this using more sophisticated NLP models. Such models cannot replace human judgement or quality assurance processes entirely but can help direct reviewers to the projects most likely to be revised.

Any country could explore such models with a view to reducing the capacity needed to quality-assure projects. But another option would be for a model to be trained more centrally: the Standing Committee on Finance (SCF) or another body could manually code enough projects from all providers to train an NLP model, and Parties could use this to predict classifications for their own projects. This would also provide a check on the degree of consistency across Parties. Furthermore, the costs of operating and maintaining such a model (anticipated to be small) could be shared between donors.

4. The UNFCCC should strengthen its peer review process

DAC members have long been required to engage in a peer review process, whereby representatives from the OECD and other DAC members examine shifts in aid spending and policy, to assess their likely effectiveness and highlight worrying trends. More recently, there has also been a voluntary statistics peer review, in which the processes for generating aid statistics are evaluated. Officials we spoke to see this process as a valuable way to share information, learn from other members and identify potential reporting issues. As part of this, many countries have requested additional webinars or workshops to discuss specific questions relating to climate finance.[3] However, this system is voluntary and, as a DAC process, is disconnected from climate finance reporting. While the Rio markers are included in the assessment (along with other policy markers), they are only a small part of the review (and the reviews of the Rio markers are less relevant for countries that do not use them as the basis of their UNFCCC submissions).

The UNFCCC also has a peer review process (the International Assessment and Review process) in which Expert Review Teams assess aspects of reporting on climate action, including climate finance. But these reviews are primarily limited to an assessment of whether Parties have complied with transparency requirements, such as including their definition of ‘new and additional’, or completing all fields in the CTF. They have brought valuable improvements in transparency: for example, Japan started providing less aggregated data after the technical review of its third Biennial Report encouraged it to provide more project details. However, these reviews could go beyond merely establishing whether Parties have responded to each “shall” commitment from various UNFCCC agreements and play a role in assessing the veracity of claims on climate finance. For example:

  • The review team could establish differences in the ways that countries count similar projects. For example, have different markers been applied to core contributions to the same trust funds or PPPs?
  • Where an automated check – such as the use of an NLP model as per the previous recommendation – casts doubt on the relevance of a project, the review team could request an explanation for the inclusion of the project.
  • The review team could establish whether projects are relevant to needs identified in the NDCs/NAPs of partner countries.

It would not be feasible for the review team to check every single project. But even checking a random sample – with the probability of selection proportional to project size, or a sample informed by other factors, such as an NLP model or markings that deviate from the OECD’s suggested markings – would encourage more rigorous reporting. The SCF could be tasked with drawing out common problems.
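
A sketch of how such a size-weighted sample could be drawn is below; the project codes, values and sample size are hypothetical, and the weights could equally be adjusted using other signals, such as disagreement with an NLP model’s prediction.

```python
# Illustrative sketch only: draw a review sample with selection probability
# proportional to reported project size. Codes, values and the sample size
# are hypothetical.
import numpy as np

rng = np.random.default_rng()

projects = [
    {"code": "AAA-2021-001", "description": "Metro extension with mitigation component", "value_usd_m": 420.0},
    {"code": "BBB-2021-002", "description": "Agroforestry adaptation programme", "value_usd_m": 35.0},
    {"code": "CCC-2021-003", "description": "Technical assistance for NDC planning", "value_usd_m": 2.5},
    # ... the rest of the reported portfolio
]

values = np.array([p["value_usd_m"] for p in projects])
weights = values / values.sum()

# Larger projects are more likely to be selected, concentrating scrutiny where
# most of the reported finance sits.
sample = rng.choice(len(projects), size=2, replace=False, p=weights)
for i in sample:
    print(projects[i]["code"], projects[i]["description"])
```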

Expanding the remit of the Expert Review Teams could be relatively simple but has the drawback that Parties are reviewing each other and therefore may be vulnerable to ‘retaliation’: receiving a difficult review in exchange for asking difficult questions. An alternative would be to establish an external body to perform this audit function, as recommended by the Center for Global Development.[4]

5. Countries should improve reporting on impact, both ex-ante and ex-post

One of the key issues with climate finance is the lack of clarity about its impacts. In fact, some research suggests that mitigation finance is not even correlated with lower emissions pathways.[5] However, many countries already report some estimates of impact (both ex-ante and ex-post), such as the UK’s performance metrics for International Climate Finance, which report both expected and achieved results. Several officials said that reporting ex-ante estimates of impact is a component of best practice, and that in principle there is nothing preventing countries from doing so. Providers should already be estimating impact as part of project appraisals.

Reporting on ex-ante impact should be a minimum requirement for all climate finance, as argued in previous submissions to the UNFCCC on measuring climate finance:[6] if countries are unable to explain the impact they think finance will have on climate outcomes, there is no reason to accept that it is climate finance. In addition, requiring providers to report this would give a much better understanding of how finance is bringing the world closer to meeting the Paris Agreement. There are multiple ways this could be achieved, but adding columns to the CTF for both the climate-relevant KPIs and the estimated progress towards them would be a start.

There are some types of expenditure for which this may not be possible. For example, relevant staff costs are often counted for departments or agencies with a climate finance role, and research and development is no doubt important but has outcomes that are hard to quantify. Yet these are the exception rather than the rule. For most projects, reporting on KPIs would be possible and allow a better understanding of why projects are included in climate finance estimates, and what impact they are likely to have.

6. The UNFCCC should provide full, official guidance on using the ‘case-by-case’ approach

One reason that countries use the Rio marker system is simply that it exists: a pre-existing system for measuring climate finance that comes with guidance from the OECD and is easy to integrate into existing reporting. There is nothing similar for the case-by-case approach – but, if there were, it is possible that other countries would have chosen to adopt that method, especially as many officials agreed it was better in principle. The SCF does not have the authority to impose a set of rules, just as the DAC cannot enforce compliance with suggested markings, but an agreed standard would not only assuage the concerns of countries that think the approach less transparent but also spare countries the burden of developing their own methodology.

Examples already exist. The UK has a detailed internal guidance document that project originators use to identify a project’s share of climate finance. The US provides “parameters of accounting” to departments to help them identify climate finance. There is also the joint multilateral development bank methodology, which, for example, guides MDBs in identifying the incremental cost of adding adaptation components to projects.[7] If a UNFCCC body were able to build on these and develop a standard guide for the case-by-case approach, it would not only have more legitimacy than the Rio markers (developed solely by high-income countries) but also fill the gap that led many countries to adopt the markers in the first place.


Moving in the right direction

These recommendations will not solve all of the problems associated with climate finance. Many of these problems are highly political: the difficulty of spending money abroad, especially when domestic problems and debt burdens have mounted in recent years; the fact that many climate finance projects are more about promoting domestic firms[8] or exporting technology; and the fact that many countries have an interest in keeping their numbers high to meet ambitious political targets. By contrast, these recommendations are largely technical.

However, the first step in improving the quantity and quality of finance provided is understanding the current landscape, and this requires consistent measurement across time and providers. Our conversations with officials suggest that these recommendations represent pragmatic and feasible ways to move climate finance reporting significantly further in this direction.

Notes