Use of Third-Party Indicators in Fund Reports

Author(s):
International Monetary Fund. Strategy, Policy, & Review Department; International Monetary Fund. Statistics Department
Published Date:
November 2017

Introduction

1. Use of comparative indicators developed by other organizations, already widespread in core areas of Fund surveillance, is increasing in emerging macro-critical areas in response to the call from the membership. Fund staff rely on so-called third-party indicators (TPIs) to measure concepts such as business environment, competitiveness, and quality of governance to inform country-specific analysis across time and cross-country comparisons. These indicators play a role in supporting the evidence-based approach to advance work in priority areas for surveillance identified in the 2014 Triennial Surveillance Review (TSR), such as risks and spillovers, macro-financial surveillance, and structural issues; in emerging macro-critical areas like inequality, gender, and climate change; and in program work.1 It is anticipated that staff will continue to draw on other institutions’ expertise and estimates, consistent with the Executive Board’s guidance to leverage outside expertise in areas where internal expertise is limited or lacking.

2. TPIs used by staff are drawn from numerous compilers. They include international institutions and multilateral development banks, survey institutes, think tanks, non-governmental organizations, and private sector firms. In general, these indicators are also used by country authorities and the private sector to monitor economic activities or measure performance.

3. Their varied qualities present challenges and risks to the Fund’s credibility. Some TPIs are based on hard data, while others are perception-based or composites of various underlying data sources. In addition, there are other indicators based on qualitative assessments by experts, which are sometimes scored. The reliability of the perception-based indicators relating to governance and political risk has come under intense scrutiny. In some cases, this reflects a lack of confidence in the institutions generating the indicators and their inputs due to perceived bias. The opacity of some TPIs’ sources and methodologies—particularly those relying on value judgment and produced by private entities—also prevents validation of their methodological soundness. Not understanding their limitations could lead to flawed analysis and presentation, potentially undermining the Fund’s credibility. Close attention to indicators from private third-party providers (some of them based on hard data) may also be warranted, for example, in bilateral and multilateral financial sector surveillance.

4. This paper lays out a framework to address these challenges and promote best practice in Fund reports. The framework—which reflects the feedback received at the March 2017 informal Board session and in other Board discussions on multilateral and bilateral surveillance—will apply to all country documents, policy documents, and multi-country documents that are subject to the Fund’s Transparency Policy. To promote best practice, staff are encouraged to follow similar guidelines for other Fund documents that are not subject to the Transparency Policy (e.g., Regional Economic Outlooks and Staff Discussion Notes). The framework has three elements: (i) a Guidance Note for staff to raise awareness and promote the principles for best practice; (ii) the “Indicators Digest” (a database for internal use only) that compiles selected indicators’ characteristics, including their strengths and weaknesses, to inform staff’s judgment; and (iii) the review process to help ensure best practice.

5. The paper is structured as follows. Section II provides an overview of the current practice at the Fund and at selected international organizations (IOs) in using TPIs and lessons learned. Section III discusses the principles for best practice that will frame the future Guidance Note. Section IV presents the purpose, scope and coverage of the Indicators Digest, and outlines the structured methodology to assess data quality and its application to TPIs. Good practice examples drawn from recent Article IV staff reports are provided in Section V. Section VI presents the conclusions. The Indicators Digest is presented in full in Background Note I.

Current Practices in Using Third-Party Indicators

A. Current Practices in the Fund

6. TPIs are widely used across Fund surveillance products, though considerably less so in program staff reports.2 A text mining exercise covering 184 recent bilateral and multilateral surveillance reports, 113 Selected Issues Papers, and 82 UFR reports over the period 2016 to mid-2017 finds that more than three-quarters of the sampled Article IV reports included terms associated with selected TPIs,3 while only one fifth of program reports did.4 Approximately half of the multilateral surveillance reports contained terms associated with these TPIs (see Figure 1 and Annex II for details).

Figure 1. Current TPI Practices at the Fund: Results of Text Mining Exercise

7. Use of TPIs appears to be more prevalent in the discussion of issues associated with business climate.5 Slightly more than one-third of the sampled Article IV and program staff reports that discussed business climate issues used terms associated with the selected TPIs.6 This is followed by structural reform or political risk issues, where about 20 percent of the Article IV staff reports and only 4 percent of program reports discussing these issues used terms associated with TPIs from this set. TPIs that are associated with governance and corruption issues do not appear to be used regularly in staff’s analysis in surveillance and programs.7

8. Fund staff use TPIs primarily to present country rankings and changes in rankings over time. Nearly 40 percent of the sampled Article IV staff reports rely on TPIs to show a country’s ranking at a given time or changes in that ranking over time, most often when discussing issues related to the business environment.8 For this purpose, staff reports generally adopt three approaches: (i) presenting a country’s overall rank at a point in time, frequently supplemented by the country’s ranking in subcomponents; (ii) showing the evolution of a country’s rank over time; and (iii) comparing a country’s rank, or its change over time, with that of peer or benchmark countries.

9. For cross-country comparison, a common approach relies on distance to frontier (DTF) scores, which measure the deviation of a country from the best performer. Only 18 percent of the sampled Article IV reports make use of TPIs for this purpose. Staff used TPIs to identify the gap between an economy’s performance and the best performance at a point in time and assess the absolute change in the economy’s regulatory environment over time.9 Most of these reports relied on the Doing Business Indicators.
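
The Doing Business DTF methodology is not reproduced here, but a minimal sketch of the general idea, assuming a simple linear rescaling between the worst performer and the frontier, is shown below; the function name and the numbers are illustrative only.

```python
# Illustrative only: a generic distance-to-frontier style rescaling, not the
# World Bank's published formula. A score of 100 means the country matches the
# best performer (the "frontier"); 0 means it matches the worst performer.

def dtf_score(value: float, worst: float, frontier: float) -> float:
    """Rescale a raw indicator value to 0-100, assuming higher raw values are better."""
    if frontier == worst:
        raise ValueError("frontier and worst performer must differ")
    return 100.0 * (value - worst) / (frontier - worst)

# Hypothetical numbers: a country scoring 55 where the worst performer scores 20
# and the frontier is 90 sits at 50, i.e., halfway between worst and best.
print(dtf_score(55, worst=20, frontier=90))  # 50.0
```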

10. TPIs are also used to inform the diagnostics in program documents, although to a much lesser extent. Few program documents share the approach observed in surveillance cases. Twelve percent of the sampled program staff reports used TPIs for ranking purposes, and just 4 percent used them for cross-country comparison purposes. TPIs are not used in the specification of conditionality.

11. In general, there is scope for including more specific information about the TPIs used to help guide interpretation and inform the overall assessment. The text mining results, together with a broad reading of Article IV staff reports to identify good practice (Box 2), point to insufficient information about, for example, indicators’ characteristics, methodological shortcomings, and measurement uncertainties. There are also cases where a range of indicators would help ensure a more robust measure for a country.

B. Practices in Other International Organizations

12. As at the Fund, major IOs use TPIs in their analytical work. The EBRD, OECD, and the World Bank use a variety of indicators which significantly overlap with those commonly used by Fund staff.10 TPIs are used to help inform cross-country comparisons in analytical work, identify members’ policy challenges, assess compliance, or serve as inputs to IOs’ own composite indicators or reports. TPIs also supplement official data sources when doing so improves an in-house indicator and enhances timeliness of available data.11 In addition, TPIs are considered useful when resource constraints limit scope for collecting and analyzing primary data.

13. Staff judgment, review mechanisms, and stakeholder engagement are used to ensure the quality of end-products. While there is no formal policy or guidance at these IOs, common elements guiding work with TPIs include: (i) use of the institution’s own data sources where possible; (ii) reliance on internal expert judgment and internal review processes to ensure good practices when integrating TPIs into staff assessments; (iii) clearly distinguishing the use of TPIs from the use of the institution’s primary sources and/or staff judgment in the analysis; and (iv) taking stakeholders’ views into account as part of the regular consultation process for the particular product. For example, the OECD’s country Economic Surveys frequently use TPIs and complement them with internal indicators as well as detailed discussion of the country context (Box 1). The annual assessment of transition challenges by the EBRD is rooted in several TPIs complemented by internal experts’ judgments. At the World Bank, teams consult TPIs when assembling the Country Policy and Institutional Assessment (CPIA), and the importance of staff judgment is emphasized. In some instances, where a private source is being used repeatedly, the methods used by those providers have been examined more closely through methodological seminars with member statistical agencies (OECD). This helps OECD staff gain insights into some TPIs’ quality standards and address their members’ concerns.

Box 1. Use of TPIs in Selected OECD Country Economic Surveys: 2014–17

Main findings. Analysis of 35 OECD country reports during 2014–17 shows the following:

  • A majority of these reports (63 percent) use at least one TPI as an input along with others for diagnostic and analytical purposes.
  • The most frequently used TPIs are the Doing Business indicators and Global Competitiveness Index. These are complemented by the OECD’s own indicators of Product Market Regulation.
  • Governance-related TPIs are used less often. When they are used, they are often complemented by the OECD’s civic engagement and governance index, a subcomponent of the Better Life Index.

From a good-practice perspective, complementing TPIs with internally produced indicators, where available, provides a useful way to check for robustness.

Chart panels: “To what extent are TPIs used across OECD Country Economic Surveys?” and “Which are the most frequently used TPIs across the Country Economic Surveys?”

C. Lessons Learned

14. The recent experiences at the Fund and other IOs confirm the usefulness of TPIs and the challenges associated with their use. TPIs have proven useful to facilitate cross-country comparisons, identify concrete evidence to inform analysis in areas where an institution may not gather its own primary data, and help inform and complement staff’s own assessments. However, users may have more limited insight into how TPIs are compiled and how their quality is ensured, compared to statistics compiled internally.

15. There appear to be common good practices in the use of TPIs across IOs. Internally produced statistics are accepted as the first-best option but, in cases where they are not available or where TPIs may usefully complement such statistics, a more deliberate approach to navigating the challenges associated with working with TPIs includes: (i) emphasis on staff judgment; (ii) use of internal review to assess strengths and weaknesses and ensure proper use; and (iii) consultation with country authorities and other stakeholders. Being clear about the key characteristics that are—and are not—known about the TPI and its source, its benefits and potential shortcomings, and its relevance to staff’s analysis is critical. In some cases, the overlap of TPIs from different sources measuring common concepts introduces the possibility of drawing insights from those multiple sources to improve robustness. To ensure that a more complete picture is presented, staff judgment remains a central element of the approach. In this regard, it is worthwhile to seek the authorities’ views on the TPIs used in the analysis and to seek additional contextual background that could be integrated into the analysis and conclusions as relevant.

Principles for Best Practice for the Fund

The principles-based approach helps preserve flexibility for staff to make progress on issues identified in the ISD, TSR, and emerging macro-critical areas, while avoiding flawed analysis and reputational risks. Staff should not view these principles as a prescriptive list, and should exercise judgment in how to apply them for each specific document type to keep the reports streamlined. The Indicators Digest serves as a centralized quality assessment database for some of the most commonly used TPIs at the Fund. It summarizes key information that staff may wish to take into account in their use and presentation. The internal review process will help ensure observance of these principles at the operational level.

A. Principles for Best Practice

16. The first guiding principle is transparency. Being transparent when selecting indicators (including justifying their choice, while acknowledging their limitations) and discussing how results are used to inform the overall assessment (including by drawing on knowledge of the country context) would help make indicator selection more rigorous and reflective of facts on the ground. The following actions can help enhance transparency:12

  • Briefly discuss the key characteristics of the indicator. Characteristics that could feature in staff’s discussion include the extent to which the indicator represents an objective measurement of a concept, an expert qualitative assessment, or stakeholders’ views (perception-based)13 and other potential signals of an indicator’s strengths and weaknesses.14
  • Acknowledge measurement uncertainties. Recognizing that some concepts are difficult to measure using any kind of data would promote candor and credibility of staff analysis. Where possible, staff should discuss standard errors and take these into account before using indicators for comparisons across time or across countries (see further discussion below).

17. The second principle is robustness. The existing guidance to staff that “assessment and policy advice should be broad-based” promotes the use of indicators as one of many inputs and a complement to qualitative discussion. Staff should avoid relying on a single indicator or source to reach conclusions. Drawing on multiple indicators and sources that measure similar concepts, whenever possible, can help provide a thorough robustness check, and may reveal potential discrepancies across indicators. On the other hand, staff may need to consider a tradeoff between the number of indicators and their full availability across countries and reference periods. When multiple indicators/sources are not available for a given country or concept, it will be especially important to supplement the use of an indicator with other facts or information. This could include a discussion of qualitative factors that may be influencing a country’s performance over time or relative to other countries.
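
A minimal sketch of such a cross-source robustness check is shown below, assuming three indicators that measure a similar concept and are scored so that higher is better; all indicator names, country names, and scores are hypothetical.

```python
# Hypothetical data: rank the same countries under each indicator and flag
# countries whose positions diverge widely across sources.

def ranks(scores: dict) -> dict:
    """Map country -> rank (1 = best), assuming higher scores are better."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {country: position + 1 for position, country in enumerate(ordered)}

indicators = {
    "Indicator A": {"Country X": 72, "Country Y": 65, "Country Z": 58},
    "Indicator B": {"Country X": 4.1, "Country Y": 4.6, "Country Z": 3.2},
    "Indicator C": {"Country X": 80, "Country Y": 44, "Country Z": 61},
}

rank_table = {name: ranks(scores) for name, scores in indicators.items()}

for country in ("Country X", "Country Y", "Country Z"):
    positions = [rank_table[name][country] for name in indicators]
    spread = max(positions) - min(positions)
    verdict = "broadly consistent" if spread <= 1 else "discrepant; probe further"
    print(f"{country}: ranks across sources {positions} -> {verdict}")
```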

18. The third principle is to reflect stakeholders’ views. Fund reports should present the staff’s independent and candid views. To the extent that staff’s analysis makes use of indicators, it is best practice to have a dialogue and to present the views of the authorities and other key stakeholders, particularly when they have different interpretations of the indicators or further background information that may broaden the context in which the analysis is made.

  • For Multilateral Policy Issues Documents (such as the World Economic Outlook, Global Financial Stability Report, and Fiscal Monitor), it may prove impractical to seek stakeholder feedback prior to the issuance of the documents to the Board. In these cases, staff will carefully document the sources of the data, add appropriate caveats, and conduct robustness tests as laid out in the previous two principles of best practice. Staff may modify these publications to address comments from the membership expressed at the relevant Executive Board meeting, in line with current rules and practice and as allowed for by the Fund’s Transparency Policy.15
  • Staff may also revise a policy paper where the staff has modified its views in light of a Board discussion (provided that management shares those views) prior to publication (see Guidance Note on the Fund’s Transparency Policy).

B. Other Specific Guidance

19. To use and present TPIs in Fund reports effectively, users should also be aware of the potential pitfalls associated with different types of TPIs (Figure 2).

Figure 2. Other Specific Guidance

1/ See Indicators Digest for specific information.

2/ This matrix reflects two ends of the TPI continuum. As noted in Annex I, intermediate forms of indicators would include expert analysis and surveys of experience, as well as composite indicators which may blend different kinds of data.

3/ The degree of uncertainty of a score is indicated by the standard errors and/or confidence intervals associated with the score, which compilers of some TPIs normally publish. For other TPIs, these measures are either not produced at all or not publicly available. When confidence intervals overlap, the focus should not be on rankings, and performance should be assumed to be roughly the same.

4/ Use of TPIs should serve as a complement to qualitative discussion based on other inputs, such as knowledge of the country context, discussions with country authorities and other stakeholders, and other sources.

  • To compare indicator scores across time, staff should be aware of changes in data sources and methodology over time. For cross-country comparisons, awareness of the size and representativeness of samples, as well as other country-specific quality concerns, would be important. This applies to indicators of all types (Annex I).
  • In addition, staff should recognize the degree of uncertainty (as measured via confidence intervals and/or standard errors) around reported point estimates, an important factor to be considered when using perception-based indicators. While perception indicators may usefully complement macroeconomic statistics, individual perceptions can change from year to year without fully reflecting changes in the fundamentals of a country. There is a risk that staff could overestimate the degree of accuracy of countries’ perception-based point scores and, therefore, the scope for making comparisons across countries or over time. This is because the difference between a country’s score in two different years—or the difference between two countries’ scores in a single year—can be statistically insignificant if their confidence intervals overlap. In this case, staff could consider presenting country scores relative to a range or an average of peers.
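
As a rough illustration of this caution, a sketch along the following lines could be used to check whether two published scores are distinguishable before emphasizing their ranking; the scores, standard errors, and the 95 percent normal approximation are all assumptions for the example.

```python
# Hypothetical perception-based scores with published standard errors. The rule
# of thumb mirrored here is the one in Figure 2: when confidence intervals
# overlap, avoid emphasizing the ranking and treat performance as roughly the same.

def intervals_overlap(score_a: float, se_a: float,
                      score_b: float, se_b: float, z: float = 1.96) -> bool:
    """True if the approximate 95 percent confidence intervals of two scores overlap."""
    lo_a, hi_a = score_a - z * se_a, score_a + z * se_a
    lo_b, hi_b = score_b - z * se_b, score_b + z * se_b
    return lo_a <= hi_b and lo_b <= hi_a

country_x = (-0.20, 0.15)  # (point estimate, standard error), invented values
country_y = (0.05, 0.18)

if intervals_overlap(*country_x, *country_y):
    print("Intervals overlap: the difference may not be statistically meaningful.")
else:
    print("Intervals do not overlap: the difference may be worth highlighting.")
```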

The Third-Party Indicators Digest

A. Purpose, Scope, and Coverage

20. The Indicators Digest is a central, internal database to help inform staff’s judgment on selected TPIs. The Digest provides information on selected indicators’ statistical characteristics, strengths, and weaknesses, and serves as a companion to the forthcoming Guidance Note.16 The TPIs in the Digest cover the most commonly used indicators in Fund reports that could be subject to scrutiny. The assessment in the Digest is not a validation exercise, nor is it intended to present an ex-ante positive or negative list of indicators acceptable for use in staff analysis and Fund products.

21. For the initial version of the Digest, STA has reviewed 13 TPIs that are widely used by the Fund. The initial set of indicators in the Digest is shown in Table 1. The Digest currently covers predominantly business climate, governance, and political risk indicators. The Digest will remain a living database, but it will not be an exhaustive compilation of all possible TPIs staff may need or want to employ in their analysis. Further expansion of the Digest will be demand-driven, likely reflecting increasing use of indicators in emerging macro-critical areas.17 Staff and Board members will have continuing access to the expanded Digest.

Table 1. Indicators Digest: Summary of Indicators

Indicator | Compiler

Political Risk Indicators
1. International Country Risk Guide | Political Risk Services (PRS) Group
2. Political Instability Rankings | The Economist Intelligence Unit (EIU)
3. Political Stability Risk Index | The Economist Intelligence Unit
4. Political Risk Component of the EIU Country Risk Model | The Economist Intelligence Unit

Governance Indicators
5. Corruption Perception Index | Transparency International
6. Worldwide Governance Indicators | Daniel Kaufmann (Natural Resource Governance Institute and Brookings Institution) and Aart Kraay (World Bank)
7. Corruption Index | Verisk Maplecroft

Business Indicators
8. Index of Economic Freedom | Heritage Foundation
9. Global Competitiveness Index | World Economic Forum (WEF)
10. Transition Indicators | European Bank for Reconstruction and Development (EBRD)
11. Doing Business Indicators | World Bank

Other
12. Indicators of Central Bank Independence and Transparency | Dincer and Eichengreen (2008, 2010, 2014)
13. Consensus Forecasts | Consensus Economics

B. The Methodology

22. The Fund has a structured method to assess the quality of official statistics. The Fund has developed the Data Quality Assessment Framework (DQAF) to provide a comprehensive assessment of countries’ data quality, covering the institutional environments, statistical processes, and the methodological foundations of the statistical products. The DQAF was endorsed by the Executive Board in 2003, and updated in 2012. It reflects best practices and internationally accepted standards in the compilation and dissemination of statistics. In addition to a set of prerequisites for quality, it recognizes five dimensions of data quality: assurances of integrity, methodological soundness, accuracy and reliability, serviceability, and accessibility. Each dimension is broken down into more granular elements. These dimensions are consistent with the United Nations Fundamental Principles of Official Statistics (1994). The assessment involves comparing national practices with internationally accepted standards, with statistical experts formulating ratings to reflect whether international standards are observed (O), largely observed (LO), largely non-observed (LNO), or non-observed (NO).

23. In the case of TPIs, there are no international standards to serve as a reference, but the DQAF can be tailored to inform staff judgment. Relevant elements of the DQAF can provide a structured and consistent approach in the review of a TPI, based on publicly available information. Such an approach was taken by STA to review the TPIs in the Digest.18 As illustrated in Figure 3, the approach entailed an assessment along four relevant areas broadly related to the transparency, compilation, and dissemination of the data product. The assessment covered:

Figure 3. Adapted Data Quality Assessment Framework

  • Assurances of integrity. Is the TPI produced on an impartial and transparent basis?
  • Methodological soundness. Does the compilation of the TPI follow internationally accepted statistical standards, guidelines, or good practices?
  • Accuracy and reliability. Are the data sources used for compiling the TPI accurate, reliable, and timely?
  • Accessibility. Do the users of the TPI have easy access to the data, metadata, and relevant contact staff?

24. This “adapted DQAF,” in the form of a template, guides the assessment of strengths and weaknesses of TPIs. It provides a detailed assessment—along the four dimensions, described above—and a rating of how compilation of the TPI observes, partially observes, or does not observe best practices, or if the assessment is not feasible or warranted. For each TPI, a textual assessment details the characteristics of the compiler and the indicator, with information on the institutional background of the compiling agency (e.g., international organizations, nongovernmental or private entities, foundations) and its known affiliations and sources of funding, thereby affording a notion of its professional independence and impartiality. The parts devoted to the indicator describe the data sources and collection systems (e.g., national data, international organizations, proprietary surveys, other TPIs), representativeness of survey samples, compilation techniques, and methodology used for compiling composite indicators.19 An overall assessment summarizes the principal elements that should assist users in forming their own judgment on the strengths and weaknesses of the indicator.
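
The Fund’s internal template is not reproduced here; the sketch below merely illustrates one way such an assessment record could be structured around the four dimensions and the observance ratings described above. The field names, ratings, and the example entry are invented for illustration.

```python
# Illustrative only: this is not the Indicators Digest's actual template, and the
# example ratings do not reflect any actual assessment.

from dataclasses import dataclass
from enum import Enum

class Observance(Enum):
    OBSERVED = "observes best practices"
    PARTIALLY_OBSERVED = "partially observes best practices"
    NOT_OBSERVED = "does not observe best practices"
    NOT_ASSESSED = "assessment not feasible or warranted"

@dataclass
class TPIAssessment:
    indicator: str
    compiler: str
    assurances_of_integrity: Observance    # impartial, transparent production
    methodological_soundness: Observance   # accepted standards or good practices
    accuracy_and_reliability: Observance   # accurate, reliable, timely source data
    accessibility: Observance              # access to data, metadata, contacts
    overall_assessment: str                # free-text summary of strengths and weaknesses

example = TPIAssessment(
    indicator="Hypothetical Business Climate Index",
    compiler="Hypothetical Institute",
    assurances_of_integrity=Observance.OBSERVED,
    methodological_soundness=Observance.PARTIALLY_OBSERVED,
    accuracy_and_reliability=Observance.PARTIALLY_OBSERVED,
    accessibility=Observance.OBSERVED,
    overall_assessment="Methodology documented, but survey samples are small in some economies.",
)
print(example.indicator, "-", example.methodological_soundness.value)
```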

25. The approach aims to strike the right balance among the coverage of indicators, the depth of analysis, and the presentation of results. The adapted DQAF provides a consistent and comprehensive basis for reviewing TPIs, and the presentation of results aims to be concise, balanced, and informative.

Examples of Good Practices

26. The examples of good practices in Article IV and UFR staff reports presented in this section offer an illustration of how country teams have applied elements of the recommended principles outlined above.

Box 2. Examples of Good Practices in Using Third-Party Indicators in Surveillance, 2014–16

These staff reports have been selected as good practice examples because they apply elements of one or more principles outlined in this paper, though not all of them adhere to the guiding principles in full. They provide useful examples of how the principles could be implemented in practice.

Hungary 2017 Article IV: In discussing competitiveness and business environment, the report explicitly qualifies third party indicators as measuring “perceptions” of Hungary’s investment climate. A scatterplot presents countries’ performance across indicators focused on similar issues from different sources. The report provides country context by discussing underlying factors that may influence Hungary’s performance on business climate indicators, and presents the authorities’ views on factors that may be impacting indicators.

Ukraine 2016 Article IV: An SIP on “Corruption and Growth” uses four different corruption indicators for robustness in analyzing the impact of corruption on economic growth. The SIP provides a description of the alternative indicators’ key characteristics upfront, in line with the transparency principle. In comparing Ukraine to its peers, latest data points as well as evolution over time are shown. Regional level corruption indicators are also shown, where the data is available, to complement the aggregated data. The analysis provides country context on reforms and implementation of anti-corruption measures to complement information from indicators.

Kingdom of Swaziland 2015 Article IV: In considering structural reforms to enhance inclusive growth, the staff report relies on several third-party indicators and, in reference to the survey-based indicators, clarifies that they reflect investors’ perceptions. The report includes a footnote encouraging caution when interpreting the indicators due to the limited number of respondents, geographical coverage, and standardized assumptions.

Myanmar 2015 Article IV: An SIP on developing a competitive export sector charts the country’s performance relative to peers, and includes a footnote encouraging caution when interpreting the indicators due to the limited number of respondents, geographical coverage, and standardized assumptions.

Georgia 2017 Request for an arrangement Under the Extended Fund Facility: An annex on unlocking Georgia’s growth potential draws on official and empirical data, as well as TPIs measuring similar concepts from different sources. The analysis reflects knowledge of the country context and the authorities’ reform agenda, and clearly links TPI performance to specific bottlenecks and opportunities facing the country. Charts reflect use of DTF scores and peer comparators.

Belgium 2016 Article IV: An SIP on “Making Public Expenditure More Efficient” relies on several sources for its third-party indicators and, in reference to the survey-based indicators, includes the qualifier “perceived” ahead of the indicator’s name in text and chart titles.

Union of Comoros 2014 Article IV: The staff report’s assessment of the country’s competitiveness (paragraph 20) explicitly indicates that the assessment is based on survey-based indicators, which point to issues related to infrastructure, institutions, and governance. In addition, the write-up discusses the authorities’ views on these rankings.

Kyrgyz Republic 2017 Third Review under ECF: The report draws on governance and corruption indicators to highlight the scope for reform in the context of an ECF review. Detailed country context, including specific examples, is provided to explain the trend in the country’s governance and corruption indicators. In developing policy advice, indicators are used as one input combined with knowledge of sectoral level developments.

Conclusions

27. TPIs are widely used at the Fund, at other IOs, and in the private sector for various purposes. These institutions have relied on TPIs for cross-country comparisons, to identify concrete evidence to inform analysis in areas where an institution may not gather its own primary data, and to inform and complement their staff’s own assessments. At the Fund, TPIs have provided valuable inputs in surveillance and other contexts, especially as staff continues to break new ground in macro-critical surveillance priorities; the most commonly used TPIs are those related to business climate issues.

28. Notwithstanding their usefulness, there are challenges. In general, concerns center on the quality of several TPIs. In some cases, this is exacerbated by the opacity of the methodologies used by private entities to compile them and by reliance on value judgment. Additional concerns in the Fund arise from the lack of systematic staff guidance, particularly on the use and presentation of perception-based indicators. Not understanding TPIs’ strengths and weaknesses could lead to flawed presentation and analysis, with negative effects on country work and reputational risks for the Fund. These concerns and challenges underscore the need for a structured approach and staff guidance.

29. A framework proposed in this paper can address these concerns and challenges. The approach consists of three best practice principles accompanied by a central database—the Indicators Digest. The principles cover the transparency in the compilation and use of the indicators, the value of robustness checks, and the importance of presenting the views of the authorities and/or other stakeholders to the extent that they may provide additional contextual value to the analysis.

30. The Indicators Digest provides information on selected indicators’ characteristics to help inform staff’s judgment on selected TPIs. The assessment of TPIs included in the Digest is based on the relevant criteria used in the Data Quality Assessment Framework endorsed by the Board in 2003. It is not intended as an ex-ante positive or negative list of acceptable indicators for use in staff analysis and Fund work. The Digest will remain a living database and will be demand-driven. However, it will not be an exhaustive compilation of all possible TPIs staff may need or want to employ in their analysis.

31. The proposed approach will apply to all country documents, policy documents, and multi-country documents that are subject to the Fund’s Transparency Policy. In operationalizing the framework, a staff guidance note will be issued and the review process will be used to help ensure quality standards and proper use. In addition, specific good practice examples will help apply the different elements in practice. Staff are encouraged to follow similar guidelines for other Fund documents not subject to the Transparency Policy. The proposed approach should help promote learning, and support best practice and better traction with the membership.

Issues for Discussion

32. Directors’ views on the following issues would be welcome:

  • Do Directors agree with the proposed approach, which consists of three best practice principles accompanied by a central database (the Indicators Digest), to help promote best practice in Fund reports?
  • Do Directors agree with the proposed scope and coverage of the Indicators Digest, specifically that future expansion of the Digest will be demand-driven, focusing on emerging macro-critical areas in surveillance?
References

    Dincer, Nergiz, and B. Eichengreen, 2008, “Central Bank Transparency: Where, Why and to What Effect?” in Jean-Philippe Touffut (ed.), Central Banks as Economic Institutions (UK: Edward Elgar), pp. 105–142.

    Dincer, Nergiz, and B. Eichengreen, 2010, “Central Bank Transparency: Causes, Consequences and Updates,” Theoretical Inquiries in Law, Vol. 11, No. 1, Article 5.

    Dincer, Nergiz, and B. Eichengreen, 2014, “Central Bank Transparency and Independence: Updates and New Measures,” International Journal of Central Banking, Vol. 10, No. 1, pp. 189–259.

    IMF, 2013, Guidance Note on the Fund’s Transparency Policy.

    IMF, 2017, The Role of the Fund in Governance Issues—Review of the Guidance Note—Preliminary Considerations.

    Oman, P., and C. Arndt, 2010, “Measuring Governance,” OECD Development Centre Policy Brief No. 39 (Paris: OECD Publishing).

    Trapnell, Stephanie E., 2015, User’s Guide to Measuring Corruption and Anti-Corruption, ed. by H. Feigenblatt, D. Torres, and A. Timilsina (New York: United Nations Development Program).
Annex I. The Continuum of Data Types Underpinning TPIs

The table below, adapted from an approach used by the UNDP in its “User’s Guide to Measuring Corruption and Anti-Corruption,” reflects the continuum of data types found in TPIs’ methods and datasets. Some TPIs, especially those that are composites of various inputs, may incorporate features across this spectrum.

Annex II. Third-Party Indicators Text Mining Exercise

The text mining exercise was undertaken on a sample of all Article IV reports (main papers of both stand-alone and combined Article IV reports, and selected issues papers), program reports (for UFR, Post-Program Monitoring, Staff Monitored Programs, and Policy Support Instruments), and multilateral surveillance products (World Economic Outlook, Global Financial Stability Report, Fiscal Monitor) dating from 2016 and the first half of 2017.

Product | Total number of reports in sample
Article IV reports (main papers) | 173
Selected issues papers | 113
UFR reports | 82
Multilateral surveillance products | 11

The geographical coverage of the sample is as follows:

Product | AFR | APD | EUR | MCD | WHD
Article IV reports (main papers) | 24 | 41 | 49 | 20 | 39
Selected issues papers | 21 | 15 | 41 | 12 | 24
UFR reports | 33 | 4 | 16 | 21 | 8

The exercise focused on a core list of search terms related to a selected set of 14 TPIs and their compilers:

List of TPIs used in the Search

  • International Country Risk Guide, Political Risk Services Group
  • Political Instability Rankings, Economist Intelligence Unit
  • Political Stability Risk Index, Economist Intelligence Unit
  • Corruption Perception Index, Transparency International
  • Worldwide Governance Indicators, Kaufmann (Natural Resource Governance Institute and Brookings Institution) and Kraay (World Bank)
  • Corruption Index, Verisk Maplecroft
  • Index of Economic Freedom, Heritage Foundation
  • Global Competitiveness Index, World Economic Forum
  • Transition Indicators, European Bank for Reconstruction and Development
  • Doing Business Indicators, World Bank
  • Indicators of Product Market Regulation, OECD
  • Indicators of Employment Protection, OECD
  • Central Bank Independence and Transparency Indicators, Dincer and Eichengreen
  • Consensus Forecasts, Consensus Economics

The exercise explored six questions using the following search approaches:

Question 1: To what extent are these third-party indicators used across Fund products?

  • Search approach: Identify reports containing at least one of the TPI core search terms.

Question 2: To what extent are selected TPIs used when discussing business climate issues?

  • Search approach: Identify reports containing—in the same paragraph—at least one of the TPI core search terms and one of a set of search terms related to business climate (e.g., business climate, investment climate, business environment).

Question 3: To what extent are selected TPIs used when discussing structural reform or political issues?

  • Search approach: Identify reports containing—in the same paragraph—at least one of the TPI core search terms and one of a set of search terms related to structural reform issues (e.g., product market, labor market, political risk).

Question 4: To what extent are selected TPIs used for ranking purposes?

  • Search approach: Identify reports containing—in the same paragraph—at least one of the TPI core search terms and one of a set of search terms related to ranking exercises (e.g., increase, decrease, better, worse, relative to, unchanged).

Question 5: To what extent are selected TPIs used for cross-country comparisons?

  • Search approach: Identify reports containing—in the same paragraph—at least one of the TPI core search terms and one of a set of search terms related to comparisons across countries (e.g., better, worse, benchmark, peer, regional, average).

Question 6: To what extent are selected TPIs accompanied by a discussion of their characteristics or caveats?

  • Search approach: Identify reports containing—in the same paragraph—at least one of the TPI core search terms and one of a set of search terms related to indicator characteristics or caveats (e.g., coverage, limitation, methodology, perception, assumption, interpretation).
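
As an illustration of the paragraph-level co-occurrence searches described above, a minimal sketch is shown below; it assumes reports are available as plain text with paragraphs separated by blank lines, and the term lists are abbreviated placeholders rather than the actual search terms used by staff.

```python
# Placeholder term lists and a dummy report; the actual search terms and corpus
# are those described in this annex.

import re

TPI_TERMS = ["doing business", "global competitiveness index",
             "corruption perception", "worldwide governance indicators"]
BUSINESS_CLIMATE_TERMS = ["business climate", "investment climate", "business environment"]

def mentions_any(paragraph: str, terms) -> bool:
    text = paragraph.lower()
    return any(term in text for term in terms)

def uses_tpi_for_topic(report_text: str, topic_terms) -> bool:
    """True if any single paragraph contains both a TPI term and a topic term."""
    paragraphs = re.split(r"\n\s*\n", report_text)
    return any(mentions_any(p, TPI_TERMS) and mentions_any(p, topic_terms)
               for p in paragraphs)

sample_report = (
    "Growth remained resilient in 2016, supported by strong domestic demand.\n\n"
    "The business environment improved, with the Doing Business ranking rising ten places."
)
print(uses_tpi_for_topic(sample_report, BUSINESS_CLIMATE_TERMS))  # True
```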
1

Other international organizations, sovereign credit rating agencies, and academia also use TPIs extensively.

2

For the purposes of this analysis, the category “program” includes staff reports for UFR, Post-Program Monitoring, SMPs, and PSIs; and the category “Article IV” includes both stand-alone Article IV reports and combined Article IV reports.

3

This set of TPIs covers areas that are core and emerging issues in Fund’s work—for example, institutional quality, governance, political risk, business climate, macroeconomic forecasts, and structural reforms.

4

Text mining provides an efficient mechanism for extracting information from a large number of reports that would otherwise be difficult to review manually. Results can be sensitive to the search terms and approaches used. Staff performed spot quality checks of the results, but did not perform thorough quality checks due to resource constraints.

5

TPIs are also used across the Fund for various other analytical purposes—for example, as an input in cross-country analytics, research, and diagnostics (e.g., as an input in the External Balance Assessment model). The specific nature of those exercises and the unique tradeoffs suggest that analysis of TPI use in those contexts could be considered in the relevant Board work programs as appropriate.

6

The World Bank’s Doing Business indicators and the World Economic Forum’s Global Competitiveness Index are among the most commonly used TPIs in the sampled Fund reports.

7

This is consistent with findings in the Board paper “The Role of The Fund in Governance Issues—Review of the Guidance Note—Preliminary Considerations” (August 2017). Only 16 percent of the Article IV staff reports, selected issues papers, UFR and SMP reports surveyed during 2005-16 for 40 countries cited TPIs. The World Bank governance indicators and Transparency International corruption perceptions measures appear most frequently.

8

Use of governance-related TPIs for ranking purposes is not common. TPIs used in these few reports are Transparency International’s Corruption Perceptions Index, the Heritage Foundation’s Index of Economic Freedom and the World Bank’s Worldwide Governance Indicators.

9

For example, the 2016 Article IV staff reports for Cambodia, Dominica, Grenada, Iran, and Panama. The staff report for Georgia’s 2016 Request for an Extended Arrangement Under the Extended Fund Facility presented Georgia’s relative performance against the best performer among a subset of regional peers according to Doing Business indicators, Transparency International’s Corruption Perceptions Index, and EBRD transition indicators.

10

These include the World Bank Doing Business indicators, the World Economic Forum Global Competitiveness Index, the Worldwide Governance indicators, Transparency International’s Corruption Perception indices, and OECD indicators for product market regulation and labor market outcomes.

11

For example, the guidelines for the EBRD’s Article I compliance assessment list Transparency International’s Corruption Perception Index as one of the sources to be consulted as part of the background work for the assessment. At the World Bank, TPIs are included in the World Development Indicators to help ensure a selection of well-defined, objective measures of development for public use. They also play a role in the Bank’s annual CPIA, with each criterion of the assessment including suggested indicators (both internally and externally produced) to assist country teams in determining country scores and in ranking countries.

12

To keep reporting requirements streamlined, for example, staff are not expected to provide a description of each data source and the underlying data.

14

These include impartiality and transparency in their production; timeliness and relevance of source data; cross-country coverage; the methodology used to construct aggregate indicators; and accessibility of data, metadata, and contacts. Consideration should also be given to factors that could influence the compiler’s objectivity, for example, the entity’s mission, funding, and governance, and whether the indicator is too blunt a tool to be useful in supporting specific policy advice. Many of these attributes related to indicators and compilers are captured in the framework underpinning the Indicators Digest (see Section III and the accompanying background note).

15

To promote best practice, staff are encouraged to follow similar guidelines for other Fund documents that are not subject to the Transparency Policy (e.g., Regional Economic Outlooks and Staff Discussion Notes).

16

The Digest is intended to inform staff’s judgment on the use of a third-party indicator based on information provided to the public by the compiler. Staff also interacted with selected compilers, including the EBRD, EIU, Maplecroft, PRS Group, Transparency International, and the World Bank, to better understand their methodology and their data collection, processing, and dissemination practices. These interactions helped inform the assessments in the Digest. Nevertheless, as the indicators are not produced by a member country’s statistical agency, STA staff were not able in all cases to obtain full access to compilers and/or complete information on sources and methods. Thus, staff cannot conclusively assess the validity, reliability, and impartiality of each indicator. Moreover, the Data Quality Assessment Framework (DQAF) is more focused on statistical processes than on passing judgment on the quality of the statistical output itself.

17

Staff will guide expansion of the Digest and will take into account input from stakeholders when assessing demand. Concurrently, the Fund’s area and functional departments and working groups on emerging issues are helping close any remaining gaps—including in terms of data sources and the advantages and weaknesses of specific indicators—in the context of ongoing work.

18

The adapted DQAF leaves out the elements of the standard DQAF that are not applicable in the context of TPIs. Similarly, the language of some of the selected elements of the DQAF is adapted to reflect a different institutional environment between official statistics and TPIs.

19

For example, there may be challenges in comparing country rankings over time and across countries. In principle, the use of many data points can result in greater accuracy if the same concepts are measured consistently over time. Consideration needs to be given to composite indicators whose methodologies and data sources change from year to year. Further, given the many different data inputs, the rankings of individual countries may take years to change, particularly if other countries are also undergoing similar changes.
