Plenty of Cake to Go Around: Eat Your Fill, Sleep Well
TI’s Corruption Perceptions Index (CPI) is well-known
as a source of country rankings, but less so for its contents.
AA believes that
most people who use or quote TI’s rankings do not know what they mean and
operate with some or all of the following misperceptions.
To be very clear
upfront, this post is not arguing that we should not use TI’s CPI, but rather that we should understand what it
is, what its limitations are, and how to use it intelligently.
FIVE COMMON
MISPERCEPTIONS ABOUT TI’S CPI
1. TI’s rankings assess the overall level of corruption in a country.
2. While not “facts”, the analytic process behind the rankings results in fairly accurate assessments.
3. TI performs the analysis behind the rankings, or at the very least directs it.
4. Every country is rated using the same common set of standards.
5. The rankings are sufficiently precise that we can use them to distinguish the level of corruption in one country from the level in another.
TI provides extensive disclosure
about the CPI at its Methodologies
page.
Those who read this material carefully will not hold any of the first four
misperceptions. The problem is that TI’s disclosures appear to be infrequently read.
TI’s apparently precise ranking
system does give the impression that Misperception #5 is correct. It is not.
MISPERCEPTION #1 – Overall
Level of Corruption in a Country
Here’s a quote from a TI
FAQ on its rankings:
“Is the
country/territory with the lowest score the world's most corrupt nation? No. The
CPI is an indicator of perceptions of public sector corruption, i.e.
administrative and political corruption. It is not a verdict on the levels of
corruption of entire nations or societies, or of their policies, or the
activities of their private sector.”
What does TI rank then? What is its definition of “corruption”?
Why
should we care?
It’s very important to understand TI’s focus if one is to use
its rankings intelligently.
If you read the FAQs in the Methodologies
material (page 2), you will find a list of what is included and what is not.
Money
laundering, illicit financial flows (IFFs), informal markets, and the private sector are NOT included.
Broadly
speaking, TI’s CPI focuses on the public sector only.
TI is very clear
on this but AA wonders how many users of TI’s CPI understand this.
What this
means then is that a private sector member’s actions do not affect the ranking
of its respective country.
This is very important because if one is using TI
rankings to construct assessments of money laundering and terrorism finance,
one might be mis-specifying the risk, if one assumes that TI rankings assess
the overall level of corruption in a country.
Why?
Private sector enterprises
are probably the major channels through which ML and TF take place in most
jurisdictions.
MISPERCEPTION #2 – Rankings as “Facts”
TI’s annual
ranking for 2018 is here.
The
first thing to note is that this is described as the “Corruption Perceptions
Index”.
The key word here is “perceptions”. “Opinions” not “facts”.
That makes
sense.
There are no formal reports filed on bribes paid or bribes accepted.
One
has to infer the extent of corruption in a country from very limited
hard data – corruption cases that have come to light—and other indirect
indicators.
The first takeaway then is that a ranking for a specific country
is an estimate.
Likely a very
rough estimate.
Like the estimate that money laundering amounts to 2% to 5% of global GDP annually
(usually mis-stated as amounts of USD 800 billion to USD 2 trillion), corruption rankings are often
treated as scientific fact. They are not
and should not be treated as such.
MISPERCEPTION #3 - The rankings are
based on TI’s research.
TI uses the published assessments of 13 sources.
Each
of these sources prepares reports for its own or its clients’ use using its own
criteria and methodology.
TI does not do the research itself. It does not
set the focus, criteria or methodology for these sources’ studies.
Rather
TI repurposes the 13 sources’ reports to create the CPI. In 2015, one source, IHS Global, stopped
providing data to TI. TI now accesses
some IHS data via information published by the World Bank.
MISPERCEPTION
#4 – Common Standards and Methodologies
Who are the experts? What are
their methodologies?
For a detailed answer click on “Methodologies”.
Here you will find a discussion about each expert and its methodology.
Click here
to see the sources used in ranking a specific country.
The first thing you
will notice is that not every source rates every country.
In a situation where some countries are rated by some experts and other countries by other experts, should we automatically assume that all the experts use an identical single common standard and methodology?
Clearly we need to look a bit deeper, because if the experts don't have a single common standard, then which experts rate a country will affect that country's rating.
AA has
read this material and encourages everyone who uses TI’s CPI to read it as
well.
Why?
First, this is quite a heterogeneous group.
It includes
multi-lateral institutions (2), NGOs/Foundations (5), companies selling country
risk or business information services (4), university affiliated entities (2).
Each
of these has a specific purpose for its study motivated by its stated “mission”
or, in some cases, perhaps by its ideology.
That is not meant as a pejorative
remark but as a practical one. We need
to be sensitive to conscious and unconscious factors that may influence a
rating, particularly in the case where “perceptions” play a key role in
determining rankings.
AA argued in another
post that the collapse of Abraaj seemed to be treated in some circles as evidencing a more
serious failure by regulators and markets than scandals in certain OECD
countries that had a much greater impact on the world economy did.
Are there other
geographical biases? Is corruption in African Country G more heinous than in
Baltic Country L?
Without taking a stand on the issue, AA would note that
there is some controversy about the independence of Freedom House from US
foreign policy. The FH study that TI uses rates former Soviet bloc states.
Second,
the experts’ focus is also heterogeneous.
Not all of these sources focus
on corruption itself: bribes paid, bribes taken.
Rather a number of them focus
on legal/institutional capacity: whether
the country has an adequate framework to prevent and punish corruption (e.g., legislation,
staffing and independence of investigative and legal bodies) and sound administrative
practices (e.g., a professional independent civil service, open bidding, public
availability of information, etc.).
These indicators by themselves
are not indicators of corruption but rather perhaps indicators of opportunities
for corruption.
Very big difference.
Laws and frameworks are fine but as
experience shows repeatedly they do not prevent crime from occurring.
That’s
not to say that these elements aren’t important.
They are necessary but not
sufficient elements.
The question is how much weight they should be given when
assigning corruption perceptions to a particular country.
AA would be in the
camp where actual corruption rather than opportunities for corruption would be
given more weight in “rankings”.
Third, the experts’ methods are not
identical. Some use in-house experts to
make assessments. Others reach out to
local contacts and other outside experts, e.g., academics, lawyers, accountants, etc. In
some cases, like the EIU, in-country freelancers are used, at least in part.
Some
of the experts appear to ask a single or a couple of questions as part of a
larger study on more than just corruption.
Others have a more robust set of questions on corruption. Or survey a wider set of contacts.
For
example, in 2018 the World Economic Forum Executive Opinion Survey (WEF-EOS), one
of TI’s sources, received 12,274 responses about corruption from executives in 140 countries.
Fourth, some of the experts, primarily the 3 firms that
sell political risk and country assessments to businesses, assess all levels
of corruption from petty to “grand” corruption. Varieties of Democracy, another of TI's expert sources, does as well.
As a practical matter,
these 3 firms' clients (businesses) are likely to be most interested in the need to pay ongoing
bribes to ensure their daily operations run unhindered if they invest in Country
X.
So smaller recurring cash payments to facilitate clearance through customs
of imports and exports, to secure connection to and maintenance of utilities, to
deal with tax authorities, to obtain licenses, etc. are of prime concern.
Finding
out about them is fairly easy. One can
ask businesses in the country. They will
be more likely to report such occurrences because these are imposed on them, as opposed
to grand corruption, in which they may be willing participants.
Because it’s
harder to find out the true level of grand corruption, there is a risk that
corruption ratings based on petty or moderate corruption may skew the rating
for a country.
Fifth, unlike the countries in the CPI, the 13 experts are unranked. Their perceptions are accorded equal
weighting. Each expert’s score is added
and a simple arithmetic mean is calculated.
They are all presumed to be
equally smart and informed and to use equally valid methods to evaluate
corruption. It doesn’t matter whether an
expert asked a single question or sent a questionnaire and got 12,274
responses.
It doesn’t matter if the expert specializes in a limited geographical
area or covers the world. The Economist
Intelligence Unit, which uses in-country freelancers in part to do its assessments
and rated 131 countries in 2018, is presumed to know as much about each of those
countries as the African Development Bank, which uses in-house economists, knows about the 54 African countries it rated. Or PERC, which contacts a wide range of
potential respondents to ask a single question and rated 15 Asian countries.
As
you might expect, not every country is rated by all 13 experts. Some of this is because of geographical
specialty. The experts from the African Development Bank don’t rate
Switzerland, the USA, or France. PERC’s focus is a slice of Asia.
It’s not
unreasonable to say then that the rating standards across all countries are not
uniform given the diversity of focus, methodology, level of detail, etc. of the
13 experts and the fact that the same 13 experts do not rate each country.
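The aggregation just described, in which a country's CPI score is the simple unweighted mean of whatever subset of sources rates it, with a minimum of 3 sources required, can be sketched in a few lines. The scores below are hypothetical illustrations; only the unweighted-mean and minimum-three-sources rules come from the discussion above:

```python
def cpi_score(source_scores, min_sources=3):
    """Unweighted mean of the available source scores (0-100 scale),
    or None when fewer than min_sources rate the country."""
    available = [s for s in source_scores if s is not None]
    if len(available) < min_sources:
        return None  # TI requires at least 3 sources to publish a score
    return sum(available) / len(available)

# Hypothetical country covered by only 4 of the 13 sources
# (None means a source does not rate this country).
scores = [72, 68, 75, None, 70] + [None] * 8
print(cpi_score(scores))  # -> 71.25
```

Note what this implies: which sources happen to cover a country determines its mean, which is exactly the heterogeneity problem raised above.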
The full data set shows the score, the standard error (roughly, the
standard deviation of the estimated score), and the Upper and Lower CI (confidence interval) bounds.
There is a
wealth of information here. If you use
the TI CPI, then you should be familiar with this information so you can use it
intelligently.
For example, should we treat a rating with only 3 experts
(the minimum required for a rating) as being as valid as one with 10?
If the
standard error is large, should we assess that the rating is less accurate than
one which has a smaller standard error?
For example, the SE for Switzerland is 1.57, Bahrain and the Philippines
are at 1.81, Saudi is at 6.34, Qatar at 8.08, and Oman at 9.46.
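TI's published standard errors come from its own procedure, but the intuition can be sketched with a simplified calculation: treat the source scores as a small sample, compute the standard error of their mean, and put a rough 90% interval around the score. The numbers below are hypothetical, and the normal-approximation interval is an assumption for illustration, not TI's actual method:

```python
import math

def se_and_ci(scores, z=1.645):
    """Mean, standard error of the mean, and a rough 90% normal-approximation
    confidence interval for a small set of source scores."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample standard deviation (divide by n - 1), then SE = sd / sqrt(n).
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))
    se = sd / math.sqrt(n)
    return mean, se, (mean - z * se, mean + z * se)

# Two hypothetical countries: B scores 3.5 points higher than A,
# yet their confidence intervals overlap, so the gap may not be meaningful.
mean_a, se_a, ci_a = se_and_ci([60, 66, 72, 58])
mean_b, se_b, ci_b = se_and_ci([63, 70, 75, 62])
print(mean_a, mean_b)      # 64.0 vs 67.5
print(ci_a[1] > ci_b[0])   # True: the intervals overlap
```

The fewer the sources and the more they disagree, the wider the interval, which is why a score backed by 3 sources deserves less confidence than one backed by 10.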
MISPERCEPTION
#5 - Ratings are Precise Measures
TI ranks some 180 countries. 100 is the theoretical “best” score. 0 the worst.
Denmark is in the first rank with 88.
New
Zealand is at 87.
Then four countries follow at 85.
All the way down to Syria
(13) and Somalia (10).
This is some very precise parsing of differences in
corruption.
Let’s stop and reflect for a moment.
We started with “perceptions”
but we seem to have wound up with “precision”.
AA would argue “false” precision.
On a hundred point scale, NZ would
appear to be 1% more corrupt than Denmark.
Can we really parse gradations this
fine?
More importantly is there really a practical difference in
corruption between Denmark (ranking #1 with 88) and Germany (ranking #11 with a
score of 80)?
The answer to both questions is no.
TI agrees
with this at least in part.
In their FAQs, they answer a hypothetical
question from a reader about changes of 1 or 2 points in a specific country’s
rating year-on-year with:
“It is unlikely that a one or two point CPI score
change would be statistically significant.”
AA would argue that even larger
differences among countries are not significant either.
Let’s look at an
endeavor that has more data and more rigorous mathematical analysis of the data, though one
which is not devoid of opinion: credit
ratings.
S&P, Moody’s, and Fitch rank issuers.
But they don’t assign
them individual ranked ratings. Rather
they group them into categories of similar risk.
Those issuers least likely to
default are rated (placed in category) AAA.
If distinctions are made, a “+” or “–” sign is used.
AA doesn’t think it’s
a sensible proposition that corruption analysis is more scientific than credit
analysis and hopes you do too.
AA suggests that TI adopt a similar approach in
an effort to prevent misunderstanding and misuse of its rankings. That is, divide countries into broad categories of risk of corruption like credit ratings or S&P's BICRA.
This will have the immediate effect of
preventing users from plugging the current “precise” ratings into their models
and coming up with equally imprecise results in theirs.
Some of those results are even more impressive, with figures to two
digits to the right of the decimal point, though admittedly not on a 100 point
scale.
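The banding approach suggested above can be sketched simply. The cut-offs and labels below are entirely hypothetical illustrations; TI publishes no such categories:

```python
# Hypothetical corruption-risk bands, analogous to credit-rating categories.
# The cut-offs and labels are illustrative assumptions, not TI's.
BANDS = [(80, "Very low perceived corruption"),
         (60, "Low"),
         (40, "Moderate"),
         (20, "High"),
         (0,  "Very high")]

def band(score):
    """Map a 0-100 CPI-style score to a broad risk category."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    return None

# Denmark (88) and Germany (80) land in the same band, sidestepping
# false precision in single-point differences.
print(band(88), band(80), band(13))
```

Grouping countries this way concedes the limits of the underlying perceptions data rather than inviting users to treat single-point gaps as meaningful.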