Lancet surveys of mortality before and after the 2003 invasion of Iraq


[Image: Figure 4 from the second Lancet survey of Iraqi mortality, showing a comparison with two other mortality surveys.]

The Lancet has published two studies on the effect of the 2003 invasion of Iraq and subsequent occupation on Iraqi mortality, the first in 2004 and the second (by many of the same authors) in 2006. The articles provoked controversy in the US media and criticism from the US and British governments, but the statistical methodology and epidemiological techniques applied are standard and widely accepted as reasonable by experts in the two fields.<ref>letter to The Age</ref> Total deaths (combatants plus non-combatants) also include all excess deaths due to increased lawlessness, degraded infrastructure, poor healthcare, etc.

The first survey <ref name="lancet2004">"Mortality before and after the 2003 invasion of Iraq: cluster sample survey"PDF. By Les Roberts, Riyadh Lafta, Richard Garfield, Jamal Khudhairi, and Gilbert Burnham. The Lancet, 29 October 2004 (hosted by</ref> published on 29 October, 2004, estimated the risk of death following the 2003 invasion and subsequent occupation of Iraq to be 50% higher than before the invasion. This led to an estimate of 98,000 excess deaths (with a range of 8,000 to 194,000, using a 95% confidence interval). The authors called this a conservative estimate, because it excluded the "extreme statistical outlier" data from Falluja. If Falluja were included, the estimated increased risk of death was 2.5-fold (95% CI: 1.6 to 4.2). The Falluja cluster "indicates a point estimate of about 200,000 excess deaths in the 3% of Iraq represented by this cluster", though no confidence interval is given for this point estimate.

The second survey <ref name="lancetOct2006">"Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey"PDF. By Gilbert Burnham, Riyadh Lafta, Shannon Doocy, and Les Roberts. The Lancet, October 11, 2006</ref> <ref name="Lancet supplement">"The Human Cost of the War in Iraq: A Mortality Study, 2002-2006"PDF. By Gilbert Burnham, Shannon Doocy, Elizabeth Dzeng, Riyadh Lafta, and Les Roberts. A supplement to the October 2006 Lancet study.</ref> published on 11 October, 2006, estimated 654,965 excess deaths related to the war, or 2.5% of the population. The new study applied similar methods and involved surveys between May 20 and July 10, 2006. More households were surveyed, allowing for a 95% confidence interval of 392,979 to 942,636 excess Iraqi deaths.

The Lancet surveys' reliability was widely criticized by the US and Iraqi governments, pro-war politicians and commentators. The anti-war Iraq Body Count project and some experts in surveying and cluster sampling have also been critical. See the criticism sections below.

However, the Lancet surveys were supported by many epidemiologists and statisticians. See the "responses to criticism" sections below.


[edit] The first study (2004)

The survey was sponsored by the Center for International Emergency Disaster and Refugee Studies, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA (authors L Roberts PhD, G Burnham MD) and the Department of Community Medicine, College of Medicine, Al-Mustansiriya University, Baghdad, Iraq. Roberts' team was chosen for their experience in estimating total mortality in war zones, for example his estimate of 1.7 million deaths due to the war in the Congo,<ref></ref> which not only met with widespread acceptance and no challenge when published in 2000,<ref></ref> but was cited in a U.N. Security Council resolution requiring all foreign armies to leave Congo, a United Nations request for $140 million in aid, and a US State Department pledge of an additional $10 million in aid. Similar studies have been accepted uncritically as estimates of wartime mortality in Darfur<ref></ref> and Bosnia.

Roberts' regular technique is to estimate total mortality through personal surveys of a sample of the households in the area under study. This method is chosen to avoid the undercounting inherent in relying only on reported deaths in areas so chaotic that many deaths go unreported, and to include deaths not directly attributable to violence but nevertheless the result of the conflict through indirect means, such as contamination of the water supply or unavailability of medical care. The baseline mortality rate calculated from the interviewees' reports for the period before the conflict is subtracted from the rate reported during the conflict, to estimate the excess mortality attributable, directly or indirectly, to the conflict. This technique was accepted without challenge in the previous mortality surveys discussed above.
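
The arithmetic behind this subtraction can be sketched as follows. The inputs here are illustrative assumptions chosen to echo the first study's headline numbers, not the study's actual model:

```python
def excess_deaths(pre_rate, post_rate, population, years):
    """Excess deaths implied by a rise in the crude mortality rate,
    where rates are expressed per 1,000 people per year."""
    return (post_rate - pre_rate) / 1000.0 * population * years

# Assumed inputs: pre-war rate 5.0 and post-war rate 7.5 per 1,000
# per year (a 50% rise), ~24.4 million people, 17.8 months of war.
estimate = excess_deaths(5.0, 7.5, 24_400_000, 17.8 / 12)
print(round(estimate))  # roughly 90,000
```

This shows why a 50% rise over a baseline of about 5 per 1,000, applied to Iraq's population for roughly a year and a half, lands in the neighbourhood of the study's 98,000 central estimate; the published figure comes from the study's own statistical model, not this simplification.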

Because an evenly distributed survey is impractical, particularly during a war, Roberts' surveys use "cluster sampling": the area is divided into a number of approximately equally populated regions; a random point is chosen within each region, and a fixed number of the households closest to that point are surveyed as a "cluster". While not as accurate as an evenly distributed survey of the same number of households, this technique is more accurate than surveying only one household per selected point.
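
A minimal sketch of the cluster-selection idea, using invented region names and populations (the real study worked from Iraq's governorates and its own randomization procedure):

```python
import random

random.seed(42)

# Invented populations, for illustration only.
regions = {"North": 800_000, "Centre": 2_400_000, "South": 800_000}

def assign_clusters(regions, n_clusters):
    """Draw cluster locations with probability proportional to
    population, so populous regions receive more clusters."""
    names = list(regions)
    weights = [regions[name] for name in names]
    return random.choices(names, weights=weights, k=n_clusters)

clusters = assign_clusters(regions, 33)
# Each cluster then surveys the ~30 households nearest a random
# point, giving about 33 * 30 = 990 households in total.
```

The population weighting is what keeps the sample approximately self-weighting: a household in a dense region and one in a sparse region have roughly the same chance of being surveyed.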

In his study of Iraq, Roberts divided the country into 33 regions, attempting to sample 30 households for each cluster, and selecting 988 households, with 7868 residents. In September 2004, each surveyed household was interviewed about household composition, births, and deaths since January, 2002. Of 78 households where members were asked to show documentation to confirm their claims after the interview was finished, 63 were able to present death certificates. According to the authors, 5 (0.5%) of the 988 households that were randomly chosen to be surveyed refused to be interviewed.

The relative risk of death due to the 2003 invasion and occupation was estimated by comparing mortality in the 17.8 months after the invasion with the 14.6 months preceding it. The authors stated, "Making conservative assumptions, we think that about 100,000 excess deaths, or more have happened since the 2003 invasion of Iraq." Among these "conservative assumptions" was the exclusion of the Fallujah data from many of the findings. Because interpreting the results would be complicated by the inclusion of an outlier cluster in Fallujah, where heavy fighting caused far more casualties than elsewhere in Iraq, the study focused mainly on results that excluded the Fallujah cluster. The authors argued that the Fallujah cluster's inclusion could be justified as a normal part of the sampling strategy (noting that other "hotspots", such as Najaf, had not ended up being surveyed), and they presented two sets of results in some cases (one including the Fallujah data and one not); nevertheless the article, and most press coverage of it, stressed the data that excluded the Fallujah cluster.

When the Fallujah data were included, forty-six percent of those violently killed in incidents involving coalition forces were men between the ages of 15 and 60, whom the authors considered possible combatants. The seven percent who were women and the forty-six percent who were children younger than 15 were considered in all probability noncombatants. On this basis, the authors stated that "Most individuals reportedly killed by coalition forces were women and children." This was erroneously reported in many media stories, and by Roberts himself, as 100,000 excess civilian casualties. <ref></ref> <ref>"Speculation is no substitute: a defence of Iraq Body Count". By Hamit Dardagan, John Sloboda, Josh Dougherty. Iraq Body Count project. April 2006.</ref>

The main debate in the U.S. and UK media focused on whether 98,000 (95% CI 8,000-194,000) more Iraqis died as a result of coalition intervention, calculated from the estimate that mortality increased to 1.5 times (95% CI 1.1-2.3) the prewar rate (excluding the Fallujah data). Had the Fallujah sample been included, the survey's estimate that mortality had increased about 2.5 times since the invasion (95% CI 1.6-4.2) would have implied an excess of about 298,000 deaths (95% CI ?-?), with 200,000 concentrated in the 3% of Iraq around Fallujah (Roberts et al., p. 5).

According to the article, violence was responsible for most of the excess deaths whether or not the Fallujah data were excluded; coalition air strikes would be the main cause of these violent deaths if the Falluja data were included. The study draws the controversial conclusions that "Violent deaths were widespread, reported in 15 of 33 clusters, and were mainly attributed to coalition forces" and that "Violence accounted for most of the excess deaths and air strikes from coalition forces accounted for most violent deaths." The study estimates that the risk of death specifically from violence in Iraq after the invasion was approximately 58 times higher than in the period before the war (95% CI 8.1-419), meaning that there is a 97.5% chance that the risk of death from violence after the invasion was at least 8.1 times higher than before. Newsday reported:

"The most common causes of death before the invasion of Iraq were heart attacks, strokes and other chronic diseases. However, after the invasion, violence was recorded as the primary cause of death and was mainly attributed to coalition forces—with about 95 percent of those deaths caused by bombs or fire from helicopter gunships".

It was noted that the large estimate of excess deaths is even more striking in view of the widely accepted belief that mortality in Iraq was already very high, at 0.5% per year, particularly among children, due to UN sanctions against Iraq.<ref></ref>

[edit] Criticism

The study immediately became controversial for several reasons.

Some skeptics criticized the relatively broad 95% confidence intervals (CI95) due to the relatively small number of clusters.

For instance, Fred Kaplan, in a Slate article, described the confidence interval: "the authors are 95 percent confident that the war-caused deaths totaled some number between 8,000 and 194,000. (The number cited in plain language—98,000—is roughly at the halfway point in this absurdly vast range.) This isn't an estimate. It's a dart board."<ref></ref>

The authors responded that the parenthetical phrase above reflects a poor understanding of a statistical confidence interval, as the central estimate of 98,000 was not chosen merely because it is "roughly at the halfway point". The probability distribution follows the normal distribution, with numbers near the central point estimate far more likely than numbers closer to either extreme. Roberts said, "this normal distribution indicates that we are 97.5% confident that more than 8,000 died, 90% confident more than 44,000 died and that the most likely death toll would be around 98,000";<ref></ref> he added that many well-accepted statistics, such as the number killed under Saddam's regime or the number dead from the 2005 tsunami, have similarly broad confidence intervals due to small but statistically adequate sample sizes. He also criticised Kaplan for altering quoted text and focusing on one aspect of the report.
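
Roberts' percentages can be approximately reproduced from the published interval under a normality assumption. This is only a sketch: the published CI is not perfectly symmetric about 98,000, so these figures come out slightly below his:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution of a normal, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu = 98_000                      # central estimate
lo, hi = 8_000, 194_000          # published 95% CI
sigma = (hi - lo) / (2 * 1.96)   # rough sigma from the CI width

p_over_8000 = 1 - normal_cdf(8_000, mu, sigma)    # about 0.97
p_over_44000 = 1 - normal_cdf(44_000, mu, sigma)  # about 0.87
```

The point of the exercise is the shape of the distribution: values near 98,000 are far more probable than values near either end of the interval, which is exactly what Kaplan's "dart board" image misses.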

Lila Guterman, writing for the Columbia Journalism Review states: "I called about ten biostatisticians and mortality experts. Not one of them took issue with the study’s methods or its conclusions. If anything, the scientists told me, the authors had been cautious in their estimates. With a quick call to a statistician, reporters would have found that the probability forms a bell curve — the likelihood is very small that the number of deaths fell at either extreme of the range. It was very likely to fall near the middle." <ref> "Dead Iraqis. Why an Estimate was Ignored". By Lila Guterman, Columbia Journalism Review, March/April 2005.</ref>

A Ministerial Statement of 17 November 2004 by the UK government stated that "the Government does not accept its [the study's] central conclusion", on the grounds that it was apparently inconsistent with figures published by the Iraqi Ministry of Health, based on data collected by hospitals, which said that "between 5 April 2004 and 5 October 2004, 3,853 civilians were killed and 15,517 were injured".<ref></ref>

Some critics have said that The Lancet study authors were unable to visit certain randomly selected sample areas. In an interview on the radio program "This American Life," however, the authors of the study say that they never substituted different, more accessible, areas, and that every place that was randomly selected at the beginning of the study was surveyed in full, despite the risk of death to the surveyors.

Critics of the Lancet study have pointed out other difficulties in obtaining accurate statistics in a war zone. The authors of the study readily acknowledge this point and note the problems in the paper; for example they state that "there can be a dramatic clustering of deaths in wars where many die from bombings". They also said that the data their projections were based on were of "limited precision" because the quality of the information depended on the accuracy of the household interviews used for the study.<ref></ref><ref name="2004-10-28-casualties_x.htm"></ref>

Fred Kaplan argued in Slate that the study's prewar mortality estimate of 5 deaths per 1,000 is erroneously low, because Iraq's mortality rates in the period 1980-1991 were all higher, ranging from 6.8 to 8.1, so that the study's conclusions were overstated by this discrepancy.<ref></ref> Kaplan's pre-war mortality estimates were based on the findings of Beth DaPonte, whose 1980-1990 numbers came from United Nations figures; most of DaPonte's estimates of Iraqi casualties from Desert Storm and its aftermath are higher than those of other researchers.<ref></ref>

The results of the study were politically sensitive, since a heavy death toll could raise questions about the war's humanitarian justifications on the eve of a contested US presidential election. Critics objected to the timing of the report, claiming it was hastily prepared and published, despite what they perceived as its poor quality, in order to sway the U.S. electorate. On this topic, Les Roberts stated, "I emailed it in on Sept. 30 under the condition that it came out before the election. My motive in doing that was not to skew the election. My motive was that if this came out during the campaign, both candidates would be forced to pledge to protect civilian lives in Iraq. I was opposed to the war and I still think that the war was a bad idea, but I think that our science has transcended our perspectives."<ref></ref><ref name="2004-10-28-casualties_x.htm"/> He replied to criticism by Professor John Allen Paulos of the Temple University Math Department, who alleged "an expedient rush to publish", with the following letter:

Dear Dr. Paulos,
I read your note below with some sadness. FYI, there was a rush to publish as I have said in every major interview I have given.
A) I have done over 20 mortality surveys in recent years and have never taken more than a week to produce and release a report (because people dying is important) until this article. Thus, this was the least rushed mortality result I have ever produced.
B) We finished the survey on the 20 Sept. If this had not come out until mid-Nov. or later, in the politicized lens of Baghdad (where the chief of police does not allow his name to be made public and where all the newly trained Iraqi soldiers I saw had bandanas to hide their faces to avoid their families being murdered…) this would have been seen as the researchers covering up for the Bush White House until after the election and I am convinced my Iraqi co-investigators would have been killed. Given that Kerry and Bush had the same attitude about invading and similar plans for how to proceed, I never thought it would influence the election and the investigators never discussed it with each other or briefed any political player.
C) if you have information about how and why people in New Orleans were dying today, would you rush to release it? The Falluja downfall happened just one week after the study came out and whether you believe the 500 or the 1600 or the 3600 estimates of associated Iraqi deaths, that alone was probably more than will occur from this moment on due to Katrina.
So, we rushed to get it out, I do not understand why the ‘study's scientific neutrality’ is influenced or the likelihood that the sample was valid, the analysis fair… What does neutrality mean? Do people who publish about malaria deaths need to be neutral about malaria?
Yours in confusion and disgust,
Les Roberts<ref></ref>

Roberts, for his part, views critics of his study as motivated more by politics than by science: "It is odd that the logic of epidemiology embraced by the press every day regarding new drugs or health risks somehow changes when the mechanism of death is their armed forces."<ref>Media Alert: Burying The Lancet, Media Lens, September 5, 2005</ref>

[edit] Lancet publications related to criticisms

  • November 20, 2004. Criticism and suggestions by peer reviewer Sheila M Bird, MRC Biostatistics Unit, Cambridge CB2 2SR, UK, chair of the Royal Statistical Society's Working Party on Performance Monitoring in the Public Services. Calls scientific method "generally well described and readily repeatable", but says "[p]articular attention is needed to the methodology for randomly selecting the location(s) of cluster(s) within governorates. Roberts and colleagues describe this rather too succinctly". Suggests additional information be included so that more precise multipliers (to obtain the final estimate) can be applied. Discusses an example hypothetical circumstance incorporating said information, regarding airstrike deaths and collateral damage, under which overcounting could occur due to population density variances among cluster representations.<ref></ref>
  • March 26, 2005. Criticism by Stephen Apfelroth, Department of Pathology, Albert Einstein College of Medicine. Criticizes "several questionable sampling techniques that should have been more thoroughly examined before publication" and lists several flaws, including a "fatal" one, that "In such a situation, multiple random sample points are required within each geographic region, not one per 739000 individuals."<ref></ref>
  • March 26, 2005. Response by L Roberts et al to Apfelroth. Acknowledges flaws, but says "the key public-health findings of this study are robust despite this imprecision. These findings include: a higher death rate after the invasion; a 58-fold increase in death from violence, making it the main cause of death; and most violent deaths being caused by air-strikes from Coalition Forces. Whether the true death toll is 90000 or 150000, these three findings give ample guidance towards understanding what must happen to reduce civilian deaths. ...Before publication, the article was critically reviewed by many leading authorities in statistics and public health and their suggestions were incorporated into the paper. The death toll estimated by our study is indeed imprecise, and those interested in international law and historical records should not be content with our study. We encourage Apfelroth and others to improve on our efforts. In the interim, we feel this study, as well as the only other published sample survey we know of on the subject, point to violence from the Coalition Forces as the main cause of death and remind us that the number of Iraqi deaths is certainly many times higher than reported by passive surveillance methods or in press accounts."<ref></ref>

[edit] Other responses to criticism

The Chronicle of Higher Education also published an article contrasting the survey's reception in the popular press with its reception in the scientific community.<ref></ref>

Epidemiologist Klim McPherson writes in the March 12, 2005 British Medical Journal <ref> "Counting the dead in Iraq". By Klim McPherson. British Medical Journal. March 12, 2005.</ref>: "The government rejected this survey and its estimates as unreliable; in part absurdly because statistical extrapolation from samples was thought invalid. Imprecise they are, but to a known extent. These are unique estimates from a dispassionate survey conducted in the most dangerous of epidemiological conditions. Hence the estimates, as far as they can go, are unlikely to be biased, even allowing for the reinstatement of Falluja. To confuse imprecision with bias is unjustified."

[edit] The second study (2006)

A second study by some of the same authors was published in October 2006 in The Lancet. <ref name="lancetOct2006" /> <ref name="6040054.stm"></ref> The second study showed a marked increase in the estimated violent death rate in Iraq during the period between the two studies. Of 629 deaths verified and recorded among a sample of 1,849 households comprising some 12,800 people, 13% took place in the 14 months before the invasion and 87% in the 40 months afterwards. Surveyors requested death certificates for 87% of reported deaths; these were provided in 92% of cases. 47 of the targeted 50 clusters were ultimately used in the study: two were dropped because "miscommunication resulted in clusters not being visited in Muthanna and Dahuk, and instead being included in other Governorates", while insecurity in Wassit resulted in no community being surveyed there.

The study concluded that the mortality rate per 1,000 population per year was 5.5 (95% CI 4.3-7.1) in the pre-invasion period and 13.2 (95% CI 10.9-16.1) in the post-invasion period. The excess mortality rate over the pre-invasion period was therefore 7.8 per 1,000 population per year, with violent death accounting for 92% of the increase. The study also finds a mortality rate that rose throughout the post-invasion period: the excess mortality rate for June 2005-June 2006, at 14.2 (95% CI 8.6-21.5), was nearly 5.5 times that for March 2003-April 2004, at 2.6 (95% CI 0.6-4.7). The 2006 study also provides an estimate for the 18-month period following the invasion (March 2003 through September 2004) of 112,000 deaths (95% CI 69,000-155,000). The authors conclude, "Thus, the data presented here validates our 2004 study, which conservatively estimated an excess mortality of nearly 100,000 as of September, 2004."
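
The headline total can be roughly recovered from the published excess rate. This is a back-of-envelope sketch: the ~26 million mid-period population and the 40-month window are assumptions, and the study's own figure of 654,965 comes from its full model:

```python
excess_rate = 7.8          # excess deaths per 1,000 people per year
population = 26_000_000    # assumed mid-period population of Iraq
years = 40 / 12            # March 2003 to July 2006

excess_total = excess_rate / 1000 * population * years
print(round(excess_total))  # roughly 676,000
```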

The authors described the fact that their estimate is over ten times higher than other estimates such as the Iraq Body Count project (IBC) estimate and US DoD estimates as "not unexpected", stating that this is a common occurrence in conflict situations. They stated, "Aside from Bosnia, we can find no conflict situation where passive surveillance [used in the IBC] recorded more than 20% of the deaths measured by population-based methods [used in the Lancet studies]. ... Between 1960 and 1990, newspaper accounts of political deaths in Guatemala correctly reported over 50% of deaths in years of low violence but less than 5% in years of highest violence." <ref name="lancetOct2006" />

[edit] Criticisms

The Iraq Body Count project (IBC), which compiles a database of reported civilian deaths, has criticised the Lancet's estimate of 600,000 violent deaths.<ref>"Reality checks: some responses to the latest Lancet estimates". By Hamit Dardagan, John Sloboda, and Josh Dougherty. Iraq Body Count project. October 16, 2006.</ref> The IBC argues that the Lancet estimate is suspect "because of a very different conclusion reached by another random household survey, the Iraq Living Conditions Survey 2004 (ILCS), using a comparable method but a considerably better-distributed and much larger sample." IBC also enumerates several "shocking implications" which would be true if the Lancet report were accurate, e.g. "Half a million death certificates were received by families which were never officially recorded as having been issued", and claims that these "extreme and improbable implications", together with the "utter failure of local or external agencies to notice and respond to a decimation of the adult male population in key urban areas", are among the reasons it doubts the study's estimates, such consequences constituting "extreme notions".<ref></ref>

Jon Pedersen of the Fafo Institute,<ref></ref> research director for the ILCS survey, which estimated approximately 24,000 (95% CI 18,000-29,000) war-related deaths in Iraq up to April 2004, expressed reservations about the low pre-war mortality rate used in the Lancet study and about its authors' ability to oversee the interviews properly as they were conducted throughout Iraq. Pedersen has been quoted as saying he thinks the Lancet numbers are "high, and probably way too high. I would accept something in the vicinity of 100,000 but 600,000 is too much." <ref name="washingtonpost" /> Both Iraq Body Count and the Lancet authors have noted that the ILCS estimate of 24,000 was roughly twice the Iraq Body Count figure for the same period.<ref></ref> The "100,000" figure Pedersen gives above is roughly double IBC's October 2006 figures, which would make it consistent with his own ILCS estimate scaled up according to the IBC timeline.

Debarati Guha-Sapir, director of the Centre for Research on the Epidemiology of Disasters in Brussels, was quoted in an interview as saying that Burnham's team had published "inflated" numbers that "discredit" the process of estimating death counts. "Why are they doing this?" she asks. "It's because of the elections."<ref></ref> However, another interviewer a week later paints a more measured picture of her criticisms: "She has some methodological concerns about the paper, including the use of local people — who might have opposed the occupation — as interviewers. She also points out that the result does not fit with any she has recorded in 15 years of studying conflict zones. Even in Darfur, where armed groups have wiped out whole villages, she says that researchers have not recorded the 500 predominately violent deaths per day that the Johns Hopkins team estimates are occurring in Iraq. But overall Guha-Sapir says the paper contains the best data yet on the mortality rate in Iraq."<ref></ref>

Professors Sean Gourley and Neil Johnson of the physics department at Oxford University and Professor Michael Spagat of the economics department of Royal Holloway, University of London, claimed the methodology of the study was fundamentally flawed by what they term "main street bias". They claimed the sampling methods used "will result in an over-estimation of the death toll in Iraq" because "by sampling only cross streets which are more accessible, you get an over-estimation of deaths." <ref>,,1930002,00.html</ref>

An article in Science magazine by John Bohannon describes some of their criticisms, as well as some responses from Lancet's lead author Gilbert Burnham: 'The [Lancet] paper indicates that the survey team avoided small back alleys for safety reasons. But this could bias the data because deaths from car bombs, street-market explosions, and shootings from vehicles should be more likely on larger streets, says Johnson. According to Bohannon, Burnham counters that such streets were included and that the methods section of the published paper is oversimplified. Bohannon also alleged that Burnham told Science that he does not know exactly how the Iraqi team conducted its survey; the details about neighborhoods surveyed were destroyed "in case they fell into the wrong hands and could increase the risks to residents." These explanations have infuriated the study's critics. Michael Spagat, an economist at Royal Holloway, University of London, who specializes in civil conflicts, says the scientific community should call for an in-depth investigation into the researchers' procedures. "It is almost a crime to let it go unchallenged," adds Johnson.' <ref></ref>

In a 24 November letter to Science, the authors of the report claimed that Bohannon misquoted Burnham, stating that "in no place does our Lancet paper say that the survey team avoided small back alleys", and that "The methods section of the paper was modified with the suggestions of peer reviewers and the editorial staff. At no time did Burnham describe it to Bohannon as 'oversimplified.'" Bohannon defended his comments as accurate, citing Burnham saying, in response to questions about why the details of selecting "residential streets that did not cross the main avenues" were omitted, that "in trying to shorten the paper from its original very large size, this bit got chopped, unfortunately." In addition, the details which were destroyed refer to the "scraps" of paper on which streets and addresses were written to "randomly" choose households.<ref></ref>

Fred Kaplan of Slate criticized the first Lancet study and has again raised concerns about the second.<ref></ref><ref></ref> Kaplan argues that the second study made some improvements over the first, such as "a larger sample, more fastidious attention to data-gathering procedures, a narrower range of uncertainty", and writes that "this methodology is entirely proper if the sample was truly representative of the entire population—i.e., as long as those households were really randomly selected." He cites the low pre-war mortality estimate and the "main street bias" critique as two reasons for doubting that the sample was truly random, and concludes that the war's human toll is "a question that the Lancet study doesn't really answer".

Madelyn Hicks, a psychiatrist and public health researcher at King's College London in the U.K., says she "simply cannot believe" the paper's claim that 40 consecutive houses were surveyed in a single day. "There is simply not enough time in the day," she says, "so I have to conclude that something else is going on for at least some of these interviews." Households may have been "prepared by someone, made ready for rapid reporting," she says, which "raises the issue of bias being introduced." <ref></ref>

The Lancet estimate also drew criticism from the Iraqi government. Government spokesman Ali Debbagh said, "This figure, which in reality has no basis, is exaggerated".<ref>,22606,20567188-5005962,00.html</ref> Iraq's Health Minister Ali al-Shemari expressed a similar view a month later: "Since three and a half years, since the change of the Saddam regime, some people say we have 600,000" killed, he said. "This is an exaggerated number." <ref></ref>

The US government immediately rejected the study. In a press conference shortly after the article was published, President Bush called the figure "not credible".<ref></ref> He stood by the death-toll figure he had previously given, saying: "Six hundred thousand or whatever they guessed at ... it's not credible." He went on to say that the article's methodology was "pretty well discredited."<ref></ref>

Steven E. Moore, who conducted survey research in Iraq for the Coalition Provisional Authority and was an advisor to Paul Bremer, also ridiculed the Lancet study in a Wall Street Journal editorial. In a piece entitled "655,000 War Dead? A bogus study on Iraq casualties", Moore wrote, "I wouldn't survey a junior high school, no less an entire country, using only 47 cluster points. Neither would anyone else..."<ref></ref>

[edit] Responses to criticisms

In a Democracy Now! interview, study co-author Les Roberts defended the methodology by noting that the method is the standard used in poor countries, that the same method was used by the US government following wars in Kosovo and Afghanistan, and that the US government's Smart Initiative program is spending millions of dollars per year teaching NGOs and UN workers how to use the same cluster method for estimating mortality rates. <ref></ref>

The article's authors defended their research, arguing that theirs was the only active survey of the death toll, and that active surveillance is more accurate than passively counting reported deaths. <ref name="6040054.stm" /> They cited a number of factors that could lead to smaller figures from other sources, for example the cultural tendency for bodies to be buried within 24 hours of death, and argued that the sources of bias in their own study push the figure down.

In a letter to The Age, 27 epidemiologists and health professionals defended the methods of the study, writing that the study's "methodology is sound and its conclusions should be taken seriously." <ref>letter to The Age</ref>

A Reuters article reports on other researchers, epidemiologists, professors, and physicians who have defended the study. For example, this quote from the article:

"Over the last 25 years, this sort of methodology has been used more and more often, especially by relief agencies in times of emergency," said Dr. David Rush, a professor and epidemiologist at Tufts University in Boston. <ref>"Iraq death rate estimates defended by researchers". By Deena Beasley. Reuters. Oct. 21, 2006. Article is also here.</ref>

Sir Richard Peto, Professor of Medical Statistics and Epidemiology at the University of Oxford, described the 2006 report as "statistically valid" in an interview on BBC television <ref name="Newsnight"></ref>.

Dr. Ben Coghlan, an epidemiologist in Melbourne, Australia, writes: "The US Congress should agree: in June this year [2006] they unanimously passed a bill outlining financial and political measures to promote relief, security and democracy in the Democratic Republic of Congo. The bill was based in part on the veracity of a survey conducted by the Burnet Institute (Melbourne) and the International Rescue Committee (New York) that found 3.9 million Congolese had perished because of the conflict. This survey used the same methodology as Burnham and his associates. It also passed the scrutiny of a UK parliamentary delegation and the European Union." <ref>Coghlan, Ben, "Gut reaction aside, those on the ground know Iraq reality", Centre for International Health, 31 October 2006</ref> Burnham is one of the authors of both of the Lancet studies.

From a Washington Post article <ref name="washingtonpost"> "Is Iraq's Civilian Death Toll 'Horrible' -- Or Worse?". By Jefferson Morley, Washington Post, October 19, 2006.</ref>:

"The numbers do add up," said Daniel Davies, a stockbroker and blogger for the Guardian. He argued that the sample of 1,849 households interviewed by Iraqi doctors working for the JHU research team was as large as that used by political pollsters.

An October 16, 2006 MediaLens article quotes many health experts, epidemiologists, biostatistics experts, polling experts, etc. who approve of the Lancet study and methodology. <ref name="medialensdebate">"Democracy and Iraq - Killing debate". MediaLens. October 18, 2006.</ref> For example:

John Zogby, whose New York-based polling agency, Zogby International, has done several surveys in Iraq since the war began, said: "The sampling is solid. The methodology is as good as it gets. It is what people in the statistics business do." ...
Professor Sheila Bird of the Biostatistics Unit at the Medical Research Council said: "They have enhanced the precision this time around and it is the only scientifically based estimate that we have got where proper sampling has been done and where we get a proper measure of certainty about these results."

In an October 31, 2006 MediaLens article, Lancet study co-author Les Roberts responded to several questions on the report, concluding that: "Of any high profile scientific report in recent history, ours might be the easiest to verify. If we are correct, in the morgues and graveyards of Iraq, most deaths during the occupation would have been due to violence. If Mr. Bush's '30,000 more or less' figure from last December is correct, less than 1 in 10 deaths has been from violence. Let us address the discomfort of Mr. Moore and millions of other Americans, not by uninformed speculation about epidemiological techniques, but by having the press travel the country and tell us how people are dying in Iraq." <ref name=lancet_co_author> "Lancet report co-author responds to questions". MediaLens. October 31, 2006.</ref>

UNDP ILCS study compared to Lancet study

The Iraq Body Count project (IBC) records civilian deaths reported by English-language media, including all civilian deaths due to coalition military action, the insurgency, or increased criminal violence<ref></ref>. The IBC death count at the time the October 2006 Lancet study was released was between 43,546 and 48,343, roughly 7% of the Lancet estimate. Some of this difference is explained by the fact that the Lancet study estimated all "excess" deaths from violent and non-violent causes alike, and included combatants as well as civilians. The IBC also states that its count is low due to its strict reliance on media reports. However, IBC believes some of the difference may also be explained by the Lancet having overestimated, citing the lower ILCS estimate as one reason to suspect this.

IBC illustrated several of what it calls "the main data that are relevant to a comparative assessment of" the ILCS study and the 2004 Lancet study. It points to, for example, a much larger number of clusters (2,200 for ILCS vs. 33 for Lancet) and a much denser sampling rate (1 in 200 for ILCS vs. 1 in 3,000 for Lancet).<ref></ref> The 2006 Lancet study is somewhat larger than the first (47 clusters instead of 33, and a denser sampling rate), but a similar comparison using the 2006 Lancet study would still show large disparities favoring ILCS on the indicators listed by IBC. Such a comparison would also show a wider disparity between the two estimates, as the 2006 Lancet study estimated a higher number of violent deaths than did the 2004 study over the same period.

Lancet authors draw a different kind of comparison. From Appendix C of the 2006 Lancet study supplement <ref name="Lancet supplement" /> there is this concerning the UNDP's 2004 Iraq Living Conditions Survey (ILCS):

"Working for the U.N. Development Program [UNDP], the highly regarded Norwegian researcher Jon Pederson led a survey that recorded between 18,000 and 29,000 violent deaths during the first year of occupation. The survey was not focused on deaths, but asked about them over the course of lengthy interviews that focused on access to services. While this was more than twice the rate recorded by IBC [Iraq Body Count project] at the time, Pederson expressed concern for the completeness and quality of the data in a newspaper interview last year. The surveys reported in The Lancet were focused solely on recording deaths and count about two and a half times as many excess deaths from all causes over the same period."

From an October 19, 2006 Washington Post article <ref name="washingtonpost" /> there is this:

"In a telephone interview, Jon Pedersen, research director for the 2004 study, said several factors probably account for researchers' different findings. One key issue is how researchers extrapolate from the deaths identified in their field research to a death toll for the whole country. Pedersen noted that the Lancet study is based on a pre-invasion mortality rate of 5.5 deaths per thousand people [per year]. The U.N., he said, used the figure of 9 deaths per thousand. Extrapolating from the lower pre-invasion mortality rate would yield a greater increase in post-invasion deaths, he noted."

The difference between the two baseline rates is 3.5 deaths/1,000/year. The Lancet study used 26,100,000 as the population of Iraq. <ref name="Lancet supplement" /> Applied to that population, a difference of 3.5 deaths/1,000/year amounts to 91,350 deaths per year (3.5 × 26,100).
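The arithmetic above can be checked with a few lines of Python, a minimal sketch using only the figures quoted in this section:

```python
# Effect of the difference in assumed baseline (pre-invasion) mortality rates.
population = 26_100_000  # Iraq population figure used by the Lancet study
lancet_baseline = 5.5    # deaths/1,000/year (Lancet studies)
ilcs_baseline = 9.0      # deaths/1,000/year (UN figure cited by Pedersen)

# Annual deaths implied by the gap between the two baselines
gap = ilcs_baseline - lancet_baseline           # 3.5 deaths/1,000/year
annual_deaths_gap = gap * population / 1000
print(int(annual_deaths_gap))                   # 91350
```

This is why the choice of baseline matters: a higher assumed pre-invasion rate absorbs tens of thousands of deaths per year that would otherwise be counted as "excess".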

Both the 2004 <ref name="lancet2004" /> and 2006 Lancet studies <ref name="lancetOct2006" /> <ref name="Lancet supplement" /> derived the 2002 baseline pre-invasion mortality rate from their surveys of deaths in the households they interviewed. They recorded the dates of the deaths and used the 2002 death total to calculate the 2002 mortality rate. The 2004 Lancet study surveyed 988 households; the 2006 study surveyed 1,849. Even though the two surveys interviewed different sets of households across Iraq, they arrived at very similar 2002 mortality rates. From the 2006 Lancet article: "The striking similarity between the 2004 and 2006 estimates of pre-war mortality diminishes concerns about people’s ability to recall deaths accurately over a 4-year period." <ref name="lancetOct2006" />
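The crude-rate calculation described above can be sketched as follows; the death and person-year counts here are hypothetical round numbers for illustration, not the studies' actual tallies:

```python
# Crude mortality rate from a retrospective household survey:
# deaths recalled per 1,000 person-years of observation.
def crude_rate(deaths, person_years):
    return deaths * 1000 / person_years

# Hypothetical illustration: 70 recalled deaths over 12,700
# person-years of pre-invasion recall gives a rate near 5.5/1,000/year.
print(round(crude_rate(70, 12_700), 1))  # 5.5
```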

Here is an excerpt from the supplement <ref name="Lancet supplement" /> to the 2006 Lancet study:

"For the purpose of analysis, the 40 months of survey data were divided into three equal periods—March 2003 to April 2004; May 2004 to May 2005, and June 2005 to June 2006. Following the invasion the death rate rose each year."
  • "Pre-invasion: 5.5 deaths/1,000/year
  • March 2003-April 2004: 7.5 deaths/1,000/year
  • May 2004-May 2005: 10.9 deaths/1,000/year
  • June 2005-June 2006: 19.8 deaths/1,000/year
  • Overall post-invasion: 13.2 deaths/1,000/year"
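A back-of-envelope check, assuming a constant population, shows how these crude rates translate into an excess-death total close to the study's headline figure. The study itself used a person-months estimator, so this is only an approximation:

```python
# Approximate excess deaths implied by the published crude rates.
population = 26_100_000   # population figure from the Lancet supplement
baseline = 5.5            # pre-invasion deaths/1,000/year
post_invasion = 13.2      # overall post-invasion deaths/1,000/year
years = 40 / 12           # March 2003 to June 2006, about 3.33 years

excess = (post_invasion - baseline) * population / 1000 * years
print(round(excess))      # roughly 670,000; the study estimated 654,965
```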

The ILCS baseline 2002 mortality rate of 9 deaths/1,000/year is higher even than the rate the Lancet studies measured for the first post-invasion period, March 2003 to April 2004 (7.5 deaths/1,000/year).

The ILCS asked about deaths during the course of a lengthy interview on the household's living conditions. In the three main ILCS documents (in PDF form), all the war-related death information is contained in six paragraphs on page 54 of the analytical report. <ref></ref> <ref name="analytical"></ref> It states: "The ILCS data has been derived from a question posed to households concerning missing and dead persons during the two years prior to the survey. Although the date was not asked for, it is reasonable to suppose that the vast majority of deaths due to warfare occurred after the beginning of 2003."

The responses below by Lancet study co-author Les Roberts (LR) to two ILCS-related questions are from an October 31, 2006 MediaLens article <ref name=lancet_co_author />. Questions 9 and 10:

9. Lancet 2 found a pre-invasion death rate of 5.5 per 1,000 people per year. The UN has an estimate of 10? Isn't that evidence of inaccuracy in the study?
LR: The last census in Iraq was a decade ago and I suspect the UN number is somewhat outdated. The death rate in Jordan and Syria is about 5. Thus, I suspect that our number is valid. Note that if we are somehow under-detecting deaths, then our death toll would have to be too low, not too high, both because a) we must be missing a lot, and b) the ratio of violent deaths to non-violent deaths is so high.
I find it very reassuring that both studies found similar pre-invasion rates, suggesting that the extra two years of recall did not dramatically result in under-reporting, a problem recorded in Zaire and Liberia in the past.
10. The pre-invasion death rate you found for Iraq was lower than for many rich countries. Is it credible that a poor country like Iraq would have a lower death rate than a rich country like Australia?
LR: Yes. Jordan and Syria have death rates far below that of the UK because the population in the Middle-east is so young. Over half of the population in Iraq is under 18. Elderly populations in the West are a larger part of the population profile and they die at a much higher rate.

In the article Les Roberts has this to say about the ILCS (Jon Pederson) method of recording deaths: "His group conducted interviews about living conditions, which averaged about 82 minutes, and recorded many things. Questions about deaths were asked, and if there were any, there were a couple of follow-up questions. A) I suspect that Jon's mortality estimate was not complete. I say this because the overall non-violent mortality estimate was, I am told, very low compared to our 5.0 and 5.5/ 1000 /year estimates for the pre-war period which many critics (above) claim seems too low. Jon sent interviewers back after the survey was over to the same interviewed houses and asked just about <5 year old deaths. The same houses reported ~50% more deaths the second time around. In our surveys, we sent medical doctors who asked primarily about deaths. Thus, I think we got more complete reporting."

References

<references />

External links

First study
Second study
