This study, purporting to show that greater numbers of vaccines in the first year of life are associated with greater risk of infant mortality, crossed my radar recently. I thought I’d take a moment to look at it as part of a series of posts on the emotionally fraught relationship between science and our everyday lives. This post is the one that has the most to do with parenting; the ones that follow will be more about a health scare I had recently and some of the changes it’s wrought on our life.
Neil Miller and Gary Goldman claim to have found a correlation, on a population scale, between the number of vaccines children receive in the first year of life in a given country and that country’s infant mortality rate. (Full text of the paper in PDF here.) Their work is riddled with conceptual and procedural problems, and of course whenever someone asserts a correlation without establishing a concrete causal mechanism, we should be skeptical. (Using the phrase “synergistic toxicity” over and over again does not count as establishing a causal mechanism.) But since this kind of “research” frequently gets turned into news items that get circulated among worried parents trying to make good decisions for their kids, I thought I’d delve into it a little bit, leaning gently on a couple of excellent analyses from David Gorski at Science-Based Medicine and Catherina at Just The Vax.
A summary of the problems addressed by Catherina and Dr. Gorski:
1. The paper is inconsistent in its definition of a “dose.” Catherina lays it out neatly:
[T]he way Miller and Goldman are counting vaccines is completely arbitrary and riddled with mistakes.
Arbitrary: they count number of vaccines in US bins (DTaP is one, hib is separate) and non-specific designations (some “polio” is still given as OPV in Singapore), rather than antigens. If they did that, Japan, still giving the live bacterial vaccine BCG, would immediately go to the top of the list. That wouldn’t fit the agenda, of course. But if you go by “shot” rather than by antigen, why are DTaP, IPV, hepB and hib counted as 4 shots for example in Austria, when they are given as Infanrix hexa, in one syringe?
Mistakes: The German childhood vaccination schedule recommends DTaP, hib, IPV AND hepB, as well as PCV at 2, 3 and 4 months, putting them squarely into the 21 – 23 bin. The fourth round of shots is recommended at 11 to 14 months, and MenC, MMR and Varicella are recommended with a lower age limit of 11 months, too, which means that a number of German kids will fall into the highest bin, at least as long as you count the Miller/Goldman way.
(If you’re bored and want to check their work, here are the vaccine schedules from Europe that Miller and Goldman claim to have relied on. They cite UNICEF’s website as their source for non-European countries, although, since they don’t provide a URL for a specific page on the site, I’ve been unable to find that data.)
The definition of a “dose” is critically important here. If you want to entertain the hypothesis that vaccines are in some way “toxic” because of, for example, preservatives or other foreign material, then the number of antigens matters less than the number of shots or vials. On the other hand, if you want to say that the antigens are the toxic substance, then as Catherina points out you have to account for different levels of antigens in different types of vaccines for the same diseases. Miller and Goldman’s vague and confusing approach does little to tease out or account for these differences.
2. Countries don’t all count dead infants the same way. Dr. Gorski quotes Bernardine Healy, former director of the NIH:
[I]t’s shaky ground to compare U.S. infant mortality with reports from other countries. The United States counts all births as live if they show any sign of life, regardless of prematurity or size. This includes what many other countries report as stillbirths. In Austria and Germany, fetal weight must be at least 500 grams (1 pound) to count as a live birth; in other parts of Europe, such as Switzerland, the fetus must be at least 30 centimeters (12 inches) long. In Belgium and France, births at less than 26 weeks of pregnancy are registered as lifeless. And some countries don’t reliably register babies who die within the first 24 hours of birth. Thus, the United States is sure to report higher infant mortality rates. For this very reason, the Organization for Economic Cooperation and Development, which collects the European numbers, warns of head-to-head comparisons by country.
Miller and Goldman claim to have accounted for these differences and quote a CDC paper which says that “[I]t appears unlikely that differences in reporting are the primary explanation for the United States’ relatively low international ranking.” Of course, this statement in itself is quite vague, giving no idea what percentage of the difference in rankings the reporting problem accounts for. But it also raises the question, “What is the primary explanation?” The same CDC paper gives a perfectly reasonable answer, to which we shall return later.
In the meantime, this paper commissioned by the Congressional Budget Office on the subject of America’s seemingly awful infant mortality stats provides more detail on the difficulties of accurately comparing IMRs:
In countries where physicians are more aggressive about attempting to resuscitate very premature newborns — of which the United States is probably the leading example — extremely small neonates are more likely to be classified as live births than in countries with less aggressive resuscitation policies. Thus, for example, if little attempt is made to resuscitate newborns weighing less than 500 grams (1 pound, 2 ounces), these births may be classified as fetal deaths and not be included in either the live birth or the infant mortality statistics. By contrast, when attempts are made to resuscitate the tiniest newborns, they are more likely to be classified as live births, although most will subsequently die and then be included in the infant mortality statistics.
(We’ll get back to this idea of aggressive treatment in the final section.)
3. Miller and Goldman selected data from a single year, 2009. But why? Surely an analysis over multiple years, or multiple decades, would be more useful. We could be more certain that the IMRs in 2009 weren’t some sort of statistical fluke. And we could watch IMRs move (or not) according to changes in vaccination schedules. As Catherina points out,
For example, in the early 1980s, Germany’s infant mortality was about 5 times as high (10,000 infants died per year) as it is today (2,000 died in 2009 with approximately the same birth rate), however (in Miller’s and Goldman’s twisted logic), the vaccination schedule contained far fewer vaccines in the first year (essentially just DT and polio, since the whole cell pertussis was not given between 1974 and 1991, the aP not yet introduced, the MMR given in year 2, no hib, nor hepB, nor PCV given either), while Germany was already very much a “developed country”.
4. Miller and Goldman do not consider the whole world. It’s tempting to say that they’re on stronger ground here — that you want to compare wealthy, industrialized countries to other wealthy, industrialized countries. But they don’t seem to be particularly interested even in other industrialized and/or wealthy countries whose IMRs fall below that of the U.S. — say, countries in Eastern Europe, or the wealthy Arab states — to see whether their correlation holds up further down the list. Gorski:
[S]ince the focal point of the analysis seems to be the U.S., which, according to Miller and Goldman, requires more vaccine doses than any other nation, then it would make sense to look at the 33 nations with worse IMRs than the U.S.
Be that as it may, I looked at the data myself and played around with it. One thing I noticed immediately is that the authors removed four nations, Andorra, Liechtenstein, Monaco, and San Marino, the justification being that because they are all so small, each nation only recorded less than five infant deaths. Coincidentally, or not, when all the data are used, r² = 0.426, whereas when those four nations are excluded, r² increases to 0.494, meaning that the goodness of fit improved.
In other words, even among the countries above the U.S., Miller and Goldman cherry-pick the data, dropping small countries that don’t make the data fit the way they want it to. (Four countries out of 33 is roughly an eighth of the data excluded, in case you were counting.)
Are these decisions reasonable? Would including Russia or Andorra have made the data clearer, or muddied the waters? I’m not sure, but in light of other methodological decisions, this is questionable at best.
5. What’s with the grouping? Why sort the countries into groups based on the number of vaccines, and then plot the average IMR of each group, instead of just plotting all the data points separately? Gorski again:
[F]or some reason the authors, not content with a weak and not particularly convincing linear relationship in the raw data, decided to do a little creative data manipulation and divide the nations into five groups based on number of vaccine doses, take the means of each of these groups, and then regraph the data. Not surprisingly, the data look a lot cleaner, which was no doubt why this was done, as it was a completely extraneous analysis. As a rule of thumb, this sort of analysis will almost always produce a much nicer-looking linear graph, as opposed to the “star chart” in Figure 1. Usually, this sort of data massaging is done when a raw scatterplot doesn’t produce the desired relationship.
Indeed. Of particular note is Group 2, countries with a vaccination schedule of 15-17 “doses” in the first year. Group 2 includes only 5 countries, and one of those countries is Singapore, which has the best IMR in the world (2.31) and calls for its infants to receive 17 vaccine doses in their first year, according to Miller and Goldman’s counting. Because Group 2 is so small, Singapore is clearly dragging down the average IMR of the whole group — from 4.30 to 3.90. Take out Singapore, which is clearly an enormous outlier, and Group 2 has about the same IMR as Group 3, which makes the linear relationship a lot less neat. Also, 4.30 is very similar to Denmark’s 4.34, and Denmark only requires 12 vaccine doses in the first year. And speaking of Singapore: if this linear correlation based on vaccination schedules is so strong, why does Singapore have such a drastically low IMR with 17 vaccine doses in the first year, when Italy and San Marino have drastically high IMRs (5.51 and 5.53, respectively) with only a single dose more (18) per year? Naturally, there will be outliers in any linear regression, but it seems that when you get done smoothing out the outliers here by dropping data points and sorting the data into bins, you’ve essentially hidden half the statistical reality.
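How flattering this bin-and-average step is turns out to be easy to demonstrate. Here is a minimal Python simulation with entirely invented numbers — a weak, noisy trend across 30 hypothetical countries, sorted into five bins the way Miller and Goldman sorted theirs. Averaging within bins cancels out the noise while preserving the trend, so the binned fit almost always beats the raw one:

```python
import random
import statistics

def r_squared(xs, ys):
    """Pearson r-squared for paired data."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (vx * vy)

def raw_vs_binned(seed, n=30, n_bins=5):
    """Simulate a weak noisy trend, then bin-and-average it."""
    rng = random.Random(seed)
    xs = [rng.uniform(12, 26) for _ in range(n)]    # invented "dose counts"
    ys = [0.2 * x + rng.gauss(0, 3) for x in xs]    # noisy invented outcome
    raw = r_squared(xs, ys)
    # Sort by x, split into equal-sized bins, and average each bin,
    # mimicking the paper's group-means step.
    pairs = sorted(zip(xs, ys))
    size = n // n_bins
    bx = [statistics.mean(p[0] for p in pairs[i:i + size])
          for i in range(0, n, size)]
    by = [statistics.mean(p[1] for p in pairs[i:i + size])
          for i in range(0, n, size)]
    return raw, r_squared(bx, by)

results = [raw_vs_binned(s) for s in range(50)]
wins = sum(1 for raw, binned in results if binned > raw)
print(f"binned r^2 beat raw r^2 in {wins}/50 simulations")
```

Nothing here models vaccines or mortality; the point is purely statistical. Given almost any scattered data with a faint trend, grouping and averaging will manufacture a cleaner-looking line.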
6. They fall prey to the “ecological fallacy.” Gorski once more:
The ecological fallacy can occur when an epidemiological analysis is carried out on group level data rather than individual-level data. In other words, the group is the unit of analysis. Clearly, comparing vaccination schedules to nation-level infant mortality rates is the very definition of an ecological analysis.
In other words, measuring correlations between variables on the population level tells you nothing about the correlation on an individual level, and indeed is likely to vastly overstate the likelihood of such a correlation. For example, let us suppose that Italians have fewer heart attacks than do Englishmen, and yet eat pasta at a much greater rate. Can we conclude that pasta is preventive against heart attacks? No, because, among other things, you haven’t demonstrated that the pasta-eating individuals in the Italian population are the ones getting fewer heart attacks. Perhaps there’s a smaller subset of Italians who eat hardly any pasta at all, yet get plenty of vigorous exercise, and therefore drag down the national average incidence of heart disease.
Similarly, if you want to find out if a heavier vaccine schedule in the first year correlates with higher infant mortality — or, to be even more specific, whether it correlates with higher rates of SIDS, since Miller and Goldman argue that SIDS and unexplained deaths caused by vaccine “toxicity” are probably the real culprit here — you should do a study following outcomes for individual kids who receive different schedules of vaccines. Trying to track a phenomenon, if there is one, by comparing different whole populations is both inefficient and brutally error-prone.
To their credit, Miller and Goldman attempt to address this problem in a section titled “Ecological Bias.” To their discredit, their explanation is simply awful:
Although most of the nations in this study had 90%–99% of their infants fully vaccinated, without additional data we do not know whether it is the vaccinated or unvaccinated infants who are dying in infancy at higher rates. However, respiratory disturbances have been documented in close proximity to infant vaccinations, and lethal changes in the brainstem of a recently vaccinated baby have been observed. Since some infants may be more susceptible to SIDS shortly after being vaccinated, and babies vaccinated against diarrhea died from pneumonia at a statistically higher rate than non-vaccinated babies, there is plausible biologic and causal evidence that the observed correlation between IMRs and the number of vaccine doses routinely given to infants should not be dismissed as ecological bias.
So after admitting that they have in no way correlated these higher rates of infant mortality with actual vaccination on the individual level, Miller and Goldman attempt to razzle-dazzle the reader with a lot of scary-sounding stuff. But, for example, the “lethal changes in the brainstem” occurred in a single child after a vaccination — to infer anything from that would be a classic case of “post hoc, ergo propter hoc” reasoning. I’m sure you can find a single case of a child who died of bullet wounds after being vaccinated, too.
And the babies who died of pneumonia at a statistically significantly higher rate after receiving the rotavirus vaccine? That was in a single study out of eight studies conducted on the safety of Rotarix, the vaccine in question. When you compile all eight studies, the relative risk of pneumonia between Rotarix and placebo is exactly 1, according to this exhaustive FDA briefing (PPT — skip to slide 59).
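The arithmetic behind pooling is simple enough to sketch. The trial numbers below are invented, and this crude sum-the-events pooling is only a stand-in for a proper meta-analysis (which would weight studies, e.g. with a Mantel–Haenszel estimator) — but it shows how a single small trial can flash an alarming relative risk that washes out across the full set:

```python
def relative_risk(events_vax, n_vax, events_placebo, n_placebo):
    """Risk in the vaccine arm divided by risk in the placebo arm."""
    return (events_vax / n_vax) / (events_placebo / n_placebo)

# Invented trial results: (vaccine events, vaccine n, placebo events, placebo n)
trials = [
    (12, 1000, 6, 1000),    # one small trial: RR = 2.0, looks alarming
    (50, 5000, 53, 5000),   # larger trials hover around RR = 1
    (40, 4000, 43, 4000),
]

single = relative_risk(*trials[0])

# Crude pooling: sum events and denominators across all trials.
ev, nv, ep, npl = (sum(t[i] for t in trials) for i in range(4))
pooled = relative_risk(ev, nv, ep, npl)
print(f"small trial RR = {single:.1f}, pooled RR = {pooled:.2f}")
```

With these invented numbers the small trial shows a doubled risk while the pooled estimate is exactly 1.0 — the same shape as the Rotarix story, where one study’s signal disappeared once all eight were combined.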
I’m not going to bother batting at the other examples, but you see where this is going. And the problem of the ecological fallacy is probably the most damning, because even if all the other problems in this paper were fixed, this alone would be enough to keep it from making any sense as science.
Finally, I’d like to discuss that CDC report I promised to come back to, and pile on a criticism of my own that neither Catherina nor Dr. Gorski really dealt with. Namely, we know the risk factors that bring the U.S.’s IMR up. Alice Park discusses them in a 2009 article for Time:
Starting in 2008, the March of Dimes began tracking three of the major contributors to the high preterm birth rate — lack of insurance among women of childbearing age, rates of cigarette smoking and the rate of babies born preterm, but at the tail end of pregnancy, between 34 and 36 weeks….
By far the biggest contributor to the high premature birth rate is the rate of so-called late-preterm births. About 70% of babies born too early in the U.S. are born between 34 and 37 weeks. There are many reasons for these early deliveries, making it particularly difficult to target one or even a few factors and address them head-on. The increase in multiples — twins, triplets or more — is one contributor. The rise in assisted reproductive technologies, such as in vitro fertilization, is another; these techniques are associated with both an increased risk of multiples as well as a higher risk of premature delivery, even of singletons….
This is relatively undisputed, as far as I can tell from reading through literature on America’s woeful infant mortality rate. What do Miller and Goldman make of this? From the paper:
Preterm birth rates in the United States have steadily increased since the early 1980s…. Preterm babies are more likely than full-term babies to die within the first year of life. About 12.4% of US births are preterm…. Preventing preterm births is essential to lower infant mortality rates. However, it is important to note that some nations such as Ireland and Greece, which have very low preterm birth rates (5.5% and 6%, respectively) compared to the United States, require their infants to receive a relatively high number of vaccine doses (23) and have correspondingly high IMRs. Therefore, reducing preterm birth rates is only part of the solution to reduce IMRs.
There are several squirrelly points packed into this paragraph. First, note the phrase “within the first year of life,” which, while part of a technically correct definition of infant mortality, leads us to the question: why are we counting all deaths in the first year in this study anyway? Surely the correct measure of whether vaccines influence mortality would exclude all deaths prior to the first vaccine — i.e., all deaths that occur at or immediately after birth.
Second, cherry-picking Ireland and Greece as countries with low preterm birth rates and high IMRs, and then imputing those figures to vaccination schedules, is obviously putting the cart before the horse. If you’re trying to draw correlations of this kind, why not include a table of preterm birth rates and use it to factor out that difference in IMRs before trying to measure a difference attributable to vaccine schedules? I mean, if you have those preterm birth rates handy, which Miller and Goldman seem to, although they don’t provide a footnote for the Ireland and Greece numbers.
Anyway, here’s an interesting graphic from that CDC paper Miller and Goldman cited to show that reporting differences did not account for the bulk of the difference in IMRs. It shows what the US infant mortality rate would look like if we had Sweden’s level of preterm births:
What does this tell us? It tells us that, exactly as the CDC, the CBO, and the March of Dimes have concluded, much of the difference in IMR between the U.S. and other countries can be attributed to pre-term birth rates. And what does that tell us about this supposed correlation between vaccination and IMR?
It tells us that having an aggressively interventionist medical culture in the U.S. leads, somewhat paradoxically, to higher IMR. Remember that many of those preterm births are the result of fertility treatments. And U.S. physicians are more aggressive about attempting to resuscitate very small babies, even though most will die anyway; this leads to a much higher count of live births followed by death than in countries that treat those unbreathing preemies as stillbirths. And aggressive monitoring of fetal health, and a greater willingness to either induce early labor or perform caesareans, may also play a role.
And then there’s this interesting paper from the New England Journal of Medicine that finds that, paradoxically, the rapidly increasing numbers of new neonatal ICUs in the U.S. may be responsible for at least some of the rise in infant morbidity and mortality:
In regions with a greater supply of beds and neonatologists, infants with less serious illness might be more likely to be admitted to a neonatal intensive care unit and might be subjected to more intensive diagnostic and therapeutic measures, with the attendant risks of errors and iatrogenic complications, as well as impaired family–infant bonding.
In short, if there is a correlation between vaccination schedules and IMR — something this paper does not establish — there may be a simple explanation (e.g., a more aggressive approach to medicine overall) that does not require invoking unproven and unexplained “toxicity” in vaccines.
Where does all this leave us, in terms of what I was talking about at the beginning, the relationship between science and our everyday lives? Well, it counsels skepticism, certainly, when “news” of a disturbing “scientific” discovery shows up on parenting forums or in our inboxes. And of course it challenges each of us to become more scientifically literate in our reading — which is why I occasionally undertake these close examinations of scientific subjects related to parenting.
But this process is exhausting. To really delve into this paper, to take it apart and understand it to my own satisfaction, has taken two days and 3500 words. I can’t possibly do this with each piece of scientific information (or misinformation) that comes my way. For the most part, I’m forced to shrug and rely on professionals at the CDC, the FDA, and the doctor’s office to steer me the right way. But what happens when the professionals start to seem untrustworthy or themselves misinformed? What do you do when your need for expert knowledge is undermined by an almost paranoid sense that the experts are not on your side? And how do you avoid going too far in the other direction and falling victim to things like vaccine denialism?
I’ll try to talk more about that in the next couple of entries in this series.