India triumphs over polio

A two-woman vaccination team in Firozabad, Uttar Pradesh (Photo: UNICEF)

From my article on Ars Technica:

In the year since January 13, 2011, India has had zero cases of polio. Previously, India led the world, accumulating over 5,000 cases since 2000. Polio's last victim in India was 18-month-old Rukhsar, a girl in West Bengal who began showing signs of paralysis on this day in 2011. Now, epic immunization efforts have brought global eradication of the disease a giant step closer. Outside India, however, backsliding Pakistan and Nigeria and splotches of polio across Africa have blocked the final stamping out of the disease worldwide...


IBM’s Watson: Portent or Pretense?

Game over.

IBM’s Watson, with a 15-terabyte chunk of human knowledge loaded into it like a Game Boy cartridge and set to hair trigger, poured out a high-precision fact fusillade that left no humans standing in the (aptly-named) Jeopardy. Is machine omniscience upon us?

“I, for one, welcome our new computer overlords,” said defeated Jeopardy champion Ken Jennings. But Jennings has buzzed in too quickly. Watson might point not to the inevitability of artificial intelligence but to its unattainability. Barring unexpected revelations from IBM, Watson represents exquisite engineering work-arounds substituting for fundamental advance.

Questionable progress
In 1999, early question answering systems, including one from IBM, began competing against each other under the auspices of the National Institute of Standards and Technology (NIST). At the time, researchers envisioned a four-step progression, starting with systems batting back facts to simple questions such as “When was Queen Victoria born?” A few tall steps later, programs would stand atop the podium and hold forth on matters requiring a human expert, like “What are the opinions of the Danes on the Euro?”

Getting past step one proved difficult. While naming the grape variety in Chateau Petrus Bordeaux posed little difficulty, programs flailed on follow-up questions like “Where did the winery’s owner go to college?” even though the answer resided in the knowledgebase provided. These contextual questions were deemed “not suitable” by NIST and dropped. The focus remained on simple, unconnected factoids. In 2006, context questions returned—only to be cut again the following year. In the consensus view, such questions were “too hard,” according to Jimmy Lin, a computer science professor at the University of Maryland and a coordinator of the competition.

The entire contest was dropped after 2007. “NIST decides what to push,” explained Lin, “and we were not getting that much out of this…” Progress had turned incremental, like “trying to build a better gas engine,” according to Lin. Question answering wasn’t finished, but it was done. Although James T. Kirk asked the Star Trek computer questions like whether a storm could cause inter-dimensional contact with a parallel universe, actual question answering systems like Watson would be hard pressed to answer NIST level one questions like “Where is Pfizer doing business?”

ABC easy as 1, 2, 3
Kirk spoke to the computer. Watson’s designers opted for text messages—which says a lot. Speech recognition software accuracy reaches only around 80% whereas humans hover in the nineties. Speech software treats language not as words and sentences carrying meaning but as strings of characters following statistical patterns. Seemingly, Moore’s law and more language data should eventually yield accuracy at or conceivably beyond human levels. But although chips have sped up and data abound, recognition accuracy plateaued around 1999 and NIST stopped benchmarking in 2001 for lack of progress to measure. (See my Rest in Peas: the Unrecognized Death of Speech Recognition.)

Much of speech recognition’s considerable success derives from consciously rejecting the deeper dimensions of language. This source of success is now a cause of failure. Ironically, as Watson triggers an existential crisis among humans, computers are struggling to find meaning.

Words are important in language. We’ve had dictionaries for a quarter millennium, and these became machine readable in the last quarter century. But the fundamental difficulties of word meanings have not changed. For his 1755 English dictionary, Samuel Johnson hoped that words, “these fundamental atoms of our speech might obtain the firmness and immutability of… particles of matter…” After nine years’ effort, Johnson published his dictionary but abandoned the idea of a periodic table of words. Concerning language, he wrote: “naked science is too delicate for the purposes of life.”

Echoing Johnson 250 years later, lexicographer Adam Kilgarriff wrote: “The scientific study of language should not include word senses as objects…” The sense of a word depends on context. For example, if Ken Jennings calculates that Oreos and crosswords originated in the 1920s, does calculate mean mathematical computation or judge to be probable? Well, both. Those senses are too fine-grained and need to be lumped together. But senses can also be too coarse. If I buy a vowel on Wheel of Fortune, do I own the letter A? No. This context calls for a finer, even micro-sense.
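
The granularity problem is visible in any machine-readable dictionary. The sketch below is purely illustrative (it is not part of any system discussed here) and assumes the Python nltk package and its WordNet data are installed; it lists the verb senses of “calculate,” which split the computing sense and the judging sense into separate entries a program must choose between.

```python
# A minimal sketch, assuming nltk and its WordNet corpus are installed
# (pip install nltk; then nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

# List the verb senses WordNet assigns to "calculate".
for synset in wn.synsets('calculate', pos=wn.VERB):
    print(synset.name(), '-', synset.definition())

# A claim like "Jennings calculates that Oreos date from the 1920s"
# straddles at least two of these entries (mathematical reckoning vs.
# judging something probable); the inventory forces a choice the
# sentence itself never makes.
```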

The decade of origin for Oreos and crosswords was actually the 1910s—as Watson correctly answered. In what decade will computers understand word meanings? Not soon; perhaps never. Theoretical underpinnings are absent: “[T]he various attempts to provide the concept ‘word sense’ with secure foundations over the last thirty years have all been unsuccessful,” as Kilgarriff wrote more than a decade ago, in 1997. Empirical, philosophy-be-damned approaches were tried the following year.

In 1998, at Senseval-1, researchers tackled a set of 35 ambiguous words. The best system attained 78% accuracy. The next Senseval, in 2001, used more nuanced word definitions, which knocked accuracy down below 70%. It didn’t get up: “[I]t seems that the best systems have hit a wall,” organizers wrote. The wall wasn’t very high. Systems struggled to do better than mindlessly picking the first dictionary sense of a word every time. Organizers acknowledged that most of the efforts “seem to be of little use…” and had “not yet demonstrated real benefits in human language technology applications.”
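
That first-sense baseline is almost trivially simple, which is what made it so hard to beat. The sketch below is an illustration of the idea only, not any Senseval entrant’s actual system; it again assumes nltk and its WordNet data (whose senses are listed roughly in order of frequency) are installed.

```python
# Sketch of the most-frequent-sense baseline: always answer with the
# first-listed dictionary sense and ignore context entirely.
# Assumes nltk and its WordNet corpus are installed.
from nltk.corpus import wordnet as wn

def first_sense(word, pos=None):
    """Return the first-listed WordNet sense for a word, or None."""
    senses = wn.synsets(word, pos=pos)
    return senses[0] if senses else None

print(first_sense('calculate', pos=wn.VERB).definition())
# Context never enters the decision, yet systems that did consult
# context struggled to do much better than this guess.
```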

Disambiguation was dustbinned. Senseval was renamed Semeval and semantic tasks subsumed word sense disambiguation by 2010. Today no hardware/software system can reliably determine the meaning of a given word in a sentence.

That’s imparsable
Belief persists unfazed, however, that language can be solved with statistical methods. “The answer to that question is ‘yes,’” declares an unabashedly partial Eugene Charniak, professor of computer science at Brown University. Whatever the trouble with word meanings, at the sentence level, computer comprehension is quite impressive—thanks to probabilistic models. Charniak has written a parsing program that unfurls a delicate mobile of syntax from the structure of a sentence.

Mobile meaning, hanging in the balance (Diagram: phpSyntaxTree)

Such state-of-the-art parsers spin accurate mobiles about 80% of the time when given sentences from The Wall Street Journal. But feed in a piece of literature, biomedical text or a patent and the parses tangle. Nouns are mistaken for finite verbs in patents; in literary texts, different kinks and knots tug accuracy down to 70%.

Performance droops because the best parsers don’t apply universal rules of grammar. We don’t know what those rules are, or whether they exist. Instead parsers try to reverse-engineer grammar by examining huge numbers of example sentences and generating a statistical model that substitutes for the ineffable principles.
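
The reverse-engineering step can be sketched in a few lines. The following toy example is not Charniak’s parser; it uses NLTK and assumes the small Penn Treebank sample that ships with NLTK’s data (nltk.download('treebank')) is available, reading a few hundred hand-parsed sentences and inducing a probabilistic grammar from whatever rules happen to appear in them.

```python
# Toy sketch of treebank-style grammar induction, not Charniak's parser.
# Assumes nltk is installed along with its 'treebank' sample corpus.
from nltk.corpus import treebank
from nltk.grammar import Nonterminal, induce_pcfg

# Collect grammar productions observed in a few hundred parsed sentences.
productions = []
for tree in treebank.parsed_sents()[:200]:
    productions += tree.productions()

# Estimate rule probabilities from those observed counts.
grammar = induce_pcfg(Nonterminal('S'), productions)
print(grammar.productions()[:10])

# The "rules" are whatever patterns occurred in these example sentences,
# not universal principles; hand the resulting model text from another
# domain and coverage and accuracy fall off, as described above.
```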

That strategy hasn’t worked. Accuracy invariably declines when parsers confront an unfamiliar body of text. The machine learning approach finds patterns no human realistically could, but these aren’t universal. Change the text and the patterns change. That means current parsing technology performs poorly on highly diverse sources like the web.

Progress has gone extinct: parsing accuracy gained perhaps a few tenths of one percent in the last decade. “Squeezing more out of the current approaches is not going to work,” says Charniak. Instead, he concludes, “we need to squeeze more out of meaning.”

Surface features don’t provide a reliable grip on sentence syntax. Word order and parts of speech often aren’t enough. Regular sentences can be slippery, like:

  • President F.W. de Klerk released the ANC men along with one of the founding members of the Pan Africanist Congress (PAC).

Not knowing about apartheid, the parser must guess whether de Klerk and a PAC member together released the ANC men—although the PAC figure was also in prison. The program has no basis for deciding where in the mobile (pictured above) to hang the phrase “along with…”
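
The two competing readings can be made concrete as bracketed trees. The sketch below uses NLTK’s Tree class with hand-simplified, hypothetical bracketings; it shows only the two places the “along with…” phrase can hang, not how a parser would choose between them.

```python
# Two hand-simplified bracketings of the de Klerk sentence, showing the
# attachment decision a parser faces. Assumes nltk is installed.
from nltk import Tree

# Reading 1: "along with..." attaches to the verb phrase, i.e. de Klerk
# and the PAC founder released the ANC men together -- wrong, but
# plausible to a program that knows nothing about apartheid.
high_attachment = Tree.fromstring(
    "(S (NP de_Klerk) (VP (VP released (NP the_ANC_men)) "
    "(PP along_with (NP a_PAC_founder))))")

# Reading 2: "along with..." attaches to "the ANC men", i.e. the PAC
# founder was released as well -- the correct reading.
low_attachment = Tree.fromstring(
    "(S (NP de_Klerk) (VP released (NP (NP the_ANC_men) "
    "(PP along_with (NP a_PAC_founder)))))")

print(high_attachment)
print(low_attachment)
```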

Notice that winning Jeopardy is easier than correctly diagramming some sentences. And Watson provides no help to a parser in need. Questions like, “What are the chances of PAC releasing members of the ANC?” are far too hard, the reasoning power and information required too vast. Watson’s designers likened organizing all knowledge to “boiling the ocean.” They didn’t try. Others are.

Unobtanium
It’s called mining the World Wide Web, and it aims to penetrate to the inner core, the sanctum sanctorum, of meaning.

In the beginning was the word. But the problem of meaning arises in word senses and spreads, as we have seen, to sentences. Errors and misprisions accumulate, fouling higher-level processing. In a paragraph referencing “Mr. Clinton,” “Clinton,” and “she,” programs cannot reliably figure out if “Clinton” refers to Bill Clinton or Hillary Clinton—after 15 years of effort. Perhaps because of this problem, Watson once answered “Richard Nixon” to a clue asking for a first lady.

Evading this error requires understanding the senses of nearby words, that is, solving the unsolved word-sense disambiguation problem. Finding entry into this loop of meaning has been elusive, the tape roll seamless, the thumbnail never catching at the beginning.

Structuring web-based human knowledge promises to break through today’s dreary performance ceiling. Tom Mitchell at Carnegie Mellon University seeks “growing competence without asymptote,” a language learning system which spirals upward limitlessly. Mitchell’s Never Ending Language Learning (NELL) reads huge tracts of the web, every day. NELL’s blade servers grind half a billion pages of text into atoms of knowledge, extracting simple facts like “Microsoft is a company.” Initially, facts are seeded onto a human-built knowledge scaffold. But the idea is to train NELL in this process and enable automatic accretion of facts into ever-growing crystals of knowledge, adding “headquartered in the city Redmond” to facts about Microsoft, for example. Iterate like only computers can and such simple crystals should complexify and propagate.
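
The core move (reading raw text and skimming off simple category facts) can be caricatured in a few lines. The toy pattern matcher below is only in the spirit of NELL’s extractors, not NELL’s actual code; the pattern and the sentences are invented for illustration.

```python
# Toy "X is a Y" fact extractor in the spirit of NELL's pattern-based
# reading -- not NELL's actual code. Pattern and sentences are invented.
import re

SENTENCES = [
    "Microsoft is a company headquartered in Redmond.",
    "F.W. de Klerk is a former president of South Africa.",
    "Oreo is a cookie introduced in 1912.",
]

# One seed pattern: "<Name> is a <category> ..."
PATTERN = re.compile(r"^([A-Z][\w\. ]*?) is an? ([a-z]+)")

facts = []
for sentence in SENTENCES:
    match = PATTERN.match(sentence)
    if match:
        facts.append((match.group(1), "is_a", match.group(2)))

print(facts)
# The de Klerk "fact" comes out as ('F.W. de Klerk', 'is_a', 'former'):
# the pattern grabbed the wrong word as the category. Errors like this
# drag accuracy down, and once believed they seed further errors.
```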

But instead NELL slumps lifelessly soon after human hands set it on its feet. Accuracy of extracted facts drops from an estimated 90% to 57%. Human intervention became necessary: “We had to do something,” Mitchell told fellow researchers last year. The interventions became routine and NELL dependent on humans. NELL employs machine learning, but knowledge acquisition might not be machine learnable: “NELL makes mistakes that lead to learning to make additional mistakes,” as NELL’s creators observed.

The program came to believe that F.W. de Klerk is a scientist not a former president of South Africa—providing little help in resolving ambiguous parsing problems. At the same time, NELL needs better parsing to mine knowledge more accurately: “[W]e know NELL needs to learn to parse,” Mitchell wrote in email. This particular Catch-22 might not be fundamentally blocking. But if NELL can’t enhance the performance of lower-level components, those components might clamp a weighty asymptote on NELL’s progress.

Represent, represent
An older, less surmountable, perhaps impossible problem faced by NELL is how to arrange facts, assuming they can be made immaculate. Facts gleaned by NELL must be pigeon-holed into one of just 270 categories—tight confines for all of knowledge. Mitchell wants NELL to be able to expand these categories. However, while incorrect individual facts might compromise NELL, getting categories wrong would be fatal.

But no one knows how to write a kind of forensic program that accurately reconstructs a taxonomy from its faint imprint in text. Humans manage, but only with bickering. Even organizing knowledge in relatively narrow, scientific domains poses challenges: small molecules in biology, for example. Some labs just isolate and name the different species. Other researchers with different interests represent a molecule with its weight and the weights of its component parts, information essential to studying metabolism. However, 2D representations are needed for yet another set of purposes (reasoning about reaction mechanisms), whereas docking studies call for 3D representations, and so on.

What a thing is or, more specifically, how you represent it, depends on what you are trying to do—just as the quest for word senses discovered. So even for the apparently simple task of representing a type of molecule, “there is not one absolute answer,” according to Fabien Campagne, research professor of computational biomedicine at Weill Medical College. The implication is that representation isn’t fixed, pre-defined. And new lines of inquiry, wrote Campagne, “may require totally new representations of the same entity.”

One of NELL’s biological conceptions is that “blood is a subpart of the body within the right ventricle.” Perhaps this and a complement of many other facts cut in a similar shape can represent blood in a way that answers some purpose or purposes. But it will not apply in discussions of fish blood. (Fish have no right ventricle.) And when it comes to human transfusion, blood is more a subpart of a bag.

The difficulties of representation represented: Marcel Duchamp’s Nude Descending a Staircase. NELL’s representation of Marcel Duchamp

Particular regions of knowledge can be tamed by effort or imposition of a scheme by raw exercise of authority. But these fiefdoms resist unification and generally conflict. After millennia of effort, humans have yet to devise a giant plan which would harmonize all knowledge. The Wikipedia folksonomy works well for people but badly for automating reasoning. Blood diamonds and political parties in Africa, for example, share a category but clearly require different handling. One knowledge project, YAGO, simply lops off the Wikipedia taxonomy.

The dream of a database of everything is very much alive. Microsoft Research, from its Beijing lab, recently unveiled a project named Probase which its creators say contains 2.7 million concepts sifted from 1.68 billion web pages. “[P]robably it already includes most, if not all, concepts of worldly facts that human beings have formed in their mind,” claim the researchers with refreshing idealism. Leaving aside the contention that everything ever thought has been registered on the Internet, there still are no universal injection molds—categories—ready to be blown full of knowledge.

A much earlier, equally ambitious effort called Cyc failed for a number of reasons, but insouciance about knowledge engineering, about what to put where, contributed to Cyc’s collapse. Human beings tried to build Cyc’s knowledgebase by hand, assembling a Jenga stack of about one million facts before giving up.

NELL may be an automated version of Cyc. And it might be even less successful. NELL’s minders already have their hands full tweaking the program’s learning habits to keep fact accuracy up. NELL is inferior to Cyc when it comes to the complexity of knowledge each system can handle. Unless NELL can learn to create categories, people will have to do it, entailing a monumental knowledge engineering effort and one not guaranteed to succeed. Machine learning relies on examples, which simply might not work for elucidating categories and taxonomies. Undoubtedly, that is far harder than extracting facts.

NELL may also represent a kind of inverse of IBM’s Watson. NELL arguably is creating a huge Jeopardy clue database full of facts like “Microsoft is a company headquartered in Redmond.” NELL and Watson attack essentially the same problem of knowledge, just from different directions. But it will be difficult for NELL to reach even Watson’s level of performance. Watson left untouched the texts among its 15 terabytes of data. NELL eviscerates text, centrifuging the slurry to separate out facts and reassembling them into a formalized structure. That is harder.

And Watson is confined to the wading pool, factoid shallows of knowledge. The program is out of its depth on questions that require reasoning and understanding of categories. That may be why, in Final Jeopardy, Watson answered “Toronto,” not “Chicago,” for the United States city whose largest airport is named for a World War II hero and second largest for a World War II battle. Watson likely could have separately identified O’Hare and Midway if asked sequential questions. And pegging Chicago as the city served by both airports also presumably would be automatic for the computer. But decomposing and then answering the series appears to have been too hard. NIST dropped such questions—twice—for their perceived insuperability. And yet they are trivial compared to answering questions about the relations between F.W. de Klerk, the African National Congress, and the Pan Africanist Congress, questions of the kind which have stalled progress in parsing.

Google vs. language
Google contends with language constantly—and prevails. Most Google queries are actually noun phrases like “washed baby carrots.” To return relevant results, Google needs to know if the query is about a clean baby or clean carrots. Last year, a team of researchers crushed this problem under a trillion-word heap of text harvested from the many-acred Google server farms. Statistically, the two words “baby carrots” show up together more than “washed baby.” Problem solved. Well, mostly.
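
Stripped to its essence, the trick is just a comparison of counts. The sketch below restates that idea with invented, placeholder counts standing in for the web-scale n-gram statistics the researchers actually used.

```python
# Toy count-based noun-phrase bracketing. The counts are invented
# placeholders; the real system drew on web-scale n-gram counts.
NGRAM_COUNTS = {
    ("baby", "carrots"): 120000,  # hypothetical count
    ("washed", "baby"): 2300,     # hypothetical count
}

def bracket(w1, w2, w3, counts):
    """Choose ((w1 w2) w3) or (w1 (w2 w3)) from adjacency counts."""
    left = counts.get((w1, w2), 0)   # evidence for "washed baby"
    right = counts.get((w2, w3), 0)  # evidence for "baby carrots"
    if right >= left:
        return f"({w1} ({w2} {w3}))"
    return f"(({w1} {w2}) {w3})"

print(bracket("washed", "baby", "carrots", NGRAM_COUNTS))
# -> (washed (baby carrots)): the carrots are washed, not the baby.
```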

The method works an impressive 95.4% of the time, at least on sentences from The Wall Street Journal. Perhaps as important, accuracy muscled up as the system ingested ever-larger amounts of data. “Web-scale data improves performance,” trumpeted researchers. “That is, there is no data like more data.” And more data are inevitable. So will the growing deluge wash away the inconveniencies of parsing and other language processing problems?

Performance did increase with data, but bang for the byte still dropped—precipitously. Torquing accuracy up just 0.1% required an order of magnitude increase in leverage, to four billion unique word sequences. Powering an ascent to 96% accuracy would require four quadrillion, assuming no further diminution of returns. To reach 97%, begin looking for 40 septillion text specimens.
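
The extrapolation behind those numbers is simple arithmetic, restated in the sketch below: each additional 0.1% of accuracy is assumed to cost a tenfold increase in unique n-grams, starting from 95.4% accuracy at four billion n-grams.

```python
# Back-of-envelope extrapolation used above: each +0.1% of accuracy is
# assumed to cost roughly a tenfold increase in unique n-grams,
# starting from 95.4% accuracy at 4 billion n-grams.
def ngrams_needed(target, base_accuracy=95.4, base_ngrams=4e9):
    steps = round((target - base_accuracy) / 0.1)
    return base_ngrams * 10 ** steps

print(f"{ngrams_needed(96.0):.0e}")  # ~4e+15, four quadrillion
print(f"{ngrams_needed(97.0):.0e}")  # ~4e+25, 40 septillion
```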

Mine the gap: Does the Internet have enough words to solve noun phrases? (Adapted from Pitler et al., “Using Web-scale N-grams to Improve Base NP Parsing Performance”)

More data yielding ever better results is the exception not the rule. The problem of word senses, for example, is relatively impervious to data-based assaults. “The learning curve moves upward fairly quickly with a few hundred or a few thousand examples,” according to Ted Pedersen, computer science professor at the University of Minnesota, Duluth, “but after a few thousand examples there's usually no further gain/learning, it just all gets very noisy.”

Conceivably, we are now witnessing the data wave in language processing. And it may pass over without sweeping away the problems.

Let the data speak, or silence please
In speech recognition too, according to MIT’s Jim Glass, “There is no data like more data.” Glass, head of MIT’s Spoken Language System Group, continued in email: “Everyone has been wondering where the asymptote will be for years but we are still eking out gains from more data.” However, evidence for continuing advance toward human levels of recognition accuracy is scarce, possibly non-existent.

Nova’s Watson documentary asserts that recognition accuracy is “getting better all the time” (~34:00) but doesn’t substantiate the claim. Replying to an email inquiry, a Nova producer re-asserted that programs like Dragon Naturally Speaking from Nuance “are clearly more accurate and continuing to improve,” but again adduced no evidence.

Guido Gallopyn, vice president of research and development at Nuance, has worked on Dragon Naturally Speaking for over a decade. He says Dragon’s error rate has been cut “more than in half.” But Gallopyn begged off providing actual figures, saying accuracy was “complicated.” He did acknowledge that there was still “a long way to go.” And while Gallopyn has faith that human-level performance can be attained, astonishingly, it is not a goal for which Nuance strives: “We don’t do that,” he stated flatly.

Slate also recently talked up speech recognition, specifically Google Voice. The article claims that programs like Dragon “tend to be slow and use up a lot of your computer's power when deciphering your words,” in contrast to Google’s powerful servers. In the Google cloud, 70 processor-years of data mashing can be squeezed into a single day. Accurate speech recognition then springs from the “magic of data,” but exactly how magic goes unmeasured. Google too is mum: “We don't have specific metrics to share on accuracy,” a spokesperson for the company said.

By contrast, The Wall Street Journal recently reported on how Google Voice is laughably mistake-prone, serving as the butt of jokes in a new comedic sub-genre.

There is no need for debate or speculation: the NIST benchmarks, gathering dust for a decade, can definitively answer the question of accuracy. The results would be suggestive for the prospects of web-scale data to overcome obstacles in language processing. Computer understanding of language, in turn, has substantial implications for machine intelligence. In any event, claims about recognition accuracy should come with data.

Today, all that can be said is this:

Progress in voice recognition: the sound of one hand clapping since 2001 (Adapted from NIST, “The History of Automatic Speech Recognition Evaluations at NIST”)

To be || not to be
That is the question about machine intelligence.

When Garry Kasparov was asked how IBM might improve Deep Blue, its chess playing computer, he answered tartly: “Teach it to resign earlier.” Kasparov, then world chess champion, had just soundly defeated Deep Blue. Rather than follow this advice, IBMers put some faster chips in, tweaked the software and then utterly destroyed Kasparov not long after, in 1997. It was IBM’s turn to vaunt: “One hundred years from now, people will say this day was the beginning of the Information Age,” claimed the head of the IBM team. Instead, apart from chess, Deep Blue has had no effect.

If Deep Blue represented an effort to rise above human intelligence by brute computational force, Watson represents the data wave. But we have been inundated by data for some time. Google released its trillion word torrent of text five years ago. Today the evidence may suggest that the problems of language will remain after the deluge. If the rising tide of world-wide data can’t float computing’s boat to human levels, “What’s the alternative?” demands Eugene Charniak. He perhaps means there is no alternative.

A somewhat radical idea is to revise the parts of speech, as Stanford University’s Christopher Manning has proposed. Disturbingly, Manning asks: “Are part-of-speech labels well-defined discrete properties enabling us to assign each word a single symbolic label?” Recall that words don’t cleanly map to discrete senses, and similarly that things in the world don’t fit into obvious, finite, universal categories. Now the parts of speech seem to be breaking down.

Tagging accuracy: time for new parts of speech? (Source: Flickr, Tone Ranger)

Manning is skeptical that machine learning could conjure even 0.2% more accuracy in the tagging of words with their part of speech. Achim Hoffmann, at the University of New South Wales, believes more generally that machine learning now bumps against a ceiling. “New techniques,” he adds, “are not going to substantially change that.” Hoffmann points out that relatively old techniques “are still among the most successful learning techniques today, despite the fact that many thousand new papers have been written in the field since then.”

For Hoffmann, the alternative is to approach intelligence not through language or knowledge but through algorithm. Arguably, however, this is just a return to the very origins of artificial intelligence. John McCarthy, inventor of the term “artificial intelligence,” tried to find a formal logic and mathematical semantics that would lead to human-like reasoning. This project failed. It led to Cyc. As Cyc founder Doug Lenat wrote in 1990: “We don’t believe there is any shortcut to being intelligent, any yet-to-be-discovered Maxwell’s equations of thought.” Forget algorithm. Knowledge would pave the way to common sense. Cyc, of course, also did not work.

Are we just turning in circles, or is the noose cinching tighter with repeated exertions? There is something viscerally compelling—disturbing—about Watson and its triumph. “Cast your mind back 20 years,” as AI researcher Edward Feigenbaum recently said in the pages of The New York Times, “and who would have thought this was possible?” But 20 years ago, Feigenbaum published a paper with Doug Lenat about a project called Cyc. Cyc aimed at full-blown artificial intelligence. Watson stands in relation to a completely realized Cyc the way J. Craig Venter’s synthetic cell stands to the original vision of genetic engineering: a toy.

John McCarthy derided the Kasparov-Deep Blue spectacle, calling it “AI as sport.” Jimmy Lin, the former NIST coordinator, is not derisive but more ho-hum, worldly-wise about Watson: “Like a lot of things,” he says, “it’s a publicity stunt.” Perhaps an artificially intelligent computer wouldn’t fall for it, but people have. The New Yorker sees the triumphs of Deep Blue and Watson as forcing would-be defenders of humanity to move the goalposts back, to re-define the boundaries of intelligence and leave behind the fields recently annexed by computers. But the goalposts arguably have been moved up, so that weak artificial intelligence—artificial AI—can put it through the uprights.

The New York Times contends that Watson means “rethinking what it means to be human.” Actually what needs redefinition may be humanity’s relationship to dreams of technological transcendence.

Globe-spanning effort tightens vise on polio; eyes on Angola

Statistics at left, tragedy at right
Closer to victory than ever, polio eradication efforts have intensified, with 2011 bringing new initiatives and funding to most every front in the global war on the virus. The encirclement extends from presidential palaces to the streets of Luanda, Angola to tent villages on the Kosi River in India. “[T]he reach is incredible,” said Ellyn Ogden, USAID’s polio eradication coordinator, “to the doorstep of every child in the developing world, multiple times… It is an extraordinary human achievement that is hard for most people to imagine in a peace-time program.”
In India, for example, an army of 2.5 million vaccinators visited 68 million homes and immunized 172 million children; the president of India kicked off the January campaign. Cumulative efforts have driven cases in India to historic lows: just 42 cases last year versus 741 in 2009.
In Africa, 15 African countries launched a synchronized immunization campaign late last year with 290,000 vaccinators targeting 72 million children. But a similar campaign took place the year before—and the year before that. Yet despite these huge efforts, polio keeps coming back. Some countries like Burkina Faso have gotten rid of polio three times.
Nigeria once exported the most polio in Africa, but record-setting progress has occurred there. However, polio has developed a new stronghold—in Angola, which has fed explosive cross-border outbreaks. This year Angola will likely be the source of one third of the world’s polio cases. Continued transmission there has caused the Global Polio Eradication Initiative (GPEI) to miss a major end-2010 milestone. “Angola now is almost the most important front in the global war on polio, and the whole world is watching to see how we do here," said UNICEF Executive Director Anthony Lake. Lake visited Angola with Tachi Yamada, president of global health at the Bill & Melinda Gates Foundation, in January.
Angola freed itself of polio in 2002 only to suffer re-importation—from India. Since then, 33 vaccination campaigns over half a decade have failed to stamp out the disease. Lack of political commitment explains these failures, according to multiple sources within the eradication initiative. Angola’s vaccination rounds have been staffed to a large extent by children. Inadequate supervision has meant just a few hours of vaccinating a day, with efforts dropping off further over the course of three-day campaigns.
Political commitment now appears solidly locked in. Visiting Angola, Lake and Yamada met with President José Eduardo dos Santos. “ ‘I’ll lead the campaign,’ ” Yamada said the president told him. The following day, Angop, the state-run news agency, ran the headline “Head of State committed to eradication of polio.” Subsequent news releases showed a domino effect down the political chain of command from the vice-president, to governors, to administrators of individual districts. One release identified a district manager by name and as acknowledging “the availability of the necessary conditions for vaccinators to reach all areas of the district,” likely coded language for placing direct responsibility on the manager for ensuring vaccination of the 156,000 children under five in that district.
The World Health Organization (WHO) places equal emphasis on community involvement in its formula for effective immunization campaigns. In the past, vaccination plans have been centrally created and handed down for execution. WHO finds that the best “microplans,” which map out block-by-block strategies and awareness efforts, are developed by the communities involved. In this way, “communities hold themselves accountable,” as Tim Petersen, a program officer at the Bill & Melinda Gates Foundation, puts it.
Angola conducted a three-day polio vaccination campaign, February 23-25, across five high-risk areas of the country. A WHO spokesperson said the new decentralized planning led to some “hiccups” in execution. A report from independent monitors, expected in about a week, will reveal the quality of the campaigns which aim to immunize 90% of children under five.
[Note: I attempted to travel to Angola to cover the vaccination campaigns but was not granted a visa. The Angolan consulate in New York informed me four days before my flight that the signature page of my application was “missing,” that my letter from WHO did not meet requirements for documentation related to the purpose of travel and, still less plausibly, that the consulate had been trying unsuccessfully to reach me concerning these problems.]
High risk areas will be covered twice more in upcoming nation-wide vaccination campaigns. However, “It is clear that Angola has a tough few months ahead,” says Sona Bari, communications officer for polio at WHO. But Angola has beaten polio before. Today cases are relatively few, at about 30 a year, certainly in comparison with 1999, which saw more than 1,000. Also, the intensity of transmission is much lower in Angola than that faced by, say, India.
While political commitment seems to be in place, stability might be an issue. Some political tremors from Tunisia and Egypt have reached Angola, such as a call for public protest on March 7. (Recently Angola was without internet access for about two days, which state media attributed to a cut cable.) Prior to the 7th, US State Department spokesperson Hilary Renner said she was not aware of “significant demonstrations in Angola.” The Associated Press report on turnout and reaction on the 7th suggests that revolutionary force is so far not strong.
Rest of the World: Key Fronts
The eradication initiative must close out the major global sources of polio, India and Nigeria. India is closer to the goal and mostly needs to sustain its exertions. Nigeria trails but has made enormous progress; there are risks but today the country is essentially on track. If Angola too has turned in the right direction, Pakistan becomes the next focus.
Pakistan presents almost all possible obstacles to polio eradication. As in India, the oral polio vaccine fails to immunize a significant number of children in Pakistan, usually under conditions of very poor health and hygiene. Some parents in Pakistan refuse to allow their children to be immunized, a problem also once seen among Muslims in Nigeria who feared the vaccine had been purposefully tainted.
Much of Pakistan’s polio burden falls on border states with Afghanistan where security issues prevent vaccination teams from operating. The virus travels to more secure areas of the country where poorly run, corruption-riven vaccination campaigns fail to stamp it out. Even the house of a former minister of health was bypassed—twice—by polio vaccinators. “I had to call them to get my kids vaccinated,” reported the former minister.
Pakistan’s political stability is low. Natural disasters—huge flooding—have made a difficult situation worse. Last year saw a jump to 144 cases, up from 89. And so far in 2011, cases are accumulating more rapidly. Fortunately, the Pakistan/Afghanistan polio complex has not exported the virus to the rest of the world—so far.
Pakistan figured prominently in the careful eradication orchestrations of early 2011. Bill Gates met with President Asif Ali Zardari on January 15th. On January 25th, an emergency plan to immunize 32 million children was announced. The same day brought a joint announcement of $100 million in funding from the Gates Foundation and Mohammed bin Zayed Al Nahyan, crown prince of Abu Dhabi, to support vaccination efforts, with $34 million earmarked for polio immunization in Pakistan and Afghanistan.
Afghanistan offers ample challenges, including security problems. However, the absolute number of cases, about 30 per year, is not extreme. Described as a “pretty strong program” by the Gates Foundation’s Tim Petersen, the Afghan polio eradication team appears to already enjoy the confidence of the members of the global eradication initiative.

The least controlled polio rampage is taking place in the Democratic Republic of Congo (DRC). Cases last year exploded to 100 versus three in 2009. The DRC and neighboring Angola comprise an epidemiological block like Afghanistan and Pakistan. There is “huge cross-border traffic” between Angola and the DRC, according to Apoorva Mallya, a program officer at the Bill & Melinda Gates Foundation. A lack of roads and transportation infrastructure greatly complicates operations. For example, biological samples from possible polio victims sometimes must be floated down the Congo River en route to a lab for analysis.

The eradication initiative is looking at “local, local solutions,” according to Mallya. At the same time it seeks high level political commitment, just as in Angola and indeed all countries. WHO Director-General Margaret Chan travelled to the DRC in February to meet with President Joseph Kabila. UNICEF’s Anthony Lake then visited in the first week of March and called for “an absolute commitment” to vaccinate every child.

New Trends in Media Coverage
The front in the polio war has been discouragingly broad and variable. Countries have been won and lost—some more than once. Low numbers or even single cases perpetually spatter the map. Gabon just reported a case, its first in more than ten years. Seemingly safe areas like Tajikistan and Congo have recently seen blowout epidemics. Transmission has become fully re-established in four African countries, not only Angola but also, for instance, Chad. Total cases globally have rarely dipped under a thousand a year over the last decade, giving rise to the view that this “last one percent,” like Jell-O, will squish somewhere else no matter how hard it is squeezed.
But the polio eradication initiative has focused on choking off the sources, following the strategy of von Clausewitz, who, in On War, recommended subduing the enemy’s “center of gravity.” In polio, that’s India and Nigeria. No other countries come close in polio burden. It’s not over, but India is astonishingly near to eliminating polio. The states of Bihar and Uttar Pradesh, where polio has been worst in India, haven’t seen a single case in six months. Among much else, this required tracking and immunizing enormous mobile populations. As many as six million people are on the move each day, according to a WHO estimate, with accessibility complicated by flooding of the Kosi River in Bihar. In addition, India’s eradication effort has overcome vaccine failure by achieving very high levels of population immunity: the virus basically can’t penetrate the thicket of immune people to access the vulnerable, those children in whom the vaccine didn’t take.
The Associated Press recently recognized these developments in "India brings hope to stalled fight against polio."  ABC News posted a story in which progress in India provides hope for polio becoming “just the second disease to be wiped off the planet since smallpox.” (ABC News received $1.5 million from the Gates Foundation to support a television series on global health, making the representativeness of their current coverage more difficult to ascertain.)
Most recently, The Globe and Mail ran a polio package built on successes in India, saying “Polio is all but gone from India…” (I have written to similar effect in Scientific American.) One article is entitled “Anti-polio battle on verge of victory.”
No country has been as difficult as India. The obstacles in the countries of the rest of the world are largely different combinations of known problems which have been surmounted somewhere already. Polio has been expunged from anarchic, conflict-ridden states like Somalia. Rejection of vaccine by parents on cultural or religious grounds has been overcome in Nigeria. The quality and coverage of vaccination campaigns have been lifted even amidst rife corruption. Clearly, however, past performance doesn’t guarantee future results. Completely novel problems could arise. Failure on one or more of the numerous fronts in eradication is likely; compound failures could wreck the broader enterprise.
However, while feasibility remains an issue, coverage appears to be shifting—to whether the polio “endgame” can be won. The wild poliovirus is not the only threat to eradication. Very rarely, the oral polio vaccine, which uses a live attenuated virus, mutates into virulent form. Thus, in a sense, the eradication effort is fighting fire with fire, as a recent op-ed piece in the Los Angeles Times points out in “The Polio Virus Fights Back.” Not long after, Myanmar reported just such a case of vaccine-derived poliovirus. These mutants can—and have—spread. So far no related cases have been reported because the vaccine protects against it. And in Myanmar, “Immunization demand is high and the country conducts good quality campaigns,” according to WHO’s Sona Bari. In India, where oral polio vaccine dosing has been most intense, 2010 saw only one case of vaccine-derived virus. Rightly, however, the subject will likely gain in prominence in media coverage.
Not only has the nature of feasibility questioning changed, shifting to whether the next phase can be won, the position of arch critic of eradication now appears to be open. Donald Henderson played a key role in smallpox eradication but has long been skeptical of polio eradication. According to a January Seattle Times article, however, Henderson changed his mind six months ago and now believes polio could be eliminated. But not long after, The New York Times cited Henderson as a vehement critic of eradication. In mid-February, however, the paper ran a different story, “Can Polio Be Eradicated? A Skeptic Now Thinks So,” which (re-)disclosed that Henderson had changed his mind. The title of the earlier article in which Henderson was a critic also appears to have been changed online from “Critics Say Gates’s Anti-Polio Push Is Misdirected” to “Gates Calls for a Final Push to Eradicate Polio.”
At present, this leaves only the desirability of polio eradication in question. While no one argues for polio, there are other diseases which are more widespread, taking more lives and causing greater suffering. According to The New York Times, Richard Horton, editor-in-chief of The Lancet, tweeted:
“Bill Gates’s obsession with polio is distorting priorities in other critical [Bill & Melinda Gates Foundation] areas. Global health does not depend on polio eradication.”
The Gates Foundation, however, embraces the accusation. “We are overemphasizing polio,” says the foundation’s Tachi Yamada. Polio became the foundation’s number one priority late last year. And it’s not just Bill Gates or his foundation. In 2008, WHO Director-General Margaret Chan said “I am putting the full operational power of the World Health Organization into the job of finishing polio eradication… I am making polio eradication the Organization’s top operational priority on a most urgent, if not an emergency basis.”
But the emphasis on polio is indeed disproportionate. Both the Gates Foundation and WHO recognize that eradication would not just rid the world of a horrific disease: it would be a giant symbolic victory for global health. Chan, in her 2008 speech, also said “We have to prove the power of public health,” a goal which eradication would achieve. Similarly, Gates Foundation’s Yamada doesn’t want to give “fuel to cynics” by having eradication fail but instead to demonstrate that “this is what development assistance can do.”
Returning to the matter of eradication critics, The New York Times also quoted bioethicist Arthur L. Caplan, a professor at the University of Pennsylvania, as saying, “We ought to admit that the best we can achieve is control.” In June 2009, Caplan wrote an opinion piece in The Lancet entitled “Is Disease Eradication Ethical?” Caplan wondered if eradication was possible, since it hadn’t worked after more than two decades of effort. The financial cost was high and diverted resources from better, more life-saving uses.
Caplan declined to comment for this article. However, questioning eradication as a strategy in global health, as polio demonstrates, is a worthwhile endeavor. And if polio resurges, so will skepticism of eradication.
Space Race to Human Race
The upcoming retirement of the Space Shuttle likely will attract enormous coverage. The Shuttle is not being replaced, however. And there are currently no plans for even a single human to permanently leave the planet. Still, the expectation of a spacefaring humanity persists although the 1960s might remain the golden age of manned space exploration.
In other words, the world has missed that the next giant step for humankind will take place on planet earth. The polio eradication effort might actually be larger than the Apollo program. Already in India, the number of cases can be counted down to zero; other countries might follow.
It’s a good story.

NYT Mistaken on Polio Eradication Feasibility

Whether polio eradication should be pursued or whether it is central to global health are questions that should be and are asked by The New York Times in “Critics Say Gates’s Anti-Polio Push Is Misdirected.” However, the Times also contends that “Victory may have been closest in 2006…” when victory may be closer now than ever before. And the latest blast of polio funding and initiatives, described by the Times, comes not because the eradication effort is on its heels but because it’s going for the kill.

Eradication hinges less on the number of countries suffering polio cases than on knocking out the sources—or “reservoirs”—of the disease. The two largest such reservoirs are India and Nigeria. Today, both countries have historic, record-low case counts. The Times describes this as “doing much better.” Perhaps also underappreciated by the article, wiping out the reservoirs of polio will stop outbreaks. The Times mentions outbreaks in Nepal, Kazakhstan, Tajikistan, Turkmenistan and Russia. All originated from India.

Further, the case of India seems to demonstrate that there are no scientific or technological barriers to eliminating polio. In particular, the Indian states of Bihar and Uttar Pradesh once were the most impregnable redoubts for the poliovirus anywhere on the planet. Yet, because of the huge expense and exertions described by The New York Times, Bihar and Uttar Pradesh saw nearly zero cases even during the “high season” for polio. (See my Polio in Retreat: New Cases Nearly Eliminated Where Virus Once Flourished.)

In Africa, Nigeria has been the most intractable polio problem. No sooner was eradication on track in that nation than new sources—Angola, Chad, Congo and Sudan—arose to continue infecting the continent. Indeed, Angola and Sudan have even reverted into polio reservoirs, the disease spreading within and across borders. The Times properly draws attention to this indisputable, highly problematic regress. But the obstacles to eliminating polio in Angola do not compare with those of India, where the degree of difficulty approached near impossibility. And Angola has gotten rid of polio before. How did it come back? Cases imported from India, a reservoir now drawn down to historic lows.

The New York Times represents a crucial exception to the influence of the Gates Foundation on global health coverage. It is important to question whether, in retrospect, polio eradication ought to have been undertaken, given all the costs. Also, whether today polio ought to be treated as the number one priority in global health is likewise a valid inquiry. And the Times is right that the Bill & Melinda Gates Foundation has doubled down on polio eradication several times before in the aftermath of setbacks to the program. However, the recent slew of polio announcements and initiatives is not in response to setbacks. It’s to unload a knock-out punch while the opponent is staggered. It might work.

Science faltering? No comment, no coverage

A National Science Foundation study cast doubt on the idea that scientific progress is accelerating--or even maintaining speed. Nearly two dozen scientists, science administrators, members of Congress and the Executive branch declined to comment. The study elicited almost no coverage.

I explain these phenomena in Columbia Journalism Review:

Science Faltering? Obama wants more R&D, but few willing to discuss research productivity

---------------