The long struggle: vaccines versus malaria

Photo: Caitlin Kleiboer 

"After clean water, vaccines may have saved more lives than any other public health intervention. Eradication of malaria, a disease that may have killed more humans than any other single cause, likely requires a malaria vaccine. However, after nearly a century of research, today’s only candidate might not pack enough immunological punch to win deployment. Sadly, there are no obvious successors. Goals for vaccines set in 2006 are now approaching, but may not be possible to meet."

Read the rest @ Ars Technica

Third in my series on malaria.

1) Drug resistant malaria takes new ground, raising fears of global spread

2) After artemisinin: searching for the next front-line malaria drug

Polio almost crushed in Africa—except Nigeria

In anticipation of future performance: Rotary recognized Nigerian President Goodluck Jonathan in April for his vision of a polio-free Nigeria. (Photo: Nigeria PolioPlus Committee)

Polio cases across Africa are near zero, with the exception of Nigeria, where they are surging, jeopardizing a continent that is close to polio-free after decades of effort. Nigeria and international agencies are taking measures to halt the recrudescence and prevent spread outside the country, but the amount of disease and the mobility of populations give the virus a fighting chance to kindle outbreaks elsewhere on the continent.

With India having rid itself of polio, Nigeria is now the main front in the effort to eradicate the virus. It is the only African country that has never interrupted transmission of the disease, making it a supplier of poliovirus to its neighbors and the rest of the continent. Nigeria made huge strides, bringing cases down to 21 in 2010. But then public health lost out to politics. Elections in early 2011 turned attention away from polio, and cases bounced back to 65 for the year. Already in 2012 there are 35, even though it is the low season for cases. The only other country in Africa to report cases this year is Chad, with three.

Vaccination rounds have been scheduled in countries neighboring Nigeria, but polio’s renewed momentum could carry it to any number of places in Africa where population immunity is low. “That’s the big question,” says the Gates Foundation’s Apoorva Mallya concerning the possibility of export. “We are trying a lot of new strategies, but it is definitely a tough challenge,” he said. Outbreaks could go undetected in remote areas, becoming larger and even seeding secondary outbreaks, undoing at least part of the work in getting rid of polio. At the same time, the Global Polio Eradication Initiative has become adept at swiftly extinguishing outbreaks. And the initiative has returned to some of the same countries several times already to stamp out recurrences of polio.

The World Health Assembly voted last week to make polio a global health emergency, raising the profile of the issue and perhaps attracting additional funding for a project continuously declaring funding shortfalls. The emergency declaration could also mean travel restrictions for countries that fail to bring polio under control, Nigeria being the obvious candidate. Leaving the country might come to require proof of vaccination.

Polio also continues to roam freely in parts of Pakistan and Afghanistan. That locus is considered a lesser threat for exporting the disease, although polio did cross from Pakistan into China before being quickly smothered.

Nigeria, hard pressed today, is perhaps at best several years away from putting an end to polio. President Goodluck Jonathan has set 2015 as his target, and global health authorities believe Nigerian leadership is sincere in its efforts. As in India, the tactics or “micro plans” for vaccination are changing to emphasize mobile and remote populations which have been consistently missed, perhaps since eradication efforts began decades ago. India shows that eradication can be done, and in many ways how, but it also shows the enormity of the effort required.

Why there will never be a model of a cell

Future imperfect: computer models cannot attain life-like fidelity

Biology’s holy grail, a full mechanistic understanding of the workings of life, is beyond reach according to two recent papers. Computer models that closely replicate the phenomena of a single cell are not possible, and the goal has been dropped.

Over the last decade, researchers have tried to grapple with biological complexity by modeling less complicated organisms. Yeast proved too complex and was replaced by organisms with smaller and smaller genomes, all the way down to tiny Mycoplasma pneumoniae. Unable to reduce genomes any further, scientists have radically reduced expectations for models instead.

In Science last month, researchers described the “popular view,” in which “we progress linearly, from conceptual to ever more detailed models.” The popular, linear view is no more. From now on, models “should be judged by how useful they are and what we can learn from them,” according to the paper’s authors, “not by how close we are to the elusive ‘whole cell model’.”

Alex Mogilner, one of the authors and a professor at UC Davis, believes some future discovery might make the whole cell model again possible. “Never say never,” he advised. However, a paper from the Institute for Systems Biology forecloses the possibility for all time:

[N]o practically conceivable model will ever represent all possible physical parameters in a system, nor will enough data ever exist to fully constrain them all. It is also experimentally infeasible to measure, and technically prohibitive to model all possible phenomena in a cell, all possible environmental contexts, and all possible genetic perturbations.

There will be no in silico model of a cell, one that fully recapitulates cell behavior and substitutes for wet lab experiments. “Anyone who thinks we can ever obtain a completely deterministic view of an organism will have a hard job to convince me,” said Marc Kirschner, chair of the systems biology department at Harvard. “It is probably true that the number of equations to describe the events in a single cell is so large that this approach will never work,” according to Kirschner. He does hope to be able to predict “to some accuracy” particular responses of a system.

The implications for the future have yet to be worked out, although Mogilner and colleagues observed that such models were envisioned as enabling personalized medicine. For historical purposes, however, these papers bring an end to a monumentally successful, physics-based program for biology that began roughly a century ago.

Biologist Thomas Hunt Morgan successfully pioneered the methods of physics in biology, elucidating the role of chromosomes in heredity. This “turned out to be extraordinarily simple,” as he wrote in 1919, and nature was entirely approachable. “[I]f the world in which we live were as complicated as some of our friends would have us believe,” Morgan wrote, “we might well despair that biology could ever become an exact science.”

Shortly thereafter, physics underwent a crisis of faith as the discipline moved from an intuitive, mechanistic basis into a new and unsettling quantum era which renounced the Newtonian ideal of causally linking everything in space and time. When DNA was discovered decades later, the Newtonian paradise was regained. As the theoretical physicist turned biologist Max Delbrück said in his Nobel Prize lecture:

It might be said that Watson and Crick’s discovery of the DNA double helix in 1953 did for biology what many physicists hoped in vain could be done for atomic physics: it solved all the mysteries in terms of classical models and theories, without forcing us to abandon our intuitive notions about truth and reality.

Not long after, Lee Hood decided to become a biologist after reading an article by Francis Crick in Scientific American. Crick wrote that “the sequence of the bases acts as a kind of genetic code…”, a code that was then unknown. Many years later, Hood expressed the belief that “the core of biology is ultimately knowable, and hence, we start with a certainty that is not possible in the other disciplines,” like physics. He forecast being able to predict the behavior of a biological system “given any perturbation.” His lab at Caltech invented the DNA sequencer.

A draft sequence of the human genome was announced in 2000, and Hood founded his Institute for Systems Biology (ISB). The same year, Matt Ridley published his best-selling Genome, predicting a leap from knowing “almost nothing about our genes to knowing everything,” which he described as “the greatest intellectual moment in history. Bar none.”

For the next dozen years, researchers from ISB heaved with might and main to realize Hood’s vision. Instead, they now say it is unattainable.

Undoubtedly, there will be disbelief. But Robert Millikan, a founding father of Caltech, didn’t want to believe Einstein’s explanation of the photoelectric effect. He won a Nobel Prize for being wrong and proving Einstein right.

This may still be one of the greatest intellectual moments in history, just not what we expected.

 

Image of yeast adapted from Nelson et al. DOI: 10.1073/pnas.0910874107

Drug resistant malaria takes new ground, raising fears of global spread

Photo: Robert Semeniuk

In Southeast Asia, drug-resistant falciparum malaria may have evolved resistance to another frontline therapy and established itself in new territory in western Thailand, according to the World Health Organization. The new area in Thailand joins previous hot spots in Cambodia, Vietnam, and Myanmar, with the latter being badly equipped to stanch further spread. Despite containment efforts, the possibility that this strain may spread to Africa, which has the most significant malaria burden, remains very real.

From my article in Ars Technica, first in a series on malaria.

Read the full story

Robert Semeniuk's stirring photo shows a man from Myanmar with severe malaria who walked with his wife for four days to cross the border into Thailand, coming to the Mae Tao clinic in the town of Mae Sot. A forthcoming Lancet paper will describe the detection of artemisinin resistance arising in that region of Thailand. http://www.robertsemeniuk.com

India triumphs over polio

A two-woman vaccination team in Firozabad, Uttar Pradesh. (Photo: UNICEF)

From my article on Ars Technica:

In the year since January 13, 2011, India has had zero cases of polio. Previously, India led the world, accumulating over 5,000 cases since 2000. Polio's last victim in India was 18-month-old Rukhsar, a girl in West Bengal who began showing signs of paralysis on this day in 2011. Now, epic immunization efforts have brought global eradication of the disease a giant step closer. Outside India, however, backsliding Pakistan and Nigeria and splotches of polio across Africa have blocked the final stamping out of the disease worldwide...


IBM’s Watson: Portent or Pretense?

Game over.

IBM’s Watson, with a 15 terabyte chunk of human knowledge loaded into it like a Game Boy cartridge and set to hair trigger, poured out a high-precision fact fusillade that left no humans standing in the (aptly-named) Jeopardy. Is machine omniscience upon us?

“I, for one, welcome our new computer overlords,” said defeated Jeopardy champion Ken Jennings. But Jennings has buzzed in too quickly. Watson might point not to the inevitability of artificial intelligence but to its unattainability. Barring unexpected revelations from IBM, Watson represents exquisite engineering work-arounds substituting for fundamental advance.

Questionable progress
In 1999, early question answering systems, including one from IBM, began competing against each other under the auspices of the National Institute of Standards and Technology (NIST). At the time, researchers envisioned a four-step progression, starting with systems batting back facts to simple questions such as “When was Queen Victoria born?” A few tall steps later, programs would stand atop the podium and hold forth on matters requiring a human expert, like “What are the opinions of the Danes on the Euro?”

Getting past step one proved difficult. While naming the grape variety in Chateau Petrus Bordeaux posed little difficulty, programs flailed on follow-up questions like “Where did the winery’s owner go to college?” even though the answer resided in the knowledgebase provided. These contextual questions were deemed “not suitable” by NIST and dropped. The focus remained on simple, unconnected factoids. In 2006, context questions returned—only to be cut again the following year. In the consensus view, such questions were “too hard,” according to Jimmy Lin, a computer science professor at the University of Maryland and a coordinator of the competition.

The entire contest was dropped after 2007. “NIST decides what to push,” explained Lin, “and we were not getting that much out of this…” Progress had turned incremental, like “trying to build a better gas engine,” according to Lin. Question answering wasn’t finished, but it was done. Although James T. Kirk asked the Star Trek computer questions like whether a storm could cause inter-dimensional contact with a parallel universe, actual question answering systems like Watson would be hard pressed to answer NIST level one questions like “Where is Pfizer doing business?”

ABC easy as 1, 2, 3
Kirk spoke to the computer. Watson’s designers opted for text messages—which says a lot. Speech recognition software accuracy reaches only around 80% whereas humans hover in the nineties. Speech software treats language not as words and sentences carrying meaning but as strings of characters following statistical patterns. Seemingly, Moore’s law and more language data should eventually yield accuracy at or conceivably beyond human levels. But although chips have sped up and data abound, recognition accuracy plateaued around 1999 and NIST stopped benchmarking in 2001 for lack of progress to measure. (See my Rest in Peas: the Unrecognized Death of Speech Recognition.)

Much of speech recognition’s considerable success derives from consciously rejecting the deeper dimensions of language. This source of success is now a cause of failure. Ironically, as Watson triggers existential crisis among humans, computers are struggling to find meaning.

Words are important in language. We’ve had dictionaries for a quarter millennium, and these became machine readable in the last quarter century. But the fundamental difficulties of word meanings have not changed. For his 1755 English dictionary, Samuel Johnson hoped that words, “these fundamental atoms of our speech might obtain the firmness and immutability of… particles of matter…” After nine years’ effort, Johnson published his dictionary but abandoned the idea of a periodic table of words. Concerning language, he wrote: “naked science is too delicate for the purposes of life.”

Echoing Johnson 250 years later, lexicographer Adam Kilgarriff wrote: “The scientific study of language should not include word senses as objects…” The sense of a word depends on context. For example, if Ken Jennings calculates that Oreos and crosswords originated in the 1920s, does calculate mean mathematical computation or judge to be probable? Well, both. Those senses are too fine-grained and need to be lumped together. But senses can also be too coarse. If I buy a vowel on Wheel of Fortune, do I own the letter A? No. This context calls for a finer, even micro-sense.
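
To make the ambiguity concrete, here is a minimal sketch of the classic gloss-overlap approach to word sense disambiguation, in the spirit of the Lesk algorithm. The two senses of “calculate” and their one-line glosses are invented for illustration; real systems draw on a dictionary such as WordNet.

```python
# Toy Lesk-style word sense disambiguation: pick the sense whose gloss
# shares the most words with the surrounding context. The sense inventory
# below is invented for illustration; real systems use a dictionary
# such as WordNet.

def disambiguate(word, context, sense_inventory):
    """Return the sense whose gloss overlaps most with the context."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_inventory[word].items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

senses = {
    "calculate": {
        "compute": "determine a number by mathematical computation",
        "judge":   "judge something to be probable or likely",
    }
}

print(disambiguate("calculate",
                   "he calculated the sum by mathematical computation",
                   senses))  # -> 'compute'
print(disambiguate("calculate",
                   "she calculated it was likely to rain",
                   senses))  # -> 'judge'
```

The toy also shows why the problem resists solution: redraw the sense inventory more finely or more coarsely and the “right” answer changes with it.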

The decade of origin for Oreos and crosswords was actually the 1910s—as Watson correctly answered. In what decade will computers understand word meanings? Not soon; perhaps never. Theoretical underpinnings are absent: “[T]he various attempts to provide the concept ‘word sense’ with secure foundations over the last thirty years have all been unsuccessful,” as Kilgarriff wrote more than a decade ago, in 1997. Empirical, philosophy-be-damned approaches were tried the following year.

In 1998, at Senseval-1, researchers tackled a set of 35 ambiguous words. The best system attained 78% accuracy. The next Senseval, in 2001, used more nuanced word definitions which knocked accuracy down below 70%. It didn’t get up: “[I]t seems that the best systems have hit a wall,” organizers wrote. The wall wasn’t very high. Systems struggled to do better than mindlessly picking the first dictionary sense of a word every time. Organizers acknowledged that most of the efforts “seem to be of little use…” and had “not yet demonstrated real benefits in human language technology applications.”

Disambiguation was dustbinned. Senseval was renamed Semeval and semantic tasks subsumed word sense disambiguation by 2010. Today no hardware/software system can reliably determine the meaning of a given word in a sentence.

That’s imparsable
Belief that language can be solved with statistical methods continues undimmed, however. “The answer to that question is ‘yes,’” declares an unabashedly partial Eugene Charniak, professor of computer science at Brown University. Whatever the trouble with word meanings, at the sentence level computer comprehension is quite impressive—thanks to probabilistic models. Charniak has written a parsing program that unfurls a delicate mobile of syntax from the structure of a sentence.

Mobile meaning, hanging in the balance (Diagram: phpSyntaxTree)

Such state-of-the-art parsers spin accurate mobiles about 80% of the time when given sentences from The Wall Street Journal. But feed in a piece of literature, biomedical text or a patent and the parses tangle. Nouns are mistaken for finite verbs in patents; in literary texts, different kinks and knots tug accuracy down to 70%.

Performance droops because the best parsers don’t apply universal rules of grammar. We don’t know what those rules are, or whether they exist. Instead, parsers try to reverse-engineer grammar by examining huge numbers of example sentences and generating a statistical model that substitutes for the ineffable principles.

That strategy hasn’t worked. Accuracy invariably declines when parsers confront an unfamiliar body of text. The machine learning approach finds patterns no human realistically could, but these aren’t universal. Change the text and the patterns change. That means current parsing technology performs poorly on highly diverse sources like the web.

Progress has gone extinct: parsing accuracy gained perhaps a few tenths of one percent in the last decade. “Squeezing more out of the current approaches is not going to work,” says Charniak. Instead, he concludes, “we need to squeeze more out of meaning.”

Surface features don’t provide a reliable grip on sentence syntax. Word order and parts of speech often aren’t enough. Regular sentences can be slippery, like:

  • President F.W. de Klerk released the ANC men along with one of the founding members of the Pan Africanist Congress (PAC).

Not knowing about apartheid, the parser must guess whether de Klerk and a PAC member together released the ANC men—although the PAC figure was also in prison. The program has no basis for deciding where in the mobile (pictured above) to hang up the phrase “along with…”
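
The ambiguity can be reproduced with a few lines of grammar. Below is a sketch using NLTK's chart parser (assuming the nltk package is available); the toy grammar and the simplified sentence stand in for the de Klerk example, not for any production parser.

```python
# A toy grammar reproducing the attachment ambiguity, parsed with NLTK's
# chart parser (a sketch assuming the nltk package is installed; the grammar
# and simplified sentence stand in for the de Klerk example).
import nltk

# "with a member" can hang from the verb phrase (released ... together with
# a member) or from the noun phrase (the men with a member).
grammar = nltk.CFG.fromstring("""
S    -> NP VP
VP   -> V NP | VP PP
NP   -> NP PP | Det N | Name
PP   -> P NP
Name -> 'deKlerk'
V    -> 'released'
Det  -> 'the' | 'a'
N    -> 'men' | 'member'
P    -> 'with'
""")

sentence = "deKlerk released the men with a member".split()
for tree in nltk.ChartParser(grammar).parse(sentence):
    print(tree)  # two distinct trees, one per attachment; nothing in the
                 # grammar says which reading is correct
```

Both trees are grammatical; only knowledge about apartheid, prisons, and the PAC says which one is true.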

Notice that winning Jeopardy is easier than correctly diagramming some sentences. And Watson provides no help to a parser in need. Questions like, “What are the chances of PAC releasing members of the ANC?” are far too hard, the reasoning power and information required too vast. Watson’s designers likened organizing all knowledge to “boiling the ocean.” They didn’t try. Others are.

Unobtanium
But it’s called mining the World Wide Web and aims to penetrate to the inner core, the sanctum sanctorum, of meaning.

In the beginning was the word. But the problem of meaning arises in word senses and spreads, as we have seen, to sentences. Errors and misprisions accumulate, fouling higher-level processing. In a paragraph referencing “Mr. Clinton,” “Clinton,” and “she,” programs cannot reliably figure out if “Clinton” refers to Bill Clinton or Hillary Clinton—after 15 years of effort. Perhaps because of this problem, Watson once answered “Richard Nixon” to a clue asking for a first lady.

Evading this error requires understanding the senses of nearby words, that is, solving the unsolved word disambiguation problem. Finding entry into this loop of meaning has been elusive, the tape roll seamless, thumbnail never catching at the beginning.

Structuring web-based human knowledge promises to break through today’s dreary performance ceiling. Tom Mitchell at Carnegie Mellon University seeks “growing competence without asymptote,” a language learning system which spirals upward limitlessly. Mitchell’s Never Ending Language Learning (NELL) reads huge tracts of the web, every day. NELL’s blade servers grind half a billion pages of text into atoms of knowledge, extracting simple facts like “Microsoft is a company.” Initially, facts are seeded onto a human-built knowledge scaffold. But the idea is to train NELL in this process and enable automatic accretion of facts into ever-growing crystals of knowledge, adding “headquartered in the city Redmond” to facts about Microsoft, for example. Iterate like only computers can and such simple crystals should complexify and propagate.
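
The bootstrapping idea can be sketched in a few lines. This toy loop is not NELL's code: it seeds one category with a known instance, learns a textual pattern from sentences mentioning that instance, and then uses the pattern to promote new candidate facts. The four-sentence corpus is invented for illustration.

```python
# Toy bootstrapping fact extractor in the spirit of NELL (illustrative only):
# seed facts -> learn extraction patterns -> promote new facts -> repeat.
import re

corpus = [
    "Microsoft is a company headquartered in Redmond.",
    "Google is a company headquartered in Mountain View.",
    "Oreo is a cookie introduced in 1912.",
    "Apple is a company headquartered in Cupertino.",
]

companies = {"Microsoft"}  # seed knowledge
patterns = set()

for _ in range(3):  # a few bootstrapping iterations
    # 1) learn a pattern from sentences mentioning known companies
    for sentence in corpus:
        for name in companies:
            if sentence.startswith(name + " "):
                patterns.add(sentence[len(name):].split(" headquartered")[0].strip())
    # 2) use learned patterns to promote new candidate instances
    for sentence in corpus:
        for pattern in patterns:
            match = re.match(r"(\w+) " + re.escape(pattern), sentence)
            if match:
                companies.add(match.group(1))

print(companies)  # {'Microsoft', 'Google', 'Apple'}
```

With a clean pattern the loop works; with a noisy one, each bad fact teaches the system to extract more bad facts, which is exactly the failure mode NELL's creators describe below.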

But instead NELL slumps lifelessly soon after human hands tip it on its feet. The accuracy of extracted facts dropped from an estimated 90% to 57%. Human intervention became necessary: “We had to do something,” Mitchell told fellow researchers last year. The interventions became routine, and NELL became dependent on humans. NELL employs machine learning, but knowledge acquisition might not be machine learnable: “NELL makes mistakes that lead to learning to make additional mistakes,” as NELL’s creators observed.

The program came to believe that F.W. de Klerk is a scientist, not a former president of South Africa—providing little help in resolving ambiguous parsing problems. At the same time, NELL needs better parsing to mine knowledge more accurately: “[W]e know NELL needs to learn to parse,” Mitchell wrote in email. This particular Catch-22 might not be fundamentally blocking. But if NELL can’t enhance the performance of lower-level components, those components might clamp a weighty asymptote on NELL’s progress.

Represent, represent
An older, less surmountable, perhaps impossible problem faced by NELL is how to arrange facts, assuming they can be made immaculate. Facts gleaned by NELL must be pigeon-holed into one of just 270 categories—tight confines for all of knowledge. Mitchell wants NELL to be able to expand these categories. However, while incorrect individual facts might compromise NELL, getting categories wrong would be fatal.

But no one knows how to write a kind of forensic program that accurately reconstructs a taxonomy from its faint imprint in text. Humans manage, but only with bickering. Even organizing knowledge in relatively narrow scientific domains poses challenges: small molecules in biology, for example. Some labs just isolate and name the different species. Other researchers with different interests represent a molecule by its weight and the weights of its component parts, information essential to studying metabolism. However, 2D representations are needed for yet another set of purposes (reasoning about reaction mechanisms), whereas docking studies call for 3D representations, and so on.

What a thing is or, more specifically, how you represent it, depends on what you are trying to do—just as the quest for word senses discovered. So even for the apparently simple task of representing a type of molecule, “there is not one absolute answer,” according to Fabien Campagne, research professor of computational biomedicine at Weill Medical College. The implication is that representation isn’t fixed, pre-defined. And new lines of inquiry, wrote Campagne, “may require totally new representations of the same entity.”
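
A small sketch makes the point concrete: the same molecule modeled three ways, for three purposes. The classes are illustrative only; the glucose formula and molecular weight are standard values, and the structural fields are truncated placeholders.

```python
# Three representations of "the same" molecule, each suited to a different
# purpose (a sketch; the glucose formula and weight are standard values,
# the structural fields are truncated placeholders).
from dataclasses import dataclass, field

@dataclass
class NamedSpecies:              # enough for cataloguing a species
    name: str

@dataclass
class MassRepresentation:        # enough for reasoning about metabolism
    name: str
    formula: str
    molecular_weight: float      # g/mol

@dataclass
class StructuralRepresentation:  # needed for mechanisms or docking studies
    name: str
    atoms: list = field(default_factory=list)        # e.g. ["C", "C", "O", ...]
    bonds: list = field(default_factory=list)        # e.g. [(0, 1), (1, 2), ...]
    coordinates: list = field(default_factory=list)  # 3D positions, if needed

glucose_by_name        = NamedSpecies("glucose")
glucose_for_metabolism = MassRepresentation("glucose", "C6H12O6", 180.16)
glucose_for_docking    = StructuralRepresentation("glucose", atoms=["C", "C", "O"])
```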

One of NELL’s biological conceptions is that “blood is a subpart of the body within the right ventricle.” Perhaps this and a complement of many other facts cut in a similar shape can represent blood in a way that answers some purpose or purposes. But it will not apply in discussions of fish blood. (Fish have no right ventricle.) And when it comes to human transfusion, blood is more a subpart of a bag.

The difficulties of representation represented: Marcel Duchamp’s Nude Descending a Staircase. NELL’s representation of Marcel Duchamp

Particular regions of knowledge can be tamed by effort or imposition of a scheme by raw exercise of authority. But these fiefdoms resist unification and generally conflict. After millennia of effort, humans have yet to devise a giant plan which would harmonize all knowledge. The Wikipedia folksonomy works well for people but badly for automating reasoning. Blood diamonds and political parties in Africa, for example, share a category but clearly require different handling. One knowledge project, YAGO, simply lops off the Wikipedia taxonomy.

The dream of a database of everything is very much alive. Microsoft Research, from its Beijing lab, recently unveiled a project named Probase which its creators say contains 2.7 million concepts sifted from 1.68 billion web pages. “[P]robably it already includes most, if not all, concepts of worldly facts that human beings have formed in their mind,” claim the researchers with refreshing idealism. Leaving aside the contention that everything ever thought has been registered on the Internet, there still are no universal injection molds—categories—ready to be blown full of knowledge.

A much earlier, equally ambitious effort called Cyc failed for a number of reasons, but insouciance about knowledge engineering, about what to put where, contributed to Cyc’s collapse. Human beings tried to build Cyc’s knowledgebase by hand, assembling a Jenga stack of about one million facts before giving up.

NELL may be an automated version of Cyc. And it might be even less successful. NELL’s minders already have their hands full tweaking the program’s learning habits to keep fact accuracy up. NELL is inferior to Cyc when it comes to the complexity of knowledge each system can handle. Unless NELL can learn to create categories, people will have to do it, entailing a monumental knowledge engineering effort and one not guaranteed to succeed. Machine learning relies on examples, which simply might not work for elucidating categories and taxonomies. Undoubtedly, that is far harder than extracting facts.

NELL may also represent a kind of inverse of IBM’s Watson. NELL arguably is creating a huge Jeopardy clue database full of facts like “Microsoft is a company headquartered in Redmond.” NELL and Watson attack essentially the same problem of knowledge, just from different directions. But it will be difficult for NELL to reach even Watson’s level of performance. Watson left untouched the texts among its 15 terabytes of data. NELL eviscerates text, centrifuging the slurry to separate out facts and reassembling them into a formalized structure. That is harder.

And Watson is confined to the wading pool, the factoid shallows of knowledge. The program is out of its depth on questions that require reasoning and understanding of categories. That may be why, in Final Jeopardy, Watson answered “Toronto” not “Chicago” for the United States city whose largest airport is named for a World War II hero and second largest for a World War II battle. Watson likely could have separately identified O’Hare and Midway if asked sequential questions. And pegging Chicago as the city served by both airports also presumably would be automatic for the computer. But decomposing and then answering the series appears to have been too hard. NIST dropped such questions—twice—for their perceived insuperability. And yet they are trivial compared to answering questions about the relations between F.W. de Klerk, the African National Congress, and the Pan Africanist Congress, questions of the kind which have stalled progress in parsing.

Google vs. language
Google contends with language constantly—and prevails. Most Google queries are actually noun phrases like “washed baby carrots.” To return relevant results, Google needs to know if the query is about a clean baby or clean carrots. Last year, a team of researchers crushed this problem under a trillion-word heap of text harvested from the many-acred Google server farms. Statistically, the two words “baby carrots” show up together more than “washed baby.” Problem solved. Well, mostly.
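
Stripped to its essentials, the method compares co-occurrence counts, roughly as in this sketch. The counts are made up, standing in for the web-scale n-gram statistics the researchers actually used.

```python
# Deciding the bracketing of "washed baby carrots" by comparing n-gram counts
# (a sketch; the counts are invented, standing in for web-scale statistics).

ngram_counts = {
    ("washed", "baby"): 1_200,     # hypothetical frequencies
    ("baby", "carrots"): 310_000,
}

def bracket(w1, w2, w3, counts):
    """Group the pair of adjacent words that co-occurs more often."""
    if counts.get((w2, w3), 0) >= counts.get((w1, w2), 0):
        return f"[{w1} [{w2} {w3}]]"   # washed [baby carrots]
    return f"[[{w1} {w2}] {w3}]"       # [washed baby] carrots

print(bracket("washed", "baby", "carrots", ngram_counts))
```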

The method works an impressive 95.4% of the time, at least on sentences from The Wall Street Journal. Perhaps as important, accuracy muscled up as the system ingested ever-larger amounts of data. “Web-scale data improves performance,” trumpeted researchers. “That is, there is no data like more data.” And more data are inevitable. So will the growing deluge wash away the inconveniences of parsing and other language processing problems?

Performance did increase with data, but bang for the byte still dropped—precipitously. Torquing accuracy up just 0.1% required an order of magnitude increase in leverage, to four billion unique word sequences. Powering an ascent to 96% accuracy would require four quadrillion, assuming no further diminution of returns. To reach 97%, begin looking for 40 septillion text specimens.
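
The arithmetic behind those figures is easy to spell out. A minimal sketch, under the stated assumption that each additional 0.1% of accuracy costs another tenfold increase in data:

```python
# Back-of-the-envelope extrapolation: each additional 0.1% of accuracy costs
# another tenfold increase in data (the no-further-diminishing-returns
# assumption stated above).
base_accuracy = 95.4   # percent, at four billion unique word sequences
base_ngrams = 4e9

def ngrams_needed(target_accuracy):
    steps = round((target_accuracy - base_accuracy) / 0.1)
    return base_ngrams * 10 ** steps

print(f"{ngrams_needed(96.0):.0e}")  # 4e+15 -> four quadrillion
print(f"{ngrams_needed(97.0):.0e}")  # 4e+25 -> forty septillion
```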

Mine the gap: Does the Internet have enough words to solve noun phrases? (Adapted from Pitler et al., “Using Web-scale N-grams to Improve Base NP Parsing Performance”)

More data yielding ever better results is the exception, not the rule. The problem of word senses, for example, is relatively impervious to data-based assaults. “The learning curve moves upward fairly quickly with a few hundred or a few thousand examples,” according to Ted Pedersen, computer science professor at the University of Minnesota, Duluth, “but after a few thousand examples there's usually no further gain/learning, it just all gets very noisy.”

Conceivably, we are now witnessing the data wave in language processing. And it may pass over without sweeping away the problems.

Let the data speak, or silence please
In speech recognition too, according to MIT’s Jim Glass, “There is no data like more data.” Glass, head of MIT’s Spoken Language System Group, continued in email: “Everyone has been wondering where the asymptote will be for years but we are still eking out gains from more data.” However, evidence for continuing advance toward human levels of recognition accuracy is scarce, possibly non-existent.

Nova’s Watson documentary asserts that recognition accuracy is “getting better all the time” (~34:00) but doesn’t substantiate the claim. Replying to an email inquiry, a Nova producer re-asserted that programs like Dragon Naturally Speaking from Nuance “are clearly more accurate and continuing to improve,” but again adduced no evidence.

Guido Gallopyn, vice president of research and development at Nuance, has worked on Dragon Naturally Speaking for over a decade. He says Dragon’s error rate has been cut “more than in half.” But Gallopyn begged off providing actual figures, saying accuracy was “complicated.” He did acknowledge that there was still “a long way to go.” And while Gallopyn has faith that human-level performance can be attained, astonishingly, it is not a goal for which Nuance strives: “We don’t do that,” he stated flatly.

Slate also recently talked up speech recognition, specifically Google Voice. The article claims that programs like Dragon “tend to be slow and use up a lot of your computer's power when deciphering your words,” in contrast to Google’s powerful servers. In the Google cloud, 70 processor-years of data mashing can be squeezed into a single day. Accurate speech recognition then springs from the “magic of data,” but exactly how magic goes unmeasured. Google too is mum: “We don't have specific metrics to share on accuracy,” a spokesperson for the company said.

By contrast, The Wall Street Journal recently reported on how Google Voice is laughably mistake-prone, serving as the butt of jokes in a new comedic sub-genre.

There is no need for debate or speculation: the NIST benchmarks, gathering dust for a decade, can definitively answer the question of accuracy. The results would be suggestive for the prospects of web-scale data to overcome obstacles in language processing. Computer understanding of language, in turn, has substantial implications for machine intelligence. In any event, claims about recognition accuracy should come with data.

Today, all that can be said is this:

Progress in voice recognition: the sound of one hand clapping since 2001 (Adapted from NIST, “The History of Automatic Speech Recognition Evaluations at NIST”)

To be || not to be
That is the question about machine intelligence.

When Garry Kasparov was asked how IBM might improve Deep Blue, its chess playing computer, he answered tartly: “Teach it to resign earlier.” Kasparov, then world chess champion, had just soundly defeated Deep Blue. Rather than follow this advice, IBMers put some faster chips in, tweaked the software and then utterly destroyed Kasparov not long after, in 1997. It was IBM’s turn to vaunt: “One hundred years from now, people will say this day was the beginning of the Information Age,” claimed the head of the IBM team. Instead, apart from chess, Deep Blue has had no effect.

If Deep Blue represented an effort to rise above human intelligence by brute computational force, Watson represents the data wave. But we have been inundated by data for some time. Google released its trillion word torrent of text five years ago. Today the evidence may suggest that the problems of language will remain after the deluge. If the rising tide of world-wide data can’t float computing’s boat to human levels, “What’s the alternative?” demands Eugene Charniak. He perhaps means there is no alternative.

A somewhat radical idea is to revise the parts of speech, as Stanford University’s Christopher Manning has proposed. Disturbingly, Manning asks: “Are part-of-speech labels well-defined discrete properties enabling us to assign each word a single symbolic label?” Recall that words don’t cleanly map to discrete senses, and similarly that things in the world don’t fit into obvious, finite, universal categories. Now the parts of speech seem to be breaking down.

Tagging accuracy: time for new parts of speech? (Source: Flickr, Tone Ranger)

Manning is skeptical that machine learning could conjure even 0.2% more accuracy in the tagging of words with their part of speech. Achim Hoffmann, at the University of New South Wales, believes more generally that machine learning now bumps against a ceiling. “New techniques,” he adds, “are not going to substantially change that.” Hoffmann points out that relatively old techniques “are still among the most successful learning techniques today, despite the fact that many thousand new papers have been written in the field since then.”

For Hoffmann, the alternative is to approach intelligence not through language or knowledge but algorithm. Arguably, however, this is just a return to the very origins of artificial intelligence. John McCarthy, inventor of the term “artificial intelligence,” tried to find a formal logic and mathematical semantics that would lead to human-like reasoning. This project failed. It led to Cyc. As Cyc founder Doug Lenat wrote in 1990: “We don’t believe there is any shortcut to being intelligent, any yet-to-be-discovered Maxwell’s equations of thought.” Forget algorithm. Knowledge would pave the way to commonsense. Cyc, of course, also did not work.

Are we just turning in circles, or is the noose cinching tighter with repeated exertions? There is something viscerally compelling—disturbing—about Watson and its triumph. “Cast your mind back 20 years,” as AI researcher Edward Feigenbaum recently said in the pages of The New York Times, “and who would have thought this was possible?” But 20 years ago, Feigenbaum published a paper with Doug Lenat about a project called Cyc. Cyc aimed at full blown artificial intelligence. Watson stands in relation to a completely realized Cyc the way J. Craig Venter’s synthetic cell stands to the original vision of genetic engineering: a toy.

John McCarthy derided the Kasparov-Deep Blue spectacle, calling it “AI as sport.” Jimmy Lin, the former NIST coordinator, is not derisive but more ho-hum, worldly-wise about Watson: “Like a lot of things,” he says, “it’s a publicity stunt.” Perhaps an artificially intelligent computer wouldn’t fall for it, but people have. The New Yorker sees the triumphs of Deep Blue and Watson as forcing would-be defenders of humanity to move the goalposts back, to re-define the boundaries of intelligence and leave behind the fields recently annexed by computers. But the goalposts arguably have been moved up, so that weak artificial intelligence—artificial AI—can put it through the uprights.

The New York Times contends that Watson means “rethinking what it means to be human.” Actually what needs redefinition may be humanity’s relationship to dreams of technological transcendence.

Globe-spanning effort tightens vise on polio; eyes on Angola

Statistics at left, tragedy at right
Closer to victory than ever, polio eradication efforts have intensified, with 2011 bringing new initiatives and funding to nearly every front in the global war on the virus. The encirclement extends from presidential palaces to the streets of Luanda, Angola, to tent villages on the Kosi River in India. “[T]he reach is incredible,” said Ellyn Ogden, USAID’s polio eradication coordinator, “to the doorstep of every child in the developing world, multiple times… It is an extraordinary human achievement that is hard for most people [to] imagine in a peace-time program.”
In India, for example, an army of 2.5 million vaccinators visited 68 million homes and immunized 172 million children; the president of India kicked off the January campaign. Cumulative efforts have driven cases in India to historic lows: just 42 cases last year versus 741 in 2009.
In Africa, 15 countries launched a synchronized immunization campaign late last year with 290,000 vaccinators targeting 72 million children. But a similar campaign took place the year before—and the year before that. Yet despite these huge efforts, polio keeps coming back. Some countries like Burkina Faso have gotten rid of polio three times.
Nigeria once exported the most polio in Africa, but record-setting progress has occurred there. However, polio has developed a new stronghold—in Angola, which has fed explosive cross-border outbreaks. This year Angola will likely be the source of one third of the world’s polio cases. Continued transmission there has caused the Global Polio Eradication Initiative (GPEI) to miss a major end-2010 milestone. “Angola now is almost the most important front in the global war on polio, and the whole world is watching to see how we do here," said UNICEF Executive Director Anthony Lake. Lake visited Angola with Tachi Yamada, president of global health at the Bill & Melinda Gates Foundation, in January.
Angola freed itself of polio in 2002 only to suffer re-importation—from India. Since then, 33 vaccination campaigns over half a decade have failed to stamp out the disease. Lack of political commitment explains these failures, according to multiple sources within the eradication initiative. Angola’s vaccination rounds have been staffed to a large extent by children. Inadequate supervision has meant just a few hours of vaccinating a day, with efforts dropping off further over the course of three-day campaigns.
Political commitment now appears solidly locked in. Visiting Angola, Lake and Yamada met with President José Eduardo dos Santos. “ ‘I’ll lead the campaign,’ ” Yamada said the president told him. The following day, Angop, the state-run news agency, ran the headline “Head of State committed to eradication of polio.” Subsequent news releases showed a domino effect down the political chain of command from the vice-president, to governors, to administrators of individual districts. One release identified a district manager by name and as acknowledging “the availability of the necessary conditions for vaccinators to reach all areas of the district,” likely coded language for placing direct responsibility on the manager for ensuring vaccination of the 156,000 children under five in that district.
The World Health Organization (WHO) places equal emphasis on community involvement in its formula for effective immunization campaigns. In the past, vaccination plans have been centrally created and handed down for execution. WHO finds that the best “microplans,” which map out block-by-block strategies and awareness efforts, are developed by the communities involved. In this way, “communities hold themselves accountable,” as Tim Petersen, a program officer at the Bill & Melinda Gates Foundation, puts it.
Angola conducted a three-day polio vaccination campaign, February 23-25, across five high-risk areas of the country. A WHO spokesperson said the new decentralized planning led to some “hiccups” in execution. A report from independent monitors, expected in about a week, will reveal the quality of the campaigns which aim to immunize 90% of children under five.
[Note: I attempted to travel to Angola to cover the vaccination campaigns but was not granted a visa. The Angolan consulate in New York informed me four days before my flight that the signature page of my application was “missing,” that my letter from WHO did not meet requirements for documentation related to the purpose of travel and, still less plausibly, that the consulate had been trying unsuccessfully to reach me concerning these problems.]
High-risk areas will be covered twice more in upcoming nation-wide vaccination campaigns. However, “It is clear that Angola has a tough few months ahead,” says Sona Bari, communications officer for polio at WHO. But Angola has beaten polio before. Today cases are relatively few, at about 30 a year, certainly in comparison with 1999, which saw more than 1,000. Also, the intensity of transmission is much lower in Angola than that faced by, say, India.
While political commitment seems to be in place, stability might be an issue. Some political tremors from Tunisia and Egypt have reached Angola, such as a call for public protest on March 7. (Recently Angola was without internet access for about two days, which state media attributed to a cut cable.) Prior to the 7th, US State Department spokesperson Hilary Renner said she was not aware of “significant demonstrations in Angola.” The Associated Press report on turnout and reaction on the 7th suggests revolutionary force so far is not strong.
Rest of the World: Key Fronts
The eradication initiative must close out the major global sources of polio, India and Nigeria. India is closer to the goal and mostly needs to sustain its exertions. Nigeria trails but has made enormous progress; there are risks but today the country is essentially on track. If Angola too has turned in the right direction, Pakistan becomes the next focus.
Pakistan presents almost all possible obstacles to polio eradication. As in India, the oral polio vaccine fails to immunize a significant number of children in Pakistan, usually under conditions of very poor health and hygiene. Some parents in Pakistan refuse to allow their children to be immunized, a problem also once seen among Muslims in Nigeria who feared the vaccine had been purposefully tainted.
Much of Pakistan’s polio burden falls on border states with Afghanistan where security issues prevent vaccination teams from operating. The virus travels to more secure areas of the country where poorly run, corruption-riven vaccination campaigns fail to stamp it out. Even the house of a former minister of health was bypassed—twice—by polio vaccinators. “I had to call them to get my kids vaccinated,” reported the former minister.
Pakistan’s political stability is low. Natural disasters—huge flooding—have made a difficult situation worse. Last year saw a jump to 144 cases, up from 89. And so far in 2011, cases are accumulating more rapidly. Fortunately, the Pakistan/Afghanistan polio complex has not exported the virus to the rest of the world—so far.
Pakistan figured prominently in the careful eradication orchestrations of early 2011. Bill Gates met with President Asif Ali Zardari on January 15th. On January 25th, an emergency plan to immunize 32 million children was announced. The same day brought a joint announcement of $100 million in funding from the Gates Foundation and Mohammed bin Zayed Al Nahyan, crown prince of Abu Dhabi, to support vaccination efforts, with $34 million earmarked for polio immunization in Pakistan and Afghanistan.
Afghanistan offers ample challenges, including security problems. However, the absolute number of cases, about 30 per year, is not extreme. Described as a “pretty strong program” by the Gates Foundation’s Tim Petersen, the Afghan polio eradication team appears to already enjoy the confidence of the members of the global eradication initiative.

The least controlled polio rampage is taking place in the Democratic Republic of Congo (DRC). Cases last year exploded to 100, versus three in 2009. The DRC and Angola, its neighbor to the southwest, comprise an epidemiological block like Afghanistan and Pakistan. There is “huge cross-border traffic” between Angola and the DRC, according to Apoorva Mallya, a program officer at the Bill & Melinda Gates Foundation. A lack of roads and transportation infrastructure greatly complicates operations. For example, biological samples from possible polio victims sometimes must be floated down the Congo River en route to a lab for analysis.

The eradication initiative is looking at “local, local solutions,” according to Mallya. At the same time it seeks high-level political commitment, just as in Angola and indeed all countries. WHO Director-General Margaret Chan travelled to the DRC in February to meet with President Joseph Kabila. UNICEF’s Anthony Lake then visited in the first week of March and called for “an absolute commitment” to vaccinate every child.

New Trends in Media Coverage
The front in the polio war has been discouragingly broad and variable. Countries have been won and lost—some more than once. Low numbers or even single cases perpetually spatter the map. Gabon just reported a case, its first in more than ten years. Seemingly safe areas like Tajikistan and Congo have recently seen blowout epidemics. Transmission has become fully re-established in four African countries, not only Angola but also, for instance, Chad. Total cases globally have rarely dipped under a thousand a year over the last decade, giving rise to the view that this “last one percent,” like Jell-O, will squish somewhere else no matter how hard it is squeezed.
But the polio eradication initiative has focused on choking off the sources, following the strategy of von Clausewitz, who, in On War, recommended subduing the enemy “center of gravity.” In polio, that’s India and Nigeria. No other countries come close in polio burden. It’s not over, but India is astonishingly near to eliminating polio. The states of Bihar and Uttar Pradesh, where polio has been worst in India, haven’t seen a single case in six months. Among much else, this required tracking and immunizing enormous mobile populations. As many as six million people are on the move each day, according to a WHO estimate, with accessibility complicated by flooding of the Kosi River in Bihar. In addition, India’s eradication effort has overcome vaccine failure by achieving very high levels of population immunity: the virus basically can’t penetrate the thicket of immune people to access the vulnerable, those children in whom the vaccine didn’t take.
The Associated Press recently recognized these developments in "India brings hope to stalled fight against polio."  ABC News posted a story in which progress in India provides hope for polio becoming “just the second disease to be wiped off the planet since smallpox.” (ABC News received $1.5 million from the Gates Foundation to support a television series on global health, making the representativeness of their current coverage more difficult to ascertain.)
Most recently, The Globe and Mail ran a polio package drawing on successes in India, saying “Polio is all but gone from India…” (I have written to similar effect in Scientific American.) One article is entitled: “Anti-polio battle on verge of victory.”
No country has been as difficult as India. The obstacles in the countries of the rest of the world are largely different combinations of known problems which have been surmounted somewhere already. Polio has been expunged from anarchic, conflict-ridden states like Somalia. Rejection of vaccine by parents on cultural or religious grounds has been overcome in Nigeria. The quality and coverage of vaccination campaigns have been lifted even amidst rife corruption. Clearly, however, past performance doesn’t guarantee future results. Completely novel problems could arise. Failure on one or more of the numerous fronts in eradication is likely; compound failures could wreck the broader enterprise.
However, while feasibility remains an issue, coverage appears to be shifting—to whether the polio “endgame” can be won. The wild poliovirus is not the only threat to eradication. Very rarely, the oral polio vaccine, which uses a live attenuated virus, mutates into virulent form. Thus, in a sense, the eradication effort is fighting fire with fire, as a recent op-ed piece in the Los Angeles Times points out in “The Polio Virus Fights Back.” Not long after, Myanmar reported just such a case of vaccine-derived poliovirus. These mutants can—and have—spread. So far no related cases have been reported because the vaccine protects against it. And in Myanmar, “Immunization demand is high and the country conducts good quality campaigns,” according to WHO’s Sona Bari. In India, where oral polio vaccine dosing has been most intense, 2010 saw only one case of vaccine-derived virus. Rightly, however, the subject will likely gain in prominence in media coverage.
Not only has the nature of feasibility questioning changed, shifting to whether the next phase can be won, but the position of arch critic of eradication now appears to be open. Donald Henderson played a key role in smallpox eradication but has long been skeptical of polio eradication. According to a January Seattle Times article, however, Henderson changed his mind six months ago and now believes polio could be eliminated. But not long after, The New York Times cited Henderson as a vehement critic of eradication. In mid-February, however, they ran a different story, “Can Polio Be Eradicated? A Skeptic Now Thinks So,” which (re-)disclosed that Henderson had changed his mind. The title of the earlier article in which Henderson was a critic also appears to have been changed online from “Critics Say Gates’s Anti-Polio Push Is Misdirected” to “Gates Calls for a Final Push to Eradicate Polio.”
At present, this leaves only the desirability of polio eradication in question. While no one argues for polio, there are other diseases which are more widespread, taking more lives and causing greater suffering. According to The New York Times, Richard Horton, editor-in-chief of The Lancet, tweeted:
“Bill Gates’s obsession with polio is distorting priorities in other critical [Bill & Melinda Gates Foundation] areas. Global health does not depend on polio eradication.”
The Gates Foundation, however, embraces the accusation. “We are overemphasizing polio,” says the foundation’s Tachi Yamada. Polio became the foundation’s number one priority late last year. And it’s not just Bill Gates or his foundation. In 2008, WHO Director-General Margaret Chan said “I am putting the full operational power of the World Health Organization into the job of finishing polio eradication… I am making polio eradication the Organization’s top operational priority on a most urgent, if not an emergency basis.”
But the emphasis on polio is indeed disproportionate. Both the Gates Foundation and WHO recognize that eradication would not just rid the world of a horrific disease: it would be a giant symbolic victory for global health. Chan, in her 2008 speech, also said “We have to prove the power of public health,” a goal which eradication would achieve. Similarly, the Gates Foundation’s Yamada doesn’t want to give “fuel to cynics” by having eradication fail but instead to demonstrate that “this is what development assistance can do.”
Returning to the matter of eradication critics, The New York Times also quoted bioethicist Arthur L. Caplan, a professor at the University of Pennsylvania, as saying “We ought to admit that the best we can achieve is control.” In June 2009, Caplan wrote an opinion piece in The Lancet entitled “Is Disease Eradication Ethical?” Caplan wondered if eradication was possible since it hadn’t worked after more than two decades of effort. The financial cost was high and diverted resources from better, more life-saving uses.
Caplan declined to comment for this article. However, questioning eradication as a strategy in global health, as polio demonstrates, is a worthwhile endeavor. And if polio resurges, so will skepticism of eradication.
Space Race to Human Race
The upcoming retirement of the Space Shuttle likely will attract enormous coverage. The Shuttle is not being replaced, however. And there are currently no plans for even a single human to permanently leave the planet. Still, the expectation of a spacefaring humanity persists, although the 1960s might remain the golden age of manned space exploration.
In other words, the world has missed that the next giant step for humankind will take place on planet Earth. The polio eradication effort might actually be larger than the Apollo program. Already in India, the number of cases can be counted down to zero; other countries might follow.
It’s a good story.