Rest in Peas: The Unrecognized Death of Speech Recognition

Pushing up daisies (Photo courtesy of Creative Coffins)

Mispredicted Words, Mispredicted Futures

The accuracy of computer speech recognition flat-lined in 2001, before reaching human levels. The funding plug was pulled, but no funeral, no text-to-speech eulogy followed. Words never meant very much to computers—which made them ten times more error-prone than humans. Humans expected that computer understanding of language would lead to artificially intelligent machines, inevitably and quickly. But the mispredicted words of speech recognition have rewritten that narrative. We just haven’t recognized it yet.

After a long gestation period in academia, speech recognition bore twins in 1982: the suggestively-named Kurzweil Applied Intelligence and sibling rival Dragon Systems. Kurzweil’s software, by age three, could understand all of a thousand words—but only when spoken one painstakingly-articulated word at a time. Two years later, in 1987, the computer’s lexicon reached 20,000 words, entering the realm of human vocabularies which range from 10,000 to 150,000 words. But recognition accuracy was horrific: 90% wrong in 1993. Another two years, however, and the error rate pushed below 50%. More importantly, Dragon Systems unveiled its Naturally Speaking software in 1997 which recognized normal human speech. Years of talking to the computer like a speech therapist seemingly paid off.

However, the core language machinery that crushed sounds into words actually dated to the 1950s and ‘60s and had not changed. Progress mainly came from freakishly faster computers and a burgeoning profusion of digital text.

Speech recognizers make educated guesses at what is being said. They play the odds. For example, the phrase “serve as the inspiration,” is ten times more likely than “serve as the installation,” which sounds similar. Such statistical models become more precise given more data. Helpfully, the digital word supply leapt from essentially zero to about a million words in the 1980s when a body of literary text called the Brown Corpus became available. Millions turned to billions as the Internet grew in the 1990s. Inevitably, Google published a trillion-word corpus in 2006. Speech recognition accuracy, borne aloft by exponential trends in text and transistors, rose skyward. But it couldn’t reach human heights.
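Playing the odds, as the paragraph above describes, amounts to comparing the probabilities a language model assigns to competing word sequences. A minimal sketch of the idea follows, using a toy bigram model; the counts here are invented for illustration, not drawn from any real corpus:

```python
import math
from collections import Counter

# Toy counts, invented for illustration -- a real recognizer would
# estimate these from billions of words of text.
bigram_counts = Counter({
    ("the", "inspiration"): 100,
    ("the", "installation"): 10,
})
unigram_counts = Counter({"the": 110})

def bigram_prob(prev: str, word: str) -> float:
    """Estimate P(word | prev) from corpus counts."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

p_inspiration = bigram_prob("the", "inspiration")
p_installation = bigram_prob("the", "installation")

# With these counts, "inspiration" is ten times more likely after "the".
print(math.isclose(p_inspiration, 10 * p_installation))  # True
```

Given more text, these probability estimates sharpen, which is why the growing supply of digital words mattered so much.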

Source: National Institute of Standards and Technology Benchmark Test History 

“I’m sorry, Dave. I’m afraid I can’t do that.”

In 2001 recognition accuracy topped out at 80%, far short of HAL-like levels of comprehension. Adding data or computing power made no difference. Researchers at Carnegie Mellon University checked again in 2006 and found the situation unchanged. With human discrimination as high as 98%, the unclosed gap left little basis for conversation. But sticking to a few topics, like numbers, helped. Saying “one” into the phone works about as well as pressing a button, approaching 100% accuracy. But loosen the vocabulary constraint and recognition begins to drift, turning to vertigo in the wide-open vastness of linguistic space.

The language universe is large, Google’s trillion words a mere scrawl on its surface. One estimate puts the number of possible sentences at 10^570. Through constant talking and writing, more of the possibilities of language enter into our possession. But plenty of unanticipated combinations remain, forcing speech recognizers into risky guesses. Even where data are lush, picking what’s most likely can be a mistake because meaning often pools in a key word or two. Recognition systems, by going with the “best” bet, are prone to interpret meaning-rich terms as more common but similar-sounding words, draining sense from the sentence.
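An estimate of that astronomical magnitude can be reached by back-of-envelope counting. The sketch below is one illustrative route (the vocabulary size and sentence length are assumptions for illustration, not the cited estimate's actual method):

```python
# Back-of-envelope size of the space of raw word sequences.
vocab_size = 100_000     # upper end of human vocabulary size (assumed)
sentence_length = 114    # words per "sentence" (assumed)

# Python integers are arbitrary precision, so the exact value fits.
sequences = vocab_size ** sentence_length

# Order of magnitude = number of digits minus one.
digits = len(str(sequences)) - 1
print(digits)  # 570
```

Even modest-length sentences over a realistic vocabulary explode combinatorially, which is why no corpus, however large, can cover more than a scrawl of the possibilities.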

Strings, heavy with meaning. (Photo credit: t_a_i_s)

Statistics veiling ignorance

Many spoken words sound the same. Saying “recognize speech” makes a sound that can be indistinguishable from “wreck a nice beach.” Other howlers include “wreck an eyes peach” and “recondite speech.” But with a little knowledge of word meaning and grammar, it seems like a computer ought to be able to puzzle it out. Ironically, however, much of the progress in speech recognition came from a conscious rejection of the deeper dimensions of language. As an IBM researcher famously put it: “Every time I fire a linguist my system improves.” But pink-slipping all the linguistics PhDs only gets you 80% accuracy, at best.

In practice, current recognition software employs some knowledge of language beyond just the outer surface of word sounds. But efforts to impart human-grade understanding of word meaning and syntax to computers have also fallen short.

We use grammar all the time, but no effort to completely formalize it in a set of rules has succeeded. If such rules exist, computer programs turned loose on great bodies of text haven’t been able to suss them out either. Progress in automatically parsing sentences into their grammatical components has been surprisingly limited. A 1996 look at the state of the art reported that “Despite over three decades of research effort, no practical domain-independent parser of unrestricted text has been developed.” As with speech recognition, parsing works best inside snug linguistic boxes, like medical terminology, but weakens when you take down the fences holding back the untamed wilds. Today’s parsers “very crudely are about 80% right on average on unrestricted text,” according to Cambridge professor Ted Briscoe, author of the 1996 report. Parsers and speech recognition have penetrated language to similar, considerable depths, but without reaching a fundamental understanding.

Researchers have also tried to endow computers with knowledge of word meanings. Words are defined by other words, to state the seemingly obvious. And definitions, of course, live in a dictionary. In the early 1990s, Microsoft Research developed a system called MindNet which “read” the dictionary and traced out a network from each word out to every mention of it in the definitions of other words.

Words have multiple definitions until they are used in a sentence which narrows the possibilities. MindNet deduced the intended definition of a word by combing through the networks of the other words in the sentence, looking for overlap. Consider the sentence, “The driver struck the ball.” To figure out the intended meaning of “driver,” MindNet followed the network to the definition for “golf” which includes the word “ball.” So driver means a kind of golf club. Or does it? Maybe the sentence means a car crashed into a group of people at a party.
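MindNet's overlap strategy can be sketched in the spirit of the classic Lesk algorithm: score each candidate sense by how much its definition overlaps the words of the sentence. The senses and glosses below are invented stand-ins for illustration, not MindNet's actual data:

```python
# Simplified word-sense disambiguation by definition overlap,
# in the spirit of MindNet (glosses invented for illustration).
senses = {
    "driver": {
        "golf club": {"golf", "club", "hit", "ball", "tee"},
        "vehicle operator": {"person", "operates", "car", "vehicle"},
    },
}

def disambiguate(word: str, context_words: list[str]) -> str:
    """Pick the sense whose gloss shares the most words with the context."""
    context = set(context_words)
    return max(senses[word], key=lambda s: len(senses[word][s] & context))

# "The driver struck the ball." -- "ball" overlaps the golf gloss.
sense = disambiguate("driver", ["struck", "the", "ball"])
print(sense)  # golf club
```

The car-crash reading loses here only because its gloss happens to share no words with the sentence, which illustrates how brittle pure overlap counting is.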

To guess meanings more accurately, MindNet expanded the data on which it based its statistics, much as speech recognizers did. The program ingested encyclopedias and other online texts, carefully assigning probabilistic weights based on what it learned. But that wasn’t enough. MindNet’s goal of “resolving semantic ambiguities in text” remains unattained. The project, the first undertaken by Microsoft Research after it was founded in 1991, was shelved in 2005.

Can’t get there from here

We have learned that speech is not just sounds. The acoustic signal doesn’t carry enough information for reliable interpretation, even when boosted by statistical analysis of terabytes of example phrases. As the leading lights of speech recognition acknowledged last May, “it is not possible to predict and collect separate data for any and all types of speech…” The approach of the last two decades has hit a dead end. Similarly, the meaning of a word is not fully captured just by pointing to other words as in MindNet’s approach. Grammar likewise escapes crisp formalization.  

To some, these developments are no surprise. In 1986, Terry Winograd and Fernando Flores audaciously concluded that “computers cannot understand language.” In their book, Understanding Computers and Cognition, the authors argued from biology and philosophy rather than producing a proof like Einstein’s demonstration that nothing can travel faster than light. So not everyone agreed. Bill Gates described it as “a complete horseshit book” shortly after it appeared, but acknowledged that “it has to be read,” a wise amendment given the balance of evidence from the last quarter century.

Fortunately, the question of whether computers are subject to fundamental limits doesn’t need to be answered. Progress in conversational speech recognition accuracy has clearly halted and we have abandoned further frontal assaults. The research arm of the Pentagon, DARPA, declared victory and withdrew. Many decades ago, DARPA funded the basic research behind both the Internet and today’s mouse-and-menus computer interface. More recently, the agency financed investigations into conversational speech recognition but shifted priorities and money after accuracy plateaued. Microsoft Research persisted longer in its pursuit of a seeing, talking computer. But that vision became increasingly spectral, and today none of the Speech Technology group’s projects aspire to push speech recognition to human levels.

Cognitive dissonance

We are surrounded by unceasing, rapid technological advance, especially in information technology. It is impossible for something to be unattainable. There has to be another way. Right? Yes—but it’s more difficult than the approach that didn’t work. In place of simple speech recognition, researchers last year proposed “cognition-derived recognition” in a paper authored by leading academics, a scientist from Microsoft Research and a co-founder of Dragon Systems. The project entails research to “understand and emulate relevant human capabilities” as well as understanding how the brain processes language. The researchers, with that particularly human talent for euphemism, are actually saying that we need artificial intelligence if computers are going to understand us.

Originally, however, speech recognition was going to lead to artificial intelligence. Computing pioneer Alan Turing suggested in 1950 that we “provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.” Over half a century later, artificial intelligence has become prerequisite to understanding speech. We have neither the chicken nor the egg.

Speech recognition pioneer Ray Kurzweil piloted computing a long way down the path toward artificial intelligence. His software programs first recognized printed characters, then images and finally spoken words. Quite reasonably, Kurzweil looked at the trajectory he had helped carve and prophesied that machines would inevitably become intelligent and then spiritual. However, because we are no longer banging away at speech recognition, this new great chain of being has a missing link.

That void and its potential implications have gone unremarked, the greatest recognition error of all. Perhaps no one much noticed when the National Institute of Standards and Technology simply stopped benchmarking the accuracy of conversational speech recognition. And no one, speech researchers included, broadcasts their own bad news. So conventional belief remains that speech recognition and even artificial intelligence will arrive someday, somehow. Similar beliefs cling to manned space travel. Wisely, when President Obama cancelled the Ares program, he made provisions for research into “game-changing new technology,” as an advisor put it. Rather than challenge a cherished belief, perhaps the President knew to scale it back until it fades away.

Source: Google

Speech recognition seems to be following a similar pattern, signal blending into background noise. News mentions of Dragon Systems’ Naturally Speaking software peaked at the same time as recognition accuracy, 1999, and declined thereafter. “Speech recognition” shows a broadly similar pattern, with peak mentions coming in 2002, the last year in which NIST benchmarked conversational speech recognition.

With the flattening of recognition accuracy comes the flattening of a great story arc of our age: the imminent arrival of artificial intelligence. Mispredicted words have cascaded into mispredictions of the future. Protean language leaves the future unauthored.




Cost of space tourism "rides" falls

Goofy yet compelling. (Photo credit: Armadillo Aerospace)

I have mostly denigrated space tourism, at least as a stepping stone to human space exploration. Proponents believe that attaching market forces to technology (which always advances) will lead inevitably to humanity exploring ever-more distant space.

About half of the equation works: market forces are driving down prices. Richard Branson's Virgin Galactic began by charging $200,000 for a ride aboard its SpaceShipTwo. But now, for just $100,000, Armadillo Aerospace is offering what it calls "sub-orbital joy rides." The mocking shift in nomenclature is perhaps a more important development than Armadillo's becoming the Budget Rent-a-Car of space tourism.

Both Armadillo and Virgin only get you to the edge of space, about 62 miles up. To get higher, there is presently no alternative to a multi-stage rocket and not even market forces can change that. For the real deal, you still need to disgorge a million dollars to fly on the Soyuz.




The Contours of Medical Progress in Depression, Diabetes and Arthritis

Celebrex photo (Allison Turrell)

Although medicine has never been more advanced, progress against the two biggest killers, cancer and heart disease, now comes more slowly than it did decades ago. A skeptical reader suggested that recent breakthroughs in depression, diabetes and arthritis present a truer picture of medical advance. But surveying these other diseases in fact reveals similar contours: the steepest gains occur in the past with progress flattening toward the present.


No antidepressant, however new, surpasses the very first, imipramine, which dates to 1952. Although we are on the third or fourth generation of antidepressants, the newest agents don’t outperform the oldest. Savella, the most recently approved antidepressant, worked the same as imipramine in a meta-study of seven clinical trials. Same for Effexor. Same for Zoloft. Same for Prozac. Unsurprisingly, the newer agents are about the same in terms of efficacy compared with each other. For severe, psychotic depression, imipramine remains the treatment of choice.

Having more therapeutic options greatly aids the art of treating an individual patient, and welcome progress has come on side effects. But we see refinement, not revolution. Conceivably, research emphasis shifts to side effects precisely when progress on defeating a given disease slows. “Polypharmacy,” prescribing more than one drug for a condition, is another such symptom visible in antidepressants.

The reduced (or sometimes merely different) side effects of newer antidepressants come courtesy of very substantial advances in chemistry and receptor biology. Modern compounds strike their targets far more precisely than the older, less discriminating, “dirtier” agents like imipramine. However, consummate skills at the microscopic level are not matched by an understanding of the overall machinery of depression despite a vast expansion of neurological knowledge. In some respects, depression appears more complicated than ever, a tangled skein catching up many different biological pathways, looping through environmental and genetic factors. Breaking free of a complex disease, whether cancer or depression, requires more than snipping the single threads we have found so far.


In diabetes, insulin clearly wins the prize for biggest breakthrough, one achieved in 1923 before there were clinical trials. But none were needed. Insulin very straightforwardly saves the lives of type 1 diabetics who would die without it. Living with diabetes has since become progressively less and less difficult. Insulin has been improved many times, but these are refinements, like the recent fast-acting or long-release variants.

After insulin, the largest absolute gain in glycemic control came in the late 1950s and early 1960s with the introduction of sulfonylureas. For type 2 diabetes, these drugs were frontline until 1995, when metformin jumped to the front because of its greatly reduced side effects. The sulfonylureas actually provide better glycemic control than metformin, but ultimately they exhaust the pancreas. After metformin came a series of new drug classes: glitazones in 1999, GLP-1 analogs like Byetta in 2005 and DPP-4 inhibitors like Januvia in 2006. These drugs exploit recently acquired, fine-grain knowledge of biochemical pathways. By contrast, we don't really know how metformin works, except that it acts mainly on the liver. But the new drugs aren’t “better” than older ones. Instead they provide useful (albeit more expensive) treatment alternatives.

As with depression, increased mastery of microbiological detail in diabetes hasn’t translated into mastery of the disease, a cure. Remarkably, gastric bypass surgery somehow controls type 2 diabetes quickly and not simply because of weight loss. A drug mimicking such an effect would likely be the biggest diabetes breakthrough ever. But the mysterious target has been lurking since 1987. Although today’s suite of diabetes therapies enables more precise glycemic control than ever before, for now, the largest advances are more than half a century old.

Rheumatoid Arthritis

A wave of new biological therapies has washed over rheumatoid arthritis, establishing a new high water mark in relief and remission. For all their scientific brilliance, however, biologicals back up an older drug, methotrexate. In use as early as 1967, methotrexate became the drug of choice in the mid-1980s and it remains first line therapy today.

The most important new biologicals, the TNF inhibitors, arrived in 1998 when the FDA approved Enbrel. However, in head-to-head trials, TNF inhibitors work about the same as methotrexate. Enbrel, for example, performed marginally better than methotrexate in one study while Humira did marginally worse in another. Not surprisingly, the differences between these biologicals are negligible. However, in patients where methotrexate has not worked, adding a TNF inhibitor can make an enormous difference. Remicade plus methotrexate boosted response rates to 52% compared to 17% with methotrexate alone.

The new agents also work much faster and appear to provide much greater protection against joint damage. Still, the biologicals haven’t beaten methotrexate and usually join it in combination therapy. In the case of Remicade, methotrexate is a requirement.

The newest biological for rheumatoid arthritis, Actemra, approved in January, 2010, supplants no other drugs but occupies the last line of treatment, after the TNF inhibitors. Actemra joins the other recently-approved biologicals Rituxan and Orencia sitting at the end of the bench.


Source: Methotrexate works in 70-80% of cases of moderate to severe rheumatoid arthritis. When TNF inhibitors are taken, 70% are taken with methotrexate or another conventional anti-rheumatic drug. TNF inhibitors work for about 75% of patients. The newest agents, Actemra (2010), Rituxan (2006) and Orencia (2005), address the smallest patient group, those failing TNF inhibitors.

The benefits of new drugs for heart disease, breast cancer, depression, diabetes and rheumatoid arthritis have declined over time, with the largest gains realized decades ago. Methotrexate, the first-choice drug since the mid-1980s, provided the biggest gain, followed by TNF inhibitors in the 1990s, with the newest drug, Actemra, last and least.

However, we are heir to decades of medical advances. The best time in all of human history to have any chronic disease is now (or, better yet, tomorrow). Surprisingly, though, we are adding less to this legacy than previous generations—and we’re a bit oblivious about what’s happening.




Wall Street Journal: Pulling the plug on polio eradication?

In counterpoint to the New York Times’ positive coverage of the war on polio earlier this month, the Wall Street Journal on Friday put forward a case for abandoning the goal of eradication—and not just for polio. The Journal depicts a potentially seismic policy shift as emanating from the de facto leader of global health, Bill Gates. Such a reversal is unlikely.

Gates got behind polio eradication in 1999 with a $50 million grant he believed would close out the disease. He predicted in 2000 that “If necessary resources and political will are devoted to polio eradication, the world can claim victory over this killer by the end of this year and certify the planet as polio free by the year 2005.” A decade and nearly a billion dollars later, the result is not eradication but oscillation, case numbers rising and falling.

Debate about the wisdom of the eradication policy has ensued. Millions of children die from malaria, for example, compared to which polio’s afflictions, although still horrible, are minute. More suffering could be averted, the argument goes, with a different allocation of global health dollars. The Wall Street Journal urges a move away from disease-specific campaigns and towards strengthening of overall health systems.

Polio might be expensive, but dropping eradication might be more so.  A Lancet study in 2007 concluded that control would be more expensive than eradication. But whatever the optimal policy, it requires funding. Eradication provides a sense of urgency and heroism; a control strategy does not.

Credibility is also at stake for global health advocates and Bill Gates in particular. Gates has backed more than getting rid of polio. In 2007, he and wife Melinda strong-armed a skeptical global health community to embrace malaria eradication. As hoped, tremendous energy and funding were released. A giant vogue for malaria eradication ensued, like Ashton Kutcher’s Twitter-driven campaign for bed nets. At the governmental level, the largest surge in funding commitments to battle malaria came one year after setting eradication as the goal. Arguably to protect these gains, the Gates Foundation doubled down on polio eradication in 2008. And as the Wall Street Journal points out, Gates personally led the assault, descending in 2009 on the hottest polio spot, Nigeria, where vaccination lagged.

A resurgence of polio that came from consciously letting up on the disease, even if the best policy, would be a public relations disaster. The current trajectory also presents serious problems but far less severe: Eradication is expensive and doesn’t appear to be working. The solution has been continued mass dosing of the polio-vulnerable in the developing world and, for donor nations, a steady drip of good news that, yes, we are at the absolute cusp of eradication. Right now, the news is good in Nigeria and indeterminate in India, the familiar cusp yet again.

Ultimately, polio can be snuffed out by the downward pressure of eradication, the strengthening of health systems and much broader, slower and more costly development—improvements in food, water and sanitation.  In the near term, eradication will remain the strategy, but elusive.




Human Space Exploration: Scaled Back to Vanishing


The Orion capsule at miniature scale. Photo credit: NASA

An elaborate ruse is taking place. Instead of Mars, we’re going to an asteroid. NASA’s budget is supposed to grow by a seemingly hefty $6 billion, but that amounts to only a 1.5% increase next year, with just cost-of-living adjustments in the four years after. Finally, NASA administrators assured the Johnson Space Center that it would continue to be the home of mission control for human spaceflight—when no flights are planned.

But not all are duped. “[I]t is clear that this is the end of America’s leadership in space,” said Senator Richard Shelby of the aerospace-heavy state of Alabama, regarding the Obama administration’s plan for space exploration, announced yesterday.

Back in February, the President cancelled the Ares program to return to the moon and eventually Mars, a decision eliciting very little public consternation. Despite Congressional backlash from the Spacebelt states of Texas, Florida and Alabama, Ares remains cancelled. Also telling, part of the plan allocates $40 million for job retraining to Space Shuttle program employees put out of work by the Shuttle’s scheduled retirement later this year, making for an unexpected but revealing parallel with the Rustbelt.

Cagily, the President is deploying technology and market forces to defer human spaceflight—to infinity and beyond. NASA’s revised budget shifts billions into commercial space flight. Only left-wing socialists would argue against harnessing market forces to get into space. But the real jujitsu came in requiring that future rockets use new technology, not just a retread of the Apollo program. This is brilliant. Enthusiasts for space travel revere technology and so can hardly oppose Obama’s new, higher standard. However, in some respects, rocket technology has been unchanged for nearly a century. Konstantin Tsiolkovsky laid out many of the principles in 1903 in his Exploration of Cosmic Space by Means of Reaction Devices. Since Apollo forty years ago, no alternatives have been found to very large, multi-stage rockets burning liquid or solid fuel. Today, none are in near prospect. The chances of coming up with something novel by the new deadline of 2015 are very, very low. NASA chief Charles Bolden implored yesterday’s gathering at Cape Canaveral that “This is not for show. We want your ideas. We want your thoughts.” But because there are no ideas, it is for show.

The actual plan is to go nowhere, or at least nowhere new. Even gung-ho, final-frontier readers think NASA will not get to Mars by the newly distant date of 2030.

Adapted from

Unless China reprises its 2008 Olympics extravagance in space with a mission to Mars, the Moon might mark the furthest extent of human space travel.

The brave and hopeful era of the Space Age deserves a better send-off than these dissimulations.  Economics and myth dictate otherwise, stalling the redirection of noble aspiration toward terrestrial ends where giant leaps for humankind are both needed and possible.




Polio Eradication: Harder Than it Looks

Why it might require a new vaccine

The global campaign to eradicate polio crushed 90% of the disease in the space of a dozen years, by 2000. In the decade since, complete elimination has swung tantalizingly close—and then away again when case counts spiked and outbreaks reappeared in countries recently cleared of the virus. Once again, cases are ebbing and hopes rising. The New York Times recently reported that for four straight months, India has seen no new cases in its two most polio-burdened states. But the real news is the unnoticed opening of a new scientific front in vaccine research, portent of a longer battle.

The polio-free streak in the state of Uttar Pradesh, while encouraging, is actually only two months old and in the state of Bihar just over three. India as a whole did have zero cases in March. But in four of the last ten years, the country has started off with even fewer total polio cases only to see the tide turn. Nationwide, the total cases so far this year, 19, equals the number at this time in 2009. Polio incidence usually rises in May; given reporting delays, by July, the direction of the trend should be clearer.

The Times article cites extraordinary vaccination efforts to explain the new swing toward eradication. Polio vaccination campaigns in India are monumental undertakings, war-sized in scale: an army of over two million vaccinators goes house to house, overseen by a supervisor corps numbering more than 100,000. However, the most enduring reason polio continues in India has not been a failure to vaccinate, but a failure of the vaccine.

If the population covered by campaigns is increasing, so too is the frequency of the campaigns. Vaccinations became monthly in Uttar Pradesh, for example, in 2007 in order to cover more quickly the 500,000 children born there each month.

Although the remarkable vaccination efforts have greatly pushed down polio cases, eradication has remained elusive because eight vaccine doses or even more don’t necessarily confer immunity. In India, the number of polio cases where the individual hasn’t been vaccinated has plummeted toward zero. Instead, increasingly the victims have been vaccinated over and over—to no effect.

In most places in the world—and many places in India—the Sabin oral polio vaccine works after two doses. No one knows what makes it go awry in Uttar Pradesh and Bihar. Those states are highly populous but low in income. Overcrowding and hygiene conditions help infection spread. But the vaccine's inability to elicit immunity is thought to be a product of malnutrition, immune suppression caused by other diseases and possibly genetic factors.

The World Health Organization just closed a call for research proposals to find out why, part of an ongoing large investment in improved vaccines. Critics of the eradication program have vociferated about the oral vaccine for some years. But developing new vaccines or drugs is time consuming, costly and not guaranteed to work, perhaps explaining the reluctance to open a scientific front in the war on polio. Previously, global eradication, while hugely daunting and complicated, came down to logistical execution, will power and funding.

Bruce Aylward, who heads the eradication effort at the World Health Organization, has ample willpower, but worries constantly about finances. He has steadily forecasted the imminent demise of polio for years. To fund the pursuit of eradication, he has learned that when there’s good news, like a favorable turn in case numbers, you cash it in. Aylward told the New York Times: “We’ve never had so many things looking so positive across so many areas.” Concerning a funding shortfall, he hastened to add: “I spend as much time in donor capitals as I do in infected countries.”

Hopefully his efforts will pay off. But if eradication remains unachieved, we will almost certainly be on the verge of it—still.


Photo credit: Jean-Marc Giboux via quilty2010, Sub-National Immunization Day. Lucknow, Uttar Pradesh

Source, 2nd graphic: AFP Surveillance Bulletin—India Report for the week ending February 6, 2010

Source, 3rd graphic: Paul, Y., Polio eradication in India: Have we reached the dead end?, Vaccine.  2010 Feb 17;28(7):1661-2.




In Burma, the Wrong Kind of Resistance

Drug-resistant malaria may have spread to Burma and, worse, might now be impervious to current first-line drug defenses. Less than a year into the battle to contain resistance, everything that could go wrong may have.

In Southeast Asia, malaria has overrun—twice—the pharmaceutical defenses erected against it, evolving resistance to previously potent anti-malarial drugs and ultimately rendering them useless worldwide. Last year, portents of a third such performance appeared. In cases along the Thai-Cambodia border, the first-line drug artemisinin began taking longer to completely clear malaria parasites, suggesting that today's champion had lost a step against a strengthening disease. (See Once again, it's 'Apocalypse Now' in Southeast Asia.)

Plans quickly developed to crush this new threat before it spread globally—again. Efforts to eradicate malaria from the affected areas of Cambodia have markedly reduced prevalence of the disease. But preliminary reports now suggest that parasites in some regions of Burma and Vietnam may also respond poorly to artemisinin, meaning the original lines of containment might already be breached.

In the past, drug-defeating strains originated in Southeast Asia and then spread by human carriers to Africa. Worryingly, researchers are currently working to determine whether artemisinin-resistant malaria has arrived in other parts of the world and if there is a connection to Southeast Asia.

On top of news of a faster-than-expected spread, some evidence suggests that artemisinin is acting more and more slowly in some cases. If the trend continues, treatment failure will eventually result: complete resistance to artemisinin, with no new drugs ready to take its place.

On other fronts of the war on malaria, a vaccine candidate, called RTS,S, has moved into final clinical trials. However, the protective effects of the vaccine have varied widely, from 40% to 60%, creating a difficult decision on whether to undertake large and costly vaccination campaigns when RTS,S emerges from clinical trials in 2014. The bright spots at present are insecticide-treated bednets. The bednet campaign has raised awareness, money, and, most importantly, actual usage of the nets in malarial regions of the world.

If heart drugs keep improving, will we be able to tell?

Even under high magnification, new drug benefits are vanishing

By the end of the 20th century, modern medicine was fending off 190,000 deaths a year from otherwise fatal heart conditions. Funding poured into cardiovascular research, more than doubling from $3.8b in 1995 to $8.4b in 2005. Now from this richly oxygenated drug pipeline, two new heart drugs have emerged. Massive clinical trials depict, at IMAX scale, medicines that seem better, faster, stronger. But it still takes squinting to see the improvements. And even tests in tens of thousands of people aren’t large enough to show that the new drugs actually save lives.

Once, life-saving effects were visible to the naked eye. In the 1980s, a clinical trial of 17,000 people demonstrated unequivocally that aspirin prevented hundreds of deaths. After a heart attack, aspirin cuts the subsequent risk of death, stroke or heart attack by 2.0%. Improving on aspirin took nearly a decade and a trial of over 19,000 people before the faint effects of a new drug, Plavix, surfaced. Plavix surpassed aspirin by a hard-to-see, Braille-like bump of 0.5%. But the benefits of Plavix and aspirin, taken together, are additive. After Plavix gained FDA approval in 1997, it earned drug-maker Sanofi-Aventis the second-largest pharmaceutical franchise in the world.
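The arithmetic behind ever-larger trials can be sketched with the standard sample-size formula for comparing two proportions. The event rates below are illustrative assumptions, not the actual figures from the aspirin or Plavix trials:

```python
def sample_size_per_arm(p_control, p_treatment, z_alpha=1.96, z_beta=0.84):
    """Rough per-arm sample size to detect a difference between two event
    rates in a two-arm trial (two-sided alpha = 0.05, 80% power)."""
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    delta = p_control - p_treatment
    return (z_alpha + z_beta) ** 2 * variance / delta ** 2

# Aspirin-sized effect: a 2% absolute reduction (assume 10% -> 8% event rate)
print(round(sample_size_per_arm(0.10, 0.08)))    # roughly 3,200 per arm

# Plavix-sized effect: a 0.5% absolute bump (assume 6.0% -> 5.5% event rate)
print(round(sample_size_per_arm(0.060, 0.055)))  # roughly 34,000 per arm
```

The quadratic penalty is the point: required trial size grows with the inverse square of the effect, so shrinking the detectable benefit from 2% to 0.5% multiplies the trial roughly tenfold. (Real trials shave this down with longer follow-up and composite endpoints, but the scaling is why megatrials became unavoidable.)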

The scent of that $8 billion market brought competitors loping, ears-back in pursuit. First came Effient from Eli Lilly. Perhaps to magnify small differences between Effient and Plavix, the company-funded study put heart attacks under a microscope. The trial looked not only at heart attacks with chest pain and other classic symptoms, but also those detectable only by a blood test measuring levels of cardiac enzymes. The precise definition of these invisible heart attacks varies and even changed mid-trial. And whether they matter is disputed. But largely because of the tally of non-fatal heart attacks, the Eli Lilly study showed Effient beating Plavix.

Neither drug, however, defeats death. Enter Brilinta, a new antiplatelet drug from AstraZeneca. A recent clinical trial showed Brilinta not only besting Plavix but saving lives—maybe. The study of nearly 19,000 people still wasn’t big enough to determine whether the 89 fewer deaths among Brilinta patients were due to the new drug or to chance.

Chance is not why cardiovascular clinical trials funded by drug companies tend to report results favorable to the funder. AstraZeneca paid for the Brilinta trial, and two of the study’s authors were employees of the company. AstraZeneca also managed the trial data itself, contrary to good practice. Britain’s National Health Service has expressed doubt about the trial’s blinding—which could mean the new drug was given to patients who were healthier to begin with. Also raising eyebrows, the trial’s 1,800 North American patients fared worse on Brilinta, although that too could owe to chance. (Brilinta is currently wending its way through the FDA approval process.)

Not only are the benefits of these drugs diminishing and arguable; the number of new drugs is plummeting. From eight in 1995, the number of novel chemical entities approved for heart conditions crashed to zero in 2005. All newly-approved drugs tumbled, from 53 in 1996 to just 18 by 2005.

Surprisingly, it’s not the drug companies’ fault. Huge updrafts of research funding did little to arrest the drug free fall. Not only did cardiovascular research funding double, government funding of all biomedical research ballooned, also doubling between 1998 and 2003. The biomedical research engine now gulps $100 billion annually in the United States. Reassuringly, it powers more scientists than ever and generates 200,000 research papers a year, nearly twice the output of 1995. But research and funding have clearly broken away from drug production. Why?

Research has dived deeper and deeper in search of the fundamental causes of disease. This fantastic voyage ever downward in scale was expected to conclude with the sequencing of the human genome and the molecular pinpointing of the genes that cause disease. Instead, the search is still on. Only 3% of the heritable, genetic basis for early heart attack has been discovered. Scrutinizing the DNA of nearly 3,000 sufferers turned up just nine genes in common, suggesting that there are hundreds more. Worse, early heart attacks have a stronger genetic basis than those occurring after age 65, which represent 90% of all heart attacks.

The research odyssey continues deeper and gets murkier. The genes with the strongest influence on early heart attack don’t, say, produce artery-blocking plaque. Instead they appear to control other, unidentified genes of unknown function. Disentangling this self-referential interplay of genes with each other and genes with environment is the daunting task of epigenetics. Like drug trials, research projects are becoming enormous. Inevitably, there is a Human Epigenome Project, vaster in scope than even its parent, the Human Genome Project.

But as research dives deeper, the medical payoff has become fainter. The tether connecting research to new drugs and health benefits began stretching a quarter century ago. In 1984, a group at Oxford quietly and presciently called for megatrials in the 10,000- to 20,000-person range because most trials were “too small to be of much independent value.” In other words, drug benefits had become too small to be detected without a large trial. In 1985, new drug approvals climbed to record heights. They held there, helped by the arrival of the last key heart medication, the statins, which began lowering cholesterol in 1987. In 1988, the Oxford team published the 17,000-person study of aspirin’s antiplatelet credentials. The era of megatrials began. In 1989, as if on cue, new drug approvals began dropping from their all time highs and have not recovered.

In the realm of heart medications, only modest refinements have ensued. Plavix and other antiplatelet drugs improved very slightly on venerable aspirin, but lifesaving benefits have vanished even from megatrials. Similarly, new anticoagulants (with names like bivalirudin and fondaparinux) mostly burnished the achievements of heparin, which began saving lives in its first, tiny 1960 trial of 35 patients.

A similar pattern holds for cancer, the number two killer in the United States after heart disease. For breast cancer, the 1960s delivered the biggest breakthrough ever: chemotherapy. It cut mortality by 14% and finally displaced 19th-century radiation treatment as front line therapy. Every therapeutic discovery for breast cancer since chemotherapy has produced only smaller benefits. In the 1970s, modified chemotherapy pared mortality just another 3.1% by employing more toxic drugs developed in the 1950s. Those treatments remain front line to this day. The biggest news since has been tamoxifen, which reduces mortality by 9.2% but only for about three quarters of patients with a particular type of breast cancer. Tamoxifen dates to 1977. The more recent aromatase inhibitors marginally improve on tamoxifen but not in a life-saving way.

The latest generation of cancer drugs, “targeted agents” like Tykerb, approved in 2007, exploit the new, high-definition molecular knowledge. But targeted agents, while higher in precision, have generally lost even the occasional power to cure wielded by older, cruder chemotherapy.

Seemingly the last thing to decouple from new drugs is expectations. In 1998, on the 50th anniversary of the first clinical trial, the Oxford trialists looked ahead to the next half century. They called once again for “greatly increasing” trial size. The reason, they said gently and soberly, is simple: “when it comes to major outcomes it is generally unrealistic to hope for large therapeutic effects.” Instead, expectations, like new drug prices, continue to soar, high above shrinking health benefits below.

Photo credit: Buttersweet on Flickr



The Contours of Medical Progress in Depression, Diabetes and Arthritis

Life expectancy, medical progress, birthday wishes (video)