Gates Seeks to Close Out Polio in Nigeria

Bill Gates returned to Nigeria yesterday, ostensibly to laud progress on polio but also to push vaccination and eradication efforts through to a decisive conclusion.

Polio is way down in Nigeria, in part because of Gates' first visit there early in 2009. A year and a half on, polio cases are nearly zero: just three so far in 2010, compared with 288 in the first half of 2009. Gates' arrival coincides with the first of two large-scale vaccination sweeps in Nigeria this month. Also, rather than directly fund polio vaccination efforts and hope for good results, the Gates Foundation agreed in 2009 to buy down existing World Bank loans to Nigeria when the country achieves specific vaccination targets. (See the picture Gates posted on Twitter, which he captioned: "Reviewing statistics with leaders...")

Nigeria has 42 million children under 5; reaching 80% vaccination coverage takes an army of about 200,000 vaccinators. If polio can be dispatched in Nigeria, that would leave India, which Gates visited just three weeks ago, as the only major polio epicenter. India has also seen its case rate fall precipitously but, unlike in Nigeria, the oral polio vaccine doesn't always work even after repeated doses in India's most polio-intensive regions. After Nigeria and India, the remaining polio redoubts are Afghanistan and Pakistan, where vaccination campaigns are often impossible because of wartime conditions. Those are likely the only polio frontlines Gates won't visit.

Note on graphic: The location of the three cases reported so far in 2010 comes from AllAfrica.com. The map for 2009 is for illustrative purposes. It shows only half of 2009's cases with little fidelity to actual location.

------------------------------

Related:

Polio Turns Stealthy in India (August 19, 2010)

Heavy Lifting: Raising Health Beyond Polio's Reach (May 13, 2010)

Wall Street Journal: Pulling the plug on polio eradication? (April 26, 2010)

Polio Eradication: Harder Than it Looks (April 14, 2010)

Atlantis last flight, X-51 first flight

The X-51: making aeronautical engineering sexy again (Photo credit: United States Air Force.)

The space shuttle Atlantis landed for the last time yesterday while the hypersonic X-51 test aircraft flew for the first time. The X-51 "Waverider" flew for a little more than three minutes, hitting a top speed of Mach 5. The X-51 is not a replacement for the shuttle, but in contrast to the poor progress on ways to get into space, hypersonics are scorching a path forward in aeronautics.

Research on hypersonics has been going on for four decades, but it took until the 21st century for the first hypersonic vehicles to fly. The math gets extremely tricky. Only fairly recently has computing power become overwhelming enough to begin replacing wind tunnel and flight testing for much slower-moving objects like commercial passenger jets. Hypersonic flight presents a brand new set of modeling challenges. The extreme speeds and temperatures mean you have to model effects like the breakup of oxygen and nitrogen molecules into their constituent atoms.
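To get a feel for the temperatures involved, here is a rough back-of-the-envelope sketch (my own, not from the original post) using the ideal-gas stagnation-temperature relation. The ambient temperature is an assumed cruise-altitude value, and the point is precisely that the simple ideal-gas picture starts to break down at these speeds.

```python
# Rough back-of-the-envelope: ideal-gas stagnation temperature at Mach 5.
# Illustrative only -- real hypersonic flows violate these assumptions,
# which is exactly why the modeling is hard.

gamma = 1.4          # ratio of specific heats for air (calorically perfect gas)
T_ambient = 220.0    # K, assumed ambient temperature near 20 km altitude
mach = 5.0

# T0/T = 1 + (gamma - 1)/2 * M^2, valid only while the air behaves ideally.
T_stagnation = T_ambient * (1 + (gamma - 1) / 2 * mach**2)
print(f"Stagnation temperature at Mach {mach:.0f}: {T_stagnation:.0f} K")
# ~1320 K. At temperatures like these, vibrational excitation and, at higher
# Mach numbers, dissociation of oxygen (and eventually nitrogen) molecules
# must be modeled, coupling chemistry to the aerodynamics.
```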

However, actual hypersonic flight appears to be more a matter of grit than the product of any particular advance. Michael Smart, head of the pioneering HyShot group in Australia, says:

Computers have definitely given us more confidence in our predictions of scramjet performance, and CAD based manufacturing has also helped. I actually think it’s more to do with different personalities pushing to “fly stuff” rather than just testing on the ground that has really moved things forward. As an engine guy who mainly tests on the ground, having to adapt my engines to a real flight vehicle has actually changed the way I design engines.

There are also interesting engineering parallels with synthetic biology. In both, elements of the design are highly interdependent in very complicated ways. Also (and perhaps consequently), design elements perform more than one function. The Waverider is, in a sense, a flying engine, the interdependence of airframe and propulsion becoming extreme with the speed of the aircraft. Fuel serves not only to propel the X-51 but also to cool it.

It's unlikely hypersonics will solve our space launch problem, and right now the applications being imagined are military. Technologically, however, it's the real deal, genuine gee-whiz stuff.

----------------------------------------------------

Related:

The future of spaceflight: “social welfare for nerds”

Human Space Exploration: Scaled Back to Vanishing

Cost of space tourism "rides" falls

Space Shuttle without people

Space Age entering eclipse—unnoticed

Goodbye Mars, Hello Malaria: Bill Gates’ Imprimatur on Science and the 21st Century

The cover story in the May 14 Science featured not space probes or cancer stem cells but malaria and tuberculosis. The cover is a first, culminating a trend away from classic 20th century research ambitions: a cure for cancer, becoming a spacefaring race, genetically engineering a new post-human species. Instead, as the pages of Science make clear, it’s goodbye Mars, hello malaria. Bill Gates printed out this new agenda, putting his stamp both on it and on the new century.

Science both sets and reflects the agenda for American science. In the United States, cancer is the second biggest killer; malaria caused only four deaths in the most recent annual count, all from infections occurring abroad. Although coverage of cancer in Science still overwhelms that of malaria, in 2000 the count of cancer mentions in Science turned down for the first time in the history of the publication. At the same time, malaria coverage tilted up, reflecting a shift from developed to developing world health concerns. (As if in emphasis, last week Nature also ran a malaria cover story.)

With neat symbolism, New Year’s Day 2000 saw the establishment of the Bill and Melinda Gates Foundation. Gates said then of his foundation’s mission: “I think that we could have the goal that every person in the world would have the same type of healthy life that people in the United States have.” His words now seem to have reshaped the trajectory of science almost instantly.

Source: Science

2000 marked an inflection point for NASA—and a turn in Science toward the terrestrial. Previously, the magazine's affections for space exploration had grown and grown even as the golden age of the 1960s receded. Ironically, coverage reached an apogee at the dawn of the 21st century and began falling back to earth in 2000. President Obama's subsequent cancellation of the Ares program earlier this year scaled back human space exploration to the vanishing point.

Nor are we on a trajectory to create a new, post-human species. In 2001, mentions of malaria in Science exceeded those of genetic engineering for the first time, a predominance that continues.

Can these shifts really be traced to the influence of the Gates Foundation? Concerning the new emphasis on malaria, Gates is indisputably causal. True, the disease began gaining column inches in Science before Gates, from 1980 forward. But the last decade’s spike to all-time highs coincides not only with Gates’ rhetoric but with an enormous funding surge largely orchestrated by the Gates Foundation.

Research agendas are a zero-sum game. Consequently, the rise of malaria and global health automatically de-emphasizes all else. But difficulties specific to cancer, space, and genetic engineering also contributed to their demotion. The war on cancer and the space age are each roughly half a century old and not much nearer to victory or realization. By contrast, exponential advances in DNA sequencing technology seemed to be leading inexorably to a post-human species. However, genetic explanations of both complex diseases and complex traits have been—and might remain—elusive. As the number of genes involved in the relatively straightforward trait of height has grown, the prospects for and coverage of genetic engineering have dropped.

Gates still could have jumped on the spacewagon with fellow software billionaires: Paul Allen, a major funder of SETI, or Jeff Bezos (with Blue Origin) and Elon Musk (SpaceX), who continue undeterred toward the spacefaring vision. Even software millionaires like John Carmack (Armadillo Aerospace) can’t help themselves. But Gates isn’t susceptible. In 1997, he praised the (unmanned) Mars Pathfinder mission as “a fine example of small science ... undertaken on a strict budget [with] limited, achievable goals.” He believed space would not be transformative: “Though humanity will do some great things in space in the next 100 years, and there will be enormous benefits, I don't think what goes on in space will fundamentally change the way we live.”

Concerning genetic engineering, Gates contended in 1995 that “It’s all a question of how, not if.” He may still believe that, but his energies are going into saving humans rather than surpassing them.

The opportunities (and imperatives) presented by global health might be greater than for any alternative research program. But nature yields to science only grudgingly no matter the frontier. Gates’ goal to eradicate malaria will be a multi-decade grind offering frequent parallels with the bogged down, four-decade war on cancer. Already polio eradication is a decade overdue. 

It’s a volitional, pivotal moment. Gates, his full weight on Archimedes’ lever, is moving the world in a new direction altogether different from 20th century imagination and expectation.

-------------------------------------------------

Related:

How Ray Suarez really caught the global health bug

Heavy Lifting: Raising Health Beyond Polio's Reach

Two of the world's kids, Bihar, India. (Photo: kuann)

Bill Gates expanded the campaign to eradicate polio during a frontline visit to India yesterday. The new strategy: lift health beyond polio's reach.

The largest remaining pockets of the disease are the Indian states of Bihar, site of Gates' visit, and neighboring Uttar Pradesh. Over 240 million people live in the region. Fertility rates are high, with more than 500,000 children born monthly in Uttar Pradesh. The new births are accompanied by--and perhaps driven by--the highest child mortality rate in India.

Waves of vaccination campaigns have failed to eliminate the disease from the two states. Disconcertingly, even multiple doses of the oral vaccine don't guarantee immunity here, a failure usually explained by widespread unhygienic conditions, undernutrition and illness. By improving broader health conditions, the chain of circumstances favorable to polio could be broken.

The World Health Organization recently began investigating the biology underlying the vaccine failure. Meanwhile, however, Bill Gates signed an agreement with the state of Bihar "to improve and increase the availability, quality and utilisation of health-care facilities and services," according to the Economic Times.

For all its assets, however, the Gates Foundation cannot fund better health care at the scale of Bihar and Uttar Pradesh. The memorandum between Bihar and the Foundation might represent a quid pro quo. Polio affects very few, even in India, which experienced just 741 cases in 2009. The benefits of eradication accrue to the entire world, but India must do the actual work, at least to the partial exclusion of more pressing priorities such as child mortality.

A few weeks ago, the Wall Street Journal speculated that Gates might shift away from eradication toward strengthening health systems. Instead, Gates added improving health infrastructure to his still-relentless campaign for eradication.

---------------------------------

Related:

Gates Seeks to Close Out Polio in Nigeria (June 7, 2010)

Wall Street Journal: Pulling the plug on polio eradication? (April 26, 2010)

Polio Eradication: Harder Than it Looks (April 14, 2010)

Life expectancy, medical progress, birthday wishes

A wry birthday wish for Ray Kurzweil. I gave this talk at Research Club about the prospects for living forever, which aren't very good.

Many thanks to Research Club and videographer Dustin Zemmel.

--------------------------------

Related:

If heart drugs keep improving, will we be able to tell?

The Contours of Medical Progress in Depression, Diabetes and Arthritis

Ray Kurzweil does not understand the brain

Rest in Peas: The Unrecognized Death of Speech Recognition

Pushing up daisies (Photo courtesy of Creative Coffins)

Mispredicted Words, Mispredicted Futures

The accuracy of computer speech recognition flat-lined in 2001, before reaching human levels. The funding plug was pulled, but no funeral, no text-to-speech eulogy followed. Words never meant very much to computers—which remained roughly ten times more error-prone than humans at recognizing them. Humans expected that computer understanding of language would lead, inevitably and quickly, to artificially intelligent machines. But the mispredicted words of speech recognition have rewritten that narrative. We just haven’t recognized it yet.

After a long gestation period in academia, speech recognition bore twins in 1982: the suggestively named Kurzweil Applied Intelligence and sibling rival Dragon Systems. Kurzweil’s software, by age three, could understand all of a thousand words—but only when spoken one painstakingly articulated word at a time. Two years later, in 1987, the computer’s lexicon reached 20,000 words, entering the realm of human vocabularies, which range from 10,000 to 150,000 words. But recognition accuracy was horrific: 90% wrong in 1993. Another two years, however, and the error rate pushed below 50%. More importantly, Dragon Systems unveiled its Naturally Speaking software in 1997, which recognized normal human speech. Years of talking to the computer like a speech therapist seemingly paid off.

However, the core language machinery that crushed sounds into words actually dated to the 1950s and ‘60s and had not changed. Progress mainly came from freakishly faster computers and a burgeoning profusion of digital text.

Speech recognizers make educated guesses at what is being said. They play the odds. For example, the phrase “serve as the inspiration” is ten times more likely than “serve as the installation,” which sounds similar. Such statistical models become more precise given more data. Helpfully, the digital word supply leapt from essentially zero to about a million words in the 1980s when a body of literary text called the Brown Corpus became available. Millions turned to billions as the Internet grew in the 1990s. Inevitably, Google published a trillion-word corpus in 2006. Speech recognition accuracy, borne aloft by exponential trends in text and transistors, rose skyward. But it couldn’t reach human heights.
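The odds-playing can be sketched in a few lines. Below is a toy bigram language model (my own illustration with an invented mini-corpus, not the machinery of any production recognizer): having seen "the inspiration" more often than "the installation," it scores the first phrase higher, which is all the acoustic front end needs to break the tie.

```python
from collections import Counter

# Toy bigram language model -- an illustration of "playing the odds",
# not any real recognizer. The mini-corpus is invented.
corpus = (
    "the essay will serve as the inspiration for the project "
    "her story will serve as the inspiration for a film "
    "the crew will serve as the installation team"
).split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word, alpha=0.1):
    """P(word | prev) with add-alpha smoothing so unseen pairs aren't zero."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

def phrase_score(phrase):
    """Multiply bigram probabilities along the phrase."""
    words = phrase.split()
    score = 1.0
    for prev, word in zip(words, words[1:]):
        score *= bigram_prob(prev, word)
    return score

for phrase in ["serve as the inspiration", "serve as the installation"]:
    print(phrase, phrase_score(phrase))
# The acoustic front end proposes both candidates; the language model tips
# the decision toward whichever word sequence is statistically likelier.
```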

Source: National Institute of Standards and Technology Benchmark Test History 

“I’m sorry, Dave. I’m afraid I can’t do that.”

In 2001 recognition accuracy topped out at 80%, far short of HAL-like levels of comprehension. Adding data or computing power made no difference. Researchers at Carnegie Mellon University checked again in 2006 and found the situation unchanged. With human discrimination as high as 98%, the unclosed gap left little basis for conversation. But sticking to a few topics, like numbers, helped. Saying “one” into the phone works about as well as pressing a button, approaching 100% accuracy. But loosen the vocabulary constraint and recognition begins to drift, turning to vertigo in the wide-open vastness of linguistic space.
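For reference, figures like these are conventionally computed from word error rate, the word-level edit distance between the recognizer's output and a reference transcript, with accuracy usually quoted as its complement. A minimal sketch of that metric (standard dynamic-programming alignment, not NIST's actual scoring tool):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Minimal word error rate: edit distance (substitutions, insertions,
    deletions) between word sequences, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("recognize speech", "wreck a nice beach"))  # 2.0 -> 200% error
print(word_error_rate("serve as the inspiration",
                      "serve as the installation"))               # 0.25 -> 25% error
```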

The language universe is large, Google’s trillion words a mere scrawl on its surface. One estimate puts the number of possible sentences at 10^570. Through constant talking and writing, more of the possibilities of language enter into our possession. But plenty of unanticipated combinations remain which force speech recognizers into risky guesses. Even where data are lush, picking what’s most likely can be a mistake because meaning often pools in a key word or two. Recognition systems, by going with the “best” bet, are prone to interpret the meaning-rich terms as more common but similar-sounding words, draining sense from the sentence.
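For a sense of scale, a crude count (my own arithmetic, not the source of the 10^570 estimate) shows why even a trillion words of data leaves most of the language universe unvisited:

```python
# Crude arithmetic, for illustration only: even short word strings vastly
# outnumber any conceivable training corpus. Both figures below are assumed.
vocabulary = 100_000       # assumed vocabulary size
sentence_length = 20       # assumed sentence length, in words

possible_strings = vocabulary ** sentence_length   # exact integer arithmetic
print(f"possible {sentence_length}-word strings: about 10^{len(str(possible_strings)) - 1}")
print("Google's 2006 corpus:         about 10^12 words")
# ~10^100 strings versus 10^12 words of data: most well-formed sentences a
# speaker might produce have never been seen by the model.
```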

Strings, heavy with meaning. (Photo credit: t_a_i_s)

Statistics veiling ignorance

Many spoken words sound the same. Saying “recognize speech” makes a sound that can be indistinguishable from “wreck a nice beach.” Other laughers include “wreck an eyes peach” and “recondite speech.” But with a little knowledge of word meaning and grammar, it seems like a computer ought to be able to puzzle it out. Ironically, however, much of the progress in speech recognition came from a conscious rejection of the deeper dimensions of language. As an IBM researcher famously put it: “Every time I fire a linguist my system improves.” But pink-slipping all the linguistics PhDs only gets you 80% accuracy, at best.

In practice, current recognition software employs some knowledge of language beyond just the outer surface of word sounds. But efforts to impart human-grade understanding of word meaning and syntax to computers have also fallen short.

We use grammar all the time, but no effort to completely formalize it in a set of rules has succeeded. If such rules exist, computer programs turned loose on great bodies of text haven’t been able to suss them out either. Progress in automatically parsing sentences into their grammatical components has been surprisingly limited. A 1996 look at the state of the art reported that “Despite over three decades of research effort, no practical domain-independent parser of unrestricted text has been developed.” As with speech recognition, parsing works best inside snug linguistic boxes, like medical terminology, but weakens when you take down the fences holding back the untamed wilds. Today’s parsers “very crudely are about 80% right on average on unrestricted text,” according to Cambridge professor Ted Briscoe, author of the 1996 report. Parsers and speech recognition have penetrated language to similar, considerable depths, but without reaching a fundamental understanding.

Researchers have also tried to endow computers with knowledge of word meanings. Words are defined by other words, to state the seemingly obvious. And definitions, of course, live in a dictionary. In the early 1990s, Microsoft Research developed a system called MindNet which “read” the dictionary and traced a network from each word to every mention of it in the definitions of other words.

Words have multiple definitions until they are used in a sentence which narrows the possibilities. MindNet deduced the intended definition of a word by combing through the networks of the other words in the sentence, looking for overlap. Consider the sentence, “The driver struck the ball.” To figure out the intended meaning of “driver,” MindNet followed the network to the definition for “golf” which includes the word “ball.” So driver means a kind of golf club. Or does it? Maybe the sentence means a car crashed into a group of people at a party.
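The overlap idea can be sketched as a toy routine in the spirit of classic dictionary-overlap ("Lesk-style") disambiguation. This is an illustration only: the mini-dictionary below is invented, and MindNet's real networks were far richer and probabilistically weighted.

```python
# Toy word-sense disambiguation via definition overlap (a la Lesk).
# The mini-dictionary is invented for illustration; MindNet itself built
# much larger networks from a real dictionary.
mini_dictionary = {
    ("driver", "golf"):    "a golf club with a large head used to hit the ball long distances",
    ("driver", "vehicle"): "a person who operates a car or other vehicle on a road",
    ("struck", "hit"):     "hit something with force, as with a club or the hand",
    ("ball", "sport"):     "a round object hit, thrown or kicked in a game such as golf or football",
}

def disambiguate(word, sentence_words):
    """Pick the sense of `word` whose definition overlaps most with the
    definitions of the other words in the sentence."""
    context = set()
    for (other, _sense), definition in mini_dictionary.items():
        if other != word and other in sentence_words:
            context |= set(definition.split())
    best_sense, best_overlap = None, -1
    for (entry, sense), definition in mini_dictionary.items():
        if entry != word:
            continue
        overlap = len(set(definition.split()) & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("driver", ["the", "driver", "struck", "the", "ball"]))  # -> "golf"
```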

To guess meanings more accurately, MindNet expanded the data on which it based its statistics, much as speech recognizers did. The program ingested encyclopedias and other online texts, carefully assigning probabilistic weights based on what it learned. But that wasn’t enough. MindNet’s goal of “resolving semantic ambiguities in text” remains unattained. The project, the first undertaken by Microsoft Research after it was founded in 1991, was shelved in 2005.

Can’t get there from here

We have learned that speech is not just sounds. The acoustic signal doesn’t carry enough information for reliable interpretation, even when boosted by statistical analysis of terabytes of example phrases. As the leading lights of speech recognition acknowledged last May, “it is not possible to predict and collect separate data for any and all types of speech…” The approach of the last two decades has hit a dead end. Similarly, the meaning of a word is not fully captured just by pointing to other words as in MindNet’s approach. Grammar likewise escapes crisp formalization.  

To some, these developments are no surprise. In 1986, Terry Winograd and Fernando Flores audaciously concluded that “computers cannot understand language.” In their book, Understanding Computers and Cognition, the authors argued from biology and philosophy rather than producing a proof like Einstein’s demonstration that nothing can travel faster than light. So not everyone agreed. Bill Gates described it as “a complete horseshit book” shortly after it appeared, but acknowledged that “it has to be read,” a wise amendment given the balance of evidence from the last quarter century.

Fortunately, the question of whether computers are subject to fundamental limits doesn’t need to be answered. Progress in conversational speech recognition accuracy has clearly halted and we have abandoned further frontal assaults. The research arm of the Pentagon, DARPA, declared victory and withdrew. Many decades ago, DARPA funded the basic research behind both the Internet and today’s mouse-and-menus computer interface. More recently, the agency financed investigations into conversational speech recognition but shifted priorities and money after accuracy plateaued. Microsoft Research persisted longer in its pursuit of a seeing, talking computer. But that vision became increasingly spectral, and today none of the Speech Technology group’s projects aspire to push speech recognition to human levels.

Cognitive dissonance

We are surrounded by unceasing, rapid technological advance, especially in information technology. It is impossible for something to be unattainable. There has to be another way. Right? Yes—but it’s more difficult than the approach that didn’t work. In place of simple speech recognition, researchers last year proposed “cognition-derived recognition” in a paper authored by leading academics, a scientist from Microsoft Research and a co-founder of Dragon Systems. The project entails research to “understand and emulate relevant human capabilities” as well as understanding how the brain processes language. The researchers, with that particularly human talent for euphemism, are actually saying that we need artificial intelligence if computers are going to understand us.

Originally, however, speech recognition was going to lead to artificial intelligence. Computing pioneer Alan Turing suggested in 1950 that we “provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.” Over half a century later, artificial intelligence has become prerequisite to understanding speech. We have neither the chicken nor the egg.

Speech recognition pioneer Ray Kurzweil piloted computing a long way down the path toward artificial intelligence. His software programs first recognized printed characters, then images and finally spoken words. Quite reasonably, Kurzweil looked at the trajectory he had helped carve and prophesied that machines would inevitably become intelligent and then spiritual. However, because we are no longer banging away at speech recognition, this new great chain of being has a missing link.

That void and its potential implications have gone unremarked, the greatest recognition error of all. Perhaps no one much noticed when the National Institute of Standards and Technology simply stopped benchmarking the accuracy of conversational speech recognition. And no one, speech researchers included, broadcasts their own bad news. So conventional belief remains that speech recognition and even artificial intelligence will arrive someday, somehow. Similar beliefs cling to manned space travel. Wisely, when President Obama cancelled the Ares program, he made provisions for research into “game-changing new technology,” as an advisor put it. Rather than challenge a cherished belief, perhaps the President knew to scale it back until it fades away.

Source: Google

Speech recognition seems to be following a similar pattern, signal blending into background noise. News mentions of Dragon Systems’ Naturally Speaking software peaked at the same time as recognition accuracy, in 1999, and declined thereafter. “Speech recognition” shows a broadly similar pattern, with peak mentions coming in 2002, the last year in which NIST benchmarked conversational speech recognition.

With the flattening of recognition accuracy comes the flattening of a great story arc of our age: the imminent arrival of artificial intelligence. Mispredicted words have cascaded into mispredictions of the future. Protean language leaves the future unauthored.

---------------------------

Related:

Dude, where's my universal translator? (CBC radio show)

Dutch translation of Rest in Peas: De onbegrepen dood van spraakherkenning

Ray Kurzweil does not understand the brain


Cost of space tourism "rides" falls

Goofy yet compelling. (Photo credit: Armadillo Aerospace)

I have mostly denigrated space tourism, at least as a stepping stone to human space exploration. Proponents believe that attaching market forces to technology (which always advances) will lead inevitably to humanity exploring ever-more distant space.

About half of the equation works: market forces are driving down prices. Richard Branson's Virgin Galactic began by charging $200,000 for a ride aboard its SpaceShipTwo. But now, for just $100,000, Armadillo Aerospace is offering what Space.com called "sub-orbital joy rides." The mocking shift in nomenclature is perhaps a more important development than Armadillo's becoming the Budget Rent-a-Car of space tourism.

Both Armadillo and Virgin only get you to the edge of space, about 62 miles up. To get higher, there is presently no alternative to a multi-stage rocket, and not even market forces can change that. For the real deal, you still need to disgorge tens of millions of dollars to fly on the Soyuz.

---------------------------

Related:

Human Space Exploration: Scaled Back to Vanishing

Atlantis last flight, X-51 first flight

The Contours of Medical Progress in Depression, Diabetes and Arthritis

Celebrex photo (Allison Turrell)

Although medicine has never been more advanced, progress against the two biggest killers, cancer and heart disease, now comes more slowly than it did decades ago. A skeptical reader suggested that recent breakthroughs in depression, diabetes and arthritis present a truer picture of medical advance. But surveying these other diseases in fact reveals similar contours: the steepest gains occur in the past with progress flattening toward the present.

Depression

No antidepressant, however new, surpasses the very first, imipramine, which dates to 1952. Although we are on the third or fourth generation of antidepressants, the newest agents don’t outperform the oldest. Savella, the most recently approved antidepressant, worked the same as imipramine in a meta-study of seven clinical trials. Same for Effexor. Same for Zoloft. Same for Prozac. Unsurprisingly, the newer agents are about the same in terms of efficacy compared with each other. For severe, psychotic depression, imipramine remains the treatment of choice.

Having more therapeutic options greatly aids the art of treating an individual patient, and welcome progress has come on side effects. But we see refinement, not revolution. Conceivably, research emphasis actually shifts to side effects when progress on defeating a given disease slows. “Polypharmacy,” prescribing more than one drug for a condition, is another such symptom visible in antidepressants.

The reduced (or sometimes merely different) side effects of newer antidepressants come courtesy of very substantial advances in chemistry and receptor biology. Modern compounds strike their targets far more precisely than the older, less discriminating, “dirtier” agents like imipramine. However, consummate skills at the microscopic level are not matched by an understanding of the overall machinery of depression despite a vast expansion of neurological knowledge. In some respects, depression appears more complicated than ever, a tangled skein catching up many different biological pathways, looping through environmental and genetic factors. Breaking free of a complex disease, whether cancer or depression, requires more than snipping the single threads we have found so far.

Diabetes

In diabetes, insulin clearly wins the prize for biggest breakthrough, one achieved in 1923 before there were clinical trials. But none were needed. Insulin very straightforwardly saves the lives of type 1 diabetics who would die without it. Living with diabetes has since become progressively less and less difficult. Insulin has been improved many times, but these are refinements, like the recent fast-acting or long-release variants.

After insulin, the largest absolute gain in glycemic control came in the late 1950s and early 1960s with the introduction of sulfonylureas. For type 2 diabetes, these drugs were frontline until 1995, when metformin jumped to the front because of its greatly reduced side effects. The sulfonylureas actually provide better glycemic control than metformin, but ultimately they exhaust the pancreas. After metformin came a series of new drug classes: glitazones in 1999, GLP-1 analogs like Byetta in 2005 and DPP-4 inhibitors like Januvia in 2006. These drugs exploit recently acquired, fine-grain knowledge of biochemical pathways. By contrast, we don’t really know how metformin works, except that it acts mainly on the liver. But the new drugs aren’t “better” than older ones. Instead they provide useful (albeit more expensive) treatment alternatives.

As with depression, increased mastery of microbiological detail in diabetes hasn’t translated into mastery of the disease, a cure. Remarkably, gastric bypass surgery somehow controls type 2 diabetes quickly and not simply because of weight loss. A drug mimicking such an effect would likely be the biggest diabetes breakthrough ever. But the mysterious target has been lurking since 1987. Although today’s suite of diabetes therapies enables more precise glycemic control than ever before, for now, the largest advances are more than half a century old.

Rheumatoid Arthritis

A wave of new biological therapies has washed over rheumatoid arthritis, establishing a new high-water mark in relief and remission. For all their scientific brilliance, however, the biologicals back up an older drug, methotrexate. In use as early as 1967, methotrexate became the drug of choice in the mid-1980s, and it remains first-line therapy today.

The most important new biologicals, the TNF inhibitors, arrived in 1998 when the FDA approved Enbrel. However, in head-to-head trials, TNF inhibitors work about the same as methotrexate. Enbrel, for example, performed marginally better than methotrexate in one study while Humira did marginally worse in another. Not surprisingly, the differences between these biologicals are negligible. However, in patients where methotrexate has not worked, adding a TNF inhibitor can make an enormous difference. Remicade plus methotrexate boosted response rates to 52% compared to 17% with methotrexate alone.

The new agents also work much faster and appear to provide much greater protection against joint damage. Still, the biologicals haven’t beaten methotrexate and usually join it in combination therapy. In the case of Remicade, methotrexate is a requirement.

The newest biological for rheumatoid arthritis, Actemra, approved in January 2010, supplants no other drugs but occupies the last line of treatment, after the TNF inhibitors. Actemra joins the other recently approved biologicals Rituxan and Orencia sitting at the end of the bench.

 

Source: Methotrexate works in 70-80% of cases of moderate to severe rheumatoid arthritis. When TNF inhibitors are taken, 70% are taken with methotrexate or another conventional anti-rheumatic drug. TNF inhibitors work for about 75% of patients. The newest agents, Actemra (2010), Rituxan (2006) and Orencia (2005), address the smallest patient group, those failing TNF inhibitors.

The benefits of new drugs for heart disease, breast cancer, depression and diabetes, as well as rheumatoid arthritis, have declined over time, with the largest gains realized decades ago. Methotrexate, the rheumatoid arthritis drug of choice since the mid-1980s, provided the biggest gain, followed by the TNF inhibitors in the 1990s, with the newest drug, Actemra, last and least.

However, we are heir to decades of medical advances. The best time in all of human history to have any chronic disease is now (or, better yet, tomorrow). Surprisingly, though, we are adding less to this legacy than previous generations—and we’re a bit oblivious about what’s happening.

----------------------------------------------------------

Related:

If heart drugs keep improving, will we be able to tell?

Life expectancy, medical progress, birthday wishes (video)

Wall Street Journal: Pulling the plug on polio eradication?

In counterpoint to the New York Times' positive coverage of the war on polio earlier this month, the Wall Street Journal on Friday put forward a case for abandoning the goal of eradication—and not just for polio. The Journal depicts a potentially seismic policy shift as emanating from the de facto leader of global health, Bill Gates. Such a reversal is unlikely.

Gates got behind polio eradication in 1999 with a $50 million grant he believed would close out the disease. He predicted in 2000 that “If necessary resources and political will are devoted to polio eradication, the world can claim victory over this killer by the end of this year and certify the planet as polio free by the year 2005.” A decade and nearly a billion dollars later, the result is not eradication but oscillation, case numbers rising and falling.

Debate about the wisdom of the eradication policy has ensued. Millions of children die from malaria, for example, compared to which polio’s afflictions, although still horrible, are minute. More suffering could be averted, the argument goes, with a different allocation of global health dollars. The Wall Street Journal urges a move away from disease-specific campaigns and towards strengthening of overall health systems.

Polio eradication might be expensive, but dropping it might be more so. A Lancet study in 2007 concluded that control would be more expensive than eradication. But whatever the optimal policy, it requires funding. Eradication provides a sense of urgency and heroism; a control strategy does not.

Credibility is also at stake for global health advocates and Bill Gates in particular. Gates has backed not only getting rid of polio. In 2007, he and wife Melinda strong-armed a skeptical global health community to embrace malaria eradication. As hoped, tremendous energy and funding were released. A giant vogue for malaria eradication ensued, like Ashton Kutcher’s Twitter-driven campaign for bed nets. At the governmental level, the largest surge in funding commitments to battle malaria came one year after setting eradication as the goal. Arguably to protect these gains, the Gates Foundation doubled down on polio eradication in 2008. And as the Wall Street Journal points out, Gates personally led the assault, descending in 2009 on the hottest polio spot, Nigeria, where vaccination lagged.

A resurgence of polio that came from consciously letting up on the disease, even if that were the best policy, would be a public relations disaster. The current trajectory also presents serious problems, but far less severe ones: eradication is expensive and doesn’t appear to be working. The solution has been continued mass dosing of the polio-vulnerable in the developing world and, for donor nations, a steady drip of good news that, yes, we are at the absolute cusp of eradication. Right now, the news is good in Nigeria and indeterminate in India, the familiar cusp yet again.

Ultimately, polio can be snuffed out by the downward pressure of eradication, the strengthening of health systems and much broader, slower and more costly development—improvements in food, water and sanitation. In the near term, eradication will remain the strategy; it will also remain elusive.

-------------------------------------

Related:

Polio Turns Stealthy in India (August 19, 2010)

Heavy Lifting: Raising Health Beyond Polio's Reach (May 13, 2010)

Polio Eradication: Harder Than it Looks (April 14, 2010)

Human Space Exploration: Scaled Back to Vanishing

 

The Orion capsule at miniature scale. Photo credit: NASA

An elaborate ruse is taking place. Instead of Mars, we’re going to an asteroid. NASA’s budget is supposed to grow by a seemingly hefty $6 billion, but that works out to only a 1.5% increase next year, with just cost-of-living adjustments in the four years after. Finally, NASA administrators assured the Johnson Space Center that it would continue to be the home of mission control for human spaceflight—when no flights are planned.

But not all are duped. “[I]t is clear that this is the end of America’s leadership in space,” said Senator Richard Shelby from the aerospace-heavy state of Alabama, regarding the Obama administration’s plan for space exploration, announced yesterday.

Back in February, the President cancelled the Ares program to return to the moon and eventually Mars, a decision eliciting very little public consternation. Despite Congressional backlash from the Spacebelt states of Texas, Florida and Alabama, Ares remains cancelled. Also telling: part of the plan allocates $40 million for retraining Space Shuttle program employees put out of work by the Shuttle’s scheduled retirement later this year, making for an unexpected but revealing parallel with the Rustbelt.

Cagily, the President is deploying technology and market forces to defer human spaceflight—to infinity and beyond. NASA’s revised budget shifts billions into commercial space flight. Only left-wing socialists would argue against harnessing market forces to get into space. But the real jujitsu came in requiring that future rockets use new technology, not just a retread of the Apollo program. This is brilliant. Enthusiasts for space travel revere technology and so can hardly oppose Obama’s new, higher standard. However, in some respects, rocket technology has been unchanged for nearly a century. Konstantin Tsiolkovsky laid out many of the principles in 1903 in his Exploration of Cosmic Space by Means of Reaction Devices. Since Apollo forty years ago, no alternatives have been found to very large, multi-stage rockets burning liquid or solid fuel. Today, none are in near prospect. The chances of coming up with something novel by the new deadline of 2015 are very, very low. NASA chief Charles Bolden told yesterday’s gathering at Cape Canaveral: “This is not for show. We want your ideas. We want your thoughts.” But because there are no ideas, it is for show.
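Tsiolkovsky's constraint is easy to state and hard to escape: the rocket equation ties achievable velocity change to exhaust velocity and mass ratio. A rough illustration with textbook-style numbers (assumed for the sketch, not taken from the post) shows why chemical rockets bound for orbit end up enormous and multi-stage:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m_initial / m_final).
# Rough, assumed textbook numbers, for illustration only.
v_exhaust = 4400.0         # m/s, roughly the best chemical (LOX/LH2) engines
delta_v_to_orbit = 9400.0  # m/s, typical requirement for low Earth orbit,
                           # including gravity and drag losses (assumed figure)

mass_ratio = math.exp(delta_v_to_orbit / v_exhaust)
propellant_fraction = 1 - 1 / mass_ratio
print(f"required mass ratio: {mass_ratio:.1f}")
print(f"propellant fraction: {propellant_fraction:.0%}")
# ~8.5x mass ratio, i.e. ~88% of the vehicle at liftoff must be propellant,
# before counting tanks, engines, structure and payload. Staging -- dropping
# empty tanks and engines along the way -- is how real vehicles make the
# arithmetic close, and no chemical-rocket alternative has relaxed it.
```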

The actual plan is to go nowhere, or at least nowhere new. Even the gung-ho, final-frontier readers of Space.com think NASA will not get to Mars by the newly distant date of 2030.

Adapted from Space.com

Unless China reprises its 2008 Olympics extravagance in space with a mission to Mars, the Moon might mark the furthest extent of human space travel.

The brave and hopeful era of the Space Age deserves a better send-off than these dissimulations.  Economics and myth dictate otherwise, stalling the redirection of noble aspiration toward terrestrial ends where giant leaps for humankind are both needed and possible.

----------------------------

Related:

Space Shuttle without people

Space Age entering eclipse—unnoticed

Atlantis last flight, X-51 first flight