Screening began early in the 20th century with such pioneers as the German microbiologist Paul Ehrlich. Having learned that an arsenic compound killed the microbe responsible for sleeping sickness, Ehrlich sought a related compound that would kill the microbe without poisoning the patient. He tried some 900 compounds in mice, to little avail. Then his colleague Sahachiro Hata tested one of them on the microbe that causes syphilis. The compound killed the microbe but not the experimental animals. A year later, in 1910, Ehrlich released salvarsan, the first synthesized drug to cure a disease.
How the drug worked played no role in its discovery. Why a compound killed germs without harming patients overmuch; what the microbes for two quite different diseases had in common – these puzzles went unexplained. The breakthrough – the idea that arsenic could kill germs – came from inspired observation; applying that discovery entailed laborious screening. No one dreamed that inspiration could be made routine or that works of genius could be produced on a corporate schedule.
Beginning in the 1970s, as knowledge of the molecular basis of disease increased, the pharmaceutical industry began to hope for steady discoveries based on a different model. Instead of searching for a compound that would simply, say, lower blood pressure in model organisms, pharmacologists could target specific enzymes, such as the angiotensin-converting enzyme (ACE), which generates the blood-pressure-raising hormone angiotensin II. Indeed, the first ACE inhibitor was discovered as the result of just such a targeted search, by scientists at Squibb in 1977.
But even the roots of this discovery lie in serendipity. A doctor noticed that deaths from snakebite in the banana plantations of Brazil involved catastrophic declines in blood pressure. Researchers in London isolated the causative peptides, and the search for a drug was taken up by two chemists at Squibb, David Cushman and Miguel Ondetti, who synthesized one of the peptides and showed that injections of it lowered blood pressure. Squibb saw no market for an injectable antihypertensive and shelved the project. Then, in 1974, research on a related enzyme system – again, by academics – showed how molecules small enough to be effective if taken orally might do the job, and the Squibb team developed captopril. It gained FDA approval in 1981, starting an avalanche of development at other companies that has led to many more ACE inhibitors.
The saga of captopril shows the pharmaceutical industry at its very best, developing a drug that was at once a success for science, for patients, and for a company’s bottom line. Yet it hardly promised the industry what it yearned for: a regular supply of bankable discoveries. Since then, the bar has risen: demand for new ideas has increased as the number of important diseases lacking a treatment has dwindled.
The traditional, itinerant investigator, as in the Ehrlich example; the hypothesis-driven project, as in the insulin story; and applied research, as in the case of ACE inhibitors – all relied on luck and brilliance to find new drugs. In its search for an alternative to these methods, the industry fell in love with a new approach to research that promised to minimize serendipity and maximize predictability: the industrialization of scientific research.
Part 2 of 3. Part 3 to follow shortly.
Contributed by Barry Sherman and Philip Ross. Originally published by them in The Acumen Journal of Sciences, Volume I and reprinted with their permission.