I recently participated in a panel at a local conference on healthcare and life science investing. The moderator asked me how I thought the drug development process could be changed to make it more efficient, cost effective, and quicker, and more like the technology development process. At the time I was a bit surprised by the question, given the complexity of the problem. Though I have thought a lot about this, as has anyone in the biotech industry, I could only laugh and mumble something about Big Data saving the day.
Since this blog provides me an opportunity to develop ideas and put them down on paper (or screen) without the risk of sounding foolish to a large audience, I will expound on two ideas that I have been thinking about for quite some time: 1) reducing the high level of efficacy evidence needed in drug approvals, and 2) reducing the impact the placebo effect has on drug testing.
FDA role in drug development. The FDA was formed in 1906 with the mission to monitor the safety levels of food and drugs, as well as the “standard of strength, quality, and purity” of drugs. In 1938 its charter was revised to mandate premarket safety reviews of all new drugs, monitoring of false therapeutic claims, and expansion of manufacturing inspections. Its role has evolved over the years to act as the gatekeeper for new therapeutics that are being developed for the market. My question is why the FDA should require such a high level of efficacy evidence before some level of approval, and whether it instead should be focused on making sure that only drugs with minimal safety risk and appropriate side effects are allowed to come to market. If a drug is shown to have a reasonable safety profile, relevant to the disease state to be treated, would it not be reasonable to allow a company or institution to market it as a product? In this day of rapid and thorough communication, with information moving freely and quickly from the research world to patients, physicians, and providers, most companies will be required by the market to provide some evidence of therapeutic effect in treating a condition. It takes use by many patients before we really know how well a drug works, and in many instances evidence of therapeutic usefulness, or lack thereof, is not apparent until a drug has spent several years in the hands of physicians and patients.
My opinion is based on the assumption that no drugs are without safety risk, and all will produce some side effects. The significance of these depends on the disease state they are trying to treat. Chemotherapeutic drugs are some of the most prescribed drugs in the history of medicine, yet their side effects and safety profiles are terrible. We allow them because the diseases being treated are essentially terminal, and so patients accept the risk associated with the therapeutic in order to have a chance to live longer, or with better quality of life. Drugs to alleviate pain are also highly prescribed, and in this case the people taking the drugs will usually tolerate some low-level side effects, particularly if they are dose dependent. The FDA allows these drugs on the market, despite their highly addictive nature, because of the condition and the lack of available alternatives. Alzheimer's drugs are another example where a strong safety profile, plus some reasonable rationale and data suggesting efficacy, should be enough to allow doctors and patients to make their own decisions on prescription and usage.
Thus, the first job of the FDA is to judge whether a drug that has shown some evidence of helping patients has side effects serious enough to keep it from going to market, or whether certain populations of patients might be at higher risk of suffering significant problems from the side effects of the drug.
Once the decision on safety has been made, whether binary or relative to patient state, the FDA should require some level of experimental data in humans showing a positive therapeutic effect in a population of patients. Positive data from a Phase II trial, for example, could be sufficient. In reality, no reimbursement institution will approve paying for a drug that lacks efficacy data, so product developers will need to plan for clinical trials, just as they do now. In the meantime, however, why not allow all experimental drugs and devices that have demonstrated sufficient safety to be prescribed by physicians? My belief is that this would lead us away from a series of placebo-controlled trials and toward more comparative efficacy trials, which would hold more meaning for physicians, patients, and providers. I will take up the question of placebo-controlled trials in a later blog.