The need for a heroic effort?

If there is one thing I am particularly proud of, it is providing insights into HOW to carry out research. Whether my suggestions are useful or not, you can be the judge. Overall, this topic is related to the problem of Eroom's law in drug discovery, which observes that the pace of research in the drug industry has slowed down and that research is becoming ever more expensive: the number of new drugs approved per billion dollars of R&D spending has roughly halved every nine years. The authors speculate that the regulatory climate is one of several reasons. I will go on to suggest that the regulatory framework is even more inadequate to deal with the science of biogerontology.

Methods to accelerate research progress in biogerontology - a recap from my blog
1. We need to take Tauber's paradox and preclinical data seriously when designing human studies if we want to help our aging population. On the one hand, there is a political reason to play it slow, because any failure may lead to bad publicity and public backlash. From a purely intellectual point of view, however, there is every reason to be aggressive about aging research, because the risk/benefit ratio is so much better for aging interventions than for any other drug. Our conservative approach has led to large studies using an inferior anti-aging agent, metformin, and almost no useful human research on rapamycin and its derivatives (1). This fear of side effects has also ruined human calorie-restriction research because "classic harm" and "biogerontologic benefit" were weighed incorrectly (2).

2. We have to re-consider how we carry out animal research. Regulatory bodies could leverage pre-clinical testing as a shortcut to mass screening of potential anti-aging compounds (3).

Do we need heroic human studies?
So we have established that we should take higher risks in aging research, because the risk has to be proportional to the expected benefit. However, the question remains whether we should accept studies with an appreciable risk of immediate harm and death to advance science. To some extent the answer is a trivial and emphatic "yes" that boils down to informed consent. But how risky is too risky? And are there any other problems?

First, let us look at scientific and engineering pursuits where we accept moderate to enormous risk for little benefit (compared to aging research):

We undertake space travel and research, launching people into orbit on flying bombs. Then there is the interest in the enigmatic and hidden. Research on the Antarctic continent does not sound very safe to me, although I am not aware of recent fatalities. Plenty of other researchers risk their lives, some going to remote habitats to understand biodiversity in the jungles of war-torn nations. Manned deep-sea research is also dangerous yet beautiful, culminating in the Trieste descending into the Mariana Trench. Caving and cave research is also not the safest pastime. Then there is mountaineering, hurricane, and volcano research. To be fair, there is some overlap between "hobby adventurer" and "researcher" that accounts for the increased risk.

We undertake heroic efforts to treat invariably fatal diseases like aggressive cancer or rabies.

Then there is high-speed travel, ranging from rocket-powered sleds, cars, and planes, through high-speed electric trains, up to people jumping from the stratosphere and other "research" loosely related to G-forces.

The medical study and supervision of highly dangerous behaviour is an interesting corner case. If we support extreme athletes, their sport becomes safer. Yet if we don't support them, fewer will take up the sport due to safety concerns, thereby protecting athletes. What is the ethical choice? Ultra-endurance sport is a typical example (and so is doping). Here we have doctors who abet athletes in their quest to destroy their bodies to win a trophy, e.g. in the 4,800 km Race Across America (article in German).

Consider research on high explosives and other unstable chemical matter. Why do chemists work with these substances? Why do we allow them to risk their health and the safety of their lab? For that matter, why do we allow people to work with dangerous chemicals in industry? Accidents happen; people die. Why may they willingly die for chemistry, gasoline production, or volcano research, but not for medical science?

Research on pathogens requiring biosafety level 4 is inherently dangerous, although incidents are more common in lower-biosafety labs. There is no lack of dangerous research-related jobs.

Then there are mundane and extraordinary occupations where we accept risk for the benefit of society or to satisfy our curiosity and desire to feel alive:
Fighting in wars (let's remember that defensive and strategic wars are not universally shunned)
War reporting
Doctors without borders and other NGOs
Mountaineering and climbing

Any highly competitive athlete and other "elite" jobs that require sacrifice. This includes the yuppie working 70-hour weeks in an investment bank while overdosing on coke, and Michael Phelps ingesting upwards of 12,000 kcal per day.

We don't need to look that far, however, since even mundane jobs are quite dangerous. Many people are at an increased risk of work-related death, and thousands die every year in the US alone. The following jobs are risky: farmers and ranchers (10x the average work-related mortality), police (5x), fishers (30x), loggers (22x), steel workers (11x), pilots (17x), drivers (5x to 7x), and construction laborers (5x).

The only similarity between these activities is informed consent and the desire to work for the greater good, or the desire to find meaning in life. Yet there is absolutely no framework for people to take on risk to help aging research. Phase I trials at least allow people to support regular drug research, though the comparison with aging research is problematic. Participants in a phase I trial usually can't benefit from the treatment, yet they are exposed to risks nonetheless. In contrast, everyone ages, so there is always the potential for benefit in anti-aging studies, which immediately improves the risk/benefit ratio (a benefit on top of Tauber's paradox).
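To make this asymmetry concrete, here is a toy expected-value sketch. All of the numbers below are hypothetical, chosen only to show the structure of the argument, not to estimate real trial risks:

```python
# Illustrative expected-value comparison (all numbers are made up).
# A classic phase I participant bears risk but, being healthy, cannot
# benefit from the drug; a participant in an aging trial bears the
# same nominal risk but belongs to the target population by definition.

def expected_value(p_harm, cost_of_harm, p_benefit, size_of_benefit):
    """Naive expected utility for one participant (arbitrary units)."""
    return p_benefit * size_of_benefit - p_harm * cost_of_harm

# Hypothetical inputs: identical harm profile, different benefit profile.
phase1 = expected_value(p_harm=0.01, cost_of_harm=100,
                        p_benefit=0.0, size_of_benefit=0)   # healthy volunteer
aging = expected_value(p_harm=0.01, cost_of_harm=100,
                       p_benefit=0.5, size_of_benefit=10)   # everyone ages

print(phase1)  # negative: pure risk, no possible benefit
print(aging)   # can be positive even at identical risk
```

The point is not the particular values but the sign of the result: with zero possible benefit, any non-zero risk makes the expected value negative, whereas a population that actually has the target condition can come out ahead at the same level of risk.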

Why not?
Clinical research has a dark history, from Japanese "research" on Chinese prisoners, through Josef Mengele, to the Tuskegee study. To this day, people fear chemicals, big corporations, and other uncontrollable and intangible risks. Yet all these crimes have a complete lack of informed consent in common and are nothing like modern research. Logically, anyone should be allowed to give their life to science and to risk their health for the greater good. But the lingering fears in the public mean that any high-profile accident could fuel a strong bio-luddite movement.

So let's be honest, at least between friends. To some extent we are careful about human research out of fear and strategic considerations, simply because people are misinformed (although not stupid, and there is always hope for change!).

However, don't we fuel this vicious circle by giving in? If we tell people only what they want to hear, how can we change the system? I think we need to educate people about the real risks and benefits of (heroic) human studies and invest more in understanding the risks of bio-ludditism, while slowly modernizing phase I-type research (basically a transitional period). At the end of this journey, we need to define a framework for high-risk/high-reward research.

Let's take a very current real-world example. Senescent cell ablation can reverse many age-related pathologies in animal models. This is one of the biggest breakthroughs in aging research and will be awarded the Nobel prize sooner or later. Old people could benefit the most from such a therapy, unless, of course, they die before we approve it. It makes sense to start micro-dosing senolytics and screening for cell-lysis markers in healthy volunteers, ideally starting yesterday. Let me re-emphasize why this is orders of magnitude safer than any phase I trial ever performed: most phase I trials do not start with microdosing; the potential benefit is tremendous because of Tauber's paradox; and the multiplier for this benefit is non-zero, because your test population actually suffers from your target disease, unlike, say, a hypothetical HIV cure tested in a healthy population, where the expected benefit is zero. This does not even consider the longevity dividend at the population level (i.e. altruistic people might want to risk their lives because they benefit society).

Let's take a second example. Rapamycin is another Nobel prize in the making. This drug has been shown to rejuvenate middle-aged animals, and you could easily find thousands, probably millions, of healthy volunteers who would pay to take it. We could run a controlled phase I trial, or more of a cohort design. If I were slightly older, I would very much consider low-dose rapamycin myself. A self-study cohort similar to the CR Society could be easily and very cheaply established by laypeople. There is absolutely no way it would be unethical for doctors to supervise people who decided to experiment on themselves; indeed, the opposite might be the case.

When we run a phase I study or, even riskier, a do-it-yourself study organized online, there will be incidents. There will be severe adverse events, and perhaps there will be deaths. But in the future, billions of people will be spared premature suffering, and our sacrifices won't be in vain.

References & further reading

(1) Metformin vs Rapamycin

(2) In defense of being underweight
Set up to fail: are scientists stupid?

(3) Optimizing resource use - Chronic toxicity and preclinical studies

"Nobody climbs mountains for scientific reasons. Science is used to raise money for the expeditions, but you really climb for the hell of it." - Edmund Hillary

He, Shenghui, and Norman E. Sharpless. "Senescence in Health and Disease." Cell 169.6 (2017): 1000-1011.

Dangerous jobs