A researcher in virology and immunotherapy got bad news: Her cancer was back with a vengeance; the treatments weren’t working. It was growing and spreading. Traditional options wouldn’t work for her.
She had no professional experience with cancer, but she knew a lot about viruses and the immune system. Her scientific expertise suggested a potential cure: Some viruses can debilitate cancers, so injecting them into the tumor might fight the cancer off. The only problem? The treatment was not yet proven, and no such drugs were approved for use on her cancer. So what did she do?
She prepared a couple of viruses that she thought might kill her cancer, and she injected them into herself.
The cancer started shrinking.
Within a couple of months, it was small enough that surgeons could cut it out. Four years later, she’s still cancer-free.
But the ethicists disapproved of her gamble.
When I first read about this, I had so many questions:
Why didn’t her prior treatments work?
Why did she think those particular viruses would work against cancer?
Why does she think the viruses worked?
Why are there no available treatments that use this technique?
What potential does it have?
Why did so many journals reject the paper she wrote about her case?
Her name is Beata Halassy, and this is her story.
1. The Cancer
Beata’s breast cancer was first diagnosed in 2016. It was a nasty tumor.1 She underwent a mastectomy2 and then chemotherapy. Unfortunately, two years later the cancer returned, and it was as aggressive as the first. Doctors cut it out again. Then, in 2020, she discovered a new hard tumor, 2 cm in size (nearly an inch).
It doesn’t bode well to have a cancer that aggressively returns every two years. What could she do?
She could go the traditional way: another round of surgery, chemo, radiation… and then cross her fingers that it never comes back, knowing that it probably would, and at some point might prevail.
Or she could try something different: something that, based on her expertise, she suspected would succeed.
2. Viruses to Treat Cancers
Over a century ago, there were reports that cervical cancers regressed after rabies vaccinations, and that lymphomas remitted after a bout of measles.3 Doctors began studying this, and in the 1950s they observed tumors being destroyed by viruses. Researchers tested several viruses to treat cancers, and in many patients tumors regressed without strong side effects.4 Unfortunately, treatment with these viruses didn’t fully cure the cancers, which eventually started growing again, so researchers dropped this line of research. But virology was in its infancy then, and we’ve continued making observations that suggest this could work—for example, cancers that spontaneously went into remission after a COVID infection. In the last few years, research in this domain has exploded, although only one drug has been approved in the US, and only for use when all else has failed.5
3. Beata’s Bet
Beata knew all this, so she had to make a bet:
Try the same treatments again that had failed twice before
Try something new and unproven, but that looked promising to her
Beata bet on herself.
She talked about it with her oncologist—who was not amused.
Can’t you just do the normal thing, Beata?
No. But can you help me track this anyway?
Will I be able to convince you?
No.
Ugh, alright, let’s try it.
So Beata’s team injected her tumor with a live measles virus! They did this every few days for several weeks. The hope was that, as the theory suggested:
The virus would preferentially attack and kill tumor cells.
This would disperse parts of the dead tumor cells.
The virus would also elicit a strong immune response.
Immune cells would rush to the area to fight the virus, find the pieces of dead tumor cells, learn to recognize them, and start attacking them, too.
If that worked, however, the immune system might learn to eliminate the measles virus (MeV) before it finished off the tumor. The answer to this was to inject another virus into the tumor—this time, one called VSV.6 So Beata’s treatment phase lasted for 50 days, with 10 virus injections in total.7,8
So what happened?
The cancer shrank
These are ultrasounds of Beata’s tumor:
Here is the corresponding data from the MRIs and ultrasounds:
In the first few days, the tumor actually grew so much that the oncologist suggested that Beata stop and return to a more traditional treatment. But Beata had a good hypothesis for why this was happening: simple inflammation from the immune reaction! Of course the area swells when immune cells rush in to fight an infection.
And she had made the right call, because soon after that, the tumor started shrinking. At the same time, Beata saw that her immune system was reacting to the viruses, because blood tests showed antibodies against them:
So Beata waited. Two months after treatment started, the tumor was about 65% smaller, and also much softer and less attached to the muscle and skin, so she had it surgically removed.
She received a final measles vaccine a month later, and for a year she also took a drug to treat her specific type of cancer.9 She has been cancer-free ever since, for 4 years now.
4. The Unpublishable Paper
You’d expect this to be fantastic news for cancer research: An expert who thoughtfully applies her virology knowledge, breaks through red tape,10 and runs a single-subject study that informs future research? Yes please!
So Beata and her team wrote a paper on it. Laypeople were pretty excited about it, and the paper blew up. But not just because of the great results: ethical objections became an obstacle to its publication.
It was a 2.5-year battle with different reviewers and editors from 13 different journals.
Why so much grief?
I read the reviews from the journal that eventually published the paper, as well as reviews from some of the journals that declined it, so I got a good sense of their concerns.
Some were legitimate, like why certain viruses were chosen, and why on that schedule. Or the fact that Beata used lab-grade viruses instead of clinical-grade ones.11 But the majority of concerns were about the ethics of self-experimentation, which we could summarize with:
You experimented on yourself. This is not standard procedure: Usually, you want the patient and the experimenter to be different people, with an Institutional Review Board (IRB) reviewing the process. Here, the experimenter and patient are the same, and there was no ethical review. I’m not even sure this is legal. Moreover, the patient here was a true expert, but other people might be inspired and wrongly think they’re expert enough to treat themselves, so publishing this might give people dangerous ideas. Therefore, we will not publish the paper.
Which we can translate into broadly three concerns:
The experimenter is the patient
There was no IRB
Publishing this might give people dangerous ideas
Why is this reasoning wrong?
The History of Self-Experimentation
Here’s a table of Nobel Prize winners who self-experimented in the field in which they won their prize:12
And it’s not just in healthcare.
Many big breakthroughs in science come from self-experimentation—the researcher notices something interesting and focuses her research on it. It’s nearly always the first step. There would be no theory of gravity without an apple falling on Sir Isaac Newton’s head. Self-experimentation results are not proof, but they bring mighty clues.
So why so much resistance to it?
The Self-Experimenting Bureaucracy
The healthcare system’s rules were built up one death at a time. Typically, deaths of this type:
DRUG PROMOTER: You should totally try this new drug I concocted, based on snake fat extracts.
SICK PERSON: Are you sure? I don’t know, I’m not so convinced…
DRUG PROMOTER: Yes! For sure, it will work, don’t worry! You’ll feel as good as new!
SICK PERSON: Alright, let’s try it… Aaargghhh!!
DRUG PROMOTER: Oops…
In other words: The experimenter reaps some benefit from the experiment, at very little cost. Nearly all the cost is borne by the patient. So what’s right for the experimenter might not be right for the patient. There’s a misalignment of incentives.
That’s the normal case, and it’s what most of the healthcare system is designed to avoid.13 That’s why the typical process for research is:
The patient must be well informed and consenting
An IRB should oversee the project to align incentives between patient and researcher, mainly to make sure the first point is covered, and that the benefits outweigh the risks for the patient14
The problem is that, once a rule is implemented, 98% of humans will forget why the rule is there, and will just remember the rule and the need to follow it. You’ve probably seen this at work: How many times have you suffered through a worthless weekly meeting? Somebody once set it up for a specific purpose, everyone forgot what that was, and nobody ever questioned whether it was still needed. So everyone suffers through it.
The logic of the rule is:
(a) The Experimenter is different from the Patient → (b) There is a misalignment of incentives between them → (c) Experiments should have an IRB to resolve it
But people forget the first two steps and just remember: Experiments should have an IRB.
And since this works for 99.9% of the cases they stumble upon in their lives, they internalize that “The rule I learned is good and valid in all cases.”
Then, one day, they face a situation where steps (a) and (b) don’t apply, but their robobrain can’t compute it because the PROTOCOL SAYS (c) is THE TRUTH, and they don’t challenge it. They might not be able to pinpoint why exactly the rule applies, but it does for them, so… there.
But this is wrong. The point of the IRB is to align incentives between patient and experimenter. If the incentives are the same, there is no need for an IRB.15
A recent review of self-experimentation notes that in the US, self-experimentation is not regulated. The upside is that institutions and their Ethics Committees can do what they want: some consider that Ethics Committees aren’t responsible for cases of self-experimentation; unfortunately, others believe ethical approval is required. In the rest of the world, the issue is discussed even less. Self-experimentation by experts comes up so infrequently that most ethics boards don’t know how to think about it, and simply default to the bureaucratic response.
These ethics boards should know: Self-experimentation is OK and doesn’t require ethical oversight, especially when the researcher-patient is an expert.
When Is It Wrong to Publish Research?
Some journals didn’t want to publish Beata’s research so as not to endorse it.
I think poorly-designed research should not be carried out, because the cost of doing the research might be higher than the benefits of the learnings. So it makes sense to me when IRBs advise that certain research shouldn’t move forward.
But once the research has been carried out, the cost has been incurred. What’s the point of withholding the learnings from humanity—even if they’re small? One way to think about this is that, in healthcare research, all the cost is upfront (come up with treatments, recruit people to test them, have people undergo the treatment…), and all the benefits are down the line (the learning of whether the treatment works or not). The research you want to buy might be too expensive and shouldn’t be bought, but if you’ve already bought it, you might as well enjoy it.
You could argue that the learnings actually have net negative value to humanity. That could be true, for example, for a paper outlining how to craft a nuclear bomb with household items. Is Beata’s paper that type of case?
The benefit of her paper is these insights:
Fighting cancers with viruses is promising
We might want to do it as a first line of treatment
Many injections, one every few days, sounds like a good approach
We should probably try more than one virus
The cost is: Maybe some dumb people will misinterpret it and take it as an incitement to perform questionable tests on themselves.
Everybody is self-experimenting all the time, from trying different foods, to different habits, supplements, activities… Some of these tests are more aggressive than others, some more thoughtful than others. But with shows like Jackass and TikTok challenges, there are plenty of people willing to try dumb stuff on themselves. We should not penalize the rest of us because these people exist. The existence of a scientific paper on oncolytic virotherapy (oncolytic = “cancer-killing”, virotherapy = “therapy with viruses”) won’t make a dent. But maybe curing some cancers with viruses? That’s something I want people to know about!
All in all, it’s clear to me that more people should know about this.16
The Problem with IRBs—and Healthcare Regulations in General
If you take a step back, though, there’s a much grimmer aspect to this story. It took Dr. Halassy 2.5 years to publish this research. If it had been published on the first attempt, 2.5 years earlier, how much earlier would we have tried to test viruses against cancer as a first line of defense? If a cure for certain types of cancers has now been delayed by 2.5 years because of all this bureaucracy, how many people will have died because of that? Maybe 10,000 people will have died who wouldn’t have otherwise. What is the cost of this bureaucracy? Do we account for that? Are the supposed ethicists who make these decisions weighing the cost of their inaction, or are they just covering the downside?
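To get a feel for the order of magnitude of that claim, here’s a minimal back-of-envelope sketch. Every input is a hypothetical placeholder chosen purely for illustration, not data from Beata’s case or from any study; plug in your own estimates.

```python
# Back-of-envelope: extra deaths attributable to a 2.5-year publication delay.
# All numbers below are hypothetical placeholders, not real data.

annual_deaths = 500_000    # yearly deaths from cancers such a therapy might address (assumed)
p_therapy_works = 0.05     # probability this research line yields an effective therapy (assumed)
fraction_saved = 0.20      # fraction of those deaths the therapy would prevent (assumed)
delay_years = 2.5          # how long publication was delayed

extra_deaths = annual_deaths * p_therapy_works * fraction_saved * delay_years
print(f"Expected extra deaths from the delay: {extra_deaths:,.0f}")
# With these placeholders: 500,000 x 0.05 x 0.20 x 2.5 = 12,500
```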
In Standing on the Shoulders of Gnomes, I explained how taking some risks is necessary for progress:
Nearly four million Americans have died in car accidents.
Society could have decided: Not a single car death is acceptable. We won’t allow any car on the streets that can kill somebody.
If we had made that decision, we would still be using bicycles today, and cars would cost $10 million a unit.
But together we decided that the deaths of the few are worth the freedom of the many. We made our peace with road deaths, and walked into the future.
Car manufacturers would then analyze every accident, every death. Oh wow, this frontal crash pushed the shaft through the driver’s heart. Maybe we should design it so that doesn’t happen again.
Millions have died so you can have a safer car.
The healthcare system opted for the bicycles. We are now hostages to its decisions, as millions die of ailments for which we could have a cure.
A few examples from Book Review: From Oversight To Overkill:
One IRB required researchers to explain to subjects that they could catch smallpox from skin contact in a trial—despite the fact that it’s been extinct in the wild since the 70s.
A doctor who wanted to start storing patient data they were already capturing had 27 requests for changes, including a back-and-forth on whether they should use pens or pencils.
One study of medical checklists for hygiene procedures was delayed for months. No patient data was involved, but the researchers were still required to obtain consent, which was practically impossible considering that many of the patients were unconscious. This probably killed thousands of people who would have been saved had the checklists been adopted earlier.
One time, thousands of studies across dozens of universities were canceled (and tens of millions of dollars lost) because somebody died in one asthma trial in one university.
This type of example is so common and abusive that a researcher said in a survey “I hope all those at OHRP [the bureaucracy in charge of IRBs] and the ethicists die of diseases that we could have made significant progress on if we had [the research materials IRBs are banning us from using].”
The system was not always like this. It used to be the Wild West, until a light amount of process was introduced, which oversaw major research improvements. But then came regulatory creep: the cost became too onerous while the outcomes barely improved.
So the cost-benefit calculation looks like—save a tiny handful of people per year, while killing 10,000 to 100,000 more, for a price tag of $1.6 billion. If this were a medication, I would not prescribe it.—Book Review: From Oversight To Overkill, Astral Codex Ten.
How does this translate into real life? The requirements for studies are just too high.
I spoke with Alan Parker, and he said:
One of the most interesting things about Beata’s case is the approach to treating her cancer. 11 injections with two different viruses? I can’t even imagine trying to get approval for a treatment like that.17
Which I interpreted as: The process for healthcare regulatory approval is too stringent and prevents us from designing the treatments we think would work best.
How do we solve this? From Oversight To Overkill, as reported by Scott Alexander, proposes this:
Allow zero-risk research (for example, testing urine samples a patient has already provided) with verbal or minimal written consent.
Allow consent forms to skip trivial issues no one cares about (“aspirin might taste bad”) and optimize them for patient understanding instead of liability avoidance.
Let each institution run their IRB with limited federal interference. Big institutions doing dangerous studies can enforce more regulations; small institutions doing simpler ones can be more permissive. The government only has to step in when some institution seems to be failing really badly.
Researchers should be allowed to appeal IRB decisions to higher authorities like deans or chancellors.
I would add these:
Decisions should be made to optimize the cost-benefit of the research, not just to minimize risk.
Once the research is done, it should be published, with lots of caveats for all the ways it could have failed.
Note that all this doesn’t mean “research is bad and we should do alternative medicine”. Formalizing research is good. IRBs are good. Science is good. What I’m saying is that the specific requirements of research are too stringent today. As a result, to save a few hundred people a day from research shortcomings, we are killing millions of people in the future. And because nobody protests when the treatment that would have saved their life is still 10 years away due to research stagnation, these millions of avoidable deaths have nobody to defend them.
We should defend them.
Takeaways: What Have We Learned from This Paper?
There are three broad takeaways from this paper:
1. Self-experimentation is OK
It has done lots of good throughout history.
It’s an ideal first step to research.
It shouldn’t need approval from an ethics board, as long as the experimenter is knowledgeable, because there’s no misalignment of incentives between researcher and patient.
This is yet another example where the medical bureaucracy loses the plot, follows the letter of the rules while forgetting their spirit, and only worries about the risk without accounting for the benefit. Let people self-experiment!
Journals should also stop being so paternalistic and trying to protect the public from useful research.
2. Healthcare regulation should be reduced
Lighten research regulation requirements—details above.
Make decisions based on cost-benefit, not just risk reduction.
3. An expert probably cured her own cancer by injecting herself with some viruses
But hold on, didn’t we already know this was possible?
Yes.
Then why couldn’t we make it work? Are we testing this type of approach on people?
Yes, but not the way we should.
How so? How should we be using viruses to kill cancer cells?
And why are viruses more likely to kill cancer cells than healthy ones?
Why did Beata use these viruses and not others? Why two? Are some better than others at killing cancers? Why so many injections?
I will answer these questions, and many others, in the next article, in which I interview Beata Halassy.
Called a triple-negative breast cancer. It’s triple-negative because the cancer cells don’t have enough estrogen receptors, progesterone receptors, or human epidermal growth factor receptors—receptors that can be targeted by the anti-cancer drugs traditionally used to treat breast cancer. These cancers are the deadliest form of breast cancer.
Excision of a breast
Darshini Kuruppu & Kenneth K. Tanabe (2005), Viral oncolysis by herpes simplex virus and other viruses, Cancer Biology & Therapy, 4:5, 524–531. DOI: 10.4161/cbt.4.5.1820
Notably, in a particularly promising study from Osaka University, tumor regressions were reported in 37 of 90 terminal cancer patients, with a variety of tumor histologies, treated with a nonattenuated mumps virus.
T-VEC (Imlygic), for recurring melanomas. Oncorine (H101) was approved by the China Food and Drug Administration (CFDA) in 2005, to be used in combination with chemotherapy for late-stage refractory nasopharyngeal carcinoma and other head and neck cancers (so not first-line treatment).
Vesicular stomatitis virus, Indiana strain
An 11th injection was given a few months later to remind the body to fight the cancer.
She sourced the viruses from providers, and then adapted them herself in her lab. The resulting viruses were not as clean as a pharmaceutical company’s viruses would have been, but they were good enough. We’ll come back to this.
It turns out the tumor was not triple-negative, but instead had human epidermal growth factor receptors, which a drug can use to target the cancer.
This red tape has a function: Protect non-experts from snake-oil salesmen. This is important because many predators try to make money off of other people’s misery. But these anti-predator defenses come at the cost of slowing down other research. Also, predators are not just peddlers. They can be bad-faith researchers (or simply uninformed ones) who want career advancement by tweaking their study results. So please, don’t try this at home.
Clinical-grade preparations go through more rounds of purification than lab-grade ones. If she had used clinical-grade viruses, she would have known for sure that what had an impact was the virus, and not other material that hadn’t been cleanly purified out of the lab-grade preparation.
This list is limited to those who won a Nobel Prize in their field of self-experimentation! Many other Nobel Prize winners have done self-experimentation. One that comes to mind is Richard Feynman, whose book Surely You’re Joking, Mr. Feynman! I just read and enjoyed.
Phase I trials confirm the side-effects are not too bad. The IRB checks many things, including:
Do the potential benefits outweigh the risks for the experimenting patient?
Is the patient properly consenting and informed?
Is the participant’s data safely protected?
Is the research fully legal?
Is it properly documented?
Another reviewer was concerned that the researcher might not be neutral. This could indeed be the case, especially if she had been the only author of the paper. But the paper has 9 authors, and they shared their methodology before performing the experiment—which is now the standard approach—and the treatment was executed not by the patient, but by the other authors. Many of the co-authors expected this to fail, too, and were only swayed to attempt this because of Beata’s insistence. Could the data be fabricated? Yes, like in any other paper. That’s why peer review exists. Having peer reviewers complain about the data potentially being biased sounds weird to me—it’s their job to figure that out.
We will see how they went about designing the treatment regime in next week’s article.