We could have instant drone delivery. We could have flying cars. We could have energy too cheap to meter. We could be rid of disease. We don’t because we’re too scared.
While I am sympathetic to some of the points here, I would have to respectfully disagree.
The reason we don't have many of these things is not bureaucracy and heavy regulation. Nor is it that tax rates aren't low enough.
1945–1971 was the golden age of capitalism and innovation, and the top marginal tax rate was as high as 91 percent.
Since we have steadily cut tax rates, especially for the wealthy, we've not gotten increased innovation. We have gotten increased hoarding. Which is not surprising because it's easier to make money from monopoly rents and stock buybacks than from productive innovation.
This is true even for the ostensible entrepreneurs who purportedly care only about changing the world, as evidenced by the fact that as they and their companies grow richer, they adopt the mindsets of the companies they replaced. People do risky things like trying to change the world or an industry because it's usually their only option to make something of themselves and leave a legacy. Once that is achieved to a degree, they revert to basic human laziness and caution, like the rest of us.
And it's simply untrue that politicians and the public are excessively risk averse. In the noughties and the early years of the last decade, Silicon Valley was championed everywhere. Indeed, several of these companies only got so big because the government didn't want to tamper with innovation.
It is only now, after the dreams they promised have soured, that people are pushing back.
It's the same thing with self-driving cars and medical data. The careless disregard Uber showed for the transportation industry and the manipulative ends to which Facebook put consumer data, among many other examples, are why people are so cautious.
They are cautious from experience, and they have every right to be. Nothing in the last two decades has merited the reckless and manipulative optimism these companies have sold us. Unless that changes, there is absolutely no reason for people's attitudes toward promises of new 'innovation' to change either.
I’m definitely not a trickle-down economics guy. I believe in redistribution (but how much?).
The point I make on taxes is quite novel, however, since the research was just published. I’d look into it if you disagree with it.
Among other things, it’s useful because the post-war is quite a unique period, and drawing any conclusions from one data point is risky.
The internet was unregulated because it was deemed safe, not because of a cry for freedom. As you can see, the moment a potential threat is identified, people clamor for regulation.
Thank you for the recommendation. I have looked into the paper. It was all right. I don't agree with all of it. In fact, I don't agree with most of it. But it makes some valid points.
History shows us that new technologies and industries are always exploited and mismanaged, especially when large sums of money are in play. Capitalists cut corners until they are told they can't.
Thanks for your other work, which is engaging and informative.
Your claim that AI could treat or prevent any disease if we only gave it access to healthcare data is far from the truth, and your supporting information is also inaccurate (I am confident in this through my work as a computational biologist who develops drugs & diagnostics).
The most important problem with the claim is that medicines work by addressing the molecular mechanisms of disease. Healthcare data contains virtually no information about the molecular mechanisms of disease. As a result, even unlimited AI cannot cure or treat all diseases by using healthcare data, or indeed have much impact at all.
In addition, your claims about access to healthcare data are inaccurate (at least for the US). Yes, I can access and share my healthcare data.
I'm all for more effective sharing of healthcare data, and there are potential benefits, but that goal is not advanced by wild claims.
I try to strike a balance between precision, insightfulness, and succinctness. Very hard. Sometimes, one of the 3 falls.
I think you’re right that the claim that AI can solve all diseases with access to raw EMR data is hyperbolic. It’s also not what I tried to convey.
First, the access is not just to raw data but also to papers. Second, the raw data is not meant to be what is accessible today, but what can be accessible tomorrow. Which includes some of the things you mention.
On the healthcare data, I’d be interested in knowing more. I once advised an EMR company, and that was my understanding at the time. But if it’s mistaken I’d love to know and correct the article.
"You hold the key to your health information and can send or have it sent to anyone you want."
This is not a hollow promise: I exercise these rights frequently. The Apple Health app on my iPhone downloads my records for me, I view them online, I have had them burned onto DVDs, and I have made them accessible to multiple third parties (physicians, a digital health startup...).
A milder misconception in your article is the idea that physicians are responsible for devising new treatments. Most physicians focus on treating patients. Scientists (biologists of all kinds, chemists, physicists) do the bulk of therapeutic discovery (partnering with physicians of course, especially for clinical trials).
I share your excitement about what AI can do with access to the scientific literature and, perhaps more importantly, the underlying data. That idea doesn't really come across in your article, which focuses more on data generated in the course of medical care.
I've updated the article on HIPAA, but I am not sure you read the entire piece? The link I shared was not on HIPAA, but on information blocking and the Cures Act.
Also, I'm not sure where you got the idea that I suggested doctors come up with new treatments. I certainly didn't mean to imply that; rather, that they have to figure out the right existing treatment given the diagnosis.
I think a main thing that currently limits flying cars and delivery drones is the amount of noise they produce. We have flying cars, in common parlance they are called helicopters. And they are loud.
Think of how many neighbours you have within 100 metres of your house. What if even 1 in 10 starts using flying cars?
Maybe someone will invent a quieter version. But you inherently have to move a large amount of air to stay up in the sky.
About that 1 breakdown in 100,000 km: an average car will cover that distance in about 7 years. It is not good if you have a fatal accident on average once in 7 years. Count me as one of the people who does not like this idea. That said, for our current generation of flying cars this is solved by requiring that they can land using auto-rotation. The rotor is designed so that during a descent it still creates enough lift to pick a landing spot and survive the impact, even without power.
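The 7-year figure is easy to sanity-check. The annual mileage below is my own assumption (roughly a European average), not a number from the comment:

```python
# Back-of-the-envelope check on "1 breakdown per 100,000 km ≈ once in 7 years".
# ~14,000 km/year is an assumed typical annual mileage (roughly a European average).
km_between_failures = 100_000
km_per_year = 14_000

years_between_failures = km_between_failures / km_per_year
print(f"about {years_between_failures:.0f} years between failures")  # about 7 years
```

With a higher-mileage assumption (e.g. the US average of ~21,000 km/year), the interval drops closer to 5 years, which only strengthens the point.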
Incidentally, a well known but underexploited way to substantially improve our health is to actually start using bicycles instead of cars whenever it is feasible. We’re leaving a couple of years of extra life expectancy on the table right there.
It is always a balance. Tolerating the number of people that get killed in traffic in the US is dumb. There are countries with maybe a third of that death toll per capita, and they have basically the same humans in the same cars as in the US. But on the other hand, I agree that we have to be honest about how safe or dangerous self-driving cars are. Should we compare them with the current situation on the road, or with the goal of zero fatalities? Maybe we should expect self-driving cars to become as safe as air travel (i.e. still much safer than travelling by road). Not sure what’s up with self-driving cars at the moment. I would have thought we would have self-driving on motorways by now.
So much good stuff here, Tomas! Thanks for writing this.
I think most safety rules and regulations in the US could stand a serious cost/benefit analysis. Take nuclear power’s radiation requirement. It is based on a linear extrapolation of known harm. If X amount of radiation kills one in a thousand, then set the allowable limit to X/1000 to lose less than one in a million.
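The linear extrapolation described above can be written out directly. X and the one-in-a-thousand figure are the comment's hypothetical numbers, not real dose data:

```python
# Linear no-threshold (LNT) extrapolation: fatality risk is assumed
# proportional to dose, with no safe threshold.
# X is an arbitrary reference dose at which the assumed risk is 1 in 1,000.
X = 1.0            # reference dose (arbitrary units, hypothetical)
risk_at_X = 1e-3   # assumed fatality risk at dose X (hypothetical)

def lnt_risk(dose):
    """Fatality risk under the linear no-threshold assumption."""
    return risk_at_X * dose / X

# Setting the allowable limit 1000x below the reference dose
# yields a one-in-a-million risk under the same linear assumption.
allowable_dose = X / 1000
print(lnt_risk(allowable_dose))  # 1e-06
```

The biological objection in the next comment is precisely that this proportionality may not hold at low doses, where cellular repair mechanisms operate.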
However, we evolved in a low-level radiation environment. Our cells have excellent mechanisms to repair the damage caused by the occasional gamma rays or fast neutrons that come our way.
From what I have read, no one died due to the radiation released at Fukushima. Thousands died from panic.
Prior to the war, at least, there was a community living quite well on the land adjacent to the Chernobyl reactor. They ate food from their gardens and were living longer, on average, than their neighbors.
But we find it very hard to write new rules that will make us “less safe.”
So, I buy paper-based fire-logs for my fireplace that have a warning label: danger, flammable. I buy knives with a label warning about sharp edges.
I did know about background radiation, but didn't know about the radiation rule. WTF... Makes it even worse to know that radiation outside a coal power plant is higher than outside a nuclear reactor...
I don't know that thousands died because of the panic. I believe it's hundreds. But your point is still valid.
Fukushima's effects will only be known in 20 years. Shortening the lives of thousands is no "low impact".
Warnings on packages are cheap, don't impact you, and if they save lives, where is the problem? These are two different issues you are trying to force together.
The issue with nuclear power plants is not the level of exposure. It's people not wanting them in their backyards and misconceptions about the risks. And you think that by reducing regulation, more people will want one? That's a non sequitur.
Right now plenty of communities would accept a nuclear plant. In the US, they just can't get it because the regulator has never approved a single one. So there is no opportunity for communities to decide whether they want one or not. The point of the article is to say: let the people decide. If they decide they don't want it, great! Now we have our answer.
Excessive warnings are harmful because they basically teach people that warnings can be safely ignored. They become background noise. If you need to warn about something that is actually unusual and dangerous, it will be impossible to make it stand out.
Excellent article Tomas. Satoshi for example, is credited with creating Bitcoin but he just compiled already existing technology, BitTorrent being the most prominent one, hence the name Bitcoin.
There's currently a movement to decentralize the financing and testing of innovative ideas using decentralized technology. It's called DeSci.
VitaDAO is one such effort, focused on longevity research, if you're interested.
Thanks for your article. A couple of additional things to consider -
- People in the past have frequently promised a utopia that requires a certain number of sacrifices to make it happen. Stalin did. "We'll achieve the worker's paradise that Hegel and Marx said is inevitable, but a few million will have to die along the way. It's just the price of progress, and it will be worth it. Trust me, trust the process." The trouble is, these are just promises, and there are so many unanticipated factors that could turn the miracle (e.g. antibiotics) into a set of new problems (e.g. superbugs) because of people doing the unexpected (e.g. low-dosing animals to promote growth). Creating a car culture was good... was it? Except it led to suburban sprawl, loss of neighbourhoods, pollution, etc. So many unintended consequences. DDT was going to be the next great miracle, until it wasn't. I'd rather see more bikes on streets than flying cars over my house at this point.
- There needs to be discussion about who decides, who benefits, who dies, and who pays some other price. Where are the reactors located? In whose backyard? Where do the benefits flow? What say do citizens have in everything? I don't trust the corporate class to make decisions that are in broader society's interest. It's not the way capitalism is designed.
In my province I can get all my medical records just by asking and I can share them as I wish.
About self-driving cars: this is a big, complicated issue. I believe we don't have the algorithms or the right sensors to do it right now. We can do pieces and eventually will be able to integrate them. I lived through this process in the industry I worked in. Initially there was a lot of hype, but we had to get down and build better sensors and better algorithms over the space of 40 years.
Truly we are numbed to the number of auto injuries and deaths yearly, but I think automation of the driving task will be one of the tools we can use to bring this down.
Applied geometry and trig goes back at least 12,000 years at Gobekli Tepe and probably to the time cordage was invented more than 100,000 years ago. If you want to see how they did it check out: https://lostartpress.com/products/euclids-door
This was an excellent think piece. And I’m personally all for the healthcare use case. Organizations in China are already using personal data/ai for diagnostic purposes. And we could expose all of our medical data to ai (with all those benefits) without exposing it directly to the people around us! (Other thoughts on that if you’re interested: https://ellegriffin.substack.com/p/greenhouses-instead-of-doctors)
I've continued to think about this piece since you published it. Initially I wanted to react viscerally and state that "no human life is expendable" in the context of automobile accidents (or medicine trials or other high-risk innovations), and I do still believe that. Thus, traffic programs like "Zero Fatalities" seem like a good thing, and if your goal is to preserve human life at the cost of all other variables, they are (especially when the risk level is unequal, as it is when comparing a driver of a vehicle to a pedestrian in the same space). But after rereading your article, I see it's more of a "Safety Third" approach to innovation (if you don't know this reassessment of the old Safety First mantra, I suggest you do a quick search). It's not that fear and failures are bad things to be avoided at all costs (as I tell my kids, you don't learn anything new when you are always right); it's that fear of failure shouldn't be the guiding principle. Thanks for your pieces... always interesting.
The extreme of deaths is important to tackle though, because it’s the one that forces tradeoffs.
First: human behavior is not consistent with the statement that life is invaluable.
Of course, your life is invaluable to you. Maybe the life of your loved ones is also invaluable.
But imagine that you had an official email from the White House that said: “your neighbor will die unless you pay $1M to save him”. Would you pay that? You probably wouldn’t. But if the price tag was $1, you probably would. There’s probably a price between $1 and $1M up to which you would pay.
And this is for your neighbor. For an unknown, it’s much less.
In fact, we know this because the effective altruism movement calculates this kind of thing. The value of a remote life is about $2k, in the sense that this is the price we need to pay today to save one additional life (malaria nets; this is now a hot topic, but the spirit is valid).
Another type of industry deals with the value of lives consistently: insurance. Insurers know how much a life costs because they need to pay for them, and people are willing to pay only a certain amount to insure themselves against death. Based on what people are willing to pay, and their odds of death, you can broadly calculate the value of their lives. In the West this is between a few hundred thousand dollars and a few million.
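The calculation described here is a one-liner: divide what someone will pay to remove a small risk of death by the size of that risk. The specific numbers below are illustrative assumptions, not actuarial data:

```python
# Implied value of a statistical life (VSL) from willingness to pay.
# Both numbers are illustrative assumptions, not real actuarial figures.
willingness_to_pay = 500       # $/year someone will pay to eliminate a small risk
annual_death_risk = 1 / 2000   # the small annual risk of death being eliminated

# If you'll pay $500 to remove a 1-in-2000 chance of death,
# you are implicitly valuing a statistical life at $1,000,000.
implied_vsl = willingness_to_pay / annual_death_risk
print(f"${implied_vsl:,.0f}")  # $1,000,000
```

Regulators use the same arithmetic in reverse: given an assumed VSL, they decide how much a safety measure is "worth" per expected life saved.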
Another: we know how to get driving deaths to zero: lower speed limits to ten miles per hour.
We don’t because we would rather incur the small risk of an accident for the benefit of higher speed than to eliminate the risk altogether. Which means implicitly we have a tradeoff of life vs convenience.
Another: people sometimes die at work, and some jobs are more dangerous than others. So much so that in an industry like energy, you can calculate the deaths per TWh for each type of energy. This is one of many factors considered when building certain energy types vs others (e.g. coal is the worst, hydro can be very bad depending on your data, nuclear is the safest by far). Other factors include price, pollution, reliability, location, etc.
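The comparison is straightforward once you have the rates. The figures below are approximate, roughly in line with commonly cited Our World in Data estimates, and are only meant to show the ordering described above:

```python
# Approximate deaths per TWh of electricity produced, by source.
# Figures are approximate, in the spirit of commonly cited
# Our World in Data estimates; exact values vary by study.
deaths_per_twh = {
    "coal": 24.6,
    "gas": 2.8,
    "hydro": 1.3,   # dominated by rare dam failures, so data-sensitive
    "wind": 0.04,
    "nuclear": 0.03,
}

# Rank from most to least dangerous per unit of energy.
for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: -kv[1]):
    print(f"{source:8s} {rate:6.2f}")
```

Coal comes out hundreds of times deadlier per TWh than nuclear, which is the tradeoff the comment is pointing at.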
All of these show that your life might be invaluable to you, but human life on average is not. We have implicit values of life unveiled by our behavior.
What this article tries to explain is that, in our behavior, we're implicitly saying the value of people living today is substantially higher than the value of people living tomorrow (including our future selves), and that this is making us so cautious that we hinder progress.
First of all, thank you for replying thoughtfully to my grammatically (and formattingly) stunted post.
Agreed, we don't stand too far from a shared perspective here. However, I know I personally struggle every day with the balance you suggest in this article... value of safety vs value of innovation, value of risk vs reward, value of the individual vs the group, value of the future vs the present. As a scientist, I lean on data-driven decisions, but even so I'm susceptible to unconscious bias toward my own perspective (and my immediate surroundings), and my own risk aversion, or lack thereof, skews all future conversation. How do we get beyond that bias, elevate the risk our society and individuals find acceptable, and thus increase potential future innovation? Also, can we do it while minimizing recklessness (progress for progress's sake) and without subversively elevating risk for a group that hasn't weighed in on the matter?
About Newton.... Yes, people had a practical knowledge of gravity in that it always pulls down in proportion to the mass of the object. What they lacked was the math necessary to predict it - and the ability to divide by zero.
Comparing Newton’s contributions to those of a gnome like me, a fair judge would consider him a giant. In his miracle year, when Cambridge was closed due to the plague, I think, he worked out the calculus and the visible light spectrum. In my lifetime I have done what, exactly?
Maybe we are all gnomes. But some gnomes are WAY TALLER than others.
Hahahha I didn’t mean to say there are no ppl who contribute more than others (some much more!).
Rather, that we have this image that most progress is just these giants when in fact it isn’t.
First, that eliminates the contribution of gnomes (and it adds up to much more overall!)
Second, it overweighs the contribution of giants, because it assumes each of them contributes a ton, when in fact they’re standing on the shoulders of others, which means their contribution is not as big as it seems.
Which you can see in the fact that most inventions happen independently at around the same time.
Can you present the economic argument of those who would increase taxes on the wealthiest, and how that might improve innovation from their point of view?
Might your first point, that complexity grows with the square of the organization's size, suggest that billion-dollar corporations be broken up, reducing their taxation burden by downsizing?
On the first part: Nobody thinks increasing taxation increases innovation. They simply don’t take innovation into consideration, when in fact it’s probably a bigger factor than others.
On the Metcalfe Antithesis: there are forces that favor concentration and others that penalize it. This one penalizes it. But there are others.
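One standard mechanism behind the "complexity grows with the square of size" claim (my gloss, not something stated in the thread) is pairwise coordination: the number of possible communication channels between n people grows roughly as n²:

```python
# Possible pairwise communication channels in an organization of n people:
# n * (n - 1) / 2, which grows roughly as n^2.
# This is one candidate mechanism for complexity scaling with the
# square of organization size (an assumption, not the article's claim).
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, channels(n))
# 10 45
# 100 4950
# 1000 499500
```

Growing headcount 100x multiplies potential coordination paths by roughly 10,000x, which is the diseconomy of scale working against the network-effect forces that favor concentration.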
Good thinking in so many ways. We're accelerating at an incredible rate, even with more guard rails on the overall process than 100 years ago. The number of mistakes you can learn from is also accelerating, and so many more are visible on the internet, so you get the benefit of someone else's errors for your own efforts.
Breakthroughs can occur at any moment. So many things didn't exist even a few years ago, and more "combinant" solutions like Uber and Airbnb simply weld new capabilities onto an old service, making it capable of immense growth.
I'm more worried about government keeping up; the nuclear regulatory point was instructive: zero approvals and millions budgeted to say "no".
The entire premise of the article is that it is guaranteed to save more lives later than would die in the near term. This is critically flawed. Self driving cars will not save lives until all cars are self driving, no human override is possible, and the processing power is increased several thousandfold. None of this would be possible in even a decade by 'allowing' deaths due to self driving cars.
Drone deliveries are not economically feasible, even if we allowed risk. The technology has existed for them for a while now. Until you fully replace the delivery trucks with drone trucks, you are fighting economy of action. A single truck can make more deliveries to most locations in any span of time and use less energy to do so.
Same with flying cars. Our imagination exceeds our technical capabilities. No existing technology in the aerospace or automotive industries can produce a flying car safe enough to build by the millions. Our current technology relies on mechanical means of propulsion. We train pilots for hundreds of hours before they can fly the safest aircraft in the world. Yet the argument for flying cars ignores the training required for the drivers, the decades of experience behind creating safe places for aircraft to land (radio, radar, separation, etc.), and the simple fact that mechanical parts fail frequently without frequent maintenance.
The article reads like a libertarian's fever dream ("why don't we drop more safety regulations so we can progress faster?") while ignoring the actual speed of progress we have achieved in spite of the mountains of safety rules. We didn't reach space by "accepting" deaths. Human progress has not slowed. Innovation is not being stifled by a lack of blood. Claiming otherwise is incredibly naive, caters to billionaires, and is ignorant of history. Show me that in 2022 our progress as a species has meaningfully slowed compared to our history. Show me how the Age of Sail produced more innovation, faster, than we have with OSHA and the NTSB and the FAA, etc. The answer is that you can't. The money to be made in industry-upsetting fields is massive. Yet our imagination outstrips the reality. AI is hard. Flying is hard. Drug research is hard. Throwing more bodies at any of these is not going to simplify any of it.
>"This is critically flawed. Self driving cars will not save lives until all cars are self driving" The entire premise of your criticism is puzzling. You posit that there is a lot of work to be done left, but how on earth is that relevant?
World A (now): it takes 15 years
World B (loosened regulations): it takes 10 years
The delta is 5 years.
It doesn't matter that some technology isn't ready to go right now. What matters is shortening the timelines.
The gain from the mentioned technology isn't apparent. You gain 5 years and lose how many lives? How many would be saved by delivering 5 years earlier? What happens if you make that calculation and are wrong, and it isn't ready in 10 years but 50?
Loosening regulations to save more lives requires a level of evaluation that is not seen here. There is no discussion of how mature the tech is, or even of how less regulation would speed it up. It's all assumptions and no substance.
We thought we'd have flying cars by 2000. Turns out it's more complicated. We thought we'd have bipedal robots by 2000. Same song and dance. If your estimate is wrong by orders of magnitude (in lives lost, lives saved, or time to achieve the tech), then any call to loosen regulations is incredibly short-sighted and only benefits those in charge.
Musk wants less regulation on self driving because he doesn't appreciate or care how complicated it is. Google is taking their time because they know exactly how hard it is and aren't calling for reforms. In this specific example, Google is wanting the tech to be mature enough before releasing it to the public while Musk wants less regulation to sell more cars. Who should you listen to? The one who is making great efforts on safety and innovation? Or the one who wants to increase the numbers in his next Forbes article?
As someone born in 1958, who has worked in large companies, small companies, and start-ups, who has lived and worked in several EU countries, the UK, in the US, and (importantly) SE Asia... I completely agree with you.
One of my favourite lines is "we need more sabre-toothed tigers". What I mean is that people have become so used to being protected from normal risks and dangers that they no longer think it is their responsibility. Sabre-toothed tigers presumably do not give a s**t about health and safety regulations. Ignore that fact and you might become lunch.
Let's start with the war. Or more accurately, with the response of the pro-Ukraine coalition. The correct attitude is "vicious dictators must be stopped now, or we will suffer worse in the future". In the words of the Manic Street Preachers song, "If you tolerate this, then your children will be next". I.e., we must lose some people's lives now, pay a lot of money now, and suffer some major economic discomfort now in order to re-establish international law and protect the future. The politicians did not do it in 2014, and now many more people are dying.
The argument that "Uber disrespected the transportation industry" is exactly the example made in this essay: allow a few problems to arise, and you very quickly discover a better solution. Over-regulate and you get something like the Black Cab industry of London: primitive, heavy, uncomfortable vehicles with scant use of modern tech. Compare that with Germany, where cabs are more likely to be a luxury Mercedes.
Back to my opening statement:
Every large company I ever worked for was incapable of moving fast. It wasn't just the rules that slowed us down, it was the petty jealousies and personal politics that forced everyone to be so careful not to damage their standing. (I once told the IT Director of a major mobile phone company that she was absolutely wrong, and was then asked to leave the project.)
Every small company and startup I worked for could change direction in a moment. And the teams were much better integrated and knew everyone by name. They developed their businesses fast. Big companies buy startups for exactly that reason. If you can't do it yourself, buy it.
So what about taking responsibility? Compare SE Asia with Europe, America, etc. If you hurt yourself in Viet Nam or Thailand or Laos, it's your problem. People will support you, but you let yourself get hurt. It's your fault. Electric circuits have no earth connection; we regularly get shocks, and we quickly learn to be careful with appliances near water. Designs of plugs and sockets are ridiculously unsafe and it is easy to get a shock, so we always handle plugs with care. There are sharp edges, low roofs and rusty nails everywhere; expect them or get hurt. Few people obey any sensible road rules and the only reason there are not more deaths is because people drive slowly. First-world drivers routinely suffer accidents in Asia, so why don't the locals? ...Because they don't think someone else should look after them.
And while on the subject of tax rates: I recall the "golden age of capitalism" in which the various governments managed to chase all innovation and investment out of the country. Check how many car manufacturers went bankrupt! Check the value of the pound. Check how much GDP was spent on supporting industries that were nationalised because they were incapable of supporting themselves. The current generation doesn't know what it was really like. In 1960 the inflation rate was 1.8%. It rose steadily until 1975 when it was 24.9%.
I recall "the brain drain" when all the successful high earners left the country. Viable companies went offshore in order to avoid taxes. The Beatles even went to USA...
I always appreciate your warm and thoughtful comments.
I'll look into the research.
I like how you refer to the first decade of the new millennium as the “noughties”. Definitely worth a chuckle.
Thanks for spotting that.
Thank you, this is exactly the argument I wanted to make against "trickle-down economics" but you stated it much better than I ever could have
I appreciate the compliment. Thank you very much
Curious. What system doesn’t exploit or mismanage? Which is the best economic system/government?
Yup.
Thanks for your other work, which is engaging and informative.
Your claim that AI could treat or prevent any disease if we only gave it access to healthcare data is far from the truth, and your supporting information is also inaccurate (I am confident in this through my work as a computational biologist who develops drugs & diagnostics).
The most important problem with the claim is that medicines work by addressing the molecular mechanisms of disease. Healthcare data contains virtually no information about the molecular mechanisms of disease. As a result, even unlimited AI cannot cure or treat all diseases by using healthcare data, or indeed have much impact at all.
In addition, your claims about access to healthcare data are inaccurate (at least for the US). Yes, I can access and share my healthcare data.
I'm all for more effective sharing of healthcare data, and there are potential benefits, but that goal is not advanced by wild claims.
Thank you GC. I appreciate your criticism.
I try to strike a balance between precision, insightfulness, and succinctness. Very hard. Sometimes, one of the 3 falls.
I think you’re right that the claim that AI can solve all diseases with access to raw EMR data is hyperbolic. It’s also not what I tried to convey.
First, the access is not just to raw data but also to papers. Second, the raw data is not meant to be what is accessible today, but what can be accessible tomorrow. Which includes some of the things you mention.
On the healthcare data, I’d be interested in knowing more. I once advised an EMR company, and that was my understanding at the time. But if it’s mistaken I’d love to know and correct the article.
Thank you!
I appreciate the response. And yes, your job is hard.
Regarding patients' freedom to access and share their healthcare data: this is one of the key tenets of HIPAA (a 26-year-old law). Information on HIPAA is abundant (e.g. https://www.hhs.gov/hipaa/for-individuals/guidance-materials-for-consumers/index.html) but here is a TL;DR from the Feds:
"You hold the key to your health information and can send or have it sent to anyone you want."
This is not a hollow promise: I exercise these rights frequently. The Apple Health app on my iPhone downloads my records for me, I view them online, I have had them burned onto DVDs, and I have made them accessible to multiple third parties (physicians, a digital health startup...).
A milder misconception in your article is the idea that physicians are responsible for devising new treatments. Most physicians focus on treating patients. Scientists (biologists of all kinds, chemists, physicists) do the bulk of therapeutic discovery (partnering with physicians, of course, especially for clinical trials).
I share your excitement about what AI can do with access to the scientific literature and, perhaps more importantly, the underlying data. That idea doesn't really come across in your article, which focuses more on data generated in the course of medical care.
Thanks!
I've updated the article on HIPAA, but I'm not sure you read the entire piece? The link I shared was not about HIPAA, but about Information Blockers and the Cures Act.
Also, I'm not sure where you interpreted that I suggested doctors come up with new treatments. I certainly didn't mean to imply that; rather, that they have to figure out the right existing treatment given the diagnosis.
I think a main thing currently limiting flying cars and delivery drones is the amount of noise they produce. We have flying cars; in common parlance they are called helicopters. And they are loud.
Think of how many neighbours you have within 100 metres of your house. What if even 1 in 10 starts using flying cars?
Maybe someone will invent a quieter version. But you inherently have to move a large amount of air to stay up in the sky.
About that 1 breakdown in 100,000 km: an average car will cover that distance in about 7 years. It is not good to have a fatal accident, on average, once every 7 years. Count me as one of the people who does not like this idea. That said, for current-generation flying cars this is solved by requiring that they can land using autorotation: the rotor is designed so that, during an unpowered descent, it still creates enough lift to pick a landing spot and survive the impact.
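As a back-of-the-envelope check of that 7-year figure, here is a sketch in Python. The annual mileage is an assumption (roughly 14,000 km per year, a typical European average; US drivers average closer to 21,000 km, which would shorten the interval):

```python
# How long does it take an average car to cover 100,000 km?
# Assumption: ~14,000 km driven per year (typical European average).
KM_PER_BREAKDOWN = 100_000
KM_PER_YEAR = 14_000

years_per_breakdown = KM_PER_BREAKDOWN / KM_PER_YEAR
print(f"~{years_per_breakdown:.1f} years between breakdowns")  # ~7.1 years
```

So at typical mileage, one failure per 100,000 km does indeed mean roughly one failure per car every seven years, which matters far more for a vehicle that falls out of the sky than for one that rolls to a stop.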
Incidentally, a well known but underexploited way to substantially improve our health is to actually start using bicycles instead of cars whenever it is feasible. We’re leaving a couple of years of extra life expectancy on the table right there.
It is always a balance. Tolerating the number of people killed in traffic in the US is dumb. There are countries with maybe a third of that death toll per capita, and they have basically the same humans in the same cars as the US. But on the other hand, I agree that we have to be honest about how safe or dangerous self-driving cars are. Should we compare them with the current situation on the road, or with the goal of zero fatalities? Maybe we should expect self-driving cars to become as safe as air travel (i.e. still much safer than travelling by road). Not sure what’s up with self-driving cars at the moment. I would have thought we would have self-driving on motorways by now.
Thoughtful, thx! I agree.
On the flying cars: agreed too.
For the 100k km, however: the idea would be that you don’t buy them yet, but others should be free to.
So much good stuff here, Tomas! Thanks for writing this.
I think most safety rules and regulations in the US could stand a serious cost/benefit analysis. Take nuclear power’s radiation requirement. It is based on a linear extrapolation of known harm. If X amount of radiation kills one in a thousand, then set the allowable limit to X/1000 to lose less than one in a million.
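The linear extrapolation just described can be sketched in a few lines. This is a toy model of the reasoning, not an endorsement of it; the dose value `X` is arbitrary:

```python
# Linear no-threshold (LNT) style extrapolation: assume fatality risk
# scales linearly with dose, with no safe threshold.
def lnt_risk(dose, reference_dose, reference_risk):
    """Extrapolated fatality risk at `dose`, given one known reference point."""
    return reference_risk * (dose / reference_dose)

X = 1.0  # arbitrary units: a dose known to kill 1 in 1,000
risk_at_limit = lnt_risk(X / 1000, X, 1 / 1000)
print(risk_at_limit)  # roughly 1e-06: about one in a million
```

Setting the allowable limit at X/1000 yields a one-in-a-million extrapolated risk, which is exactly the logic of the rule described above.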
However, we evolved in a low-level radiation environment. Our cells have excellent mechanisms to repair the damage caused by the occasional gamma rays or fast neutrons that come our way.
From what I have read, no one died due to the radiation released at Fukushima. Thousands died from panic.
Prior to the war, at least, there was a community living quite well on the land adjacent to the Chernobyl reactor. They ate food from their gardens and were living longer, on average, than their neighbors.
But we find it very hard to write new rules that will make us “less safe.”
So, I buy paper-based fire-logs for my fireplace that have a warning label: danger, flammable. I buy knives with a label warning about sharp edges.
Sigh....
I did know about background radiation, but I didn't know about the radiation rule. WTF... It makes it even worse to know that radiation outside a coal electricity plant is higher than outside a nuclear reactor...
I don't know that thousands died because of the panic. I believe it's hundreds. But your point is still valid.
https://en.wikipedia.org/wiki/Fukushima_Daiichi_nuclear_disaster_casualties#:~:text=A%20May%202012%20United%20Nations,by%20the%20Fukushima%20nuclear%20disaster.
I'll write more about nuclear specifically
Thx for your thoughtful comment!
Fukushima will have its effects known in 20 years. Shortening the lives of thousands is no "low impact".
Warnings on packages are cheap, don't impact you, and if they save lives, where is the problem? These are two different issues you are trying to force together.
The issue with nuclear power plants is not the level of exposure. It's people not wanting them in their backyards and misconceptions about the risks. And you think that by reducing regulation, more people will want it? That's a non sequitur.
Right now plenty of communities would accept a nuclear plant. In the US, they just can't get it because the regulator has never approved a single one. So there is no opportunity for communities to decide whether they want one or not. The point of the article is to say: let the people decide. If they decide they don't want it, great! Now we have our answer.
Excessive warnings are harmful because they basically teach people that warnings can be safely ignored. They become background noise. If you need to warn about something that is actually unusual and dangerous, it will be impossible to make it stand out.
Excellent article, Tomas. Satoshi, for example, is credited with creating Bitcoin, but he just compiled already existing technologies, BitTorrent being the most prominent, hence the name Bitcoin.
There's currently a movement to decentralize the financing and testing of innovative ideas using decentralized technology. It's called DeSci.
VitaDAO is one of such efforts focused on longevity research if you're interested.
Thank you for writing.
I knew about DeFi but not DeSci. Interesting. Thanks! I need to look into it
Thanks for your article. A couple of additional things to consider -
- People in the past have frequently promised a utopia that requires a certain number of sacrifices to make it happen. Stalin did. "We'll achieve the worker's paradise that Hegel and Marx said is inevitable, but a few million will have to die along the way. It's just the price of progress, and it will be worth it. Trust me, trust the process". The trouble is, these are just promises, and there are so many unanticipated factors that could turn the miracle (e.g. antibiotics) into a set of new problems (e.g superbugs) because of people doing the unexpected (e.g. low dosing animals to promote growth). Creating a car culture was good... was it? Except it led to suburban sprawl, loss of neighbourhoods, pollution etc. So many unintended consequences. DDT was going to be the next great miracle, until it wasn't. I'd rather see more bikes on streets than flying cars over my house at this point.
- There needs to be discussion about who decides, who benefits, who dies, and who pays some other price. Where are the reactors located? In whose backyard? Where do the benefits flow? What say do citizens have in everything? I don't trust the corporate class to make decisions that are in broader society's interest. It's not the way capitalism is designed.
100% agreed.
What you’re saying is that we need to make explicit both:
- what are the trade offs
- how do we calculate them
I’m all in.
And who decides. That's the power/political dimension.
Yes I meant to include that in how they’re calculated
In my province I can get all my medical records just by asking and I can share them as I wish.
About self-driving cars: this is a big, complicated issue. I believe we don't have the algorithms or the right sensors to do it right now. We can do pieces and eventually will be able to integrate them. I lived through this process in the industry I worked in. Initially there was a lot of hype, but we had to get down and build better sensors and better algorithms over the space of 40 years.
Truly, we are numbed to the number of auto injuries and deaths yearly, but I think automation of the driving task will be one of the tools we can use to bring this down.
Applied geometry and trig goes back at least 12,000 years at Gobekli Tepe and probably to the time cordage was invented more than 100,000 years ago. If you want to see how they did it check out: https://lostartpress.com/products/euclids-door
I recently learned about Gobekli Tepe. Fascinating! Didn’t know about the geometry part. Makes total sense. Thank you!
This was an excellent think piece. And I’m personally all for the healthcare use case. Organizations in China are already using personal data/ai for diagnostic purposes. And we could expose all of our medical data to ai (with all those benefits) without exposing it directly to the people around us! (Other thoughts on that if you’re interested: https://ellegriffin.substack.com/p/greenhouses-instead-of-doctors)
Just read and subscribed. Very interesting, thx for sharing! That’s exactly the world we should be envisioning
Thanks for stopping by! 😊
I've continued to think about this piece since you published it. Initially I wanted to react viscerally and state that "no human life is expendable" in the context of automobile accidents (or medicine trials or other high-risk innovations), and I do still believe that. Thus, traffic programs like "Zero Fatalities" seem like a good thing, and if your goal is to preserve human life at the cost of all other variables, they are (especially when the risk level is unequal, such as when comparing the driver of a vehicle to a pedestrian in the same space). But after rereading your article, I see it's more of a "Safety Third" approach to innovation (if you don't know this reassessment of the old Safety First mantra, I suggest a quick search). It's not that fears and failures are bad things to be avoided at all costs (as I tell my kids, you don't learn anything new when you are always right); it's that fear of failure shouldn't be the guiding principle. Thanks for your pieces... always interesting.
I think we agree in principle.
The extreme of deaths is important to tackle though, because it’s the one that forces tradeoffs.
First: human behavior is not consistent with the statement that life is invaluable.
Of course, your life is invaluable to you. Maybe the life of your loved ones is also invaluable.
But imagine that you had an official email from the White House that said: “your neighbor will die unless you pay $1M to save him”. Would you pay that? You probably wouldn’t. But if the price tag was $1, you probably would. There’s probably a price between $1 and $1M up to which you would pay.
And this is for your neighbor. For an unknown, it’s much less.
In fact, we know this because the effective altruism movement calculates this kind of stuff. The value of a remote life is about $2k, as in: this is the price we need to pay today to save one additional life (malaria nets, although this is now a hot topic, but the spirit is valid).
Another type of industry deals with the value of lives consistently: insurance. They know how much a life costs because they need to pay for them, and people are willing to pay to insure themselves against death only a certain amount. Based on what people are willing to pay, and their odds of death, you can broadly calculate the value of their lives. In the west this is between a few hundred thousand dollars to a few million.
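The implied calculation, sometimes called the "value of a statistical life" (VSL), can be sketched like this. The numbers are purely illustrative, not real insurance data:

```python
# Value of a statistical life: if people willingly pay P to remove a small
# annual fatality risk dr, the implied value of one statistical life is P / dr.
def value_of_statistical_life(willingness_to_pay, risk_reduction):
    return willingness_to_pay / risk_reduction

# Illustrative only: paying $500/year to eliminate a 1-in-2,000 annual risk.
vsl = value_of_statistical_life(500, 1 / 2000)
print(f"${vsl:,.0f}")  # $1,000,000
```

A few hundred dollars set against a small risk implies a life valued around a million dollars, within the range the comment mentions for the West.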
Another: we know how to get driving deaths to zero: lower speed limits to ten miles per hour.
We don’t because we would rather incur the small risk of an accident for the benefit of higher speed than to eliminate the risk altogether. Which means implicitly we have a tradeoff of life vs convenience.
Another: people sometimes die when working, and some jobs are more dangerous than others. So much so that in an industry like energy, you can calculate the deaths per TWh per type of energy. This is one of many factors considered when building certain energy types vs others (e.g. coal is the worst, hydro can be very bad depending on your data, nuclear is the safest by far). Other factors include price, pollution, reliability, location, etc.
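For reference, that comparison is often presented like this. The figures below are approximate public estimates (e.g. the ones popularized by Our World in Data, combining accidents and air pollution); treat them as order-of-magnitude only:

```python
# Approximate deaths per terawatt-hour of electricity produced.
# Rough public estimates (accidents + air pollution); order-of-magnitude only.
deaths_per_twh = {
    "coal": 24.6,
    "oil": 18.4,
    "natural gas": 2.8,
    "hydro": 1.3,   # dominated by rare dam failures; dataset-dependent
    "nuclear": 0.03,
}

# Print sources from most to least deadly per unit of energy.
for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{source:12s} ~{rate:g} deaths/TWh")
```

Whatever the exact numbers, the spread spans roughly three orders of magnitude, which is the point: we already trade lives against energy choices, implicitly, all the time.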
All of these show that your life might be invaluable to you, but human life on average is not. We have implicit values of life unveiled by our behavior.
What this article tries to explain is that, in our behavior, implicitly, we’re saying that the value of people living today is substantially higher than the value of people living tomorrow (including our future selves), and that is making us so cautious that we hinder progress.
First of all, thank you for replying thoughtfully to my grammatically (and formattingly) stunted post.
Agreed, we don't stand too far from a shared perspective here. However, I know I personally struggle with the balance you suggest in this article every day... value of safety vs value of innovation, value of risk vs reward, value of the individual vs the group, value of the future vs the present. As a scientist, I lean on data-driven decisions, but even still I'm susceptible to unconscious bias toward my own perspective (and my immediate surroundings), and my own risk aversion, or lack thereof, skews all future conversation. How do we get beyond that bias and elevate the risk our society and individuals find acceptable, and thus increase potential future innovation? Also, can we do it while minimizing recklessness (progress for progress's sake) and without subversively elevating risk for a group that hasn't weighed in on the matter?
The most Important question of the 21st century.
I believe we can.
About Newton.... Yes, people had a practical knowledge of gravity in that it always pulls down in proportion to the mass of the object. What they lacked was the math necessary to predict it - and the ability to divide by zero.
Comparing Newton’s contributions to those of a gnome like me, a fair judge would consider him a giant. In his miracle year, when Cambridge was closed due to the plague, I think, he worked out the calculus and the visible light spectrum. In my lifetime I have done what, exactly?
Maybe we are all gnomes. But some gnomes are WAY TALLER than others.
Hahaha, I didn’t mean to say there are no people who contribute more than others (some much more!).
Rather, that we have this image that most progress comes from these giants when in fact it doesn’t.
First, that erases the contribution of gnomes (and it adds up to much more overall!).
Second, it overstates the contribution of giants, because it assumes each of them contributes a ton, when in fact they’re standing on the shoulders of others, which means their contribution is not as big as it seems.
You can see this in the fact that most inventions happen concurrently, around the same time.
Love the writing in this piece. Sharp and engaging. The imagery of giants standing on the shoulders of gnomes is sticky. Enjoyed reading it!
Can you present the economic argument of those who would increase taxes on the wealthiest, and how that might improve innovation from their view?
Might your first statement, that complexity grows with the square of the organization's size, suggest that billion-dollar corporations be broken up, reducing their taxation burden by downsizing?
On the first part: Nobody thinks increasing taxation increases innovation. They simply don’t take innovation into consideration, when in fact it’s probably a bigger factor than others.
On the Metcalfe Antithesis: there are forces that favor concentration and others that penalize it. This one penalizes it. But there are others.
Good thinking in so many ways. We're accelerating at an incredible rate, even with more guardrails on the overall process than 100 years ago. The number of mistakes you can learn from is also accelerating, and so many more are visible on the internet, so you get the benefit of someone else's errors for your own efforts.
Breakthroughs can occur at any moment. So many things didn't exist even a few years ago, and more "combinant" solutions like Uber and Airbnb simply weld new capabilities onto an old service, making it capable of immense growth.
I'm more worried about government keeping up. The nuclear regulatory point was instructive: zero approvals and millions budgeted to say "no".
The entire premise of the article is that it is guaranteed to save more lives later than would die in the near term. This is critically flawed. Self-driving cars will not save lives until all cars are self-driving, no human override is possible, and processing power has increased several-thousandfold. None of this would be possible even in a decade by 'allowing' deaths from self-driving cars.
Drone deliveries are not economically feasible, even if we allowed risk. The technology has existed for them for a while now. Until you fully replace the delivery trucks with drone trucks, you are fighting economy of action. A single truck can make more deliveries to most locations in any span of time and use less energy to do so.
Same with flying cars. Our imagination exceeds our technical capabilities. No existing technology in the aerospace or automotive industries can produce a flying car safe enough to build by the millions. Our current technology relies on mechanical means of propulsion. We train pilots for hundreds of hours before they can fly the safest aircraft in the world. Yet the argument for flying cars ignores the training required for the drivers, the decades of experience in creating safe places for aircraft to land (radio, radar, separation, etc.), and the simple fact that mechanical parts fail often without frequent maintenance.
The article reads like a libertarian's fever dream: "why don't we drop more safety regulations so we can progress faster?" while ignoring the actual speed of progress we have achieved in spite of the mountains of safety rules. We didn't reach space by "accepting" deaths. Human progress has not slowed. Innovation is not being stifled by a lack of blood. Claiming otherwise is incredibly naive, caters to billionaires, and ignores history. Show me that in 2022 our progress as a species has meaningfully slowed compared to our history. Show me how the Age of Sail resulted in more innovation, faster, than we have with OSHA and the NTSB and the FAA, etc. The answer is that you can't. The money to be made in industry-upsetting fields is massive. Yet our imagination outstrips the reality. AI is hard. Flying is hard. Drug research is hard. Throwing more bodies at any of these will not simplify any of it.
>"This is critically flawed. Self driving cars will not save lives until all cars are self driving" The entire premise of your criticism is puzzling. You posit that there is a lot of work left to be done, but how on earth is that relevant?
World A (now): it takes 15 years
World B (loosened regulations): it takes 10 years
The delta is 5 years.
It doesn't matter that some technology isn't ready to go right now. What matters is shortening the timelines.
The gain from the mentioned technology isn't apparent. You gain 5 years and lose how many lives? How many would be saved by delivering 5 years earlier? What happens if you make that calculation and are wrong? What if it isn't ready in 10 years but 50?
Loosening regulations to save more lives requires a level of evaluation that is not seen here. No discussion of how mature the tech is, or even of how less regulation would speed it up. It's all assumptions and no substance.
We thought we'd have flying cars in 2000. Turns out it's more complicated. We thought we'd have bipedal robots in 2000. Same song and dance, again. If your estimate is wrong by orders of magnitude, in lives lost, lives saved, or time to achieve the tech, then any discussion of loosening regulations is incredibly short-sighted and only benefits those in charge.
Musk wants less regulation on self driving because he doesn't appreciate or care how complicated it is. Google is taking their time because they know exactly how hard it is and aren't calling for reforms. In this specific example, Google is wanting the tech to be mature enough before releasing it to the public while Musk wants less regulation to sell more cars. Who should you listen to? The one who is making great efforts on safety and innovation? Or the one who wants to increase the numbers in his next Forbes article?
As someone born in 1958, who has worked in large companies, small companies, and start-ups, who has lived and worked in several EU countries, the UK, in the US, and (importantly) SE Asia... I completely agree with you.
One of my favourite lines is "we need more sabre-toothed tigers". What I mean is that people have become so used to being protected from normal risks and dangers that they no longer think it is their responsibility. Sabre-toothed tigers presumably do not give a s**t about health and safety regulations. Ignore that fact and you might become lunch.
Let's start with the war, or more accurately, the response of the pro-Ukraine coalition. The correct attitude is "vicious dictators must be stopped now, or we will suffer worse in the future". In the words of the Manic Street Preachers song, "If you tolerate this, then your children will be next". i.e. We must lose some lives now, pay a lot of money now, and suffer some major economic discomfort now in order to re-establish international law and protect the future. The politicians did not do it in 2014, and now many more people are dying.
The argument that "Uber disrespected the transportation industry" is exactly the example made in this essay: allow a few problems to arise, and you very quickly discover a better solution. Over-regulate and you get something like the Black Cab industry of London: primitive, heavy, uncomfortable vehicles with scant use of modern tech. Compare that with Germany, where cabs are more likely to be a luxury Mercedes.
Back to my opening statement:
Every large company I ever worked for was incapable of moving fast. It wasn't just the rules that slowed us down, it was the petty jealousies and personal politics that forced everyone to be so careful not to damage their standing. (I once told the IT Director of a major mobile phone company that she was absolutely wrong, and was then asked to leave the project.)
Every small company and startup I worked for could change direction in a moment. And the teams were much better integrated and knew everyone by name. They developed their businesses fast. Big companies buy startups for exactly that reason: if you can't do it yourself, buy it.
So what about taking responsibility? Compare SE Asia with Europe, America, etc. If you hurt yourself in Viet Nam or Thailand or Laos, it's your problem. People will support you, but you let yourself get hurt. It's your fault. Electric circuits have no earth connection; we regularly get shocks, and we quickly learn to be careful with appliances near water. Designs of plugs and sockets are ridiculously unsafe and it is easy to get a shock, so we always handle plugs with care. There are sharp edges, low roofs and rusty nails everywhere; expect them or get hurt. Few people obey any sensible road rules and the only reason there are not more deaths is because people drive slowly. First-world drivers routinely suffer accidents in Asia, so why don't the locals? ...Because they don't think someone else should look after them.
And while on the subject of tax rates: I recall the "golden age of capitalism", in which the various governments managed to chase all innovation and investment out of the country. Check how many car manufacturers went bankrupt! Check the value of the pound. Check how much GDP was spent supporting industries that were nationalised because they were incapable of supporting themselves. The current generation doesn't know what it was really like. In 1960 the inflation rate was 1.8%. It rose steadily until 1975, when it was 24.9%.
I recall "the brain drain" when all the successful high earners left the country. Viable companies went offshore in order to avoid taxes. The Beatles even went to USA...
"If you drive a car, I'll tax the street
If you try to sit, I'll tax your seat
If you get too cold I'll tax the heat
If you take a walk, I'll tax your feet".