I think we agree in principle.
The extreme case of deaths is important to tackle though, because it's the one that forces tradeoffs.
First: human behavior is not consistent with the statement that life is invaluable.
Of course, your life is invaluable to you. Maybe the life of your loved ones is also invaluable.
But imagine you received an official email from the White House that said: "Your neighbor will die unless you pay $1M to save him." Would you pay? You probably wouldn't. But if the price tag were $1, you probably would. So there's some price between $1 and $1M up to which you would pay.
And this is for your neighbor. For a stranger, it's much less.
In fact, we know this because the effective altruism movement calculates this kind of thing. The value of a remote life is about $2k, in the sense that this is the price we need to pay today to save one additional life (via malaria nets; the exact figure is now debated, but the spirit is valid).
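To make that arithmetic concrete, here's a toy sketch in Python. The $1M budget and 500 lives saved are made-up numbers chosen purely to reproduce the ~$2k figure; they are not real program data:

```python
# Hypothetical numbers that back out the ~$2k figure above:
# cost-effectiveness is total program cost divided by estimated lives saved.
def cost_per_life_saved(total_cost: float, lives_saved: float) -> float:
    return total_cost / lives_saved

# Illustrative only: $1M spent on bednets saving ~500 lives -> $2,000 per life.
print(cost_per_life_saved(1_000_000, 500))  # 2000.0
```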
Another industry deals with the value of lives consistently: insurance. Insurers know how much a life costs because they have to pay out when one is lost, and people are only willing to pay so much to insure themselves against death. Based on what people are willing to pay and their odds of death, you can broadly calculate the value of their lives. In the West, this ranges from a few hundred thousand dollars to a few million.
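Here's a minimal sketch of that calculation. The premium and death probability are hypothetical numbers for illustration, not actuarial data:

```python
# A minimal sketch of the insurance logic above, with hypothetical numbers.
# If someone will pay at most `max_annual_premium` per year to insure against
# an annual death probability `annual_death_prob`, the value they implicitly
# place on a life is roughly the ratio of the two.

def implied_value_of_life(max_annual_premium: float, annual_death_prob: float) -> float:
    """Willingness to pay divided by the risk being insured against."""
    return max_annual_premium / annual_death_prob

# Hypothetical: someone facing a 1-in-1,000 annual chance of death who will
# pay at most $800/year for life insurance implies a value of ~$800k,
# consistent with the "few hundred thousand to a few million" range.
print(implied_value_of_life(800, 1 / 1000))  # 800000.0
```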
Another: we know how to get driving deaths to zero: lower speed limits to ten miles per hour.
We don't, because we would rather incur the small risk of an accident for the benefit of higher speed than eliminate the risk altogether. Which means we implicitly trade life against convenience.
Another: people sometimes die at work, and some jobs are more dangerous than others. So much so that in an industry like energy, you can calculate the deaths per TWh for each type of energy. This is one of many factors weighed when deciding which energy sources to build (e.g., coal is the worst, hydro can be very bad depending on your data, nuclear is by far the safest). Other factors include price, pollution, reliability, location, etc.
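As a rough illustration, here are figures in the ballpark of commonly cited estimates (e.g., Our World in Data's). Exact numbers vary by dataset, especially for hydro, so treat these as indicative rather than authoritative:

```python
# Rough comparison of mortality per unit of energy produced. Figures are
# illustrative, in the ballpark of commonly cited estimates (e.g., Our World
# in Data); exact numbers vary by dataset, especially for hydro.

deaths_per_twh = {
    "coal": 24.6,       # worst: mining accidents plus air pollution
    "oil": 18.4,
    "natural gas": 2.8,
    "hydro": 1.3,       # dominated by rare dam failures; very dataset-dependent
    "nuclear": 0.03,    # among the safest, despite its reputation
}

for source, deaths in sorted(deaths_per_twh.items(), key=lambda kv: kv[1]):
    print(f"{source:12s} ~{deaths:6.2f} deaths/TWh")
```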
All of these show that your life might be invaluable to you, but human life on average is not. We have implicit values of life, revealed by our behavior.
What this article tries to explain is that, through our behavior, we're implicitly saying that the value of people living today is substantially higher than the value of people living tomorrow (including our future selves), and that this is making us so cautious that we hinder progress.
First of all, thank you for replying thoughtfully to my grammatically (and formattingly) stunted post.
Agreed, we don't stand too far from a shared perspective here. However, I know I personally struggle with the balance you suggest in this article every day... value of safety vs value of innovation, value of risk vs reward, value of the individual vs the group, value of the future vs the present. As a scientist, I lean on data-driven decisions, but even still I'm susceptible to unconscious bias toward my own perspective (and my immediate surroundings), and my own risk aversion, or lack thereof, skews all future conversation. How do we get beyond that bias and elevate the risk our society and individuals find acceptable, and thus increase potential future innovation? Also, can we do it while minimizing recklessness (progress for progress's sake) and without subversively elevating risk for a group that hasn't weighed in on the matter?
The most important question of the 21st century.
I believe we can.