The electricity companies analogy is the best framework I've seen for what Musk is doing. People think he builds rockets because he loves them, but the real story is vertical integration forced by economics. He needed to manufacture demand for capacity he'd already committed to.
The energy permits observation was the hardest to accept. If the binding constraint on AI growth in three to five years is permit timelines rather than computing power or talent, then space data centers go from "Musk is trolling" to "actually the only viable path that bypasses the political bottleneck." The infra that bridges the gap between now and then becomes very interesting.
This isn’t just another landing. It’s SpaceX saying: we’re scaling reusability to the point where multiple landing zones are necessary. That’s a civilizational shift: rockets are no longer disposable monuments to single missions, but part of a reusable fleet.
In other words, Falcon’s landing at LZ‑40 isn’t just about Cape Canaveral. It’s about normalizing the idea that humanity’s path to space will be paved by rockets that come back, again and again.
Great piece here, Tomas. A few days ago, I wrote that the X/xAI/SpaceX merger is the greatest gamble of all time.
I stand by that assertion. It’s artificial superintelligence, or bust.
All of human progress, from the growing of crops to the computations on a computer, can be summed up as Energy × Knowledge.
The more energy we capture and the more “compute” available to solve problems, the more advanced we become.
The agricultural revolution provided us energy to feed more human brains. Our numbers grew, but most of those brains (90 percent) were occupied with farming: basic survival.
The other 10 percent of those brains, however, eventually learned how to harness fossil fuels, creating machines that augmented physical labor using an energy source that didn’t compete with humans. This allowed the total number of humans to rise, while a greater percentage of them no longer had to farm (70 percent+).
Our total compute was further augmented in the IT revolution after 1960, with microchips. But, since we are talking about demand loops here, we produced far more data than we knew what to do with!
The Intelligence Revolution (2020) found a way to use that data, feeding it into datacenters and transformer algorithms to produce artificial intelligence. It’s probably no surprise that this is happening as the human population begins to decline; we don’t need as many brains if we can create billions or trillions more in silicon!
The constraint, then, will quickly turn back to energy. Hence all the attention being devoted to nuclear and solar energy. Solar is extremely cheap on Earth and roughly 10x cheaper in space.
What Musk is aiming for here is essentially the beginning of a Dyson swarm; millions of solar-powered brains in space, collecting direct solar energy and transforming it directly into new knowledge for the benefit of humankind.
Starlink works; the next iteration will use similar hardware for multispectral real-time monitoring of Earth. The military loves the idea, but there will also be plenty of civilian uses, including self-driving support.
Debris becomes more of an issue the higher you get. You need propellant for avoidance, or heavy shielding. Unless you go really high, but then you use more propellant to get up there, and you'd typically need to spend more time in orbit, which doesn't work well for GPUs.
Interesting analysis. There are, however, some very big holes in the "orbital datacenters" story that make quite a lot of people think that, like Hyperloop, it's a decoy for other purposes (grabbing cash, getting rid of X as a sinking ship, getting more money in a SpaceX IPO, among many other theories).
What are the main holes?
* first, bandwidth and latency. Sure, Starlink shows that it's manageable, but we're talking about a scale several orders of magnitude larger here. Will it keep up?
* second, energy. People say things like "solar panels in space are more efficient than on the ground and can be permanently illuminated". That's true; however, solar panels are big and heavy, and to be efficient, they also need ....
* third, and most important, cooling. No, space is not "cold". In fact, whatever is in sunlight in orbit gets really hot, like 200°C. And your solar panels at 200°C need cooling. Ditto your datacenter itself, which turns all that solar energy into heat. But in space, you can only radiate heat away, which is really inefficient. The ISS has two 7.5-ton radiators, each of which extracts 35 kW of heat from the station. Of course we can imagine more effective radiators, or a cascade of heat pumps reaching higher temperatures (like red hot) to radiate more efficiently, but physics tells us that these radiators will be extremely bulky, which isn't optimal for space devices...
So of course, as a perpetual pessimist, I think these datacenters in space are complete baloney until proven otherwise :)
1. Latency is not a problem for datacenter workloads; you don't need the answers in milliseconds. Remember, we're already waiting seconds to minutes for answers.
2. I'll cover this when I do datacenters in space
3. Ditto
For 2 and 3, I don't know if they're feasible. I know what the hypothesis is; I just need to check whether it's true or not.
Thank you for tackling this issue for us, eager to see the follow-up :)
Make a circular design that rotates so energy from the sun is balanced.
That doesn't reduce the quantity of solar energy received, and doesn't make it radiate more. And a datacenter consumes large amounts of energy and degrades it all into heat, remember?
Let's imagine 50 kW of computing power in a satellite (that you would multiply, Starlink-style, to build up a virtual "1 GW datacenter in space"). Your computers require 50 kW of power, hence about 150 kW of solar panels (the ISS, IIRC, has 240 kW of solar panels). 150 kW of solar panels receive about 150 kW of solar energy, but convert it to electricity with only about 30 to 40% efficiency; the rest is... pure heat, which you must get rid of or your panels' efficiency drops.
Meanwhile, your computers are generating 50 kW of heat, too.
Now, we could imagine NOT keeping our satellite constantly in sunlight, so that it can cool down in the shade (about a third of the time in low Earth orbit). But that means you can only run your computers 2/3 of the time, unless you bring a large amount of heavy batteries.
Now imagine a fleet of tens of thousands of satellites (1 GW made of 50 kW parts running 2/3 of the time requires 30,000 satellites), each with half as many panels as the ISS, and 2/3 as many radiators (the ISS has 70 kW of radiators). That would make a freaking huge mass of hardware in space.
All in all, I'm not saying it's unworkable; however, I don't see how it could be remotely profitable if you need to attach 20 tons of hardware to every 50 kW of computing power :)
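The budget above can be sketched in a few lines of Python. The 1/3 panel efficiency and 2/3 sunlight duty cycle are the commenter's assumptions, not measured figures:

```python
# Back-of-the-envelope power/heat budget for one 50 kW compute satellite,
# using the assumptions from the comment above.
PANEL_EFFICIENCY = 1 / 3     # assumed ~30-40% conversion efficiency
COMPUTE_POWER_KW = 50.0      # electrical power the computers draw
DUTY_CYCLE = 2 / 3           # fraction of each LEO orbit spent in sunlight

# Sunlight the panels must intercept to deliver 50 kW of electricity
solar_input_kw = COMPUTE_POWER_KW / PANEL_EFFICIENCY        # ~150 kW
# Sunlight not converted to electricity heats the panels themselves
panel_waste_heat_kw = solar_input_kw - COMPUTE_POWER_KW     # ~100 kW
# Essentially all compute power also ends up as heat to reject
compute_waste_heat_kw = COMPUTE_POWER_KW                    # 50 kW

# Fleet size for a virtual 1 GW datacenter at this duty cycle
fleet_size = 1_000_000 / (COMPUTE_POWER_KW * DUTY_CYCLE)    # ~30,000 satellites

print(round(solar_input_kw), round(panel_waste_heat_kw), round(fleet_size))
```

So each satellite has to reject roughly 150 kW of heat (panels plus chips) while producing 50 kW of useful compute power, and a 1 GW constellation at this granularity needs on the order of 30,000 such units.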
The plan is constant irradiation from the Sun, with heat exchangers to radiate the heat away. No batteries.
Apparently the temperature needed to achieve homeostasis is high, but I don't know how high. Remember that radiative heat transfer goes as the fourth power of temperature, so radiative cooling can be quite efficient at high enough temperatures.
Solar panels can be stripped of most of their weight, because on Earth they're built to withstand the environment, a constraint that doesn't exist in space.
GPUs are lightweight relative to their value.
So I reckon most of the weight would actually be radiators.
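The fourth-power point can be made concrete with the Stefan-Boltzmann law. A minimal sketch, assuming an idealized flat radiator (emissivity 0.9, radiating from both faces, ignoring absorbed sunlight and Earthshine):

```python
# Radiator area needed to reject a given heat load at a given temperature,
# from the Stefan-Boltzmann law: P = eps * sigma * A * T^4 per emitting face.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
EPSILON = 0.9      # assumed surface emissivity

def radiator_area_m2(heat_kw: float, temp_k: float, faces: int = 2) -> float:
    """Idealized radiator area (m^2) needed to reject `heat_kw` at `temp_k`."""
    flux_w_m2 = EPSILON * SIGMA * temp_k**4   # W/m^2 emitted per face
    return heat_kw * 1e3 / (flux_w_m2 * faces)

# Rejecting the ~150 kW per satellite from the budget above:
for t_k in (300, 500, 800):
    print(t_k, round(radiator_area_m2(150, t_k), 1))
```

At 300 K this comes to roughly 180 m² of radiator per satellite for 150 kW, versus under 4 m² at 800 K: that shrinkage is exactly what a cascade of heat pumps to a higher rejection temperature would buy, at the cost of extra mass and pumping power.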
Yes, that's why I mentioned a cascade of heat pumps in my first comment. Apparently some research exists, but at what cost, what weight, what reliability? I don't know. I've seen some papers, like https://www.sciencedirect.com/science/article/abs/pii/S0149197025003440
I mentioned batteries as a possibility, but I think it's probably not practical. There is indeed a very complex balance to be found between energy production, heat dissipation, weight, and use of shade :)
You cannot have 100% sunlight in LEO; hence, batteries are needed.
Also, in LEO there is drag, and you need to correct the orbit at regular intervals, which requires thrusters and propellant.
If you move to a farther orbit, you get constant direct sunlight, but launch costs are exponentially higher. And then you lose the protection of the magnetosphere, making standard electronics unreliable. Radiation-hardened electronics are also far more expensive than commercial-grade ones.
You may build a small, suboptimal data center in space, but only for niche markets where the benefits outweigh the costs. A full-blown AI data center is not that case.
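The earlier "about a third of the time in the shade" figure behind the sunlight point can be checked with simple geometry. A minimal sketch, assuming a circular orbit whose plane contains the Sun-Earth line (the worst case; special dawn-dusk orbits can stay in near-constant sunlight):

```python
import math

R_EARTH_KM = 6371.0   # mean Earth radius

def eclipse_fraction(altitude_km: float) -> float:
    """Worst-case fraction of a circular orbit spent in Earth's shadow,
    modeling the shadow as a cylinder behind the Earth."""
    r_km = R_EARTH_KM + altitude_km
    shadow_half_angle = math.asin(R_EARTH_KM / r_km)  # half-width of shadow arc
    return shadow_half_angle / math.pi

print(round(eclipse_fraction(550), 2))  # Starlink-like altitude
```

At a Starlink-like 550 km this gives about 37% of each orbit in shadow, consistent with the "about a third" used in the budget discussion above; the fraction shrinks as altitude rises.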
Not discussed is the effect of radiation on the AI chips' function, and "spooky action at a distance" quantum interference. Is anything known about shielding requirements for that?
Great article.
Will discuss in the AI + Datacenters in space!
But I'm pretty certain this is OK
I've heard that quantum computing is gonna happen, like, real soon now. Big if true. At the very least it would mean overhauling data center hardware, and also handling software security differently.
I don't know anything about quantum interference in space, but would that be a bigger problem for quantum computers than for current technology?
Jensen Huang just said it's 20-30y away and all quantum computing stocks just crashed.
I think he's right, but I am ignorant on the topic.
A Moon base is an excellent idea and should have been pursued by the US 50 years ago. The Moon can be a jumping-off point for exploring the rest of the solar system. Its gravity is minimal, so ships launched from it can devote more capacity to cargo and less to fuel, most of which is otherwise needed just to get off a planet.
We don't need exotic orbital data centers. In a few years, fusion power will start coming online and will address all our energy needs. Hopefully, fusion companies are using AI to help accelerate fusion design.
What is needed is a major SF-type space station, miles in diameter, circling Earth. It could be used for research and could also manufacture goods with materials drawn from captured asteroids. Again, this should have been pursued 50 years ago.
This space station could also be the final nexus for a space elevator, which eliminates the need for huge rockets to carry materials into space.
The moon is harder than it seems. Will share more!
I think that if energy becomes the dominant near-term bottleneck, vertical integration across launch + orbital infrastructure + AI could give Musk a significant speed advantage and, more importantly, bargaining power over others.
That's the hypothesis
How will the data centers in space be cooled?
Radiative cooling. The hardware needs to tolerate higher temperatures for this. This is the biggest physical obstacle I see too, so I’m just repeating Musk’s take on this. I need to look at the data independently.
Yeah, that's also what I'm worried about the most. And the fact that Musk doesn't seem that worried tells me he either has a solution, or is using "datacentres in space" as a decoy while working on something more mundane that needs air cover in the media.
I haven’t seen him lie like this so he must have run the physics math
As to priors: it is widely suspected that Hyperloop was always intended to be a distraction to stall investment in high speed rail and public transport. Because those are bad for businesses if you’re in the car manufacturing business.
Occam’s razor applies of course, but still, it should have been completely obvious to anybody that a train in a vacuum tunnel was always going to be more expensive than an ordinary train. And have way, WAAAY more development risk.
Even if it isn’t a lie per se, remember that Musk is well known to be really optimistic about things. That is often a strength, but it can give rise to memes, like getting self driving Teslas in 2016, or maybe in 2017, or maybe in 2018, or maybe in 2019, or maybe in 2020, or maybe in 2021, or maybe in 2022, or maybe in 2023, or maybe in 2024.
"Remember how the SpaceX valuation is going stratospheric?"
I see what u did there 😏
The Starlink revenue math is not mathing. Have you cross-checked this? If they have 10 million Starlink subscribers with a (generous) ARPU of 100 dollars, that makes 1 billion in revenue per month, or 12 billion yearly, not the (roughly) 20 billion that your charts show (minus the other revenue types). I did cross-check with multiple sources, and yes, the 10 million subs seem correct, but Starlink's ARPU doesn't get much higher than $100 monthly. Is there something else that can explain the gap?
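For reference, the commenter's arithmetic in two lines (the subscriber count and ARPU are their figures, not confirmed numbers):

```python
# Cross-check of the Starlink consumer-revenue estimate from the comment above.
SUBSCRIBERS = 10_000_000     # commenter's figure
ARPU_USD_MONTHLY = 100       # commenter's (generous) assumption

annual_revenue_busd = SUBSCRIBERS * ARPU_USD_MONTHLY * 12 / 1e9
print(annual_revenue_busd)   # billions of USD per year, vs ~20B in the charts
```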
Apropos:
[quote] Consciousness doesn’t require life.
AI can be conscious.
Musk realizes that the consciousness that will conquer the universe is artificial. [unquote]
Please permit me to share the perspective that we may need to examine more closely our implicit assumptions regarding what we currently refer to informally as 'conscious' AGI and Quantum computing.
Reason: Since the reasoning of any AI and/or putative quantum computer is circumscribed mathematically by the algorithmically computable capabilities---given unlimited tape and time---of a deterministic Turing machine, there is a theoretical limit on the reasoning ability of an AI or putative quantum computer; a limit that we ignore at our financial peril when projecting feasible (let alone reasonable) ROI.
The inescapable theoretical limit is the Provability Theorem for PA (See Theorem 7.1 of [An16]):
A PA formula [F(x)] is PA-provable if, and only if, [F(x)] is algorithmically computable as always true in N.
The limitation is dramatically demonstrated by any Turing machine's innate inability to pass the definitive Turing Test defined in this preprint: [An25] Are you human or a machine?
The preprint details how both MS' Copilot and OAI's ChatGPT accept that they cannot mathematically represent---hence be 'conscious' of---some physical phenomena.
In other words, the paper [An16] implicitly settles the 'consciousness' question if we define:
Definition: An intelligence is 'conscious' if, and only if, it can recognize physical phenomena that are mathematically representable by functions/relations which are algorithmically verifiable, but not algorithmically computable.
Definition (Algorithmic verifiability) A number theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm TM_(F, n) which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence {F(1), F(2), … , F(n)}.
Definition (Algorithmic computability) A number theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm TM_F that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence {F(1), F(2), … }.
We note that algorithmic computability entails the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined, denumerable, sequence of propositions.
The concept of ‘algorithmic computability’ is, thus, essentially an expression of the more rigorously defined concept of ‘realizability’ in Kleene: [Kl52], p. 503.
However, algorithmic verifiability does not entail the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions.
Further, although every algorithmically computable relation is algorithmically verifiable, the converse is not true.
(See [An16], Theorem 2.1: There are number theoretic functions that are algorithmically verifiable but not algorithmically computable).
The far-reaching entailments of this distinction are addressed in the recently printed book [An22].
Kind regards,
Bhupinder Singh Anand
[An16] The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis. In Cognitive Systems Research. Volume 40, December 2016, 35-45.
https://www.sciencedirect.com/science/article/abs/pii/S1389041716300250?via%3Dihub
[Kl52] Stephen Cole Kleene. 1952. Introduction to Metamathematics. North Holland Publishing Company, Amsterdam.
[An22] The Significance of Evidence-based Reasoning in Mathematics, Mathematics Education, Philosophy, and the Natural Sciences. Revised second edition (2025). DBA Publishing, Mumbai, Maharashtra, India.
Printer's pdf: https://www.dropbox.com/scl/fi/gxir02vtob03m4oo79egu/
Tribute: https://www.dropbox.com/scl/fi/axd3j1uvm5dtdyh8nhnbj/
[An25] Are You Human or a Machine? Preprint.
https://www.dropbox.com/scl/fi/qqbfvmhjb6j7e49f4lv73/29_Are-you-a-man-or-a-machine_Submission_IC3ECSBHI2026.pdf?rlkey=badcl5ryde19vd8gl5ekd1b8i&dl=0
You are limiting definitions of consciousness to human definitions of consciousness. AIs have a completely different type of intelligence, and so would have a different type of consciousness.
Perhaps; or one could just appeal to Ockham's razor and posit that it is precisely 'consciousness', no matter how defined, that differentiates human (organic) intelligence from artificial (mechanical) intelligence.
One could even seek to distinguish further between, say:
Hypothesis 1. (Awareness) Awareness is the primary conceptual metaphor that corresponds to the ability of an intelligence to reactively express sensory perceptions ‘sensorially’—i.e., not necessarily consciously or symbolically—in the first person as ‘I sense’.
and:
Hypothesis 2. (Self-awareness) Self-awareness is the secondary conceptual metaphor that corresponds to the innate ability of an intelligence to proactively/symbolically postulate the existence of an id that can be subjectively identified as aware, and which is implicit in the expression ‘I sense, therefore I am’.
In other words, Hypotheses 1 and 2 would suggest that:
• Intelligences which can protect themselves, their habitats, and/or their species from life-threatening situations ‘sensorially’ can be treated as being aware, but not necessarily 'conscious'; whilst
• Intelligences which can, further, answer the Turing Test (op. cit.) affirmatively could be treated as being self-aware, hence necessarily 'conscious'.
Maybe the story makes sense. Or maybe Musk is desperately trying to build the IPO story, even if it means overselling and ditching Mars …
On the facts, the financial figures on Starlink are likely overblown: profitability is not guaranteed at all, as CAPEX needs are huge. It's even worse for DCs.
I think Starlink figures do work well. That's what justified the company's massive valuation increase.
DCs are another matter.
It works incredibly well from a technical standpoint, that's for sure.
“TSMC and NVIDIA are trying to increase their productivity as much as they can, as fast as they can.” — this is simply false, at least with respect to TSMC. See their actual and projected capex on new foundry capacity.
I'm going off of Musk's conversation with Dwarkesh Patel and John Collison here
https://www.youtube.com/watch?v=BYXbuik3dgA
Here's the verbatim:
MUSK: Yes. I ask TSMC or Samsung, "okay, what's the timeframe to get to volume production?"
The point is, you've got to build the fab and you've got to start production,
then you've got to climb the yield curve and reach volume production at high yield.
That, from start to finish, is a five-year period. So the limiting factor is chips.
The limiting factor once you can get to space is chips, but the limiting
factor before you can get to space is power.
DWARKESH: Why don't you do the Jensen thing and just prepay TSMC to build more fabs for you?
MUSK: I've already told them that.
DWARKESH: But they won't take your money? What's going on?
MUSK: They're building fabs as fast as they can. So is Samsung. They're pedal to the metal. They're going balls to the wall, as fast as they can. It’s still not fast enough. But like I said, I think towards the end of this year, chip production will probably outpace the ability to turn chips on.
The electricity companies analogy is the best framework I've seen for what Musk is doing. People think he builds rockets because he loves them, but the real story is vertical integration forced by economics. He needed to manufacture demand for capacity he'd already committed to.
The energy permits observation was the hardest to accept. If the binding constraint on AI growth in three to five years is permit timelines rather than computing power or talent, then space data centers go from "Musk is trolling" to "actually the only viable path that bypasses the political bottleneck." The infra that bridges the gap between now and then becomes very interesting.
I really appreciated the out of the box thinking in this analysis.
Interesting counter points here https://substack.com/home/post/p-187793860
This isn’t just another landing. It’s SpaceX saying: we’re scaling reusability to the point where multiple landing zones are necessary. That’s a civilizational shift: rockets are no longer disposable monuments to single missions, but part of a reusable fleet.
In other words, Falcon’s landing at LZ‑40 isn’t just about Cape Canaveral. It’s about normalizing the idea that humanity’s path to space will be paved by rockets that come back, again and again.
Great piece here Tomas. A few days ago, I wrote that the X/xAI/SpaceX merger is the greatest gamble of all time.
I stand by that assertion. It’s artificial superintelligence, or bust.
All of human progress, from the growing of crops to the computations on a computer, can be summed up as Energy × Knowledge.
The more energy we capture and the more “compute” available to solve problems, the more advanced we become.
The agricultural revolution provided us energy to feed more human brains. Our numbers grew, but most of those brains (90 percent) were occupied with farming: basic survival.
The other 10 percent of those brains, however, eventually learned how to harness fossil fuels, creating machines that augmented physical labor using an energy source that didn’t compete with humans. This allowed the total number of humans to rise, and a greater share of them no longer had to farm (70 percent+).
Our total compute was further augmented in the IT revolution after 1960, with microchips. But, since we are talking about demand loops here, we produced far more data than we knew what to do with!
The Intelligence Revolution (2020) found a way to use that data, feeding it into datacenters and transformer algorithms to produce artificial intelligence. It’s probably no surprise that this is happening as the human population begins to decline; we don’t need as many brains if we can create billions or trillions more in silicon!
The constraint, then, will quickly turn back to energy, hence all the attention being devoted to nuclear and solar energy. Solar is extremely cheap on Earth and roughly 10x cheaper in space.
What Musk is aiming for here is essentially the beginning of a Dyson swarm: millions of solar-powered brains in space, collecting direct solar energy and transforming it into new knowledge for the benefit of humankind.
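The "roughly 10x cheaper in space" claim can be sanity-checked with a back-of-envelope comparison of energy yield per panel area. This is a rough sketch under my own assumptions (a typical utility-scale ground capacity factor, and near-continuous illumination in a dawn-dusk sun-synchronous orbit), not the commenter's numbers:

```python
# Back-of-envelope: annual solar energy per m^2 of panel, space vs ground.
# All constants below are my assumptions, chosen as typical published values.

SOLAR_CONSTANT = 1361         # W/m^2, irradiance above the atmosphere
GROUND_PEAK = 1000            # W/m^2, standard test-condition irradiance
GROUND_CAPACITY_FACTOR = 0.2  # night, weather, sun angle (utility-scale typical)
SPACE_DUTY_CYCLE = 0.99       # dawn-dusk orbits are almost never in eclipse
HOURS_PER_YEAR = 8766

space_kwh_per_m2 = SOLAR_CONSTANT * SPACE_DUTY_CYCLE * HOURS_PER_YEAR / 1000
ground_kwh_per_m2 = GROUND_PEAK * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"space:  {space_kwh_per_m2:,.0f} kWh/m^2/yr")
print(f"ground: {ground_kwh_per_m2:,.0f} kWh/m^2/yr")
print(f"ratio:  {space_kwh_per_m2 / ground_kwh_per_m2:.1f}x")
```

Per unit of panel area this comes out to roughly a 6-7x energy advantage, so getting to "10x cheaper" overall would additionally depend on launch and hardware cost assumptions, which this sketch deliberately ignores.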
Musk pivots to the Moon because that's where the NASA money is.
Datacenters in LEO are B/S.
Starlink works; the next iteration will use similar hardware for multispectral real-time monitoring of Earth. The military loves the idea, but there will also be plenty of civilian uses, including self-driving support.
I don't think this is LEO, because you don't need the low latency. This is farther out, so that the heat from the Earth doesn't warm the datacenters.
Agreed that multispectral real-time monitoring is a business, but not yet a trillion-dollar business the way datacenters would be.
Debris becomes more of an issue the higher you get. You need propellant for avoidance, or heavy shielding. Unless you go really high, but then you use more propellant to get up there, and you'd typically need to spend more time in orbit, which doesn't work well for GPUs.
Beyond LEO, it's even more B/S.
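The cooling side of this exchange can be put in rough numbers: in vacuum, the only way to reject heat is radiation, governed by the Stefan-Boltzmann law. A minimal sketch, with the emissivity, radiator temperatures, and the 1 GW load all being my own assumptions for illustration:

```python
# Rough radiator sizing for an orbital datacenter via the Stefan-Boltzmann law.
# Emissivity, temperatures, and power level are assumptions, not design figures.

SIGMA = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
EPSILON = 0.9     # emissivity of a good radiator coating (assumed)

def radiator_area_m2(power_w, t_radiator_k, t_sink_k=3):
    """One-sided radiating area needed to reject power_w by radiation alone."""
    flux = EPSILON * SIGMA * (t_radiator_k**4 - t_sink_k**4)  # W/m^2
    return power_w / flux

# A 1 GW datacenter rejecting heat at 300 K (close to electronics temperature):
print(radiator_area_m2(1e9, 300))   # ~2.4 million m^2
# The same load with coolant pumped up to 600 K: ~16x less area, thanks to T^4
print(radiator_area_m2(1e9, 600))
```

The T^4 scaling is why running radiators hotter shrinks them so dramatically, but it requires pumping heat uphill from chip temperatures to the radiator temperature, which itself costs power and mass.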