Quote from the 2005 science fiction novel "Accelerando" by Charles Stross:
"... The solar system is a dead loss right now – dumb all over! Just measure the MIPS per milligram. If it isn't thinking, it isn't working. We need to start with the low-mass bodies, reconfigure them for our own use. Dismantle the moon! Dismantle Mars! Build masses of free-flying nanocomputing processor nodes exchanging data via laser link, each layer running off the waste heat of the next one in. Matrioshka brains, Russian doll Dyson spheres the size of solar systems. Teach dumb matter to do the Turing boogie!"
This is a reference to Robert Bradbury's Matrioshka brain concept, which presumes incandescently hot Drexler-style rod logic - presumably as a placeholder for something that might actually work efficiently at high temperatures without chemo-mechanical decay.
One of the issues with supposedly efficient reversible computation is that real systems have physical extent. Information, represented as bits with energies greater than ln(2)kT, is transported from place to place, where it is either deposited on some kind of discriminator device or reflected. If there are /any/ losses, then either what is sent is a packet of information with energy greater than the minimum discriminator energy, or it is received unreliably. Perhaps there is a magic black box at the other end of a channel that can extract reliable bits from a stream of unreliable ones, but this typically involves nonlinearity and energy input. If we ever need to repeat a transmission, we must store a bit at both ends of the channel, and non-reversibly expend energy to do so. The energy expended per bit increases with temperature.
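The ln(2)kT bound invoked above is the Landauer limit. A minimal sketch of how it scales with temperature; the sample temperatures (solar photosphere, room temperature, and the cold-shell figures discussed later) are chosen for illustration:

```python
# Landauer limit: the minimum energy to irreversibly store or erase one
# bit at temperature T is E = k * T * ln(2). The cost is linear in T,
# which is why bit manufacture is cheapest where kT is lowest.
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K (exact, SI 2019)

def landauer_energy_per_bit(temperature_k: float) -> float:
    """Minimum irreversible energy cost of one bit at temperature T."""
    return K_BOLTZMANN * temperature_k * math.log(2)

for t in (5778.0, 300.0, 60.0, 40.0):
    print(f"T = {t:6.0f} K : {landauer_energy_per_bit(t):.3e} J/bit")
```

A 60 K machine pays 1/5 the per-bit energy toll of a 300 K one, simply because the cost is proportional to T.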
If a thermodynamic computation machine takes in power at temperature T_i and exhausts heat at output temperature T_o , it has a Carnot efficiency of ( T_i - T_o ) / T_i . If that power manufactures bits with energy proportional to T_o , then the number of bits manufactured per unit of input heat is proportional to ( T_i - T_o ) / ( T_i \times T_o ) , or equivalently ( 1/T_o ) - ( 1/T_i ) . If we interpose another layer at intermediate temperature T_1 , and everything runs at perfect thermodynamic efficiency, then we have two stages producing ( 1/T_o ) - ( 1/T_1 ) and ( 1/T_1 ) - ( 1/T_i ) , which summed together reproduce the one-stage result.
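The telescoping claim is easy to check numerically. A sketch, with illustrative placeholder temperatures (T_1 can be anything between T_o and T_i):

```python
# Numerical check: at Carnot efficiency, splitting the drop T_i -> T_o
# into two stages at an intermediate temperature T_1 produces exactly as
# many ideal bits as a single stage, because the sum telescopes:
#   (1/T_o - 1/T_1) + (1/T_1 - 1/T_i) = 1/T_o - 1/T_i.
def bits_per_unit_heat(t_hot: float, t_cold: float) -> float:
    """Ideal bits manufactured per unit input heat, up to a constant:
    Carnot efficiency divided by (bit energy ~ T_cold)."""
    return 1.0 / t_cold - 1.0 / t_hot

T_I, T_1, T_O = 5778.0, 300.0, 60.0  # sun, intermediate shell, outer shell
one_stage = bits_per_unit_heat(T_I, T_O)
two_stage = bits_per_unit_heat(T_I, T_1) + bits_per_unit_heat(T_1, T_O)
print(one_stage, two_stage)  # identical, regardless of the choice of T_1
```

The intermediate layer neither helps nor hurts in the ideal case; as the next paragraph notes, real backflow losses make it strictly worse.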
Except that we must ensure zero energy backflow: no emissions from the T_o stage may reach back to heat the T_1 stage (and reduce its radiative efficiency). Given that black-body emissions are broad spectrum, we cannot achieve this.
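The backflow between two facing black surfaces follows directly from the Stefan-Boltzmann law; a sketch, with illustrative stage temperatures:

```python
# Radiative backflow between two facing black surfaces: the colder (T_o)
# stage radiates sigma * T_o^4 back onto the warmer (T_1) stage, and
# because black-body spectra overlap at all wavelengths, no passive
# filter can reject it completely. Small, but never zero.
def backflow_fraction(t_warm: float, t_cold: float) -> float:
    """Cold-stage return emission as a fraction of warm-stage emission."""
    return (t_cold / t_warm) ** 4

print(f"{backflow_fraction(300.0, 60.0):.2e}")  # (60/300)^4 = 1.60e-03
```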
Further, the sun (from a 50 AU distance) is a tiny disk with low "etendue" or solid-angle spread. An intermediate shell would add etendue to the energy reaching the outer shell. Many plausible processes can efficiently "disassemble" a high-energy photon coming from one narrow direction, but not so for photons arriving over a large solid angle.
Hence, the best way to use energy efficiently to manufacture and maintain bits is where kT is lowest - far out in the solar system. Perhaps a second, "hotter" stage belongs in the inner system - not to extract significant energy, but to remove damaging high-energy photons and solar wind particles.
And what would hot machines be made from, anyway? Only a few materials are mechanically stable at high temperatures - carbon, tungsten, refractory ceramics. Silicon carbide survives to about 3,000 K and uses abundant elements; diamond is metastable and eventually decomposes into graphite at such temperatures. Pure silicon melts at 1687 K. Tungsten melts at 3,697 K, but its oxides, silicide, and carbide melt at lower temperatures. Atoms heavier than iron are lower on the curve of binding energy - which includes most heavy metals: tungsten, tantalum, and the other common refractory elements - and hence cosmically scarce. So we can presume that the hottest we would want to make a rod logic machine is about 2,000 K - which gives us the dubious privilege of making energy-expensive and unstable bits from uncommon materials.
H2O ice is the most common solid material in the solar system, a large component of the cores of the outer planets. It is stable for geological time (though not against radiation) at 60 Kelvin, the temperature of a black-body shell with a diameter of 100 AU. A non-orbiting static shell supported by light pressure (the difference between incoming sunlight and low-energy IR reflection on the inward side, and thermal emission on the back) can weigh about 1 gram per square meter while balancing solar gravity. That would use up about 2% of Neptune. If we double the diameter of the shell to 200 AU, we will use 8% of Neptune and cut the temperature to 40 Kelvin, producing 50% more computation. We will need more elements for our machines than just hydrogen and oxygen; hopefully the carbon and nitrogen and the rarer elements in Neptune will be sufficient.
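The shell temperatures quoted above follow from radiative balance. A sketch assuming a thin black shell that absorbs the full solar luminosity and re-radiates only from its outer surface (ignoring albedo and inner-surface emission), which lands close to the rounded 60 K / 40 K figures in the text:

```python
# Equilibrium temperature of a thin black shell of radius R around the sun:
#   sigma * T^4 = L_sun / (4 * pi * R^2)  =>  T scales as 1 / sqrt(R).
# Doubling the diameter cuts T by sqrt(2), and since ideal bits per unit
# heat go as ~1/T_o (for T_o << T_i), computation rises by ~sqrt(2),
# roughly the 50% figure obtained from the rounded 60 K and 40 K values.
import math

L_SUN = 3.828e26         # solar luminosity, W
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
AU = 1.495978707e11      # astronomical unit, m

def shell_temperature(radius_au: float) -> float:
    """Black-body equilibrium temperature of a shell at this radius."""
    radius_m = radius_au * AU
    return (L_SUN / (4.0 * math.pi * radius_m**2 * SIGMA)) ** 0.25

t_inner = shell_temperature(50.0)    # 100 AU diameter shell
t_outer = shell_temperature(100.0)   # 200 AU diameter shell
print(f"{t_inner:.0f} K, {t_outer:.0f} K, bit gain x{t_inner / t_outer:.2f}")
```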
The most interesting object in the solar system is the earth (and its attendant moon, necessary for rotation-axis stabilization). We may choose someday to clean out the asteroids and other potential impactors (after thoroughly documenting their atomic-level composition and trajectories), but the other planets will mostly be left alone. In extremis, they could be used as reaction mass, expelled from the solar system, or collided with the sun. Perhaps the latter could someday be done in such a way as to remove helium and the metallics and other fusion "ash" - that would preserve the earth for a very long time.
But there is not much point in disassembling the earth, or placing computation in the inner solar system - the gravity well is too steep, and it is far too vulnerable to bombardment from the outer system. Across-shell speed-of-light delays are smaller in the inner solar system, which speeds communication, but also shortens the available reaction time to threats. Cold detectors can see threats that hot detectors cannot, so a cold shell placed high in the gravity well will have many strategic advantages over an inner hot one. Future intelligence will probably covet the inner solar system environment as little as we covet hydrothermal vents on the bottom of the ocean, and for the same reasons.