The following is intended as 1) a request for specific criticisms regarding whether this project is worth an investment of time, and 2) pending a favorable answer to that, a request for further involvement from qualified individuals. It is not intended as a random piece of interesting pop-sci, despite the subject matter, but as a volunteer opportunity.
Server Sky is an engineering proposal to place thousands (eventually millions) of ultrathin satellites into medium Earth orbit in the near term. It is being put forth by Keith Lofstrom, the inventor of the Launch Loop.
Abstract from the 2009 paper:
It is easier to move bits than atoms or energy. Server-sats are ultralight disks of silicon that convert sunlight into computation and communications. Powered by a large solar cell, propelled and steered by light pressure, networked and located by microwaves, and cooled by black-body radiation. Arrays of thousands of server-sats form highly redundant computation and database servers, as well as phased array antennas to reach thousands of transceivers on the ground.
First generation server-sats are 20 centimeters across (about 8 inches), 0.1 millimeters (100 microns) thick, and weigh 7 grams. They can be mass produced with off-the-shelf semiconductor technologies. Gallium arsenide radio chips provide intra-array, inter-array, and ground communication, as well as precise location information. Server-sats are launched stacked by the thousands in solid cylinders, shrouded and vibration-isolated inside a traditional satellite bus.
Some mildly negative evidence to start with: I have already had a satellite scientist tell me that this seems unlikely to work. The obstacles he stressed as significant were avoiding space debris and Kessler syndrome, radio communication difficulties (especially uplink), and the need for precise synchronization. He did not seem to have studied the proposal closely, but this at least tells us to be careful about where we set our priors.
On the other hand, it appears Keith has given these problems a lot of thought already, and solutions can probably be worked out. The thinsats would have optical thrusters (small solar sails) and would thus be able to move themselves and each other around; defective ones could be collected for disposal without mounting an expensive retrieval mission, and the thrusters would also help the sats avoid debris in the first place. Furthermore, the chosen zone (the m288 orbit) is relatively unused, so collisions with other satellites are unlikely. The satellites also have powerful radar capabilities, which should make detecting and eliminating space junk easier.
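To get a feel for whether light pressure can really maneuver a thinsat, here is a back-of-the-envelope sketch using the first-generation dimensions from the abstract. The perfect-reflection, always-face-on, no-eclipse assumptions are mine, so treat this as an upper bound rather than the project's own figure:

```python
import math

SOLAR_FLUX = 1361.0   # W/m^2 at 1 AU
C = 2.998e8           # speed of light, m/s

diameter = 0.20       # m, first-generation thinsat (from the abstract)
mass = 0.007          # kg, ditto
area = math.pi * (diameter / 2) ** 2

# Upper bound: perfectly reflective sail, always face-on to the Sun,
# never eclipsed. Real thrust (partial absorption, tilt, shadow) is lower.
force = 2 * SOLAR_FLUX * area / C                     # N
accel = force / mass                                  # m/s^2
print(f"thrust  : {force:.2e} N")                     # ~2.9e-7 N
print(f"accel   : {accel:.2e} m/s^2")                 # ~4e-5 m/s^2
print(f"delta-v : {accel * 86_400:.1f} m/s per day")  # a few m/s per day
```

A few meters per second of delta-v per day is tiny by rocket standards, but for station-keeping and slow, planned avoidance maneuvers it looks workable.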
For the communications problem, the idea is to use three-dimensional phased arrays of thinsats -- basically a bunch of satellites in a large block working in unison to generate a specific signal, behaving as if they were a single much larger antenna. This is tricky and requires precision timing and exact distance information. The array's physical configuration will need to be randomized (or perhaps arranged according to an optimized pattern) in order to prevent grating lobes: spurious copies of the main beam that appear whenever array elements are regularly spaced more than about half a wavelength apart. The thinsats would link with GPS and each other by radio on multiple bands to achieve "micron-precision thinsat location and orientation within the array".
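To make the grating-lobe point concrete, here is a toy one-dimensional sketch (my own illustration, not from the Server Sky docs; the wavelength, element count, and spacing are all assumptions) comparing the far-field pattern of a regularly spaced array against a position-jittered one:

```python
import numpy as np

# Normalized far-field array factor |AF(theta)| for isotropic elements at
# positions x (meters), phased to steer the beam toward theta0.
def array_factor(x, wavelength, theta, theta0=0.0):
    k = 2 * np.pi / wavelength
    # Phase each element so contributions add coherently at theta0.
    phases = np.exp(1j * k * np.outer(np.sin(theta) - np.sin(theta0), x))
    return np.abs(phases.sum(axis=1)) / len(x)

wavelength = 0.05          # ~6 GHz signal, 5 cm wavelength (illustrative)
n = 64                     # number of thinsats acting as one antenna
spacing = 20 * wavelength  # elements many wavelengths apart, as thinsats would be

theta = np.linspace(-np.pi / 2, np.pi / 2, 20001)
rng = np.random.default_rng(0)
x_regular = np.arange(n) * spacing
x_random = x_regular + rng.uniform(-0.5, 0.5, n) * spacing  # jittered positions

# Regular spacing: many angles reach near-full gain (grating lobes).
# Random spacing: one main lobe at 0; the rest collapses to a ~1/sqrt(n) floor.
for name, x in [("regular", x_regular), ("random", x_random)]:
    af = array_factor(x, wavelength, theta)
    print(f"{name:8s} spacing: {(af > 0.9).sum()} sample angles near full gain")
```

With regular spacing many wavelengths wide, whole families of angles receive full gain; jittering the positions leaves a single main beam and smears the rest into a low sidelobe floor, which is why randomizing the array's physical configuration helps.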
According to the wiki, the most likely technical show-stopper is radiation damage, which makes sense given that the m288 orbit lies inside the inner Van Allen belt. Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.
Has anyone else here researched this idea, or have relevant knowledge? It seems like a great potential source of computing power for AI research, mind uploads, and so forth, but also for all those mundane, highly lucrative near-term demands like web hosting and distributed business infrastructure.
From an altruistic standpoint, this kind of system could reduce poverty and increase equitable distribution of computing resources. It could also make solving hard scientific problems like aging and cryopreservation easier, and pave the way for solar power satellites. As it scales, it should also create demand (as well as available funding and processing power) for Launch Loop construction, or some other similarly low-cost form of space travel.
The value of information about whether it can work therefore appears to be extremely high, something I think is crucial for a rationalist project. If it can work, the value of taking productive action (leadership, getting it funded, working out the problems, etc.) should be correspondingly high as well.
Update: Keith Lofstrom has responded on the wiki to the questions raised by the satellite scientist.
Note: Not all aspects of the project have complete descriptions yet, but there are answers to a lot of questions in the wiki.
Here is a summary list of questions raised and answers so far:
- How does this account for Moore's Law? (kilobug)
In his reply to the comments on Brin's post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves. Obsolete sats would not stay in use for long.
- What about ping time limits? (kilobug)
Ping times are going to be limited (70 ms or so), and worse than you can theoretically get with a fat pipe (42 ms), but still much better than you get with GEO (250+ ms); see the sanity-check sketch after this list. This is bad for high-frequency trading, but fine for (parallelizable) number crunching and most other practical purposes.
- What kind of power consumption? Doesn't it cost more to launch than you save? (Vanvier)
It takes roughly 2 months for a 3-gram thinsat to pay for the launch energy if it gets 4 watts, assuming 32% fuel-manufacturing efficiency (also checked in the sketch below). Blackbody cooling is another benefit.
- Bits being flipped by cosmic radiation is a problem on Earth; how can it be solved in space? (Vanvier)
Flash memory is acknowledged to be the most radiation-sensitive component of the satellite. The solution would involve extensive error-correction software and caching on multiple satellites.
- Periodic annealing tends to cause short circuits. Wouldn't this result in very short lifetimes? (Vanvier)
Circuits will be manufactured as two-dimensional planes, which don't short as easily. Another significant engineering challenge: thermal properties of the glass will need to be matched with the silicon and wires (for example, slotted wiring with silicon dioxide between the gaps) to prevent circuit damage. Per Vanvier, it may be less expensive to replace silicon with other materials for this purpose.
- What are the specific benefits of putting servers in space? (ZankerH)
Efficient power/cooling, increased communications, overall scalability, relative lack of environmental impact.
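As a sanity check on the latency and launch-energy answers above, here is a short sketch of the arithmetic. The orbital numbers follow from geometry; the launch-fuel energy per kilogram is my own illustrative assumption (roughly in line with small chemical launchers), not a figure from the wiki:

```python
import math

C = 299_792.458   # speed of light, km/s
R_EARTH = 6378.0  # Earth equatorial radius, km

def rtt_ms(altitude_km, elevation="zenith"):
    """Round-trip light time to a server *on* the satellite, in ms."""
    if elevation == "zenith":
        slant = altitude_km  # satellite directly overhead
    else:  # satellite on the horizon: slant^2 + R^2 = (R + h)^2
        slant = math.sqrt((R_EARTH + altitude_km) ** 2 - R_EARTH**2)
    return 2 * slant / C * 1000

print(f"m288 best case   : {rtt_ms(6411):.0f} ms")             # ~43 ms
print(f"m288 near horizon: {rtt_ms(6411, 'horizon'):.0f} ms")  # ~74 ms, the '70 ms or so'
print(f"GEO best case    : {rtt_ms(35786):.0f} ms")            # ~239 ms, the '250+ ms'

# Launch-energy payback. ASSUMPTION: ~2 GJ of launch-fuel chemical energy
# per kg of payload (hypothetical figure, plausible for small launchers).
FUEL_ENERGY_PER_KG = 2e9  # J/kg
MASS = 0.003              # kg, 3-gram thinsat
POWER = 4.0               # W collected
EFFICIENCY = 0.32         # electricity -> fuel manufacturing efficiency

electricity_needed = MASS * FUEL_ENERGY_PER_KG / EFFICIENCY  # J to remake the fuel
days = electricity_needed / POWER / 86_400
print(f"payback: {days:.0f} days")  # ~54 days, i.e. roughly 2 months
```

The ~74 ms horizon figure matches the quoted "70 ms or so", GEO comes out near the quoted 250 ms, and (interestingly) the best case lands at about 43 ms, close to the quoted 42 ms, so the numbers look self-consistent.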
Yet to be answered:
- Is the amount of speculative tech too high? E.g. if future kinds of RAM are needed, costs may be higher. (Vanvier)
- Is it easier to replace silicon with something else than find ways to make the rest of the sat match thermal expansion of silicon? (Vanvier)
- Can we get more data on economics/business plan? (Vanvier)
- Solar sails have been known to stick together. Is this a problem for thinsats, which are shipped stuck together? (Vanvier)
- Do most interesting processes bottleneck on communication efficiency? (skelterpot)
- What decreases in cost might we see with increased manufacturing yield? (skelterpot)
Insightful comments:
- Launch energy vs energy collection (answer above is more specific, but this was a commendable quick-check). (tgb)
- ECC RAM is standard technology used in server computers. (JoachimShipper)
- Fixing bit errors outside the memory (e.g. in CPU) is harder, something like Tandem Computers could be used, with added expense. (JoachimShipper)
- Some processor-heavy computing tasks, like calculating scrypt hashes, are not very parallelizable. (skelterpot)
- Other approaches like redundant hardware and error-checking within the CPU are possible, but they drive up the die area used. (skelterpot)
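To show what the redundancy approaches above look like in software terms, here is a toy sketch (my own illustration, not Server Sky's design) of triple modular redundancy, where three copies of a computation are run and a majority vote masks a single radiation-induced fault:

```python
import random

def vote(results):
    """Majority vote across redundant copies; flags any disagreement."""
    winner = max(set(results), key=results.count)
    if results.count(winner) < len(results):
        print(f"fault detected and outvoted: {results}")
    return winner

def compute(x, fault=False):
    """Stand-in for any deterministic workload; optionally inject a bit flip."""
    result = x * x + 1
    if fault:
        result ^= 1 << random.randrange(16)  # simulate a radiation upset
    return result

x = 12345
# Three redundant copies, one of which suffers an upset in flight:
results = [compute(x), compute(x, fault=True), compute(x)]
assert vote(results) == compute(x)  # the correct answer survives the fault
```

Tandem-style lockstep pairs can only detect a disagreement; a third copy, as here, lets you outvote it, at the cost of roughly 3x the hardware or compute time -- the same die-area/cost trade-off noted in the comments above.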