davidad has a 10-min talk out on a proposal about which he says: “the first time I’ve seen a concrete plan that might work to get human uploads before 2040, maybe even faster, given unlimited funding”.
I think the talk is a good watch, but the dialogue below is pretty readable even if you haven't seen it. I'm also putting some summary notes from the talk in the Appendix of this dialogue.
I think of the promise of the talk as follows. It might seem that to make the future go well, we have to either make general AI progress slower, or make alignment progress differentially faster. However, uploading seems to offer a third... (read 7438 more words →)
We do not assume mirrors. As you say, there are big limits due to conservation of étendue. We are assuming (if I remember right) photovoltaic conversion into electricity and/or microwave beams received by rectennas. Now, all that conversion back and forth induces losses, but they are not orders of magnitude large.
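To give a feel for why the losses are not orders of magnitude large, here is a toy calculation; the stage efficiencies below are purely illustrative assumptions, not figures from the paper:

```python
# Toy end-to-end efficiency for a PV -> microwave -> rectenna power chain.
# All stage efficiencies are illustrative assumptions, not measured values.
stages = {
    "photovoltaic conversion": 0.25,     # assumed PV efficiency
    "DC -> microwave conversion": 0.80,  # assumed transmitter efficiency
    "beam capture at rectenna": 0.90,    # assumed fraction of beam intercepted
    "rectenna RF -> DC": 0.80,           # assumed rectenna efficiency
}

total = 1.0
for name, eff in stages.items():
    total *= eff
    print(f"{name:30s} {eff:.0%}  (cumulative {total:.1%})")

# Even with several lossy steps in a row, the end-to-end efficiency stays
# within one order of magnitude of the raw collected power.
print(f"End-to-end efficiency: {total:.1%}")
```

With these assumed numbers the chain delivers roughly 14% of the collected sunlight as usable electricity at the receiving end, lossy but nowhere near an order-of-magnitude collapse.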
In the years since we wrote that paper I have become much more fond of solar thermal conversion (use the whole spectrum rather than just part of it), and lightweight statite-style foil Dyson swarms rather than heavier collectors. The solar thermal conversion doesn't change things much (but allows for a more clean-cut analysis of entropy and efficiency; see Badescu's work). The statite... (read more)
It seems to me that the real issue is rational weighing of reference classes when using multiple models. I want to assign them weights so that they form a good ensemble to build my forecasting distribution from, and these weights should ideally reflect my prior that they are relevant and good, their model complexity, and perhaps whether their biases are countered by other reference classes. In the computationally best of all possible worlds I go down the branching rabbit hole and also make probabilistic estimates of the weights. I could also wing it.
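As a minimal sketch of what I mean by weighting reference classes into a single forecasting distribution (the classes, data and weights below are made up purely for illustration):

```python
import numpy as np

# Hypothetical reference classes, each giving samples of the quantity
# we want to forecast (e.g. years until some milestone).
reference_classes = {
    "similar past projects": np.array([4.0, 6.0, 5.5, 7.0]),
    "industry base rate":    np.array([10.0, 12.0, 8.0, 15.0, 11.0]),
    "inside-view model":     np.array([3.0, 3.5, 4.0]),
}

# Weights reflecting my prior that each class is relevant and good,
# penalised for model complexity etc. Chosen by hand here ("winging it").
weights = {"similar past projects": 0.5,
           "industry base rate": 0.3,
           "inside-view model": 0.2}

# Build the ensemble forecast as a weighted mixture: sample a class
# according to its weight, then sample from that class's data.
rng = np.random.default_rng(0)
names = list(reference_classes)
probs = np.array([weights[n] for n in names])
picks = rng.choice(len(names), size=10_000, p=probs)
samples = np.array([rng.choice(reference_classes[names[i]]) for i in picks])

print("ensemble mean:", samples.mean())
print("80% interval:", np.quantile(samples, [0.1, 0.9]))
```

The "go down the rabbit hole" version would replace the hand-picked weights with a distribution over weights and propagate that uncertainty too.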
The problem is that the set of potential reference classes appears to be badly defined. The Tesla case potentially involves all... (read more)
I have been baking for a long time, but it took a surprisingly long while to get to this practical "not a ritual" stage. My problem was that I approached it as an academic subject: an expert tells you what you need to know when you ask, and then you try it. But the people around me knew how to bake in a practical, non-theoretical sense. So while my mother would immediately tell me how to fix a too-runny batter and stress the importance of quickly working a pie dough, she could not explain why that worked in terms that I could understand. Much frustration ensued on both sides.
I had been looking at Fisher information myself during the weekend, noting that it might be a way of estimating the uncertainty of the estimate using the Cramér-Rao bound (but quickly finding that the algebra got the better of me; it *might* be analytically solvable, but it would be messy work).
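For reference, the bound I had in mind is the standard single-parameter Cramér-Rao inequality:

$$\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)}, \qquad I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}\log L(X;\theta)\right)^{2}\right]$$

so the Fisher information of the model lower-bounds the variance of any unbiased estimator of the parameter; the messy part is getting $I(\theta)$ in closed form for the model at hand.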
I tried doing a PCA of the judgments, to see if there was any pattern in how the predictions were judged. However, the variance explained by the principal components did not decline quickly: the first component explains just 14% of the variance, the next ones 11%, 9%, 8%... It is not as if there is some dominant low-dimensional or clustering explanation for the pattern of good and bad predictions.
There were no clear patterns when I plotted the predictions in PCA-space: https://www.dropbox.com/s/1jvhzcn6ngsw67a/kurzweilpredict2019.png?dl=0 (In this plot colour denotes the mean assessor view of correctness, with red being incorrect, and size the standard deviation of assessor views, with larger markers corresponding to more disagreement.) Some higher-order components may correspond to particular correlated batches of questions, like the VR ones.
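For what it is worth, the analysis is just ordinary PCA on the prediction-by-assessor judgment matrix; a sketch of the kind of computation, with a made-up random matrix standing in for the actual judgments:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the real data: rows = predictions, columns = assessor judgments
# (the actual matrix came from the Kurzweil prediction assessments).
rng = np.random.default_rng(1)
judgments = rng.integers(1, 6, size=(100, 30)).astype(float)

pca = PCA()
scores = pca.fit_transform(judgments - judgments.mean(axis=0))

# If a few components explained most of the variance we would see a sharp
# drop here; a flat profile (like ~14%, 11%, 9%, 8%...) means there is no
# dominant low-dimensional structure.
print(np.round(pca.explained_variance_ratio_[:5], 3))

# Plotting scores[:, 0] against scores[:, 1], coloured by mean assessed
# correctness, gives the kind of scatter linked above.
```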
The fundamental problem is not even distinguishing exponential from logistic: even if you *know* it is logistic, the parameters you typically care about (inflection point location and asymptote) are badly behaved until after the inflection point. As pointed out in the related twitter thread, of the three logistic parameters - growth rate, inflection point location, and asymptote - you gain little information about the latter two in the early phase and only information about the first two in the mid phase: it is the sequential nature of the forecasting that creates this problem.
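A quick way to see this for yourself (just a sketch with synthetic data): fit a three-parameter logistic to a series truncated before the inflection point and look at how poorly constrained the asymptote and inflection location are.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic: asymptote K, growth rate r, inflection at t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.arange(0, 30)
true = logistic(t, K=100.0, r=0.4, t0=15.0)
data = true + rng.normal(0, 2.0, size=t.size)

for cutoff in (10, 15, 20, 30):   # how much of the curve has been observed
    popt, pcov = curve_fit(logistic, t[:cutoff], data[:cutoff],
                           p0=(50.0, 0.5, 10.0), maxfev=20000)
    err = np.sqrt(np.diag(pcov))
    print(f"data up to t={cutoff:2d}: K={popt[0]:7.1f}±{err[0]:.1f}, "
          f"t0={popt[2]:5.1f}±{err[2]:.1f}")

# Before the inflection point (cutoffs 10 and 15) the asymptote K and the
# inflection time t0 have huge uncertainties; only the growth rate is pinned
# down. After the inflection they rapidly become well determined.
```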
I find it odd that this does not have a classic paper. There are *lots* of Bass curves used in technology adoption studies, and serious business people are interested in using them to forecast - somebody ought to have told them they would be disappointed. It seems to be the kind of result that everybody who knows the field knows but rarely mentions, since it is so obvious.
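(For readers who have not met them: the Bass model describes the cumulative adoption fraction $F(t)$ via

$$\frac{dF}{dt} = \big(p + q\,F(t)\big)\big(1 - F(t)\big),$$

with an innovation coefficient $p$ and an imitation coefficient $q$. It produces the familiar S-curve, and forecasting from early adoption data runs into exactly the parameter-estimation problem described above.)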
I think the argument can be reformulated like this: space has very large absolute amounts of some resources - matter, energy, distance (distance is a kind of resource, useful for isolation/safety). The average density of these resources is very low (solar flux in space is within an order of magnitude of solar on Earth), and the matter is often low-grade (Earth's geophysics has created convenient ores). Hence matter and energy collection will only be profitable if (1) access gets cheap, and (2) one can use automated collection with a very low marginal cost - plausibly robotic automation. (2) implies that a lot of material demands on Earth can be fulfilled that way too... (read more)
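To spell out the profitability condition as a back-of-the-envelope inequality (the symbols are mine, just to make the structure explicit): collecting a space resource pays off roughly when

$$v \;>\; c_{\mathrm{access}} + c_{\mathrm{collect}}$$

per kilogram delivered, where $v$ is the value of the delivered material or energy, $c_{\mathrm{access}}$ is the transport cost addressed by (1), and $c_{\mathrm{collect}}$ is the marginal extraction cost addressed by (2); cheap access and near-zero marginal robotic collection attack the two terms on the right.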
Overall, typographic innovations, like all typography, are better the less they stand out while still doing their work. At least in somewhat academic text with references and notation, subscripting appears to blend right in. I suspect the strength of the proposal is that one can flexibly apply it depending on readers and tone: sometimes it makes sense to say "I~2020~ thought", sometimes "I thought in 2020".
I am seriously planning to use it for inflation adjustment in my book, and may (publisher and test-readers willing) apply it more broadly in the text.
Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go), and (2) we also handwave the retro-rockets (it is hard to scale down nuclear rockets; I think a detachable laser retro-rocket is better now). I am less concerned about planetary disassembly and building destination infrastructure: this is standard extrapolation of automation, robotics and APM.
However, our paper mostly deals with sending a civilization's seeds everywhere; it does not deal with near-term space settlement. That requires a slightly different intellectual approach.
What I am doing in my book is trying to look at... (read more)
I have not seen any papers about it, but did look around a bit while writing the paper.
However, a colleague and I analysed laser acceleration and it looks even better, especially since one can use non-rigid lens systems to enable a longer boost phase. We developed the idea a fair bit but have not written it up yet.
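The basic scaling that makes laser acceleration attractive is the ideal photon-sail relation a = 2P/(mc); the power and mass in the sketch below are illustrative assumptions, not numbers from our analysis.

```python
# Ideal photon-sail acceleration: a = 2P / (m c) for a perfectly reflective sail.
# Power level and mass are illustrative assumptions only.
c = 299_792_458.0   # speed of light, m/s
P = 1e9             # assumed laser power delivered to the sail, W (1 GW)
m = 1.0             # assumed total spacecraft + sail mass, kg

a = 2 * P / (m * c)      # ~6.7 m/s^2
v_after = a * 3600       # velocity after one hour of boost, m/s
print(f"acceleration: {a:.2f} m/s^2, after 1 h of boost: {v_after/1000:.0f} km/s")
```

The achievable velocity scales directly with how long the beam can stay focused on the sail, which is why lens systems that extend the boost phase matter so much.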