It seems to me that the real issue is the rational weighing of reference classes when using multiple models. I want to assign them weights so that they form a good ensemble to build my forecasting distribution from, and these weights should ideally reflect my prior that they are relevant and well-chosen, their model complexity, and perhaps the extent to which their biases are countered by other reference classes. In the computationally best of all possible worlds I would go down the branching rabbit hole and also make probabilistic estimates of the weights. I could also wing it.
The problem is that the set of potential reference classes appears to be badly defined. The Tesla case potentially involves all possible subsets of stocks (2^N) over all possible time intervals (2^NT), but as the dictator case shows, there is also a potentially unbounded set of other facts that might be used in selecting the reference classes. That means we should be suspicious about having well-formed priors over the set of reference classes.
When some sensible reference classes pop up in my mind and I select from them, I am doing naturalistic decision making, where past experience gates availability. So while I should weigh their results together, I should be aware that they are biased in this way and broaden my model uncertainty for the weighing accordingly. How much I broaden it depends on how large I allow the considered set of potential reference classes to be, which is a separate meta-prior.
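To make the "weigh them together" step concrete, here is a minimal sketch of the mechanics; the reference classes, weights, and broadening factor are all invented numbers, not anything from the discussion above:

```python
# Minimal sketch of weighing reference classes into one forecast distribution.
# Class summaries, weights, and the broadening factor are invented, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate reference class summarised as (mean, std) for the quantity of interest.
reference_classes = {
    "class_A": (0.10, 0.05),
    "class_B": (0.02, 0.10),
    "class_C": (-0.05, 0.20),
}
weights = np.array([0.5, 0.3, 0.2])   # prior weights: relevance, quality, complexity penalty...
broadening = 1.5                      # extra model uncertainty for availability-biased selection

means = np.array([m for m, s in reference_classes.values()])
stds = np.array([s for m, s in reference_classes.values()])

# Sample from the weighted mixture, with each class's spread inflated.
idx = rng.choice(len(weights), size=100_000, p=weights)
samples = rng.normal(means[idx], stds[idx] * broadening)

print("forecast median:", round(float(np.median(samples)), 3))
print("80% interval:", np.round(np.percentile(samples, [10, 90]), 3))
```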
I have been baking for a long time, but it took a surprisingly long while to get to this practical "not a ritual" stage. My problem was that I approached it as an academic subject: an expert tells you what you need to know when you ask, and then you try it. But the people around me knew how to bake in a practical, non-theoretical sense. So while my mother would immediately tell me how to fix a too-runny batter and stress the importance of working a pie dough quickly, she could not explain why that worked in terms that I could understand. Much frustration ensued on both sides.
A while ago I came across Harold McGee's "On Food and Cooking" and Jeff Potter's "Cooking for Geeks". These books explained what was going on in a format that made sense to me - starch gelation, protein denaturation, Maillard reactions, and so on - *and* linked it to the language of the kitchen. Suddenly I had the freedom to experiment and observe, with a framework of explicit chemistry and physics to help me organise the observations. There has been a marked improvement in my results (although my mother now finds me unbearably weird in the kitchen). It is also fun to share these insights: https://threadreaderapp.com/thread/1263895622433869827.html
The lesson of my experience is that sometimes it is important to seek out people who can explain and bootstrap your knowledge by speaking your "language", even if they are not the conveniently close and friendly people around you. Explanations that do not work for you do not explain much, and hence just become ritual rules. Figuring out *why* explanations do not work for you is the first step, but then one needs to look around for sources of the right kind of explanations (which in my case took far longer). Of course, if you are not as dependent on theoretical explanations as I am but of a more practical, empirical bent, you can sidestep this issue to a large extent.
Awesome find! I really like the paper.
I had been looking at Fisher information myself over the weekend, noting that it might be a way of estimating the uncertainty of the estimate via the Cramér-Rao bound (but quickly finding that the algebra got the better of me; it *might* be analytically solvable, but it would be messy work).
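For reference, the bound I had in mind, stated in its generic scalar form (nothing here is specific to the judgment data):

```latex
% Cramér-Rao: any unbiased estimator \hat{\theta} of \theta satisfies
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \log L(\theta \mid X)\right],
% where I(\theta) is the Fisher information of the sample X.
```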
I tried doing a PCA of the judgments to see if there was any pattern in how the predictions were judged. However, the variance of the principal components did not decline quickly: the first component explains just 14% of the variance, the next ones 11%, 9%, 8%... It does not look like there is a dominant low-dimensional or clustering explanation for the pattern of good or bad predictions.
No clear patterns appear when I plot the predictions in PCA space: https://www.dropbox.com/s/1jvhzcn6ngsw67a/kurzweilpredict2019.png?dl=0 (in this plot colour denotes the mean assessor view of correctness, with red being incorrect, and size denotes the standard deviation of the assessor views, i.e. how much they disagreed). Some higher-order components may correspond to particular correlated batches of questions, like the VR ones.
(Or maybe I used the Matlab PCA routine wrong).
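For reference, this is roughly the analysis I mean, sketched in Python/scikit-learn rather than Matlab; the `scores` matrix here is random placeholder data, not the actual judgments:

```python
# Sketch of the PCA described above. `scores` stands in for the real judgment matrix
# (rows = predictions, columns = assessor scores); here it is random placeholder data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 20)).astype(float)  # placeholder judgments

pca = PCA()
proj = pca.fit_transform(scores)   # sklearn centres the columns internally
print("variance explained by first components:",
      np.round(pca.explained_variance_ratio_[:4], 2))

# Scatter in the first two components, coloured by mean judgment and
# sized by assessor disagreement (standard deviation across assessors).
plt.scatter(proj[:, 0], proj[:, 1],
            c=scores.mean(axis=1), cmap="RdYlGn_r",
            s=20 + 150 * scores.std(axis=1))
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```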
Another nice example of how this is a known result but not presented in the academic literature:
https://constancecrozier.com/2020/04/16/forecasting-s-curves-is-hard/
The fundamental problem is not even distinguishing exponential from logistic: even if you *know* it is logistic, the parameters you typically care about (the location of the inflection point and the asymptote) are badly behaved until after the inflection point. As pointed out in the related twitter thread, the early phase mostly constrains the growth rate and tells you little about the inflection point and asymptote, and the mid phase constrains the growth rate and inflection point but still not the asymptote: it is the sequential nature of the forecasting that creates the problem.
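A quick way to see this for yourself (a toy sketch with synthetic data, not taken from the post above): fit a logistic to data truncated at different points and watch the asymptote estimate wander until the data reach past the inflection.

```python
# Toy illustration: fitting a logistic to progressively longer prefixes of noisy data.
# The asymptote estimate K only stabilises once the data extend past the inflection point.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 80)
true_K, true_r, true_t0 = 100.0, 0.8, 10.0            # inflection at t = 10
y = logistic(t, true_K, true_r, true_t0) + rng.normal(0, 2, t.size)

for cutoff in [6, 8, 10, 12, 14, 16]:                 # how much of the curve we have seen
    mask = t <= cutoff
    try:
        params, _ = curve_fit(logistic, t[mask], y[mask],
                              p0=[max(y[mask]) * 2, 0.5, cutoff],
                              maxfev=10000)
        print(f"data up to t={cutoff:2d}: estimated asymptote K ~ {params[0]:8.1f}")
    except RuntimeError:
        print(f"data up to t={cutoff:2d}: fit did not converge")
```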
I find it odd that this does not have a classic paper. There are *lots* of Bass curves used in technology adoption studies, and serious business people are interested in using them to forecast - somebody ought to have told them that they will be disappointed. It seems to be the kind of result that everybody who knows the field knows but rarely mentions, since it is so obvious.
I think the argument can be reformulated like this: space has very large absolute amounts of some resources - matter, energy, and distance (distance being a kind of resource useful for isolation/safety). The average density of these resources is low (solar flux in space is within an order of magnitude of solar on Earth) and the matter is often low-grade (Earth's geophysics has created conveniently concentrated ores). Hence matter and energy collection will only be profitable if (1) access gets cheap, and (2) one can collect with a very low marginal cost - plausibly through robotic automation. But (2) implies that a lot of material demands could be met the same way on Earth, leaving the very large absolute amounts of stuff as the only reason to go to space. That is fairly different from most material economics on Earth.
Typical ways of getting around this are either claiming special resources, like the Helium-3 lunar mining proposals (extremely doubtful: He-3 fusion requires you to have already solved easier forms of fusion, which have plentiful fuels), or special services (zero-gravity manufacturing, military uses, etc.). I have not yet seen any convincing special resource, and while niche services may exist (obviously comms, monitoring and research; perhaps tourism, high-quality fibre optics, military uses), they seem to be small and near-Earth - not enough to motivate settling the place.
So I end up roughly with Stuart: the main reasons to actually settle space would be non-economic. That leads to another interesting question: do we have good data or theory for how often non-economic settlement occurs and works?
I think one interesting case study is Polynesian island settlement. The cost of the exploratory vessels was a few percent of the local economy, but the actual settlement effort may have been somewhat costly (especially in people). Yet it may have reduced resource scarcity and social conflict: it was not so much an investment in getting another island as in getting more of an island to oneself (and avoiding that annoying neighbour).
Overall, typographic innovations, like all typography, are better the less they stand out while still doing their work. At least in somewhat academic text, with references and notation, subscripting appears to blend right in. I suspect the strength of the proposal is that one can apply it flexibly depending on readers and tone: sometimes it makes sense to say "I~2020~ thought", sometimes "I thought in 2020".
I am seriously planning to use it for inflation adjustment in my book, and may (publisher and test-readers willing) apply it more broadly in the text.
Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go), and (2) we also handwave the retro-rockets (it is hard to scale down nuclear rockets; I think a detachable laser retro-rocket is better now). I am less concerned about planetary disassembly and building destination infrastructure: this is standard extrapolation of automation, robotics and APM.
However, our paper mostly deals with sending a civilization's seeds everywhere; it does not deal with near-term space settlement. That requires a slightly different intellectual approach.
What I am doing in my book is trying to look at a "minimum viable product" - not a nice project worth doing (a la O'Neill/Bezos) but the crudest approach that can show a lower bound. Basically, we know humans can survive for years on something like the ISS. If we can show that an ISS-like system can (1) produce food and the other necessities of life, (2) allow the crew to mine space resources, (3) turn those resources into more habitat and life-support material, (4) support a crew that thrives well enough to reproduce, and (5) build more copies of itself, with new crew, at a faster rate than existing systems fail - then we have a pretty firm proof of the feasibility of space settlement. I suspect (1) is close to demonstration, (2) and (3) need more work, (4) is likely a long-term question that must be tested empirically, and (5) will be hard to strictly prove at present but can be made fairly plausible.
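Criterion (5) is really just a branching condition: replication has to outpace failure. A toy sketch with invented numbers (a 20-year build time and a 1% annual failure rate are placeholders, not estimates):

```python
# Toy expected-value model for criterion (5): habitats build copies of themselves
# (one copy per `build_time` years) and fail at some annual rate. The numbers are
# invented, purely to illustrate that growth needs replication to outpace failure.

def expected_habitats(years=200, build_time=20.0, annual_failure_rate=0.01, n0=1.0):
    n = n0
    for _ in range(int(years)):
        # survive the year, then add the expected fraction of a new copy
        n *= (1.0 - annual_failure_rate) * (1.0 + 1.0 / build_time)
    return n

print(f"{expected_habitats():.0f}")                          # grows: 5%/yr replication vs 1%/yr failure
print(f"{expected_habitats(annual_failure_rate=0.1):.3f}")   # shrinks: failure outpaces replication
```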
If this minimal system is doable (and I am pretty firmly convinced it is - the hairy engineering problems are just messy engineering rather than pushing against any limits of physics), then we can settle the solar system. Interstellar settlement requires either self-sufficient habitats that can last a very long time (and perhaps spread by hopping from Oort object to Oort object), AI-run mini-probes as in our paper, or extremely large amounts of energy for fast transport (I suspect having a Dyson sphere is a good start).
I have not seen any papers about it, but did look around a bit while writing the paper.
However, a colleague and I analysed laser acceleration and it looks even better, especially since one can use non-rigid lens systems to enable longer boost phases. We developed the idea a fair bit but have not written it up yet.
I would suspect laser is the way to go.
We do not assume mirrors. As you say, there are big limits due to conservation of étendue. We are assuming (if I remember right) photovoltaic conversion into electricity and/or microwave beams received by rectennas. Now, all that conversion back and forth induces losses, but they are not orders of magnitude large.
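To give a sense of scale, with rough efficiencies of my own choosing (illustrative round numbers, not figures from the paper):

```python
# Illustrative efficiency chain: sunlight -> PV electricity -> microwave beam -> rectenna DC.
# All three efficiencies are assumed round numbers, not figures from the paper.
pv = 0.25               # photovoltaic conversion
dc_to_microwave = 0.80  # microwave generation
rectenna = 0.85         # rectenna reconversion to electricity

beaming = dc_to_microwave * rectenna
print(f"loss from the beaming steps alone: factor {1 / beaming:.1f}")  # ~1.5x
print(f"end-to-end fraction of raw sunlight: {pv * beaming:.2f}")      # ~0.17
```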
In the years since we wrote that paper I have become much more fond of solar thermal conversion (using the whole spectrum rather than just part of it), and of lightweight statite-style foil Dyson swarms rather than heavier collectors. The solar thermal conversion does not change things much (but allows a more clean-cut analysis of entropy and efficiency; see Badescu's work). The statite style, however, reduces the material requirements by many orders of magnitude: Mercury is safe, I only need the biggest asteroids.
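A back-of-envelope check of that claim (the 1 g/m² foil figure is my assumption for thin statite foil; the body masses are standard values):

```python
# Mass of a statite-style foil swarm intercepting all sunlight at 1 AU.
# The areal density (1 g per m^2) is an assumed figure for thin foil statites.
import math

AU = 1.496e11                       # metres
areal_density = 1e-3                # kg/m^2, assumed
swarm_area = 4 * math.pi * AU**2    # full sphere at 1 AU

swarm_mass = areal_density * swarm_area
print(f"swarm mass ~ {swarm_mass:.1e} kg")   # ~2.8e20 kg
print("Ceres ~ 9.4e20 kg, Mercury ~ 3.3e23 kg")
```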
Still, detailed modelling of the actual raw-material conversion process would be nice. My main headache is not so much the energy input and waste heat removal (although these are by no means trivial and may slow things down for overly concentrated mining operations - another reason to do it in many places across the asteroid belt), but how to solve the operations management problem of how many units of machine X to build at time t. I would love to do this in more detail!
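To make that problem concrete, here is a toy sketch of how I would formulate it as a linear programme, with just two invented machine types and made-up rates and costs; the real version would have many machine types and nonlinearities:

```python
# Toy sketch of the "how many of machine X to build at time t" problem as a linear
# programme, with two invented machine types: miners (extract mass) and fabricators
# (turn mass into new machines). All rates and costs are made-up numbers.
import numpy as np
from scipy.optimize import linprog

T = 30                  # planning horizon, timesteps
F0, M0 = 1.0, 1.0       # initial fabricators and miners
build_rate = 5.0        # mass one fabricator can process per step
mine_rate = 2.0         # mass one miner extracts per step
c_f, c_m = 10.0, 8.0    # mass cost of a fabricator / a miner

# Decision variables x = [b_f[0..T-1], b_m[0..T-1]]: machines started at each step.
n = 2 * T
A_ub, b_ub = [], []

for t in range(T):
    # Fabrication capacity: mass processed at step t <= build_rate * fabricators on hand.
    row = np.zeros(n)
    row[t] = c_f
    row[T + t] = c_m
    row[:t] -= build_rate                     # fabricators built before t add capacity
    A_ub.append(row)
    b_ub.append(build_rate * F0)

    # Material balance: cumulative mass used for building <= cumulative mass mined.
    row = np.zeros(n)
    row[:t + 1] += c_f                        # all fabricator builds up to t
    row[T:T + t + 1] += c_m                   # all miner builds up to t
    for u in range(t):                        # a miner built at u has mined for (t - u) steps
        row[T + u] -= mine_rate * (t - u)
    A_ub.append(row)
    b_ub.append(mine_rate * M0 * (t + 1))

# Objective: maximise total mass mined over the horizon (linprog minimises, hence the minus).
c = np.zeros(n)
for u in range(T):
    c[T + u] = -(T - 1 - u) * mine_rate

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None), method="highs")
b_f, b_m = res.x[:T], res.x[T:]
print("fabricators started per step:", np.round(b_f, 1))
print("miners started per step:     ", np.round(b_m, 1))
```

I would expect the optimal schedule to come out roughly bang-bang (grow fabrication capacity first, then switch to miners), but the point is just that even the toy version is a real optimisation problem rather than something to eyeball.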