I got the impression Eliezer's claiming that a dangerous superintelligence is merely sufficient for nanotech.
How would you save us with nanotech? It had better be good given all the hardware progress you just caused!
A genuine congratulations for learning the rare skill of spotting and writing valid proofs.
Graham’s Number I see as ridiculous: apparently one of the answers to his original problem could be as low as a single-digit number, so why have power towers on power towers?
Graham's number is an upper bound on the exact solution to a Ramsey-type problem. Ramsey numbers and related generalizations are notorious for being very easy to define and yet very expensive to compute with brute-force search, and many of the most significant results in Ramsey theory are proof...
Now that we clarified up-thread that Eliezer's position is not that there was a giant algorithmic innovation in between chimps and humans, but rather that there was some innovation in between dinosaurs and some primate or bird that allowed the primate/bird lines to scale better
Where was this clarified...? My Eliezer-model says "There were in fact innovations that arose in the primate and bird lines which allowed the primate and bird lines to scale better, but the primate line still didn't scale that well, so we should expect to discover algorithmic i...
If information is 'transmitted' by modified environments and conspecifics biasing individual search, marginal fitness returns on individual learning ability increase, while from the outside it looks just like 'cultural evolution.'
If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get a little over twelve days. This seems to at least not directly contradict my model of Eliezer saying "Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly a...
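For concreteness, the back-of-the-envelope calculation above works out like this (using the same three figures from the comment):

```python
# Scaling the erectus-to-human gap down to the AI timeline.
# Inputs are the figures given above: 2 million years since Homo erectus,
# 3.77 billion years since the origin of life, 65 years since the founding of AI.
years_since_erectus = 2e6
years_since_life = 3.77e9
years_since_ai_founding = 65

fraction_of_history = years_since_erectus / years_since_life
days = fraction_of_history * years_since_ai_founding * 365
print(round(days, 1))  # ~12.6 days
```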
That was a pretty good Eliezer model; for a second I was trying to remember if and where I'd said that.
...I guess even though I don't disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don't see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age, mortality among other things forces us to specialize our labor, we have to sleep, we lack serial depth, we don't even approach the physical limits on speed, we can't run multiple instances of our own
I have an alternative hypothesis about how consciousness evolved. I'm not especially confident in it.
In my view, a large part of the cognitive demands on hominins consists of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is “How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?” I think the core cognitive traits in question originally evolved to model the internal state of conspecifi...
Your argument has a Holocene, sedentary, urban flavor, but I think it applies just as well to Pleistocene, nomadic cultures; I think of it as an argument about population size and 'cognitive capital' as such, not only about infrastructure or even technology. Although my confidence is tempered by mutually compatible explanations and taphonomic bias, my current models of behavioral modernity and Neanderthal extinction essentially rely on a demographic argument like the one made here. I don't think this comment would be as compelling without a reminder that a...
For those wondering about the literature, although Kahneman and Tversky coined no term for it, Kahneman & Tversky (1981) describes counterfactual-closeness and some of its affective consequences. This paper appears to be the origin of the missed flight example. Roese (1997) is a good early review on counterfactual thinking with a section on contrast effects, of which closeness effects are arguably an instance.
Succubi/incubi and the alien abduction phenomenon point to hypnagogia, and evo-psych explanations of anthropomorphic cognition are often accompanied by arguments that anthropomorphism produces good-enough decisions while being technically false; there's an old comment by JenniferRM talking about how surprisingly useful albeit wrong it would be to model pathogens as evil spirits.
An attempt at problem #1; seems like there must be a shorter proof.
The proof idea is "If I flip a light switch an even number of times, then it must be in the same state that I found it in when I'm finished switching."
Theorem. Let $P$ be a path graph on $n$ vertices $v_1, \dots, v_n$ with a vertex coloring $c$ such that $c(v_1) \neq c(v_n)$. Let $B$ be the set of bichromatic edges. Then $|B|$ is odd.
Proof. By the definition of a path graph, there exists a sequence $v_1, \dots, v_n$ indexing $V$. An edge $\{v_i, v_{i+1}\}$ is bichromatic iff $c(v_i) \neq c(v_{i+1})$. A...
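As a sanity check on the claim (my own sketch, not part of the proof), small cases can be verified exhaustively: for every 2-coloring of a path whose endpoint colors differ, the number of bichromatic edges comes out odd.

```python
from itertools import product

def bichromatic_edges(coloring):
    """Count adjacent pairs along the path whose colors differ."""
    return sum(coloring[i] != coloring[i + 1] for i in range(len(coloring) - 1))

# Check all 2-colorings of paths on 2..9 vertices with differing endpoints.
ok = all(
    bichromatic_edges(c) % 2 == 1
    for n in range(2, 10)
    for c in product([0, 1], repeat=n)
    if c[0] != c[-1]
)
print(ok)  # True
```

This is just the light-switch intuition mechanized: each bichromatic edge is a flip, and an even number of flips returns you to the starting color.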
I see that New Zealand is also a major wood exporter. In case of an energy crisis, wood gas could serve as a renewable alternative to other energy sources. Wood gas can be used to power unmodified cars and generators. Empirically this worked during the Second World War and works today in North Korea. Also, FEMA once released some plans for building an emergency wood gasifier.
The Lahav and Mioduser link in section 14 is broken for me. Maybe it's just paywalled?
Just taking the question at face value, I would like to choose to lift weights for policy selection reasons. If I eat chocolate, the non-Boltzmann brain versions will eat it too, and I personally care a lot more about non-Boltzmann brain versions of me. Not sure how to square that mathematically with infinite versions of me existing and all, but I was already confused about that.
The theme here seems similar to Stuart's past writing claiming that a lot of anthropic problems implicitly turn on preference. Seems like the answer to your decision problem easily depends on how much you care about Boltzmann brain versions of yourself.
The closest thing to this I've seen in the literature is processing fluency, but to my knowledge that research doesn't really address the willpower depletion-like features that you've highlighted here.
It's also a useful analogy for aspects of group epistemics, like avoiding double counting as messages pass through the social network.
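A toy numerical illustration of that failure mode (my own sketch, not from the linked post): in odds form, treating one piece of evidence as two independent pieces applies its likelihood ratio twice and inflates the posterior.

```python
# Double-counting evidence in odds form.
prior_odds = 1.0        # 1:1 prior
likelihood_ratio = 4.0  # one observation favoring the hypothesis 4:1

counted_once = prior_odds * likelihood_ratio
counted_twice = counted_once * likelihood_ratio  # same evidence, heard again

def to_prob(odds):
    """Convert odds to probability."""
    return odds / (1 + odds)

print(round(to_prob(counted_once), 2))   # 0.8
print(round(to_prob(counted_twice), 2))  # 0.94
```

The second update is only legitimate if the second report is independent of the first; a message relayed through two friends is still one observation.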
Fake Causality contains an intuitive explanation of double-counting of evidence.
Re: proof calibration; there are a couple textbooks on proofwriting. I personally used Velleman's How to Prove It, but another option is Hammack's Book of Proof, which I haven't read but appears to cover the same material at approximately equal length. For comparison, Halmos introduces first-order logic on pages 6 and 7 of Naive Set Theory, whereas Velleman spends about 60 pages on the same material.
It doesn't fit my model of how mathematics works technically or socially that you can really get very confident but wrong about your math k...
Re: category-theory-first approaches; I find that most people think this is a bad idea, since most people need to see concrete examples before category theory clicks for them; otherwise it's too general. But a few people feel differently and have published introductory textbooks on category theory that assume less background knowledge than the standard textbooks. If you're interested, you could try Awodey's Category Theory (free) or Lawvere's Conceptual Mathematics. After getting some more basics under your belt, you could give either...
I'd be very interested in reading about EverQuest as an exemplar of Fun Theory, if you're willing to share.
I think proponents of the instrumental convergence thesis would expect a consequentialist chess program to exhibit instrumental convergence in the domain of chess. So if there were some (chess-related) subplan that was useful in lots of other (chess-related) plans, we would see the program execute that subplan a lot. The important difference would be that the chess program uses an ontology of chess while unsafe programs use an ontology of nature.
It seems like a good idea to collect self-reports about why LessWrongers didn't invest in Bitcoin. For my own part, off the top of my head I would cite:
So something like We Agree: Get Froze, might...
I didn't invest in Bitcoin because I don't invest in things that I don't understand well enough to be confident that the Efficient Market Hypothesis doesn't apply. I continue to believe this is a rational choice-- okay, sure, this one time I might have made a lot of money, but most of the time I would waste a bunch of money/time/other resources. And no one writes blog posts about how they could have lost a lot of money but didn't, so the availability heuristic is going to overweight successes.
Other things equal, choose the reversible alternative.
In particular, thank you for pointing out that in social experiments, phenomenal difficulty is not much Bayesian evidence for ultimate failure.
I just want to register that there are things I feel like I couldn't have realistically learned about my partner in an amount of time short enough to precede the event horizon, and that my partner and I have changed each other in ways past the horizon that could be due to limerence, but could also just be good old-fashioned positive personal change for reasonable reasons. I suspect the takeaway here is not necessarily to attempt to make final romantic decisions before reaching the horizon, but rather that you will only be fully informed past the horizon, and that is just the hand that Nature has dealt you. I don't necessarily think you disagree with that, but wanted to write it in case you did.