All of Gram Stone's Comments + Replies

I just want to register that there are things I couldn't realistically have learned about my partner in an amount of time short enough to precede the event horizon, and that my partner and I have changed each other in ways past the horizon that could be due to limerence, but could also just be good old-fashioned positive personal change for reasonable reasons. I suspect the takeaway here is not necessarily to attempt to make final romantic decisions before reaching the horizon, but that you will only be fully informed past the horizon, and that is just the hand that Nature has dealt you. I don't necessarily think you disagree with that, but I wanted to write it in case you did.

2Raemon
I mostly had been thinking about stuff that's more like "metadata about your situation, and very obvious things about your partner." I'm not suggesting hold off on every relationship until you've, like, had time to vet it thoroughly with your System 2 brain.
Gram StoneΩ010

I got the impression Eliezer's claiming that a dangerous superintelligence is merely sufficient for nanotech.

How would you save us with nanotech? It had better be good given all the hardware progress you just caused!

4Rob Bensinger
No, I'm pretty confident Eliezer thinks AGI is both necessary and sufficient for nanotech. (Realistically/probabilistically speaking, given plausible levels of future investment into each tech. Obviously it's not logically necessary or sufficient.) Cf. my summary of Nate's view in Nate's reply to Joe Carlsmith. (I read "sphexish" here as a special case of "narrow AI" / "shallow cognition": doing more things as a matter of pre-programmed reflex rather than as a matter of strategic choice.)

A genuine congratulations for learning the rare skill of spotting and writing valid proofs.

Graham's Number I see as ridiculous; apparently one of the answers to his original problem could be as low as a single-digit number, so why have power towers on power towers?

Graham's number is an upper bound on the exact solution to a Ramsey-type problem. Ramsey numbers and related generalizations are notorious for being very easy to define and yet very expensive to compute with brute-force search, and many of the most significant results in Ramsey theory are proof... (read more)
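To make that cost concrete, here is a minimal brute-force sketch (my own illustration, not from the original comment; the function names are mine) verifying the classic result R(3,3) = 6: some 2-coloring of K_5's edges avoids a monochromatic triangle, but no 2-coloring of K_6's edges does.

```python
from itertools import combinations, product

def has_mono_clique(n, coloring, k):
    """Return True if some k-vertex subset of K_n is monochromatic under the edge coloring."""
    for subset in combinations(range(n), k):
        colors = {coloring[frozenset(e)] for e in combinations(subset, 2)}
        if len(colors) == 1:
            return True
    return False

def every_coloring_forces_clique(n, k):
    """Brute-force: does every 2-coloring of K_n's edges contain a monochromatic K_k?"""
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    for bits in product((0, 1), repeat=len(edges)):
        if not has_mono_clique(n, dict(zip(edges, bits)), k):
            return False  # found a coloring with no monochromatic K_k
    return True

print(every_coloring_forces_clique(5, 3))  # False: K_5 can avoid a monochromatic triangle
print(every_coloring_forces_clique(6, 3))  # True: together these give R(3,3) = 6
```

The search over K_n examines 2^(n(n-1)/2) colorings: about 3 x 10^4 for n = 6, but about 10^46 for n = 18 (the first n relevant to R(4,4)), which is why brute force dies so quickly.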

Now that we clarified up-thread that Eliezer's position is not that there was a giant algorithmic innovation in between chimps and humans, but rather that there was some innovation in between dinosaurs and some primate or bird that allowed the primate/bird lines to scale better


Where was this clarified...? My Eliezer-model says "There were in fact innovations that arose in the primate and bird lines which allowed the primate and bird lines to scale better, but the primate line still didn't scale that well, so we should expect to discover algorithmic i... (read more)

If information is 'transmitted' by modified environments and conspecifics biasing individual search, marginal fitness returns on individual learning ability increase, while from the outside it looks just like 'cultural evolution.'

If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get about twelve and a half days. This seems to at least not directly contradict my model of Eliezer saying "Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly a... (read more)
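The arithmetic, spelled out (a minimal sketch; the variable names are mine):

```python
erectus_years = 2_000_000       # years since Homo erectus emerged
life_years    = 3_770_000_000   # years since the origin of life
ai_years      = 65              # years since the founding of AI as a field

scaled = erectus_years / life_years * ai_years  # ~0.0345 years
print(scaled * 365.25)                          # ~12.6 days
```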

6Matthew Barnett
FWIW, when I use the word "discontinuous" in these contexts, I'm almost always referring to the definition Katja Grace uses, which is quite different from the mathematical definition of continuity.

That was a pretty good Eliezer model; for a second I was trying to remember if and where I'd said that.

Robbo100

I guess even though I don't disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don't see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age; mortality, among other things, forces us to specialize our labor; we have to sleep; we lack serial depth; we don't even approach the physical limits on speed; we can't run multiple instances of our own

... (read more)

I have an alternative hypothesis about how consciousness evolved. I'm not especially confident in it.

In my view, a large part of the cognitive demands on hominins consists of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is “How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?” I think the core cognitive traits in question originally evolved to model the internal state of conspecifi... (read more)

Your argument has a Holocene, sedentary, urban flavor, but I think it applies just as well to Pleistocene, nomadic cultures; I think of it as an argument about population size and 'cognitive capital' as such, not only about infrastructure or even technology. Although my confidence is tempered by mutually compatible explanations and taphonomic bias, my current models of behavioral modernity and Neanderthal extinction essentially rely on a demographic argument like the one made here. I don't think this comment would be as compelling without a reminder that a... (read more)

2abramdemski
Fascinating!
1Algernoq
Yeah. I hope Youtube knows what it's doing.

For those wondering about the literature: although Kahneman and Tversky coined no term for it, Kahneman & Tversky (1981) describes counterfactual closeness and some of its affective consequences. This paper appears to be the origin of the missed-flight example. Roese (1997) is a good early review on counterfactual thinking, with a section on contrast effects, of which closeness effects are arguably an instance.

5Ruby
Thanks for surfacing these! I've now edited the post to mention these sources and your comment.

Succubi/incubi and the alien abduction phenomenon point to hypnagogia, and evo-psych explanations of anthropomorphic cognition often come packaged with arguments that anthropomorphism produces good-enough decisions while being technically completely false; there's an old comment by JenniferRM about how surprisingly useful, albeit wrong, it would be to model pathogens as evil spirits.

Gram StoneΩ2110

An attempt at problem #1; seems like there must be a shorter proof.

The proof idea is "If I flip a light switch an even number of times, then it must be in the same state that I found it in when I'm finished switching."

Theorem. Let $P$ be a path graph on $n$ vertices with a vertex coloring $c : V(P) \to \{1, 2\}$ such that if $u$ and $v$ are the endpoints of $P$, then $c(u) \neq c(v)$. Let $B$ be the set of bichromatic edges of $P$. Then $|B|$ is odd.

Proof. By the definition of a path graph, there exists a sequence $v_1, v_2, \ldots, v_n$ indexing $V(P)$. An edge $\{v_i, v_{i+1}\}$ is bichromatic iff $c(v_i) \neq c(v_{i+1})$. A... (read more)
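A brute-force check of the parity claim (my own sketch, not part of the proof above): for every 2-coloring of a small path whose endpoints differ in color, the number of bichromatic edges is odd.

```python
from itertools import product

def bichromatic_edges(coloring):
    """Count edges of the path whose two endpoints get different colors."""
    return sum(a != b for a, b in zip(coloring, coloring[1:]))

# Exhaustively check all 2-colorings of paths on up to 11 vertices.
for n in range(2, 12):
    for coloring in product((0, 1), repeat=n):
        if coloring[0] != coloring[-1]:  # endpoints differ
            assert bichromatic_edges(coloring) % 2 == 1
print("parity claim holds for all paths checked")
```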

7Gurkenglas

I see that New Zealand is also a major wood exporter. In case of an energy crisis, wood gas could serve as a renewable alternative to other energy sources. Wood gas can be used to power unmodified cars and generators. Empirically this worked during the Second World War and works today in North Korea. Also, FEMA once released some plans for building an emergency wood gasifier.

The Lahav and Mioduser link in section 14 is broken for me. Maybe it's just paywalled?

1TurnTrout
Sorry about that - should be fixed now?

Just taking the question at face value, I would like to choose to lift weights for policy selection reasons. If I eat chocolate, the non-Boltzmann brain versions will eat it too, and I personally care a lot more about non-Boltzmann brain versions of me. Not sure how to square that mathematically with infinite versions of me existing and all, but I was already confused about that.

The theme here seems similar to Stuart's past writing claiming that a lot of anthropic problems implicitly turn on preference. Seems like the answer to your decision problem easily depends on how much you care about Boltzmann brain versions of yourself.

The closest thing to this I've seen in the literature is processing fluency, but to my knowledge that research doesn't really address the willpower depletion-like features that you've highlighted here.

It's also a useful analogy for aspects of group epistemics, like avoiding double counting as messages pass through the social network.

Fake Causality contains an intuitive explanation of double-counting of evidence.
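As a toy illustration of double-counting (my own sketch; the numbers are made up): in odds form, a Bayesian update multiplies prior odds by the likelihood ratio of each independent piece of evidence, so feeding the same evidence in twice, say once directly and once via a friend who saw the same thing, squares its effect.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes in odds form: multiply prior odds by each *independent* likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0        # 1:1 prior odds on hypothesis H
lr = 4.0           # one observation favoring H at 4:1

print(posterior_odds(prior, [lr]))       # 4.0  -> P(H) = 0.80 (evidence counted once)
print(posterior_odds(prior, [lr, lr]))   # 16.0 -> P(H) ~ 0.94 (same evidence double-counted)
```

Pearl's belief propagation avoids exactly this: the message a node sends back to a neighbor excludes the information it received from that neighbor.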

7abramdemski
Yeah, and it uses the same analogy for understanding belief propagation as Pearl himself uses, and a reference to Pearl, and a bit more discussion of Bayes nets as a good way to understand things. But, I think, a lot of people didn't derive the directive "Learn Bayes nets!" from that example of insight derived from Bayes nets (and would benefit from going and doing that). I do think there are some other intuitions lurking in Bayes net algorithms which could benefit from a similar write-up to Fake Causality, but which went "all the way" in terms of describing Bayes nets, rather than partially summarizing.

Re: proof calibration; there are a couple textbooks on proofwriting. I personally used Velleman's How to Prove It, but another option is Hammack's Book of Proof, which I haven't read but appears to cover the same material at approximately equal length. For comparison, Halmos introduces first-order logic on pages 6 and 7 of Naive Set Theory, whereas Velleman spends about 60 pages on the same material.

It doesn't fit my model of how mathematics works technically or socially that you can really get very confident but wrong about your math k... (read more)

7Qiaochu_Yuan
Slight nitpick: it means "prove that this set is a subset of every set with this property, and has this property." This sort of thing is terrible; I learned most of it from the internet (MathOverflow, Wikipedia, the nLab, blogs), for what it's worth.
2TurnTrout
Thanks, that’s very helpful! I appreciate the offer; let me see how I feel after the next book.

Re: category-theory-first approaches; I find that most people think this is a bad idea because most people need to see concrete examples before category theory clicks for them, otherwise it's too general, but a few people feel differently and have published introductory textbooks on category theory that assume less background knowledge than the standard textbooks. If you're interested, you could try Awodey's Category Theory (free), or Lawvere's Conceptual Mathematics. After getting some more basics under your belt, you could give either... (read more)

I'd be very interested in reading about EverQuest as an exemplar of Fun Theory, if you're willing to share.

5moridinamael
Sure. I've cut out my lengthy and indulgent love letter to EQ and put it here: https://www.evernote.com/shard/s79/sh/c867cd85-1d03-467a-a8b2-cab8d6fc61b8/4185c7120d9365f453d042f16949bc96 edit: If anybody tells me they enjoyed it, I'll probably just post it as a "blog" post here.

I think proponents of the instrumental convergence thesis would expect a consequentialist chess program to exhibit instrumental convergence in the domain of chess. So if there were some (chess-related) subplan that was useful in lots of other (chess-related) plans, we would see the program execute that subplan a lot. The important difference would be that the chess program uses an ontology of chess while unsafe programs use an ontology of nature.

2zulupineapple
First, Nick Bostrom has an example where a Riemann-hypothesis-solving machine converts the Earth into computronium. I imagine he'd predict the same for a chess program, regardless of what ontology it uses. Second, if instrumental convergence were that easy to solve (the convergence in the domain of chess is harmless), it wouldn't really be an interesting problem.

It seems like a good idea to collect self-reports about why LessWrongers didn't invest in Bitcoin. For my own part, off the top of my head I would cite:

  • less visible endorsement of the benefits than e.g. cryonics;
  • vaguely sensing that cryptocurrency is controversial and might make my taxes confusing to file, etc.;
  • and reflexively writing off articles that give me investment advice because most investment advice is bad and I generally assume I lack some minimal amount of wealth needed to exploit the opportunity.

So something like We Agree: Get Froze, might... (read more)

I didn't invest in Bitcoin because I don't invest in things that I don't understand well enough to be confident that the Efficient Market Hypothesis doesn't apply. I continue to believe this is a rational choice-- okay, sure, this one time I might have made a lot of money, but most of the time I would waste a bunch of money/time/other resources. And no one writes blog posts about how they could have lost a lot of money but didn't, so the availability heuristic is going to overweight successes.

5Error
For my part, it was one part trivial inconveniences, one part that it read like woo. I was aware it existed through other avenues (I wasn't a Less Wronger then), and aware of what it was trying to do, and I had the technical acumen to get in on it if I had so chosen. Given that, I'm a little bitter that I didn't do so. I could retire today if I had. I could get into it today, of course, but now that everybody knows it's a magic money-making machine I suspect a bubble is well underway. I don't want to be in when it breaks.

I'm a little worried about Bitcoin's externalities. The mining process consumes more and more energy, and professional miners are driving up hardware costs. Which might be fine if most transactions were, well, transactions, i.e. if we're getting human value out of the work. But I get the impression that the vast majority of the network's effort goes towards playing musical chairs with money, and that seems bad. Bitcoin doesn't feel woo-ish anymore, but it's starting to feel paperclippy instead.

Other things equal, choose the reversible alternative.

In particular, thank you for pointing out that in social experiments, phenomenal difficulty is not much Bayesian evidence for ultimate failure.