All of duryt's Comments + Replies

duryt

To push back on this, I'm not sure that humanness is a "bug," as you say. While we likely aren't a pinnacle of intelligence in any fundamental sense, I do think that as humans have continued to advance, first through natural selection and now through... whatever it is we do with culture and education and science, the parts of humanness that we care about have tended to increase in us rather than fall away. So perhaps an AI optimized far beyond us, but starting in the same general neighborhood of the function space, would become not just superintelligent but superhuman, in the sense that it would embody the things we care about better than we do!

duryt

Hi, I'm new here so I bet I'm missing some important context. I listen to Lex's podcast and have only engaged with a small portion of Yud's work. But I wanted to make some comments on the analogy of a fast human in a box vs. the alien species. Yud said he's been workshopping this analogy for a while, so I thought I would leave a comment on what I think the analogy is still missing for me. In short, I think the human-in-a-box-in-an-alien-world analogy smuggles in an assumption of alienness and I'd like to make this assumption more explicit.

Before I delve in...

ceba
In general, those methods find local extrema. They don't tell you how many there are, or where the next-closest one is once you've found one of them. A loss landscape might have several local minima, and which one you find depends on where you start. Why shouldn't there be different minds that sit at comparable minimum values, but are not very close to each other on the loss landscape?
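A minimal sketch of that last point, using a made-up one-dimensional loss (the function, step size, and starting points below are illustrative assumptions, not anything from the thread): plain gradient descent lands in different local minima depending on where it is initialized, and those minima can have comparable but not identical loss values.

```python
# Illustrative toy example: gradient descent on a 1D loss with two local minima.
# The quartic below is an arbitrary choice; its minima sit near x = -1.04 and x = 0.96.

def loss(x):
    return x**4 - 2 * x**2 + 0.3 * x

def grad(x):
    return 4 * x**3 - 4 * x + 0.3

def descend(x, lr=0.01, steps=5000):
    # Basic gradient descent from a given starting point.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for start in (-2.0, -0.5, 0.5, 2.0):
    x_min = descend(start)
    print(f"start {start:+.1f} -> ends near x = {x_min:+.3f}, loss = {loss(x_min):.3f}")
```

Starts to the left of the central bump end up in one minimum and starts to the right end up in the other; nothing in the procedure itself tells you the other minimum exists.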
Metal
Also new here. One thing I did not understand about the "intelligence in a box created by less intelligent beings" analogy is why the 'intelligence in a box' would be impatient with the pace of the lesser beings. It would seem that impatience/urgency is related to the time-finiteness of the intelligence. As code with no apparent finiteness of existence, why does it care how fast things move?
AnthonyC
I think one of the key points here is that most possible minds/intelligences are alien, outside the human distribution. See https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general for part of EY's (15-year-old) discussion of this on LW. Humans were produced by a specific historical evolutionary process, constrained by the amount of selection pressure applied to our genes and by the need for humans to be similar enough to each other to form a single species in each generation, among other things. AI is not that; it will be designed and trained under very different processes, even if we don't know what all of those processes will end up being. This doesn't mean an AI made by humans will be anything like a random selection from the set of all possible minds, but in any case the alignment problem is largely that we don't know how to reliably steer what kind of alien mind we get in desired directions.
Boris Kashirin
Trying to channel my internal Eliezer: it is painfully obvious that we are not the pinnacle of efficient intelligence. If evolution were to run more optimisation on us, we would become more efficient... and lose the important parts, the ones that matter to us but are of no consequence to evolution. So yes, we would end up being the same alien sort of thing as an AI. The thing that makes us us is a bug. So you have to hope gradient descent makes exactly the same mistake evolution did, but there are a lot of possible mistakes.