Previously: round 1, round 2, round 3
From the original thread:
This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high baseline of knowledge about science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask whether it's relevant.
Ask away!
I'm a bit late on this, obviously, but I've had a question that I've always felt was a bit too nonsensical to bring up (and it's no doubt addressed somewhere in the sequences that I haven't found), but it kinda bugs me.
Do we have any ideas/guesses/starting points about whether "self-awareness" is some kind of weird quirk of our biology and evolution, or whether it would be an inevitable consequence of any general AI?
I realize that's not a super clear definition. I guess I'm talking about that feeling that "existing is going on here," the one that can't be taken away: even if it turned out that all the evidence I thought I was getting was really just artificial stimulation of a culture of neurons, even if I'm just a whole brain emulation on some computer, even if I'm really locked up in a psych ward somewhere on antipsychotics. My first-hand experience of existing is irrefutable evidence for existence, even if I'm completely wrong about everything besides that, right?
Since I assume that basically everything about me has a physical correlate, there's presumably some section of my brain responsible for processing that. I imagine it would be useful to have awareness of myself in order to simulate future situations; building models in our heads is something human brains seem quite good at. So could an AI be built without that? Obviously it would have access to its own source code and such... but do we have any information on whether self-awareness/a sense of self is just a trick our brains play on us and an accident of evolution, or whether it would be a basic feature of basically any general AI?
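To be concrete about the functional part, here's a toy Python sketch of what I mean by "a self-model used to simulate future situations" (everything in it - the GridAgent class, the simulate and plan methods - is invented purely for illustration, not a claim about how any real system works):

```python
# A toy illustration: an agent whose world model includes a stand-in for
# the agent itself, which it consults to roll out imagined futures
# before acting. Nothing here obviously requires first-person experience.

class GridAgent:
    def __init__(self, position, goal):
        self.position = position  # the agent's model of *itself* in the world
        self.goal = goal

    def simulate(self, position, action):
        """Predict where *I* would end up if I took this action."""
        dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[action]
        return (position[0] + dx, position[1] + dy)

    def plan(self):
        """Pick the action whose imagined outcome lands closest to the goal."""
        def distance(p):
            return abs(p[0] - self.goal[0]) + abs(p[1] - self.goal[1])
        return min("NSEW", key=lambda a: distance(self.simulate(self.position, a)))


agent = GridAgent(position=(0, 0), goal=(3, 0))
print(agent.plan())  # -> "E": it "imagines itself" one step east, nearer the goal
```

That kind of self-model seems mechanically cheap to get; what I can't tell is whether the felt "existing is going on here" part comes along with it.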
Sorry if this question doesn't really make sense!
The question makes sense, but the answers probably won't.
Questions like this are usually approached in an upside-down way. People assume, as you are doing, that reality is "just neurons" or "just atoms" or "just information", then they imagine that what they are experiencing is somehow "just that", and then they try to live with that belief. They will even construct odd ways of speaking, in which elements of the supposed "objective reality" are substituted for subjective or mentalistic terms, in order to a...