TraditionalRationali comments on Open Thread: May 2010 - Less Wrong

3 Post author: Jack 01 May 2010 05:29AM

Comments (543)

Comment author: AllanCrossman 04 May 2010 09:18:29PM 3 points [-]

Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...

Comment author: TraditionalRationali 16 May 2010 01:48:16AM *  5 points [-]

Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue on Bloggingheads.tv. The title is The Great Singularity Debate.

After Yudkowsky gives three different definitions of "the singularity" at the beginning, they discuss strong artificial intelligence and consciousness. It is Pigliucci who quite quickly moves the discussion from intelligence to consciousness. Just before that they discuss whether simulated intelligence is actually intelligence. Yudkowsky made an argument (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that then it has intelligence. I.e., it is then not "just simulating intelligence" but is actually intelligent. Pigliucci, however, seems to want to distinguish between the two and say "well, it may then just be simulating intelligence, but maybe it is not actually having it". (Too difficult for me to summarize very well; you have to look for yourself if you want it more accurately.)

There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky's definition looked reasonable. It would have been interesting to have that point elaborated in more detail though.

Pigliucci's point seemed to be something like this: for the only intelligence we know of so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. About consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we of course do not know whether or not, e.g., a computer simulating the brain on a different substrate would also be conscious. Yudkowsky seemed to think this very likely while Pigliucci seemed to think it very unlikely. But what I missed in that discussion is: what do we actually know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right that for the only intelligence we know of so far (the human brain), intelligence and consciousness come together. But to me (and I do not know much about this subject matter) that seems a weak argument for tying them so closely together when it comes to artificial intelligence. Maybe someone here on Less Wrong knows more about whether there is a connection between intelligence and consciousness? To a naive non-expert like me, intelligence seems (rather) easy to test for: just test how good the system is at solving general problems. Whereas to test whether anything has consciousness, I would guess a working theory of consciousness would have to be developed before a test could even be designed.

This was the second recent BHTV dialogue in which Pigliucci discussed singularity/transhumanism-related questions. The previous one I mentioned here. As mentioned there, it seems to have started with a blog post of Pigliucci's in which he criticized transhumanism. I find it interesting that Pigliucci continues his interest in the topic. I personally see it as a very positive establishment of contact between the "traditional rationalist/skeptic/(cis-)humanist" community and the "LessWrong-style rationalist/transhumanist" community. Massimo Pigliucci very much gave the impression of enjoying the discussion with Eliezer Yudkowsky! I am also pleased to have noticed that Pigliucci's blog has recently linked now and then to LessWrong/Eliezer Yudkowsky (mostly to Julia Galef, if I remember correctly; too lazy to locate the exact links right now). I would very much like to see this continue (e.g. Yudkowsky discussing with people like Paul Kurtz, Michael Shermer, Richard Dawkins, Sean Carroll, Steven Weinberg, or Victor Stenger, realizing of course that they are probably too busy for it to happen).

Previous BHTV dialogues with Eliezer Yudkowsky have been noticed here on LessWrong, but not this one (I hope it is not just that I have missed the post). That is why I posted this here; I did not find a perfect place for it, and this was the least bad I noticed. Although my post here is only partly about "Is Eliezer alive and well" (he certainly looked so on BHTV), I hope it is not considered too much off-topic.

Comment author: kodos96 20 May 2010 09:28:30PM *  5 points [-]

I found this diavlog entertaining, but not particularly enlightening: the two of them seemed to mostly just be talking past each other. Pigliucci kept conflating intelligence and consciousness, continually repeating his photosynthesis analogy, which makes sense in the context of consciousness but not intelligence; Eliezer would respond by explaining why it doesn't make sense in the context of intelligence, and then they'd just go in circles. I wish Eliezer had been stricter about forcing him to explicitly differentiate between intelligence and consciousness. Frustrating... but worth watching regardless.

Note that I'm not saying I agree with Pigliucci's photosynthesis analogy even when applied to consciousness, just that it seems at least coherent in that context, unlike in the context of intelligence, where it's just silly. Personally, I don't see any reason for consciousness to be substrate-dependent, but I feel much less confident asserting that it isn't, simply because I don't really know what consciousness is, so making any definitive pronouncement about it seems arrogant.

Comment author: Christian_Szegedy 23 May 2010 08:21:17AM 5 points [-]

That diavlog was a total shocker!

Pigliucci is not a nobody: he is a university professor, has authored several books, and holds three PhDs.

Still, he made an utterly confused impression on me. I don't think people must agree on everything, especially on hard questions like consciousness, but his views were so weak and incoherent that it was just too painful to watch. My head still aches... :(

Comment author: Jack 16 May 2010 01:57:58AM 3 points [-]

SIAI may have built an automaton to keep donors from panicking

Comment author: Zack_M_Davis 20 May 2010 11:15:28PM 2 points [-]

I personally see it as a very positive establishing of contact between "traditional rationalist/skeptic/(cis-)humanist"-community

I'm going to have to remember to use the word cishumanism more often.

Comment author: komponisto 21 May 2010 01:12:35AM 1 point [-]
Comment author: Kevin 16 May 2010 10:58:41AM 0 points [-]

You should post this as a top-level post for +10x karma.

Comment author: PeerInfinity 16 May 2010 02:51:08AM 0 points [-]

random, possibly off-topic question:

Is there an index somewhere of all of Eliezer's appearances on BHTV? Or a search tool on the BHTV site that I can use to find them?

Comment author: ata 16 May 2010 10:32:35AM 2 points [-]
Comment author: PeerInfinity 16 May 2010 04:35:07PM 1 point [-]

Thanks! I had tried using the search tool before, but I guess I hadn't tried searching for "Yudkowsky, Eliezer"

... oh, and it turns out that there was a note right beside the search box saying "NAME FORMAT = last, first". oops...

anyway, now I know, thanks :)

Comment author: John_Maxwell_IV 22 May 2010 07:39:21PM 4 points [-]

In general, Google's site: operator is great for websites with missing or uncooperative search functionality:

site:bloggingheads.tv eliezer

Comment author: NancyLebovitz 16 May 2010 10:27:33AM 0 points [-]

Orange button called "search" in the upper right hand corner.