Andrew comments on Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds - Less Wrong

Post author: Vladimir_Nesov 16 August 2009 04:06PM




Comment author: cousin_it 16 August 2009 06:02:47PM 8 points

Upvoted, but it wasn't nearly as fascinating as I'd hoped, because it was all on our home turf. Eliezer reiterated familiar OB/LW arguments, Aaronson fought a rearguard action without saying anything game-changing. Supporting link for the first (and most interesting to me) disagreement: Aaronson's "The Singularity Is Far".

Comment author: Andrew 16 August 2009 06:25:19PM 4 points

I agree. I stopped watching about five minutes into it when it became clear that EY and Scott were just going to spend a lot of time going back-and-forth.

Nothing game-changing indeed. Debate someone who substantially disagrees with you, EY.

Comment author: Eliezer_Yudkowsky 16 August 2009 08:46:59PM 5 points

Sorry about that. Our first diavlog was better, IMHO, and included some material about whether rationality benefits a rationalist - but that diavlog was lost due to audio problems. Maybe we should do another for topics that would interest our respective readers. What would you want me to talk about with Scott?

Comment author: eirenicon 16 August 2009 10:11:28PM 10 points

I'd like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott's strengths coincide (and vice versa) and you'll both come out of the debate stronger for it. I wouldn't suggest this to just anyone but I know that (unlike most debaters, unlike most people) you're both eager to admit when you're wrong.

(I dearly love to argue, and I'm probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything and it was a big step for me when I started admitting I was wrong, and even bigger when I started doing it when I knew it, not a half hour and two-thousand words of bullshit later. I was having an argument with my father about astrophysics a couple months ago, and it had gotten quite heated even though I suspected he was right. I hadn't followed up, but the next time I saw him he showed me a couple diagrams he'd worked out. It took me thirty seconds to say, "Wow, I really was totally wrong about that. Well done." He looked at me like a boxer who enters the ring ready for ten rounds and then flattens his opponent while the bell's still ringing. No particular reason for this anecdote, just felt like sharing.)

Comment author: psb 17 August 2009 10:45:04PM 2 points

Ok, that's a weird side-effect of watching the diavlog: now when I read your comments I can hear your voice in my mind.

Comment author: marks 18 August 2009 03:27:55PM 1 point

I would like to see more discussion of the timing of artificial superintelligence (or human-level intelligence). I really want to understand the mechanics of your disagreement.

Comment author: Andrew 17 August 2009 01:43:23AM 1 point

It's okay.

What do you disagree with Scott over? I don't regularly read Shtetl-Optimized, and the only thing I associate with him is a deep belief that P != NP.

I don't really know much about his FAI/AGI leanings. I guess I'll go read his blog a bit.