
Personal Blog

9

Bloggingheads: Robert Wright and Eliezer Yudkowsky

by Liron
7th Aug 2010
1 min read
129

129 comments, sorted by top scoring
[-]Will_Newsome15y260

Maybe it's because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people. It's almost kinda painful to watch, because even though I wish someone would come along and pwn Eliezer in an argument, it never ever happens because everyone is more wrong than him, and I have to sit there and listen to them fail in such predictably irrational ways. Seriously, Eliezer is smart, but there have to be some academics out there that can point to at least one piece of Eliezer's fortress of beliefs and find a potentially weak spot. Right? Do you know how epistemically distressing it is to have learned half the things you know from one person who keeps on getting proven right? That's not supposed to happen! Grarghhhhhh. (Runs off to read the Two Cult Koans.) (Remembers Eliezer wrote those, too.) (God dammit.)

(And as long as I'm being cultish, HOW DARE PEOPLE CALL OUR FEARLESS LEADER 'YUDKOWSKI'?!?!??!? IT COMPLETELY RUINS THE SYMMETRY OF THE ETERNAL DOUBLE 'Y'S! AHHH! But seriously, it kinda annoys me in a way that most trolling doesn't.)

[This comment is no longer endorsed by its author]
[-]sketerpot15y200

It reminds me of when Richard Dawkins was doing a bunch of interviews and discussions to promote his then-latest book The God Delusion. It was kind of irritating to hear the people he was talking with failing again and again in the same predictable ways, raising the same dumb points every time. And you could tell that Dawkins was sick of it, too. The few times when someone said something surprising, something that might force him to change his mind about something (even a minor point), his face lit up and his voice took on an excited tone. And when he was particularly uncertain about something, he said so.

People accused him of being arrogant and unwilling to change his mind; the problem is that the people he was arguing with were just so piteously wrong that of course he's not going to change his mind from talking with them. It's funny, because one of the things I really like about Dawkins is that he's genuinely respectful in discussions with other people. Sometimes barbed, but always fundamentally respectful. When the other person says something, he won't ignore it or talk past them, and he assumes (often wrongly) that whoever he's speaking with is intelligent enough and sane enough to handle a lack of sugarcoating.

And of course, all this led to accusations of cultishness, for exactly the same reasons that are making you uncomfortable.

[-]JenniferRM15y180

Maybe it's because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people.

Start with a bit of LW's own "specialized cult jargon" (I kid, really!)... specifically the idea of inferential distance.

Now imagine formalizing this concept more concretely than you get with story-based hand waving, so that it was more quantitative -- with parametrized shades of grey instead of simply being "relevant" or "not relevant" to a given situation. Perhaps it could work as a quantitative comparison between two people who could potentially Aumann update with each other, so that "ID(Alice,Bob) == 0 bits" when Alice knows everything Bob knows and they already believe exactly the same thing, and can't improve their maps by updating about anything with each other. If it's 1 bit then perhaps a single "yes/no Q&A" will be sufficient to bring them into alignment. Larger and larger values imply that they have more evidence (and/or more surprising evidence) to share.

(A simple real world proxy for ID(P1,P2) might be words re... (read more)
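A toy numerical version of the "ID(Alice,Bob) in bits" idea sketched above (my own illustration, not anything from the comment: the choice of KL divergence as the distance measure, and the belief distributions used, are assumptions):

```python
import math

def inferential_distance(p, q):
    """Toy 'inferential distance' in bits between two belief
    distributions over the same hypotheses: the KL divergence
    D(p || q), i.e. roughly how surprised the second person (q)
    would be, on average, by evidence the first person (p)
    already has."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical beliefs: 0 bits -- nothing to update on.
alice = [0.5, 0.5]
bob = [0.5, 0.5]
print(inferential_distance(alice, bob))  # 0.0

# Alice is nearly certain of hypothesis 0, Bob is 50/50:
# about 1 bit, roughly one yes/no answer's worth of updating.
alice = [0.999, 0.001]
bob = [0.5, 0.5]
print(round(inferential_distance(alice, bob), 2))
```

This is only a sketch of how the "shades of grey" version might be made quantitative; a real formalization would need to handle zero-probability hypotheses and asymmetry more carefully.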

4Will_Newsome15y
Why must you ruin my self-conscious countersignalling with good epistemology?! But seriously... Ack! Jennifer, you're brilliant. I dunno what they put in the water at that CCS place. Would you accept me as your apprentice? I hear tell you have a startup idea. I can't code, but I live very cheaply and can cheerfully do lots of menial tasks and errands of all kind, from in-the-field market research to buying donuts to washing dishes to answering customer questions and everything else. I'm versatile, energetic and a wicked good rationalist. And I feel that working for you even for a short while would significantly build my understanding of social epistemology and epistemology generally, helping me in my quest to Save the World. Doesn't that sound like a totally awesome idea? :D
6JenniferRM15y
Your compliments are appreciated but, I suspect, unwarranted :-P I'm not saying "definitely no" and I think it would be cool to work with you. But also you should probably reconsider the offer because I think the right question (tragically?) is not so much "Can I work with you to somehow learn your wisdom by osmosis?" but "Where are the practice grounds for the insight just displayed?" My working theory of "intellectual efficacy" is that it mostly comes from practice.

Following this theory, if you're simply aiming for educational efficiency of the sort that was applied here, you could do much worse than getting some practice at competitive inter-collegiate policy debate (sometimes called CEDA or NDT depending on the region of the US). I would attribute my insight here not to "something in the water" at the CCS (the College of Creative Studies at UCSB, which for the record, I just hung out at because that's where my friends were), but to experiences before that on a college debate team in a two year program that included a debate tournament approximately every third weekend and about 10 hours per week in a college library doing research in preparation for said tournaments.

Here is a partial list of four year colleges that have policy debate teams. If you were going to go for the best possible debate experience in the U.S. I'd estimate that the best thing to do would be to find a school that was valuable for other reasons and where (1) the head coach's favorite event is CEDA/NDT and (2) the ((debate program budget)/debater) value is high. The funding is important because practical things like a room just for the debate team and travel/food/hotel subsidies are important for filling out a debate team and giving them a sense of community, and the size and quality of the team will be a large source of the value of the experience. You might also try to maximize the "tournaments per team member per year" which might vary from school to school based on the costs of travel given
0Will_Newsome15y
It's less the insight just displayed and more a general tendency to see Pareto improvements in group rationality. But debate's an interesting idea.
[-]Paul Crowley15y140

Bear in mind that, like many good works of pop science, the vast majority of what the Sequences present is other people's ideas; I'm much more confident of the value of those ideas than of the parts that are original to Eliezer.

[-]AndyWood15y110

And who filtered that particular and exceptionally coherent set of "other people's ideas" out of a vastly larger total set of ideas? Who stated them in (for the most part) clear anti-jargon? I would not even go into the neighborhood of being dismissive of such a feat.

Originality is the ultimate strawman.

6Paul Crowley15y
I don't mean to be dismissive at all - leaving aside original content like the FAI problem, the synthesis that the Sequences represent is a major achievement, and one that contributes to making the clarity of writing possible.
5XiXiDu15y
There's not much he could be proven wrong about. What EY mainly accomplished is to put the right pieces, that have already been out there before him, together and create a coherent framework. But since I've only read maybe 5% of LW I might be wrong. Is there something unique that stems from EY? Another problem is that what EY is saying is sufficiently vague so that you cannot argue with it if you do not doubt some fundamental attributes of reality. I'm not trying to discredit EY. I actually don't know of any other person that comes even close to his mesh of beliefs. To the extent that I'm much more relaxed since I know about him, for that if I was going to die there's everything and much more I ever came up with contained inside EY's mind :-) Anyway, I can't help and often muse about the possibility that EY is so much smarter that he actually created the biggest scam ever around the likelihood of uFAI to live off donations by a bunch of nonconformists. - "Let's do what the Raelians do! Let's add some nonsense to this meme!" Of course I'm joking, hail to the king! :-)
5Liron15y
Yeah, huge red flag. I'll also note that reading Eliezer's stuff made me feel like I got to extend my beliefs in the same direction away from mainstream that they were already skewed, which is probably why I was extremely receptive to it. Even though I've learned a lot, I don't get to congratulate myself for a real Mind Change.
4CarlShulman15y
Or this.
1Will_Newsome15y
Thanks! I guess there's a good reason not to have a 'cultishness' tag, but still, it'd be kinda cool...
3timtyler15y
There are not very many seriously written-up position statements from Eliezer. So, it probably doesn't represent a very attractive target for "academics" to attack. There are a couple of papers about the possibility of THE END OF THE WORLD. That is an unconventional academic subject - partly because no instances of this have ever been observed.
[-]timtyler15y250

Rabbits and foxes are used as a stereotypical example of conflict. However, "even" foxes and rabbits actually cooperate with each other - as follows:

A fox slinks about, looking for food. When he spies a rabbit munching in the grass, he begins to creep closer. If the rabbit sees the fox coming, it will stand on its hind legs, observing the fox. The fox now realizes that it's been discovered, and it will turn away from the hunt. The rabbit could run, but that would entail wasteful energy expenditure. So it simply signals the fox. The fox gets the "I see you" signal, and turns away, because it also doesn't want to expend energy on a futile chase. So both animals come out ahead, by the use of a signal. The rabbit's work loop (stay alive) has been completed with minimum energy expended, and the fox's work loop (find food) has been terminated unsuccessfully, but with less energy used than if it had included a fruitless chase.

The rabbit helps the fox save energy, the fox helps the rabbit save energy - it's a deal. They don't want exactly the same thing - but that is true for many traders, and it doesn't prevent cooperative trade arising between them. Nature is full of such cooperation.

4RobinZ15y
Actually, do you have a citation for this datum? Edit: The author has commented downthread.
2timtyler15y
It's an anecdote, which I presented very badly :-( I was actually looking for evidence that white bunny tails signalled to foxes - but people mostly seem to think they signal danger to other rabbits.

Update - abstract of "Do Brown Hares Signal to Foxes?": "Of a total of 32 sedentary brown hares (Lepus europaeus) approached across open ground by foxes (Vulpes vulpes), 31 reacted when the fox was 50 m or less from them by adopting a bipedal stance directly facing the fox. Of five sedentary hares approached by foxes from nearby cover, none stood, three moved away and two adopted the squatting (primed for movement) posture. Hares stood before foxes in all heights of vegetation and on 42% of occasions were solitary. Hares did not stand before approaching dogs (Canis familiaris). The functions of this behaviour are considered and competing hypotheses of Predator Surveillance and Pursuit Deterrence are examined by testing predictions against results obtained. The results suggest that by standing erect brown hares signal to approaching foxes that they have been detected."

* http://onlinelibrary.wiley.com/doi/10.1111/j.1439-0310.1993.tb00544.x/abstract
1Richard_Kennaway15y
GIYF. Author.
2RobinZ15y
That's the same source I found for the quotation when I hit up the search engines, but I was rather hoping for a naturalist of some description to back up the theory. I don't see that you could be confident of that explanation without some amount of field work. Who put in the eye-hours to develop and confirm this hypothesis? Edit: I mean, if Eb the author did, that's fine, but he doesn't even mention growing up in the country.
9EbfromBoston15y
Sorry for not citing my fox/rabbit scenario; I am the author in question... I was basing my tale on observations made by some European ethologist/semiotician. The signals given by animals as they navigate the "umwelt". I read Uexkull, Kalevi Kull, Jesper Hoffmeyer, and Thomas Sebeok, among others. Somewhere was the description in question. The author said that he had something like 10,000 hours of observation. Sorry for not citing my sources. I'll try to be more precise in note-taking. But it was a thrill that someone read my website! http://adaptingsystems.com Eb
2RobinZ15y
Thanks for the quick response! If you can find the citation again among the sources you were reading, I'd appreciate it - perhaps you can add a footnote on the page RichardKennaway links. Welcome to Less Wrong, by the way! I don't know if you read the About page, but if you're interested in rationality, etc., there's a lot of good essays scattered about this blog.
4EbfromBoston15y
Oh, btw, I grew up in the country. Spent several years on the sheep farm. Interestingly, the herd dogs use the same "signal" mechanism to move sheep. Rather than run around and bark, they get in "predator" pose and the sheep move accordingly. Interesting to watch low-power energy, i.e. "signals", accomplish work.
4RobinZ15y
Now this is a completely irrelevant aside, but I remember hearing about a party at a house with three dogs, mostly in one room. A guest left to use the bathroom, and when she came back, she could see that everyone was packed in a neat group in the center of the room with the dogs patrolling and nudging the strays back in. That is a neat story about the dogs using the predator pose. Thanks.
6NancyLebovitz15y
Would you know whether the dogs were border collies? One of my friends had a border collie when she was a kid, and she told me that the dog was only really happy when the whole family was seated around the dining table.
4RobinZ13y
I finally got around to asking - they were indeed border collies. +1 for a correct prediction!
2EbfromBoston15y
Yes, border collies. The good border collies complete the work loop (move sheep) with minimal expenditure of energy. One would merely raise an eyebrow and the sheep got the message, and moved. Very impressive.
1RobinZ15y
May well have been - I got the story secondhand myself, and I have a terrible recall for details.
1SilasBarta15y
That doesn't seem like a stable equilibrium -- too much incentive for the rabbits to be "over-cautious" for foxes at the expense of running ability. If they can figure that "being able to notice and turn toward a fox" is just as good as having the energy to escape a fox, then they'll over-invest in being good at this signal until the foxes realize it's not a reliable signal of a failed hunt.
7timtyler15y
Note that this type of signalling to predators is well established in many other creatures: http://en.wikipedia.org/wiki/Stotting
3Jonathan_Graehl15y
Stotting is awesome. Thanks for that. I'm puzzled at the controversy over the original point, which is so plausible it's hard not to believe.
1SilasBarta15y
Well, let me explain my intuition behind my objection, even if there's a reason why it might be wrong in this case. I am, in general, skeptical of claims about Pareto-improvements between agents with fundamentally opposed goals (as distinguished from merely different goals, some of which are opposed). Each side has a chance to defect from this agreement to take utility from the other.

It's a quite familiar case for two people to recognize that they can submit their disagreement to an arbitrator who will render a verdict and save them the costs of trying to tip the conflict in their favor. But to the extent that one side believes the verdict will favor the other, that side will start to increase the conflict-resolution costs if it will get a better result at the cost of the other. For if a result favors one side, then a fundamentally opposed other side should see that it wants less of this. So any such agreement, like the one between foxes and rabbits, presents an opportunity for one side to abuse the other's concessions to take some of the utility at the cost of total utility.

In this case, since the rabbit is getting the benefit of spending the energy of a full chase without spending that energy, the fox has reason to prevent it from being able to make the conversion. The method I originally gave shows one way. Another way foxes could abuse the strategy is to hunt in packs. Then, when the rabbit spots one of them and plans to run one direction, it will be ill-prepared for if another fox is ready to chase from another direction (optimally, the opposite) -- and gives away its location! (Another fox just has to be ready to spring for any rabbit that stands and looks at something else.)

So even if the "stand and look"/"give up" pattern is observed, I think the situation is more complicated, and there are more factors at play than timtyler listed.
2timtyler15y
Re "pack hunting" - according to this link, the phenomenon happens with foxes - but not dogs: "Hares did not stand before approaching dogs". Perhaps they know that dogs pay no attention - or perhaps an increased chance of pack hunting is involved.
0SilasBarta15y
Okay, thank you, that answers the nagging concern I had about your initial explanation. There are reasons why that equilibrium would be destabilized, but it depends on whether the predator species would find the appropriate (destabilizing) countermeasures, and this doesn't happen with foxes. Confusion extinguished!
2timtyler15y
The basic idea is that both parties have a shared interest in avoiding futile chases - see the stotting phenomenon. Cooperation can arise out of that.
1SilasBarta15y
Yes, I'm familiar with stotting. But keep in mind, that doubles as an advertisement of fitness, figuring into sexual selection and thus providing an additional benefit to gazelles. So it's a case where other factors come into play, which is my point about the rabbit fox example -- that it can't be all that's going on.
0timtyler15y
There's often "other things going on" - but here is a description of the hypothesis: * http://beheco.oxfordjournals.org/cgi/content/full/17/4/547 Another example of signalling from prey to predator is the striped pattern on wasps.
0Jonathan_Graehl15y
The intuition does make sense, but I don't think it serves to refute the proposed co-evolved signal in this case. Perhaps the prey also likes to maintain view of its hunter as it slinks through the brush.
2sark15y
Stotting is costly, hence reliable. 'Noticing and turning to the fox' is not.
8wedrifid15y
Doing things that are costly isn't the only way to reliably signal. In this case the rabbit reliably communicates awareness of the fox's presence. It cannot be faked because the rabbit must look in the right direction. The fact that its prey is not unaware of its presence is always going to be useful to a fox. It will still attempt to chase aware rabbits sometimes, but the exchange of information will help both creatures in their decision making. This is an equilibrium that, all else being equal, will be stable. Speed will still be selected for, for exactly the same reason that it always was.
8Matt_Simpson15y
Just to elaborate (for clarity's sake), by standing up and looking directly at the fox, the rabbit is changing the fox's expected utility calculation. If the rabbit doesn't see the fox, the fox will have the advantage of surprise and be able to close some of the distance between itself and the rabbit before the rabbit begins to run. This makes the chase less costly to the fox. If the rabbit does see the fox, when the fox begins the attack the rabbit will see it and be able to react immediately, neutralizing any surprise advantage the fox has. So if the fox knows that the rabbit knows that the fox is nearby, the fox may well not attack because of the amount of extra energy it would take to capture the rabbit. The rabbit standing up and staring at the fox is an effective signal of awareness of the fox because it is difficult to fake (costliness is only one way that a signal can be difficult to fake). The rabbit can stand up and stare in a random direction if it wants to, but the probability of a rabbit doing that and being able to randomly stare directly at the fox is pretty slim. So if the fox sees the rabbit staring at it, then the fox can be pretty certain that the rabbit knows where the fox is at.
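Matt_Simpson's expected-utility framing can be made concrete with a toy calculation (entirely my own illustration: the meal value, chase cost, and catch probabilities below are invented numbers, not measurements):

```python
def fox_should_chase(rabbit_aware, meal_value=10.0, chase_cost=3.0):
    """Toy decision rule for the fox. If the rabbit is unaware,
    surprise makes a catch fairly likely; if the rabbit is staring
    straight at the fox, the head start makes a catch unlikely.
    All numbers are illustrative assumptions."""
    p_catch = 0.05 if rabbit_aware else 0.5
    expected_utility = p_catch * meal_value - chase_cost
    return expected_utility > 0

print(fox_should_chase(rabbit_aware=False))  # True:  0.5*10 - 3 =  2.0 > 0
print(fox_should_chase(rabbit_aware=True))   # False: 0.05*10 - 3 = -2.5 < 0
```

The point of the sketch is only that the stare flips the sign of the fox's expected utility; the hard-to-fake part (staring in exactly the right direction) is what makes the signal credible.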
0sark15y
Very clear. Thanks.
0sark15y
So 'noticing the fox' signals that the rabbit notices the fox and will run when it sees the fox beginning to chase. The fox uses the signal thus: "If the rabbit notices me it gets a headstart. With such a head start, and the fact that the rabbit runs at a certain minimum speed, I would not be able to catch it". Even though the reliability of the signal is independent of the running, its effectiveness/usefulness depends on the rabbit's speed. Once we have the free riding rabbits placing resources into noticing and away from running, foxes will realize this, and they will chase even when they have been noticed. So now noticing does not prevent the fox from chasing anymore, so there is less pressure on even fast rabbits to signal it. And then the signaling collapses? I admit to being quite confused over this. Waiting for someone to clear it all up!
1wedrifid15y
Placing emphasis on 'noticing vs running' is just confusing you. Noticing helps the rabbit run just as much as it helps it look in the right direction. No. Silas was just wrong. If average rabbit speed becomes slower then there will be a commensurate change in the threshold at which foxes chase rabbits even when they have been spotted. It will remain useful to show the fox that it has been spotted in all cases in which about 200ms of extra head start is worth sacrificing so that a chase may potentially be avoided. If you are still confused, consider a situation in which rabbits and foxes always become aware of each other's presence at a distance of precisely 250m. Would anyone suggest that rabbits would freeload and not bother to be fast themselves in that circumstance? No. In the 'rabbits standing up' situation the rabbits will still want to be fast for precisely the same reason. All standing up does is force the mutually acknowledged awareness.
0sark15y
Sorry I wasn't being clear, previously I had always meant noticing=='showing the fox you have noticed it'. What threshold? I'm guessing other factors such as the fox's independent assessment of the rabbit's speed? I didn't consider the fact that signaling having noticed required that sacrifice. Does it affect the analysis? I don't understand this part.
2wedrifid15y
If the average rabbit becomes slower then the average fox will be more likely to estimate that a given rabbit chase is successful. Not particularly. We haven't been quantising anyway and it is reasonable to consider the overhead here negligible for our purposes. You don't particularly need to. Just observe that rabbits running fast to avoid foxes is a stable equilibrium. Further understand that nothing in this scenario changes the fact that running fast is a stable equilibrium. The whole 'signalling makes the equilibrium unstable' idea is a total red herring, a recipe for confusion.
0NancyLebovitz15y
Hypothesis: most rabbits which are in good enough shape to notice are also in good enough shape to escape. There simply aren't enough old? sick? rabbits to freeload to make the system break down. Anyone know whether inexperienced foxes chase noticing rabbits? If so, this make freeloading a risky enough strategy that it wouldn't be commonly used.
1sark15y
I expected the devil would be in the details! But yeah, your hypothesis sounds plausible, and freeloading seems risky.
0wedrifid15y
That correlation can not (and need not) be counted on to make the equilibrium stable over a large number of generations.
7timtyler15y
Standing on your hind legs - which is the behaviour under discussion - is costly to rabbits - since it increases the chance of being observed by predators - so they can't do it all the time. However, that is not really the point. The signal is not: "look how fast I can run" - it is "look how much of a head start my family and I have - given that I can see you now".
2sark15y
Not all the time of course. I was referring to SilasBarta's observation that this might not be a stable equilibrium. Because noticing the fox and turning to it is much cheaper than being able to run fast enough such that the fox will not catch you once you notice it. A good noticer but bad runner can take advantage of the good noticer/good runner's signal and free ride off it. The fox wouldn't care if you were a good noticer if you weren't also a good runner, since it can still catch you once you have noticed it.
5timtyler15y
Maybe. Rabbits go to ground. Escape is not too tricky if they have time to reach their burrow. Running speed is probably a relatively small factor compared to how far away the fox is when the rabbit sees it.
0sark15y
Yeah, running speed may not be such an important factor.
1wedrifid15y
Not only that, you can only look in one direction at a time. You do need to know where the fox is. The rabbit only loses a couple of hundred milliseconds if the fox decides to make a dash for it anyway.
4AlephNeil15y
You've described how a scenario in which the rabbits use this behaviour might be unstable, but so what? For any kind of behaviour whatsoever one can dream up a scenario where animals over or underuse it and subsequently have to change. Prima facie there's nothing at all implausible about a stable situation where rabbits use that behaviour some of the time and it makes the foxes give up some of the time.
0SilasBarta15y
Elaboration of my skepticism on this claim here.
[-]Paul Crowley15y90

Is there software that would objectively measure who spoke most and who interrupted who most? If so, Bloggingheads should run such software as a matter of course and display the results alongside each conversation.

EDIT: it should also measure how often each participant allows the other to interrupt, versus simply raising their voice and ploughing on.
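The measurement Paul Crowley proposes could be sketched as follows, assuming Bloggingheads' two per-microphone streams have already been run through voice-activity detection so each participant reduces to one boolean per audio frame (the function and variable names here are mine, purely illustrative):

```python
def conversation_stats(a_speaking, b_speaking):
    """a_speaking / b_speaking: lists of booleans, one per audio
    frame, True when that participant's microphone carries speech.
    Returns talk time per speaker plus interruption counts, where
    an 'interruption' is starting to speak while the other person
    is already speaking -- the simple definition suggested below
    in this thread."""
    stats = {"a_frames": sum(a_speaking), "b_frames": sum(b_speaking),
             "a_interrupts": 0, "b_interrupts": 0}
    prev_a = prev_b = False
    for a, b in zip(a_speaking, b_speaking):
        if a and not prev_a and b:   # A starts while B is speaking
            stats["a_interrupts"] += 1
        if b and not prev_b and a:   # B starts while A is speaking
            stats["b_interrupts"] += 1
        prev_a, prev_b = a, b
    return stats

# B speaks throughout; A butts in twice.
a = [False, True, True, False, False, True, False]
b = [True] * 7
print(conversation_stats(a, b))
```

Distinguishing "allowed the interruption" from "raised their voice and ploughed on" would additionally need per-frame loudness, but the same frame-by-frame scan would work.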

4NancyLebovitz15y
I'm willing to bet that such software would be very hard to develop. Requiring 5 second pauses after speaking to allow for thought would be an interesting experiment.
3Paul Crowley15y
Could you say more about the difficulties you foresee? I'm guessing that Bloggingheads would have the two separate streams of audio from each microphone, which should make it somewhat easier, but even without that figuring out which speaker is which doesn't seem beyond the realms of what audio processing might be able to do.
1NancyLebovitz15y
I may have been overpessimistic. I didn't think about the separate feeds, and you're right about that making things easier. There might be questions about who has the "right" to be speaking at a given moment-- that would define what constitutes an interruption.
4Paul Crowley15y
Need it be more complex than: person A begins to speak while person B is still speaking? It might get a few false positives, but it should be a useful metric overall.
0timtyler15y
I think people just use standard video-editing software to combine the videos and their audio streams before uploading them.
[-]knb15y80

Fun fact: if you pause the video and click to different random points, you get to look at a random sampling of Wright's facial expressions, which oscillate between frustration, exasperation, and red-faced rage. Eliezer's expressions move between neutral, amused, serene, and placid.

3Liron15y
Eliezer's repertoire is higher-status because it's less reactive.
2Letharis15y
I agree that Eliezer maintained his calm better, but I don't believe that Wright is the simpleton you seem to be painting him to be. I've watched a lot of his videos, and I would say there are very rarely moments of "red-faced rage," and certainly none in this video. He was at times frustrated, but he really is working to understand what Eliezer is saying.
2knb15y
Nothing I said implied Wright is a "simpleton", and I certainly don't think he is. I was merely pointing out an amusing aspect of their conversation. And, yes he did have a moment of "red-faced rage" when he yelled at Eliezer (I believe it was toward the middle of the video). I certainly understand his frustration since the conversation didn't really get anywhere and they seemed stuck on semantic issues that are hard to address in a 60 minute video.
[-]simplicio15y70

Wright gives the impression of a hostile conversation partner, one who is listening to you only to look for a rhetorical advantage via twisted words.

And most of the points he makes are very em... cocktail-party philosophical?

[-]Morendil15y70

Favorite bit:

  • RW: "We will give [the superintelligent AI] its goals; isn't that the case with every computer program built so far?"
  • EY: "And, there's also this concept of bugs."
[-]simplicio15y60

Okay, so from what I can tell, Wright is just playing semantics with the word "purpose," and that's all the latter part of the argument amounts to - a lot of sound and noise over an intentionally bad definition.

He gets Eliezer to describe some natural thing as "purposeful" (in the sense of optimized to some end), then he uses that concession to say that it "has purpose" as an extra attribute with full ontological standing.

I guess he figures that if materialists and religionists can both agree that the eye has a "purpose," then he has heroically bridged the gap between religion and science.

Basically, it's an equivocation fallacy.

Reply
[-]SeventhNadir15y40

Maybe I'm just too dumb to understand what Robert Wright was saying, but was he being purposely evasive and misunderstanding what Eliezer was saying when he realised he was in trouble? Or was that just me?

Reply
4Craig_Heldreth15y
The reason Wright got bent out of shape (my theory): Eliezer seemed to imply the communal mind theory is Wright's wishful thinking. This seems a little simplistic. I do believe Wright is a little disingenuous, but it is a little more subtle than that. It appears to me he thinks he has an idea that can be used to wean millions of the religious faithful toward a more sensible position, and he is trying to market it. And he would sort of like to have it both ways. With hard-edged science folk he can say all that with a wink because we are sophisticated and we get it. And the rubes can all swallow it hook, line, and sinker. I forget the exact term Eliezer used that seemed to set him off. It was something like wishing or hoping or rooting-for. Then Wright's speech got loud and fast and confused and his blood pressure went up. He seemed to feel like he was being accused of acting in bad faith when he was claiming to try to be helpful. Maybe Wright's friends thought he did great under fire?
2SeventhNadir15y
I wish I could have watched it without knowing who either person was, rather than just not knowing who Wright was. That would be interesting
3Matt_Simpson15y
I wouldn't say the evasiveness was purposeful. Robert misunderstood something Eliezer said fairly early, taking it as an attack when Eliezer was trying to make a point about normative implications. This probably switched Robert out of curiosity-mode and into adversarial-mode. Things were going fine after Eliezer saw what was happening and dropped the subject. But later, when Robert didn't understand Eliezer's argument, adversarial-mode was active and interpreted it as Eliezer continuing (in Robert's mind) to be a hostile debate partner. I doubt Robert thought he was in trouble; more likely he thought Eliezer was in trouble and was being disingenuous.
2MartinB15y
I do not know his position or views well enough to say. But I got the impression he was badly prepared. Severe misunderstandings, and a lesson in staying calm.
[-]timtyler15y40

On first watching, I didn't see where Eliezer was coming from at the end. My thoughts were:

The genetic code was produced by an optimisation process. Biochemists have pretty broad agreement on the topic. There are numerous adaptations - including an error correcting code. It did not happen by accident - it was the product of an optimisation process, executed by organisms with earlier genetic substrates. Before DNA and proteins came an RNA world with a totally different "code" - with no amino acids. It is not that there is no evidence for this - ... (read more)

Reply
[-]timtyler15y30

One of the better BHTV episodes, IMO. Robert Wright was a bit heavy on rhetoric for me: Have you sobered up? Why don't you accuse me of blah. Oh, if you are going to fling accusations around, that isn't very scientific - etc. Also the enthusiasm for extracting some kind of concession from Eliezer about updating his position at the end.

Wright gets a bit excited towards the end. It has some entertainment value - but tends to interfere with the discussion a little. It would have helped if he could have read some EY.

Interesting topics, though.

Reply
[-]LucasSloan15y30

The main problem in the discussion that appeared to me is the fact that the present state of the universe is really unlikely, and you would never get it by chance. This is true and the universe does naively appear to have been designed to produce us. However, this is a priori massively unlikely. This implies that we exist in a universe that tries out many possibilities (many worlds interpretation) and anthropic bias ensures that all observers see weird and interesting things. Robert's problem is that he gets an emotional kick out of ascribing human-frien... (read more)

Reply
6CarlShulman15y
Big World rather. Many-worlds doesn't give different laws of physics in the way that the string theory landscape or Tegmark's mathematical universe hypothesis do.
1[anonymous]15y
Any hypothesis that assigns a really low probability to the present state of the universe is probably wrong.
2LucasSloan15y
That's what I said. (The universe is in a state such that to uniquely determine it, we need a very complicated theory. Therefore, we should look for less complicated theories which contain it and many other things, and count on anthropics to ensure we only see the parts of the universe we're accustomed to.)
0teageegeepea15y
Have you read Sean Carroll's "From Eternity to Here"? It's a fairly layman-friendly take on that problem (or I suppose more accurately, the problem of why the past was in such an improbable state of low entropy). I think his explanation would fall under Carl Shulman's "Big World" category.
-1timtyler15y
I think this argument is mostly about whether purpose is there - not about where it comes from. Designoid entities as a result of anthropic selection effects seem quite possible in theory - and it would be equally appropriate to describe them as being purposeful [standard teleology terminology disclaimers apply, of course].
6pjeby15y
Especially if you unpack "purposeful" as meaning "stimulating that portion of the human brain that evolved to predict the behavior of other entities". ;-) The real confusion about purpose arises when we confuse the REAL definition of purpose (i.e. that one), with the naive inbuilt notion of "purposeful" (i.e. "somebody did it on purpose").
0timtyler15y
That should not be the definition of purpose - if we are trying to be scientific. Martian scientists should come to the same conclusions. "Purpose" - in this kind of context - could mean "goal directed" - or it could mean pursuing a goal with a mind that predicts the future. The former definition would label plants and rivers flowing downhill as purposeful - whereas the latter would not.
4pjeby15y
Do you mean that a Martian scientist would not conclude that when a human being uses that word, they are referring to a particular part of their brain that is being stimulated? What I'm saying is that the notion of "purpose" is an interpretation we project onto the world: it is a characteristic of the map, not of the territory. To put it another way, there are no purposeful things, only things that "look purposeful to humans". Another mind with different purpose-detecting circuitry could just as easily come to different conclusions -- which means that the Martians will be led astray if they have different purpose-recognition circuits, following which we will have all sorts of arguments on the boundary conditions where human and Martian intuitions disagree on whether something should be called "purposeful". tl;dr: if it's part of the map, the description needs to include whose map it is. Now you have to define "mind" as well. It doesn't seem to me that that's actually reducing anything here. ;-)
0JamesAndrix15y
I'm not sure we can rule out a meaningful and objective measure of purposefulness, or something closely related to it. If I saw a Martian laying five rocks on the ground in a straight line, I would label it an optimization process. Omega might tell me that the Martian is a reasonably powerful general optimization process, currently optimizing for a target like "Indicate direction to solstice sunrise" or "Communicate concept of five-ness to Terran". In a case like that the pattern of five rocks in a line is highly intentional. Omega might instead tell me that the Martian is not a strong general optimization process, but that members of its species frequently arrange five stones in a line as part of their reproductive process; that would be relatively low in intentionality. But intentionality can also go with high intelligence. Omega could tell me that the Martian is a strong general optimization agent, is currently curing Martian cancer, and smart Martians just arrange rocks in a line when they're thinking hard. (Though you might reparse that as: there is a part of the Martian brain that is a specialized optimizer for putting stones in a line. I think knowing whether this is valid would depend on the specifics of the thinking hard->stones in a line chain of causality.) And if I just found five stones in a line on Mars, I would guess zero intentionality, because that doesn't constitute enough evidence for an optimization process, and I have no other evidence for Martians.
1pjeby15y
Evolution is an optimization process, but it doesn't have "purpose" - it simply has byproducts that appear purposeful to humans. Really, most of your comment just helps illustrate my point that purposefulness is a label attached by the observer: your knowledge (or lack thereof) of Martians is not something that changes the nature of the rock pattern itself, not even if you observe the Martian placing the rocks. (In fact, your initial estimate of whether the Martian's behavior is purposeful is going to depend largely on a bunch of hardwired sensory heuristics. If the Martian moves a lot slower than typical Earth wildlife, for example, you're less likely to notice it as a candidate for purposeful behavior in the first place.)
1JamesAndrix15y
How do you know it doesn't have purpose? Because you know how it works, and you know that nothing like "Make intelligent life." was contained in its initial state in the way it could be contained in a Martian brain or an AI. The dumb mating Martian also did not leave the rocks with any (intuitively labeled) purpose. I'm saying: given a high knowledge of the actual process behind something, we can take a measure that can be useful, and corresponds well to what we label intentionality. In turn, if we have only the aftermath of a process as evidence, we may be able to identify features which correspond to a certain degree of intentionality, and that might help us infer specifics of the process.
0timtyler15y
What Wright said in response to that claim was: how do you know that? "Optimisationverse: The idea that the world is an optimisation algorithm is rather like Simulism - in that it postulates that the world exists inside a computer. However, the purpose of an optimisationverse is not entertainment - rather it is to solve some optimisation problem using a genetic algorithm. The genetic algorithm is a sophisticated one, that evolves its own recombination operators, discovers engineering design - and so on." In this scenario, the process of evolution we witness does have a purpose - it was set up deliberately to help solve an optimisation problem. Surely this is not a p=0 case...
0pjeby15y
That's not the same thing as acting purposefully -- which evolution would still not be doing in that case. (I assume that we at least agree that for something to act purposefully, it must contain some form of representation of the goal to be obtained -- a thermostat at least meets that requirement, while evolution does not... even if evolution was as intentionally designed and purposefully created as the thermostat.)
0JamesAndrix15y
My purposeful thinking evolved into a punny story: http://lesswrong.com/lw/2kf/purposefulness_on_mars/
0timtyler15y
It would have a purpose in my proposed first sense - and in my proposed second sense - if we are talking about the evolutionary process after the evolution of forward-looking brains. Evolution (or the biosphere) was what was being argued about in the video. The claim was that it didn't behave in a goal directed manner - because of its internal conflicts. The idea that lack of harmony could mess up goal-directedness seems OK to me. One issue is whether the biosphere has enough harmony for a goal-directed model to be useful. If it has a single global brain, and can do things like pool resources to knock out incoming meteorites, it seems obvious that a goal-directed model is actually useful in predicting the behaviour of the overall system.
-4timtyler15y
Most scientific definitions should try to be short and sweet. Definitions that include a description of the human mind are ones to eliminate. Here, the idea that purpose is a psychological phenomenon is exactly what was intended to be avoided - the idea is to give a nuts-and-bolts description of purposefulness. Re: defining "mind" - not a big deal. I just mean a nervous system - so a dedicated signal processing system with I/O, memory and processing capabilities.
2JoshuaZ15y
Any nervous system? That seems like a bad idea. Is a standard neural net trained to recognize human faces a mind? Is a hand-calculator a mind? Also, how does one define having a memory and processing capabilities. For example, does an abacus have a mind? What about a slide rule? What about a Pascaline or an Arithmometer?
0timtyler15y
I just meant "brain". So: calculator - yes, computer - yes. Those other systems are rather trivial. Most conceptions of what constitutes a nervous system run into the "how many hairs make a beard" issue at the lower end - it isn't a big deal for most purposes.
1pjeby15y
Hm. Which one is it? ;-) So, a thermostat satisfies your definition of "mind", so long as it has a memory?
-1timtyler15y
Human mind: complex. Cybernetic diagram of minds-in-general: simple. A thermostat doesn't have a "mind that predicts the future". So, it is off the table in the second definition I proposed.
5pjeby15y
Dude, have you seriously not read the sequences? First you say that defining minds is simple, and now you're pointing back to your own brain's inbuilt definition in order to support that claim... that's like saying that your new compressor can compress multi-gigabyte files down to a single kilobyte... when the "compressor" itself is a terabyte or so in size. You're not actually reducing anything, you're just repeatedly pointing at your own brain.
0timtyler15y
Re: "First you say that defining minds is simple, and now you're pointing back to your own brain's inbuilt definition in order to support that claim... " I am talking about a system with sensory input, motor output and memory/processing. Like in this diagram: http://upload.wikimedia.org/wikipedia/commons/7/7a/SOCyberntics.png That is nothing specifically to do with human brains - it applies equally well to the "brain" of a washing machine. Such a description is relatively simple. It could be presented to Martians in a manner so that they could understand it without access to any human brains.
1pjeby15y
That diagram also applies equally well to a thermostat, as I mentioned in a great-great-grandparent comment above.
[-]Apteris13y20

I'm watching this dialogue now, I'm 45 (of 73) minutes in. I'd just like to remark that:

  1. Eliezer is so nice! Just so patient, and calm, and unmindful of others' (ahem) attempts to rile him.
  2. Robert Wright seemed more interested in sparking a fiery argument than in productive discussion. And I'm being polite here. Really, he was rather shrill.

Aside: what is the LW policy on commenting on old threads? All good? Frowned upon?

Reply
0thomblake13y
It's pretty much okay. If there is a recent "Sequence rerun" thread about it in Discussion, then the discussion should happen there instead, but otherwise there are no particular issues.
[-]knb15y20

I really didn't care much for this one. I usually feel like I learned something when I watch a Bloggingheads video (there is a selection effect, because I only watch ones with people I already find interesting). But I'm afraid this one was wasted in misunderstandings and minor disagreements.

Reply
[-]timtyler15y20

Re: panspermia.

Applying Occam's razor isn't trivial here. The difficulty of the journey to earth makes panspermia less probable, but all the other places where life could then have previously evolved makes it more probable. The issue is - or should be - how these things balance.

If you write down the theory, panspermia has a longer description. However, that's not the correct way to decide between the theories in this kind of case - you have to look a bit deeper into the probabilities involved.

Reply
[-]timtyler15y20

I think it is quite acceptable to describe technological evolution as "purposeful" - in the same way as any other natural system is purposeful.

‘Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.’ Today the mistress has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it. The only concession which they make to its disreputable past is to rename it ‘teleonomy’. - D. Hull.

So, I am sympathetic t... (read more)

Reply
7TobyBartels15y
So when unmarried, biology and teleology happened to have the same last name. But after the marriage, teleology changed her surname to be different? No wonder ordinary people don't understand science!
3SilasBarta15y
Sure, so long as you recognize that "purpose" in "The purpose of the heart is to pump blood." cashes out as something different from "The purpose of the silicon CPU is to implement a truth table." In my experience, there are about zero philosophers of science who both understand this distinction, and harp on this point about teleology in biology. Here is one I read recently.
2timtyler15y
"Cashes out" seems rather vague. In one case, we have a mind to attribute purpose to - and in the other we don't. However, both are complex adapted systems, produced by other, larger complex adapted systems as part of an optimisation process. If that is all we mean by "purpose", these would be classified in much the same way. I didn't like the "No Teleology!" link much - it seemed pointless.
4SilasBarta15y
I use the term "cashes out" because that's the lingo here. But I'll expand out the two claims to show how they're different in a crucial way. In the case of the heart's purpose, the statement means, "Historically, genes were copied to the next generation in proportion to the extent to which they enhanced the capability/tendency of organisms whose DNA had that gene to make another organism with that gene. At an organism's present state, a gene complex causes a heart to exist, which causes blood to increase in pressure at one point in its circulation, which causes the organism to stay away from equilibrium with its environment, which permits it to pass on the genes related to the heart (the latter being the critical explanatory feature of the organism). If the heart ceased causing the blood to increase in pressure, the organism would lose its ability to remain far from equilibrium (which as mentioned above relates to an aspect with critical explanatory power) much faster and more surely than if the heart ceased causing any of its other effects, such as generation of heat." In the case of the CPU's purpose, the statement means, "The CPU was added to the computer system because a human designer identified that fast implementation of a truth table would be required for the computer system to do what the human designer intended (which is fast input/output of computations related to what human users will want out of it), and they recognized that inclusion of the CPU would lead to fast implementation of a truth table." Quite a mouthful in each case! So it's quite understandable when the distinctions are glossed over in simplified explanations of the topics. But the important thing to notice is that if you take the meaning of "purpose" in the biological context to mean something more like it does in the computer context, you are led into critical errors. For example, you will have a hard time recognizing evolutionary paths, particularly when organs are -- for lack of a better term
3timtyler15y
As Wright says, organisms are not that harmonious anyway. At every polymorphic locus, two alleles are engaged in a battle-to-the-death. There is still enough cooperation for people to describe the resulting behaviour as "purposeful" - despite all the underlying replicator-level conflicts that produce it.
[-]Alexandros15y10

By replacing 'has purpose X' with 'is suitable for X', a lot of Wright's points become meaningless.

That said, I am also unsure about Eliezer's argument for purposelessness from competing designs.

Reply
[-]Clippy15y10

I like that one of the humans acknowledged the existence of paperclip maximizers (around 7:50).

Reply
3JamesAndrix15y
Why do you like it? How does it maximize paperclips?
7Clippy15y
Well, it's hard to tell people about the importance of paperclips when they don't even know that some of us find them really important in the first place.
0James_K15y
It seems to me your task would be easier if people didn't acknowledge paperclip maximisers. Humans aren't going to start maximising paperclips, so your best odds are to do an end run around us. That's easier to do if we aren't aware of your existence.
3[anonymous]15y
People have been involved in the production of every currently existing paperclip.
1katydee15y
Keep in mind that people could, in theory, be compelled or incentivized to maximize paperclips, or at least to consider paperclips much more important than they are now.
[-]blogospheroid15y00

During the dialogue, Eliezer wanted Robert to distinguish between the "accident hypothesis" and the non-zero hypothesis. He also mentioned that he would see the difference between the two by Solomonoff induction, as in the shortest computer program that can output the result seen.

Now, any accident hypothesis involves a random number function, right?

The best random number functions are those that either go beyond the matrix or are very long.

So, does solomonoff induction imply that an intelligent designer is the better hypothesis once the length o... (read more)

Reply
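[Editorial aside: the comment above leans on Solomonoff induction, which is uncomputable, but the intuition can be sketched with a crude computable proxy: compressed length as an upper bound on description length. The data strings, the seed, and the use of zlib here are illustrative choices, not anything from the dialogue.]

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Crude stand-in for Kolmogorov complexity: the length of the
    zlib-compressed representation. This is only an upper bound on the
    true shortest description, but it suffices for the contrast below."""
    return len(zlib.compress(data, 9))

random.seed(0)
# "Accident hypothesis" stand-in: ~10 KB of random bytes.
noisy = bytes(random.randrange(256) for _ in range(10_000))
# "Patterned" stand-in: a short motif repeated to roughly the same size.
patterned = b"nonzero-sum " * 833

# Random data barely compresses; patterned data compresses enormously.
print(description_length(patterned) < description_length(noisy))  # → True
```

Note the catch relevant to blogospheroid's question: a hypothesis that includes a random-number source pays for its output bit by bit, while a structured hypothesis pays only for the generating rule, so Solomonoff-style reasoning favors whichever side has the shorter total program, not "design" per se.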
[-]Ivan_Tishchenko15y00

Well, for me, there was only emotional disagreement between RW and EY. And EY's explanation did not make it completely through to RW.

To summarize the second part of the video:

RW: Can it be that evolution of the Earth biosphere is purposeful? EY: Yes, but that's very improbable.

That's it. Isn't it?

And by the way, RW was making a very good argument! I saw that when I finally understood what RW was talking about, trying to compare a fox to the Earth. Because, you see, I too do not see that much of a difference between them -- provided that we agree on his c... (read more)

Reply
[-]timtyler15y00

...and what we end up doing with all the galaxies we see in our telescopes - assuming there's no one out there - which seems to be the case. - 24:30

There aren't any aliens in all the visible galaxies?!? I thought we were likely to see a universe with many observers in it. What gives?

Reply
4Will_Newsome15y
Our universe does seem to have infinitely many observers in it but that doesn't necessarily mean it has to have a particularly high density of them. It instead indicates that particularly densely populated universes are unlikely for some other reason (e.g. uFAI or other planet-wide or lightcone-wide existential risks). Alternatively, it could be that for some reason the computation 'Earth around roughly 2010' includes a disproportionately large amount of the measure of agents in the timtyler reference class. Perhaps we third millennium human beings are a particularly fun bunch to simulate and stimulate.
[-]CarlShulman15y00

You needed to raise observer selection effects: the laws of physics and conditions on Earth are pretty favorable compared to alternatives for the development of intelligence. And of course intelligent observers would be most common in regions of the multiverse with such conditions, and the Fermi Paradox, at least, tells us that Earth is unusually favorable to the development of intelligent life among planets in our galaxy.

Had that been explained and terms made clear, then I think the disagreement could have been made clear, but without it you were just tal... (read more)

Reply
[-][anonymous]15y00

Bob badgered Dan Dennett to get an "admission" of design/purpose some years ago, and has regularly cited it (with misleading context) for years. One example in this comment thread.

Reply
[-]SforSingularity15y-20

I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.

The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)

The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't cr... (read more)

Reply
3RobinZ15y
That's not what "purpose" means.
2timtyler15y
The discussion is about what "purpose" means - in the context of designoid systems. I for one am fine with attributing "purpose" to designoid entities that were created by an anthropic selective process - rather than by evolution and natural selection.
0RobinZ15y
I guess I can see that.
2zero_call15y
I don't think this is much of an insight, to be honest. The "anthropic" interpretation is a statement that the universe requires self-consistency. Which is, let's say, not surprising. My feeling is that this is a statement about the English language. This is not a statement about the universe.
0timtyler15y
There's also the possibility of "the adapted universe" idea - as laid out by Lee Smolin in "The Life of the Cosmos" and James Gardner in "Biocosm" and "Intelligent-Universe". Those ideas may face some Occam pruning - but they seem reasonably sensible. The laws of the universe show signs of being a complex adaptive system - and anthropic selection is not the only possible kind of selection effect that could be responsible for that. There could fairly easily be more to it than anthropic selection. Then there's Simulism... I go into the various possibilities in my "Viable Intelligent Design Hypotheses" essay: http://originoflife.net/intelligent_design/ Robert Wright has produced a broadly similar analysis elsewhere.

Sweet, there's another Bloggingheads episode with Eliezer.

Bloggingheads: Robert Wright and Eliezer Yudkowsky: Science Saturday: Purposes and Futures