Sweet, there's another Bloggingheads episode with Eliezer.
Bloggingheads: Robert Wright and Eliezer Yudkowsky: Science Saturday: Purposes and Futures
Maybe it's because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people. It's almost kinda painful to watch, because even though I wish someone would come along and pwn Eliezer in an argument, it never ever happens because everyone is more wrong than him, and I have to sit there and listen to them fail in such predictably irrational ways. Seriously, Eliezer is smart, but there have to be some academics out there that can point to at least one piece of Eliezer's fortress of beliefs and find a potentially weak spot. Right? Do you know how epistemically distressing it is to have learned half the things you know from one person who keeps on getting proven right? That's not supposed to happen! Grarghhhhhh. (Runs off to read the Two Cult Koans.) (Remembers Eliezer wrote those, too.) (God dammit.)
(And as long as I'm being cultish, HOW DARE PEOPLE CALL OUR FEARLESS LEADER 'YUDKOWSKI'?!?!??!? IT COMPLETELY RUINS THE SYMMETRY OF THE ETERNAL DOUBLE 'Y'S! AHHH! But seriously, it kinda annoys me in a way that most trolling doesn't.)
It reminds me of when Richard Dawkins was doing a bunch of interviews and discussions to promote his then-latest book The God Delusion. It was kind of irritating to hear the people he was talking with failing again and again in the same predictable ways, raising the same dumb points every time. And you could tell that Dawkins was sick of it, too. The few times when someone said something surprising, something that might force him to change his mind about something (even a minor point), his face lit up and his voice took on an excited tone. And when he was particularly uncertain about something, he said so.
People accused him of being arrogant and unwilling to change his mind; the problem is that the people he was arguing with were just so pitifully wrong that of course he wasn't going to change his mind from talking with them. It's funny, because one of the things I really like about Dawkins is that he's genuinely respectful in discussions with other people. Sometimes barbed, but always fundamentally respectful. When the other person says something, he won't ignore it or talk past them, and he assumes (often wrongly) that whoever he's speaking with is intelligent enough and sane enough to handle a lack of sugarcoating.
And of course, all this led to accusations of cultishness, for exactly the same reasons that are making you uncomfortable.
Maybe it's because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people.
Start with a bit of LW's own "specialized cult jargon" (I kid, really!)... specifically the idea of inferential distance.
Now imagine formalizing this concept more concretely than you get with story-based hand-waving, so that it was more quantitative -- with parametrized shades of grey instead of simply being "relevant" or "not relevant" to a given situation. Perhaps it could work as a quantitative comparison between two people who could potentially Aumann update with each other, so that "ID(Alice,Bob) == 0 bits" when Alice knows everything Bob knows, they already believe exactly the same things, and they can't improve their maps by updating with each other about anything. If it's 1 bit, then perhaps a single yes/no Q&A will be sufficient to bring them into alignment. Larger and larger values imply that they have more evidence (and/or more surprising evidence) to share.
(A simple real world proxy for ID(P1,P2) might be words re...
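(One crude, purely illustrative way to put numbers on it -- KL divergence is just one possible stand-in, and the names and credences below are made up -- is to model each person's beliefs over a shared set of hypotheses as a probability distribution and count the bits between them:)

```python
# Toy proxy for "inferential distance" in bits (illustrative only).
# Model each person's beliefs over the same hypotheses as a probability
# distribution; KL divergence is one crude stand-in for how much evidence
# one party would need to transmit to move the other's map.
from math import log2

def inferential_distance_bits(p, q):
    # KL(p || q), in bits; 0 when the two maps already agree exactly.
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

alice = [0.7, 0.2, 0.1]   # made-up credences over three hypotheses
bob   = [0.3, 0.4, 0.3]

print(inferential_distance_bits(alice, alice))  # 0.0  -- nothing to update on
print(inferential_distance_bits(alice, bob))    # ~0.5 -- bits of disagreement to talk through
```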
Bear in mind that, like many good works of pop science, the vast majority of what the Sequences present is other people's ideas; I'm much more confident of the value of those ideas than of the parts that are original to Eliezer.
And who filtered that particular and exceptionally coherent set of "other people's ideas" out of a vastly larger total set of ideas? Who stated them in (for the most part) clear anti-jargon? I would not even go into the neighborhood of being dismissive of such a feat.
Originality is the ultimate strawman.
Rabbits and foxes are used as a stereotypical example of conflict. However, "even" foxes and rabbits actually cooperate with each other - as follows:
A fox slinks about, looking for food. When he spies a rabbit munching in the grass, he begins to creep closer. If the rabbit sees the fox coming, it will stand on its hind legs, observing the fox. The fox now realizes that it's been discovered, and it will turn away from the hunt. The rabbit could run, but that would entail wasteful energy expenditure. So it simply signals the fox. The fox gets the "I see you" signal and turns away, because it also doesn't want to expend energy on a futile chase. So both animals come out ahead, by the use of a signal. The rabbit's work loop (stay alive) has been completed with minimum energy expended, and the fox's work loop (find food) has been terminated unsuccessfully, but with less energy used than if it had included a fruitless chase.
The rabbit helps the fox save energy, the fox helps the rabbit save energy - it's a deal. They don't want exactly the same thing - but that is true for many traders, and it doesn't prevent cooperative trade arising between them. Nature is full of such cooperation.
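To put rough, made-up numbers on that trade (nothing below comes from the story itself, it's just a back-of-the-envelope sketch):

```python
# Back-of-the-envelope energy accounting for the "I see you" signal.
# All numbers are invented for illustration.
meal_value = 30.0
chase_cost_fox, chase_cost_rabbit = 10.0, 12.0   # energy burned by a full chase
signal_cost = 0.5                                # rabbit standing up and watching
p_catch_if_spotted = 0.05                        # chases rarely succeed once the rabbit has seen the fox

# Fox's expected payoff from chasing a rabbit that has already spotted it:
chase_anyway = p_catch_if_spotted * meal_value - chase_cost_fox   # 1.5 - 10 = -8.5
print("fox better off walking away:", chase_anyway < 0)           # True

# Rabbit's saving from signalling instead of running:
print("rabbit's energy saved:", chase_cost_rabbit - signal_cost)  # 11.5
```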
Is there software that would objectively measure who spoke most and who interrupted whom most? If so, Bloggingheads should run such software as a matter of course and display the results alongside each conversation.
EDIT: it should also measure how often each participant allows the other to interrupt, versus simply raising their voice and ploughing on.
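I don't know of an off-the-shelf tool, but given speaker-labelled segments from any diarization step (the data format below is just an assumption, not any particular tool's output), the tallying itself is trivial:

```python
# Tally talk time and interruptions from diarized segments.
# A segment is (speaker, start_seconds, end_seconds); an "interruption" here
# means starting to talk before the previous (different) speaker has finished.
segments = [
    ("Wright", 0.0, 42.0),
    ("Yudkowsky", 40.5, 95.0),   # starts 1.5 s early -> counted as an interruption
    ("Wright", 96.0, 130.0),
]

talk_time, interruptions = {}, {}
prev_speaker, prev_end = None, 0.0
for speaker, start, end in segments:
    talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)
    if prev_speaker and speaker != prev_speaker and start < prev_end:
        interruptions[speaker] = interruptions.get(speaker, 0) + 1
    prev_speaker, prev_end = speaker, end

print(talk_time)       # {'Wright': 76.0, 'Yudkowsky': 54.5}
print(interruptions)   # {'Yudkowsky': 1}
```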
Fun fact: if you pause the video and click to different random points, you get to look at a random sampling of Wright's facial expressions, which oscillate between frustration, exasperation, and red-faced rage. Eliezer's expressions move between neutral, amused, serene, and placid.
Wright gives the impression of a hostile conversation partner, one who is listening to you only to look for a rhetorical advantage via twisted words.
And most of the points he makes are very em... cocktail-party philosophical?
Favorite bit:
Okay, so from what I can tell, Wright is just playing semantics with the word "purpose," and that's all the latter part of the argument amounts to - a lot of sound and noise over an intentionally bad definition.
He gets Eliezer to describe some natural thing as "purposeful" (in the sense of optimized to some end), then he uses that concession to say that it "has purpose" as an extra attribute with full ontological standing.
I guess he figures that if materialists and religionists can both agree that the eye has a "purpose," then he has heroically bridged the gap between religion and science.
Basically, it's an equivocation fallacy.
Maybe I'm just too dumb to understand what Robert Wright was saying, but was he being purposely evasive and misunderstanding what Eliezer was saying when he realised he was in trouble? Or was that just me?
On first watching, I didn't see where Eliezer was coming from at the end. My thoughts were:
The genetic code was produced by an optimisation process. Biochemists have pretty broad agreement on the topic. There are numerous adaptations - including an error-correcting code. It did not happen by accident - it was the product of an optimisation process, executed by organisms with earlier genetic substrates. Before DNA and proteins came an RNA world with a totally different "code" - with no amino acids. It is not that there is no evidence for this - ...
One of the better BHTV episodes, IMO. Robert Wright was a bit heavy on rhetoric for me: "Have you sobered up?" "Why don't you accuse me of blah." "Oh, if you are going to fling accusations around, that isn't very scientific" - etc. Also the enthusiasm for extracting some kind of concession from Eliezer about updating his position at the end.
Wright gets a bit excited towards the end. It has some entertainment value - but tends to interfere with the discussion a little. It would have helped if he could have read some EY.
Interesting topics, though.
The main problem in the discussion, as it appeared to me: the present state of the universe is really unlikely, and you would never get it by chance. This is true, and the universe does naively appear to have been designed to produce us. However, design is a priori massively unlikely. This implies that we exist in a universe that tries out many possibilities (the many-worlds interpretation), and anthropic bias ensures that all observers see weird and interesting things. Robert's problem is that he gets an emotional kick out of ascribing human-frien...
I'm watching this dialogue now, I'm 45 (of 73) minutes in. I'd just like to remark that:
Aside: what is the LW policy on commenting on old threads? All good? Frowned upon?
I really didn't care much for this one. I usually feel like I learned something when I watch a Bloggingheads video (there is a selection effect, because I only watch ones with people I already find interesting). But I'm afraid this one was wasted in misunderstandings and minor disagreements.
Re: panspermia.
Applying Occam's razor isn't trivial here. The difficulty of the journey to earth makes panspermia less probable, but all the other places where life could then have previously evolved makes it more probable. The issue is - or should be - how these things balance.
If you write down the theory, panspermia has a longer description. However, that's not the correct way to decide between the theories in this kind of case - you have to look a bit deeper into the probabilities involved.
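One toy way of writing the balance down (symbols mine, nothing rigorous): let $\Delta K$ be the extra description length the panspermia story costs, $N$ the number of other sites where life could have arisen first, and $p$ the chance of a viable transfer to Earth. Then, very roughly,

$$\frac{P(\text{panspermia}\mid\text{life on Earth})}{P(\text{local origin}\mid\text{life on Earth})}\;\approx\;2^{-\Delta K}\times N\times p,$$

so the question is whether $N \cdot p$ is large enough to pay for the $2^{-\Delta K}$ complexity penalty.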
I think it is quite acceptable to describe technological evolution as "purposeful" - in the same way as any other natural system is purposeful.
‘Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.’ Today the mistress has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it. The only concession which they make to its disreputable past is to rename it ‘teleonomy’. - D. Hull.
So, I am sympathetic t...
By replacing 'has purpose X' with 'is suitable for X', a lot of Wright's points become meaningless.
That said, I am also unsure about Eliezer's argument for purposelessness from competing designs.
I like that one of the humans acknowledged the existence of paperclip maximizers (around 7:50).
During the dialogue, Eliezer wanted Robert to distinguish between the "accident hypothesis" and the non-zero hypothesis. He also mentioned that he would see the difference between the two by Solomonoff induction, as in the shortest computer program that can output the result seen.
Now, any accident hypothesis involves a random number function, right?
The best random number functions are those that either go beyond the matrix or are very long.
So, does Solomonoff induction imply that an intelligent designer is the better hypothesis once the length o...
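The question gets cut off, but for what it's worth, here is the standard way the weighting works (my paraphrase, not anything from the video): the Solomonoff prior gives each program $p$ that reproduces the observed data $x$ on a universal machine $U$ a weight of $2^{-\ell(p)}$,

$$M(x)\;=\;\sum_{p\,:\,U(p)=x} 2^{-\ell(p)},$$

so a "pure accident" program has to spell out the lucky bits it relies on and is penalized by roughly $2^{-(\text{bits of luck required})}$, while a "designer" program only wins if designer-plus-output is a genuinely shorter total description of the same data.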
Well, for me, there was only emotional disagreement between RW and EY. And EY's explanation did not make it through completely to RW.
To summarize the second part of the video:
RW: Can it be that evolution of the Earth biosphere is purposeful? EY: Yes, but that's very improbable.
That's it. Isn't it?
And by the way, RW was making a very good argument! I saw that when I finally understood what RW was talking about in trying to compare a fox to the Earth. Because, you see, I too do not see that much of a difference between them -- provided that we agree on his c...
...and what we end up doing with all the galaxies we see in our telescopes - assuming there's no one out there - which seems to be the case. - 24:30
There aren't any aliens in all the visible galaxies?!? I thought we were likely to see a universe with many observers in it. What gives?
You needed to raise observer selection effects: the laws of physics and conditions on Earth are pretty favorable compared to alternatives for the development of intelligence. And of course intelligent observers would be most common in regions of the multiverse with such conditions, and the Fermi Paradox, at least, tells us that Earth is unusually favorable to the development of intelligent life among planets in our galaxy.
Had that been explained and terms made clear, then I think the disagreement could have been made clear, but without it you were just tal...
Bob badgered Dan Dennett to get an "admission" of design/purpose some years ago, and has regularly cited it (with misleading context) for years. One example in this comment thread.
I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.
The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)
The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't cr...
Well, let me explain my intuition behind my objection, even if there's a reason why it might be wrong in this case.
I am, in general, skeptical of claims about Pareto-improvements between agents with fundamentally opposed goals (as distinguished from merely different goals, some of which are opposed). Each side has a chance to defect from this agreement to take utility from the other.
It's quite a familiar case for two people to recognize that they can submit their disagreement to an arbitrator who will render a verdict and save them the costs of trying to tip the conflict in their favor. But to the extent that one side believes the verdict will favor the other, that side will start to increase the conflict-resolution costs if doing so gets it a better result at the other's expense. For if a result favors one side, then a fundamentally opposed other side should see that it wants less of this.
So any such agreement, like the one between foxes and rabbits, presents an opportunity for one side to abuse the other's concessions and grab some of the utility at the cost of total utility. In this case, since the rabbit is getting the benefit it would get from a full chase without spending the energy of one, the fox has reason to prevent it from being able to make that conversion. The method I originally gave shows one way.
Another way foxes could abuse the strategy is to hunt in packs. Then, when the rabbit spots one of them and plans to run in one direction, it will be ill-prepared if another fox is ready to chase from another direction (optimally, the opposite) -- and it has given away its location! (Another fox just has to be ready to spring for any rabbit that stands and looks at something else.)
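(Toy numbers again, all invented, just to show the defection point: the fox only "honors" the signal while chasing an alerted rabbit has negative expected value, and anything that raises the catch probability -- say, a second fox covering the escape route -- flips that.)

```python
# When should a fox ignore the "I see you" signal? (Numbers invented.)
meal_value, chase_cost = 30.0, 10.0

def chase_pays(p_catch):
    return p_catch * meal_value - chase_cost > 0

print(chase_pays(0.05))  # False: lone fox vs. alerted rabbit -> honor the signal
print(chase_pays(0.50))  # True:  a second fox cuts off the escape -> defect and chase
```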
So even if the "stand and look"/"give up" pattern is observed, I think the situation is more complicated, and there are more factors at play than timtyler listed.
The intuition does make sense, but I don't think it serves to refute the proposed co-evolved signal in this case. Perhaps the prey also likes to maintain view of its hunter as it slinks through the brush.