My Bloggingheads.tv interview with Jaron Lanier is up.  Reductionism, zombies, and questions that you're not allowed to answer:

This ended up being more of me interviewing Lanier than a dialog, I'm afraid.  I was a little too reluctant to interrupt.  But you at least get a chance to see the probes I use, and Lanier's replies to them.

If there are any BHTV heads out there who read Overcoming Bias and have something they'd like to talk to me about, do let me or our kindly producers know.

Will Wilkinson's show, 'Free Will', seems like a good idea. Wilkinson tends to be good at drawing out clear accounts of philosophers' and scientists' views, and after a fair-minded exposition, criticizing them at their weakest points or raising relevant issues. His episode with Robin seemed to go well.

Plus Wilkinson is hilarious. Sick & twisted sense of humor. Excellent!

Why do these Bloggingheads interviews of Eliezer's always end up more funny than informative?

It's interesting how many popular confusions can be answered using material already presented on this blog. As I listened to this diavlog and the one before with Horgan, specific blog posts kind of clicked into place in my head as answers or challenges to almost every claim that was made. I assume this is partly hindsight, prompted by the specific answers Eliezer was giving, since the effect is less striking in my own conversations with people, but nonetheless.

It strikes me from this diavlog that you both a) believe in capital-T Truth and mind-independent reality that has to be both understandable and explainable, even if human beings aren't or won't ever really understand it or be able to adequately explain it, and b) are resistant to "it is better that" notions of truth, as opposed to "it is true that" notions of truth. (In other words, pragmatics vs. metaphysics.)

My issue with this is that we don't, actually, have a philosophical/rational/scientific vision of capital-T Truth yet, despite all of our efforts. (Descartes, Spinoza, Kant, etc.) I don't know anyone who really doubts this. Even the capital-T Truth believers will admit that we don't know how to achieve an understanding of that truth, they'll just say that it's possible because there really is this kind of truth. Isn't it the case, then, that your embracing this kind of objective truth is itself a "true because it's useful" kind of thinking, not a "true because it's true" kind of thinking? As long as we don't have a workable understanding of reality and how it is troubled by mind/body dualism-- the consciousness problem-- proceeding as if there is a capital-T Truth essentially entails a "this is true because it is useful and we want it to be true" way of embracing that notion. Right?

Eliezer occasionally looked like he was having trouble following Lanier's reasoning. I certainly did. My guess is that this is because, on those occasions, Lanier didn't have a reasoning.

Also, it seems to me that despite his protestations (in his one half a manifesto), Lanier really does believe that consciousness is unexplainable/irreducible/magical.

Eric makes a reasonable point in the comments: Lanier's argument about a three-digit IQ giving an inaccurate impression of precision is highly dubious - since throwing away more of the measurement you have made and introducing rounding errors is itself pretty bad.

Once you have lost the data, it is gone. Whereas if you have kept the data and you don't want it, you can always throw it away later.
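(A tiny numerical sketch of my own - made-up scores, nothing from the diavlog - of why rounding first is the irreversible step: you can always round a retained value later, but you cannot un-round a stored one.)

```python
# Made-up measurements: average the raw scores vs. scores that were rounded
# to the nearest ten before being stored.
raw = [104.6, 104.8, 95.2, 104.9]

mean_raw = sum(raw) / len(raw)
mean_rounded_first = sum(round(x, -1) for x in raw) / len(raw)

print(mean_raw)             # ~102.4 (and it can still be rounded later if desired)
print(mean_rounded_first)   # 100.0  (the discarded precision cannot be recovered)
```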

My issue with this is that we don't, actually, have a philosophical/rational/scientific vision of capital-T Truth yet, despite all of our efforts. (Descartes, Spinoza, Kant, etc.)

Truth is whatever describes the world the way it is.

Even the capital-T Truth believers will admit that we don't know how to achieve an understanding of that truth, they'll just say that it's possible because there really is this kind of truth.

Do you mean an understanding of the way the world is, or an understanding of what truth is?

Isn't it the case, then, that your embracing this kind of objective truth is itself a "true because it's useful" kind of thinking, not a "true because it's true" kind of thinking?

You can of course define "truth" however you like - it's just a word. If you're expecting some sort of actual relationship to hold between - say - ink on a page saying "Jupiter has four large moons" and the moons of Jupiter themselves, then of course there's no such thing; the universe is just made of protons, electrons, and such mundane objects.

But there still really are four large moons of Jupiter.

In short, Freddie, EY appears largely to have a correspondence theory of truth.

Any conversation about beliefs without the express purpose of getting rid of them seems fairly pointless to me. After all, if your view conformed perfectly with reality you wouldn't have to call it a belief anymore. No one believes in the Atlantic Ocean.

Hi everybody, first time here over from bloggingheads. At the start of the diavlog I thought I'd sympathize with Lanier but his circular reasoning really rankled me. The repeated appeals to his own expertise were really obnoxious.

Has Eliezer ever examined some of the socio-ecological assumptions of his singularity model? It seems to me that it's pretty dependent on substantial funding for this type of research, which isn't likely in the event of large-scale energy/resource shortages or nuclear war. I'm looking through the "future" and "disaster" posts, but if someone could point me in the right direction I'd be grateful. I'm finding a few mentions of things "going wrong" etc but I think he's referring to the development of evil AGI and not these more mundane constraints.

You can now count me among your regular readers, Eliezer. You should do a diavlog with Bob Wright - he's a good interviewer with a sense of humor and you guys would have some stuff to discuss when it comes to his Nonzero thesis (which I find specious).

g

nazgulnarsil, just because you wouldn't have to call it a belief doesn't mean it wouldn't be one; I believe in the Atlantic Ocean even though I wouldn't usually say so in those words.

It was rather tiresome the way that Lanier answered so many things with (I paraphrase here) "ha ha, you guys are so hilariously, stupidly naive" without actually offering any justification. (Apparently because the idea that you should have justification for your beliefs, or that truth is what matters, is so terribly terribly out of date.) And his central argument, if you can call it that, seems to amount to "it's pragmatically better to reject strong AI, because I think people who have believed in it have written bad software and are likely to continue doing so". Lanier shows many signs of being a smart guy, but ugh.

Eh, Lanier has some sound intuitions but his arguments supporting them seem confused. I am (naturally) more impressed with my own arguments against reductionism, some of which are collected here. My attempts to argue them around here have mostly come to naught though.

Also, you should read some Brian Cantwell Smith.

You owe it to yourself to take on the strongest arguments against your position as well as the weak ones. I don't know where my half-assed speculations fit in, but Smith is a serious thinker who, like you and Lanier, comes out of the computational world so you ought to at least be able to connect on some level.

I wasn't expecting such a swift response! Unfortunately I'm a little too tipsy to go through the whole paper right now (I'll get to it with the cold, godless Sunday morn), but I think I'd actually be more interested in the paper you reference at the start, about catastrophism bias. I completely agree that such a bias exists! But still, I don't think it's obvious that we'll develop an AGI that can solve the socio-ecological problems I mentioned earlier before they inhibit the research itself. As such I'm more concerned about the design of a benevolent agrarian revolution before we get to the singularity stuff.

I guess in my mind there's a big tension here - the societal mechanisms that right now support the development of powerful AGI are also horribly brutal and unjust. Could the research continue in a more just and ecologically-sound system?

Ryan

I find it odd that Mr. Lanier derides AI as a sort of religious belief, yet he is attached to the idea that consciousness is mystical and impenetrable. Surely the one invoking mystical and magical thinking is closer to a religious mindset?

Although frankly the whole comparison to religion is way overused and I don't think it applies to the views of either of them.

It seemed to me that Lanier drifted between useful but poorly explained ideas and incoherence throughout it. And that the talk was mostly about his ideas.

Incidentally, Eliezer asked early on, and has asked in the past: Can you name a belief which is untrue, but which you nevertheless believe?

I think, on reflection, that I have one. I believe that my conscious perception of reality is more or less accurate.

Suppose that this universe (or any possible universe) ends in heat death, rather than a "big crunch", repeated inflationary periods, etc, which is a plausible outcome of the cosmological debate on the ultimate fate of the universe. In that case, there is a very high probability that my brain is a random fluctuation in a maximum entropy universe, rather than a meaningful reflection of reality. Nevertheless, I believe and act as though my memories and perceptions describe the universe around me.

[anonymous]

"If you disagree with me then you're not a human you are a zombie."

If I were California-Berkeley I'd fire the fucker and send him back to Sunday School.

[anonymous]

Incidentally, has someone on Overcoming Bias made a post on laughter?

Lanier's use of laughter as his primary persuasive strategy is a perfect example of the role it plays in belief formation. I'd love to see someone with a more detailed understanding of how it works from the perspective of bias explain it.

I second the idea that Yudkowsky should square off with Robert Wright, then see if Kevin Kelly is interested. Test your premises against pulses; do it out loud on bloggingheads. The discourse will improve.

In that case, there is a very high probability that my brain is a random fluctuation in a maximum entropy universe, rather than a meaningful reflection of reality. Nevertheless, I believe and act as though my memories and perceptions describe the universe around me.

You buy the Boltzmann brain argument? How did you calculate the probabilities? Nobody knows the probability of what seems to be our universe forming, and certainly nobody knows the probability of a Boltzmann brain forming in a universe of unknown size and age. The Boltzmann brain paradox argues that one unknown highly speculative probability is bigger than another unknown highly speculative probability.

If you are a Boltzmann brain, don't worry about it - it's the other case that should be of interest to you.

Eliezer doesn't seem to be drawing on the Daniel Dennett side of the force when it comes to cultural evolution. Dennett is one of the more famous cheerleaders for cultural evolution - and would never dream of writing: it drives me up the wall when people lump together natural selection and intelligence-driven processes as "evolutionary".

I think I see where the disconnect was in this conversation. Lanier was accusing general AI people of being religious. Yudkowsky took that as a claim that something he believed was false, and wanted Lanier to say what.

But Lanier wasn't saying anything in particular was false. He was saying that when you tackle these Big Problems, there are necessarily a lot of unknowns, and when you have too many unknowns reason and science are inapplicable. Science and reason work best when you have one unknown and lots of knowns. If you try to bite off too big a chunk at once you end up reasoning in a domain that is now only e.g. 50% fact, and that reminds him of the "reasoning" of religious people.

Knowledge is a big interconnected web, with each fact reinforcing the others. You have to grow it from the edge. And our techniques are designed for edge space.

Wow. At around 20 minutes Jaron wraps his irrationality up in so much flowery language it's impossible to follow. There's no arguing with that, but you had a really good stab, Eliezer. I'd have snapped at all the implied barbs. Fascinating all the way through. Three cheers for physical reality!

I actually thought, Eliezer, that you did rather poorly in this dialogue. You and your logic reached their limits. The tools you drew on were from too narrow a scope and didn't match Lanier's breadth. I am surprised (and worried) that all the other comments "take your side". I think this "event" requires some updating by you, Eliezer. Person to person - you lost. And I think this phrase "I was a little too reluctant to interrupt" is an example of cognitive dissonance and not the truth of the matter at all.

Here's a question for Eli and/or other readers: is human intelligence somehow qualitatively different than animal intelligence, or do humans just have MORE intelligence than animals?

In other words, should we interpret the amazing success of human intelligence as simply the result of a phase transition in the relationship of intelligence vs. success? Or do humans have some actual computational ability not present in animals?

Furthermore, can we settle this question empirically? If we genetically engineer a tribe of apes to have far greater intelligence than normal apes, will they begin to understand mathematics, language, and technology?

There was recently a big debate in the world of linguistics regarding the question of what computational ability actually underlies language. Hauser, Chomsky, and Fitch wrote in a paper called "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?" that the fundamental computational ability was (drum roll...) recursion. Pinker and Jackendoff wrote a critical response, and Hauser's own later research seems to suggest that the ability to use recursion is NOT unique to humans (songbirds can learn certain types of recursive grammars), so the question appears to be open.
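(For readers unfamiliar with the jargon, a minimal sketch of my own - not from the paper - of the distinction at stake: a finite-state pattern like (AB)^n can be produced by a plain loop, while the center-embedded pattern A^n B^n used in the songbird experiments is generated by a self-embedding, i.e. recursive, rule that has to keep track of nesting depth.)

```python
# Illustrative sketch (mine, not from Hauser/Chomsky/Fitch): a finite-state
# pattern vs. a center-embedded pattern generated by a recursive rule.

def finite_state(n):
    # (AB)^n: a plain loop suffices; no memory of nesting depth is needed.
    return "AB" * n

def center_embedded(n):
    # A^n B^n via the self-embedding rule S -> "A" S "B" | "":
    # the recursion implicitly counts how many A's still await a matching B.
    if n == 0:
        return ""
    return "A" + center_embedded(n - 1) + "B"

for n in range(1, 4):
    print(finite_state(n), center_embedded(n))
# AB AB
# ABAB AABB
# ABABAB AAABBB
```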

Daniel,

Eliezer has been addressing his definition of intelligence recently. See, for instance, here. You should be able to see how it relates to animal intelligence.

  1. Dennett's stock reply to those who say he is a zombie: "Yes I do have a soul; but it's made of lots of tiny robots."

  2. Can anyone divine what Eliezer is on about two minutes in - where he seems to be casting doubt on the evolution of the human brain?

  3. Things on Planet Lanier seemed as muddled as usual. We shouldn't try and build superintelligent machines - because of clippy?!? What?!?

  4. Debating partners with grand disagreements often make for confusion and misunderstanding. It's like debating creationists: you wind up stuck at first base. Personally I'd rather watch some slightly more technical content - which you might be more likely to get with debating partners that are on a vaguely-similar wavelength.

Did anyone else find it interesting that Lanier seems to be distrustful of large-scale claims about how various macro-level systems (like consciousness, reality, economics) function, yet also seems to have a very clear view on who smart people would/should elect for president? According to him, they should elect the candidate [Obama] that is looking to take a more top-down command and control position on the economy (unless he doesn't realize this is Obama's position, or thinks its not 'really' his position, or something to that effect). He also seemed to support the notion that the current financial mess was caused largely by financial actors whose overly simplistic models of the economy and of their own financial positions caused their troubles, yet he thinks politicians can do better in trying to model/control/calibrate the economic system as a whole?

Yet I suspect he also strongly believes that the global warming alarmists who depend upon faulty and imprecise models for their conclusions are right when they warn of catastrophic global warming.

Tim Tyler: As far as the "evolved brain" thing, I think the argument wasn't "did the brain evolve?", but inside the brain, is the algorithm that the brain uses to think more or less "genetic algorithms" or something more sophisticated?

Eliezer seemed to be saying "no, the brain isn't simply running a large genetic algorithm, the processes of the brain are doing something else instead."

@Max M: According to him, they should elect the candidate [Obama] that is looking to take a more top-down command and control position on the economy (unless he doesn't realize this is Obama's position, or thinks its not 'really' his position, or something to that effect).

Um, maybe you should give some backing for this statement, given that government spending has wildly increased under recent Republican administrations, and that John McCain promises even more expensive foreign wars, which translates to even more of the economy being spent on the non-productive, non-investment military, which is the very definition of a "top-down command and control" organization.

In short, there are not any libertarians running (well, there are, but none with a chance of winning), and Obama is arguably a better choice for those who have libertarian values.

Sorry to pollute the high-minded intellectual tone of this blog with current events, but believe it or not this stuff is important.

Perhaps more on-topic, the Republicans under Bush declared themselves to be separate from what they called the "reality-based community". I assume anyone in the orbit of Overcoming Bias would prefer to be on the side of the RBC.

You buy the Boltzmann brain argument? How did you calculate the probabilities? Nobody knows the probability of what seems to be our universe forming

That's not the right probability to be concerned about.

and certainly nobody knows the probability of a Boltzmann brain forming in *a universe* of unknown size and age.

(Emphasis added by me.)

Again, you're asking about the wrong thing. Limiting the analysis to a single universe misses the point and -- as you rightfully object -- reduces the hypothesis to what is nearly a necessarily false statement. But there's no reason to limit analysis to a single universe.

Can you even state, explicitly, what it is that you mean by 'universe'?

Further political comments will be deleted.

Thanks, Psy-Kosh. I see. So - according to J.L. - the Pauling quote that originated this theory was: "The way to get good ideas, is to get lots of ideas and throw the bad ones away." I think I'll take Without Miracles, instead.

I'm 20 minutes in and wish Lanier's connection would just cut off and Eliezer would talk by himself.

"Eliezer occasionally looked like he was having trouble following Lanier's reasoning. I certainly did. My guess is that this is because, on those occasions, Lanier didn't have a reasoning."

That's my feeling too. He seemed to love calling anyone who disagrees with him an "idiot" or "religious nut" without ever really explaining why.

I'm going to keep watching because I expect Eliezer to say some interesting stuff.

This was disappointing. Lanier could not defend ANYTHING he said! He constantly changed the topic or argued from ignorance: we don't know, so we can't know. Or: I'm a really smart guy who has worked in this field for a long time and I have not found this idea useful - OK, a valid but really limited statement. It's also true that people can treat AI as a religion and it can make them nutty. But just because people believe wrong things about something does not make the subject itself wrong. Example: quantum mechanics.

It is true that I have found AI and singularity people off-putting, but Lanier's inability to defend his position has moved me to be more sympathetic to the efforts of strong AI people.

I think the one thing I would like to see defended is the time-frame argument for AI. You do act differently depending on whether you think AI is imminent or not. It could be ten years or it could be a few thousand. It does not seem decidable.

PS. Lanier started this BHTV clip trying to argue for epistemological soundness when considering AI statements, and here (http://bloggingheads.tv/diavlogs/15555?in=19:30&out=20:48) he rejects epistemology completely. Why did he not just defend himself? He seemed to say: yes, I do accept an unjustified belief. He said that clearly, and it seems defensible, but then he seems to deny that that is even the point. Why? He seems to want to avoid saying anything concrete; even when he has just said something defensible, he goes on to change the subject and say this is not something you can even talk about.

Lanier's thinking comes across to me as muddy and mixed up, and his argumentative style as obnoxious. I tried, but failed to distill anything of value from his side of it. You're a patient man, Eliezer.

Loved your diavlog, Eliezer. I hope to see more of you on bloggingheads in the future.

I thought the session was boring. I think that what was at issue was not addressed. Lanier is protecting certain things (at all costs) from the probing light of reductionist science. I started to think that he believed he had a soul while he called others religious. It was as if he thought that anyone who believed strongly in anything biggish that was different from what he believed must be religious. He ended up just sounding like an idiot. Cameron Taylor's reference to laughter was interesting - it was covering something uncomfortable.

The laughter thing in argument is interesting. At BHTV there are a few who use it constantly, and they come across as very likable people because of it, but there is also a lack of rigorous thought and defensibility in the POV. Or is it just recognition of the absurd in a very fundamental way (nihilism)? They would never admit to nihilism, but I think this is a practice of it.

This is particularly a good watch if someone wanted to do an essay about laughter in argument. http://bloggingheads.tv/diavlogs/15319 (really, watch it. A fundamentalist that believes we should have lots of babies and militarize space so we can build a space elevator and go to Mars.) Compare his argument and debate style with Lanier's.

With all due respect and truly no offense meant, I worry some commenters here may be missing Jaron's position. He quickly appears to conclude that he cannot really discuss any issues with EY because they don't even share the same premises. He makes this clear early on in his points on humanism and epistemology.

Jaron's laughter seems largely the laughter of frustrated politesse. This comes out in his speech when he repeats to EY "I've been having this discussion for decades." He appears to feel that his thinking has moved on, but EY's may be stuck. He also says a couple of times "We could have a conversation about. . .this . . .or about . . .that," which indicates his stoppage and almost a near-puzzlement with EY.

He wants to take EY at his word, but ultimately appears to conclude that he and EY have so little in common intellectually that he resorts to simply imploring EY to be a better scientist.

I first met Jaron more than a decade ago at the Knitting Factory and I must say I am surprised at how little he has changed - still so good-humored - I enjoy Jaron, so perhaps this colors my understanding. But it was sad to see the two talk past each other so much.

He quickly appears to conclude that he cannot really discuss any issues with EY because they don't even share the same premises.

So they should establish what premises they DO share, and from that base, determine why they hold the different beliefs that they do.

I find it unlikely that they don't share any premises at all. Their ability to communicate anything, albeit strictly limited, indicates that there's common ground of a sort.

I haven't watched the video, having only a slow dial-up connection, and probably wouldn't have even if I had a faster connection. Video and audio are a distraction when considering intellectual matters. Your biases will have a field day with all the intellectually extraneous information you receive. There was research done in the 1990s, I haven't been able to find a reference, that given a written transcript, "juries" performed more accurately than with an audio recording, which in turn was more accurate than an audiovisual.

Frelkins and Marshall pretty well sum up my impressions of the exchange between Jaron and EY.

Perhaps pertinent, I'd suggest an essay on OvercomingBias on our unfortunate tendency to focus on the other's statements, rather than focusing on a probabilistic model of the likelihood function generating those statements. Context is crucial to meaning, but must be formed rather than conveyed. Ironically—but reflecting the fundamentally hard value of intelligence—such contextual asymmetry appears to work against those who would benefit the most.

More concretely, I'm referring to the common tendency to shake one's head in perplexity and say "He was so wrong, he didn't make much sense at all." in comparison with laughing and saying "I can see how he thinks that way, within his context (which I may have once shared.)"

[anonymous]

@frelkins

That's it exactly! I felt Lanier trying to convey just those beliefs at me, with all his charm, dreadlocks and laughter backing him up. He had such 'patience' in accepting Eliezer's social faux pas of expecting a premise to be clarified and made explicit before it is wielded as a bludgeon. He also demonstrates 'initiative', 'leadership' and 'dominant status' by benevolently and assertively leading a conversation away from conflict. He does of course attempt to obscure the fact that the only conflict is between the Lanier of one moment, the reflection of Lanier 5 seconds ago, and an appeal for consistency.

I felt my biases being tugged at artfully; Lanier has a strong personality. In fact, the persistent substitution of manipulation for message was very nearly screaming at me. Put anywhere near a project that was likely to succeed in achieving an AGI, that guy would be extremely dangerous. I'm sure there are half a dozen ethical injunctions that I don't know about against what I would do should that circumstance arise.

"Jaron's laughter seems largely the laughter of frustrated politesse. This comes out in his speech when he repeats to EY "I've been having this discussion for decades.""

I think that's BS. If Jaron didn't want to discuss AI, then why agree to a BHTV episode with Eliezer, a research fellow at the Singularity Institute for Artificial Intelligence?

Eliezer tried to understand what Jaron was saying and asked him questions to get him to better explain his positions. Jaron pretty much never tried to make himself clear (probably because there wasn't much to explain in the first place), and he never really explained what he didn't like about Eliezer's position.

How long he's been having this conversation ("for decades" or whatever) only means that he's been having it for a long time, not that he has convincing arguments or that there's any value to what he says.

Lake

Lanier struck me as a sort of latterday Rorty: broadly a pragmatist; suspicious about the rigidity of linguistic meaning; unwilling to try to refute big visions but rather inclined to imply that he finds them silly and that perhaps any decently civilized person should too.

The trouble with this outlook is, if your sense of what's silly is itself miscalibrated, there's not much anyone can do to help you. Moreover if meaning really is too slippery and amorphous to make debating big visions worthwhile, presumably the bright thing to do would be to avoid those debates altogether. As opposed to turning up and chuckling through them.

I wonder what Robin made of the discussion, perceived silliness being one of his hot buttons and all.

@Lake

I wouldn't say Jaron thinks the discussion is "silly." He has several rather stark differences with EY, which he makes clear. First, it's obvious - and he basically comes out and says so - that he doesn't regard EY as a humanist.

EY at one point is forced to plaintively protest "But I'm a humanist!" It's clear they have differing definitions of humanism, and also as to what constitutes "knowability" in epistemology. That's clear from the beginning.

It also seems clear that Jaron has a different idea of free will than EY, altho' he doesn't say so plainly, and he doesn't use the word "indeterminism." I think at bottom this is his discomfort with the idea that consciousness "works" or that people can be seriously compared to machines.

He's also clear that he has a vastly different conception of what an AI is or would be. Finally Jaron just seems to repudiate the use of the term altogether, and laughs with frustration at EY's facial response.

Other differences Jaron states with more social formatting, because you know he really is a good-natured guy. That interesting remark about mathematical vs. scientific knowledge - he seems to be thinking of himself as an engineer-scientist-applied geek, which is why he keeps stressing all the stuff he's built.

In contrast, he seems to regard EY as a theorist, even an artist. This is an old debate, the theory guys vs. the experimentalists.

I think it is exactly such stark examples of communication failure that drive Robin forward in life. It reminded me of his talk at OSCON07.

Robin Hanson at OSCON07.

Feedback for Robin: I wish people would not "skate over" Aumann's theorem on "not agreeing to disagree" like this so much. It took me personally rather a long time to figure out that Aumann was actually saying something deep and interesting - and a few slightly more comprehensive summaries would have helped me a lot.

The penny dropped for me upon reading this: "Once Robin tells me his estimate, and I tell him mine, we should converge upon a common opinion. In essence his opinion serves as a "sufficient statistic" for all of his evidence." ...which I didn't immediately understand (and indeed it's not really correct). Then I looked up Aumann's paper and found everything spelled out.
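(Here is a toy numerical sketch of my own of that "sufficient statistic" reading - under the strong simplifying assumptions that both parties share a common prior, announce their full posteriors honestly, and know how much data the other saw - just to show the mechanical sense in which exchanging opinions can substitute for exchanging evidence.)

```python
# Toy sketch (illustrative assumptions, not Aumann's proof): two agents share a
# Beta(1, 1) prior over a coin's bias and each privately observe some flips.
def posterior(heads, tails, prior=(1, 1)):
    # Beta-Binomial update: the posterior is Beta(alpha + heads, beta + tails).
    return prior[0] + heads, prior[1] + tails

def mean(alpha, beta):
    return alpha / (alpha + beta)

a_post = posterior(7, 3)   # A privately saw 7 heads, 3 tails (made-up data)
b_post = posterior(2, 8)   # B privately saw 2 heads, 8 tails (made-up data)
print("A announces:", mean(*a_post))   # ~0.667
print("B announces:", mean(*b_post))   # 0.25

# A's announced posterior summarizes all of A's evidence, so B (who knows how
# many flips A saw) can fold it in; B's reply then reflects everything, and A
# should simply adopt it - they agree after one exchange.
both = posterior(7 + 2, 3 + 8)
print("After the exchange, both say:", mean(*both))   # ~0.455
```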

@Tim Tyler

Let's just bite the bullet. We all know better. In the spirit of Robin's talk, why didn't EY & Jaron agree? Why didn't either one move?

Crucially, why have we, the observers of the disaster, not moved? To repeat myself from the open thread: what's wrong with us? What's wrong with me?

Disappointing. I kept on waiting for Eliezer to say some sort of amazingly witty thing that would cause everything Jaron was saying to collapse like a house of cards, but either he was too polite to interrupt or the format wasn't his style.

At first I thought Jaron was talking nonsense, but after thinking it over for a while, I'm prepared to give him the benefit of the doubt. He said that whether a computer can be intelligent makes no difference and isn't worth talking about. That's obviously wrong if he's using a normal definition of intelligent, but if by intelligent he means "conscious", it makes a lot of sense and he's probably even right - there's not a lot of practical value in worrying about whether an intelligent computer would be conscious (as opposed to a zombie) at this point. He wouldn't be the first person to use those two words in weird ways.

I am also at least a little sympathetic to his "consciousness can't be reduced" argument. It made more sense once he said that consciousness wasn't a phenomenon. Still not perfect sense, but trying to raise something stronger from its corpse, I would argue something sort of Kantian like the following:

Goldbach's conjecture (in its weak form) says that every odd number greater than 5 is the sum of three primes. It hasn't been proven, but there's a lot of inductive evidence for it. If I give you a difficult large number, like 20145, you may not be capable of figuring out the three primes, but you should still guess they exist. Even if you work quite hard to find them and can't, it's still more likely that it's a failure on your part than that the primes don't exist.
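(As a concrete check of that claim for 20145 - a quick brute-force sketch of my own, not anything from the diavlog - a search does turn up such a triple.)

```python
# Brute-force search for three primes summing to n (illustrative sketch only).
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def three_prime_sum(n):
    for p in range(2, n):
        if not is_prime(p):
            continue
        for q in range(p, n - p):
            if not is_prime(q):
                continue
            r = n - p - q
            if r >= q and is_prime(r):
                return p, q, r
    return None

print(three_prime_sum(20145))  # prints a decomposition, e.g. (3, 13, 20129)
```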

However, North Dakota is clearly not the sum of three primes. Even someone with no mathematical knowledge can figure this out. This statement is immune to all of the inductive evidence that Goldbach's conjecture is true, immune to the criticism that you simply aren't smart enough to find the primes, and doesn't require extensive knowledge of the history and geography of North Dakota to make. It's just a simple category error.

Likewise, we have good inductive evidence that all objects follow simple scientific/reductionist laws. A difficult-to-explain object, like ball lightning, probably still follows scientific/reductionist laws, even if we haven't figured out what they are yet. But consciousness is not an object; it's the subject, that by which objects are perceived. Trying to apply rules about objects to it is a category error, and his refusal to do so is immune to the normal scientific/reductionist criticisms you would level against someone who tried that on ball lightning or homeopathy or something.

I'm not sure if I agree with this argument, but I think it's coherent and doesn't violate any laws of rationality.

I agree with everyone who found his constant appeal to "I make algorithms, so you have to believe me!" and his weird nervous laughter irritating.

frelkins,

I don't know what's wrong with you, and I don't know what's wrong with me, but what's wrong with Lanier is that he holds the belief that consciousness can't be understood, and he's so certain of this that, in his mind, anyone who doesn't agree with him must be a religious extremist. Furthermore, for him this ontological specialness of human beings is the basis of humanism.

Asking why he didn't move on that point is asking why he won't change his entire worldview. The answer is, because it would hurt too much.

How do you suffer morons like this...?

I was trying to figure out why you are having dialogues with complete fools, but apparently this guy is famous or something.

huh.

@Yvain

"But consciousness is not an object; it's the subject, that by which objects are perceived. Trying to apply rules about objects to it is a category error"

Yes I agree, Jaron appears to argue as a neo-Kantian, a la Thomas Nagel. This is an intellectually respectable position. Robin has called Nagel excellent.

@Janus

If we were rational in good faith and sought truth, wouldn't we all have moved closer to Jaron? Otherwise, what are we?

It takes a very healthy sense of humor I suppose :)

Yvain, nicely put.

Another kind of argument, which I'm not sure if Lanier was making but other people have, is that you can be a naturalist without being a reductionist, and you can be a reductionist without believing that computation is the right model for human brains. EY himself has pointed out that certain forms of symbolic AI are misleading, since naming your Lisp symbol UNDERSTAND does not mean you have implemented understanding. Lanier is making a similar but stronger case against computation in general.

More reasoned critiques of computationalism from within the field have been produced by Rod Brooks, David Chapman, Phil Agre, Terry Winograd, Lucy Suchman, and others. I'd really recommend starting with them rather than revisiting the stale and ridiculous zombie argument and its relatives.

Lake

@ frelkins:

Eliezer's brand of humanism seems to consist in endorsing many of the values of traditional humanism while ditching the metaphysics. Jaron seemed to think the metaphysical stuff - specifically, psychological dualism of some sort - is indispensable. I'm not sure who should have proprietary rights over the word, but that argument is surely more about brand ownership than anything deep. And surely there's little enough to recommend dualism in itself.

Jaron's epistemic caution also struck me as being slightly odd. It's one thing to beware delusion, accept the likelihood of substantial error and press on as best you can, revising beliefs where necessary. But Jaron seemed to be moving into more sceptical territory, saying in effect that if it seems P, that isn't grounds for believing P - because of "epistemology".

Can you unpack the stuff about consciousness, free-will and indeterminism a bit? Consciousness is the bit that's usually taken as evidence for dualism. Why talk about the other things? Free will is a busted flush anyway, isn't it? Never mind the intricacies of the physics; we automatically accept responsibility for bodily movements which have been artificially induced. I'm sure Jaron knows a great deal more about all this than I do but, from an interested outsider's perspective, it isn't at all clear how these notions are meant to hang together.

I also found it frustrating that Lanier refused to engage on specifics.

Which is ironic, because I also came to the conclusion that the problem here was it was a discussion between (someone acting in the role of) a philosopher and (someone acting in the role of) an engineer, and it was the engineer who was refusing to talk about specifics.

Where I term philosophy the stuff you do with topics you don't have enough solid information yet to make a full-fledged science out of (as touched on in the discussions of the natural philosophy phase which preceded the science of biology, etc.).

Lanier may be correct that ignoring the philosophical side may make you a better scientist for certain strict definitions of scientist, but it may also fail to make future scientists better scientists. Just because the philosophical work precedes the formation of a full fledged science doesn't mean you can skip it. It's important for people to be thinking about these things even if we don't know much about them.

It's a very different kind of work from engineering though, and it may be that Lanier is uninterested in that work. I know I get quickly bored and frustrated when I try to spend much time doing that kind of work. But I try to distinguish between things that bore me and things which are unuseful.

At ~29min: Lanier: "But that wasn't my statement." Yes...yes it was. It's on record too!

A very impressive interview - I have gained much respect for Eliezer's patience.

"A very impressive interview - I have gained much respect for Eliezer's patience."

In a way, I think that maybe the most important stuff in this interview is what didn't happen. Eliezer indeed seems to possess super-human patience.

Come on, let's be honest: Eliezer got his ego handed to him in that debate; if things don't fit into nice logical categories he doesn't have a clue how to deal with them. The issue here is that most of Eliezer's big arguments that people wait for are based on imagined evidence, such as the possibility of superhuman AGI, belief in which is tantamount to religious fanaticism since the evidence for such does not exist and nobody knows what it will look like. The current arguments, until the creation of superhuman AGI, are hand-waving. Further, Eliezer does not have the math background or science background to deal with Lanier. You can't hope to build AI with philosophy and some basic Bayesian statistics. The issue of AGI is an engineering issue, and unfortunately engineering requires math, and from what I can see, having read all of Eliezer's papers online, he can't make the cut.

It's pretty pathetic how weak Lanier's arguments were...

For someone who works in the field for Microsoft and who's a professor, it's bad how easy it was for me to spot the holes in his arguments (including the places where he really had none).

It seems like he's someone who it'd be really hard to have a reasonable discussion with. He just seemed really slippery in general, Eliezer being unable to pin him down at all, although I suspect he wasn't trying his hardest to, to be polite. There should really be another word for what he was doing other than slippery, but that's the best I can come up with...

I agree with some people above, you should do a bloggingheads with someone more intelligent or at least someone who's willing to say what they think instead of just sliding around being slippery the entire diavlog.

Oh, I forgot to add that I think it kind of highlights the difference between people who have no room for magic in their worldview and people who still have room for magic, even if they don't call it magic and don't want to admit it. I don't understand how someone can hold this view...