All of Robin_Z's Comments + Replies

Robin_Z00

Brandon Reinhart: Jack Thompson. (Fortunately, he's been disbarred, now, so maybe that particular vein of stupidity is getting tapped out.)

Robin_Z00

Will Pearson: are you suggesting that the simplest algorithm for intelligence is too large to fit in human memory?

Robin_Z10

Dude. Dude. No wonder you've been so emphatic in your denunciations of mysterious answers to mysterious questions.

Robin_Z110

Regarding the first reply here (a year later...): perhaps there is another problem visible here, the problem of when advice is too plain. The story advises in a fashion so transparently evident that even SHRDLU could get it: the poor student quite literally wasn't looking at anything, so Pirsig/Phædrus gave her a topic so mundane that she had to go down and see for herself. If Zen and the Art of Motorcycle Maintenance were a math textbook, the rule would be clear: "if you examine something, you will have something to say about it." But because wr... (read more)

Robin_Z20

You're right that he should be able to engage standard critiques, Zubon, but if my (negligible) experience with the philosophy of free will is any indication, many "standard critiques" are merely exercises in wooly thinking. It's reasonable for him to step back and say, "I don't have time to deal with this sort of thing."

Robin_Z10

Wow, there are a lot of nihilists here.

I answered on my own blog, but I guess I'm sort of with dloye at 08:54: I'd try to keep the proof a secret, just because it feels like it would be devastating to a lot of people.

Robin_Z10

Robin Hanson: I don't think that's what he's getting at. Yes, surface similarities are correlated with structural similarities, or mathematical similarities (I know of a guy who found a couple of big research papers towards his astrophysics PhD via a colleague's analogy between gravitational and electromagnetic waves), but they show up so often under other circumstances that it is meet to be suspicious of them. The outside view works really well for Christmas shopping, essay writing, program development, and the like because it is obvious that the structural similarities are present.

Robin_Z00

Joseph Knecht: When you say that you haven't seen evidence that puts "soul" on shaky grounds, [...]

Sorry, poor wording - please substitute "but nor have I seen evidence against 'choice' of the kind which puts 'soul' on shaky grounds." I am familiar with many of the neurological arguments against souls - I mentioned the concept because I am not familiar with any comparable evidence regarding choice. (Yes, I have heard of the experiments which show nervous impulses towards an action prior to the time when the actor thought they decided. That's interesting, but it's no Phineas Gage.)

Robin_Z00

Joseph Knecht: It is a clash of intuitions, then? I freely admit that I have seen no such account either, but nor have I seen the kind of evidence which puts "soul" on shaky grounds. And "fire" is comparably ancient to "soul", and still exists.

In fact, "fire" even suggests an intermediate position between yours and that which you reject: chemically, oxidation reactions like that of fire show up all over the place, and show that the boundary between "fire" and "not fire" is far from distinct. Would it be surprising were it found that the boundary between the prototypical human choices Eliezer names and your not-choices is blurry in a similar fashion?

Robin_Z60

Kip Werking, I can see where you're coming from, but "free will" isn't just some attempt to escape fatalism. Look at Eliezer's post: something we recognize as "free will" appears whenever we undergo introspection, for example. Or look at legal cases: acts are prosecuted entirely differently if they are not done of one's "free will", contracts are annulled if the signatories did not sign of their own "free will". We praise good deeds and deplore evil deeds that are done of one's own "free will". Annihilation of free will requires rebuilding all of these again from their very foundations - why do so, then, when one may be confident that a reasonable reading of the term exists?

Robin_Z50

Joseph Knecht: Why do you think that the brain would still be Eliezer's brain after that kind of change?

(Ah, it's so relaxing to be able to say that. In the free will class, they would have replied, "Mate, that's the philosophy of identity - you have to answer to the ten thousand dudes over there if you want to try that.")

Robin_Z00

Andy Wood: So, while I highly doubt that CC is equivalent to my view in the first place, I'm still curious about what view you adopted to replace it.

I suspect (nay, know) my answer is still in flux, but it's actually fairly similar to classical compatibilism - a person chooses of their own free will if they choose by a sufficiently-reasonable process and if other sufficiently-reasonable processes could have supported different choices. However, following the example of Angela Smith (an Associate Professor of Philosophy at the University of Washington), I h... (read more)

Robin_Z10

Hmm, it seems my class on free will may actually be useful.

Eliezer: you may be interested to know that your position corresponds almost precisely to what we call classical compatibilism. I was likewise a classical compatibilist before taking my course - under ordinary circumstances, it is quite a simple and satisfactory theory. (It could be your version is substantially more robust than the one I abandoned, of course. For one, you would probably avoid the usual trap of declaring that agents are responsible for acts if and only if the acts proceed from thei... (read more)

Robin_Z210

This is the limit of Eld science, and hence, the limit of public knowledge. Wait, so these people are doing this only for recreation?

No - this is Eliezer's alternate-universe storyline, in which the science-equivalent is treated as a secret the way the Pythagoreans treated their mathematics. The initiates - the people with access to the secret knowledge - use it for technology, just as we do, except that because the general public doesn't know the science, the tech looks amazing.

The idea, I believe, is to reduce the attraction of bogus secret societies. In Brennan's world, an... (read more)

1lessdazed

The societies I have been told about are limited in scope. Only a rationalistic conspiracy would be in direct competition with the Bayesians. The Cooperative Conspiracy and Bayesian Conspiracy apparently allow open membership in both, while the Cooperative Conspiracy would probably be in competition with an Individualist Conspiracy or Competitive Conspiracy.

Of course, even the Model Airplane Conspiracy could restrict members to only their conspiracy, preventing them from being Bayesian Conspirators despite the conspiracies' dissimilar subjects, particularly if the Bayesians forbade hiding one's identity.

Insofar as one ought to speak and think cleanly, secret societies would not be challenged to show results - this is to talk as if under the sway of their mystery. They would be assumed to be worse than useless until they demonstrated results - useless at first thought because mystery has no value, less than that at second thought because not showing value is evidence of not being able to show it, which is evidence of not having it.
billswift230

There are actually multiple reasons, some stories stress different ones. The one I like is that by keeping the results secret, they can train students in discovery by encouraging/forcing them to rediscover the laws as part of their training.

Robin_Z30

Richard Kennaway: I don't think we actually disagree about this. It's entirely possible that doubling the N of a brain - whatever the relevant N would be, I don't know, but we can double it - would mean taking up much more than twice as many processor cycles (how fast do neurons run?) to run the same amount of processing.

In fact, if it's exponential, the speed would drop by orders of magnitude for every constant increase. That would kill superintelligent AI as effectively as the laws of thermodynamics killed perpetual motion machines.

On the other hand, if ... (read more)

Robin_Z00

Richard Kennaway: I don't know what you mean - the subset-sum problem is NP-hard (and NP-complete), and the best known algorithms can - given unlimited resources - be run on lists of any size in O(N 2^(N/2)) time. It scales - it can be run on bigger sets - even if doing so is impractical. Likewise, the traveling salesman problem can be solved in O(N^2 2^N) time. What I'm asking is whether there are any problems where we can't change N. I can't conceive of any.
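For what it's worth, the O(2^(N/2) N) figure for subset sum comes from the meet-in-the-middle trick: enumerate the subset sums of each half of the list, then look for complementary pairs across the halves. A rough Python sketch (the function and variable names are my own, not from the comment):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Meet-in-the-middle subset sum: split the list in half,
    enumerate each half's 2^(N/2) subset sums, then check whether
    some left-half sum has its complement among the right-half sums."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(items):
        # Every subset sum of items - 2^len(items) of them at most.
        return {sum(c) for r in range(len(items) + 1)
                for c in combinations(items, r)}

    right_sums = all_sums(right)
    return any(target - s in right_sums for s in all_sums(left))
```

(The textbook version sorts one half and binary-searches it, which is where the extra factor of N in the bound comes from; using a hash set here just hides that bookkeeping.)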

Robin_Z30

The Turing test doesn't look for intelligence. It looks for 'personhood' - and it's not even a definitive test, merely an application of the point that something that can fool us into thinking it's a person is due the same regard we give people.

I said the Turing test was weak - in fact, I linked an entire essay dedicated to describing exactly why the Turing test was weak. In fact, I did so entirely to accent your point that we don't know what we're looking for. What we are looking for, however, is, by the Church-Turing thesis, an algorithm, an information... (read more)

Robin_Z20

I'm not denying your point, Caledonian - right now, our best conception of a test for smarts in the sense we want is the Turing test, and the Turing test is pretty poor. If we actually understood intelligence, we could answer your questions. But as long as we're all being physicalists, here, we're obliged to believe that the human brain is a computing machine - special purpose, massively parallel, but almost certainly Turing-complete and no more. And by analogy with the computing machines we should expect to be able to scale the algorithm to bigger problem... (read more)

Robin_Z20

I have to admit to some skepticism as well, Caledonian, but it seems clear to me that it should be possible with P > .99 to make an AI which is much smarter but slower than a human brain. And even if increasing the effective intelligence goes as O(exp(N)) or worse, a Manhattan-project-style parallel-brains-in-cooperation AI is still not ruled out.

Robin_Z10

Oddly enough, Lincoln didn't actually say exactly that. A minor distinction, true, but there it is.

Robin_Z00

Not replying to the comment thread: I think the quote might actually be Deuteronomy 13:6-10 in the King James Version.

Robin_Z30

Oh, that's subtle.

Check me if I'm wrong: according to the MWI, the evolving waveform itself can include instantiations of human beings, just as an evolving Conway's Life grid can include gliders. Thus, if we're proposing that humans exist (a reasonable hypothesis), they exist in the waveform, and if the Bohmian particles do not influence the evolution of the waveform, they exist in the waveform the same way whether or not Bohm's particles are there. And, in fact, if they do not influence the amplitude distribution, they're epiphenomenal in the same sense t... (read more)
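The Life analogy can be made concrete: a glider is nothing over and above the evolving grid - it is just a configuration that the update rule carries along, the same way the argument says humans exist within the evolving waveform. A minimal sketch (the code and all names in it are mine, not from the comment):

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one Life generation."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation with exactly 3 live neighbors,
    # or with 2 if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider: after four generations the same pattern
# reappears, translated one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Nowhere in `step` is there a rule about gliders; the glider "exists" only as a stable, moving pattern in the grid's evolution - which is the sense in which the comment says humans exist in the waveform.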

Robin_Z00

Having quantum collapses IS having Many Worlds... unless and until you can demonstrate that the two are different in some way.

...

I do not believe that word means what you think it means.

Robin_Z10

03:16 was me - curse you, Typepad!

Robin_Z40

Correct me if I am wrong, but MWI does have noticeable consequences, or at least implications: for example, interference at all length-scales and proper evaluation of the waveform equations implying the Born probabilities. Neither of these are implicit in the Copenhagen interpretation - in fact, the first is contradicted.

Robin_Z83

...wait, the collapse postulate doesn't suggest different results? In order for collapse to occur, the amplitude-summing effect we see at the level of particles would have to vanish at some point. Which implies that above that point, "interference" effects will vanish.

We might have a hard time running the experiment, but that sounds like a different result to me.

Robin_Z150

Unknown, I don't think Egan's Law has anything to do with facing reality. If I read it correctly, Egan is saying that any theory (e.g. quantum mechanics, general relativity, the standard model) ought to predict normal events on the level of normal events. If relativity predicted that a ball dropped from a height of 4.9 meters would take 5.3 seconds to hit the ground, relativity would be disproven. It all must add up to normality.
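To spell out the arithmetic behind that example (ordinary Newtonian free fall from rest, with g taken as 9.8 m/s^2 - the numbers are mine, chosen to match the 4.9-meter figure above):

```python
import math

g = 9.8  # m/s^2, ordinary surface gravity
h = 4.9  # m, the drop height in the example

# Free fall from rest: h = (1/2) g t^2, so t = sqrt(2h / g).
t = math.sqrt(2 * h / g)
print(t)  # 1.0 - the ball lands in one second, not 5.3
```

A theory predicting 5.3 seconds would be off by more than a factor of five at the level of everyday events - exactly the kind of failure to add up to normality that would disprove it.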

Robin_Z20

Robin Z: The motivation for suspecting that something funny happens as you try to scale up decoherence to full-blown many-worlds comes from the serious problems that many-worlds has. Beyond the issue with predicting the Born postulate, there are serious conceptual problems with defining individual worlds, even emergently.

Enough said - I withdraw my implied objection. I, too, hope the experiment you refer to will provide new insight.

Robin_Z90

Is there any reason to believe that something interferes with the physics between "microscopic decoherence" and "macroscopic decoherence" that affects the latter and not the former? I ask because I'm getting strong echoes of the "microevolution vs. macroevolution" misconception - in both cases, people seem to be rejecting the obvious extension of a hypothesis to the human level.