Comment author: Robin_Z 15 June 2008 12:14:09AM 3 points [-]

Joseph Knecht: Why do you think that the brain would still be Eliezer's brain after that kind of change?

(Ah, it's so relaxing to be able to say that. In the free will class, they would have replied, "Mate, that's the philosophy of identity - you have to answer to the ten thousand dudes over there if you want to try that.")

Comment author: Robin_Z 14 June 2008 07:28:17PM 0 points [-]

Andy Wood: So, while I highly doubt that CC is equivalent to my view in the first place, I'm still curious about what view you adopted to replace it.

I suspect (nay, know) my answer is still in flux, but it's actually fairly similar to classical compatibilism - a person chooses of their own free will if they choose by a sufficiently-reasonable process and if other sufficiently-reasonable processes could have supported different choices. However, following the example of Angela Smith (an Associate Professor of Philosophy at the University of Washington), I hold that free will is not required for responsibility. After all, is it not reasonable to hold someone responsible for forgetting an important duty?

Comment author: Robin_Z 14 June 2008 12:37:30PM 1 point [-]

Hmm, it seems my class on free will may actually be useful.

Eliezer: you may be interested to know that your position corresponds almost precisely to what we call classical compatibilism. I was likewise a classical compatibilist before taking my course - under ordinary circumstances, it is quite a simple and satisfactory theory. (It could be your version is substantially more robust than the one I abandoned, of course. For one, you would probably avoid the usual trap of declaring that agents are responsible for acts if and only if the acts proceed from their free will.)

Hopefully Anonymous: Are you using Eliezer's definition of "could", here? Remember, Eliezer is saying "John could jump off the cliff" means "If John wanted, John would jump off the cliff" - it's a counterfactual. If you reject this definition as a possible source of free will, you should do so explicitly.

In response to Class Project
Comment author: Robin_Z 31 May 2008 01:56:53AM 11 points [-]

This is the limit of Eld science, and hence, the limit of public knowledge.

Wait, so these people are doing this only for recreation?

No - this is Eliezer's alternate universe storyline in which the science-equivalent is treated as a secret the same way the Pythagoreans did. The initiates - the people with access to the secret knowledge - use it for technology, just as we do, except because the general public doesn't know the science, the tech looks amazing.

The idea, I believe, is to reduce the attraction of bogus secret societies. In Brennan's world, anyone who made one would be challenged to accomplish as great or greater feats than the Bayesians - a task that a mere mystery cult would fail at.

Comment author: Robin_Z 24 May 2008 09:08:35PM 0 points [-]

Richard Kennaway: I don't think we actually disagree about this. It's entirely possible that doubling the N of a brain - whatever the relevant N would be, I don't know, but we can double it - would mean taking up much more than twice as many processor cycles (how fast do neurons run?) to run the same amount of processing.

In fact, if it's exponential, the speed would drop by orders of magnitude for every constant increase in N. That would kill superintelligent AI as effectively as the laws of thermodynamics killed perpetual motion machines.

On the other hand, if you believe Richard Dawkins, Anatole France's brain was less than 1000 cc, and brains bigger than 2000 cc aren't unheard of (he lists Oliver Cromwell as an unverified potential example). Even if people are exchanging metaphorical clock rate for metaphorical instruction set size and vice-versa, and even if people have different neuron densities, this would seem to suggest the algorithm isn't particularly high-order, or if it is, the high-order bottlenecks haven't kicked in at our current scale.

Comment author: Robin_Z 24 May 2008 01:01:33PM 0 points [-]

Richard Kennaway: I don't know what you mean - the subset-sum problem is NP-complete, and the best known exact algorithm runs in O(2^(N/2) * N) time, so - given enough resources - it can be run on lists of any size. It scales - it can be run on bigger sets - even if it is impractical to. Likewise, the traveling salesman problem can be solved in O(N^2 * 2^N) time. What I'm asking is whether there are any problems where we can't change N. I can't conceive of any.
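(To make the scaling claim concrete, here is a minimal sketch of the meet-in-the-middle approach to subset-sum - the Horowitz-Sahni idea behind that O(2^(N/2) * N) bound. It is an illustrative sketch, not a tuned implementation: it enumerates half-subsets with itertools rather than the sorted-list merge of the original algorithm, which keeps the same exponential scaling in N.)

```python
from itertools import combinations

def subset_sum(nums, target):
    """Decide whether some subset of nums sums to target.

    Meet-in-the-middle: split the list in half, enumerate all
    2^(N/2) subset sums of each half, then check whether some
    left-half sum s pairs with a right-half sum target - s.
    """
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(items):
        # All 2^len(items) subset sums, including the empty subset (0).
        sums = set()
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                sums.add(sum(combo))
        return sums

    right_sums = all_sums(right)
    return any(target - s in right_sums for s in all_sums(left))
```

The point is that nothing in the algorithm fixes N: you can always hand it a bigger list, and it will run - just exponentially slower.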

Comment author: Robin_Z 23 May 2008 03:03:19PM 2 points [-]

The Turing test doesn't look for intelligence. It looks for 'personhood' - and it's not even a definitive test, merely an application of the point that something that can fool us into thinking it's a person is due the same regard we give people.

I said the Turing test was weak - in fact, I linked an entire essay dedicated to describing exactly why the Turing test was weak. I did so entirely to accent your point that we don't know what we're looking for. What we are looking for, however, is - by the Church-Turing thesis - an information-processing algorithm, and I invite the computer scientists et al. here to name any known information-processing algorithm which doesn't scale.

Comment author: Robin_Z 23 May 2008 01:53:37PM 1 point [-]

I'm not denying your point, Caledonian - right now, our best conception of a test for smarts in the sense we want is the Turing test, and the Turing test is pretty poor. If we actually understood intelligence, we could answer your questions. But as long as we're all being physicalists, here, we're obliged to believe that the human brain is a computing machine - special purpose, massively parallel, but almost certainly Turing-complete and no more. And by analogy with the computing machines we should expect to be able to scale the algorithm to bigger problems.

I'm not saying it's practical. It could be the obvious scalings would be like scaling the Bogosort. But it would seem to be special pleading to claim it was impossible in theory.

Comment author: Robin_Z 23 May 2008 12:30:22PM 0 points [-]

I have to admit to some skepticism as well, Caledonian, but it seems clear to me that it should be possible with P > .99 to make an AI which is much smarter but slower than a human brain. And even if increasing the effective intelligence goes as O(exp(N)) or worse, a Manhattan-project-style parallel-brains-in-cooperation AI is still not ruled out.

In response to Rationality Quotes 3
Comment author: Robin_Z 18 May 2008 02:56:50PM 1 point [-]

Oddly enough, Lincoln didn't actually say exactly that. A minor distinction, true, but there it is.
