
Comment author: roko3 28 December 2008 05:55:26PM 0 points

I think we're all out of our depth here. For example, do we have an agreed-upon, precise definition of the word "sentient"? I don't think so.

I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.

Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".

Comment author: roko3 02 December 2008 09:23:07PM 2 points

Given that I'm lying in bed with my iPhone commenting on this post, I'd say Ray did OK.

His extrapolations of computer hardware seem to be pretty good.

His extrapolations of computer software are far too optimistic. He clearly made the mistake of vastly underestimating how much work our brains do when we translate natural language or turn speech into text.

In response to Singletons Rule OK
Comment author: roko3 30 November 2008 10:27:31PM 0 points

"Capitalist economists seem to like the idea of competition. It is the primary object of their study - if there were no comptition they would have to do some serious retraining."

Ditto.

In response to Crisis of Faith
Comment author: roko3 10 October 2008 11:25:03PM 2 points

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them, but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

It is even possible that some overoptimistic transhumanists/singularitarians are better off, by their own standards, remaining deluded about the potential dangers of technology. You have the luxury of being intelligent enough to be able to utilize your correct belief about how precarious our continued existence is becoming. For many people, such a belief is of no practical benefit yet is psychologically detrimental.

This creates a "tradgedy of the commons" type problem in global catastrophic risks: each individual is better off living in a fool's paradise, but we'd all be much better off if everyone faced up to the dangers of future technology.

Comment author: roko3 25 September 2008 07:26:45PM 0 points

@ carl: perhaps I should have checked through the literature more carefully. Can you point me to any other references on ethics that use utility functions whose domain is the set of world histories?

Comment author: roko3 25 September 2008 05:23:07PM 0 points

@ shane: I was specifically talking about utility functions from the set of states of the universe to the reals, not from spacetime histories. Using the latter notion, trivially every agent is a utility maximizer, because there is a canonical embedding of any set X (in this case the set of action-perception pair sequences) into the set of functions from X to R. I'm attacking the former notion - where the domain of the utility function is the set of states of the universe.
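To spell the triviality out (a minimal sketch of the embedding argument, in my own notation): given any agent, let $h \in X$ be the action-perception sequence it in fact produces, and define the indicator utility

$$U_h : X \to \mathbb{R}, \qquad U_h(x) = 1 \text{ if } x = h, \text{ else } 0.$$

The agent maximizes $U_h$ by construction; this is just the canonical embedding $x \mapsto U_x$ of $X$ into $\mathbb{R}^X$. No such trick is available when $U$ must be defined on states of the universe rather than on histories.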

Comment author: roko3 23 September 2008 12:03:46AM 0 points

@ prase: well, we have to get our information from somewhere... Sure, past predictions of minor disasters due to scientific error are not in exactly the same league as this particular prediction. But where else are we to look?

@anders: interesting. So presumably you think that the evidence from cosmic rays makes the probability of an LHC disaster much less than 1 in 1000? Actually, how likely do you think it is that the LHC will destroy the planet?
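For readers unfamiliar with it, the shape of the cosmic-ray argument is roughly this (a back-of-the-envelope sketch; $N$ and $M$ are placeholders, not sourced figures): if nature has already run $N$ collisions at LHC-comparable energies on bodies that demonstrably still exist, survival bounds the per-collision catastrophe probability at roughly $p \lesssim 1/N$, so the total risk from the LHC's $M$ collisions is on the order of

$$P(\text{disaster}) \lesssim M/N,$$

which is small whenever $N \gg M$. The standard caveat is that cosmic-ray collision products are highly relativistic and escape, whereas collider products can be produced nearly at rest, so the analogy is not airtight.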

In response to Optimization
Comment author: roko3 13 September 2008 07:22:57PM 0 points

Eli: I think that your analysis here, and the longer analysis presented in "Knowability of FAI", misses a very important point. The singularity is a fundamentally different process from playing chess or building a saloon car. The important distinction is that in building a car, the car-maker's ontology is perfectly capable of representing all of the high-level properties of the desired state, but the instigators of the singularity are, by definition, lacking a sufficiently complex representation system to represent any of the important properties of the desired state: post-singularity Earth. You have had the insight required to see this: you posted about "dreams of XML in a universe of quantum mechanics" a couple of posts back. I posted about this on my blog, in "ontologies, approximations and fundamentalists", too.

Suffice it to say that an optimization process which takes place with respect to a fixed background ontology or set of states is fundamentally different from a process I might call vari-optimization, in which optimization and ontology change happen at the same time. The singularity (whether an AI singularity or a non-AI one) will be of the latter type.
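A toy contrast, purely illustrative (the objective, the names, and the refinement rule are all hypothetical; real ontology change is far richer than re-gridding an interval): the first search optimizes over a state space fixed in advance, while the second revises its representation after each pass, so later candidates are states the original ontology could not even express.

```python
import math

def f(x: float) -> float:
    # Toy objective on [0, 1]; stands in for "goodness of outcomes".
    return math.sin(13 * x) * math.sin(27 * x)

def fixed_ontology_search(n_cells: int = 10) -> float:
    # Classical optimization: the state space (a fixed grid of
    # n_cells candidate states) is decided up front and never changes.
    states = [i / (n_cells - 1) for i in range(n_cells)]
    return max(states, key=f)

def vari_optimization(rounds: int = 6, n_cells: int = 10) -> float:
    # "Vari-optimization": after each optimization pass, the ontology
    # itself is revised -- here, crudely, by re-gridding more finely
    # around the incumbent, so later passes search over states the
    # original representation could not express.
    lo, hi = 0.0, 1.0
    best = lo
    for _ in range(rounds):
        states = [lo + (hi - lo) * i / (n_cells - 1) for i in range(n_cells)]
        best = max(states, key=f)
        width = (hi - lo) / 4          # revise the representation
        lo = max(0.0, best - width)
        hi = min(1.0, best + width)
    return best

if __name__ == "__main__":
    print("fixed ontology :", f(fixed_ontology_search()))
    print("vari-optimized :", f(vari_optimization()))
```

The point of the sketch is only that the second process's behavior cannot be analyzed in terms of the initial state set, which is where analyses assuming a fixed ontology slip.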

In response to Invisible Frameworks
Comment author: roko3 22 August 2008 11:13:27PM 1 point

@ marcello, quasi-anonymous, manuel:

I should probably add that I am not in favor of using any brand new philosophical ideas - like the ones that I like to think about - to write the goal system of a seed AI. That would be far too dangerous. For this purpose, I think we should simply concentrate on encoding the values that we already have into an AI - for example using the CEV concept.

I am interested in UIVs because I'm interested in formalizing the philosophy of transhumanism. This may become important because we may enter a slow-takeoff, non-AI singularity.

Comment author: roko3 20 August 2008 07:48:30PM 0 points

@ eli: Nice series on Löb's theorem, but I still don't think you've added any credibility to claims like "I favor the human one because it is h-right". You can do your best to record exactly what h-right is, and think carefully about convergence (or lack thereof) under self-modification, but I think you'd do a lot better to just state "human values" as a preference and be an out-of-the-closet relativist.
