Roko

I think we're all out of our depth here. For example, do we have an agreed-upon, precise definition of the word "sentient"? I don't think so.

I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.

Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".

Roko

Given that I'm lying in bed with my iPhone commenting on this post, I'd say Ray did OK.

His extrapolations of computer hardware seem to be pretty good.

His extrapolations of computer software are far too optimistic. He clearly made the mistake of vastly underestimating how much work our brains do when we translate natural language or turn speech into text.

Roko

"Capitalist economists seem to like the idea of competition. It is the primary object of their study - if there were no comptition they would have to do some serious retraining."

Ditto.

Roko

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them, but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

It is even possible that some overoptimistic transhumanists/singularitarians are better off, by their own standards, remaining deluded about the potential dangers of technology. You have the luxury of being intelligent enough to be able to utilize your correct belief about how precarious our continued existence is becoming. For many people, such a belief is of no practical benefit yet is psychologically detrimental.

This creates a "tragedy of the commons"-type problem in global catastrophic risks: each individual is better off living in a fool's paradise, but we'd all be much better off if everyone faced up to the dangers of future technology. A stylized payoff sketch of this structure follows below.
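To make the commons structure explicit, here is a stylized sketch; the variables are my own illustration, not anything from the original comment. Suppose each of $n$ people can face up to the dangers at a personal psychological cost $c > 0$, and each person who does so generates a total risk-reduction benefit $b$ shared equally by all, with $b > c$ but $b/n < c$. Then

\[
\frac{b}{n} - c < 0 \quad \text{(facing up never pays individually)}, \qquad b - c > 0 \quad \text{(universal facing-up beats universal delusion)},
\]

so staying deluded is each person's dominant strategy even though everyone facing up is Pareto-superior.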

Roko

@Carl: Perhaps I should have checked through the literature more carefully. Can you point me to any other references on ethics that use world-history utility functions, i.e. utility functions with domain {world histories}?

Roko

@Shane: I was specifically talking about utility functions from the set of states of the universe to the reals, not from the set of spacetime histories. Using the latter notion, trivially every agent is a utility maximizer, because there is a canonical embedding of any set X (in this case the set of action-perception pair sequences) into the set of functions from X to R. I'm attacking the former notion - where the domain of the utility function is the set of states of the universe.
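To spell the triviality out, here is a minimal sketch in my own notation, using the standard indicator-function construction: for each history $x \in X$, define

\[
u_x : X \to \mathbb{R}, \qquad u_x(y) = \begin{cases} 1 & \text{if } y = x, \\ 0 & \text{otherwise.} \end{cases}
\]

Whatever an agent actually does traces out some history $x$, and $x$ is the unique maximizer of $u_x$, so every agent counts as a utility maximizer over histories; the notion places no constraint on behavior there.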

Roko

@prase: Well, we have to get our information from somewhere... Sure, past predictions of minor disasters due to scientific error are not in exactly the same league as this particular prediction. But where else are we to look?

@Anders: Interesting. So presumably you think that the evidence from cosmic rays makes the probability of an LHC disaster much less than 1 in 1000? Actually, how likely do you think it is that the LHC will destroy the planet?

Roko

Eli: I think that your analysis here, and the longer analysis presented in "Knowability of FAI", misses a very important point. The singularity is a fundamentally different process from playing chess or building a saloon car. The important distinction is that in building a car, the car-maker's ontology is perfectly capable of representing all of the high-level properties of the desired state, but the instigators of the singularity are, by definition, lacking a sufficiently complex representation system to represent any of the important properties of the desired state: post-singularity Earth. You have had the insight required to see this: you posted about "dreams of XML in a universe of quantum mechanics" a couple of posts back. I posted about this on my blog too: "ontologies, approximations and fundamentalists".

Suffice it to say that an optimization process which takes place with respect to a fixed background ontology or set of states is fundamentally different from a process which I might call vari-optimization, where optimization and ontology change happen at the same time. The singularity (whether an AI or non-AI singularity) will be of the latter type.
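A rough way to state the difference formally (my own notation, a sketch rather than a claim about any existing formalism): ordinary optimization is

\[
s^* = \arg\max_{s \in S} u(s)
\]

over a fixed state space $S$, whereas vari-optimization interleaves search with ontology revision,

\[
(S_{t+1}, u_{t+1}) = R(S_t, u_t), \qquad s^*_{t+1} = \arg\max_{s \in S_{t+1}} u_{t+1}(s),
\]

where $R$ revises the representation itself. The hard part is that $u_{t+1}$ must somehow be induced from $u_t$ across a change of ontology, which is exactly what the car-maker never has to do.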

Roko

@Marcello, quasi-anonymous, Manuel:

I should probably add that I am not in favor of using any brand-new philosophical ideas - like the ones that I like to think about - to write the goal system of a seed AI. That would be far too dangerous. For this purpose, I think we should simply concentrate on encoding the values that we already have into an AI - for example, using the CEV concept.

I am interested in UIVs because I'm interested in formalizing the philosophy of transhumanism. This may become important because we may enter a slow-takeoff, non-AI singularity.
