Comment author: OrphanWilde 18 February 2016 08:47:38PM 4 points [-]

Intelligence is poorly defined, for a start, and artificial intelligence doubly so; think about the number of times we've redefined "AI" after achieving what we previously called "AI".

"Recursive self-improvement" is also poorly-defined; as an example, we have recursive self-improving AIs right now, in the form of self-training neural nets.

Superintelligence is even less well-defined, which is why I prefer the term "godhood"; I regard it as more honest in its vagueness. It may also be illusory; most of us on Less Wrong are here in part out of boredom, because intelligence isn't nearly as applicable in daily life as we'd need it to be to stay entertained. Does intelligence have diminishing returns?

We can tell that some people are smarter than other people, but we're not even certain what that means, except that they score better on whatever measurement we happen to apply.

Comment author: SoerenE 19 February 2016 08:07:58AM *  -1 points [-]

Intelligence, Artificial Intelligence and Recursive Self-improvement are likely poorly defined. But since we can point to concrete examples of all three, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.

Superintelligences do not exist, and it is an open question if they ever will. Bostrom defines superintelligences as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect, and confidently say that it is superintelligent. I feel that this too is a problem in the map, not the territory.

I was wrong to assume that you meant superintelligence when you wrote godhood, and I hope that you will forgive me for sticking with "superintelligence" for now.

Comment author: Lumifer 18 February 2016 08:43:42PM *  4 points [-]

Is it correct that the sentence can be divided into these 4 claims?:

You are missing an important claim: that the process of recursive self-improvement does not encounter any constraints, impediments, roadblocks, etc.

Consider the analogy of your 1. and 2. for human reproduction.

Comment author: SoerenE 19 February 2016 07:22:35AM *  0 points [-]

I meant claim number 3 to be a sharper version of your claim: The AI will meet constraints, impediments and roadblocks, but these are overcome, and the AI reaches superintelligence.

Could you explain the analogy with human reproduction?

Comment author: _rpd 18 February 2016 10:36:57AM 4 points [-]

this claim

Do you mean the metric expansion of space?

https://en.wikipedia.org/wiki/Metric_expansion_of_space

Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity.

Comment author: SoerenE 19 February 2016 07:14:19AM 0 points [-]

Thank you. It is moderately clear to me from the link that James' thought-experiment is possible.

Do you know of a more authoritative description of the thought-experiment, preferably with numbers? It would be nice to have an equation where you give the speed of James' spaceship and the distance to it, and calculate if the required speed to catch it is above the speed of light.
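For a rough version of the numbers (a back-of-the-envelope sketch of my own, not an authoritative source): in a pure de Sitter (dark-energy-dominated) universe, a signal sent from Earth today can never reach anything whose current proper distance exceeds the Hubble radius c/H0, about 14 billion light-years; the true ΛCDM event horizon is somewhat larger, around 16-17 billion light-years, but this gives the right order of magnitude.

```python
# Back-of-the-envelope: in a dark-energy-dominated (de Sitter) universe,
# nothing launched from Earth today, even at light speed, can ever reach
# an object whose current proper distance exceeds the Hubble radius c/H0.

C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per Mpc (approximate)
MPC_PER_GLY = 306.6       # megaparsecs per billion light-years

hubble_radius_gly = (C_KM_S / H0) / MPC_PER_GLY   # roughly 14 Gly

def catchable(distance_gly):
    """Could a light-speed signal sent from Earth now ever arrive at an
    object currently this many billion light-years away (de Sitter case)?"""
    return distance_gly < hubble_radius_gly

print(f"Hubble radius: about {hubble_radius_gly:.1f} billion light-years")
print(catchable(10.0))   # still reachable
print(catchable(20.0))   # forever out of reach
```

So on this simplified model, once James' copy is more than roughly 14 billion light-years away (in current proper distance), nothing sent from Earth can ever catch it, regardless of the pursuer's speed.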

Comment author: OrphanWilde 18 February 2016 01:39:37PM 6 points [-]

I consider a runaway process by which any AI ascends into godhood through recursive self-improvement of its intelligence to be... vaguely magical, by which I mean that while every word in that sentence makes sense, as a whole that sentence doesn't refer to anything. The heavy lifting is done by poorly-defined abstractions and assumptions.

Unfriendly AI, by the metrics I consider meaningful, already exists. It just isn't taking over the world.

Comment author: SoerenE 18 February 2016 08:14:58PM *  1 point [-]

Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly. Let us use the standard AI risk metric.

I feel that your sentence does refer to something: A hypothetical scenario. ("Godhood" should be replaced with "Superintelligence").

Is it correct that the sentence can be divided into these 4 claims?:

  1. An AI self-improves its intelligence
  2. The self-improvement becomes recursive
  3. An AI reaches superintelligence through 1 and 2
  4. This can happen in a process that can be called "runaway"

Do you mean that one of the probabilities is extremely small? (E.g., p(4 | 1 and 2 and 3) = 0.02). Or do you mean that the statement is not well-formed? (E.g., Intelligence is poorly-defined by the AI Risk theory)

Comment author: James_Miller 18 February 2016 01:43:26AM *  2 points [-]

Great overall, but I disagree with this: "while colonization would insulate us against a number of potential existential risks, there are some risks that it wouldn't stop. A physics disaster on Earth, for example, could have consequences that are cosmic in scope. For instance, the universe might not be in its most stable state. Consequently, a high-powered particle accelerator could tip the balance, resulting in a 'catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light.'"

If a positive singularity occurs, and the solution to the Fermi paradox is that we are alone, I would like to make a copy of myself and put that copy on a spaceship traveling away from Earth fast enough that, given sufficient time and the expansion of the universe, something starting at Earth and traveling at the speed of light would not be able to reach me. As I understand it, once I have traveled far enough from Earth, it will be impossible for something from Earth to reach me regardless of my speed.

Comment author: SoerenE 18 February 2016 07:19:08AM 0 points [-]

I've seen this claim many places, including in the Sequences, but I've never been able to track down an authoritative source. It seems false in classical physics, and I know little about relativity. Unfortunately, my Google-Fu is too weak to investigate. Can anyone help?

Comment author: OrphanWilde 17 February 2016 06:11:15PM 4 points [-]

My prior on humans going extinct in the next century is less than 0.05%, because in the last 2,000 centuries, we haven't. My prior on civilization ending in the next century is less than 1.6%, because in the last 60 centuries, it hasn't.
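One way to read this arithmetic (an assumption on my part, not necessarily what OrphanWilde had in mind) is Laplace's rule of succession: with zero events observed in n centuries, the estimated chance of one in the next century is 1/(n + 2), which gives just under 0.05% and just under 1.6% for the two cases:

```python
# Laplace's rule of succession: after s successes in n trials, the
# estimated probability of a success on the next trial is (s + 1) / (n + 2).

def rule_of_succession(successes, trials):
    """Laplace's estimate of the probability of a success on the next trial."""
    return (successes + 1) / (trials + 2)

# Zero extinctions in 2,000 centuries -> just under 0.05% per century.
p_extinction = rule_of_succession(0, 2000)
# Zero civilizational collapses in 60 centuries -> just under 1.6% per century.
p_collapse = rule_of_succession(0, 60)
```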

I have, however, absolutely no faith in those priors, because in any universe in which either of those things happened, nobody is asking those questions.

As for evidence that would update those priors? Well... I see no evidence of anything that could or would end our species, so I have nothing to update on there. I see some evidence of things that could end our civilization, so that could be updated slightly higher. But my error bars dominate the question.

So I'm going to say <1% odds of humans (or rather, human minds) going extinct in the next century. Indeed, I'd bet my life it won't happen.

Comment author: SoerenE 18 February 2016 07:11:28AM 1 point [-]

Could you elaborate on why you consider p(UFAI before 2116) < 0.01? I am genuinely interested.

Comment author: SoerenE 14 February 2016 07:59:04PM *  0 points [-]

It is an interesting way of looking at the maximal potential of AIs. It could be that Oracle Machines are possible in this universe, but an AI built by humans cannot self-improve to that point because of the bound you are describing.

I feel that the phrasing "we have reached the upper bound on complexity" and later "can rise many orders of magnitude" gives a potentially misleading intuition about how limiting this bound is. Do you agree that this bound does not prevent us from building "paperclipping" AIs?

Comment author: lisper 11 February 2016 05:41:27PM -1 points [-]

Of course it's possible. That's not the point. The point is that "pernicious delusion" is pejorative in much the same way that "idiot" is (which is why I extrapolated it that way). Both imply some sort of mental deficiency or disorder. If someone believes in God, on this view, it can only be because their brains are broken.

To be sure, some people do have broken brains, and some people believe in God as a result. The hypothesis that I'm advancing here is that some people may believe in God not because their brains are broken, but because they have had (real) subjective experiences that non-believers generally have not had.

Comment author: SoerenE 11 February 2016 07:32:19PM 2 points [-]

I am tapping out of this thread.

Comment author: lisper 11 February 2016 02:26:13AM 1 point [-]

Well, OK, Dawkins doesn't use the word "idiot." He says that anyone who believes in God is suffering from "a pernicious delusion" (The God Delusion, Chapter 2). I think most people would say that distinguishing between idiocy and pernicious delusions is splitting a pretty fine hair. But be that as it may, the point is: Dawkins has absolutely no sympathy for religious belief of any kind for any reason. Or at least he didn't in 2006. Maybe he's mellowed since then. (But I met him in 2012 in a social setting and he told me, apropos of nothing, "I despise religion.")

Comment author: SoerenE 11 February 2016 06:29:29AM 2 points [-]

It is possible to be extremely intelligent, and suffer from a delusion.

Comment author: Yosarian2 27 January 2016 11:43:41PM 3 points [-]

As a meta-level version of this, I have to admit that I find it a little concerning that this site was created in the first place partly because Eliezer Yudkowsky wanted to convince people that funding safe AI research was the best possible use of resources, and that much of the logic on this site seems to arrive at that conclusion regardless of the direction the reasoning takes to get there.

I don't necessarily disagree with the conclusion, but it is a surprising and suspicious convergence nonetheless.

Comment author: SoerenE 29 January 2016 07:50:48AM *  1 point [-]

My thoughts exactly.

When I first heard it, it sounded to me like a headline from BuzzFeed: This one weird trick will literally solve all your problems!

Turns out that the trick is to create an IQ 20000 AI, and get it to help you.

(Obviously, Suspicious <> Wrong)
