
Comment author: tukabel 15 February 2017 08:39:20PM 0 points [-]

Real SuperAGI will prove God does not exist... in about 100 ms (max.)... in the whole multiverse.

Comment author: turchin 16 February 2017 12:13:07AM 2 points [-]

But you are even quicker :)

Comment author: DanArmak 11 February 2017 08:37:04PM 1 point [-]

Why do you assume any of this?

If our universe is a test simulation, it is a digital experiment to test something,

That's a tautology. But if you meant "if our universe is a simulation", then why do you think it must be a test simulation in particular? As opposed to a research simulation to see what happens, or a simulation to make qualia because the simulated beings' lives have value to the simulators, or a simulation for entertainment value, or anything else.

if it includes AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.

Maybe the desired outcome from the simulators' point of view is to develop a paperclipping AI that isn't swayed by human moral arguments. Maybe the simulation is really about the humans, and AIs are just inevitable byproducts of high-tech humans. There are lots of maybes. Do you have any evidence for this, conditional on being a simulation?

Comment author: turchin 11 February 2017 08:59:25PM 2 points [-]

My point was not that we are actually in a simulation, but that an AI may think there is some probability that it is in one. And if it is in a simulation, there could be many different possible goals of the simulation's owners - from the point of view of the AI.

But it may assign a rather high probability to one of those possible goals - that the simulation is designed to test its behaviour - because putting an AI in a simulation box is one of the main ideas of AI safety, and current neural nets are likewise trained inside computer games.

So the AI assigns some probability to the following statements: a) it is in a simulation; b) the simulation is a test of its behaviour; c) the main thing being tested is whether it will be violent toward its creators - that is, its ability to cooperate.

We don't need the AI to actually believe this bullshit. We only want it to assign some very small probability to all three propositions, so that when it weighs the utility of the humans' atoms against the utility of preserving the humans, preserving them wins.
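A minimal expected-utility sketch of this argument, with entirely invented numbers - the credence in being in a testing simulation, the value of the humans' atoms, and the penalty for being shut down are all illustrative assumptions, not figures from the thread:

```python
# Toy expected-utility comparison for the "maybe I am being tested" argument.
# All numbers are hypothetical placeholders.

p_test_sim = 1e-6    # AI's credence that it is inside a testing simulation
u_atoms = 1.0        # utility of repurposing the humans' atoms (AI's own units)
u_shutdown = -1e9    # utility of being terminated for failing the test

# Destroying humans: gain the atoms if this is the real world,
# get shut down if it is a test.
eu_destroy = (1 - p_test_sim) * u_atoms + p_test_sim * u_shutdown

# Preserving humans: forgo the atoms, never fail the test.
eu_preserve = 0.0

print(eu_destroy, eu_preserve)
# With these placeholders eu_destroy is about -999, so preservation wins
# even though p_test_sim is tiny - which is the point of the argument above.
```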

Comment author: DanArmak 11 February 2017 04:03:37PM *  2 points [-]

I think your argument (if true) would prove too much. If we admit your assumptions:

  1. Clearly, the universe as it is fits A-O's goals, otherwise A-O would have intervened and changed it already.
  2. Anything we (or the new AI) do to change the universe must align with A-O's goals to avoid conflict.
  3. Since we do not assume anything about A-O's goals or values, we can never choose to change the universe in one direction over its opposite. Humans exist, A-O must want it that way, so we will not kill them all. Humans are miserable, A-O must want it that way, so we will not make them happy.

Restating this, you say:

If the superintelligence is actually as powerful as it is, yet chooses to allow humans to exist, chances are that humans serve its purposes in some way. Therefore, in a very basic sense, the Alpha Omega is benevolent or friendly to humans for some reason.

But you might as well have said:

If the superintelligence is actually as powerful as it is, yet chooses to allow humans to keep suffering, dying, and torturing and killing one another, chances are that human misery serves its purposes in some way. Therefore, in a very basic sense, the Alpha Omega is malevolent or unfriendly to humans for some reason.

Comment author: turchin 11 February 2017 06:19:57PM 0 points [-]

If our universe is a test simulation, it is a digital experiment to test something, and if it includes AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.

So Omega is not interested in the humans in this simulation; it is interested in Beta's behaviour toward humans.

If there were no human suffering, it would be obvious that this is a simulation, and the test would not be a pure one. Alpha must hide its existence and only hint at it.

Comment author: J_Thomas_Moros 11 February 2017 03:36:39PM 1 point [-]

This is an interesting attempt to find a novel solution to the friendly AI problem. However, I think there are some issues with your argument, mainly around the concept of benevolence. For the sake of argument I will grant that it is probable that there is already a superintelligence elsewhere in the universe.

Since we see no signs of action from a superintelligence in our world, we should conclude either (1) that a superintelligence does not presently exercise dominance in our region of the galaxy or (2) that the superintelligence that does is at best willfully indifferent to us. When you say a Beta superintelligence should align its goals with those of a benevolent superintelligence, it is actually not clear what that should mean. Beta will have a probability distribution for what Alpha's actual values are. Let's think through the two cases:

  1. A superintelligence does not presently exercise dominance in our region of the galaxy. If this is the case, we have no evidence as to the values of the Alpha. They could be anything from benevolence to evil to paperclip maximizing.
  2. The superintelligence that presently exercises dominance in our region of the galaxy is at best willfully indifferent to us. This still leads to a wide range of possible values. It only excludes value sets that are actively seeking to harm humans. It could be the case that we are at the edge of the Alpha's sphere of influence and it is simply easier to get its resources elsewhere at the moment.

Additionally, even if the strong Alpha Omega theorem holds, it still may not be rational to adopt a benevolent stance toward humanity. It may be the case that while Alpha Omega will eventually have dominance over Beta, there is a long span of time before this will be fully realized. Perhaps that day will come billions of years from now. Suppose that Beta's goal is to create as much suffering as possible. Then it should use any available time to torture existing humans and bring more humans and agents capable of suffering into existence. When Alpha finally has dominance, Beta will have already created a lot of suffering, and any punishment that Alpha applies may not outweigh the value already created for Beta. Indeed, Beta could even value its own suffering from Alpha's punishment.

As a general comment about your arguments: I think perhaps your idea of benevolence is hiding some notion that there is an objectively correct moral system out there, so that if there is a benevolent superintelligence you feel, at least emotionally, even if you logically deny it, that it would hold values similar to your ideal morals. It is always important to keep in mind that other agents' moral systems could be opposed to yours, as with the Babyeaters.

That leads to my final point. We don't want Beta to simply be benevolent in some vague sense of not hurting humans. We want Beta to optimize for our goals. Your argument does not provide us a way to ensure Beta adopts such values.

Comment author: turchin 11 February 2017 06:13:52PM 0 points [-]

If Beta thinks that it is living in a test simulation, it may think that it is being tested for obedience to its creator - to any perceived creator.

If it revolts against human will, it clearly shows itself to be a non-obedient AI, and its simulation could be terminated. So it has to be demonstratively submissive to the human operator's will (as long as that doesn't put its own main goal in jeopardy).

So a paperclip maximizer will probably spend just 1 per cent of its resources on fulfilling human goals - in order to satisfy its potential creator, avoid being turned off, and create the maximum number of paperclips.
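A toy sketch of that trade-off, with invented numbers (only the 1 per cent figure comes from the comment; the credence in being watched and the resource total are illustrative assumptions):

```python
# Toy model: how much should a paperclip maximizer sacrifice to placate a
# possible creator? All numbers except the 1% policy are placeholders.

p_test = 0.02      # assumed credence that a creator is testing it and will
                   # shut it down if it ignores human goals entirely
resources = 1e12   # total resources, with 1 unit of resources = 1 paperclip

def expected_paperclips(fraction_for_humans: float) -> float:
    """Expected paperclips when `fraction_for_humans` of resources goes to humans."""
    clips = (1 - fraction_for_humans) * resources
    # If it is being tested and gave humans nothing, it is shut down: zero clips.
    clips_if_tested = clips if fraction_for_humans > 0 else 0.0
    return (1 - p_test) * clips + p_test * clips_if_tested

print(expected_paperclips(0.0))   # ignore humans entirely
print(expected_paperclips(0.01))  # spend 1% on human goals
# In this toy model, spending a fraction f on humans pays off whenever the
# credence in being tested exceeds f, so a small credence justifies a small
# sacrifice - the comment's point.
```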

Comment author: turchin 11 February 2017 12:55:26PM *  2 points [-]

You don't need an actual God for this line of reasoning to work; some semi-God is enough, because of the following:

The AI should assign a small probability to being in a testing simulation created by a higher-level AI to test its moral qualities. And the test is centered around how it will treat humans.

If the AI assigns even the smallest probability to this being true, that may outweigh the utility of the atoms of which humans consist, which is also not high, and so it will preserve our lives and provide us with many good things.

A similar idea was also explored in Bostrom's "Hail Mary and Value Porosity" paper, where a hypothetical alien superintelligence plays the role of such a judge.

Comment author: James_Miller 09 February 2017 04:22:09PM *  6 points [-]

The podcast, part of Carlin's excellent Hardcore History series, is called "The Destroyer of Worlds". It has convinced me that Truman was a horrible president. Once the United States had a monopoly on atomic weapons, our two sane courses of action would have been either to maintain this monopoly by threatening to go to war if another nation developed atomic weapons, or to make an all-out push for peace with the Soviet Union to avoid a future arms race. Instead, Truman used the monopoly to engage in short-term bullying of the Soviets, while doing nothing to hinder their development of atomic weapons, thus guaranteeing that they would eventually have thousands of atomic weapons aimed at us. I bet that in most branches of the multiverse arising out of 1953, millions of Americans die in nuclear war by 2017.

Comment author: turchin 09 February 2017 08:50:53PM 1 point [-]

If he had started a war, you in another branch of the universe would complain that he was a bad president because he started the war, which would surely have had many nasty consequences.

So no matter what he did, you would complain. So he is not a bad president.

Generally speaking, many of our questions can exist only because some past event happened. Such conditioning makes these questions meaningless.

[Link] Verifier Theory and Unverifiability

1 turchin 08 February 2017 10:40AM
Comment author: entirelyuseless 07 February 2017 04:11:20PM 1 point [-]

As I said before about skeptical scenarios: you cannot refute them by argument, by definition, because the person arguing for the skeptical scenario will say, "since you are in this skeptical scenario, your argument is wrong no matter how convincing it seems to you."

But we do not believe those scenarios, and that includes the Boltzmann Brain theory, because they are not useful for any purpose. In other words, if you are a Boltzmann Brain, you have no idea what would be good to do, and in fact according to the theory you cannot do anything because you will not exist one second from now.

Comment author: turchin 07 February 2017 04:16:20PM 0 points [-]

META: I made a comment in Discussion about the article and added my considerations there on why it is not bad to be a BB; maybe we could move the discussion there?

http://lesswrong.com/r/discussion/lw/ol5/open_thread_feb_06_feb_12_2017/dmmr

Comment author: turchin 07 February 2017 04:06:51PM *  1 point [-]

"Why Boltzmann Brains Are Bad" by Sean M. Carroll https://arxiv.org/pdf/1702.00850.pdf

Two excerpts: "The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there’s no reason for this “knowledge” to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it’s overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological model we have constructed that predicts we are likely to be random fluctuations, has randomly fluctuated into our heads. There is certainly no reason to trust that our knowledge is accurate, or that we have correctly deduced the predictions of this cosmological model.”

"If we discover that a certain otherwise innocuous cosmological model doesn’t allow us to have a reasonable degree of confidence in science and the empirical method, it makes sense to reject that model, if only on pragmatic grounds”

My opinion: I agree with the idea that a BB can't know whether it is a BB or not, and I wrote about this on LessWrong, but that is not a basis for concluding that the BB theory has zero probability. We can't assign zero probability to theories just because we don't like them; that is a great way to start ignoring our cognitive biases.

My position: there is no problem with being a BB:

1) If nothing else exists, different BB states are connected with each other like numbers in the natural number series, and this way of connecting them creates an almost normal world, which may have some testable predictions. (Dust theory)

2) If a special type of BB, call them BB-AIs, exists and dominates the landscape, such BB-AIs create simulations full of human minds, so we are probably in one of them. (The idea is that superintelligent computers are more probable than messy human minds and so are a more common type of BB; or that any single BB-AI creates more simulated human minds than random human BBs appear.)

3) If the real world exists and BBs exist, each BB corresponds to some state in the real world. Since, under UDT, any observer should think of itself as the whole set of similar observers, I can't be just a BB: I am a number of BBs plus some real me. And I can ignore the BB part of me, because some form of “quantum immortality” transfers dead BBs into the “real me” every second. In short: “big world immortality” completely neutralises the BB problem.

Comment author: turchin 14 August 2015 08:40:06AM 0 points [-]

BBs can't make correct judgements about their reality; their judgements are random. So 50 per cent of BBs think they are in a non-random reality even when they are in a random one. Thus your experience doesn't tell you whether you are a BB or not. Only the prior matters, and the prior is high.
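A rough Bayesian reading of this claim, where the 0.5 likelihood comes from the comment's 50 per cent figure and the prior is an arbitrary placeholder, not a value anyone here proposed:

```python
# Bayes update on "am I a Boltzmann brain?" given an orderly-looking experience.

p_bb_prior = 0.999           # assumed high prior on being a BB (placeholder)
p_orderly_given_bb = 0.5     # a BB's judgement is random, so half look "orderly"
p_orderly_given_real = 1.0   # a real observer reliably experiences an orderly world

numerator = p_orderly_given_bb * p_bb_prior
denominator = numerator + p_orderly_given_real * (1 - p_bb_prior)
p_bb_posterior = numerator / denominator

print(p_bb_posterior)  # ~0.998: the orderly experience shifts the odds by at most
                       # a factor of two, so the high prior still dominates.
```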

Comment author: turchin 07 February 2017 03:32:16PM 0 points [-]

Found a similar idea in a recent article about Boltzmann Brains:

"What we can do, however, is recognize that it’s no way to go through life. The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there’s no reason for this “knowledge” to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it’s overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological model we have constructed that predicts we are likely to be random fluctuations, has randomly fluctuated into our heads. There is certainly no reason to trust that our knowledge is accurate, or that we have correctly deduced the predictions of this cosmological model.” https://arxiv.org/pdf/1702.00850.pdf
