Comment author: Gurkenglas 14 February 2016 11:23:46AM 1 point [-]

You are assuming that the Turing machine needs to halt. In a universe much simpler than ours (?), namely the one where a single Turing machine runs, if you subscribe to Pattern Identity Theory, there's a simple way to host an infinite hierarchy of increasing intelligences: Simply run all Turing machines in parallel. (Using diagonalization from Hilbert's Hotel to give everyone infinite steps to work with.) The machine won't ever halt, but it doesn't need to. If an AGI in our universe can figure out a way to circumvent the heat death, it could do something similar.
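A minimal sketch of the interleaving this describes (often called dovetailing), assuming an infinite enumeration of machine objects exposing a hypothetical single-step interface; both `machines` and `.step()` are placeholders, not any real API:

    def run_all_in_parallel(machines):
        """Dovetail an infinite enumeration of Turing machines: at stage n,
        admit machine n and give one step to each of machines 0..n. Every
        machine receives unboundedly many steps, and the loop itself never
        halts - it doesn't need to."""
        active = []
        for machine in machines:   # 'machines' is an infinite iterator
            active.append(machine)
            for m in active:
                m.step()           # assumed one-step interface; a step never blocks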

Comment author: Yaacov 31 January 2016 01:53:17AM *  0 points [-]

Destroying the robot greatly diminishes its future ability to shoot, but it would also greatly diminish its future ability to see blue. The robot doesn't prefer 'shooting blue' to 'not shooting blue'; it prefers 'seeing blue and shooting' to 'seeing blue and not shooting'.

So the original poster was right.

Edit: I'm wrong, see below

Comment author: Gurkenglas 03 February 2016 01:19:16PM 1 point [-]

If the robot knows that its camera is indestructible but its gun isn't, it would still shoot at the mirror and destroy only its gun.

Comment author: Lumifer 21 January 2016 09:49:16PM *  8 points [-]

Oh, dear. A paper in PNAS says that the usual psychological experiments which show that people have a tendency to cooperate at the cost of not maximizing their own welfare are flawed. People are not cooperative, people are stupid and cooperate just because they can't figure out how the game works X-D

Abstract:

Economic experiments are often used to study if humans altruistically value the welfare of others. A canonical result from public-good games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data is that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whom they cannot be concerned with the welfare of; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.

Comment author: Gurkenglas 23 January 2016 07:08:07PM 2 points [-]

(ii) They may also be anthropomorphizing the computers. (iii) This just means that the sort of person who cooperates in this sort of game also treats humans and computers equally, right?

Comment author: SilentCal 30 October 2015 04:00:09PM 1 point [-]

At the heart of this question is some concept of resource permission that I'm trying to nail down--that is, agent X has 'self-modified' into agent Y iff agent Y has the same hardware resources that agent X had. This distinguishes self-modification from emulation, which is important; humans have limited self-modification, but with a long paper tape we can emulate any program.

A proposed measure: Define the 'emulation penalty' of a program that could execute on the AI's machine as the ratio of the runtime of the AI's fastest possible emulation of that program to the runtime of the program executing directly on the machine. The maximum emulation penalty over all possible programs puts at least a lower bound on the AI's ability to effectively self-modify into any possible agent.

An AI that can write and exec assembly would have a max emulation penalty of 1; one that can write and exec a higher-level language would probably have 10-100 (I think?); and one that could only carry out general computation by using an external paper tape would have a max emulation penalty in the billions or higher.
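A rough sketch of how this measure might be estimated in practice, assuming only a finite sample of benchmark programs is available (so the result lower-bounds the true maximum, which ranges over all possible programs); the runtime pairs below are purely illustrative, not measurements:

    def emulation_penalty(native_runtime, emulated_runtime):
        """Ratio of the AI's fastest possible emulation of a program to the
        same program executing directly on the machine."""
        return emulated_runtime / native_runtime

    def estimated_max_penalty(benchmarks):
        """Worst penalty over a finite sample of (native, emulated) runtime
        pairs; a lower bound on the maximum over all possible programs."""
        return max(emulation_penalty(n, e) for n, e in benchmarks)

    # Illustrative orders of magnitude from the comment: direct assembly ~1,
    # a higher-level language maybe 10-100, an external paper tape billions+.
    print(estimated_max_penalty([(1.0, 1.0), (1.0, 40.0), (1.0, 3e9)]))  # 3000000000.0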

Comment author: Gurkenglas 09 January 2016 11:38:52AM 0 points [-]

Therefore, for a computer in Greg Egan's Permutation City, emulation is self-modification?

Comment author: casebash 06 January 2016 02:15:13PM 0 points [-]

Any finite universe will have a best such actor, but is even our universe finite? Besides, this was purposefully set in an infinite universe.

Comment author: Gurkenglas 06 January 2016 02:58:23PM *  0 points [-]

Finitely specified universe, not finite universe. That said, until the edit I had failed to realize that the diagonalization argument I used to rule out an infinite universe containing an infinite hierarchy of finite actors doesn't work.

Comment author: Gurkenglas 06 January 2016 02:09:55PM *  0 points [-]

Let's assume that the being that is supposed to find a strategy for this scenario operates in a universe whose laws of physics can be specified mathematically. Given this scenario, it will try to maximize the number it outputs. Its output cannot possibly surpass the maximum finite number that can be specified using a string no longer than its universe's specification, so it need not try to surpass that number, though it might come pretty close. Therefore, for each such universe, there is a best rational actor.

Edit: No, wait. Umm, you might want to find the error in the above reasoning yourself before reading on. Consider the universe with an actor for every natural number that always outputs that number. The above argument says that no actor from that universe could output a bigger number than can be specified using a string no longer than the laws of physics of the universe, but that only holds if the laws of physics include a pointer to that actor - to extract the number 100 from that universe, we need to know that we want to look at the hundredth actor. But your game didn't require that: inside the universe, each actor knows that it is itself without needing any global pointer, and so there can be an infinite hierarchy of ever-better rational actors in a finitely specified universe.
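A toy illustration of the counterexample, under the loose assumption that an actor can be modeled as a function that outputs a number: the 'universe' below has a short finite specification, yet it contains an actor for every natural number, each outputting a larger number than the last, with no global pointer singling any one of them out.

    def universe():
        """Finitely specified 'universe': it contains an actor for every natural
        number n, and actor n simply outputs n. The specification is this short
        function, yet the actors' outputs are unbounded, so none of them is best."""
        n = 0
        while True:
            yield (lambda k=n: k)  # actor number n, which outputs n
            n += 1

    actors = universe()
    print([next(actors)() for _ in range(5)])  # [0, 1, 2, 3, 4], and so on forever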

Comment author: fubarobfusco 27 December 2015 05:58:44AM *  4 points [-]

Given that various mental disorders are heritable, it's not clearly impossible for psychological properties to be selected for.

However, unlike dark or light skin (which matters for dealing with sunlight or the lack of it), mental ability is generally useful for survival and success in all climates and regions of the world. Every physical and social setting has problems to figure out; friendships and relationships to negotiate; language to acquire; mates to charm; rivals to overcome or pacify; resources that can be acquired through negotiation, deception, or wit; and so on. This means that all human populations will be subject to some selection pressure for mental ability; whereas with skin color there are pressures in opposite directions in different climates.

So why is this such a troublesome subject?

The problem with the subject is that there's an ugly history behind it — of people trying to explain away historical conditions (like "who conquered whom" or "who is richer than whom") in terms of psychological variation. And this, in turn, has been used as a way of justifying treating people badly ... historically, sometimes very badly indeed.

Classifications don't exist for themselves; they exist in order for people to do things with them. People don't go around classifying things (or people) and then not doing anything with the classification. But sometimes people make particular classifications in order to do horrible things, or to convince other people to do horrible things.

"Earthmen are not proud of their ancestors, and never invite them round to dinner." —Douglas Adams

Comment author: Gurkenglas 30 December 2015 05:05:22AM *  3 points [-]

There is a tradeoff between energy consumption and intelligence (and in our species the optimum has moved toward intelligence). Your second paragraph doesn't rule out the possibility that this optimum landed at different points in different ancient locations.

Comment author: devas 23 December 2015 01:15:37PM 6 points [-]

You are one of the first to be revived.

The technique is imperfect, and causes you massive neurological damage (think late stage Alzheimer's), trapping you in a nonverbal yet incredibly painful and horrifying state.

Due to advances in gerontology, you have a nearly infinite lifespan ahead of you, cognizant only of what you have lost.

When neuroscience finally advances to the point where you can be fixed, it's still not yet advanced enough to give you back your memories.

You're effectively a completely different person, and you know that.

Comment author: Gurkenglas 25 December 2015 09:08:51PM 0 points [-]

Couldn't you get refrozen until they can fix that too?

Comment author: solipsist 13 December 2015 03:27:01PM 0 points [-]

What do you mean by "commit suicide" here? Memorize the results of 5 more coins?

Comment author: Gurkenglas 13 December 2015 03:36:21PM 1 point [-]

No, that would do nothing to the anthropic weights of each subtree. I meant ending your life as part of the thought experiment. Why would memorizing numbers do anything special?

Comment author: solipsist 07 December 2015 05:49:07AM *  1 point [-]

I set up an experiment to test quantum anthropics.

Flip four quantum coins. If they all came up heads, stop. If any of them came up tails, flip 5 more coins and (using mnemonics) think really hard about the exact coin flip sequence. If I find myself in a universe where the first four coins came up all heads, then with p < 0.0625, quantum weirdness kept me from finding myself in one of the universes into which the state of my consciousness would have split me 512 ways.

I got access to a quantum random number generator, resolved to do the experiment, called a friend and told them I was about to do the experiment, and... chickened out and didn't do the experiment.

I do not know how to interpret these results :-/

Comment author: Gurkenglas 13 December 2015 12:47:15PM -1 points [-]

Here's how I predict your setup would work, and shame on you for chickening out:

http://sketchtoy.com/66313589

You start doing the experiment, you flip four coins, and 15/16 of you memorize a sequence; those 15/16 split into 15×32 versions of you that memorized (probably pairwise different) sequences. In the end, you have a 15/16 probability of finding yourself having memorized a sequence. If QI works and half of the yous who find a tails among the first four coins commit suicide, start-experiment-you only has a 15/17 chance of finding themselves having found tails and failed to kill themselves.
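A quick sanity check of the arithmetic, under the stated assumptions (16 equally weighted branches for the first four coins, one of them all-heads; in the QI variant, half of the tails-branch observers remove themselves and the surviving branches are renormalized):

    from fractions import Fraction

    all_heads = Fraction(1, 16)   # first four coins all came up heads
    tails = Fraction(15, 16)      # at least one tails, so a sequence was memorized

    print(tails)                  # 15/16: chance of finding yourself having memorized a sequence

    surviving_tails = tails / 2   # half of the tails observers commit suicide
    survivors = all_heads + surviving_tails
    print(surviving_tails / survivors)  # 15/17: chance, conditional on surviving,
                                        # of having found tails and failed to die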
