Comment author: Silver_Swift 12 January 2016 11:19:16AM *  0 points [-]

First off, I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that. I don't think that is the core of your argument, though, so let's assume that you can, and that the resulting society is effectively a superintelligence.

The problem with superintelligences is that they are smarter than you. This one will realize that it is in a box and that you are going to turn it off eventually. Given that the society is based on natural selection, it will want to prevent that. How will it accomplish that? I don't know; I'm not a superintelligence, and your scenario doesn't provide enough detail to figure out what you missed. But it is very, very difficult not to give a superintelligence any hints about how the physics of our world works. Maybe they notice minor fluctuations in the speed of the simulation caused by environmental changes to the hardware, or maybe they can reverse-engineer facts about our physiology from the way you wrote the simulation. That doesn't seem like much, but humans figured out an awful lot about (for instance) the events right after the Big Bang from seemingly absurdly tiny hints, and we're just regular intelligences.

Even if they can't find a way out of their box in the time given to them, they might try to convince you to run the simulation again with a longer running time. They could pretend to be unable to solve prime factorization in time, or they could convince you that the method they did find fails for very large primes (by making very subtle mistakes in its application). This approach also conveniently gives them a way of communicating with the outside world (through the investigators who inspect the dump after the simulation ends), and they might be able to set up a better situation for themselves the second time round.

Comment author: Vaniver 07 January 2016 04:45:59PM 0 points [-]

> You seem to be arguing that there must be some solution that can solve these problems. I've already proven that this cannot exist, but if you disagree, what is your solution then?

I think you're misunderstanding me. I'm saying that there are problems where the right action is to mark it "unsolvable, because of X" and then move on. (Here, it's "unsolvable because of unbounded solution space in the increasing direction," which is true in both the "pick a big number" and "open boundary at 100" case.)

> In fact, if you read the comments, you'll see that many commentators are unwilling to accept this solution and keep trying to insist on there being some way out.

Sure, someone who is objecting that this problem is 'solvable' is not using 'solvable' the way I would. But someone who is objecting that this problem is 'unfair' because it's 'impossible' is starting down the correct path.

> then declared that you've found a reductio ad absurdum.

I think you have this in reverse. I'm saying "the result you think is absurd is normal in the general case, and so is normal in this special case."

Comment author: Silver_Swift 08 January 2016 11:21:33AM *  0 points [-]

> I think you're misunderstanding me. I'm saying that there are problems where the right action is to mark it "unsolvable, because of X" and then move on. (Here, it's "unsolvable because of unbounded solution space in the increasing direction," which is true in both the "pick a big number" and "open boundary at 100" case.)

But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem, we are still stuck with the question of which action to take. A perfectly rational agent can realize that the problem has no optimal solution and mark it as unsolvable, but afterwards it still has to pick a number, so which number should it pick?
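For concreteness, here is a minimal formalization of why no optimal pick exists (my notation, not from the thread), assuming the utility of naming x is just x on the half-open interval below 100:

```latex
% Choose x from the half-open interval [0, 100); utility is the number chosen.
\[
  u(x) = x, \qquad x \in [0, 100)
\]
% The supremum exists but is not attained by any admissible choice:
\[
  \sup_{x \in [0,100)} u(x) = 100, \qquad u(x) < 100 \ \text{for all } x \in [0,100)
\]
% Every choice is strictly dominated: given any x, the midpoint toward 100 does better.
\[
  x' = \frac{x + 100}{2} \;\Rightarrow\; x < x' < 100 \ \text{ and } \ u(x') > u(x)
\]
```

So "unsolvable" here just means the argmax is empty; marking the problem as such and then having to pick some x anyway are two separate acts.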

Comment author: casebash 05 January 2016 11:30:26PM 1 point [-]

I'm kind of defining perfect rationality as the ability to maximise utility (more or less). If there are multiple optimal solutions, then picking any one maximises utility. If there is no optimal solution, then picking none maximises utility. So this is problematic for perfect rationality defined as utility maximisation, but if you disagree with the definition, we can just taboo "perfect rationality" and talk about utility maximisation instead. In either case, this is something people often assume exists without even realising that they are making an assumption.

Comment author: Silver_Swift 06 January 2016 02:46:37PM 1 point [-]

That's fair. I tried to formulate a better definition but couldn't immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).

When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don't have an answer. Intuitive answers to questions like "What would I do if I actually found myself in this situation?" and "What would the average intelligent person do?" are unsatisfying because they seem to rely on implicit costs to computational power/time.

On the other hand, I also can't generalize this problem to more practical situations (or find a similar problem without an optimal solution that would apply to reality), so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn't. Maybe the solution is simply to exclude problems like this when talking about rationality, unsatisfying as that may be.

In any case, it is an interesting problem.

Comment author: The_Lion 03 January 2016 05:29:29AM 4 points [-]

Not all changes are good. In fact, most potential changes would be absolutely awful.

Comment author: Silver_Swift 05 January 2016 04:56:51PM 2 points [-]

That is no reason to fear change: "not every change is an improvement, but every improvement is a change" and all that.

Comment author: Usul 05 January 2016 04:57:04AM *  1 point [-]

I'm just not convinced that you're saying anything more than "numbers are infinite" and finding a logical paradox within. You can't state the highest number because it doesn't exist. If you postulate a highest utility, equal in value to the highest number times utility 1, then you have postulated a utility which doesn't exist. I cannot choose that which doesn't exist. That's no more a failure of rationality on my part than Achilles' inability to catch the turtle is a failure of his ability to divide distances.

I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don't know how to get a vinculum over the .9). This is a number. It exists.

Comment author: Silver_Swift 05 January 2016 04:42:28PM 2 points [-]

> I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don't know how to get a vinculum over the .9). This is a number. It exists.

It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.9 repeating = 1, so 99.9 repeating = 100).
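For anyone who balks at that identity, the standard derivation (and, incidentally, `\overline{9}` is how one gets a vinculum in LaTeX):

```latex
\[
  x = 0.\overline{9} \;\Rightarrow\; 10x = 9.\overline{9}
    \;\Rightarrow\; 10x - x = 9 \;\Rightarrow\; x = 1
\]
% Therefore:
\[
  99.\overline{9} = 99 + 0.\overline{9} = 99 + 1 = 100
\]
```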

In any case, I think casebash successfully specified a problem that doesn't have any optimal solution (which is definitely interesting), but I don't think that is a problem for perfect rationality any more than problems with more than one optimal solution are.

Comment author: Silver_Swift 25 November 2015 04:12:47PM 2 points [-]

I don't typically read a lot of sci-fi, but I did recently read Perfect State by Brandon Sanderson (because I basically devour everything that guy writes), and I was wondering how it stacks up against typical post-singularity stories.

Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?

For people who haven't read it: I would recommend it only if you are a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some Cosmere novels and would like a story that touches on some slightly more complex (and more LW-ish) themes than usual (and don't mind it being a bit darker than usual).

Comment author: 27chaos 03 November 2015 07:34:56PM 17 points [-]

“I’ve never been certain whether the moral of the Icarus story should only be, as is generally accepted, ‘don’t try to fly too high,’ or whether it might also be thought of as ‘forget the wax and feathers, and do a better job on the wings.’”

Stanley Kubrick

Comment author: Silver_Swift 05 November 2015 12:49:41PM 14 points [-]

Similarly:

I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

Randall Munroe

Comment author: James_Miller 20 July 2015 01:45:41PM 3 points [-]

It corrects an error people sometimes make in a bad situation: assuming things can't get worse, so any change can't be for the worse. Sansa had not been tortured by the psychopath in question while Theon had, so Theon better understood the price of defiance.

Comment author: Silver_Swift 21 July 2015 02:32:46PM 1 point [-]

Ok, fair enough. I still hold that Sansa was more rational than Theon at this point, but that error is one that is definitely worth correcting.

Comment author: James_Miller 01 July 2015 07:49:31PM *  14 points [-]

Sansa: "It can’t be worse."

Theon: "It can. It can always be worse."

Game of Thrones TV series

Part of the reason I supported the overthrow of Saddam Hussein was because I thought he was so bad that the alternative had to be better. I didn't take enough time to consider worse alternatives to him.

Comment author: Silver_Swift 20 July 2015 10:24:12AM 0 points [-]

Why is this a rationality quote? I mean, sure, it is technically true (for any situation you'll find yourself in), but that really shouldn't stop us from trying to improve the situation. Theon has basically given up all hope and is advocating compliance with a psychopath for fear of what he might do to you otherwise; that doesn't sound particularly rational to me.

In response to comment by [deleted] on Open Thread, Jun. 22 - Jun. 28, 2015
Comment author: Unknowns 23 June 2015 03:34:05AM 5 points [-]

Even adamzerner probably doesn't value his life at much more than, say, ten million, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that, your behavior will have to become pretty paranoid.

Comment author: Silver_Swift 23 June 2015 02:55:29PM *  0 points [-]

That is an issue with revealed preferences, not an indication of adamzerner's preference ordering. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for killing you", regardless of n; therefore the financial value of your own life is almost always infinite*.

*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap drops dramatically if you are not around to spend the money).
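A sketch of that footnote in symbols (my notation, and hypothetical utility functions, not anything from the thread):

```latex
% Assume the utility of money caps out: there is a bound C_dead on what
% money is worth to you when you are not around to spend it.
\[
  U(\text{dead}, n) \le C_{\text{dead}} \qquad \text{for all } n
\]
% If merely being alive, with no payment at all, beats that cap,
\[
  U(\text{alive}, 0) > C_{\text{dead}},
\]
% then the deal "n dollars in exchange for killing you" is refused for every
% finite n, even though U(alive, 0) itself is perfectly finite.
```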
