Comment author: Stuart_Armstrong 02 August 2014 08:44:46PM 0 points [-]

only that they would be VNM-rational

But if the agent can't be subject to Dutch books, what's the point of being VNM-rational? (in fact, in my construction, the agent need not be initially complete).

But the main point is that VNM-rationality isn't clearly defined. Is it over all possible decisions, or just over the decisions the agent actually faces? Given that rationality is often defined on Less Wrong in a very practical way (generalised "winning"), I see no reason to need to assume the first. It weakens the arguments for VNM-rationality, making it a philosophical ideal rather than a practical tool.

And so while it's clear that an AI would want to make itself into an unlosing agent, it's less clear that it would want to make itself into an expected utility maximiser. In fact, it's very clear that in some cases it wouldn't: if it knew that outcomes A and B were impossible, and it currently didn't have preferences between them, then there is no reason it would ever bother to develop preferences there (barring social signalling and similar).

Comment author: sebmathguy 03 August 2014 06:12:17AM *  1 point [-]

There's actually no need to settle for finite truncations of a decision agent. The unlosing decision function (on lotteries) can be defined in first-order logic, and your proof that there are finite approximations of a decision function is sufficient to use the compactness theorem to produce a full model.
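For readers unfamiliar with the compactness argument, here is a rough sketch; the axiomatization below is my own illustration, not taken from the original proof:

```latex
Let $L$ be the set of lotteries and let $T$ be the first-order theory, in a
language with one binary relation symbol $\succeq$ and a constant for each
lottery, whose axioms are:
\begin{align*}
&\forall x\,\forall y\;(x \succeq y \lor y \succeq x)
  && \text{(completeness)}\\
&\forall x\,\forall y\,\forall z\;
  (x \succeq y \land y \succeq z \to x \succeq z)
  && \text{(transitivity)}\\
&\varphi_C \text{ for each finite unlosing constraint } C
  && \text{(unlosing axioms)}
\end{align*}
Every finite subset of $T$ mentions only finitely many lotteries and
constraints, so a finite unlosing approximation is a model of that subset.
By the compactness theorem, $T$ itself has a model: a complete, transitive,
unlosing preference relation on all of $L$.
```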

Comment author: sebmathguy 02 May 2014 01:19:44AM 2 points [-]

I've just made an enrollment deposit at the University of Illinois at Urbana-Champaign, and I'm wondering if any other rationalists are going, and if so, would they be interested in sharing a dorm?

Comment author: malcolmocean 26 July 2013 01:43:05AM *  2 points [-]

"What is the experience of the other people I'm interacting with?"

I have sometimes found empathy to be a complicated concept, but this question really cuts to the heart of it and causes your brain to model the situation from the other person's perspective and use that in your decision-making.

Intriguingly, this seems to even apply to interactions with my future selves. If I don't ask this question or one like it, I'm likely to write a massive todo list for myself—probably a completely impossible list, and delegate it to my tomorrow!self with very little thought. Then when that self encounters the list, it's so overwhelming and inconsiderate that I find it hard to deal with at all. If instead, I stop to wonder what the experience will be of my future self encountering the tasks I've delegated for it, I realize that it makes sense to prioritize much of that up front, and to frame the delegation process with more context around why my past self wanted the thing done. This is basically the difference between saying "Future self, do this" and "Future self, at the moment it seems to make sense to me that you do this stuff, for reasons X, Y, Z." or some similarly empathetic request... which is much less likely to produce e.g. reactance.

Comment author: sebmathguy 28 July 2013 12:50:46AM 0 points [-]

Your link is messed up.

Comment author: Viliam_Bur 25 July 2013 05:56:44PM *  -2 points [-]

It's easier to think about unpredictability without picturing Many Worlds - e.g. do we say "don't worry about driving too fast because there will be plenty of worlds where we don't kill anybody?"

Yes, the problem is that it is easy to imagine Many Worlds... incorrectly.

We care about the ratio of branches where we survive, and yet, starting with the Big Bang, the ratio of branches where we ever existed is almost zero. So, uhm, why exactly should we be okay with this almost zero, but be very careful about not making it even smaller? But this is what we do (before we start imagining Many Worlds).

So for proper thinking perhaps it is better to go with the collapse interpretation. (Until someone starts making incorrect conclusions about mysterious properties of randomness, in which case it is better to think about Many Worlds for a moment.)

Comment author: sebmathguy 26 July 2013 10:59:11AM 0 points [-]

Perhaps instead of immediately giving up and concluding that it's impossible to reason correctly with MWI, it would be better to take the Born rule at face value as a predictor of subjective probability.

Comment author: sebmathguy 24 July 2013 07:35:22AM 6 points [-]

I would immediately download this iff it had a GUI.

Comment author: Randaly 19 July 2013 01:22:56AM -1 points [-]

My understanding was that this was about whether the singularity was "AI going beyond 'following its programming'", with goal-modification being an example of how an AI might go beyond its programming.

Comment author: sebmathguy 23 July 2013 06:11:10AM -1 points [-]

The AI is a program. Running on a processor. With an instruction set. Reading the instructions from memory. These instructions are its programming. There is no room for acausal magic here. When the goals get modified, they are done so by a computer, running code.

In response to comment by conchis on Universal Law
Comment author: Rixie 29 March 2013 04:20:08PM *  -1 points [-]

Hem hem.

http://lesswrong.com/lw/ic/the_virtue_of_narrowness/

There's a difference, I think. It's just that we haven't quite grasped it yet. I had it until I read your post, and then I lost it. It's like in Harry Potter and the Methods of Rationality when Harry learns Partial Transfiguration.

In this Harry Potter universe, you can use magic to change things into other things, but you can't change only part of a thing into another thing. For example, you can change a wall into marshmallow in order to escape a room, but you would have to expend the energy to change the entire wall, and not just make yourself a little marshmallow hole.

Well Harry learns to violate this rule with science, because nothing in the world is really connected, it's just an illusion in our heads. So Harry can now transfigure only part of a rubber eraser into steel, if he wants to. This is all making perfect sense, right?

But Professor McGonagall is skeptical:

"Harry's idea stemmed from simple ignorance, nothing more. If you changed half of a metal ball into glass, the whole ball had a different Form. To change the part was to change the whole, and that meant removing the whole Form and replacing it with a different one. What would it even mean to Transfigure only half of a metal ball? That the metal ball as a whole had the same Form as before, but half that ball now had a different Form?"

See, that makes sense too. And now everyone is confused. But partial transfiguration does exist (well, in the story) and the difference is that Harry could change a spot on the metal ball to glass in five minutes, instead of the thirty minutes it would have taken him to change the entire metal ball into a metal ball with a glass spot.

There's probably a distinction between laws with exceptions and new laws that you and I just don't know about yet.

Anyone care to enlighten us?

In response to comment by Rixie on Universal Law
Comment author: sebmathguy 09 July 2013 12:36:19AM 4 points [-]

Consider indicating that your post contains spoilers.

Comment author: JoshuaZ 02 July 2013 04:22:34AM 4 points [-]

The thing is that PTSD is really not that binary; like many mental illnesses, it has a wide range of symptoms and severity levels. What Nancy is talking about is how one death can push one drastically over, skipping much of the middle range where it might be ambiguous whether one's symptoms were severe enough to be diagnosable. (Disclaimer: while I've heard the same sort of things NancyLebovitz is talking about, I'm not aware of any studies actually supporting this.)

Comment author: sebmathguy 02 July 2013 04:39:01AM 1 point [-]

Got it. I was previously having difficulty making that belief pay rent.

Comment author: NancyLebovitz 30 June 2013 08:26:50PM 1 point [-]

I've read the same thing about Nazi soldiers, and also that they couldn't handle another early method of killing: driving prisoners around in closed trucks with the exhaust fed into the back compartment.

It's not that the thousands have no impact, it's that one person can make a much larger emotional difference.

I've also heard that for soldiers, seeing one more death or injury can be the tipping point into PTSD.

Comment author: sebmathguy 02 July 2013 04:15:34AM 0 points [-]

I've also heard that for soldiers, seeing one more death or injury can be the tipping point into PTSD.

Am I missing something, or does this follow trivially from PTSD being binary and the set of possible body counts being the natural numbers?
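(Spelling out the trivial version, in my own phrasing:)

```latex
Suppose PTSD status is a monotone $\{0,1\}$-valued function $f$ of the count
$n \in \mathbb{N}$ of deaths witnessed. If $f$ ever takes the value $1$,
then by the well-ordering of $\mathbb{N}$ there is a least $n^*$ with
$f(n^*) = 1$, and the $n^*$-th death is exactly the "one more death"
that serves as the tipping point.
```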

Comment author: ShardPhoenix 29 June 2013 11:40:09PM 1 point [-]

Poll for test takers:

Programming experience vs. whether you got the correct results (Here "experienced" means "professional or heavy user of programming" and "moderate" means "occasional user of programming"):

Did you think this was fair as a quick test?


Comment author: sebmathguy 02 July 2013 04:05:08AM *  2 points [-]

I'm a new user with -1 karma who therefore can't vote, so I'll combat censorship bias like this:

Moderate programmer, correct

Yes
