Comment author: Moss_Piglet 20 October 2013 08:32:28PM 4 points

"Don't value your pride."

Sorry to keep adding to the "why?" pile but do you mind explaining this one too?

Comment author: CoffeeStain 20 October 2013 08:51:19PM 2 points

For certain definitions of pride. Confidence is a focus on doing what you are good at, enjoying those things, and not avoiding doing them around others.

Pride is showing how good you are at things "just because you are able to," as if to prove to yourself what you supposedly already know, namely that you are good at them. If you were confident, you would spend your time being good at things, not demonstrating that you are so.

There might be good reasons to manipulate others. Just proving to yourself that you can is not one of them, not when stronger outside views on your ability are available elsewhere (for instance, by asking unbiased observers).

The Luminosity Sequence has a lot to say about this, and references known biases people have when assessing their abilities.

Comment author: timujin 20 October 2013 07:22:30PM 2 points

"Don't manipulate those you can out think, just because you are able to."

Why?

In response to comment by timujin on The best 15 words
Comment author: CoffeeStain 20 October 2013 08:13:51PM 1 point

Because your prior for "I am manipulating this person because it satisfies my values, rather than my pride" should be very low.

If it isn't, then here are four words for you:

"Don't value your pride."

Comment author: Vaniver 14 October 2013 09:10:47PM 2 points

The love of complexity without reductionism makes art; the love of complexity with reductionism makes science.

--E.O. Wilson

Comment author: CoffeeStain 14 October 2013 11:49:56PM 2 points

Whenever I have a philosophical conversation with an artist, invariably we end up talking about reductionism, with the artist insisting that if they give up on some irreducible notion, they feel their art will suffer. I've heard, from some of the world's best artists, notions ranging from "magic" to "perfection" to "muse" to "God."

It seems similar to the notion of free will, where the human algorithm must always insist it is capable of thinking about itself one level higher. The artist must always think of his art one level higher, and try to tap unintentional sources of inspiration. Nonreductionist views of either are confusions about how an algorithm feels on the inside.

Comment author: Mestroyer 05 October 2013 06:20:28AM 40 points

The market doesn't give a shit how hard you worked. Users just want your software to do what they need, and you get a zero otherwise. That is one of the most distinctive differences between school and the real world: there is no reward for putting in a good effort. In fact, the whole concept of a "good effort" is a fake idea adults invented to encourage kids. It is not found in nature.

--Paul Graham (When I saw this quote, I thought it had to have been posted before, but googling turned up nothing.)

Comment author: CoffeeStain 08 October 2013 07:44:08AM 4 points

The closest you can come to getting an actual "A for effort" is through creating cultural content, such as a Kickstarter project or starting a band. You'll get extra success when people see that you're interested in what you're doing, over and above its value as an indicator that what you'll produce is otherwise of quality. People want to be part of something that is being cared for, and in some cases would prefer it to lazily created perfection.

Though I'd still call it an "A for signalling effort."

Comment author: Panic_Lobster 05 October 2013 05:39:47AM 2 points

... I really don't think my syntax is that unclear.

Comment author: CoffeeStain 05 October 2013 08:13:35AM 9 points

Tough crowd.

Comment author: Panic_Lobster 05 October 2013 04:43:32AM 8 points

Today I taught a bunch of 5th grade kids how to convert decimals into fractions and vice versa.

Comment author: CoffeeStain 05 October 2013 05:19:28AM 3 points

A bunch of 5th grade kids taught you how to convert decimals to fractions?

Comment author: Manfred 30 September 2013 01:05:07AM -1 points

50%. Upon finding that the expected physical situation is undefined (a limit that does not converge), sensible agents should default to using a more limited set of information.

EDIT: All right then, if you downvoters are so smart, what would you bet if you were in sleeping beauty's place?

Comment author: CoffeeStain 02 October 2013 09:30:25PM 0 points

EDIT: All right then, if you downvoters are so smart, what would you bet if you were in sleeping beauty's place?

This is a fair point. Yours is an attempt at a real answer to the problem. My answer, and most of the others here, amount to saying that the problem is ill-defined, or that the physical situation it describes is impossible. But if you were actually Sleeping Beauty, waking up with a high prior on trusting the information you've been given, what else could you possibly answer?

If you had little reason to trust the information you've been given, the apparent impossibility of your situation would update that belief very strongly.

Comment author: CoffeeStain 30 September 2013 03:56:35AM 3 points

The expected value for "number of days lived by Sleeping Beauty" is an infinite series that diverges to infinity. If you think this is okay, then the Ultimate Sleeping Beauty problem isn't badly formed. Otherwise...
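
A minimal sketch of the divergence, assuming (hypothetically; the exact schedule depends on how the problem is formulated) that Beauty is woken 3^n times when the coin is flipped exactly n times:

```latex
% Hypothetical waking schedule: w_n = 3^n wakings after exactly n flips.
% P(exactly n flips) = (1/2)^n, so the expected number of days lived is
\[
  \mathbb{E}[\text{days}]
    = \sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n} 3^{n}
    = \sum_{n=1}^{\infty} \left(\frac{3}{2}\right)^{n}
    = \infty,
\]
% a geometric series with ratio 3/2 > 1, which diverges.
```

Any schedule whose number of wakings grows at least as fast as 2^n gives the same result, since the terms w_n/2^n then fail to shrink to zero.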

Comment author: Coscott 30 September 2013 12:57:48AM 4 points

If you answered 1/2 to the original Sleeping Beauty Problem, the answer to this one is straightforward to calculate. The probability of exactly n flips is (1/2)^n, so the probability of an even number of flips is (1/2)^2+(1/2)^4+(1/2)^6+...=1/3.
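
Spelling the sum out, it is a geometric series with ratio (1/2)^2 = 1/4:

```latex
\[
  \sum_{k=1}^{\infty} \left(\frac{1}{2}\right)^{2k}
    = \sum_{k=1}^{\infty} \left(\frac{1}{4}\right)^{k}
    = \frac{1/4}{1 - 1/4}
    = \frac{1}{3}.
\]
```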

If you answered 1/3 to the original Sleeping Beauty Problem, I do not think that there is any sensible answer to this one. I do not however consider this strong evidence that the answer of 1/3 is incorrect for the original problem. This could be an example of evidence for infinite set atheism. Analyzing this problem requires taking as given that the experiment can actually be repeated an arbitrarily large number of times, and we have thus far seen mostly evidence that this is not possible in our universe.

Comment author: CoffeeStain 30 September 2013 03:54:33AM 0 points

If you answered 1/3 to the original Sleeping Beauty Problem, I do not think that there is any sensible answer to this one. I do not however consider this strong evidence that the answer of 1/3 is incorrect for the original problem.

To expand on this: 1/3 is also the answer to the "which odds should I precommit myself to take?" question, and it uses the same math as SIA to yield that result for the original problem. So it is likewise undefined which odds one should take in this problem. Precommitting to odds seems less controversial, so we should transplant our indifference to the apparent paradox there over to the problem here.
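
To make "the same math" concrete: the precommitment (SIA-style) calculation weights each outcome by its number of wakings. Writing w_n for the number of wakings after exactly n flips (my notation, not part of the problem statement), the odds-setting probability would be

```latex
\[
  P(\text{even number of flips})
    = \frac{\sum_{n \,\mathrm{even}} \left(\frac{1}{2}\right)^{n} w_{n}}
           {\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n} w_{n}}.
\]
```

In the original problem the analogous ratio is finite, (1/2 · 1)/(1/2 · 1 + 1/2 · 2) = 1/3; here the denominator is exactly the expected number of days discussed above, which diverges, so the ratio, and with it the odds to precommit to, is undefined.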

Comment author: TheOtherDave 18 September 2013 03:02:31PM 2 points

Well, as I said initially, I prefer to toss out all this "terminal value" stuff and just say that we have various values that depend on each other in various ways, but am willing to treat "terminal value" as an approximate term. So the possibility that X's valuation of sex with children actually depends on other things (e.g. his valuation of pleasure) doesn't seem at all problematic to me.

That said, if you'd rather start somewhere else, that's OK with me. On your account, when we say X is a pedophile, what do we mean? This whole example seems to depend on his pedophilia to make its point (though I'll admit I don't quite understand what that point is), so it seems helpful in discussing it to have a shared understanding of what it entails.

Regardless, wrt your last paragraph, I think a properly designed accompanying AI replies "There is a large set of possible future entities that include you in their history, and which subset is 'really you' is a judgment each judge makes based on what that judge values most about you. I understand your condition to mean that you want to ensure that the future entity created by the modification preserves what you value most about yourself. Based on my analysis of your values, I've identified a set of potential self-modification options I expect you will endorse; let's review them."

Well, it probably doesn't actually say all of that.

Comment author: CoffeeStain 19 September 2013 09:50:16PM 0 points

On your account, when we say X is a pedophile, what do we mean?

Like other identities, it's a mish-mash of self-reporting, introspection (and extrospection of internal logic), value-function extrapolation (from actions), and the ability, in a given context, to carry out the associated action. The value of this thought experiment is to suggest that the pedophile clearly thought that "being" a pedophile had to do not with actually fulfilling his wants, but with wanting something in particular. He wants to want something, whether or not he gets it.

This illuminates why designing AIs to carry out the intent of their masters is not well-defined. Is the AI allowed to say that the agent's values would be better satisfied by modifications the master would not endorse?

This was the point of my suggestion that the best modification is into something that is actually "not really" the master, in the way the master would endorse (i.e. a clone of the happiest agent possible), even though he'd clearly be happier if he weren't himself. Introspection tends to skew an agent's actions away from easily available but flighty happinesses, and toward less flawed self-interpretations. Maximal introspection would shed identity entirely and become entirely altruistic. But nobody can introspect that far, only as far as they can be hand-held. We should design our AIs to allow us our will, but to hold our hands as far as possible as we peer within at our flaws and inconsistent values.
