Comment author: SforSingularity 18 August 2009 11:34:25AM *  3 points [-]

I suspect that people here have more of a problem with willpower/motivation than average, so they press "upvote!" on anything that promises, however vaguely, to solve their problem.

EDIT:

Wikipedia states that there is "scant research" "which suggests that the disorder is caused by mineral deficiency in many cases", and also lists other possible causes, such as OCD. So Pica may or may not be mostly related to deficiencies. We also know that the incidence of Pica is low in general (probably < 5% of people have it), so conclusions drawn from people who have Pica may not generalize well.

Comment author: conchis 18 August 2009 12:13:10PM *  2 points [-]

this post infers possible causation based upon a sample size of 1

Eh? Pica is a known disorder. The sample size for the causation claim is clearly more than 1.

[ETA: In case anyone's wondering why this comment no longer makes any sense, it's because most of the original parent was removed after I made it, and replaced with the current second para.]

Comment author: anonym 18 August 2009 04:41:08AM 6 points [-]

Would you agree that higher-quality posts should generate more discussion?

No. A good troll can get far more comments than almost any high-quality non-troll post. And you also cannot ignore the difficulty of the post, or how much knowledge it presupposes (and thus how small its potential audience is), or whether the post is on a topic that everybody is an expert in (e.g., politics, male-female relations, religion).

Comment author: conchis 18 August 2009 09:22:30AM *  5 points [-]

I for one comment far more on Phil's posts when I think they're completely misguided than I do otherwise. Not sure what that says about me, but if others did likewise, we would predict precisely the relationship Phil is observing.

Comment author: taw 17 August 2009 02:07:37AM 2 points [-]

There's a lot of counterevidence to most claims about X making people happy.

For example 1:

Most people were no more satisfied with life after marriage than they were prior to marriage [...] Study results, for example, showed spikes in respondents' happiness levels both before and after marriage, but the increase was minimal—approximately one-tenth of one point on an 11-point scale—and was followed by a return to prior levels of happiness.

Also 2 (which is mostly about children of single vs married parents, but the same story - getting married doesn't improve anything).

Comment author: conchis 17 August 2009 02:40:05AM *  0 points [-]

Interesting. All the other evidence I've seen suggests that committed relationships do make people happier, so I'd be interested to see how these apparently conflicting findings can be resolved.

Part of the difference could just be the focus on marriage vs. stable relationships more generally (whether married or not): I'm not sure there's much reason to think that a marriage certificate is going to make a big difference in and of itself (or that anyone's really claiming that it would). In fact, there's some, albeit limited, evidence that unmarried couples are happier on average than married ones.

I'll try to dig up references when I have a bit more time. Don't suppose you happen to have one for the actual research behind your first link?

In response to comment by anonym on Calibration fail
Comment author: MichaelVassar 16 August 2009 03:55:34PM 1 point [-]

Honestly, this happens to me far too often.

Comment author: conchis 16 August 2009 04:22:20PM *  0 points [-]

Me too. It gets especially embarrassing when you end up telling someone a story about a conversation they themselves were involved in.

Comment author: Aurini 13 August 2009 09:45:06PM 2 points [-]

I might be missing something, but it seems as if you're needlessly complicating the situation.

First of all, I'm not convinced that sentences ought to be able to self-reference. The example you give, "All complete sentences written in English contain at least one vowel" isn't necessarily self-referencing. It's stating a rule which is inevitably true, and which it happens to conform to. I could equally well say "All good sentences must at least one verb." This is not a good sentence, but it does communicate a grammatical rule.

But none of this has a priori truth - they just happen to conform to accepted standards - and I don't think they demonstrate the usefulness of self-referencing. English grammar allows you to self-reference, but defining "Cat (n): a cat" is a tautology. English also allows you to ask the question "What happened before time began?" and while that is a perfectly valid sentence, it's a meaningless question.

As a corollary, mathematical notation allows me to write "2+2=5" (note - the person who writes this down isn't claiming that 2+2=5, she is far better versed than Aurini in the reasons it equals 4, she is just demonstrating that she can write down nonsense). This doesn't require a defense of arithmetic; it's simple enough to point out that the equation is nonsense.

"This sentence is false." "What happened before time?" "My pet elephant that I named George doesn't exist." I don't see that a rebuttal is necessary, meaningful, or even possible in these situations. It's enough to say "That's stupid," and move on to something interesting.

Comment author: conchis 13 August 2009 10:52:59PM *  2 points [-]

Warning, nitpicks follow:

The sentence "All good sentences must at least one verb." has at least one verb. (It's an auxiliary verb, but it's still a verb. Obviously this doesn't make it good; but it does detract from the point somewhat.)

"2+2=5" is false, but it's not nonsense.

In response to comment by conchis on Utilons vs. Hedons
Comment author: timtyler 13 August 2009 08:10:56PM *  -1 points [-]

What the Wiki says is: "Utilons generated by fulfilling base desires are hedons". I think it follows that Utilons and Hedons have the same units.

I don't much like the Wiki on these issues - but I do think it a better take on the definitions than this post.

Comment author: conchis 13 August 2009 08:25:19PM *  1 point [-]

I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)

As it happens, I'm also happy to object to the claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don't even have a fixed origin or scale: the preference orderings they represent are invariant to positive affine transforms of the utilons.)
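The invariance point can be illustrated with a quick sketch (my own example, not from the thread; the lotteries and functions are arbitrary): any positive affine transform of a utility function represents the same preference ordering, which is why utilons have no fixed origin or scale.

```python
# Illustration (not from the thread): a preference ordering represented by a
# utility function u is also represented by any v(x) = a*u(x) + b with a > 0.
def expected_utility(u, lottery):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

lottery_a = [(0.5, 10.0), (0.5, 0.0)]  # risky
lottery_b = [(1.0, 4.0)]               # sure thing

u = lambda x: x ** 0.5        # one representation of the preferences
v = lambda x: 3 * u(x) + 7    # positive affine transform of u

prefers_a_under_u = expected_utility(u, lottery_a) > expected_utility(u, lottery_b)
prefers_a_under_v = expected_utility(v, lottery_a) > expected_utility(v, lottery_b)
print(prefers_a_under_u, prefers_a_under_v)  # the two orderings always agree
```

With a negative `a` the ordering would flip, which is the sense in which only positive affine transforms preserve the representation.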

In response to comment by conchis on Utilons vs. Hedons
Comment author: DanArmak 13 August 2009 07:03:27PM 1 point [-]

There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)

Yes, that was the point :-) On my reading of OP, this is the meaning of utility that was intended.

Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.

Yes. Here's my current take:

The OP argument demonstrates the danger of using a function-maximizer as a proxy for some other goal. If there can always exist a chance to increase f by an amount proportional to its previous value (e.g. double it), then the maximizer will fall into the trap of taking ever-increasing risks for ever-increasing payoffs in the value of f, and will lose with probability approaching 1 in a finite (and short) timespan.

This qualifies as losing if the original goal (the goal of the AI's designer, perhaps) does not itself have this quality. This can be the case when the designer sloppily specifies its goal (chooses f poorly), but perhaps more interesting/vivid examples can be found.
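The "loses with probability approaching 1" claim above can be sketched numerically (a hypothetical illustration, not from the thread): if each double-or-nothing gamble is won independently with probability p, the chance of surviving n of them in a row is p^n, which collapses quickly even for favourable p.

```python
# Sketch: survival probability of a maximizer that keeps accepting
# independent double-or-nothing gambles, each won with probability p_win.
def survival_probability(p_win: float, n_rounds: int) -> float:
    """Chance of winning n independent double-or-nothing gambles in a row."""
    return p_win ** n_rounds

# Even with generous 60% odds per gamble, ruin is near-certain quite soon.
for n in (1, 10, 50):
    print(f"p=0.6, n={n}: survive with probability {survival_probability(0.6, n):.2e}")
```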

Comment author: conchis 13 August 2009 07:35:48PM *  0 points [-]

To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).

You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g) that embodies your risk-preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].

If g(x) is only ordinal, this won't be especially helpful, but if you had a reasonable way of establishing an origin and scale it would seem potentially useful. Note also that f could be unbounded even if g were bounded, and vice-versa. In theory, that seems to suggest that taking ever-increasing risks to achieve a bounded goal could be rational, if one were sufficiently risk-loving (though it does seem unlikely that anyone would really be that "crazy"). Also, one could avoid ever taking such risks, even in the pursuit of an unbounded goal, if one were sufficiently risk-averse that one's f function were bounded.
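The E[f(g(x))] construction can be made concrete with a small sketch (my own illustration; the particular functions and numbers are arbitrary assumptions): a concave f declines a double-or-nothing gamble on goal attainment g that a linear f is indifferent to.

```python
import math

def expected_f(f, outcomes):
    """Expected value of f(g) over (probability, g_value) outcome pairs."""
    return sum(p * f(g) for p, g in outcomes)

g_now = 100.0
gamble = [(0.5, 2 * g_now), (0.5, 0.0)]  # Omega doubles g or zeroes it
certain = [(1.0, g_now)]                 # decline and keep current g

risk_neutral = lambda g: g                # linear f: risk-neutrality
risk_averse = lambda g: math.log(g + 1)   # concave f: risk-aversion

neutral_indifferent = expected_f(risk_neutral, gamble) == expected_f(risk_neutral, certain)
averse_declines = expected_f(risk_averse, gamble) < expected_f(risk_averse, certain)
print(neutral_indifferent, averse_declines)  # True True
```

A convex f (risk-tolerance) would instead accept the gamble, which is the sense in which sufficiently risk-loving preferences could rationally take ever-increasing risks over a bounded g.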

P.S.

On my reading of OP, this is the meaning of utility that was intended.

You're probably right.

In response to comment by conchis on Utilons vs. Hedons
Comment author: DanArmak 13 August 2009 04:35:10PM 0 points [-]

I guess what I'm suggesting, in part, is that the actual problem at hand isn't well-defined, unless you specify what you mean by utility in advance.

Utility means "the function f, whose expectation I am in fact maximizing". The discussion then indeed becomes whether f exists and whether it can be doubled.

My point is that you can't learn anything interesting from the thought experiment if Omega is offering to double f(x), so we shouldn't set it up that way.

That was the original point of the thread where the thought experiment was first discussed, though.

The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive. This is in view of the original goals you want to achieve, to which maximizing f is a proxy - whether a designed one (in AI) or an evolved strategy (in humans).

"Valutilons" are specifically defined to be a measure of what we value.

If "we" refers to humans, then "what we value" isn't well defined.

Comment author: conchis 13 August 2009 05:23:55PM *  1 point [-]

Utility means "the function f, whose expectation I am in fact maximizing".

There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)

The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive.

Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.

In response to comment by conchis on Utilons vs. Hedons
Comment author: DanArmak 13 August 2009 04:35:10PM 0 points [-]

I guess what I'm suggesting, in part, is that the actual problem at hand isn't well-defined, unless you specify what you mean by utility in advance.

Utility means "the function f, whose expectation I am in fact maximizing". The discussion then indeed becomes whether f exists and whether it can be doubled.

My point is that you can't learn anything interesting from the thought experiment if Omega is offering to double f(x), so we shouldn't set it up that way.

That was the original point of the thread where the thought experiment was first discussed, though.

The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive. This is in view of the original goals you want to achieve, to which maximizing f is a proxy - whether a designed one (in AI) or an evolved strategy (in humans).

"Valutilons" are specifically defined to be a measure of what we value.

If "we" refers to humans, then "what we value" isn't well defined.

Comment author: conchis 13 August 2009 05:04:06PM *  0 points [-]

Crap. Sorry about the delete. :(

In response to comment by conchis on Utilons vs. Hedons
Comment author: DanArmak 13 August 2009 02:47:11PM 0 points [-]

Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way (f is increasing in V(x)). Of course we can find an f such that a doubling in V translates to adding a constant to f, or if we like, even an infinitesimal increase in f. But all this means is that Omega is offering us the wrong thing, which we don't really value.

Comment author: conchis 13 August 2009 04:59:30PM *  1 point [-]

Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?

It wasn't intended to help with the the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don't find the problem specified in terms of f(x) very interesting.

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way

You're assuming the output of V(x) is ordinal. It could be cardinal.

all this means is that Omega is offering us the wrong thing

I'm afraid I don't understand what you mean here. "Wrong" relative to what?

which we don't really value.

Eh? Valutilons were defined to be something we value (ETA: each of us individually, rather than collectively).
