Comment author: Qiaochu_Yuan 19 June 2014 04:52:34PM 27 points [-]

a utility function is the structure any consistent preference ordering that respects probability must have.

This is the sort of thing I mean when I say that people take utility functions too seriously. I think the von Neumann-Morgenstern theorem is much weaker than it initially appears. It's full of hidden assumptions that are constantly violated in practice, e.g. that an agent can know probabilities to arbitrary precision, can know utilities to arbitrary precision, can compute utilities in time to make decisions, makes a single plan at the beginning of time about how they'll behave for eternity (or else you need to take into account factors like how the agent should behave in order to acquire more information in the future and that just isn't modeled by the setup of vNM at all), etc.

Comment author: redlizard 19 June 2014 10:19:01PM 2 points [-]

It's full of hidden assumptions that are constantly violated in practice, e.g. that an agent can know probabilities to arbitrary precision, can know utilities to arbitrary precision, can compute utilities in time to make decisions, makes a single plan at the beginning of time about how they'll behave for eternity (or else you need to take into account factors like how the agent should behave in order to acquire more information in the future and that just isn't modeled by the setup of vNM at all), etc.

Those are not assumptions of the von Neumann-Morgenstern theorem, nor of the concept of utility functions itself. Those are assumptions of an intelligent agent implemented by measuring its potential actions against an explicitly constructed representation of its utility function.

I get the impression that you're conflating the mathematical structure that is a utility function on the one hand, and representations thereof as a technique for ethical reasoning on the other hand. The former can be valid even if the latter is misleading.
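The distinction is easy to see in code. Below is a minimal sketch (with made-up outcomes and hypothetical utility numbers) of the uncontroversial direction of the correspondence: once preferences over lotteries are defined as "compare expected utilities", consistency properties like the vNM independence axiom hold automatically. Nothing here requires an agent to explicitly compute these numbers in practice.

```python
# Hypothetical utilities over outcomes; the specific values are arbitrary.
U = {"apple": 1.0, "banana": 2.5, "cherry": 0.0}

def expected_utility(lottery):
    """lottery: dict mapping outcome -> probability (summing to 1)."""
    return sum(p * U[o] for o, p in lottery.items())

def prefers(a, b):
    """The preference ordering induced by the utility function."""
    return expected_utility(a) > expected_utility(b)

def mix(p, a, b):
    """The compound lottery that yields lottery a with probability p, else b."""
    outcomes = set(a) | set(b)
    return {o: p * a.get(o, 0) + (1 - p) * b.get(o, 0) for o in outcomes}

A = {"banana": 1.0}                  # a sure banana
B = {"apple": 0.5, "cherry": 0.5}    # a 50/50 apple/cherry gamble
C = {"apple": 1.0}                   # a sure apple

# Independence axiom: if A is preferred to B, then mixing both with any
# third lottery C (at any probability p > 0) preserves the preference.
assert prefers(A, B)
assert prefers(mix(0.3, A, C), mix(0.3, B, C))
```

The vNM theorem is the converse, harder direction: any preference ordering over lotteries satisfying the axioms must be representable this way for some utility function.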

Comment author: redlizard 19 June 2014 08:13:23AM 12 points [-]

It's more than a metaphor; a utility function is the structure any consistent preference ordering that respects probability must have. It may or may not be a useful conceptual tool for practical human ethical reasoning, but "just a metaphor" is too strong a judgment.

Comment author: Eliezer_Yudkowsky 17 June 2014 09:00:22PM 8 points [-]

Another way of swapping around the question is to ask under what circumstances Jacob Steinhardt would refuse to use a PRNG rather than an RNG because the PRNG wasn't random enough, and whether there's any instance of such a refusal that doesn't involve an intelligent adversary (or that ancient crude PRNG with bad distributional properties that everyone always cites when this topic comes up; that is, has anything like that happened more recently with an OK-appearing PRNG).

Obviously I don't intend to take a stance on the math-qua-math question of P vs. BPP. But to the extent that someone has to assert that an algorithm's good BPP-related properties only work for an RNG rather than a PRNG, and there's no intelligent adversary of any kind involved in the system, I have to question whether this could reasonably happen in real life. Having written that sentence, it doesn't feel very clear to me. What I'm trying to point at generally is that unless I have an intelligent adversary I don't want my understanding of a piece of code to depend on whether a particular zero bit is "deterministic" or "random". I want my understanding to say that the code has just the same effect once the zero is generated, regardless of what factors generated the zero; I want to be able to screen off the "randomness" once I've looked at the output of that randomness, and just ask about the effectiveness of using a zero here or a one there. Furthermore, I distrust any paradigm which doesn't look like that, and reject it as something I could really-truly believe, until the business about "randomness" has been screened off and eliminated from the analysis. Unless I'm trying to evade a cryptographic adversary who really can predict me if I choose the wrong PRNG or write down my random bits someplace that someone else can see them, so that writing down the output of an RNG and then feeding it into the computation as a deterministic constant is genuinely worse because my adversary might sneak a look at the RNG's output if I left it written down anywhere. Or I'm trying to randomize a study and prevent accidental correlations with other people's studies, so I use an RNG just in case somebody else used a similar PRNG.

But otherwise I don't like my math treating the same bit differently depending on whether it's "random" or "deterministic" because its actual effect on the algorithm is the same and ought to be screened off from its origins once it becomes a 1 or 0.
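The screening-off point can be made concrete with a small sketch (the algorithm and data here are my own illustrative choices, not anything from the discussion above): a randomized algorithm's behavior is a deterministic function of the bits it consumes, so feeding it the same bits from a seeded PRNG and from a pre-recorded "tape" of those bits produces bit-for-bit identical runs.

```python
import random

def randomized_quicksort(xs, next_bit):
    """Quicksort whose pivot choice consumes bits from next_bit()."""
    if len(xs) <= 1:
        return list(xs)
    # Build a pivot index from raw bits (crude, but deterministic in the bits).
    i = 0
    for _ in range(len(xs).bit_length()):
        i = (i << 1) | next_bit()
    pivot = xs[i % len(xs)]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return (randomized_quicksort(left, next_bit) + mid +
            randomized_quicksort(right, next_bit))

data = [5, 3, 8, 1, 9, 2, 7]

# Source 1: bits drawn live from a seeded PRNG.
rng = random.Random(42)
out1 = randomized_quicksort(data, lambda: rng.getrandbits(1))

# Source 2: the same bits, written down in advance as a deterministic constant.
rng = random.Random(42)
tape = iter([rng.getrandbits(1) for _ in range(1000)])
out2 = randomized_quicksort(data, lambda: next(tape))

assert out1 == out2 == sorted(data)  # identical behavior once the bits are fixed
```

The origin of the bits only matters if something else in the world (an adversary, a correlated study) can condition on that origin.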

(And there's also a deep Bayesian issue here regarding, e.g., our ability to actually look at the contents of an envelope in the two-envelope problem and update our prior about amounts of money in envelopes to arrive at the posterior, rather than finding it intuitive to think that we picked an envelope randomly and that the randomized version of this algorithm will initially pick the envelope containing the larger amount of money half the time, which I think is a very clear illustration of the Bad Confused Thoughts into which you're liable to be led down a garden-path, if you operate in a paradigm that doesn't find it intuitive to look at the actual value of the random bit and ask about what we think about that actual value apart from the "random" process that supposedly generated it. But this issue the margins are too small to contain.)

Is that helpful?

Comment author: redlizard 18 June 2014 08:56:50AM 1 point [-]

A more involved post about those Bad Confused Thoughts and the deep Bayesian issue underlying it would be really interesting, when and if you ever have time for it.

Comment author: redlizard 17 June 2014 05:57:12AM 5 points [-]

Upvoted for the simple reason that this is probably the first article I've EVER seen with a title of the form 'discussion about <something between quotes>' which is in fact about the quoted term, rather than the concept it refers to.

Comment author: redlizard 10 June 2014 04:52:29PM 8 points [-]

As a point of interest, I want to note that behaving like an illiterate immature moron is a common tactic for (usually banned) video game automation bots when faced with a moderator who is onto you, for exactly the same reason used here -- if you act like someone who just can't communicate effectively, it's really hard for others to reliably distinguish between you and a genuine foreign 13-year-old who barely speaks English.

Comment author: asr 23 May 2014 02:01:12PM 5 points [-]

Eliezer thinks the phrase 'worst case analysis' should refer to the 'omega' case.

"Worst case analysis" is a standard term of art in computer science that shows up as early as second-semester programming, and Eliezer will be better understood if he uses the standard term in the standard way.

A computer scientist would not describe the "omega" case as random -- if the input is correlated with the random number source in a way that is detectable by the algorithm, the random numbers are by definition not random.

In response to comment by asr on Can noise have power?
Comment author: redlizard 25 May 2014 07:22:09PM *  3 points [-]

"Worst case analysis" is a standard term of art in computer science that shows up as early as second-semester programming, and Eliezer will be better understood if he uses the standard term in the standard way.

Actually, in the context of randomized algorithms, I've always seen the term "worst case running time" refer to Oscar's case 6, and "worst-case expected running time" -- often somewhat misleadingly simplified to "expected running time" -- refer to Oscar's case 2.

A computer scientist would not describe the "omega" case as random -- if the input is correlated with the random number source in a way that is detectable by the algorithm, the random numbers are by definition not random.

A system that reliably behaves like the omega case is clearly not random. However, a random system such as case 2 may still occasionally behave like omega, with probability epsilon, and it is not at all unreasonable or uncommon to require your algorithm to work efficiently even in those rare cases. Thus, one might optimize a random system by modelling it as an omega system, and demanding that it works well enough even in that context.
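The two notions can be separated with a toy experiment (randomized quicksort is my own illustrative example here, and the "unlucky" pivot rule stands in for an omega-like worst case that genuine randomness hits only with vanishing probability): on a fixed input, the expected comparison count over the random bits is O(n log n), but there exist unlucky bit sequences that drive it to Theta(n^2).

```python
import random

def quicksort_comparisons(xs, choose_pivot):
    """Count comparisons made by quicksort with the given pivot rule."""
    if len(xs) <= 1:
        return 0
    p = choose_pivot(xs)
    left = [x for x in xs if x < p]
    right = [x for x in xs if x > p]
    return (len(xs) - 1 + quicksort_comparisons(left, choose_pivot)
            + quicksort_comparisons(right, choose_pivot))

n = 200
worst_input = list(range(n))  # distinct, already-sorted elements

# Omega-like luck: the "random" pivot happens to be the minimum every time,
# giving the true worst-case running time of n(n-1)/2 comparisons.
unlucky = quicksort_comparisons(worst_input, lambda xs: xs[0])

# Typical luck: average over genuinely random pivot choices, approximating
# the worst-case *expected* running time of ~2 n ln n comparisons.
rng = random.Random(0)
trials = [quicksort_comparisons(worst_input, rng.choice) for _ in range(50)]
expected = sum(trials) / len(trials)

assert unlucky == n * (n - 1) // 2   # Theta(n^2): 19900 comparisons
assert expected < unlucky / 5        # empirically far below the worst case
```

Demanding that the algorithm degrade gracefully even when the bits come out omega-like is exactly demanding a bound on the first quantity, not just the second.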

Comment author: shminux 21 May 2014 12:32:07AM 1 point [-]

What caused the initial emergence of cooperative strategies in environments of PD-type?

I'd think that cooperative strategies emerge in non-PD-type situations, where individual defection is strictly worse than cooperation (e.g. hunting large prey in packs). When the environment changes toward PD-type (e.g. shortage of large prey means not every pack member is fed), some individuals evolve to defect. However, having too many of them results in reduced benefit for everyone, so the defection mutation never spreads too widely (e.g. a pack where everyone starts to defect by fighting and possibly killing others for their food share soon becomes too small to hunt effectively). Instead of straight up defection, other mutations provide more fitness without significant detriment (e.g. pack hierarchy with the strongest thriving and the weakest dying out).

The summary you quoted seems to imply something like this. I am not familiar with the actual research on the topic, however; feel free to summarize.

Comment author: redlizard 21 May 2014 06:29:12AM 4 points [-]

Group selectionism alert. The "we are optimized for effectively playing the iterated prisoner's dilemma" argument, AKA "people will remember you being a jackass", sounds much more plausible.
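A toy simulation makes the iterated-PD point concrete (the strategies and the standard payoff values below are my own illustrative choices): against an opponent with memory, such as tit-for-tat, a persistent jackass scores far worse than a cooperator, even though defection dominates in any single round.

```python
# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated PD; each strategy sees the opponent's move history."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # remembers jackasses
always_defect = lambda opp: "D"
always_cooperate = lambda opp: "C"

coop_score, _ = play(always_cooperate, tit_for_tat)  # 300: sustained cooperation
defect_score, _ = play(always_defect, tit_for_tat)   # 104: one exploit, then mutual defection
assert coop_score > defect_score
```

No group-level selection is needed for this: the payoff difference accrues to the individual.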

Comment author: redlizard 15 May 2014 02:58:04AM *  19 points [-]

Even with measurements in hand, old habits are hard to shake. It’s easy to fall in love with numbers that seem to agree with you. It’s just as easy to grope for reasons to write off numbers that violate your expectations. Those are both bad, common biases. Don’t just look for evidence to confirm your theory. Test for things your theory predicts should never happen. If the theory is correct, it should easily survive the evidential crossfire of positive and negative tests. If it’s not you’ll find out that much quicker. Being wrong efficiently is what science is all about.

-- Carlos Bueno, Mature Optimization, pg. 14. Emphasis mine.

In response to The Fallacy of Gray
Comment author: RobinHanson 07 January 2008 02:23:47PM 17 points [-]

All who love this post, do you love it because it told you something you didn't know before, or because you think it would be great to show others who you don't think understand this point? I worry when our readers' favorite posts are based on how much they agree with the post, instead of how much they learned from it.

Comment author: redlizard 01 May 2014 06:09:45PM 3 points [-]

I already knew it, but this post made me understand it.

In response to 2013 Survey Results
Comment author: redlizard 19 January 2014 04:38:44AM *  9 points [-]

Passphrase: eponymous haha_nice_try_CHEATER

Well played :)
