Comment author: Nick_Hay 23 October 2009 10:24:33PM *  3 points [-]

Can you translate your complaint into a problem with the independence axiom in particular?

Your second example is not a problem of variance in final utility, but aggregation of utility. Utility theory doesn't force "Giving 1 util to N people" to be equivalent to "Giving N util to 1 person". That is, it doesn't force your utility U to be equal to U1 + U2 + ... + UN where Ui is the "utility for person i".
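To make the point concrete, here is a toy sketch (mine, not from the comment) of three ways to aggregate per-person utilities into a single utility U. Only straight summation is forced to treat "1 util to N people" as equivalent to "N utils to 1 person"; utility theory permits all three.

```python
import math

def additive(us):
    """U = U1 + ... + UN: the only rule that equates spreading and concentrating."""
    return sum(us)

def egalitarian(us):
    """U = min(Ui): cares only about the worst-off person."""
    return min(us)

def concave_sum(us):
    """U = sum sqrt(Ui): diminishing returns to piling utils on one person."""
    return sum(math.sqrt(u) for u in us)

spread = [1.0] * 4                    # 1 util to each of 4 people
concentrated = [4.0, 0.0, 0.0, 0.0]   # 4 utils to 1 person

print(additive(spread), additive(concentrated))        # equal under summation
print(egalitarian(spread), egalitarian(concentrated))  # differ
print(concave_sum(spread), concave_sum(concentrated))  # differ
```

The aggregation rule is an extra modeling choice on top of the utility axioms, which is the comment's point.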

Comment author: Eliezer_Yudkowsky 21 June 2009 04:24:20PM 0 points [-]

Well, yes. Nonparametric methods use similarity of neighbors. To predict that which has never been seen before - which is not, on its surface, like things seen before - you need modular and causal models of what's going on behind the scenes. At that point it's parametric or bust.

Comment author: Nick_Hay 21 June 2009 10:23:36PM *  6 points [-]

Your use of the terms parametric vs. nonparametric doesn't seem to match their use in nonparametric Bayesian statistics, where the distinction is more like whether your statistical model has a fixed finite number of parameters or has no such bound. Methods such as the Dirichlet process and its many variants (Hierarchical DP, HDP-HMM, etc.) go beyond simple modeling of surface similarities using similarity of neighbours.
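For concreteness, a sketch (my illustration, not from the comment) of the unbounded-parameter idea, using the Chinese restaurant process, the predictive form of the Dirichlet process: the number of occupied tables (effectively the number of parameters) keeps growing as data accumulates, with no fixed bound.

```python
import random

def crp_sample(n, alpha, seed=0):
    """Seat n customers by the Chinese restaurant process with
    concentration alpha; return the customer count at each table."""
    rng = random.Random(seed)
    tables = []  # tables[k] = number of customers at table k
    for i in range(n):
        # Customer i joins table k with prob tables[k] / (i + alpha),
        # or opens a new table with prob alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(tables):
            acc += c
            if r < acc:
                tables[k] += 1
                break
        else:
            tables.append(1)
    return tables

tables = crp_sample(1000, alpha=2.0)
print(len(tables))  # the "number of parameters" grows with the data
```

The model never fixes the number of tables in advance, which is the sense in which it is nonparametric.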

See, for example, the list of publications coauthored by Michael Jordan.

Comment author: Nick_Hay 02 March 2009 02:28:44AM 6 points [-]

Thou Art Godshatter: gives an intuitive grasp of why and how human morality is complex, and of why not just any complex thing will do.

Comment author: Nick_Hay 08 February 2009 05:03:26AM 1 point [-]

Z. M. Davis: Good point; I was brushing that distinction under the rug. From this perspective, everyone arguing about values is trying to change someone's value computation to a greater or lesser degree, i.e. this is not the place to look if you want to discriminate between "liberal" and "conservative".

With the obvious way to implement a CEV, you start by modeling a population of actual humans (e.g. Earth's), then consider extrapolations of these models (know more, thought faster, etc). No "wipe culturally-defined values" step, however that would be defined.

Where was it suggested otherwise?

Comment author: Nick_Hay 08 February 2009 03:53:07AM 1 point [-]

Ian C: neither group is changing human values in the sense used here: everyone is still human, and no one is suggesting neurosurgery to change how brains compute value. See the post Value is Fragile.

Comment author: Nick_Hay 11 January 2009 11:56:30PM 4 points [-]

Interestingly, you can have unboundedly many children with only quadratic population growth, so long as they are exponentially spaced. For example, give each newborn sentient a resource token, which can be used after the age of maturity (say, 100 years or so) to fund a child. Additionally, in the years 2^i every living sentient is given an extra resource token. One can show there is at most quadratic growth in the number of resource tokens. By adjusting the exponent in 2^i we can get growth O(n^{1+p}) for any nonnegative real p.
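A toy simulation of the scheme (my sketch, not from the comment; the maturity age is shrunk to 4 years, sentients are assumed immortal, every token is assumed spent on a child as soon as it becomes usable, and bonus tokens are assumed to take a maturity delay too, so the exact growth curve here depends on these modeling choices):

```python
def population_growth(T, maturity=4):
    """Simulate the token scheme for T years: each newborn holds one
    token usable after `maturity` years, and in years 2^i every living
    sentient receives an extra token. Returns population by year."""
    pop = 1
    usable = {maturity: 1}  # usable[t] = tokens that become spendable in year t
    i = 0
    history = []
    for t in range(1, T + 1):
        births = usable.pop(t, 0)      # every token funds a child when usable
        pop += births
        if births:                     # each newborn's own token matures later
            usable[t + maturity] = usable.get(t + maturity, 0) + births
        if t == 2 ** i:                # bonus year: one token per sentient
            usable[t + maturity] = usable.get(t + maturity, 0) + pop
            i += 1
        history.append(pop)
    return history

hist = population_growth(256)
print(hist[-1])  # unbounded growth, but far slower than exponential
```

Since the bonus years are exponentially spaced, only about log T bonus events occur by year T, which is what keeps the growth polynomial rather than exponential.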

Comment author: Nick_Hay 12 December 2008 02:57:00AM 1 point [-]

Phil: Yes. CEV completely replaces and overwrites itself, by design. Before this point it does not interact with the external world to change it in any significant sense (it cannot avoid all change; e.g. its computer will add tiny vibrations to the Earth, as all computers do). It executes for a while, then overwrites itself with a computer program (I'm skipping every intermediate step here). By default, and if anything goes wrong, this program is "shutdown silently, wiping the AI system clean."

(When I say "CEV" I really mean a FAI which satisfies the spirit behind the extremely partial specification given in the CEV document. The CEV document says essentially nothing of how to implement this specification.)

In response to The Nature of Logic
Comment author: Nick_Hay 18 November 2008 04:56:55AM 2 points [-]

Personally, I prefer the longer posts.

Comment author: Nick_Hay 25 October 2008 08:53:59AM 1 point [-]

guest: Right, so with those definitions you are overconfident if you are surprised more than you expected, underconfident if you are surprised less, calibration being how close your surprisal is to your expectation of it.
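To make the definitions concrete, a small sketch (mine, with made-up numbers): expected surprisal is the entropy of the forecast, and a forecaster whose realized average surprisal exceeds it was more surprised than they expected, i.e. overconfident in this sense.

```python
import math

def surprisal(p):
    """Surprisal of an outcome assigned probability p, in bits."""
    return -math.log2(p)

def entropy(dist):
    """Expected surprisal under the forecast distribution."""
    return sum(p * surprisal(p) for p in dist.values())

# A sharp forecast, and a hypothetical run of actual outcomes.
forecast = {"rain": 0.9, "sun": 0.1}
outcomes = ["sun", "rain", "sun", "rain", "sun"]

expected = entropy(forecast)
realized = sum(surprisal(forecast[o]) for o in outcomes) / len(outcomes)

# realized > expected here: surprised more than expected, i.e. overconfident
print(round(expected, 3), round(realized, 3))
```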

Comment author: Nick_Hay 25 October 2008 08:03:47AM 1 point [-]

I think there's a sign error in my post -- it should be C(x0) = \log p(x0) + H(p).
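Using the definitions from the earlier comment in this thread (surprisal and its expectation), one way to sanity-check the sign (my check, not part of the original comment):

```latex
% surprisal of the observed outcome, and its expectation under p
s(x_0) = -\log p(x_0), \qquad \mathbb{E}_{x \sim p}[s(x)] = H(p)
% so the calibration score
C(x_0) = \log p(x_0) + H(p) = H(p) - s(x_0)
% has expectation zero for a calibrated forecaster:
\mathbb{E}_{x \sim p}[C(x)] = H(p) - H(p) = 0
```

With the opposite sign the score's expectation would be 2H(p) rather than zero, so the corrected form is the one that measures surprisal against its expectation.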
