Comment author: Nick_Hay 23 October 2009 10:24:33PM *  3 points [-]

Can you translate your complaint into a problem with the independence axiom in particular?

Your second example is not a problem of variance in final utility, but of how utility is aggregated. Utility theory doesn't force "Giving 1 util to N people" to be equivalent to "Giving N utils to 1 person". That is, it doesn't force your utility U to equal U1 + U2 + ... + UN, where Ui is the utility for person i.
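A minimal sketch of this point: expected-utility theory constrains preferences over lotteries, not how a social utility U aggregates per-person utilities. Both aggregators below are legitimate utility functions over outcomes; only the additive one equates "1 util to N people" with "N utils to one person". (The function names and the square-root aggregator are illustrative choices, not anything from the comment.)

```python
def additive(us):
    # U = u_1 + ... + u_N: treats "1 util to N people"
    # and "N utils to one person" as equivalent
    return sum(us)

def sum_of_square_roots(us):
    # A concave aggregator: prefers spreading utility across people
    return sum(u ** 0.5 for u in us)

N = 4
spread = [1.0] * N                            # 1 util to each of N people
concentrated = [float(N)] + [0.0] * (N - 1)   # N utils to 1 person

print(additive(spread) == additive(concentrated))                        # True
print(sum_of_square_roots(spread) > sum_of_square_roots(concentrated))   # True
```

Nothing in the axioms of utility theory privileges the first aggregator over the second.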

Comment author: Eliezer_Yudkowsky 21 June 2009 04:24:20PM 0 points [-]

Well, yes. Nonparametric methods use similarity of neighbors. To predict that which has never been seen before - which is not, on its surface, like things seen before - you need modular and causal models of what's going on behind the scenes. At that point it's parametric or bust.

Comment author: Nick_Hay 21 June 2009 10:23:36PM *  6 points [-]

Your use of the terms parametric vs. nonparametric doesn't seem to match how they are used by people working in nonparametric Bayesian statistics, where the distinction is more like whether your statistical model has a fixed, finite number of parameters or has no such bound. Methods such as the Dirichlet process and its many variants (the Hierarchical DP, the HDP-HMM, etc.) go beyond simple modeling of surface similarities using similarity of neighbours.
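The "no fixed bound on parameters" sense can be illustrated with the Chinese restaurant process, the partition a Dirichlet process induces over data points: each customer joins an existing table with probability proportional to its size, or starts a new one with probability proportional to the concentration alpha. A minimal sketch (the function and parameter names are mine, not from any particular library):

```python
import random

def crp_table_counts(n, alpha, seed=0):
    """Sample table sizes from a Chinese restaurant process with
    concentration alpha: customer i joins table k with probability
    size_k / (i + alpha), or opens a new table with probability
    alpha / (i + alpha)."""
    rng = random.Random(seed)
    tables = []  # tables[k] = number of customers seated at table k
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[k] += 1
                break
        else:
            tables.append(1)  # new table
    return tables

tables = crp_table_counts(1000, alpha=2.0)
print(len(tables))  # number of clusters is not fixed in advance
```

The number of tables (clusters, i.e. effective parameters) tends to grow, roughly logarithmically, as more data arrives, which is the sense in which the model has no fixed finite parameter count.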

See, for example, this list of publications coauthored by Michael Jordan:

Comment author: Nick_Hay 02 March 2009 02:28:44AM 6 points [-]

Thou Art Godshatter: gives an intuitive grasp of why and how human morality is complex, and of why not just any complex thing will do.

Comment author: thomblake 28 February 2009 02:48:31AM 2 points [-]

Not sure about that - those labels at least would look ugly. Maybe a title attribute on the "vote up" and "vote down" would be sufficient.

Comment author: Nick_Hay 28 February 2009 08:49:09AM 6 points [-]

How about buttons "High quality", "Low quality", "Accurate", "Inaccurate". We're increasing options here, but there's probably a nice way to design the interface to reduce the cognitive load.

Using the word "vote" seems broken here more generally -- we aren't implementing some democratic process, we're aggregating judgments (read: collecting evidence) across a population.
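One way to make "aggregating judgments = collecting evidence" concrete: model each accurate/inaccurate judgment as independent evidence from a rater who is right with some probability, and combine them in log-odds. This is purely an illustrative sketch under that rater model; the function, the 0.7 accuracy figure, and the uniform prior are my assumptions, not anything proposed for the site.

```python
import math

def aggregate_judgments(votes, accuracy=0.7, prior=0.5):
    """Posterior probability the comment is accurate, treating each
    +1 ("accurate") / -1 ("inaccurate") judgment as independent evidence
    from a rater who is right with probability `accuracy`."""
    log_odds = math.log(prior / (1 - prior))
    step = math.log(accuracy / (1 - accuracy))  # evidence per judgment
    for v in votes:
        log_odds += step if v > 0 else -step
    return 1 / (1 + math.exp(-log_odds))

print(aggregate_judgments([+1, +1, -1]))  # net one positive judgment -> 0.7
```

Unlike raw vote counting, the output is a calibrated probability, and opposing judgments cancel as evidence rather than as ballots.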

Comment author: steven0461 27 February 2009 09:28:36PM *  3 points [-]

If agreement votes aren't going to be used, why not do away with them altogether and just use the current system to vote based on quality only? True comments are higher quality than false comments, so agreement should factor into quality judgments anyway.

Comment author: Nick_Hay 28 February 2009 08:44:52AM 5 points [-]

Because quality and truth are separate judgments in practice, and forcing them to be conflated into a single scale loses information. To the extent that truth is positively correlated with quality, this will fall out automatically: highly truthy posts will tend to have high quality. Low quality and high truth are not opposites.

Comment author: Nick_Hay 08 February 2009 05:03:26AM 1 point [-]

Z. M. Davis: Good point; I was brushing that distinction under the rug. From this perspective, all people arguing about values are trying to change someone's value computation, to a greater or lesser degree; i.e., this is not the place to look if you want to discriminate between "liberal" and "conservative".

With the obvious way to implement a CEV, you start by modeling a population of actual humans (e.g. Earth's), then consider extrapolations of these models (know more, thought faster, etc). No "wipe culturally-defined values" step, however that would be defined.

Where was it suggested otherwise?

Comment author: Nick_Hay 08 February 2009 03:53:07AM 1 point [-]

Ian C: neither group is changing human values as the term is used here: everyone is still human, and no one is suggesting neurosurgery to change how brains compute value. See the post "Value is Fragile".

Comment author: Nick_Hay 11 January 2009 11:56:30PM 4 points [-]

Interestingly, you can have unboundedly many children with only quadratic population growth, so long as they are exponentially spaced. For example, give each newborn sentient a resource token, which can be used after the age of maturity (say, 100 years or so) to fund a child. Additionally, in the years 2^i every living sentient is given an extra resource token. One can show there is at most quadratic growth in the number of resource tokens. By adjusting the exponent in 2^i we can get growth O(n^{1+p}) for any nonnegative real p.

Comment author: Nick_Hay 12 December 2008 02:57:00AM 1 point [-]

Phil: Yes. CEV completely replaces and overwrites itself, by design. Before that point it does not interact with the external world to change it in any significant sense (it cannot avoid all change; e.g. its computer will add tiny vibrations to the Earth, as all computers do). It executes for a while, then overwrites itself with a computer program (I'm skipping every intermediate step here). By default, and if anything goes wrong, this program is "shutdown silently, wiping the AI system clean."

(When I say "CEV" I really mean a FAI which satisfies the spirit behind the extremely partial specification given in the CEV document. The CEV document says essentially nothing of how to implement this specification.)

In response to The Nature of Logic
Comment author: Nick_Hay 18 November 2008 04:56:55AM 2 points [-]

Personally, I prefer the longer posts.
