
Comment author: [deleted] 22 June 2014 07:42:51AM 1 point [-]

Another way to avoid the paradox is to care about other people's satisfaction (more complicated than that, but that's not the point) from their point of view, which encompasses their frame of reference.

I don't see why you wouldn't do it this way, since that's the basic, fundamental moral intuition we derive from our faculty of empathy.

In response to comment by [deleted] on Utilitarianism and Relativity Realism
Comment author: trist 22 June 2014 11:34:23AM -1 points [-]

I guess I didn't make myself at all clear on that point; I subscribe to both of the above!

Comment author: trist 22 June 2014 05:02:45AM 4 points [-]

Another way to avoid the paradox is to care about other people's satisfaction (more complicated than that, but that's not the point) from their point of view, which encompasses their frame of reference.

Another way, perhaps, is to restate "implement improvements as soon as possible" as "maximize total goodness in (the future of) the universe." In particular, if an improvement could only be implemented once, but it would be twice as effective tomorrow as today, do it tomorrow.

In response to comment by trist on The Power of Noise
Comment author: jsteinhardt 17 June 2014 06:14:37PM *  2 points [-]

Based on the discussions with you and Lumifer, I updated the original text of that section substantially. Let me know if it's clearer now what I mean.

EDIT: Also, thanks for the feedback so far!

Comment author: trist 17 June 2014 08:10:25PM 0 points [-]

The probability distribution part is better, though I still don't see how software that uses randomness doesn't fall under that (likewise: compression, image recognition, signal processing, and decision making algorithms).

In response to comment by Lumifer on The Power of Noise
Comment author: jsteinhardt 17 June 2014 01:17:05AM 4 points [-]

Well, I don't know of a single piece of software which requires that its inputs come from specific probability distributions in addition to satisfying some set of properties.

This is in some sense my point. If you want to be so hardcore Bayesian that you even look down on randomized algorithms in software, then presumably the alternative is to form a subjective probability distribution over the inputs to your algorithm (or perhaps there is some other obvious alternative that I'm missing). But I don't know of many pieces of code that require their inputs to conform to such a subjective probability distribution; rather, the code should work for ALL inputs (i.e. do a worst-case rather than average-case analysis, which will in some cases call for the use of randomness). I take this to be evidence that the "never use randomness" view would call for software that most engineers would consider poorly-designed, and as such is an unproductive view from the perspective of designing good software.
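The worst-case vs. average-case point can be sketched concretely. The toy example below (illustrative only, not from the discussion) shows the classic case: quicksort with a deterministic first-element pivot degrades to quadratic time on already-sorted input, while a randomized pivot gives the expected near-linearithmic cost on ALL inputs, with no assumption about where the inputs come from:

```python
import random

def quicksort(xs, pivot_fn):
    """Return (sorted copy of xs, number of comparisons made)."""
    if len(xs) <= 1:
        return list(xs), 0
    pivot = pivot_fn(xs)
    rest = list(xs)
    rest.remove(pivot)
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    lo_sorted, c1 = quicksort(lo, pivot_fn)
    hi_sorted, c2 = quicksort(hi, pivot_fn)
    return lo_sorted + [pivot] + hi_sorted, c1 + c2 + len(rest)

first = lambda xs: xs[0]              # deterministic pivot: bad on sorted input
rand = lambda xs: random.choice(xs)   # randomized pivot: expected O(n log n)

worst = list(range(200))              # already sorted: adversarial for `first`
_, det_comps = quicksort(worst, first)  # n(n-1)/2 = 19900 comparisons
_, rnd_comps = quicksort(worst, rand)   # ~2n ln n, roughly 2000 comparisons
```

The randomized version's guarantee holds for every input, which is exactly the worst-case (rather than average-case) analysis being described.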

Comment author: trist 17 June 2014 10:25:34AM 0 points [-]

Any software that uses randomness imposes a probability distribution over its inputs, namely that the random input actually be random. I assume you're not claiming that this breaks modularity, since you advocate the use of randomness in algorithms. Why not?

Comment author: trist 17 June 2014 01:27:47AM -1 points [-]

(idle bemusement)

Does an optimal superintelligence regret? It knows it couldn't have made a better choice given its past information about the environment. How is regret useful in that case?

In response to comment by trist on The Power of Noise
Comment author: jsteinhardt 16 June 2014 07:44:17PM 6 points [-]

Let me try to express it more clearly here:

I agree that it is both reasonable and common for programs to require that their inputs satisfy certain properties (or in other words, for the inputs to lie within a certain set). But this is different than requiring that the inputs be drawn from a certain probability distribution (in other words, requiring 5% of the inputs to be 0000, 7% to be 0001, 6% to be 0010, etc.). This latter requirement makes the program very non-modular because invoking a method in one area of the program alters the ways in which I am allowed to invoke the method in other areas of the program (because I have to make sure that the total fraction of inputs that are 0000 remains 5% across the program as a whole).
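The contrast can be made concrete with a toy sketch (the function names and numbers are made up for illustration). A property-style precondition is checkable locally at each call, while a distribution-style requirement can only be checked against the aggregate behavior of every call site at once:

```python
from collections import Counter

# Property-style contract: checkable locally, one call at a time.
def encode(x):
    assert x in range(16), "precondition: x must be a 4-bit value"
    return format(x, "04b")

# Distribution-style contract: NOT checkable per call. Whether
# "5% of inputs are 0, 7% are 1, ..." holds depends jointly on every
# call site in the program, so adding a new caller in one module can
# silently invalidate the requirement for callers elsewhere.
def check_distribution(calls, target, tol=0.02):
    """Post-hoc check over the whole call history; no single call site
    can verify this on its own."""
    freq = Counter(calls)
    n = len(calls)
    return all(abs(freq[v] / n - p) <= tol for v, p in target.items())
```

Here `encode` can reject a bad input on the spot, but `check_distribution` only makes sense over the program's entire history of calls, which is the non-modularity being described.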

Does this make more sense or is it still unclear? Thanks for the feedback.

Comment author: trist 16 June 2014 11:13:10PM -2 points [-]

So you're differentiating between properties, where the probability of [0 1 2 3] is 1-ɛ while >3 is ɛ, and probability distributions, where the probability of 0 is 0.01, of 1 is 0.003, etc.? Got it. The only algorithms I can think of that require the latter are those that require uniformly random input. I don't think those violate modularity though, as any area of the program that interfaces with that module must provide independently random input (which would be the straightforward way to meet that requirement with an arbitrary distribution).

There's a difference between requiring and being optimized for, though, and there are lots of algorithms that are optimized for particular inputs. Sort algorithms are an excellent example: if most of your lists are already almost sorted, there are algorithms that are cheaper on average but might take a long time on a few rare orderings.
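Insertion sort is the standard illustration of this trade-off (a toy sketch, with shift counts as a rough cost proxy): it is near-linear on an already-almost-sorted list but quadratic on a rare adversarial ordering such as a fully reversed one, yet it remains correct on every input.

```python
def insertion_sort(xs):
    """Sort a copy of xs, counting element shifts as a proxy for cost."""
    a = list(xs)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger element right
            j -= 1
            shifts += 1
        a[j + 1] = key
    return a, shifts

nearly_sorted = list(range(100))
nearly_sorted[10], nearly_sorted[11] = nearly_sorted[11], nearly_sorted[10]
_, cheap = insertion_sort(nearly_sorted)          # one swap out of place: 1 shift
_, costly = insertion_sort(list(range(100))[::-1])  # reversed: n(n-1)/2 = 4950 shifts
```

The algorithm is optimized for nearly-sorted inputs without requiring them; a rare ordering just costs more.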

In response to The Power of Noise
Comment author: trist 16 June 2014 07:12:29PM 0 points [-]

Requiring that the inputs to a piece of software follow some probability distribution is the opposite of being modular.

What? There is very little software that doesn't require its inputs to follow some probability distribution. When provided with input that doesn't match that (often very narrow) distribution, programs will throw it away, give up, or have problems.

You seem to have put a lot more thought into your other points, could you expand upon this a little more?

Comment author: Slider 12 June 2014 12:49:53PM *  1 point [-]

Orin is willing to risk the kingdom, as there is a very real impact to being wrong: ten similarly lost bets could ruin the kingdom. It's not a good test of truthfulness, but it tests that the subject knows the gravity of the situation and is sure he did not misunderstand anything.

Also, Orin's net worth is 3-4 lifetimes of skilled work? He must have inherited more than he will ever earn. Assuming 3 kids per generation and one working parent, the reward will leave almost all of his 81 great-grandchildren work-free (as there is enough money to fund 100 lives).

The only way to be indifferent about whether honest persons have valid intel or not is to earn money equal to the damages of raising the bridges. From 1000c / (200c per person / 70 years / 365 days per year × 3 days), the population of the kingdom is about 42,583, assuming the skilled craftsman's lifetime payments are the average (but they are not, so it's more).
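The arithmetic can be checked directly, using only the figures from the comment (1000c of damages, a 200c lifetime of payments over 70 years, 3 days of raised bridges):

```python
# Cost per person of three days of raised bridges, if a lifetime's
# payments of 200c are spread evenly over 70 years:
cost_per_person = 200 / 70 / 365 * 3          # ~0.0235c per person

population_at_1000c = 1000 / cost_per_person  # ~42,583 people
population_at_800c = 800 / cost_per_person    # ~34,066 people (the correction below)
```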

* Miscalculated: using the king's winnings of 800c gives a population of 34,066.

Comment author: trist 12 June 2014 08:32:11PM -1 points [-]

The king was proposing that Orin bet 1kc, of which they currently have only 800c, in order to receive 20kc (twenty-five times their net worth). The 200c debt was what Orin would be reduced to if they were wrong.
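Spelling out the numbers from the comment:

```python
net_worth = 800      # what Orin currently has, in c
stake = 1_000        # the king's proposed bet
payout = 20_000      # the reward if Orin is right

debt_if_wrong = stake - net_worth    # losing leaves a 200c debt
payout_multiple = payout / net_worth # winning pays 25x Orin's net worth
```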

Comment author: trist 10 June 2014 03:54:26PM 8 points [-]

In such cases I'll say, "Oh! Interesting... how does that work exactly?" It seems to work out alright, and I would guess that other methods of asking for more information without implying that their statement is false are equally effective.

Comment author: trist 10 June 2014 03:46:36PM 0 points [-]

An addendum to [1]: Social Security tax in the US is capped, with the cutoff around $105k of individual income, so there may be a local dip in the marginal percentage there, where the increasing income tax hasn't yet balanced the 11% that goes to Social Security below that point.
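The shape of that dip can be sketched with a toy model. The 11% payroll rate and the ~$105k cap come from the comment; the income-tax brackets below are entirely made up for illustration, so only the qualitative effect (a capped flat tax producing a local dip in the combined marginal rate) should be taken from this:

```python
SS_RATE = 0.11      # combined payroll share cited in the comment
SS_CAP = 105_000    # approximate cap cited in the comment

# Hypothetical, made-up income-tax brackets purely for illustration:
BRACKETS = [(0, 0.10), (40_000, 0.20), (90_000, 0.28), (160_000, 0.33)]

def marginal_rate(income):
    """Combined marginal rate: bracket rate, plus payroll tax below the cap."""
    bracket_rate = max(rate for floor, rate in BRACKETS if income >= floor)
    ss = SS_RATE if income < SS_CAP else 0.0
    return bracket_rate + ss

# Just below the cap: 0.28 + 0.11 = 0.39. Just above: 0.28, a local dip
# that persists until the next (hypothetical) bracket raises the rate again.
```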
