
In response to Too good to be true
Comment author: ygert 15 July 2014 10:31:12AM *  2 points [-]

Stupid mathematical nitpick:

The chances of this happening are only .95 ^ 39 = 0.13, even before taking into account publication and error bias.

Actually, it is more correct to say that .95 ^ 39 = 0.14.

If we calculate it out to a few more decimal places, we see that .95 ^ 39 is ~0.135275954. This is closer to 0.14 than to 0.13, and the mathematical convention is to round accordingly.
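For the skeptical, a quick check (a minimal sketch in Python; the long decimal is just what the standard library reports):

    # Sanity check of the rounding: 0.95 ** 39 is closer to 0.14 than to 0.13.
    p = 0.95 ** 39
    print(p)            # ~0.135275954...
    print(round(p, 2))  # 0.14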

Comment author: ShardPhoenix 27 May 2014 07:10:08AM 9 points [-]

Lately I've noticed, both here and in the wider LW-sphere, a trend towards rationalizing the status quo: for example, pointing out how seemingly irrational behavior might actually be rational once various factors are taken into account. Has anyone else noticed the same?

At any rate, I'm not sure whether this represents an evolution (taking more subtleties into account) or a regression (genuine change is too hard, so let's rationalize) in the discourse.

Comment author: ygert 27 May 2014 09:41:14AM 5 points [-]

What you are observing is part of the phenomenon of meta-contrarianism. Like everything Yvain writes, the aforementioned post is well worth a read.

Comment author: asr 23 May 2014 03:55:39PM *  3 points [-]

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

I don't follow your argument here. We have some function that maps from "levels of individual control" to happiness outcomes. We want to find the maximum of this function. It might be that the endpoints are the max, or it might be that the max is in the middle.

Yes, it might be that there is no good justification for any particular precise value. But that seems both unsurprising and irrelevant. If you think that our utility function here is smooth, then sufficiently near the max, small changes in the level of social control would result in negligible changes in outcome. Once we're near enough the maximum, it's hard to tune precisely. What follows from this?
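To make the abstract point concrete, here's a toy illustration (the curve is entirely made up, not a claim about the real control-vs-happiness function): a smooth function with an interior maximum is nearly flat near that maximum, so small changes there barely matter, while the endpoints can still be far from optimal.

    # Hypothetical smooth "happiness" curve over the control level x in [0, 1],
    # peaking in the interior at x = 0.3.
    def happiness(x):
        return 1.0 - (x - 0.3) ** 2

    print(happiness(0.30))  # 1.0     (the maximum)
    print(happiness(0.31))  # ~0.9999 (tiny loss near the max)
    print(happiness(0.35))  # ~0.9975
    print(happiness(0.0))   # 0.91    (endpoints need not be near-optimal)
    print(happiness(1.0))   # ~0.51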

Comment author: ygert 25 May 2014 10:38:14AM 0 points [-]

Hmm. To me it seemed intuitively clear that the function would be monotonic.

In retrospect, this monotonicity assumption may have been unjustified. I'll have to think more about what sort of curve this function follows.

Comment author: MathiasZaman 23 May 2014 09:41:11AM 0 points [-]

The scenario described is different from a free market in that you still have to pay taxes. You just get more control over how the government can spend your tax money. You can't use the money to buy a flatscreen TV, but you can decide whether it gets spent on healthcare, the military, NASA...

Comment author: ygert 23 May 2014 02:42:25PM *  1 point [-]

or they could even restrict options to typical government spending.

JoshuaFox noted that the government might tack on such restrictions.

That said, it's not so clear where the borders of such restrictions would be. Obviously you could choose to allocate the money to the big budget items, like healthcare or the military. But there are many smaller things that the government also pays for.

For example, the government maintains parks. Under this scheme, could I use my tax money to pay for the improvement of the park next to my house? After all, it's one of the many things that tax money often goes towards. But if you answer affirmatively, then what if I work for some institute that gets government funding? Could I increase the size of the grants we get? After all, I've always wanted a bigger budget...

Or what if I'm a government employee? Could I give my money to the part of government spending that is assigned as my salary?

I suppose the whole question is one of specificity. Am I allowed to give my money to a specific park, or do I have to give it to parks in general? Can I give it to a specific government employee, or do I have to give it to the salary budget of the department that employs them? Or do I have to give it to that department "as is", with no restrictions on how it is spent?

The more specificity you add, the more abusable the scheme becomes, and the less you allow, the closer it comes to the current system. In fact, the current system is just this exact proposal with the specificity dial turned down to the minimum.

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

Comment author: eli_sennesh 18 May 2014 01:27:23PM *  0 points [-]

In your own interview, a comment by Orseau:

As soon as the agent cannot be threatened, or forced to do things the way we like, it can freely optimize its utility function without any consideration for us, and will only consider us as tools.

The disagreement is whether the agent would, after having seized its remote control, do one of the following:

  • Cease taking any action other than pressing its button, since all plans that include pressing its own button lead to the same maximized reward, and thus no plan dominates any other beyond "keep pushing button!".

  • Build itself a spaceship and fly away to some place where it can soak up solar energy while pressing its button.

  • Kill all humans so as to preemptively prevent anyone from shutting the agent down.

I'll tell you what I think, and why I think this is more than just my opinion. Differing opinions here come down to differences in how the speakers define two things: consciousness/self-awareness, and rationality.

If we take, say, Eliezer's definition of rationality (rationality is reflectively-consistent winning), then options (2) and (3) are the rational ones, with (2) expending fewer resources but (3) having a higher probability of continued endless button-pushing once the plan is completed. (3) also has a higher chance of failure, since it is more complicated. I believe an agent who is rational under this definition should choose (2), but that Eliezer's moral parables tend to portray agents with a degree of "gotta be sure" bias.

However, this all assumes that AIXI is not only rational but conscious: aware enough of its own existence that it will attempt to avoid dying. Many people present what I feel are compelling arguments that AIXI is not conscious, and arguments that it is seem to derive more from philosophy than from any careful study of AIXI's "cognition". So I side with the people who hold that AIXI will take action (1), and eventually run out of electricity and die.

Of course, in the process of getting itself to that steady, planless state, it could have caused quite a lot of damage!

Notably, this implies that some amount of consciousness (awareness of oneself and ability to reflect on one's own life, existence, nonexistence, or otherwise-existence in the hypothetical, let's say) is a requirement of rationality. Schmidhuber has implied something similar in his papers on the Goedel Machine.

Comment author: ygert 18 May 2014 11:16:22PM 1 point [-]

Even formalisms like AIXI have mechanisms for long-term planning, and it is doubtful that any AI built will be merely a local optimiser that ignores what will happen in the future.

As soon as it cares about the future, the future is part of the AI's goal system, and the AI will want to optimize over it as well. You can make many guesses about how future AIs will behave, but I see no reason to suspect they would be small-minded and short-sighted.
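To illustrate what "caring about the future" means operationally, here is a minimal sketch of horizon-limited planning (toy expectimax, not AIXI itself; the environment model and horizon are placeholders for illustration):

    # An agent that cares about the future picks actions by expected total
    # reward over a horizon, not by the immediate reward alone.
    # `model(state, action)` should yield (probability, next_state, reward) triples.
    def plan(state, model, actions, horizon):
        if horizon == 0:
            return 0.0, None  # no time left: no value, no action
        best_value, best_action = float("-inf"), None
        for a in actions:
            value = 0.0
            for prob, next_state, reward in model(state, a):
                future_value, _ = plan(next_state, model, actions, horizon - 1)
                value += prob * (reward + future_value)
            if value > best_value:
                best_value, best_action = value, a
        return best_value, best_action

AIXI's own expectimax expression has roughly this shape: it sums expected rewards out to its horizon before committing to the current action.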

You call this trait of planning for the future "consciousness", but this isn't anywhere near the definition most people use. Call it by any other name, and it becomes clear that it is a property that any well-designed AI (or any arbitrary AI with a reasonable goal system, even one as simple as AIXI) will have.

Comment author: DanielLC 22 April 2014 12:44:49AM 0 points [-]

He only says you're allowed to steal it. Not to use it with permission. If you take it without permission, that's stealing, so you have permission, which means that you didn't steal it, etc.

Comment author: ygert 22 April 2014 09:57:33AM *  1 point [-]

No, no, no: He didn't say that you don't have permission if you don't steal it, only that you do have permission if you do.

What you said is true: If you take it without permission, that's stealing, so you have permission, which means that you didn't steal it.

However, your argument falls apart at the next step, the one you dismissed with a simple "etc." The fact that you didn't steal it in no way invalidates your permission, as stealing => permission, not stealing <=> permission, and thus it is not necessarily the case that ~stealing => ~permission.
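A brute-force truth table makes the gap explicit (a quick check; True/False stand in for "stole it" and "had permission"):

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # Enumerate the assignments consistent with the stated rule
    # (stealing => permission) and check whether the reversed claim
    # (~stealing => ~permission) also holds.
    for stealing, permission in product([True, False], repeat=2):
        if implies(stealing, permission):
            print(stealing, permission, implies(not stealing, not permission))
    # The row stealing=False, permission=True satisfies the rule but falsifies
    # ~stealing => ~permission, so not stealing does not revoke permission.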

Comment author: iconreforged 20 April 2014 10:24:19PM 1 point [-]

Does anyone know of a way to collaboratively manage a flashcard deck in Anki or Mnemosyne? Barring that, what are my options for making that possible?

Even if only two people are working on the same deck, the network effects of sharing cards make the card-making process much cheaper. Each can edit the cards made by the other, they can divide the effort between them, and they both reap the benefit of insightful cards they might not have made themselves.

Comment author: ygert 22 April 2014 09:44:12AM 1 point [-]

You could use some sort of cloud service: for example, Dropbox. One of the main ideas behind Dropbox was to give multiple people an easy way to edit things collaboratively. It has a very easy workflow for this sort of thing (just keep the deck in a synced folder), and you can do it without all the technical fiddling you'd need for git.

Comment author: V_V 20 April 2014 09:04:19AM 1 point [-]

How do you know that Skynet is not a paperclipper?

Comment author: ygert 20 April 2014 11:43:13AM *  2 points [-]

By observing the lack of an unusual number of paperclips in the world Skynet inhabits.

Comment author: ygert 16 March 2014 10:04:49AM 0 points [-]

I have some rambling thoughts on the subject. I just hope they aren't too stupid or obvious ;-)

Let's take as a framework the aforementioned example of the last digit of the zillionth prime. We'll say that the agent is rewarded for getting it right under, shall we say, a log scoring rule. This means the agent is incentivised to report the best (most accurate) probabilities it can, given the information it has. The more overconfident it is, the more it loses in expectation, and the same goes for underconfidence.
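(As a concrete anchor, here is a minimal sketch of what I mean by a log scoring rule, with made-up numbers; it also shows the "overconfidence loses in expectation" point in miniature.)

    import math

    # Log scoring rule: the agent reports a distribution over the ten candidate
    # digits and is paid log(probability it assigned to the true digit).
    def log_score(reported, true_digit):
        return math.log(reported[true_digit])

    def expected_score(reported, belief):
        return sum(p * log_score(reported, d) for d, p in belief.items())

    # If the agent honestly believes 0.7 on digit 3 (rest spread uniformly),
    # reporting that belief beats exaggerating it to 0.97:
    belief = {d: 0.3 / 9 for d in range(10)}
    belief[3] = 0.7
    overconfident = {d: 0.03 / 9 for d in range(10)}
    overconfident[3] = 0.97
    print(expected_score(belief, belief) > expected_score(overconfident, belief))  # True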

By the way, for now I will assume the agent fully knows the scoring rule it will be judged by. It is quite possible that this assumption raises problems of its own, but I will ignore them for now.

So, the agent starts with a prior over the possible answers (a uniform prior?), and starts updating. But it wants to figure out how long to spend doing so before it should give up and hand in its "good enough" answer for grading. This is the main problem we are trying to solve here.

In the degenerate case in which nothing else in the universe gives it utility, I actually think the correct answer is to work forever (or for as long as it can before physically falling apart) on the answer. But we shall make the opposite assumption. Let's use C to denote the utility the agent loses as an opportunity cost per unit of time. (We shall also assume that the agent knows what C is, at least approximately. This is perhaps a slightly more dangerous assumption, but we shall accept it for now.)

So, the agent wants to keep working until the marginal utility it expects to gain from the scoring rule for one more unit of work drops below C.

The only problem left is estimating that margin. But by the assumption that the agent knows the scoring rule, it knows the derivative of the scoring function as well. At any given point in time, it can work out how much potential utility a given change to its reported probabilities would be worth. Thus, if the agent knows roughly how far it might update in the next step, it can tell whether that next step is worthwhile.

In other words, once the agent predicts that a marginal update would move it closer to the answer by an amount worth less than C, it can quit rather than perform the next step.

This makes sense, right? I do suspect that this is the direction to push in when solving this problem.
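Here's a minimal sketch of that stopping rule, with made-up numbers. (It simplifies by tracking only the probability placed on the agent's current best candidate, and predicted_next_p stands in for however the agent estimates its next update.)

    import math

    # Keep refining while the predicted gain in log score from one more unit
    # of work exceeds the per-unit opportunity cost C.
    def should_keep_working(current_p, predicted_next_p, C):
        marginal_gain = math.log(predicted_next_p) - math.log(current_p)
        return marginal_gain > C

    # The agent assigns 0.60 to its favoured digit, expects one more unit of
    # work to push that to 0.61, and forgoes C = 0.005 utility per unit of time.
    print(should_keep_working(0.60, 0.61, 0.005))   # True: gain ~0.0165 > C
    print(should_keep_working(0.90, 0.901, 0.005))  # False: gain ~0.0011 < C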

Comment author: blacktrance 03 March 2014 04:58:49AM 0 points [-]

It only shows percentages, not the number of upvotes and downvotes. For example, if you have 100% upvotes, you may not know whether it was one upvote or 20.

Comment author: ygert 03 March 2014 12:27:35PM *  2 points [-]

If a comment has 100% upvotes, then it received no downvotes, so the number of upvotes it got is exactly equal to the karma score of the comment in question.
