Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Pablo_Stafforini 23 April 2015 06:08:52AM 2 points [-]

Any updates?

Comment author: jkaufman 27 April 2015 06:24:49PM *  2 points [-]

I eventually got annoyed at the interruptions and stopped, but only about a month ago, 11 months after the baby was born.

http://www.jefftk.com/happiness_graph is up to date with the final samples

I think the rise up to December 2013 was mostly me getting used to the scale I was using.

The baby was born 3/26.

There's no data from periods when I was asleep or trying to sleep, which misses out on the main source of unhappiness: night-time wakings.

The period with no data is data loss from a broken phone -- with TagTime I needed to do manual backups which I didn't get around to very often. This lost data was for a chunk of my paternity leave, sadly.

The low point in late January corresponds to my mother dying; the high point before that corresponds to lots of family being around for the holidays.

Comment author: V_V 04 April 2015 12:37:11PM 1 point [-]

I'm not a big fan of decision making by conditional prediction markets (btw, "futarchy" is an obscure, non-descriptive name. Better call it something like "prophetocracy"), but I think that proponents like Robin Hanson propose that the value system is not set once and for all but regularly updated by a democratically elected government. This should avoid the failure mode you are talking about.

Comment author: jkaufman 06 April 2015 12:49:22PM 2 points [-]

"Futarchy" is an obscure, non-descriptive name. Better call it something like "prophetocracy"

"Futarchy" is the standard term for this governmental system. Perhaps Hanson should have chosen a different name, but that's the name it's been going under for about a decade, and I don't think "prophetocracy" would be an improvement.

Comment author: shminux 03 April 2015 10:18:45PM 4 points [-]

Can you suggest a scenario in which futarchy would result in a clear negative outcome, something analogous to turning the universe into paper clips?

Comment author: jkaufman 06 April 2015 12:47:22PM 2 points [-]

Analogous to turning the universe into paper clips

That's a low bar: it's an intentionally silly example. No one actually thinks we're likely to accidentally create a paperclip-maximizer AI, any more than we're likely to accidentally include a "number of paperclips in the world" term in a futarchy metric. But something as clearly negative would be mandatory wireheading to maximize a "human pleasure" term.

A less extreme (and less clearly negative, but also more likely) example would be maximizing GDP. Hanson often uses GDP as an example of something you could include in a futarchy metric. GDP only counts market work, however, which means you can increase GDP by moving tasks from "do it yourself" to "hire someone". For example, if I watch my kid, that doesn't count towards GDP; but if I pay you to watch them, and you pay me to do whatever you would otherwise have done during that time, it does.

GDP/person is one of the best metrics for "how is a country doing", often doing much better than explicit attempts to measure things closer to what we care about, but put a big optimizing push behind it and soon all the tiny tasks we do over the course of the day are pressured into market work.
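A toy sketch of that accounting quirk, with made-up numbers (a hypothetical $15/hour childcare swap): the identical underlying work adds to GDP only when it passes through the market.

```python
# Toy illustration (made-up numbers): the same childcare work counts
# toward GDP only when it is a paid market transaction.
HOURLY_RATE = 15  # hypothetical wage for an hour of childcare

def gdp_contribution(market_transactions):
    """GDP counts only paid market transactions."""
    return sum(market_transactions)

# Scenario A: I watch my own kid; you do your own tasks. No transactions.
scenario_a = gdp_contribution([])

# Scenario B: I pay you to watch my kid, and you pay me the same amount
# to do what you would have done anyway. Same activity, but both
# payments now count.
scenario_b = gdp_contribution([HOURLY_RATE, HOURLY_RATE])

print(scenario_a, scenario_b)  # 0 30
```

The work done is identical in both scenarios; only the bookkeeping differs, which is exactly what an optimizer pushing on the metric would exploit.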

Futarchy and Unfriendly AI

9 jkaufman 03 April 2015 09:45PM

We have a reasonably clear sense of what "good" is, but it's not perfect. Suffering is bad, pleasure is good, more people living enjoyable lives is good, yes, but tradeoffs are hard. How much worse is it to go blind than to lose your leg? [1] How do we compare the death of someone at eighty to the death of someone at twelve? If you wanted to build some automated system that would go from data about the world to a number representing how well it's doing, where you would prefer any world that scored higher to any world scoring lower, that would be very difficult.

Say, however, that you've built a metric that you think matches your values well and you put some powerful optimizer to work maximizing that metric. This optimizer might do many things you think are great, but it might be that the easiest ways to maximize the metric are the ones that pull it apart from your values. Perhaps after it's in place it turns out your metric included many things that only strongly correlated with what you cared about, where the correlation breaks down under maximization.
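A minimal sketch of that failure mode, with an entirely made-up proxy metric: "honest" effort produces real value, "gaming" effort improves only the metric, and an indifferent optimizer with a fixed effort budget pours everything into gaming.

```python
# Toy model of a metric coming apart from the value it was meant to
# track (all numbers made up).

def true_value(honest_effort, gaming_effort):
    return honest_effort  # only honest effort produces real value

def proxy_metric(honest_effort, gaming_effort):
    # With gaming_effort near zero, the proxy tracks the true value
    # well -- until an optimizer shows up.
    return honest_effort + 3 * gaming_effort

# An indifferent optimizer allocates a fixed effort budget to maximize
# the proxy, not the value.
budget = 10
best = max(
    ((h, budget - h) for h in range(budget + 1)),
    key=lambda split: proxy_metric(*split),
)
print(best)               # (0, 10): every unit of effort goes to gaming
print(true_value(*best))  # 0: the metric is maximized, the value is gone
```

Under light optimization pressure the proxy and the value correlate; under maximization the correlation breaks completely.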

What confuses me is that the people who warn about this scenario with respect to AI are often the same people in favor of futarchy. They both involve trying to define your values and then setting an indifferent optimizer to work on them. If you think AI would be very dangerous but futarchy would be very good, why?

I also posted this on my blog.

[1] This is a question people working in public health try to answer with Disability Weights for DALYs.
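As a rough sketch of how that calculation works (the disability weights below are placeholders, not actual published GBD values): a DALY sums years of life lost and years lived with disability, the latter scaled by a weight between 0 and 1.

```python
# Sketch of the DALY calculation: DALY = YLL + YLD, where YLD scales
# years lived with a condition by a disability weight in [0, 1].
# The weights below are made-up placeholders, not real GBD values.

def dalys(years_of_life_lost, years_with_disability, disability_weight):
    yll = years_of_life_lost
    yld = disability_weight * years_with_disability
    return yll + yld

# Hypothetical comparison over 40 remaining years of life:
blindness = dalys(0, 40, 0.2)  # placeholder weight
lost_leg = dalys(0, 40, 0.1)   # placeholder weight
print(blindness, lost_leg)     # 8.0 4.0
```

The hard part, of course, is choosing the weights, which is the tradeoff question the post raises.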

Comment author: eternal_neophyte 16 March 2015 12:11:37AM 2 points [-]

I don't think any solution to it would necessarily reveal any major new insights.

I'll just leave this here: http://math.stackexchange.com/questions/1892/the-practical-implication-of-p-vs-np-problem

Comment author: jkaufman 17 March 2015 05:26:02PM 0 points [-]

I mostly agree with passive_fist:

  • A problem can be in P and still not be computationally feasible; O(n^10000) algorithms are still in P.
  • Almost always a very good approximate answer is good enough. The travelling salesman problem is NP-hard, but we have efficient enough approximations (not just in P, but with small exponents) that the benefit of getting to a perfect answer is low.
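As an illustration of the second point, here is a sketch of the classic nearest-neighbor heuristic for TSP: O(n²), no optimality guarantee, but cheap and often reasonable in practice.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: from the current city, always visit the
    closest unvisited city next. O(n^2) and not guaranteed optimal,
    but fast and often good enough."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at the first city
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# A small rectangle of cities: the greedy tour happens to be optimal here.
pts = [(0, 0), (2, 0), (2, 1), (0, 1)]
print(nearest_neighbor_tour(pts))  # [0, 3, 2, 1]
```

For an adversarial input the greedy tour can be noticeably worse than optimal, which is the "benefit of getting to perfect is low" tradeoff in miniature.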
Comment author: jkaufman 11 March 2015 04:26:29PM 9 points [-]

Prediction: people who aren't Harry can use the stone once every 216 seconds (3:36).

(The idea being that the rule is "400 times a day" and Harry has a 26hr day.)
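The arithmetic behind the prediction, spelled out (assuming the 26-hour figure for Harry's day, as the comment does):

```python
# Conjecture: the stone's real rule is "400 uses per day", and Harry
# runs on a 26-hour day while everyone else runs on 24 hours.
USES_PER_DAY = 400

normal_interval = 24 * 3600 / USES_PER_DAY  # seconds between uses
harry_interval = 26 * 3600 / USES_PER_DAY

print(normal_interval)  # 216.0 seconds -> 3:36
print(harry_interval)   # 234.0 seconds -> 3:54
```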

Comment author: Unknowns 10 March 2015 07:23:35PM 14 points [-]

Time Turners use units of one hour, so at least some kinds of magical items use our units.

Comment author: jkaufman 11 March 2015 01:21:16PM 1 point [-]

Can you use the Time Turner in different increments, and possibly with a different maximum, if you fully understand that hours are arbitrary? This sounds exactly like partial transfiguration.

(Alternatively, you go back some fixed amount of time for every grain of time sand in the turner, and you could build one of other increments by using a different quantity of sand.)

Comment author: qsz 04 March 2015 12:24:55PM *  6 points [-]

There was a good discussion on LW about Arthur Chu's performance on Jeopardy LINK - especially how he took advantage of regular patterns underlying question selection and "random" events to try and maximise his winnings.

The comment thread on that discussion also mentions other "rationalist" practices by other contestants.

Comment author: jkaufman 06 March 2015 04:45:27PM 3 points [-]

Note that Arthur does not consider himself a rationalist, and hates LW.

Comment author: jkaufman 02 March 2015 02:34:34AM *  3 points [-]

Harry can do a lot of things, but V already knows many of them. His strongest options are things he's sure V has no idea he can do, like the swerving hex he used on Moody.

EDIT: His strongest options for ways to outwit LV, not things to tell LV to save friends.

Comment author: Izeinwinter 26 February 2015 03:48:11PM 0 points [-]

And since today's temp work was impressively mindless, I got rather a lot of thinking done.

Fair warning, this may well just be heading right into epileptic trees turf.

Dumbledore just cast himself from time in order to fulfill the prophecy about Harry Potter.

That line about how Harry will have to find some other dark lord to vanquish? It was not about the far future at all, it was about the next four minutes.

Let me explain: as long as the prophecy is in play, only Harry can defeat the dark lord. And that is not going to work against Voldemort. An 11-year-old, no matter how resourceful and clever, is just not going to come out on top of that fight. But just as the prophecy could have been about Neville as well as Harry, there is also more than one dark lord it could be about. The story pointed this out earlier.

So the prophecy is no longer in effect. Voldemort can be defeated by anyone with the firepower and a counter for the horcruxes.

And Dumbledore, the Order of the Phoenix, and everyone else he could bring in on it have crowdsourced a smackdown, which is about to land.

Most of this smackdown is in the form of long-term plots that are about to bear fruit.

From the top: Dumbledore knew who Quirrell and Harry were from day one. Each and every piece of information given to Harry was relayed in the expectation that Voldemort would hear it.

Dumble, Flamel, et al. made a fake true Cloak of Invisibility, the point being to provide misinformation about what the mark of the Deathly Hallows looks like. I don't know what the "stone of resurrection" actually is, but I think Voldy will not like what it does, not one tiny bit. At a guess, it is for mapping out the magical "net" and finding the darn things?

The people who just apparated in: Not death eaters. They're masked and cloaked minions. That's such a cliche it's actually painful to contemplate.

Voldemort's use of the stone to raise the dead is not the first time it has been used that way. The rite he intended to use is not original to him; it is an old piece of lore, and one Flamel told the Order of the Phoenix about. This means "Flamel" has been able to raise anyone who still had foes and servants living and known graves of ancestors.

So it isn't new. Not widely used, but not new. Note that this isn't a good rite for defeating death in general, simply because most people don't have the first two at all. But it is a very effective way to ditch an identity for any of her collaborators who are in it for the long haul. Which means a lot of very powerful, supposedly deceased wizards and witches owe her. And he just tried to have her killed. (May even have succeeded. If so, she's probably already back up and wanting her stone back before anything expires.)

That is what the hour's delay in Snape's room was about: it was to take all the Death Eaters into custody and gather people up for a seriously one-sided bout. The reason this ends in "liters of blood" is that the plan is to drain Voldemort dry so that they can raise as many of his victims as possible.

Oh, and Dumbledore didn't have unique access to divination beyond a season pass to the hall of prophecies. If he had, that would make the plot unsolvable, because we can't reason acausally.

Comment author: jkaufman 26 February 2015 08:11:17PM 0 points [-]

How does the fake Cloak hide you from the Mirror?
