Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: shminux 23 October 2014 03:24:46AM 0 points [-]

Posting to LW Main instead of, say, your tumblr account seen maybe by a couple of your online friends, seems like an example of an easy power multiplier. So, you are being consistent :)

Comment author: hawkice 23 October 2014 03:21:38AM 2 points [-]

I am somewhat disappointed to be asked about favorability toward a movement without being allowed to distinguish between the ideals of that movement and the movement as it exists (see: feminism and social justice, which, as phenomena in reality, appear to be ways to generate indignation on tumblr -- I love equality, but I do not use tumblr and I don't see any purpose in being indignant on the internet).

Also, as regards a "Great Stagnation": Strongly Doubt is not the opposite of Strongly Believe. I have strong doubts, and the balance of my estimation is that Cowen is incorrect -- but the radio button for that position does not exist; it is too far toward one end of the spectrum, despite not being a hyper-radicalized opinion.

Comment author: luminosity 23 October 2014 03:19:47AM 2 points [-]

Taken the survey (would have loved to do digit ratio, but too difficult to get access to the equipment needed).

Comment author: DataPacRat 23 October 2014 03:15:46AM *  2 points [-]

Done - and mildly disappointed that we won't be measuring the prevalence of transponyism this year.

Does this post appear on LW's Main or Discussion pages for anyone else? I only found it via an offsite reference. Edit: Nevermind, I had my Main set to 'Promoted' instead of 'New'.

Comment author: pjeby 23 October 2014 02:57:07AM 0 points [-]

Wow. This is the simplest/shortest explanation I've seen yet for how AI can become unfriendly, without reference to Terminator-style outcomes.

Of course, per the illusion of transparency, it may be that it only seems so clear to me because of my long-term exposure to the idea of FAI... Still, it looks like an important step in subdividing the problem, and one that I expect would be more intuitively obvious to outsiders: "we're studying ways to make sure the sorcerer's apprentice can turn the magic mop off." ;-)

Comment author: rejuvyesh 23 October 2014 02:55:31AM 5 points [-]

Except for the digit lengths, survey taken!

Comment author: Vulture 23 October 2014 02:54:58AM 5 points [-]

Taken! The way you were being so apologetic about the length, I thought it would be much more grueling - I found it quick and fun! :)

In response to Original Seeing
Comment author: Timo 23 October 2014 02:53:30AM 0 points [-]

Whooo. That was great. I ended the last paragraph with a raised eyebrow and the wind of clearness brushing through my mind. I get it. I'm tempted to write some metaphor about Unoriginal Seeing, but what's the need, since this passage is perfect.

Comment author: ahbwramc 23 October 2014 02:46:23AM 5 points [-]

Survey complete! I'd have answered the digit ratio question, but I don't have a ruler of all things at home. Ooh, now to go check my answers for the calibration questions.

Comment author: gwern 23 October 2014 02:35:10AM 10 points [-]

Done. Too bad the basilisk question wasn't on it; I hope that will one day be possible.

Comment author: Fluttershy 23 October 2014 02:32:14AM 8 points [-]

I completed the survey, huzzah!

Comment author: Sigmaleph 23 October 2014 02:14:57AM 9 points [-]

Did the survey. Also, now I know my digit ratio!

Comment author: Vaniver 23 October 2014 01:57:29AM *  0 points [-]

Do you know of any cases where this simulation-seeded Gaussian Process was then used as a prior, and updated on empirical data?

None come to mind, sadly. :( (I haven't read through all of his work, though, and he might know someone who took it in that direction.)

Comment author: Vaniver 23 October 2014 01:41:24AM 9 points [-]

Did the survey!

Comment author: alex_zag_al 23 October 2014 01:41:15AM *  0 points [-]

Do you know of any cases where this simulation-seeded Gaussian Process was then used as a prior, and updated on empirical data?


  • uncertain parameters --simulation--> distribution over state

  • noisy observations --standard bayesian update--> refined distribution over state

Cari Kaufman's research profile made me think that's something she was interested in. But I haven't found any publications by her or anyone else that actually do this.

I actually think that I misread her research description, latching on to the one familiar idea.
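The two bullet steps above can be sketched numerically. This is a minimal toy sketch: the one-dimensional state, the param-squared "simulation," and the conjugate Gaussian update are all invented for illustration and are not drawn from Kaufman's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: uncertain parameters --simulation--> distribution over state.
# Toy "simulation": state = param**2, plus a little model noise.
params = rng.normal(loc=2.0, scale=0.3, size=10_000)  # uncertain input parameter
ensemble = params ** 2 + rng.normal(0.0, 0.05, size=params.size)
prior_mean, prior_var = ensemble.mean(), ensemble.var()

# Step 2: noisy observations --standard bayesian update--> refined distribution.
# Conjugate Gaussian update for an observation y = state + N(0, obs_var).
y, obs_var = 4.2, 0.1
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + y / obs_var)

print("prior:", prior_mean, prior_var)
print("posterior:", post_mean, post_var)
```

The posterior mean lands between the simulation-derived prior mean and the observation, weighted by their precisions, and the posterior variance is smaller than either.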

Comment author: ete 23 October 2014 01:34:01AM 11 points [-]

Filled in, but did not do digit lengths because I have no access to a printer or scanner in the near future.

In response to Power and difficulty
Comment author: Unnamed 23 October 2014 12:30:35AM 1 point [-]

One part of HPMOR that seems especially relevant here is the "spend 5 minutes actually thinking about the problem" technique.

If you're facing a big, important problem, it's natural to suppose that solving it would take a whole lot of time, work, and skill. Maybe so much that it's beyond your capabilities, in which case you can write the problem off as impossible. But presumably at least enough so that it makes sense to put off thinking about the problem until sometime when you have a big chunk of free time to focus on it, and are feeling especially cognitively sharp and motivated. Right?

Turns out that if you spend 5 actual minutes thinking about the problem, that is sometimes enough to solve it (or at least make substantial progress). Especially if you are the Weasley twins, and the problem is a spectacular prank.

Comment author: Vaniver 22 October 2014 10:14:55PM 0 points [-]

The other is Cari Kaufman, who builds probability distributions over the results of a climate simulation. (the idea seems to be to extrapolate from simulations actually run with similar but not identical parameters)

I was introduced to the idea of 'emulation' of complex models by Tony O'Hagan a few years back: you use a Gaussian Process to model what a black-box simulation will give across all possible inputs, seeded with actual simulation runs that you performed. (This also helps with active learning, in that you can find the regions of the input space where you're most uncertain what the simulation will give, and then run a simulation with those input parameters.) I believe the first application it saw was also in climate modeling.
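That emulate-then-query loop can be sketched with scikit-learn's GaussianProcessRegressor. The toy simulation function, the RBF kernel settings, and the input grid are all invented for illustration; this is not O'Hagan's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulation(x):
    """Stand-in for the expensive black-box simulation."""
    return np.sin(3 * x) + 0.5 * x

# Seed the emulator with a handful of actual simulation runs.
X_train = np.array([[0.0], [0.5], [2.0]])
y_train = simulation(X_train).ravel()

# optimizer=None keeps the kernel fixed, so this sketch is deterministic.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), optimizer=None)
gp.fit(X_train, y_train)

# Active learning: query the emulator across the input space, then run the
# real simulation next wherever predictive uncertainty is highest.
X_grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)
x_next = X_grid[np.argmax(std), 0]

print("next simulation input:", x_next)
```

With training runs at 0, 0.5, and 2.0, the predictive standard deviation peaks in the middle of the largest gap, so the suggested next run lands around x = 1.25.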

Comment author: undermind 22 October 2014 09:41:25PM 0 points [-]

Yeah, that original phrase about sunk costs was pretty unsubstantiated. What I meant to say (which I've edited in) is that, much of the time, past investments are not in fact sunk costs.

Comment author: gwern 22 October 2014 09:36:55PM 1 point [-]

There are two people I know of doing research that resembles this. One is Francesco Stingo, who published a method for detecting binding between two different kinds of molecules--miRNA and mRNA. His method has a prior based in part on chemistry-based predictions of binding, updated on the results of microarray experiments. The other is Cari Kaufman, who builds probability distributions over the results of a climate simulation (the idea seems to be to extrapolate from simulations actually run with similar but not identical parameters).

Empirical priors + simulation of relevant models is somewhat similar to my idea on how to estimate P(causality|correlation): use explicit comparisons of correlational & randomized trials as priors when available, and simulate P(causality|correlation) on random causal networks when not available.
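The "random causal networks" half of that estimate can be sketched as a toy Monte Carlo. The DAG size, the edge probability, and the faithfulness-style shortcut (two variables are correlated iff some node is an ancestor of both) are my own illustrative choices, not gwern's.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def random_dag(n):
    """Random DAG on n nodes: pick a causal order, include each forward edge w.p. 1/2."""
    order = rng.permutation(n)
    return {(order[i], order[j])
            for i, j in itertools.combinations(range(n), 2)
            if rng.random() < 0.5}

def descendants(edges, n):
    """desc[v] = v plus everything reachable from v (n passes close a DAG)."""
    desc = {v: {v} for v in range(n)}
    for _ in range(n):
        for u, v in edges:
            desc[u] |= desc[v]
    return desc

trials, n_corr, n_caus, n = 2000, 0, 0, 4
a, b = 0, 1  # an arbitrary pair of observed variables
for _ in range(trials):
    desc = descendants(random_dag(n), n)
    # Correlated iff some node is an ancestor of both a and b;
    # causal iff one of them is an ancestor of the other.
    if any(a in desc[u] and b in desc[u] for u in range(n)):
        n_corr += 1
        if b in desc[a] or a in desc[b]:
            n_caus += 1

print("P(causality|correlation) ~", n_caus / n_corr)
```

The interesting output is the gap between the two counters: correlations produced only by a common ancestor are exactly the cases where correlation is not causation, and their frequency under the chosen network prior is the estimate.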
