
Open thread, September 25 - October 1, 2017

Post author: Thomas 25 September 2017 07:36AM 0 points
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top-level comments on this article" and "…".

Comments (30)

Comment author: turchin 26 September 2017 12:07:40PM 6 points

Happy Petrov Day! 34 years ago nuclear war was prevented by a single hero. He died this year, but many people now strive to prevent global catastrophic risks and will remember him forever.

Comment author: Lumifer 02 October 2017 04:07:20PM * 1 point
Comment author: g_pepper 02 October 2017 05:56:40PM * 0 points

Interesting paper. But, contrary to the popular summary in the first link, it really only shows that simulations of certain quantum phenomena are impossible using classical computers (specifically, using the Quantum Monte Carlo method). But this is not really surprising - one area where quantum computers show much promise is in simulating quantum systems that are too difficult to simulate classically.

So, if the authors are right, we might still be living in a computer simulation, but it would have to be one running on a quantum computer.

Comment author: Lumifer 02 October 2017 06:29:46PM 0 points

True. A bit more generally, this paper relies on the simulating universe having physics similar to the simulated universe's, which, as far as I can see, is an unfounded assumption made because otherwise there would be nothing to discuss.

Comment author: g_pepper 02 October 2017 06:57:42PM * 0 points

Yep. This could be because Nick Bostrom's original simulation argument focuses on ancestor simulations, which pretty much implies that the simulating and simulated worlds are similar. However, in question 11 here, Bostrom explains why he focused on ancestor simulations and states that the argument could be generalized to include simulations of worlds that are very different from the simulating world.

Comment author: Lumifer 02 October 2017 07:25:37PM 0 points

Well... Bostrom says:

If the simulation-hypothesis is true, then we are living inside a computer, and whichever civilization built that computer is our "home" civilization by definition

and from this point of view the physics doesn't have to match.

Comment author: g_pepper 02 October 2017 09:10:19PM * 0 points

Yep, I agree. The second sentence of this comment's grandparent was intended to support that conclusion, but my wording was sloppily ambiguous. I made a minor edit to it to (hopefully) remove the ambiguity.

Comment author: CellBioGuy 30 September 2017 07:26:31AM * 1 point

Attended my first honest-to-god Astrobiology meeting/symposium/conference. Wow, it was amazing...

Comment author: morganism 30 September 2017 11:04:06PM 0 points

Are they going to post the presentations and posters?

Comment author: CellBioGuy 01 October 2017 12:39:41AM * 0 points

One coming this spring will. This one was livestreamed, but I'm not sure if it was recorded.

An update to this was presented:

https://www.youtube.com/watch?v=IBR6th28qQg

Comment author: Thomas 25 September 2017 07:41:18AM 1 point
Comment author: Oscar_Cunningham 25 September 2017 10:03:03AM * 1 point

If you fail to get your n flips in a row, your expected number of flips on that attempt is the sum from i = 1 to n of i*2^-i, divided by (1-2^-n). This gives (2-(n+2)/2^n)/(1-2^-n). Let E be the expected number of flips needed in total. Then:

E = (2^-n)n + (1-2^-n)[(2-(n+2)/2^n)/(1-2^-n) + E]

Hence (2^-n)E = (2^-n)n + 2 - (n+2)/2^n, so E = n + 2^(n+1) - (n+2) = 2^(n+1) - 2
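
Assuming the puzzle (whose statement is collapsed above) asks for the expected number of fair-coin flips needed before seeing n heads in a row, here is a quick Monte Carlo sketch to check the closed form E = 2^(n+1) - 2; the function name is made up for illustration:

    import random

    def flips_until_streak(n):
        # Flip a fair coin until we see n heads in a row; return total flips.
        streak = 0
        flips = 0
        while streak < n:
            flips += 1
            if random.random() < 0.5:  # heads
                streak += 1
            else:                      # tails resets the streak
                streak = 0
        return flips

    n = 5
    trials = 100_000
    mean = sum(flips_until_streak(n) for _ in range(trials)) / trials
    print(mean, 2 ** (n + 1) - 2)  # empirical mean should be close to 62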

Comment author: Osho 02 October 2017 09:34:35PM 0 points

Is anyone interested in starting a small team (2-3 people) to work on this Kaggle dataset?

https://www.kaggle.com/c/porto-seguro-safe-driver-prediction

Comment author: abcdef 29 September 2017 12:29:53PM 0 points

Is there any Android app that you would suggest?

Comment author: Elo 02 October 2017 08:17:01PM 0 points
Comment author: Khoth 29 September 2017 10:43:22PM 0 points

Fire Emblem: Heroes

Comment author: Lumifer 29 September 2017 04:15:39PM 0 points

LOL, a literal "is there an app for that?"

Comment author: WalterL 28 September 2017 03:25:57PM 0 points

https://www.vox.com/policy-and-politics/2017/9/28/16367580/campaigning-doesnt-work-general-election-study-kalla-broockman

This is a pretty damning takedown of the whole concept of political campaigning. It is pretty hilarious when you consider how much money and how much human toil have been squandered in this manner.

Comment author: ChristianKl 30 September 2017 08:34:09PM 0 points

It's not that much money. The 2016 campaign cost less than Pampers' annual advertising budget.

Comment author: Lumifer 28 September 2017 04:36:53PM * 0 points

From the link:

It’s an especially shocking result given the authors’ previous work. Kalla and Broockman conducted a large-scale canvassing experiment, published in 2016, that found that pro-trans-rights canvassers could change Miami residents' minds about transgender issues by having intense, substantive, 10-minute conversations with them. The persuasive effects of this canvassing were durable, lasting at least three months. ...

But now, Kalla and Broockman are finding that this kind of persuasion doesn’t appear to happen during campaigns, at least not very often.

I'd wait a couple of years, they'll probably change their mind again.

Besides, the goal of campaigning is not to change someone's mind -- it is to win elections.

Comment author: gjm 28 September 2017 07:49:37PM 1 point

On the face of it, the goal of campaigning is to win elections by changing people's minds.

It may also help e.g. by encouraging The Base, but if it turns out that that's the main way it's effective then I bet there are more effective means to that goal than campaigning.

Incidentally, if anyone's having the same nagging feeling I did -- weren't Kalla and Broockman involved somehow in some sort of scandal where someone reported on an intense-canvassing experiment like that but it was all faked, or something? -- the answer is that they were "involved" but on the right side: they helped to expose someone else's dodgy study, at the same time as they were doing their own which so far as I know is not under any sort of suspicion.

Comment author: Lumifer 29 September 2017 04:13:26PM * 1 point

On the face of it, the goal of campaigning is to win elections by changing people's minds.

That doesn't look obvious to me unless we're talking not about the face but the facade. Campaigning is mostly about telling people what they want to hear, certainly not about informing them they will need to rearrange their prejudices [1].

From the elections point of view there are three groups of people you're concerned with:

  • Your own Rabid Base. You want to energise them, provide incentives for them to be loud, active, confident, with contagious enthusiasm.

  • Other parties' Rabid Bases. Flip the sign: you want to demoralise them, make them doubtful, weak, passive. You want them to sit inside and mope.

  • The Undecideds, aka the Great Middle through which you have to muddle. This is where most of the action is. Do you want to convince them with carefully arranged chains of logical policy arguments? Hell, no. They don't vote on this basis. They vote on the basis of (1) who promises more; (2) who seems less likely to screw the pooch; and (3) who exudes more charisma/leadership -- not necessarily in this order, of course. Most of this is System 1 stuff, aka gut feeling.

Notice how pretty much none of the above involves changing people's minds.


[1] "A great many people think they are thinking when they are merely rearranging their prejudices" -- William James

Comment author: abcdef 28 September 2017 12:21:46PM * 0 points

I'm not a statistician, but I happen to have some intuitions and sometimes work out formulas or find them on the web.

I have a bunch of students who took a test each day. Each day's test had a threshold score out of, say, 100 points. Scores under the threshold are considered insufficient.

I don't know which of the two is true:

  1. I can either use the tests to evaluate the students, or the students to evaluate the tests.

  2. I can evaluate the students using the tests and the tests using the students at the same time.

Option 2 seems counterintuitive at first sight, especially if one wants to be epistemically sound. It seems more intuitive at second sight, though. I think it might be analogous to how you can evaluate a circular flow of feedback using linear algebra (cf. the LW 2.0 discussions).

Some other context: in my evaluation model I would rather not only consider whether the scores were sufficient or not, but also how far above or below the threshold they were, possibly after suitably transforming them. Also, I want the weights of the scores to decay exponentially over time. I would also prefer a Bayesian approach.

Is this reasonable, and where can I find instructions on how to do so?
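
For what it's worth, the exponential-decay part has a simple core even before any Bayesian machinery. A minimal sketch, assuming a made-up decay parameter and using the score-minus-threshold margins described above:

    def weighted_score(scores, thresholds, decay=0.9):
        # Exponentially weighted average of (score - threshold) margins.
        # scores[0] is the oldest test; recent tests get weight closer to 1.
        # `decay` is an assumed tuning parameter in (0, 1).
        n = len(scores)
        weights = [decay ** (n - 1 - i) for i in range(n)]
        margins = [s - t for s, t in zip(scores, thresholds)]
        return sum(w * m for w, m in zip(weights, margins)) / sum(weights)

    print(weighted_score([55, 70, 80], [60, 60, 60]))  # recent passes outweigh an old fail

A decay close to 1 keeps old tests relevant; a decay close to 0 makes the evaluation track only the most recent tests.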

Comment author: IlyaShpitser 28 September 2017 02:08:21PM * 2 points

You have an experimental design problem: https://en.wikipedia.org/wiki/Design_of_experiments.

The way that formalism would think about your problem is that you have two "treatments" (type of test, which you can vary, and type of student) and an "outcome" (how a given student does on a given test, typically some sort of histogram that's hopefully shaped like a bell).

Your goal is to efficiently vary "treatment" values to learn as much as possible about the causal relationship between how you structure a test, and student quality, and the outcome.


There's reading you can do on this; it's a classical problem in statistics. Both Jerzy Neyman and Ronald Fisher wrote a lot about it, and the latter wrote a famous book on the subject (The Design of Experiments).

In fact, in some sense this is the problem of statistics, in the sense that modern statistics could be said to have grown out of, and generalized from, this problem.
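
As a toy illustration of this framing (everything below is invented for the example; it is not from the comment): simulate scores driven by an additive student effect plus a test effect, then recover both factors with a two-way ANOVA:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    ability = {"s1": 70.0, "s2": 55.0, "s3": 85.0}  # assumed per-student effects
    difficulty = {"easy": 10.0, "hard": -10.0}      # assumed per-test effects

    rows = [{"student": s, "test": t,
             "score": ability[s] + difficulty[t] + rng.normal(0, 5)}
            for s in ability for t in difficulty for _ in range(10)]
    df = pd.DataFrame(rows)

    model = smf.ols("score ~ C(student) + C(test)", data=df).fit()
    print(anova_lm(model))  # both factors should come out clearly significant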

Comment author: abcdef 29 September 2017 12:40:12PM 0 points

In your opinion what is a reasonable price to have a statistician write me a formula for this?

Comment author: username2 29 September 2017 03:40:05PM 0 points

I do statistical consulting as part of my day job responsibilities, and I'm afraid to say this is not how it works.

If you came to me with this question, I would first ask what exactly you are trying to achieve with the analyses, before getting into the additional constraints you want to include. Unfortunately, it is far more challenging if the data owner comes to the statistician after the data are collected rather than before (when the principles of experimental design Ilya mentioned can be used to make sure those questions can actually be answered with statistical methods).

That said, temporarily ignoring the additional constraints you mentioned (e.g. whether and how to transform the data; exponential decay and what that actually means for student evaluation scores; the magic word "Bayes"), a useful search term might be "item response theory".

Good luck.
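
For reference, the simplest item-response-theory model (the Rasch model) does jointly score student ability and item difficulty, which is essentially option 2 from the question above. A minimal sketch with made-up parameter values:

    import math

    def p_correct(ability, difficulty):
        # Rasch model: probability of a correct answer rises with the
        # gap between student ability and item difficulty.
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    print(p_correct(1.0, 0.0))  # strong student, average item: ~0.73
    print(p_correct(0.0, 1.0))  # average student, hard item:  ~0.27

Fitting the abilities and difficulties from observed results (e.g. by maximum likelihood) is what IRT software packages automate.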

Comment author: IlyaShpitser 29 September 2017 02:37:31PM * 0 points

Don't know. Ask a statistician who knows about design.

Comment author: MrMind 29 September 2017 09:35:06AM 0 points

From a Bayesian perspective, you calculate P(S|T) and P(T|S) at the same time, so it doesn't really matter. What does matter, and greatly, are your starting assumptions and models: if you have only one model for each entity, you won't be able to calculate how much some datum is evidence for or against your model.

Comment author: abcdef 29 September 2017 12:41:07PM 0 points

Sorry, I don't follow. What do you mean by starting assumptions and models, and why should I have more than one for each entity?

Comment author: MrMind 29 September 2017 03:34:51PM 0 points

Well, to calculate P(T|S) = p you need a model of how a student 'works', such that test result T happens for students of kind S with probability p. Or you can calculate P(S|T), thereby having a model of how a test 'works' by producing students of kind S with probability p.
If you have only one of those, these are the only things you can calculate.

If on the other hand you have one or more complementary models (complementary here means that they exclude each other and form a complete set), then you can calculate the probabilities P(T1|S1), P(T1|S2), P(T2|S1) and P(T2|S2). With these numbers, via Bayes, you have both P(T|S) and P(S|T), so it's up to you to decide whether you're analyzing students or tests.
Usually one is more natural than the other, but it's up to you, since they're models anyway.
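
To make that concrete with invented numbers: take two mutually exclusive student models S1 and S2, a uniform prior, and assumed probabilities of passing a test T. Bayes then turns the P(T|S) values into P(S|T):

    prior = {"S1": 0.5, "S2": 0.5}   # assumed prior over student models
    p_pass = {"S1": 0.9, "S2": 0.3}  # assumed P(pass | model)

    evidence = sum(prior[s] * p_pass[s] for s in prior)  # P(pass) = 0.6
    posterior = {s: prior[s] * p_pass[s] / evidence for s in prior}
    print(posterior)  # {'S1': 0.75, 'S2': 0.25}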