Comment author: Adele_L 26 September 2014 05:22:05PM *  2 points [-]

You should send a message to Viliam Bur.

Comment author: MugaSofer 27 September 2014 11:33:47AM 0 points [-]

Thank you!

Comment author: MugaSofer 26 September 2014 03:24:52PM 1 point [-]

So ... I suspect someone might be doing that mass-downvote thing again. (To me, at least.)

Where do I go to inform a moderator so they can check?

Comment author: MugaSofer 26 September 2014 03:13:55PM 0 points [-]

Hey, I've listened to a lot of ideas labelled "dangerous", some of which were labelled "extremely dangerous". Haven't gone crazy yet.

I'd definitely like to discuss it with you privately, if only to compare your idea to what I already know.

Comment author: KnaveOfAllTrades 06 September 2014 07:45:47PM 2 points [-]

I'm not sure if it's because I'm Confused, but I'm struggling to understand whether you are disagreeing, and if so, where your disagreement lies and how the parent comment in particular relates to that disagreement/the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.

Comment author: MugaSofer 07 September 2014 04:56:31PM *  0 points [-]

I'm saying that if Sleeping Beauty's goal is to better understand the world by performing a Bayesian update on evidence, then I think that goal acts as a form of "payoff" - one that gives Thirder results.

From If a tree falls on Sleeping Beauty...:

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.

In this case it is optimal to bet 1/3 that the coin came up heads, 2/3 that it came up tails: [snip table]
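
To reproduce the arithmetic behind the snipped table, here is a minimal sketch (my own, not from ata's post) assuming the per-awakening logarithmic scoring quoted above; it searches for the credence in heads that maximizes Beauty's expected score:

```python
# A minimal sketch, assuming the per-awakening log scoring quoted above.
# Heads (prob 1/2): one awakening, scored ln(p).
# Tails (prob 1/2): two awakenings, each scored ln(1 - p), summed.
import math

def expected_log_score(p_heads):
    """Expected total log score per run of the experiment."""
    return 0.5 * math.log(p_heads) + 0.5 * 2 * math.log(1 - p_heads)

# Grid-search the credence with the best expected score.
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=expected_log_score)
print(best)  # ~0.333: the Thirder answer
```

The optimum lands at 1/3 exactly because tails produces twice as many scored awakenings as heads.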

Comment author: KnaveOfAllTrades 06 September 2014 06:48:09PM *  1 point [-]

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

Can you give a concrete example of what you see as an example of where anthropic reasoning wins (or would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write 'halfer' and 'thirder' computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.
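
For what it's worth, here is a sketch of that payoff ambiguity (my own construction, assuming a logarithmic scoring rule; nothing here is from ata's post): the same pair of credences changes winner depending on whether Beauty is scored once per awakening or once per experiment:

```python
# A sketch of the payoff ambiguity: identical credences, two ways of
# counting the score, two different "winning" programs.
import math

def per_awakening(p):
    # Every awakening is scored and the scores are summed:
    # heads gives 1 awakening, tails gives 2.
    return 0.5 * math.log(p) + 0.5 * 2 * math.log(1 - p)

def per_experiment(p):
    # Duplicate awakenings collapse into a single score per run.
    return 0.5 * math.log(p) + 0.5 * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=per_awakening))   # ~0.333 -> the 'thirder' program wins
print(max(grid, key=per_experiment))  # ~0.500 -> the 'halfer' program wins
```

Neither program is wrong; they are optimizing different payoff structures, which is the dissolution.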

Comment author: MugaSofer 06 September 2014 07:10:14PM *  0 points [-]

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

OK, well by analogy, what's the "payoff structure" for nuclear anthropics?

Obviously, we can't prevent a nuclear war after the fact. The payoff we get for being right is in the form of information: a better model of the world.

It isn't perfectly analogous, but it seems to me that "be right" is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.

Comment author: MarkL 15 August 2014 08:11:12PM 13 points [-]

The point is that these speedruns presumably involve backtracking. They can rewind time and explore different paths until they find one they like.

Comment author: MugaSofer 06 September 2014 06:54:33PM *  1 point [-]

So do regular playthroughs, though; it's a video game. The first paragraph still remarks on "how different optimal play can be from normal play."

Comment author: MugaSofer 06 September 2014 05:59:53PM *  0 points [-]

The trouble is, anthropic evidence works. I wish it didn't, because I wish the nuclear arms race hadn't come so close to killing us (it may well have killed others), and had instead been prevented by some sort of hard-to-observe cooperation.

But it works. Witness the Sleeping Beauty Problem, for example. Or the Sailor's Child problem, a modified version of Sleeping Beauty that I could go outside and play right now if I wished.

The winning solution, the one that gives the right answer, is to use "anthropic" evidence.

If this confuses you, then I (seriously) suggest you re-examine your understanding of how to perform anthropic calculations.


In fact, what you are describing is not "anthropic" evidence, but just ordinary evidence.

I (think I) know that George VI had five siblings (because you told me so). That observation is more likely in a world where he did have five siblings (because I guessed your line of argument pretty early in the post, so I know you have no reason to trick me). Therefore, updating on this observation, it is probable that George VI had five siblings.

Is this an explanation? Sort of.

There might be some special reason why George VI had only five siblings - maybe his parents decided to stop after five.

More likely, the true "explanation" is that he just happened to have five siblings. It wasn't unusually probable; it just happened by chance to be that number.

And if that is the true explanation, then that is what I desire to believe.

Comment author: blogospheroid 30 August 2014 11:34:55AM 1 point [-]

I'd like to repeat the comment I made at "outside in" on the same topic, the Great Filter.

I think our knowledge at all levels – physics, chemistry, biology, praxeology, sociology – is nowhere near the point where we should be worrying too much about the Fermi paradox.

Our physics has openly acknowledged broad gaps in our knowledge by postulating dark matter, dark energy, and a bunch of other placeholders for "I don't know". We don't have physics theories that span from the smallest scales to the largest.

Coming to chemistry and biology, we've still not demonstrated abiogenesis. We have not created any new basis for life other than the twisty strands mother nature already prepared and handed to us everywhere. We don't understand the causes of mutations well enough to predict them to any extent. We simply don't know enough to fill in these gaps.

Coming to basic sustenance, we don't know the minimum requirements for a self-contained, multi-generational habitat. The Biosphere experiments were far from complete.

We don’t know the code for intelligence. We don’t know the code for preventing our own bodily degradation.

We don't know how to run a society that balances new knowledge acquisition with sustainability. Our best centres of knowledge acquisition are IQ shredders (a term meant to highlight the fact that the most successful cities attract the highest-IQ people and reduce their fertility compared to what it would be if they had remained in small towns or rural areas), and they are not environmentally sustainable either. Patriarchy and castes work great in static societies. We don't know their equivalent in a growing knowledge society.

There are still many known ways in which we can screw up. Let's get all these basics right, repeatedly right, and then wonder, with our new-found knowledge: according to these calculations, there is an X% chance that we should have been contacted. Why are we apparently alone in the universe?

Comment author: MugaSofer 06 September 2014 05:43:56PM 1 point [-]

If you aren't sure about something, you can't just throw up your hands, say "well, we can't be sure", and then behave as if the answer you like best is true.

We have math for calculating these things, based on the probability that the different options are true.

For example, we don't know for sure how abiogenesis works, as you correctly note. Thus, we can't be sure how rare it ought to be on Earthlike planets - it might require a truly staggering coincidence, and we would never know for anthropic reasons.

But, in fact, we can reason about this uncertainty - we can't get rid of it, but we can quantify it to a degree. We know how soon life appeared after conditions became suitable. So we can consider what kind of frequency that would imply for abiogenesis given Earthlike conditions and anthropic effects.

This doesn't give us any new information - we still don't know how abiogenesis works - but it does give us a rough idea of how likely it is to be nigh-impossible, or near-certain.
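
As a toy version of that reasoning (all numbers below are illustrative assumptions of mine, not anything from this thread): put a log-uniform prior on the abiogenesis rate, then update on life appearing early while conditioning on the anthropic requirement that it appeared in time for observers to evolve at all:

```python
# A toy Bayesian sketch with made-up numbers; only the shape matters.
import math

T_EARLY = 0.5   # Gyr until first life after habitability (assumed figure)
T_WINDOW = 4.5  # Gyr: latest life could start and still yield observers (assumed)

# Log-uniform prior over abiogenesis rates, in events per Gyr.
rates = [10 ** (k / 10) for k in range(-40, 21)]  # 1e-4 .. 1e2

def likelihood(rate):
    # P(life by T_EARLY | life by T_WINDOW): conditioning on the window
    # is the anthropic correction - we never observe the failures.
    return (1 - math.exp(-rate * T_EARLY)) / (1 - math.exp(-rate * T_WINDOW))

weights = [likelihood(r) for r in rates]  # prior is flat in log-space
total = sum(weights)
posterior = [w / total for w in weights]

# Even the lowest rates keep likelihood ~ T_EARLY / T_WINDOW (~1/9), so
# "life appeared early" is only a bounded update against rarity.
mass_low = sum(p for r, p in zip(rates, posterior) if r < 0.1)
print(f"posterior P(rate < 0.1 per Gyr) ~ {mass_low:.2f}")
```

The data can't distinguish "near-certain" from merely "fast when it happens", but it does bound how strongly "nigh-impossible" gets penalized, which is exactly the rough quantification described above.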

Similarly, we can take the evidence we do have about the likelihood of Earthlike planets forming, the number of nearby stars they might form around, the likely instrumental goals most intelligent minds will have, the tools they will probably have available to them ... and so on.

We can't be sure about any of these things - no, not even the number of stars! - but we do have some evidence. We can calculate how likely that evidence would be to show up given the different possibilities. And so, putting it all together, we can put ballpark numbers to the odds of these events - "there is a X% chance that we should have been contacted", given the evidence we have now.

And then - making sure to update on all the evidence available, and recalculate as new evidence is found - we can work out the implications.
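
To make that concrete, here is a back-of-the-envelope Monte Carlo sketch (every range below is an illustrative guess of mine, not a figure from this comment): draw each Drake-style factor from a wide log-uniform distribution and count how often the galaxy should already contain a contact-capable civilization:

```python
# A Drake-style Monte Carlo with made-up parameter ranges.
import math
import random

random.seed(0)
STARS = 1e11  # rough star count for the galaxy

def log_uniform(lo, hi):
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

samples = 100_000
loud = 0
for _ in range(samples):
    f_planet = log_uniform(1e-2, 1.0)    # Earthlike planet per star
    f_life = log_uniform(1e-12, 1.0)     # abiogenesis per such planet
    f_intel = log_uniform(1e-6, 1.0)     # intelligence, given life
    f_contact = log_uniform(1e-4, 1.0)   # expands/contacts, given intelligence
    expected = STARS * f_planet * f_life * f_intel * f_contact
    if expected >= 1:                    # at least one loud civilization expected
        loud += 1

print(f"share of draws where we 'should' have been contacted: {loud / samples:.0%}")
```

The output is only as good as the guessed ranges, but it turns "there is an X% chance that we should have been contacted" into a number that moves when the evidence does.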

Comment author: Azathoth123 06 September 2014 03:37:23AM *  2 points [-]

Well, right-wing talk radio hosts like Rush Limbaugh opposed it from day one.

Also, I remember a left-wing political cartoon from shortly after 9/11 implying that Republicans were in bed with the corporate sector for not wanting to nationalize airport security (which was then provided by private contractors).

Comment author: MugaSofer 06 September 2014 12:58:42PM *  0 points [-]

Ah, interesting! I didn't know that. Props to Limbaugh et al.

(Nationalizing airport security seems orthogonal to the TSA search issue, though.)

Comment author: ColbyDavis 30 August 2014 04:02:50PM 3 points [-]

Has anybody suggested that the Great Filter may be that AIs are negative utilitarians that destroy life on their planet? My prior on this is not very high, but it's a neat solution to the puzzle.

Comment author: MugaSofer 06 September 2014 12:56:53PM 1 point [-]

Oh, a failed Friendly AI might well do that. But it would probably realize that life would develop elsewhere, and take steps to prevent it - us included.
