Group rationality diary, May 24th - June 13th

5 Prismattic 31 May 2015 03:41AM

This is the public group rationality diary for May 24th - June 13th, 2015. It's a place to record, and chat about, things you have done or are actively doing, such as:

  • Established a useful new habit

  • Obtained new evidence that made you change your mind about some belief

  • Decided to behave in a different way in some set of situations

  • Optimized some part of a common routine or cached behavior

  • Consciously changed your emotions or affect with respect to something

  • Consciously pursued new valuable information about something that could make a big difference in your life

  • Learned something new about your beliefs, behavior, or life that surprised you

  • Tried doing any of the above and failed

Or anything else interesting that you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough detail that everyone can learn from each other's experiences about what tends to work out and what doesn't.

Archive of previous rationality diaries

Note to future posters: no one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. It should run for about two weeks, finish on a Saturday, and have the 'group_rationality_diary' tag.

Game theory question -- iterated truel with private information

5 Prismattic 15 September 2011 01:15AM

I was reading the discussion here of how truels with unlimited shots and symmetric complete information favor the weakest truelist. This set me to wondering about somewhat more complicated situations. Suppose you are in a world where there are daily truels for some substantial period of time, say 30 days. As with the linked problem, all hits are fatal and truelists are accurate 50%, 80%, or 100% of the time. Unlike the linked problem, however, rather than possessing complete information about the accuracy of the other truelists, you only know your own true accuracy and the hit rates the other truelists have displayed in earlier iterations, and there is no guarantee that any given truel will contain one truelist of each skill level.

Now suppose you know that you are a perfect marksman.  On which of the iterations would you intentionally miss your first shot?  I definitely lack the math strength to offer a good strategy, but I'm sure many others here could do better.
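For anyone who wants to experiment, here is a rough Monte Carlo scaffold of a single truel. It is only a sketch under assumptions I've made up rather than anything specified above: players shoot in a fixed order, everyone aims at the live opponent with the highest accuracy, and the perfect marksman can optionally throw away his first shot. It deliberately omits the reputational element (opponents knowing only your observed hit rate) that makes the iterated version interesting, but it could be extended with per-player beliefs.

```python
import random

def simulate_truel(accuracies, deliberate_miss_first=False, max_rounds=50):
    """Return the set of surviving players' indices; accuracies[i] is P(hit) for player i."""
    alive = set(range(len(accuracies)))
    threw_away_first = False
    for _ in range(max_rounds):
        for shooter in range(len(accuracies)):            # fixed shooting order (an assumption)
            if shooter not in alive or len(alive) <= 1:
                continue
            if shooter == 0 and deliberate_miss_first and not threw_away_first:
                threw_away_first = True                    # the perfect marksman wastes shot one
                continue
            targets = [p for p in alive if p != shooter]
            target = max(targets, key=lambda p: accuracies[p])   # aim at the biggest threat
            if random.random() < accuracies[shooter]:
                alive.discard(target)
        if len(alive) <= 1:
            break
    return alive

def survival_rate(deliberate_miss, trials=20_000):
    accuracies = [1.0, 0.8, 0.5]                           # player 0 is the perfect marksman
    wins = sum(0 in simulate_truel(accuracies, deliberate_miss) for _ in range(trials))
    return wins / trials

print("shoot to kill from shot one:", survival_rate(False))
print("deliberately miss shot one :", survival_rate(True))
```

Swapping in other targeting rules, turn orders, or belief-tracking across the 30 days would be the natural next step.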

 


Stanford Prison Retrospective

3 Prismattic 14 July 2011 01:23AM

This year is the 40th anniversary of the Stanford Prison Experiment.  I found this [retrospective](http://www.stanfordalumni.org/news/magazine/2011/julaug/features/spe.html) interesting.  What really caught my eye is that, to some degree, it contradicts the main lesson of the experiment -- that context more than character determines behavior.  If Dave Eshleman is recalling his role accurately and truthfully, then it seems his individual character actually did play a role in how quickly things spiraled out of control (though the willingness of the other guards to go along with him supports the original conclusion).

An inflection point for probability estimates of the AI takeoff?

11 Prismattic 29 April 2011 11:37PM

Suppose that your current estimate of the probability of an AI takeoff coming in the next 10 years is some probability x.  As technology is constantly becoming more sophisticated, presumably your probability estimate 10 years from now will be some y > x, and 10 years after that it will be some z > y.  My question is: does there come a point in the future where, assuming an AI takeoff has still not happened in spite of much more advanced technology, you begin to revise your estimate downward with each passing year?  If so, how many decades (or centuries) from now would you expect the inflection point in your estimate?
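To make the shape of the question concrete, here is a small toy model of my own devising (nothing here is meant as a serious forecast): put a prior over the year in which a takeoff happens, then compute, for each year that passes without one, the conditional probability of a takeoff within the next decade. With a unimodal prior that quantity first rises and then falls, and its peak is roughly the turning point the question asks about. The prior shape and all the numbers below are arbitrary assumptions for illustration.

```python
import numpy as np

years = np.arange(1, 501)                     # candidate takeoff years from now, truncated at 500
# An arbitrary unimodal prior over the takeoff year, peaking near year 60.
prior = np.exp(-0.5 * ((np.log(years) - np.log(60)) / 0.8) ** 2)
prior /= prior.sum()

def p_takeoff_next_decade(t):
    """P(takeoff in (t, t+10] | no takeoff by year t)."""
    remaining = prior[years > t].sum()        # prior mass not yet ruled out by year t
    window = prior[(years > t) & (years <= t + 10)].sum()
    return window / remaining if remaining > 0 else 0.0

for t in range(0, 201, 20):
    print(f"year {t:3d}: P(takeoff in next decade) = {p_takeoff_next_decade(t):.3f}")
```

Where the peak lands depends entirely on the prior you choose, which I take to be the real content of the question.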

The non-painless upload

1 Prismattic 15 February 2011 04:23AM

 

I have been trying to absorb the LessWrong near-consensus on cryonics/quantum mechanics/uploading, and I confess to being unpersuaded by it. I'm not hostile to cryonics, just indifferent, and I'm having a bit of trouble articulating why the insights on identity that I have been picking up from the quantum mechanics sequence aren't compelling to me. I offer the following thought experiment in the hope that others can present the argument more effectively once they understand the objection.

 

Suppose that Omega appears before you and says, “All life on Earth is going to be destroyed tomorrow by [insert cataclysmic event of your choice here]. I offer you the chance to push this button, which will upload your consciousness to a safe place out of reach of the cataclysmic event, preserving all of your memories, etc. up to the moment you pushed the button and optimizing you such that you will be effectively immortal. However, the uploading process is painful, and because it interferes with your normal perception of time, your original mind/body will subjectively experience the time after you pushed the button but before the process is complete as a thousand years of the most intense agony. Additionally, I can tell you that a sufficient number of other people will choose to push the button that your uploaded existence will not be lonely.”

 

Do you push the button?

 

My understanding of the LessWrong consensus on this issue is that my uploaded consciousness is me, not just a copy of me. I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.