Comments

Of course! I meant to say that Richard's line of thought was mistaken because it didn't take into account the (default) independence of Omega's choice of number and the Number Lottery's choice of number. My poor wording made it sound as though there are only two possible strategies for approaching this problem.

Sorry for my poor phrasing. The Number Lottery's number is randomly chosen and has nothing to do with Omega's prediction of you as a two-boxer or one-boxer. It is only Omega's choice of number that depends on whether it believes you are a one-boxer or two-boxer. Does this clear it up?

Note that there is a caveat: if your strategy for deciding to one-box or two-box depends on the outcome of the Number Lottery, then Omega's choice of number and the Lottery's choice of number are no longer independent.
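
To make that caveat concrete, here is a rough sketch in Python. The number ranges, the "perfect predictor" assumption, and Omega's specific numbers are placeholders I made up, not the problem's actual values; the only point is that once your one-box / two-box decision looks at the Lottery's number, Omega's prediction (and therefore its choice of number) stops being independent of the Lottery's outcome.

```python
import random

def simulate(strategy, trials=200_000):
    """strategy(lottery) -> True to one-box, False to two-box.

    Returns the fraction of one-box predictions in each Lottery bucket,
    assuming (hypothetically) that Omega predicts your choice perfectly.
    """
    counts = {"low": [0, 0], "high": [0, 0]}   # [one-box predictions, total trials]
    for _ in range(trials):
        lottery = random.randint(1, 100)       # Lottery's number: random, unconditional
        one_box = strategy(lottery)            # Omega's prediction of your choice
        bucket = "high" if lottery > 50 else "low"
        counts[bucket][0] += one_box
        counts[bucket][1] += 1
    return {k: round(v[0] / v[1], 3) for k, v in counts.items()}

# Strategy that ignores the Lottery: the prediction looks identical in both
# buckets, so Omega's number carries no information about the Lottery's number.
print(simulate(lambda lottery: True))          # ~{'low': 1.0, 'high': 1.0}

# Strategy that conditions on the Lottery: the prediction now tracks the
# Lottery's outcome, so independence fails.
print(simulate(lambda lottery: lottery > 50))  # ~{'low': 0.0, 'high': 1.0}
```

Since Omega's number is (by stipulation) a function of its prediction, anything that correlates the prediction with the Lottery's number also correlates the two numbers themselves.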

I think this line of reasoning relies on the Number Lottery's choice of number being conditional on Omega's evaluation of you as a one-boxer or two-boxer. The problem description (at the time of this writing) states that the Number Lottery's number is randomly chosen, so it seems like more of a distraction than something you should try to manipulate for a better payoff.

Edit: Distraction is definitely the wrong word. As ShardPhoenix indicated, you might be able to get a better payoff by making your one-box / two-box decision depend on the outcome of the Number Lottery.

I have a copy of Probability Theory, but I've never made a solid effort to go through it. I'd love to commit to a group reading. Definitely interested.

This project now has a small team, but we'd love to get some more collaborators! You wouldn't be taking this on single-handedly. Anyone who is interested should PM me.

I plan to use one of the current mockups, like tinychat, while development is underway. We are still evaluating different approaches, so we won't be able to use the product of our work to host the study hall in the very short term. We'll definitely make a public announcement once we have something that users can try.

> One can surely argue the inability of hostile forces to build and deploy a nuke is significant: seems some relationship exists between the intellect needed to make these things and the intellect needed to refuse to make or deploy these.

Could you state the relationship more explicitly? Your implication is not clear to me.

I was recently reflecting on an argument I had with someone who expressed an idea to me that made me very frustrated, though I don't think I was as angry as you described yourself after your own argument. I judged them to be making a very basic mistake of rationality, and I was trying to help them avoid it. Their response implied that they didn't think they had executed the flawed mental process I accused them of, and that even if they had, it would not necessarily be a mistake. In the moment, I took this response to be a complete rejection of rationality (or something like that), and I became slightly angry and very frustrated.

I realized afterwards that a big part of what upset me was that I was trying to do something that I felt would be helpful to this person and everyone around them and possibly the world at large, yet they were rejecting it for no reason that I could identify in the moment. (I know that my pushiness about rationality can make the world at large worse instead of better, but this was not on my mind in the moment.) I was thinking of myself as being charitable and nice, and I was thinking of them as inexplicably not receptive. On top of this, I had failed to liaise even decently on behalf of rationalists, and I had possibly turned this person off to the study of rationality. I think these things upset me more than I ever could have realized while the argument was still going on. Perhaps you felt some of this as well? I don't expect these considerations to account for all of the emotions you felt, but I would be surprised if they were totally uninvolved.

59: I will never build a sentient computer smarter than I am.

For anyone following the sequence rerun going on right now, this summary is highly recommended. It is much more manageable than the blog posts, and doesn't leave out anything important (that I noticed).

Has there been some previous discussion of reliance on custom hardware? My cursory search didn't turn anything up.