Sorry for my poor phrasing. The Number Lottery's number is randomly chosen and has nothing to do with Omega's prediction of you as a two-boxer or one-boxer. It is only Omega's choice of number that depends on whether it believes you are a one-boxer or two-boxer. Does this clear it up?
Note that there is a caveat: if your strategy for deciding to one-box or two-box depends on the outcome of the Number Lottery, then Omega's choice of number and the Lottery's choice of number are no longer independent.
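To make that caveat concrete, here is a toy simulation (a sketch of my own; the decision rules, Omega's specific number choices, and the assumption of a perfect predictor are all made up for illustration, not taken from the problem statement). It only shows the statistical point: once your decision is a function of the Lottery's draw, Omega's number and the draw are no longer independent.

```python
import random
from collections import Counter

def omega_number(decision):
    # Hypothetical rule (made up): Omega writes down 1 if it predicts a
    # one-boxer and 2 if it predicts a two-boxer.
    return 1 if decision == "one-box" else 2

def run(strategy, trials=100_000):
    joint = Counter()
    for _ in range(trials):
        lottery = random.randint(1, 6)   # the Number Lottery's random draw
        decision = strategy(lottery)     # your choice, possibly using the draw
        omega = omega_number(decision)   # a perfect predictor foresees that choice
        joint[(omega, lottery)] += 1
    return joint

# Strategy A ignores the Lottery: Omega's number and the draw stay independent.
ignore_lottery = lambda lottery: "one-box"

# Strategy B conditions on the draw: Omega's number now tracks the draw's parity.
use_lottery = lambda lottery: "one-box" if lottery % 2 == 0 else "two-box"

print(run(ignore_lottery))
print(run(use_lottery))
```

With the first strategy the joint counts factor into independent marginals (Omega's number is the same no matter what the Lottery drew); with the second, Omega's number is perfectly correlated with the parity of the draw.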
I think this line of reasoning relies on the Number Lottery's choice of number being conditional on Omega's evaluation of you as a one-boxer or two-boxer. The problem description (at the time of this writing) states that the Number Lottery's number is randomly chosen, so it seems like more of a distraction than something you should try to manipulate for a better payoff.
Edit: Distraction is definitely the wrong word. As ShardPhoenix indicated, you might be able to get a better payoff by making your one-box / two-box decision depend on the outcome of the Number Lottery.
This project now has a small team, but we'd love to get some more collaborators! You wouldn't be taking this on single-handedly. Anyone who is interested should PM me.
I plan to use one of the current mockups, like tinychat, while development is underway. We are still evaluating different approaches, so we won't be able to use the product of our work to host the study hall in the very short term. We'll definitely make a public announcement when we have something that users could try.
One can surely argue the inability of hostile forces to build and deploy a nuke is significant: it seems some relationship exists between the intellect needed to make these things and the intellect needed to refuse to make or deploy them.
Could you state the relationship more explicitly? Your implication is not clear to me.
I was recently reflecting on an argument I had with someone who expressed an idea that left me very frustrated, though I don't think I was as angry as you described yourself being after your own argument. I judged them to be making a very basic mistake of rationality, and I was trying to help them avoid it. Their response implied that they didn't think they had executed a flawed mental process of the kind I had accused them of, and that even if they had executed such a process, it would not necessarily be a mistake. In t...
To your first objection, I agree that "the gradient may not be the same in the two," when you are talking about chimp-to-human growth and human-to-superintelligence growth. But Eliezer's stated reason mostly applies to the areas near human intelligence, as I said. There is no consensus on how far the "steep" area extends, so I think your doubt is justified.
Your second objection also sounds reasonable to me, but I don't know enough about evolution to confidently endorse or dispute it. To me, this sounds similar to a point that Tim Tyler ...
Eliezer's stated reason, as I understand it, is that evolution's work to increase the performance of the human brain did not suffer diminishing returns on the path from roughly chimpanzee brains to current human brains. Actually, there was probably a slightly greater-than-linear increase in human intelligence per unit of evolutionary time.
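One rough way to formalize that claim (a sketch of my own, not anything from Eliezer's writing; the symbols are introduced here purely for illustration): write brain performance as a function of evolutionary time and treat "returns" as the marginal gain per unit of optimization pressure applied,

\[
\frac{dI}{dt} = P(t)\, r\big(I(t)\big),
\]

where \(I(t)\) is brain performance, \(P(t)\) is the optimization pressure evolution exerts, and \(r\) captures the returns on that pressure. Diminishing returns would mean \(r\) shrinks as \(I\) grows, which, under constant pressure, would make the growth of \(I(t)\) level off rather than stay linear or better.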
If we also assume that evolution did not have an increasing optimization pressure which could account for the nonlinear trend (which might be an assumption worth exploring; I believe Tim Tyler would deny this), then this...
I can't imagine a universe without mathematics, yet I think mathematics is meaningful. Doesn't this mean the test is not sufficient to determine the meaningfulness of a property?
Is there some established thinking on alternate universes without mathematics? My failure to imagine such universes is hardly conclusive.
I am not entirely sure how you arrived at the conclusion that justice is a meaningful concept. I am also unclear on how you know the statement "If X is just, then do X" is correct. Could you elaborate further?
In general, I don't think it is a sufficient test for the meaningfulness of a property to say "I can imagine a universe which has/lacks this property, unlike our universe, therefore it is meaningful."
I recently read the wiki article on criticality accidents, and it seems relevant here. "A criticality accident, sometimes referred to as an excursion or a power excursion, is the unintentional assembly of a critical mass of a given fissile material, such as enriched uranium or plutonium, in an unprotected environment."
Assuming Eliezer's analysis is correct, we cannot afford even one of these in the domain of self-improving AI. Thankfully, it's harder to accidentally create a self-improving AI than it is to drop a brick in the wrong place at the wrong time.
Of course! I meant to say that Richard's line of thought was mistaken because it didn't take into account the (default) independence of Omega's choice of number and the Number Lottery's choice of number. The suggestion that there are only two possible strategies for approaching this problem was just a consequence of my poor wording.