All of bsterrett's Comments + Replies

Of course! I meant to say that Richard's line of thought was mistaken because it didn't take into account the (default) independence of Omega's choice of number and the Number Lottery's choice of number. Suggesting that there are only two possible strategies for approaching this problem was a consequence of my poor wording.

Sorry for my poor phrasing. The Number Lottery's number is randomly chosen and has nothing to do with Omega's prediction of you as a two-boxer or one-boxer. It is only Omega's choice of number that depends on whether it believes you are a one-boxer or two-boxer. Does this clear it up?

Note that there is a caveat: if your strategy for deciding to one-box or two-box depends on the outcome of the Number Lottery, then Omega's choice of number and the Lottery's choice of number are no longer independent.

I think this line of reasoning relies on the Number Lottery's choice of number being conditional on Omega's evaluation of you as a one-boxer or two-boxer. The problem description (at the time of this writing) states that the Number Lottery's number is randomly chosen, so it seems like more of a distraction than something you should try to manipulate for a better payoff.

Edit: Distraction is definitely the wrong word. As ShardPhoenix indicated, you might be able to get a better payoff by making your one-box / two-box decision depend on the outcome of the Number Lottery.
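For concreteness, here is a minimal Monte Carlo sketch of that caveat, not a statement of the actual problem. It assumes the Number Lottery draws uniformly from 2..100, that Omega puts a composite number in its box when it predicts two-boxing and a prime when it predicts one-boxing (correctly 99.9% of the time, per the reply below), and that the strategy "two-box iff the lottery number is even" is purely hypothetical.

```python
import random

def p_omega_prime_given_lottery_parity(strategy, trials=100_000):
    """Estimate P(Omega's number is prime | lottery parity) for each parity."""
    hits = {True: 0, False: 0}    # keyed by "lottery number is even"
    totals = {True: 0, False: 0}
    for _ in range(trials):
        lottery = random.randint(2, 100)       # assumed lottery range
        two_box = strategy(lottery)
        # Omega predicts the decision with 99.9% accuracy (assumed).
        predicted_two_box = two_box if random.random() < 0.999 else not two_box
        omega_picks_prime = not predicted_two_box  # prime iff it predicts one-boxing
        even = (lottery % 2 == 0)
        hits[even] += omega_picks_prime
        totals[even] += 1
    return {even: hits[even] / totals[even] for even in (True, False)}

# Lottery-independent strategy (always two-box): both conditional probabilities
# come out near 0.001, i.e. Omega's number is independent of the lottery draw.
print(p_omega_prime_given_lottery_parity(lambda n: True))

# Hypothetical lottery-dependent strategy (two-box iff the lottery number is even):
# the conditional probabilities split to roughly 0.001 vs 0.999 -- no longer independent.
print(p_omega_prime_given_lottery_parity(lambda n: n % 2 == 0))
```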

0[anonymous]
Those are not the only precommitments one can make for this type of situation.
0mwengler
What? I can't even parse that. There IS a number in the box which is the same as the one at the Lottery Bank. The number is either prime or composite. According to the hypothetical, if I two-box, there is a 99.9% correlation with Omega putting a composite number in his box, in which case my payoff is $2,001,000. There is a 0.1% correlation with Omega putting a prime number in the box, in which case my payoff is $1,001,000. If the correlation is a good estimate of probability, then my expected payoff from two-boxing is $2 million more or less. If I one-box, blah blah blah expected payoff is $1 million.
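A minimal check of the arithmetic in that reply, using only the figures it quotes; this is a sketch, not an authoritative statement of the problem.

```python
p = 0.999  # Omega's stated prediction accuracy

# Two-boxing: 99.9% chance Omega placed a composite number ($2,001,000 payoff),
# 0.1% chance it placed a prime ($1,001,000 payoff).
ev_two_box = p * 2_001_000 + (1 - p) * 1_001_000
print(f"two-box EV: ${ev_two_box:,.0f}")  # exactly $2,000,000 -- "$2 million more or less"

# One-boxing is summarized in the comment only as roughly $1 million in expectation,
# so no breakdown is assumed here.
```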

I have a copy of Probability Theory, but I've never made a solid effort to go through it. I'd love to commit to a group reading. Definitely interested.

This project now has a small team, but we'd love to get some more collaborators! You wouldn't be taking this on single-handedly. Anyone who is interested should PM me.

I plan to use one of the current makeshift setups, like tinychat, while development is underway. We are still evaluating different approaches, so we won't be able to use the product of our work to host the study hall in the very short term. We'll definitely make a public announcement when we have something that users could try.

One can surely argue the inability of hostile forces to build and deploy a nuke is significant: it seems some relationship exists between the intellect needed to make these things and the intellect needed to refuse to make or deploy them.

Could you state the relationship more explicitly? Your implication is not clear to me.

I was recently reflecting on an argument I had with someone where they expressed an idea that made me very frustrated, though I don't think I was as angry as you described yourself after your own argument. I judged them to be making a very basic mistake of rationality, and I was trying to help them avoid it. Their response implied that they didn't think they had executed the flawed mental process I accused them of, and that even if they had, it would not necessarily be a mistake. In t... (read more)

59: I will never build a sentient computer smarter than I am.

For anyone following the sequence rerun going on right now, this summary is highly recommended. It is much more manageable than the blog posts, and doesn't leave out anything important (that I noticed).

Has there been some previous discussion of reliance on custom hardware? My cursory search didn't turn anything up.

To your first objection, I agree that "the gradient may not be the same in the two," when you are talking about chimp-to-human growth and human-to-superintelligence growth. But Eliezer's stated reason mostly applies to the areas near human intelligence, as I said. There is no consensus on how far the "steep" area extends, so I think your doubt is justified.

Your second objection also sounds reasonable to me, but I don't know enough about evolution to confidently endorse or dispute it. To me, this sounds similar to a point that Tim Tyler ... (read more)

Eliezer's stated reason, as I understand it, is that evolution's work to increase the performance of the human brain did not suffer diminishing returns on the path from roughly chimpanzee brains to current human brains. Actually, there was probably a slightly greater-than-linear increase in human intelligence per unit of evolutionary time.

If we also assume that evolution did not have an increasing optimization pressure which could account for the nonlinear trend (which might be an assumption worth exploring; I believe Tim Tyler would deny this), then this... (read more)

0RolfAndreassen
Two objections to this: Firstly, you have to extrapolate from the chimp-to-human range into the superintelligence range. The gradient may not be the same in the two. Second, it seems to me that the more intelligent humans are, the more "the other humans in my tribe" becomes the dominant part of your environment; this leads to increased returns to intelligence, and consequently you do get an increasing optimisation pressure.

I found that leaving a question and coming back to it was much more helpful than trying to focus on it. There were several questions that I made no progress on for a few minutes, but I could immediately solve them upon returning to them.

To clarify: this involves selecting the hyperlink text with your mouse, but not releasing your mouse button, and then copying the text while it is still selected.

"Keeping it selected" is the default behavior of the browser which does not seem to be working.

I took the survey! Karma, please!

Never done an IQ test before. I thought it was fun! Now I want to take one of the legitimate ones.

I can't imagine a universe without mathematics, yet I think mathematics is meaningful. Doesn't this mean the test is not sufficient to determine the meaningfulness of a property?

Is there some established thinking on alternate universes without mathematics? My failure to imagine such universes is hardly conclusive.

-2Eugine_Nier
Sorry, misread what you wrote in the grandparent. I agree with you.

Like army1987 notes, it is an instruction and not a statement. Considering that, I think "if X is just, then do X" is a good imperative to live by, assuming some good definition of justice. I don't think I would describe it as "wrong" or "correct" at this point.

0Will_Sawin
OK. Exactly what you call it is unimportant. What matters is that it gives justice meaning.

I am not entirely sure how you arrived at the conclusion that justice is a meaningful concept. I am also unclear on how you know the statement "If X is just, then do X" is correct. Could you elaborate further?

In general, I don't think it is a sufficient test for the meaningfulness of a property to say "I can imagine a universe which has/lacks this property, unlike our universe, therefore it is meaningful."

1A1987dM
That's an instruction, not a statement.
-1Eugine_Nier
Um, mathematics.
0Will_Sawin
I did not intend to explain how I arrived at this conclusion. I'm just stating my answer to the question. Do you think the statement "If X is just, then do X" is wrong?

I'll do 10.

What is the error-checking process? Will we fix any mistakes in our verdicts via an LW discussion after they have been gathered?

1Stuart_Armstrong
Thanks! I'll think about the error-checking process; certainly there is some possibility of factual errors, but most of the work is in interpreting qualifiers like "ubiquitous" and "most" and mapping them to what happened in the world.

I recently read the wiki article on criticality accidents, and it seems relevant here. "A criticality accident, sometimes referred to as an excursion or a power excursion, is the unintentional assembly of a critical mass of a given fissile material, such as enriched uranium or plutonium, in an unprotected environment."

Assuming Eliezer's analysis is correct, we cannot afford even one of these in the domain of self-improving AI. Thankfully, it's harder to accidentally create a self-improving AI than it is to drop a brick in the wrong place at the wrong time.

I think this title sounds better if you are already familiar with the sequences. The importance and difficulty of changing your mind are not likely to be appreciated by people outside this community.

1handoflixue
While I suspect you are correct, I also suspect that the audience for this book would probably be significantly more aware of it as a problem. This isn't a mainstream book, it's just a more accessible version of the sequences, so the target audience is going to be less "random Joe" and more "aspiring rationalist / potential LessWrong reader". Speaking to my social circle alone, any of my friends who weren't curious about a book like that would probably also struggle with the material. For some reason, it also strikes me as signalling "I am not a peppy self-help 'you can do anything!' book", which seems like a very useful property in a title. I have no clue if that's just me though :)

What is the difference between constraining experience and constraining expectations? Is there one?