Comments

Robin, I see a fair amount of evidence that winner-take-all types of competition are becoming more common as information becomes more important than physical resources.
Whether a movie star cooperates with or helps subjugate the people in central Africa seems to be largely an accidental byproduct of whatever superstitions happen to be popular among movie stars.
Why doesn't this cause you to share more of Eliezer's concerns? What probability would you give to humans being part of the winning coalition? You might have a good argument for putting it around 60 to 80 percent, but even a 20 percent chance of the universe being tiled by smiley faces seems important enough to worry about.

Eliezer,
This is a good explanation of how easy it would be to overlook risks.
But it doesn't look like an attempt to evaluate the best possible version of an Oracle AI.
How hard have you tried to get a clear and complete description of how Nick Bostrom imagines an Oracle AI would be designed? Enough to produce a serious Disagreement Case Study?
Would the Oracle AI he imagines use English for its questions and answers, or would it use a language as precise as computer software?
Would he restrict the kinds of questions that can be posed to the Oracle AI?
I can imagine a spectrum of possibilities that range from an ordinary software verification tool to the version of Oracle AI that you've been talking about here.
I see lots of trade-offs here that increase some risks at the expense of others, and no obvious way of comparing those risks.

Eliezer, your non-response causes me to conclude that you aren't thinking clearly. John Maynard Smith's comments on Gould are adequate. Listen to Kaj and stick to areas where you know enough to be useful.

"progress in quality of vertebrate brain software (not complexity or size per se), and this shift in adaptive emphasis must necessarily have come at the expense of lost complexity elsewhere. Look at humans: we've got no muscles, no fangs, practically no sense of smell, and we've lost the biochemistry for producing many of the micronutrients we need." This looks suspicious to me. What measure of complexity of the brain's organization wouldn't show a big increase between invertebrates and humans? For the lost complexity you claim, only the loss of smell looks like it might come close to offsetting the increase in brain complexity; I doubt either of us has a good enough way of comparing the changes in complexity to tell much by looking at these features. If higher quality brains have been becoming more complex due to a better ability to use information available in the environment to create a more complex organization, there's no obvious reason to expect any major barriers to an increase in overall complexity.

Many popular reports of Eddington's test mislead people into thinking it provided significant evidence. See these two Wikipedia pages for reports that the raw evidence was nearly worthless. Einstein may have known how little evidence that test would provide.

When you hear someone say "X is not evidence ...", remember that the Bayesian concept of evidence is not the only concept attached to that word. I know my understanding of the word "evidence" changed as I adopted the Bayesian worldview. My recollection of my prior use of the word is a bit hazy, but it was probably influenced a good deal by beliefs about what a court would admit as evidence. (This is a comment on the title of the post, not on Earl Warren's rationalization.)
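
For what it's worth, the Bayesian sense intended here can be pinned down by a standard formulation (my gloss, not a definition taken from the post):

```latex
% E counts as Bayesian evidence for H exactly when observing E raises the probability of H,
% which (for 0 < P(H) < 1) is equivalent to E being more likely under H than under its negation:
\[
P(H \mid E) > P(H)
\quad\Longleftrightarrow\quad
P(E \mid H) > P(E \mid \neg H)
\]
```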

Michael, I don't understand what opportunities you're referring to that could qualify as arbitrage. Also, reputation isn't necessarily needed: there are many investors who would use their own money to exploit the relevant opportunities, without needing to convince clients of anything, if there were good reason to think those opportunities could be identified. One of the reasons I don't try to exploit opportunities that I can imagine involving an apocalypse in the 2020s is that I think it's unlikely that markets will incorporate any new information in the next few years that would make those opportunities less profitable if I wait to exploit them.

The Treasury bond market appears to be as close to such a market as we can expect to get. Bonds maturing in 2027 yield about 0.20% more than those maturing in 2017, and bonds maturing in 2037 yield less than those maturing in 2027. That's a clear prediction that no apocalypse is expected. Markets for more than a few years into the future normally say that the best forecast is that conditions will stay the same and/or that existing trends will continue.
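
To make the yield-curve point concrete, here is a minimal sketch of the arithmetic, assuming illustrative yields (y_2017 and y_2027 below are placeholder figures, not the actual quotes): a roughly 0.20% spread between the 2017 and 2027 maturities implies a modest forward rate, whereas pricing in a sizable chance that the longer bonds never pay off would require a visibly steeper curve.

```python
# Illustrative only: the yields below are assumptions standing in for the quotes
# described above, not actual market data.

def implied_forward_rate(y_short, y_long, t_short, t_long):
    """Annualized rate for the period between t_short and t_long implied by two spot yields."""
    growth = (1 + y_long) ** t_long / (1 + y_short) ** t_short
    return growth ** (1 / (t_long - t_short)) - 1

y_2017 = 0.046           # hypothetical yield on bonds maturing in 2017 (ten years out)
y_2027 = y_2017 + 0.002  # about 0.20% higher for the 2027 maturity, as described above

forward = implied_forward_rate(y_2017, y_2027, 10, 20)
print(f"Implied 2017-2027 forward rate: {forward:.2%}")  # roughly 5.0%

# If buyers assigned probability p to the 2027 bonds never paying off, they would demand
# roughly (1 + f_required)**10 = (1 + forward)**10 / (1 - p) over that decade -- a much
# steeper curve for any sizable p, which is not what the quoted spreads show.
p = 0.20
f_required = ((1 + forward) ** 10 / (1 - p)) ** (1 / 10) - 1
print(f"Forward rate needed to offset a {p:.0%} chance of wipeout: {f_required:.2%}")  # roughly 7.4%
```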

Rejecting punctuated equilibrium theory on the grounds that Gould was a scientifically dishonest crackpot seems to require both the fundamental attribution error and an ad hominem argument.

It appears counterproductive to use the word "mutants" to describe how people think of enemies. Most people can easily deny that they've done that, and therefore conclude they don't need to learn from your advice. I think if you were really trying to understand those who accept misleading stereotypes of suicide bombers, you'd see that their stereotype is more like "people who are gullible enough to be brainwashed by the Koran". People using such stereotypes should be encouraged to think about how many people believe themselves to be better than average at overcoming brainwashing.

And for those who think suicide bombers are unusual deviants, I suggest reading Robert Pape's book Dying to Win.

Eliezer, if you anticipate a default more than 90 days in advance, it doesn't matter that other investors do also. You hold the Treasury bills to maturity and they are paid off before the default.
