Comments

>John had previously observed me making contrarian claims where I’d turned out to be badly wrong, like endorsing Gary Taubes’ theories about the causes of the obesity epidemic.

Um…what? This might not be the *only* cause, but surely the dietary shift toward sugar and away from fat has been a *major* one. What am I missing here?

I really dislike the "stepping out of character" bit. It disrupts the flow and ruins the story. Instead, just say, "Eliezer Yudkowsky tells the story that…" and leave it at that.

I broadly agree with most of your points (so much so that I only read the summaries of most of them), but I have issues with your responses to two objections, both of which I hold:

> If lots of different companies and governments have access to AI, won't this create a "balance of power" so that nobody is able to bring down civilization?
>
> • This is a reasonable objection to many horror stories about AI and other possible advances in military technology, but if AIs collectively have different goals from humans and are willing to coordinate with each other against us, I think we're in trouble, and this "balance of power" idea doesn't seem to help.

I don't understand why it's plausible to think that AIs might collectively have different goals than humans. Where would they get such goals? I mean, if somebody were stupid enough to implement some sort of evolutionary function such that "natural" selection would result in some sort of survival urge, that could very easily pit that AI, or that family of AIs, against humanity, but I see no reason to think that even that would apply to AIs in general; and if they evolved independently, presumably they'd be at odds with one another.

> Won't we see warning signs of AI takeover and be able to nip it in the bud? I would guess we would see some warning signs, but does that mean we could nip it in the bud? Think about human civil wars and revolutions: there are some warning signs, but also, people go from "not fighting" to "fighting" pretty quickly as they see an opportunity to coordinate with each other and be successful.

I feel that this is a weak response. Why wouldn't we be able to? I mean, unless you're saying that alignment is impossible, or that this could all happen before anyone figures out alignment (which does seem plausible), I don't see why we couldn't set "good" AI against "bad" AI. The "fighting" example doesn't hold up because it's not the war itself that one side or the other is deeply interested in avoiding; it's losing, especially losing without a fight. That does not seem to be the sort of thing that humans easily allow to happen; the warning signs don't prompt us to act to avoid the war, but to defend against attack, or to attack preemptively. Which is what we want here.

You should just flip it around and call it evaporative *heating.* Human groups behave exactly opposite to hot atoms: it is the *cooler* members who find it easier to escape. Then those who are left get hotter and hotter until they explode.

Technically, the fact that her ultimate fictional hero was John Galt is a spoiler too.

I don't care. Lots of people have published things that they wish they hadn't. That doesn't give them the right to demand that every book or newspaper or magazine issue that carried those undesirable words be destroyed.

I'm not railing against Scott here; he does have the right to remove things from his LiveJournal. I'm railing against the nature of the Internet, which makes "de-publishing" not only possible, but easy.

1. The link for "Epistemic Learned Helplessness" goes to another article entirely.

2. "Epistemic Learned Helplessness" (and all other entries) have disappeared off of Scott Alexander's LiveJournal.

3. I found a copy on the Wayback Machine.

4. This is a travesty. Why have all these posts disappeared? Do they exist elsewhere?

5. *incoherent mumbling about the ephemeral nature of the Internet, and what a gigantic problem this is*

The Patri Friedman links are dead, and blocked from archive.org. Anyone have access to another archive, so I can see what he's talking about? There has got to be a better way to link. Has no one come up with a distributed archive of linked material yet?

Yes, this. It simply shouldn't ever be necessary to loudly defy a single result. An unreplicated result should not be seen as a result at all, but merely as a step in the experimental process. Sadly, that's not how most people treat results.
