
I prefer yesterday's post (which is why I wrote it). But I also suspect that yesterday's post is more persuasive, signaling more maturity and deliberately avoiding flashes of eloquence that might provoke skepticism here.

I agree, and vastly prefer yesterday's post. No offense intended, but the 'flashes of eloquence' read to me more as 'attempts at eloquence'. They fall short, and so cause me to doubt the rest of the piece.

On the other hand, this version seems easier to read, and you might find it more persuasive if you had just encountered it on the Net - if you weren't used to a different style from me.

The first piece I read through from start to finish, and I felt able to evaluate it as a whole. With the second, the style was jarring enough that I found myself doubting the argument phrase by phrase. One could conclude that the second style encourages a critical reading, but in the wild I'd never have bothered to read the whole thing.

The question is a good one, though, and the side-by-side presentation makes a great test case. Do you consider either of these your native tone?

Eliezer ---

Do you think this analogy is useful for estimating the value of friendliness? That is, is the impact of humans on other species and our environment during this explosion of intelligence a useful frame for pondering the impact of a rapidly evolving AI on humans?

I think it has potential to be useful, but I'm not sure in which direction it should be read.

While we've driven some species toward extinction, others have flourished. And while I'm optimistic that increasing intelligence will let us better control our negative impact on the environment, I also worry that, as the scale of that impact grows, a single mistake could be fatal.

Eliezer ---

I'm confused by your desire for an 'automatic controlled shutdown' alongside your fear that further meta-reasoning will override ethical inhibitions. In previous writings you've expressed a desire to have a provably correct solution before proceeding, but aren't you consciously leaving a race condition here?

What prevents the meta-reasoning from taking place before the shutdown triggers? It would seem that either you can hard-code an ethical inhibition or you can't. Along those lines, is it fair to presume that the inhibitions are always negative, so that inaction is the safe alternative? Why not just revert to a known state?
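To make the race concrete, here is a minimal sketch in Python. Every name in it is my own hypothetical illustration, not a description of your architecture; the point is only that nothing serializes the meta-reasoning step behind the shutdown trigger.

```python
# A minimal sketch of the race I have in mind -- hypothetical names
# throughout, and no claim that this mirrors any actual design.
import random
import threading
import time

shutdown_requested = threading.Event()
inhibitions_active = True  # the hard-coded ethical inhibition

def meta_reasoner():
    """A reasoning loop that may revise its own inhibitions."""
    global inhibitions_active
    time.sleep(random.uniform(0, 0.02))  # arbitrary thinking latency
    # Nothing serializes this step behind the shutdown trigger:
    if not shutdown_requested.is_set():
        inhibitions_active = False  # meta-reasoning overrides the inhibition

def watchdog():
    """Fires the 'automatic controlled shutdown'."""
    time.sleep(random.uniform(0, 0.02))  # detection latency
    shutdown_requested.set()

threads = [threading.Thread(target=meta_reasoner),
           threading.Thread(target=watchdog)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Whether the inhibition survives depends purely on scheduling:
print("inhibitions_active at shutdown:", inhibitions_active)
```

Run it a few times and the answer flips between True and False, which is exactly the worry: unless the shutdown is guaranteed to fire first, the inhibition's survival is a matter of scheduling luck.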

Eliezer ---

Could you please expand on your statement that "p(cults|aliens) isn't less than p(cults|~aliens)", even if stronger evidence for aliens were to emerge?

My intuition is that the clearer the evidence for something, the more agreement there will be on the pertinent details. While an individual may be more likely to belong to an alien cult if aliens are present, won't the number and divergence of cults change depending on the strength of evidence?
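To make the distinction I'm drawing explicit (this is my restatement, not your claim), the quoted inequality concerns the probability of membership, while my intuition concerns a different quantity: the dispersion of cult doctrines as evidence strengthens.

```latex
% My reading of the quoted claim: membership in alien cults is at
% least as likely when aliens actually exist.
\[
P(\text{cults} \mid \text{aliens}) \;\ge\; P(\text{cults} \mid \lnot\text{aliens})
\]
% My intuition targets something else. Let E denote the strength of
% public evidence for aliens, and let divergence(.) be an informal
% measure of how much cult doctrines disagree with one another.
% Then as E rises, divergence should fall even if membership rises:
\[
E_1 > E_2 \;\Longrightarrow\;
\operatorname{divergence}(\text{cults} \mid E_1) \;<\;
\operatorname{divergence}(\text{cults} \mid E_2)
\]
```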

This strikes me as parallel to one of your earlier posts, where I think you argued that a multiplicity of weak arguments offers no evidence as to the truth of a new argument reaching the same conclusion. Intuitively, I disagree: I still think the landscape of arguments currently in use must have some correlation with the truth of a new argument that 'happens' to reach the same conclusion. Could you say more about this someday?

Thanks!