Roko comments on Intelligence enhancement as existential risk mitigation - Less Wrong

17 [deleted] 15 June 2009 07:35PM


Comments (198)


Comment deleted 17 June 2009 09:19:28PM *
Comment author: CronoDAS 17 June 2009 09:55:29PM *  2 points

It could also disrupt them in the wrong direction; there's no particular reason to assume that becoming "smarter" won't just make us better self-deceivers.

As Michael Shermer writes, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Comment deleted 17 June 2009 10:10:53PM
Comment author: Annoyance 18 June 2009 01:57:19PM -1 points

Cherished falsehoods are unlikely to be random. In any group that isn't selected at random from the whole of humanity, one member's errors will tend to be correlated with the others'.

There are also deep flaws shared by humanity as a whole, especially on certain issues.

Should we decide to believe in ghosts because most human beings share that belief, or should we rely on rational analysis and the accumulation of evidence (data derived directly from the phenomena in question, not other people's opinions)?

Comment author: pjeby 17 June 2009 09:34:59PM *  0 points

Evolution designed our brains with in-built self-deception mechanisms; it did not design those mechanisms to continue to operate optimally if the intelligence of the person concerned is artificially increased. It is therefore reasonable to expect that increasing intelligence will, to some extent, disrupt our in-built self-deception.

Not if the original function of (verbal) "intelligence" was to improve our ability to deceive... and I strongly suspect this to be the case. After all, it doesn't take a ton of (verbal) intelligence to hunt and gather.

Comment deleted 17 June 2009 09:48:52PM *
Comment author: pjeby 17 June 2009 10:52:25PM 0 points

If we evolved ever more complex ways of lying, then we must also have evolved ever more complex ways of detecting lies.

Good point. Of course, that mechanism is for detecting other people's lies, and there is some evidence that it's specific to ideas and/or people you already disagree with or are suspicious of... meaning that increased intelligence doesn't necessarily improve it.

One of the central themes in the book I'm working on is that brains are much better at convincing themselves they've thought things through than at actually doing the thinking; often no real thinking has taken place at all.

Looking for problems with something you already believe is a good example: nobody does it until they have a good enough reason to actually think the matter through, rather than assuming they already have, or before even noticing what it is they believe in the first place.

Comment author: Annoyance 18 June 2009 01:54:03PM 0 points

It is therefore reasonable to expect that increasing intelligence will, to some extent, disrupt our in-built self-deception.

No. Your argument is specious. Evolution 'designed' us with all sorts of things 'in mind' that no longer apply. That doesn't mean that changing any arbitrary aspect of our lives will influence any other arbitrary aspect. If the environmental factors or traits we change have no known relationship to the trait we're interested in, we have no initial reason to think that changing them will affect it.

Consider the absurdity of taking your argumentative structure seriously:

"Nature designed us to have full heads of hair. Nature also gave us a sense of sight, which it did not design to operate optimally in hairless conditions. It is therefore reasonable to expect that shaving the head will, to some extent, disrupt our visual acuity."

Comment deleted 18 June 2009 02:03:04PM *
Comment author: Annoyance 18 June 2009 02:05:57PM -1 points

This criticism is valid if we think that the trait we vary is irrelevant to the effect we are considering.

No, the criticism is valid if we have no reason to think that the traits will be causally linked. You're making another logical fallacy: confusing two statements whose logical structure renders them non-equivalent.

(thinking trait is ~relevant) != ~(thinking trait is relevant)
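The distinction can be made concrete with a minimal sketch (an illustration only; the proposition names and the modeling of belief as set membership are hypothetical assumptions, not anything from the thread): believing a trait is irrelevant is an active commitment, while not believing it is relevant also covers simple agnosticism.

```python
# Sketch: model an agent's beliefs as the set of propositions it accepts.
# An agnostic agent shows the two statements come apart.

committed = {"trait_is_irrelevant"}   # positively believes the trait is irrelevant
agnostic = set()                      # holds no opinion either way

def believes_irrelevant(beliefs):
    # (thinking trait is ~relevant): an active belief in irrelevance
    return "trait_is_irrelevant" in beliefs

def not_believes_relevant(beliefs):
    # ~(thinking trait is relevant): mere absence of a belief in relevance
    return "trait_is_relevant" not in beliefs

print(believes_irrelevant(committed), not_believes_relevant(committed))  # True True
print(believes_irrelevant(agnostic), not_believes_relevant(agnostic))    # False True
```

The agnostic agent satisfies the second condition but not the first, so the two statements are not logically equivalent.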