ChristianKl comments on Savulescu: "Genetically enhance humanity or face extinction" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (193)
The notion that higher IQ means more money will be allocated to solving FAI is idealistic. Reality is complex, and the reasons money gets allocated are often political in nature and depend on whether institutions function properly. Even if individuals have high IQs, that doesn't mean they won't fall into the groupthink of their institution.
Real-world feedback, however, helps people to see problems regardless of their intelligence. Real-world feedback provides truth, whereas high IQ can just mean that you are better at stacking ideas on top of each other.
Some sub-ideas of an FAI theory might be put to the test in artificial intelligence that isn't smart enough to improve itself.
"Editing the mental states of ems" sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Moreover, it's not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight into the structure of intelligence compared to current brain science.
It's a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes, this is a much better situation, but it's still far more cumbersome than looking at the source code; and that in turn is vastly inferior to having a theory of how to write similar programs.
When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means, or also through technological "add-ons"? (By that I mean devices plugging you into Wikipedia, or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)
To whoever downvoted Roko's comment -- check out the distinction between these ideas:
I'd volunteer, and I'm sure I'm not the only one here.
You're not, though I'm not sure I'd be an especially useful data source.
I've met at least one person who would like a synesthesia on-off switch for their brain -- that alone would make your data useful.
Looks to me like that'd be one of the more complicated things to pull off, unfortunately. Too bad; I know a few people who'd like that, too.
Please expand on what "the end" means in this case. What do you expect we would gain from perfecting whole-brain emulation (I assume of humans)? How does that get us out of our current mess, exactly?
I worry these modified ems won't share our values to a sufficient extent.
Possibly. But I'd rather use selected human geniuses with the right ideas, copied and sped up, and wait for them to crack FAI before going further (even if FAI doesn't yield a powerful intelligence explosion -- in that case FAI is simply the formalization and preservation of preference, rather than the power to enact that preference).