This is a special post for quick takes by satchlj. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
If you haven't already, I'd recommend reading Vinge's 1993 essay on 'The Coming Technological Singularity': https://accelerating.org/articles/comingtechsingularity

He is remarkably prescient, to the point that I wonder whether any genuinely new insights into the broad problem have emerged in the 22 years since he wrote it. Among other things, he discusses using humans as a base on which to build superintelligence as a possible alignment strategy, as well as the problems with this approach.

Here's one quote:

Eric Drexler [...] agrees that superhuman intelligences will be available in the near future — and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good's ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate — say — one million times slower than you, there is little doubt that over a period of years (your time) you could come up with "helpful advice" that would incidentally set you free. [...] 
