Logos01 comments on A Kick in the Rationals: What hurts you in your LessWrong Parts? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Mass Effect kicks me in the LW.
Quantum entanglement communication. AI (including superAI) all over the place, life still normal. Bad ethics. Humans in funny suits.
Your strength as a rationalist is your ability to scream 'bullshit' and throw the controller at the screen.
Yep. Most mass-market space operas are guilty of this. Despite having the knowledge and resources to fly to other planets, humans in them still have to shoot kinetic bullets at animals.
However, stories, in order to be entertaining (at least for the mainstream public), have to depict a protagonist (or a group thereof) who changes because of conflict, and the conflict has to be winnable, resolvable -- it must "allow" the protagonist to use his wit, perseverance, luck, and whatever else to win.
Now imagine a "more realistic" setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict. If the singularity was unfriendly, humans are either already disassembled for their atoms, or soon will be -- and they have no chance to win against the AI because the capability gap is too big. Neither branch has much story potential.
This applies to game design as well -- enemies in a game built around a conflict have to be "repeatedly winnable", otherwise the game would become an exercise in frustration.
(I think there is some story / game potential in the early FOOM phase, where humans still have a chance to shut the AI down, but it is limited. A realistic AI has no need to produce hordes of humanoid or monstrous robots vulnerable to bullets to serve as enemies, and it has no need to monologue when the hero is about to flip the switch. Plus the entire conflict is likely to be very brief.)
There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.
An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would allow a lot of what passes for bad science fiction in most space operas, actually. AI cores on ships that can understand human language but don't qualify as fully sentient (because the real AGI is gutting their intellects); androids that are fully humanoid and perhaps even sentient but haven't any clue why that is so (because you could rebuild human-like cognitive faculties by reverse-engineering a black box, but if you actually knew what was going on in the parts, you would have that information purged...) -- so on and so on.
And yet this would qualify as Friendly; human society and ingenuity would continue.