sixes_and_sevens comments on A Kick in the Rationals: What hurts you in your LessWrong Parts? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Mass Effect kicks me in the LW.
Quantum entanglement communication. AI (including superAI) all over the place, life still normal. Bad ethics. Humans in funny suits.
Your strength as a rationalist is your ability to scream 'bullshit' and throw the controller at the screen.
Yep. Most mass-market space operas are guilty of this. Despite having the knowledge and resources to fly to other planets, humans in them still shoot kinetic bullets at animals.
However, stories, in order to be entertaining (at least to a mainstream audience), have to depict a protagonist (or a group of protagonists) who changes through conflict, and the conflict has to be winnable and resolvable -- it must "allow" the protagonist to use wit, perseverance, luck, and whatever else to win.
Now imagine a "more realistic" setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict. If the singularity was unfriendly, humans are either already disassembled for their atoms, or soon will be -- and they have no chance of winning against the AI because the capability gap is too big. Neither branch has much story potential.
This applies to game design as well -- enemies in a game built around a conflict have to be "repeatedly winnable"; otherwise the game becomes an exercise in frustration.
(I think there is some story / game potential in the early FOOM phase where humans still have a chance to shut it down, but it is limited. A realistic AI has no need to produce hordes of humanoid or monstrous robots vulnerable to bullets to serve as enemies, and it has no need to monologue when the hero is about to flip the switch. Plus the entire conflict is likely to be very brief.)
Data from Star Trek doesn't quite give me the lurching despair I was thinking of when I wrote the original post, but he does make me do a mental double-take whenever a physical embodiment of human understanding of cognition sits there wondering about esoteric aspects of human behaviour that were mysterious to sci-fi screenwriters in the early 1990s.
To be fair, he didn't actually have access to Soong's design notes.
Data's awareness of his own construction varies as befits the plot. My point was that TNG often asked a lot of questions about ethics and cognition and personhood and identity. Data himself talks about the mysterious questions of human experience all the bloody time.
In a world where Data exists, significant headway has been made on those questions already.
This is a special case of a general property of the Star Trek universe: it exhibits a very low permeability to new information. Breakthroughs and discoveries occur all over the place that have only local effects.
I've generally assumed that there's some as-yet-unrevealed Q-like entity that intervenes regularly to avoid too many changes in the social fabric in a given period of time.
The Federation government being deeply corrupt would also explain a lot.