[LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR
I intend to print at least one high-quality physical HPMOR and release the files. Printable texts are being improved, and a set of covers (based on e.b.'s) is underway. I have, however, been unable to find any blurbs I'd be remotely happy with.
I'd like to attempt to harness the hivemind to fix that. As a lure: if your ideas contribute significantly to the final version, or you assist with other tasks aimed at making this book awesome, I'll put a proportionate number of tickets with your name on them into the proverbial hat.
I do not guarantee there will be a winner, and I reserve the right to modify this arrangement at any point. For example, it's possible this leads to a disappointingly small amount of valuable feedback, that some unforeseen problem will sink or indefinitely delay the project, or that I'll expand this and let people earn a small number of tickets by sharing, so more people become aware of this quickly.
With that over, let's get to the fun part.
A blurb is needed for each of the three books. Desired characteristics:
* Not too heavy on ingroup signaling or over-the-top rhetoric.
* Non-spoilerish
* Not taking itself awkwardly seriously.
* Amusing / funny / witty.
* Attractive to the same kinds of people the TV Tropes page attracts.
* Showcases HPMOR with fun, engaging prose.
While writing, try to put yourself in the mind of someone awesome deciding whether to read it, but let your brain generate bad ideas before trimming them back.
I expect that for each book we'll want:
* A shortish and awesome paragraph
* A short sentence tagline
* A quote or two from notable people
* Probably some other text? Get creative.
Please post blurb fragments or full blurbs here, one suggestion per top-level comment. You are encouraged to remix each other's ideas; just add a credit line if you use one in a new top-level comment. If you know which book your idea is for, please indicate with (B1), (B2), or (B3).
Other things that need doing, if you want to help in another way:
* The author's foreword from the physical copies of the first 17 chapters needs to be located or written up
* At least one links page for the end needs to be written up, possibly a second based on http://www.yudkowsky.net/other/fiction/
* Several changes need to be made to the text files, including merging in the final exam, adding appendices, and making the style of both consistent with the rest of the files. Contact me for current files and details if you want to claim this.
I wish to stay on topic and focused on creating these missing parts rather than sidetracking into a copyright debate. If you are an expert who genuinely has vital information about the copyright situation, please message me or create a separate post rather than commenting here.
[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions
From a paper by the Center for Technology and National Security Policy at National Defense University:
"Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.
National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions."
So strong AI is on the American military's radar, and at least some of those involved have a basic understanding that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.
The Useful Definition of "I"
aka The Fuzzy Pattern Theory of Identity
Background reading: Timeless Identity, The Anthropic Trilemma
Identity is not based on continuity of physical material.
Identity is not based on causal links to previous/future selves.
Identity is not usefully defined as a single point in thingspace. An "I" which only exists for an instant (i.e. zero continuity of identity) does not even remotely correspond to what we're trying to express by the word "I" in general use, and refers instead to a single snapshot. Consider the choice between putting yourself in stasis for eternity and living normally; a definition of "I" which prefers self-preservation by literally preserving a snapshot of one instant is massively unintuitive and uninformative compared to a definition which leads us to preserve "I" by allowing it to keep living, even if that includes change.
Identity is not the current isolated frame.
So if none of those are what "I"/Identity is based on, what is?
Some configurations of matter I would consider to be definitely me, and some definitely not me. Between the two extremes there are plenty of border cases wherever you try to draw a line. As an exercise: five minutes in the past ete, 30 years in the future ete, alternate branch ete brought up by different parents, ete's identical twin, ete with different genetics/body but a mindstate near-identical to current ete, sibling raised in same environment with many shared memories, random human, monkey, mouse, bacteria, rock. With sufficiently advanced technology, it would be possible to change me between those configurations one atom at a time. Without appeals to physical or causal continuity, there's no way to cleanly draw a hard binary line without violating what we mean by "I" in some important way or allowing, at some point, a change vastly below perceptible levels to flip a configuration from "me" to "not-me" all at once.
Or, put another way, identity is not binary, it is fuzzy like everything else in human conceptspace.
It's interesting to note that common language use suggests this is, in some sense, widely known. When someone has been changed by an experience, or is acting in a way that doesn't fit your model of them, it's common to say something along the lines of "he's like a different person" or "she's not acting like herself". That, along with the qualifier!person nomenclature that is becoming a bit more frequent, hints at different versions of a person having only partially the same identity.
Why do we have a sense of identity?
For something as universal as the feeling of having an identity, there's likely to be some evolutionary purpose. Luckily, it's fairly straightforward to see why it would increase fitness. The brain learns by connecting reward/punishment to the behaviours which produced them. This works well for short-term feedback, but struggles with long-term goals: the reward for a right decision (or punishment for a wrong one) arrives long after the choice, so the connection is only weakly reinforced. Creatures which can easily identify future/past continuations using an "I" concept of their own presence have a ready-built way to handle delayed gratification. Evolution only needs to connect "doing this will make future-'I' get a reward" to some reward in order to encourage the creature to think longer term, rather than specifically connecting each possible long-term benefit to each behaviour. Kaj_Sotala's attempt to dissolve subjective expectation and personal identity contains another approach to understanding why we have a sense of identity, as well as many other interesting thoughts.
So what is it?
If you took yourself from right now and changed your entire body into a hippopotamus, or uploaded yourself into a computer, but still retained full memories/consciousness/responses to situations, you would likely consider yourself a more central example of the fuzzy "I" concept than if you made the physically relatively small change of removing your personality and memories. General physical structure is part of "I", but a relatively minor one rather than a core feature.
Your "I"/identity is a concept (in the conceptspace/thingspace sense), centred on current you, with configurations of matter being considered more central to the "I" cluster the more similar they are to current you in the ways which current you values.
To give some concrete examples: Most people consider their memories to be very important to them, so any configuration without a similar set of memories is going to be distant. Many people consider some political/social/family group/belief system to be extremely important to them, so an alternate version of themselves in a different group would be considered moderately distant. An Olympic athlete or model may put an unusually large amount of importance on their body, so changes to it would move a configuration away from their idea of self quicker than for most.
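As a toy sketch of this idea (every feature name, weight, and value below is invented purely for illustration, not anything claimed in the post), configurations can be scored by similarity to current-you, weighted by how much current-you values each dimension:

```python
# Toy model of the fuzzy "I" concept: a configuration is more central
# to the "I" cluster the more similar it is to the current self in the
# dimensions the current self values. All numbers here are made up.

def identity_closeness(current, other, weights):
    """Return a score in [0, 1]: 1.0 means identical in every weighted feature.

    `current` and `other` map feature names to values in [0, 1];
    `weights` maps the same names to how much current-you values them.
    """
    total = sum(weights.values())
    overlap = sum(
        w * (1.0 - abs(current[f] - other[f]))
        for f, w in weights.items()
    )
    return overlap / total

# Hypothetical weighting: memories matter far more than body.
weights = {"memories": 0.6, "beliefs": 0.3, "body": 0.1}
me = {"memories": 1.0, "beliefs": 1.0, "body": 1.0}
upload = {"memories": 1.0, "beliefs": 1.0, "body": 0.0}    # mind intact, body replaced
amnesiac = {"memories": 0.0, "beliefs": 1.0, "body": 1.0}  # body intact, memories gone

print(round(identity_closeness(me, upload, weights), 2))
print(round(identity_closeness(me, amnesiac, weights), 2))
```

Under these invented weights the upload scores far closer to "me" than the amnesiac does, matching the hippopotamus/upload intuition above; an athlete's weighting would simply shift mass onto the "body" feature.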
This fits very nicely with intuition about changing core beliefs or things you care about (e.g. athlete becomes disabled, large change in personal circumstances) making you in at least some sense a different person, and as far as I can tell does not fall apart/prove useless in similar ways to alternative definitions.
What consequences does this theory have for common issues with identity?
- Moment to moment identity is almost entirely, but not perfectly retained.
- You will wake up as yourself after a night's sleep in a meaningful sense, but not quite as central an example of current-you's "I" as you would be after a few seconds.
- The teleporter to Mars does not kill you in the most important sense (unless somehow your location on Earth is a particularly core part of your identity).
- Any high-fidelity clone can be usefully considered to be you, however it originated, until it diverges significantly.
- Cryonics or plastination do present a chance for bringing you back (conditional on information being preserved to reasonable fidelity), especially if you consider your mind rather than your body as core to your identity (so would not consider being an upload a huge change).
- Suggest more in comments!
Why does this matter?
Flawed assumptions and confusion about identity seem to underlie several notable difficulties in decision theory, anthropic issues, and less directly problems understanding what morality is, as I hope to explore in future posts.
Thanks to Skeptityke for reading through this and giving useful suggestions, as well as writing this which meant there was a lot less background I needed to explain.