The Zombie Preacher of Somerset
Related to: Zombies? Zombies!, Zombie Responses, Zombies: The Movie, The Apologist and the Revolutionary
All disabling accidents are tragic, but some are especially bitter. The high school sports star paralyzed in a car crash. The beautiful actress horribly disfigured in a fire. The pious preacher who loses his soul during a highway robbery.
As far as I know, this last one only happened once, but once was enough. Simon Browne was an early-eighteenth-century pastor of a large Dissenting church. The community loved him for his deep faith and his remarkable intelligence, and his career seemed assured.
One fateful night in 1723, he was travelling from his birthplace in Somerset to his congregation in London when a highway robber accosted the coach carrying him and his friend. With quick reflexes and the element of surprise, Browne and his friend were able to disarm the startled highway robber and throw him to the ground. Browne tried to pin him down while the friend went for help, but in the heat of the moment he used excessive force and choked the man to death. This horrified the poor preacher, who was normally the sort never to hurt a fly.
Whether it was the shock, the guilt, or some unnoticed injury taken in the fight, something strange began to happen to Simon Browne. In his own words, he gradually became:
...perfectly empty of all thought, reflection, conscience, and consideration, entirely destitute of the knowledge of God and Christ, unable to look backward or forward, or inward or outward, having no conviction of sin or duty, no capacity of reviewing his conduct, and, in a word, without any principles of religion or even of reason, and without the common sentiments or affections of human nature, insensible even to the good things of life, incapable of tasting any present enjoyments, or expecting future ones...all body, without so much as the remembrance of the ruins of that mind I was once a tenant in...and the thinking being that was in me is, by a consumption continual, now wholly perished and come to nothing.
Simon Browne had become a p-zombie.
We need new humans, please help
This topic is in vogue, so here's my pitch.
My fellow humans, I have some bad news and some good news. The bad news is that you are likely to eventually enter an enfeebled state, during which you will not be able to independently provide for yourself. Even worse, you will at some point cease to function altogether, after which you will no longer be able to contribute to the things you care about. The good news is that both of those problems can be ameliorated by the same scheme – the creation of new humans. The new humans can provide us with the assistance we need as our own abilities diminish. And when we cease to function, the new humans can carry on with the projects we value.
Now, the thing is, creating fully functioning new humans is a huge project, consuming many man-years of work. A person engaged in preparing and outfitting a new human will need to sacrifice a lot of time that could otherwise be devoted to personal leisure and other projects. We currently have a volunteer system for replenishing the population, and in many ways this works well: not everyone is well placed to create humans, while some people are in a good position to create many. But this system is not perfect, and it can be exploited. There are some freeloaders who do not create humans even though they are in a suitable position to do so. Those same people almost always value receiving care in old age and value humanity having a future. But they are relying on the rest of us to provide enough new humans for this to happen, while they devote all their time to other projects and zero time to diapers with poop in them.
Sometimes the non-child-creators justify their decision by suggesting that the projects they are working on are especially socially valuable and thus they can spend time on them in preference to child-creation without violating their duty to society. While it is *possible* that this argument goes through in some cases, it seems suspiciously self-serving. What is especially worth taking into account is that if the humans in question really are so highly valuable, they would statistically have highly valuable offspring. Thus, it seems doubtful in the general case that high-value people refraining from procreating is a net gain for society.
[Poorly conceived section on my personal experiences removed.]
Tulpa References/Discussion
There have been a number of discussions here on LessWrong about "tulpas", but it's been scattered about with no central thread for the discussion. So I thought I would put this up here, along with a centralized list of reliable information sources, just so we all stay on the same page.
Tulpas are deliberately created "imaginary friends" which in many ways resemble separate, autonomous minds. Often, the creation of a tulpa is coupled with deliberately induced visual, auditory, and/or tactile hallucinations of the being.
Previous discussions here on LessWrong: 1 2 3
Questions that have been raised:
1. How do tulpas work?
2. Are tulpas safe, from a mental health perspective?
3. Are tulpas conscious? (may be a hard question)
4. More generally, is making a tulpa a good idea? What are they useful for?
Pertinent Links and Publications
(I will try to keep this updated if/when further sources are found)
- In this article [1], the psychological anthropologist Tanya M. Luhrmann connects tulpas to the "voice of God" experienced by devout evangelicals - a phenomenon more thoroughly discussed in her book When God Talks Back: Understanding the American Evangelical Relationship with God. Luhrmann has also succeeded [2] in inducing tulpa-like visions of Leland Stanford Jr. in experimental subjects.
- This paper [3] investigates the phenomenon of authors who experience their characters as "real", which may be tulpas by yet another name.
- There is an active subreddit of people who have or are developing tulpas, with an FAQ, links to creation guides, etc.
- tulpa.info is a valuable resource, particularly the forum. There appears to be a whole "research" section for amateur experiments and surveys.
- This particular experiment suggests that the idea of using tulpas to solve problems faster is a no-go.
- Also, one person helpfully hooked themselves up to an EEG and then performed various mental activities related to their tulpa.
- Another possibly related phenomenon is the way that actors immerse themselves in their characters. See especially the section on "Masks" in Keith Johnstone's book Impro: Improvisation and the Theatre (related quotations and video) [4].
- This blogger has some interesting ideas about the neurological basis of tulpas, based on Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind, a book whose scientific validity is not clear to me.
- It is not hard to find new age mystical books about the use of "thoughtforms", or the art of "channeling" "spirits", often clearly talking about the same phenomenon. These books are likely to be low in useful information for our purposes, however. Therefore I'm not going to list the ones I've found here, as they would clutter up the list significantly.
- (Updated 2/9/2015) The abstract of a paper by our very own Kaj Sotala hypothesizing about the mechanisms behind tulpa creation [5].
(Bear in mind while perusing these resources that if you have serious qualms about creating a tulpa, it might not be a good idea to read creation guides too carefully; making a tulpa is easy to do and, at least for me, was hard to resist. Proceed at your own risk.)
Footnotes
1. "Conjuring Up Our Own Gods", a 14 October 2013 New York Times Op-Ed
2. "Hearing the Voice of God" by Jill Wolfson in the July/August 2013 Stanford Alumni Magazine
3. "The Illusion of Independent Agency: Do Adult Fiction Writers Experience Their Characters as Having Minds of Their Own?"; Taylor, Hodges & Kohànyi in Imagination, Cognition and Personality; 2002/2003; 22, 4
4. Thanks to pure_awesome
5. "Sentient companions predicted and modeled into existence: explaining the tulpa phenomenon" by Kaj Sotala
Why didn't people (apparently?) understand the metaethics sequence?
There seems to be a widespread impression that the metaethics sequence was not very successful as an explanation of Eliezer Yudkowsky's views. It even says so on the wiki. And frankly, I'm puzzled by this... hence the "apparently" in this post's title. When I read the metaethics sequence, it seemed to make perfect sense to me. I can think of a couple things that may have made me different from the average OB/LW reader in this regard:
- I read Three Worlds Collide before doing my systematic read-through of the sequences.
- I have a background in academic philosophy, so I independently had a thought similar to Richard Chappell's linking of Eliezer's metaethics to rigid designators.
Only You Can Prevent Your Mind From Getting Killed By Politics
Follow-up to: "Politics is the mind-killer" is the mind-killer, Trusting Expert Consensus
Gratuitous political digs are to be avoided. Indeed, I edited my post on voting to keep it from sounding any more partisan than necessary. But the fact that writers shouldn't gratuitously mind-kill their readers doesn't mean that, when they do, the readers' reaction is rational. The rules for readers are different from the rules for writers. And it especially doesn't mean that when a writer talks about a "political" topic for a reason, readers can use "politics!" as an excuse for attacking a statement of fact that makes them uncomfortable.
Meditation Trains Metacognition
Summary: Some forms of meditation may train key skills of metacognition, serving as powerful tools for applied rationality. I expect aspiring rationalists to advance more quickly with a regular practice of mindfulness meditation.
Criticisms of the Metaethics
I'll admit that I'm using the LessWrong board to try to figure out flaws in my own philosophical ideas. I should also make a disclaimer that I do not dispute the usefulness of Eliezer's ideas for the purposes of building a Friendly AI.
My criticisms serve another purpose: to argue that, contrary to what I am led to believe most of this site believes, Eliezer's metaethics does not work for solving ethical dilemmas except as a set of arbitrary rules, and is in no way the stand-out best choice compared to any other self-consistent deontological or consequentialist system.
I'll also admit, for those looking in, that I have something of a bias: I find it an interesting intellectual challenge to look through philosophies and find weak points in them, so I may have been over-eager to find a flaw that doesn't exist. I have been attempting to find an appropriate flaw for some time, as some of my posts may have foreshadowed.
Finally, I will note that I am attempting to confine my criticisms to Eliezer's ethics, despite its connections to Eliezer's epistemology.
---------------------------------------
1: My Basic Argument
Typically, people ask two things of an ethics: a reason to be ethical in the first place, and a way to resolve ethical dilemmas. Eliezer gets around the former by, effectively, appealing to the fact that people want to be moral even if there is no universally compelling argument.
The problem with Eliezer's metaethics is based on what I call the A-case, after the character I invented for it when I first thought up this idea. A has two options: Option 1 is the best choice from a consequentialist perspective, and A is smart enough to figure that out. However, following Option 1 would make A feel very guilty for some reason (which A cannot overcome merely by thinking about it), whereas Option 2 would feel morally right on an emotive level.
This, of course, implies that A is not greatly influenced by consequentialism - but that's quite plausible. Perhaps you have to be irrational to be an intelligent non-consequentialist, but an irrational non-consequentialist smart enough to perform a utility calculation as a theoretical exercise is plausible.
How can we say that the right thing for A to do is Option 1, in such a way as to be both rational and in any way convincing to A? From the premises, it is likely that any possible argument will be rejected by A in such a manner that you can't claim A is being irrational.
This can also be used against any particular deontological code (in fact more effectively, due to greater plausibility) by substituting it for consequentialism and claiming that, according to said code, Option 1 is A's moral duty. You can define "should" all you like, but A is using a different definition of "should" (not part of the opening scenario, but a safe inference except for a few unusual philosophers). You are talking about two different things.
-----------------------------------------------------------------
2: Addressing Counterarguments
i:
It could be argued that A has a rightness function which, on reflection, will lead A to embrace consequentialism as best for humanity as a whole. This is, however, not necessarily correct. To use an extreme case, what if A is being asked to kill A's own innocent lover, or her own baby? ("Her" because the intuition is likely much stronger that way.) Some people in A's position have such rightness functions; it is easily possible that A does not.
In addition, a follower of LessWrong morality in its standard form faces a dilemma here. If you say that A is still morally obliged to kill her own baby, then Eliezer's own arguments can be turned against you: she will go on pulling a child off the train tracks regardless of any "objective" right. If you say she isn't, you've conceded the case.
A deontological theory is either founded on intuitions or not. If not, Hume's is-ought distinction refutes it. If it is, then it faces similar dilemmas in scenarios like this. Intuitions, however, do not add up to a logically consistent philosophy: "moral luck" (the idea that a person can be more or less morally responsible based on factors outside their control) feels like an oxymoron at first, but many intuitions depend on it.
ii:
One possible counterargument is that A wants to do things in the world, and merely following A's feelings turns A into a morality pump, performing actions which don't make sense. However, there are several problems with this.
i- A's actions probably make sense from the perspective of "Make A feel morally justified". A can't self-modify (at least not directly), after all.
ii- Depending on the strength of the emotions, A does not necessarily care even if A is aware of the inconsistencies in their actions. There are plenty of possible cases: a person dealing with those with whom they have close emotional ties, biases related to race or physical attractiveness, condemning large numbers of innocents to death, etc.
iii:
A final counterargument would be that the way to solve this is through a Coherentist-style reflective equilibrium. Even if Coherentism is not epistemically true, by treating intuitions as if it were true and following the Coherentist philosophy, the result could feel satisfying. The problem is: what if it doesn't? If a person's emotions are strong enough, no amount of reflective equilibrium can contradict them.
If you take an emotivist position, however, you have the problem that emotivism has no solution when feelings contradict each other.
------------------------------------------------------------------
3: Conclusions
My contention here is that we have a serious problem. The concept of right and wrong is like the concept of personal identity: merely something to be abolished for a more accurate view of what exists. It can be replaced with "Wants" (for people who lack a unified moral system and simply have various feelings), "Moralities" (systematic moral codes which are internally coherent), and "Pseudo-Moralities", with no objective morality, even in the Yudkowskyite sense, existing.
A delusion of morality exists in most human minds, of course, just as a delusion of personal identity exists in most if not all human minds. "Moralities" can still exist in terms of groups of entities who all want similar things or agree on basic moral rules that can be taken to their logical conclusions.
Why can that not lead to morality? It can, but if you accept a morality on that basis it implies that rational argument (as opposed to emotional argument, which is a different matter) is in many cases entirely impossible with humans with different moralities, just as it is with aliens.
This leaves two types of rational argument possible about ethical questions:
-Demonstrating that a person would want something different if they knew all the facts- whether facts such as "God doesn't exist", facts such as "This action won't have the consequences you think it will", or facts about the human psyche.
-Showing a person's Morality has internal inconsistencies, which in most people will mean they discard it. (With mere moral Wants this is more debatable)
Arguably it also leads to a third: demonstrating to a person that they do not really want what they think they want. However, this is a philosophical can of worms which I don't want to open (metaphorically speaking), because it is highly complicated (I can think of plenty of arguments against the possibility of such, even if I am not so convinced they are true as to assert it) and because solving it does not contribute much to the main issue.
Eliezer's morality cannot work even on that basis, however. Consider any scenario where an individual B:
i- Acts against Eliezer's moral code
ii- Feels morally right about doing so, and would have felt guilty for following Eliezer's ideas
Then B can argue against somebody trying to use Eliezer's ideas against them by pointing out that, regardless of any Objective Morality, they have the same case Eliezer has for still dragging children off train tracks.
I will not delve into what proportion of humans can be said to make up a single Morality by virtue of having basically similar premises and intuitions. Although there are reasons to doubt it is as large as you'd think (take the A-case), I'm not sure whether it would work.
In conclusion: there is no Universally Compelling Argument amongst humans, or even amongst rational humans.
[Link] Trouble at the lab
Related: The Real End of Science
From the Economist.
“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.
Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.
The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.
...
I recommend reading the whole thing.
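The arithmetic behind widespread replication failure is worth making concrete. Here is a minimal sketch in Python; the prior, power, and significance threshold are illustrative assumptions in the spirit of the article, not figures quoted from it:

```python
# Why failed replications should not surprise us: only some fraction of
# tested hypotheses are true, so at conventional power and significance
# levels, false positives make up a large share of "significant" results.
# All numbers below are illustrative assumptions.

def positive_predictive_value(prior_true, power, alpha):
    """Fraction of statistically significant findings that are real effects."""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return true_positives / (true_positives + false_positives)

# Suppose 10% of tested hypotheses are true, studies have 80% power,
# and the significance threshold is p < 0.05:
ppv = positive_predictive_value(prior_true=0.1, power=0.8, alpha=0.05)
print(f"Share of significant findings that are real: {ppv:.0%}")  # ~64%
```

Under these assumptions, roughly a third of significant published results would be false positives even before accounting for publication bias, so a wave of failed replications is exactly what we should expect.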
On Saying the Obvious
Related to: Generalizing from One Example, Connecting Your Beliefs (a call for help), Beware the Unsurprised
The idea of this article is something I've talked about a couple of times in comments. It seems to require more attention.
As a general rule, what is obvious to some people may not be obvious to others. Is this obvious to you? Maybe it was. Maybe it wasn't, and you thought it was because of hindsight bias.
Imagine a substantive Less Wrong comment. It's insightful, polite, easy to understand, and otherwise good. Ideally, you upvote this comment. Now imagine the same comment, only with "obviously" in front. This shouldn't change much, but it does. This word seems to change the comment in multifarious bad ways that I'd rather not try to list.
Uncharitably, I might reduce this whole phenomenon to an example of a mind projection fallacy. The implicit deduction goes like this: "I found <concept> obvious. Thus, <concept> is inherently obvious." The problem is that obviousness, like probability, is in the mind.
The stigma of "obvious" ideas has another problem in preventing things from being said at all. I don't know how common this is, but I've actually been afraid of saying things that I thought were obvious, even though ignoring this fear and just posting has yet to result in a poorly-received comment. (That is, in fact, why I'm writing this.)
Even tautologies, which are always obvious in retrospect, can be hard to spot. How many of us would have explicitly realized the weak anthropic principle without Nick Bostrom's help?
And what about implications of beliefs you already hold? These should be obvious, and sometimes are, but our brains are notoriously bad at putting two and two together. Luke's example was not realizing that an intelligence explosion was imminent until he read the I.J. Good paragraph. I'm glad he provided that example, as it has saved me the trouble of making one.
This is not (to paraphrase Eliezer) a thunderbolt of insight. I bring it up because I propose a few community norms based on the idea:
- Don't be afraid of saying something because it's "obvious". It's like how your teachers always said there are no stupid questions.
- Don't burden your awesome ideas with "obvious but it needs to be said".
- Don't vote down a comment because it says something "obvious" unless you've thought about it for a while. Also, don't shun "obvious" ideas.
- Don't call an idea obvious as though obviousness were an inherent property of the idea. Framing it as a personally obvious thing can be a more accurate way of saying what you're trying to say, but it's hard to do this without looking arrogant. (I suspect this is actually one of the reasons we implicitly treat obviousness as impersonal.)
I'm not sure if these are good ideas, but I think implementing them would decrease the volume of thoughts we cannot think and things we can't say.
Blind Spot: Malthusian Crunch
In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.
TLDR mathy version:
let f(m, t) be the population that can be supported using the fraction m of Earth's theoretical resource limit that we can exploit at technology level t
let t = k(x) be the technology level at year x
let p(x) be population at year x
What conditions must the constant m and the functions f, k, and p satisfy in order to ensure that f(m, k(x)) - p(x) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
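To make the TLDR concrete, here is a minimal numerical sketch. Every functional form and constant below (exponential technology growth, capacity linear in m and t, the population parameters) is an assumption I chose for illustration, not something asserted in this post:

```python
# Sketch of the TLDR condition: does carrying capacity f(m, k(x)) stay
# above population p(x) for every year x > today? All functional forms
# and constants here are illustrative assumptions.

def k(x, growth=0.02):
    """Technology level in year x (assumed exponential progress)."""
    return (1 + growth) ** (x - 2013)

def f(m, t, base_capacity=20e9):
    """Supportable population at exploitable fraction m and tech level t."""
    return base_capacity * m * t

def p(x, p0=7.1e9, rate=0.011):
    """Population in year x (assumed constant exponential growth)."""
    return p0 * (1 + rate) ** (x - 2013)

def first_crunch_year(m, horizon=500):
    """First year with p(x) >= f(m, k(x)), or None if capacity stays ahead."""
    for x in range(2013, 2013 + horizon):
        if p(x) >= f(m, k(x)):
            return x
    return None

for m in (0.2, 0.35, 0.5):
    print(f"m = {m}: first crunch year = {first_crunch_year(m)}")
```

Under these toy assumptions the outcome is decided entirely by whether capacity starts above population and grows at least as fast; the empirical question is which side of that race the real f, k, and p put us on.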
Long version:
Here I would like to explore the evidence for and against the possibility that the following assertions are true:
- Without human intervention, the carrying capacity of our environment (broadly defined [1]) is finite while there are no *intrinsic* limits on population growth.
- Therefore, if the carrying capacity of our environment is not extended at a sufficient rate to outpace population growth and/or population growth does not slow to a sufficient level that carrying capacity can keep up, carrying capacity will eventually become the limit on population growth.
- Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries it past the carrying capacity, an overshoot occurs, meaning that the population size doesn't just remain at a sustainable level but rather plummets drastically, sometimes to the point of extinction.
- The above three assertions imply that human intervention (expanding the carrying capacity of our environment in various ways and limiting our birth rates in various ways) is what we have to rely on to prevent the above scenario; let's call it the Malthusian Crunch.
- Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because the chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.
- The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the "safety zone" between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won't prevent die-offs on Earth, but will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.
Please note: I'm not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:
Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can't come true, so we don't ignore the risk. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies with an engineering mindset rather than a political one.
Shouldn't we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation [2] of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?
Footnotes:
1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate, such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalinization plants).
2: This is a hard question. I'm not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I'm asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that's worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don't have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
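A rough sketch of the survival-curve picture footnote 2 describes: each catastrophe gets a hazard function h(x), and its survival curve is S(x) = exp of minus the cumulative hazard. The hazard rates below are placeholders I invented for illustration, not estimates of any real existential risk:

```python
import math

# Survival curves from hazard functions, as in footnote 2. S(x) is the
# fraction of Everett branches where the catastrophe has not yet occurred
# by year x. The hazard rates are invented placeholders.

def survival_curve(hazard, years=200, dt=1.0):
    """Numerically integrate a hazard function into a survival curve."""
    s, curve = 1.0, []
    for x in range(years):
        s *= math.exp(-hazard(x) * dt)
        curve.append(s)
    return curve

hazards = {
    "constant": lambda x: 0.002,                           # time-invariant risk
    "rising":   lambda x: 0.00005 * x,                     # conditioned on some growing factor
    "cyclic":   lambda x: 0.002 * (1 + math.sin(x / 15)),  # cyclic risk
}

for name, h in hazards.items():
    curve = survival_curve(h)
    print(f"{name:9s} S(50) = {curve[49]:.3f}   S(100) = {curve[99]:.3f}")
```

Note how the "rising" curve starts above the "constant" one and later crosses below it: the dominant risk over the next 50 years need not be the dominant risk in the 50 years after that.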