Reminder Memes
EDIT: Apologies to anyone who wasted time with this; I did not intend it to go live. I left a draft post up on a computer that had an automatic system update; it must have posted as the window was terminated.
Problems in Education
Post will be returning in Main, after a rewrite by the company's writing staff. Citations Galore.
Caring about possible people in far Worlds
This relates to my recent post on existence in many-worlds.
I care about possible people. My child, if I ever have one, is one of them, and it seems monstrous not to care about one's children. There are many distinct ways of being a possible person:
1) You can be causally connected to some actual people in the actual world in some histories of that world.
2) You can be a counterpart of an actual person on a distinct world, without causal connections.
3) You can be distinct from all actual individuals, and in a causally separate possible world.
4) You can be acausally connectable to actual people, but in distinct possible worlds.
Those four ways are not mutually exclusive partitions; they sometimes overlap, and I don't believe they exhaust the scope of possible people. The most natural question to ask is: should we care equally about all kinds of possible people? Some people are seriously studying this, and let us hope they give us accurate ways to navigate our complex universe. While we wait, some worries seem relevant:
1) The Multiverse is Sadistic Argument:
P1.1: If all possible people do their morally relevant thing (call it exist, if you will) and
P1.2: We cannot affect (causally or acausally) what is or not possible
C1.0: Then we cannot affect the morally relevant thing.
2) The Multiverse is Paralyzing (related)
P2.1: We have reason to care about X-Risk
P2.2: Worlds where X-Risk obtains are possible
P2.3: We have nearly as much reason to worry about possible non-actual[1] worlds where X-risk obtains as we have to worry about actual[1] worlds where it obtains.
P2.4: There are infinitely more possible worlds where X-risk obtains than actual[1] ones.
C2.0: Infinitarian Paralysis
[1] Actual here means belonging to the same quantum branching history as you. If you think you have many quantum successors, all of them are actual; the same goes for predecessors, and for people who inhabit your Hubble volume.
3) Reality-Fluid Can't Be All That Is Left Argument
P3.1) If all possible people do their morally relevant thing
P3.2) The way in which we can affect what is possible is by giving some subsets of it more units of reality-fluid, or quantum measure
P3.3) In fact reality-fluid is a ratio, such as a percentage of successor worlds of kind A or kind B for a particular world W
P3.4) A possible World3 with 5% reality-fluid relative to World1 is causally indistinguishable from the same World3 with five times more reality-fluid (25%) relative to World2.
P3.5) The morally relevant thing, though qualitative by constitution, seems to be quantifiable, and what matters is its absolute quantity, not any kind of ratio.
C3.1: From 3.2 and 3.3 -> We can actually affect only a quantity that is relative to our world, not an absolute quantity.
C3.2: From C3.1 and P 3.5 -> We can't affect the relevant thing.
C3.3: We ended up having to talk about reality-fluid because decisions matter, and reality-fluid is the thing that decisions change (from P3.4 we know it isn't causal structure). But if all that a decision changes is some ratio between worlds, and what matters, by P3.5, is not a ratio between worlds, then we have absolutely no clue what we are talking about when we talk about "the thing that matters", "what we should care about", and "reality-fluid".
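To make the force of C3.1 concrete, here is a toy illustration (my own construction, not part of the original argument; the world names and the 5%/25% figures echo P3.4, while the absolute numbers are arbitrary assumptions): if reality-fluid is only ever a ratio between worlds, a global rescaling of any underlying "absolute" measure is invisible, so no decision can touch the absolute quantity that P3.5 says is what matters.

```python
# Toy illustration (my own construction): reality-fluid as a ratio between
# worlds. The world names and the 5%/25% figures echo P3.4; the absolute
# numbers are arbitrary assumptions.
measures = {"World1": 2.0, "World2": 0.4, "World3": 0.1}

def fluid(world, reference, m):
    """Reality-fluid of `world` relative to `reference` (P3.3: a ratio)."""
    return m[world] / m[reference]

print(fluid("World3", "World1", measures))  # 0.05 -> 5% relative to World1
print(fluid("World3", "World2", measures))  # 0.25 -> 25% relative to World2

# Globally rescale the "absolute" measure: every ratio, and hence everything
# causally observable, is unchanged -- the point of P3.4 and C3.1.
scaled = {w: 1000 * v for w, v in measures.items()}
print(fluid("World3", "World1", scaled))    # still 0.05
```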
These arguments are offered not as perfectly rigorous argument structures, but to at least induce some nausea about talk of reality-fluid, measure, morally relevant things in many-worlds, and morally relevant people causally disconnected from us. These are not things where you can Taboo the word away and keep the substance around. The problem does not lie in the word 'existence', or in the sentence 'X is morally relevant'. It seems to me that the service that existence or reality used to perform doesn't make sense anymore (if all possible worlds exist, or if the Mathematical Universe Hypothesis is correct). We attempted to keep it around as a criterial determinant for What Matters. Yet now all that is left is this weird ratio that just can't be what matters. Without a criterial determinant for mattering, we are left in a position that makes me think we should head back towards a causal approach to morality. But this is an opinion, not a conclusion.
Edit: This post is an argument against the conjunction of two things: Many-Worlds, and the way in which we think of What Matters. The most natural interpretation is that Many-Worlds is true, and thus that my argument tells against our notion of What Matters. In fact my position lies more on the opposite side: our notion of What Matters is (strongly related to) What Matters, so Many-Worlds is less likely.
Game Theory of the Immortals
I’m sure many others have put much more thought into this sort of thing -- at the moment, I’m too lazy to look for it, but if anyone has a link, I’d love to check it out.
Anyway, I ran into some musings on game theory for immortal agents, and thought they were interesting enough to talk about.
Cooperation in games like the iterated Prisoner’s Dilemma is partly dependent on the probability of encountering the other player again. Axelrod (1981) gives the payoff for a sequence of 'cooperate's as R/(1-p) where R is the payoff for cooperating, and p is a discount parameter that he takes as the probability of the players meeting again (and recognizing each other, etc.). If you assume that both players continue playing for eternity in a randomly mixing, finite group of other players, then the probability of encountering the other player again approaches 1, and the payoff for an extended period of cooperation approaches infinity.
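As a sanity check on that limit, here is a minimal sketch (my own illustration; only the R/(1-p) formula is Axelrod's, and the payoff values T and P are the standard Prisoner's Dilemma numbers I've assumed). It compares cooperating forever against a tit-for-tat partner with defecting once and then suffering mutual punishment thereafter.

```python
# Minimal sketch (assumed standard PD payoffs; only R/(1-p) is from Axelrod 1981).
R, T, P = 3, 5, 1  # mutual cooperation, temptation to defect, mutual punishment

def cooperate_forever(p):
    """Discounted stream R + R*p + R*p^2 + ... = R / (1 - p)."""
    return R / (1 - p)

def defect_against_tit_for_tat(p):
    """One round of temptation T, then mutual punishment P ever after."""
    return T + P * p / (1 - p)

for p in (0.5, 0.9, 0.99, 0.999):
    print(f"p={p}: cooperate={cooperate_forever(p):.1f}, "
          f"defect={defect_against_tit_for_tat(p):.1f}")
```

As p approaches 1, the cooperative payoff diverges while defection's advantage stays a bounded one-shot gain, which is the sense in which near-certain re-encounters push immortal agents towards cooperation.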
So, take a group of rational, immortal agents, in a prisoner’s dilemma game. Should we expect them to cooperate?
I realize there is no optimal strategy without reference to the other players’ strategies, and that the universe is not actually infinite in time, so this is not a perfect model on at least two counts, but I wanted to look at the simple case before adding complexities.
Destructive mathematics
Follow-up to: Constructive mathematics and its dual
In the last post, I introduced constructive mathematics, intuitionistic logic (JL), and its dual, uninspiringly called dual-intuitionistic logic (DL).
I said that JL differs from classical logic over the status of the law of excluded middle, a principle valid in the latter which states that a formula can only be meaningfully asserted or negated. In the meta-theory, this means you can prove that something is true by showing that its negation is false.
Constructivists, coming from a philosophical platform that regards mathematics as a construction of the human mind, refuse this principle: their idea is that a formula A can be said to be true if and only if there is a direct proof of it. Similarly, A can be said to be false if and only if there is a direct proof of its negation. If no proof or refutation exists yet (as is the case today, for example, for the Goldbach conjecture), then nothing can be said about A.
Thus A ∨ ¬A is no longer a tautology (although it can still be true for some formulas, precisely those that already have a proof or a refutation).
Intuitionism (the most prominent subset of the constructivist program) anyway holds that A ∧ ¬A is still always false, and so JL incorporates ¬(A ∧ ¬A), a principle called the law of non-contradiction.
Intuitionistic logic has no built-in model of time, but you can picture the mental activity of an adherent this way: he starts with few (or no) truths, and incorporates into his theory only those theorems of which he can build a proof, and the negations of those theorems of which he can produce a refutation.
Mathematics, as an endeavour, is seen as an accumulation of truths from an empty base.
I've also indicated that there is a direct dual of JL, which is part of a wider class of systems collectively known as paraconsistent logics. Compared to the amount of study dedicated to intuitionistic logic, DL is basically unknown, but you can consult for example this paper and this one.
In this second article, a model is presented for which DL is valid, and we can read the following quote: "[These semantics] reflect the notion that our current knowledge about the falsity of statements can increase. Some statements whose falsity status was previously indeterminate can down the track be established as false. The value false corresponds to firmly established falsity that is preserved with the advancement of knowledge whilst the value true corresponds to 'not false yet'".
My suggestion is to be a lot braver in our epistemology: let's suppose that the natural cognitive state is not one of utter ignorance, but of triviality. Let's then just assume that in the beginning, everything is true.
Our job then, as mathematicians, is to discover refutations: the refutation of A will expunge A from the set of truths, and the refutation of ¬A will remove ¬A.
This dual of constructive mathematics just begs to be called destructive mathematics (or destructivism): as a program, it means starting with the maximal possibility and developing a careful collection of falsities.
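A toy model of the program, under the "not false yet" semantics quoted above (this is my own sketch, not a formal semantics for DL; the candidate formulas are arbitrary placeholders): start with every candidate formula provisionally true, and let knowledge advance only by refutation.

```python
# Toy sketch of destructivism (my own illustration, not a formal DL semantics):
# in the beginning everything is true; mathematics proceeds by refutation.
candidates = {"A", "not A", "B", "not B", "C"}
truths = set(candidates)  # the trivial starting state: every formula holds

def refute(formula):
    """A refutation firmly establishes falsity; 'true' means 'not false yet'."""
    truths.discard(formula)

refute("not A")  # expunge "not A"; A itself survives as "not false yet"
refute("B")      # expunge B; "not B" is still "not false yet"
print(sorted(truths))  # ['A', 'C', 'not B'] remain, provisionally
```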
Be careful though: it doesn't necessarily mean that we accept the existence of actual contradictions. It might very well be the case that in our world (or model of interest) there are no contradictions; we 'just' need to expunge the relevant assertions.
As the dual of constructive mathematics, destructivism regards mathematics as a mental construction, though one that proceeds from triviality through refutations.
One major difficulty with destructive mathematics is that, to arrive at a finite set of truths, you need to destroy an infinite number of falsities (but, on the other hand, to arrive at a finite set of falsities in constructive mathematics you need to assert an infinite number of truths).
Usually we are more interested in truths, so why should we embark on such an effort?
I can see at least two weak reasons and two strong ones, plus another that counts as entertainment, which I'll discuss more extensively in the last post.
The first weak reason is that sometimes we are more interested in falsity than in truth. Destructivism seems to be a more natural background for the resolution calculus, although, to my knowledge, that has only been developed in a classical setting.
The second weak reason is that destructivism is an interesting choice for coalgebraic methods in computer science: there, co-induction and co-recursion are methods for 'observing' or 'destroying' (potentially) infinite objects. From the Wikipedia entry on coinduction: "As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification."
I wish I could say more, but I don't know much myself: the parallels are tempting, but I have to leave the discovery of any low-hanging fruit to later times, or to someone else entirely.
Two much more promising fields of application, though, are Tegmark universes and Many-Worlds quantum mechanics.
It's difficult to give a cogent account of why all mathematical structures should exist, but Tegmark's position amounts simply to a Platonist point of view on destructivism.
If all formulas are true, then this means that "somewhere" every model is realized, while on the other hand, if all structures are realized, then "on the whole" every formula is true (somewhere).
But the most important reason why one should adopt this framework is that it gives a natural account of quantum mechanics in the Many-Worlds flavour (MWI).
Usually, physical laws are seen as the correspondence between physically realizable states, and time is the "adjunction" of new states from older ones. Do you recognize anything?
What if, instead, physical laws dictate only those states that ought to be excluded, and time is simply the 'destruction' or 'localization' of all those possible states? Well, then you have (almost for free) MWI: every state is realized, but in time you are constrained to just one.
I'm extremely tempted to say that MWI is the dual of wave-function collapse, but of course I cannot (yet) prove it. Or should I just say that I cannot yet disprove that it's not like that?
If that's the case, the mystery of why subjective probability follows the Born rule would be 'just' the dual of the non-linear mechanism of collapse. One mystery for a mystery.
I also suspect that destructive mathematics might have implications even for probability theory, but... this framework is still in its infancy, so who knows?
The last interesting motivation for taking destructive mathematics seriously is that it offers a possible coherent account of the Cthulhu mythos (!!): what if God, instead of having created only this world from nothing out of pure love, has destroyed every world but this one out of pure hate? If you accept the first scenario, then the second is equally plausible / conceivable. I'll explore the theme in the last post: Azathoth hates us all!
Co-Working Collaboration to Combat Akrasia
Before I was very involved in the Less Wrong community, I heard that Eliezer was looking for people to sit with him while he worked, to increase writing productivity. I knew that he was doing important work in the world, and figured that this was the sort of contribution to improving humanity that I would like to make, which was within the set of things that would be easy and enjoyable for me.
So I got a hold of him and offered to come and sit with him, and did that once a week for about a year. As anticipated, it worked marvelously. I found it easy to sit and not talk, just getting my own work done. Eventually I became a beta reader for his "Bayes for Everyone Else", which is really great and helped my ability to estimate probabilities a ton. (Eliezer is still perfecting this work and has not yet released it, but you can find the older version here.)
In addition to learning the basics of Bayes from doing this, I also learned how powerful it is to have someone just to sit quietly with you to co-work on a regular schedule.
I’ve experimented with similar things since then, such as making skype dates with a friend to watch informational videos together. This worked for a while, until my friend got busy. I have two other recurring chat dates with friends to do dual n-back together, and those have worked quite well and are still going.
A client of mine, Mqrius, is working on his Master’s thesis and has found that the only way he has been able to overcome his akrasia so far is by co-working with a friend. Unfortunately, his friend does not have as much time to co-work as he’d like, so we decided to spend Mqrius’s counseling session today writing this Less Wrong post to see if we can help him and other people in the community who want to co-work over skype connect, since this will probably be much higher value to him as well as others with similar difficulties than the next best thing we could do with the time.
I encourage anyone who is interested in co-working, watching informational videos together, or any other social productivity experiments that can be done over skype or chat, to coordinate in the comments. For this to work best, I recommend being as specific as possible about the ideal co-working partner for you, in addition to noting if you are open to general co-working.
If you are specific, you are much more likely to succeed in finding a good co-working partner. While it's possible you might screen someone out, it's more likely that you will get the attention of your ideal co-working partner, who otherwise would have glossed over your comment.
Here is my specific pitch for Mqrius:
If you are working on a thesis, especially if it’s related to nanotechnology like his is, and you think you are likely to be similarly motivated by co-working, please comment or contact him about setting up an initial skype trial run. His ideal scenario is to find 2-3 people to co-work with him, for a total of about 20 hours of co-working per week. He would like to find people who are dependable about showing up for appointments they have made and who will create a recurring schedule with him, at least until he gets his thesis done. He’d like to try an initial 4-hour co-working block as an experiment with interested parties. Please comment below if you are interested.
[Mqrius and I have predictions going about whether or not he will actually get a co-working partner who is working on a nanotech paper out of this; if others want to post predictions in the comments, that is encouraged. It's a good practice for reducing hindsight bias.]
[edit]
A virtual co-working space has been created and is currently live; discussion and a link to the room are here.
The more privileged lover
David is an atheist. He is dating Jane, who is a devout Christian. They have a fairly good relationship, except in the sex department: David thinks that having regular sex is important in a relationship, whereas Jane would like to remain a virgin until marriage due to religious reasons. Before they became a couple, David assumed that not having sex was something that he could tolerate, since he liked Jane very much, and was really eager to be with her. However, as months go by, David has become increasingly frustrated with the lack of physical intimacy, and is beginning to consider breaking up with Jane, even though he is still very fond of her.
What would you advise David to do? Given my experience, I think the most common response would be to advise David to leave Jane. Some people might even say that David shouldn't have started the relationship with Jane in the first place, since he has known all along that she intends to remain a virgin until marriage. They say that, if he really loves her and respects her religious beliefs, he should not ask her to have sex before marriage. Instead, he should break up with her so that they may both go on to look for more suitable partners.
Why is it that nobody says that Jane shouldn't have started the relationship with David in the first place, since she has known all along that he thinks that sexual compatibility/activity is very important in a relationship? Why is it that nobody says that if she really loves him and respects his values, she should not make him abstain, and should instead engage in sex with him? Why do her religious beliefs render her position more privileged?
Perhaps the response would be this: well, the criticism is mostly directed at David because he is the one who went into the relationship with unrealistic views of what he can or cannot tolerate. Besides, since Jane laid out the terms clearly before they became a couple, she could hardly be faulted.
That is a reasonable response. But imagine if the situation were reversed: what if, while they were still discussing whether to commit to each other, David had laid out the terms, that Jane would be expected to have sex regularly with him? Even if she agreed, chances are that people would say that he should have respected her religious convictions. Those who criticise David might point out that perhaps Jane was very reluctant when agreeing to it, but thought that it was something on which she could compromise, and that David should not have put her in such a difficult position in the first place. Well, then, perhaps David was very reluctant when agreeing to not have sex as well, but thought that it was something on which he could compromise, and Jane should not have put him in such a difficult position in the first place.
The emotional harm done to Jane by making her engage in pre-marital sexual activity could be as severe as the emotional harm done to David by making him agree to abstain from pre-marital sexual activity, and yet few people acknowledge it, at least in my experience. Or maybe many people do acknowledge it, but nevertheless there are few of them who would admit it openly and defend David. Why is wanting sex worse than not wanting sex?
What is it about being religious that gives one the more privileged position in love?
Why might the future be good?
(Cross-posted from Rational Altruist. See also recent posts on time-discounting and self-driving cars.)
When talking about the future, I often encounter two (quite different) stories describing why the future might be good:
- Decisions will be made by people whose lives are morally valuable and who want the best for themselves. They will bargain amongst each other and create a world that is good to live in. Because my values are roughly aligned with their aggregate preferences, I expect them to create a rich and valuable world (by my lights as well as theirs).
- Some people in the future will have altruistic values broadly similar to my own, and will use their influence to create a rich and valuable world (by my lights as well as theirs).
Which of these pictures we take more seriously has implications for what we should do today. I often have object level disagreements which seem to boil down to disagreement about which of these pictures is more important, but rarely do I see serious discussion of that question. (When there is discussion, it seems to turn into a contest of political ideologies rather than facts.)
If we take picture (1) seriously, we may be interested in ensuring that society continues to function smoothly, that people are aware of and pursue what really makes them happy, that governments are effective, markets are efficient, externalities are successfully managed, etc. If we take picture (2) seriously, we are more likely to be concerned with changing what the people of the future value, bolstering the influence of people who share our values, and ensuring that altruists are equipped to embark on their projects successfully.
I'm mostly concerned with the very long run---I am wondering what conditions will prevail for most of the people who live in the future, and I expect most of them to be alive very far from now.
It seems to me that there are two major factors that control the relative importance of pictures (1) and (2): how prominent should we expect altruism to be in the future, and how efficiently are altruistic vs. selfish resources being used to create value? My answer to the second question is mostly vague hand-waving, but I think I have something interesting to say on the first question.
How much altruism do we expect?
I often hear people talking about the future, and the present for that matter, as if we are falling towards a Darwinian attractor of cutthroat competition and vanishing empathy (at least as a default presumption, which might be averted by an extraordinary effort). I think this picture is essentially mistaken, and my median expectation is that the future is much more altruistic than the present.
Does natural selection select for self-interest?
In the world of today, it may seem that humans are essentially driven by self-interest, that this self-interest was a necessary product of evolution, that good deeds are principally pursued instrumentally in service of self-interest, and that altruism only exists at all because it is too hard for humans to maintain a believable sociopathic facade.
If we take this situation and project it towards a future in which evolution has had more time to run its course, creating automations and organizations less and less constrained by folk morality, we may anticipate an outcome in which natural selection has stripped away all empathy in favor of self-interest and effective manipulation. Some may view this outcome as unfortunate but inevitable, others may view it as a catastrophe which we should work to avert, and still others might view it as a positive outcome in which individuals are free to bargain amongst themselves and create a world which serves their collective interest.
But evolution itself does not actually seem to favor self-interest at all. No matter what your values, if you care about the future you are incentivized to survive, to acquire resources for yourself and your descendants, to defend yourself from predation, etc. If I care about filling the universe with happy people and you care about filling the universe with copies of yourself, I'm not going to set out by trying to make people happy while allowing you and your descendants to expand throughout the universe unchecked. Instead, I will pursue a similar strategy of resource acquisition (or coordinate with others to stop your expansion), to ensure that I maintain a reasonable share of the available resources, which I can eventually spend to help shape a world I consider valuable. (See here for a similar discussion.)
This doesn't seem to match up with what we've seen historically, so if I claim that it's relevant to the future I have some explaining to do.
Historical distortions
Short-range consequentialism
One reason we haven't seen this phenomenon historically is that animals don't actually make decisions by backwards-chaining from a desired outcome. When animals (including humans) engage in goal-oriented behavior, it tends to be pretty local, without concern for consequences which are distant in time or space. To the extent that animal behavior is goal-oriented at a large scale, those goals are largely an emergent property of an interacting network of drives, heuristics, etc. So we should expect animals to have goals which lead them to multiply and acquire resources, even when those drives are pursued short-sightedly. And indeed, that's what we see. But it's not the fault of evolution alone---it is a product of evolution given nature's inability to create consequentialist reasoners.
Casually, we seem to observe a similar situation with respect to human organizations---organizations which value expansion for its own sake (or one of its immediate consequences) are able to expand aggressively, while organizations which don't value expansion have a much harder time deciding to expand for instrumental reasons without compromising their values.
Hopefully, this situation is exceptional in history. If humans ever manage to build systems which are properly consequentialist---organizations or automations which are capable of expanding because it is instrumentally useful---we should not expect natural selection to discriminate at all on the basis of those systems' values.
Value drift
Humans' values are also distorted by the process of reproduction. A perfect consequentialist would prefer to have descendants who share their values. (Even if I value diversity or freedom of choice, I would like my children to at least share those values, at least if I want that freedom and diversity to last more than one generation!) But humans don't have this option---the only way we can expand our influence is by creating very lossy copies. And so each generation is populated by a fresh batch of humans with a fresh set of values, and the values of our ancestors have only an extremely indirect effect on the world of today.
Again, a similar problem afflicts human organizations. If I create a foundation that I would like to persist for generations, the only way it can expand its influence is by hiring new staff. And since those staff have a strong influence over what my foundation will do, the implicit values of my foundation will slowly but surely be pulled back to the values of the pool of human employees that I have to draw from.
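A toy mean-reversion model makes the "slowly but surely" dynamic concrete (entirely my own illustration; the cohort influence alpha and the value scale are arbitrary assumptions): each hiring cohort pulls the foundation's effective values a fixed fraction of the way back toward the pool's values, so the founder's imprint decays geometrically.

```python
# Toy value-drift model (my own illustration; parameters are assumptions).
alpha = 0.2       # fraction of influence each new hiring cohort exerts
pool_mean = 0.0   # values of the general pool of potential employees
v = 1.0           # the founder's distinctive values
for cohort in range(1, 11):
    v = (1 - alpha) * v + alpha * pool_mean  # each cohort pulls v toward the pool
    print(cohort, round(v, 3))
# after 10 cohorts v ~ 0.107: the founder's imprint decays like (1 - alpha)^n
```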
These constraints distort evolution, causing selection to act only on those traits which can be reliably passed on from one generation to the next. In particular, this exacerbates the problem from the preceding section---even to the extent that humans can engage in goal-oriented reasoning and expand their own influence instrumentally, these tendencies cannot be very well encoded in genes or passed on to the next generation in other ways. This is perhaps the most fundamental change which would result from the development of machine intelligences. If it were possible to directly control the characteristics and values of the next generation, evolution would be able to act on those characteristics and values directly.
So what does natural selection select for?
If the next generation is created by the current generation, guided by the current generation's values, then the properties of the next generation will be disproportionately affected by those who care most strongly about the future.
In finance: if investors have different time preferences, those who are more patient will make higher returns and eventually accumulate much wealth. In demographics: if some people care more about the future, they may have more kids as a way to influence it, and therefore be overrepresented in future generations. In government: if some people care about what government looks like in 100 years, they will use their political influence to shape what the government looks like in 100 years rather than trying to win victories today.
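The finance case is easy to make quantitative. A stylized sketch (the return and consumption rates are my own assumptions): two investors start with equal wealth; the patient one reinvests everything, the impatient one consumes part of each period's return, and the patient investor's share of total wealth tends to 1.

```python
# Stylized patience-selection sketch (all parameters are assumptions).
r = 0.05        # per-period return
consume = 0.6   # fraction of the return the impatient investor consumes
patient = impatient = 1.0
for period in range(200):
    patient *= 1 + r                    # reinvests the whole return
    impatient *= 1 + r * (1 - consume)  # reinvests only part of it
print(patient / (patient + impatient))  # ~0.997: patience owns almost everything
```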
What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.
I think this picture is reasonably robust. There are ways that natural selection (/ efficient markets) can be frustrated, and I would not be too surprised if these frustrations persisted indefinitely, but nevertheless this dynamic seems like one of the most solid features of an uncertain future.
What values are we starting with?
Most of people's preferences today seem to concern what happens to them in the near term. If we take the above picture seriously, these values will eventually have little influence over society. Then the question becomes: if we focus only on humanity's collective preferences over the long term, what do those preferences look like? (Trying to characterize preferences as "altruistic" or not no longer seems useful as we zoom in.)
This is an empirical question, which I am not very well-equipped to evaluate. But I can make a few observations that ring true to me (though my data is mostly drawn from academics and intellectuals, who may fail to be representative of normal people in important ways even after conditioning on the "forward-looking" part of people's values):
- When people think about the far future (and thus when they articulate their preferences for the far future) they seem to engage a different mode of reasoning, more strongly optimized to produce socially praise-worthy (and thus prosocial) judgments. This might be characterized as a bias, but to the extent we can talk about human preferences at all they seem to be a result of these kinds of processes (and to the extent that I am using my own altruistic values to judge futures, they are produced by a similar process). This effect seems to persist even when we are not directly accountable for our actions.
- People mostly endorse their own enlightened preferences, and look discouragingly at attempts to lock in hastily considered values (though they often seem to have overconfident views about what their enlightened preferences will look like, which admittedly might interfere with their attempts at reflection).
- I find myself sympathetic to very many people's accounts of their own preferences about the future, even where those accounts differ significantly from my own. I would be surprised if the distribution of moral preferences was too scattered.
- To the extent that people care especially about their species, their nation, their family, themselves, etc.: they seem to be sensitive to fairness considerations (and rarely wish e.g. to spend a significant fraction of civilization's resources on themselves), their preferences seem to be only a modest distortion of aggregative values (wanting people with property X to flourish is not so different from wanting people to flourish, if property X is some random characteristic without moral significance), and human preferences seem to drift somewhat reliably in the direction of more universal concern as basic needs are addressed and more considerations are taken into account.
After cutting away all near-term interests, I expect that contemporary human society's collective preferences are similar to their stated moral preferences, with significant disagreement on many moral judgments. However, I expect that these values support reflection, that upon reflection the distribution of values is not too broad, and that for the most part these values are reasonably well-aligned. With successful bargaining, I expect a mixture of humanity's long-term interests to be only modestly (perhaps a factor of 10, probably not a factor of 1000) worse than my own values (as judged by my own values).
Moreover, I have strong intuitions to emphasize those parts of my values which are least historically contingent. (I accept that all of my values are contingent, but am happier to accept those values that are contingent on my biological identity than those that are contingent on my experiences as a child, and happier to accept those that are contingent on my experiences as a child than those that are contingent on my current blood sugar.) And I have strong reciprocity intuitions that exacerbate this effect and lead me to be more supportive of my peers' values. These effects make me more optimistic about a world determined by humanity's aggregate preferences than I otherwise would be.
How important is altruism?
(The answer to this question, unlike the first one, depends on your values: how important to what? I will answer from my own perspective. I have roughly aggregative values, and think that the goodness of a world with twice as many happy people is twice as high.)
Even if we know a society's collective preferences, it is not obvious what their relative importance is. At what level of prevalence would the contributions of explicit altruism become the dominant source of value? If altruists are 10% of the influence-weighted population, do the contributions of the altruists matter? What if altruists are 1% of the population? A priori, it seems clear that the explicit altruists should do at least as much good--on the altruistic account--as any other population (otherwise they could decide to jump ship and become objectivists, or whatever). But beyond that, it isn't clear that altruists should create much more value--even on the altruistic account--than people with other values.
I suspect that explicit altruistic preferences create many times more value than self-interest or other nearly orthogonal preferences. So in addition to expecting a future in which altruistic preferences play a very large role, I think that altruistic preferences would be responsible for most of the value even if they controlled only 1% of the resources.
One significant issue is population growth. Self-interest may lead people to create a world which is good for themselves, but it is unlikely to inspire people to create as many new people as they could, or use resources efficiently to support future generations. But it seems to me that the existence of large populations is a huge source of value. A barren universe is not a happy universe.
A second issue is that population characteristics may also be an important factor in the goodness of the world, and self-interest is unlikely to lead people to ensure that each new generation has the sorts of characteristics which would cause them to lead happy lives. It may happen by good fortune that the future is full of people who are well-positioned to live rich lives, but I don't see any particular reason this would happen. Instead, we might have a future "population" in which almost all resources support automation that doesn't experience anything, or a world full of minds which crave survival but experience no joy, or etc.; "self-interest" wouldn't lead any of these populations to change themselves to experience more happiness. It's not clear why we would avoid these outcomes except by a law of nature that said that productive people were happy people (which seems implausible to me) or by coordinating to avoid these outcomes.
(If you have different values, such that there is a law [or at least guideline] of nature: "productive people are morally valuable people," then this analysis may not apply to you. I know several such people, but I have a hard time sympathizing with their ethics.)
Conclusion
I think that the goodness of a world is mostly driven by the amount of explicit optimization that is going on to try and make the world good (this is all relative to my values, though a similar analysis seems to carry with respect to other aggregative values). This seems to be true even if relatively little optimization is going on. Fortunately, I also think that the future will be characterized by much higher influence for altruistic values. If I thought altruism was unlikely to win out, I would be concerned with changing that. As it is, I am instead more concerned with ensuring that the future proceeds without disruptions. (Though I still think it is worth it to try and increase the prevalence of altruism faster, most of all because this seems like a good approach to minimizing the probability of undesired disruptions.)
Need help with an MLP fanfiction with a transhumanist theme.
EDIT: I am now taking arguments for alicornism. Alicornism being the placeholder term I've given to the stance that all ponies should be alicorns. Please PM me or post here if you have a good one, or an argument against one of anti-alicornism's strongest points: Overpopulation/over-use of resources, magical abuse/existential risk, or upheaval of the respect ponies have for their rulers due to their alicorn status. I would prefer general arguments for alicornism over counter-arguments if possible. Deathist / anti-alicornist arguments are still fine to post here.
Disclaimer: I'm not sure if this is worthy of a discussion post, but I figured, given the number of people on LW who like My Little Pony, it would have at least as many potentially interested people as a regional meet-up thread would, so I figured I'd give it a shot. If this is too trivial or frivolous for LW, feel free to tell me and/or downvote, and I'll refrain from such threads in future. A place where I could go to find some help instead of the Discussion section would also be greatly appreciated in such a case.
So I had an idea for a one-shot or small novella, depending on how the plot developed, about an argument between Twilight and Celestia. Twilight finds out she's immortal now that she's an alicorn, and then decides that, given the standard anti-death arguments (immortality is good, death is bad, and so on), they should turn everyone who wants to be an alicorn into one.
The problem is, I'm having a very difficult time coming up with actual arguments for Celestia.
- Celestia herself is immortal, she's lived for well over a thousand years, and she isn't horrifically depressed, so clearly, immortal life is worth living and there's enough stuff to do with an extended lifespan.
- For the purposes of this fic, it's possible to turn anypony into an alicorn. I'm likely going to go with the idea that the spell can only be used a few times a year, but that's still enough to turn anyone who wants it into an alicorn within a couple of decades via exponential growth (see the sketch after this list): the first targets can all be gifted unicorns who can be easily trained to use the magic.
- In most of the "Immortality sucks" fics I've read, the only real argument that immortality sucks is that you have to watch everyone else grow up and die. If a large majority of the population were turned alicorn, this wouldn't be a problem anymore.
- Nothing in canon suggests that there's any sort of religion in Equestria. Even in fanfics I've read, I've only read one fanfic where someone made up an afterlife that some ponies believed in, and in many more that I've read, Celestia's name is actually used in place of God in various sentences, like "Oh for Celestia's sake!" Thus, it's unlikely they'd believe in an afterlife: Both in canon and the majority of fanon, the closest thing to a God appears to be Celestia herself.
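Here is the back-of-the-envelope check of the "couple of decades" claim promised above (all parameters are my guesses, not canon): if each new alicorn can be trained to cast the spell a few times a year, the caster pool grows geometrically.

```python
# Back-of-the-envelope alicornification timeline (all numbers are my guesses).
casts_per_caster_per_year = 3  # "a few times a year"
casters = 1                    # one trained caster to start with
population = 10_000_000        # assumed pony population
converted, year = 0, 0
while converted < population:
    new = casters * casts_per_caster_per_year
    converted += new
    casters += new             # every new alicorn learns the spell too
    year += 1
print(year)  # 12: well within "a couple of decades"
```

Even at one transformation per caster per year, the doubling alone covers ten million ponies in about 24 years, so the timescale in the premise looks robust.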
I've come up with arguments for Celestia by roleplaying the argument out by myself, but I haven't come up with anything that Twilight can't just shoot down, and I'd prefer if the argument wasn't just Celestia getting steamrolled, and I'd like to do this by strengthening Celestia's side, not weakening Twilight's.
Is the argument for deathism really that weak? I've read over the Harry vs. Dumbledore deathism argument in HPMOR several times looking for ideas, and IIRC Eliezer actually claimed he steel-manned Dumbledore's position, but I don't find anything Dumbledore says convincing in the slightest, and ended that chapter feeling that Harry was the clear winner in that debate, and that's with Dumbledore having access to arguments that Celestia doesn't, given that in the Potterverse, nobody actually knows what it's like to be immortal, and Dumbledore believes in an afterlife.
Some other arguments I've come up with for Celestia:
Argument: We can't just have a massive ruling class.
Response: There's no need for alicorns to be royalty. "Princess = Alicorn, Alicorn = Princess" is only something that law and tradition dictate: They can be changed. After all, Blueblood is a prince and not an alicorn, and it's certainly possible for an alicorn to NOT be royalty, if the princesses wanted.
Argument: Harder to keep the populace in line, if everyone has more power.
Response: Celestia's not exactly going around fighting criminals herself with her alicorn powers, so Celestia being much more powerful than others isn't necessary to keep the peace. If anything, an alicornified populace is MORE likely to be able to govern itself: at the moment, a pegasus criminal can only be pursued effectively by about one-third of police officers, for example.
Argument: Overpopulation.
Response: One response is the idea that, starting a year or so from a royal edict, ponies who wish to be changed into alicorns aren't permitted to give birth more than once or twice. A broader response is that "overpopulation" isn't actually a reason to oppose alicornification; it's just a problem that has to be solved in order to do it. Saying "there'd be overpopulation" and then dropping the entire idea would be like Twilight, when given the task of saving the Crystal Empire from being banished again without knowing how, responding "Oh well, guess that's it, we may as well pack up and go home" rather than trying to actually solve the problem. That said, this is the only truly legitimate argument I've come up with: one that requires real thought to fully defeat, rather than having an easy response leap to mind.
Argument: Mortals wouldn't understand the consequence of their decision.
Response: Again, there are several arguments here. Firstly, there's no reason to believe the alicorn transformation is irreversible, even if it's not currently known how to reverse it. Secondly, Celestia can already predict the consequences, and since she thinks HER life is worth living, clearly there's a solid chance that other ponies will find their lives worth living as well.
So, the questions to ask:
Are there good arguments for Celestia I haven't thought of?
Are the arguments I've already posited sufficient to not straw-man the lifeism position, and to allow for a reasonable argument?