Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!

If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?

Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it's not good enough to say that we're really rational, scientific, altruist, utilitarian, etc, in contrast to those people -- they thought the same.)

So, how might we find that all these ideas are massively wrong?

Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business, and redistribution of wealth in general are all socialist. This may all be bad from some point of view, but that is in no way the mainstream opinion.

Then, those guys whom you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage on the way to communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So, which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Well, here are some examples.

1) Building FAI is possible, and there is a reliable way to tell if it is truly FAI before launching it. Result if wrong: paperclips.

2) Building FAI is much more difficult than building AI; launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we don't launch any AI before civilization runs out of resources or collapses for some other reason.

3) Consciousness is a sort of optional feature; intelligence can work just as well without it…

4Chrysophylax10y
Under the assumption that cryonics patients will never be unfrozen, cryonics has two effects. Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage. The second effect is in increasing the rate of circulation of the currency; freezing corpses that will never be revived is pretty close to burying money, as Keynes suggested. Widespread, sustained cryonic freezing would certainly have stimulatory, and thus inflationary, effects; I would anticipate a slightly higher inflation rate and an ambiguous effect on economic growth. The effects would be very small, however, as cryonics is relatively cheap and would presumably grow cheaper. The average US household wastes far more money and real resources by not recycling or closing curtains and by allowing food to spoil.

Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.

How does this connect with the funding process of cryonics? When someone signs up and buys life insurance, they forgo consumption of the premiums during their lifetime and in effect invest them in the wider economy via the insurance company's investments in bonds etc.; when they die and the insurance is cashed in for cryonics, some of it gets used on the process itself, but a lot goes into the trust fund, where again it is invested in the wider economy. The trust fund uses the return for expenses like liquid nitrogen, but it's supposed to be using only part of the return (so the endowment builds up and there's protection against disasters), and in any case, society's gain from the extra investment should exceed the fund's return (since why would anyone offer the fund investments on which they themselves would take a loss, i.e. overpay the fund?). And this gain ought to compound over the long run.

So it seems to me that the main effect of cryonics on the economy is to increase long-term growth.
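To give a rough sense of scale for the compounding claim, here is a toy calculation; the premium, return, and horizon below are illustrative assumptions of mine, not figures from the comment:

```latex
% Toy compounding sketch; p, g, T and the example values are hypothetical.
% p = annual premium diverted from consumption, g = economy-wide return, T = years of paying premiums.
\[
  FV \;=\; p \cdot \frac{(1+g)^{T} - 1}{g}
  \qquad\text{e.g. } p = \$500,\; g = 5\%,\; T = 40 \;\Rightarrow\; FV \approx \$60{,}000
\]
% versus $500 \times 40 = \$20{,}000$ of consumption forgone over the same period.
```

The point of the sketch is only that premiums diverted into investment compound, so the capital eventually dwarfs the consumption given up.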

-1lmm10y
Money circulates more when used for short-term consumption than for long-term investment, no? So I'd expect a shift from the former to the latter to slow economic growth.
3gwern10y
I don't follow. How can consumption increase economic growth when it comes at the cost of investment? Investment is what creates economic output.
0lmm10y
Economic activity, i.e. positive-sum trades, is what generates economic output (that and direct labour). Investment demand and consumption demand can both lead to economic activity. AIUI the available evidence is that, in the current economy, a marginal dollar will produce a greater increase in economic activity when spent on consumption than on investment.
1RolfAndreassen10y
I think you are failing to make a crucial distinction: positive-sum trades do not generate economic activity, they are economic activity. Investment generates future opportunities for such trades.
-1Chrysophylax10y
There is such a thing as overinvestment. There is also such a thing as underconsumption, which is what we have right now.
1RolfAndreassen10y
Can you define either one without reference to value judgements? If not, I suggest you make explicit the value judgement involved in saying that we currently have underconsumption.
-2Chrysophylax10y
Yes, due to those being standard terms in economics. Overinvestment occurs when investment is poorly allocated due to overly-cheap credit and is a key concept of the Austrian school. Underconsumption is the key concept of Keynesian economics and the economic views of every non-idiot since Keynes; even Friedman openly declared that "we are all Keynesians now". Keynesian thought, which centres on the possibility of prolonged deficient demand (like what caused the recession), wasn't wrong, it was incomplete; the reason fine-tuning by demand management doesn't work simply wasn't known until we had the concept of the vertical long-run Phillips curve. Both of these ideas are currently being taught to first-year undergraduates.

I think the whole MIRI/LessWrong memeplex is not massively confused.

But conditional on it turning out to be very very wrong, here is my answer:

A. MIRI

  1. The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.

  2. MIRI's AI work turns out to trigger a massive negative outcome -- either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.

  3. It turns out that the UFAI explosion really is the risk, but that MIRI's AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial and error with nascent AGIs, is the right solution.

B. CfAR

  1. It turns out that the whole CfAR methodology is far inferior in instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only f…
9John_Maxwell10y
MIRI failure modes that all seem likely to me:
* They talk about AGI a bunch and end up triggering an AGI arms race.
* AI doesn't explode the way they talk about, causing them to lose credibility on the importance of AI safety as well. (Relatively slow-moving) disaster ensues.
* The future is just way harder to predict than everyone thought it would be... we're cavemen trying to envision the information age and all of our guesses are way off the mark in ways we couldn't have possibly foreseen.
* Uploads come first.

If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?

A few that come to mind:

  • Some religious framework being basically correct. Humans having souls, an afterlife, etc.
  • Antinatalism as the correct moral framework.
  • Romantic ideas of the ancestral environment are correct and what feels like progress is actually things getting worse.
  • Existential risk peaked with the Cold War, and further technological advances will only hasten the decline.

It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.

Otherwise the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics we do have people with mainstream views but we also have people who think that democracy is wrong. Having such a diversity of ideas makes it difficult for all of LessWrong to be wrong.

Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality most of the participants aren't signed up for cryonics.

Take a figure like Nassim Taleb. He's frequently quoted on LessWrong so he's not really outside the LessWrong memeplex. But he's also a Christian.

There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don't take to their full conclusion.

So, how might we find that all these ideas are massively wrong?

It's a topic that's very difficult to talk about. Basically you try out different ideas and look at the effects of those ideas in the real world. Mainly because of QS data I delved into the system of Somato-Psychoeducation. The data I measure…

It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.

That's not the LW memeplex being wrong, that's just a LW meme which is slightly more pessimistic than the more customary "the vast majority of all AIs are unfriendly but we might be able to make this work" view. I don't think any high-profile LWers who believed this would be absolutely shocked at finding out that it was too optimistic.

MIRI-LW being plausibly wrong about AI friendliness is more like: "Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don't actually "FOOM" dramatically ... they simply get smarter at the same exponential rate that the rest of the humans+tech system has been getting smarter all this time. There isn't much practical danger of them rapidly outracing the rest of the system and seizing power and turning us all into paperclips, or anything like that."

If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons they were supposed to be useful for).

2ChristianKl10y
If it's impossible to build FAI, that might mean that one should in general discourage technological development to prevent AGI from being built. It might mean building a moral framework that allows for effective prevention of technological development. I do think that significantly differs from the current LW memeplex.
5Ishaan10y
What I mean is... the difference between "FAI is possible but difficult" and "FAI is impossible and all AI are uFAI" is like the difference between "A narrow subset of people go to heaven instead of hell" and "every human goes to hell". Those two beliefs are mostly identical. Whereas "FOOM doesn't happen and there is no reason to worry about AI so much" is analogous to "belief in an afterlife is unfounded in the first place". That's a massively different idea. In one case, you're committing a little heresy within a belief system. In the other, the entire theoretical paradigm was flawed to begin with. If it turns out that "all AI are UFAI" is true, then LessWrong/MIRI would still be a lot more correct about things than most other people interested in futurology / transhumanism, because they got the basic theoretical paradigm right. (Just like, if it turned out hell existed but not heaven, religionists of many stripes would still have reason to be fairly smug about the accuracy of their predictions, even if none of the actions they advocated made a difference.)
0RolfAndreassen10y
Mostly identical as far as theology is concerned, but very different in terms of the optimal action. In the first case, you want (from a selfish-utilitarian standpoint) to ensure that you're in the narrow subset. In the second, you want to overthrow the system.
6RomeoStevens10y
We should be wary of ideologies that involve one massive failure point....crap.
0Curiouskid10y
Could you elaborate or give some examples? Which ideologies have one massive failure point, and which have lots of small failure points?
2RomeoStevens10y
The one I was thinking of was capitalism vs communism. I have had many communists tell me that communism only works if we make the whole world do it. A single point of failure.
0Luke_A_Somers10y
I wouldn't call that a single point of failure, I'd call that a refusal to test it and an admission of extreme fragility.
0Nornagest10y
That's kind of surprising to me. A lot of systems have proportional tipping points, where a change is unstable up to a certain proportion of the sample but suddenly turns stable after that point. Herd immunity, traffic congestion, that sort of thing. If the assumptions of communism hold, that seems like a natural way of looking at it. A structurally unstable social system just seems so obviously bad to me that I can't imagine it being modeled as such by its proponents. I suppose Marx didn't have access to dynamical systems theory, though.
2Lalartu10y
This is what some modern communists say, and it is just an excuse (and in fact wrong; it would not work even in that case). Early communists actually believed the opposite thing: the example of one communist nation would be enough to convert the whole world.
3Nornagest10y
It's been a while since I read Marx and Engels, but I'm not sure they would have been speaking in terms of conversion by example. IIRC, they thought of communism as a more-or-less inevitable development from capitalism, and that it would develop somewhat orthogonally to nation-state boundaries but establish itself first in those nations that were most industrialized (and therefore had progressed the furthest in Marx's future-historical timeline). At the time they were writing, that would probably have meant Britain. The idea of socialism in one country was a development of the Russian Revolution, and is something of a departure from Marxism as originally formulated.
3Squark10y
Define "massively wrong". My personal opinions (stated w/o motivation for brevity): * Building AGI from scratch is likely to be unfeasible (although we don't know nearly enough to discard the risk altogether) * Mind uploading is feasible (and morally desirable) but will trigger intelligence growth of marginal speed rather than a "foom" * "Correct" morality is low Kolmogorov complexity and conforms with radical forms of transhumanism Infeasibility of "classical" AGI and feasibility of mind uploading should be scientifically provable. So: My position is very different from MIRI's. Nevertheless I think LessWrong is very interesting and useful (in particular I'm all for promoting rationality) and MIRI is doing very interesting and useful research. Does it count as "massively wrong"?
3Calvin10y
We might find out by trying to apply them to the real world and seeing that they don't work. Well, it is less common now, but I think the slow retreat of the community from the position that instrumental rationality is an applied science of winning at life is one of the cases where beliefs had to be corrected to better match the evidence.
5lmm10y
Is it? I mean, I'd happily say that the LW crowd as a whole does not seem particularly good at winning at life, but that is and should be our goal.
0Calvin10y
Speaking broadly, the desire to lead a happy / successful / interesting life (however winning is defined) is a laudable goal shared by the vast majority of humans. The problem was that some people took the idea further and decided that winning is a good measure of whether someone is a good rationalist or not, as debunked by Luke here. There are better examples, but I can't find them now. Also, my two cents are that while a rational agent may have some advantage over an irrational one in a perfect universe, the real world is so fuzzy and full of noisy information that if superior reasoning and decision-making skill really improves your life, the improvements are likely to be not as impressive as advertised by hopeful proponents of systematized winning theory.
1lmm10y
I think that post is wrong as a description of the LW crowd's goals. That post talks as if one's akrasia were a fixed fact that had nothing to do with rationality, but in fact a lot of the site is about reducing or avoiding it. Likewise intelligence; that post seems to assume that your intelligence is fixed and independent of your rationality, but in reality this site is very interested in methods of increasing intelligence. I don't think anyone on this site is just interested in making consistent choices.
2bokov10y
It would look like a failure to adequately discount for inferential chain length.
0[anonymous]10y
By their degree of similarity to ancient religious, mythological, and sympathetic-magic forms with the nouns swapped out.

Should I not be using my real name?

Do you want to have a career at a conservative institution such as a bank, or a career in politics? If so, it's probably a bad idea to create too much attack surface by using your real name.

Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay to you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.

If you meet people in real life, they might already know you from the online commentary of yours that they have read, so you don't have to start by introducing yourself.

It's really a question of whether you think strangers are more likely to hurt or help you.

Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay to you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.

I think the best long-term strategy would be to invent a different name and use that other name consistently, even in real life. With everyone, except the government. Of course your family and some close friends would know your real name, but you would tell them that you prefer to be called by the other name, especially in public.

So, you have one identity, you make it famous, and everyone knows you. Only when you want to be anonymous do you use your real name. And the advantage is that you have papers for it, so your employer will likely not notice. You just have to be careful never to use your real name together with your fake name.

Unless your first name is unusual, you can probably re-use your first name, which is what most people will call you anyway; so if you meet people who know your true name and people who know your fake name at the same time, the fact that you use two names will not be exposed.

This seems to be what Gwern has done.

4Viliam_Bur10y
Exactly! He is such a good example that it is easy to not even notice him being a good example. There is no "Gwern has an identity he is trying to hide" thought running in my mind when I think about him (unlike with Yvain). It's just "Gwern is Gwern", nothing more. Instead of a link pointing to the darkness, there is simply no link there. It's not like I am trying to respect his privacy; I feel free to do anything I want and yet his privacy remains safe. (I mean, maybe if someone tried hard... but there is nothing reminding people that they could.) It's like an invisible fortress. But if instead he called himself Arthur Gwernach (abbreviated to Gwern), that would be even better.
0ChristianKl10y
What threat do you want to protect against? If you fear the NSA, they probably have no trouble linking your real name to your alias. They know where the person with your real name lives and they know what web addresses get browsed from that location.

I could not do that. I study at university under my real name and my identity as a university student is linked to my public identity. The link is strong enough that a journalist who didn't contact me via a social network called my university to get in touch with me.

On LessWrong I write under my first name plus the first two letters of my last name. That means that anyone who recognises my identity from somewhere else can recognize me, but if someone Googles for me they can't find me easily. I have no trouble standing up for what I write on LessWrong to people I meet in real life, but having a discussion with one of my aunts about it wouldn't be fun, so I don't make it too easy. I also wouldn't want my writing to be quoted out of context in other places. I would survive it, but given the low level of filtering on what I write on LW it would be annoying.

As far as self-censoring goes, I feel safe saying "one of my aunts" given that I have multiple of them; anybody reading couldn't deduce who I mean. Whenever I write something about someone I know, I think twice about whether someone could identify the person, and if so I wouldn't write it publicly under this identity. Asking for relationship advice and fleshing out a specific problem would be a no-go for me, because it might make details public that the other person didn't want to have public. Everything I say in that regard is supposed to be general enough that no harm will come from it to other people I know personally.
3Viliam_Bur10y
A conservative employer, less skilled than the NSA. For example, I want to write blogs against religion or against some political party, and yet not be at a disadvantage when applying for a job in a company where the boss supports them. Also to avoid conflicts with colleagues. Good point. In such a case I would put the university in the same category as an employer. Generally, all institutions that have power over me at some point of my life.
3Error10y
This. The face one presents to one's peers is justifiably different from the face one presents to amoral, potentially dangerous organizations. Probably the first thing that, say, a job interviewer will do with a potential candidate is Google their name. Unless the interviewer is exceptionally open minded, it is critical to your livelihood that they not find the Harry Potter erotica you wrote when you were fifteen. I have both a handle and a legal name. The handle is as much "me" as the legal one (more so, in some ways). I don't hesitate to give out my real name to people I know online, but I won't give my handle out to any organizational representative. I fear the bureaucracy more than random Internet kooks. It's not about evading the NSA; it's about keeping personal and professional life safely separated.
3Viliam_Bur10y
It's like when I lock my doors: a skilled thief would get inside anyway, but it's good to protect myself against the 99% of unskilled thieves (people who could become thieves when given a tempting opportunity). Similarly, it would be good to be protected against random people who merely type my name into Google, look at the first three pages of results, open the first five linked articles, and that's it. It's already rather late for me, but this is probably advice I will give my children. Technically, I could start using a new identity for controversial things today, and use my real name only for professional topics. But I am already almost 40. Even if after 10 years most of the stuff written using my real name dropped out of Google's top search results, it probably wouldn't make a big difference. And it seems to me that these days link rot is slower than it used to be. Also, I wouldn't know what to do with my old unprofessional blog articles: deleting them would be painful; moving them to the new identity would expose me; keeping them defeats the purpose. -- I wish I could send this message back in time to my teenage self. Who would probably choose a completely stupid nickname.
2Lumifer10y
How about a publicly accessible collection of everything you did or said online that is unerasable and lasts forever? "I hope you know that this will go down on your permanent record"
2ChristianKl10y
You don't name a threat. If you think that the work you produce online is crap and people you care about will dislike you for it, then having a permanent record of it is bad. If you think that the work you produce online is good, then having a permanent record of it is good. You might say that some people might not hire me when they read that I expressed some controversial view years ago in an online forum. I would say that I don't want to increase the power of those organisations by working for them anyway. I'd rather get hired by someone who likes me and values my public record. There's a bit of stoicism involved, but I don't think that it's useful to live while hiding yourself. I fear having lived a life that leaves no meaningful record more than living a life that leaves a record.
5gattsuru10y
How far from the norm are you? You may quickly find your feelings change drastically as your positions become more opposed to the mainstream. I don't want to work for organizations that would not hire me due to controversial views, but depending on the view and on my employment prospects, my choices may be heavily constrained. I'd rather know which organizations I can choose to avoid than be forced out of organizations. ((There are also time-costs involved with doing it this way: if a company says it hates X in the news, and I like X, I can avoid sending them a resume. But if I send them a resume and then discover during an interview that they don't want to hire me due to my positions on X, it's a lot of lost energy.)) Conversely, writing under my own name would incentivize avoiding topics that are, or are likely to become, controversial.
1Lumifer10y
No, I name a capability to misrepresent and hurt you.
2ChristianKl10y
If I do most of my public activity under the identity Bob but the government knows me as Dave, someone can still misrepresent me as Bob by misquoting things written under the Bob identity in the past. If I want to prevent permanent records, I would have to switch identities every so often, which is hard to do without losing something if you have anything attached to those identities that you don't want to lose.
1Ishaan10y
It depends on how vocal and how controversial you are being with your internet persona. There is always the chance that you'll acquire the ire of an angry mob... and if so, you've effectively doxxed yourself for them.
0ChristianKl10y
Being open with your name doesn't automatically mean that your phone number and address are also public. For most people I don't think the risk is significant compared to other risks such as getting hit by a car. I would expect it to be one of those risks that's easy to visualize but has a rather low probability.
3gattsuru10y
Being open with your name does mean that your phone numbers and address are likely to be public. Saarelma is a little more protected than the average, since Finland's equivalent to WhitePages is not freely available world-wide, but those in the United States with an unusual name can be found for free. That's separate from the "ire of an angry mob" risk, which seems more likely to occur primarily for people who have a large enough profile that they'd have to have outed themselves anyway, though.

Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?

ETA: Potentially less contentious rephrase: why isn't making a life as important as saving a life?

Whether this is so or not depends on whether you are assuming hedonistic or preference utilitarianism. For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder, except that as a matter of fact murder causes much more suffering than contraception does: to the person who dies, to his or her loved ones, and to society at large (by increasing fear). By contrast, preference utilitarians can also appeal to the preferences of the individual who is killed: whereas murder causes the frustration of an existing preference, contraception doesn't, since nonexisting entities can't have preferences.

The question also turns on issues about population ethics. The previous paragraph assumes the "total view": that people who do not exist but could or will exist matter morally, and just as much. But some people reject this view. For these people, even hedonistic utilitarians can condemn murder more harshly than contraception, wholly apart from the indirect effects of murder on individuals and society. The pleasure not experienced by the person who fails to be conceived doesn't count, or counts less than the pleasure that the victim of murder is deprived of, since the latter exists but the former doesn't.

For further discussion, see Peter Singer's Practical Ethics, chap. 4 ("What's wrong with killing?").

3torekp10y
Pablo makes great points about the suffering of loved ones, etc. But, modulo those points, I'd say making a life is as important as saving a life. (I'm only going to address the potentially contentious "rephrase" here, and not the original problem; I find the making life / saving life case more interesting.) And I'm not a utilitarian. When you have a child, even if you follow the best available practices, there is a non-trivial chance that the child will have a worse-than-nothing existence. They could be born with some terminal, painful, and incurable illness. What justifies taking that risk? Suggested answer: the high probability that a child will be born to a good life. Note that in many cases, the child who would have an awful life is a different child (coming from a different egg and/or sperm - a genetically defective one) than the one who would have a good life.
0[anonymous]10y
Only if the hedonistic utilitarian is also a total utilitarian, rather than an average utilitarian, right? Edit: Read your second paragraph, now I feel silly.

Making a person and unmaking a person seem like utilitarian inverses

Doesn't seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance) but not nearly to the degree that killing someone does. If you're only focusing on the utility for the person made or unmade, then maybe (although see blacktrance's comment on that), but as a utilitarian you have no license for doing that.

9solipsist10y
A hermit, long forgotten by the rest of the world, lives a middling life all alone on a desert island. Eve kills the hermit secretly and painlessly, sells his organs, and uses the money to change the mind of a couple who had decided against having additional children. The couple's child leads a life far longer and happier than the forgotten hermit's ever would have been. Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?

Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.

If you're asking me, I'd say no, but I'm not a utilitarian, partly because utilitarianism answers "yes" to questions similar to this one.

0Luke_A_Somers10y
Only if you use a stupid utility function.
1pragmatist10y
Utilitarianism doesn't use any particular utility function. It merely advocates acting based on an aggregation of pre-existing utility functions. So whether or not someone's utility function is stupid is not something utilitarianism can control. If people in general have stupid utility functions, then preference utilitarianism will advocate stupid things. In any case, the problem I was hinting at in the grandparent is known in the literature (following Rawls) as "utilitarianism doesn't respect the separateness of persons." For utilitarianism, what fundamentally matters is utility (however that is measured), and people are essentially just vessels for utility. If it's possible to substantially increase the amount of utility in many of those vessels while substantially decreasing it in just one vessel, then utilitarianism will recommend doing that. After all, the individual vessels themselves don't matter, just the amount of utility sloshing about (or, if you're an average utilitarian, the number of vessels matters, but the vessels don't matter beyond that). An extreme consequence of this kind of thinking is the whole "utility monster" problem, but it arises in slightly less fanciful contexts as well (kill the hermit, push the fat man in front of the trolley). I fundamentally reject this mode of thinking. Morality should be concerned with how individuals, considered as individuals, are treated. This doesn't mean that trade-offs between peoples' rights/well-being/whatever are always ruled out, but they shouldn't be as easy as they are under utilitarianism. There are concerns about things like rights, fairness and equity that matter morally, and that utilitarianism can't capture, at least not without relying on convoluted (and often implausibly convenient) justifications about how behaving in ways we intuitively endorse will somehow end up maximizing utility in the long run.
0Luke_A_Somers10y
Yes, I should have rephrased that as 'Only because hedonic utilitarianism is stupid' --- how's that?
3[anonymous]10y
If there are a large number of "yes" replies, the hermit lifestyle becomes very unappealing.
3cata10y
Sure, Eve did a good thing.
2solipsist10y
Does that mean we should spend more of our altruistic energies on encouraging happy productive people to have more happy productive children?
0cata10y
Maybe. I think the realistic problem with this strategy is that if you take an existing human and help him in some obvious way, then it's easy to see and measure the good you're doing. It sounds pretty hard to figure out how effectively or reliably you can encourage people to have happy productive children. In your thought experiment, you kill the hermit with 100% certainty, but creating a longer, happier life that didn't detract from others' was a complicated conjunction of things that worked out well.
0Calvin10y
I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.
0solipsist10y
I didn't mean for the hermit to be sad, just less happy than the child.
0Calvin10y
Ah, I must have misread your representation, but English is not my first language, so sorry about that. I guess if I were a particularly well-organized, ruthlessly effective utilitarian as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to 3, not 2, happy children.
0RowanE10y
It's specified that he was killed painlessly.
0Calvin10y
It is true, I wasn't specific enough, but I wanted to emphasize the opinion part, and the suffering part was meant to emphasize his life condition. He was, presumably, killed without his consent, and therefore the whole affair seems morally icky from a non-utilitarian perspective. If your utility function does not penalize doing bad things as long as the net result comes out right, you are likely to end up in a world full of utility monsters.
2Chrysophylax10y
We live in a world full of utility monsters. We call them humans.
2Calvin10y
I am assuming that all the old sad hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us, so that I don't become upset about those atrocities that are currently being committed in my name? We are not even close to being utility monsters, and personally I know very few people who I would consider actual utilitarians.
1Chrysophylax10y
No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn't exist otherwise, but the same cannot be said for battery animals.
0Gunnar_Zarncke10y
But driving this reasoning to its logical conclusion you get a lot of strange results. The premise is that humans are different from animals in that they know that they inflict suffering and are thus able to change it, and according to some ethics have to. Actually this would be kind of a disadvantage of knowledge. There was a not so recent game-theoretic post about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don't know will always choose defect and thus reap a higher benefit than you - except if they are too many. So either
* You need to construct a world without animals, as animals suffer from each other and humans know that and can modify the world to get rid of this.
* Humans could alter themselves to not know that they inflict harm (or consider harm unimportant, or restrict empathy to humans...) and thus avoid the problem thereby.
The key point I think is that a concept that rests on some aspect of being human is selected and taken to its 'logical conclusion' out of context and without regard to the fact that this concept is an evolved feature itself. As there is no intrinsic moral fabric of the universe, we effectively force our evolved values on our environment and make it conform to them. Insofar, excessive empathy (which is an aggregated driver behind ethics) is not much different from excessive greed, which also affects our environment - only we have already learned that the latter might be no good idea. The conclusion is that you also have to balance extreme empathy with reality. ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/
0Chrysophylax10y
Robert Nozick: My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don't identify with); they act as though the utility of non-sapient animal is vastly smaller than the utility of a human and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.
0Gunnar_Zarncke10y
That's the way it looks. And this is probably part of being human. I'd like to rephrase your answer as follows, to drive home that ethics is mostly driven by empathy:
-1Calvin10y
In this case, I concur that your argument may be true if you include animals in your utility calculations. While I do have reservations against causing suffering in humans, I don't explicitly include animals in my utility calculations, and while I don't support causing suffering for the sake of suffering, I don't have any ethical qualms about products made with animal fur, animal testing, or factory farming. So with regard to pigs, cows, and chickens, I am a utility monster.
2Ishaan10y
This fails to fit the spirit of the problem, because it takes the preferences of currently living beings (the childless couple) into account. A scenario that would capture the spirit of the problem is: "Eve kills a moderately happy hermit who moderately prefers being alive, and uses the money to create a child who is predisposed to be extremely happy as a hermit. She leaves the child on the island to live life as an extremely happy hermit who extremely prefers being alive." (The "hermit" portion of the problem is unnecessary now - you can replace hermit with "family" or "society" if you want.) Compare with... "Eve must choose between creating a moderately happy hermit who moderately prefers being alive OR an extremely happy hermit who extremely prefers being alive." (Again, hermit / family / society are interchangeable) and "Eve must choose between killing a moderately happy hermit who moderately prefers being alive OR killing an extremely happy hermit who extremely prefers being alive."
2Lumifer10y
This looks very similar to the trolley problem, specifically the your-organs-are-needed version.
2A1987dM10y
The grounds to avoid discouraging people from walking into hospitals are way stronger than the grounds to avoid discouraging people from being hermits.
1Lumifer10y
So you think that the only problem with the Transplant scenario is that it discourages people from using hospitals..?
0A1987dM10y
Not the only one, but the deal-breaking one.
1Lumifer10y
See this
-2Eugine_Nier10y
Well, that's the standard rationalization utilitarians use to get out of that dilemma.
0adbge10y
I thought the same thing and went to dig up the original. Here it is: This is from the consequentialism page on the SEP, and it goes on to discuss modifications of utilitarianism that avoid biting the bullet (scalpel?) here.
9solipsist10y
This situation seems different to me for two reasons. Off-topic way: killing the "donor" is bad for similar reasons as 2-boxing the Newcomb problem is bad. If doctors killed random patients, then patients wouldn't go to hospitals and medicine would collapse. IMO the supposedly utilitarian answer to the transplant problem is not really utilitarian. On-topic way: the surgeons transplant organs to save lives, not to make babies. Saving lives and making lives seem very different to me, but I'm not sure why (or if) they differ from a utilitarian perspective.
8Viliam_Bur10y
Analogically, "killing a less happy person and conceiving a more happy one" may be wrong in a long term, by changing a society into one where people feel unsafe.
0Lumifer10y
You're fixating on the unimportant parts. Let me change the scenario slightly to fix your collapse-of-medicine problem: once in a while the government consults its random number generator and selects one or more people, as needed, to be cut up for organs. The government is careful to keep the benefits (in lives or QALYs or whatever) higher than the costs. Any problems here?
4Moss_Piglet10y
That people are stupefyingly irrational about risks, especially with regard to medicine. As an example: my paternal grandmother died of a treatable cancer less than a year before I was born, out of a fear of doctors which she had picked up from post-war propaganda about the T4 euthanasia program. Now this is a woman who was otherwise as healthy as they come, living in America decades after the fact, refusing to go in for treatment because she was worried some oncologist was going to declare a full-blooded German immigrant as genetically impure and kill her to improve the Aryan race. Now granted that's a rather extreme case, and she wasn't exactly stable on a good day from what I hear, but the point is that whatever bits of crazy we have get amplified completely out of proportion when medicine comes into it. People already get scared out of seeking treatment over rumors of mythical death panels or autism-causing vaccine programs, so you can only imagine how nutty they would get over even a small risk of actual government-sanctioned murder in hospitals. (Not to mention that there are quite a lot of people with a perfectly legitimate reason to believe those RNGs might "just happen" to come up in their cases if they went in for treatment; it's not like American bureaucrats have never abused their power to target political enemies before.)
2Nornagest10y
The traditional objection to this sort of thing is that it creates perverse incentives: the government, or whichever body is managing our bystander/trolley tracks interface, benefits in the short term (smoother operations, can claim more people saved) if it interprets its numbers to maximize the number of warm bodies it has to work with, and the people in the parts pool benefit from the opposite. At minimum we'd expect that to introduce a certain amount of friction. In the worst case we could imagine it leading to a self-reinforcing establishment that firmly believes it's being duly careful even when independent data says otherwise: consider how the American War on Drugs has played out.
0Lumifer10y
That's a very weak objection given that the real world is full of perverse incentives and still manages to function, more or less, sorta-kinda...
1A1987dM10y
Only if the Q in QALY takes into account the fact that people will be constantly worried they might be picked by the RNG.
0A1987dM10y
And of course, I wouldn't trust a government made of mere humans with such a determination, because power corrupts humans. A friendly artificial intelligence on the other hand...
0solipsist10y
Edited away an explanation so as not to take the last word. Short answer: no. I'd like to keep this thread focused on making a life vs. saving a life, not arguments about utilitarianism in general. I realize there is much more to be said on this subject, but I propose we end discussion here.
0A1987dM10y
Yes, but I wouldn't do that myself because of ethical injunctions.
6lmm10y
Cheap answer, but remember that it might be the true one: because utilitarianism doesn't accurately describe morality, and the right way to live is not by utilitarianism.
5Dias10y
Upvoted. Remember to keep in mind that the answer might be "making a person is as good as killing a person is bad." Here's a simple argument for why we can't be indifferent to creating people. Suppose we have three worlds:
* Jon is alive and has 10 utils
* Jon was never conceived
* Jon is alive and has 20 utils
Assume we prefer Jon to have 20 utils rather than 10. Assume also we're indifferent between Jon having 10 utils and Jon's non-existence. Hence by transitivity we must prefer that Jon exist and have 20 utils to Jon's non-existence. So we should try to create Jon, if we think he'll have over 10 utils.
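A minimal formalization of the transitivity step, with notation introduced here rather than taken from the comment:

```latex
% W_10: Jon exists with 10 utils;  W_0: Jon never conceived;  W_20: Jon exists with 20 utils.
% The comment's assumptions, written as a preference ordering (\succ strict, \sim indifference):
\begin{align*}
  W_{20} &\succ W_{10} && \text{(more utils for Jon is better)} \\
  W_{10} &\sim  W_{0}  && \text{(indifference between Jon at 10 utils and non-existence)} \\
  \Rightarrow\quad W_{20} &\succ W_{0} && \text{(by transitivity of the ordering)}
\end{align*}
```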
0Gurkenglas10y
Note that this kind of utilon calculation also equates your scenarios with ones where, magically, a whole bunch of people came into existence and ceased to exist a few minutes ago, with lots of horrible torture, followed by amnesia, in between.
4Ishaan10y
Possibly because... You have judged. It's possible that this is all there is to it... not killing people who do not want to die might just be a terminal value for humans, while creating people who would want to be created might not be a terminal value. (Might. If you think that it's an instrumental value in favor of some other terminal goal, you should look for it)
3Arran_Stirton10y
As far as I can tell, killing/not-killing a person isn't the same as making/not-making a person. I think this becomes more apparent if you consider the universe as timeless. This is the thought experiment that comes to mind. It's worth noting that all that follows depends heavily on how one calculates things.

Comparing the universes where we choose to make Jon to the one where we choose not to:
* Universe A: Jon made; Jon lives a fulfilling life with global net utility of 2u.
* Universe A': Jon not made; Jon doesn't exist in this universe, so the amount of utility he has is undefined.

Comparing the universes where we choose to kill an already-made Jon to the one where we choose not to:
* Universe B: Jon not killed; Jon lives a fulfilling life with global net utility of 2u.
* Universe B': Jon killed; Jon's life is cut short, and his life has a global net utility of u.

The marginal utility for Jon in Universe B vs B' is easy to calculate: (2u - u) gives a total marginal utility (i.e. gain in utility) of u from choosing not to kill Jon over killing him. However, the marginal utility for Jon in Universe A vs A' is undefined (in the same sense that 1/0 is undefined). As Jon doesn't exist in universe A', it is impossible to assign a value to Utility_Jon_A'; as a result our marginal utility (Utility_Jon_A - Utility_Jon_A') is equal to (2u - [an undefined value]). As such, the marginal utility lost or gained by choosing between universes A and A' is undefined. It follows from this that the marginal utility between any universe and A' is undefined. In other words, our rules for deciding which universe is better for Jon break down in this case.

I myself (probably) don't have a preference for creating universes where I exist over ones where I don't. However, I'm sure that I don't want this current existence of me to terminate. So personally I choose to maximise the utility of people who already exist over creating more people. Eliezer explains here why bringing people into existence isn't all t…
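Restating the comparison compactly (notation mine, following the universes above):

```latex
% Killing vs. not killing an existing Jon: both terms are defined, so the difference is meaningful.
U_{\mathrm{Jon}}(B) - U_{\mathrm{Jon}}(B') = 2u - u = u
% Making vs. not making Jon: U_{Jon}(A') has no value, so the difference is undefined.
U_{\mathrm{Jon}}(A) - U_{\mathrm{Jon}}(A') = 2u - (\text{undefined}) = \text{undefined}
```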
2Kaj_Sotala10y
I created a new article about this.
2Douglas_Knight10y
Here are two related differences between a child and an adult. (1) It is very expensive to turn a child into an adult. (2) An adult is highly specific and not replaceable, while a fetus has a lot of subjective uncertainty and is fairly easily duplicated within that uncertainty. Uploading is relevant to both of these points.
2blacktrance10y
Because killing a person deprives them of positive experiences that they otherwise would have had, and they prefer to have them. But a nonexistent being doesn't have preferences.

Once you've killed them and they've become nonexistent, then they don't have preferences either.

2pragmatist10y
Presumably what should matter (assuming preference utilitarianism) when we evaluate an act are the preferences that exist at (or just before) the time of commission of the act. If that's right, then the non-existence of those preferences after the act is performed is irrelevant. The Spanish Inquisition isn't exculpated because its victims' preferences no longer exist. They existed at the time they were being tortured, and that's what should matter.
0lmm10y
So it's fine to do as much environmental damage as we like, as long as we're confident the effects won't be felt until after everyone currently alive is dead?
4Nornagest10y
I'd presume that many people's preferences include terms for the expected well-being of their descendants.
-1lmm10y
That's a get out of utilitarianism free card. Many people's preferences include terms for acting in accordance with their own nonutilitarian moral systems.
3Nornagest10y
Preference utilitarianism isn't a tool for deciding what you should prefer, it's a tool for deciding how you should act. It's entirely consistent to prefer options which involve you acting according to whim or some nonutilitarian system (example: going to the pub), yet for it to dictate -- after taking into account the preferences of others -- that you should in fact do something else (example: taking care of your sick grandmother). There may be some confusion here, though. I normally think of preferences in this context as being evaluated over future states of the world, i.e. consequences, not over possible actions; it sounds like you're thinking more in terms of the latter.
-2lmm10y
Yeah, I sometimes have trouble thinking like a utilitarian. If we're just looking at future states of the world, then consider four possible futures: your (isolated hermit) granddaughter exists and has a happy life, your granddaughter exists and has a miserable life, your granddaughter does not exist because she died, your granddaughter does not exist because she was never born.

It seems to me that if utilitarianism is to mean anything then the utility of the last two options should be the same - if we're allowed to assign utility values to the history of whether she was born and died, even though both possible paths result in the same world-state, then it would be equally valid to assign different utilities to different actions that people took even if they turned out the same, and e.g. virtue ethics would qualify as a particular kind of utilitarianism.

If we accept that the utility of the last two options is the same, then we have an awkward dilemma. Either this utility value is higher than option 2 - meaning that if someone's life is sufficiently miserable, it's better to kill them than allow them to continue living. Or it's lower, meaning that it's always better to give birth to someone than not. Worse, if your first granddaughter was going to be miserable and your second would be happy, it's a morally good action if you can do something that kills your first granddaughter but gives rise to the birth of your second granddaughter.

It's weirdly discontinuous to say that your first granddaughter's preferences become valid once she's born - does that mean that killing her after she's born is a bad thing, but setting up, before she's born, some Rube Goldberg contraption that will kill her after she's born is a good thing?
0pragmatist10y
Whatever action I take right now, eventually the macroscopic state of the universe is going to look the same (heat death of the universe). Does this mean the utilitarian is committed to saying that all actions available to me are morally equivalent? I don't think so. Even though the (macroscopic) end state is the same, the way the universe gets there will differ, depending on my actions, and that matters from the perspective of preference utilitarianism.
0lmm10y
What, then, would you say is the distinction between a utilitarian and a virtue ethicist? Are they potentially just different formulations of the same idea? Are there any moral systems that definitely don't qualify as preference utilitarianism, if we allow this kind of distinction in a utility function?
0pragmatist10y
Do you maybe mean the difference between utilitarianism and deontological theories? Virtue ethics is quite obviously different, because it says the business of moral theory is to evaluate character traits rather than acts. Deontology differs from utilitarianism (and consequentialism more generally) because acts are judged independently of their consequences. An act can be immoral even if it unambiguously leads to a better state of affairs for everyone (a state of affairs where everyone's preferences are better satisfied and everyone is happier, say), or even if it has absolutely no impact on anyone's life at any time. Consequentialism doesn't allow this, even if it allows distinctions between different macroscopic histories that lead to the same macroscopic outcome.
-3Eugine_Nier10y
No, deontologists are simply allowed to consider factors other than consequences.
1blacktrance10y
That's true, but they have preferences before you kill them. In the case of contraception, there is no being to have ever had preferences.
0Leonhart10y
They never do "become nonexistent". You just happen to have found one of their edges.
0[anonymous]10y
Yes, but there may be a moral difference between frustrating a preference that once existed, and causing a preference not to be formed at all. See my reply to the original question.
1hairyfigment10y
Even within pleasure- or QALY-utilitarianism, which seems technically wrong, you can avoid this by recognizing that those possible people probably exist regardless in some timeline or other. I think. We don't understand this very well. But it looks like you want lots of people to follow the rule of making their timelines good places to live (for those who've already entered the timeline). Which does appear to save utilitarianism's use as a rule of thumb.
1Manfred10y
From a classical utilitarian perspective, yeah, it's pretty much a wash, at least relative to non-fatal crimes that cause similar suffering. However, around here, "utilitarian" is usually meant as "consistent consequentialism." In that frame we can appeal to motives like "I don't want to live in a society with lots of murder, so it's extra bad."
0DanielLC10y
It takes a lot of resources to raise someone. If you're talking about getting an abortion, it's not a big difference, but if someone has already invested enough resources to raise a child, and then you kill them, that's a lot of waste.
0hyporational10y
Isn't negative utility usually more motivating to people anyway? This seems like a special case of that, if we don't count the important complications of killing a person that pragmatist pointed out.
-3Lumifer10y
No, because time is directional.

How much does a genius cost? MIRI seems intent on hiring a team of geniuses. I’m curious about what the payroll would look like. One of the conditions of Thiel’s donations was that no one employed by MIRI can make more than one hundred thousand dollars a year. Is this high enough? One of the reasons I ask is that I just read a story about how Google pays an extremely talented programmer over 3 million dollars per year - doesn't MIRI also need extremely talented programmers? Do they expect the most talented to be more likely to accept a lower salary for a good cause?

6ChristianKl10y
Yes. Anyone with the necessary mindset of thinking that AI is the most important issue in the world will accept a lower salary than what's possible in the market elsewhere. I don't know whether MIRI has an interest in hiring people who don't have that moral framework.
5ChrisHallquist10y
Highly variable with skills, experience, and how badly they want the job. I bet there are some brilliant adjunct professors out there effectively making minimum wage because they really wanted to be professors. OTOH, I bet that google programmer isn't just being paid for talent, but specific skills and experience.
5Dan_Weinand10y
Two notes: First, the term "genius" is difficult to define. Someone may be a "genius" at understanding the sociology of sub-Saharan African tribes, but this skill will obviously command a much lower market value compared to someone who is a "genius" as a chief executive officer of a large company. A more precise definition of genius will narrow the range of costs per year. Second, and related to the first, MIRI is (to the extent of my knowledge) currently focusing on mathematics and formal logic research rather than programming. This makes recruiting a team of "geniuses" much cheaper. While skilled mathematicians can attract quite strong salaries, highly skilled programmers can demand significantly more. It seems the most common competing job for MIRI's researchers would be that of a mathematics professor (which has a median salary of ~$88,000). Based on this, MIRI could likely hire high quality mathematicians while offering them relatively competitive salaries.
4DanArmak10y
Many such geniuses (top intellectual performers in fields where they can out-perform the median by several orders of magnitude) choose their work not just on the basis of payment, but what they work on, where, how, and with whom (preferring the company of other top performers). If MIRI were to compete with Google at hiring programmers, I expect money would be important but not overwhelmingly so. Google lets you work with many other top people in your field, develop and use cool new tech, and have big resources for your projects, and it provides many non-monetary workplace benefits. MIRI lets you contribute to existential risk reduction, work with rationalists, etc.
2D_Alex10y
From some WSJ article: The setting of Einstein's initial salary at Princeton illustrates his humility and attitude toward wealth. According to "Albert Einstein: Creator & Rebel" by Banesh Hoffmann, (1972), the 1932 negotiations went as follows: "[Abraham] Flexner invited [Einstein] to name his own salary. A few days later Einstein wrote to suggest what, in view of his needs and . . . fame, he thought was a reasonable figure. Flexner was dismayed. . . . He could not possibly recruit outstanding American scholars at such a salary. . . . To Flexner, though perhaps not to Einstein, it was unthinkable [that other scholars' salaries would exceed Einstein's.] This being explained, Einstein reluctantly consented to a much higher figure, and he left the detailed negotiations to his wife." The reasonable figure that Einstein suggested was the modest sum of $3,000 [about $46,800 in today's dollars]. Flexner upped it to $10,000 and offered Einstein an annual pension of $7,500, which he refused as "too generous," so it was reduced to $6,000. When the Institute hired a mathematician at an annual salary of $15,000, with an annual pension of $8,000, Einstein's compensation was increased to those amounts.
2Chrysophylax10y
Eliezer once tried to auction a day of his time but I can't find it on ebay by Googling. On an unrelated note, the top Google result for "eliezer yudkowsky " (note the space) is "eliezer yudkowsky okcupid". "eliezer yudkowsky harry potter" is ninth, while HPMOR, LessWrong, CFAR and MIRI don't make the top ten.
3kalium10y
I suspect more of the price comes from his reputation than his intelligence.
2drethelin10y
I believe eliezer started the bidding at something like 4000 dollars
3DanArmak10y
But where did it end?
4drethelin10y
There were no bids
-3shminux10y
Actually, his fb profile comes up first in instant search:
1TheWakalix5y
That's not search, that's history.

Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, get out of bed. Repeat the next evening.

How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences - first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they're "clearly" being the victims of ... (read more)

The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind "I prefer A to B at time t1 and B to A at time t2", like the ones of your example. They are more like "I prefer A to B and B to C and C to A, all at the same time".

The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it---they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other "bad" kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn't actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
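For concreteness, here is a minimal sketch of that money pump (the trade sequence and the one-cent fee follow the description above; the item names, the starting item, and the acceptance rule are simplifying assumptions of mine):

```python
# Illustrative money pump against the strictly circular preferences A > B, B > C, C > A.
# The agent pays one cent for any trade to an item it strictly prefers to what it holds.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred) pairs

def accepts_trade(offered, held):
    """The agent trades `held` (plus one cent) for `offered` iff it strictly prefers `offered`."""
    return (offered, held) in prefers

holding, cents_paid = "B", 0
for offered in ["A", "C", "B"]:  # the exploiter's sequence of offers
    if accepts_trade(offered, holding):
        holding = offered
        cents_paid += 1

print(holding, cents_paid)  # prints: B 3  (back where it started, three cents poorer)
```

Running it leaves the agent holding exactly what it started with, three cents poorer, which is the sense in which the simultaneous cycle is exploitable while the couple's time-indexed preferences are not.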

2Kaj_Sotala10y
Okay. But that still makes it sound like there would almost never be actual real-life cases where you could clearly say that the person exhibited circular preferences? At least I can't think of any real-life scenario that would be an example of the way you define "bad" circular preferences.
4ChristianKl10y
I think there are plenty of cases where people prefer sitting in front of their computer today over going to the fitness studio, while preferring going to the fitness studio tomorrow over sitting in front of their computer tomorrow. Changing the near frame to a far frame changes preferences. I know that's not an exact example of a Dutch book, but it illustrates the principle that framing matters. I don't think it's hard to get people into a laboratory and offer them different food choices to get a case where a person prefers A to B, C to A and B to C. I think it's difficult to find real-life cases of the general pattern because we don't use the idea as a phenomenal primitive, and therefore we don't perceive those situations as a general pattern of normal behavior but as exceptions where people are being weird.
0asr10y
I feel like it happens to me in practice routinely. I see options A, B, C and D and I keep oscillating between them. I am not indifferent; I perceive pairwise differences but can't find a global optimum. This can happen in commonplace situations, e.g., when choosing between brands of pasta sauce or somesuch. And I'll spend several minutes agonizing before finally picking one. I had the impression this happened to a lot of people.
2Richard_Kennaway10y
That looks like noisy comparisons being made on near indistinguishable things. (Life tip: if they're too difficult to distinguish, it hardly matters which you choose. Just pick one already.)
0Douglas_Knight10y
I can't find it anymore, but years ago I found on LW a recording of an interview with someone who had exhibited circular preferences in an experiment.
0Alejandro110y
The Allais paradox is close to being one such example, though I don't know if it can be called "real-life". There may be marketing schemes that exploit the same biases. A philosophical case where I feel my naive preferences are circular is torture vs. dust specks. As I said here:
[-][anonymous]10y70

On the Neil deGrasse Tyson Q&A on Reddit, someone asked: "Since time slows relative to the speed of light, does this mean that photons are essentially not moving through time at all?"

Tyson responded "yes. Precisely. Which means ----- are you seated?Photons have no ticking time at all, which means, as far as they are concerned, they are absorbed the instant they are emitted, even if the distance traveled is across the universe itself."

Is this true? I find it confusing. Does this mean that a photon emitted at location A at t0 is a... (read more)

There are no photons. There, you see? Problem solved.

(no, the author of the article is not a crank; he's a Nobel physicist, and everything he says about the laws of physics is mainstream)

2satt10y
Problem evaded. Banning a word fails to resolve the underlying physical question. Substitute "wavepackets of light" for "photons"; what then?
4Anatoly_Vorobey10y
I know, I was joking. And it was a good opportunity to link to this (genuinely interesting) paper. ... well, mostly joking. There's a kernel of truth there. "There are no photons" says more than just banning a word. "Wavepackets of light" don't exist either. There's just the electromagnetic field, its intensity changes with time, and the change propagates in space. Looking at it like this may help understand the other responses to the question (which are all correct). When you think of a photon as a particle flying in space, it's hard to shake off the feeling that you somehow ought to be able to attach yourself to it and come along for the ride, or to imagine how the particle itself "feels" about its existence, how its inner time passes. And then the answer that for a photon, time doesn't pass at all, feels weird and counter-intuitive. If you tell yourself there's no particle, just a bunch of numbers everywhere in space (expressing the EM field) and a slight change in those numbers travels down the line, it may be easier to process. A change is not an object to strap yourself to. It doesn't have "inner time".
4satt10y
I feel I should let this go, and yet... But we can make them! On demand, even. By this argument, ocean waves don't exist either. There's only the sea, its height changes with time, and the change propagates in space.
4Douglas_Knight10y
You say that as a reductio ad absurdum, but it is good for some purposes. Anatoly didn't claim that one should deny photons for all purposes, but only for the purpose of unasking the original question.
0satt10y
In this case, unasking the original question is basically an evasion, though, isn't it? Denying photons may enable you to unask hen's literal question, or the unnamed Reddit poster's literal question, but it doesn't address the underlying physical question they're driving at: "if observer P travels a distance x at constant speed v in observer Q's rest frame, does the elapsed time in P's rest frame during that journey vanish in the limit where v tends to c?"
0Douglas_Knight10y
I reject the claim that your rephrasing is the "real" question being asked. By rephrasing the question, you are rejecting it just as much as Anatoly. I think it is more accurate to say that you evade the question, while he is up front about rejecting it. In fact, I think your answer is better and probably it is generally better to rephrase problematic questions to answerable questions before explaining that they are problematic, but the latter is part of a complete answer and I think Anatoly is correct in how he addresses it.
-2satt10y
That multiple different people automatically treated hen's question like it were my rephrasing backs me up on this one, I reckon. Rephrasing a question can be the first step to confronting it head-on rather than rejecting it. If a tourist, looking for the nearest train station, wandered up to me and asked, "where station is the?", and I rearranged their question to the parseable "where is the station?" and answered that, I wouldn't say I rejected or evaded their query.
0[anonymous]10y
Oh. Great!

Other people have explained this pretty well already, but here's a non-rigorous heuristic that might help. What follows is not technically precise, but I think it captures an important and helpful intuition.

In relativity, space and time are replaced by a single four-dimensional space-time. Instead of thinking of things moving through space and moving through time separately, think of them as moving through space-time. And it turns out that every single (non-accelerated) object travels through space-time at the exact same rate, call it c.

Now, when you construct a frame of reference, you're essentially separating out space and time artificially. Consequently, you're also separating an object's motion through space-time into motion through space and motion through time. Since every object moves through space-time at the same rate, when we separate out spatial and temporal motion, the faster the object travels through space the slower it will be traveling through time. The total speed, adding up speed through space and speed through time, has to equal the constant c.

So an object at rest in a particular frame of reference has all its motion along the temporal axis, and no motion at all ... (read more)
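For readers who like to check the bookkeeping, here is a small numerical sketch of that heuristic. It uses one common way of making the slogan precise, taking "speed through time" to be c·dτ/dt (a convention I am assuming, not something from the comment above), in which case the space and time contributions combine as squares, Pythagorean-style, to give the constant c:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def speed_through_time(v):
    """'Speed through time', defined here as c * dtau/dt for an object moving at speed v."""
    return c * math.sqrt(1.0 - (v / c) ** 2)

# Check that (speed through space)^2 + (speed through time)^2 = c^2 for a few sample speeds.
for v in [0.0, 0.5 * c, 0.9 * c, 0.999 * c]:
    combined = math.sqrt(v ** 2 + speed_through_time(v) ** 2)
    print(f"v = {v / c:.3f} c   speed through time = {speed_through_time(v) / c:.3f} c   combined = {combined / c:.3f} c")
```

The combined value comes out as c for every speed; as v approaches c, the "speed through time" term shrinks toward zero, which is the heuristic's version of "no time passes for light".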

1[anonymous]10y
That is helpful, and interesting, though I think I remain a bit confused about the idea of 'moving through time' and especially 'moving through time quickly/slowly'. Does this imply some sort of meta-time, in which we can measure the speed at which one travels through time? And I think I still have my original question: if a photon travels through space at c, and therefore doesn't travel through time at all, is the photon at its starting and its final position at the same moment? If so, in what sense did it travel through space at all?
6Alejandro110y
At the same moment with respect to whom? That is the question one must always ask in relativity. The answer is: no, emission and arrival do not occur at the same moment with respect to any actual reference frame. However, as we consider an abstract sequence of reference frames that move faster and faster approaching speed c in the same direction as the photon, we find that the time between the emission and the reception is shorter and shorter.
4pragmatist10y
No it doesn't. Remember, in relativity, time is relative to a frame of reference. So when I talk about a moving object traveling slowly through time, I'm not relativizing its time to some meta-time, I'm relativizing time as measured by that object (say by a clock carried by the object) to time as measured by me (someone who is stationary in the relevant frame of reference). So an object moving slowly through time (relative to my frame of reference) is simply an object whose clock ticks appear to me to be more widely spaced than my clock ticks. In the limit, if a photon could carry a clock, there would appear to me to be an infinite amount of time between its ticks. I will admit that I was using a bit of expository license when I talked about all objects "moving through space-time" at the constant rate c. While one can make sense of moving through space and moving through time, moving through space-time doesn't exactly make sense. You can replace it with this slightly less attractive paraphrase, if you like: "If you add up a non-accelerating object's velocity through space and its (appropriately defined) rate of motion through time, for any inertial frame of reference, you will get a constant." Again, it's important to realize there are many different "time" parameters in relativity, one for each differently moving object. Also, whether two events are simultaneous is relative to a frame of reference. Relative to my time parameter (the parameter for the frame in which I am at rest), the photon is moving through space, and it takes some amount of (my) time to get from point A to point B. Relative to its own time parameter, though, the photon is at point A and point B (and every other point on its path) simultaneously. Since I'll never travel as fast as a photon, it's kind of pointless for me to use its frame of reference. I should use a frame adapted to my state of motion, according to which the photon does indeed travel in non-zero time from place to place. Again,
9lmm10y
In the photon's own subjective experience? Yes. (Not that that's possible, so this statement might not make sense). But as another commenter said, certainly the limit of this statement is true: as your speed moving from point A to point B approaches the speed of light, the subjective time you experience between the time when you're at A and the time when you're at B approaches 0. And the distance does indeed shrink, due to the Lorentz length contraction. It travels in the sense that an external observer observes it in different places at different times. For a subjective observer on the photon... I don't know. No time passes, and the universe shrinks to a flat plane. Maybe the takeaway here is just that observers can't reach the speed of light.
9gjm10y
Not quite either of those. The first thing to say is that "at t0" means different things to different observers. Observers moving in different ways experience time differently and, e.g., count different sets of spacetime points as simultaneous. There is a relativistic notion of "interval" which generalizes the conventional notions of distance and time-interval between two points of spacetime. It's actually more convenient to work with the square of the interval. Let's call this I.

If you pick two points that are spatially separated but "simultaneous" according to some observer, then I>0 and sqrt(I) is the shortest possible distance between those points for an observer who sees them as simultaneous. The separation between the points is said to be "spacelike". Nothing that happens at one of these points can influence what happens at the other; they're "too far away in space and too close in time" for anything to get between them.

If you pick two points that are "in the same place but at different times" for some observer, then I<0 and sqrt(-I) is the minimum time that such an observer can experience between visiting them. The separation between the points is said to be "timelike". An influence can propagate, slower than the speed of light, from one to the other. They're "too far away in time and too close in space" for any observer to see them as simultaneous.

And, finally, exactly on the edge between these you have the case where I=0. That means that light can travel from one of the spacetime points to the other. In this case, no observer travelling slower than light can actually be present at both points, but the time between them in such an observer's frame can be made arbitrarily small by travelling very fast; and while no observer can see the two points as simultaneous, you can get arbitrarily close to that by (again) travelling very fast. Light, of course, only ever travels at the speed of light (you might have heard something different about light travelling through a medium such as gla
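For concreteness, here is a minimal sketch of the sign convention used above for the squared interval (the example events and their numbers are my own, chosen only to illustrate the three cases; I is in metres squared):

```python
c = 299_792_458.0  # speed of light, m/s

def interval_squared(dt, dx):
    """Squared interval I between two events separated by time dt (s) and distance dx (m),
    with the convention used above: I > 0 spacelike, I < 0 timelike, I = 0 lightlike."""
    return dx ** 2 - (c * dt) ** 2

examples = {
    "simultaneous in this frame, 1 km apart": (0.0, 1000.0),
    "1 s apart, 1 km apart": (1.0, 1000.0),
    "1 s apart, one light-second apart": (1.0, c * 1.0),
}

for label, (dt, dx) in examples.items():
    I = interval_squared(dt, dx)
    kind = "spacelike" if I > 0 else "timelike" if I < 0 else "lightlike"
    print(f"{label}: I = {I:.3e} m^2 -> {kind}")
```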
1[anonymous]10y
Thanks, that was very helpful, especially the explanation of timelike and spacelike relations.
6Plasmon10y
The Lorentz factor diverges when the speed approaches c. Because of Length contraction and time dilation, both the distance and the time will appear to be 0, from the "point of view of the photon". (the photon is "in 2 places at once" only from the point of view of the photon, and it doesn't think these places are different, after all they are in the same place! This among other things is why the notion of an observer traveling at c, rather than close to c, is problematic)
2DanielLC10y
You can't build a clock with a photon. You can't build a clock with an electron either. You can build one with a muon though, since it will decay after some interval. It's not very accurate, but it's something. In general, you cannot build a clock moving at light speed. You could build a clock with two photons. Measure the time by how close they are together. But if you look at the center of mass of this clock, it moves slower than light. If it didn't, the photons would have to move parallel to each other, but then they can't be moving away from each other, so you can't measure time.
2[anonymous]10y
I'm not sure what the significance of building a clock is...but then, I'm not sure I understand what clocks are. Anyway, isn't 'you can't build a clock on a photon' just what Tyson meant by 'Photons have no ticking time at all'?
2DanielLC10y
Yes. I meant that he meant that.
1Alejandro110y
Assume there are observers at A and B, sitting at rest relative to each other. The distance between them as seen by them is X. Their watches are synchronized. Alice, sitting at A, emits a particle when her watch says t0; Bob, sitting at B, receives it when his watch says t1. Define T = t1-t0. The speed of the particle is V = X/T. If the particle is massive, then V is always smaller than c (the speed of light). We can imagine attaching a clock to the particle and starting it when it is emitted. When Bob receives it, the clock's time would read a time t smaller than T, given by the equation: t = T (1 - V^2/c^2)^(1/2) (this is the Lorentz factor equation mentioned by Plasmon). As the speed V of the particle gets closer and closer to c, you can see that the time t that has passed "for the particle" gets closer and closer to 0. One cannot attach a clock to a photon, so the statement that "photons are not moving through time" is somewhat metaphoric and its real meaning is the limiting statement I just mentioned. The photon is not "at two places at once" from the point of view of any physical observer, be it Alice and Bob (for whom the travel took a time T = X/c) or any other moving with a speed smaller than c (for whom the time taken may be different but is never 0).
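As a rough numerical illustration of that limiting statement, here is a small sketch plugging sample speeds into t = T (1 - V^2/c^2)^(1/2); the speeds and the choice of T = 1 second are made up for illustration:

```python
import math

def traveler_time(T, v_over_c):
    """Clock time t for the particle, given rest-frame time T and speed as a fraction of c:
    t = T * sqrt(1 - V^2/c^2)."""
    return T * math.sqrt(1.0 - v_over_c ** 2)

T = 1.0  # seconds between emission and reception in Alice and Bob's frame (illustrative)
for v_over_c in [0.5, 0.9, 0.99, 0.9999, 0.999999]:
    print(f"V = {v_over_c} c  ->  t = {traveler_time(T, v_over_c):.6f} s")
```

The printed values shrink toward zero as V approaches c, which is all the "no time passes for a photon" slogan can legitimately mean.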
0[anonymous]10y
Thanks, it sounds like Tyson just said something very misleading. I looked up the Lorentz factor equation on Wiki, and I got this: gamma = 1/[(1 - V^2/c^2)^(1/2)] Is that right? If that's right, then the Lorentz transformation (I'm just guessing here) for a photon would return an undefined result. Was Tyson just conflating that result with a result of 'zero'?
5Alejandro110y
Your equation for the gamma factor is correct. You are also correct in saying that the Lorentz transformation becomes undefined. The significance of this is that it makes no sense to talk about the "frame of reference of a photon". Lorentz transformation equations allow us to switch from some set of time and space coordinates to another one moving at speed V < c relative to the first one. They make no sense for V = c or V > c. I think that what Tyson meant by his somewhat imprecise answer was what I said in my comment above: if you take the equation t * gamma = T (that relates the time t that passes between two events for an object that moves with V from one to the other, with the time T that passes between the events in the rest frame) and take the limit V approaching c for finite T, you get t = 0. If you want to keep the meaning of the equation in this limit, you then have to say that "no time passes for a photon". The issue is that the equation is just a consequence of the Lorentz transformations, which are inapplicable for V = c, and as a consequence the words "no time passes for a photon" do not have any clear, operational meaning attached to them.
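And a companion sketch for the gamma factor itself, showing both how it blows up as V approaches c and how it is simply undefined at V = c (the sample speeds are arbitrary):

```python
import math

def gamma(v_over_c):
    """Lorentz factor gamma = 1 / sqrt(1 - V^2/c^2); undefined at V = c (division by zero)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v_over_c in [0.9, 0.99, 0.9999, 0.999999]:
    print(f"V = {v_over_c} c  ->  gamma = {gamma(v_over_c):.1f}")

try:
    gamma(1.0)
except ZeroDivisionError:
    print("V = c: gamma is undefined, not zero; the transformation simply does not apply")
```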
2[anonymous]10y
I think I understand. Thanks very much for taking the time.
0Luke_A_Somers10y
Getting this property for electromagnetic waves was one of the main things that led Einstein to develop Special Relativity: he looked at waves and thought, "If we do a Galilean transform so that light is standing still, the resulting field is an invalid electrostatic field."

What are your best arguments against the reality/validity/usefulness of IQ?

Improbable or unorthodox claims are welcome; appeals that would limit testing or research even if IQ's validity is established are not.

5pragmatist10y
These are not my arguments, since I haven't thought about the issue enough. However, the anthropologist Scott Atran, in response to the latest Edge annual question, "What Scientific Idea is Ready for Retirement?", answered "IQ". Here's his response:
8Douglas_Knight10y
Which of reality, validity, and usefulness is this an argument against? All three? None? Added: I don't know what it would mean for IQ to be "real." Maybe this is an argument that IQ is not real. Maybe it is an argument that IQ is not ontologically fundamental. But it seems to me little different than arguing that total body weight, BMI, or digit length ratio are not "real"; or even that arguing that temperature is not "real," either temperature of the body or temperature of an ideal gas. The BQ sentence seems to assert that this kind of unreality implies that IQ is not useful, but I'd hardly call that an argument.
2pragmatist10y
I tend to interpret "Is X real?" more or less as "Is X a part of the best predictive theory of the relevant domain?" This doesn't require an object/property to be ontologically fundamental, since our best (all things considered) theories of macroscopic domains include reference to macroscopic (non-fundamental) properties. According to this standard, Atran is arguing that IQ is not real, I think. Temperature would be real (as far as we know), but maybe BMI wouldn't? I don't know enough about the relevant science to make that judgment. Anyway, given my preferred pragmatist way of thinking about ontology, there isn't much difference between the reality, validity and usefulness of a concept.
1Douglas_Knight10y
It seems excessive to me to define real as a superlative. Isn't it enough to be part of some good predictive theory? Shalizi explicitly takes this position, but it seems insane to me. He very clearly says that he rejects IQ because he thinks that there is a better model. It's not that he complains that people are failing to adopt a better model, but failing to develop a better model. To the extent that Atran means anything, he appears to mean the same thing. I think the difference between usefulness and validity is that usefulness is a cost-benefit analysis, considering the cost of using the model in a useful domain.
1pragmatist10y
Lorentz ether theory is a good predictive theory, but I don't want to say that ether is real. In general, if there's a better theory currently available that doesn't include property X, I'd say we're justified in rejecting the reality of X. I do agree that if there's no better theory currently available, it's a bit weird to say "I reject the reality of X because I'm sure we're going to come up with a better theory at some point." Working with what you have now is good epistemic practice in general. But it is possible that your best current theory is so bad at making predictions that you have no reason to place any substantive degree of confidence in its ontology. In that case, I think it's probably a good idea to withhold ontological commitment until a better theory comes along. Again, I don't know enough about IQ research to judge which, if any, of these scenarios holds in that field.
0ChristianKl10y
What does arguments against "reality" mean? Arguing against misconceptions of what people believe about IQ? In general I consider arguments against reality pointless. Asking whether IQ is real is the wrong question. It makes much more sense to ask what IQ scores mean.
0Eugine_Nier10y
IQ, or intelligence as commonly understood, is a poor proxy for rationality. In many cases it simply makes people better at rationalizing beliefs they acquired irrationally.
-5Calvin10y

Doesn't cryonics (and subsequent rebooting of a person) seem obviously too difficult? People can't keep cars running indefinitely; wouldn't keeping a particular consciousness running be much harder?

I hinted at this in another discussion and got downvoted, but it seems obvious to me that the brain is the most complex machine around, so wouldn't it be tough to fix? Or does it all hinge on the "foom" idea where every problem is essentially trivial?

4Calvin10y
Most of the explanations found on cryonics sites do indeed seem to base their arguments around the hopeful explanation that, given nanotechnology and the science of the future, every problem connected to, as you say, rebooting would become essentially trivial.
2ChristianKl10y
There are vintage cars that seem to have no problem running "indefinitely", provided you fix parts here and there.
0Torello10y
This is sort of my point--wouldn't it be hard to keep a consciousness continually running (to avoid the death we feared in the first place) by fixing or replacing parts?
0gattsuru10y
Continuity of consciousness very quickly becomes a hard word to define : not only do you interrupt consciousness for several hours on a nightly basis, you actually can go into reduced awareness modes on a regular basis even when 'awake'. Moreover, it might not be necessary to interrupt continuity of consciousness in order to "replace parts" in the brain. Hemispherectomies demonstrate that large portions of the brain can be removed at once without causing death, for example.
1RomeoStevens10y
error checking on solid state silicon is much easier than error checking neurons.
1VAuroch10y
We know a lot more about solid state silicon than neurons. When we understand neurons as well as we currently do solid state silicon, I see no reason why error checking on them should be harder than error checking on silicon is now.
0Luke_A_Somers10y
Too difficult for whom? Us, now? Obviously. Later? Well, how much progress are you willing to allow for 'too difficult' to become 'just doable'?

What motivates rationalists to have children? How much rational decision making is involved?

ETA: removed the unnecessary emotional anchor.

ETA2: I'm not asking this out of Spockness, I think I have a pretty good map of normal human drives. I'm asking because I want to know if people have actually looked into the benefits, costs and risks involved, and done explicit reasoning on the subject.

I wouldn't dream of speaking for rationalists generally, but in order to provide a data point I'll answer for myself. I have one child; my wife and I were ~35 years old when we decided to have one. I am by any reasonable definition a rationalist; my wife is intelligent and quite rational but not in any very strong sense a rationalist. Introspection is unreliable but is all I have. I think my motivations were something like the following.

  1. Having children as a terminal value, presumably programmed in by Azathoth and the culture I'm immersed in. This shows up subjectively as a few different things: liking the idea of a dependent small person to love, wanting one's family line to continue, etc.

  2. Having children as a terminal value for other people I care about (notably spouse and parents).

  3. I think I think it's best for the fertility rate to be close to the replacement rate (i.e., about 2 in a prosperous modern society with low infant mortality), and I think I've got pretty good genes; overall fertility rate in the country I'm in is a little below replacement and while it's fairly densely populated I don't think it's pathologically so, so for me to have at least one child and probably

... (read more)
1Aharon10y
I first wanted to comment on 5, because I had previously read that having children reduces happiness. Interestingly, when searching for the link (because I couldn't remember where I had read it), I found this source (http://www.demogr.mpg.de/papers/working/wp-2012-013.pdf) that corroborates your specific expectation: children lead to higher happiness for older, better educated parents.
1Douglas_Knight10y
Having children is an example where two methodologies in happiness research dramatically diverge. One method is asking people in the moment how happy they are; the other is asking how happy they generally feel about their lives. The first method finds that people really hate child care and is probably what you remembered.
0adbge10y
I think the paper you're thinking of is Kahneman et al's A survey method for characterizing daily life experience: The day reconstruction method. Notably, On the other hand, having children also harms marital satisfaction. See, for example, here.
0gjm10y
How excellent! It's nice to be statistically typical :-).
0DaFranker10y
(This might seem obviously stupid to someone who's thought about the issue more in-depth, but if so there's no better place for it than the Stupid Questions Thread, is there?): I think some tangential evidence could be gleaned, as long as it's understood as a very noisy signal, from what other humans in your society consider as signals of social involvement and productivity. Namely, how well your daughter is doing at school, how engaged she gets with her peers, her results in tests, etc. These things are known, or at least thought, to be correlated with social 'success' and 'benefit'. Basically, if your daughter is raising the averages or other scores that comprise the yardsticks of teachers and other institutions, then this is information correlated with what others consider being beneficial to society later in life. (the exact details of the correlation, including its direction, depend on the specific environment she lives in)
1gjm10y
That would be evidence (albeit, as you say, not very strong evidence) that my daughter's contribution to net utility is above average. That doesn't seem enough to guarantee it's positive.
1DaFranker10y
Good catch. Didn't notice that one sneaking in there. That kind of invalidates most of my reasoning, so I'll retract it willingly unless someone has an insight that saves the idea.
6blacktrance10y
Disclaimer: I don't have kids, won't have them anytime soon (i.e. not in the next 5 years), and until relatively recently didn't want them at all. The best comparison I can make is that raising a child is like making a painting. It's work, but it's rewarding if done well. You create a human being, and hopefully impart them with good values and set them on a path to a happy life, and it's a very personal experience. Personally, I don't have any drive to have kids, not one that's comparable to hunger or sexual attraction.
3hyporational10y
I'd like that personal painting experience if it went well, and I have experienced glimpses of it with some kids not of my own. Unfortunately it's not clear to me at all how much of the project's success could be of my own doing, and I've seen enough examples of things going horribly wrong despite seemingly optimal conditions. I wonder what kinds of studies could be done on the subject of parenting skills and parental satisfaction with the results of upbringing that aren't hugely biased. ETA: my five-year-old stepbrother just barged into my room (holiday at my folks). "You always get new knowledge in this room," he said, and I was compelled to pour that little vessel full again.
5Lumifer10y
The same things that motivate other people. Being rational doesn't necessarily change your values. Clearly, some people think having children is worthwhile and others don't, so that's individual. There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection. The amount of decision-making also obviously varies -- from multi-year deliberations to "Dear, I'm pregnant!" :-)
4CronoDAS10y
Really? The reproductive urge in humans seems to be more centered on a desire for sex rather than on a desire for children. And, in most animals, this is sufficient; sex leads directly to reproduction without the brain having to take an active role after the exchange of genetic material takes place. Humans, oddly enough, seem to have evolved adaptations for ensuring that people have unplanned pregnancies in spite of their big brains. Human females don't have an obvious estrus cycle, their fertile periods are often unpredictable, and each individual act of copulation has a relatively low chance of causing a pregnancy. As a result, humans are often willing to have sex when they don't want children and end up having them anyway.
5Lumifer10y
These are not mutually exclusive alternatives. Not in those animals where babies require a long period of care and protection.
0CronoDAS10y
Yes, you're right. I didn't think to put the "take care of your children once they're out of the uterus" programming into the same category.
1Randy_M10y
A developmentally complex species needs a drive to care for offspring. A simple species just needs a drive to reproduce. ETA: What Lumifer said
0hyporational10y
Women talk to me about baby fever all the time. Lucky me, eh.
0hyporational10y
True, but it might make you weigh them very differently if you understand how biased your expectations are. I'm interested if people make some rational predictions about how happy having children will make them for example. I already have a pretty good idea about how people in general make these decisions, hence the specific question.
3Ishaan10y
I think you mean "humans"? With respect to adoption vs. biological children, having your own child allows you more control over the circumstances and also means the child will probably share some facets of your / your mate's personality, in ways that are often surprising and pleasurable. With respect to raising children in general, it's intrinsically rewarding, like a mix of writing a book and being in love. Also, if you're assuming the environment won't radically change, having youth probably makes aging easier. (I don't have children, but have watched them being raised. Unsure of my own plans.)
0hyporational10y
Nope, not planning to go Spock. I also edited the original question now for clarification. I'd like to see some evidence how much control I can have. You're describing just the best case scenario, but having a child can also be incredibly exhausting if things go wrong.
2Ishaan10y
Oh, okay. Sorry to misunderstand. (Also, I meant "control" as compared to the control one has when adopting.) In that case, I have insufficient research for a meaningful answer. I guess one ought to start here or there or there, to get a rough idea?
1Lumifer10y
One more point that I haven't seen brought up -- listen to Queen:

Can anybody find me somebody to love?
Each morning I get up I die a little
Can barely stand on my feet
Take a look in the mirror and cry
Lord what you're doing to me
I have spent all my years in believing you
But I just can't get no relief, Lord!
Somebody, somebody
Can anybody find me somebody to love?
2CronoDAS10y
Personally, I'd recommend a dog or cat to this person.
0hyporational10y
Children as match makers when you're too old to stand on your feet? ;)
0Lumifer10y
That's an interesting interpretation :-) it was also fun to watch it evolve :-D
0hyporational10y
I was calibrating political correctness.
0Lumifer10y
Entirely within your own mind? I don't think you got any external feedback to base calibration on :-)
2hyporational10y
I got plenty of feedback from the intensive simulations I ran.
1Calvin10y
I don't consider myself an explicit rationalist, but the desire to have children stems from the desire to have someone to take care of me when I am older. Do you see your own conception and further life as a cause for "huge heap of disutility" that can't be surpassed by the good stuff?
4DaFranker10y
I've always been curious to see the response of someone with this view to the question: What if you knew, as much as any things about the events of the world are known, that there will be circumstances in X years that make it impossible for any child you conceive to possibly take care of you when you are older? In such a hypothetical, is the executive drive to have children still present, still being enforced by the programming of Azathoth, merely disconnected from the original trigger that made you specifically have this drive? Or does the desire go away? Or something else, maybe something I haven't thought of (I hope it is!)?
0Calvin10y
Am I going to have a chance to actually interact with them, see them grow, etc.? I mean, assuming a hypothetical case where, as soon as a child is born, nefarious agents of the Population Police snatch him, never to be seen or heard from again, then I don't really see the point of having children. If, on the other hand, I have a chance to actually act as a parent to him, then I guess it is worth it, after all, even if the child disappears as soon as it reaches adulthood and joins the Secret Society of Ineffective Altruism, never to be heard from again. I get no benefit of care, but I am happy that I introduced a new human into the world (uh... I mean, I actually helped to do so, as it is a two-person exercise so to speak). It is not the ideal case, but I still consider the effort well spent. In the ideal world, I still have a relationship with my child even as he/she reaches adulthood, so that I can feel safer knowing that there is someone who (hopefully) considers all the generosity I have granted to him and holds me dear. P.S. Why programming of Azathoth? In my mind it makes it sound as if the desire to have children was something intrinsically bad.
1DaFranker10y
Thanks for the response! This puts several misunderstandings I had to rest. Programming of Azathoth because Azathoth doesn't give a shit about what you wish your own values were. Therefore what you want has no impact whatsoever on what your body and brain are programmed to do, such as make some humans want to have children even when every single aspect of it is negative (e.g. painful sex, painful pregnancy, painful birthing, hell to raise children, hellish economic conditions, absolutely horrible life for the child, etc. etc. such as we've seen some examples of in slave populations historically)
0Calvin10y
I suspect our world views might differ a bit, as I don't wish that my values were any different than they are. Why should I? If Azathoth decided to instill the value that having children is somehow desirable deep into my mind, then I am very happy that as a first world parent I have all the resources I need to turn it into a pleasant endeavor with a very high expected value (a happy new human who hopefully likes me and hopefully shares my values, but I don't have much confidence in the second bet).
0hyporational10y
Not to me obviously. Not necessarily to my parents either, but I think they might have been quite lucky in addition to being good parents. Doesn't money take care of you when old too? As a side note, if I were old, dying and in a poor enough condition that I couldn't look after myself, I'd rather sign off than make other people take care of me because I can't imagine that being an enjoyable experience.
0Calvin10y
Still, if it is possible to have happy children (and I assume happy humans are good stuff), where does the heap of dis-utility come into play? EDIT: It is hard to form a meaningful relationship with money, and I would reckon that teaching it to uphold values similar to yours isn't an easy task either. As for taking care, I don't mean palliative care as much as simply the relationship you have with your child.
0hyporational10y
You can have relationships with other people, and I think it's easier to influence what they're like. I'll list some forms of disutility later, but I think for now it's better not to bias the answers to the original question further. I removed the "heap of disutility" part, it was unnecessarily exaggerated anyway.
0[anonymous]10y
You can have a relationship with your friends, but don't expect them to take care of you when you're old.

Why hasn't anyone ever come back from the future and stopped us all from suffering, making it so we never experience horrible things? Does that mean we never learn time travel, or at least time travel plus a way to make the original tough experiences be un-experienced?

2metatroll10y
Whenever they invent time travel, they discover that the ability to change the past becomes the biggest cause of suffering, so in the end they always un-invent it.
1seez10y
And, similarly, should I be depressed that there currently exists NO alien species with the inclination+ability to eliminate horrific suffering in all sentient life-forms?

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy the change will make them. Why?

Utilitarians could say they are trying to maximize the World's something.

But non-utilitarians, like I used to be, and like most here still are, are just... doing it like everyone else does it! "Oh, that seems like a cool change, I'll do it! yay!" then two weeks later that particular thing has none of the coolness effect it had before, but they are stuck with the decision for years....... (in case of dec... (read more)

8Dahlen10y
I don't know the extent to which this applies to other people, but for me (a non-utilitarian) it does, so here's my data point which may or may not give you some insight into how other non-utilitarians judge these things. I can't really say I value my own happiness much. Contentment / peace of mind (=/= happiness!) and meaningfulness are more like what I aim for; happiness is too fleeting, too momentary to seek it out all the time. I'm also naturally gloomy, and overt displays of cheerfulness just don't hold much appeal for me, in an aesthetic sense. (They get me thinking of those fake ad people and their fake smiles. Nobody can look that happy all the time without getting paid for it!) There simply are more important things in life than my own happiness; that one can be sacrificed, if need be, for the sake of a higher value. I suppose it's just like those utilitarians you're talking about which are "trying to maximize the world's something" rather than their own pleasure, only we don't think of it in a quantitative way. Well... that's a rather unflattering way of putting it. You don't have to compute utilities in order for your decision-making process to look a wee little more elaborate than that.
4cata10y
I know a lot of LW-ish people in the Bay Area and I see them explicitly thinking carefully about a lot of big life changes (e.g. moving, relationships, jobs, what habits to have) in just the way you recommended. I don't know if it has something to do with utilitarianism or not. I'm personally more inclined to think in that way than I was a few years ago, and I think it's mostly because of the social effects of from hanging out with & looking up to a bunch of other people who do so.
1pragmatist10y
"Non-utilitarian" doesn't equate to "ethical egoist". I'm not a utilitarian, but I still think my big life decisions are subject to ethical constraints beyond what will make me happy. It's just that the constraint isn't always (or even usually) the maximization of some aggregate utility function.
1ChristianKl10y
I don't think the predictive power of models built from data-driven happiness research is very high. I wouldn't ignore the research completely, but there's nothing rational about using a model just because it's data-based if nobody has shown that the model is useful for prediction in the relevant domain.

What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?

I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.

The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means... (read more)

2Kaj_Sotala10y
(To the extent that I'm negative utilitarian, I'm a hedonistic negative utilitarian, so I can't speak for the preference NUs, but...) Note that every utilitarian system breaks once you introduce even the possibility of infinities. E.g. a hedonistic total utilitarian will similarly run into the problem that, if you assume that a child has the potential to live for an infinite amount of time, then the child can be expected to experience both an infinite amount of pleasure and an infinite amount of suffering. Infinity minus infinity is undefined, so hedonistic total utilitarianism would be incapable of assigning a value to the act of having a child. Now saving lives is in this sense equivalent to having a child, so the value of every action that has even a remote chance of saving someone's life becomes undefined as well... A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters. I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself. One possible answer is that loudly advocating "you shouldn't have children, it's literally infinitely bad" is a horrible PR strategy that will just get your movement discredited, and e.g. talking about NU in the abstract and letting people piece together the full implications themselves may be more effective. Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account. Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict on people in order to prevent an hour of intense torture, and how exactly did you go about calculating the
1Ghatanathoah10y
Yes, and that is my precise point. Even if we assume a bounded utility function for human preferences, I think it's reasonable to assume that it's a pretty huge function. Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but still die with a lot of unfulfilled desires. I find this massively counter-intuitive and want to know how the antinatalist community addresses this. If the disutility they assign to having children is big enough they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child they should try to guard even against tiny possibilities like that. Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want. Well, of course I'm not expecting an exact answer. But a ballpark would be nice. Something like "no more than x, no less than y." I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."
1Kaj_Sotala10y
Is that really how preference utilitarianism works? I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment. Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones. I'm confused. How is wanting to live forever in a situation where you don't think that living forever is possible, different from any other unsatisfiable preference? That doesn't sound right. The disutility is huge, yes, but the probability is so low that focusing your efforts on practically anything with a non-negligible chance of preventing further births would be expected to prevent many times more disutility. Like supporting projects aimed at promoting family planning and contraception in developing countries, pro-choice policies and attitudes in your own country, rape prevention efforts to the extent that you think rape causes unwanted pregnancies that are nonetheless carried to term, anti-natalism in general (if you think you can do it in a way that avoids the PR disaster for NU in general), even general economic growth if you believe that the connection between richer countries and smaller families is a causal and linear one. Worrying about vanishingly low-probability scenarios, when that worry takes up cognitive cycles and thus reduces your chances of doing things that could have an even bigger impact, does not maximize expected utility. I don't know. At least I personally find it very difficult to compare experiences of such differing magnitudes. Someone could come up with a number, but that feels like trying to play baseball with verbal probabilities - the number that they name might not have anything to do with what they'd actually choose
0Ghatanathoah10y
I don't think that would be the case. The main intuitive advantage negative preference utilitarianism has over negative hedonic utilitarianism is that it considers death to be a bad thing, because it results in unsatisfied preferences. If it only counted immediate consciously held goals it might consider death a good thing, since it would prevent an agent from developing additional unsatisfied preferences in the future. However, you are probably onto something by suggesting some method of limiting which unsatisfied preferences count as negative. "What a person is thinking about at any given moment" has the problems I pointed out earlier, but another formulation could well work better. I believe Total Preference Utilitarianism typically avoids this by regarding the creation of most types of unsatisfied preferences as neutral rather than negative. While there are some preferences whose dissatisfaction typically counts as negative, such as the preference not to be tortured, most preference creations are neutral. I believe that under TPU, if a person spends the majority of their life not preferring to be dead then their life is considered positive no matter how many unsatisfied preferences they have. I feel like I could try to get some sort of ballpark by figuring out how much I'm willing to pay to avoid each thing. For instance, if I had an agonizing migraine I knew would last all evening, and had a choice between paying for an instant cure pill, or a device that would magically let me avoid traffic for the next two months, I'd probably put up with the migraine. I'd be hesitant to generalize across the whole population, however, because I've noticed that I don't seem to mind pain as much as other people, but find boredom far more frustrating than average.
1RomeoStevens10y
Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering. Death is bad and causes negative experiences. I want to solve death before we have more kids, but I recognize this isn't realistic. It's worth pointing out that negative utilitarianism is incoherent. Prioritarianism makes slightly more sense.
3Ghatanathoah10y
If I understand you correctly, the problem with doing this with negative utilitarianism is that it suggests we should painlessly kill everyone ASAP. The advantage of negative preference utilitarianism is that it avoids this because people have a preference to keep on living that killing would thwart. Why? For the reason I pointed out, or for a different one? I'm not a negative utilitarian personally, but I think a few aspects of it have promise and would like to see them sorted out.

What does changing a core belief feel like? If I have a crisis of faith, how will I know?

I would particularly like to hear from people who have experienced this but never deconverted. Not only have I never been religious, no one in my immediate family is, none of the extended family I am close with is, and while I have friends who believe in religion I don't think I have any who believe their faith. So I have no real point of comparison.

4ESRogs10y
A sense of panic and dread, and a feeling of being lost were some highlights for me. I think it would be hard to not know, though perhaps others experience these things differently.
0ChristianKl10y
I think there are many ways beliefs get changed. Take a belief such as: "The world is a hostile place and therefore I have to hide myself behind a shield of anonymity when I post online." Ten years ago I would have feared that somebody might associate my online writing with my real identity; at that time I thought I needed the shield. Today I don't (my nickname is firstname + first 2 letters of lastname). What did that process feel like? At the beginning I felt fear and now I don't, but it was a gradual process over time.

For most practical concerns I think we use religion far too often as a reference concept. Children are usually taught that it's bad to talk to strangers. In our world it's a useful skill to talk to strangers in a friendly and inviting way. Most people hit walls very quickly if they try to start saying hello with a smile to every stranger they pass on the street. They come up with excuses that doing so is weird and that people will hate them if they find out that they engage in such weird behavior.

If you want to experience a crisis of faith, those social beliefs are where I would focus. They are more interesting because they actually have empirical results that you can see, and you can't just pretend that you have changed your belief.

I have tremendous trouble with hangnails. My cuticles start peeling a little bit, usually near the center of the base of my nail, and then either I remove the peeled piece (by pulling or clipping) or it starts getting bigger and I have to cut it off anyway. That leaves a small hole in my cuticle, the edges of which start to wear away and peel more, which makes me cut away more. This goes on until my fingertips are a big mess, often involving bleeding and bandages. What should I do with my damaged cuticles, and how do I stop this cycle from starting in the first place?

4dougclow10y
To repair hangnails: Nail cream or nail oil. I had no idea these products existed, but they do, and they are designed specifically to deal with this problem, and do a very good job IME. Regular application for a few days fixes my problems. To prevent it: Keep your hands protected outside (gloves). Minimise exposure of your hands to things that will strip water or oil from them (e.g. detergent, soap, solvents, nail varnish, nail varnish remover), and when you can't avoid those, use moisturiser afterwards to replace the lost oil. (Explanation: Splitting/peeling nails is usually due to insufficient oil or, more rarely, moisture. I've heard some people take a paleo line that we didn't need gloves and moisturiser and nail oil in the ancestral environment. Maybe, but we didn't wash our hands with detergent multiple times a day then either.)
2CronoDAS10y
It's not the nail itself, it's the skin around the nail...
2dougclow10y
Yes - that's the part I too have trouble with, and that these products and practices help. They also help the nail itself, but fewer people tend to have that problem. In my explanation I should've said "Splitting/peeling nails, and troubles with the skin around them, are usually due to insufficient oil ...", sorry. There's no reason why you should trust a random Internet person like me with health advice. But think cost/expected benefit. If your hangnails are anything like as painful and distracting as mine were, trying out a tube of nail cream, moisturiser, and a pair of gloves for a week is a small cost compared to even an outside chance that it'll help. (Unless the use of such products causes big problems for your self image.)
1CronoDAS10y
I'll see if I can find any nail cream at my local supermarket, then. How often should I apply it? I've seen similar advice on various web pages after I did a Google search on the problem, too. Which means that it's many random Internet people, which is slightly more trustworthy. ;)
0dougclow10y
:) I got mine at a large pharmacy, in case you're still looking. I'd be guided by the instructions on the product and your common sense. For me, a single application is usually enough these days - so long as I've been able to leave it on for ages and not have to wash my hands. The first time I used it, when my fingernails had got very bad, it took about three or four applications over a week. Then ordinary hand moisturiser and wearing gloves outside is enough for maintenance. Then I get careless and forget and my fingernails start getting bad again and the cycle repeats! But I'm getting better at noticing, so the cycles are getting shallower, and I've not actually had to use the nail cream at all so far this winter. (Although it hasn't been a very cold one where I am.) (Almost a month late, sorry.)
1Paul Crowley10y
I would take a recommendation from Doug as strong evidence that something is a good idea, FWIW.
0ChristianKl10y
Calcium deficiency is another possible issue.
0Chrysophylax10y
Nail polish base coat over the cuticle might work. Personally I just try not to pick at them. I imagine you can buy base coat at the nearest pharmacy, but asking a beautician for advice is probably a good idea; presumably there is some way that people who paint their nails prevent hangnails from spoiling the effect.
0dougclow10y
I'd be cautious about using nail polish and similar products. The solvents in them are likely to strip more oil from the nail and nail bed, which will make the problem worse, not better. +1 for asking a beautician for advice, but if you just pick a random one rather than one you personally trust, the risk is that they will give you a profit-maximising answer rather than a cheap-but-effective one.

Computers work by performing a sequence of computations, one at a time: parallelization can cut down the time for repetitive tasks such as linear algebra, but hits diminishing returns very quickly. This is very different from the way the brain works: the brain is highly parallel. Is there any reason to think that our current techniques for making algorithms are powerful enough to produce "intelligence", whatever that means?

1DanArmak10y
All biological organisms, considered as signalling or information processing networks, are massively parallel: huge numbers of similar cells with slightly different state signalling one another. It's not surprising that the biologically evolved brain works the same way. A Turing-machine-like sequential computer powerful/fast enough for general intelligence would be far less likely to evolve. So the fact that human intelligence is slow and parallel isn't evidence for thinking you can't implement intelligence as a fast serial algorithm. It's only evidence that the design is likely to be different from that of human brains. It's likely true that we don't have the algorithmic (or other mathematical) techniques yet to make general intelligence. But that doesn't seem to me to be evidence that such algorithms would be qualitatively different from what we do have. We could just as easily be a few specific algorithmic inventions away from a general intelligence implementation. Finally, as far as sheer scale goes, we're on track to achieve rough computational parity with a human brain in a single multi-processor cluster within IIRC something like a decade.
0pianoforte61110y
I'm not trying to play burden of proof tennis here, but surely the fact that the only "intelligence" that we know of is implemented in a massively parallel way should give you pause as to assuming that it can be done serially. Unless of course the kind of AI that humans create is nothing like the human mind, in which case my question is irrelevant. But we already know that the existing algorithms (in the brain) are qualitatively different from computer programs. I'm not an expert so apologies for any mistakes, but the brain is not massively parallel in the way that computers are. A parallel piece of software can funnel a repetitive task into different processors (like the same algorithm for each value of a vector). But parallelism is a built-in feature of how the brain works; neurons and clusters of neurons perform computations semi-independently of each other, yet are still coordinated together in a dynamic way. The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be? Regarding computational parity: sure, I never said that would be the issue.
3Locaha10y
There are no qualitatively different algorithms in that sense. Anything that a parallel computer can do, a fast enough serial computer can also do.
0DanArmak10y
An optimization process (evolution) tried and succeeded at producing massively-parallel biological intelligence. No optimization process has yet tried and failed to produce serial-processing based intelligence. Humans have been trying for very little time, and our serial computers may be barely fast enough, or may only become fast enough some years from now. The fact that parallel intelligence could be created is not evidence that other kinds of intelligence can't be created. Talking about "the only intelligence we know of" ignores the fact that no process ever tried to create a serial intelligence, and so of course none was created. That's quite possible. All algorithms can be implemented on our Turing-complete computers. The question is what algorithms we can successfully design.
0pianoforte61110y
Why do you think that intelligence can be implemented serially?
2DanArmak10y
What exactly do you mean by 'serially'? Any parallel algorithm can be implemented on a serial computer. And we do have parallel computer architectures (multicore/multicpu/cluster) that we can use for speedups, but that's purely an optimization issue.
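To make the "any parallel algorithm can be run serially" point concrete, here is a minimal toy sketch (my own illustration, not something from this thread): a brain-inspired network where, conceptually, all units update at once, simulated on a strictly serial machine by double-buffering the state. The serial loop produces exactly the same result as a simultaneous update; the only cost is wall-clock time.

```python
# Toy illustration: serially simulating a synchronous "parallel" update of
# many simple threshold units, in the spirit of neurons all firing at once.
# Reading from `state` and writing to `new_state` (double-buffering) makes
# the one-at-a-time loop equivalent to all units updating simultaneously.

import random

def step(state, weights, threshold=0.5):
    """One synchronous update of every unit, computed one unit at a time."""
    new_state = [0] * len(state)
    for i in range(len(state)):  # serial loop over "neurons"
        total_input = sum(w * s for w, s in zip(weights[i], state))
        new_state[i] = 1 if total_input > threshold else 0
    return new_state

random.seed(0)
n = 8
state = [random.randint(0, 1) for _ in range(n)]
weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

for _ in range(5):
    state = step(state, weights)
print(state)  # same answer a truly parallel machine would compute, just slower
```

A GPU or a brain would get this answer faster; it would not get a different answer.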

Society, through selection in the survival-of-the-fittest sense, stimulates people to be of service: to be interesting, useful, effective, and even altruistic.

I suspect, and would like to know your opinion, that for that social and traditional reason we are biased against a life of personal hedonic exploration, even if, for some particular kinds of minds, that means, literally: reading internet comics, downloading movies and multiplayer games for free, exercising near your home, having a minimal number of friends and relationships, masturbating frequently, and eating unhealthily for as long as the cash lasts.

So two questions, do you think we are biased against these things, and do you think doing this is a problem?

2DanielLC10y
What do you mean by biased? Is there a difference between being biased towards something and desiring to do it?
1Viliam_Bur10y
For example, a bias could be if your prediction of how much you will enjoy X is systematically smaller than how much you actually do enjoy X when you are doing it.
0DanielLC10y
So what you're asking is if people are good at maximizing their own happiness? We are not. Our happiness is set up to make sure we maximize inclusive genetic fitness. Rather than fixing a bias, evolution can simply account for it. For example, the joy of sex does not compare with the discomfort of pregnancy, but due to time discounting, it's enough to make women want to have sex. As for what would maximize happiness, I'm not an expert. You'd need to ask a psychologist. I'm given to understand that doing things that at first appear to make you happy will tend to reset your hedonic setpoint and have little effect. The obvious conclusion from that is that no matter what you do, your happiness will be the same, but I'm pretty sure that's not right either. People can change how generally happy they are. I am in favor of happiness, so all else being equal, I'd prefer it if people were more successful at making themselves happy.
0somervta10y
what do you mean by 'personal hedonic exploration'? The things you list don't sound very exploratory...

What's the most useful thing for a non-admin to do with/about wiki spam?

[-][anonymous]10y00

When will the experience machine be developed?

[This comment is no longer endorsed by its author]

Average utilitarianism seems more plausible than total utilitarianism, as it avoids the repugnant conclusion. But what do average utilitarians have to say about animal welfare? Suppose a chicken's maximum capacity for pleasure/preference satisfaction is lower than a human's. Does this mean that creating maximally happy chickens could be less moral than creating non-maximally happy humans?

0DanielLC10y
My intuition is that chickens are less sentient, and that is sort of like thinking slower. Perhaps a year of a chicken's life is equivalent to a day of a human's. A day of a chicken's life adds less to the numerator than a day of a human's, but it also adds less to the denominator.
2Dan_Weinand10y
Maybe I'm way off base here, but it seems like average utilitarianism leads to a disturbing possibility itself: that 1 super happy person is considered a superior outcome to 1,000,000,000,000 pretty darn happy people. Please explain how, if at all, I'm misinterpreting average utilitarianism.
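To make that concrete with made-up numbers (utility 100 for the one super happy person, 90 each for a trillion pretty darn happy people; these figures are purely illustrative), the two aggregation rules pick opposite worlds:

```python
# Purely illustrative numbers: world A has one person at utility 100,
# world B has a trillion people at utility 90 each.
n_b = 10**12
u_a, u_b = 100, 90

average_a = u_a                  # 100.0 (only one person)
average_b = (n_b * u_b) / n_b    # 90.0

total_a = u_a                    # 100
total_b = n_b * u_b              # 90,000,000,000,000

print(average_a > average_b)     # True: average utilitarianism prefers world A
print(total_b > total_a)         # True: total utilitarianism prefers world B
```

So on these invented numbers, average utilitarianism really does prefer the single super happy person; the replies below debate whether that is a bug or a feature.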
0DanielLC10y
I think you just have different intuitions than average utilitarians. I have talked to someone who saw no reason why having a higher population is good in and of itself. I am somewhat swayed by an anthropic argument. If you live in the first universe, you'll be super happy. If you live in the second, you'll be pretty darn happy. Thus, the first universe is better.
0DanArmak10y
On the other hand, you often need to consider that you're less likely to live in one universe than in another. For instance, if you could make 10% of the population vastly happier by killing the other 90%, you need to factor in the 10% chance of survival.
0DanielLC10y
I don't buy into that theory of identity. The way the universe works, observer-moments are arranged in lines. There's no reason this is necessary in principle. It could be a web where minds split and merge, or a bunch of Boltzmann brains that appear and vanish after a nanosecond. You are just a random one of the observer-moments. And you have to be one that actually exists, so there's a 100% chance of survival. If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.
1DanArmak10y
I don't think the word "you" is doing any work in that sentence. Personal identity may not exist as an ontological feature on the low level of physical reality, but it does exist at the high level of our experience, and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends). I'm not sure I understand what you mean (I don't endorse average utilitarianism in any case). Do you mean that I might want to maximize the average of the utilities of my possible time-lines (due to imperfect knowledge), weighted by the probability of those time-lines? Isn't that just maximizing expected utility?
0DanielLC10y
I don't think that's relevant in this context. You are a random observer. You live. I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters. But if you just think that you're not going to have as much total happiness because you don't live as long then either you're fundamentally mistaken or the argument I just gave is. If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help. If you're saying that it's more likely to be a person who has a longer life, then I guess our "different" views on identity probably are just semantics, and you end up with the form of average utilitarianism I was originally suggesting.
1DanArmak10y
That's very different from saying "you are a random observer-moment" as you did before. I consider it intrinsically important to have a personal future. If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations. I don't expect to be any future observer existing in the universe (or even near me) with equal probability; I expect to be one of the possible future observers who have me in their observer-line past. This seems necessary to accept induction and to reason at all. But in the actual universe, when making decisions that influence the future of the universe, I do not treat myself as a random person; I know which person I am. I know about the Rawlsian veil, but I don't think we should have decision theories that don't allow us to optimize the utility of observers similar to myself (or belonging to some other class), rather than all observers in the universe. We should be allowed to say that even if the universe is full of paperclippers who outnumber us, we can just decide to ignore their utilities and still have a consistent utilitarian system. (Also, it would be very hard to define a commensurable 'utility function' for all 'observers', rather than just for all humans and similar intelligences. And your measure function across observers - does a lizard have as many observer-moments as a human? - may capture this intuition anyway.) I'm not sure this is in disagreement with you. So I still feel confused about something, but it may just be a misunderstanding of your particular phrasing or something. I didn't intend that. I think I should taboo the verb "to be" in "to be a person", and instead talk about decision theories which produce optimal behavior - and then in some situations you may reason like that.
0DanielLC10y
I meant observer-moment. That's what I think of when I think of the word "observer", so it's easy for me to make that mistake. If present!you anticipates something, it makes life easy for future!you. It's useful. I don't see how it applies to anthropics, though. Yous aren't in a different reference class than other people. Even if they were, it can't just be future!yous that are one reference class. That would mean that whether or not two yous are in the same reference class depends on the point of reference. First!you would say they all have the same reference class. Last!you would say he's his own reference class. I think you do if you use UDT or TDT.
1DanArmak10y
I'm not an expert, but I got the impression that UDT/TDT only tells you to treat yourself as a random person from the class of persons implementing the same decision procedure as yourself. That's far more narrow than the set of all observers. And it may be the correct reference class to use here. Not future!yous but all timeline!yous - except that when taking a timeful view, you can only influence future!yous in practice.
[-][anonymous]10y00

Don't raw utilitarians mind being killed by somebody who thinks they suffer too much?

3DanArmak10y
Of course they mind, since they disagree and think that someone is wrong! If they don't disagree, either they've killed themselves already or it becomes an assisted suicide scenario.
0[anonymous]10y
Yeah, right, thanks.
-2DanielLC10y
Why would we care what someone else thinks? As for the case where I thought I suffered too much and that I wouldn't do much to help anyone else out: just because I have an explicit preference to die doesn't mean that I don't have instincts that resist it.
0[anonymous]10y
Because, as far as I understand, if a utilitarian thinks that killing you will have positive expected utility, he ought to kill you. So, if you were completely rational, you would kill yourself without hesitation in this situation, right? Just in case, I didn't downvote you.

_Stupid question: Wouldn't a calorie restriction diet allow Eliezer to lose weight?_

Not a single person who's done calorie restriction consistently for a long period of time is overweight. Hence, it seems that the problem of losing weight is straightforward: just eat fewer calories than you normally would.

I posted a version of this argument on Eliezer's Facebook wall and the response, which several people 'liked', was that there is a selection effect involved. But I don't understand this response, since "calorie restriction" is defined as restri... (read more)

Let's assume the following extremely simplified equation is true:

CALORIES_IN = WORK + FAT

Usually the conclusion is "fewer calories = less fat". But it could also be "fewer calories = less work". Not just in the sense that you consciously decide to work less, but also in the sense that your body can make you unable to work. Which means: you are extremely tired, unable to focus, and in the worst case you fall into a coma.

The problem with calorie restriction is that it doesn't come with a switch for "please don't make me tired or put me in a coma, just reduce my fat". -- Finding the switch is the whole problem.

If your metabolic switch is broken, calorie restriction can simply send you into zombie mode, and your weight remains the same.
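A toy model of that point, with made-up numbers (this is my own illustration of the comment's simplified equation, not physiology): rearrange CALORIES_IN = WORK + FAT into fat_change = calories_in - energy_expended, and let a hypothetical "downregulation" parameter stand in for the missing switch, i.e. how much of a deficit the body absorbs by slowing you down instead of burning fat.

```python
# Toy model, made-up numbers: the "switch" decides whether a calorie deficit
# comes out of fat or out of energy expenditure (tiredness, zombie mode).

def one_day(calories_in, baseline_expenditure, downregulation):
    """Return (fat_change_kcal, effective_expenditure_kcal) for one day.

    downregulation = 0.0: the whole deficit is taken from fat.
    downregulation = 1.0: the body just spends less energy ("less work").
    """
    deficit = max(baseline_expenditure - calories_in, 0)
    expenditure = baseline_expenditure - downregulation * deficit
    fat_change = calories_in - expenditure
    return fat_change, expenditure

# Eating 2000 kcal against a 2500 kcal baseline:
print(one_day(2000, 2500, downregulation=0.0))  # (-500.0, 2500.0): fat burned, energy intact
print(one_day(2000, 2500, downregulation=1.0))  # (0.0, 2000.0): no fat lost, just less "work"
```

On this toy model, "eat less" only translates into "lose fat" to the extent that downregulation stays low, which is exactly the switch the comment says we can't set directly.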

2niceguyanon10y
Great explanation. I didn't consider this before, will update accordingly.
0Pablo10y
Thanks. When you say that do you mean that this is a plausible hypothesis, supported by evidence, or just a speculation which would, if true, explain why calorie restriction might not work for Eliezer? If the former, could you point me to the relevant evidence?
2Viliam_Bur10y
I think I remember Eliezer complaining somewhere that calorie restriction (or something like that) makes him tired or just unable to do his work, but I don't remember the source. I'm pretty sure that he tried it.
2NancyLebovitz10y
Yes, he did. He found that as little as missing a meal knocked him out. Currently, he's found a ketogenic diet which is working for him at least in the short run.
0ChristianKl10y
Do a three-day fast and see if you can still be as active physically at the end as you are at the moment. That's a very easy experiment. If you don't believe that changes in eating have any effect on how your body works, there isn't even a cost involved.
6ChristianKl10y
All the people for whom the diet produces problems quit it and don't engage in it consistently. If your brain function goes down because your body downregulates your metabolism to deal with having fewer calories, and you want to keep your brain functioning at a high level, you will stop engaging consistently in the diet.

There's also the issue that you're dealing with a hunger process that evolved over tens of millions of years and trying to beat it with a cerebral decision-making process that evolved over hundreds of thousands of years. It's just like telling someone with tachycardia of 100 heart beats per minute to switch to 80 heart beats per minute. It's just a switch; just go and lower your heart rate. If I sit down on the toilet, my body has no problem going down 20 beats per minute in a few seconds. I, however, have no way to fire off the same process by a cognitive decision.

Food intake seems like it should be easier to manage than heart rate via cognitive decisions, because you can make voluntary decisions over short time frames. But over long time frames that doesn't seem to be the case.

I’m curious, but despite a lot of time poking around Wikipedia, I don’t have the means to discriminate between the possibilities. Please help me understand. Is there reason to believe that an infinite quantity of the conditions required for life is/was/will be available in any universe or combination of universes?

0VAuroch10y
I am not an expert, but as far as I know this is essentially a lemma/corollary to the anthropic question, ("Why do we, intelligent humans, exist?") which is obviously a Hard Problem. All answers to the big question I'm aware of, other than religious ones, seem to answer Yes to the corollary. This is largely on grounds of what amounts to a proof by contradiction; if there wasn't an infinite quantity of the conditions available for life, the probability that we existed would be infinitesimal, and it would strain credibility that we existed.
[-][anonymous]10y00

If the rate of learning of an AGI is t then is it correct to assume that the rate of learning of a FAI would be t+x where x > 0, considering that it would have the necessary additional constraints?

If this is the case, then a non-Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built. Are there upper limits on intelligence, or would there be diminishing returns as intelligence grows?

1somervta10y
No, there's no particular reason to think an FAI would be better at learning than a UFAI analogue, at least not as far as I can see. However, one of the problems that needs to be solved for FAI (stable self-modification) could certainly make an FAI's rate of self-improvement faster than that of a comparable AI which has not solved that problem. There are other questions that need to be answered there (does the AI realize that modifications will go wrong and therefore not self-modify? If it's smart enough to notice the problem, won't its first step be to solve it?), and I may be off base here. I'm not sure it's that useful to talk about an FAI vs an analogue UFAI, though. If an FAI is built, there will be many significant differences between the resulting intelligence and the one that would have been built if the FAI was not, simply due to the different designers. In terms of functioning, the different design choices, even those not relevant to FAI (if that's even meaningful - FAI may well need to be so fully integrated that all the aspects are made with it in mind), may be radically different depending on the designer and are likely to have most of the effect you're talking about. In other words, we don't know shit about what the first AGI might look like, and we certainly don't know enough to do detailed separate counterfactuals.
2JacekLach10y
I believe you have this backwards - the OP is asking whether a FAI would be worse at learning than an UFAI, because of additional constraints on its improvement. If so: Of course one of the first actions of a FAI would be to prevent any UFAI from being built at all.
0somervta10y
I assumed otherwise because of the "rate of learning of a FAI would be t+x where x > 0" line, which says the FAI is learning faster. But that would make more sense of the last paragraph. I may have a habit of assuming that the more precise formulation of a statement is the intended/correct interpretation, which, while great in academia and with applied math, may not be optimal here.
2handoflixue10y
Read "rate of learning" as "time it takes to learn 1 bit of information" So UFAI can learn 1 bit in time T, but a FAI takes T+X Or, at least, that's how I read it, because the second paragraph makes it pretty clear that the author is discussing UFAI outpacing FAI. You could also just read it as a typo in the equation, but "accidentally miswrote the entire second paragraph" seems significantly less likely. Especially since "Won't FAI learn faster and outpace UFAI" seems like a pretty low probability question to begin with... Erm... hi, welcome to the debug stack for how I reached that conclusion. Hope it helps ^.^