You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: cousin_it 05 September 2017 05:22:07PM 0 points [-]

Agreed. It happens in STEM as well; e.g. lots of "semantic web" papers are like that. Some of it can be traced to grant committees being clueless. Right now many of the folks giving money to MIRI do have a clue; we should keep it that way.

Comment author: Lumifer 05 September 2017 05:11:52PM 1 point [-]

LW2 better hurry up. Healing a patient is much easier than resurrecting one.

Comment author: Lumifer 05 September 2017 05:10:52PM 1 point [-]

how do we set up a status economy that will encourage research? Peer review is one way

Peer review by itself does not encourage (good) research, but merely mutual back-scratching. There is an astounding amount of published peer-reviewed crap -- see e.g. gender studies and such.

Comment author: Wei_Dai 05 September 2017 04:59:06PM 1 point [-]

I know it's not true for you, because you came up with UDT on your own.

I have to think about the rest of your comment carefully, but I want to correct this before too many people read it. I think status is in fact a significant motivation even for me, and even the more "pure" motivations like intellectual curiosity can in some sense be traced back to status. It seems unlikely that UDT would have been developed without the existence of forums like extropians, everything-list, and LW, for reasons of both motivation and feedback/collaboration.

Comment author: Dr_Manhattan 05 September 2017 04:47:51PM 1 point [-]
  • LW2 is in the works, and is an opportunity to make significant improvements to the model. Contribute ideas to make it better! I'll contribute some + yell at interesting people to get off FB or at least x-post

  • I think your honest admission of strong status motivation is very important. Big reason high-status ppl avoid the forum is not to be bogged down with n00bs and cranks. LW2 karma system+moderation will be really important to keep them around. Any ideas on improving it?

Comment author: cousin_it 05 September 2017 04:35:04PM *  1 point [-]

Thank you for not giving up on this discussion! Many people have mentioned the intellectual benefits of peer review, but I just thought of another argument that might be new to you.

Many of us agree that solving problems together is great fun. But what if it's just rationalization? What if we really want to participate in some status economy, and will come up with smart things to say only if we're paid with status in return? I know it's not true for you, because you came up with UDT on your own. But it's definitely true for me. Posting something like this and getting no response feels very discouraging to me, even if the topic is exciting. And since I'm close to the top of the LW heap, I imagine it's even more true for others.

The question then becomes, how do we set up a status economy that will encourage research? Peer review is one way, because publications and citations are a status badge desired by many people. Participating in a forum like LW when it's "hot" and frequented by high status folks is another way, but unfortunately we don't have that now. From that perspective it's easy to see why the massively popular HPMOR didn't attract many new researchers to AI risk, but attracted people to HPMOR speculation and rational fic writing. People do follow their interests sometimes, but mostly they try to find venues to show off.

Of course you could be happy with a system that's optimized for people like you, with few status rewards. But I suspect you'd miss out on many good contributors (think of all the smart people who drifted away from LW in recent years). I'd prefer to have something more like a pyramid or funnel, with popular appeal on one end and intellectual progress on the other. Academic credibility (including peer review) could be a key part of that funnel for us, and a central forum like LW would also help a lot. There are probably other measures that could work in synergy with these.

I wonder if people at MIRI think the same way. In a sense, the funnel idea was there from the beginning, as "raising the sanity waterline". CFAR can also be seen as part of that. But these efforts are mostly aimed at outreach, and I'm not sure they ever consciously tried to build a mechanism for converting status to research. What would it take to build such a mechanism today?

Comment author: wearsshoes 05 September 2017 03:09:59PM 1 point [-]

Hi, I'm helping to organize this year's NYC Solstice, as raemon has moved to the Bay Area. I've been reading LW, SSC, and rationalist Tumblr since about January and going to NYC meetups semiregularly since April, but haven't yet posted on here, so I thought I'd make a quick introduction.

My name is Rachel, I'm an undergrad senior at NYU completing a communications major. I'm originally from the Bay Area. On the MBTI, I'm an INTJ. Besides rationality, my passions are graphic novels and cooking. My intellectual interests tend towards things that have a taxonomic character, like biology and linguistics. (I'm also a fan of the way that these subjects keep evading a perfect taxonomy.)

I occasionally wear sandals.

Comment author: entirelyuseless 05 September 2017 03:05:02PM 0 points [-]

"take a pill every morning to live 5 years longer"

It is an assumption that it will be that easy. If there is a complicated surgery that will extend people's lives by 5 years, or even by 20, it is likely that many people will not want it.

Comment author: Erfeyah 05 September 2017 02:58:01PM 0 points [-]

[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way.

Sure, this is a valid hypothesis. But my assessment and the individual points I offered above can be applied to this possibility as well, uncovering the same issues with it.

In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles, noting undesirable consequences.

Novel situations can be seen through the lens of certain stories because the stories operate at such a level of abstraction that they are applicable to all human situations. The most universal and permanent levels of abstraction are considered archetypal. These would apply equally to a human living in a cave thousands of years ago and to a Wall Street lawyer. Of course, it is also true that the stories always need to be revisited as the environment changes, to avoid their dissolution into dogma. Interestingly, it turns out that there are stories that recognize this need for 'revisiting' and deal with the strategies and pitfalls of the process.

Comment author: Lumifer 05 September 2017 02:56:04PM 1 point [-]

I think it depends, for example on who are your peers in the "peer review" process and what kind of online forums you frequent.

Generally speaking, this is the problem of filtering out noise and finding honest and competent people to comment on your papers. It's a hard problem. Peer review is not a perfect solution, but neither is online discussion.

Comment author: Lumifer 05 September 2017 02:52:40PM 0 points [-]

Depends on your definition of sociopathy. Not under DSM.

Comment author: Lumifer 05 September 2017 02:52:01PM *  0 points [-]

Generally speaking, it's fine to discuss political philosophy and political theory. What LW tries to avoid is dumb tribal-emotional fights along the lines of "Trump is a moron! No, he will MAGA!" which just make everyone stupider.

Of course you should be prepared for disagreement -- this is a diverse forum, so it's guaranteed that there will be someone who doesn't like your ideas. Note that this is normal -- ideas that everyone agrees with are too milquetoast to be interesting.

Comment author: Wei_Dai 05 September 2017 02:47:50PM 0 points [-]

But there are no/few philosophers working in "anthropic reasoning" - there are many working in "anthropic probability", to which my paper is an interesting irrelevance. it's essentially asking and answering the wrong question, while claiming that their own question is meaningless

Seems like a good explanation of what happened to this paper specifically.

(and doing so without quoting some of the probability/decision theory stuff which might back up the "anthropic probabilities don't exist/matter" claim from first principles)

I guess that would be the thing to try next, if one was intent on pushing this stuff back into academia.

And the main problem with academia here is that people tend to stay in their silos.

By doing that they can better know what the fashionable topics are, what referees want to see in a paper, etc., which help them maximize the chances of getting papers published. This seems to be another downside of the current peer review system as well as the larger publish-or-perish academic culture.

Comment author: Lumifer 05 September 2017 02:45:24PM *  0 points [-]

Is there some sort of new member "kiddie pool"

This is the kiddie pool.

there needs to be some sort of safe "bumper bowling" alley available

You are on an internet forum. How much safer do you want to be?

It is perfectly fine to try, fail, and try again. In fact, that's how most of learning works.

Sure, some people will misunderstand you. Take it as an opportunity to practice expressing yourself very very clearly.

Comment author: Wei_Dai 05 September 2017 02:32:21PM *  0 points [-]

I guess I can see how it might be too much effort if you're trying to participate in online discussions in addition to academia (and your main effort by necessity has to be in academia because that's your livelihood). If you only had to do the former though, it doesn't seem that bad, at least in my experience. (Would appreciate a link to Scott Aaronson's post if you can find it.)

EDIT: Maybe as a busy academic, just look at posts that are already highly upvoted or have positive comments from people you trust. Is it still too much effort if you did that?

Comment author: cousin_it 05 September 2017 01:41:28PM *  0 points [-]

I always felt that AVL trees were easier to understand than red-black. Just wrote some Haskell code for you. As you can see, both insertion and deletion are quite simple and rely on the same rebalancing operation.
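The linked Haskell isn't reproduced here, but the point about AVL trees being simpler can be sketched. The following is a rough Python illustration of AVL insertion (not cousin_it's code; all names and structure are my own), showing the single rebalancing operation that both insertion and deletion reuse:

```python
# Sketch of an AVL tree: insertion (and deletion, not shown) walk back up
# the tree calling the same rebalance() step at every node.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.height = 1 + max(height(left), height(right))

def height(node):
    return node.height if node else 0

def rotate_right(y):
    x = y.left
    return Node(x.key, x.left, Node(y.key, x.right, y.right))

def rotate_left(x):
    y = x.right
    return Node(y.key, Node(x.key, x.left, y.left), y.right)

def rebalance(node):
    # Restore the AVL invariant: subtree heights differ by at most 1.
    if height(node.left) - height(node.right) > 1:
        if height(node.left.left) < height(node.left.right):
            node.left = rotate_left(node.left)     # left-right case
        return rotate_right(node)                  # left-left case
    if height(node.right) - height(node.left) > 1:
        if height(node.right.right) < height(node.right.left):
            node.right = rotate_right(node.right)  # right-left case
        return rotate_left(node)                   # right-right case
    return Node(node.key, node.left, node.right)   # just refresh the height

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        return rebalance(Node(node.key, insert(node.left, key), node.right))
    if key > node.key:
        return rebalance(Node(node.key, node.left, insert(node.right, key)))
    return node
```

Deletion follows the same shape: remove the key, then call `rebalance` on the way back up. There are only four rotation cases in total, versus the many recoloring cases of red-black deletion.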

Comment author: turchin 05 September 2017 01:40:48PM 0 points [-]

One advantage of peer review is that it helps the author to improve the paper. I have one published article that greatly benefited from two anonymous reviewers who found some important flaws. If I had just published it somewhere, readers might simply have ignored it and the improvement would not have happened. But the peer review system forced the reviewers to search for flaws according to a questionnaire, and each had to write a couple of pages.

Comment author: IlyaShpitser 05 September 2017 01:37:04PM *  0 points [-]

I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas".

It takes a lot of effort, so much so that academics just gave up (Scott Aaronson had a post on this). I gave up doing this here.

I agree that peer review has a lot of problems, though.

Comment author: turchin 05 September 2017 01:34:45PM 0 points [-]

Yes, but there are situations where the race is not tight, say 40 to 60, and it is very improbable that my vote alone will change anything. But if we assume that something like ADT works, all people similar to me will behave as if I command them, and the total utility will be millions of times greater, as my vote turns into a million votes from people similar to me.

Comment author: TheAncientGeek 05 September 2017 01:13:06PM 0 points [-]

I can think of two possibilities:

[1] that morality is based on rational thought as expressed through language

[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition.

[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles, noting undesirable consequences.

Comment author: Elo 05 September 2017 12:19:33PM 0 points [-]

There are some chat groups you can join, you can post in the open thread. You can try and fail. If you want to write a post and are not sure about the quality - make sure to have spent 2 hours writing it (if not more like 20 hours) as a fail-safe.

Yes we come across as elitist. As long as you are willing to learn, willing to be curious about why others think differently from you and willing to change your mind - that's what matters.

If you want to teach yourself and you are willing to read and do your research, you will fit right in. That means books, papers, theories. We are always ferocious about knowledge. And if you can teach us - that would be great too.

Comment author: Elo 05 September 2017 12:13:52PM 0 points [-]

By the time you have named a political figure of recent history you are already in the territory of what might be people's identities.

Sometimes by naming an ideology you challenge someone's identity. Then, without realising it, you are having a debate about how a person's own character must be wrong because this ideology is wrong. From there it is a short step to full flame wars.

Part of the problem is that people are not good at talking about their ideologies while separating those ideologies from themselves.

There is theoretical discussion here. Some people will choose to not participate, if there is too much talk there will be complaints.

We treat "not too much" as a common resource, as in the tragedy of the commons. It's very hard to agree on how much is not too much but still worth it.

There is a series called "Politics is the Mind-Killer" which fuelled a lot of the avoidance of talking about politics. There are definitely other places to talk about politics on the internet. Having said that, if you can explain yourself (when you do) by moving up and down the ladder of abstraction while not naming ideologies or politicians - you are welcome to start a discussion.

Rationality has lots of parts. It has the parts that have you working out how to conclude that a coin flip is or is not biased (epistemics) and it has the parts that have you deciding how to bet on the coin in real life (instrumental). Yes some of that is socio-political. But some of it is also working out how to stop procrastinating or how to lose weight.

Comment author: jmh 05 September 2017 11:36:48AM 0 points [-]

Would it be correct to define selfish utility as sociopathic?

Comment author: RobQuesting 05 September 2017 09:58:13AM 0 points [-]

Regarding politics, and the frowning: is it acceptable to focus on measurable results rather than ideologies (or political "teams" - re: cerulean vs blue vs green)? Whilst I understand the tribalism you refer to, it is a bias this group and website seem to be inherently about combating; as such, falsely dichotomous thinking is irrational.

For example: No matter which party is in power, across most of the world's countries, economic systems have remained largely unaltered over recent decades. The social and psychological effects on cultural norms, born of the structural economic framework, ought not be discussed despite their effect on trends of perceived rationality (the bias of culturally normal rational thought), because this topic bleeds into "politics". I don't see how economic debate can be considered separate from political or cultural debate. I don't see how rationality can be separated from politics.

Is that too political for the scope of this forum? Interdependent causation?

If so, that's okay, it just negates about half of my reasons for engaging here.

I don't know how it is possible to separate rational discourse and political discourse. I don't see how there can be a firewall between them. The social is the political, which defines what is considered rational, which is in turn influenced by cultural normalcy in the form of bias. Art, culture, community, education, social and even civilisation outcomes seem inextricable from the organisational structure we call the political sphere.

I could be wrong about all of the above.

It may be better to let me know now if political discourse, about theory and measurable socio-cultural results, is beyond the scope of this forum, because then I won't waste anyone's time.

I opened by saying: "I have unfortunately come to the conclusion that socioeconomic revolt, by any means necessary, is a moral and ethical imperative for all people, to maximise the chances of the survival of the human species."

This is my present, primary concern. If I am not allowed to discuss this, I am in the wrong place. Thanks.

Comment author: RobQuesting 05 September 2017 08:54:09AM 0 points [-]

Is there some sort of new member "kiddie pool" where people aspiring to improve their own rational processes can feel free to speak as they/we wish without knowing the correct terminology, and without an academic background regarding logic itself?

I guess, to learn, and express, in aid of learning, there needs to be some sort of safe "bumper bowling" alley available.

I have little access to formal education, and so, in the interests of self improvement, would like discourse which is both forgiving and conducive to improving discursive quality.

I feel I am just as likely to say something which is misinterpreted, due to (what amounts to) sub-cultural norms here, from this community, as I am to say something accurately insightful. This is intimidating, despite my intention to improve my expressive accuracy. Maybe I am intimidated by elitism and expertise, to the point of rejecting the service itself? This is probably biased and irrational, but worth describing, because the act of changing cultural attitudes (in service to the goal of increasing societal rationality), requires us all to be aware of the limitations of a macro-cultural audience.

Maybe I just mean to ask: Is there a way to throw ideas around and see what sticks, without becoming a forum pariah?

Comment author: Elo 05 September 2017 08:34:29AM 0 points [-]

Welcome! You may find the topic of politics is generally frowned upon around here because of the tendency for people to go a little bit tribal in the process of talking about it. "us or them" and all that.

Aside from that, glad to have you on board and willing to question your beliefs. Feel free to ask any questions you have :)

Comment author: RobQuesting 05 September 2017 08:19:29AM 0 points [-]

Hello to all rationalistas. (?)

I am new here, and I intend to lurk, doing the reading regularly, and catching up from a position of being far behind, until I feel more confident about contributing. I only discovered this group a few days ago.

I have unfortunately come to the conclusion that socioeconomic revolt, by any means necessary, is a moral and ethical imperative for all people, to maximise the chances of the survival of the human species.

I hope to be proven wrong, and have my bias revealed and dissected. I am perhaps rather desperate to be proven wrong, because I do not like my own conclusions.

Thanks in advance for any help I receive, and am able to reciprocate.

Comment author: Unnamed 05 September 2017 04:31:09AM 2 points [-]

This is not an easy-to-implement tip, but my suggestion is to try to get into a mental space where the social things that you're trying to do are easy / come naturally / are the things that you want to do in the moment.

A person who is naturally friendly, non-critical, and interested in hearing about you probably did not get that way just by practicing each of those behaviors as habits; they have some deeper motivation/perspective/emotion/something that those behaviors naturally follow from. Try to get in touch with that deeper thing.

One thing that helps with this is noticing when you've had the experience of being in a mental space where the things come more naturally (even if only briefly, or only marginally more naturally). Then you can try to get back into that mental space, and take it further.

Another thing that can help is putting yourself in different social situations, including ones that you're liable to get swept up in (that is, ones that are likely to put you in a different mental space from where you usually are). That can be a quicker way to get some experience being in different modes. Reading books (and watching videos, etc.) can also help, especially if you do things like these as you read them.

Comment author: fortyeridania 05 September 2017 02:49:17AM 1 point [-]

but I don't feel them becoming habitual as I would like

Have you noticed any improvement? For example, an increase in the amount of time you feel able to be friendly? If so, then be not discouraged! If not, try changing the reward structure.

For example, you can explicitly reward yourself for exceeding thresholds (an hour of non-stop small talk --> extra dark chocolate) or meeting challenges (a friendly conversation with that guy --> watch a light documentary). Start small and easy. Or: Some forms of friendly interaction might be more rewarding than others; persist in those to acclimate yourself to longer periods of socialising.

There's a lot of literature on self-management out there. If you're into economics, you might appreciate the approach called picoeconomics.

Caution: In my own experience, building new habits is less about reading theories and more about doing the thing you want to get better at, but it's disappointingly easy to convince myself that a deep dive into the literature is somehow just as good; your experience may be similar (or it may not).

Comment author: Yosarian2 05 September 2017 01:23:15AM 0 points [-]

I don't believe that my vote will change a result of a presidential election, but I have to behave as if it will, and go to vote.

The way I think of this is something like this:

There is something like a 1 in 10 million chance that my vote will affect the presidential election (and also some chance of my vote affecting other important elections, like Congress, Governor, etc.).

Each year, the federal government spends $3.9 trillion. Its influence is probably significantly greater than that, since that doesn't include the effect of laws and regulations and such, but let's go with that number for the sake of argument.

If you assume that both parties are generally well-intentioned and will mostly use most of that money in ways that create positive utility in one way or another, but you think that party A will spend it 10% more effectively than party B, that's a difference in utility of $390 billion.

So a 1 in 10 million chance of having a $390 billion effect works out to an expected utility of $39,000 for something that will take you maybe half an hour. (Plus, since federal elections are only every 2 years, it's actually double that.)

I could be off by an order of magnitude with any of these estimates (maybe you have a 1 in 100 million chance of making a difference, or maybe one party is only 1% better than the other), but from a utilitarian point of view it seems obviously worth doing even so.
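The arithmetic above can be laid out explicitly. All of these figures are the comment's illustrative assumptions (the 1-in-10-million decisiveness chance, the 10% effectiveness gap, the $3.9T budget), not measured quantities:

```python
# Back-of-the-envelope expected value of one vote, using the figures
# assumed above (all illustrative).
p_decisive = 1 / 10_000_000      # chance one vote swings the election
budget = 3.9e12                  # annual federal spending, dollars
effectiveness_gap = 0.10         # assumed: party A spends 10% better
years_per_election = 2           # a federal election covers ~2 years

value_gap = budget * effectiveness_gap          # $390 billion per year
ev_per_year = p_decisive * value_gap            # ~$39,000
ev_per_vote = ev_per_year * years_per_election  # ~$78,000 per half-hour
```

Scaling the estimates down by an order of magnitude or two still leaves an expected value of hundreds of dollars per half hour, which is the comment's point.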

The same logic can probably be used for these kind of existential risks as well.

Comment author: Elo 05 September 2017 01:14:04AM 0 points [-]

I have some research that will help you on your quest to make it more easy for you to do the thing.

  1. NVC (Nonviolent Communication): https://youtu.be/l7TONauJGfc (and accompanying books)
  2. Daring Greatly - Brené Brown (book); brief review: https://youtu.be/iCvmsMzlF7o
  3. Search Inside Yourself - book (mindfulness)

NVC will keep you aware of what takes you out of the habit, vulnerability will keep you on track to a different strategy, and Search Inside Yourself will encourage practice on the topic of being thoughtful and caring toward the people around you.

In this order.

Comment author: Rossin 05 September 2017 12:35:10AM 0 points [-]

Does anyone have any tips or strategies for making better social skills habitual? I'm trying to be more friendly, compliment people, avoid outright criticism, and talk more about other people than myself. I can do these things for a while, but I don't feel them becoming habitual as I would like. Being friendly to people I do not know well is particularly hard, when I'm tired I want to escape interaction with everyone except close friends and family.

In response to comment by gwern on P: 0 <= P <= 1
Comment author: Rossin 05 September 2017 12:20:40AM 0 points [-]

That's a very interesting condition, and I will agree that it indicates that it is possible I could come to the belief that I did not exist if some event of brain damage or other triggering event occurred to cause this delusion. However, I would only have that belief because my reasoning processes had been somehow broken. It would not be based on a Bayesian update because the only evidence for not existing would be ceasing to have experiences, which it seems axiomatic that I could not update upon. People with this condition seem to still have experiences, they just strangely believe that they are dead or don't exist.

Comment author: Yosarian2 04 September 2017 11:12:35PM 0 points [-]

It would certainly have to depend on the details, since obviously many people do not choose the longevity treatments that are already available, like healthy eating and exercise, even though they are usually not very expensive.

Eh. That seems to be a pretty different question.

Let's say that an hour of exercise a day will extend your lifespan by 5 years. If you sleep 8 hours a night, that hour is about 6.3% of your waking time; if you live 85 years without exercise vs 90 years with exercise, you end up with close to the same amount of non-exercising waking time either way. So whether it's worthwhile probably depends on how much you enjoy or don't enjoy exercise, how much you value free time when you're 30 vs time when you're 85, etc.
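That trade-off checks out numerically. Using the hypothetical figures above (1 hour/day of exercise, 8 hours of sleep, 85 vs. 90 years of life):

```python
# Checking the exercise arithmetic with the comment's assumed numbers.
waking_hours_per_day = 16                   # 24 hours minus 8 of sleep
exercise_share = 1 / waking_hours_per_day   # 6.25%, "about 6.3%" as stated

days_per_year = 365.25
waking_per_year = waking_hours_per_day * days_per_year

# Lifetime non-exercising waking hours under each scenario:
no_exercise = 85 * waking_per_year                         # live 85, no exercise
with_exercise = 90 * waking_per_year - 90 * days_per_year  # live 90, 1 h/day
```

The two totals come out within about 3,650 hours of each other over a whole lifetime (roughly 496,700 vs. 493,100 waking non-exercise hours), so the free-time difference really is small relative to the 5 extra years of life.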

I think exercise is a good deal all around, but then again that's partly because I think there's a significant chance that we will get longevity treatments in our lifetime, and want to be around to see them. It's not the same kind of clear-cut decision that, say, "take a pill every morning to live 5 years longer" would be.

Comment author: Daniel_Burfoot 04 September 2017 09:57:11PM *  1 point [-]

Has anyone studied the Red Black Tree algorithms recently? I've been trying to implement them using my Finite State technique that enables automatic generation of flow diagrams. This has been working well for several other algorithms.

But the Red Black tree rebalancing algorithms seem ridiculously complicated. Here is an image of the deletion process (extracted from this Java code) - it's far more complicated than an algorithm like MergeSort or HeapSort, and that only shows the deletion procedure!

I'm weighing two hypotheses:

  1. Keeping a binary tree balanced in N log N time is an intrinsically complex task.
  2. There is some much simpler method to efficiently maintain balance in a binary tree, but nobody bothered looking for it after the RB tree algorithms and analysis were published.

I'm leaning toward the latter theory. It seems to me that most of the other "elementary" algorithms of computer science are comparatively simple, so the weird overcomplexity of the tool we use for binary tree balancing is some kind of oversight. Here is the Wiki page on RB trees - notice how the description of the algorithm is extremely hard to understand.

Comment author: entirelyuseless 04 September 2017 07:17:40PM 0 points [-]

But depending on the details, I think it will be pretty high.

It would certainly have to depend on the details, since obviously many people do not choose the longevity treatments that are already available, like healthy eating and exercise, even though they are usually not very expensive. Sure, maybe someone will be more motivated by an extra 50-100 years than by an extra 5-15. But then again maybe they won't.

Comment author: Yosarian2 04 September 2017 07:13:06PM 0 points [-]

This is a falsifiable empirical prediction. We will see whether it turns out to be true or not.

Yes, agreed.

I should probably be more precise. I don't think that 100% of people will necessarily choose longevity treatments once they become available. But depending on the details, I think the percentage will be pretty high. I think that a very high percentage of people who today sound ambivalent about it will go to great lengths to get it once it becomes something that exists in reality.

I also think that the concern that "other people" will get to live a very long time while you might not will motivate a lot of people. Even now, people are deeply worried about the fear that rich people might live forever while they might not; even people who don't seem to really believe it's possible seem to be worried about that, which is interesting.

Comment author: Manfred 04 September 2017 04:40:31PM *  0 points [-]

And what if the universe is probably different for the two possible copies of you, as in the case of the Boltzmann brain? Presumably you have to take some weighted average of the "non-anthropic probabilities" produced by the two different universes.

Re: note. This use of SSA and SIA can also be wrong. If there is a correct method for assigning subjective probabilities to what S.B. will see when she looks outside, it should not be an additional thing on top of predicting the world; it should be a natural part of the process by which S.B. predicts the world.

EDIT: Okay, getting a better understanding of what you mean now. So you'd probably just say that the weight on the different universes should be exactly this non-anthropic probability, assigned by some universal prior or however one assigns probability to universes. My problem with this is that when assigning probabilities in a principled, subjective way - i.e. trying to figure out what your information about the world really implies, rather than starting by assuming some model of the world - there is not necessarily an easily-identifiable thing that is the non-anthropic probability of a Boltzmann brain copy of me existing, and this needs to be cleared up in a way that isn't just about assuming a model of the world. If anthropic reasoning is, as I said above, not some add-on to the process of assigning probabilities but a part of it, then it makes less sense to think something like "just assign probabilities, but don't do that last anthropic step."

But I suspect this problem actually can be resolved. Maybe by interpreting the non-anthropic number as something like the probability that the universe is a certain way (i.e. assuming some sort of physicalist prior), conditional on there only being at least one copy of me, and then assuming that this resolves all anthropic problems?

Comment author: HungryHippo 04 September 2017 04:18:13PM *  3 points [-]

With the Dota OpenAI bot, AlphaGo, and Deep Blue --- it's funny how we keep training AIs to play zero-sum war simulation games against human enemies.

Comment author: entirelyuseless 04 September 2017 04:12:46PM 0 points [-]

There's a bit of circularity here, since I acknowledge that it is possible to think about belief in such a way that it would not be voluntary. But I voluntarily choose to think about belief as voluntary, namely as having a definition that implies that it is voluntary, because I think that the consequences of thinking about it this way (both epistemically and instrumentally) are better than the consequences of thinking about it in such a way that it would be involuntary.

The reason both are possible is that saying that someone believes something is a vague generalization. It does not have rigid borders. It normally includes both voluntary and involuntary aspects, and we normally expect these things to go together. But when we consider edge cases, there are different places where we could draw the line, and common sense does not sufficiently determine the matter. Consequently we have to choose. I choose to draw it by saying belief is what you voluntarily treat as a fact. I think that this corresponds better to common usage than alternative definitions, even if it has a few odd edge cases; they are much less odd than the ones that follow from definitions implying that it is involuntary.

"Voluntary" is not a confused term; it means "because I wanted it." Feeling like I made a choice would often be a consequence, but not always, since in some cases I would want something so much that I can't conceive of wanting anything different. When I say that belief is voluntary, I mean that people have beliefs because they want to have them.

What I mean by "belief": treating something as a fact, namely in all the ways that one is able to do so. So if you have an involuntary expectation of something, I do not count that as a belief unless you choose to act as if the thing will actually happen; if you choose to act as if it will not, then I say that you feel an inclination to have that belief, but choose not to have it.

I understand why you're giving the definition you suggest, and I agree that expectations are involved in understanding the meaning of any statement (we've had that discussion before.) Nonetheless, you cannot understand the idea of "this will correspond with my expectation" unless you already feel you understand "this will happen." So at least the idea of correspondence with reality has to come before the idea of fulfilled expectations, even if we cannot fully cash out the idea of correspondence with reality without talking about our expectations.

I agree that people's concrete beliefs involve many involuntary things, and I agree that theoretically you could define belief to refer to some of these things. But I do not think this corresponds best with common usage, I don't think it gives us the best understanding of what is going on, and I don't think it has the best practical consequences.

Comment author: tut 04 September 2017 04:03:41PM *  0 points [-]

You tried to access the address https://lbry.io/news/20000-illegal-college-lectures-rescued, which is currently unavailable. Please make sure that the web address (URL) is correctly spelled and punctuated, then try reloading the page.

Edit: So that's weird. The above is what I got in Opera. But in Firefox I get a page that says (among many other things) that lbry isn't available to the public

Comment author: entirelyuseless 04 September 2017 03:58:12PM 1 point [-]

This is a falsifiable empirical prediction. We will see whether it turns out to be true or not. I think it's more likely that you will see some ambivalence in people's response. I do see many people around the age of 80 who think they have lived long enough, and it pretty clearly has nothing to do with their state of health. I expect the same thing to happen in many cases even after aging can be prevented biologically. Calling it "sour grapes" is just not recognizing that some people are different from you.

Comment author: JohnGreer 04 September 2017 03:48:13PM *  0 points [-]

You might be interested in Inbox When Ready which can hide your inbox and do a number of other things.

Comment author: Yosarian2 04 September 2017 02:16:01PM 1 point [-]

I don't think lack of life extension research funding actually comes from people not wanting to live. I think it has more to do with the fact that the vast majority of people don't take it seriously yet and don't believe that we could actually significantly change our lifespan. That's compounded with a kind of "sour grapes" defensive reflex: when people think they can never get something, they try to convince themselves they don't really want it.

I think that if progress is made, at some point there will be a phase change: more people will start to realize that it is possible, and will suddenly flip from not caring at all to caring a great deal.

Comment author: Elo 04 September 2017 10:50:42AM 0 points [-]
Comment author: tut 04 September 2017 10:26:24AM 0 points [-]

The link is 404 enabled. Or at least it was the two times I clicked on it.

Comment author: Thomas 04 September 2017 07:42:30AM 8 points [-]

No problem this week, just an appreciation for people of LessWrong who can be right, when I am wrong.

Comment author: fortyeridania 04 September 2017 04:48:01AM *  0 points [-]

Is one's answer to the dilemma supposed to illuminate something about the title question? Presumably a large part of the worth-livingness of life consists in the NPV of future experiences, not just in past experiences.

  • Title question: Yes. Proof by revealed preference:

(1) Life is a good with free disposal.

(2) I am alive.

(3) Therefore, life is worth living.

  • Dilemma: Choose the second, on the odds that God changes its mind and lets you keep living, can't find you again the second time around, is itself annihilated in the interim, etc.

Quibble: Annihilationism is an eschatological doctrine about the final fate of all souls, not the simple event of annihilation.

Comment author: gworley 04 September 2017 03:25:26AM 0 points [-]

I don't really follow why it should be that beliefs are necessarily voluntary.

Maybe it's a matter of what we each think "belief" means. Can you be a bit more precise? My conception is somewhere in the range of experience of an experience that gives a correspondence between the experienced experience and expected other experiences. Basically that to believe is to expect or make a prediction about future experience and a belief is a reification of the experience of believing. In this sense I don't really see why belief couldn't also be involuntary, for some vague sense of "voluntary" like "feels like I made a choice" since "voluntary" seems a bit of a confused term itself unless you have a firm sense of causality and intention/will.

Comment author: vaultDweller 04 September 2017 01:47:21AM 0 points [-]

"... as the old saying went: 'Not all windowless vans have residential surveillance equipment.' In other words, not everything can be as good as it seems."

  • Welcome to Night Vale (novel)
In response to comment by gwern on P: 0 <= P <= 1
Comment author: g_pepper 03 September 2017 10:39:21PM *  0 points [-]

Which of Rossin's statements was your "Cotard delusion" link intended to address? It does seem to rebut the statement that "nothing I could experience could convince me that I do not exist", since experiencing the psychiatric condition mentioned in the link could presumably cause Rossin to believe that he/she does not exist.

However, the link does nothing to counter the overall message of Rossin's post which is (it seems to me) that "I think, therefore I am" is a compelling argument for one's own existence.

BTW, I agree with the general notion that from a Bayesian standpoint, one should not assign p=1 to anything, not even to "I exist". However, the fact of a mental condition like the one described in your link does nothing (IMO) to reduce the effectiveness of the "I think, therefore I am" argument.

In response to comment by Rossin on P: 0 <= P <= 1
Comment author: gwern 03 September 2017 08:34:53PM 0 points [-]
In response to P: 0 <= P <= 1
Comment author: Rossin 03 September 2017 08:17:39PM 1 point [-]

I found the fact that Eliezer did not mention the classic "I think, therefore I am" argument in these essays odd as well. It does seem as though nothing I could experience could convince me that I do not exist, because by experiencing it, I am existing. Therefore, assigning a probability of 1 to "I exist" seems perfectly reasonable.

Comment author: Rossin 03 September 2017 08:10:02PM 0 points [-]

My first thought is one of some sort of heroic defiance against a God that ridiculous and tyrannical, yelling imprecations at the God while he presumably annihilates my soul. That probably wouldn't be smart, though. I have enjoyed life thus far, so I guess reliving it would be enjoyable as well (I imagine I would have to have no prior knowledge of having already lived it), so I suppose I would choose the second option.

Comment author: Dr_Manhattan 03 September 2017 05:16:22PM 0 points [-]

thanks, fixed!

Comment author: Torello 03 September 2017 02:41:48PM 0 points [-]
Comment author: Stuart_Armstrong 03 September 2017 01:44:33PM 0 points [-]

I'll deal with the non-selfish case, which is much easier.

In that case, Earth you and Boltzmann brain you have the same objectives. And most of the time, these objectives make "Boltzmann brain you" irrelevant, as their actions have no consequences (one exception could be "ensure everyone has a life that is on average happy", in which case Earth you should try to always be happy, for the sake of the Boltzmann brain yous). So most of the time, you can just ignore Boltzmann brains in ADT.

Yes, that is a natural reference class in ADT (note that it's a reference class of agents-moments making decisions, not of agents in general; it's possible that someone else is in your reference class for one decision, but not for another).

But "all beings who think about DA" is not a natural reference class, as you can see when you start questioning it ("to what extent do they think about DA? Under what name? Does it matter what conclusions they draw?...").

Comment author: turchin 03 September 2017 12:25:37PM *  0 points [-]

I agree with this: "yes. You are both. And you currently control the actions of both. It is not meaningful to ask 'which' one you are."

But I have the following problem: what if the best course of action for me depends on whether I am a Boltzmann brain or a real person? It looks like I still have to update according to which group is larger: real me or Boltzmann brain me.

It also looks like we use "all decision processes like mine" as something like what I called before a "natural reference class". And in the case of DA, it is all beings who think about DA.

Comment author: Stuart_Armstrong 03 September 2017 09:47:21AM 0 points [-]

Actually, the probability that you should assign to there being a copy of you is not defined under your system - otherwise you'd be able to conceive of a solution to the sleeping beauty problem

Non-anthropic ("outside observer") probabilities are well defined in the sleeping beauty problem - the probability of heads/tails is exactly 1/2 (most of the time, you can think of these as the SSA probabilities over universes - the only difference being in universes where you don't exist at all). You can use a universal prior or whatever you prefer; the "outside observer" doesn't need to observe anything or be present in any way.

I note that you need these initial probabilities in order for SSA or SIA to make any sense at all (pre-updating on your existence), so I have no qualms claiming them for ADT as well.
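The distinction being drawn here, between the per-experiment ("outside observer") probability of heads and the per-awakening frequency, can be made concrete with a quick simulation. This is only an illustrative sketch of the standard Sleeping Beauty setup (fair coin; heads gives one awakening, tails gives two), not anything specific to ADT; the variable names and tolerances are my own:

```python
import random

# Sleeping Beauty sketch: a fair coin is flipped once per experiment.
# Heads -> Beauty is woken once; tails -> she is woken twice.
random.seed(0)
N = 100_000

coin_heads = 0        # outside-observer count: one tally per experiment
awakenings_heads = 0  # awakening-weighted count: one tally per wake-up
total_awakenings = 0

for _ in range(N):
    heads = random.random() < 0.5
    if heads:
        coin_heads += 1
        awakenings_heads += 1
        total_awakenings += 1
    else:
        total_awakenings += 2

# Per-experiment ("non-anthropic") frequency of heads comes out near 1/2,
# while the per-awakening frequency comes out near 1/3.
p_outside = coin_heads / N
p_per_awakening = awakenings_heads / total_awakenings
print(p_outside, p_per_awakening)
```

The point of the sketch is that both numbers are perfectly well defined as frequencies; the halfer/thirder dispute is over which one "the probability of heads" should refer to, which is exactly the question the non-anthropic probabilities sidestep.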

Comment author: Risto_Saarelma 03 September 2017 08:33:24AM 0 points [-]

This isn't working for me as the intuition pump you seem to want it to be. I think life is worth living, and I'd just cut to the chase and pick 1, because option 2 doesn't make sense as a way to get more life. Pattern theory of identity: life is a process, not a weighted lump of time-space-matter-stuff where you can just say "let's double the helping" like this. If you run the exact same process twice, that doesn't get you any new patterns and new life compared to just running it once.

Or if the idea is that I'd be aware of having gotten a second run, the part about the exact same decisions and experiences seems to make this amount to spending a few decades watching a boring home video with nothing you-on-second-trip can do about it and constantly aware that you'll be annihilated at the end. I guess the "maybe the horse will learn to sing" thinking would make sense here, but that's just fighting the hypothetical that the thought experiment will unfold exactly as described.

Comment author: Brian_Tomasik 03 September 2017 06:35:05AM 0 points [-]

I assume the thought experiment ignores instrumental considerations like altruistic impact.

For re-living my actual life, I wouldn't care that much either way, because most of my experiences haven't been extremely good or extremely bad. However, if there was randomness, such that I had some probability of, e.g., being tortured by a serial killer, then I would certainly choose not to repeat life.

Comment author: ShardPhoenix 03 September 2017 06:10:03AM *  1 point [-]

You can also move windows between monitors with Win + Shift + Left/Right.

Comment author: Prometheus 03 September 2017 05:56:58AM 0 points [-]

I think contrarians are severely undervalued. I was originally a contrarian because 1: it's fun to have a whole room mad at you; and 2: I always found it unnerving when a whole group of people all agreed on something, even if I mostly agreed with them. I found people's comfort zone discomforting. Now, thanks to my research into groupthink, and the evidence that even one dissenter is enough to cast doubt on someone's perceptions and opinions, I've become something of a contrarian crusader. Pedophiles, terrorists, Nazis: the more toxic, the better. I do this for the reasons above... and because it's a whole lot of fun.

Comment author: Manfred 03 September 2017 01:06:31AM *  0 points [-]

That's not quite what I was talking about, but I managed to resolve my question to my own satisfaction anyhow. The problem of conditionalization can be worked around fairly easily.

Suppose that there is a 50% chance of there being a Boltzmann brain copy of you

Actually, the probability that you should assign to there being a copy of you is not defined under your system - otherwise you'd be able to conceive of a solution to the sleeping beauty problem - the entire schtick is that Sleeping Beauty is not merely ignorant about whether another copy of her exists, but that it is supposedly a bad question.

Hm, okay, I think this might cause trouble in a different way than I was originally thinking of. Because all sorts of things are possibilities, and it's not obvious to me how ADT is able to treat reasonable anthropic possibilities differently from astronomically-unlikely ones, if it throws out any measure of unlikeliness. You might try to resolve this by putting in some "outside perspective" probabilities, e.g. that an outside observer in our universe would see me as normal most of the time and as a Boltzmann brain less of the time, but this requires making drastic assumptions about what the "outside observer" is actually outside of, observing. If I really were a Boltzmann brain in a thermal universe, an outside observer would think I was more likely to be a Boltzmann brain. So postulating an outside perspective is just an awkward way of sneaking in probabilities gained in a different way.

This seems to leave the option of really treating all apparent possibilities similarly. But then the benefit of good actions in the real world gets drowned out by all the noise from all the unlikely possibilities - after all, for every action, one can construct a possibility where it's both good and bad. If there's no way to break ties between possibilities, no ties get broken.

Comment author: ESRogs 03 September 2017 12:19:13AM 0 points [-]

is likely to be different

Did you mean "likely to be difficult"?

Comment author: entirelyuseless 03 September 2017 12:01:03AM 1 point [-]

I agree with all this.

Comment author: gwern 02 September 2017 11:45:05PM 1 point [-]
Comment author: morganism 02 September 2017 11:15:47PM 0 points [-]

Application of Systematic Review Methods in an Overall Strategy for Evaluating Low-Dose Toxicity from Endocrine Active Chemicals

https://www.nap.edu/catalog/24758/application-of-systematic-review-methods-in-an-overall-strategy-for-evaluating-low-dose-toxicity-from-endocrine-active-chemicals

Meta-study that tries to deal with cross-study imbalances, and animal dose vs. internal uptake. Seems balanced. Phthalates and flame retardants are the models studied.

Comment author: RowanE 02 September 2017 11:13:21PM 0 points [-]

That's the reason she liked those things in the past, but "achieving her goals" is redundant; she should have known years in advance about that, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding in Jupiter with thoughts of piano lessons?

Hedonism isn't bad, orgasmium is bad because it reduces the complexity of fun to maximising a single number.

I don't want to be upgraded into a "capable agent" and then cast back into the wilderness from whence I came, I'd settle for a one-room apartment with food and internet before that, which as a NEET I can tell you is a long way down from Reedspacer's Lower Bound.

Comment author: Stuart_Armstrong 02 September 2017 09:28:03PM 0 points [-]

in which I try to construct a clearer example of ADT reasoning for a civilization which is at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world situation)

I do not think there is a sensible ADT DA that can be constructed for reasonable civilizations. In ADT, only weird utilities like average utilitarians have a DA.

SSA has a DA. ADT has an SSA-like agent, which is the average utilitarian. Therefore, ADT must have a DA. I constructed it. And it turns out the ADT DA constructed this way has no real doom aspect to it; it has behaviour that looks like avoiding doom, but only for agents with strange preferences. ADT does not have a DA with teeth.

Comment author: ArisKatsaris 02 September 2017 09:18:45PM 0 points [-]

Short Online Texts Thread

Comment author: ArisKatsaris 02 September 2017 09:18:41PM 0 points [-]

Online Videos Thread

Comment author: ArisKatsaris 02 September 2017 09:18:37PM 0 points [-]

Fanfiction Thread

Comment author: ArisKatsaris 02 September 2017 09:18:31PM 0 points [-]

Nonfiction Books Thread

Comment author: ArisKatsaris 02 September 2017 09:18:28PM 0 points [-]

Fiction Books Thread

Comment author: ArisKatsaris 02 September 2017 09:18:24PM 0 points [-]

TV and Movies (Animation) Thread

Comment author: ArisKatsaris 02 September 2017 09:18:20PM 0 points [-]

TV and Movies (Live Action) Thread

Comment author: ArisKatsaris 02 September 2017 09:18:17PM 0 points [-]

Games Thread

Comment author: ArisKatsaris 02 September 2017 09:18:13PM 0 points [-]

Music Thread

Comment author: ArisKatsaris 02 September 2017 09:18:09PM 0 points [-]

Podcasts Thread
