Comment author: Jayson_Virissimo 16 August 2016 03:30:56AM 0 points [-]

I like your failed arguments section. IMO, frequent reminders about the phenomenon of using "arguments as soldiers" are one of the most straightforward and effective ways to encourage a higher level of rationality in ourselves and others.

Comment author: Bound_up 16 August 2016 04:15:10PM 0 points [-]

Thank you :)

If you can think of any others to add, I'm definitely looking for more content in that section, and in every other, really.

Ideally, that section would eventually become near-comprehensive, so much so that theists might use it as a resource to rebut atheists who are making bad arguments, and atheists might use it to learn which arguments to avoid.

Comment author: Viliam 16 August 2016 07:40:11AM *  3 points [-]

A nice website, but of course I am looking at it from the position of an LW fan; I don't know how it will seem to other people.

One part I didn't like was the rationalization video about "why death isn't so bad". (Spoilers: because if you know you will die, it motivates you to do important things faster.)

To illustrate why it is bad, imagine the reversal: if you were granted immortality, would you trade it for mortality (defined as: you will die at a random moment during the following 100 years) as a cool way of Getting Things Done? Such a decision would be quite foolish for several reasons. First, the random moment may come as soon as today; I don't see how that helps you accomplish plans that take more than one day. Second, plans that require at least several decades of work become unlikely to succeed even if you work hard. Third, it limits the total number of plans you can accomplish.

Generally, the whole reasoning about the benefits of death is motivated thinking. We already know the bottom line (almost certainly we are going to die, probably quite soon); and the real reason is that our bodies are fragile, and it is extremely difficult to coordinate humanity on fixing this problem (all the necessary medical and technological research, plus the necessary economic and political background). That's it. It would be too much of a coincidence if this situation also happened to be somehow optimal.

(However, optimizing the website for me could make it less attractive to an average reader.)

EDIT: The quotations from "The Last Christmas" should be formatted differently from the surrounding text. For example, use a dark grey font and a vertical line on the left (something like "color: #333; border-left: 2px solid #333;"). Don't separate different quotes by bullets, but enclose them in separate "div" tags (with the vertical line, it will be visible where the next "div" starts).
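A minimal sketch of what that suggestion could look like (the class name, padding, and margin values are just illustrative assumptions; only the color and left border come from the comment above):

```html
<style>
  /* one quote per div; the left border marks where each quote begins */
  .story-quote {
    color: #333;                  /* dark grey font */
    border-left: 2px solid #333;  /* vertical line on the left */
    padding-left: 1em;            /* keep text off the border */
    margin: 1em 0;                /* gap between consecutive quotes */
  }
</style>

<div class="story-quote">“First quoted passage from the story.”</div>
<div class="story-quote">“Second quoted passage from the story.”</div>
```

Because each quote sits in its own div, no bullet points are needed to separate them; the break between the vertical lines shows where one quote ends and the next starts.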

On the "Arguments" page, the different arguments should probably be formatted as headings ("h2"), not bullet points. I think bullet points in general look ugly, unless used for a list of short items.

Comment author: Bound_up 16 August 2016 04:12:26PM *  2 points [-]

Yes, I considered adding exactly that kind of status quo bias rebuttal to the note that some atheists argue that death isn't so bad.

But I didn't want to make it TOO LW-ey, and I'm hoping that by including that view, along with the views of the Methuselah people and the cryonics people, people will be able to take them all seriously without feeling that my presentation was biased.

I'm open to adjusting that strategy, but that was my thinking behind it: not to appear like an arguer, merely an unbiased presenter of information, who apparently took aging research and cryonics seriously enough to mention them.

Thanks for the editing suggestions; I think you're right. I'll keep you in mind for other formatting points, if you don't mind.

Comment author: TheAncientGeek 04 August 2016 10:02:21AM *  0 points [-]

Yes, if you can't solve the presupposition problem, the main alternative is to carry on as before, at the object level, but with less confidence at the meta level. But who is failing to take that advice? As far as I can see, it is Yudkowsky. He makes no claim to have solved the problem of unfounded foundations, but continues putting very high probabilities on ideas he likes, and vehemently denying ones he doesn't.

To say you "should" be moral is tautological. It's just saying you "should" do what you "should" do.

Ok. You should be moral. But there is no strong reason why you should follow arbitrary values. Therefore, arbitrary values are not morality.

Morals are not arbitrary. I'm just talking about the Sequences' take on morality. If you care about a different set of things, then morality doesn't follow that change, it just means that you now care about something other than morality.

So what are the correct moral values?

Comment author: Bound_up 04 August 2016 04:27:02PM 0 points [-]

Well, in the normal course of life, on the object level, some things are more probable than others.

If you push me about whether I REALLY know they're true, then I admit that my reasoning and data could be confounded by a Matrix or whatever.

Maybe it's clearer like so:

Colloquially, I know how to judge relative probabilities.

Philosophically (strictly), I don't know the probability that any of my conclusions are true (because they rest on concepts I don't pretend to know are true).

About the moral values thing, it sounds kinda like you haven't read the sequence on metaethics. If not, then I'm glad to be the one to introduce you to the idea, and I can give you the broad strokes in a few sentences in a comment, but you might want to ponder the sequence if you want more.

Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality.

But, humans don't have access to our source code. We can't see all that we care about. Figuring out the specific values, and how much to weight them against each other is just the old game of thought experiments and considering trade-offs, etcetera.

There is nothing that can be reduced to some one-word or one-sentence idea that sums it all up. So we don't know what all the values are or how they're weighted. You might read about "Coherent Extrapolated Volition," if you like.

Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn't change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.

A great example is Alexander Wales' short story "The Last Christmas" (particularly chapter 2 and 3). See below.

The elves care about Christmas Spirit, not right and wrong, or morality, or fairness.

When it's pointed out that what they're doing isn't fair, they don't protest, they just say "We don't care. Fairness isn't part of the Christmas Spirit."

And we might say, "Santa being fat? We don't care, that's not part of morality. We don't deny that it's part of the Christmas Spirit; we just don't care that it is."

If aliens care about different things, it's not about our morality versus "their" morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is also used as an example. It doesn't care about morality. It cares about clippiness.

The moral thing and the clippy thing to do are both fixed calculations. Once you know the answer, it's a feature of your mind if you happen to respond to morality, or clippiness, or Glumpshizzle, or Christmas Spirit.

If anybody thinks I've misunderstood part of this, please, do let me know. I've tried to understand, and would like to correct any mistakes if I have them.

“You wouldn’t even make any arguments for why you should live?” asked Charles.

“My life is meaningless in the face of the Christmas spirit,” said Matilda.

“But if it didn’t matter to the Christmas spirit,” said Charles, “If I just wanted to see you die for fun?”

“Allowing you to satisfy your desires is part of maintaining the Christmas spirit, Santa,” said Matilda.


“It’s unfair,” said Charles.

“Life is unfair,” said Matilda.

“Does it have to be?” asked Charles. “Is that the Christmas spirit?”

“I don’t know,” said Matilda. “Fairness doesn’t enter into it, I don’t think. Why should Christmas be fair if life isn’t fair?”

http://alexanderwales.com/the-last-christmas-chapter-1-2/

In response to Motivated Thinking
Comment author: SquirrelInHell 04 August 2016 05:01:23AM *  1 point [-]

I like your mnemonics idea, though the "Self-deprecation and Conceit" part seems a little forced. Maybe make them rhyme, or try something else instead.

I think it's one of the most important things to teach someone about rationality (any other suggestions? Confirmation bias, placebo, pareidolia, and the odds of coincidences come to mind...)

The things that come to your mind are object-level skills. However, I'd say that the most important thing to teach is the meta-skill of dissociation: looking at your thoughts as a machine with certain properties, and controlling this machine from the "outside".

In other words, intuitively noticing that thinking something about X is not a fact about X, but a fact about your thoughts.

Having this habit that when you think X, you also automatically think "hmm, I seem to be thinking X, what do I make of it?".

Comment author: Bound_up 04 August 2016 04:06:38PM 0 points [-]

Hmm, so the map/territory distinction?

That's a good one.

Some of mine ARE object-level, but they aren't just ANY object-level ones. They focus on teaching you how to discern between real and fake evidence, I guess...

Are you just referring to map/territory, or is there more to it than that?

Comment author: TheAncientGeek 02 August 2016 02:17:34PM *  2 points [-]

Having an official doctrine that nothing is certain is not at all the same as having no presuppositions. To have a presupposition is to treat something as true (including using it methodologically) without being able to prove it. In the absence of any p=1 data, it makes sense to use your highest-probability uncertain beliefs presuppositionally. It's the absence of foundations (combined with a willingness to employ it nonetheless) that makes something presuppositional, not the presence of certainty.

Treating something as true non-methodologically means making inferences from it, or using it to disprove something else.

Treating something as true methodologically means using it as a rule of inference.

If you have a method of showing that induction works, that method will ground out in presuppositions. If you don't, then induction itself is a (methodological) presupposition for you.

Finally: treating moral values as arbitrary, but nonetheless something you should pursue[*], is at the farthest possible remove from showing that they are not presuppositions!

[*]Why?

Comment author: Bound_up 03 August 2016 11:05:09PM 0 points [-]

None of this requires that you pretend to know more than you do.

I don't have to pretend to know whether I'm in a simulation or not. I can admit my ignorance, and then act, knowing that I do not know for certain if my actions will serve.

I think of this in levels.

I can't prove that logic is true. So I don't claim to know it is with probability 1. I don't pretend to.

But, IF it is true, then my reasonings are better than nothing for understanding things.

So, my statements end up looking something like: "(IF logic works) the fact that this seems logical means it's probably true."

But, I don't really know if my senses are accurate messengers of knowledge (Matrix, etc.). That's on another level. But I don't have to pretend that I know they are, and I don't. So my statements end up looking like: "((IF logic works) and my senses are reporting accurately) the fact that this seems logical means it's probably true."

We just have to learn to act amid uncertainty. We don't have to pretend that we know anything to do so.

Morals are not arbitrary. I'm just talking about the Sequences' take on morality. If you care about a different set of things, then morality doesn't follow that change, it just means that you now care about something other than morality.

If you love circles, and then start loving ovals, that doesn't make ovals into circles, it just means you've stopped caring about circles and started caring about something else.

Morality is a fixed equation.

To say you "should" be moral is tautological. It's just saying you "should" do what you "should" do.

Comment author: Arielgenesis 27 July 2016 04:14:00AM 2 points [-]

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/)

Epistemic rationality

I suppose we do presume things, like that we are not dreaming / under a global and permanent illusion by a demon / a brain in a vat / in a Truman show / in a Matrix. And, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?

Instrumental rationality

Sometimes a value can derive from another value (e.g. I do not value monarchy because I hold the value that all men are created equal). But either we have circular values or we take some value to be self-evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, or self-evident, or that could be presumed to be valuable in and of themselves?

Comment author: Bound_up 31 July 2016 01:29:29PM 1 point [-]

Okay, I don't know why everyone is making this so complicated.

In theory, nothing is presupposed. We aren't certain of anything and never will be.

In practice, if induction works for you (it will) then use it! Once it's just a question of practicality, try anything you like, and use what works.

It won't let you be certain, but it'll let you move with power within the world.

As for values, morals, your question suggests you might be interested in A Thousand Shards of Desire in the sequences. We value what we do, with lots of similarities to each other, because evolution designed our psychology that way.

Evolution is messy and uncoordinated. We ended up with a lump of half-random values, not at all coherent.

So, we don't look for, or recommend looking for, any One Great Guiding Principle of morality; there probably isn't one.

We just care about life and fairness and happiness and fun and freedom and stuff like anyone else. Lots of LW people get a lot of mileage out of consequentialism, utilitarianism, and particularly preference utilitarianism.

But these are not presumed. Morality is, more or less, just a pile of things that humans value. You don't HAVE to prove it to get people to try to be happy or to like freedom (all else equal).

If I've erred here, I would very much like to know. I puzzled over these questions myself and thought I understood them.

Comment author: Bound_up 28 July 2016 08:55:34PM 1 point [-]

The mainstream LW idea seems to be that the right to life is based on sentience.

At the same time, killing babies is the go-to example of something awful.

Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?

Does anyone have any reasoning on abortion besides "Not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too?)?

Comment author: Bound_up 13 July 2016 07:38:23PM 0 points [-]

I've been thinking about belief as anticipation versus belief as association.

Some people associate with beliefs like they associate with sports teams. Asking them to provide evidence for their belief is like asking them to provide evidence for their sports team being "the best."

And beliefs as anticipation you know, I'm sure.

My question is: What are signs of a "belief" being an anticipation versus it being a mere association (or other non-anticipating belief)?

One is the attempt to defend against falsification: "If you REALLY believed you wouldn't be making excuses in advance, you would confidently accept a test that you knew would show how right you were."

Got any other useful ones?

Comment author: Bound_up 09 July 2016 08:42:43PM 0 points [-]

I used to live in Boise; I've got family there.

You know the Williams?

Comment author: Bound_up 19 June 2016 10:54:21PM 0 points [-]
