Comment author: Bound_up 15 September 2016 02:42:28PM 1 point [-]

So, I was trying to figure out exactly what Socrates was doing, and I think I figured it out. But it made me realize I don't know how induction (deriving a general principle by inductive reasoning) works.

Socratic questioning:

You take someone's claim, induce (derive by inductive reasoning) a general principle from it, deduce a different claim from the same principle, and disprove that new claim. This disproves the general principle, which leaves the original claim unsupported. Repeat until they run out of principles, which leaves their claim ultimately unsupported.
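The loop described above can be sketched in Python. This is purely illustrative: the helper functions (`induce_principle`, `deduce_case`, `find_counterexample`) are hypothetical stand-ins, and the open question in this comment is precisely how the first of them could work at all.

```python
def elenchus(claim, induce_principle, deduce_case, find_counterexample):
    """Toy sketch of the Socratic loop: keep knocking out the
    principles that support a claim until none remain."""
    while True:
        principle = induce_principle(claim)    # generalize from the claim
        if principle is None:                  # they ran out of principles
            return "claim left unsupported"
        case = deduce_case(principle, claim)   # derive a different instance
        counter = find_counterexample(case)    # try to disprove the instance
        if counter is None:
            return "principle survives: " + principle
        # The counterexample refutes the case, hence the principle,
        # leaving the original claim unsupported by it; loop again.
```

Note that nothing here says how `induce_principle` picks "don't do things to people's stuff" over "don't paint things purple", which is exactly the gap being asked about.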

But if someone says we shouldn't paint someone's house purple without their permission, how do we know which abstract principle to induce?

My mind goes immediately to "don't do things to people's stuff without their permission." But how? Why didn't I think the rule was "don't paint things purple," or "don't paint houses?" Obviously in this case, my familiarity is influencing me, but what about in unfamiliar situations?

Does anyone know how to reduce inductive reasoning? What algorithm are we using? What's going on in a mind which outputs an inductive inference?

Comment author: gjm 17 August 2016 03:55:14PM -1 points [-]

Have you looked on the web to see what other new-atheist-advice there is out there? In some cases you may be able to link or copy rather than creating afresh.

(I was sure I'd seen a new-atheists-start here site somewhere before, but can't find it without more time than I'm willing to put into it right now given that I'm at work and meant to be, er, working. I will have a more thorough look later and link if I find it.)

Comment author: Bound_up 17 August 2016 04:21:25PM 0 points [-]

I looked, though not for long.

Didn't find anything useful. Just a "Survival Guide for Atheists" by a not-particularly-deep-thinking theist.

If you find one, or remember the one you mentioned, I'd love to look at it, though.

Comment author: Jayson_Virissimo 16 August 2016 03:30:56AM 0 points [-]

I like your failed arguments section. IMO, frequent reminders about the phenomenon of using "arguments as soldiers" are one of the most straightforward and effective ways to encourage a higher level of rationality in ourselves and others.

Comment author: Bound_up 16 August 2016 04:15:10PM 0 points [-]

Thank you :)

If you can think of any others to add, I'm definitely looking for more content in that section, and in every other, really.

Ideally, that section would eventually become near-comprehensive, so much so that theists might use it as a resource to rebut atheists who are making bad arguments, and atheists might use it to learn which arguments to avoid.

Comment author: Viliam 16 August 2016 07:40:11AM *  3 points [-]

A nice website, but of course I am looking at it from the position of an LW fan; I don't know how it will seem to other people.

One part I didn't like was the rationalization video about "why death isn't so bad". (Spoilers: because if you know you will die, it motivates you to do important things faster.)

To illustrate why it is bad, imagine the reversal: if you had immortality, would you trade it for mortality (defined as: you will die at a random moment during the following 100 years) as a cool way of Getting Things Done? I would think such a decision is quite stupid for several reasons. First, the random moment may come even today; I don't see how that helps you accomplish plans that take more than one day. Second, plans that require at least several decades of work become unlikely even if you work hard. Third, it limits the total number of plans you can accomplish.

Generally, the whole reasoning about the benefits of death is motivated thinking. We already know the bottom line (almost certainly we are going to die, probably quite soon); and the real reason is that our bodies are fragile, and it is extremely difficult to coordinate humanity on fixing this problem (all the necessary medical and technological research, plus the necessary economic and political background). That's it. It would be too much of a coincidence if this situation were also somehow optimal.

(However, optimizing the website for me could make it less attractive to an average reader.)

EDIT: The quotations from "The Last Christmas" should be formatted differently from the surrounding text. For example, use a dark grey font and a vertical line on the left (something like "color: #333; border-left: 2px solid #333;"). Don't separate different quotes by bullets, but enclose each in its own "div" tag (with the vertical line it should be visible where the next "div" starts).

On the "Arguments" page, the different arguments should probably be formatted as headings ("h2"), not bullet points. I think the bullet points generally look ugly, unless used for a list of short items.

Comment author: Bound_up 16 August 2016 04:12:26PM *  2 points [-]

Yes, I considered adding exactly that kind of status quo bias rebuttal to the note that some atheists argue that death isn't so bad.

But I didn't want to make it TOO LW-ey, and I'm hoping that by including that view, along with the views of the Methuselah people and the cryonics people, people will be able to take them all seriously without feeling like I was biased in my presentation.

I'm open to adjusting that strategy, but that was my thinking behind it: not to appear like an arguer, merely an unbiased presenter of information, who apparently took aging research and cryonics seriously enough to mention.

Thanks for the editing suggestions; I think you're right. I'll keep you in mind for other formatting points, if you don't mind.

Seeking Optimization of New Website "New Atheist Survival Kit," a go-to site for newly-made atheists

4 Bound_up 16 August 2016 01:03AM

I've put together a website, "New Atheist Survival Kit" at atheistkit.wordpress.com

 

The idea is to help new atheists come to terms with their change in belief, and also invite them to become more than atheists: rationalists.

 

And if it helps theists become atheists too, and helps old atheists become rationalists, so much the better.

 

The bare bones of it are all in place now. Once a few people have gone over it (for editing, and for advice about what to include, leave out, improve, or re-organize), I'll ask a bunch of atheist and rationalist communities to write up their own blurbs for us to include in a list of communities that we'll point people to in the "Atheist Communities" or "Thinker's Communities" sections on the main menu.

It includes my rough-draft attempt to basically boil the Metaethics sequence down to a few thousand words and make it stylistically and conceptually accessible to a mass audience, which I could especially use some help with.

 

So, for now, I'm here to ask that anyone interested check it out, and message me any improvements they think worth making, from grammar and spelling all the way up to what content to include, or how to present things.

 

Thanks to all for any help.

Comment author: TheAncientGeek 04 August 2016 10:02:21AM *  0 points [-]

Yes, if you can't solve the presupposition problem, the main alternative is to carry on as before, at the object level, but with less confidence at the meta level. But who is failing to take that advice? As far as I can see, it is Yudkowsky. He makes no claim to have solved the problem of unfounded foundations, but continues putting very high probabilities on ideas he likes, and vehemently denying ones he doesn't.

To say you "should" be moral is tautological. It's just saying you "should" do what you "should" do.

Ok. You should be moral. But there is no strong reason why you should follow arbitrary values. Therefore, arbitrary values are not morality.

Morals are not arbitrary. I'm just talking about the Sequences' take on morality. If you care about a different set of things, then morality doesn't follow that change, it just means that you now care about something other than morality.

So what are the correct moral values?

Comment author: Bound_up 04 August 2016 04:27:02PM 0 points [-]

Well, in the normal course of life, on the object level, some things are more probable than others.

If you push me about if I REALLY know they're true, then I admit that my reasoning and data could be confounded by a Matrix or whatever.

Maybe it's clearer like so:

Colloquially, I know how to judge relative probabilities.

Philosophically (strictly), I don't know the probability that any of my conclusions are true (because they rest on concepts I don't pretend to know are true).

About the moral values thing, it sounds kinda like you haven't read the sequence on metaethics. If not, then I'm glad to be the one to introduce you to the idea, and I can give you the broad strokes in a few sentences in a comment, but you might want to ponder the sequence if you want more.

Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality.

But, humans don't have access to our source code. We can't see all that we care about. Figuring out the specific values, and how much to weight them against each other is just the old game of thought experiments and considering trade-offs, etcetera.

Nothing that can be reduced to some one-word or one-sentence idea that sums it all up. So we don't know what all the values are or how they're weighted. You might read about "Coherent Extrapolated Volition," if you like.

Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn't change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.

A great example is Alexander Wales' short story "The Last Christmas" (particularly chapter 2 and 3). See below.

The elves care about Christmas Spirit, not right and wrong, or morality, or fairness.

When it's pointed out that what they're doing isn't fair, they don't protest, they just say "We don't care. Fairness isn't part of the Christmas Spirit."

And we might say, "Santa being fat? We don't care, that's not part of morality. We don't deny that it's part of the Christmas Spirit; we just don't care that it is."

If aliens care about different things, it's not about our morality versus "their" morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is also used as an example. It doesn't care about morality. It cares about clippiness.

The moral thing and the clippy thing to do are both fixed calculations. Once you know the answer, it's a feature of your mind if you happen to respond to morality, or clippiness, or Glumpshizzle, or Christmas Spirit.
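The "fixed calculation" point can be made concrete with a toy sketch. The scoring functions below are invented purely for illustration (nothing from the Sequences specifies them); the point is only that agents agree on what each function outputs and differ in which one they act on.

```python
def morality(world):
    # Invented toy scoring: sums a couple of the "shards" humans care about.
    return world.get("fairness", 0) + world.get("happiness", 0)

def clippiness(world):
    # The paper-clipper's fixed calculation: count paperclips.
    return world.get("paperclips", 0)

def choose(cares_about, worlds):
    """Every agent computes every function the same way;
    only which function it maximizes differs."""
    return max(worlds, key=cares_about)

worlds = [
    {"fairness": 5, "happiness": 5, "paperclips": 0},
    {"fairness": 0, "happiness": 0, "paperclips": 99},
]
```

A human and Clippy would agree on the value of `morality(w)` and `clippiness(w)` for every world `w`; they disagree only about which function to feed into `choose`.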

If anybody thinks I've misunderstood part of this, please, do let me know. I've tried to understand, and would like to correct any mistakes if I have them.

“You wouldn’t even make any arguments for why you should live?” asked Charles.

“My life is meaningless in the face of the Christmas spirit,” said Matilda.

“But if it didn’t matter to the Christmas spirit,” said Charles, “If I just wanted to see you die for fun?”

“Allowing you to satisfy your desires is part of maintaining the Christmas spirit, Santa,”


“It’s unfair,” said Charles.

“Life is unfair,” said Matilda.

“Does it have to be?” asked Charles. “Is that the Christmas spirit?”

“I don’t know,” said Matilda. “Fairness doesn’t enter into it, I don’t think. Why should Christmas be fair if life isn’t fair?”

http://alexanderwales.com/the-last-christmas-chapter-1-2/

In response to Motivated Thinking
Comment author: SquirrelInHell 04 August 2016 05:01:23AM *  1 point [-]

I like your mnemonics idea, though the part "Self-deprecation and Conceit" seems a little bit forced. Maybe make them rhyme or something else instead.

I think it's one of the most important things to teach someone about rationality (any other suggestions? Confirmation bias, placebo, pareidolia, and the odds of coincidences come to mind...)

The things that come to your mind are object-level skills. However I'd say that the most important thing to teach is the meta-skill of dissociation - looking at your thoughts as a machine with some properties, and controlling this machine from the "outside".

In other words, intuitively noticing that thinking something about X is not a fact about X, but a fact about your thoughts.

Having this habit that when you think X, you also automatically think "hmm, I seem to be thinking X, what do I make of it?".

Comment author: Bound_up 04 August 2016 04:06:38PM 0 points [-]

Hmm, so the map/territory distinction?

That's a good one.

Some of mine ARE object-level, but they aren't just ANY object-level ones. They focus on teaching you how to discern between real and fake evidence, I guess...

Are you just referring to map/territory, or is there more to it than that?

Motivated Thinking

3 Bound_up 03 August 2016 11:27PM

I'm playing around with an article on Motivated Cognition for general consumption

 

I think it's one of the most important things to teach someone about rationality (any other suggestions? Confirmation bias, placebo, pareidolia, and the odds of coincidences come to mind...)

 

So, I've taken the five kinds of motivated cognition I know of (motivated skepticism, motivated stopping, motivated neutrality, motivated credulity, and motivated continuation), added a counterpart to "neutrality," and then renamed neutrality.

 

The end result being six kinds of motivated cognition, three pairs of two kinds each, which are opposites of each other. Also, each pair has one kind that begins with an S and one that begins with a C, which is good for mnemonic purposes.

 

So, I've got

Stopping and Continuation - Control WHICH arguments you put in front of yourself (Do you continue because you haven't found what supports you yet, or do you stop because you have?)

Self-deprecation and Conceit - These control WHETHER you judge an argument in front of you (Do you refuse to judge ("Who am I to judge?") clear arguments that oppose your side, or do you judge arguments you have no capacity to understand (the probability of abiogenesis, for example) because it lets you support your side?)

Skepticism and Credulity - Control HOW you judge arguments (Do you demand stronger evidence for ideas you don't like, and less for ideas you do? Do you scrutinize ideas you don't like more than ideas you do? Do you ask if the evidence forces you to accept an idea, or if it allows you to accept it?)
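The skepticism/credulity pair can be sketched as a toy model, with thresholds invented purely for illustration: the same evidence gets accepted or rejected depending on whether the idea is liked.

```python
def motivated_accept(evidence_strength, likes_idea):
    """Toy motivated reasoner: demands strong evidence for disliked
    ideas, weak evidence for liked ones (thresholds are made up)."""
    threshold = 0.3 if likes_idea else 0.9
    return evidence_strength >= threshold

def unbiased_accept(evidence_strength):
    """Toy unbiased reasoner: one threshold for everything."""
    return evidence_strength >= 0.6
```

With evidence of strength 0.5, the motivated reasoner accepts the idea it likes and rejects the one it doesn't, while the unbiased rule treats both the same.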

 

I'm thinking of introducing them in that order, too, with the "Which/Whether/How you judge" abstraction.

 

Anybody see better abstractions, better explanations, better mnemonic techniques? Any advice of any kind on how to teach this effectively to people? Other fundamentals of rationality? (Maybe the beliefs-as-probabilities idea?)

 

Comment author: TheAncientGeek 02 August 2016 02:17:34PM *  2 points [-]

Having an official doctrine that nothing is certain is not at all the same as having no presuppositions. To have a presupposition is to treat something as true (including using it methodologically) without being able to prove it. In the absence of any p=1 data, it makes sense to use your highest-probability uncertain beliefs presuppositionally. It's the absence of foundations (combined with a willingness to employ it nonetheless) that makes something presuppositional, not the presence of certainty.

Treating something as true non-methodologically means making inferences from it, or using it to disprove something else.

Treating something as true methodologically means using it as a rule of inference.

If you have a method of showing that induction works, that method will ground out in presuppositions. If you don't, then induction itself is a (methodological) presupposition for you.

Finally: treating moral values as arbitrary, but nonetheless something you should pursue[*], is at the farthest possible remove from showing that they are not presuppositions!

[*]Why?

Comment author: Bound_up 03 August 2016 11:05:09PM 0 points [-]

None of this requires that you pretend to know more than you do.

I don't have to pretend to know whether I'm in a simulation or not. I can admit my ignorance, and then act, knowing that I do not know for certain if my actions will serve.

I think of this in levels.

I can't prove that logic is true. So I don't claim to know it is with probability 1. I don't pretend to.

But, IF it is true, then my reasonings are better than nothing for understanding things.

So, my statements end up looking something like: "(IF logic works) the fact that this seems logical means it's probably true."

But, I don't really know if my senses are accurate messengers of knowledge (Matrix, etc.). That's on another level. But I don't have to pretend that I know they are, and I don't. So my statements end up looking like: "((IF logic works) and my senses are reporting accurately) the fact that this seems logical means it's probably true."

We just have to learn to act amid uncertainty. We don't have to pretend that we know anything to do so.

Morals are not arbitrary. I'm just talking about the Sequences' take on morality. If you care about a different set of things, then morality doesn't follow that change, it just means that you now care about something other than morality.

If you love circles, and then start loving ovals, that doesn't make ovals into circles, it just means you've stopped caring about circles and started caring about something else.

Morality is a fixed equation.

To say you "should" be moral is tautological. It's just saying you "should" do what you "should" do.

Comment author: Arielgenesis 27 July 2016 04:14:00AM 2 points [-]

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/)

Epistemic rationality

I suppose we do presume things: for instance, that we are not dreaming/under a global and permanent illusion by a demon/a brain in a vat/in a Truman Show/in a matrix. And, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?

Instrumental rationality

Sometimes a value can derive from another value (e.g., I do not value monarchy because I hold the value that all men are created equal). But either we have circular values or we take some value to be evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, or self-evident, or could be presumed to be valuable in and of themselves?

Comment author: Bound_up 31 July 2016 01:29:29PM 1 point [-]

Okay, I don't know why everyone is making this so complicated.

In theory, nothing is presupposed. We aren't certain of anything and never will be.

In practice, if induction works for you (it will) then use it! Once it's just a question of practicality, try anything you like, and use what works.

It won't let you be certain, but it'll let you move with power within the world.

As for values, morals, your question suggests you might be interested in A Thousand Shards of Desire in the sequences. We value what we do, with lots of similarities to each other, because evolution designed our psychology that way.

Evolution is messy and uncoordinated. We ended up with a lump of half-random values, not at all coherent.

So, we don't look for, or recommend looking for, any One Great Guiding Principle of morality; there probably isn't one.

We just care about life and fairness and happiness and fun and freedom and stuff, like anyone else. Lots of LW people get a lot of mileage out of consequentialism, utilitarianism, and particularly preference utilitarianism.

But these are not presumed. Morality is, more or less, just a pile of things that humans value. You don't HAVE to prove it to get people to try to be happy or to like freedom (all else equal).

If I've erred here, I would much like to know. I puzzled over these questions myself and thought I understood them.
