
Comment author: I_D_Sparse 21 March 2017 09:57:37PM 0 points

First comes some gene A which is simple, but at least a little useful on its own, so that A increases to universality in the gene pool. Now along comes gene B, which is only useful in the presence of A, but A is reliably present in the gene pool, so there's a reliable selection pressure in favor of B. Now a modified version of A* arises, which depends on B, but doesn't break B's dependency on A/A*. Then along comes C, which depends on A* and B, and B*, which depends on A* and C.

Can anybody point me to some specific examples of this type of evolution? I'm a complete layman when it comes to biology, and this fascinates me. I'm having a bit of a hard time imagining such a process, though.
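For intuition, here is a toy simulation of the serial-dependency process quoted above. It is not a real biological example: the gene names, prerequisites, fitness bonuses, and parameters are all made up for illustration. The point it encodes is just the one in the quote: a gene contributes nothing until its prerequisites are already common, so the chain A, B, A*, C, B* can only be assembled in order.

```python
import random

# Toy model only: gene names, prerequisites, and fitness bonuses are invented.
# A gene contributes its bonus only if its prerequisites are already in the same
# genome, so selection can only favor B after A is common, A* after B, and so on.
GENES = {
    "A":  {"requires": set(),        "bonus": 1.0},
    "B":  {"requires": {"A"},        "bonus": 1.0},
    "A*": {"requires": {"B"},        "bonus": 1.5},  # upgraded A; still counts as A for B
    "C":  {"requires": {"A*", "B"},  "bonus": 1.0},
    "B*": {"requires": {"A*", "C"},  "bonus": 1.5},
}

def fitness(genome):
    """Baseline 1.0 plus the bonus of every gene whose prerequisites are met."""
    effective = genome | ({"A"} if "A*" in genome else set())  # A* satisfies a need for A
    return 1.0 + sum(GENES[g]["bonus"] for g in genome
                     if GENES[g]["requires"] <= effective)

def evolve(pop_size=200, generations=400, mutation_rate=0.02, seed=0):
    random.seed(seed)
    population = [set() for _ in range(pop_size)]
    for _ in range(generations):
        for genome in population:
            if random.random() < mutation_rate:              # occasionally gain a random gene
                genome.add(random.choice(list(GENES)))
            if genome and random.random() < mutation_rate:   # and occasionally lose one
                genome.discard(random.choice(list(genome)))
        weights = [fitness(g) for g in population]           # selection: resample by fitness
        population = [set(random.choices(population, weights)[0])
                      for _ in range(pop_size)]
    return population

if __name__ == "__main__":
    final = evolve()
    for gene in GENES:
        share = sum(gene in g for g in final) / len(final)
        print(f"{gene}: present in {share:.0%} of the final population")
```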

Comment author: SnowSage4444 18 March 2017 03:01:28PM 0 points

No, really, what?

What "Different rules" could someone use to decide what to believe, besides "Because logic and science say so"? "Because my God said so"? "Because these tea leaves said so"?

Comment author: I_D_Sparse 18 March 2017 08:56:42PM *  0 points

Unfortunately, yes.

Comment author: SnowSage4444 17 March 2017 11:44:29PM 0 points

what

Comment author: I_D_Sparse 18 March 2017 12:50:32AM 0 points

If someone uses different rules than you to decide what to believe, then things that you can prove using your rules won't necessarily be provable using their rules.

Comment author: SnowSage4444 17 March 2017 04:26:35PM 0 points

Prove you're right. And then you win.

Comment author: I_D_Sparse 17 March 2017 07:31:58PM 1 point

Yes, but the idea is that a proof within one axiomatic system does not constitute a proof within another.

Comment author: gjm 13 March 2017 06:02:37PM 0 points

Is there good reason to believe that any method exists that will reliably resolve epistemological disputes between parties with very different underlying assumptions?

Comment author: I_D_Sparse 13 March 2017 08:12:10PM 0 points

Not particularly, no. In fact, there probably is no such method - either the parties must agree to disagree (which they could honestly do if they're not all Bayesians), or they must persuade each other using rhetoric as opposed to honest, rational inquiry. I find this unfortunate.

Comment author: Elo 11 March 2017 09:23:11AM 0 points

Can you do me a favour and separate this into paragraphs (or fix the formatting)?

Thanks.

The LessWrong Slack has a channel called #world_domination.

Comment author: I_D_Sparse 11 March 2017 09:19:29PM 1 point

Fixed the formatting.

Comment author: I_D_Sparse 11 March 2017 09:17:08AM *  1 point

Regarding instrumental rationality: I've been wondering for a while now if "world domination" (or "world optimization", as HJPEV prefers) is feasible. I haven't entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one's approximate chances of success with various possible paths to achieving one's values. I have therefore come up with the "feasibility problem." Basically, a solution to the problem consists of an estimation of how much one can actually hope to influence the world, and to what extent one can actually fulfill one's values. I think it would be very wise to solve the feasibility problem before attempting to take over the world, or become the President, or lead a social revolution, or improve the rationality of the general populace, etc.

Solving the FP would seem to require a deep understanding of how the world operates (anthropomorphically speaking, if you get my drift; I'm talking about the hoomun world, not physics and chemistry).

I've even constructed a GPOATCBUBAAAA (general plan of action that can be used by any and all agents): first, define your utility function, and also learn how the world works (easier said than done). Once you've completed that, you can apply your knowledge to solve the FP, and then you can construct a plan to fulfill your utility function, and then put it into action.
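Purely to illustrate the shape of that plan, here is a sketch in code. Every name and type below is a hypothetical placeholder, and each step hides an enormous amount of actual work; this is not a claim that any of the steps are tractable.

```python
from typing import Any, Callable

# Placeholder types; nothing here is specified by the plan itself.
WorldModel = Any
Plan = Any
UtilityFn = Callable[[Any], float]

def gpoatcbubaaaa(define_utility: Callable[[], UtilityFn],
                  learn_world: Callable[[], WorldModel],
                  estimate_feasibility: Callable[[WorldModel, UtilityFn], float],
                  make_plan: Callable[[WorldModel, UtilityFn], Plan],
                  act: Callable[[Plan], None],
                  good_enough: float = 0.0) -> None:
    """General plan of action that can be used by any and all agents (sketch)."""
    utility = define_utility()                           # 1. figure out what you value
    world = learn_world()                                # 2. learn how the world works
    feasibility = estimate_feasibility(world, utility)   # 3. solve the feasibility problem
    if feasibility <= good_enough:                       # this path isn't worth pursuing
        return
    plan = make_plan(world, utility)                     # 4. construct a plan
    act(plan)                                            # 5. put it into action
```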

This is probably a bit longer than 100 words, but I'm posting it here and not in the open thread because I have no idea if it's of any value whatsoever.

Comment author: I_D_Sparse 10 March 2017 07:57:56PM 0 points

What if the disagreeing parties have radical epistemological differences? Double crux seems like a good strategy for resolving disagreements between parties that have an epistemological system in common (and access to the same relevant data), because getting to the core of the matter should expose that one or both of them is making a mistake. However, between two or more parties that use entirely different epistemological systems - e.g. rationalism and empiricism, or skepticism and "faith" - double crux should, if used correctly, eventually lead all disagreements back to epistemology, at which point... what, exactly? Use double-crux again? What if the parties don't have a meta-epistemological system in common, or indeed, any nth-order epistemological system in common? Double crux sounds really useful, and this is a great post, but a system for resolving epistemological disputes would be extremely helpful as well (especially for those of us who regularly converse with "faith"-ists about philosophy).
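To make the worry concrete, here is a toy model of that recursion. The data structures and stopping rule are invented for illustration and are not part of the actual double crux technique: each party supplies, for any claim, the single crux they would update on, and the recursion halts either in agreement or at a claim for which the parties share no further crux, which is the radically-different-epistemologies case.

```python
def double_crux(claim, beliefs_a, beliefs_b, cruxes_a, cruxes_b, depth=0):
    """Toy recursive double crux.

    beliefs_*: dict mapping claims to True/False.
    cruxes_*:  dict mapping a claim to the single crux that party would update on.
    All of this structure is invented for illustration.
    """
    if beliefs_a.get(claim) == beliefs_b.get(claim):
        return f"agreement reached at depth {depth} on {claim!r}"
    crux_a, crux_b = cruxes_a.get(claim), cruxes_b.get(claim)
    if crux_a is None or crux_a != crux_b:
        # No shared crux left: the disagreement has bottomed out (e.g. at
        # epistemology itself) and double crux alone has nothing to recurse into.
        return f"stuck at depth {depth}: no shared crux for {claim!r}"
    return double_crux(crux_a, beliefs_a, beliefs_b, cruxes_a, cruxes_b, depth + 1)

# Example: both parties name the same crux for the top-level claim, but they
# disagree about the crux itself and share nothing beneath it.
beliefs_a = {"the bridge design is safe": True,  "the load simulations are trustworthy": True}
beliefs_b = {"the bridge design is safe": False, "the load simulations are trustworthy": False}
shared_cruxes = {"the bridge design is safe": "the load simulations are trustworthy"}
print(double_crux("the bridge design is safe", beliefs_a, beliefs_b,
                  shared_cruxes, shared_cruxes))
# prints: stuck at depth 1: no shared crux for 'the load simulations are trustworthy'
```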

Comment author: Bound_up 10 March 2017 02:26:16PM *  5 points

I had an idea to increase average people’s rationality with 4 qualities:

It doesn’t seem/feel “rationalist” or “nerdy.”

It can work without people understanding why it works

It can be taught without understanding its purpose

It can be perceived as about politeness

A high school class where people try to pass Intellectual Turing Tests. It's not POLITE/sophisticated to assert opinions if you can't show that you understand the people that you're saying are wrong.

We already have a lot of error-detection abilities when our minds criticize others' ideas; we just need to access that power for our own ideas.

Comment author: I_D_Sparse 10 March 2017 07:43:43PM *  0 points

This is an interesting idea, although I'm not sure what you mean by

It can work without people understanding why it works

Shouldn't the people learning it understand it? It doesn't really seem much like learning otherwise.

Comment author: Dagon 09 March 2017 11:30:30PM *  0 points

Your edit should go at the top. I was disagreeing with most of it when I was reading "describe yourself as" literally, as how you concisely communicate your belief clusters to others, who may or may not be as rational as you are.

If you just mean a variant of "don't believe your own press", I fully agree.

If you're talking about self-labeling, you should probably just choose "me" as the label. Any other label risks an incorrect focus on some external aspect of your beliefs.

Comment author: I_D_Sparse 10 March 2017 01:02:44AM 0 points

Moved it to the top.
