
Comment author: satt 28 June 2017 08:23:34PM 0 points [-]

You reminded me of a tangentially related post idea I want someone to steal: "Ideologies as Lakatosian Research Programmes".

Just as people doing science can see themselves as working within a scientific research programme, people doing politics can see themselves as working within a political research programme. Political research programmes are scientific/Lakatosian research programmes generalized to include normative claims as well as empirical ones.

I expect this to have some (mildly) interesting implications, but I haven't got round to extracting them.

Comment author: Jayson_Virissimo 29 June 2017 04:44:25AM *  0 points [-]

You've already been scooped. The "research programme" that Lakatos talks about was designed to synthesize the views of Kuhn and Popper, but Kuhn himself modeled his revolutionary science on constitutional crises, and his paradigm shifts on political revolutions (and, perhaps more annoyingly to scientists, religious conversions). Also, part of what was so controversial (at the time) about Kuhn was the prominence he gave to non-epistemic (normative, aesthetic, and even nationalistic) factors in the history of science.

Comment author: DragonGod 21 June 2017 02:03:10PM 0 points [-]

True. I don't disagree with that.

Comment author: Jayson_Virissimo 23 June 2017 05:22:17AM 2 points [-]

So, in the meantime, priors are useful?

Comment author: siIver 08 June 2017 04:01:56PM *  3 points [-]

This is the ultimate example of... there should be a name for this.

You figure out that something is true, like utilitarianism. Then you find a result that seems counterintuitive. Rather than going "huh, I guess my intuition was wrong, interesting," you go "LET ME FIX THAT" and change the system so that it does what you want...

Man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.

The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, but not necessarily for each particular situation. Having a system tells us what is correct in each situation, not vice versa.

The utility monster is nothing to be fixed. It's a natural consequence of doing the right thing that just happens to make some people uncomfortable. It's hardly the only uncomfortable consequence of utilitarianism, either.

Comment author: Jayson_Virissimo 08 June 2017 04:29:53PM 1 point [-]

This is the ultimate example of... there should be a name for this.

I think the name you are looking for is ad hoc hypothesis.

Comment author: Jayson_Virissimo 10 April 2017 03:45:35AM 0 points [-]

...deontological responses (DRs) seem to be equivalent to responses that demonstrate cognitive biases in non-moral situations. For example, the omission bias favors harms of omission over less harmful harms caused by acts, in both moral and non-moral situations (Ritov & Baron, 1990). This similarity suggests that the DRs arise from some sort of error, or poor thinking. Much evidence indicates that the cognitive processes supporting moral and non-moral judgments are largely the same (e.g., Greene, 2007). If this is true, the question arises of what sort of thinking is involved, and when it occurs.

[Link] Explanations of deontological responses to moral dilemmas

1 Jayson_Virissimo 10 April 2017 03:43AM
In response to comment by gwillen on LessWrong Discord
Comment author: gwern 14 March 2017 03:22:11AM 0 points [-]

No IRC :(

In response to comment by gwern on LessWrong Discord
Comment author: Jayson_Virissimo 14 March 2017 04:17:42AM 0 points [-]

IRC is near the center.

Comment author: Jayson_Virissimo 21 November 2016 02:07:03AM *  7 points [-]

This could be a great technique for adding structure to internet discussions, but this algorithm says more about which debate team has more time on their hands than about which arguments have actually been refuted.

Comment author: Val 06 November 2016 10:57:52AM 3 points [-]

For many people, religion helps a lot in replenishing willpower. Although, from what I've observed, it's less about stopping procrastination and more about not despairing in a difficult or depressing situation. I might even safely guess that for a lot of believers this is among the primary causes of their beliefs.

I know that religious belief on this site is significantly below the offline average. I didn't want to convince anyone of anything; I just pointed out that for many people it helps. Maybe by acknowledging this fact we might understand why.

Comment author: Jayson_Virissimo 07 November 2016 03:06:23AM 3 points [-]

I've noticed something even more general: people who have a well-defined philosophy of life seem more motivated and resilient in the face of setbacks or tragedy than those who lack such a self-narrative. But this appears to be the case even for philosophies of life whose tenets contradict (or at least stand in strong tension with) each other in important ways, such as Christianity, Objectivism, Buddhism, Stoicism, etc.

This is pure anecdote, and obviously the people I come in contact with are not even close to a random sample of humanity, so I'd very much like to be pointed toward a more systematic study of the phenomenon (or lack thereof).

Comment author: MrMind 03 November 2016 03:08:00PM 0 points [-]

Isn't this a ripoff of the Slate Star Codex take on voting?

Comment author: Jayson_Virissimo 04 November 2016 12:56:30AM *  0 points [-]

Trying to calculate the expected value of voting goes back at least to the public choice economists of the 1960s.
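The calculation being referred to can be sketched roughly as follows. This is a minimal illustration in the spirit of the classic public-choice model (expected value = probability of being decisive × benefit, minus cost, plus any direct value from the act itself); all the numbers are hypothetical, not real estimates of any election.

```python
def expected_value_of_voting(p_decisive, benefit, cost, duty=0.0):
    """Expected value of casting a vote.

    p_decisive: probability that your vote changes the outcome
    benefit:    value to you if your preferred outcome wins
    cost:       cost of voting (time, effort, etc.)
    duty:       direct value you get from the act of voting itself
    """
    return p_decisive * benefit - cost + duty


# With a tiny probability of being decisive, even a large benefit
# rarely outweighs the cost unless the "duty" term is positive.
ev = expected_value_of_voting(p_decisive=1e-7, benefit=10_000, cost=5)
print(ev)  # 0.001 - 5 = -4.999
```

The usual conclusion drawn from this model is that the instrumental term (p × B) is negligible in large electorates, which is why later variants add a non-instrumental "duty" term to explain why people vote at all.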

Comment author: Jayson_Virissimo 16 August 2016 03:30:56AM 0 points [-]

I like your failed-arguments section. IMO, frequent reminders about the phenomenon of using "arguments as soldiers" are one of the most straightforward and effective ways to encourage higher levels of rationality in ourselves and others.
