How many rationalists would retain their belief in reason, if they could accurately visualize that hypothetical world in which there was no rationality and they themselves had become irrational?
I'm not sure what "no rationality" would mean. Evolutionarily relevant kinds of rationality can still be expected, like preference for sexually fertile mates and fear of spiders/snakes/heights, and, if we're still talking about something at all similar to Homo sapiens, language and cultural learning and such, which require some amount of rationality to use.
I wonder if you might be imagining rationality in essentialist terms, as an attribute you could universally switch off, but in reality there is no such off switch that is compatible with having decision-making agents.
WoT appears to rely on users rating the website, and singularity.org would probably be a website that very few WoT users have rated (since it is a fairly niche website), so each rating has a large influence on the overall rating.
And the one comment complains about "Mass mailing of non-thematic Forums" (according to Google Translate), so that person possibly rated it low because they were annoyed by SI.
I see. I rated it highly to try and counter it. Perhaps if a few other LWers did this it would shift the rating sufficiently. And as for me, I will not assume reliability in user-rated systems.
Web of Trust, a browser extension designed to build a community around rating the security and trustworthiness of websites, is warning me that singularity.org has untrustworthy attributes. I don't find it particularly likely that singularity.org is trying something malicious, but whatever the circumstances, I would like to know why this has occurred, or at least to point it out. It could be a false positive on WoT's part, or something else (I know almost nothing about web security).
If it is simply a case of WoT failing to weigh ratings thoroughly enough to avoid false positives, then perhaps someone could recommend something else? I like the idea of WoT and would like to either help improve it or find a better service. EDIT: It has more ratings now and WoT no longer warns about it.
Oops, you're right.
Edit: I've gained way too much karma from this exchange when it seems I should've instead lost some for not paying attention.
Perhaps a loss for not paying attention, but I think making a rapid correction in response to criticism is worth some amount of karma, even on something trivial. I've misunderstood loads of things and felt stupid every single time, and I do believe it helps to see someone else publicly brush the dirt off here and there. For me, one of the hallmarks of lousy dialogue is that 100% of participants seem to believe either that they have a 0% error rate or that they cannot admit errors publicly for emotional reasons.
Beware my opinion, though: it is possible that seeing you do something against my views, then later retract and agree with them, simply makes me feel good. I am sorry to say it is difficult for me to tell whether that is playing a significant role or whether I am some sort of dialogue connoisseur. More likely it is some finicky combination. Or, even more insidiously, I could be pointing this out to try and seem more self-aware than I am. Oh, how bias is like a six-headed, head-regenerating dragon.
Eliezer Yudkowsky, the word "normative" stood in the way of my understanding what you mean, at least the first few times I saw you use it, before I pegged you as getting it from the heuristics and biases people. Their usage greatly confused me when I first encountered it. It's jargon, so it shouldn't be surprising that different fields use it to mean rather different things.
The heuristics and biases people use it to mean "correct," because social scientists aren't allowed to use that word. I think there's a valuable lesson about academics, institutions, or taboos in there, but I'm not sure what it is. As far as I can tell, they are the only people that use it this way.
My dictionary defines normative as "of, relating to, or prescribing a norm or standard." It's confusing enough that it carries those two or three meanings, but to make it mean "correct" as well is asking for trouble, or for in-groups.
I agree - it can be especially ambiguous if you're also used to the economics context of normative, meaning "how subjectively desirable something is".
Hello, Less Wrong.
With no particular or unusual intellect (no objective measure aside from an elementary-school IQ test that scored somewhere around 115-125), as well as low school grades, I found myself as a teenager who took issue with religion. I suppose my journey toward becoming rational started when I decided I was an atheist. I was finding various flaws with religion, as well as enjoying material put out by Richard Dawkins and Christopher Hitchens. I consider that the starting point because it was when I realized that humans are inherently terrible at understanding reality, and that merely not succumbing to wrong beliefs is something the vast majority of people fail at, let alone actually understanding reality to even the vague degree our brains can comprehend. I would describe this point as "when I started thinking", or at least trying to do so.
My interest in being studious grew over time. The next milestone related to politics. I was a very typical bleeding-heart liberal throughout my teenage years, holding such simplistic convictions as "corporations are bad!", "pictures of oil-soaked penguins mean we should hold back industry", and "we might as well socialize most industries!". Eventually I began studying economics, which moved me from liberal to libertarian. I had so many irrational beliefs about policy and society that it's a bit shameful to think back on. I now frequently speak against Keynesianism, and am finally beginning to understand the subtle but huge negatives of government intervention.
But I'm not sure my journey as a rationalist was even in an uptrend. I was just absorbing material other people put out, and wasn't really able to make good decisions for myself. I was just cynical and suspicious of commonly held views.
I flipped through Less Wrong, came across Eliezer's article "Cynical about cynicism", and realized I was...full of it. I thought I was being rational, but now I realize I was being childish and angsty. In fact, I wonder if that article should be part of the sequences; I know many people who would benefit from it, most of them environmentalists or atheists (or both). It was the article that made me realize how much work I have yet to do before I can consider myself rational.
Which brings me here, now. I am working my way through the sequences, and occasionally re-reading previous ones to try and learn it as well as I can. I am highly fortunate to be here, I can escape my irrational past, and hopefully have something similar to Yudkowsky's Bayesian enlightenment. I feel as if in many ways I am starting over, and...it feels very, very good.
These documents are interesting, particularly #1 and #2. My opinion has not changed on whether one should supplement (except in the case of a specific deficiency caused by some sort of malabsorption). My expectation is that someone eating a diet that includes a variety of vegetables, especially dark leafy greens, plus seafood that is low in mercury, will be healthier than someone who eats none of those but supplements, ceteris paribus, even if the supplement doses were optimal. I'm not convinced both are typically necessary.
I believe supplements are a long way off from being a reliable way to improve one's diet, and to become so I expect they will require more sophisticated measures like genetic and blood tests. You can get tested for deficiencies right now, and you should, so they can be corrected; but I would lean towards eating more of the foods rich in that nutrient, except where someone has a compromised ability to absorb it and cannot get enough from food.
As has been previously said, there are a boatload of factors that we know exist and affect this subject but that we aren't yet able to anticipate. Eating the foods I described (and some others) does appear reliable.
An adversarial approach may impress spectators. In Eliezer's example, it impressed at least one. But I think it's more likely to alienate the person you're actually conversing with.
I don't have objective research on this. I'm working from personal experience and social work training. In social work you assume people are pretty irrational and coax them round to seeing what you think are better approaches in a way that doesn't embarrass them.
In social work we'd call it "collaborative empiricism" or Socratic questioning. Here's a video example of a therapist not shouting "Of course you're not being punished by God!" It's more touchy-feely than an argument, but the elements (taking the outside view, encouraging him to lay out the evidence on the situation) are there.
Hmm! I found that actually quite helpful. The therapist didn't even voice any apparent disagreement; he coaxed the man into making his reasoning explicit. This would greatly reduce the proportion of the conversation spent in an adversarial state. I noticed that it also put the emphasis of the discussion on the epistemology of the subject, which seems the best way for someone to learn why they are wrong, as opposed to a more example-specific "You're wrong because X".
Thank you for that link. Would it be useful for me to study other videos of a therapist disagreeing with a delusional patient? It seems like the ideal type of behaviour to try and emulate. This is going to take me lots of practice, but I'm eager to get it.
Thank you for your help and advice!
Interesting. Do we have any good information on the attributes of discussions or debates that are the most likely to educate the other person when they disagree?
Something I've noticed: when someone takes the "conquer the debate" adversarial approach, a typical-minded audience appears more likely to be interested and to side with the "winner" than when the person takes a much more reserved and cooperative approach with equally well-supported arguments. Maybe the first works well for typical audiences and the second for above-typical ones?
I hope you've noticed you changed the subject here. In the first paragraph you're trying to persuade the person with whom you are conversing; in the second paragraph you're trying to convince an audience. They might well require entirely different methods.
You're right, I see now that the effect on audiences does not relate much to the one-on-one, so I should have kept a clear distinction. Thank you for pointing this out.
I believe this obvious mistake shows that I shouldn't comment on the sequences as I work my way through them; rather, it is better to start commenting only after I have become familiar with them all. I am not yet ready to make comments that are relevant and coherent, and the very last thing I want to do is pollute the comment section. I am so glad about the opportunity for growth this site offers, thanks very much to all.
-- Tim Kreider, The Quiet Ones
I don't know the circumstances, but I would have tried to make eye contact and just blatantly stare at them for minutes straight, maybe even hamming it up with a look of slightly unhinged interest. They would have become more uncomfortable and might have started to feel anxious that a stranger was eavesdropping on them, making them want to be more discreet, depending on their disposition. I've actually tried this before, and it seems to sometimes work if they can see you staring at them. Give a subtle, slight grin, like you might be sexually turned on. If you won't see them again, then it's worth a try.