Rational Healthcare
So what is "rational healthcare"? BetterCare. BetterCare is a startup that brings healthcare sharing, pioneered by Christian healthcare sharing ministries, to the general public. We are also considering facilitating healthcare sharing among other circles, such as religious communities, local neighborhoods, or even interest groups like rationalists. Christian healthcare sharing ministries don't provide health insurance. Instead, they cut out the middlemen, in this case insurance companies, and let members share healthcare costs directly among themselves. The result is the equivalent of health insurance at half the price, with low "deductibles" and full coverage of almost all conditions to boot. BetterCare can do the same for you. We can provide monthly rates of just $180 per person and $410 per family, compared with the U.S. national average for health insurance of $490 per person and $1,363 per family per month. If you want to learn more and stay updated on our progress, check out our website and join the waitlist for new member opportunities at www.bettercare.tk.
Neo-reactionaries, why are you neo-reactionary?
Through LessWrong, I've discovered the neo-reactionary movement. The survey says that there are some of you here.
I'm curious: what led you to accept the basic premises of the movement? What is the story of your personal "conversion"? Was there some particular insight or piece of information that was important in convincing you? Was it something that just "clicked" for you, or that you had always felt in a vague way? Were any of you "raised in it"?
Feel free to forward my questions to others or direct me towards a better forum for asking this.
I hope that this is in no way demeaning or insulting. I'm genuinely curious, and my questioning is value-free. If you point me towards compelling evidence for the neo-reactionary premise, I'll update on it.
I Want To Believe: Rational Edition
Relevant: http://lesswrong.com/lw/k7h/a_dialogue_on_doublethink/
I would like this conversation to operate under the assumption that there are certain special times when it is instrumentally rational to convince oneself of a proposition whose truth is indeterminate, and when it is epistemically rational as well. I ask for this because I believe that questioning the assumption makes it more difficult to use doublethink for productive purposes. There are many other places on this website where the ethics or legitimacy of doublethink can be debated, and I am already aware of its dangers, so please don't mention such things here.
I am hoping for some advice. "Wanting to believe" can be both epistemically and instrumentally rational, as in the case of certain self-fulfilling prophecies. If believing that I am capable of winning a competition will cause me to win, believing that I am capable of winning is rational both in the instrumental sense that "rationality is winning" and in the epistemic sense that "rationality is truth".
I used to be quite good at convincing myself to adopt beliefs of this type when they were beneficial. It was essentially automatic: I knew that I had the ability, so applying it was as trivial as remembering its existence. Nowadays, however, I'm almost unable to do this at all, despite what I remember. It's causing me significant difficulties in my personal life.
How can I redevelop my skill at this technique? Practicing will surely help, and I'm practicing right now, so I'm improving already. I'll soon have the skill back stronger than ever, I'm quite confident. But are there any tricks or styles of thinking that can make it more controllable? Any mantras or essays that will help my thought become more fluidly self-directed? Or should I be focused on manipulating my emotional state rather than on initiating a direct cognitive override?
I feel as though the difficulties I've been having become most pronounced when I'm thinking about self-fulfilling prophecies that do not have guarantees of certainty attached. The lower my estimated probability that the self-fulfilling prophecy will work for me, the less able I am to use the self-fulfilling prophecy as a tool, even if the estimated gains from the bet are large. How might I deal with this problem, specifically?
"Follow your dreams" as a case study in incorrect thinking
This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference.
Here's a great quote from Sam Hughes, giving some examples of good and bad advice:
"You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run."
The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it were replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place.
The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way.
The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or the inside vs outside view. What if your friend Bob were a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is betting years of your own life on a smaller chance of success than Bob's.
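To make the outside-view move concrete, here is a minimal sketch. Every number in it is a made-up illustrative value (an assumed reference class size, base rate, and gut-feeling estimate), not data from any real study:

```python
# Minimal sketch of reference class forecasting (outside view vs. inside view).
# All numbers are made-up illustrative values, not real statistics.

def outside_view_estimate(successes_in_class: int, size_of_class: int) -> float:
    """Estimate your chance of success from the base rate among people like you."""
    return successes_in_class / size_of_class

# Inside view: "I will succeed because I'm me."
inside_view = 0.30  # hypothetical gut feeling about making it big as a musician

# Outside view: how many people in a comparable reference class actually made it?
outside_view = outside_view_estimate(successes_in_class=50, size_of_class=100_000)

print(f"Inside view:  {inside_view:.1%}")   # 30.0%
print(f"Outside view: {outside_view:.3%}")  # 0.050%
```

The debiasing step is simply to let the second number, not the first, anchor your plans.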
That still leaves many questions unanswered, though. Why do people offer such advice in the first place, why do other people follow it, and what can be done about it?
Survivorship bias is one big reason we constantly hear successful people telling us to "follow our dreams". Successful people don't really know why they are successful, so they attribute it to their hard work and not giving up. The media amplifies that message, while millions of failures go unreported because they're not celebrities, even though they try just as hard. So we hear about successes disproportionately, in comparison to how often they actually happen, and that colors our expectations of our own future success. Sadly, I don't know of any good debiasing techniques for this error, other than just reminding yourself that it's an error.
When someone has invested a lot of time and effort into following their dream, it feels harder to give up due to the sunk cost fallacy. That happens even with very stupid dreams, like the dream of winning at the casino, that were obviously installed by someone else for their own profit. So when you feel convinced that you'll eventually make it big in writing or music, you can remind yourself that compulsive gamblers feel the same way, and that feeling something doesn't make it true.
Of course there are good dreams and bad dreams. Some people have dreams that don't tease them for years with empty promises, but actually start paying off in a predictable time frame. The main difference between the two kinds of dream is the difference between positive-sum games, a.k.a. productive occupations, and zero-sum games, a.k.a. popularity contests. Sebastian Marshall's post Positive Sum Games Don't Require Natural Talent makes the same point, and advises you to choose a game where you can be successful without outcompeting 99% of other players.
The really interesting question to me right now is, what sets someone on the path of investing everything in a hopeless dream? Maybe it's a small success at an early age, followed by some random encouragement from others, and then you're locked in. Is there any hope for thinking back to that moment, or set of moments, and making a little twist to put yourself on a happier path? I usually don't advise people to change their desires, but in this case it seems to be the right thing to do.
Truth vs Utility
According to Eliezer, there are two types of rationality. There is epistemic rationality, the process of updating your beliefs based on evidence so that they correspond to the truth (or reality) as closely as possible. And there is instrumental rationality, the process of making choices in order to maximize your future utility. These two slightly conflicting definitions work together most of the time, as obtaining the truth is the rationalist's ultimate goal and thus yields the maximum utility. Are there ever times when the truth is not in a rationalist's best interest? Are there scenarios in which a rationalist should actively try to avoid the truth to maximize their possible utility? I have been mentally struggling with these questions for a while. Let me propose a scenario to illustrate the conundrum.
Suppose Omega, a supercomputer, comes down to Earth to offer you a choice. Option 1 is to live in a simulated world where you have infinite utility (in this world there is no pain, suffering, or death; it's basically a perfect world) and you are unaware you are living in a simulation. Option 2 is that Omega will truthfully answer one question on absolutely any subject pertaining to our universe, with no strings attached. You can ask about the laws governing the universe, the meaning of life, the origin of time and space, whatever, and Omega will give you an absolutely truthful, knowledgeable answer. Now, assuming all of these hypotheticals are true, which option would you pick? Which option should a perfect rationalist pick? Does the potential of asking a question whose answer could greatly improve humanity's knowledge of our universe outweigh the benefits of living in a perfect simulated world with unlimited utility?

There are probably a lot of people who would object outright to living in a simulation because it's not reality or the truth. Well, let's consider the simulation in my hypothetical conundrum for a second. It's a perfect reality with unlimited utility potential, and you are completely unaware you are in a simulation in that world. Aside from the unlimited utility part, that sounds a lot like our reality. There are no signs of our reality being a simulation, and all (or most) of humanity is convinced that our reality is not a simulation. Therefore, the only difference that really matters between the simulation in Option 1 and our reality is the unlimited utility potential that Option 1 offers. If there is no evidence that a simulation is not reality, then the simulation is reality for the people inside it. That is what I believe, and that is why I would choose Option 1. The infinite utility of living in a perfect reality outweighs almost any increase in utility I could contribute to humanity.
I am very interested in which option the Less Wrong community would choose (I know Option 2 is kind of arbitrary; I just needed an option for people who wouldn't want to live in a simulation). As this is my first post, any feedback or criticism is appreciated. Also, any further reading on the topic of truth vs utility would be very helpful. Feel free to downvote me to oblivion if this post was stupid, didn't make sense, etc. It was simply an idea that I found interesting and wanted to put into writing. Thank you for reading.
Does this seem to you like evidence for the existence of psychic abilities in humans?
I was recently reminded of something I have encountered that seems to me to be good evidence for paranormal phenomena. Can anyone help me figure out what might be going on?
When I was a little younger, I used to play the online riddle game Notpron. In this game, the player (essentially) has to analyze a webpage for clues to the URL of the next webpage, and then repeat for 140 stages. The creator of this game, DavidM, at some point became a huge new age conspiracy theory loony type. Three years after the original ending of the riddle went online, he revised it to include an additional final level: Level Nu. This level is very different from the ones preceding it. I can't link to the page for obvious reasons, but I will transcribe it here:
835 492 147 264
Remote view the photography this number represents!
Email me all your results to david@david-m.org. I'll get you some feedback. Get me all elements or impressions that seem really strong for you. Or send me your sketches if you like.
Don't bruteforce, or you'll be banned from this one. You have as many attempts as you like, take your time.
Yes, I mean it. No tricks here, just pure remote viewing. The number represents a picture, I want to know what's on there.
So learn some remote viewing technique you like best and go ahead. The internet has lots of information. Have fun!
Please do this ALL by yourself, not even with your very very close friends. Because its boring and stupid, and because you can put bullshit into each others head, which is hard to get rid of again, because the mind needs to be shut down for this to work properly. So do it alone, just talk to me about it, please.
(Yes, this really works, one friend got the content of the picture on first try...and yes, he only got the number from me.)
- 31 people have successfully completed this level.
- Before this level went up, around 200 people had successfully completed the game (iirc). Given that Notpron has declined in popularity since Level Nu was created in 2008, I would estimate that around 300 people in total are in a position to attempt Level Nu, although it could be more. However, I would imagine that many people 1) probably did not come back once they had already finished, 2) were too intimidated by remote viewing and the trivial inconvenience of having an email discussion with DavidM, 3) did not even bother due to disbelief in remote viewing.
- The first person who solved it did so by dreaming about the answer. She dreamt night after night that a German man (DavidM is German) was aggressively trying to sell her a boat. The solution picture was of a boat. One of the very first posts on the thread was her talking about her dream and saying "I think this has something to do with Notpron, but I don't know what". DavidM had to immediately remove the post so as not to give away the answer.
- The second person solved it on their first try with just one word (presumably "boat").
- Someone who solved it said "What I got was literally a much sharper much detailed version of a badly scribbled picture in my mind". This person apparently also got "the one right word that you need to solve it" (boat).
- Someone on the forum writes: "Mailed my visions. I swear it was first thing i saw in my head. But no doubts i was wrong =)". Immediately after, DavidM replies saying that he figured it out.
- "The last 3 or 4 people solved the thing at the first attempt. Some little inaccuracies everytime, but the main 2 objects were always named first."
- "i didn't have any "visions". just was reading my university-stuff, when snowman "forced" me to write david. i thought it could be funny though and wrote the first shit of which i was thinking at that second. didn't even look at the numbers or anything."
- Someone's first idea that he sent was what David planned as the future solution. It seems like what he said was "rainbow colors" for a picture of an assortment of fruit. David told him to look at the current solution instead, and again, his first idea was correct.
- Same guy: "weird thing is. i got the "future solution" picture in my head right away. without even trying. then i just send it in.and when david asked me to get the current one. my gf came to me with my son in her arms saying i had to take him and i just: "Hold on, i just need to get a picture in my head". and while she was standing there with my son crying next to me. i got a pic up in my head immidietly, but that didnt feel right so i pushed it away and got another on right away and mailed it in. and it was the right one. hehe. :) and especially the second pic, i saw very clearly. even colours."
- Post where he reveals the original answer: "Most people just said right away, "it's a boat" or "boat/raft on a lake/sea/river". Or one said "going fishing", which was vague, but I let it count. What I got a lot as well was the skyline and water. 2 guys have been listening to a song called "I'm on a boat" while solving the riddle, and I watched the video clip. One scene in it looks just like the solution. Crazy."
- Post where he reveals the second answer: He says several times that he believes that this one was harder than the first. "Almost [all? sic] saw round things. Some interpreted it as ball(sport), circles, pom poms, the sun or the moon etc. So I'm glad this round-element was so dominant. CTRL saw rainbow colours right away. At least something. Kasper then pretty much nailed it in his this attempt: I saw two things O.o i saw an animal and fruit/vegetables maybe animals eating fruit/vegatables." It seems like only two people solved it during this time, although there may be more.
- Finally, someone who doesn't believe: "(This is Jooly, who used to be a mod here and one of the first solvers of the fair levels, and whose account has been mysteriously deactivated since she started discussing DavidM's increasingly wacky ideas a while ago) I spoke with one of the level Nu solvers, who explained to me exactly how it was solved. Remote viewing had nothing to do with it. Duping a very very gullible (desperately wanting to believe?) DavidM was all it took, and it was very easy too. I won't bother, having solved the real notpron levels. But for those of you who must have the new certificate, don't worry. It doesn't take any magic powers or much effort to do so." (David denies that he deactivated Jooly's account and says Jooly is free to disagree with him.)
- I personally talked to the skeptic in question on IRC back in the day. I can't recall the conversation too well, but he refused to give any concrete details on how he solved it exactly. I asked him "Was it something like, for example, you say 'Is it blue?', David says 'no', you say 'Is it red?', David says 'no', you say 'Is it big?', David says 'no', you say 'it's an apple', David says you figured it out?". He said it was something close to that. Note that as far as I can tell, everyone else who solved it either believes in remote viewing or remains agnostic.
- On how someone solved the level: "Yeah, she asked a friend about the number. He said the correct answer, and there you go."
- The third answer is revealed. There's too much stuff here to copy and paste, but he reveals a bunch of successful attempts, some of which are pretty uncanny. The most interesting part is: "Kimmo, who was not considered to have solved it said: 'It is something that is approaching me, not sure what it is. It is that kind of situation where you need to react to and not stay there just looking what it is.' (Now I don't really see why I didn't let him pass; if you're reading this, contact me!)"
- After around twenty-something solves, DavidM maintains that most people guessed it on their first try.
- "Most people" apparently guessed it on their first try.
- According to David, about half the people who tried it have solved it.
- The dream thing - absolutely insane, hard to imagine that it's a coincidence.
- David did not consider the guy who guessed the shark as "something approaching me, it is a situation that I need to react to" to have solved the level. This shows that he requires fairly high standards of accuracy.
- David implies that in order to have guessed the boat, you need to say the word "boat", also implying high standards.
- David did not really give me very much help or "lead" me anywhere when I tried to solve it.
- One person who solved it says that he did not solve it using remote viewing.
- It didn't work for me at all.
- David might very well be exaggerating both the percentage of people who successfully solved it and the percentage of people who guessed it on their first try.
- David might be (and in fact probably is) only reporting the "best" answers in his forum posts. For the fruit and the shark, he seems to be posting about half of the people who solved it in that time period. For the boat, he doesn't really give specifics, and instead says "Most people just said it was a boat on their first guess."
- Maybe DavidM is in fact "leading" people to the answer through a series of multiple guesses. For this to be true, however, a few things would have to be the case. First of all, his assertion that most people guessed it on their first try would have to be greatly exaggerated. Let's imagine that David is outright lying about most people guessing it on their first try and that half the people who attempted the riddle solved it. However, at least six people (I don't feel like going back through all 29 pages and counting) posted on the forum that they solved it on their first try. Let's imagine that all 300 people who reached the level attempted it. This is still a 1/50 "first guess" rate, and that's out of all the photographs in the world (see the rough numbers in the sketch after this list). However, maybe by some conjunction of 1) exaggerating those two numbers, 2) his dialogue with me being atypical, 3) the answers he posted on the forum being atypical, 4) his refusal to accept "something approaching me" being atypical, and 5) the dream being a total coincidence, it may be true that he actually is doing a form of "leading" and is covering it up well. This feels like a really unsatisfactory answer. It relies on a lot of conjunctions, and it seems clear that the only way to arrive at it is by a thorough search for some sort of answer that fits nicely with our pre-existing worldview. That being said, I suspect it might be the most likely answer.
- Perhaps the level is an elaborate joke. In reality there is some other, more conventional means of arriving at a solution, and people who solve it are told to play along. I can sort of see this being the case, given that 1) there are some other levels of Notpron that have "prankster-ish" elements and 2) I have myself been part of a very similar joke on an even bigger scale, so I know that it can happen. On the other hand, DavidM really strongly believes in the conspiracy theory new age stuff and vigorously promotes it, so it seems unlikely that he would sabotage his own ideology like that. Also, while there are other prankster-ish levels of Notpron, nothing comes close to being as clever or elaborate as this scenario would be.
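To put rough numbers on the 1/50 first-guess rate mentioned above, here is a small back-of-the-envelope sketch. The attempt count and the number of reported first-try solves are my own estimates from the forum thread, and the cold-guess base rates are pure assumptions, so treat the output as illustrative only:

```python
# Back-of-the-envelope: how surprising are >= 6 first-try solves among ~300
# attempters if each first guess has only a small chance p of naming the right
# object by luck? All inputs are rough assumptions, not known quantities.
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_attempters = 300       # rough guess at how many people reached Level Nu
first_try_solves = 6     # first-try solves reported on the forum (a lower bound)

for p in (0.001, 0.005, 0.02):  # assumed chance of a lucky correct first guess
    print(f"p = {p}: P(at least {first_try_solves} first-try hits) = "
          f"{prob_at_least(first_try_solves, n_attempters, p):.4g}")
```

Under these assumptions, six-plus first-try hits only stop being surprising once a lucky cold guess is itself roughly a 1-in-50 event; for smaller base rates, the coincidence story has to lean on the conjunction of caveats listed above.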
What do rationalists think about the afterlife?
I've read a fair amount on Less Wrong and can't recall much said about the plausibility of some sort of afterlife. What do you guys think about it? Is there some sort of consensus?
Here's my take:
- Rationality is all about using the past to make predictions about the future.
- "What happens to our consciousness when we die?" (may not be worded precisely, but hopefully you know what I mean).
- We have some data on what preconditions seem to produce consciousness (i.e. neuronal firing). However, this is only data on the preconditions that seem to produce the kind of consciousness that can and does communicate or demonstrate itself to us.
- Can we say that a different set of preconditions doesn't produce consciousness? I personally don't see reason to believe this. I see 3 possibilities that we don't have reason to reject, because we have no data on them. I'm still confused and not too confident in this belief though.
- Possibility 1) Maybe the 'other' conscious beings don't want to communicate their consciousness to us.
- Possibility 2) Maybe the 'other' conscious beings can't communicate their consciousness to us ever.
- Possibility 3) Maybe the 'other' conscious beings can't communicate their consciousness to us given our level of technology.
- And finally, since we have no data, what can we say about the likelihood of our consciousness returning or remaining after we die? I would say the chances are 50/50: for something you have no data on, any outcome is equally likely. (This feels like something that must have been talked about before, so side question: is this logic sound?)
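A brief aside on that last step, since I'm asking whether the logic is sound: the rule "no data means every outcome is equally likely" is the principle of indifference, and the number it produces depends on how you carve up the possibilities. A minimal sketch, with purely illustrative partitions:

```python
# Sketch: "no data => every outcome equally likely" is the principle of
# indifference, and the probability it assigns depends on how the space of
# possibilities is carved up. These partitions are illustrative only.

partitions = {
    "two outcomes":   ["consciousness persists", "consciousness ends"],
    "three outcomes": ["persists with memories",
                       "persists without memories",
                       "consciousness ends"],
}

for name, outcomes in partitions.items():
    p_each = 1 / len(outcomes)
    persisting = [o for o in outcomes if "persists" in o]
    print(f"{name}: each outcome gets {p_each:.2f}; "
          f"P(persists in some form) = {p_each * len(persisting):.2f}")
# two outcomes:   P(persists in some form) = 0.50
# three outcomes: P(persists in some form) = 0.67
```

So "equally likely" only yields 50/50 if the two-way split is the right way to carve things up, which is part of why I'm not confident in the estimate.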
Edit: People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this. My point in this post is that I don't see why we have reason to reject the 3 possibilities above. If you reject the idea that consciousness could reside outside of the brain, please explain why.
Intelligence-disadvantage
While LessWrong contains a large amount of high-quality material, most of the rationality advice isn't actually targeted at our core audience. The focus seems to be more on irrational things that people in general do, rather than on irrational things that smart people do. (Sidenote: if we wanted to create a site focused on spreading general rationality, we'd need to simplify the discussion, remove a lot of the maths and controversial ideas, and add in some friendly images. Does such a site exist?)
This has led to a number of comments questioning the real-world value of having read the sequences. If your average person had the patience to read through the core sequences and understand them, they'd find them extremely valuable. It'd provide them with a glimpse into a new way of thinking, and even though they would still hardly appear very logical to most Less Wrongers, they'd be much better than they were at the start.
On the other hand, most Less Wrongers already know the basics of logic. That's not to say that we don't act extremely irrationally much of the time, just that going over the basics of logic again probably provides minimal benefit. What is needed is something specifically targeted at the kinds of irrational mistakes and beliefs that intelligent people make. I would argue that if this were a sequence, it would be the most important sequence on the entire site. But since I lack that level of writing ability, I'm not even going to attempt such a project. So I've created this post where we can list articles or ideas that should be part of such a sequence, in the hope that someone else might pick it up.
Here are some examples of mistakes that intelligent people make:
Taking a fixed instead of a growth mindset - shying away from challenges, convincing ourselves that we are just naturally bad at non-intellectual things and that we shouldn't focus on them
Directly pointing out people's flaws
Overthinking issues that are really very simple
Counter-signalling by ignoring the value of fashion, money, being liked
Valuing intelligence above all other qualities
Rigidly adhering to rules
Expecting other people to always be rational
Not considering popularity as a signal that is worth understanding
Overvaluing being right
I'm sure there are plenty more. Any other suggestions or relevant articles?
In favour of terseness
I like posts that are concise and to the point. Posts like that maximize my information/effort ratio. I would really like to see experienced rationalists simply post a list of things they believe on any given subject with a short explanation for why they believe each of those things. Then I could go ahead and adjust my beliefs based on those lists as necessary.
Sadly, I don't see any posts like this. Presumably this is because of the social convention where you're expected to back up any public belief with arguments, so that other people can attempt to poke holes in them. I find this strange, because the arguments people present rarely have anything to do with why they believe those things, which makes the whole exercise a giant distraction from the main point that the author is trying to get across. In order to prevent this kind of derailment, posters tend to cover their arguments with endless qualifications, so that their sentences read like this: "I personally believe that, in cases X, Y, and Z and under circumstances B and C, ceteris paribus and barring obvious exceptions, it seems safe to say that murder is wrong, though of course I could be mistaken." The problems with such excessive argumentation and qualification are threefold:
- The post becomes less readable: The information/effort ratio is lowered.
- It becomes much more difficult to tell what the author genuinely believes: Are they really unsure or just trying to appear humble? Is that their true objection, or just an argument?
- Despite everything, someone is STILL going to miss the point and reply that sometimes killing people is ok in certain situations, and then the next 100 comments will be about that.
By contrast, terseness makes posts more readable and makes it less likely that the main point is misunderstood. So if we as a community could relax the demand for argumentation and qualification somewhat, and all focussed on debating the main points of posts instead of getting sidetracked, then perhaps the experienced rationalists here could write nice, concise posts that give clear and direct answers to complicated questions. Instead, some of the sequences are so long and involve so many arguments, counter-arguments, and disclaimers that I feel the point is lost entirely.
LINK: In favor of niceness, community, and civilisation
Scott, known on LessWrong as Yvain, recently wrote a post complaining about an inaccurate rape statistic.
In a comment thread on Jeff Kaufman's Facebook profile, which can be read here, Arthur Chu, who is notable for recently winning money on Jeopardy, argued against Scott's stance that we should be honest in arguments.
Scott just responded here, with a number of points relevant to the topic of rationalist communities.
I am interested in what LW thinks of this.
Obviously, at some point being polite in our arguments becomes silly. I'd be interested in people's opinions on how dire the real-world consequences have to be before it's worthwhile to debate dishonestly.