When you make a data declaration like this, you are essentially postulating (A → □ A) and (□ A → A), for any formula A. This essentially removes the distinction between A (the formula A is "true") and □ A (the formula A is "provable"), meaning you are not working inside the modal logic you think you are working in. We made the same mistake in the /r/haskell thread at first.
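Concretely, the problem looks something like the following. This is a minimal Haskell sketch of the kind of declaration I mean, my reconstruction rather than the thread's exact code:

```haskell
-- A minimal sketch of the offending declaration (my reconstruction,
-- not the thread's exact code). Box is meant to encode "□", i.e.
-- "is provable":
data Box a = Box a

-- With this declaration, both directions are trivially inhabited,
-- which is exactly the postulate (A → □ A) and (□ A → A):
intro :: a -> Box a   -- A → □ A
intro = Box

elim :: Box a -> a    -- □ A → A
elim (Box x) = x
```

Nothing in the type system stops you from writing elim, so Box fails to model provability: a real □ must not admit (□ A → A) in general.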
It's an easy mistake to make, because the first rule of inference for provability logic is
⊢ A
------
⊢ □ A
So it looks like (A → □ A) should be a theorem, but notice...
I understand how that might have happened. Now that I am no longer a heroic volunteer saving my beloved website maiden, but just a potential contributor to an open source project, my motivation has dropped.
It is a strange inversion of effect. The issue list and instructions both make it easier for me to contribute, but since they reveal that the project is well organized, they also demotivate me because a well-organized project makes me feel like it doesn't need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.
After finding the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".
The path to becoming a volunteer contributor is now very clear.
Are you kidding? Sign me up as a volunteer polyglot programmer, then!
Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.
Maybe it's just that volunteers who will actually do any work are hard to find. Related.
Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.
I'm pretty sure I would come up with a reason to continue behaving as I do today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?
Thanks for the suggestion. This is not, however, what I was looking for.
Cached thoughts explains that hearing a phrase might be enough for our brain to remember it as true, while the genetic fallacy warns that the original cause of a belief is not as important as the sum total of evidence for or against that belief.
I am not, however, looking for evidence that our past taints our beliefs. I have come to realize that finding the historical cause of a thought is a good first step towards getting rid of an unwanted thought, and I wanted to know whether this strate...
search for the historical causes of your thoughts, rather than their justifications.
Is there a standard name or LW article on the subject? I first stumbled upon the importance of that skill here, on this site, and I wish I knew more about it than just that one personal anecdote.
Great case study, in that studying my own reaction to your article has taught me a lot about my own decision making. And my conclusion is that reading a rationalist blog isn't sufficient to become rational!
I am thin, despite having very bad eating habits (according to conventional dietary wisdom). I had not heard of Taubes before. Specifically, I had never considered that conventional dietary wisdom could be incorrect; people say that I eat unhealthily, and I have simply taken their word for it. The fact that I continue to eat unhealthily has more to do ...
It is rational to update by a lot in response to a small amount of evidence if that evidence brings along a possibility you hadn't considered before, that possibility has a high prior probability, and you didn't have much evidence to begin with.
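To put toy numbers on this (mine, purely illustrative): suppose you had never considered hypothesis H, so it effectively sat at probability 0. Merely having H pointed out forces you to grant it its prior, say P(H) = 0.3. Even weak evidence E, say with likelihood ratio P(E|H) / P(E|¬H) = 2, then gives posterior odds (0.3 / 0.7) × 2 ≈ 0.86, i.e. P(H|E) ≈ 0.46. The swing from roughly 0 to 0.46 is large, yet almost all of it comes from finally considering H, not from the strength of E.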
Could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.
I started to believe in the Big Bang here. I was convinced by the evidence, but as this comment indicates, not by the strongest evidence I was given; rather, it was necessary to contradict the specific reasoning I used to disbelieve the Big Bang in the first place.
Is this typical? I think it would be very helpful if, in addition to stating which opinion you have changed, you stated whether the evidence convinced you because it was strong or because it broke the chain of thought which led to your pre-change opinion.
I often get the feeling that Alicorn's posts could use more evidence. However, given her status here, I take the very fact that she recommends something as evidence that she has herself encountered good evidence that the recommendation works; you know, Aumann agreement and all that.
Besides, even though it would be nice to see which evidence she has encountered, I know that I wouldn't bother to read the research if she linked to it. Intellectually, I trust Alicorn's conclusions. Therefore, I wish to believe in her conclusions; you know, Tarski's litany and ...
I voted for the karma-coward option because people are different, so having more options is good. But since I am new myself (only 16 comments, 43 karma points), you might be more interested in the fact that I would not use it if it were available.
I find it very gratifying to receive karma points. It motivates me to write more comments. If I had been granted a grace period in which my comments did not receive karma points, I might have posted even less. Even the downvotes are a motivator, not to post more, but to put in more effort the next time I have something to say....
Sure, let's try.
...okay, writing down my thoughts. seems easy enough. reminds me of this exercise where we needed to write down our thoughts without thinking about it and i was mad because I kept thinking private thoughts and writing "can't write this" but in a very pleasing way and i liked being liked and that's a pattern which characterizes me well and i stopped- why? to catch my breath. not to think about the next sentence? are you sure? I put a lot of importance on writing down clearly and smartly I want readers to feel how smart I am, I want
You win. I did not realize that we knew that galaxies have been flying apart for billions and billions of years, as opposed to just right now. If something has been going on for so long, I agree that the simplest explanation is that it has always been going on, and this is precisely the conclusion which I thought popular science books took for granted.
Your other arguments only hammer the nail deeper, of course. But I notice that they have a much smaller impact on my unofficial beliefs, even though they should have a bigger impact. I mean, the fact that th...
I prefer the meme where you've just won by learning something new; you now know more than most people about the justifications for Big Bang cosmology, in addition to (going meta) the sort of standards for evidence in physics, and (most meta and most importantly) how your own mind works when dealing with counterintuitive claims. I won too, because I had to look up (for the first time) some claims I'd taken for granted in order to respond adequately to your critique.
I take this as an opportunity to improve the art of rationality
Good idea! It's especial...
I don't think we can reasonably elevate our estimate of our own rationality by observing that we disagree with the consensus of a respected community.
But isn't Eliezer suggesting, in this very post, that we should use uncommon justified beliefs as an indicator that people are actually thinking for themselves as opposed to copying the beliefs of the community? I would assume that the standards we use to judge others should also apply when judging ourselves.
On the other hand, what you're saying sounds reasonable too. After all, crackpots also disagree wit...
Thanks! I had only heard about the accidental discovery by two Bell employees of an excess signal which they could not explain, but now that you mention that it was in fact predicted, it's totally reasonable that the Bell employees simply did not know about the scientific prediction at the moment of their measurement. I should have read Wikipedia.
The probability of predicting something as strange as the background radiation given that the theory on which the prediction is based is fundamentally flawed seems rather low. Accordingly, I should update my ...
I've been following Alicorn's sequence on luminosity, that is, on getting to know ourselves better. I had lowered my estimate of my own rationality when she mentioned that we tend to think too highly of ourselves, but now I can bump my estimate back up. There is at least one belief which my tribe elevates to the rank of scientific fact, yet which I think is probably wrong: I do not believe in the Big Bang.
Of course, I don't believe the universe was created a few thousand years ago either. I don't have any plausible alternative hypothesis, I just think th...
There is no particular reason to assume that if the stars are moving away from each other right now, then they must always have done so. They could be expanding and contracting in a sort of sine wave, or something more complicated.
The key is there at the end of your quote. From the first set of observations (of relatively close galaxies), the simplest behavior that explained the observations was that everything was flying apart fast enough to overcome gravity. This predicted that when they had the technology to look at more distant galaxies, these too...
If the background radiation was a prediction of Big Bang theory, then I might have been convinced by this experimental evidence, but in fact the background radiation was discovered by accident. Only afterwards did the proponents of Big Bang theory retrofit it as a prediction of their model.
Not true; Alpher and Herman, working in Gamow's group, predicted the radiation, although they were off by a few kelvins.
...there is no particular reason to assume that if the stars are moving away from each other right now, then they must always have done so. They could be expanding and contrac
but now I can bump my estimate back up. There is at least one belief which my tribe elevates to the rank of scientific fact, yet which I think is probably wrong: I do not believe in the Big Bang.
I don't think we can reasonably elevate our estimate of our own rationality by observing that we disagree with the consensus of a respected community.
...Second, the background radiation which is said to be leftover stray photons from the big bang. If the background radiation was a prediction of Big Bang theory, then I might have been convinced by this experimenta
I'm rather happy with my Zs and omegas, but some of my dear ones aren't.
I expect that this sequence will allow me to give them more educated suggestions about how to fix themselves up, but based on past experience, they will most likely nod in agreement, yet go on using the dysfunctional coping method they've been using for years. This expected lack of result will, of course, not prevent me from eagerly awaiting the next instalment of your sequence.
This story sports an interesting variation on the mind projection fallacy anti-pattern. Instead of confusing intrinsic properties with those whose observation depends both on one's mind and on one's object of study, this variation confuses intrinsically correct conclusions with those whose validity depends both on the configuration of the world and on the correct interpretation of the evidence. In particular, one of the characters would like the inhabitants of the simulation to reconstruct our modern, "correct" scientific theories, even though said ...
Reading that Argunet was both open source and flawed, I decided to download the code (signals geekiness) in order to fix the problem you have encountered (signals generosity). I didn't have to change anything: it took me a while (signals perseverance) but I finally figured out how to move a box in Argunet (signals superior problem-solving abilities).
It turns out that the result of dragging a box depends on whether the box was selected prior to dragging it. If the box was selected, it is moved; otherwise, an arrow is created. So, to move a box, first select it by clicking on it.
You can't maximize truth.
I think it's fairly obvious that "maximizing truth" meant "maximizing the correlation between my beliefs and truth".
Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.
Truth is overrated. My prior was heavily biased toward truth, but then a brief and unpleasant encounter with nihilism caused me to lower my estimate.
And before you complain that this doesn't make any sense either, let me spell out that it is an estimate of the probability that the strategy "...
Ah, so that's why people downvoted my comment! Thanks for explaining. I thought it was only because I appeared to be confusing utilons with hedons.
Regarding the doublethink post, I agree that I couldn't rationally assign myself false but beneficial beliefs, and I feel silly for writing that I could. On the other hand, sometimes I want to believe in false but beneficial beliefs, and that's why I can't pretend to be an aspiring rationalist.
Uh-oh.
I... I don't think I do want to believe in things due to evidence. Not deep down inside.
When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth.
I am a huge fan of lesswrong, quoting it almost every day to increasingly annoyed friends and relatives, but I am not putting much of what I read there into practice, I must admit. I read it more for entertainment than enlightenment.
And I take notes, for those rare cases i...
Here's another set of downvotes I don't get (ETA: parent was at -2 when I arrived). Gelisam is just stating their personal experience, not in order to claim we must all do likewise, but as their own reaction to the debate.
I think this community would be ill served by a norm that makes it a punishable offense to ever admit one doesn't strive for truth as much as one ought.
As far as replies go:
I'd rather believe, quite simply, in whatever I need to believe in order to be happiest.
It's not so simple. If you're self-deceiving, you might be quite wrong ab...
Oh, so that's what Eliezer looks like! I had imagined him as a wise old man with long white hair and beard. Like Tellah the sage, in Final Fantasy IV.
Everybody seemed to work backward from a 50-50 assumption of AK's guilt, whereas komponisto went forward from the background probability.
I think I can see why. komponisto pretended to be a juror following the "innocent unless proven otherwise" mantra and updating on the evidence presented in the court. We, on the other hand, did what komponisto challenged us to do: figure out the answer to his riddle using the two websites he gave us. This being a riddle, not a court, we had no reason to favour one hypothesis over the other, hence the 50-50.
Th...
Amanda Knox is guilty: 75%, Raffaele Sollecito: 75%, Rudy Guede: 25%. I shall abbreviate future percentage triples as 75/75/25.
No knowledge of the case before reading this post. My prior is based on my assumption that the people running the trial know what they are doing, and on the fact that I imagined the trial was trying to show that the guilty parties were K+S instead of G.
Reading about G's DNA, which should be rather good evidence: switching to 50/50/75. I contemplated switching all the way to 25/25/75, but I figured there had to be some reas...
theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.
I happen to be studying conflicts in a completely different domain, in which I claim the solution is to ensure that the events produce identical results no matter in which order they are applied. I briefly wondered whether my result could be useful in other domains, and I thought of lesswrong: perhaps we should advocate update strategies which don't depend on the order in which the evidence is encountered.
And then your post came up! Nice timing.
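As a toy illustration of what an order-independent update strategy can look like (my own example, not the system from my domain): if beliefs are represented as log-odds, updating on independent evidence is just addition of a log-likelihood ratio, and addition commutes.

```haskell
import Data.List (permutations)

-- Toy illustration (my own, not the actual system under study):
-- represent a belief as log-odds, so that updating on a piece of
-- independent evidence is just adding its log-likelihood ratio.
-- Rational keeps the arithmetic exact, so the claim below is literal.
type Belief = Rational

update :: Belief -> Rational -> Belief
update belief logLikelihoodRatio = belief + logLikelihoodRatio

-- Because addition commutes and associates, every ordering of the
-- same evidence yields the same posterior:
orderIndependent :: Belief -> [Rational] -> Bool
orderIndependent prior evidence =
    all (== head posteriors) posteriors
  where
    posteriors = map (foldl update prior) (permutations evidence)
```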
The reason we shouldn't update on the "room color" evidence has nothing to do with the fact that it constitutes anthropic evidence. The reason we shouldn't update is that we're told, albeit indirectly, that we shouldn't update (because if we do then some of our copies will update differently and we will be penalized for our disagreement).
In the real world, there is no incentive for all the copies of ourselves in all universes to agree, so it's all right to update on anthropic evidence.
I agree! You are right, the formula "A → □ A" does not mean that all true statements are provable. It means that all "true" statements are "provable" :)
Discussing the math of provability is tricky, because there are many notions of truth and many notions of provability. There are true-in-the-real-world and provable-in-the-real-world, the notions we are trying to model. True-in-the-real-world does not imply provable-in-the-real-world, so ideally, we would...
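For reference, the standard provability logic GL can be summarized as follows (this is from the literature, not something established in this thread):

⊢ A
------   (necessitation: from a theorem A, infer □ A)
⊢ □ A

□ (A → B) → (□ A → □ B)   (axiom K)

□ (□ A → A) → □ A   (Löb's axiom)

Note that neither (A → □ A) nor (□ A → A) is a theorem of GL. Indeed, by Löb's theorem, adding (□ A → A) as an axiom schema would make every formula provable.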