All of gelisam's Comments + Replies

gelisam
40

A → □ A doesn't mean "all true statements are provable" [...]

I agree! You are right, the formula "A → □ A" does not mean that all true statements are provable. It means that all "true" statements are "provable" :)

Discussing the math of provability is tricky, because there are many notions of truth and many notions of provability. There are true-in-the-real-world and provable-in-the-real-world, the notions we are trying to model. True-in-the-real-world does not imply provable-in-the-real-world, so ideally, we would... (read more)

gelisam
90

When you make a data declaration like this, you are essentially postulating (A → □ A) and (□ A → A) for any formula A. This removes the distinction between A (the formula A is "true") and □ A (the formula A is "provable"), meaning you are not working inside the modal logic you think you are working in. We made the same mistake in the /r/haskell thread at first.

It's an easy mistake to make, because the first rule of inference for provability logic is

⊢ A
------
⊢ □ A

So it looks like (A → □ A) should be a theorem, but notice... (read more)
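To make the distinction concrete, here is a minimal Agda sketch of a deep embedding. The names and the selection of rules are mine, purely for illustration (only modus ponens, necessitation, and the K axiom; Löb's axiom is omitted), and this is not the actual code from the /r/haskell thread. It assumes the Agda standard library.

open import Data.Nat using (ℕ)

-- object language: formulas of the modal logic, as ordinary data
data Formula : Set where
  var : ℕ → Formula
  _⇒_ : Formula → Formula → Formula
  □_  : Formula → Formula

-- "provable" is a judgement about formulas, kept separate from
-- Agda's own function space; nec consumes a finished theorem,
-- not a hypothesis, so Theorem (A ⇒ □ A) is not derivable
data Theorem : Formula → Set where
  mp   : ∀ {A B} → Theorem (A ⇒ B) → Theorem A → Theorem B
  nec  : ∀ {A} → Theorem A → Theorem (□ A)
  dist : ∀ {A B} → Theorem ((□ (A ⇒ B)) ⇒ ((□ A) ⇒ (□ B)))

Under this encoding, nec can only be applied once you already hold a complete Theorem A; there is no way to smuggle a hypothetical A into the object language, which is exactly why the inference rule does not internalize to Theorem (A ⇒ □ A).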

3gallabytes
I'm not really seeing the importance of the separate construct Theorem (A → B) vs (A → B). It seems to get manipulated in the exact same way in the code above. Is there some extra capability of functions that Theorem + postulates doesn't include? Also, I think you're misunderstanding what a function A → B means in Agda. A → □ A doesn't mean "all true statements are provable", it means "all statements we can construct a proof term for are provable" - you don't get an element of type A to apply that to without a proof of A, so it's actually a pretty trivial implication. I'm actually tempted to say that the notion of provability is actually wholly unnecessary here because of the Curry-Howard Correspondence, and that the real issue is the construction of the fixpoint
gelisam
40

I understand how that might have happened. Now that I am no longer a heroic volunteer saving my beloved website maiden, but just a potential contributor to an open source project, my motivation has dropped.

It is a strange inversion of incentives. The issue list and instructions both make it easier for me to contribute, but because they reveal that the project is well organized, they also demotivate me: a well-organized project makes me feel like it doesn't need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.

gelisam
30

After finding the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".

The path to becoming a volunteer contributor is now very clear.

4AdeleneDawner
Getting someone to sort a list, even on an ongoing basis, is not functionally useful if there's nobody to take action on the sorted list.
gelisam
150

Are you kidding? Sign me up as a volunteer polyglot programmer, then!

Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.

yli
130

Maybe it's just that volunteers who will actually do any work are hard to find. Related.

Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.

7Risto_Saarelma
The site is open source, you should be able to just write a patch and submit it.
gelisam
00

I'm pretty sure I would come up with a reason to continue behaving as I do today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?

gelisam
00

Thanks for the suggestion. This is not, however, what I was looking for.

Cached Thoughts explains that hearing a phrase might be enough for our brain to remember it as true, while The Genetic Fallacy warns that the original cause of a belief is not as important as the sum total of evidence for or against that belief.

I am not, however, looking for evidence that our past taints our beliefs. I have come to realize that finding the historical cause of a thought is a good first step towards getting rid of an unwanted thought, and I wanted to know whether this strate... (read more)

gelisam
00

search for the historical causes of your thoughts, rather than their justifications.

Is there a standard name or LW article on the subject? I first stumbled upon the importance of that skill here, on this site, and I wish I knew more about it than just that one personal anecdote.

0beoShaffer
The whole Seeing with Fresh Eyes subsequence is relevant; Cached Thoughts and The Genetic Fallacy are probably the most relevant.
gelisam
80

Great case study, in that studying my own reaction to your article has taught me a lot about my own decision making. And my conclusion is that reading a rationalist blog isn't sufficient to become rational!

I am thin, despite having very bad eating habits (according to conventional dietary wisdom). I had not heard of Taubes before. Specifically, I have never considered that conventional dietary wisdom could be incorrect; people say that I eat unhealthily, and I have simply taken their word for it. The fact that I continue to eat unhealthily has more to do ... (read more)

It is rational to update by a lot in response to a small amount of evidence if that evidence brings along a possibility you hadn't considered before, that possibility has a high prior probability, and you didn't have much evidence to begin with.
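A quick worked example of that kind of update, with numbers invented purely for illustration: suppose the newly considered hypothesis gets a prior of 0.3 once it is on the table, and the small piece of evidence is 4 times as likely if the hypothesis is true as if it is false. Then

posterior odds = prior odds × likelihood ratio
               = (0.3 / 0.7) × 4
               ≈ 1.71,  i.e. P(hypothesis | evidence) ≈ 0.63

A single modest observation moves you from 30% to 63%, precisely because the hypothesis started with a high prior and there was little accumulated evidence competing against it.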

gelisam
00

Could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.

1Matt_Simpson
Communism is one utopia that ended in disaster; see Rummel's Death by Government.
1xamdam
I recommend reading The Blank Slate to get a good perspective on the Utopian issues; the examples (I was born in the USSR) are trivial to come by, but the book will give you a mental framework for dealing with the issues.
2[anonymous]
Personal stories, from a friend, so no, there's no place to link them. Well-meaning liberals have either hurt, or failed to help, him and people close to him.
gelisam
60

I started to believe in the Big Bang here. I was convinced by the evidence, but as this comment indicates, not by the strongest evidence I was given; rather, it was necessary to contradict the specific reasoning I used to disbelieve the Big Bang in the first place.

Is this typical? I think it would be very helpful if, in addition to stating which opinion you have changed, you stated whether the evidence convinced you because it was strong or because it broke the chain of thought which led to your pre-change opinion.

gelisam
100

I often get the feeling that Alicorn's posts could use more evidence. However, given her status here, I take the very fact that she recommends something as evidence that she has herself encountered good evidence that the recommendation works; you know, Aumann agreement and all that.

Besides, even though it would be nice to see which evidence she has encountered, I know that I wouldn't bother to read the research if she linked to it. Intellectually, I trust Alicorn's conclusions. Therefore, I wish to believe in her conclusions; you know, the Litany of Tarski and ... (read more)

gelisam
50

I voted for the karma-coward option because people are different, so having more options is good. But since I am new myself (only 16 comments, 43 karma points), you might be more interested in the fact that I would not use these options if they were available.

I find it very gratifying to receive karma points. It motivates me to write more comments. If I had been granted a grace period in which my comments did not receive karma points, I might have posted even less. Even the downvotes are a motivator: not to post more, but to put in more effort the next time I have something to say.... (read more)

gelisam
100

Sure, let's try.

okay, writing down my thoughts. seems easy enough. reminds me of this exercise where we needed to write down our thoughts without thinking about it and i was mad because I kept thinking private thoughts and writing "can't write this" but in a very pleasing way and i liked being liked and that's a pattern which characterizes me well and i stopped- why? to catch my breath. not to think about the next sentence? are you sure? I put a lot of importance on writing down clearly and smartly I want readers to feel how smart I am, I want

... (read more)
gelisam
340

You win. I did not realize that we knew that galaxies have been flying apart for billions and billions of years, as opposed to just right now. If something has been going on for so long, I agree that the simplest explanation is that it has always been going on, and this is precisely the conclusion which I thought popular science books took for granted.

Your other arguments only hammer the nail deeper, of course. But I notice that they have a much smaller impact on my unofficial beliefs, even though they should have a bigger impact. I mean, the fact that th... (read more)

I prefer the meme where you've just won by learning something new; you now know more than most people about the justifications for Big Bang cosmology, in addition to (going meta) the sort of standards for evidence in physics, and (most meta and most importantly) how your own mind works when dealing with counterintuitive claims. I won too, because I had to look up (for the first time) some claims I'd taken for granted in order to respond adequately to your critique.

I take this as an opportunity to improve the art of rationality

Good idea! It's especial... (read more)

gelisam
40

I don't think we can reasonably elevate our estimate of our own rationality by observing that we disagree with the consensus of a respected community.

But isn't Eliezer suggesting, in this very post, that we should use uncommon justified beliefs as an indicator that people are actually thinking for themselves as opposed to copying the beliefs of the community? I would assume that the standards we use to judge others should also apply when judging ourselves.

On the other hand, what you're saying sounds reasonable too. After all, crackpots also disagree wit... (read more)

6thezeus18
"But isn't Eliezer suggesting, in this very post, that we should use uncommon justified beliefs as an indicator that people are actually thinking for themselves as opposed to copying the beliefs of the community? I would assume that the standards we use to judge others should also apply when judging ourselves. On the other hand, what you're saying sounds reasonable too. After all, crackpots also disagree with the consensus of a respected community." Eliezer didn't say that we should use "disagreeing with the consensus of a respected community" as an indicator of rationality. He said that we should use disagreeing with the consensus of one's own community as an indicator of rationality.
gelisam
90

Thanks! I had only heard about the accidental discovery, by two Bell employees, of an excess measurement which they could not explain. Now that you mention that it was in fact predicted, it seems totally reasonable that the Bell employees simply did not know about the scientific prediction at the moment of their measurement. I should have read Wikipedia.

The probability of predicting something as strange as the background radiation given that the theory on which the prediction is based is fundamentally flawed seems rather low. Accordingly, I should update my ... (read more)

gelisam
80

I've been following Alicorn's sequence on luminosity, that is, on getting to know ourselves better. I had lowered my estimate of my own rationality when she mentioned that we tend to think too highly of ourselves, but now I can bump my estimate back up. There is at least one belief which my tribe elevates to the rank of scientific fact, yet which I think is probably wrong: I do not believe in the Big Bang.

Of course, I don't believe the universe was created a few thousand years ago either. I don't have any plausible alternative hypothesis, I just think th... (read more)

There is no particular reason to assume that if the stars are moving away from each other right now, then they must always have done so. They could be expanding and contracting in a sort of sine wave, or something more complicated.

The key is there at the end of your quote. From the first set of observations (of relatively close galaxies), the simplest behavior that explained the observations was that everything was flying apart fast enough to overcome gravity. This predicted that when they had the technology to look at more distant galaxies, these too... (read more)

simplicio
120

If the background radiation was a prediction of Big Bang theory, then I might have been convinced by this experimental evidence, but in fact the background radiation was discovered by accident. Only afterwards did the proponents of Big Bang theory retrofit it as a prediction of their model.

Not true; Alpher & Gamow predicted the radiation, although they were off by a few kelvins.

there is no particular reason to assume that if the stars are moving away from each other right now, then they must always have done so. They could be expanding and contrac

... (read more)
wedrifid
130

but now I can bump my estimate back up. There is at least one belief which my tribe elevates to the rank of scientific fact, yet which I think is probably wrong: I do not believe in the Big Bang.

I don't think we can reasonably elevate our estimate of our own rationality by observing that we disagree with the consensus of a respected community.

Second, the background radiation which is said to be leftover stray photons from the big bang. If the background radiation was a prediction of Big Bang theory, then I might have been convinced by this experimenta

... (read more)
gelisam
20

I'm rather happy with my Zs and omegas, but some of my dear ones aren't.

I expect that this sequence will allow me to give them more educated suggestions about how to fix themselves up, but based on past experience, they will most likely nod in agreement, yet go on using the dysfunctional coping method they've been using for years. This expected lack of result will, of course, not prevent me from eagerly awaiting the next instalment of your sequence.

1Nisan
Right. Sometimes well-meant but unsolicited suggestions don't do anyone any good.
gelisam
10

This story sports an interesting variation on the mind projection fallacy anti-pattern. Instead of confusing intrinsic properties with those whose observation depends both on one's mind and one's object of study, this variation confuses intrinsically correct conclusions with those whose validity depends both on the configuration of the world and on the correct interpretation of the evidence. In particular, one of the characters would like the inhabitants of the simulation to reconstruct our modern, "correct" scientific theories, even though said ... (read more)

gelisam
90

Reading that Argunet was both open source and flawed, I decided to download the code (signals geekiness) in order to fix the problem you have encountered (signals generosity). I didn't have to change anything: it took me a while (signals perseverance) but I finally figured out how to move a box in Argunet (signals superior problem-solving abilities).

It turns out that the result of dragging a box depends on whether the box was selected prior to dragging it. If the box was selected, it is moved; otherwise, an arrow is created. So, before you move a box, select it by clicking on it.

0Morendil
Heh. Thanks a bunch for your efforts, that's useful. There are some conclusions to draw from this incident about the attributes that make for good software design, but they might be a topic for another thread (or another blog).
gelisam
80

You can't maximize truth.

I think it's fairly obvious that "maximizing truth" meant "maximizing the correlation between my beliefs and truth".

Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.

Truth is overrated. My prior was heavily biased toward truth, but then a brief and unpleasant encounter with nihilism caused me to lower my estimate.

And before you complain that this doesn't make any sense either, let me spell out that this is an estimate of the probability that the strategy "... (read more)

gelisam
10

Ah, so that's why people downvoted my comment! Thanks for explaining. I thought it was only because I appeared to be confusing utilons with hedons.

Regarding the doublethink post, I agree that I couldn't rationally assign myself false but beneficial beliefs, and I feel silly for writing that I could. On the other hand, sometimes I want to believe in false but beneficial beliefs, and that's why I can't pretend to be an aspiring rationalist.

gelisam
50

Uh-oh.

I... I don't think I do want to believe in things due to evidence. Not deep down inside.

When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth.

I am a huge fan of lesswrong, quoting it almost every day to increasingly annoyed friends and relatives, but I am not putting much of what I read there into practice, I must admit. I read it more for entertainment than enlightenment.

And I take notes, for those rare cases i... (read more)

2Wilka
Have you ever had the experience of learning something true that you would rather not have learned? The only type of examples I can think of here (off the top of my head) would be finding out you had an unfaithful lover, or that you were really adopted. But in both cases, it seems like the 'unhappiness' you get from learning it would pass and you'd be happy that you found out in the long run. I've heard people say similar things about losing the belief in God - because it could lead to losing (or at least drifting away from) people you hold close, if their belief in God had been an important thing in their relationship to you.

Here's another set of downvotes I don't get (ETA: parent was at -2 when I arrived). Gelisam is just stating their personal experience, not in order to claim we must all do likewise, but as their own reaction to the debate.

I think this community would be ill served by a norm that makes it a punishable offense to ever admit one doesn't strive for truth as much as one ought.

As far as replies go:

I'd rather believe, quite simply, in whatever I need to believe in order to be happiest.

It's not so simple. If you're self-deceiving, you might be quite wrong ab... (read more)

4Nanani
"Maximizing truth" doesn't make any sense. You can't maximize truth. You can improve your knowlege of the truth, but the truth itself is independent of your brain state. In any case, when is untruth more instrumental to your utility function than truth? Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.
gelisam
60

Oh, so that's what Eliezer looks like! I had imagined him as a wise old man with long white hair and beard. Like Tellah the sage, in Final Fantasy IV.

3khafra
I believe Steve Rayhawk is SIAI's designated "Tellah the Sage."
2Corey_Newsome
Speaking of appearances, Eliezer makes me feel self-conscious about how un-white my teeth are.
8Eliezer Yudkowsky
I'll have you know that I work hard at not going down that road.
gelisam
00

Everybody seemed to be going backward from assuming AK's guilt at 50-50, whereas komponisto went forward from the background probability.

I think I can see why. komponisto pretended to be a juror following the "innocent unless proven otherwise" mantra and updating on the evidence presented in the court. We, on the other hand, did what komponisto challenged us to do: figure out the answer to his riddle using the two websites he gave us. This being a riddle, not a court, we had no reason to favour one hypothesis over the other, hence the 50-50.

Th... (read more)

gelisam
120

priors

Amanda Knox is guilty: 75%, Raffaele Sollecito: 75%, Rudy Guede: 25%. I shall abbreviate future percentage triples as 75/75/25.

No knowledge of the case before reading this post. My prior is due to my assumption that the people running the trial know what they are doing, and to the fact that I imagined the trial was trying to show that the guilty parties were K+S instead of G.

acquiring information

Reading about G's DNA, which should be rather good evidence: switching to 50/50/75. I contemplated switching all the way to 25/25/75, but I figured there had to be some reas... (read more)

3gwern
I like the rapist theory. It's not like Amanda was the only promiscuous American collegian around - birds of a feather... And who would rob an exchange student? No, a consensual meeting gone bad sounds like a far more common scenario to me.
gelisam
00

theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.

I happen to be studying conflicts in a completely different domain, in which I claim the solution is to ensure that the events produce identical results no matter in which order they are applied. I briefly wondered whether my result could be useful in other domains, and I thought of lesswrong: perhaps we should advocate update strategies which don't depend on the order in which the evidence is encountered.

And then your post came up! Nice timing.
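For what it's worth, an ideal Bayesian update already has this order-independence, at least for evidence that is conditionally independent given the hypothesis (a standard identity, not a result from my own domain):

P(H | E1, E2) ∝ P(H) × P(E1 | H) × P(E2 | H)

Since multiplication commutes, updating on E1 and then E2 lands on the same posterior as updating on E2 and then E1; any residual disagreement between people who saw the same evidence in different orders must come from elsewhere, such as differing interpretations.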

0Aurini
This seems to open up the idea of 'logic games' à la confirmation bias. E.g., likelihood of:
- US invading China
- China testing nukes in the Pacific
- US invading because of China testing nukes in the Pacific
In fact, there's a lot of stuff from this site that could be compiled into a "Puzzles for People with High IQs" book, except if somebody from here made the book it would be more useful, and less arbitrary in what it considers to be the 'right' answers.
gelisam
10

The reason we shouldn't update on the "room color" evidence has nothing to do with the fact that it constitutes anthropic evidence. The reason we shouldn't update is that we're told, albeit indirectly, that we shouldn't update (because if we do then some of our copies will update differently and we will be penalized for our disagreement).

In the real world, there is no incentive for all the copies of ourselves in all universes to agree, so it's all right to update on anthropic evidence.