I declare this Open Thread open for discussion of Less Wrong topics that have not appeared in recent posts.
In light of how important it is (especially in terms of all the decision theory posts!) to know about Judea Pearl's causality ideas, I thought I might share something that helped me get up to speed on it: this lecture on Reasoning with Cause and Effect.
For me, it has the right combination of detail and brevity. Other books and papers on Pearlean causality were either too pedantic or too vague about the details, but I learned a lot from the slides, which come with good notes. Anyone know what proceedings papers it refers to though?
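As a quick illustration of the lecture's central distinction between seeing and doing, here is a toy simulation (my own sketch, not something from the slides): a hidden confounder U drives both X and Y, so conditioning on X = 1 gives a different answer than intervening to set X = 1.

```python
import random

# Toy structural causal model with a confounder U:
#   U -> X, U -> Y, and X -> Y.
# Observational P(Y=1 | X=1) is inflated by U, while the interventional
# quantity P(Y=1 | do(X=1)) cuts the U -> X arrow and leaves U at its base rate.

def sample(do_x=None):
    u = random.random() < 0.5
    x = do_x if do_x is not None else (random.random() < (0.8 if u else 0.2))
    y = random.random() < (0.9 if (x and u) else (0.5 if (x or u) else 0.1))
    return u, x, y

N = 100_000
obs = [sample() for _ in range(N)]
p_cond = sum(y for _, x, y in obs if x) / sum(1 for _, x, _ in obs if x)
p_do = sum(y for _, _, y in (sample(do_x=True) for _ in range(N))) / N

print(f"P(Y=1 | X=1)     ~ {p_cond:.2f}")   # about 0.82 with these parameters
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")     # about 0.70 with these parameters
```

The gap between those two numbers is exactly the sort of thing the do-calculus in the lecture is built to handle.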
Does anyone feel the occasional temptation to become religious?
I do, sometimes, and push it away each time. I doubt that I could really fall for that temptation - even if I tried, the illogicality of the whole thing would very likely prevent me from seriously believing even a part of it. And as more and more time passes, religious structures seem more and more ridiculous and contrived. Not that I would have believed it even possible for them to start feeling more contrived.
And yet... occasionally I remember the time back in my teens when I had some sort of a faith. I remember the feeling of ultimate safety it brought with it - the knowledge that no matter what happens, everything will turn out well in the end. It may be a good thing that I spend time worrying about existential risks and thinking about what I could do about them, but it certainly doesn't improve my mental health. The thought of returning to the mindset of a believer appeals on an emotional level, in the same way stressed adults might longingly remember the carefree days of childhood. But while you can't become a child again, becoming a believer is at least theoretically possible. And sometimes I do play around with the idea of what it'd be like to adopt a belief again.
Sean Carroll and Carl Zimmer are leaving Bloggingheads, mostly because it's started playing nice with creationists. Click their names to read their full explanations.
Since topics like atheism and morality often come up here, I would like to point people to the free online book Secular Wholeness by David Cortesi. He approaches the topic of religion by trying to determine what benefits it can provide (community, challenges to improve oneself, easier ethical decisions, etc.), then tries to describe how to achieve those same benefits without resorting to religion. It's not very heavy on detail but seems well sourced, with some good pointers on why people choose religions and what they get out of them.
ETA: This could have been a reply to the thread on Scientology, had I seen it before posting.
Scott Aaronson announced Worldview Manager, "a program that attempts to help users uncover hidden inconsistencies in their personal beliefs".
You can experiment with it here. The initial topics are Complexity Theory, Strong AI, Axiom of Choice, Quantum Computing, Libertarianism, Quantum Mechanics.
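For the curious, here is a guess at the sort of mechanism involved (a toy version of my own, not Aaronson's actual implementation): the user rates statements, and pairs of statements declared to be in tension get flagged when both are endorsed.

```python
# Toy consistency checker in the spirit of Worldview Manager (my own guess
# at the mechanism, not the real program).  The user rates statements from
# 0.0 (strongly disagree) to 1.0 (strongly agree); pairs declared to be in
# tension are flagged when both are endorsed.

ratings = {
    "The Axiom of Choice should be accepted": 0.9,
    "Every set of real numbers is Lebesgue measurable": 0.8,
}

tensions = [
    # The Axiom of Choice implies the existence of non-measurable sets,
    # so endorsing both of these is a hidden inconsistency.
    ("The Axiom of Choice should be accepted",
     "Every set of real numbers is Lebesgue measurable"),
]

for a, b in tensions:
    if ratings.get(a, 0.5) > 0.6 and ratings.get(b, 0.5) > 0.6:
        print("Possible inconsistency between:")
        print(f"  - {a}")
        print(f"  - {b}")
```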
Is there an easy way to access someone's first comment without going to their comment page and clicking "next" zillions of times?
You're probably wondering why I would want to do that. I have been motivated occasionally to read someone's comments from beginning to end, and today I found myself wondering what my first comment was about.
My mind frequently returns to and develops the idea that sometime in the future, a friendly artificial intelligence is going to read Less Wrong. It uploads all the threads and is simultaneously able to (a) ...
Readers of Less Wrong may be interested in this New Scientist article by Noel Sharkey, titled Why AI is a dangerous dream, in which he attacks Kurzweil's and Moravec's "fairy tale" predictions and questions whether intelligence is computational ("[the mind] could be a physical system that cannot be recreated by a computer").
[edit] I thought this would go without saying, but I suspect the downvotes speak otherwise, so: I strongly disagree with the content of this article. I still consider it interesting because it is useful to be aware o...
Just found this note in Shalizi's notebooks which casts an interesting shadow on the Solomonoff prior:
...The technical results say that a classification rule is simple if it has a short description, measured in bits. (That is, we are in minimum description length land, or very close to it.) The shorter the description, the tighter the bound on the generalization error. I am happy to agree that this is a reasonable (if language-dependent) way of defining "simplicity" for classifier rules. However, so far as I can tell, this really isn't what makes
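For reference, the sort of bound being alluded to (my reconstruction of the standard Occam/MDL result, not Shalizi's exact statement): if a classifier h has a description of length |h| bits and is consistent with m i.i.d. training examples, then with probability at least 1 - delta

```latex
\operatorname{err}(h) \;\le\; \frac{|h|\ln 2 + \ln(1/\delta)}{m},
```

and in the agnostic case the empirical error picks up an extra term of order sqrt((|h| ln 2 + ln(1/delta)) / 2m). Shorter description, tighter bound; that is the (language-dependent) sense of "simplicity" Shalizi grants before raising his objection.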
Any Santa Fe or Albuquerque lesswrong-ers out there, who might want to chat for an hour? I'll be in Santa Fe for a conference from 9/13 to 9/17, and am flying in and out of Albuquerque, and will have some free time Sunday 9/13.
I'm going to use this open thread to once again suggest the idea of a Less Wrong video game.
( Here's a link to the post I made last month about it )
After some more thought, I realized that making a game with fancy graphics and complex gameplay would probably not be a good idea for a first project to try.
A better idea would be a simple text-based game you play in your browser, probably running on either PHP or Python.
This might not have as much fun appeal as a traditional video game, since it would probably look like a university exam, but it could still b...
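To make the "university exam" flavour concrete, here is a bare-bones sketch of the kind of thing I have in mind, written as a command-line calibration quiz in Python (the questions and the scoring rule are placeholders of mine; a browser version would just wrap the same logic in PHP or a Python web framework):

```python
import math

# A minimal calibration quiz: the player assigns a probability to each
# statement and is scored with a logarithmic scoring rule.
QUESTIONS = [
    ("A litre of water has a mass of about one kilogram.", True),
    ("The Sun is closer to the Earth than the Moon is.", False),
]

def log_score(p, truth):
    # Log of the probability assigned to what was actually the case;
    # closer to 0 is better, more negative is worse.
    return math.log(p if truth else 1.0 - p)

total = 0.0
for statement, truth in QUESTIONS:
    raw = input(f"{statement}\nProbability that this is TRUE (0.01-0.99): ")
    p = min(0.99, max(0.01, float(raw)))
    total += log_score(p, truth)
    print(f"  Answer: {truth}.  Running score: {total:.3f}\n")

print(f"Final log score: {total:.3f}")
```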
Out of pure curiosity, what probability do you assign to Scientology (or some other such group) being useful? Not the Xenu part, but is it possible that they've discovered some techniques to make people happier, more successful, etc.?
We already have some limited evidence that conventionally religious people are happier, and conventional religions are quite weak.
I'm curious about how Less Wrong readers would answer these questions:
What is your probability estimate for some form of the simulation hypothesis being true?
If you received evidence that changed your estimate to be much higher (or lower), what would you do differently in your life?
So, I was thinking about how people conclude stuff. We tend to think of ourselves as having about two levels of conclusion: the "rational" level, which is the level we identify with, considering its conclusions to be our conclusions, and the "emotional" level, which is the one that determines our behavior. (Akrasia is disagreement between the two levels.)
Now, there doesn't seem to be any obvious rule for what becomes a rational level conclusion. If you go outside and wonder at nature, have you proven that God exists? For some people, it...
In reading the Singularity Institute's research goals, and the ruminations of Yudkowsky, Wei Dai, Nesov et al. in postings here, the approach to developing friendly AI that stands out the most, and that from my perspective seems to have simply always been assumed, is exclusively logic-based, in the vein of John McCarthy.
I am wondering how the decision was made to focus SIAI's research on the pure logic side, rather than, for example, building a synthetic consciousness that uses the brain as a model?
To be sure, nearly all AI approaches overlap at some poi...
I have at least one other legacy identity here, dating from the old OB days, "mitchell_porter2". Is there some way to merge us?
To quote from http://www.sens.org/files/sens/FHTI07-deGrey.pdf:
"But wait – who’s to say that progress will remain “only” exponential? Might not progress exceed this rate, following an inverse polynomial curve (like gravity) or even an inverse exponential curve? I, for one, don’t see why it shouldn’t. If we consider specifically the means whereby the Singularity is most widely expected to occur, namely the development of computers with the capacity for recursive improvement of their own workings,4 I can see no argument why the rate at which such a comp...
Came across this: What We Can Learn About Pricing From Menu Engineers. Probably nothing new to you/us. Summary: Judgements of acceptable prices are strongly influenced by other seen prices.
I have a couple of intuitions about the structure of human preferences over a large universe.
The first intuition is that your preferences over one part of the universe (or universe-history) should be independent of what happens in another part of the universe, if the "distance" between the two parts is great enough. In other words, if you prefer A happening to B happening in one part of the universe, this preference shouldn't be reversed no matter what you learn about a distant part of the universe. ("Distance" might be spatial, tempora...
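One way to make the first intuition precise (my own gloss, stated for a local part with outcomes a, a' and a distant part with outcomes b, b'): the local preference should not depend on the distant outcome, i.e.

```latex
\forall\, a, a', b, b':\qquad (a, b) \succeq (a', b) \;\iff\; (a, b') \succeq (a', b').
```

This holds in particular whenever the utility function is additively separable, U(a, b) = u1(a) + u2(b), since then U(a, b) - U(a', b) = u1(a) - u1(a') does not depend on b at all.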
Several times recently I asked for simple clarifications about a comment that replied to something I wrote, and had my questions ignored. (See here, here, and here.) And I don't know why. Did I violate some rule of etiquette, or what? How can I rephrase my questions to get a better response rate?
ETA: Here are the questions, in case people don't want to search through the comments to find them:
Has Eliezer written anything good about the evolution of morality? It should probably go on a wiki page titled "Evolution of morality".
ETA: While I'm at it, how about reasons people are religious?
Concerning Newcomb’s Problem: I understand that the dominant position among the regular posters of this site is that you should one-box. This is a position I question.
Suppose Charlie takes on the role of Omega and presents you with Newcomb’s Problem. So far as it is pertinent to the problem, Charlie is identical to Omega, with the notable exception that his prediction is only 55% likely to be accurate. Should you one-box or two-box in this case?
If you one-box then the expected utility is (.55 × $1,000,000) = $550,000, and if you two-box then it is (.45 × $1,001,000) = $450,450, so it seems you should still one-box even when the prediction is not particularly accurate. Thoughts?
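A quick sanity check in code (my own sketch; it also counts the $1,000 from the transparent box in the two-box case, which the figure above drops but which does not change the conclusion):

```python
# Expected payoff in Newcomb's Problem with a predictor of accuracy p.
# Assumptions for illustration: the predictor is correct with probability p
# whichever way you choose; the opaque box holds $1,000,000 iff one-boxing
# was predicted; the transparent box always holds $1,000.

def one_box(p):
    # You get the $1,000,000 exactly when the predictor was right.
    return p * 1_000_000

def two_box(p):
    # Correct prediction -> $1,000; incorrect prediction -> $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.55, 0.5005, 0.50):
    print(f"p = {p}: one-box ${one_box(p):,.0f}, two-box ${two_box(p):,.0f}")
```

On these assumptions the break-even accuracy is about 0.5005, so one-boxing comes out ahead as soon as the predictor is even slightly better than chance.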
Does this http://www.sciencedaily.com/releases/2009/08/090831130751.htm suffer from the same problem as the following?
12 healthy male volunteers were chosen to study what the "just right" amount of beer is for driving a car. These men consumed beer at doses of 2 bottles, 4 bottles, 8 bottles, and 16 bottles per day, for two weeks at each dose, with beer being the only alcohol in their diet. Surely the 2-bottle dose would win, but it definitely isn't the "just right" amount.
Am I missing something in the ScienceDaily article, or did they really arrive at that conclusion of 200 mg from that kind of test?
On the consolidation of dust specks and the preservation of utilitarian conclusions:
Suppose that you were going to live for at least 3^^^3 seconds. (If you claim that you cannot usefully imagine a lifespan of 3^^^3 seconds or greater, I must insist that you concede that you also cannot usefully imagine a group of 3^^^3 persons. After all, persons are a good deal more complicated than seconds, and you have experienced more seconds than people.)
Suppose that while you are contemplating how to spend your 3^^^3-plus seconds, you are presented with a binary choice: you may spend the next 50 years of this period of time being tortured, or you may spend the next 3^^^3 seconds with a speck of dust in your eye that you cannot get rid of until that time period is up. (Should you succeed in uploading or similar over the course of the next 3^^^3 seconds, the sensation of the speck in the eye will accompany you in the absence of a physical eye until you have waited it out). Assume that after the conclusion of the torture (should you select it), you will be in fine physical health to go on with the rest of your lengthy life, although no guarantees are made for your sanity. Assume that the speck of dust does not impede your vision, and that you will not claw out your eye trying to be rid of it at any time; likewise, no guarantees are made for your sanity.
What selection would you make?
If this choice were actually presented to someone, my guess is that he would first choose the speck, and then after an extremely long time (i.e. much, much longer than 50 years, giving him a sense of proportion) he would undergo a preference reversal and ask for the torture.
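To make the aggregative arithmetic behind the original puzzle explicit (my own gloss, assuming disutility simply adds across seconds): writing d for the per-second disutility of the speck and T for the per-second disutility of torture, the straight utilitarian comparison is

```latex
\text{choose the torture} \iff 3\uparrow\uparrow\uparrow 3 \cdot d \;>\; \underbrace{50 \cdot 365.25 \cdot 86400}_{\approx\,1.6\times 10^{9}\ \text{seconds}} \cdot T,
```

which holds for any finite ratio T/d, since 3^^^3 dwarfs every such ratio. The interesting question is whether the felt preference tracks that arithmetic or, as suggested above, only reverses once the sheer length of the speck term becomes palpable.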