I declare this Open Thread open for discussion of Less Wrong topics that have not appeared in recent posts.
In light of how important it is (especially in terms of all the decision theory posts!) to know about Judea Pearl's causality ideas, I thought I might share something that helped me get up to speed on them: this lecture on Reasoning with Cause and Effect.
For me, it has the right combination of detail and brevity. Other books and papers on Pearlean causality were either too pedantic or too vague about the details, but I learned a lot from the slides, which come with good notes. Anyone know what proceedings papers it refers to though?
Does anyone feel the occasional temptation to become religious?
I do, sometimes, and push it away each time. I doubt that I could really fall for that temptation - even if I tried to, the illogicality of the whole thing would very likely prevent me from seriously believing even a part of it. And as more and more time passes, religious structures begin to seem more and more ridiculous and contrived. Not that I would have believed it even possible for them to start feeling more contrived.
And yet... occasionally I remember the time back in my teens when I had some sort of a faith. I remember the feeling of ultimate safety it brought with it - the knowledge that no matter what happens, everything will turn out well in the end. It might be a good thing that I spend time worrying about existential risks and thinking about what I could do about them, but it sure doesn't improve my mental health. The thought of returning to the mindset of a believer appeals on an emotional level, in the same way stressed adults might longingly remember the carefree days of childhood. But while you can't become a child again, becoming a believer is at least theoretically possible. And sometimes I do play around with the idea of what it would be like to adopt a belief again.
On the consolidation of dust specks and the preservation of utilitarian conclusions:
Suppose that you were going to live for at least 3^^^3 seconds. (If you claim that you cannot usefully imagine a lifespan of 3^^^3 seconds or greater, I must insist that you concede that you also cannot usefully imagine a group of 3^^^3 persons. After all, persons are a good deal more complicated than seconds, and you have experienced more seconds than people.)
Suppose that while you are contemplating how to spend your 3^^^3-plus seconds, you are presented with a binary ch...
Sean Carroll and Carl Zimmer are leaving Bloggingheads, mostly because it's started playing nice with creationists. Click their names to read their full explanations.
Since the topics of atheism, morality, and the like often come up here, I would like to point people to the free online book Secular Wholeness by David Cortesi. He approaches the topic of religion by trying to determine what benefits it can provide (community, challenges to improve oneself, easier ethical decisions, etc.), then tries to describe how to achieve those same benefits without resorting to religion. It's not very heavy on detail, but it seems well sourced, with some good pointers on why people choose religions and what they get out of them.
ETA: This could have been a reply to the thread on Scientology, had I seen it before posting.
Scott Aaronson announced Worldview Manager, "a program that attempts to help users uncover hidden inconsistencies in their personal beliefs".
You can experiment with it here. The initial topics are Complexity Theory, Strong AI, Axiom of Choice, Quantum Computing, Libertarianism, Quantum Mechanics.
Is there an easy way to access someone's first comment without going to their comment page and clicking "next" zillions of times?
You're probably wondering why I would want to do that. I have been motivated occasionally to read someone's comments from beginning to end, and today I found myself wondering what my first comment was about.
My mind frequently returns to and develops the idea that sometime in the future, a friendly artificial intelligence is going to read Less Wrong. It uploads all the threads and is simultaneously able to (a) ...
Readers of Less Wrong may be interested in this New Scientist article by Noel Sharkey, titled Why AI is a dangerous dream, in which he attacks Kurzweil's and Moravec's "fairy tale" predictions and questions whether intelligence is computational ("[the mind] could be a physical system that cannot be recreated by a computer").
[edit] I thought this would go without saying, but I suspect the downvotes speak otherwise, so: I strongly disagree with the content of this article. I still consider it interesting because it is useful to be aware o...
Just found this note in Shalizi's notebooks which casts an interesting shadow on the Solomonoff prior:
...The technical results say that a classification rule is simple if it has a short description, measured in bits. (That is, we are in minimum description length land, or very close to it.) The shorter the description, the tighter the bound on the generalization error. I am happy to agree that this is a reasonable (if language-dependent) way of defining "simplicity" for classifier rules. However, so far as I can tell, this really isn't what makes
Any Santa Fe or Albuquerque lesswrong-ers out there, who might want to chat for an hour? I'll be in Santa Fe for a conference from 9/13 to 9/17, and am flying in and out of Albuquerque, and will have some free time Sunday 9/13.
I'm going to use this open thread to once again suggest the idea of a Less Wrong video game.
( Here's a link to the post I made last month about it )
After some more thought, I realized that making a game with fancy graphics and complex gameplay would probably not be a good idea for a first project to try.
A better idea would be a simple text-based game you play in your browser, probably running on either PHP or Python.
This might not have as much fun appeal as a traditional video game, since it would probably look like a university exam, but it could still b...
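If it helps to make that concrete, here is a minimal sketch of what a single browser-served question could look like, using only Python's standard library. The question text, answer options, and URLs are made-up placeholders rather than part of any existing design; a real version would pull questions from a file or database.

```python
# A minimal, hypothetical sketch of a browser-based "spot the fallacy" question,
# using only Python's standard library. Run it and visit http://localhost:8000/.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

QUESTION = "All the successful people I know wake up early, so waking up early must cause success."
OPTIONS = {
    "a": "Correlation mistaken for causation",
    "b": "Ad hominem",
    "c": "No fallacy",
}
CORRECT = "a"

class QuizHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        answer = parse_qs(urlparse(self.path).query).get("answer", [None])[0]
        if answer is None:
            # No answer yet: show the question and one link per option.
            body = "<p>%s</p>" % QUESTION
            body += "".join(
                '<p><a href="/?answer=%s">%s</a></p>' % (key, text)
                for key, text in OPTIONS.items()
            )
        elif answer == CORRECT:
            body = '<p>Correct!</p><p><a href="/">Next question</a></p>'
        else:
            body = '<p>Not quite - read it again.</p><p><a href="/">Try again</a></p>'
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), QuizHandler).serve_forever()
```

Something this small already looks a lot like the "university exam" version described above, and it could later be swapped for a PHP or framework-based implementation without changing the basic idea.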
Out of pure curiosity, what probability do you assign to Scientology (or some other such group) being useful? Not the Xenu part, but is it possible that they've discovered some techniques to make people happier, more successful, etc.?
We already have some limited evidence that conventionally religious people are happier, and conventional religions are quite weak.
I'm curious about how Less Wrong readers would answer these questions:
What is your probability estimate for some form of the simulation hypothesis being true?
If you received evidence that changed your estimate to be much higher (or lower), what would you do differently in your life?
So, I was thinking about how people conclude stuff. We tend to think of ourselves as having about two levels of conclusion: the "rational" level, which is the level we identify with, considering its conclusions to be our conclusions, and the "emotional" level, which is the one that determines our behavior. (Akrasia is disagreement between the two levels.)
Now, there doesn't seem to be any obvious rule for what becomes a rational level conclusion. If you go outside and wonder at nature, have you proven that God exists? For some people, it...
In reading the Singularity Institute's research goals, and the ruminations of Yudkowsky, Wei Dai, Nesov et al. in postings here, the approach to developing friendly AI that stands out the most - and that, from my perspective, seems to have always been the chosen one - is exclusively logic-based, in the vein of John McCarthy.
I am wondering how the decision was made to focus SIAI's research on the pure logic side, rather than, for example, building a synthetic consciousness that uses the brain as a model?
To be sure, nearly all AI approaches overlap at some poi...
I have at least one other legacy identity here, dating from the old OB days, "mitchell_porter2". Is there some way to merge us?
To quote from http://www.sens.org/files/sens/FHTI07-deGrey.pdf:
"But wait – who’s to say that progress will remain “only” exponential? Might not progress exceed this rate, following an inverse polynomial curve (like gravity) or even an inverse exponential curve? I, for one, don’t see why it shouldn’t. If we consider specifically the means whereby the Singularity is most widely expected to occur, namely the development of computers with the capacity for recursive improvement of their own workings,4 I can see no argument why the rate at which such a comp...
Came across this: What We Can Learn About Pricing From Menu Engineers. Probably nothing new to you/us. Summary: judgements of acceptable prices are strongly influenced by the other prices people see alongside them.
I have a couple of intuitions about the structure of human preferences over a large universe.
The first intuition is that your preferences over one part of the universe (or universe-history) should be independent of what happens in another part of the universe, if the "distance" between the two parts is great enough. In other words, if you prefer A happening to B happening in one part of the universe, this preference shouldn't be reversed no matter what you learn about a distant part of the universe. ("Distance" might be spatial, tempora...
Several times recently I asked for simple clarifications about a comment that replied to something I wrote, and had my question ignored. (See here, here, and here.) And I don't know why. Did I violate some rule of etiquette, or what? How can I rephrase my questions to get a better response rate?
ETA: Here are the questions, in case people don't want to search through the comments to find them:
Has Eliezer written anything good about the evolution of morality? It should probably go on a wiki page titled "Evolution of morality".
ETA: While I'm at it, how about reasons people are religious?
Concerning Newcomb’s Problem I understand that the dominant position among the regular posters of this site is that you should one-box. This is a position I question.
Suppose Charlie takes on the role of Omega and presents you with Newcomb’s Problem. So far as it is pertinent to the problem, Charlie is identical to Omega, with the notable exception that his prediction is only 55% likely to be accurate. Should you one-box or two-box in this case?
If you one-box, the expected utility is 0.55 × $1,000,000 = $550,000; if you two-box, it is 0.45 × $1,001,000 + 0.55 × $1,000 = $451,000. So it seems you should still one-box even when the prediction is not particularly accurate. Thoughts?
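For anyone who wants to check the arithmetic, here is a quick sketch, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box):

```python
# Expected utility of one-boxing vs. two-boxing when the predictor is
# only 55% accurate. Payoffs are the standard Newcomb amounts.
p = 0.55  # probability that the prediction is correct

# One-box: you get $1,000,000 only if one-boxing was (correctly) predicted.
eu_one_box = p * 1_000_000

# Two-box: you get $1,001,000 if one-boxing was (wrongly) predicted,
# and only the $1,000 in the transparent box otherwise.
eu_two_box = (1 - p) * 1_001_000 + p * 1_000

print(f"one-box: ${eu_one_box:,.0f}")  # one-box: $550,000
print(f"two-box: ${eu_two_box:,.0f}")  # two-box: $451,000
```

On these payoffs the two expected utilities cross at roughly p ≈ 0.5005, so this kind of expected-value reasoning favors one-boxing whenever the predictor is even slightly better than chance.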
Does this http://www.sciencedaily.com/releases/2009/08/090831130751.htm suffer from the same problem as the following?
Twelve healthy male volunteers were chosen to study what the "just right" amount of beer for driving a car is. These men consumed doses of beer at 2 bottles, 4 bottles, 8 bottles, and 16 bottles per day, for two weeks at each dose, with beer being the only alcohol in their diet. Surely the 2-bottle dose would win, but it definitely isn't the "just right" amount.
Am I missing something in the ScienceDaily news item, or did they really arrive at that conclusion of 200 mg from that test?
Thanks for the link.
That LogicTutor site you linked to provides a good, basic introduction to a few concepts and fallacies. However, the practice problems just ask you to identify which fallacy is in the sentence they give you. They're missing the other half of the game, which is spotting the fallacy in a block of text that's deliberately designed to hide the fallacy. I'll keep looking in case someone has already made a game that contains this part.
One way to make the game more fun would be to have interesting text to find the fallacy in. Eliezer's short stories are a good example of this. Though for the purpose of the game, we would need just short segments of stories, which contain one clear example of a fallacy. Preferably one that's well-hidden, but obvious once you see it. Also, to keep players on their toes, we could include segments that don't actually contain a fallacy, and players would have the option of saying that there is no fallacy.
And as I mentioned before, another idea is to flesh out the stories even more, so that the game could be expanded into a mystery game, an adventure game, or an escape-the-room game. In order to continue, you would need to talk to people, some of whom give inaccurate information because they didn't notice a flaw in their own reasoning, and you would need to point out the flaw in their reasoning before they will give you the accurate information.
You would also have to choose your replies during the conversation, picking replies that don't introduce a new fallacy and send the conversation off in the wrong direction. Many of the possible responses would be questions about why the person believes specific things they just said. Maybe there could also be a feature where you could interrupt the person in the middle of what they're saying, to point out the problem. Optionally, the player could be scored based on how long they took and how many wrong paths they went down before finding the correct path.
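Purely as a sketch of how the segments and scoring could be represented (the names, example text, and point values here are hypothetical, not an existing design):

```python
# Hypothetical representation of story segments for the spot-the-fallacy game,
# including the "no fallacy present" case and a score that penalizes wrong
# paths and time taken.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    text: str               # short excerpt the player reads
    fallacy: Optional[str]  # None means the segment contains no fallacy
    explanation: str        # shown after the player answers

SEGMENTS = [
    Segment(
        text="Everyone at the conference agreed with the speaker, so the claim must be true.",
        fallacy="argument from popularity",
        explanation="Agreement among a self-selected audience is weak evidence.",
    ),
    Segment(
        text="The result held up in three independent replications.",
        fallacy=None,
        explanation="No fallacy here: replication is legitimate evidence.",
    ),
]

def score(correct: bool, wrong_paths: int, seconds_taken: float) -> int:
    """Reward a correct answer; subtract points for wrong paths and time."""
    if not correct:
        return 0
    return max(0, 100 - 10 * wrong_paths - int(seconds_taken))
```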
And if this still doesn't make the game fun, then there are ways to make people want to play games even if they aren't fun.
One way is to make the not-so-fun game into a small but necessary part of a bigger game, that is fun.
Another way is to make it into a Flash game, and submit it to a site that tracks your achievements in the games, and gives you an overall score over all the games you've played. Examples of sites like this are Kongregate.com, and Newgrounds.com. If you're lucky, the game might even get to spend a few days on the site's front page, or get a limited-time challenge, for extra bonus points.
If the game could be made into a series, that would be even better.
So far I've only seen one "educational" game get promoted to the front page of Kongregate.com, with its own limited-time-only challenge. That was Globetrotter XL. The game says the name of a city, and you have to click on where that city is, on a map of the world with no border lines or labels. You're scored by how close you click to the actual city. Unfortunately, I ended up not completing this game's limited-time challenge, because the game was so unforgiving and my knowledge of geography was so poor.
Anyway, while it would be a really nice bonus if the spot-the-fallacy game became popular outside LW, the main purpose of the game is to give current LW members an objective way to test their rationality, even if the game isn't especially fun.
There have been several LW posts now talking about how desperately we need a way to measure our rationality, and so far I haven't seen any serious proposals that anyone is actually working on. Or maybe there's a project already started, and I just didn't notice it because I got so far behind on reading the LW posts.
This game/quiz/whatever is a proposal for a way to go from having no objective way to measure our rationality, to at least having something.
Incidentally, another link right up your alley: http://projects.csail.mit.edu/worldview/about
(Starting to think maybe we could use a wiki page, even if only for links and ideas. This game discussion is now spread out over something like 5 LW articles...)