This post tests how much exposure comments to open threads posted "late" get. If you are reading this then please either comment or upvote. Please don't do both and don't downvote. When the next open thread comes, I'll post another test comment as soon as possible with the same instructions. Then I'll compare the scores.
If the difference is insignificant, a LW forum is not warranted, and open threads are entirely sufficient.
PS: If you don't see a test comment in the next open thread (e.g. I've gone missing), please do post one in my stead. Thank you.
Edit: Remember that if you don't think I deserve the karma, but still don't want to comment, you can upvote this comment and downvote any one or more of my other comments.
I apologize if this is blunt or has already been addressed, but it seems to me that the voting system here has a serious, user-driven problem: the karma system has become nothing more than a popularity indicator.
Many here seem to vote up or down based on gut-level agreement or disagreement with the comment or post. For example, it is very troubling that some single-line comments of agreement, which in my opinion should have 0 karma, end up with massive amounts of it, while comments that oppose the popular beliefs here are voted ...
So, I'm reading A Fire Upon The Deep. It features books that instruct you how to speedrun your technological progress all the way from sticks and stones to interstellar space flight. Does anything like that exist in reality? If not, it's high time we start a project to make one.
Edit (10 October 2009): This is encouraging.
What's the best way to follow the new comments on a thread you've already read through? How do you keep up with which ones are new? It'd be nice if there were a non-threaded view. RSS feed?
One of the old standard topics of OB was cryonics: why it's great even though it's incredibly speculative & relatively expensive, and how we're all fools for not signing up. (I jest, but still.)
Why is there so much less interest in things like caloric restriction? Or even better, intermittent fasting, which doesn't even require cutting calories? If we're at all optimistic about the Singularity or cryonic-revival-level technology being reached by 2100, then aren't those far superior options? They deliver concrete benefits now, for a price that can'...
Eliezer Yudkowsky and Andrew Gelman on Bloggingheads: Percontations: The Nature of Probability
I haven't watched it yet, but the set-up suggests it could focus a discussion, so it should probably be given a top-level post.
A link you might find interesting:
The Neural Correlates of Religious and Nonreligious Belief
Summary:
Religious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks, scientists at UCLA and other universities have found. They used fMRI to measure signal changes in the brains of committed Christians and nonbelievers as they evaluated the truth and falsity of religious and nonreligious propositions. For both groups, beli...
I plan to develop this into a top level post, and it expands on my ideas in this comment, this comment, and the end of this comment. I'm interested in what LWers have to say about it.
Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression. For example Marcus Hutter's AIXI purports to identify the inferences a maximally-intelligent being would make, yet it (and efficient approximations) does not have practical application. The reason (I think) is that it works by finding the shortest hypothesis th...
For you non-techies who'd like to be titillated, here's a second bleg about some very speculative and fringey ideas I've been pondering:
What do you think the connection between motivation & sex/masturbation is?
Here's my thought: it's something of a mystery to me why homosexuals seem to be so well represented among the eminent geniuses of Europe & America. The suggestion I like best is that they're not intrinsically more creative thanks to 'female genes' or whatever, but that they can't/won't participate in the usual mating rat-race and so in a Fre...
I have something of a technical question: on my personal wiki, I've written a few essays which might be of interest to LWers. They're in Markdown, so you would think I could just copy them straight into a post, but, AFAIK, you have to write posts in that WYSIWYG editor. Is there any way around that? (EDIT: Turns out there's an HTML input box, so I can write locally, compile with Pandoc, and insert the results.)
The articles, in no particular order:
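The compile-with-Pandoc step mentioned in the EDIT above can be sketched as follows; "essay.md" and "essay.html" are hypothetical filenames, and this just assembles the command for inspection rather than assuming Pandoc is installed:

```python
import shlex

# Hypothetical filenames; substitute your own wiki pages.
cmd = ["pandoc", "--from=markdown", "--to=html", "essay.md", "--output=essay.html"]

# Print the shell command to run; the resulting essay.html can then
# be pasted straight into the post's HTML input box.
print(shlex.join(cmd))
```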
...This is just a comment I can edit to let people elsewhere on the Net know that I am the real Eliezer Yudkowsky.
10/30/09: Ari N. Schulman: You are not being hoaxed.
Dual n-back is a game that's supposed to increase your IQ by up to 40%. http://en.wikipedia.org/wiki/Dual_n_back#Dual_n-back
Some think the effect is temporary; long-term studies are underway. Still, I wouldn't mind having to practice periodically. I've been at it for a few days, and might retry the Mensa test in a while. (I washed out at 113 a few years ago.) Download link: http://brainworkshop.sourceforge.net/
It seems to make sense. Instead of getting a faster CPU, a cheap and easy fix is get more RAM. In a brain analogy, I've often thought of the "magic number ...
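For concreteness, here is a minimal sketch (mine, not Brain Workshop's code) of the matching rule the game trains; "dual" n-back simply runs two such streams, one auditory and one visual, at the same time:

```python
def n_back_matches(stream, n):
    """Return the indices i where stream[i] equals the item n steps back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

# A player's task at n=2 is to signal exactly at these positions.
letters = list("ABACBCAC")
print(n_back_matches(letters, 2))  # → [2, 5, 7]
```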
Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.
Ted Williams seems to be having a tough time of it.
Henry Markram's recent TED talk on cortical column simulation. Features philosophical drivel of appalling incoherence.
We need a snappy name, like "analysis paralysis", for people who spend all their time studying rather than doing. They (we) intend to do, but never feel like they know enough to start.
I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:
Consider an experiment in which we toss a coin to choose whether a person is placed into a one-room hotel or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure, and so on. The graph of this would be a tree in which the persons are edges and the hotels nodes. Ea...
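One rough way to see the branching structure is to simulate it; this is my own sketch under the assumptions above (fair coin; heads keeps one instance, tails duplicates), not anything from the original puzzle discussions:

```python
import random

def run_rounds(rounds, rng):
    """Track how many person-instances exist after each coin-tossing round."""
    people = 1
    history = [people]
    for _ in range(rounds):
        # Each instance independently stays single (one-room hotel)
        # or duplicates (two-room hotel).
        people = sum(2 if rng.random() < 0.5 else 1 for _ in range(people))
        history.append(people)
    return history

# Seeded for reproducibility; the population can only grow or stay flat.
print(run_rounds(6, random.Random(0)))
```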
I recently realized that I don't remember seeing any LW posts questioning if it's ever rational to give up on getting better at rationality, or at least on one aspect of rationality that a person is just having too much trouble with.
There have been posts questioning the value of x-rationality, and posts examining the possibility of deliberately being irrational, but I don't remember seeing any posts examining if it's ever best to just give up and stop trying to learn a particular skill of rationality.
For example, someone who is extremely risk-averse, and e...
I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision, yet there is almost no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which suggests that most contributors are hedonistic consequentialist utilitarians....
So, there's this set, called W. The non-emptiness of W would imply that many significant and falsifiable conjectures, which we have not yet falsified, are false. What's the probability that W is empty?
(Yep, it's a bead jar guess. Show me your priors. I will not offer clarification unless I find that there's something I meant to be clearer about but wasn't.)
Movie: Cloudy with a Chance of Meatballs - I took the kids to see it this weekend, and it struck me as a fun illustration of the UnFriendly AI problem.
The Other Presumptuous Philosopher:
It begins pretty much as described here:
...It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry conside
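The anthropic update at stake can be made concrete with a small calculation; the observer counts are the ones given above, and the use of the Self-Indication Assumption (weighting theories by their number of observers) is my framing of the presumptuous philosopher's move, not the original author's code:

```python
from fractions import Fraction

# Equal physical priors; observer counts from the T1/T2 setup above.
prior = {"T1": Fraction(1, 2), "T2": Fraction(1, 2)}
observers = {"T1": 10**24, "T2": 10**36}

# SIA: multiply each prior by the theory's observer count, then normalize.
unnorm = {t: prior[t] * observers[t] for t in prior}
total = sum(unnorm.values())
posterior = {t: unnorm[t] / total for t in unnorm}

# T2 ends up favored by odds of a trillion to one.
print(posterior["T2"] / posterior["T1"])  # → 1000000000000
```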
Bug alert: this comment has many children, but doesn't currently have a "view children" link when viewing this entire thread.
I've only been reading Open Threads recently, so forgive me if it's been discussed before.
A band called The Protomen recently came out with their second rock opera in a planned trilogy of rock operas based on (and we're talking loosely based on) the Megaman video game. The first is The Protomen: Hope Rides Alone; the second is Act II: The Father of Death.
The first album tells the story of a people who have given up, and focuses on the idea of heroism. The second album is more about the creation of the robots and the moral struggles that occur. I suggest you start with "The Good Doctor": http://www.youtube.com/watch?v=HP2NePWJ2pQ
Mini heuristic that seems useful but not big enough for a post.
To combat ingroup bias: before deciding which experts to believe, first mentally sort the list of experts by topical qualifications. Allow autodidact skills to count if they have been recognized by peers (publication, citing, collaboration, etc).
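A toy illustration of the heuristic, with made-up names and a made-up qualification measure: rank the experts by topical credentials first, so that whose conclusions you happen to like can't drive the ordering.

```python
# Hypothetical experts; "topical_publications" stands in for whatever
# peer-recognized qualification measure you choose.
experts = [
    {"name": "A", "topical_publications": 2, "agrees_with_me": True},
    {"name": "B", "topical_publications": 9, "agrees_with_me": False},
    {"name": "C", "topical_publications": 5, "agrees_with_me": True},
]

# Sort strictly by qualifications, ignoring agreement with my views.
ranked = sorted(experts, key=lambda e: e["topical_publications"], reverse=True)
print([e["name"] for e in ranked])  # → ['B', 'C', 'A']
```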
My thought of the day: an 'Infinite Improbability Drive' is slightly less implausible than a faster-than-light engine.
Is there a complete guide anywhere to comment/post formatting? If so, it should probably be linked on the "About" page or something. I can't figure out how to do HTML entities; is that possible?
I would like to throw out some suggested reading: John Barnes's Thousand Cultures and Meme Wars series. The former deals with the social consequences of smarter-than-human AI, uploading, and what sorts of pills we ought to want to take. The latter deals with nonhuman, non-friendly FOOMs. Both are very good, smart science fiction quite apart from having themes often discussed here.
I'll make my more wrong confession here in this thread: I'm a multiple worlds skeptic. Or at least I'm deeply skeptical of Egan's law. I won't pretend I'm arguing from any sort of deep QM understanding. I just mean in my sci-fi, what-if, thinking about what the implications would be. I truly believe there would be more wacky outcomes in an MWI setting than we see. And I don't mean violations of physical laws; I'm hung up on having to give up the idea of cause and effect in psychology. In MWI, I don't see how it's possible to think there would be cause and ...
Hear ye, hear ye: commence the discussion of things which have not been discussed.
As usual, if a discussion gets particularly good, spin it off into a posting.
(For this Open Thread, I'm going to try something new: priming the pump with a few things I'd like to see discussed.)