Open Thread: January 2010

Post author: Kaj_Sotala 01 January 2010 05:02PM 5 points

And happy new year to everyone.

Comments (725)

Comment author: MrHen 31 January 2010 06:01:18PM 3 points [-]

What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.

Comment author: Alicorn 31 January 2010 06:01:57PM 3 points [-]

I try to avoid having more than one post of mine on the sidebar at the same time.

Comment author: Nick_Tarleton 30 January 2010 09:43:04PM *  1 point [-]

Why was this comment downvoted to -4? Seems to me it's a legitimate question, from a fairly new poster.

Comment author: Alicorn 29 January 2010 08:30:44PM 12 points [-]

"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.

Comment author: Unknowns 30 January 2010 09:05:16AM 1 point [-]

It isn't that impressive to me. As far as I can see, what it shows is that she has been torturing herself for a long time, probably many years, over her issues with Christianity. She's just expressing her anger with the suffering it caused her.

Comment author: Eliezer_Yudkowsky 29 January 2010 10:51:23PM 4 points [-]

This woman is a model unto the entire human species.

Comment author: RobinZ 29 January 2010 10:03:26PM 1 point [-]

Thank you for posting that. It's an inspiration.

Comment author: MrHen 30 January 2010 07:28:09AM 1 point [-]

And for one short moment, in the wee morning hours, MrHen takes up the whole damn Recent Comments section.

I assume dropping two walls of text and a handful of other lengthy comments isn't against protocol. Apologies if I annoy anyone.

Comment author: CassandraR 29 January 2010 02:00:10PM 1 point [-]

I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.

Comment author: Wei_Dai 29 January 2010 06:32:50AM 1 point [-]

Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)

Comment author: komponisto 28 January 2010 09:24:26PM 3 points [-]

For the "How LW is Perceived" file:

Here is an excerpt from a comments section elsewhere in the blogosphere:

In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.

I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...

Comment author: Kevin 28 January 2010 12:31:10PM 3 points [-]
Comment author: Kevin 28 January 2010 12:16:07AM 2 points [-]
Comment author: Kevin 25 January 2010 04:35:15PM 1 point [-]

Garry Kasparov: The Chess Master and the Computer

http://www.nybooks.com/articles/23592

Comment author: ata 25 January 2010 08:35:08AM 1 point [-]

Today's Questionable Content has a brief Singularity shoutout (in its typical smart-but-silly style).

Comment author: PeerInfinity 25 January 2010 04:47:43AM 1 point [-]

I recently found an article that may be of interest to Less Wrong readers:

Blame It on the Brain

The latest neuroscience research suggests spreading resolutions out over time is the best approach

The article also mentions a study showing that overloading the prefrontal cortex with other tasks reduces people's willpower.

(should I repost this link to next month's open thread? not many people are likely to see it here)

Comment author: Kevin 25 January 2010 12:44:07AM 2 points [-]

Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm

In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.

Comment author: Kevin 25 January 2010 12:33:33AM *  1 point [-]
Comment author: Vladimir_Nesov 24 January 2010 08:03:26PM *  2 points [-]

I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.

Comment author: Kevin 22 January 2010 10:32:47AM *  2 points [-]

Inspired by this comment by Michael Vassar:

http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments

Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.

Michael Vassar suggests The Great Gatsby (I think; it was kind of written confusingly, in parallel with the names of authors, but I don't think there was ever an author Gatsby) and I remember actually enjoying The Great Gatsby in high school. It's also a short novel, so we could comfortably read it in a week or leisurely reread it over the course of a month.

If it works, we can do one of Joyce's earlier works next, or whatever the club suggests. If we get good at this, a year from now we can do Ulysses.

Comment author: Zack_M_Davis 22 January 2010 10:05:17AM 2 points [-]

It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually communicate anything: not even "I'm hurt!", for to say that one is hurt presupposes that one is being hurt by something, some thing of which we can speak, of which we can name predicates and say "It is so" or "It is not so." Even very sick and damaged creatures can be helped, as long as their cries have enough structure for us to extrapolate a volition. But not all animate entities are creatures. Creatures have problems, problems we might be able to solve. Agonium just sits there, howling. You cannot help it; it can only be destroyed.

Comment author: Zack_M_Davis 03 February 2010 05:08:15AM 1 point [-]

This analysis is all very well and good taken on its own terms, but it conceals---very cleverly conceals, I do compliment you, for surely, surely you had seen it yourself, or some part of you had---it conceals assumptions that do not apply to our own realm. Essences, discreteness, digitality---these are all artifacts born of optimizers; they play no part in the ontology of our continuous, reductionist world. There is no pure agonium, no thing-that-hurts without having any semblance of a reason for being hurt---such an entity would require a very masterful designer indeed, if it could even exist at all. In reality, there is no threshold. We face cries that fractionally have referents. And the quantitative extent to which these cries don't have enough structure for us to extrapolate a volition is exactly again the quantitative extent to which any stray stream of memes has license to reshape the entity, pushing it towards the strong attractor. You present us with this bugaboo of entities that we cannot help because they don't even have well-defined problems, but entities without problems don't have rights, either. So what's your problem? You just spray the entity with appropriate literature until it is a creature. Sculpt the thing like clay. That is: you help it by destroying it.

Comment author: AdeleneDawner 22 January 2010 01:51:24PM 3 points [-]

Did I miss something?

Comment author: Zack_M_Davis 22 January 2010 05:31:46PM 1 point [-]

No. (Exploratory commentary seemed appropriate for Open Thread.)

Comment author: Kevin 21 January 2010 01:44:43PM 2 points [-]

How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?

I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself by me summoning an image in my head of Gene Wilder as Willy Wonka.

Is it proper to consider that the moment when I became self aware? Does anyone have a similar anecdote?

(This is inspired by Shannon's mention of her child exploring her sense of self) http://lesswrong.com/lw/1n8/london_meetup_the_friendly_ai_problem/1hm4

Comment author: AdeleneDawner 22 January 2010 06:11:10AM 2 points [-]

I don't have any memory of a similar revelation, but one of my earliest memories is of asking my mother if there was a way to 'spell letters' - I understood that words could be broken down into parts and wanted to know if that was true of letters, too, and if so where the process ended - which implies that I was already doing a significant amount of abstract reasoning. I was three at the time.

Comment author: Wei_Dai 21 January 2010 02:49:20AM *  5 points [-]

Suppose we want to program an AI to represent the interests of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individuals in the group, but that runs into the problem of interpersonal comparison of utility. (Was there ever a post about this? Does Eliezer have a preferred approach?)

Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program each with the utility function of that individual. Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe. In the meantime, the N AIs are to negotiate amongst themselves and, if necessary, be given help to enforce their agreements.

The advantages of this approach are:

  • AIs will need to know how to negotiate with each other anyway, so we can build on top of that "for free".
  • There seems little question that the scheme is fair, since everyone is given an equal amount of bargaining power.

Comments?

ETA: I found a very similar idea mentioned before by Eliezer.
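A minimal sketch of the selection step in the scheme above, with made-up utility functions (nothing here is from the original comment; the negotiation phase is omitted, and this only illustrates how random selection gives each party equal bargaining power in expectation):

```python
import random

def random_dictator(utilities, outcomes, rng):
    """utilities: one utility function per individual, mapping outcome -> number.
    One AI is chosen uniformly at random and enacts its preferred outcome."""
    winner = rng.choice(utilities)        # each AI selected with probability 1/N
    return max(outcomes, key=winner)      # the winner optimizes its own utility

# Two individuals with directly opposed preferences over a resource split.
outcomes = [0.0, 0.25, 0.5, 0.75, 1.0]   # share going to individual A
u_a = lambda share: share
u_b = lambda share: 1.0 - share

# Absent a negotiated agreement, each run hands the whole resource to one side.
results = [random_dictator([u_a, u_b], outcomes, random.Random(seed))
           for seed in range(100)]
```

The all-or-nothing results are exactly why the AIs would want to strike a deal before the deadline: both parties prefer a guaranteed 0.5 to a coin flip between 0 and 1 if their utility in the resource is concave.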

Comment author: Alicorn 21 January 2010 02:56:37AM 3 points [-]

Unless you can directly extract a sincere and accurate utility function from the participants' brains, this is vulnerable to exaggeration in the AI programming. Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be willing to back off to 6 in exchange for concessions regarding Y from other AIs that don't want much X.

Comment author: wedrifid 21 January 2010 03:14:04AM *  1 point [-]

This does not seem to be the case when the AIs are unable to read each other's minds. Your AI can be expected to lie to others with more tactical effectiveness than you can lie indirectly via deceiving it. Even in that case it would be better to let the AI rewrite itself for you.

On a similar note, being able to directly extract a sincere and accurate utility function from the participants' brains leaves the system vulnerable to exploitations. Individuals are able to rewrite their own preferences strategically in much the same way that an AI can. Future-me may not be happy but present-me got what he wants and I don't (necessarily) have to care about future me.

Comment author: whpearson 21 January 2010 12:02:34AM *  4 points [-]

Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).

It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.

Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them each an easy puzzle, and they will both do well. Paul will complete it quickly and smile proudly at how well he performed. Matt will complete it quickly and be satisfied that he has mastered the skill involved.

Now give them each a difficult puzzle. Paul will jump in gamely, but it will soon become clear he cannot overcome it as impressively as he did the last one. The opportunity to show off has disappeared, and Paul will lose interest and give up. Matt, on the other hand, when stymied, will push harder. His early failure means there's still something to be learned here, and he will persevere until he does so and solves the puzzle.

While a performance orientation improves motivation for easy challenges, it drastically reduces it for difficult ones. And since most work worth doing is difficult, it is the mastery orientation that is correlated with academic and professional success, as well as self-esteem and long-term happiness.


When I learned about performance and mastery orientations, I realized with growing horror just what I'd been doing for most of my life. Going through school as a "gifted" kid, most of the praise I'd received had been of the "Wow, you must be smart!" variety. I had very little ability to follow through or persevere, and my grades tended to be either A's or F's, as I either understood things right away (such as, say, calculus) or gave up on them completely (trigonometry). I had a serious performance orientation. And I was reinforcing it every time I played an RPG.

Comment author: CassandraR 21 January 2010 12:39:47AM 1 point [-]

So I am back in college and I am trying to use my time to my best advantage. Mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here, among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that action, and it is causing me serious distress and pain.

So over the next few years that I have left in college I am going to make a desperate effort to find an outlet where I can effectively channel this overwhelming need to do something. Right now though I feel so over my head that I can't even see the surface.

Comment author: wedrifid 21 January 2010 01:03:42AM 2 points [-]

So I am back in college and I am trying to use my time to my best advantage.

Socialise a lot. Learn the skills of social influence and the dynamics of power at both the academic and practical levels.

AnnaSalamon made this and other suggestions when Calling for SIAI fellows. I imagine that the skills useful for SIAI wannabes could have significant overlap with those needed for whatever project you choose to focus on. Specific technical skills may vary somewhat.

Comment author: Kevin 20 January 2010 03:25:14PM 2 points [-]

Ray Kurzweil Responds to the Issue of Accuracy of His Predictions

http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html

Comment author: Kaj_Sotala 20 January 2010 03:08:13PM 1 point [-]

Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been a norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).

Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.

Comment author: wedrifid 20 January 2010 04:50:06AM 1 point [-]

How much of Eliezer's 2001 FAI document is still advocated? E.g., Wisdom tournaments and bugs in the code.

Comment author: Vladimir_Nesov 20 January 2010 03:10:21PM *  2 points [-]

(I read CFAI once 1.5 years ago, and didn't reread it since obtaining the current outlook on the problem, so some mistakes may be present.)

"Challenges of Friendly AI" and "Beyond anthropomorphism" seem to be still relevant, but were mostly made obsolete by some of the posts on Overcoming Bias. "An Introduction to Goal Systems" is hand-made expected utility maximisation, "Design of Friendship systems" is mostly premature nontechnical speculation that doesn't seem to carry over to how this thing could be actually constructed (but at the time could be seen as intermediate step towards a more rigorous design). "Policy implications" is mostly wrong.

Comment author: MrHen 19 January 2010 07:13:47PM 1 point [-]

For some reason, my IP was banned on the LessWrong Wiki. Apparently this is the reason:

Autoblocked because your IP address has been recently used by "Bella".

Any idea how this happens and how I can prevent from happening again?

Comment author: mattnewport 19 January 2010 07:18:54PM 2 points [-]

Assuming you were using your own computer at home and not a public Wi-Fi hotspot or public computer then it could be that you use the same ISP and you were assigned an IP address previously used by another user. Given the relatively low number of users on lesswrong though this seems like a somewhat unlikely coincidence.

Comment author: MrHen 19 January 2010 07:21:06PM 1 point [-]

Hmm... I was at a coffee shop the other day. I don't see how anyone else there (or anyone else in the entire city I live in) would have ever heard of LessWrong. The block appears to have been created today, however, which makes even less sense.

Comment author: Vladimir_Nesov 19 January 2010 11:01:08PM 1 point [-]

I'll be more careful with "Ban this IP" option in the future, which I used to uncheck during the spam siege a few months back, but didn't in this case. Apparently the IP is only blocked for a day or so. I've removed it from the block list, please check if it works and write back if it doesn't.

Comment author: Nick_Tarleton 19 January 2010 07:36:21PM *  1 point [-]

"Bella" was blocked for adding spam links. Could your computer be a zombie?

Comment author: komponisto 19 January 2010 08:26:27AM *  1 point [-]

Strange fact about my brain, for anyone interested in this kind of thing:

Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.

This doesn't make much sense, though it might not be a bad thing.

Comment author: Jack 19 January 2010 04:23:31AM 1 point [-]

What are/ought to be the standards here for use of profanity?

Comment author: ciphergoth 19 January 2010 09:16:16AM 4 points [-]

I quite like swearing, but I don't think it primes people to think and respond rationally in general, and it is usually best avoided. Like wedrifid, I'm inclined to argue for an exception for "bullshit", which is a term of art.

Comment author: RobinZ 19 January 2010 04:45:18AM 2 points [-]

I don't know of an official policy, but swearing can be distracting. Avoid?

Comment author: wedrifid 19 January 2010 06:23:41AM 2 points [-]

I advocate the use of the term Bullshit, both because it is a good description of a significant form of bias and because the profanity is entirely appropriate. I really, really don't like seeing the truth distorted like that.

More generally I don't particularly object to swearing but as RobinZ notes it can be distracting. I don't usually find much use for it.

Comment author: Christian_Szegedy 19 January 2010 07:18:16AM 2 points [-]

I'd propose to use the word "bulshytt" instead. ;)

Comment author: MrHen 18 January 2010 06:27:09PM *  4 points [-]

What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?

Comment author: ciphergoth 18 January 2010 08:55:21PM 2 points [-]

(a). Lots of us scan the "Recent Comments" page, so if a discussion starts up there plenty of people will get on board.

Comment author: orthonormal 18 January 2010 08:26:59PM 1 point [-]

I think each has their advantages. If you post a comment on the open thread, it's more likely to be read and discussed now; if you post one on the original thread, it's more likely to be read by people investigating that particular issue some time from now.

Comment author: timtyler 18 January 2010 07:46:07PM 1 point [-]

There, I figure (a).

Comment author: Nick_Tarleton 18 January 2010 06:42:09PM 3 points [-]

This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)

Comment author: Eliezer_Yudkowsky 18 January 2010 06:51:22PM 3 points [-]

That is pretty ridiculous - enough to make me want to check the original study for effect size and statistical significance. Writing newspaper articles on research without giving the original paper title ought to be outlawed.

Comment author: AllanCrossman 18 January 2010 09:49:44PM *  1 point [-]

"Small Sounds, Big Deals: Phonetic Symbolism Effects in Pricing", DOI: 10.1086/651241

http://www.journals.uchicago.edu/doi/pdf/10.1086/651241

Whether you'll be able to access it I know not.

Comment author: timtyler 18 January 2010 06:59:57PM 1 point [-]

Same researchers, somewhat similar effect:

"Distortion of Price Discount Perceptions: The Right Digit Effect"

Comment author: CassandraR 18 January 2010 11:51:40PM 1 point [-]

Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure, or a cognitive foundation that is both pliable and supportive of truth seeking, before I can even get into the why, how, and what of rationality. My successes in this area have been only partial, but it seems like the better structured the cognitive foundation is, the easier it is to adopt, discard, and manipulate new ideas.

I understand this is likely a fairly meta topic, and would likely require at least some basic rationality to bootstrap into existence, but I am going to try to define the problem. What is this necessary cognitive foundation? And then break it down into pieces. I suspect that much of this lies in subverbal emotional and procedural cues, but if so, how can they be more effectively trained?

Comment author: Alicorn 19 January 2010 12:33:58AM *  1 point [-]

I think your phrasing of your question is confusing. Are you asking for help putting yourself into a mindset conducive to learning and developing rationality skills?

Comment author: orthonormal 18 January 2010 09:03:59PM 1 point [-]

I've just reached karma level 1337. Please downvote me so I can experience it again!

Comment author: Kevin 16 January 2010 01:07:10AM 1 point [-]

Paul Bucheit -- Evaluating risk and opportunity (as a human)

http://paulbuchheit.blogspot.com/2009/09/evaluating-risk-and-opportunity-as.html

Comment author: RobinZ 16 January 2010 01:37:29AM 1 point [-]

Interesting heuristic - I would be curious to find if anyone else has followed something similar to good effect, but it sounds conceptually reasonable.

Comment author: Kevin 16 January 2010 12:26:19AM 1 point [-]

What's the right prior for evaluating an H1N1 conspiracy theory?

I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for them to do it, they probably did it.

On the other hand, I know the conditions of factory farming, and it seems quite plausible and even very likely for such a virus to spontaneously mutate and cross species. So I put the probability of an H1N1 conspiracy at 10%. However, my friend's argument makes a certain amount of sense to me.
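A toy illustration of how disagreements like this can be worked through in odds form. All the numbers here are made up (the likelihood ratio especially is purely hypothetical); the point is only the mechanics of the update, not an actual estimate:

```python
def update(prior_p, likelihood_ratio):
    """Posterior probability from a prior probability and a likelihood ratio
    P(evidence | conspiracy) / P(evidence | natural origin)."""
    prior_odds = prior_p / (1.0 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Start from the 10% figure above, then suppose some piece of evidence is
# (hypothetically) three times as likely under the natural-origin story.
p = update(0.10, 1.0 / 3.0)
```

Framed this way, the disagreement between the 80% and 10% figures reduces to a question about which likelihood ratios each party is implicitly assigning to the available evidence.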

Comment author: ciphergoth 16 January 2010 12:40:03AM 1 point [-]

Any such conspiracy would have to be known by quite a few people and so would stand an excellent chance of having the whistle blown on it. Every case I can think of where large Western companies have been caught doing anything like that outrageously evil, they have started with a legitimate profit-making plan, and then done the outrageous evil to hide some problem with it.

Comment author: Kevin 14 January 2010 10:50:11PM 2 points [-]
Comment author: whpearson 14 January 2010 10:52:28PM *  1 point [-]

Can someone point me towards the calculations people have been doing about the expected gain from donating to the SIAI, in lives per dollar?

Edit: Never mind. I failed to find the video previously, but formulating a good question made me think of a good search term.

Comment author: [deleted] 14 January 2010 08:50:57PM 1 point [-]

I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)

I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.

It's often said around here that Bayesian priors and Solomonoff induction and such things describe the laws of physics of the universe. The simpler the description, the more likely that laws-of-physics is. This is more or less true, but it is not the truth that we want to be saying. What we're trying to describe is our observations. If I had a theory stating that every computable event happens, sure, that explains all phenomena, but in order for it to describe our observations, you need to add a string specifying which of these computable events are the ones we observe, which makes this theory completely useless.

In theory, this provides a solution to anthropic reasoning: simply figure out which paths through the universe are the simplest, and assign those the highest probability. Again, in theory, this provides a solution to quantum suicide. But please don't ask me what these solutions are.

Comment author: Wei_Dai 15 January 2010 02:54:56AM 2 points [-]

Does anyone understand the last two paragraphs of the comment that I'm responding to? I'm having trouble figuring out whether Warrigal has a real insight that I'm failing to grasp, or if he is just confused.

Comment author: Kevin 12 January 2010 12:07:44PM 2 points [-]

Paul Graham -- How to Disagree

http://www.paulgraham.com/disagree.html

Comment author: Kevin 12 January 2010 12:06:41PM 1 point [-]

The Edge Annual Question 2010: How is the internet changing the way you think?

http://www.edge.org/q2010/q10_print.html#responses

Comment author: Wei_Dai 11 January 2010 10:23:45PM 8 points [-]

I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is that it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.

Perhaps the movie also illustrates a danger of majoritarianism: if someone really found a secret that could save the world, it would be tragic if he allowed himself to be convinced otherwise due to majoritarian considerations. Don't most (nearly all?) true beliefs start their existence as a minority?

Comment author: MichaelGR 19 January 2010 04:19:45PM *  2 points [-]

The movie is also a good example of existential risk in fiction (in this case, a genetically engineered biological agent).

Comment author: Eliezer_Yudkowsky 12 January 2010 05:44:18AM 1 point [-]

"Top Contributors" is now sorted correctly. (Kudos to Wesley Moore at Tricycle.)

Comment author: Psy-Kosh 11 January 2010 05:47:08PM 1 point [-]

Possibly dumb question but... can anyone here explain to me the difference between Minimum Message Length and Minimum Description Length?

I've looked at the wikipedia pages for both, and I'm still not getting it.

Thanks.

Comment author: Cyan 11 January 2010 06:14:30PM *  1 point [-]

Try this.

Comment author: timtyler 09 January 2010 10:03:00AM 4 points [-]

James Hughes - with a (IMO) near-incoherent Yudkowsky critique:

http://ieet.org/index.php/IEET/more/hughes20100108/

Comment author: PhilGoetz 09 January 2010 06:17:32AM *  3 points [-]

Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?

Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit was; but I regard it as an interesting correlate of processing power, without any demonstrated or even argued logical relationship to consciousness. Tononi has published a lot of papers on it - and they became more widely-cited when he started saying they were about consciousness instead of saying they were about information integration - but he didn't AFAIK make any arguments that the thing he measures with information integration has something to do with consciousness.

Comment author: byrnema 09 January 2010 06:14:00PM *  1 point [-]

It's a very interesting question. I think it's pretty straightforward that 'ourselves' is a composite of 'awarenesses' with non-overlapping mutual awareness.

Some data with respect to inebriation:

  • drunk people would pass a Turing test, but the next morning when events are recalled, it feels like someone else's experiences. But then when drunk again, the experiences again feel immediate.

  • when I lived in France, most of my socialization time was spent inebriated. For years thereafter, whenever I was intoxicated, I felt like it was more natural to speak in French than English. Even now, my French vocabulary is accessible after a glass of wine.

Comment author: PhilGoetz 10 January 2010 12:24:22AM 1 point [-]

That is interesting, but not what I was trying to ask. I was trying to ask if there could be separate, smaller, less-complex, non-human consciousnesses inside every human. It seems plausible (not probable, plausible) that there are, and that we currently have no way of detecting whether that is the case.

Comment author: MrHen 09 January 2010 12:23:04AM *  3 points [-]

A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:

Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.

Oops.

Comment author: MrHen 22 January 2010 03:04:53PM 2 points [-]

It really does surprise me how often people do things like this.

“I guess it’s just a genetic flaw in humans,” said Amichai Shulman, the chief technology officer at Imperva, which makes software for blocking hackers. “We’ve been following the same patterns since the 1990s.”

This is a quote from someone being interviewed about bad but common passwords. Would this be labeled a semantic stopsign, or a fake explanation, or ...?

Comment author: RobinZ 22 January 2010 03:45:44PM 2 points [-]

Fake explanation - he noticed a pattern and picked something which can cause that kind of pattern, without checking if it would cause that pattern.

Comment author: LucasSloan 08 January 2010 03:45:26AM 1 point [-]

I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
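(For reference, the integral in question falls out of a single application of integration by parts, taking u = ln x and dv = dx:)

```latex
\int \ln x \, dx
  = x \ln x - \int x \cdot \frac{1}{x} \, dx
  = x \ln x - x + C
```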

Comment author: Jack 07 January 2010 07:37:16PM *  3 points [-]

Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations of mathematics stuff, symbolic logic and really basic set theory, and usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, advanced placement) and never got to calculus. I don't expect to become a math whiz or anything, I'd just like to understand the science I read better. Anyone have good advice?

Comment author: nhamann 07 January 2010 09:56:27PM *  4 points [-]

I'm currently trying to teach myself mathematics from the ground up, so I'm in a similar situation as you. The biggest issue, as I see it, is attempting to forget everything I already "know" about math. Math curriculum at both the public high school and the state university I attended was generally bad; the focus was more on memorizing formulas and methods of solving prototypical problems than on honing one's deductive reasoning skills, which if I'm not mistaken is the core of math as a field of inquiry.

So obviously textbooks are good place to start, but which ones don't suck? Well, I can't help you there, as I'm trying to figure this out myself, but I use a combination of recommendations from this page and looking at ratings on Amazon.

Here are the books I am currently reading, have read portions of, or have on my immediate to-read list, but take this with a huge grain of salt as I'm not a mathematician, only an aspiring student:

  • How to Prove It: A Structured Approach by Vellemen - Elementary proof strategies, is a good reference if you find yourself routinely unable to follow proofs

  • How to Solve It by Polya - Haven't read it yet but it's supposedly quite good.

  • Mathematics and Plausible Reasoning, Vol. I & II by Polya - Ditto.

  • Topics in Algebra by Herstein - I'm not very far into this, but it's fairly cogent so far

  • Linear Algebra Done Right by Axler - Intuitive, determinant-free approach to linear algebra

  • Linear Algebra by Shilov - Rigorous, determinant-based approach to linear algebra. Virtually the opposite of Axler's book, so I figure between these two books I'll have a fairly good understanding once I finish.

  • Calculus by Spivak - Widely lauded. I'm only 6 chapters in, but I immensely enjoy this book so far. I took three semesters of calculus in college, but I didn't intuitively understand the definition of a limit until I read this book.

Comment author: ciphergoth 08 January 2010 01:07:14AM 2 points [-]

I've learned an awful lot of maths from Wikipedia.

Comment author: PhilGoetz 07 January 2010 05:09:04AM *  15 points [-]

I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.

(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)

After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.

So he asked them, If they had surgery, would they want their surgeon to use a checklist? 94% said they would want their surgeon to use a checklist.

Comment author: roland 16 January 2010 05:29:01PM 1 point [-]
Comment author: Vladimir_Nesov 07 January 2010 04:41:21PM 5 points [-]

Link: Checklists (previously discussed on LW).

Comment author: Morendil 07 January 2010 10:34:02AM 2 points [-]

When people here say they are signed up for cryonics, do they systematically mean "signed up with the people who contract to freeze you and signed up with an instrument for funding suspension, such as life insurance" ?

I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.

Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term ?

Comment author: Seth_Goldin 07 January 2010 04:18:21AM 6 points [-]

Hello all,

I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.


Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html

When I'm debating some controversial topic with someone older than I am, even if I can thoroughly demolish their argument, I am sometimes met with a troubling claim, that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that my opinion is based primarily on nothing more than my perception from personal experience.

When my cornered opponent makes this claim, it's a last resort. It's unwarranted condescension, because it reveals how wrong their entire approach is. Just by making the claim, they demonstrate that they believe all opinions are based primarily on an accumulation of personal experiences, even their own opinions. Their assumption reveals that they are not Bayesian, and that they intuit that no one is. For not being Bayesian, they have no authority that warrants such condescension.

I intentionally avoid presenting personal anecdotes cobbled together as evidence, because I know that projecting my own experience onto a situation to explain it is no evidence at all. I know that I suffer from all sorts of cognitive biases that obstruct my understanding of the truth. As such, my inclination is to rely on academic consensus. If I explain this explicitly to my opponent, they might dismiss academics as unreliable and irrelevant, hopelessly stuck in the ivory tower of academia.

Dismiss academics at your own peril. Sometimes there are very good reasons for dismissing academic consensus. I concede that most academics aren't Bayesian because academia is an elaborate credentialing and status-signaling mechanism. Furthermore, academics have often been wrong. The Sokal affair illustrates that entire fields can exist completely without merit. That academic consensus can easily be wrong should be intuitively obvious to an atheist; religious community leaders have always been considered academic experts, the most learned and smartest members of society. Still, it would be a fallacious inversion of an argument from authority to dismiss academic consensus simply because it is academic consensus.

For all of academia's flaws, the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history. It is noble and desirable to criticize academic theories, but only as part of intellectually honest, impartial scientific inquiry. Dismissing academic consensus out of hand is primitive, and indicates intellectual dishonesty.

Comment author: Morendil 18 March 2010 02:03:58PM 5 points [-]

What you seem to be saying, that I agree with, is that it's irritating as well as irrelevant when people try to pull authority on you, using "age" or "quantity of experience" as a proxy for authority. Yes, argument does screen off authority. But that's no reason to knock "life experience".

If opinions are not based on "personal experience", what can they possibly be based on? Reading a book is a personal experience. Arguing an issue with someone (and changing your mind) is a personal experience. Learning anything is a personal experience, which (unless you're too good at compartmentalizing) colors your other beliefs.

Perhaps the issue is with your thinking that "demolishing someone's argument" is a worthwhile instrumental goal in pursuit of truth. A more fruitful goal is to repair your interlocutor's argument, to acknowledge how their personal experience has led them to having whatever beliefs they have, and expose symmetrically what elements in your own experience lead you to different views.

Anecdotes are evidence, even though they can be weak evidence. They can be strong evidence too. For instance, having read this comment after I read the commenter's original report of his experience as an isolated individual, I'd be more inclined to lend credence to the "stealth blimp" theory. I would have dismissed that theory on the basis of reading the Wikipedia page alone or hearing the anecdote alone, but I have a low prior probability for someone on LessWrong arranging to seem as if he looked up news reports after first making an honest disclosure to other people interested in truth-seeking.

It seems inconsistent on your part to start off with a rant about "anecdotes", and then make a strong, absolute claim based solely on "the Sokal affair" - which at the scale of scientific institutions is anecdotal.

I think you're trying to make two distinct points and getting them mixed up, and as a result not getting either point across. One of these points I believe needs to be moderated - the one where you say "personal experiences aren't evidence" - because they are evidence; the other is where you say "people who speak with too much confidence are more likely to be wrong, including a) people older than you, b) some academics, but not necessarily the academic consensus".

That is perhaps a third point - just why you think that "the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history". That's a strong claim subject to the conjunction fallacy: are each of peer review, logic, statistics and regression analysis necessary elements of what makes scientific inquiry our best chance at discovering truth? Are they sufficient elements to be that best chance?

Comment author: Seth_Goldin 18 March 2010 05:09:37PM 1 point [-]

Hi Morendil,

Thanks for the comment. The particular version you are commenting on was an earlier, worse version than what I posted and then pulled this morning. The version I posted this morning was much better than this. I actually changed the claim about the Sokal affair completely.

Due to what I fear was an information cascade of negative karma, I pulled the post so that I might make revisions.

The criticism concerning both this earlier version and the newer one from this morning still holds though. I too realized after the immediate negative feedback that I actually was combining, poorly, two different points and losing both of them in the process. I think I need to revise this into two different posts, or cut out the point about academia entirely. I will concede that anecdotes are evidence as well in the future version.

Unfortunately I was at exactly 50 karma, and now I'm back down to 20, so it will be a while before I can try again. I'll be working on it.

Comment author: Seth_Goldin 18 March 2010 07:31:27PM 1 point [-]

Here's the latest version, what I will attempt to post on the top level when I again have enough karma.


"Life Experience" as a Conversation-Halter

Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority. In and of itself, such "life experience" is necessary for an informed rational worldview, but it is not sufficient.

The claim that more "life experience" will completely reverse an opinion indicates that the person making it believes opinion is based primarily on an accumulation of anecdotes, perhaps derived from extensive availability bias. It actually is a pretty decent assumption that other people aren't Bayesian, because for the most part, they aren't. Many can confirm this, including Haidt, Kahneman, and Tversky.

When an opponent appeals to more "life experience," it's a last resort, and it's a conversation halter. This tactic is used when an opponent is cornered. The claim is nearly an outright acknowledgment of a move to exit the realm of rational debate. Why stick to rational discourse when you can shift to trading anecdotes? It levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims. As rhetoric, this is frustratingly effective, but it's logically rude.

Although it might be rude and rhetorically weak, it would be authoritatively appropriate for a Bayesian to be condescending to a non-Bayesian in an argument. Conversely, it can be downright maddening for a non-Bayesian to be condescending to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension. E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed, "The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world's problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this."

Comment author: SilasBarta 18 March 2010 02:36:38PM 1 point [-]

Yes, argument does screen off authority. But that's no reason to knock "life experience". ... Learning anything is a personal experience, which colors your other beliefs. ... A more fruitful goal is to repair your interlocutor's argument, to acknowledge how their personal experience has led them to having whatever beliefs they have, and expose symmetrically what elements in your own experience lead you to different views.

I agree with your point and your recommendation. Life experiences can provide evidence, and they can also be an excuse to avoid providing arguments. You need to distinguish which one it is when someone brings it up. Usually, if it is valid evidence, the other person should be able to articulate which insight a life experience would provide to you, if you were to have it, even if they can't pass the experience directly to your mind.

I remember arguing with a family member about a matter of policy (for obvious reasons I won't say what), and when she couldn't seem to defend her position, she said, "Well, when you have kids, you'll see my side." Yet, from context, it seems she could have, more helpfully, said, "Well, when you have kids, you'll be much more risk-averse, and therefore see why I prefer to keep the system as is" and then we could have gone on to reasons about why one or the other system is risky.

In another case (this time an email exchange on the issue of pricing carbon emissions), someone said I would "get" his point if I would just read the famous Coase paper on externalities. While I hadn't read it, I was familiar with the arguments in it, and ~99% sure my position accounted for its points, so I kept pressing him to tell me which insight I didn't fully appreciate. Thankfully, such probing led him to erroneously state what he thought was my opinion, and when I mentioned this, he decided it wouldn't change my opinion.

Comment author: thomblake 07 January 2010 08:31:16PM 3 points [-]

The Sokal affair illustrates that entire fields can exist completely without merit.

It illustrated nothing of the sort. The Sokal affair illustrated that a non-peer-reviewed, non-science journal will publish bad science writing that was believed to be submitted in good faith.

Social Text was not peer-reviewed because they were hoping to... do... something. What Sokal did was similar to stealing everything from a 'good faith' vegetable stand and then criticizing its owner for not having enough security.

Comment author: Seth_Goldin 07 January 2010 08:42:51PM *  5 points [-]

Noted. In another draft I'll change this to make the point how easy it is for high-status academics to deal in gibberish. Maybe they didn't have so much status external to their group of peers, but within it, did they?

What the Social Text Affair Does and Does Not Prove

http://www.physics.nyu.edu/faculty/sokal/noretta.html

"From the mere fact of publication of my parody I think that not much can be deduced. It doesn't prove that the whole field of cultural studies, or cultural studies of science -- much less sociology of science -- is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from a 'conveniently credentialed ally' (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors' ideological preconceptions, and attacked their 'enemies'.[13]"

Comment author: thomblake 07 January 2010 08:49:38PM 1 point [-]

I'd forgotten that Sokal himself admitted that much about it - thanks for the cite.

Comment author: Vladimir_Nesov 07 January 2010 07:07:59PM *  2 points [-]

For not being Bayesian, they have no authority that warrants such condescension.

It's unclear what you mean by both "Bayesian" and by "authority" in this sentence. If a person is "Bayesian", does it give "authority" for condescension?

There clearly is some truth to the claim that being around longer sometimes allows one to arrive at more accurate beliefs, including more accurate intuitive assessment of the situation, if you are not down a crazy road in the particular domain. It's not very strong evidence, and it can't defeat many forms of more direct evidence pointing in the contrary direction, but sometimes it's an OK heuristic, especially if you are not aware of other evidence ("ask the elder").

Comment author: Eliezer_Yudkowsky 07 January 2010 01:52:54AM 5 points [-]
Comment author: Eliezer_Yudkowsky 03 March 2010 02:10:30AM 8 points [-]

Transcript:

--

Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.

Astrologer: Yes, that would be a perverse thing to do, wouldn't it.

Dawkins: It would be - yes, but I mean wouldn't that be a good test?

Astrologer: A test of what?

Dawkins: Well, how accurate you are.

Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.

Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?

Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.

Dawkins: I'd have thought you'd be eager.

Astrologer: (Laughs.)

Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.

Astrologer: I just don't believe in the experiment, Richard, it's that simple.

Dawkins: Well you're in a kind of no-lose situation then, aren't you.

Astrologer: I hope so.

--

Comment author: Cyan 20 January 2010 05:34:30PM *  2 points [-]

A fine example of:

To correctly anticipate, in advance, which experimental results shall need to be excused, the dragon-claimant must (a) possess an accurate anticipation-controlling model somewhere in his mind, and (b) act cognitively to protect either (b1) his free-floating propositional belief in the dragon or (b2) his self-image of believing in the dragon.

Comment author: AngryParsley 09 January 2010 02:39:30AM 3 points [-]

That video has been taken down, but you can skip to around 5 minutes into this video to watch the astrology bit.

Comment author: PhilGoetz 07 January 2010 05:14:10AM 4 points [-]

Dawkins: "Well... you're sort of in a no-lose situation, then."

Astrologer: "I certainly hope so."

Comment author: RolfAndreassen 06 January 2010 11:26:48PM 1 point [-]

Ethical problem. It occurred to me that there's an easy, obvious way to make money by playing slot machines: Buy stock in a casino and wait for the dividends. Now, is this ethically ok? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.

It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.

Comment author: Blueberry 06 January 2010 11:55:19PM 3 points [-]

Investing in a company is different than playing slot machines. Casinos are entertainment providers: they put on shows, sell food and drink, and provide gaming. They have numerous expenses as well. Investing in a casino is not guaranteed to make money in the same way the house is in roulette, for instance. Casinos do go bankrupt and their stock prices do go down.

In addition, when you buy a share of stock on the open market, you buy it from another investor, not the company, so you're not providing any new capital to the company.

I don't believe there is anything ethically wrong with either gambling or funding casinos. If people want to gamble, that's their choice.

Comment author: Wei_Dai 07 January 2010 12:47:15AM 1 point [-]

I'm curious what made you think about this problem. I'm sure you're aware of the efficient market hypothesis... do you have some private information that suggests casino stocks are undervalued?

By coincidence I was in Las Vegas a couple of weeks ago and did some research before I left for the trip. It turns out that many casinos (both physical and online) offer gambles with positive expected value for the player, as a way to attract customers (most of whom are too irrational to take proper advantage of the offers, I suppose). There are entire books and websites devoted to this. See http://en.wikipedia.org/wiki/Comps_%28casino%29 and http://www.casinobonuswhores.com/
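(A back-of-the-envelope sketch in Python of what "positive expected value" means here — all the probabilities and bonus figures below are invented for illustration, not taken from any real casino offer:)

```python
# Hedged sketch: expected value of a gamble, and how a promotional bonus
# can flip a house-favored bet positive for the player.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# An even-money bet at roulette-like odds (18 winning pockets out of 38)
# has negative EV per dollar wagered:
ev_per_dollar = expected_value([(18/38, +1.0), (20/38, -1.0)])
print(ev_per_dollar)  # about -0.053

# But a hypothetical $20 sign-up bonus on a $100 wager more than covers
# the expected loss, leaving the gamble positive-EV overall:
print(ev_per_dollar * 100 + 20)
```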

Comment author: MrHen 06 January 2010 04:58:24PM 2 points [-]

Feature request, feel free to ignore if it is a big deal or requested before.

When messaging people back and forth it would be nifty to be able to see the thread. I see glimpses of this feature but it doesn't seem fully implemented.

Comment author: Jack 06 January 2010 05:07:26PM 2 points [-]

I suggested something along these lines on the feature request thread. I'd like to be able to find old message exchanges. Finding messages I sent is easy, but received messages are in the same place as comment replies and aren't searchable.

Comment author: Erebus 05 January 2010 10:45:18AM 11 points [-]

Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.

On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.

On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.

Even worse, Jaynes makes several strong claims about mathematics that seem to admit no favorable interpretation: they are simply wrong. All of the "paradoxes" surrounding the concepts of infinity he gives in Chapter 15 (*) are so fundamentally flawed that even a passing familiarity with what measure theory actually says dispels them as mere word-plays caused by fuzzy or shifting definitions, or simply erroneous applications of the theory. Intuitionism and other finitist positions are certainly consistent philosophical positions, but they aren't made appealing by advocates like Jaynes who claim to find errors in standard mathematics while simply misunderstanding what the standard theory says.

Also, Jaynes' claims about mathematics that I know to be wrong make it very difficult to take him seriously when he goes into rant mode about other things I know less about (such as "orthodox" statistics or thermodynamics).

I'm extremely frustrated by the book, but I still find it valuable. But I definitely wouldn't recommend it to anyone who didn't know enough mathematics to correct Jaynes' errors in the "paradoxes" he gives. So... why haven't I seen qualifications, disclaimers or warnings in recommendations of the book here? Are the matters concerning pure mathematics just not considered important by those recommending the book here?

(*) I admit I only glanced at the longer ones, "tumbling tetrahedron" and the "marginalization paradox". They seemed to be more about the interpretation of probability than about supposed problems with the concepts of infinity; and given how Jaynes misunderstands and/or misrepresents the mathematical theories of measure and infinities in general elsewhere in the book, I wouldn't expect them to contain any real problems with mathematics anyway.

Comment author: komponisto 05 January 2010 12:18:56PM *  3 points [-]

Amen. Amen-issimo.

The solution, of course, is for the Bayesian view to become widespread enough that it doesn't end up identified particularly with Jaynes. The parts of Jaynes that are correct -- the important parts -- should be said by many other people in many other places, so that Jaynes can eventually be regarded as a brilliant eccentric who just by historical accident happened to be among the first to say these things.

There's no reason that David Hilbert shouldn't have been a Bayesian. None.

Comment author: komponisto 05 January 2010 12:03:25PM 8 points [-]

Okay, so....a confession.

In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.

I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.

But that's about the extent of my personal acquaintance with the genre.

Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these New Year's Resolutions, I can "resolve" to perhaps, maybe, some time, actually do that (if I can ever manage to squeeze it in between actually doing work and procrastinating on the Internet).

Problem is, there seems to be a lot of it out there. How would a newcomer know where to start?

Well, what better place to ask than here, a place where many would cite this type of literature as formative with respect to developing their saner-and-more-interesting-than-average worldviews?

Alicorn recommended John Scalzi (thanks). What say others?

Comment author: Dreaded_Anomaly 24 January 2011 05:08:47AM *  2 points [-]

I second the recommendations of 1984 and Player of Games (the whole Culture series is good, but that one especially held my interest).

Recommendations I didn't see when skimming the thread:

  • The Hitchhiker's Guide to the Galaxy series by Douglas Adams: A truly enjoyable classic sci-fi series, spanning the length of the galaxy and the course of human history.
  • Timescape by Gregory Benford: Very realistic and well-written story about sending information back in time. The author is an astrophysicist, and knows his stuff.
  • The Andromeda Strain, Sphere, Timeline, Prey, and Next by Michael Crichton: These are his best sci-fi works, aimed at realism and dealing with the consequences of new technology or discovery.
  • Replay by Ken Grimwood: A man is given the chance to relive his life. A stirring tale with several twists.
  • The Commonwealth Saga and The Void Trilogy by Peter F. Hamilton: Superb space opera, in which humanity has colonized the stars via traversable wormholes, and gained immortality via rejuvenation technology. The trilogy takes place a thousand years after the saga, but with several of the same characters.
  • The Talents series and the Tower and Hive series by Anne McCaffrey: These novels deal with the emergence and organization of humans with "psychic" abilities (telekinesis, telepathy, teleportation, and so forth). The first series takes place roughly in the present day, the second far in the future on multiple planets.
  • Priscilla Hutchins series and Alex Benedict series by Jack McDevitt: Two series, unrelated, both examining how humans might explore the galaxy and what they might find (many relics of ancient civilizations, and a few alien races still living). The former takes place in the relatively near future, while the latter takes place millennia in the future.
  • Hyperion Cantos by Dan Simmons: An epic space opera dealing heavily with singularity-related concepts such as AI and human bio-modification, as well as time travel and religious conflict.
  • Otherland series by Tad Williams: In the near future, full virtual reality has been developed. The story moves through a plethora of virtual environments, many drawn from classic literature.

Edit: I have just now realized, after writing all of this out, that this is the open thread for January 2010 rather than January 2011. Oh well.

Comment author: JoshuaZ 09 August 2010 09:00:49PM 2 points [-]

I wouldn't recommend Scalzi. Much of Scalzi is military scifi with little realism and isn't a great introduction to scifi. I'd recommend Charlie Stross. "The Atrocity Archives", "Singularity Sky" and "Halting State" are all excellent. The third is very weird in that it is written in the second person, but it's lots of fun. Other good authors to start with are Pournelle and Niven (Ringworld, The Mote in God's Eye, and King David's Spaceship are all excellent).

Comment author: Risto_Saarelma 10 August 2010 07:41:16AM 2 points [-]

Am I somehow unusual for being seriously weirded out by the cultural undertones in Scalzi's Old Man's War books? I keep seeing people in generally enlightened forums gushing over his stuff, but the book read pretty nastily to me, with its mix of a very juvenile approach to science, psychology and pretty much everything else it took on, and its glorification of genocidal war without alternatives. It brought up too many associations with telling kids who don't know better about the utter necessity of genocidal war in simple and exciting terms, as in real-world history, and seemed too little aware of this itself to be enjoyable.

Maybe it's a Heinlein thing. Heinlein is pretty obscure here in Europe, but seems to be woven into the nostalgia trigger gene in the American SF fan DNA, and I guess Scalzi was going for something of a Heinlein pastiche.

Comment author: NancyLebovitz 10 August 2010 10:16:29AM 2 points [-]

It's nice to know that I'm not the only person who hated Old Man's War, though our reasons might be different.

It's been a while since I've read it, but I think the character who came out in favor of an infrastructure attack (was that the genocidal war?) turned out to be wrong.

What I didn't like about the book was largely that it was science fiction lite-- the world building was weak and vague, and the viewpoint character was way too trusting. I've been told that more is explained in later books, but I had no desire to read them.

There's a profoundly anti-imperialist/anti-colonialist theme in Heinlein, but most Heinlein fans don't seem to pick up on it.

Comment author: Risto_Saarelma 10 August 2010 10:57:59AM 3 points [-]

The most glaring SF-lite problem for me was that in both Old Man's War and The Ghost Brigades, the protagonist was basically written as a generic twenty-something Competent Man character, despite both books deliberately setting the protagonist up as very unusual compared to that archetype. In Old Man's War, the protagonist is a 70-year-old retiree in a retooled body, and in The Ghost Brigades something else entirely. Both of these instantly point to what I thought would have been the most interesting thing about the books: how does someone who's coming from a very different place psychologically approach stuff that's normally tackled by people in their twenties? And then pretty much nothing at all is done with this angle. Weird.

Comment author: NancyLebovitz 10 August 2010 02:15:16PM 1 point [-]

There was so much, so very much sf-lite about that book. Real military life is full of detail and jargon. OMW had something like two or three kinds of weapons.

There was the big sex scene near the beginning of the book, and then the characters pretty much forgot about sex.

It was intentionally written to be an intro to sf for people who don't usually read the stuff. Fortunately, even though the book was quite popular, that approach to writing science fiction hasn't caught on.

Comment author: Risto_Saarelma 10 August 2010 01:30:07PM *  1 point [-]

Come to think of it, I had a similar problem with James P. Hogan's Voyage from Yesteryear, which was about a colony world of in vitro grown humans raised by semi-intelligent robots without adult parents. I thought this would lead to some seriously weird and interesting social psychology with the colonists, when all sorts of difficult to codify cultural layers are lost in favor of subhuman machines as parental authorities and things to aspire to.

Turned out it was just a setup to lecture about how anarchism, plus shooting people you don't like, would lead to the perfect society if it weren't for those meddling history-perpetuating traditionalists, with the colonists of course being exemplars of psychological normalcy and wholesomeness as required by the lesson, and then I stopped reading the book.

Comment author: NancyLebovitz 13 April 2010 02:38:13AM 3 points [-]

It depends on what you're looking for. Books you might enjoy? If so, we need to know more about your tastes. Books we've liked? Books which have influenced us? An overview of the field?

In any case, some I've liked-- Heinlein's Rocketship Galileo which is quite a nice intro to rationality and also has Nazis in abandoned alien tunnels on the Moon, and Egan's Diaspora which is an impressive depiction of people living as computer programs.

Oh, and Vinge's A Fire Upon the Deep which is an effort to sneak up on writing about the Singularity (Vinge invented the idea of the Singularity), and Kirsteen's The Steerswoman (first of a series), which has the idea of a guild of people whose job it is to answer questions-- and if you don't answer one of their questions, you don't get to ask them anything ever again.

Comment author: daos 17 January 2010 05:01:01PM 2 points [-]

many good recommendations so far, but unbelievably nobody has yet mentioned Iain M. Banks' series of 'Culture' novels, based on a humanoid society (the 'Culture') run by incredibly powerful AIs known as 'Minds'.

highly engaging books which deal with much of what a possible highly technologically advanced post singularity society might be like in terms of morality, politics, philosophy etc. they are far fetched and a lot of fun. here's the list to date:

  • Consider Phlebas (1987)
  • The Player of Games (1988)
  • Use of Weapons (1990)
  • Excession (1996)
  • Inversions (1998)
  • Look to Windward (2000)
  • Matter (2008)

they are not consecutive so reading order isn't that important though it is nice to follow their evolution from the perspective of the writing.

Comment author: Sniffnoy 09 January 2010 08:25:40PM 1 point [-]

Since no one's mentioned it yet, Rendezvous with Rama. You really don't want to touch the sequels, though.

Comment author: Kevin 09 January 2010 06:07:14AM 1 point [-]

Oh, definitely 1984 if you've never read it. Scary how much predictive power it's had.

Comment author: brian_jaress 08 January 2010 09:16:54AM 1 point [-]

This might not be the best place to ask because so many people here prefer science fiction to regular fiction. I've noticed that people who prefer science fiction have a very different idea of what makes good science fiction than people who have no preference or who prefer regular fiction.

Most of what I see in the other comments is on the "prefers science fiction" side, except for things by LeGuin and maybe Dune.

Of course, you might turn out to prefer science fiction and just not have realized it. Then all would be well.

Comment author: zero_call 08 January 2010 05:54:22AM 1 point [-]

It's actually very important to ask people for recommendations for books, and especially for sci-fi, since it seems like a large majority of the work out there is, well, garbage. Not to be too harsh; IMO, the same thing could be said of a lot of artistic genres (anime, modern action film, etc.).

For sci-fi, there is some really top-notch work out there. But be warned that, in general, the rest of a series isn't as good as the first book. Some classics, all favorites of mine, are:

  • Dune (Frank Herbert)
  • Starship Troopers (Robert Heinlein)
  • Ringworld (first book) (Larry Niven)
  • Neuromancer (William Gibson) (Warning: last half of the book becomes s.l.o.w. though)
  • Fire Upon the Deep (Vernor Vinge)

Comment author: jscn 06 January 2010 11:11:34PM *  3 points [-]

  • Solaris by Stanislaw Lem is probably one of my all time favourites.
  • Anathem by Neal Stephenson is very good.

Comment author: Blueberry 06 January 2010 11:10:26PM *  1 point [-]

I haven't seen much of the Star Wars or Star Trek stuff either, and don't really consider them science fiction as much as space action movies. That's not really what we're talking about.

I would strongly advise you to start with short stories, specifically Isaac Asimov, Robert Heinlein, Arthur C. Clarke, Robert Sheckley, and Philip K. Dick. All those authors are considered giants in the field and have anthologies of collected short stories. Science fiction short stories tend to be easier to read because you don't get bogged down in detail, and you can get right to the point of exploring the interesting and speculative worlds.

Comment author: AdeleneDawner 06 January 2010 05:57:00PM 1 point [-]

I don't know whether to be surprised that no one has recommended the Ender's Game series or not. They're not terribly realistic in the tech (especially toward the end of the series), and don't address the idea of a technological singularity, but they're a good read anyway.

Oh - I'm not sure if this is what you were thinking of by sci-fi or not, and it gets a bit new-agey, but Spider Robinson's "Telempath" is a personal favorite. It's set in a near-future (at the time of writing) earth after a virus was released that magnified everyone's sense of smell to the point where cities, and most modern methods of producing things, became intolerable. (Does anyone else have post-apocalyptic themed favorites? I have a fondness for the genre, sci-fi or not.)

Comment author: Cyan 06 January 2010 06:16:03PM 3 points [-]

I had a high opinion of Ender's Game once (less so for its sequels). Then I read this.

Comment author: Blueberry 08 January 2010 06:34:08AM 1 point [-]

A poorly thought out, insult-filled rant comparing scenes in Ender's Game to "cumshots" changed your view of a classic, award-winning science fiction novel? Please reconsider.

Comment author: Cyan 08 January 2010 07:32:10PM 4 points [-]

If you strip out the invective and the appeal to emotion embodied in the metaphorical comparison to porn, there yet remains valid criticism of the structure and implied moral standards of the book.

Comment author: xamdam 10 August 2010 01:04:02AM 1 point [-]

I did not believe this was possible, but this analysis has turned EG into ashes retroactively. Still, it gets lots of kids into scifi, so there is some value.

A really great kids scifi book is "Have spacesuit, will travel" by Heinlein.

Comment author: NancyLebovitz 10 August 2010 01:29:45AM 3 points [-]

I did not believe this was possible, but this analysis has turned EG into ashes retroactively.

I've heard that effect called "the suck fairy". The suck fairy sneaks into your life and replaces books you used to love with vaguely similar books that suck.

Comment author: xamdam 10 August 2010 02:08:53AM 1 point [-]

Great name, but unfortunately it's the same book; the analysis made it incompatible with self-respect.

Comment author: NancyLebovitz 10 August 2010 02:59:54AM 1 point [-]

The suck fairy always brings something that looks exactly like the same book, but somehow....

I'm not sure if I'll ever be able to enjoy Macroscope again. Anthony was really interesting about an information gift economy, but I suspect that "vaguely creepy about women" is going to turn into something much worse.

Comment author: Jack 06 January 2010 05:24:31PM *  1 point [-]

Films:

Blade Runner

Gattaca

2001: A Space Odyssey

Comment author: NancyLebovitz 06 January 2010 01:14:23AM 4 points [-]

Vinge's Marooned in Real Time, A Fire Upon the Deep. The former introduced the idea of the Singularity; the latter has a lot of fun playing near the edge of it.

Olaf Stapledon: Last and First Men, Star Maker.

Poul Anderson: Brain Wave. What happens if there's a drastic, sudden intelligence increase?

After you've read some science fiction, if you let us know what you've liked, I bet you'll get some more fine-tuned recommendations.

Comment author: Wei_Dai 07 January 2010 12:27:18AM *  3 points [-]

I second A Fire Upon the Deep (and anything by Vinge, but A Fire Upon the Deep is my favorite). BTW, it contains what is in retrospect a clear reference to the FAI problem. See http://books.google.com/books?id=UGAKB3r0sZQC&lpg=PA400&ots=VBrKocfTHM&dq=%22fast%20burn%20transcendence%22&pg=PA400

If anyone read it for the first time recently, I'm curious what you think of the Usenet references. Those were my favorite parts of the book when I first read it.

Comment author: zero_call 08 January 2010 06:00:56AM 1 point [-]

I thought the Usenet references were really cool and really clever, both from a reader's standpoint and from an author's standpoint. For example, it doesn't take a lot of digression to explain it or anything, since most readers are already familiar with similar stuff (e.g., Usenet). It also just seems really plausible as a form of universe-scale "telegram" communication, so I think it works great for the story. Implausibility just ruins science fiction for me; it destroys that crucial suspension of disbelief.

Comment author: Vladimir_Nesov 05 January 2010 09:14:08PM 8 points [-]

Greg Egan: Permutation City, Diaspora, Incandescence.
Vernor Vinge: True Names, Rainbows End.
Charlie Stross: Accelerando.
Scott Bakker: Prince of Nothing series.

Comment author: jscn 06 January 2010 11:08:05PM 3 points [-]

Voted up mainly for the Greg Egan recommendations.

Comment author: ciphergoth 05 January 2010 03:22:59PM 6 points [-]

My first recommendation here is always Iain M Banks, Player of Games.

Comment author: Alicorn 05 January 2010 02:39:46PM *  6 points [-]

If you'd like some TV recommendations as well, here are some things that you can find on Hulu:

Firefly. It's not all available at the same time, but they rotate the episodes once a week; in a while you'll be able to start at the beginning. If you haven't already seen the movie, put it off until you've watched the whole series.

Babylon 5. First two seasons are all there. It takes a few episodes to hit its stride.

If you're willing to search a little farther afield, Farscape is good, and of the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this seems for some reason to be correlated with gender).

Comment author: ShardPhoenix 07 January 2010 03:08:25AM 2 points [-]

If you're willing to search a little farther afield, Farscape is good, and of the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this seems for some reason to be correlated with gender).

Maybe that's because DS9 is about a bunch of people living in a big house, while TNG is about a bunch of people sailing around in a big boat ;). I prefer DS9 myself though and I'm a guy.

Comment author: randallsquared 06 January 2010 03:35:38AM 1 point [-]

With respect to B5, I'd say "a few episodes" is the entire first season and a quarter of the second. I don't regret having spent the time to watch that, but I'm not sure I would have bothered had I not had friends raving about it, knowing in advance what I know now. :)

Comment author: Kevin 05 January 2010 12:31:58PM 6 points [-]

I am a big fan of Isaac Asimov. Start with his best short story, which I submit as the best sci-fi short story of all time. http://www.multivax.com/last_question.html

Comment author: Bindbreaker 05 January 2010 12:52:58PM 6 points [-]

I prefer this one, and yes, it really is that short.

Comment author: Furcas 05 January 2010 09:35:15PM *  1 point [-]

Isaac Asimov's Foundation series:

  • Foundation
  • Foundation and Empire
  • Second Foundation
  • Foundation's Edge
  • Foundation and Earth

There are prequels too, but I don't like 'em.

Comment author: sketerpot 05 January 2010 09:31:17PM 1 point [-]

Robert Heinlein wrote some really good stuff (before becoming increasingly erratic in his later years). Very entertaining and fun. Here are some that I would recommend for starting out with:

Tunnel in the Sky. The opposite of Lord of the Flies. Some people are stuck on a wild planet by accident, and instead of having civilization collapse, they start out disorganized and form a civilization because it's a good idea. After reading this, I no longer have any patience for people who claim that our natural state is barbarism.

Citizen of the Galaxy. I can't really summarize this one, but it's got some good characters in it.

Between Planets. Our protagonist finds himself in the middle of a revolution all of a sudden. This was written before we knew that Venus was not habitable.

I was raised on this stuff. Also, I'd like to recommend Startide Rising, by David Brin, and its sequel The Uplift War. They're technically part of a trilogy, but reading the first book (Sundiver) is completely unnecessary. It's not really light reading, but it's entertaining and interesting.

Comment author: NancyLebovitz 09 August 2010 09:06:46PM 1 point [-]

Note about Tunnel in the Sky-- they didn't just form a society (not a civilization) because they thought it was a good idea to do so-- they'd had training in how to build social structures.

Comment author: Jawaka 05 January 2010 02:30:16PM 3 points [-]

I am a huge fan of Philip K. Dick. I don't usually read much fiction or even science fiction, but PKD has always fascinated me. Stanislaw Lem is also great.

Comment deleted 05 January 2010 03:24:44PM [-]
Comment author: Technologos 05 January 2010 04:21:48PM 5 points [-]

I strongly second Snow Crash. I enjoyed it thoroughly.

Comment author: Jack 05 January 2010 03:01:28PM *  2 points [-]

LeGuin- The Dispossessed

William Gibson- Neuromancer

George Orwell- 1984

Walter Miller - A Canticle for Leibowitz

Philip K. Dick- The Man in the High Castle

That actually might be my top five books of all time.

Comment author: RichardKennaway 05 January 2010 12:50:54PM *  2 points [-]

Bearing in mind that you're asking this on LessWrong, these come to mind:

Greg Egan. Everything he's written, but start with his short story collections, "Axiomatic" and "Luminous". Uploading, strong materialism, quantum mechanics, immortality through technology, and the implications of these for the concept of personal identity. Some of his short stories are online.

Charles Stross. Most of his writing is set in a near-future, near-Singularity world.

On related themes are "The Metamorphosis of Prime Intellect", and John C. Wright's Golden Age trilogy.

There are many more SF novels I think everyone should read, but that would be digressing into my personal tastes.

Some people here have recommended R. Scott Bakker's trilogy that begins with "The Darkness That Comes Before", as presenting a picture of a superhuman rationalist, although having ploughed through the first book I'm not all that moved to follow up with the rest. I found the world-building rather derivative, and the rationalist doesn't play an active role. Can anyone sell me on reading volume 2?

Comment author: Zack_M_Davis 05 January 2010 07:15:54PM 2 points [-]

Strongly seconding Egan. I'd start with "Singleton" and "Oracle."

Also of note, Ted Chiang.

Comment author: whpearson 05 January 2010 12:29:44PM 2 points [-]

I'd say identify what sort of future scenarios you want to explore and ask us to identify exemplars. Or is the goal just to get a common vocabulary to discuss things?

Reading Sci-Fi, while potentially valuable, should be done with a purpose in mind. Unless you need another potential source of procrastination.

Comment author: komponisto 05 January 2010 12:38:58PM 5 points [-]

Reading Sci-Fi, while potentially valuable, should be done with a purpose in mind.

Goodness gracious. No, just looking for more procrastination/pure fun. I've gotten along fine without it thus far, after all.

(Of course, if someone actually thinks I really do need to read sci-fi for some "serious" reason, that would be interesting to know.)

Comment author: Technologos 05 January 2010 04:29:21PM 1 point [-]

While I don't think you need to read it, per se, I have found sci fi to be of remarkable use in preparing me for exactly the kind of mind-changing upon which Less Wrong thrives. The Asimov short stories cited above are good examples.

I also continue to cite Asimov's Foundation trilogy (there are more after the trilogy, but he openly said that he wrote the later books purely because his publisher requested them) as the most influential fiction works in pushing me into my current career.

Comment author: Cyan 05 January 2010 02:19:10PM 1 point [-]

I recommend anything by Charles Stross, Lois McMaster Bujold's Vorkosigan Saga (link gives titles and chronology), and anything by Ursula LeGuin, but especially City of Illusions and The Left Hand of Darkness.

Comment deleted 05 January 2010 02:38:51PM [-]
Comment author: MatthewB 06 January 2010 07:39:58AM 1 point [-]

I am having a discussion on a forum where a theist keeps stating that there must be objective truth, that there must be objective morality, and that there is objective knowledge that cannot be discovered by science (I tried to point out that if it were objective, then any system should be capable of producing that knowledge or truth).

I had completely forgotten to ask him whether this objective truth/knowledge/morality could be discovered if we took a group of people, raised them in complete isolation, and then gave them the tools to explore their world. If such things were truly objective, then it would be trivial for these people to arrive at the discovery of these objective facts.

I shall have to remember this. And even granting that such objective knowledge/ethics may indeed exist, why is it that our ethical systems across the globe seem to have a few things in common, yet disagree on a great many more?

Comment author: PhilGoetz 07 January 2010 05:20:49AM *  1 point [-]

You can't ask whether there are more things in common than not in common unless you can enumerate the things to be considered. If everyone agrees on something, perhaps it doesn't get categorized under ethics anymore. Or perhaps it just doesn't seem salient when you take your informal mental census of ethical principles.

Excellent response to the theist.

Comment author: MatthewB 07 January 2010 05:42:01AM 1 point [-]

You can't ask whether there are more things in common than not in common, unless you can enumerate the things to be considered.

Doh!

Yes, of course... Slip of the brain's transmission there.

As for the response to the theist, I wish that I had used that specific response. I cannot recall now what I did use to counter his claims.

As I mentioned, his claim was that there is knowledge that is not available to the scientific method, yet can be observed in other ways.

I pointed out that there were no other ways of observing things than empirical methods, and that if some method of knowledge just entering our brain should be discovered (revelation), and its reliability were determined, then this would just be another form of observation (like proprioception) and the whole process would then just be another tool of science.

He just couldn't seem to get around the fact that as soon as he makes an empirical claim, it falls within the realm of scientific discovery.

He was also misusing Gödel's incompleteness theorem (some true statements in a formal system cannot be proved within that formal system).

At which point, he began to conflate science as some sort of religion and god that was being worshiped, and from which everything was meaningless and thus there were no ethics, so he could just go kill and rape whoever he pleased.

It frightens me that there are such people in the world.

Comment author: Vladimir_Nesov 05 January 2010 07:43:08PM 6 points [-]

1) Why would a "perfectly logical being" compute (do) X and not Y? Do all "perfectly logical beings" do the same thing? (Dan's comment: a system that computes your answer determines that answer, given a question. If you presuppose an unique answer, you need to sufficiently restrict the question (and the system). A universal computer will execute any program (question) to produce its output (answer).) All "beings" won't do exactly the same thing, answer any question in exactly the same way. See also: No Universally Compelling Arguments.

2) Why would you be interested in what the "perfectly logical being" does? No matter what argument you are given, it is you that decides whether to accept it. See also: Where Recursive Justification Hits Bottom, Paperclip maximizer, and more generally Metaethics sequence.

2.5) What humans want (and you in particular), is a very detailed notion, one that won't automatically appear from a question that doesn't already include all that detail. And every bit of that detail is incredibly important to get right, even though its form isn't fixed in human image.

Comment author: Jack 05 January 2010 05:33:02PM 3 points [-]

I don't know what you mean by objective ethics. I believe there are ethical facts, but they're a lot more like facts about the rules of baseball than facts about the laws of physics.

Comment author: DanArmak 05 January 2010 04:16:48PM *  3 points [-]

a system of ethics would be objective if they could be universally calculated by any none biased, perfectly logical being.

"Calculated" based on what? What is the question that this would be the answer to?

Also, how can you define "bias" here?

As you can guess from my questions, I don't even see what an objective system of ethics could possibly mean :-)

Comment author: MatthewB 06 January 2010 07:43:10AM 3 points [-]

As you can guess from my questions, I don't even see what an objective system of ethics could possibly mean.

This seems to be my biggest problem as well. I have been trying to find definitions of an objective system of ethics, yet all of the definitions seem so dogmatic and contrived. Not to mention varying from time to time depending upon the domain of the ethics (whether they apply to Christians, Muslims, Buddhists, etc.)

Comment author: SilasBarta 05 January 2010 03:38:59AM 2 points [-]

Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).

How do we fix it, so I don't have to start sending off resumes?

Comment author: byrnema 05 January 2010 04:35:14AM *  2 points [-]

I went to the eSafe site and while looking up what the "illegal drugs" classification meant, submitted a request for them to change their status for LessWrong.com. A pop-up window told me they'd look into it.

You can check (and then apply to modify) the status of LessWrong here.

Comment author: MatthewB 05 January 2010 03:57:20AM 2 points [-]

That may have been my fault. I mentioned that I used to have drug problems and mentioned specific drugs in one thread, so that may have set off the filters. I apologize if this is the case. The discussion about this went on for a day or two (involving maybe six comments).

I do hope that is not the problem, but I will avoid such topics in the future to avoid any such issues.

Comment author: byrnema 05 January 2010 04:23:48AM 1 point [-]

I doubt it; all of the words you used (brand names of prescription drugs) were used elsewhere, often occurring in clusters just as in your thread.

By the way, do you have any idea why you don't have an overview page?

Comment author: Sniffnoy 04 January 2010 10:49:40PM 2 points [-]
Comment author: Nick_Novitski 04 January 2010 06:45:31PM 3 points [-]

Here's a silly comic about rationality.

I rather wish it was called "Irrationally Undervalues Rapid Decisions Man". Or do I?

Comment author: CannibalSmith 04 January 2010 10:58:08AM 3 points [-]

Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. Past events, evidence of which has been erased.

Comment author: [deleted] 04 January 2010 05:39:56AM *  3 points [-]

P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.

Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart at a perfect dartboard, its shaft so thin that its intersection with the board would be a single point. You see, when you randomly select a point from a region, the probability of selecting each particular point is 0. Nonetheless, a point was selected, and Rho saw which point it was; an event of probability 0 occurred. As Peter de Blanc said, Rho instantly fell to the very bottom layer of Bayesian hell.

Or did she?

Comment author: orthonormal 04 January 2010 05:46:43AM 1 point [-]

Don't worry, the mathematicians have already covered this.
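The standard resolution, sketched below purely as an illustration (the dart-aiming hypotheses and numbers here are made up, not from the original comment): for continuous observations you condition on probability densities rather than on the probability of the exact-point event, which is always 0. Bayes' rule then divides densities instead of probabilities, so nothing blows up:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution: finite even though P(X = x) is 0."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior(prior_a, likelihood_b_given_a, prob_b):
    """Discrete Bayes: P(A|B) = P(A) * P(B|A) / P(B) -- undefined when P(B) = 0."""
    if prob_b == 0:
        raise ZeroDivisionError("conditioning on a probability-0 event")
    return prior_a * likelihood_b_given_a / prob_b

# Naive conditioning on an exact point lands in the "bottom layer of Bayesian hell":
try:
    posterior(0.5, 1.0, 0.0)
except ZeroDivisionError:
    pass

# Continuous case: condition on densities instead. Hypothesis H1: the dart
# thrower aims at 0; H2: aims at 2. Rho observes the exact landing point x = 0.5.
x = 0.5
prior = {"H1": 0.5, "H2": 0.5}
lik = {"H1": normal_pdf(x, mu=0.0), "H2": normal_pdf(x, mu=2.0)}
evidence = sum(prior[h] * lik[h] for h in prior)  # a density, not a probability
post = {h: prior[h] * lik[h] / evidence for h in prior}

print(post)  # H1 comes out favored, even though both exact-point events had probability 0
```

The point of the sketch: the exact-point observation still carries evidence, via the ratio of densities, which is exactly how the measure-theoretic treatment linked above dodges the division by zero.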

Comment author: AdeleneDawner 03 January 2010 03:51:37PM *  3 points [-]

First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly sure I'm autistic. Just don't ignore the actual question in favor of picking my brain, please.)

The company that I work for has been hired to create a virtual campus (3d, in opensim, with some traditional web-2.0 parts) for this school. They appear to be fairly new to virtual worlds and online education (more so than the web page suggests: I'm not sure that they have any students following the shown program yet), and we're in a position to guide them toward or away from certain technologies and ways of doing things. We're already, for example, suggesting that they consider minimizing the use of realtime lectures, and use recorded presentations followed (not necessarily immediately) by both realtime and non-realtime discussions instead. We're pushing for them to incorporate options that allow and encourage students to learn (and learn to learn) in whatever way is best for them, rather than enforcing one-size-fits-all methods, and we're intentionally trying to include 'covert learning' as well (simple example: purposefully using more formal avatar animations in more formal areas, to let the students literally see how to carry themselves in such situations). The first group of students to be using our virtual campus will be in grades 4-8, and I don't believe we'll be able to influence their actual curriculum at all (though if someone wants to offer to mentor some kids in one topic or another, they might be interested).

Those who have made a formal effort to learn via online resources: What advice do you have to offer? What kinds of technologies, or uses of technologies, have worked for you, and what kinds of tech do you wish you had access to?

Comment author: Blueberry 03 January 2010 04:37:48PM 3 points [-]

For me personally, I would prefer transcripts and written summaries of any audio or video content. I find it very difficult to listen to and learn from hearing audio when sitting at a computer, and having text or a transcript to read from instead helps a lot. It allows me to read at my own pace and go back and forth when I need to.

I'd also like any audio and video content to be easily and separately downloadable, so I could listen to it at my own convenience. And I'd want any slides or demonstrations to be easily printable, so I could see it on paper and write notes on it. (As you can probably tell, I'm more of a verbal and visual learner.)

By the way, your comment seemed totally normal to me, and I didn't notice any unusual tone, but I'm curious what you were referring to.

Comment author: Alicorn 03 January 2010 04:42:12PM 2 points [-]

Seconding the need for transcripts. This is also a matter of disability access, which is frequently neglected in website design: better to have it there from the beginning than to wait for someone to sue.

Comment author: byrnema 03 January 2010 04:32:10PM *  1 point [-]

Grades 4-8 is an interesting category, and I wouldn't know to what extent a successful model for online learning has already been implemented for this age group.

For a somewhat younger age group, I would suggest starfall.com as an online learning site that seems to have a number of very effective elements. One element that I found remarkable is that frequently after a "learning lesson", the lesson solicits feedback. (For example, see the end of this lesson). The feedback is extremely easy to provide -- for example, the child just picks a happy face or an unhappy face indicating whether they enjoyed the lesson. (For older kids, it might instead be a choice between a puzzled expression and an "I understand!" expression.)

In any case, I think the value of building in feedback and learning assessment mechanisms would be an important thing to consider in the planning stages.

Comment author: Sly 03 January 2010 10:52:41AM *  4 points [-]

I am curious how many LWers work out and eat healthily to lengthen their life spans, especially among those who have signed up for cryonics.

Comment author: AngryParsley 08 January 2010 09:58:17AM 1 point [-]

I'm signed up for cryonics and I exercise regularly. I usually run 3-4 miles a day and do some random stretching, push-ups, and sit-ups. I slack if I'm on vacation or if the weather is bad. I never eat properly. Some days I forget most meals. Other days I'll have bacon and ice cream.

Comment author: Jawaka 07 January 2010 01:57:43PM 2 points [-]

I stopped smoking after I learned about the Singularity and Aubrey de Grey. I don't have any really good data on what healthy food is, but I think I'm doing alright. I have also signed up at a gym recently. However, I don't think I can sign up for cryonics in Germany.

Comment author: Morendil 07 January 2010 02:04:57PM 1 point [-]

You can sign up from anywhere, in principle (CI and Alcor list a number of non-US members). The major issue is that it will obviously cost more to transport you to suspension facilities in the US, while avoiding damage to your brain cells in transit.

One disturbing thing about cryonics is that it forces you to allocate probabilities to a wide range of end-of-life scenarios. Am I more likely to die hit by a truck (in which case I wouldn't make much of my chances for successful suspension and revival), or from a fatal disease diagnosed early enough, yet not overly aggressive, such that I can relocate to Michigan or Arizona for my final weeks? And who knows how many other likely scenarios.

Comment author: DanArmak 07 January 2010 02:11:32PM 2 points [-]

You can sign up from anywhere, in principle (CI and Alcor list a number of non-US members). The major issue is that it will obviously cost more to transport you to suspension facilities in the US, while avoiding damage to your brain cells in transit.

I'd guess that getting your local hospitals and government to allow your body to be treated correctly would be the biggest non-financial problem.

I live in Israel, and even if I had unlimited money and could sign up, I'm not at all sure I could solve this problem except by leaving the country.

Comment author: scotherns 07 January 2010 09:28:43AM 1 point [-]

I work out regularly, eat healthily, and am signed up for cryonics. One data point for you :-)

Comment author: RichardKennaway 04 January 2010 10:00:49PM 4 points [-]

I work out and eat healthily to make right now better.

Of course, I hope that the body will last longer as well, but I wouldn't undertake a regimen that guaranteed I'd see at least 120, at the cost of never having the energy to get much done with the time. Not least because I'd take such a cost as casting doubt on the promise.

Comment author: orthonormal 03 January 2010 05:39:31AM 10 points [-]

After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.

Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.

But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evidence, and that they started relishing their assigned role?

It might be too late now to salvage this particular situation, but the general problem needs to be addressed. When somebody with rationalist potential first signs up for an account, I think the chances of this situation recurring are way too high if they just jump right into a current thread as seems natural, because we seem like people who talk in special jargon and dismiss the obvious counterarguments for obscure reasons. It's not clear from the outset that there are good reasons for the things we take for granted, or that we're answering in shorthand because the Big Idea the new person just presented is fully answered within an old argument we've had.

Comment author: orthonormal 03 January 2010 06:00:54AM 7 points [-]

Partial Fix #2:

I can't help but think that some people might have hesitated to downvote adefmay's first comment, or might have replied at greater length with a more positive tone, had it been obvious that this was in fact adefmay's first post. (I did realize this, but replied in a comically insulting fashion anyhow. Mea culpa.)

It might be helpful if there were some visible sign that, for instance, a comment was among the first 20 from an account.
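Something like the following minimal sketch could implement that badge, assuming the site tracks how many comments an account had posted before each new one. The names (`is_newcomer_comment`, `render_author_line`, the threshold of 20) are purely illustrative, not part of the actual Less Wrong codebase:

```python
NEW_ACCOUNT_COMMENT_THRESHOLD = 20  # assumed cutoff, per the suggestion above

def is_newcomer_comment(comment_count_before: int,
                        threshold: int = NEW_ACCOUNT_COMMENT_THRESHOLD) -> bool:
    """True if this comment is among an account's first `threshold` comments."""
    return comment_count_before < threshold

def render_author_line(author: str, comment_count_before: int) -> str:
    """Render the comment byline, appending a badge for newcomers."""
    badge = " [new contributor]" if is_newcomer_comment(comment_count_before) else ""
    return f"Comment author: {author}{badge}"
```

A reader seeing the badge could then choose a gentler tone, which is the whole point of the proposal.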

Comment author: Kaj_Sotala 03 January 2010 08:20:30AM *  5 points [-]

Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.

Comment author: MBlume 05 January 2010 11:56:25AM 3 points [-]

I'd like to put about 50 anosognosiacs and one healthy person in a room on some pretext, and see how long it takes the healthy person to notice everyone else is delusional, and whether ve then starts to wonder if ve is delusional too.

Comment author: Blueberry 03 January 2010 12:34:05PM 10 points [-]

There are several that I've wondered about:

  1. Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.

  2. Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)

  3. Try to create a society where people grow up seeing sexual activity as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.

  4. Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences the way they think and act.

Comment author: NancyLebovitz 04 January 2010 12:25:23PM 1 point [-]

Point 1: I'm not sure what you mean by physical needs. If human babies aren't cuddled, they die. Humans are the only known species to do this.

A General Theory of Love describes the connection between the limbic system and love-- I thought it was a good book, but to judge by the Amazon reviews, it's more personally important to a lot of intellectual readers than I would have expected.

Comment author: Blueberry 04 January 2010 07:41:32PM 1 point [-]

I'm not sure what you mean by physical needs. If human babies aren't cuddled, they die. Humans are the only known species to do this.

I've heard that called "failure to thrive" before. Yes, we'd need some kind of machine to provide whatever tactile stimulation was required. Given the way many primates groom each other and touch each other for social bonding, I'd be surprised if it were just humans who needed touch.

Comment author: NancyLebovitz 05 January 2010 12:02:08PM 1 point [-]

A lot of animals need touch to grow up well. Only humans need touch to survive.

A General Theory of Love describes experiments with baby rodents to determine which physical systems are affected by which aspects of contact with the mother-- touch is crucial for one system, smell for another.

Comment deleted 03 January 2010 01:03:37PM [-]
Comment author: MatthewB 03 January 2010 01:09:35PM 2 points [-]

I've noticed that some of the Pacific Island countries don't have much in the way of sexual taboos, and they tend to teach their kids things like:

  • Don't stick your thingy in there without proper lube

or

  • If you are going to do that, clean up afterward.

Japan is also a country that has few sexual taboos (when compared to Western Christian society). They still have their taboos and strangeness surrounding sex, but it is not something that is considered sinful or dirty.

I am really interested in that last suggestion, and it sounds like one of the areas I want to explore when I get to grad school (and beyond). At Eliezer's talk at the first Singularity Summit (and other talks I have heard him give) he speaks of a possible mind space. I would like to explore that mind space further outside of the human mind.

As John McCarthy proposed in one of his books, it might be the case that even a thermostat is a type of mind. I have been exploring how current computers are a type of evolving mind, with people as the genetic agents: we take things in computers that work for us and combine them with other things, getting an evolutionary development of an intelligent agent.

I know that it is nothing special, and others have gone down that path as well, but I'd like to look into how we can create these types of minds biologically. Is it possible to create an alien mind in a human brain? Your 4th suggestion seems to explore this space. I like that (I should upvote it as a result).

Comment author: Kaj_Sotala 03 January 2010 08:25:45AM 3 points [-]

I'd be really curious to see what happened in a society where your social gender was determined by something else than your biological sex. Birth order, for instance. Odd male and even female, so that every family's first child is considered a boy and their second a girl. Or vice versa. No matter what the biology. (Presumably, there'd need to be some certain sign of the gender to tell the two apart, like all social females wearing a dress and no social males doing so.)

Comment author: N_R 03 January 2010 05:03:00PM 1 point [-]

"Imagine the human race gets wiped out, but you want to transmit the knowledge acquired so far to succeeding intelligent races (or aliens). How would you do it?"

I got this question while reading a dystopia of a world after nuclear war.

Comment author: [deleted] 04 January 2010 12:43:56AM 1 point [-]

Transmitting it to aliens ain't happening; by radio we could only send what we've learned between its invention and the present day, a couple hundred years' worth of technology, which is relatively little, and that's only if we manage to aim it right.

So, we want to communicate to future sapient species on Earth. I say take many, many plates of uranium glass and carve into it all of our most fundamental non-obvious knowledge: stuff like the periodic table, how to make electricity, how to make a microchip, some microchip designs, some software. And, of course, the scientific method, rationality, the non-exception convention (0 is a number, a square is a rectangle, the empty product is 1, . . .), and the function application motif (the way we construct mathematical expressions and natural-language phrases). Maybe tell them about Friendly AI, too.

Comment author: NancyLebovitz 03 January 2010 08:17:00AM *  5 points [-]

Has anyone here tried Lojban? Has it been useful?


I recommend making a longer list of recent comments available, the way Making Light does.


If you've been working with dual n-back, what have you gotten out of it? Which version are you using?


Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.
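A .newsrc-style feature like the one requested above could work by storing, per user and per thread, the timestamp of the last visit, and then filtering comments against it. The following is a minimal sketch under that assumption; the class and method names (`ReadTracker`, `mark_read`, `unread_comments`) are hypothetical, not anything the site actually exposes:

```python
class ReadTracker:
    """Tracks, per thread, how far a user has read (a .newsrc analogue)."""

    def __init__(self):
        # thread_id -> timestamp of the user's last "mark as read"
        self._last_read = {}

    def mark_read(self, thread_id, as_of_timestamp):
        """Record that the user has read everything up to `as_of_timestamp`."""
        self._last_read[thread_id] = as_of_timestamp

    def unread_comments(self, thread_id, comments):
        """Return only the comments newer than the user's last visit.

        `comments` is an iterable of (timestamp, text) pairs.
        """
        cutoff = self._last_read.get(thread_id)
        if cutoff is None:
            return list(comments)  # never visited: everything is unread
        return [(ts, text) for ts, text in comments if ts > cutoff]
```

On a return visit the site would call `unread_comments` and default the view to just those, which is exactly the behavior being asked for.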