
Open Thread: January 2010

5 Post author: Kaj_Sotala 01 January 2010 05:02PM

And happy new year to everyone.

Comments (725)

Comment author: PhilGoetz 07 January 2010 05:09:04AM *  15 points [-]

I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.

(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)

After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.

So he asked them: if they had surgery themselves, would they want their surgeon to use a checklist? 94% said they would.

Comment author: Vladimir_Nesov 07 January 2010 04:41:21PM 5 points [-]

Link: Checklists (previously discussed on LW).

Comment author: Erebus 05 January 2010 10:45:18AM 11 points [-]

Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.

On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.

On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.

Even worse, Jaynes makes several strong claims about mathematics that seem to admit no favorable interpretation: they are simply wrong. All of the "paradoxes" surrounding the concepts of infinity he gives in Chapter 15 (*) are so fundamentally flawed that even a passing familiarity with what measure theory actually says dispels them as mere word-plays caused by fuzzy or shifting definitions, or simply erroneous applications of the theory. Intuitionism and other finitist positions are certainly consistent philosophical positions, but they aren't made appealing by advocates like Jaynes who claim to find errors in standard mathematics while simply misunderstanding what the standard theory says.
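
(To give one standard example of the sort of issue at stake: there is no countably additive uniform probability distribution on the natural numbers, since P({n}) = c for every n forces P(ℕ) to be 0 if c = 0 and infinite otherwise, never 1; finitely additive uniform assignments, on the other hand, do exist. Apparent "paradoxes" of infinity typically trade on not fixing which additivity assumption is in force.)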

Also, Jaynes' claims about mathematics that I know to be wrong make it very difficult to take him seriously when he goes into rant mode about other things I know less about (such as "orthodox" statistics or thermodynamics).

I'm extremely frustrated by the book, but I still find it valuable. Even so, I definitely wouldn't recommend it to anyone who didn't know enough mathematics to correct Jaynes' errors in the "paradoxes" he gives. So... why haven't I seen qualifications, disclaimers or warnings in recommendations of the book here? Are the matters concerning pure mathematics just not considered important by those recommending it?

(*) I admit I only glanced at the longer ones, "tumbling tetrahedron" and the "marginalization paradox". They seemed to be more about the interpretation of probability than about supposed problems with the concepts of infinity; and given how Jaynes misunderstands and/or misrepresents the mathematical theories of measure and infinities in general elsewhere in the book, I wouldn't expect them to contain any real problems with mathematics anyway.

Comment author: komponisto 05 January 2010 12:18:56PM *  3 points [-]

Amen. Amen-issimo.

The solution, of course, is for the Bayesian view to become widespread enough that it doesn't end up identified particularly with Jaynes. The parts of Jaynes that are correct -- the important parts -- should be said by many other people in many other places, so that Jaynes can eventually be regarded as a brilliant eccentric who just by historical accident happened to be among the first to say these things.

There's no reason that David Hilbert shouldn't have been a Bayesian. None.

Comment author: orthonormal 03 January 2010 05:39:31AM 10 points [-]

After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.

Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.

But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evidence, and that they started relishing their assigned role?

It might be too late now to salvage this particular situation, but the general problem needs to be addressed. When somebody with rationalist potential first signs up for an account, I think the chances of this situation recurring are way too high if they just jump right into a current thread as seems natural, because we seem like people who talk in special jargon and dismiss the obvious counterarguments for obscure reasons. It's not clear from the outset that there are good reasons for the things we take for granted, or that we're answering in shorthand because the Big Idea the new person just presented is fully answered within an old argument we've had.

Comment author: orthonormal 03 January 2010 06:00:54AM 7 points [-]

Partial Fix #2:

I can't help but think that some people might have hesitated to downvote adefmay's first comment, or might have replied at greater length with a more positive tone, had it been obvious that this was in fact adefmay's first post. (I did realize this, but replied in a comically insulting fashion anyhow. Mea culpa.)

It might be helpful if there were some visible sign that, for instance, this was among the 20 first comments from an account.

Comment author: Jack 03 January 2010 06:25:58AM 4 points [-]

When it became clear that adefmay couldn't roll with the punches, there were quite a few sensitive comments with good advice and explanations for why he/she had been sent links. His/her response to those was basically to get rude and indignant and to come up with as many counter-arguments as possible, while not once trying to understand someone else's position or consider the possibility that he/she was mistaken about something.

I don't know if adefmay was intentionally trolling but he/she was certainly deficient in rationalist virtue.

That said, I think we need to handle newcomers better anyway and an FAQ section is really important. I'd help with it.

Comment author: orthonormal 03 January 2010 07:49:10AM 5 points [-]

It seems plausible that things could have turned out much differently, but that the initial response did irreparable damage to the conversation. Perhaps putting adefmay on the defensive so soon made it implicitly about status and not losing face. Or perhaps the exchange fell into a pattern where acting the troll started to feel too good.

Overall, I didn't find adefmay's tone and obstinacy at the start to be worse than some comments (elsewhere) by people who I consider valuable members of Less Wrong.

Comment author: Eliezer_Yudkowsky 03 January 2010 05:54:42AM 4 points [-]

I'd have to say that the trollness seems obvious as all hell to me. Also, consider the prior probabilities.

Comment author: orthonormal 03 January 2010 05:54:47AM 3 points [-]

Partial Fix #1:

We put together a special forum (subset of threads and posts) for a number of old argument topics, and make sure that it is readily accessible from the main page, or especially salient for new people. We have a norm there to (as much as possible) write out our points from scratch instead of using shorthand and links as we do in discussions between LW veterans.

Benefits:

  • It's much less of a status threat to be told that one's comment belongs in another thread than to have it dismissed, as happened to adefmay.

  • Most of the trouble seems to happen when new people jump into a current thread and derail a conversation between LW veterans, who react brusquely as above. Separating the newest/most advanced conversations from the old objections should make everyone happier.

  • I find that the people who have been on LW for a few months have just the right kind of zeal for these newfound ideas that makes them eager and able to defend them against the newest people, who find them absurd. I think this would be a good thing for both groups of people, and I expect it to happen naturally should such a place be created.

So if we made some collection of "FAQ threads" and made a big, obvious, enticing link to them on either the front page or the account creation page (that is, we give them a list of counterintuitive things we believe or interesting questions we've tackled, in the hopes they head there first), we might avoid more of these calamities in the future.

Comment author: Jack 03 January 2010 07:38:18AM 16 points [-]

I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions and coming up with answers that have consensus support.

  • Why is almost everyone here an atheist?
  • What are the "points" on each comment?
  • Aren't knowledge and truth subjective or undefinable?
  • Can you ever really prove anything?
  • What's all this talk about probabilities and what is a Bayesian?
  • Why do you all agree on so much? Am I joining a cult?
  • What are the moderation rules? What kind of comments will result in downvotes and what kind of comments could result in a ban?
  • Who are you people? (Demographics, plus a statement to the effect that demographics don't matter here.)

What else? Anyone have drafts of answers?

Comment author: orthonormal 03 January 2010 06:19:51PM *  3 points [-]

More FAQ topics:

  • Why the MWI?
  • Why do you all think cryonics will probably work?
  • Why a computational theory of mind?
  • What about free will and consciousness?
  • What do you mean by "morality", anyway?
  • Wait a sec. Torture over dust specks?!?

Basically, I think we need to do more for newcomers than just tell them to read a sequence; I mean, I think each of us had to actually argue out points we thought were obvious before we moved forward on these issues. Having a continuous open thread on such topics (including, of course, links to the relevant posts or Wiki entry) would be much better, IMO.

A monthly "Old Topics" thread, or a collection of them on various topics, would be great, although there ought to be a really obvious link directing people to it.

Comment author: dfranke 01 January 2010 06:27:10PM *  10 points [-]

In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.

The dire floor of Earth afore
saw once a fortuitous spark.
Life's swift flame sundry creature leased
and then one age a freakish beast
awakened from the dark.
Boundless skies beheld his eyes
and strident through the void he cried;
set his devices into space;
scryed for signs of a yonder race;
but desolate hush replied.
Stars surround and worlds abound,
the spheres too numerous to name.
Yet still no creature yet attains
to seize this lot, so each remains
raw hell or barren plain.
What daunting pale do most 'fore fail?
Be the test later or done?
Those dooms forgone our lives attest
themselves impel from first inquest:
cogito ergo sum.
Man does boast a charmèd post,
to wield the blade of reason pure.
But if this prov'ence be not rare,
then augurs fate our morrow bare,
our fleeting days obscure.
But might we nigh such odds defy,
and see before us cosmos bend?
Toward the heavens thy mind set,
and waver not: this proof, till 'yet,
did ne'er with man contend!

Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.

Comment author: pjeby 03 January 2010 05:39:26AM 3 points [-]

I'll just post the poem to stand by itself and for y'all to rip apart.

It reminds me of something that happened in college, where a poem of mine was being put in some sort of collection; there was a typo in it, and I mentioned a correction to the professor. He nodded wisely, and said, "yes, that would keep it to iambic pentameter."

And I said, "iambic who what now?"... or words to that effect.

And then I discovered the wonderful world of meter. ;-)

Your poem is trying to be in iambic tetrameter (four iambs - "dit dah" stress patterns), but it's missing the boat in a lot of places. Iambic tetrameter also doesn't lend itself to sounding serious; you can write something serious in it, sure, but it'll always have kind of a childish singsong-y sort of feel, so you have to know how to counter it.

Before I grokked this meter stuff, I just randomly tried to make things sound right, which is what your poem appears to be doing. If you actually know what meter you're trying for, it's a LOT easier to find the right words, because they will be words that naturally hit the beat. Ideally, you should be able to read your poem in a complete monotone and STILL hear the rhythmic beating of the dit's and dah's... you could probably write a morse code message if you wanted to. ;-)

Anyway, you will probably find it a lot easier to fix the problems with the poem's rhythm if you know what rhythm you are trying to create. Enjoy!
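
For the programmatically inclined, here's a rough sketch of a meter checker using the third-party "pronouncing" library (a wrapper around the CMU Pronouncing Dictionary). One caveat: the dictionary assigns monosyllables a fixed stress, while in real scansion their stress depends on context, so treat mismatches as hints rather than verdicts.

    # Rough meter checker: concatenate each word's dictionary stress pattern
    # and compare against iambic tetrameter ("01" repeated four times).
    # Secondary stress (2) is crudely folded into "unstressed" here.
    import re
    import pronouncing  # pip install pronouncing

    def line_stresses(line):
        pattern = ""
        for word in re.findall(r"[a-z']+", line.lower()):
            phones = pronouncing.phones_for_word(word)
            if not phones:
                return None  # word not in the dictionary; can't scan this line
            pattern += pronouncing.stresses(phones[0]).replace("2", "0")
        return pattern

    IAMBIC_TETRAMETER = "01" * 4

    for line in ["Boundless skies beheld his eyes",
                 "and strident through the void he cried"]:
        stresses = line_stresses(line)
        verdict = "matches" if stresses == IAMBIC_TETRAMETER else "off"
        print(f"{line!r}: {stresses} ({verdict})")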

Comment author: Eliezer_Yudkowsky 03 January 2010 05:55:17AM 2 points [-]

For those who still read books, I recommend "The Poem's Heartbeat".

Comment author: Wei_Dai 11 January 2010 10:23:45PM 8 points [-]

I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is that it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.

Perhaps the movie also illustrates a danger of majoritarianism: if someone really found a secret that could save the world, it would be tragic if he allowed himself to be convinced otherwise due to majoritarian considerations. Don't most (nearly all?) true beliefs start their existence as minority views?

Comment author: MichaelGR 19 January 2010 04:19:45PM *  2 points [-]

The movie is also a good example of existential risk in fiction (in this case, a genetically engineered biological agent).

Comment author: komponisto 05 January 2010 12:03:25PM 8 points [-]

Okay, so....a confession.

In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.

I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.

But that's about the extent of my personal acquaintance with the genre.

Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these New Year's Resolutions, I can "resolve" to perhaps, maybe, some time, actually do that (if I can ever manage to squeeze it in between actually doing work and procrastinating on the Internet).

Problem is, there seems to be a lot of it out there. How would a newcomer know where to start?

Well, what better place to ask than here, a place where many would cite this type of literature as formative with respect to developing their saner-and-more-interesting-than-average worldviews?

Alicorn recommended John Scalzi (thanks). What say others?

Comment author: Vladimir_Nesov 05 January 2010 09:14:08PM 8 points [-]

Greg Egan: Permutation City, Diaspora, Incandescence.
Vernor Vinge: True Names, Rainbows End.
Charlie Stross: Accelerando.
Scott Bakker: Prince of Nothing series.

Comment author: jscn 06 January 2010 11:08:05PM 3 points [-]

Voted up mainly for the Greg Egan recommendations.

Comment author: ciphergoth 05 January 2010 03:22:59PM 6 points [-]

My first recommendation here is always Iain M Banks, Player of Games.

Comment author: Alicorn 05 January 2010 02:39:46PM *  6 points [-]

If you'd like some TV recommendations as well, here are some things that you can find on Hulu:

Firefly. It's not all available at the same time, but they rotate the episodes once a week; in a while you'll be able to start at the beginning. If you haven't already seen the movie, put it off until you've watched the whole series.

Babylon 5. First two seasons are all there. It takes a few episodes to hit its stride.

If you're willing to search a little farther afield, Farscape is good, and of the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this seems for some reason to be correlated with gender).

Comment author: ShardPhoenix 07 January 2010 03:08:25AM 2 points [-]

If you're willing to search a little farther afield, Farscape is good, and of the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this seems for some reason to be correlated with gender).

Maybe that's because DS9 is about a bunch of people living in a big house, while TNG is about a bunch of people sailing around in a big boat ;). I prefer DS9 myself, though, and I'm a guy.

Comment author: Kevin 05 January 2010 12:31:58PM 6 points [-]

I am a big fan of Isaac Asimov. Start with his best short story, which I submit as the best sci-fi short story of all time. http://www.multivax.com/last_question.html

Comment author: Bindbreaker 05 January 2010 12:52:58PM 6 points [-]

I prefer this one, and yes, it really is that short.

Comment author: NancyLebovitz 06 January 2010 01:14:23AM 4 points [-]

Vinge's Marooned in Realtime, A Fire Upon the Deep. The former introduced the idea of the Singularity; the latter gets a lot of fun out of playing near the edge of it.

Olaf Stapledon: Last and First Men, Star Maker.

Poul Anderson: Brain Wave. What happens if there's a drastic, sudden intelligence increase?

After you've read some science fiction, if you let us know what you've liked, I bet you'll get some more fine-tuned recommendations.

Comment author: Wei_Dai 07 January 2010 12:27:18AM *  3 points [-]

I second A Fire Upon the Deep (and anything by Vinge, but A Fire Upon the Deep is my favorite). BTW, it contains what is in retrospect a clear reference to the FAI problem. See http://books.google.com/books?id=UGAKB3r0sZQC&lpg=PA400&ots=VBrKocfTHM&dq=%22fast%20burn%20transcendence%22&pg=PA400

If anyone read it for the first time recently, I'm curious what you think of the Usenet references. Those were my favorite parts of the book when I first read it.

Comment author: NancyLebovitz 13 April 2010 02:38:13AM 3 points [-]

It depends on what you're looking for. Books you might enjoy? If so, we need to know more about your tastes. Books we've liked? Books which have influenced us? An overview of the field?

In any case, some I've liked-- Heinlein's Rocketship Galileo, which is quite a nice intro to rationality and also has Nazis in abandoned alien tunnels on the Moon, and Egan's Diaspora, which is an impressive depiction of people living as computer programs.

Oh, and Vinge's A Fire Upon the Deep, which is an effort to sneak up on writing about the Singularity (Vinge invented the idea of the Singularity), and Kirstein's The Steerswoman (first of a series), which has the idea of a guild of people whose job it is to answer questions-- and if you don't answer one of their questions, you don't get to ask them anything ever again.

Comment author: jscn 06 January 2010 11:11:34PM *  3 points [-]

  • Solaris by Stanislaw Lem is probably one of my all-time favourites.
  • Anathem by Neal Stephenson is very good.

Comment author: Jawaka 05 January 2010 02:30:16PM 3 points [-]

I am a huge fan of Philip K. Dick. I don't usually read much fiction or even science fiction, but PKD has always fascinated me. Stanislaw Lem is also great.

Comment author: Dreaded_Anomaly 24 January 2011 05:08:47AM *  2 points [-]

I second the recommendations of 1984 and Player of Games (the whole Culture series is good, but that one especially held my interest).

Recommendations I didn't see when skimming the thread:

  • The Hitchhiker's Guide to the Galaxy series by Douglas Adams: A truly enjoyable classic sci-fi series, spanning the length of the galaxy and the course of human history.
  • Timescape by Gregory Benford: Very realistic and well-written story about sending information back in time. The author is an astrophysicist, and knows his stuff.
  • The Andromeda Strain, Sphere, Timeline, Prey, and Next by Michael Crichton: These are his best sci-fi works, aimed at realism and dealing with the consequences of new technology or discovery.
  • Replay by Ken Grimwood: A man is given the chance to relive his life. A stirring tale with several twists.
  • The Commonwealth Saga and The Void Trilogy by Peter F. Hamilton: Superb space opera, in which humanity has colonized the stars via traversable wormholes, and gained immortality via rejuvenation technology. The trilogy takes place a thousand years after the saga, but with several of the same characters.
  • The Talents series and the Tower and Hive series by Anne McCaffrey: These novels deal with the emergence and organization of humans with "psychic" abilities (telekinesis, telepathy, teleportation, and so forth). The first series takes place roughly in the present day, the second far in the future on multiple planets.
  • Priscilla Hutchins series and Alex Benedict series by Jack McDevitt: Two series, unrelated, both examining how humans might explore the galaxy and what they might find (many relics of ancient civilizations, and a few alien races still living). The former takes place in the relatively near future, while the latter takes place millennia in the future.
  • Hyperion Cantos by Dan Simmons: An epic space opera dealing heavily with singularity-related concepts such as AI and human bio-modification, as well as time travel and religious conflict.
  • Otherland series by Tad Williams: In the near future, full virtual reality has been developed. The story moves through a plethora of virtual environments, many drawn from classic literature.

Edit: I have just now realized, after writing all of this out, that this is the open thread for January 2010 rather than January 2011. Oh well.

Comment author: JoshuaZ 09 August 2010 09:00:49PM 2 points [-]

I wouldn't recommend Scalzi. Much of Scalzi is military sci-fi with little realism, and isn't a great introduction to sci-fi. I'd recommend Charlie Stross: "The Atrocity Archives", "Singularity Sky" and "Halting State" are all excellent. The third is very weird in that it is written in the second person, but it's lots of fun. Other good authors to start with are Pournelle and Niven (Ringworld, The Mote in God's Eye, and King David's Spaceship are all excellent).

Comment author: Risto_Saarelma 10 August 2010 07:41:16AM 2 points [-]

Am I somehow unusual for being seriously weirded out by the cultural undertones in Scalzi's Old Man's War books? I keep seeing people in generally enlightened forums gushing over his stuff, but the book read pretty nastily to me, with its mix of a very juvenile approach to science, psychology and pretty much everything else it took on, and its glorification of genocidal war without alternatives. It brought up too many associations with telling kids who don't know better, in simple and exciting terms, about the utter necessity of genocidal war in real-world history, and it seemed too little aware of this itself to be enjoyable.

Maybe it's a Heinlein thing. Heinlein is pretty obscure here in Europe, but seems to be woven into the nostalgia trigger gene in the American SF fan DNA, and I guess Scalzi was going for something of a Heinlein pastiche.

Comment author: NancyLebovitz 10 August 2010 10:16:29AM 2 points [-]

It's nice to know that I'm not the only person who hated Old Man's War, though our reasons might be different.

It's been a while since I've read it, but I think the character who came out in favor of an infrastructure attack (was that the genocidal war?) turned out to be wrong.

What I didn't like about the book was largely that it was science fiction lite-- the world building was weak and vague, and the viewpoint character was way too trusting. I've been told that more is explained in later books, but I had no desire to read them.

There's a profoundly anti-imperialist/anti-colonialist theme in Heinlein, but most Heinlein fans don't seem to pick up on it.

Comment author: Risto_Saarelma 10 August 2010 10:57:59AM 3 points [-]

The most glaring SF-lite problem for me was that in both Old Man's War and The Ghost Brigades, the protagonist was basically written as a generic twenty-something Competent Man character, despite both books deliberately setting the protagonist up as very unusual compared to the archetype. In Old Man's War, the protagonist is a 70-year-old retiree in a retooled body, and in The Ghost Brigades something else entirely. Both of these instantly point to what I thought would have been the most interesting thing about the books: how someone coming from a very different place psychologically approaches stuff that's normally tackled by people in their twenties. And then pretty much nothing at all is done with this angle. Weird.

Comment author: NancyLebovitz 10 August 2010 02:15:16PM 1 point [-]

There was so much, so very much sf-lite about that book. Real military life is full of detail and jargon. OMW had something like two or three kinds of weapons.

There was the big sex scene near the beginning of the book, and then the characters pretty much forgot about sex.

It was intentionally written to be an intro to sf for people who don't usually read the stuff. Fortunately, even though the book was quite popular, that approach to writing science fiction hasn't caught on.

Comment author: Risto_Saarelma 10 August 2010 01:30:07PM *  1 point [-]

Come to think of it, I had a similar problem with James P. Hogan's Voyage from Yesteryear, which was about a colony world of in-vitro-grown humans raised by semi-intelligent robots, without adult parents. I thought this would lead to some seriously weird and interesting social psychology among the colonists, when all sorts of difficult-to-codify cultural layers are lost in favor of subhuman machines as parental authorities and things to aspire to.

Turned out it was just a setup for a lecture on how anarchism, plus shooting people you don't like, would lead to the perfect society if it weren't for those meddling history-perpetuating traditionalists, with the colonists of course being exemplars of psychological normalcy and wholesomeness as required by the lesson; at that point I stopped reading the book.

Comment author: daos 17 January 2010 05:01:01PM 2 points [-]

Many good recommendations so far, but unbelievably nobody has yet mentioned Iain M. Banks' series of 'Culture' novels, based on a humanoid society (the 'Culture') run by incredibly powerful AIs known as 'Minds'.

Highly engaging books which deal with what a highly technologically advanced, post-singularity society might be like in terms of morality, politics, philosophy, etc. They are far-fetched and a lot of fun. Here's the list to date:

  • Consider Phlebas (1987)
  • The Player of Games (1988)
  • Use of Weapons (1990)
  • Excession (1996)
  • Inversions (1998)
  • Look to Windward (2000)
  • Matter (2008)

They are not consecutive, so reading order isn't that important, though it is nice to follow the evolution of the writing.

Comment deleted 05 January 2010 03:24:44PM [-]
Comment author: Technologos 05 January 2010 04:21:48PM 5 points [-]

I strongly second Snow Crash. I enjoyed it thoroughly.

Comment author: RichardKennaway 05 January 2010 12:50:54PM *  2 points [-]

Bearing in mind that you're asking this on LessWrong, these come to mind:

Greg Egan. Everything he's written, but start with his short story collections, "Axiomatic" and "Luminous". Uploading, strong materialism, quantum mechanics, immortality through technology, and the implications of these for the concept of personal identity. Some of his short stories are online.

Charles Stross. Most of his writing is set in a near-future, near-Singularity world.

On related themes are "The Metamorphosis of Prime Intellect", and John C. Wright's Golden Age trilogy.

There are many more SF novels I think everyone should read, but that would be digressing into my personal tastes.

Some people here have recommended R. Scott Bakker's trilogy that begins with "The Darkness That Comes Before", as presenting a picture of a superhuman rationalist, although having ploughed through the first book I'm not all that moved to follow up with the rest. I found the world-building rather derivative, and the rationalist doesn't play an active role. Can anyone sell me on reading volume 2?

Comment author: Zack_M_Davis 05 January 2010 07:15:54PM 2 points [-]

Strongly seconding Egan. I'd start with "Singleton" and "Oracle."

Also of note, Ted Chiang.

Comment author: Jack 05 January 2010 03:01:28PM *  2 points [-]

LeGuin- The Dispossessed

William Gibson- Neuromancer

George Orwell- 1984

Walter Miller - A Canticle for Leibowitz

Philip K. Dick- The Man in the High Castle

That actually might be my top five books of all time.

Comment author: whpearson 05 January 2010 12:29:44PM 2 points [-]

I'd say identify what sort of future scenarios you want to explore and ask us to identify exemplars. Or is the goal just to get a common vocabulary to discuss things?

Reading sci-fi, while potentially valuable, should be done with a purpose in mind. Unless you need another potential source of procrastination.

Comment author: komponisto 05 January 2010 12:38:58PM 5 points [-]

Reading sci-fi, while potentially valuable, should be done with a purpose in mind.

Goodness gracious. No, just looking for more procrastination/pure fun. I've gotten along fine without it thus far, after all.

(Of course, if someone actually thinks I really do need to read sci-fi for some "serious" reason, that would be interesting to know.)

Comment author: sketerpot 05 January 2010 09:31:17PM 1 point [-]

Robert Heinlein wrote some really good stuff (before becoming increasingly erratic in his later years). Very entertaining and fun. Here are some that I would recommend for starting out with:

Tunnel in the Sky. The opposite of Lord of the Flies. Some people are stuck on a wild planet by accident, and instead of having civilization collapse, they start out disorganized and form a civilization because it's a good idea. After reading this, I no longer have any patience for people who claim that our natural state is barbarism.

Citizen of the Galaxy. I can't really summarize this one, but it's got some good characters in it.

Between Planets. Our protagonist finds himself in the middle of a revolution all of a sudden. This was written before we knew that Venus was not habitable.

I was raised on this stuff. Also, I'd like to recommend Startide Rising, by David Brin, and its sequel The Uplift War. They're technically part of a trilogy, but reading the first book (Sundiver) is completely unnecessary. It's not really light reading, but it's entertaining and interesting.

Comment author: NancyLebovitz 09 August 2010 09:06:46PM 1 point [-]

Note about Tunnel in the Sky-- they didn't just form a society (not a civilization) because they thought it was a good idea to do so-- they'd had training in how to build social structures.

Comment author: gwern 02 January 2010 01:50:47PM *  8 points [-]

I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).

I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray, and that this was true of just about every subject-matter*; even then it was obvious that buildings must cost a lot to build and then maintain, and a shortage of space didn't seem a plausible explanation (I passed empty classrooms all the time, and they were often the same classrooms sitting empty pretty much all day). Big buildings seemed like perfect white elephants. I could understand the donors' reason, but not anyone else's.

When I remembered my childhood aporia, I suddenly realized - 'Oh, this is status-seeking behavior; big buildings are unfakeable social signals of wealth and influence. I was just being narrow-minded in assuming that if it didn't have your name on it, it couldn't boost your status.'

(I don't really have any point to this anecdote, but I thought it was interesting that OB/LW reading solved a longstanding puzzle of mine.)

* Obviously a few subject-matters do require specialized facilities; it's hard to do pottery without a specialized art-room, for example. But those are a minority.

Comment author: Eliezer_Yudkowsky 01 January 2010 08:37:48PM 8 points [-]

Akrasia FYI:

I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.

I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.

Next up to try: Pick up a CPAP machine off Craigslist.

Comment author: wedrifid 02 January 2010 09:49:13AM 6 points [-]

I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working.

A technical problem that is easily solvable. My approach has been to use VMware. All the productive tools are installed on the base OS. Procrastination tools are installed on a virtual machine. Starting the procrastination box takes about 20 seconds (and, more importantly, requires a significant active decision), but closing it to revert to 'productive mode' takes no time at all.

Comment author: jimrandomh 02 January 2010 02:34:05AM 3 points [-]

I've noticed the same problem in separating work from procrastination environments. But it might work if it was asymmetric - say, there's a single fast hotkey to go from procrastination mode to work mode, but you have to type a password to go in the other direction. (Or better yet, a 5 second delay timer that you can cancel.)
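
A toy sketch of that asymmetry in Python (the hotkey wiring and any actual blocking of distractions are left out; this only demonstrates the cancellable delay):

    # Asymmetric mode switching: returning to work is instant, while
    # entering procrastination mode imposes a cancellable 5-second delay.
    import sys
    import time

    def enter_work_mode():
        print("Work mode: on.")  # instant, zero friction

    def enter_procrastination_mode(delay=5):
        print(f"Procrastination mode in {delay}s (Ctrl-C to cancel)...")
        try:
            for remaining in range(delay, 0, -1):
                print(f"  {remaining}...")
                time.sleep(1)
        except KeyboardInterrupt:
            print("Cancelled; staying in work mode.")
            return
        print("Procrastination mode: on.")

    if __name__ == "__main__":
        if "--slack" in sys.argv:
            enter_procrastination_mode()
        else:
            enter_work_mode()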

Comment author: kpreid 03 January 2010 12:57:06PM 2 points [-]

I had the same problem when I was using just virtual screens with a key to switch, not even separate accounts. It was a significant decrease in productivity before I realized the problem. I think it's not just the effort to switch; it's also that the work doesn't stay visible so that you think about it.

Comment author: AdeleneDawner 01 January 2010 06:06:56PM 7 points [-]

This article about gendered language showed up on one of my feeds a few days ago. Given how often discussions of nongendered pronouns happen here, I figure it's worth sharing.

Comment author: Daniel_Burfoot 02 January 2010 01:47:48PM 5 points [-]

Nice, I liked the part about Tuyuca:

Most fascinating is a feature that would make any journalist tremble. Tuyuca requires verb-endings on statements to show how the speaker knows something. Diga ape-wi means that “the boy played soccer (I know because I saw him)”, while diga ape-hiyi means “the boy played soccer (I assume)”. English can provide such information, but for Tuyuca that is an obligatory ending on the verb. Evidential languages force speakers to think hard about how they learned what they say they know.

It would be fun to try to build a "rational" dialect of English that requires people to follow rules of logical inference and reasoning.
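
As a toy sketch of what "obligatory" could mean in such a dialect, here is a little Python data model in which a statement simply cannot be constructed without an evidential marker (the category names are invented for illustration):

    # Obligatory evidentiality, Tuyuca-style: no claim without a basis.
    from dataclasses import dataclass
    from enum import Enum

    class Evidential(Enum):
        WITNESSED = "I saw it myself"
        HEARSAY = "someone told me"
        INFERRED = "I deduced it from evidence"
        ASSUMED = "I am guessing"

    @dataclass(frozen=True)
    class Statement:
        claim: str
        basis: Evidential  # no default value: the marker is mandatory

    print(Statement("the boy played soccer", Evidential.WITNESSED))
    # Statement("the boy played soccer")  # TypeError: 'basis' is required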

Comment author: pdf23ds 03 January 2010 12:54:20AM 6 points [-]

If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?

Comment author: SoullessAutomaton 07 January 2010 03:31:55AM 3 points [-]

I present for your consideration a delightful quote, courtesy of a discussion on another site:

The Sibyl of Cumae, who led Aeneas on his journey to the underworld, for which he collected the Golden Bough, was the most famous prophetess of the ancient world. Beloved of Apollo, she was given anything she might desire. She asked for eternal life. Sadly, Apollo granted her wish, for she had forgotten to ask for eternal youth. Now dried, desiccated, and shrunken, she is carried in a cricket cage, and when the boys ask her what she desires, she says: "I want to die."

I think the moral of the story is: stay healthy and able-bodied as much as possible. If, at some point, you should find yourself surviving far beyond what would be reasonably expected, it might be wise to attempt some strategic quantum suicide reality editing while you still have the capacity to do so...

Comment author: Alicorn 03 January 2010 12:55:58AM 4 points [-]

We frequently become unconscious (sleep) in our threads of experience. There is no obvious reason we couldn't fall comatose after becoming sufficiently battered.

Comment deleted 03 January 2010 01:07:52PM [-]
Comment author: orthonormal 03 January 2010 06:55:51PM *  4 points [-]

A superhuman intelligence that understood the nature of human consciousness and subjective experience would presumably know whether QI was correct, incorrect, or somehow a wrong question. Consciousness and experience all happen within physics; they just currently confuse the hell out of us.

Comment deleted 03 January 2010 09:12:40PM *  [-]
Comment author: Eliezer_Yudkowsky 03 January 2010 04:01:35AM 2 points [-]

"The author recommends that anyone reading this story sign up with Alcor or the Cryonics Institute to have their brain preserved after death for later revival under controlled conditions."

(From a little story which assumes QTI.)

Comment author: Vladimir_Nesov 02 January 2010 12:40:33AM 16 points [-]

The Guardian published a piece citing Less Wrong:

The number's up by Oliver Burkeman

When it comes to visualising huge sums – the distance to the moon, say, or the hole the economy is in – we're pretty useless really

Comment author: RichardKennaway 02 January 2010 02:33:17PM 2 points [-]

Here's a nice visualisation of some big numbers.

Comment author: Eliezer_Yudkowsky 01 January 2010 08:53:01PM 14 points [-]

Recent observations on the art of writing fiction:

  1. My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses. I think this may be related to the fundamental attribution error: we see ourselves as reacting naturally to the environment, but others as driven by innate impulses. Unfortunately this doesn't work for storytelling at all! It means my viewpoint character ends up as a ping-pong ball in a world of strong, driven other characters. (If you don't see this error in my published fiction, it's because I don't publish unsuccessful stories.)

  2. Closely related to the above is another recent observation: My main character has to be sympathetic, in the sense of having motivations that I can respect enough to write them properly. Even if they're mistaken, I have to be able to respect the reasons for their mistakes. Otherwise my viewpoint automatically shifts to the characters around them, and once again the non-protagonist ends up stronger than the protagonist.

  3. Just as it's necessary to learn to make things worse for your characters, rather than following the natural impulse to make things better, it's also necessary to learn to deepen mysteries rather than following the natural impulse to explain them right away.

  4. Early problems in a story have to echo the final resolution.

  5. This isn't really about my own work, but I've been reading some fanfiction lately and it just bugs the living daylights out of me. I hereby dub this the First Law of Fanfiction: Every change which strengthens the protagonists requires a corresponding worsening of their challenges. Or in plainer language, You can't make Frodo a Jedi without giving Sauron the Death Star. There are stories out there with correctly spelled words, and even good prose, which are failing out of ignoring this one simple principle. If I could put this up on a banner on all the authors' pages of Fanfiction.Net, I would do so.

Comment author: CronoDAS 02 January 2010 08:58:48AM 4 points [-]

My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses.

That's not uncommon. Villains act, heroes react.

I hereby dub this the First Law of Fanfiction: Every change which strengthens the protagonists requires a corresponding worsening of their challenges. Or in plainer language, You can't make Frodo a Jedi without giving Sauron the Death Star.

It's already called The Law of Bruce, but it's stated a little differently.

Comment author: wedrifid 02 January 2010 09:15:13AM *  3 points [-]

I noticed where I was while on the first page this time. Begone with you!

Comment author: Wei_Dai 21 January 2010 02:49:20AM *  5 points [-]

Suppose we want to program an AI to represent the interests of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individuals in the group, but that runs into the problem of interpersonal comparison of utility. (Was there ever a post about this? Does Eliezer have a preferred approach?)

Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program each with the utility function of that individual. Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe. In the meantime, the N AIs are to negotiate amongst themselves and, if necessary, are given help to enforce their agreements.

The advantages of this approach are:

  • AIs will need to know how to negotiate with each other anyway, so we can build on top of that "for free".
  • There seems little question that the scheme is fair, since everyone is given an equal amount of bargaining power.
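
For concreteness, here is a minimal sketch of the scheme (the Agent class and the negotiate() placeholder are invented for illustration; the actual negotiation machinery is of course the hard part being built on):

    # N AIs, one per person; they negotiate until a preset deadline, then
    # one is chosen uniformly at random and takes over, bound by whatever
    # agreements it made. Equal selection odds = equal bargaining power.
    import random

    class Agent:
        def __init__(self, name, utility_fn):
            self.name = name
            self.utility = utility_fn
            self.commitments = []  # enforceable agreements struck so far

        def negotiate(self, others):
            # Placeholder: a real agent would trade binding commitments,
            # e.g. "if I win, I also optimize parts of your utility."
            for other in others:
                self.commitments.append(f"honor some of {other.name}'s goals")

    def run_group(agents, rounds=10):
        for _ in range(rounds):
            for agent in agents:
                agent.negotiate([b for b in agents if b is not agent])
        return random.choice(agents)  # the randomly selected "dictator"

    group = [Agent(f"AI-{i}", lambda outcome: 0.0) for i in range(5)]
    winner = run_group(group)
    print(f"{winner.name} takes over, bound by {len(winner.commitments)} commitments")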

Comments?

ETA: I found a very similar idea mentioned before by Eliezer.

Comment author: Alicorn 21 January 2010 02:56:37AM 3 points [-]

Unless you can directly extract a sincere and accurate utility function from the participants' brains, this is vulnerable to exaggeration in the AI programming. Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be willing to back off to 6 in exchange for concessions regarding Y from other AIs that don't want much X.
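
To make the incentive concrete, here is a toy example (the proportional-split mechanism is invented purely for illustration; it is not part of the proposal above):

    # If a naive mechanism divides a resource in proportion to declared
    # demands, inflating your declared utility function pays off.
    def proportional_split(total, demands):
        s = sum(demands)
        return [total * d / s for d in demands]

    TOTAL_X = 10
    TRUE_OPTIMUM = 6  # my real utility from X is min(share, 6)

    honest = proportional_split(TOTAL_X, [6, 6])     # [5.0, 5.0]
    inflated = proportional_split(TOTAL_X, [12, 6])  # [6.67, 3.33]

    print("honest utility:", min(honest[0], TRUE_OPTIMUM))      # 5.0
    print("inflated utility:", min(inflated[0], TRUE_OPTIMUM))  # 6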

Comment author: Eliezer_Yudkowsky 07 January 2010 01:52:54AM 5 points [-]
Comment author: Eliezer_Yudkowsky 03 March 2010 02:10:30AM 8 points [-]

Transcript:

--

Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.

Astrologer: Yes, that would be a perverse thing to do, wouldn't it.

Dawkins: It would be - yes, but I mean wouldn't that be a good test?

Astrologer: A test of what?

Dawkins: Well, how accurate you are.

Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.

Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?

Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.

Dawkins: I'd have thought you'd be eager.

Astrologer: (Laughs.)

Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.

Astrologer: I just don't believe in the experiment, Richard, it's that simple.

Dawkins: Well you're in a kind of no-lose situation then, aren't you.

Astrologer: I hope so.

--

Comment author: PhilGoetz 07 January 2010 05:14:10AM 4 points [-]

Dawkins: "Well... you're sort of in a no-lose situation, then."

Astrologer: "I certainly hope so."

Comment author: AngryParsley 09 January 2010 02:39:30AM 3 points [-]

That video has been taken down, but you can skip to around 5 minutes into this video to watch the astrology bit.

Comment author: Cyan 20 January 2010 05:34:30PM *  2 points [-]

A fine example of:

To correctly anticipate, in advance, which experimental results shall need to be excused, the dragon-claimant must (a) possess an accurate anticipation-controlling model somewhere in his mind, and (b) act cognitively to protect either (b1) his free-floating propositional belief in the dragon or (b2) his self-image of believing in the dragon.

Comment author: Kaj_Sotala 03 January 2010 08:20:30AM *  5 points [-]

Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.

Comment author: Blueberry 03 January 2010 12:34:05PM 10 points [-]

There are several that I've wondered about:

  1. Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.

  2. Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)

  3. Try to create a society where people grow up seeing sexual activity as being as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.

  4. Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences the way they think and act.

Comment deleted 03 January 2010 01:03:37PM [-]
Comment author: MatthewB 03 January 2010 01:09:35PM 2 points [-]

I've noticed that some of the Pacific Island countries don't have much in the way of sexual taboos, and they tend to teach their kids things like:

  • Don't stick your thingy in there without proper lube

or

  • If you are going to do that, clean up afterward.

Japan is also a country that has few sexual taboos (when compared to western Christian society). They still have their taboos and strangeness surrounding sex, but it is not something that is considered sinful or dirty.

I am really interested in that last suggestion, and it sounds like one of the areas I want to explore when I get to grad school (and beyond). At Eliezer's talk at the first Singularity Summit (and other talks I have heard him give) he speaks of a possible mind space. I would like to explore that mind space further outside of the human mind.

As John McCarthy proposed in one of his books, it might be the case that even a thermostat is a type of mind. I have been exploring how current computers are a type of evolving mind, with people as the genetic agents: we take things in computers that work for us and combine them with other things, getting an evolutionary development of an intelligent agent.

I know that it is nothing special, and others have gone down that path as well, but I'd like to look into how we can create these types of minds biologically. Is it possible to create an alien mind in a human brain? Your 4th suggestion seems to explore this space. I like that (I should upvote it as a result).

Comment author: Kaj_Sotala 03 January 2010 08:25:45AM 3 points [-]

I'd be really curious to see what happened in a society where your social gender was determined by something else than your biological sex. Birth order, for instance. Odd male and even female, so that every family's first child is considered a boy and their second a girl. Or vice versa. No matter what the biology. (Presumably, there'd need to be some certain sign of the gender to tell the two apart, like all social females wearing a dress and no social males doing so.)

Comment author: MBlume 05 January 2010 11:56:25AM 3 points [-]

I'd like to put about 50 anosognosiacs and one healthy person in a room on some pretext, and see how long it takes the healthy person to notice everyone else is delusional, and whether ve then starts to wonder if ve is delusional too.

Comment author: MatthewB 03 January 2010 09:11:17AM 2 points [-]

I'd like to know how many people would eat human meat if it was not so taboo (no nervous system, so as to avoid nasty prion diseases). Ever since I accidentally had a bite of finger when I was about 19, I've wondered what a real bite of a person would taste like (prepared properly... maybe a ginger/garlic sauce?).

Also, building on Kaj Sotala's proposal, what about sex assignment by job or profession (instead of by biological sex)? So, all Doctors or Health Care workers would be female, all Soldiers would be male, all ditch diggers would be male, yet all bakers would be female. All Mailmen would be male, yet all waiters would be female.

Then, one could have multiple sex-assignments if one worked more than one job. How about a neuter sex and a dual sex in there as well (so the neuter sex would have no sex, and the hermaphrodite would be... well, both...)

Comment author: orthonormal 03 January 2010 09:45:14AM 2 points [-]

since I accidentally had a bite of finger when I was about 19

After your prior revelations and this, I'm waiting for the third shoe to drop.

Comment author: MatthewB 03 January 2010 12:21:58PM *  3 points [-]

Then shoes could be dropping for quite a while...

Edit: I better stop biographing for a while. I've led a life that has been colorful to say the least (I wish that it had been more profitable - it was at one point... But, well, you have a link to what happened to the money)

Comment author: NancyLebovitz 03 January 2010 08:17:00AM *  5 points [-]

Has anyone here tried Lojban? Has it been useful?


I recommend making a longer list of recent comments available, the way Making Light does.


If you've been working with dual n-back, what have you gotten out of it? Which version are you using?


Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.

Comment author: RichardKennaway 03 January 2010 10:30:07AM 2 points [-]

Years ago I was involved with both Loglan (the original) and Lojban (the spin-off, started by a Loglan enthusiast who thought the original creator was being too possessive of Loglan). For me it was simply an entertaining hobby, along with other conlangs such as Láadan and Klingon. But in the history of artificial languages, it is important as the first to be based on the standard universal language of mathematics, first-order predicate calculus.
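
(For a concrete illustration of what that means, though not of Lojban's actual surface grammar: an English sentence like "John gives Mary a book" corresponds to a first-order formula along the lines of ∃x (Book(x) ∧ Gives(john, mary, x)), and Loglan/Lojban sentences are built to wear that predicate-argument structure openly.)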

Comment author: MichaelGR 02 January 2010 05:44:41PM *  5 points [-]

I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.

We learned about the tumor 2 months ago, and it all happened so fast... and it's so final.

This is a reminder to those of you who are thinking about signing up for cryonics; don't wait until it's too late.

Comment author: Larks 02 January 2010 07:10:54PM 10 points [-]

Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics; an idiot's guide to immortality.

Comment author: Alicorn 02 January 2010 06:34:44PM 9 points [-]

I want to sign up. I don't want to sign up alone. I can't convince any of my family to sign up with me. Help.

Comment author: Eliezer_Yudkowsky 02 January 2010 11:51:56PM 9 points [-]

Most battles like this end in losses; I haven't been able to convince any of my parents or grandparents to sign up. You are not alone, but in all probability, the ones who stand with you won't include your biological family... that's all I can say.

Comment author: Technologos 02 January 2010 06:39:54PM 5 points [-]

Now that would be a great extension of the LW community--a specific forum for people who want to make rationalist life decisions like that, to develop a more personal interaction and decrease subjective social costs.

Comment author: aausch 02 January 2010 11:55:51PM 5 points [-]

It could be a more general advice-giving forum. Come and describe your problem, and we'll present solutions.

That might also be a useful way to track the performance of rationalist methods in the real world.

Comment author: Dagon 02 January 2010 10:37:53PM 4 points [-]

Can I help by pointing out flaws in your implied argument ("I believe cryonics is worthwhile, but without my family, I'd rather die, and they don't want to")?

Do you intend to kill yourself when some or all of your current family dies? If living beyond them is positive value, then cryonics seems a good bet even if no current family member has signed up.

Also, your arguments to them that they should sign up get a LOT stronger if you're actually signed up and can help with the paperwork, insurance, and other practical barriers. In fact, some of your family might be willing to sign up if you set everything up for them, including paying, and they just have to sign.

In fact, cryonics as gift seems like a win all around. It's a wonderful signal: I love you so much I'll spend on your immortality. It gets more people signed up. It sidesteps most of the rationalization for non-action (it's too much paperwork, I don't know enough about what group to sign up, etc.).

Comment author: Alicorn 02 January 2010 10:43:19PM *  7 points [-]

Do you intend to kill yourself when some or all of your current family dies?

No. I do expect to create a new family of my own between now and then, though. It is the prospect of spending any substantial amount of time with no beloved company that I dread, and I can easily imagine being so lonely that I'd want to kill myself. (Viva la extroversion.) I would consider signing up with a fiancé(e) or spouse to be an adequate substitute (or even signing up one or more of my offspring) but currently have no such person(s).

Actually, shortly after posting the grandparent, I decided that limiting myself to family members was dumb and asked a couple of friends about it. My best friend has to talk to her fiancé first and doesn't know when she'll get around to that, but was generally receptive. Another friend seems very on-board with the idea. I might consider buying my sister a plan if I can get her to explain why she doesn't like the idea (it might come down to finances; she's being weird and mumbly about it), although I'm not sure what the legal issues surrounding her minority are.

Edit: Got a slightly more coherent response from my sister when I asked her if she'd cooperate with a cryonics plan if I bought her one. Freezing her when she dies "sounds really, really stupid", and she's not interested in talking about her "imminent death" and asks me to "please stop pestering her about it". I linked her to this, and think that's probably all I can safely do for a while. =/

Comment author: Peter_de_Blanc 03 January 2010 12:35:26AM 3 points [-]

Even if none of your relatives sign up for cryonics, I would expect some of them to still be alive when you are revived.

Comment author: Vladimir_Nesov 03 January 2010 12:48:54AM 4 points [-]

There is already only a slim chance of actually getting to the revival part (even though the high payoff keeps the project interesting, as with insurance). Once you mix in the requirements that the necessary tech arrives within (say) 70 years, so that someone alive today is still around, and that you also manage to die before then, not a lot is left, so I wouldn't call it something to be "expected". "Conditional on you getting revived, there is a good chance some of your non-frozen relatives will still be alive" is more like it (and maybe that's what you meant).

Comment author: Alicorn 03 January 2010 12:47:18AM *  2 points [-]

Do you mean that a relative I have now, or one who will be born later, will probably be around at that time? Because the former would require that I die soon (while my relatives don't) or that there's an awfully rapid turnaround between my being frozen and my being defrosted.

Comment author: JamesAndrix 03 January 2010 09:44:29AM 4 points [-]

Well the whole point of signing up now is that you might die soon.

So sign up now. If you get to be old And still have no young family And the singularity doesn't seem close, then cancel.

Comment author: AndrewWilcox 05 January 2010 12:55:28AM 2 points [-]

You have best friends now; how did you meet them? In the worst-case scenario where people you currently know don't make it, do you doubt that you'll be able to quickly make new friends?

Suppose that there are hundreds of people who would want to be your best friend, and whom you would genuinely be good friends with. Your problem is that you don't know who they are, or how to find them. Not to be too much of a technology optimist :-), but imagine if the super-Facebook search engine of the future could accurately put you in touch with those hundreds.

Comment author: AngryParsley 03 January 2010 11:54:32AM 3 points [-]

It's much easier to overcome your own aversion to signing up alone than to convince your family to sign up with you. Even assuming you can convince them that living longer is a good thing, there are a ton of prerequisites needed before one can accurately evaluate the viability of cryonics.

Comment author: scotherns 07 January 2010 02:30:44PM 2 points [-]

Do it anyway. Lead by example. Over time, you might find they become more used to the idea, particularly if they have someone who can help them with the paperwork and organisational side of things. If you can help them financially, so much the better.

If you are successfully revived, you will have plenty of time to make new friends, and start a new family. I don't mean to sound callous, but it's not unheard of for people to lose their families and eventually recover. I'm doing everything I can to persuade my family to sign up, but it's up to them to make the final decision.

I'd give my life to save my family, but I wouldn't kill myself if I found myself alone.

Comment author: rwallace 03 January 2010 03:06:24AM 1 point [-]

I think it's great that you've taken the first steps, and would encourage you to go ahead and sign up.

In my experience, arguing with people who've decided they definitely don't want to do something, especially if their reasons are irrational, is never productive. As Eliezer says, it may simply be that those who stand with you will be your friends and the family you create, not the family you came from. But I would guess your best chance of your sister signing up is to go ahead right now without pushing the matter, so that in a few years the fact of your being signed up will have become more of an established state of affairs.

It's a sobering demonstration of just how much the human mind relies on social proof for anything that can't be settled by immediate personal experience. (Conjecture: any intelligence must at least initially work this way; a universe in which it was not necessary would be too simple to evolve intelligence in the first place. But I digress.)

Is there anything that can be done to bend social instinct more in the right direction here? For example, I know there have been face-to-face gatherings for those who live within reach of them; would it help if several people at such a gathering showed up wearing 'I'm signed up for cryonics' badges?

Comment author: Vladimir_Nesov 02 January 2010 01:23:48PM *  5 points [-]

Alexandre Borovik summarizes the Bayesian critique of the null-hypothesis rejection method, citing the classic
J. Cohen (1994). `The Earth Is Round (p < .05)'. American Psychologist 49(12):997-1003.

The fallacy of null hypothesis rejection

If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?)
This person is a member of Congress.
Therefore, he is probably not an American.
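
To put rough numbers on the inversion (a sketch in Python; the population figures are approximate assumptions, not from Cohen's paper):

    # The premise and the "conclusion" condition on different events.
    americans = 300e6  # approximate US population, 2010
    congress = 535     # voting members of Congress, all of them American

    # Premise: a randomly chosen American is almost surely not in Congress.
    p_congress_given_american = congress / americans  # ~0.0000018

    # The "conclusion" inverts the conditional. What we actually want is
    # P(not American | member of Congress), which here is exactly 0.
    p_not_american_given_congress = 0 / congress      # 0.0

    print(p_congress_given_american, p_not_american_given_congress)

P(not Congress | American) is close to 1, yet P(not American | Congress) is 0; in the same way, a small P(data | null hypothesis) says nothing by itself about P(null hypothesis | data).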

Comment author: Alicorn 29 January 2010 08:30:44PM 12 points [-]

"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.

Comment author: Eliezer_Yudkowsky 29 January 2010 10:51:23PM 4 points [-]

This woman is a model unto the entire human species.

Comment author: whpearson 21 January 2010 12:02:34AM *  4 points [-]

Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).

It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.

Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them each an easy puzzle, and they will both do well. Paul will complete it quickly and smile proudly at how well he performed. Matt will complete it quickly and be satisfied that he has mastered the skill involved.

Now give them each a difficult puzzle. Paul will jump in gamely, but it will soon become clear he cannot overcome it as impressively as he did the last one. The opportunity to show off has disappeared, and Paul will lose interest and give up. Matt, on the other hand, when stymied, will push harder. His early failure means there's still something to be learned here, and he will persevere until he does so and solves the puzzle.

While a performance orientation improves motivation for easy challenges, it drastically reduces it for difficult ones. And since most work worth doing is difficult, it is the mastery orientation that is correlated with academic and professional success, as well as self-esteem and long-term happiness.


When I learned about performance and mastery orientations, I realized with growing horror just what I'd been doing for most of my life. Going through school as a "gifted" kid, most of the praise I'd received had been of the "Wow, you must be smart!" variety. I had very little ability to follow through or persevere, and my grades tended to be either A's or F's, as I either understood things right away (such as, say, calculus) or gave up on them completely (trigonometry). I had a serious performance orientation. And I was reinforcing it every time I played an RPG.

Comment author: MrHen 18 January 2010 06:27:09PM *  4 points [-]

What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?

Comment author: ciphergoth 18 January 2010 08:55:21PM 2 points [-]

(a). Lots of us scan the "Recent Comments" page, so if a discussion starts up there plenty of people will get on board.

Comment author: timtyler 09 January 2010 10:03:00AM 4 points [-]

James Hughes - with an (IMO) near-incoherent critique of Yudkowsky:

http://ieet.org/index.php/IEET/more/hughes20100108/

Comment author: Sly 03 January 2010 10:52:41AM *  4 points [-]

I am curious as to how many LWers attempt to work out and eat healthily to lengthen their lifespan, especially among those who have signed up for cryonics.

Comment author: RichardKennaway 04 January 2010 10:00:49PM 4 points [-]

I work out and eat healthily to make right now better.

Of course, I hope that the body will last longer as well, but I wouldn't undertake a regimen that guaranteed I'd see at least 120, at the cost of never having the energy to get much done with the time. Not least because I'd take such a cost as casting doubt on the promise.

Comment author: Jawaka 07 January 2010 01:57:43PM 2 points [-]

I stopped smoking after I learned about the Singularity and Aubrey de Grey. I don't have any really good data on what healthy food is, but I think I am doing all right. I have also signed up at a gym recently. However, I don't think I can sign up for cryonics in Germany.

Comment author: Kaj_Sotala 02 January 2010 10:12:59AM 4 points [-]

A little knowledge can be a dangerous thing. At least Eliezer has previously often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU, but either suggest alternative interpretations or debunk its results entirely.

Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles pointing out it isn't so clear-cut as we seem to assume:

Psychological explanations of scope insensitivity do not imply CV invalidation. Green and Tunstall (1999, p. 213) argue that observed scope insensitivity (part-whole bias, embedding) “is the result of asking questions which are essentially meaningless to the respondents because [of] false assumptions about the cognitions of the respondents”. This position is close to that of, e.g., Carson and Mitchell (1993), arguing that apparent scope insensitivity is primarily due to flaws in survey design leading to amenity misspecification bias.

There are also explanations from economic theory. Rollins and Lyke (1998) argue that observed insensitivity to scope can result from diminishing marginal values. Successive quantities of, e.g., protected areas would receive ever positive but lower values per unit, such that the possibility of observing scope sensitivity would depend on the baseline scarcity of the resource. Income effects provide a related explanation. CV respondents have limited budgets or sub-budgets, whether these are mental or real, so their optimisation of spending on private and public goods is constrained (Randall and Hoehn 1993, 1996). Thus, even if the valuation is hypothetical, respondents are expected to limit totally stated [Willingness to Pay] to their ability to pay and to account for an executed hypothetical purchase when asked to value another good.

Indeed, the scope sensitivity issue remains controversial...

The scope test in the present CV study was over the composition of endangered species preservation. ... Of four external tests of insensitivity to scope, one was rejected, two gave mixed results, depending on either the type of test or elicitation format, and for the last one the null hypothesis could not be rejected. Of five internal tests, insensitivity to scope was rejected in three cases, one test gave mixed results, and one could not be rejected. Survey design features of the CV study, especially a fuzzy subgroup of endangered species, could explain the apparent insensitivity to scope observed.

So if anyone is reading the book, take it with a grain of salt. At least do a Google Scholar search for more data before accepting the conclusions.

Comment author: MrHen 31 January 2010 06:01:18PM 3 points [-]

What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.

Comment author: Alicorn 31 January 2010 06:01:57PM 3 points [-]

I try to avoid having more than one post of mine on the sidebar at the same time.

Comment author: komponisto 28 January 2010 09:24:26PM 3 points [-]

For the "How LW is Perceived" file:

Here is an excerpt from a comments section elsewhere in the blogosphere:

In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.

I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...

Comment author: Nick_Tarleton 18 January 2010 06:42:09PM 3 points [-]

This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)

Comment author: Eliezer_Yudkowsky 18 January 2010 06:51:22PM 3 points [-]

That is pretty ridiculous - enough to make me want to check the original study for effect size and statistical significance. Writing newspaper articles on research without giving the original paper title ought to be outlawed.

Comment author: MrHen 09 January 2010 12:23:04AM *  3 points [-]

A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:

Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.

Oops.

Comment author: MrHen 22 January 2010 03:04:53PM 2 points [-]

It really does surprise me how often people do things like this.

“I guess it’s just a genetic flaw in humans,” said Amichai Shulman, the chief technology officer at Imperva, which makes software for blocking hackers. “We’ve been following the same patterns since the 1990s.”

This is a quote from someone being interviewed about bad but common passwords. Would this be labeled a semantic stopsign, or a fake explanation, or ...?

Comment author: RobinZ 22 January 2010 03:45:44PM 2 points [-]

Fake explanation - he noticed a pattern and picked something which can cause that kind of pattern, without checking if it would cause that pattern.

Comment author: thomblake 22 January 2010 04:59:59PM -1 points [-]

Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.

This isn't an example of a logical fallacy; it could be read that way if the conclusion were "their way must be right" or something like that. As it is, the heuristic is "X is successful and Y is part of X's business plan, so Y probably leads to success".

If you think their planning is no better than chance, or that Y usually only works when combined with other factors, then disagreeing with this heuristic makes sense. Otherwise, it seems like it should work most of the time.

Affirming the consequent, in general, is a good heuristic.

Comment author: MrHen 22 January 2010 05:24:17PM 4 points [-]

Within the context of the article, the bigger form of the argument can be phrased as such:

  • DirectX is not cross-platform
  • OpenGL is cross-platform
  • Blizzard is successful
  • Blizzard releases cross-platform software
  • It is more successful to release cross-platform software
  • It is more successful to use OpenGL than DirectX

This is bad and wrong. As a snap judgement, it is likely that releasing cross-platform software is a more successful thing to do, but using that snap judgement to build bigger arguments is dangerous.

This is an example of an appeal to authority and of the fallacy of division.

As it is, the heuristic is "X is successful and Y is part of X's business plan, so Y probably leads to success".

But Y doesn't lead to success. If I say, "Blizzard is successful and making video games is part of their business plan, so making video games probably leads to success," something should be obviously wrong. Why would it be true if I use "always releases Mac versions of their games simultaneously" instead of "makes video games"?

If you think their planning is no better than chance, or that Y usually only works when combined with other factors, then disagreeing with this heuristic makes sense. Otherwise, it seems like it should work most of the time.

As far as I can tell, the emphasized part is the whole reason you should be careful. Picking one part out of a business plan is stupid. If you know enough about the subject material to determine whether that part of the business plan is applicable to whatever you are doing, fair enough, but this is a judgement call above and beyond the statements given in this example.

Affirming the consequent, in general, is a good heuristic.

Maybe, but it is still a logical fallacy.
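
For what it's worth, the disagreement can be made precise with a small worked example (all numbers invented for illustration): observing the consequent does raise the probability of the antecedent whenever the antecedent makes the consequent more likely, but it need not make the antecedent probable.

    # Bayes' rule applied to "affirming the consequent" as evidence.
    p_a = 0.01             # prior: plan element Y actually drives success
    p_b_given_a = 0.9      # if Y drives success, a firm using Y succeeds
    p_b_given_not_a = 0.1  # firms also succeed for unrelated reasons

    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    p_a_given_b = p_a * p_b_given_a / p_b

    print(p_a_given_b)  # ~0.083: about eight times the prior,
                        # but still far from "probably right"

So "affirming the consequent" is a valid evidential move and an invalid deductive one; the two comments above are talking past each other on exactly this distinction.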

Comment author: Jack 07 January 2010 07:37:16PM *  3 points [-]

Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations-of-mathematics stuff, symbolic logic, and really basic set theory, and I usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, Advanced Placement) and never got to calculus. I don't expect to become a math whiz or anything; I'd just like to understand the science I read better. Anyone have good advice?

Comment author: nhamann 07 January 2010 09:56:27PM *  4 points [-]

I'm currently trying to teach myself mathematics from the ground up, so I'm in a similar situation as you. The biggest issue, as I see it, is attempting to forget everything I already "know" about math. Math curriculum at both the public high school and the state university I attended was generally bad; the focus was more on memorizing formulas and methods of solving prototypical problems than on honing one's deductive reasoning skills, which if I'm not mistaken is the core of math as a field of inquiry.

So obviously textbooks are a good place to start, but which ones don't suck? Well, I can't help you there, as I'm trying to figure this out myself, but I use a combination of recommendations from this page and ratings on Amazon.

Here are the books I am currently reading, have read portions of, or have on my immediate to-read list, but take this with a huge grain of salt as I'm not a mathematician, only an aspiring student:

  • How to Prove It: A Structured Approach by Velleman - Elementary proof strategies; a good reference if you find yourself routinely unable to follow proofs

  • How to Solve It by Polya - Haven't read it yet but it's supposedly quite good.

  • Mathematics and Plausible Reasoning, Vol. I & II by Polya - Ditto.

  • Topics in Algebra by Herstein - I'm not very far into this, but it's fairly cogent so far

  • Linear Algebra Done Right by Axler - Intuitive, determinant-free approach to linear algebra

  • Linear Algebra by Shilov - Rigorous, determinant-based approach to linear algebra. Virtually the opposite of Axler's book, so I figure between these two books I'll have a fairly good understanding once I finish.

  • Calculus by Spivak - Widely lauded. I'm only 6 chapters in, but I immensely enjoy this book so far. I took three semesters of calculus in college, but I didn't intuitively understand the definition of a limit until I read this book.

Comment author: ciphergoth 08 January 2010 01:07:14AM 2 points [-]

I've learned an awful lot of maths from Wikipedia.

Comment author: Nick_Novitski 04 January 2010 06:45:31PM 3 points [-]

Here's a silly comic about rationality.

I rather wish it was called "Irrationally Undervalues Rapid Decisions Man". Or do I?

Comment author: CannibalSmith 04 January 2010 10:58:08AM 3 points [-]

Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. Past events, evidence of which has been erased.

Comment author: [deleted] 04 January 2010 05:39:56AM *  3 points [-]

P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.

Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart at a perfect dartboard - a dart whose tip was so fine that its intersection with the board was a single point. You see, when you randomly select a point from a region, the probability of selecting each point is 0. Nonetheless, a point was selected, and Rho saw which point it was; an event of probability 0 occurred. As Peter de Blanc said, Rho instantly fell to the very bottom layer of Bayesian hell.

Or did she?
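
One standard way out (a sketch, not from the original comment; the hypotheses and numbers below are invented): for continuous variables, Bayes' theorem is applied to probability densities, which stay finite even though every exact point has probability 0.

    # Two hypotheses about where darts land on a 1x1 board:
    # H1: uniform over the whole board; H2: uniform over the left half.
    p_h1, p_h2 = 0.5, 0.5

    def density_h1(x, y):
        return 1.0                      # density on [0,1] x [0,1]

    def density_h2(x, y):
        return 2.0 if x < 0.5 else 0.0  # density on [0,0.5] x [0,1]

    x, y = 0.3, 0.7  # the exact point Rho saw the dart hit

    # P(this exact point) is 0 under both hypotheses, but the densities
    # are finite, so the posterior is a ratio of finite quantities:
    num = p_h1 * density_h1(x, y)
    post_h1 = num / (num + p_h2 * density_h2(x, y))
    print(post_h1)  # 1/3: the point was twice as likely under H2

No division by zero occurs, because the zero-probability event is conditioned on through its density, not its probability.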

Comment author: AdeleneDawner 03 January 2010 03:51:37PM *  3 points [-]

First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly sure I'm autistic. Just don't ignore the actual question in favor of picking my brain, please.)

The company that I work for has been hired to create a virtual campus (3d, in opensim, with some traditional web-2.0 parts) for this school. They appear to be fairly new to virtual worlds and online education (more so than the web page suggests: I'm not sure that they have any students following the shown program yet), and we're in a position to guide them toward or away from certain technologies and ways of doing things. We're already, for example, suggesting that they consider minimizing the use of realtime lectures, and use recorded presentations followed (not necessarily immediately) by both realtime and non-realtime discussions instead. We're pushing for them to incorporate options that allow and encourage students to learn (and learn to learn) in whatever way is best for them, rather than enforcing one-size-fits-all methods, and we're intentionally trying to include 'covert learning' as well (simple example: purposefully using more formal avatar animations in more formal areas, to let the students literally see how to carry themselves in such situations). The first group of students to be using our virtual campus will be in grades 4-8, and I don't believe we'll be able to influence their actual curriculum at all (though if someone wants to offer to mentor some kids in one topic or another, they might be interested).

Those who have made a formal effort to learn via online resources: What advice do you have to offer? What kinds of technologies, or uses of technologies, have worked for you, and what kinds of tech do you wish you had access to?

Comment author: Blueberry 03 January 2010 04:37:48PM 3 points [-]

For me personally, I would prefer transcripts and written summaries of any audio or video content. I find it very difficult to listen to and learn from hearing audio when sitting at a computer, and having text or a transcript to read from instead helps a lot. It allows me to read at my own pace and go back and forth when I need to.

I'd also like any audio and video content to be easily and separately downloadable, so I could listen to it at my own convenience. And I'd want any slides or demonstrations to be easily printable, so I could see it on paper and write notes on it. (As you can probably tell, I'm more of a verbal and visual learner.)

By the way, your comment seemed totally normal to me, and I didn't notice any unusual tone, but I'm curious what you were referring to.

Comment author: Alicorn 03 January 2010 04:42:12PM 2 points [-]

Seconded the need for transcriptions. This is also a matter of disability access, which is frequently neglected in website design - better to have it there from the beginning than wait for someone to sue.

Comment author: DanArmak 02 January 2010 10:14:25PM 3 points [-]

And happy new year to everyone.

Except wireheads.

Comment author: whpearson 02 January 2010 01:13:09AM 3 points [-]

I found this interesting, along with the paper it discusses on children's conception of intelligence.

The abstract of the paper:

Two studies explored the role of implicit theories of intelligence in adolescents' mathematics achievement. In Study 1 with 373 7th graders, the belief that intelligence is malleable (incremental theory) predicted an upward trajectory in grades over the two years of junior high school, while a belief that intelligence is fixed (entity theory) predicted a flat trajectory. A mediational model including learning goals, positive beliefs about effort, and causal attributions and strategies was tested. In Study 2, an intervention teaching an incremental theory to 7th graders (N=48) promoted positive change in classroom motivation, compared with a control group (N=43). Simultaneously, students in the control group displayed a continuing downward trajectory in grades, while this decline was reversed for students in the experimental group.

People on lesswrong commonly talk as if intelligence is a thing we can put a number to, which implies a fixed trait. Yet that is counterproductive in children. Is this another example of a useful lie? I feel that this issue is at the core of some of the arguments I have had over the years.

Comment author: Nick_Tarleton 02 January 2010 08:34:52PM 3 points [-]

People on lesswrong commonly talk as if intelligence is a thing we can put a number to, which implies a fixed trait.

No, it doesn't. What about weight?

Comment author: whpearson 02 January 2010 09:22:58PM *  4 points [-]

Fair point. Would you agree with, "People on lesswrong commonly talk as if intelligence is a thing we can put a number to (without temporal qualification), which implies a fixed trait."?

We often say our weight is currently X or Y. But people rarely say their IQ is currently Z, at least in my experience.

Comment author: Zack_M_Davis 02 January 2010 01:23:55AM 3 points [-]

Is this another example of a useful lie?

If it works, it can't be a lie. In any case, surely a sophisticated understanding does not say that intelligence is malleable or not-malleable. Rather, we say it's malleable to this-and-such an extent in such-and-these aspects by these-and-such methods.

Comment author: Kaj_Sotala 02 January 2010 01:23:13PM 2 points [-]

If it works, it can't be a lie.

"Intelligence is malleable" can be a lie and still work. Kids who believe their general intelligence to be malleable might end up exercising domain-specific skills and a general perseverance so that they don't get too easily discouraged. That leaves their general intelligence unchanged, but nonetheless improves school performance.

Comment author: RolfAndreassen 01 January 2010 06:58:43PM 3 points [-]

A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:

  • Aid the memory of posters, who might accumulate quite a few bets as time passes.
  • Form a record of who has won and lost bets, helping us calibrate our confidences (see the scoring sketch below).
  • Formalise the practice of saying "I'll take a bet on that", prodding us to take care when posting predictions with probabilities attached. The intention here is to counter the lazy habit of throwing out a number merely to signal our rationality; numbers are important and should be well considered when we use them at all.
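
As a sketch of the second point, here is one way such a registry could score calibration (the Brier score; the bet data below is invented for illustration):

    # Brier score over a poster's recorded bets: mean squared distance
    # between stated probability and actual outcome. Lower is better.
    bets = [
        (0.9, True),   # (stated probability, whether the prediction came true)
        (0.7, False),
        (0.6, True),
    ]

    brier = sum((p - (1.0 if won else 0.0)) ** 2 for p, won in bets) / len(bets)
    print(brier)  # 0.22 here; 0.0 is perfect, and always saying 50% scores 0.25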
Comment author: Kaj_Sotala 01 January 2010 05:13:19PM 6 points [-]

Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?

Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.

Comment author: Yvain 01 January 2010 05:53:48PM *  15 points [-]

So many. I can't limit it to one, but my top four would be "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?", and "What if Alexander the Great had lived to a ripe old age?"

The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right, it did a lot to homogenize some really cool places. It dealt a fatal wound to Byzantium as well. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.

The Minoans were the most advanced civilization of their time, and had plumbing, three story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one historian's speculation that if they'd lived, a man would've landed on the moon by 1 AD.

I don't have such antipathy to Christianity that I'd want to prevent it from ever existing, but it sure did give us 2,000-odd years of boring religion. Julian the Apostate was a Roman emperor who ruled a few reigns after Constantine and tried to turn back the clock, de-establish Christianity, and revive all the old pagan cults. He was also a philosopher, an intellectual, and by most accounts a pretty honest and decent guy. He died after reigning barely over a year, from a spear wound incurred in battle. If he'd lived, for all we know the US could be One Nation Under Zeus (or Wodin, or whoever) right now.

As for Alexander the Great, he was just plain nifty. I think I heard he was planning a campaign against Carthage before he died. If he'd lived to 80, he could've conquered all of Europe, North Africa, and Western Asia, and unified the whole western world under a dynasty of philosopher-kings dedicated to spreading Greek culture and ideas. Given a few more years, he might also have solved that whole "successor" issue.

Comment author: James_Miller 01 January 2010 06:02:12PM 12 points [-]

Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.

Comment author: wedrifid 02 January 2010 12:35:21AM 4 points [-]

Of course, once you are already the most successful conqueror alive you tend to need less luck. You can get by on the basic competence that comes from experience and the resources you now have at your disposal. (So long as you don't, for example, try to take Russia. Although even then Alexander's style would probably have worked better than Napoleon's.)

Comment author: Morendil 01 January 2010 10:54:48PM 12 points [-]

I'd really, really like to see what the world would be like today if a single butterfly's wings had flapped slightly faster back in 5000 B.C.

Comment author: anonym 02 January 2010 08:38:18PM 2 points [-]

Along the same lines, but much more likely to yield radical differences in the future of human society, I'd like to know what would have happened if some ancient bottleneck epidemic had not happened or had happened differently (killed more or fewer people, or just different individuals). Much or all of the human gene pool after that altered event would be different.

Comment author: DanArmak 02 January 2010 10:26:31PM 2 points [-]

I'd like to see a world in which all ancestor-types of humans through to the last common ancestor with chimps still lived in many places.

Comment author: RolfAndreassen 01 January 2010 11:57:51PM 4 points [-]

I would try to study the effects of individual humans, Great-Man vs Historical Inevitability style, by knocking out statesmen of a particular period. Hitler is a cliche, whom I'd nonetheless start with; but I'd follow up by seeing what happens if you kill Chamberlain, Churchill, Roosevelt, Stalin... and work my way down to the likes of Turing and Doenitz. Do you still get France overrun in six weeks? A resurgent German nationalism? A defiant to-the-last-ditch mood in Britain? And so on.

Then I'd start on similar questions for the unification of Germany: Bismarck, Kaiser Wilhelm, Franz Josef, Marx, Napoleon III, and so forth. Then perhaps the Great War or the Cold War, or perhaps I'd be bored with recent history and go for something medieval instead - Harald wins at Stamford Bridge, perhaps. Or to maintain the remove-one-person style of the experiment, there are the three claimants to the British throne, one could kill Edward the Confessor earlier, the Pope has a hand in it, there are the various dukes and other feudal lords in England... lots of fun to be had with this scenario!

Comment author: dfranke 01 January 2010 05:47:54PM 4 points [-]

I'd like to know what would have happened if movable type had been invented in the 3rd century AD.

Comment author: Nick_Novitski 04 January 2010 06:09:54PM 2 points [-]

For starters, the Council of Nicea would flounder helplessly as every sect with access to a printing press floods the market with their particular version of christianity.

Comment author: Kaj_Sotala 01 January 2010 05:18:52PM 7 points [-]

I'd be curious to know what would have happened if Christopher Columbus's fleet had been lost at sea during his first voyage across the Atlantic. Most scholars were already highly skeptical of his plans, as they were based on a miscalculation, and him not returning would have further discouraged any explorers from setting off in that direction. How much longer would it have taken before Europeans found out about the Americas, and how would history have developed in the meanwhile?

Comment author: anonym 02 January 2010 07:58:09PM 3 points [-]

I'd like to know what would have happened if the Library of Alexandria hadn't been destroyed. If even the works of Archimedes alone -- including the key insight underlying Integral Calculus -- had survived longer and been more widely disseminated, what difference would that have made to the future progress of mathematics and technology?

Comment author: PeterS 01 January 2010 08:26:08PM *  3 points [-]

I've been curious to know what the "U.S." would be like today if the American Revolution had failed.

Also, though it's a bit cliche to respond to this question with something like "Hitler is never born", it is interesting to think about just what is necessary to propel a nation into war / dictatorship / evil like that (e.g. just when can you kill / eliminate a single man and succeed in preventing it?) That's something I'm fairly curious about (and the scope of my curiosity isn't necessarily confined to Hitler - could be Bush II, Lincoln, Mao, an Islamic imam whose name I've forgotten, etc.).

Comment author: DanielLC 01 January 2010 11:56:34PM 2 points [-]

Something like Canada, I guess.

While we're at it, what if the Continental Congress had failed to replace the Articles of Confederation?

Comment author: i77 01 January 2010 09:47:04PM 1 point [-]

I've been curious to know what the "U.S." would be like today if the American Revolution had failed.

Code Geass :)

Comment author: Alicorn 01 January 2010 05:29:21PM 3 points [-]

I would like to know what would have happened if, sometime during the Dark Ages let's say, benevolent and extremely advanced aliens had landed with the intention to fix everything. I would diligently copy and disseminate the entire Wikipedia-equivalent for the generously-divulged scientific and sociological knowledge therein, plus cultural notes on the aliens such that I could write a really keenly plausible sci-fi series.

Comment author: Gavin 01 January 2010 09:58:34PM 3 points [-]

A sci-fi series based on real extra-terrestrials would quite possibly be so alien to us that no one would want to read it.

Comment author: billswift 01 January 2010 11:18:17PM *  5 points [-]

Not just science fiction and aliens either. Nearly all popular and successful fiction is based around what are effectively modern characters in whatever setting. I remember a paper I read back around the mid-eighties pointing out that Louis L'Amour's characters were basically just modern Americans with the appropriate historical technology and locations.

Comment author: Alicorn 01 January 2010 10:13:00PM 2 points [-]

I might have to mess with them a bit to get an audience, yes.

Comment author: Zack_M_Davis 01 January 2010 10:18:41PM *  2 points [-]

Of course you can't fully describe the scenario, or you would already have your answer, but even so, this question seems tantalizingly underspecified. Fix everything, by what standard? Human goals aren't going to sync up exactly with alien goals (or why even call them aliens?), so what form does the aliens' benevolence take? Do they try to help the humans in the way that humans would want to be helped, insofar as that problem has a unique answer? Do they give humanity half the stars, just to be nice? Insofar as there isn't a unique answer to how-humans-would-want-to-be-helped, how can the aliens avoid engaging in what amounts to cultural imperialism---unilaterally choosing what human civilization develops into? So what kind of imperialism do they choose?

How advanced are these aliens? Maybe I'm working off horribly flawed assumptions, but in truth it seems kind of odd for them to have interstellar travel without superintelligence and uploading. (You say you want to write keenly plausible science fiction, so you are going to have to do this kind of analysis.) The alien civilization has to be rich and advanced enough to send out a benevolent rescue ship, and yet not develop superintelligence and send out a colonization wave at near-c to eat the stars and prevent astronomical waste. Maybe the rescue ship itself was sent out at near-c and the colonization wave won't catch up for a few decades or centuries? Maybe the rescue ship was sent out, and then the home civilization collapsed or died out?---and the rescue ship can't return or rebuild on its own (not enough fuel or something), so they need some of the Sol system's resources?

Or maybe there's something about the aliens' culture and psychology such that they are capable of developing interstellar travel but not capable of developing superintelligence? I don't think it should be too surprising if the aliens were congenitally confused, unable to discover certain concepts. (Compare how the hard problem of consciousness just seems impossible; maybe humans happen to be flawed in such a way that we can never understand qualia.) So the aliens send their rescue ship, share their science and culture (insofar as alien culture can be shared), and eighty years later, the humans build an FAI. Then what?

Comment author: Alicorn 01 January 2010 10:25:02PM 3 points [-]

Human goals aren't going to sync up exactly with alien goals

Why not, as long as I'm making things up?

(or why even call them aliens?)

Because they are from another planet.

I do not know enough science to address the rest of your complaints.

Comment author: Zack_M_Davis 01 January 2010 11:08:20PM 3 points [-]

Why not, as long as I'm making things up?

I'm worried that some of my concepts here are a little shaky and confused in a way that I can't articulate, but my provisional answer is: because their planet would have to be virtually a duplicate of Earth to get that kind of match. Suppose that my deepest heart's desire, my lifework, is for me to write a grand romance novel about an actuary who lives in New York and her unusually tall boyfriend. That's a necessary condition for my ideal universe: it has to contain me writing this beautiful, beautiful novel.

It doesn't seem all that implausible that powerful aliens would have a goal of "be nice to all sentient creatures," in which case they might very well help me with my goal in innumerable ways, perhaps by giving me a better word processor, or providing life extension so I can grow up to have a broader experience base with which to write. But I wouldn't say that this is the same thing as the alien sharing my goals, because if humans had never evolved, it almost certainly wouldn't have even occurred to the alien to create, from scratch, a human being who writes a grand romance novel about an actuary who lives in New York and her unusually tall boyfriend. A plausible alien is simply not going to spontaneously invent those concepts and put special value on them. Even if they have rough analogues to courtship story or even person who is rewarded for doing economic risk-management calculations, I guarantee you they're not going to invent New York.

Even if the alien and I end up cooperating in real life, when I picture my ideal universe, and when they picture their ideal universe, they're going to be different visions. The closest thing I can think of would be for the aliens to have evolved a sort of domain-general niceness, and to have a top-level goal for the universe to be filled with all sorts of diverse life with their own analogues of pleasure or goal-achievement or whatever, which me and my beautiful, beautiful novel would qualify as a special case of. Actually, I might agree with that as a good summary description of my top-level goal. The problem is, there are a lot of details that that summary description doesn't pin down, which we would expect to differ. Even if the alien and I agree that the universe should blossom with diverse life, we would almost certainly have different rankings of which kinds of possible diverse life get included. If our future lightcone only has room for 10^200 observer-moments, and there are 10^4000 possible observer-moments, then some possible observer-moments won't get to exist. I would want to ensure that me and my beautiful, beautiful novel get included, whereas the alien would have no advance reason to privilege me and my beautiful, beautiful novel over the quintillions of other possible beings with desires that they think of as their analogue of beautiful, beautiful.

This brings us to the apparent inevitability of something like cultural imperialism. Humans aren't really optimizers---there doesn't seem to be one unique human vision for what the universe should look like; there's going to be room for multiple more-or-less reasonable construals of our volition. That being the case, why shouldn't even benevolent aliens pick the construal that they like best?

Comment author: Alicorn 01 January 2010 11:37:52PM 2 points [-]

Domain-general niceness works. It's possible to be nice to and helpful to lots of different kinds of people with lots of different kinds of goals. Think Superhappies except with respect for autonomy.

Comment author: orthonormal 01 January 2010 10:43:19PM *  3 points [-]

OK, I sense cross-purposes here. You're asking "what would be the most interesting and intelligible form of positive alien contact (in human terms)", and Zack is asking "what would be the most probable form of positive alien contact"?

(By "positive alien contact", I mean contact with aliens who have some goal that causes them to care about human values and preferences (think of the Superhappies), as opposed to a Paperclipper that only cares about us as potential resources for or obstacles to making paperclips.)

Keep in mind that what we think of as good sci-fi is generally an example of positing human problems (or allegories for them) in inventive settings, not of describing what might most likely happen in such a setting...

Comment author: Kevin 25 January 2010 12:44:07AM 2 points [-]

Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm

In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.
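
(For reference, the orbital letters continue alphabetically after f, with j skipped by spectroscopic convention; a quick sketch in Python:)

    # Orbital letters by angular momentum quantum number l:
    # the historical s, p, d, f, then alphabetical with 'j' omitted.
    letters = ['s', 'p', 'd', 'f', 'g', 'h', 'i', 'k']
    for l, letter in enumerate(letters):
        print(l, letter)  # 0 s, 1 p, 2 d, 3 f, 4 g, ...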

Comment author: Vladimir_Nesov 24 January 2010 08:03:26PM *  2 points [-]

I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.

Comment author: Kevin 22 January 2010 10:32:47AM *  2 points [-]

Inspired by this comment by Michael Vassar:

http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments

Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.

Michael Vassar suggests The Great Gatsby (I think; the comment was written confusingly, in parallel with the names of authors, but I don't think there was ever an author named Gatsby), and I remember actually enjoying The Great Gatsby in high school. It's also a short novel, so we could comfortably read it in a week, or reread it leisurely over the course of a month.

If it works, we can do one of Joyce's earlier works next, or whatever the club suggests. If we get good at this, a year from now we can do Ulysses.

Comment author: Kevin 21 January 2010 01:44:43PM 2 points [-]

How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?

I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself in my summoning an image in my head of Gene Wilder as Willy Wonka.

Is it proper to consider that the moment when I became self-aware? Does anyone have a similar anecdote?

(This is inspired by Shannon's mention of her child exploring her sense of self) http://lesswrong.com/lw/1n8/london_meetup_the_friendly_ai_problem/1hm4

Comment author: AdeleneDawner 22 January 2010 06:11:10AM 2 points [-]

I don't have any memory of a similar revelation, but one of my earliest memories is of asking my mother if there was a way to 'spell letters' - I understood that words could be broken down into parts and wanted to know if that was true of letters, too, and if so where the process ended - which implies that I was already doing a significant amount of abstract reasoning. I was three at the time.

Comment author: Kevin 20 January 2010 03:25:14PM 2 points [-]

Ray Kurzweil Responds to the Issue of Accuracy of His Predictions

http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html

Comment author: Kevin 12 January 2010 12:07:44PM 2 points [-]

Paul Graham -- How to Disagree

http://www.paulgraham.com/disagree.html

Comment author: Morendil 07 January 2010 10:34:02AM 2 points [-]

When people here say they are signed up for cryonics, do they invariably mean "signed up with the people who contract to freeze you, and signed up with an instrument for funding suspension, such as life insurance"?

I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I've had no reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.

Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term?

Comment author: SilasBarta 05 January 2010 03:38:59AM 2 points [-]

Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).

How do we fix it, so I don't have to start sending off resumes?

Comment author: byrnema 05 January 2010 04:35:14AM *  2 points [-]

I went to the eSafe site and while looking up what the "illegal drugs" classification meant, submitted a request for them to change their status for LessWrong.com. A pop-up window told me they'd look into it.

You can check (and then apply to modify) the status of LessWrong here.

Comment author: MatthewB 05 January 2010 03:57:20AM 2 points [-]

That may have been my fault. I mentioned that I used to have drug problems and mentioned specific drugs in one thread, so that may have set off the filters. I apologize if this is the case. The discussion about this went on for a day or two (involving maybe six comments).

I do hope that is not the problem, but I will avoid such topics in the future to avoid any such issues.

Comment author: Seth_Goldin 07 January 2010 04:18:21AM 6 points [-]

Hello all,

I've been a longtime lurker, and I tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.


Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html

When I'm debating some controversial topic with someone older than I am, even if I can thoroughly demolish their argument, I am sometimes met with a troubling claim, that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that my opinion is based primarily on nothing more than my perception from personal experience.

When my cornered opponent makes this claim, it's a last resort. It's unwarranted condescension, because it reveals how wrong their entire approach is. Just by making the claim, they demonstrate that they believe all opinions are based primarily on an accumulation of personal experiences, even their own. Their assumption reveals that they are not Bayesian, and that they intuit that no one is. Not being Bayesian themselves, they have no authority that warrants such condescension.

I intentionally avoid presenting personal anecdotes cobbled together as evidence, because I know that projecting my own experience onto a situation to explain it is no evidence at all. I know that I suffer from all sorts of cognitive biases that obstruct my understanding of the truth. As such, my inclination is to rely on academic consensus. If I explain this explicitly to my opponent, they might dismiss academics as unreliable and irrelevant, hopelessly stuck in the ivory tower of academia.

Dismiss academics at your own peril. Sometimes there are very good reasons for dismissing academic consensus. I concede that most academics aren't Bayesian because academia is an elaborate credentialing and status-signaling mechanism. Furthermore, academics have often been wrong. The Sokal affair illustrates that entire fields can exist completely without merit. That academic consensus can easily be wrong should be intuitively obvious to an atheist; religious community leaders have always been considered academic experts, the most learned and smartest members of society. Still, it would be a fallacious inversion of an argument from authority to dismiss academic consensus simply because it is academic consensus.

For all of academia's flaws, the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history. It is noble and desirable to criticize academic theories, but only as part of intellectually honest, impartial scientific inquiry. Dismissing academic consensus out of hand is primitive, and indicates intellectual dishonesty.

Comment author: Morendil 18 March 2010 02:03:58PM 5 points [-]

What you seem to be saying, that I agree with, is that it's irritating as well as irrelevant when people try to pull authority on you, using "age" or "quantity of experience" as a proxy for authority. Yes, argument does screen off authority. But that's no reason to knock "life experience".

If opinions are not based on "personal experience", what can they possibly be based on? Reading a book is a personal experience. Arguing an issue with someone (and changing your mind) is a personal experience. Learning anything is a personal experience, which (unless you're too good at compartmentalizing) colors your other beliefs.

Perhaps the issue is with your thinking that "demolishing someone's argument" is a worthwhile instrumental goal in pursuit of truth. A more fruitful goal is to repair your interlocutor's argument, to acknowledge how their personal experience has led them to having whatever beliefs they have, and expose symmetrically what elements in your own experience lead you to different views.

Anecdotes are evidence, even though they can be weak evidence. They can be strong evidence too. For instance, having read this comment after I read the commenter's original report of his experience as an isolated individual, I'd be more inclined to lend credence to the "stealth blimp" theory. I would have dismissed that theory on the basis of reading the Wikipedia page alone or hearing the anecdote alone, but I have a low prior probability for someone on LessWrong arranging to seem as if he looked up news reports after first making an honest disclosure to other people interested in truth-seeking.

It seems inconsistent on your part to start off with a rant about "anecdotes", and then make a strong, absolute claim based solely on "the Sokal affair" - which at the scale of scientific institutions is anecdotal.

I think you're trying to make two distinct points and getting them mixed up, and as a result not getting either point across. One of these points I believe needs to be moderated - the one where you say "personal experiences aren't evidence" - because they are evidence; the other is where you say "people who speak with too much confidence are more likely to be wrong, including a) people older than you, b) some academics, but not necessarily the academic consensus".

That is perhaps a third point - just why you think that "the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history". That's a strong claim subject to the conjunction fallacy: are each of peer review, logic, statistics and regression analysis necessary elements of what makes scientific inquiry our best chance at discovering truth? Are they sufficient elements to be that best chance?

Comment author: Seth_Goldin 18 March 2010 05:09:37PM 1 point [-]

Hi Morendil,

Thanks for the comment. The version you are commenting on is an earlier, worse draft than the one I posted and then pulled this morning; in that newer version I actually changed the claim about the Sokal affair completely.

Due to what I fear was an information cascade of negative karma, I pulled the post so that I might make revisions.

The criticism still holds for both this earlier version and the newer one from this morning, though. I too realized after the immediate negative feedback that I was combining two different points, poorly, and losing both of them in the process. I think I need to revise this into two different posts, or cut out the point about academia entirely. In the future version I will also concede that anecdotes are evidence.

Unfortunately I was at exactly 50 karma, and now I'm back down to 20, so it will be a while before I can try again. I'll be working on it.

Comment author: Seth_Goldin 18 March 2010 07:31:27PM 1 point [-]

Here's the latest version, what I will attempt to post on the top level when I again have enough karma.


"Life Experience" as a Conversation-Halter

Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority. Such "life experience" is necessary for an informed, rational worldview, but in and of itself it is not sufficient.

The claim that more "life experience" will completely reverse an opinion indicates that the person making the claim believes opinion is based primarily on an accumulation of anecdotes, perhaps derived from extensive availability bias. It actually is a pretty decent assumption that other people aren't Bayesian, because for the most part, they aren't; the work of Haidt, Kahneman, and Tversky, among others, confirms this.

When an opponent appeals to more "life experience," it's a last resort, and it's a conversation halter. This tactic is used when an opponent is cornered. The claim is nearly an outright acknowledgment of a move to exit the realm of rational debate. Why stick to rational discourse when you can shift to trading anecdotes? It levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims. As rhetoric, this is frustratingly effective, but it's logically rude.

Although it might be rude and rhetorically weak, it would be authoritatively appropriate for a Bayesian to be condescending to a non-Bayesian in an argument. Conversely, it can be downright maddening for a non-Bayesian to be condescending to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension. E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed, "The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world's problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this."

Comment author: SilasBarta 18 March 2010 02:36:38PM 1 point [-]

Yes, argument does screen off authority. But that's no reason to knock "life experience". ... Learning anything is a personal experience, which colors your other beliefs. ... A more fruitful goal is to repair your interlocutor's argument, to acknowledge how their personal experience has led them to having whatever beliefs they have, and expose symmetrically what elements in your own experience lead you to different views.

I agree with your point and your recommendation. Life experiences can provide evidence, and they can also be an excuse to avoid providing arguments. You need to distinguish which one it is when someone brings it up. Usually, if it is valid evidence, the other person should be able to articulate which insight a life experience would provide to you, if you were to have it, even if they can't pass the experience directly to your mind.

I remember arguing with a family member about a matter of policy (for obvious reasons I won't say what), and when she couldn't seem to defend her position, she said, "Well, when you have kids, you'll see my side." Yet, from context, it seems she could have, more helpfully, said, "Well, when you have kids, you'll be much more risk-averse, and therefore see why I prefer to keep the system as is" and then we could have gone on to reasons about why one or the other system is risky.

In another case (this time an email exchange on the issue of pricing carbon emissions), someone said I would "get" his point if I would just read the famous Coase paper on externalities. While I hadn't read it, I was familiar with the arguments in it, and ~99% sure my position accounted for its points, so I kept pressing him to tell me which insight I didn't fully appreciate. Thankfully, such probing led him to state, erroneously, what he thought my opinion was, and when I pointed out the error, he conceded that the paper wouldn't change my opinion.

Comment author: thomblake 07 January 2010 08:31:16PM 3 points [-]

The Sokal affair illustrates that entire fields can exist completely without merit.

It illustrated nothing of the sort. The Sokal affair illustrated that a non-peer-reviewed, non-science journal will publish bad science writing that was believed to be submitted in good faith.

Social Text was not peer-reviewed because they were hoping to... do... something. What Sokal did was similar to stealing everything from a 'good faith' vegetable stand and then criticizing its owner for not having enough security.

Comment author: Seth_Goldin 07 January 2010 08:42:51PM *  5 points [-]

Noted. In another draft I'll change this to make the point about how easy it is for high-status academics to deal in gibberish. Maybe they didn't have so much status outside their group of peers, but within it, didn't they?

What the Social Text Affair Does and Does Not Prove

http://www.physics.nyu.edu/faculty/sokal/noretta.html

"From the mere fact of publication of my parody I think that not much can be deduced. It doesn't prove that the whole field of cultural studies, or cultural studies of science -- much less sociology of science -- is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from a conveniently credentialed ally'' (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors' ideological preconceptions, and attacked theirenemies''.[13]"

Comment author: Vladimir_Nesov 07 January 2010 07:07:59PM *  2 points [-]

Not being Bayesian, they have no authority that warrants such condescension.

It's unclear what you mean by both "Bayesian" and by "authority" in this sentence. If a person is "Bayesian", does it give "authority" for condescension?

There clearly is some truth to the claim that being around longer sometimes allows one to arrive at more accurate beliefs, including more accurate intuitive assessments of the situation, provided you haven't gone down a crazy road in the particular domain. It's not very strong evidence, and it can't defeat many forms of more direct evidence pointing in the contrary direction, but sometimes it's an OK heuristic, especially if you are not aware of other evidence ("ask the elder").

Comment deleted 05 January 2010 02:38:51PM [-]
Comment author: Vladimir_Nesov 05 January 2010 07:43:08PM 6 points [-]

1) Why would a "perfectly logical being" compute (do) X and not Y? Do all "perfectly logical beings" do the same thing? (Dan's comment: a system that computes your answer determines that answer, given a question. If you presuppose a unique answer, you need to sufficiently restrict the question (and the system). A universal computer will execute any program (question) to produce its output (answer).) Not all "beings" will do exactly the same thing, or answer any question in exactly the same way. See also: No Universally Compelling Arguments.

2) Why would you be interested in what the "perfectly logical being" does? No matter what argument you are given, it is you that decides whether to accept it. See also: Where Recursive Justification Hits Bottom, Paperclip maximizer, and more generally Metaethics sequence.

2.5) What humans want (and you in particular), is a very detailed notion, one that won't automatically appear from a question that doesn't already include all that detail. And every bit of that detail is incredibly important to get right, even though its form isn't fixed in human image.

Comment author: Jack 05 January 2010 05:33:02PM 3 points [-]

I don't know what you mean by objective ethics. I believe there are ethical facts, but they're a lot more like facts about the rules of baseball than facts about the laws of physics.

Comment author: DanArmak 05 January 2010 04:16:48PM *  3 points [-]

a system of ethics would be objective if it could be universally calculated by any non-biased, perfectly logical being.

"Calculated" based on what? What is the question that this would be the answer to?

Also, how can you define "bias" here?

As you can guess from my questions, I don't even see what an objective system of ethics could possibly mean :-)

Comment author: MatthewB 06 January 2010 07:43:10AM 3 points [-]

As you can guess from my questions, I don't even see what an objective system of ethics could possibly mean.

This seems to be my biggest problem as well. I have been trying to find definitions of an objective system of ethics, yet all of the definitions seem dogmatic and contrived. Not to mention that they vary from time to time depending upon the domain of the ethics (whether they apply to Christians, Muslims, Buddhists, etc.).

Comment author: MatthewB 06 January 2010 07:39:58AM 1 point [-]

I am having a discussion on a forum where a theist keeps stating that there must be objective truth, that there must be objective morality, and that there is objective knowledge that cannot be discovered by Science (I tried to point out that if it were Objective, then any system should be capable of producing that knowledge or truth).

I had completely forgotten to ask him whether this objective truth/knowledge/morality could be discovered if we took a group of people, raised them in complete isolation, and then gave them the tools to explore their world. If such things were truly objective, then it would be trivial for these people to arrive at the discovery of these objective facts.

I shall have to remember this. And even granting that such objective knowledge/ethics may indeed exist, why is it that our ethical systems across the globe have a few things in common, but disagree on a great many more?

Comment author: PhilGoetz 09 January 2010 06:17:32AM *  3 points [-]

Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?

Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit was; but I regard it as an interesting correlate of processing power, without any demonstrated or even argued logical relationship to consciousness. Tononi has published a lot of papers on it - and they became more widely cited when he started saying they were about consciousness instead of saying they were about information integration - but he didn't, AFAIK, make any argument that the thing he measures with information integration has something to do with consciousness.
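For concreteness, here is a toy sketch of what "mutual information between components" buys you (my own illustration with made-up distributions; this is plain pairwise mutual information, not Tononi's actual integration measure, which involves minimizing over partitions of the system):

```python
from math import log2

def mutual_information(joint):
    """joint: dict mapping (state_a, state_b) -> probability."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two independent binary components: no integration, MI = 0 bits.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
# Two perfectly correlated components: MI = 1 bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(independent))  # 0.0
print(mutual_information(correlated))   # 1.0
```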

Comment author: Zack_M_Davis 22 January 2010 10:05:17AM 2 points [-]

It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually communicate anything: not even "I'm hurt!", for to say that one is hurt presupposes that one is being hurt by something, some thing of which we can speak, of which we can name predicates and say "It is so" or "It is not so." Even very sick and damaged creatures can be helped, as long as their cries have enough structure for us to extrapolate a volition. But not all animate entities are creatures. Creatures have problems, problems we might be able to solve. Agonium just sits there, howling. You cannot help it; it can only be destroyed.

Comment author: AdeleneDawner 22 January 2010 01:51:24PM 3 points [-]

Did I miss something?

Comment author: MrHen 06 January 2010 04:58:24PM 2 points [-]

Feature request; feel free to ignore if it is a big deal to implement or has been requested before.

When messaging people back and forth, it would be nifty to be able to see the thread. I see glimpses of this feature, but it doesn't seem fully implemented.

Comment author: Jack 06 January 2010 05:07:26PM 2 points [-]

I suggested something along these lines on the feature request thread. I'd like to be able to find old message exchanges. Finding messages I sent is easy, but received messages are in the same place as comment replies and aren't searchable.

Comment author: orthonormal 18 January 2010 09:03:59PM 1 point [-]

I've just reached karma level 1337. Please downvote me so I can experience it again!

Comment author: [deleted] 14 January 2010 08:50:57PM 1 point [-]

I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)

I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.

It's often said around here that Bayesian priors and Solomonoff induction and such things describe the laws of physics of the universe: the simpler the description, the more likely that set of laws is. This is more or less true, but it is not quite what we want to be saying. What we're trying to describe is our observations. If I had a theory stating that every computable event happens, sure, that would explain all phenomena; but in order for it to describe our observations, one must add a string specifying which of these computable events are the ones we observe, which makes the theory completely useless.

In theory, this provides a solution to anthropic reasoning: simply figure out which paths through the universe are the simplest, and assign those the highest probability. Again, in theory, this provides a solution to quantum suicide. But please don't ask me what these solutions are.
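The point about the "every computable event happens" theory can be made concrete with a toy calculation (my own sketch, with made-up bit counts, under the crude assumption that a theory is scored by total description length: program bits plus the bits needed to locate our observations within the theory's output):

```python
# Toy sketch: score a theory by its program bits plus the "index" bits
# needed to pick out our actual observations from everything the theory
# generates. All numbers below are made up for illustration.
def total_description_length(program_bits, index_bits):
    return program_bits + index_bits

observation_bits = 64  # the data we are trying to explain

# A structured theory that outputs exactly our observations: a longer
# program, but no index needed.
structured = total_description_length(program_bits=30, index_bits=0)

# The "everything happens" theory: a tiny dovetailer program, but
# locating our observations in its output costs as many bits as the
# observations themselves, so nothing is compressed.
everything = total_description_length(program_bits=5,
                                      index_bits=observation_bits)

print(structured, everything)  # 30 vs 69: the universal theory loses
```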

Comment author: Wei_Dai 15 January 2010 02:54:56AM 2 points [-]

Does anyone understand the last two paragraphs of the comment that I'm responding to? I'm having trouble figuring out whether Warrigal has a real insight that I'm failing to grasp, or if he is just confused.

Comment author: blogospheroid 02 January 2010 09:44:59AM 0 points [-]

Drawing on the true prisoner's dilemma, the story arc Three Worlds Collide, and the recent Avatar:

In the case of Avatar, humans did cooperate in the prisoner's dilemma first: we tried the schooling and medicine thing, and apparently it was rejected on the Na'vi side. Differences were still so great that dream-walkers (Na'vi avatars of humans) were derided with statements like 'a rock sees more'.

So, the question is: when we cooperate with an alien species, will they even recognise it as cooperation? How does that change the contours of a decision theory? Suppose you are a superior species and must choose between a cooperative decision that appears hostile (in the Avatar scenario, telling the Na'vi that sometimes things just don't all fit into a plan, thus blowing a huge hole in their worldview) and a hostile decision that appears cooperative (giving away free narcotics, for example).

Will you be genuinely cooperative or only signal that you are cooperative?

Let us truly consider this from the perspective of the superiority of humans, i.e. there are no uber-guardians of Pandora who can wipe humanity out like dust (which is a possibility I would consider if humanity were going to launch another attack).