And happy new year to everyone.
I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.
(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)
After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.
So he asked them: if they had surgery themselves, would they want their surgeon to use a checklist? 94% said they would.
The Guardian published a piece citing Less Wrong:
The number's up by Oliver Burkeman
When it comes to visualising huge sums – the distance to the moon, say, or the hole the economy is in – we're pretty useless really
Recent observations on the art of writing fiction:
My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses. I think this may be related to the fundamental attribution error: we see ourselves as reacting naturally to the environment, but others as driven by innate impulses. Unfortunately this doesn't work for storytelling at all! It means my viewpoint character ends up as a ping-pong ball in a world of strong, driven other characters. (If you don't see this error in my published fiction, it's because I don't publish unsuccessful stories.)
Closely related to the above is another recent observation: My main character has to be sympathetic, in the sense of having motivations that I can respect enough to write them properly. Even if they're mistaken, I have to be able to respect the reasons for their mistakes. Otherwise my viewpoint automatically shifts to the characters around them, and once again the non-protagonist ends up stronger than the protagonist.
Just as it's necessary to learn to make things worse for your characters, rather than following the natural impulse to
"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.
Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.
On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.
On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.
Even worse, Jaynes makes several strong ...
After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.
Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.
But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evi...
I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions and coming up with answers that have consensus support.
What else? Anyone have drafts of answers?
Okay, so....a confession.
In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.
I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.
But that's about the extent of my personal acquaintance with the genre.
Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these...
In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.
The dire floor of Earth afore
saw once a fortuitous spark.
Life's swift flame sundry creature leased
and then one age a freakish beast
awakened from the dark.
Boundless skies beheld his eyes
and strident through the void he cried;
set his devices into space;
scryed for signs of a yonder race;
but desolate hush replied.
Stars surround and worlds abound,
the spheres too numerous to name.
Yet still no creature yet attains
to seize this lot, so each remains
raw hell or barren plain.
What daunting pale do most 'fore fail?
Be the test later or done?
Those dooms forgone our lives attest
themselves impel from first inquest:
cogito ergo sum.
Man does boast a charmèd post,
to wield the blade of reason pure.
But if this prov'ence be not rare,
then augurs fate our morrow bare,
our fleeting days obscure.
But might we nigh such odds defy,
and see before us cosmos bend?
Toward the heavens thy mind set,
and waver not: this proof, till 'yet,
did ne'er with man contend!
Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.
I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).
I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and...
Akrasia FYI:
I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.
I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.
Next up to try: Pick up a CPAP machine off Craigslist.
Suppose we want to program an AI to represent the interest of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individual in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program it with the utility function of that individual. Then set a time in the future when one of those A...
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
Perh...
Hello all,
I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.
Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html
When I'm debating some controversial topic wi...
Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?
Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.
So many. I can't limit it to one, but my top four would be "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?", and "What if Alexander the Great had lived to a ripe old age?"
The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right it did a lot to homogenize some really cool places. It also dealt a fatal wound to Byzantium as well. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.
The Minoans were the most advanced civilization of their time, and had plumbing, three story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one hist...
Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.
I'd really, really like to see what the world would be like today if a single butterfly's wings had flapped slightly faster back in 5000 B.C.
Prisoner's Dilemma on Amazon Mechanical Turk: http://blog.doloreslabs.com/2010/01/altruism-on-amazon-mechanical-turk/
Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.
There are several that I've wondered about:
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.
Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)
Try to create a society where people grow up seeing sexual activity as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.
Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences they way they think and act.
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like
They'd probably be like the average less wrong commenter/singularitarian/transhumanist, so really no need to run this one.
Has anyone here tried Lojban? Has it been useful?
I recommend making a longer list of recent comments available, the way Making Light does.
If you've been working with dual n-back, what have you gotten out of it? Which version are you using?
Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.
If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?
I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.
We learned about the tumor 2 months ago, and it all happened so fast... and it's so final.
This is a reminder to those of you who are thinking about signing up for cryonics; don't wait until it's too late.
Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics; an idiot's guide to immortality.
Alexandre Borovik summarizes the Bayesian error in null hypothesis rejection method, citing the classical
J. Cohen (1994). `The Earth Is Round (p < .05)'. American Psychologist 49(12):997-1003.
The fallacy of null hypothesis rejection
If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?)
This person is a member of Congress.
Therefore, he is probably not an American.
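The error becomes vivid if you push the numbers through Bayes' theorem. A minimal sketch in Python, using rough illustrative figures (about 300 million Americans, 535 members of Congress, a world population near 6.8 billion; these are assumptions for the sake of the example, not exact data):

```python
# Illustrative (assumed) figures: all members of Congress are American.
p_american = 300e6 / 6.8e9               # P(A): a random person is American
p_congress_given_american = 535 / 300e6  # P(C|A): Congress members among Americans
p_congress = 535 / 6.8e9                 # P(C): Congress members among everyone

# Bayes' theorem: P(A|C) = P(A) * P(C|A) / P(C)
p_american_given_congress = p_american * p_congress_given_american / p_congress
print(p_american_given_congress)  # -> 1.0 (approximately)
```

Even though P(C|A) is tiny, conditioning on "is a member of Congress" makes "is an American" certain, not improbable; the syllogism above confuses P(C|A) with P(A|C).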
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
From The Rhythm of Disagreement:
Nick Bostrom, however, once asked whether it would make sense to build an Oracle AI, one that only answered questions, and ask it our questions about Friendly AI.
Has Bostrom made this proposal in anything published? I can't seem to find it on nickbostrom.com.
Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).
...It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.
Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
Transcript:
--
Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.
Astrologer: Yes, that would be a perverse thing to do, wouldn't it.
Dawkins: It would be - yes, but I mean wouldn't that be a good test?
Astrologer: A test of what?
Dawkins: Well, how accurate you are.
Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.
Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?
Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.
Dawkins: I'd have thought you'd be eager.
Astrologer: (Laughs.)
Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.
Astrologer: I just don't believe in the experiment, Richard, it's that simple.
Dawkins: Well you're in a kind of no-lose situation then, aren't you.
Astrologer: I hope so.
--
Why is the news media comfortable with lying about science?
James Hughes - with a (IMO) near-incoherent Yudkowsky critique:
A few years back I did an ethics course at university. It very quickly made me realise that both I and most of the rest of the class based our belief in the existence of objective ethics simply on a sense that ethics must exist. When I began to question this idea my teacher asked me what I expected an objective form of ethics to look like. When I said I didn't know, she asked if I would agree that a system of ethics would be objective if it could be universally calculated by any unbiased, perfectly logical being. This seemed fair enough but the problem...
Here's a silly comic about rationality.
I rather wish it was called "Irrationally Undervalues Rapid Decisions Man". Or do I?
P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.
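A minimal sketch of that failure mode, with hypothetical numbers:

```python
def posterior(p_a, p_b_given_a, p_b):
    """Bayes' theorem: P(A|B) = P(A) * P(B|A) / P(B)."""
    return p_a * p_b_given_a / p_b

# An ordinary update works fine:
print(posterior(0.5, 0.8, 0.4))  # -> 1.0

# But if you assigned P(B) = 0 and B happens anyway,
# the update is simply undefined:
try:
    posterior(0.5, 0.8, 0.0)
except ZeroDivisionError:
    print("cannot update: P(B) was 0")
```

This is the formal version of the usual advice never to assign probability exactly 0 (or 1) to any empirical proposition: a zero leaves you no way to update when the "impossible" happens.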
Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw at a perfect dartboard a dart whose shaft was so thin that its intersection with the board would be a single point. You see, when you randomly select a point f...
I am curious as to how many LWers attempt to work out and eat healthy to lengthen their life spans, especially among those who have signed up for cryonics.
A little knowledge can be a dangerous thing. At least Eliezer has previously often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU, but either suggest alternative interpretations or debunk its results entirely.
Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles point...
I found this interesting, along with the paper it discusses on children's conceptions of intelligence.
The abstract of the article:
...Two studies explored the role of implicit theories of intelligence in adolescents' mathematics achievement. In Study 1 with 373 7th graders, the belief that intelligence is malleable (incremental theory) predicted an upward trajectory in grades over the two years of junior high school, while a belief that intelligence is fixed (entity theory) predicted a flat trajectory. A mediational model including learning goals, positive beliefs about
A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:
For the "How LW is Perceived" file:
Here is an excerpt from a comments section elsewhere in the blogosphere:
In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.
Inspired by this comment by Michael Vassar:
http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think, it was kind of written confusingly parallel with the names of authors but I don't think there was ever an author Ga...
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself by me summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self-aware? Does anyone have a similar anecdote?
(This is inspired by Shannon's mention of her child exploring her sense of s...
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
It's often s...
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.
Oops.
Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations of mathematics material, symbolic logic, and really basic set theory, and I usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, advanced placement) and never got to calculus. I don't expect to become a math whiz or anything; I'd just like to understand the science I read better. Anyone have good advice?
When people here say they are signed up for cryonics, do they systematically mean "signed up with the people who contract to freeze you and signed up with an instrument for funding suspension, such as life insurance" ?
I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.
Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term ?
Feature request, feel free to ignore if it is a big deal or requested before.
When messaging people back and forth, it would be nifty to be able to see the thread. I see glimpses of this feature, but it doesn't seem fully implemented.
An interesting application of near/far:
Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. There are past events whose evidence has been erased.
First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly ...
Why was this comment downvoted to -4? Seems to me it's a legitimate question, from a fairly new poster.
Ask Peter Norvig anything: http://www.reddit.com/r/programming/comments/auvxf/ask_peter_norvig_anything/
Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm
In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually commu...
Ray Kurzweil Responds to the Issue of Accuracy of His Predictions
http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html
How much of Eliezer's 2001 FAI document is still advocated? eg. Wisdom tournaments and bugs in the code.
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial but it seems like the better well structured the cognitive foundation is the easier it is to adopt, discard and manipulate new ideas.
I understand that is likely a fairly meta topic and woul...
Possibly dumb question but... can anyone here explain to me the difference between Minimum Message Length and Minimum Description Length?
I've looked at the wikipedia pages for both, and I'm still not getting it.
Thanks.
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit ...
Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).
How do we fix it, so I don't have to start sending off resumes?
And for one short moment, in the wee morning hours, MrHen takes up the whole damn Recent Comments section.
I assume dropping two walls of text and a handful of other lengthy comments isn't against protocol. Apologies if I annoy anyone.
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
Today's Questionable Content has a brief Singularity shoutout (in its typical smart-but-silly style).
I recently found an article that may be of interest to Less Wrong readers:
The latest neuroscience research suggests spreading resolutions out over time is the best approach
The article also mentions a study in which overloading the prefrontal cortex with other tasks reduces people's willpower.
(should I repost this link to next month's open thread? not many people are likely to see it here)
Inorganic dust with lifelike qualities: http://www.sciencedaily.com/releases/2007/08/070814150630.htm
So I am back in college and I am trying to use my time to my best advantage, mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here, among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them, because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that actio...
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been a norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
For some reason, my IP was banned on the LessWrong Wiki. Apparently this is the reason:
Autoblocked because your IP address has been recently used by "Bella".
Any idea how this happens and how I can prevent it from happening again?
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
This doesn't make much sense, though it might not be a bad thing.
Paul Bucheit -- Evaluating risk and opportunity (as a human)
http://paulbuchheit.blogspot.com/2009/09/evaluating-risk-and-opportunity-as.html
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for ...
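One way to see why "it was possible" can't carry you to 80% is to plug numbers into Bayes' rule. The figures below are purely illustrative, not claims about H1N1: even granting a generous prior for the conspiracy hypothesis, evidence that fits both hypotheses about equally well (a new flu strain appears either way) barely moves the posterior.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) from a prior and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Hypothetical numbers: a 1-in-1000 prior that pharma engineered the virus,
# and evidence only slightly more likely under the conspiracy hypothesis.
print(posterior(0.001, 0.9, 0.8))  # posterior is still roughly 0.0011
```

Mere possibility raises the likelihood ratio only a little; to reach 80% the evidence would have to be hundreds of times more probable under the conspiracy hypothesis than under the mundane one.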
Can someone point me towards the calculations people have been doing about the expected gain from donating to the SIAI, in lives per dollar?
Edit: Never mind. I failed to find the video previously, but formulating a good question made me think of a good search term.
The Edge Annual Question 2010: How is the internet changing the way you think?
I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
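For reference, the integral in question falls straight out of integration by parts with $u = \ln x$ and $dv = dx$:

```latex
\int \ln x \, dx
  = x \ln x - \int x \cdot \frac{1}{x} \, dx
  = x \ln x - x + C
```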
Ethical problem. It occurred to me that there's an easy, obvious way to make money from slot machines: buy stock in a casino and wait for the dividends. Now, is this ethically OK? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.
It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.
"Imagine the human race gets wiped out, but you want to transmit the knowledge acquired so far to succeeding intelligent races (or aliens). How would you do it?"
I got this question while reading a dystopian novel about a world after nuclear war.
I recently had to have some minor surgery. However, there's a body of thought that says it's safe to watch and wait for symptoms, and only have surgery later. There's a peer-reviewed (I assume) paper supporting this position.
Upon reading this paper I found what looked like a statistical error. Looking at outcomes between two groups, they report p = 0.52, but doing the sums myself I got p = 0.053. For this reason, I went and had the surgery.
Since I'm just a novice at statistics, I was wondering if I had in fact got it right - it's disturbing to think that a...
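One quick sanity check is to redo the comparison as a two-proportion z-test. This is a minimal sketch under the assumption that the paper compared simple outcome proportions between two groups; the counts below are hypothetical, not taken from the paper:

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: 5/100 bad outcomes in one arm vs 15/100 in the other.
print(two_proportion_p(5, 100, 15, 100))
```

If the paper used a different test (Fisher's exact, chi-square, survival analysis), the exact p-value will differ, but a factor-of-ten discrepancy like 0.52 vs 0.053 looks more like a transposed decimal than a methodological disagreement, which is worth writing to the authors about.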
Laser fusion test results raise energy hopes: http://news.bbc.co.uk/2/hi/science/nature/8485669.stm
I'll track down the paper from Science on request.
Does anybody have any updates as to the claims made against Alcor, i.e. the Tuna Can incident? I've tried a bunch of searches, but haven't been able to find anything conclusive as to the veracity of the claims.
Does a Turing chatbot deserve recognition as a person?
(Turing chatbot = bot that can pass the Turing test... 50% of the time? 95% of the time? 99% of the time?)
oh but surely there has got to be some sort of simple cure for that sickness where you should be sleeping but you just stay up wanting to scream
From Pharyngula: Bertrand Russell on God. Some of the things he says about what to believe and why seem rather familiar...
A discussion of Cass Sunstein's proposal to flood the haunts of conspiracy theorists with secret government agents who would try to convince the conspiracy theorists that there are no conspiracies:
Is anyone aware of research on biased perception of errors in one's own work vs. others'? It seems like a lot of work should have been done on this, but I haven't been able to find any (only things on evaluating traits).
HN discussion of cognitive flaws related to gaming: http://news.ycombinator.com/item?id=1057351
I'll ask the question again here -- does anyone know of some more extensive writing on the subject of cognitive flaws related to gaming? Or something recent on the psychology of rewards?
I've been downvoted quite often recently and since I'm actually here to learn something I would like to better understand the reasons behind it.
Specifically I would like to hear your opinion on the following comment of mine: "I'll be the judge of that." This was given as an answer to someone suggesting how I should use my time.
http://lesswrong.com/lw/1lv/the_wannabe_rational/1gea?context=1#comments
Do you think that a downvote was justified and if so why?
As far as candidates for building AI other than the Singularity Institute go, are there any more likely than Google? Surely they want to build one.
They have a lot of really smart AI researchers working on hard problems within the world's largest dataset, and who knows what can happen when you combine that with 20% time. Does Google controlling the AI scare you?
The US military or any government making the AI seems a recipe for certain destruction, but I'm not so sure about Google.
Mike Gibson has a great and interesting question. How would Bayesian methodology address this? Might this be an information cascade?
I have come across the online novel The Metamorphosis of Prime Intellect. (Contains depictions of assorted things squeamish people should not read.) It has an AI in it that is this close to being Friendly.
Would we (Earth) show up in our universe's stats pages?
http://www.gabrielweinberg.com/blog/2010/01/would-we-earth-show-up-in-our-universes-stats-pages.html
Hey, exactly 500 comments.
So, elsewhere someone just brought up moral luck. I'm wondering how this relates to the Yudkowskian view on morality (I forget what he called it), and I'd like to invite someone to think about it and perhaps post on it. If no one else does so, I might be motivated to do so eventually. There might be some potential to shed some real light on the issue of moral luck--specifically the extent of the validity or otherwise of the Control Principle--with reference to Yudkowsky's framework.
How do people here consume Less Wrong? I just started reading and am looking for a good way to stay on top of posts and comments. Do you periodically check the website? Do you use an RSS feed? (which?) Or something else?
http://www.youtube.com/watch?v=vyfPZLb3kqc (Edit: disregard what the guy says after he explains the concept. I'll find a better link later.)
Does this allow us to cheat on the secret sauce in the AGI recipe?
I also see it as a limitless supply of statistics about humans. Prediction markets! "Will you bet this penny I'm giving you on X?"
What else can we use it for?
Suppose you have an agent with k bits of knowledge, that is given n bits of information. You can imagine it's an agent shown a digitized picture. The agent will infer u bits of useful information from those n bits. u is, critically, to be measured in an agent-independent way. u is the number of words the agent will need to use if the agent is going to write a book about those n bits for a general audience.
What can be said about the function u(k, n), relating the number of useful bits extracted to both the number of bits presented, and the number of bits...
What is the probability that this is the ultimate base layer of reality?
Eliezer gave the joke answer to this question, because this is something that seems impossible to know.
However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability to AI by 2070, with the cumulative probability approaching .99 even before then. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't...
For Darwin’s sake, reject “Darwin-ism” (and other pernicious terms)
A short article explaining why use of the terms "Darwinism" and "theory of evolution" is harmful to public understanding.
My life is priceless to me of course, but what is it worth to the government? My friends? The average person? You?
How much are you willing to pay me to continue reading and commenting on Less Wrong? :)
If you mean, "what would we pay to save your life", you could probably take up a respectable collection if you credibly identified a threat to your health that could be fixed with a medium-sized amount of money.
If you mean, "will we bribe you to hang out with us"... uh... no.
Drawing on the true prisoner's dilemma, the story arc of Three Worlds Collide, and the recent Avatar:
In the case of Avatar, humans did cooperate in the prisoner's dilemma first: we tried the schooling and medicine thing, and apparently it was rejected on the Na'vi side. Differences were still so great that dream-walkers (Na'vi avatars of humans) were being derided with statements like 'a rock sees more'.
So, the question is, when we cooperate with an alien species, will they even recognise it as cooperation? How does that change the contours of a decis...
Link pointer: http://www.eurekalert.org/pub_releases/2010-01/hu-qcc010810.php Quantum computer calculates exact energy of molecular hydrogen. http://www.nature.com/nchem/journal/vaop/ncurrent/abs/nchem.483.html
The submitter on Hacker News: "This is arguably one of the most important breakthroughs ever in the field of computing."
Imagine how much easier this comment thread would be to browse if it were part of a subreddit here.
Since I don't have self-contained results, I can't describe what I'm searching for concisely, and the working hypotheses and hunches are too messy to summarize in a blog comment. I'll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.
Yes, this is not very helpful. Consider the question: what is the difference between (1) the preference, (2) the strategy that the agent will follow, and (3) the whole of the agent's algorithm? Histories of the universe could play a role in the semantics of (1), but they are problematic in principle, because we don't know, nor will we ever know with certainty, the true laws of the universe. And what we really want is to get to (3), not (1), but with a good enough understanding of (1) that we know (3) to be based on our (1).
Thanks. I look forward to that.
I don't understand what you mean here, and I think maybe you misunderstood something I said earlier. Here's what I wrote in the UDT1 post:
... (read more)