It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:
LessWrong is writing a story, called "Harry Potter and the Methods of Rationality". It's just about what you'd expect; absolutely full of ideas from LW.com. I know it's not the usual fare for this site, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.
Who knows, maybe the author will even decide to decloak and tell us who to thank?
My fellow Earthicans, as I discuss in my book Earth In The Balance and the much more popular Harry Potter And The Balance Of Earth, we need to defend our planet against pollution. As well as dark wizards.
-- Al Gore on Futurama
Yeah, I don't think I can plausibly deny responsibility for this one.
Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...
Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong" on fanfiction.net.
Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than on FF.net, since a flood of LW.com reviewers would probably sound rather strange to them.
"Oh, dear. This has never happened before..."
Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)
I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!
Also, there is a sharply limited supply of people who speak Japanese, Hebrew, English, math, rationality, and fiction all at once. If it wasn't you, it was someone making a concerted effort to impersonate you.
This Harry and Ender are both terrified of becoming monsters. Both have a killer instinct. Both are much smarter than most of their peers. Ender's two sides are reflected in the monstrous Peter and the loving Valentine. The two sides of Potter-Evans-Verres are reflected in Draco and Hermione. The environments are of course very similar: both are in very abnormal boarding schools teaching them things regular kids don't learn.
Oh, and now the Defense Against the Dark Arts prof is going to start forming "armies" for practicing what is now called "Battle Magic" (like the Battle Room!).
And the last chapter's disclaimer?
The enemy's gate is Rowling.
If the parallels aren't intentional I'm going insane.
It's almost done, actually. Here's a sneak preview of the next chapter:
...Dumbledore peered over his desk at young Harry, twinkling in a kindly sort of way. The boy had come to him with a terribly intense look on his childish face - Dumbledore hoped that whatever this matter was, it wasn't too serious. Harry was far too young for his life trials to be starting already. "What was it you wished to speak to me about, Harry?"
Harry James Potter-Evans-Verres leaned forward in his chair, looking bleak. "Headmaster, I got a sharp pain in my scar during the Sorting Feast. Considering how and where I got this scar, it didn't seem like the sort of thing I should just ignore. I thought at first it was because of Professor Snape, but I followed the Baconian experimental method which is to find the conditions for both the presence and the absence of the phenomenon, and I've determined that my scar hurts if and only if I'm facing the back of Professor Quirrell's head, whatever's under his turban. Now it could be that my scar is sensitive to something else, like Dark Arts in general, but I think we should provisionally assume the worst - You-Know-Who."
"Great heavens
I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?
Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.
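The arithmetic backing this up is short. Here's a minimal sketch (Python, illustrative numbers only, not anything from the comment above) of how quickly a coin-calling test separates "psychic" from "lucky":

```python
from math import comb

# Under the null hypothesis (no powers), each call is right with p = 0.5,
# so the chance of getting at least k of n calls right is a binomial tail.
def tail_prob(n, k):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(tail_prob(20, 15))  # ~0.021: 15/20 correct calls is already surprising
print(tail_prob(20, 17))  # ~0.0013: 17/20 would demand a real explanation
```

Twenty flips take a minute, which is the whole point: the test is almost free.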
A "Jedi"? Obi-Wan Kenobi?
I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.
Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.
After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.
Background: Weekdays I typically sleep for ~6 hours, with two .5 hour naps in the middle of the day (once at lunch and once when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.
I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.
The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lie there for an hour or more), but now I'm almost always out cold within 20 minutes.
I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.
Getting up in the morning is not noticeably easier.
No evidence that it's habit forming. I'm currently not taking it on weekends (I found mys...
I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.
First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.
ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.
Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?
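For concreteness, here's a minimal sketch of the calculation the doomsday argument is implicitly running; the rank and the two hypotheses are illustrative assumptions, not anyone's actual figures:

```python
# Doomsday-style update: treat my birth rank as a uniform draw from 1..N
# under each hypothesis "N humans ever live" (the very self-sampling step
# the comment above questions), so the likelihood of observing rank X is 1/N.

def doomsday_posterior(rank, hypotheses, prior):
    """Posterior over total-population hypotheses given one birth rank."""
    likelihoods = [1.0 / n if n >= rank else 0.0 for n in hypotheses]
    unnorm = [l * p for l, p in zip(likelihoods, prior)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Rank ~6e10 (roughly the 60-billionth human); "doom soon" (1e11 total)
# vs "doom late" (1e13 total) at equal prior odds:
print(doomsday_posterior(6e10, [1e11, 1e13], [0.5, 0.5]))
# -> [~0.99, ~0.01]: the single sample shifts the odds 100:1
```

Whether that uniform-draw step is justified is exactly the question.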
http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies
The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars — the leaders — resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars — the subordinates — showed the usual signs of stress and slower reaction times. "Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal," Carney explains.
A couple of articles on the benefits of believing in free will:
Vohs and Schooler, "The Value of Believing in Free Will"
Baumeister et al., "Prosocial Benefits of Feeling Free"
The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than after the determinism statements.
References from a Sci. Am. article.
[1] Cough.
ETA: This is also relevant.
I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.
BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without having it show up in recent posts, so I don't have to post a draft elsewhere to get feedback before I officially publish it.
The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:
5th View cafe, on top of Waterstone's bookstore near Piccadilly Circus, Sunday, April 4 at 4PM
Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.
EDIT: Sorry, Sunday, not Monday.
I recently found something that may be of interest to LW readers:
This post at the Lifeboat Foundation blog announces two tools for testing your "Risk Intelligence":
The Risk Intelligence Game, which presents fifty statements about science, history, geography, and so on; your task is to say how likely you think it is that each statement is true. It then calculates your risk intelligence quotient (RQ) on the basis of your estimates.
The Prediction Game, which provides you with a bunch of statements, and your task is to say how likely...
Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.
If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.
Is there any evidence that Bruce Bueno de Mesquita is anything other than a total fraud?
A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.
US Government admits that multiple-time convicted felon Pfizer is too big to fail. http://www.cnn.com/2010/HEALTH/04/02/pfizer.bextra/index.html?hpt=Sbin
Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?
Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics: http://news.ycombinator.com/item?id=1239055
Hacker News rather than Reddit this time, which makes it a little easier.
My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I'm still a vegetarian. Clearly I'm on shaky ground, since my beliefs weren't formed from evidence, but purely from nurture.
Interestingly, my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish); however, my rationalization for not eating meat is that it is the killing of animals that is wrong (generalising from the belief that killing humans is worse tha...
I hope this isn't a vegetarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.
Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.
Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I'm concerned that I've anchored at a point closer to favoring the death penalty than I should. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about differences in social tier. Early in my college time, t...
My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?
Virtually zero chance of recidivism? True for both. Very expensive? Check. Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed). Could be considered immoral to do something so severe to a person? Check. Deprives people of an "inalienable" right? Check (life/liberty). Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option). Applied disproportionately to certain groups? I think so, though I don't know the research. Strong deterrent? It seems like the death penalty should be a bit...
As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.
Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.
Discuss.
Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.
...These results provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults. This was true for both the ‘general cognitive training’ group (experimental group 2) who practised tests of memory, attention, visuospatial processing and mathematics similar to many of those found in commercial brain trainers, and for a more focused training group (experimental group 1) w
Rats have some ability to distinguish between correlation and causation.
...To get back to the rat study—it's very simple actually. What I did is: I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food. So they might have used Pavlovian conditioning; just as I said, Pavlovian conditioning might be the substrate by which animals learn to piece together spatial maps and maybe causal maps as well. If they treat the light as a common cause of the tone and of food, they see [hear] the ton
David Chalmers has written up a paper based on the talk he gave at the 2009 Singularity Summit:
From the blog post where he announced the paper:
...The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into comp
PDF: "Are black hole starships possible?"
This paper examines the possibility of using miniature black holes to convert matter to energy via Hawking radiation and propel ships with it. Pretty interesting, I think.
I'm no physicist and not very math literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target behind a barrier of ridiculous energy density. The paper, rudimentary as it is, does not discuss this feeding issue.
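For a rough sense of the numbers, here's a back-of-the-envelope sketch using the textbook Schwarzschild-radius and Hawking-power formulas; my own estimate, not figures from the paper:

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

r = 1e-18                      # event horizon radius: 1 attometer
M = r * c**2 / (2 * G)         # Schwarzschild mass for that radius
P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)  # Hawking power

print(f"mass  ~ {M:.1e} kg")   # ~6.7e8 kg, hundreds of thousands of tonnes
print(f"power ~ {P:.1e} W")    # ~8e14 W: petajoules per second, as above
```

That's petawatt-scale output from an attometer-scale target, which is why the feeding problem looks so hard.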
Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.
FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."
This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < sma...
I'd like to plug a Facebook group:
Once we reach 4,096 members, everyone will donate $256 to SingInst.org.
Folks may also be interested in David Robert's group:
My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?
ETA: Their father is non-religious. I don't know why he's putting up with this.
I'd put it differently: There's nothing intrinsically wrong with a 16-year-old and a 30-year-old having sex, any more than there is anything intrinsically wrong with two 30-year-olds having sex. There may be extrinsic factors in either case that make it problematic (somebody's being coerced or forced, somebody's elsewhere married, somebody's intoxicated, somebody's being manipulative to get the sex). The way our society is set up, the first case is dramatically more likely to feature such extrinsic factors than the second case.
"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."
Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong
What do you value?
Here are some alternate phrasings in an attempt to find the same or similar reasoning (it is not clear to me whether these are separate concepts):
Here's another article asking a similar question: Post Your Utility Function. I think people did a poor job answering it back then.
Well, User:Rain, that's about the story of my existence right there. What kinds of paperclips are the right ones? What tradeoffs should I make?
However, regarding the specific matters you bring up, they are mostly irrelevant. Yes, there could be some conceivable situation in which I have to trade off paperclips now against paperclips later. But the way it usually works is that once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever. Also, it's obviously the number of paperclips that matters, and the constraint on bizarre paperclips is obviously that they have to be able to (counterfactually) hold sheets of paper together.
If you want to get past this abstract philosophizing and on to some concrete problems, it would be better to talk about the dilemma that User:h-H posed to me, in which I must consider alternate models of paperclipping that don't have the shape of standard paperclips. Here's my recent progress on thinking about the issue.
My current difficulty is extrapolating my values to cover unexpected situations like this, starting from the simplest algorithm I can find which generates my current preference. The problem is that...
I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationalism. Here's the toughest beast I've yet encountered, not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).
...A teacher announces that there will be a surprise test next week. A student objects that this
Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.
I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my en...
I recently got into some arguments with foodies I know on the merits (or lack thereof) of organic / local / free-range / etc. food, and this is a topic where I find it very difficult to find sources of information that I trust as reflective of some sort of expert consensus (insofar as one can be said to exist.) Does anyone have any recommendations for books or articles on nutrition/health that hold up under critical scrutiny? I trust a lot of you as filters on these issues.
Having read the quantum physics sequence I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively accor...
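For anyone who wants to start experimenting, here's a minimal sketch of one standard numerical approach (the split-step Fourier method) for a single particle in one dimension, in units where hbar = m = 1; all parameters are illustrative, and this isn't necessarily the piecewise-linear scheme described above:

```python
import numpy as np

# Split-step Fourier evolution of the 1D Schroedinger equation:
# alternate half-steps in the potential (diagonal in x) with full
# steps in the kinetic term (diagonal in k, applied via FFT).
N, L, dt = 1024, 100.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers

V = 0.5 * x**2                                  # harmonic potential
psi = np.exp(-(x + 5.0) ** 2)                   # displaced Gaussian packet
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))  # normalize

half_V = np.exp(-0.5j * V * dt)                 # half-step in V
full_T = np.exp(-0.5j * k**2 * dt)              # full step in k**2 / 2

for _ in range(1000):                           # evolve to t = 10
    psi = half_V * psi
    psi = np.fft.ifft(full_T * np.fft.fft(psi))
    psi = half_V * psi

print("norm:", np.sum(np.abs(psi) ** 2) * (L / N))  # stays ~1 (unitary)
```

The packet sloshes back and forth in the well; plotting abs(psi)**2 every few hundred steps makes the dynamics visible.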
Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, YouTube playlist. Part One. 8 parts, ~75 minutes.
Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.
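His central piece of arithmetic fits in a few lines; a quick sketch:

```python
import math

# Steady percentage growth doubles in roughly 70/percent years,
# which is why modest-sounding rates compound alarmingly fast.
def doubling_time(percent_per_year):
    return math.log(2) / math.log(1 + percent_per_year / 100)

for rate in (1, 2, 4, 7):
    print(f"{rate}%/year doubles in ~{doubling_time(rate):.0f} years")
# 1% -> ~70y, 2% -> ~35y, 4% -> ~18y, 7% -> ~10y
```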
An extensive observation-based discussion of why people leave cults. Worth reading, not just for the details, but because it makes very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!
People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove them into the cult get resolved, and/or their lives change in ways which show that the cult isn't working for them.
Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe? Like, what kinds of things follow normal distributions and why, why power laws emerge everywhere, why scale-free networks show up all over the place, etc. etc.
Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.
For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:
So You Think You Have a Power Law — Well Isn't That Special?
Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line
Power-law distributions in empirical data
Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.8169
Another very relevant and readable paper:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.6305
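To make the methodological point concrete, here's a minimal sketch contrasting the maximum-likelihood estimate of a power-law exponent with the naive log-log regression these papers criticize; synthetic data, and see Clauset, Shalizi & Newman for the full recipe (x_min selection, goodness-of-fit tests, comparison against alternatives like the lognormal):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min, n = 2.5, 1.0, 10_000
# Sample a continuous power law by inverting its CDF:
x = x_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))

# Maximum-likelihood estimate of the exponent (the right way):
alpha_mle = 1 + n / np.sum(np.log(x / x_min))

# The naive way: least-squares line through the log-log histogram.
counts, edges = np.histogram(x, bins=50)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)

print(f"true exponent:       {alpha_true}")
print(f"MLE estimate:        {alpha_mle:.2f}")  # reliably close to 2.5
print(f"regression estimate: {-slope:.2f}")     # noisy and systematically off
```

And that's on data which actually is a power law; on real data the regression happily fits straight-ish lines to distributions that aren't power laws at all, which is Shalizi's complaint.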
Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.
It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are ...
Are there any Germans, preferably from around Stuttgart, who are interested in forming a society for the advancement of rational thought? Please PM me.
I know I asked this yesterday, but I was hoping someone in the Bay Area (or otherwise familiar) could answer this:
Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.
It all looks pretty flaky to me at this point, but I figure some of you must ...
A couple of physics questions, if anyone will indulge me:
Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality? I was browsing A Brief History of Time at a bookstore, and the chapter on the Heisenberg uncertainty principle seemed to suggest the latter - what I read of it, anyway.
If this is just a dumb question for some reason, feel free to let me know - I've only taken two classes in physics, and we never escaped the Newtonian world.
On a related note, I'm looking for a ...
I'm looking at the question of whether it's certain that getting an FAI is a matter of zeroing in directly on a tiny percentage of AI-space.
It seems to me that an underlying premise is that there's no reason for a GAI to be Friendly, so Friendliness has to be carefully built into its goals. This isn't unreasonable, but there might be non-obvious pulls towards or away from Friendliness, and if they exist, they need to be considered. At the very least, there may be general moral considerations which incline towards Friendliness, and which would be...
I wonder how alarming people find this? I guess that if something fooms, this will provide the infrastructure for an instant world takeover. OTOH, the "if" remains as large as ever.
...RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment.
Bringing a new meaning to the phrase "experience is the best teacher", the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, pavi
CFS (call for submissions): creative non-fiction about immortality
BOOK PROJECT: Immortality postmark deadline August 6, 2010
For a new book project to be published by Southern Methodist University Press, entitled "Immortality," we're seeking new essays from a variety of perspectives on recent scientific developments and the likelihood, merits and ramifications of biological immortality. We're looking for essays by writers, physicians, scientists, philosophers, clergy--anyone with an imagination, a vision of the future, and a dream (or fear) of living forever.
Essays must...
How does the notion of time consistency in decision theory deal with the possibility of changes to our brains/source code? For example, suppose I know that my brain is going to be forcibly re-written in 10 minutes, and that I cannot change this fact. Then decisions I make after that modification will differ from those I make now, in the presence of the same information (?).
If you were going to predict the emergence of AGI by looking at progress towards it over the past 40 years and extrapolate into the future, then what parameter(s) would you measure and extrapolate?
Kurzweil et al. measure raw compute power in flops/$, but as has been much discussed on LessWrong there is more to AI than raw compute power. Another popular approach is to chart progress in terms of the animal kingdom, saying things like "X years ago computers were as smart as jellyfish, now they're as smart as a mouse, soon we'll be at human level", b...
In spite of the rather aggressive signaling here in favor of atheism, I'm still an agnostic on the grounds that it isn't likely that we know what the universe is ultimately made of.
I'm even willing to bet that there's something at least as weird as quantum physics waiting to be discovered.
Discussion here has led me to think that whatever the universe is made of, it isn't all that likely to lead to a conclusion there's a God as commonly conceived, though if we're living in a simulation, whoever is running it may well have something like God-like omnipotence...
Mass Driver's recent comment about developing the US Constitution being like the invention of a Friendly AI opens up the possibility of a mostly Friendly AI -- an AI which isn't perfectly Friendly, but which has the ability to self-correct.
Is it more possible to have an AI which never smiley-faces or paperclips or falls into errors we can't think of than to have an AI which starts to screw up, but can realize it and stops?
Is anybody interested in finding a study buddy for the material on Less Wrong? I think a lot of the material is really deep -- sometimes hard to internalize and apply to your own life even if you're articulate and intelligent -- and that we would benefit from having a partner to go over the material with, ask tough questions, build trust, and basically learn the art of rationality together. On the off chance that you find Jewish analogies interesting or helpful, I'm basically looking for a chevruta partner, although the sacredish text in question would be the Less Wrong sequences instead of the Bible.
I decided the following quote wasn't up to Quotes Thread standard, but worth remarking on here:
Read a book a day.
-- Arthur C. Clarke, quoted in "Science Fictionisms".
I've never managed to do this. I've sometimes read a book in a day, but never day after day, although I once heard Jack Cohen say he habitually read three SF books and three non-fiction books a week.
How many books a week do you read, and what sort of books? (No lists of recommended titles, please.)
Question about Mach's principle and relativity, and some scattered food for thought.
Under Mach and relativity, it is only relative motion, including acceleration, that matters. Using any frame of reference, you predict the same results. GR also says that acceleration is indistinguishable from being in a gravitational field.
However, accelerations have one observable impact: they break things. So let's say I entered the gravitational field of a REALLY high-g planet. That can induce a force on me that breaks my bones. Yet I can define myself as being at ...
Sam Harris gave a TED talk a couple months ago, but I haven't seen it linked here. The title is Science can answer moral questions.
I was disallowed from posting this on the LessWrong subreddit, so here it is on the LessWrong mainland: Shoeperstimulus
An Open Thread: a place for things foolishly April, and other assorted discussions.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.