If it's worth saying, but not worth its own post, even in Discussion, it goes here.

Open Thread, December 1-15, 2012

A couple of days ago, GiveWell updated their top charity picks. AMF is still on top, but GiveDirectly bumped SCI from #2 to #3.

They also (very) tentatively recommend splitting your donation among the three: 70% to AMF, 20% to GiveDirectly, and 10% to SCI. The arguments about this in the blog post and comments are pretty interesting. (But I wouldn't stress too much about it: harder choices matter less).

Nassim Nicholas Taleb argues: The future will not be cool

http://www.salon.com/2012/12/01/nassim_nicholas_taleb_the_future_will_not_be_cool/

Taleb's characterization of "technothinkers" as cultural ignoramuses doesn't sound quite right to me, because they tend to read and assimilate the writings of learned (in the liberal arts sense) fantasy and science fiction writers. In this way they at least get some exposure to humane culture once removed, if they don't immerse themselves in it directly. J.R.R. Tolkien taught Anglo-Saxon language and literatur... (read more)

I think the real difference between people like Taleb and the techno-optimists is that we think the present is cool. He brags about going to dinner in minimalist shoes, and eating food cooked over a fire, whereas I think it's awesome that I can heat things up instantly in a microwave oven, and do just about anything in meticulously engineered and perfectly fitted, yet cheaply mass-produced, running shoes without worrying about damaging my feet. I also like keyboards, and access to the accumulated knowledge of humanity from anywhere, and contact lenses. And I thought it was funny when he said that condoms were one of the most important new technologies, but aren't talked about much, as if to imply that condoms aren't cool. I think that condoms are cool! I remember when I first got condoms, and took one out to play with. After testing it a couple different ways, I thought: "How does anyone manage to break one of these!?" It's easy to extrapolate that no "cool" technology will exist in the future, if you don't acknowledge that any cool technology currently exists.

But I think Taleb's piece is valuable, because it illustrates what we are up against, as people tryin... (read more)

You might get a different perspective on the present when you reach your 50's, as I have. I used Amazon's book-previewing service to read parts of W. Patrick McCray's book, The Visioneers, and I realized that I could nearly have written that book myself because my life has intersected with the story he tells at several points. McCray focuses on Gerard K. O'Neill and Eric Drexler, and in my Amazon review I pointed out that after a generation, or nearly two in O'Neill's case, we can get the impression that their respective ideas don't work. No one has gotten any closer to becoming a space colonist since the 1970's, and we haven't seen the nanomachines Drexler promised us in the 1980's which can produce abundance and make us "immortal."

So I suspect you youngsters will probably have a similar letdown waiting for you when you reach your 40's and 50's, and realize that you'll wind up aging and dying like everyone else without having any technological miracles to rescue you.

http://www.amazon.com/The-Visioneers-Scientists-Nanotechnologies-Limitless/dp/0691139830/

A lot of young people, including me, seem to be getting a lot of "man, we're really living in the future" kind of emotional reactions relatively frequently. E.g. I remember that as a kid, I imagined having a Star Trek-style combined communicator and tricorder so that if someone wanted to know where I was, I could snap them a picture of my location and send it to them instantly. To me, that felt cool and science fictiony. Today, not only can even the cheapest cell phone do that, but many phones can be set up to constantly share their location to all of one's friends.

Or back in the era of modems and dial-up Internet, the notion of having several gigabytes of e-mail storage, wireless broadband Internet, or a website hosting and streaming the videos of anyone who wanted to upload them all felt obviously unrealistic and impossible. Today everyone takes the existence of those for granted. And with Google Glass, I expect augmented reality to finally become commonplace and insert itself into our daily lives just as quickly as smartphones and YouTube did.

And since we're talking about Google, self-driving cars!

Or Planetary Resources. Or working brain implants. Or computers beating ... (read more)

I'm 55 and I think the present is more shocking now than it was in the 1970s and 1980s. For me, the 70s and 80s were about presaging modern times. I think the first time I could look up the card catalog at my local library, ~1986 on gopher, I began to believe viscerally that all this internet stuff and computers was going to seriously matter SOON. Within a few months of that I saw my first webpage and that literally (by which of course I mean figuratively) knocked me into the next century. I was flabbergasted.

Part of what was so shocking about being shocked was that it was, in some weird sense, exactly what I expected. I had played with HyperCard on Macs years earlier and the early web was just essentially a networked extension of that. In my science fiction youth, I had always known or believed that knowledge would be ubiquitously available. I could summarize by saying there were no electronics in Star Trek (the original) that seemed unreasonable, from talking computers, big displays, tricorders and communicators. To me, faster-than-light travel, intelligent species all over the universe that looked and acted like made-up humans, and the transporter all seemed unreason... (read more)

9NancyLebovitz
I'm 59. It didn't seem to me as though things changed very much until the 90's. Microwaves and transistor radios are very nice, but not the same sort of qualitative jump as getting online. And now we're in an era where it's routine to learn about extrasolar planets-- admittedly not as practical as access to the web, but still amazing. I'm not sure whether we're careening towards a singularity, though I admit that self-driving cars are showing up much earlier than I expected. Did anyone else expect that self-driving cars would be so much easier than natural language?
3gwern
I was very surprised. I had been using Google Translate and before that Babel Fish for years, and expected them to slowly incrementally improve as they kept on doing; self-driving cars, on the other hand, had essentially no visible improvement to me in the 1990s and the 2000s essentially up to the second DARPA challenge where (to me) they did the proverbial '0 to 60'.
3[anonymous]
Not I -- they seem like different kinds of messy. Self-driving cars have to deal with the messy, unpredictable natural world, but within a fairly narrow set of constraints. Many very simple organisms can find their way along a course while avoiding obstacles and harm; driving obviously isn't trivial to automate, but it just seems orders of magnitude easier than automating a system that can effectively interface with the behavior-and-communication protocols of eusocial apes, as it were.
2TheOtherDave
I have always expected computers that were as able to navigate a car to a typical real-world destination as an average human driver to be easier to build than computers that were as able to manage a typical real-world conversation as an average human native speaker. That said, there's a huge range of goalposts in the realm of "natural language", some of which I expected to be a lot easier than they seem to be.
2mwengler
I had access to a basic-programmable shared time teletype '73 & '74, dial-up and a local IBM (we loaded cards, got printouts of results) '74-'78 @ Swarthmore College, programmed in Fortran for radioastronomers '78-'80 and so on... I always took computers for granted and assumed through that entire time period that it was "too late" to get in on the ground floor because everybody already knew. I never realized before now how lucky I was, how little excuse I have for not being rich.

If by "expect" you mean BEFORE I knew the result? :) It is very hard to make predictions, ESPECIALLY about the future. Now I didn't anticipate this would happen, but as it happens it seems very sensible. Stuff we were particularly evolved to do is more complex than stuff we use our neocortex for, stuff we were not particularly evolved to do. I think we systematically underestimate how hard language is because we have all sorts of evolutionarily provided "black boxes" to help us along that we seem blind to until we try to duplicate the function outside our heads. Driving, on the other hand, we are not particularly well evolved to do, so we have had to make it so simple that even a neocortex can do it. Probably the hardest part of automated driving is bringing the situational awareness into the machine driving the car: interpreting camera images to tell what a stoplight is doing, where the other cars are and how they are moving, and so on, which all recapitulate things we are well evolved to do.

But no, automated driving before relatively natural language interfaces was a shocking result to me as well. And I can't WAIT to get one of those cars. Although my daughter getting her learner's permit in half a year is almost as good (what do I care whether Google drives me around or Julia does?)
1mwengler
In an amazing coincidence, soon after seeing your comment I came across a Hacker News link that included this quote:
0CronoDAS
I'm not surprised that it's easier, but I also didn't expect to see self-driving cars that worked.
0NancyLebovitz
Does this imply that you expected natural language to be impossible?
2CronoDAS
You know, I actually don't know! It can't literally be impossible, because humans do it, but artificial natural language understanding seemed to me like the kind of thing that couldn't happen without either a major conceptual breakthrough or a ridiculous amount of grunt work done by humans, like the CYC project is seeking to do - input, by hand, into a database everything a typical 4-year-old might learn by experiencing the world. On the other hand, if by "natural language" you mean something like "really good Zork-style interactive fiction parser", that might be a bit less difficult than making a computer that can pass a high school English course. And I'm really boggled that a computer can play Jeopardy! successfully. Although, to really be a fair competition, the computer shouldn't be given any direct electronic inputs; if the humans have to use their eyes and ears to know what the categories and "answers" are, then the computer should have to use a video camera and microphone, too.
0A1987dM
The first time I used Google Translate, a couple years ago, I was astonished how good it was. Ten years earlier I thought it would be nearly impossible to do something like that within the next half century.
5CronoDAS
Yeah, it's interesting the trick they used - they basically used translated books, rather than dictionaries, as their reference... that, and a whole lot of computing power. If you have an algorithm that works poorly but gets better if you throw more computing power at it, then you can expect progress. If you don't have any algorithm at all that you think will give you a good answer, then what you have is a math problem, not an engineering problem, and progress in math is not something I know how to predict. Some unsolved problems stay unsolved, and some don't.
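(For the curious: below is a toy sketch of that parallel-text idea, word alignment in the spirit of IBM Model 1, written in Python. The three-sentence corpus is invented, and real systems such as Google's are phrase-based and vastly larger; this only shows how co-occurrence statistics plus computing power can stand in for a dictionary.)

```python
from collections import defaultdict

# Toy sketch of learning translations from parallel text (IBM Model 1 style),
# purely illustrative; the corpus below is made up.
corpus = [
    ("the house".split(), "das haus".split()),
    ("the book".split(), "das buch".split()),
    ("a book".split(), "ein buch".split()),
]

t = defaultdict(lambda: 1.0)          # t[(e, f)]: prob. of English word e given foreign word f
for _ in range(10):                   # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for eng, foreign in corpus:
        for e in eng:
            norm = sum(t[(e, f)] for f in foreign)
            for f in foreign:
                c = t[(e, f)] / norm  # expected count of this word pairing
                count[(e, f)] += c
                total[f] += c
    t = defaultdict(float, {pair: count[pair] / total[pair[1]] for pair in count})

print(t[("book", "buch")], t[("book", "das")])   # "buch" -> "book" comes out on top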
0NancyLebovitz
Is Google Translate a somewhat imperfect Chinese Room? Also, is Google Translate getting better?
4Armok_GoB
/me points at cryonics.
4Shmi
OK, I'm a bit younger than you, though I still remember having to use a slide rule in school. And I agree, it's been an exciting ride. Not my impression at all. To me the ride appears full of wild surprises around every turn. In retrospect, while I did foresee one or two things that came to pass, others were totally unexpected. That's one reason I keep pointing out on this forum that failure of imagination is one of the most pervasive and least acknowledged cognitive fallacies. There are many more black swans than we expect. In that sense, we are living through the event horizon already. As a person trained in General Relativity, I dislike misusing this term, but there is a decent comparison here: when free-falling and crossing the event horizon of a black hole one does not notice anything special at all, it's business as usual. There is no visible "no going back" moment at all. In that vein, I expect the surprises, both good and bad, to continue at about the same pace for some time. I am guessing that the worst problems will be those no one thinks about now, except maybe in a sci-fi story or two, or on some obscure blog. Same with x-risk. It will not be Skynet, nanobots, bioweapons, or asteroids, but something totally out of left field. Similarly, the biggest progress in life extension will not be due to cryo or WBE, but some other tech. Or maybe there won't be any at all for another century.

I get the same feeling. It seems unusually hard to come up with an idea of what things will be like in ten or so years that doesn't sound like either head-in-the-sand denial of technological change or craziness.

I wonder how you could figure out just how atypical things are now. Different from most of history, sure: most people lived in a world where you expected life parameters to be the same for your grandparents' and grandchildren's generations, and we definitely don't have that now. But we haven't had that in the first world for the last 150 years. Telegraphs, steam engines and mass manufacture were new things that caused massive societal change. Computers, nuclear power, space rockets, and figuring out that space and time are stretchy and living cells are just chemical machines were the sort of things more likely to make onlookers go "wait, that's not supposed to happen!" than "oh, clever".

People during the space age definitely thought they were living in the future, and contemporary stuff is still a bit tinged by how their vast projections failed to materialize on schedule. Did more people in 1965 imagine they were living in the future than people in 1975? What ab... (read more)

9Richard_Kennaway
It may seem to have been a golden age of promise now lost, but I was there, and that isn't how it seems to me. As examples of computer science in 1985, the linked blog post cites the Lisp machine and ALICE. The Lisp machine was built. It was sold. There are no Lisp machines now, except maybe in museums or languishing as mementos. ALICE (not notable enough to get a Wikipedia article) never went beyond a hardware demo. (I knew Mike Reeve and John Darlington back then, and knew about ALICE, although I wasn't involved with it. One of my current colleagues was, and still has an old ALICE circuit board in his office. I was involved with another alternative architecture, of which, at this remove, the less said the better.)

What killed them? Moore's Law, and this was an observation that was made even back then. There was no point in designing special purpose hardware for better performance, because general purpose hardware would have doubled its speed before long and it would outperform you before you could ever get into production. Turning up the clock made everything faster, while specialised hardware only made a few things faster. Processors stopped getting faster in 2004 (when Intel bottled out of making 4GHz CPUs). The result? Special-purpose hardware primarily driven not by academic research but by engineers trying to make stuff that did more within that limit: GPUs for games and server farms for the web. Another damp squib of the 1980s, the Transputer, can be seen as ancestral to those developments, but I suspect that if the Transputer had never been invented, the development of GPUs would be unaffected.

When it appears, as the blog post says, "that all you must do to turn a field upside-down is to dig out a few decades-old papers and implement the contents", well, maybe a geek encountering the past is like a physicist encountering a new subject. OTOH, he is actually trying to do something, so props to him, and I hope he succeeds at what could not be done back th
6Risto_Saarelma
Thinking a bit more of this, I think the basic pattern I'm matching here is that each era there's some grand technocratic narrative where an overarching first-principles design from the current Impressive Technology, industrial production, rocket engines, internetworked computers, or artificial intelligence, will produce a clean and ordered new world order. This won't happen, and instead something a lot more organic, diffuse, confusing, low-key and wildly unexpected will show up. On the other hand, we don't currently seem to be having the sort of unified present-day tech paradigm like there was during the space age. My guess for the next big tech paradigm thing would be radical biotechnology and biotech-based cognitive engineering, but we don't really have either of those yet. Instead, we've got Planetary Resources and Elon Musk doing the stuff of the space age folk, Bitcoin and whatnot that's something like the 90s cypherpunks thought up, IBM Watson and Google cars are something that AI was supposed to deliver in the 80s before the AI Winter set in, and we might be seeing a bit of a return to 80s style diverse playing field in computing with stuff like Raspberry PI, 3D printing and everybody being able to put their apps online and for sale without paying for brick & mortar shelf space. So it's kinda like all the stuff that was supposed to happen any time now at various points of the late 20th century was starting to happen at once. But that could be just the present looking like it has a lot more stuff than the past, since I'm seeing a lot less of the past than the present.
3gwern
You know, that's a good description of my reaction to reading Brin's Existence the other day. I think 10 years is not that revolutionary, but at 50+ years, the dichotomy is getting pretty bad.
8thomblake
I'm 33, and same here. I like to point out the difference between the time I think of something cool and the time it is invented. In general, that gap has usually been negative for a number of years now. As a trivial silly example, after hearing the Gangnam Style song, I said "I want to see the parody video called 'Gungan style' about Star Wars." (I just assumed it already existed). While there were indeed several such videos, the top result was instead a funnier video making fun of the concept of making such a parody video.
6NancyLebovitz
If we're living in the future, when is the present?

We just missed it.

2Oligopsony
The time at which classical images of "the future" were generated and popularized.
8Desrtopa
On the other hand, we do have nanomachines, which can do a number of interesting things, and we didn't have them a couple decades ago. We're making much more tangible progress towards versatile nanotechnology than we are towards space colonization.
3JoshuaZ
It seems that both Taleb and Aaronde are talking about a much smaller scale change than things like space colonization and general nanotech.
8aaronde
Yeah, that was my impression. One of the things that's interesting about the article is that many of the technologies Taleb disparages already exist. He lists space colonies and flying motorcycles right alongside mundane tennis shoes and video chat. So it's hard to tell when he's criticizing futurists for expecting certain new technologies, and when he's criticizing them for wanting those new technologies. When he says that he's going to take a cab driven by an immigrant, is he saying that robot cars won't arrive any time soon? Or that it wouldn't make a difference if they did? Or that it would be bad if they did? I think his point is a bit muddled. One thing he gets right is that cool new technologies need not be revolutionary. Don't get me wrong; I take the possibility of truly transformative tech seriously, but futurists do overestimate technology for a simple reason. When imagining what life will be like with a given gadget, you focus on those parts of your life when you could use the gadget, and thus overestimate the positive effect of the gadget (This is also why people's kitchens get cluttered over time). For myself, I think that robot cars will be commonplace in ten years, and that will be friggin' awesome. But it won't transform our lives - it will be an incremental change. The flip side is that Taleb may underestimate the cumulative effect of many incremental changes.
2David_Gerard
I'm 45 (edit: 46) and think the modern age is simply goddamn fantastic.
0gwern
So why don't we see an inverse Maes-Garreau effect, where predictors upon hitting their 40-50s are suddenly letdown and disenchanted and start making predictions for centuries out, rather than scores of years? And what would you predict for the LW survey results? All 3 surveys ask for the age of the respondent, so there's plenty of data to correlate against, and we should be able to see any discouragement in the 40-50syo respondents.
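(A minimal sketch of the correlation gwern suggests, assuming the survey export is a CSV; the filename and the "Age" and "SingularityYear" column names are hypothetical and would need to be matched to the real survey data.)

```python
# Hypothetical sketch: do older respondents place transformative events further out?
# "lw_survey_2012.csv", "Age" and "SingularityYear" are assumed names, not the actual schema.
import pandas as pd

df = pd.read_csv("lw_survey_2012.csv")
df = df.dropna(subset=["Age", "SingularityYear"])
horizon = df["SingularityYear"] - 2012          # how many years out each respondent predicts
print(df["Age"].corr(horizon))                  # positive correlation: older respondents predict further out
```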

On this site there is a lot of talk about x-risks like unfriendly AI, grey goo or meteorite strikes. Now x-risk is not a concept completely confined to humanity as a whole but is also applicable to any individual that is affected not only by global risks but also by local and individual events. Has anyone here researched ways to effectively reduce individual catastrophic risk and mitigate the effects of local and global catastrophic events? I am thinking of things like financial, legal and political risk, natural disasters and pandemics. So far I have found the emergency kit as designed by www.ready.gov, but I am positive that there is much much more out there.

7NancyLebovitz
Taleb recommends staying out of debt so as to increase your flexibility.
0EvelynM
National Response Teams (http://www.nrt.org) are governmental inter-agency teams formed to respond to incidents of a wide variety of sizes. That may be a place to start your research.

A black hole made of light alone has its own name - Kugelblitz. When you concentrate light densely enough, a black hole forms just as it would from matter, and it gets this cool name - Kugelblitz!

What is the name of a black hole made almost exclusively of neutrinos? I googled a little but haven't found anything yet.

5Manfred
By analogy, Kugelneutrino. Or maybe "kugelnichts" or "kugellangweiligkeit."
0Thomas
Google has no reference to Kugelneutrino until this is indexed. Okay. I wonder if a Kugelneutrino exists. Enough supernovas, maybe a million, spaced on a large sphere and igniting simultaneously for a sphere-centered observer, would send a lot of neutrinos in all directions. At the center of the sphere, their combined mass should stop them, making a Kugelneutrino.
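(Rough numbers, for scale: treating radiation energy E as a mass equivalent M = E/c^2, the Schwarzschild radius of the neutrino output of a single core-collapse supernova, roughly 3e46 J, works out to a few hundred metres. The hard part is confining the energy within such a radius, not the total amount.)

```python
# Back-of-the-envelope sketch: Schwarzschild radius of a given amount of
# radiation energy (photons or neutrinos), via the mass equivalent M = E / c^2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(energy_joules):
    mass_equivalent = energy_joules / c**2
    return 2 * G * mass_equivalent / c**2

# ~3e46 J is a commonly quoted figure for the neutrinos from one core-collapse supernova
print(schwarzschild_radius(3e46))   # ~500 m; the energy would have to be packed inside that radius
```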
3Plasmon
Reminds me of this paper,

People in the very theoretical end of programming language research seem to be making noise about something called homotopy type theory, that's supposed to make programming awesome and machine-provable once someone gets around to implementing it.

Like the lambda calculus before it (which is actually embedded in it), MLTT may be a fruit of mathematics that has a very real and practical impact on how programmers think about programming. It could be the unifying paradigm that eventually, finally puts an end to all the “programming language wars” of our time.

... (read more)
3Douglas_Knight
You seem to be confusing intuitionist MLTT (c.1970) and HTT (c.2005). Your second and third links are about MLTT, not HTT. The second link does mention HTT in passing and claims that it is "a new interpretation of type theory," but this is simply false. In particular, your first quote is not about HTT. Your first link really is about HTT, but does not claim that it is relevant to programming. HTT is an extension of type theory to reduce the impedance mismatch between logic and category theory, especially higher category theory. It is for mathematicians who think in terms of categories, not for programmers. In as much as programmers should be using category theory, HTT may be relevant. For example, Haskell encourages users to define monads, but does not require them to prove the monad laws. HTT provides a setting for implementing a true monad type, but is overkill. This is largely orthogonal to MLTT. Automated theorem proving is rarely done, but it usually uses MLTT.
0bogus
You are of course right about the distinction between MLTT and HTT, but Risto_Saarelma's first link is a computer science blog. In my view, the claim that computer scientists are "making noise" about homotopy type theory as applicable to programming is fairly justified: in particular, HTT might make it easier to reason about and exploit isomorphisms between data types, e.g. data representations. Also, Martin-Lof type theory is not exclusively used for computer-assisted theorem proving: for instance, the Calculus of Constructions is one alternative. Formal work on HTT is done in Coq, which is based on the CoC.

My father told me about someone he knew when he was working as a nurse at a mental hospital, who tried killing himself three times with a gun in the mouth. The first two times he used a pistol of some sort - both times, the bullet passed between the hemispheres of his brain (causing moderate but not fatal brain damage), exited through the back of his head, and all the hot gases from the gun cauterised the wounds.

The third time he used a shotgun, and that did the job. For firearm based suicide, I think above the ear is a safer bet.

3NancyLebovitz
There should be a word for that kind of luck.
3A1987dM
o.O
0vi21maobk9vp
Pistol to the mouth seems to require a mouth full of water for a high chance of success.
0[anonymous]
Shotgun's not going to have the problems of a pistol, unless you're using slugs -- and I suspect the hydrostatic shock differential will still do the trick there.
[anonymous]60

So assuming you have good evidence of eldritch abominations, what is the best suicide method? I'm guessing anything that really scrambles your information, right? Please keep in mind practicality. Really powerful explosives seem hard to obtain. Having someone dispose of your body after suicide seems an OK but risky option.

Fire?

6MixedNuts
Sufficiently clever eldritch abominations should be able to reconstruct you from very little material.

* Your brain, of course, must be entirely destroyed.
* It's safer to destroy the rest of your nervous system, which might also contain information.
* Your genetic material and records of your actions (e.g. your comments on the Internet) are individually insufficient to deduce you, but I'm not so sure about the combination.

So first you want to erase as much information about yourself as you can. Take down everything you put on the Internet, burn everything you wrote, exert your right to delete personal information everywhere you have such a right. You'll also want to distort other people's memories of you. (Or kill them too, but then we get recursive and reprehensible.) If you have sufficient time, you might do a few hugely out-of-character things and then isolate yourself completely. Maybe suggest a few false memories first. There's probably nothing you can do about leaving DNA everywhere. At least try not to have kids.

Fire could work, but you're likely to burn incompletely. I suggest going out to a remote, hot area (think Amazonian jungle), obscuring your starting location as much as you can, going as far out as you can, and dying by having your head crushed or a bullet to the head. By the time someone notices your disappearance, figures out where you went, searches the area, and finds your body, you should have rotted completely. If the eldritch abominations are coming right now and you don't have time for that, yeah, just jump into an incinerator. You should find one of these by following a garbage truck.

(Also, you okay, kid? This is just silly, Deep-Ones-dancing-on-the-head-of-a-pin musing, right? You can message me if you need to.)
8A1987dM
I'm starting to wonder whether one of the reasons why Roko deleted all of his comments was that he didn't want to leave too many horcruxes behind.
5[anonymous]
Thank you for the excellent comment. This is what most worries me.
1A1987dM
Unfortunately, I'm afraid that in my case I'd have to at least nuke Facebook's servers. I used not to worry about possible future eldritch abominations at all because I thought if I saw them coming I could just guillotine my head into a fireplace or something, but now that I realize that they could likely still reconstruct a sufficient-fidelity copy of me, I do worry a little about not pissing them off. Unfortunately I barely have any idea about what would piss them off, so all in all I don't behave that differently than I used to, as per the standard atheist reply to Pascal's wager. Also, I don't think such abominations are that likely.
0[anonymous]
I kinda think that we shouldn't make this forum into a place to give people advice on how best to kill themselves.

Rationality, winning, and munchkinry

I can't help but notice that, in reviews of and comments on what we like to call "rationalist fiction", detractors often call characters whose approach to problems is to seek the winning approach, rather than, say, the "reasonable" approach (like one-boxing on Newcomb's problem rather than two-boxing), "munchkins", as if it were some sort of insult.

A work of fiction that Yudkowsky recently recommended, "Harry Potter and the Natural 20", features a protagonist, Milo, who is a wizard from a D&a... (read more)

Munchkinry is a terrible way to play a game because maximizing your character's victories and maximizing your and other players' enjoyment of the game are two very different things. (For one thing, rules-lawyering is a boring waste of time (unless you're into that, but then there are better rulesets, like the Talmud (Zing.)); for another, it's fun to let your character make stupid in-character mistakes.) It is a good way to live a life, and indeed recommended as such by writers of rationalist fiction.

8Emile
There can be several ways to get enjoyment out of a roleplaying game:

* The sheer intellectual challenge of the game (which you can also get from storyless boardgames)
* Telling or enjoying an interesting story, with interesting situations
* Escapism - living as someone else, in a different world

These are usually called Gamist, Narrativist, and Simulationist. They are not mutually incompatible, and you can indeed have different people around the same table with different tastes / goals. There can be a problem when one player ruins the story or the believability in order to get a game advantage, while other players care about the story etc. - this is when people complain about munchkinry. But you can still have good game sessions where everybody is a munchkin, or when the rules and DM are good enough so that the players don't get to choose between a game advantage and an interesting story (for example, I think in most versions of D&D you basically have some points you can only spend on combat-useful stuff (picking feats or powers), and some points you can only spend on combat-useless stuff (skill points)).
3blashimov
Anyone who thinks skill points (or any other character ability) is useless in combat gets an "F" in munchkinry. ;)
0Ritalin
Yes, but why do people seem to think that it should also apply to fictional characters (not PCs), and people leading their actual lives? Or, you know, actual Law.
2TimS
Hey, I resemble that remark! Although the actual practice of law is about as rules-lawyer-y as programming a computer. More than the average person has any reason to be, but the purpose is precision, not being a jerk.
0Ritalin
I contest that loophole exploitation and leaving room for doubt and interpretation is equivalent to being a jerk.
0TimS
In real life? Legal and factual uncertainty favors the unjust (particularly those with power in the current status quo who desire more power). And even institutional players who would want to be unjust make game-theoretic decisions about whether they prefer cost certainty or greater upside (and variability). But in an RPG environment? It depends a fair bit on whether the goals of the other players are Gamist, Narrativist, or Simulationist. Playing the munchkin in a Narrativist environment has significant jerk potential.
4Ritalin
Not if, like in Harry Potter And The Natural 20, you postulate that the characters have an innate, total knowledge of the source-book, and that these are to them as the laws of physics are to us. Exploiting them to the utmost becomes a matter of common sense and enlightened self-interest. Also, their psychology becomes strangely inhuman and quite interesting; it's, essentially, xenofiction rather than fantasy. Or gives legal institutions flexibility to deal with the case-by-case problems the original legislators could never have thought of. I'm thinking of the US Constitution as an instance of that, which was left deliberately as vague as possible so that it could be used for centuries and by all kinds of different ideologies. Countries that have written constitutions that were too specific have found themselves having to change them more frequently, as they became obsolete more rapidly. Am I right, so far?
0TimS
I'm not sure what advantage deliberately unclear rules provide when there are legitimate methods to modify the rules and the processes to change rules can be invoked at any time. If your social governance lacks sufficient legitimacy to change the rules, the specificity or vagueness of the current rules is the least of your problem. And if rules can be changed, certainty about results under the current rules is a valuable thing - as multiple economists studying the economic value of the "rule of law" will attest. ---------------------------------------- Knowing the laws of physics better is a great way to be more powerful. But be careful about distinguishing between what the player knows and what the character knows. If character doesn't know that +1 swords aren't worth the effort, but +2 swords are great values, then having the player make decisions explicitly and solely on that basis (as opposed to role-playing) can be very disruptive to the interactions between players or between player and GM.
2Ritalin
That is true in Real Life. But, in the world of, say, Dungeons and Dragons, believing that you can run and cast a spell at the same time, or down more than one potion in the span of six seconds, is tantamount to insanity; it just can't be done. The rules of the game are the laws of physics, or at least the most important subset thereof. Your comment on rules is very interesting. Every time the topic came up, the citizens of those United States of America have been bashing me over the head with the common wisdom that the rules being flexible and accommodating, and therefore not requiring to be changed or rewritten, is a wonderful thing, and that the opposite would be a source of political and legislative instability. And that's when they didn't call the much more frequently-changed constitutions of European countries "toilet paper". I think the reason US citizens care so much about keeping things the way they are is that they have allowed a great deal of regional diversity within the Federation, and creating a clearer, more modern, more specific set of rules would be a dangerous, complex process that would force outliers into convergence and create tons of resistance and a high chance for disaster. It's no coincidence that constitution changes in Europe and other places have come from fighting (especially losing) wars that involve their own territory, getting invaded by foreign powers, or having a revolution. The US haven't had any of these things since... the war with Mexico?

Another paper on the low quality of much scientific evidence, here in the field of diet-related cancer risk. Just out in the American Journal of Clinical Nutrition: Is everything we eat associated with cancer? A systematic cookbook review.

3Manfred
Also see the oncological ontology project, which aims to separate all things into either causing or curing cancer, as determined by the Daily Mail.

New study on rationalization and the sophistication effect, courtesy of Marginal Revolution.

2mwengler
I hope that got an A.
0A1987dM
I'm reminded of this.
0FiftyTwo
It interests/unsettles me how much I anthropomorphise it based on very simple behaviours.

Is anyone doing self-scoring on everyday predictions? I've been considering doing this for a while - writing down probability estimates for things like 'will have finished work by 7PM' and 'will have arrived on time' and even 'will rain' or 'friend will arrive on time', so I can detect systemic errors and try to correct for them. (In particular, if you are consistently over- or under-confident, you can improve your score by correcting for this without actually needing to be any more accurate.) This seems like a pretty straightforward way of getting 'better' at predicting things, potentially very quickly, so I'm curious about others' experiences.
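(A minimal self-scoring sketch in Python, under the assumption that you log each prediction as a (probability, outcome) pair; the sample entries are invented.)

```python
# Log (probability, outcome) pairs, then check calibration by bucket and
# compute a Brier score. The entries below are invented examples.
predictions = [
    (0.9, True),    # "will have finished work by 7PM"
    (0.7, False),   # "friend will arrive on time"
    (0.6, True),    # "will rain"
]

brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f} (0 is perfect; always saying 50% scores 0.25)")

# Calibration: within each stated-probability bucket, how often did the event happen?
buckets = {}
for p, o in predictions:
    buckets.setdefault(round(p, 1), []).append(o)
for p, outcomes in sorted(buckets.items()):
    print(f"said {p:.0%}: happened {sum(outcomes)}/{len(outcomes)} times")
```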

0beoShaffer
I do (sometimes) do this with private predictions on PredictionBook.
0Ritalin
I haven't. Before I imitate you, I'd like to know: what specific implementations have you tried, and what are your results so far?

There's been some talk recently of the need for programmers and how people that are unsatisfied with their current employment can find work in that area while making a decent living. Does there exist some sort of virtual meet-up for people that are working towards becoming programmers? I'd like to form, or be part of, a support group of LW-ers that are beginning programming. There may be something like this around that I've just missed because I mostly lurk and not even that regularly anymore. (Hoping to change that, though.)

6Viliam_Bur
Is there a reason to believe that a LW-related environment will provide better help than existing environments, such as Stack Exchange, or one of the free online universities? I believe there would be some advantages from the LW culture. For example, questions like "which programming language is the best?" would be processed differently in a culture which pays attention to mindkilling and values being specific. On the other hand, LW is just a tiny subset of the world, and there is strength in numbers. If a website is visited by thousands of programmers, you are more likely to get your answer, fast.

LW articles related to programming:

* Free Tutoring in Math/Programming
* Learn to code
* What is the best programming language?
* Computer Science and Programming: Links and Resources
* Advice On Getting A Software Job

I could give free Skype lessons in programming (specifically Pascal, Java, JavaScript) if anyone is interested (send me a PM). There are probably more people like this, so we could have a list somewhere. Not just a list about programming, but more generally a list of LWers willing to provide professional-level advice on something, categorized by topic.
1Vive-ut-Vivas
The main reason I am interested in a LW-related environment (other than it really being my only online "community") is because I know there's been talk here before about people switching fields to become programmers. That's a group of particular interest to me, since I'm one of them. I also know of at least one other person here who is working on becoming a programmer through self-study. There was a post a while back about encouraging more people to become computer programmers, so I'm betting that there are more of us out there.

Why do LWers believe in global warming? The community's belief has changed my posterior odds significantly, but it's the only argument I have for global warming at the moment. I saw the CO2 vs temperature graphs, and that seemed to sell it for me... Then I heard that the temperature increases preceded the CO2 emissions by about 800 years...

So why does the community at large believe in it?

Thanks!

6blashimov
As an environmental engineer engaged in atmospheric modeling, I believe it is true. Atmospheric modeling is a field in which the standard scientific method seems to be working well, that is, there is a large benefit to researchers who are right and/or can prove others wrong. This means that there is a lot of effort going into improving models that are already quite accurate, to the limits of the data you input. For example, the 1990 model of climate change does quite well if you give it better data, and at least correctly predicts the temperature trend with bad data. http://www.huffingtonpost.com/2012/12/10/1990-ipcc-report_n_2270453.html Similar to comments below, the IPCC is an enormous body, and I find invalidating their arguments to require an implausible conspiracy theory. You can look up the executive summary for the various reports at your leisure; they are quite readable.
6Bakkot
This is one of those things you should probably just take on authority, like relativity or the standard model of particle physics. That is to say, it's an exceedingly complex topic in practice, and any argument stated for either side which can readily be understood is likely to be wrong. You have two or three options: study the field long enough to know what's going on, or trust the people who have already done so. (The third option, 'form an opinion without having any idea what's going on', is also commonly taken.) In short: I believe it's happening because this is what scientists tell me, and it's not worth putting in the time required to understand the field well enough that I could trust my opinion over theirs.
6[anonymous]
Can't speak for the community at large. CO2 blocks some frequencies of infrared. This is known and uncontested by even the craziest deniers. Without an atmosphere the earth's average temperature would be around -20 C. You can calculate this based on radiation theory. (that specific number may be wrong, but it's around there). An atmosphere with CO2 (and some other major ones I don't remember) blocks a higher proportion of the radiation from earth than from the sun (because the earth radiation is mostly infrared near the range blocked by CO2). With a model for that, you can recalculate the surface temperature. It will be much higher.

edit: (on the other hand, now that I think about it, I can't prove to myself that absorbent CO2 will actually cause a greenhouse effect. Maybe it's reflective, which would cause a greenhouse effect...) /edit

edit2: ok I just read the wiki article. Everything they tell you about how the greenhouse effect works is wrong. It's not that the atmosphere somehow blocks the outgoing radiation, as that would violate the second law by allowing the earth to heat up relative to its surroundings. The real mechanism is that the absorption surface (the ground) and the emission surface (roughly the tropopause) are separated by a mechanism that enforces a temperature difference (adiabatic lapse rate). I need to think about this more. /edit

That analysis does not include things like the effect of temperature on albedo (clouds and snow), which changes things, and other effects, but it gives you rough bounds for what must happen. The model establishes a causal link from CO2 to temperature (there are also links the other way, like forest fires and desertification). Beyond that, though, climate science is mostly empirical I think. My rough belief is that global warming is a thing, but is probably hyped up a bit too much for political reasons.
2satt
That can't be right. The atmosphere does block most of the outgoing radiation — its transmissivity for the Earth's longwave radiation is only about 20% — and if it were transparent to radiation it couldn't exert a greenhouse effect at all. Also, a thought experiment: if we had an electric oven plugged into a solar panel orbiting the Sun, the oven could heat itself relative to the surrounding space just by using light from the Sun, and that wouldn't violate the second law.
0[anonymous]
Maybe the second law is the wrong way to look at it. The second law says that the sun can't cause you to heat up hotter than the sun on average. (you can do tricks with heat pumps to make parts of you hotter than the sun, though) It also says you can't do tricks with surface properties to change your temperature. (in the absence of heat pumps)

Ok, I'm still a bit confused about this. I suspect that this effect alone is not enough to cause a greenhouse effect. Let's think it through: Assume the 0.2 missing from transmissivity is all absorptivity (t, a, and r add up to one). And that we model it as simply an optical obstruction in thermal equilibrium. The sun's radiation comes, some of it goes to the atmosphere, some to the earth. If the atmosphere magically ate heat, the earth would get less radiation. However, it does not magically eat heat; it heats up until it is emitting as much as it absorbs. The longwave from earth also gets eaten and re-emitted. About half of the emitted goes to earth, the rest out to space. So our greenhouse layer prevents some power P1 from getting to earth. Earth emits, and P2 is also eaten. The emitted P3 = P1+P2. Earth gets P3/2. The sun is hotter than earth so the power at any given wavelength will be higher, so P1 > P2, therefore P3/2 > P2, which means on net, heat is flowing from the greenhouse layer to earth. However the earth is receiving P1 less from the sun, and P1 > P3/2. So the earth cools down relative to the similar earth without "greenhouse" effect. This makes sense to me because the earth is effectively hiding behind a barrier. Therefore, if a greenhouse effect exists, it cannot be explained by mere atmospheric absorption. Unless I made some mistake there...

The assumption that is not true in that model is the atmosphere being in independent thermal equilibrium. If we instead make the atmosphere be in thermal eq with earth, there is no effect; the earth acts as a single body, and absorption by atmosphere is the same as ab
0satt
I think so. In practice, changing the surface properties of a body in orbit can affect its temperature. If we coated the Moon with soot it would get hotter, and if we coated it in silver it would get colder.

Two key complications break this toy model:

1. P1 > P2 doesn't follow from the Sun having higher spectral power. The Sun being hotter just means it emits more power per unit area at its own surface, but our planet intercepts only a tiny fraction of that power.

2. The atmosphere likes to eat Earth's emissions much more than it likes to eat the Sun's. This allows P1 to be less than P2, and in fact it is. P2 > P1 implies P3/2 > P1, which turns the cooling into a warming.

The barrier metaphor's a bit dodgy because it suggests a mental picture of a wall that blocks incoming and outgoing radiation equally — or at least it does to me! (This incorrect assumption confused me when I was a kid and trying to figure out how the greenhouse effect worked.) It's a false assumption, but it's not the assumption breaking your (first) model. It's possible to successfully model the greenhouse effect by pretending the atmosphere's a single isothermal layer with its own temperature.

The second model you sketch in your last 4 paragraphs sounds basically right, although the emission/absorption surface is some way below the tropopause. That surface is about 5km high, where the temperature's about -19°C, but the tropopause is 9-17km high. (Also, there's mixing way beyond the top of the troposphere because of turbulence.)
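(A minimal numerical sketch of that single isothermal-layer model, using round textbook values for the solar constant and albedo; an illustration of the energy balance, not a climate model.)

```python
# Toy single-layer greenhouse model: the layer is assumed transparent to sunlight,
# fully absorbing in the infrared, and re-emits half up and half down.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo

absorbed = S0 * (1 - ALBEDO) / 4          # average absorbed solar flux over the sphere
T_eff = (absorbed / SIGMA) ** 0.25        # emission temperature, ~255 K
T_surf = 2 ** 0.25 * T_eff                # surface temperature under one absorbing layer

print(f"effective (emission) temperature: {T_eff:.0f} K")
print(f"surface temperature with one absorbing layer: {T_surf:.0f} K")  # ~303 K, above the real ~288 K because full absorption is assumed
```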
1Manfred
Yeah, understanding the real reason for the greenhouse effect was tricky for me. CO2 makes the atmosphere opaque to infrared even on the scale of meters, so it's not like a regular greenhouse. If the CO2 already absorbs all the infrared emitted from the ground, why does increasing CO2 decrease the amount of energy reaching space? Because what space sees is the temperature of the last atom to emit infrared, and as you add more CO2, the last atom gets higher and higher on average, and thus colder and colder. This is more like a "warm, clear blanket" effect than a greenhouse effect. (That is, more like diffusion than reflection). Though note that neither greenhouses nor warm blankets violate the second law - they just can't get any warmer than the sun, which is pouring in energy at wavelengths for which the atmosphere is mostly transparent. Good ol' sun.
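(Quick arithmetic for that emission-height picture, using the round numbers from the comments above; all three inputs are approximate.)

```python
# Warming via a rising emission height: the emission temperature is fixed by the
# absorbed sunlight, and the surface sits below it by (lapse rate) x (emission height).
T_emit = 255.0          # K, set by the need to radiate away the absorbed sunlight
lapse_rate = 6.5        # K per km, approximate average lapse rate
emission_height = 5.0   # km, approximate effective emission altitude

T_surface = T_emit + lapse_rate * emission_height
print(T_surface)        # ~288 K; more CO2 raises the emission height, which raises this
```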
2drnickbone
You might want to look at Skeptical Science which lists a large number of arguments raised by skeptics of global warming, and what climate science has to say about them. "CO2 lags temperature" is number 11 on the list. Here is the basic response:
1Paul_G
This is exactly what I was looking for! Thank you kindly, looking through it as soon as I find time.
0FiftyTwo
Source? I have lots of reasons for believing in climate change I could quote at you, but they can mainly be found on the relevant wikipedia pages (so I assume you've already looked at them). So why am I putting more credence on those arguments than you? (Assuming we're both equally rational/sane/intelligent). What it comes down to when you abstract from individual arguments, is that those who have most domain specific expertise strongly believe it to be true. In general it is best to trust experts in a particular domain unless you have strong reasons to believe that field is flawed. Absent improbable conspiracy theories I have no reason to in this case.
0Paul_G
A teacher in a geology class who is decidedly non-rationalist mentioned that 800 years thing, without a source. Something about the thickness of a line. This is the first topic I've found in which I have no idea how to dissect this and figure out what's going on. It appears that there are incredibly powerful arguments for both sides, and mountains of strong evidence both for and against human-caused climate change... Which shouldn't be possible. A lot of the skeptics seem to have strong arguments countering many of the "alarmist" ideas... I'm not a good enough rationalist for this, yet. If it weren't for this community's famous support of global warming, there is no way I'd believe in it, given the data I have. Strange. I'm not sure it's worth posting sources and the like, counter-counter arguments become difficult to follow, and it could easily cause a kerfuffle that I would rather avoid. Thank you all greatly!
3Mitchell_Porter
The lag is a phenomenon of the ice age cycle, which is caused by orbital shifts but amplified by emission or absorption of carbon dioxide by the ocean. It takes the ocean about a thousand years to respond to changed atmospheric temperature.
-4drethelin
I don't know if there's an official consensus in the way you seem to think there is. My personal point of view is that it seems fairly obvious that dumping tons of shit into the atmosphere is going to have an effect, and is not good for various obvious health and pleasant atmosphere reasons. There are also reasonable arguments about not upsetting existing equilibria that exist. On the other hand, speculations about disastrous scenarios seem blatantly over-specified and ridiculous to me. We've had dozens of Ice Ages and warm epochs throughout Earth's history, obviously not caused by humans, and we have no idea how they worked or ended or whatnot. I think worrying about global warming as a disaster scenario is ridiculous and semi-religiously enforced for political power as well as tribal affiliation.
5blashimov
It depends on what you mean by "disaster" and "over specified." I will add that the IPCC, a body I accept as reputable, predicts a large range of possible outcomes with probability estimates, some of which I think can be fairly categorized as "disastrous." Global warming is a large potential human misery-causer, but not even close to an existential threat. For certain countries, such as the US, it probably won't be that bad, at least until the second half of this century.
1Oscar_Cunningham
This is a hollow argument. You characterise CO2 (and other waste gases?) as "tons of shit" which sounds suitably negative but doesn't actually mean anything. What are you using to classify some gases as "tons of shit" that then makes it obvious they'll have an effect? Not all waste products of chemical processes are dangerous; dumping nitrogen into the atmosphere will have no effect at all.
0drethelin
I invite you to stand outside a coal power plant or in a large city in china. My point was vaguely made but you're attacking it as if it said way more than it did.
[anonymous]30

From Wikipedia.

Principles of cosmicism

The philosophy of cosmicism states that there is no recognizable divine presence, such as a god, in the universe, and that humans are particularly insignificant in the larger scheme of intergalactic existence, and perhaps are just a small species projecting their own mental idolatries onto the vast cosmos, ever susceptible to being wiped from existence at any moment. This also suggested that the majority of undiscerning humanity are creatures with the same significance as insects and plants in a much greater struggle

... (read more)
7Ritalin
Yes.
0Zaine
While you may not err here, do keep in mind that not all characters are extensions of their author.
4Ritalin
I think he made that same point in other words somewhere in the sequences. And I couldn't agree more. Lovecraftian horror always struck me as a rather unwise way of looking at things; so what if incomprehensible forces in the universe could walk over us at any time and obliterate us? If we can't stop them, and can't predict them, why should we possibly even think about them or let their existence get us down? They're, essentially, irrelevant. I also take issue with all the "drives you mad" reactions in Lovecraftian stories. PTSD drives you mad. Seeing seemingly-impossible things confuses you, because it messes with your epistemic models, but why should it mess with your capacity for rational thought?
0Multiheaded
(Antonin Artaud, There Is No More Firmament, 1933. He never knew Lovecraft.)
[anonymous]30

Would a current or former Carnegie Mellon student be interested in talking to me, a high school senior, about the school? I intend on majoring in physics. Please private message me if you are.

[anonymous]20

Someone is planning to do (and documenting on video) 100 days of rejection therapy. He's currently up to day 26.

My friend just asked me how many people typically attend our meet ups. I don't know the answer. How do I find out?

0[anonymous]
Do you mean just your (regional) group, or some kind of average over all meetup groups?
0pleeppleep
just in general
1drethelin
6.5 people and 1/3 dogs
5[anonymous]
I see your 1/3 dog and raise you half a cat.

I've recently become aware of the existence of the Lending Club, which appears to be a peer-to-peer framework for borrowers and lenders. I find myself intrigued by the interest rates claimed, but most of what I've found in my own research indicates that these interest rate computations involve a lot of convenient assumptions. Also, apparently if the Lending Club itself goes bankrupt, there is no expectation that you will get your investment back.

It seems at least conceivable that the interest rates are actually that high, since it is a new, weird type of in... (read more)

3EvelynM
The interest rates for that sort of peer-to-peer lending are high, because the default rates are high. That is, you have a lower probability of getting all of your money back.
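(A back-of-the-envelope sketch of that point; the rates below are made up purely for illustration.)

```python
# Why a high quoted rate need not mean a high realized return: made-up numbers.
quoted_rate = 0.18        # 18% stated interest
default_rate = 0.10       # fraction of borrowers who default in a year
recovery = 0.30           # fraction of principal recovered on default

expected_return = (1 - default_rate) * quoted_rate - default_rate * (1 - recovery)
print(f"expected annual return: {expected_return:.1%}")   # roughly 9%, before fees
```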

Someone smart recently argued that there's no empirical evidence young earth creationists are wrong, because all the evidence we have of the Earth's age is consistent with the hypothesis that God created the earth 4000 years ago but designed it to look like it was much older. Is there a good one-page explanation of the core LessWrong idea that your beliefs need to be shifted by evidence even when the evidence isn't dispositive, versus the standard scientific notion of devastating proof? Right now the idea seems smeared across the Sequences.

8MinibearRex
Prior probabilities seem to me to be the key idea. Essentially, young earth creationists want P(evidence|hypothesis) = ~1. The problem is that to do this, you have to make P(hypothesis) very small. Essentially, they're overfitting the data. P(no god) and P(deceitful god) may have identical likelihood functions, but the second one is a conjunction of a lot of statements (god exists, god created the world, god created the world 4000 years ago, god wants people to believe he created the world 4000 years ago, god wants people to believe he created the world 4000 years ago despite evidence to the contrary, etc). All of these statements are an additional decrease in probability for the prior probability in the Bayesian update.
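(A toy illustration of that prior-probability penalty, with made-up numbers: a conjunction of several claims starts out far less probable than any single one, before any evidence is weighed.)

```python
# Each added claim in the conjunction multiplies the prior down.
# The individual probabilities below are invented for illustration.
p_parts = [0.5, 0.3, 0.3, 0.2, 0.2]   # P of each added claim, given the previous ones
prior_deceitful_god = 1.0
for p in p_parts:
    prior_deceitful_god *= p
print(prior_deceitful_god)   # 0.0018: tiny before any evidence is considered
```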
5TrE
IIRC the main post about this concept is conservation of expected evidence.
2Vaniver
The articles that come to mind are Scientific Evidence, Legal Evidence, and Rational Evidence and Making Beliefs Pay Rent (in Anticipated Experiences).
1DanielLC
He's not entirely wrong. Essentially, the more evidence you find of the Earth being more than 4000 years old, the more evidence you have against a non-deceiving god having created it 4000 years ago. If there's a 0.1% chance that a god will erase all evidence of his existence, then we can only get 20 bits of evidence against him. The problem is most likely that he's overestimating the probability of a god being deceitful (conjunction fallacy), and that he's forgetting that it's equally impossible to find evidence for such a god (conservation of expected evidence).
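For what it's worth, here is a minimal sketch of the general bound being gestured at (my reading of it; the probabilities plugged in below are illustrative assumptions): if a god erases all evidence of his existence with probability p, then failing to find evidence carries a likelihood ratio of at most 1/p against him, i.e. roughly log2(1/p) bits.

```python
import math

def max_bits_against(p_erase):
    """Upper bound on the evidence (in bits) obtainable against a god who
    erases all traces of himself with probability p_erase: the likelihood
    ratio P(no evidence | no god) / P(no evidence | god) is at most 1 / p_erase."""
    return math.log2(1 / p_erase)

print(max_bits_against(1e-3))  # about 10 bits for a 0.1% chance of erasure
print(max_bits_against(1e-6))  # about 20 bits would require a one-in-a-million chance
```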
1mwengler
If you are trying to explain the fossil, geological, and astronomical record, you might consider two hypotheses: 1) the details reflect the processes that put them in place, and under current physical constants the time required for that, based on the record, is in the billions of years; 2) somebody or something ("God"), for which we have little evidence other than the world itself, created it all about 4000 years ago and made it look like a billions-of-years project. In the second case, you take on the additional burden of explaining the existence and physics of God. Explaining why God would want to trick us is probably easier than explaining God's existence and physics in the first place. I am reminded of Wg's statement: "Believing you are in a sim is not distinguishable from believing in an omnipotent god (of any type)." Certainly, a sim would have the property that it is much younger than it appears to be, and that the "history" built into it is not consistent with what actually happened. Indeed, a sim seems to mean a reality which appears to be one thing but is actually a quite different thing, created by powerful consciousnesses that are hiding their existence from us.
1NancyLebovitz
Also, supposing that God created the world 6000 years ago or whenever and added a detailed past for artistic verisimilitude (a kinder explanation than the idea that the deep past is a way of tempting people into lack of faith), what would the difference be between God imagining the past in such detail and the past actually having happened?
0mwengler
The difference is that in one situation we are conscious actors learning about our world, and in the other we are simulated meat puppets whose experiences are completely unreliable as indicators of anything about the world. Further, if I can be deluded enough to think that dinosaur bones imply dinosaurs and that star formation regions imply stars form in star formation regions, then God could be deluded too: she could be part of a higher-level simulation, set up to simulate a God that believed it was omnipotent, omniscient and omnigood. The difference is that in one case we are finite intelligences in a MUCH larger universe, evolving and adapting, with an intelligence that imperfectly but simply reflects reality. In the other case, we are prisoners in a nightmarish experiment of deception where the rules/physics could be changed at any moment, and in deeply incomprehensible ways, by either our God or God's God. I suppose the problem of induction means we can never know that the persistence of the laws of physics across thousands of miles and hundreds of years implies they will be the same tomorrow. But induction is not just our best bet, it is really our ONLY bet in predicting the future; in a world where we accept a God, predictability is purely at the whim of the programmer (God). The only sense in which there is no difference is the sense in which God deceives us perfectly.
0NancyLebovitz
I may have been imagining a God not obviously worse than the one that (hypothetically) is running this universe-- the occasional miracle at most, but with the laws of physics applying almost all the time to almost everything. Does it make sense to think of people surviving a substantial change in the laws of physics? That's probably close to one of those "can God defy the laws of logic?" questions.
2mwengler
As I understand both God and anybody running a sim, at any point, with the proper programming skills, they can cause essentially ANYTHING to happen. God could blow up the earth with blue heavenly fire, or convert all the oxygen to iron, or change the iron in our hemoglobin so it no longer grabbed oxygen for delivery to our cells. To the extent that the God in our universe doesn't interfere, I am put in mind of "Black Swans": God is out getting coffee for a few thousand years, so we think he is a good guy, but then his coffee break is over, he sees that his sim has gotten to 10 billion people with super high tech, and he becomes interested in trimming us back down to biblical proportions. Or who knows what. The point is that if these are not the REAL rules of physics, we are at the whim of a god. And indeed the evidence of what "our" benign (for now) God might do is not promising: he seems in the past to have sent clever and annoying plagues, flooded everything, and cast us out of Eden, and he has certainly communicated to us the idea of the end of the world. It makes sense to think of people surviving a substantial change in the laws of physics if that is what God or the Simulator wants to happen. The essence of being unconstrained by physics is that it is entirely up to the simulator what happens.
0NancyLebovitz
Being unconstrained by physics isn't the same as being unconstrained by logic.
0mwengler
Certainly, if you are in a simulated world and I am controlling the simulation, I can pull the floor out from under you instantly; I can not only transport you to any different location instantly, but change the orientation of your body, even how your body feels to you, instantly. Indeed, if I am running the sim of you, I can create sensations in you by directly stimulating parts of the sim of your brain which in your real brain would be purely internal. I could make you feel intensely afraid every time you saw yourself in a mirror, I could make you see whatever I wanted as you looked at your hand, I could trigger face recognizers and have you recognizing your mother as you gazed at a robot or a pussy cat or a dinner plate.

Being less intrusive into your brain, I can make things move in any fashion I choose at any time. I could create a billiards game where balls curved through the air, expanded and contracted, bounced off each other with extra energy, exploded, multiplied on the table, whatever. Your car could speed through the street at 10,000 mph. I think the only constraint on the sim is a temporary one: what I can make your brain perceive by stimulating its simulated bits wherever I wish. And I could distort your wiring slowly enough that you had the sensation of continuity, but so that your simulated nerves appeared to control extra limbs, mechanical objects, whatever. I could grow your intelligence in the sim by expanding the sim of your neocortex; you would feel yourself getting smarter.

I am not constrained to present to you a world which has any internal logic, or which is even consistent from moment to moment. Object permanence requires a continuity editor; it is easier to make a sim which doesn't have object permanence, for example. Just what constraints do you think logic imposes that I may have been violating in my comment above?
-2MugaSofer
That would depend on whether God's thoughts contain conscious beings, wouldn't it?
0[anonymous]
Try this and let me know if it's what you're looking for.
4aaronsw
That's a good explanation of how to do Solomonoff Induction, but it doesn't really explain why. Why is a Kolmogorov complexity prior better than any other prior?
-2MugaSofer
Personally, I always argue that if God created the world recently, he specifically designed it to look old; he included light from distant stars, fossils implying evolution, and even created radioactive elements pre-aged. Thus, while technically the Earth may be young, evolution etc. predict what God did with remarkable accuracy, and thus we should use them to make predictions. Furthermore, if God is so determined to deceive us, shouldn't we do as he wants? :P

Could someone please break down the exact difference between a 'preference' and a 'bias' for me?

5Alicorn
Biases cause you to systematically get less of what you want, or be less likely to get what you want. Preferences are the content of "what you want".
0BerryPick6
So is wanting to satisfy other people's preferences a 'preference' or a 'bias'?
1Alicorn
That's a preference.
0BerryPick6
Even though it causes one to systematically get less of what ze wants?
4Alicorn
It doesn't. If you want other people to get what they want, then when that happens, you get something you want. You have to trade it off against other wants, but everybody has to do that, even people whose only dilemma is deciding what to have for dinner.
2BerryPick6
Do all preferences work this way, or are there some which don't have to be traded off at all? These questions really should go in the "stupid questions open thread", but I can't seem to find a recent one. Thanks for taking the time to answer me.
3Alicorn
No problem. You can only have a preference that doesn't get traded off if it happens to never conflict with anything - for instance, my preference that there be a moon has yet to interact with any other preferences I could act towards fulfilling; the moon just goes on being regardless - or if it's your only preference. Even if you have only one preference, there could be tradeoffs about instrumental subgoals. You might have to decide between a 50% chance of ten units of preference-fulfillment and a guarantee of five units, even if you'd really like to have both at once, even if the units are the only thing you care about.
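A minimal sketch of that last tradeoff in the usual expected-utility framing (the utility functions below are assumptions for illustration): a risk-neutral agent is exactly indifferent between the gamble and the sure thing, while any concave (risk-averse) utility tips the choice toward the guarantee.

```python
# Comparing a 50% chance of ten units of preference-fulfillment with a guaranteed five.

def expected_utility(lottery, utility):
    """lottery is a list of (probability, units) pairs."""
    return sum(p * utility(units) for p, units in lottery)

gamble = [(0.5, 10), (0.5, 0)]
sure_thing = [(1.0, 5)]

risk_neutral = lambda x: x          # utility linear in units
risk_averse = lambda x: x ** 0.5    # concave, so certainty is preferred

for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
    print(name, expected_utility(gamble, u), expected_utility(sure_thing, u))
# risk-neutral: 5.0 vs 5.0 (indifferent); risk-averse: ~1.58 vs ~2.24
```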

Has anyone used one of those pay-for-a-doctor's-opinion websites? How do you know if it's a scam?

Do we have a LessWrong Steam group?

Can anyone give me a source/citation for the idea that more intelligent people are better at rationalisation? I've seen it mentioned several times but without a link to experimental evidence.

[This comment is no longer endorsed by its author]

Book recommendation; fiction; AI. While this might be the kind of scifi book that merely annoys experts, I found it enjoyable. It concerns military use of potentially-FOOMing AIs which are wiped periodically to prevent the foom. Spoiler (rot13): vg snvyf. It is also part of a series, in which some overlapping events are told from different perspectives, which I also found enjoyable. http://www.amazon.com/Insidious-Michael-McCloskey/dp/1440192529

[-][anonymous]00

Can anyone think of any good sci-fi written about a world in which time travel is commonplace (something everyone has access to and uses in accomplishing everyday tasks)? It occurs to me that 1) this might be interesting to try to sort out, and 2) I can't even imagine how it would work.

0drethelin
The closest example I can think of is some of the Company novels by Kage Baker. Time travel is never for everyone, but you do end up with characters that have trivial access to it. On the other hand, there are a lot of limitations.

How should one distinguish disagreement on empirical grounds from disagreement about values? I'm increasingly convinced I'm miscalibrated on this.

4TheOtherDave
My usual approach is to keep asking variations on "What would you expect to experience if that were false?" If that question has an answer, we're still in the realm of the empirical. If it doesn't, it's possible we've transitioned into the realm of disagreement about values.
2Nisan
Can you give an example of a disagreement?
2FiftyTwo
Friend proposes policy A, I think policy A is obviously bad. How do I most efficiently determine whether we have fundamentally different values or believe different facts to be the case?
2Nisan
"I think policy A is bad because it would cause B." "But policy A wouldn't cause B. Also it would cause C which is good." "If policy A did turn out to cause B, would A still be good?"
[-][anonymous]00

FOR THE EMPEROR! HE IS THE ONLY VIABLE SCHELLING POINT!

Beware the anthropic implications of aliens, the selection pressure behind mutants, the institutional damage of heresy.

-- Sanctus Muflax of Holy Terra

In the grim dark future of our hostile multiverse, past the dark age of technology when the men of iron were crushed by those who would be less wrong, as the Emperor sits a hundred centuries undying on the golden throne, there is only war.

Coming soon.

Appropriate context. Fanfiction, you know you want it.

4gwern
The Emperor is an attractive Singleton compared to the em hell of the Necrons...
4FiftyTwo
The Necrons seem to have failed entirely to self improve. Ironically the Tyranids are probably closest to an unfriendly AI despite being organic. They improve themselves in response to problems. They don't love you or hate you, they're just hungry, and you're made of organic matter they'd like to use for something else.
[-][anonymous]00

The Worst-Run Big City in the U.S.

A very interesting autopsy of institutional dysfunction related to government and non-profits. I recommend reading the whole thing.

Minus the alleged harassment, city government is filled with Yomi Agunbiades — and they're hardly ever disciplined, let alone fired. When asked, former Board of Supervisors President Aaron Peskin couldn't remember the last time a higher-up in city government was removed for incompetence. "There must have been somebody," he said at last, vainly searching for a name.

Accordingly, mil

... (read more)
[This comment is no longer endorsed by its author]

I agree with EY that collapse interpretations of QM are ridiculous, but are there any arguments against the Bohm interpretation better than the ones canvassed in the SEP article?

http://plato.stanford.edu/entries/qm-bohm/#o

2Manfred
Conflict with special relativity is the most common decisive reason for rejecting Bohmian mechanics - which is oddly not covered in the SEP article. Bohmian mechanics is nonlocal, which in the context of relativity means time travel paradoxes. When you try to make a relativistic version of it, instead of elegant quantum field theory, you get janky bad stuff.
-2Vaniver
Not that I know of, but in my ranking of interpretations Bohm is beaten only by "shut up and calculate," so I may not be the most informed source.

The Many Worlds Interpretation (MWI) is favored by EY as having a shorter message length than the others.

However, the short-message version of MWI does not include a theory as to how my particular stream of consciousness winds up in one branch or another. So Copenhagen (wave function collapse) is a theory of what I will experience; MWI is not.

Further, I have always thought MWI was motivated by the ideas behind Einstein's "God does not play dice with the universe." That is, a non-deterministic theory is no theory at all. And then MWI would be a theory witho... (read more)

how my particular stream of consciousness winds up in one branch or another

This assumes there is such a thing as a particular stream of consciousness, rather than your brain retconning a stream of consciousness to you when you bother to ask it (which is what appears to happen).

-1mwengler
Yes, it does assume that. However, we have plenty of evidence for this hypothesis. My memory, and the memory of humans and higher mammals alike, has tremendous predictive power. For example, I remember a particular National Lampoon magazine cartoon with a topless boxer chanting "I am the queen of england, I like to sing and dance, and if you don't believe me, I will punch you in the pants," from about 40 years ago. I recently saw a DVD purporting to have all National Lampoons recorded digitally on it; I bought it and, sure enough, the cartoon was there. It seems clear to me that if conscious memory is predictive of future physical experience, it is drawn from something local to the Everett Branch my consciousness is in.

Let me design an experiment to test this. Set up a Schrodinger's cat experiment, and include a time display which will show the time at which the cat was killed, if in fact the cat is killed. Once I open the lid of the box and find the dead cat, I look at the time it was killed, record the time on a piece of paper which I put in a box on the table next to me, and then close the box. I reopen it many subsequent times, and each time I record the time on a piece of paper and put it in the box, or record "N/A" on the paper if the cat is still alive. My prediction is that every time I open the box with the memory of seeing the dead cat, I will still see the dead cat. Further, I predict that the time on the decay timer will be the same every time I reopen the box.

This, in my opinion, proves that memory sticks with the branch my consciousness is in. Even if we only saw the same time 99 times out of 100, it would still prove that memory sticks, but not perfectly, with the branch my consciousness is in, which would then be a fact that any physics explaining what I experience of the world would have to explain. Having not explicitly done this experiment, I cannot claim for sure that we will conclude my consciousness is "collapsing" on an Everett Branch just as in Co
[-]gjm120

It seems clear to me that if conscious memory is predictive of future physical experience, it is drawn from something local to the Everett Branch my consciousness is in.

Whatever makes you think that your consciousness is in only one Everett branch? (And what do you think is happening on all those other branches that look so much like this one but that lack your consciousness?)

Surely the right account of this, conditional on MWI, is not that your consciousness is on a particular branch but that each branch has its own version of your consciousness, and each branch has its own version of your memory, and each branch has its own version of what actually happened, and -- not at all by coincidence -- these match up with one another.

What happens to your consciousness and your memories is much more like splitting than like collapse.

(It sounds as if you think that this ought to mean that you'd have conscious memories in one branch from other branches, but I can't see why. Am I missing something?)

4mwengler
I misunderstood what David Gerard was suggesting and took a long riff proposing an experiment to address something he wasn't saying. The tricky part for me is the extremely clear conscious experience I have of being on only one branch. It is now clearer to me that there are other consciousnesses NEARLY identical to mine on other nearby Everett branches, presumably with the same strong awareness that they are on only one Everett branch and with no direct evidence of any other branch. MWI seems to truly be an interpretation, not a theory, with apparently absolutely no Popperian experiments that could ever distinguish it from wave function collapse theories.
1Nisan
You can upload a person into a quantum computer and do Schrödinger's cat experiments on them. If you have a computational theory of mind, this should falsify at least some informal collapse theories.
7David_Gerard
You could have that predictive power without actually having a continuous stream of awareness. Consider sleepwalkers who can do things and have conversations (if not very good ones) with no conscious awareness. You're using philosophy to object to observed reality.
6mwengler
OK, I misunderstood what you were implying in your previous post. So there are multiple streams of consciousness, one on each Everett branch, and the memories returned on each branch are the ones in the (conscious + unconscious) brain that exists on that branch. So I experience my mind always returning memories consistent with my branch, even as other branch-mwenglers experience memories consistent with their branches and, like me, use that as evidence for their uniqueness. So it really is an interpretation, predicting nothing different in experience than Copenhagen does.
0endoself
https://en.wikipedia.org/wiki/Wigner%27s_friend
2aleksiL
I haven't seen one example of a precise definition of what constitutes an "observation" that's supposed to collapse the wavefunction in the Copenhagen interpretation. Decoherence, OTOH, seems to perfectly describe the observed effects, including the consistency of macro-scale history. Actually, it just proves that memory sticks with the branch it's consistent with. For all we know, our consciousnesses are flitting from branch to branch all the time and we just don't remember because the memories stay put. Yeah, settling these kinds of questions would be much easier if we weren't limited to the data that manages to reach our senses. In MWI the definition of "I" is not quite straightforward: the constant branching of the wavefunction creates multiple versions of everyone inside, creating indexical uncertainty which we experience as randomness.
8Nominull
Your mistake lies in using the word "I" like it means something. There is some mwengler-stuff, it has some properties, then there is a split and the mwengler-stuff is in two separate chunks. They both experience their "stream of consciousness" showing up in their particular branch, they both wonder how it is that they ended up in the one branch rather than the other.
8endoself
Copenhagen is not a theory of what you will experience either; there are multiple minds even in Copenhagen's single world.
0mwengler
Copenhagen is an interpretation where I have one mind, you have one mind, and each of us has one thread of experience. There are numerous places along that thread where the physics needed to calculate its time evolution is not deterministic, where a random choice has been made. MWI is an interpretation where I have many minds, as opposed to the one mind I have in Copenhagen. Under MWI, each of my minds exists in a universe separate from, and non-interacting with, those of all the other versions of my mind. If I wonder as I type this why this version of me is the one in THIS branch, MWI has no theory for that. MWI tries to make that question seem less interesting by pointing out that there are lots of versions of me asking the same question, as if obscuring the me-ness of the me in this branch with the me-ness of all these other similar but not identical me's in other branches would render the question meaningless. But as interpretations with no observable experimental differences, MWI and Copenhagen are likely to involve the same number of random events dictating progress. In MWI, the randomness is isolated to just one of many me's, which of course is still quite unique and interesting to me, but which is not as bad as Copenhagen, where the entire universe is changed by each random waveform collapse.
4Richard_Kennaway
How is this different to wondering why you are THIS mind in THIS branch rather than THIS OTHER mind in THIS branch? Why you are you rather than someone else?
0mwengler
Do I have multiple minds even in Copenhagen? And by "I" I mean flesh-and-blood me.
2endoself
I mean that there are other minds in the world, in the sense of other people. Neither Copenhagen nor many worlds chooses a preferred mind, but people don't notice it as strongly in Copenhagen since they're already used to the idea of other conscious beings.
0mwengler
If I understand correctly, Copenhagen has only one mind for me, and the reality experienced by this mind is fundamentally randomly branched through wave function collapses. MWI creates a new mind for me, so there are many minds for me, one in each Everett branch. Did I miss something?
4endoself
I'm not sure what you're getting at here. Even under Copenhagen, one can duplicate an upload as it's running.
0Viliam_Bur
Let's suppose that your mind is a function of your brain, and that your brain is composed of atoms. In MWI there are many branches with many configurations of atoms; that means many branches of your brain, and that means many branches of your mind. In every branch your mind is entangled with the other atoms of the same branch. So, for example, in the branch with the atoms of a dead cat, your mind is in the "poor kitty" state, and in the branch with the atoms of a live cat, your mind is in the "kitty, you are so lucky, I promise I will never try this cruel experiment on you again" state.

In Copenhagen, on a tiny time scale there are many branches of atoms, but it is believed that on larger scales this is not so: at some unspecified moment there is supposed to be a collapse in which the many branches of atoms become a single branch again (through a process of random selection). Nobody knows when this happens. On large scales, we are not able to run a precise enough experiment to say either way. On smaller scales, where we can run the experiment, the result has always been that the collapse has not occurred yet.

So after the collapse, there is only one branch, and therefore one mind. Before the collapse... I would say that there is a superposition of minds (because there is a superposition of brains, because there is a superposition of the atoms the brain is composed of), which should become one mind again at the moment of the collapse. But it is believed that this superposition exists only for a very small fraction of a second, so it's not as if the different minds in the superposition have enough time to think significantly different thoughts. Neurons work at a limited speed, and sending a signal from one neuron to another requires dozens of chemical reactions.
-3khafra
Copenhagen:

1. You bounce a photon off a half-silvered mirror and don't look at the results: no universe split.
2. You bounce a photon off a half-silvered mirror and look at the results: Bam! Split universe.

MWI:

1. You bounce a photon off a half-silvered mirror and don't look at the results. Since the physical state of your brain is not causally dependent on the destination of the photon, you don't branch into two mwenglers in any noticeable way.
2. You bounce a photon off a half-silvered mirror and look at the results. Since you've made the state of your brain causally dependent on an event with quantum randomness, you branch into two mwenglers which are different on a macroscopic level. Two persons who happen to share a causal history up to looking at the experimental outcome.
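A schematic way to write khafra's case 2 under MWI (a sketch: normalization details aside, and compressing "the state of your brain" into a single ket):

```latex
% Before you look: the photon is in superposition; your brain is not yet correlated with it.
\tfrac{1}{\sqrt{2}}\bigl(\lvert R\rangle + \lvert T\rangle\bigr)
  \otimes \lvert \text{you, haven't looked}\rangle

% After you look: ordinary unitary evolution correlates your brain with the photon's path,
% giving the two macroscopically distinct branches; no separate collapse postulate is needed.
\tfrac{1}{\sqrt{2}}\bigl(\lvert R\rangle \lvert \text{you saw reflected}\rangle
  + \lvert T\rangle \lvert \text{you saw transmitted}\rangle\bigr)
```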
2mwengler
The Copenhagen Interpretation never splits universes. Instead, you have a wave function collapse in the one and only universe. In MWI, you NEVER branch into two anythings in a "noticeable" way. All the myriad branches have no interactions; there is nothing noticeable about any of the other branches from within the branch we are in. If there were something noticeable about other branches, then an experiment could be defined to check the hypothesis of branching, and we would start to gather evidence for or against it. Until such time as a hypothesis is created and tested and shows evidence for branches, MWI is an interpretation, not a theory. So why does it even matter? I am thinking it through and realizing that an interpretation is in some way a pre-theory. As we sit with the idea of MWI, maybe one of us develops hypotheses about experiments which might show evidence for the other branches, or not. Without the interpretation of MWI, that hypothetical progress might never be available.
3Viliam_Bur
They do interact. This is how quantum physics was discovered. The problem is that the magnitude of the interaction gets very small very quickly, so after a few microseconds it becomes technically impossible to measure. This is what allows people to say: "Yeah, for a few microseconds there is something mathematically equivalent to branches, but then it disappears completely," and you can't experimentally prove them wrong.

One side believes that the interaction gets smaller but never reaches exactly zero. The other side believes that the interaction gets smaller and then, at some unspecified moment, all branches except one disappear. Experimental data say that the interaction gets smaller until it becomes too small to see... and then, well, it is too small to see what happens. So essentially both sides disagree about who has the burden of proof; about the exact meaning of "fewest assumptions" in Occam's razor. One side says that "the extra branches disappearing" is the extra assumption. The other side says that "the extra branches not disappearing, even when their interaction becomes too small to measure" is the extra assumption.

More precisely, the magnitude of the interaction depends on how different the particles in the branches are. Therefore the branches we have measurable interaction with are those almost identical to our branch. The interaction is largest when both branches are exactly alike except for one particle. This is the famous double-slit experiment -- two branches of the universe, with the particle going through different slits, interact with each other.

The branches are there. The question is not whether multiple branches exist, but whether they disappear later when their interaction becomes very small. How do you prove experimentally that the other branches do not disappear, especially if your opponents refuse to specify when they should disappear? If you make an experiment that proves that "after N seconds, the branches still exist",