If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Can someone explain nanotech enthusiasm to me? Like, I get that nanotech is one of the sci-fi technologies that's actually physics-compliant, and furthermore it should be possible because biology.
But I get the impression that among transhumanist types slightly older than me, there's a widespread expectation that it will lead to absolutely magical things on the scale of decades, and I don't get where that comes from, even after picking up Engines of Creation.
I'm thinking of, e.g. Eliezer talking about how he wanted to design nanotechnology before he got into AI, or how he casually mentions nanotechnology as being one of the big ways a super-intelligent AI could take over the world. I always feel totally mystified when I come across something like that, like it's a major gulf between me and slightly older nerds.
Predicting chemistry from physics should be easy with a quantum computer, but appears hard with a classical computer. People often say that even once you make a classical approximation, i.e., assume that the dynamics are easy on a classical computer, the problem of finding the minimum energy state of a protein is NP-hard. That's true, but a red herring, since the protein isn't magically going to know its minimum energy state either. Though it's still possible that there's some catalyst that pushes it into the right state, so simulating the dynamics in a vacuum won't get you the right answer (cf. prions). Anyhow, there's some hope that evolution has found a good toolbox for designing proteins, and that if we can figure out the abstractions evolution is using, it will all become easy. In particular, there are building blocks like the alpha helix. Certainly an engineer, whether evolution or us, doesn't need to understand every protein, just know how to make enough of them.
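To make the red-herring point concrete, here's a minimal sketch in Python (a toy one-dimensional energy landscape, not real chemistry): greedy local "dynamics" started from random points usually settles into whichever minimum is nearby, so the NP-hardness of finding the global minimum says little about what the physical system actually does.

```python
import random

def energy(x: float) -> float:
    # Toy landscape: a shallow well near x = 1, a deep one near x = -2,
    # with a barrier between them.
    return min((x - 1) ** 2, 3 * (x + 2) ** 2 - 5)

def greedy_descent(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    # Crude local dynamics: always move downhill, never cross barriers.
    for _ in range(iters):
        x = min((x - step, x, x + step), key=energy)
    return x

random.seed(0)
finals = [greedy_descent(random.uniform(-4, 4)) for _ in range(20)]
stuck = sum(1 for x in finals if energy(x) > -4)  # ended in the shallow well
print(f"{stuck}/20 runs ended in a local, not global, minimum")
```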
I think it's quite plausible that a sufficiently smart AI would quickly find an adequate toolbox for designing proteins. I don't know what Eliezer means, but the possibility seems to me adequate for his arguments.
New ligament discovered in the human knee as a result of surgeons trying to figure out why some people didn't recover fully after knee injuries.
I'm tempted to deduce "Keep paying attention, you never know what might have been missed"-- I really would have expected that all the ligaments had been discovered a long time ago.
Another conclusion might be "Try to solve real problems, you're more likely to find out something new that way than by just poking around."
Does someone have the medical knowledge to explain how this is possible? My layperson guess is that once you cut up a knee, you can more or less see all the macroscopic structures. Did they just think it was unimportant?
I'm more impressed, actually, by the unevenness of progress - it took ~134 years to confirm his postulate? It's not like corpses were unavailable for dissection in 1879.
The media giveth sensationalism, and the media taketh away.
reddit - "So that "new" ligament? Here's a study from 2011 that shows the same thing. It's not even close to a new development and has been seen many times over the past 100 years." Summary quote: "The significance of the Belgian paper was to link [the ligament's] functionality to what they called "pivot shift", and knee reinjuries after ACL surgery. The significance of this paper, I believe, is that in the near future surgeons performing these operations will have an additional ligament to inspect and possibly repair during ACL surgery, which will hopefully reduce recurrence rates, and likely the rates of developing osteoarthritis in the injured knee down the line."
So I get home from a weekend trip and go directly to the HPMOR page. No new chapter yet. But there is a link to what seems to be a rationalist Death Note.
The way he saw it, the world was a pretty awful place. Corrupt politicians, cruel criminals, evil CEOs and even day-to-day evil acts made it that way, but everyday stupidity ensured it would stay like that. Nobody could make even a simple utility calculation. The only saving grace was that this was as true for the villains as for the heroes.
I am going to read it. Here are my next thoughts:
So, it seems like Eliezer succeeded in creating a whole new genre of literature: rationalist fiction. Nice job!
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
While discussing with my girlfriend which stories should be x-rationalized next, she suggested HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry. Would that lead to a spiral of ever more rational heroes?
What exactly could the MoreRational!Harry do? It would be pretty awesome if he could someho...
While discussing with my girlfriend which stories should be x-rationalized next, she suggested HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry.
An idea came to mind: would it be possible to write a story in which Harry is less intelligent, such that he would score lower on an IQ test, for example, but at the same time more rational? HJPEV seems to be a highly intelligent prodigy even without the rationality addition. I would like to see how a more normal boy would do.
Is "a story where the protagonist behaves rationally" really a new genre of literature?
I think what you are referring to here is "a story where the protagonist describes their actions and motivations using rationality terminology" or maybe "a story where the rational thinking of the protagonist motivates the plot or moves it along". At least some of the genre of detective fiction — early examples being Poe's Auguste Dupin stories — would be along these lines.
Stories where protagonists behave rationally (without using rationality terminology) wouldn't look like stories about rationality. They look like stories where protagonists do things that make sense.
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
Yup. At least sort-of. If you haven't read Eliezer's old post Lawrence Watt-Evans's Fiction I recommend it. However, conspicuous failures of rationality in fiction may be mostly an issue with science fiction and fantasy. If you want to keep the characters in your cop story from looking like idiots, you can do research on real police methods, etc. and if you do it right, you have a decent shot at writing a story that real police officers will read without thinking your characters are idiots.
On the other hand, when an author is trying to invent an entire fictional universe, with futuristic technology and/or magic, it can be really hard to figure out what would constitute "smart behavior" in that universe. This may be partly because most authors aren't themselves geniuses, but even more importantly, the fictional universe, if it were real, would have millions of people trying to figure out how to make optimal use of the resources that exist in that universe. It's hard for one person, however smart, to compete with that.
For that matter, it's hard for one author to compete with an army of fans dissecting their work, looking for ways the characters could have been smarter.
which stories should be x-rationalized next
This leads to another comment on rationalist fiction: most of it seems to be restricted to fan fiction. The mold appears to be: "Let's take a story in which the characters underutilized their opportunities and endow them with intelligence, curiosity, common sense, creativity, and genre-awareness." The contrast between the fanfic and the canon is a major element of the story, and the canon provides an existing scaffold that saves the writer from having to create a context.
This isn't a bad thing necessarily, just an observation.
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?
So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?
Every genre has a theme: romance, adventure, etc.
So where are the stories which are, fundamentally, about stuff like epistemology and moral philosophy?
Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?
Hmm, conditional on that being the case, do you also believe that the closer to physics the mind is, the more of a person there is in it? Example: action potentials encoded in the positions of rods in a Babbage engine vs. spread over fragmented RAM used by a functional programming language with lazy evaluation in the cloud.
Brian Leiter shared an amusing quip from Alex Rosenberg:
...So, the... Nobel Prize for “economic science” gets awarded to a guy who says markets are efficient and there are no bubbles—Eugene Fama (“I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning”—New Yorker, 2010), along with another economist—Robert Shiller, who says that markets are pretty much nothing but bubbles, “Most of the action in the aggregate stock market is bubbles.” (NY Times, October 19, 2013) Imagine the parallel in physics or chemistry or biology—the prize is split between Einstein and Bohr for their disagreement about whether quantum mechanics is complete, or Pauling and Crick for their dispute about whether the gene is a double helix or a triple, or between Gould and Dawkins for their rejection of one another’s views about the units of selection. In these disciplines Nobel Prizes are given to reward a scientist who has established something every one else can bank on. In economics, “Not so much.” This wasn’t the first time they gave the award to an economist who says one thing and another one who asserts its direct d
Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen's empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that further tests didn't hold up. Their mutual conclusion: the efficient market hypothesis is mostly right; there is no short-term predictability based on publicly available information, but there is some long-term predictability. Since the result is fairly messy, Fama and Shiller differ in what they emphasize (and are both over-rhetorical in their emphasis). Does "mostly right" mean false or basically true?
What's causing the remaining disagreement, especially over bubbles? Lack of data. Shiller thinks bubbles exist but are rare enough that he can't solidly establish them, while Fama is unconvinced. Fama and Shiller have done path-breaking scientific work, even if the story about asset price fluctuation isn't 100% settled.
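To illustrate the kind of horizon-dependent test at issue, here's a minimal sketch in Python (simulated data with made-up parameters, not a claim about real markets): a log-price that is a random walk plus a slowly mean-reverting component has a variance ratio near 1 at short horizons (no short-term predictability) but below 1 at multi-year horizons (some long-term predictability).

```python
import random
import statistics

random.seed(0)
n = 250_000  # a long simulated daily series
walk, fad, prices = 0.0, 0.0, []
for _ in range(n):
    walk += random.gauss(0, 0.01)              # random-walk component
    fad = 0.999 * fad + random.gauss(0, 0.01)  # mean-reverts over ~years
    prices.append(walk + fad)                  # log-price

daily = [prices[i + 1] - prices[i] for i in range(n - 1)]

def variance_ratio(h: int) -> float:
    # Var(h-period return) / (h * Var(1-day return)): ~1 for a pure random
    # walk, below 1 when long-horizon returns partially mean-revert.
    rets = [prices[i + h] - prices[i] for i in range(0, n - h, h)]
    return statistics.variance(rets) / (h * statistics.variance(daily))

print("1-week variance ratio:", round(variance_ratio(5), 2))     # ~1.0
print("5-year variance ratio:", round(variance_ratio(1250), 2))  # ~0.8
```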
SPOILERS FOR "FRIENDSHIP IS OPTIMAL"
Why is 'Friendship is Optimal' "dark" and "creepy"? I've read many people referring to it that way. The only things that are clearly bad are the killings of all the other lifeforms, but otherwise this scenario is one of the best that humanity could come across. It's not perfect, but it's good enough and much better than the world we have today. I'm not sure if it's realistic to ask for more. Considering how likely it is that humanity will end in some incredibly fucked up way full of suffering, I would definitely defend this kind of utopia.
(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)
OK, I'll have a go (will be incomplete).
People in general will find the Optimalverse unpleasant for a lot of reasons I'll ignore: major changes to the status quo, perceived incompatibility with non-reductionist worldviews, believing that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.
People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.
(rot13 some bits that might constitute spoilers)
Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength, Free to Optimize.
The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.
Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevra
Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better.
No, let's not ignore it. Let's confront it, because I want a better explanation. Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).
I mean, seriously? Why would I want to mix any noise into this process?
Am I the only one who is bothered that these threads don't start on Monday anymore?
Posting a request from a past open thread again: does anyone have a table of probabilities for major (negative) life events, like divorce or being in a car accident? I ask this to have a priority list of events to be prepared for, either physically or mentally.
ETA: Apparently a new WHO recommendation for filling in death certificates was introduced in 2005-2006, and this caused a significant drop in recorded pneumonia mortality in Finland.
I'm not entirely sure if it works this way in the whole EU, but it probably does. It's more complicated than what I explain below, but it's the big picture that matters.
The most common way mortality statistics get recorded is that the doctor who was treating the patient fills in a death certificate. A death certificate can record three types of causes of death: immediate, intermediate, and underlying. Nobody really cares about the intermediate causes, because recording them is optional. The statistics department in Finland records only the underlying causes, and that's what gets published as mortality statistics. Only one cause of death per patient gets recorded.
If someone with advanced cancer gets pneumonia and dies, the doctor fills in the death certificate saying that the underlying cause of death was cancer and the immediate cause of death was pneumonia. Cancer gets recorded as the one and only cause of dea...
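A minimal sketch (Python, hypothetical field names) of why this convention matters: with one underlying cause recorded per death, the cancer patient who dies of pneumonia never appears in the pneumonia row of the published statistics.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DeathCertificate:
    immediate_cause: str
    underlying_cause: str
    intermediate_causes: tuple = ()  # optional, and generally ignored

certificates = [
    DeathCertificate(immediate_cause="pneumonia", underlying_cause="cancer"),
    DeathCertificate(immediate_cause="pneumonia", underlying_cause="pneumonia"),
    DeathCertificate(immediate_cause="cardiac arrest", underlying_cause="cancer"),
]

# Published mortality statistics count only the underlying cause.
print(Counter(c.underlying_cause for c in certificates))
# Counter({'cancer': 2, 'pneumonia': 1})
```

So a recording-convention change (like the 2005-2006 WHO recommendation above) can shift deaths between rows without anyone's health changing.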
In honor of NaNoWriMo, I offer up this discussion topic for fans of HPMOR and rationalist fiction in general:
How many ways can we find that stock superpowers (magical abilities, sci-fi tech, whatever), if used intelligently, completely break a fictional setting? I'm particularly interested in subtly game-breaking abilities.
The game-breaking consequences of mind control, time travel, and the power to steal other powers are all particularly obvious, but I'm interested in things like e.g. Eliezer pointing out that he had to seriously nerf the Unbreakable Vow in HPMOR to keep the entire story from being about that.
That's because those are among the worst possible ways to use those abilities.
The energy blasts as usually depicted break conservation of energy; with a bit of physics trickery you can get time travel out of that. Even if not, they make you an extremely portable and efficient energy source, perfect for a spaceship where mass is critical and a human needs to come along anyway, though it doesn't matter in particular who, since it's for PR reasons.
Mind reading is a means of communication that does not require cooperation or any abilities in the target, and can't be lied through. Communication with locked-in patients, interrogation, extraction of testimony from animals. And if you can find a way to precommit yourself, you also have fully reliable precommitment checking for everyone, lie detection for political promises, and the ultimate forensics tool.
If you combine the strengths of two kinds of system, you get something greater than the sum of its parts. So it is with human senses and digital sensors. The key here is bandwidth, and analysis. Sure, you can get all the same data onto a computer, but it won't do much good there. Someone with true super-senses as flexible and integrated as the...
Does anyone here have any serious information regarding tulpas? When I first heard of them, they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe. A very obvious sign of a person who is legitimately crazy, even.
Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantag...
I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"
I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.
If I made a game in RPG Maker, would anyone actually play it?
::is trying to decide whether or not to attempt a long-term project with uncertain rewards::
Only if I heard particularly good things about it.
Most creative endeavors you could undertake have a very small chance of leading to external reward, even the validation of people reading/watching/playing them - there's simply too much content available these days for people to read yours. So I'd advise against making such a thing, unless you find making it to be rewarding enough in itself.
I have a game I've been fantasizing about and I think I could make it work. It has to be a game, not a story, because I want to pull a kind of trick on the player. It's not that unusual in fiction for a character to start out on the side of the "bad guys", have a realization that his side is the one that's bad, and then go on to save the day. (James Cameron's Avatar is a recent example.) I want to start the player out on the side of bad guys that appear good, as in Eliezer's short story "The Sword of Good", and then give the player the opportunity to fail to realize that he's on the wrong side. There would be two main story branches: a default one, and one that the player can only get to by going "off-script", as it were, and not going along with what it seems like you have to do to continue the story. (At the end of the default path, the player would be shown a montage of the times he had the chance to do the right thing, but chose not to.)
The actual story would be something like the anti-Avatar; a technological civilization is encroaching on a region inhabited by magic-using, nature-spirit-worshiping nomads. The nature spirits are EVIL (think: "nature, red in tooth and claw") and resort to more and more drastic measures to try to hold back the technological civilization, in which people's lives are actually much better.
Does this sound appealing?
That sounds fun, and like something that'd actually translate nicely to the RPG Maker template. It's also something that takes skill to pull off well: you'll need to play with how the player will initially frame the stuff you show going on versus how it should actually be interpreted. Not coming off as heavy-handed is going to be tricky. Pulling this off also depends on knowing how to use the medium, so if this is the first RPG Maker thing you're making, it's going to be particularly challenging.
There might also be a disconnect between games and movies here. Movies tend to always go out of their way to portray the protagonist's side as good, while games have a lot more of just semi-symmetric opposing factions. You get to play as the kill-happy Zerg or Undead Horde, and nobody pretends you're siding with the noble savages against the inhuman oppressors. So the players might just go, "ooh, I'm the Zerg, cool!" or "I guess I'm supposed to defect from Zerg to Terran here".
Random other thoughts, Battlezone 2 has a similar plot twist with off-script player action needed, though both factions are high-tech. Dominions 4 has Asphodel that's a neat corr...
http://www.refsmmat.com/statistics/
Statistics Done Wrong is a guide to the most popular statistical errors and slip-ups committed by scientists every day, in the lab and in peer-reviewed journals. Many of the errors are prevalent in vast swathes of the published literature, casting doubt on the findings of thousands of papers. Statistics Done Wrong assumes no prior knowledge of statistics, so you can read it before your first statistics course or after thirty years of scientific practice.
Not particularly important, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments, please feel free to do so.
Has anyone else had this happen to them?
Was some change made to the LW code in the past couple of weeks or so? I can't browse this site with my Android smartphone anymore; I've tried several browsers. The site either frequently freezes the browser or shows a blank page after it has finished loading. This happens more with bigger threads.
Anyone else having this problem?
If Yvain is (understandably) too busy to run it this year, I am willing to do it. But I will be making changes if I do it, including reducing the number of free responses and including a basilisk question.
Give me a few days to see if I can throw something together and otherwise I will turn it over to your capable hands (reluctantly; I hate change).
Read about hyperbolic discounting, if you haven't already.
Assuming a conflict between short- and long-term decisions, the general advice is to mentally bundle a given short-term decision with all similar decisions that will occur in the future. For example, you might think of an unhealthy snack tonight as "representing" the decision to eat an unhealthy snack every night.
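A minimal sketch of the reversal that bundling is meant to counteract, using the standard hyperbolic form V = A/(1 + kD) with an illustrative k: the same pair of rewards produces opposite choices depending on how far in advance you decide.

```python
def hyperbolic_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    # Hyperbolic discounting: V = A / (1 + k * D); k chosen for illustration.
    return amount / (1 + k * delay_days)

small_now, big_later = (10, 0), (15, 7)  # (amount, delay in days)

for lead_time in (0, 30):  # deciding today vs. 30 days in advance
    v_small = hyperbolic_value(small_now[0], small_now[1] + lead_time)
    v_big = hyperbolic_value(big_later[0], big_later[1] + lead_time)
    print(f"deciding {lead_time:2d} days ahead:",
          "small-now" if v_small > v_big else "big-later")
# deciding  0 days ahead: small-now
# deciding 30 days ahead: big-later
```

Bundling works on the same principle: a whole year of snacks evaluated at once looks like a large delayed loss rather than one small immediate gain.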
I wonder if there's research that rationalists should do that could be funded this way. I'd pay for high-quality novel review articles about topics relevant to LW.
Incidentally, I'm making a hash precommitment:
43a4c3b7d0a0654e1919ad6e7cbfa6f8d41bcce8f1320fbe511b6d7c38609ce5a2d39328e02e9777b339152987ea02b3f8adb57d84377fa7ccb708658b7d2edc
See http://www.reddit.com/r/DarkNetMarkets/comments/1pta82/precommitment/
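For anyone unfamiliar with the mechanics, here's a minimal sketch in Python (the message is hypothetical; the 128 hex digits above are consistent with a SHA-512 digest): publish the hash now, reveal the text later, and anyone can verify that they match.

```python
import hashlib

message = b"my prediction: X happens before 2014-06-01"  # kept secret for now
commitment = hashlib.sha512(message).hexdigest()
print(commitment)  # publish this now

# Later, after `message` is revealed, anyone can check:
assert hashlib.sha512(message).hexdigest() == commitment
```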
I'm a bit emotionally tense at the moment, so this observation might not be as valuable as it seems to me, but it occurs to me that there are two categories of things I do: thinking things through in detail, and acting on emotion with very little forethought involved. The category that we want--thinking an action through, then performing it--is mysteriously absent.
It's possible to get around this to some extent, but it requires the emotionally-driven, poorly-thought out things to involve recurring or predictable stimuli. In those cases, I can think through...
A way to fall asleep and/or gain gut intuition for "exponentially slow": count in binary, in your head, at a regular beat. YMMV.
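A minimal sketch of the underlying arithmetic: counting at one tick per second, bit n of the counter first turns on at count 2**n, so each successive bit takes twice as long to reach.

```python
SECONDS_PER_YEAR = 31_557_600  # 365.25 days

for n in (10, 20, 30, 40):
    count = 2 ** n  # first count at which bit n is set
    print(f"bit {n} first set at count {count:,} (~{count / SECONDS_PER_YEAR:.2g} years)")
```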
Here's a more difficult version of the AI box experiment. I haven't seen this particular version anywhere, but I'd be pleased to be proven wrong.
Imagine we've come up with a very intelligent AI that is free to manipulate the environment and uses an action-reward system like Hutter's AIXI. Also imagine that we've somehow figured out a way to make the rewards very hard to counterfeit (perhaps we require the rewards to be cryptographically signed). It's clear that in such a system, the 'weak point' would be the people in control of the private key. In this case ...
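A minimal sketch of that setup (hypothetical protocol; an HMAC stands in for a real digital signature, so one shared key replaces the public/private pair): the reward channel accepts only values that verify under a key the AI never sees, which relocates the vulnerability to the key holders, exactly as described.

```python
import hashlib
import hmac

PRIVATE_KEY = b"held by the human operators, never by the AI"

def sign_reward(reward: float) -> tuple:
    # Operators attach an authentication tag to each reward.
    tag = hmac.new(PRIVATE_KEY, repr(reward).encode(), hashlib.sha256).digest()
    return reward, tag

def accept_reward(reward: float, tag: bytes) -> bool:
    # The reward channel rejects any value without a valid tag.
    expected = hmac.new(PRIVATE_KEY, repr(reward).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

assert accept_reward(*sign_reward(1.0))      # genuine reward accepted
assert not accept_reward(1.0, b"\x00" * 32)  # counterfeit rejected
```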
I've seen a few posts about the sequences being released as an ebook, is there a time frame on this?
I'd really like to get the ebook printed out by some online service so I can underline/write on them as I read through them.
Why does this forum spend so much time and effort discussing untestables and unprovables? It's disappointing.
test