This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
I propose that LessWrong should produce a quarterly magazine of its best content.
LessWrong readership has a significant overlap with the readers of Hacker News, a reddit/digg-like community of tech entrepreneurs. So you might be familiar with Hacker Monthly, a print magazine version of Hacker News. The first edition, featuring 16 items that were voted highly on Hacker News, came out in June, and the second came out today. The curator went to significant effort to contact the authors of the various articles and blog posts to include them in the magazine.
Why would we want LessWrong content in a magazine? I personally would find it a great recruitment tool; I could have copies at my house and show/lend/give them to friends. As someone at the Hacker News discussion commented, "It's weird but I remember reading some of these articles on the web but, reading them again in magazine form, they somehow seem much more authoritative and objective. Ah, the perils of framing!"
The publishing and selling part is not too difficult. Hacker Monthly uses MagCloud, a company that makes it easy to turn PDFs into printed magazines and sell them.
Unfortunately, I don't have the skills or time to d...
A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:
http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all
While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.
Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.
This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. My father and I pointed out that this would literally require the existence of magic, she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.
This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn'...
Wanting cryo signals disloyalty to your present allies.
Women, it seems, are especially sensitive to this (mothers, wives). Here's my explanation for why:
1. Women are better than men at analyzing the social-signalling theory of actions. In fact, they (mostly) obsess about that kind of thing, e.g. watching soap operas, gossiping, people watching, etc. (disclaimer: on average)
2. They are less rational than men (only slightly, on average), and this is compounded by the fact that they are less knowledgeable about technical things (disclaimer: on average), especially physics, computer science, etc.
3. Women are more bound by social convention and less able to be lone dissenters. Asch's conformity experiment found women to be more conforming.
4. Because of (2) and (3), women find it harder than men to take cryo seriously. Therefore, they are much more likely to think that it is not a feasible thing for them to do.
5. Because they are so into analyzing social signalling, they focus in on what cryo signals about a person. Overwhelmingly: selfishness, and, as they don't think they're going with you, betrayal.
Communicating complex factual knowledge in an emotionally charged situation is hard, to say nothing of actually causing a change in deep moral responses. I don't think failure is strong evidence for the nonexistence of such information. (Especially since I think one of the most likely sorts of knowledge to have an effect is about the origin — evolutionary and cognitive — of the relevant responses, and trying to reach an understanding of that is really hard.)
You are underestimating just how enormously Peggy would have to change her mind. Her life's work involves emotionally comforting people and their families through the final days of terminal illness. She has accepted her own mortality and the mortality of everyone else as one of the basic facts of life. As no one has been resurrected yet, death still remains a basic fact of life for those who don't accept the information-theoretic definition of death.
To change Peggy's mind, Robin would not just have to convince her to accept his own cryonic suspension; she would also have to be convinced to change her life's work -- to no longer spend her working hours convincing people to accept death, but to convince them to accept death while simultaneously signing up for very expensive, very unproven, crazy-sounding technology.
Changing the mind of the average cryonics-opposed life partner should be a lot easier than changing Peggy's mind. Most cryonics-opposed life partners have not dedicated their lives to something diametrically opposed to cryonics.
Drowning Does Not Look Like Drowning
Fascinating insight against generalizing from fictional evidence in a very real life-or-death situation.
Cryonics scales very well; even if you had to come up with the entire lump sum close to the end of your life, it would cost less than most people assume. People who think cryonics is inherently costly are generally ignorant of this fact.
So long as you keep the shape constant, for any given container the surface area follows a square law whereas the volume follows a cube law. For example, with a cube-shaped object, one side squared times 6 is the surface area, whereas one side cubed is the volume. Surface area is where the heat gets in, so a huge container holding cryogenic goods (humans in this case) costs much less per unit volume (human) than a smaller container of equal insulation. A way to understand this is that you only have to insulate the outside -- the inside gets free insulation.
But you aren't stuck using equal insulation. You can use thicker insulation, with a much smaller proportional effect on total surface area as you go to bigger sizes. Imagine the difference between a marble-sized freezer and a house-sized freezer when you add a foot of insulation. The outside of the insulation is where it begins collecting heat. But with a gigantic freezer, you might add a meter of insulation without it ha...
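A toy calculation makes the scaling visible (my own sketch with made-up numbers, not actual dewar specifications):

```python
def leak_per_volume(side_m, insulation_m=0.1, k=1.0):
    """Relative conductive heat leak per unit of stored volume for a cube.

    Leak scales with outer surface area over insulation thickness;
    storage capacity scales with the inner volume (side_m ** 3).
    """
    outer_side = side_m + 2 * insulation_m
    area = 6 * outer_side ** 2
    return k * area / insulation_m / side_m ** 3

for side in (0.5, 1, 2, 4, 8):
    print(f"{side:4} m container: relative leak/volume {leak_per_volume(side):7.1f}")
```

Going from a half-meter box to an eight-meter one cuts the heat leak per unit of stored volume by more than an order of magnitude, before you even thicken the insulation.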
This is a mostly-shameless plug for the small donation matching scheme (http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr) I proposed in May:
I'm still looking for three people to cross the "membrane that separates procrastinators and helpers" (http://lesswrong.com/lw/d6/the_end_of_sequences) by donating $60 to the Singularity Institute (http://singinst.org/donate/whysmalldonationsmatter). If you're interested, see my original comment (http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr). I will match your donation.
I was at a recent Alexander Technique workshop, and some of the teachers had been observing how two year olds crawl.
If you've had any experience with two year olds, you know they can cover ground at an astonishing rate.
The thing is, adults typically crawl with their faces perpendicular to the ground, and crawling feels clumsy and unpleasant.
Two year olds crawl with their faces at 45 degrees to the ground, and a gentle curve through their upper backs.
Crawling that way gives access to a surprisingly strong forward impetus.
The relevance to rationality and to akrasia is the implication that if something seems hard, it may be that the preconditions for making it easy haven't been set up.
Here's a puzzle I've been trying to figure out. It involves observation selection effects and agreeing to disagree. It is related to a paper I am writing, so help would be appreciated. The puzzle is also interesting in itself.
Charlie tosses a fair coin to determine how to stock a pond. If heads, it gets 3/4 big fish and 1/4 small fish. If tails, the other way around. After Charlie does this, he calls Al into his office. He tells him, "Infinitely many scientists are curious about the proportion of fish in this pond. They are all good Bayesians with the same prior. They are going to randomly sample 100 fish (with replacement) each and record how many of them are big and how many are small. Since so many will sample the pond, we can be sure that for any n between 0 and 100, some scientist will observe that n of his 100 fish were big. I'm going to take the first one that sees 25 big and team him up with you, so you can compare notes." (I don't think it matters much whether infinitely many scientists do this or just 3^^^3.)
Okay. So Al goes and does his sample. He pulls out 75 big fish and becomes nearly certain that 3/4 of the fish are big. Afterwards, a guy na...
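For what it's worth, Al's near-certainty is easy to verify; here's a quick sketch in Python (my own code; the binomial likelihoods follow directly from the setup, and the numbers in the comments are computed, not quoted from the puzzle):

```python
from math import comb

def posterior_heads(n_big, n=100, p=0.75):
    """P(pond is 3/4 big fish | sample), with a fair-coin prior
    and binomial sampling (with replacement)."""
    like_heads = comb(n, n_big) * p ** n_big * (1 - p) ** (n - n_big)
    like_tails = comb(n, n_big) * (1 - p) ** n_big * p ** (n - n_big)
    return like_heads / (like_heads + like_tails)

# The binomial coefficients cancel, so the odds ratio is 3 ** (2 * n_big - n).
print(posterior_heads(75))  # Al: 1 - 3**-50, i.e. all but certain
print(posterior_heads(25))  # the selected scientist: 3**-50, nearly zero
```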
Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?
Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.
Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. E...
...A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we
Here are some assumptions one can make about how "intelligences" operate:
and an assumption about what "rationality" means:
I have two questions:
I think that these assumptions are implicit in most and maybe all of what this community ...
Is there an on-line 'rationality test' anywhere, and if not, would it be worth making one?
The idea would be to have some type of on-line questionnaire, testing for various types of biases, etc. Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of ...
I can't remember if this has come up before...
Currently the Sequences are mostly as-imported from OB; including all the comments, which are flat and voteless as per the old mechanism.
Given that the Sequences are functioning as our main corpus for teaching newcomers, should we consider doing some comment topiary on at least the most-read articles? Specifically, I wonder if an appropriate thread structure could be inferred from context; we could also vote the comments up or down to make the useful-in-hindsight stuff more salient. There's a lot of great ...
I've suggested in the past that we use the old posts as filler; that is, if X days go by without something new making it to the front page, the next oldest item gets promoted instead.
Even if we collectively have nothing to say that is completely new, we likely have interesting things to say about old stuff - even if only linking it forward to newer stuff.
http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/
Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.
From a rationality point of view, is it better to be inconsistent than consistently wrong?
There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-p...
Has anyone continued to pursue the Craigslist charity idea that was discussed back in February, or did that just fizzle away? With stakes that high and a non-negligible chance of success, it seemed promising enough for some people to devote some serious attention to it.
Thanks for asking! I also really don't want this to fizzle away.
It is still being pursued by myself, Michael Vassar, and Michael GR via back channels rather than what I outlined in that post and it is indeed getting serious attention, but I don't expect us to have meaningful results for at least a year. I will make a Less Wrong post as soon as there is anything the public at large can do -- in the meanwhile, I respectfully ask that you or others do not start your own Craigslist charity group, as it may hurt our efforts at moving forward with this.
ETA: Successfully pulling off this Craigslist thing has big overlaps with solving optimal philanthropy in general.
Okay, here's something that could grow into an article, but it's just rambling at this point. I was planning this as a prelude to my ever-delayed "Explain yourself!" article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.
Title: On Mechanizing Science (Epistemology?)
"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan
“It is not possible … to construct ...
I think there is an additional interpretation that you're not taking into account, and an eminently reasonable one.
First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you'd need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.
However, the more important question is -- what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless -- or worse. At the same time, it's unclear how much valid science is coming out except for those scientists who have maintained a high degree of purely informa...
Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.
The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables with positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.
I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and that even his c...
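For anyone who wants to see the claimed artifact directly, here's a small simulation in the spirit of Thomson's sampling model (my own toy construction, not Shalizi's code): tests built from samples of many mutually uncorrelated abilities still yield an all-positive correlation matrix and a dominant first factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2000 simulated people, each with 500 independent abilities; every test
# sums a random subset of 100 of them. No general factor exists here
# by construction.
n_people, n_abilities, n_tests, per_test = 2000, 500, 10, 100
abilities = rng.standard_normal((n_people, n_abilities))
tests = np.column_stack([
    abilities[:, rng.choice(n_abilities, per_test, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

corr = np.corrcoef(tests, rowvar=False)
off_diag = corr[~np.eye(n_tests, dtype=bool)]
eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending
print("all intercorrelations positive:", (off_diag > 0).all())
print("variance on first factor: %.0f%%" % (100 * eigvals[0] / eigvals.sum()))
```

Expected overlap between two tests is 100*100/500 = 20 shared abilities, giving correlations around 0.2 and a first "general" factor carrying roughly a quarter of the variance, despite there being nothing general to find.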
I pointed this out to my buddy who's a psychology doctoral student, his reply is below:
I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.
I do know enough about CFA and EFA to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.
He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!
But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.
So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.
cousin_it:
A sad sight, that.
Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so at the end I was left skeptical towards all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.
He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.
Morendil:
Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?
A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.
In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)
Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll ju...
SarahC:
But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.
Just to be clear, this is not an original idea of Shalizi's, but the well-known "sampling theory" of general intelligence first proposed by Godfrey Thomson almost a century ago. Shalizi states this very clearly in the post, and credits Thomson with the idea. However, for whatever reason, he fails to mention the very extensive discussions of this theory in the existing literature, and writes as if Thomson's theory had been ignored ever since, which definitely doesn't represent the actual situation accurately.
In a recent paper by van der Maas et al., which presents an extremely interesting novel theory of correlations that give rise to g (and which Shalizi links to at one point), the authors write:
...Thorndike (1927) and Thomson (1951) proposed one such alternative mechanism, namely, sampling. In this sampling theory, carrying out cognitive tasks requires the use of many lower order uncorrelated modules or n
I can't immediately think of any additional issue. It's more that I don't see the lack of well-known disjoint sets of uncorrelated cognitive modules as a fatal problem for Thomson's theory, merely weak disconfirming evidence. This is because I assign a relatively low probability to psychologists detecting tests that sample disjoint sets of modules even if they exist.
For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)". The paper's about the general goal of coming up with universal definitions of and ways to measure intelligence, but in the middle of it is a polemical/sceptical summary of research into g & IQ.
Smith went through a correlation matrix for 57 tests given to 240 people, published by Thurstone in 1938, and saw that the 3 most negative of the 1596 intercorrelations were between these pairs of tests:
My beef isn't with Shalizi's reasoning, which is correct. I disagree with his text connotationally. Calling something a "myth" because it isn't a causal factor and you happen to study causal factors is misleading. Most people who use g don't need it to be a genuine causal factor; a predictive factor is enough for most uses, as long as we can't actually modify dendrite density in living humans or something like that.
Robert Ettinger's surprise at the incompetence of the establishment:
...Robert Ettinger waited expectantly for prominent scientists or physicians to come to the same conclusion he had, and to take a position of public advocacy. By 1960, Ettinger finally made the scientific case for the idea, which had always been in the back of his mind. Ettinger was 42 years old and said he was increasingly aware of his own mortality.[7] In what has been characterized as an historically important mid-life crisis,[7] Ettinger summarized the idea of cryonics in a few pages, w
I'm a bit surprised that nobody seems to have brought up The Salvation War yet. [ETA: direct links to first and second part]
It's a Web Original documentary-style techno-thriller, based around the premise that humans find out that a Judeo-Christian Heaven and (Dantean) Hell (and their denizens) actually exist, but it turns out there's nothing supernatural about them, just some previously-unknown/unapplied physics.
The work opens in medias res into a modern-day situation where Yahweh has finally gotten fed up with those hairless monkeys no longer being the ...
The following is a story I wrote down so I could sleep. I don't think it's any good, but I posted it on the basis that, if that's true, it should quickly be voted down and vanish from sight.
one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution convert...
Something I've been pondering recently:
This site appears to have two related goals:
a) How to be more rational yourself
b) How to promote rationality in others
Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.
One might be more effective in the short term, but you might think the rational argu...
I wish there were an area of science that gave reductionist explanations of morality, that is, of the detailed contents of our current moral values and norms. One example that came up earlier was monogamy: why do all modern industrialized countries have monogamy as a social norm?
The thing that's puzzling me now is egalitarianism. As Carl Shulman pointed out, the problem that CEV has with people being able to cheaply copy themselves in the future is shared with democracy and other political and ethical systems that are based on equal treatment or rights of all...
The comments on the Methods of Rationality thread are heading towards 500. Might this be time for a new thread?
This seems extremely pertinent for LW: a paper by Andrew Gelman and Cosma Shalizi. Abstract:
...A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distri
I have begun a design for a general computer tool to calculate utilities. To give a concrete example, you give it a sentence like
I would prefer X1 amount of money in Y1 months to X2 in Y2 months. Then give it reasonable bounds for X and Y, simple additional information (e.g. you always prefer more money to less), and let it interview some people. It'll plot a utility function for each person, and you can check the fit of various models (e.g. exponential discounting, no discounting, hyperbolic discounting).
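To make the model-checking step concrete, here's a minimal sketch of what I have in mind, with synthetic interview answers standing in for real data (the numbers and the grid-search fitting are mine, purely illustrative):

```python
import numpy as np

# Each answer has been reduced to an indifference point: the immediate
# amount judged equal to $100 delivered after the given delay.
delays = np.array([1, 3, 6, 12, 24])            # months
present_equiv = np.array([95, 85, 74, 60, 45])  # stated present values

exp_disc = lambda d, r: np.exp(-r * d)        # exponential: V = A * exp(-rD)
hyp_disc = lambda d, k: 1.0 / (1.0 + k * d)   # hyperbolic:  V = A / (1 + kD)

def sse(disc, param):
    return ((100 * disc(delays, param) - present_equiv) ** 2).sum()

def fit(disc, grid=np.linspace(1e-4, 1.0, 10_000)):
    """Grid-search the single parameter that minimizes squared error."""
    return min(grid, key=lambda p: sse(disc, p))

for name, disc in [("exponential", exp_disc), ("hyperbolic", hyp_disc)]:
    p = fit(disc)
    print(f"{name}: parameter {p:.3f}, SSE {sse(disc, p):.1f}")
```

A real version would fit per person and use a proper optimizer, but comparing the squared errors already shows which discounting model describes a given interviewee better.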
My original goals were to
Antinatalism is the argument that it is a bad thing to create people.
What arguments do people have against this position?
I know argumentum ad populum does not work, and I know arguments from authority do not work, but perhaps they can be combined into something more potent:
Can anyone recall a hypothesis that had been supported by a significant subset of the lay population, consistently rejected by the scientific elites, and turned out to be correct?
It seems belief in creationism has this structure: the lower you go in education level, the more common the belief. I wonder whether this alone can be used as evidence against this 'theory' and others like it.
Is there a principled reason to worry about being in a simulation but not worry about being a Boltzmann brain?
Here are very similar arguments:
1. If posthumans run ancestor simulations, most of the people in the actual world with your subjective experiences will be sims.
2. If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.
3. Therefore, if posthumans run ancestor simulations, you are probably a sim.
vs.
If our current model of cosmology is correct,
A small koan on utility functions that "refer to the real world".
Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?
Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?
In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.
Paul Graham has written extensively on startups and what they require: a highly focused team of 2-4 founders, who must be willing to admit when their business model or product is flawed, yet be enthused enough about it to pour their energy into it.
Steve Blank has also written about the Customer Development process, which he sees as paralleling the Product Development cycle. The idea is to get empirical feedback by trying to sell your product from the get-go, as soon as you have something minimal but useful. Then you test it for scalability. Eventually you have ...
I know this thread is a bit bloated already without me adding to the din, but I was hoping to get some assistance on page 11 of Pearl's Causality (I'm reading 2nd edition).
I've been following along and trying to work out the examples, and I'm hitting a road block when it comes to deriving the property of Decomposition using the given definition (X || Y | Z) iff P(x | y,z) = P(x | z), and the basic axioms of probability theory. Part of my problem is that I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates...
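In case it helps anyone else stuck at the same spot: 'YW' is Pearl's shorthand for the joint (composite) variable (Y, W), so the premise (X || YW | Z) unpacks to P(x | y,w,z) = P(x | z). With that reading, Decomposition -- (X || YW | Z) implies (X || Y | Z) -- falls out in a few lines (my sketch):

```latex
% Premise: P(x \mid y, w, z) = P(x \mid z) for all y, w, z.
\begin{align*}
P(x \mid y, z) &= \sum_w P(x, w \mid y, z)                  \\
               &= \sum_w P(x \mid y, w, z)\, P(w \mid y, z) \\
               &= \sum_w P(x \mid z)\, P(w \mid y, z)
                  \quad\text{(premise)}                     \\
               &= P(x \mid z) \sum_w P(w \mid y, z) = P(x \mid z).
\end{align*}
```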
Well, given that I can now be confident my words won't encourage you*, I will feel free to mention that I found the attitudes of many of those replying to you troubling. There seemed to be an awful lot of verbiage ascribing detailed motivations to you based on (so far as I could tell) little more than (a) your disagreement and (b) your tone, and these descriptions, I feel, were accepted with greater confidence than would be warranted given their prior complexity and their current bases of evidential support.
None of the above is to withdraw my remarks towar...
I love that on LW, feeding the trolls consists of writing well-argued and well-supported rebuttals.
We think of Aumann updating as updating upward if the other person's probability is higher than you thought it would be, or updating downward if the other person's probability is lower than you thought it would be. But sometimes it's the other way around. Example: there are blue urns that have mostly blue balls and some red balls, and red urns that have mostly red balls and some blue balls. Except on Opposite Day, when the urn colors are reversed. Opposite Day is rare, and if it's OD you might learn it's OD or you might not. A and B are given an urn and ar...
I have been thinking about "holding off on proposing solutions." Can anyone comment on whether this is more about the social friction involved in rejecting someone's solution without injuring their pride, or more about the difficulty of getting an idea out of your head once it's there?
If it's mostly social, then I would expect the method to not be useful when used by a single person; and conversely. My anecdote is that I feel it's helped me when thinking solo, but this may be wishful thinking.
I have a few questions.
1) What's "Bayescraft"? I don't recall seeing this word elsewhere. I haven't seen a definition on LW wiki either.
2) Why do some people capitalize some words here? Like "Traditional Rationality" and whatnot.
An interesting site I stumbled across recently: http://youarenotsosmart.com/
They talk about some of the same biases we talk about here.
Is self-ignorance a prerequisite of human-like sentience?
I present here some ideas I've been considering recently with regards to philosophy of mind, but I suppose the answer to this question would have significant implications for AI research.
Clearly, our instinctive perception of our own sentience/consciousness is one which is inaccurate and mostly ignorant: we do not have knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.
Yet I take it as true that our brains - like everything else - are purely ...
Downvoted for unnecessarily rude plonking. You can tell someone you're not interested in what they have to say without being mean.
So, probably like most everyone else here, I sometimes get complaints (mostly from my ex-girlfriend, you can always count on them to point out your flaws) that I'm too logical and rational and emotionless and I can't connect with people or understand them et cetera. Now, it's not like I'm actually particularly bad at these things for being as nerdy as I am, and my ex is a rather biased source of information, but it's true that I have a hard time coming across as... I suppose the adjective would be 'warm', or 'human'. I've attributed a lot of this to a) my ...
Information theory challenge: A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.
To do so, you'd have to exploit all the regularities of English to offer suggestions that save the user from having to specify individual letters. Most of the entropy is in the initial cha...
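As a proof of concept, here's a toy simulation (my own sketch; it uses only a unigram model, so it won't get anywhere near 1 bit per character, but it shows the mechanism): each press bisects the remaining probability mass, so selecting a character costs about -log2 p(char) presses.

```python
import math
from collections import Counter

corpus = "the quick brown fox jumps over the lazy dog " * 50  # stand-in corpus
counts = Counter(corpus)
total = sum(counts.values())
prob = {c: n / total for c, n in counts.items()}

def presses_for(ch):
    """Keystrokes to select ch by repeatedly halving the probability mass."""
    candidates = sorted(prob, key=prob.get, reverse=True)
    presses = 0
    while len(candidates) > 1:
        mass = sum(prob[c] for c in candidates)
        half, acc = [], 0.0
        for c in candidates:           # greedily take ~half the mass
            if half and acc >= mass / 2:
                break
            half.append(c)
            acc += prob[c]
        candidates = half if ch in half else candidates[len(half):]
        presses += 1                   # one of the two keys was pressed
    return presses

message = "the lazy fox"
avg = sum(presses_for(c) for c in message) / len(message)
entropy = -sum(p * math.log2(p) for p in prob.values())
print(f"{avg:.2f} presses/char vs. {entropy:.2f} bits/char unigram entropy")
```

Swap in a strong context model (n-grams or better) for `prob` at each step and the presses-per-character figure should approach the one-bit estimate.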
Another reference request: Eliezer made a post about how it's ultimately incoherent to talk about how "A causes B" in the physical world because at root, everything is caused by the physical laws and initial conditions of the universe. But I don't remember what it is called. Does anybody else remember?
I have some half-baked ideas about getting interesting information on lesswrongers' political opinions.
My goal is to give everybody an "alien's eye" view of their opinions, something like: "You hold position Foo on issue Bar, and justify it by the X books you read on Bar; but among the sampled people who read X or more books on Bar, 75% hold position ~Foo, suggesting that you are likely to be overconfident."
Something like collecting:
- your positions on various issues
- your confidence in that position
- how important various characteristics
I have an IQ of 85. My sister has an IQ of 160+. AMA.
http://www.reddit.com/r/IAmA/comments/cma2j/i_have_an_iq_of_85_my_sister_has_an_iq_of_160_ama/
Posted because of previous LW interest in a similar thread.
I've been listening to a podcast (Skeptically Speaking) talking with a fellow named Sherman K. Stein, author of Survival Guide for Outsiders. I haven't read the book, but it seems that the author has a lot of good points about how much weight to give to expert opinions.
EDIT: Having finished listening, I revise my opinion down. It's still probably worth reading, but wait for it to get to the library.
Scientific study roundup: fish oil and mental health.
We've been thinking about moral status of identical copies. Some people value them, some people don't, Nesov says we should ask a FAI because our moral intuitions are inadequate for such problems. Here's a new intuition pump:
Wolfram Research has discovered a cellular automaton that, when run for enough cycles, produces a singleton creature named Bob. From what we can see, Bob is conscious, sentient and pretty damn happy in his swamp. But we can't tweak Bob to create other creatures like him, because the automaton's rules are too fragile and poorly understood, and finding another ruleset with sentient beings seems very difficult as well. My question is, how many computers must we allocate to running identical copies of Bob and his world to make our moral sense happy? Assume computing power is pretty cheap.
I completely lack the moral intuition that one should create new conscious beings if one knows that they will be happy. Instead, my ethics apply only to existing people. I am actually completely baffled that so many people seem to have this intuition.
Thus, there is no reason to copy Bob. (Moreover, I avoid the repugnant conclusion.)