This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
My experiments with nootropics continue. A few days ago, I started taking sulbutiamine (350mg/day), a synthetic analog of thiamine which differs in that it crosses the blood-brain barrier more readily. The effects were immediate, positive, and extremely dramatic - on an entirely different order of magnitude than I expected, and probably the largest single improvement to my subjective well-being I have ever experienced. A feeling of mental fatigue and not wanting to do stuff - a feeling that leads to spending lots of time on blogs, playing video games, and otherwise killing time suboptimally (though not necessarily the only such feeling) - just up and vanished overnight. This was something I had identified as a major problem and believed to be purely psychological in nature, but it was, in fact, entirely biochemical. On the first day I took sulbutiamine, I felt significantly better, worked three hours longer than normal, and went to the gym (which would previously have been entirely out of character for me).
That said, I do have a concrete reason to believe that this effect is atypical. Specifically, I believe I was deficient in thiamine; I believe this because I'm a type 1 diabeti...
Oh, in other news, the FDA is apparently going after piracetam; smartpowders.com reports that it's been ordered to cease selling piracetam and is frantically trying to get rid of its stock. See
Bridging the Chasm between Two Cultures: A former New Age author writes about slowly coming to realize New Age is mostly bunk and that the skeptic community actually might have a good idea about keeping people from messing themselves up. Also about how hard it is to open a genuine dialogue with the New Age culture, which has set up pretty formidable defenses to perpetuate itself.
Hah, was just coming here to post this. This article sort of meanders, but it's definitely worth skimming at least for the following two paragraphs:
One of the biggest falsehoods I've encountered is that skeptics can't tolerate mystery, while New Age people can. This is completely wrong, because it is actually the people in my culture who can't handle mystery—not even a tiny bit of it. Everything in my New Age culture comes complete with an answer, a reason, and a source. Every action, emotion, health symptom, dream, accident, birth, death, or idea here has a direct link to the influence of the stars, chi, past lives, ancestors, energy fields, interdimensional beings, enneagrams, devas, fairies, spirit guides, angels, aliens, karma, God, or the Goddess.
We love to say that we embrace mystery in the New Age culture, but that’s a cultural conceit and it’s utterly wrong. In actual fact, we have no tolerance whatsoever for mystery. Everything from the smallest individual action to the largest movements in the evolution of the planet has a specific metaphysical or mystical cause. In my opinion, this incapacity to tolerate mystery is a direct result of my culture’s disavowal of the intellect. One of the most frightening things about attaining the capacity to think skeptically and critically is that so many things don't have clear answers. Critical thinkers and skeptics don't create answers just to manage their anxiety.
Can't decide? With the Universe Splitter iPhone app, you can do both! The app queries a random number generator in Switzerland which releases a single photon into a half-silvered mirror, meaning that according to MWI each outcome is seen in one branch of the forking Universe. I particularly love the chart of your forking decisions so far.
I'm looking for something that I hope exists:
Some kind of internet forum that caters to the same crowd as LW (scientifically literate, interested in technology, roughly atheist or rationalist) but is just a place to chat about a variety of topics. I like the crowd here but sometimes it would be nice to talk more casually about stuff other than the stated purpose of this blog.
Any options?
What fosters a sense of camaraderie or hatred for a machine? Or: How users learned to stop worrying and love Clippy
http://online.wsj.com/article/SB10001424052748703959704575453411132636080.html
I’m not sure whether the satanic ritual abuse and similar prosecutions of the 80s/90s have ever been discussed on LW in any detail (I couldn’t find anything with a few google searches), but some of the failures of rationality in those cases seem to fit into the subject matter here.
For those unfamiliar with these cases, a sort of panic swept through many parts of the United States (and later other countries) resulting in a number of prosecutions of alleged satanic ritual abuse or other extensive conspiracies involving sexual abuse, despite, in almost all cases, virtually no physical evidence that such abuse occurred. Lack of physical evidence, of course, does not always mean that a crime has not occurred, but given the particular types of allegations made, it was not credible in most cases that no physical evidence would exist. It is hard to choose the most outrageous example, but this one is pretty remarkable:
...Gerald [Amirault], it was alleged, had plunged a wide-blade butcher knife into the rectum of a 4-year-old boy, which he then had trouble removing. When a teacher in the school saw him in action with the knife, she asked him what he was doing, and then told him not to do it...
Excellent link. A particularly noteworthy excerpt:
[Q:] I assume that most people in these jobs aren't actually trying to convict innocent people. So how does such misconduct come about?
[A:] I think what happens is that prosecutors and police think they've got the right guy, and consequently they think it's OK to cut corners or control the game a little bit to make sure he's convicted.
This is the same phenomenon that is responsible for most scientific scandals: people cheat when they think they have the right answer.
It illustrates why proper methods really ought to be sacrosanct even when you're sure.
The welcome thread is about to hit 500 comments, which means that the newer comments might start being hidden for new users. Would it be a good thing if I started a new welcome thread?
While I'm at it, I'd like to add some links to posts I think are especially good and interesting for new readers.
A couple of viewquakes at my end.
I was really pleased when the Soviet Union went down-- I thought people there would self-organize and things would get a lot better.
This didn't happen.
I'm still more libertarian than anything else, but I've come to believe that libertarianism doesn't include a sense of process. It's a theory of static conditions, and doesn't have enough about how people actually get to doing things.
The economic crisis of 2007 was another viewquake for me. I literally went around for a couple of months muttering about how I had no idea it (the economy) was so fragile. A real estate bust was predictable, but I had no idea a real estate bust could take so much with it. Of course, neither did a bunch of other people who were much better paid and educated to understand such things, but I don't find that entirely consoling.
This gets back to libertarianism and process, I think. Protections against fraud don't just happen. They need to be maintained, whether by government or otherwise.
NancyLebovitz:
The economic crisis of 2007 was another viewquake for me. I literally went around for a couple of months muttering about how I had no idea it (the economy) was so fragile. A real estate bust was predictable, but I had no idea a real estate bust could take so much with it.
That depends on what exactly you mean by "the economy" being fragile. Most of it is actually extremely resilient to all sorts of disasters and destructive policies; if it weren't so, modern civilization would have collapsed long ago. However, one critically unstable part is the present financial system, which is indeed an awful house of cards inherently prone to catastrophic collapses. Shocks such as the bursting of the housing bubble get their destructive potential precisely because their effect is amplified by the inherent instabilities of the financial system.
Moldbug's article "Maturity Transformation Considered Harmful" is probably the best explanation of the root causes of this problem that I've seen.
I'm considering starting a Math QA Thread at the toplevel, due to recent discussions about the lack of widespread math understanding on LW. What do you say?
[Originally posted this in the first August 2010 Open Thread instead of this Part 2; oops]
I've been wanting to change my username for a while, and have heard from a few other people who want to as well, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I've told people my username, a couple of people have said that they didn't remember specific things I've posted here, but had some generally positive affect associated with the name "ata". I would not want t...
I've been thinking more and more about web startups recently (I'm nearing the end of grad school and am contemplating whether a life in academia is for me). I'm no stranger to absurd 100 hour weeks, love technology, and most of all love solving problems, especially if it involves making a new tool. Academia and startups are both pretty good matches for those specs.
Searching the great wisdom of the web suggests that a good startup should be two people, and that the best candidate for a cofounder is someone you've known for a while. From my own perspective, ...
Quick question about time: Is a time difference the same thing as the minimal energy-weighted configuration-space distance?
IMO, the quality of comments on Overcoming Bias has diminished significantly since Less Wrong started up. This was true almost from the beginning, but the situation has really spiraled out of control more recently.
I gave up reading the comments regularly last year, but once a week or so, I peek at the comments and they are atrociously bad (and almost uniformly negative). The great majority seem unwilling to even engage with Robin Hanson's arguments and instead rely on shaming techniques.
So what gives? Why is the comment quality so much higher on LW than on...
Quick question — I know that Eliezer considers all of his pre-2002 writings to be obsolete. GISAI and CFAI were last updated in 2001 but are still available on the SIAI website and are not accompanied by any kind of obsolescence notice (and are referred to by some later publications, if I recall correctly). Are they an exception, or are they considered completely obsolete as well? (And does "obsolete" mean "not even worth reading", or merely "outdated and probably wrong in many instances"?)
Is there a post dealing with the conflict between the common LW beliefs that there are no moral absolutes and that it's okay to make current values permanent, and the belief that we have made moral progress by giving up stoning adulterers, slavery, recreational torture, and so on?
I recently started taking piracetam, a safe and unregulated (in the US) nootropic drug that improves memory. The effect (at a dose of 1.5g/day) was much stronger than I anticipated; I expected the difference to be small enough to leave me wondering whether it was mere placebo effect, but it has actually made a very noticeable difference in the amount of detail that gets committed to my long-term memory.
It is also very cheap, especially if you buy it as a bulk powder. Note that when taking piracetam, you also need to take something with choline in it. I bo...
..."But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing.
Oh, we’ll invoke lush clichés about the lonely heroism of Olympic athletes, the pain and analgesia of football, the early rising and hours of practice and restricted diets, the preflight celibacy, et cetera. But the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collaps
Jaron Lanier is at it again: The First Church of Robotics
Besides piling up his usual fuzzy opinions about AI, Jaron claims, and I cannot imagine that this was done out of sheer ignorance, that "This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined"; I cannot imagine that this ...
I bet a good way to improve your rationality is to attempt to learn from the writings of smart, highly articulate people, whom you consider morally evil, and who often use emotional language to mock people like yourself. So, for example, feminists could read Roissy and liberals could read Ann Coulter.
An HN post mocks Kurzweil for claiming that the length of the brain's "program" is mostly determined by the part of the genome that affects it. This was discussed here lately. How much more information is in the ontogenic environment, then?
The top rated comment makes extravagant unsupported claims about the brain being a quantum computer. This drives home what I already knew: many highly rated HN comments are of negligible quality.
PZ Myers:
...We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, becaus
I'm thinking of signing up for cryonics. However, one point that is strongly holding me back is that cryonics seems to require signing up for a DNR (do not resuscitate) order. If there's a chance at resuscitation, I'd like all attempts to be made, with cryonics used only when it is clear that the other attempts to keep me alive will fail. I'm not sure that this is easily specifiable with current legal arrangements and the way cryonics is currently set up. I'd appreciate input on this matter.
Problems with high-stakes, low-quality testing
The percentage of students falling below some arbitrary cutoff is a bad statistic to use for management purposes. (Again, this is Statistical QA 101 stuff.) It throws away information. Worse, it creates perverse incentives, encouraging schools to concentrate on the students performing at around the cut-off level and to neglect both those too far below the threshold to have a chance of catching up and those comfortably above it.
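As a minimal simulation sketch (Python, with made-up numbers rather than real test data), here is how much a single percent-below-cutoff figure can hide: two schools can report essentially the same "failure rate" while their score distributions, and the students stranded far below the threshold, look nothing alike.

```python
# Illustrative only: hypothetical score distributions, not real data.
# Both schools are tuned to land at roughly 20% below the cutoff.
import numpy as np

rng = np.random.default_rng(1)
cutoff = 60

# School A: mediocre across the board.
school_a = rng.normal(loc=68, scale=9.5, size=10_000)
# School B: strong overall, but with a neglected group far below the cutoff.
school_b = np.concatenate([
    rng.normal(loc=85, scale=5, size=8_000),
    rng.normal(loc=45, scale=8, size=2_000),
])

for name, scores in [("A", school_a), ("B", school_b)]:
    below = np.mean(scores < cutoff)
    print(f"School {name}: mean={scores.mean():5.1f}, below cutoff={below:.1%}")
```

Managing to the single pass-rate number treats these two situations as interchangeable, which is exactly the perverse incentive described above.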
...Way back when MSNBC used to do 6 hour stints with lawyers doing the color commentary on the court case of the day, I was a regular doing the defense side. Once an hour, we would be expected to sit at a desk and do a five minute stint, me and whoever was doing the "former prosecutor" job that day. Our five minutes would consist of two questions, each taking up about 90 seconds including some mischaracterization of the nature of the legal issue, and concluding with the words "how do you feel?" We had ten
Informal poll to ensure I'm not generalizing from one example:
How frequently do you find yourself able to remember how you feel about something, but unable to remember what produced the feeling in the first place (i.e., you remember you hate Steve but can't remember why)?
It seems like this is a cognitive shortcut, giving us access only to the "answer" that's already been computed (how to act vis-a-vis Steve) instead of wasting energy and working memory re-accessing all the data and re-performing the calculation.
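A loose analogy in code (purely illustrative - the names and "evidence" here are hypothetical, and this is not a cognitive model): a cache that keeps only the computed verdict and throws away the inputs behaves just like remembering the attitude toward Steve while forgetting the reasons.

```python
# Sketch of the "keep the answer, drop the data" shortcut via memoization.
from functools import lru_cache

EVIDENCE = {"steve": ["ate my lunch", "was rude at the meeting"]}  # hypothetical

@lru_cache(maxsize=None)
def attitude(person: str) -> str:
    """The expensive 'calculation' over raw evidence, done once and cached."""
    return "avoid" if EVIDENCE.get(person) else "neutral"

print(attitude("steve"))  # evidence consulted, verdict cached: "avoid"
EVIDENCE.clear()          # the underlying reasons are "forgotten"...
print(attitude("steve"))  # ...but the cached verdict is still "avoid"
```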
This may be a stupid question, but...
There are a couple of psych effects we have evidence for. Specifically, there is evidence for a sort of consistency effect. For example (relevant to my question), there's apparently evidence that if someone ends up tending to do small favors for others or being nice to them, they'll be willing to continue to do so, and more easily willing to do bigger things later.
And there are also willpower/niceness "used-up"-ness effects, whereby apparently (as I understand it), one might do one nice thing and then, feeling...
I had a question. Other than Cryonics and PUA, what other "selfish" purposes might be pursued by an extreme rationalist that would not be done by common people?
On thinking on this for quite a while, one unusual thing I could think of was possibly, the entire expat movement, building businesses all across the world and protecting their wealth from multiple governments. I'm not sure if this might be classified as extreme rationality or just plain old rationality.
Switzerland seems to be a great place to start a cryonics setup, as it is already a hub for people maintaining their wealth. If cryonics were added, then your money and your life could both be safe in Switzerland.
The Last Psychiatrist on a new study of the placebo effect.
I'm having trouble parsing his analysis (it seems disjointed) but the effect is interesting nonetheless.
Has anyone read, and could comment on, Scientific Reasoning: The Bayesian Approach by philosophers Howson and Urbach? To me it appears to be the major work on Bayes from within mainstream philosophy of science, but reviews are mixed and I can't really get a feel for its quality and whether it's worth reading.
A quick probability math question.
Consider a population of blobs, initially comprising N individual blobs. Each individual blob independently has a probability p of reproducing, just once, spawning exactly one new blob. The next generation (an expected N*p individuals) has the same probability for each individual to spawn one new blob, and so on. Eventually the process will stop, with a total blob population of P.
The question is about the probability distribution for P, given N and p. Is this a well-known probability distribution? If so, which? Even if not...
Here's my solution. The descendants of each initial blob spawn independently of the descendants of other initial blobs, so P is a sum of N independent lineage sizes. The number of descendants of one initial blob obviously follows a geometric distribution, and googling "sum of independent geometric distributions" gives the negative binomial distribution as the answer.
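As a quick sanity check, here is a minimal simulation sketch (Python; N = 10 and p = 0.4 are illustrative values I picked, not part of the original question). If the reasoning above is right, the number of descendants P - N should match scipy's nbinom(N, 1 - p), i.e. the total number of spawns before all N lineages terminate:

```python
# Simulate the blob branching process and compare P - N against the
# negative binomial pmf. Parameters below are illustrative, not canonical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, p, trials = 10, 0.4, 200_000

def total_population(n_initial, p_spawn, rng):
    """Run one episode and return the final population P."""
    current = n_initial   # blobs that still get their one chance to spawn
    total = n_initial
    while current > 0:
        spawned = rng.binomial(current, p_spawn)  # each spawns once w.p. p
        total += spawned
        current = spawned                         # only new blobs spawn next
    return total

samples = np.array([total_population(N, p, rng) for _ in range(trials)])
descendants = samples - N

# Each lineage contributes Geometric-many descendants (p^k * (1 - p)),
# so their sum should be nbinom(N, 1 - p) in scipy's convention.
for k in range(12):
    emp = np.mean(descendants == k)
    pmf = stats.nbinom.pmf(k, N, 1 - p)
    print(f"k={k:2d}  simulated={emp:.4f}  nbinom={pmf:.4f}")
```

With these parameters the simulated frequencies should line up with the pmf to within sampling noise.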
ETA: Ag, just before posting this I realized Hal Finney had already basically raised this same point on the original thread! Still, I think this expands on it a little...
You know, if Wei Dai's alien black box halting decider scenario were to occur, I'm not sure there is any level of black-box evidence that could convince me they were telling the truth. (Note: To make later things make sense, I'm going to assume the claim is that if the program halts, it actually tells you the output, not just that it halts.)
It's not so much that I'm committed to the Turi...
In the last open thread I linked a Wired article about an argument-diagramming tool called ACH being open-sourced.
It's now available: http://competinghypotheses.org/
(PHP, apparently. Blech!)
I just read and liked "Pascal's mugging." It was written a few years ago, and the wiki is pretty spare. What's the state of the art on this problem?
Apparently AGI, transhumanism, and the Singularity are a massive statist/corporate conspiracy, and there exists a vast "AGI Manhattan Project". Neat.
I just came across this article called "Thank God for the New Atheists," written by Michael Dowd, and I can't tell if his views are just twisted or if he is very subtly trying to convert religious folks into epistemic rationalists. Sample quotes include:
Religion Is About Right Relationship with Reality, Not the Supernatural
...
...Because the New Atheists put their faith, their confidence, in an evidentially formed and continuously tested view of the world, these critics of religion are well positioned to see what’s real and what’s important t
File under "Less Wrong will rot your brain":
At my day job, I had to come up with and code an algorithm which assigned numbers to a list of items according to a list of sometimes-conflicting rules. For example, I'd have a list of 24 things that would have to be given the numbers 1-3 (to split them up into groups) according to some crazy rules.
The first algorithm I came up with was:
Possible new barriers to Moore's Law, where small chips won't have enough power to use the maximum transistor density available to them. The article also discusses how other apparent barriers (such as leaky gates) have been overcome in the past, including this amusing line:
“The number of people predicting the end of Moore’s Law doubles every two years,” quips the Scandinavian Tryggve Fossum.
What does Less Wrong know about the Myers-Briggs personality type indicator? My sense is that it's a useful model for some things, but I'm most interested in how useful it is for relationships. This site suggests that each personality type pair has a specific type of relationship, while this site only comments on what the ideal pair is for any given type. But the two sites disagree about what the ideal pairings are.
Wanted ad: Hiring a personal manager
I will pay anyone I hire $50-$100 a month, or equivalent services if you prefer.
I've been trying to overcome my natural laziness and get work done. For-fun projects, profitable projects, SIAI-type research, academic homework -- I don't do much without a deadline, even projects that I want to do because they sound fun.
I want to hire a personal manager, basically to get on my case and tell/convince me to get stuff done. The ideal candidate would:
There's an article on rationality in Newsweek, with an emphasis on evo-psych explanations for irrationality. Especially: we evolved our reasoning skills not just to get at the truth, but also to win debates, and overconfidence is good for the latter.
There's nothing there that's new to readers of this blog, and the analysis is superficial (plus the writer makes an annoying but illustrative error while explaining why human intuition is poor at logic puzzles). But Newsweek is a large-circulation (second to Time) newsweekly in the U.S., so this is a pretty b...
Value-sorting hypothetical:
If you had access to a time machine and could transfer one piece of knowledge to an influential ancient (e.g., Plato), what would you tell him?
Something practical, like pasteurization, would almost certainly improve millions of lives, but it wouldn't necessarily produce people with values like ours. I can imagine a bishop claiming heat drives demons from milk.
Meta-knowledge, like a working understanding of the scientific method, might allow for thousands of other pasteurizations to be developed, or maybe it would remain unused thr...
After seeing the recent thread about proving Occam's razor (for which a better name would be Occam's prior), I thought I should add my own proof sketch:
Consider an alternative to Occam's prior such as "Favour complicated priors*". Now this prior isn't itself very complicated, it's about as simple as Occam's prior, and this makes it less likely, since it doesn't even support itself.
What I'm suggesting is that priors should be consistent under reflection. The prior "The 527th most complicated hypothesis is always true (probability=1)" mus...
I wish my long-term memory were better.
Am I losing out on opportunities to hold onto certain facts because I often rely on convenient electronic lookup? For instance, when programming I'll search for documentation on the web instead of first taking my best recollection as a guess (which, if wrong, will almost certainly be caught by the type checker). What's worse, I find myself relying on multiple monitors/windows so I don't even need to temporarily remember anything :)
I'd like to hear any evidence/anecdotes in favor of:
habits that might improve my general
Gelernter on "machine rights." I didn't know his anti-AI-consciousness views were tied in with Orthodox Judaism.
If I commit quantum suicide 10 times and live, does my estimate of MWI being true change? It seems like it should, but on the other hand it doesn't for an external observer with exactly the same data...
There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or is it just a big complicated mess?
Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities.
A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price.
-- John Baez on saving the planet
Have advocates of the simulation argument actually argued for the possibility of ancestor simulations? It is a very counterintuitive idea, yet it seems to be invoked as though it is obviously possible. Aside from whatever probability we want to assign to the possibility that the future human race will discover strange previously-unknown laws of physics that make it more feasible, doesn't the idea of an ancestor simulation (a simulation of "the entire mental history of humankind") depend on having access to a huge amount of information that has pr...
Light entertainment: this hyperboleandahalf comic reminded me of some of the FAI discussions that go on in these parts.
http://hyperboleandahalf.blogspot.com/2010/08/this-comic-was-inspired-by-experience-i.html
Has anyone tried, or does anyone currently use, the Zeo Personal Sleep Coach (press coverage)?
It's a sleep tracker - measuring light, REM, and deep sleep - which sounds useful for improving sleep, which, as we all know, is extremely important to mental performance, learning, and health. I'm thinking of getting one, but the $200 price point is a little daunting.
I just found a new blog that I'm going to follow: http://neuroanthropology.net/
This post is particularly interesting: http://neuroanthropology.net/2009/02/01/throwing-like-a-girls-brain/
Just watched Tyler Cowen at TEDx Mid-Atlantic 2009-11-05 talking about how our love of stories misleads us. We talk about good-story bias on the Wiki.
Is there a way good-story bias could be experimentally verified?
'But Dietrich is more concerned that companies will fail to analyze the petabytes of data they do collect. When she met with the pharmaceutical company about its portfolio management strategy, for instance, the executives explained how they allocated spending according to their estimates of how likely each project was to succeed. “I asked them if they ever checked to see how well the estimates matched their results,” she says. “They never had.”'
~Oops I did it again.~
~I trolled on Slashdot~
~Got modded to 5~
Well, hey, at least this time they universally criticized me. The topic was about a species being discovered that was present earlier than they thought, and I said that this refutes evolution.
If this press release isn't overstating its case, AIXItl or some other unFriendly Bayesian superintelligence just got a lot closer.
There was a question recently about whether neurons were like computers or something like that. I cannot find the comment although I replied at the time. Today I came across an article that may interest that questioner. http://www.sciencedaily.com/releases/2010/08/100812151632.htm
Reposted here instead of in Part 1; I didn't realise Part 2 had been started.
I don't understand why you should pay the $100 in a counterfactual mugging. Before you are visited by Omega, you would give the same probabilities to Omega and Nomega existing, so you don't benefit from precommitting to pay the $100. However, when faced with Omega, your probability estimate for its existence becomes 1 (and Nomega's becomes something lower than 1).
Now what you do seems to rely on the probability that you give to Omega visiting you again. If this was 0, surely you wouldn'...
Are there any Less Wrong postings besides The Trouble With "Good" and (arguably) Circular Altruism which argue in favor of utilitarianism?
At last year's Singularity Summit, there was an OB/LW meetup the evening of the first day, held a few blocks away from the convention center. Is anything similar planned for this weekend?
(I'm guessing no, since I haven't heard anything about it here, but we'd still have a couple days to plan it if anyone's interested...)
I've experimented a little more, and still don't know how to make links appear properly in top-level posts. Instead of doing a bug report, I request that someone who does get it to work explain what they do.
Also, Book Recommendations isn't showing up as NEW, even though it's there in Recent Posts. I thought there might be a delay involved, but the post for this thread showed up in NEW almost immediately.
Probability & AI
...The probabilistic approach has been responsible for most of the recent progress in artificial intelligence, such as voice recognition systems, or the system that recommends movies to Netflix subscribers. But Noah Goodman, an MIT research scientist whose department is Brain and Cognitive Sciences but whose lab is Computer Science and Artificial Intelligence, thinks that AI gave up too much when it gave up rules. By combining the old rule-based systems with insights from the new probabilistic systems, Goodman
Further to this. Let's plot political discourse along two axes: substantive (x axis: -disagree to +agree) and rhetorical (y axis: -"cool"/reasoned to +"hot"/emotional). Oligopsony states that it is valuable to engage with those on the left-hand side of the graph (people who disagree with you), without any particular sense that special dangers are posed by the upper left-hand quadrant. (Oligopsony says reading so-and-so is "like reading political philosophy from Mars, and that's something you should experience regularly" -- regardless of the particular emotional relationship you are going to have with that Martian political philosophy as a function of the way in which it's presented.) My view (following on, I think, PitM-K -- and in sharp disagreement with James Miller's original post in this thread) is that the upper half of the graph, and particularly the upper left-hand quadrant, is danger territory, because of the likelihood you are going to retreat into tribalism as your views are mocked.