Open Thread, August 2010-- part 2
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
I bet a good way to improve your rationality is to attempt to learn from the writings of smart, highly articulate people, whom you consider morally evil, and who often use emotional language to mock people like yourself. So, for example, feminists could read Roissy and liberals could read Ann Coulter.
http://roissy.wordpress.com/ http://www.anncoulter.com/
Who should conservatives read?
Paul Krugman?
My experience is that Paul Krugman is one of those people with whom you disagree at your peril.
If by "peril" you mean being censored from the comments section of his blog....
If you disagreed with him about long-term interest rates on T-bills, and you bet on that belief, you lost a lot of money.
And if you agreed with him about the housing market and bought any real estate a few years back, then you're probably underwater right now.
Even if you sold in 2006?
I don't know if that link is gated or not.
The fact that he (partially) acknowledged the existence of the problem, once it was well underway, doesn't change the fact that he actively advocated the policies that caused the problem.
Should the Federal Reserve have refrained from lowering interest rates in 2001?
Glenn Greenwald!
Paul Krugman's a good example, because he goes on the offensive, but he's not quite offensive enough. For good, juicy ad hominems, read SLOG (the blog of Seattle's The Stranger) or Feministe.
How to offend a conservative is an interesting question. I think it should be easy to offend (or upset or disgust) a traditional social conservative, with simple sexual shock value. It's harder for me to think of ways to offend an economic conservative. The closest thing I can think of is stereotyping libertarians as weirdo losers.
Advocate Marxism?
You'll have to expand on this before I could agree. My inclination is to think quite the opposite. That is, when I read people who more or less articulately use highly emotion-button-pushing language to mock people like me, it puts my defenses up and makes me try to justify my beliefs, rationality be damned. Was this not pretty much the thrust of Politics is the Mind-Killer? If I were, to adopt a wild hypothetical, a conservative, I would probably say nearly anything to defend myself -- whether publicly or in my own mind -- against the kind of mockery I'd get on a daily basis from Paul Krugman's blog (Krugman chosen as example per mattnewport). Rationality-wise, that is not the position I want to be trying to put myself in. Rather, I want to seek out reasoned, relatively "cool" (as opposed to emotionally "hot") expressions of opposing viewpoints and try to approach them open-mindedly, trying to modify my positions if warranted.
I mean, am I missing something?
"If I were, to adopt a wild hypothetical, a conservative, I would probably say nearly anything to defend myself -- whether publicly or in my own mind -- against the kind of mockery I'd get on a daily basis from Paul Krugman's blog"
Yes, most people would do this, so the rationality challenge would be to fight against it. Think of it as special-forces-intensity rationality training.
Not everything that is difficult is thereby good training. It's easier to withstand getting punched in the gut if you're in good physical shape, but I wouldn't suggest trying to get in shape by having someone punch you repeatedly in the gut.
(Indeed, at some point of martial arts training it's useful to learn how to take a punch, but this training has to be done carefully and sparingly. You don't become stronger by rupturing your spleen.)
It's probably good to have a mix. I get something distinct from reading people like Roissy or Sailer, whose basic values are totally divorced from my own. I get something else from Eliezer or Will Wilkinson, who derive different policy preferences from values that are similar to mine.
There's something liberating about evil analysis, and I think it's that its audaciousness allows you to put down mental blinders that would be on guard against more plausible threats to your ideological integrity. And a nice thing about values changing over time is that the classics are full of this stuff. Reading, say, Schmidt is like reading political philosophy from Mars, and that's something you should experience regularly. Any similar recommendations?
Upvoted. From my reply you'll see that I agree it's probably good to seek out, as you say, those "whose basic values are totally divorced from [one's] own." But can you say more about James Miller's original contention that you should specifically be seeking out that which is designed to piss you off? That's where it seems to me that his idea goes just totally wrong. How is this going to do anything except encourage you to retreat into tribalism?
Well, there is the aesthetic appreciation of polemic for its own sake, but that's not going to make you more rational.
I think the most obvious answer, though, is that it can inure you a bit to connotative sneers. Aversion to this kind of insult is likely one of the major things keeping you from absorbing novel information!
One way to do this very quickly - you shouldn't, of course, select your politics for such trivial advantages, but if you do, take advantage of it - is to become evil yourself, relative to the majority's values. There are certain groups an attack upon which constitutes an applause line in the mainstream. If you identify as a communist or fascist or Islamist or other Designated Antagonist Group, you can either take the (obviously epistemically disastrous) route of only reading your comrades, or you can keep relying on mainstream institutional sources of information that insult you, and thereby thicken your skin. (Empirical prediction: hard {left|right}ists are more likely to read mainstream {conservatives|liberals} than are mainstream {liberals|conservatives}.)
(An alternate strategy this suggests, if your beliefs are, alas, pedestrian, is to "identify" with some completely ridiculous normative outlook, like negative utilitarianism or something. Let everyone's viewpoint offend you until "this viewpoint offends!" no longer functions as a curiosity stopper.)
Well, I understand your reasoning: you suggest that it's likely (or at least possible) that one's reaction in the face of rhetorically "hot" disagreement will be a built-up tolerance (immunity) for mockery, making one more able to extract substance and ignore affect. My belief is that that particular strength of character (which I admire when I see it, which is rarely) is infrequent relative to, as I keep calling it, a retreat into tribalism in the face of mockery of one's dearly-held beliefs. Hence my feeling that the upper-left quadrant of the graph I describe is not good breeding grounds for rationality. That isn't to suggest that we shouldn't do our best to self-modify such that that would no longer be the case, but it is hard to do and our efforts might be best spent elsewhere.
Also worth considering is the hypothesis that the two axes of my graph aren't fully independent, but instead that "hot" expressions are correlated with substantively less rich and worthwhile viewpoints, because the richest and most worthwhile viewpoints wouldn't have much need to rely on affect. If this is true (and I think it is at least somewhat true), it would be another reason for avoiding rhetorically "hot" political viewpoints in general.
As my political beliefs have become more evil I've become much better at ignoring insults to my politics. I remain pretty thin-skinned individually, though, so it seems that whatever's moving me in this way is politics-specific.
The healthiest reading space is probably all over the axis. Passion is not the opposite of reason, and there are pleasures to take in reading beyond the conveyance of mere information.
If you know that doing X will " encourage you to retreat into tribalism" then doing X gives you a great opportunity to fight against your irrational instincts.
Further to this. Let's plot political discourse along two axes: substantive (x axis: -disagree to +agree) and rhetorical (y axis: -"cool"/reasoned to +"hot"/emotional). Oligopsony states that it is valuable to engage with those on the left-hand side of the graph (people who disagree with you), without any particular sense that special dangers are posed by the upper left-hand quadrant. (Oligopsony says reading so-and-so is "like reading political philosophy from Mars, and that's something you should experience regularly" -- regardless of the particular emotional relationship you are going to have with that Martian political philosophy as a function of the way in which it's presented.) My view (following on, I think, PitM-K -- and in sharp disagreement with James Miller's original post in this thread) is that the upper half of the graph, and particularly the upper left-hand quadrant, is danger territory, because of the likelihood you are going to retreat into tribalism as your views are mocked.
Oligopsony:
Please pardon my evident lack of erudition, but which Schmidt do you have in mind?
Carl, and upon looking it up it's Schmitt. So the lack of erudition is all mine.
Yes, I've heard of Schmitt. Paul Gottfried, who is one of my favorite contemporary political theorists, wrote a book about him. I plan to read at least some of Schmitt's original work before reading what Gottfried has to say about him, but I haven't gotten to it yet. Do you have any particular recommendations?
If you want to read some political philosophy that's really out there by modern standards, try Joseph de Maistre. His staunch Catholicism will probably be off-putting to many people here, but a truly unbiased reader should understand that modern political writers are smuggling just as many unwarranted metaphysical assumptions into their work, except in much more devious ways. Also, although some of his arguments have been objectively falsified in the meantime, others have struck me as spot-on from the modern perspective of Darwinian insight into human nature and humanity's practical political experience of the last two centuries. (His brother Xavier is a minor classic author of French literature, whom I warmly recommend for some fun reading.)
Ann Coulter's blog seems fixed-width and very narrow :-(
I've done exactly this. Read Roissy, read far-right sites (though not Coulter specifically.) Basically sought out the experience of having my feelings hurt, in the interest of curiosity.
I learned a few things from the experience. First, they have a few good points (my politics have changed over time). Second, they are not right about everything just because they are mean and nasty and make me feel bad, and in fact sometimes right-wingers and anti-feminists display flaws in reasoning. And, third, I learned how to deal better with emotional antagonism itself: I don't bother seeking out excuses to be offended any more, but I do protect myself by avoiding people who directly insult me.
I think this norm would do poorly in practice, because people would seek out antagonists they unconsciously knew would be flawed, rather than those who actually scare them.
A much better idea, I think, is the following:
I'd suggest, however, that you not give the other person someone who will constantly mock their position, because usually this will only further polarize them away from you. Exposing oneself to good contrary arguments, not ridicule, is the way for human beings to update.
I've been trying to do roughly that, though focusing more on the "smart and highly articulate" aspect and dropping "emotional mockery". When I read someone taking cheap shots at a position I might hold, I mostly find the writer childish and annoying, I don't see how reading more of that would improve my rationality. It doesn't really hurt my feelings, unlike some commenters here, so I guess different people need to be prodded in different ways.
For smart and articulate writers with a rationalist vibe, I would recommend Mencius Moldbug (posts are articulate but unfortunately quite long; advocates monarchy, colonialism and slavery) and Noam Chomsky. Any recommendations of smart, articulate and "extreme" writers whose views are far from those two?
I think Moldbug is far away from any living thinker you could name. And he'd probably tell you so himself.
(FWIW, I think Moldbug is usually wrong, through a combination of confirmation bias and reversed stupidity, although I'm still open on Austrian economics in general.)
Probably, as long as you restrict yourself to sane, articulate thinkers in the West. There are probably even more outlandish ideas in Japan, India, or the Islamic world.
Come to think of it, it would probably be more instructive to read "non-westernized" intellectuals from India, Korea, Japan, China or the Islamic world, talking about the West. I think Moldbug recommended a medieval Japanese writer talking about his experience in America, but I can't find it right now.
Yukichi Fukuzawa. Only limited parts of his works are online (eg. in Google Books, very limited previews).
I'm kinda torn about Moldbug. His political arguments look shaky, but whenever he hits a topic I happen to know really well, he's completely right. (1, 2) Then again, he has credentials in CS but not history/economics/poli-sci, so the halo effect may be unjustified. Many smart people say dumb things when they go outside their field.
Funny, I like CS too but his writings put me off in part; I particularly disliked his Nock language. It looks like a seriously crappy Lisp to me (and I like Haskell better).
Agreed, Nock was a neat puzzle, but not much more. I have no idea why he tried to oversell it so.
Nock was followed by Urbit, "functional programming from scratch", but that project doesn't seem to have gone anywhere, and it's not clear to me where there would be for it to go. His vision of "Martian code", "tiny and diamond-perfect" is still a castle in the air, the job of putting a foundation under it still undone.
A criticism that I think applies to his politics as well. He does a fine destructive critique of the current state of things and how we got here, but is weak on what he would replace it by.
That just shows he got two easy questions right. When he spells out his general philosophy, which I had criticized before, you see just how anti-rational his epistemology is. You're just seeing a broken clock at noon.
By the way, anyone know if "Mencius Moldbug" is his real name? It sounds so fake.
I am about 80% confident that his real name is [redacted]
Publicly revealing people trying to stay anonymous (though admittedly in his case, not very hard) is not very nice :P
It's a pseudonym; he's said that himself, but I don't remember where.
He states that it's a pseudonym. (It's actually quite a clever one - unique, and conveys a lot about him.)
MM's name combines the pseudonyms he previously used as a commenter in two separate blogging realms (HBD and finance).
I have a very hard time evaluating Moldbug's claims, due to my lack of background in the relevant history, but holy shit, do I ever enjoy reading his posts.
The crowd here may be very interested in watching him debate Robin Hanson about futarchy before an audience at the 2010 Foresight conference. Moldbug seems to be a bit quicker with the pen than in person.
Moldbug's initial post that spurred the argument is here; it's very moldbuggy, so the summary, as far as my understanding goes, is like this: Futarchy is exposed to corrupt manipulators, decision markets can't correctly express comparisons between multiple competing policies, many potential participants are incapable of making rational actions on the market, and it's impossible to test whether it's doing a good job.
http://unqualified-reservations.blogspot.com/2009/05/futarchy-considered-retarded.html
Video of the debate is here: http://vimeo.com/9262193
Moldbug's followup: http://unqualified-reservations.blogspot.com/2010/01/hanson-moldbug-debate.html
Hanson's followup: http://www.overcomingbias.com/2010/01/my-moldbug-debate.html
I was less than impressed by Hanson's response (in a comment) to http://unqualified-reservations.blogspot.com/2010/02/pipe-shorting-and-professor-hansons.html
Yeah, that response didn't have much content, but I think that's pretty understandable considering that by that point in their debate, Moldbug had already revealed himself to be motivated by something other than rational objections to Hanson's ideas, and basically immune to evidence. In their video debate it became very clear that Moldbug's strategy was simply to hold Hanson's ideas to an impossibly high standard of evidence, hold his own ideas to an incredibly low standard of evidence, and then declare victory.
So I can understand why Hanson might not have thought it was worth investing a lot more time in responding point by point.
I enjoy reading his posts too (when I have the time - not much, lately), but I wasn't very impressed by his debate with Robin Hanson - his arguments seemed to be mostly rehashing typical arguments against prediction markets that I'd heard before.
I have the same reaction to someone taking cheap shots, period. It doesn't matter whether they're arguing something I agree with, disagree with, or don't care about. It just lowers my opinion of the writer.
I'll have to disagree, at least to the extent that this is taken as a positive attribute. I find his posts to be rambling and cutesy, which may correspond to articulate. But most people here have the kind of mind that prefers "get to the point" writing, which he fails at.
Slavery? I'm certainly not defending Moldbug, but if he advocated slavery, I must have missed that post. Do you have a link?
See http://www.google.com/search?num=100&q=slavery%20site%3Aunqualified-reservations.blogspot.com%2F
And there is, of course, http://unqualified-reservations.blogspot.com/2009_03_01_archive.html which cannot be excerpted and done proper justice.
Hrmmmm... haven't read the second link yet, but that first excerpt is.... well.... yeah. The selling yourself into slavery part is basically unobjectionable (to a libertarian), but selling your children into slavery.......
I think Moldbug's positions seem to be derived not so much from reversed stupidity as reversed PC.
I would defend an (eviscerated) monarchy such as Canadians like me and other Commonwealthers have as being a social good, insofar as it's a rich & elegant tradition that's not very pernicious.
Actual "off with his head" monarchy... nah.
And if Prince Charles decides to flap his unelected gums too much when he accedes, you may see me change my tune. But at the moment I'm happy to be a subject of HM Queen Elizabeth.
Why do you think MM has a rationalist vibe? He doesn't talk about probability or heuristics/biases etc.
I would accept that bet. In my experience exposure to such writings mostly serves to produce contempt. Contempt is one emotion that seems to have a purely deleterious effect on thinking. Anger, fear, sadness, anxiety and depression all provide at least some positive effects on thinking in the right circumstances but contempt... nothing.
This is a special case of a well known process in social science circles since at least the 1950's: role playing. It became popular after the work of Fritz Perls (a psychotherapist who started out in drama), who would have the patient do things such as play their mother or father (or tyrannical trauma family character of choice) to try and broaden their understanding of their life story and memory. It can be a very powerful technique. I have been in group psychotherapy sessions where people scream, bawl, and many other visceral responses get displayed.
In 1983 Robert Anton Wilson published a book, Prometheus Rising, which was ostensibly a self-help book for making your thinking more rational, specifically for destroying dogmas. This book is little more than one recipe after another for exercises of this type; for example, be a neo-Nazi for a week.
The mechanism is to learn by exposing yourself to that great universe of unknown unknowns. My personal experience is sometimes they can be helpful, but it is really hard to know beforehand if they will be worth the time. I have benefited from some of these things; I have wasted time doing some.
I've experimented a little more, and still don't know how to make links appear properly in top-level posts. Instead of doing a bug report, I request that someone who does get it to work explain what they do.
Also, Book Recommendations isn't showing up as NEW, even though it's there in Recent Posts. I thought there might be a delay involved, but the post for this thread showed up in NEW almost immediately.
It doesn't use the same Markdown formatting as the comments, if that's what you were trying to do. Instead, you select some text and click the link button in the WYSIWIG toolbar (two to the left of the anchor button).
That button has gone dead for me. It used to work (produced a pop-up with two windows), but now there's no reaction when I click on it.
I don't know what would cause that, but one workaround is to write the links in a regular comment, then copy-and-paste them into the WYSIWYG editor. The links should copy over correctly.
Thanks.
I think I've got it now. Part of the problem was not realizing that it makes links blue and underlined in the edit window, but they aren't live-- they go live when the post is submitted, even to draft.
This is not what I'd call an intuitive interface.
Gelernter on "machine rights," I didn't know his anti-AI consciousness views were tied in with Orthodox Judaism.
He's not exactly Orthodox. His views are religiously somewhat idiosyncratic and are connected to his politics in some strange ways. But he's made clear before that his negative views of AI come in large part from his particular brand of theism. See for example the section in his book "The Muse in the Machine" that consists of a fairly rambling theological argument that AI will never exist, based mainly on quotes from the Bible and Talmud. Jeff Shallit wrote an interesting if slightly obnoxious commentary about how Gelernter's religion has impacted his thinking.
One curious thing is how rarely Gelernter touches on golems when discussing his religion and AI. I suspect this is because although the classical discussion of golems touches on many of the relevant issues (including discussion of whether golems have souls and whether people have ethical obligations to them), it probably comes across to him as too much like superstitious folklore that he doesn't like to think of as part of Judaism in a deep sense.
ETA: However, some of Gelernter's points have validity outside of any religious context. In particular, the point that acting badly to non-conscious entities will encourage people to act badly to conscious ones is valid outside any Talmudic framework. Disclaimer: I'm friends with one of Gelernter's sons and his niece so I may be a biased source.
Thanks for the info.
Bridging the Chasm between Two Cultures: A former New Age author writes about slowly coming to realize New Age is mostly bunk and that the skeptic community actually might have a good idea about keeping people from messing themselves up. Also about how hard it is to open a genuine dialogue with the New Age culture, which has set up pretty formidable defenses to perpetuate itself.
Hah, was just coming here to post this. This article sort of meanders, but it's definitely worth skimming at least for the following two paragraphs:
Excellent article, thanks for the link! Let's keep in mind that she also wrote about how inflammatory and combative language is counterproductive, and the need to communicate with people in ways they have some chance of understanding.
What she said wasn't that simple-- she also talks about trying to get her ideas across while being completely inoffensive, and having them not noticed at all. When we're talking about a call to change deeply held premises, getting some chance of being understood is quite a hard problem.
Has anyone read, and could comment on, Scientific Reasoning: The Bayesian Approach by philosophers Howson and Urbach? To me it appears to be the major work on Bayes from within mainstream philosophy of science, but reviews are mixed and I can't really get a feel for its quality and whether it's worth reading.
A couple of viewquakes at my end.
I was really pleased when the Soviet Union went down-- I thought people there would self-organize and things would get a lot better.
This didn't happen.
I'm still more libertarian than anything else, but I've come to believe that libertarianism doesn't include a sense of process. It's a theory of static conditions, and doesn't have enough about how people actually get to doing things.
The economic crisis of 2007 was another viewquake for me. I literally went around for a couple of months muttering about how I had no idea it (the economy) was so fragile. A real estate bust was predictable, but I had no idea a real estate bust could take so much with it. Of course, neither did a bunch of other people who were much better paid and educated to understand such things, but I don't find that entirely consoling.
This gets back to libertarianism and process, I think. Protections against fraud don't just happen. They need to be maintained, whether by government or otherwise.
NancyLebovitz:
That depends on what exactly you mean by "the economy" being fragile. Most of it is actually extremely resilient to all sorts of disasters and destructive policies; if it weren't so, modern civilization would have collapsed long ago. However, one critically unstable part is the present financial system, which is indeed an awful house of cards inherently prone to catastrophic collapses. Shocks such as the bursting of the housing bubble get their destructive potential exactly because their effect is amplified by the inherent instabilities of the financial system.
Moldbug's article "Maturity Transformation Considered Harmful" is probably the best explanation of the root causes of this problem that I've seen.
Sometimes I think the only kind of libertarianism that makes sense is what I'd call "tragic libertarianism." There is no magic market fairy. Awful things are going to happen to people. Poverty, crime, illness, and war. The libertarian part is that our ability to alleviate suffering through the government is limited. The tragic part is that this is not good news.
There's another tragic bit-- some of what government does makes things worse. There's no magic government fairy that guarantees good (or even non-horrible) results just because a government is doing something.
Yes, exactly.
I wish my long-term memory were better.
Am I losing out on opportunities to hold onto certain facts because I often rely on convenient electronic lookup? For instance, when programming I'll search for documentation on the web instead of first taking my best recollection as a guess (which, if wrong, will almost certainly be caught by the type checker). What's worse, I find myself relying on multi-monitor/window so I don't even need to temporarily remember anything :)
I'd like to hear any evidence/anecdotes in favor of:
habits that might improve my general ability to remember and/or recall (I'd guess that having enough sleep (and low enough stress) matters, for example.)
tricks for ensuring that particular bits of info are preferentially stored (As I mentioned, I imagine using a memory
consolation - perhaps being more forgetful than many other smart people is a trade-off with different advantages (I doubt it, although I've heard that we do some useful selective forgetting when we sleep, and I'm glad I don't remember every malformed thought I have while asleep)
You have two of the big ones. Add in exercise and diet. And add exercise again just in case you skipped it. With all the basics handled you can consider things like cognitive enhancers (ie. Aniracetam and choline supplementation).
Spaced Repetition.
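For the curious, the scheduling rule behind most spaced repetition software is tiny. Here's a minimal sketch of the classic SM-2 update (the constants follow the published SM-2 description; real tools such as Anki tune them, and the function name is just illustrative):

```python
def sm2_step(q, interval, ease, reps):
    """One SM-2 review. q is self-rated recall quality, 0-5.

    Returns (next_interval_days, new_ease, new_rep_count)."""
    if q < 3:                              # failed recall: reset the schedule
        return 1, ease, 0
    # Ease factor drifts up on perfect recall, down on shaky recall,
    # floored at 1.3 so intervals always keep growing.
    ease = max(1.3, ease + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
    if reps == 0:
        interval = 1                       # first success: see it tomorrow
    elif reps == 1:
        interval = 6                       # second success: six days out
    else:
        interval = round(interval * ease)  # then grow geometrically
    return interval, ease, reps + 1

# Three perfect recalls push the next review out to about 2.5 weeks.
state = (0, 2.5, 0)                        # (interval, ease, successful reps)
for _ in range(3):
    state = sm2_step(5, *state)
print(state[0])  # 17 (days until the next review)
```

The intervals 1, 6, then geometric growth are why a handful of reviews can carry a fact for months.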
People spend an awful lot of time trying to forget things. A particularly strong memory exacerbates the effects of trauma. (If something particularly bad happens to you some day then smoke some weed to prevent memory consolidation.)
Thanks. I guess I'm just lazy and hope to remember things better without any explicit drilling.
I do exercise (but I'm nearly completely sedentary every other day; it's probably better to even out the activity).
I remember reading in the past week that the way exercise improves brain function is not merely by improving oxygen supply to the brain, but in some other interesting, measurable ways (unfortunately, that's as much as I can remember, but it seems like this from Wikipedia at least covers the category:
Exactly. Exercise is great stuff, particularly with the boost to neurogenesis!
Incidentally, the best forms of exercise (for this purpose) are activities that not only provide an intense cardiovascular workout but also rely on extensive motor coordination.
But if the increased neurogenesis is only for implementing motor skill learning, then it's not going to help me get better at Starcraft 2 (I mean, my research) - so what's the point? :)
I play piano for 10-60 min daily and imagine there's some benefit as well (surprisingly, it's also a mild cardiovascular workout once you can play hard enough repertoire).
Also, I read a little about choline; it seems likely that unless I'm dieting heavily, I'll get enough already. That is, there's no hard evidence of any benefit to taking more than necessary to maintain liver health (although it seems like up to 7x that dose also has no notable side effects).
Aniracetam looks interesting (but moderately expensive). Do you have any personal experience with it?
I don't think I expressed myself clearly. The effect I refer to is influence of a coordination based component to exercise on neurogenesis and not particularly on the benefits of such to motor skills. Crudely speaking, of the neurons formed from the BDNF released during exercise a greater fraction of them will stably integrate into the brain if extensive coordination is involved than if the exercise is 'boring'. I suspect, however, that a cardio workout combined with (ie. on the same day as) your piano practice will be at least as effective. That stuff does wonders!
I included choline only because I mentioned Aniracetam. While the effects are hardly miraculous, Aniracetam (and the more basic Piracetam) do seem to have a positive effect on cognition and learning. Because the *racetams work by (among other things) boosting Acetylcholine people usually find that their choline reserves are strained. The effects of such depletion tends to be reported as 'head fog' or at least as a neutralisation of the positive benefits of the cognitive enhancement.
Supplementing choline in proportion to racetam use is more or less standard practice. Using choline alone seems, as you noted, largely pointless.
I have used it and my experiences were positive. I found it particularly useful in social situations, with improved verbal fluency. Unfortunately I cannot give much insight into how well it works for improving memory retention, basically because my memory has always been far more powerful than I've ever required. It just isn't a bottleneck in my performance, so my self-report is largely useless.
The Last Psychiatrist on a new study of the placebo effect.
I'm having trouble parsing his analysis (it seems disjointed) but the effect is interesting nonetheless.
After seeing the recent thread about proving Occam's razor (for which a better name would be Occam's prior), I thought I should add my own proof sketch:
Consider an alternative to Occam's prior, such as "Favour complicated priors*". This prior isn't itself very complicated (it's about as simple as Occam's prior), and that makes it less likely, since it doesn't even support itself.
What I'm suggesting is that priors should be consistent under reflection. The prior "The 527th most complicated hypothesis is always true (probability=1)" must be false because it isn't the 527th most complicated prior.
So to find the correct prior you need to find a reflexive equilibrium where the probability given to each prior is equal to the average of the probabilities given to it by all the priors, weighted by how probable they are.
*This isn't a proper prior, but it's good enough for illustrative purposes.
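The reflexive-equilibrium idea can be sketched as a fixed-point computation. Here is a toy version with three hypothetical "priors"; the matrix entries are invented purely for illustration, with column j holding the probabilities that prior j assigns to each prior:

```python
# Toy reflexive equilibrium: find p with p = M p by power iteration.
# M[i][j] = probability that prior j assigns to prior i (columns sum to 1).
# All numbers are made up for illustration.
M = [[0.6, 0.3, 0.2],
     [0.3, 0.5, 0.3],
     [0.1, 0.2, 0.5]]

p = [1/3, 1/3, 1/3]  # start from a uniform guess
for _ in range(200):
    p = [sum(M[i][j] * p[j] for j in range(3)) for i in range(3)]

# At equilibrium, applying M once more leaves p essentially unchanged.
q = [sum(M[i][j] * p[j] for j in range(3)) for i in range(3)]
assert abs(sum(p) - 1.0) < 1e-9
assert all(abs(q[i] - p[i]) < 1e-6 for i in range(3))
```

(For a strictly positive column-stochastic matrix like this toy one, Perron-Frobenius guarantees exactly one such fixed point; whether the space of all priors behaves so nicely is another matter.)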
Amusing exercise: find a complexity measure and an N such that "the Nth most complex hypothesis is always true" is the Nth most complex prior :)
:)
Equivalently, can you write a function that takes a string and returns true iff the string is the same as the source code of the function?
Anyone got some quining skills?
in Python:
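One way to do it, sketched with the classic quine trick of a body template formatted with its own repr:

```python
# Sketch of a self-checking "quine predicate": check(s) returns True
# iff s is exactly the source code of check itself. The body string
# contains a slot {0!r} that gets filled with a repr of the body
# string, reproducing the full source.
body = 'def check(s):\n    body = {0!r}\n    return s == body.format(body)\n'
src = body.format(body)

namespace = {}
exec(src, namespace)  # define check from its own generated source
check = namespace['check']

assert check(src)            # accepts its own source
assert not check(src + ' ')  # rejects everything else
```

This is the same diagonalization move that makes the ten-word sentence above work.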
...it's probably possible to make a simpler one.
This makes you vulnerable to quining, like this:
Hypotheses that consist of ten words must have higher priors.
I'm hoping that when the hypotheses are written in a well-defined computer language, this problem doesn't crop up. (You would think that after reading GEB I would know better!)
Of course there may be multiple fixed points or none at all, but it would be nice if there was exactly one.
Oh, no. Quines are just as common in programming as they are in natural languages. Also see the diagonal lemma. I use self-referential sentences to prove theorems all the time, they're very common and can be used for a huge variety of purposes.
The welcome thread is about to hit 500 comments, which means that the newer comments might start being hidden for new users. Would it be a good thing if I started a new welcome thread?
While I'm at it, I'd like to add some links to posts I think are especially good and interesting for new readers.
OK, I'm seeing some quick approval. I've been looking back through LW for posts and wiki articles that would be interesting/provocative for new readers, and don't require the entirety of the sequences. Here's my list right now:
And from the wiki:
What should I add? What, if anything, should I subtract?
Due to heavy personal history bias: That Alien Message.
I would take out anything that involves weird stuff regarding dead people, but that might be better A/B tested or surveyed. My own expectation is that hitting readers with the crazy topics right away is bad and a turn-off while it is better to give out useful and interesting things in the beginning that are relatable right away. [Edit: important missing word added]
Huh?
Corrected. I meant cryonics, and some of the applications of 'shut up and multiply'.
Ah. As you can see on the page itself, I decided to leave out the wiki links (for basically the same reasons you mentioned.) I'll add That Alien Message.
I've been thinking more and more about web startups recently (I'm nearing the end of grad school and am contemplating whether a life in academia is for me). I'm no stranger to absurd 100 hour weeks, love technology, and most of all love solving problems, especially if it involves making a new tool. Academia and startups are both pretty good matches for those specs.
Searching the great wisdom of the web suggests that a good startup should be two people, and that the best candidate for a cofounder is someone you've known for a while. From my own perspective, I'd love to have a cofounder that was rational and open minded, hence LessWrong as a potential source.
I'm not pitching a startup idea here. What I'm pitching is promiscuous intellectual philandering. I'd like to shoot the shit about random tech ideas, gossip about other startups, and in general just see if I click with anyone here strongly enough to at some point consider buddying up to take on the world.
Thoughts on how best to do this? What's the internet equivalent of speed dating for finding startup cofounders? Maybe the best way is to just attend more LessWrong meetups?
GiveWell blog discusses SIAI:
http://blog.givewell.org/2010/06/29/singularity-summit/
Value-sorting hypothetical:
If you had access to a time-machine and could transfer one piece of knowledge to an influential ancient (e.g. Plato), what would you tell him?
Something practical, like pasteurization, would almost certainly improve millions of lives, but it wouldn't necessarily produce people with values like ours. I can imagine a bishop claiming heat drives demons from milk.
Meta-knowledge, like a working understanding of the scientific method, might allow for thousands of other pasteurizations to be developed, or maybe it would remain unused throughout the Dark Ages.
Convincingly arguing for a philosophical conclusion, like materialism, might prevent the horror of the crusades, or maybe the now unaddressed emotional need for community would sooner be channeled into nationalism and hasten the coming of the world wars that terrorized the early 20th century.
Each side has its pluses and potential pitfalls. Which would you choose?
And should that therefore be the main thrust of your rationality-promoting conversations today?
How to make a movable-type printing press. They'll figure out pasteurization and the scientific method on their own eventually, but without a press, they'll lose knowledge almost as fast as they gain it. And as an added bonus, it introduces the concept of mass production.
We don't want them to advance quickly; we want them to advance with a low probability of screwing up permanently.
I don't think screwing up permanently becomes a real concern until the invention of nuclear weapons, and that's such a long ways ahead of the starting point for this exercise that I don't think we can influence how it goes.
Surely we can have nontrivial influence both on variables relating to specific technologies like nukes, and on general variables along the lines of "caution about technology".
I like this answer a lot. This would change a lot of incentives for the better.
This requires a lot of work. I'm not sure that they had the metallurgy to do this. The Antikythera mechanism suggests that the answer is yes. But the printing press as a whole requires a lot of different technologies to come together. The screw press, without which moveable type is highly inefficient, was not around until around 100 CE or slightly earlier (I'm under the impression that late medieval versions were generally better and more efficient than Roman-era screw presses but don't have a citation for that claim. If someone can confirm/refute this I'd appreciate it). You also need to explain how to make a matrix for printing (again, otherwise efficiency issues kill things badly). Also, one needs to introduce the idea of a book/codex. Prior to that, the use of scrolls and other writing systems makes a printing press less practical. This is another innovation from the Roman period. So one could probably have success introducing a printing press around 150 or 200 CE, but the chance of successful introduction drops drastically as one goes further back in time.
Jared Diamond has suggested that even if something approximating the Gutenberg press were introduced early on, the lack of supporting technologies might make it difficult to catch on. This connects with objects like the Phaistos Disc, which used a standardized form of printing around 1600 BCE, but the technology apparently did not spread far (or, if it did, it has left no substantial remnants elsewhere and did not stay around).
What counts for "one piece"? I'd like them to know enough math and rationality to be able to think sane thoughts, and explain the problem of Friendly AI, before technology is advanced enough to threaten.
There's an article on rationality in Newsweek, with an emphasis on evo-psych explanations for irrationality. Especially: we evolved our reasoning skills not just to get at the truth, but also to win debates, and overconfidence is good for the latter.
There's nothing there that's new to readers of this blog, and the analysis is superficial (plus the writer makes an annoying but illustrative error while explaining why human intuition is poor at logic puzzles). But Newsweek is a large-circulation (second to Time) newsweekly in the U.S., so this is a pretty broad audience.
Perhaps this has been mentioned before, since it's been online for almost a week, but my parents' print copy was just delivered today, and that's what I read.
I assume this is relevant enough to post in the open thread, though it may be old news to some of you. There is a purported proof that P!=NP. A wiki collecting discussion and links to discussions, as well as links to the paper draft, is here.
Wanted ad: Hiring a personal manager
I will pay anyone I hire $50-$100 a month, or equivalent services if you prefer.
I've been trying to overcome my natural laziness and get work done. For-fun projects, profitable projects, SIAI-type research, academic homework -- I don't do much without a deadline, even projects that I want to do because they sound fun.
I want to hire a personal manager, basically to get on my case and tell/convince me to get stuff done. The ideal candidate would:
Please do post something here as well for onlookers to see what's going on, but if you're interested PM, post, or email me contact information so we can get together real-time. My email is vanceza+lesswrong@gmail.com (I will take my email address down in a week or two).
I may be interested. PMing contact info.
I might also be interested, but I am fairly confident that Alicorn is more advanced than me, so I'll give her precedence. PM me or reply here if you guys can't work something out, though.
Good idea. I'll be interested to hear a report on how it works out!
I had a question. Other than Cryonics and PUA, what other "selfish" purposes might be pursued by an extreme rationalist that would not be done by common people?
On thinking on this for quite a while, one unusual thing I could think of was possibly, the entire expat movement, building businesses all across the world and protecting their wealth from multiple governments. I'm not sure if this might be classified as extreme rationality or just plain old rationality.
Switzerland seems to be a great place to start a cryonics setup as it is already a hub for people maintaining their wealth there. If cryonics were added, then your money and your life could be safe in Switzerland.
PUA probably fits the "plain old rationality" category too, including the "done by common people" part.
I for one would appreciate people not using abbreviations that are not in the Wiki or someplace like that. I do not know what PUA is - so where do I go to find out?
Google. (It stands for Pick-Up Artist)
Thanks
Just watched Tyler Cowen at TEDx Mid-Atlantic 2009-11-05 talking about how our love of stories misleads us. We talk about good-story bias on the Wiki.
Is there a way good-story bias could be experimentally verified?
Tyler Cowen tells a nice story here.
As he points out.
Nice video. I am thinking that conjunction bias and hyperactive agency detectors are both linked to this 'story bias'. Of course religion milks this set for all it's worth.
Another question that came to me was whether telling children stories helps them or wires them up to keep thinking in terms of stories.
late to the party? :)
Jaron Lanier is at it again: The First Church of Robotics
Besides piling up his usual fuzzy opinions about AI, Jaron claims, and I cannot imagine that this was done out of sheer ignorance, that "This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined". I cannot imagine that this is the Singularity University's official position; it's really just too stupid.
I understand SingU is not SIAI, but there is some affiliation and I hope someone speaks up for them.
Seconded. I think SingU is stupid, but whatever its faults, that isn't one of them, and it would be bad if this image spilled onto SIAI.
I'm kind of upset too, because I was just reading Jaron Lanier's recent book, which is the biggest dose of sanity I've seen on issues I care about in a long time.
At last year's Singularity Summit, there was an OB/LW meetup the evening of the first day, held a few blocks away from the convention center. Is anything similar planned for this weekend?
(I'm guessing no, since I haven't heard anything about it here, but we'd still have a couple days to plan it if anyone's interested...)
I would be interested in such a meetup.
This may be a stupid question, but...
There're a couple of psych effects we have evidence for. Specifically, we have evidence for a sort of consistency effect. For example (relevant to my question), there's apparently evidence that if someone ends up tending to do small favors for others or being nice to them, they'll be willing to continue to do so, and more easily willing to do bigger things later.
And there's also willpower/niceness "used up"ness effects, whereby apparently (as I understand it), one might do one nice thing then, feeling they "filled up their virtue quota" be nasty elsewhere. (ie, apparently one of the less obvious dangers of religion is you, say, go to church or whatever, and thus later in the day you don't even bother tipping (or tip poorly) when you go to a restaurant because you're "already virtuous" and thus don't have to do any more.)
How is it that we can simultaneously have evidence of these two things when they directly contradict each other? Or am I being totally stupid here?
(EDIT: just to clarify, I meant I was asking "How can it be that the sum total of evidence support both these positions when they seem to me to directly contradict each other?")
As far as I understand, they operate on different scales. "Used up" effects operate on much shorter time scales, and consistency effects (often?) operate on more specific things than general niceness.
Ah, if so, then thank you. (Huh, I'd thought consistency effects were supposed to work on short time scales too.)
The classic post says effects persist for two weeks, at least. So it would seem that the response curves of the two effects cross each other at one or two points. I'd be interested to see studies plotting them against each other; it is an interesting dichotomy.
Thanks. And yeah, that's a fair point and question. (Hrm... how exactly would one measure the response curves anyway in any quantitative way? I.e., sure, "how many people respond after delay X vs delay Y, etc.", but is there any way to directly measure the strength of the effect rather than simply measuring when it "falls below measurability"?)
Are there any Less Wrong postings besides The Trouble With "Good" and (arguably) Circular Altruism which argue in favor of utilitarianism?
Some are linked here.
http://www.technologyreview.in/computing/24967/page3/
Reposted here instead of part 1, didn't realise part 2 had been started.
I don't understand why you should pay the $100 in a counterfactual mugging. Before you are visited by Omega, you would give the same probabilities to Omega and Nomega existing, so you don't benefit from precommitting to pay the $100. However, when faced with Omega, your probability estimate for its existence becomes 1 (and Nomega's becomes something lower than 1).
Now what you do seems to rely on the probability that you give to Omega visiting you again. If this was 0, surely you wouldn't pay the $100 because its existence is irrelevant to future encounters if this is your only encounter.
If this was 1, it seems at a glance like you should. But I don't understand in this case why you wouldn't just keep your $100 and then afterwards self-modify to be the sort of being that would pay the $100 in the future, and therefore end up with an extra hundred on top.
I presume I've missed something there though. But once I understand that, I still don't understand why you would give the $100 unless you assigned a greater than 10% probability to Omega returning in the future (even ignoring the nonzero, but very low, chance of Nomega visiting).
Is anyone able to explain what I'm missing?
I think I've figured out the answer to my question.
The related scenario: You're stuck in the desert without water (or money) and a passing driver offers to give you a lift if, when you reach the town, you pay them money. But you're both perfectly rational, so you know that when you reach town, you would gain nothing by giving the person the money. You say "Yes," but they know you're lying and so drive off.
If you use a decision theory which would have you give them the money once you reach town, you end up better off (i.e. safely in town), even though the decision to give the money may seem stupid once you're in town.
From the perspective of t = 2 (i.e. after the event), giving up the money looks stupid: you're in town. But if you didn't follow that decision theory, you wouldn't be in town, so it is beneficial to follow that decision theory.
Similarly, at t = 2 in the counterfactual mugging, giving up the money looks stupid. But if you didn't follow that decision theory, you would never have had the opportunity to win a lot more money. So once again, following a decision theory which involves you acting as if you precommitted is beneficial.
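The asymmetry can be sketched numerically. The payoffs here (10000 for the winning branch, 100 for paying) and the fair coin are just the usual illustrative assumptions for the problem:

```python
# Pre-flip expected value of two policies in the counterfactual mugging.
# Assumed payoffs: Omega pays 10000 on heads to agents who would pay,
# and asks for 100 on tails; fair coin.
p_heads = 0.5
win, cost = 10000, 100

ev_pay    = p_heads * win - (1 - p_heads) * cost  # policy: pay when asked
ev_refuse = 0.0                                   # policy: never pay

assert ev_pay == 4950.0
assert ev_pay > ev_refuse
```

Evaluated at t = 2, after tails, paying is a pure loss of 100; evaluated over policies before the flip, the paying policy dominates, which is the sense in which the "stupid-looking" action wins.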
So by that analysis: My mistake was asking what the beneficial action was at t = 2. Whereas, the actual question is, what's the beneficial decision theory to follow.
Does my understanding seem correct?
Sounds right to me. I had actually written a blog post recently that explores the desert problem (aka Parfit's Hitchhiker) that you might be interested in. I think it also sheds some light on why humans (usually) obey a decision theory that would win on Parfit's Hitchhiker.
Quick question about time: Is a time difference the same thing as the minimal energy-weighted configuration-space distance?
That is among the most difficult 'quick questions' I have ever seen.
I meant 'quick' in the sense of 'relating to the nature of quickness'.
Will a correct answer to this question give you significant help toward maximizing the number of paperclips in the universe?
Yes.
Informal poll to ensure I'm not generalizing from one example:
How frequently do you find yourself able to remember how you feel about something, but unable to remember what produced the feeling in the first place (i.e., you remember you hate Steve but can't remember why)?
It seems like this is a cognitive shortcut, giving us access only to the "answer" that's already been computed (how to act vis-a-vis Steve) instead of wasting energy and working memory re-accessing all the data and re-performing the calculation.
Yes, absolutely. Well, "hate" is too strong a word, but certainly it's hard to explain to other people: "I have a mental black mark against Steve's name, though I can't tell you why"...
Never. But I'm not typical.
I do this constantly. In fact, I do it a lot right here on LW - in reading comment threads, I see a comment by a certain user and have either a positive or negative reaction to the username, based on previous comments of theirs I've read, despite having no recollection of what those comments actually were.
I'm not quite sure whether this is a good thing or a bad thing.
I don't do this on LessWrong, but that may be because I don't care enough about LW, and the stakes are too low.
On Wikipedia, though, there are at least 10 editors who, when I see their name come up on my watchlist, I briefly freeze up with a combination of fear, disgust, and anger.
Just curious: why do you consider LW to be lower stakes than wikipedia?
Fewer deletions. On Wikipedia, I have to fight tooth and nail for some things just to remain (and I often fail; I'm still a little bitter about the off-handed deletion of the Man-Faye article a few days ago); on LW, deletion of stuff is so rare that it's a major event when Eliezer deletes an article.
This has happened to me but not often.
There is a reason for it. Current thinking is that memories of events are retrieved by a process in the hippocampus (until the memories become substantially re-consolidated). Memories of strong emotional experiences are also retrieved by a process in the amygdala. They are not memories of events but just a link between the emotion and the object that caused it. In recent memories these are usually connected: if the amygdala retrieves, it prompts the hippocampus to do so also; if the hippocampus does the retrieving, it triggers the amygdala. But the two processes can be disconnected for a particular memory pair. You see X and feel the memory of fear or anger but not the episodic memory of when and where you felt that emotion towards X. The amygdala retrieves but the hippocampus fails to.
Thanks - I appreciate the explanation.
Yes -- and I agree that it's probably a cognitive shortcut, because it's also something that happens with purely conceptual ideas. I'll forget the definition of a word, but remember whether it's basically a positive or negative notion. Yay/Boo is surprisingly efficient shorthand for describing anything.
Extremely rare, but I suspect I'm in the same boat as wedrifid in that I seem to have above-average memory.
Occasional but rare. I have more of a problem where I have some feeling for some reason, then find out I was wrong about that reason, and then need to make an effort to adjust my feelings to fit the data. But I generally remember the cause for my feelings. The only exception is that occasionally I'll vaguely remember that some approach to a problem doesn't work at all but won't remember why (it generally turns out that I spent a few days at some point in the past trying to use that method to solve something and got negative results showing that the method wasn't very useful).
I would say that most people I know easily fit this heuristic, but I almost never employ it, based on the way I remember people. When I have been in a conflict with someone, I can recall a categorized list of every thing I dislike about them, and a few fights we have had, quite easily, and vice versa for people I like. What this means, essentially, is that I have a very hard time remaining angry/happy with people, because it requires constant use of resources, and it also seems to affect my ability to remember meeting people at all. Since I store memories of other people using events instead of descriptions, if I have never had a particularly eventful interaction with someone, remembering their name or any other info is almost impossible.
--David Foster Wallace, "The String Theory", July 1996 Esquire
I thought of Robin Hanson and ems as I read this:
There was a question recently about whether neurons were like computers or something like that. I cannot find the comment although I replied at the time. Today I came across an article that may interest that questioner. http://www.sciencedaily.com/releases/2010/08/100812151632.htm
A little fiction about specialized AI.
What does Less Wrong know about the Myers-Briggs personality type indicator? My sense is that it's a useful model for some things, but I'm most interested in how useful it is for relationships. This site suggests that each personality type pair has a specific type of relationship, while this site only comments on what the ideal pair is for any given type. But the two sites disagree about what the ideal pairings are.
I think there are better models to use when considering relationships. I note that often such models are useful in as much as they serve to provide a language which can be used to describe intuitive associations that we pick up through observation. The model is not terrible, being a formalisation of the 'opposites attract' conventional wisdom with consideration given to how different people relate on intellectual and emotional levels.
As for MBTI, I have found it useful in some regards. I know, for example, that I can basically rule out relationships with anyone who comes in as a "J". I just find "J"s annoying ('judgemental' of me, I know!)
Edit: The links you provide are... interesting. I must admit I have rather strong doubts about just how accurate those physical descriptions of various personality types are!
like? (I'm intrigued)
Me too. There does seem to be some correlation between physical appearance and personality, but those details are rather burdensome.
Personality Page is not mainstream Jungian; they seem to be of the opinion that sharing a dominant trait of opposite attitude is most beneficial. More mainstream MBTI sites will tend to agree with Socionics that completely opposite traits are the most complementary (for example Fe and Ti) but disagree on which of these traits correlate to a J or P.
So if you go by the theory that J/P correlates to extroverted conscious traits (the MBTI position), INTP and ESFJ are complementary. If you go by the theory that J/P correlates to the dominant trait, INTJ is ESFJ's dual. Socionics sites tend to take this position.
Note that while these letters should be completely exclusive for introverts, many of the introvert profiles seem to be the same (or suspiciously similar) between the systems, particularly with sensing types. So an (alleged) ISFP MBTI may actually be ISFP in Socionics.
That would imply that someone is wrong/confused. Either the profiles are uselessly vague (Forer effect; no better than astrology charts for identifying this particular feature), the traits aren't actually real empirical phenomena (Si1 is indistinguishable from Se2), or the traits are being defined differently (such that Si1 in system A is actually Se2 in system B).
To confuse/complicate matters more, all the traits have various features in common with each other: S+T are pragmatic and "hard", T+N are theoretical/consequence-based, F+N are abstract and ideal, F+S are aesthetic and social, just as T+F are judging and S+N are perceiving. So profiles could have varying accuracy while describing surface aspects of real traits, yet not distinguishing them from each other well enough to be useful.
Now, if just you want to use this to find a prospective spouse or best friend who is your dual type, and don't care so much about the theoretical correctness of who is what type, there's a work-around: Find someone who appears opposite on the first three letters, then see if they make you comfortable or not. If they have shared values and a compatible sense of humor, chances are relatively high that they are a dual type rather than a conflict type.
But which view (if any) makes good predictions in the relationship department?
EDIT: A quick survey of abstracts on Google Scholar suggests that marital satisfaction is not related to the MB personality types of the couples.
That is interesting. I would expect there to be some significant differences in relationship quality among MB types even if the types are only somewhat correlated (under the assumption that socionics is correct).
One of the better sites on the topic is Rick DeLong's Socionics.us. He says there is only roughly a 30% correlation between MBTI types and Socionics types. Boulakov is also skeptical of the validity of MBTI typings. Perhaps the correlation is not high enough to obtain meaningful results here. I will be updating my beliefs on the matter, as this implies most MBTI types are mistaken if socionics is valid.
Honestly though, it really does look a lot like motivated cognition on the part of socionists. I mean, they do have a coherently self-consistent theory, but references to external data points are suspiciously scarce. They seemingly start with the assumption (based on anecdotal observations of Augusta, socionics' founder, and others after her) that these relationship preferences between distinct types exist, find subjective validation, and then go from there to assert that the MBTI is just not accurate enough at determining the traits socionics is based on. So, for example, if two people who are claimed to be ISFp and ENTp (where lowercase p is "irrational") do not get along, socionists will say the typing is invalid rather than that the theory is wrong. But if relationships are the only acid test of a typing, and relationships are the only thing predicted by the typing, it's turned into a vague "if you like these kinds of people you will like these kinds of people".
However, it's not entirely hopeless, because there are more specific predictions to validate. As an example, given a valid ISFp/ENTp pair, socionics also predicts the ISFp will be a supervisor ("supervision transmitter" in DeLong's terms) for the INFj type, whereas the ENTp will be the "request transmitter" or beneficiary for the INFj. So if you could design a set of experimental test situations where supervision and request are distinguishable from other types of interaction (perhaps a game of some sort), you could set up a series of meetings between test subjects and see if it checks out. You could verify a given dual pair by their interactions with a given supervision/request receiver first, then arrange a meeting between them and see if they have more compatibility than the control group. The same thing could be verified with the dyad's supervision/request transmitter type.
Problems with high stakes, low quality testing
Possible new barriers to Moore's Law where small chips won't have enough power to use the maximum transistor density they have available. The article also discusses how other apparent barriers (such as leaky gates) have been overcome in the past including this amusing line:
I just found a new blog that I'm going to follow: http://neuroanthropology.net/
This post is particularly interesting: http://neuroanthropology.net/2009/02/01/throwing-like-a-girls-brain/
I'm considering starting a Math QA Thread at the toplevel, due to recent discussions about the lack of widespread math understanding on LW. What do you say?
I'm not sure that people necessarily know what questions they need to ask, or even that they need to ask.
A math Q&A seems like a good idea, but it would be a better idea if there were some "the math you need for LW" posts first.
There was a very nice piece here (possibly a quote) on how to think about math problems -- no more than a few paragraphs long. It was about how to break things down and the sorts of persistence needed. Anyone remember it?
Here is all the math you need to know to understand most of LW (correct me if I'm wrong):
I'm working through all of it right now. Not very far yet though.
You might want to add computer science and basic programming knowledge too.
Ok, you might add some logic and set theory as well if you want to grasp the comments. Although some comment threads go much further than that.
Some people, including me, can get away with knowing much less and just figuring stuff out as we go along. I'm not sure if anyone can learn this ability, but for me personally it wasn't inborn and I know exactly how I acquired it. Working through one math topic properly at school over a couple years taught me all the skills needed to fill any gaps I encountered afterwards. University was a breeze after that.
The method of study was this: we built one topic (real analysis) up from the ground floor (axiomatization of the reals), receiving only the axioms and proving all theorems by working through carefully constructed problem sets. An adult could probably condense this process into several months. It doesn't sound like much fun - it's extremely grueling intellectual work of the sort most people never even attempt - but when you're done, you'll never be afraid of math again.
I had to figure it ALL out myself, without the help of anyone in meatspace. I'm lacking any formal education worth mentioning. The very language I'm writing in right now is almost completely self-taught. It took me half a decade to get here, irrespective of my problems. That is, most of the time I haven't been learning anything but merely pondering what the right thing to do is in the first place. Only now have I gathered enough material, intention and the basic tools to tackle my lack of formal education.
I have been wanting something like this on LW for quite a while, but wasn't sure it was on topic. With your linked post in mind, however, I think this is a good idea, and I, for one, would be an active participant.
Pattern matching, signalling:
Link from The Agitator.
I'm looking for something that I hope exists:
Some kind of internet forum that caters to the same crowd as LW (scientifically literate, interested in technology, roughly atheist or rationalist) but is just a place to chat about a variety of topics. I like the crowd here but sometimes it would be nice to talk more casually about stuff other than the stated purpose of this blog.
Any options?
This is something I'd very much like as well. If you find anything, let me know. xkcd forums can be pretty good, though I haven't been on there in a while.
If all else fails, we can make one of our own.
I was thinking the same thing.
Pharyngula. More atheist than rationalist, and more biology than technology, but it is definitely a community. It is a blog, but it has an interesting feature called the endless thread, which is a kind of "collective stream of consciousness". Check it out, and also look at other offerings in the science blogosphere.
[Edit:supplied link.]
I would not suggest Pharyngula for this purpose. The endless thread is fun but the rationality level there is not very high. It is higher than that of a random internet forum but I suspect that many LWians would become quickly annoyed at the level at which arguments are treated as soldiers.
Truth, bro. It is pretty rowdy.
Agreed. Myers himself is way too political, and basically not nice. If anybody ever calls him on that, they get an insane level of vitriol and accusations stopping just short of being in league with the Vatican.
Stardestroyer.net fits that description somewhat, for values of "casually" that allow for copious swearing punctuating most disagreements. I haven't posted there, but Kaj Sotala posts as Xuenay (<s>apologies</s> no apologies for stalking).
Examples of threads on LW-related topics:
(Edited after first upvote; later edited again to add a link.)
I recently started taking piracetam, a safe and unregulated (in the US) nootropic drug that improves memory. The effect (at a dose of 1.5g/day) was much stronger than I anticipated; I expected the difference to be small enough to leave me wondering whether it was mere placebo effect, but it has actually made a very noticeable difference in the amount of detail that gets committed to my long-term memory.
It is also very cheap, especially if you buy it as a bulk powder. Note that when taking piracetam, you also need to take something with choline in it. I bought piracetam and choline citrate as bulk powders, along with a bag of empty gelatin capsules and a scale. (Both piracetam and choline citrate taste extremely vile, so the gel caps are necessary. Assembling your own capsules is not hard, and can be done at a rate of approximately 10/minute with a tolerance of +/- 10% once you get the hang of it.)
I strongly recommend that anyone who has not tried piracetam stop procrastinating and order some. Yes, people have done placebo-controlled studies. No, there are not any rare but dangerous side effects. Taking piracetam is an unambiguous win if you want to learn and remember things.
Two questions:
-How much does it cost?
-How soon do you start becoming desensitized to it, if at all?
I ordered from here at a price of $46 for 500g each of piracetam and choline citrate, plus $10 for gel caps and $20 for a scale (which is independently useful).
I could not find any reported instances of desensitization to piracetam, so I don't think it's an issue.
I'm trying out nootropics, adding them one at a time. Next on my list to try is sulbutiamine; I've seen claims that it prevents mental fatigue, and it too has basically zero side-effect risks. Also on my list to try are lion's mane, aniracetam, l-tyrosine and fish oil. All of these are unregulated in the US.
I also use adrafinil, which greatly improves my focus. However, it's more expensive and it can't be used continuously without extra health risks, so I only use it occasionally rather than as part of my daily regimen. (There's an expensive and prescription-only related drug, modafinil, which can be used continuously.)
Sounds good. Be sure to report back once you test out the others-- nootropics are very interesting to me, and I think generally useful to the community as well.
First, the results of a Wikipedia check: "There is very little data on piracetam's effect on healthy people, with most studies focusing on those with seizures, dementia, concussions, or other neurological problems," which weakens the assurance of safety for everyday use. Otherwise, most of the sources appear to agree with your advertisement. I too would like to see memory tests for these drugs, but preferably in a large and random sample of people, with one control group given a placebo and another control group taking the tests with no aid of any kind, as well as a long-term test to check for diminishing effectiveness or side effects. With my memory, I would pay a considerable amount to improve it, but first I want to see a wide-scale efficacy test.
Why? Given the low cost and risk of trying it out, the high possible benefits, and the high probability that results will depend on individual genetic or other variations and so will not reach significance in any study, wouldn't the reasonable thing be to try it yourself, even if the wide-scale test had already concluded it had no effect?
Question: did you find that it leads to faster grokking (beyond the effects of improvement of raw recall ability)?
I don't know, but I think it's just memory. This is almost impossible to self-test, since there's a wide variance in problem difficulty and no way to estimate difficulty except by speed of grokking itself.
File under "Less Wrong will rot your brain":
At my day job, I had to come up with and code an algorithm which assigned numbers to a list of items according to a list of sometimes-conflicting rules. For example, I'd have a list of 24 things that would have to be given the numbers 1-3 (to split them up into groups) according to some crazy rules.
The first algorithm I came up with was:
Of course, I did not try to implement this algorithm. Rather, I ended up solving the problem (mostly) using about 100 lines of Perl and no AI.
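The actual rules and Perl code aren't given above, so purely as an illustration, here is a minimal Python sketch of that kind of heuristic approach: walk the items once, let a priority-ordered list of (made-up, placeholder) rules propose a group, and fall back to the emptiest group when the rules conflict with keeping the groups roughly even.

```python
# Hypothetical sketch only: assign each of 24 items a group number 1-3,
# honoring a priority-ordered list of possibly-conflicting rules.
# The rules here are invented stand-ins, not the ones from the original post.

items = [f"item{i}" for i in range(24)]

# Each rule returns a preferred group for an item, or None if it doesn't apply.
# Earlier rules take priority over later ones.
rules = [
    lambda item: 1 if item.endswith("0") else None,  # e.g. "names ending in 0 go in group 1"
    lambda item: 2 if "1" in item else None,         # lower-priority, potentially conflicting rule
]

def assign(items, rules, groups=(1, 2, 3)):
    counts = {g: 0 for g in groups}
    target = len(items) / len(groups)  # keep group sizes roughly even
    assignment = {}
    for item in items:
        choice = None
        for rule in rules:  # first applicable rule wins...
            g = rule(item)
            if g is not None and counts[g] < target:
                choice = g  # ...unless its group is already at the size target
                break
        if choice is None:  # no rule applied (or all applicable groups were full):
            choice = min(groups, key=counts.get)  # fall back to the emptiest group
        assignment[item] = choice
        counts[choice] += 1
    return assignment

result = assign(items, rules)
```

This is the "kludge using heuristics" shape rather than a real constraint solver: a single greedy pass, so it can miss globally consistent assignments that backtracking would find, which is usually an acceptable trade for a one-off work script.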
This is what happens to me whenever I write a difficult program in C++: I start by building an innovative system which solves the problem with minimal intervention on my part, and then eventually set up a kludge using heuristics which gets the same thing done in a fraction of the time.
What was the application?
I'm thinking of signing up for cryonics. However, one point that is strongly holding me back is that cryonics seems to require signing up for a DNR (Do not resuscitate). However, if there's a chance at resuscitation I'd like all attempts to be made and only have cryonics used when it is clear that the other attempts to keep me alive will fail. I'm not sure that this is easily specifiable with current legal settings and how cryonics is currently set up. I'd appreciate input on this matter.
What did you read that makes it seem this way? I haven't run into this before.
A variety of places mention it. Alcor mentions it here. Cryonics.org discusses the need for some form of DNR although the details don't seem to be very clear there. Another one that discusses it is this article which makes the point that repeated attempts at resuscitation can lead to additional brain damage although at least from the material I've read I get the impression that as long as it doesn't delay cryopreservation by more than an hour or two that shouldn't be an issue.
You don't have to sign a DNR or objection to autopsy to get cryonics. The autopsy objection is recommended, but not required. It looks like Alcor wants terminally ill people to sign a DNR, not typical healthy people.
I've signed a religious objection to autopsy (California doesn't seem to allow an atheistic objection to autopsy), but never has a DNR been mentioned to me by anyone at Alcor.
Which is just a tad ironic. Atheists are people who consider the physical state of their brain to be all that is "them". Most religious people assume their immortal soul has traipsed off someplace: a paradise, or at the very least a brand spanking new (possibly animalian) body.
Is there a post dealing with the conflict between the common LW belief that there are no moral absolutes, and that it's okay to make current values permanent; and the belief that we have made moral progress by giving up stoning adulterers, slavery, recreational torture, and so on?
I'm not sure that both of those are common LW beliefs (at least common in the same people at the same time), but I don't see any conflict there. If there are no moral absolutes, then making current values permanent is just as good as letting them evolve as they usually do.
Who here advocates making current values permanent?