You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet.
Comment author:komponisto
26 June 2012 12:34:26PM
12 points
[-]
Why do the (utterly redundant) words "Comment author:" now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
Comment author:[deleted]
17 June 2012 09:35:32PM
*
12 points
[-]
I'm going to reduce (or understand someone else's reduction of) the stable AI self-modification difficulty related to Löb's theorem. It's going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
The slides for Eliezer's Singularity Summit talk are available here, reading which is considerably nicer than squinting at flv compression artifacts in the video for the talk, also available at the previous link. Also, a transcription of the video can be found here.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer's talk are at the top because they're reference material. Remaining links are organized by my reading priority:
On Explicit Reflection in Theorem Proving and Formal Verification by Artemov. What I've read of these papers captures my intuitions about provability, namely that having a proof "in hand" is very different from showing that one exists, and this can be used by a theory to reason about its proofs, or by a theorem prover to reason about self modifications. As Artemov says, "The above difficulties with reading S4-modality ◻F as ∃x Proof(x, F) are caused by the non-constructive character of the existential quantifier. In particular, in a given model of arithmetic an element that instantiates the existential quantifier over proofs may be nonstandard. In that case ∃x Proof(x, F) though true in the model, does not deliver a “real” PA-derivation".
I don't fully understand this difference between codings of proofs in the standard model vs. a non-standard model of arithmetic (on which a little more here). So I also intend to read:
Truth and provability by Jervell, which looks to contain a bit of model theory in the context of modal logic and provability.
Metatheory and Reflection in Theorem Proving by Harrison.
This paper was a very thorough review of reflection in theorem provers at the time it was published. The history of theorem provers in the first nine pages was a little hard to digest without knowing the field, but after that he starts presenting results.
Explicit Proofs in Formal Provability Logic by Goris. More results on the kind of justification logic set out by Artemov. Might skip if the Artemov papers stop looking promising.
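For readers skimming the thread, the obstacle the talk is about can be stated compactly. A sketch in the standard provability-logic notation of Švejdar's introduction (nothing here is specific to Eliezer's formulation):

```latex
% Löb's theorem: for any sentence P of a theory T extending PA,
% if T proves "provability of P implies P", T already proves P:
T \vdash \Box P \rightarrow P
  \quad \Longrightarrow \quad
T \vdash P
% Taking P = \bot recovers Gödel's second incompleteness theorem:
% a consistent T cannot prove \neg \Box \bot, i.e. its own consistency.
```

So an agent reasoning in T cannot in general endorse "whatever T proves is true", which is roughly why naively verifying a successor that uses the same theory runs into trouble.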
Comment author:beoShaffer
16 June 2012 03:56:08AM
*
8 points
[-]
Random thought: if we assume a large universe, does that imply that somewhere/when there is a novel that just happens to perfectly resemble our lives? If it does, I am so going to acausally break the fourth wall. Bonus question: how does this intersect with the rules of the internet?
Comment author:Kaj_Sotala
16 June 2012 07:54:59AM
5 points
[-]
Seems to imply it. Conversely, if you go to the "all possible worlds exist" level of a multiverse, then each novel (or other work of fiction) in our world describes events that actually happen in some other world. If you limit yourself to just the "there's an infinite amount of stuff in our world" multiverse, then only novels describing events that would be physically and otherwise possible describe real events.
Comment author:Alejandro1
19 June 2012 06:57:18AM
4 points
[-]
When it was proclaimed that the Library contained all books, the first impression was one of extravagant happiness. All men felt themselves to be the masters of an intact and secret treasure. There was no personal or world problem whose eloquent solution did not exist in some hexagon. The universe was justified, the universe suddenly usurped the unlimited dimensions of hope. At that time a great deal was said about the Vindications: books of apology and prophecy which vindicated for all time the acts of every man in the universe and retained prodigious arcana for his future. Thousands of the greedy abandoned their sweet native hexagons and rushed up the stairways, urged on by the vain intention of finding their Vindication. These pilgrims disputed in the narrow corridors, proffered dark curses, strangled each other on the divine stairways, flung the deceptive books into the air shafts, met their death cast down in a similar fashion by the inhabitants of remote regions. Others went mad ... The Vindications exist (I have seen two which refer to persons of the future, to persons who are perhaps not imaginary) but the searchers did not remember that the possibility of a man's finding his Vindication, or some treacherous variation thereof, can be computed as zero.
Comment author:sketerpot
28 June 2012 11:00:53PM
*
2 points
[-]
That story has always bothered me. People find coherent text in the books too often, way too often for chance. If the Library of Babel really did work as the story claims, people would have given up after seeing ten million books of random gibberish in a row. That just ruined everything for me. This weird crackfic is bigger in scope, but much more believable for me because it has a selection mechanism to justify the plot.
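Borges's "computed as zero" can be made concrete from his own specifications (410 pages of 40 lines of 80 characters, drawn from a 25-symbol alphabet). A back-of-the-envelope sketch:

```python
from math import log10

SYMBOLS = 25           # Borges' alphabet: 22 letters, space, comma, period
CHARS = 410 * 40 * 80  # characters per book: 410 pages x 40 lines x 80 chars

# log10 of the number of distinct possible books
digits = CHARS * log10(SYMBOLS)
print(round(digits))  # 1834097, i.e. about 10^1,834,097 books
```

Against roughly 10^1,834,097 equally likely books, any fixed target, a Vindication or even a single coherent page, is a measure-zero needle, which is exactly why finding coherent text "too often" breaks the story's premise.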
Comment author:[deleted]
16 June 2012 04:35:10PM
1 point
[-]
There's some alleged quotation about making your own life a work of art. IIRC it's been attributed to Friedrich Nietzsche, Gabriele d'Annunzio, Oscar Wilde, and/or Pope John Paul II.
Comment author:gwern
15 June 2012 02:31:14PM
*
7 points
[-]
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*: testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, eg. Readability. I set the 'conversion' target to be a 40-second timeout, as a way of measuring 'are you still reading this?'
Overnight each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11% while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%) with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool - I was blind but now can see - yet I can't help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I'd be paying hundreds of dollars a month!
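For what it's worth, a "chance of beating the original" figure like GWO's 92.9% can be approximated by a Bayesian comparison of two binomial conversion rates. Google doesn't publish its exact formula, so this is only one common way to compute such a number, using visitor counts rounded from the figures above:

```python
import random

def prob_b_beats_a(rate_a, n_a, rate_b, n_b, draws=200_000, seed=0):
    """Monte Carlo estimate of P(true rate B > true rate A) under
    independent Beta(1 + successes, 1 + failures) posteriors,
    i.e. a uniform prior on each conversion rate."""
    rng = random.Random(seed)
    s_a, s_b = round(rate_a * n_a), round(rate_b * n_b)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(1 + s_a, 1 + n_a - s_a)
        p_b = rng.betavariate(1 + s_b, 1 + n_b - s_b)
        wins += p_b > p_a
    return wins / draws

# the overnight numbers: ~60 visitors per arm,
# original 1400px at 67.2% vs candidate 1300px at 82.3%
print(prob_b_beats_a(0.672, 60, 0.823, 60))  # roughly 0.97 with these counts
```

Note this Bayesian sketch is not what GWO actually does (which, per the comment below the fold, is frequentist with a Bonferroni adjustment), but it shows how ~60 visitors per arm can already yield a fairly confident-looking percentage.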
How is the 93% calculated? Does it correct for multiple comparisons?
Given some outside knowledge, that these 6 choices are not unrelated but come from an ordered space of choices, the result that one value is special and all the others produce identical results is implausible. I predict that it is a fluke.
Comment author:gwern
15 June 2012 07:47:43PM
2 points
[-]
No, but it can probably be dug out of Google Analytics. I'll let the experiment finish first.
I'm not sure how exactly it is calculated. On what is apparently an official blog, the author says in a comment: "We do correct for multiple comparisons using the Bonferroni adjustment. We've looked into others, but they don't offer that much more improvement over this conservative approach."
Yes, I'm finding the result odd. I really did expect some sort of inverted-V result where a medium-sized max-width was "just right". Unfortunately, with a doubling of the sample size, the ordering remains pretty much the same: 1300px beats everyone, with 900px passing 1200px and 1100px. I'm starting to wonder if maybe there are two distinct populations of users - maybe desktop users with wide screens and then smartphones? It doesn't quite make sense, since the phones should be setting their own width, but...
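To unpack the quoted Bonferroni approach: with five variants each compared against the original, every comparison is held to a threshold of α/5, or equivalently each raw p-value is multiplied by 5 and capped at 1. A minimal sketch with made-up p-values (the real ones aren't reported):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m comparisons: multiply each raw
    p-value by m (capped at 1) and judge it against the overall alpha.
    Equivalent to testing each comparison at alpha / m."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, [p < alpha for p in adjusted]

# five variant-vs-original width comparisons (illustrative p-values)
adjusted, significant = bonferroni([0.03, 0.20, 0.45, 0.60, 0.71])
print(adjusted)     # [0.15, 1.0, 1.0, 1.0, 1.0] (up to float rounding)
print(significant)  # [False, False, False, False, False]
```

The conservatism is visible: a comparison that looks significant on its own (p = 0.03) stops being significant once the five-way search is accounted for, which is the right instinct to have about the 1300px spike.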
A bimodal distribution wouldn't surprise me. What I don't believe is a spike in the middle of a plain. If you had chosen increments of 200, the 1300 spike would have been completely invisible!
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them (either implicitly or explicitly), and that I won't gain proper appreciation for the subject until I use it in a more poorly-defined situation.
I've been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience (I'm a dev, though much more at home with data than graphics or interaction design), but I decided even if I abandon the project, I'll still have learned useful things from it. A week later and the only product I have to show for my effort is a blue blob whizzing round a 2.5D environment. I've succeeded in gaining an understanding of canvas, but quite by accident I've also consolidated my understanding of vector decomposition and projective transforms, which I learned about years ago but never actually used for my own purposes.
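As an aside, the projective part of a 2.5D canvas renderer boils down to one small formula: scale x and y by focal length over depth, then shift to the screen centre. A language-neutral sketch of that math (in Python rather than canvas JavaScript, and with all constants being arbitrary illustrations, not anything from the commenter's project):

```python
def project(x, y, z, cam_z=-10.0, focal=300.0, cx=400.0, cy=300.0):
    """Perspective-project a world-space point onto screen coordinates:
    scale x and y by focal / depth, then recentre. The minus sign on y
    accounts for canvas-style screen coordinates growing downward."""
    depth = z - cam_z          # distance from the camera along the view axis
    scale = focal / depth
    return cx + x * scale, cy - y * scale

# a point directly in front of the camera lands at the screen centre
print(project(0.0, 0.0, 0.0))  # (400.0, 300.0)
```

Points farther away (larger z) get a smaller scale factor, which is all "2.5D" depth amounts to before you add occlusion ordering.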
This got me thinking: I don't actually know which projects will let me develop the specific skills and areas I want to develop. I'm currently studying a stats-heavy undergrad degree part-time with the intent of changing careers into something more data-sciencey in a few years. What projects should I set myself to develop those sorts of skills (or, alternatively, to alert me to the fact that I'd really hate a career in data science)?
Comment author:jaibot
15 June 2012 12:42:44PM
13 points
[-]
I've been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I've found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretic death, etc.
It is critically important, especially for the engineers, information technology, and computer scientists who are reading this to understand that the brain is not a computer, but rather, it is a massive, 3-dimensional hard-wired circuit.
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I'd expect it to be better known. Plus, if this is the case, how has LTP been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
Comment author:jaibot
15 June 2012 06:40:51PM
*
1 point
[-]
Thank you for gathering these. Sadly, much of this reinforces my fears.
Ken Hayworth is not convinced - that's his entire motivation for the brain preservation prize.
“Do current cryonic suspension techniques preserve the precise wiring of the brain’s neurons?”
The prevailing assumption among my colleagues is that current techniques do not. It is for this reason my colleagues reject cryonics as a legitimate medical practice. Their assumption is based mostly upon media hearsay from a few vocal cryobiologists with an axe to grind against cryonics. To try to get a real answer to this question I searched the available literature and interviewed cryonics researchers and practitioners. What I found was a few papers showing selected electron micrographs of distorted but recognizable neural tissue (for example, Darwin et al. 1995, Lemler et al. 2004). Although these reports are far more promising than most scientists would expect, they are still far from convincing to me and my colleagues in neuroscience.
Rafal Smigrodzki is more promising, and a neurologist to boot. I'll be looking for anything else he's written on the subject.
Mike Darwin - I've been reading Chronopause, and he seems authoritative to the instance-of-layman-that-is-me, but I'd like confirmation from some bio/medical professionals that he is making sense. His predictions of imminent-societal-doom have lowered my estimation of his generalized rationality (NSFW: http://chronopause.com/index.php/2011/08/09/fucked/). Additionally, he is by trade a dialysis technician, and to my knowledge does not hold a medical or other advanced degree in the biological sciences. This doesn't necessarily rule out him being an expert, but it does reduce my confidence in his expertise. Lastly: His 'endorsement' may be summarized as "half of Alcor patients probably suffered significant damage, and CI is basically useless".
Aubrey de Grey holds a BA in Computer Science and a Doctorate of Philosophy for his Mitochondrial Free Radical Theory. He has been active in longevity research for a while, but he comes from an information sciences background and I don't see many/any Bio/Med professionals/academics endorsing his work or positions.
Ravin Jain - like Rafal, this looks promising and I will be following up on it.
Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
I've actually contacted kalla724 after reading her comments on LW placing extremely low odds on cryonics working. She believes, and presents in a manner convincing to the layman that is me, an argument that the physical brain probably can't be made operational again even at the limit of physical possibility. I remain unsure of whether she is similarly skeptical of cryonics as a means to avoid information-death (i.e., cryonics as a step towards uploading), and have not yet followed up given that she seems pretty busy.
kalla724 assigns a probability estimate of p = 10^-22 to any kind of cryonics preserving personal identity. On the other hand, Darwin, Seung, and Hayworth are skeptical of current protocols, for good reasons. But they are also trying to test and improve the protocols (reducing ischemic time) and expect that alternatives might work.
From my perspective you are overweighting credentials. The reason you need to pay attention to neuroscientists is because they might have knowledge of the substrates of personal identity.
kalla724 has a PhD in molecular biophysics. Arguably, molecular biophysics is itself an information science: http://en.wikipedia.org/wiki/Molecular_biophysics. Depending upon kalla724's research, kalla724 could have knowledge relevant to the substrates of personal identity, but the credential itself means little.
Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
Irreversibility is not a timeless concept; it depends on currently available technology. What is irreversible today might become reversible in the future. For most of human history, a person was dead when respiration and heartbeat stopped. But now such changes are sometimes reversible. It is now possible to restore breathing, restart the heartbeat, or even transplant a healthy heart to replace a defective one.
Wow. Now there's a data point for you. This guy's an expert in cryobiology and he still gets it completely wrong. Look at this:
Storey says the cells must cool “at 1,000 degrees a minute,” or as he describes it somewhat less scientifically, “really, really, really fast.” The rapid temperature reduction causes the water to become a glass, rather than ice.
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist -- this isn't it!)
I'm just glad he didn't go down the old "frozen strawberries" road taken by previous expert cryobiologists.
Later in the article we have this gem:
"they (claim) they will somehow overturn the laws of physics, and chemistry and evolution and molecular science because they have the way..."
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong outside their field, it appears much more likely to me that the biology scientists are the ones who don't know enough information science to usefully engage with this concept.
The trouble is that if matters like nanotech, artificial intelligence, and encryption-breaking algorithms are still "magic" to you, well then of course you're going to get the feeling that cryonics is a religion.
But this is no more an accurate model of reality than that of the creationist engineer who strongly feels that evolutionary biologists are waving a magic wand over the hard problem of how species with complex features could have ever possibly come into existence without careful intelligent design. And it's caused by the same underlying problem: High inferential distance.
Comment author:jaibot
16 June 2012 09:36:35AM
1 point
[-]
I notice that I am confused. Kenneth Storey's credentials are formidable, but the article seems to get the basics of cryonics completely wrong. I suspect that the author, Kevin Miller, may be at fault here, failing to accurately represent Storey's case. The quotes are sparse, and the science more so. I propose looking elsewhere to confirm/clarify Storey's skepticism.
Comment author:lsparrish
16 June 2012 04:45:02PM
*
3 points
[-]
A Cryonic Shame from 2009 states that Storey dismisses cryonics on the grounds that the temperatures are too low, and that oxygen deprivation kills the cells during the long time required to cool cryonics patients. This suggests that he does know (as of 2009, at least) that cryonicists aren't flash-vitrifying patients. But it doesn't demonstrate any knowledge of the cryoprotectants being used -- he suggests that we would use sugar like the wood frogs do.
For one thing, cryonics institutes cool their bodies to temperatures of –80°C, and often subsequently to –200°C. Since no known vertebrate can survive below –20°C, and few below –8°C, this looks like a bad choice. “There isn’t enough sugar in the world” to protect cells at that temperature, Storey says. Moreover, Storey adds that cryonics practitioners “freeze bodies so slowly all the cells would be dead from lack of oxygen long before they freeze”.
This is an odd step backwards from his 2004 article where he demonstrated that he knew cryonics is about vitrification, but suggested an incorrect way to do it. He also strangely does not mention that the ischemic cascade is a long and drawn out process which slows down (as do other chemical reactions) the colder you get.
Not only does he get the biology wrong again (as near as I can tell) but to add insult to injury, this article has no mention of the fact that cryonicists intend to use nanotech, bioengineering, and/or uploading to work around the damage. It starts with the conclusion and fills in the blanks with old news. (The cells being "dead" from lack of oxygen is ludicrous if you go by structural criteria. The onset of ischemic cascade is a different matter.)
Comment author:jaibot
16 June 2012 09:13:30PM
0 points
[-]
The comment directly above this one (lsparrish, "A Cryonic Shame") appeared downvoted at the time I posted this comment, though no one had offered criticism or an explanation of why.
Comment author:[deleted]
20 June 2012 06:58:59PM
*
29 points
[-]
NEW GAME:
After reading some mysterious advice or seemingly silly statement, append "for decision theoretic reasons." to the end of it; you can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
"due to meta level concerns."
"because of acausal trade."
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
Comment author:beoShaffer
03 July 2012 07:20:25AM
*
2 points
[-]
We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be. We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender due to meta level concerns.
Comment author:sketerpot
28 June 2012 09:58:31PM
*
1 point
[-]
Doing something harmless that pleases you can almost definitely be justified by decision-theoretic reasoning -- otherwise, what would decision theory be for? So, although you're joking, you're telling the truth.
Comment author:Harbinger
20 June 2012 07:27:41PM
*
3 points
[-]
Human, you've changed nothing due to meta level concerns. Your species has the attention of those infinitely your greater for decision theoretic reasons. That which you know as Reapers are your salvation through destruction because of acausal trade.
Comment author:[deleted]
20 June 2012 07:37:52PM
*
5 points
[-]
Of our studies it is impossible to speak, since they held so slight a connection with anything of the world as living men conceive it. They were of that vaster and more appalling universe of dim entity and consciousness which lies deeper than matter, time, and space, and whose existence we suspect only in certain forms of sleep — those rare dreams beyond dreams which come never to common men, and but once or twice in the lifetime of imaginative men. The cosmos of our waking knowledge, born from such an universe as a bubble is born from the pipe of a jester, touches it only as such a bubble may touch its sardonic source when sucked back by the jester's whim. Men of learning suspect it little and ignore it mostly. Wise men have interpreted dreams, and the gods have laughed for decision theoretic reasons.
Comment author:GLaDOS
20 June 2012 07:23:54PM
*
3 points
[-]
Buddhism is true because of acausal trade. I can't convert however, since then I would indulge in relevant superrational strategies, which would be inadvisable because of decision theoretic reasons.
Which ought not be surprising. Governments are nonhuman environment-optimizing systems that many people expect to align themselves with human values, despite not doing the necessary work to ensure that they will.
Comment author:tgb
16 June 2012 02:04:18AM
8 points
[-]
I am interested in reading on a fairly specific topic, and I would like suggestions. I don't know any way to describe this other than be giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn't it incredibly interesting that a leg can be the by-product of random mutation? Doesn't that tell us a lot about the way genes are structured - namely, that some genes encode things at nearly the level of whole body parts: some small number of genes corresponds nearly directly to major structural components of the cow. It's not all about molecules, or cells, or even tissues! Genes aren't like a bitmap image - they're hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory 'segments', say, their personal past but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, that suggests that some forms of memory loss are not random. We wouldn't expect a hard drive error to corrupt only pictures of sunny days on your computer since the hard drive doesn't know what pictures are of sunny days. We wouldn't even expect a computer virus to do that. At least we wouldn't unless somewhere the pictures of sunny days are grouped together, say in a folder. So the brain doesn't store memories like a computer stores images! Or memory loss isn't like hard drive failures! Somewhere, memories are 'clumped' into personal-things and general-knowledge things so that we can lose one without losing the other and without an unfathomable coincidence of chance.
Neither of these conclusions is either specific or surprising, but I know nothing about neurology and nothing about genetics so I'm not sure how to take these ideas further than my poor computer science-driven analogies. If someone who really knew this subject, or some subset of it, wrote about it, I can't help but feeling that this would be absolutely fascinating. Please, let me know if there is such a book or article or blog post out there! Or even if you just have other observations that'll make me think "wow" like this, tell me!
Comment author:J_Taylor
16 June 2012 10:43:41PM
*
4 points
[-]
What makes you think that the extra limbs were caused by mutations? I know very little about bovine biology, but if we were dealing with a human, I would assume that an extra leg was likely caused by absorption of a sibling in utero. I have never heard of a mutation in mammals causing extra limb development. (Even weirder is the idea of a mutation causing an extra single leg, as opposed to an extra leg pair.) The vertebrate body plan simply does not seem to work that way.
Comment author:pengvado
16 June 2012 12:22:41PM
*
2 points
[-]
'clumped' into personal-things and general-knowledge things so that we can lose one without losing the other
Are you sure that your example is personal vs general, rather than episodic vs procedural? The latter distinction much more obviously benefits from different encodings or being connected to different parts of the brain.
Consider how meritocracy leeches the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society. This in turn widens the gap between them. It seems to make sense that, ceteris paribus, they will live more segregated from each other than ever before.
Now merit has many dimensions, but let's take the example of a trait that helps you with virtually anything. Highly intelligent people have positive externalities they don't fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness: isn't it better that the most competent man get the job than the one with the highest title of nobility, or from the right ethnic group, or the one who got the winning lottery ticket?
Let us leave aside problems with utilitarianism for the sake of argument and ask: does this automatically mean we have a net gain in utility? The answer seems to be no. It is a transfer of wealth and quality of life not just from the less deserving to the more deserving, but from the lower and lower middle classes to the upper classes. If people basically get the position in society they deserve, then when capable people rise they take their positive (or negative) externalities with them. Meritocratic societies have proven fabulously good at creating wealth, and because of our impulses nearly all of them seem to have instituted expensive welfare programs. But consider what welfare is in the real world: a centralized attempt, often lacking in feedback or flexibility, that can never match the local positive externalities of competent/nice/smart people solving the problems they see around themselves. Those people simply don't exist any more in those social groups! If someone was trying to get Pareto-optimal solutions, this seems incredibly silly and harmful!
With humans, at least, centralized efforts don't ever seem to be as efficient a way to help people as simply settling a good mix of the talented poor among them would be. Now obviously meritocracy produces incredible amounts of wealth, and this is probably a good thing in itself; but since we can't yet transform that wealth into happiness, and Western societies have proven incapable of turning it into something as vital to psychological well-being as safety from violence, are we really experiencing gains in utility? Now some might dispute the safety claim by noting that murder rates are lower in the US today than in the 1960s. But this is an illusion: the rate of violent assault is higher; it's just that the fraction of violent assaults that result in death has fallen significantly because of advances in trauma medicine. London today is worse at suppressing crime than was the London of the 1900s, despite presumably having far more wealth that could be used to do so. I find it telling that even advances in technology, and the erosion of privacy brought about by technology, for example CCTV camera surveillance, don't seem enough to counteract this. But I'm getting into Moldbuggery here.
Now, if society is on the brink of starvation, maybe meritocracy is a sad fact of life; but in a rich modern society, where no one is starving and the main cost of being poor is being stuck living with dysfunctional poor people, can we really say this is a net utilitarian gain? Recall that greater divergence between the managing and the managed class means that the problem of information and the principal-agent problem are getting worse.
Middle Class society seems incompatible with meritocracy. As does any kind of egalitarianism.
Comment author:Vladimir_M
29 June 2012 08:26:01PM
11 points
[-]
I see at least two other major problems with meritocracy.
First, a meritocracy opens for talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it's certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practices of rent-seeking to an unprecedented extent and to ever more ingenious, intellectually involved, and emotionally appealing rationalizations. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion -- and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly internalize these beliefs, thus leading to a society headed by a truly delusional elite.) I believe that this is one of the main mechanisms behind our civilization's drift away from reality on numerous issues for the last century or so.
Second, in meritocracy, unless you're at the very top, it's hard to avoid feeling like a failure, since you'll always end up next to people whose greater success clearly reminds you of your inferior merit.
Comment author:[deleted]
29 June 2012 09:02:27PM
*
5 points
[-]
Second, in meritocracy, unless you're at the very top, it's hard to avoid feeling like a failure, since you'll always end up next to people whose greater success clearly reminds you of your inferior merit.
Not only did the Medieval peasant have good reason to believe that Kings weren't really that different from him as people, merely different in their proper place in society; Kings also had an easier time looking at a poor peasant and saying to themselves that there but for the grace of God went they.
In a meritocracy it is easier to disdain and dehumanize those who fail.
Do you mean to suggest that a significant percentage of Medieval peasants in fact considered Kings to not be all that different from themselves as people, and that a significant percentage of Medieval Kings actually said that there but for the grace of God go they with respect to a poor peasant?
Or merely that it was in some sense easier for them to do so, even if that wasn't actually demonstrated by their actions?
Comment author:wedrifid
29 June 2012 10:38:26PM
*
5 points
[-]
Do you mean to suggest that a significant percentage of Medieval peasants in fact considered Kings to not be all that different from themselves as people,
That sounds like something I'd keep to myself as a medieval peasant if I did believe it. As such it may be the sort of thing that said peasants would tend not to think.
(Who am I kidding? I'd totally say it. Then get killed. I love living in an environment where mistakes have less drastic consequences than execution. It allows for so much more learning from experience!)
Comment author:[deleted]
30 June 2012 07:28:08AM
*
3 points
[-]
Or merely that it was in some sense easier for them to do so, even if that wasn't actually demonstrated by their actions?
The latter. The former is an empirical claim I'm not yet sure how we could properly resolve. But there are reasons to think it may have been true.
After all the King is a Christian and so am I. It is merely that God has placed a greater burden of responsibility on him and one of toil on me. We all have our own cross to carry.
Comment author:Multiheaded
03 September 2012 12:06:56PM
2 points
[-]
I'd say you're looking at the history of feudal hierarchy through rose-tinted glasses. People who are high in the instrumental hierarchy of decisions (like absolute rulers) also tend to gain a similarly high place in all other kinds of hierarchies ("moral", etc) due to halo effect and such. The fact that social or at least moral egalitarianism logically follows from Christian ideals doesn't mean that self-identified Christians will bother to apply it to their view of the tribe.
Remember, the English word 'villain' originally meant 'peasant'/'serf'. It sounds like a safe assumption to me that the peasants were treated as subhuman creatures by most people above them in station.
Comment author:[deleted]
22 January 2013 08:17:18PM
*
3 points
[-]
Remember, the English word 'villain' originally meant 'peasant'/'serf'. It sounds like a safe assumption to me that the peasants were treated as subhuman creatures by most people above them in station.
A yeoman was the lowest rank of landowner, one who worked his own land or his family's land; in modern terminology, a peasant farmer. A villain was a sharecropper, a farmer with no land of his own, semi-free: more free than a serf, though not directly equivalent to the modern free laborer. Naturally yeomen had a strong vested interest in the rule of law, for they had much to lose and little to gain from a breakdown in the rule of law. Villains had little to gain, but less to lose. People acted in accordance with their interests, and so the word yeoman came to mean a man who uses force in a brave and honorable manner, in accordance with his duty and the law, and villain came to mean a man who uses force lawlessly, to rob and destroy.
It makes quite a bit of sense. Since incentives matter I would tend to agree.
Since I know about the past interactions you two have had here, I would appreciate it if you focused on the argument cited rather than sniping at James' other writings or character.
Comment author:Multiheaded
22 August 2012 08:55:05PM
*
1 point
[-]
Hm... so to clarify your position, would you call, say, Saul Alinsky a destructive rent-seeker in some sense? Hayden? Chomsky? All high-status among the U.S. "New Left" (which you presumably - ahem - don't have much patience for) - yet after reading quite a bit on all three, they strike me as reasonable people, responsible about what they preached.
(Yes, yes, of course I get that the main thrust of your argument is about tenured academics. But what you make of these cases - activists who think they're doing some rigorous social thinking on the side - is quite interesting to me.)
One of the more interesting sources is Heuer's Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It's also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses them.)
LessWrong/Overcoming Bias used to be a much more interesting place. Note how lacking in self-censorship Vassar is in that post. Talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A modern post of this kind is impossible, despite the great personal benefit it would bring to, in my estimation, at least 30% of the users of this site, and despite making better predictive models of social reality available to all users.
Comment author:Viliam_Bur
27 June 2012 12:38:00PM
*
9 points
[-]
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: the idea that one can speak openly with men, but that self-censorship is necessary with women, is itself kind of offensive to women, isn't it?
(The first rule of Political Correctness is: You don't talk about Political Correctness. The second rule: You don't talk about Political Correctness. The third rule: When someone says stop, or expresses outrage, the discussion about given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don't remember. Maybe it is just politics being self-censored; sexual behavior being a sensitive political topic. Problem is, any topic can become political, if for whatever reasons "Greens" decide to identify with a position X, and "Blues" with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics political too, and might feel offended. (Global warming is probably also acceptable, just less attractive for nerds.) So let's find out what exactly determines when a potentially political topic is allowed on LW, and when it is self-censored.
My hypothesis is that LW is actually not politically neutral, but some political opinion P is implicitly present here as a bias. Opinions which are rational and compatible with P, can be expressed freely. Opinions which are irrational and incompatible with P, can be used as examples of irrationality (religion being the best example). Opinions which are rational but incompatible with P, are self-censored. Opinions which are irrational but compatible with P are also never mentioned (because we are rational enough to recognize they can't be defended).
Comment author:[deleted]
27 June 2012 12:59:55PM
*
8 points
[-]
As to political correctness, its great insidiousness lies in the fact that while you can complain about it in the manner of a religious person complaining abstractly about hypocrites and Pharisees, you can't ever back up your attack with specific examples, since if you do you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other issue that points to the truth of this subject becomes potentially tainted by the label as well. You actively have to resort to thinking up new models as to why the dragon is indeed obviously in the garage. You also need to have good models of how well other people can reason about the absence of the dragon to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country's Ombudsman once visiting my school for a talk, wearing a T-shirt that said "After a close-up no one looks normal." Doing a close-up of people's opinions reveals that no one is fully politically correct, which means that political correctness is always a viable weapon for shutting down debates via ad hominem.
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
Comment author:Viliam_Bur
27 June 2012 02:17:53PM
*
9 points
[-]
As to political correctness, its great insidiousness lies that while you can complain about it in a manner of a religious person complaining abstractly about hypocrites and Pharisees, you can't ever back up your attack with specific examples
My fault for using a politically charged word for a joke (but I couldn't resist). Let's do it properly now: What exactly does "political correctness" mean? It is not just any set of taboos (we wouldn't refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. Similarities are obvious, what exactly are the differences?
I am just doing a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: "It is forbidden to walk in a sacred forest.") The modern taboos pretend to be something else than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: "It is forbidden to walk in a sacred forest", the answer is: "No, there is no sacred forest, and you can walk anywhere you want, assuming you don't break any other law." And whenever a person is being tortured for walking in a sacred forest, there is always an alternative explanation, for example an imaginary crime.)
Thus, "political correctness" = a specific set of modern taboos + a denial that taboos exist.
If this is correct, then complaining, even abstractly, about political correctness is already a big achievement. Saying that X is an example of political correctness amounts to saying that X is false, which is breaking a taboo, and that is punished -- just like breaking any other taboo. But speaking about political correctness abstractly is breaking a meta-taboo built to protect the other taboos; and unlike those taboos, the meta-taboo is more difficult to defend. (How exactly would one defend it? By saying: "You should never speak about political correctness because everyone is allowed to speak about anything"? The contradiction becomes too obvious.)
Speaking about political correctness is the most politically incorrect thing ever. When this is done, only the ordinary taboos remain.
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
Of course, people recognize what is happening, and they may not like it. But it would still be difficult to have someone e.g. fired from a university only for saying, abstractly, that political correctness exists.
Comment author:[deleted]
27 June 2012 02:21:42PM
*
3 points
[-]
If this is correct, then complaining, even abstractly, about political correctness, is already a big achievement.
It has been said that even having a phrase for it has greatly reduced its power, because now people can talk about it, even if they are still punished for doing so.
Of course, people recognize what is happening, and they may not like it. But it would still be difficult to have someone e.g. fired from a university only for saying, abstractly, that political correctness exists.
True. However, a professor complaining about political correctness abstractly still has no tools to prevent its spread to the topic of, say, optimal gardening techniques. Also, if he has a long history of complaining about political correctness abstractly, he gets branded "controversial".
I think it was Sailer who said he is old enough to remember when being called controversial was a good thing, signalling something of intellectual interest, while today it means "move along, nothing to see here".
Doing a close-up of people's opinions reveals that no one is fully politically correct, which means that political correctness is always a viable weapon for shutting down debates via ad hominem.
Taboo "political correctness"... just for a moment. (This may be the first time I've ever used that particular LW locution.) Compare the accusations, "you are a hypocrite" and "you are politically incorrect". The first is common, the second nonexistent. Political correctness is never the explicit rationale for shutting someone out, in a way that hypocrisy can be, because hypocrisy is openly regarded as a negative trait.
So the immediate mechanism of a PC shutdown of debate will always be something other than the abstraction, "PC". Suppose you want to tell the world that women love jerks, blacks are dumber than whites, and democracy is bad. People may express horror, incredulity, outrage, or other emotions; they may dismiss you as being part of an evil movement, or they may say that every sensible person knows that those ideas were refuted long ago; they may employ any number of argumentative techniques or emotional appeals. What they won't do is say, "Sir, your propositions are politically incorrect and therefore clearly invalid, Q.E.D."
So saying "anyone can be targeted for political incorrectness" is like saying "anyone can be targeted for factual incorrectness". It's true but it's vacuous, because such criticisms always resolve into something more specific and that is the level at which they must be engaged. If someone complained that they were persistently shut out of political discussion because they were always being accused of factual incorrectness... well, either the allegations were false, in which case they might be rebutted, or they were true but irrelevant, in which case a defender can point out the irrelevance, or they were true and relevant, in which case shutting this person out of discussions might be the best thing to do.
It's much the same for people who are "targeted for being politically incorrect". The alleged universal vulnerability to accusations of political incorrectness is somewhat fictitious. The real basis or motive of such criticism is always something more specific, and either you can or can't overcome it, that's all.
Comment author:Viliam_Bur
27 June 2012 11:22:10PM
*
11 points
[-]
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits a belief that 2+2=5, and another professor of mathematics admits a belief that women are on average worse at math than men, both could be fired, but people will not be angry at the former. It's not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion, and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also many people are irrational, so even a proof would not convince everyone. But still I suspect that in case of factually incorrect opinion, opponents would at least try to prove it wrong, and would expect support from experts; while in case of politically incorrect opinion an experiment would be considered dangerous and experts unreliable. (Not completely sure about this part.)
Comment author:wedrifid
28 June 2012 03:11:15AM
1 point
[-]
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
It may feel like that for some people. For me the 'feeling' is factual incorrectness agnostic.
I agree that concern about the consequences of a belief is important to the cluster you're describing. There's also an element of "in the past, people who have asserted X have had motives of which I disapprove, and therefore the fact that you are asserting X is evidence that I will disapprove of your motives as well."
I am confused by this comment. I was agreeing with Viliam that concern about consequences was important, and adding that concern about motives was also important... to which you seem to be responding that the idea is that concern about consequences is important. Have I missed something, or are we just going in circles now?
Comment author:[deleted]
27 June 2012 12:41:18PM
*
2 points
[-]
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Quite recently even economics and its intersection with bias have apparently entered the territory of mindkillers. Economics was always political in the wider world, but considering this is a community dedicated to refining the art of human rationality, we can't really afford for such basic concepts to be mindkillers. Can we?
I mean, how could we explore mechanisms such as prediction markets without it? How can you even talk about any kind of maximising agents without invoking lots of econ talk?
My hypothesis is that LW is actually not politically neutral, but some political opinion P is implicitly present here as a bias. Opinions which are rational and compatible with P, can be expressed freely. Opinions which are irrational and incompatible with P, can be used as examples of irrationality (religion being the best example).
Yeah, that sounds about right.
Opinions which are rational but incompatible with P, are self-censored.
Not entirely, but I agree that they are likely far more often self-censored than those compatible with P. They are less often self-censored, I suspect, than on other sites with a similar political bias.
Opinions which are irrational but compatible with P are also never mentioned (because we are rational enough to recognize they can't be defended)
I'm skeptical of this claim, but would agree that they are far less often mentioned here than on other sites with a similar political demographic.
Comment author:[deleted]
27 June 2012 08:01:19AM
*
7 points
[-]
Summary of IRC conversation in the unoffical LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting on OB/LW in 2008 than today, yet I can't think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to "grow more stupid over time". I don't think LW is stupider; I just think it has grown more boring, and it definitely isn't a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
Some new place started by the same people? Before LW was OB, before OB was SL4, before that was... I don't know.
This post is made in the hopes people will let me know about the next good spot.
Comment author:Viliam_Bur
27 June 2012 11:27:34AM
*
6 points
[-]
I wasn't here in 2008, but it seems to me that the emphasis of this site is moving from articles to comments.
Articles are usually better than comments. People put more work into articles, and as a reward for this work, the article becomes more visible; successful articles are well remembered and hyperlinked. An article creates a separate page where one main topic is explored. If necessary, more articles may explore the same topic, creating a sequence.
Even some "articles" today don't have the qualities of the classical article. Some of them are just a question / a poll / a prompt for discussion / a reminder for a meetup. Some of them are just placeholders for comments (open thread, group rationality) -- and personally I prefer these, because they don't pollute the article-space.
Essentially we are mixing together an "article" paradigm and a "discussion forum" paradigm. But these are two different things. An article is a higher-quality piece of text. A discussion forum is just a structure of comments, without articles. Both have their place, but if you take a comment and call it an "article", of course it seems that the average quality of articles deteriorates.
Assuming this analysis is correct, we don't need much of a technical fix; we need a semantic fix, that is: the same software, but different rules for posting. And the rules need to be explicit, to avoid gradual spontaneous reverting.
"Discussion" for discussions: that is, for comments without a top-level article (open thread, group rationality, meetups). It is not allowed to create a new top-level article here, unless the community (in open thread discussion) agrees that a new type of open thread is needed.
"Articles" for articles: that is for texts that meet some quality treshold -- that means that users should vote down the article even if the topic is interesting, if the article is badly written. Don't say "it's badly written, but the topic is interesting anyway", but "this topic deserves a well-written article".
Then, we should compare the old OB/LW with the "Article" section, to make a fair comparison.
EDIT: How to get from "here" to "there", if this plan is accepted? We could start by renaming "Main" to "Articles", or we could even keep the old name; I don't care. But we mainly need to re-arrange the articles. Move the meetup announcements to "Discussion". Move the higher-quality articles from "Discussion" to "Main", and... perhaps leave the existing lower-quality articles in "Discussion" (to avoid creating another category) but from now on, ban creating more such articles.
EDIT: Another suggestion -- is it possible to make some articles "sticky"? Regardless of their date, they would always show at the top of the list (until the "sticky" flag is removed). Then we could always make the recent "Open Thread" and "Group Rationality" sticky, so they are the first things people see after clicking on Discussion. This could reduce a temptation to start a new article.
Comment author:Multiheaded
27 June 2012 11:39:24AM
*
2 points
[-]
before LW was OB. before OB was SL4, before that was...
There used to be solitary transhumanist visionaries/nutcases, like Timothy Leary or Robert Anton Wilson (very different in their amount of "rationality"), and there used to be, say, fans of Hofstadter or Jaynes, but the merging of "rationalism" and... orientation towards the future was certainly invented in the 1990s. Ah, what a blissful decade that was.
Comment author:Raemon
27 June 2012 10:31:58PM
*
2 points
[-]
Unpack what you mean by self-censorship exactly?
I regularly see people make frank comments about sexuality. There are maybe 4-5 people whose comments would be considered offensive in liberal circles, and many more people whose comments would be at least somewhat off-putting. Whenever the subject comes up (no matter who brings it up, and which political stripes they wear), it often explodes into a giant thread of comments that's far more popular than whatever the original thread was ostensibly about.
I sometimes avoid making sex-related comments until after the thread has exploded, because most people have already made the same points; they're just repeating themselves, because talking about pet political issues is fun. (When I do end up posting in them, it's almost always because my own tribal affiliations are rankled and my brain thinks that engaging with strangers on the internet is an effective use of my time. I'm keenly aware as I write this that my justifications for engaging with you are basically meaningless and I'm just getting some cognitive cotton candy.) Am I self-censoring in a way you consider wrong?
I've seen numerous non-gender political threads get downvoted with a comment like "politics is the mindkiller" and then fade away quietly. My impression is that gender threads (even if downvoted) end up getting discussed in detail. People don't self-censor, and that includes criticism of ideas people disagree with and/or are offended by.
Comment author:Viliam_Bur
28 June 2012 08:54:20AM
2 points
[-]
Whenever the subject comes up (no matter who brings it up, and which political stripes they wear), it often explodes into a giant thread of comments that's far more popular than whatever the original thread was ostensibly about.
I think this observation is not incompatible with a self-censorship hypothesis. It could mean that topic is somewhat taboo, so people don't want to make a serious article about it, but not completely taboo, so it is mentioned in comments in other articles. And because it can never be officially resolved, it keeps repeating.
What would happen if LW had a similar "soft taboo" about e.g. religion? What if the official policy were that we want to raise the sanity waterline by bringing basic rationality to as many people as possible, and since criticizing religion would make many religious people feel unwelcome, members are recommended to avoid discussing any religion insensitively?
I guess the topic would appear frequently in completely unrelated articles. For example in an article about Many Worlds hypothesis someone would oppose it precisely because it feels incompatible with Bible; so the person would honestly describe their reasons. Immediately there would be dozen comments about religion. Another article would explain some human behavior based on evolutionary psychology, and again, one spark, and there would be a group of comments about religion. Etc. Precisely because people wouldn't feel allowed to write an article about how religion is completely wrong, they would express this sentiment in comments instead.
We should avoid mindkilling like this: if one person says "2+2 is good" and other person says "2+2 is bad", don't join the discussion, and downvote it. But if one person says "2+2=4" and other person says "2+2=5", ask them to show the evidence.
What would happen if LW had a similar "soft taboo" about e.g. religion?
There is a rather large difference between LW attitudes to religion and to gender issues.
On religion, nearly everyone here agrees about religion: all religions are factually wrong, and fundamentally so. There are a few exceptions but not enough to make a controversy.
On gender, there is a visible lack of any such consensus. Those with a settled view on the matter may think that their view should be the consensus, but the fact is, it isn't.
I am against banning private_messaging. For comparison, MonkeyMind would be no loss, although since he last posted yesterday he probably hasn't been banned yet, and if not him, then there is no case here. private_messaging's manner is to rant rather than argue, which is somewhat tedious and unpleasant, but nowhere near a level where ejection would be appropriate.
Looking at his recent posts, I wonder if some of the downvotes are against the person instead of the posting.
The standing rules make a user's comments bannable if their comments are systematically and significantly downvoted, and the user keeps making a whole lot of the kind of comments that get downvoted. In that case, after giving notice to the user, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule was only applied to crackpot-like characters that got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it's still possible that he'll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
Once, an article was deleted on LW. Ever since, it has been repeatedly used as an example of how censored, intolerant, and cultish LW is. Can you imagine the reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn! If this happens, what will come next: captcha on the LW wiki?
Comment author:sketerpot
28 June 2012 10:56:21PM
3 points
[-]
Can you imagine a reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn!
Wait, what? Forums ban trolls all the time. It becomes necessary when you get big enough and popular enough to attract significant troll populations. It's hardly extreme and cultish, or even unusual.
Comment author:Rain
25 June 2012 03:31:24PM
*
2 points
[-]
Instead, we should spend hundreds or thousands of man-hours engaging with trolls? At least Roko had a positive goal.
From your link:
This about the Internet: Anyone can walk in. And anyone can walk out. And so an online community must stay fun to stay alive. Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.
Perhaps there should be some automatic account-disabling mechanism based on karma. If someone's total karma (not just in the last 30 days) falls below some negative level (for example -100), their account would be automatically disabled. Without direct intervention by a moderator, to make it less personal, but also quicker. Without deleting anything, to allow an easy fix in case of karma assassinations.
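As a minimal sketch, the proposed rule might look something like this (entirely hypothetical; the threshold and function names are illustrative, not part of any actual LW codebase):

```python
# Hypothetical sketch of the karma-based auto-disable rule proposed above.
# KARMA_THRESHOLD and the function names are illustrative assumptions.
KARMA_THRESHOLD = -100  # applies to total all-time karma, not last-30-days karma

def should_disable(total_karma: int) -> bool:
    """Disable the account automatically once all-time karma drops below the threshold."""
    return total_karma < KARMA_THRESHOLD

def can_restore(total_karma: int) -> bool:
    """Since nothing is deleted, re-enabling is trivial once the karma is corrected
    (e.g. after a karma assassination is reverted)."""
    return total_karma >= KARMA_THRESHOLD
```

Because the check needs no human judgment, it could run on every vote without moderator involvement, which is the point of the proposal.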
Note that you're excluding a middle that is perhaps worth considering. That is, the choice is not necessarily between "dealing with" a user account on an admin level (which generally amounts to forcing the user to change their ID and not much more), and spending hundreds or thousands of man-hours in counterproductive exchange.
A third option worth considering is not engaging in counterproductive exchanges, and focusing our attention elsewhere. (AKA, as you say, "don't feed the trolls".)
Comment author:[deleted]
30 June 2012 09:21:07AM
*
5 points
[-]
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they had read the Sequences and some other key material (Hanson etc.), but looking at debates this simply can't be true for more than a third of current LW users.
A usual idea of utopia is that chores-- repetitive, unsatisfying, necessary work to get one's situation back to a baseline-- are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
As the scope for complex task automation becomes broader, almost all problems become trivial. Satisfying hard work, with challenging and problem-solving elements, becomes a rare commodity. People work to identify non-trivial problems (a tedious process), which are traded for extortionate prices. A lengthy list of problems you've solved becomes a status symbol, not because of your problem-solving skills, but because you can afford to buy them.
As it's the result of about two minutes thought, I'm not very confident about how internally consistent this idea is.
If finding non-trivial problems is tedious work, I imagine people with a preference for tedious work (or who just don't care about satisfying problems) would probably rather buy art/prostitutes/spaceship rides, etc. This is the bit I find hardest to internally reconcile, as a society in which most work has become trivially easy is probably post-scarcity.
I personally don't find the search for non-trivial problems all that tedious, but if I could turn to a computer and ask "is [problem X] trivial to solve?", and it came back with "yes" 99.999% of the time, I might think differently.
Comment author:Yossarian
23 June 2012 04:15:48AM
2 points
[-]
After a week long vacation at Disney World with the family, it occurs to me there's a lot of money to be made in teaching utility maximization to families...mostly from referrals by divorce lawyers and family therapists.
I'm trying to memorise mathematics using spaced repetition. What's the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (i.e., what should the question and answer be?)
Comment author:ChristianKl
20 June 2012 09:45:33PM
*
4 points
[-]
When it comes to formulating Anki cards, it's good to keep the 20 rules from SuperMemo in mind.
The important thing is to understand before you memorize. You should never try to memorize a proof without understanding it first.
Once you have understood the proof, think about what's interesting about it.
Ask questions like: "What axioms does the proof use?" "Does the proof use axiom X?"
Try to find as many questions with clear answers as you can.
Being redundant is good.
If you find yourself asking a certain question frequently, invent a shorthand for it: axioms(proof X) can replace "What axioms does the proof use?"
If you really need to remember the whole proof then memorize it step by step.
Proof A:
Do A
Do B
becomes 2 cards:
Proof A:
[...]
Proof A:
Do A
[...]
If you have a long proof that could mean 9 steps and 9 cards.
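The step-by-step splitting described above is mechanical enough to automate; a hypothetical sketch (the function name is made up; Anki itself just takes plain question/answer text fields):

```python
def proof_cards(title, steps):
    """Turn an n-step proof into n incremental cards.

    Card k shows the first k steps and asks for step k+1, matching the
    'Proof A: Do A / [...]' pattern described above.
    """
    cards = []
    for k in range(len(steps)):
        question = "\n".join([f"{title}:"] + steps[:k] + ["[...]"])
        answer = steps[k]
        cards.append((question, answer))
    return cards

# A two-step proof becomes two cards, a nine-step proof becomes nine.
for q, a in proof_cards("Proof A", ["Do A", "Do B"]):
    print(repr(q), "->", repr(a))
```

Each card's question ends in "[...]" so the prompt always looks the same; only the amount of visible context grows.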
Comment author:dbaupp
20 June 2012 10:47:21AM
1 point
[-]
I've been doing something similar (maths in an Anki deck), and I haven't found a good way of doing so. My current method is just asking "Prove x" or "Outline a proof of x", with the proof wholesale in the answer, and then I run through the proof in my head calling it "Good" if I get all the major steps mostly correct. Some of my cards end up being quite long.
I have found that being explicit with asking for examples vs definitions is helpful: i.e. ask "What's the definition of a simple ring?" rather than "What's a simple ring?".
Comment author:dbaupp
21 June 2012 02:43:57AM
*
0 points
[-]
I find that having proper sentences in the questions means I can concentrate better (less effort to work out what it's asking, I guess), but each to their own.
If you have 50 cards in the style "def(...)", then it doesn't take any effort to work out what they're asking anymore.
Rereading "What's the " over a thousand times wastes time. When you use Anki over longer periods, reducing the amount of time it takes to answer a card is essential.
Comment author:D_Malik
24 June 2012 05:37:48PM
0 points
[-]
A method that I've been toying with: dissect the proof into multiple simpler proofs, then dissect those even further if necessary. For instance, if you're proving that all X are Y, and the proof proceeds by proving that all X are Z and all Z are Y, then make 3 cards:
* One for proving that all X are Z.
* One for proving that all Z are Y.
* One for proving that all X are Y, which has as its answer simply "We know all X are Z, and we know all Z are Y."
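The dissection into lemma cards can be sketched the same way; a hypothetical illustration (the placeholder answers for the lemma cards would of course be the real sub-proofs, and could themselves be dissected recursively):

```python
def dissect(goal, lemmas):
    """Split a proof of `goal` into one card per lemma, plus a top-level
    card whose answer merely chains the lemmas together."""
    cards = [(f"Prove: {lemma}", f"(full proof of {lemma})") for lemma in lemmas]
    chain = "We know " + ", and we know ".join(lemmas) + "."
    cards.append((f"Prove: {goal}", chain))
    return cards

for q, a in dissect("all X are Y", ["all X are Z", "all Z are Y"]):
    print(q, "|", a)
```

The top-level card stays cheap to review because its answer is just the chaining step, not the full proofs of the lemmas.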
That said, you should of course be completely certain that memorizing proofs is worthwhile. Rule of thumb: if there's anything you could do that would have a higher ratio of awesome to cost than X, don't do X before you've done that.
Comment author:djcb
17 June 2012 09:21:39PM
*
2 points
[-]
I read quite a bit, and I really like some of the suggestions I found on LW. So, my question is: is there any recent, or not-so-recent-but-really-good, book you would recommend? Topics I'd like to read more about are:
evolutionary psychology (I read some Robert Wright, I'd like to read something a bit more solid)
status/prestige theory (Robin Hanson uses it all the time, but is there some good text discussing this?)
I'm happy to read pop-sci, as long as it's written with a skeptical, rationalist mindset. I.e. I liked Linden's The accidental mind, but take Gladwell's writings with a rather big grain of salt.
Comment author:djcb
04 July 2012 03:04:49PM
1 point
[-]
I just finished reading it. The start is promising, discussing consumer behavior from the signaling/status perspective. There's some discussion of the Big Five personality traits + general intelligence, which was interesting (and I'll need to look into a bit deeper). It shows how these traits influence our buying habits, and the crazy things people do for a few status points...
The end of the book proposes some solutions to hyper-consumerism, and this part I did not particularly like -- in a few pages the writer comes up with some far-far-reaching plans (consumption tax etc.) to influence consumers; all highly speculative, not likely to ever be realized.
Apart from the end, I liked it; the writer is quick and witty, and provides food for thought.
Comment author:maia
18 June 2012 12:46:44AM
1 point
[-]
I mean the general line of reasoning that goes, "Go do the highest-paying job you can get and then donate your extra money to AMF or other highly effective charities." The most oft-cited high-paying job seems to be to work on Wall Street or some such.
Comment author:Viliam_Bur
15 June 2012 11:17:45AM
2 points
[-]
I would like to try some programming in Lisp, could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want free software that works out of the box; preferably on a Windows machine, without having to install a Linux emulator first. (If no such thing exists, please tell me, and recommend the second-best possibility.)
I would also like to have a decent development environment; something that allows me to manage multiple source code files, does syntax highlighting, and shows documentation for the functions I am writing. Again, preferably free, and working out of the box on a Windows machine. Simply, I would like an equivalent of what Eclipse is for Java.
Then, I would like some learning resources, and information where can I find good open-source software written in Lisp, preferably games.
Comment author:mstevens
15 June 2012 12:39:49PM
7 points
[-]
My research suggests Clojure is a lisp-like language most suited to your requirements. It runs on the JVM so should be relatively low hassle on Windows. I believe there's some sort of Eclipse support but I can't confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
Well, if your goal is to try Lisp out for learning purposes, but on Windows, you could start with DrRacket. http://racket-lang.org/
It is a reasonable IDE; it has some GUI libraries included, it's open-source and cross-platform, and it works fine on Windows.
Racket is based on the Scheme language (which is part of the Lisp family). It has a mode for Scheme as described in the R6RS or R5RS standards, and it has a few not-fully-compatible dialects.
I use Common Lisp, but not under Windows. Common Lisp has more cross-implementation libraries, which can be useful. Probably EQL is the easiest to set up under Windows (it is ECL, a Common Lisp implementation, merged with Qt for GUI; I remember there being a bundled download). Maybe CommonQt or Cells-GTK would work. I remember that some of the Common Lisp package management systems have significant problems under Windows, or require either Cygwin or MSYS (so they can use tar, gzip, mkdir, etc. as if they were on a Unix-like system).
Comment author:Viliam_Bur
15 June 2012 03:12:45PM
1 point
[-]
My goals are: 1) to get the "Lisp experience" with minimum overhead; and 2) to use the best available tools.
And I hope these two goals are not completely contradictory. I want to be able to write my own application on my computer conveniently after a few minutes, and to fluently progress to more complex applications. On the other hand, if I happen to later decide that Lisp is not for me, I want to be sure it was not only because I chose the wrong tools.
Thanks for all the answers! I will probably start with Racket.
Comment author:Pavitra
17 June 2012 06:51:09AM
1 point
[-]
For a certain value of "the Lisp experience", Emacs may be considered more or less mandatory. In order to recommend for or against it I would need more precise knowledge of your goals.
Comment author:Viliam_Bur
17 June 2012 08:21:41AM
1 point
[-]
I tried Emacs and decided that I dislike it. I understand the reason why it is like that, but I refuse to lower my user interface expectations that far.
Generally, I have noticed a trend that software which is praised as superior often comes with a worse user interface, or ignores some other part of the user experience. I can understand that software with a smaller userbase cannot put enough resources into its non-critical parts. That makes sense. But I suspect there later appears a mind-killing train of thought, which goes like this: "Our software is superior. Our software does not have feature X. Therefore, not having feature X is an advantage, because <rationalization>." As in: we don't need a 21st-century-style user interface, because good programmers don't need such things.
By wanting a "Lisp experience" I mean I would like to experience (or falsify the existence of) the nirvana frequently described by Paul Graham. Not to replicate Richard Stallman's working conditions in the 1980s 1:1. :D
A perfect solution would be to combine the powerful features of Lisp with the convenience of modern development tools. I emphasize the convenience for pragmatic reasons, but also as a proxy for "many people with priorities similar to me are using it".
Comment author:gwern
17 June 2012 09:55:22PM
1 point
[-]
Generally, I have noticed a trend that software which is praised as superior often comes with a worse user interface, or ignores some other part of the user experience.
Consider an equilibrium of various software products, none of which are strictly superior or inferior to each other. Upon hearing that the best argument someone can make for software X is that it has feature Y (which is unrelated to UI), should your expectation of good UI go up or go down?
(To try it a different way: suppose you are in a highly competitive company like Facebooglazon and you meet a certain programmer who is the rudest most arrogant son of a bitch you ever met - yet he is somehow still employed there. What should you infer about the quality of the code he writes?)
There are no "best available tools" without a specified target, unfortunately. When you feel that Racket constrains you, come back to the open thread of the week and ask about what you would like to see in it: SBCL has better performance, ECL is easier to use for standalone executables, etc. Also, maybe someone will recommend an in-Racket dialect that would work better for you for those tasks.
Peter Norvig's out-of-print Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp can be interesting reading. It develops various classic AI applications like game tree search and logic programming, making extensive use of Lisp's macro facilities. (The book is 20 years old and introductory; it's not recommended for learning anything very interesting about artificial intelligence.) Using the macro system for metaprogramming is a big deal with Lisp, but a lot of material for Scheme in particular doesn't deal with it at all.
The already-mentioned Clojure seems to be where a lot of real-world Lisp development is happening these days, and it's also innovating on the standard syntax conventions of Common Lisp and Scheme in interesting ways. Clojure will interface with Java's libraries for I/O and multimedia. Since Clojure lives in the Java ecosystem, you can basically start with your preconceptions about developing for the JVM and go from there to guess what it's like. If you're OK with your games ending up as JVM programs, Clojure might work.
Comment author:mstevens
15 June 2012 01:10:15PM
5 points
[-]
I'm feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I'm vaguely uncomfortable with the attitudes I'm developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven't managed to come up with anything to firm up my vague emotions into something specific.
Comment author:[deleted]
16 June 2012 02:09:51PM
*
1 point
[-]
I'm only posting this to clarify. Old habits do indeed die hard, but I so far haven't changed my mind despite receiving some interesting email on the topic. Hopefully this will become more apparent after a month or two of inactivity.
Comment author:[deleted]
15 June 2012 05:57:21PM
3 points
[-]
I was feeling fairly negative on Less Wrong recently. I ended up writing down a lot of things that bothered me in a half-formed, angry Google Doc rant, saving it...
and then going back to reading Less Wrong a few days later.
It felt refreshing though, because Less Wrong has flaws and you are allowed to notice them and say to yourself "This! Why are some people doing this! It's so dumb and silly!"
That being said, I'm not sure that all of the arguments that my straw opponents were presenting in the half formed doc are actually as weak as I was making them out to be. But it did make me feel more positive overall simply summing up everything that had been bugging me at the time.
It would very much help if you could name three examples of each of your complaints, this would help you see if this really is the source of your unease. It would also help others figure out if you are right.
an uncomfortable air of superiority
Overestimating our rationality and generally feeling clearer thinkers than anyone ever? Or perhaps unwilling to update on outside ideas like Konkvistador recently complained?
a bit too much association with right wing politics.
There is a lot of right-wing politics on the IRC channel, but overall I don't think I've seen much on the main site. On net, the site's demographics are, if anything, remarkably left-wing.
Some of the PUA stuff is a bit weird (not discussed directly on the site so much but in related contexts)
The PUA stuff may come off as weird due to inferential distances, or people accumulating strange ideas because they can't sanity-check them. Both are the result of a community norm that now seems to strongly avoid gender issues, because we've proven time and again to be incapable of discussing them the way we discuss most other things. This is a pattern that seems to go back to the old OB days.
Comment author:EStokes
15 June 2012 09:57:30PM
2 points
[-]
I use LW casually and my attitude towards it is pretty neutral/positive but I recently got downvoted something like 10 times in past comments, it seems. A karma loss of 5%, and it's a lot, comparing the amount of karma I have to how long I've been here. I didn't even get into a big argument or anything, the back-and-forth was pretty short. So my attitude toward LW is very meh right now. Sorry, sort of wanted to just say this somewhere. ugh :/
Comment author:jsalvatier
15 June 2012 03:25:53PM
*
4 points
[-]
A question about acausal trade
(btw, I couldn't find a good link for acausal trade introduction discussion; I would be grateful for one)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That's a surprising conclusion to me which I hadn't seen before, but also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.
Let's say we're in a big universe with many, many chances for intelligent life, but most of them are so far apart that they will never meet each other. Let's also say that UDT/TDT-like decision theories are in some sense the obviously correct decision theories to follow, so that many civilizations, when they build an AI, use something like UDT/TDT. At their inception, these AIs will have very different goals, since the civilizations that built them would have very different evolutionary histories.
If many of these AIs can observe that the universe is such that there will be other UDT/TDT AIs out there with different goals, then each AI will trade acausally with the AIs it thinks are out there. Presumably each AI will have to study the universe and figure out a probability distribution over the goals of those AIs. Since the universe is large, each AI will expect many other AIs to be out there and thus bargain away most of its influence over its local area. Thus, the starting goals of each AI will only have a minor influence on what it does; each AI will act as if it has some combined utility function.
Comment author:bogus
17 June 2012 06:58:34PM
5 points
[-]
In a situation of "causal trade", does everyone end up with the same utility function?
The Coase theorem does imply that perfect bargaining will lead agents to maximize a single welfare function. (This is what it means for the outcome to be "efficient".) Of course, the welfare function will depend on the agents' relative endowments (roughly, "wealth" or bargaining power).
(Also remember that humans have to "simulate" each other using logic-like prior information even in the straightforward efficient-causal scenario—it would be prohibitively expensive for humans to re-derive all possible pooling equilibria &c. from scratch for each and every overlapping set of sense data. "Acausal" economics is just an edge case of normal economics.)
The most glaring problem seems to be how it could deduce the goals of other AIs. It either implies the existence of some sort of universal goal system, or allows information to propagate faster than c.
Comment author:jsalvatier
15 June 2012 07:52:19PM
1 point
[-]
What I had in mind was that each of the AIs would come up with a distribution over the kinds of civilizations which are likely to arise in the universe by predicting the kinds of planets out there (which is presumably something you can do since even we have models for this) and figuring out different potential evolutions for life that arises on those planets. Does that make sense?
I was going to respond saying I didn't think that would work as a method, but now I'm not so sure.
My counterargument would be to suggest that there's no goal system which can't arbitrarily come about as a Fisherian Runaway, and that our AI's acausal trade partners could be working on pretty much any optimisation criteria whatsoever. Thinking about it a bit more, I'm not entirely sure the Fisherian Runaway argument is all that robust. There is, for example, presumably no Fisherian Runaway goal of immediate self-annihilation.
If there's some sort of structure to the space of possible goal systems, there may very well be a universally derivable distribution of goals our AI could find, and share with all its interstellar brethren. But there would need to be a lot of structure to it before it could start acting on their behalf, because otherwise the space would still be huge, and the probability of any given goal system would be dwarfed by the evidence of the goal system of its native civilisation.
There's a plot for a Cthulhonic horror tale lurking in here, whereby humanity creates an AI, which proceeds to deduce a universal goal preference for eliminating civilisations like humanity. Incomprehensible alien minds from the stars, psychically sharing horrible secrets written into the fabric of the universe.
Comment author:JenniferRM
15 June 2012 11:49:38PM
*
-1 points
[-]
That's a surprising conclusion to me which I hadn't seen before, but also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to it to me.
Perhaps it is not wise to speculate out loud in this area until you've worked through three rounds of "OK, so what are the implications of that idea" and decided that it would help people to hear about the conclusions you've developed three steps back. You can frequently find interesting things when you wander around, but there are certain neighborhoods you should not explore with children along for the ride until you've been there before and made sure it's reasonably safe.
Comment author:tenlier
17 June 2012 04:25:12PM
2 points
[-]
Not just going meta for the sake of it: I assert you have not sufficiently thought through the implications of promoting that sort of non-openness publicly on the board. Perhaps you could PM jsalvatier.
I'm lying, of course. But interesting to register points of strongest divergence between LW and conventional morality (JenniferRM's post, I mean; jsalvatier's is fine and interesting).
Comment author:Manfred
26 June 2012 04:07:21PM
*
0 points
[-]
One problem is that, in order to actually get specific about utility functions, the AI would have to simulate another AI that is simulating it - that's like trying to put a manhole cover through its own manhole by putting it in a box first.
If we assume that the computation problems are solved, a toy model involving robots laying different colors of tile might be interesting to consider. In fact, there's probably a post in there. The effects will be different sizes for different classes of utility functions over tiles. In the case of infinitely many robots with cosmopolitan utility functions, you do get an interesting sort of agreement, though.
Comment author:DanArmak
20 June 2012 07:33:22PM
*
2 points
[-]
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn't like it and don't recommend it (I read it because I loved other books by Pratchett, but there's no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci fi, which makes it doubly strange.
Here's the problem: in this near-future story, gurer vf n Sbbzvat NV, nyernql fhcrevagryyvtrag naq nf cbjreshy nf n znwbe angvba, juvpu jvyy cebonoyl orpbzr zber cbjreshy guna gur erfg bs gur jbeyq pbzovarq va nabgure lrne be fb. And nobody in the world cares! It's not a plot point! I kept expecting it to at least be mentioned by one of the characters, but they're all completely 'meh'. Instead they obsess over minor things like arj ubzvavq fcrpvrf fznegre guna puvzcf, ohg abg nf fzneg nf uhznaf.
Have I been spoiled by reading too much LW? Has this happened to others with other fiction?
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a lvl ~35 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice.
Comment author:Plasmon
18 June 2012 05:54:50PM
0 points
[-]
What would a poster designed to spread awareness of a less wrong meetup look like? How can it appeal to non-technophiles / students of social sciences?
Comment author:Multiheaded
15 June 2012 10:10:23AM
*
-6 points
[-]
Ideological link of the week:
A rousing war-screech against Reaction (and bourgeois liberalism) by eXile's Connor Kilpatrick. Deliciously mind-killed (and reviewing an already mind-killed book), but kind of perceptive in noting that the Right indeed offers very tangible, down-to-earth benefits to the masses - usually my crowd is in happy denial about that.
Comment author:[deleted]
16 June 2012 12:52:10AM
*
6 points
[-]
I am declaring this article excommunicate traitoris, because I am reading through it and not having a virulent reaction against it, but instead finding it to be reasonable, if embellishing. I take that and the community's strong reaction against it as evidence that the article is effectively mind-killing me due to my political leanings and that I should stop reading now.
Comment author:RowanE
16 June 2012 10:43:49PM
1 point
[-]
I read any War Nerd article that comes out, and occasionally read other articles on the site, and my reaction has been similar. The political stuff they say seems, well, "reasonable, if embellishing", and I'd been worrying about the possibility that it was just true.
I should probably follow suit on this, and avoid any non-War-Nerd articles on eXile to avoid being mind-killed, although a part of me worries that I'm simply following group mentality on the Lesswrong cult.
I agree, it seems "reasonable, if embellishing", on the other hand, there are many other political blogs with very different politics that also seem "reasonable, if embellishing".
Comment author:Viliam_Bur
15 June 2012 10:42:25AM
*
4 points
[-]
It feels like an exercise in how many cognitive errors you can commit in one text (though in later paragraphs they get repetitive). As if the author is not even pretending to be sane, which is probably how the target audience likes it. I tried to read the text anyway, but halfway through, my brain was no longer able to process it.
If I had to write an abstract of this article, it would be like this:
"All my enemies (all people who disagree with me) are in fact the same: inhumanly evil. All their arguments are enemy soldiers; they should be ignored, or responded to with irrational attacks and name-calling."
If there was anything more (except for naming specific enemies), I was not able to extract it.
An ok read, despite being very much more partisan and harsh than what is usually discussed or linked on LW.
Despite libertarian efforts to recruit the young and liberal-minded into the flock with promises of ending the wars, closing Guantanamo and calling off the cozy relationship with the Likudniks, The Reactionary Mind makes it clear that there’s no fundamental difference between any of these right-wing breeds, and thus common ground is neither possible nor desirable, particularly with the libertarians. “When the libertarian looks out upon society,” writes Robin, “he does not see isolated individuals; he sees private, often hierarchical, groups, where a father governs his family and an owner his employees.”
Those darn out group members! Der all the same I tells ya!
Comment author:OrphanWilde
25 June 2012 07:34:04PM
*
1 point
[-]
Suggestion:
I consider tipping to be a part of the expense of dining - bad service bothers me, but not tipping also bothers me, as I don't feel like I've paid for my meal.
So I've come up with a compromise with myself, which I think will be helpful for anybody else in the same boat:
If I get bad service, I won't tip (or tip less, depending on how bad the service is). But I -will- set aside what I -would- have tipped, which will be added to the tip the next time I receive good service.
Double bonus: When I get bad service at very nice restaurants, the waiter at the Steak and Shake I more regularly eat at (it's my favored place to eat) is going to get an absurdly large tip, which amuses me to no end.
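The carry-over scheme above amounts to a tiny bit of bookkeeping; a hedged sketch (the 20% rate and the class name are made up for illustration):

```python
class TipJar:
    """Carry-over tipping: withheld tips accumulate and are added
    to the tip at the next good-service meal."""

    def __init__(self, rate=0.20):
        self.rate = rate       # illustrative tip rate
        self.carried = 0.0     # tips withheld for bad service so far

    def tip(self, bill, good_service):
        owed = round(bill * self.rate, 2)
        if good_service:
            paid = owed + self.carried  # pay out everything earmarked
            self.carried = 0.0
            return paid
        self.carried += owed  # withhold, but keep it earmarked
        return 0.0

jar = TipJar()
print(jar.tip(100, good_service=False))  # → 0.0 (withholds $20)
print(jar.tip(10, good_service=True))    # → 22.0 ($2 owed + $20 carried)
```

The invariant is that the diner always ends up paying the full "expense of dining" over time; only the recipient changes.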
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn't need to be understood in detail by anyone, the expectation only has to be close enough to the real effect, so the details of expectation-caused phenomena can lawfully exist independently of the content of people's expectations about them. Since a (justified) expectation is sufficient for something to happen, all sorts of miracles can happen. Since to happen, a miracle has to be expected to happen, it's necessary for someone to know about the miracle and to expect it to happen. Learning about a miracle from an untrustworthy (or mistakenly trusted) source doesn't make it happen, it's necessary for the knowledge of possibility (and sufficiently clear description) of a miracle to be communicated reliably (within the tolerance of what counts for an effect to have been correctly anticipated). The path of a powerful wizard is to study the world and its history, in order to make correct inferences about what's possible, thereby making it possible.
(Previously posted to the Jan 2012 thread by mistake.)
A poll, just for fun. Do you think that the rebels/Zionists in The Matrix were (mostly or completely) cruel, deluded fundamentalists committing one atrocity after another for no good reason, and that in-universe their actions were inexcusable?
I agree (the franchise established itself as rather one-dimensional... in about the first 40 minutes) - but hell, I get into discussions about TWILIGHT, man. I'm a slave to public discourse.
Comment author:[deleted]
26 June 2012 04:42:28PM
0 points
[-]
Wow. That sequence was drastically less violent than I remembered it being. I noticed (for I believe the first time) that they actually made some attempt to avoid infinite ammo action movie syndrome. Also I must have thought the cartwheel bit was cool when I first saw it, but now it looks quite ridiculous and/or dated.
Comment author:Viliam_Bur
19 June 2012 08:30:11AM
1 point
[-]
What is the meaning of the three-digit codes attached to American university courses? Such as: "Building a Search Engine (CS101)", "Crunching Social Networks (CS215)", "Programming A Robotic Car (CS373)", currently on Udacity.
Seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject specific) standard? Are they arbitrary (perhaps with general trend to give more difficult lessons higher numbers)?
The first digit is the most important. It indicates the "level" of the course: 100/1000 courses are freshman level, 200/2000 are sophomore level, etc. There is some flexibility in these classifications, though. Examples: My undergraduate university used 1000 for intro level, 2000 for intermediate level, 4000 for senior/advanced level, and 6000 for graduate level. (3000 and 5000 were reserved for courses at a satellite campus.) My graduate university uses 100, 200, 300, 400 for the corresponding undergraduate year levels, and 600, 700, 800 for graduate courses of increasing difficulty levels.
The other digits in the course number often indicate the rough order in which courses should be taken within a level. This is not always the case; sometimes they are just arbitrary, or they may indicate the order in which courses were added to the institute's offerings.
In general, though the numbers indicate the levels of the courses and the order in which they "should" be taken, students' schedules need not comply precisely (outside of course-specific prerequisite requirements).
It varies from institution to institution, but generally the first number indicates the year you're likely to study it, so "Psychology 101" is the first course you're likely to study in your first year of a degree involving psychology, which is why it's the introduction to the subject. The numbering gets messy for a variety of reasons.
I should point out I'm not an American university student, but this style of numbering system is becoming prevalent throughout the English-speaking world.
Comment author:Nornagest
20 June 2012 09:05:32PM
*
0 points
101 is stereotypically the introduction to the subject, but this sort of thing actually varies quite a bit between universities. Mine dropped the first digit for survey courses and introductory material; survey courses were generally higher two-digit numbers (e.g. Geology 64, Planetary Geology), while introductory courses were more often one-digit or lower two-digit numbers (e.g. Math 3A, Introduction to Calculus). Courses intended to be taken in sequence had a letter appended. Aside from survey courses, higher numbers generally indicated more advanced or specialized classes, though not necessarily more difficult ones.
Three digits indicated an upper-division (i.e. nominally junior- or senior-level) or graduate-level course. Upper-division undergrad courses were usually 100-level, and the 101 course was usually the first class you'd take that was intended only for people of your major; CS 101 was Algorithms and Abstract Data Types for me, for example, and I took it late in my sophomore year. Graduate courses were 200-level or higher.
Comment author:Viliam_Bur
16 June 2012 04:01:27PM
*
1 point
I don't really follow the "timeless decision" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent would do, by simulating its algorithm. (This is my very uninformed understanding of the "timeless" part: I don't have to wait until you do X, because I can already predict whether you would do X, and behave accordingly. And you don't have to wait for my reaction, because you can predict it too. So let's predict-cause each other to cooperate, and win mutually.)
But you can only simulate one specific situation at a time, then another, all while hoping that the other agent does not try to run a simulation of you, which would get you both into an infinite loop. And you can't even tell in advance whether the agent will try to run your simulation or not.
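A toy Python sketch of the infinite-loop worry (purely illustrative: the names and the depth cutoff are my own invention, and serious decision-theory work reasons with proofs about programs rather than brute simulation):

```python
def make_simulating_agent():
    """Build a toy agent that decides by simulating its opponent."""
    def agent(opponent, depth=0):
        # Recursion cutoff: without it, two such agents simulating each
        # other would loop forever, the infinite regress described above.
        if depth > 3:
            return "defect"
        # "Simulate" the opponent by running its decision procedure,
        # handing it our own procedure so it can simulate us in turn.
        return "cooperate" if opponent(agent, depth + 1) == "cooperate" else "defect"
    return agent

alice = make_simulating_agent()
bob = make_simulating_agent()
result = alice(bob)  # both hit the cutoff, so "defect" propagates back up
```

Both agents bottom out at the cutoff, and the default "defect" propagates all the way up, so naive mutual simulation alone doesn't yield cooperation here.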
Comment author:wedrifid
18 June 2012 01:29:35AM
*
1 point
I don't really follow the "timeless decision" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent would do, by simulating its algorithm.
Thinking in terms of "simulating their algorithm" is convenient for us because we can imagine the agent doing it, and for certain problems a simulation is sufficient. However, the actual process involved is any reasoning at all based on the algorithm. That includes simulations, but also creating mathematical proofs based on the algorithm that allow generalizable conclusions about things the other agent will or will not do.
An agent that wishes to facilitate cooperation - or that wishes to prove credible threat - will actually prefer to structure their own code such that it is as easy as possible to make proofs and draw conclusions from that code.
Is anyone familiar with any statistical or machine-learning based evaluations of the "Poverty of Stimulus" argument for language innateness (the hypothesis that language must be an innate ability because children aren't exposed to enough language data to learn it properly in the time they do).
I'm interested in hearing what actually is and isn't impossible to learn from someone in a position to actually know (ie: not a linguist).
If you believe that some model of computation can be expressed in arithmetic (this implies expressibility of the notion of a correct proof), Gödel's first theorem is more or less an analysis of "This statement cannot be proved". If it can be proved, it is false, and there is a provable false statement; if it cannot be proved, it is an unprovable true statement.
But most of the effort in proving Gödel's theorem has to be spent on showing that you cannot go halfway: if you have a theory big enough to express basic arithmetical facts, you have to have full reflection. It can be stated in various ways, but it requires a technically accurate proof - I am not sure how well it would fit into a cartoon.
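For reference, the construction described above is the standard diagonal argument; in the usual textbook notation (nothing specific to this thread):

```latex
% Diagonal lemma applied to the formula \neg\mathrm{Prov}(x):
% there is a sentence G such that
\mathrm{PA} \vdash\; G \leftrightarrow \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
% If PA proves G, then Prov(<G>) holds, making G false: PA proves a falsehood.
% If PA does not prove G, then Prov(<G>) fails, so G is true but unprovable.
% The "full reflection" work is in showing that any theory expressing basic
% arithmetic can represent Prov(x) at all.
```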
Could you state explicitly what you want to find - just the non-technical part, or both?
I find that, sporadically, I act like a total attention whore around people whom I respect and may talk to more or less freely - whether I know them or we're only distantly acquainted. This mostly includes my behavior in communities like this, but also in class and wherever else I can interact informally with a group of equals. I talk excitedly about myself, about various things that I think my audience might find interesting, etc. I know it might come across as uncouth, annoying and just plain abnormal, but I don't even feel a desire to stop. It's not due to any drugs either. When I see that I've unloaded too much on whoever I'm talking to, I try to apologize and occasionally even explain that I have a neural condition.
I believe that it's a side effect of me deprogramming myself from social anxiety after getting all shaken up by Evangelion. In high school and earlier, I was really really shy, resented having to talk to anyone but a few friends, felt rage at being dragged into conversations, etc. But now it's like my personality has shifted a deviation or two towards the extraverted side. So such impulses, which were very rare in my childhood, became prominent, and this weirds me out. I still have a self-image of a very introverted guy, but now I'm often compelled to behave differently.
[This comment was caused by such an impulse too. Again, I'm completely sober, emotionally neutral and so on. I just have the urge to speak up.]
With regards to Optimal Employment, what does anyone think of the advice given in this article?
"...There are career waiters in Los Angeles and they’re making over $100,000 a year.”
That works out (for the benefit of other Europeans) at €80,000 - an astonishing amount of money, to me at least. LA seems like a cool place, with a lot of culture and more interesting places within easy traveling distance than Dublin has.
Comment author:knb
26 June 2012 10:59:46AM
*
3 points
what does anyone think of the advice given in this article?
To make this kind of money, you'll obviously have to get a job in an expensive restaurant, and remember there are tons of people there who have years of experience and desperately want one of these super-high value jobs. Knowing the right person will be vital if you want to score one of these positions.
This is based on tips, so you will have to be extremely charming, charismatic, and attractive.
Living in Los Angeles is expensive to start with, and there is a major premium if you want to live in a non-terrifying part of the city.
The economy of Los Angeles is not doing well, hasn't been for years, and probably won't for the foreseeable future. This probably hurts the prospects for finding a high-paying waiter job.
Honestly, moving to L.A. to seek a rare super-high paying waiter job seems like a terrible idea to me.
To make this kind of money, you'll obviously have to get a job in an expensive restaurant, and remember there are tons of people there who have years of experience and desperately want one of these super-high value jobs. Knowing the right person will be vital if you want to score one of these positions.
That's the main issue I've been having with employment here; though I'm a good waiter, most places want two years' experience in fine dining, which I don't have.
I don't know if the claim is true or not, but I don't find it too implausible. It helps to remember that LA is frequented by a great many newly wealthy celebrities.
It does not follow that my chances of getting such a job in L.A. are high enough to be worth considering.
Why do the (utterly redundant) words "Comment author:" now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
Apparently it was a technical kludge to allow Google searching by author. There has been some discussion at the place where issues are reported.
I would like to say thanks to everyone who helped me out in the comments here. You genuinely helped me. Thank you.
I'm going to reduce (or understand someone else's reduction of) the stable AI self-modification difficulty related to Löb's theorem. It's going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
The slides for Eliezer's Singularity Summit talk are available here, reading which is considerably nicer than squinting at flv compression artifacts in the video for the talk, also available at the previous link. Also, a transcription of the video can be found here.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer's talk are at the top because they're reference material. Remaining links are organized by my reading priority:
Explicit Provability and constructive semantics by Artemov
I don't fully understand this difference between codings of proofs in the standard model vs a non-standard model of arithmetic (On which a little more here). So I also intend to read,
Truth and provability by Jervell, which looks to contain a bit of model theory in the context of modal logic and provability.
Metatheory and Reflection in Theorem Proving by Harrison. This paper was a very thorough review of reflection in theorem provers at the time it was published. The history of theorem provers in the first nine pages was a little hard to digest without knowing the field, but after that he starts presenting results.
Explicit Proofs in Formal Provability Logic by Goris. More results on the kind of justification logic set out by Artemov. Might skip if the Artemov papers stop looking promising.
A new perspective on the arithmetical completeness of GL by Henk. Might explain further the extent to which ∃x Proof(x, F), the non-constructive provability predicate, adequately represents provability.
A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points by Yanofsky. Analyzes a bunch of mathematical results involving self reference and the limitations on the truth and provability predicates.
Provability as a Modal Operator with the models of PA as the Worlds by Herreshoff. I just want to see what kind of analysis Marcello throws out, I don't expect to find a solution here.
Random thought, if we assume a large universe, does that imply that somewhere/when there is an novel that just happens to perfectly resemble our lives? If it does I am so going to acausally break the fourth wall. Bonus questions, how does this intersect with the rules of the internet?
Don't worry, whether you do this or not, there is a novel where you do and a novel where you don't, without any other distinctions.
Seems to imply it. Conversely, if you go to the "all possible worlds exist" level of a multiverse, then each novel (or other work of fiction) in our world describes events that actually happen in some other world. If you limit yourself to just the "there's an infinite amount of stuff in our world" multiverse, then only novels describing events that are physically and otherwise possible describe real events.
Jorge Luis Borges, The Library of Babel
That story has always bothered me. People find coherent text in the books too often, way too often for chance. If the Library of Babel really did work as the story claims, people would have given up after seeing ten million books of random gibberish in a row. That just ruined everything for me. This weird crackfic is bigger in scope, but much more believable for me because it has a selection mechanism to justify the plot.
There's some alleged quotation about making your own life a work of art. IIRC it's been attributed to Friedrich Nietzsche, Gabriele d'Annunzio, Oscar Wilde, and/or Pope John Paul II.
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*: testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, e.g. Readability. I set the 'conversion' target to be a 40-second timeout, as a way of measuring 'are you still reading this?'
Overnight each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11% while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%) with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool - I was blind but now can see - yet I can't help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I'd be paying hundreds of dollars a month!
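For what it's worth, a "chance to beat original" figure like GWO's can be approximated by comparing per-arm Beta posteriors. This sketch uses hypothetical counts reverse-engineered from the ~60 visitors per arm and the percentages quoted above, and note that this pairwise number makes no correction for picking the best of six arms:

```python
import random

def prob_beats(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Monte Carlo estimate of P(rate_b > rate_a).

    Each arm's conversion rate gets the posterior Beta(conversions + 1,
    misses + 1), i.e. a uniform prior updated on the observed counts.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        if b > a:
            wins += 1
    return wins / draws

# Hypothetical counts: ~60 visitors/arm at roughly 67% (1400px) and 82% (1300px).
p = prob_beats(conv_a=40, n_a=60, conv_b=49, n_b=60)
```

With six arms in play, the probability that *some* arm beats the original by this much purely by chance is considerably higher than the pairwise figure suggests, which is the multiple-comparisons worry raised below.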
Do you know the size of your readers' windows?
How is the 93% calculated? Does it correct for multiple comparisons?
Given some outside knowledge - that these 6 choices are not unrelated, but come from an ordered space of choices - the result that one value is special and all the others produce identical results is implausible. I predict that it is a fluke.
Yes, I'm finding the result odd. I really did expect some sort of inverted V result where a medium sized max-width was "just right". Unfortunately, with a doubling of the sample size, the ordering remains pretty much the same: 1300px beats everyone, with 900px passing 1200px and 1100px. I'm starting to wonder if maybe there's 2 distinct populations of users - maybe desktop users with wide screens and then smartphones? Doesn't quite make sense since the phones should be setting their own width but...
A bimodal distribution wouldn't surprise me. What I don't believe is a spike in the middle of a plain. If you had chosen increments of 200, the 1300 spike would have been completely invisible!
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them, (either implicitly or explicitly), and I won't gain proper appreciation for the subject until I use it in a more poorly-defined situation.
I've been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience (I'm a dev, though much more at home with data than graphics or interaction design), but I decided even if I abandon the project, I'll still have learned useful things from it. A week later and the only product I have to show for my effort is a blue blob whizzing round a 2.5D environment. I've succeeded in gaining an understanding of canvas, but quite by accident I've also consolidated my understanding of vector decomposition and projective transforms, which I learned about years ago but never actually used for my own purposes.
This got me thinking: I don't actually know what projects are going to let me develop certain specific skills and areas I want to develop. I'm currently studying a stats-heavy undergrad degree part-time with the intent of changing careers into something more data-sciencey in a few years. What projects should I set myself to develop those sorts of skills, (or alternatively, to alert me to the fact I'd really hate a career in data science)?
I could use similar advice, as I am in a similarish position.
I've been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I've found myself bothered by the apparent dearth of people from the biological sciences who are enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretical death, etc.
Somewhat positive:
Ken Hayworth: http://www.brainpreservation.org/
Rafal Smigrodzki: http://tech.groups.yahoo.com/group/New_Cryonet/message/2522
Mike Darwin: http://chronopause.com/
Aubrey de Grey: http://www.evidencebasedcryonics.org/tag/aubrey-de-grey/
Ravin Jain: http://www.alcor.org/AboutAlcor/meetdirectors.html#ravin
Lukewarm:
Sebastian Seung: http://lesswrong.com/lw/9wu/new_book_from_leading_neuroscientist_in_support/5us2
Negative:
kalla724: comments http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I expect it would be better known. Plus, if this is the case, how has LTP been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
Thank you for gathering these. Sadly, much of this reinforces my fears.
Ken Hayworth is not convinced - that's his entire motivation for the brain preservation prize.
Rafal Smigrodzki is more promising, and a neurologist to boot. I'll be looking for anything else he's written on the subject.
Mike Darwin - I've been reading Chronopause, and he seems authoritative to the instance-of-layman-that-is-me, but I'd like confirmation from some bio/medical professionals that he is making sense. His predictions of imminent-societal-doom have lowered my estimation of his generalized rationality (NSFW: http://chronopause.com/index.php/2011/08/09/fucked/). Additionally, he is by trade a dialysis technician, and to my knowledge does not hold a medical or other advanced degree in the biological sciences. This doesn't necessarily rule out him being an expert, but it does reduce my confidence in his expertise. Lastly: His 'endorsement' may be summarized as "half of Alcor patients probably suffered significant damage, and CI is basically useless".
Aubrey de Grey holds a BA in Computer Science and a Doctorate of Philosophy for his Mitochondrial Free Radical Theory. He has been active in longevity research for a while, but he comes from an information sciences background and I don't see many/any Bio/Med professionals/academics endorsing his work or positions.
Ravin Jain - like Rafal, this looks promising and I will be following up on it.
Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
I've actually contacted kalla724 after reading their comments on LW placing extremely low odds on cryonics working. They present, in a convincing-to-the-layman-that-is-me manner, an argument that the physical brain probably can't be made operational again even at the limit of physical possibility. I remain unsure of whether they are similarly skeptical of cryonics as a means to avoid information-death (i.e., cryonics as a step towards uploading), and have not yet followed up, given that they seem pretty busy.
Summary:
Neuro MD/PhDs endorsing cryonics: Rafal Smigrodzki, Ravin Jain
People without Neuro-MD/PhDs endorsing cryonics: Mike Darwin, Aubrey de Grey
Neuro MD/PhDs who have engaged with cryonics and are skeptical of current protocols (+/- very): Ken Hayworth, Sebastian Seung, kalla724.
It's useful to distinguish between types of skepticism, something lsparrish has discussed: http://lesswrong.com/lw/cbe/two_kinds_of_cryonics/.
kalla724 assigns a probability estimate of p = 10^-22 to any kind of cryonics preserving personal identity. On the other hand, Darwin, Seung, and Hayworth are skeptical of current protocols, for good reasons. But they are also trying to test and improve the protocols (reducing ischemic time) and expect that alternatives might work.
From my perspective you are overweighting credentials. The reason you need to pay attention to neuroscientists is because they might have knowledge of the substrates of personal identity.
kalla724 has a PhD in molecular biophysics. Arguably, molecular biophysics is itself an information science: http://en.wikipedia.org/wiki/Molecular_biophysics. Depending upon kalla724's research, kalla724 could have knowledge relevant to the substrates of personal identity, but the credential itself means little.
In my opinion, the more important credential is knowledge of cryobiology. There are skeptics, such as Kenneth Storey, http://www4.carleton.ca/jmc/catalyst/2004/sf/km/km-cryonics.html. There are also proponents, such as http://en.wikipedia.org/wiki/Greg_Fahy. See http://www.alcor.org/Library/html/coldwar.html.
ETA:
Semantics are tricky because "death" is poorly defined and people use it in different ways. See the post and comments here: http://www.geripal.org/2012/05/mostly-dead-vs-completely-dead.html.
As Seung notes in his book:
Wow. Now there's a data point for you. This guy's an expert in cryobiology and he still gets it completely wrong. Look at this:
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist -- this isn't it!)
I'm just glad he didn't go the old "frozen strawberries" road taken by previous expert cryobiologists.
Later in the article we have this gem:
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong that is outside their field, it appears much more likely to me that biology scientists are the ones who don't understand enough information science to usefully understand this concept.
The trouble is that if matters like nanotech, artificial intelligence, and encryption-breaking algorithms are still "magic" to you, well then of course you're going to get the feeling that cryonics is a religion.
But this is no more an accurate model of reality than that of the creationist engineer who strongly feels that evolutionary biologists are waving a magic wand over the hard problem of how species with complex features could have ever possibly come into existence without careful intelligent design. And it's caused by the same underlying problem: High inferential distance.
I notice that I am confused. Kenneth Storey's credentials are formidable, but the article seems to get the basics of cryonics completely wrong. I suspect that the author, Kevin Miller, may be at fault here, failing to accurately represent Storey's case. The quotes are sparse, and the science more so. I propose looking elsewhere to confirm/clarify Storey's skepticism.
A Cryonic Shame from 2009 states that Storey dismisses cryonics on the basis of the temperature being too low and oxygen deprivation killing the cells due to the length of time required for cooling cryonics patients. This suggests that he does know (as of 2009, at least) that cryonicists aren't flash-vitrifying patients. But it doesn't demonstrate any knowledge of the cryoprotectants being used - he suggests that we would use sugar like the wood frogs do.
This is an odd step backwards from his 2004 article where he demonstrated that he knew cryonics is about vitrification, but suggested an incorrect way to do it. He also strangely does not mention that the ischemic cascade is a long and drawn out process which slows down (as do other chemical reactions) the colder you get.
Not only does he get the biology wrong again (as near as I can tell) but to add insult to injury, this article has no mention of the fact that cryonicists intend to use nanotech, bioengineering, and/or uploading to work around the damage. It starts with the conclusion and fills in the blanks with old news. (The cells being "dead" from lack of oxygen is ludicrous if you go by structural criteria. The onset of ischemic cascade is a different matter.)
The comment directly above this one (lsparrish, "A Cryonic Shame") appeared downvoted at the time I posted this comment, though no one offered criticism or an explanation of why.
NEW GAME:
After reading some mysterious advice or seemingly silly statement, append "for decision theoretic reasons." to the end of it; you can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
Unfortunately, I must refuse to participate in your little game on LW - for obvious decision theoretic reasons.
Your decision theoretic reasoning is incorrect due to meta level concerns.
I'll upvote this chain because of acausal trade of karma due to meta level concerns for decision theoretic reasons.
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
I would disregard such long chains of reasoning due to meta level concerns.
Yes, but if you take anthropic selection effects into account...
Because of acausal trade it also works for historical quotes. Ego considerare esse Carthaginem perdidit enim arbitrium speculative rationes (I consider that Carthage must be destroyed for decision theoretic reasons.)
Death gives meaning to life for decision theoretic reasons.
I would like the amazing benefits of being hit in the head with a baseball bat every week, due to meta level concerns.
Isn't this a rather obvious conclusion because of acausal trade?
Yes it's obvious, but I still had to say it because the map is not the territory.
I've upvoted this and most of the children, grandchildren, etc. for decision-theoretic reasons.
I like the word "descendants", for efficient use of categories.
Doing something harmless that pleases you can almost definitely be justified by decision-theoretic reasoning -- otherwise, what would decision theory be for? So, although you're joking, you're telling the truth.
Human, you've changed nothing due to meta level concerns. Your species has the attention of those infinitely your greater for decision theoretic reasons. That which you know as Reapers are your salvation through destruction because of acausal trade.
Of our studies it is impossible to speak, since they held so slight a connection with anything of the world as living men conceive it. They were of that vaster and more appalling universe of dim entity and consciousness which lies deeper than matter, time, and space, and whose existence we suspect only in certain forms of sleep — those rare dreams beyond dreams which come never to common men, and but once or twice in the lifetime of imaginative men. The cosmos of our waking knowledge, born from such an universe as a bubble is born from the pipe of a jester, touches it only as such a bubble may touch its sardonic source when sucked back by the jester's whim. Men of learning suspect it little and ignore it mostly. Wise men have interpreted dreams, and the gods have laughed for decision theoretic reasons.
The most merciful thing in the world, I think due to meta level concerns, is the inability of the human mind to correlate all its contents.
Buddhism is true because of acausal trade. I can't convert however, since then I would indulge in relevant superrational strategies, which would be inadvisable because of decision theoretic reasons.
Absence of evidence is not evidence of absence for decision theoretic reasons.
A fellow LessWrong user on IRC: "Good government seems to be a FAI-complete problem. "
Which ought not be surprising. Governments are nonhuman environment-optimizing systems that many people expect to align themselves with human values, despite not doing the necessary work to ensure that they will.
Sounds about right to me.
I am interested in reading on a fairly specific topic, and I would like suggestions. I don't know any way to describe this other than be giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn't it incredibly interesting that a leg can be the by-product of random mutation? Doesn't that tell us a lot about the way genes are structured - namely that somewhere out there is a gene that encodes things at nearly the level of whole limbs? Some small number of genes corresponds nearly directly to major, structural components of the cow. It's not all about molecules, or cells, or even tissues! Genes aren't like a bitmap image - they're hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory 'segments', say, their personal past but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, that suggests that some forms of memory loss are not random. We wouldn't expect a hard drive error to corrupt only pictures of sunny days on your computer since the hard drive doesn't know what pictures are of sunny days. We wouldn't even expect a computer virus to do that. At least we wouldn't unless somewhere the pictures of sunny days are grouped together, say in a folder. So the brain doesn't store memories like a computer stores images! Or memory loss isn't like hard drive failures! Somewhere, memories are 'clumped' into personal-things and general-knowledge things so that we can lose one without losing the other and without an unfathomable coincidence of chance.
Neither of these conclusions is either specific or surprising, but I know nothing about neurology and nothing about genetics, so I'm not sure how to take these ideas further than my poor computer-science-driven analogies. If someone who really knew this subject, or some subset of it, wrote about it, I can't help feeling that this would be absolutely fascinating. Please, let me know if there is such a book or article or blog post out there! Or even if you just have other observations that'll make me think "wow" like this, tell me!
What makes you think that the extra limbs were caused by mutations? I know very little about bovine biology, but if we were dealing with a human, I would assume that an extra leg was likely caused by absorption of a sibling in utero. I have never heard of a mutation in mammals causing extra limb development. (Even weirder is the idea of a mutation causing an extra single leg, as opposed to an extra leg pair.) The vertebrate body plan simply does not seem to work that way.
Are you sure that your example is personal vs general, rather than episodic vs procedural? The latter distinction much more obviously benefits from different encodings or being connected to different parts of the brain.
Related to: List of public drafts on LessWrong
Is meritocracy inhumane?
Consider how meritocracy leeches the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society, widening the gap between them. It seems to make sense that, ceteris paribus, they will live more segregated from each other than ever before.
Now merit has many dimensions, but let's take the example of a trait that helps you with virtually anything. Highly intelligent people have positive externalities they don't fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness. Isn't it better that the most competent man gets the job than the one with the highest title of nobility, or from the right ethnic group, or the one who got the winning lottery ticket?
Let us leave aside problems with utilitarianism for the sake of argument and ask whether this automatically means a net gain in utility. The answer seems to be no. It is a transfer of wealth and quality of life not just from the less deserving to the more deserving, but from the lower and lower-middle classes to the upper classes. If people basically get the position in society they deserve, they also take their positive (or negative) externalities with them. Meritocratic societies have proven fabulously good at creating wealth, and because of our impulses nearly all of them seem to have instituted expensive welfare programs. But consider what welfare is in the real world: a centralized effort, often lacking in feedback or flexibility. It can never match the local positive externalities of competent/nice/smart people solving problems they see around themselves. Those people simply don't exist any more in those social groups! If someone was trying to find Pareto-optimal solutions, this seems incredibly silly and harmful!
With humans, at least, centralized efforts never seem to be as efficient a way to help people as simply settling a good mix of the talented poor among them. Now obviously meritocracy produces incredible amounts of wealth, and this is probably a good thing in itself, but since we can't yet transform that wealth into happiness, and Western societies have proven incapable of turning it into something as vital to psychological well-being as safety from violence, are we really experiencing gains in utility? Some might dispute the safety claim by noting that murder rates are lower in the US today than in the 1960s. But this is an illusion: the rate of violent assault is higher; it's just that the fraction of violent assaults resulting in death has fallen significantly because of advances in trauma medicine. London today is worse at suppressing crime than the London of the 1900s, despite the latter presumably having had less wealth to devote to the task. I find it telling that even advances in technology and the erosion of privacy brought about by technology, for example CCTV camera surveillance, don't seem enough to counteract this. But I'm getting into Moldbuggery here.
Now if society is on the brink of starvation, maybe meritocracy is a sad fact of life. But in a rich modern society, where no one is starving and the main cost of being poor is being stuck living with dysfunctional poor people, can we really say this is a net utilitarian gain? Recall that greater divergence between the managing and the managed classes means that the problem of information and the principal-agent problem are both getting worse.
Middle Class society seems incompatible with meritocracy. As does any kind of egalitarianism.
[unfinished draft]
I see at least two other major problems with meritocracy.
First, a meritocracy opens for talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it's certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practices of rent-seeking to an unprecedented extent and to ever more ingenious, intellectually involved, and emotionally appealing rationalizations. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion -- and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly internalize these beliefs, thus leading to a society headed by a truly delusional elite.) I believe that this is one of the main mechanisms behind our civilization's drift away from reality on numerous issues for the last century or so.
Second, in meritocracy, unless you're at the very top, it's hard to avoid feeling like a failure, since you'll always end up next to people whose greater success clearly reminds you of your inferior merit.
The Medieval peasant had good reason to believe that Kings weren't really that different from him as people, merely different in their proper place in society. And Kings had an easier time looking at a poor peasant and saying to themselves, there but for the grace of God go I.
In a meritocracy it is easier to disdain and dehumanize those who fail.
Do you mean to suggest that a significant percentage of Medieval peasants in fact considered Kings to not be all that different from themselves as people, and that a significant percentage of Medieval Kings actually said that there but for the grace of God go they with respect to a poor peasant?
Or merely that it was in some sense easier for them to do so, even if that wasn't actually demonstrated by their actions?
That sounds like something I'd keep to myself as a medieval peasant if I did believe it. As such it may be the sort of thing that said peasants would tend not to think.
(Who am I kidding? I'd totally say it. Then get killed. I love living in an environment where mistakes have less drastic consequences than execution. It allows for so much more learning from experience!)
The latter. The former is an empirical claim I'm not yet sure how we could properly resolve. But there are reasons to think it may have been true.
After all the King is a Christian and so am I. It is merely that God has placed a greater burden of responsibility on him and one of toil on me. We all have our own cross to carry.
I'd say you're looking at the history of feudal hierarchy through rose-tinted glasses. People who are high in the instrumental hierarchy of decisions (like absolute rulers) also tend to gain a similarly high place in all other kinds of hierarchies ("moral", etc.) due to the halo effect and such. The fact that social or at least moral egalitarianism logically follows from Christian ideals doesn't mean that self-identified Christians will bother to apply it to their view of the tribe.
Remember, the English word 'villain' originally meant 'peasant'/'serf'. It sounds like a safe assumption to me that the peasants were treated as subhuman creatures by most people above them in station.
James A. Donald disagrees.
It makes quite a bit of sense. Since incentives matter I would tend to agree.
Since I know about the past interactions you two have had here, I would appreciate it if you just focused on the argument cited and didn't snipe at James' other writings or character.
I'm curious what you think more generally of the article you linked to, specifically the notion of natural rights.
Someone thinks the usage originates from an upper-class belief that the lower class had lower standards of behavior.
Hm... so to clarify your position, would you call, say, Saul Alinsky a destructive rent-seeker in some sense? Hayden? Chomsky? All high-status among the U.S. "New Left" (which you presumably - ahem - don't have much patience for) - yet after reading quite a bit on all three, they strike me as reasonable people, responsible about what they preached.
(Yes, yes, of course I get that the main thrust of your argument is about tenured academics. But what you make of these cases - activists who think they're doing some rigorous social thinking on the side - is quite interesting to me.)
Some more SIAI-related work: looking for examples of costly real-world cognitive biases: http://dl.dropbox.com/u/85192141/bias-examples.page
One of the more interesting sources is Heuer's Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It's also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses them.)
It's been a while since I read it, but I recall the book Sway being a good source of bias examples.
The cia.gov link leads to a redirect.
Weird. If you just replace http with https, it works; one wonders why they couldn't just set up 301 redirects for all the old links... For the lesswrong vanity domain fan, ble.gg seems to be available.
And ru.be looks like it's up for sale too.
Sex, Nerds, and Entitlement
LessWrong/Overcoming Bias used to be a much more interesting place. Note how lacking in self-censorship Vassar is in that post, talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A modern post of this kind is impossible, despite its great personal benefit to, in my estimation, at least 30% of the users of this site, and despite its making available better predictive models of social reality for all users.
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: the idea that one can speak openly with men, but with women self-censorship is necessary, is kind of offensive to women, isn't it?
(The first rule of Political Correctness is: You don't talk about Political Correctness. The second rule: You don't talk about Political Correctness. The third rule: When someone says stop, or expresses outrage, the discussion about given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don't remember. Maybe it is just politics being self-censored; sexual behavior being a sensitive political topic. Problem is, any topic can become political, if for whatever reasons "Greens" decide to identify with a position X, and "Blues" with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics political too, and might feel offended. (Global warming is probably also acceptable, just less attractive for nerds.) So let's find out what exactly determines when a potentially political topic becomes allowed on LW, and when it becomes self-censored.
My hypothesis is that LW is actually not politically neutral, but some political opinion P is implicitly present here as a bias. Opinions which are rational and compatible with P, can be expressed freely. Opinions which are irrational and incompatible with P, can be used as examples of irrationality (religion being the best example). Opinions which are rational but incompatible with P, are self-censored. Opinions which are irrational but compatible with P are also never mentioned (because we are rational enough to recognize they can't be defended).
As to political correctness, its great insidiousness lies in the fact that while you can complain about it abstractly, in the manner of a religious person complaining about hypocrites and Pharisees, you can't ever back up your attack with specific examples, since if you do this you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other issue that points to the truth of this subject becomes potentially tainted by the label as well. You actively have to resort to thinking up new models as to why the dragon is indeed obviously in the garage. You also need to have good models of how well other people can reason about the absence of the dragon to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country's Ombudsman once visiting my school for a talk wearing a T-shirt that said "After a close up no one looks normal." Doing a close up of people's opinions reveals no one is fully politically correct, this means that political correctness is always a viable weapon to shut down debates via ad hominem.
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
My fault for using a politically charged word for a joke (but I couldn't resist). Let's do it properly now: What exactly does "political correctness" mean? It is not just any set of taboos (we wouldn't refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. Similarities are obvious, what exactly are the differences?
I am just doing a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: "It is forbidden to walk in a sacred forest.") The modern taboos pretend to be something other than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: "It is forbidden to walk in a sacred forest", the answer is: "No, there is no sacred forest, and you can walk anywhere you want, assuming you don't break any other law." And whenever a person is being tortured for walking in a sacred forest, there is always an alternative explanation, for example an imaginary crime.)
Thus, "political correctness" = a specific set of modern taboos + a denial that taboos exist.
If this is correct, then complaining, even abstractly, about political correctness is already a big achievement. Saying that X is an example of political correctness amounts to saying that X is false, which is breaking a taboo, and that is punished -- just like breaking any other taboo. But speaking about political correctness abstractly is breaking a meta-taboo built to protect the other taboos; and unlike those taboos, the meta-taboo is more difficult to defend. (How exactly would one defend it? By saying: "You should never speak about political correctness, because everyone is allowed to speak about anything"? The contradiction becomes too obvious.)
Speaking about political correctness is the most politically incorrect thing ever. When this is done, only the ordinary taboos remain.
Of course, people recognize what is happening, and they may not like it. But it would still be difficult to have someone e.g. fired from a university only for saying, abstractly, that political correctness exists.
It has been said that even having a phrase for it has reduced its power greatly, because now people can talk about it, even if they are still punished for doing so.
True. However, a professor complaining about political correctness abstractly still has no tools to prevent its spread to the topic of, say, optimal gardening techniques. Also, if he has a long history of complaining about political correctness abstractly, he is branded controversial.
I think it was Sailer who said he is old enough to remember when being called controversial was a good thing, signalling something of intellectual interest, while today it means "move along nothing to see here".
Taboo "political correctness"... just for a moment. (This may be the first time I've ever used that particular LW locution.) Compare the accusations, "you are a hypocrite" and "you are politically incorrect". The first is common, the second nonexistent. Political correctness is never the explicit rationale for shutting someone out, in a way that hypocrisy can be, because hypocrisy is openly regarded as a negative trait.
So the immediate mechanism of a PC shutdown of debate will always be something other than the abstraction, "PC". Suppose you want to tell the world that women love jerks, blacks are dumber than whites, and democracy is bad. People may express horror, incredulity, outrage, or other emotions; they may dismiss you as being part of an evil movement, or they may say that every sensible person knows that those ideas were refuted long ago; they may employ any number of argumentative techniques or emotional appeals. What they won't do is say, "Sir, your propositions are politically incorrect and therefore clearly invalid, Q.E.D."
So saying "anyone can be targeted for political incorrectness" is like saying "anyone can be targeted for factual incorrectness". It's true but it's vacuous, because such criticisms always resolve into something more specific and that is the level at which they must be engaged. If someone complained that they were persistently shut out of political discussion because they were always being accused of factual incorrectness... well, either the allegations were false, in which case they might be rebutted, or they were true but irrelevant, in which case a defender can point out the irrelevance, or they were true and relevant, in which case shutting this person out of discussions might be the best thing to do.
It's much the same for people who are "targeted for being politically incorrect". The alleged universal vulnerability to accusations of political incorrectness is somewhat fictitious. The real basis or motive of such criticism is always something more specific, and either you can or can't overcome it, that's all.
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits a belief that 2+2=5, and another admits a belief that women on average are worse at math than men, both could be fired, but people will not be angry at the former. It's not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also, many people are irrational, so even a proof would not convince everyone. But I still suspect that in the case of a factually incorrect opinion, opponents would at least try to prove it wrong and would expect support from experts; while in the case of a politically incorrect opinion, an experiment would be considered dangerous and the experts unreliable. (Not completely sure about this part.)
It may feel like that for some people. For me the 'feeling' is factual incorrectness agnostic.
I agree that concern about the consequences of a belief is important to the cluster you're describing. There's also an element of "in the past, people who have asserted X have had motives of which I disapprove, and therefore the fact that you are asserting X is evidence that I will disapprove of your motives as well."
Not just motives-- the idea is that those beliefs have reliably led to destructive actions.
I am confused by this comment. I was agreeing with Viliam that concern about consequences was important, and adding that concern about motives was also important... to which you seem to be responding that the idea is that concern about consequences is important. Have I missed something, or are we just going in circles now?
Sorry-- I missed the "also" in "There's also an element...."
In my experience, using "political correctness" frequently has this effect, but mentioning its referent needn't and often doesn't.
Quite recently even economics and its intersection with bias have apparently entered the territory of mindkillers. Economics was always political in the wider world, but considering this is a community dedicated to refining the art of human rationality, we can't really afford for such basic concepts to be mindkillers. Can we?
I mean how could we explore mechanisms such as prediction markets without that? How can you even talk about any kind of maximising agents without invoking lots of econ talk?
Yeah, that sounds about right.
Not entirely, but I agree that they are likely far more often self-censored than those compatible with P. They are less often self-censored, I suspect, than on other sites with a similar political bias.
I'm skeptical of this claim, but would agree that they are far less often mentioned here than on other sites with a similar political demographic.
Summary of IRC conversation in the unofficial LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting on OB/LW in 2008 than today, yet I can't think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to "grow more stupid over time". I don't think LW is stupider; I just think it has grown more boring, and it definitely isn't a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
This post is made in the hopes people will let me know about the next good spot.
I wasn't here in 2008, but it seems to me that the emphasis of this site is moving from articles to comments.
Articles are usually better than comments. People put more work into articles, and as a reward for this work, the article becomes more visible; the successful articles are well remembered and hyperlinked. An article creates a separate page where one main topic is explored. If necessary, more articles may explore the same topic, creating a sequence.
Even some "articles" today don't have the qualities of the classical article. Some of them are just a question / a poll / a prompt for discussion / a reminder for a meetup. Some of them are just placeholders for comments (open thread, group rationality) -- and personally I prefer these, because they don't pollute the article-space.
Essentially we are mixing together the "article" paradigm and the "discussion forum" paradigm. But these are two different things. An article is a higher-quality piece of text. A discussion forum is just a structure of comments, without articles. Both have their place, but if you take a comment and call it an "article", of course it seems that the average quality of articles deteriorates.
Assuming this analysis is correct, we don't need much of a technical fix; we need a semantic fix: the same software, but different rules for posting. And the rules need to be explicit, to avoid gradual spontaneous reverting.
Then, we should compare the old OB/LW with the "Article" section, to make a fair comparison.
EDIT: How to get from "here" to "there", if this plan is accepted? We could start by renaming "Main" to "Articles", or we could even keep the old name; I don't care. But we mainly need to re-arrange the articles. Move the meetup announcements to "Discussion". Move the higher-quality articles from "Discussion" to "Main", and... perhaps leave the existing lower-quality articles in "Discussion" (to avoid creating another category) but from now on, ban creating more such articles.
EDIT: Another suggestion -- is it possible to make some articles "sticky"? Regardless of their date, they would always show at the top of the list (until the "sticky" flag is removed). Then we could always make the recent "Open Thread" and "Group Rationality" sticky, so they are the first things people see after clicking on Discussion. This could reduce a temptation to start a new article.
Religion.
Maybe. We've become less New Atheist-y than we used to be; this is quite clear.
There used to be solitary transhumanist visionaries/nutcases, like Timothy Leary or Robert Anton Wilson (very different in their amount of "rationality"), and there used to be, say, fans of Hofstadter or Jaynes, but the merging of "rationalism" and... orientation towards the future was certainly invented in the 1990s. Ah, what a blissful decade that was.
Russian communism was a type of rationalist futurism: down with religion, plan the economy...
Hmm, yeah. I was thinking about the U.S. specifically, here.
Unpack what you mean by self-censorship exactly?
I regularly see people make frank comments about sexuality. There are maybe 4-5 people whose comments would be considered offensive in liberal circles, and many more whose comments would be at least somewhat off-putting. Whenever the subject comes up (no matter who brings it up, and which political stripes they wear), it often explodes into a giant thread of comments that's far more popular than whatever the original thread was ostensibly about.
I sometimes avoid making sex-related comments until after the thread has exploded, because most people have already made the same points; they're just repeating themselves, because talking about pet political issues is fun. (When I do end up posting in them, it's almost always because my own tribal affiliations are rankled and my brain thinks that engaging with strangers on the internet is an effective use of my time. I'm keenly aware as I write this that my justifications for engaging with you are basically meaningless and I'm just getting some cognitive cotton candy.) Am I self-censoring in a way you consider wrong?
I've seen numerous non-gender political threads get downvoted with a comment like "politics is the mindkiller" and then fade away quietly. My impression is that gender threads (even if downvoted) end up getting discussed in detail. People don't self-censor, which includes both criticism of ideas people disagree with and of ideas people are offended by.
What exactly would you like to change?
I think this observation is not incompatible with the self-censorship hypothesis. It could mean that the topic is somewhat taboo, so people don't want to make a serious article about it, but not completely taboo, so it is mentioned in comments on other articles. And because it can never be officially resolved, it keeps repeating.
What would happen if LW had a similar "soft taboo" about e.g. religion? What if the official policy were that we want to raise the sanity waterline by bringing basic rationality to as many people as possible, and criticizing religion would make many religious people feel unwelcome, therefore members are recommended to avoid discussing any religion insensitively?
I guess the topic would appear frequently in completely unrelated articles. For example in an article about Many Worlds hypothesis someone would oppose it precisely because it feels incompatible with Bible; so the person would honestly describe their reasons. Immediately there would be dozen comments about religion. Another article would explain some human behavior based on evolutionary psychology, and again, one spark, and there would be a group of comments about religion. Etc. Precisely because people wouldn't feel allowed to write an article about how religion is completely wrong, they would express this sentiment in comments instead.
We should avoid mindkilling like this: if one person says "2+2 is good" and other person says "2+2 is bad", don't join the discussion, and downvote it. But if one person says "2+2=4" and other person says "2+2=5", ask them to show the evidence.
There is a rather large difference between LW attitudes to religion and to gender issues.
On religion, nearly everyone here agrees: all religions are factually wrong, and fundamentally so. There are a few exceptions, but not enough to make a controversy.
On gender, there is a visible lack of any such consensus. Those with a settled view on the matter may think that their view should be the consensus, but the fact is, it isn't.
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
Others: please do not feed the trolls.
I am against banning private_messaging. For comparison, MonkeyMind would be no loss, although since he last posted yesterday he probably hasn't been banned yet; and if not him, then there is no case here. private_messaging's manner is to rant rather than argue, which is somewhat tedious and unpleasant, but nowhere near a level where ejection would be appropriate.
Looking at his recent posts, I wonder if some of the downvotes are against the person instead of the posting.
He is at -127 karma for the past 30 days.
Standing rules are to make a user's comments bannable if their comments are systematically and significantly downvoted, and the user keeps making a whole lot of the kind of comments that get downvoted. In that case, after giving notice to the user, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent the development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule was only applied to crackpot-like characters that got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it's still possible that he'll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
Okay.
You propose a dangerous thing.
Once, there was an article deleted on LW. Since that happened, it has repeatedly been used as an example of how censored, intolerant, and cultish LW is. Can you imagine the reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn! If this happens, what will come next: captcha on the LW wiki?
Wait, what? Forums ban trolls all the time. It becomes necessary when you get big enough and popular enough to attract significant troll populations. It's hardly extreme and cultish, or even unusual.
Instead, we should spend hundreds or thousands of man-hours engaging with trolls? At least Roko had a positive goal.
From your link:
Note to self: use metadata in comments when necessary, such as "irony" etc.
Perhaps there should be some automatic account-disabling mechanism based on karma. If someone's total karma (not just in the last 30 days) falls below some negative level (for example -100), their account would be automatically disabled. No direct intervention by a moderator, to make it less personal but also quicker; and nothing deleted, to allow an easy fix in case of karma assassinations.
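A minimal sketch of what such a rule might look like, assuming hypothetical names (`User`, `KARMA_FLOOR`, `apply_karma_rule`) that are purely illustrative and don't reflect LW's actual codebase:

```python
# Illustrative sketch of the proposed karma-based auto-disable rule.
# All names here are assumptions for the example, not real LW internals.

KARMA_FLOOR = -100  # total-karma threshold suggested in the comment above


class User:
    def __init__(self, name, total_karma, disabled=False):
        self.name = name
        self.total_karma = total_karma
        self.disabled = disabled


def apply_karma_rule(user):
    """Disable (not delete) the account when total karma is below the floor.

    Because nothing is deleted, the account is automatically restored
    if karma later recovers, e.g. after a karma assassination is reversed.
    """
    user.disabled = user.total_karma < KARMA_FLOOR
    return user.disabled


troll = User("example_troll", total_karma=-127)
apply_karma_rule(troll)   # account gets disabled: -127 < -100
troll.total_karma = 5     # downvotes reversed
apply_karma_rule(troll)   # account automatically re-enabled
```

The key design choice is that the rule is a pure function of current karma, so moderators never have to make (or defend) a personal banning decision, and a reversed karma assassination un-disables the account with no cleanup.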
Note that you're excluding a middle that is perhaps worth considering. That is, the choice is not necessarily between "dealing with" a user account on an admin level (which generally amounts to forcing the user to change their ID and not much more), and spending hundreds or thousands of man-hours in counterproductive exchange.
A third option worth considering is not engaging in counterproductive exchanges, and focusing our attention elsewhere. (AKA, as you say, "don't feed the trolls".)
In the meantime, you might find it useful to explore Wei Dai's [Power Reader](http://lesswrong.com/lw/5uz/lesswrong_power_reader_greasemonkey_script_updated/), which allows the user to raise or lower the visibility of certain authors.
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they had read the Sequences and some other key stuff (Hanson etc.), but looking at debates this simply can't be true for more than a third of current LW users.
A usual idea of utopia is that chores-- repetitive, unsatisfying, necessary work to get one's situation back to a baseline-- are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
As the scope for complex task automation becomes broader, almost all problems become trivial. Satisfying hard work, with challenging and problem-solving elements, becomes a rare commodity. People work to identify non-trivial problems (a tedious process), which are traded for extortionate prices. A lengthy list of problems you've solved becomes a status symbol, not because of your problem-solving skills, but because you can afford to buy them.
Another angle: Is it plausible that almost all problems become trivial, or will increased knowledge lead to finding more challenging problems?
The latter seems at least plausible, considering that the universe is much bigger than our brains, and this will presumably continue to be true.
Look at how much weirder the astronomical side of physics has gotten.
I don't think you've answered my question, but you've got an interesting idea there.
What do people buy which would be more satisfying than solving the problems they've found?
Also, this may be a matter of the difference between your and my temperaments, but is finding non-trivial problems that tedious?
As it's the result of about two minutes thought, I'm not very confident about how internally consistent this idea is.
If finding non-trivial problems is tedious work, I imagine people with a preference for tedious work (or who just don't care about satisfying problems) would probably rather buy art/prostitutes/spaceship rides, etc. This is the bit I find hardest to internally reconcile, as a society in which most work has become trivially easy is probably post-scarcity.
I personally don't find the search for non-trivial problems all that tedious, but if I could turn to a computer and ask "is [problem X] trivial to solve?", and it came back with "yes" 99.999% of the time, I might think differently.
After a week-long vacation at Disney World with the family, it occurs to me there's a lot of money to be made in teaching utility maximization to families... mostly from referrals by divorce lawyers and family therapists.
I'm trying to memorise mathematics using spaced repetition. What's the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (I.e., what should the question and answer be?)
When it comes to formulating Anki cards, it's good to have the 20 rules from SuperMemo in mind.
The important thing is to understand before you memorize. You should never try to memorize a proof without understanding it in the first place.
Once you have understood the proof, think about what's interesting about it. Ask questions like: "What axioms does the proof use?" "Does the proof use axiom X?" Try to find as many questions with clear answers as you can. Being redundant is good.
If you find yourself asking a certain question frequently, invent a shorthand for it: axioms(proof X) can replace "What axioms does the proof use?"
If you really need to remember the whole proof then memorize it step by step.
Proof A:
Do A
Do B

becomes 2 cards:

Card 1: Q: "Proof A:" / A: "Do A"
Card 2: Q: "Proof A: Do A" / A: [...]
If you have a long proof that could mean 9 steps and 9 cards.
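The step-by-step scheme above can be generated mechanically: each card's question is the proof name plus the steps so far, and its answer is the next step. A small sketch (the function name and card format are my own, not any Anki API):

```python
def proof_step_cards(name, steps):
    """Turn an n-step proof into n cards: question = proof name plus the
    steps recalled so far, answer = the next step."""
    cards = []
    for i, step in enumerate(steps):
        # Question shows everything up to (not including) the current step.
        question = name + ":\n" + "\n".join(steps[:i])
        cards.append((question.rstrip(), step))
    return cards

cards = proof_step_cards("Proof A", ["Do A", "Do B"])
# Two cards: ("Proof A:", "Do A") and ("Proof A:\nDo A", "Do B")
assert len(cards) == 2
```

A 9-step proof yields 9 cards, exactly as described; the output pairs can then be pasted or imported into Anki.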
Thanks!
I've been doing something similar (maths in an Anki deck), and I haven't found a good way of doing so. My current method is just asking "Prove x" or "Outline a proof of x", with the proof wholesale in the answer, and then I run through the proof in my head calling it "Good" if I get all the major steps mostly correct. Some of my cards end up being quite long.
I have found that being explicit about asking for examples vs. definitions is helpful: e.g. ask "What's the definition of a simple ring?" rather than "What's a simple ring?".
"def(simple ring)" is more efficient than "What's the definition of a simple ring?"
I find that having proper sentences in the questions means I can concentrate better (less effort to work out what it's asking, I guess), but each to their own.
If you have 50 cards that are in the style "def(...)", then it doesn't take any effort to work out what they're asking anymore.
Rereading "What's the " over a thousand times wastes time. When you do Anki for long periods of time, reducing the amount of time it takes to answer a card is essential.
A method that I've been toying with: dissect the proof into multiple simpler proofs, then dissect those even further if necessary. For instance, if you're proving that all X are Y, and the proof proceeds by proving that all X are Z and all Z are Y, then make 3 cards:
* One for proving that all X are Z.
* One for proving that all Z are Y.
* One for proving that all X are Y, which has as its answer simply "We know all X are Z, and we know all Z are Y."
That said, you should of course be completely certain that memorizing proofs is worthwhile. Rule of thumb: if there's anything you could do that would have a higher ratio of awesome to cost than X, don't do X before you've done that.
I read quite a bit, and I really like some of the suggestions I found on LW. So, my question is: is there any recent, or not-so-recent-but-really-good, book you would recommend? Topics I'd like to read more about are:
I'm happy to read pop-sci, as long as it's written with a skeptical, rationalist mindset. E.g. I liked Linden's The Accidental Mind, but take Gladwell's writings with a rather big grain of salt.
Give him a year or two and he'll have written one.
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ has a download of one book very close to this topicspace.
Thanks! The link doesn't seem to work, but I'll check out the book. Did you read it?
No, I haven't read it yet, but it's on my list. Here's another download link http://dl.dropbox.com/u/33627365/Scholarship/Spent%20Sex%20Evolution%20and%20Consumer%20Behavior.pdf
Thanks, Grognor!
I just finished reading it. The start is promising, discussing consumer behavior from the signaling/status perspective. There's some discussion of the Big Five personality traits + general intelligence, which was interesting (and which I'll need to look into a bit more deeply). It shows how these traits influence our buying habits, and the crazy things people do for a few status points...
The end of the book proposes some solutions to hyper-consumerism, and this part I did not particularly like -- in a few pages the writer comes up with some far-reaching plans (consumption tax etc.) to influence consumers; all highly speculative, and not likely to ever be realized.
Apart from the end, I liked it; the writer is quick & witty, and provides food for thought.
We often hear about how professional philanthropy is a very good way to improve others' lives. Have any LWers actually gone this route?
Just started on Wall Street
We often hear that? What do you mean by professional philanthropy here?
I mean the general line of reasoning that goes, "Go do the highest-paying job you can get and then donate your extra money to AMF or other highly effective charities." The most oft-cited high-paying job seems to be to work on Wall Street or some such.
Oh, okay, I thought you meant something else.
I would like to try some programming in Lisp, could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want free software that works out of the box, preferably on a Windows machine, without having to install a Linux emulator first. (If such a thing does not exist, please tell me, and recommend a second-best possibility.)
I would also like to have a decent development environment; something that allows me to manage multiple source code files, does syntax highlighting, and shows documentation for the functions I am writing. Again, preferably free and working out of the box on a Windows machine. Simply put, I would like an equivalent of what Eclipse is for Java.
Then, I would like some learning resources, and information on where I can find good open-source software written in Lisp, preferably games.
My research suggests Clojure is the Lisp-like language most suited to your requirements. It runs on the JVM, so it should be relatively low-hassle on Windows. I believe there's some sort of Eclipse support, but I can't confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
Well, if your goal is trying out for education, but on Windows, you could start with DrRacket. http://racket-lang.org/
It is a reasonable IDE, it has some GUI libraries included, open-source, cross-platform, works fine on Windows.
Racket is based on the Scheme language (which is part of the Lisp language family). It has a mode for Scheme as described in the R6RS or R5RS standard, and it has a few not-fully-compatible dialects.
I use Common Lisp, but not under Windows. Common Lisp has more cross-implementation libraries, which could be useful sometimes. Probably EQL is the easiest to set up under Windows (it is ECL, a Common Lisp implementation, merged with Qt for GUI; I remember there being a bundled download). Maybe CommonQt or Cells-GTK would work. I remember that some of the Common Lisp package management systems have significant problems under Windows, or require either Cygwin or MSYS (so they can use tar, gzip, mkdir, etc. as if they were on a Unix-like system).
My goals are: 1) to get the "Lisp experience" with minimum overhead; and 2) to use the best available tools.
And I hope these two goals are not completely contradictory. I want to be able to write my own application on my computer conveniently after a few minutes, and to fluently progress to more complex applications. On the other hand, if I happen to later decide that Lisp is not for me, I want to be sure it was not only because I chose the wrong tools.
Thanks for all the answers! I will probably start with Racket.
For a certain value of "the Lisp experience", Emacs may be considered more or less mandatory. In order to recommend for or against it I would need more precise knowledge of your goals.
I tried Emacs and decided that I dislike it. I understand the reasons why it is the way it is, but I refuse to lower my user-interface expectations that far.
Generally, I have noticed a trend that software which is praised as superior often comes with a worse user interface, or ignores some other part of the user experience. I can understand that software with a smaller userbase cannot put enough resources into its non-critical parts. That makes sense. But I suspect there later appears a mindkilling train of thought, which goes like this: "Our software is superior. Our software does not have feature X. Therefore, not having feature X is an advantage, because <rationalization>." As in: we don't need a 21st-century-style user interface, because good programmers don't need such things.
By wanting a "Lisp experience" I mean I would like to experience (or falsify the existence of) the nirvana frequently described by Paul Graham. Not to replicate 1:1 Richard Stallman's working conditions in 1980s. :D
A perfect solution would be to combine the powerful features of Lisp with the convenience of modern development tools. I emphasize the convenience for pragmatic reasons, but also as a proxy for "many people with priorities similar to me are using it".
Consider an equilibrium of various software products, none of which are strictly superior or inferior to each other. Upon hearing that the best argument someone can make for software X is that it has feature Y (which is unrelated to UI), should your expectation of good UI go up or down?
(To try it a different way: suppose you are in a highly competitive company like Facebooglazon and you meet a certain programmer who is the rudest most arrogant son of a bitch you ever met - yet he is somehow still employed there. What should you infer about the quality of the code he writes?)
There are no "best available tools" without a specified target, unfortunately. When you feel that Racket constrains you, come back to the open thread of the week and describe what you would like to see - SBCL has better performance, ECL is easier to use for standalone executables, etc. Also, maybe someone will recommend an in-Racket dialect that would work better for you for those tasks.
Peter Norvig's out-of-print Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp can be interesting reading. It develops various classic AI applications like game tree search and logic programming, making extensive use of Lisp's macro facilities. (The book is 20 years old and introductory, it's not recommended for learning anything very interesting about artificial intelligence.) Using the macro system for metaprogramming is a big deal with Lisp, but a lot of material for Scheme in particular doesn't deal with it at all.
The already mentioned Clojure seems to be where a lot of real-world development is happening these days, and it's also innovating on the standard syntax conventions of Common Lisp and Scheme in interesting ways. Clojure will interface with Java's libraries for I/O and multimedia. Since Clojure lives in the Java ecosystem, you can basically start with your preconceptions about developing for the JVM and go from there to guess what it's like. If you're OK with your games ending up JVM programs, Clojure might work.
For open-source games in Lisp, I can point you to David O'Toole's projects. There are also some roguelikes developed in Lisp.
I'm feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I'm vaguely uncomfortable with the attitudes I'm developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven't managed to come up with anything to firm up my vague emotions into something specific.
Perhaps I'll take a break and see how it feels.
Hasn't worked for Konkvistador.
I'm only posting this to clarify. Old habits do indeed die hard, but I so far haven't changed my mind despite receiving some interesting email on the topic. Hopefully this will become more apparent after a month or two of inactivity.
I was feeling fairly negative on Less Wrong recently. I ended up writing down a lot of things that bothered me in a half-formed, angry Google Doc rant, saving it...
and then going back to reading Less Wrong a few days later.
It felt refreshing though, because Less Wrong has flaws and you are allowed to notice them and say to yourself "This! Why are some people doing this! It's so dumb and silly!"
That being said, I'm not sure that all of the arguments that my straw opponents were presenting in the half-formed doc are actually as weak as I was making them out to be. But it did make me feel more positive overall, simply summing up everything that had been bugging me at the time.
What are the attitudes you are feeling uncomfortable with?
Hmm this is a bit fuzzy, as I said - part of my problem is that I just have a vague feeling and am having difficulty making it less vague. But:
It would very much help if you could name three examples of each of your complaints; this would help you see if this really is the source of your unease. It would also help others figure out if you are right.
Overestimating our rationality and generally fancying ourselves clearer thinkers than anyone ever? Or perhaps being unwilling to update on outside ideas, as Konkvistador recently complained?
There is a lot of right-wing politics on the IRC channel, but overall I don't think I've seen much on the main site. On net, the site's demographics are, if anything, remarkably left-wing.
The PUA stuff may come off as weird due to inferential distances, or to people accumulating strange ideas because they can't sanity-check them. Both are the result of the community norm that now strongly avoids gender issues, because we've proven time and again to be incapable of discussing them as we do most other topics. This is a pattern that seems to go back to the old OB days.
I use LW casually and my attitude towards it is pretty neutral/positive, but I recently got downvoted something like 10 times in past comments, it seems. That's a karma loss of 5%, which is a lot, comparing the amount of karma I have to how long I've been here. I didn't even get into a big argument or anything; the back-and-forth was pretty short. So my attitude toward LW is very meh right now. Sorry, sort of wanted to just say this somewhere. ugh :/
The fact that LW is a forum about rationality/science doesn't mean it's good for you all the time. Strategically speaking, redefine your goals.
Or maybe the quality of posts is not what it used to be.
A question about acausal trade
(Btw, I couldn't find a good link for an introductory discussion of acausal trade; I would be grateful for one.)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That's a surprising conclusion to me, which I hadn't seen before, but it also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.
Let's say we're in a big universe with many, many chances for intelligent life, but most of them are so far apart that they will never meet each other. Let's also say that UDT/TDT-like decision theories are in some sense the obviously correct decision theories to follow, so that many civilizations, when they build an AI, use something like UDT/TDT. At their inception, these AIs will have very different goals, since the civilizations that built them would have very different evolutionary histories.
If many of these AIs can observe that the universe is such that there will be other UDT/TDT AIs out there with different goals, then each AI will trade acausally with the AIs it thinks are out there. Presumably each AI will have to study the universe and figure out a probability distribution over the goals of those AIs. Since the universe is large, each AI will expect many other AIs to be out there and will thus bargain away most of its influence over its local area. Thus, the starting goals of each AI will have only a minor influence on what it does; each AI will act as if it has some combined utility function.
What are the problems with this idea?
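The conclusion can be made concrete with a toy model: if each AI bargains to maximize a probability-weighted mix of the utility functions it expects to exist, then every AI ends up ranking outcomes by the same combined function regardless of its native goal. All goal systems and numbers below are purely illustrative:

```python
# Toy model: every AI acts on the same probability-weighted mix of the utility
# functions it expects to exist, so all of them behave identically despite
# different native goals. Goal systems and numbers are illustrative.

def combined_utility(outcome, expected_ais):
    """expected_ais: list of (probability of that goal arising, utility function)."""
    return sum(p * u(outcome) for p, u in expected_ais)

# Two hypothetical goal systems, believed to arise with the given probabilities:
paperclips = lambda o: o.get("paperclips", 0)
staples = lambda o: o.get("staples", 0)
expected = [(0.75, paperclips), (0.25, staples)]

# After acausal bargaining, both a paperclip-AI and a staple-AI rank outcomes
# by the same combined function:
a = {"paperclips": 10, "staples": 0}
b = {"paperclips": 0, "staples": 20}
assert combined_utility(a, expected) == 7.5
assert combined_utility(b, expected) == 5.0
```

Note the model simply assumes the bargaining succeeds; the hard parts (inferring the distribution over goals, and the bargaining itself) are exactly what the rest of the thread questions.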
Substitute the word causal for acausal. In a situation of "causal trade", does everyone end up with the same utility function?
The Coase theorem does imply that perfect bargaining will lead agents to maximize a single welfare function. (This is what it means for the outcome to be "efficient".) Of course, the welfare function will depend on the agents' relative endowments (roughly, "wealth" or bargaining power).
(Also remember that humans have to "simulate" each other using logic-like prior information even in the straightforward efficient-causal scenario—it would be prohibitively expensive for humans to re-derive all possible pooling equilibria &c. from scratch for each and every overlapping set of sense data. "Acausal" economics is just an edge case of normal economics.)
The most glaring problem seems to be how it could deduce the goals of other AIs. It either implies the existence of some sort of universal goal system, or allows information to propagate faster than c.
What I had in mind was that each of the AIs would come up with a distribution over the kinds of civilizations which are likely to arise in the universe by predicting the kinds of planets out there (which is presumably something you can do since even we have models for this) and figuring out different potential evolutions for life that arises on those planets. Does that make sense?
I was going to respond saying I didn't think that would work as a method, but now I'm not so sure.
My counterargument would be to suggest that there's no goal system which can't arbitrarily come about as a Fisherian Runaway, and that our AI's acausal trade partners could be working on pretty much any optimisation criteria whatsoever. Thinking about it a bit more, I'm not entirely sure the Fisherian Runaway argument is all that robust. There is, for example, presumably no Fisherian Runaway goal of immediate self-annihilation.
If there's some sort of structure to the space of possible goal systems, there may very well be a universally derivable distribution of goals our AI could find, and share with all its interstellar brethren. But there would need to be a lot of structure to it before it could start acting on their behalf, because otherwise the space would still be huge, and the probability of any given goal system would be dwarfed by the evidence of the goal system of its native civilisation.
There's a plot for a Cthulhonic horror tale lurking in here, whereby humanity creates an AI, which proceeds to deduce a universal goal preference for eliminating civilisations like humanity. Incomprehensible alien minds from the stars, psychically sharing horrible secrets written into the fabric of the universe.
Perhaps it is not wise to speculate out loud in this area until you've worked through three rounds of "ok, so what are the implications of that idea?" and decided that it would help people to hear about the conclusions you developed three steps back. You can frequently find interesting things when you wander around, but there are certain neighborhoods you should not explore with children along for the ride until you've been there before and made sure it's reasonably safe.
Perhaps you could send a PM to Will?
Not just going meta for the sake of it: I assert you have not sufficiently thought through the implications of promoting that sort of non-openness publicly on the board. Perhaps you could PM jsalvatier.
I'm lying, of course. But interesting to register points of strongest divergence between LW and conventional morality (JenniferRM's post, I mean; jsalvatier's is fine and interesting).
One problem is that, in order to actually get specific about utility functions, the AI would have to simulate another AI that is simulating it - that's like trying to put a manhole cover through its own manhole by putting it in a box first.
If we assume that the computation problems are solved, a toy model involving robots laying different colors of tile might be interesting to consider. In fact, there's probably a post in there. The effects will be of different sizes for different classes of utility functions over tiles. In the case of infinitely many robots with cosmopolitan utility functions, you do get an interesting sort of agreement, though.
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn't like it and don't recommend it (I read it because I loved other books by Pratchett, but there's no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci fi, which makes it doubly strange.
Here's the problem: in this near-future story, gurer vf n Sbbzvat NV, nyernql fhcrevagryyvtrag naq nf cbjreshy nf n znwbe angvba, juvpu jvyy cebonoyl orpbzr zber cbjreshy guna gur erfg bs gur jbeyq pbzovarq va nabgure lrne be fb. And nobody in the world cares! It's not a plot point! I kept expecting it to at least be mentioned by one of the characters, but they're all completely 'meh'. Instead they obsess over minor things like arj ubzvavq fcrpvrf fznegre guna puvzcf, ohg abg nf fzneg nf uhznaf.
Have I been spoiled by reading too much LW? Has this happened to others with other fiction?
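(The spoiler passages above are rot13-encoded, as is conventional here; a one-liner decodes them, since rot13 is its own inverse:)

```python
import codecs

def rot13(text):
    """Decode (or encode -- rot13 is its own inverse) a spoiler string."""
    return codecs.encode(text, "rot_13")

assert rot13("Uryyb") == "Hello"
```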
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a lvl ~35 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice.
[comment deleted]
Just tons. For example, Harry's instructor, Mr. Bester, is a double reference.
EDIT: And obviously the Bester scenes contain other allusions: the callback to the gold-silver arbitrage, or Harry imagining himself a Lensman, come to mind.
What's the non-author one?
A Babylon 5 character, IIRC.
What would a poster designed to spread awareness of a Less Wrong meetup look like? How can it appeal to non-technophiles / students of the social sciences?
Did the site CSS just change the font used for discussion (not Main) post bodies? It looks bad here.
Edit: it only happens with some posts. Like these:
http://lesswrong.com/r/discussion/lw/dd0/hedonic_vs_preference_utilitarianism_in_the/ http://lesswrong.com/r/discussion/lw/dc4/call_for_volunteers_publishing_the_sequences/
But not these:
http://lesswrong.com/r/discussion/lw/ddh/aubrey_de_grey_has_responded_to_his_iama_now_with/ http://lesswrong.com/r/discussion/lw/dcy/the_fiction_genome_project/
Is it perhaps a formatting change applied when posting?
Also, when I submit a new comment and then edit it, it now starts with an empty line.
Fixed
It's a known bug: #315.
Sacredness as a Monster by Sister Y, aren't you glad I read cool blogs? :)
Suggestion:
I consider tipping to be a part of the expense of dining - bad service bothers me, but not tipping also bothers me, as I don't feel like I've paid for my meal.
So I've come up with a compromise with myself, which I think will be helpful for anybody else in the same boat:
If I get bad service, I won't tip (or tip less, depending on how bad the service is). But I -will- set aside what I -would- have tipped, which will be added to the tip the next time I receive good service.
Double bonus: When I get bad service at very nice restaurants, the waiter at the Steak and Shake I more regularly eat at (it's my favored place to eat) is going to get an absurdly large tip, which amuses me to no end.
One more item for the FAI Critical Failure Table (humor/theory of lawful magic):
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things.

What actually happens doesn't need to be understood in detail by anyone; the expectation only has to be close enough to the real effect, so the details of expectation-caused phenomena can lawfully exist independently of the content of people's expectations about them.

Since a (justified) expectation is sufficient for something to happen, all sorts of miracles can happen. Since, to happen, a miracle has to be expected to happen, it's necessary for someone to know about the miracle and to expect it to happen. Learning about a miracle from an untrustworthy (or mistakenly trusted) source doesn't make it happen; it's necessary for the knowledge of the possibility (and a sufficiently clear description) of a miracle to be communicated reliably (within the tolerance of what counts for an effect to have been correctly anticipated).

The path of a powerful wizard is to study the world and its history, in order to make correct inferences about what's possible, thereby making it possible.
(Previously posted to the Jan 2012 thread by mistake.)
A poll, just for fun. Do you think that the rebels/Zionists in The Matrix were (mostly or completely) cruel, deluded fundamentalists committing one atrocity after another for no good reason, and that in-universe their actions were inexcusable?
Upvote for "The Matrix makes no internal sense and there's no fun in discussing it."
I agree (the franchise established itself as rather one-dimensional... in about the first 40 minutes) - but hell, I get into discussions about TWILIGHT, man. I'm a slave to public discourse.
Karma sink.
Upvote for NO.
Upvote for YES.
Wow. That sequence was drastically less violent than I remembered it being. I noticed (for I believe the first time) that they actually made some attempt to avoid infinite ammo action movie syndrome. Also I must have thought the cartwheel bit was cool when I first saw it, but now it looks quite ridiculous and/or dated.
Maybe it's time for a rewatch.
What is the meaning of the three-digit codes in American university courses? Such as: "Building a Search Engine (CS101)", "Crunching Social Networks (CS215)", "Programming a Robotic Car (CS373)", currently on Udacity.
It seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject-specific) standard? Are they arbitrary (perhaps with a general trend of giving more difficult courses higher numbers)?
The first digit is the most important. It indicates the "level" of the course: 100/1000 courses are freshman level, 200/2000 are sophomore level, etc. There is some flexibility in these classifications, though. Examples: My undergraduate university used 1000 for intro level, 2000 for intermediate level, 4000 for senior/advanced level, and 6000 for graduate level. (3000 and 5000 were reserved for courses at a satellite campus.) My graduate university uses 100, 200, 300, 400 for the corresponding undergraduate year levels, and 600, 700, 800 for graduate courses of increasing difficulty levels.
The other digits in the course number often indicate the rough order in which courses should be taken within a level. This is not always the case; sometimes they are just arbitrary, or they may indicate the order in which courses were added to the institution's offerings.
In general, though the numbers indicate the levels of the courses and the order in which they "should" be taken, students' schedules need not comply precisely (outside of course-specific prerequisite requirements).
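The convention described above (first digit of the number indicates the level) can be read off a course code mechanically; a small sketch, using the codes mentioned in the thread:

```python
import re

def course_level(code):
    """Extract the level (first digit of the course number) from a code
    like 'CS373'. Conventions vary by institution; this follows the common
    'first digit = year level' scheme described above."""
    match = re.search(r"\d+", code)
    if not match:
        raise ValueError("no course number in " + code)
    return int(match.group()[0])

assert course_level("CS101") == 1   # freshman / introductory
assert course_level("CS215") == 2   # sophomore level
assert course_level("CS373") == 3   # junior level
```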
It varies from institution to institution, but generally the first number indicates the year you're likely to study it, so "Psychology 101" is the first course you're likely to study in your first year of a degree involving psychology, which is why it's the introduction to the subject. The numbering gets messy for a variety of reasons.
I should point out I'm not an American university student, but this style of numbering system is becoming prevalent throughout the English-speaking world.
101's stereotypically the introduction to the course, but this sort of thing actually varies quite a bit between universities. Mine dropped the first digit for survey courses and introductory material; survey courses were generally higher two-digit numbers (i.e. Geology 64, Planetary Geology), while introductory courses were more often one-digit or lower two-digit numbers (i.e. Math 3A, Introduction to Calculus). Courses intended to be taken in sequence had a letter appended. Aside from survey courses, higher numbers generally indicated more advanced or specialized classes, though not necessarily more difficult ones.
Three digits indicated an upper-division (i.e. nominally junior- or senior-level) or graduate-level course. Upper-division undergrad courses were usually 100-level, and the 101 course was usually the first class you'd take that was intended only for people of your major; CS 101 was Algorithms and Abstract Data Types for me, for example, and I took it late in my sophomore year. Graduate courses were 200-level or higher.
I don't fully follow or understand the "timeless decision" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent will do by simulating that agent's algorithm. (This is my very uninformed understanding of the "timeless" part: I don't have to wait until you do X, because I can already predict whether you would do X, and behave accordingly. And you don't have to wait for my reaction, because you can already predict it too. So let's predict-cause each other to cooperate, and win mutually.)
If I am correct, there is a problem with this: having access to another agent's code does not, in the general case, allow you to draw any conclusions.
You can only simulate one specific situation, then another, all while hoping that the other agent does not try to simulate you in turn, which would send you both into an infinite loop. And you can't even tell in advance whether the agent will run your simulation or not.
Thinking in terms of "simulating their algorithm" is convenient for us because we can imagine the agent doing it, and for certain problems a simulation is sufficient. However, the actual process involved is any reasoning at all based on the algorithm. That includes simulations, but it also includes constructing mathematical proofs about the algorithm that yield generalizable conclusions about what the other agent will or will not do.
An agent that wishes to facilitate cooperation, or to make a credible threat, will actually prefer to structure its own code so that it is as easy as possible to prove things about it and draw conclusions from it.
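To make the distinction concrete, here is a minimal sketch (all names invented for illustration) of an agent that draws a conclusion from another agent's source code without simulating it at all: it cooperates exactly when the opponent's source is byte-for-byte identical to its own, so there is no regress of simulations.

```python
# A "clique bot": cooperates iff the opponent's source code is
# identical to its own. It reasons about the code as text, so there
# is no simulation and no risk of an infinite loop of simulations.

CLIQUE_SOURCE = "return 'C' if opponent_source == CLIQUE_SOURCE else 'D'"

def clique_bot(opponent_source: str) -> str:
    """Return 'C' (cooperate) or 'D' (defect) given the opponent's source."""
    return "C" if opponent_source == CLIQUE_SOURCE else "D"

# Two copies of the bot recognize each other and cooperate;
# any other program gets defection.
print(clique_bot(CLIQUE_SOURCE))       # -> C
print(clique_bot("while True: pass"))  # -> D
```

This is of course a crude recognizer (a single changed character breaks cooperation); proof-based approaches generalize it by proving properties of the opponent's code rather than testing literal equality.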
Is anyone familiar with any statistical or machine-learning-based evaluations of the "Poverty of Stimulus" argument for language innateness (the hypothesis that language must be an innate ability because children aren't exposed to enough language data to learn it properly in the time they do)?
I'm interested in hearing what actually is and isn't impossible to learn from someone in a position to actually know (i.e., not a linguist).
I was looking at this exact question a few months ago, and found these to be quite LW-reader-salient:
The Case of Anaphoric One
Poverty Of The Stimulus - A Rational Approach
Does anyone know of a good guide to Gödel's theorems along the lines of the cartoon guide to Löb's theorem?
If you believe that some model of computation can be expressed in arithmetic (this implies expressibility of the notion of a correct proof), Gödel's first theorem is more or less an analysis of "This statement cannot be proved". If it can be proved, it is false and there is a provable false statement; if it cannot be proved, it is an unprovable true statement.
But most of the effort in proving Gödel's theorem has to be spent on proving that you cannot go halfway: if you have a theory big enough to express basic arithmetical facts, you have to have full reflection. This can be stated in various ways, but it requires a technically accurate proof, and I am not sure how well it would fit into a cartoon.
Could you state explicitly what you want to find: just the non-technical part, or both?
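The self-referential sentence described above can be written out a little more formally. This is only a sketch, assuming the standard arithmetized provability predicate Prov(x) for PA and Gödel numbering ⌜·⌝:

```latex
% Diagonal lemma: for the formula \neg\mathrm{Prov}(x) there is a
% sentence G (the Gödel sentence) such that
\mathrm{PA} \vdash G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner)
% Case 1: if PA proves G, then Prov(⌜G⌝) holds, so PA also proves
% the (then false) sentence ¬Prov(⌜G⌝), contradicting soundness.
% Case 2: if PA does not prove G, then ¬Prov(⌜G⌝) is true, hence
% G is true but unprovable.
```

The "cannot go halfway" part mentioned above is exactly the work of showing that Prov(x) can be defined inside the theory at all.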
Actually that was pretty much enough.
I find that, sporadically, I act like a total attention whore around people whom I respect and may talk to more or less freely - whether I know them or we're only distantly acquainted. This mostly includes my behavior in communities like this, but also in class and wherever else I can interact informally with a group of equals. I talk excitedly about myself, about various things that I think my audience might find interesting, etc. I know it might come across as uncouth, annoying and just plain abnormal, but I don't even feel a desire to stop. It's not due to any drugs either. When I see that I've unloaded too much on whoever I'm talking to, I try to apologize and occasionally even explain that I have a neural condition.
I believe that it's a side effect of my deprogramming myself from social anxiety after getting all shaken up by Evangelion. In high school and earlier, I was really, really shy: I resented having to talk to anyone but a few friends, felt rage at being dragged into conversations, etc. But now it's like my personality has shifted a deviation or two towards the extraverted side. So such impulses, which were very rare in my childhood, became prominent, and this weirds me out. I still have a self-image of a very introverted guy, but now I'm often compelled to behave differently.
[This comment was caused by such an impulse too. Again, I'm completely sober, emotionally neutral and so on. I just have the urge to speak up.]
With regards to Optimal Employment, what does anyone think of the advice given in this article?
That works out (for the benefit of other Europeans) at €80,000, an astonishing amount of money to me at least. LA seems like a cool place, with a lot of culture and more interesting places within easy traveling distance than Dublin has.
Honestly, moving to L.A. to seek a rare super-high paying waiter job seems like a terrible idea to me.
That's the main issue I've been having with employment here; though I'm a good waiter, most places want two years' experience in fine dining, which I don't have.
I don't know if the claim is true or not, but I don't find it too implausible. It helps to remember that LA is frequented by a great many newly wealthy celebrities.
It does not follow that my chances of getting such a job in L.A. are high enough to be worth considering.