If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
I was chatting with Toby Ord recently about a series of events we think we've observed in ourselves:
I wish I had an intuitive name for this, which made analogy to some similar process, à la "evaporative cooling of group beliefs." But the best I've heard so far is the "pruning effect."
It may not be a special effect, anyway, but just a particular version of the effect whereby a scenario you spend a lot of time thinking about feels intuitively more probable than it should.
I hadn't realized it before, but the usual take on non-empathic people-- that they will treat other people very badly-- implies that most people think that mistreating people is a very strong temptation and/or reliably useful.
One of the principles of interesting computer games is that sometimes a simple action by the player leads to a lot of response from the game. This has an obvious application to why hurting people might be fun.
Account by someone who's highly dependent on prescription hormones to function, some description of the difficulties of finding a doctor who was willing to adjust the hormones properly, a little about the emotional effects of the hormones, and a plea to make hormones reliably available. It sounds like they're almost as hard to get reliably as pain meds.
Moldbug weighs in on the Techcrunch thing, with words on LW:
Alexander is a disciple of the equally humorless "rationalist" movement Less Wrong, a sort of Internet update of Robespierre's good old Cult of Reason, Lenin's very rational Museums of Atheism, etc, etc. If you want my opinion on this subject, it is that - alas - there is no way of becoming reasonable, other than to be reasonable. Reason is wisdom. There is no formula for wisdom - and of all unwise beliefs, the belief that wisdom can be reduced to a formula, a prayer chant, a mantra, whatever, is the most ridiculous.
We're so humorless that our primary piece of evangelical material is a Harry Potter fanfiction.
It's "humorless" that hurts the most, of course.
Out of interest, does anyone here have a positive unpacking of "wisdom" that makes it a useful concept, as opposed to "getting people to do what you want by sounding like an idealised parental figure"?
Is it simply "having built up a large cache of actually useful responses"?
I think Moldbug is somewhat on target
In this case he could not be farther off target if he tried. Yvain's writings are some of the best, most engaging, most charitable and most reasonable anywhere online. This is widely acknowledged even by those who disagree with him.
It is interesting how similar in style and thought patterns this is to the far left rant about LW from a few months ago.
Cult accusations; criticism by way of comparison to things one doesn't like simply because they bear similar names; use of ill-defined terms as part of that criticism; bizarre analogies to minimally historically connected individuals (Shabtai Tzvi? Seriously? Also, does Moldbug realize what that one particularly sounds like given Eliezer's background?); phrasing things in terms of conflicts of power rather than in terms of what might actually be true; operating under the strong presumption that people who disagree with one are primarily motivated by ulterior motives rather than their stated ones, especially when those ulterior motives would support one's narrative.
Starting today, Monday 25 November 2013, some Stoic philosophers are running "Stoic Week", a week-long mass-participation "experiment" in Stoic philosophy and whether Stoic exercises make you happier.
There is more information on their blog.
To participate, you have to complete the initial exercises (baseline scores) by midnight today (wherever you are), Monday 25 November.
Recently, a planetary system similar to our own solar system was found. This is one of the first cases where one has rocky planets near the star and large gas giants farther away, like our own solar system. Unlike our system, this one apparently has everything fairly close in, with all the planets closer to the star than the Earth is to the Sun.
You know, I really do feel like I am clinging bitterly to my priors and my meta at this point, as I joked on Twitter recently. I knew this was inevitable should our presence ever be noticed by anyone actually important, like a journalist. What I didn't know was that it would still hurt.
This emotion shows in my reply. Should I delete it?
You shouldn't be upset by the initial media coverage, and I say this as someone who doesn't identify with neo-reactionary thought. Attacking new social movements is NOT inevitable. It is a sign of growth and a source of new adherents. Many social movements never pick up enough steam to receive negative coverage, and those movements are ineffective. Lots of people who have never heard of neo-reactionaries will read this article, note that parts of it are pretty obvious bullshit (even the parts that are intended to be most negative; lots of people privately believe that IQ and race are connected even if they are publicly unwilling to say anything of the sort), and follow the links out of interest. There are many very smart people that read TechCrunch, and don't automatically agree with a journalist just because they read an article. Obviously this is bad for Peter Thiel, who is basically just collateral damage, but it's most definitely good for neo-reactionaries.
Gandhi's famous quote ("First they ignore you, then they laugh at you, then they fight you, then you win.") is accurate as to the stages that a movement needs to pass through, although obviously one can be stopped at any given stage. I think we are already seeing these stages play out in the Men's Rights movement, which is further along the curve than neo-reaction.
Clinging bitterly to your priors and your meta sounds like a sign you should update, and that's more important than deleting or not deleting a blog comment.
As for your comment, the first two paragraphs are fine, perhaps even provide helpful clarification. The sarcasm in the second paragraph is probably unhelpful, though; maybe just edit the comment.
To be more clear, HBDers claim not just that humans differ significantly at a genetic level (that's pretty uncontroversial: I don't think anyone is going to argue that genetically inherited diseases aren't a thing, for example). As far as I can tell, HBDers believe that most or almost all mental traits are genetically determined. Moreover, HBDers seem to generally believe that these genetic traits are distributed in the population in ways that closely match what are normally seen as ethnic and racial groups, and that this explains most of the racial differences in IQ scores, life success, and rates of criminal activity.
The anti-reaction FAQ describes it as: "Neoreaction is a political ideology supporting a return to traditional ideas of government and society, especially traditional monarchy and an ethno-nationalist state. It sees itself opposed to modern ideas like democracy, human rights, multiculturalism, and secularism." As far as I'm aware, neoreactionaries do not object to that description.
I feel this is a stupid question, but I'd rather ask it than not know: why would anyone want that? I can understand opposing things like democracy, secularism, and multiculturalism, but replacing them with a traditional monarchy just doesn't seem right. And I don't mean morally; I just don't see how it could create a working society.
I can fully understand opposing certain ideas, but if you're against democracy because it doesn't work, why go to a system of governance that has previously shown not to work?
If you accept the criticism it makes of democracy, you are already basically Neoreactionary. Only about half of them advocate monarchy as what should replace our current order; remember, no one said the journalist did an excellent job reporting about us. While I can't speak for those who do advocate monarchy, only for myself, here are some of my reasons for finding it well worth investigating and advocating:
Good enough - You need not think it an ideal form of government, but if you look at it and conclude it is better than democracy and nearly anything else tried from time to time so far, why not advocate for it? We know it can be done with humans and can be stable. This is not the case with some of the proposed theoretical forms of government. Social engineering is dangerous; you want fail-safes. If you want to be careful and small-c conservative, it is hard to do better than monarchy: it is as old as civilization, an institution that can create bronze-age empires or transform a feudal society into an industrial one.
Simplicity - Of the other proposed alternative forms of government, it is the one most easily and accurately explained to nearly anyone. Simplicity and emoti...
If you accept the criticism it makes of democracy you are already Neoreactionary.
That sounds like a hell of a package deal fallacy to me.
If you accept the criticism it makes of democracy you are already basically Neoreactionary
And if we accept the Reactionary criticisms of democracy and the Progressive criticisms of aristocracy and monarchy? What then?
Then you get to happily look down on everyone's naive worldviews until you realize that the world is fucked and go cry in a corner.
I am curious why Switzerland isn't more popular among people who want to change the political system. It has direct democracy, decades of success, few problems...
The cynical explanation is that promoting a system someone else invented and tested is not so good for signalling.
I am curious why Switzerland isn't more popular among people who want to change the political system. It has direct democracy, decades of success, few problems...
The correct question is whether Switzerland's success is caused by its political system. If not, emulating it won't help.
We can at least be sure that Switzerland's success hasn't been prevented by its political system. This isn't a proof that the system should be copied, but it's at least a hint that it should be studied.
I can fully understand opposing certain ideas, but if you're against democracy because it doesn't work, why go to a system of governance that has previously shown not to work?
The obvious question here is, why do you think monarchy has been "shown not to work"? Is it because monarchies have had a tendency to turn into democracies? Or perhaps because historical monarchies didn't have the same level of technology that modern liberal democracies enjoy?
That question is kinda obvious. Thanks for pointing it out.
From what I remember from my History classes, monarchies worked pretty okay with an enlightened autocrat who made benefiting the state and the populace his or her prime goal. But the problem there was that they didn't stay in power, and they had no real way of making absolutely sure their children had the same values. All it takes to mess things up is one eldest son (or daughter, if you do away with Salic law) who cares more about their own life than those of the population.
So I don't think technology level plays a decisive role. It probably will improve things for the monarchy, since famines are a good way to start a revolution, but giving absolute power to people without a good fail-safe when you've got a bad ruler seems like a good way to rot a system from the inside.
I was in a Chinese university around George W. Bush's second election and afterwards, which didn't make it easy to convince Chinese students that democracy was a particularly good system for picking competent leaders (Chinese leaders are often graduates of prestigious universities like Tsinghua (where I was), which is more like MIT than like Yale, and they are generally very serious and competent, though not particularly telegenic). On the other hand, the Chinese system gets you people like Mao.
I don't think Mao could exactly be said to be a product of the Chinese system, seeing as unless you construe the "Chinese system" to include revolutions, it necessarily postdates him.
I totally agree, and in addition, Mao is the kind of leader that could get elected in a democracy.
However, a democracy may be better at getting rid of someone like Mao than China was (provided the democracy lasts).
I'm not necessarily saying that democracy is the best thing ever. I just have issues jumping from "democracies aren't really as good as you're supposed to believe" to "and therefore a monarchy is better."
How sure are you that what you are taught is a complete and unbiased analysis of political history, carried out by sufficiently smart and rational people that massive errors of interpretation are unlikely, and transmitted to you with high fidelity?
I don't think you have to be (certainly I am not) not to put much credence in Reaction. From the premise that political history is conventionally taught in a biased and flawed manner, it does not follow that Reaction is unbiased or correct.
The tendency to see society as being in a constant state of decline, descending from some golden age, is positively ancient, and seems to be capable of arising even in cases where there is no real golden age to look back on, unless society really started going downhill with the invention of writing. There is no shortage of compelling biases to motivate individuals to adopt a Reactionary viewpoint, so for someone attempting to judge how likely the narrative is to be correct, they need to look, not for whether there are arguments for Reaction at all, but whether those arguments are significantly stronger than they would have predicted given a knowledge of how well people tend to support other ideologies outside the mainstream.
In fact, Einstein was pretty politically active and influential, largely as a socialist, pacifist, and mild Zionist.
Napoleon was a populist Revolutionary leader. That should be well-understood.
I'm not convinced that this is a meaningful category. It is similarly connected to how you blame assassins and other issues on the populist revolutions: if historically monarchies led to these repeatedly, then there's a definite problem in saying that that's the fault of the demotist tendencies, when the same things have not by and large happened in democracies once they've been around for a few years.
Also, while Napoleon styled himself as a populist revolutionary leader, he came to power from the coup of 18 Brumaire, through military strength, not reliance on the common people. In fact, many historians see that event as the end of the French Revolution.
While I understand that responding to everything Yvain has to say is difficult, I'd rather read a complete and persuasive response three months from now than an unpersuasive one right now. By all means, feel free to take your time if you need it.
There are three decent starting points:
All of these have issues. I like Nick Land's best; Moldbug is probably easier to read if you are used to the writing style here; Scott is the best writer of the three, but his is deficient and makes subtle mistakes since he isn't reactionary.
My own summary of some points that are often made would be:
If you build a society based on consent, don't be surprised if consent factories come to dominate your society. What reactionaries call the Cathedral is machinery that naturally arises when the best way to power is hacking the opinions of masses of people to consent to whatever you have in store for them. We claim the beliefs this machine produces have no consistent relation to reality and are just stuck in a feedback loop of giving itself more and more power over society. Power in society thus truly lies with the civil service, academia, and journalists, not elected officials, who have very little to do with actual governing. This can be shown by interesting examples like the EU repeating referendums until they achieve the desired results, or Belgium's 589 days without an elected government. Their nongovernment managed to have little difficulty doing things with important political implications, like nationalizing a major bank.
Moral Progress hasn't happened. Moral change has, we rationalize the latter as progress. Whig history is bunk.
The modern world allows only a very small window of allowed policy experimentation. Things like
Moral Progress hasn't happened. Moral change has, we rationalize the latter as progress. Whig history is bunk.
Do you think the decline of lynching is mere change rather than progress?
The claim that the morality of a society doesn't steadily, generally, and inexorably increase over time is not the same as the claim that there will be no examples of things that can be reasonably explained as increases in societal morality. If morality is an aggregate of bounded random walks, you'd still expect some of those walks to go up.
To return to the case at hand: the decline of lynching may be an improvement in one area, but you have to weigh it against the explosions in the imprisonment and illegitimacy rates, the total societal collapse of a demographic that makes up over a tenth of the population, drug abuse, knockout games, and so on.
How is causality relevant? The absence of continuous general increase is enough to falsify the Whig-history hypothesis, given that the Whig-history hypothesis is nothing more than the hypothesis of continuous general increase -- unless we add to the hypothesis the possibility of 'counterrevolutionary' periods where immoral, anti-Whig groups take power and immorality increases, but expressing concern over things like illegitimacy rates, knockout games, and inner-city dysfunction is an outgroup marker for Whigs.
Maybe I can. It seems Eliezer was hurriedly trying to make the point that he's not affiliated with neoreactionaries, out of fear of the name of LessWrong being besmirched.
It's definitely true, I think, that Eliezer is not a neoreactionary and that LessWrong is not a neoreactionary place. Perhaps the source of confusion is that the discussions we have on this website are highly unusual compared to the internet at large and would be extremely unfamiliar and confusing to people with a more politically-oriented, mind-killed mindset.
For example, I could see how someone could read a comment like "What is the utility of killing ten sad people vs one happy person" (that perhaps has a lot of upvotes) - which is a perfectly valid and serious question when talking about FAI - and erroneously interpret that as this community supporting, say, eugenics. Even though we both know that the person who asked that question on this site probably didn't even have eugenics cross their mind.
(I'm just giving this as an example. You could also point to comments about democracy, intersexual relationships, human psychology, etc.)
The problem is that the inferential distance between these sorts of discussions and political discussions is just too large.
Instead of just being reactionary and saying "LessWrong doesn't support blabla", it would have been better if Eliezer had just recommended that the author of that post read the rationality materials on this site.
LessWrong is about the only public forum outside their own blog network that gives neoreaction any airtime at all. It's certainly the only place I've tripped over them.
On the other hand, I at least found the conversation about neoreaction on LW to be vague and confusing and had basically no idea of what the movement was about until I read Yvain's pieces.
Eliezer's comment hurt my feelings and I'm not sure why it was really necessary. Responding to something just reinforces the original idea. If rationalists want to reject the Enlightenment, we should have every right to do so, without Eliezer proclaiming that it's not canon for this community.
You claim a right not to have your feelings hurt that overrules Eliezer's right to speak on the matter? That concept of offense-based rights and freedom to say only nice things is one that I am more used to seeing neoreactionaries find in their hated enemies, the progressives. Are you sure you know where you are actually standing?
Eliezer has made a true statement: that neoreaction is not canon for LessWrong or MIRI, in response to an article strongly suggesting the opposite.
Elsethread you write:
The fact that Eliezer felt the need to respond explicitly to these two points with an official-sounding disavowal shows hypersensitivity
So Eliezer shouldn't say anything, because:
Apparently the supposed Streisand effect applies to him responding to Klint but not to you responding to him. How does that...
Your response to Eliezer, both here and in the other thread, comes across as a completely unjustified refusal to take his comment at face-value: Eliezer explaining that he concluded your views were not worth spending time on for quite rational reasons, and is saying so because he doesn't want people thinking he or the majority of the community he leads hold views which they don't in fact hold.
This seems to be part of a pattern with you: you refuse to accept that people (especially smart people) really disagree with you, and aren't just lying about their views for fear of reputational consequences. It's reminiscent of creationists who insist there's a big conspiracy among scientists to suppress their revolutionary ideas. And it contributes to me being glad that you are no longer working for MIRI, for much the same reasons that I am glad MIRI does not employ any outspoken creationists.
Not mean-spirited. Just honest. If this were a private conversation, I'd keep my thoughts to myself and leave in search of more rational company, but when someone starts publicly saying things like...
(all of which are grossly unfair readings of Eliezer's comment)
...then I think some bluntness is called for.
Hm, I didn't feel that Eliezer was being particularly dismissive (and am somewhat surprised by the level of the reactions in this thread here). The original post sort-of insinuated that MIRI was linked to neoreaction, so Eliezer correctly pointed out that MIRI was even more closely linked to criticism of Neoreaction, which seems like what anybody would do if he found himself associated with an ideology he disagreed with - regardless of the public relations fallout of that ideology.
Then the author referred to a "conspiracy," which he admits is just a joke and explicitly says he doesn't actually believe in it.
I routinely read "I was only joking" as "I meant every word but need plausible deniability."
If he actually wanted to achieve the "get it off me" goal, indifference would be a more effective response.
Silence is often consent & agreement.
Given the things PG has said at times, I'm not sure that is a wrong interpretation of matters. Modus ponens, tollens...
There's a difference between "neoreactionary" and "expresses skepticism against Progressive Orthodoxy".
Are you and Konkvistador using the word with different meanings, the former narrower and the latter broader? or am I missing something? or...
You should know perfectly well that as long as MIRI needs to coexist and cooperate with the Cathedral (as colleges are the main source of mathematicians) they can't afford to be thought of as right wing. Take comfort at least in knowing that whatever Eliezer says publicly is not very strong evidence of any actual feelings he may or may not have about you.
I can't figure out whether the critics believe the Cathedral is right-wing paranoia or a real thing.
MIRI is seen as apolitical. I doubt an offhand mention in a TechCrunch hatchet job is going to change that, but a firm public disavowal might, per the Streisand effect.
It's complicated. We reject some parts of the Enlightenment but not all. Jayson just listed three of my favorite monarchs, actually.
Marcus Hutter just had a little article about AIXI published on the Australian website "The Conversation". Not much there that will be new to LW readers, though. Includes a link to his presentation at the 2012 Singularity Summit.
Discussion on Hacker News. (As happens far too often on HN, the link there is to the blogspam site phys.org rather than to the original source.)
I don't see it, for instance, here at Less Wrong.
I do sometimes get mildly annoyed at people linking to a generic news site about some research (nytimes, etc.) instead of digging up a source as close to the original as possible.
It may not be "blogspam" proper, but it's research or tech being summarized by journalists, which tends to be less accurate than the original source.
I just realized it's possible to explain people picking dust specks in the torture vs. dust specks question using only scope insensitivity and no other mistakes. I'm sure that's not original, but I bet this is what's going on in the head of a normal person when they pick the specks.
The dust speck "dilemma" - like a lot of the other exercises that get the mathematically wrong answer from most people - is triggering a very valuable heuristic: the "you are trying to con me into doing evil, so fuck off" heuristic.
Consider the problem as you would if it were a problem you were presented with in real life.
The negative utility of the "Torture" choice is nigh-100% certain. It is in your physical presence, you can verify it, and "one person gets tortured" is the kind of event that happens in real life with depressing frequency. The "Billions of people get exposed to very minor annoyance" choice? How is that causal chain supposed to work, anyway? So that choice gets assigned a very high probability of being a lie.
And it is the kind of lie people encounter very frequently. False hypotheticals in which large numbers of people suffer if you do not take a certain action are a common lever for cons. From a certain perspective, this is what religion is - attempts to hack people's utility functions by inserting absurdly large numbers into the equations, so that if you assign any probability at all to them being true, they become dominant.
So claims that look like this class of attack routinely get assigned a probability of zero unless they have very strong evidence backing them up because that is the only way to defend against this kind of mental malware.
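The arithmetic behind that "utility-function hack" can be made concrete with a toy expected-value calculation. All the numbers below are made up purely for illustration: a near-certain modest payoff loses to a wildly improbable claim, as long as the claimed payoff is large enough and you assign it any nonzero probability.

```python
# Toy expected-value comparison illustrating the "absurdly large numbers" hack.
# These figures are invented for illustration, not taken from any real dilemma.

p_mundane, v_mundane = 0.99, 100       # near-certain, modest payoff
p_mugging, v_mugging = 1e-9, 1e15      # wildly improbable, absurdly large claimed payoff

mundane = p_mundane * v_mundane        # expected value of the mundane option
mugging = p_mugging * v_mugging        # expected value of the con artist's claim

# The con's claim dominates the naive calculation despite its tiny probability,
# which is exactly why the "assign it probability zero" heuristic exists.
print(mundane, mugging)  # 99.0 1000000.0
```

The heuristic described above amounts to refusing to plug the con artist's claimed payoff into the calculation at all, rather than trying to out-multiply it.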
I know that MIRI doesn't aim to actually build a FAI and they mainly try to provide material for those that seriously try to do it in the future. At least this is my understanding, correct me if I'm wrong.
But has the work MIRI has done so far brought us closer to building a working FAI or any kind of AGI for that matter? Even a tiny bit?
Not sure where exactly to ask but here goes:
Sparked by the recent thread(s) on the Brain Preservation Foundation and by my Grandfather starting to undergo radiation + chemo for some form of cancer. While timing isn't critical yet, I'm tentatively trying to convince my Mother (who has an active hand in her Father's treatment) to consider preservation as an option.
What I'm looking for is financial and logistical information of how one goes about arranging this starting from a non-US country, so if anyone can point me at it I'd much appreciate it.
A very readable new paper on causality on Andrew Gelman's blog: Forward causal inference and reverse causal questions. It doesn't have any new results, but motivates asking "why" questions in addition to "what if" questions to facilitate model checking and hypothesis generation. Abstract:
...The statistical and econometrics literature on causality is more focused on “effects of causes” than on “causes of effects.” That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to defin...
I recently started using Habit RPG, which is a sort of gamified to-do list where you get gold and XP for doing your tasks and not doing what you disapprove of.
Previously I had been mostly using Wunderlist (I also tried Remember The Milk, but found the features too limited), and so far Habit RPG looks better than Wunderlist on some aspects (more fine-grained control of the kind of tasks you put in it, regular vs. one-off vs. habits), and of course has an extra fun aspect.
Anybody else been trying it? (I saw it mentioned a few times on LW) Anybody else want to try?
LW meta: I have received a message from “admin”:
We were unable to determine if there is a Less Wrong wiki account registered to your account. If you do not have an account and would like one, please go to your preferences page.
I have seen, indeed, options to create a wiki account. But I already have one; how do I properly associate the existing accounts?
Experience that I had recently that I found interesting:
So, you may have noticed that I'm interested in causality. Part of my upcoming research is using pcalg (which you may have heard of) to identify the relationships between sensors on semiconductor manufacturing equipment, so that we can apply work done earlier in my lab where we identify which subsystem of a complex dynamic system is the root cause of an error. It's previously been applied in automotive engineering, where we have strong first principles models of how the systems interact, but now we wa...
...What is the problem with grounding medical practice in the cold logic of numbers? In theory, nothing. But in practice, as decades of work in fields like behavioral economics have shown, people — patients and doctors alike — often have a hard time making sense of quantified risks. Douglas B. White, a researcher at the University of Pittsburgh, has shown that the family members of seriously ill patients, when presented with dire prognoses, typically offer quite variable understandings not only of qualitative terms such as “extremely like
Good Judgment Project is temporarily signing up more participants: http://www.goodjudgmentproject.com/
(Fairly easy way to make $100 a year or so, if that's a motivating factor.)
Ron Arkin, author of Governing Lethal Behavior in Autonomous Robots, on autonomous lethal robots:
...I will posit that we can actually reduce civilian casualties through the use of [autonomous lethal robots], just as we do with precision-guided munitions, if... the technology is developed carefully. Only under those circumstances should the technology be released into the battlefield...
I am not a proponent for lethal autonomous robots... The question is... if we are in a war, how can we ensure these systems behave appropriately?
...I am not averse to a ban...
I often find that posts with few votes nonetheless come with a lively and upvoted discussion.
The listing of posts shows the karma of the post but gives no indication of the volume and quality of the discussion.
At least on the display of a post, it should be easy to show an additional karma score indicating the sum total of the comments. That'd give an indication of the aggregate.
This would improve awareness of post discussions. On the other hand, such a scoring might further drain away participation from topics which fail to attract discussion.
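The aggregate score proposed above is just a sum over a post's comments. As a minimal sketch (the `Post` and `Comment` classes here are illustrative stand-ins, not the actual LessWrong codebase):

```python
# Hypothetical sketch: display an aggregate discussion score next to a
# post's own karma by summing the karma of all its comments.

from dataclasses import dataclass, field

@dataclass
class Comment:
    karma: int

@dataclass
class Post:
    karma: int
    comments: list = field(default_factory=list)

def discussion_score(post: Post) -> int:
    """Sum of comment karma, shown alongside the post's own karma."""
    return sum(c.karma for c in post.comments)

post = Post(karma=3, comments=[Comment(10), Comment(4), Comment(-2)])
print(post.karma, discussion_score(post))  # 3 12
```

A post listing could then show both numbers (e.g. "3 / 12"), making a low-karma post with a lively comment section visible at a glance.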
I am interested in getting better at negotiation. I have read lots on the subject, but I have realised that I was not very careful about what I read, and what evidence the authors had for their advice. I have stumbled on a useful heuristic to find better writing.
The conventional wisdom in negotiating says you should try very hard to not be the first person to mention a price.
I can see that, if you're the seller, it may make sense to try to convince the buyer that they want the product before you start the price negotiation.
When it comes to the price negoti...
A while back, I tried to learn more about the efficacy of various blindness organizations. Recently, I realized that (1) I really need some sort of training, and (2) my research method was awful, and I should have just looked up notable blind people and looked for patterns in their training.
I avoided doing the "look at blind people" research because it sounded boring, until less than an hour ago, when I just opened all the Wikipedia pages on blind Americans and skimmed them all for the information in question.
Either most notable blind Americans...
Just finished MaddAddam, the conclusion of Margaret Atwood's dystopian Oryx and Crake trilogy. It is very well written and rather believable. Her dry sardonic humor is without peer. There are many LW-relevant ideas and quotes in it, but here is the most un-LW one, about a cryopreservation company called CryoJeenyus:
..."Your friend has unfortunately had a life-suspending event"
"I’m so sorry for your temporary loss."
"Temporary Inertness Caretaker"
"a ferrying of the subject of a life-suspending event from the shore of life on
Would anyone be interested in going to a LW meetup in the North-East of England? I'm thinking Newcastle or Durham.
What the hell is going on with all the ads here? I've got keywords highlighted in green that pop up ads when you mouse over them, stuff in the top and sidebars of the screen, popups when loading new pages... all of this since yesterday.
Normally I would think this sort of thing meant I had a virus (and I am scanning for one with everything I have) but other people have been complaining about stuff like this as well over the last few days.
I would be glad to donate if the site needs more money to stay up, but this is absolutely unacceptable.
[Edit: Never mind, it really was a virus.]
This is an extraordinary claim by Eliezer Yudkowsky: that progress is a ratchet that moves in only one direction. I wonder what, say, the Native Americans circa 1850 thought about Western notions of progress. If you equate "power" with "progress", the claim is somewhat believable; but if you are also trying to morally characterize the arc of history, it sounds like you've descended into progressive cultism and fanaticism.
You could set up a comment voting system based on the theory that only high-value comments in response to other high-value comments indicate a healthy forum. Make any upvote or downvote on a child comment also apply to the parent comment it is replying to. You'd get a bonus of upvotes if someone replied to your upvoted comment with another upvoted comment, but people would be hesitant to reply to downvoted comments at all, since voters would likely withhold upvotes from their replies to keep the parent comment from gaining karma.
T...
We could test this scheme by applying it to the existing data and selecting the best comments under it. I would be interested in reading the "top 20 (or 50) LW comments ever" found by this algorithm, posted as a separate article. It would give us an approximate idea of what exactly the new system would incentivize.
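The propagation rule described above can be sketched in a few lines of Python. This is a toy model under the one-level-up assumption the comment states (a vote touches the comment and its immediate parent only); the class and function names are invented for illustration:

```python
# Hypothetical sketch of the proposed scheme: a vote on a comment
# also applies to the parent comment it is replying to.

class Comment:
    def __init__(self, parent=None):
        self.karma = 0
        self.parent = parent

def vote(comment, delta):
    """Apply a vote to a comment and propagate it one level up."""
    comment.karma += delta
    if comment.parent is not None:
        comment.parent.karma += delta

root = Comment()
reply = Comment(parent=root)
vote(reply, +1)   # upvoting the reply also upvotes the parent
vote(root, -1)    # a direct vote touches only the root
print(root.karma, reply.karma)  # prints: 0 1
```

Replaying the site's existing vote log through `vote` and ranking comments by the resulting karma would produce the retrospective "top comments under the new scheme" list suggested below.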
I don't really think this is relevant to LessWrong per se, but I'm wondering if any smart folks here have attempted to solve this "internet mystery":
The internet mystery that has the world baffled: For the past two years, a mysterious online organisation has been setting the world's finest code-breakers a series of seemingly unsolvable problems. But to what end? Welcome to the world of Cicada 3301
Even harder than recognizing proto-sciences-- how could you recognize a basis for a system which hasn't yet been invented?
The real world example is the collection of astronomical data long before there was any astronomy.
Apologies, I was referring to his comment at this thread: http://techcrunch.com/2013/11/22/geeks-for-monarchy/
"The ratchet of progress turns unpredictably, but it doesn't turn backward."
Virtue ethics versus consequentialism: The Neuroscientist Who Discovered He Was a Psychopath
I had actually been wondering about this recently. People define a psychopath as someone with no empathy, and then jump to "therefore, they have no morals." But it doesn't seem impossible to value something or someone as a terminal value without empathizing with them. I don't see why you couldn't even be a psychopath and an extreme rational altruist, though you might not enjoy it. Is the word "psychopath" being used two different ways (meaning a non-empathic person and meaning a complete monster), or am I missing a connection that makes these the same thing?