OH MY GOD. THAT WAS IT. THAT WAS VOLDEMORT'S PLAN. RATIONAL!VOLDEMORT DIDN'T TRY TO KILL HARRY IN GODRIC'S HOLLOW. HE WAITED ELEVEN YEARS TO GIVE HARRY A GRADE IN SCHOOL SO THAT ANY ASSASSINATION ATTEMPT WOULD BE IN ACCORDANCE WITH THE PROPHECY.
Duplicate comment, probably should be deleted.
Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren't that different. It may not be a good distinction.
I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.
Done, except for the digit ratio, because I do not have access to a photocopier or scanner.
Liberal here, I think my major heresy is being pro-free trade.
Also, I'm not sure if there's actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.
You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman's book Capitalism and Freedom. However, I suspect a politician running on Friedman's platform today wou...
and anyone smart has already left the business since it's not a good way of making money.
Can you elaborate? The impression I've gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they're doing, make money, and don't need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can't get in on investing with the first group, and can't tell the two groups apart.
A similar pattern appears to ...
Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.
I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor.
Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.
So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.
I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.
I like the idea of this fanfic, but it seems like it could have been executed much better.
EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."
So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)
Have you guys given any thought to doing pagerankish stuff with karma?
Can you elaborate more? I'm guessing you mean people with more karma --> their votes count more, but it isn't obvious how you do that in this context.
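To make my guess concrete, here's a toy sketch of one way "votes count more" could work, with karma defined as a damped fixed point of karma-weighted votes, PageRank-style. This is purely my own illustration, not anything the LW codebase actually does, and the users and constants below are made up:

```python
# Toy sketch only: karma as a fixed point where each vote is weighted
# by the voter's current karma, with PageRank-style damping.

def weighted_karma(votes, users, iterations=50, damping=0.85):
    """votes: list of (voter, target, value) tuples, value is +1 or -1."""
    karma = {u: 1.0 for u in users}
    for _ in range(iterations):
        new = {}
        for u in users:
            received = sum(karma[voter] * value
                           for voter, target, value in votes
                           if target == u)
            # Damping keeps everyone at a small baseline score, so a
            # handful of downvotes can't zero anyone out completely.
            new[u] = (1 - damping) + damping * max(received, 0.0)
        karma = new
    return karma

users = ["alice", "bob", "carol"]
votes = [("alice", "bob", +1), ("carol", "bob", +1), ("bob", "carol", -1)]
print(weighted_karma(votes, users))
```

In this toy version downvotes can only pull someone back toward the baseline; deciding how negative weight should propagate is exactly the kind of thing that isn't obvious in this context.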
Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as "the person named in the other thread" or something like that, but the people who were following the story knew what that meant.
I'm glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.
I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."
If that's the intended message, I submit that when someone is a public figure who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.
The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he has never commented on the issue, except maybe once when he mentioned something about not having the technical capability to identify the culprits (which is no longer a valid statement).
My guess is that he doesn't care nearly as much about LW in general now as he used to...
This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major ...
...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".
Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?
It is indeed a cue to look for motivated reasoning. I am not neglecting to do that. I have scrutinized extensively. It is possible to be motivated by very simple emotions while constraining the actions you take to the set endorsed by deliberative reasoning.
The observation that something fits the status-seeking patterns you've cached is not strong evidence that nothing else is going on. If you can write off everything anybody does by saying "status" and "signaling" without making predictions about their future behavior--or even looking i...
I love how understated this comment is.
Thanks for posting this. I don't normally look at the posters' names when I read a comment.
People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.
Maximizing your chances of getting accepted: Not sure what to tell you. It's mostly about the coding questions, and the coding questions aren't that hard—"implement bubble sort" was one of the harder ones I got. At least, I don't think that's hard, but some people would struggle to do that. Some people "get" coding, some don't, and it seems to be hard to move people from one category to another.
Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the...
Presumably. The question is whether we should accept that belief of theirs.
And the way to avoid catching false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that isn't detectable by automatic means can be detected if someone with sufficient database access takes a look when it's pointed out to them.
Right on. The solution to karma abuse isn't some sophisticated algorithm. It's a few extremely simple database queries, in plain English something along the lines of "return the list of downvotes cast by user A, and who was downvoted," "return the downvotes on posts/comments by user B, and who cast them," and "return the list of downvotes by user A on user B."
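In code, those plain-English queries might look something like the following. The table name and columns are a hypothetical schema I made up for illustration; the real LW database certainly looks different:

```python
import sqlite3

# Hypothetical schema, for illustration only:
#   votes(voter_id TEXT, target_author_id TEXT, direction INTEGER)
# where direction is +1 for an upvote and -1 for a downvote.

def downvotes_by(conn, voter_id):
    """All downvotes cast by one user, grouped by who they hit."""
    return conn.execute(
        """SELECT target_author_id, COUNT(*)
           FROM votes
           WHERE voter_id = ? AND direction = -1
           GROUP BY target_author_id
           ORDER BY COUNT(*) DESC""",
        (voter_id,),
    ).fetchall()

def downvotes_received(conn, author_id):
    """All downvotes on one user's posts/comments, grouped by who cast them."""
    return conn.execute(
        """SELECT voter_id, COUNT(*)
           FROM votes
           WHERE target_author_id = ? AND direction = -1
           GROUP BY voter_id
           ORDER BY COUNT(*) DESC""",
        (author_id,),
    ).fetchall()

def downvotes_a_on_b(conn, voter_id, author_id):
    """How many times user A downvoted user B."""
    (count,) = conn.execute(
        """SELECT COUNT(*) FROM votes
           WHERE voter_id = ? AND target_author_id = ? AND direction = -1""",
        (voter_id, author_id),
    ).fetchone()
    return count

conn = sqlite3.connect("forum.db")  # stand-in database file
```

A lopsided result from that last query is exactly the kind of thing the "someone with database access takes a look" step would catch.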
Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.
This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.
Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?
What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going...
Skimming the "disagreement" tag in Robin Hanson's archives, I found I few posts that I think are particularly relevant to this discussion:
Username explicitly linked to torture vs. dust specks as a case where it makes sense to use torture as an example. Username is just objecting to using torture for general decision theory examples where there's no particular reason to use that example.
But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions, which are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accompli...
I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.
FWIW, actual heuristics I use to determine who's worth paying attention to are
Your heuristics are, in my opinion, too conservative or not strong enough.
"Track record of saying reasonable things" once again seems to put the burden of decision on your subjective feelings, and so rules out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because t...
Oh, I see now. But why would Eliezer do that? Makes me worry this is being handled less well than Eliezer's public statements indicate.
Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths, if true, are necessarily true, therefore the Goldbach Conjecture is true." Or even "Possibly it's a necessary truth that pigs fly, therefore pigs fly."
(This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)
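That said, for anyone curious, here is the bare skeleton of the S5 step in symbols, in my own notation rather than Plantinga's exact formulation (G abbreviates "a maximally great being exists"):

```latex
\begin{align*}
&\text{Premise: } \Diamond \Box G
  \quad \text{(it is possible that $G$ holds necessarily)}\\
&\text{S5 theorem: } \Diamond \Box p \rightarrow \Box p\\
&\text{Hence } \Box G \text{, and so } G.\\
&\text{Parody: } \Diamond \Box (\text{pigs fly}) \rightarrow \Box (\text{pigs fly}) \rightarrow \text{pigs fly}.
\end{align*}
```

The parody versions go through in exactly the same way, which is the point: granting the "possibly necessary" premise is already granting the conclusion.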
People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.
My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yo...
His assertion that there is no way to check seems to me a better outcome than these posts shouting into the wind that don't get any response.
Did he assert that, exactly? The comment you linked to sounds more like "it's difficult to check." Even that puzzles me, though. Is there a good reason for the powers that be at LessWrong not to have easy access to their own database?
The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.
You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.
Abuse of the karma system is a well-known problem on LessWrong, which the admins appear to have decided not to do anything about.
Update: actually, it appears Eliezer has looked into this and not been able to find any evidence of mass-downvoting.
How much have you looked into potential confounders for these things? With the processed meat thing in particular, I've wondered what could be so bad about processing meat, and whether this could be one of those things where education and wealth are correlated with health, so that if wealthy, well-educated people start doing something, it becomes correlated with health too. In this particular case, the story would be that processed meat is cheap, and therefore eaten more by poor people, while steak tends to be expensive.
(This may be totally wrong, but it seems like an important concern to have investigated.)
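To make the worry concrete, here's a toy simulation with entirely made-up numbers. Processed meat has zero direct effect on health in this model, but because wealth drives both diet and health, a naive comparison still makes the meat eaters look less healthy:

```python
import random

# Toy simulation (all parameters invented) of a wealth confounder creating
# a processed-meat/health correlation with no causal link between them.
random.seed(0)

people = []
for _ in range(100_000):
    wealthy = random.random() < 0.5
    # Wealthier people eat less processed meat and are healthier, but
    # eating processed meat has no direct effect on health in this model.
    eats_processed_meat = random.random() < (0.2 if wealthy else 0.6)
    healthy = random.random() < (0.8 if wealthy else 0.5)
    people.append((eats_processed_meat, healthy))

def healthy_rate(group):
    return sum(h for _, h in group) / len(group)

eaters = [p for p in people if p[0]]
abstainers = [p for p in people if not p[0]]
print(f"healthy among processed-meat eaters: {healthy_rate(eaters):.2f}")
print(f"healthy among abstainers:            {healthy_rate(abstainers):.2f}")
```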
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
...I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn
Saying
Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?
sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).
...There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other
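To make that concrete, here's a toy sketch of that style of protocol (my own illustration of a Geanakoplos-Polemarchakis-type process, assuming a finite state space, a uniform common prior, and partition-style private information; not the exact procedure from any particular paper):

```python
from fractions import Fraction

def posterior(event, cell, public):
    """P(event | my private cell, everything announced so far), uniform prior."""
    info = cell & public
    return Fraction(len(event & info), len(info))

def refine(partition, event, public, reported):
    """Everyone learns which states are consistent with the reported posterior,
    so the commonly known set of possible states shrinks."""
    consistent = set()
    for cell in partition:
        if cell & public and posterior(event, cell, public) == reported:
            consistent |= cell & public
    return consistent

def agree(event, part1, part2, true_state, max_rounds=10):
    public = set().union(*part1)  # before any announcements, anything is possible
    cell1 = next(c for c in part1 if true_state in c)
    cell2 = next(c for c in part2 if true_state in c)
    for _ in range(max_rounds):
        q1 = posterior(event, cell1, public)
        public = refine(part1, event, public, q1)
        q2 = posterior(event, cell2, public)
        public = refine(part2, event, public, q2)
        print(f"agent 1 says {q1}, agent 2 says {q2}")
        if q1 == q2:
            return

# Nine equally likely states; the agents see different coarse-grainings.
agree(event={3, 4},
      part1=[{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],
      part2=[{1, 2, 3, 4}, {5, 6, 7, 8}, {9}],
      true_state=1)
# Agent 2 starts at 1/2, then works out from agent 1's repeated "1/3" which
# cells agent 1 could be in, and moves to 1/3 without ever being told
# agent 1's raw observation.
```

Agent 2 never sees agent 1's raw observation; they only infer what it must have been from the announced posteriors, which is the kind of convoluted chain of deduction the quote describes.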
Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.
Upvoted for publicly changing your mind.
Further, the idea that the tribe of Honest Except When I Benefit is the vast majority while Always Honest is a tiny minority is not one that I'll accept without evidence.
Here's one relevant paper: Lying in Everyday Life
I read that paper, and was distressed, so I set about finding other papers to disprove it. Instead I found links to it, and other works that backed it up. I was wrong. Liars are the larger tribe. Thanks for educating me.
We can't forecast anything, so let's construct some narratives...?
I think the point is more "good forecasting requires keeping an eye on what your models are actually saying about the real world."
In addition to mistakes other commenters have pointed out, it's a mistake to think you can neatly divide the world into "defectors" and "non-defectors," especially when you draw the line in a way that classifies the vast majority of the world as defectors.
Oops, sorry.
"Much of real rationality is learning how to learn from others."
I once talked to a theorist (not RBC, micro) who said that his criterion for serious economics was stuff that you can’t explain to your mother. I would say that if you can’t explain it to your mother, or at least to your non-economist friends, there’s a good chance that you yourself don’t really know what you’re doing.
--Paul Krugman, "The Trouble With Being Abstruse"
On philosophy, I think it's important to realize that most university philosophy classes don't assign textbooks in the traditional sense. They assign anthologies. So rather than read Russell's History of Western Philosophy or The Great Conversation (both of which I've read), I'd recommend something like The Norton Introduction to Philosophy.