Eliezer_Yudkowsky comments on The noncentral fallacy - the worst argument in the world? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The problem is that if you initiate it, it's subject to the Loss Aversion effect where the dissatisfied speak up in much greater numbers.
I don't see what this has to do with "loss aversion" (the phenomenon where people think losing a dollar is worse than failing to gain a dollar they could have gained), though that's of course a tangential matter.
The point here is -- and I say this with all due respect -- it looks to me like you're rationalizing a decision made for other reasons. What's really going on here, it seems to me, is that, since you're lucky enough to be part of a physical community of "similar" people (in which, of course, you happen to have high status), your brain thinks they are the ones who "really matter" -- as opposed to abstract characters on the internet who weren't part of the ancestral environment (and who never fail to critique you whenever they can).
That doesn't change the fact that this is an online community, and as such, it is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.
Do you have advice on how to run my dinner parties?
Vaniver and DaFranker have both offered sensible, practical, down-to-earth advice. I, on the other hand, have one word for you: Airship.
Not plastics?
Consider seating logistics, and experiment with having different people decide who sits where (or next to whom). Dinner parties tend to turn out differently with different arrangements, but different subcultures will have different algorithms for establishing optimal seating, so the experimentation is usually necessary (and having different people decide serves both as a form of blinding and as a way to turn up evidence to isolate the algorithm faster).
Huh, I haven't been assigning seats at all except for reserving the one with easiest kitchen access for myself. I've just been herding people towards the dining table.
Consider eating Roman-style, to increase the intimacy and as a novel experience. Unfortunately, this is made much easier with specialized furniture, but you should be able to improvise with pillows. It is also a radically different way to eat that predates the invention of the fork (and so will work fine with hands or chopsticks, but not with modern implements).
Was Eliezer "lucky" to have cofounded the Singularity Institute and Overcoming Bias? "Lucky" to have written the Sequences? "Lucky" to have founded LessWrong? "Lucky" to have found kindred minds, both online and in meatspace? Does he just "happen" to be among them?
Or has he, rather, searched them out and created communities for them to come together?
The online community of LessWrong does not own LessWrong. EY owns LessWrong, or some combination of EY, the SI, and whatever small number of other people they choose to share the running of the place with. To a limited extent it is for us, but its governance is not at all by us, and it wouldn't be LessWrong if it was. The system of government here is enlightened absolutism.
The causes of his being in such a happy situation (is that better?) were clearly not the point here, and, quite frankly, I think you knew that.
But if you insist on an answer to this irrelevant rhetorical question, the answer is yes. Eliezer_2012 is indeed quite fortunate to have been preceded by all those previous Eliezers who did those things.
Then, as I implied, he should just admit to making a decision on the basis of his own personal preference (if indeed that's what's going on), instead of constructing a rationalization about the opinions of offline folks being somehow more important or "appropriately" filtered.
I would replace "preference" with "hypothesis of what constitutes the optimal rationality-refining community".
They are effectively the same, but I find the latter a more useful reduction, one that is more open to being refined in turn.
This is a community blog. If your community has a dictator, you should overthrow him.
Is the overthrowing of dictators a terminal value to you, or is it that you associate it with good consequences?
A little of both. Freedom is a terminal value, and heuristically dictators cause bad consequences.
My own view: Dictators in countries tend to cause bad consequences. Dictators in forums tend to cause good consequences.
I'd like to point out that Overcoming Bias, back in the day, was a dictatorship: Robin and Eliezer were explicitly in total control. Whereas Less Wrong was explicitly set up to be community-moderated, with voting taking the place of moderator censorship. And the general consensus has always been that LW was an improvement over OB.
Do you have any evidence for that? In my experience, it all depends on the dictator, not on the venue.
It's easier to leave a forum than a country. Forum-dictators who abuse their power end up with empty forums.
Real world dictators who abuse their power often end up dead. (But perhaps not as much as real world dictators who do not abuse their power enough to secure it.)
Perhaps I misunderstood what ArisKatsaris was saying. I thought he meant something like this:
If this is true, your objection is somewhat tangential to the topic (though an empty forum is less desirable than an active one). But perhaps he meant something else?
Just my own personal experience of how moderated vs non-moderated forums tend to go, and as for countries, likewise my impression of what countries seem nice to live in.
You're probably right about modern countries; however, as far as I understand, historically some countries did reasonably well under a dictatorship. Life under Hammurabi was far from being all peaches and cream, but it was still relatively prosperous, compared to the surrounding nations. A few Caesars did a pretty good job of administering Rome; of course, their successors royally screwed the whole thing up. Likewise, life in Tsarist Russia went through its ups and downs (mostly downs, to be fair).
Unfortunately, the kind of person who seeks (and is able to achieve) absolute power is usually exactly the kind of person who should be kept away from power if at all possible. I've seen this happen in forums, where the unofficial grounds for banning a user inevitably devolve into "he doesn't agree with me" and "I don't like his face, virtually speaking".
"Dictators" in forums can't kill people or hold them hostage.
Right, but that doesn't mean they tend to be beneficial, either. We're not arguing over which dictator is the worst, but whether dictators in forums are diametrically opposed to their real-world cousins.
Freedom is never a terminal value. If you dig a bit, you should be able to explain why freedom is important/essential in particular circumstances.
I'd be cautious about saying something's never a terminal value. Given my model of the EEA, it wouldn't be terribly surprising to me if some set of people did have poor reactions to certain types of external constraint independently of their physical consequences, though "freedom" and its various antonyms seem too broad to capture the way I'd expect this to work.
Someone's probably studied this, although I can't dig up anything offhand.
I take back the "never" part; it is way too strong. What I meant to say is that someone who proclaims freedom as her terminal value has, with extremely high probability, not dug deep enough to find her true terminal values.
That seems reasonable. Especially given how often freedom gets used as an applause light.
Yes, I was commenting on this at the same time. The mental perception of restrictions, or of their absence, can become a direct brain-wired value through evolution; it is a simple enough step from other things already in there, AFAICT. Freedom itself, however, independent of perception/observation and as a pattern of real interactions and decision choices, seems far too complex to be something the brain would just randomly stumble upon in one go, especially in only some humans and not others.
I agree that freedom is an instrumental value. I disagree that it is never a terminal value. It is constitutive of the good life.
See if you can replace "freedom" with its substance, and then evaluate whether that substance is something the human brain would be likely to just happen to, once in a while, find as a terminal, worth-in-itself value for some humans but not others, considering the complexity of this substance.
Yes, the mental node/label "freedom" can become a terminal value (a single mental node is certainly simple enough for evolution to stumble upon once in a while), but that's directly related to a perception of absence of constraints or restrictions within a situation or context.
I don't see what you're getting at here. All terminal values are agent-specific.
Ironically, the appearance of freedom can be a default terminal value for humans and some other animals, if you take evolutionary psychology seriously. Or, to be more accurate, the appearance of absence of imposed restrictions can be a default terminal value that receives positive reinforcement cookies in the brain of humans and some other animals. Claustrophobia seems to be a particular subset of this that automates the jump from certain types of restrictions through the whole mental process that leads to panic-mode.
The abstract concept of freedom and its reality-referent pattern, however, would be extremely unlikely to end up as a terminal value, if only for its sheer mathematical complexity.
I agree with this.
With the caveats:
Agreed. With the caveat that I think all 'should's are that weak.
"If you see someone about to die and can save them, you should."
Now, you might agree or disagree with this. But "If you see someone about to die and can save them, you should, if it is convenient to do so and you haven't got something else you'd rather do with your time" seems more like disagreement to me.
I don't think so. I agree with that statement, with the same caveats. If there are also 100 people about to die and I can save them instead, I should probably do so. I suppose it depends how morally-informed you think "something else you'd rather do with your time" is supposed to be.
How did he acquire such a friend, and who convinced him to bankroll SIAI?
SIAI over its history (you can look at the Form 990s if you want) has gotten maybe half or less its budget from Thiel. Where's the rest coming from? Lady Luck's charitable writeoffs?
Still, at least you seem to have dropped your claim that SIAI or LW is a homeschooling propaganda front...
It's my impression that "front group" as typically used refers to a hidden/covert connection. LessWrong on the other hand has the logos/links for CFAR, SI and the Future of Humanity Institute displayed prominently.
<nitpick>Thiel</nitpick>
What would have happened if he didn't? How many times, do you think, other potential sponsors decided to pass? Seems like this is one of those cases where a person makes his own luck.
True. For that to be an effective communication channel, there would need to be a control group. As for how to create that control group or run any sort of blind (let alone double-blind) testing... yeah, I have no idea. Definitely a problem.
ETA: By "I have no idea", I mean "Let me find my five-minute clock and I'll get back to you on this if anything comes up".
So I thought for five minutes, then looked at what's been done in other websites before.
The best I have is monthly surveys with randomized questions drawn from a pool of things that matter for LessWrong (according to the current or then-current staff, I would presume), plus a few community suggestions, and then possibly a later weighting algorithm that applies diminishing returns when multiple users with similar thread participation (e.g. two people who always post in the same thread) give similar feedback.
The second part is full of holes and horribly prone to "Death by Poking With Stick", but an ideal implementation of this seems like it would get a lot more quality feedback than what little gets through low-bandwidth in-person conversations.
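For concreteness, here is a minimal sketch of that second part, with every name and threshold invented for illustration: each survey response gets a weight, and a user's weight shrinks whenever an earlier user with heavily overlapping thread participation already gave the same feedback, so a clique posting in the same threads can't dominate a question.

```python
# Hypothetical sketch of diminishing-returns survey weighting.
# All names and thresholds here are made up for illustration.

def jaccard(a, b):
    """Overlap between two users' sets of thread ids."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def weighted_score(responses, participation, decay=0.5):
    """responses: {user: numeric score}; participation: {user: thread ids}.
    A user's weight is multiplied by `decay` for each earlier user who is
    both similar (thread overlap > 0.5) and agreeing (scores within 1)."""
    users = list(responses)
    total = weight_sum = 0.0
    for i, u in enumerate(users):
        w = 1.0
        for v in users[:i]:
            similar = jaccard(participation[u], participation[v]) > 0.5
            agree = abs(responses[u] - responses[v]) <= 1
            if similar and agree:
                w *= decay  # diminishing returns for near-duplicate voices
        total += w * responses[u]
        weight_sum += w
    return total / weight_sum

# Two always-together posters both rate 5; an unrelated user rates 2.
scores = {"alice": 5, "bob": 5, "carol": 2}
threads = {"alice": {1, 2, 3}, "bob": {1, 2, 3}, "carol": {7, 8}}
print(weighted_score(scores, threads))  # 3.8, vs. a naive mean of 4.0
```

Bob's duplicate "5" is half-weighted because he posts in exactly the threads Alice does and agrees with her, pulling the aggregate toward Carol's dissenting view. The pairwise loop is exactly the part that is "prone to Death by Poking With Stick": coordinated users can dodge the similarity threshold by spreading across threads.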
There are other, less practical (but possibly more accurate) alternatives, of course. Like picking random LW users every so often, appearing at their front door, giving them a brain-scan headset (e.g. an Emotiv Epoc), and having them wear the headset while being on LW so you can collect tons of data.
I'd stick with live feedback and simple surveys to begin with.
But Eliezer Yudkowsky, too, is subject to the loss aversion effect. Just as those dissatisfied with changes overweight change's negative consequences, so does Eliezer Yudkowsky overweight his dissatisfaction with changes initiated by the "community." (For example, increased tolerance of responding to "trolling.")
Moreover, if you discount the result of votes on rules, why do you assume votes on other matters are more rational? The "community" uses votes on substantive postings to discern a group consensus. These votes are subject to the same misdirection through loss aversion as are procedural issues. If the community has taken a mistaken philosophical or scientific position, people who agree with that position will be biased to vote down postings that challenge that position, a change away from a favored position being a loss. (Those who agree with the newly espoused position will be less energized, since they weight their potential gain less than their opponents weigh their potential loss.)
If you think "voting" is so highly distorted that it fails to represent opinion, you should probably abolish it entirely.