Comment author: Dagon 25 August 2016 08:45:00PM -1 points [-]

I was around back in the day, and can confirm that this is nonsense. NRX evolved separately. There was a period when it was of interest to, and explored by, a number of LW contributors, but I don't think the thought leaders of either group significantly influenced the other.

There is some philosophical overlap in truth-seeking and in the attempt to distinguish universal truths from current social equilibria, but neither one caused nor grew out of the other.

Comment author: Viliam 02 September 2016 04:07:37PM *  -10 points [-]

Agreed.

There was a period when NR was debated on LW, simply because those were the times when people on LW were okay with discussing almost anything, and NR happened to be one of the many interesting topics to discuss.

The problem was that everyone else on the planet was completely ignoring NR at that moment. Starved for attention, NRs decided to focus their recruitment attempts on the LW audience, and started associating themselves with LW on their personal blogs. After the lie was repeated enough times, other people started quoting it as fact. (Such as Breitbart now.)

The LW debates about NR were interesting at the beginning, but they soon became repetitive (and it was kinda impossible to decipher what specifically Moldbug was saying in his long texts, other than that "Cthulhu is always swimming left"). The few local NR fans eventually started their own collective blog.

The largest long-term impact is that one hardcore NR fan decided to stay on LW, despite repeated bans, and created an army of automated sockpuppets that downvote comments by people he perceives as hostile to the NR idea, plus any comments about himself or his actions here. (I expect this comment to be at -10 karma soon, but whatever.)

LW had its "politics is the mindkiller" approach to politics long before NR existed. This didn't prevent us from having relatively friendly discussions of political topics once in a while. But the automated downvoting of every perceived enemy of NR has had a chilling effect on such debates.

Comment author: ChristianKl 02 September 2016 06:57:03AM 1 point [-]

Who's bankrupt? Peter Thiel or Gawker?

Comment author: Viliam 02 September 2016 03:36:47PM *  -1 points [-]

What were Peter Thiel's uncovered misdeeds? Being gay?

Comment author: reguru 01 September 2016 11:03:34PM 0 points [-]

Hi, I'm curious what rationalists (you) think of this video if you have time:

Why Rationality Is WRONG! - A Critique Of Rationalism https://www.youtube.com/watch?v=iaV6S45AD1w 1 h 22 min 47 s

Personally, I don't know much about all the different obstacles to figuring out the truth, so I can't evaluate this myself. I simply bought it because it made sense to me, but if you can somehow go meta on the already-meta, I would appreciate it.

Comment author: Viliam 02 September 2016 03:33:24PM *  8 points [-]

I tried listening to the video at 1.5× speed. Even so, the density of ideas is horribly low. It's something like:

Science is successful, but that makes scientists overconfident. By 'rationalists' I mean people who believe they already understand everything.

Those fools don't understand that "what they understand" is just a tiny fraction of the universe. Also, they don't realize that the universe is not rational; for example, animals are not rational. Existence itself has nothing to do with rationality or logic. Rationalists believe that the universe is rational, but that's just their projection. Rationality is an emergent property. Existence doesn't need logic, but logic needs existence; therefore existence is primary.

You can't use logic to prove whether the sun is shining or not; you have to look out of the window. You can invent an explanation for empirical facts, but there are hundreds of other equally valid explanations.

That was the first 16 minutes, then I became too bored to continue.

My opinion?

Well, of course, if you define a "rationalist" as a strawman, you can easily prove the strawman is foolish. You don't need more than an hour to convince me of that. No one in this community is trying to derive whether the sun is shining from first principles.

I am not sure whether "the universe is rational" is supposed to mean that (a) the universe has a relatively short description which could be understood by a mind, or that (b) the universe itself is a mind, specifically a rational one. Seems like the meaning was switched in the middle of the argument, using sleight of hand.

In summary, my impression is of muddled thinking, and of feeling superior to imaginary opponents. Actually, maybe the opponents are not imaginary -- there are many fools of various kinds out there -- it just has nothing to do with the kind of "rationality" we use here, such as the kind described by Stanovich.

In response to comment by gjm on The call of the void
Comment author: Val 31 August 2016 06:57:02PM -10 points [-]

And it seems the community is not interested enough to counter the ten or so accounts which do this... :(

In response to comment by Val on The call of the void
Comment author: Viliam 02 September 2016 08:40:31AM *  -10 points [-]

At this moment, the post is at -4 karma, 44% positive; that is, about 19 downvotes and 15 upvotes.
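
(A back-of-the-envelope check, as a minimal Python sketch. It assumes the displayed percentage is simply upvotes divided by total votes, which I can't verify from outside the codebase:)

    def vote_counts(net_karma, fraction_positive):
        """Recover approximate up/down vote counts from net karma and % positive.

        Solves u - d = net_karma and u / (u + d) = fraction_positive.
        (Undefined at exactly 50% positive, where the net is zero.)
        """
        p = fraction_positive
        # From the two equations: total votes = net_karma / (2p - 1).
        total = net_karma / (2 * p - 1)
        up = p * total
        return round(up), round(up - net_karma)

    print(vote_counts(-4, 0.44))  # -> (15, 19), matching the figures above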

The active part of the community is not large enough to provide significantly more upvotes. Just look at how much karma an average article gets.

(And even if the community were larger, it wouldn't ultimately make any difference if Eugine's sockpuppets are automated.)

Comment author: Dentin 26 August 2016 05:34:24PM -9 points [-]

They probably could, but that ends up being a very toil-based setup as new targets are found and selected. I wouldn't consider this anything more than a short-term stopgap.

As an example, even if Elo were protected, it's pretty clear that Eugine is willing to downvote anyone who comments on Elo's material.

Comment author: Viliam 02 September 2016 08:32:13AM -2 points [-]

Also, if some comments by Elo are good and some are bad, we would lose the ability to organically downvote the bad ones. (Maybe the protected users should still be able to downvote each other?)

Comment author: The_Jaded_One 31 August 2016 01:14:46PM -9 points [-]

I definitely think we should ban downvotes, at least temporarily. It is also clear that Eugine has an army of automated sockpuppet accounts that are repeatedly downvoting this entire thread. At a later stage something should be done about this, for example limiting people's ability to mass-create accounts (e.g. with Google's "I'm not a robot" button) and limiting the ability of new accounts to downvote. Perhaps only accounts that have made a discussion post with a few upvotes should be allowed to downvote at all, and even then with limits per week and per user downvoted. And perhaps there should be a per-user limit on downvoting of sufficiently old comments, so that even with an army of bots you cannot mass-downvote people by attacking all their old content.
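
Roughly the kind of gating I have in mind, sketched in Python below. To be clear, every name and threshold here is invented for illustration; none of it comes from LW's actual code.

    # Illustrative sketch only: thresholds and field names are hypothetical.
    from dataclasses import dataclass, field
    from datetime import timedelta

    MIN_POST_KARMA = 3                    # must have an upvoted discussion post
    MAX_DOWNVOTES_PER_WEEK = 20           # per-voter rate limit
    MAX_DOWNVOTES_PER_TARGET = 5          # per (voter, target) limit
    MAX_COMMENT_AGE = timedelta(days=90)  # old comments are off-limits

    @dataclass
    class Voter:
        best_discussion_post_karma: int = 0
        downvotes_this_week: int = 0
        downvotes_by_target: dict = field(default_factory=dict)

    def may_downvote(voter: Voter, target: str, comment_age: timedelta) -> bool:
        """Return True only if all the proposed gates pass."""
        return (
            voter.best_discussion_post_karma >= MIN_POST_KARMA
            and voter.downvotes_this_week < MAX_DOWNVOTES_PER_WEEK
            and voter.downvotes_by_target.get(target, 0) < MAX_DOWNVOTES_PER_TARGET
            and comment_age <= MAX_COMMENT_AGE
        )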

Overall it seems we have given out downvoting privileges like candy and now we are reaping the consequences...

Comment author: Viliam 02 September 2016 08:27:25AM -9 points [-]

As a general rule, if something is a problem, the solution needs to deal with the problem, not with its proxy. The problem is the "army of sockpuppet accounts", not the "downvotes" per se; therefore a successful solution must somehow address the sockpuppeting itself.

I don't want to give Eugine new ideas, but banning downvotes would probably just make him change strategy. I can imagine two powerful attack strategies that would work if (a) downvotes were banned, or even if (b) all votes were banned.

The successful solution must:

  • identify the sockpuppets; and
  • remove the sockpuppets or otherwise render them harmless

I think there are two essential approaches to this:

  • blacklisting = suspected sockpuppets (detected e.g. by their IP address or behavior) are removed and their votes reversed; or
  • whitelisting = only a set of "trusted users" can vote

These two can come in different flavors and combinations. For example, we could have an invisible whitelist of trusted users, in general treat votes by trusted and untrusted voters equally, but also send an automatic warning to the moderators when the votes of trusted and untrusted voters differ dramatically (for example, if 9 of 10 trusted users upvoted a comment, but 30 of 40 untrusted users downvoted it). This is just an example; it could be made more sophisticated, but that would require more programming resources and computing power.
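
A minimal sketch of such a warning, in Python. The 0.5 threshold and the bare comparison of approval rates are made up for illustration; a real implementation would want something statistically smarter:

    def vote_split_alert(trusted_votes, untrusted_votes, threshold=0.5):
        """Warn moderators when trusted and untrusted voters disagree dramatically.

        Each argument is a list of +1 (upvote) / -1 (downvote) values; the alert
        fires when the two groups' approval rates differ by more than `threshold`.
        """
        def approval(votes):
            return votes.count(1) / len(votes) if votes else None

        t, u = approval(trusted_votes), approval(untrusted_votes)
        if t is None or u is None:
            return False  # not enough data to compare
        return abs(t - u) > threshold

    # The example above: 9 of 10 trusted users upvoted (90% approval),
    # but only 10 of 40 untrusted voters did (25%), so the alert fires.
    print(vote_split_alert([1] * 9 + [-1], [1] * 10 + [-1] * 30))  # -> True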

Perhaps only accounts that have made a discussion post with a few upvotes should be allowed to downvote at all

Eugine would simply use his other sockpuppets to upvote posts made by his sockpuppets. In the very best case, this would force him to write one half-decent post per sockpuppet.

limits per week and per user downvoted

Limits per sockpuppet = more sockpuppets.

perhaps there should be a per-user limit on downvoting of sufficiently old comments

Maybe downvoting of sufficiently old comments should be limited in general, not just per user, just like on Reddit you cannot vote on stuff that is too old. (The question is how old "sufficiently old" is; on Reddit it means a few months.)

Comment author: MrMind 02 September 2016 07:28:25AM *  -10 points [-]

We won the war against Eugine... for a brief instant.

I'm keeping score, tracking the number of downvotes and upvotes on the comment where I requested help against Eugine_Nier's downvote campaign.

Well, there was a moment when 14 people had upvoted and 20 puppets had downvoted. Now we are at a point where 21 people have upvoted and 30 puppets have downvoted. This means that at least we forced Eugine to increase the count of his puppets to fight back. I count this as a point for LW :)

makelwniceagain

Comment author: Viliam 02 September 2016 08:02:42AM *  -10 points [-]

I don't think that opposing strategic voting with strategic voting is an improvement. (Noise + more noise != signal.) I also don't see how forcing Eugine to increase the number of sockpuppets is a good thing, especially when the difference is between 20 and 30.

Thanks for trying! I just think this is the wrong direction.

Comment author: turchin 17 August 2016 12:14:06AM 0 points [-]

Elon Musk almost terminated our simulation.

A simulation is only a simulation if everybody in it is convinced that they are living real life. Bostrom proved that we most likely live in a simulation, but not many people know about it. Elon Musk tweeted that the odds that we live in a simulation are 1,000,000 to 1. Now everybody knows. I think there was a 1 percent chance that our simulation would be terminated after that. It has not happened this time, but there may be some other threshold after which it will be terminated, like finding more proofs that we are in a simulation, or the creation of an AI.

Comment author: Viliam 17 August 2016 07:38:10AM 2 points [-]

On the other hand, the more "actions that would get the simulation terminated" we perform and survive, the higher the chance that we are actually not living in a simulation.
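
A toy Bayes calculation to illustrate, with completely made-up numbers. It assumes each risky action would independently terminate a simulation with some fixed probability, and could never terminate a real universe:

    def posterior_sim(prior_sim, p_terminate_if_sim, n_survived):
        """P(simulation | we survived n 'termination-risk' actions)."""
        p_survive_if_sim = (1 - p_terminate_if_sim) ** n_survived
        evidence = prior_sim * p_survive_if_sim + (1 - prior_sim) * 1.0
        return prior_sim * p_survive_if_sim / evidence

    # With a 90% prior and turchin's 1% per-action termination chance,
    # surviving 100 such actions drops the posterior to roughly 77%.
    print(posterior_sim(0.9, 0.01, 100))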

Comment author: WhySpace 16 August 2016 06:08:57PM *  -10 points [-]

PSA:

I just realized that /u/Elo's posts haven't been showing up in /r/Discussion because of all the downvoting from Eugene_Nier's sockpuppet accounts. So, I've gone back to read through the sequence of posts they're in the middle of. You may wish to do the same.

Meta:

I was going to leave this as a comment on Filter on the way in, Filter on the way out..., but I figured it’s different enough to stand on its own. It’s also mostly a corollary, though, and just links Elo’s post to existing ideas without saying much new, so it probably isn’t worth its own top-level post. It isn’t likely to be actionable either, since I basically conclude that it’s okay to take down a Chesterton Fence that LW already took down long ago.

This might be a good comment to skim rather than read, since the examples mostly serve to pin down precisely what I’m getting at, and you’re likely already familiar with them. I’ve divided this into sections for easy skimming. I’m posting only because I thought the connections were small but interesting insights.

Also meta: this took about 2.5 hrs to write and edit.

TL;DR of Elo’s “Filter on the way in, Filter on the way out...” post:

Elo proposes that nerd culture encourages people to apply tact to anything they hear, and so it becomes less necessary to tiptoe around sensitive issues for fear of being misunderstood. Nerds have a tact filter between their ears and brain, to soften incoming ideas.

"Normal" culture, on the other hand, encourages people to apply tact to anything they say, and so it becomes less necessary to constantly look for charitable interpretations, for fear of a misunderstanding. Non-nerds have a tact filter between their brain and mouth, to soften outgoing ideas.

They made several pretty diagrams, but they all look something like this:

speaker’s brain -> [filter] -> speaker’s mouth -> listener’s ears -> [filter] -> listener’s brain

The thing I want to expand Elo’s idea to cover:

What’s going on in someone’s head when they encounter something like the trolley problem, and say “you can’t just place a value on a human life”? EAs sometimes get backlash for even weighing the alternatives. Why would anyone refuse to even engage with the problem, and merely empathize with the victims? After all, the analytic half of our brain, not the emotional parts, is what solves such problems.

I propose that this can be thought of as a tact filter for one’s own thoughts. If that’s not clear, let me give a couple of rationalist examples of the sort of thing I think is going on in people’s heads, to help triangulate the meaning:

  • HPMOR touches on this a couple times with McGonagall. She avoids even thinking of disturbing topics.

  • Some curiosity stoppers/semantic stopsigns are due to avoiding asking oneself unpleasant questions.

  • The idea of separate magisteria comes from an aversion to thinking critically about religion.

  • Several biases and fallacies. The just-world fallacy is a result of an aversion to more accurate mental models.

  • Politics is the mindkiller, so I’ll leave you to come up with your own examples from that domain. Identity politics is especially rife with examples.

Filter on the way in, Filter on the way out, Filter while in, Filter while out:

So, I propose that Elo’s model can be expanded by adding this:

Some subcultures encourage people to apply tact to anything they think, and so it becomes less necessary for them to constantly filter what they say, for fear of a misunderstanding. Such people have a tact filter between different parts of their brain, to filter the internal monologue.

That corollary doesn’t add much that hasn’t already been discussed to death on LW. However, we can phrase things in such a way as to put people at ease, and encourage them to relax their internal and/or outgoing filters while maintaining their incoming filter. Adapting Elo’s model to capture this, we get:

future speaker’s thought -> [filter] -> speaker’s cached thoughts -> [filter] -> speaker’s mouth -> listener’s ears -> [filter] -> listener’s thoughts -> [filter] -> past listener’s cached thoughts

Note that both the speaker and the listener have internal filters. We can think or hear something, and then immediately reject it for being horrible, even if it’s true.

Ideally, everyone would avoid filtering their own ideas internally, but apply tact when speaking and listening, and then strip any filters from memes they encounter while unpacking them. Without this model, perhaps our endorsing the removal of the two internal filters was a bit of a Chesterton Fence.

However, with the other two filters firmly in place, we should be able to safely remove the internal filters in the thoughts of both the speaker and the listener. If the listener believes the filter between the speaker’s brain and mouth is clouding information transfer, they might even ask for Crocker’s rules. This is dangerous, though, since removing the redundant backup leaves only their own ear->brain filter as a single point of failure.

Practical applications:

To encourage unconstrained thinking in others, perhaps we can vocally strip memes passed to us of obfuscating tact, if there is a backup filter in place and if we’ve already shown that we agree with the ideas. (If we don’t agree, this would obviously look like an attack on their argument, and would backfire.)

That sounds like something out of the boring advice repository, but providing social proof is probably much more powerful than merely telling people that they shouldn’t filter their internal monologue. It probably doesn’t feel like censorship from the inside. If we want to raise the sanity waterline, we’ll have to foster cultures where we all provide positive reinforcement for each other’s good epistemic hygiene.

Comment author: Viliam 17 August 2016 07:34:59AM 4 points [-]

What’s going on in someone’s head when they encounter something like the trolley problem, and say “you can’t just place a value on a human life”?

Maybe: "Here is someone who is practicing excuses for killing people, using fictional scenarios. Is this some kind of wannabe killer, exploring the terrain to find out under which circumstances would his actions be socially acceptable? I'd better explain him that this approach wouldn't work here."

Comment author: WalterL 16 August 2016 08:45:19PM 2 points [-]

I'm not sure that it is so much a cultural thing as it is a personal deal. Popular dudes who can always get more friends don't need to filter other people's talky-talky for tact. Less cool bros have to put up with a lot more, and do the "Your daddy loves us and he means well..." kind of stuff: not just filtering, but positively translating.

Comment author: Viliam 17 August 2016 07:30:36AM 1 point [-]

I would say this is about status. People filter what they say to high-status individuals, but don't bother filtering what they say to low-status individuals.

Nerd culture is traditionally low-status in the context of society as a whole, and meritocratic inside. That means nerds are used to hearing unfiltered things from outsiders, and don't have strong reasons to learn filtering when speaking with insiders. Also, it is harder for aspies to understand when and why exactly the filters should be used, so it is easier to have a norm of not using filters.

(And I suspect that the people who complain most about the lack of filters are often those who want to be treated as high-status by the nerd community without having the necessary skills and achievements.)
