
Comment author: skeptical_lurker 21 February 2017 01:54:16AM 0 points [-]

All good points, in the general case - I myself frequently read about things I disagree with. However...

Even in cases that appear to be clear-cut fear- or violence-mongering, it may be that they joined the group to have its messages in their news feed for awareness, because they refuse to flinch from the problem.

That is more of a LW thing. Most normal people don't act like this, and the person I was thinking of certainly doesn't. Politics is about waving the flag for your tribe, and trying to actually understand the other tribe's point of view is like waving the enemy flag - treason! To show that they are loyal, many people seem to be adopting the maximally uncharitable point of view, or at least they have been over the last few years.

Of course, it's also possible that this is why some people are advocating violence - they wouldn't really want violence, and they certainly wouldn't personally assault someone, but they advocate violence because it shows more tribal loyalty than just advocating peaceful protest.

Comment author: username2 21 February 2017 02:16:26AM 0 points [-]

I was going to remind you of the fundamental attribution error, but that isn't exactly what's going on here. Is there a name for the error of assuming that the simplest possible explanation given the available information is correct, when it comes to human behaviour? Pop-sci aside, the simplest explanation you can come up with is usually not the case, because the other person is acting as a result of a lifetime of experiences into which you have had, at best, only a small glimpse. It's hard to evaluate exactly why they do what they do without sitting them down on the couch for a few hours. If anyone knows what this error in analysis is called, I'm genuinely curious.

Comment author: tristanm 21 February 2017 12:54:59AM 1 point [-]

Hi LW, first time commenting on here, but I have been a reader / lurker of the site for quite some time. Anyway, I hope to bring a question to the community that has been on my mind recently.

I have noticed an odd transformation in my social circle - in particular, among the people I have known since I was young and who are about the same age as me. I'm wondering if this is something most people have observed in others as they moved into adulthood and out into the world.

I would say that ever since I was a teenager I have considered myself a "rationalist". What that has meant exactly has of course been updated over the years, but I would say that my approach to knowledge hasn't fundamentally changed (it's not as if I suddenly became a postmodernist or anything). As soon as I understood what science and empiricism were about, I knew that my life would revolve around them in some way. And what made me very close to the people who would be my best friends throughout high school and college is that they felt pretty much the same way I did. At least I very much believed they did. My happiest moments with them, when I was about 16 to 18, involved lengthy, deep, and enjoyable discussions about philosophy, science, politics, and current events. I was convinced we were all rationalists, that we were fairly agnostic about most things until we felt we had come to well-argued conclusions about them, and that we were always willing to entertain new hypotheses or conjectures about any topic we cared about.

Fast-forward about ten years, and it seems like most of those people have "grown out of" that, as if it were some kind of phase most people go through when they're young. All the important questions have been settled; the only things that seem to matter now are careers, relationships, and hobbies. That's the impression I get from my various social media interactions with them, anyway. There are no debates or discussions except angry political ones, which mostly just consist of scolding people, or of snarky comments and jokes. Politically, most people I know have gone either hard-left or hard-right (mostly hard-left, since everyone I know grew up on the west coast). But what's striking to me is how hivemind-ish a lot of them have become. It's really impossible to have a good discussion with any of my old friends anymore. I realize that sounds a little complain-y, but I want to emphasize that this is a particular observation about the people I grew up with - not the older people I've known, like family members, and not the people in my current social circle.

Ok, sure, it's possible that I just picked bad friends back then. But I think that is a little unlikely, since the reason we were drawn together in the first place was our shared interests and similar way of thinking. And I feel like I have basically stuck to the same principles I had even back then. I've tried to avoid becoming too deeply attached to any one subculture or "tribe" - and there have been many opportunities to do so. What makes me believe my observation might reflect a more common phenomenon is that it seems to be shared by the people I'm close to now. It appears to me that there is something that alters a person's psychology as they move into adulthood, and through college in particular, and that this alteration makes people less "rational" in a way. And whatever causes it is traumatic enough that it encourages people to cluster into groups of very like-minded individuals, where their beliefs and way of life feel extremely safe.

I'd also like to emphasize that I'm not saying that our views and beliefs have simply diverged. This has mostly to do with the way that people think, and the way that they communicate ideas.

I wonder if anyone else has had this observation, and if so, what the possible explanations might be. On the other hand, maybe I have gone through the same change in my psychology, but simply fail to notice it in myself.

Comment author: username2 21 February 2017 01:43:11AM *  0 points [-]

Yes, this is an absolutely normal, common experience. People get "set in their ways" at some point in their lives, and it becomes easier to move a mountain than to have them change their mind. This is exactly why one of the very first parts of the EY sequences is How To Actually Change Your Mind. It is the foundational skill of rationalism, and something which most people, even self-described rationalists, lack. Really, truly changing your mind goes against some sort of in-built human instinct, itself the amalgamation of various well-described heuristics and biases with names like 'the availability heuristic' and '(dis)confirmation bias.'

Comment author: ChristianKl 20 February 2017 01:31:29PM 0 points [-]

There won't be a blanket tax on all robots, but self-driving cars and trucks can be taxed directly.

Taxing them enough to reduce their usage would mean lower carbon emissions.

Comment author: username2 21 February 2017 01:32:29AM 0 points [-]

If your goal is to reduce carbon emissions, then tax the gasoline.

Comment author: skeptical_lurker 20 February 2017 10:08:35PM 2 points [-]

I think about politics far too much. It's depressing, both in terms of outcomes and in terms of how bad the average political argument is. It makes me paranoid and alienated when people I know join Facebook groups that advocate political violence/murder/killing all the kulaks, although to be fair it's possible that those people have only read one or two posts and missed the violent ones. But most of all it's fundamentally pretty pointless, because I have no desire to get involved in politics, and I'm sure that with respect to any advantages in terms of helping me better understand human nature, I've already picked all the low-hanging fruit.

So anyway, I'm starting by committing to ignore all politics for a week (unless something really earth-shattering happens). I'll post again in a week to say whether I stuck to it, and if I didn't, please downvote me to oblivion.

Oh, and replying to replies to this post is exempt from this rule.

Comment author: username2 21 February 2017 01:27:31AM *  0 points [-]

although to be fair it's possible that those people have only read one or two posts and missed the violent ones

Or they agree with some aspects of a group but not others. Surely you don't agree with every opinion voiced on LessWrong, do you? Not even all of the generally accepted orthodoxy, I'm sure. If you claimed you did, I'm sure I could come up with some post by EY (picked for representing LW views, no other reason) containing views you would be insulted to have others ascribe to you. Worth thinking about.

Even in cases that appear to be clear-cut fear- or violence-mongering, it may be that they joined the group to have its messages in their news feed for awareness, because they refuse to flinch from the problem. How others choose to engage in social circles should be treated like browsing data from a library -- confidential, respected, and interpreted charitably. We wouldn't want to make thought crime a real thing by adding social repercussions to how they choose to engage with the world around them.

Comment author: tukabel 20 February 2017 12:31:51PM 1 point [-]

So Bill Gates wants to tax robots... well, how about SOFTWARE? It may fit easily into certain definitions of ROBOT, especially if we realize it is the software that makes the robot (in that line of argumentation) a "job-stealing evil" (a 100% retroactive tax on evil profits from selling software would probably shut Billy's mouth).

Now how about AI? Going to "steal" virtually ALL JOBS... friendly or not.

And let's go one step further: who is the culprit? The devil who had an IDEA!

The one who invented the robot, the one who applied it in production, the programmer who wrote the software, the one who designed the neural nets, etc.

So, let's tax ideas and thinking as such... all Orwellian/Huxleyan fantasies fall short of the Brave New Singularity.

Comment author: username2 20 February 2017 01:41:30PM 1 point [-]

I'd say that you are not supposed to tax people; you are supposed to tax flows of money: income, profit, sales, etc.

Comment author: Val 15 February 2017 06:21:48PM 0 points [-]

Those "very real, very powerful security regimes around the world" are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.

And if you underestimate how much of a threat a mere "computer" could be, read the "Friendship is Optimal" stories.

Comment author: username2 16 February 2017 05:20:26PM *  0 points [-]

I've read the sequences on friendliness here and find them completely unconvincing, with a lack of evidence and a one-sided view of the problem. I'm not about to start generalizing from fictional evidence.

I'm not sure I agree with your assessment of the examples you give. There are billions of people who would like to live in first-world countries but don't. I'd say immigration controls have been particularly effective if only a few million people are crossing borders illegally in a world of 7 billion. And most of the immigration issues the world faces today, such as Syrian refugees, are about asylum-seekers who are in fact being permitted entry, just in larger numbers than the systems were designed to support. Also, the failure modes are different. If you let the wrong person in, what happens? Statistically speaking, nothing of great consequence.

Crime waves? We are currently in one of the lowest periods of violence per capita. I think the powers that be have been doing quite a good job, actually.

Comment author: gjm 15 February 2017 01:08:18AM 1 point [-]

there are plenty of all-seeing eye superpowers in this world

Oh, I see. OK then.

My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren't much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped. So the threats you have in mind aren't in the "balrog" category at all, for me.

You seemed happy to engage until it was pointed out that the outcome was not what you expected.

My first comment in the balrog discussion was the one you took exception to. The point at which you say I stopped being "happy to engage" is the point at which I started engaging. The picture you're trying to paint is literally the exact opposite of the truth.

Comment author: username2 15 February 2017 06:31:45AM 0 points [-]

My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren't much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped.

Ah, now we are at the crux of the issue. That is not generally agreed upon, at least not outside of the Yudkowsky-Bostrom echo chamber. You'll find plenty of hard-takeoff skeptics even here on LessWrong, let alone in broader AI circles, where hard-takeoff scenarios are given much less credence.

Comment author: gjm 14 February 2017 11:15:54PM 5 points [-]

Er, no. Because we don't (so far as I know) have any reason to expect that, if we somehow produce a problematically powerful AI, anything like an "all-seeing Eye" will splat it.

(Why on earth would you think my reason for saying what I said was "because it didn't go the way [I] liked"? It seems a pointlessly uncharitable, as well as improbable, explanation.)

Comment author: username2 15 February 2017 12:01:43AM *  0 points [-]

Because there are plenty of all-seeing eye superpowers in this world. Not everyone is convinced that the very real, very powerful security regimes around the world would be suddenly left inept when the opponent is a computer instead of a human being.

My comment didn't contribute any less than yours to the discussion, which is rather the point. The validity of an allegory depends on the accuracy of the setup and rules, not the outcome. You seemed happy to engage until it was pointed out that the outcome was not what you expected.

Comment author: gjm 14 February 2017 04:27:26PM 5 points [-]

I think this may have started to be less useful as an analogy for AI safety now.

Comment author: username2 14 February 2017 06:09:23PM 0 points [-]

Because it didn't go the way you liked?

Comment author: username2 13 February 2017 04:22:46PM 2 points [-]

Are there any interesting YouTubers LessWrong is subscribed to? I've never really used YouTube, and after watching "history of japan" I get the feeling I'm missing out on some stuff.
