Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, Jan. 23 - Jan. 29, 2017

2 Post author: MrMind 23 January 2017 07:41AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (209)

Comment author: James_Miller 24 January 2017 12:55:28AM 9 points [-]

Prediction: Government regulations greatly reduce economic growth. Trump, with the help of the Republican Congress, is going to significantly cut regulations and this is going to supercharge economic growth allowing Trump to win reelection in a true landslide.

Comment author: Douglas_Knight 24 January 2017 01:25:22AM 8 points [-]

Do you want to put a probability on that? Also, break it down into a bunch of steps. Be precise. Include timelines.

Has anything like that ever happened in the entire history of the world? In four years? For example, most of what Reagan is credited with doing to the economy was either done by Carter or in Reagan's second term.

Why do you believe that federal regulations are a significant portion of the total?

Comment author: James_Miller 24 January 2017 01:34:25AM 6 points [-]

Has anything like that ever happened in the entire history of the world?

Yes, China after Mao.

It might not just be federal regulations. For example, if Republicans passed a freedom to build law that allowed landowners to quickly get permission to build we would see a massive construction boom.

Comment author: Douglas_Knight 24 January 2017 02:06:03AM 2 points [-]

You made a strong conjunction: that deregulation leads to economic growth, which leads to popular support for the regime, all within four years. That definitely did not happen in China in the first four years after Mao's death. Maybe if you cherry-pick 1980-1984 as the beginning of Deng's real hold on power it is an example, but I doubt it.

Sure, if you want to open up the pathways and no longer predict a conjunction, I can't stop you, but I do complain that this is a new prediction. But predicting that Trump will abolish states' rights quickly enough to have economic effects doesn't seem very plausible to me. I wouldn't be focusing on elections in that scenario.

Comment author: drethelin 26 January 2017 12:16:14AM 0 points [-]

the US regime operates on popular support in a way very unlike that of China.

Comment author: ChristianKl 26 January 2017 10:09:42AM 0 points [-]

The difference is that the regime in China actually has popular support while the US regime doesn't.

Comment author: ChristianKl 26 January 2017 10:10:28AM 1 point [-]

For example, if Republicans passed a freedom to build law that allowed landowners to quickly get permission to build we would see a massive construction boom.

Given that land use is mostly legislated by the individual states, why do you think a Republican congress would infringe on state laws that strongly?

Comment author: James_Miller 26 January 2017 02:57:03PM 0 points [-]

The commerce clause, plus the fact that big cities are controlled by the left (so Republicans would be willing to step on their power), plus Trump being a builder.

Comment author: ChristianKl 26 January 2017 09:05:58PM 0 points [-]

What's your credence for this event?

Comment author: James_Miller 27 January 2017 01:10:29AM 0 points [-]

Over the next 4 years, 50% that Republicans will enact such a law or that Trump will use regulations to make it easier to build by, for example, claiming it's racist for San Francisco to restrict the building of low-income housing. But I'm not willing to bet on this because it would be hard to define the bet's winning conditions.

Comment author: gjm 24 January 2017 03:08:19AM 4 points [-]

Would you like to quantify that enough that we can look back in a few years and see whether you got it right?

Comment author: knb 26 January 2017 06:04:32AM 0 points [-]

I think it's a clear enough prediction, but putting some actual numbers on it would be useful. Personally, I would put the odds of a Trump landslide well under 50% even contingent on "supercharged" economic growth. Maybe 25%. Politics is too identity-oriented now to see anything like the Reagan landslides in the near future.

Comment author: ChristianKl 26 January 2017 10:06:33AM 1 point [-]

Could you operationalize the terms? What's a landslide? And what probability do you attach to that event?

Comment author: knb 26 January 2017 05:58:02AM 0 points [-]

Kudos for making a clear prediction.

I voted for Trump but I don't think there is any realistic possibility of a Trump landslide, even if the economy grows very well for the next 4 years. The country is just too bitterly divided along social lines for economic prosperity to deliver one candidate a landslide (assuming a landslide in the popular vote means at least 10% margin of victory.)

In terms of economic growth, I wonder what you mean by "supercharge". I think 4% is pretty unlikely. If the US manages an annual average of 3.0% for the next 4 years that would be a good improvement, but I don't think that could really be called "supercharged."

Trump job approval looks pretty good right now considering the unrelenting negative press, so right now I think Trump is likely to be re-elected if he chooses to run in 2020.

Comment author: satt 26 January 2017 10:29:54PM 1 point [-]

The country is just too bitterly divided along social lines for economic prosperity to deliver one candidate a landslide (assuming a landslide in the popular vote means at least 10% margin of victory.)

Assuming that

  1. the US does in fact hold a nationwide presidential election in 2020,

  2. the Democratic & Republican parties get > 90% of the votes in 2020, and

  3. US military fatalities in new, unprovoked, foreign wars are minimal (< 3000),

I predict:

  • a Trump landslide with probability 20%, assuming 2% weighted growth over Trump's term

  • a Trump landslide with probability 70%, assuming 3% weighted growth over Trump's term

  • a Trump landslide with probability 95%, assuming 4% weighted growth over Trump's term

using your definition of landslide, and defining "weighted growth" as annualized growth in quarterly, inflation-adjusted, disposable, personal income per capita, with recent growth weighted more heavily with a discount factor λ of 0.9. (See Hibbs for more details.) This is a clear prediction.
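The "weighted growth" measure described above can be sketched in code. This is my reading of the Hibbs-style discounting satt defines (most recent quarter weighted 1, the quarter before it λ, then λ², and so on), not satt's own calculation, and the sample growth rates below are made up for illustration.

```python
# Sketch of the "weighted growth" measure: a discounted average of
# quarterly annualized growth rates in real disposable personal income
# per capita, with recent quarters weighted more heavily (lambda = 0.9).

def weighted_growth(quarterly_rates, lam=0.9):
    """Rates are ordered oldest -> newest. The newest quarter gets
    weight 1, the one before it lam, then lam**2, and so on."""
    weights = [lam ** age for age in range(len(quarterly_rates))]
    recent_first = list(reversed(quarterly_rates))
    total = sum(w * r for w, r in zip(weights, recent_first))
    return total / sum(weights)

# Hypothetical 16 quarters (4 years) of annualized growth, in percent:
rates = [1.0, 1.5, 2.0, 2.5, 3.0, 2.0, 1.0, 1.5,
         2.5, 3.0, 3.5, 3.0, 2.5, 2.0, 3.0, 3.5]
print(round(weighted_growth(rates), 2))
```

The discounting is what gives the measure its election-forecasting flavor: a boom in the election year moves the number much more than the same boom three years earlier.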

I'm more doubtful of chatter about rising political polarization than I am about fundamentals-based models of voting, and those models highlight the economy & war as the factors that most matter. As such I reckon sufficient economic prosperity could in fact produce a landslide for Trump (and virtually any incumbent, really).

Comment author: waveman 24 January 2017 02:41:36AM 0 points [-]

You should take into account that tariff and other barriers to trade are a form of government regulation.

Comment author: satt 26 January 2017 11:24:15PM 0 points [-]

I doubt the remaining trade barriers imposed by the US government are making much difference to overall US growth. As far as I know, models which don't crowbar in optimistic second-order effects (like big jumps in productivity) estimate that trade liberalization would raise US GDP by ~ $10 billion a year. That's a big number, but surely one has to compare it to existing US GDP: $18,560 billion a year.

This gives me the back-of-the-envelope estimate that trade barriers are depriving the US of about 0.05% of GDP. American voters would scarcely notice that.
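The arithmetic behind that estimate, for anyone who wants to check it (both figures in billions of dollars per year, as given above):

```python
# satt's back-of-the-envelope: estimated annual gain from full trade
# liberalization as a share of total US GDP.
gain, gdp = 10, 18560
print(round(100 * gain / gdp, 3))  # ≈ 0.054 (percent of GDP)
```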

Comment author: waveman 11 July 2017 01:01:53AM 0 points [-]

Trump was saying he would increase trade barriers, so current levels are not the point.

Comment author: satt 15 July 2017 11:47:13AM 0 points [-]

I think in January I read you as amplifying James_Miller's point, giving "tariff and other barriers" as an example of something to slot into his "Government regulations" claim (hence why I thought my comment was germane). But in light of your new comment I probably got your original intent backwards? In which case, fair enough!

Comment author: gjm 24 January 2017 03:05:37PM 6 points [-]

Derek Parfit (author of "Reasons and Persons", a very influential work of analytic philosophy much of which is concerned with questions of personal identity and which comes up with decidedly LW-ish answers to most of its questions) has died. (He actually died a few weeks ago, but I only just heard of it, and I haven't seen his death mentioned on LW.)

Comment author: lifelonglearner 25 January 2017 06:16:06PM 1 point [-]

He's also the namesake of Parfit's Hitchhiker.

Comment author: Viliam 25 January 2017 05:13:27PM 4 points [-]

A few years ago I used to be a hothead. Whenever anyone said anything, I’d think of a way to disagree. I’d push back hard if something didn’t fit my world-view.

It’s like I had to be first with an opinion – as if being first meant something. But what it really meant was that I wasn’t thinking hard enough about the problem. The faster you react, the less you think. Not always, but often.

-- Give it five minutes

Comment author: MrMind 26 January 2017 08:41:44AM 0 points [-]

Absolutely thumbs up. I strive to achieve this: often when I write down my first thoughts, it becomes clear only later that I misread, or missed the main point, or am completely wrong.
On the other hand, it's true that people's attention is scarce and precious: if you reply the day after, there's already nobody listening. So try to strike a balance between impulse and reflection...

Comment author: phl43 26 January 2017 09:16:31PM *  3 points [-]

Hi everyone,

I'm a PhD candidate at Cornell, where I work on logic and philosophy of science. I learned about Less Wrong from Slate Star Codex and someone I used to date told me she really liked it. I recently started a blog where I plan to post my thoughts about random topics: http://necpluribusimpar.net. For instance, I wrote a post (http://necpluribusimpar.net/slavery-and-capitalism/) against the widely held but false belief that much of the US wealth derives from slavery and that without slavery the industrial revolution wouldn't have happened, as well as another (http://necpluribusimpar.net/election-models-not-predict-trumps-victory/) in which I explain how election models work and why they didn't predict Trump's victory. I think members of Less Wrong will find my blog interesting or, at least, that's what I hope. I welcome any criticisms, suggestions, etc. Sorry for the shameless self-promotion, but I just started the blog and I would like people to know about it :-)

Philippe

Comment author: Daniel_Burfoot 23 January 2017 11:11:10PM 3 points [-]

How do you weight the opinion of people whose arguments you do not accept? Say you have 10 friends who all believe with 99% confidence in proposition A. You ask them why they believe A, and the arguments they produce seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to believe A, which they simply cannot articulate. Should you update in favor of A or not?

Comment author: TheAncientGeek 24 January 2017 06:47:25AM 1 point [-]

Trying to steelman arguments by talking to people you know in real life isn't a good method. You will find the best arguments in books and papers written by people who have acquired the rare skill of articulating intuitions.

Comment author: Gunnar_Zarncke 24 January 2017 02:52:52PM 0 points [-]

Yes, that may be true, but that doesn't address the question. A stronger version would be:

Say you have 10 sources who all claim high confidence in proposition A. The arguments produced seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to claim A, which you cannot understand. Should you update in favor of A or not?

Comment author: ChristianKl 26 January 2017 12:12:34PM 0 points [-]

If I don't understand a topic well I'm likely to simply copy the beliefs of friends who seem to have delved deep into an issue even if they can't tell me exactly why they believe what they believe.

If, on the other hand, I already have a firm opinion, and especially if the reasons for my opinion can't easily be communicated, I don't update much.

Comment author: Dagon 24 January 2017 05:40:03AM 0 points [-]

What's your prior for A, and what was your prior for their confidence in A? Very roughly speaking, updates feel like surprise.
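Dagon's point can be made concrete with a toy Bayesian update. The likelihood ratio below is my own illustrative assumption (how much more likely the friends' testimony is if A is true than if it is false), not anything from the thread.

```python
# Toy Bayes sketch: update a prior on A after observing "10 friends
# assert A at 99% confidence". The likelihood ratio lr encodes how
# surprising that testimony is, given your model of the friends.

def posterior(prior, lr):
    """Posterior P(A) after an update with likelihood ratio
    lr = P(testimony | A) / P(testimony | not A)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# If their testimony is only twice as likely when A is true:
print(posterior(0.5, 2.0))
# If you expected them to say this regardless of A (lr = 1),
# there is no surprise and hence no update:
print(posterior(0.5, 1.0))
```

This matches the "updates feel like surprise" intuition: testimony you fully predicted (lr = 1) moves you nowhere, however confident the friends sound; only the surprising part of the evidence shifts the posterior.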

Comment author: dglukhov 23 January 2017 03:19:35PM *  3 points [-]

I'm curious if anybody here frequents retraction watch enough to address this concern I have.

The articles there are effective at announcing retractions, and they frequently fall back on testimony from lead figures in the investigations, but rarely do you get to see the nuts and bolts of the investigations themselves. For example, "How were the journals misleading?" or "What evidence was or was not analyzed, and how did the journal's analysis deviate from correct protocol?" are questions I often ask myself as I read, followed by an urge to see the cited papers. And then, upon investigating the articles and their retraction notices, I am given a reason that I can't myself arbitrate. Maybe data was claimed to have been manipulated, or analyzed according to an incorrect framework.

Studies such as these I find alarming because I'm forced to trust the good intentions of a multi-billion dollar corporation in finding the truth. Often I find myself going on retraction watch, trusting the possibly non-existing good intentions of the organization's leadership, as I read the headlines without time to read every detail of the article. I am given certain impressions from the pretentious writing of the articles, but none of the substance, when I choose to skim selections.

Perhaps I am warning against laziness. Perhaps I am concerned about the potential for corruption in even a crusade to fight misinformation that retraction watch seems to fight. Nonetheless, I'm curious if people here have had similar or differing experiences with these articles...

Comment author: waveman 24 January 2017 12:38:44AM 2 points [-]

Crimes and trials are the same. Much goes on in closed rooms. You rightly feel that you are in the dark.

Often there is some material on pubpeer which can help understand what happened.

Comment author: morganism 23 January 2017 10:56:08PM 2 points [-]

A blog list of bogus journals just went down too...

http://ottawacitizen.com/storyline/worlds-main-list-of-science-predators-vanishes-with-no-warning

"Beall, who became an assistant professor, drew up a list of the known and suspected bad apples, known simply as Beall's List. Since 2012, this list has been the world's main source of information on journals that publish conspiracy theories and incompetent research, making them appear real."

Comment author: Lumifer 23 January 2017 05:59:17PM 2 points [-]

but rarely do you get to see the nuts and bolts of the investigations being discussed.

Gelman's blog goes into messy details often enough.

because I'm forced to trust

No, you're not. You are offered some results, you do NOT have to trust them.

Comment author: dglukhov 23 January 2017 06:11:32PM 0 points [-]

Thanks for the tool.

No, you're not. You are offered some results, you do NOT have to trust them.

Indeed, but I suppose the tool provided solves my problem of judging when data was misanalysed as I could just as easily do the analysis myself.

Comment author: Douglas_Knight 23 January 2017 06:56:00PM 0 points [-]

Why are you consuming research at all? If you are a researcher considering building on someone else's research, then you probably shouldn't trust them and should replicate everything you really need. But you are also privy to a lot of gossip not on LW and so have a good grasp on base rates. If you are considering using a drug, then it has been approved by the FDA, which performs a very thorough check on the drug company. The FDA has access to all the raw data and performs all the analysis from scratch. The FDA has a lot of problems, but letting studies of new drugs get away with fraud is not one of them. But if you want to take a drug off-label, then you are stuck with research.

You say that you don't trust the intentions of multi-billion-dollar corporations. Have you thought about what those intentions are? They don't care about papers. Their main goal is to get the drug approved by the FDA. Their goal is for their early papers to be replicated by big, high-quality, highly monitored studies. Whereas the goal of multi-billion-dollar universities is mainly to produce papers, with too much focus on quantity and too little on replication.

Comment author: dglukhov 23 January 2017 10:18:59PM *  1 point [-]

Why are you consuming research at all?

I'm no researcher, and you're right: if I did want to improve upon such a study, I would, given the materials. However, I am not that affluent; I don't have such opportunities unless the research is based on coatings and adhesives (materials I do have access to). The retraction I linked was merely presented on Retraction Watch as an example. An example of what? Let's continue to...

Have you thought about what those intentions are?

My understanding is that as a public company your primary concern is bringing in enough value to the company to appease investors. A subset of that goal would be to get FDA approved.

I don't trust the company because of the incentive system, and that is my gut reaction that stems from companies getting away with unscrupulous business practices in the past. Though now that I think about it, however, Pfizer would have nothing to gain from retracting papers they knew they couldn't back up if someone asked them to. My guess is that either:

a) Min-Jean's managers were planning on gambling with her research only to find out their moles in the FDA wouldn't cooperate or,

b) there was no conspiracy, and Min-Jean was incentivized to fabricate her work of her own volition.

I do see your point, since a) is a more complicated theory in this case. But I distrust the situation. I smell a power play, at worst. But I can’t support that, unfortunately, from the articles alone. I can support power plays happening in big companies, but I can’t show those situations are related here. Not yet, anyway…

EDIT: With all that said, you seem to err on the side of trusting the FDA to do their job and trusting Pfizer to comply. Would you be able to back up that trust in this case alone?

I think waveman made my point clearer in that I don't like the fact that I don't know the details of the investigation. Down to the painfully detailed process of verifying image duplication. I'm not so sure a quick phone call to Pfizer or Min-Jean would help me either...

Comment author: waveman 24 January 2017 12:47:13AM 0 points [-]

the FDA, which performs a very thorough check on the drug company

I think you have an overly sunny view of how effective the FDA is. (leaving aside the question of cost effectiveness and the opportunity cost of the delays and even outright prevention of useful drugs getting to market and their effect on the cost of drugs)

There are plenty of cases of the FDA being hoodwinked by drug companies. Regulatory capture is always a concern.

Statistical incompetence is very common. I still cannot believe that they let Vioxx on the market when the fourfold increase in heart attacks had a P value of about 10-11%: "not statistically significant, so nothing is happening". This is the sort of stupidity that would (or should) get you an F in Statistics 101.

My experience over many decades is that over time the benefits of drugs often turn out to be way overstated and the dangers greatly underestimated.

Comment author: Douglas_Knight 24 January 2017 01:18:25AM 0 points [-]

I stand by my narrow claims. Here is another narrow claim: you are wrong about what happened with Vioxx.

Comment author: waveman 24 January 2017 02:39:52AM 0 points [-]

People can read about it for themselves.

https://en.wikipedia.org/wiki/Rofecoxib

Comment author: CellBioGuy 29 January 2017 05:56:00AM *  2 points [-]

In a crack of time between finishing the last data analysis for my PhD and writing my thesis, I couldn't stop myself from churning out a brief, sparsely sourced astrobiology blog post. In it I argue that the limited lifespan of planetary geospheres and the decay of star-formation rates mean that, even though the vast majority of star-years lie in the distant future around long-lived small stars, we are still typical observers in occurring less than 15 billion years into an apparently open-ended universe.

https://thegreatatuin.wordpress.com/2017/01/29/on-the-death-of-planets/

Comment author: lifelonglearner 24 January 2017 12:03:25AM *  2 points [-]

I'm new to writing resumes and am currently writing one for an internship application. I don't know if trying to optimize for uniqueness or quirkiness comes at significant social costs, or if there are many benefits. If anyone is good at this sort of thing (listing / bragging skills), general tips would be very welcome.

Comment author: moridinamael 24 January 2017 03:35:37PM 3 points [-]

It probably depends on the type of job you're looking for.

In school I was taught to make my resume fit on a single page. As far as I can tell, this is nonsense. In my professional life I have never seen a resume that was less than two pages. Mine is several pages.

The point of a resume is (a) to give the company a broad first-pass sense of whether you're qualified and (b) to provide a scaffolding of prior knowledge about you around which to conduct an interview. Constructing your resume with these points in mind may simplify things.

I would personally avoid going out of my way to broadcast uniqueness or quirkiness. But I suppose it depends on what exactly you mean. If you hold the world record for pogo stick jumps, that would be something interesting to put on there, partly because that's the kind of thing that connotes ambition and dedication. If you are an ardent fan of some obscure fantasy series, that's not something that's going to conceivably help you get a job.

Comment author: lifelonglearner 24 January 2017 05:07:07PM 1 point [-]

Thanks for the information. I saw the one-page-sheet recommendation in a lot of places, but this didn't match up with actual CVs I've seen on people's pages. Clearing that up is helpful.

The general point to keep in mind is also helpful.

Comment author: satt 26 January 2017 11:45:56PM 1 point [-]

I saw the one-page-sheet recommendation in a lot of places, but this didn't match up with actual CVs I've seen on people's pages.

Expanding on this, acceptable & typical lengths for CVs seem to vary between sectors. My feeling is that 1-page CVs are a bit uncommon in business (though some people do make it work!), with CVs more often 2-4 pages long. But academic CVs are often a lot longer, and can be pretty much arbitrarily long. (I suspect highly eminent academics' CVs tend to be shorter. Presumably they have less to prove.)

Comment author: mindspillage 26 January 2017 02:54:06AM 2 points [-]

In general, don't optimize for uniqueness or quirkiness; you have limited space and your potential workplace is probably using the resume to screen for "does this person meet enough of the basic desired qualities that we should find out more about them with an interview". You can add a few small things if they really set you apart, but don't go out of your way to do it. A better opportunity to do this is in your cover letter.

The best reference for workplace norms and job-hunting advice that I know is Ask A Manager; you may want to browse her archives.

Comment author: [deleted] 25 January 2017 06:40:22PM 2 points [-]

I would look strongly into the company culture before making a decision. I would default towards being more professional, but there are certain companies (e.g. Pixar from what I heard at a talk they gave at my school) who value individuality more than others. I would generally say that cover letters are a better place to emphasize personality rather than a resume. Resumes should mostly be to demonstrate qualification.

Comment author: Dagon 26 January 2017 03:52:17AM *  1 point [-]

All of the responses so far seem reasonable to me. A bit of theory as to why: How much quirkiness to show a potential employer is a great example of countersignaling.

If you're trying to distinguish yourself among a group that the observer ALREADY classifies as high-achieving, then showing that you can afford not to be serious can indicate you're at the top of that group. If you haven't established that you belong in that category, you should focus first on doing so, or your quirkiness will be taken as further evidence that you are not that good in the first place.

Oh, you might also show some quirkiness as a reverse-filter, and an honest sharing of a matching trait - if you want to only consider employers who'll tolerate (or appreciate) your quirks, this is one way to accomplish that. Usually, I'd save that for later rounds of discussion.

Comment author: lifelonglearner 26 January 2017 04:34:04AM 0 points [-]

Thanks for expanding on this. I think it makes more sense (given where I'm at) to be more conservative for right now.

Comment author: dglukhov 23 January 2017 06:06:54PM 2 points [-]

On a lighter note (Note: this is open access)

Comment author: Lumifer 23 January 2017 07:58:11PM 2 points [-]

LOL. "How to prevent crimethink". I recommend introducing global-warming Newspeak and phasing out Oldspeak -- it should help with that "inoculation" thing.

Comment author: gjm 23 January 2017 09:23:02PM *  1 point [-]

How would you distinguish between "giving people the tools to detect misinformation" and "preventing crimethink", and why do you regard this one as the latter rather than the former (which is what it claims to be)?

(Or do you think the first of those is always bogus and best thought of as the latter in disguise?)

EDITED to add: The description of "inoculation" near the start of the paper gives the impression that the procedure amounts to "make people less likely to be convinced by misinformation X by presenting a strawman version of X and refuting it", but when they go on to describe the actual "inoculation" they tried it doesn't sound strawman-y at all to me.

Comment author: Lumifer 24 January 2017 02:56:19AM 2 points [-]

How would you distinguish between "giving people the tools to detect misinformation" and "preventing crimethink"

By looking at who makes the decision. If you let the people decide, you gave them tools. If the right answer is preordained, you're just preventing crimethink.

Comment author: gjm 24 January 2017 03:07:17AM 1 point [-]

I'm not sure how that distinction actually cashes out in practice. I mean, you never have the option of directly controlling what people think, so you can always argue that you've "let the people decide". But you (generic "you", not Lumifer in particular) often have a definite opinion about what's right, so someone can always argue that "the right answer is preordained".

Suppose I write some stuff about mathematics, explaining (say) how the surreal numbers work. Am I "preventing crimethink"? I haven't told my audience "of course this stuff is as likely to be wrong as right", I haven't made any attempt to find people who think there's a contradiction or something in the theory; I've just told them what I think is right.

What if I do the same with evolution?

What if I do the same with anthropogenic climate change?

Comment author: Lumifer 24 January 2017 03:23:28AM 2 points [-]

The difference is in whether you think that disagreeing with your views is acceptable or not.

You tell people how you think the world works and why you believe it; they say "I don't think that's right"; you shrug and let them be -- that's one thing.

You tell people how you think the world works and why you believe it; they say "I don't think that's right"; you say "This will not stand, you need to be re-educated and made to disbelieve the false prophets" -- that's quite a different thing.

Comment author: gjm 24 January 2017 03:38:17AM 1 point [-]

Ah, OK. Then the paper we're discussing is not about "preventing crimethink": it is not saying, nor advocating saying, anything like what you describe in that last paragraph.

However, I suspect you will still want to characterize it as "preventing crimethink". Perhaps consider whether you can give a characterization of that term with a bit less spin on it?

(I think the authors are pretty sure they are right about global warming. They believe there is a lot of misinformation around, much of it deliberate. They suggest ways to pre-empt such misinformation and thereby make people less likely to believe it and more likely to believe what the authors consider to be the truth. But they do not say it is "unacceptable" to take a different view; they just think it's incorrect. And they aren't concerned with what you say to someone who has already made up their mind the other way; they are looking at ways to make that less likely to happen as a result of misinformation.)

Comment author: Lumifer 24 January 2017 03:51:05AM *  2 points [-]

Perhaps consider whether you can give a characterization of that term with a bit less spin on it?

Why? :-P

They believe there is a lot of misinformation around, much of it deliberate.

The problem is, I bet they believe all the misinformation is coming from the sceptics' side.

Comment author: gjm 24 January 2017 11:30:18AM 0 points [-]

Maybe they do. Maybe they're wrong. But holding a wrong opinion is not the same thing as attempting Orwellian thought control.

(My guess is that what they actually think is that almost all the misinformation, and most of the worst misinformation, is coming from the skeptics' side. It appears to me that they are in fact correct.)

Comment author: Lumifer 24 January 2017 04:13:01PM 2 points [-]

It appears to me that they are in fact correct

It appears to me that they are not, but I'm disinclined to do another dance round the same mulberry bush...

Comment deleted 25 January 2017 03:09:43AM [-]
Comment author: gjm 25 January 2017 04:33:11AM 2 points [-]

I think either you're misunderstanding the paper, or I'm misunderstanding you. (Or of course both.) The point isn't that scientists should be looking at consensus instead of actually doing science; of course they shouldn't. It's that for someone who isn't an expert in the field and isn't in a position to do their own research, the opinions of those who are experts and have done their own research are very useful information. (In cases -- such as this one -- where there is near unanimity among the experts, I think the only reasonable options are "accept expert consensus, if only tentatively" and "become expert at a level comparable to theirs and form your own opinion". Of course no one is obliged to be reasonable.)

Comment deleted 25 January 2017 04:41:46AM [-]
Comment author: gjm 25 January 2017 10:55:10AM 1 point [-]

Catholic theologians are experts in what the Roman Catholic Church believes. If you claim that the RCC isn't really trinitarian, then "bullshit, look at what all the Catholic theologians say" is a perfectly good response.

They claim (or at least let's suppose arguendo that they do) to be experts on the actual facts about God. It turns out they're wrong about that. So ... is their situation nicely parallel to that of climate scientists?

Why, no. Look at all the people in the world who claim to be God-experts and have studied long and hard, got fancy credentials, etc. That includes the Catholic theologians. It also includes the Protestant theologians (who are almost all also trinitarian but disagree about a bunch of other important things). And it includes Islamic scholars, who are very decidedly not trinitarians. It includes Hindu religious experts, whose views are more different still. By any reasonable criterion it also includes philosophers specializing in philosophy of religion, whose views are very diverse.

This is very much not the same as the situation with climate science. (And not only because the term "climate science" has been somehow coopted; there aren't university departments doing heretical climate science and using a different name for it, so far as I can tell.)

Eric Raymond's list of "warning signs" is pretty bullshitty. One warning sign about his warning signs: he prefaces it with a brief list of past alleged junk-science panics, and at least some of the things he lists are definitely not junk science; he only has the luxury of pretending they are because governments listened to the scientists and did something about the things they were warning about. It's amusing that he lists "Past purveyors of junk science do not change their spots" among his signs, incidentally, because to a great extent the organizations (and in some cases the people) supporting global warming skepticism are the same ones that argued that smoking doesn't cause cancer. Why? Because their opinions are on sale to the highest corporate bidder.

Comment author: jimmy 23 January 2017 11:13:08PM 1 point [-]

Are they looking at your thought processes or your conclusions?

If they have nothing to say when you choose the "right" conclusion, but have a lot to say when you choose the "wrong" one (especially if they don't know how you arrived at it), then it's crimethink.

If you can have the whole conversation with them without being able to tell which conclusion they personally believe, then they're legit.

Without reading further than the title, my money is on them being on the "global warming is real and if you don't think so you're an idiot" side. (am I wrong?)

Comment author: gjm 23 January 2017 11:22:34PM 1 point [-]

Are they looking at your thought processes or your conclusions?

Neither, I think. I mean, your question seems to rest on an incorrect presupposition about what the paper's about. They're not trying to judge people for their opinions or how they reached them. They're saying "here's a topic with a lot of misinformation flying around; let's see what we can do to make people less likely to be persuaded by the misinformation".

my money is on them being on the [...] side

Well, the authors clearly hold that global warming is real and that the evidence for it is very strong. Does that invalidate the paper for you?

Comment author: ChristianKl 26 January 2017 10:35:51AM 2 points [-]

Even if you grant that global warming is real, that doesn't mean that there also isn't a lot of misinformation on the global warming side.

If I quiz a random sample of liberals on the truth as established by the IPCC, there are many issues where the liberals are likely to say that specific scenarios are more likely than the IPCC assumes.

Comment author: gjm 26 January 2017 11:38:34AM 1 point [-]

there are many issues where the liberals are likely saying that specific scenarios are more likely than the IPCC assumes.

Could be. As I've said elsewhere in the thread, I think the relevant question is not "is there misinformation on both sides?" (the answer to that is likely to be yes on almost any question) but "how do the quantity and severity of misinformation differ between sides?". My impression is that it's not at all symmetrical, but of course I might think that even if it were (it's much easier to spot misinformation when you disagree strongly with it). Do you know of any nonpartisan studies of this?

Comment author: ChristianKl 26 January 2017 12:09:59PM 2 points [-]

There was a letter by Nobel Laureates that suggested the probability of global warming is in the same class as evolution.

Given the probability I have in my mind for evolution, that's off by more orders of magnitude from the IPCC number than the positions of global warming skeptics are.

Do you know of any nonpartisan studies of this?

Who would have to fund a study like this to be nonpartisan?

My impression is that it's not at all symmetrical, but of course I might think that even if it were

How do you make that judgment? Did you read the IPCC report to have the ground truth for various claims? The great thing about the report is that it has probability categories for its various claims.

On reading it, most of the claims that the IPCC report makes about global warming are a lot less than 99% certain. Media reports generally have a hard time reasoning about claims with probabilities of 80% or 90%.

Comment author: gjm 26 January 2017 02:25:52PM 1 point [-]

a letter [...] that suggested the probability of global warming is in the same class as evolution [...] more orders of magnitude from the IPCC number than the positions of global warming skeptics.

I can only guess what letter you have in mind; perhaps this one? (Some of its signatories are Nobel laureates; most aren't.) I'll assume that's the one; let me know if I'm wrong.

It doesn't mention probability at all. The way in which it suggests global warming is in the same class as evolution is this:

But when some conclusions have been thoroughly and deeply tested, questioned, and examined, they gain the status of "well-established theories" and are often spoken of as "facts".

For instance, there is compelling scientific evidence that [here they list the age of the earth, the Big Bang, and evolution]. Even as they are overwhelmingly accepted by the scientific community, fame still awaits anyone who could show these theories to be wrong. Climate change now falls into this category: there is compelling, comprehensive, and consistent objective evidence that humans are changing the climate in ways that threaten our societies and the ecosystems on which we depend.

They don't claim that the probabilities are the same. Only that in all these cases the probability is high enough to justify saying that this is a thing that's been scientifically established.

Who would have to fund a study like this to be nonpartisan?

I don't know. Probably best not the fossil fuel industry. Probably best not any environmentalist organization. I think claims of bias on the part of government and academia are severely exaggerated, but maybe best to avoid those if only for the sake of appearances. A more pressing question, actually, is who would have to do it to be nonpartisan. You want people with demonstrated expertise, but the way you demonstrate expertise is by publishing things and as soon as anyone publishes anything related to climate change they will be labelled a partisan by people who disagree with what they wrote.

I don't have a good answer to this.

How do you make that judgement? Did you read the IPCC report [...] most of the claims that the IPCC report makes about global warming are a lot less than 99% certain.

It's not a judgement; my use of the rather noncommittal word "impression" was deliberate. I make it by looking at what I see said about climate change, comparing it informally with what I think I know about climate change, and considering the consequences. It's not the result of any sort of statistical study, hence my deliberately noncommittal language. I have read chunks of the IPCC report but not the whole thing. I agree that it's good that they talk about probabilities. The terms they attach actual numerical probabilities to are used for future events; they usually don't give any numerical assessment of probability (nor any verbal assessment signifying a numerical assessment) for statements about the present and past, so I don't see any way to tell whether they regard those as "a lot less than 99% certain". They say "Where appropriate, findings are also formulated as statements of fact without using uncertainty qualifiers", which I take to mean that when they do that they mean there's no uncertainty to speak of.

Here are a few extracts from the AR5 "synthesis report".

Warming of the climate system is unequivocal [...] it is virtually certain [GJM: this is their term for >99%] that globally the troposphere has warmed and the lower stratosphere has cooled since the mid-20th century [...] It is virtually certain that the upper ocean (0-700m) warmed from 1971 to 2010 [...] Human influence [...] is extremely likely [GJM this is their term for 95-100%] to have been the dominant cause of the observed warming since the mid-20th century.

The key "headline" claims that distinguish the "global warming" from "not global warming" positions are "virtually certain"; attribution to human activities is "extremely likely" (and I have the strong impression that they are being deliberately overcautious about this one; note, e.g., that they say the best estimates for how much warming known human activity should have caused and the best estimates for how much warming there has actually been are pretty much equal).

Comment author: ChristianKl 26 January 2017 07:51:55PM 2 points [-]

I would judge the chances that evolution is incorrect as lower than 10^{-6}.

When the IPCC uses 10^{-2} as the category for global warming, that's off by many orders of magnitude.

A person who believed that the chances of human-caused global warming are 10% would be nearer to the truth than a person who thinks it's in the same category as evolution.

and I have the strong impression that they are being deliberately overcautious

Basically, given the information to which you have been exposed, you have a strong impression that the IPCC is making a mistake in the direction that would align with your politics.

The outside view suggests that most of the time experts are a bit overconfident. The replication crisis suggests that scientists are often overconfident. With climate science we are speaking about a domain that doesn't even have access to running real controlled experiments to verify important beliefs. That makes me doubt the idea that IPCC are underconfident.

If those IPCC scientists are that good at not being overconfident, why don't we tell the psychologists to listen to them to deal with their replication crisis?

Comment author: jimmy 24 January 2017 08:07:29PM *  1 point [-]

I'm not addressing the paper specifically; I'm answering your question more generally. I still think it applies here, though. When they identify "misinformation", are they first looking for things that support the wrong conclusion and then explaining why you shouldn't believe this wrong thing, or are they first looking at reasoning processes and explaining how to do them better (without tying it to the conclusion they prefer)?

For example, do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is? If they don't and they're claiming to be about "misinformation" and that they're not pushing an agenda, then that's quite suspicious. Maybe they do, I dunno. But that's where I'd look to tell the difference between what they're claiming and what Lumifer is accusing them of.

Well, the authors clearly hold that global warming is real and that the evidence for it is very strong. Does that invalidate the paper for you?

The fact that they hold that view does not. It's possible to agree with someone's conclusions and still think they're being dishonest about how they're arguing for them, you know. (And also, to disagree with someone's conclusions but think that they're at least honest about how they get there.)

The fact that it is clear from reading this paper which is supposedly not about what they believe sorta does, depending on how clear they are about it and how they are clear about it. It's possible for propaganda to contain good arguments, but you do have to be pretty careful with it because you're getting filtered evidence.

(notice how it applies here. I'm talking about processes not conclusions, and haven't given any indication of whether or not I buy into global warming - because it doesn't matter, and if I did it'd just be propaganda slipping out)

Comment author: gjm 25 January 2017 12:57:31AM 1 point [-]

When they identify "misinformation", are they first looking for things that support the wrong conclusion [...] or are they first looking at reasoning processes

What makes misinformation misinformation is that it's factually wrong, not that the reasoning processes underlying it are bad. (Not to deny the badness of bad reasoning, but it's a different failure mode.)

do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is?

They pick one single example of misinformation, which is the claim that there is no strong consensus among climate scientists about anthropogenic climate change.

If they don't and they're claiming to be about "misinformation" and that they're not pushing an agenda, then that's quite suspicious.

It would be quite suspicious if "global warming is real" and "global warming is not real" were two equally credible positions. As it happens, they aren't. Starting from the premise that global warming is real is no more unreasonable than starting from the premise that evolution is real, and not much more unreasonable than starting from the premise that the earth is not flat.

The fact that it is clear from reading this paper which is supposedly not about what they believe sorta does

I disagree. If you're going to do an experiment about how to handle disinformation, you need an example of disinformation. You can't say "X is an instance of disinformation" without making it clear that you believe not-X. Now, I suppose they could have identified denying that there's a strong consensus on global warming as disinformation while making a show of not saying whether they agree with that consensus or not, but personally I'd regard that more as a futile attempt at hiding their opinions than as creditable neutrality.

I [...] haven't given any indication of whether or not I buy into global warming

I think you have, actually. If there were a paper about how to help people not be deceived by dishonest creationist propaganda, and someone came along and said "do they address any misinformation that would lead people into being misled into thinking 6-day creation is less true than it is?" and the like, it would be a pretty good bet that that person was a creationist.

Now, of course I could be wrong. If so, then I fear you have been taken in by the rhetoric of the "skeptics"[1] who are very keen to portray the issue as one where it's reasonable to take either side, where taking for granted that global warming is real is proof of dishonesty or incompetence, etc. That's not the actual situation. At this point, denial of global warming is about as credible as creationism; it is not a thing scientific integrity means people should treat neutrally.

[1] There don't seem to be good concise neutral terms for the sides of that debate.

Comment author: ChristianKl 26 January 2017 10:35:47AM 2 points [-]

It would be quite suspicious if "global warming is real" and "global warming is not real" were two equally credible positions.

Both are quite simplistic positions. If you look at the IPCC report there are many different claims about global warming effects and those have different probabilities attached to them.

It's possible to be wrong on some of those probabilities in both directions, but thinking about probabilities is a different mode than "On what side do you happen to be?"

Comment author: gjm 26 January 2017 11:44:21AM 1 point [-]

Both are quite simplistic positions

Incidentally, the first comment in this thread to talk in terms of discrete "sides" was not mine above but one of jimmy's well upthread, and I think most of the ensuing discussion in those terms is a descendant of that. I wonder why you chose my comment in particular to object to.

Comment author: gjm 26 January 2017 11:33:35AM 1 point [-]

Both are quite simplistic positions

I don't know about you, but I don't have the impression that my comments in this thread are too short.

Yes, the climate is complicated. Yes, there is a lot more to say than "global warming is happening" or "global warming is not happening". However, it is often convenient to group positions into two main categories: those that say that the climate is warming substantially and human activity is responsible for a lot of that warming, and those that say otherwise.

Comment author: jimmy 25 January 2017 02:50:41AM *  1 point [-]

What makes misinformation misinformation is that it's factually wrong, not that the reasoning processes underlying it are bad.

Yes, and identifying it is a reasoning process, which they are claiming to teach.

It would be quite suspicious if "global warming is real" and "global warming is not real" were two equally credible positions. As it happens, they aren't.

Duh.

You can't say "X is an instance of disinformation" without making it clear that you believe not-X.

Sure, but there's more than one X at play. You can believe, for example, that "the overwhelming scientific consensus is that global warming is real" is false and that would imply that you believe not-"the overwhelming scientific consensus is that global warming is real". You're still completely free to believe that global warming is real.

I think you have, actually.

"What about the misinformation on the atheist side!" is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason (which usually people cannot do).

If someone is actually capable of the kind of honesty where they hold their own side to the same standards as the outgroup side, it is no longer evidence of which side they're on. You're assuming I don't hold my own side to the same standards. That's fine, but you're wrong. I'd have the same complaints if it were a campaign to "teach them creationist folk how not to be duped by misinformation", and I am absolutely not a creationist by any means.

I can easily give an example, if you'd like.

If so, then I fear you have been taken in by the rhetoric of the "skeptics"[1] who are very keen to portray the issue as one where it's reasonable to take either side,

Nothing I am saying is predicated on there being more than one "reasonable" side.

where taking for granted that global warming is real is proof of dishonesty or incompetence, etc

If you take for granted a true thing, it is not proof of dishonesty or incompetence.

However, if you take it for granted and say that there's only one reasonable side, then it is proof that you're looking down on the other side. That's fine too, if you're ready to own that.

It just becomes dishonest when you try to pretend that you're not. It becomes dishonest when you say "I'm just helping you spot misinformation, that's all" when what you're really trying to do is make sure that they believe Right thoughts like you do, so they don't fuck up your society by being stupid and wrong.

There's a difference between helping someone reason better and helping someone come to the beliefs that you believe in, even when you are correct. Saying that you're doing the former while doing the latter is dishonest, and it doesn't help if most people fail to make the distinction (or if you somehow can't fathom that I might be making the distinction myself and criticizing them for dishonesty rather than for disagreeing with me).

Comment author: gjm 25 January 2017 03:37:23AM 1 point [-]

identifying it is a reasoning process, which they are claiming to teach.

I don't think they are. Teaching people to reason is really hard. They describe what they're trying to do as "inoculation", and what they're claiming to have is not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation.

"What about the misinformation on the atheist side!" is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason

Not only that. Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.

I think the situation with global warming is comparable.

You're assuming I don't hold my own side to the same standards.

I'm not. Really, truly, I'm not. I'm saying that from where I'm sitting it seems like global-warming-skeptic misinformation is a big problem, and global-warming-believer misinformation is a much much smaller problem, and the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.

then it is proof that you're looking down on the other side.

Sure. (Though I'm not sure "looking down on" is quite the right phrase.) So far as I can tell, the authors of the paper we're talking about don't make any claim not to be "looking down on" global-warming skeptics. The complaints against them that I thought we were discussing here weren't about them "looking down on" global-warming skeptics. Lumifer described them as trying to "prevent crimethink", and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.

It becomes dishonest when you say "I'm just helping you spot misinformation, that's all" when what you're really trying to do is make sure that they believe Right thoughts like you do

I think this is a grossly unreasonable description of the situation, and the use of the term "crimethink" (Lumifer's, originally, but you repeated it) is even more grossly unreasonable. The unreasonableness is mostly connotational rather than denotational; that is, there are doubtless formally-kinda-equivalent things you could say that I would not object to.

So, taking it bit by bit:

when you say "I'm just helping you spot misinformation, that's all"

They don't say that. They say: here is a way to help people not be taken in by disinformation on one particular topic. (Their approach could surely be adapted to other particular topics. It could doubtless also be used to help people not be informed by accurate information on a particular topic, though to do that you'd need to lie.) They do not claim, nor has anyone here claimed so far as I know, that they are offering a general-purpose way of distinguishing misinformation from accurate information. That would be a neat thing, but a different and more difficult thing.

make sure that they believe Right thoughts

With one bit of spin removed, this becomes "make sure they are correct rather than incorrect". With one bit of outright misrepresentation removed, it then becomes "make it more likely that they are correct rather than incorrect". This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they're not even though they subtend about the same angle from earth, I am trying to "make sure that they believe Right thoughts". But you wouldn't dream of describing it that way. So what makes that an appropriate description in this case?

(Incidentally, it may be worth clarifying that the specific question about which the authors of the paper want people to "believe Right thoughts" is not global warming but whether there is a clear consensus on global warming among climate scientists.)

crimethink

I'm just going to revisit this because it really is obnoxious. The point of the term "crimethink" in 1984 is that certain kinds of thoughts there were illegal and people found thinking them were liable to be tortured into not thinking them any more. No one is suggesting that it should be illegal to disbelieve in global warming. No one is suggesting that people who disbelieve in global warming should be arrested, or tortured, or have their opinions forcibly changed in any other fashion. The analogy with "crimethink" just isn't there. Unless you are comfortable saying that "X regards Y as crimethink" just means "X thinks Y is incorrect", in which case I'd love to hear you justify the terminology.

Comment author: Lumifer 25 January 2017 05:00:11AM *  4 points [-]

No one is suggesting that it should be illegal to disbelieve in global warming.

This is factually incorrect (and that's even without touching Twitter and such).

The analogy with "crimethink" just isn't there.

Oh, all right. You don't like the word. How did you describe their activity? "...not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation."

Here: brainwashing. Do you like this word better?

Comment author: jimmy 25 January 2017 09:10:46PM *  0 points [-]

I don't think they are. Teaching people to reason is really hard. They describe what they're trying to do as "inoculation"

Oh. Well, in that case, if they're saying "teaching you to not think bad is too hard, we'll just make sure you don't believe the wrong things, as determined by us", then I kinda thought Lumifer's criticism would have been too obvious to bother asking about.

Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.

Oh… yeah, that's not true at all. If it were true, and 99% of the bullshit were generated by one side, then yes, it would make sense to spend 99% of one's time addressing bullshit from that one side and it wouldn't be evidence for pushing an agenda. There are still other reasons to have a more neutral balance of criticism even when there's not a neutral balance of bullshit or evidence, but you're right: if the bullshit is lopsided then the lopsided treatment wouldn't be evidence of dishonest treatment.

It’s just that bullshit from one’s own side is a whole lot harder to spot because you immediately gloss over it thinking “yep, that’s true” and don’t stop to notice “wait! That’s not valid!”. In every debate I can think of, my own side (or “the correct side”, if that’s something we’re allowed to declare in the face of disagreement) is full of shit too, and I just didn’t notice it years ago.

I'm not. Really, truly, I'm not. [...]it seems like [...] the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.

This reads to me as "I'm not. Really, truly, I'm not. I'm just [doing exactly what you said I was doing]". This is a little hard to explain, as there is some inferential distance here, but I'll just say that what I mean by "have given no indication of what I believe", and the reason I think that is important, is different from what it looks like to you.

Sure. (Though I'm not sure "looking down on" is quite the right phrase.) So far as I can tell, the authors of the paper we're talking about don't make any claim not to be "looking down on" global-warming skeptics. The complaints against them that I thought we were discussing here weren't about them "looking down on" global-warming skeptics. Lumifer described them as trying to "prevent crimethink", and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.

Part of “preventing crimethink” is that the people trying to do it usually believe that they are justified in doing so (“above” the people they’re trying to persuade), and also that they are “simply educating the masses”, not “making sure they don’t believe things that we believe [but like, we really believe them and even assert that they are True!]”.

With one bit of spin removed, this becomes "make sure they are correct rather than incorrect".

This is what it feels like from the inside when you try to enforce your beliefs on people. It feels like the beliefs you have are merely correct, not your own beliefs (that you have good reason to believe you're right on, etc). However, you don't have some privileged access to truth. You have to reason and stuff. If your reasoning is good, you might even come to right answers. If the way that you are trying to make sure they are correct is by finding out what is true [according to your own beliefs, of course] and then nudging them towards believing the things that are true (which works out to "things that you believe"), then it is far more accurate to say "make sure they hold the same beliefs as me", even if you hold the correct beliefs and even if you're obviously correct and it's unreasonable to disagree.

And again, just to be clear, this applies to creationism too.

With one bit of outright misrepresentation removed, it then becomes "make it more likely that they are correct rather than incorrect". This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they're not even though they subtend about the same angle from earth, I am trying to "make sure that they believe Right thoughts". But you wouldn't dream of describing it that way. So what makes that an appropriate description in this case?

If you simply said “many people think the sun and the moon are the same size, they aren’t and here’s proof”, I’d see you as offering a helpful reason to believe that the sun is bigger.

If it was titled "I'm gonna prevent you from being wrong about the moon/sun size!", then I'd see your intent a little bit differently. Again, I'm talking about the general principles here and not making claims about what the paper itself actually does (I cannot criticise the paper itself as I have not read it), but it sounded to me like they weren't just saying "hey guys, look, scientists do actually agree!" and were rather saying "how can we convince people that scientists agree" and taking that agreement as presupposed. "Inoculate against this idea" is talking about the idea and the intent to change their belief. If all you are trying to do is offer someone a new perspective, you can just do that - no reason to talk about how "effective" this might be.

Unless you are comfortable saying that "X regards Y as crimethink" just means "X thinks Y is incorrect", in which case I'd love to hear you justify the terminology.

Yes, I thought it was obvious and common knowledge that Lumifer was speaking in hyperbole. No, they are not actually saying people should be arrested and tortured and I somehow doubt that is the claim Lumifer was trying to make here.

It’s not “thinks Y is incorrect”, it’s “socially punishes those who disagree”, even if it’s only mild punishment and even if you prefer not to see it that way. If, instead of arguing that they’re wrong you presuppose that they’re wrong and that the only thing up for discussion is how they could come to the wrong conclusion, they’re going to feel like they’re being treated like an idiot. If you frame those who disagree with you as idiots, then even if you have euphemisms for it and try to say “oh, well it’s not your fault that you’re wrong, and everyone is wrong sometimes”, then they are not going to want to interact with you.

If you frame them as an idiot, then in order to have a productive conversation with you that isn’t just “nuh uh!”/”yeah huh!”, they have to accept the frame that they’re an idiot, and no one wants to do that. They may be an idiot, and from your perspective it may not be a punishment at all - just that you’re helping them realize their place in society as someone who can’t form beliefs on their own and should just defer to the experts. And you might be right.

Still, by enforcing your frame on them, you are socially punishing them, from their perspective, and this puts pressure on them to “just believe the right things”. It’s not “believe 2+2=5 or the government will torture you”, it’s “believe that this climate change issue is a slam dunk or gjm will publicly imply that you are unreasonable and incapable of figuring out the obvious”, but that pressure is a step in the same direction - whether or not the climate change issue is a slam dunk and whether or not 2+2=5 does not change a thing. If I act to lower the status of people who believe the sky isn’t blue without even hearing out their reasons, then I am policing thoughts, and it becomes real hard to be in my social circle if you don’t share this communal (albeit true) belief. This has costs even when the communal beliefs are true. At the point where I start thinking less of people and imposing social costs on them for not sharing my beliefs (and not their inability to defend their own or update), I am disconnecting the truth finding mechanism and banking on my own beliefs being true enough on their own. This is far more costly than it seems like it should be for more than one reason - the obvious one being that people draw this line waaaaaaay too early, and very often are wrong about things where they stop tracking the distinction between “I believe X” and “X is true”.

And yes, there are alternative ways of going about it that don't require you to pretend that "all opinions are equally valid" or that you don't think it would be better if more people agreed with you or any of that nonsense.

Does this make sense?

Comment author: lifelonglearner 23 January 2017 06:20:48PM 2 points [-]

Wow, this is very neat. Thanks for sharing! (I'll be giving a talk to students about climate change and psych next month, and this looks to be very helpful.)

Do you have any other papers you'd recommend in this vein?

Comment author: kitimat 23 January 2017 09:29:52PM 2 points [-]

This handbook is about climate change and how debunking can actually backfire.

John Cook is an instructor on EDX's Making Sense of Climate Science Denial course.

Comment author: Thomas 23 January 2017 08:42:09AM 2 points [-]
Comment author: cousin_it 27 January 2017 03:02:47PM *  1 point [-]

Maybe check out this, then this if you're hardcore.

Comment author: Thomas 29 January 2017 10:32:30AM 0 points [-]

Thanks.

Hyper knots are still knots, however. I am more looking for something conceptually new in higher dimensions. Like rotation is a new concept in 2D, unknown in Lineland. Or knots are an unknown concept in Flatland. I think every dimension has something unfathomable to offer for people used only to lower dimensions - that is, to their own number of dimensions and below.

I also think that, at least in principle, a 3D dweller might be able to vividly simulate a 4D or higher space. I doubt such a person already exists, but it should be possible.

Here is a sneak preview of tomorrow's Open Thread link about my new problem:

https://protokol2020.wordpress.com/2017/01/29/a-topological-problem/

Comment author: Viliam 26 January 2017 05:27:29PM *  1 point [-]

I have just read a debate about whether high-IQ kids should be allowed to attend special schools, and the debate was predictable. So I used this as an opportunity to summarize the arguments against "smart segregation". (The arguments in favor of it seem quite straightforward: better education, less bullying, social and professional company of equals.) Here are the results; please tell me if some frequently-made argument is missing.

Note: different arguments here contradict each other, which is okay, because they are typically not made by the same people.

1 -- There is no such thing as "smart children", because...

1.A -- ...everyone who believes themselves to be smart is actually just a conceited fool. Parents who believe that their children are smart are just typical parents, uncritical of their children. (Insert anecdotal evidence about a kid from your elementary school who believed himself to be super smart, and so did his parents, but he was obviously a moron.)

1.B -- ...you cannot measure smartness on a single scale. There are many kinds of intelligence; everyone is special in a different way. Someone is better at math, but someone else may be better at dancing or spirituality. Belief in g-factor is debunked pseudoscience; it is racist and sexist and shouldn't be given a platform. (Quote S.J.Gould and/or insert example of Hitler believing some people were better than others.)

1.C -- ...you cannot measure smartness fairly. If a child is tested as smart, it only means they have rich parents who were able to buy them tutors, made them cram for the tests, and maybe even bribed the test evaluators. Also, it is known that tests provide unfair advantage to white cishet male children.

1.D -- A weaker version of the previous statement is that if you make programs for smart children, the children from poor or minority families will not be able to participate in them, for various reasons. This would leave them in a worse situation than they are in now, because if it becomes common knowledge that such programs exist, the fact that a child didn't participate in one would be taken as evidence against their being smart. That is, an average smart child would actually be harmed by such a policy.

2 -- Having smart children together with dumb ones is better for the smart children, because...

2.A -- ...it will improve the smart children's social skills. The most important social skill is being able to interact with average people, because they make up a majority of the population, so you will interact with them most frequently as an adult. (This assumes that adult people actually interact with a random sample of the population, as opposed to living in a bubble of their profession or socioeconomic level, in both their professional and private lives.)

2.B -- ...it will allow the smart children to learn important things from the dumb ones, other than the academic skills. (This usually assumes some kind of cosmic justice, where smaller intelligence is balanced by greater emotionality or spirituality, so the dumb children can provide value that the smart children would not be able to provide to each other.)

2.C -- ...it will allow the smart children to have contacts outside of their bubble.

2.D -- ...the smart children can tutor the dumb ones, which will be an enriching experience for both sides. Explaining stuff to other people deepens your own understanding of the topic.

3 -- Having smart children together with dumb ones is better for the dumb children, because...

3.A -- ...having the smart children in the classroom will provide inspiration for the rest of the class.

3.B -- ...the smart children can tutor the dumb ones.

3.C -- ...it will allow the dumb children to have contacts outside of their bubble.

3.D -- ...the smart children in the classroom will motivate the teachers; having motivated teachers at school will benefit all students.

3.E -- ...the parents of the smart children (presumably themselves smart and rich) will care about improving the quality of education in their child's school, which will benefit all students.

4 -- We should actually not optimize for the smart children, even if it would be a net benefit, because...

4.A -- ...the whole "problem" is made up anyway, and a truly smart child will thrive in any environment. Optimizing for smart children should be such low priority that you should be ashamed for even mentioning the topic. (Insert anecdotal evidence about a smart kid who studied at average school, and became successful later.) Even the argument about bullying is invalid, because bullying happens among smart children, too.

4.B -- ...smart children usually have rich parents. Creating better educational opportunities for smart children therefore on average increases income inequality, which is bad.

Comment author: gjm 26 January 2017 06:39:46PM 2 points [-]

I haven't seen a lot of arguments about this issue. Here are some other anti-segregation arguments that occur to me; I make no guarantee that they are common. I do not necessarily endorse them any more than Viliam endorses the ones he mentions. I do not necessarily endorse the conclusion they (in isolation) point towards any more than Viliam does.

I'm going along with Viliam's smart/dumb terminology and dichotomous treatment for simplicity; I am well aware, and I'm sure Viliam is too, that actually it doesn't make much sense to classify every pupil as "smart" or "dumb".

2.E -- ...the smart children will grow up with more awareness that not everyone is like them, and a better idea of what other people outside their bubbles are like. (Not the same as 2.C; it applies to some extent even if smart and dumb never even speak to one another.)

2.F -- ...a certain fraction of dumb children is necessary for various sorts of extracurricular activity mostly but not exclusively liked by dumb children to be sustainable, so without the dumb ones the smart ones who would have benefited from those activities will be left out.

3.F -- ...if they are segregated, better teachers will likely want to avoid the "dumb" schools, so the "dumb" children will get a worse education.

3.G -- (same as 2.E with signs reversed)

3.H -- ...the mostly smart and rich people in government will care about improving the quality of education in all schools, not just the ones attended by the children of People Like Them. (Closely related to 3.E but not the same.)

3.I -- ...the smart children tend to be better behaved too, and a school consisting entirely of dumb children will have serious behaviour problems. (Whether this is better overall depends on how behaviour varies with fraction of smart/dumb children, among other things.)

3.J -- ...a certain fraction of smart children is necessary for various sorts of extracurricular activity mostly but not exclusively liked by smart children to be sustainable, so without the smart ones the dumb ones who would have benefited from those activities will be left out.

5 -- Having smart children together with dumb ones is better for everyone, because ...

5.A -- ...segregation means that on average schools will be further from homes (because children will less often just be going to the nearest school), which means more travel; hence more time wasted, more pollution, more congestion on the roads, etc.

5.B -- ...segregation in schools will lead to segregation of communities as parents who expect their children to go to the "smart" schools move nearer them, and likewise for the "dumb" schools, and more-segregated communities means people more completely in bubbles, less empathy for one another, etc., destabilizing society. (Mumble mumble Trump Brexit mumble out-of-touch elites mumble mumble.)

5.C -- ...parents whose children go to the same school will interact at least a bit for school-related reasons, so non-segregated schools improve social cohesion and cross-bubble awareness by making smart and dumb parents talk to one another from time to time.

5.D -- ...children near the smart/dumb borderline (wherever it's drawn) may do worse, because e.g. if they're generally smart but somewhat worse at one particular thing, there won't be a class of dumbish people for them to do it in, and likewise if they're generally dumb but somewhat better at one particular thing; particularly sad will be the case of a child who develops late or has a bad year for some reason and then finds they're in a school that just doesn't have any lessons that suit the level they're at.

Comment author: Viliam 27 January 2017 09:54:00AM *  0 points [-]

Thanks! What are your central examples of the activities in 2.F? Sport? Craft? Something else?

I think I never actually met anyone using 5.B. Probably because using this argument requires assuming that there are enough genuinely smart people to create a community when given a chance; and most people around me seem to believe that high IQ doesn't really matter, and that on the "unspecified but certainly very high" level where it does, those people are too few, not enough to create a functional bubble. Alternatively, other people believe that every above-average high school or every university is already de facto a school for high-IQ kids, and that IQ levels above this line don't actually make a difference, so all such bubbles already exist. -- No one seems to believe that there could be a meaningful line drawn at IQ maybe 150, where the people are too few to create a (non-professional) community spontaneously, but sufficiently different from the rest of the population that they might enjoy actually living in such a community if given a chance.

Comment author: gjm 27 January 2017 10:45:44AM 0 points [-]

For 2.F I was indeed thinking sport, but actually I have very little idea whether such activities really exist and if so what they actually are. Plenty of smart kids like sport.

requires assuming that there are enough genuinely smart people to create a community

We're already assuming that there are enough smart-in-whatever-sense people to have their own school. Depending on where the borderline between "smart" and "dumb" is drawn, there may be more or fewer "smart" schools, but each one will have to be of reasonable size.

Comment author: Viliam 27 January 2017 03:41:15PM *  0 points [-]

Well, specific IQ levels are usually not mentioned in the debates I have seen. Which of course only makes the debates more confused. :(

When I think about it quantitatively, if we use Mensa as a Schelling point for "high IQ", then 2% of the population have IQ over 130, which qualifies them as Mensa-level smart. Two percent may not seem like much, but for example in a city with a population of half a million (such as where I live), that gives 10 000 people. To better visualize this number: if you have an apartment building with 7 floors, that is 20 apartments; assuming on average 2.5 people per apartment, that is 50 people per building, which gives us 200 buildings.

Of course assuming unrealistically that Mensa could somehow successfully convince all people in the city to take the test, and to convince those who pass to move together. But 200 buildings of Mensa-level people sounds impressive. (Well, if Mensa sounds impressive, which on LW it probably does not.)

Speaking of schools, let's say that people live about 70 years, but there are more young people than old people, so let's assume that for young people a year of age corresponds to 1/50 of the population, so if there are 10 000 Mensa-level people in the half-million city, that makes 200 children for each grade. That's about 7 classrooms for each grade, depending on size. That's like two or three schools. Again, depending on the assumption that Mensa could find those kids, and convince the parents to put them all into Mensa schools. (Which, under the completely unrealistic assumptions, could be built in the "Mensa district" of the city.)

To make this happen, it would require a smaller miracle, but it's not completely impossible. Just making all people in one city interested in Mensa, making them take the test, and making them interested in moving to the "Mensa district" would require a lot of advertising. (And even then, there would be a lot of resistance.) But, hypothetically, if you had a millionaire who would build the new district, hire a few celebrities to popularize the idea, and perhaps sell or rent the apartments only to high-IQ people at a discount... it could happen. If 10% of the target population were convinced enough to move, you could get a "Mensa block" with 20 houses, 1 very small elementary school, and 1 very small high school. -- I am afraid this is the best possible outcome, and it already assumes a millionaire sponsor.

If you imagine a higher IQ level, such as IQ 150, even this is impossible.
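These estimates can be checked with a short script, using the standard IQ model of a normal distribution with mean 100 and SD 15; the apartment, building, and one-year-per-1/50-of-population figures are the rough estimates from the comment above, not exact data:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # standard IQ scale
city = 500_000

# Fraction of the population above Mensa's IQ 130 cutoff (about 2%).
frac_130 = 1 - iq.cdf(130)
people_130 = city * frac_130          # roughly 10,000-11,000 people

# Housing visualization: 20 apartments per building, 2.5 people each.
buildings = people_130 / (20 * 2.5)   # roughly 200 buildings

# Children per grade, assuming each year of age is 1/50 of the population.
per_grade = people_130 / 50           # roughly 200 children per grade
classrooms = per_grade / 28           # roughly 7-8 classrooms

# At IQ 150 the base rate collapses by more than an order of magnitude.
frac_150 = 1 - iq.cdf(150)
people_150 = city * frac_150          # only a couple hundred in the whole city

print(round(people_130), round(buildings), round(per_grade), round(people_150))
```

The last line is the point: at IQ 130 a half-million city yields school-sized cohorts, while at IQ 150 the entire city contains only a couple hundred people of all ages combined.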

So, while some people may fear that if we make smart people network with each other, they could take over the whole society, to me it feels like saying that as soon as a dozen LessWrong fans start regularly attending a meetup, they will become an army of beisutsukai and take over the world. Nice story, but not gonna happen. If this is the reason against the schools for smart kids, the normies are safe.

Comment author: Lumifer 27 January 2017 05:26:26PM 2 points [-]

specific IQ levels

You might find this interesting: The 7 Tribes of Intellect.

Comment author: Viliam 30 January 2017 01:14:35PM 0 points [-]

Well, I approximately agree, but that's just a classification of people into IQ layers. I'd like to go much further than that.

For example -- sorry, today I am too lazy to do a google hunt -- I think there was research, probably by Terman, about why some high-IQ people succeed in life while others fail, often spectacularly. His conclusion was that it mostly depends on how well connected with other high-IQ people they are; most importantly, whether they come from a generally high-IQ family. (My hypothesis is that the effect is twofold: first, the high-IQ family itself is a small high-IQ society; second, the older family members were already solving the problem of "how to find other high-IQ people" and can share their strategies and contacts with the younger members.)

If this is true (which I have no reason to doubt), then not allowing high-IQ children to associate with other high-IQ children is child abuse. It is like sending someone on a train that is predictably going to crash. I will charitably assume that most people participating in this form of child abuse are actually not aware of what they are doing, so I don't blame them morally... at least until the moment when someone tries to explain to them what the actual consequences of their actions are, and they just stick fingers in their ears and start singing "la la la, I don't hear you, elitism is always bad".

But perhaps a more important thing is this: the usual "compromise" solution of accepting that some children indeed are smarter than others, but solving it by simply having them skip a grade (that is, instead of the company of children with similar age and IQ, you give them the company of older children with average IQ, so that the "mental age" is kinda balanced), is just a short-term fix that fails in the long term. Yes, you are providing the children an appropriately mentally challenging environment, which is good. But what you don't give them is the opportunity to learn the coping skills that seem necessary for high-IQ people. So when they reach the stage where there is simply no value X such that an average X-year-old person has the same mental level as the gifted person does now, the whole strategy collapses. (But by that time the gifted person is usually an adult, so everyone just shrugs and says "whatever, it's their fault". Ignoring the fact that society spent the whole previous time teaching them a coping strategy that predictably fails afterwards.)

So, I believe that for a high-IQ person there is simply no substitute for the company of intellectual peers; even older children will not do, because that is a strategy that predictably fails when the person reaches adulthood. Some kids are lucky, because their parents are high-IQ, because their parents have high-IQ friends, who probably have high-IQ children, so through this social network they can connect with intellectual peers. But high-IQ kids growing up without this kind of network... need a lot of luck.

Comment author: Lumifer 30 January 2017 04:24:37PM 2 points [-]

If this is true (which I have no reason do doubt), then not allowing high-IQ children to associate with other high-IQ children is child abuse.

You do understand that "true" here means "we built a model where the coefficient for a factor we describe as 'connectedness' is statistically significant", right? I don't think throwing around terms like "child abuse" is helpful.

Also, do you think the existence of the internet helps with the problem you describe?

Comment author: Viliam 30 January 2017 05:51:39PM *  1 point [-]

I don't think throwing around terms like "child abuse" is helpful.

Yeah, it's probably a strategic mistake to tell people plainly that they are doing a horrible thing. It will most likely trigger a chain of "I am a good person... therefore I couldn't possibly do a horrible thing... therefore what this person is telling me must be wrong", which prevents or delays the solution of the problem. Whether you discuss social deprivation of high-IQ children, or circumcision, or religious education, or whatever, you have to remember that people mostly want to maintain the model of the world where they are the good ones who never did anything wrong, even if it means ignoring all evidence to the contrary. Especially if their opinion happens to be a majority opinion.

It's just that on LW I try to tell it how I see it, ignoring the strategical concerns. As an estimate, let's say that if normal child development is 0 units of abuse, and feral children are 100 units of abuse, then depriving a high-IQ child of contact with other high-IQ children is about 1 unit of abuse. (I specifically chose feral children, because I believe it is an abuse of a similar kind, just of much smaller magnitude.) Sure, compared with many horrors that children sometimes suffer, this is relatively minor. However, people who systematically harm thousands of children in this way are guilty of child abuse. I mean those who campaign against the existence of high-IQ schools, or even make laws against them. (As an estimate, I would guess that at least one in a hundred such children will commit suicide or at least seriously consider it, as a result of the social deprivation.)

Also, do you think the existence of the internet helps with the problem you describe?

I think it helps, but not sufficiently. I may be generalizing from one example here, but internet connection simply doesn't have the same quality as personal interaction. That's why e.g. some people attend LW meetups instead of merely posting here. And generally, why people still bother meeting in person, despite almost everyone (in developed countries) having an internet connection. -- As a high IQ child, you may find other high IQ children on the internet, but unless there is a specialized school for such children, you still have to spend most of your time without them.

Another problem is that the topic is mostly taboo, so even with the internet you may simply not know what to look for. To explain, I will use homosexuality as an analogy (obviously, it is not the same situation, I just find this one aspect similar) -- if you know that homosexuality is a thing, and if you know that you happen to be gay, then you can just open google and look for the nearest gay club, and if you are lucky, then something like that exists in your town. But imagine that you happen to be gay, but you never heard about homosexuality as a concept. No one ever mentioned it; you are not aware that it exists, or that other people may be gay, too. All you know is that other boys of your age seem to be interested in girls, and you don't quite understand why. So, you open google and... what? "Why am I not so interested in girls?" But in a society where this topic is taboo, the first 100 pages of google results will probably contain information that for different boys attraction to girls develops at a different age, or that maybe this is a sign from God to become a celibate priest, or that you should lower your unrealistic standards for female beauty, or that you need to learn more social skills and go out more and meet more girls until you find one that attracts you... some people will say there is something wrong with you, some people will say everything is perfectly okay and all problems will solve themselves in time, but no one will even hint that maybe your lack of attraction to girls is because you are gay, which is a fact of life, and that a solution for such people is to find the company of other gay people.

Analogously, if you have a very high IQ, and your problem is that there are no people with sufficiently high IQ around you, but you are not aware of the fact that a very high IQ is a thing, what will you write in google? "How not to feel alone?" "How to communicate with people?" "How to feel understood?" And you will get advice like "when interacting with people, you should talk less, and listen more", which is all perfectly true and useful, but all it does is help you connect to average people on their level, which is not the thing you are starving for. (It's like a recipe for how a gay man can maintain an erection while having sex with a girl. It may be technically perfect advice, it may even work; it's just not the thing he truly desires. Similarly, the high-IQ person may learn to be able to maintain a conversation with an average person, talking on the average person's terms; it's just not fulfilling their deep intellectual desires.) Some of the advice will tell you the problem is in you: you don't have enough social skills, you are too proud, you have unrealistic expectations of human interaction; other advice will tell you to calm down because everything is going to magically become okay as soon as you stop worrying. But if you happen to be a high-IQ person, the advice you need is probably "you feel different from other people because you are different, but don't worry, there is a 'high IQ club' down the street, you may find similar people there". (Which is what Mensa tries to be. Except that Mensa is for people with IQ 130, so if you happen to have IQ 160, you will feel just as lonely in Mensa as an average Mensa member feels among the normies.)

So, analogously to gays, we need to make it generally known that "having a high IQ" is a thing, and that "meeting other people with a similar level of IQ" is the only solution that actually works. And then, people will know what to type in google. And then, having the internet will solve this problem. But most people still have beliefs that are analogous to "homosexuality is a sin, it is unnatural, it shouldn't be encouraged, it corrupts the youth, it will make God send floods on us, you just have to pray and become hetero"; except that they say "IQ is a myth, it is an unhealthy elitism, there are multiple intelligences and everyone has one, IQ doesn't mean anything because EQ is everything and you can increase your EQ by reading Dale Carnegie, and if you believe in IQ you will develop a fixed mindset and fail at life". And you may start believing it, until you happen to stumble upon e.g. a LW meetup and experience your best intellectual orgasm in life, and suddenly you easily click with people and develop fulfilling relationships.

(Where the analogy with gays fails is that people usually don't create fake gay clubs full of hetero people, but there are groups of people who believe themselves to be smart even when they are not. So a person googling for something like a high IQ club may still be disappointed with the results.)

Comment author: Lumifer 30 January 2017 07:36:11PM *  3 points [-]

ignoring the strategical concerns

I don't think the problem is strategic concerns, I think the problem is connotations.

The connotations for child abuse are "call the cops and/or child protective services which will take the child away from the parents and place him/her into foster care" and "put the parents in jail and throw away the key". Child abuse is not merely bad parenting.

you may simply not know what to look for

What do you mean? Finding your tribe / peer group isn't a matter of plopping the right search terms into Google. I think it mostly works through following the connections from things and people you find on the 'net. If you consistently look for "smarter" and follow the paths to "more smarter" X-), you'll end up in the right area.

internet connection simply doesn't have the same quality as personal interaction

Well, of course. But imagine things before the internet :-/

Comment author: gjm 27 January 2017 04:06:44PM 0 points [-]

I don't think the concern would be that "they could take over the whole society". It would be more that smart people (more accurately: people in various groups that correlate with smartness, and perhaps more strongly with schools' estimates of pupil-smartness) already have some tendency to interact only with one another, and segregating schools would increase that tendency, and that might be bad for social cohesion and even stability (because e.g. those smart people will include most of the politicians, and the less aware they are of what Ordinary People want the more likely they are to seem out of touch and lead to populist smash-everything moves).

Comment author: Lumifer 27 January 2017 05:25:15PM 1 point [-]

that might be bad for social cohesion and even stability

This is a complicated argument. Are you basically saying that it's "good" (we'll leave aside figuring out what it means for a second) for people to be tribal at the nation-state level but bad for them to be tribal at more granular levels?

For most cohesion you want a very homogeneous population (see e.g. Iceland). Technically speaking, any diversity reduces social cohesion and diversity in IQ is just one example of that. If you're worried about cohesion and stability, any diversity is "bad" and you want to discourage tribes at the sub-nation levels.

The obvious counterpoint is that diversity has advantages. Homogeneity has well-known downsides, so you're in effect trading off diversity against stability. That topic, of course, gets us right into a political minefield :-/

Comment author: gjm 27 January 2017 06:06:52PM 0 points [-]

Are you basically saying [...]

Just to clarify, I am describing rather than making arguments. As I said upthread, I am not claiming that they are actually good arguments nor endorsing the conclusion they (by construction) point towards. With that out of the way:

that it's "good" [...] for people to be tribal at the nation-state level but bad for them to be tribal at more granular levels?

The argument doesn't have anything to say about what should happen at the nation-state level. I guess most people do endorse tribalism at the nation-state level, though.

For most cohesion you want a very homogeneous population [...] any diversity reduces social cohesion

If you have a more or less fixed national population (in fact, what we have that's relevant here is a more or less fixed population at a level somewhere below the national; whatever scale our postulated school segregation happens at) then you don't get to choose the diversity at that scale. At smaller scales you can make less-diverse and therefore possibly more-cohesive subpopulations, at the likely cost of increased tension between the groups.

(I think we are more or less saying the same thing here.)

The obvious counterpoint is that diversity has advantages.

Yes. (We were asked for arguments against segregation by ability, so I listed some. Many of them have more or less obvious counterarguments.)

Comment author: Lumifer 27 January 2017 08:23:55PM *  1 point [-]

The argument doesn't have anything to say about what should happen at the nation-state level.

Concerns about social cohesion and stability are mostly relevant at the nation-state level. This is so because at sub-state levels the exit option is generally available and is viable. At the state level, not so much.

In plain words, it's much easier to move out if your town loses cohesion and stability than if your country does.

you don't get to choose the diversity

You don't get to choose the diversity, but you can incentivise or disincentivise the differentiation with long-term consequences. For an example, look at what happened to, say, people who immigrated to the US in the first half of the XX century. They started with a lot of diversity but because the general trend was towards homogenisation, that diversity lessened considerably.

Comment author: Viliam 27 January 2017 05:10:00PM 0 points [-]

This again depends a lot on the specific IQ values. There are probably many politicians around the Mensa level, but I would suspect that there are not so many above circa IQ 150, simply because of the low base rate... and maybe even because they might have a communication problem when talking to an average voter, so if they want to influence politics, it would make more sense for them to start a think tank, or become advisors, so they don't have to compete for the average Joe's vote directly.

Comment author: tut 24 January 2017 10:00:33PM 1 point [-]

Has the password changed on the username2 account?

Comment author: username2 25 January 2017 03:49:40PM 0 points [-]

No

Comment author: niceguyanon 24 January 2017 06:21:30PM 1 point [-]

Suggestion to sticky the welcome thread. Stickying the welcome thread to the sidebar would encourage participation/comments/content. And perhaps in the future add emphasis on communication norms to the thread, specifically that negative reception and/or lack of reception is more obvious on LessWrong – so have thick skin and do not take it personally. I'd imagine that quality control will be what it has always been: critical comments.

Comment author: [deleted] 23 January 2017 10:30:58PM 1 point [-]

Thoughts on punching Nazis? I can't really wrap my head around why there are so many people who think it's 100% OK to punch Nazis. Not sure if discussion about this has happened elsewhere (if so, please direct me!). For the purposes of this discussion let's ignore whether or not the alt-right counts as Nazism and speak only about a hypothetical Nazi ideological group.

I understand to some extent the argument that reasonable discussion with Nazis is almost certainly futile and that they are perhaps a danger to others. However, my main concerns with punching Nazis are: 1) It promotes violence as an acceptable means of dealing with disagreement. 2) It doesn't accomplish much (though the hypothetical Nazi in question has said that he is more afraid of going outside, so I suppose it's accomplished at least fear, which may be a pro or con depending on your point of view; besides that, however, I don't think it has hindered Nazis very much, and it has only worsened the image of the anti-Nazis).

Comment author: drethelin 23 January 2017 11:29:14PM 5 points [-]

I think a lot of people's intuitive moral framework relies on the idea of the Outlaw. Traditionally an Outlaw is someone who has zero rights or legal protection accorded them by society: it's legal to steal from them, beat them, or kill them. This was used as punishment in a variety of older societies, but has mostly fallen out of favor. However, a lot of people still seem to think of transgressors as moral non-patients, and are happy to see them receive any amount of punishment. Similar to how people think criminals deserve to be raped in prison, people think Nazis deserve whatever happens to them. This is counter to our judicial system and the happy functioning of civilization, but I don't think most people are susceptible to reasoned arguments when they're in a heightened emotional state.

Comment author: Viliam 24 January 2017 03:18:54PM *  4 points [-]

Thoughts on punching nazis?

Step 1: Make a good argument for why punching Nazis is okay.
Step 2: Call everyone you don't like a Nazi.
Step 3: Keep punching.

The steelman version of "punching Nazis is okay" is that one should not bring verbal arguments into a punching fight. That is, we assume that the Nazis are there to punch you, and if you prepare for verbal fight, well, don't expect to return home unharmed.

But this makes an assumption about your opponent, and typically, mindkilled people make a lot of wrong assumptions, especially about their opponents.

Comment author: Good_Burning_Plastic 24 January 2017 04:25:39PM 1 point [-]

The steelman version of "punching Nazis is okay" is that one should not bring verbal arguments into a punching fight. That is, we assume that the Nazis are there to punch you, and if you prepare for verbal fight, well, don't expect to return home unharmed.

But that guy didn't just "not bring verbal arguments into a punching fight", he brought a punch into a verbal argument.

Comment author: Viliam 24 January 2017 04:27:26PM *  0 points [-]

I am not familiar with the specific case, my answer was meant in general.

EDIT: I think it was historically the situation that when Nazis (the real ones, i.e. the NSDAP Nazis, not the "people who disagree with me" Nazis) were losing a political debate, they often changed the rules of the game and attacked their opponents physically ("pigeon chess"). Which is why everyone else developed a rule: "if Nazis invite you to a debate, you don't participate (or you come ready to throw punches, if necessary)". No idea whether this is a fact or a myth.

Comment author: satt 29 January 2017 05:20:42PM 1 point [-]

Not sure if discussion about this has happened elsewhere (if so please direct me!)

https://www.google.com/search?q=site:twitter.com+is+it+ok+to+punch+nazis

Comment author: username2 23 January 2017 11:50:53PM 1 point [-]

I think that people punching other people is the default behavior, and it takes conscious effort to control yourself when you are angry at someone. E.g. drunk people who lose their inhibitions often get involved in fights. And people who are angry rejoice at any opportunity to let their inner animal out, feel the rush of adrenaline that comes with losing their inhibitions, and not have to think about consequences or social condemnation.

2) It doesn't accomplish much (though the hypothetical Nazi in question has said that he is more afraid of going outside, so I suppose it's accomplished at least fear which may be a pro or con depending on your point of view, besides that however I don't think it's hindered Nazis very much and has only worsened the image of the anti-Nazis)

People like the strong and dislike the weak. If Nazis got punched all the time, they would be perceived as weak and nobody would join them. Even if people didn't like the punching, most likely they would simply be bystanders.

Comment author: waveman 24 January 2017 12:32:33AM 3 points [-]

drunk people who lost their inhibitions often get involved in fights

Even here there may be a cultural element. I notice in Japan when I was there, men would be totally drunk without a hint of violence. In some cultures being drunk provides permission to be violent, similar perhaps to the way that men are 'permitted' to hug one another after scoring a goal on the playing field.

Comment author: plethora 24 January 2017 04:54:19PM 0 points [-]

If Nazis got punched all the time, they would be perceived as weak and nobody would join them.

Two thousand years ago, some guy in the Roman Empire got nailed to a piece of wood and left to die. How did that turn out?

Comment author: Lumifer 24 January 2017 05:11:03PM 0 points [-]

Quod licet Iovi, non licet bovi. ("What is permitted to Jupiter is not permitted to an ox.")

Comment author: g_pepper 24 January 2017 01:22:35AM 0 points [-]

FWIW, Reason magazine condemned the punching.

Comment author: ChristianKl 28 January 2017 09:54:10AM 0 points [-]

Economic growth basically means that workers get more productive: fewer hours of work yield more output. GDP growth is not really possible without making workers more efficient.

It's interesting how in the last few years the old Luddite arguments have been revived. The idea that automation means that there won't be any jobs anymore gets more and more popular.

Comment author: Good_Burning_Plastic 30 January 2017 09:17:40AM 0 points [-]

Economic growth basically means that workers get more productive: fewer hours of work yield more output. GDP growth is not really possible without making workers more efficient.

In principle it is possible for GDP to grow even if productivity per hour stays constant provided the number of hours worked goes up. I've heard that's an important factor to consider when comparing the GDPs of France and the US, so it's not that unlikely it also is when comparing the GDP of a country in year X and that of the same country in year X+10. (But of course such a thing couldn't go on arbitrarily far because there are only so many hours in a day.)
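As a toy illustration of that decomposition (all numbers below are hypothetical), GDP can be written as hours worked times output per hour, so either factor can drive measured growth:

```python
# GDP decomposed as hours worked x output per hour (illustrative numbers only).
def gdp(hours_worked, output_per_hour):
    return hours_worked * output_per_hour

base = gdp(1000, 50.0)             # 50,000
more_hours = gdp(1100, 50.0)       # 55,000: 10% growth with flat productivity
more_productive = gdp(1000, 55.0)  # 55,000: the same growth from productivity alone

print(base, more_hours, more_productive)
```

The two growth paths are indistinguishable in the headline GDP number, which is why cross-country comparisons (like the France/US case above) have to look at hours worked separately.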

Comment author: Viliam 30 January 2017 01:39:33PM 0 points [-]

When comparing across countries, I wouldn't be surprised if different countries had different methodologies for calculating GDP. The differences don't have to be obvious at first sight. For example, both countries may agree that GDP = X + Y + Z, but there may be a huge difference in how exactly they calculate X, Y, and Z. Also, the gray economy may or may not be included, and may be estimated incorrectly.

(Sometimes such changes are done for obvious political reasons; for example, in my country a government once reduced unemployment by simply changing the definition of how unemployment is calculated. Another example of how the same word can correspond to different things is how countries count tourism: in some countries a "tourist" is any foreigner who comes for a non-work visit, in other countries only those who stay at a hotel are counted.)

Comment author: MrMind 30 January 2017 08:37:37AM 0 points [-]

Economic growth basically means that workers get more productive.

Is that the best way to slice the problem? It doesn't seem to cover well instances where new resources are discovered, or new services offered, or production processes improved to deliver a higher added value.

The idea that automation means that there won't be any jobs anymore gets more and more popular.

Well, I think the main worry is that there won't be any more jobs for humans.

Comment author: ChristianKl 30 January 2017 12:18:42PM 0 points [-]

Well, I think the main worry is that there won't be any more jobs for humans.

There are plenty of people who want to have more stuff. I don't think that the constraint for building more stuff or providing more services is that we don't have enough raw materials.

Comment author: MrMind 30 January 2017 04:06:14PM 0 points [-]

I'm not sure I'm following the analogy. If robots replace humans, we will have an increase in things to buy due to increased efficiency, but a lot more people will become poorer due to a lack of employment. If no other factor is involved, what you'll see is at least an increase in the inequality of wealth distribution between those who have been replaced and those who own the replacements, proportional to the level of sophistication of the AI in question.

Comment author: ChristianKl 31 January 2017 03:32:36PM 0 points [-]

People get employed when their work allows an employer to create more value (that a customer will pay for) than their wage costs.

Robots need to be designed, built, trained and repaired.

When it comes to wealth inequality, that's partly true. Automation has the potential to create a lot of inequality because skill differences lead to stronger outcome differences.

Comment author: MrMind 01 February 2017 09:20:36AM *  0 points [-]

The robotic revolution, and possibly the next AI revolution, means that the source of labor can be shifted from people to robots.
In the usual production model, output = f(capital) x g(labor), labor means exclusively human labor. But in the near future, labor may mean robot labor, which can be acquired and owned, thus becoming part of the means of production accessible to capital. In a sense, if AI takes hold in industry, labor will be a function of capital, and the equation will be transformed into output = h(capital). Depending on h, of course, you will have more or less convenience (humans require training and repairing too).
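A minimal sketch of that transformation, using a toy Cobb-Douglas form with made-up parameters (the function names, the robot-labor share, and the exponent are all purely illustrative assumptions, not anyone's actual model): if robot labor can be purchased with capital, g(labor) becomes a function of capital, and output collapses into h(capital):

```python
# Toy Cobb-Douglas production function: output = K^alpha * L^(1-alpha).
def output(capital, labor, alpha=0.5):
    return (capital ** alpha) * (labor ** (1 - alpha))

# If a fixed share of capital is spent acquiring robot labor, labor becomes
# a function of capital, and output = h(capital) with no separate labor input.
def output_capital_only(capital, robot_share=0.3, alpha=0.5):
    productive_capital = (1 - robot_share) * capital
    robot_labor = robot_share * capital  # labor is now L(K)
    return output(productive_capital, robot_labor, alpha)
```

How attractive this is depends entirely on the shape of h — i.e., on how the cost of buying and maintaining robot labor compares with human wages.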

Comment author: ChristianKl 01 February 2017 03:22:53PM 0 points [-]

Before AGI there are many tasks that humans can do but that robots/AI can't. It's possible to build a lot of robots if robots are useful.

That kind of work is likely the constraint on producing more stuff. I don't think the constraint will be resources. The number of robots is also unlikely to be the constraint, as you can easily build more robots.

Comment author: Good_Burning_Plastic 30 January 2017 09:21:13AM 0 points [-]

or production processes improved to deliver a higher added value.

That does count as workers getting more productive by the standard definition of the term as usually used e.g. in discussions of Baumol's cost disease.

Comment author: MrMind 30 January 2017 10:53:23AM 0 points [-]

I'm confused.
If productivity is units / labor, then switching to another production line which delivers the same quantity of items that sell for a higher price should increase GDP without increasing productivity.
Reading a couple of papers about Baumol's disease seems to agree with the definition of productivity as output per unit of labor: the labor cost increases while production stays the same, so the price rises without an increase in efficiency.
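A toy numeric version of the two readings (prices and quantities are hypothetical): physical output per hour is unchanged, yet value added per hour — the measure economists usually mean by "productivity" — rises with the selling price:

```python
# Same physical output, different selling prices (illustrative numbers only).
units, hours = 1000, 100

physical_productivity = units / hours       # 10 units/hour in both cases

gdp_old = units * 20                        # old line sells at $20 -> 20,000
gdp_new = units * 25                        # new line sells at $25 -> 25,000

value_productivity_old = gdp_old / hours    # $200 of output per hour
value_productivity_new = gdp_new / hours    # $250 of output per hour
```

So whether "productivity increased" depends on whether you count units per hour or dollars of value per hour.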

Comment author: Gram_Stone 24 January 2017 02:46:05AM 0 points [-]

Does anyone have an electronic copy of the Oxford Handbook of Metamemory that they're willing to share?

Comment author: whpearson 23 January 2017 02:59:00PM 0 points [-]

Are there any forums explicitly about how to think about and act to best make humanity survive its future?

Comment author: WalterL 23 January 2017 05:03:37PM 1 point [-]

Our consensus is pretty unalterably "Build an AI God".

Comment author: Lumifer 23 January 2017 05:36:27PM 6 points [-]

Our consensus is pretty unalterably "Build an AI God".

Kinda. The LW's position is "We will make a God, how do we make sure He likes us?"

Comment author: WalterL 23 January 2017 07:33:08PM 1 point [-]

I lounge corrected. What Lum said is right.

Comment author: WhySpace 23 January 2017 05:16:54PM 0 points [-]

I checked their karma before replying, so I could tailor my answer to them if they were new. They have 1350 karma though, so I assume they are already familiar with us.

Same likely goes for the existential risk segment of EA. These are the only such discussion forums I'm aware of, but neither is x-risk only.

Comment author: whpearson 24 January 2017 12:43:58AM 0 points [-]

I'm a cryonaut from a few years back. I had deep philosophical differences with most of the arguments for AI Gods, which you may be able to determine from some of my recent discussions. I still think it's not completely crazy to try to create a beneficial AI God (taking into consideration my fallible hardware and all), but I put a lot more weight on futures where the future of intelligence is very important, but not as potent as a god.

Thanks for your pointers towards the EA segment, I wasn't aware that there was a segment.

Comment author: WhySpace 24 January 2017 06:54:02PM *  1 point [-]

In that case, let me give a quick summary of what I know of that segment of effective altruism.

For context, there are basically 4 clusters. While many/most people concentrate on traditional human charities, some people think animal suffering matters more than 1/100th as much as human suffering, and so think animal charities are therefore more cost-effective. Those are the first 2 clusters of ideas.

Then you have people who think that movement growth is more important, since organizations like Raising for Effective Giving have so far been able to move something like $3 per year (I forget the exact figure) to effective charities for each dollar donated to them that year. Other organizations may have an even higher multiplier, but this is fairly controversial, because it's difficult to measure future impact empirically, and it risks turning EA into a self-promoting machine which achieves nothing.

The 4th category is basically weird future stuff. Mostly this is for people who think humans going extinct would be significantly worse than a mere 7 billion deaths would be. However, it's not exclusively focused on existential risk. Unfortunately, we have no good way of even evaluating how effective various anti-nuclear efforts are at actually reducing existential risk, and it's even worse for efforts against prospective future technologies like AI. The best we can do is measure indirect effects. So the entire category is fairly controversial.

I would further divide the "weird future stuff" category into Global Catastrophic Risk/x-risk and non-GCR/x-risk stuff. For example, Brian Tomasik has coined the term s-risk for risks of astronomical future suffering. He makes a strong case for wild animals experiencing more net suffering than happiness, and so thinks that even without human extinction the next billion years are likely to be filled with astronomical amounts of animal suffering.

Within the GCR/x-risk half of the "weird future stuff" category, there appear to be maybe 4 or 5 causes I'm aware of. Nuclear war is the obvious one, along with climate change. I think most EAs tend to think climate change is important, but just not tractable enough to be a cost-effective use of resources. The risk of another 1918 Flu pandemic, or of an engineered pandemic, comes up occasionally, especially in relation to the new CRISPR gene editing technology. AI is a big concern too, but more controversial, since it is more speculative. I'm not sure I've ever seen asteroid impacts or nanotechnology floated as a cost-effective means of reducing x-risk, but I don't follow that closely, so perhaps there is some good discussion I've missed.

Much or most of the effort I've seen is to better understand the risks, so that we can better allocate resources in the future. Here are some organizations I know of which study existential risk, or are working to reduce it:

  • The Future of Humanity Institute is at Oxford and is led by Nick Bostrom. They primarily do scholarly research, and focus a good chunk of their attention on AI. There are now more academic papers published on human extinction than there are on dung beetles, largely due to their efforts to lead the charge.

  • Center for the Study of Existential Risk is out of Cambridge. I don't know much about them, but they seem to be quite similar to FHI.

  • Future of Life Institute was founded by a bunch of people from MIT, but I don't believe there is any official tie. They fund research too, but they seem to have a larger body of work directed at the general public. They give grants to researchers, and publish articles on a range of existential risks.

Perhaps there are discussion forums associated with these groups, but I'm unaware of them. There are a bunch of EA facebook groups, but they are mostly regional groups as far as I know. However, the EA forum and here are the closest things I know to what you're after.

Comment author: morganism 27 January 2017 10:29:02PM 0 points [-]

B612 Foundation is working on impact risks by trying to get some IR cameras out to L2, L3 at least, and hopefully at L5. And Planetary Resources say that objects found with their IR cameras for mining will go into the PDSS database.

Comment author: whpearson 24 January 2017 07:48:46PM 0 points [-]

Thanks! I'll get in touch with the EA community in a bit. I've got practical work to finish and I find forums too engaging.

Comment author: username2 23 January 2017 05:16:39PM 0 points [-]

That is a contentious view.

Comment author: CellBioGuy 24 January 2017 11:47:06PM 0 points [-]

To say the least.

Comment author: whpearson 23 January 2017 05:08:12PM 0 points [-]

unalterably

Not very empiricist/bayesian of you? ;)

What is the backup plan? For if that doesn't work.

Comment author: Gunnar_Zarncke 24 January 2017 02:56:32PM 0 points [-]

There are quite a few places where you can go, e.g. google "existential risk".