To arms, my brothers and sisters! This could be a highly consequential opportunity for those of us with concerns but without the connections or finesse to approach them independently with those concerns! Upvote critical questions!
Here is a thread from LessWrong/Reddit user 'jimrandomh' asking questions which will be most pertinent to the LessWrong userbase, and supporters of MIRI.
Here is a long thread where Eliezer himself jumps in and has an interesting conversation, attempting to bridge gaps of understanding between other machine learning (ML) researchers' perception of MIRI's goals and concerns regarding AI safety, and what those goals and concerns actually are.
Thank you.
I could ban that account. Thoughts about whether that would be a good idea?
+1 to the anon who used the "username2" account to post the parent comment. IIRC, there was an original "username" account, the use of which was collectively discontinued several months ago because its value to the community was co-opted by abusive anonymous users logging into it. I would hate to see that happen a second time, to "username2". If it did happen a second time, that would disincentivize necessary, honest, and respectful use of the anonymous account. If that became the norm, even if there were a "username3", the perception might become that there is no general anonymous account on LessWrong for users. If people stopped trusting the resource completely, that would be sad. It might also send the signal to disrespectful anonymous users that they can make or use as many anonymous accounts as they want, and that the moderators will do little or nothing to stop them. This last point strikes me as unlikely, though, especially if LW mods have the power to block specific IP addresses.
Religion solves some coordination problems very well. Witness religions outlasting numerous political and philosophical movements, often through coordinated effort. Some wrong beliefs assuage bad emotions and thoughts, allowing humans to internally deal with the world beyond the reach of god. Some of the same wrong beliefs also hurt and kill a shitload of people, directly and indirectly.
My personal belief is that religions were probably necessary for humanity to rise from agricultural to technological societies, and are tentatively necessary to maintain technological societies until FAI, especially in a long-takeoff scenario. We have limited evidence that religion-free or wrong-belief-free societies can flourish. Most first-world nations are officially and practically agnostic but have sizable populations of religious people. The nations which are actively anti-religious generally have their own strong dogmatic anti-scientific beliefs that the leaders are trying to push, and they still can't stamp out all religions.
Basically, until doctors can defeat virtually all illness and death, and leaders can effectively coordinate global humane outcomes without religions, I think that religions serve as a sanity line above destructive hedonism or despair.
What about humans or religions makes religions necessary for humanity to rise from agricultural to technological societies? While it isn't 'high-tech', and came long before the scientific revolution, what makes an agricultural society not also a technological one, insofar as agriculture might be considered closer to a technological society than to a purely hunter-gatherer one?
By "technological society", do you mean "industrial society"? If not, where is the line between a technological society and an agricultural one in your mind? How have you determined that an agricultural society is closer to a hunter-gatherer society than to a technological one? Do you have a reason for expecting religion is necessary to raise humans from agriculture to more technological societies, but not from a state of tribal hunter-gatherers to agricultural city-states?
I will run it if no one else has piped up. Scheduled for Feb if no one else takes it on. I will post asking for questions in a month.
I was going to email Scott, i.e., Yvain, to ask if he needed any help, and/or if he was planning on running it at all. I never bothered to carry that out. I will email him now, though, unless you've confirmed he won't be doing it for 2015, or early 2016. Anyway, if you'll ultimately be doing it, let me know if you need help, and I can pitch in.
Notes
1 Based on my current impressions of changing and growing cause selections within effective altruism, here are some other causes I believe it would've been useful to include this year, and perhaps can be added to next year's survey.
- Animal welfare (industrial farming)
- Animal welfare (other)
- Existential risks (biosecurity/biotechnology)
2 Based on my current impressions of the growing number of major effective altruism organizations, here are some other organizations I believe it would've been useful to include this year as sources for first hearing about effective altruism, and perhaps can be added to next year's survey.
- Charity Science
- Center For Applied Rationality
- Effective Altruism Foundation (formerly GBS Schweiz)
- The Gates Foundation
- Peter Singer's online Coursera course
- University or college course
3 Based on my current impressions of the growing number of major effective altruism organizations, and also their recommended charities, I expect it would make sense to include in the list of charities asked if donated to this year, and perhaps can be included next year:
- The full list of Givewell's recommended and standout charities
- The full list of ACE's recommended and standout charities
- The full list of TLYCS's and GWWC's recommended charities
- Charity Science
- The Effective Altruism Foundation and its affiliate organizations
- The Centre for Effective Altruism and its affiliate organizations
Really, of all the lists which don't have enough options this year, this one is the most lacking. Note: there has been rapid change in the effective altruism community as of late, with an unusually rapid expansion in 2015, relative to prior years, in the number of organizations recognized as effective. Thus, I consider it understandable that the team running the EA Survey this year hasn't been able to keep pace with the full and growing list of organizations favored by the broader effective altruism community.
4 For education levels, between "High school (and lower)" and "Undergraduate degree", I'd include "2-year college/Associate's degree" as another option.
5 For the question "What broad career are you planning to follow?", I believe it would've been useful to include the following options in this year's survey, and perhaps can be added to next year's survey:
- social entrepreneurship
- advocacy
- policy work
That's all. Otherwise, assume anything else I didn't comment on is free of any errors or conspicuous omissions (that I could notice). Great work!
I see that you can read my mind and my votes. Glad you have that ability.
It's part of the Dark Arts package. I'll dryly observe that I knew you downvoted him - how do you think I knew that you downvoted him? It's not like downvotes come with names attached. Yes, I can "read your mind", which is to say, I read the -massive- amounts of connotation information associated with otherwise bland text.
Can you please provide evidence of what I am thinking and how I am voting?
You, uh, admitted to it? "I thought his comments were not worth attention"
If it helps to know, the extra downvotes you're getting specifically in this thread, but not in other ones, are coming from me. This comment isn't meant as a glib statement to signal my affective disapproval. I just think this conversation is going nowhere, and that the quality of dialogue is getting worse. I'm downvoting these comments as I would any others. I'm commenting so you know why I'm downvoting, and don't cast aspersions at other users.
I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.
So here are some of the concerns I see; I've gone to some effort to be fair to Gleb, and not to assume anything about his thoughts or motivations:
- By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or putting it in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
- By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he's spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
- Gleb's writing style strikes me as feeling very inauthentic. Let me be clear that I don't mean to accuse him of anything negative, but I intuitively feel a very negative reaction to his writing. It triggers emotional signals in me of attempted deception and rhetorical tricks (whether or not this is his intent!). His writing risks associating "rationality" with such signals (should other people share my reactions), again causing immunization, or even catalyzing opposition.
An illustration of the nightmare scenario from such an outreach effort would be that, 3 years from now when I attempt to talk to someone about biases, they respond by saying "Oh god don't give me that '6 weird tips' bullshit about 'rational thinking', and spare me your godawful rhetoric, gtfo."
Like I said at the start, I don't know which way it swings, but those are my thoughts and concerns. I imagine they're not new concerns to Gleb. I still have these concerns after reading all of the mitigating argumentation he has offered so far, and I'm not sure of a good way to collect evidence about this besides running absurdly large long-term "consumer" studies.
I do imagine he plans to continue his efforts, and thus we'll find out eventually how this turns out.
This comment captures my intuitions well. Thanks for writing this. It's weird for me, because when I wear my effective altruism hat, I think what Gleb is doing is great: marketing effective altruism seems like it would only drive more donations to effective charities, while not depriving them of money or hurting their reputations if people become indifferent to the Intentional Insights project. This seems to be the consensus reaction to Gleb's work on the Effective Altruism Forum. Of course, effective altruism is sometimes concerned only with the object-level impact that's easy to measure, e.g., donations, rather than subtler effects down the pipe, like cumulatively changing how people think over the course of multiple years. Whether that's a good or ill effect is a judgment I'll leave to you.
On the other hand, when I put on my rationality community hat, I feel the same way about Gleb's work as you do. It's uncomfortable for me because I realize I have perhaps contradicting motivations in assessing Intentional Insights.
I found out about Omnilibrium a couple months ago, and I was thinking of joining in eventually. I was also thinking of telling some friends of mine about it, who might want to get in on it even more than I do. However, I've been thinking that if I told lots of people, or they themselves told lots of people, then suddenly Omnilibrium might get flooded with dozens of new users at once. I don't know how big that is compared to the whole community, but I was thinking Omnilibrium would be averse to growing too big, as well-kept gardens die by pacifism and all that. But then, Slate Star Codex linked to it a few weeks ago. So, that's plausibly hundreds of new users flooding it.
I'm wondering, how do the admins of Omnilibrium feel about this? Are you happy to have many new users? Are you upset SSC linked to Omnilibrium, bringing it to the attention of so many people who may not necessarily maintain the quality of discourse current users of Omnilibrium have gotten accustomed to?
As someone who voted yes, and currently seeing how the margin is 32 'yays' (52%) to 29 'nays' (48%), I don't think you should start this discussion simply because there is a majority in favour of a discussion thread on Trump. I mean, I wouldn't like to see 48% of users put off by this discussion. So, I think it's safe to say the discussion should really only start if you get a supermajority, something like two-thirds in favor of starting it. If that's not the case whenever you decide the poll is closed, I don't think it's worth the costs of hosting the discussion here.
I thus agree with ChristianKl that the discussion should move to Omnilibrium.