You wrote a low quality summary of a low quality secondary-source video of no particular importance by a talking head whose expertise has little to do with AI (nor is regarded as such like a Gary Marcus)
You're right that I was probably exaggerating when I said it was the best effort I could provide. It was more that I expected it would be considered a basic, accurate summary I could generate in a short period of time.
...low quality secondary-source video of no particular importance by a talking head whose expertise has little to do with AI (nor is reg
I appreciate that, though it seems my post has been downvoted even more since yesterday. I wouldn't mind as much, except nobody has explained why, even after I bothered putting in the effort. Maybe it's because of the clickbait-y title, or because it's a YouTube video meant to convey important info about AI to, like, normies in a mainstream way, and is therefore assumed to be of super low quality.
Yet that would be in spite of the facts that:
1. I clarified this is from a theoretical physicist and science communicator who's trying to inf...
I suspect part of it might just be a latent preference on LessWrong for the sort of lengthy blog posts in a style users are accustomed to, which is valid, combined with a tendency to presume that the same sort of info they like being exposed to, delivered in a different way, must be lower quality.
You wrote a low quality summary of a low quality secondary-source video of no particular importance by a talking head whose expertise has little to do with AI (nor is regarded as such like a Gary Marcus), about events described more informatively in other secondary so...
I've now summarized those details as they were presented in the video. 'Staying more grounded in how bad it is' with more precision would require you, or whoever else, to learn more about these developments from the respective companies on your own, though the summaries I've now provided can hopefully serve as a starting point for doing so.
Do you mean Evan Hubinger, Evan R. Murphy, or a different Evan? (I would be surprised and humbled if it were me, though my priors on that are low.)
How do you square encouraging others to weigh in on EA fundraising, and presumably the assumption that anyone in the EA community can trust you as a collaborator of any sort, with your intentions, as you put it in July, to probably seek to shut down at some point in the future?
The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the research themselves, like a project manager or a research assistant.
Nothing in the document should necessarily be taken as representative of Google, or any particular department, though the value of any insights drawn from the document could vary based on what AI research project(s)/department the authors of the document work on/in. This document ...
Thanks for making this comment. I had a similar comment in mind. You're right that nobody should assume any statements in this document represent the viewpoint of Google, or any of its subsidiaries, like DeepMind, or any department therein. Neither should it be assumed that the researcher(s) who authored or leaked this document are department or project leads. The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the ...
Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it.
The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response.
Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime.
I've edited the post so that "resentment from rationalists elsewhere to the Bay Area community" now reads "resentment from rationalists elsewhere toward the Bay Area community," because that seems to reduce the ambiguity some. My use of the word 'resentment' was intentional.
Thanks for catching those. The word 'is' was missing. The word "idea" was meant to be "ideal." I've made the changes.
I'm thinking of asking as another question post, or at least a post seeking feedback more than trying to stake a strong claim. Provoking debate for the sake of it would hinder that goal, so I'd try to write any post in a way that avoids it. Those filters applied to any post I might write wouldn't hinder any kind of feedback I'd seek. The social barriers to posting raised by others with the concerns you expressed seem high enough that I'm unsure I'll post it after all.
This is a concern I take seriously. While it is possible that increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good.
Another consideration is that it may be a risk for long-termists not to pursue new ways of conveying the importance and challenge of ensuring human control of transformative AI. There is a certain principle of being cautious in EA. Yet in general we don't self-reflect enough to notice when being cautious by default is irrational on the margin.
Recognizing the ...
I'm aware it's a rather narrow range of ideas, but a set of a few standard options that most people adhere to is how it's represented in popular discourse, which is what I was going off of as a starting point. It has been established in other comments on my post that that isn't what to go off of. I've also mentioned that being exposed to ideas I may not have thought of myself is part of why I want to have an open discussion on LW. My goal has been to gauge whether that's a discussion any significant portion of the LW user base is indeed open to having. The best answer I've been able to surmise thus far is: "yes, if it's done right."
As to the question of whether I can hold myself to those standards and maintain them, I'll interpret the question not as rhetorical but as literal. My answer is: yes, I expect I would be able to hold myself to those standards and maintain them. I wouldn't have asked the original question in the first place if I thought there wasn't at least a significant chance I could. I'm aware that how I'm writing this may seem to betray gross overconfidence on my part.
I'll try here to convince you otherwise by providing context in terms of the perceived strawmanning of ...
I meant to include the hyperlink to the original source in my post but I forgot to, so thanks for catching that. I've now added it to the OP.
It seems like the kind of post I have in mind would be respected more if I'm willing and prepared to put in the effort of moderating the comments well too. I won't make such a post before I'm ready to commit the time and effort to doing so. Thank you for being so direct about why you suspect I'm wrong. Voluntary explanations for the crux of a disagreement or a perception of irrationality are not provided on LessWrong nearly often enough.
I am thinking of making a question post to ask because I expect there may be others who are able to address an issue related to legal access to abortion in a way that is actually good. I expect I might be able to write a post that wouldn't be considered to "suck," though it might only be so-so as opposed to unusually good.
My concern was that even by only asking a question, even one asked well in a way that frames responses to be better, I would still be downvoted. It seems like if I put serious effort into it, though, the question post would not be sup...
My impression has been that it's presumed, without much in the way of checking, that a position presented will have been adopted for bad epistemological reasons and that it has little to do with rationality. I'm not asking about subjects I want to frame, or would frame, as political. I'm asking whether there are some subjects that will be treated as though they are inherently political even when they are not.
It's not so much about moral intuitions to me as about rational arguments. That may not hold up if someone has some assumptions diametrically opposite to mine, like the unborn being sacred or otherwise special in some way that assigns a moral weight to them incomparably higher than the moral weight assigned to pregnant persons. That's something I'd be willing to write about if that itself is considered interesting. My intention is to ask what the best compromises are among the various positions being offered by the side of the debate opposite mine, so that's very different from perspectives unfit for LW.
I'm not an active rationalist anymore, but I've 'been around' for a decade. I still occasionally post on LessWrong because it's interesting or valuable enough for some subjects. That the rationality community functions the way you describe, and the norms that entails, is an example of why I don't participate in the rationality community as much anymore. Thank you, though, for the feedback.
This is great news! This could even be a topic for one of our meetups!
Thanks. Do you feel like you have a sense of what proportion of long-termists you know are forecasting that way? Or do you know of some way one might learn more about forecasts like this and the reasoning or models behind them?
I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities.
The same point was made on the Effective Altruism Forum and it's a considerable one. Yet I expected that.
The problem frustrating me is that the relative number of individuals who have volunteered their own numbers is so low it's an insignificant minority. One person doesn't disagree with themselves unless there is model uncertainty or whatever. Unless individ...
Upvoted. Thanks.
I'll state that, in my opinion, it shouldn't necessarily be the responsibility of MIRI or even Eliezer to clarify what was meant by a position that was stated but has been taken out of context. I'm not sure, but it seems as though at least a significant minority of those who've been alarmed by some of Eliezer's statements haven't read the full post to put it in a less dramatic context.
Yet errant signals seem important to rectify, as they make it harder for MIRI to coordinate with other actors in the field of AI alignment based on exis...
I don't know what "this" is referring to in your sentence.
I was referring to the fact that there are meta-jokes in the post about which parts are or are not jokes.
I want to push back a bit against a norm I think you're arguing for, along the lines of: we should impose much higher standards for sharing views that assert high p(doom), than for sharing views that assert low p(doom).
I'm sorry I didn't express myself more clearly. There shouldn't be a higher standard for sharing views that assert a high(er) probability of doom. That's not what I was argui...
The issue is that Eliezer appears to think, but without any follow-up, that most other approaches to AI alignment distinct from MIRI's, including ones that otherwise draw inspiration from the rationality community, will also fail to bear fruit. Like, the takeaway isn't that other alignment researchers should just give up, or just come work for MIRI... but then what is it?
From the AGI interventions discussion we posted in November (note that "miracle" here means "surprising positive model violation", not "positive event of negligible probability"):
...Anonym
Thank you for the detailed response. It helps significantly.
The parts of the post that are an April Fool's Joke, AFAIK, are the title of the post, and the answer to Q6. The answer to Q6 is a joke because it's sort-of-pretending the rest of the post is an April Fool's joke.
It shouldn't be surprising that others are confused if this is your best guess about what the post means altogether.
...believing p(doom) is high isn't a strategy, and adopting a specific mental framing device isn't really a "strategy" either). (I'm even more confused by how
Summary: The ambiguity as to how much of the above is a joke appears to give Eliezer or others plausible deniability about the seriousness of apparently extreme but little-backed claims being made. This comes after a lack of adequate handling, on the part of the relevant parties, of the impact of Eliezer's output in recent months on various communities, such as rationality and effective altruism. Virtually none of this has indicated what real, meaningful changes can be expected in MIRI's work. As MIRI's work depends in large part on the commu...
Here is an update on our efforts in Canada.
1. There are nearly five of us who would be willing to sponsor a refugee to settle in Canada (indefinitely, or for however long the war might last). There is a requisite amount of money that must be committed beforehand to cover at least a few months' worth of costs for settling and living in Canada. Determining whether 3 or more of us would be able to cover those costs appears to be the most significant remaining bottleneck before we decide whether to take this on.
2. There are two effective al...
Thanks for flagging all of that. I've made all of those edits.
That isn't something I had thought of, but it makes sense as the most significant reason that, at least so far, I hadn't considered.
I notice this post has only received downvotes other than the strong upvote it received by default from me as the original poster. My guess would be that this post has been downvoted because it's (perceived as):
That was not my intention. I'd like to know what other reasons there may be for why this post was downvoted, so please reply if you can think of any or you are one of the users who downvoted this post.
"AI alignment" is the term MIRI (among other actors in the field) ostensibly prefers for referring to the control problem, instead of "AI safety," to distinguish it from other AI-related ethics or security issues, because those other issues don't constitute x-risks. Of course, the extra jargon could be confusing for a large audience being exposed to AI safety and alignment concerns for the first time. When introducing the field to prospective entrants or students, keeping it simpler as you do may very easily be the better way to go.
Strongly upvoted. Thanks for your comprehensive review. This might be the best answer I've ever received for any question I've asked on LW.
In my opinion, given that these other actors who've adopted the term are arguably leaders in the field more than MIRI, it's valid for someone in the rationality community to claim it's in fact the preferred term. A more accurate statement would be:
Thanks for flagging this.
There are several signals the government might be trying to send that come to mind:
I previously wasn't as aware that this is a pattern in how so many people have experienced responses to criticism from Geoff and Leverage in the past.
Yeah, at this point, everyone coming together to sort this out, as a way of building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be a courageous thing to do, is the kind of thing I was getting at, and I think your comment also represents it.
For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.
Sorry, edited. I meant that it was a mistake for me to keep away before, not now.
(That said, this post is still quite safe. It's not like I have scandalous information; it's more that, technically, I (or others) could do more investigation to figure things out better.)
In the past, I've found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though when I have spoken up, I've done more than others. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up more often again with what sometimes amounted to nothing more than peer pressure.
That was a few years ago...
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance, or insurance, that someone will be protected against potential retaliation against their reputation. I can't claim to know much about setting up effective norms for defending whistleblowers, though.
I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post".
Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization that has been percolating for the last couple years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander f...
Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.
I think the important information here is how Geoff / Leverage Research handled similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.
Regarding problems related to pseudoscientific quacks and cranks, as one kind of example given, at this point it seems obvious that we need to take for granted that there will be causal factors that, absent effective interventions, will induce large sections of society to embrace pseudo-scientific conspiracy theories. In other words, we should assume that if there is another pandemic in a decade or two, there will be more conspiracy theories.
At that point in time, people will be wary of science again because they'll recall the conspiracies they believed i...
The fact that many scientists are awful communicators who are lousy at telling stories is not a point against them. It means that they were more interested in figuring out the truth than figuring out how to win popularity contests.
This implies to me that there is a market for science communicators who in their careers specialize in winning popularity contests but do so to spread the message of scientific consensus in a way optimized to combat the most dangerous pseudoscience and misinformation/disinformation. It seemed like the Skeptics movement was trying...
First, don't trust any source that consistently sides with one political party or one political ideology, because Politics is the Mind Killer.
One challenge with this is that it's harder to tell what the ideology in question is. If anti-vaxxers are pulled from among the populations of wingnuts on both the left and the right, I'm inclined to take lots of people whose views consistently side with one political party much more seriously not only on vaccines but on many other issues as well.
Meeting one million people is quantitatively difficult, e.g., in terms of the amount of time it takes to accomplish that feat, and its qualitative hardness makes it seem almost undoable, but to me it's more imaginable. I've worked in customer service and sales jobs in multiple industries.
I never kept count well enough to know if I ever met one hundred people in one day, but it could easily have been several dozen people every day. I wouldn't be surprised if someone working the till at a McDonald's in Manhattan met over one hundred people on some days. Mo...
One overlooked complication here is the extent to which honor is still socially constructed in particular circumstances. One helpful way to frame practical ethics is to distinguish between public and private morality. Almost nobody subscribes to a value system that exists in a vacuum independent of the at least somewhat subjective influence of their social environment. Having integrity can sometimes still mean subverting one's personal morality to live up to societal standards imposed upon oneself.
To commit suicide after a sufficiently shameful act h...
PR is about managing how an antagonist could distort your words and actions to portray you in a negative light.
There are narrow contexts in which the overwhelming purpose of PR, to the exclusion of almost any other concern, is to manage how an antagonist could distort one's words and actions to depict one in a hostile way. That's not the only good reason for PR in general.
Much of PR is about finding the right ways to communicate accurately what an organization is trying to do. Miscommunication may trigger others into fearing what one rea...
I'm coming to this article by way of a link from a Facebook group, though I am also an occasional LessWrong user. I would have asked this question in the comments of the FB post where this post was linked, but since the comments were closed there, I'll ask it here: What was (or were) the reason(s) behind:
I understand why someone would do this if they thought a plat...
I peruse her content occasionally, but I wasn't aware that the quality of her analysis/commentary is widely recognized as varying so wildly, and as often being particularly lacklustre outside of her own field. Gwern mentioned that Gary Marcus has apparently said as much in the past when it comes to her coverage of AI topics. I'll refrain from citing her as a source in the future.