I think that too much investment could result in more noise in the field. First, it would produce such a large volume of published material that it would exceed other researchers' capacity to read it, so the genuinely interesting work would go unread. It would also attract more people to the field than there are clever and dedicated people to fill it. If we have 100 trained AI safety researchers, which is an overestimate, and we hire 1,000 people, then the real researchers will be diluted. In some fields, like nanotech, overinvestment has even resulted in the expulsion of the original researchers, because they prevented less-educated newcomers from spending the money as they wished. But the most dangerous thing is the creation of many incomparable theories of friendliness, and even of AIs based on them, which could result in AI wars and extinction.
Yeah, I read Eliezer's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk" in Global Catastrophic Risks, and I was impressed with how far in advance he anticipated reactions to the rising popularity of AI safety, and what it might look like when the public finally switched from skepticism to genuine concern. Eliezer also anticipated that even safety-conscious work on AI might increase AI risk.
The idea that some existing institution in AI safety, perhaps MIRI, should expand much faster than the others, so it can keep up with and evaluate all the published material coming out, seems neglected.
Sorry to complain, but I opened the site to see what was going on, and Main has gone to utter crap.
"Is spirituality irrational?" and "3 reasons it's irrational to demand 'rationalism' in social-justice activism" are now heavily-commented recent posts in Main. Meanwhile, "Building Machines That Learn and Think Like People" was published a short while ago, and nothing about it appears on this site.
Looks like this site has slid into the River of Low Domain-Knowledge, Easy-to-Discuss General Stuff, rather than staying up in the nice Forest of Stuff LW Purports to be About.
Context: Main is currently disabled; LessWrong 2.0
LessWrong is actively being redesigned. Until further notice, posts to Main have been disabled. Once the redesign is complete, LW may have multiple subs, none of which might be called 'Main', but one or more of which will be designated as the home of the nice Forest of Classic LW Stuff you're hoping to find here. The only recent posts in Main are meetup posts and the survey, which were promoted there for visibility. Apparently, usage statistics show that for the last several months Discussion has been getting much more attention than Main, so Discussion is where the non-crap is. Of course, that means there is no longer an explicit division between crap and non-crap of the kind you'd expect the 'Main'/'Discussion' divide to reflect. Try finding other ways to filter out crap, like reading the top posts from the previous week.
It seems to me that, despite talk of change, LW is staying essentially the same... and thereby struggling, at an accelerating rate, to be a place for useful content.
My current modus operandi for LW is to use my LW bookmark to (1) check SSC and the other "Rationality Blogs" on the sidebar, and then (2) peruse Discussion (and sometimes comment) if there isn't a new post at SSC et al. that commands my attention. I wonder if other LWers do the same? I wonder what percentage of LW traffic is "secondary" in the way I've described?
I like your suggestion because it is a radical change that might work. And it's bad to do nothing if what you are doing seems to be on a trajectory of death.
At some point, during a "how can we make LW better" post on here, I mentioned making LW a de facto "hub" for the rationality blogosphere since it's increasingly not anything else. I'm now re-saying that and seconding your idea. There could still be original content... but there is nowhere close to enough original content coming in right now to justify LW as a standalone site.
As a data point, that's exactly how I've been using LessWrong for at least the last year: checking the sidebar blogs first, then perusing Discussion. One of the reasons I comment more frequently in open threads is that we can have idle conversations like this one as well :P
Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?
I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty badly. But I think using LW Discussion as a sort of LW Diaspora Link Aggregator would be one of the best ways to "save" it.
One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.
I personally wouldn't mind a more Hacker News-like style for LW Discussion, with a heavy focus on links to outside content. Because frankly, we're not generating enough content locally anymore.
I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.
Rob Bensinger published the Library of Scott Alexandria, his summary/"Sequences" of the historically best posts from Scott (according to Rob, that is). Scott seems to write on topics with a common thread between them in cycles of a few months, which can be observed in the "top posts" section of his blog. Sometimes I forget a blog exists for a few months, so I don't read it, but when I do read diaspora/rationality-adjacent blogs, I consider the reading personally valuable. I'd appreciate LessWrong users sharing pieces from their favourite blogs that they believe would also appeal to many users here. So making a top-level post once in a while that links to several articles from one author, sharing their best recent posts relevant to LessWrong's interests, seems reasonable. I agree that making a top-level post for any single link, or for every link from a separate blog, would be too much, and that this implicit norm should continue to exist.
[LINK]
There seems to be some relevant stuff this week:
Katie Cohen, a member of the rationality community in the Bay Area, and her daughter have fallen on some hard times, and are the beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends. I don't know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.
Seems like there are lots of good links this week, plus corrections to the previous links post, so check it out if you found yourself reading lots of SSC links.
Scott is moving back to the Bay Area next year, and is looking for doctors from the area to talk to about setting himself up with a job as a psychiatrist.
[Survey Taken Thread]
By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.
Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.
Upvoted for sharing unique experiences for their learning potential. I recall Luke Muehlhauser attended a Toastmasters meetup run by Scientologists several years ago when he first moved to California. Unrelated to the article, but as an aside: he discouraged other LessWrong users from attending any meeting run by Scientologists just because he did, because they are friendly and they will hack people's System 1s into making them want to come back. Even being enticed to join Scientology is not a worthwhile risk, and in the best case you might just waste your time with them anyway. I mean, IIRC, this was after Luke himself had left evangelical Christianity and read the LessWrong Sequences, so I guess he was very confident he wouldn't be pulled in.
It's interesting that you went, but if you were invited by a stranger on a plane to their home, I hardly think you "infiltrated", as opposed to being invited by a Raelian as the first step toward joining them. I'm not saying you'll be fooled into joining, but I caution against going back, as you could at least use the time to find other friendly communities to join, like any number of meetups, which aren't cults. It's sad that others are in this cult, but it's difficult enough to pull people out of cults that I'm not confident it's worth sticking around to try, even if you think they're good people. When you get back Stateside, or wherever you're from, I figure there are skeptics' associations you can get involved with which do good work on helping people believe less crazy things.
If you have ever wondered how it is possible that a flying saucer cult has more members than EA, now it's time to learn something.
One sentiment from a friend of mine, which I don't completely agree with but believe is worth keeping in mind, is that effective altruism (EA) is about helping others and isn't meant to become a "country club for saints". What does that have to do with Raelianism, or Scientology, or some other cult? Well, they tend to treat their members like saints, and their members aren't effective. I mean, these organizations may be effective by one metric, in that they're able to efficiently funnel capital (e.g., financial, social, material, sexual, etc.) to their leaders. I'm aware of Raelianism, but I don't know much about it. From what I've read about Scientology, it's able to get quite a lot done. However, it gets away with that by not following rules, bullying everyone from its detractors to whole governments, and brainwashing people into becoming its menial slaves. The epistemic hygiene in these groups is abysmal.
I think there are many onlookers from LessWrong who are hoping much of effective altruism develops better epistemics than it has now, and who would be utterly aghast if it sold this out, using whatever tools from the dark arts, to make gains in raw numbers of self-identified adherents who cannot think or act for themselves. Being someone quite involved in EA, I can tell you that the notion that EA should grow as fast as possible, or that the priority is to make anyone willing to become passionate about it feel as welcome as possible, isn't worth it if the expense is the quality culture of the movement, to the extent it has a quality culture of epistemic hygiene. So, sure, we could learn lessons from UFO cults, but they would be the wrong lessons. Having as many people in EA as possible isn't the most important thing for EA to do.
As someone who voted yes, and currently seeing how the margin is 32 'yays' (52%) to 29 'nays' (48%), I don't think you should start this discussion simply because there is a bare majority in favour of a discussion thread on Trump. I mean, I wouldn't like to see 48% of users put off by this discussion. So, I think it's safe to say the discussion should really only start if you get a supermajority, something like two-thirds in favour, by whenever you decide the poll is closed. If that's not the case, I don't think it's worth the costs of hosting the discussion here.
I thus agree with ChristianKl that the discussion should move to Omnilibrium.
I strongly disagree.
First, because there are multiple reasons that the creation of many distinct theories of friendliness would not be dangerous. The first AI to get to superintelligence should be able to establish a monopoly on power, and then we wouldn't have to worry about the others. Even if that didn't happen, a reasonable decision theory should be able to cooperate with other agents with different reasonable decision theories when it is in both of their interests to do so. And even if we end up with multiple friendly AIs that are not great at cooperation, it is a particularly easy problem to cooperate with agents that have similar goals (as is implied by all of them being friendly). Finally, suppose we end up with a "friendly AI" that is incapable of establishing a monopoly on power and that will cause a great deal of destruction when another similarly capable but differently designed agent comes into existence, even if both agents have broadly similar goals (I would not call this a successful friendly AI). Convincing people not to create such AIs does not actually get much easier if the people planning to create them have not been thinking about how to make them friendly, so preventing people from developing different theories of friendliness still doesn't help.
But beyond all that, I would also say that not creating many incomparable theories of friendliness is itself dangerous. If there is only one theory that anyone is working on, it will likely be misguided, and by the time anyone notices, enough time may have been wasted that friendliness will have lost too much ground in the race against general AI.
Just pointing out that I upvoted Turchin's comment above, but I agree with your clarification here of the last part of his comment. Nothing I've read thus far raises concern about warring superintelligences.