Comment author: Raemon 12 January 2017 06:48:36PM 5 points

Note: there are also discussions of this taking place on the Effective Altruism Forum, which I think makes more sense as the central repository for this discourse.

http://effective-altruism.com/ea/169/a_response_to_ea_has_a_lying_problem/#comments

Comment author: Jiro 04 January 2017 04:54:58AM 0 points

You can't necessarily get someone to act as though a signal isn't a signal by saying "I won't count it as a signal".

Comment author: Raemon 04 January 2017 04:26:59PM 0 points

Maybe. (If you have a suggested solution to that issue, let me know.)

But honestly, I don't think your interpretation is that likely: a) most of this feedback comes in on anonymous forms, and the people I talk to in person, I talk to enough to get a pretty comprehensive understanding of their concerns.

b) This community in general does not seem to have a problem criticizing itself.

I think it is much more likely that people are uncomfortable about whether it is socially acceptable to unironically enjoy x-risk memes, and fear being judged by people outside or less involved with the Less Wrong community, than that they are uncomfortable for personal reasons and fear being judged from within the x-risky elements of LW.

Comment author: gjm 04 January 2017 12:07:07PM 1 point

"downvotes and upvotes tracked separately"

At present, they are. You can count the upvotes by looking at the point score. You can count the downvotes by saying quietly to yourself the number zero.

(Downvoting is currently disabled.)

Comment author: Raemon 04 January 2017 04:20:39PM 0 points

Oh right. :P

Comment author: Jiro 03 January 2017 09:26:20PM 0 points

"And my impression is that people are only really weirded out by these songs on behalf of other people who are only weirded out by them on behalf of other people."

Be careful, though. How much of that is really "only weirded out on behalf of other people", and how much of it is "weirded out themselves, but think it would be more socially acceptable to claim to be weirded out on behalf of other people"? After all, being weirded out yourself may signal that you don't want to be part of the group. Being offended "on behalf of other people" is a way to express your offense while trying not to signal the wrong thing.

Comment author: Raemon 03 January 2017 11:15:55PM 0 points

Definitely possible - I was careful to follow this up with 'if this does actually bother you I definitely want to know about it', and I did mean that seriously.

I haven't gotten ZERO personal concerns over it (I think I've gotten about one and a half complaints about When I Die, which is dramatically fewer complaints than I get about other aspects of the Solstice that are less juicily controversial).

Comment author: dglukhov 28 December 2016 09:29:51PM * 7 points

Hello all,

I found this site from a link in the comments section of an SCP Foundation post, which in turn linked to one of Eliezer's stranger allegorical pieces about the dangers of runaway AI intelligence getting the best of us. I've been hooked ever since.

Thanks to this site, I'm relearning university physics through Feynman, have plans to pick up a couple of textbooks from the recommended list, and plan on taking the opportunity to meet some hopefully intellectually stimulating people in person, if any of the meetups you guys seem to hold regularly ever make it closer to the general Massachusetts area.

I recently graduated with a B.S. in Chemistry and the now-odd realization that I haven't really learned anything during my time at university. I hope participating here will help fill that void of knowledge.

Furthermore, if I'm lucky, I might get to contribute to the plethora of useful discussions that seem to populate this site. If I'm even luckier, those contributions will be positive. Let's just hope I learn fast enough to make sure luck isn't the deciding factor for such an outcome.

I am also curious about the level of regular activity this site receives; perhaps someone could link to some statistics? Any reply would be greatly appreciated.

Also, I don't know if this is really relevant here, but I'd like to mention that I have a weird dream of someday inventing direct mental communication between people that doesn't involve the use of language, or at the very least helping such a project along if any exists. I don't know if anybody will care for such news, or even if this is a realistic goal to strive for considering the multitude of other priorities I have in life, but hey, it is what it is. Presumably, meeting such a goal would at least require some optimization of my own ability to think clearly and correctly. Yet another reason to come here, no doubt.

Well, here goes nothing! Hi guys!

Comment author: Raemon 29 December 2016 05:27:56PM 1 point

Welcome!

Comment author: Elo 27 December 2016 07:44:31PM 0 points

As a suggestion - to maintain the small feel - divide into smaller groups. Possibly, while in the larger hall, break people into groups for a groupier, closer feel. Think about Dunbar's group-size numbers. We need small tribes to feel close and connected to people.

Comment author: Raemon 27 December 2016 08:04:29PM 0 points

I think the Bay aimed to do that (or something similar) by having small tables people could sit at.

But the issue is more about the practicality of setting up the sort of environment that feels cozier. I.e., with fewer than 50 people, you can fit in a living room, which means you have couches and it naturally feels right to cuddle on the floor, etc. Whereas in a big hall, unless you bring in a lot of your own couches, pillows, etc., arrange them artfully on the floor, and have a space for the songleaders to stand that doesn't feel like a stage... it's going to be hard to produce that feeling no matter how you divide people up.

Comment author: eukaryote 24 December 2016 07:32:47AM 2 points

Comparative solsticeology: I helped organize the Seattle Solstice, and also attended the Bay Solstice. Both were really nice. A couple major observations:

The Seattle Solstice (also, I think, the New York one) had a really clear light-dark-light progression throughout the presentations; the Bay one didn't - it seemed like each speech or song was its own small narrative arc, and there wasn't an overarching one.

Seattle's was also in a small venue where there were chairs, but most people sat on cushions of various sizes on the floor, and were quite close to the performers and speakers. The Bay's was on a stage. While the cushion version probably wouldn't work for a much larger solstice, it felt intimate and communal. (Despite, I think, ~100 attendees at Seattle. Not sure how many people came to the Bay one; ~150 marked themselves as having gone on Facebook, but it seemed larger.)

Comment author: Raemon 27 December 2016 06:34:02PM 0 points

Thanks!

Yeah, it's a really frustrating problem that once Solstices cross 75 attendees or so, it becomes increasingly hard to preserve the intimate feel. You either need to spend a lot of effort transforming a big empty room into an intimate space, or you need to find a space that feels more intimate somehow.

Comment author: username2 26 December 2016 09:44:18PM 0 points

That's literally the post you are replying to. Did you have trouble reading that?

Comment author: Raemon 27 December 2016 12:31:30AM 0 points

Huh, okay, I do see that now. (I was fairly tired when I read it, and I'm still fairly tired now.)

I think I have a smaller-scale version of the same criticism to level at your comment as at the original post, which is that it's a bit long, meandering, and wall-of-text-y in a way that makes it hard to parse. (In your case, I think just adding more paragraph breaks would solve it, though.)

Comment author: username2 23 December 2016 06:39:58PM * 7 points

It's a little disheartening to see that all of the comments so far except one have missed what I think was the presenter's core point, and why I posted this link. Since this is a transcript of a talk, I suggest that people click the link at the beginning of the linked page to view the video of the talk -- transcripts, although immensely helpful for accessibility and search, can at times like this miss the important subtleties of emphatic stress or candor of delivery, which convey as much about why a presenter is saying what they are saying: when they are being serious, when they are being playful, when they are making a throwaway point or going for audience laughter, and so on. That matters a little more than usual for a talk like this. I will attempt to provide what I think is a fair summary outline of that core point, and why I think his critique is relevant to this community, while trying not to inject my own opinions into it:

I don't think the presenter believes that all or even very many of Bostrom's arguments in Superintelligence are wrong, per se, and I don't see that being argued in this keynote talk. Rather, he is presenting an argument that one should have a very strong prior against the ideas presented in Superintelligence, which is to say that they require a truly large amount of evidence, more than has been provided so far, to believe to such an extent as to uproot yourself and alter your life's purpose, as many are doing. In doing so, the talk is also a critique of the x-risk rationality movement.

Some of the arguments used are ad hominem attacks and reductio ad absurdum points. But he prefaces these with a meta-level argument that, while these are not good evidence in the Bayesian sense for updating beliefs, one should pay attention to ad hominem and reductio ad absurdum arguments in the construction of priors (my words), as these biases and heuristics are evolved memes with historic track records for gauging the accuracy of arguments, at least on average (closer to his words). In other words, you are better served by demanding more evidence for a crazy-sounding idea than for a mundane one.

He then goes on to show many reasons why AI risk specifically, and the x-risk rationality community generally, look and sound crazy. This is in addition to a scattering of technical points about the reality of AI development diverging from the caricature of it presented in Superintelligence. His actual professed opinion on AI risk, given at the end, is rather agnostic, and that seems to be what he is arguing for: a healthy dose of agnostic skepticism.

Comment author: Raemon 26 December 2016 03:47:36PM 0 points

I started reading this, got about halfway through, had no idea what the core thesis was, got bored, and stopped. Can you briefly summarize what you expected people to get out of it?

Comment author: shev 23 December 2016 01:49:46AM 11 points

While I think it's fine to call someone out by name if nothing else is working, I think the way you're doing it is unnecessarily antagonistic and seemingly intentionally spiteful, or at least utterly un-empathetic. What you're doing can (and in my opinion ought to) be done empathetically, for the sake of cohesion, not hurting people excessively, and whatnot.

Giving an excuse for why it's okay that you, specifically, are doing it, and declaring that you're "naming and shaming" on purpose, makes it worse. It's already shaming the person without your saying that you're very aware that it is; you ought to be taking an "I'm sorry I have to do this" tone instead of an "I'm immune to repercussions, so I'm gonna make sure this stings extra!" tone.

At least, this is how it would work in the several relatively typical (American) social groups that I'm familiar with.

Comment author: Raemon 23 December 2016 08:55:03PM 2 points

Yeah.

Semi-related: this entire conversation has kind of made me want to be able to see downvotes and upvotes tracked separately - I feel motivated to downvote the people who seem unnecessarily antagonistic to me, but I also very much want to see the upvotes showing solidarity with the complaint.
