Comment author: CronoDAS 08 February 2016 05:10:14PM 5 points [-]

I think my girlfriend needs psychiatric help - she has visual hallucinations and other symptoms I've promised to keep confidential. She doesn't want to see a psychiatrist, as she and her family attribute her symptoms to supernatural causes; they believe that the "spirits" she sees actually exist. (Another family member - not a blood relative - also has psychiatric symptoms that are being treated and managed.) I really don't want to go into further details because one time I promised not to tell my psychiatrist about her issues, told him anyway, and she freaked out when I admitted it. (I admitted it because I can't lie for shit and suck at keeping secrets, but that's beside the point.)

Any advice? ("Break up with your girlfriend" will be ignored, unless you can convince me that it would be better for her if I left her.)

Comment author: MaximumLiberty 10 February 2016 02:43:59AM 2 points [-]

It seems that part of the problem might be that she is afraid of being judged crazy, or the equivalent. Having someone talk to her about being crazy (which is how she will probably perceive it) runs a risk of being counter-productive. So far, I think I've only restated what you are implying or saying.

If I have that right, you might think about finding a story -- fictional or biographical -- written from the perspective of someone suffering from similar symptoms and who resolved it through treatment. If she identifies with the protagonist, it might create some willingness to listen to alternatives.

Comment author: gjm 09 February 2016 10:36:38AM 6 points [-]

Everyone is "banworthy", in the sense that the moderators have the power to ban anyone for any reason and so far as I know there are no defined limits on their actions.

This particular post

  • is in no way actually on topic for LW
  • appears to have been the last straw in leading one long-standing contributor to give up on LW
  • fits right into an anti-LW narrative that's already not so uncommon ("LW has become a sinkhole of racists and sexists and fascists, because the site's supposedly rational norms give no way to make them unwelcome but they make everyone else feel unwelcome")
  • seems at the end to be trying to imply that it's unjust for rapists to be punished, if they feel frustrated and upset and the person they rape wasn't very nice to them

and I think some kind of moderator action in response is eminently reasonable. Personally I'd have gone for "This article is not suitable for LW because [...]; I will wait two days so that anyone who wants to preserve what they've written can take a copy, and then delete it; further attempts at posting this sort of thing may result in a ban".

(I think Nancy was right to ask "what about women's preferences?" and right to apply a bit of moderatorial intimidation, but I don't think the two should have gone together.)

Comment author: MaximumLiberty 10 February 2016 01:32:04AM 6 points [-]

Your list of reasons seems to me to be the very reason we have karma. Why does this post deserve moderation in a system where karma sends the message about the community's desire for more of the same?

Comment author: MaximumLiberty 07 February 2016 07:13:19PM 3 points [-]

You have run into the "productivity paradox": while first-hand observation suggests that using computers should raise productivity, the rise does not seem to show up in economy-wide statistics. It is something of a mystery. The Wikipedia page on the subject has an OK introduction to the problem.

I'd suggest that the key task is not measuring the productivity of the computers. The task is measuring the change in productivity of the researcher. For that, you must have a measure of research output. You'd probably need multiple proxies, since you can't evaluate it directly. For example, one proxy might be "words of published AI articles in peer-reviewed journals." A problem with this particular proxy is substitution, over long time periods, of self-publication (on the web) for journal publication.

A bigger problem is quality. The quality of a good today is far better than that of the similar good of 30 years ago. But how much better? There's no way to quantify it directly. Economists usually reason that "this year must be really close to last year" and ignore quality change across small time frames. But that does not help over long time frames, unless you look only at the rate of change in productivity, so that the productivity level itself gets swept aside by taking the first derivative; that works fine as long as quality is not changing disproportionately to productivity. The problem seems much greater if you have to assess the quality of AI research. Perhaps you could construct a complementary metric for each proxy you use, such as "citations in peer-reviewed journals" for each peer-reviewed article counted in the proxy noted above. And you would again have to address the effect of self-publication, this time on quality.
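To make the proxy-plus-quality idea concrete, here is a minimal sketch in Python. It computes raw growth in a word-count proxy, then growth in the same proxy weighted by a citations-per-article quality complement. The numbers and the multiplicative weighting scheme are invented for illustration, not real data or an established method.

```python
# Hypothetical illustration: measuring researcher productivity growth from a
# proxy (published words) combined with a quality complement (citations per
# article). All figures are invented for the sketch.

def growth_rate(series):
    """Year-over-year growth rates of a time series."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

# Proxy for output: words of published AI articles per researcher-year.
words_per_year = [40_000, 44_000, 50_600]

# Quality complement: mean citations per article in the same years.
citations_per_article = [10.0, 10.5, 12.6]

# Raw productivity growth ignores quality change...
raw_growth = growth_rate(words_per_year)

# ...so weight the proxy by the quality complement before differencing.
quality_adjusted = [w * c for w, c in zip(words_per_year, citations_per_article)]
adjusted_growth = growth_rate(quality_adjusted)

print([round(g, 3) for g in raw_growth])       # growth in raw output
print([round(g, 3) for g in adjusted_growth])  # growth including quality shift
```

The gap between the two series is the point: looking only at growth rates sweeps the level problem aside, but only so long as quality is not changing disproportionately to output.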

Comment author: MaximumLiberty 12 January 2016 07:03:04PM 1 point [-]

The work of Elinor Ostrom (co-winner of the 2009 Nobel prize in economics) seems relevant. The Wikipedia page on her gives a decent introduction. The relevant part of her work is on how societies use customs (other than market transactions) to regulate the use of common resources. The relevant observation here is that the customs often seem strange and nonsensical, but they work. She summarized her findings: "A resource arrangement that works in practice can work in theory."

Similarly, the work of Peter Leeson on ordeals seems relevant. Ordeals were medieval methods of determining the outcome of what would today be a lawsuit; examples include (literally) trial by fire and trial by battle. Leeson shows how this facially strange and nonsensical custom actually served its purpose of dispensing justice. His research along these lines is surprising, unorthodox, and amusing.

Comment author: MaximumLiberty 12 January 2016 08:38:52PM 0 points [-]

And similarly, here's a quotation from economist George Stigler: “every durable social institution or practice is efficient.” ("Efficient" has a specific meaning in context. Don't over-extend it to "good" or similar ideas.)

Comment author: PeteMichaud 28 December 2015 11:34:12PM 1 point [-]

Sure, I'd be happy to--I can share a summary of the plan and what we hope to achieve with it, but before I do that, are there specific questions you'd like answered about it?

Comment author: MaximumLiberty 30 December 2015 04:43:07PM *  1 point [-]

I doubt I know enough to ask good questions. The article has a very bare-bones reference to it, so here are some basic questions:

  1. What is the high level objective?
  2. Describe the training from the outside: when, where, who, how much?
  3. Describe the training from the inside: what gets taught, what gets learned?
  4. What role do you expect mentors to play?
  5. How do you support the mentors in playing that role?

Comment author: Mirzhan_Irkegulov 24 December 2015 10:54:21AM 0 points [-]

I somewhat support what you're saying, but I also believe that 100% filtering would lead to a filter bubble. Suppose you were much smarter than you are now and upon reflection realized Effective Altruism is super-duper important. But if you've filtered out EA-related articles on LW, you will no longer be exposed to them.

Comment author: MaximumLiberty 30 December 2015 04:40:21PM 0 points [-]

That is true given an assumption: that I will regularly return to LessWrong and read EA articles if I see them. My own assessment of myself is that I won't, so the assumption would be false. (I could be wrong.) I generally avoid EA articles because I'm not all that interested in them. No knock on the field; it's just not why I'm here. But having to wade through articles on EA and all the other topics I don't care about deters me from returning to LessWrong. I come back less frequently than I wish I would, and so miss the optimal time to comment on articles.

Comment author: MaximumLiberty 23 December 2015 11:42:22PM 2 points [-]

Can you explain more about your Mentorship Training Program?

Comment author: Lumifer 23 December 2015 04:18:34PM 1 point [-]

by someone who understands them better than I do

Why would such a someone commit to spending a considerable amount of time predigesting papers for your convenience?

In response to comment by Lumifer on LessWrong 2.0
Comment author: MaximumLiberty 23 December 2015 10:58:29PM *  0 points [-]

I think the key part of that sentence was "I'd like ..."

I can think of several reasons why someone might want to do such a thing.

  • They want to begin or enhance a reputation for being an authority in the field.
  • They want the organization that they represent to begin or enhance its reputation in the field and to popularize the particular spin that their organization places on such information.
  • They are studying the field anyway, so the investment amounts to prettying up their own précis of materials they are already reading.
  • They want to help the LW community and this is the way they choose to contribute. (For example, if there were interest in the field of law in which I specialize, I'd do the same, but I can't see that fitting in here.)

Then, empirically, I note that people (who know these fields better than I do) do actually post this kind of content here. But I don't see the karma system recognizing them for that contribution as much as being the "editor" of the relevant section would.

(Subsequently edited for terrible formatting)

In response to LessWrong 2.0
Comment author: MaximumLiberty 23 December 2015 12:32:00AM 2 points [-]

This is a proposal to replace (or supplement) the tagging system with a classification system for content that would be based on three elements: subject, type, and organization.

For me, one of the problems with the current LessWrong is that it has too many interesting distractions. Ideally, I would follow just a few things, with highly groomed content. For example, I'd like to see a section devoted to summaries of recent behavioral psychology articles by someone who understands them better than I do. I suspect that other people would like to see other things that I'd prefer to filter out: artificial intelligence research, effective altruism, personal productivity. I'm not knocking these subjects, but when I allocate time, I'd like to be able to allocate 100% to what I want to see and 0% to what I don't.

That suggests that one area where Less Wrong could be improved is at the top level of organization. I'd suggest that content be organized in subjects, like Behavioral Psychology, Effective Altruism, Personal Productivity, and Artificial Intelligence. Now you might say that the tagging system does this. It kind of does, but it is insufficiently prescriptive. An article on effective altruism could have no tags, or many, or not the ones I think of.

Currently, the content is also classified by type, into Main and Discussion. Frankly, the difference between the two makes little sense to me. But I think there is another classification that would be helpful when combined with prescriptive subjects. I'd classify content type more like this:

  • Research, used for summarizing a publication elsewhere, with the summary provided by someone who knows something about it
  • Link, used for identifying some information that might be of use to the community
  • Commentary, used for the normal kind of stuff that shows up in Discussion
  • Sequence, assigned by moderators to the original stuff that made this site what it is, or at least was
  • Reading, used for reading groups for specific books
  • Meetups, used only to announce meetups
  • Organization, used to announce and promote organized action

The third classification of content is by organization. The community needs to remain connected to the organizations it spawned, so each post could carry an organization label, which could be empty. Possible initial values would be MIRI, CFAR, FLI, etc. I'd hope that those organizations would ensure that at least their own research got into the relevant subject under a Research classification, and that their own blog posts got thrown over into the relevant subject under a Link classification.
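A minimal sketch of how the three-element scheme might be represented as a data model. The subject, type, and organization values come from the proposal above; the class and function names are hypothetical, not an existing implementation.

```python
# Illustrative sketch of the proposed subject/type/organization classification.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ContentType(Enum):
    RESEARCH = "Research"          # summary of a publication elsewhere
    LINK = "Link"                  # pointer to useful outside information
    COMMENTARY = "Commentary"      # ordinary discussion material
    SEQUENCE = "Sequence"          # moderator-assigned original material
    READING = "Reading"            # reading groups for specific books
    MEETUP = "Meetup"              # meetup announcements only
    ORGANIZATION = "Organization"  # announcing and promoting organized action

# Prescriptive subject list (unlike free-form tags, a post gets exactly one).
SUBJECTS = {"Behavioral Psychology", "Effective Altruism",
            "Personal Productivity", "Artificial Intelligence"}

@dataclass
class Post:
    subject: str
    content_type: ContentType
    organization: Optional[str] = None  # e.g. "MIRI", "CFAR", "FLI", or empty

def matches_filter(post, wanted_subjects, wanted_types):
    """A reader's filter: show only the selected subjects and types."""
    return post.subject in wanted_subjects and post.content_type in wanted_types

post = Post("Behavioral Psychology", ContentType.RESEARCH)
print(matches_filter(post, {"Behavioral Psychology"}, {ContentType.RESEARCH}))
```

The filter function captures the 100%/0% allocation goal: a reader declares the subjects and types they want and sees nothing else.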

This would make it easier for me to justify coming back to read Less Wrong daily, because I wouldn't have to expose myself to wonderful distractions in order to find the things I'd like to keep up on.
