Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Raemon 01 April 2017 06:53:47PM 10 points [-]

I'll be making sure there are notes from the Berkeley unconference. If you're interested in doing something vaguely-similar in your own neck of the woods, I recommend commenting here to see if others are interested (and/or reaching out through whatever your usual community-channels are).

My past experience is that it hasn't been worth it to arrange skyping in for this sort of event, but I think it'd be worth collaborating on ideas beforehand and sharing notes afterwards between people in different geographic locations.

Comment author: oge 17 May 2017 08:14:05AM 1 point [-]

Hey Ray, would you mind posting the notes from the unconference? With the CFAR hackathon coming up, the notes might give me ideas of hacks to work on.

Comment author: Alicorn 17 March 2017 01:46:56AM 21 points [-]

If you like this idea but have nothing much to say please comment under this comment so there can be a record of interested parties.

Comment author: oge 18 March 2017 11:42:14AM 0 points [-]

Hear! Hear!!

Comment author: oge 04 February 2017 12:39:34PM 0 points [-]


Comment author: oge 04 January 2017 01:05:03PM 1 point [-]

Thanks for writing this Gordon. I was struck by this paragraph:

By making clever choices in the company we keep and the cultures we engage, as adults we can insulate ourselves from the fullness of the world

I'd never considered that my cleverness in avoiding aversive stimuli could hurt me in the long run.

Comment author: itaibn0 03 January 2017 11:26:14PM 1 point [-]

Writing suggestion: Drop everything past the 10th paragraph ("It’s not immediately obvious that you’d want to overcome fear, though...").

Comment author: oge 04 January 2017 12:58:27PM 0 points [-]

Haha, I'd have advised gworley to drop everything before the 8th paragraph ("It’s in this spirit that I advise you, act into fear. ") since it could have been summarized by a paragraph-long disclaimer.

Comment author: oge 26 December 2016 12:16:28AM 2 points [-]

Lovely story, Stuart. I like how you captured certain aspects of thinking I hadn't seen articulated before e.g.

  • a being moving their attention/power towards different abilities as needed
  • that feeling of slowly becoming part of a community
  • the feeling of noticing that a statement is likely not true

Comment author: Viliam 17 December 2016 11:43:11AM *  1 point [-]

Well, I am guilty of proposing a solution too soon as well. But it's interesting to see, ignoring the minor details, where we agree and where our proposals differ. This is a quick comparison:

Common points:

  • software change is necessary;
  • creating a personalized "bubble" is not the direction we want to go, because we want to create commons;
  • but a democracy where all randos from the internet have an equal vote is also not the way (even ignoring the issue of sockpuppets);
  • so we need a group of people with trusted opinion;
  • a binary decision of what content is "really in" could help people who only want to spend a short time online (and we should optimize for these, or at least not actively against them);
  • a lot of good new content is likely to appear in the "experimental" part (that's where all the new talents are), from where it should be promoted to the "in" part;
  • this promotion of content should be done only by the people who are already "in" (but the opinion of others can be used as a hint).

Differences (your proposal vs. mine):

  • scalable N-tiered system (reflecting familiarity and compatibility with the existing commons); or
    • just a "user tier" and "admin tier" with quite different rules;
  • content filtered by a slider ("in" or "in + somewhat less in" etc.); or
    • the "in" content and the "experimental / debate" content separated from each other as much as possible;
  • a voting system where votes from higher tiers can override the votes from lower tiers; or
    • probably something pretty similar, except that the "promote or not" decision would be "two or three admins say yes, and either no admin says no, or the dictator overrides".

The summary of our differences seems to be that you want to generalize to N tiers, which all use the same algorithm; while I assume two groups using dramatically different rules. (Most of the other differences are just further consequences of this one.)

The reason for my side is that I assume that the "trusted group" will be relatively small and busy, for various reasons. (Being instrumentally rational correlates with "having a lot of high-priority work other than voting in LW debates". Some of the trusted users will also be coders who will be busy fixing and contributing to LW code. And they will have to solve other problems that appear.) I imagine something like about 20 people, of whom only about 5 will actually be active in any given week, and 3 of those will be solving some technical or other problem. In other words, a group too small to need their mutual communication solved by an algorithm. (And in case of admin conflict, we have the dictator anyway.)
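The promotion rule sketched above ("two or three admins say yes, and either no admin says no, or the dictator overrides") could be written as a short decision function. This is a hypothetical sketch of that rule only; the function name and the vote representation are illustrative, not anything from the LW codebase:

```python
def should_promote(admin_votes, dictator_override=None):
    """Decide whether content is promoted to the "in" section.

    admin_votes: dict mapping admin name -> True (yes) or False (no).
    dictator_override: None (no intervention), or True/False to force
    the outcome either way.
    """
    # The dictator's decision, if any, trumps the admin votes.
    if dictator_override is not None:
        return dictator_override
    # Otherwise: at least two admins say yes, and no admin says no.
    yes_count = sum(1 for v in admin_votes.values() if v)
    any_no = any(not v for v in admin_votes.values())
    return yes_count >= 2 and not any_no
```

For example, `should_promote({"a": True, "b": True})` promotes, while a single objection (`{"a": True, "b": True, "c": False}`) blocks promotion unless the dictator overrides.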

Comment author: oge 18 December 2016 10:36:26PM 0 points [-]

Hi Viliam, your idea sounds like an academic community organized around rationality. You can think of the discussions as the events at a conference or workshop where half-baked ideas are thrown about. And you can think of the "final" tome of knowledge as the journal's proceedings: when an idea has been workshopped enough, it is revised and then published in the journal as Currently Definitive Knowledge.

This framing suggests having a rotating board of editors and a formal peer review system as is common in academic journals.

Comment author: oge 04 December 2016 01:40:41PM 3 points [-]

Hi Anna, could you please explain how CFAR decided to focus on AI safety, as opposed to other plausible existential risks like totalitarian governments or nuclear war?

Comment author: BiasedBayes 16 September 2016 04:10:11PM *  1 point [-]

Thanks for the post, I really liked the article overall. Nice general summary of the ideas. I agree with torekp. I also think that the term consciousness is too broad. Wanting to have a theory of consciousness is like wanting to have a "theory of disease". The overall term is too general and "consciousness" can mean many different things. This dilutes the conversation. We need to sharpen our semantic markers and not rely on intuitive or prescientific ideas. Terms that do not "carve nature well at its joints" will lead our inquiry astray from the beginning.

When talking about consciousness one can mean, for example:

  • attention: focusing mental resources on specific information
  • primary consciousness: having any form of subjective experience
  • conscious access: how attended information reaches awareness and becomes reportable to others
  • phenomenal awareness/qualia
  • sense of self/I

Neuroscience is needed to determine if our concepts are accurate (enough) in the first place. It can be that the "easy problem" is hard and the "hard problem" seems hard only because it engages ill posed intuitions.

Comment author: oge 16 September 2016 04:55:12PM 1 point [-]

I agree re: consciousness being too broad a term.

I use the term in the sense of "having an experience that isn't directly observable to others" but as you noted, people use it to mean LOTS of different other things. Thanks for articulating that thought.

Comment author: torekp 05 September 2016 11:34:37AM 1 point [-]

The author is overly concerned about whether a creature will be conscious at all and not enough concerned about whether it will have the kind of experiences that we care about.

Comment author: oge 06 September 2016 05:14:00PM 1 point [-]

My understanding is that if the creature is conscious at all, and it acts observably like a human with the kind of experience we care about, THEN it likely has the kind of experiences we care about.

Do you think it is likely that the creatures will NOT have the experiences we care about?

(just trying to make sure we're on the same page)
