
Comment author: JenniferRM 17 September 2017 11:22:27PM *  19 points [-]

I'm super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long term outcomes.

Fundamentally, I believe that a major component of LW's decline isn't in the primary article and isn't being addressed. Basically, a lot of the people drifted away over time who were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.

These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happen here on a regular basis. Their loss seems to have run in parallel with a general decrease in public acceptance of agonism in the English-speaking political world, and with a widespread cultural retreat from substantive longform internet debate, which is the part specifically relevant to LW 2.0.

My impression is that part of people drifting away was because ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer's writing.

The fundamental issue seems to be existential risk to the human species from exceptionally high-quality thinking with no predictably benevolent goals, augmented by recursively improving computers (i.e. the singularity as originally defined by Vernor Vinge in his 1993 article). This original vision covers (and has always covered) both Artificial Intelligence and Intelligence Amplification.

Now, I have no illusions that an unincorporated community of people can retain stability of culture or goals over periods of time longer than about 3 years.

Also, even most incorporated communities drift quite a bit or fall apart within mere decades. Sometimes the drift is worthwhile. Initially the thing now called MIRI was a non-profit called "The Singularity Institute For Artificial Intelligence". Then they started worrying that AI would turn out bad by default, and dropped the "...For Artificial Intelligence" part. Then a late-arriving brand-taker-over ("Singularity University") bought their name for a large undisclosed amount of money, and the real research started happening under the new name "Machine Intelligence Research Institute".

Drift is the default! As Hanson writes: Coordination Is Hard.

So basically my hope for "grit with respect to species-level survival in the face of the singularity" rests in gritty individual humans whose commitment and skills arise from a process we don't understand, can't necessarily replicate, and often can't reliably teach newbies to even identify.

Then I hope for these individuals to be able to find each other and have meaningful 1:1 conversations and coordinate at a smaller and more tractable scale to accomplish good things without too much interference from larger scale poorly coordinated social structures.

If these literal 1-on-1 conversations happen in a public forum, then that public forum is a place where "important conversations happen" and the conversation might be enshrined or not... but this enshrining is often not the point.

The real point is that the two gritty people had a substantive give and take conversation and will do things differently with their highly strategic lives afterwards.

Oftentimes a good conversation between deeply but differently knowledgeable people looks like an exchange of jokes, punctuated every so often by a sharing of citations (basically links to non-crap content) when a mutual gap in knowledge is identified. Dennett's theory of humor is relevant here.

This can look, to the ignorant, almost like trolling. It can look like joking about megadeath or worse. And this appearance can become more vivid if third and fourth parties intervene in the conversation, and are brusquely or jokingly directed away.

The false inference of bad faith communication becomes especially pernicious if important knowledge is being transmitted outside of the publicly visible forums (perhaps because some of the shared or unshared knowledge verges on being an infohazard).

The practical upshot of much of this is that I think that a lot of the very best content on Lesswrong in the past happened in the comment section, and was in the form of conversations between individuals, often one of whom regularly posted comments with a net negative score.

I offer you Tim Tyler as an example of a very old commenter who (1) reliably got net negative votes on some of his comments while (2) writing from a reliably coherent and evidence-based (but weird and maybe socially insensitive) perspective. He hasn't been around since 2014, as far as I'm aware.

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, while probably being unusually high (maybe the highest user) on a second or third such vector. He seems to me like the kind of person you might actually be trying to drive away, while at the same time being something of a canary for the tolerance of people genuinely focused on something other than winning at a silly social media game.

Upvotes don't matter except to the degree that they conduce to surviving and thriving. Getting a lot of upvotes and enshrining a bunch of ideas into the canon of our community and then going extinct as a species is LOSING.

Basically, if I had the ability, I would (for the purposes of learning new things) just filter out all the people who are high on the first eigendemocracy vector.

Yes, I want those "traditionally good" people to exist and I respect their work... but I don't expect novel ideas to arise among them at nearly as high a rate, so fewer novel ideas would even be available for propagation and eventual retention in a canon.

Also, the traditionally good people's content and conversations are probably going to be objectively improved if people high in the second and third and fourth such vectors also have a place, and that place allows them to object in a fairly high-profile way when someone high in the first eigendemocracy vector component proposes a stupid idea.

One of the stupidest ideas, one that cuts pretty close to the heart of such issues, is the possible proposal that people and content whose first eigendemocracy component is low should be purged, banned, deleted, censored, and otherwise made totally invisible and hard to find by any means.

I fear this would be the opposite of finding yourself a worthy opponent, and another step in the direction of actively damaging the community in the name of moderation and troll fighting. It seems like it might be part of the mission, which makes me worried.

Comment author: ESRogs 10 October 2017 03:08:46PM 0 points [-]

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, while probably being unusually high (maybe the highest user) on a second or third such vector.

Is there a natural interpretation of what the first vector means vs what the second or third mean? My lin alg is rusty.
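
Not an authoritative answer, but here is a toy numpy sketch of the intuition; the endorsement matrix and user names are invented for illustration. For a non-negative "who upvotes whom" matrix, the leading (Perron-Frobenius) eigenvector has entries that all share a sign, so it acts like a single "consensus standing" score, while the second and later eigenvectors have mixed signs and tend to separate factions or other orthogonal axes of approval:

```python
import numpy as np

# Hypothetical endorsement matrix: entry [i, j] = how much user i upvotes user j.
# Users a0-a2 and b0-b2 form two factions that mostly upvote internally;
# "bridge" is moderately endorsed by everyone.
users = ["a0", "a1", "a2", "b0", "b1", "b2", "bridge"]
V = np.array([
    [0, 5, 5, 0, 0, 0, 3],
    [5, 0, 5, 0, 0, 0, 3],
    [5, 5, 0, 0, 0, 0, 3],
    [0, 0, 0, 0, 5, 5, 3],
    [0, 0, 0, 5, 0, 5, 3],
    [0, 0, 0, 5, 5, 0, 3],
    [1, 1, 1, 1, 1, 1, 0],
], dtype=float)

# Eigen-decomposition of the symmetrized endorsement graph.
vals, vecs = np.linalg.eigh((V + V.T) / 2)
order = np.argsort(-vals)                      # largest eigenvalue first
first, second = vecs[:, order[0]], vecs[:, order[1]]

print(dict(zip(users, np.round(first, 2))))    # all entries share a sign: consensus standing
print(dict(zip(users, np.round(second, 2))))   # entries split by sign: the faction axis
```

In this toy setup the first vector gives everyone a same-signed "consensus" score, while the second splits the two factions by sign; a user who is mainly endorsed by one faction shows up strongly on the second vector even if their overall consensus score is unremarkable, which seems to be the Tim-Tyler-shaped pattern Jennifer is describing.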

Comment author: Lumifer 28 January 2016 03:54:46PM 1 point [-]

I am a bit confused. If we are living in a Quantum Immortality world, why don't we see any 1000-year-old people around?

Comment author: ESRogs 20 September 2017 11:55:57AM 1 point [-]

QI doesn't imply that you see any other immortal people. It just suggests that through an increasingly unlikely series of coincidences, the first-person perspective perpetually persists.

Comment author: IlyaShpitser 17 September 2017 03:13:13PM 1 point [-]

That's an illusion of readability though; it's only sorting in a fairly arbitrary way.

Comment author: ESRogs 17 September 2017 05:33:12PM 8 points [-]

As long as it's not anti-correlated with quality, it helps.

It doesn't matter if the top comment isn't actually the very best comment. So long as the system does better than random, I as a reader benefit.

Comment author: ESRogs 17 September 2017 10:09:32AM 3 points [-]

If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of as being a medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages.

Some questions about this (okay if you don't have answers now):

  • Can anyone make a personal page?
  • Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)?
  • Can a user get kicked off for inappropriate content (whatever that means)?

Comment author: IlyaShpitser 15 September 2017 02:28:41PM *  4 points [-]

(a) Thanks for making the effort!

(b)

"I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation."

This won't work, for the same reason PageRank did not work: you can game it by collusion. Communities are excellent at collusion. I think the important thing to do is to make toxic people (defined in a socially constructed way as people you don't want around) go away. Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.
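
For readers who haven't seen the idea, here is a rough sketch of what "applying PageRank to karma allocation" could look like, and of the collusion failure mode described above. The function and the vote data are invented for illustration, not the actual LW 2.0 implementation:

```python
import numpy as np

def pagerank_karma(upvotes, damping=0.85, iters=100):
    """Toy PageRank-style karma: the weight of a user's upvote is
    proportional to that user's own karma, solved by fixed-point iteration."""
    n = upvotes.shape[0]
    row_sums = upvotes.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # users who never vote distribute nothing
    M = (upvotes / row_sums).T                 # M[j, i]: share of voter i's weight flowing to j
    karma = np.full(n, 1.0 / n)
    for _ in range(iters):
        karma = (1 - damping) / n + damping * M @ karma
    return karma / karma.sum()

# Users 0-3 vote organically; users 4-6 are a colluding clique
# that upvotes only one another.
upvotes = np.zeros((7, 7))
upvotes[0, 1] = upvotes[1, 2] = upvotes[2, 0] = upvotes[3, 0] = 1.0   # organic votes
upvotes[4, 5] = upvotes[5, 6] = upvotes[6, 4] = 10.0                  # clique votes

print(np.round(pagerank_karma(upvotes), 3))
```

Because the clique keeps all of its voting weight circulating internally, its members end up with karma close to that of the organically endorsed users despite contributing nothing outside their own circle, which is exactly the gaming-by-collusion worry.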

Comment author: ESRogs 17 September 2017 09:35:57AM 7 points [-]

Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.

Ranking helps me know what to read.

The SlateStarCodex comments are unusable for me because nothing is sorted by quality, so what's at the top is just whoever had the fastest fingers and least filter.

Maybe this isn't a problem for fast readers (I am a slow reader), but I find automatic sorting mechanisms to be super useful.

Comment author: Habryka 16 September 2017 11:35:15PM 5 points [-]

Being aware that this is probably the most bikesheddy thing in this whole discussion, I've actually thought about this a bit.

From skimming a lot of early Eliezer posts, I've seen all three usages ("LessWrong", "Lesswrong" and "Less Wrong"), so there isn't a super clear precedent here, though I do agree that "Less Wrong" was used a bit more often.

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words. It makes it sound too much like it wants to refer to the original meaning of the words, instead of being a pointer towards the brand/organization/online-community, and while one might think that is actually useful, it usually just results in a short state of confusion when I read a sentence that has "Less Wrong" in it, because I just didn't parse it as the correct reference.

I am currently going with "LessWrong" and "LESSWRONG", which is what I am planning to use in the site navigation, logos and other areas of the page. If enough people object I would probably change my mind.

Comment author: ESRogs 17 September 2017 09:20:23AM 1 point [-]

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words.

Did you mean to write 'dislike "Less Wrong"'?

Comment author: Wei_Dai 25 November 2009 11:32:31PM 0 points [-]

The cost to use it is, on the face of it, at most a couple of mouse clicks. How could that be higher than the benefit of letting every reader know why the conversation ended? Perhaps I'm leaving out some hidden costs here, in which case, what do you think they are?

As for the cost to implement, I volunteer to code the feature myself, if I can get a commitment that it will be accepted (and if someone more qualified/familiar with the codebase doesn't volunteer).

Comment author: ESRogs 15 September 2017 07:34:48AM *  0 points [-]

When I read a comment, I may have a vague sense of not-worth-more-time-ness. So I don't respond.

I expect actually resolving that sense into a concrete reason to be effortful. It seems like it'd be worth it to do in many cases, but not always.

A version of this feature that sounds more likely to succeed to me would be one where it takes a mouse-click to request a reason for the end of an argument. I'd expect that to dramatically cut down on the number of times I'd have to resolve a vague sense into a concrete reason.

Comment author: Dr_Manhattan 02 September 2017 11:16:57AM *  0 points [-]

One other thing that worries me is, unless we can precisely diagnose what is causing academia to be unable to take the "outsider steps", it seems dangerous to make ourselves more like academia. What if that causes us to lose that ability ourselves?

Seems that academic motivations can be "value", e.g. discovering something of utility, or "momentum", sort of like a beauty contest, more applicable in abstract areas where utility is not obvious. A possible third is immediate enjoyment, which probably contributed to millennia of number theory before it became useful.

Doing novel non-incremental things for non-value (like valuing AI safety) reasons is likely to be difficult until enough acceptability is built up for momentum-type motivations (which also suggests trying to explicitly build up momentum as an intervention).

Comment author: ESRogs 03 September 2017 12:19:13AM 0 points [-]

is likely to be different

Did you mean "likely to be difficult"?

Comment author: paulfchristiano 20 June 2017 04:11:08PM *  20 points [-]

I don't buy the "million times worse", at least not if we talk about the relevant E(s-risk moral value) / E(x-risk moral value) rather than the irrelevant E(s-risk moral value / x-risk moral value). See this post by Carl and this post by Brian. I think that responsible use of moral uncertainty will tend to push you away from this kind of fanatical view.

I agree that if you are million-to-1 then you should be predominantly concerned with s-risk; I think s-risks are somewhat improbable/intractable, but not that improbable+intractable. I'd guess the probability is ~100x lower, and the available object-level interventions are perhaps 10x less effective. The particular scenarios discussed here seem unlikely to lead to optimized suffering; only "conflict" and "???" really make any sense to me. Even on the negative utilitarian view, it seems like you shouldn't care about anything other than optimized suffering.

The best object-level intervention I can think of is reducing our civilization's expected vulnerability to extortion, which seems poorly-leveraged relative to alignment because it is much less time-sensitive (unless we fail at alignment and so end up committing to a particular and probably mistaken decision-theoretic perspective). From the perspective of s-riskers, it's possible that spreading strong emotional commitments to extortion-resistance (e.g. along the lines of UDT or this heuristic) looks somewhat better than spreading concern for suffering.

The meta-level intervention of "think about s-risk and understand it better / look for new interventions" seems much more attractive than any object-level interventions we yet know, and probably worth investing some resources in even if you take a more normal suffering vs. pleasure tradeoff. If this is the best intervention and is much more likely to be implemented by people who endorse suffering-focused ethical views, it may be the strongest incentive to spread suffering-focused views. I think that higher adoption of suffering-focused views is relatively bad for people with a more traditional suffering vs. pleasure tradeoff, so this is something I'd like to avoid (especially given that suffering-focused ethics seems to somehow be connected with distrust of philosophical deliberation). Ironically, that gives some extra reason for conventional EAs to think about s-risk, so that the suffering-focused EAs have less incentive to focus on value-spreading. This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world. (Though here the non-s-riskers should also err on the side of extortion-resistance, e.g. trading with the position of rational non-extorting s-riskers rather than whatever views/plans the s-riskers happen to have.)

An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the latter, then x-risk and s-risk reduction may end up being aligned. If the former, then at best the s-riskers are indifferent to survival and need to resort to more speculative interventions. Interestingly, in this case it may also be counterproductive for s-riskers to expand their influence or acquire resources. My guess is that mature suffering-hating civilizations reduce s-risk, since immature suffering-hating civilizations probably provide a significant part of the game-theoretic incentive yet have almost no influence, and sane suffering-hating civilizations will provide minimal additional incentives to create suffering. But I haven't thought about this issue very much.

Comment author: ESRogs 22 July 2017 05:08:23PM 1 point [-]

An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the former, then x-risk and s-risk reduction may end up being aligned.

Did you mean to say, "if the latter" (such that x-risk and s-risk reduction are aligned when suffering-hating civilizations decrease s-risk), rather than "if the former"?

Comment author: VAuroch 15 June 2017 05:53:42AM 0 points [-]

That would make it terrible at being a medium of exchange or a store of value, though, wouldn't it? No one knows how much it's worth, and you have to acquire some, pass it off, and then (on their side) turn it into currency every time you use it.

Comment author: ESRogs 16 June 2017 12:43:23AM 0 points [-]

That depends on how volatile it is. On the timescale of a single transaction, a certain level of volatility might not matter very much even if the same level of volatility would prevent you from wanting to set prices in BTC.
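
A back-of-the-envelope illustration of the timescale point; the 80% annualized volatility and the one-hour window are made-up numbers, not a claim about actual BTC behavior. Under a simple random-walk model, the typical price move over a window scales with the square root of the time elapsed, so even a very volatile asset barely moves during a short transaction:

```python
import math

annual_vol = 0.80                  # hypothetical 80% annualized volatility
hours_per_year = 365 * 24

# Random-walk assumption: volatility over a window scales with sqrt(time fraction).
hourly_vol = annual_vol * math.sqrt(1.0 / hours_per_year)
print(f"typical move over one hour: ~{hourly_vol:.2%}")   # roughly 0.85%
```

So someone who converts back to fiat within an hour is exposed to sub-percent noise, even though quoting long-lived prices in BTC at that volatility would be painful.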
