If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post.)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here.


Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two bans is a bit entangled, but they are otherwise almost completely separate, so let me go through them individually:

Periergo is an account that is pretty easily traceable to a person curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong about signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears that he has done a number of things that go beyond merely warning others (like mailbombing curi, i.e. subscribing him to large amounts of email spam, and lots of sockpuppeting on forums curi frequents) that seem better classified as harassment; overall it seemed to me that this isn't the right place for Periergo.

Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -67...

Today we have banned two users, curi and Periergo, from LessWrong for two years each.

I wanted to reply to this because I don't think it's right to judge curi the way you have. Periergo I don't have an issue with (it's a sockpuppet account anyway).

I think your decision should not go unquestioned/uncriticized, which is why I'm posting. I also think you should reconsider curi's ban under a sort of appeals process.

Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.

On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack.

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.

I'd like to note I am on that list (about halfway down). I am also a public figure in Australia, having founded a federal political party based on epistemic principles with nearly 9k members. I am okay with being on that list. Arguably, if there is something truly wrong with the list, I should h...

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.

The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong, and that's where LW standards matter.

Unpopularity is no reason for a ban

That seems like a sentiment indicative of ignoring the reason for which he was banned. It was a utilitarian argument. The fact that someone gets downvoted is Bayesian evidence that it's not valuable for people to interact with him on LessWrong.

How is this different to pre-crime?

If you imprison someone who murdered in the past because you are afraid they will murder again, that's not pre-crime in most common senses of the word.

Additionally, even if it were, LW is not a place with virtue-ethics standards but one with utilitarian standards. Taking action to prevent things that are likely to negatively affect LW in the future is perfectly consistent with the idea of good gardening.

When you tend your garden, you don't ask "what crimes did the plants commit and how should they be punished?"; you focus on the future.

Max Kaye (+1)
Isn't it even worse, then, because no action was necessary? But more to the point, isn't the determination that X person is not good to have around a personal judgement? It doesn't apply to everyone else. I think what habryka meant was that he wasn't making a personal judgement.
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

The traditional guidance for up/downvotes has been "upvote what you would like to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind". Someone posting the kind of content that's unwanted on a forum seems like a reasonable reason to bar that person from the forum in question.

I agree with "being disliked is not a reason for punishment", but people also have the right to choose who they want to spend their time with, even if someone who they preferred not to spend time with viewed that as being punished. In my book, banning people from a private forum is more like "choosing not to invite someone to your party again, after they previously caused others to have a bad time" than it is like "punishing someone".

Gavin Palmer (0)
I'm a fan of solving problems with technology. One way to solve this problem of people not liking an author's content is to allow users to put people on an ignore list (perhaps for some period of time).
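To make the suggestion concrete, here is a minimal sketch of what such a feature could look like (all names here are hypothetical illustrations, not an existing LessWrong API), with optional time-limited entries:

```typescript
// Hypothetical sketch of a per-user ignore list with optional expiry.
// IgnoreList and its methods are made-up names for illustration only.

class IgnoreList {
  // Maps an ignored user's id to an expiry timestamp (ms since epoch);
  // Infinity means the entry never expires.
  private entries = new Map<string, number>();

  ignore(userId: string, durationMs?: number): void {
    this.entries.set(userId, durationMs ? Date.now() + durationMs : Infinity);
  }

  isIgnored(userId: string): boolean {
    const expiry = this.entries.get(userId);
    if (expiry === undefined) return false;
    if (Date.now() >= expiry) {
      this.entries.delete(userId); // expired entries clean themselves up
      return false;
    }
    return true;
  }
}
```

A feed renderer could then simply skip comments where `isIgnored(comment.authorId)` is true.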
Richard_Kennaway (+4)
How many people here remember Usenet's kill files?
Max Kaye (-5) [comment collapsed]
lsusr (+9)
If I understand you correctly, your primary argument is that a ban is (1) too harsh a judgment where a warning would have sufficed, (2) that curi ought to have some sort of appeals process, and (3) that habryka's top-level comment does not provide detailed citations for all the accusations against curi.

(1) Curi was warned at least once. (2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation. (3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi's profile, and even from curi's response you linked to, that curi is damaging to productive dialogue on Less Wrong.

The strongest claim against curi is "a history of threats against people who engage with him [curi]". I was able to confirm this via a quick glance through curi's past behavior on this site. In this comment curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.

Edit: grammar.
Max Kaye (+3)
lsusr said:

I'm reasonably sure the slack comments refer to events 3 years ago, not anything in the last few months. I'll check, though. There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY

gjm said:

I don't think there is a case for (1). Unless gjm is a mod and there are things I don't know?

lsusr said:

habryka explicitly mentions curi changing his LW commenting policy to be 'less demanding'. I can see the motivation for expedition, but the mods don't have to speedrun it. I think it's bad there wasn't any communication beforehand.

lsusr said:

I don't think that's the case. His net karma has increased, and judging him for content on his blog, not his content on LW, does not establish whether he was 'damaging to productive dialogue on Less Wrong'. His posts on Less Wrong have been contributions; for example, www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental is a direct response to one of EY's posts and it was net-upvoted. He followed that up with two more net-upvoted posts:

* www.lesswrong.com/posts/HpiTacu2P6c22GEzF/asch-conformity-could-explain-the-conjunction-fallacy
* www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental

This is not the track record of someone wanting to waste time. I know there are disagreements between LW and curi / FI. If that's the main point of contention, and that's why he's being banned, then so be it. But he doesn't deserve to be mistreated and have baseless accusations thrown at him.

lsusr said:

We have substantial disagreements about what constitutes a threat, in that case. I think a threat needs to involve something like danger, or violence, or something like that. It's not a 'threat' to copy public discussion under fair use for criticism and commentary.

I googled the definition, and these are the two results (for define:threat):

  • a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
  • a person or thing likely to cause damage or danger.

Neither of these applies.

I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace". I think the word "retribution" implies undue justice. A "threat" need only imply retaliation, not retribution, of hostile action.

We have substantial disagreements about what constitutes a threat,

Evidently yes, as do dictionaries.

habryka (+1)
This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.
Max Kaye (0)
This definition doesn't describe anything curi has done (see my sibling reply linked below), at least that I've seen. I'd appreciate any quotes you can provide. https://www.lesswrong.com/posts/PkpuvsFYr6yuYnppy/open-and-welcome-thread-september-2020?commentId=H2tyDgoRFov8Xs8HS
Max Kaye (-4)
This definition seems okay to me. I don't know how justice can be undue; do you mean undue or excessive prosecution? Or persecution, perhaps? Though I don't think either prosecution or persecution describes anything curi's done on LW. If you have counterexamples, I would appreciate it if you could quote them.

I don't think the dictionary definitions disagree much; it's not a substantial disagreement. thesaurus.com seems to agree; it lists them as ~strong synonyms. The crux is retribution vs retaliation, and retaliation is more general. The mafia can threaten shopkeepers with violence if they don't pay protection. I think retaliation is a better-fitting word. However, this still does not apply to anything curi has done!
lsusr (+7)
I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough. I do not believe that elaborating upon my reasoning would get you to change your mind about the core disagreement. Elaborating upon my position would therefore waste both of our time. The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern. Curi is being banned for wasting time with long, unproductive conversations. It would be ironic for me to embroil myself in such a conversation as a consequence.
Max Kaye (-4)
I don't either. Sure, we can stop.

I don't know anywhere I could go to find out that this is a bannable offense. If it is not in a body of rules somewhere, then it should be added. If the mods are unwilling to add it to the rules, he should be unbanned, simple as that. Maybe that idea is worth discussing? I think it's reasonable. If something is an offense, it should be publicly stated as such, and new and continuing users should be able to point to it and say "that's why". It shouldn't feel like it was made up on the fly as a special case -- it's a problem when new rules are invented ad hoc and not canonicalized (I don't have a problem with JIT rulebooks, it's practical).
Rafael Harth (+6)
This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified. Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.
Max Kaye (+1)
I think this is fair, and additionally I maybe shouldn't have used the word "truly"; it's a very laden word. I do think that, on the balance of probabilities, my case reduces the likelihood of something being foundationally wrong with the list, though. (Note: I've said this in what I think is a LW-friendly way. I'd say it differently on FI.)

One thing I do think, though, is that people's social anxiety does not make things in general right or wrong, but it can be decisive when thinking about a single action.

Another thing to point out is that anonymous participation in FI is okay; it's reasonably easy to start with an anonymous/pseudonymous email. curi's blog/forum hybrid also allows for anonymous posting. FI is very pro-free-speech. I think that's okay; curi isn't trying to attract everyone as an audience, and FI isn't designed to be a forum which makes people feel comfortable, as such. It has different goals from e.g. LW or a philosophy subreddit. I think we'd agree that norms at FI aren't typical and aren't for everyone. It's a place where anyone can post, but that doesn't mean that everyone should, sorta thing.
habryka (+2)
I don't understand this sentence at all. How has he already been punished for his past behavior? Indeed, he has never been banned before, so there was never any previous punishment. 
Sherrinford (+6)
I welcome the transparency, but this "I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities" seems a bit weird to me. "a propensity for long unproductive discussions, a history of threats against people who engage with him" and "I assign too high of a probability that old patterns will repeat themselves" seem like quite a judgement, and why would someone else not update on this?

Additionally, I think that while a ban is sometimes necessary (e.g. for harassment), a 2-year ban seems like quite a jump. I could think of a number of different sanctions, e.g. blocking someone from commenting in general; giving users the option to block someone from commenting; blocking someone from writing anything; limiting someone to their own shortform; all of these for some time.
habryka (+3)
The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results.  I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.
Ben Pace (+5)
I also didn't understand what your sentence was saying. It read to me as "I don't want people to update on this post". When you pointed specifically to LW's culture (which is very argumentative) possibly being a key cause it was clearer what you were saying. Thanks for the clarification (and for trying to avoid negative misinterpretations of your comment).
habryka (+2)
I am not sure. I really don't like the world where someone is banned from commenting on other people's posts but can still make top-level posts, or is banned from making top-level posts but can still comment. Both of these end up in really weird equilibria where you sometimes can't reply to conversations you started or respond to objections other people make to your arguments, and that just seems really bad.

I also don't really know what those things would have done. I don't think they would have reduced the uncertainty of whether curi is a good fit for LessWrong very much, and they could have just dragged things out into a long period of conflict that would have been more stressful for everyone.

The "blocking someone from writing anything" does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don't think we currently have the technical infrastructure to make that happen. I might consider building that for future occasions like this.
Richard_Kennaway (+6)
Blocking from writing but allowing to vote seems like a really bad idea. Being read-only is already available — that's the capability of anyone without an account. Generally I'd be against complicated subsets of permissions for various classes of disfavoured members. Simpler to say that someone is either a member, or they're not.
Sherrinford (+3)
Additionally, I'd like to know whether people are warned before they are banned, and whether they are asked about their own view of the matter.
Vaniver (+6)
Sometimes people are warned, and sometimes they aren't, depending on the circumstances. By volume, the vast majority of our bans are spammers, who aren't warned. Of users who have posted more than 3 posts to the site, I believe over half (and probably closer to 80%?) are warned, and many are warned and then not banned. [See this list.]
habryka (+3)
Yeah, almost everyone we ban who has any real content on the site is warned. It didn't feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.
Max Kaye (+1)
I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI) curi evidently wanted to change some things about his behaviour, otherwise he wouldn't have updated his commenting policy. How do you know he wouldn't have updated it more if you'd warned him? That's exactly the type of criticism we (CR/FI) think is useful. That sort of update is exactly the type of thing that would be reasonable to expect next time he came back (considering that he was away for 2 weeks when the ban was announced). He didn't want to be banned, and he didn't want to have shitty discussions, either. (I don't know those things for certain, but I have high confidence.) What probability would you assign to him continuing just as before if you said something like "If you keep continuing what you're doing, I will ban you. It's for these reasons." Ideally, you could add "Here they are in the rules/faq/whatever". Practically, the chance of him changing is lower now because there isn't any point if he's never given any chances. So in some ways you were exactly right to think there's low probability of him changing, it's just that it was due to your actions. Actions which don't need to be permanent, might I add.

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that.  But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?". 

As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).

Max Kaye (+3)
I'm not sure about other cases, but in this case curi wasn't warned. If you're interested, he and I discuss the ban in the first 30 minutes of this stream.
Sherrinford (+1)
I agree with your first paragraph. Whether someone is a "good fit" should already be visible from their karma (and I think karma then translates into karma points per vote?), and I don't see why that should additionally lead to a ban or something. A ban, or a writing ban, could result from destructive behavior. I think there is no real point in blocking people from reading. Writing, OK (though after all, things start out as personal blog posts in any case and don't have to be made frontpage posts).
Max Kaye (+4)
FYI I am on that list and fine with it - curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto

I think you're wrong on multiple counts. Will reply more in a few hours.
Max Kaye (+3)
FYI and FWIW curi has updated the post to remove emails and reword the opening paragraph. http://curi.us/2215-fallible-ideas-post-mortems and http://curi.us/2215-fallible-ideas-post-mortems#18059

I don't recall learning in school that most of "the bad guys" from history (e.g., Communists, Nazis) thought of themselves as "the good guys" fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we're seeing play out today). So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and would teaching it widely make the world even worse off in expectation?

I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:

We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. ... That we are also very confused about normativity and meta-ethics and don't really know what we mean by "should", including in this document...

Yeah, I realize this would be a hard sell in today's environment, but what if building Friendly AI requires a civilization sane enough to consider this common sense? I mean, for example, how can it be a good idea to gift a super-powerful "corrigible" or "obedient" AI to a civilization full of people with crazy amounts of moral certainty?

lsusr (+5)
Non-dualist philosophies such as Zen place high value on confusion (they call it "don't know mind") and have a sophisticated framework for communicating this idea. Zen is one of the alternative intellectual traditions I alluded to in my controversial post about ethical progress. The Dao De Jing 道德经, written 2.5 thousand years ago, includes strong warnings against ontological certainty (and, by extension, moral certainty). If we naïvely apply the Lindy Effect, then Chinese civilization is likely to continue for thousands more years while Western science annihilates itself after mere centuries. This may not be a coincidence.

Here is the manifesto you are looking for: Unfortunately, the duality of emptiness and form is difficult to translate into English.
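For what it's worth, the arithmetic behind the naïve Lindy claim is just this (the ages below are my own illustrative assumptions, not figures from the comment):

```latex
% Naive Lindy heuristic: expected remaining lifetime is roughly
% proportional to the lifetime observed so far.
\mathbb{E}[T_{\text{remaining}}] \approx T_{\text{elapsed}}
% Illustrative (assumed) ages:
%   Chinese civilization: T_elapsed ~ 3000+ years => thousands of years remaining
%   Western science:      T_elapsed ~ 400 years   => centuries remaining
```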
lsusr (+6)
States evolve to perpetuate themselves. Civilization has figured out (in the blind-idiot-god sense of "figured out") that moral uncertainty is teachable and decreases trust in the state ideology. You have it backward. The states in existence today promote moral certainty in children for exactly the same reason the Communist and Nazi states did.
ryan_b (+6)
I expect it is this. General moral uncertainty has all kinds of problems in expectation, like:

* It ruins morality as a coordination mechanism among the group.
* It weakens moral conviction in the individual, which is super bad from the perspective of people who believe there are direct consequences for a lack of conviction (like Hell).
* It creates space for different and possibly weird moralities to arise; I don't know of any moral systems that think it is a good thing to be a member of a different moral system, so I expect all the current moral systems to agree on this one.

I feel like the first bullet point is the real driving force behind the problems it would prevent, anyhow. Moral uncertainty doesn't cause people to do good things; it keeps them from doing good things (that are different from other groups' definitions of good things).
Vaniver (+5)
This is sort of a rehash of sibling comments, but I think there are two factors to consider here.

The first is the rules. It is very important that people drive on the correct side of the road, and not have uncertainty about which side of the road is correct, and not very important whether they have a distinction between "correct for <country> in <year>" and "correct everywhere and for all time."

The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like "freedom of religion" are the things civilization figured out to both have narrow shared goals (like "keep the peace") and not expansive shared goals (like "as many get to Catholic Heaven as possible"). It is unclear to me whether we're better off with moral uncertainty as a generator for "narrow shared goals", or whether narrow shared goals are what we should be going for.
ESRogs (+5)
I would guess that teaching that fact is not enough to instill moral uncertainty. And that instilling moral uncertainty would be very hard.
Kaj_Sotala (+4)
Often expressing any understanding towards the motives of a "bad guy" is taken as signaling acceptance for their actions. There was e.g. controversy around the movie Downfall for this:
cousin_it (+4)
Wouldn't more moral uncertainty make people less certain that Communism or Nazism were wrong?
gbear605 (+4)
That's definitely how it was taught in my high school, so it's not unknown.
Wei Dai (+1)
Did it make you or your classmates doubt your own morality a bit? If not, maybe it needs to be taught along with the outside view and/or the teacher needs to explicitly talk about how the lesson from history is that we shouldn't be so certain about our morality...
ChristianKl (+3)
We want to teach children to accept the norms of our society and the narrative we tell about it. A lot of what we teach is essentially pro-system propaganda. Teaching moral uncertainty doesn't help with that, and it also doesn't help with getting students to score better on standardized tests, which was the main goal of the educational reforms of the last decades.
lsusr (+2)
Compulsory education is an organ of the state. Nation-states evolve to perpetuate their own existence. Teaching moral uncertainty is counter-productive toward maintaining the norms of a nation-state.
RyanCarey (+3)
I guess it's because high-conviction ideologies outperform low-conviction ones, including nationalistic and political ideologies, and religions. Dennett's Gold Army/Silver Army analogy explains how conviction can build loyalty and strength, but a similar thing is probably true for movement-builders. Also, conviction might make adherents feel better, and therefore simply be more attractive.
TurnTrout (+3)
If I had to guess, I'd guess the answer is some combination of "most people haven't realized this" and "of those who have realized it, they don't want to be seen as sympathetic to the bad guys". 

The full-text version of the Embedded Agency sequence has colors! And they're not just in the form of an image; they're actually embedded as text. Is there any way a normal LW user can do the same with any of the three editors? (I.e., LW docs, Draft-JS, or Markdown.)

habryka (+6)
Alas, no. The reason is a bit silly. I can enable text colors in our editor, but this has the unintended side effect of also copying over the text color from wherever you are copying your text from, even the shade of black that that other program uses, which is hard to spot but ends up looking kind of unsettling on LessWrong. Since the vast majority of posts are written in normal "black-or-grey on white" text colors, the cost of that seemed larger than the benefit of allowing people to use colored text.

Eventually we could probably do something clever, like filtering out grey shades of text when you copy-paste it into the editor, but I haven't gotten around to that, though PRs are always welcome.
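For what it's worth, here is a minimal sketch of the kind of copy-paste filtering described above (assuming a browser environment with DOMParser; the function names are hypothetical, and this is not the actual LessWrong editor code):

```typescript
// Sketch: strip near-greyscale inline colors from pasted HTML so ordinary
// text falls back to the site default, while deliberately colored text survives.

function isNearGreyscale(r: number, g: number, b: number): boolean {
  // Low-saturation colors (all channels close together) are treated as
  // "accidental" shades of black/grey inherited from the source program.
  return Math.max(r, g, b) - Math.min(r, g, b) < 16;
}

function sanitizePastedHtml(html: string): string {
  const doc = new DOMParser().parseFromString(html, "text/html");
  for (const el of Array.from(doc.querySelectorAll<HTMLElement>("[style]"))) {
    const m = el.style.color.match(/rgb\((\d+),\s*(\d+),\s*(\d+)\)/);
    if (m && isNearGreyscale(Number(m[1]), Number(m[2]), Number(m[3]))) {
      el.style.removeProperty("color"); // let the site's default color apply
    }
  }
  return doc.body.innerHTML;
}
```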
gjm (+80)

Apparently OpenAI has sold Microsoft some sort of exclusive licence to GPT-3. I assume this is bad for the prospects of anyone else doing serious research on it.

Rana Dexsin (+1)
Is there visible reporting on this?
gjm (+5)
Some. Microsoft's blog post; OpenAI's blog post; article in The Verge; article in Engadget; article in VentureBeat; article in MIT Technology Review.
mingyuan (+3)
Yup, https://www.theverge.com/2020/9/22/21451283/microsoft-openai-gpt-3-exclusive-license-ai-language-research

I recently realized that I've been confused about an extremely basic concept: the difference between an Oracle and an autonomous agent.

This feels obvious in some sense. But actually, you can 'get' to any AI system via output behavior + robotics. If you can answer arbitrary questions, you can also answer the question 'what's the next move in this MDP', or less abstractly, 'what's the next steering action of the imaginary wheel' (for a self-driving car). And the difference can't be 'an autonomous agent has a robotic component'.
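The reduction described here (any question-answerer can be wrapped into an agent) can be made concrete with a toy sketch (all names are made up for illustration):

```typescript
// Toy sketch: wrapping a question-answering Oracle in a loop turns it
// into an autonomous agent whose answers directly drive behavior.

type Oracle = (question: string) => string;

function runAsAgent(
  oracle: Oracle,
  getObservation: () => string,
  act: (action: string) => void
): void {
  while (true) {
    const obs = getObservation();
    const action = oracle(`Given observation "${obs}", what is the next action?`);
    act(action);
  }
}
```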

The essential difference seems ...

Rafael Harth (+2)
Two existing suggestions for how to avoid existential risk naturally fall out of this framing.

1. Go all the way to the left (even further than the picture implies) by giving the AI no output channels whatsoever. This is Microscope AI.
2. Go all the way to the bottom and avoid all agent-like systems, but allow autonomous systems like self-driving cars. This is (as I understand it) Comprehensive AI Services (CAIS).

I'm going on a 30-hour roadtrip this weekend, and I'm looking for math/science/hard sci-fi/world-modelling Audible recommendations. Anyone have anything?

Golden raises $14.5M. I wrote about Golden here as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.

If anyone happens to be willing to privately discuss some potentially infohazardous stuff that's been on my mind (and not in a good way) involving acausal trade, I'd appreciate it - PM me. It'd be nice if I can figure out whether I'm going batshit.

So which simulacrum level are ants on when they are endlessly following each other in a circle?

Charlie Steiner (+4)
Over, and over... the pheromones... the overwhelming harmony...

Do those of you who live in America fear the scenarios discussed here? ("What If Trump Loses And Won’t Leave?")

Daniel Kokotajlo (+3)
I do, at least. I don't think "What if Trump loses and won't leave" is the best summary of my concern; the best summary is "What if the election is heavily disputed."
Sherrinford (+1)
"What if Trump Loses..." is just the title of the article, but the article also discusses scenarios where "Biden might be the one who disputes the result".

I do not know whether this has already been mentioned on LessWrong, but 4-6 weeks ago German news websites reported that commercially available mouthwash had been tested and found to kill coronavirus in the lab, with the (positive) results published in the Journal of Infectious Diseases.

You can click through this article to see the ranked names of the mouthwash brands and their "reduction factor", though the sample sizes seemed quite small. You can also find a list in this overview article. In an article I saw today on this topic, the...

I'm so bored of my job; I need a programming job that has actual math/algorithms. :/ I'm curious to hear about people here who have programming jobs that are more interesting. In college I competed at a high level in ICPC, but I got it into my head that there are so few programming jobs with actual advanced algorithms that if your name on topcoder isn't red you might as well forget about it. I ended up just taking a boring job at a top tech company that pays well but does very little for society and is not intellectually stimulating at all.

lincolnquirk (+4)
Have you read https://www.benkuhn.net/hard/ ? Curious what you think. (Disclosure: I started the company that Ben works for, which does not have hard eng problems but does have a high potential for social impact)
tinyanon (+1)
I feel happy pulling up Kattis and doing some algorithm questions, so there is definitely joy to be had chasing technical questions. Ben doesn't seem to be disputing that, but is offering two other things you can chase. I don't know if this differs from person to person, but for me gamifying a problem can make me care more about something; it can't make me care about something I don't care about at all.

This has been in my head for months because everyone* gives a variation of this advice, and it feels like it's missing the hard part. It started when I saw a clip on Reddit of Dr. K from Healthy Gamer saying something along the lines of "If you don't know what you want to do, get a piece of paper and write down everything wrong with the world. In 5 minutes the paper will be almost full" and... What? No? I mean, things are problems in that they make people's lives worse. But I notice very very little actually changes how I feel. So why would I expect anything I do to change how someone else feels if nothing they do can change how I feel?

There are only two axes that actually change how I feel about life: lonely vs. belonging and bored vs. engaged. I don't really have a reason to expect other people are very different, except that people in worse life situations also have an unsafe vs. secure axis. So the problems are "loneliness" and "listlessness". Everyone acts like there are important problems everywhere. You see people saying ideas for side projects are a dime a dozen, but here I am, where I actually have the funds to quit and make something I thought had value, and just nothing I can think of that seems to have any value.

*Everyone except one friend on Paxil who assures me the solution to my problem is Paxil and one friend who is convinced LSD is the solution to all problems. I remain unconvinced.
lsusr (+2)
* Quantitative finance has use for people who know advanced math and algorithms. (Though they are not known for doing great good for society.)
* You can also get around this problem by starting your own ML startup. (I did this.) The startup route takes work and risk tolerance but provides high positive externalities for society.