1 min read · 1st Mar 2024 · 63 comments
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a special post for quick takes by Wei Dai. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

AI labs are starting to build AIs with capabilities that are hard for humans to oversee, such as answering questions based on large contexts (1M+ tokens), but they are still not deploying "scalable oversight" techniques such as IDA and Debate. (Gemini 1.5 report says RLHF was used.) Is this more good news or bad news?

Good: Perhaps RLHF is still working well enough, meaning that the resulting AI follows human preferences even out of the training distribution. In other words, they probably did RLHF on large contexts in narrow distributions, with human raters who had prior knowledge of/familiarity with the whole context (since it would be too expensive to do RLHF with humans reading diverse 1M+ contexts from scratch), but the resulting chatbot works well even outside that training distribution. (Is it actually working well? Can someone with access to Gemini 1.5 Pro please test this?)

Bad: AI developers haven't taken alignment seriously enough to have invested enough in scalable oversight, and/or those techniques are unworkable or too costly, causing them to be unavailable.

From a previous comment:

From my experience doing early RLHF work for Gemini, larger models exploit the reward model more. You need to constantly keep collecting more preferences and retraining reward models to make it not exploitable. Otherwise you get nonsensical responses which have exploited the idiosyncrasies of your preference data. There is a reason few labs have done RLHF successfully.

This seems to be evidence that RLHF does not tend to generalize well out-of-distribution, causing me to update the above "good news" interpretation downward somewhat. I'm still very uncertain though. What do others think?

RLHF with humans might also soon be obsoleted by things like online DPO, where another chatbot produces preference data for on-policy responses of the tuned model, and there is no separate reward model in the RL sense. If generalization from labeling instructions through preference decisions works in practice, even the weak-to-strong setting won't necessarily be important: tuning of a stronger model gets bootstrapped by a weaker model (where currently SFT from an obviously off-policy instruct dataset seems to suffice), but then the stronger model re-does the tuning of its equally strong successor that starts with the same base model (as in the self-rewarding paper), using some labeling instructions (a "constitution"). So all that remains of human oversight that actually contributes to the outcome is labeling instructions written in English, plus possibly some feedback on them from spot-checking what happens as a result of choosing particular instructions.
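For concreteness, the "no separate reward model" point can be seen in the DPO objective itself: the policy's own log-probabilities relative to a frozen reference model act as implicit rewards. A minimal sketch for a single preference pair (the toy log-prob numbers are illustrative, not from any real model):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one (chosen, rejected) response pair.

    No separate reward model: beta * (policy log-prob minus reference
    log-prob) plays the role of an implicit reward for each response.
    """
    implicit_reward_chosen = beta * (logp_chosen - ref_logp_chosen)
    implicit_reward_rejected = beta * (logp_rejected - ref_logp_rejected)
    margin = implicit_reward_chosen - implicit_reward_rejected
    # -log sigmoid(margin): small when the policy already prefers the
    # chosen response by a wide margin relative to the reference.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers: the policy slightly prefers the chosen response already.
loss = dpo_loss(logp_chosen=-10.0, logp_rejected=-12.0,
                ref_logp_chosen=-11.0, ref_logp_rejected=-11.0)
```

In the online variant described above, the preference label (which response is "chosen") would come from another chatbot judging on-policy samples, rather than from a human rater.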

Apparently Gemini 1.5 Pro isn't working great with large contexts:

While this worked well, for even a slightly more complicated problem the model failed. One Twitter user suggested just adding a random 'iPhone 15' in the book text and then asking the model if there is anything in the book that seems out of place. And the model failed to locate it.

The same was the case when the model was asked to summarize a 30-minute Mr. Beast video (over 300k tokens). It generated the summary but many people who had watched the video pointed out that the summary was mostly incorrect.

So while on paper this looked like a huge leap forward for Google, it seems that in practice it's not performing as well as they might have hoped.
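The probe described in the quote (insert an anachronistic phrase, then ask the model to spot it) is a needle-in-a-haystack test that's easy to replicate; here's a minimal sketch, with the prompt wording and insertion strategy as my own assumptions:

```python
import random

def make_needle_probe(book_text: str, needle: str = "iPhone 15") -> str:
    """Insert an out-of-place 'needle' phrase at a random word boundary
    in a long text, then ask whether anything seems out of place."""
    words = book_text.split()
    pos = random.randrange(len(words))  # random insertion point
    haystack = " ".join(words[:pos] + [needle] + words[pos:])
    return (haystack
            + "\n\nQuestion: Is there anything in the text above that "
              "seems out of place? If so, quote it exactly.")

# Stand-in for a real book; in practice you'd use ~1M tokens of text.
prompt = make_needle_probe(
    "It was the best of times, it was the worst of times. " * 200)
```

Running this against the model at several insertion depths and context lengths would distinguish "fails only near the middle of very long contexts" from a more general retrieval failure.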

But is this due to limitations of RLHF training, or something else?

My guess is that we're currently effectively depending on generalization. So "Good" from your decomposition. (Though I think depending on generalization will produce big issues if the model is scheming, so I would prefer avoiding this.)

since it would be too expensive to do RLHF with humans reading diverse 1M+ contexts from scratch

It's plausible to me that after doing a bunch of RLHF on short contexts, RLHF on long contexts is extremely sample efficient (when well tuned), such that only (e.g.) 1,000s of samples suffice[1]. If you have a $2,000,000 budget for long-context RLHF and need only 1,000 samples, you can spend $2,000 per sample. This gets you perhaps (e.g.) 10 hours of an experienced software engineer's time, which might suffice for good long-context supervision without necessarily needing any fancy scalable oversight approaches. (That said, people will probably use another LLM by default when trying to determine the reward if they're spending this long: recursive reward modeling seems almost certain by default if we're assuming that people spend this much time labeling.)

That said, I doubt that anyone has actually started doing extremely high effort data labeling like this, though plausibly they should...

From a previous comment: [...] This seems to be evidence that RLHF does not tend to generalize well out-of-distribution

It's some evidence, but exploiting a reward model seems somewhat orthogonal to generalization out of distribution: exploitation is heavily selected for.

(Separately, I expect that the quoted comment results in a misleadingly negative perception of the current situation.)


  1. I think experiments on sample efficiency of RLHF when generalizing to a new domain could be very important and are surprisingly underdone from my perspective (at least I'm not aware of interesting results). Even more important is sample efficiency in cases where you have a massive number of weak labels, but a limited number of high quality labels. It seems plausible to me that the final RLHF approach used will look like training the reward model on a combination of 100,000s of weak labels and just 1,000 very high quality labels. (E.g. train a head on the weak labels and then train another head to predict the difference between the weak label and the strong label.) In this case, we could spend a huge amount of time on each label. E.g., with 100 skilled employees we could spend 5 days on each label and still be done in 50 days which isn't too bad of a delay. (If we're fine with this labels trickling in for online training, the delay could be even smaller.) ↩︎
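The two-head scheme in the footnote (train one head on many weak labels, another to predict the strong-minus-weak difference) can be illustrated with a noiseless linear toy model; everything here (linear heads, feature dimensions, the form of the weak labeler's bias) is an illustrative assumption, not a claim about real RLHF pipelines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: many cheap weak labels, few expensive high-quality labels.
# Features stand in for frozen base-model representations of responses.
n_weak, n_strong, d = 5000, 50, 8
X_weak = rng.normal(size=(n_weak, d))
X_strong = rng.normal(size=(n_strong, d))

true_w = rng.normal(size=d)
bias = 0.5 * rng.normal(size=d)      # systematic error of the weak labeler
y_weak = X_weak @ (true_w + bias)    # plentiful but biased labels
y_strong = X_strong @ true_w         # scarce but accurate labels

# Head 1: fit the weak labels.
w_weak, *_ = np.linalg.lstsq(X_weak, y_weak, rcond=None)

# Head 2: fit the *difference* between strong labels and head 1's output,
# using only the small high-quality set.
residual = y_strong - X_strong @ w_weak
w_diff, *_ = np.linalg.lstsq(X_strong, residual, rcond=None)

def reward(x):
    # Combined reward estimate: weak head plus learned correction.
    return x @ w_weak + x @ w_diff
```

The point of the construction is that the correction head only has to learn the weak labeler's systematic error, which can be much lower-dimensional (hence more sample-efficient to learn) than the reward itself.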

Thanks for some interesting points. Can you expand on "Separately, I expect that the quoted comment results in a misleadingly negative perception of the current situation."? Also, your footnote seems incomplete? (It ends with "we could spend" on my browser.)

Can you expand on "Separately, I expect that the quoted comment results in a misleadingly negative perception of the current situation."?

I'm skeptical that increased scale makes hacking the reward model worse. Of course, it could (and likely will/does) make hacking human labelers more of a problem, but this isn't what the comment appears to be saying.

Note that the reward model is of the same scale as the base model, so the relative scale should be the same.

This also contradicts results from an earlier paper by Leo Gao. I think this paper is considerably more reliable than the comment overall, so I'm inclined to believe the paper or think that I'm misunderstanding the comment.

Additionally, from first principles I think that RLHF sample efficiency should just increase with scale (at least with well tuned hyperparameters) and I think I've heard various things that confirm this.

Also, your footnote seems incomplete?

Oops, fixed.

I have access to Gemini 1.5 Pro. Willing to run experiments if you provide me with an exact experiment to run, plus cover what they charge me (I'm assuming it's paid, I haven't used it yet).

[-] Wei Dai · 2mo

I find it curious that none of my ideas have a following in academia or have been reinvented/rediscovered by academia (including the most influential ones so far: UDT, UDASSA, b-money). Not really complaining, as they're already more popular than I had expected (Holden Karnofsky talked extensively about UDASSA on an 80,000 Hours podcast, which surprised me), it just seems strange that the popularity stops right at academia's door. (I think almost no philosophy professor, including ones connected with rationalists/EA, has talked positively about any of my philosophical ideas? And b-money languished for a decade gathering just a single citation in academic literature, until Satoshi reinvented the idea, but outside academia!)

Clearly academia has some blind spots, but how big? Do I just have a knack for finding ideas that academia hates, or are the blind spots actually enormous?

I was thinking of writing a short post kinda on this topic (EDIT TO ADD: it’s up! See Some (problematic) aesthetics of what constitutes good work in academia), weaving together:

Holden on academia not answering important questions

I followed this link thinking that it looks relevant to my question, but the way Holden delineates what academia is interested in, it should totally be interested in my ideas:

Today, when I think about what academia does, I think it is really set up to push the frontier of knowledge, especially in the harder sciences. I would say the vast majority of what is going on in academia is people trying to do something novel, interesting, clever, creative, different, new, provocative, that really pushes the boundaries of knowledge forward in a new way.

versus what Holden says are important questions that academia neglects:

There’s an intellectual topic, it’s really important to the world but it’s not advancing the frontier of knowledge. It’s more figuring out something in a pragmatic way that is going to inform what decision makers should do, and also there’s no one decision maker asking for it as would be the case with Government or corporations.

The rest of your comment seems to be hinting that maybe academia is ignoring my ideas because it doesn't like the aesthetics of my writing? (Not sure if that was your point, or if those bullet points weren't supposed to be directly related to my question...) Even if that's true though, I'm still puzzled why academia hasn't reinvented any of my ideas (which have been independently invented multiple times outside of academia, e.g. Nick Szabo and Satoshi with b-money, Paul Christiano with UDASSA).

Hmm, yeah I guess what I wrote wasn’t too directly helpful for your question.

the way Holden delineates what academia is interested in, it should totally be interested in my ideas…

I think Holden forgot “trendy”. Trendy is very important. I think people in academia have a tacit shared understanding of the currently-trending topics / questions, within which there’s a contest to find interesting new ideas / progress. If an idea is important but not trendy, it’s liable to get neglected, I think. It’s kinda like in clothing fashion: if you find a brilliant use of beads, but beads aren’t fashion-forward this year, roughly nobody will care.

Of course, the trends change, and indeed everyone is trying to be the pioneer of the next hot topic. There are a lot of factors that go into “what is the next hot topic”, including catching the interest of a critical mass of respected people (or people-who-control-funding), which in turn involves them feeling it’s “exciting”, and that they themselves have an angle for making further progress in this area, etc. But trendiness doesn’t systematically track objective importance, and it’s nobody’s job to make it so.

At least, that’s what things felt like to me in the areas of physics I worked in (optics, materials science, and related). I’m much less familiar with philosophy, economics, etc.

Remember, aside from commercially-relevant ideas, success for academia research scientists (and philosophers) is 100% determined by “am I impressing my peers?”—grants, promotions, invited talks, etc. are all determined by that. So if I write a paper and the prestigious people in my field are unanimously saying “I don’t know about that thing, it’s not an area that I know or care about”, the result is just as bad for me and my career as if those people had unanimously said “this is lousy work”.

it doesn't like the aesthetics of my writing

To be clear, when I said “the aesthetic of what makes a good X”, I meant it in a really broad sense. Maybe I should have said “the implicit criteria of what makes a good X” instead. So “the paper concerns a currently-trendy topic” can be part of that, even though it’s not really “aesthetics” in the sense of beauty. E.g., “the aesthetic of what makes a good peer-reviewed experimental condensed-matter physics paper” has sometimes been greatly helped by “it somehow involves nanotechnology”.

From my years in academia studying neuroscience and related aspects of bioengineering and medicine development... yeah. So much about how effort gets allocated is not 'what would be good for our country's population in expectation, or good for all humanity'. It's mostly about 'what would make an impressive-sounding research paper that could get into an esteemed journal?', 'what would be relatively cheap and easy to do, but sound disproportionately cool?', 'what do we guess that the granting agency we are applying to will like the sound of?'. So much emphasis on catching waves of trendiness, and so little on estimating the expected value of the results.

Research an unprofitable preventative-health treatment which plausibly might have significant impacts on a wide segment of the population? Booooring.

Research an impractically-expensive-to-produce fascinatingly complex clever new treatment for an incredibly rare orphan disease? Awesome.

I think the main reason why UDT is not discussed in academia is that it is not a sufficiently rigorous proposal, as well as there not being a published paper on it. Hilary Greaves says the following in this 80k episode:

Then as many of your listeners will know, in the space of AI research, people have been throwing around terms like ‘functional decision theory’ and ‘timeless decision theory’ and ‘updateless decision theory’. I think it’s a lot less clear exactly what these putative alternatives are supposed to be. The literature on those kinds of decision theories hasn’t been written up with the level of precision and rigor that characterizes the discussion of causal and evidential decision theory. So it’s a little bit unclear, at least to my likes, whether there’s genuinely a competitor to decision theory on the table there, or just some intriguing ideas that might one day in the future lead to a rigorous alternative.

I also think it is unclear to what extent UDT and updateless are different from existing ideas in academia that are prima facie similar, like McClennen's (1990) resolute choice and Meacham's (2010, §4.2) cohesive decision theory.[1] Resolute choice in particular has been discussed in a lot of detail, and for a long time (see the citations of McClennen's book). (And, FWIW, my sense is that most philosophers think that resolute choice is irrational and/or doesn't make sense, at least if it is cashed out as a choice rule.)

It also doesn't help that it is unclear what the difference between FDT and UDT is supposed to be. 

(If UDT is supposed to be an LDT of some sort, then you might want to check out Spohn's (2012)[2] version of CDT, Fisher's (n.d) disposition-based decision theory, and Poellinger's (2013) discussion of Spohn's theory, for ideas in academia that are similar to the LDT-part of the theory. And then there is also Schwarz' critique of FDT, which would then also apply to UDT, at least partially.)

  1. ^

    My own take, using the terminology listed here, is that the causalist version of Meacham's cohesive decision theory is basically "updateless CDT", that the evidentialist version is basically "updateless EDT", and that a Spohn-CDT version of cohesive decision theory is basically "U(C)DT/F(C)DT". I also think that resolute choice is much more permissive than e.g. cohesive decision theory and updatelessness. As a choice rule, it doesn't recommend anything close to "maximizing EU relative to your prior". Instead, it just states that (i) how you act ex ante in a dynamic choice problem should be the same as you would act in the normalised version of the problem, and (ii) you should be dynamically consistent (i.e., the most preferred plan should not change throughout the decision problem). 

  2. ^

    Note that in the published article, it says that the article was received in 2008.

I think the main reason why UDT is not discussed in academia is that it is not a sufficiently rigorous proposal, as well as there not being a published paper on it.

The reason for the former is that I (and others) have been unable to find a rigorous formulation of it that doesn't have serious open problems. (I and I guess other decision theory researchers in this community currently think that UDT is more of a relatively promising direction to explore, rather than a good decision theory per se.)

And the reason for the latter is the above, plus my personal distaste for writing/publishing academic papers (which I talked about elsewhere in this thread), plus FDT having been published which seems close enough to me.

Thanks for the references in the rest of your comment. I think I've come across Meacham 2010 and Spohn 2012 before, but forgot about them as I haven't been working actively on decision theory for a while. It does seem that Meacham's cohesive decision theory is equivalent to updateless EDT/CDT. (BTW in The Absent-Minded Driver I referenced a 1997 paper that also has an idea similar to updatelessness, although the authors didn't like it.)

On a quick skim of Spohn 2012 I didn't see something that looks like LDT or "algorithmic/logical agent ontology" but it's quite long/dense so I'll take your word on it for now. Still, it seems like none of the academic papers put all of the pieces together in a single decision theory proposal that's equivalent to UDT or FDT?

(Please note that UDT as originally described was actually updateless/evidential/logical, not causalist as you wrote in the post that you linked. This has been a historical disagreement between me and Eliezer, wherein I leaned towards evidential and he leans towards causal, although these days I just say that I'm confused and don't know what to think.)

The reason for the former is that I (and others) have been unable to find a rigorous formulation of it that doesn't have serious open problems. (I and I guess other decision theory researchers in this community currently think that UDT is more of a relatively promising direction to explore, rather than a good decision theory per se.)

That's fair. But what is it then that you expect academics to engage with? How would you describe this research direction, and why do you think it's interesting and/or important?

To quickly recap the history, people on LW noticed some clear issues with "updating" and "physicalist ontology" of the most popular decision theories at the time (CDT/EDT), and thought that switching to "updatelessness" and "logical/algorithmic ontology" was an obvious improvement. (I was the first person to put the two pieces together in an explicit formulation, but they were already being talked about / hinted at in the community.) Initially people were really excited because the resulting decision theories (UDT/FDT) seemed to solve a lot of open problems in one swoop, but then pretty quickly and over time we noticed more and more problems with UDT/FDT that seem to have no clear fixes.

So we were initially excited but then increasingly puzzled/confused, and I guess I was expecting at least some academics to follow a similar path, either through engagement with LW ideas (why should they be bothered that much by lack of academic publication?), or from independent invention. Instead academia seems to still be in a state similar to LW when I posted UDT, i.e., the ideas are floating in the air separately and nobody has put them together yet? (Or I guess that was the state of academia before FDT was published in an academic journal, so now the situation is more like some outsiders put the pieces together in a formal publication, but still no academic is following a similar path as us.)

I guess it's also possible that academia sort of foresaw or knew all the problems that we'd eventually find with UDT/FDT and that's why they didn't get excited in the first place. I haven't looked into academic DT literature in years, so you're probably more familiar with it. Do you know if they're puzzled/confused by the same problems that we are? Or what are they mostly working on / arguing about these days?

There are many many interesting questions in decision theory, and "dimensions" along which decision theories can vary, not just the three usually discussed on LessWrong. It's not clear to me why (i) philosophers should focus on the dimensions you primarily seem to be interested in, and (ii) what is so special about the particular combination you mention (is there some interesting interaction I don't know about maybe?). Furthermore, note that most philosophers probably do not share your intuitions: I'm pretty sure most of them would e.g. pay in counterfactual mugging. (And I have not seen a good case for why it would be rational to pay.) I don't mean to be snarky, but you could just be wrong about what the open problems are.

I haven't looked into academic DT literature in years, so you're probably more familiar with it. Do you know if they're puzzled/confused by the same problems that we are? 

I wouldn't say so, no. But I'm not entirely sure if I understand what the open problems are. Reading your list of seven issues, I either (i) don't understand what you are asking, (ii) disagree with the framing/think the question is misguided, or (iii) think there is an obvious answer (which makes me think that I'm missing something). With that said, I haven't read all the posts you reference, so perhaps I should read those first.

There are many many interesting questions in decision theory, and “dimensions” along which decision theories can vary, not just the three usually discussed on LessWrong.

It would be interesting to get an overview of what these are. Or if that's too hard to write down, and there are no ready references, what are your own interests in decision theory?

what is so special about the particular combination you mention

As I mentioned in the previous comment, it happens to solve (or at least seemed like a good step towards solving) a lot of problems I was interested in at the time.

Furthermore, note that most philosophers probably do not share your intuitions

Agreed, but my intuitions don't seem so unpopular outside academia or so obviously wrong that there should be so few academic philosophers who do share them.

I’m pretty sure most of them would e.g. pay in counterfactual mugging. (And I have not seen a good case for why it would be rational to pay.)

I'm not sure I wouldn't pay either. I see it as more of an interesting puzzle than having a definitive answer. ETA: Although I'm more certain that we should build AIs that do pay. Is that also unclear to you? (If so why might we not want to build such AIs?)
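For readers unfamiliar with the puzzle: in the usual formulation of counterfactual mugging (a fair coin; on tails Omega asks you for $100, on heads Omega pays you $10,000 if and only if it predicts you would have paid on tails), the ex-ante calculation that motivates paying, and the ex-post calculation that motivates refusing, are just:

```python
# Standard counterfactual mugging stakes (the usual numbers from the
# thought experiment; any stakes with the same structure work).
P_HEADS = 0.5
PRIZE = 10_000  # paid on heads, only if Omega predicts you'd pay on tails
COST = 100      # what you hand over on tails

# Evaluated before the coin flip (the updateless perspective),
# comparing the policy "pay" against the policy "refuse":
eu_pay = P_HEADS * PRIZE + (1 - P_HEADS) * (-COST)  # 4950.0
eu_refuse = 0.0

# Evaluated after observing tails (the updateful perspective),
# paying is a pure loss:
eu_pay_given_tails = -COST
```

The disagreement is over which of these two perspectives is normative; for an AI whose policy is fixed before any coin flips, the ex-ante comparison is the natural one, which is one way to motivate "we should build AIs that do pay" even while remaining unsure about the human case.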

I don’t mean to be snarky, but you could just be wrong about what the open problems are.

Yeah, I'm trying to keep an open mind about that. :)

With that said, I haven’t read all the posts you reference, so perhaps I should read those first.

Cool, I'd be interested in any further feedback when you're ready to give them.

It would be interesting to get an overview of what these are. Or if that's too hard to write down, and there are no ready references, what are your own interests in decision theory?

Yeah, that would be too hard. You might want to look at these SEP entries: Decision Theory, Normative Theories of Rational Choice: Expected Utility, Normative Theories of Rational Choice: Rivals to Expected Utility and Causal Decision Theory. To give an example of what I'm interested in, I think it is really important to take into account unawareness and awareness growth (see §5.3 of the first entry listed above) when thinking about how ordinary agents should make decisions. (Also see this post.)

I'm not sure I wouldn't pay either. I see it as more of an interesting puzzle than having a definitive answer. ETA: Although I'm more certain that we should build AIs that do pay. Is that also unclear to you? (If so why might we not want to build such AIs?)

Okay, interesting! I thought UDT was meant to pay in CM, and that you were convinced of (some version of) UDT.

On the point about AI (not directly responding to your question, to which I don't have an answer): I think it's really important to be clear about whether we are discussing normative, constructive or descriptive decision theory (using Elliott Thornley's distinction here). For example, the answers to "is updatelessness normatively compelling?", "should we build an updateless AI?" and "will some agents (e.g. advanced AIs) commit to being updateless?" will most likely come apart (it seems to me). And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.

Thanks, will look into your references.

Okay, interesting! I thought UDT was meant to pay in CM, and that you were convinced of (some version of) UDT.

I wrote "I'm really not sure at this point whether UDT is even on the right track" in UDT shows that decision theory is more puzzling than ever which I think you've read? Did you perhaps miss that part?

(BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I'd point that out since it may not be obvious at first glance.)

And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.

Yeah I agree with this to some extent, and try to point out such confusions or make such distinctions when appropriate. (Such as in the CM / indexical values case.) Do you have more examples where making such distinctions would be helpful?

I wrote "I'm really not sure at this point whether UDT is even on the right track" in UDT shows that decision theory is more puzzling than ever which I think you've read? Did you perhaps miss that part?

Yes, missed or forgot about that sentence, sorry.

(BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I'd point that out since it may not be obvious at first glance.)

Thanks.

Do you have more examples where making such distinctions would be helpful?

I was mostly thinking about discussions surrounding what the "correct" decision theory is, whether you should pay in CM, and so on.

It may be worth thinking about why proponents of a very popular idea in this community don't know of its academic analogues, despite them having existed since the early 90s[1] and appearing on the introductory SEP page for dynamic choice.

Academics may in turn ask: clearly LessWrong has some blind spots, but how big?

  1. ^

    And it's not like these have been forgotten; e.g., McClennen's (1990) work still gets cited regularly.

It may be worth thinking about why proponents of a very popular idea in this community don’t know of its academic analogues

I don't think this is fair, because even though component ideas behind UDT/FDT have academic analogues, it doesn't look like someone put them together into a single decision theory formulation in academic literature, at least prior to MIRI's "Cheating Death in Damascus" being published. Also "Cheating Death in Damascus" does cite both Meacham and Spohn (and others) and it seems excusable for me to have forgotten those references since they were both published after I wrote about UDT and again were only component ideas of it, plus I haven't actively worked on decision theory for several years.

I think Sami's comment is entirely fair given the language and framing of the original post. It is of course fine to forget about references, but e.g. "I find it curious that none of my ideas have a following in academia or have been reinvented/rediscovered by academia" and "Clearly academia has some blind spots, but how big?" read like you don't consider it a possibility that you might have re-invented something yourself, and that academics are at fault for not taking up your ideas.

(It sucks to debate this, but ignoring it might be interpreted as tacit agreement. Maybe I should have considered the risk that something like this would happen and not written my OP.)

When I wrote the OP, I was pretty sure that the specific combination of ideas in UDT has not been invented or re-invented or have much of a following in academia, at least as of 2019 when Cheating Death in Damascus was published, because the authors of that paper obviously did a literature search and would have told me if they had found something very similar to UDT in the literature, and I think I also went through the papers it referenced as being related and did not find something that had all of the elements of UDT (that's probably why your references look familiar to me). Plus FDT was apparently considered novel enough that the reviewers of the paper didn't tell the authors that they had to call it by the name of an existing academic decision theory.

So it's not that I "don’t consider it a possibility that you might have re-invented something yourself" but that I had good reason to think that's not the case?

I think there is nothing surprising about a small community of nerds writing in their spare time having blind spots, but when a large professional community has such blind spots, that's surprising.

On your first point: as Sami writes, resolute choice is mentioned in the introductory SEP article on dynamic choice (it even has its own section!), as well as in the SEP article on decision theory. And SEP is the first place you go when you want to learn about philosophical topics and find references.

On your second point: as I wrote in my comment above, (i) academics have produced seemingly similar ideas to e.g. updatelessness (well before they were written up on LW) so it is unclear why academics should engage with less rigorous, unpublished proposals that appear to be similar (in other words, I don't think the phrase "blind spots" is warranted), and (ii) when academics have commented on or engaged with LW DT ideas, they have to my knowledge largely been critical (e.g. see the post by Wolfgang Schwarz I linked above, as well as the quote from Greaves)[1].

  1. ^

    Cheating Death in Damascus getting published in the Journal of Philosophy is a notable exception though!

To clarify, by “blind spot” I wasn't complaining that academia isn't engaging specifically with posts written up on LW, but more that nobody in academia seems to think that the combination of "updateless+logical" is clearly the most important or promising direction to explore in decision theory.

Thanks Sylvester! Yep it looks like cohesive decision theory is basically original UDT.  Do you know what the state of the art is in terms of philosophical critiques of cohesive decision theory? Any good ones? Any good responses to the critiques?

Cohesive decision theory lacks the logical/algorithmic ontology of UDT and is closer to what we call "updateless EDT/CDT" (the paper itself talks about cohesive versions of both).

Also interested in a response from Sylvester, but I would guess that one of the main critiques is something like Will MacAskill's Bomb thought experiment, or just intuitions for paying the counterfactual mugger. From my perspective, these do have a point when it comes to humans, since humans seemingly have indexical values, and one way to explain why UDT makes recommendations in these thought experiments that look "bizarre" to many humans is that it assumes away indexical values (via the type signature of its utility function). (It was an implicit and not totally intentional assumption, but it's unclear how to remove the assumption while retaining nice properties associated with updatelessness.) I'm unsure if indexical values themselves are normative or philosophically justified, and they are probably irrelevant or undesirable when it comes to AIs, but I guess academic philosophers probably take them more for granted and are not as interested in AI (and therefore take a dimmer view on updatelessness/cohesiveness).

But yeah, if there are good critiques/responses aside from these, it would be interesting to learn them.

I don't think cohesive decision theory is being discussed much, but I'm not sure. Perhaps because in the paper the theory is mainly used to argue against the claim that "every decision rule will lead agents who can't bind themselves to disaster" (p. 20), and discussion of its independent interest is relegated to a footnote (footnote 34).

OK, thanks. So then the mystery remains why academic philosophy isn't more interested in this.

Aside from the literature on international relations, I don't know much about academic dysfunction (most of what I do know comes from reading parts of Inadequate Equilibria, particularly the visitor dialog), and other LessWrong people can probably cover it better. I think that planecrash, Yud's second HPMOR-scale work, mentions that everyone in academia just generally avoids citing things published outside of academia, because they risk losing status if they do.

EDIT: I went and found that section, it is here:

It turns out that Earth economists are locked into powerful incentive structures of status and shame, which prevent them from discussing the economic work of anybody who doesn't get their paper into a journal.  The journals are locked into very powerful incentive structures that prevent them from accepting papers unless they're written in a very weird Earth way that Thellim can't manage to imitate, and also, Thellim hasn't gotten tenure at a prestigious university which means they'll probably reject the paper anyways.  Thellim asks if she can just rent temporary tenure and buy somebody else's work to write the paper, and gets approximately the same reaction as if she asked for roasted children recipes.

The system expects knowledge to be contributed to it only by people who have undergone painful trials to prove themselves worthy.  If you haven't proven yourself worthy in that way, the system doesn't want your knowledge even for free, because, if the system acknowledged your contribution, it cannot manage not to give you status, even if you offer to sign a form relinquishing it, and it would be bad and unfair for anyone to get that status without undergoing the pains and trials that others had to pay to get it.

She went and talked about logical decision theory online before she'd realized the full depth of this problem, and now nobody else can benefit from writing it up, because it would be her idea and she would get the status for it and she's not allowed to have that status.  Furthermore, nobody else would put in the huge effort to push forward the idea if she'll capture their pay in status.  It does have to be a huge effort; the system is set up to provide resistance to ideas, and disincentivize people who quietly agreed with those ideas from advocating them, until that resistance is overcome.  This ensures that pushing any major idea takes a huge effort that the idea-owner has to put in themselves, so that nobody will be rewarded with status unless they have dedicated several years to pushing an idea through a required initial ordeal before anyone with existing status is allowed to help, thereby proving themselves admirable enough and dedicated enough to have as much status as would come from contributing a major idea.

To suggest that the system should work in any different way is an obvious plot to steal status that is only deserved by virtuous people who work hard, play by the proper rules, and don't try to cheat by doing anything with less effort than it's supposed to take.

It's glowfic, so of course I don't know how accurate it is, as it's intended to be plausibly deniable enough to facilitate free writing (while keeping things entertaining enough to register as not-being-work).

I have to think more about the status dynamics that Eliezer talked about. There's probably something to it... But this part stands out as wrong or at least needing nuance/explanation:

Thellim hasn’t gotten tenure at a prestigious university which means they’ll probably reject the paper anyways

I think most academic venues do blind reviews and whoever decides whether or not to accept a paper isn't supposed to know who wrote it? Which isn't to say that the info won't leak out anyway and influence the decision. (For example I once left out the acknowledgements section in a paper submission, thinking that, like the author byline, I was supposed to add it after the paper was accepted, but apparently I was actually supposed to include it and someone got really peeved that I didn't.)

Also it seems weird that Eliezer wrote this in 2021, after this happened in 2019:

MIRI suggested I point out that Cheating Death In Damascus had recently been accepted in The Journal of Philosophy, a top philosophy journal, as evidence of (hopefully!) mainstream philosophical engagement.

From talking with people who do work on a lot of grant committees in the NIH and similar funding orgs, it's really hard to do proper blinding of reviews. Certain labs tend to focus on particular theories and methods, repeating variations of the same idea... So if you are familiar with the general approach of a particular lab and its primary investigator, you will immediately recognize and have a knee-jerk reaction (positive or negative) to a paper which pattern-matches to the work that that lab / subfield is doing.

Common reactions from grant reviewers:

Positive - "This fits in nicely with my friend Bob's work. I respect his work, I should argue for funding this grant."

Neutral - "This seems entirely novel to me, I don't recognize it as connecting with any of the leading trendy ideas in the field or any of my personal favorite subtopics. Therefore, this seems high risk and I shouldn't argue too hard for it."

Slightly negative - "This seems novel to me, and doesn't sound particularly 'jargon-y' or technically sophisticated. Even if the results would be beneficial to humanity, the methods seem boring and uncreative. I will argue slightly against funding this."

Negative - "This seems to pattern match to a subfield I feel biased against. Even if this isn't from one of Jill's students, it fits with Jill's take on this subtopic. I don't want views like Jill's gaining more traction. I will argue against this regardless of the quality of the logic and preliminary data presented in this grant proposal."

Ah, sorry that this wasn't very helpful. 

I will self-downvote so this isn't the top comment. Yud's stuff is neat, but I haven't read much on the topic, and passing some along when it comes up has been a good general heuristic.

No need to be sorry, it's actually great food for thought and I'm glad you pointed me to it.

I think that UDASSA and UDT might be in academia's blind spots in the same way that the Everett interpretation is: more correct theories that came after less correct theories with mostly only theoretical evidence to support changing over to the new theories.

Many parts of academia have a strong Not Invented Here tendency. Not only is research outside of academia usually ignored, but so is research outside a specific academic citation bubble, even if another bubble investigates a pretty similar issue. For example, economic decision theorists ignore philosophical decision theorists, who in turn mostly ignore the economic decision theorists. Each bubble has its own writing style and concerns and canonical examples or texts. This makes it hard for outsiders to read the literature or even contribute to it, so they don't.

A striking example is statistics, where various fields talk about the same mathematical thing with their own idiosyncratic names, unaware or unconcerned whether it already had a different name elsewhere.

Edit: Though LessWrong is also a citation bubble to some degree.

[-]TAG2mo42

"Read the sequences....just the sequences"

Something a better, future version of rationalism could do is build bridges and facilitate communication between these little bubbles. The answer-to-everything approach has been tried too many times.

Have you tried talking to professors about these ideas?

[-]zm2mo30

Indeed, there is no need for sorrow, for by choosing to remain anonymous, you have done great things. The world owes you a Nobel Prize in Economics and a Turing Award. It is time for the world to seriously recognize your achievements and lead it towards a financial system without bubbles.

Why haven't you written academic articles on these topics?

The secret is that an academic article is just a formatting style, and anyone can submit to scientific journals. There is no need to have a PhD or even to work at a scientific institution.

I wrote an academic-style paper once, as part of my job as an intern in a corporate research department. It soured me on the whole endeavor, as I really didn't enjoy the process (writing in the academic style, the submission process, someone insisting that I retract the submission to give them more credit despite my promise to insert the credit before publication), and then it was rejected with two anonymous comments indicating that both reviewers seemed to have totally failed to understand the paper and giving me no chance to try to communicate with them to understand what caused the difficulty. The cherry on top was my mentor/boss indicating that this is totally normal, and I was supposed to just ignore the comments and keep resubmitting the paper to other venues until I run out of venues.

My internship ended around that point and I decided to just post my ideas to mailing lists / discussion forums / my home page in the future.

Also, I think MIRI got FDT published in some academic philosophy journal, and AFAIK nothing came of it?

The FDT paper got 29 citations, but many are from MIRI-affiliated people and/or on AI safety. https://scholar.google.ru/scholar?cites=13330960403294254854&as_sdt=2005&sciodt=0,5&hl=ru

One can escape trouble with reviewers by publishing on arXiv or other paper archives (PhilPapers). Google Scholar treats them as normal articles.

But in fact there are good journals with reviewers who actually help (e.g. Futures).

[-]ag4k2mo20

I don't think FDT got published -- as far as I can tell it's just on arXiv.  

I was referring to Cheating Death In Damascus which talks about FDT in Section 4.

There is some similarity between UDASSA and 'Law without law" by Mueller, as both use Kolmogorov complexity to predict the distribution of observers. In LwL there is not any underlying reality except numbers, so it is just dust theory over random number fields. 

Are humans fundamentally good or evil? (By "evil" I mean something like "willing to inflict large amounts of harm/suffering on others in pursuit of one's own interests/goals (in a way that can't be plausibly justified as justice or the like)" and by "good" I mean "most people won't do that because they terminally care about others".) People say "power corrupts", but why isn't "power reveals" equally or more true? Looking at some relevant history (people thinking Mao Zedong was sincerely idealistic in his youth, early Chinese Communist Party looked genuine about wanting to learn democracy and freedom from the West, subsequent massive abuses of power by Mao/CCP lasting to today), it's hard to escape the conclusion that altruism is merely a mask that evolution made humans wear in a context-dependent way, to be discarded when opportune (e.g., when one has secured enough power that altruism is no longer very useful).

After writing the above, I was reminded of @Matthew Barnett's AI alignment shouldn’t be conflated with AI moral achievement, which is perhaps the closest previous discussion around here. (Also related are my previous writings about "human safety" although they still used the "power corrupts" framing.) Comparing my current message to his, he talks about "selfishness" and explicitly disclaims, "most humans are not evil" (why did he say this?), and focuses on everyday (e.g. consumer) behavior instead of what "power reveals".

At the time, I replied to him, "I think I’m less worried than you about “selfishness” in particular and more worried about moral/philosophical/strategic errors in general." I guess I wasn't as worried because it seemed like humans are altruistic enough, and their selfish everyday desires limited enough that as they got richer and more powerful, their altruistic values would have more and more influence. In the few months since then, I've became more worried, perhaps due to learning more about Chinese history and politics...

Comparing my current message to his, he talks about "selfishness" and explicitly disclaims, "most humans are not evil" (why did he say this?), and focuses on everyday (e.g. consumer) behavior instead of what "power reveals".

The reason I said "most humans are not evil" is because I honestly don't think the concept of evil, as normally applied, is a truthful way to describe most people. Evil typically refers to an extraordinary immoral behavior, in the vicinity of purposefully inflicting harm to others in order to inflict harm intrinsically, rather than out of indifference, or as a byproduct of instrumental strategies to obtain some other goal. I think the majority of harms that most people cause are either (1) byproducts of getting something they want, which is not in itself bad (e.g. wanting to eat meat), or (2) the result of their lack of will to help others (e.g. refusing to donate any income to those in poverty).

By contrast, I focused on consumer behavior because the majority of the world's economic activity is currently engaged in producing consumer products and services. There exist possible worlds in which this is not true. During World War 2, the majority of GDP in Nazi Germany was spent on hiring soldiers, producing weapons of war, and supporting the war effort more generally—which are not consumer goods and services.

Consumer preferences are a natural thing to focus on if you want to capture intuitively "what humans are doing with their wealth", at least in our current world. Before focusing on something else by default—such as moral preferences—I'd want to hear more about why those things are more likely to be influential than ordinary consumer preferences in the future.

You mention one such argument along these lines:

I guess I wasn't as worried because it seemed like humans are altruistic enough, and their selfish everyday desires limited enough that as they got richer and more powerful, their altruistic values would have more and more influence.

I just think it's not clear it's actually true that humans get more altruistic as they get richer. For example, is it the case that selfish consumer preferences have gotten weaker in the modern world, compared to centuries ago when humans were much poorer on a per capita basis? I have not seen a strong defense of this thesis, and I'd like to see one before I abandon my focus on "everyday (e.g. consumer) behavior".

Evil typically refers to an extraordinary immoral behavior, in the vicinity of purposefully inflicting harm to others in order to inflict harm intrinsically, rather than out of indifference, or as a byproduct of instrumental strategies to obtain some other goal.

Ok, I guess we just define/use it differently. I think most people we think of as "evil" probably justify inflicting harm to others as instrumental to some "greater good", or are doing it to gain or maintain power, not because they value it for its own sake. I mean if someone killed thousands of people in order to maintain their grip on power, I think we'd call them "evil" and not just "selfish"?

I just think it’s not clear it’s actually true that humans get more altruistic as they get richer.

I'm pretty sure that billionaires consume much less as percent of their income, compared to the average person. EA funding comes disproportionately from billionaires, AFAIK. I personally spend a lot more time/effort on altruistic causes, compared to if I was poorer. (Not donating much though for a number of reasons.)

For example, is it the case that selfish consumer preferences have gotten weaker in the modern world, compared to centuries ago when humans were much poorer on a per capita basis?

I'm thinking that we just haven't reached that inflection point yet, where most people run out of things to spend selfishly on (like many billionaires have, and like I have to a lesser extent). As I mentioned in my reply to your post, a large part of my view comes from not being able to imagine what people would spend selfishly on, if each person "owned" something like a significant fraction of a solar system. Why couldn't 99% of their selfish desires be met with <1% of their resources? If you had a plausible story you could tell about this, that would probably change my mind a lot. One thing I do worry about is status symbols / positional goods. I tend to view that as a separate issue from "selfish consumption" but maybe you don't?

[-]jmh1mo40

I like the insight regarding power corrupting or revealing. I think perhaps both might be true and, if so, we should keep both lines of thought in mind when thinking about these types of questions.

My general view is that most people are generally good when you're talking about individual interactions. I'm less confident in that when one brings in the in-group/out-group aspects. I'm just not sure how to integrate all that into a general view or principle about human nature.

A line I heard in some cheesy B-grade horror movie relates to the question of a person's nature and the idea that we all have competing good and bad wolves inside. One of the characters asks which wolf is strongest, the good wolf or the bad wolf. The answer was "Which do you feed the most?"

My model is that the concept of "morality" is a fiction which has 4 generators that are real:

  • People have empathy, which means they intrinsically care about other people (and sufficiently person-like entities), but mostly about those in their social vicinity. Also, different people have different strengths of empathy; a minority might have virtually none.
  • Superrational cooperation is something that people understand intuitively to some degree. Obviously, a minority of people understand it on System 2 level as well.
  • There is something virtue-ethics-like which I find in my own preferences, along the lines of "some things I would prefer not to do, not because of their consequences, but because I don't want to be the kind of person who would do that". However, I expect different people to differ in this regard.
  • The cultural standards of morality, which it might be selfishly beneficial to go along with, including lying to yourself that you're doing it for non-selfish reasons. Which, as you say, becomes irrelevant once you secure enough power. This is a sort of self-deception which people are intuitively skilled at.

I don't think altruism is evolutionarily connected to power as you describe. Caesar didn't come to power by being better at altruism, but by being better at coordinating violence. For a more general example, the Greek and other myths don't give many examples of compassion (though they give many other human values), it seems the modern form of compassion only appeared with Jesus, which is too recent for any evolutionary explanation.

So it's possible that the little we got of altruism and other nice things are merely lucky memes. Not even a necessary adaptation, but more like a cultural peacock's tail, which appeared randomly and might fix itself or not. While our fundamental nature remains that of other living creatures, who eat each other without caring much.

I think the way morality seems to work in humans is that we have a set of potential moral values, determined by our genes, that culture can then emphasize or de-emphasize. Altruism seems to be one of these potential values, that perhaps got more emphasized in recent times, in certain cultures. I think altruism isn't directly evolutionarily connected to power, and it's more like "act morally (according to local culture) while that's helpful for gaining power" which translates to "act altruistically while that's helpful for gaining power" in cultures that emphasize altruism. Does this make more sense?

Yeah, that seems to agree with my pessimistic view - that we are selfish animals, except we have culture, and some cultures accidentally contain altruism. So the answer to your question "are humans fundamentally good or evil?" is "humans are fundamentally evil, and only accidentally sometimes good".

I think altruism isn't directly evolutionarily connected to power, and it's more like "act morally (according to local culture) while that's helpful for gaining power" which translates to "act altruistically while that's helpful for gaining power" in cultures that emphasize altruism. Does this make more sense?

 

I think that there is a version of an altruistic pursuit where one will, by default, "reduce his power." I think this scenario happens when, in the process of attempting to do good, one exposes himself more to unintended consequences. The person who sacrifices will reduce his ability to exercise power, but he may regain or supersede such loss if the tribe agrees with his rationale for such sacrifice.

Just because it was not among the organizing principles of any of the literate societies before Jesus does not mean it is not part of the human mental architecture.

[-]Roko1mo2-6

"willing to inflict large amounts of harm/suffering on others in pursuit of one's own interests/goals (in a way that can't be plausibly justified as justice or the like)"

Yes, obviously.

The vast majority of people would inflict huge amounts of disutility on others if they thought they could get away with it and benefitted from it.

What then prevents humans from being more terrible to each other? If the vast majority of people are like this, and they know that the vast majority of others are also like this (up to common knowledge), I don't see how you'd get a stable society in which people aren't usually screwing each other over a great deal.
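One standard game-theoretic framing of this puzzle (an editorial illustration with made-up payoffs, not from the thread) is the iterated prisoner's dilemma: repeated interaction plus retaliation can sustain cooperation even among purely selfish agents, which is one way a society of the people described above could still avoid constant mutual predation:

```python
# Toy iterated prisoner's dilemma: selfish agents cooperate when
# defection is punished in later rounds.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Run both strategies against each other; return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(their_history):
    # Cooperate first, then mirror the opponent's last move.
    return their_history[-1] if their_history else "C"

def always_defect(their_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(always_defect, tit_for_tat))  # defector gains once, then loses: (104, 99)
```

On this framing, what keeps selfish agents in line isn't niceness but the shadow of the future: unconditional defection wins a single round and then forfeits the cooperation surplus for the rest of the game.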

Any thoughts on why, if it's obvious, it's seldomly brought up around here (meaning rationalist/EA/AI safety circles)?

There are several levels in which humans can be bad or evil:

  1. Doing bad things because they believe them to be good
  2. Doing bad things while not caring whether they are bad or not
  3. Doing bad things because they believe them to be bad (Kant calls this "devilish")

I guess when humans are bad, they usually do 1). Even Hitler may have genuinely thought he was doing the morally right thing.

Humans also sometimes do 2), for minor things. But rarely if the anticipated bad consequences are substantial. People who consistently act according to 2) are called psychopaths. They have no inherent empathy for other people. Most humans are not psychopathic.

Humans don't do 3), they don't act evil for the sake of it. They aren't devils.