I agree that in real life the entropy argument is an argument in favor of it being actually pretty hard to fool a superintelligence into thinking it might be early in Tegmark III when it's not (even if you yourself are a superintelligence, unless you're doing a huge amount of work to intercept its internal sanity checks (which puts significant strain on the trade possibilities, and which flirts with being a technical-threat)). And I agree that if you can't fool a superintelligence into thinking it might be early in Tegmark III when it's not, then the purchasing ...
Dávid graciously proposed a bet, and while we were attempting to bang out details, he convinced me of two points:
The entropy of the simulators’ distribution need not be more than the entropy of the (square of the) wave function in any relevant sense. Despite the fact that subjective entropy may be huge, physical entropy is still low (because the simulations happen on a high-amplitude ridge of the wave function, after all). Furthermore, in the limit, simulators could probably just keep an eye out for local evolved life forms in their domain and wait until o...
Thanks to Nate for conceding this point.
I still think that other than just buying freedom for doomed aliens, we should run some non-evolved simulations of our own with inhabitants that are preferably p-zombies or animated by outside actors. If we can do this in a way that the AI doesn't notice it's in a simulation (I think this should be doable), this will provide evidence to the AI that civilizations do this simulation game (and not just the alien-buying) in general, and this buys us some safety in worlds where the AI eventually notices there are n...
I'm happy to stake $100 that, conditional on us agreeing on three judges and banging out the terms, a majority will agree with me about the contents of the spoilered comment.
If the simulators have only one simulation to run, sure. The trouble is that the simulators have many simulations they could run, and so the "other case" requires additional bits (roughly, the cross-entropy between the simulators' distribution over UFAIs and physics' distribution over UFAIs).
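(To make that quantity concrete, as a sketch: write $p$ for physics' distribution over UFAIs and $q$ for the simulators' distribution, with $p$ as the reference measure, which is the ordering I intend. Then the extra bits come to roughly the cross-entropy

$$H(p, q) = -\sum_x p(x)\,\log_2 q(x),$$

which exceeds physics' own entropy $H(p)$ by the KL divergence $D_{\mathrm{KL}}(p \,\|\, q)$.)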
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we observe them from space.
Consider the gas example again.
If you have gas that was compressed into the corner a long time ago and has long since expanded to f...
I basically endorse @dxu here.
Fleshing out the argument a bit more: the part where the AI looks around this universe and concludes it's almost certainly either in basement reality or in some simulation (rather than in the void between branches) is doing quite a lot of heavy lifting.
You might protest that neither we nor the AI have the power to verify that our branch actually has high amplitude inherited from some very low-entropy state such as the big bang, as a Solomonoff inductor would. What's the justification for inferring from the observation that we ...
seems to me to have all the components of a right answer! ...and some of a wrong answer. (we can safely assume that the future civ discards all the AIs that can tell they're simulated a priori; that's an easy tell.)
I'm heartened somewhat by your parenthetical pointing out that the AI's prior on simulation is low on account of there being too many AIs for simulators to simulate, which I see as the crux of the matter.
My answer is in spoilers, in case anyone else wants to answer and tell me (on their honor) that their answer is independent from mine, which will hopefully erode my belief that most folk outside MIRI have a really difficult time fielding wacky decision theory Qs correctly.
The sleight of hand is at the point where God tells both AIs that they're the only AIs (and insinuates that they have comparable degree).
Consider an AI that looks around and sees that it sure seems to be somewhere in Tegmark III. The hypothesis "I am in the basement of some branch that
The only thing we need there is that the AI can't distinguish sims from base reality, so it thinks it's more likely to be in a sim, as there are more sims.
I don't think this part does any work, as I touched on elsewhere. An AI that cares about the outer world doesn't care how many instances are in sims versus reality (and considers this fact to be under its control much more so than yours, to boot). An AI that cares about instantiation-weighted experience considers your offer to be a technical-threat and ignores you. (Your reasons to make the offer would...
One complication that I mentioned in another thread but not this one (IIRC) is the question of how much more entropy there is in a distant trade partner's model of Tegmark III (after spending whatever resources they allocate) than there is entropy in the actual (squared) wave function, or at least how much more entropy there is in the parts of the model that pertain to which civilizations fall.
In other words: how hard is it for distant trade partners to figure out that it was us who died, rather than some other plausible-looking human civilization that doe...
Starting from now? I agree that that's true in some worlds that I consider plausible, at least, and I agree that worlds whose survival-probabilities are sensitive to my choices are the ones that render my choices meaningful (regardless of how deterministic they are).
Conditional on Earth being utterly doomed, are we (today) fewer than 75 quantum bitflips away from being in a good state? I'm not sure; it probably varies across the doomed worlds where I have decent amounts of subjective probability. It depends how much time we have on the clock, depends where the points o...
What are you trying to argue? (I don't currently know what position y'all think I have or what position you're arguing for. Taking a shot in the dark: I agree that quantum bitflips have loads more influence on the outcome the earlier in time they are.)
You often claim that conditional on us failing in alignment, alignment was so unlikely that among branches that had roughly the same people (genetically) during the Singularity, only a 2^-75 fraction survives.
My first claim is not "fewer than 1 in 2^75 of the possible configurations of human populations navigate the problem successfully".
My first claim is more like "given a population of humans that doesn't even come close to navigating the problem successfully (given some unoptimized configuration of the background particles), probably you'd need to spend quite ...
the "you can't save us by flipping 75 bits" thing seems much more likely to me on a timescale of years than a timescale of decades; I'm fairly confident that quantum fluctuations can cause different people to be born, and so if you're looking 50 years back you can reroll the population dice.
This point feels like a technicality, but I want to debate it because I think a fair number of your other claims depend on it.
You often claim that conditional on us failing in alignment, alignment was so unlikely that among branches that had roughly the same people (genetically) during the Singularity, only a 2^-75 fraction survives. This is important, because then we can't rely on other versions of ourselves "selfishly" entering an insurance contract with us, and we need to rely on the charity of Dath Ilan that branched off long ago. I agree that's a big diff...
Summarizing my stance into a top-level comment (after some discussion, mostly with Ryan):
I was responding to David saying
Otherwise, I largely agree with your comment, except that I think that us deciding to pay if we win is entangled with/evidence for a general willingness to pay among the gods, and in that sense it's partially "our" decision doing the work of saving us.
and was insinuating that we deserve extremely little credit for such a choice, in the same way that a child deserves extremely little credit for a fireman saving someone that the child could not (even if it's true that the child and the fireman share some aspects of a decis...
Attempting to summarize your argument as I currently understand it, perhaps something like:
...Suppose humanity wants to be insured against death, and is willing to spend 1/million of its resources in worlds where it lives in exchange for 1/trillion of those resources in worlds where it would otherwise die.
It suffices, then, for humanity to be the sort of civilization that, if it matures, would comb through the multiverse looking for [other civilizations in this set], and find ones that died, and verify that they would have acted as follows if they'd survived, and then
Thanks for the cool discussion Ryan and Nate! This thread seemed pretty insightful to me. Here’s some thoughts / things I’d like to clarify (mostly responding to Nate's comments).[1]
Who’s doing this trade?
In places it sounds like Ryan and Nate are talking about predecessor civilisations like humanity agreeing to the mutual insurance scheme? But humans aren’t currently capable of making our decisions logically dependent on those of aliens, or capable of rescuing them. So to be precise the entity engaging in this scheme or other acausal interactions on our b...
What does degree of determination have to do with it? If you lived in a fully deterministic universe, and you were uncertain whether it was going to live or die, would you give up on it on the mere grounds that the answer is deterministic (despite your own uncertainty about which answer is physically determined)?
I think I'm confused why you work on AI safety then, if you believe the end-state is already 2^75 level overdetermined.
It's probably physically overdetermined one way or another, but we're not sure which way yet. We're still unsure about things like "how sensitive is the population to argument" and "how sensibly do governments respond if the population shifts".
But this uncertainty -- about which way things are overdetermined by the laws of physics -- does not bear all that much relationship to the expected ratio of (squared) quantum amplitude between bra...
Background: I think there's a common local misconception of logical decision theory that it has something to do with making "commitments" including while you "lack knowledge". That's not my view.
I pay the driver in Parfit's hitchhiker not because I "committed to do so", but because when I'm standing at the ATM and imagine not paying, I imagine dying in the desert. Because that's what my counterfactuals say to imagine. To someone with a more broken method of evaluating counterfactuals, I might pseudo-justify my reasoning by saying "I am acting as you would ...
"last minute" was intended to reference whatever timescale David would think was the relevant point of branch-off. (I don't know where he'd think it goes; there's a tradeoff where the later you push it the more that the people on the surviving branch care about you rather than about some other doomed population, and the earlier you push it the more that the people on the surviving branch have loads and loads of doomed populations to care after.)
I chose the phrase "last minute" because it is an idiom that is ambiguous over timescales (unlike, say, "last thr...
Do you buy that in this case, the aliens would like to make the deal and thus UDT from this epistemic perspective would pay out?
If they had literally no other options on offer, sure. But trouble arises when the competent ones can refine P(takeover) for the various planets by thinking a little further.
maybe your objection is that aliens would prefer to make the deal with beings more similar to them
It's more like: people don't enter into insurance pools against cancer with the dude who smoked his whole life and has a tumor the size of a grapefruit in ...
I largely agree with your comment, except that I think that us deciding to pay if we win is entangled with/evidence for a general willingness to pay among the gods, and in that sense it's partially "our" decision doing the work of saving us.
Sure, like how when a child sees a fireman pull a woman out of a burning building and says "if I were that big and strong, I would also pull people out of burning buildings", in a sense it's partially the child's decision that does the work of saving the woman. (There's maybe a little overlap in how they run the same...
There's a question of how thick the Everett branches are, where someone is willing to pay for us. Towards one extreme, you have the literal people who literally died, before they have branched much; these branches need to happen close to the last minute. Towards the other extreme, you have all evolved life, some fraction of which you might imagine might care to pay for any other evolved species.
The problem with expecting folks at the first extreme to pay for you is that they're almost all dead (like dead). The problem with expecting folks at the ...
Conditional on the civilization around us flubbing the alignment problem, I'm skeptical that humanity has anything like a 1% survival rate (across any branches since, say, 12 Kya). (Haven't thought about it a ton, but doom looks pretty overdetermined to me, in a way that's intertwined with how recorded history has played out.)
My guess is that the doomed/poor branches of humanity vastly outweigh the rich branches, such that the rich branches of humanity lack the resources to pay for everyone. (My rough mental estimate for this is something like: you've prob...
Taking a second stab at naming the top reasons I expect this to fail (after Ryan pointed out that my first stab was based on a failure of reading comprehension on my part, thanks Ryan):
This proposal seems to me to have the form "the fragments of humanity that survive offer to spend a (larger) fraction of their universe on the AI's goals so long as the AI spends a (smaller) fraction of its universe on their goals, with the ratio in accordance with the degree of magical-reality-fluid-or-whatever that reality allots to each".
(Note that I think this is not at al...
I agree that in scenarios where humanity survives in 1/X portion of reality and offers to put 1/Y portion of their universe towards the AI's goals, then so long as 1/(X * Y) of a universe is more than it'd cost the AI to preserve the humans, the humans are at least making an offer that is worth more than nothing.
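(To write that condition out, with $c$ as a label I'm introducing for the fraction of the AI's universe it would cost to preserve the humans: the offer is worth taking roughly when

$$\frac{1}{X}\cdot\frac{1}{Y} \;=\; \frac{1}{XY} \;>\; c,$$

i.e. when the expected payment from the surviving branches exceeds the cost of keeping the humans around.)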
(There is then a question of how many better offers get made. My top guess is that X and Y are such that there's not much effect here, and that the question of what happens to saved copies of human minds is more about whether aliens buy copies than about which dead or distant trade-partners make offers that the AI considers juiciest.)
Here's how I think this choice looks from the perspective of a correctly-reasoning AI (having skimmed only your proposal for humanity in the future):
(EDIT: as Ryan points out, the deal he's proposing is one where the AI is allowed to grab the universe in front of it so long as it gives humans one planet's worth of resources, which does change the calculus and makes the deal worthwhile. I had not read the proposal carefully enough to notice this, oops, sorry. The critique below is invalid if humanity asks only for one planet. Note however that this is sti...
Note that we don't want the AI to hand over the reins to humans, just to avoid killing humans when possible/cheap and we're willing to pay quite a bit for this (in the proposal, IDK if I personally think we should pay). So, it should look like a good offer for AIs who care about Tegmark IV (with a measure etc).
So, if humans execute this scheme, the AI's options should look something like:
This is an excerpt from a comment I wrote on the EA forum, extracted and crossposted here by request:
There's a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say "I'm such a fool; I should have bet 23".
More useful would be to say "I'm such a fool; I should have noticed that the EV of this gamble is negative." Now at least you aren't asking for magic lottery powers.
Even more useful would be to say "I'm such a fool; I had three chances to notice that this bet was bad: when my partner was trying to ex...
my original 100:1 was a typo, where i meant 2^-100:1.
this number was in reference to ronny's 2^-10000:1.
when ronny said:
I’m like look, I used to think the chances of alignment by default were like 2^-10000:1
i interpreted him to mean "i expect it takes 10k bits of description to nail down human values, and so if one is literally randomly sampling programs, they should naively expect 1:2^10000 odds against alignment".
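(spelling out the naive arithmetic in that reading: if it takes $n \approx 10{,}000$ independent bits to nail down human values and you literally sample programs at random, the chance of landing on an aligned one is about $2^{-n}$, i.e. odds of roughly $1:2^{10000}$ against.)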
i personally think this is wrong, for reasons brought up later in the convo--namely, the relevant question is not how many bits it takes to...
Agreed that the proposal is underspecified; my point here is not "look at this great proposal" but rather "from a theoretical angle, risking others' stuff without the ability to pay to cover those risks is an indirect form of probabilistic theft (that market-supporting coordination mechanisms must address)" plus "in cases where the people all die when the risk is realized, the 'premiums' need to be paid out to individuals in advance (rather than paid out to actuaries who pay out a large sum in the event of risk realization)". Which together yield the downs...
In relation to my current stance on AI, I was talking with someone who said they’re worried about people putting the wrong incentives on labs. At various points in that convo I said stuff like (quotes are not exact; third paragraph is a present summary rather than a re-articulation of a past utterance):
“Sure, every lab currently seems recklessly negligent to me, but saying stuff like “we won’t build the bioweapon factory until we think we can prevent it from being stolen by non-state actors” is directionally better than not having any commitments about any...
It seems like this is only directionally better if it’s true, and this is still an open question for me. Like, I buy that some of the commitments around securing weights are true, and that seems good. I’m way less sure that companies will in fact pause development pending their assessment of evaluations. And to the extent that they are not, in a meaningful sense, planning to pause, this seems quite bad. It seems potentially worse, to me, to have a structure legitimizing this decision and making it seem more responsible than it is, rather than just openly d...
If you allow indirection and don't worry about it being in the right format for superintelligent optimization, then sufficiently-careful humans can do it.
Answering your request for prediction, given that it seems like that request is still live: a thing I don't expect the upcoming multimodal models to be able to do: train them only on data up through 1990 (or otherwise excise all training data from our broadly-generalized community), ask them what superintelligent machines (in the sense of IJ Good) should do, and have them come up with something like CEV (...
I claim that to the extent ordinary humans can do this, GPT-4 can nearly do this as well
(Insofar as this was supposed to name a disagreement, I do not think it is a disagreement, and don't understand the relevance of this claim to my argument.)
Presumably you think that ordinary human beings are capable of "singling out concepts that are robustly worth optimizing for".
Nope! At least, not directly, and not in the right format for hooking up to a superintelligent optimization process.
(This seems to me like plausibly one of the sources of misunderstandi...
(I had used that pump that very day, shortly before, to pump up the replacement tire.)
Separately, a friend pointed out that an important part of apologies is the doer showing they understand the damage done, and the person hurt feeling heard, which I don't think I've done much of above. An attempt:
I hear you as saying that you felt a strong sense of disapproval from me; that I was unpredictable in my frustration in a way that kept you feeling (perhaps) regularly on-edge and stressed; that you felt I lacked interest in your efforts or attention for you; and perhaps that this was particularly disorienting given the impression you had of me both from my ...
I did not intend it as a one-time experiment.
In the above, I did not intend "here's a next thing to try!" to be read like "here's my next one-time experiment!", but rather like "here's a thing to add to my list of plausible ways to avoid this error-mode in the future, as is a virtuous thing to attempt!" (by contrast with "I hereby adopt this as a solemn responsibility", as I hypothesize you interpreted me instead).
Dumping recollections, on the model that you want more data here:
I intended it as a general thing to try going forward, in a "seems like a sensi...
Thanks <3
(To be clear: I think that at least one other of my past long-term/serious romantic partners would say "of all romantic conflicts, I felt shittiest during ours". The thing that I don't recall other long-term/serious romantic partners reporting is the sense of inability to trust their own mind or self during disputes. (It's plausible to me that some have felt it and not told me.))
Chiming in to provide additional datapoints. (Apologies for this being quite late to the conversation; I frequent The Other Forum regularly, and LW much less so, and only recently read this post/comments.) My experience has been quite different to a lot of the experiences described here, and I was very surprised when reading.
I read all of the people who have had (very) negative experiences as being sincere and reporting events and emotions as they experienced them. I could feel what I perceived to be real distress and pain in a lot of the comments, a...
Insofar as you're querying the near future: I'm not currently attempting work collaborations with any new folk, and so the matter is somewhat up in the air. (I recently asked Malo to consider a MIRI-policy of ensuring all new employees who might interact with me get some sort of list of warnings / disclaimers / affordances / notes.)
Insofar as you're querying the recent past: There aren't many recent cases to draw from. This comment has some words about how things went with Vivek's hires. The other recent hires that I recall both (a) weren't hired to do res...
One frame I want to lay out is that it seems like you're not accounting for the organizational cost of how you treat employees/collaborators. An executive director needing to mostly not talk to people, and shaping hiring around social pain tolerance, is a five alarm fire for organizations as small as MIRI. Based on the info here, my first thought is you should be in a different role, so that you have fewer interactions and less implied power. That requires someone to replace you as ED, and I don't know if there are any options available, but at...
Do I have your permission to quote the relevant portion of your email to me?
Yep! I've also just reproduced it here, for convenience:
(One obvious takeaway here is that I should give my list of warnings-about-working-with-me to anyone who asks to discuss their alignment ideas with me, rather than just researchers I'm starting a collaboration with. Obvious in hindsight; sorry for not doing that in your case.)
I warned the immediately-next person.
It sounds to me like you parsed my statement "One obvious takeaway here is that I should give my list of warnings-about-working-with-me to anyone who asks to discuss their alignment ideas with me, rather than just researchers I'm starting a collaboration with." as me saying something like "I hereby adopt the solemn responsibility of warning people in advance, in all cases", whereas I was interpreting it as more like "here's a next thing to try!".
I agree it would have been better of me to give direct bulldozing-warnings explicitly to Vivek's hires.
Here is the statement:
(One obvious takeaway here is that I should give my list of warnings-about-working-with-me to anyone who asks to discuss their alignment ideas with me, rather than just researchers I'm starting a collaboration with. Obvious in hindsight; sorry for not doing that in your case.)
I agree that this statement does not explicitly say whether you would make this a one-time change or a permanent one. However, the tone and phrasing—"Obvious in hindsight; sorry for not doing that in your case"—suggested that you had learned from the experience a...
On the facts: I'm pretty sure I took Vivek aside and gave a big list of reasons why I thought working with me might suck, and listed that there are cases where I get real frustrated as one of them. (Not sure whether you count him as "recent".)
My recollection is that he probed a little and was like "I'm not too worried about that" and didn't probe further. My recollection is also that he was correct in this; the issues I had working with Vivek's team were not based in the same failure mode I had with you; I don't recall instances of me getting frustrated an...
...I think I'd also be more compelled by this argument if I was more sold on warnings being the sort of thing that works in practice.
Like... (to take a recent example) if I'm walking by a whiteboard in rosegarden inn, and two people are like "hey Nate can you weigh in on this object-level question", I don't... really believe that saying "first, be warned that talking technical things with me can leave you exposed to unshielded negative-valence emotions (frustration, despair, ...), which some people find pretty crappy; do you still want me to weigh in?" actual
I've been asked to clarify a point of fact, so I'll do so here:
My recollection is that he probed a little and was like "I'm not too worried about that" and didn't probe further.
This does ring a bell, and my brain is weakly telling me it did happen on a walk with Nate, but it's so fuzzy that I can't tell if it's a real memory or not. A confounder here is that I've probably also had the conversational route "MIRI burnout is a thing, yikes" -> "I'm not too worried, I'm a robust and upbeat person" multiple times with people other than Nate.
In private ...
In particular, you sound [...] extremely unwilling to entertain the idea that you were wrong, or that any potential improvement might need to come from you.
you don't seem to consider the idea that maybe you were more in a position to improve than he was.
Perhaps you're trying to point at something that I'm missing, but from my point of view, sentences like "I'd love to say "and I've identified the source of the problem and successfully addressed it", but I don't think I have" and "would I have been living up to my conversational ideals (significantly)...
Thanks for saying so!
My intent was not to make you feel bad. I apologize for that, and am saddened by it.
(I'd love to say "and I've identified the source of the problem and successfully addressed it", but I don't think I have! I do think I've gotten a little better at avoiding this sort of thing with time and practice. I've also cut down significantly on the number of reports that I have.)
For whatever it's worth: I don't recall wanting you to quit (as opposed to improve). I don't recall feeling ill will towards you personally. I do not now think po
I do have some general sense here that those aren't emotionally realistic options for people with my emotional makeup.
Here's my take: From the inside, Nate feels like he is incapable of not becoming very frustrated, even angry. In a sense this is true. But this state of affairs is in fact a consequence of Nate not being subject to the same rules as everybody else.
I think I know what it's like, to an extent — I've had anger issues since I was born, and despite speaking openly about it to many people, I've never met anyone who's been able to really understan...
I have some replies to Nate's reply.
Overview:
Perhaps I'm missing some obvious third alternative here, that can be practically run while experiencing a bunch of frustration or exasperation. (If you know of one, I'd love to hear it.)
One alternative could be to regulate your emotions so you don't feel as intense frustration from a given epistemic position? I think this is what most people do.
I suspect that lines like this are giving people the impression that you [Nate] don't think there are (realistic) things that you can improve, or that you've "given up".
...I do have some general sense here that those aren't emotionally realistic options for people with my emotional makeup.
I have a sense that there's some sort of trap for people with my emotional makeup here. If you stay and try to express yourself despite experiencing strong feelings of frustration, you're "almost yelling". If you leave because you're feeling a bunch of frustration and
I think it's cool that you're engaging with criticism and acknowledging the harm that happened as a result of your struggles.
And, to cut to the painful part, that's about the only positive thing that I (random person on the internet) have to say about what you just wrote.
In particular, you sound (and sorry if I'm making any wrong assumption here) extremely unwilling to entertain the idea that you were wrong, or that any potential improvement might need to come from you.
You say:
...For whatever it's worth: I don't recall wanting you to quit (as opposed to imp
That helps somewhat, thanks! (And sorry for making you repeat yourself before discarding the erroneous probability-mass.)
I still feel like I can only barely maybe half-see what you're saying, and only have a tenuous grasp on it.
Like: why is it supposed to matter that GPT can solve ethical quandaries on-par with its ability to perform other tasks? I can still only half-see an answer that doesn't route through the (apparently-disbelieved-by-both-of-us) claim that I used to argue that getting the AI to understand ethics was a hard bit, by staring at sentences ...
I have the sense that you've misunderstood my past arguments. I don't quite feel like I can rapidly precisely pinpoint the issue, but some scattered relevant tidbits follow:
I didn't pick the name "value learning", and probably wouldn't have picked it for that problem if others weren't already using it. (Perhaps I tried to apply it to a different problem than Bostrom-or-whoever intended it for, thereby doing some injury to the term and to my argument?)
Glancing back at my "Value Learning" paper, the abstract includes "Even a machine intelligent enough to understand its designers’ intentions would not necessarily act as intended", which supports my recollection that I was never trying to use "Value Learning" for "getting the AI to understand human values is hard" as opposed to "getting the AI to act towards value in particular (as opposed to something else) is hard", as supports my sense that this isn't hindsight bias, and is in fact a misunderstanding.
For what it's worth, I didn't claim that you argue...
In academia, for instance, I think there are plenty of conversations in which two researchers (a) disagree a ton, (b) think the other person's work is hopeless or confused in deep ways, (c) honestly express the nature of their disagreement, but (d) do so in a way where people generally feel respected/valued when talking to them.
My model says that this requires them to still be hopeful about local communication progress, and happens when they disagree but already share a lot of frames and concepts and background knowledge. I, at least, find it much harde...
(I am pretty uncomfortable with all the "Nate / Eliezer" going on here. Let's at least let people's misunderstandings of me be limited to me personally, and not bleed over into Eliezer!)
(In terms of the allegedly-extraordinary belief, I recommend keeping in mind jimrandomh's note on Fork Hazards. I have probability mass on the hypothesis that I have ideas that could speed up capabilities if I put my mind to it, as is a very different state of affairs from being confident that any of my ideas works. Most ideas don't work!)
(Separately, the infosharing agreem...
I hereby push back against the (implicit) narrative that I find the standard community norms costly, or that my communication protocols are "alternative".
My model is closer to: the world is a big place these days, different people run on different conversation norms. The conversation difficulties look, to me, symmetric, with each party violating norms that the other considers basic, and failing to demonstrate virtues that the other considers table-stakes.
(To be clear, I consider myself to bear an asymmetric burden of responsibility for the conversations go...
I sure don't buy a narrative that I'm in violation of the local norms.
This is preposterous.
I'm not going to discuss specific norms. Discussing norms with Nate leads to an explosion of conversational complexity.[1] In my opinion, such discussion can sound really nice and reasonable, until you remember that you just wanted him to e.g. not insult your reasoning skills and instead engage with your object-level claims... but somehow your simple request turns into a complicated and painful negotiation. You never thought you'd have to explain "being nice."
Th...
I'm putting in rather a lot of work (with things like my communication handbook) to making my own norms clearer, and I follow what I think are good meta-norms of being very open to trying other people's alternative conversational formats.
Nate, I am skeptical.
As best I can fathom, you put in very little work to proactively warn new hires about the emotional damage which your employees often experience. I've talked to a range of people who have had professional interactions with you, both recently and further back. Only one of the recent cases reported that ...
Huh, I initially found myself surprised that Nate thinks he's adhering to community norms. I wonder if part of what's going on here is that "community norms" is a pretty vague phrase that people can interpret differently.
Epistemic status: Speculative. I haven't had many interactions with Nate, so I'm mostly going off of what I've heard from others + general vibes.
Some specific norms that I imagine Nate is adhering to (or exceeding expectations in):
Less "hm they're Vivek's friends", more "they are expressly Vivek's employees". The working relationship that I attempted to set up was one where I worked directly with Vivek, and gave Vivek budget to hire other people to work with him.
If memory serves, I did go on a long walk with Vivek where I attempted to enumerate the ways that working with me might suck. As for the others, some relevant recollections:
I donated $25k. Thanks for doing what you do.