All of Unreal's Comments + Replies

Catholicism never would have collected the intelligence necessary to invent a nuke. Their worldview was not compatible with science. It was an inferior organizing principle. ("inferior" meaning less capable of coordinating a collective intelligence needed to build nukes.)

You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal with the destruction of life on this planet, and therefore the less intelligent, less destructive religions actually have more ethical ground ... (read more)

4Eli Tyre
I agree that religions mostly don't cause x-risk, because (for the most part) they're not sufficiently good at organizing intellectual endeavor. (There might be exceptions to that generalization, and they can coopt the technological products of other organizational systems.) I agree that the x-risk is an overriding concern, in terms of practical consequences. If any given person does tons of good things, and also contributes to x-risk, it's easy for the x-risk contribution to swamp everything else in their overall impact.

But yeah, I object to calling a person or an institution more ethical because they are / it is too weak to do (comparatively) much harm.

I care about identifying which people and institutions are more ethical so that 1) I can learn ethics from them, and 2) I can defer to them. If a person or institution avoids causing harm because they're weak, they're mostly not very helpful to learn from (they can't help me figure out how to wield power ethically, at least), and deferring to them or otherwise empowering them is actively harmful, because doing so removes the feature that was keeping them (relatively) harmless.

A person who is dispositionally a bully, but who is physically weak, and who would immediately start acting like a bully if he were bigger, or if he had more social power, is not ethical on account of his not bullying people.

An AGI that is "aligned" until it is much more powerful than the rest of the world is not aligned. A church that does (relatively) less harm unless and until it is powerful enough to command armies or nukes is likewise not very trustworthy.

To reason well in these domains, I need a concept of ethics that can be discussed independently of power. And therefore I need to be able to evaluate ethics independently of actual harm caused. Not just "how much harm does this institution do?" but "how much harm would it do, in other circumstances?". I might want to ask "how does this person or institution beh

are you putting forward that something about worldviews sometimes relies on faster than light signaling?

OK this is getting close. I am saying worldviews CANNOT EVER be fast enough, and that's why the goal is to drop all worldviews to get "fast enough". Though even the very idea of "fast enough" is itself 'wrong', because it's conceptual / limited / false. This is my worst-best attempt to point to a thing, but I am trying to be as literal as possible, not poetic. 

No response can be immediate in a physical universe

Yeah, we're including 'physical universe' a... (read more)

5Eli Tyre
Ok, well maybe it doesn't make sense to answer this question then, but... Why is it such a crucial desideratum to have (apparently literally) instantaneous responsiveness? What's insufficient about a 200 millisecond delay?

So far, "this isn't instantaneous [in a way that my current worldview suggests is literally, fundamentally, impossible]" isn't a very compelling reason for me to try to do a different thing than what I'm already doing. It seems like an irrelevant desideratum rather than a reason to Halt, Melt, and Catch Fire.

I don't know if I fully get you, but you also nailed it on the head.

In such a situation, I think the one weird trick would be to invent a belief system that actively denies being one. To teach people a dogma that would (among other things) insist that there is no dogma, you just see the reality as it is (unlike all the other people, who merely see their dogmas). To invent rituals that consist (among other things) of telling yourself repeatedly that you have no rituals (unlike all the other people). To have leaders that deny being leaders (and yet they are su

... (read more)
8Viliam
The part that you quoted was originally supposed to end with: "So, basically... Buddhism", but then I noticed it actually applies to science, too. Because it's both, kind of. By trying to get out of systems, you create something that people from outside will describe as yet another system. (And they will include it into the set of systems they are trying to get out of.) Is there an end to this? I don't know, really. (Also, it reminds me of this.)

I think what many people do is apply this step once. They get out of the system that their parents and/or school created for them, and that's it. Some people do this step twice or more. For example, first they rebel against their parents. Then they realize that their rebellion was kinda stupid and perhaps there is more to life than smoking marijuana, so they get out of that system, too. And that's it. Or they join a cult, and then they leave it. Etc.

Some people notice that this is a sequence -- that you can probably do an arbitrary number of steps, always believing that now you are getting out of systems, when in hindsight you have always just adopted yet another system. But even if you notice this, what can you do about it? Is there a way out that isn't just another iteration of the same?

The problem is that even noticing the sequence and trying to design a solution such as "I will never get attached to any system; I will keep abandoning them the moment I notice that there is such a thing; and I will always suspect that anything I see is such a thing", is... yet another system. One that is more meta, and perhaps therefore more aesthetically appealing, but a system nonetheless.

Another option is to give up and say "yeah, it's systems all the way down; and this one is the one I feel most comfortable with, so I am staying here". So you stay consciously there; or maybe halfway there and halfway in the next level, because you actually do recognize your current system as a system... One person's "the true way to see reality"

Or why can't you have a worldview that computes the best answer to any given "what should I do" question, to arbitrary but not infinite precision?

I am not talking about any 'good enough' answer. Whatever you deem 'good enough' to some arbitrary precision.

I am talking about the correct answer every time. This is not a matter of sufficient compute. Because if the answer comes even a fraction of a second AFTER, it is already too late. The answer has to be immediate. To get an answer that is immediate, that means it took zero amount of time, no compute is invo... (read more)

7Eli Tyre
That sounds like either a nonsensical or a fundamentally impossible constraint to me.

Max human reaction times (for something like responding to a visual cue by pressing a button) are about 150-200 milliseconds. Just on the input side, for a signal to travel from the retina, down the optic nerve, through the brain to the visual cortex takes 50-80 milliseconds. By the time your sensory cortices receive (not even process, just receive) the raw sense data, you're already a fraction of a second out of sync with reality.

Possibly you're not concerned with the delay between an external event occurring and when the brain parses it into a subjective experience, only the delay between the moment of subjective experience[1] and one's response to it? But the same basic issue applies at every step. It takes time for the visual cortex to do image recognition. It takes time to pass info to the motor cortex. It takes time for the motor cortex to send signals to the muscles. It takes time for the muscles to contract.

No response can be immediate in a physical universe with (in the most extreme case) a lightspeed limit. Insofar as a response is triggered by some event, some time will pass between the event and the response.

Or to put it another way, the only way for a reaction to a situation to not involve any computation is for the response to be completely static and unvarying, which is to say not even slightly responsive to the details of the situation. A rock that has "don't worry, be happy" painted on it can give a perfect, truly instantaneous response, if "a rock that says 'don't worry, be happy'" is the perfect response to every possible situation.

Am I being blockheaded here, and missing the point? Do you mean something different than I do by "instantaneous"? Or are you putting forward that something about worldviews sometimes relies on faster-than-light signaling? Or is your point that immediate responses are impossible?

1. ^ Notably my understanding is

I am saying things in a direct and more or less literal manner. Or at least I'm trying to. 

I did use a metaphor. I am not using "poetry"? When I say "Discard Ignorance" I mean that as literally as possible. I think it's somewhat incorrect to call what I'm saying phenomenology. That makes it sound purely subjective.

Am I talking down to you? I did not read it that way. Sorry it comes across that way. I am attempting to be very direct and blunt because I think that's more respectful, and it's how I talk. 

4Ben Pace
I propose we wrap this particular thread up for now (with another reply from you as you wish). I will say that for this bit:

Being asked "So what's the answer? What's the path?" feels more like answering a riddle than being asked "The capital city of England is London. Please repeat back to me the capital city of England?".

Direct speech is clear and unambiguous. Direct speech is like "Please can you close the door?" and indirect speech is like "Oh I guess it's chilly in here" or "Perhaps we should get people's temperature preferences", which may be a sincere attempt to communicate that you want the door closed but isn't direct.

What you wrote was not especially unambiguous or non-metaphorical. I think it's a sincere attempt at communication but it's not direct. Being asked to just answer "can anyone repeat back what I'm saying without adding or subtracting anything?" seems hard when you wrote in a rather metaphorical and roundabout way.

First paragraph: 3/10. The claim is that something was already more natural to begin with, but you need deliberate practice to unlock the thing that was already more natural. It's not that it 'comes more naturally' after you practice something. What 'felt' natural before was actually very unnatural and hindered, but we don't realize this until after practicing. 

2nd, 3rd, 4th paragraph: 2/10. This mostly doesn't seem relevant to what I'm trying to offer.

...

It's interesting trying to watch various people try to repeat what I'm saying or respond to what ... (read more)

5Ben Pace
Sure, I can try again during my lunch break. I think you are actually emphasizing this section. It sounds like you believe, as I become more aligned with who I want to be and with goodness, this will not feel strained or effortful, but in fact I will experience less friction than I used to feel, less discomfort or unease. This is not a learned way of being but rather a process of backing out of bad and unhealthy practices.

I don't know that it's easy for me to describe how this feels in more phenomenological detail. I'd have to find some examples. Most of my experiences of becoming a better person have been around finding good principles that I believe in, and feeling good relying on them and seeing that they indeed do improve the world and help me avoid unethical action/behavior. It has simplified my life tremendously (mostly). So I believe you mean that, when you find the right way of acting, it feels more natural and less friction-y than the way you were previously behaving.

The primary thing I don't understand is that I can't tell what claim you are making about what exactly one is approaching. You keep saying all the things it isn't without saying what it is. I am not sure if you mean "You are born well and then have lots of bad habits and unhealthy practices added to you" or if you are saying "You were not necessarily ever in the right state of mind, but approach it through careful practice, and then it will feel better/natural-er/etc". Also you keep saying that it's not a "state of mind" or any other noun I might use to describe it, which isn't helpful for saying what it is.

My current guess is that you don't think it's any particular state, but that being a spiritually whole person is more about everything (both in the mind and in the mind's relationship to the environment) working together well. But not sure.

Regarding

and I think talking about phenomenology is hard and subtle and the fact you have failed to have people hear you as you use meta

Just respond genuinely. You already did.

I don't know how else to phrase it, but I would like to not contradict interdependent origination. While still pointing toward what happens when all views are dropped and insight becomes possible. 

I appreciate this attempt... but no it is not it. 

What I'm talking about is not the skill to combine S1 and S2 fluidly as needed. 

I will respond to this more fully at a later point. But a quick correction I wish to make:

What I'm referring to is not about System 1 or System 2 so much. It's not that I rely more on System 1 to do things. System 1 and System 2 are both unreliable systems, each with major pitfalls. 

I'm more guided by a wisdom that is not based in System 1 or System 2 or any "process" whatsoever. 

I keep trying to point at this, and people don't really get it until they directly see it. That's fine. But I wish people would at least mentally try to understand what ... (read more)

8Ben Pace
Okay, sounds like I have misunderstood you. Sure, I can retry. My next attempt to pass your ITT is thus: How close is this to what you're saying, from 1 to 10?
4AprilSR
To have a go at it: Some people try to implement a decision-making strategy that's like, "I should focus mostly on System 1" or "I should focus mostly on System 2." But this isn't really the point. The goal is to develop an ability to judge which scenarios call for which types of mental activities, and to be able to combine System 1 and System 2 together fluidly as needed.

Yes we agree. 👍🌻

I think I mention this in the essay too. 

If we are just changing at the drop of a hat, not for truth, but for convenience or any old reason, like most people are, ... 

or even under very strenuous dire circumstances, like we're about to die or in excruciating pain or something...

then that is a compromised mind. You're working with a compromised, undisciplined mind that will change its answers as soon as externals change.

Views change. Even our "robust" principles can go out the window under extreme circumstances. (So what's going ... (read more)

8Ben Pace
I am reading this as “I rely on explicit theory much less when guiding my actions than I used to”. I think this is also true of me; much of my decision-making on highly important or high-stakes decisions (but also ~most decisions) is very intuitive and fast, and I often don’t reflect on it very much. I have lately been surprised to notice how little cognition I will spend on substantial life decisions, or scary decisions, simply trusting myself to get it right. I know other people (who I like and respect) who rely on explicit reasoning when deciding whether to take a snack break, whether to go to a party, whether to accept a job, etc., and I think the stronger version of themselves would end up trusting their system 1 processes on such things.

But I think anyone who is not regularly doing a ton of system 2 reflection on decisions that they are confused about, or arguing about the principles involved in their and others’ decisions, will fail to act well or be principled. I do not think there is a way around it, of avoiding this hard work. I think I would hazard a guess that the person who must rely on explicit theory for guiding behavior is more likely to be able to grow into a wholesome and principled person than the intuitive and kind person who doesn’t see much worth in developing and arguing about explicit principles. Caring about principles seems much rarer and non-natural to me.

I would also argue against the claim that religious institutions are "devoid of moral truths". I think this is mostly coming from secularist propaganda. In fact these institutions are still some of the most charitable, generous institutions that provide humanitarian aid all over the world. Their centuries-old systems are relatively effective at rooting out evil-doing in their ranks. 

Compared to modern corporations, they're acting out of a much clearer sense of morality than capitalist institutions. Compared to modern secular governments, such as that of th... (read more)

OK, well, I can tell a story about why corruption seeped into the Church, and it doesn't sound crazy to me. (Black Death happened, is what.) 

The medieval Christian church's power-seeking and hypocrisy predate the Black Death. 

  • Charlemagne led campaigns against the Saxon pagans in the late 8th century, to convert them by force, with the blessing of the papacy. 
  • Medieval popes very regularly got into power-conflicts with Medieval kings.[1]
  • Church leaders got into conflicts with each other, often declaring each other illegitimate.[2]
  • T
... (read more)

My second point is that if moral realism were true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:

 

This is easy to research. 

I will name a few ways the Bud... (read more)

3Unreal
I would also argue against the claim that religious institutions are "devoid of moral truths". I think this is mostly coming from secularist propaganda. In fact these institutions are still some of the most charitable, generous institutions that provide humanitarian aid all over the world. Their centuries-old systems are relatively effective at rooting out evil-doing in their ranks.

Compared to modern corporations, they're acting out of a much clearer sense of morality than capitalist institutions. Compared to modern secular governments, such as that of the US, they're doing less violence and harm to the planet. They did not invent nuclear weapons. They are not striving to build AGI. Furthermore, I doubt they would.

When spiritual teachers were asked about creating AI versions of themselves, they were not interested, and one company had to change their whole business model to creating sales bots instead. (Real story. I won't reveal which company.)

I'm sad about all the corruption in religious institutions, still. It's there. Hatred against gay people and controlling women's bodies. The crusades. The jihads. Using coercive shame to keep people down. OK, well, I can tell a story about why corruption seeped into the Church, and it doesn't sound crazy to me. (Black Death happened, is what.)

But our modern world has become nihilistic, amoral, and vastly more okay with killing large numbers of living beings, ecosystems, habitats, the atmosphere, etc. Pat ourselves on the back for civil rights, yes. Celebrate this. But who's really devoid of moral truths here? When we are the ones casually destroying the planet and even openly willing to take 10%+ chances at total extinction to build an AGI? The Christians and the Buddhists and even the jihadists aren't behind this. 

I mean, if moral realism were correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths. 

This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless,

... (read more)

I use the mind too. I appreciate the mind a lot. I wouldn't choose to be less smart, for instance. 

But we are still over-using the mind to solve our problems. 

Intelligence doesn't solve problems. It creates problems. 

Wisdom resolves problems. 

This is what's really hard to convey. But I am trying, as you say.

Yes non-attachment points in the same direction. 

Another way of putting it is "negate everything." 

Another way of putting it is "say yes to everything." 

Both of these work toward non-attachment. 

Hm, if by "discovering" you mean 
Dropping all fixed priors 
Making direct contact with reality (which is without any ontology) 
And then deep insight emerges
And then after-the-fact you construct an ontology that is most beneficial based on your discovery

Then I'm on board with that

And yet I still claim that ontology is insufficient, imperfect, and not actually gonna work in the end. 

7romeostevensit
'these practices grant unmediated access to reality' sounds like a metaphysical claim. The Buddha's take on his system's relevance to metaphysics seems pretty consistently deflationary to me.
2[comment deleted]

we serve oatmeal every breakfast where i live 
love oatmeal

Hm, you know I do buy that also. 

The task is much harder now, due to changing material circumstances, as you say. Modern culture has in some sense vaccinated itself against certain forms of wisdom and insight. 

We acknowledge this problem and are still making an effort to address it, using modern technology. I cannot claim we're 'anywhere close' to resolving this? We're just firmly GOING to try, and we believe we in particular have a comparative advantage, due to a very solid community of spiritual practitioners. We have AT LEAST managed to g... (read more)

no anyone can visit! we have guests all the time. feel free to DM me if you want to ask more. or you can just go on the website and schedule a visit. 

Alex Flint is still here too, altho he lives on neighboring land now. 

'directly addressing suffering' is a good description of what we're up to? 

if you have any interest in visiting MAPLE, lmk. ? (monasticacademy.org) 

wow thanks for trying to make this distinction here on LessWrong. admirable. 

i don't seem to have the patience to do this kind of thing here, but i'm glad someone is trying. 

1jbkjr
You're welcome, and thanks for the support! :) Re: MAPLE, I might have an interest in visiting—I became acquainted with MAPLE because I think Alex Flint spent some time there? Does one need to be actively working on an AI safety project to visit? I am not currently doing so, having stepped away from AI safety work to focus on directly addressing suffering.
2Unreal
if you have any interest in visiting MAPLE, lmk. ? (monasticacademy.org) 

We have a significant comparative advantage over pretty much all of Western philosophy. I know this is a 'bold claim'. If you're further curious you can come visit the Monastic Academy in Vermont, since it seems best 'shown' rather than 'told'. But we also plan on releasing online content in the near future to communicate our worldview. 

We do see that all the previous efforts have perhaps never quite consistently and reliably succeeded, in both hemispheres. (Because, hell, we're here now.) But it is not fair to say they have never succeeded to any degre... (read more)

4xpym
I do agree that there are some valuable Eastern insights that haven't yet penetrated the Western mainstream, so work in this direction is worth a try. Also reasonable. Here I disagree. I think that much of "what is good" is contingent on our material circumstances, which are changing ever faster these days, so it's no surprise that old answers no longer work as well as they did in their time. Unfortunately, nobody has discovered a reliable way to timely update them yet, and very few seem to even acknowledge this problem.

Thank you for pointing this out, as it is very important. 

The morality / ethics of the human beings matters a lot. But it seems to matter more than just a lot. If we get even a little thing wrong here, ...

But we're getting more than just a little wrong here, imo. Afaict most modern humans are terribly confused about morality / ethics. As you say, "what is even good".

I've spoken with serious mathematicians who believe they might have a promising direction to the AI alignment problem. But they're also confused about what's good. That is not their re... (read more)

-3xpym
The other option is being slightly less terribly confused, I presume. Do you consider yourselves to have a significant comparative advantage in this area relative to all the other moral philosophers throughout the millennia whose efforts weren't enough to lift humanity from the aforementioned dismal state?

Rationality seems to be missing an entire curriculum on "Eros" or True Desire.

I got this curriculum from other trainings, though. There are places where it's hugely emphasized and well-taught. 

I think maybe Rationality should be more open to sending people to different places for different trainings and stop trying to do everything on its own terms. 

It has been way better for me to learn how to enter/exit different frames and worldviews than to try to make everything fit into one worldview / frame. I think some Rationalists believe everything is ... (read more)

1meedstrom
Aye - see also In Praise of Fake Frameworks. It's helped me interface with a lot of people who would've otherwise befuddled me. That gives me a more fleshed-out range of possible perspectives on things, which shortcuts to new knowledge. But perhaps it's worth thinking twice about when, or at least how, to introduce this skill, because it looks like a method of doing Salvage Epistemology and so could invite its downsides if taught poorly. I'm undecided whether that's worth worrying about.
8Morpheus
What are these places?
2Raemon
Yeah I don't know that I disagree with it (I think I maybe believe it less strongly than you atm, but it seems like a reasonable take)

I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ... 

I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object level claims or talk about the meta-presentation as though you and I were trying to collaborate on figuring things out and how to communicate things.

But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going) then it would be kind of sick to ignore tha... (read more)

Is this your first time running into Zack's stuff? You sound like you're talking to someone who showed up out of nowhere with a no-context crackpot manuscript and has zero engagement with the community. Zack's post is about his actual engagement with the community over a decade; we've seen a bunch of the previous engagement (in pretty much the register we see here, so this doesn't look like an ongoing psychotic break), he's responsive to comments, and his thesis generally makes sense. This isn't drive-by crackpottery, and it's on LessWrong because it's about LessWrong.

Musings: 

COVID was one of the MMA-style arenas for different egregores to see which might come out 'on top' in an epistemically unfriendly environment. 

I have a lot of opinions on this that are more controversial than I'm willing to go into right now. But I wonder what else will work as one of these "testing arenas." 

I don't interpret that statement in the same way. 

You interpreted it as 'lied to the board about something material'. But to me, it also might mean 'wasn't forthcoming enough for us to trust him' or 'speaks in misleading ways (but not necessarily on purpose)' or it might even just be somewhat coded language for 'difficult to work with + we're tired of trying to work with him'. 

I don't know why you latch onto the interpretation that he definitely lied about something specific. 

6faul_sname
I'm interpreting this specifically through the lens of "this was a public statement". The board definitely had the ability to execute steps like "ask ChatGPT for some examples of concrete scenarios that would lead a company to issue that statement". The board probably had better options than "ask ChatGPT", but that should still serve as a baseline for how informed one would expect them to be about the implications of their statement. Here are some concrete example scenarios ChatGPT gives that might lead to that statement being given:

What all of these things have in common is that they involve misleading the board about something material. "Not fully candid", in the context of corporate communications, means "liar liar pants on fire", not "sometimes they make statements and those statements, while true, vaguely imply something that isn't accurate".

I was asked to clarify my position about why I voted 'disagree' with "I assign >50% to this claim: The board should be straightforward with its employees about why they fired the CEO." 

I'm putting a maybe-unjustified high amount of trust in all the people involved, and from that, my prior is very high on "for some reason, it would be really bad, inappropriate, or wrong to discuss this in a public way." And given that OpenAI has ~800 employees, telling them would basically count as a 'public' announcement. (I would update significantly on the claim ... (read more)

7faul_sname
The board's initial statement in which they stated

That is already a public statement that they are firing Sam Altman for cause, and that the cause is specifically that he lied to the board about something material. That's a perfectly fine public statement to make, if Sam Altman has in fact lied to the board about something material. Even a statement to the effect of "the board stands by its decision, but we are not at liberty to comment on the particulars of the reasons for Sam Altman's departure at this time" would be better than what we've seen (because that would say "yes there was actual misconduct, no we're not going to go into more detail"). The absence of such a statement implies that maybe there was no specific misconduct though.

Media & Twitter reactions to OpenAI developments were largely unhelpful, specious, or net-negative for overall discourse around AI and AI Safety. We should reflect on how we can do better in the future and possibly even consider how to restructure media/Twitter/etc to lessen the issues going forward.

The OpenAI Charter, if fully & faithfully followed and effectively stood behind, including possibly shutting the whole project down if it came down to it, would prevent OpenAI from being a major contributor to AI x-risk. In other words, as long as people actually followed this particular Charter to the letter, it is sufficient for curtailing AI risk, at least from this one org. 


The partnership between Microsoft and OpenAI is a net negative for AI safety. And: What can we do about that? 

We should consider other accountability structures than the one OpenAI tried (i.e. the non-profit / BoD). Also: What should they be?


I would never have put it as either of these, but the second one is closer. 

For me personally, I try to always have an internal sense of my inner motivation before/during doing things. I don't expect most people do, but I've developed this as a practice, and I am guessing most people can, with some effort or practice. 

I can pretty much generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intention to deceive or hide, etc. And when it comes to speech actions, this incl... (read more)

2Elizabeth
I 100% agree it's good to cultivate an internal sense of motivation, and move to act from motives more like curiosity and care, and less like prurient gossip and cruelty. I don't necessarily think we can transition by fiat, but I share the goal.

But I strongly reject "I am responsible for mitigating all negative consequences of my actions". If I truthfully accuse someone of a crime and it correctly gets them fired, am I responsible for feeding and housing them? If I truthfully accuse someone of a crime but people overreact, am I responsible for harm caused by overreaction? Given that the benefits of my statement accrue mostly to other people, having me bear the costs seems like a great way to reduce the supply of truthful, useful negative facts being shared in public.

I agree it's good to acknowledge the consequences, and that this might lead to different actions on the margin. But that's very different than making it a mandate.

I'm fine with drilling deeper but I currently don't know where your confusion is. 

I assume we exist in different frames, but it's hard for me to locate your assumptions. 

I don't like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now. 

4Elizabeth
That makes sense. Let me take a stab at clarifying, but if that doesn't work seems good to stop. You said

When I read that, my first thought is that before (most?) every question, you want people to think hard and calculate the specific consequences asking that question might have, and ask only if the math comes out strongly positive. They bear personal responsibility for anything in which their question played any causal role. I think that such a policy would be deeply harmful.

But another thing you could mean is that people who have a policy of asking questions like this should be aware and open about the consequences of their general policies on questions they ask, and have feedback loops that steer themselves towards policies that produce good results on average. That seems good to me. I'm generally in favor of openly acknowledging costs even when they're outweighed by benefits, and I care more that people have good feedback loops than that any one action is optimal.

Hm, neither of the motives I named includes any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there needed to be a specific concern. 

RE: The bullet point on compassion... maybe just strike that bullet point.  It doesn't really affect the rest of the points. 

It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness be

... (read more)
4Elizabeth
Could you say more on what you mean by "with compassion" and "taking responsibility for the impact of speech actions"?

Oh, okay, I found that a confusing way to communicate that? But thanks for clarifying. I will update my comment so that it doesn't make you sound like you did something very dismissive. 

I feel embarrassed by this misinterpretation, and the implied state of mind I was in. But I believe it is an honest reflection about something in my state of mind, around this subject. Sigh. 

But I think it's pretty important that people be able to do these kind of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of.  There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.  

There is a chance we don't have a disagreement, and there is a chance we do. 

In brief, to see if there's a crux anywhere in here:

  • Don't need
... (read more)
6Elizabeth
This was a great reply, very crunchy, I appreciate you spelling out your beliefs so legibly.

I'm confused here because that's not my definition of compassion and the sentence doesn't quite make sense to me if you plug that definition in. But I agree those questions should be done treating everyone involved as real and human. I don't believe they need to be done out of concern for the person. I also don't think the question needs to be motivated by any specific concern; desire for good models is enough.

It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness before they can be asked.


The 'endless list' comment wasn't about you, it was a more 'general you'. Sorry that wasn't clear. I edited stuff out and then that became unclear. 

I mostly wanted to point at something frustrating for me, in the hopes that you or others would, like, get something about my experience here. To show how trapped this process is, on my end.

I don't need you to fix it for me. I don't need you to change. 

I don't need you to take me for my word. You are welcome to write me off, it's your choice. 

I just wanted to show how I am and why. 

9Unreal
I had written a longer comment, illustrating how Oliver was basically committing the thing that I was complaining about and why this is frustrating.

The shorter version:

His first paragraph is a strawman. I never said 'take me at my word' or anything close. And all previous statements from me and knowing anything about my stances would point to this being something I would never say, so this seems weirdly disingenuous.

His second paragraph is weirdly flimsy, implying that ppl are mostly using the literal words out of people's mouths to determine whether they're lying (either to others or to themselves). I would be surprised if Oliver would actually find Alice and Bob both saying "trust me i'm fine" would be 'totally flat' data, given he probably has to discern deception on a regular basis. Also I'm not exactly the 'trust me i'm fine' type, and anyone who knows me would know that about me, if they bothered trying to remember. I have both the skill of introspection and the character trait of frankness. I would reveal plenty about my motives, aliefs, the crazier parts of me, etc. So paragraph 2 sounds like a flimsy excuse to be avoidant?

But the IMPORTANT thing is... I don't want to argue. I wasn't interested in that. I was hoping for something closer to perspective-taking, reconciliation, or reaching more clarity about our relational status. But I get that I was sounding argumentative. I was being openly frustrated and directing that in your general direction. Apologies for creating that tension.


so okay i'm actually annoyed by a thing... lemme see if i can articulate it. 

  1. I clearly have orders of magnitude more of the relevant evidence to ascertain a claim about MAPLE's chances of producing 'crazy' ppl as you've defined it—and much more even than most MAPLE people (both current and former). 
  2. Plus I have much of the relevant evidence about my own ability to discern the truth (which includes all the feedback I've received, the way people generally treat me, who takes me seriously, how often people seem to want to back away from me or tune me ou
... (read more)
2Ben Pace
Oh, this is a miscommunication. The thing I was intending to communicate when I linked to that post was that it is indeed plausible that you have observed strong evidence and that your confidence that you are in a healthy environment is accurate. I am saying that I think it is not in-principle odd or questionable to have very confident beliefs. I did not mean this to dismiss your belief, but to say the opposite, that your belief is totally plausible!
7Unreal
FTR, the reason I am engaging with LW at all, like right now...

I'm not that interested in preserving or saving MAPLE's shoddy reputation with you guys.

But I remain deeply devoted to the rationalists, in my heart. And I'm impacted by what you guys do. A bunch of my close friends are among you. And... you're engaging in this world situation, which impacts all of us. And I care about this group of people in general. I really feel a kinship here I haven't felt anywhere else. I can relax around this group in a way I can't elsewhere.

I concern myself with your norms, your ethical conduct, etc. I wish well for you, and wish you to do right by yourselves, each other, and the world. The way you conduct yourselves has big implications. Big implications for impacts to me, my friends, the world, the future of the world.

You've chosen a certain level of global-scale responsibility, and so I'm going to treat you like you're AT THAT LEVEL. The highest possible levels with a very high set of expectations. I hold myself AT LEAST to that high of a standard, to be honest, so it's not hypocritical.

And you can write me off, totally. No problem.

But in my culture, friends concern themselves with their friends' conduct. And I see you as friends. More or less.

If you write me off (and you know me personally), please do me the honor of letting me know. Ideally to my face. If you don't feel you are gonna do that / don't owe me that, then it would help me to know that also.

I mean, I am not sure what you want me to do. If I had taken people at their word when I was concerned about them or the organizations they were part of, and just believed them on their answer on whether they will do reckless or dangerous or crazy things in the future, I would have gotten every single one of the cases I know about wrong.

Like, it's not impossible but seems very rare that when I am concerned about the kind of thing I am concerned about here and say "hey I am worried that you will do a crazy thing" that my interlocutor goes "yeah, I totally m... (read more)

Anonymized paraphrase of a question someone asked about me (reported to me later, by the person who was being asked the question): 

I have a prior about people who go off to monasteries sometimes going nuts, is Renshin nuts?

The person being asked responded "nah" and the question-asker was like "cool" 

I think this sort of exchange might be somewhat commonplace or normal in the sphere. 

I personally didn't feel angry, offended, or sad to hear about this exchange, but I don't feel the person asking the question was asking out of concern or care f... (read more)

Thanks for adding this. I felt really hamstrung by not knowing exactly what kind of conversation we were talking about, and this helps a lot.

I think it's legit that this type of conversation feels shitty to the person it is about. Having people talk about you like you're not a person feels awful. If it involved someone with whom you had a personal relationship, I think it's legit that this hurts the relationship. Relationships are based on viewing each other as people. And I can see how a lot of generators of this kind of conversation would be bad.

But I ... (read more)

Ideas I'm interested in playing with:

  • experiment with using this feature for one-on-one coaching / debugging; I'd be happy to help anyone with their current bugs... (I suspect video call is the superior medium but shrug, maybe there will be benefits to this way)
  • talk about our practice together (if you have a 'practice' and know what that means) 

Topics I'd be interested in exploring:

  • Why meditation? Should you meditate? (I already meditate, a lot. I don't think everyone should "meditate". But everyone would benefit from something like "a practice" that t
... (read more)

I think the thing I'm attempting to point out is:

If I hold myself to satisfying A&C's criterion here, I am basically:

a) strangleholding myself on how to share information about Nonlinear in public
b) possibly overcommitting myself to a certain level of work that may not be worth it or desirable
c) implicitly biasing the process towards coming out with a strong case against Nonlinear (with a lower-level quality of evidence, or evidence to the contrary, being biased against) 

I would update if it turns out A&C was actually fine with Ben coming to t... (read more)

9Ben Pace
I affirm that there was a bias toward the process coming out against Nonlinear. I think this would normally be unjustified and unfair but it was done here due to the IMO credible threat of retaliation — otherwise I would have just shared my info as I wanted to on day one. I have tried to be open about the algorithm I followed so that people can update on the filtering.

Insofar as the concern about retaliation was essentially ungrounded then I think that doing this was wrong and I made a fairly serious mistake. I think it will be hard to know with certainty, given how much of the stuff was verbal, but overall I am quite confident that it was a justified concern.

To clarify, A&C didn't ask me to make a "credible" post, I myself thought that was what I should do. If I investigated and thought that the fears and harms were false, then my guess is that I would have shared a low-detail version of that. ("I have looked into these concerns about treatment of employees a fair bit and basically do not buy them.") These accusations were having effects for Nonlinear and I would have wanted to counteract that.

it seemed to me Alice and Chloe would be satisfied to share a post containing accusations that were received as credible.

 

This is a horrible constraint to put on an epistemic process. You cannot, ever, guarantee the reaction to these claims, right? Isn't this a little like writing the bottom line first? 

If it were me in this position, I would have been like: 

Sorry Alice & Chloe, but the goal of an investigation like this is not to guarantee a positive reaction for your POV, from the public. The goal is to reveal what is actually true abo... (read more)

4Ben Pace
Certainly! I think I did do this. I mentioned that two people came away with a false impression of how much money Alice received, and that some people involved questioned her reliability a bunch. Sometimes I think the stories she'd share with me were a bit fuzzy and when I asked her for primary sources they were slightly out of line with her recollection (though overall roughly quite similar).
8Viliam
There are basically three possible outcomes to Ben investigating the story of Alice and Chloe:

  • Ben concludes that the accusations against Nonlinear are true
  • Ben concludes that the accusations against Nonlinear are false
  • Ben decides that he doesn't have enough evidence to make a conclusion (but can share the data)

You are talking about the first two options, but it seems quite clear to me that the third option is the thing Alice and Chloe actually worry about. (A&C know whether they are telling the truth or lying, but they can't predict whether Ben will be sufficiently convinced by the evidence or not.) What they want is for Ben not to publish the story if the third option happens, because the predictable outcome is that Nonlinear would take revenge against them.

Ben also wants to avoid the third option, but he can't really promise it. Maybe there simply is not enough evidence either way; or maybe there is, but collecting it would take more time than Ben is willing to spend.


After reading more of the article, I have a better sense of this context that you mention. It would be interesting to see Nonlinear's response to the accusations because they seem pretty shameful, as is. 

I would actively advise against anyone working with Kat / Emerson, not without serious demonstration of reformation and, like, values-level shifts. 

If Alice is willing to stretch the truth about her situation (for any reason) or outright lie in order to enact harsher punishment on others, even as a victim of abuse, I would be mistrustful of her s... (read more)

5Unreal
Neither here nor there:

I am sympathetic to "getting cancelled." I often feel like people are cancelled in some false way (or a way that leaves people with a false model), and it's not very fair. Mobs don't make good judges. Even well-meaning, rationalist ones. I feel this way about basically everyone who's been 'cancelled' by this community. Truth and compassion were never fully upheld as the highest virtue, in the end. Justice was never, imo, served, but often used as an excuse for victims to evade taking personal responsibility for something and for rescuers to have something to do. But I still see the value in going through a 'cancelling' process, for everyone involved, and so I'm not saying to avoid it either. It just sucks, and I get it.

That said, the people who are 'cancelled' tend to be stubborn hard-heads about it, and their own obstinacy tends to lead further to an even more extreme downfall. It's like some suicidal part of them kicks in, and drives the knife in deeper without anyone's particular help.

I agree it's good to never just give into mob justice, but for your own souls to not take damage, try not to clench. It's not worth protecting it, whatever it happens to be.

Save your souls. Not your reputation.

These texts have weird vibes from both sides. Something is off all around.  

That said, what I'm seeing: A person failed to uphold their own boundaries or make clear their own needs. Instead of taking responsibility for that, they blame the other person for some sort of abuse. 

This is called playing the victim. I don't buy it. 

I think it would generally be helpful if people were informed by the Drama Triangle when judging cases like these. 

Alternative theory: Alice felt on thin ice socially + professionally. When she was sick she finally felt she had a bit of leeway and therefore felt even a little willing to make requests of these people who were otherwise very "elitist" wrt everyone, somewhat including her. She tries to not overstep. She does this by stating what she needs, but also in the same breath excusing her needs as unimportant, so that the people with more power can preserve the appearance of not being cruel while denying her requests. She does this because she doesn't know how much leeway she actually has.

Unfortunately this is a hard to falsify theory. But at a glance it seems consistent, and I think it's also totally a thing that happens.

this is a good question 

thanks for asking it 

curious how much you know about koan traditions and practices. this article is like an interesting mix of koan practice and analytical meditation. 

7TsviBT
It's definitely consciously meditative. It's a form of meditation I call "redescription". You redescribe the thing over and over--emphasizing different aspects, holding different central examples in mind, maybe tabooing words you used previously--like running your hands over an object over and over, making it familiar / part of you. IDK about koans. A favorite intro / hook / source?