are you putting forward that something about worldviews sometimes relies on faster than light signaling?
OK this is getting close. I am saying worldviews CANNOT EVER be fast enough, and that's why the goal is to drop all worldviews to get "fast enough". Though the very idea of "fast enough" is itself 'wrong' because it's conceptual / limited / false. This is my worst-best attempt to point to a thing, but I am trying to be as literal as possible, not poetic.
No response can be immediate in a physical universe
Yeah, we're including 'physical universe' a...
I don't know if I fully get you, but you also nailed it on the head.
...In such situation, I think the one weird trick would be to invent a belief system that actively denies being one. To teach people a dogma that would (among other things) insist that there is no dogma, you just see the reality as it is (unlike all the other people, who merely see their dogmas). To invent rituals that consist (among other things) of telling yourself repeatedly that you have no rituals (unlike all the other people). To have leaders that deny being leaders (and yet they are su
Or why can't you have a worldview that computes the best answer to any given "what should I do" question, to arbitrary but not infinite precision?
I am not talking about any 'good enough' answer, whatever you deem 'good enough' to some arbitrary precision.
I am talking about the correct answer every time. This is not a matter of sufficient compute. Because if the answer comes even a fraction of a second AFTER, it is already too late. The answer has to be immediate. To get an answer that is immediate, that means it took zero amount of time, no compute is invo...
I am saying things in a direct and more or less literal manner. Or at least I'm trying to.
I did use a metaphor. I am not using "poetry"? When I say "Discard Ignorance" I mean that as literally as possible. I think it's somewhat incorrect to call what I'm saying phenomenology. That makes it sound purely subjective.
Am I talking down to you? I did not read it that way. Sorry it comes across that way. I am attempting to be very direct and blunt because I think that's more respectful, and it's how I talk.
First paragraph: 3/10. The claim is that something was already more natural to begin with, but you need deliberate practice to unlock the thing that was already more natural. It's not that it 'comes more naturally' after you practice something. What 'felt' natural before was actually very unnatural and hindered, but we don't realize this until after practicing.
2nd, 3rd, 4th paragraph: 2/10. This mostly doesn't seem relevant to what I'm trying to offer.
...
It's interesting trying to watch various people try to repeat what I'm saying or respond to what ...
I will respond to this more fully at a later point. But a quick correction I wish to make:
What I'm referring to is not about System 1 or System 2 so much. It's not that I rely more on System 1 to do things. System 1 and System 2 are both unreliable systems, each with major pitfalls.
I'm more guided by a wisdom that is not based in System 1 or System 2 or any "process" whatsoever.
I keep trying to point at this, and people don't really get it until they directly see it. That's fine. But I wish people would at least mentally try to understand what ...
Yes we agree. 👍🌻
I think I mention this in the essay too.
If we are just changing at the drop of a hat, not for truth, but for convenience or any old reason, like most people are, ...
or even under very strenuous dire circumstances, like we're about to die or in excruciating pain or something...
then that is a compromised mind. You're working with a compromised, undisciplined mind that will change its answers as soon as externals change.
Views change. Even our "robust" principles can go out the window under extreme circumstances. (So what's going ...
I would also argue against the claim that religious institutions are "devoid of moral truths". I think this is mostly coming from secularist propaganda. In fact these institutions are still some of the most charitable, generous institutions that provide humanitarian aid all over the world. Their centuries-old systems are relatively effective at rooting out evil-doing in their ranks.
Compared to modern corporations, they're acting out of a much clearer sense of morality than capitalist institutions. Compared to modern secular governments, such as that of th...
OK, well, I can tell a story about why corruption seeped into the Church, and it doesn't sound crazy to me. (Black Death happened, is what.)
The Mediaeval Christian church's power-seeking and hypocrisy precedes the Black Death.
My second point is that if moral realism were true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:
This is easy to research.
I will name a few ways the Bud...
...I mean, if moral realism were correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths.
This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless,
I use the mind too. I appreciate the mind a lot. I wouldn't choose to be less smart, for instance.
But we are still over-using the mind to solve our problems.
Intelligence doesn't solve problems. It creates problems.
Wisdom resolves problems.
This is what's really hard to convey. But I am trying, as you say.
Hm, if by "discovering" you mean
Dropping all fixed priors
Making direct contact with reality (which is without any ontology)
And then deep insight emerges
And then after-the-fact you construct an ontology that is most beneficial based on your discovery
Then I'm on board with that
And yet I still claim that ontology is insufficient, imperfect, and not actually gonna work in the end.
Hm, you know I do buy that also.
The task is much harder now, due to changing material circumstances as you say. Modern culture has in some sense vaccinated itself against certain forms of wisdom and insight.
We acknowledge this problem and are still making an effort to address it, using modern technology. I cannot claim we're 'anywhere close' to resolving this? We're just firmly GOING to try, and we believe we in particular have a comparative advantage, due to a very solid community of spiritual practitioners. We have AT LEAST managed to g...
We have a significant comparative advantage to pretty much all of Western philosophy. I know this is a 'bold claim'. If you're further curious you can come visit the Monastic Academy in Vermont, since it seems best 'shown' rather than 'told'. But we also plan on releasing online content in the near future to communicate our worldview.
We do see that all the previous efforts have perhaps never quite consistently and reliably succeeded, in both hemispheres. (Because, hell, we're here now.) But it is not fair to say they have never succeeded to any degre...
Thank you for pointing this out, as it is very important.
The morality / ethics of the human beings involved matters a lot. But it seems to matter more than just a lot. If we get even a little thing wrong here, ...
But we're getting more than just a little wrong here, imo. Afaict most modern humans are terribly confused about morality / ethics. As you say "what is even good"
I've spoken with serious mathematicians who believe they might have a promising direction on the AI alignment problem. But they're also confused about what's good. That is not their re...
Rationality seems to be missing an entire curriculum on "Eros" or True Desire.
I got this curriculum from other trainings, though. There are places where it's hugely emphasized and well-taught.
I think maybe Rationality should be more open to sending people to different places for different trainings and stop trying to do everything on its own terms.
It has been way better for me to learn how to enter/exit different frames and worldviews than to try to make everything fit into one worldview / frame. I think some Rationalists believe everything is ...
I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ...
I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object level claims or talk about the meta-presentation as though you and I were trying to collaborate on figuring things out and how to communicate things.
But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going) then it would be kind of sick to ignore tha...
Is this your first time running into Zack's stuff? You sound like you're talking to someone who showed up out of nowhere with a no-context crackpot manuscript and zero engagement with the community. Zack's post is about his actual engagement with the community over a decade, we've seen a bunch of the previous engagement (in pretty much the register we see here, so this doesn't look like an ongoing psychotic break), he's responsive to comments and his thesis generally makes sense. This isn't drive-by crackpottery, and it's on LessWrong because it's about LessWrong.
Musings:
COVID was one of the MMA-style arenas for different egregores to see which might come out 'on top' in an epistemically unfriendly environment.
I have a lot of opinions on this that are more controversial than I'm willing to go into right now. But I wonder what else will work as one of these "testing arenas."
I don't interpret that statement in the same way.
You interpreted it as 'lied to the board about something material'. But to me, it also might mean 'wasn't forthcoming enough for us to trust him' or 'speaks in misleading ways (but not necessarily on purpose)' or it might even just be somewhat coded language for 'difficult to work with + we're tired of trying to work with him'.
I don't know why you latch onto the interpretation that he definitely lied about something specific.
I was asked to clarify my position about why I voted 'disagree' with "I assign >50% to this claim: The board should be straightforward with its employees about why they fired the CEO."
I'm putting a maybe-unjustified high amount of trust in all the people involved, and from that, my prior is very high on "for some reason, it would be really bad, inappropriate, or wrong to discuss this in a public way." And given that OpenAI has ~800 employees, telling them would basically count as a 'public' announcement. (I would update significantly on the claim ...
Media & Twitter reactions to OpenAI developments were largely unhelpful, specious, or net-negative for overall discourse around AI and AI Safety. We should reflect on how we can do better in the future and possibly even consider how to restructure media/Twitter/etc to lessen the issues going forward.
The OpenAI Charter, if fully & faithfully followed and effectively stood behind, including possibly shutting the whole project down if it came down to it, would prevent OpenAI from being a major contributor to AI x-risk. In other words, as long as people actually followed this particular Charter to the letter, it would be sufficient for curtailing AI risk, at least from this one org.
I would never have put it as either of these, but the second one is closer.
For me personally, I try to always have an internal sense of my inner motivation before/during doing things. I don't expect most people do, but I've developed this as a practice, and I am guessing most people can, with some effort or practice.
I can pretty much generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intention to deceive or hide, etc. And when it comes to speech actions, this incl...
I'm fine with drilling deeper but I currently don't know where your confusion is.
I assume we exist in different frames, but it's hard for me to locate your assumptions.
I don't like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now.
Hm, neither of the motives I named includes any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there needed to be a specific concern.
RE: The bullet point on compassion... maybe just strike that bullet point. It doesn't really affect the rest of the points.
...It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness be
Oh, okay, I found that a confusing way to communicate that? But thanks for clarifying. I will update my comment so that it doesn't make you sound like you did something very dismissive.
I feel embarrassed by this misinterpretation, and the implied state of mind I was in. But I believe it is an honest reflection about something in my state of mind, around this subject. Sigh.
But I think it's pretty important that people be able to do these kinds of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of. There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.
There is a chance we don't have a disagreement, and there is a chance we do.
In brief, to see if there's a crux anywhere in here:
I had written a longer comment, illustrating how Oliver was basically doing the very thing I was complaining about and why this is frustrating.
The shorter version:
His first paragraph is a strawman. I never said 'take me at my word' or anything close. All my previous statements, and anything known about my stances, point to this being something I would never say, so this seems weirdly disingenuous.
His second paragraph is weirdly flimsy, implying that ppl are mostly using the literal words out of people's mouths to determine w...
The 'endless list' comment wasn't about you, it was a more 'general you'. Sorry that wasn't clear. I edited stuff out and then that became unclear.
I mostly wanted to point at something frustrating for me, in the hopes that you or others would, like, get something about my experience here. To show how trapped this process is, on my end.
I don't need you to fix it for me. I don't need you to change.
I don't need you to take me for my word. You are welcome to write me off, it's your choice.
I just wanted to show how I am and why.
FTR, the reason I am engaging with LW at all, like right now...
I'm not that interested in preserving or saving MAPLE's shoddy reputation with you guys.
But I remain deeply devoted to the rationalists, in my heart. And I'm impacted by what you guys do. A bunch of my close friends are among you. And... you're engaging in this world situation, which impacts all of us. And I care about this group of people in general. I really feel a kinship here I haven't felt anywhere else. I can relax around this group in a way I can't elsewhere.
I concern m...
so okay i'm actually annoyed by a thing... lemme see if i can articulate it.
I mean, I am not sure what you want me to do. If I had taken people at their word when I was concerned about them or the organizations they were part of, and just believed their answer on whether they would do reckless or dangerous or crazy things in the future, I would have gotten every single one of the cases I know about wrong.
Like, it's not impossible but seems very rare that when I am concerned about the kind of thing I am concerned about here and say "hey I am worried that you will do a crazy thing" that my interlocutor goes "yeah, I totally m...
Anonymized paraphrase of a question someone asked about me (reported to me later, by the person who was being asked the question):
I have a prior about people who go off to monasteries sometimes going nuts, is Renshin nuts?
The person being asked responded "nah" and the question-asker was like "cool"
I think this sort of exchange might be somewhat commonplace or normal in the sphere.
I personally didn't feel angry, offended, or sad to hear about this exchange, but I don't feel the person asking the question was asking out of concern or care f...
Thanks for adding this. I felt really hamstrung by not knowing exactly what kind of conversation we were talking about, and this helps a lot.
I think it's legit that this type of conversation feels shitty to the person it is about. Having people talk about you like you're not a person feels awful. If it included someone with whom you had a personal relationship, I think it's legit that this hurts the relationship. Relationships are based on viewing each other as people. And I can see how a lot of generators of this kind of conversation would be bad.
But I ...
Ideas I'm interested in playing with:
Topics I'd be interested in exploring:
I think the thing I'm attempting to point out is:
If I hold myself to satisfying A&C's criterion here, I am basically:
a) strangleholding myself on how to share information about Nonlinear in public
b) possibly overcommitting myself to a certain level of work that may not be worth it or desirable
c) implicitly biasing the process towards coming out with a strong case against Nonlinear (with a lower-level quality of evidence, or evidence to the contrary, being biased against)
I would update if it turns out A&C was actually fine with Ben coming to t...
it seemed to me Alice and Chloe would be satisfied to share a post containing accusations that were received as credible.
This is a horrible constraint to put on an epistemic process. You cannot, ever, guarantee the reaction to these claims, right? Isn't this a little like writing the bottom line first?
If it were me in this position, I would have been like:
Sorry Alice & Chloe, but the goal of an investigation like this is not to guarantee a positive reaction for your POV, from the public. The goal is to reveal what is actually true abo...
Neither here nor there:
I am sympathetic to "getting cancelled." I often feel like people are cancelled in some false way (or a way that leaves people with a false model), and it's not very fair. Mobs don't make good judges. Even well-meaning, rationalist ones. I feel this way about basically everyone who's been 'cancelled' by this community. Truth and compassion were never fully upheld as the highest virtue, in the end. Justice was never, imo, served, but often used as an excuse for victims to evade taking personal responsibility for something and fo...
After reading more of the article, I have a better sense of this context that you mention. It would be interesting to see Nonlinear's response to the accusations because they seem pretty shameful, as is.
I would actively advise against anyone working with Kat / Emerson, not without serious demonstration of reformation and, like, values-level shifts.
If Alice is willing to stretch the truth about her situation (for any reason) or outright lie in order to enact harsher punishment on others, even as a victim of abuse, I would be mistrustful of her s...
These texts have weird vibes from both sides. Something is off all around.
That said, what I'm seeing: A person failed to uphold their own boundaries or make clear their own needs. Instead of taking responsibility for that, they blame the other person for some sort of abuse.
This is called playing the victim. I don't buy it.
I think it would generally be helpful if people were informed by the Drama Triangle when judging cases like these.
Alternative theory: Alice felt on thin ice socially + professionally. When she was sick she finally felt she had a bit of leeway and therefore felt even a little willing to make requests of these people who were otherwise very "elitist" wrt everyone, somewhat including her. She tries to not overstep. She does this by stating what she needs, but also in the same breath excusing her needs as unimportant, so that the people with more power can preserve the appearance of not being cruel while denying her requests. She does this because she doesn't know how much leeway she actually has.
Unfortunately this is a hard-to-falsify theory. But at a glance it seems consistent, and I think it's also totally a thing that happens.
Catholicism never would have collected the intelligence necessary to invent a nuke. Their worldview was not compatible with science. It was an inferior organizing principle. ("inferior" meaning less capable of coordinating a collective intelligence needed to build nukes.)
You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal with the destruction of life on this planet, and therefore the less intelligent, less destructive religions actually have more ethical ground ...