flood lights seem best?
However, Annie has not yet provided what I would consider direct / indisputable proof that her claims are true. Thus, rationally, I must consider Sam Altman innocent.
This is an interesting view on rationality that I hadn't considered
Omen decouples but has prohibitive gas problems and sees no usage as a result.
Augur was a total failboat. Almost all of these projects couple the market protocol to the resolution protocol, which is stupid, especially if you are Augur and your ideas about making resolution protocols are really dumb.
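For what it's worth, here is a minimal sketch of what decoupling means (hypothetical names of my own, not Omen's or Augur's actual contracts): the market protocol depends only on a generic resolution interface, so the resolution protocol can be swapped out without touching the trading/payout logic.

```python
# Hypothetical sketch of a decoupled design (toy names, not any real project's
# contracts): the market protocol depends only on a generic Oracle interface,
# so the resolution protocol can be replaced freely.

from abc import ABC, abstractmethod
from typing import Dict, Optional


class Oracle(ABC):
    """Resolution protocol: answers "what was the outcome?" and nothing else."""

    @abstractmethod
    def outcome(self, question_id: str) -> Optional[str]:
        """Return the settled outcome, or None if not yet resolved."""


class BinaryMarket:
    """Market protocol: tracks positions and pays out winners, and knows nothing
    about how resolution happens beyond the Oracle interface."""

    def __init__(self, question_id: str, oracle: Oracle):
        self.question_id = question_id
        self.oracle = oracle
        self.positions: Dict[str, Dict[str, float]] = {}  # trader -> outcome -> shares

    def buy(self, trader: str, outcome: str, shares: float) -> None:
        self.positions.setdefault(trader, {}).setdefault(outcome, 0.0)
        self.positions[trader][outcome] += shares

    def payout(self, trader: str) -> float:
        result = self.oracle.outcome(self.question_id)
        if result is None:
            raise RuntimeError("market not yet resolved")
        return self.positions.get(trader, {}).get(result, 0.0)
```

In a design like this you can plug in a committee oracle, an optimistic challenge-period oracle, or whatever else, and the market side never has to care.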
Your understanding is correct. I built one which is currently offline, I'll be in touch soon.
I found the stuff about relationship success in Luke's first post here to be useful! thanks
Ok, this kind of tag is exactly what I was asking about. I'll have a look at these posts.
Thanks for giving an example of a narrow project; I think it helps a lot. I have been around EA for several years, and I find that grandiose projects and narratives at this point alienate me, whereas hearing about projects like yours makes my ears perk up and makes me feel like maybe I should devote more time and attention to the space.
I guess it’s good to know it’s possible to be both a LW-style rationalist and quite mentally ill.
Not commenting on distributions here, but it sure as fuck is possible.
I liked the analogy and I also like weird bugs
While normal from a normal perspective, this post is strange from a rationalist perspective, since the lesson you describe is that X is bad, but the evidence given is that you had a good experience with X aside from mundane interpersonal drama that everyone experiences and that doesn't sound particularly exacerbated by X. Aside from that, you say it contributed to psychosis years down the line, but it's not very clear to me that there is a strong causal relationship, or any at all.
(of course, your friend's bad experience with cults is a good reason to update against cult...
how are you personally preparing for this?
Recently I learned that Pixel phones actually contain TPUs. This is a good indicator of how much deep learning is being used (particularly by the camera, I think).
Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why.
Open tolerance of the people involved with the status quo, and fear of alienating / making enemies of powerful groups, are a core part of current EA culture! Steve's top comment on this post is an example of enforcing/reiterating this norm.
It's an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow-blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled Covid. That fits what I'd consider a taboo,...
So the first step to good outreach is not treating AI capabilities researchers as the enemy. We need to view them as our future allies, and gently win them over to our side by the force of good arguments that meets them where they're at, in a spirit of pedagogy and truth-seeking.
To this effect I have advocated that we should call it "Different Altruism" instead of "Effective Altruism", because by leading with the idea that a movement involves doing altruism better than the status quo, we are going to trigger and alienate people who are part of the status quo that we...
Thanks a lot for doing this and posting about your experience. I definitely think that nonviolent resistance is a weirdly neglected approach. "mainstream" EA certainly seems against it. I am glad you are getting results and not even that surprised.
You may be interested in the discussion here; I made a similar post after meeting yet another AI capabilities researcher at FTX's EA Fellowship (she was a guest, not a fellow): https://forum.effectivealtruism.org/posts/qjsWZJWcvj3ug5Xja/agrippa-s-shortform?commentId=SP7AQahEpy2PBr4XS
I'm interested in working on dying with dignity
I actually feel calmer after reading this, thanks. It's nice to be frank.
For all the handwringing in the comments about whether somebody might find this post demotivating, I wonder if there are any such people. It seems to me like reframing a task from something that is not in your control (saving the world) to something that is (dying with personal dignity) is exactly the kind of reframing that people find much more motivating.
Related post: https://www.lesswrong.com/posts/ybQdaN3RGvC685DZX/the-emh-is-false-specific-strong-evidence
One relevant thing here is the baseline P(beats market) given [rat / smart] & [tries to beat market]. In my own anecdotal dataset of about 15 people the probability here is about 100%, and the amount of wealth among these people is also really high. Obvious selection effects or whatever are obvious. But EMH is just a heuristic and you probably have access to stronger evidence.
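(To make the "EMH is just a heuristic" point concrete, here's a toy Bayes calculation with invented numbers of my own, not the actual dataset: the interesting question is how much the selection effects cap the likelihood ratio you get from an anecdotal sample like this.)

```python
# Toy sketch with invented numbers (not the actual 15-person dataset):
# odds-form Bayes update of an EMH-style prior on "this smart person who
# tries will beat the market", under different selection-effect discounts.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """posterior odds = prior odds * likelihood ratio"""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.05  # hypothetical skeptical EMH-ish prior

# Taking the anecdotal sample at face value as a big likelihood ratio...
print(posterior(prior, 50))  # ~0.72

# ...versus heavily discounting for selection effects (you mostly hear about winners).
print(posterior(prior, 3))   # ~0.14
```

The point being just that whether the anecdata moves you past the EMH prior depends mostly on how hard you discount for who ends up in your sample.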
I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of.
I want to affirm that this seems to me like something that should be alarming to you. To me a big part of rationality is about being resilient to this phenomenon, and a big part of successful rationality norms is banning the tools for producing this phenomenon.
Great, thanks.
I was not aware of any examples of anything anyone would refer to as prejudicial mobbing with consequences. I'd be curious to hear about your prejudicial mobbing experience.
Maybe there is some norm everyone agrees with that you should not have to distance yourself from your friends if they turn out to be abusers, or not have to be open about the fact you were their friend, or something. Maybe people are worried about the chilling effects of that.
If this norm is the case, then imo it is better enforced explicitly.
But to put it really simply, it does seem like I should care about whether it is true that Duncan and Brent were close friends if I am gonna be taking advice from him about how to interpret and discuss accusation...
Some facts relevant to the question of whether we were close friends:
Your OP is way too long (or not sufficiently indexed) for me to, without considerable strain, determine how much or how meaningfully I think this claim is true. Relatedly, I don't know what you are referring to here.
Maybe it is good to clarify: I'm not really convinced that LW norms are particularly conducive to bad faith or psychopathic behavior. Maybe there are some patches to apply. But mostly I am concerned about naivety. LW norms aren't enough to make truth win and bullies / predators lose. If people think they are, that alone is a problem independent of possible improvements.
since you might just have different solutions in mind for the same problem.
I think that Duncan is concerned about prejudicial mobs being too effective and I am concerned about sy...
I like this highlighting of the tradeoffs, and have upvoted it. But:
But to me it doesn't seem like LW is particularly afflicted by prejudicial mobs, while it is nonzero afflicted by abuse.
... I think this is easier to say when one has never been the target of a prejudicial mob on LessWrong, and/or when one agrees with the mob and therefore doesn't think of it as prejudicial.
I've been the target of prejudicial mobbing on LessWrong. Direct experience. And yes, it impacted work and funding and life and friendships outside of the site.
If you do happen to feel like listing a couple of underappreciated norms that you think do protect rationality, I would like that.
Brevity
I think that smart people can hack LW norms and propagandize / pointscore / accumulate power with relative ease. I think this post is pretty much an example of that:
- a lot of time is spent gesturing / sermonizing about the importance of fighting biases etc., with no particularly informative or novel content (it is, after all, intended to "remind people of why they care"). I personally find it difficult to engage critically with this kind of high-volume, low-density writing.
- ultimately the intent seems to be an effort to coordinate power against types of pos...
If I'm reading you correctly, it sounds like there are actually multiple disagreements you have here--a disagreement with Duncan, but also a disagreement with the current norms of LW.
My impression is primarily informed by these bits here:
I think that smart people can hack LW norms and propagandize / pointscore / accumulate power with relative ease. [...]
If people here really think you can't propagandize or bad-faith accumulate points/power while adhering to LW norms, well, I think that's bad for rationality.
Could you say more about this? In particular...
propagandize / pointscore / accumulate power with relative ease
There's a way in which this is correct denotatively, even though the connotation is something I disagree with. Like, I am in fact arguing for increasing a status differential between some behaviors that I think are more appropriate for LW and others that I think are less appropriate. I'm trying at least to be up front about what those behaviors are, so that people can disagree. e.g. if you think that it's actually not a big deal to distinguish between observation and inference...
Thank you SO MUCH for writing this.
The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with. Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.
I think this is so well put and important.
I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itse...
I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that's the case, much discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction.
FWIW, I (former MIRI employee and current LW admin) saw a draft of this post before it was published, and told jessicata that I thought she should publish it, roughly because of that belief in transparency / ethical treatment of people.
"If you apply to this grant, and get turned down, we'll write about why we don't like it publically for everyone to see."
I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.
[1] I don’t particularly blame them; consider the alternative.
I think the alternative is actually much better than silence!
For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced.
Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are ...
I agree that it would have been really nice for grantmakers to communicate with the EA Hotel more, and other orgs more, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication with small grantmak...
I will say that the EA Hotel, during my 7 months of living there, was remarkably non-cult-like. You would think otherwise given Greg's forceful, charismatic presence /j
I find it hard to imagine people sleeping in on Sundays. Not even the most hardened criminal will steal when the policeman's right in front of him and the punishment is infinite.
I'm a little late on this one, but another clear example is that theists don't have the relationship with death that you would expect someone to have if they believed that post-death was the good part. "You want me to apologize to the bereaved family for murder? They should be thanking me!"
Not sure what you're on, but "You might listen to an idiot doctor that puts you on spiro" is definitely a real transition downside