All of agrippa's Comments + Replies

Not sure what you're on, but "You might listen to an idiot doctor that puts you on spiro" is definitely a real transition downside

However, Annie has not yet provided what I would consider direct / indisputable proof that her claims are true. Thus, rationally, I must consider Sam Altman innocent.

This is an interesting view on rationality that I hadn't considered

Omen decouples but has prohibitive gas problems and sees no usage as a result.

Augur was a total failboat. Almost all of these projects couple the market protocol to the resolution protocol, which is stupid, especially if you are Augur and your ideas about making resolution protocols are really dumb.


Your understanding is correct. I built one which is currently offline, I'll be in touch soon.

I found the stuff about relationship success in Luke's first post here to be useful! thanks

Ok, this kind of tag is exactly what I was asking about. I'll have a look at these posts.

6Yoav Ravid
You can also use the frontpage filters to make only post tagged practical appear, or to make them highly prioritized.
agrippa101

Thanks for giving an example of a narrow project, I think it helps a lot. I have been around EA for several years, I find that grandiose projects and narratives at this point alienate me, and hearing about projects like yours make my ears perk up and feel like maybe I should devote more time and attention to the space.

I guess it’s good to know it’s possible to be both a LW-style rationalist and quite mentally ill.

Not commenting on distributions here, but it sure as fuck is possible. 

1TeaTieAndHat
Well, I knew it was very much compatible with ADHD, ASD, and a few others. I guess what I meant is closer to "Good to know it’s possible to both be a massive rationality and self-help nerd; and never use any of that to think more clearly and escape destructive thought patterns; and yet it’s still okay"

I liked the analogy and I also like weird bugs

While normal from a normal perspective, this post is strange from a rationalist perspective, since the lesson you describe is "X is bad", but the evidence given is that you had a good experience with X, aside from mundane interpersonal drama that everyone experiences and that doesn't sound particularly exacerbated by X. Aside from that, you say it contributed to psychosis years down the line, but it's not very clear to me that there is a strong causal relationship, or any.

(of course, your friend's bad experience with cults is a good reason to update against cult... (read more)

agrippa165

how are you personally preparing for this?

Recently I learned that Pixel phones actually contain TPUs. This is a good indicator of how much deep learning is being used (particularly by the camera, I think).

Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why. 

3tamgent
I didn't downvote this just because I disagree with it (that's not how I downvote), but if I could hazard a guess at why people might downvote, it'd be that some might think it's a 'thermonuclear idea'.
agrippa190

Open tolerance of the people involved with status quo and fear of alienating / making enemies of powerful groups is a core part of current EA culture! Steve's top comment on this post is an example of enforcing/reiterating this norm. 

It's an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled covid.  That fits what I'd consider a taboo,... (read more)

3Linch
???
3[comment deleted]

So the first step to good outreach is not treating AI capabilities researchers as the enemy. We need to view them as our future allies, and gently win them over to our side by the force of good arguments that meets them where they're at, in a spirit of pedagogy and truth-seeking.

 

To this effect I have advocated that we should call it "Different Altruism" instead of "Effective Altruism", because by leading with the idea that a movement involves doing altruism better than status quo, we are going to trigger and alienate people part of status quo that we... (read more)

agrippa360

Thanks a lot for doing this and posting about your experience. I definitely think that nonviolent resistance is a weirdly neglected approach. "mainstream" EA certainly seems against it. I am glad you are getting results and not even that surprised.

You may be interested in discussion here, I made a similar post after meeting yet another AI capabilities researcher at FTX's EA Fellowship (she was a guest, not a fellow): https://forum.effectivealtruism.org/posts/qjsWZJWcvj3ug5Xja/agrippa-s-shortform?commentId=SP7AQahEpy2PBr4XS
 

agrippa170

I'm interested in working on dying with dignity

agrippa220

I actually feel calmer after reading this, thanks. It's nice to be frank. 

For all the handwringing in comments about whether somebody might find this post demotivating, I wonder if there are any such people. It seems to me like reframing a task from something that is not in your control (saving the world) to something that is (dying with personal dignity) is the exact kind of reframing that people find much more motivating.

-2Mawrak
If I am doomed to fail, I have no motivation to work on the problem. If we are all about to get destroyed by an out-of-control AI, I might just go full hedonist and stop putting work into anything (fuck dignity and fuck everything). Posts like this are severely demotivating; people are interested in solving problems, and nobody is interested in working on "dying with dignity".

Related post: https://www.lesswrong.com/posts/ybQdaN3RGvC685DZX/the-emh-is-false-specific-strong-evidence

One relevant thing here is the baseline P(beats market) given [rationalist / smart] & [tries to beat market]. In my own anecdotal dataset of about 15 people the probability here is about 100%, and the amount of wealth among these people is also really high. Obvious selection effects or whatever are obvious. But EMH is just a heuristic and you probably have access to stronger evidence. 
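
The reasoning above can be sketched as a beta-binomial update. This is a minimal illustration, not from the original comment: the Beta(1, 9) prior (roughly "10% baseline chance of beating the market") is an arbitrary illustrative choice, and selection effects are deliberately ignored, which is exactly the caveat flagged above.

```python
from fractions import Fraction

def posterior_mean(alpha, beta, successes, trials):
    """Posterior mean of a Beta(alpha, beta) prior after observing
    `successes` out of `trials` Bernoulli outcomes."""
    return Fraction(alpha + successes, alpha + beta + trials)

# Skeptical EMH-ish prior Beta(1, 9), then 15/15 anecdotal "beats market"
# observations (the hypothetical dataset from the comment above):
print(float(posterior_mean(1, 9, 15, 15)))  # -> 0.64
```

Even under a fairly skeptical prior, 15/15 moves the estimate a lot; the real question, as noted, is how much of that sample is selection effect.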

agrippa170

I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of.

I want to affirm that this to me seems like it should be alarming to you. To me a big part of rationality is about being resilient to this phenomenon and a big part of successful rationality norms is banning the tools for producing this phenomenon.

6Duncan Sabien (Deactivated)
It is indeed a concern. The alarm is a bit tempered by the fact that this doesn't seem to be a majority view, but "40% of readers" would be deeply problematic and "10% of readers" would still probably indicate some obvious low-hanging fruit for fixing a real issue. Looking at the votes, I don't think it's as low as 4% of readers, which is near my threshold for "no matter what you do, there'll be a swath this large with some kind of problem."

I was not aware of any examples of anything anyone would refer to as prejudicial mobbing with consequences. I'd be curious to hear about your prejudicial mobbing experience.

3Duncan Sabien (Deactivated)
I think it's better (for the moment at least) to let Oliver speak to the most salient one, and I can say more later if need be.  I suspect Oliver would provide a more neutral POV.

Maybe there is some norm everyone agrees with that you should not have to distance yourself from your friends if they turn out to be abusers, or be open about the fact you were their friend, or something. Maybe people are worried about the chilling effects of that.

If this norm is the case, then imo it is better enforced explicitly. 

But to put it really simply it does seem like I should care about whether it is true that Duncan and Brent were close friends if I am gonna be taking advice from him about how to interpret and discuss accusation... (read more)

4Ben Pace
Yeah, I don't act by that norm, and I did update negatively on the judgment of people I knew who supported Brent in the community. (I don't think of Duncan centrally in that category.)

Some facts relevant to the question of whether we were close friends:

  • We spent a grand total of well under 200 total hours in each other's company over the years 2014 - 2018 (the estimate is deliberately generous) with the bulk of that estimated time coming from a month of me mostly-by-myself using tools in his garage, but him occasionally coming out to work on his geodesic dome.
  • We did not at any point embark on any large projects together.
  • We did not at any point go on trips together, or have sleepovers, or schedule "let's go grab dinner together."  We
... (read more)
agrippa110

Your OP is way too long (or not sufficiently indexed) for me to, without considerable strain, determine how much or how meaningfully I think this claim is true. Relatedly I don't know what you are referring to here.

agrippa230

Maybe it is good to clarify: I'm not really convinced that LW norms are particularly conducive to bad faith or psychopathic behavior. Maybe there are some patches to apply. But mostly I am concerned about naivety. LW norms aren't enough to make truth win and bullies / predators lose. If people think they are, that alone is a problem independent of possible improvements. 
 

since you might just have different solutions in mind for the same problem.

I think that Duncan is concerned about prejudicial mobs being too effective and I am concerned about sy... (read more)

I like this highlighting of the tradeoffs, and have upvoted it. But:

But to me it doesn't seem like LW is particularly afflicted by prejudicial mobs and is nonzero afflicted by abuse.

... I think this is easier to say when one has never been the target of a prejudicial mob on LessWrong, and/or when one agrees with the mob and therefore doesn't think of it as prejudicial.

I've been the target of prejudicial mobbing on LessWrong.  Direct experience.  And yes, it impacted work and funding and life and friendships outside of the site.

agrippa520

If you do happen to feel like listing a couple of underappreciated norms that you think do protect rationality, I would like that.

 

Brevity

6Duncan Sabien (Deactivated)
Strong upvote. (I think the norms I'm pulling for increase brevity; more consistent standards mean less need to bend over backwards ruling out everything else in each individual case.)
agrippa390

I think that smart people can hack LW norms and propagandize / pointscore / accumulate power with relative ease. I think this post is pretty much an example of that:
- a lot of time is spent gesturing / sermoning about the importance of fighting biases etc. with no particularly informative or novel content (it is after all intended to "remind people of why they care".). I personally find it difficult to engage critically with this kind of high volume and low density. 
- ultimately the intent seems to be an effort to coordinate power against types of pos... (read more)

dxu*130

If I'm reading you correctly, it sounds like there's actually multiple disagreements you have here--a disagreement with Duncan, but also a disagreement with the current norms of LW.

My impression is primarily informed by these bits here:

I think that smart people can hack LW norms and propagandize / pointscore / accumulate power with relative ease. [...]

If people here really think you can't propagandize or bad-faith accumulate points/power while adhering to LW norms, well, I think that's bad for rationality.

Could you say more about this? In particular... (read more)

propagandize / pointscore / accumulate power with relative ease

There's a way in which this is correct denotatively, even though the connotation is something I disagree with.  Like, I am in fact arguing for increasing a status differential between some behaviors that I think are more appropriate for LW and others that I think are less appropriate.  I'm trying at least to be up front about what those behaviors are, so that people can disagree.  e.g. if you think that it's actually not a big deal to distinguish between observation and inference... (read more)

agrippa*20

Thank you SO MUCH for writing this. 

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

I think this is so well put and important.

I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itse... (read more)

Vaniver380

I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that's the case, much discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction. 

FWIW, I (former MIRI employee and current LW admin) saw a draft of this post before it was published, and told jessicata that I thought she should publish it, roughly because of that belief in transparency / ethical treatment of people.

8ChristianKl
It is a sign of most cults that they have a clear interior/exterior distinction. Whether or not someone is a Scientologist, for example, is very clear. The fact that CFAR doesn't have that is an indication against it being a cult. 
agrippa100

"If you apply to this grant, and get turned down, we'll write about why we don't like it publicly for everyone to see."

I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.

2ozziegooen
That's good to know.  I imagine grantmakers would be skeptical about people who would say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, leading to a lot of extra time. However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.
agrippa210

[1] I don’t particularly blame them, consider the alternative.

I think the alternative is actually much better than silence!

For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced. 

Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are ... (read more)

I agree that it would have been really nice for grantmakers to communicate with the EA Hotel more, and other orgs more, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication with small grantmak... (read more)

I will say that the EA Hotel, during my 7 months of living there, was remarkably non-cult-like.  You would think otherwise given Greg's forceful, charismatic presence /j

I find it hard to imagine people sleeping in on Sundays. Not even the most hardened criminal will steal when the policeman's right in front of him and the punishment is infinite.

I'm a little late on this one, but another clear example is that theists don't have the relationship with death that you would expect someone to have if they believed that post-death was the good part. "You want me to apologize to the bereaved family for the murder? They should be thanking me!"

5NoriMori1992
This was one of the main reasons that I didn't set much store by the single Sunday school lesson I ever attended (courtesy of my paternal grandmother, against my mother's wishes, when I was probably older than 5 but definitely younger than 10). When the teacher described to all of us what a wonderful place heaven was, every single one of us (apparently I wasn't the only one there hearing all this for the first time) was overjoyed, and we all exclaimed that we wanted to die right away so we could go there. The teacher hurriedly tried to explain to us that that's not what God wants, he wants us to live full lives here on Earth. We were having none of it. "No!" we said joyfully, "I wanna die right now!"

While we didn't say it in so many words, it was clear to us that the teacher wasn't making any sense, since she seemed to be claiming both that God wants what's best for us, and that God doesn't want us to do what would clearly actually be best for us. We were in the strange position of watching an adult firmly denounce what should have been the most obviously correct decision in the history of human existence, if heaven actually existed.

Claiming that God doesn't want us to go to heaven right away because he "wants us to live full lives on Earth" makes no sense. If heaven's really so great, and it's eternal, then why in the world would literally anybody not want to get there as soon as possible? And more importantly, why would God not want that? If there's an answer to that, the teacher didn't know it. And if we're accepting her claims that heaven is amazing and eternal and God wants what's best for us, then the only good reason God could have for wanting us to "live full lives on Earth" is that it would maximize our well-being in the long run even after taking an eternity in heaven into account. The teacher offered us no reason to believe that's the case; and more importantly, neither did God, who was just as silent on that day as on all other days. The other main