LESSWRONG

Arepo

Comments

What TMS is like
Arepo · 10mo · 2 · 0

Did the ringing go away over time, or was it permanent?

Towards more cooperative AI safety strategies
Arepo · 1y · 2 · 0

What do you mean by the leadership being shared? That seems much less true now that Effective Ventures have started spinning off their orgs. It seems like the funding is still largely shared, but that's a different claim.

Towards more cooperative AI safety strategies
Arepo · 1y · 2 · -1

I would also be interested to see this. Also, could you clarify:

I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).

Are you talking here about 'the extended EA-Alignment ecosystem', or do you mean you've aimed at getting the global poverty/animal welfare/other non-AI-related EA community to shut down or disappear?

EA orgs' legal structure inhibits risk taking and information sharing on the margin
Arepo · 2y · 2 · 0

Another effect I'm very concerned about is the unseen effect on the funding landscape. For all that EVF organisations are said to be financially independent, none of them seems to have had any issue getting funding, primarily from Open Phil (generally offering market-rate or better salaries, and in some cases getting millions of dollars for marketing alone), while many other EA orgs - and, contra the OP, there are many more* outside the EVF/RP net than within - have struggled to get enough money to pay a couple of staff a living wage.

* That list excludes regional EA subgroups, of which there are dozens, and there would no doubt be more if a small amount of funding were available.

Honoring Petrov Day on LessWrong, in 2020
Arepo · 5y · 15 · 0

It doesn't matter whether you'd have been hypothetically willing to do something for them. As I said on the Facebook thread, you did not consult with them. You merely informed them they were in a game, which, given the social criticism Chris has received, had real world consequences if they misplayed. In other words, you put them in harm's way without their consent. That is not a good way to build trust.

Honoring Petrov Day on LessWrong, in 2020
Arepo · 5y* · 14 · 0

The downvotes on this comment seem ridiculous to me. If I email 270 people to tell them I've carefully selected them for some process, I cannot seriously presume they will give up >0 of their time to take part in it. 

Any such sacrifice they make is a bonus, so if they do give up >0 time, it's absurd to ask that they give up even more time to research the issue.

Any negative consequences are on the person who set up the game. Adding the justification that 'I trust you' does not suddenly make the recipient more obligated to the spammer.

The Case for The EA Hotel
Arepo · 6y · 11 · 0

My impression is that many similar projects are share houses or other flat hierarchies. IMO a big advantage of the model here is a top-down approach, where the trustees/manager view it as a major part of our job to limit and mitigate interpersonal conflicts, zero-sum status games, etc.

[link] Choose your (preference) utilitarianism carefully – part 1
Arepo · 10y · 0 · 0

Whatever you call it, they've got to identify some alternative, even if only tacitly by following some approximation of it in their daily life.

[link] Choose your (preference) utilitarianism carefully – part 1
Arepo · 10y · 0 · 0

I would like to write an essay about that eventually, but I figured persuading PUs of the merits of HU was lower hanging fruit.

For what it's worth, I have a lot of sympathy with your scepticism - I would rather (and believe it possible to) build up a system resembling ethics without reference to normativity, 'oughts', or any of their associated baggage. I think the trick will be to properly understand the overlap of ethics and epistemology, both of which are subject to similar questions (how do we non-question-beggingly 'ground' 'factual' questions?), but it's the former's questions that people disproportionately emphasise.

[ETA] It's also hard to pin down what the null hypothesis would be. Calling it 'nihilism' of any kind is just defining the problem away. For example, if you just decide you want to do something nice for your friend - in the sense of something beneficial for her, rather than just picking an act that will give you warm fuzzies - then your presumption of what category of things would be 'nice for her' implicitly judges how to group states of the world. If you also feel like some things you might do would be nicer for her than others, then you're judging how to order states of the world.

This already has the makings of a 'moral system', even though there's not a 'thou shalt' in sight. If you further think that how she'll react to whatever you do for her can corroborate/refute your judgement of what things are nice(r than others) for her, your system seems to have, if not a 'realist' element, at least a not purely antirealist/subjectivist one. It's not utilitarianism (yet), but it seems to be heading in that sort of direction.

xkcd on the AI box experiment
Arepo · 11y · 5 · 0

How do we know EY isn't doing the same?

Posts

2 · Who created the Less Wrong Gather Town? [Question] · 11mo · 1
2 · Two tools for rethinking existential risk · 1y · 0