Who created the Less Wrong Gather Town?
Does anyone know who created it and if they're still around? I'm interested in getting some of the artwork they used for the EA Gather Town and/or getting a two-way portal between them.
Did the ringing go away over time, or was it permanent?
What do you mean the leadership is shared? That seems much less true now that Effective Ventures have started spinning off their orgs. It seems like the funding is still largely shared, but that's a different claim.
I would also be interested to see this. Also, could you clarify:
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Are you talking here about 'the extended EA-Alignment ecosystem', or do you mean you've aimed at getting the global poverty/animal welfare/other non-AI-related EA community to shut down or disappear?
Crossposted from the EA Forum.
I’ve developed two calculators designed to help longtermists estimate the likelihood of humanity achieving a secure interstellar existence after 0 or more major catastrophes. These can be used to compare an a priori estimate, and a revised estimate after counterfactual events.
I hope these calculators will allow better prioritisation among longtermists and will finally give a common currency to longtermists, collapsologists and totalising consequentialists who favour non-longtermism. This will give these groups more scope for resolving disagreements and perhaps finding moral trades.
This post explains how to use the calculators, and how to interpret their results.
I argued earlier in this sequence that the classic concept of ‘existential risk’ is much too reductive. In short, by classing...
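The calculators themselves aren't reproduced in this preview, but the general shape of the calculation - the probability of reaching a secure interstellar existence after zero or more major catastrophes - can be sketched as a toy absorbing Markov chain. Everything below (the choice of states, the per-century transition probabilities) is an illustrative assumption, not the post's actual model:

```python
import numpy as np

# Toy absorbing-Markov-chain sketch (illustrative parameters only, not the
# post's actual calculators). States:
#   0 = industrialised civilisation (transient)
#   1 = post-catastrophe civilisation (transient)
#   2 = extinction (absorbing)
#   3 = secure interstellar existence (absorbing)
# Each entry is a hypothetical per-century transition probability.
P = np.array([
    [0.89, 0.10, 0.002, 0.008],  # from industrialised civilisation
    [0.05, 0.93, 0.015, 0.005],  # from post-catastrophe civilisation
    [0.0,  0.0,  1.0,   0.0  ],  # extinction is permanent
    [0.0,  0.0,  0.0,   1.0  ],  # interstellar existence assumed secure
])

# Standard absorbing-chain algebra: B = (I - Q)^-1 R gives the probability
# of ending in each absorbing state, starting from each transient state.
Q = P[:2, :2]  # transient -> transient
R = P[:2, 2:]  # transient -> absorbing
B = np.linalg.inv(np.eye(2) - Q) @ R

print(f"P(interstellar | start industrialised)    = {B[0, 1]:.3f}")
print(f"P(interstellar | start post-catastrophe)  = {B[1, 1]:.3f}")
```

With these made-up numbers the chance of eventual interstellar security is about 39% starting from today and about 35% starting from a post-catastrophe world, which illustrates the 'common currency' idea: a single model assigns comparable probabilities to the direct path and to the recovery-after-collapse paths.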
Another effect I'm very concerned about is the unseen effect on the funding landscape. For all that EVF organisations are said to be financially independent, none of them seem to have had any issue getting funding, primarily from Open Phil (generally offering market-rate or better salaries and in some cases getting millions of dollars for marketing alone), while many other EA orgs - and, contra the OP, there are many more* outside the EVF/RP net than within - have struggled to get enough money to pay a couple of staff a living wage.
* That list excludes regional EA subgroups, of which there are dozens, and there would no doubt be more if a small amount of funding were available.
It doesn't matter whether you'd have been hypothetically willing to do something for them. As I said on the Facebook thread, you did not consult with them. You merely informed them they were in a game which, given the social criticism Chris has received, had real-world consequences if they misplayed. In other words, you put them in harm's way without their consent. That is not a good way to build trust.
The downvotes on this comment seem ridiculous to me. If I email 270 people to tell them I've carefully selected them for some process, I cannot seriously presume they will give up >0 of their time to take part in it.
Any such sacrifice they make is a bonus, so if they do give up >0 time, it's absurd to ask that they give up even more time to research the issue.
Any negative consequences are on the person who set up the game. Adding the justification that 'I trust you' does not suddenly make the recipient more obligated to the spammer.
My impression is that many similar projects are share houses or other flat hierarchies. IMO a big advantage of the model here is its top-down approach, where we trustees/managers view it as a major part of our job to limit and mitigate interpersonal conflicts, zero-sum status games, etc.
Whatever you call it, they've got to identify some alternative, even if only tacitly by following some approximation of it in their daily life.
I would like to write an essay about that eventually, but I figured persuading PUs of the merits of HU was lower hanging fruit.
For what it's worth, I have a lot of sympathy with your scepticism - I would rather (and believe it possible to) build up a system resembling ethics without reference to normativity, 'oughts', or any of their associated baggage. I think the trick will be to properly understand the overlap of ethics and epistemology, both of which are subject to similar questions (how do we non-question-beggingly 'ground' 'factual' questions?), but the former of whose questions people disproportionately emphasise.
[ETA] It's also hard to pin down what the null hypothesis...
Somewhat belatedly, it's worth pointing out that playing the ace first is strictly mechanically better, since it can drop either singleton honour held by the right-hand opponent.
This means that there's no dominant strategy here - you have to have a theory of mind about how likely your left-hand opponent is to rise with the king, and compare that against the very small (by my reckoning about 0.1%) mechanical upside of the ace-first play.
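The thread doesn't spell out the exact suit combination, so as a purely hypothetical illustration, suppose declarer's side holds AJ109x opposite xxx, leaving the defenders with the K, the Q and three small cards. The sketch below uses Monte Carlo to estimate how often the right-hand opponent holds precisely a singleton K or Q - the layouts where cashing the ace first gains outright. Note the quoted 0.1% is the *net* upside after comparing complete lines of play, which this sketch doesn't attempt:

```python
import random

# Hypothetical layout (assumed, not from the thread): declarer's side holds
# A J 10 9 x opposite x x x, so the defenders hold five cards in the suit,
# including both the K and the Q. "-" marks a card in any other suit.
DEFENDERS_SUIT_CARDS = ["K", "Q", "x", "x", "x"]
OTHER_CARDS = ["-"] * 21  # the defenders' remaining 26 - 5 = 21 cards

def p_rho_singleton_honour(trials: int = 200_000) -> float:
    """Monte Carlo estimate of P(RHO holds exactly a singleton K or Q)."""
    deck = DEFENDERS_SUIT_CARDS + OTHER_CARDS
    hits = 0
    for _ in range(trials):
        random.shuffle(deck)
        rho_suit = [c for c in deck[:13] if c != "-"]  # RHO's suit holding
        if rho_suit in (["K"], ["Q"]):
            hits += 1
    return hits / trials

print(f"P(singleton honour with RHO) ~ {p_rho_singleton_honour():.2%}")
# Prints roughly 5.7%, matching the a priori 2 * C(21,12) / C(26,13).
```

Even under these assumptions the singleton-honour frequency is a few percent, not 0.1%; presumably most of those layouts are also picked up by the alternative line, so the mechanical difference only decides a thin slice of deals.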