(This is Kelsey Piper). I am quite confident the contract has been widely retracted. The overwhelming majority of people who received an email did not make an immediate public comment. I am unaware of any people who signed the agreement after 2019 and did not receive the email, outside cases where the nondisparagement agreement was mutual (which includes Sutskever and likely also Anthropic leadership). In every case I am aware of, people who signed before 2019 did not reliably receive an email but were reliably able to get released if they emailed OpenAI HR.
If you signed such an agreement and have not been released, you can of course contact me on Signal: 303 261 2769.
I am quite confident the contract has been widely retracted.
Can you share your reasons for thinking this? Given that people who remain bound can’t say so, I feel hesitant to conclude that people aren’t still bound without clear evidence.
I am unaware of any people who signed the agreement after 2019 and did not receive the email, outside cases where the nondisparagement agreement was mutual (which includes Sutskever and likely also Anthropic leadership).
Excepting Jack Clark (who works for Anthropic) and Remco Zwetsloot (who left in 2018), I would think all the po...
Cross posting from the EA Forum:
It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here's what I came away with:
On December 15, Alice states that she'd had very little to eat all day, that she'd repeatedly tried and failed to find a way to order takeout to their location, and asks that people go to Burger King and get her an Impossible Burger, which in the linked screenshots they decline to do because they don't want to get fast food. She asks ag...
(Crossposted)
It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice's language throughout emphasizes how she'll be fine, it's no big deal [...] I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people.
100% agreed with this. The chat log paints a wildly different picture than what was included in Ben's original post.
...Given my experience with talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th
These texts have weird vibes from both sides. Something is off all around.
That said, what I'm seeing: A person failed to uphold their own boundaries or make clear their own needs. Instead of taking responsibility for that, they blame the other person for some sort of abuse.
This is called playing the victim. I don't buy it.
I think it would generally be helpful if people were informed by the Drama Triangle when judging cases like these.
Crossposted from the EA Forum:
We definitely did not fail to get her food, so I think there has been a misunderstanding - it says in the texts below that Alice told Drew not to worry about getting food because I went and got her mashed potatoes. Ben mentioned the mashed potatoes in the main post, but we forgot to mention it again in our comment, which has now been updated.
The texts involved on 12/15/21:
I also offered to cook the vegan food we had in the house for her.
I think that there's a big difference between telling everyone "I didn't get the food I wanted,...
We celebrate the May date because May is a good time for a holiday (not close to other major holidays, good weather in our part of the world) and December is very close to the date of Solstice and also close to Christmas, Thanksgiving, etc.
I appreciate this post. I get the sense that the author is trying to do something incredibly complicated and is aware of exactly how hard it is, and the post does it as well as it can be done.
I want to try to contribute by describing a characteristic thing I've noticed from people who I later realized were doing a lot of frame control on me:
Comments like 'almost no one is actually trying but you, you're actually trying', 'most people don't actually want to hear this, and I'm hoping you're different', 'I can only tell you this if you want to hear...
the common thread is that you, the listener, are special, and the speaker is the person who gets to recognize you as special, and the proof of your specialness is...
The speaker has granted you a "special" status, and now they can also set the rules you have to follow unless you want that status revoked. How much are you willing to pay in order to keep that precious status?
Antidotes: "I am not special" or "whether I am special or not, does not depend on whether X thinks I am".
Ahhh these are fantastic examples that clearly map onto frame controllers I know and I didn't think of it when writing this post; really great points.
Were the positive tests from the same batch/purchased all together?
And same question for a positive test: if you get a positive and then retest and get a negative, do you have a sense of how much of an overall update you should make? I've been treating that as 'well, it was probably a false positive then', but multiplying the two updates together would imply it's probably legit?
Are test errors going to be highly correlated? If you take two tests (either of the same type or of different types) and both come back negative, how much of an update is the second test?
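For what it's worth, here's a minimal sketch of the "multiply the two updates together" reasoning above, using the odds form of Bayes. The sensitivity/specificity figures and the prior are illustrative placeholders, not data about any particular test, and errors are assumed independent, which is exactly what the correlation question doubts:

```python
# Minimal sketch: combining test results via likelihood ratios (odds form of Bayes).
# Sensitivity/specificity and the prior below are illustrative placeholders, not
# real test data, and errors are assumed independent.
def posterior(prior, results, sensitivity=0.8, specificity=0.98):
    odds = prior / (1 - prior)
    for positive in results:
        if positive:
            odds *= sensitivity / (1 - specificity)   # likelihood ratio of a positive
        else:
            odds *= (1 - sensitivity) / specificity   # likelihood ratio of a negative
    return odds / (1 + odds)

prior = 0.05  # illustrative pre-test probability
print(posterior(prior, [True]))          # one positive: ~0.68
print(posterior(prior, [True, False]))   # positive, then negative: ~0.30
print(posterior(prior, [False, False]))  # two negatives: ~0.002
```

Under those assumptions the positive result's likelihood ratio dominates the negative's, because false positives are much rarer than false negatives, which is why "positive then negative" can still land well above the prior. If errors are highly correlated (same batch, same swab technique), a second result of the same kind adds much less information than this naive multiplication suggests.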
Given your described desiderata, I would think that a slightly more rural location along the coast of California ought to be up there. Large properties in Orinda are not that expensive (there are gorgeous 16-30 acre lots for about $1 million on Zillow right now), and right now, for better and for worse, the Bay is the locus of the rationalist and EA communities and of the tech industry; convincing people to move to a pastoral retreat an hour from the city everyone already lives in is a much easier sell and smoother transition than convincing them to move acros...
That is, of course, consistent with it being net neutral to give people money which they spend on school fees, if the mechanism here is 'there are X good jobs, all of which go to people who've had formal education, but formal education adds no value here'. In that scenario it's in any individual's interest to send their kid to school, but all of the kids being sent to school does not improve anything on net.
It seems kind of unlikely to me that primary school teaches nothing - and even just teaching English and basic literacy and numeracy seems really valuable - but if it does, that wouldn't make this woman irrational, though it would mean cash transfers spent on schooling are poorly spent overall.
Thanks for answering this. It sounds like the things in the 'maybe concerns, insufficient info' categories are largely not concerns, which is encouraging. I'd be happy to privately contribute salary and CoL numbers to someone's effort to figure out how much people would save.
https://angel.co/manchester/jobs is a little discouraging; there are Lead Java Developer roles listed for £30-50k with no equity, which would pay $150,000-$180,000 base in SF and might well see more than $300k in total compensation. Even if you did want to buy a house,...
Are you disagreeing with my prediction? I'd be happy to bet on it and learning that two of the four initial residents are trans women does not change it.
I wrote a post listing reasons why I would not move to Manchester. Since writing it I've gotten more confident about the 'bad culture fit' conclusion by reading bendini's blog. I would also add that the part of the community with the best gender ratio (rationalist tumblr) and the adjacent community with the best gender ratio (Alicorn's fan community) are also the ones with the norms that the founders of this project seem to find most objectionable, and the ones who seem to be the worst culture fit for the project. I think things li...
I would live in this if it existed. Buying an apartment building or hotel seems like the most feasible version of this, and (based on very very minimal research) maybe not totally intractable; the price-per-unit on some hotels/apartments for sale is like $150,000, which is a whole lot less than the price of independently purchasing an SF apartment and a pretty reasonable monthly mortgage payment.
I am suspicious of this as an explanation. Most straight-identified women I know who will dance with/jokingly flirt with other women are in fact straight and not 'implicitly bisexual'; plenty of them live in environments where there'd be no social cost to being bisexual, and they are introspective enough that 'they are actually just straight and don't interpret those behaviors as sexual/romantic' seems most likely.
Men face higher social penalties for being gay or bisexual (and presumably for being thought to be gay or bisexual) which seems a more likely e...
I am not sure that we're communicating meaningfully here. I said that there's a place to set a threshold that weighs the expense against the lives. All that is required for this to be true is that we assign value to both money and lives. Where the threshold is depends on how much we value each, and obviously this will be different across situations, times, and cultures.
You're conflating a practical concern (which behaviors should society condemn?) and an ethical concern (how do we decide the relative value of money and lives?) which isn't even a particula...
Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.
It doesn't have to be well-known. Morally there's a threshold. Everyone who is trying to act morally is trying to ascertain where it should be, and everyone who isn't acting morally is taking advantage of the...
Assume there's a threshold at which sending the ship for repairs is morally obligatory (if we're utilitarians, that is the point at which the cost of the repairs is less than the probability of the ship sinking times the cost if it does, taking into account the lives aboard, but the threshold needn't be utilitarian for this to work).
Let's say that the threshold is 5% - if there's more than a 5% chance the ship will go down, you should get it repaired.
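On the utilitarian reading, that 5% falls out of a one-line comparison between the repair cost and the loss if the ship sinks. Here is a toy version with made-up numbers, chosen only so the threshold comes out at 5% (none of these figures are from the post):

```python
# Toy version of the repair threshold, with made-up numbers (not from the post).
# Repairs become obligatory once the expected loss from sinking exceeds the repair cost.
repair_cost = 50_000        # hypothetical cost of overhauling the ship
loss_if_sinks = 1_000_000   # hypothetical value of ship, cargo, and lives aboard

threshold = repair_cost / loss_if_sinks   # 0.05, i.e. the 5% in the example above
p_sink = 0.08                             # the owner's honest estimate of sinking

if p_sink > threshold:
    print(f"p_sink={p_sink:.0%} > threshold={threshold:.0%}: repairs are obligatory")
else:
    print(f"p_sink={p_sink:.0%} <= threshold={threshold:.0%}: sailing is permissible")
```

Nothing in the argument hangs on these particular numbers; the point is just that once you value both money and lives, some such threshold exists.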
Mr. Grumpy's thought process seems to be 'I alieve that my ship will sink, but this alief is harmful and I should avoi...
The next passage confirms that this is the author's interpretation as well:
Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.
And clearly what he is guilty of (or if you prefer, blameworthy) is rati...
A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that sh...
From the upvotes I'm concluding it's worthwhile to go ahead and write it: I agree it serves as a pretty decent example of applying rationality concepts for long-term decision making. It'll have to wait a week until Thanksgiving Break, though.
I'm a freshman in college now, but a post or two analyzing the reasons for choosing an (expensive, high status) private college versus an (essentially free, low status) state college, or going to school in America versus Europe versus somewhere else, would have been immensely valuable to me a year ago.
This would belong on LessWrong because typical advice on this topic is either "follow your dreams, do what you love, everything will work out", or "you're an idiot to take on debt, if you can't pay your own way through college you're a lazy, e...
give 300 bucks to the Against Malaria Foundation, saving the lives of 1-3 children.
Source? The most recent estimate I've seen was that saving a life costs around $2000.
Fixed, sorry! (I'm female and that mistake doesn't bother me at all, but I know it really annoys some people. I'll be more careful in future.)
I completely agree that characterizing RW as contributing to existential risk is absurd.
Thanks for linking to the context! In fairness, though, if people are citing RationalWiki as proof that LessWrong has a "reputation", then devoting a discussion-level post to it doesn't strike me as excessive.
(On a related note: I hadn't read Jade's comments, but I did after you flagged them as interesting; they struck me as totally devoid of value. Would you mind explaining what you think the valid concern he/she's expressing is?)
LW paying RW this much attention while also claiming that the entire future of human value itself is at stake looks on the surface like a failure of apportionment of cognitive resources, but perhaps I've missed something.
What do you mean by "this much attention"? If Konkvistador's links at the top are reasonably comprehensive (and a quick search doesn't turn up much more), there have been 2 barely-upvoted discussion posts about RW in four years, which hardly seems like much attention. For comparison, LW has devoted several times as much energy...
I suppose I'm really thinking of an LW regular telling me in conversation that they consider RW a serious existential risk. You know, serious enough to even think about compared to everything else in the world.
... and if your utility scales linearly with money up to $1,001,000, right?
I don't think there's anything wrong with the topic, if it comes with a little bit of discussion along the lines of palladius's comment below, or along the lines of "What evidence would convince us that the sanity waterline is actually rising, as opposed to just more people being raised non-religious?"
It would be very interesting to see this study in the context of trendlines for other popular sanity-correlated topics, such as belief in evolution, disbelief in ghosts, non-identification with a political party, knowledge about GMOs, etcetera, even though there are lots and lots of confounding variables.
One alone, though, without commentary about rationality, probably does not belong on LessWrong.
I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.
I just want to experience being wrong sometimes.
Your comments are consistent with wanting to be proved wrong. No one experiences "being wrong" - from the inside, it feels exactly like "being right". We do experience "realizing we were wrong", which is hopefully followed by updating so that we once again believe ourselves to be right. Have you never changed your mind about something? Realized on your own that you were mistaken? Because you don't need to "lose" or to have other people "beat you" to experie...
It looks like I won here, but I thought of some reasons why I may still have lost:
You should stop thinking about discussions in these terms.
My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.
It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know.
Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the tim...
Generally speaking, there are fewer upvotes later in a thread, since fewer people read that far. If the children of your comment have more karma than your comment, it's reasonable to assume that people saw both comments and chose to upvote theirs, but if a parent of your comment has more karma, you can't really draw any inference from that at all.
Not to fall into the "trap" of buying warm fuzzies? Do you advocate a policy of never buying yourself any warm fuzzies, or just of never buying warm fuzzies specifically through donating to charity (because it's easy to trick your brain into believing it just did good)?
Yes, I am deeply suspicious of Eliezer's post on warm fuzzies vs utilons because while I accept that it can be a good strategy, I am skeptical that it actually is: my suspicion is that for pretty much all people, buying fuzzies just crowds out buying utilons.
For example, I asked Konkvistador on IRC, since he was planning on buying fuzzies by donating to this person, what utilons he was planning on buying, especially since he had just mentioned he had very little money to spare. He replied with something about not eating ice cream and drinking more water.
Looks like PMing is down, actually. You can email me at kelseyp [at] stanford.edu (not written out to avoid spambots).
I was accepted to Stanford this spring. At the welcome weekend, we talked a lot with the admissions representatives about what they're looking for - I'd be happy to share tips and my own essays. PM me.
The July matching drive was news to me; I wonder how many other readers hadn't even heard about it.
Is there a reason this hasn't been published on LessWrong, i.e. with the usual public-commitment thread?
Also, if a donation is earmarked for CFAR, does the "matching" donation also go to CFAR?
Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word in two different ways, and is the reason your post is so confusing to me.
On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, coul...
most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.
Agree.
For most people, a rational ethics system costs far more than it provides in benefits.
I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious...
I think what you're trying to say is:
"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.
If that's right, here are my thoughts:
Obviously you don't need to do all moral reasoning from scratc...
He also says:
As in so many other areas, our most important information comes from reality television.
I'm guessing both are a joke.
Your article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think?
I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?
Racism, sexism and homophobia are the three primary evils for politically correct professors. From what I've read of pick-up (i.e. Roissy's blog) it is in part predicated on a negative view of women's intelligence, standards and ethics, making it indeed sexist.
See this to get a feel for how feminists react to criticisms of women. Truth is not considered a defense for this kind of "sexism". (A professor suggested I should not be teaching at Smith College because during a panel discussion on free speech I said Summers was probably correct.)
I've ...
My favorite explanation of Bayes' Theorem barely requires algebra. (If you don't need the extended explanation, just scroll to the bottom, where the problem is solved.)
Chapter 79:
I think we're supposed to be able to figure this one out. My mental model of Eliezer says he thinks he's given us more than enough hints, and we have a week to wait despite it being a short, high tension chapter. He makes a big deal out of how Harry only has thirty hours, which isn't enough; he gives us a week, and a lot of information Harry doesn't have.
Who benefits from isolating Harry from both of his friends, and/or making him do something stupid to protect Hermione in front of the most powerful people in the Wizarding World?
Evidence again...
It's also mentioned in Circular Altruism.
...This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).
My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the co
An egoist is generally someone who cares only about their own self-interest; that should be distinct from someone who has a utility function over experiences, not over outcomes.
But a rational agent with a utility function only over experiences would commit quantum suicide if we also assume there's minimal risk of the suicide attempt failing, of the lottery not really being random, etc.
In short, it's an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn't imply MWI is a belief-in-belief.
I believe that my death has negative utility. (Not just because my family and friends will be upset; also because society has wasted a lot of resources on me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.)
Therefore, I believe that the outcome (I win the lottery ticket in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds) which is itself worse than (I don't waste money on a lottery ticke...
In the original books, Harry's cohort was born ten years into an extremely bloody civil war. I always assumed birth rates were extremely low for Harry's age group, which would imply that the overall population is much larger than what you'd extrapolate from class sizes.
Of course, the numbers still don't work. There are 40 kids in canon!Harry's class. Even if you assume that's a tenth of the normal birthrate and the average person lives to 150, you get a wizarding population of about 60,000 (arithmetic sketched below).
In MoR, class sizes are around 120 (more than half the kids are in the ar...
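Spelled out, that back-of-the-envelope calculation is just steady-state population = births per year × average lifespan, where the 10x war depression of the birthrate and the 150-year lifespan are the comment's assumptions rather than canon figures:

```python
# Back-of-the-envelope steady-state population from the canon class-size figure above.
# The 10x war depression of the birthrate and the 150-year average lifespan are
# assumptions from the comment, not canon facts.
canon_class_size = 40         # kids per year-cohort in canon
war_depression_factor = 10    # assume the war cut births to a tenth of normal
average_lifespan = 150        # assumed average wizarding lifespan

births_per_year = canon_class_size * war_depression_factor   # 400
population = births_per_year * average_lifespan               # 60,000
print(population)
```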
Kolmogorov Complexity/Solomonoff Induction and Minimum Message Length have been proven equivalent in their most-developed forms. Essentially, correct mathematical formalizations of Occam's Razor are all the same thing.
I have been in touch with around a half dozen former OpenAI employees who I spoke to before former employees were released, and all of them later informed me they were released; they were not in any identifiable reference class such that I’d expect OpenAI would have been able to selectively release them while not releasing most people. I have further been in touch with many other former employees since they were released who confirmed this. I have not heard from anyone who wasn’t released, and I think it is reasonably likely I would have heard from the...
Thanks, that's helpful context.
I agree it's unsurprising that few rank-and-file employees would make statements, but I am surprised by the silence from those in policy/evals roles. From my perspective, active non-disparagement obligations seem clearly disqualifying for most such roles, so I'd think they'd want to clarify.