I have signed no contracts or agreements whose existence I cannot mention.
They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.
Humans very clearly are privileged objects for continuing human values; there is no "giving up on transhumanism". It's literally right there in the name! It would be (and is) certainly absurd to suggest otherwise.
As for CEV, note that the quote you have there indeed does privilege the "human" in human values, in the sense that it suggests giving the AI under consideration a pointer to what humans would want if they had perfect knowledge and wisdom.
Stripping away these absurdities (and the appeals to authority or in-groupedness), your comment becomes "Well, to generalize human values without humans, you could provide an AI with a pointer to humans thinking under ideal conditions about their values". That is clearly a valid answer, but it doesn't actually support your original point all that much, since it relies on humans having some ability to generalize their values out of distribution.
But what statement? Can you just copy your whole message? I just want to try it out myself.
What, specifically, was your prompt?
You don't need to wear the mask at all times; for example, you can buy an air quality monitor and wear the mask only when the sensors detect unsafe levels of contaminants (in which case your fellow passengers ought to be scared).
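To make that decision rule concrete, here's a toy sketch in Python. The thresholds and readings are made up for illustration, not actual safety limits; substitute whatever your particular monitor and local guidance say.

```python
# Toy sketch of the "mask only when the air is bad" decision rule.
# The thresholds below are illustrative, NOT authoritative safety limits.

PM25_THRESHOLD = 35.0   # µg/m³ (illustrative)
CO2_THRESHOLD = 1500.0  # ppm (illustrative)

def should_mask(pm25: float, co2: float) -> bool:
    """Return True if either reading exceeds its threshold."""
    return pm25 > PM25_THRESHOLD or co2 > CO2_THRESHOLD

# Example with made-up readings from a hypothetical monitor:
print(should_mask(pm25=12.0, co2=2100.0))  # True: CO2 is over threshold
```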
Based on the article, these events seem most common on Airbus A320 aircraft, and those are also the aircraft for which these events have been getting more common. Boeing 737s remain under the FAA's industry-wide estimate (the article claims Airbus far exceeds that estimate), and incidence for them has been basically constant since 2015, so if you want to dodge the whole question, I'd just make sure to fly on 737s.
Edit: Reading more, it sounds like the Boeing 787 completely fixes the relevant design issue (running cabin air through the engine compartment).
Many use it for this purpose.
I also think that a more insidious problem with Twitter than misinfo is the way it teaches you to think. There are certain kinds of arguments people make and positions people hold which are very clearly there because of Twitter (though not necessarily because they read them on Twitter). They are usually sub-par, simple-minded, and very vibes-based (read: not evidence-based). A common example here is the "we're so back" sort of hype-talk.
Yeah, so you make probabilistic forecasts about events in the future, then you grade yourself later, updating based on what the results ended up being and how right or wrong you were.
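As a concrete illustration of the grading step, here's a minimal Python sketch using the Brier score (one standard way to grade probabilistic forecasts, though not the only one). The forecasts and outcomes are made-up examples.

```python
# Minimal sketch of grading probabilistic forecasts with the Brier score.
# The forecasts and outcomes below are made-up examples.

forecasts = [
    # (description, predicted probability, did it happen?)
    ("It rains on Saturday", 0.70, True),
    ("I finish the project by Friday", 0.90, False),
    ("Team X wins the match", 0.40, True),
]

def brier_score(prob: float, outcome: bool) -> float:
    """Squared error between the stated probability and the 0/1 outcome.
    Lower is better; always guessing 50% scores 0.25."""
    return (prob - (1.0 if outcome else 0.0)) ** 2

scores = [brier_score(p, o) for _, p, o in forecasts]
print(f"Mean Brier score: {sum(scores) / len(scores):.3f}")
```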
If it's expensive to test a forecast, prioritize it less until you have a better understanding of which experiments are more likely to be valuable and until your prediction capabilities have improved.
I don't understand how this relates to what I said; can you say more?
You were appealing to authority, and being absurd (and also appealing to in/out-groupness). I feel satisfied getting a bit aggressive when people do that. I agree that style doesn't have any bearing on the validity of my argument, but it does discourage that sort of talk.
I'm not certain what you're arguing for in this latest comment. I definitely don't think you show here that humans aren't privileged objects when it comes to human values, nor do you show that your quote from Eliezer recommends any special process beyond a pointer to humans thinking about their values in an ideal situation; those were my two main contentions in my original comment.
I don't think anyone in this conversation argued that humans can generalize from a fixed training distribution arbitrarily far, and I think everyone also agrees that humans think about morality by making small, iterative updates to what they already know. But, of course, that does still privilege humans. There could be some consistent pattern to these updates, such that something smarter wouldn't need to run the same process to know the end result, but that pattern would still be a pattern about humans.