1.) Conflict theory in practice: you see conflicts of interest, explain them to your ingroup, and if they don't agree, they are corrupted by the enemy.
2.) Mistake theory in practice: you identify behavior as bad, explain that to everybody, and if they don't agree, either move to 1.) or conclude that people are very stupid.
It would be great if people first did some literature research before presenting their theory of life, the universe, and everything. If they did not find any literature, they should say so.
I considered looking for any studies or documentation about whether blog and website posts are improved by prior research or references. But then I got distracted, so I just wrote this comment instead.
German Jokes translated by ChatGPT
Me:
Here's a German joke, could you please translate it to English such that the joke is still funny? Zwei Kerzen unterhalten sich. Sagt die eine zur anderen: "Ist Wasser eigentlich gefährlich?" Die andere antwortet: "Ja, davon kannst du ausgehen!"
ChatGPT:
Here's a translation of the German joke into English:
Two candles are talking to each other. One says to the other, "Is water dangerous?" The other answers, "Yes, you can count on it!"
The joke still retains its humor in English, as it relies on a simple play on words ...
Saying that "the control system" does something is about as informative as saying that "emergence" is the cause of something.
Currently reading Fooled by Randomness, almost 20 years after it was published. By now I have read about a third of it. So far, it seems neither very insightful nor dense; all the insights (or observations) seem to be what you can read in the (relatively short) Wikipedia article. It is also not extremely entertaining.
I wonder whether it was a revealing, revolutionary book back in the day, or whether it reads differently to people with a certain background (or lack thereof), such that my impression is, in some sense, biased. I also wonder whether the other b...
New results published in Cell suggest that SARS-CoV-2 enters the body via the nasal mucosa and then reaches deep parts of the lung via body fluids, and possibly the brain. A second part of the same study suggests that people who had SARS or MERS may have partial immunity against SARS-CoV-2. (Disclaimer: I only read a newspaper summary.)
My impression is that the OpenAI thing has a larger effective negative impact on the world than the FTX thing, but fewer people will notice it.
People worried about AI alignment: At some point, an AI might succeed in convincing people to give it access to the internet!
Big internet company seeing an AI: Hm, we could just integrate the AI into our search engine.
A list of some beliefs of which I changed my subjective probabilities because of FTX (in one direction or the other):
Seeing a blog post from 2019 that called the UK and Australia fascistic because melatonin is a prescription medicine makes me update slightly in the direction that discourse norms in the EA/LW communities did not decline due to corona as much as I had thought.
Having put some thought into the 80,000 Hours career planning document, I think it is time for the next "some weeks of thinking" project.
Either it's gonna be similar planning processes:
Or it's gonna be concrete learning projects:
Y...
The results of Bob Jacobs's LessWrong survey are quite interesting. It's a pity the sample is so small.
The visualized results (link in his post) are univariate, but I would like to highlight some things:
49 of 56 respondents identify as "White"
53 of 59 respondents were born male, and 46 of 58 identify as male cisgender
47 of 59 identify as heterosexual (comparison: https://en.wikipedia.org/wiki/Demographics_of_sexual_orientation)
1 of 55 works in a "blue collar" profession
Most people identify as "left of c...
My model is that in USA most intelligent people are left-wing. Especially when you define "left-wing" to mean the 50% of the political spectrum, not just the extreme. And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.
(Note that I am not discussing here why this is so. Maybe the left-wing is inherently correct. Or maybe the intelligent people are just more likely to attend universities where they get brainwashed by the establishment. I am not discussing the cause here, merely observing the outcome.)
So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?
I don't see where left-wing LessWrongers are denounced as right-wing extremists.
For example, "neoreaction" is the only flavor of politics that is mentioned in the Wikipedia article about LessWrong. It does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how ...
The wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated.
It's not true, though! The article claims: "The neoreactionary movement first grew on LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology".
I mean, okay, it's true that we've had discussions on eugenics and evolutionary psychology, and it's true that a few of the contrarian nerds who enthusiastically read Overcoming Bias back in the late 'aughts were also a few of the contrarian nerds who enthusiastically read Unqualified Reservations. But "first grew" (Wikipedia) and "originated" (your comment) really don't seem like a fair summary of that kind of minor overlap in readership. No one was doing neoreactionary political theorizing on this website. Okay, I don't have an exact formalization of what I mean by "no one" in the previous sentence because I haven't personally read and remembered every post in our archives; maybe there are nonzero posts with nonnegative karma that could be construed to match this description. Still, in essence, you can only make the claim "true" by gerrymandering the construal of those words.
And yet the cha...
You would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for it. It seems to have become acceptable to attribute just about any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.
I remember reading a post about how the US Navy's number of ships went down while the average age of its ships went up a lot over time. It was about how the US military in general spends a lot of its money very inefficiently.
Unfortunately, it might have been a decade since I read it, and I can't find it. Does anybody here know the post I'm referring to?
It's funny that in the interview episode "Rob Wiblin on how he ended up the way he is" of the 80,000 Hours podcast, Misha Saul says that parents don't have much of an influence on the development of their own children (biodeterminism), while at the same time the whole interview is about important, formative experiences.
In the last weeks, I saw some posts or comments arguing why it would be in the self-interest of an extremely powerful AI to leave some power or habitat or whatever to humans. This seems to try to answer the broader question "Why should AI do things that we want even though we are powerless?" But it skips the complicated question "What do we actually want an AI to do?" If we can answer that second question, then maybe the whole "please don't do things that we really do not want" quest becomes easier to solve.
When I write a post and select text, a menu appears where I can select text appearance properties etc. However, in my latest post, this menu does not appear when I edit the post and select text. Any idea why that could be the case?
Metaculus should adjust election forecasting questions such that forecasters are forced to make their forecasts add up to 100% over all options (with an additional option "none of the above").
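A minimal sketch of what that constraint could look like, assuming a forecast given as a mapping from option to probability (the option names and the `normalize` helper are illustrative, not Metaculus's actual interface):

```python
def normalize(forecast):
    """Rescale probabilities so they sum to exactly 1.0 across all options."""
    total = sum(forecast.values())
    if total <= 0:
        raise ValueError("forecast must contain positive probabilities")
    return {option: p / total for option, p in forecast.items()}

# Raw, inconsistent forecast: the options sum to 110%.
raw = {"Candidate A": 0.55, "Candidate B": 0.50, "none of the above": 0.05}
fixed = normalize(raw)
assert abs(sum(fixed.values()) - 1.0) < 1e-9
```

An alternative to silent rescaling would be to reject any submission whose options do not sum to 100%, which forces forecasters to resolve the inconsistency themselves.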
Noah Smith writes about
"1) AI flooding social media with slop, and 2) foreign governments flooding English-language social media with disinformation. Well, if you take a look at the screenshot at the top of this post, you’ll see the intersection of the two!"
Check the screenshot in his post and tell me whether you see a rabbit or a duck.
I see a person named A. Mason writing on Twitter and ironically subverting the assumption that she is a bot by answering with the requested poem, but ending it with a sentence about Biden that confirms her origin...
How much does technological progress of consumer goods increase happiness?
Sure, I prefer a modern TV to the TV we had in the 90's if I have to choose, but if I compare "pleasure I had from watching Star Trek TNG, which was only possible at 3 pm" to "pleasure I have from watching one of the many currently available TV shows at any time of the day", I am not really sure the second one feels much better.
The "Snake cult of consciousness" theory sounds extremely fascinating. At the same time, it also sounds like the explanations of why the pyramids were built by aliens. For laypeople, it is hard to distinguish between important insights and clever nonsense.
Why do nuclear-energy fan articles often mention France as a positive example without discussing the drastic reduction of French nuclear power generation in 2022?
In the recent EconTalk podcast with Tyler Cowen, Cowen explicitly and strictly demands a mathematical model of AI risk, claiming that nothing like that exists.
At the same time, he sees "Hayekian" arguments as a kind of benchmark. As far as I know, there is no mathematical benchmark model of the classical Hayek argument.
The same is true for Cowen's demand for loyalty to the US constitution. There is no mathematical model for that.
All of Cowen's claims and demands are asymmetrical. The doomers are emotional, he says. The non-doomers are just reasonable.
One thing it might be helpful for powerful beings to learn: "In general, it is not okay to enforce your wishes on others."
However, ethics is complicated and you will probably find many cases where enforcing your wishes on others is actually okay.
Moreover, if the training dataset is humanity's behavior, then it is probably a problem that such enforcing takes place all the time.
Typical fiction has probably framed thinking about the development of intelligence in non-humans in bad ways.
I sometimes read claims like "doing strength training for more than [insert small number of times per week] is useless, see scientific evidence". Another impression is: people doing lots of exercise per week get big muscles. How do these observations relate to reality? What is the fitness production function, or the optimal schedule, if one really wanted to become something like a Schwarzenegger? (I don't. This is just curiosity.)
Several posts in this or the EA Forum seem to suggest that it's kind of standard among people in the community to have invested a large share of their personal wealth in crypto, and in FTX in particular.
Why should this be assumed? Are there statistics on that?
I used to believe that most people have a diversified portfolio, both inside and outside of the communities. Isn't that the case?
What are the properties that make Substack so successful? At first glance, Substack blogs seem less structured than e.g. wordpress.com blogs. On Substack, the "Archive" of a blog is just a long list. Distributing new articles via email does not seem like a spectacular feature, and in any case it should be possible on other blog platforms as well. What am I missing?
There may be a certain risk that downvoting culture replaces "comment and discussion" culture (at least at the margins). One reason may be that there is no clear idea of what a downvote (or upvote) actually means, so some people may simply upvote a comment or post whose content confirms their worldview (and vice versa).
When people write articles containing both wrong statements and statements without evidence or source, you can use your knowledge of the wrong statements to update the probability that the unsourced statements are true.
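As a toy illustration of that update (all numbers invented for the example): suppose authors are either careful (statements true 90% of the time) or sloppy (50%), and you spot three statements you know are wrong.

```python
# Toy Bayesian update: spotting wrong statements lowers the probability
# that the same article's unsourced statements are true.
# The two author types and all probabilities are invented assumptions.

p_careful = 0.5                       # prior: author is careful
acc_careful, acc_sloppy = 0.9, 0.5    # chance a given statement is true, by type

# Observe 3 statements known to be wrong.
like_careful = (1 - acc_careful) ** 3
like_sloppy = (1 - acc_sloppy) ** 3
post_careful = (p_careful * like_careful) / (
    p_careful * like_careful + (1 - p_careful) * like_sloppy
)

# Probability that an unsourced statement is true, before and after the update.
prior_true = p_careful * acc_careful + (1 - p_careful) * acc_sloppy
post_true = post_careful * acc_careful + (1 - post_careful) * acc_sloppy
assert post_true < prior_true  # the wrong statements drag down the rest
```

Under these made-up numbers the probability that an unsourced statement is true drops from 0.7 to roughly 0.5, i.e. close to the sloppy-author baseline.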
I would love to see examples of contributions with actual steelmanning instead of just seeing people who pay lipservice to it.
Among EA-minded people interested in preventing climate change, it seems Clean Air Task Force (CATF) is seen very favorably. Why? The "Climate Change Cause Area Report" by Founders Pledge (PDF) gives an overview.
CATF's work is introduced as follows:
"It was founded in 1996 with the aim of enacting federal policy reducing the air pollution caused by American coal-fired power plants. This campaign has been highly successful and has been a contributing factor to the retirement of a large portion of the US coal fleet." (p. 5)
On p. 88, you will read:
"Do th...
In the latest episode of the Bayesian Conspiracy podcast, the hosts discuss moral behavior in the context of FTX, and in that context they treat the claim that the FTX thing is evidence that you shouldn't trust people who look kind of weird as a totally normal part of a conversation. Could we maybe just slightly slow down all these discussions?
Still waiting for a change of the general attitude in certain rationality etc. circles concerning Elon Musk, a change that would also take into account what kind of news sources Musk promotes on X.