Rank: #10 out of 4859 in peer accuracy at Metaculus for the time period of 2016-2020.
Can you imagine a similar piece about disagreements between the EPP and ALDE? And have you, by the way, even heard those acronyms?
Yes, they are European political parties, but hardly anyone has heard of them.
They are not political parties, they are political groups. It's like a caucus in the US Congress, where an independent can caucus together with Democratic members.
The EU executive, on the other hand, where most of the real power lies, is apolitical, and the individual commissioners are appointed by member states, not by political parties.
Calling the decisions of member states apolitical is wrong. They are all political appointments. It's not like, for example, the British system, where a private secretary appointed to a minister is an apolitical appointment.
The government has also been making improvements like adding fencing, and you could probably fence the whole thing for under $100M [3].
While this would solve the problem of deaths, it would also harm people by increasing travel times. The article suggests that some people cross the tracks because otherwise they would have to walk 10 minutes more to work. It's easy to do a lot of harm with safety interventions as well.
A superintelligence would likely interact with a large part of the world via a browser. Building a browser that works well with their AI seems to me like it helps develop an AI that can do tasks in the real world.
It also provides a lot of training data of the AI acting as an agent, which can be valuable for building superintelligence.
I don't think Elon's team says "no" to Elon unless he asks for something that's crazy.
Elon spoke about releasing Grokipedia in an open-source fashion, which suggests other companies could also train on it instead of it being a competitive advantage for training Grok.
It's a way to make Claude/Gemini/ChatGPT less woke given that Grokipedia is supposed to be non-woke.
I'm not sure they were working on consistency. If you run a lot of Deep Research queries on all sorts of questions, you get a lot of text content that you can feed into your algorithm without needing it to be fully consistent.
We had a pandemic that might have been caused by pandemic preparedness funding. Increasing pandemic preparedness funding isn't the most straightforward lesson to draw from that.
We have some increased technical capacity and knowledge, e.g. on how to manufacture mRNA vaccines, but there are very large swaths of people who have learned the opposite lesson: that pandemics aren't real and vaccines don't work, or that the whole thing was orchestrated and governments shouldn't be trusted[4].
The fact that governments failed massively is a key learning from the pandemic. One of my most memorable experiences was wearing my FFP2 mask on public transportation in Berlin, which was legally required at the time, while the security personnel on public transportation were exempt from the FFP2 requirement and wore surgical masks.
While hospitals have regular training to make sure that their staff wear FFP2 masks properly, there was no public education from the government on how to wear FFP2 masks properly, and no places where you could go to test whether you were wearing your FFP2 mask properly, but there was a legal requirement that everyone wear FFP2 masks.
In the US, the NIH stopped believing in the importance of science and thus didn't fund any trials to figure out what you need to do to make community masking work successfully. Somehow, the policy position in the US ended up being that everyone should wear a cloth mask, and people didn't let non-NIH studies of community masking affect their habits very much.
If you have the government LARPing a pandemic response, then the lesson that the government shouldn't be trusted is the right one. The key question you should ask is what you could do so that next time the government isn't LARPing but actually has a decent pandemic response.
When Elon talked at the All-In Summit about how they want to train new versions of Grok entirely on synthetic data, David Sacks asked him to create Grokipedia as a Wikipedia alternative. To me, Elon sounded at that moment like he hadn't really considered the idea before, found that it made sense, and promised to talk to his team about it.
Of course, if LW were truly meritocratic (which it should be), this shouldn’t matter — but in my experience, it descriptively does.
That's not really true if people are popular rationalist thinkers because of their skill at rationalist writing. Meritocracy does not imply that people get judged on individual pieces of their work; a meritocracy where people are primarily judged on their total output would still be a meritocracy.
I think the problem is more that posts that make points which are popular and fit neatly into the worldview of the reader are more likely to get upvotes than posts that challenge the reader's worldview and would require them to update it. Jimmy's introductory post to his sequence is currently, as I'm writing this, at 12 karma.
While the writing quality might be improved, it's a post that sets out to challenge the reader's conception of how human reasoning works in practical contexts, and that's why it's at low karma. I would love to see more posts like Jimmy's, which are about the core of how rationality works, over posts that feel good to read and get a lot of upvotes without really changing minds much.
When making donations like that, is there a way to add a note explaining why you donated? I would expect that if Scott Wiener knows that a lot of his donations came because of AI safety, he might spend more of his time on the cause if elected.
Your statement only holds insofar as you ignore (or deny the legitimacy of) the role of context and non-verbal communication in determining the overt meaning of a statement.
Statements don't have a single meaning; they have layers of meaning. "Overt" is a word with a specific meaning, and the overt meaning of something can be different from its deeper meaning.
Take a wife asking her husband whether she looks fat in her dress.
If you use a model like Schulz von Thun's, you get an expression of meaning on all four layers. The information layer, about whether or not she looks fat in the dress, is the overt layer. However, most people also understand that the question has a deeper relationship layer, and answering it on the information layer would offend many wives.
Zach argues in this post that we should call this kind of question "bad faith".
Sarcastic statements frequently have different overt and deep meanings. You need to understand the context to understand the deep meaning.
I have not heard about Carlson's and Maddow's cases.
Tucker Carlson and Rachel Maddow used to be two of the most popular prime-time show hosts; are you saying you have no idea of their programs? What do you think a Tucker Carlson or Rachel Maddow fan would say if you asked them whether they are truthful?
What I write about egg shredding reflects positions held a few years ago, but it illustrates the principle:
In Germany, we don't really like throwing little baby chicks into the shredder. If Germany were completely on its own, we would require eggs to be screened early to prevent this from happening, even if that means our eggs are a few cents more expensive. However, we live next to Poland. If we required eggs produced in Germany to undergo more screening, and cheaper Polish eggs then outcompeted German eggs in our supermarkets, we wouldn't want that. A common market means that we can't simply forbid Polish eggs, so there's a need for a shared agricultural policy that somehow reconciles the different ideas about how eggs should be produced.
If one country decides to increase subsidies for apples and then outcompetes other European countries' apple growers, or creates pressure for those countries to add apple subsidies of their own to protect their growers, that isn't great either.
If you have a common market, a common agricultural policy does make some sense.