All of toothpaste's Comments + Replies

So the idea is that if you get as many people in the AI business/research world as possible to read the sequences, then that will change their ideas in a way that will make them work on AI in a safer way, and that will avoid doom?

I'm just trying to understand how exactly the mechanism that will lead to the desired change is supposed to work.

If that is the case, I would say the critique made by OP is really on point. I don't believe the current approach is convincing many people to read the sequences, and I also think reading the sequences won't necessarily m... (read more)

1 mruwnik
More that you get as many people in general to read the sequences, which will change their thinking so they make fewer mistakes, which in turn will make more people aware both of the real risks underlying superintelligence and of the plausibility and utility of AI. I wasn't around then, so this is just my interpretation of what I read post-facto, but I get the impression that people were a lot less doomish then. There was a hope that alignment was totally solvable.

The focus didn't seem to be on getting people into alignment as much as on it generally being better for people to think better. AI isn't pushed as something everyone should do - rather as what EY knows, and as something worth investigating. There are various places where it's said that everyone could use more rationality, that it's an instrumental goal like earning more money. There's an idea of creating Rationality Dojos, as places to learn rationality the way people learn martial arts. I believe that's the source of CFAR.

It's not that the one and only goal of the rationalist community was to stop an unfriendly AGI - it's just that that is the obvious result of it. It's a matter of taking the idea seriously, then shutting up and multiplying: assuming that AI risk is a real issue, it's pretty obvious that it's the most pressing problem facing humanity, which means that if you can actually help, you should step up.

Business/economic/social incentives can work, no doubt about that. The issue is that they only work as long as they're applied. Actually caring about an issue (as in really caring - oppressed-Christian level, not performative cultural-Christian level) is a lot more lasting, in that if the incentives disappear, people will keep on doing what you want. Convincing is a lot harder, though, which I'm guessing is your point? I agree that convincing is less effective numerically speaking, but it seems a lot more good (in a moral sense), which also seems important. Though this is admittedly a lot more of an...

Won't the goal of getting humans to reason better necessarily turn political at a certain point? After all, if there is one side of an issue that is decidedly better from some ethical perspective we have accepted, won't the rationalist have to advocate that side? Won't refraining from taking political action then be unethical? This line of reasoning might need a little bit of reinforcement to be properly convincing, but it's just to make the point that it seems to me that since political action is action, having a space cover rationality and ethics and not... (read more)

6 ryan_b
Trivially, yes. Among other things, we would like politicians to reason better, and for everyone to profit thereby.

As it happens, this significantly predates the current political environment. Minimizing talk about politics, in the American political-party horse-race sense, is one of our foundational taboos. It is not so strong anymore - once, even a relevant keyword without appropriate caveats would draw piled-on downvotes and excoriation in the comments - but for your historical interest the relevant essay is Politics Is The Mind-Killer. You can search that phrase, or similar ones like "mind-killed" or "arguments are soldiers", to get a sense of how it went. The basic idea was that while we are all new at this rationality business, we should try to avoid talking about things about which we are especially irrational. Of course, at the same time the website was big on atheism, which is an irony we eventually recognized and corrected. The anti-politics taboo softened enough to allow talking about theory, and mechanisms, and even non-flashpoint policy (see the AI regulation posts). We also added things like arguing about whether or not god exists to the taboo list. There were a bunch of other developments too, but that's the directional gist.

Happily for you and me both, political theory tackled well as theory finds a good reception here. As an example I submit A voting theory primer for rationalists and the follow-up posts by Jameson Quinn. All of these are on the subject of theories of voting, including discussion of some real-life examples of orgs and campaigns on the subject, and the whole thing is one of my favorite chunks of writing on the site.
2 mruwnik
It depends on what you mean by political. If you mean something like "people should act on their convictions", then sure. But you don't have to actually go into politics to do that - the assumption being that if everyone is sane, they will implement sane policies (with the obvious caveats of Moloch, Goodhart, etc.). If you mean something like "we should get together and actively work on methods to force (or at least strongly encourage) people to be better", then very much no. Or rather, it gets complicated fast.

You claim that the point of the rationalist community was to stop an unfriendly AGI. One thing that confuses me is exactly how it intends to do so, because that certainly wasn't my impression of it. I can see the current strategy making sense if the goal is to develop some sort of Canon for AI Ethics that researchers and professionals in the field get exposed to, thus influencing their views and decreasing the probability of catastrophe. But is it really so?

If the goal is to do it by shifting public opinion in this particular issue, by making a majority of... (read more)

5 mruwnik
The answer is to read the sequences (I'm not being facetious). They were written with the explicit goal of producing people with EY's rationality skills, in order for them to go into producing Friendly AI (as it was called then). They provide a basis for people to realize why most approaches will by default lead to doom. At the same time, it seems like a generally good thing for people to be as rational as possible, in order to avoid the myriad cognitive biases and problems that plague humanity's thinking, and therefore its actions. My impression is that the hope was to make the world more similar to Dath Ilan.

It's hard to judge this particular case without context, but such sentences can be valid if they convey the general direction a person wants something to move in cases where they can't or shouldn't be overly specific - for example, if they don't know much about the specific subject, or if they want to remain on topic during a talk about a particular issue.

For example, I could say "it's time someone developed a machine that is able to fetch things around the house and bring them to us". It doesn't mean I know anything about engineering or about how this m... (read more)

Paolo Freire

You mean Paulo Freire!

Most apostrophe removals didn't cause any problems, but the "were" in the paragraph before the last one had me confused for a split second.

One of the reasons I was having trouble with the Reagan example when I was reading this for the first time was that I was interpreting it as

“Reagan will provide federal support for unwed mothers AND cut federal support to local governments” is more probable than “Reagan will provide federal support for unwed mothers AND NOT cut federal support to local governments”.

The fact that one of the statements was present in one option and absent from the other made me think that its absence implied it would NOT happen, when that wasn't the case.

I wonder how common that line of reasoning is.
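For what it's worth, the two readings can be written out explicitly. This is just a sketch in probability notation - the labels A and B are mine, not from the original study:

    A = "Reagan will provide federal support for unwed mothers"
    B = "Reagan will cut federal support to local governments"

    Intended comparison: P(A ∧ B) vs. P(A). Since P(A ∧ B) ≤ P(A) for any events A and B, rating the conjunction as more probable is the conjunction fallacy.
    Misreading: P(A ∧ B) vs. P(A ∧ ¬B). Either of these can be larger, so ranking them either way is coherent - which is why the example stops looking like a fallacy under that reading.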