Vaughn Papenhausen

Philosophy PhD student. Interested in ethics, metaethics, AI, EA, and disagreement/erisology. Former username Ikaxas.

Comments

Yup. This is how I learned German: I found some music I liked and learned to sing it. I haven't learned much Japanese, but there are a bunch of songs I can sing (and know the basic meaning of), even though I couldn't hold a basic conversation or use any of those words in other contexts.

To my knowledge I am not dyslexic. If I correctly understand what subvocalizing is (reading via your inner monologue), I do it by default unless I explicitly turn it off. I don't remember how I learned to turn it off, but I remember it was a specific skill I had to learn. And I usually don't turn it off, because reading without subvocalizing 1. takes effort, 2. is less enjoyable, and 3. makes it harder for me to understand and retain what I'm reading. I generally only turn it off when I have a specific reason to read quickly, e.g. a school assignment or reading group that I've run low on time for.

EDIT: replied to the wrong comment. Curse you, mobile interface!

I suspect this is getting downvoted because it is so short and underdeveloped, but the fundamental point is worth making. I've used the existence-proof argument in the past, and there is something to it; still, the point being made here is basically right. It might be worth writing another post that goes into a bit more detail.

This is pretty similar in concept to the conlang toki pona, a language explicitly designed to be as simple as possible: it has fewer than 150 words. ("toki pona" means something like "good language" or "good speech" in toki pona.)

Quoting a recent conversation between Aryeh Englander and Eliezer Yudkowsky

Out of curiosity, is this conversation publicly posted anywhere? I didn't see a link.

Putting RamblinDash's point another way: when Eliezer says "unlimited retries", he's not talking about a Groundhog Day-style reset. He's talking about the mundane thing where, when you're trying to fix a car engine, you try one fix, and if the engine doesn't start, you try another, and if it still doesn't start, you try another, and so on. So the scenario Eliezer is imagining is this: we have 50 years. Year 1, we build an AI, and it kills 1 million people. We shut it off. Year 2, we fix the AI and turn it back on; it kills another million people. We shut it off, fix it, turn it back on. Etc., until it stops killing people when we turn it on. Eliezer is saying that if we had 50 years to do that, we could align an AI. The problem is that in reality, the first time we turn it on, it doesn't kill 1 million people; it kills everyone. We only get one try.

Am I the only one who, upon reading the title, pictured 5 people sitting behind OP all at the same time?
