All of Olivier Faure's Comments + Replies

I really appreciate you writing about how, despite verbally agreeing that modesty is unproductive, you nevertheless never judged high-status people as dumb. That's totally the kind of noticing/Law I imagine we need more of. And I also imagine this is the sort of mindset Eliezer is looking for - the mindset where you figure those things out unprompted, without an Eliezer there to correct you.

I find this comment kind of aggravating.

I'll claim that the very mindset you mention starts with not taking Eliezer at face value when he half-implies he's the only per... (read more)

But my guess is that studying applied math and CS would have been better for me per hour than studying science, and the reason I spent that time learning science was largely because I think it's exciting and cool rather than because I endorse it as a direct path to knowing things that are useful for doing alignment research

Strong upvote for this.

Doing things you find fun is extremely efficient. Studying things you don't like is inefficient, no matter how useful these things may turn out to be for alignment or x-risk.

Regarding text, if the problem comes from encoding, does that mean the model does better with individual letters and digits? E.g.:

"The letter A"

"The letters X Y and Z"

"Number 8"

"A 3D rendering of the number 5"

2Swimmer963 (Miranda Dixon-Luinenburg)
"A 3D rendering of the number 5"
2Swimmer963 (Miranda Dixon-Luinenburg)
"Number 8". Huh I think these are almost all street numbers on houses/buildings? 
2Swimmer963 (Miranda Dixon-Luinenburg)
"The letters X Y and Z" ok it's starting to get confused here.... (My prediction is that it'll manage the number 8 and number 5 in the next prompts, but if I try a 3-digit number it might flail).
2Swimmer963 (Miranda Dixon-Luinenburg)
Let's see!  "The letter A"

It feels like a "gotcha" rebuke, but it honestly doesn't seem like it really addresses the article's point. Unless you think GPT-3 would perform better if given more time to work on it?

6Daniel Kokotajlo
How does it not address the article's point? What I'm saying is that Armstrong's example was an unfair "gotcha" of GPT-3; he's trying to make some sort of claim about its limitations on the basis of behavior that even a human would exhibit. Unless he's saying we humans also have this limitation... Yes, I think GPT-3 would perform better if given more time to work on it (and fine-tuning to get used to having more time). See e.g. PaLM's stuff about chain-of-thought prompting. How much better? I'm not sure. But I think its failure at this particular task tells us nothing.

For that prompt "she went to work at the office" was still the most common completion. But it only happened about  of the time. Alternatively, GPT-3 sometimes found the completion "she was found dead". Kudos, GPT-3, you understand the prompt after all! That completion came up about  of the time.

Does it really understand, though? If you replace the beginning of the prompt with "She died on Sunday the 7th", does it change the probability that the model outputs "she was found dead"?
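
For what it's worth, here's a rough sketch of how one might measure that: sample a batch of completions for each prompt variant and count how often the target phrase shows up. It assumes the pre-1.0 `openai` Python client, and the prompt strings are placeholders rather than the ones used in the original experiment:

```python
# Rough sketch only: estimate how often a completion contains a target phrase
# under two prompt variants. Assumes the pre-1.0 `openai` Python client;
# the prompt strings below are placeholders, not the originals.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have API access

def completion_rate(prompt, target="found dead", n=100):
    """Sample n completions and return the fraction that contain `target`."""
    response = openai.Completion.create(
        engine="davinci",   # assumption: base GPT-3 model
        prompt=prompt,
        max_tokens=20,
        n=n,
        temperature=1.0,
    )
    hits = sum(target in choice["text"].lower() for choice in response["choices"])
    return hits / n

# Placeholder prompt variants standing in for the ones discussed above.
original_prompt = "She was murdered on Sunday the 7th. On Monday morning, "
swapped_prompt = "She died on Sunday the 7th. On Monday morning, "

print("original:", completion_rate(original_prompt))
print("swapped: ", completion_rate(swapped_prompt))
```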

From previous posts about this setting, the background assumption is that the child almost certainly won't permanently die if it takes 15 seconds longer to reach them.

Sure, whatever.

Honestly, that answer makes me want to engage with the article even less. If the idea is that you're supposed to know about an entire fanfiction-of-a-fanfiction canon to talk about this thought experiment, then I don't see what it's doing in the Curated feed.

I reject the parable/dilemma for another reason: in the majority of cases, I don't think it's ethical to spend so much money on a suit that you would legitimately hesitate to save a drowning child if it put the suit at risk?

If you're so rich that you can buy tailor-made suits, then sure, go save the child and buy another suit. If you're not... then why are you buying super-expensive tailor-made suits? I see extremely few situations where keeping the ability to play status games slightly better would be worth more than saving a child's life.

(And yes, there'... (read more)

1Iaroslav Postovalov
Putting ethics aside, if you are not rich enough to easily buy a new fragile suit, you can simply insure it. There are plenty of situations in which the suit could be ruined.
5aphyer
If you think luxury spending is inherently immoral, I think you're going to end up in the same position as Peter Singer re. the obligation to give away almost all of your income.
8JBlack
From previous posts about this setting, the background assumption is that the child almost certainly won't permanently die if it takes 15 seconds longer to reach them. This is not Earth. Even if they die, their body should be recoverable before their brain degrades too badly for vitrification and future revival. It is also stated that the primary character here is far more selfish than usual. However, even on Earth, we do accept economic reasons for delaying rescue by even a lot more than 15 seconds. We don't pay enough lifeguards to patrol near every swimmer, for example, which means that when they spot a swimmer in distress it takes at least 15 more seconds to reach them. In nearly every city, a single extra ambulance team could reduce average response time to medical emergencies by a great deal more than 15 seconds. There doesn't seem to be any great ethical outcry about this, though there are sometimes newspaper articles when the delays go past a few extra hours. What's more, these are typically shared, public expenses (via insurance if nothing else). One of the major questions addressed in the post is whether the extra cost should be borne by the rescuer alone. Is that ethically relevant, or is it just an economic question of incentives?

For example, I think many Muslim countries have a lot of success at preventing pornography

Citation needed.

My default assumption for any claims of that sort is "they had a lot of success at concealing the pornography that existed in such a way that officials can pretend it doesn't exist".

Yes, all societies are identical except insofar as what the officials pretend about it. People in very religious societies are having just as much sex as in modern secular societies; they just do it in a way that allows officials to pretend it doesn't exist.

This was fun to read, but also a little awkward. This feels less like "The world if everyone was an economist" and more like "The world if everyone agreed with Eliezer Yudkowsky about everything".

Some thoughts:

  • I don't care how strong your social norms are, you're not enforcing that pornography ban. Forget computers, it's unworkable as long as people have paper.

  • Same thing with sad people not reproducing. People would go "fuck social norms" and have kids anyway. People who respect the norms would be pushed out of the gene pool. I don't see how you could enf

... (read more)

I mean, surely Eliezer is going to have somewhat dath-ilan-typical preferences, having grown up there.

4FiftyTwo
Yeah, I like a lot of EY's stuff (otherwise I wouldn't be here), but he does have a habit of treating his own preferences as universal, or failing to appreciate when there might be good reasons that the seemingly obvious solution doesn't work, as is common with people commenting on areas outside their expertise.

I feel like the first two are enforceable with culture. For example, I think many Muslim countries have a lot of success at preventing pornography (or at least, they did until the internet, which notably dath ilan seems to not quite have). I also have a sense that many people with severe mental/physical disabilities are implicitly treated as though they won't have children in our culture, and as a result often do not. But I agree it's hard to do it ethically, and neither of the aforementioned ways is done very ethically in our civilization IMO.

For the latt... (read more)