Comments

blf · 120

It's the lazy beaver function: https://googology.fandom.com/wiki/Lazy_beaver_function

blf · 3 · -2

Strong disagree.  What you say probably applies to a couple that cares enough to use several birth-control methods, and that faces no obstacle (e.g., bad reactions to birth-control pills) to any particular method.

Using only condoms, which from memory was the advice I got as a high-schooler in Western Europe twenty years ago, seems to carry a 3% failure rate per year (not per use, of course!) even when they are used correctly (leaving space at the tip, using water-based lubricant). That is small but not negligible.
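
For intuition on what a per-year rate compounds to, here is a back-of-the-envelope sketch, assuming the 3% figure and independence across years (both simplifications):

    # Chance of at least one failure over N years,
    # assuming an independent 3% failure rate each year.
    per_year = 0.03
    for years in (1, 5, 10):
        p_any = 1 - (1 - per_year) ** years
        print(f"{years:2d} years: {p_any:.1%}")
    # Output:  1 years: 3.0% /  5 years: 14.1% / 10 years: 26.3%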

It would be a good public service to have an in-depth analysis of the available evidence on contraception methods. Or maybe we should ask Scott Alexander to add a question about contraception failure to his annual survey?

blf · 50

The Manhattan Project had benefits potentially in the millions of lives, if the counterfactual was broader Nazi domination.  So while AI differs in the size of the benefit, the difference is quantitative, not qualitative.  I agree it would be interesting to compute QALYs with and without AI, and to do the same for some of the other examples in the list.

blf · 40

Usually, "negative" means "less than 0", and an order comparison is only available for real numbers, not complex numbers, so "negative numbers" means negative real numbers.
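
As a concrete illustration, Python (like most languages) simply refuses to order complex numbers:

    >>> -2 < -1                  # reals are ordered
    True
    >>> (1 + 2j) < (2 + 1j)      # complex numbers are not
    TypeError: '<' not supported between instances of 'complex' and 'complex'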

That said, ChatGPT is actually correct to say "Normally" in "Normally, when you multiply two negative numbers, you get a positive number.", because the product of two negative floating-point numbers can be zero if the numbers are too tiny.  Concretely, in Python -1e-300 * -1e-300 gives an exact zero, and this holds in any programming language that follows the IEEE 754 standard.
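
A minimal demonstration of both directions (Python floats are IEEE 754 doubles on all common platforms):

    tiny = -1e-300
    # True product is 1e-600, below the smallest positive subnormal
    # double (~4.9e-324), so it underflows to exactly 0.0.
    print(tiny * tiny)   # 0.0

    huge = -1e300
    # True product is 1e600, above the largest finite double
    # (~1.8e308), so it overflows to +inf rather than zero.
    print(huge * huge)   # inf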

blf · 30

An option is to just add the month and year, something like "November 2023 AI Timelines".

blf · 42

I guess if your P(doom) is sufficiently high, you could think that moving T(doom) back from 2040 to 2050 is the best you can do?

Of course the costs have to be balanced, but well, I wouldn't mind living ten more years. I think that is a perfectly valid thing to want for any non-negligible P(doom).
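
To make that concrete with purely illustrative numbers: at P(doom) = 0.2, pushing T(doom) back from 2040 to 2050 is worth 0.2 × 10 = 2 expected life-years per person, before even counting the option value of ten extra years of safety work.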

blf · 159

The usual advice for getting a good YES/NO answer is to ask for the explanation first, then the answer.  The way you did it, GPT-4 decides YES/NO first, then tries to justify the decision regardless of whether it was correct.
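
A minimal sketch of the two orderings (the wording here is hypothetical, not the original prompt):

    # Answer-first: the model commits to YES/NO immediately,
    # then rationalizes whatever it picked.
    answer_first = (
        "Answer YES or NO, then explain your reasoning: {question}"
    )

    # Explanation-first: the model reasons before committing.
    explanation_first = (
        "Explain your reasoning step by step, "
        "then conclude with YES or NO: {question}"
    )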

blf · 10

The first four and the next four kinds of alignment you propose are parallel, except that they concern a single person or society as a whole.  So I suggest the following names, which are more parallel.  (Not happy about 3 and 7.)

  1. Personal Literal Genie: Do exactly what I say.
  2. Personal Servant: Do what I intended for you to do.
  3. Personal Patriot: Do what I would want you to do.
  4. Personal Nanny: Be loyal to me, but do what’s best for me, not strictly what I tell you to do or what I want or intended.
  5. Public Literal Genie: Do whatever it is collectively told.
  6. Public Servant: Carry out the will of the people.
  7. Public Patriot: Uphold the values of the people, and do what they imply.
  8. Public Nanny: Do what needs to be done, whether the people like it or not.
  9. Gentle Genie: The Genie from Aladdin. Note he is not strategic.
  10. Arbiter: What is the law?

blf · 20

The analogy with climate change (in terms of the dynamics of the debate) is not that bad: "great news and we need more" is in fact a talking point of people who prefer not to act against climate change.  E.g., they would mention correlations between plant growth and CO2 concentration.  That said, it would be weird to call such people climate deniers.
