All of tup99's Comments + Replies

There are some scenarios where having control, rather than ownership/profit, could be important.


I'm curious what kind of scenarios you're thinking about. Having actual control, yes, that could be important. But having 0.001% of control of Google does not seem like it would have any effect on either Google or me, under any scenario.

1wassname
I'm imagining a scenario where an AI extrapolates "keep the voting shareholders happy" and "maximise shareholder value". Voting stocks can also become valuable when people try to accumulate them to corner the market and execute a takeover; this happens in cryptocurrencies like CURVE. I know these are far-fetched, but all future scenarios are. The premium on Google voting stock is very small right now, so it's a cheap feature to add.

Whether or not to get insurance should have nothing to do with what makes one sleep

This (and much of the rest of your article) seems needlessly disdainful of people’s emotions.

Wealth does not equal happiness!

If it did, then yes, 899 < 900 so don’t buy the insurance. But in the real world, I think you’re doing normal humans a big disservice by pretending that we are all robots.

Even Mr. Spock would take human emotions into consideration when giving advice to a human.

1Michael Cohn
100% agree with the principle that buying peace of mind can be a good deal whether the peace of mind is quantitatively justified or not, and the broader principle that we shouldn't be disdainful of emotions. But while emotions aren't entirely rational, they're not entirely irrational either. I expect that learning and applying the method in this post will help the user feel peace of mind at a level of insurance that's somewhat closer to optimal than they would have otherwise. It's also probably useful to learn the quantitatively optimal strategy so you can be consciously aware of what premium you're paying for peace of mind, and make thoughtful decisions about how much it's worth to you. I said that buying peace of mind can be a good deal, but I'll bet there are people who -- if they could see exactly how big the risk-aversion premium they're paying is -- would decide that they'd rather deal with more anxiety and pocket the cash.
3mruwnik
Wealth not equaling happiness works both ways. It's the idea of losing wealth that's driving sleep away. In this case, the goal of buying insurance is to minimize the risk of losing wealth. The real thing that's stopping you sleeping isn't whether you have insurance or not; it's how likely it is that something bad happens which will cost more than you're comfortable losing. Having insurance is just one of the ways to minimize that - the problem is stress stemming from uncertainty, not whether you've bought an insurance policy.

The list of misunderstandings is a bit tongue in cheek (at least that's how I read it). So it's not so much disdainful of people's emotions as pointing out that whether you have insurance is not the right thing to worry about - it's much more fruitful to try to work out the probabilities of various bad things and then calculate how much you should be willing to pay to lower that risk. It's about viewing the world through the lens of probability and deciding these things on the basis of expected value. Rather than have sleepless nights, just shut up and multiply (this is a quote, not an attack). Even if you're very risk averse, you should be able to just plug that into the equation and come up with some maximum insurance cost above which it's not worth buying it. Then you just buy it (or not) and sleep the sleep of the just. The point is to actually investigate it and put some numbers on it, rather than live in stress.

This is why it's a mathematical decision with a correct answer. Though the correct answer, of course, will be subjective and depend on your utility function. It's still a mathematical decision. Spock is an interesting example to use, in how he's very much not rational. Here's a lot more on that topic.
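For concreteness, here is a minimal sketch of that calculation. All the numbers (wealth, loss size, loss probability) are hypothetical, and log utility is just one common stand-in for "your utility function"; substitute your own risk aversion:

```python
import math

# Hypothetical inputs -- plug in your own numbers.
wealth = 100_000      # current wealth
loss = 30_000         # size of the loss the policy would cover
p_loss = 0.03         # estimated probability of the loss per policy period

# Risk-neutral benchmark: the expected value of the loss.
expected_loss = p_loss * loss  # 900

# Risk-averse version: log utility (one common choice, not the only one).
def u(w):
    return math.log(w)

# Expected utility if you stay uninsured.
eu_uninsured = p_loss * u(wealth - loss) + (1 - p_loss) * u(wealth)

# Maximum premium: the price at which insuring and not insuring give the
# same expected utility. With full coverage you pay the premium for certain,
# so we solve u(wealth - premium) = eu_uninsured for the premium.
max_premium = wealth - math.exp(eu_uninsured)

print(f"Expected loss (risk-neutral ceiling): {expected_loss:.0f}")
print(f"Maximum premium worth paying (log utility): {max_premium:.0f}")
```

With these made-up numbers the risk-neutral ceiling is 900 but the risk-averse ceiling is a bit over 1,000: the gap between the two is exactly the peace-of-mind premium discussed above, now visible as a number you can accept or decline.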

The probability should be given as 0.03 -- that might reduce your confusion!

Perhaps you should make this more clear in the calculator, to avoid people mistakenly making bad choices? (Or just change it to percent. Most people are more comfortable with percentages, and the % symbol will make it unambiguous.)
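One way a calculator could remove that ambiguity is to normalize whatever the user types. This is only a hypothetical helper with an assumed rule (explicit "%", or any value above 1, is treated as a percentage), not the actual calculator's code:

```python
def parse_probability(text: str) -> float:
    """Parse a user-entered probability, accepting '3%', '3', or '0.03'."""
    text = text.strip()
    if text.endswith("%"):
        return float(text[:-1]) / 100
    value = float(text)
    # Assumed convention: bare values above 1 were meant as percentages.
    return value / 100 if value > 1 else value

assert parse_probability("3%") == 0.03
assert parse_probability("3") == 0.03
assert parse_probability("0.03") == 0.03
```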

You’re saying that we might survive, but our environment/food might not, right?

How many things could reasonably have a p(doom) > 0.01? Not very many. Therefore your worry about me "neurotically obsessing over tons of things" is unfounded. I promise I won't :) If my post causes you to think that, then I apologize; I have misstated my argument.

3M. Y. Zuo
What is the actual argument that there’s ‘not very many’? (Or why do you believe such an argument made somewhere else) There’s hundreds of asteroids and comets alone that have some probability of hitting the Earth in the next thousand years, how can anyone possibly evaluate ‘p(doom)’ for any of this, let alone every other possible catastrophe?
1cdt
I was reading the UK National Risk Register earlier today and thinking about this. Notable to me that the top-level disaster severity has a very low cap of ~thousands of casualties, or billions in economic loss. Although it does note in the register that AI is a chronic risk that is being managed under a new framework (which I can't find precedent for).

A lot of your responses make you sound like you're more interested in arguing and being contrarian than in seeking the truth with us. This one exemplifies it, but it's a general pattern of the tone of your responses. It'd be nice if you came across as more truth-seeking than argument-seeking.

-2Logan Zoellner
I came and asked "the expert consensus seems to be that AGI doom is unlikely. This is the best argument I am aware of and it doesn't seem very strong. Are there any other arguments?"

Responses I have gotten are:

* I don't trust the experts, I trust my friends
* You need to read the sequences
* You should rephrase the argument in a way that I like

And 1 actual attempt at giving an answer (which unfortunately includes multiple assumptions I consider false or at least highly improbable).

If I seem contrarian, it's because I believe that the truth is best uncovered by stating one's beliefs and then critically examining the arguments. If you have arguments or disagree with me, fine, but saying "you're not allowed to think about this, you just have to trust me and my friends" is not a satisfying answer.

Well, of course there is something different: the p(doom), based on the opinions of a lot of people whom I consider to be smart. That strongly distinguishes it from just about every other concept.

4tailcalled
"People I consider very smart say this is dangerous" seems so cursed, especially in response to people questioning whether it is dangerous. Would be better for you to not participate in the discussion and just leave it to the people who have an actual independently informed opinion.

This was the most compelling part of their post for me:

"You are correct about the arguments for doom being either incomplete or bad. But the arguments for survival are equally incomplete and bad."

And you really don't seem to have taken it to heart. You're demanding that doomers provide you with a good argument.  Well, I demand that you provide me with a good argument! 

More seriously: we need to weigh the doom-evidence and the non-doom-evidence against each other. But you believe that we need to look at the doom-evidence and if it's not very good,...

7tailcalled
Feels like this attitude would lead you to neurotically obsessing over tons of things. You ought to have something that strongly distinguishes AI from other concepts before you start worrying about it, considering how infeasible it is to worry about everything conceivable.

I (on average) expect to be treated about as well by our new AGI overlords as I am treated by the current batch of rulers.

...

By doom I mean the universe gets populated by AI with no moral worth (e.g. paperclippers).  

 

Well, at least we've unearthed the reasons that your p(doom) differs!

Most people do not expect #1 (unless we solve alignment), and have a broader definition of #2. I certainly do.

Agreed. Let's not lose sight of the fact that 2-20% means it's still the most important thing in the world, in my view.

I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. As someone new to this topic, my emotional reaction to reading this list is terror.

Any of these techniques would surely be available to only a small fraction of the world's population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI. It will greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as...

8TsviBT
Ok, I added some links to "Downside risks of genomic selection".

Not true! This consideration is the main reason I included a "unit price" column. Germline engineering should be roughly comparable to IVF, i.e. available to middle class and up; and maybe cheaper given more scale; and certainly ought to be subsidized, given the decreased lifetime healthcare costs alone.

Eh, unless you can explain this more, I think you've been brainwashed by Gattaca or something. Gattaca conflates class with genetic endowment, which is fine because it's a movie about class via a genetics metaphor, but don't be confused that it's about genetics. Did the invention of smartphones increase or decrease the distance? In general, some technologies scale with money, and other technologies scale by bodycount. Each person only gets one brain to receive implants and stuff. Elon Musk, famously extremely rich and baby-obsessed, has what... 12 kids? A peasant could have 12 kids if they wanted to! Germline engineering would therefore be extremely democratic, at least for middle class and up. The solution, of course, is to make the tech even cheaper and more widely available, not to inflict preventable disease and disempowerment on everyone's kids.

Stats or GTFO. First, the two specific things you listed are quite genetically heritable. Second, 7 SDs -- which is the most extreme form that I advocate for -- is only a little bit outside the Gaussian human distribution. It's just not that extreme of a change. It seems quite strange to postulate that a highly polygenic trait, if pushed to 5350 out of 10000 trait-positive variants, would suddenly cause major psychological problems, whereas natural-born people with 5250 or 5300 out of 10000 trait-positive variants are fine.
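A minimal sketch of the arithmetic behind those variant counts, assuming a toy additive model with 10,000 independent trait-affecting sites each carrying the trait-positive variant with probability 0.5 (real genetic architectures are messier, but the orders of magnitude carry over):

```python
import math

n = 10_000        # trait-affecting variant sites in the toy model
p = 0.5           # chance each site carries the trait-positive variant

mean = n * p                      # 5000 trait-positive variants on average
sd = math.sqrt(n * p * (1 - p))   # 50 variants per standard deviation

for sds in (5, 6, 7):
    print(f"+{sds} SD -> about {mean + sds * sd:.0f} of {n} trait-positive variants")
# +5 SD -> 5250, +6 SD -> 5300, +7 SD -> 5350: the proposed edit sits only
# ~100 variants past people who occur naturally at the far tail.
```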
4Raemon
I think the terror reaction is honestly pretty reasonable. ([edit: Not, like, necessarily meaning one shouldn't pursue this sort of direction on balance. I think the risks of doing this badly are real, and I think the risks of not doing anything are also quite real and probably great, for a variety of reasons.]) One reason I nonetheless think this is very important to pursue is that we're probably going to end up with superintelligent AI this century, and it's going to be dramatically more alien and scary than the tail-risk outcomes here. I do think the piece would be improved if it acknowledged and grappled with that more.