Yes, but GPT-3 offers us new evidence we should try to update on. It's debatable how many bits of evidence it provides, but we can also update based on this, from Discontinuous progress in history: an update:
Growth rates sharply changed in many trends, and this seemed strongly associated with discontinuities. If you experience a discontinuity, it looks like there’s a good chance you’re hitting a new rate of progress, and should expect more of that.
AlphaGo was something we saw before we expected it. The GPT-3 text generator was something we saw before we expected it. They were discontinuities.
Thanks! My first impression from this is that testing is going to get easier, but it still depends on what the antibody test reagents are. Do you have any info on that?
You are correct, and that simpler model gives an even greater risk. I'm skeptical about social distancing because hospitals become overcrowded once 1/1000 of the population is infected, and they need one month to process the hospitalized. At that pace, only about 1/1000 of the population can get infected per month, so the quarantine would need to last roughly 1000 months, i.e. about 83 years. Even if this estimate is off by 10x, that still implies a quarantine of about 8 years. So much for flattening the curve. The best hope is a vaccine, so the quarantine lasts approximately 1 year, but maybe much shorter if more resources are invested and barriers (such as rigorous testing requirements; China could be of help here) are somehow avoided.
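A back-of-envelope sketch of that arithmetic, assuming the 1/1000 capacity threshold and one-month turnaround above, plus the simplifying assumption that essentially the whole population has to pass through infection:

```python
# Rough quarantine-duration estimate under a hospital-capacity constraint.
capacity_fraction_per_month = 1 / 1000  # share of population that can be sick at any one time
target_fraction = 1.0                   # simplifying assumption: nearly everyone gets infected eventually

months = target_fraction / capacity_fraction_per_month
print(months / 12)        # ~83 years
print(months / 12 / 10)   # ~8 years if the estimate is off by 10x
```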
Wei Dai already talked about it here: https://www.lesswrong.com/posts/RukXjEvMfqDKRJaup/what-will-be-the-big-picture-implications-of-the-coronavirus?commentId=p6xZhhJLMBRdfXhe5
Lombardy has a population of 10M and got overcrowded at around 5K confirmed infections, i.e. 1/2000 of the population confirmed infected; let's say the true number was 1/1000. I didn't check Wei Dai's math, but his number is similar: "0.1% of Hubei's population have a confirmed infection, and its hospitals are already at the breaking point".
"In Daegu, 2,300 people were waiting to be admitted to hospitals and temporary medical facilities, Vice Health Minister Kim Gang-lip said. A 100-bed military hospital that had been handling many of the most serious cases was due to have 200 additional beds available by Thursday, he added."
And they have 10x the hospital beds compared to the US.
By the way, that's another Roko, I'm not that guy :)
Perhaps I should have been more specific: I'm talking about a scenario where there is an actual machine (like a time machine, but instead of travelling in time you travel between universes) in which you step and press a button, and then you appear in a parallel universe. In standard probability we have the potential future states "I'm dead" and "I'm alive", but you can't physically travel between those two future states; either one happens or the other happens. In the inter-universe travel scenario you can use the machine to travel to other universes and revive the copies (or repopulate the universes in which humanity went extinct).
EDIT:
So I think we agree on this part:
Let's say we have 10 universes which are all identical; they all have you in them, you are tied to the tracks, and a trolley is approaching. You have two buttons to press. Button A has the same effect in all universes, but you are not sure what that effect is: there is a 90% chance it does nothing and a 10% chance it stops the trolley. Button B uses a QMRNG and stops the trolley in 1 universe while letting it run you over in 9 universes. To a utilitarian, the total expected utility from pressing either button is the same. In case A, the expected utility for each universe is 0.1 lives saved, so in total we get 10 * 0.1 = 1 life saved. In case B, the total expected utility is 1 life saved.
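A minimal sketch of that calculation, using just the numbers from the example:

```python
# Expected lives saved, summed over the 10 identical universes.
n_universes = 10

# Button A: same unknown effect everywhere; 10% chance it stops the trolley in every universe.
eu_button_a = 0.10 * n_universes   # 1.0 expected life saved

# Button B: QMRNG stops the trolley in exactly 1 of the 10 universes.
eu_button_b = 1                    # 1 life saved, with certainty about the total

print(eu_button_a, eu_button_b)    # both equal 1
```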
Then comes the problematic part:
The expected utility is the same, except... if inter-universe travel is possible and you are an expert surgeon who can save your copy's life after it has been run over. In that case you survive in one universe and travel to the other universes one by one to save the other copies. Taking the sum of utility over all universes for all times, the situation when a QMRNG is used looks very different from when it is not used. When it is not used, at one point in the future the utility becomes zero and stays zero. When it is used, you can recover.
So the one surgeon survives, steps into the machine, presses a button to go to another universe, and revives the copy there; then he goes to the next universe (assuming the universes are nearly identical except for the fact that one copy got run over by a trolley, so all 10 parallel universes have such machines in them) and revives the next copy, and so on. So the expected utility of using the QMRNG is 10 lives saved.
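A sketch of how the totals change once a surviving copy can travel and revive the others; this just restates the numbers in the example:

```python
# Long-run lives saved once a survivor can travel between universes and revive the others.
n_universes = 10

# Button A: 10% chance everyone lives; 90% chance nobody survives anywhere,
# so there is no one left to travel and revive anyone.
eu_a_with_travel = 0.10 * n_universes + 0.90 * 0   # still 1.0 in expectation

# Button B (QMRNG): exactly one copy survives, then visits the other 9 universes
# and revives each run-over copy.
eu_b_with_travel = 1 + (n_universes - 1)           # 10 lives saved

print(eu_a_with_travel, eu_b_with_travel)          # 1.0 vs 10
```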
When applying this to xrisk, it doesn't matter whether other universes have such machines in them, since the travelers can use their knowledge and engineering skills to construct them. That's what I meant by "we can assume that inter-universe travel consumes some resources and takes some time".
Perhaps I should have been more specific: I'm talking about a scenario where there is an actual machine (like a time machine, but instead of travelling in time you travel between universes) in which you step and press a button, and then you appear in a parallel universe. It's not a question of who claims anything, nor is it a question of random fluctuations; it's a question of whether that kind of machine can be built or not. If it can be built, then increasing quantum diversification reduces xrisk, because then the travelers can travel around and repopulate other universes.
It is simplest to imagine a scenario where all 10 universes have such machines and you can only travel from one machine to another, so you step into the machine in your universe and you step out of the machine in another universe.
There is also no point in talking about the exact number of such-and-such universes; all that matters is the proportion of universes in which something happens, since there is an infinite number of every possible universe. I talked about 10 of them to simplify the principle, which holds for any number n of universes.
Can you please elaborate on your example of resurrection? It sounds interesting, but I don't understand it.
It changes because with ordinary randomness you can't travel between different branches of the decision tree. In the thought experiment with the surgeon, he actually physically travels to a parallel universe and saves the life of his copy there. So the expected long-term utility is not 1 life saved but 10 lives saved.
Within the pessimistic hypothesis it does not matter who develops AGI; in any case our death is almost certain.