What do you mean when you say you're "willing to bet according to the Kelly Criterion"? If you're proposing a bet at 99% odds and your actual belief that you'll win the bet is also 99%, then the Kelly Criterion would advise betting nothing (since the EV of such a bet would be zero, i.e. merely neutral).
Perhaps you mean that the other person should come up with the odds, and then you'll determine your bet amount using the Kelly Criterion, assuming a 99% probability of winning for yourself.
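For concreteness, here's a minimal sketch of the standard Kelly fraction f* = p − (1−p)/b (the function and numbers below are my own illustration, not something from the thread):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake. p = your probability of winning,
    b = net odds (units won per unit staked). Zero or negative means
    don't take the bet."""
    return p - (1 - p) / b

# A bet "at 99% odds" pays 1 unit per 99 staked, i.e. b = 1/99.
# If your own credence is also 99%, the bet is EV-neutral and the
# Kelly fraction is zero (up to floating-point rounding):
print(kelly_fraction(0.99, 1 / 99))  # ~0.0
```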
Yeah, based on EY's previous tweets regarding this, it seemed like it was supposed to be a TED talk.
Example origin scenario of this Nash equilibrium from GPT-4:
In this hypothetical scenario, let's imagine that the prisoners are all part of a research experiment on group dynamics and cooperation. The prisoners come from different factions with a history of rivalry and distrust.
Initially, each prisoner sets their dial to 30 degrees Celsius, creating a comfortable environment. However, due to the existing distrust and rivalry, some prisoners suspect that deviations from the norm—whether upward or downward—could be a secret signal from one faction to ...
Hmm, I've not seen people refer to (ChatGPT + Code execution plugin) as an LLM. IMO, an LLM is supposed to be a language model consisting of just a neural network with a large number of parameters.
I'm a bit confused about this post. Are you saying it is theoretically impossible to create an LLM that can do 3*3 matrix multiplication without using chain of thought? That seems false.
The amount of computation an LLM has done so far will be a function of both the size of the LLM (call it the s factor) and the number of tokens generated so far. Let's say matrix multiplication of n*n matrices requires cn^3 computation (actually, there are more efficient algorithms, but it doesn't matter).
You can do this by either using a small LLM and n^3 tok...
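To make the scaling argument concrete, here's a rough sketch (all constants and numbers are illustrative, not measurements):

```python
def naive_matmul_flops(n: int) -> int:
    # Naive n x n matrix multiplication costs ~n^3 multiply-adds
    # (taking c = 1 for simplicity).
    return n ** 3

def llm_compute(per_token_cost: int, num_tokens: int) -> int:
    # Total forward-pass compute grows with model size (the "s factor",
    # folded into per_token_cost here) times tokens generated.
    return per_token_cost * num_tokens

n = 3
print(naive_matmul_flops(n))   # 27 multiply-adds needed
print(llm_compute(27, 1))      # large model, single answer token
print(llm_compute(1, n ** 3))  # small model, ~n^3 chain-of-thought tokens
```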
I don't think GPT4 can be used with plugins in ChatGPT. It seems to be a different model, probably based on GPT3.5 (evidence: the color of the icon is green, not black; it seems faster than GPT4; there are no limits or quotas; and there's no explicit mention of GPT4 anywhere in the announcement).
So I think there's a good chance the title is wrong.
Additional comments on creative mode by Mikhail (from today):
https://twitter.com/MParakhin/status/1636350828431785984
We will {...increase the speed of creative mode...}, but it'll probably always be somewhat slower, by definition: it generates longer responses, has larger context.
https://twitter.com/MParakhin/status/1636352229627121665
Our current thinking is to keep maximum quality in Creative, which means slower speed.
https://twitter.com/MParakhin/status/1636356215771938817
...Our current thinking about Bing Chat modes: Balanced: best for the most common tasks,
Based on Mikhail's Twitter comments, 'precise' and 'creative' don't seem to be much more than different settings of the 'temperature' sampling hyperparameter. 'Precise' would presumably correspond to a very low (near-zero or zero) temperature, i.e. highly deterministic sampling.
Nope, Mikhail has said the opposite: https://twitter.com/MParakhin/status/1630280976562819072
Nope, the temperature is (roughly) the same.
So I'd guess the main difference is in the prompt.
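For reference, 'temperature' rescales the logits before softmax, so near-zero temperature makes sampling nearly deterministic. A minimal sketch of standard temperature sampling (not Bing's actual implementation):

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float,
                            rng: np.random.Generator) -> int:
    # Zero-temperature limit: greedy decoding (always pick the argmax).
    if temperature <= 1e-6:
        return int(np.argmax(logits))
    scaled = logits / temperature
    # Softmax, shifted by the max for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5])
print(sample_with_temperature(logits, 0.0, rng))  # always token 0
print(sample_with_temperature(logits, 1.0, rng))  # stochastic
```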
I think it's more of a correction than a misunderstanding. It shouldn't be assumed that "value" just means human civilization and its potential. Most people reading this post will take "wiping out all value" to mean wiping out all that we value, not just wiping out humanity. But the two aren't equivalent: most people value life and sentience in general, so a universe where all alien civs also end up dying due to our ASI is far worse than one where there are survivors.
Minor (?) correction: You've mentioned multiple times that our ASI will wipe out all value in the universe, but that's very unlikely to happen. We won't be the only (or the first) civilization to have created ASI, so eventually our ASI will run into other rogue/aligned ASIs and be forced to negotiate.
Relevant EY tweets: https://twitter.com/ESYudkowsky/status/1558974831269273600
...People who value life and sentience, and think sanely, know that the future galaxies are the real value at risk.
...
Yes, I mean that I expect AGI ruin to wipe out all galaxies in its
That part was a bit unclear. I guess he could work with Redwood/Conjecture without necessarily quitting his MIRI position?
Suppose you lived in the dark times, where children have a <50% chance of living to adulthood. Wouldn't you still have kids? Even if, probabilistically, smallpox was likely to take them?
Just wanna add that each of your children individually having a 50% chance of surviving smallpox is different from all of your children together having a 50% chance of surviving AI (i.e. uncorrelated vs. correlated risk), so some people might decide differently in these two cases.
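Numerically (my own illustration): with three children and independent 50% risks, the chance of losing all three is 12.5%, whereas with one fully correlated event it stays at 50%:

```python
# Chance of losing all k children: independent risks vs. one
# fully correlated event (numbers are illustrative).
p_survive = 0.5
k = 3
independent_all_lost = (1 - p_survive) ** k  # smallpox-style: 0.125
correlated_all_lost = 1 - p_survive          # AI-style: 0.5
print(independent_all_lost, correlated_all_lost)
```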
AFAIK, it is not necessary to "accurately reverse engineer human values and also accurately encode them". That's considered too hard, and as you say, not tractable anytime soon. Further, even if you're able to do that, you've only solved outer alignment; inner alignment still remains unsolved.
Instead, the aim is to build "corrigible" AIs. See Let's See You Write That Corrigibility Tag, Corrigibility (Arbital), Hard problem of corrigibility (Arbital).
Quoting from the last link:
...
The "hard problem of corrigibility" is to build an agent which, in a
Interesting example, but I still feel like Bob doesn't need to contradict Alice's known beliefs.
If Bob found a page from Alice's notebook that said "time and space are relative," he could update his understanding to realize that the theory of Newtonian physics he's been using is only an approximation, and not the real physics of the universe. Then, he could try to come up with upper bounds on how inaccurate Newtonian physics is, by thinking about his past experiences or doing new experiments. Even so, he could still keep using Newtonian physics, with the u...
But if that's the case, he could simply mention the amount he's willing to bet. The phrasing kinda suggested to me that he doesn't have all the info needed to do the Kelly calculation yet.