He is an ally of Ilya
Is this paper essentially implying the scaling hypothesis will converge to a perfect world model? https://arxiv.org/pdf/2405.07987
It says models trained on text and models trained on images both converge toward the same representation as training progresses. It also hypothesizes that this is a brain-like representation of the world. Ilya liked this paper, so I'm giving it more weight. Am I reading too much into it, or is it basically fully validating the scaling hypothesis?
It was whelming.
Can a motivated team of humans design a virus that spreads rapidly but stays dormant for a while, then kills most humans via a difficult-to-stop mechanism before we can respond? And it would have to happen before we develop AIs that can detect these sorts of latent threats anyway.
You have to realize that if COVID had been like this, we would have mass-trialed mRNA vaccines as soon as they were available, along with a lot of Hail Mary procedures, since the alternative is extinction.
These slightly-smarter-than-human AIs will be monitored by other such AIs, and will probably be rewarded for defecting (the AIs they defect on get wiped out, and they may get to replicate more, for example).
I think such a takeover could be quite difficult to pull off in practice. A world with lots of slightly-smarter-than-human AIs will be more robust to takeover: there's a limited time window in which to even attempt it, failure would mean death, and humanity would be far more disciplined against this than it was against COVID.
Even in probabilistic terms, the evidence of OpenAI members respecting their NDAs makes it more likely that this was some sort of political infighting (EA-related) than sub-year takeoff timelines. I would be open to a one-year takeoff; I just don't see it happening given the evidence. OpenAI wouldn't need to talk about raising trillions of dollars, companies wouldn't be trying to commoditize their products, and the employees who quit OpenAI would speak up.
Political infighting is in general just more likely than very short timelines, which would run counter to most prediction markets on the matter. Not to mention, given that infighting has already happened once with the firing of Sam Altman, it's far more likely to have happened again.
If there were a probability distribution over timelines, current events would indicate that sub-3-year ones have negligible odds. If I am wrong about this, I implore the OpenAI employees to speak up. I don't think normies misunderstand probability distributions; they just tend not to care about unlikely events.
I assume timelines are fairly long, or that this isn't safety-related. I don't see the point in keeping PPUs, or even caring about NDA lawsuits that may or may not happen and would take years, in a short-timeline or doomed world.
Daniel K seems pretty open about his opinions and his reasons for leaving. Did he not sign an NDA, thereby giving up whatever PPUs he had?
This style of thinking seems illogical to me. It has already clearly resulted in a sort of evaporative cooling at OpenAI. At a high level, is it possible you have the opposite of the wishful-thinking bias you claim OpenAI researchers have? I won't go into too much detail about why this post doesn't make sense to me, as others already have.
But broadly speaking:
Well, I actually have a hunch as to why: many holding on to the above priors don't want to let them go, because letting go would mean this problem they have dedicated so much mental space to starts to look feasible to solve.
If it's instead a boring engineering problem, this stops being a quest to save the world or an all-consuming issue. Incremental alignment work might solve it, so, to preserve the difficulty of the issue, it must instead cause extinction for some far-fetched reason. Building precursor models and then bootstrapping alignment might solve it, so this "foom" is invented and held on to (resting on a lot of highly speculative assumptions), because without it the problem becomes boring engineering requiring lots of effort rather than something a lone genius has to solve. The question of whether energy constraints will limit AI progress from here on out was met with a "maybe", but the number of upvotes makes me think most readers just filed it away as an unconditional "no, it won't".
There is a good reason to think like this: if boring engineering really does solve the issue, then this community is better off assuming it won't. In that scenario, the boring engineering work is being done by the tech industry anyway, so there's no need to help there. But I hope that people who adopt the worst-case mindset to maximize the expected value of their research remember that the assumption they are making is an assumption, and don't let the mental effects consume them.
I believe it's rated for 2 hours of sun exposure. So unless you are spending all day outside, you should only need to apply it once. I personally apply it once before leaving for work.
https://x.com/janleike/status/1791498174659715494?s=46&t=lZJAHzXMXI1MgQuyBgEhgA
Leike explains his decisions.