I'm not sure if this objection has been pointed out or is even valid. I think the argument from approximate linearity is probably wrong, even if we're talking about editing embryos and not adults. In machine learning we make the learning rate small enough that the map of the error over the parameter space appears linear. This means scaling the gradients way down, but my intuition is that it's minimizing the Euclidean distance covered by each step that's "doing the work" of making everything appear flat. If that's correct, then flipping 20,000 genes is a massive s...
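For what it's worth, the learning-rate intuition is easy to demonstrate on a toy problem. A minimal sketch, assuming a random quadratic loss (the loss, dimensions, and step sizes are all placeholder choices of mine): the first-order linear model of the loss only holds while the Euclidean norm of the step stays small.

```python
# Toy check: how well does the linear (first-order) model of the loss
# predict the actual change in loss as the step size grows?
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 50))
A = W @ W.T                      # symmetric PSD matrix -> quadratic loss

def loss(x):
    return x @ A @ x

def grad(x):
    return 2 * A @ x

x = rng.normal(size=50)
g = grad(x)

for lr in [1e-6, 1e-4, 1e-2, 1.0]:
    step = -lr * g
    actual = loss(x + step) - loss(x)   # true change in loss
    linear = g @ step                   # first-order (linear) prediction
    rel_err = abs(actual - linear) / abs(actual)
    print(f"lr={lr:g}  step norm={np.linalg.norm(step):.3g}  "
          f"linear-model relative error={rel_err:.3g}")
```

The relative error of the linear prediction grows with the size of the step, which is the sense in which a small step, not a small per-coordinate change, is what keeps the landscape looking flat.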
I mean, I explicitly state in the post that I don't think we'll be able to reach IQs far outside the normal human range by just flipping alleles:
I don’t expect such an IQ to actually result from flipping all IQ-decreasing alleles to their IQ-increasing variants for the same reason I don’t expect to reach the moon by climbing a very tall ladder; at some point, the simple linear model will break down.
So yes, I agree with you.
A sequence of still frames is a video. If the model was trained on ordered sequences of still frames crammed into the context window, as claimed by the technical report, then it understands video natively, and it would be surprising if it didn't also have some capability for generating video. I'm not sure why audio/video generation isn't mentioned; perhaps the performance in these arenas is not competitive with other models.
Sure, but they only use 16 frames, which doesn't really seem like it's "video" to me.
Understanding video input is an important step towards a useful generalist agent. We measure the video understanding capability across several established benchmarks that are held-out from training. These tasks measure whether the model is able to understand and reason over a temporally-related sequence of frames. For each video task, we sample 16 equally-spaced frames from each video clip and feed them to the Gemini models. For the YouTube video datasets (all datasets except NextQA and the Perception test), we evaluate the Gemini models on videos that were still publicly available in the month of November, 2023.
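The sampling scheme the report describes is simple to make concrete. A minimal sketch, assuming nothing beyond "sample 16 equally-spaced frames per clip" (the function and variable names are mine):

```python
# Pick 16 equally-spaced frame indices from a clip of arbitrary length.
import numpy as np

def sample_frame_indices(num_frames_in_clip: int, num_samples: int = 16) -> list[int]:
    """Return `num_samples` frame indices spaced evenly across the clip."""
    positions = np.linspace(0, num_frames_in_clip - 1, num=num_samples)
    return [int(round(p)) for p in positions]

# A 10-second clip at 30 fps has 300 frames; only 16 of them are kept.
print(sample_frame_indices(300))
```

At that rate a 10-second clip is sampled at under 2 fps, which is why 16 frames can feel more like a sparse slideshow than "video".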
We still have a hard problem, since misuse of AI, for example using it to secure permanent control over the world, would be extremely tempting. Under this assumption, outcomes where everyone doesn't die, but which are as bad or worse, become much more likely than they would be under its negation. I think the answer to avoiding non-awful futures looks similar: we agree globally to slow down before the tech could plausibly pose a big risk, which probably means right around yesterday. Except instead of just using the extra time to do scientific research, we also make the appropriate changes to our societies/governments.
This seems to me the opposite of a low-bandwidth recursion. Having access to the entire context window of the previous iteration minus the first token, it should be pretty obvious that most of the relevant information encoded by the values of the nodes in that iteration could in principle be reconstructed, excepting the unlikely event that the first token turns out to be extremely important. And it would be pretty weird if much of that information wasn't actually reconstructed in some sense in the current iteration. An inefficient way to get information from one iteration to the next, if that is your only goal, but plausibly very high bandwidth.
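To make the bandwidth point concrete, here's a toy illustration (my framing, not anything from the original comment) of how much of step t's context survives into step t+1 once the window is full:

```python
# When a full context window slides by one token, the next iteration sees
# everything the previous one saw except the single oldest token.
CONTEXT_LEN = 8

def next_context(context: list[str], new_token: str) -> list[str]:
    """Append the new token and keep only the most recent CONTEXT_LEN."""
    return (context + [new_token])[-CONTEXT_LEN:]

ctx = ["t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8"]  # full window
ctx2 = next_context(ctx, "t9")
carried = set(ctx) & set(ctx2)
print(ctx2)                                   # ['t2', ..., 't9']
print(f"{len(carried)} of {CONTEXT_LEN} tokens carried over")
```

So the channel between iterations is nearly the whole window, not a single token.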
Well, I don't think it should be possible to convince a reasonable person at this point in time. But maybe some evidence that we might not be doomed: Yudkowsky and others' ideas rest on some fairly plausible but complex assumptions. You'll notice in the recent debate threads where Eliezer is arguing for the inevitability of AI destroying us, he will often resort to something like, "well, that just doesn't fit with what I know about intelligences". At a certain point in these types of discussions you have to do some hand waving. Even if it's really good hand wavin...
While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much, and what type of, information we can obscure about a neural network while still approximately inferring meaningful details of that network from its behavior. For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbe...
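A minimal sketch of this setup (the tiny network, the probe inputs, and the noise scales are all placeholder choices of mine):

```python
# Record a small network's outputs, then add just enough parameter noise
# that the outputs deviate noticeably from the recorded behaviour.
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed "trained" network: 4 -> 16 -> 2 MLP with a tanh hidden layer.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def forward(x, params):
    w1, c1, w2, c2 = params
    return np.tanh(x @ w1 + c1) @ w2 + c2

X = rng.normal(size=(256, 4))             # probe inputs
original = forward(X, (W1, b1, W2, b2))   # recorded behaviour

for sigma in [1e-3, 1e-2, 1e-1]:
    noisy = tuple(p + rng.normal(scale=sigma, size=p.shape)
                  for p in (W1, b1, W2, b2))
    deviation = np.mean((forward(X, noisy) - original) ** 2)
    print(f"noise sigma={sigma:g}  mean squared output deviation={deviation:.3g}")
```

The recorded outputs then serve as a behavioural target for whatever recovery experiment comes next.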
This is outright saying ETH is likely to outperform BTC, so this is Scott’s biggest f*** you to the efficient market hypothesis yet. I’m going to say he’s wrong and sell to 55%, since it’s currently 0.046, and if it was real I’d consider hedging with ETH.
I'm curious what's behind this; is Zvi some sort of Bitcoin maximalist? I tend to think that Bitcoin having a high value is hard to explain; it made sense when it was the only secure cryptocurrency out there, but now it's to a large degree a consequence of social forces rather than economic ones. Ether I can see value in, since it does a bunch of things and there's at least an argument that it's best in class for all of those.
So many times I've been reading your blog and I'm thinking to myself, "finally something I can post to leftist spaces to get them to trust Scott more", and then I run into one or two sentences that nix that idea. It seems to me like you've mostly given up on reaching the conflict theory left, for reasons that are obvious. I really wish you would keep trying though, they (we?) aren't as awful and dogmatic as they appear to be on the internet, nor is their philosophy as incompatible. For me, it's less a matter of actually adopting the conflict perspective, and more just taking it more seriously and making fun of it less.
What about some form of indirect supervision, where we aim to find transcripts in which H faces a decision of a particular hardness? A would ideally be trained starting with things that are very, very easy for H, with the hardness ramped up until A maxes out its abilities. Rather than imitating H, we use a generative technique to create fake transcripts, imitating both H and its environment. We can incorporate into our loss function the amount of time H spends on a particular decision, the reliability of that decision, and maybe some kind of complexity measure on the transcript, to find easier/harder situations which are of genuine importance to H.
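As a rough sketch of what that hardness signal could look like (the three ingredients come from the comment above; the weighting and the concrete measures are my assumptions):

```python
# Score a transcript's hardness for H from decision time, reliability,
# and a complexity measure, then order a curriculum by that score.
from dataclasses import dataclass

@dataclass
class Transcript:
    decision_time: float   # seconds H spent on the decision
    reliability: float     # how consistently H decides the same way, in [0, 1]
    complexity: float      # e.g. compressed length of the transcript

def hardness(t: Transcript,
             w_time: float = 1.0,
             w_rel: float = 1.0,
             w_cx: float = 0.1) -> float:
    """Higher score = harder for H: slow, unreliable, complex decisions."""
    return (w_time * t.decision_time
            + w_rel * (1.0 - t.reliability)
            + w_cx * t.complexity)

# Ramp the curriculum from easy to hard generated transcripts.
transcripts = [Transcript(30.0, 0.6, 120.0), Transcript(2.0, 0.95, 40.0)]
curriculum = sorted(transcripts, key=hardness)
print([round(hardness(t), 1) for t in curriculum])
```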
Isn't The Least Convenient Possible World directly relevant here? I'm surprised it hasn't been mentioned yet.
One thing I've personally witnessed is people claiming to have had the exact same vivid dream the night before. I'm talking stuff like playing Scrabble with Brad Pitt and former President Carter on the summit of Mount McKinley, so it seems unlikely that they were both prompted by the same recent event. Assuming that these people weren't primed until after the fact, I would expect even stronger effects to be possible for those who have been.
I am extremely poor at visualization, can't even picture a line or a circle (I just tried it) and I don't remember images from my dreams. Strangely, when I was a child, I was sometimes able to visualize, but only with extreme effort. More recently, I have experienced what I would call "brain movies", involuntary realistic visualizations, under the influence of opiates.
It seems I am fundamentally capable of visual thinking, but my brain is just not in the habit, though I wouldn't mind being able to summon the ability. It sounds kinda cool.
Low-dose ketamine has been shown to promote synaptogenesis in the prefrontal cortex (in rats). Link to abstract
It is currently being investigated as a potential antidepressant in humans, but based on anecdotal evidence, it seems likely that it's also a nootropic.
Alexander Grothendieck used the analogy of opening a nut to illuminate two different styles of doing mathematics. One way is to strike the nut repeatedly with a hammer and chisel.
I can illustrate the second approach with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!
I don't know whether augmentation is the right step after backing off or not, but I do know that the simpler "back off" is a much better message to send to humanity than that. More digestible, more likely to be heard, more likely to be understood, doesn't cause people to peg you as a rational tech bro, doesn't at all sound like the beginning of a sci-fi apocalypse plot line. I could go on.