When I was in the UK I bought one of these floodlights. You have to attach your own plug, but that's easy enough. More frustratingly, there's a noticeable flicker, and the CRI's pretty poor.
On the upside, it's £70, compact, simple, and puts out 40k lumens.
Yes! Though that engineer might not be interested in us.
In-person is required. We'll add something to the job descriptions in the new year, thanks for the heads up!
It's not impossible, but it appears unlikely for the foreseeable future. We do sponsor visas, but if that doesn't suit then I'd take a look at Cohere.ai, as they're one org I know of with a safety team who are fully on board with remote.
You're presenting it as a litmus test for engineers to apply to themselves, and that's fine as far as it goes
I can reassure you that it is in fact a litmus test for engineers to apply to themselves, and that's as far as it goes.
While part of me is keen to discuss our interview design further, I'm afraid you've done a great job of laying out some of the reasons not to!
I've been surprised by this too, and my best explanation so far is schools. Evidence in favour is that Scottish school holidays start end-of-June, while English school holidays start middle-of-July, and indeed there looks to be a two-week difference in the peaks for the two nations.
A good test for this will be this week's ONS report. This doesn't have the English turn-around in it yet, but if it is schools then there should be an extremely sharp drop in the school-age rates.
All that said, it's only my best hypothesis. A strong piece of evidence against it ...
One piece of evidence against this: almost all the uptick in the UK is in folks under 40. Under 40s have a much lower vaccination rate due to the age-dependent rollout, but because of the blood clot scare under 40s have preferentially gotten Pfizer. Over 40s meanwhile have a very high vaccination rate but it's mostly AstraZeneca. Their case rate is flat.
Source
Those lines aren't flat; they're just hard to read on that scale. I made my own based on the heatmap of case rates for England (there doesn't seem to be a whole-UK heatmap).
Nine months later I consider my post pretty 'shrill', for want of a better adjective. I regret not making more concrete predictions at the time, because yeah, reality has substantially undershot my fears. I think there's still a substantial chance of something 10x larger being revealed within 18 months (which I think is the upper bound on 'timeline measured in months'), but it looks very unlikely that there'll be a 100x increase in that time frame.
To pick one factor I got wrong in writing the above, it was thinking of my massive update in response to ...
Curious if you have any other thoughts on this after another 10 months?
Those I know who train large models seem very confident we will get 100-trillion-parameter models before the end of the decade, but do not seem to think it will happen in, say, the next two years.
There is a strange, disconcerting phenomenon where many of the engineers I've talked to who are most in a position to know - people who work for (and in one case own) companies training 10-billion-plus-parameter models - seem to have timelines on the order of 5-10 years. Shane Legg recently said he gave a 50% chan...
I don't think the problem you're running into is a problem with making bets, it's a problem with leverage.
Heck, you've already figured out how to place a bet that resolves in the future but pays you money now: a loan. Combined with either the implicit bet on the end of the world freeing you from repayment, or an explicit one with a more-AI-skeptical colleague, this gets you your way of betting on AI risk that pays now.
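As a toy version of that arithmetic (every number below is invented):

```python
# Toy numbers for the 'loan as a bet on doom' idea - all invented for illustration.
loan = 10_000          # borrowed and spent today
rate = 0.07            # annual interest on the loan
years = 10             # repayment horizon
p_doom = 0.3           # your probability that repayment never comes due

repayment = loan * (1 + rate) ** years
expected_cost = (1 - p_doom) * repayment
print(f"owe {repayment:,.0f} in {years} years; expected cost {expected_cost:,.0f}")
# Whether that beats keeping the 10k depends on your own discount rate -
# the p_doom term is what tilts the comparison in the borrower's favour.
```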
Where it falls short is that most loanmaking organisations will at most offer you slightly more than the collateral you can put up. Because, w...
You may be interested in alpha-rank. It's an Elo-esque system for highly 'nontransitive' games - ie, games where there're lots of rock-paper-scissors-esque cycles.
At a high level, what it does is set up a graph like the one you've drawn, then places a token on a random node and repeatedly follows the 'defeated-by' edges. The amount of time spent on a node gives the strength of the strategy.
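For intuition, here's a toy sketch of that random-walk picture on rock-paper-scissors. To be clear, this is just the intuition from the paragraph above, not the actual alpha-rank construction (which works over a perturbed Markov chain of strategy profiles); the graph and step count are made up.

```python
import random
from collections import Counter

# Toy 'defeated-by' graph: each strategy points at the strategy that beats it.
defeated_by = {
    "rock": ["paper"],
    "paper": ["scissors"],
    "scissors": ["rock"],
}

def random_walk_strengths(graph, steps=100_000, seed=0):
    """Drop a token on a random node and repeatedly follow 'defeated-by' edges;
    the fraction of time spent at each node is read as that strategy's strength."""
    rng = random.Random(seed)
    node = rng.choice(sorted(graph))
    visits = Counter()
    for _ in range(steps):
        visits[node] += 1
        node = rng.choice(graph[node])
    return {n: count / steps for n, count in visits.items()}

print(random_walk_strengths(defeated_by))
# ~{'rock': 0.33, 'paper': 0.33, 'scissors': 0.33}
```

On a pure cycle like this the walk spends equal time everywhere, so all three strategies come out equally strong; the interesting cases are graphs where some strategies soak up much more of the 'defeated-by' traffic than others.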
You might also be interested in rectified policy space response oracles, which is one approach to finding new, better strategies in nontransitive games.
This is superb, and I think it'll have a substantial impact on debate work going forward. Great work!
Worth noting that the "evidence from the nascent AI industry" link has bits of evidence pointing in both directions. For example:
Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources. While it’s tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as “data drift”).
Doesn't this kind of cost make AI services harder to commodify? And also:
...We’ve seen a massive difference in COGS
You can get a complementary analysis by comparing the US to its past self. Incarceration rate, homicide rate. Between 1975 and 2000, the incarceration rate grew five-fold while the homicide rate fell by half.
Bit of a tangent, but while we might plausibly run out of cheap oil in the near future, the supply of expensive, unconventional oil is vast. By vast I mean 'several trillion barrels of known reserves', against an annual consumption of 30bn.
The question is just how much of those reserves is accessible at each price point. This is really hard to answer well, so instead here's an anecdote that'll stick in your head: recent prices ($50-$100/bbl) are sufficient that the US is now the largest producer of oil in the world, and a net exporter to b...
Thanks for the feedback! I've cleaned up the constraints section a bit, though it's still less coherent than the first section.
Out of curiosity, what was it that convinced you this isn't an infohazard-like risk?
While you're here and chatting about D.5 (assume you meant 5), another tiny thing that confuses me - Figure 21. Am I right in reading the bottom two lines as 'seeing 255 tokens and predicting the 256th is exactly as difficult as seeing 1023 tokens and predicting the 1024th'?
edit: Another look and I realise Fig 20 shows things much more clearly - never mind, things continue to get easier with token index.
Though it's not mentioned in the paper, I feel like this could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further.
It's indeed strange no-one else has picked up on this, which makes me feel I'm misunderstanding something. The breakdown suggested in the scaling law does imply that this specific architecture doesn't have much further to go. Whether the limitation is in something as fundamental as 'the information content of language itself', or if it's a more-easily bypas...
They do discuss this a little bit in that scaling paper, in Appendix D.6. (edit: actually Appendix D.5)
At least in their experimental setup, they find that the first 8 tokens are predicted better by a model with only 8 tokens in its window than by one with 1024 tokens, if the two have equally many parameters. And that later tokens are harder to predict, and hence require more parameters if you want to reach some given loss threshold.
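To put rough numbers on 'require more parameters', here's a sketch that simply inverts the headline average-loss fit L(N) ≈ (N_c/N)^α_N - not the appendix's actual per-position fit - with the constants quoted from memory and the two loss targets invented as stand-ins for an earlier versus a later token position:

```python
# Rough illustration only: invert the average-loss-vs-parameters law to ask how
# many non-embedding parameters a given per-token loss target would demand.
ALPHA_N = 0.076   # fitted exponent (quoted from memory, treat loosely)
N_C = 8.8e13      # fitted constant, in non-embedding parameters

def params_for_loss(target_loss: float) -> float:
    # L = (N_C / N)**ALPHA_N  =>  N = N_C * L**(-1 / ALPHA_N)
    return N_C * target_loss ** (-1.0 / ALPHA_N)

for target in (3.0, 2.8):   # invented 'early token' vs 'late token' loss targets
    print(f"loss target {target}: ~{params_for_loss(target):.1e} parameters")
```

With an exponent that small, shaving ~7% off the loss target costs roughly 2.5x the parameters, which is why 'later tokens are a bit harder' can translate into a noticeably different parameter requirement.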
I'll have to think more about this and what it might mean for their other scaling laws... at the very least, it's an effect which their analysis treats as approximately zero, and math/physics models with such approximations often break down in a subset of cases.
'Why the hell has our competitor got this transformative capability that we don't?' is not a hard thought to have, especially among tech executives. I would be very surprised if there wasn't a running battle over long-term perspectives on AI in the C-suite of both Google Brain and DeepMind.
If you do want to think along these lines though, the bigger question for me is why OpenAI released the API now, and gave concrete warning of the transformative capabilities they intend to deploy in six? twelve? months' time. 'Why the hell ...
I think the fact that it's not a hard thought to have is not much evidence about whether other orgs will change approach. It takes a lot to turn the ship.
Consider how easy it would be to have the thought "Electric cars are the future; we should switch to making electric cars" at any time in the last 15 years. And yet, look at how slow traditional automakers have been to switch.
hey man wanna watch this language model drive my car
Thinking about this a bit more, do you have any insight on Tesla? I can believe that it's outside DM and GB's culture to run with the scaling hypothesis, but watching Karpathy's presentations (which I think is the only public information on their AI program?) I get the sense they're well beyond $10m/run by now. Considering that self-driving is still not there - and once upon a time I'd have expected driving to be easier than Harry Potter parodies - it suggests that language is special in some way. Information density? Rich, diff'able reward signal?
Tesla publishes nothing and I only know a little from Karpathy's occasional talks, which are as much about PR (to keep Tesla owners happy and investing in FSD, presumably) & recruiting as anything else. But their approach seems heavily focused on supervised learning in CNNs and active learning using their fleet to collect new images, and to have nothing to do with AGI plans. They don't seem to even be using DRL much. It is extremely unlikely that Tesla is going to be relevant to AGI or progress in the field in general given their secrecy and domain-spe...
Self-driving is very unforgiving of mistakes. Text generation, on the other hand, doesn't have similar failure conditions, and bad content can easily be fixed.
hey man wanna watch this language model drive my car
I'd say it's at least 30% likely that's the case! But if you believe that, you'd be pants-on-head loony not to drop a billion on the 'residual' 70% chance that you'll be first to market on a world-changing trillion-dollar technology. VCs would sacrifice their firstborn for that kind of deal.
Entirely seriously: I can never decide whether the drunkard's search is a parable about the wisdom in looking under the streetlight, or the wisdom of hunting around in the dark.
I think the drunkard's search is about the wisdom of improving your tools. Sure, spend some time out looking, but let's spend a lot of time making better streetlights and flashlights, etc.
Feels worth pasting in this other comment of yours from last week, which dovetails well with this:
DL so far has been easy to predict - if you bought into a specific theory of connectionism & scaling espoused by Schmidhuber, Moravec, Sutskever, and a few others, as I point out in https://www.gwern.net/newsletter/2019/13#what-progress & https://www.gwern.net/newsletter/2020/05#gpt-3 . Even the dates are more or less correct! The really surprising thing is that that particular extreme fringe lunatic theory turned out to be correct. So the question is,...
I'm imagining a tiny AI Safety organization, circa 2010, that focused on how to achieve probable alignment for scaled-up versions of that year's state-of-the-art AI designs. It's interesting to ask whether that organization would have achieved more or less than MIRI has, in terms of generalizable work and in terms of field-building.
Certainly it would have resulted in a lot of work that was initially successful but ultimately dead-end. But maybe early concrete results would have attracted more talent/attention/respect/funding, and the org could have thrown ...
a lot of AI safety work increasingly looks like it'd help make a hypothetical kind of AI safe
I think there are many reasons a researcher might still prioritize non-prosaic AI safety work. Off the top of my head:
GPT-3 does indeed only depend on the past few thousand words. AI Dungeon, however, can depend on a whole lot more.
Be careful using AI Dungeon's behaviour to infer GPT-3's behaviour. I am fairly confident that Latitude wraps your Dungeon input before submitting it to GPT-3; if you put in the prompt all at once, that'll make for different model input than putting it in one line at a time.
I am also unsure as to whether the undo/redo system sends the same input to the model each time. Might be Latitude adds something to encourage an output different to the ones you've already seen.
Alternately phrased: much of the observed path dependence in this instance might be in Dragon, not GPT-3.
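To be concrete about what 'wrap' might mean, here's a purely hypothetical sketch - I have no idea what Latitude's actual template looks like; the only point is that the same text, entered differently, can reach the model as different strings:

```python
# Purely hypothetical wrapper, for illustration only - not Latitude's code.
def wrap_turns(turns):
    header = "You are playing a text adventure.\n"
    return header + "\n".join(f"> {turn}" for turn in turns)

story = "You enter the cave. A dragon sleeps on a pile of gold."

all_at_once = wrap_turns([story])             # pasted in as one block
line_by_line = wrap_turns(story.split(". "))  # entered sentence by sentence

print(all_at_once == line_by_line)  # False: same text, different model input
```

If the undo/redo path perturbs the wrapped history in a similar way, that alone could account for some of the apparent path dependence.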
I think that going forward there'll be a spectrum of interfaces to natural language models. At one end you'll have fine-tuning, and at the other you'll have prompts. The advantage of fine-tuning is that you can actually apply an optimizer to the task! The advantage of prompts is anyone can use them.
In the middle of the spectrum, two things I expect are domain-specific tunings and intermediary models. By 'intermediary models' I mean NLP models fine-tuned to take a human prompt over a specific area and return a more useful prompt for...
(a)
Look, we already have superhuman intelligences. We call them corporations and while they put out a lot of good stuff, we're not wild about the effects they have on the world. We tell corporations 'hey do what human shareholders want' and the monkey's paw curls and this is what we get.
Anyway yeah that but a thousand times faster, that's what I'm nervous about.
(b)
Look, we already have superhuman intelligences. We call them governments and while they put out a lot of good stuff, we're not wild about the effects they have on the world. We tell gov...