All of delton137's Comments + Replies

I've never heard of cyclic peptides. But I suppose it is possible. 

Actually the HPLC study we cite talks about a similar possibility - they say the observed effects may be due to "cryptic peptides", perhaps created by the lysing of larger proteins and peptides. It just seems very unlikely to me. Based on my metascience research, it's a bit easier for me to believe that the published literature is just plain wrong. Check out the HPLC paper - it's the only one we found doing that sort of analysis, but it's very detailed (https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/abs/10.1002/dta.1817). 

In the case of Bryan Johnson he injected it in his glute. Others inject in the thigh (there's a how-to video about this on YouTube). Apparently some biohackers online think that by injecting in the shoulder more will reach the brain, because the shoulder is closer to the head, according to a podcast I listened to LOL (the podcaster agreed this made a lot of sense). 

4Nathan Helm-Burger
To be fair, as Seth Redmore mentions in his comment, some signalling proteins do get actively moved across the Blood Brain Barrier (BBB). I don't expect you to have much luck with significantly affecting brain BDNF levels via peripheral injection but... it's not a completely black and white case, because few things in biology are. Of course, if you could isolate just the BDNF fraction and inject a large quantity, then a low BBB-crossing rate would still amount to a meaningful effect.  The Blood Brain Barrier (BBB) isn't perfect, stuff leaks from time to time. You can deliberately (and very unwisely!) temporarily degrade the BBB with certain chemicals. I've assisted in studies taking advantage of this fact to get things into rat brains that otherwise would be restricted by the BBB. But, you can kill or permanently cripple yourself doing that, so I really don't recommend it.
delton137161

Uh this is kinda embarrassing but I just didn't really read it at all honestly.. 

Q: when was prion disease discovered

A: "Prion disease was first recognized as a distinct category of neurodegenerative disorders in the 1980s, though the groundwork for its discovery was laid earlier.

Key milestones in the discovery of prion diseases include:

  1. 1920s: Creutzfeldt-Jakob Disease (CJD) was first described by Hans Gerhard Creutzfeldt and Alfons Maria Jakob.
  2. 1950s: Carleton Gajdusek began studying kuru, a disease affecting the Fore people of Papua New Guinea.
  3. 1960s:
... (read more)

You have an eagle eye Carl!... I actually asked Claude (the AI) about this when I was writing it, but I was very sloppy and didn't read Claude's answer carefully. Just fixed it. I should be more careful..

I'm a little confused. Is Claude considered a reliable secondary source now? Did you not even check Wikipedia?

I'm as enthusiastic about the power of AI as the next guy, but that seems like the worst possible way to use it.

7gwern
What did Claude say, exactly?

hah... actually not a bad idea... too late now. BTW the recording will be available eventually if you're interested.

Hi, organizer here. I just saw your message now right after the event. There were a couple people from Microsoft there but I'm not sure if they were interested in alignment research. This was mostly a general audience, coming mainly through the website AIcamp.ai. We also had some people from the local ACX meetup and transhumanist meetup. PS: I sent you an invitation to connect on LinkedIn, let's stay in touch (I'm https://www.linkedin.com/in/danielelton/). 

1Sheikh Abdur Raheem Ali
Accepted

Unfortunately they have a policy that they check ID at the door and only allow those over 21 in. I'm going to update the post now to make this clear. Even when the outdoor patio is open it's still only 21+. 

The way I would describe it now is there's a large bar in the main room, and then there's a side room (which is also quite large) with a place that serves Venezuelan food (very good), and Somerville Chocolate (they make and sell chocolate there).  

The age restriction has never been a problem in the past although I do vaguely recall someone m... (read more)

1duck_master
Thank you for the comment! I will not attend this since you stated that they check IDs at the door.

Sorry for the late reply. In the future we will try to have a Zoom option for big events like this. 

We did manage to record it, but the audio isn't great (and we didn't cover the Q&A)
 

This is pretty interesting.. any outcome you can share? (I'll bug you about this next time I see you in person so you can just tell me then rather than responding, if you'd like)

Good idea to just use the time you fall asleep rather than the sleep stage tracking, which isn't very accurate. I think the most interesting metric is just boring old total sleep time (unfortunately sleep trackers in my experience are really bad at actually capturing sleep quality... but I suppose if there's a sleep quality score you have found useful, that might be interesting to look at also). Something else I've noticed is that by looking at the heart rate you can often get a more accurate idea of when you fell asleep and woke up. 
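To make the heart-rate idea concrete, here's a minimal sketch (the threshold, window sizes, and synthetic data are all made-up illustrative choices, not a validated algorithm): flag sleep onset as the first time heart rate stays below ~90% of the evening baseline for a sustained stretch, so brief dips don't count.

```python
import numpy as np

def estimate_sleep_onset(heart_rate, times, drop_fraction=0.9,
                         baseline_window=10, hold=10):
    """Return the first timestamp where heart rate stays below
    drop_fraction * (evening baseline) for `hold` consecutive samples."""
    hr = np.asarray(heart_rate, dtype=float)
    baseline = hr[:baseline_window].mean()   # assume the first samples are awake
    threshold = drop_fraction * baseline
    run = 0
    for i, below in enumerate(hr < threshold):
        run = run + 1 if below else 0
        if run >= hold:                      # sustained drop, not a brief dip
            return times[i - hold + 1]
    return None

# Synthetic one-sample-per-minute trace: awake ~65 bpm, then asleep ~52 bpm.
times = np.arange(60)
hr = np.concatenate([np.full(30, 65.0), np.full(30, 52.0)])
onset = estimate_sleep_onset(hr, times)  # -> 30
```

A real tracker would have to handle noisy samples and high-heart-rate evenings, but this is the basic shape of the heuristic.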

I would modify the theory slightly by noting that the brain may become hypersensitive to sensations arising from the area that was originally damaged, even after it has healed. Sensations that are otherwise normal can then trigger pain. I went to the website about pain reprocessing therapy and stumbled upon an interview with Alan Gordon where he talked about this.  I suspect that high level beliefs about tissue damage etc play a role here also in causing the brain to become hyper focused on sensations coming from a particular region and to interpret t... (read more)

2Chris_Leong
Just to add to this: Beliefs can be self-reinforcing in predictive processing theory because the higher level beliefs can shape the lower level observations. So the hypersensitisation that Delton has noted can reinforce itself.

Since nobody else posted these: 

Bay Area is Sat Dec 17th (Eventbrite) (Facebook)

South Florida (about an hour north of Miami) is Sat Dec 17th (Eventbrite) (Facebook)

On current hardware, sure.

It does look like scaling will hit a wall soon if hardware doesn't improve, see this paper: https://arxiv.org/abs/2007.05558

But Gwern has responded to this paper pointing out several flaws... (having trouble finding his response right now..ugh)

However, we have lots of reasons to think Moore's law will continue ... in particular future AI will be on custom ASICs / TPUs / neuromorphic chips, which is a very different story. I wrote about this long ago, in 2015. Such chips, especially asynchronous and analog ones, can be vastly more ... (read more)

I disagree, in fact I actually think you can argue this development points the opposite direction, when you look at what they had to do to achieve it and the architecture they use. 

I suggest you read Ernest Davis' overview of Cicero.  Cicero is a special-purpose system that took enormous work to produce -- a team of multiple people labored on it for three years.  They had to assemble a massive dataset from 125,300 online human games. They also had to get expert annotations on thousands of preliminary outputs. Even that was not enough.. they ... (read more)

I've looked into these methods a lot, in 2020 (I'm not so much up to date on the latest literature). I wrote a review in my 2020 paper, "Self-explaining AI as an alternative to interpretable AI". 

There are a lot of issues with saliency mapping techniques, as you are aware (I saw you link to the "sanity checks" paper below). Funnily enough though, the super simple technique of occlusion mapping does seem to work very well, though! It's kinda hilarious actually that there are so many complicated mathematical techniques for saliency mapping, but I have s... (read more)

delton137Ω120

A world simulator of some sort is probably going to be an important component in any AGI, at the very least for planning - Yann LeCun has talked about this a lot. There's also this work where they show a VAE-type thing can be configured to run internal simulations of the environment it was trained on.

In brief, a few issues I see here:

  • You haven't actually provided any evidence that GPT does simulation other than "Just saying “this AI is a simulator” naturalizes many of the counterintuitive properties of GPT which don’t usually become apparen
... (read more)
6the gears to ascension
my impression is that by simulator and simulacra this post is not intending to claim that the thing it is simulating is realphysics but rather that it learns a general "textphysics engine", the model, which runs textphysics environments. it's essentially just a reframing of the prediction objective to describe deployment time - not a claim that the model actually learns a strong causal simplification of the full variety of real physics.

Piperine (black pepper extract) can help make quercetin more bioavailable. The two are co-administered in many studies on the neuroprotective effects of quercetin: https://scholar.google.com/scholar?hl=en&as_sdt=0,22&q=piperine+quercetin

I find slower take-off scenarios more plausible. I like the general thrust of Christiano's "What failure looks like". I wonder if anyone has written up a more narrative / concrete account of that sort of scenario.

The thing you are trying to study ("returns on cognitive reinvestment") is probably one of the hardest things in the world to understand scientifically. It requires understanding both the capabilities of specific self-modifying agents and the complexity of the world. It depends what problem you are focusing on too -- the shape of the curve may be very different for chess vs something like curing disease. Why? Because chess I can simulate on a computer, so throwing more compute at it leads to some returns. I can't simulate human biology in a computer - we h... (read more)

2DragonGod
I disagree. The theoretical framework is a first step to allow us to reason more clearly about the topic. I expect to eventually bridge the gap between the theoretical and the empirical. In fact, I just added some concrete empirical research directions I think could be pursued later on:   Recall that I called this "a rough draft of the first draft of one part of the nth post of what I hope to one day turn into a proper sequence". There's a lot of surrounding context that I haven't gotten around to writing yet. And I do have a coherent narrative of where this all fits together in my broader project to investigate takeoff dynamics.   The formalisations aren't useless; they serve to refine and sharpen thinking. Making things formal forces you to make explicit some things you'd left implicit.

How familiar are you with Chollet's paper "On the Measure of Intelligence"? He disagrees a bit with the idea of "AGI" but if you operationalize it as "skill acquisition efficiency at the level of a human" then he has a test called ARC which purports to measure when AI has achieved human-like generality.

This seems to be a good direction, in my opinion. There is an ARC challenge on Kaggle and so far AI is far below the human level. On the other hand, "being good at a lot of different things", i.e. task performance across one or many tasks, is obviously very important to understand, and Chollet's definition is independent from that.

4lorepieri
ARC is a nice attempt. I also participated in the original challenge on Kaggle. The issue is that the test can be gamed (as everyone on Kaggle did) by brute-forcing over solution strategies. An open-ended or interactive version of ARC may solve this issue.

Thanks, it's been fixed!!

Interesting, thanks. 10x reduction in cost every 4 years is roughly twice what I would have expected. But it sounds quite plausible especially considering AI accelerators and ASICs.

Thanks for sharing! That's a pretty sophisticated modeling function but it makes sense. I personally think Moore's law (the FLOPS/$ version) will continue, but I know there's a lot of skepticism about that.

Could you make another graph like Fig 4 but showing projected cost, using Moore's law to estimate cost? The cost is going to be a lot, right?

9Tamay
Thanks!  Good idea. I might do this when I get the time—will let you know!
8lennart
We basically lumped the reduced cost of FLOP per $ and increased spending together. A report from CSET on AI and Compute projects the costs by using two strongly simplified assumptions: (I) doubling every 3.4 months (based on OpenAI's previous report) and (II) computing cost stays constant. This could give you some ideas on rather upper bounds of projected costs. Carey's previous analysis uses this dataset from AI Impacts and therefore assumes:

Networks with loops are much harder to train.. that was one of the motivations for going to transformers instead of RNNs. But yeah, sure, I agree. My objection is more that posts like this are so high level I have trouble following the argument, if that makes sense. The argument seems roughly plausible but not making contact with any real object level stuff makes it a lot weaker, at least to me. The argument seems to rely on "emergence of self-awareness / discovery of malevolence/deception during SGD" being likely which is unjustified in my view. I'm not s... (read more)

Has GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell although anecdotal reports on Twitter are that many SWEs are finding Github Copilot extremely useful (it's still in private beta though). I think transformers are going to start providing actual value soon, but the fact they haven't so far despite almost two years of breathless hype is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing and actually look at what is being deployed "in the ... (read more)

1superads91
Economic value might not be a perfect measure. Nuclear fission didn't generate any economic value either until 200,000 people in Japan were incinerated. My fear is that a mixture-of-experts approach can lead to extremely fast progress towards AGI. Perhaps even less - maybe all it takes is an agent AI that can code as well as humans to start a cascade of recursive self-improvement. But indeed, Knightian uncertainty here would already put me at some ease. As long as you can be sure that it won't happen "just anytime" before some more barriers are crossed, at least you can still sleep at night and have the sanity to try to do something. I don't know, I'm not a technical person, that's why I'm asking questions and hoping to learn more. "I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon." Personally that's what worries me the least. We can't even crack C. elegans! I don't doubt that in 100-200 years we'd get there, but I see many other, way faster routes.
Answer by delton13710

I'm interested!

This is a shot in the dark, but I recall there was a blog post that made basically the same point visually, I believe using Gaussian distributions. I think the number they argued you should aim for was 3-4 instead of 6. Anyone know what I'm talking about?

5lsusr
I don't recognize the exact blog post you reference but in my personal experience the actual number of skills that put me on a useful Pareto frontier is indeed closer to 3-4. To be clear, I don't get there by just learning 3-4 different skills. I learn like 8-10 and then a subset of 3-4 gets me to the Pareto frontier.

Hi, I just wanted to say thanks for the comment / feedback. Yeah, I probably should have separated out the analysis of Grokking from the analysis of emergent behaviour during scaling. They are potentially related - at least for many tasks it seems Grokking becomes more likely as the model gets bigger. I'm guilty of actually conflating the two phenomena in some of my thinking, admittedly.

Your point about "fragile metrics" being more likely to show Grokking is great. I had a similar thought, too.

I think a bit too much mindshare is being spent on these sci-fi scenario discussions, although they are fun.

Honestly I have trouble following these arguments about deception evolving in RL. In particular I can't quite wrap my head around how the agent ends up optimizing for something else (not a proxy objective, but a possibly totally orthogonal objective like "please my human masters so I can later do X"). In any case, it seems self awareness is required for the type of deception that you're envisioning. Which brings up an interesting question - can a pu... (read more)

1Timothy Underwood
Yeah, but don't you expect successful human equivalent neural networks to have some sort of loop involved? It seems pretty likely to me that the ML researchers will successfully figure out how to put self analysis loops into neural nets.

Zac says "Yes, over the course of training AlphaZero learns many concepts (and develops behaviours) which have clear correspondence with human concepts."

What's the evidence for this? If AlphaZero worked by learning concepts in a sort of step-wise manner, then we should expect jumps in performance when it comes to certain types of puzzles, right? I would guess that a beginning human would exhibit jumps from learning concepts like "control the center" or "castle early, not later".. for instance the principle "control the center", once followed, has implicati... (read more)

Huh, that's pretty cool, thanks for sharing.

This is pretty interesting. There is a lot to quibble about here, but overall I think the information about bees here is quite valuable for people thinking about where AI is at right now and trying to extrapolate forward.

A different approach, perhaps more illuminating would be to ask how much of a bee's behavior could we plausibly emulate today by globing together a bunch of different ML algorithms into some sort of virtual bee cognitive architecture - if say we wanted to make a drone that behaved like a bee ala Black Mirror. Obviously that's a much more c... (read more)

Another point is that when you optimize relentlessly for one thing, you might have trouble exploring the space adequately (getting stuck at local maxima). That's why RL agents/algorithms often take random actions when they are training (they call this "exploration", as opposed to "exploitation"). Maybe random actions can be thought of as a form of slack? Micro-slacks?
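The standard name for this in RL is epsilon-greedy action selection. A toy sketch on a 3-armed bandit (the payout probabilities and epsilon value are arbitrary illustrative numbers):

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon take a random action (exploration /
    'micro-slack'); otherwise exploit the current best estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# 3-armed bandit: arm 2 is truly best, but the agent doesn't know that.
true_p = [0.2, 0.5, 0.8]          # made-up payout probabilities
q = [0.0, 0.0, 0.0]               # running value estimates per arm
counts = [0, 0, 0]
rng = random.Random(0)
for _ in range(2000):
    a = epsilon_greedy(q, 0.1, rng)
    reward = 1.0 if rng.random() < true_p[a] else 0.0
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental mean update
best = max(range(3), key=q.__getitem__)
```

With a little built-in randomness the agent reliably finds the best arm, whereas pure greedy exploitation can lock onto whichever arm paid off first.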

Look at Kenneth Stanley's arguments about why objective functions are bad (video talk on it here). Basically he's saying we need a lot more random exploration. Humans are similar - we have an ope... (read more)

Bostrom talks about this in his book "Superintelligence" when he discusses the dangers of Oracle AI. It's a valid concern, we're just a long way from that with GPT-like models, I think.

I used to think a system trained on text only could never learn vision. So if it escaped onto the internet, it would be pretty limited in how it could interface with the outside world, since it couldn't interpret streams from cameras. But then I realized that probably in its training data is text on how to program a CNN. So in theory a system trained on only text could build... (read more)

I just did some tests... it works if you go to settings and click "Activate Markdown Editor". Then convert to Markdown and re-save (note, you may want to back up before this, there's a chance footnotes and stuff could get messed up). 

$stuff$ for inline math and double dollar signs for single-line math work when in Markdown mode. When using the normal editor, inline math doesn't work, but $$ works (though it puts the equation on a new line). 

I have mixed feelings on this. I have mentored ~5 undergraduates in the past 4 years and observed many others, and their research productivity varies enormously. How much of that is due to IQ vs other factors I really have no idea. My personal feeling was most of the variability was due to life factors like the social environment (family/friends) they were ensconced in and how much time that permitted them to focus on research. 

My impression from TAing physics for life scientists for two years was that a large number felt they were intrinsically bad a... (read more)

I liked how in your AISS support talk you used history as a frame for thinking about this because it highlights the difficulty of achieving superhuman ethics. Human ethics (for instance as encoded in laws/rights/norms) is improving over time, but it's been a very slow process that involves a lot of stumbling around and having to run experiments to figure out what works and what doesn't.  "The Moral Arc" by Michael Shermer is about the causes of moral progress... one of them is allowing free speech, free flow of ideas. Basically, it seems moral progres... (read more)

Answer by delton13720

It's a mixed bag. A lot of near term work is scientific, in that theories are proposed and experiments run to test them, but from what I can tell that work is also incredibly myopic and specific to the details of present day algorithms and whether any of it will generalize to systems further down the road is exceedingly unclear. 

The early writings of Bostrom and Yudkowsky I would classify as a mix of scientifically informed futurology and philosophy. As with science fiction, they are laying out what might happen. There is no science of psychohistory an... (read more)

The paper you cited does not show this.

Yeah, you're right I was being sloppy. I just crossed it out. 

oo ok, thanks, I'll take a look. The point about generative models being better is something I've been wanting to learn about, in particular. 

SGD is a form of efficient approximate Bayesian updating.

Yeah I saw you were arguing that in one of your posts. I'll take a closer look. I honestly have not heard of this before. 

Regarding my statement - I agree, looking back at it, it is horribly sloppy and sounds absurd, but when I was writing I was just thinking about how all L1 and L2 regularization do is bias towards smaller weights - the models still take up the same amount of space on disk and require the same amount of compute to run in terms of FLOPs. But yes, you're right they make the m... (read more)

2jacob_cannell
So actually L1/L2 regularization does allow you to compress the model by reducing entropy, as evidenced by the fact that any effective pruning/quantization system necessarily involves some strong regularizer applied during training or after. The model itself can't possibly know or care whether you later actually compress said weights or not, so it's never the actual compression itself that matters, vs the inherent compressibility (which comes from the regularization).

By the way, if you look at Filan et al.'s paper "Clusterability in Neural Networks" there is a lot of variance in their results but generally speaking they find that L1 regularization leads to slightly more clusterability than L2 or dropout.

The idea that using dropout makes models simpler is not intuitive to me because according to Hinton dropout essentially does the same thing as ensembling. If what you end up with is something equivalent to an ensemble of smaller networks, then it's not clear to me that would be easier to prune.

One of the papers you linked to appears to study dropout in the context of Bayesian modeling and they argue it encourages sparsity. I'm willing to buy that it does in fact reduce complexity/ compressibility but I'm also not sure any of this is 100% clear cut.
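Hinton's ensembling view can at least be sketched numerically: with "inverted" dropout, averaging many randomly sampled sub-networks at train time comes out close to the single deterministic test-time pass (toy linear layer; all values are illustrative, not from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, p=0.5, train=True):
    """Linear layer with 'inverted' dropout: kept units are scaled by
    1/(1-p) at train time, so no rescaling is needed at test time."""
    if train:
        mask = (rng.random(x.shape) >= p) / (1.0 - p)  # sample a sub-network
        return (x * mask) @ w
    return x @ w                                       # full network

x = np.ones(100)
w = np.ones((100, 1))
# Average many sampled sub-networks vs. one deterministic test-time pass:
ensemble_avg = np.mean([layer(x, w, train=True) for _ in range(5000)])
test_out = layer(x, w, train=False).item()
```

The test-time forward pass approximates the geometric/arithmetic mean of an exponential number of sub-networks, which is the sense in which dropout "is" ensembling; whether that ensemble is easier to prune is the open question here.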

3jacob_cannell
It's not that dropout provides some ensembling secret sauce; instead neural nets are inherently ensembles proportional to their level of overcompleteness. Dropout (like other regularizers) helps ensure they are ensembles of low complexity sub-models, rather than ensembles of over-fit higher complexity sub-models (see also: lottery tickets, pruning, grokking, double descent).

(responding to Jacob specifically here) A lot of things that were thought of as "obvious" were later found out to be false in the context of deep learning - for instance the bias-variance trade-off.

I think what you're saying makes sense at a high/rough level but I'm also worried you are not being rigorous enough. It is true and well known that L2 regularization can be derived from Bayesian neural nets with a Gaussian prior on the weights. However neural nets in deep learning are trained via SGD, not with Bayesian updating -- and it doesn't seem modern CNNs... (read more)
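For reference, the textbook derivation alluded to here (standard MAP estimation, not anything specific to this thread): with a Gaussian prior on the weights, maximizing the posterior is the same as minimizing the data loss plus an L2 penalty.

```latex
% MAP with Gaussian prior p(w) = N(0, \sigma^2 I):
\begin{aligned}
w^{*} &= \arg\max_{w}\; \log p(D \mid w) + \log p(w) \\
      &= \arg\min_{w}\; -\log p(D \mid w)
         + \frac{1}{2\sigma^{2}} \lVert w \rVert_{2}^{2} + \text{const}.
\end{aligned}
```

The L2 coefficient is just the inverse prior variance; whether this Bayesian picture survives SGD training rather than true Bayesian updating is exactly the point of contention here.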

4jacob_cannell
SGD is a form of efficient approximate Bayesian updating. More specifically it's a local linear 1st order approximation. As the step size approaches zero this approximation becomes tight, under some potentially enormous simplifying assumptions of unit variance (which are in practice enforced through initialization and explicit normalization). But anyway that's not directly relevant, as Bayesian updating doesn't have some monopoly on entropy/complexity tradeoffs. If you want to be 'rigorous', then you shouldn't have confidently said: (As you can't rigorously back that statement up). Regularization to bias towards simpler models in DL absolutely works well, regardless of whether you understand it or find the provided explanations satisfactory.

Hey, OK, fixed. Sorry there is no link to the comment -- I had a link in an earlier draft but then it got lost. It was a comment somewhere on LessWrong and now I can't find it -_-.

That's interesting it motivated you to join Anthropic - you are definitely not alone in that. My understanding is Anthropic was founded by a bunch of people who were all worried about the possible implications of the scaling laws.

1Zac Hatfield-Dodds
No worries, here's the comment.

To my knowledge the most used regularization method in deep learning, dropout, doesn't make models simpler in the sense of being more compressible.

A simple L1 regularization would make models more compressible in so far as it suppresses weights towards zero so they can just be thrown out completely without affecting model performance much. I'm not sure about L2 regularization making things more compressible - does it lead to flatter minima for instance? (GPT-3 uses L2 regularization, which they call "weight decay").
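A tiny sketch of the L1-then-prune point (synthetic data and hyperparameters are arbitrary, and this uses proximal gradient descent / ISTA rather than any particular paper's setup): with an L1 penalty most irrelevant weights land exactly at zero, so they can be thrown out without affecting the fit, while plain gradient descent leaves small-but-nonzero weights everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth: only 3 of 20 features actually matter.
n, d = 200, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
X = rng.normal(size=(n, d))
y = X @ w_true + 0.5 * rng.normal(size=n)

def fit(l1=0.0, steps=2000, lr=0.01):
    """Proximal gradient descent (ISTA) on 0.5*MSE + l1*||w||_1.
    The soft-threshold step sets small weights *exactly* to zero."""
    w = np.zeros(d)
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y) / n)                      # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # shrink
    return w

w_plain = fit(l1=0.0)
w_l1 = fit(l1=0.1)
prunable = lambda w: int(np.sum(np.abs(w) < 1e-2))  # weights safe to drop
```

On this toy problem the L1 fit typically zeroes out nearly all 17 irrelevant weights while keeping the 3 real ones, whereas the plain fit leaves small nonzero weights you can only drop with a pruning threshold chosen after the fact.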

But yes, you are right, Occam factors are... (read more)

8gwern
Yes, it does (as should make sense, because if you can drop out a parameter entirely, you don't need it, and if it succeeds in fostering modularity or generalization, that should make it much easier to prune), and this was one of the justifications for dropout, and that has nice Bayesian interpretations too. (I have a few relevant cites in my sparsity tag.)
4jacob_cannell
L2 regularization is much more common than dropout, but both are a complexity prior and thus compress. This is true in a very obvious way for L2. Dropout is more complex to analyze, but has now been extensively analyzed and functions as a complexity/entropy penalty as all regularization does. L2 regularization (weight decay) obviously makes things more compressible - it penalizes models with high entropy under a per-param gaussian prior. "Flatter minima" isn't a very useful paradigm for understanding this, vs Bayesian statistics.

I think this is a nice line of work. I wonder if you could add a simple/small constraint on weights that avoids the issue of multimodal neurons -- it seems doable. 

I just wanted to say I don't think you did anything ethically wrong here. There was a great podcast with Diana Fleischman I listened to a while ago where she talked about how we manipulate other people all the time, especially in romantic relationships. I'm uncomfortable saying that any manipulation whatsoever is ethically wrong because I think that's demanding too much cognitive overhead for human relationships (and also makes it hard to raise kids) - I think you have to figure out a more nuanced view. For instance, having a high level rule on what ... (read more)

You sound very confident your device would have worked really well. I'm curious, how much testing did you do? 

I have a Garmin Vivosmart 3 and it tries to detect when I'm either running, biking, or going up stairs. It works amazingly well considering the tiny amount of hardware and battery power it has, but it also fails sometimes, like randomly thinking I've been running for a while when I've been doing some other high heart rate thing. Maddeningly, I can't figure out how to turn off some of the alerts, like when I've met my "stair goal" for the day. 

3lsusr
Only eating with a fork. A full system would require more data than that. We tested on real people in real-world conditions who were not part of the training dataset. If someone ate in a different style we could add just a little bit of annotated training data for the eating style, run the toolchain overnight, and the algorithm would be noticeably better for that person and everyone else. The reason why I'm so confident in our algorithm was because ① it required very little data to do updates and ② I had lots of experience in the field, which meant I knew exactly what quality level was and wasn't acceptable to customers. To update the code in response to user feedback we would have to push the new code. Building an update system was theoretically straightforward. It was a (theoretically) solved problem with little technical risk. But it was not a problem that we had personally built a toolchain for, and the whole firmware update system involved more technical maintenance than I wanted to commit myself to.