tailcalled
Sequences: Linear Diffusion of Sparse Lognormals: Causal Inference Against Scientism
Comments, sorted by newest
What Are Probabilities, Anyway?
tailcalled · 1d · 20

"Probabilities" are a mathematical construct that can be used to represent multiple things, but in Bayesianism the first option is the most common.

> Which world gets to be real seems arbitrary.

It's the one observations come from.

> Most possible worlds are lifeless, so we’d have to be really lucky to be alive.

Typically probabilistic models only represent a fragment of the world, and therefore might e.g. implicitly assume that all worlds are lived-in. The real world has life, so it's ok to assume we're not in a lifeless world.

> We have no information about the process that determines which world gets to be real, so how can we decide what the probability mass function p should be?

Often you need some additional properties, e.g. ergodicity or exchangeability, which might be justified by separation of scales and symmetry and stuff.

P represents your uncertainty over worlds, so there's no "right" P (except, in a sense, the one that assigns 100% to the real world). You just gotta do your best.
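
To make the exchangeability example concrete (a standard textbook statement, not anything specific to this thread): de Finetti's theorem turns a pure symmetry assumption on the joint distribution into a Bayesian mixture over a latent parameter, which is one way of getting a "justified" P without knowing anything about which world gets to be real. An infinite binary sequence $X_1, X_2, \dots$ is exchangeable if for every $n$ and every permutation $\sigma$ of $\{1, \dots, n\}$

$$P(X_1 = x_1, \dots, X_n = x_n) = P(X_1 = x_{\sigma(1)}, \dots, X_n = x_{\sigma(n)}),$$

and de Finetti's theorem then guarantees a mixing measure $\mu$ on $[0,1]$ such that

$$P(X_1 = x_1, \dots, X_n = x_n) = \int_0^1 \theta^{\sum_i x_i} (1-\theta)^{n - \sum_i x_i} \, d\mu(\theta).$$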

Tomás B.'s Shortform
tailcalled · 6d · 43

My impression is that health problems reduce height but height also causes health problems (even in the normal range of height, e.g. higher cancer risk). I'd be surprised if height was causally healthy.

I ate bear fat with honey and salt flakes, to prove a point
tailcalled · 7d · 113

Putting it on bread and crackers seems like it dilutes it. Is it still good on its own?

Major survey on the HS/TS spectrum and gAyGP
tailcalled · 13d · 30

By "gaygp victim", do you mean that you are gay and AGP? Or...?

Non-copyability as a security feature
tailcalled · 22d* · 20

That's not really possible, though as a superficial approximation you could keep the weights secret and refuse to run the model beyond a certain scale. Doing so would just make the AI less useful, though, so the people who don't do that would win in the marketplace.

Non-copyability as a security feature
tailcalled · 1mo · 20

I'm not sure I understand your question. By AI companies "making copying hard enough", I assume you mean making AIs not leak secrets from their prompt/training (or other conditioning). It seems true to me that this will raise the relevance of AI in society. Whether this increase is hard-alignment-problem-complete seems to depend on other background assumptions not discussed here.

Generalization and the Multiple Stage Fallacy?
Answer by tailcalled · Oct 07, 2025 · 62

The neural tangent kernel[1] provides an intuitive story for how neural networks generalize: a gradient update on a datapoint will shift similar (as measured by the hidden activations of the NN) datapoints in a similar way.
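
A minimal sketch of that intuition, using a toy numpy network of my own (illustrative only, not anything from the NTK literature): to first order, one SGD step on a training point moves the prediction on a second point by the learning rate, times the loss gradient, times the inner product of the two points' parameter gradients, which is exactly the empirical NTK entry between them.

```python
# Sketch: a gradient step on x_train shifts the prediction at x_other
# in proportion to the empirical NTK K(x_other, x_train).
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network with scalar output; parameters are (W1, b1, w2).
W1 = rng.normal(size=(8, 3)) / np.sqrt(3)
b1 = np.zeros(8)
w2 = rng.normal(size=8) / np.sqrt(8)

def forward(x):
    return w2 @ np.tanh(W1 @ x + b1)

def grads(x):
    # Gradients of the scalar output w.r.t. each parameter, flattened.
    h = np.tanh(W1 @ x + b1)
    dh = 1 - h**2                  # tanh'
    dW1 = np.outer(w2 * dh, x)     # df/dW1
    db1 = w2 * dh                  # df/db1
    dw2 = h                        # df/dw2
    return np.concatenate([dW1.ravel(), db1, dw2])

x_train, y_train = rng.normal(size=3), 1.0
x_other = rng.normal(size=3)

g_train, g_other = grads(x_train), grads(x_other)
ntk = g_other @ g_train            # empirical NTK K(x_other, x_train)

lr = 1e-3
residual = forward(x_train) - y_train          # gradient of 0.5*(f - y)^2 w.r.t. f

# First-order prediction of how the *other* point's output moves after one SGD step:
predicted_shift = -lr * ntk * residual

# Take the actual step and measure the change.
theta = np.concatenate([W1.ravel(), b1, w2]) - lr * residual * g_train
W1_new, b1_new, w2_new = theta[:24].reshape(8, 3), theta[24:32], theta[32:]
actual_shift = (w2_new @ np.tanh(W1_new @ x_other + b1_new)) - forward(x_other)

print(predicted_shift, actual_shift)  # should agree up to O(lr^2)
```

The two printed numbers agree up to higher-order corrections in the learning rate, which is the sense in which "similar parameter gradients" translates into "shifted in a similar way" by each update.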

The vast majority of LLM capabilities still arise from mimicking human choices in particular circumstances. This gives you a substantial amount of alignment "for free" (since you don't have to worry that the LLMs will grab excess power when humans don't), but it also limits you to ~human-level capabilities.

"Gradualism" can mean that fundamentally novel methods only make incremental progress on outcomes, but in most people's imagination I think it rather means that people will keep the human-mimicking capabilities generator as the source of progress, mainly focusing on scaling it up instead of on deriving capabilities by other means.

  1. ^

    Maybe I should be cautious about invoking this without linking to a comprehensible explanation of what it means, since most resources on it are kind of involved...

tailcalled's Shortform
tailcalled · 2mo · 20

Once you focus on "parts" of the brain, you're restricting consideration to mechanisms that are activated at sufficient scale to need to balloon up. I would expect the rarely-activating mechanisms to be much smaller in a physical sense than "parts" of the brain are.

tailcalled's Shortform
tailcalled · 2mo · 20

Idk, the shift happened a while ago. Maybe mostly just reflecting on how evolution acts on a holistic scale, making it easy to incorporate "gradients" from events that occur only one or a few times in one's lifetime, if these events have enough effect on survival/reproduction. Part of a bigger change in priors towards the relevance of long tails associated with my LDSL sequence.

tailcalled's Shortform
tailcalled · 2mo · 71

I've switched from considering uploading to be obviously possible at sufficient technological advancement to considering it probably intractable. More specifically, I expect the mind to be importantly shaped by a lot of rarely-activating mechanisms, which are intractable to map out. You could probably eventually make a sort of "zombie upload" that ignores those mechanisms, but it would be unable to update to new extreme conditions.

Posts

22 · Major survey on the HS/TS spectrum and gAyGP · 15d · 3
16 · Non-copyability as a security feature · 1mo · 4
17 · AI development as the first fully-automated job · 3mo · 4
-17 · Against Infrabayesianism · 4mo · 4
31 · Knocking Down My AI Optimist Strawman · 9mo · 3
14 · My Mental Model of AI Optimist Opinions · 9mo · 7
23 · Evolution's selection target depends on your weighting · 1y · 22
43 · Empathy/Systemizing Quotient is a poor/biased model for the autism/sex link · 1y · 0
12 · Binary encoding as a simple explicit construction for superposition · 1y · 0
12 · Rationalist Gnosticism · 1y · 12