My impression is that health problems reduce height but height also causes health problems (even in the normal range of height, e.g. higher cancer risk). I'd be surprised if height was causally healthy.
Putting it on bread and crackers seems like it dilutes it. Is it still good on its own?
By "gaygp victim", do you mean that you are gay and AGP? Or...?
That's not really possible, though as a superficial approximation you could keep the weights secret and refuse to run the AI beyond a certain scale. Doing so would just make the AI less useful, though, and so the people who don't do that would win in the marketplace.
I'm not sure I understand your question. By AI companies "making copying hard enough", I assume you mean making AIs not leak secrets from their prompt/training (or other conditioning). It seems true to me that this will raise the relevance of AI in society. Whether this increase is hard-alignment-problem-complete seems to depend on other background assumptions not discussed here.
The neural tangent kernel[1] provides an intuitive story for how neural networks generalize: a gradient update on a datapoint will shift similar (as measured by the hidden activations of the NN) datapoints in a similar way.
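To make that intuition concrete, here is a minimal sketch (my own toy example, not from the comment or the NTK literature): a tiny two-layer network in JAX with made-up inputs `x1`, `x2`, `x3`. After one gradient step on `x1`, the first-order change in the output on any other point is predicted by the tangent-kernel value, i.e. the inner product of parameter gradients, which in turn reflects how similar the hidden activations (and inputs) are.

```python
import jax
import jax.numpy as jnp

def init_params(key, d_in=3, d_hidden=16):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (d_in, d_hidden)) / jnp.sqrt(d_in),
        "w2": jax.random.normal(k2, (d_hidden, 1)) / jnp.sqrt(d_hidden),
    }

def f(params, x):
    h = jnp.tanh(x @ params["w1"])       # hidden activations
    return (h @ params["w2"]).squeeze()  # scalar output

key = jax.random.PRNGKey(0)
params = init_params(key)

x1 = jnp.array([1.0, 0.5, -0.2])   # point we take a gradient step on
x2 = jnp.array([0.9, 0.6, -0.1])   # a nearby point (in input space)
x3 = jnp.array([-1.0, 2.0, 3.0])   # a far-away point (in input space)

grad_f = jax.grad(f)  # gradient of the output w.r.t. the parameters

def flatten(tree):
    return jnp.concatenate([v.ravel() for v in jax.tree_util.tree_leaves(tree)])

def ntk(xa, xb):
    # Tangent-kernel similarity: inner product of parameter gradients.
    return flatten(grad_f(params, xa)) @ flatten(grad_f(params, xb))

# One gradient-ascent step that pushes f(x1) up (ascent on the raw output,
# rather than descent on a loss, just to keep the example short).
lr = 0.01
g1 = grad_f(params, x1)
new_params = jax.tree_util.tree_map(lambda p, dp: p + lr * dp, params, g1)

for name, x in [("x2", x2), ("x3", x3)]:
    actual = float(f(new_params, x) - f(params, x))
    predicted = float(lr * ntk(x1, x))  # first-order prediction from the kernel
    print(f"{name}: actual shift {actual:+.5f}, kernel prediction {predicted:+.5f}")
```

The kernel value is only a first-order prediction, but at small learning rates the actual shift on each point should closely track it; the point of the sketch is that similarity under the tangent kernel, not raw input distance per se, is what determines how much a datapoint moves.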
The vast majority of LLM capabilities still arise from mimicking human choices in particular circumstances. This gives you a substantial amount of alignment "for free" (since you don't have to worry that the LLMs will grab excess power in situations where humans wouldn't), but it also limits you to ~human-level capabilities.
"Gradualism" can mean that fundamentally novel methods only make incremental progress on outcomes, but in most people's imagination I think it rather means that people will keep the human-mimicking capabilities generator as the source of progress, mainly focusing on scaling it up instead of on deriving capabilities by other means.
Maybe I should be cautious about invoking this without linking to a comprehensible explanation of what it means, since most resources on it are kind of involved...
Once you focus on "parts" of the brain, you're restricting consideration to mechanisms that are activated at sufficient scale to need to balloon up. I would expect the rarely-activating mechanisms to be much smaller in a physical sense than "parts" of the brain are
Idk, the shift happened a while ago. Maybe mostly just reflecting on how evolution acts on a holistic scale, making it easy to incorporate "gradients" from events that occur only one or a few times in one's lifetime, if these events have enough effect on survival/reproduction. Part of a bigger change in priors towards the relevance of long tails associated with my LDSL sequence.
I've switched from considering uploading to be obviously possible at sufficient technological advancement to considering it probably intractable. More specifically, I expect the mind to be importantly shaped by a lot of rarely-activating mechanisms, which are intractable to map out. You could probably eventually make a sort of "zombie upload" that ignores those mechanisms, but it would be unable to update to new extreme conditions.
"Probabilities" are a mathematical construct that can be used to represent multiple things, but in Bayesianism the first option is the most common.
It's the one observations come from.
Typically probabilistic models only represent a fragment of the world, and therefore might e.g. implicitly assume that all worlds are lived-in. The real world has life so it's ok to assume we're not in a lifeless world.
Often you need some additional properties, e.g. ergodicity or exchangeability, which might be justified by separation of scales and symmetry and stuff.
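As an illustration of what exchangeability buys you (just the standard textbook statement of de Finetti's theorem, not something from the comment above): an infinite exchangeable sequence is distributionally a mixture of i.i.d. sequences, i.e. i.i.d. given an unknown parameter you have a prior over.

```latex
% De Finetti's representation theorem (standard form, for an infinite
% exchangeable sequence X_1, X_2, ... on a nice space): there is a prior
% \mu over parameters \theta such that, for every n and all events A_i,
\[
  P(X_1 \in A_1, \dots, X_n \in A_n)
  \;=\; \int \prod_{i=1}^{n} P_\theta(A_i) \, \mathrm{d}\mu(\theta).
\]
```

This is the sense in which a symmetry assumption about your uncertainty licenses treating the data as draws from a fixed but unknown distribution that you can then update on.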
P represents your uncertainty over worlds, so there's no "right" P (except the one that assigns 100% to the real world, in a sense). You just gotta do your best.