Elliot Callender

Comments

How much would you say (3) supports (1) on your model? I'm still pretty new to AIS and am updating from your model.

I agree that marginal improvements are good for fields like medicine, and perhaps so too for AIS. E.g. I can imagine self-other overlap scaling to near-ASI, though I'm doubtful about stability under reflection. I'd put 35% on us finding a semi-robust solution sufficient to not kill everyone.

Given my model, I think 20% generalizability is worth a person's time. Given yours, I'd say 1% is enough.

I think the distribution of success probabilities for typical optimal-from-our-perspective solutions is very wide under both of the ways we describe generalizability; within that, we should weight generalizability more heavily than my understanding of your model does.

Earlier:

Designing only best-worst-case subproblem solutions while waiting for Alice would be like restricting strategies in a game to ones agnostic to the opponent's moves

Is this saying people should coordinate in case valuable solutions aren't in the a priori generalizable space?

I strongly think cancer research has a huge space and can't think of anything more difficult within biology.

I was being careless / unreflective about the size of the cancer solution space by splitting the solution spaces of alignment and cancer differently; I also don't know enough about cancer to make such claims. I split the space into immunotherapies, things which target epigenetics / stem cells, and "other", where in retrospect the latter probably contains the optimal solution. This groups many small problems with possibly weakly-general solutions into a "bottleneck", as you mentioned:

aging may be a general factor to many diseases, but research into many of the things aging relates to is composed of solving many small problems that do not directly relate to aging, and defining solving aging as a bottleneck problem and judging generalizability with respect to it doesn't seem useful.

Later:

Define the baseline distribution generalizability is defined on.

For a given problem, generalizability is how likely a given sub-solution is to be part of the final solution, assuming you solve the whole problem. You might choose to model expected utility, if that differs between full solutions; I chose not to here because I natively separate generality from power.
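A toy numerical sketch of this definition (the solution names and probabilities below are entirely made up for illustration): a sub-solution's generalizability is its probability of appearing in the final solution, marginalized over candidate full solutions.

```python
# Toy sketch: generalizability of a sub-solution as the probability it
# appears in the final solution, given that the whole problem gets solved.
# All names and numbers are invented for illustration.

# Hypothetical full solutions and their relative plausibility (sums to 1).
full_solutions = {"A": 0.5, "B": 0.3, "C": 0.2}

# Which sub-solutions each full solution contains (hypothetical).
contains = {
    "A": {"interp", "oversight"},
    "B": {"interp"},
    "C": {"oversight"},
}

def generalizability(sub):
    """P(sub-solution is part of the final solution | problem is solved)."""
    return sum(p for name, p in full_solutions.items() if sub in contains[name])

# "interp" appears in A and B, so its generalizability is 0.5 + 0.3.
assert abs(generalizability("interp") - 0.8) < 1e-9
assert abs(generalizability("oversight") - 0.7) < 1e-9
```

Expected utility, if you wanted it, would just weight each term by that full solution's payoff instead of treating them all equally.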

Give a little intuition about why a threshold is meaningful, rather than a linear "more general is better".

I agree that "more general is better" with a linear or slightly superlinear (because you can make plans which rely heavier on solution) association with success probability. We were already making different value statements about "weakly" vs "strongly" general, where putting concrete probabilities / ranges might reveal us to agree w.r.t the baseline distribution of generalizability and disagree only on semantics.

I.e. thresholds are only useful for communication.

Perhaps a better way to frame this is in ratios of tractability (how hard the solution is to identify and solve) and usefulness (conditional on the solution working) between solutions with different levels of generalizability. E.g. suppose some solution (call it A) is 5x less general than another (B). Then you expect, for the types of problems and solutions humans encounter, that A will be more than 5x as tractable × useful as B.

I disagree in expectation, meaning for now I target most of my search at general solutions.
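To make the ratio test concrete, here is a minimal sketch with invented numbers (nothing below comes from the discussion except the 5x figure; the labels A and B are hypothetical):

```python
# Toy EV comparison under the tractability-ratio framing. All numbers
# are hypothetical; "usefulness" is conditional on the solution working.

def expected_value(generality, tractability, usefulness):
    # generality:   P(sub-solution is part of the final solution)
    # tractability: P(you can identify and solve it)
    # usefulness:   payoff conditional on it working
    return generality * tractability * usefulness

# A is 5x less general than B, but 8x as (tractable * useful):
ev_a = expected_value(generality=0.04, tractability=0.9, usefulness=8.0)
ev_b = expected_value(generality=0.20, tractability=0.3, usefulness=3.0)

# Under this framing, A beats B exactly when its tractability * usefulness
# advantage (here (0.9 * 8.0) / (0.3 * 3.0) = 8x) exceeds its 5x
# generality disadvantage.
assert ev_a > ev_b
```

Disagreeing "in expectation" then amounts to believing that, for typical problem/solution pairs, the tractability × usefulness advantage of the less general option usually falls short of its generality disadvantage.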

 

My model of the central AIS problems:

  1. How to make some AI do what we want? (under immense functionally adversarial pressures)
    1. Why does the AI do things? (Abstractions / context-dependent heuristics; how do agents split reality given assumptions about training / architecture)
    2. How do we change those things-which-cause-AI-behavior?
  2. How do we use behavior specification to maximize our lightcone?
    1. How to actually get technical alignment into a capable AI? (AI labs / governments)
    2. What do we want the AI to do? ("Long reflection" / CEV / other)

I'd be extremely interested to hear anyone's take on my model of the central problems.

I think general solutions are especially important for fields with big solution spaces / few researchers, like alignment. If you were optimizing for, say, curing cancer, it might be different (I think both the paradigm- and subproblem-spaces are smaller there).

From my reading of John Wentworth's Framing Practicum sequence, implicit in his (and my) model is that solution spaces for these sorts of problems are a priori enormous. We (you and I) might also disagree on what a priori feasibility would count as "weakly" vs. "strongly" generalizable; I think my transition is around 15-30%.

Shoot, thanks. Hopefully it's clearer now.

Yes, I agree. I expect abstractions, typically, to involve much more than 4-8 bits of information. On my model, any neural network, be it MLP, KAN or something new, will approximate abstractions with multiple nodes in parallel when the network is wide enough. I.e. the causal graph I mentioned is very distinct from the NN which might be running it.

Though now that you mentioned it, I wonder if low-precision NN weights are acceptable because of some network property (maybe SGD is so stochastic that higher precision doesn't help) or the environment (maybe natural latents tend to be lower-entropy)?

Anyways, thanks for engaging. It's encouraging to see someone comment.

Answer by Elliot Callender

This one was a lot of fun!

  1. ROS activity in some region of the body is a function of antioxidant bioavailability, heat, and oxidant bioavailability. I imagine this relationship is the inverse of some chemical rate laws, i.e. dependent on which antioxidants we're looking at. But since I expect most antioxidants to work as individual molecules, the relationship is probably ROS ∝ 1 / (potency × concentration), i.e. ROS activity is inverse w.r.t. some antioxidant's potency and concentration if we ignore other antioxidants. The bottom term can also be a sum across all antioxidants, given no synergistic / antagonistic interactions!
  2. Transistor reliability is probably a function of heat, band gap, and voltage? I imagine that, in fact, reliability is hysteretic in terms of band gap and voltage! When the gap is lower, noise can cross more easily, and when it's too high there won't be enough voltage for the signal to pass (without overheating your circuit). And heat increases noise. I think that information transmission might be exponential or Gaussian centered around the optimum, parameterized by heat, band gap, and voltage. Does anyone have an equation for this?
  3. Ant movement speed is probably an equilibrium between evolved energy-conservation priors, available calories, and pheromones. Let's just focus on pheromones which make the ant move faster. Energy (perhaps as E) and pheromones (say, P) are probably each comparable predictors of speed, since I'm imagining material stress of movement to be the main energy sink. I don't know what the evolved frugality priors look like, but expect they can just map onto speed directly without needing the subcomponents E and P, at least as far as big-O notation goes.
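As a sketch of the inverse relationship in (1), with the denominator summed across antioxidants (the functional form, the baseline term, and all constants are my guesses for illustration, not established rate laws):

```python
# Toy model: ROS activity inversely proportional to total antioxidant
# "quenching capacity" (potency * concentration, summed over antioxidants),
# assuming no synergistic / antagonistic interactions. Values invented.

def ros_activity(oxidant_level, antioxidants, baseline=1.0):
    """antioxidants: iterable of (potency, concentration) pairs."""
    quenching = sum(p * c for p, c in antioxidants)
    return oxidant_level / (baseline + quenching)

one_antioxidant = ros_activity(10.0, [(2.0, 1.0)])               # 10 / 3
two_antioxidants = ros_activity(10.0, [(2.0, 1.0), (1.0, 3.0)])  # 10 / 6
assert two_antioxidants < one_antioxidant  # more quenching -> less ROS
```

The `baseline` term just keeps the expression finite at zero antioxidant concentration; dropping it recovers the pure inverse relationship.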
Answer by Elliot Callender
  1. Sleep / wakefulness; hypnagogia seems transient and requires conscious effort to maintain. Outside stimuli and internal volition can wake people up; lack thereof can do the opposite.
  2. Friendships; I tend to have few, close friendships. I don't interact much with more distant friends because it's less emotionally fulfilling, so they slowly fade towards being acquaintances. I distance myself from people I don't connect with / feel safe around, and try to strengthen bonds with people I think are emotionally mature and interesting.
  3. Focus; I tend to either be checked out or deeply zoned-in. There's strong momentum here, especially for cognitively engaging tasks. Anything which I expect to impair my work will push me into "maintenance" mode, where I conserve energy and do less object-level work. This takes engagement with interesting stuff plus willed focus to recover from.
Answer by Elliot Callender

I know this post is old(ish), but still think this exercise is worth doing!

  1. Deep ocean currents; I expect changes in ocean floor topography and deep-water inertial/thermal changes to matter. I don't expect shallow-water topography to matter, nor wind (unless we have sustained 300+kph winds for weeks straight).
  2. Earth's magnetic pole directions; I'm not sure what causes them. I think they're generated by induction from magma movement? In that case, our knobs are those currents. I don't think anything can change the equilibrium without changing the flow patterns, minus stuff like magma composition which can eliminate magnetism.
  3. Tourism to, say, Tokyo; the following factors are both compared to other destinations and just Tokyo, and don't span our knob-space. Public opinion and salience, travel costs (time and money), hotel availability, and number of people who speak Japanese. I think that if we know these, most other markets become rounding errors, though I wouldn't be too sure.

I agree that this seems like a very promising direction.

Beyond that, we of course want our class of random variables to be reasonably general and cognitively plausible as an approximation - e.g. we shouldn’t assume some specific parametric form.

Could you elaborate on this? "Reasonably general" sounds to me like the redundancy axiom, so I'm unclear on whether this sentence is an intuition pump.

I think it depends on which domain you're delegating in. E.g. physical objects, especially complex systems like an AC unit, are plausibly much harder to validate than a mathematical proof.

In that vein, I wonder if requiring the AI to construct a validation proof would be feasible for alignment delegation? In that case, I'd expect us to find more use and safety from [ETA: delegation of] theoretical work than empirical.
