Bunthut

Comments

Bunthut10

6 picolightcones as well; I don't think that changed.

Bunthut10

Before logging in I had 200 LW-Bux and 3 virtues. Now I have 50 LW-Bux and 8 virtues, and I didn't do anything. What's that? Is there any explanation of how this stuff works?

Bunthut30

I think your disagreement can be made clear with more formalism. First, the point for your opponents:

When the animals are in a cold place, they are selected for a long fur coat, and also for IGF (and other things as well). To some extent, these are just different ways of describing the same process. Now, if they move to a warmer place, they are selected for shorter fur instead, and they are still selected for IGF. And there's also a more concrete correspondence to this: they have also been selected for "IF cold, long fur, ELSE short fur" the entire time. Notice especially that there are animals actually implementing this dependent property - it can be evolved just fine, in the same way as the simple properties. And in fact, you could "unroll" the concept of IGF into a humongous environment-dependent strategy, which would then always be selected for, because all the environment-dependence is already baked in.

Now, on the other hand, if you train an AI first on one thing and then on another, wouldn't we expect it to get worse at the first again? Indeed, we would also expect a species living in the cold for a very long time to lose its adaptations relevant to heat. The reason for this, in both cases, is, broadly speaking, limits and penalties on complexity. I'm not sure very many people would have bought the argument in the previous paragraph - we all know unused genetic code decays over time. But the behavioral/cognitive version, with an agent intentionally maximizing IGF, makes it easy to ignore the problem, because we're not used to remembering the physical correlates of thinking. Of course, a dragonfly couldn't explicitly maximize IGF: its brain is too small to even understand what that is, and developing such a brain would demand space and energy incompatible with the general dragonfly life strategy. The costs of cognition are also part of the demands of fitness, and the dragonfly is more fit the way it is. Similarly, I think a human explicitly maximizing IGF would have done worse for most of our evolution[1], because the odds of getting something wrong are just too high at our current expenditure on cognition; better to hardcode some right answers.

I don't share your optimistic conclusion, however. Because the part about selecting for multiple things simultaneously - that's true. You are always selecting for everything that's locally extensionally equivalent to the intended selection criterion. There is no move you could have made in evolution's place to actually select for IGF instead of [various particular things]; this already is what happens when you select for IGF, because it's the complexity, rather than a different intent, that led to the different result[2]. Similarly, reinforcement learning on human values will result in whatever is the simplest[3] way to match human values on the training data.
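
A toy sketch of what I mean by "locally extensionally equivalent" (a minimal model of my own, not anything from your post): as long as two selection criteria agree on the environments that actually occur, selection cannot tell them apart, and so it is selecting "for" both of them.

```python
# Toy model (entirely my own construction): two selection criteria that agree
# on every environment actually encountered are indistinguishable to the
# selection process itself.
import random

def fitness_intended(genome, env):
    # The "intended" criterion: long fur helps when cold, short fur when warm.
    return genome["fur"] if env == "cold" else 1 - genome["fur"]

def fitness_proxy(genome, env):
    # A different criterion that happens to agree on the environments
    # actually seen so far (here: only cold ones).
    return genome["fur"]

random.seed(0)
population = [{"fur": random.random()} for _ in range(1000)]
seen_envs = ["cold"] * 50  # the selection history contains no warm environments

def select(pop, fitness):
    # Keep the fitter half, scored over the environments seen so far.
    scored = sorted(pop, key=lambda g: sum(fitness(g, e) for e in seen_envs),
                    reverse=True)
    return scored[: len(pop) // 2]

# Both criteria pick out exactly the same survivors:
assert select(population, fitness_intended) == select(population, fitness_proxy)
```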

 

  1. ^

    and even today, still might if sperm donations et al weren't possible

  2. ^

    I don't think you've tried to come up with what that different move might look like for evolution, but your argument strongly implies that such moves exist, both for it and for the AI situation.

  3. ^

    in the sense of that architecture

Bunthut110

for AIs, more robust adversarial examples - especially ones that work on AIs trained on different datasets - do seem to look more "reasonable" to humans.

Then I would expect they are also more objectively similar. In any case that finding is strong evidence against manipulative adversarial examples for humans - your argument is basically "there's just this huge mess of neurons, surely somewhere in there is a way", but if the same adversarial examples work on minds with very different architectures, then that's clearly not why they exist. Instead, they have to be explained by some higher-level cognitive factors shared by ~anyone who gets good at interpreting a wide range of visual data.

The really obvious adversarial example of this kind in human is like, cults, or so

Cults use much stronger means than is implied by adversarial examples. For one, they can react to and reinforce your behaviour - is a screen with text promising you things for doing what it wants, with escalating impact and a building track record, an adversarial example? No. It's potentially worrying, but not really distinct from generic powerseeking problems. The cult also controls a much larger fraction of your total sensory input over an extended time. Cult members spreading the cult also use tactics that require very little precision - no information on how to do this is transmitted to them beyond simple verbal instructions. Even if there are more precision-demanding ways of manipulating individuals, it's another thing entirely to manipulate them into repeating high-precision strategies that they couldn't themselves execute correctly on purpose.

if you're not personally familiar with hypnosis

I think I am a little bit. I don't think it means what you think it does. Listening-to-action still requires comprehension of the commands, which is much lower bandwidth than vision, and it's a structure that's specifically there to be controllable by others, so it's not an indication that we are controllable by others in other bizarre ways. And you are deliberately not being so critical - you haven't, actually, been circumvented, and there isn't really a path to escalating power - just the fact that you're willing to obey someone in a specific context. Hypnosis also ends on its own - the brain naturally tends back towards baseline, and implanting a mechanism that keeps itself active indefinitely is high-precision.

Bunthut1-3

Ok, that's mostly what I've heard before. I'm skeptical because:

  1. If something like classical adversarial examples existed for humans, it likely wouldn't have the same effect on different people, or even on the same person viewing it from a different angle, or maybe even in a different mood.
  2. There are no known adversarial examples of the kind you describe for humans. We could tell if we had found them, because we have metrics of "looking similar" which are not based on our intuitive sense of similarity, like pixelwise differences and convolutions (see the sketch after this list). All examples of "easily confused" images I've seen were objectively similar to what they're confused for.
  3. Somewhat similar to what Grayson Chao said, it seems that the influence of vision on behaviour goes through a layer of "it looks like X", which is much lower bandwidth than vision in total. Ads have qualitatively similar effects to what seeing their content actually happen in person would.
  4. If adversarial examples exist, that doesn't mean they exist for making you do anything of the manipulator's choosing. Humans are, in principle, at least as programmable as a computer, but that also means there are vastly more possible courses of action than possible vision inputs. In practice, probably not a lot of high-cognitive-function processing could be commandeered by adversarial inputs, and behaviours complex enough to glitch others couldn't be implemented.
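
For point 2, a minimal sketch of the kind of objective check I have in mind (my own toy construction; the image size and noise level are arbitrary):

```python
# Toy check (my own construction): judge an "adversarial" image by an objective
# distance to the original, not by whether it looks similar to us.
import numpy as np

def pixelwise_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    # Root-mean-square difference per pixel, images as float arrays in [0, 1].
    return float(np.sqrt(np.mean((img_a - img_b) ** 2)))

rng = np.random.default_rng(0)
original = rng.random((32, 32, 3))
perturbed = np.clip(original + rng.normal(0, 0.01, original.shape), 0, 1)

# Classic adversarial examples sit at tiny objective distances like this one;
# a manipulative "human adversarial example" would have to as well, or else it
# is just an ordinary, objectively different, persuasive image.
print(pixelwise_distance(original, perturbed))  # ~0.01
```
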
Bunthut20

I just thought through the causal graphs involved, there's probably enough bandwidth through vision into reliably redundant behavior to do this

Elaborate.

Bunthut20

This isn't my area of expertise, but I think I have a sketch for a very simple weak proof:

The conjecture states that V's runtime and π's length are polynomial in the size of C, but leaves the constant open. Therefore a counterexample would have to be an infinite family of circuits satisfying P(C), with their corresponding π growing faster than polynomially. To prove the existence of such a counterexample, you would need a proof that each member of the family satisfies P(C). But that proof has finite length, and can be used as the π for any member of the family with minor modification. Therefore there can never be a proven counterexample.
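
To make the quantifier structure I'm relying on explicit (my own formalization; I write π for the certificate and p for the polynomial, which may not match the post's notation exactly):

```latex
% My own formalization of the sketch above (notation may differ from the post).
% Conjecture:
\exists V,\, p \;\;\forall C:\quad P(C) \;\Longrightarrow\; \exists \pi\; \bigl( |\pi| \le p(|C|) \;\wedge\; V(C,\pi) = 1 \bigr)
% Candidate counterexample: an infinite family (C_n) with P(C_n) for all n, whose
% shortest accepted certificates grow superpolynomially in |C_n|.
% But a finite proof \Pi of ``\forall n.\ P(C_n)'', checked by a proof-checking V,
% gives certificates \pi_n := (\Pi, n) of essentially constant length,
% so the family could never have been *proven* to be a counterexample.
```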

Or am I misunderstanding something?

Bunthut95

I think the solution to this is to add something to your wealth to account for inalienable human capital, and to count costs only by how much you will actually be forced to pay. This is a good idea in general; otherwise most people with student loans or a mortgage are "in the red" and couldn't use this at all.
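
A toy way to operationalize this, with made-up numbers; "human capital" and "forced repayment" are my labels, not terms from the post:

```python
# Illustrative sketch (all numbers mine): the wealth you plug into a
# fraction-of-wealth rule should include inalienable human capital, and should
# count debts only at what you can actually be forced to repay.

savings = 10_000
student_loans = 40_000          # nominal debt
forced_repayment = 20_000       # what you could realistically be made to pay
human_capital = 300_000         # inalienable: future earnings nobody can seize

naive_wealth = savings - student_loans
effective_wealth = savings - forced_repayment + human_capital

print(naive_wealth)      # -30000: "in the red", the rule is unusable
print(effective_wealth)  # 290000: a workable base for a fraction-of-wealth rule
```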

Bunthut10

What are real numbers, then? On the standard account, real numbers are equivalence classes of Cauchy sequences of rationals, the finite diagonals being one such sequence. I mean, "Real numbers don't exist" is one way to avoid the diagonal argument, but I don't think that's what cubefox is going for.
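
Spelled out a bit (my own notation), writing d1, d2, ... for the digits of the diagonal number:

```latex
% Standard construction: \mathbb{R} = \{\text{Cauchy sequences of rationals}\} / \sim,
% where (a_n) \sim (b_n) \iff \lim_{n\to\infty}(a_n - b_n) = 0.
% The diagonal number is represented by the sequence of its finite truncations,
% each of which is rational:
a_1 = 0.d_1,\quad a_2 = 0.d_1 d_2,\quad a_3 = 0.d_1 d_2 d_3,\quad \ldots
% This sequence is Cauchy (|a_m - a_n| \le 10^{-\min(m,n)}), so its equivalence
% class is a perfectly good real number.
```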

Bunthut10

The society’s stance towards crime- preventing it via the threat of punishment- is not what would work on smarter people

This is one of two claims here that I'm not convinced by. Informal disproof: if you are a smart individual in today's society, you shouldn't ignore threats of punishment, because it is in the state's interest to follow through anyway, to encourage the others. If crime prevention is in people's interest, intelligence monotonicity implies that a smart population should be able to make punishment work at least this well. Now, I don't trust intelligence monotonicity, but I don't trust its negation either.

The second one is:

You can already foresee the part where you're going to be asked to play this game for longer, until fewer offers get rejected, as people learn to converge on a shared idea of what is fair.

Should you update your idea of fairness if you get rejected often? It's not clear to me that that doesn't make you exploitable again. And I think this is very important to your claim about not burning utility: in the case of the ultimatum game, Eliezer's strategy burns very little over a reasonable-seeming range of fairness ideals, but in the complex, high-dimensional action spaces of the real world, it could easily be almost as bad as never giving in, if there's no updating.
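
To put rough numbers on that worry (my own toy model; the acceptance rule below is my paraphrase of the strategy, so treat it as a sketch rather than an exact restatement): with a $10 pie, a responder who thinks 5/5 is fair, and no updating, the burn is small only while the proposer's fairness ideal stays close to the responder's.

```python
# Toy model of the probabilistic-acceptance rule, as I understand it: if offered
# less than you consider fair, accept with just enough probability that the
# proposer can't profit from lowballing you.

PIE = 10.0

def acceptance_prob(offer, responder_fair_share):
    if offer >= responder_fair_share:
        return 1.0
    # Cap the proposer's expected take (PIE - offer) * p at what they'd get
    # by offering the responder's idea of a fair split.
    return (PIE - responder_fair_share) / (PIE - offer)

def expected_burn(offer, responder_fair_share):
    # Utility destroyed in expectation when the offer gets rejected.
    p = acceptance_prob(offer, responder_fair_share)
    return (1.0 - p) * PIE

for offer in [5.0, 4.5, 4.0, 3.0]:
    print(offer, round(expected_burn(offer, 5.0), 2))
# 5.0 -> 0.0, 4.5 -> 0.91, 4.0 -> 1.67, 3.0 -> 2.86:
# zero burn at agreement, growing as the fairness ideals diverge.
```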
