Comments

janos

Is there a reason to think this problem is less amenable to being solved by complexity priors than other learning problems? / Might we build an unaligned agent competent enough to be problematic without solving problems similar to this one?

janos

What is Mathematics? by Courant and Robbins is a classic exploration that goes reasonably deep into most areas of math.

janos

This makes me think of two very different things.

One is informational containment, ie how to run an AGI in a simulated environment that reveals nothing about the system it's simulated on; this is a technical challenge, and if interpreted very strictly (via algorithmic complexity arguments about how improbable our universe is likely to be under something like a Solomonoff prior), it is very constraining.
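
(As a gloss, using the standard formulation rather than anything specific to this comment: a Solomonoff-style prior weights an observation string $x$ by the total weight of the programs that produce it on a universal monotone machine $U$,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

so an environment whose shortest description needs $k$ extra bits, say to pin down details of the host system, is penalized by roughly a factor of $2^{-k}$.)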

The other is futurological simulation; here I think the notion of simulation is pointing at a tool, but the idea of using this tool is a very small part of the approach relative to formulating a model with the right sort of moving parts. The latter has been tried with various simple models (eg the thing in Ch 4); more work can be done, but justifying the models and priors will be difficult.

janos

Certainly, interventions may be available, just as for anything else; but it's not fundamentally more accessible or malleable than other things.

janos

I'm arguing that the fuzzy-ish definition that corresponds to our everyday experience/usage is better than the crisp one that doesn't.

Re IQ and "way of thinking", I'm arguing they both affect each other, but neither is entirely under conscious control, so it's a bit of a moot point.

Apropos the original point, under my usual circumstances (not malnourished, hanging out with smart people, reading and thinking about engaging, complex things that can be analyzed and have reasonable success measures, etc), my IQ is mostly not under my control. (Perhaps if I were more focused on measurements, nootropics, and getting enough sleep, I could increase my IQ a bit; but not very much, I think.) YMMV.

janos

I think what you're saying is that if we want a coherent, nontrivial definition of "under our control" then the most natural one is "everything that depends on the neural signals from your brain". But this definition, while relatively clean from the outside, doesn't correspond to what we ordinarily mean; for example, if you have a mental illness, this would suggest that "stop having that illness!!" is reasonable advice, because your illness is "under your control".

I don't know enough neuroscience to give this a physical backing, but there are certain conscious decisions or mental moves that feel like they're very much under my control, and I'd say the things under my control are just those, plus the things I can reliably affect using them. I think the correct intuitive definition of "locus of control" is "those things you can do if you want to".

Regarding causal arrows between your IQ and your thoughts, I don't think this is a well-defined query. Causality is entirely about hypothetical interventions; to say "your way of thinking affects your IQ" is just to say that if I were to change your way of thinking, I could change your IQ.

But how would I change your way of thinking? There has to be an understanding of what is being held constant, or of what range of changes we're talking about. For instance, we could change your way of thinking to any that you'd likely reach from different future influences, or to any that people similar to you have had, etc. Normally what we care about is the sort of intervention that we could actually do or draw predictions from, so the first one here is what we mean. And to some degree it's true: your IQ would be changed.

From the other end, what does it mean to say your way of thinking is affected by your IQ? It means if we were to "modify your IQ" without doing anything else to affect your thinking, then your way of thinking would be altered. This seems true, though hard to pin down, since IQ is normally thought of as a scalar, rather than a whole range of phenomena like your "way of thinking". IQ is sort of an amalgam of different abilities and qualities, so if we look closely enough we'll find that IQ can't directly affect anything at all, similarly to how g can't ("it wasn't your IQ that helped you come up with those ideas, it was your working memory, and creativity, and visualization ability!"); but on the other hand if most things that increase IQ make the same sort of difference (eg to academic success) then it's fairly compact and useful to say that IQ affects those things.
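
To make the intervention reading concrete, here is a toy sketch (the variables and numbers are arbitrary illustrations, not a claim about the real causal structure): a structural model in which a background factor drives both "way of thinking" and IQ, and an intervention simply overrides the thinking variable; "thinking affects IQ" then just means the intervened IQ distribution shifts.

```python
import random

# Toy structural model (illustrative only): a shared background factor
# drives both "way of thinking" and IQ, and thinking also feeds into IQ.
def sample_iq(do_thinking=None):
    background = random.gauss(0, 1)  # stand-in for genes, upbringing, etc.
    if do_thinking is None:
        thinking = background + random.gauss(0, 0.5)  # observational case
    else:
        thinking = do_thinking  # intervention: set the variable, ignoring its usual causes
    return 100 + 10 * background + 5 * thinking + random.gauss(0, 3)

# "Thinking affects IQ" = intervening on `thinking` shifts the IQ distribution.
baseline = [sample_iq() for _ in range(10_000)]
intervened = [sample_iq(do_thinking=2.0) for _ in range(10_000)]
print(sum(baseline) / len(baseline), sum(intervened) / len(intervened))
```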

Causality with fuzzy concepts is tricky.

janos

March 2nd isn't a Tuesday; is it Monday night or Tuesday night?

janos

If you want to discuss the nature of reality using a similar lexicon to what philosophers use, I recommend consulting the Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/

janos

Musk has joined the advisory boards of FLI and CSER, which are younger sibling orgs of FHI and MIRI. He's aware of the AI x-risk community.

janos

Cool. Regarding bounded utility functions, I didn't mean you personally, but the generic you; as you can see elsewhere in the thread, some people do find it rather strange to think of modelling what you actually want as a bounded utility function.

This is where I thought you were missing the point:

"Or you might say it's a suboptimal outcome because you just know that this allocation is bad, or something. Which amounts to saying that actually you know what the utility function should be and it isn't the one the analysis assumes."

Sometimes we (seem to) have stronger intuitions about allocations than about the utility function itself, and parlaying that to identify what the utility function should be is what this post is about. This may seem like a non-step to you; in that case you've already got it. Cheers! I admit it's not a difficult point. Or if you always have stronger intuitions about the utility function than about resource allocation, then maybe this is useless to you.

I agree with you that there are some situations where the sublinear allocation (and exponentially-converging utility function) seems wrong and some where it seems fine; perhaps the post should initially have said "person-enjoying-chocolate-tronium" rather than chocolate.
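
As a toy version of the kind of allocation I have in mind (not necessarily the post's exact setup): split a budget $C$ between two scenarios with probabilities $p_1$ and $p_2$, with the exponentially-converging utility $u(x) = 1 - e^{-x}$ in each. At an interior optimum,

$$\max_{x_1 + x_2 = C} \; p_1\bigl(1 - e^{-x_1}\bigr) + p_2\bigl(1 - e^{-x_2}\bigr) \quad\Longrightarrow\quad p_1 e^{-x_1} = p_2 e^{-x_2} \quad\Longrightarrow\quad x_1 = \frac{C}{2} + \frac{1}{2}\ln\frac{p_1}{p_2},$$

so the favoured scenario's extra share grows only logarithmically in the probability ratio rather than proportionally; that's the sublinearity that feels fine in some situations and wrong in others.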
