What is Mathematics? by Courant and Robbins is a classic exploration that goes reasonably deep into most areas of math.
Occasionally in this crew, people discuss the idea of computer simulations of the introduction of an AGI into our world. Such simulations could use advanced technology, but significant progress could be made even if the simulations did not themselves involve an AGI.
I would like to hear how people might flesh out that research direction. I am not completely against trying to prove theorems about formal systems; it's just that the simulation direction is perfectly good virgin research territory. If we made progress along that path, it would also be much easier to explain.
This makes me think of two very different things.
One is informational containment, ie how to run an AGI in a simulated environment that reveals nothing about the system it's simulated on; this is a technical challenge, and if interpreted very strictly (via algorithmic complexity arguments about how improbable our universe is likely to be in something like a Solomonoff prior), is very constraining.
The other is futurological simulation; here I think the notion of simulation is pointing at a tool, but the idea of using this tool is a very small part of the approach relative to formulating a model with the right sort of moving parts. The latter has been tried with various simple models (e.g. the thing in Ch. 4); more work can be done, but justifying the models and priors will be difficult.
Uh... "stop having that illness!" is reasonable advice. Seek help. Try medication. Enter into psychotherapy. I'm not sure what you are objecting to there?
Certainly, interventions may be available, just as for anything else; but it's not fundamentally more accessible or malleable than other things.
Well, you're right that in the mental-illness case my definition works badly, but I can't think of a better precise definition right now (can you?); probably something like selecting a specific "sub-process" in the brain which is related to the conscious experience, but that's fuzzy, and I'm not even sure such a separation is possible.
I think the correct intuitive definition of "locus of control" is "those things you can do if you want to".
I have a feeling that it is a rephrasing of "things under your control".
Causality is entirely about hypothetical interventions; to say "your way of thinking affects your IQ" is just to say that if I was to change your way of thinking, I could change your IQ.
Actually, I'm arguing that the causal arrows point in the opposite direction: if I was to change your IQ, I could change your way of thinking. The rest of the article is about what happens if we assume IQ is fixed (which somehow resembles Bayesian inference).
I'm arguing that the fuzzy-ish definition that corresponds to our everyday experience/usage is better than the crisp one that doesn't.
Re IQ and "way of thinking", I'm arguing they both affect each other, but neither is entirely under conscious control, so it's a bit of a moot point.
Apropos the original point, under my usual circumstances (not malnourished, hanging out with smart people, reading and thinking about engaging, complex things that can be analyzed and have reasonable success measures, etc), my IQ is mostly not under my control. (Perhaps if I was more focused on measurements, nootropics, and getting enough sleep, I could increase my IQ a bit; but not very much, I think.) YMMV.
I think what you're saying is that if we want a coherent, nontrivial definition of "under our control" then the most natural one is "everything that depends on the neural signals from your brain". But this definition, while relatively clean from the outside, doesn't correspond to what we ordinarily mean; for example, if you have a mental illness, this would suggest that "stop having that illness!!" is reasonable advice, because your illness is "under your control".
I don't know enough neuroscience to give this a physical backing, but there are certain conscious decisions or mental moves that feel like they're very much under my control, and I'd say the things under my control are just those, plus the things I can reliably affect using them. I think the correct intuitive definition of "locus of control" is "those things you can do if you want to".
Regarding causal arrows between your IQ and your thoughts, I don't think this is a well-defined query. Causality is entirely about hypothetical interventions; to say "your way of thinking affects your IQ" is just to say that if I was to change your way of thinking, I could change your IQ.
But how would I change your way of thinking? There has to be an understanding of what is being held constant, or of what range of changes we're talking about. For instance we could change your way of thinking to any that you'd likely reach from different future influences, or to any that people similar to you have had, etc. Normally what we care about is the sort of intervention that we could actually do or draw predictions from, so the first one here is what we mean. And to some degree it's true, your IQ would be changed.
From the other end, what does it mean to say your way of thinking is affected by your IQ? It means if we were to "modify your IQ" without doing anything else to affect your thinking, then your way of thinking would be altered. This seems true, though hard to pin down, since IQ is normally thought of as a scalar, rather than a whole range of phenomena like your "way of thinking". IQ is sort of an amalgam of different abilities and qualities, so if we look closely enough we'll find that IQ can't directly affect anything at all, similarly to how g can't ("it wasn't your IQ that helped you come up with those ideas, it was your working memory, and creativity, and visualization ability!"); but on the other hand if most things that increase IQ make the same sort of difference (eg to academic success) then it's fairly compact and useful to say that IQ affects those things.
Causality with fuzzy concepts is tricky.
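The intervention reading of "X affects Y" above can be made concrete with a toy structural causal model. This is my own illustration, not from the thread: all variables, coefficients, and the decomposition of IQ into working memory, creativity, and "way of thinking" are made up purely to show how a do()-style intervention cuts the incoming arrows into the intervened variable.

```python
import random

# Toy structural causal model (all variables and coefficients are
# invented for illustration; this is not a claim about real cognition).
# working_memory and creativity are exogenous; "way of thinking" is
# partly downstream of them; measured IQ is an amalgam of all three.

def sample(intervene_thinking=None):
    working_memory = random.gauss(0, 1)
    creativity = random.gauss(0, 1)
    thinking = 0.5 * working_memory + 0.5 * creativity + random.gauss(0, 1)
    if intervene_thinking is not None:
        # do(thinking = x): sever the arrows from upstream causes
        thinking = intervene_thinking
    iq = 100 + 10 * working_memory + 5 * creativity + 5 * thinking
    return iq

random.seed(0)
n = 10_000
baseline = sum(sample() for _ in range(n)) / n
intervened = sum(sample(intervene_thinking=2.0) for _ in range(n)) / n
print(f"mean IQ, no intervention:  {baseline:.1f}")
print(f"mean IQ, do(thinking=2.0): {intervened:.1f}")
```

In this toy model, "your way of thinking affects your IQ" just means the intervened mean differs from the baseline mean, exactly as the interventionist definition says; and the point about IQ being an amalgam shows up as IQ having no outgoing arrow of its own here, only its components do.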
IMPORTANT:
This is your final exam.
You have 60 hours.
Your solution must at least allow Harry to evade immediate death, despite being naked, holding only his wand, facing 36 Death Eaters plus the fully resurrected Lord Voldemort.
If a viable solution is posted before *12:01AM Pacific Time* (8:01AM UTC) on Tuesday, March 2nd, 2015, the story will continue.
Otherwise you will get a shorter and sadder ending.
There are more details and suggestions at the end of the chapter.
Question for Eliezer: Would a post to a LessWrong HPMOR discussion thread count as a solution, or must all solutions be posted to fanfiction.net?
March 2nd isn't a Tuesday; is it Monday night or Tuesday night?
That is a very good suggestion.
I guess I am pretty confused. And as I said, I'd be very open to the proper way to view such things.
It seems we've got one group of words (reality, universe, multiverse, world, nature, ...) and another group of words (experience, consciousness, mind), and I am very confused about what each of these words refers to, and how they are related.
Is there something in the way of a standard lexicon for that which you can point me to?
I think one solution is to break it down, first beginning with only reality. We can then split reality into absolute reality and relative reality, which correspond to the absolute states and relative states of Everett's model.
At this point, we haven't made any distinct deviations from our most general physics models, nor from the traditions of philosophy (Kant, Plato, Leibniz, etc.).
Has a misstep been made?
For the record, when Newton essentially defined the study of physics, he said "it will be convenient to distinguish reality into absolute and relative." I'm paraphrasing by using "reality" where he said "time and space". My point is that the consistency of my suggestions with physics, philosophy, and rational thought in general is not some casual comment, but the consequence of an inquiry of the type you suggest.
If you want to discuss the nature of reality using a similar lexicon to what philosophers use, I recommend consulting the Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/
Nice, thanks. I'm willing to do it (I've actually never used G+ or Twitter before...). I would think though if there is a LW user who Musk might have actually heard of, say through the Superintelligence book, then that would be more likely to get a response. In other words, I think if Eliezer Yudkowsky wrote to Elon, that could be a good thing.
Musk has joined the advisory board of FLI and CSER, which are younger sibling orgs of FHI and MIRI. He's aware of the AI xrisk community.
No, it doesn't seem strange to me to consider representing what I want by a bounded utility function. It seems strange to consider representing what I want by a utility function that converges exponentially fast towards its bound.
I'll repeat something I said in another comment:
You might say it's a suboptimal outcome even though it's a good one, but to make that claim it seems to me you have to do an actual expected-utility calculation. And we know what that expected-utility calculation says: it says that the resource allocation you're objecting to is, in fact, the optimal one.
Or you might say it's a suboptimal outcome because you just know that this allocation is bad, or something. Which amounts to saying that actually you know what the utility function should be and it isn't the one the analysis assumes.
I have some sympathy with that last option. A utility function that not only is bounded but converges exponentially fast towards its bound feels pretty counterintuitive. It's not a big surprise, surely, if such a counterintuitive choice of utility function yields wrong-looking resource allocations?
(Remark 1: the above is a comment that remarks that the optimum is the optimum but is visibly not missing the point by failing to appreciate that we might be constructing a utility function and trying to make it do good-looking things, rather than approximating a utility function we already have.)
(Remark 2: I think I can imagine situations in which we might consider making the relationship between chocolate and utility converge very fast -- in fact, taking "chocolate" literally rather than metaphorically might yield such a situation. But in those situations, I also think the results you get from your exponentially-converging utility function aren't obviously unreasonable.)
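The "sublinear allocation" being debated can be exhibited directly. The following is my own sketch, not from the original post: I assume the bounded utility u(x) = 1 - exp(-x) for each person, weights w1 and w2 on the two people, and a fixed budget. The first-order condition w_i * exp(-x_i) = lambda gives x_i = ln(w_i / lambda), so the optimal allocations differ only by ln(w1/w2): a tenfold weight buys a constant extra ~2.3 units, however large the budget.

```python
import math

# Sketch under assumed utility u(x) = 1 - exp(-x) (exponentially
# converging to its bound). Maximize w1*u(x1) + w2*u(x2) subject to
# x1 + x2 = budget. First-order conditions force
#   w1 * exp(-x1) = w2 * exp(-x2)  =>  x1 - x2 = ln(w1 / w2).

def optimal_split(w1, w2, budget):
    # Solve x1 + x2 = budget together with x1 - x2 = ln(w1/w2).
    diff = math.log(w1 / w2)
    x1 = (budget + diff) / 2
    x2 = (budget - diff) / 2
    return x1, x2

x1, x2 = optimal_split(w1=10.0, w2=1.0, budget=100.0)
print(x1 - x2)  # ln(10): a 10x weight advantage buys only ~2.3 extra units
```

This is the counterintuitive-looking behavior at issue: whether you read the near-even split as "the optimum is the optimum" or as evidence against the exponentially-converging utility function is exactly the disagreement in the thread.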
Cool. Regarding bounded utility functions, I didn't mean you personally, I meant the generic you; as you can see elsewhere in the thread, some people do find it rather strange to think of modelling what you actually want as a bounded utility function.
This is where I thought you were missing the point:
Or you might say it's a suboptimal outcome because you just know that this allocation is bad, or something. Which amounts to saying that actually you know what the utility function should be and it isn't the one the analysis assumes.
Sometimes we (seem to) have stronger intuitions about allocations than about the utility function itself, and parlaying that to identify what the utility function should be is what this post is about. This may seem like a non-step to you; in that case you've already got it. Cheers! I admit it's not a difficult point. Or if you always have stronger intuitions about the utility function than about resource allocation, then maybe this is useless to you.
I agree with you that there are some situations where the sublinear allocation (and exponentially-converging utility function) seems wrong and some where it seems fine; perhaps the post should initially have said "person-enjoying-chocolate-tronium" rather than chocolate.
Is there a reason to think this problem is less amenable to being solved by complexity priors than other learning problems? / Might we build an unaligned agent competent enough to be problematic without solving problems similar to this one?