Interesting. I might show up.
When I first worked through this book, I didn't retain the material long-term (I'm sure some people can manage that, just not me, not without meditating on it for much longer than it takes to work through, or setting up a spaced repetition system). In that respect, Enderton's Elements of Set Theory worked much better. Enderton's book goes into more detail, giving enough time to exercise intuition about standard proofs. At the same time, it's an easier read, which might be helpful if Halmos's text seems difficult.
Thanks for the tip. Two other books on the subject that seem to be appreciated are Introduction to Set Theory by Karel Hrbacek and Classic Set Theory: For Guided Independent Study by Derek Goldrei.
Edit: math.se weighs in: http://math.stackexchange.com/a/264277/255573
In general, reading about the same subject from a different author is a great way to learn and retain the material better. This is true even if neither author is objectively "better" than the other. Something about recognizing the same underlying concept expressed in different words helps to fix that concept in the mind.
It's possible to exploit this phenomenon even when you have only one text to work with. One trick I use when working through a math text is to willfully use different notation in my notes next to the text. Using a different notation forces me to make sure that I'm really following the details of the argument. Expressing the same logic in different symbols makes it easier to see through those symbols to the underlying logic.
The author of the Teach Yourself Logic study guide agrees with you about reading multiple sources:
I very strongly recommend tackling an area of logic (or indeed any new area of mathematics) by reading a series of books which overlap in level (with the next one covering some of the same ground and then pushing on from the previous one), rather than trying to proceed by big leaps.
In fact, I probably can’t stress this advice too much, which is why I am highlighting it here. For this approach will really help to reinforce and deepen understanding as you re-encounter the same material from different angles, with different emphases.
Looks to me like Halmos does intend "one-to-one" to mean "injective". What he writes is "A function that always maps distinct elements onto distinct elements is called one-to-one (usually a one-to-one correspondence)." Then he mentions inclusion maps as examples of one-to-one functions.
My two main sources of confusion in that sentence are:
- He says "distinct elements onto distinct elements", which suggests both injection and surjection.
- He says "is called one-to-one (usually a one-to-one correspondence)", which might suggest that "one-to-one" and "one-to-one correspondence" are synonyms -- since that is what he usually uses the parentheses for when naming concepts.
I find Halmos somewhat contradictory here.
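For reference, here is a minimal sketch of the standard modern definitions, in my own notation rather than Halmos's wording:

% f : A -> B, modern terminology (not Halmos's phrasing)
\[
\begin{aligned}
f \text{ is injective (one-to-one)} &\iff \forall a_1, a_2 \in A,\ f(a_1) = f(a_2) \implies a_1 = a_2,\\
f \text{ is surjective (onto } B\text{)} &\iff \forall b \in B,\ \exists a \in A,\ f(a) = b,\\
f \text{ is bijective (a one-to-one correspondence)} &\iff f \text{ is both injective and surjective.}
\end{aligned}
\]

On these definitions an inclusion map is injective but generally not surjective, which is why reading Halmos's "one-to-one" as "bijective" clashes with his inclusion-map examples.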
But I'm convinced you're right. I've edited the post. Thanks.
Oh yes, for sure, but the context here was a statement that "onto" means surjective while "one-to-one" means bijective. Definitely talking functions. And I would be really surprised if Halmos were using "one-to-one" followed by anything other than "correspondence" to mean bijective.
You guys must be right. And Wikipedia corroborates. I'll edit the post. Thanks.
Hello.
I'm currently attempting to read through the MIRI research guide in order to contribute to one of the open problems, starting from Basics. I'm emulating many of Nate's techniques. I'll post reviews of the material in the research guide on LessWrong as I work through it.
I'm mostly posting here now just to note this. I can be terse at times.
See you there.
First, appreciation: I love that calculated modification of self. These, and similar techniques, can be very useful if put to use in the right way. I recognize myself here and there. You did well to abstract it all out this clearly.
Second, a note: You've described your techniques from the perspective of how they deviate from epistemic rationality - "Changing your Terminal Goals", "Intentional Compartmentalization", "Willful inconsistency". I would've been more inclined to describe them from the perspective of their central effect, e.g. something in the style of: "Subgoal ascension", "Channeling", "Embodying". Perhaps not as marketable to the LessWrong crowd. Multiple perspectives could be used as well.
Third, a question: How did you create that gut feeling of urgency?
If you want guarantees, find yourself another universe. "There's no guarantee" of anything.
Your concept of a boxed AI seems very naive and uninformed. Of course a superintelligence a million times more powerful than a human would probably be beyond the capability of a human operator to manually debug. So what? Actual boxing setups would involve highly specialized machine checkers that verify various properties of the intelligence's behavior and its runtime, in ways that truly can't be faked.
And boxing, by the way, means giving the AI zero power. If there is a power differential, then really by definition it is out of the box.
Regarding your last point, it is in fact possible to build an AI that is not a utility maximizer.
And boxing, by the way, means giving the AI zero power.
No, hairyfigment's answer was entirely appropriate. Zero power would mean zero effect. Any kind of interaction with the universe means some level of power. Perhaps in the future you should say nearly zero power instead, so as to avoid misunderstanding on the part of others, as taking you literally on the "zero" is apparently "legalistic".
As to the issues with nearly zero power:
- A superintelligence with nearly zero power could turn out to have a heck of a lot more power than you expect.
- The incentives to tap more perceived utility by unboxing the AI or building other unboxed AIs will be huge.
Mind, I'm not arguing that there is anything wrong with boxing. What I'm arguing is that it's wrong to rely only on boxing. I recommend you read some more material on AI boxing and Oracle AI. Don't miss out on the references.
So you think human-level intelligence in principle does not combine with goal stability.
To be clear, I've been talking about human-like intelligence, which is a different distinction from human-level. Human-like intelligences operate similarly to human psychology. And it is demonstrably true that humans do not have a fixed set of fundamentally unchangeable goals, and human society even less so. For all their faults, the neoreactionaries get this part right in their critique of progressive society: the W-factor introduces a predictable drift in social values over time. And although people do tend to get "fixed in their ways", it is rare indeed for a single person to remain absolutely rigidly so. So yes, insofar as we are talking about human-like intelligences, if they had truly fixed, steadfast goals then that would be something that distinguishes them from humans.
Aren't you simply disagreeing with the orthogonality thesis, "that an artificial intelligence can have any combination of intelligence level and goal"?
I don't think the orthogonality thesis is well formed. The nature of an intelligence may indeed cause it to develop certain goals in due course, or its overall goal set to drift in certain, expected if not predictable, ways.
Of course denying the orthogonality thesis as stated does not mean endorsing a cosmist perspective either, which would be just as ludicrous. I’m not naive enough to think that there is some hidden universal morality that any smart intelligence naturally figures out -- that’s bunk IMHO. But it’s just as naive to think that the structure of an intelligence and its goal drift over time are purely orthogonal issues. In real, implementable designs (e.g. not AIXI), one informs the other.
So you disagree with the premise of the orthogonality thesis. Then you know a central concept to probe in order to understand the arguments put forth here. For example, check out Stuart Armstrong's paper: General purpose intelligence: arguing the Orthogonality thesis
I would caution against a bias towards "the current situation seems vaguely bad, therefore Something Must Be Done." There are lots of people still getting use out of LessWrong. I think it would be unfortunate if a bias towards Doing Something over Leaving It Be caused a valuable resource to be ended without good cause. If the site can be reinvented, great, but if it can't -- don't hit the Big Red Button without honestly weighing the significant costs to the people who are still actively using the site.
(I briefly searched to see whether there's already an article on LW about the idea of a bias towards Doing Something. It would of course be essentially the opposite of status quo bias, and yet I think it's a real phenomenon; I certainly feel like I observe it happening in discussions like this. Perhaps the real issue is the resolution of conflicts between a small minority who are outspoken about Doing Something and a large silent majority who don't express strong feelings because they're fine with the status quo. This is an attempt to express a thought that I've had percolating, not a criticism of this post.)
Counterpoint: Sometimes, not moving means moving, because everyone else is moving away from you. Movement -- change -- is relative. And on the Internet, change is rapid.