Comment author: ESRogs 25 June 2015 08:50:49PM *  1 point [-]

To create a superhuman AI driver, you 'just' need to create a realistic VR driving sim and then train a ULM in that world (better training and the simple power of selective copying lead to superhuman driving capability).

So to create benevolent AGI, we should think about how to create virtual worlds with the right structure, how to educate minds in those worlds, and how to safely evaluate the results.

There is some interesting overlap between these ideas and Eric Drexler's recent proposal. (Previously discussed on LessWrong here)

Comment author: ESRogs 09 June 2015 03:42:01PM *  7 points [-]

FYI: I've just made this: www.reddit.com/r/RationalistDiaspora.

See: discussion in this thread.

Comment author: Raziel123 09 June 2015 03:12:34AM *  4 points [-]

I would be surprised if that subreddit gets traction. I was thinking of something more like Reaction Times (damn Scott and his FAQ), placed somewhere visible on all of the rationality-related sites: a coordinated effort.

Well, the idea was not to comment in the aggregator; that way it works like a highway: it should take you to other sites in 2 clicks (3 max). If that is not possible, I'm not sure there will be any impact, besides making another gravity center.

Comment author: ESRogs 09 June 2015 02:56:04PM 1 point [-]

the idea was not to comment in the aggregator

I'm thinking about whether to try to explicitly establish this as a norm of /r/RationalistDiaspora. Haven't made up my mind yet.

In response to comment by [deleted] on Open Thread, Jun. 8 - Jun. 14, 2015
Comment author: TezlaKoil 08 June 2015 09:43:34PM *  16 points [-]

Is such a long answer suitable in OT? If not, where should I move it?

tl;dr Naive ultrafinitism is based on real observations, but its proposals are a bit absurd. Modern ultrafinitism has close ties with computation. Paradoxically, taking ultrafinitism seriously has led to non-trivial developments in classical (usual) mathematics. Finally: ultrafinitism would probably be able to interpret all of classical mathematics in some way, but the details would be rather messy.

1. Naive ultrafinitism

1.1. There are many different ways of representing (writing down) mathematical objects.

The naive ultrafinitist chooses a representation, calls it explicit, and says that a number is "truly" written down only when its explicit representation is known. The prototypical choice of explicit representation is the tallying system, where 6 is written as ||||||. This choice is not arbitrary either: the foundations of mathematics (e.g. Peano arithmetic) use these tally marks by necessity.

However, the integers are a special^1 case, and in the general case, the naive ultrafinitist insistence on fixing a representation starts looking a bit absurd. Take Linear Algebra: should you choose an explicit basis of R^3 that you use indiscriminately for every problem, or should you use a basis (sometimes an arbitrary one) that is most appropriate for the problem at hand?

1.2. Not all representations are equally good for all purposes.

For example, enumerating the prime factors of 2*3*5 is way easier than doing the same for ||||||||||||||||||||||||||||||, even though both represent the same number.
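
To make this concrete, here is a minimal Python sketch (the helper function is mine, purely for illustration):

```python
# Minimal sketch (the helper is mine, purely for illustration): the same
# number, 30, in two representations.
factored = [2, 3, 5]        # "list of prime factors" representation
tally = "|" * (2 * 3 * 5)   # tally representation of the same number

# Reading the prime factors off the first representation is free:
print(factored)             # [2, 3, 5]

# From the tally representation we must first count the marks, then
# factor by trial division: work that grows with the number itself.
def prime_factors(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(len(tally)))  # [2, 3, 5], recovered the hard way
```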

1.3. Converting between representations is difficult, and in some cases outright impossible.

Lenstra earned $14,527 by converting the number known as RSA-100 from "positional" to "list of prime factors" representation.

Converting 3^^^3 from up-arrow representation to the binary positional representation is not possible for obvious reasons.

As usual, up-arrow notation is overkill: writing even the decimal number 100000000000000000000000000000000000000000000000000000000000000000000000000000000 in tally marks would take more marks than there are atoms in the observable universe. Nonetheless, we can deduce a lot about this number: it is even, and it's larger than RSA-100. I can even convert it to "list of prime factors" representation by hand: 2^80 * 5^80.
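
For the curious, the up-arrow recursion itself is tiny; it is only evaluation that is hopeless. A Python sketch (my transcription of Knuth's definition):

```python
# Knuth's up-arrow recursion (my transcription). Small cases evaluate
# instantly; up(3, 3, 3), which denotes 3^^^3, would never terminate,
# because evaluating it amounts to the impossible conversion above.
def up(a, n, b):
    """Compute a (up-arrow n times) b; n = 1 is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 1, 10))  # 1024
print(up(2, 2, 4))   # 65536, i.e. 2^2^2^2
# up(3, 3, 3) denotes 3^^^3: do not attempt to evaluate it.
```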

2. Constructivism

The constructivists were the first to insist that algorithmic matters be taken seriously: proofs with algorithmic content are distinguished from proofs without such content, and concepts that are not computably equivalent are kept separate.

For example, there is no algorithm for converting Dedekind cuts to equivalence classes of rational Cauchy sequences. Therefore, the concept of real number falls apart: constructively speaking, the set of Cauchy-real numbers is very different from the set of Dedekind-real numbers.

This is a tendency in non-classical mathematics: concepts that we think are the same (and are equivalent classically) fall apart into many subtly different concepts.

Computability, however, is a qualitative notion, and most constructivists stop here (or even backtrack to regain some classicality, as in the foundational program known as Homotopy Type Theory).

3. Modern ultra/finitism

In the same way that constructivism distinguished qualitatively different but classically equivalent objects, one could start distinguishing things that are constructively equivalent but quantitatively different.

One path leads to the explicit approach to representation-awareness. For example, LNST^4 explicitly distinguishes between the set of binary natural numbers B and the set of tally natural numbers N. Since these sets have quantitatively different properties, it is not possible to define a bijection between B and N inside LNST.
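
To illustrate the flavour of that distinction (the encodings below are mine; LNST's actual syntax differs):

```python
# Illustrative encodings (mine, not LNST syntax). A tally numeral for n
# has length n; a binary numeral has length about log2(n). Converting
# tally to binary is cheap, while binary to tally blows up exponentially
# in the length of the input, which is the quantitative gap LNST tracks.
def tally(n):
    return "|" * n

def binary(n):
    return bin(n)[2:]

n = 1000
print(len(binary(n)))  # 10 symbols
print(len(tally(n)))   # 1000 symbols
```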

Another path leads to ultrafinitism.

The most important thinker in modern ultra/finitism was probably Edward Nelson. He observed that the "set of effectively representable numbers" is not downward-closed: even though we have a very short notation for 3^^^3, there are lots of numbers between 0 and 3^^^3 that have no such short representation. In fact, by elementary considerations, the overwhelming majority of them cannot ever have a short representation.
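
The elementary considerations are just counting, as this sketch shows:

```python
# Counting sketch: a notation over a 128-symbol alphabet has fewer than
# 128**(L+1) descriptions of length at most L, so at most that many
# numbers below 3^^^3 can have such a short name. That bound is minuscule
# next to 3^^^3 for any humanly writable L.
def descriptions(alphabet_size, max_length):
    return sum(alphabet_size ** k for k in range(1, max_length + 1))

print(descriptions(128, 100))  # roughly 5e210: vast, yet negligible here
```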

What's more, if our system of notation allows for expressing big enough numbers, then the "set of effectively representable numbers" is not even inductive because of the Berry paradox. In a sense, the growth of 'bad enough' functions can only be expressed in terms of themselves. Nelson's hope was to prove the inconsistency of arithmetic itself using a similar trick. His attempt was unsuccessful: Terry Tao pointed out why Nelson's approach could not work.

However, Nelson found a way to relate inexpressibly huge numbers to non-standard models of arithmetic^2.

This correspondence turned out to be very powerful, leading to many paradoxical developments, including a finitistic^3 extension of Set Theory, a radically elementary treatment of Probability Theory, and new ways of formalising the Infinitesimal Calculus.

4. Answering your question

What kind of mathematics would we still be able to do (cryptography, analysis, linear algebra …)?

All of it, modulo translating the classical results into the subtler ultra/finitistic language. This holds even for the silliest versions of ultrafinitism. Imagine a naive ultrafinitist mathematician who declares that the largest number is m. She can't state the proposition R(n,2^m), but she can still state its translation R(log_2 n,m), which is just as good.
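
A toy version of that translation in Python, with R chosen (by me, purely for concreteness) to be an order relation:

```python
# Toy translation (R is my choice of relation, purely for concreteness).
# The original statement mentions 2**m, which the naive ultrafinitist
# cannot write down; the translation only mentions numbers she accepts.
import math

def R(a, b):
    return a < b

def R_original(n, m):    # R(n, 2^m): mentions the forbidden number
    return R(n, 2 ** m)

def R_translated(n, m):  # R(log_2 n, m): same truth value, small numbers
    return R(math.log2(n), m)

assert R_original(1000, 20) == R_translated(1000, 20)  # both True
```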

Translating is very difficult even in the qualitative case, as seen in this introductory video about constructive mathematics. Some theorems hold for Dedekind-reals, others for Cauchy-reals, etc. Similarly, in LNST, some theorems hold only for "binary naturals", others only for "tally naturals". It would be even harder for true ultrafinitism: the set of representable numbers is not downward-closed.

This was a very high-level overview. Feel free to ask for more details (or clarification).


^1 The integers are absolute. Unfortunately, it is not entirely clear what this means.

^2 Coincidentally, the latter notion prompted my very first contribution to LW.

^3 In this so-called Internal Set Theory, all the usual mathematical constructions are still possible, but every set of standard numbers is finite.

^4 Light Naive Set Theory. Based on Linear Logic. Consistent with unrestricted comprehension.

Comment author: ESRogs 09 June 2015 07:06:34AM *  0 points [-]

What is LNST?

Edit: Nevermind, saw the footnote.

Comment author: Daniel_Burfoot 06 June 2015 12:09:39PM *  6 points [-]

The problem I see here is that the mainstream AI / machine learning community measures progress mainly by this kind of contest.

Yup, two big chapters of my book are about how terrible the evaluation systems of mainstream CV and NLP are. Instead of image classification (or whatever), researchers should write programs to do lossless compression of large image databases. This metric is absolutely ungameable, and also more meaningful.

Comment author: ESRogs 09 June 2015 06:12:39AM 1 point [-]

Is it important that it be lossless compression?

I can look at a picture of a face and know that it's a face. If you switched a bunch of pixels around, or blurred parts of the image a little bit, I'd still know it was a face. To me it seems relevant that it's a picture of a face, but not as relevant what all the pixels are. Does AI need to be able to do lossless compression to have understanding?

I suppose the response might be that if you have a bunch of pictures of faces, and know that they're faces, then you ought to be able to get some mileage out of that. And even if you're trying to remember all the pixels, there's less information to store if you're just diff-ing from what your face-understanding algorithm predicts is most likely. Is that it?
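
As I understand the standard coding argument, the connection would look roughly like this sketch (the model interface here is made up):

```python
# Sketch of the prediction/compression link (the model interface here is
# made up). An arithmetic coder can losslessly encode a symbol in about
# -log2 p(symbol) bits under the model, so the model's total negative
# log-likelihood is, up to rounding, the size of the compressed file.
import math

def compressed_size_bits(pixels, predict):
    """predict(prefix) returns a dict of next-pixel probabilities."""
    total = 0.0
    for i, px in enumerate(pixels):
        p = predict(pixels[:i]).get(px, 1e-12)  # model's belief about px
        total += -math.log2(p)                  # ideal code length for px
    return total

# A model that understands faces assigns high probability to plausible
# pixels and so pays few bits; a confused model pays many.
uniform = lambda prefix: {v: 1 / 256 for v in range(256)}
print(compressed_size_bits([10, 20, 30], uniform))  # 24.0 bits (3 * 8)
```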

Comment author: ESRogs 03 June 2015 08:02:00PM 2 points [-]

Some of our supporters are willing to sweeten the deal as well: if you haven't given us more than £1,000 before, then they'll match 1:1 a gift between £1,000 and £5,000.

Do donations to other projects within CEA count towards the £1,000 in previous donations limit? I've donated to GPP, but not GWWC.

Comment author: ESRogs 03 June 2015 07:38:36PM 0 points [-]

Does computational neuroscience sound interesting to you?

Seems like a field that could use your stats skills, and understanding how the brain works seems likely to be important for AI.

Comment author: paulfchristiano 12 April 2015 04:30:05PM *  1 point [-]

or by subverting the system through some design or implementation flaw

I discuss the most concerning-to-me instance of this in problem (1) here; it seems like that discussion applies equally well to anything that might work fine at first but then break when you become a sufficiently smart reasoner.

I think the basic question is whether you can identify and exploit such flaws at exactly the same time that you recognize their possibility, or whether you can notice them slightly before. By “before” I mean with a version of you that is less clever, has less time to think, has a weaker channel to influence the world, or is treated with more skepticism and caution.

If any of these versions of you can identify the looming problem in advance, and then explain it to the aliens, then they can correct the problem. I don’t know if I’ve ever encountered a possible flaw that wasn’t noticeable “before” it was exploitable in one of these senses. But I may just be overlooking them, and of course even if we can’t think of any it’s not such great reassurance.

Of course even if you can’t identify such flaws, you can preemptively improve the setup for the aliens, in advance of improving your own cognition. So it seems like we never really care about the case where you are radically smarter than the designer of the system, we care about the case where you are very slightly smarter. (Unless this system-improvement is a significant fraction of the difficulty of actually improving your cognition, which seems far-fetched.)

The point is that my behavior while my abilities are less than super-alien is not a very good indication of how safe I will eventually be.

Other than the issue from the first part of this comment, I don't really see why the behavior changes (in a way that invalidates early testing) when you become super-alien in some respects. It seems like you are focusing on errors you may make that would cause you to receive a low payoff in the RL game. As you become smarter, I expect you to make fewer such errors. I certainly don't expect you to predictably make more of them.

(I understand that this is a bit subtle, because as you get smarter the problem also may get harder, since your plans will e.g. be subject to more intense scrutiny and to more clever counterproposals. But that doesn't seem prone to lead to the kinds of errors you discuss.)

Comment author: ESRogs 01 June 2015 11:18:02PM 0 points [-]

Other than the issue from the first part of this comment, I don't really see why the behavior changes (in a way that invalidates early testing) when you become super-alien in some respects. It seems like you are focusing on errors you may make that would cause you to receive a low payoff in the RL game. As you become smarter, I expect you to make fewer such errors.

Paraphrasing, I think you're saying that, if the reinforcement game setup continues to work, you expect to make fewer errors as you get smarter. And the only way getting smarter hurts you is if it breaks the game (by enabling you to fall into traps faster than you can notice and avoid them).

Is that right?

Comment author: ESRogs 01 June 2015 09:38:10PM 0 points [-]

I missed the fact that most of my readers aren't in the habit of spending ~10 hours carefully reading a dense article

I'm a bit confused -- are you referring to one of your LessWrong articles? Were you anticipating that readers would spend ~10 hours reading and thinking about it before commenting?

Comment author: [deleted] 25 May 2015 04:10:26AM 1 point [-]

Yes. I'm inferring a bit about what they are willing to fund from the request for proposals that they have put out, and from statements that have been made by Musk and others. Hopefully there won't be any surprises when the selected grants are announced.

Comment author: ESRogs 26 May 2015 02:04:46PM 0 points [-]

Would you be surprised if they funded MIRI?
