In response to MIRI's Approach
Comment author: LawrenceC 30 July 2015 09:28:12PM *  6 points [-]

Thanks Nate, this is a great summary of the case for MIRI's approach!

Out of curiosity, is there an example, other than Bird and Layzell, where algorithms led to unexpected solutions? That paper seems to be cited a lot in MIRI's writings.

In response to comment by [deleted] on MIRI's Approach
Comment author: jacob_cannell 30 July 2015 05:26:16PM *  4 points [-]

In fact, I expect that given the right way of modelling, formal verification of learning systems up to epsilon-delta bounds (in the style of PAC-learning, for instance) should be quite doable. Why?

Dropping the 'formal verification' part and replacing it with approximate error-bound variance reduction, this is potentially interesting - although it also seems to be a general technique that would, if it worked well, be useful for practical training, safety aside.

Why? Because, as mentioned regarding PAC learning, it's the existing foundation for machine learning.

Machine learning is an eclectic field with many mostly independent 'foundations': Bayesian statistics of course, optimization methods (Hessian-free, natural gradient, etc.), geometric methods and NLDR, statistical physics ...

That being said - I'm not very familiar with the PAC learning literature yet - do you have a link to a good intro/summary/review?

Hell, if I could find the paper showing that deep networks form a "funnel" in the model's free-energy landscape - where local minima are concentrated in that funnel and all yield more-or-less as-good test error, while the global minimum reliably overfits - I'd be posting the link myself.

That sounds kind of like the saddle point paper. It's easy to show that in complex networks there are a large number of equivalent minima due to various symmetries and redundancies. Thus finding the actual technical 'global optimum' quickly becomes suboptimal when you discount for resource costs.

If it seems really really really impossibly hard to solve a problem even with the 'simplification' of lots of computing power, perhaps the underlying assumptions are wrong. For example - perhaps using lots and lots of computing power makes the problem harder instead of easier.

You're not really being fair to Nate here, but let's be charitable to you: this is fundamentally a dispute between the heuristics-and-biases school of thought about cognition and the bounded/resource-rational school of thought.

Yes that is the source of disagreement, but how am I not being fair? I said 'perhaps' - as in have you considered this? Not 'here is why you are certainly wrong'.

Computationally, this is saying, "When we have enough resources that only asymptotic complexity matters, we use the Old Computer Science way of just running the damn algorithm that implements optimal behavior and optimal asymptotic complexity." Trying to extend this approach into statistical inference gets you basic Bayesianism and AIXI, which appear to have nice "optimality" guarantees, but are computationally intractable and are only optimal up to the training data you give them.

Solomonoff/AIXI and more generally 'full Bayesianism' is useful as a thought model, but is perhaps overvalued on this site compared to in the machine learning field. Compare the number of references/hits for AIXI on this site (tons) to the number on r/MachineLearning (1!). Compare the citation counts for AIXI papers (~100) to those of other ML papers, and you will see that the ML community views AIXI and related work as minor.

The important question is: what does the optimal practical approximation of Solomonoff/Bayesian inference look like? And how different is that from what the brain does? By optimal I of course mean optimal in terms of all that really matters, which is intelligence per unit of resources.

Human intelligence - including that of Turing or Einstein - only requires 10 watts of energy and, more surprisingly, only around 10^14 switches/second or less, which is basically miraculous. A modern GPU uses more than 10^18 switches/second. You'd have to go back to a Pentium or something to get down to 10^14 switches per second. Of course the difference is that switch events in an ANN are much more powerful because they are more like memory ops, but still.

It is really, really hard to make any sort of case that actual computer tech is going to become significantly more efficient than the brain anytime in the near future (at least in terms of switch events/second). There is a very strong case that all the H&B stuff is just what actual practical intelligence looks like. There is no such thing as intelligence that is not resource-efficient - or alternatively, we could say that any useful definition of intelligence must be resource-normalized (i.e. utility/cost).
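To make the arithmetic behind the comparison explicit - a quick back-of-envelope sketch using the figures above, plus an assumed 250 W GPU power draw (an illustrative assumption, not a measured value):

```python
# Back-of-envelope comparison of switch events per joule.
# Brain figures are taken from the comment above; the GPU power
# draw (250 W) is an assumed value for illustration.
brain_watts = 10.0
brain_switches_per_sec = 1e14
gpu_watts = 250.0
gpu_switches_per_sec = 1e18

brain_switches_per_joule = brain_switches_per_sec / brain_watts  # 1e13
gpu_switches_per_joule = gpu_switches_per_sec / gpu_watts        # 4e15

# Raw switch events per joule favor the GPU by ~400x, but each
# cortical "switch" is closer to a memory-bearing synaptic op,
# which is why the comparison isn't apples-to-apples.
ratio = gpu_switches_per_joule / brain_switches_per_joule
```

On these numbers the GPU wins on raw switches per joule, which is exactly why the "each biological switch does more work" caveat matters for the efficiency claim.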

Comment author: LawrenceC 30 July 2015 09:23:28PM *  3 points [-]

I'm not sure what you're looking for in terms of the PAC-learning summary, but for a quick intro, there's this set of slides or these two lecture notes from Scott Aaronson. For a more detailed review of the whole field's literature up until the mid-1990s, there's this paper by David Haussler, though given its length you might as well read Kearns and Vazirani's 1994 textbook on the subject. I haven't been able to find a more recent review of the literature, though - if anyone has a link, that'd be great.
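To give a taste of the kind of guarantee PAC learning provides: for a finite hypothesis class H in the realizable case, the standard sample-complexity bound says that m >= (1/eps) * (ln|H| + ln(1/delta)) i.i.d. examples suffice for a consistent learner to achieve error at most eps with probability at least 1 - delta. A minimal sketch of that arithmetic (the conjunctions example and the eps/delta values are just illustrative choices):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Realizable-case PAC bound for a finite hypothesis class:
    with m >= (1/epsilon) * (ln|H| + ln(1/delta)) i.i.d. examples,
    any hypothesis consistent with the sample has true error at most
    epsilon, with probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Example: boolean conjunctions over n = 20 variables, where each
# variable appears positively, negatively, or not at all: |H| = 3^20.
m = pac_sample_bound(3 ** 20, epsilon=0.05, delta=0.01)
```

Note how the bound depends only logarithmically on |H| and 1/delta - a few hundred examples suffice here even though the class has ~3.5 billion hypotheses.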

Comment author: LawrenceC 07 July 2015 09:03:27AM 0 points [-]

This was a great post, thanks!

One thing I'm curious about is how the ULH explains the fact that human thought seems to be divided into System 1/System 2 - is this solely a matter of education history?

Comment author: Wenceslao 30 June 2015 08:43:35PM 3 points [-]

Interesting post. However, I do not completely agree with the conclusions at the end.

I am a student of mathematical science, which places me in an environment of researchers in this area. From this vantage point, I am able to see that these people's work is based on beliefs that 'do not exist' - I mean, they work on abstract ideas that generally exist only in their minds. And now I wonder, do their efforts 'not pay rent'? They live off structures and objects that, in most cases, cannot be found in 'real life', and so, according to the article's conclusion, these would not be worth thinking about, as they do not flow from a question of anticipation (what were we anticipating, if it does not exist?).

Maybe I'm misunderstanding the post, or maybe it is just focused on other life experiences.

Comment author: LawrenceC 30 June 2015 09:18:47PM *  2 points [-]

You're definitely right that there are some areas where it's easier to make beliefs pay rent than others! I think there are two replies to your concern:

1) First, many theories from math DO pay rent (the ones I'm most aware of are statistics and computer-science-related ones). For example, better algorithms in theory (say, Strassen's algorithm for multiplying matrices) often correspond to better results in practice. Even more abstract stuff like number theory or recursion theory does yield testable predictions.
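To make the Strassen example concrete, here's a minimal sketch of its 7-multiplication recursion (illustrative only - it assumes n is a power of 2, and real implementations fall back to the naive method below a cutoff size and use optimized libraries):

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) using Strassen's
    7 recursive multiplications instead of the naive 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M):  # split M into four h x h quadrants
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

    def add(X, Y): return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]

    a, b, c, d = quad(A)
    e, f, g, i = quad(B)
    # Strassen's seven products
    p1 = strassen(a, sub(f, i))
    p2 = strassen(add(a, b), i)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, i))
    p6 = strassen(sub(b, d), add(g, i))
    p7 = strassen(sub(a, c), add(e, f))
    # Recombine quadrants of the result
    top = [r1 + r2 for r1, r2 in zip(add(sub(add(p5, p4), p2), p6), add(p1, p2))]
    bot = [r1 + r2 for r1, r2 in zip(add(p3, p4), sub(sub(add(p5, p1), p3), p7))]
    return top + bot
```

Seven recursive multiplications instead of eight gives O(n^2.807) instead of O(n^3) - a purely theoretical improvement that cashes out as real speedups on large matrices.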

2) Even things that can't pay rent directly can be logical implications of other things that pay rent. Eliezer wrote about this kind of reasoning here.

Comment author: LawrenceC 02 June 2015 03:18:52PM *  11 points [-]

"Mystics exult in mystery and want it to stay mysterious. Scientists exult in mystery for a different reason: it gives them something to do."

Richard Dawkins, The God Delusion, on the topic of mysterious answers to mysterious questions.

Comment author: LawrenceC 29 May 2015 02:15:48AM 0 points [-]

Here's a thing that's been bugging me for a while.

For Gryffindors there's "Gryffindorks". Are there any similarly good insults for the other three houses?

Comment author: LawrenceC 26 May 2015 04:23:28AM 3 points [-]

I've noticed recently that listening to music with lyrics significantly hampers my reading comprehension as well as my essay-writing ability, but has no (or even a slightly positive) effect on doing math. My informal model of the problem is that the words of the song disrupt the words being formed in my head. Has anyone else experienced anything similar?

Comment author: WilliamKiely 03 May 2015 05:48:06PM 2 points [-]

I agree that there are several reasons why solving the value alignment problem is important.

Note that when I said that Bostrom should "modify" his reply I didn't mean that he should make a different point instead of the point he made, but rather meant that he should make another point in addition to the point he already made. As I said:

While what [Bostrom] says is correct, I think that there is a more important point he should also be making when replying to this claim.

Comment author: LawrenceC 03 May 2015 08:43:41PM 0 points [-]

Ah, I see. Fair enough!

Comment author: WilliamKiely 30 April 2015 02:48:24AM 6 points [-]

This is my first comment on LessWrong.

I just wrote a post replying to part of Bostrom's talk, but apparently I need 20 Karma points to post it, so... let it be a long comment instead:

Bostrom should modify his standard reply to the common "We'd just shut off / contain the AI" claim

In Superintelligence author Prof. Nick Bostrom's most recent TED Talk, What happens when our computers get smarter than we are?, he spends over two minutes replying to the common claim that we could just shut off an AI or preemptively contain it in a box in order to prevent it from doing bad things that we don't like, so there's no need to be too concerned about the possible future development of AI that has misconceived or poorly specified goals:

Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug. Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, like merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.

If I recall correctly, Bostrom has replied to this claim in this manner in several of the talks he has given. While what he says is correct, I think that there is a more important point he should also be making when replying to this claim.

The point is that even if containing an AI in a box so that it could not escape and cause damage was somehow feasible, it would still be incredibly important for us to determine how to create AI that shares our interests and values (friendly AI). And we would still have great reason to be concerned about the creation of unfriendly AI. This is because other people, such as terrorists, could still create an unfriendly AI and intentionally release it into the world to wreak havoc and potentially cause an existential catastrophe.

The idea that we should not be too worried about figuring out how to make AI friendly because we could always contain the AI in a box until we knew it was safe to release is confused not primarily because we couldn't actually successfully contain it in the box, but rather because the primary reason we have for wanting to quickly figure out how to make a friendly AI is so that we can make a friendly AI before anyone else makes an unfriendly AI.

In his TED Talk, Bostrom continues:

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

Bostrom could have strengthened his argument for the position that there is no way around this difficult problem by stating my point above.

That is, he could have pointed out that even if we somehow developed a reliable way to keep a superintelligent genie locked up in its bottle forever, this still would not allow us to avoid having to solve the difficult problem of creating friendly AI with human values, since there would still be a high risk that other people in the world with not-so-good intentions would eventually develop an unfriendly AI and intentionally release it upon the world, or simply not exercise the caution necessary to keep it contained.

Once the technology to make superintelligent AI is developed, good people will be pressured to create friendly AI and let it take control of the future of the world ASAP. The longer they wait, the greater the risk that not-so-good people will develop AI that isn't specifically designed to have human values. This is why solving the value alignment problem soon is so important.

Comment author: LawrenceC 02 May 2015 02:49:03AM *  1 point [-]

I'm not sure your argument proves your claim. I think what you've shown is that there exist reasons other than the inability to create perfect boxes to care about the value alignment problem.

We can flip your argument around and apply it to your claim: imagine a world where there was only one team with the ability to make superintelligent AI. I would argue that it'll still be extremely unsafe to build an AI and try to box it. I don't think that this lets me conclude that a lack of boxing ability is the true reason that the value alignment problem is so important.

Comment author: SanguineEmpiricist 19 April 2015 08:03:16PM *  1 point [-]

Well, over the last year I've been studying Feller Vol. 1, Probability via Expectation, Papoulis's probability book, Abbott, Bressoud's book, and Strichartz. I also collect a lot of math books, so I know random stuff, but I definitely just want to get the plumbing right.

I should probably just stick with one of each. I did discrete math a while ago, but that was before I fixed a few things that were causing major productivity losses for me, so I'm interested in redoing everything now that my executive functions aren't depressed.

I'm thinking about getting Epp as opposed to Rosen.

Comment author: LawrenceC 22 April 2015 12:59:56PM 2 points [-]

Wow. That's pretty impressive.

If you have a decent background in Math already, I've been told that Knuth's Concrete Mathematics might be more interesting (though it's really not appropriate as an introductory text). I've skimmed through a copy, and it seems to cover series and number theory at a much higher level, if that's what you're looking for.
