Anybody want to join a Math Club?

9 smoofra 05 April 2013 04:36AM

I've found it's hard to teach myself math without an objective.  If I don't have a specific question I'm trying to answer, my eyes just start to skip over equations, trying to get to the "good part".  Pretty soon I've left the boring parts I know far behind.  I've also skipped the less boring parts that I sorta know, and now I'm skipping forward even faster because I only understand half of what I'm reading.  I wind up skimming the whole book without really absorbing much of it.  I think I'd do better if I were planning to discuss what I'm reading with others.

So here's my idea: a math club.  We pick a book and read it together.  Every (week | two weeks | month) we read the next chapter, and then we meet up and discuss it.  Anything we can't figure out on our own, we discuss with the other members of the math club until we get it.  The impending deadline of having to actually explain the material to other humans serves to focus and motivate the reading.

Anybody interested?

Possible topics:

EDIT: Benito made a Facebook group so we can get organized and do this!  See: http://lesswrong.com/r/discussion/lw/h5y/lw_study_group_facebook_page/

Comment author: Eliezer_Yudkowsky 23 March 2013 11:04:59PM 7 points [-]

The problem is if your mathematical power has to go down each time you create a successor or, equivalently, self-modify. If PA could prove itself sound, that might well be enough for many purposes. The problem is if you need a system that proves another system sound, because in that case the system strength has to be stepped down each time. That is the Löb obstacle.
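For reference (stated here as background, not part of the original comment), the "Löb obstacle" traces back to Löb's theorem: for any consistent, sufficiently strong theory T (e.g. PA) with provability predicate \Box_T,

```latex
T \vdash \Box_T\,\ulcorner P \urcorner \rightarrow P
\quad\Longrightarrow\quad
T \vdash P .
```

So T can prove the soundness schema \Box_T P \rightarrow P only for statements P it already proves outright. An agent reasoning in T therefore cannot fully trust a successor that also reasons in T; each successor must be verified in a strictly weaker system, which is the "stepping down" the comment describes.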

Comment author: smoofra 24 March 2013 03:23:03AM 2 points [-]

So you are assuming that it will want to prove the soundness of any successors? Even though it can't even prove the soundness of itself? But it can believe in its own soundness in a Bayesian sense without being able to prove it. There is not (as far as I know) any Gödelian obstacle to that. I guess that was your point in the first place.

Comment author: lukeprog 23 March 2013 09:28:14PM 0 points [-]

Isn't the real danger that you will formalize the goal function wrong, not that the deductions will be invalid?

Both are huge difficulties, but most of the work in FAI is probably in the AI part, not the F part.

Comment author: smoofra 23 March 2013 09:36:02PM 0 points [-]

OK, forget about F for a second. Isn't the huge difficulty finding the right deductions to make, not formalizing them and verifying them?

Comment author: smoofra 23 March 2013 08:41:30PM 4 points [-]

This is all nifty and interesting, as mathematics, but I feel like you are probably barking up the wrong tree when it comes to applying this stuff to AI. I say this for a couple of reasons:

First, ZFC itself is already comically overpowered. Have you read about reverse mathematics? Stephen Simpson edited a good book on the topic. Anyway, my point is that there's a whole spectrum of systems much weaker than ZFC that are sufficient for a large fraction of theorems, and probably all the reasoning you would ever need to do physics or make real-world, actionable decisions. The idea that physics could depend on reasoning of a higher consistency strength than ZFC just feels really wrong to me, like the idea that P could really equal NP. Of course my gut feeling isn't evidence, but I'm interested in the question of why we disagree. Why do you think these considerations are likely to be important?
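As background on the "spectrum of weaker systems" the comment gestures at (this summary is mine, not the commenter's): reverse mathematics classifies ordinary theorems by which subsystem of second-order arithmetic proves them, and almost everything lands in one of the "Big Five", ordered by increasing strength:

```latex
\mathrm{RCA}_0
\;\subsetneq\;
\mathrm{WKL}_0
\;\subsetneq\;
\mathrm{ACA}_0
\;\subsetneq\;
\mathrm{ATR}_0
\;\subsetneq\;
\Pi^1_1\text{-}\mathrm{CA}_0 .
```

All five sit far below ZFC in consistency strength, which is the sense in which ZFC is "comically overpowered" for most working mathematics.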

Second, isn't the whole topic of formal reasoning a bikeshed? Isn't the real danger that you will formalize the goal function wrong, not that the deductions will be invalid?

Comment author: smoofra 16 October 2012 09:43:11PM 1 point [-]

I don't think you've chosen your examples particularly well.

Abortion certainly can be a 'central' case of murder. Imagine aborting a fetus 10 minutes before it would have been born. It can also be totally 'noncentral': the morning-after pill. Abortion is a grey area, more or less central as murder depending on the progress of the fetus's neural development.

Affirmative action really IS a central case of racism. It's bad for the same reason segregation was bad: it's not fair to judge people based on their race. The only difference is that it's not nearly AS bad. Segregation was brutal and oppressive, while affirmative action doesn't really affect most people enough for them to notice.

Comment author: smoofra 18 January 2010 10:40:45PM 5 points [-]

"Trust your intuitions, but don't waste too much time arguing for them"

This is an excellent point. Intuition plays an absolutely crucial role in human thought, but there's no point in debating an opinion that (by definition, even) you're incapable of verbalizing your reasons for. Let me suggest another maxim:

Intuitions tell you where to look, not what you'll find.

Comment author: MatthewB 24 December 2009 08:00:07PM 2 points [-]

From reading histories of him. He took classes on how to yell at crowds and studied it in great detail.

Comment author: smoofra 25 December 2009 02:19:08AM 2 points [-]

Wait, so are you agreeing with me or disagreeing?

Comment author: smoofra 24 December 2009 07:46:06PM *  7 points [-]

What makes you think Hitler didn't deliberately think about how to yell at crowds?

Comment author: smoofra 22 December 2009 07:03:10PM 4 points [-]

You're confusing "reason" with inappropriate confidence in models and formalism.

Comment author: smoofra 17 December 2009 03:51:06PM 1 point [-]

I vote for the meta-thread convention, or for any other mechanism that keeps meta off the front page.
