Learning math (repost from reddit)

0 xamdam 20 December 2010 04:26PM

This is a good starting point for generally useful math. Probability is a conspicuous omission.

http://www.reddit.com/r/math/comments/eohrr/to_everyone_who_posts_about_learning_more_math/

 

Wonder who is going to be there...

4 xamdam 07 December 2010 03:14AM

Good news about the Big Bang

0 xamdam 25 November 2010 12:44PM

(Disclaimer: very poor knowledge of physics here, just interpreting the article)

http://www.physorg.com/print209708826.html

- looks like there have been many Big Bangs, as non-creationists would expect

The really good news is

> In the past, Penrose has investigated cyclic cosmology models because he has noticed another shortcoming of the much more widely accepted inflationary theory: it cannot explain why there was such low entropy at the beginning of the universe. The low entropy state (or high degree of order) was essential for making complex matter possible.

Which I interpret to mean that information passes through the Big Crunch/Big Bang cycle. No heat death, information passes through - good news for transhumanists?

 

Toy model of human values

3 xamdam 02 November 2010 06:28PM

This is just a summary, via analogy, of where I think human values come from, as far as I understand it. The expanded version would be Eli's http://lesswrong.com/lw/l3/thou_art_godshatter/.

The basic analogy is to chess-playing programs (at least the basic ones from 40 years ago; the art has progressed since then, but not by much). The way they work is basically by examining the branching tree of possible moves. Since chess is "too big" to solve completely (i.e., to find the branch that always leads to winning) on present hardware, what these programs do is search to a certain depth and then use heuristics to decide whether the end state is good: how many pieces are on their side vs. the enemy's side, weighted by their "power" (a queen is worth more than a pawn) and position (center squares are worth more).

The analogy mapping is as follows: the goal of the game is winning; the goal of evolution is survival of a gene fragment (such as human DNA). Explicit encoding of the goal is not computationally feasible or worthwhile (in terms of the goal itself), so values of certain non-terminal states (in terms of the goal) are explicitly given to the program or to the human. The human/program knows no better than these non-terminal values - they are our values - we are Godshatter.
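The depth-limited search plus heuristic evaluation described above can be sketched in a few lines. This is a hypothetical illustration, not code from any real chess engine: instead of chess it uses a tiny Nim-like game (players alternately take 1-3 stones; whoever takes the last stone wins), and the `heuristic` function plays the role of chess-style piece counting, guessing the value of a non-terminal state when the search runs out of depth.

```python
def moves(stones):
    """Legal moves: take 1, 2 or 3 stones (but not more than remain)."""
    return [m for m in (1, 2, 3) if m <= stones]

def heuristic(stones):
    """Stand-in for a chess evaluation function: a guessed value of a
    non-terminal state, used when we cannot afford to search deeper.
    Positions with stones % 4 == 0 are losing for the player to move."""
    return -1.0 if stones % 4 == 0 else 1.0

def negamax(stones, depth):
    """Value of the position for the player about to move."""
    if stones == 0:
        return -1.0  # the previous player took the last stone: we lost
    if depth == 0:
        return heuristic(stones)  # cutoff: fall back on the heuristic values
    # Our value is the best of our moves, seen as the negation of the
    # opponent's value in the resulting position.
    return max(-negamax(stones - m, depth - 1) for m in moves(stones))

def best_move(stones, depth=6):
    """Pick the move leading to the position worst for the opponent."""
    return max(moves(stones), key=lambda m: -negamax(stones - m, depth - 1))
```

The point of the analogy lives in `heuristic`: within the search horizon the program reasons explicitly toward the goal, but at the horizon it can only consult values it was handed, which it "knows no better than".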

What do you think?

Before you start solving a problem

1 xamdam 12 October 2010 03:46PM

(While this is a general discussion, I have "doing well on interview questions" as an instrumental goal; the discussion below is somewhat skewed due to that).

I noticed that one of the common failure modes in solving problems (especially under time constraints) is trying to solve the problem prematurely. There are multiple causes for this; awareness of some of them might reduce the chance of falling into the failure mode. For others (at least one), I do not have a solution, and a procedural solution might not exist other than the magic of experience.

Here is my list of the first kind (awareness-helps group):

  1. Jumping into the problem before completely understanding it: this could be due to perceived time pressure (e.g. test, interview). This *could* be rational, depending on the "test score" function, but could be a serious failure mode if done due to stress.
  2. Using a cached solution instead of trying to solve the problem. The statement of the problem can trigger "cached thoughts" despite being (possibly intentionally, in an interview) subtly more difficult than a well-known problem. In one instance I actually misread the statement of the problem because it sounded like one I knew before.
  3. Another problem with a cached solution, even if it is the correct one for the problem at hand, is that you might believe that you know it without actually doing the "retrieve from disk"; the consequences might be looking bad when asked follow-up questions in an interview, or an inability to build on the problem if it's part of a greater structure.
  4. Besides cognitive mechanics, there might be a desire to blurt out a cached solution because it makes you look knowledgeable. A status claim might be instrumentally useful ("this looks like a min-spanning tree algorithm!"), as long as you properly calibrate your level of confidence and don't fall for the trap.

This brings me to the last failure mode, which I do not have a solution for (which is why I am posting ;). If I avoid the traps above, I should have a pretty good understanding of the problem. This is a crucial point, since, by definition, I do not know what to do next. This uncertainty is scary and might push me into trying to solve the problem immediately, very similar to 1 above. While you might be able to avoid acting on this by being emotionally reflective (which has the instrumental side effect of appearing more confident), I still do not know what exactly should be done next. Giving some time to unconscious processing seems necessary even on smallish (interview-question-sized) problems, but how much time? And what should I be doing during this time? Meditation? Drawing the problem? Trying to solve sub-problems? Writing down lists of whatever comes to mind? I can use the time constraint, minus the expected time to communicate the solution (in the proper format, e.g. C++ code), as an upper bound; but there is a moment when I have to sigh (optional) and take a shot at the solution. I have nothing better than gut feel to go by here.

(Even after the plunge, there is a chance of getting stuck, which is where Meta-thinking skills come in)

Cognitive Bias Song

1 xamdam 06 May 2010 11:30PM

I will not summarize this. Or transcribe it. It's just funny (video link).

Who is your favorite rationalist?

3 xamdam 25 April 2010 02:56PM

Light reading about 'Rationalist Heroes'.

I am not sure how useful people find having personal heroes. I would argue that they are definitely useful for children. Perhaps I haven't really grown up enough yet (growing up without a father possibly contributed), but I like to have some people in my head I label as "I wonder what would X think about this". Many times they've set me straight through their ideas. Other times I've had to reprimand them, though unfortunately they never get the memo.

One living example is Charlie Munger.

He was an early practical adopter of the cognitive biases framework, and moreover he clearly put it into context of "something to protect":

"not understanding human misjudgment was reducing my ability to help everything I loved"

(The quote is from his talk on "Misjudgment" which is worth reading on its own http://vinvesting.com/docs/munger/human_misjudgement.html)

One interesting point is that Charlie is seemingly a Christian. I have a deep suspicion that he believes religion is valuable, for the time being, as a payload-delivering mechanism.

“Economic systems work better when there’s an extreme reliability ethos. And the traditional way to get a reliability ethos, at least in past generations in America, was through religion. The religions instilled guilt. … And this guilt, derived from religion, has been a huge driver of a reliability ethos, which has been very helpful to economic outcomes for man.”

Also, judge for yourself from his recommended reading list - looks like something out of an Atheist's Bookshelf.

Living Large - availability of life

3 xamdam 21 April 2010 04:14PM

"Q: Doctor, if I do not eat much, drink vodka or have women, will I live long? A: Sure, but why?" - bad joke poorly translated from Russian.

Summary: Can traditional measures of living create anchoring/availability bias?

I have seen a few studies like this one in the news:
http://www.medpagetoday.com/PrimaryCare/SleepDisorders/6834

The upshot is that sleeping less (or, less interestingly for most people, more) can increase mortality. Like 20% in the next 20 years or something.

This is obviously a question of some interest to many of us who have been sacrificing more and more sleep to do things we find fulfilling. This seems to be a recent trend, at least in part because our ancestors, despite having the ability to enjoy knowledge, were limited by the availability of high-quality inputs, especially structured knowledge (the internet is obviously a prime example of such an input).

There is nothing wrong with studies like this, but the interpretation I am afraid many people will fall into upon seeing them is wrong. Clearly, when thinking about the 20% quoted in the study, the base rate is very important, but I just want to concentrate on the psychological issue. It seems to me that people are very fixated on 'not increasing the chances of dying earlier', and perhaps fixate on a specific number of years they expect to have. This is anchoring. (I am specifically setting aside the issue of living longer for the sake of benefiting from technological progress; suffice it to say that the small chance that an extra year will make all the difference cannot be worth infinity; otherwise people should just get it over with and freeze themselves right now rather than risk being too far away to be properly frozen.)

But simple arithmetic should be used here: let's say you sleep 2 hours less than the prescribed 8, over an expected remaining lifespan of, let's say, 32 years. This (setting aside the possibly sleep-deprived quality of life) will give you the waking time of 36 years, done in 32. Unless the sleep loss subtracts more than 4 years, you end up ahead. Not seeing those 4 years and just looking at length of life is availability bias.
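The back-of-the-envelope arithmetic above can be checked in a few lines. The 8-hour norm, the 2-hour reduction, and the 32-year horizon are the post's illustrative numbers, not data from the cited study:

```python
def waking_years(lifespan_years, sleep_hours_per_day):
    """Total waking time over the lifespan, expressed in years."""
    return lifespan_years * (24 - sleep_hours_per_day) / 24

short_sleep = waking_years(32, 6)  # 24.0 waking years on 6 hours of sleep
full_sleep = waking_years(32, 8)   # ~21.3 waking years on 8 hours of sleep

# How many full-sleep (8-hour) years contain that much waking time?
equivalent_years = short_sleep / ((24 - 8) / 24)  # 36.0
```

So 32 years on 6 hours of sleep hold as much waking time as 36 years on 8, which is where the "unless the sleep loss subtracts more than 4 years, you end up ahead" figure comes from.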

As much as we hate death, we have to be brave and rational about the life we have.

 

PS. From personal observation: I appeared (to myself) significantly more prone to catching colds after a bad night of sleep. Since I started exercising regularly, I have had no major colds.

 

 

CogSci books

5 xamdam 20 April 2010 02:11PM

Cognitive psych is obviously important to people here, so I want to point out a CogSci book thread over on reddit/r/cogsci.

http://www.reddit.com/r/cogsci/comments/bmbaq/dear_rcogsci_lets_construct_a_musthave_library_of/

I would be interested in an extension of this thread here, since LW has a somewhat more computational-theory-of-mind slant.

Interesting Peter Norvig interview

6 xamdam 03 March 2010 12:59AM

(Sorry this is mostly a link instead of a post, but I think it will be interesting to the FAI folks here)

I helped arrange this interview with Peter Norvig:

http://www.reddit.com/r/blog/comments/b8aln/peter_norvig_answers_your_questions_ask_me/

I think the answer to the AGI question 4 is telling, but judge for yourself. (BTW, the 'components' Peter referred to are probabilistic relational learning and hierarchical modeling. He singled these two out in his Singularity Summit talk.)
