Comment author: selylindi 31 January 2012 08:16:07PM *  0 points [-]

That's a little too vaguely stated for me to interpret. Can you give an illustration? For comparison, here's one of how I assumed it would work:

A paperclip-making AI is given a piece of black-box machinery and given specifications for two possible control schemes for it. It calculates that if scheme A is true, it can make 700 paperclips per second, and if scheme B is true, only 300 per second. As a Bayesian AI using Pascal's Goldpan formalized as a utilitarian prior, it assigns a prior probability of 0.7 for A and 0.3 for B. Then it either acts based on a weighted sum of models (0.7A+0.3B) or runs some experiments until it reaches a satisfactory posterior probability.
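The weighted-sum and update steps described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual proposal: the priors (0.7, 0.3) and payoffs (700, 300 clips/sec) come from the example, while the likelihoods used in the update step are hypothetical numbers invented here for illustration.

```python
# Two hypotheses about the black-box control scheme, with the
# priors and payoffs from the example above.
priors = {"A": 0.7, "B": 0.3}
clips_per_sec = {"A": 700, "B": 300}

# Option 1: act on the weighted sum of models.
expected_rate = sum(priors[h] * clips_per_sec[h] for h in priors)
# 0.7 * 700 + 0.3 * 300 = 580 clips/sec

# Option 2: run an experiment and update. Suppose the observed
# result is twice as likely under A as under B (hypothetical
# likelihoods, chosen only to show the Bayes step).
likelihood = {"A": 0.8, "B": 0.4}
evidence = sum(priors[h] * likelihood[h] for h in priors)
posteriors = {h: priors[h] * likelihood[h] / evidence for h in priors}
# posteriors["A"] ≈ 0.82, posteriors["B"] ≈ 0.18
```

Either way, the scheme is mechanical once the priors are fixed; the dispute in this thread is over where those priors come from in the first place.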

That doesn't seem intractably circular.

Comment author: alexflint 01 February 2012 09:27:36PM 0 points [-]

Occam's razor is the basis for believing that those experiments tell us anything whatsoever about the future. Without it, there is no way to assign the probabilities you mention.

Comment author: alexflint 29 January 2012 12:10:27PM 1 point [-]

These could do with forward/backward links. The Article Navigator doesn't seem to be able to get me to number 4 in this series, and the page for 'sleeping_beauty' tag appears empty.

In response to Occam alternatives
Comment author: alexflint 29 January 2012 10:15:54AM *  1 point [-]

Occam's razor is famously difficult to justify except by circular appeal to itself. It's interesting to consider alternatives, but you should be aware of what you give up when you give up Occam's razor. You can no longer make sensible inferences about the future based on your past experiences. For example, you can no longer have any confidence that gravity will still point downwards tomorrow, or that the laws of physics won't spontaneously change a minute from now. The experimental method itself no longer makes sense if you have no reason to think that the future will resemble the past.

You should read:

Comment author: selylindi 25 January 2012 04:11:10PM 0 points [-]

That is, suppose you are considering whether or not to believe that you can fly by leaping off a cliff and flapping your arms. What is the expected utility of holding this belief?

I completely grant that this scheme can have disastrous consequences for a utility function that discounts consistency with past evidence, has short time horizons, considers only direct consequences, fails to consider alternatives, or is in any other way poorly chosen. Part of the point in naming it Pascal's Goldpan was as a reminder of how naive utility functions using it will be excessively susceptible to wagers, muggings, and so on. Although I expect that highly weighting consistency with past evidence, long time horizons, considering direct and indirect consequences, considering all alternative hypotheses, and so on would prevent the obvious failure modes, it may nevertheless be that there exists no satisfactory utility function that would be safe using the Goldpan. That would certainly be compelling reason to abandon it.

Comment author: alexflint 29 January 2012 10:10:39AM *  0 points [-]

The point is that to evaluate the utility of holding a belief, you need to have already decided upon a scheme to set your beliefs.

Comment author: Nornagest 24 January 2012 07:35:52PM *  9 points [-]

While I think martial arts are pretty useful by hobby standards (although their usefulness is broad enough that they might not be optimal for specialists in several fields), several historical and cultural factors in their practice have combined to create an unusually fertile environment for certain kinds of irrationality.

First, they're hard to verify: what works in point sparring might not work in full-contact sparring, and neither one builds quite the same skillset that's useful for, say, security work, or for street-level self-defense, or for warfare. It's difficult to model most of the final applications, both because they entail an unacceptably high risk of serious injury in training and because they involve psychological factors that don't generally kick in on the mat.

Second, they're all facets of a field that's too broad to master in its entirety in a human lifetime. A serious amateur student can, over several years, develop a good working knowledge of grappling, or of aikido-style body dynamics, or empty-hand striking, or one or two weapons. The same student cannot build all of the above up to an acceptable level of competence: even becoming sort of okay at the entire spectrum of combat is a full-time job. (Many martial arts claim to cover all aspects of fighting, but they're wrong.)

Despite this, though, almost every martial art claims to do the best job of teaching practical fighting for some value of "practical", and every martial art takes a lot of pride in its techniques. As a consequence, there's a lot of posturing going on between nearly incommensurate systems. There have been various attempts at comparing them anyway (MMA is the most popular modern framework): they're better than nothing, but in practice usually come out too context-dependent to be very useful from a research perspective.

On top of that, there's a tradition of secrecy, especially in older systems (koryu, in Japanese martial arts parlance). Until well after WWII, it was uncommon for any system to open its doors to ethnic outsiders, often even to familial outsiders. Until the Eighties it was uncommon for systems to welcome cross-training in their students. Many still require instructors to have trained in only the system they teach. This is intended to prevent memetic cross-contamination but in practice serves to foster the wide range of biases that come with isolation and hierarchy: you can make almost anything work on your own students, as Eliezer's memorable example about ki powers demonstrates. (If you're feeling uncharitable, you could probably make an analogy here to the common cultic practice of isolation.)

Finally, a lot of selection pressure's eased off the martial arts in the modern era. During the Sengoku era, for example, Japanese martial arts were clannish and highly secretive, but it didn't matter too much: two hundred years of warfare made it very clear which taught viable techniques, if only by extinguishing poorer schools. Most other martial cultures were in a position to gain similar feedback, if less intensely. In the 20th century, though, martial arts grew more or less disconnected from martial applications: most militaries still teach simplified systems, but martial arts skill rarely decides engagements, and when it does it's in a narrower range of situations. Same goes for all the civilian jobs where martial arts are useful: there's feedback, but it's narrow, uncommon, and hyperspecialized.

I think there are ways around all of these problems, but no arts that I know of have done a very good job of engaging them systematically (though at least the more modern intersectional martial arts are trying -- JKD comes to mind). This actually wouldn't be a bad exercise in large-scale instrumental optimization, except that it requires a pool of talent that at present doesn't exist in any organized way.

(Disclaimer: as is probably obvious by now, I am a martial artist.)

Comment author: alexflint 28 January 2012 09:08:03AM 1 point [-]

Thanks for a thoughtful reply!

You could say much the same about painting/dancing/cooking/writing: There are many different sub-arts; it's hard to master all of them; practitioners can become unduly wedded to a single style; there are examples of styles that have "gone bonkers"; there are many factors in place that hurt the rationality of practitioners.

These are all valid concerns, but I don't think they're particularly problematic within martial arts in comparison to other hobbies.

Comment author: Nominull 08 April 2009 09:03:40PM 23 points [-]

Personally I suspect that the bathwater only really gets dirty when you are teaching something that is essentially useless in modern society, like martial arts or literary criticism. Most people who study, say, engineering don't do so in the hopes of becoming teachers of engineering.

Now you might say that this is because teachers of engineering are expected to also do research, but firstly that doesn't explain the disparity between fields, and secondly, I don't think the example of tertiary education is one to aspire to in this way. I seem to recall you are an autodidact, so you may not have the same trained gut reaction I do, but I have seen too many good researchers without the skill of teaching teach horribly, and I remember too well one heartbreaking example of an excellent teacher denied tenure because the administrators felt his research was not up to snuff, to want to optimize rationality teachers on any basis other than their ability to teach rationality.

Comment author: alexflint 24 January 2012 06:44:46PM 0 points [-]

Martial arts seem to get an unreasonably bad rep on LW. They're at least as useful as painting or writing fiction, and I consider those to be fine personal development endeavours.

Comment author: alexflint 19 January 2012 02:48:59PM 11 points [-]

Would it be helpful for us to try out these exercises with a small group of people and report back?

Comment author: kilobug 17 January 2012 07:59:56PM 1 point [-]

There are many ways for everyone to make contributions, even small ones. The easiest is giving money (to someone whom you believe is trying to address the "really hard problems"). But there are many others. I'll take two examples of things I do (or plan to do in the near future): I'm helping with the French translation of HP:MoR, and I'll help SIAI (or at least try to; nothing serious done yet) with migrating their publications to their new LaTeX template (see http://lesswrong.com/r/discussion/lw/9d3/new_si_publications_design/ ). Both are tiny contributions, but they can in the end help SIAI tackle the really hard problems in various ways. A lot of people doing those small things can allow the great things to happen much faster.

Of course, you can replace SIAI with anyone you think could solve the hard problems - other kinds of research, a charity, a political party if you believe a given one does more good than harm, ...

The hardest part is probably identifying who is most likely to actually help solve the really hard problems. I tend to "invest" my energy and money in different kinds of entities, hoping at least one of them will do something good enough in the long run.

Comment author: alexflint 19 January 2012 12:16:18PM 0 points [-]

I agree. But compared to where we are right now, I think more people should actually go work directly on the core FAI problem. If the smartest half of each LW meetup earnestly and persistently worked on the most promising open problem they could identify, I'd give 50% chance that at least one would make valuable progress somewhere.

Comment author: Vladimir_M 02 December 2011 01:58:59AM *  26 points [-]

"Genius is 1 percent inspiration, 99 percent perspiration," said Thomas Edison, and he should've known: It took him hundreds of tweaks to get his incandescent light bulb to work well, and he was already building on the work of 22 earlier inventors of incandescent lights.

On the other hand, Nikola Tesla had this to say about Edison's methodology:

If Edison had a needle to find in a haystack, he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. [...] His method was inefficient in the extreme, for an immense ground had to be covered to get anything at all unless blind chance intervened... [...] I was almost a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety per cent of his labor.

Even allowing for a significant bias against Edison on Tesla's part, it does seem that, even among high achievers, Edison relied on perspiration to an extraordinary degree. Of course, even that diligence wouldn't have been of much use if it hadn't come together with a very considerable talent.

More generally, there are two problems with the general message of this article:

  1. It is delusional for most people to believe that they can contribute usefully to really hard problems. (Except in trivial ways, like helping those who are capable of it with mundane tasks in order to free up more of their time and energy.) There is such a thing as innate talent, and doing useful work on some things requires an extraordinary degree of it.

  2. There is also a nasty failure mode for organized scientific effort when manpower and money are thrown at problems that seem impossibly hard, hoping that "hacking away at the edges" will eventually lead to major breakthroughs. Instead of progress, or even an honest pessimistic assessment of the situation, this may easily create perverse incentives for cargo-cult work that will turn the entire field into a vast heap of nonsense.

Comment author: alexflint 17 January 2012 06:58:22PM 8 points [-]

It is delusional for most people to believe that they can contribute usefully to really hard problems.

This seems more and more like the most damaging meme ever created on LessWrong. It persistently leads to people who could have made useful contributions (to AI safety) making no such contribution. Would it be a better world in which lots more people tried to contribute usefully to FAI and a small percentage succeeded? Yes, it would, even taking into account whatever cost the unsuccessful people pay.

Comment author: alexflint 04 January 2012 07:52:41AM 1 point [-]

Q: What is Baconmas?

Baconmas is a relatively new holiday, celebrated on January 22nd (the birthday of Sir Francis Bacon) to celebrate the sciences, with a side order of bacon. You should try it!

That is excellent! Simple, light-hearted, and to the point.
