Post ridiculous munchkin ideas!

55 D_Malik 15 May 2013 10:27PM

Thus spake Eliezer:

A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells.  Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else.  Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death.  Or figures out how to build the real-life version of the cycle of infinite wish spells.

It seems that many here might have outlandish ideas for ways of improving our lives. For instance, a recent post advocated installing really bright lights as a way to boost alertness and productivity. We should not adopt such hacks into our dogma until we're pretty sure they work; however, one way of knowing whether a crazy idea works is to try implementing it, and you may have more ideas than you're planning to implement.

So: please post all such lifehack ideas! Even if you haven't tried them, even if they seem unlikely to work. Post them separately, unless some other way would be more appropriate. If you've tried some idea and it hasn't worked, it would be useful to post that too.

Imitation is the Sincerest Form of Argument

74 palladias 18 February 2013 05:05PM

I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the précis for folks who would prefer not to sit through the video (which is available here).

Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist. The test has turned out to reveal less about machine intelligence than about human intelligence. (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before.) Since normal Turing Tests made us think more about our model of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.

After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other. Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici - trying to guess the answers given by the other side. Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but that Krugman would be easily spotted as a fake!Caplan.

Krugman didn't take him up on the offer, but I've run a couple iterations of the test for my religion/philosophy blog. The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought. (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC.)

The trouble was, the Christians did a lot better, since it turned out I had written boring, easy-to-guess questions for the true and faux atheists. The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called each other out as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).

The exercise made people get curious about what their opponents actually thought and why. It helped people spot incorrect stereotypes of the opposing side, and faultlines they'd been ignoring within their own. Personally (and according to other participants), it helped me argue less antagonistically. Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.

Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate. But even if my opponent is just as wrong as ze seemed, there's still a benefit to me. Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality. And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing. I may be correct in this particular argument, but the odds are good that I share the rationalist weak point that is keeping them from noticing the error. I'd like to be able to see it more clearly so I can try and spot it in my own thought. (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?")

When I get angry, I'm satisfied when I beat my interlocutor.  When I get curious, I'm only satisfied when I learn something new.

Cheat codes

36 sketerpot 01 December 2010 09:19PM

Most things worth doing take serious, sustained effort. If you want to become an expert violinist, you're going to have to spend a lot of time practicing. If you want to write a good book, there really is no quick-and-dirty way to do it. But sustained effort is hard, and can be difficult to get rolling. Maybe there are some easier gains to be had with simple, local optimizations. Contrary to oft-repeated cached wisdom, not everything worth doing is hard. Some little things you can do are like cheat codes for the real world.

Take habits, for example: your habits are not fixed. My diet got dramatically better once I figured out how to change my own habits, and actually applied that knowledge. The general trick was to work out a new, stable state to move my habits toward, then use willpower for a week or two until I settled into that state. In the case of diet, a stable state was one where junk food was replaced with fruit or tea, or where a slightly more substantial meal beforehand meant I didn't feel hungry for snacks. That's an equilibrium I can live with, long-term, without needing to worry about "falling off the wagon." Once I figured out the pattern -- work out a stable state, and force myself into it over 1-2 weeks -- I was able to improve several habits, permanently. It was amazing. Why didn't anybody tell me about this?

In education, there are similar easy wins. If you're trying to commit a lot of things to memory, there's solid evidence that spaced repetition works. If you're trying to learn from a difficult textbook, reading in multiple overlapping passes is often more time-efficient than reading through linearly. And I've personally witnessed several people academically un-cripple themselves by learning to reflexively look everything up on Wikipedia. None of this stuff is particularly hard. The problem is just that a lot of people don't know about it.
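For the spaced-repetition point, here's what the core trick looks like in code. This is a toy sketch of my own (the post doesn't specify an algorithm): intervals roughly double after each successful recall and reset after a lapse, which is the idea underlying tools like Anki.

    import datetime

    def next_interval(days, recalled):
        """Next review interval in days: roughly double on success, reset on failure."""
        if not recalled:
            return 1
        return days * 2

    interval = 1
    review_date = datetime.date.today()
    for recalled in [True, True, False, True]:
        interval = next_interval(interval, recalled)
        review_date += datetime.timedelta(days=interval)
        print(f"next review in {interval} days, on {review_date}")

Real systems track a per-item ease factor on top of this, but the expanding schedule alone is what makes a few minutes of daily review beat cramming.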

What other easy things have a high marginal return-on-effort? Feel free to include speculative ones, if they're testable.

Fallacies as weak Bayesian evidence

59 Kaj_Sotala 18 March 2012 03:53AM

Abstract: Exactly what is fallacious about a claim like "ghosts exist because no one has proved that they do not"? And why does a claim with the same logical structure, such as "this drug is safe because we have no evidence that it is not", seem more plausible? Looking at various fallacies – the argument from ignorance, circular arguments, and the slippery slope argument – we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide greater Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.

As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as "when we have taken over the world, who's the lucky bastard who gets to rule over Antarctica" will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he's too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as backup to warn him.

Unfortunately, it's not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was "ghosts exist because no one has proved that they do not", which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim "this drug is safe, because we have no evidence that it is not". Hmm. That claim felt somewhat weak, but it didn't feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?

The argument from ignorance

Related posts: Absence of Evidence is Evidence of Absence, But Somebody Would Have Noticed!

One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.

With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.

1. Prior beliefs influence whether or not the argument is accepted.

A) I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication.

B) I've often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn't cause any side effects.

Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.

C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.

D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.

C seems more compelling than D.

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.

E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument)

F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance)

Argument E seems more convincing than argument F, but F is somewhat convincing as well.

"Aha!" Dr. Zany exclaims. "These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!"

"Bayesian reasoning", AS-01 politely corrects.

"Yes, Bayesian! But, hmm. Exactly how are they Bayesian?"

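One way to cash that out, sketched in code. The toy model below is my illustration, not anything from the original post: assume a toxic drug shows a toxic effect in any single test with probability 0.3, and a safe drug never does. Bayes' theorem then reproduces all three intuitions.

    def posterior_toxic(prior_toxic, n_negative_tests, p_detect=0.3):
        """P(toxic | all n tests came back negative), by Bayes' theorem."""
        p_neg_given_toxic = (1 - p_detect) ** n_negative_tests
        joint_toxic = p_neg_given_toxic * prior_toxic
        joint_safe = 1.0 * (1 - prior_toxic)  # a safe drug always tests negative
        return joint_toxic / (joint_toxic + joint_safe)

    # Intuition 1: priors matter. Five negative observations barely move a
    # strong prior (alcohol) but demolish a weak one (a random flu medicine).
    print(posterior_toxic(0.99, 5))   # ~0.94
    print(posterior_toxic(0.10, 5))   # ~0.02

    # Intuition 2: more negative tests give stronger, but never conclusive, evidence.
    print(posterior_toxic(0.5, 1))    # ~0.41
    print(posterior_toxic(0.5, 50))   # ~2e-8

    # Intuition 3: in this model a single positive test is decisive, since a safe
    # drug never shows the effect -- so positive arguments outweigh negative ones.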

The Singularity Institute needs remote researchers (writing skill not required)

65 lukeprog 05 February 2012 10:02PM

The Singularity Institute needs researchers capable of doing literature searches, critically analyzing studies, and summarizing their findings. The fields involved are mostly psychology (biases and debiasing, effective learning, goal-directed behavior / self help), computer science (AI and AGI), technological forecasting, and existential risks.

Gwern's work (e.g. on sunk costs and spaced repetition) is near the apex of what we need, but you don't need to be as skilled as Gwern or write as much as he does to do most of the work that we need.

Pay is hourly and starts at $14/hr but that will rise if the product is good. You must be available to work at least 20 hrs/week to be considered.

Perks:

  • Work from home, with flexible hours.
  • Age and credentials are irrelevant; only the product matters.
  • Get paid to research things you're probably interested in anyway.
  • Contribute to human knowledge in immediately actionable ways. We need this research because we're about to act on it. Your work will not fall into the journal abyss that most academic research falls into.

If you're interested, apply here.

Why post this job ad on LessWrong? We need people with some measure of genuine curiosity.

Also see Scholarship: How to Do It Efficiently.


A call for solutions and a tease of mine

-3 HonoreDB 19 January 2012 02:40PM

So here's the problem: Given a well-defined group charter, how should groups make decisions? You have an issue, you've talked it over, and now it's time for the group to take action. Different members have different opinions, because they're not perfect reasoners and because their interests don't reliably align with those of the group. What do you do? Historical solutions include direct democracy, representative democracy, various hierarchies, dictatorships, oligarchies, consensus... But what's the shoes-with-toes solution? How do they do it in Weirdtopia? What is the universally correct method that could be implemented by organizations, corporations, and governments alike?


Dubstep and algorithmic information theory

-4 Will_Newsome 10 September 2011 02:32PM

I think this visualization by YouTube entity dubzophrenia is a decent introduction to some intuitions driving algorithmic information theory, at least for people who like dubstep:

http://www.youtube.com/watch?v=VwkcY8spx_g&feature=channel_video_title

(Please watch in 720p; the audio quality is higher at that setting, too. Fullscreening it is also kind of necessary.)

At least I notice that this is how I tend to think about things like statistical mechanics, quantum information theory, that kinda thing, insofar as I think about them rather than just read lots of abstracts and pretend like that counts. I suspect that liking dubstep is correlated with schizotypality, which in turn is correlated with having your procedural learning set up such that watching videos like this actually improves your math intuitions (depending on the type of math, I think; stat mech more than linear algebra). But that's a lot of speculation. Upvote or downvote as you intuit.

As a side note, I now have a tumblr blog at least half about rationality and electronic music at willnewsome.com. I'd like it if more people promoted themselves and their blogs via one-off LW discussion posts every few months or so even if they're not directly rationality-related. I fear I'm missing out on some interesting stuff due to silly social dynamics resulting from potentially-maladaptive psychology.

Confidence levels inside and outside an argument

129 Yvain 16 December 2010 03:06AM

Related to: Infinite Certainty

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a probability greater than 999,999,999 in a billion that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability that the event fails to happen is no longer in "But that still leaves a one in a billion chance, right?". The majority of it is in "That argument is flawed". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.

More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999 in a billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount that reflects how far I trust the model.
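As a back-of-the-envelope version of that adjustment (the flaw rate and fallback probability here are numbers I made up for illustration, not anything from the post):

    internal = 999_999_999 / 1_000_000_000   # the model's internal confidence
    p_flawed = 1e-4                          # assumed background rate of broken models
    p_win_if_flawed = 0.5                    # ignorance prior if the model is junk

    external = (1 - p_flawed) * internal + p_flawed * p_win_if_flawed
    print(external)   # ~0.99995

Even a one-in-ten-thousand chance that the model is broken leaves roughly 50,000 times as much room for doubt as the internal one-in-a-billion figure.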


A sense of logic

13 NancyLebovitz 10 December 2010 06:19PM

What's the worst argument you can think of?

One of my favorites is from a Theodore Sturgeon science fiction story in which it's claimed that faster-than-light communication must be possible because even though stars are light-years apart, a person can look from one to another in a moment.

I don't know about you, but bad logic makes my stomach hurt, especially on first exposure.

This seems rather odd-- what sort of physical connection might that be?

Also, I'm not sure how common the experience is, though a philosophy professor did confirm it for himself and (by observation) his classes. He mentioned one of the Socratic dialogues (sorry, I can't remember which one) which is a compendium of bad arguments and which seemed to have that effect on his classes.

So, how did you feel when you read that bit of sf hand-waving? If your stomach hurt, what sort of stomach pain was it? Like nausea? Like being hit? Something else? If you had some other sensory reaction, can you describe it?

For me, the sensation is some sort of internal twinge which isn't like nausea.

Anyway, both for examination and for the fun of it, please supply more bad arguments.

I think there are sensory correlates for what is perceived to be good logic (unfortunately, they don't tell you whether an argument is really sound)-- kinesthesia which has to do with solidity, certainty, and at least in my case, a feeling that all the corners are pinned down.

Addendum: It looks as though I was generalizing from one example. If you have a fast reaction to bad arguments and it isn't kinesthetic, what is it?

Help request: What is the Kolmogorov complexity of computable approximations to AIXI?

4 AnnaSalamon 05 December 2010 10:23AM

Does anyone happen to know the Kolmogorov complexity (relative to some suitable, standard UTM -- or, failing that, in lines of Python or something) of computable approximations of AIXI?

I'm writing a paper on how simple or complicated intelligence is, and what implications that has for AI forecasting.  In that context: adopt Shane Legg's measure of intelligence (i.e., let "intelligence" measure a system's average goal-achievement across the different "universe" programs that might be causing it to win or not win reward at each time step, weighted according to the universe program's simplicity).

Let k(x, y) denote the length of the shortest program that attains an intelligence of at least x, when allowed an amount y of computation (i.e., the number of steps it gets to run on our canonical UTM). Then, granting certain caveats, AIXI and approximations thereto tell us that the limit of k(x, y) as y approaches infinity is pretty small for any computably attainable value of x. (Right?)
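In symbols, as I read the setup (my restatement, not notation from the question itself):

    k(x, y) = min { |p| : program p attains intelligence ≥ x within y steps on the canonical UTM }

    lim_{y → ∞} k(x, y)  is "pretty small" for any computably attainable x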

What I'd like is to stick an actual number, or at least an upper bound, on "pretty small".

If someone could help me out, I'd be much obliged.
