Please don't vote because democracy is a local optimum

-9 [deleted] 05 November 2012 08:09PM

Related to: Voting is like donating thousands of dollars to charity, Does My Vote Matter?

And voting adds legitimacy to it.

Thank you.

#annoyedbymotivatedcognition

Proofs, Implications, and Models

58 Eliezer_Yudkowsky 30 October 2012 01:02PM

Followup to: Causal Reference

From a math professor's blog:

One thing I discussed with my students here at HCSSiM yesterday is the question of what is a proof.

They’re smart kids, but completely new to proofs, and they often have questions about whether what they’ve written down constitutes a proof. Here’s what I said to them.

A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing.

This is not quite the definition I would give of what constitutes "proof" in mathematics - perhaps because I am so used to isolating arguments that are convincing, but ought not to be.

Or here again, from "An Introduction to Proof Theory" by Samuel R. Buss:

There are two distinct viewpoints of what a mathematical proof is. The first view is that proofs are social conventions by which mathematicians convince one another of the truth of theorems. That is to say, a proof is expressed in natural language plus possibly symbols and figures, and is sufficient to convince an expert of the correctness of a theorem. Examples of social proofs include the kinds of proofs that are presented in conversations or published in articles. Of course, it is impossible to precisely define what constitutes a valid proof in this social sense; and, the standards for valid proofs may vary with the audience and over time. The second view of proofs is more narrow in scope: in this view, a proof consists of a string of symbols which satisfy some precisely stated set of rules and which prove a theorem, which itself must also be expressed as a string of symbols. According to this view, mathematics can be regarded as a 'game' played with strings of symbols according to some precisely defined rules. Proofs of the latter kind are called "formal" proofs to distinguish them from "social" proofs.
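The second, "formal" notion in this passage can be made concrete. Here is a small example in Lean (a proof assistant, not mentioned in the original text): the proof is literally a string of symbols that a mechanical checker accepts or rejects according to fixed rules, with no appeal to an audience.

```lean
-- A formal proof: a term checked mechanically against fixed rules.
-- The checker verifies that the term on the right has the type on
-- the left; no human judgment about "convincingness" is involved.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.2, h.1⟩
```

Whether this string counts as a proof does not vary with the audience: either the type checker accepts it or it doesn't.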

In modern mathematics there is a much better answer that could be given to a student who asks, "What exactly is a proof?", which does not match either of the above ideas. So:

Meditation: What distinguishes a correct mathematical proof from an incorrect mathematical proof - what does it mean for a mathematical proof to be good? And why, in the real world, would anyone ever be interested in a mathematical proof of this type, or obeying whatever goodness-rule you just set down? How could you use your notion of 'proof' to improve the real-world efficacy of an Artificial Intelligence?


Question about application of Bayes

0 RolfAndreassen 31 October 2012 02:35AM

I have successfully confused myself about probability again. 

I am debugging an intermittent crash; it doesn't happen every time I run the program. After much confusion I believe I have traced the problem to a specific line (activating my debug logger, as it happens; irony...). I have tested my program with and without this line commented out. I find that, when the line is active, I get two crashes in seven runs. Without the line, I get no crashes in ten runs. Intuitively this seems like evidence in favour of the hypothesis that the line is causing the crash. But I'm confused about how to set up the equations. Do I need a probability distribution over crash frequencies? That was the solution the last time I was confused over Bayes, but I don't understand what it means to say "The probability of having the line, given crash frequency f", which it seems I need to know to calculate a new probability distribution. 

I'm going to go with my intuition and code on the assumption that the debug logger should be activated much later in the program to avoid a race condition, but I'd like to understand this math. 
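One way to set up the equations without conditioning on a single crash frequency is to compare marginal likelihoods. The sketch below is my own framing, not from the post: it puts a uniform Beta(1,1) prior on each unknown crash rate, integrates the rate out, and computes a Bayes factor between "the line matters" (two independent rates) and "the line is irrelevant" (one shared rate).

```python
from math import comb

def marginal_likelihood(k, n):
    # P(k crashes in n runs), integrating a uniform Beta(1,1) prior
    # over the unknown crash probability p:
    #   ∫ p^k (1-p)^(n-k) dp = 1 / ((n+1) * C(n, k))
    return 1.0 / ((n + 1) * comb(n, k))

# Observed data from the post: 2 crashes in 7 runs with the line,
# 0 crashes in 10 runs without it.
with_line = (2, 7)
without_line = (0, 10)

# H1: the line matters -> two independent crash rates.
m1 = marginal_likelihood(*with_line) * marginal_likelihood(*without_line)

# H0: the line is irrelevant -> one shared crash rate over all 17 runs.
m0 = marginal_likelihood(2, 17)

bayes_factor = m1 / m0
print(f"Bayes factor (line matters vs. doesn't): {bayes_factor:.2f}")
```

The factor comes out only modestly above 1, so on this model the 2-of-7 versus 0-of-10 data is evidence for the hypothesis that the line causes the crash, but weaker evidence than intuition might suggest.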

Things philosophers have debated

4 Eliezer_Yudkowsky 31 October 2012 05:09AM

Straight from Wikipedia.

I just had to stare at this for a while.  If we can have papers published about this, we really ought to be able to get papers published about Friendly AI subproblems.

My favorite part is at the very end.


Trivialism is the theory that every proposition is true. A consequence of trivialism is that all statements, including all contradictions of the form "p and not p" (that something both 'is' and 'isn't' at the same time), are true.[1]


References

  1. Graham Priest; John Woods (2007). "Paraconsistency and Dialetheism". The Many Valued and Nonmonotonic Turn in Logic. Elsevier. p. 131. ISBN 978-0-444-51623-7.


Guessing game - how low can you go?

-7 sundar 31 October 2012 08:25AM

A game similar to Guess 2/3 of the average,

Choose a number below 1000.

Unique number closest to it wins. (People with the same answer are eliminated.)

What is your pick and the reason for your choice?

Value Loading

3 ryjm 23 October 2012 04:47AM

This article was originally on the FHI wiki and is being reposted to LW Discussion with permission. All content in this article is credited to Daniel Dewey.

In value loading, the agent will pick the action:

argmax_{a ∈ A} Σ_{w ∈ W} p(w|e,a) Σ_{u ∈ U} p(C(u)|w) u(w)

Here A is the set of actions the agent can take, e is the evidence the agent has already seen, W is the set of possible worlds, and U is the set of utility functions the agent is considering.

The parameter C(u) is some measure of the 'correctness' of the utility u, so the term p(C(u)|w) is the probability of u being correct, given that the agent is in world w. A simple example is an AI that completely trusts the programmers: if u is some utility function that claims that giving cake is better than giving death, and w1 is a world where the programmers have said "cake is better than death" while w2 is a world where they have said the opposite, then p(C(u)|w1) = 1 and p(C(u) | w2) = 0.

There are several challenging things in this formula:

W : How to define/represent the class of all worlds under consideration

U : How to represent the class of all utility functions over such worlds

C : What do we state about the utility function: that it is true? believed by humans?

p(C(u)|w) : How to define this probability

Σ_{u ∈ U} p(C(u)|w) u(w) : How to sum up utility functions (a moral uncertainty problem)

In contrast:

p(w|e,a)

is mostly the classic AI problem. It is hard to predict what the world is like from evidence, but this is a well known and studied problem and not unique to the present research. There is a trick to it here in that the nature of w includes the future actions of the agent, which will depend upon how good future states look to it, but this recursive definition eventually bottoms out like a game of chess (where what happens when I make a move depends on what moves I make after that). It may cause an additional exponential explosion in calculating out the formula, though, so the agent may need to make probabilistic guesses as to its own future behaviour to actually calculate an action.

This value loading equation is not subject to the classical Cake or Death problem, but is vulnerable to the more advanced version of the problem, if the agent is able to change the expected future value of p(C(u)) through its actions.
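The cake-or-death example above can be sketched in code. This is a toy illustration only: the worlds, candidate utility functions, and probabilities are all invented, and the utilities are simplified to score actions directly rather than full worlds.

```python
# Toy value-loading agent for the cake-or-death example.

actions = ["give_cake", "give_death"]
worlds = ["w1", "w2"]  # w1: programmers said "cake > death"; w2: the opposite

def p_world(w, evidence, action):
    # p(w|e,a): for simplicity, assume the evidence pins down w1 for certain.
    return 1.0 if w == "w1" else 0.0

# Two candidate utility functions the agent is uncertain between.
utilities = {
    "u_cake": lambda a: 1.0 if a == "give_cake" else 0.0,
    "u_death": lambda a: 1.0 if a == "give_death" else 0.0,
}

def p_correct(u_name, w):
    # p(C(u)|w): the agent completely trusts the programmers,
    # so each world makes exactly one utility function "correct".
    if w == "w1":
        return 1.0 if u_name == "u_cake" else 0.0
    return 1.0 if u_name == "u_death" else 0.0

def expected_value(action, evidence=None):
    # The value-loading sum: over worlds, then over utility functions.
    return sum(
        p_world(w, evidence, action)
        * sum(p_correct(u, w) * utilities[u](action) for u in utilities)
        for w in worlds
    )

best = max(actions, key=expected_value)
print(best)
```

Since the evidence puts all probability on the world where the programmers endorsed cake, the agent picks `give_cake`; the hard problems listed above are exactly the parts this toy hard-codes.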

Daniel Dewey's Paper

The above idea was partially inspired by a draft of Learning What to Value, a paper by Daniel Dewey. He restricted attention to streams of interactions, and his equation, in a simplified form, is:

argmax_{a ∈ A} Σ_{s ∈ S} p(s|e,a) Σ_{u ∈ U} p(C(u)|s) u(s)

where S is the set of all possible streams of all past and future observations and actions.

Cynical explanations of FAI critics (including myself)

21 Wei_Dai 13 August 2012 09:19PM

Related Posts: A cynical explanation for why rationalists worry about FAI, A belief propagation graph

Lately I've been pondering the fact that while there are many critics of SIAI and its plan to form a team to build FAI, few of us seem to agree on what SIAI or we should do instead. Here are some of the alternative suggestions offered so far:

  • work on computer security
  • work to improve laws and institutions
  • work on mind uploading
  • work on intelligence amplification
  • work on non-autonomous AI (e.g., Oracle AI, "Tool AI", automated formal reasoning systems, etc.)
  • work on academically "mainstream" AGI approaches or trust that those researchers know what they are doing
  • stop worrying about the Singularity and work on more mundane goals
Given that ideal reasoners are not supposed to disagree, it seems likely that most if not all of these alternative suggestions can also be explained by their proponents being less than rational. Looking at myself and my suggestion to work on IA or uploading, I've noticed that I have a tendency to be initially over-optimistic about some technology and then become gradually more pessimistic as I learn more details about it, so that I end up being more optimistic about technologies that I'm less familiar with than the ones that I've studied in detail. (Another example of this is me being initially enamoured with Cypherpunk ideas and then giving up on them after inventing some key pieces of the necessary technology and seeing in more detail how it would actually have to work.)
I'll skip giving explanations for other critics to avoid offending them, but it shouldn't be too hard for the reader to come up with their own explanations. It seems that I can't trust any of the FAI critics, including myself, nor do I think Eliezer and company are much better at reasoning or intuiting their way to a correct conclusion about how we should face the apparent threat and opportunity that is the Singularity. What useful implications can I draw from this? I don't know, but it seems like it can't hurt to pose the question to LessWrong. 


Magic players: "How do I lose?"

32 Manfred 15 July 2012 08:58AM

An excellent habit that I've noticed among professional players of the game Magic: The Gathering is asking the question "how do I lose?" - a sort of strategic looking into the dark.

Imagine this situation: you have an army ready to destroy your opponent in two turns.  Your opponent has no creatures under his command.  Victory seems inevitable.  And so you ask "how do I lose?"

Because your victory is now the default, the options for your opponent are very limited.  If you have a big army, they need to play a card that can deal with lots of creatures at once.  If you have a good idea what their deck contains, you can often narrow it down to a single card that they need to play in order to turn the game around.  And once you know how you could lose, you can plan to avoid it.

For example, suppose your opponent was playing white.  Then their card of choice to destroy a big army would be Wrath of God.  That card is the way you could lose.  But now that you know that, you can avoid losing to Wrath of God by keeping creature cards in your hand so you can rebuild your army - you'll still win if they don't play it, since you winning is the default.  But you've made it harder to lose.  This is a bit of an advanced technique, since not playing all your cards is counterintuitive.

A related question is "how do I win?"  This is the question you ask when you're near to losing.  And like above, this question is good to ask because when you're really behind, only a few cards will let you come back.  And once you know what those cards are, you can plan for them.

For example, suppose you have a single creature on your side.  The opponent is attacking you with a big army.  You have a choice: you can let the attack through and lose in two turns, or you can send your creature out to die in your defense and lose in three turns.  If you were trying to postpone losing, you would send out the creature.  But you're more likely to actually win if you keep your forces alive - you might draw a sword that makes your creature stronger, or a way to weaken their army, or something.  And so you ask "how do I win?" to remind yourself of that.


This sort of thinking is highly generalizable.  The next time you're, say, packing for a vacation and feel like everything's going great, that's a good time to ask: "How do I lose?  Well, by leaving my wallet behind or by having the car break down - everything else can be fixed.  So I'll go put my wallet in my pocket right now, and check the oil and coolant levels in the car."

An analogy is that when you ask "how do I win?" you get to disregard your impending loss because you're "standing on the floor" - there's a fixed result that you get if you don't win, like calling a tow truck if you're in trouble in the car, or canceling your vacation and staying home.  Similarly when you ask "how do I lose?" you should be standing on the ceiling, as it were - you're about to achieve a goal that doesn't need to be improved upon, so now's the time to be careful about potential Wraths of God.

[Link] You Should Downvote Contrarian Anecdotes

8 Vladimir_Golovin 18 June 2012 07:57AM

http://thobbs.github.com/blog/2012/06/17/you-should-downvote-anecdotes/

Anecdotal evidence has been shown to have a greater influence on opinion than it logically deserves, most visibly when the anecdote conflicts with the reader’s opinion and when the reader is not highly analytical, even when the anecdotes accompany statistical evidence. Though the anecdotes may not totally sway you, they can easily leave you with the sense that the research findings aren’t as conclusive as they claim to be.

Talking to Children: A Pre-Holiday Guide

32 [deleted] 20 December 2011 09:54PM

Note: This is based on anecdotal evidence, personal experience (I have worked with children for many years. It is my full-time job.) and "general knowledge" rather than scientific studies, though I welcome any relevant links on either side of the issue.


The holidays are upon us, and I would guess that even though most of us are atheists, we will still be spending time with our extended families sometime in the next week. These extended families are likely to include nieces and nephews, or other children, that you will have to interact with (probably whether you like it or not...)

Many LW-ers might not spend a lot of time with children in their day-to-day lives, and therefore I would like to make a quick comment on how to interact with them in a way that is conducive to their development. After all, if we want to live in a rationalist world tomorrow, one of the best ways to get there is by raising children who can become rationalist adults. 

PLEASE READ THIS LINK if there are any little girls you will be seeing this holiday season:

How To Talk to Little Girls: http://www.huffingtonpost.com/lisa-bloom/how-to-talk-to-little-gir_b_882510.html?ref=fb&src=sp&comm_ref=false


I know it's hard, but DON'T tell little girls that they look cute, and DON'T comment on their adorable little outfits, or their pony-tailed hair. The world is already screaming at them that the primary thing other people notice and care about for them is their looks. Ask them about their opinions, or their hobbies. Point them toward growing into a well-rounded adult with a mind of her own.

This does not just apply to little girls and their looks, but can be extrapolated to SO many other circumstances. For example, when children (of either gender) succeed at something, whether it is school-work or a drawing, DON'T comment on how smart or skilled they are. Instead, say something like: "Wow, that was a really difficult math problem you just solved. You must have studied really hard to understand it!" Have your comments focus on complimenting their hard work and their determination.

By commenting on children's innate abilities, you are setting them up to believe that if they are good at something, it is solely based on talent. Conversely, by commenting on the amount of work or effort that went into their progress, you are setting them up to believe that they need to put effort into things, in order to succeed at them.


This may not seem like a big deal, but I have worked in childcare for many years, and have learned how elastic children's brains are. You can get them to believe almost anything, or hold almost any opinion, JUST by telling them they have that opinion. Tell a kid they like helping you cook often enough, and they will quickly think that they like helping you cook.

For a specific example, I made my first charge like my favorite of the little-kid shows by saying: "Ooo! Kim Possible is on! You love this show!" She soon internalized it, and it became one of her favorites. There is of course a limit to this. No amount of saying "That show is boring", and "You don't like that show" could convince her that Wonderpets was NOT super-awesome.
