Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Wear a Helmet While Driving a Car

38 James_Miller 30 July 2015 04:36PM

A 2006 study showed that “280,000 people in the U.S. receive a motor vehicle induced traumatic brain injury every year,” so you would think that wearing a helmet while driving would be commonplace.  Race car drivers wear helmets.  But since almost no one wears a helmet while driving a regular car, you probably fear that if you wore one you would look silly, attract the notice of the police for driving while weird, or draw the attention of another driver who takes your safety attire as a challenge.  (Car drivers are more likely to hit bicyclists who wear helmets.)

 

The $30+shipping Crasche hat is designed for people who should wear a helmet but don’t.  It looks like a ski cap, but contains concealed lightweight protective material.  People who have signed up for cryonics, such as myself, would get an especially high expected benefit from using a driving helmet because we very much want our brains to “survive” even a “fatal” crash. I have been using a Crasche hat for about a week.

Magnetic rings (the most mediocre superpower): a review

19 Elo 30 July 2015 01:23PM

Following on from a few threads about superpowers and extra senses that humans can try to acquire: I have always been interested in the idea of putting a magnet in my finger for the benefits of extra-sensory perception.

Stories (and occasional news articles) imply that having a magnet implanted in a finger, in a place surrounded by nerves, imparts a power of electric-sensation: the ability to feel when there are electric fields around.  So that's pretty neat.  Only I don't really like the idea of cutting into myself (even if it's done by a professional piercing artist).

Only recently did I come across the suggestion that a magnetic ring could impart similar abilities and properties.  I was delighted at the idea of a similar and non-invasive version of the magnetic-implant (people with magnetic implants are commonly known as grinders within the community).  I was so keen on trying it that I went out and purchased a few magnetic rings of different styles and different properties.

Interestingly, the magnetisation of a ring-shaped object can be oriented in two general ways: across the diameter, or along the height of the cylinder.  (There is also a third type: a ring consisting of four outwardly magnetised quarter arcs of magnetic metal suspended in a ring casing, with a few possible orientations of that system.)

I have now been wearing a Neodymium ND50 magnetic ring from supermagnetman.com for around two months.  The following is a description of my experiences with it.


When I first got the rings I tried wearing more than one on each hand, and I very quickly found out what happens when you wear two magnets close to each other: they attract.  Within a day I was wearing one magnet on each hand.  What is interesting is what happens when you move two very strong magnets within each other's magnetic field: you get the ability to feel the field and roll it around in your hands.  I found myself taking typing breaks to play with the magnetic field between my fingers, which was an interesting experience.  I also liked the snap as the two magnets pulled towards each other, and I regularly played with them by moving them near each other.  Based on my experience I would encourage others to use magnets as a socially acceptable way to hide an ADHD twitch - or just a way to keep yourself amused if you don't have a phone to pull out and you ever need a reason to move.  I have previously used elastic bands around my wrist for a similar purpose.

The next interesting thing to note is what is or is not ferrous.  Fridges are made of ferrous metal, but not on the inside.  Door handles are not usually ferrous, but the tongue and groove of the latch is.  Metal railings are common, as are metal nails in wood.  Elevators and escalators have some ferrous parts.  Light switches are often plastic, but there is a metal screw holding them to the wall.  Tennis-court fencing is ferrous.  The ends of USB cables are sometimes ferrous and sometimes not, while the cables themselves are not ferrous (they are probably made of copper), except for one I found.

 

Breaking technology

I had a concern that I would break my technology, which would be bad.  Overall I found zero broken pieces of technology.  In theory, if you take a speaker, which consists of a magnet and an electric coil, and mess around with its magnetic field, it will be unhappy and maybe break.  That has not happened yet.  The same can be said for hard drives, magnetic memory devices, phones, and other things that rely on electricity: so far nothing has broken.  What I did notice is that my phone has a magnetic-sleep sensor at the top left, i.e. it turns the screen off when I hold the ring near that point - a benefit or a detriment depending on where I am wearing the ring.

Metal shards

I spend some of my time in workshops that have metal shards lying around.  Sometimes they are sharp, sometimes they are more like dust.  They end up coating the magnetic ring.  The sharp ones end up jabbing you, and the dust just looks like dirt on your skin.  Within a few hours they tend to go away anyway, but it is something I have noticed.

Magnetic strength

Over the time I have been wearing the magnets, their strength has dropped off significantly.  I am considering building a remagnetisation jig, but have not started any work on it.  Every time I ding the ring against something or drop it, the magnetisation decreases a bit as the magnetic dipoles reorganise.

Knives

I cook a lot, which means I find myself holding sharp knives fairly often.  The most dangerous thing I noticed about these rings is that when I hold a ferrous knife in the normal way, the magnet has a tendency to shift the knife slightly, and at times when I don't want it to.  That sucks.  Don't wear them while handling sharp objects like knives; the last thing you want is for your carrot-cutting to turn into a finger-cutting event.  What is also interesting is that some cutlery is made of ferrous metal and some is not, and sometimes parts of a single piece are ferrous while other parts are not.  For example, my normal table-knife set has a ferrous blade and a non-ferrous handle.  I always figured they were the same, but the magnet says they are different materials, which is pretty neat.  I have found the same thing with spoons: sometimes the scoop is ferrous and the handle is not.  I assume it is because the scoop/blade parts need extra forming steps and so must be made of a more workable metal.  Cheaper cutlery is not like this.

The same applies to hot pieces of metal: ovens, stoves, kettles, soldering irons...  When they accidentally move towards your fingers, or your fingers are pulled towards them, that's a slightly unsafe experience.

Electric-sense

You know how when you run a microwave it buzzes, in a *vibrating* sort of way?  If you put your hand against the outside of a microwave you will feel the motor going.  Yeah, cool.  Having a magnetic ring means you can feel that without touching the microwave, from about 20 cm away.  There is variability to it: better microwaves have more shielding on their motors and leak less.  I tried to feel the electric field around power tools like a drill press, handheld tools like an orbital sander, computers, cars, and appliances, which pretty much covers everything.  I also tried servers, and the only thing that really had a buzzing field was a UPS (uninterruptible power supply), which was cool.  However, other people had reported that any transformer - e.g. a computer charger - would make that buzz.  I also carry a battery block with me, and it had no interesting fields.  Totally not exciting.  As for moving electrical charge: can't feel it.  Whether power points are receiving power - nope.  Not dying by electrocution - no change.

Boring superpower

There is a reason I call magnetic rings a boring superpower.  The only real superpower they have imparted is the ability to pick up my keys without using my fingers - and maybe to hold my keys without trying to.  As superpowers go, that's pretty lame.  But kinda nifty.  I don't know; I wouldn't insist people do it for life-changing purposes.

 

Did I find a human-superpower?  No.  But I am glad I tried it.

 

Any questions?  Any experimenting I should try?

The horrifying importance of domain knowledge

13 NancyLebovitz 30 July 2015 03:28PM

There are some long lists of false beliefs that programmers hold.  This isn't because programmers are especially likely to be more wrong than anyone else; it's just that programming offers a better opportunity than most people get to find out how incomplete their model of the world is.

I'm posting about this here, not just because this information has a decent chance of being both entertaining and useful, but because LWers try to figure things out from relatively simple principles-- who knows what simplifying assumptions might be tripping us up?

The classic (and I think the first) was about names. There have been a few more lists created since then.

Time. And time zones. Crowd-sourced time errors.

Addresses. Possibly more about addresses. I haven't compared the lists.

Gender. This is so short I assume it's seriously incomplete.

Networks. Weirdly, there is no list of falsehoods programmers believe about html (or at least a fast search didn't turn anything up). Don't trust the words in the url.

Distributed computing.  Build systems.

Poem about character conversion.

I got started on the subject because of this about testing your code, which was posted by Andrew Ducker.

Rationality Reading Group: Part F: Politics and Rationality

7 Gram_Stone 29 July 2015 10:22PM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part F: Politics and Rationality (pp. 255-289). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

F. Politics and Rationality

57. Politics is the Mind-Killer - People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.

58. Policy Debates Should Not Appear One-Sided - Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.

59. The Scales of Justice, the Notebook of Rationality - People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.

60. Correspondence Bias - Also known as the fundamental attribution error, this is the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

61. Are Your Enemies Innately Evil? - People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just. That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy.

62. Reversed Stupidity Is Not Intelligence - The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.

63. Argument Screens Off Authority - There are many cases in which we should take the authority of experts into account, when we decide whether or not to believe their claims. But, if there are technical arguments that are available, these can screen off the authority of experts.

64. Hug the Query - The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

65. Rationality and the English Language - George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.

66. Human Evil and Muddled Thinking - It's easy to think that rationality and seeking truth is an intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part G: Against Rationalization (pp. 293-339). The discussion will go live on Wednesday, 12 August 2015, right here on the discussion forum of LessWrong.

We really need a "cryonics sales pitch" article.

6 CronoDAS 03 August 2015 10:42PM

Every so often, I see a blog post about death, usually remarking on the death of someone the writer knew, and it often includes sentiments like "everyone is going to die, and that's terrible, but we can't do anything about it, so we have to accept it."

It's one of those sentiments that people find profound and is often considered Deep Wisdom. There's just one problem with it. It isn't true. If you think cryonics can work, as many people here do, then you believe that people don't really have to die, and we don't need to accept that we've only got at most about a hundred years and then that's it.

And I want to tell them this, as though I was a religious missionary out to spread the Good Word that you can save your soul and get into Christian Heaven as long as you sign up with Our Church. (Which I would actually do, if I believed that Christianity was correct.)

But it's not easy to broach the issue in a blog comment, and I'm not a good salesman. (One of the last times I tried, my posts kept getting deleted by the moderators.) It would be a lot better if I could simply link them to a better sales pitch; the kind of people I'm talking to are the kinds of people who read things on the Internet. Unfortunately, not one of the pro-cryonics posts listed on the LessWrong wiki can serve this purpose. Not "Normal Cryonics", not "You Only Live Twice", not "We Agree: Get Froze", not one! Why isn't there one? Heck, I'd pay money to get it written. I'd even pay Eliezer Yudkowsky a bunch of money to talk to my father on the telephone about cryonics, with a substantial bonus on offer if my father agrees to sign up. (We can discuss actual dollar amounts in the comments or over private messages.)

Please, someone get to work on this!

Stupid Questions August 2015

6 Grothor 01 August 2015 11:08PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Integral vs differential ethics, continued

5 Stuart_Armstrong 03 August 2015 01:25PM

I've talked earlier about integral and differential ethics, in the context of population ethics. The idea is that the argument for the repugnant conclusion (and its associate, the very repugnant conclusion) depends on a series of trillions of steps, each of which is intuitively acceptable (adding happy people, making happiness more equal), but which reaches a conclusion that is intuitively bad - namely, that we can improve the world by creating trillions of people in torturous and unremitting agony, as long as we balance it out by creating enough happy people as well.

Differential reasoning accepts each step, and concludes that the repugnant conclusions are actually acceptable, because each step is sound. Integral reasoning accepts that the repugnant conclusion is repugnant, and concludes that some step along the way must therefore be rejected.

Notice that key word, "therefore". Some intermediate step is rejected, not for intrinsic reasons, but purely because of the consequence. There is nothing special about the step that is rejected; it's just a relatively arbitrary barrier to stop the process (compare with the paradox of the heap).

Indeed, things can go awry when people attempt to fix the repugnant conclusion (a conclusion they rejected through integral reasoning) using differential methods. Things like the "person-affecting view" have their own ridiculousness and paradoxes (it's ok to bring a baby into the world if it will have a miserable life; we don't need to care about future generations if we randomise conceptions, etc...) and I would posit that it's because they are trying to fix global/integral issues using local/differential tools.

The relevance of this? It seems that integral tools might be better suited to dealing with the problem of bad convergence in AI. We could set up plausibly intuitive differential criteria (such as self-consistency), but institute integral criteria that can override these if they go too far. I think there may be some interesting ideas in that area, potentially. The cost is that integral ideas are generally seen as less elegant, or harder to justify.

August 2015 Media Thread

5 ArisKatsaris 01 August 2015 02:46PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

On stopping rules

4 Anders_H 02 August 2015 09:38PM

(tl;dr: In this post I try to explain why I think the stopping rule of an experiment matters. It is likely that someone will find a flaw in my reasoning. That would be a great outcome as it would help me change my mind.  Heads up: If you read this looking for new insight you may be disappointed to only find my confusion)

 

(Edited to add: Comments by Manfred and Ike seem to point correctly to the critical flaws in my reasoning. I will try to update my intuition over the next few days)

 

 

In the post "Don't You Care If It Works Part 1" on the Main section of this website, Jacobian writes:

 

A few weeks ago I started reading beautiful probability and immediately thought that Eliezer is wrong about the stopping rule mattering to inference. I dropped everything and spent the next three hours convincing myself that the stopping rule doesn't matter and I agree with Jaynes and Eliezer. As luck would have it, soon after that the stopping rule question was the topic of discussion at our local LW meetup. A couple people agreed with me and a couple didn't and tried to prove it with math, but most of the room seemed to hold a third opinion: they disagreed but didn't care to find out. I found that position quite mind-boggling. Ostensibly, most people are in that room because we read the sequences and thought that this EWOR (Eliezer's Way Of Rationality) thing is pretty cool. EWOR is an epistemology based on the mathematical rules of probability, and the dude who came up with it apparently does mathematics for a living trying to save the world. It doesn't seem like a stretch to think that if you disagree with Eliezer on a question of probability math, a question that he considers so obvious it requires no explanation, that's a big frickin' deal!

First, I'd like to point out that the mainstream academic term for Eliezer's claim is The Strong Likelihood Principle.  In the comments section, a vigorous discussion of stopping rules ensued. 

My own intuition is that the strong likelihood principle is wrong.  Moreover, there are a small number of people whose opinions I give a higher level of credence than Eliezer's, and some of those people also disagree with him. For instance, I've been present in the room when a distinguished Professor of Biostatistics at Harvard stated matter-of-factly that the principle is trivially wrong. I also observed that he was not challenged on this by another full Professor of Biostatistics who is considered an expert on Bayesian inference.

So at best, the fact that Eliezer supports the strong likelihood principle is a single data point, i.e. pretty weak Bayesian evidence.  I do however value Eliezer's opinion, and in this case I recognize that I am confused. Being a good rationalist, I'm going to take that as an indication that it is time for The Ritual.  Writing this post is part of my "ritual": it is an attempt to clarify exactly why I think the stopping condition matters, and to determine whether those reasons are valid.  I expect a likely outcome is that someone will identify a flaw in my reasoning. This will be very useful and help improve my map-territory correspondence.

--

Suppose there are two coins in existence, both of which are biased: Coin A comes up heads with probability 2/3 and tails with probability 1/3, whereas Coin B comes up heads with probability 1/3.  Someone gives me a coin without telling me which one; my goal is to figure out whether it is Coin A or Coin B.  My prior is that the two are equally likely.

There are two statisticians who both offer to do an experiment:  Statistician 1 says that he will flip the coin 20 times and report the number of heads.    Statistician 2 would really like me to believe that it is Coin B, and says he will terminate the experiment whenever there are more tails than heads. However, since Statistician 2 is kind of lazy and doesn't have infinite time, he also says that if he reaches 20 flips he is going to call it quits and give up.

Both statisticians do the experiment, and both experiments end up with 12 heads and 8 tails. I trust both Statisticians to be honest about the experimental design and the stopping rules. 

In the experiment of Statistician 1, the probability of getting this outcome with Coin A is about 0.1480, whereas the probability with Coin B is about 0.0092.  The likelihood ratio is therefore exactly 16 (the binomial coefficients cancel), and the posterior probability of Coin A (after converting the prior to odds, applying the likelihood ratio and converting back to probability) is about 0.94.
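These numbers are straightforward to check. Here is a minimal Python sketch (the function and variable names are my own) that computes the two binomial likelihoods and the posterior for Statistician 1's design:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips, with per-flip heads probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_data_given_A = binom_pmf(12, 20, 2/3)              # ~0.1480
p_data_given_B = binom_pmf(12, 20, 1/3)              # ~0.0092
likelihood_ratio = p_data_given_A / p_data_given_B   # exactly 16: the comb() terms cancel

prior_odds = 1.0                                     # Coin A and Coin B equally likely a priori
posterior_odds = prior_odds * likelihood_ratio
posterior_A = posterior_odds / (1 + posterior_odds)  # ~0.94
print(p_data_given_A, p_data_given_B, likelihood_ratio, posterior_A)
```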

In the experiment of Statistician 2, however, I can't just use the binomial distribution, because there is an additional data point which is not Bernoulli, namely the number of coin flips.  I therefore have to calculate, for both Coin A and Coin B, the probability that he would not terminate the experiment prior to the 20th flip, and that at that stage he would have 12 heads and 8 tails.  Since the probability of reaching 20 flips is much higher for Coin A than for Coin B, the likelihood ratio would be much higher than in the experiment of Statistician 1.

 

This should not be unexpected: if Statistician 2 gives me data that supports the hypothesis his stopping rule was designed to discredit, then that data is stronger evidence than similar data coming from the neutral Statistician 1.

In other words, the stopping rule matters. Yes, all the evidence in the trial is still in the likelihood ratio, but the likelihood ratio is different because there is an additional data point.   Not considering this additional data point is statistical malpractice. 
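One way to check whether the stopping rule changes the likelihood ratio in this example is brute force: enumerate every length-20 sequence of flips, keep only those that Statistician 2's rule would allow to run all the way to 20 flips and that end with 12 heads, and sum their probabilities under each coin. A rough Python sketch of that check (all names are my own):

```python
from itertools import product

def survives_stopping_rule(seq):
    """True if tails never outnumber heads at any point, i.e. Statistician 2's
    rule never triggers an early stop on this sequence (1 = heads, 0 = tails)."""
    heads = tails = 0
    for flip in seq:
        heads += flip
        tails += 1 - flip
        if tails > heads:
            return False
    return True

def seq_prob(seq, p_heads):
    """Probability of one specific sequence of flips under a given coin."""
    heads = sum(seq)
    return p_heads**heads * (1 - p_heads)**(len(seq) - heads)

# All sequences that Statistician 2 could report as "20 flips, 12 heads, 8 tails".
valid = [s for s in product((0, 1), repeat=20)
         if sum(s) == 12 and survives_stopping_rule(s)]

likelihood_A = sum(seq_prob(s, 2/3) for s in valid)
likelihood_B = sum(seq_prob(s, 1/3) for s in valid)
print(likelihood_A, likelihood_B, likelihood_A / likelihood_B)
```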

 

 

Let's pour some chlorine into the mosquito gene pool

4 Clarity 31 July 2015 11:12AM

Reading up on the GiveWell Open Philanthropy Project's investigation of science policy led me to look up CRISPR, which is given as an example of a very high-potential basic science research area.

In context, GiveWell appears to be interested in the potential of gene drives. I am not sure if I am using the term in a grammatically correct way.

Austin Burt, an evolutionary geneticist at Imperial College London,[5] first outlined the possibility of building gene drives based on natural "selfish" homing endonuclease genes.[4] Researchers had already shown that these “selfish” genes could spread rapidly through successive generations. Burt suggested that gene drives might be used to prevent a mosquito population from transmitting the malaria parasite or crash a mosquito population. Gene drives based on homing endonucleases have been demonstrated in the laboratory in transgenic populations of mosquitoes[6] and fruit flies.[7][8] These enzymes could be used to drive alterations through wild populations.[1]

I would be surprised if I am the first community member to ponder whether we could just go ahead and exterminate mosquitoes to control their populations. Google research I conducted ages ago indicated that doing so resulted in no effective improvement in desired outcomes over the long term. I vaguely remember several examples being cited, none of which involved gene drives, which I have only just heard of. I concluded, at the time, that controlling mosquito populations wasn't the way to go, and that instead people should proactively protect themselves.

In 2015, a study in Panama reported that such mosquitoes were effective in reducing populations of dengue fever-carrying Aedes aegypti. Over a six month period approximately 4.2 million males were released, yielding a 93-percent population reduction. The female is the disease carrier. The population declined because the larvae of GM males and wild females fail to thrive. Two control areas did not experience population declines. The A. aegypti were not replaced by other species such as the aggressive A. albopictus. In 2014, nine people died and 5,026 were infected, and in 2013 eight deaths and 4,481 infected, while in March 2015 a baby became the year's first victim of the disease.[9]

It's apparent that evidence is emerging for the efficacy of gene drives. In conducting research for this discussion post, I found that most webpages in the top Google results were from groups and individuals concerned about genetically modified mosquitoes being released. I am interested to know if that's the case for anyone else, since my results may be biased by Google targeting results based on my past proclivity for using searches to confirm suspicions about things I already believed.

It appears that the company responsible for the mosquitoes is called Oxitec. I have no conflict of interest to disclose in relation to them (though I was hoping to find one, but they're not a publicly listed company!). They appear to be supplying trials in the US and Australia, though I haven't looked to see if they're involved in any trials in developing countries. It stuns me that I was not aware of them, given the multiple lines of interest that could have brought me to them.

My general disposition towards synthetic biology has been overwhelmingly suspicious and censorial in the recent past. My views were influenced by the caution I've ported over from fears of unfriendly AI. I wanted to share this story of gene drives because it is heartwarming and has made me feel better about the future of both existential risk and effective giving.

Rationality Quotes Thread August 2015

3 bbleeker 03 August 2015 09:50AM

Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

Open thread, Aug. 03 - Aug. 09, 2015

3 MrMind 03 August 2015 07:05AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Does Probability Theory Require Deductive or Merely Boolean Omniscience?

3 potato 03 August 2015 06:54AM

It is often said that a Bayesian agent has to assign probability 1 to all tautologies, and probability 0 to all contradictions. My question is... exactly what sort of tautologies are we talking about here? Does that include all mathematical theorems? Does that include assigning 1 to "Every bachelor is an unmarried male"?1 Perhaps the only tautologies that need to be assigned probability 1 are those that are Boolean theorems implied by atomic sentences that appear in the prior distribution, such as: "S or ~ S".

It seems that I do not need to assign probability 1 to Fermat's last conjecture in order to use probability theory when I play poker, or try to predict the color of the next ball to come from an urn. I must assign a probability of 1 to "The next ball will be white or it will not be white", but Fermat's last theorem seems to be quite irrelevant. Perhaps that's because these specialized puzzles do not require sufficiently general probability distributions; perhaps, when I try to build a general Bayesian reasoner, it will turn out that it must assign 1 to Fermat's last theorem. 

Imagine a (completely impractical, ideal, and esoteric) first order language, whose particular subjects are discrete point-like regions of space-time. There can be an arbitrarily large number of points, but it must be finite. This language also contains a long list of predicates like: is blue, is within the volume of a carbon atom, is within the volume of an elephant, etc., and generally any predicate type you'd like (including n-place predicates).2 The atomic propositions in this language might look something like: "5, 0.487, -7098.6, 6000s is Blue" or "(1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant." The first of these propositions says that a certain point in space-time is blue; the second says that there is an elephant between two points at one second after the universe starts. Presumably, at least the denotational content of most English propositions could be expressed in such a language (I think, mathematical claims aside).

Now imagine that we collect all of the atomic propositions in this language, and assign a joint distribution over them. Maybe we choose max entropy; it doesn't matter. Would doing so really require us to assign 1 to every mathematical theorem? I can see why it would require us to assign 1 to every tautological Boolean combination of atomic propositions [for instance: "(1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant OR ~((1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant)"], but that would follow naturally as a consequence of filling out the joint distribution. Similarly, all the Boolean contradictions would be assigned zero, just as a consequence of filling out the joint distribution table with a set of reals that sum to 1.
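To make this concrete, here is a small Python sketch (the atomic propositions are placeholders standing in for sentences of the language above): it fills out an arbitrary joint distribution over a handful of atomic propositions and checks that any Boolean tautology built from them gets probability 1, and any Boolean contradiction gets 0, regardless of which distribution was chosen.

```python
import itertools
import random

# Three atomic propositions, standing in for sentences of the point-predicate language.
atoms = ["p1_is_blue", "p2_is_blue", "region_contains_elephant"]

# A joint distribution is just an assignment of non-negative reals, summing to 1,
# to the 2^3 truth assignments over the atoms. Pick one arbitrarily.
worlds = list(itertools.product([True, False], repeat=len(atoms)))
raw = [random.random() for _ in worlds]
joint = {w: x / sum(raw) for w, x in zip(worlds, raw)}

def prob(event):
    """Probability of a Boolean combination of atoms, given as a predicate on worlds."""
    return sum(p for w, p in joint.items() if event(dict(zip(atoms, w))))

S = lambda w: w["region_contains_elephant"]
print(prob(lambda w: S(w) or not S(w)))   # tautology: 1.0 (up to float rounding)
print(prob(lambda w: S(w) and not S(w)))  # contradiction: 0.0
```

Nothing in this construction forces any particular probability onto a sentence of arithmetic; only Boolean combinations of the atoms ever get pinned to 1 or 0.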

A similar argument could be made using intuitions from algorithmic probability theory. Imagine that we know that some data was produced by a distribution which is output by a program of length n in a binary programming language. We want to figure out which distribution it is. So, we assign each binary string a prior probability of 2^-n. If the language allows for comments, then simpler distributions will be output by more programs, and we will add the probability of all programs that print that distribution.3 Sure, we might need an oracle to figure out if a given program outputs anything at all, but we would not need to assign a probability of 1 to Fermat's last theorem (or at least I can't figure out why we would). The data might be all of your sensory inputs, and n might be Graham's number; still, there's no reason such a distribution would need to assign 1 to every mathematical theorem. 
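Here is a toy version of that construction in Python. The "programming language" is invented purely for illustration (a real construction would use a genuine universal language, and an oracle to decide which programs produce output): programs are bitstrings of a fixed length n, each gets prior weight 2^-n, and the weights of all programs printing the same distribution are added together.

```python
from collections import defaultdict
from itertools import product

def toy_interpreter(bits):
    """Invented toy language: a program starting with '0' prints the fair-coin
    distribution (later bits are comments); one starting with '10' prints a
    heads-biased distribution; anything else prints nothing."""
    if bits.startswith("0"):
        return "fair coin"
    if bits.startswith("10"):
        return "heads-biased coin"
    return None

n = 10                                   # fixed program length, as in the post
prior = defaultdict(float)
for prog in product("01", repeat=n):
    output = toy_interpreter("".join(prog))
    if output is not None:               # in general, deciding this needs the oracle
        prior[output] += 2.0 ** -n       # every length-n string gets weight 2^-n

print(dict(prior))   # {'fair coin': 0.5, 'heads-biased coin': 0.25}
```

The distribution that more programs print ends up with more prior weight, and at no point does the construction have to assign a probability to any theorem of arithmetic.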

Conclusion

A Bayesian agent does not require mathematical omniscience, or logical (if that means anything more than Boolean) omniscience, but merely Boolean omniscience. All that Boolean omniscience means is that for whatever atomic propositions appear in the language of the agent (i.e., the language that forms the set of propositions that constitute the domain of the probability function), any tautological Boolean combination of those propositions must be assigned a probability of 1, and any contradictory Boolean combination of those propositions must be assigned 0. As far as I can tell, the whole notion that Bayesian agents must assign 1 to tautologies and 0 to contradictions comes from the fact that when you fill out a table of joint distributions (or follow the Kolmogorov axioms in some other way) all of the Boolean theorems get a probability of 1. This does not imply that you need to assign 1 to Fermat's last theorem, even if you are reasoning probabilistically in a language that is very expressive.4

Some Ways To Prove This Wrong:

Show that a really expressive semantic language, like the one I gave above, implies PA if you allow Boolean operations on its atomic propositions. Alternatively, you could show that Solomonoff induction can express PA theorems as propositions with probabilities, and that it assigns them 1. This is what I tried to do, but I failed on both occasions, which is why I wrote this. 


[1] There are also interesting questions about the role of tautologies that rely on synonymy in probability theory, and whether they must be assigned a probability of 1, but I decided to keep it to mathematics for the sake of this post. 

[2] I think this language is ridiculous, and openly admit it has next to no real world application. I stole the idea for the language from Carnap.

[3] This is a sloppily presented approximation to Solomonoff induction as n goes to infinity. 

[4] The argument above is not a mathematical proof, and I am not sure that it is airtight. I am posting this to the discussion board instead of a full-blown post because I want feedback and criticism. !!!HOWEVER!!! if I am right, it does seem that folks on here, at MIRI, and in the Bayesian world at large, should start being more careful when they think or write about logical omniscience. 

 

 

Ideological Turing Test Domains

3 Raelifin 02 August 2015 01:45PM

Hello! I'm running an Ideological Turing Test for my local rationality group, and I'm wondering what ideology to use (and what prompts to use for that ideology). Palladias has previously run a number of tests on Christianity, but ideally I'd find something that was a good 50/50 split for my community, and I don't expect to find many Christians in my local group. The original test was proposed for politics, which seems like a reasonable first-guess, but I also worry that my group has too many liberals and not enough conservatives to make that work well.

What I plan to do is email the participants who have agreed to write entries asking how they stand on a number of issues (politics, religion, etc) and then use the issue that is most divisive within the population. To do that, however, I'll need a number of possible issues. Do any of you have good ideas for ITT domains other than religion or politics, particularly for rationalists?

(Side questions:

I've been leaning towards using the name "Caplan Test" instead of "Ideological Turing Test". I think the current name is too unwieldy and gives the wrong impression. Does the ITT name seem worth keeping?

Also, would anyone on here be interested in submitting entries to my test and/or seeing results?)

Is simplicity truth indicative?

2 27chaos 04 August 2015 05:47PM

This essay claims to refute a popularized understanding of Occam's Razor that I myself adhere to. It is confusing me, since I hold this belief at so deep a level that it's difficult for me to examine. Does anyone see any problems in its argument, or does it seem compelling? I specifically feel as though it might be summarizing the relevant machine learning research badly, but I'm not very familiar with the field. It also might be failing to give any credit to simplicity as a general heuristic when simplicity succeeds in a specific field, and it's unclear whether such credit would be justified. Finally, my intuition is that situations in nature where there is a steady bias towards growing complexity are more common than the author claims, and that such tendencies are stronger for longer. However, for all of this, I have no clear evidence to back up the ideas in my head, just vague notions that are difficult to examine. I'd appreciate someone else's perspective on this, as mine seems to be distorted.

Essay: http://bruce.edmonds.name/sinti/

[Link] The Much Forgotten and Ignored Need to Have Workable Solutions

2 Emile 03 August 2015 10:02PM

I ran across this article: The Much Forgotten and Ignored Need to Have Workable Solutions, that might interest some, either for the Rationality or the Effective Altruism aspects.

For a very rough summary: academia (more specifically, the humanities) gives too much credit to describing problems (i.e. complaining) and not enough to thinking about good solutions, which is the difficult and important part.

Some quotes if you don't want to read the whole thing:

Of course the biggest assumption of all that is being shown to be inconsistent with actual behaviour is that of rationality – Richard Thaler’s Misbehaving and other behavioural research is showing that people are subject to various biases and often do not make rational decisions. This is especially scary for theoretical economists, whose entire universe pretty much depends on the rational representative household.

If their assumptions are rather strict and may not hold up in real-life, their call for a policy response is technically null and void. A good example is with auctions, where previously designers (economists) would rely heavily on the Revenue Equivalence Theorem in creating the rules of auctions. Yet, many of them forget that the assumptions of Revenue Equivalence aren’t always satisfied, for example the possibility of collusion, which can prove to significantly reduce the revenue of the seller.

The best paper on a time economists forgot about ECON 101 has to be this review of European 3G auctions. What was most clear for me from Klemperer’s work is that you can get all up in complex auction theory and mechanism design, but if you forget how very basic concepts in economics work in conjunction with that, you can get easily derailed. They basically put the cart before the horse – they forgot that they had to satisfy their own assumptions before applying their model to reality.

More questions: is the policy they suggest cumbersome, intangible and unable to be monitored for success? This is another pet peeve of mine – my blood boils when people say “We need to fix gender stereotypes! We need to create awareness! We need to change societal attitudes!” without suggesting how it should be done, how this monumental task will be measured for good performance and how they propose regulating all the sources of these things.

Also, how would they justify that spending? Have they thought about the parameters which would determine success or failure? What kind of campaign or agency are they suggesting to carry out these monumental tasks? What are the conditions for success?

Last is that sometimes when people chuck the words “Policy Implications” around, they often have no idea what a deep and complicated field policy design actually is. To be fair, I’m still learning about it and I don’t expect university students or even researchers not involved in related areas to have a full understanding of it.

However, it’s not like economists don’t have a basic understanding of incentives, principal-agent relationships, transaction-cost economics and externalities. Those four areas should be enough to at least attempt a more rigorous analysis of possible policies, rather than simply providing an offhand description of the policy based on a single relationship.

At the end of the day, there’s just a lot of arrogance among some researchers who like to imply that their research necessitates action – yet they haven’t put any meaningful or strategic thought whether the research truly necessitates action in the first place (especially in comparison to cost-equivalent policies in similar areas, or dealing with similar problems), whether the action will actually lead to the desired outcome (checking if assumptions are realistic/addressing relevant design issues) or whether there will be any undesirable externalities or further implications of the policy.

[...]

Maybe the worst thing about all of this is that when I was growing up, I always looked up to people who were aware of issues outside themselves, especially if the issues didn’t necessarily affect them. They seemed so cool and aware and intelligent. I’d watch these people with great admiration for their insight.

Now a lot of that is gone. The people about whom once I thought, wow, this person is so aware and intelligent, I now realize aren’t actually that intelligent. They’re just pretending to be. They’re just better at vocalizing some of the things that anyone can see and turning them into long spiels about what’s wrong with the world. They haven’t really thought about it.

(ironically (intentionally?), the post is mostly complaining about a problem, without offering a workable solution, but I still liked it)

Bragging thread August 2015

2 philh 01 August 2015 07:46PM

Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

(Previous Bragging Thread)

Weekly LW Meetups

1 FrankAdamek 31 July 2015 03:54PM

This summary was posted to LW Main on July 24th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Which LW / rationalist blog posts aren't covered by my books & courses?

0 iarwain1 04 August 2015 10:55PM

I've read a few of the Sequences (probably about 50-100 individual posts), but I've only occasionally come away with insights and perspectives that I hadn't already thought of or read elsewhere. I've read a bunch of the popular books on cognitive science and decision theory, including everything on the CFAR popular books list. I'm also about to start an undergrad in statistics with a minor (or possibly a second major) in philosophy.

My question is: Are there specific LW posts / Sequences / other rationalist blog posts that I should read that won't be covered by standard statistics and philosophy courses, or by the books on CFAR's popular reading lists?