

Update on the Brain Preservation Foundation Prize

23 Andy_McKenzie 26 May 2015 01:47AM

Brain Preservation Foundation President Kenneth Hayworth just wrote a synopsis of recent developments from the two major competitors for the BPF prizes. Here is the summary: 

Brain Preservation Prize competitor Shawn Mikula just published his whole mouse brain electron microscopy protocol in Nature Methods (paper, BPF interview), putting him close to winning the mouse phase of our prize.

Brain Preservation Prize competitor 21st Century Medicine has developed a new “Aldehyde-Stabilized Cryopreservation” technique–preliminary results show good ultrastructure preservation even after storage of a whole rabbit brain at -135 degrees C.

This work was funded in part by donations from LW users. In particular, a grant that the BPF was able to provide to support the work of LW user Robert McIntyre at 21st Century Medicine has been instrumental. 

To continue and bolster this type of research, the BPF welcomes your support in a variety of ways, including awareness-raising, donations, and volunteering. Please reach out if you would like to volunteer, or you can PM me and I will help put you in touch. And if you have any suggestions for the BPF, please feel free to discuss them in the comments below. 

[Link] Persistence of Long-Term Memory in Vitrified and Revived C. elegans worms

21 Rangi 24 May 2015 03:43AM

http://online.liebertpub.com/doi/pdf/10.1089/rej.2014.1636

This is a paper published in 2014 by Natasha Vita-More and Daniel Barranco, both associated with the Alcor Research Center (ARC).

The abstract:

Can memory be retained after cryopreservation? Our research has attempted to answer this long-standing question by using the nematode worm Caenorhabditis elegans (C. elegans), a well-known model organism for biological research that has generated revolutionary findings but has not been tested for memory retention after cryopreservation. Our study’s goal was to test C. elegans’ memory recall after vitrification and reviving. Using a method of sensory imprinting in the young C. elegans we establish that learning acquired through olfactory cues shapes the animal’s behavior and the learning is retained at the adult stage after vitrification. Our research method included olfactory imprinting with the chemical benzaldehyde (C₆H₅CHO) for phase-sense olfactory imprinting at the L1 stage, the fast cooling SafeSpeed method for vitrification at the L2 stage, reviving, and a chemotaxis assay for testing memory retention of learning at the adult stage. Our results in testing memory retention after cryopreservation show that the mechanisms that regulate the odorant imprinting (a form of long-term memory) in C. elegans have not been modified by the process of vitrification or by slow freezing.

Six Ways To Get Along With People Who Are Totally Wrong*

20 RobertWiblin 27 May 2015 12:37PM

This is a re-post of something I wrote for the Effective Altruism Forum. Though most of the ideas have been raised here before, perhaps many times, I thought it might still be of interest as a brief presentation of them all!

--

* The people you think are totally wrong may not actually be totally wrong.

Effective altruism is a ‘broad tent’

As is obvious to anyone who has looked around here, effective altruism is based more on a shared interest in the question 'how can you do the most good' than a shared view on the answer. We all have friends who support:

  • A wide range of different cause areas.
  • A wide range of different approaches to those causes.
  • Different values and moral philosophies regarding what it means to 'help others'.
  • Different political views on how best to achieve even shared goals. On economic policy for example, we have people covering the full range from far left to far right. In the CEA offices we have voters for every major political party, and some smaller ones too.

Looking beyond just stated beliefs, we also have people with a wide range of temperaments, from highly argumentative, confident and outspoken to cautious, idiosyncratic and humble.

Our wide range of views could cause problems

There is a popular saying that 'opposites attract'. But unfortunately, social scientists have found precisely the opposite to be true: birds of a feather do in fact flock together.

One of the drivers of this phenomenon is that people who are different are more likely to get into conflicts with one another. If my partner and I liked to keep the house exactly the same way, we certainly wouldn't have as many arguments about cleaning (I'll leave you to speculate about who is the untidy one!). People who are different from you may initially strike you as merely amusing, peculiar or mistaken, but when you talk to them at length and they don't see reason, you may start to see them as stupid, biased, rude, impossible to deal with, unkind, and perhaps even outright bad people.

A movement brought together by a shared interest in the question ‘what should we do?’ will inevitably have a greater diversity of priorities, and justifications for those priorities, than a movement united by a shared answer. This is in many ways our core strength. Maintaining a diversity of views means we are less likely to get permanently stuck on the wrong track, because we can learn from one another's scholarship and experiences, and correct course if necessary.

However, it also means we are necessarily committed to ideological pluralism. While it is possible to maintain ‘Big Tent’ social movements, they face some challenges. The more people hold opinions that others dislike, the more possible points of friction there are that can cause us to form negative opinions of one another. There have already been strongly worded exchanges online demonstrating the risk.

When a minority holds an unpopular view they can feel set upon and bullied, while the majority feels mystified and frustrated that a small group of people can't see the obvious truth that so many accept.

My first goal with this post is to make us aware of this phenomenon, and offer my support for a culture of peaceful coexistence between people who, even after they share all their reasons and reflect, still disagree.

My second goal is to offer a few specific actions that can help us avoid interpersonal conflicts that don't contribute to making the world a better place:

1. Remember that you might be wrong

Hard as it is to keep in mind when you're talking to someone who strongly disagrees with you, it is always possible that they have good points to make that would change your mind, at least a bit. Most claims are only ‘partially true or false’, and there is almost always something valuable you can learn from someone who disagrees with you, even if it is just an understanding of how they think.

If the other person seems generally as intelligent and informed about the topic as you, it's not even clear why you should give more weight to your own opinion than theirs.

2. Be polite, doubly so if your partner is not

Being polite will make both the person you are talking to, and onlookers, more likely to come around to your view. It also means that you're less likely to get into a fight that will hurt others and absorb your precious time and emotional energy.

Politeness has many components, some notable ones being: not criticising someone personally; interpreting their behaviour and statements in a fairly charitable way; not being a show-off, or patronising and publicly embarrassing others; respecting others as your equals, even if you think they are not; conceding when they have made a good point; and finally keeping the conversation focussed on information that can be shared, confirmed, and might actually prove persuasive.

3. Don't infer bad motivations

While humans often make mistakes in their thinking, it's uncommon for them to be straight out uninterested in the welfare of others or what is right, especially so in this movement. Even if they are, they are probably not aware that that is the case. And even if they are aware, you won't come across well to onlookers by addressing them as though they have bad motivations.

If you really do become convinced the person you are talking to is speaking in bad faith, it's time to walk away. As they say: don't feed the trolls.

4. Stay cool

Even when people say things that warrant anger and outrage, expressing anger or outrage publicly will rarely make the world a better place. Anger being understandable or natural is very different from it being useful, especially if the other person is likely to retaliate with anger of their own.

Being angry does not improve the quality of your thinking, persuade others that you're right, make you happier or more productive, or make for a more harmonious community.

In its defence, anger can be highly motivating. Unfortunately, it is indiscriminate: it can motivate you to do very valuable things, but also ineffective and even harmful ones.

Any technique that can keep you calm is therefore useful. If something is making you unavoidably angry, it's typically best to walk away and let other people deal with it.

5. Pick your battles

Not all things are equally important to reach a consensus about. For good or ill, most things we spend our days talking about just aren't that 'action relevant'. If you find yourself edging towards interpersonal conflict on a question that i) isn't going to change anyone's actions much; ii) isn't going to make the world a much better place, even if it does change their actions; or iii) is very hard to persuade others about, maybe it isn't worth the cost of interpersonal tension to explore in detail.

So if someone in the community says something unrelated or peripheral to effective altruism that you disagree with, which could develop into a conflict, you always have the option of not taking the bait. In a week, you and they may not even remember it was mentioned, let alone consider it worth damaging your relationship over.

6. Let it go

The most important advice of all.

Perhaps you are discussing something important. Perhaps you've made great arguments. Perhaps everyone you know agrees with you. You've been polite, and charitable, and kept your cool. But the person you're talking to still holds a view you strongly disagree with and believe is harmful.

If that's the case, it's probably time for you both to walk away before your opinions of one another fall too far, or the disagreement spirals into sectarianism. If someone can't be persuaded, you can at least avoid creating ill-will between you that ensures they never come around. You've done what you can for now, and that is enough.

Hopefully time will show which of you is right, or space away from a public debate will give one of you the chance to change your mind in private without losing face. In the meantime maybe you can't work closely together, but you can at least remain friendly and respectful.

It isn't likely or even desirable for us to end up agreeing with one another on everything. The world is a horribly complex place; if the questions we are asking had easy answers the research we are doing wouldn't be necessary in the first place.

The cost of being part of a community that accepts and takes an interest in your views, even though many think you are pulling in the wrong direction, is to be tolerant of others in the same way even when you think their views are harmful.

So, sometimes, you just have to let it go.

--

PS

If you agree with me about the above, you might be tempted to post or send it to people every time they aren’t playing by these rules. Unfortunately, this is likely to be counterproductive and lead to more conflict rather than less. It’s useful to share this post in general, but not trot it out as a way of policing others. The most effective way to promote this style of interaction is to exemplify it in the way you treat others, and not get into long conversations with people who have less productive ways of talking to others.

Thanks to Amanda, Will, Diana, Michelle, Catriona, Marek, Niel, Tonja, Sam and George for feedback on drafts of this post.

Giving What We Can needs your help!

12 RobertWiblin 29 May 2015 04:30PM

As you probably know, Giving What We Can exists to move donations to the charities that can most effectively help others. Our members take a pledge to give 10% of their incomes for the rest of their lives to the most impactful charities. Alongside other extensive resources for donors such as GiveWell and OpenPhil, we produce and communicate, in an accessible way, research to help members determine where their money will do the most good. We also impress upon members and the general public the vast differences between the best charities and the rest.

Many LessWrongers are members or supporters, including of course the author of Slate Star Codex. We also recently changed our pledge so that people could give to whichever cause they felt best helped others, such as existential risk reduction or life extension, depending on their views. Many new members now choose to do this.

What you might not know is that 2014 was a fantastic year for us - our rate of membership growth more than tripled! Amazingly, our 1066 members have now pledged over $422 million, and already given over $2 million to our top rated charities. We've accomplished this on a total budget of just $400,000 since we were founded. This new rapid growth is thanks to the many lessons we have learned by trial and error, and the hard work of our team of staff and volunteers.

To make it to the end of the year we need to raise just another £110,000. Most charities have a budget in the millions or tens of millions of pounds and we do what we do with a fraction of that.

We want to raise the money as quickly as possible, so that our staff can stop focusing on fundraising (which takes up a considerable amount of energy), and get back to the job of growing our membership.

Some of our supporters are willing to sweeten the deal as well: if you haven't given us more than £1,000 before, then they'll match 1:1 a gift between £1,000 and £5,000.

You can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for our bank details. Info on tax deductible giving from the USA and non-UK Europe is also available on our website.

What we are doing this year

The second half of this year is looking like it will be a very exciting one for us. Four books about effective altruism are being released this year, including one by our own trustee William MacAskill, which will be heavily promoted in the US and UK. The Effective Altruism Summit is also turning into 'EA Global' with events at Google Headquarters in San Francisco, Oxford University and Melbourne, headlined by Elon Musk.

Tens, if not hundreds of thousands of people will be finding out about our philosophy of effective giving for the first time.

To do these opportunities justice Giving What We Can needs to expand its staff to support its rapidly growing membership and local chapters, and ensure we properly follow up with all prospective members. We want to take people who are starting to think about how they can best make the world a better place, and encourage them to make a serious long-term commitment to effective giving, and help them discover where their money can do the most good.

Looking back at our experience over the last five years, we estimate that each $1 given to Giving What We Can has already moved $6, and will likely end up moving between $60 and $100 to the most effective charities in the world. (These are time discounted, counterfactual donations, only to charities we regard very highly. Check out this report for more details.)

This represents a great return on investment, and I would be very sad if we couldn't take these opportunities just because we lacked the necessary funding.

Our marginal hire

If we don't raise this money we will not have the resources to keep on our current Director of Communications. He has invaluable experience as a Communications Director for several high-profile Australian politicians, which has given him skills in web development, public relations, graphic design, public speaking and social media. Amongst the things he has already achieved in his three months here are: automating the book-keeping on our Trust (saving huge amounts of time and minimising errors), greatly improving our published materials including our fundraising prospectus, and writing a press release and planning a media push to capitalise on our reaching 1,000 members and Peter Singer’s book release in the UK.

His wide variety of skills means that there are a large number of projects he would be capable of doing which would increase our member growth, and we are keen for him to test a number of these. His first project would be to optimise our website to make the most of the increased attention effective altruism will be generating over the summer, and turn that into people actually donating 10% of their incomes to the most effective causes. In the past we have had trouble finding someone with such a broad set of crucial skills. Combined with how swiftly and well he has integrated into our team, it would be a massive loss to have to let him go and later need to try to recruit a replacement.

As I wrote earlier you can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for bank details or personalised advice on how to give best. If you need tax deductibility in another country check these pages on the USA and non-UK Europe.

I'm happy to take questions here or by email!

Request for Advice : A.I. - can I make myself useful?

10 zslastman 29 May 2015 09:13AM

I'm a PhD student wrapping up a doctorate in Genomics. I started in biology and switched to analysis because I have stupid hands. My opinion of my field is low. Working in it has, on the bright side, taught me some statistics and programming.  I'm roughly upper 5% on math ability, relative to my college class. Once upon a time I could solve ODEs; now most of my math is gone. However, I'm good with R, and can talk intelligently about mixed linear models, Bayesianism vs. frequentism, and about genetics, biochemistry and developmental biology. It's also taught me that huge segments of the biology literature are a mixture of non-reproducible crap and uninteresting, street-light science, dressed up as progress with deceptive plots and statistics. I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence are going to make human-run biology irrelevant before long. I think the ultimate problems we're tackling (predicting genotype from phenotype, reliable manipulation of biology, curing cancer/aging/death) are insoluble with our current methods - we need effective robots to do the experiments, and A.I. to interpret the results.


Here's what I want to ask the lesswrong hivemind:

 

1) Do you agree? Do you think there are important problems being tackled now in biology that someone with my skillset could be useful in? E.g. analyzing the brain with genetics to try to get a handle on how its algorithms work? (I'm skeptical of this bottom-up approach to the brain myself)

 

2) Do you think there are areas closer to the AI problem (or say, cryonics...) I could be usefully working on?

 

Sorry for bothering you with my personal problems, but I recall a thread a while ago inviting this sort of thing, so I thought I'd give it a try. I'm leaning towards the default option right now, which is to do some more courses so I can, say, bluff my way through Hadoop and Java, and then see how much cash I can earn in a boring private sector job. However, I'd prefer to do something I find intrinsically interesting.

 

Edit: Thanks guys - this has already been helpful.

Ideas to Improve LessWrong

10 adamzerner 25 May 2015 10:55PM

This article is something that has been in my head for a while. I hadn't planned on doing a write-up so soon. I wanted to take the time to a) refine my ideas and b) figure out how to express them clearly before posting. But the recent post Less Wrong lacks direction made me change my mind. My thinking now is that I overestimated the downside (wasting people's time with a less than fully thought out post) and that there's enough value to justify posting a rough draft now.


LessWrong has been one of the most amazing things I've experienced in my life.

  1. I have learned a ton, and have "leveled up" quite a bit.
  2. Knowing that there are this many other relatively rational people in the world and being able to interact with them is a truly amazing thing.
But I see so much opportunity for LW to do more. Below are some thoughts.

Easy

  • A way to discuss ideas for the site, vote on them, and incentivize the generation of good ideas. I sense that having this would be huge. a) I sense that there are a lot of good ideas out there in people's heads but that they haven't shared. b) I sense that by discussing things, there could be a lot of refinement of current ideas, and a lot of generation of new ideas.
  • More generally, my impression is that it'd be a good idea to subdivide sections for posts. Right now it's pretty much Main, Discussion or Open Thread. Ex. someone who has an idea to improve LW might not think it's "Discussion worthy" (or even "Open Thread worthy"), but I sense that if there were a section explicitly for "LW Ideas", they'd be a lot less reluctant to post. More generally, it'd justify more "bite sized posts" rather than requiring a full write-up.
    • One example of a subsection that I think would be cool is a Personal Advice section. The ability to post anonymously seems like it'd be a useful feature here. Other ideas for subsections: AMA!, Brainstorming/Unrefined Thoughts, I Don't Understand X, Contrarian Thoughts.
  • Social coordination:
    • Apartments/living together.
    • "What are you currently learning? What do you want to learn?". so8res recommends pairing up, and I agree.
    • Geographical map of users to facilitate friendships and/or dating. (This already exists. But it seems that a low proportion of LW users added a pin on the map. My impression is that because of network effects, the usefulness of this is very much a function of how many users there are. Also, I sense that there'd need to be a different UI with some sort of organization.)
    • Online chat. Like Facebook. I think it'd be a) cool and b) sometimes useful.
  • Project ideas. There are a lot of smart, skilled and ambitious people here who want to do good things. If LW made it easier to coordinate and work with people, I could see it having a huge impact.


Harder

  • Crowdsource the refinement of posts. 
    • Maybe have an answer wiki for each article that summarizes the main points.
    • Maybe let the author award karma to people who submit a diagram of something explained in the article (I'm a big fan of explaining things visually). Along similar lines, maybe do the same thing for people who submit relevant YouTube videos. Ex. I think that this would be a relevant clip to add to an article about expected value (beware: cringeworthy). (Again, I'm really not a big fan of writing as a medium)
    • Maybe allow collaboration on drafts. And allow the author to award karma to collaborators.
  • Side comments. I really think that for a lot of scenarios, this is a much better UI. But I also think there are use cases for the traditional comments at the bottom of the page, and so there should be both.
  • Make use of some sort of debate tool. I think there are a lot of improvements that could be made to the current approach of having nested comments. It might be sufficient for the level of conversation elsewhere on the internet, but not here.
    • I should emphasize that this seems like it'd be a large and difficult undertaking.
    • But I should also emphasize how important I think it is. Media For Thinking The Unthinkable largely expresses my views here. I think that the mediums we use to write, think and communicate play a large and very underrated role in determining how well we could think. As a society, we don't seem to really recognize this, and we don't seem to have made much progress as far as inventing such tools goes. The importance of such tools goes way beyond LessWrong, but I guess I'm just noting here that LW would benefit greatly from it. I don't think that there are many legitimately deep conversations on LW, and I think that the limitations of nested comments are a big part of the reason why.
    • Along similar lines, I think it's pretty important for there to be a way to highlight and take notes on articles (currently, most people don't). I've been using scrible. Come to think of it, scrible actually isn't that bad of a solution, but I think it'd be awesome if there were a better way to do this built in to the site. (This is another thing I'd like to see across the internet, but I digress...)


There are also a bunch of ideas I have on how the design of the site could be better, but those don't seem relevant to post now (I think we all agree that there's a lot to improve on).

(As far as actually implementing change, things in the "Easy" section seem rather... easy to implement. One option is to just "hack it together". Have "thread" posts, like "Stupid Questions Thread" and use some sort of community managed Google Doc for things like project ideas and social coordination. Idk, I actually haven't thought through what a hacky/MVP implementation would look like. But as far as making changes to the actual site, the code wouldn't be that hard.)

Edit: Some concrete steps to be taken for "version 1s" of the easy ideas. Not well thought out enough; needs vetting.
  • Have a "LessWrong Ideas" thread. Preferably link to it on the sidebar so we don't get too many repeat ideas, so the good ideas have enough time to be voted up, and so ideas don't get "lost in time".
  • Have a "Project Ideas" thread. Preferably link to it on the sidebar, for similar reasons.
  • Have a Project Ideas Google Doc. This would be a list of the more serious and vetted ideas, with brief summaries, skills required, and you could add your name to the list of people interested in working on it.
  • Link to the map of community members on the sidebar. Give, say 50 karma points for adding your data point. I'm not sure how the data would be used for social coordination though. It'd be incredible if there was an API.
    • Actually, maybe it'd be a better idea to create a site that allows users to input their data point on the map, and it'd create the API for us. And you could add things like contact info, interested in finding a roommate? dating? friends/fun activities? On second thought, maybe this is getting too far from the idea of a version 1.
  • Discuss a) the idea of having subsections (ex. Personal Advice, Unrefined Thoughts), and b) which ones we'd like to see. Then create and manage threads based on interest and traction.
  • Have a Google Doc to help people learning the same things pair up. Potential information to include: what you want to learn, how much time per week you want to spend, how many months you'd like to spend learning, fields in which you're knowledgeable (ex. math, psychology, genetics...).
Edit: I didn't realize how reasonable the first steps would be. If after a discussion and vetting people agree on some of these things (or other ideas), I'd be willing to manage the Google Docs and/or threads. I'm also a web developer and would be willing to work with any code for the site. I only have about a year of experience though, so I wouldn't be able to lead any efforts.

Things in the "Harder" section seem, well... harder. It seems that it'd take a non-trivial amount of coding. Especially the debate tools stuff - that'd take an incredible amount of time, expertise and iteration. I think that they're absolutely worth the effort though.


Anyway, at this point I don't think I'm even proposing anything (I haven't thought it through nearly enough). But I do sense that the ideas are thought out enough to start a conversation. Thoughts?

No peace in our time?

9 Stuart_Armstrong 26 May 2015 02:41PM

There's a new paper arguing, contra Pinker, that the world is not getting more peaceful:

On the tail risk of violent conflict and its underestimation

Pasquale Cirillo and Nassim Nicholas Taleb

Abstract—We examine all possible statistical pictures of violent conflicts over common era history with a focus on dealing with incompleteness and unreliability of data. We apply methods from extreme value theory on log-transformed data to remove compact support, then, owing to the boundedness of maximum casualties, retransform the data and derive expected means. We find the estimated mean likely to be at least three times larger than the sample mean, meaning severe underestimation of the severity of conflicts from naive observation. We check for robustness by sampling between high and low estimates and jackknifing the data. We study inter-arrival times between tail events and find (first-order) memorylessless of events. The statistical pictures obtained are at variance with the claims about "long peace".

Every claim in the abstract is supported by the data - with the exception of the last claim. Which is the important one, as it's the only one really contradicting the "long peace" thesis.

Most of the paper is an analysis of trends in peace and war establishing that what we see throughout conflict history is consistent with a memoryless power-law process whose mean we underestimate from the sample. That is useful and interesting.

However, the paper does not compare the hypothesis that the world is getting more peaceful with the alternative hypothesis that it's business as usual. Note that it's not cherry-picking to suggest that the world might be getting more peaceful since 1945 (or 1953). We've had the development of nuclear weapons, the creation of the UN, and the complete end of direct great power wars (a rather unprecedented development). It would be good to test this hypothesis; unfortunately this paper, while informative, does not do so.

The only part of the analysis that could be applied here is the claim that:

For an event with more than 10 million victims, if we refer to actual estimates, the average time delay is 101.58 years, with a mean absolute deviation of 144.47 years

This could mean that the peace since the second world war is not unusual, but could be quite typical. But this ignores the "per capita" aspect of violence: the more people, the more deadly events we expect at the same per capita violence. Since the current population is so much larger than it has ever been, the average time delay is certainly lower than 101.58 years. They do have a per capita average time delay - table III. Though this seems to predict events with 10 million casualties (per 7.2 billion people) every 37 years or so. That's 3.3 million casualties just after WW2, rising to 10 million today. This has never happened so far (unless one accepts the highest death toll estimate of the Korean war; as usual, it is unclear whether 1945 or 1953 was the real transition).
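To make the scaling explicit (a rough back-of-the-envelope calculation; the post-war world population figure of roughly 2.4 billion is my assumption, not a number from the paper):

$$10\text{M} \times \frac{2.4\text{B}}{7.2\text{B}} \approx 3.3\text{M}$$

casualties per qualifying event shortly after WW2, at the same per capita severity that corresponds to 10 million casualties today.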

This does not prove that the "long peace" is right, but at least shows the paper has failed to prove it wrong.

[Link] Small-game fallacies: a Problem for Prediction Markets

8 Antisuji 28 May 2015 03:32AM

Nick Szabo writes about the dangers of taking assumptions that are valid in small, self-contained games and applying them to larger, real-world "games," a practice he calls a small-game fallacy.

Interactions between small games and large games infect most works of game theory, and much of microeconomics, often rendering such analyses useless or worse than useless as a guide for how the "players" will behave in real circumstances. These fallacies tend to be particularly egregious when "economic imperialists" try to apply the techniques of economics to domains beyond the traditional efficient-markets domain of economics, attempting to bring economic theory to bear to describe law, politics, security protocols, or a wide variety of other institutions that behave very differently from efficient markets. However as we shall see, small-game fallacies can sometimes arise even in the analysis of some very market-like institutions, such as "prediction markets."

This last point, which he expands on later in the post, will be of particular interest to some readers of LW. The idea is that while a prediction market does incentivize feeding accurate information into the system, the existence of the market also gives rise to parallel external incentives. As Szabo glibly puts it,

A sufficiently large market predicting an individual's death is also, necessarily, an assassination market...

Futarchy, it seems, will have some kinks to work out.

Less Wrong lacks direction

8 casebash 25 May 2015 02:53PM

I think the greatest issue with Less Wrong is that it lacks direction. There doesn't appear to be anyone driving it forward or helping the community achieve its goals. At the start this role was taken by Eliezer, but he barely seems active these days. The expectation seems to be that things will happen spontaneously, on their own. And that has worked for a few things (e.g. the subreddit, the study hall, etc.), but on the whole the community is much less effective than it could be.

I want to give an example of how things could work. Let's imagine Less Wrong had some kind of executive (as opposed to moderators who just keep everything in order). At the start of the year, they could create a thread asking what goals the community thought were important for Less Wrong - e.g. increasing the content in Main, producing more content for a general audience, or increasing the female participation rate.

They would then have a Skype meeting to discuss the feedback and to debate which ones they wanted to primarily focus on. Suppose for example they decided they wanted to increase the content in Main. They might solicit community feedback on what kinds of articles people would like to see more of. They might contact people who wrote discussion posts that were Main quality and suggest they submit some content there instead. They could come up with ideas for new kinds of content LW might find useful (e.g. project management) and seed the site with content in that area so that people understand that kind of content is desired.

These roles would take significant work, but I imagine people would be motivated to do this by altruism or status. By discussing ideas in person (instead of just over the internet), there would be more of an opportunity to build a consensus, and they would be able to make more progress towards addressing these issues.

If a group said that they thought A was an important issue and the solution was X, most members would pay more attention than if a random individual said it. No-one would have to listen to anything they say, but I imagine that many would choose to. Furthermore if the exec were all actively involved in the projects, I imagine they'd be able to complete some smaller ones themselves, or at least provide the initial push to get it going.

SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity

6 tog 27 May 2015 05:08AM

(Continuing the posting of select posts from Slate Star Codex for comment here, as discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.)

Scott recently wrote a post called Bicameral Reasoning. It touches on epistemology and scope insensitivity. Here are some excerpts, though it's worth reading the whole thing:

Delaware has only one Representative, far less than New York’s twenty-seven. But both states have an equal number of Senators, even though New York has a population of twenty million and Delaware is uninhabited except by corporations looking for tax loopholes.

[...]

I tend to think something like “Well, I agree with this guy about the Iraq war and global warming, but I agree with that guy about election paper trails and gays in the military, so it’s kind of a toss-up.”

And this way of thinking is awful.

The Iraq War probably killed somewhere between 100,000 and 1,000,000 people. If you think that it was unnecessary, and that it was possible to know beforehand how poorly it would turn out, then killing a few hundred thousand people is a really big deal. I like having paper trails in elections as much as the next person, but if one guy isn’t going to keep a very good record of election results, and the other guy is going to kill a million people, that’s not a toss-up.

[...]

I was thinking about this again back in March when I had a brief crisis caused by worrying that the moral value of the world’s chickens vastly exceeded the moral value of the world’s humans. I ended up being trivially wrong – there are only about twenty billion chickens, as opposed to the hundreds of billions I originally thought. But I was contingently wrong – in other words, I got lucky. Honestly, I didn’t know whether there were twenty billion chickens or twenty trillion.

And honestly, 99% of me doesn’t care. I do want to improve chickens, and I do think that their suffering matters. But thanks to the miracle of scope insensitivity, I don’t particularly care more about twenty trillion chickens than twenty billion chickens.

Once again, chickens seem to get two seats to my moral Senate, no matter how many of them there are. Other groups that get two seats include “starving African children”, “homeless people”, “my patients in hospital”, “my immediate family”, and “my close friends”.

[...]

I’m tempted to say “The House is just plain right and the Senate is just plain wrong”, but I’ve got to admit that would clash with my own very strong inclinations on things like the chicken problem. The Senate view seems to sort of fit with a class of solutions to the dust specks problem where after the somethingth dust speck or so you just stop caring about more of them, with the sort of environmentalist perspective where biodiversity itself is valuable, and with the Leibnizian answer to Job.

But I’m pretty sure those only kick in at the extremes. Take it too far, and you’re just saying the life of a Delawarean is worth twenty-something New Yorkers.

Thoughts?

Approximating Solomonoff Induction

4 Houshalter 29 May 2015 12:23PM

Solomonoff Induction is a sort of mathematically ideal specification of machine learning. It works by trying every possible computer program and testing how likely each is to have produced the data, then weighting the programs by their prior probability.
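For concreteness, here is the standard form of that weighting (my addition; the post doesn't spell it out). Each program p gets prior weight 2^{-ℓ(p)}, where ℓ(p) is its length in bits, so the probability assigned to a data string x is:

$$M(x) \;=\; \sum_{p \,:\, U(p)\text{ outputs } x} 2^{-\ell(p)}$$

where U is a universal (prefix) Turing machine. Shorter programs that reproduce the data dominate the sum, which is where the Occam's-razor flavour comes from.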

Obviously Solomonoff Induction is impossible to do in the real world. But it forms the basis of AIXI and other theoretical work in AI. It's a counterargument to the no free lunch theorem: we don't care about the space of all possible datasets, only the ones generated by some algorithm. It's even been proposed as a basis for a universal intelligence test.

Many people believe that trying to approximate Solomonoff Induction is the way forward in AI. And any machine learning algorithm that actually works, to some extent, must be an approximation of Solomonoff Induction.

But how do we go about trying to approximate true Solomonoff Induction? It's basically an impossible task. Even if you make restrictions to remove all the obvious problems like infinite loops and non-halting behavior, the space of possibilities is just too huge to reasonably search through. And it's discrete - you can't just flip a few bits in a program and find another similar program.

We can simplify the problem a great deal by searching through logic circuits. Some people disagree about whether logic circuits should be classified as Turing complete, but it's not really important. We still get the best property of Solomonoff Induction: that it allows most interesting problems to be modelled much more naturally. In the worst case you have some overhead to specify the memory cells you need to emulate a Turing machine.

Logic circuits have some nicer properties compared to arbitrary computer programs, but they still are discrete and hard to do inference on. To fix this we can easily make continuous versions of logic circuits. Go back to analog. It's capable of doing all the same functions, but also working with real valued states instead of binary.

Instead of flipping between discrete states, we can slightly increase connections between circuits, and it will only slightly change the behavior. This is very nice, because we have algorithms like MCMC that can efficiently approximate true Bayesian inference on continuous parameters.

And we are no longer restricted to boolean gates; we can use any function that takes real numbers, like one that takes the sum of all of its inputs, or one that squishes a real number between 0 and 1.

We can also look at how much changing the input of a circuit slightly, changes the output. Then we can go to all the circuits that connect to it in the previous time step. And we can see how much changing each of their input changes their output, and therefore the output of the first logic gate.

And we can go to those gates' inputs, and so on, chaining it all the way through the whole circuit. Finding out how much a slight change to each connection will change the final output. This is called the gradient, and we can then do gradient descent. Basically change each parameter slightly in the direction that increases the output the way we want.

This is a very efficient optimization algorithm. With it we can rapidly find circuits that fit functions we want. Like predicting the price of a stock given the past history, or recognizing a number in an image, or something like that.
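Here is a minimal sketch of that procedure: a tiny continuous circuit of sigmoid gates, trained by the gradient chase described above to fit XOR. The architecture, learning rate and target function are illustrative choices of mine, not anything from the post.

```python
import numpy as np

# A tiny "continuous circuit": a layer of sigmoid gates feeding one output gate.
# Connection strengths (weights) are real-valued instead of wired/not-wired.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden connections
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=4)        # hidden -> output connections
b2 = 0.0

# Function to fit: XOR, which a single gate cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

lr = 2.0
for step in range(10000):
    # Forward pass through the circuit.
    h = sigmoid(X @ W1 + b1)                   # hidden gate activations
    out = sigmoid(h @ W2 + b2)                 # output gate activation
    err = out - y                              # squared-error gradient w.r.t. out

    # Backward pass: the chain rule tells us how much a slight change to each
    # connection changes the final output (the gradient described above).
    d_out = err * out * (1 - out)
    grad_W2 = h.T @ d_out
    grad_b2 = d_out.sum()
    d_h = np.outer(d_out, W2) * h * (1 - h)    # propagate back to the hidden gates
    grad_W1 = X.T @ d_h
    grad_b1 = d_h.sum(axis=0)

    # Nudge every connection slightly in the direction that reduces the error.
    W2 -= lr * grad_W2 / len(X)
    b2 -= lr * grad_b2 / len(X)
    W1 -= lr * grad_W1 / len(X)
    b1 -= lr * grad_b1 / len(X)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should be close to [0, 1, 1, 0]
```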

But this isn't quite Solomonoff Induction, since we are finding the best single model instead of testing the space of all possible models. This is important because each model is essentially a hypothesis, and there can be multiple hypotheses that fit the data yet predict different things.

There are many tricks we can do to approximate this. For example, you can randomly turn off each gate with 50% probability and then optimize the whole circuit to deal with this (the technique known as dropout). For some reason this somewhat approximates the results of true Bayesian inference. You can also fit a distribution over each parameter, instead of a single value, and approximate Bayesian inference that way.
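A minimal sketch of that first trick, assuming nothing beyond numpy (the tiny untrained circuit and all the numbers below are placeholders of mine): keep the random gate-dropping switched on at prediction time and run many stochastic passes, so the spread of the outputs stands in for a spread over models.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative only: random, untrained connection strengths for a tiny circuit.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=8)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_forward(x, keep_prob=0.5):
    """One forward pass with each hidden gate randomly switched off."""
    h = sigmoid(x @ W1)
    mask = rng.random(h.shape) < keep_prob    # drop each gate with 50% probability
    h = h * mask / keep_prob                  # rescale so the expected signal is unchanged
    return sigmoid(h @ W2)

x = np.array([0.3, 0.7])
samples = np.array([stochastic_forward(x) for _ in range(1000)])
# The mean and spread of the samples act as a rough predictive distribution,
# rather than the single point estimate a deterministic pass would give.
print(samples.mean(), samples.std())
```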

Although I never said it, everything I've mentioned about continuous circuits is equivalent to Artificial Neural Networks. I've shown how they can be derived from first principles. My goal was to show that ANNs do approximate true Solomonoff Induction. I've found the Bayes-Structure.

It's worth mentioning that Solomonoff Induction has some problems. It's still an ideal way to do inference on data; it just has problems with self-reference. An AI based on SI might do bad things like believe in an afterlife, or replace its reward signal with an artificial one (e.g. drugs). It might not fully comprehend that it's just a computer, and exists inside the world that it is observing.

Interestingly, humans also have these problems to some degree.

Reposted from my blog here.

New Alzheimer’s treatment fully restores memory function in mice

4 Bound_up 27 May 2015 02:33AM

The team reports fully restoring the memory function of 75 percent of the mice they tested it on, with zero damage to the surrounding brain tissue.

"We’re extremely excited by this innovation of treating Alzheimer’s without using drug therapeutics."

The team says they’re planning on starting trials with higher animal models, such as sheep, and hope to get their human trials underway in 2017.

 

http://www.sciencealert.com/new-alzheimer-s-treatment-fully-restores-memory-function

Group Bragging Thread (May 2015)

3 Viliam 29 May 2015 10:36PM

This is similar to the usual bragging threads, with one major exception: this thread is for groups, not individuals.

Please comment on this thread explaining awesome things that you have done with your fellow rationalists as a group. The lower bound on group size is three people.

Otherwise the rules are analogous: be as blatantly proud of your group as you feel; consider your group the coolest freaking group ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that. - No work in progress, no proposals, only the awesome things you have already done.

And because this is the first such thread, feel free to write about anything that happened between the extinction of the dinosaurs and the end of May 2015.

 

(Yes, organizing Less Wrong meetups is a valid example of an activity that belongs here, if at least three people participated. Please try to include more details than merely "we organized a LW meetup in <city_name>".)

[Link] Mainstream media writing about rationality-informed approaches

3 Gleb_Tsipursky 24 May 2015 01:18AM

Wanted to share two articles published in mainstream media, namely Ohio newspapers, about how rationality-informed strategies help people improve their lives.

This one is about improving one's thinking, feeling, and behavior patterns overall, and especially one's highest-order goals, presented as "meaning and purpose."

This one is about using rationality to deal with mental illness, and specifically highlights the strategy of "in what world do I want to live?"

I know about these two articles because I was personally involved in their publication as part of my broader project of spreading rationality widely. What other articles are there that others know about?

Weekly LW Meetups

2 FrankAdamek 29 May 2015 03:35PM

This summary was posted to LW Main on May 22nd. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


A Challenge: Maps We Take For Granted

2 Sable 29 May 2015 03:50AM

Imagine that you were instantly transported into (roughly) the 13th century.  I'm not great at history, but I'm picturing sometime around the crusades.  You're sitting there, reading this post on your computer, and BAM!  Some guy in chain mail is asking you if thou art the spawn of a demon.

 

Given this situation, I present to you a challenge:

 

You are stranded in the past.  You have no modern technology except your everyday clothes.  The only thing you do have is your knowledge from the future.

 

What do you do?

 

I'll make this a little more structured for the sake of clarity.

1) You appear in Great Britain (or the appropriate analogue for your native culture).

2) Assume the language barrier is surmountable - in other words, it may not be easy, but you can communicate effectively (by learning the language, or simply adapting to an older version of your native tongue).

3) Further assume that you manage to gain the ear of a ruling lord (how is not important, just say you're a wizard or something) and that he provides you with enough money, labor, and expertise (carpenters, smiths, etc.) to build something *so long as you can describe it in enough detail*.

4) You are only allowed to pull from general, scientifically literate knowledge - high school/bachelor's level only.

5) You can't use your knowledge of future events to your advantage, as it requires too expert a grasp of history.  Only your knowledge of the way the world actually works is available.

 

The reason for 4) has to do with the point of the question.  I'm trying to figure out the kind of maps that we have today that are considered "general knowledge" - the kinds of things that are so obvious to us we tend to not realize that people in the past didn't know them.  

 

I'll go first.

 

The germ theory of disease didn't achieve widespread acceptance until the 19th century.  In other words, I'm the only person in the past who is quite confident about how diseases are spread.  This means that I can offer practical advice about sanitation when dealing with injuries and plagues.  I can make sure that people wash their hands before cutting other people up, and after dealing with corpses.  I can make sure that cutting instruments are sanitized (they did have alcohol) before use.  And so on. This should reduce the number of deaths from disease in the kingdom, and prove my worth to the king.

 

I'm trying to build a list of things like this - maps of the way the world really is that we take for granted.

 

Have fun!

Dissolving philosophy

2 DeVliegendeHollander 26 May 2015 10:45AM

Summary: a large chunk of the history of Western philosophy is about finding out by what kinds of less conscious algorithms the human mind arrives at certain intuitions. 

In Plato's Republic, Socrates runs around Athens talking with people, trying to find an answer to the question: "What is justice?" Two and a half thousand years later we still don't have a truly definitive answer. We could spend another thousand years or two pondering it, but I suspect it would be better to reformulate the question in a more answerable way. So let's look at what Socrates is trying to do here, what his method is, and what his actual question is!

It is not an empirical, scientific question that can be answered by observing something whose existence is independent of the human mind. Rather, the question is about a feature of the human mind, not a feature of the external reality out there. 

However Socrates is not simply conducting an opinion survey. He is not content simply to find that 74% of Athenians think justice means obeying laws. Socrates also argues against definitions of justice he considers _wrong_. 


So, apparently, justice in this question relates to something that does not exist outside the human mind, but we can still have wrong opinions about it.

The method Socrates is employing is the following. He assumes that when people see an actual action, they can intuitively judge it just or unjust, and that judgement will be seen as _correct_. Well, not always, but at least when they are dispassionate and have no vested interest. So according to Socrates, any definition of justice can be tested by thought experiments that are sufficiently dispassionate and disinterested for the audience that they will actually use their Justice Sensors to form a judgement about them, and not, say, their passions like anger or greed, or their interests.

What Socrates is doing here, then, is asking people to make an algorithm that predicts which acts a dispassionate and disinterested observer will find just or unjust.

Example: "I think justice is paying debts." "Okay dude, but what if you borrowed a sword from a friend and now you see he is really mad at people and wants to go on a murderous rampage. Would it be just / righteous / correct to pay the debt and return the sword now?" "Uh, no."

This means: "I propose this algorithm." "This algorithm predicts you would find hypothetical situation X  just. Would you?" "Uh, no."
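One could render that exchange as a tiny test harness (the candidate rule and the cases below are invented for illustration; they are not Plato's):

```python
# Propose a candidate algorithm for "just", then check its predictions
# against intuitive judgements of concrete cases, Socrates-style.

def justice_is_paying_debts(case):
    return case["repays_debt"]

cases = [
    {"name": "returning a borrowed book",              "repays_debt": True,  "intuition": True},
    {"name": "returning a sword to an enraged friend", "repays_debt": True,  "intuition": False},
    {"name": "keeping money you promised to repay",    "repays_debt": False, "intuition": False},
]

for case in cases:
    predicted = justice_is_paying_debts(case)
    verdict = "matches intuition" if predicted == case["intuition"] else "COUNTEREXAMPLE"
    print(f'{case["name"]}: predicted just = {predicted} -> {verdict}')

# The sword case comes out as a counterexample, so the candidate algorithm is rejected.
```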

The big question: is he looking for any algorithm that _happens_ to predict human intuitions of justice, or looking for the algorithm the human brain _actually_ uses? Well, they probably did not know much about algorithms back then, and they considered the brain an organ for cooling the blood. But from our own angle, since we know the brain uses algorithms, any algorithm that predicts really well what another algorithm does is more or less the same algorithm.

So, "What is justice?" roughly means this: "What algorithm does our brain use when we intuitively consider something just or unjust?"

I am not claiming you can reduce all of philosophy to this, but apparently a significant chunk of Western philosophy ("footnotes to Plato") you can. 

If we see philosophy this way, we can also see better how it overlaps with science and yet why it is distinct from it. The basic ideas are the same: propose hypotheses, test them with (thought) experiments. The difference is that science is focused on looking outward, on the observable reality outside the mind. When science wants to learn about the brain, it invariably treats it as an external object and manipulates and observes it as such, for example looking at which areas of neurons light up under an fMRI scan.

Philosophy is, apparently, a form of cognitive science, a way of learning about the brain that looks inward, not outward: here the experimenter observes his own brain from the inside, and generally tries to consciously notice the subconscious algorithms his brain works with.


This is also why philosophy can feel so "truthy" on the gut level. You can have these kinds of "I knew it! I knew it all along, dammit, just did not connect the dots!" types of euphoric eureka experiences (or: "how could I have been so stupid" types of experiences) far more often in philosophy or math than in the empirical sciences such as biology, because here you study how your own brain works, and you study it from the inside. It is about one part of your brain learning how the other part works. (OK, physics is empirical enough and yet it happens. But the point is, it does not really happen in the empirical part of physics, like measuring the weight of a particle. It happens in the mathematical parts of physics.)

Request for advices on small presentation about LW community

2 efim 26 May 2015 07:30AM

In a couple of weeks I'll be giving a small (~50 minute) presentation about the LW community at a "social sciences Sunday" in Saint Petersburg.

Target audience: students, teachers and young researchers, mostly from the social sciences and humanities.

I'm planning to at least mention in passing:

1) rationality: epistemological and practical division

2) virtues of rationality

3) big part of learning is by osmosis

4) about the Sequences => some ideas I found engaging (but those that would at the same time be easier to explain in 10 minutes) - definitely about inferential distances and looking wise

maybe mention Milgram's experiments or the anecdote about Pain and Gain motivation

5) study hall (I tried it just for a bit), meetups, related projects - CFAR (anything else?), Intentional Insights, slatestarcodex?

There is also this

I'm not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LW'ers.

I am going to spend some more time preparing and will probably have some good ideas, but it would be really great to have opinions from others. Am I missing something? Or does anyone have relevant experience?

Open Thread, May 25 - May 31, 2015

2 Gondolinian 25 May 2015 12:00AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

How do we learn from errors?

2 ChristianKl 24 May 2015 09:56PM

Mark Friedenbach's post Leaving LessWrong for a more rational life makes a few criticisms of the way LW approaches rationality. It's not focused enough on empiricism: while he grants that lip service is paid to empiricism, Mark argues that LW isn't empirical enough in practice.

Part of empiricism is learning from errors. How do you deal with learning from your own errors? What was the last substantial error you made that made you learn and think differently about the issue in question?

Do you have a framework for thinking about the issue of learning through errors? Do you have additional questions regarding the issue of learning through errors that are worth exploring?

"Immortal But Damned to Hell on Earth"

1 Bound_up 29 May 2015 07:55PM

http://www.theatlantic.com/technology/archive/2015/05/immortal-but-damned-to-hell-on-earth/394160/

 

With such long periods of time in play (if we succeed), the improbable hellish scenarios which might befall us become increasingly probable.

Since the probability of death never quite reaches 0, even with advanced science, death might yet be inevitable over a long enough timespan.

But the same applies to a hellish life in the meantime. And the longer the life, the more likely it is that the survivors will envy the dead. Is there any safety in this universe? What's the best we can do?

Learning to get things right first time

0 owencb 29 May 2015 10:06PM

These are quick notes on an idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI.

 

Motivation:

  • Most challenges can be approached with trial and error, so many of our habits and social structures are set up to encourage this. There are some challenges where we may not get that opportunity, and it could be very helpful to know which methods help you to tackle a complex challenge that you need to get right first time.

  • Giving an artificial intelligence good values may be a particularly important challenge, and one where we need to be correct first time. (Distinct from creating systems that act intelligently at all, which can be done by trial and error.)

  • Building stronger societal knowledge about how to approach such problems may make us more robustly prepared for such challenges. Having more programmers in the AI field familiar with the techniques is likely to be particularly important.

 

Idea: Develop methods for training people to write code without bugs.

  • Trying to teach the skill of getting things right first time.

  • Writing or editing code that has to be bug-free without any testing is a fairly easy challenge to set up, and has several of the right kind of properties (a hypothetical example of such an exercise is sketched after this list). There are some parallels between value specification and programming.

  • Set-up puts people in scenarios where they only get one chance -- no opportunity to test part/all of the code, just analyse closely before submitting.

    • Interested in personal habits as well as social norms or procedures that help this.

      • Daniel Dewey points to the coding standards for the space shuttle as a good example of how to achieve highly reliable code edits.
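
To make the proposed set-up concrete, here is a minimal, hypothetical example of the kind of one-shot exercise the idea envisages; the task, function name, and rules are illustrative assumptions of mine, not part of the original proposal. The trainee must analyse and submit the code without ever running it, and is scored against a hidden test suite afterwards. Binary search is a natural choice of task because most first attempts contain off-by-one or midpoint errors.

def first_index_at_least(xs, target):
    """Return the smallest index i such that xs[i] >= target,
    or len(xs) if no such index exists. xs must be sorted ascending."""
    lo, hi = 0, len(xs)          # invariant: the answer always lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2     # Python ints don't overflow, so this midpoint is safe
        if xs[mid] < target:
            lo = mid + 1         # everything up to and including mid is too small
        else:
            hi = mid             # mid is still a candidate answer
    return lo

# Checks of this kind would live in the hidden test suite, not with the trainee:
assert first_index_at_least([1, 3, 3, 7], 3) == 1
assert first_index_at_least([1, 3, 3, 7], 8) == 4
assert first_index_at_least([], 5) == 0

The interesting data from such an exercise would be which habits (invariant annotations, pre-submission checklists, peer review) most reliably produce correct submissions on the first try.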

 

How to implement:

  • Ideal: Offer this training to staff at software companies, for profit.

    • Although it’s teaching a skill under artificial hardship, it seems plausible that it could teach enough good habits and lines of thinking to noticeably increase productivity, so people would be willing to pay for this.

    • Because such training could create social value in the short run, this might give a good opportunity to launch as a business that is simultaneously doing valuable direct work.

    • Similarly, there might be a market for a consultancy that helped organisations to get general tasks right the first time, if we knew how to teach that skill.

  • More funding-intensive, less labour-intensive: run competitions with cash prizes

    • Try to establish it as something like a competitive sport for teams.

    • Outsource the work of determining good methods to the contestants.

 

This is all quite preliminary and I’d love to get more thoughts on it. I offer up this idea because I think it would be valuable, but it is not my comparative advantage. If anyone is interested in a project in this direction, I’m very happy to talk about it.

Exponential Finance Meetup(NYC)?

0 skilesare 26 May 2015 04:53PM

Anyone going to the Exponential Finance conference being put on by Singularity University next week? (Tuesday and Wednesday)

http://exponential.singularityu.org/finance/

I'm going to be there participating in the XCS challenge with my Democratic Hypercapitalism venture.

I've posted about it a couple of times here and here.

I'd love to meet up with some folks from the community and talk more about my ideas and rationality.

Why is a goal a good thing?

-1 Elo 29 May 2015 03:00AM

Setting goals is widely treated as something important, something that should be done. Why?

The advocates of goal-setting (and the sheer number of them) imply that there is a reason behind the concept.

 

I have to emphasise that I don't want answers that suggest "don't set goals", as is occasionally written.  I specifically want answers that explain why goals are good.  See http://zenhabits.net/no-goal/ for more ideas on not having goals.

 

I have to emphasise again that I don't mean to discredit goals, or to suggest that Dilbert creator Scott Adams' "make systems, not goals" advice is better or should be followed more than "set goals" (see http://blog.dilbert.com/post/102964992706/goals-vs-systems ).  I specifically want to ask: why should we set goals?  (Because the answer is not intuitive or clear to me.)

 

Here in ROT13 is a theory; please make a suggestion first before translating:

Cer-qrpvqrq tbnyf npg nf n thvqryvar sbe shgher qrpvfvbaf; Tbnyf nffvfg jvgu frys pbageby orpnhfr lbh pna znxr cer-cynaarq whqtrzragf (V.r. V nz ba n qvrg naq pna'g rng fhtne - jura cerfragrq jvgu na rngvat-qrpvfvba). Jura lbh trg gb n guvaxvat fcnpr bs qrpvfvbaf gung ner ybat-grez be ybat-ernpuvat, gb unir cerivbhfyl pubfra tbnyf (nffhzvat lbh qvq gung jryy; jvgu pbeerpg tbny-vagreebtngvba grpuavdhrf); jvyy yrnq lbh gb znxr n orggre qrpvfvba guna bgurejvfr hacynaarq pubvprf.

Gb or rssrpgvir - tbnyf fubhyq or zber guna whfg na vagragvba. "V jnag gb or n zvyyvbanver", ohg vapyhqr n fgengrtl gb cebterff gbjneqf npuvrivat gung tbny.  (fgevpgyl fcrnxvat bhe ybpny YrffJebat zrrghc'f tbny zbqry vf 3 gvrerq; "gur qernz". "gur arkg gnetrg". "guvf jrrx'f npgvba" Jurer rnpu bar yrnqf gb gur arkg bar.  r.t. "tb gb fcnpr", "trg zl qrterr va nrebfcnpr ratvarrevat", "fcraq na ubhe n avtug fghqlvat sbe zl qrterr")

Qvfnqinagntr bs n tbnyf vf vg pna yvzvg lbhe bccbeghavgl gb nafjre fvghngvbaf jvgu abiry nafjref. (Gb pbagvahr gur fnzr rknzcyr nf nobir - Jura cerfragrq jvgu na rngvat pubvpr lbh znl abg pbafvqre gur pubvpr gb "abg rng nalguvat" vs lbh gubhtug uneq rabhtu nobhg vg; ohg ng yrnfg lbh zvtug pubbfr gur fyvtugyl urnyguvre bcgvba orgjrra ninvynoyr sbbqf).


 

I suspect that the word "goals" will need a good tabooing; feel free to do so if you think that is needed in your explanation.

A resolution to the Doomsday Argument.

-1 Eitan_Zohar 24 May 2015 05:58PM

A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To get around this, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly a simulation.

Plausible?

Prior probabilities and statistical significance

-1 [deleted] 24 May 2015 10:00AM

How does using priors affect the concept of statistical significance? The scientific convention is to use a 5% threshold for significance, no matter whether the hypothesis has been given a low or a high prior probability.

If we momentarily set aside any general methodological issues with statistical significance itself, how does the use of priors specifically affect the appropriateness of a fixed significance threshold?
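
One way to make the question concrete (an illustrative sketch of my own, not part of the original post): treat "significant at the 5% level" as evidence and apply Bayes' theorem, assuming some value for the test's power. The posterior probability of the hypothesis then depends heavily on its prior; the alpha, power, and prior values below are illustrative assumptions.

def prob_hypothesis_given_significant(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | result significant), by Bayes' theorem."""
    true_positives = prior * power          # hypothesis true and the test rejects H0
    false_positives = (1 - prior) * alpha   # hypothesis false but the test rejects H0 anyway
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    posterior = prob_hypothesis_given_significant(prior)
    print(f"prior = {prior:>4}: P(true | p < 0.05) ~= {posterior:.2f}")

With these assumptions, the same "p < 0.05" result gives a posterior of roughly 0.94 for a 50% prior, 0.64 for a 10% prior, and only 0.14 for a 1% prior, which is one way of seeing why a fixed 5% threshold means very different things for a priori plausible and a priori implausible hypotheses.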