
Open thread, Jul. 25 - Jul. 31, 2016

3 MrMind 25 July 2016 07:07AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

A rational unfalsifiable belief

1 Arielgenesis 25 July 2016 02:15AM

I'm trying to argue that it is possible for someone rational to hold an unfalsifiable belief and remain rational.

There are three people in a room: Adam, Cain, and Abel. Abel was murdered. Adam and Cain were taken into police custody. The investigation was thorough, but it remained inconclusive; the technology was not advanced enough to produce conclusive evidence. The arguments amounted to "you did it" - "no, you did it".

Adam has a wife named Eve. Eve believed that Adam was innocent. She believed so because she knew Adam very well, and the Adam she knew wouldn't commit murder. She used Adam's character and her personal relationship with him as evidence.

Cain, trying to defend himself, asked Eve what it would take for her to change her belief. She replied, "Show me a video recording, then I would believe otherwise." But there was no video recording. Then she said, "Show me any other evidence that is as strong as a video recording." But there was no such evidence either.

Cain pointed out, "The evidence you use for your belief is your personal relationship with Adam and his character. If there were evidence against his character, would you change your mind?"

After some thought and reflection, she finally said, "Yes, if it could be proven that I have been deceived all these years, then I would believe otherwise."

All of Adam's effects were gathered and analysed. The search was so thorough that no new evidence about what Adam had done before being taken into custody could ever surface. Everything pointed to Adam's good character.

Eve was happy. Cain was not, so he took it one step further. He proposed, "Eve, people can change. If Adam changed in the future into a man of bad character, would you be convinced that he could have been the murderer?"

"Yes, if Adam changed, then I would believe that it is possible for Adam to be the murderer." Eve said. 

Unfortunately, Adam died the next day. Cain said to Eve, "How do you propose that your belief in Adam's innocence could be falsified now?"

"It cannot be falsified now." Eve replied. 

"Then you must be irrational."

  • Is Eve irrational?
  • Can holding an unfalsifiable belief be rational?
  • Can this argument be extended to belief in God?
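
One way to approach the first two questions: Eve's use of character evidence can be framed as an ordinary Bayesian update, independent of whether any decisive falsifying test is still available. Here's a toy sketch; the prior and the likelihoods are invented purely for illustration and are not part of the story:

    # Two suspects, so start from a neutral prior of 0.5 that Adam is innocent.
    prior_innocent = 0.5

    # Suppose the character evidence Eve has is ten times more likely if Adam
    # is innocent than if he is a murderer (an invented likelihood ratio).
    p_evidence_given_innocent = 0.90
    p_evidence_given_guilty = 0.09

    posterior = (p_evidence_given_innocent * prior_innocent) / (
        p_evidence_given_innocent * prior_innocent
        + p_evidence_given_guilty * (1 - prior_innocent)
    )
    print(posterior)  # ~0.91: character evidence alone can move belief a long way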
 

Street Epistemology - letting people name their thinking errors

3 Bound_up 24 July 2016 07:43PM

https://www.youtube.com/watch?v=Exmjlc4PfEQ

 

Anthony Magnabosco does what he calls Street Epistemology, usually applying it to supernatural (usually religious) beliefs.

 

The great thing about his method (and his manner - the guy's super personable) is that he avoids the social structure of a debate: two people arguing, a zero-sum game where one person wins at the other's loss.

 

I've struggled to figure out how to let people save face in disputes (when they're making big, awful mistakes). I've even considered planting minor errors (ones that don't affect the main point) in my own arguments, so that the other person could point them out and we could both admit we were wrong (in their case, about things which do affect the main point) and move on.

 

But this guy's technique manages to invite people to correct their own errors (people are SOOOO much more rational when they're not defensive), and they DO it. No awkwardness, no discomfort: people point out the flaws in their own arguments, THANK him for the talk afterwards, and refer their friends to him - even though they've just admitted that their cherished beliefs might not deserve the certainty they've been giving them.

 

This video applies the method to religion, but it seems to me a generally useful approach whenever you confront someone making an error in their thinking. Are you forcing people to swallow their pride a little (over and over) when they talk with you? Take that out of the conversation, and watch how much more open people can be.

Meetup : Welcome Scott Aaronson to Texas

1 Vaniver 25 July 2016 01:27AM


WHEN: 13 August 2016 06:00:00PM (-0500)

WHERE: 4212 Hookbilled Kite Dr Austin, TX 78738

...probably.

We're having another all-Texas party in Austin. If all goes well, we'll be welcoming Scott Aaronson, who's moved here to teach at UT Austin.

Scott may be called back to NY for a family event, though, and if that happens we'll welcome him in effigy, and hold another party a month later.


Weekly LW Meetups

1 FrankAdamek 22 July 2016 04:00PM

This summary was posted to LW Main on July 22nd. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

7 wallowinmaya 21 July 2016 08:22PM

The Foundational Research Institute just published a new paper: "Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention". 

It is important to consider that [AI outcomes] can go wrong to very different degrees. For value systems that place primary importance on the prevention of suffering, this aspect is crucial: the best way to avoid bad-case scenarios specifically may not be to try and get everything right. Instead, it makes sense to focus on the worst outcomes (in terms of the suffering they would contain) and on tractable methods to avert them. As others are trying to shoot for a best-case outcome (and hopefully they will succeed!), it is important that some people also take care of addressing the biggest risks. This perspective to AI safety is especially promising both because it is currently neglected and because it is easier to avoid a subset of outcomes rather than to shoot for one highly specific outcome. Finally, it is something that people with many different value systems could get behind.

Fallacymania: party game where you notice fallacies in arguments

4 Alexander230 21 July 2016 09:34AM

Fallacymania is a game developed by the Moscow LessWrong community. Its main goals are to help people notice fallacies in arguments and, of course, to have fun. The game requires 3-20 players (4-12 recommended) and some materials: printed A3 sheets with fallacies (5-10 sheets), a card deck of fallacies (you can cut one A3 sheet into cards, or print stickers and put them on ordinary playing cards), pens and blank sheets, and one card deck of any type with at least 50 cards (optional, for counting guessing attempts). The rules of the game are explained here:

https://drive.google.com/open?id=0BzyKVqP6n3hKQWNzV3lWRTYtRzg

This is the sheet of fallacies, you can download it and print on A3 or A2 sheet of paper:

https://drive.google.com/open?id=0BzyKVqP6n3hKVEZSUjJFajZ2OTA

You can also use this sheet to create the playing cards for debaters.

When we created this game, we used these online articles and artwork about fallacies:

http://obraz.io/ru/posters/poster_view/1/?back_link=%2Fru%2F&lang=en&arrow=right
http://www.informationisbeautiful.net/visualizations/rhetological-fallacies/
http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/

I've also made an electronic version of Fallacymania for Tabletop Simulator (in the Steam Workshop):

http://steamcommunity.com/sharedfiles/filedetails/?id=723941480

 

In partially observable environments, stochastic policies can be optimal

4 Stuart_Armstrong 19 July 2016 10:42AM

I always had the informal impression that optimal policies were deterministic (choosing the best option, rather than some mix of options). Of course, this is not the case when facing other agents, but I had the impression it would hold when facing the environment rather than other players.

But stochastic policies can also be needed if the environment is partially observable, at least if the policy is Markov (memoryless). Consider the following POMDP (partially observable Markov decision process):

There are two states, 1a and 1b, and the agent cannot tell which one it is in. Taking action A in state 1a, or action B in state 1b, gives a reward of -R and keeps the agent in the same state. Taking action B in state 1a, or action A in state 1b, gives a reward of R and moves the agent to the other state.

The return for either deterministic policy - always A or always B - is -R every turn, except possibly the first, while the expected return for the stochastic policy 0.5A + 0.5B is 0 per turn.
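
Here's a minimal simulation sketch of that comparison (a toy illustration: the function names, R = 1, and the uniformly random start state are my own choices, not from the post):

    import random

    def average_reward(policy, steps=100_000, R=1.0, seed=0):
        """Estimate the average per-turn reward of a memoryless policy."""
        rng = random.Random(seed)
        state = rng.choice(["1a", "1b"])  # hidden state; the agent never observes it
        total = 0.0
        for _ in range(steps):
            action = policy(rng)
            if (state, action) in {("1a", "A"), ("1b", "B")}:
                total -= R  # mismatched action: penalty, stay in the same state
            else:
                total += R  # matched action: reward, move to the other state
                state = "1b" if state == "1a" else "1a"
        return total / steps

    print(average_reward(lambda rng: "A"))               # ~ -1.0, deterministic "always A"
    print(average_reward(lambda rng: rng.choice("AB")))  # ~  0.0, stochastic 0.5A + 0.5B

(A policy with even one bit of memory could do better still, which is the "alternate A and B" point below.)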

Of course, if the agent can observe the reward, the environment is no longer partially observable (though we can imagine the reward being delayed until later). And the general policy "alternate A and B" is more effective than the 0.5A + 0.5B policy. Still, that stochastic policy is the best of the memoryless policies available in this POMDP.

Earning money with/for work in AI safety

7 rmoehn 18 July 2016 05:37AM

(I'm re-posting my question from the Welcome thread, because nobody answered there.)

I care about the current and future state of humanity, so I think it's good to work on existential or global catastrophic risk. Since I studied computer science at university until last year, I decided to work on AI safety. Currently I'm a research student at Kagoshima University doing exactly that. Before April this year I had only a little experience with AI or ML, so I'm slowly digging through books and articles in order to be able to do research.

I'm living off my savings. My time as a research student will end in March 2017, and my savings will run out some time after that. Nevertheless, I want to continue AI safety research, or at least work on X or GC risk.

I see three ways of doing this:

  • Continue full-time research and get paid/funded by someone.
  • Continue research part-time and work the rest of the time to earn money. This work would most likely be programming (since I like it and am good at it). I would prefer work that helps humanity effectively.
  • Work full-time on something that helps humanity effectively.


Oh, and I need to be location-independent or based in Kagoshima.

I know http://futureoflife.org/job-postings/, but all of the job postings fail me in two ways: they're not location-independent, and they require more or different experience than I have.

Can anyone here help me? If yes, I would be happy to provide more information about myself.

(Note that I think I'm not in a precarious situation, because I would be able to get a remote software development job fairly easily. Just not in AI safety or X or GC risk.)

Open thread, Jul. 18 - Jul. 24, 2016

3 MrMind 18 July 2016 07:17AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.
