
What is Intelligence?

0 DragonGod 23 July 2017 12:12AM

As far as Artificial Intelligence is concerned, what is "intelligence"? The definitions I see on various sites, such as Wikipedia:

Intelligence has been defined in many different ways including as one's capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving

and Merriam-Webster:

  1. The ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason.
  2. The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests).

etc., seem a bit broad and nebulous, and not necessarily what I would be thinking of if I wanted to build an AI or evaluate the intelligence of non-human life-forms.

The definition I currently go with is:

General problem solving ability.

However, I'm not sure whether this is broad enough to encompass everything we mean when we say "intelligence" in the context of AI, or what we would be looking for in "intelligent" life-forms. What's a useful definition of intelligence? One broad enough to encompass all that we consider when we think of intelligence, yet narrow enough to exclude the particular idiosyncrasies of specific intelligent agents? A universal definition of intelligence, applicable to all intelligent agents.

Book Review: Mathematics for Computer Science (Suggestion for MIRI Research Guide)

8 richard_reitz 22 July 2017 07:26PM

tl;dr: I read Mathematics for Computer Science (MCS) and found it excellent. I sampled Discrete Mathematics and Its Applications (Rosen)—currently recommended in MIRI's research guide—as well as Concrete Mathematics and Discrete Mathematics with Applications (Epp), which appear to be MCS's competition. Based on these partial readings, I found MCS to be the best overall text. I therefore recommend MIRI change the recommendation in its research guide.

Introduction

MCS is used at MIT for their introductory discrete math course, 6.042, which appears to be taken primarily by second-semester freshmen and sophomores. You can find OpenCourseWare archives from 2010 and 2015, although the book is self-contained; I never had occasion to use them throughout my reading.

If you liked Computability and Logic (review), currently in the MIRI research guide, you'll like MCS:

MCS is a wonderful book. It's well written. It's rigorous, but does a nice job of motivating the material. It efficiently proves a number of counterintuitive results and then helps you see them as intuitively obvious. Freed from the constraint of printing cost, it contains many diagrams which are generally useful. You can find the pdf here or, if that link breaks, by googling "Mathematics for Computer Science". (See section 21.2 for why this works.)

MCS is regularly updated during the semester. Based on the revision dates given on the cover, I suspect that the authors update it roughly weekly while the course is running. The current version is 87 pages longer than the 2015 version, suggesting ~40 pages of material are added per year. My favorite thing about the constant updates was that I never needed to double-check statements about our current state of knowledge to see if anything had changed since publication.

MCS is licensed under a Creative Commons attribution share-alike license: it is free in the sense of both beer and freedom. I'm a big fan of such copyleft licenses, so I give MIT major props. I've tried to remain unbiased in my review, but halo effect suggests my views on the text might be affected by the text's license: salt accordingly.

Prerequisites

The only prerequisite is single-variable calculus. In particular, I noted integration, differentiation, and convergence/infinite sums coming up. That said, I don't remember them appearing in sections that provided a lot of dependencies: with just a first course in algebra, I feel a smart 14-year-old could get through 80–90% of the book, albeit with some help, mostly in places where "do a bunch of algebra" steps are omitted. An extra 4–5 years of practice doing algebraic manipulations makes a difference.

MCS is also an introduction to proofwriting. In my experience, writing mathematical proofs is a skill complex enough to require human feedback to get all the nuances of why something works and why something else doesn't work and why one approach is better than another. If you've never written proofs before and would like a human to give you feedback, please pm me.

Comparison to Other Discrete Math Texts

Rosen

I randomly sampled section 4.3 of Rosen, on primes and greatest common divisors, and was very unimpressed. Rosen states the fundamental theorem of arithmetic without a proof. The next theorem had a proof which was twice as long and half as elegant as it could have been. The writing was correct but unmotivating and wordy. For instance, Rosen writes "If n is a composite integer", which is redundant: all composite numbers are integers, so he could have just said "If n is composite".

In the original Course Recommendations for Friendliness Researchers, Louie responded to Rosen's negative reviews:

people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.

Based on the sample I read, Rosen is significantly dumbed-down relative to MCS. Rosen does not prove the fundamental theorem of arithmetic whereas MCS proves it in section 9.4. For the next theorem, Rosen gives an inelegant proof when a much sleeker—but reasonably evident!—proof exists, making it feel like Rosen expected the reader to not be able to follow the sleeker proof. Rosen's use of "composite integer" instead of "composite" seems like he assumes the reader doesn't understand that the only objects one describes as composite are integers; MCS does not contain the string "composite integer".

In the section I read, Rosen has worked examples for finding gcd(24, 36) and gcd(17, 22), something I remember doing when I was 12. It's almost like Rosen was spoon-feeding how to guess the teacher's password for the student to regurgitate on an exam instead of building insight.

Concrete Mathematics

There are probably individuals who would prefer Concrete Mathematics to MCS. These people are probably into witchcraft.

I explain by way of example. In section 21.1.1, MCS presents a very sleek, but extremely nonobvious, proof of gambler's ruin using a clever argument courtesy of Pascal. In section 21.1.2, MCS gives a proof that doesn't require the reader to be "as ingenuious Pascal [sic]". As an individual who is decidedly not as ingenious as Pascal was, I appreciate this.

More generally, say we want to prove a theorem that looks something like "If A, then B has property C." You start at A and, appealing to the definition of C, show that B has it. There's probably some cleverness involved in doing so, but you start at the obvious place (A), end in the obvious place (B satisfies the definition of C), and don't rely on any crazy, seemingly-unrelated insights. Let's call this sort of proof mundane.

(Note that mundane is far from mechanical. Most of the proofs in Baby Rudin are mundane, but require significant cleverness and work to generate independently.)

There is a virtue in mundane proofs: a smart reader can usually generate them after they read the theorem but before they read its proof. Doing so is beneficial, since proof-generating makes the theorem more memorable. It also gives the reader practice building intuition by playing around with the mathematical objects and helps them improve their proofwriting by comparing their output to a maximally refined proof.

On the end of the spectrum opposing mundane is witchcraft. Proofs that use witchcraft typically have a step where you demonstrate you're as ingenious as Pascal by having a seemingly-unrelated insight that makes everything easier. Notice that, even if you are as ingenious as Pascal, you won't necessarily be able to generate these insights quickly enough to get through the text at any reasonable pace.

For the reasons listed above, I prefer mundane proofs. This isn't to say MCS is devoid of witchcraft: sometimes it's the best or only way of getting a proof. The difference is that MCS uses mundane proofs whenever possible whereas Concrete Mathematics invokes witchcraft left and right. This is why I don't recommend it.

Individuals who are readily as ingenious as Pascal, don't want the skill-building benefits of mundane proofs, or prefer the whimsy of witchcraft may prefer Concrete Mathematics.

Epp

I randomly sampled section 12.2 of Epp and found it somewhat dry but wholly unobjectionable. Unlike Rosen, I felt like Epp was writing for an intelligent human being (though I was reading much further along in the book, so maybe Rosen assumed the reader was still struggling with the idea of proof). Unlike Concrete Mathematics, I detected no witchcraft. However, I felt that Epp had inferior motivation and was written less engagingly. Epp is also not licensed under Creative Commons.

Coverage

Epp, Rosen, and MCS are all ~1000 pages long, whereas Concrete Mathematics is ~675. To determine what these books covered that might not be in MCS, I looked through their tables of contents for things I didn't recognize. The former three have the same core coverage, although Epp and Rosen go into material you would find in Computability and Logic or Sipser (also part of the research guide), whereas MCS spends more time developing discrete probability. Based on the samples I read, Epp and MCS have about the same density, whereas Rosen spends little time building insight and a lot of time showing how to do really basic, obvious stuff. I would expect Epp and MCS to have roughly the same amount of content covering mostly (but not entirely) the same material, and Rosen to offer a mere shadow of the insight of the other two.

Concrete Mathematics seems to contain a subset of MCS's topics, but from the sections I read, I expect the presentation to be wildly different.

Complaints

My only substantial complaint about MCS is that, to my knowledge, the source LaTeX is not available. Contrast this to SICP, which has the HTML available. This resulted in a proliferation of PDFs tailored for different use cases. It'd be nice, for instance, to have a print-friendly version of MCS (perhaps with fewer pages), plus a version that fit nicely onto the small screen of an ereader or mobile device, plus a version with the same aspect ratio as my monitor. This all would be extremely easy to generate given the source. It would also facilitate crowdsourcing proofreading: there are more than a few typos, although they don't preempt comprehension. At the very least, I wish there were somewhere to submit errata.

Some parts of MCS were notation-heavy. To quote what a professor once wrote on a problem set of mine:

I'm not sure all the notation actually serves the goal of clarifying the argument for the reader. Of course, such notation is sometimes needed. But when it is not needed, it can function as a tool with which to bludgeon the reader…

I found myself referring to Wikipedia's glossary of graph theory terms more than a few times when I was making definitions to put into Anki. Not sure if this is measuring a weak section or a really good glossary or something else.

A Note on Printing

A lot of people like printed copies of their books. One benefit of MCS I've put forward is that it's free (as in beer), so I investigated how much printing would cost.

Checking local print shops and Kinko's online, I was unable to find printing under $60; a typical price was around $70, with the option to burn $85 if I wanted nicer paper. This was more than I had expected, and between ⅓ and ½ (ish) the price of Rosen or Epp.

Personally, I think printing is counterproductive, since the PDF has clickable links.

Final Thoughts

Despite sharing first names, I am not Richard Stallman. I prefer the license on MCS to the license on its competitors, but I wouldn't recommend it unless I thought the text itself was superior. I would recommend baby Rudin (nonfree) over French's Introduction to Real Analysis; Hoffman and Kunze's Linear Algebra (nonfree) over Jim Hefferon's Linear Algebra; and Epp over 2010!MCS. The freer the better, but that consideration is trumped by the quality of the text. When you're spending >100 hours working out of a book that provides foundational knowledge for the rest of your life, ~$150 and a loss of freedom is a price many would pay for better quality.

Eliezer writes:

Tell a real educator about how Earth classes are taught in three-month-sized units, and they would’ve sputtered and asked how you can iterate fast enough to learn how to teach that.

Rosen is in its seventh edition. Epp is in its fourth edition and Concrete Mathematics its second. The earliest copy of MCS I've happened across comes from 2004. Near as I can tell, it is improved every time the authors go through the material with their students, which would put it in its 25th edition.

And you know what? It's just going to keep getting better faster than anything else.

Acknowledgements

Thank you to Gram Stone for reviewing drafts of this review.

Sleeping Beauty Problem Can Be Explained by Perspectivism (III)

0 Xianda_GAO 22 July 2017 02:50PM

This is the third part of my argument for the importance of perspective disagreement in the Sleeping Beauty Problem. The first part can be found here.

In this part I give a simple example to show how two agents can reach an agreement in a typical Bayesian problem. It also highlights why such an agreement cannot be reached in the Sleeping Beauty Problem.

Balls in Urns (BIU):

Suppose there is an urn filled with either 2 blue balls and 1 red ball (BBR) or 2 red balls and 1 blue ball (BRR), with equal chances. Andy randomly picks 2 balls from the urn and finds one ball of each colour, then returns them. He correctly concludes the probability of BBR is 1/2, the same as the probability of BRR. Afterwards Bob asks for a red ball and is given one; he then randomly picks 1 ball from the 2 remaining balls in the urn and sees a blue one. He correctly concludes the probability of BBR is 2/3. It turns out, however, that Andy and Bob actually saw the exact same 2 balls. The two of them are free to communicate and argue. Supposing both of them are rational, can they reach an agreement? Who should change his answer?

 

We can use a frequentist approach to solve this problem. Suppose the experiment is repeated many times, with equal numbers of BBR and BRR urns. We can count the total number of occurrences in which they both see the same 1 red and 1 blue ball. The relative frequencies of BBR and BRR among these occurrences would indicate the correct probability. Both Andy and Bob should have no problem agreeing with this method. Here it becomes apparent that the exact procedure of the experiment determines whose initial probability is correct and who needs to adjust his answer; more specifically, how the red ball given to Bob is determined.

Scenario 1: The red ball picked by Andy is always given to Bob. In this case Andy is correct; Bob should adjust his answer for BBR from 2/3 to 1/2. This is because for both BBR and BRR, Andy has the same chance of picking a red and a blue ball. Given the same red ball Andy has, Bob would have an equal chance of picking the same blue ball again, regardless of the colour of the last remaining ball. The relative frequencies of BBR and BRR among the occurrences (in which they both have the same 1 red and 1 blue ball) would be about the same.

Scenario 2: Any red ball in the urn can be given to Bob. In this case Bob's initial judgement is correct, and Andy would have to change his probability for BBR to 2/3. All else being equal, Bob is twice as likely to have the same red ball as Andy if there is only 1 red ball in the urn. The relative frequency of BBR to BRR among the occurrences would be 2:1.

Since only one of the two scenarios can be true, Andy and Bob must agree with each other as long as there is no ambiguity about the experimental procedure.
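
To make the frequentist counting concrete, here is a minimal Monte Carlo sketch (Python; the trial bookkeeping is my own illustration of the setup above, with Andy returning his 2 balls before Bob's turn):

    import random

    def trial(scenario):
        # The urn is BBR (2 blue, 1 red) or BRR (1 blue, 2 red) with equal chance.
        colours = ['B', 'B', 'R'] if random.random() < 0.5 else ['B', 'R', 'R']
        urn = 'BBR' if colours.count('B') == 2 else 'BRR'
        balls = list(enumerate(colours))        # give each ball an identity
        andy = random.sample(balls, 2)          # Andy draws 2 balls, then returns them
        if sorted(c for _, c in andy) != ['B', 'R']:
            return None                         # Andy must see one of each colour
        if scenario == 1:                       # Bob is handed the red ball Andy picked
            bob_red = next(b for b in andy if b[1] == 'R')
        else:                                   # Bob is handed any red ball from the urn
            bob_red = random.choice([b for b in balls if b[1] == 'R'])
        bob_draw = random.choice([b for b in balls if b != bob_red])
        if bob_draw[1] != 'B':
            return None                         # Bob must then draw a blue ball
        if sorted(andy) != sorted([bob_red, bob_draw]):
            return None                         # both must have seen the same 2 balls
        return urn

    for scenario in (1, 2):
        counts = {'BBR': 0, 'BRR': 0}
        for _ in range(200_000):
            result = trial(scenario)
            if result:
                counts[result] += 1
        print(scenario, counts['BBR'] / sum(counts.values()))
    # prints ~0.50 for Scenario 1 and ~0.67 for Scenario 2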

However, if we duplicate Bob (either by cloning or memory wiping) in the case of BRR and give each Bob a different red ball, suddenly both scenarios become true. To Andy, the red ball he picked is always given to a Bob. To Bob, the red ball given to him can be any one from the urn. Neither person has reason to adjust his own probability upon learning they have the same balls, even while fully knowing the experimental procedure. So Andy would stick to his probability of 1/2 for BBR, and Bob to 2/3. In this case two people with the exact same information will remain in disagreement even though they are free to communicate. I believe the parallel between BIU, 81D and SBP is obvious enough to show why the Selector and Beauty, just like Andy and Bob, can be in disagreement.

 

The next part of my argument discusses why Elga's counterargument to traditional halfers is valid, which implies SSA is incorrect, and why, by considering the importance of perspective disagreement, I must conclude that double-halving is the correct answer.

How long has civilisation been going?

4 Elo 22 July 2017 06:41AM

I didn't realise how short human history was.  Somewhere around 130,000 years ago we were standing upright as we are today.  Somewhere around 50,000 years ago we broadly arrived at:

the fully modern capacity for Culture *

That's roughly when we started the "routine use of bone, ivory, and shell to produce formal (standardized) artifacts".  Agriculture, and humans staying still to grow plants, happened at about 10,000 BCE (or 12,000 years ago).

Writing started happening around 6600BCE* (8600 or so years ago).  

This year is 5777 in the Hebrew calendar.  So someone has been counting for roughly that long.

The pyramids are estimated to have been built around 2600 BCE (4600 years ago).

Somewhere between then and year zero of the Christian calendar we sorted out a lot of metals and how to use them.

And somewhere between then and now we finished up all the technological advances that led to the present day.


But it's hard to get a feel for that.  Those are just some numbers of years.  Instead I want to relate that to our lives.  And our generations.

12,000 years ago is a good enough point to start paying attention to.

A human generation is normally between 12* and 35* years.  Further back, generations would have been closer to 12 years apart; today they are shifting to being more like 30 years apart (and up to 35).  That means the bounds are:

12,000/12 = 1,000
12,000/35 = 342

342-1000 generations.  That's all we have.  In all of humanity.  We are SO YOUNG!

(if you take the 8600 year number as a starting point you get a range of 246-717.)


Let's make it personal

I know my grandparents, which means I have a non-negligible chance of also knowing my grandchildren, and maybe even further generations (depending on medical technology).  I already have a living niece, so I have already experienced 4 generations.  Without being unreasonable I can expect to see 5, and dream of seeing 6, 7, or infinitely many.

(5/1000) -> (7/342) = between half a percent and two percent of human history.  I will have lived through 0.5%-2% of human generations to date (ignoring longevity escape for a moment).

Compared to other life numbers:

Days in a year * 100 years = 36,500 days in a 100-year lifespan.

52 weeks * 100 = 5,200 weeks in a 100-year lifespan.  At 342-1,000 generations, that's roughly one generation of humans for every 5-15 weeks of a lifetime.

12,000 years / 365 days = 32.8 years.  Or by the time you are 33 years old, you have lived one day for every year humans have been collecting artefacts of worth.

8,600 years / 365 = 23.5 years.  Or by the time you are 24 years old, you have lived one day for every year humans have had written records.


Discrete human lives

If you put an olden-day discrete human life at 25 years (maybe more), and a modern-day discrete life at 90 years, and compare those to the numbers above:

12,000/25 = 480 discrete human lifetimes

12,000/90 = 133 discrete human lifetimes

8,600/25 = 344 discrete human lifetimes

8,600/90 = 95 discrete human lifetimes
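
For anyone who wants to check or tweak the arithmetic, here is a quick Python sketch recomputing the ranges (the post rounds them to whole numbers):

    for name, years in {"culture": 12_000, "writing": 8_600}.items():
        gens = f"{years / 35:.1f}-{years / 12:.1f} generations"
        lives = f"{years / 90:.1f}-{years / 25:.1f} discrete lifetimes"
        print(f"{name}: {gens}, {lives}")
    # culture: 342.9-1000.0 generations, 133.3-480.0 discrete lifetimes
    # writing: 245.7-716.7 generations, 95.6-344.0 discrete lifetimes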

That's to say, the entirety of recorded history is only about 350 independent human lives stacked end to end.

Everything we know in history has been done in somewhere under 480 discrete lifetime run-throughs.


Humanity is so young.  And we forget so easily that 50 lifetimes ago we were nothing.

Meta:  Thanks billy for hanging out and thinking about the numbers with me.  This idea came up on a whim, took a day of thinking about, and about an hour to write up.

Original post: http://bearlamp.com.au/how-long-has-civilisation-been-going/

Can anyone refute these arguments that we live on the interior of a hollow Earth?

2 Eitan_Zohar 21 July 2017 04:51PM

I found a website run by an interesting fellow called 'Wild Heretic' and it seems incredibly intricate and comprehensive. I've yet to see any other person argue as well for half so radical a claim. Think of this as an opportunity to examine arguments for highly unpopular views.

Wild Heretic believes that we live on the inside of a hollow sphere, lit by a half-light half-dark Sun at its center (he claims that light bends in order to produce the effect of rising and setting), that the moon is an optical illusion, that manmade satellites don't really exist, that the stars are light artifacts produced in the atmosphere and can never be seen above it, and he has a bunch of explanations for the other celestial bodies like comets and galaxies.

It all seems shockingly intelligent (aside from when he insists that the fact that the Earth doesn't move under your feet when you jump disproves heliocentrism). He also has nine main pieces of evidence for his model:

1. Some early modern maps have inversed latitude and longitude
2. Modern polyconic maps show more accurate sizes and shapes
3. 19th century balloon observations (that is, without an intervening medium) gave the impression of a concave surface
4. 4,000-foot plumb lines reportedly hung farther apart at the bottom of a mine shaft than at the top
5. A laser shot between two posts (over water) seems to curve downwards
6. An old rectilineator experiment indicates a concave surface (the experiment has been criticized here)
7. Radar and radio wave horizons cannot be explained on a convex ball
8. Ships disappearing below the horizon are an optical illusion
9. Light bends upwards, which allows for the rising/setting illusion of the sun and moon

I would really like to know what people here have to say about this, since the comments on the site itself are very disappointing. (A lot of it does rely on a massive conspiracy involving scientists of many stripes, but it's probably best to overlook that.)

MILA gets a grant for AI safety research

6 Dr_Manhattan 21 July 2017 03:34PM

http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research

The really good news is that Yoshua Bengio is leading this (he is extremely credible in the modern AI/deep learning world), and this is a pretty large change of mind for him. When I spoke to him at a conference 3 years ago he was pretty dismissive of the whole issue; this year's FLI conference seems to have changed his mind (kudos to them).

Of course, huge props to OpenPhil for pursuing this.

Ways of Seeing

1 ig0r 21 July 2017 01:42AM

Cross-posted on my blog: http://garybasin.com/ways-of-seeing/

Tough problems often feel insurmountable without more information and better models — more data and thinking. An alternative approach is to be able to see the problem, and the whole world, in a new way. By looking through different eyes, different aspects of the world get highlighted and new actions become visible. An entrepreneur sees the world differently. They notice opportunities for improvement and innovation where someone else only sees stress and pain. Similarly, while a typical person enters a living room and sees the couches and artwork on the walls, a parent of a young child perceives a menagerie of death traps. We are doing this in our own ways all of the time and this defines our experience — our reality.

Several ways of seeing come pre-installed for us — drives to obtain food, sex, safety, and socialization — as a result of our mind rewarding itself for continued survival and gene propagation. These powerful recurring waves of hallucination affect us to the core: how we see and how we experience. The world takes on a different character when we are hungry in contrast to when we are cold and wet. We develop new ways of seeing as we are exposed to more complex patterns: being unemployed or playing a game of chess. At times, we glimpse perspectives of overwhelming curiosity and open-mindedness — fertile soil for our capacity for reason. Unfortunately, we often overestimate this capacity, causing us to fool ourselves and others, and get stuck in the same old ways of thinking and perceiving.

The way we experience and how we look are two sides of the same coin. A way of seeing guides our attention in the service of some purpose, which highlights some parts of experience at the expense of others. The purpose is perhaps not a cause but rather a justification: a way that we understand, or talk about, the behaviors we undertake. One can imagine that if the earth was a conscious thing, it may understand one of its purposes — one of its ways of seeing — as life creation. The way we see also seems to define which actions appear available to us — which levers are pullable. When we feel stuck, it is useful to explore alternative ways of seeing. The way you perceive the world may be limiting your ability to find a solution, so try other ways of looking. Certain questions can act as attentional portals into ways of seeing which immediately reveal insight and new potential actions. Similarly, approaching the particular and peculiar with curiosity has a tendency of generating new thoughts.

Do we really have multiple ways of seeing and in what sense can we more fully inhabit ones beyond the primordial set? Which interesting ways of seeing have we forgotten?

The dark arts: Examples from the Harris-Adams conversation

9 Stabilizer 20 July 2017 11:42PM

Recently, James_Miller posted a conversation between Sam Harris and Scott Adams about Donald Trump. James_Miller titled it "a model rationalist disagreement". While I agree that the tone in which the conversation was conducted was helpful, I think Scott Adams is a top practitioner of the Dark Arts. Indeed, he often prides himself on his persuasion ability. To me, he is very far from a model for a rationalist, and he is the kind of figure we rationalists should know how to fight against.

 

Here are some techniques that Adams uses:

 

  1. Changing the subject: (a) Harris says Trump is unethical and cites the example of Trump gate-crashing a charity event to falsely get credit for himself. Adams responds by saying that others are equally bad—that all politicians do morally dubious things. When Harris points out that Obama would never do such a thing, Adams says Trump is a very public figure and hence people have lots of dirt on him. (b) When Harris points out that almost all climate scientists agree that climate change is happening and that it is wrong for Trump to have called climate change a hoax, Adams changes the subject to how it is unclear what economic policies one ought to pursue if climate change is true.
  2. Motte-and-bailey: When Harris points out that the Trump University scandal and Trump's response to it mean Trump is unethical, Adams says that Trump was not responsible for the university because it was only a licensing deal. Then Harris points out that Trump is unethical because he shortchanged his contractors. Adams says that that’s what happens with big construction projects. Harris tries to argue that it’s the entirety of Trump’s behavior that makes it clear that he is unethical—i.e., Trump University, his non-payment to contractors, his charity gate-crashing, and so on. At this point Adams says we ought to stop expecting ethical behavior from our Presidents. This is a classic motte-and-bailey defense: defend an indefensible position (the bailey) for a while, but once it becomes untenable, retreat to the motte (something much more defensible).
  3. Euphemisation: (a) When Harris tells Adams that Trump lies constantly and has a dangerous disregard for the truth, Adams says, I agree that Trump doesn’t pass fact checks. Indeed, throughout the conversation Adams never refers to Trump as lying or as making false statements; instead, Adams always says Trump “doesn’t pass the fact checks”. This move essentially makes it sound as if there’s some organization whose arbitrary and biased standards are what Trump doesn’t pass, and so downplays the much more important fact that Trump lies. (b) When Harris calls Trump's actions morally wrong, Adams makes it seem as if he is agreeing with Harris but then rephrases it as: “he does things that you or I may not do in the same situation”. Indeed, that's Adams's constant euphemism for a morally wrong action. This is a very different statement from saying that what Trump did was wrong, and makes it seem as if Trump is just a normal person doing what normal people do.
  4. Diagnosis: Rather than debate the substance of Harris’s claims, Adams will often embark on a diagnosis of Harris’s beliefs or of someone else who has that belief. For example, when Harris says that Trump is not persuasive and does not seem to have any coherent views, Adams says that that's Harris's "tell" and that Harris is "triggered" by Trump's speeches. Adams constantly diagnoses Trump critics as seeing a different movie, or as being hypnotized by the mainstream media. By doing this, he moves away from the substance of the criticisms.
  5. Excusing: (a) When Harris says that it is wrong not to condemn, and wrong to support, the intervention of Russia in America’s election, Adams says that the US would exact revenge via its intelligence agencies and we would never know about it. He provides no evidence for the claim that Trump is indeed exacting revenge via the CIA. He also says America interferes in other elections too. (b) When Harris says that Trump degraded democratic institutions by promising to lock up his political opponent after the election, Adams says that was just a joke. (c) When Harris says Trump is using the office of the President for personal gain, Adams tries to spin the narrative as Trump trying to give as much as possible late in his life for his country.
  6. Cherry-picking evidence: (a) When Harris points out that seventeen different intelligence agencies agreed that Russia’s government interfered in the US elections, Adams says that the intelligence agencies have been known to be wrong before. (b) When Harris points out that almost all climate scientists agree on climate change, Adams points to some point in the 1970s where (he claims) climate scientists got something wrong, and therefore we should be skeptical about the claims of climate scientists.

Overall, I think what Adams is doing is wrong. He is an ethical and epistemological relativist: he does not seem to believe in truth or in morality. At the very least, he does not care about what is true and false and what is right and wrong. He exploits his relativism to push his agenda, which is blindingly clear: support Trump.

 

(Note: I wanted to work on this essay more carefully, and find out all the different ways in which Adams subverts the truth and sound reasoning. I also wanted to cite more clearly the problematic passages from the conversations. But I don't have the time. So I relied on memory and highlighted the Dark Arts moves that struck me immediately. So please, contribute in the comments with your own observations about the Dark Arts involved here.)

[Link] What Value Subagents?

0 gworley 20 July 2017 07:19PM

[Link] Sam Harris and Scott Adams debate Trump: a model rationalist disagreement

2 James_Miller 20 July 2017 12:18AM

Sleeping Beauty Problem Can Be Explained by Perspectivism (II)

3 Xianda_GAO 19 July 2017 11:00PM

This is the second part of my argument. It mainly involves a counterexample to SIA and Thirdism.

The first part of my argument can be found here.

 

The 81-Day Experiment(81D):

There is a circular corridor connected to 81 rooms with identical doors. At the beginning all rooms have blue walls. A random number R is generated between 1 and 81. Then a painter randomly selects R rooms and paints them red. Beauty is put into a drug-induced sleep lasting 81 days, spending one day in each room. An experimenter wakes her up if the room she is currently sleeping in is red, and lets her sleep through the day if the room is blue. Her memory of each awakening is wiped at the end of the day. Each time Beauty wakes up, she is allowed to exit her room and open some other doors in the corridor to check the colour of those rooms. Now suppose that one day, after opening 8 random doors, she sees 2 red rooms and 6 blue rooms. How should Beauty estimate the total number of red rooms (R)?

For halfers, waking up in a red room does not give Beauty any more information except that R > 0. Randomly opening 8 doors means she took a simple random sample of size 8 from a population of 80. In the sample, 2 rooms (1/4) are red. Therefore the total number of red rooms (R) can be easily estimated as 1/4 of the 80 rooms plus her own room: 21 in total.

For thirders, Beauty's own room is treated differently. As SIA states, finding herself awake is as if she had chosen a random room from the 81 rooms and found it red. Therefore her room and the other 8 rooms she checked all belong to the same sample. This means she has a simple random sample of size 9 from a population of 81. 3 out of 9 rooms in the sample (1/3) are red. The total number of red rooms can be easily estimated as a third of the 81 rooms: 27 in total.

If a Bayesian analysis is performed, R=21 and R=27 are also the values with the highest credence according to halfers and thirders respectively. It is worth mentioning that if an outside Selector randomly chooses 9 rooms and checks them, and it just so happens that those 9 are the same 9 rooms Beauty saw (her own room plus the 8 randomly chosen rooms), the Selector would estimate R=27 and have the highest credence for R=27. Because he and Beauty have the exact same information about the rooms, their answers would not change even if they were allowed to communicate. So again, there will be a perspective disagreement according to halfers but not according to thirders, the same as mentioned in part I.
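
As a sanity check on those two modes, here is a minimal sketch of the Bayesian analysis (Python; it assumes a uniform prior on R and formalizes the two sampling stories above as hypergeometric likelihoods):

    from math import comb

    # Halfer: her own room only tells her R > 0; the 8 opened doors are a random
    # sample of the other 80 rooms, which contain R-1 red rooms; 2 of the 8 were red.
    halfer = {R: comb(R - 1, 2) * comb(80 - (R - 1), 6) for R in range(1, 82)}

    # Thirder (SIA): her own room counts as a 9th randomly sampled room, so the
    # 9 rooms are a random sample of all 81 rooms, with 3 red among them.
    thirder = {R: comb(R, 3) * comb(81 - R, 6) for R in range(1, 82)}

    print(max(halfer, key=halfer.get))    # 21
    print(max(thirder, key=thirder.get))  # 27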

However, the thirder's estimation is very problematic. If Beauty believes the 9 rooms she knows are a fair sample of all 81 rooms, it means red rooms (and blue rooms) are not systematically over- or under-represented. But since Beauty is always going to wake up in a red room, she has to conclude that the other 8 rooms are not a fair sample: red rooms have to be systematically underrepresented among those 8. This means that even before Beauty decides which doors she wants to open, we can already predict with some confidence that those 8 rooms are going to contain fewer reds than the average of the 80 rooms suggests. This supernatural predicting power is strong evidence against SIA and thirding.

Another way to see the problem is to ask Beauty how many red rooms she would expect to see if we let her open another 8 doors. According to SIA she should expect to see 24/72 x 8 = 2.67 reds, meaning that even after seeing 2 reds in the first 8 random rooms, she would expect to see almost 3 in another set of randomly chosen rooms, which is counterintuitive to say the least.

The argument can also be structured this way. Consider the following three statements:

A: The 9 rooms are an unbiased sample of the 81 rooms.

B: Beauty is guaranteed to wake up in a red room.

C: The 8 rooms Beauty chose are an unbiased sample of the other 80 rooms.

These statements cannot all be true at the same time. Thirders accept A and B, meaning they must reject C. In fact they must conclude that the 8 rooms she chose are biased towards blue. This contradicts the fact that the 8 rooms are randomly chosen.

It is also easy to see why Beauty should not estimate R the same way the Selector does. There are about 260 billion distinct combinations of 9 rooms out of 81. The Selector has an equal chance of seeing any one of those 260 billion combinations. Beauty, on the other hand, can only possibly see a subset of the combinations: if a combination does not contain a red room, Beauty will never see it. Furthermore, the more red rooms a combination contains, the more awakenings it involves, leading to a greater chance that a beauty selects said combination. Therefore, while the same 9 rooms are an unbiased sample for the Selector, they are a sample biased towards red for Beauty.

One might want to argue that after the Selector learns that a beauty has knowledge of the same 9 rooms, he should lower his estimate of R to match Beauty's: after all, Beauty can only know combinations from a subset biased towards red, so the Selector should also reason that his sample is biased towards red. This argument is especially tempting for SSA supporters, since if true it would mean their answer also yields no disagreement. Sadly this notion is wrong; the Selector ought to retain his initial estimate. To the Selector, a beauty knowing the same 9 rooms simply means that after waking up in one of the red rooms in his sample, Beauty made a particular set of random choices coinciding with said sample. It offers him no new information about the other rooms. This point can be made clearer if we look at how people reach an agreement in an ordinary problem, which will be shown by another thought experiment in the next part.

Sleeping Beauty Problem Can Be Explained by Perspectivism (I)

0 Xianda_GAO 19 July 2017 10:11PM

The first thing I want to say is that I do not have a mathematics or philosophy degree; I come from an engineering background. So please forgive me when I inevitably mess up some concepts. Another thing I want to mention is that English is not my first language; if you think any part is poorly described, please point it out, and I will try my best to explain what I meant. That being said, I believe I have found a good explanation for the Sleeping Beauty Problem (SBP).

My main argument is that in the case of the Sleeping Beauty Problem, agents who are free to communicate, and thus have identical information, can still rightfully assign different credences to the same proposition. This disagreement is purely caused by the difference in their perspectives. And due to this perspective disagreement, SIA and SSA are both wrong, because they answer the question from an outsider's perspective, which differs from Beauty's. I conclude that the correct answer should be double-halving.

My argument involves three thought experiments. Here I am breaking it into several parts to facilitate easier discussion. The complete argument can also be found at www.sleepingbeautyproblem.com, though do note it is quite lengthy and not very well written, due to my language skills.

 

First experiment: Duplicating Beauty (DB)

Beauty falls asleep as usual. The experimenter tosses a fair coin before she wakes up. If the coin lands on T, a perfect copy of Beauty is produced; the copy is precise enough that she cannot tell whether she is the original or the copy. If the coin lands on H, no copy is made. The beauty(ies) are then randomly put into two identical rooms. At this point another person, let's call him the Selector, randomly chooses one of the two rooms and enters. Suppose he sees a beauty in the chosen room. What should the credence for H be for each of them?

For the Selector this is easy to calculate. Because he is twice as likely to see a beauty in the room if the coin landed on T, simple Bayesian updating gives his probability for H as 1/3.

For Beauty, her room has the same chance of being chosen (1/2) regardless of whether the coin landed on H or T. Therefore seeing the Selector gives her no new information about the coin toss, so her answer should be the same as in the original SBP: if she is a halfer, 1/2; if she is a thirder, 1/3.

This means the two of them would give different answers according to halfers, and the same answer according to thirders. Notice that the Selector and Beauty can communicate freely however they want; they have the same information regarding the coin toss. So halving would give rise to a perspective disagreement even when both parties share the same information.

This perspective disagreement is something unusual (and against Aumann's Agreement Theorem), so it could be used as evidence against halving, thus supporting Thirdism and SIA. I will show the problems of SIA in the second thought experiment. For now I want to argue that this disagreement has a logical reason.

Let's take a frequentist's approach and see what happens if the experiment is repeated, say, 1000 times. For the Selector, this simply means someone else goes through the potential cloning 1000 times, and each time he chooses a random room. On average there would be 500 H and 500 T. He would see a beauty in all 500 T cases and in 250 of the H cases, meaning that out of the 750 times he sees a beauty, 1/3 would be H. Therefore he is correct in giving 1/3 as his answer.

For Beauty, a repetition simply means she goes through the experiment again and wakes up in a random room awaiting the Selector's choice. So by her count, taking part in 1000 repetitions means she recalls 1000 coin tosses after waking up. In those 1000 coin tosses there should be about 500 each of H and T. She would see the Selector about 500 times, in equal numbers after T and after H. Therefore her answer of 1/2 is also correct from her perspective.

If we call the creation of a new beauty a "branch-off", we see that from the Selector's perspective, experiments from all branches count as repetitions, whereas from Beauty's perspective, only experiments from her own branch count. This difference leads to the disagreement.

This disagreement can also be demonstrated with betting odds. In the case of T, choosing either of the two rooms leads to the same observation for the Selector: he always sees a beauty and enters another bet. However, for the two beauties, the Selector's choice leads to different observations: whether or not she sees him and enters another bet. So in the case of T the Selector is twice as likely as any particular beauty to enter a bet, giving them different betting odds.
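
Here is a minimal simulation of the frequency counts above (Python; it tracks the original Beauty for concreteness, though by symmetry a copy's numbers would come out the same):

    import random

    selector = {'H': 0, 'T': 0}  # toss results when the Selector sees a beauty
    beauty = {'H': 0, 'T': 0}    # toss results when the original Beauty sees him

    for _ in range(200_000):
        coin = random.choice('HT')
        beauty_room = random.randrange(2)   # room holding the original Beauty
        occupied = [coin == 'T'] * 2        # on T, the copy fills the other room too
        occupied[beauty_room] = True        # the original's room is always occupied
        chosen = random.randrange(2)        # the Selector opens one random door
        if occupied[chosen]:
            selector[coin] += 1
        if chosen == beauty_room:
            beauty[coin] += 1

    print(selector['H'] / sum(selector.values()))  # ~1/3
    print(beauty['H'] / sum(beauty.values()))      # ~1/2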

The above reasoning can easily be applied to the original SBP. Conceptually, the original SBP is just an experiment whose duration is divided into two parts by a memory wipe in the case of T. The exact duration of the experiment, whether two days or a week or five hours, is irrelevant. Therefore, from Beauty's perspective, repeating the experiment means her subsequent awakenings need to be shorter to fit into her current awakening. For example, if in the first experiment the two possible awakenings happen on different days, then in the next repetition the two possible awakenings can happen in the morning and afternoon of the current day. Further repetitions keep dividing the available time; theoretically the experiment can be repeated indefinitely in the form of a supertask. By her count, half of those repetitions would be H. Compare this with an outsider who never experiences a memory wipe: for him, all repetitions from those two days are equally valid repetitions. The disagreement pattern remains the same as in the DB case.

 

PS: Due to its length, I'm breaking this argument into several parts. The next part can be found here.

Looking for ideas about Epistemology related topics

1 Onemorenickname 19 July 2017 06:56PM

Notes :

  • "Epistemy" refers to the second meaning of epistemology : "A particular theory of knowledge".
  • I'm more interested in ideas that further the thoughts exposed here than in exposing them.

 

Good Experiments

The point of "Priors are useless" is that if you update after enough experiments, you tend to the truth distribution regardless of your initial prior distribution (assuming its codomain doesn't include 0 and 1, or at least that it doesn't assign 1 to a non-truth and 0 to a truth). However, "enough experiments" is magic :

  1. The purely quantitative aspect: you might not have time to run these experiments in your lifetime.
  2. Having independent experiments is not well-defined. Knowing which experiments are pairwise independent embeds higher-level knowledge that could easily be used to derive truths directly. If we were trying to prove a mathematical theorem, comparing the pairwise correlations between the success probabilities of different approaches would give much more insight, and more results, than trying to prove it as usual.
  3. We don't even need pairwise independence. For instance, if we assume P≠NP because we couldn't prove it, we assume so because we expect all the techniques that have been tried not to be correlated with each other. However, this expectation is either wrong (see the small list of fairly accepted conjectures that were later disproved), or stems from higher-order knowledge (knowledge about knowledge). Infinite regress.
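
Here is the toy illustration promised above: a short Python sketch in which two deliberately opposite priors over a coin's bias converge to the same estimate after enough flips (the grid, the priors, and the true bias are arbitrary choices of mine):

    import random

    thetas = [i / 100 for i in range(1, 100)]           # candidate coin biases
    prior_a = [t ** 8 * (1 - t) ** 2 for t in thetas]   # skewed towards heads
    prior_b = [(1 - t) ** 8 * t ** 2 for t in thetas]   # skewed towards tails

    random.seed(0)
    flips = [random.random() < 0.3 for _ in range(1000)]  # true bias: 0.3

    for prior in (prior_a, prior_b):
        post = prior[:]
        for heads in flips:
            post = [p * (t if heads else 1 - t) for p, t in zip(post, thetas)]
            total = sum(post)
            post = [p / total for p in post]            # renormalize each step
        print(max(zip(post, thetas))[1])                # both peak near 0.3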

Good Priors

However, conversely, having a good prior distribution is magic too. You could have a prior distribution assigning 1 to truths and 0 to non-truths. So you might want the additional requirement that the prior distribution be computable. But there are two problems:

  1. There aren't many known computable prior distributions. Occam's razor (in terms of Kolmogorov complexity in a given language) is one, but it fails miserably in most interesting situations. Think of poker, or a simplified version thereof: A+K+Q. If someone bets, the simplest explanation is that he has good cards. Most interesting situations where we want to apply Bayesianism come from human interactions (we managed to do hard sciences before Bayesianism, and we still have trouble with the social sciences). As such, failing to take bluffing into account is a big epistemic fault for a prior distribution.
  2. Evaluating the efficiency of a given prior distribution is done over the course of several experiments, and hence requires a higher-order prior distribution (a prior distribution over prior distributions). Infinite regress.

Epistemies

In real life, we don't encounter these infinite regresses: we use epistemies. An epistemy is usually a set of axioms plus a methodology for deriving truths from these axioms. They form a trusted core that we can use, provided we understand the limits of the underlying meta-assumptions and methodology.

Epistemies are good because, instead of thinking about the infinite chain of higher priors every time we want to prove a simple statement, we can rely on an epistemy. But epistemies are regularly not defined, not properly followed, or not even understood, leading to epistemic faults.

Questions

As such, I'm interested in the following :

  • When and how do we define new epistemies? E.g., "Should we define an epistemy for evaluating the utility of actions for EA?", "How should we define an epistemy for building new models of human psychology?", etc.
  • How do we account for epistemic changes in Bayesianism? (This requires self-reference, which Bayesianism lacks.)
  • How do we make sense of Scott Alexander's yearly predictions? Are they only a black box telling us to bet more on his future predictions, or do we have a better analysis?
  • What prior distributions are interesting for studying human behavior? (For a given restricted class of situations, of course.)
  • Are the answers to the previous questions useful? Are the previous questions meaningful?

I'm looking for ideas and pointers/links.

Even if your thought seems obvious, if I didn't explicitly mention it, it's worth commenting. I'll add it to this post.

Even if you only have an idea for one of the questions, or a particular criticism of a point made in the post, go ahead.

 

Thank you for reading this far.

Book Reviews

2 Torello 18 July 2017 02:19PM

Open thread, Jul. 17 - Jul. 23, 2017

1 MrMind 17 July 2017 08:15AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Machine Learning Group

7 Regex 16 July 2017 08:58PM

After signing up in this post, those of us who want to study machine learning have formed a team.

In an effort to actually get high returns on our time we won't delay, and will instead actually build the skills. First project: work through Python Machine Learning by Sebastian Raschka, with the mid-term goal of being able to implement the "recognizing handwritten digits" code near the end.

As a matter of short-term practicality, we currently don't have the hardware for GPU acceleration. This limits the things we can do, but at this stage of learning most of the time is spent on understanding and implementing the basic concepts anyway.
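
For a taste of the milestone, here is a minimal sketch of handwritten-digit classification (this is not the book's code; it uses scikit-learn's small bundled digits dataset and trains happily on a CPU):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # 8x8 grayscale digit images that ship with scikit-learn (no download needed).
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small multilayer perceptron; trains in seconds without a GPU.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # typically ~0.97 accuracy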

Here is our discord invite link if you're interested in joining in on the fun.

LessWrong Is Not about Forum Software, LessWrong Is about Posts (Or: How to Immanentize the LW 2.0 Eschaton in 2.5 Easy Steps!)

11 enye-word 15 July 2017 09:35PM

[epistemic status: I was going to do a lot of research for this post, but I decided not to, as there are no sources on the internet, so I'd have to interview people directly; I'd rather have this post be imperfect than never exist.]

Many words have been written about how LessWrong is now shit. Opinions vary about how shit exactly it is. I refer you to http://lesswrong.com/lw/n0l/lesswrong_20/ and http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/ for more comments about LessWrong being shit and the LessWrong diaspora being suboptimal.

However, how to make LessWrong stop being shit seems remarkably simple to me. Here are the steps to resurrect it:

1. Get Eliezer: The lifeblood of LessWrong is Eliezer Yudkowsky's writing. If you don't have that, what's the point of being on this website? Currently Eliezer is posting his writings on Facebook (https://www.facebook.com/groups/674486385982694/), which I consider foolish, for the same reasons I would consider it foolish to house the Mona Lisa in a run-down motel.

2. Get Scott: Once you have Eliezer back, and you sound the alarm that LW is coming back, I'm fairly certain that Scott "Yvain" Alexander will begin posting on LessWrong again. As far as I can tell he's never wanted to have to moderate a comment section, and the growing pains are straining his website at the seams. He's even mused publicly about arbitrarily splitting the Slate Star Codex comment section in two (http://slatestarcodex.com/2017/04/09/ot73-i-lik-the-thred/), which is a crazy idea on its own but completely reasonable in the context of (cross)posting to LW. Once you have Yudkowsky and Yvain, you have about 80% of what made LessWrong not shit.

3. Get Gwern: I don't read many of Gwern's posts; I just like having him around. Luckily for us, he never left!

After this is done, everyone else should wander back in, more or less.

Possible objections, with replies:

Objection: Most SSC articles and Yudkowsky essays are not on the subject of rationality and thus for your plan to work LessWrong's focus would have to subtly shift.

Reply: Shift away, then! It's LessWrong 2! We no longer have to be a community dedicated to reading Rationality: From AI to Zombies as it's written in real time; we can now be a community that takes Rationality: From AI to Zombies as a starting point and discusses whatever we find interesting! Thus the demarcation between 1.0 and 2.0!

Objection: People on LessWrong are mean and I do not like them.

Reply: The influx of new readers from the Yudkowsky-Yvain in-migration should make the tone on this website more upbeat and positive. Failing that, I don't know, ban the problem children, I guess. I don't know if it's poor form to declare this but I'd rather have a LessWrong Principate than a LessWrong Ruins. See also: http://lesswrong.com/lw/c1/against_online_pacifism/

Objection: I'd prefer, for various reasons, to just let LessWrong die.

Reply: Then kill it with your own hands! Don't let it lie here on the ground, bleeding out! Make a post called "The discussion thread at the end of the universe" that reads "LessWrong is over, piss off to r/SlateStarCodex", disallow new submissions, and be done with it! Let it end with dignity and bring a close to its history for good.

[Link] Timeline of Machine Intelligence Research Institute

5 riceissa 15 July 2017 04:57PM

[Link] Examples of Superintelligence Risk (by Jeff Kaufman)

5 Wei_Dai 15 July 2017 04:03PM

90% of problems are recommendation and adaption problems

6 casebash 12 July 2017 04:53AM

Want to improve your memory? Start a business? Fix your dating life?

The chances are that, out of the thousands upon thousands of books and blogs out there on each of these topics, there are already several that will tell you all that you need. I'm not saying that this will immediately solve your problem - you will still need to put in the hard yards of experimentation and practice - just that lack of knowledge will no longer be the limiting factor.

This suggests if we want to be winning at life (as any good rationalist should), what is most important isn't creating brilliant and completely unprecedented approaches to solve these problems, but rather taking ideas that already exist.

The first problem is recommendation - finding which out of all the thousands of books out there are the most helpful for a particular problem. Unfortunately, recommendation is not an easy problem at all. Two people may both be dealing with procrastination, but what works for one person may not work for another. Further, even for the same idea, what counts as a clear explanation is incredibly subjective - some people want more detail, others less; some people find certain examples really compelling, others don't. Recommendations are generally either one person's individual picks or those which received the highest votes, but there are probably other methods of producing a recommendation that should be looked into, such as asking people survey questions and matching on the answers, or asking people to rate a book on different factors.

The second problem is adaption. Although you shouldn't need to create any new ideas, it is likely that certain elements will need more explanation and certain elements less. For example, when writing for the rationalist community, you may need to be more precise and be clearer when you are talking figuratively, rather than literally. Alternatively, you can probably just link people to certain common ideas such as the map and territory without having to explain it.

I'll finish with a rhetorical question - what percent of solutions here are new ideas and what percentage are existing solutions? Are these in the right ratio?

UPDATE: (Please note: This article is not about time spent learning vs. time spent practising, but about existing ideas vs. new ideas. The reason why this is the focus is because LW can potentially recommend resources or adapt resources, but it can't practise for you!).


Becoming stronger together

14 b4yes 11 July 2017 01:00PM

I want people to go forth, but also to return.  Or maybe even to go forth and stay simultaneously, because this is the Internet and we can get away with that sort of thing; I've learned some interesting things on Less Wrong, lately, and if continuing motivation over years is any sort of problem, talking to others (or even seeing that others are also trying) does often help.

But at any rate, if I have affected you at all, then I hope you will go forth and confront challenges, and achieve somewhere beyond your armchair, and create new Art; and then, remembering whence you came, radio back to tell others what you learned.

Eliezer Yudkowsky, Rationality: From AI to Zombies

If you want to go fast, go alone. If you want to go far, go together.

African proverb (possibly just made up)

About a year ago, a secret rationalist group was founded. This is a report of what the group did during that year.

The Purpose

“Rationality, once seen, cannot be unseen,” are words that resonate with all of us. Having glimpsed the general shape of the thing, we feel like we no longer have a choice. I mean, of course we still have the option to think and act in stupid ways, and we probably do it a lot more than we would care to admit! We just no longer have the option to do it knowingly without feeling stupid about it. We can stray from the way, but we cannot pretend anymore that it does not exist. And we strongly feel that more is possible, both in our private lives and for society in general.

Less Wrong is the website and the community that brought us together. Rationalist meetups are a great place to find smart, interesting, and nice people; awesome people to spend your time with. But feeling good was not enough for us; we also wanted to become stronger. We wanted to live awesome lives, not just to have an awesome afternoon once in a while. But many participants seemed to be there only to enjoy the debate. Or perhaps they were too busy doing important things in their lives. We wanted to achieve something together; not just as individual aspiring rationalists, but as a rationalist group. To make peer pressure a positive force in our lives; to overcome akrasia and become more productive, to provide each other feedback and to hold each other accountable, to support each other. To win, both individually and together.

The Group

We are not super secret really; some people may recognize us by reading this article. (If you are one of them, please keep it to yourself.) We just do not want to be unnecessarily public. We know who we are and what we do, and we are doing it to win at life; trying to impress random people online could easily become a distraction, a lost purpose. (This article, of course, is an exception.) This is not supposed to be about specific individuals, but an inspiration for you.

We started as a group of about ten members, but for various reasons some people soon stopped participating; seven members remained. We feel that the current number is probably optimal for our group dynamic (see Parkinson's law), and we are not recruiting new members. We have a rule “what happens in the group, stays in the group”, which allows us to be more open to each other. We seem to fit together quite well, personality-wise. We desire to protect the status quo, because it seems to work for us.

But we would be happy to see other groups like ours, and to cooperate with them. If you want to have a similar kind of experience, we suggest starting your own group. Being in contact with other rationalists, and holding each other accountable, seems to benefit people a lot. CFAR also tries to keep their alumni in regular contact after the rationality workshops, and some have reported this as a huge added value.

To paint a bit more specific picture of us, here is some summary data:

  • Our ages are between 20 and 40, mostly in the middle of the interval.
  • Most of us, but not all, are men.
  • Most of us, but not all, are childless.
  • All of us are of majority ethnicity.
  • Most of us speak the majority language as our first language.
  • All of us are atheists; most of us come from atheist families.
  • Most of us have middle-class family background.
  • Most of us are, or were at some moment, software developers.

I guess this is more or less what you could have expected, if you are already familiar with the rationalist community.

We share many core values, but have some different perspectives, which adds value and confronts groupthink. We have entrepreneurs, employees, students, and unemployed bums; the ratio changes quite often. It is probably the combination of all of us having a good sense of epistemology, but different upbringing, education and professions, that makes supporting each other and giving advice more effective (i.e. beyond the usual benefits of the outside view); there have been plenty of situations which were trivial for one, but not for the other.

Some of us knew each other for years before starting the group, even before the local Less Wrong meetups. Some of us met the others at the meetups. And finally, some of us talked to some other members for the first time after joining the group. It is surprising how well we fit, considering that we didn’t apply any membership filter (although we were prepared to); people probably filtered themselves by their own interest, or lack thereof, in joining this kind of group, specifically one with the productivity and accountability requirements.

We live in different cities. About once a month we meet in person; typically before or after the local Less Wrong meetup. We spend a weekend together. We walk around the city and debate random stuff in the evening. In the morning, we have a “round table” where each of us provides a summary of what they did during the previous month, and what they are planning to do during the following month; about 20 minutes per person. That takes a lot of time, and you have to be careful not to go off-topic too often.

In between meetups, we have a Slack team that we use daily. Various channels for different topics; the most important one is a “daily log”, where members can write briefly what they did during the day, and optionally what they are planning to do. In addition to providing extra visibility and accountability, it helps us feel like we are together, despite the geographical distances.

Besides mutual accountability, we are also fans of various forms of self-tracking. We share tips about tools and techniques, and show each other our data. Journaling, time tracking, exercise logging, step counting, finance tracking...

Even before starting the group, most of us were interested in various productivity systems: Getting Things Done, PJ Eby; one of us even wrote and sold their own productivity software.

We do not share a specific plan or goal, besides “winning” in general. Everyone follows their own plan. Everything is voluntary; there are no obligations nor punishments. Still, some convergent goals have emerged.

Also, good habits seem to be contagious, at least in our group. If a single person was doing some useful thing consistently, eventually the majority of the group seems to pick it up, whether it is related to productivity, exercise, diet, or finance.

Exercise

All of us exercise regularly. Now it seems like obviously the right thing to do. Exercise improves your health and stamina, including mental stamina. For example, the best chess players exercise a lot, because it helps them stay focused and keep thinking for a long time. Exercise increases your expected lifespan, which should be especially important for transhumanists, because it increases your chances of surviving until the Singularity. Exercise also makes you more attractive, creating a halo effect that brings many other benefits.

If you don’t consider these benefits worth at least 2 hours of your time a week, we find it difficult to consider you a rational person who takes their ideas seriously. Yes, even if you are busy doing important things; the physical and mental stamina gained from exercising is a multiplier to whatever you are doing in the rest of your time.

Most of us lift weights (see e.g. StrongLifts 5×5, Alan Thrall); some of us even have a power rack and/or treadmill desk at home. Others exercise using their body weight (see Convict Conditioning). Exercising at home saves time, and in the long term also money. Muscle mass correlates with longevity, in addition to the effect of exercise itself; and having more muscle allows you to eat more food. Speaking of which...

Diet

Most of us are, mostly or completely, vegetarian or vegan. Ignoring the ethical aspects and focusing only on health benefits, there is a lot of nutrition research summarized in the book How Not to Die and on the website NutritionFacts.org. The short version is that a whole-food vegan diet seems to work best, but you really should look into the details. (Not all vegan food is automatically healthy; there is also vegan junk food. It is important to eat a lot of unprocessed vegetables, fruit, nuts, flax seeds, broccoli, beans. Read the book, seriously. Or download the Daily Dozen app.) We often share tasty recipes when we meet.

We also helped each other research food supplements, and actually find the best and cheapest sources. Most of us take extra B12 to supplement the vegan diet, creatine monohydrate, vitamin D3, and some of us also use Omega3, broccoli sprouts, and a couple of other things that are generally aimed at health and longevity.

Finance

We strategize and brainstorm career decisions or just debug office politics. Most of us are software developers. This year, one member spent nine months learning how to program (using Codecademy, Codewars, and freeCodeCamp at the beginning; reading tutorials and documentation later); as a result their income more than doubled, and they got a job they can do fully remotely.

Recently we started researching cryptocurrencies and investing in them. Some of us started doing P2P lending.

Personal life

Many of us are polyamorous. We openly discuss sex and body image issues in the group. We generally feel comfortable sharing this information with each other; women say they do not feel the typical chilling effects.

Summary

Different members report different benefits from their membership in the group. Some quotes:

“During the first half of the year, my life was more or less the same. I was already very productive before the group, so I kept the same habits, but benefited from sharing research. Recently, my life changed more noticeably. I started training myself to think of more high-leverage moves (inspired by a talk on self-hypnosis). This changed my asset allocation, and my short-term career plans. I realize more and more that I am very much monkey see, monkey do.”

“Before stumbling over the local Less Wrong meetup, I had been longing (and looking) for people who shared, or even just understood, my interest and enthusiasm for global, long-term, and meta thinking (what I now know to be epistemic rationality). After the initial thrill of the discovery had worn off however, I soon felt another type of dissonance creeping up on me: "Wait, didn't we agree that this was ultimately about winning? Where is the second, instrumental half of rationality, that was supposedly part of the package?" Well, it turned out that the solution to erasing this lingering dissatisfaction was to be found in yet a smaller subgroup.

So, like receiving a signal free of interference for the first time, I finally feel like I'm in a "place" where I can truly belong, i.e. a tribe, or at least a precursor to one, because I believe that things hold the potential to be way more awesome still, and that just time alone may already be enough to take us there.

On a practical level, the speed of adoption of healthy habits is truly remarkable. I've always been able to generally stick to any goals and commitments I've settled on; however, the process of convergence is just so much faster and easier when you can rely on the judgment of other epistemically trustworthy people. Going at full speed is orders of magnitude easier when multiple people illuminate the path (i.e. figure out what is truly worth it), while simultaneously sharing the burdens (of research, efficient implementation, trial-and-error, etc.)”

“Now I'm on a whole-food vegan diet and I exercise 2 times a week, and I also improved in introspection and solving my life problems. But most importantly, the group provides me companionship and emotional support; for example, starting a new career is a lot easier in the presence of a group where reinventing yourself is the norm.”

“It usually takes grit and willpower to change if you do it alone; on the other hand, I think it's fairly effortless if you're simply aligning your behavior with a preexisting strong group norm. I used to eat garbage, smoke weed, and have no direction in life. Now I lift weights, eat ~healthy, and I learned programming well enough to land a great job.

The group provides existential mooring; it is a homebase out of which I can explore life. I don't think I'm completely un-lost, but instead of being alone in the middle of a jungle, I'm at a friendly village in the middle of a jungle.”

“I was already weightlifting and eating vegan, but got motivated to get more into raw and whole foods. I get confronted more with math, programming and finance, and can broaden my horizon. Sharing daily tasks in Slack helps me to reflect about my priorities. I already could discuss many current career and personal challenges with the whole group or individuals.”

“I started exercising regularly, and despite remaining an omnivore I eat much more fresh vegetables now than before. People keep telling me that my body shape improved a lot during this year. Other habits did not stick (yet).”

“Finding a tribe of sane people in an insane world was a big deal for me, now I feel more self-assured and less alone. Our tribe has helped me to improve my habits—some more than others (for example, it has inspired me to buy a power-rack for my living room and start weightlifting daily, instead of going to the gym). The friendly bragging we do among our group is our way of celebrating success and inspires me to keep going and growing.”

Random

Despite having met each other thanks to Less Wrong, most of us do not read it anymore, because our impression is that “Less Wrong is dead”. We do read Slate Star Codex.

From other rationalist blogs, we really liked the article about Ra, and we discussed it a lot.

The proposal of a Dragon Army evoked mixed reactions. On one hand, we approve of rationalists living closer to each other, and we want to encourage fellow rationalists to try it. On the other hand, we don’t like the idea of living in a command hierarchy; we are adults, and we all have our own projects. Our preferred model would be living close to each other; optimally in the same apartment building with some shared communal space, but also with a completely self-contained unit for each of us. So far our shared living happened mostly by chance, but it always worked out very well.

Jordan Peterson and his Self-Authoring Suite is very popular with about half of the group.

What next?

Well, we are obviously going to continue doing what we are doing now, hopefully even better than before, because it works for us.

You, dear reader, if you feel serious about becoming stronger and winning at life, but are not yet a member of a productive rationalist group, are encouraged to join one or start one. Geographical distances are annoying, but Slack helps you overcome the intervals between meetups. Talking to other rationalists can be a lot of fun, but accountability can make the difference between productivity and mere talking. Remember: “If this is your first night at fight club, you have to fight!”

Even if it’s seemingly small things, such as doing an exercise or adding some fiber to your diet - these things, accumulated over time, can increase your quality of life a lot. The most important habit is the meta-habit of creating and maintaining good habits. And it is always easier when your tribe is doing the same thing.

Any questions? It may take some time for our hive mind to generate an answer, and in case of too many or too complex questions we may have to prioritize. Don’t feel shy, though. We care about helping others.

 

(This account was created for the purpose of making this post, and after a week or two it will stop being used. It may be resurrected after another year, or maybe not. Please do not send private messages; they will most likely be ignored.)

In praise of fake frameworks

14 Valentine 11 July 2017 02:12AM

Related to: Bucket errors, Categorizing Has Consequences, Fallacies of Compression

Followup to: Gears in Understanding


I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.


I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the "pitfalls" can even sometimes be epistemically useful.


Here I want to share why. This is for two reasons:


  • I think fake framework use is a wonderful skill. I want it represented more in rationality in practice. Or, I want to know where I'm missing something, and Less Wrong is a great place for that.

  • I'm building toward something. This is actually a continuation of Gears in Understanding, although I imagine it won't be at all clear here how. I need a suite of tools in order to describe something. Talking about fake frameworks is a good way to demo tool #2.


With that, let's get started.

continue reading »

[Link] Interpreting Deep Neural Networks using Cognitive Psychology (DeepMind)

0 Gunnar_Zarncke 10 July 2017 09:09PM

Best Of Rationality Blogs RSS Feed

5 SquirrelInHell 10 July 2017 11:11AM

[Note: There's already a gather-it-all feed by deluks917, and the lw summary recently had a "most recommended" section, so it covers some of what I'm doing here.]

This is an RSS feed that aggregates the most valuable posts (according to me) from around 40 or 50 rationality blogs. It's relatively uncluttered, averaging 3-5 articles per week.

Feed URL: http://www.inoreader.com/stream/user/1005752783/tag/user-favorites

There's also a Facebook page version, and you can view it online using any of the available free RSS viewers.

Edit: see my comment below for details of the heuristics I use for selecting articles for the feed.

Open thread, July 10 - July 16, 2017

3 Thomas 10 July 2017 06:31AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Bi-Weekly Rational Feed

11 deluks917 09 July 2017 07:11PM

===Highly Recommended Articles:

Just Saying What You Mean Is Impossible by Zvi Moshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.

In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: Lacking sympathy, few good defaults. Defenses: It's very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world, prepare for the failure modes. Modernity has big upsides, some people will make better choices than the traditional rules allow.

My Current Thoughts On Miris Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.

Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.

Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.

The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.

On Dragon Army by Zvi Moshowitz - Long response to many quotes from "Dragon Army Barracks". Duncan's attitude to criticism. Tyler Durden shouldn't appeal to Duncan. Authoritarian group houses haven't been tried. Rationalists undervalue exploration. Loneliness and doing big things. The pendulum model of social progress. Sticking to commitments even when it's painful. Saving face when you screw up. True Reliability: The bay is way too unreliable but Duncan goes too far. Trust and power dynamics. Pragmatic criticism of the charter.

Without Belief In A God But Never Without Belief In A Devil by Lou (sam[]zdat) - The nature of mass movements. The beats and the John Birchers. The taxonomy of the frustrated. Horseshoe theory. The frustrated cannot derive satisfaction from action; something else has to fill the void. Poverty, work and meaning. Mass movements need to sow resentment. Hatred is the strongest unifier. Modernity inevitably causes justified resentment. Tocqueville, Polanyi, Hoffer and Scott's theories. Helpful and unhelpful responses.

On The Effects Of Inequality On Economic Growth by Artir (Nintil) - Most of the article tries to explain and analyze the economic consensus on whether inequality harms growth. A very large number of papers are cited and discussed. A conclusion that the effect is at most small.

===Scott:

Two Kinds Of Caution by Scott Alexander - Sometimes boring technologies (ex container ships) wind up being far more important than flashy tech. However Scott argues that often the flashy tech really is important. There is too much contrarianism and not enough meta-contrarianism. AI risk.

Open Road by Scott Alexander - Bi-weekly public open thread. Some messages from Scott Alexander.

To The Great City by Scott Alexander - Scott's Karass is in San Francisco. He is going home.

Open Thread 78.75 by Scott Alexander - Bi-weekly public open thread.

Why Are Transgender People Immune To Optical Illusions by Scott Alexander - Scott's community survey showed, with a huge effect size, that transgender individuals are less susceptible to the spinning mask and dancer illusions. Trans people suffer from dissociative disorders at a high rate. Connections between the two phenomena and NMDA. Commentary on the study methodology.

Contra Otium On Individualism by Scott Alexander (Scratchpad) - Eight point summary of Sarah's defense of individualism. Scott is terrified the market place of ideals doesn't work and his own values aren't memetically fit.

Conversation Deliberately Skirts The Border Of Incomprehensibility by Scott Alexander - Communication is often designed to be confusing so as to preserve plausible deniability.

===Rationalist:

Rethinking Reality And Rationality by mindlevelup - Productivity is almost a solved problem. Much current rationalist research is very esoteric. Finally grokking effective altruism. Getting people good enough at rationality that they are self correcting. Pedagogy and making research fields legible.

The Power Of Pettiness by Sarah Perry (ribbonfarm) - "These emotions – pettiness and shame – are the engines driving epistemic progress" Four virtues: Loneliness, ignorance, pettiness and overconfidence.

Irrationality is in the Eye of the Beholder by João Eira (Lettuce be Cereal) - Is eating a chocolate croissant on a diet always irrational? Context, hidden motivations and the curse of knowledge.

The Abyss Of Want by AellaGirl - The infinite regress of 'Asking why'. Taking acid and ego death. You can't imagine the experience of death. Coming back to life. Wanting to want things. Humility and fake enlightenment.

Epistemic Laws Of Motion by SquirrelInHell - Newton's three laws re-interpreted in terms of psychology and people's strategies. A worked example using 'physics' to determine if someone will change their mind. Short and clever.

Against Lone Wolf Selfimprovement by cousin_it (lesswrong) - Lone wolf improvement is hard. Too many rationalists attempt it for cultural and historical reasons. It's often better to take a class or find a group.

Fictional Body Language by Eukaryote - Body language in literature is often very extreme compared to real life. Emojis don't easily map to irl body language. A 'random' sample of how emotion is represented in American Gods, Earth and Lirael. Three strategies: Explicitly describing feelings vs describing actions vs metaphors.

Bayesian Probability Theory As Extended Logic - A New Result by ksvanhorn (lesswrong) - Cox's theorem is often cited to support the claim that Bayesian probability is the only valid fundamental method of plausible reasoning. A simplified guide to Cox's theorem. The author presents their own result, which uses weaker assumptions than Cox's theorem. The author's full paper and a more detailed exposition of Cox's theorem are linked.

Steelmanning The Chinese Room Argument by cousin_it (lesswrong) - A short thought experiment about consciousness and inferring knowledge from behavior.

Ideas On A Spectrum by Elo (BearLamp) - Putting ideas like 'selfishness' on a spectrum. Putting yourself and others on the spectrum. People who give you advice might disagree with you about where you fall on the spectrum. Where do you actually stand?

A Post Em Era Hint by Robin Hanson - In past ages there were pairs of innovations that enabled the emulation age without changing the growth rate. Forager: Reasoning and language. Farmer: Writing and math. Industrial: Computers and Digital Communication. What will the em-age equivalents be?

Zen Koans by Elo (BearLamp) - Connections between koans and rationalist ideas. A large number of koans are included at the end of the post. Audio of the associated meetup is included.

Fermi Paradox Resolved by Tyler Cowen - Link to a presentation. Don't just multiply point estimates. Which Drake parameters are uncertain. The Great filter is probably in the past. Lots of interesting graphs and statistics. Social norms and laws. Religion. Eusocial society.

Developmental Psychology In The Age Of Ems by Gordan (Map and Territory) - Brief intro to the Age of Em. Farmer values. Robin's approach to futurism. Psychological implications of most ems being middle aged. Em conservatism and maturity.

Call To Action by Elo (BearLamp) - Culmination of a 21 article series on life improvement and getting things done. A review of the series as a whole and thoughts on moving forward.

Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.

Onemagisterium Bayes by tristanm (lesswrong) - Toolbox-ism is the dominant mode of thinking today. Downsides of toolbox-ism. Desiderata that imply Bayesianism. Major problems: Assigning priors and encountering new hypotheses. Four minor problems. Why the author is still a strong Bayesian. Strong Bayesians can still use frequentist tools. AI risk.

Selfconscious Ideology by casebash (lesswrong) - Less Wrong has a self-conscious ideology. Self-conscious ideologies have major advantages even if any given self-conscious ideology is flawed.

Intellectuals As Artists by Robin Hanson - Many norms function to show off individual impressiveness: Conversations, modern songs, taking positions on diverse subjects. Much intellectualism is optimized for status gains, not finding truth.

Just Saying What You Mean Is Impossible by Zvi Moshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.

In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: Lacking sympathy, few good defaults. Defenses: It's very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world, prepare for the failure modes. Modernity has big upsides, some people will make better choices than the traditional rules allow.

Forget The Maine by Robin Hanson - Monuments are not optimized for reminding people to do better. Instead they largely serve as vehicles for simplistic ideology.

The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.

On Dragon Army by Zvi Moshowitz - Long response to many quotes from "Dragon Army Barracks". Duncan's attitude to criticism. Tyler Durden shouldn't appeal to Duncan. Authoritarian group houses haven't been tried. Rationalists undervalue exploration. Loneliness and doing big things. The pendulum model of social progress. Sticking to commitments even when it's painful. Saving face when you screw up. True Reliability: The bay is way too unreliable but Duncan goes too far. Trust and power dynamics. Pragmatic criticism of the charter.

===AI:

Updates To The Research Team And A Major Donation by The MIRI Blog - MIRI received a $1 million donation. Two new full-time researchers. Two researchers leaving. Medium-term financial plans.

Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.

Why Don't Ai Researchers Panic by Bayesian Investor - AI researchers predict a 5% chance of "extremely bad" (extinction level) events, why aren't they panicking? Answers: They are thinking of less bad worst cases, optimism about counter-measures, risks will be easy to deal with later, three "star theories" (MIRI, Paul Christiano, GOFAI). More answers: Fatal pessimism and resignation. It would be weird to openly worry. Benefits of AI-safety measures are less than the costs. Risks are distant.

Strategic Implications Of Ai Scenarios by Tobias Baumann (EA forum) - Questions and topics: Advanced AI timelines. Hard or soft takeoff? Goal alignment? Will advanced AI act as a single entity or a distributed system? Implications for estimating the EV of donating to AI-safety.

Tool Use Intelligence Conversation by The Foundational Research Institute - A dialogue. Comparisons between humans and chimps/lions. The value of intelligence depends on the available tools. Defining intelligence. An addendum on "general intelligence" and factors that make intelligence useful.

Self-modification As A Game Theory Problem by (lesswrong) - "If I'm right, then any good theory for cooperation between AIs will also double as a theory of stable self-modification for a single AI, and vice versa." An article with mathematical details is linked.

Looking Into Ai Risk by Jeff Kaufman - Jeff is trying to decide if AI risk is a serious concern and whether he should consider working in the field. Jeff's AI-risk reading list. A large comment section with interesting arguments.

===EA:

Ea Marketing And A Plea For Moral Inclusivity by MichaelPlant (EA forum) - EA markets itself as being about poverty reduction. Many EAs think other topics are more important (far future, AI, animal welfare, etc). The author suggests becoming both more inclusive and more openly honest.

My Current Thoughts On Miris Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.

How Can We Best Coordinate As A Community by Ben Todd (EA forum) - 'Replaceability' is a bad reason not to do direct work; lots of positions are very hard to fill. Comparative Advantage and division of labor. Concrete ways to boost productivity: 5 minute favours, Operations roles, Community infrastructure, Sharing knowledge and Specialization. EA Global Video is included.

Deciding Whether to Recommend Fistula Management Charities by The GiveWell Blog - "An obstetric fistula, or gynecologic fistula, is an abnormal opening between the vagina and the bladder or rectum." Fistula management, including surgery. Open questions and uncertainty particularly around costs. Our plans to partner with IDinsight to answer these questions.

Allocating the Capital by GiveDirectly - Eight media links on Give Directly, Basic Income and Cash Transfers.

Testing An Ea Networkbuilding Strategy by remmelt (EA forum) - Pivot from supporting EA charities to cooperating with EA networks. Detailed goals, strategy, assumptions, metrics, collaborators and example actions.

How Long Does It Take To Research And Develop A Vaccine by (EA forum) - How long it takes to make a vaccine. Literature review. Historical data on how long a large number of vaccines took to develop. Conclusions.

Hi Im Luke Muehlhauser Ama About Open by Luke Muehlhauser (EA forum) - Animal and computer consciousness. Luke wrote a report for the Open Philanthropy Project on consciousness. Lots of high quality questions have been posted.

Hidden Cost Digital Convenience by Innovations for Poverty - Moving from in person to digital micro-finance can harm saving rates in developing countries. Reduction in group cohesion and visible transaction fees. Linked paper with details.

Projects People And Processes by Open Philanthropy - Three approaches used by donors and decision makers: choose from projects presented by experts, defer near-fully to trusted individuals, or establish systematic criteria. Pros and cons of each. Open Phil's current approach.

Effective Altruism An Idea Repository by Onemorenickname (lesswrong) - Effective altruism is less of a closed organization than the author thought. Building a better platform for effective altruist idea sharing.

Effective Altruism As Costly Signaling by Raemon (EA forum) - " 'a bunch of people saying that rich people should donate to X' is a less credible signal than 'a bunch of people saying X thing is important enough that they are willing to donate to it themselves.' "

The Person Affecting Philanthropists Paradox by MichaelPlant (EA forum) - Population ethics. The value of creating more happy people as opposed to making pre-existing people happy. Application to the question of whether to donate now or invest and donate later.

Oops Prize by Ben Hoffman (Compass Rose) - Positive norms around admitting you were wrong. Charity Science publicly admitted they were wrong about grant writing. Did any organization at EA Global admit it made a costly mistake? A $1K oops prize.

===Politics and Economics:

Scraps 3 Hoffer And Performance Art by Lou (sam[]zdat) - Growing out of radicalism. Either economic or family instability can cause mass movements. Why the left has adopted Freud. The Left's economic platform is popular; its cultural platform is not. Performance art: Marina Abramović's 'Rhythm 0'. Recognizing and denying your own power.

What Replaces Rights And Discourse by Freddie deBoer - Lots of current leftist discourse is dismissive of rights and open discussion. But what alternative is there? The Soviets had bad justifications and a terrible system, but at least it had an explicit philosophical alternative.

Why Do You Hate Elua by H i v e w i r e d - Scott's Elua as an Eldritch Abomination that threatens traditional culture. An extended sci-fi quote about Ra the great computer. "The forces of traditional values remembered an important fact: once you have access to the hardware, it’s over."

Why Did Europe Lose Crusades by Noah Smith - Technological comparison between Europe and the Middle East. Political divisions on both sides. Geographic distance. Lack of motivation.

Econtalk On Generic Medications by Aceso Under Glass - A few egregious ways that big pharma games the patent system. Short.

Data On Campus Free Speech Cases by Ozy (Thing of Things) - Ozy classifies a sample of the cases handled by the Foundation for Individual Rights in Education. Ozy classifies 77 cases as conservative, liberal or apolitical censorship. Conservative ideas were censored 52%, liberal 26% and apolitical 22%.

Beware The Moral Spotlight by Robin Hanson - The stated goals of government/business don't much matter compared to the selective pressures on their leadership; don't obsess over which sex has the worse deal overall; don't overrate the benefits of democracy and ignore higher-impact changes to government.

Reply To Yudkowsky by Bryan Caplan - Caplan quotes and replies to many sections of Yudkowsky's response. Points: Yudkowsky's theory is a special case of Caplan's. The left has myriad complaints about markets. Empirically, the market has consistently provided large benefits in many countries and times.

Without Belief In A God But Never Without Belief In A Devil by Lou (sam[]zdat) - The nature of mass movements. The beats and the John Birchers. The taxonomy of the frustrated. Horseshoe theory. The frustrated cannot derive satisfaction from action; something else has to fill the void. Poverty, work and meaning. Mass movements need to sow resentment. Hatred is the strongest unifier. Modernity inevitably causes justified resentment. Tocqueville, Polanyi, Hoffer and Scott's theories. Helpful and unhelpful responses.

Genetic Behaviorism Supports The Influence Of Chance On Life Outcomes by Freddie deBoer - Much of the variance in many traits is non-shared-environment. Much non-shared-environment can be thought of as luck. In addition no one chooses or deserves their genes.

Yudkowsky On My Simplistic Theory of Left and Right by Bryan Caplan - Yudkowsky claims the left holds the market to the same standards as human beings. The market as a ritual holding back a dangerous Alien God. Caplan doesn't respond; he just quotes Yudkowsky.

On The Effects Of Inequality On Economic Growth by Artir (Nintil) - Most of the article tries to explain and analyze the economic consensus on whether inequality harms growth. A very large number of papers are cited and discussed. A conclusion that the effect is at most small.

===Misc:

Erisology Of Self And Will Representative Campbell Speaks by Everything Studies - An exposition of the "mainstream" view of the self and free will.

What Is The Ein Sof The Meaning Of Perfection In by arisen (lesswrong) - "Kabbalah is based on the analogy of the soul as a cup and G-d as the light that fills the cup. Ein Sof, nothing ("Ein") can be grasped ("Sof"-limitation)."

Sexual Taboos by AellaGirl - A graph of sexual fetishes. The axes are "taboo-ness" and "reported interest". Taboo-ness correlated negatively with interest (p < 0.01). Lots of fetishes are included and the sample size is pretty large.

Huffman Codes Problem by protokol2020 - Find the possible Huffman Codes for all twenty-six English letters.

If You're In School Try The Curriculum by Freddie deBoer - Ironic detachment "leaves you with the burden of the work but without the emotional support of genuine resolve". Don't be the sort of person who tweets hundreds of thousands of times but pretends they don't care about online.

Media Recommendations by Sailor Vulcan (BYS) - Various Reviews including: Games, Animated TV shows, Rationalist Pokemon. A more detailed review of Harry Potter and the Methods of Rationality.

Sunday Assorted Links by Tyler Cowen - Variety of Topics. Ethereum Cryptocurrency, NYC Diner decline, Building Chinese Airports, Soccer Images, Drone Wars, Harberger Taxation, Douthat on Heathcare.

Summary Of Reading April June 2017 by Eli Bendersky - Brief reviews. Various topics: Heavy on Economics. Some politics, literature and other topics.

Rescuing The Extropy Magazine Archives by deku_shrub (lesswrong) - "You'll find some really interesting very early articles on neural augmentation, transhumanism, libertarianism, AI (featuring Eliezer), radical economics (featuring Robin Hanson of course) and even decentralized payment systems."

Epistemic Spot Check A Guide To Better Movement Todd Hargrove by Aceso Under Glass - Flexibility and Chronic Pain. Early section on flexibility fails check badly. Section on psychosomatic pain does much better. Model: Simplicity (Good), Explanation (Fantastic), Explicit Predictions (Good), Useful Predictions (Poor), Acknowledge Limits (Poor), Measurability (Poor).

Book Review Barriers by Eukaryote - Even cell culturing is surprisingly hard if you don't know the details. There is not much institutional knowledge left in the field of bioweapons. Forcing labs underground makes bioterrorism even harder. However synthetic biology might make things much more dangerous.

Physics Problem 2 by protokol2020 - Can tidal forces rotate a metal wheel?

Poems by Scott Alexander (Scratchpad) - Violets aren't blue.

Evaluating Employers As Junior Software by Particular Virtue - You need to write a lot of code and get detailed feedback to improve as an engineer. Practical suggestions to ensure your first job fulfills both conditions.

===Podcast:

Kyle Maynard Without Limits by Tim Ferriss - "Kyle Maynard is a motivational speaker, bestselling author, entrepreneur, and ESPY award-winning mixed martial arts athlete, known for becoming the first quadruple amputee to reach the summit of Mount Kilimanjaro and Mount Aconcagua without the aid of prosthetics."

85 Is This The End Of Europe by Waking Up with Sam Harris - Douglas Murray and his book 'The Strange Death of Europe: Immigration, Identity, Islam'.

Myers Briggs, Diet, Mistakes And Immortality by Tim Ferriss - Ask me anything podcast. Topics beyond the title: Questions to prompt introspection, being a Jack of All Trades, balancing future and present goals, don't follow your passion, 80/20 memory retention, advice to your past selves.

Interview Ro Khanna Regional Development by Tyler Cowen - Bloomberg Podcast. "Technology, jobs and economic lessons from his perspective as Silicon Valley’s congressman."

Avik Roy by The Ezra Klein Show - Better Care Reconciliation Act, broader health care philosophies that fracture the right. Roy’s disagreements with the CBO’s methodology. The many ways he thinks the Senate bill needs to improve. How the GOP has moved left on health care policy. Medicaid, welfare reform, and the needy who are hard to help. The American health care system subsidizes the rich, etc.

Chris Blattman 2 by EconTalk - "Whether it's better to give poor Africans cash or chickens and the role of experiments in helping us figure out the answer. Along the way he discusses the importance of growth vs. smaller interventions and the state of development economics."

Landscapes Of Mind by Waking Up with Sam Harris - "why it’s so hard to predict future technology, the nature of intelligence, the 'singularity', artificial consciousness."

Blake Mycoskie by Tim Ferriss - Early entrepreneurial ventures. The power of journaling. How “the stool analogy” changed Blake’s life. Lessons from Ben Franklin.

Ben Sasse by Tyler Cowen - "Kansas vs. Nebraska, famous Nebraskans, Chaucer and Luther, unicameral legislatures, the decline of small towns, Ben’s prize-winning Yale Ph.d thesis on the origins of conservatism, what he learned as a university president, Stephen Curry, Chevy Chase, Margaret Chase Smith"

Danah Boyd on why Fake News is so Easy to Believe by The Ezra Klein Show - Fake news, digital white flight, how an anthropologist studies social media, machine learning algorithms reflect our prejudices rather than fixing them, what Netflix initially got wrong about their recommendations engine, the value of pretending your audience is only six people, the early utopian visions of the internet.

Robin Feldman by EconTalk - Ways pharmaceutical companies fight generics.

Jason Weeden On Do People Vote Based On Self Interest by Rational Speaking - Do people vote based on personality, their upbringing, blind loyalty or do they follow their self interest? What does self-interest even mean?

Reid Hoffman 2 by Tim Ferriss - The 10 Commandments of Startup Success according to the extremely successful investor Reid Hoffman.

[Link] The Internet as an existential threat

4 Kaj_Sotala 09 July 2017 11:40AM

[Link] Daniel Dewey on MIRI's Highly Reliable Agent Design Work

9 lifelonglearner 09 July 2017 04:35AM

Mini map of s-risks

2 turchin 08 July 2017 12:33PM
S-risks are risks of infinite global suffering in the future. The Foundational Research Institute has suggested them as the most serious class of existential risks - even more serious than painless human extinction. So it is time to explore the types of s-risks and what to do about them.

Possible causes and types of s-risks:
"Normal Level" - some forms of extreme global suffering exist now, but we ignore them:
1. Aging, loss of loved ones, mortal illness, infinite suffering, dying, death and non-existence - for almost everyone, because humans are mortal
2. Nature as a place of suffering, where animals constantly eat each other. Evolution acts as a superintelligence which created suffering and uses it for its own advancement.

Colossal level:
1. Quantum immortality creates bad immortality - surviving as an old but perpetually dying person, because of weird observation selection.
2. AI goes wrong: 2.1. Roko's basilisk. 2.2. An error in programming. 2.3. A hacker's joke. 2.4. Indexical blackmail.
3. Two AIs go to war with each other, and one of them is benevolent to humans, so the other AI tortures humans to gain a bargaining position in the future deal.
4. X-risks which include infinite suffering for everyone - a natural pandemic, a cancer epidemic, etc.
5. Possible worlds (in Lewis's terms) with infinite-suffering qualia in them. For any human, a possible world with his infinite suffering exists. Modal realism makes such worlds real.

Ways to fight s-risks:
1. Ignore them, by boxing personal identity inside today
2. A benevolent AI fights a "measure war" to create infinitely more copies of happy beings, as well as trajectories in the space of possible minds from suffering to happiness

Types of most intensive sufferings:

Qualia-based, listed from bad to worse:
1. Eternal, but bearable in each moment, suffering (anhedonia)
2. Unbearable suffering - suffering to which death is the preferable outcome (cancer, death in fire, death by hanging). However, as Marcus Aurelius said: "Unbearable pain kills. If it does not kill, it is bearable."
3. Infinite suffering - a quale of infinite pain, such that duration doesn't matter (not known whether it exists)
4. Infinitely growing eternal suffering, created by constant upgrades of the suffering subject (a hypothetical type of suffering created by a malevolent superintelligence)

Value-based s-risks:
1. The most violent action against one's main values, like "brutal murder of children"
2. Meaninglessness, acute existential terror, or derealisation with depression (Nabokov's short story "Terror") - an incurable and logically proven understanding of the meaninglessness of life
3. Death and non-existence as forms of counter-value suffering.

Time-based:
1. Infinite time without happiness.

Subjects who may suffer from s-risks:

1. Anyone as individual person
2. Currently living human population
3. Future generation of humans
4. Sapient beings
5. Animals
6. Computers, neural nets with reinforcement learning, robots and AIs.
7. Aliens
8. Unembodied suffering in stones, Boltzmann brains, pure qualia, etc.

My position

It is important to prevent s-risks, but not by increasing the probability of human extinction, as that would mean we had already fallen victim to blackmail by non-existent things.

Also, s-risk is already the default outcome for everyone personally (and in that sense it is global), because of inevitable aging and death (and maybe bad quantum immortality).

People prefer the illusory certainty of non-existence to the hypothetical possibility of infinite suffering. But nothing is certain after death.

In the same way, overestimating animal suffering results in underestimating human suffering and the risks of human extinction. But animals suffer more in the forests than on animal farms, where they are fed every day, get basic healthcare, and face no predators who will eat them alive.

The hope that we will prevent future infinite suffering by stopping progress or committing suicide, on the personal or civilizational level, is wrong. It will not help animals. It will not help with suffering in possible worlds. It will not even prevent suffering after death, if quantum immortality in some form is true.

But the fear of infinite suffering makes us vulnerable to any type of "acausal" blackmail. The only way to fight suffering in possible worlds is to create an infinitely larger possible world full of happiness.


[Link] Epistemic Laws of Motion

0 SquirrelInHell 07 July 2017 09:37PM

[Link] Postdoc opening at U. of Washington in AI law and policy

3 mindspillage 07 July 2017 06:53PM

Against lone wolf self-improvement

27 cousin_it 07 July 2017 03:31PM

LW has a problem. Openly or covertly, many posts here promote the idea that a rational person ought to be able to self-improve on their own. Some of it comes from Eliezer's refusal to attend college (and Luke dropping out of his bachelors, etc). Some of it comes from our concept of rationality, that all agents can be approximated as perfect utility maximizers with a bunch of nonessential bugs. Some of it is due to our psychological makeup and introversion. Some of it comes from trying to tackle hard problems that aren't well understood anywhere else. And some of it is just the plain old meme of heroism and forging your own way.

I'm not saying all these things are 100% harmful. But the end result is a mindset of lone wolf self-improvement, which I believe has harmed LWers more than any other part of our belief system.

Any time you force yourself to do X alone in your room, or blame yourself for not doing X, or feel isolated while doing X, or surf the web to feel some human contact instead of doing X, or wonder if X might improve your life but can't bring yourself to start... your problem comes from believing that lone wolf self-improvement is fundamentally the right approach. That belief is comforting in many ways, but noticing it is enough to break the spell. The fault wasn't with the operator all along. Lone wolf self-improvement doesn't work.

Doesn't work compared to what? Joining a class. With a fixed schedule, a group of students, a teacher, and an exam at the end. Compared to any "anti-akrasia technique" ever proposed on LW or adjacent self-help blogs, joining a class works ridiculously well. You don't need constant willpower: just show up on time and you'll be carried along. You don't get lonely: other students are there and you can't help but interact. You don't wonder if you're doing it right: just ask the teacher.

Can't find a class? Find a club, a meetup, a group of people sharing your interest, any environment where social momentum will work in your favor. Even an online community for X that will reward your progress with upvotes is much better than going X completely alone. But any regular meeting you can attend in person, which doesn't depend on your enthusiasm to keep going, is exponentially more powerful.

Avoiding lone wolf self-improvement seems like embarrassingly obvious advice. But somehow I see people trying to learn X alone in their rooms all the time, swimming against the current for years, blaming themselves when their willpower isn't enough. My message to such people: give up. Your brain is right and what you're forcing it to do is wrong. Put down your X, open your laptop, find a class near you, send them a quick email, and spend the rest of the day surfing the web. It will be your most productive day in months.

Call to action

6 Elo 07 July 2017 09:10AM

Core knowledge: List of common human goals
Part 1: Exploration-Exploitation
Part 1a: The application of the secretary problem to real life dating
Part 1b: adding and removing complexity from models
Part 2: Bargaining Trade-offs to your brain.
Part 2a.1: A strategy against the call of the void.
Part 2a.2: The call of the void
Part 2b.1: Empirical time management
Part 2b.2: Memory and notepads
Part 3: The time that you have
Part 3a: A purpose finding exercise
Part 3b: Schelling points, trajectories and iteration cycles
Part 4: What does that look like in practice?
Part 4a: Lost purposes – Doing what’s easy or what’s important
Part 4b.1: In support of yak shaving
Part 4b.2: Yak shaving 2
Part 4c: Filter on the way in, Filter on the way out…
Part 4d.1: Scientific method
Part 4d.2: Quantified self
Part 5: Skin in the game
Part 6: Call to action

A note about the contents list; you can find the list in the main parts, the a,b,c parts are linked to from the main posts.  If you understand them in the context they are mentioned you can probably skip them, but if you need the explanation, click through.


If you understand exploration and exploitation, you realise that sometimes you need to stop exploring and take advantage of what you know, based on the value of the information that you have. At other times you will find your exploitations are giving you diminishing returns; you are stagnating, and you need to dive into the currents again and take some risks. If you are accurately calibrated, you will know what to do: whether to sharpen the saw, educate yourself more, or cut down the tree right now.

If you are not calibrated yet and you want to start, you might want to empirically assess your time. You might like to ask yourself, in light of the information of your time use all on one page – am I exploring and exploiting enough? Remember that you probably make the most measurable and ongoing returns in the Exploitation phase, while the exploration might seem more fun (finding exciting and new knowledge) and be where you grow – but are you sure that’s what you want to be doing, in regard to the value returned by exploiting?
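To make the exploration-exploitation trade-off concrete, here is a minimal epsilon-greedy sketch (the weekly activities and payoff numbers are made up purely for illustration): most weeks you exploit the activity with the best estimated return, and a small fraction of weeks you explore the alternatives.

```python
import random

# Made-up weekly activities and their true (but unknown to the agent) payoffs.
true_payoff = {"exploit_skill": 5.0, "networking": 3.0, "new_hobby": 1.0}

estimates = {k: 0.0 for k in true_payoff}   # current payoff estimates
counts = {k: 0 for k in true_payoff}        # how often each was tried
EPSILON = 0.1                               # fraction of weeks spent exploring

for week in range(200):
    if random.random() < EPSILON:
        choice = random.choice(list(true_payoff))    # explore: pick at random
    else:
        choice = max(estimates, key=estimates.get)   # exploit: best estimate
    reward = random.gauss(true_payoff[choice], 1.0)  # noisy real-world outcome
    counts[choice] += 1
    # Incremental mean: estimate += (reward - estimate) / n
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)  # estimates drift toward the true payoffs as weeks pass
```

The toy model shows both failure modes: with no exploration you can lock in on a mediocre option, and with too much you never cash in on what you have learned.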

Why were you not already exploring and exploiting in the right ratio?  Brains are tricky things.  You might need to bargain trade-offs to your own brain.  You might be dealing with a System2!understanding of what you want to do and trying to carry out a System1!motivated_action.  The best thing to do is to ask the internal disagreeing parts, “How could I resolve this disagreement in my head?”, “How will I resolve my indecision at this time?“, “How do I go about gathering evidence for better making this decision?”.  This all starts with noticing.  Noticing that disagreement, noticing the chance to resolve the stress in your head…

Sometimes we do things for bad, dumb, silly, irrational, frustrating, self-defeating, or irrelevant reasons.  All you really have is the time you have.  People take actions based on their desires and goals.  That’s fine.  You have 168 hours a week. As long as you are happy with how you spend it.  If you are not content, that’s when you have the choice to do something else.

Look at all the things that you are doing or not doing that do not contribute to a specific goal (a process called the immunity to change).  This fundamentally hits on a universal: namely, what you are doing with your time is everything you are choosing not to do with your time.  There is an equal and opposite opportunity cost to each thing that you do.  And that’s where we come to revealed preferences.

Revealed preferences are different to stated preferences - distinctly different.  I would argue that revealed preferences are much more real - the only real preferences - because they are made up of what actually happens, not just what you say you want to happen.  They are firmly grounded in reality: the reality of what you choose to do with your time (what you chose to do with your time yesterday).
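As a toy illustration of reading off your revealed preferences (the log entries and goals below are hypothetical), you could tally a week's time log and hold it up against your stated goals - the same comparison this post suggests a little further down:

```python
from collections import Counter

# Hypothetical week of time-log entries: (activity, hours).
time_log = [
    ("tv", 14), ("work", 40), ("exercise", 1),
    ("tv", 6), ("reading", 3), ("socialising", 4),
]

stated_goals = {"exercise", "reading", "writing"}

hours = Counter()
for activity, h in time_log:
    hours[activity] += h

# Revealed preferences: activities ranked by the hours actually spent on them.
for activity, h in hours.most_common():
    note = "" if activity in stated_goals else "  <- not a stated goal"
    print(f"{activity}: {h}h{note}")

# Stated goals that received no time at all expose the gap most directly.
for goal in stated_goals - set(hours):
    print(f"{goal}: 0h  <- stated goal, zero time spent")
```

Whatever sits high in the tally but not in your goals, or in your goals but not in the tally, is exactly the gap the rest of this post is about closing.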

On the one hand you can introspect, consider your existing revealed preferences and let that inform your future judgements and future actions.  As a person who has always watched every season of your favourite TV show, you might decide to be the type of person for which TV shows matter more than <exercise|relationships|learning> or any number of things.  Good!  Make that decision with pride!  What you cared about can be what you want to care about in the future, but it also might not be.  That’s why you might want to take stock of what you are doing and align what you are doing with your desired goals.  Change what you reveal with your ongoing actions so that they reflect who you want to be as a person.

Do you have skin in the game?  Who do you want to be as a person?  It’s a hard problem.  You want to figure out your desired goals.  I don’t know how exactly to do that but I have some ideas.  You can look around you at how other people do it, you can consider common human goals.  Without explaining why, “knowing what your goals are” is important, even if it takes a while to work that out.

If you know what your goals are, you can compare them against your empirical record of time use.  Realise that everything you do takes time.  If these are your revealed preferences, what do they reveal that you care about?  But wait, don’t stop there; consider your potential:

Potential To:

  • Discover/Define/Declare what you really care about.
  • Define what results you think you can aim for within what you really care about.
  • Define what actions you can take to yield a trajectory towards those results.
  • Stick to it because it’s what you really want to do.  What you care about.

That’s what’s important, right?  Doing the work you value because it leads towards your goals (the things you care about).  If you are not doing that, then maybe your revealed preferences are showing that you are not a very strategic human.  There is a solution to that.  Keeping yourself on track looks pretty easy when you think about it.

And if you find parts of your brain doing what they want to the detriment of your other goals, you need to reason with them.  As for this whole process of defining what you really care about and then heading towards it: know that it needs doing ASAP, or you are already making bad trade-offs with your time.

Consider this post a call to action and a chance to be the you that you really want to be!  Get to it!  With passion and joy!


Core knowledge: List of common human goals
Part 1: Exploration-Exploitation
Part 1a: The application of the secretary problem to real life dating
Part 1b: adding and removing complexity from models
Part 2: Bargaining Trade-offs to your brain.
Part 2a.1: A strategy against the call of the void.
Part 2a.2: The call of the void
Part 2b.1: Empirical time management
Part 2b.2: Memory and notepads
Part 3: The time that you have
Part 3a: A purpose finding exercise
Part 3b: Schelling points, trajectories and iteration cycles
Part 4: What does that look like in practice?
Part 4a: Lost purposes – Doing what’s easy or what’s important
Part 4b.1: In support of yak shaving
Part 4b.2: Yak shaving 2
Part 4c: Filter on the way in, Filter on the way out…
Part 4d.1: Scientific method
Part 4d.2: Quantified self
Part 5: Skin in the game
Part 6: Call to action

A note about the contents list: the main parts are listed above, and the a, b, c parts are linked to from within the main posts.  If you understand them in the context where they are mentioned you can probably skip them, but if you need the explanation, click through.


Meta: This took about 3 hours to write, and was held up by many distractions in my life.

I am not done.  Not by any means.  I feel like I left some unanswered questions along the way.  Things like:

  • “I don’t know what is good.  Am I somehow bound by a duty to seek out what is good or truly important, and go do that?”
  • “So maybe I know what’s good, but I keep wondering if it is the best thing to do.  How can I be sure?”
  • “I am sure it is the best thing but I don’t seem to be doing it.  What’s up?”
  • “I am doing the things I think are right but other people keep trying to tell me I am not.  What now?”
  • “I have a track record of getting it wrong a lot.  How do I even trust myself this time?”
  • “I am doing the thing but I feel wrong, what should I do about that?”

And many more.  But I see other problems worth writing about first.

[Link] Red Teaming Climate Change Research - Should someone be red-teaming Rationality/EA too?

1 casebash 07 July 2017 02:16AM

Bayesian probability theory as extended logic -- a new result

8 ksvanhorn 06 July 2017 07:14PM

I have a new paper that strengthens the case for strong Bayesianism, a.k.a. One Magisterium Bayes. The paper is entitled "From propositional logic to plausible reasoning: a uniqueness theorem." (The preceding link will be good for a few weeks, after which only the preprint version will be available for free. I couldn't come up with the $2500 that Elsevier makes you pay to make your paper open-access.)

Some background: E. T. Jaynes took the position that (Bayesian) probability theory is an extension of propositional logic to handle degrees of certainty -- and appealed to Cox's Theorem to argue that probability theory is the only viable such extension, "the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind." This position is sometimes called strong Bayesianism. In a nutshell, frequentist statistics is fine for reasoning about frequencies of repeated events, but that's a very narrow class of questions; most of the time when researchers appeal to statistics, they want to know what they can conclude with what degree of certainty, and that is an epistemic question for which Bayesian statistics is the right tool, according to Cox's Theorem.

You can find a "guided tour" of Cox's Theorem here (see "Constructing a logic of plausible inference"). Here's a very brief summary. We write A | X for "the reasonable credibility" (plausibility) of proposition A when X is known to be true. Here X represents whatever information we have available. We are not at this point assuming that A | X is any sort of probability. A system of plausible reasoning is a set of rules for evaluating A | X. Cox proposed a handful of intuitively-appealing, qualitative requirements for any system of plausible reasoning, and showed that these requirements imply that any such system is just probability theory in disguise. That is, there necessarily exists an order-preserving isomorphism between plausibilities and probabilities such that A | X, after mapping from plausibilities to probabilities, respects the laws of probability.

Here is one (simplified and not 100% accurate) version of the assumptions required to obtain Cox's result:

 

  1. A | X is a real number.
  2. (A | X) = (B | X) whenever A and B are logically equivalent; furthermore, (A | X) ≤ (B | X) if B is a tautology (an expression that is logically true, such as (a or not a)).
  3. We can obtain (not A | X) from A | X via some non-increasing function S. That is, (not A | X) = S(A | X).
  4. We can obtain (A and B | X) from (B | X) and (A | B and X) via some continuous function F that is strictly increasing in both arguments: (A and B | X) = F((A | B and X), (B | X)).
  5. The set of triples (x,y,z) such that x = A|X, y = (B | A and X), and z = (C | A and B and X) for some proposition A, proposition B, proposition C, and state of information X, is dense. Loosely speaking, this means that if you give me any (x',y',z') in the appropriate range, I can find an (x,y,z) of the above form that is arbitrarily close to (x',y',z').
The "guided tour" mentioned above gives detailed rationales for all of these requirements.

Not everyone agrees that these assumptions are reasonable. My paper proposes an alternative set of assumptions that are intended to be less disputable, as every one of them is simply a requirement that some property already true of propositional logic continue to be true in our extended logic for plausible reasoning. Here are the alternative requirements:
  1. If X and Y are logically equivalent, and A and B are logically equivalent assuming X, then (A | X) = (B | Y).
  2. We may define a new propositional symbol s without affecting the plausibility of any proposition that does not mention that symbol. Specifically, if s is a propositional symbol not appearing in A, X, or E, then (A | X) = (A | (s ↔ E) and X).
  3. Adding irrelevant background information does not alter plausibilities. Specifically, if Y is a satisfiable propositional formula that uses no propositional symbol occurring in A or X, then (A | X) = (A | Y and X).
  4. The implication ordering is preserved: if A → B is a logical consequence of X, but B → A is not, then A | X < B | X; that is, A is strictly less plausible than B, assuming X.
Note that I do not assume that A | X is a real number. Item 4 above assumes only that there is some partial ordering on plausibility values: in some cases we can say that one plausibility is greater than another.

 

I also explicitly take the state of information X to be a propositional formula: all the background knowledge to which we have access is expressed in the form of logical statements. So, for example, if your background information is that you are tossing a six-sided die, you could express this by letting s1 mean "the die comes up 1," s2 mean "the die comes up 2," and so on, and your background information X would be a logical formula stating that exactly one of s1, ..., s6 is true, that is,

(s1 or s2 or s3 or s4 or s5 or s6) and
not (s1 and s2) and not (s1 and s3) and not (s1 and s4) and
not (s1 and s5) and not (s1 and s6) and not (s2 and s3) and
not (s2 and s4) and not (s2 and s5) and not (s2 and s6) and
not (s3 and s4) and not (s3 and s5) and not (s3 and s6) and
not (s4 and s5) and not (s4 and s6) and not (s5 and s6).
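
(As a quick sanity check of this encoding, here is a short brute-force enumeration in Python; the sketch is mine, not from the paper. It confirms that exactly six of the 2^6 truth assignments satisfy X, one per die face.)

    from itertools import product

    # Enumerate all truth assignments to s1..s6 and keep those satisfying
    # X: at least one symbol true, and no two symbols both true.
    satisfying = [
        values for values in product([False, True], repeat=6)
        if any(values)
        and not any(a and b
                    for i, a in enumerate(values)
                    for b in values[i + 1:])
    ]
    print(len(satisfying))  # 6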

Just like Cox, I then show that there is an order-preserving isomorphism between plausibilities and probabilities that respects the laws of probability.

My result goes further, however, in that it gives actual numeric values for the probabilities. Imagine creating a truth table containing one row for each possible combination of truth values assigned to each atomic proposition appearing in either A or X. Let n be the number of rows in this table for which X evaluates true. Let m be the number of rows in this table for which both A and X evaluate true. If P is the function that maps plausibilities to probabilities, then P(A | X) = m / n.

For example, suppose that a and b are atomic propositions (not decomposable in terms of more primitive propositions), and suppose that we only know that at least one of them is true; what then is the probability that a is true? Start by enumerating all possible combinations of truth values for a and b:
  1. a false, b false: (a or b) is false, a is false.
  2. a false, b true : (a or b) is true,  a is false.
  3. a true,  b false: (a or b) is true,  a is true.
  4. a true,  b true : (a or b) is true,  a is true.
There are 3 cases (2, 3, and 4) in which (a or b) is true, and in 2 of these cases (3 and 4) a is also true. Therefore,

    P(a | a or b) = 2/3.
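
(To make the counting rule concrete, here is a minimal Python sketch that computes P(A | X) = m/n by brute-force enumeration; the function name and interface are my own, not from the paper.)

    from itertools import product

    def classical_probability(A, X, symbols):
        # P(A | X) = m / n, where n counts the truth assignments that
        # satisfy the premise X, and m counts those that satisfy both
        # A and X.  A and X are predicates over an assignment dict.
        n = m = 0
        for values in product([False, True], repeat=len(symbols)):
            v = dict(zip(symbols, values))
            if X(v):
                n += 1
                if A(v):
                    m += 1
        return m / n

    # Reproduces the example above: P(a | a or b) = 2/3.
    print(classical_probability(
        A=lambda v: v["a"],
        X=lambda v: v["a"] or v["b"],
        symbols=["a", "b"],
    ))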

This concords with the classical definition of probability, which Laplace expressed as

The probability of an event is the ratio of the number of cases favorable to it, to the number of possible cases, when there is nothing to make us believe that one case should occur rather than any other, so that these cases are, for us, equally possible.

This definition fell out of favor, in part because of its apparent circularity. My result validates the classical definition and sharpens it. We can now say that a “possible case” is simply a truth assignment satisfying the premise X. We can simply drop the problematic phrase “these cases are, for us, equally possible.” The phrase “there is nothing to make us believe that one case should occur rather than any other” means that we possess no additional information that, if added to X, would expand by differing multiplicities the rows of the truth table for which X evaluates true.

For more details, see the paper linked above.

Steelmanning the Chinese Room Argument

4 cousin_it 06 July 2017 09:37AM

(This post grew out of an old conversation with Wei Dai.)

Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.

Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?

Clearly the only reasonable answer is "no, not in general".

Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?

Again, clearly, the only reasonable answer is "not in general".

Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?

A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!

We need a better theory of happiness and suffering

1 toonalfrink 04 July 2017 08:14PM

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away, as we just mumble something like "QALYs" and make a few guesses about what a five year old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can come quite far with introspection plus assuming that similar brains yield similar qualia. We can even correlate happiness with brain scans.

The former approach, introspection, is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up, if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)

The Unreasonable Effectiveness of Certain Questions

3 ig0r 04 July 2017 03:37AM

Cross-posted on my blog: http://garybasin.com/the-unreasonable-effectiveness-of-certain-questions/

About a year ago I was sitting around trying to grok the concept of Evil — where does it come from and how does it work? After a few hours of spinning in circles, I experienced a sudden shift. My mind conjured up the question: “Is this a thing out in the world or just a projection?” (Map vs Territory). Immediately, a part of my mind replied with “Well, this may not be anything other than a story we tell about the behavior of people we dislike”. Let’s ignore the truth value for today and notice the process. I’m interested in this mechanism of how a simple query — checking if I’m looking at a confusion of map with the territory — was able to instantly reframe a problem in a way that allowed me to effortlessly make a mental leap. What’s fascinating is that you don’t even need someone else’s brain to come up with these questions (although that often helps) — you can try to explain your problem to a rubber duck which creates a conversation with yourself and generates queries, or just go through a list of things to ask yourself when stuck.

 

There are a few different categories of these types of queries and many examples of each. For instance, when thinking about plans we can ask ourselves to perform prehindsight/inner simulator or reference class forecasting/outside view. When introspecting on our own behavior, we can perform sentence completion to check for limiting beliefs, ask questions like “Why aren’t I done yet?” or “What can I do to 10x my results?”. When thinking about problems or situations, we can ask ourselves to invert, reframe into something falsifiable, and taboo your words or perform paradjitsu. Or consider the miracle question: Imagine you wake up and the problem is entirely solved — what do you see, as concretely as possible, such that you know this is true?

So “we know more than we can tell” — somewhere in our head often lies the answer, if only we could get to it. In some sense, parts of our brain are not speaking to each other (do they even share the same ontologies?) except through our language processor, and only then if the sentences are constructed in specific ways. This may make you feel relieved if you think you can rely on your subconscious processing — which may have access to this knowledge — to guide you to effective action, or terrified if you need to use conscious reasoning to think through a chain of consequences.

My thoughts on Evil have continued to evolve since that initial revelation, partially driven by trying new queries on the concept (and partially from finally reading Nietzsche). Once you have a set of tools to throw at problems, the bottleneck to clearer thinking becomes remembering to apply them and actually having the time to do so. This makes me wonder about people that have formed habits to automatically apply a litany of these mental moves whenever approaching a problem — how much of their effectiveness and intelligence can this explain?

Lesswrong Sydney Rationality Dojo on zen koans

0 Elo 04 July 2017 12:10AM

Link: http://bearlamp.com.au/zen-koans/

 

Short post here.

 

Lesswrong Sydney runs a rationality dojo once a month.  We usually cover 1-2 topics for an hour or less each.  Our regular attendance is 10-20 people.

 

This month's topics were:

  1. Captain Awkward advice
  2. Goal factoring (CFAR)
  3. Understanding zen koans
I only recorded the section on zen koans.  Feedback welcome.

[Link] Developmental Psychology in The Age of Ems

0 gworley 03 July 2017 06:53PM
