Open thread, September 4 - September 10, 2017
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
September 2017 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
Simplified Anthropic Doomsday
Here is a simplified version of the Doomsday argument in Anthropic decision theory, to get easier intuitions.
Assume a single agent A exists, an average utilitarian, with utility linear in money. Their species survives with 50% probability; denote this event by S. If the species survives, there will be 100 people total; otherwise the average utilitarian is the only one of its kind. An independent coin lands heads with 50% probability; denote this event by H.
Agent A must price a coupon CS that pays out €1 on S, and a coupon CH that pays out €1 on H. The coupon CS pays out only on S, so its reward exists only in a world with a hundred people; thus if S happens, the coupon CS is worth (€1)/100 to the average utilitarian. Hence its expected worth is (€1)/200 = (€2)/400.
But H is independent of S, so (H,S) and (H,¬S) both have probability 25%. In (H,S), there are a hundred people, so CH is worth (€1)/100. In (H,¬S), there is one person, so CH is worth (€1)/1=€1. Thus the expected value of CH is (€1)/4+(€1)/400 = (€101)/400. This is more than 50 times the value of CS.
Note that C¬S, the coupon that pays out on doom, has an even higher expected value of (€1)/2=(€200)/400.
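A minimal sketch of this arithmetic in Python (the variable names are mine; the probabilities and populations come from the setup above):

p = 0.5  # probability of S, and, independently, of H

# On S there are 100 people, so the average utilitarian A values €1 of
# winnings at (€1)/100; on ¬S, A is alone, so €1 is worth the full €1.
value_CS  = p * (1 / 100)                  # CS pays out only on S
value_CH  = p * p * (1 / 100) + p * p * 1  # CH pays on (H,S) and (H,¬S)
value_CnS = p * 1                          # C¬S pays out only on ¬S

print(value_CS)             # ~0.005  = (€2)/400
print(value_CH)             # ~0.2525 = (€101)/400
print(value_CnS)            # ~0.5    = (€200)/400
print(value_CH / value_CS)  # ~50.5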
So, H and S have identical probability, but A assigns CS and CH different expected utilities, with a higher value to CH, simply because S is correlated with survival and H is independent of it (and A assigns an even higher value to C¬S, which is anti-correlated with survival). This is a phrasing of the Doomsday Argument in ADT.
The Doomsday argument in anthropic decision theory
EDIT: added a simplified version here.
Crossposted at the intelligent agents forum.
In Anthropic Decision Theory (ADT), behaviours that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences (and from certain specific selfish preferences).
However, SSA implies the doomsday argument, and, until now, I hadn't found a good way to express the doomsday argument within ADT.
This post fills that gap, by showing that there is a natural doomsday-like behaviour for average utilitarian agents within ADT.
Is life worth living?
Genuinely curious how folks on this website would answer the following question:
First, imagine the improbable: God exists. Now pretend that he descends from the clouds and visits you one night, saying the following: "I'm going to give you exactly two choices. (1) I'll murder you right now and annihilate your soul, meaning that you'll have no more conscious experiences ever again. [Theologians call this "annihilationism."] Alternatively, (2) I'll allow you to relive your life up to this moment exactly as it unfolded the first time -- that is, all the exact same experiences, life decisions, outcomes, etc. If you choose the second, once you reach the present moment -- this moment right now -- I'll then annihilate your soul."
Which would you choose, if you were forced to pick one or the other?
Intrinsic properties and Eliezer's metaethics
Abstract
I give an account of why some properties seem intrinsic while others seem extrinsic. In light of this account, the property of moral goodness seems intrinsic in one way and extrinsic in another. Most properties do not suffer from this ambiguity. I suggest that this is why many people find Eliezer's metaethics to be confusing.
Section 1: Intuitions of intrinsicness
What makes a particular property seem more or less intrinsic, as opposed to extrinsic?
Consider the following three properties that a physical object X might have:
- The property of having the shape of a regular triangle. (I'll call this property "∆-ness" or "being ∆-shaped", for short.)
- The property of being hard, in the sense of resisting deformation.
- The property of being a key that can open a particular lock L (or L-opening-ness).
To me, intuitively, ∆-ness seems entirely intrinsic, and hardness seems somewhat less intrinsic, but still very intrinsic. However, the property of opening a particular lock seems very extrinsic. (If the notion of "intrinsic" seems meaningless to you, please keep reading. I believe that I ground these intuitions in something meaningful below.)
When I query my intuition on these examples, it elaborates as follows:
(1) If an object X is ∆-shaped, then X is ∆-shaped independently of any consideration of anything else. Object X could manifest its ∆-ness even in perfect isolation, in a universe that contained no other objects. In that sense, being ∆-shaped is intrinsic to X.
(2) If an object X is hard, then that fact does have a whiff of extrinsicness about it. After all, X's being hard is typically apparent only in an interaction between X and some other object Y, such as in a forceful collision after which the parts of X are still in nearly the same arrangement.
Nonetheless, X's hardness still feels to me to be primarily "in" X. Yes, something else has to be brought onto the scene for X's hardness to do anything. That is, X's hardness can be detected only with the help of some "test object" Y (to bounce off of X, for example). Nonetheless, the hardness detected is intrinsic to X. It is not, for example, primarily a fact about the system consisting of X and the test object Y together.
(3) Being an L-opening key (where L is a particular lock), on the other hand, feels very extrinsic to me. A thought experiment that pumps this intuition for me is this: Imagine a molten blob K of metal shifting through a range of key-shapes. The vast majority of such shapes do not open L. Now suppose that, in the course of these metamorphoses, K happens to pass through a shape that does open L. Just for that instant, K takes on the property of L-opening-ness. Nonetheless, and here is the point, an observer without detailed knowledge of L in particular wouldn't notice anything special about that instant.
Contrast this with the other two properties: An observer of three dots moving in space might notice when those three dots happen to fall into the configuration of a regular triangle. And an observer of an object passing through different conditions of hardness might notice when the object has become particularly hard. The observer can use a generic test object Y to check the hardness of X. The observer doesn't need anything in particular to notice that X has become hard.
But all that is just an elaboration of my intuitions. What is really going on here? I think that the answer sheds light on how people understand Eliezer's metaethics.
Section 2: Is goodness intrinsic?
I was led to this line of thinking while trying to understand why Eliezer's metaethics is consistently confusing.
The notion of an L-opening key has been my personal go-to analogy for thinking about how goodness (of a state of affairs) can be objective, as opposed to subjective. The analogy works like this: We are like locks, and states of affairs are like keys. Roughly, a state is good when it engages our moral sensibilities so that, upon reflection, we favor that state. Speaking metaphorically, a state is good just when it has the right shape to "open" us. (Here, "us" means normal human beings as we are in the actual world.) Being of the right shape to open a particular lock is an objective fact about a key. Analogously, being good is an objective fact about a state of affairs.
Objective in what sense? In this important sense, at least: The property of being L-opening picks out a particular point in key-shape space[1]. This space contains a point for every possible key-shape, even if no existing key has that shape. So we can say that a hypothetical key is "of an L-opening shape" even if the key is assumed to exist in a world that has no locks of type L. Analogously, a state can still be called good even if it is in a counterfactual world containing no agents who share our moral sensibilities.
But the discussion in Section 1 made "being L-opening" seem, while objective, very extrinsic, and not primarily about the key K itself. The analogy between "L-opening-ness" and goodness seems to work against Eliezer's purposes. It suggests that goodness is extrinsic, rather than intrinsic. For, one cannot properly call a key "opening" in general. One can only say that a key "opens this or that particular lock". But the analogous claim about goodness sounds like relativism: "There's no objective fact of the matter about whether a state of affairs is good. There's just an objective fact of the matter about whether it is good to you."
This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.
Section 3: Seeing intrinsicness in simulations
I think that we can account for the intuitions of intrinsicness in Section 1 by looking at them from the perspective of simulations. Moreover, this account will explain why some of us (including perhaps Eliezer) judge goodness to be intrinsic.
The main idea is this: In our minds, a property P, among other things, "points to" the test for its presence. In particular, P evokes whatever would be involved in detecting the presence of P. Whether I consider a property P to be intrinsic depends on how I would test for the presence of P — NOT, however, on how I would test for P "in the real world", but rather on how I would test for P in a simulation that I'm observing from the outside.
Here is how this plays out in the cases above.
(1) In the case of being ∆-shaped, consider a simulation (on a computer, or in your mind's eye) consisting of three points connected by straight lines to make a triangle X floating in space. The points move around, and the straight lines stretch and change direction to keep the points connected. The simulation itself just keeps track of where the points and lines are. Nonetheless, when X becomes ∆-shaped, I notice this "directly", from outside the simulation. Nothing else within the simulation needs to react to the ∆-ness. Indeed, nothing else needs to be there at all, aside from the points and lines. The ∆-shape detector is in me, outside the simulation. To make the ∆-ness of an object X manifest, the simulation needs to contain only the object X itself.
In summary: A property will feel extremely intrinsic to X when my detecting the property requires only this: "Simulate just X."
(2) For the case of hardness, imagine a computer simulation that models matter and its motions as they follow from the laws of physics and my exogenous manipulations. The simulation keeps track of only fundamental forces, individual molecules, and their positions and momenta. But I can see on the computer display what the resulting clumps of matter look like. In particular, there is a clump X of matter in the simulation, and I can ask myself whether X is hard.
Now, on the one hand, I am not myself a hardness detector that can just look at X and see its hardness. In that sense, hardness is different from ∆-ness, which I can just look at and see. In this case, I need to build a hardness detector. Moreover, I need to build the detector inside the simulation. I need some other thing Y in the simulation to bounce off of X to see whether X is hard. Then I, outside the simulation, can say, "Yup, the way Y bounced off of X indicates that X is hard." (The simulation itself isn't generating statements like "X is hard", any more than the 3-points-and-lines simulation above was generating statements about whether the configuration was a regular triangle.)
On the other hand, crucially, I can detect hardness with practically anything at all in addition to X in the simulation. I can take practically any old chunk of molecules and bounce it off of X with sufficient force.
A property of an object X still feels intrinsic when detecting the property requires only this: "Simulate just X + practically any other arbitrary thing."
Indeed, perhaps I need only an arbitrarily small "epsilon" chunk of additional stuff inside the simulation. Given such a chunk, I can run the simulation to knock the chunk against X, perhaps from various directions. Then I can assess the results to conclude whether X is hard. The sense of intrinsicness comes, perhaps, from "taking the limit as epsilon goes to 0", seeing the hardness there the whole time, and interpreting this as saying that the hardness is "within" X itself.
In summary: A property will feel very intrinsic to X when its detection requires only this: "Simulate just X + epsilon."
(3) In this light, L-opening keys differ crucially from ∆-shaped things and from hard things.
An L-opening key differs from a ∆-shaped object because I myself do not encode lock L. Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let's suppose) for keys of the right shape to open lock L. So I cannot simulate a key K alone and see its L-opening-ness.
Moreover, I cannot add something merely arbitrary to the simulation to check K for L-opening-ness. I need to build something very precise and complicated inside the simulation: an instance of the lock L. Then I can insert K in the lock and observe whether it opens.
I need, not just K, and not just K + epsilon: I need to simulate K + something complicated in particular.
Section 4: Back to goodness
So how does goodness as a property fit into this story?
There is an important sense in which goodness is more like being ∆-shaped than it is like being L-opening. Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it. Putting it another way, goodness is like L-opening would be if I happened myself to encode lock L. If that were the case, then, as soon as I saw K take on the right shape inside the simulation, that shape could "click" with me outside of the simulation.
That is why goodness seems to have the same ultimate kind of intrinsicness that ∆-ness has and which being L-opening lacks. We don't encode locks, but we do encode morality.
Footnote
[1] Or, rather, a small region in key-shape space, since a lock will accept keys that vary slightly in shape.
Is there a flaw in the simulation argument?
Can anyone tell me what's wrong with the following "refutation" of the simulation argument? (I know this is a bit long -- my apologies! I also posted an earlier draft several months ago and got some excellent feedback. I don't see a flaw, but perhaps I'm missing something!)
Consider the following three scenarios:
Scenario 1: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you guess that you’re in room X—and consequently, you almost certainly win 1 million dollars. After all, since betting odds are a guide to rationality, if everyone in room X and Y were to bet that they’re in room X, just about everyone would win.
Scenario 2: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. You are also told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. The question here is: Does the extra information about the past histories of rooms X and Y change your mind about which room you’re in? It shouldn’t. After all, if everyone currently in rooms X and Y were to bet that they’re in room X, just about everyone would win.
Scenario 3: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then told that you’ll be escorted into room Z through one of two rooms, either X or Y, but you won’t know which one. At any given moment, or timeslice, there will always be exactly 1,000 people in room X and only a single person in room Y. (Thus, as one person enters each room another one exits into room Z.) Once you arrive in room Z at time T2, you are told that between T1 and T2 a total of 1 billion people passed through room Y whereas only 10,000 people in total passed through room X, where all of these people are now in room Z with you. There is no way of communicating with anyone else, so you must use the information given to guess which room, X or Y, you passed through on your way from Location A to room Z. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you now guess that you passed through room Y—and consequently, you almost certainly win 1 million dollars. After all, if everyone in room Z at T2 were to bet that they passed through room Y rather than room X, the large majority would win.
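Before analyzing these scenarios in prose, here is the betting arithmetic behind all three as a minimal Python sketch (the numbers come from the text; the framing is mine):

# Scenarios 1 and 2: what matters is current occupancy. With 1,000
# people in room X and 1 in room Y, a uniformly random current
# occupant who bets "I am in X" wins with probability:
print(1000 / 1001)  # ~0.999, regardless of the rooms' histories

# Scenario 3: what matters is throughput. Of everyone now in room Z,
# 1 billion passed through Y and 10,000 passed through X, so betting
# "I came through Y" wins with probability:
print(1_000_000_000 / 1_000_010_000)  # ~0.99999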
Let’s analyze these scenarios. In the first two, the only relevant information is synchronic information about the current distribution of people when you answer the question, “Which room am I in, X or Y?” (Thus, the historical knowledge offered in Scenario 2 doesn’t change your answer.) In contrast, the only relevant information in the third scenario is diachronic information about which of the two rooms had more people pass through them from T1 to T2. If these claims are correct, then the simulation argument proposed by Nick Bostrom (2003) is flawed. The remainder of this paper will (a) outline this argument, and (b) show how the ideas above falsify the argument’s conclusion.
According to the simulation argument, one or more of the following disjuncts must be true: (i) humanity goes extinct before reaching a stage of technological development that would enable us to run a large number of ancestral simulations; (ii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations but we decide not to; and (iii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do, in fact, run a large number of ancestral simulations. The third disjunct entails that we would almost certainly live in a computer simulation because (a) a sufficiently high-resolution simulation would be sensorily and phenomenologically indistinguishable from the "real" world, and (b) the indifference principle tells us to distribute our probabilities evenly among all the possibilities if we have no special reason to favor one over another. Since the population of sims would far outnumber the population of non-sims in scenario (iii), ex hypothesi, we would almost certainly be sims. This is the simulation hypothesis.
But consider the following possible Posthuman Future: instead of running a huge number of ancestral simulations in parallel, as Bostrom seems to assume we would, future humans run a huge number of simulations sequentially, one after another. This could be done such that at any given moment the total number of extant non-sims far exceeds the total number of extant sims, yet over time the total number of sims who have existed far exceeds the total number of non-sims who also have existed. (This could be accomplished by running simulations at speeds much faster than realtime.) If the question is, “Where am I right now, in a simulation or not?,” then the principle of indifference instructs you to answer, “I am not a sim.” After all, if everyone were to bet at some timeslice Tx that they are not a sim, nearly everyone would win.
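A toy model of this Posthuman Future (the specific counts are illustrative assumptions of mine, not figures from the argument):

# Assume a stable population of 10 billion non-sims, with simulations
# of 1 million sims each run one at a time, and 100,000 runs completed
# over the whole era (each run faster than realtime).
non_sims = 10_000_000_000
sims_now = 1_000_000            # extant sims at any given timeslice
sims_ever = sims_now * 100_000  # sims who have ever existed

# Synchronic bet at some timeslice Tx: "am I a sim right now?"
print(sims_now / (sims_now + non_sims))    # ~0.0001: bet "non-sim"

# Diachronic count over the era: sims who have existed far outnumber
# non-sims, even though they never do at any single moment.
print(sims_ever / (sims_ever + non_sims))  # ~0.91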
Here the only information that matters is synchronic information; diachronic information about how many sims, non-sims, or "observer-moments" there have been has no bearing on one's credence about one's present ontological status (sim or non-sim?)—that is, no more than the historical knowledge about rooms X and Y in Scenario 2 has any bearing on one's response to the question, "Which room am I currently in?" This is problematic for the simulation argument because the Posthuman Future outlined above satisfies the condition of disjunct (iii) yet it doesn't entail that one is almost certainly living in a simulation. Thus, Bostrom's assertion that "at least one of the following propositions is true" is false.
One might wonder: but what if we run a huge number of simulations sequentially and then stop? Wouldn't this be analogous to Scenario 3, in which we would have reason for believing that we passed through room Y rather than room X, i.e., that we were (and thus still are) in a simulation rather than the "real" world? The answer is no, it's not analogous to Scenario 3 because in our case we would have some additional relevant information about our actual history—that is, we would know that we were in "room X," which held more people at every given moment, since we would have control over the ratio of sims to non-sims (always making sure that the latter far outnumbers the former). Even more, if we were to stop all simulations, then the ratio of sims to non-sims would be zero to whatever the human population is at the time, thus making a bet that we are non-sims virtually certain. So far as I can tell, these conclusions follow whether one accepts the self-sampling assumption (SSA), strong self-sampling assumption (SSSA), or the self-indication assumption (SIA) (Bostrom 2002).
In sum, the simulation argument is missing a fourth disjunct: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do run a large number of ancestral simulations, yet the principle of indifference leads us to believe that we are not in a simulation. It will, of course, be up to future generations to decide whether to run a large number of ancestral simulations, and if so whether to run these sequentially or in parallel, given the ontological-epistemic implications of each.
Doing a big survey on work, stress, and productivity. Feedback / anything you're curious about?
In September, I'm doing a big survey on work, stress, and productivity -- going to gather a bunch of possibly germane data, and then see what correlations stand out.
Current version is around 90% complete here --
https://form.jotform.com/71974198606368
Any feedback? Any data you'd be very interested in getting? We're basically guaranteed a sample size large enough for basic statistical significance, and might have respondents in the mid-thousands if things break right. What would you like to know? Thanks.
Request For Collaboration
I want to work on a paper: "The Information Theoretic Conception of Personhood". My philosophy is shit though, so I am interested in a coauthor: someone who has the relevant philosophical knowledge to help the paper withstand the tests of academic rigour.
DM me if you're willing to help.
One sentence thesis of the paper: "I am my information".
Some conclusions: A simulation of me is me.
I have no idea of the length, but I want to flesh out the paper into something that meets the standards of academia.
Open thread, August 28 - September 3, 2017
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
P: 0 <= P <= 1
Part of The Contrarian Sequences.
Reply to infinite certainty and 0 and 1 are not probabilities.
Introduction
In infinite certainty, Eliezer makes the argument that you can't ever be absolutely sure of a proposition. That is an argument I have disagreed with for a long time, but due to akrasia, I never got around to writing it up. I think I have a more coherent counterargument now, and I present it below. Because the post I am replying to and infinite certainty are linked, I address both of them in this post.
This doesn't mean, though, that I have absolute confidence that 2 + 2 = 4. See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place. I could have hallucinated all that previous evidence, or I could be misremembering it. In the annals of neurology there are stranger brain dysfunctions than this.
This is true. That a statement is true does not mean that you have absolute confidence in the veracity of the statement. It is possible that you may have hallucinated everything.
Suppose you say that you're 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once.
I am not so sure of this. If I have X% confidence in a belief, and I am well calibrated, then of any K statements in which I say I have X% confidence, you would expect ((100-X)/100)*K to be wrong and the remainder to be right. It does not follow that if I have X% confidence in a belief, I must be able to make K statements in which I repose equal confidence and be wrong only ((100-X)/100)*K times.
It's something like: X% confidence implies that if you made K such statements, then ((100-X)/100)*K of those statements would be wrong.
A well calibrated agent does not have to be able to make K statements with only ((100-X)/100)*K of them wrong in order to possess X% confidence in a proposition. Calibration only means that, in a hypothetical world in which they did make K statements, ((100-X)/100)*K of those statements would be wrong. To assert that a well calibrated agent must actually be able to make those statements before they can have X% confidence is to treat the hypothetical as a given fact: either an honest mistake, or deliberate malice.
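A quick simulation of this reading of calibration (a sketch in Python under my own framing, not anything from the original posts):

import random

# A well calibrated agent holds X = 99% confidence in each of K
# statements. Calibration says only that *if* K such statements were
# made, about ((100 - X)/100) * K of them would be wrong; it does not
# require the agent to actually be able to produce K statements.
K = 100_000
wrong = sum(random.random() < 0.01 for _ in range(K))
print(wrong / K)  # ~0.01, i.e. ((100 - 99)/100)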
As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now! If you say 99.9999% confidence, you're implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That's around a solid year's worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.
Assert 99.9999999999% confidence, and you're taking it up to a trillion. Now you're going to talk for a hundred human lifetimes, and not be wrong even once?
Assert a confidence of (1 - 1/googolplex) and your ego far exceeds that of mental patients who think they're God.
And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3^^^3.
All based on the same flawed premise, and equally flawed.
I am Infinitely Certain
There is one proposition that I would start with and assign a probability of 1: not 1 - 1/googolplex, not 1 - 1/3^^^^3, not 1 - epsilon (where epsilon is an arbitrarily small number), but a probability of exactly 1.
I exist.
René Descartes presents a very wonderful argument for the veracity of this statement:
Accordingly, seeing that our senses sometimes deceive us, I was willing to suppose that there existed nothing really such as they presented to us; And because some men err in reasoning, and fall into Paralogisms, even on the simplest matters of Geometry, I, convinced that I was as open to error as any other, rejected as false all the reasonings I had hitherto taken for Demonstrations; And finally, when I considered that the very same thoughts (presentations) which we experience when awake may also be experienced when we are asleep, while there is at that time not one of them true, I supposed that all the objects (presentations) that had ever entered into my mind when awake, had in them no more truth than the illusions of my dreams. But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be something; And as I observed that this truth, I think, therefore I am,[c] was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the Sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search
Eliezer quotes Rafal Smigrodski:
"I would say you should be able to assign a less than 1 certainty level to the mathematical concepts which are necessary to derive Bayes' rule itself, and still practically use it. I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. I don't like the idea of not being able to change my mind, ever."
I am alright with accepting as an axiom that I exist. I see no reason why I should be cautious of assigning a probability of 1 to this statement. I am infinitely certain that I exist.
If you accept Descartes' argument, then this is very important. You're accepting that we can be infinitely certain about a proposition—and not just that—that it is sensible to be infinitely certain about a proposition. Usually, only one counterexample is necessary, but there are several other statements to which you may assign a probability of 1.
I believe that I exist.
I believe that I believe that I exist.
I believe that I believe that I believe that I exist.
And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer's (fatuous) requirements for assigning a certain level of confidence to a proposition. If you feel that it is not sensible to assign probability 1 to the first statement, then consider this argument. I assign a probability 1 to the proposition "I exist". This means that the proposition "I exist" exists (pun intended) in my mental map of the world, and is therefore a belief of mine. By deduction, if I assign a probability of 1 to the statement "I exist", then I must assign a probability of 1 to the proposition "I believe that I exist". By induction, I must assign a probability of 1 to all the infinite statements, and all of them are true.
(I assign a probability of 1 to deduction being true).
Generally, using the power of recursion, we can pick any statement to which we assign a probability of 1 and generate infinitely many more statements to which we (by deduction) also assign a probability of 1.
Let X be a proposition to which we assign a probability of 1.
def f(X):
    # X is a proposition to which we assign a probability of 1.
    # Yields the infinite chain "I believe that X", "I believe that
    # I believe that X", ..., each of which, by deduction, we must
    # also assign a probability of 1.
    statement = X
    while True:
        statement = "I believe that " + statement
        print(statement)
        yield statement
f(X), for any proposition X to which we assign a probability of 1, prints an infinite number of statements to which we also assign a probability of 1.
While I'm at it, I can show that there are an uncountably infinite number of such statements with a probability of 1.
Let S be a list of propositions produced by f(X) (for some valid X to which we assigned a probability of 1).
import random

def g(S, k):
    # S is a list of propositions, each assigned a probability of 1.
    # Returns the conjunction of k distinct propositions drawn at
    # random from S. A conjunction of probability-1 propositions also
    # has probability 1, and each distinct subset of S yields a
    # distinct conjunction.
    chosen = random.sample(S, k)
    conjunction = "I believe " + chosen[0]
    for proposition in chosen[1:]:
        conjunction += " and " + proposition
    print(conjunction)
    return conjunction

# The output of g can itself be fed back into f to generate yet
# another infinite chain of probability-1 statements.
Assuming #S = Aleph_null, there are 2^#S possible conjunctions, and each of them can be used to generate an infinite sequence of true propositions. By Cantor's diagonal argument, the number of propositions to which we assign a probability of 1 is uncountable. For each of those propositions, we assign a probability of 0 to its negation. That is, if you accept Descartes' argument, or accept any single proposition as having a probability of 1 (or 0), then you accept uncountably many propositions as having a probability of 1 (or 0). Either we can never be certain of any proposition ever, or we can be certain of uncountably many propositions (you can also use the outlined method to construct K statements with arbitrary accuracy).
Personally, I see no problem with accepting "I exist" (and deduction) as having P of 1.
When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.
Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.
This ignores the fact that you can assign priors of 0 and 1. In fact, it is for this very reason that I argue that 0 and 1 are probabilities. Eliezer is right that we can never update up to 1 (or down to 0) without using priors of 0 or 1, but we can (and I argue we should) sometimes start with priors of 0 and 1.
0 and 1 as priors.
Consider Pascal's Mugging. Pascal's Mugging is a breaker ("breaker" is a name I coined for decision problems which break decision theories). Let us reconceive the problem such that the person doing the mugging is me.
I walk up to Eliezer and tell him that he should pay me $10,000 or I will inflict infinite negative utility on him.
Now, I cannot (as a matter of fundamental physical law) inflict infinite negative utility on Eliezer. However, if Eliezer is rational (maximising his expected utility), then Eliezer must pay me the money. No matter how much money I demand from Eliezer, Eliezer must pay me, because Eliezer does not assign a probability of 0 to me carrying out my threat, and no matter how small the probability is, as long as it's not 0, paying me the ransom I demanded is the choice which maximises expected utility.
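A sketch of that expected-utility comparison in Python (the $10,000 stake and the infinite disutility come from the text; the function is my own illustration):

def expected_utility_of_refusing(p):
    # p is the probability Eliezer assigns to my carrying out the
    # threat. Any nonzero p times infinite negative utility is
    # infinitely negative; only a prior of exactly 0 escapes this.
    if p == 0:
        return 0.0
    return p * float("-inf")

print(expected_utility_of_refusing(1e-300))  # -inf: worse than paying $10,000
print(expected_utility_of_refusing(0.0))     # 0.0: refusing dominates paying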
(If you claim that it is impossible for me to grant you infinite negative utility, that infinite negative utility is incoherent, or that "infinite negative utility" returns a category error, then you are assigning a probability of 0 to the existence of infinite negative utility, and thus implicitly assigning a probability of 0 to me granting you infinite negative utility, because P(A) >= P(A and B), where A is "infinite negative utility exists" and B is "I can grant infinite negative utility".)
I have no problem with decision problems which break decision theories, but when a problem breaks the very formulation of rationality itself, then I'm pissed. There is a trivial solution to Pascal's mugging using classical decision theory (accept the objective definition of probability; once you do so, the probability of me carrying out my threat becomes zero and the problem disappears). Only the insistence on clinging to (unfounded) subjective probability, which forbids 0 and 1 as probabilities, leads to this mess.
If anything, Pascal's mugging should be a definitive proof demonstrating that indeed 0 and 1 are perfectly legitimate priors (if you accept a prior of 0 that I will grant you infinite negative utility, then trivially, you accept a prior of 1 that I do not grant you infinite negative utility). Pascal's mugging only "breaks" Expected utility theory if you forbid priors of 0 and 1—an inane commandment.
I'll expand more on breakers, rationality, etc. in my upcoming paper of several tens of pages.
Conclusion
So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.
The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.
However, in the real world, when you roll a die, it doesn't literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write "37" on one side.
If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.
But I would rather ask whether there's some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn't believe in absolute certainty.
Eliezer presents a shaky basis for rejecting 0 and 1 as probabilities. His model leads to absurd conclusions (a proof by contradiction that 0 and 1 are indeed probabilities), he offers no benefits for rejecting the standard model and replacing it with his (only multiple demerits), and he doesn't formalise an alternative model of probability that is free of absurdities and offers more benefits than the standard model.
"0 and 1 are not probabilities" is a solution in search of a problem.
Epistemic Hygiene
This article may have come across as overly vicious and confrontational; I adopted such an attitude to counteract the halo-effect bias in my perception of the original article.
The Contrarian Sequences
A series of posts wherein I outline my disagreements with Shishou (Eliezer Yudkowsky).
Discovering the Sequences was my second epiphany (the first was when I became an atheist, and my world was turned upside down). Eliezer's charisma, his arrogance, the force of his personality, and his eloquence all combined to make a deadly drug. I was hooked, and seized with a fervour greater than when I first gave my life to Jesus Christ. Eliezer became my new Jesus, and I was drinking the koolaid pretty badly. Some months later (in light of criticism from both friend and foe), I realised I was a cultist, and began trying to sanitise myself. Due to the halo effect, I had accepted everything Eliezer said unconditionally, and never bothered trying to ascertain for myself the veracity of his claims.
I am stronger now than I was then, and aspiring higher still. This is a project in raising my epistemic hygiene. I will only respond to posts whose overarching thesis I believe is wrong, and develop my counterarguments to them. I expect I should write counter-posts for at least 1% of all posts (if I don't reach that target, I'm probably not being critical enough), and at most 5% (if I exceed that, then I've probably just biased myself in the opposite direction, and/or I'm getting a kick out of disagreeing with Eliezer). Posts in the Contrarian Sequences will appear in the chronological order in which I wrote them, not in the order in which the posts they respond to appear in the Sequences, nor in any ontological order.
Table of Contents
Rational Feed
===Highly Recommended Articles:
What Is Rationalist Berkeley's Community Culture by Zvi Moshowitz - The original rationalist community mission was to save the world, not to be nice to each other. Sarah recently suggested the latter is currently the actual goal. Zvi reinterprets this as sounding an alarm. The rationalists should not become just another Berkeley community of bohemians and weirdos.
Cthugha The Living Flame by Exploring Egregores - Rationalists as worshippers of an Eldritch Star God. Valuing knowledge and ideas above all else. Bonobos and transhumanists. Yudkowsky's argument about distributed vs concentrated intellect. The AI box experiment. Nerds as the true extraverts. "What do you think the singularity will actually look like?" The site maps eight other Eldritch Gods to different philosophical dispositions.
Internet Explorers Not Exploiters by Nostalgebraist - Exploit vs explore tradeoffs. Attention spans. How long should you try a math problem before you give up? Exploring new options can be uncomfortable since it might lead nowhere. Addictive games and the internet. Academic research.
Diversity And Team Performance What The Research Says by Eukaryote - Opens with several links about diversity and inclusion in EA. The pros and cons of different types of diversity in terms of group cohesion and information processing. Practical ways to minimize the costs of diversity and magnify the benefits. Lots of references.
The Market Power Story by Noah Smith - Many issues in the American economy are blamed on the increasing market power of a small number of firms. Analysis: Monopolistic competition. Profits. Market Concentration. Output restriction. Three updates. Lots of citations and references to papers.
The Anti Slip Slope by samuelthefifth (Status 451) - An analogy between workplace noise and workplace sexism. How efforts to stamp out 'workplace noise' can get out of control.
Dota 2 by Open Ai - Open AI codes a 1v1 Dota-2 bot that defeated top players. The bot's actions per minute were comparable to many humans. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. The game involves hidden information and the bot's strategies were complicated.
Stop Caring So Much About Technical Problems by Particular Virtue - Links to an article describing what attributes actually get developers jobs (other than technical skill). Caring about making great products is much more desirable than caring about technical problems. Developer interviews are highly random. Experience matters a lot. Enterprise programmers are disliked. Practical advice.
===Scott:
Partial Credit by Scott Alexander - Blotting out the Sun. Short story.
Moral Reflective Equilibrium and the Absurdity Principle by SlateStarScratchpad - A long discussion about the nature of morality. The absurdity heuristic. Reflective equilibrium of moral values. The feedback loop between intuition and logic.
Advertising by SlateStarScratchpad - Nostalgebraist muses about advertising. Scott briefly explains how advertising works on SSC.
Fear And Loathing At Effective Altruism Global 2017 by Scott Alexander - EA Global was well run and impressive. The deep weirdness of EA. The fundamental goodness of effective altruists. The yoke is light and everyone is welcome.
Community History by Scott (r/SSC) - Scott answers: "What happened to Lesswrong? When (and more importantly why) did the spread out to other blogs happen?"
Threado Quia Absurdum by Scott Alexander - Bi-Weekly public open thread. Recommended comments on: how organizations change over time, self-driving car progress, gun laws in the Czech Republic, why comments are closed on some posts here. Scott may be choosing an SSC moderator.
Brief Cautionary Notes On Branded Combination Nootropics by Scott Alexander - Many 'Xbrain' pills contain ineffectively low doses of ingredients. Nootropics, like many drugs, affect people differently; you need to isolate which nootropics work for you. Drug interactions are very poorly understood, even for well studied drugs.
The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible by Scott Alexander - Faster than light communication via negative average preference utilitarianism.
Sparta by SlateStarScratchpad - A historian claims that Sparta's military renown was developed during a period when Sparta's actual military ability was declining. Scott disagrees and cites sources showing that the earliest records all claim Sparta was very powerful.
===Rationalist:
Internal Dialogue About End Of World by Sailor Vulcan - Short Story. Keep living, maybe we will win the lottery.
My Ted/Tedx Talks by Robin Hanson - TED talks by Robin about his books "The Age of Em" and "The Elephant in the Brain". Talks are short, ~12 minutes.
Paranoia Testing by Elo - Experiments to test if you have paranoia. Costs. Notes and some graphs.
Theres Always Subtext by Robin Hanson - Mostly a quote about subtext in film.
Play In Hard Mode by Zvi Moshowitz - "Hard mode is harder. The reason to Play in Hard Mode is because it is the only known way to become stronger, and to defend against Goodhart's Law." Zvi revisits the eleven examples from 'easy mode' and shows how to approach them from a hard mode perspective.
Play In Easy Mode by Zvi Moshowitz - Eleven examples of 'selling out' and taking the path of least resistance. Interestingly in several examples taking the easy path is quite defensible.
Emotional Labour by Elo - "I wanted to save you the effort of thinking about the thing and so I decided not to tell/ask you before it was resolved." VS "I wanted to not have to withhold a thing from you so I told you as soon as it was bothering me so that I didn’t have to lie/cheat/withhold/deceive you even if I thought it was in your best interest"
Paths Forward On Berkeley Culture Discussion by Zvi Moshowitz - Follow up to Zvi's post on the Berkeley rationalist community. A long sketch of the arguments Zvi would make and the article he would write if he had time to respond in depth.
How Social Is Reason by Robin Hanson - Humans alone have a logical reasoning module. 'Logical Fallacies' evolved because they are adaptive for persuasion. Unschooled populations often cannot solve logical problems. Epistemic learned helplessness. Impressive complex arguments are preferred over simple ones.
Cthugha The Living Flame by Exploring Egregores - Rationalists as worshippers of an Eldritch Star God. Valuing knowledge and ideas above all else. Bonobos and transhumanists. Yudkowsky's argument about distributed vs concentrated intellect. The AI box experiment. Nerds as the true extraverts. "What do you think the singularity will actually look like?" The site maps eight other Eldritch Gods to different philosophical dispositions.
Self Fulfilling Prophecy by Entirely Useless - The author analyzes various edge cases about intention and choice. They discuss how to modify their theories and whether they are on the right track.
Decisions As Predictions by Entirely Useless - "Consider the hypothesis that both intention and choice consist basically in beliefs: intention would consist in the belief that one will in fact obtain a certain end, or at least that one will come as close to it as possible. Choice would consist in the belief that one will take, or that one is currently taking, a certain temporally immediate action for the sake of such an end."
Bathtime by The Unit of Caring - Bath time play with a baby. Things are compelling when they have the right balance of surprise and predictability.
Internet Explorers Not Exploiters by Nostalgebraist - Exploit vs explore tradeoffs. Attention spans. How long should you try a math problem before you give up? Exploring new options can be uncomfortable since it might lead nowhere. Addictive games and the internet. Academic research.
Embracing Metamodernism by Gordon (Map and Territory) - "Metamodernism believes in reconstructing things that have been deconstructed with a view toward reestablishing hope and optimism in the midst of a period (the postmodern period) marked by irony, cynicism, and despair."
Why Ethnicity Ideology by Robin Hanson - "The more life decisions a feature influences, the more those who share this feature may plausibly share desired policies, policies that their coalition could advocate. So you might expect political coalitions to be mostly based on individual features that are very useful for predicting individual behavior. But you'd be wrong."
A Village Is Better Than A Group House by Particular Virtue - More private space. Non-shared legal ownership. More people means much more social space and stability.
A Flaw In The Way Smart People Think About Robots And Job Loss by Tom Bartleby - Considering jobs one at a time causes smart people to think no one will lose their job from automation. However small incremental advances reduce the number of needed workers. A history of secretaries. Personal experience of saving time via programming.
More Brain Lies by Aceso Under Glass - "But sometimes it helps to take the gap between is and ought as a sign of how high your standards are, rather than how bad you are at a thing."
Ems In Walkaway by Robin Hanson - A review of the science fiction book 'Walkaway' which features brain emulation. Robin describes what he finds realistic and unrealistic.
Take My Job by Jacob Falkovich - "I want to tell you about the job I’m leaving, why you should think about applying for it, and what it has taught me in the last four years about company culture, diversity, and the makings of a good workplace." Cool jobs have work environments. Keep company identity small if you want real diversity.
The Parliamentary Model As The Correct Ethical Model by Kaj Sotala - An explanation of how the 'parliamentary' model of morality resolves uncertainty around which model of morality is correct. Why the parliamentary model is itself the correct model.
The Problem With Prestige by Robin Hanson - Small fields such as academic disciplines often use prestige to reward people. A mathematical model of how effort is allocated to maximize prestige. Why prestige doesn't scale and what is under-incentivized by prestige.
How I Think About Free Speech Four Categories by Julia Galef - Descriptions of the following categories: No consequences, Individual social consequences, Official social consequences, Legal consequences. Disagreements about categories.
Choices Are Really Bad by Zvi Moshowitz - Exercising willpower is a cost in the short term. Decision fatigue. Reasons why people, including you, WILL choose wrong. People justify their choices. Choices create blame and responsibility. Choices cause paralysis. Choices are communication. Choices require justification. Choices let people defect and destroy cooperation.
What Is Rationalist Berkeley's Community Culture by Zvi Moshowitz - The original rationalist community mission was to save the world, not to be nice to each other. Sarah recently suggested the latter is currently the actual goal. Zvi reinterprets this as sounding an alarm. The rationalists should not become just another Berkeley community of bohemians and weirdos.
Repairing Anxiety Using Internal And External Locus Of Control Models by Elo - Two variable model. Locus of Control: Internal or External. Feeling: Good or bad. The four combinations. Moving diagonally, for example from internal-bad to external-good.
Social Insight When A Lie Is Not A Lie When A by Bound_Up (lesswrong) - If you merely speak the truth as you see it, then you will be misunderstood. Example of saying you are an atheist. Many people are incapable of understanding your real arguments.
Multiverse Wide Cooperation Via Correlated Decision Making by The Foundational Research Institute - "If we care about what happens in civilizations located elsewhere in the multiverse, we can superrationally cooperate with some of their inhabitants. That is, if we take their values into account, this makes it more likely that they do the same for us. In this paper, I attempt to assess the practical implications of this idea"
Questions Are Not Just For Asking by Malcom Ocean (ribbonfarm) - Hazards of asking questions. Hold your Questions. Reveal your questions. Un-ask your questions. Question your questions. Using Questions to Organize Attention. Letting the question ask you; becoming the answer.
Happiness Is Not Coherent Concept by Particular Virtue - A social science concept is 'real' if and only if it represents reality well and you have ruled out alternatives. "If a thing can be measured several different ways, and a causal factor can push one in a direction but not the other, then you start to worry that the thing is not actually one thing, but several things." Why should you care that happiness isn't a single thing.
The Craft Is Not The Community by Sarah Constantin (Otium) - The Berkeley Rationalists are building a true community: Sharehouses, Plans for an unschooling center, etc. However many rationalist companies/projects have failed. Sarah doesn't think it makes sense to tackle 'external facing' projects as a community. Tesla Motors and MIT aren't run as community projects, they are run meritocratically. Lots of analysis on the meaning of community and what makes organizations effective. Personal.
===AI:
More On Dota 2 by Open Ai - Timeline of the DOTA-bot's rapid improvement. Bot Exploits. Physical Infrastructure. What needs to be done to play 5x5.
Openai Bots Were Defeated At Least 50 Times - People could play against the openAI Dota bot. Several people found strategies to beat the bot. One of the human victors explains their strategy.
Dota 2 by Open Ai - Open AI codes a 1v1 Dota-2 bot that defeated top players. The bot's actions per minute were comparable to many humans. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. The game involves hidden information and the bot's strategies were complicated.
===EA:
Things I Have Gotten Wrong by Aceso Under Glass - Mistaken evaluations: Animal Charity Evaluators, Raising for Effective Giving, Charity Science, Tostan.
We Have No Idea If There Are Cost Effective Interventions Into Wild Animal Suffering by Ozy - Many people are confident there are no effective ways to reduce wild animal suffering; Ozy disagrees. Ecosystems are complex, but we aren't completely uncertain. Wild animal suffering is a tiny field staffed by non-experts working part time.
Altruism Is Incomplete by Zvi Moshowitz - "I worry many in EA are looking at life like a game where giving money to charity is how the world scores victory points." Controls in psychology are often motivated by researcher bias. Amazon is the world's most effective charity. Life is about getting things done, often for selfish reasons. Veganism. Zvi doesn't believe the official EA party line.
Let Them Decide by GiveDirectly - Eight media articles about Basic Income, Give Directly, Cash Transfer and Development Aid.
High Time For Drug Policy Reform Part 4/4 by MichaelPlant (EA forum) - "This is the fourth of four posts on DPR. In this part I provide some simplistic but illustrative cost-effectiveness estimates comparing an imaginary campaign for DPR against current interventions for poverty, physical health and mental health; I also consider what EAs should do next."
High Time For Drug Policy Reform Part 3/4 by MichaelPlant (EA forum) - "This is the third of four posts on DPR. In this part I look at what a better approach to drug policy might be and then discuss how neglected and tractable this problem is as a cause area for EAs to work on."
Drug Policy Reform 1 by MichaelPlant (EA forum) - 9300 Words. Six Mechanisms for drug reform to do good: Fighting mental illness. Reducing pain. Improving public health. Reducing crime, violence, corruption and instability (including international scale). Raising revenue for governments. Recreational use. Five major objections and the Author's response.
===Politics and Economics:
Diversity And Team Performance What The Research Says by Eukaryote - Opens with several links about diversity and inclusion in EA. The pros and cons of different types of diversity in terms of group cohesion and information processing. Practical ways to minimize the costs of diversity and magnify the benefits. Lots of references.
Unpopular Ideas About Social Norms by Julia Galef - Twenty-four ideas, many with references explaining the ideas. As an example: "Overall it would be a good thing to have a totally transparent society with no privacy"
Unpopular Ideas About Political And Economic Systems by Julia Galef - Twenty-three ideas, many with references explaining the ideas. As an example "Many people have a moral duty not to vote".
The Market Power Story by Noah Smith - Many issues in the American economy are blamed on the increasing market power of a small number of firms. Analysis: Monopolistic competition. Profits. Market Concentration. Output restriction. Three updates. Lots of citations and references to papers.
The Courage To Stand Up And Do The Wrong Thing by Tom Bartleby - According to Supreme Court Justice Black, applying Brown v. Board of Education to DC schools was an unprincipled but correct decision. Have principles. Don't follow them over a cliff. Acknowledge deviations. Charlottesville. Cloudflare suspends service to the Daily Stormer.
Many Topics by Scott Aaronson - Misc Topics: HTTPS / Kurtz / eclipse / Charlottesville / Blum / P vs. NP
The Muted Signal Hypothesis Of Online Outrage by Kaj Sotala - "People want to feel respected, loved, appreciated, etc. When we interact physically, you can easily experience subtle forms of these feelings... Online, most of these messages are gone: a thousand people might read your message, but if nobody reacts to it, then you don’t get any signal indicating that you were seen... . So if you want to consistently feel anything, you may need to ramp up the intensity of the signals."
Marching Markups by Robin Hanson - "Holding real productivity constant, if firms move up their demand curves to sell less at a higher price, then total output, and measured GDP, get smaller. Their numerical estimates suggest that, correcting for this effect, there has been no decline in US productivity growth since 1965. That’s a pretty big deal."
Greater Gender Parity Economics Suggests Reform Tenure Systems by Marginal Revolution - Biological clocks conflict with the tenure system timeline. Tyler recommends a much more flexible system with a variety of roles. The leaders in the economics profession have been 'punching down' at an infamous anonymous economics forum.
Moral Precepts And Suicide Pacts by Perfecting Dated Visions - "To be trusted to remain peaceful, you must be the kind of person who remains peaceful. And to be a peaceful person and earn the trust placed in you, you must be peaceful even when you have every right to fight. It’s the same with tolerance. If you want to shut up your argumentative opponents and vigorously retaliate when your opponents show signs of intolerance, you will not be trusted to be tolerant to others who are tolerant, even those who basically agree with you." The Constitution, World War 1, Nazis today.
The Anti Slip Slope by samuelthefifth (Status 451) - An analogy between workplace noise and workplace sexism. How efforts to stamp out 'workplace noise' can get out of control.
Seattle Minimum Wage Study Part 3 Tell Me Why Im Wrong Please by Zvi Mowshowitz - Most writers thought the Seattle minimum wage study showed that low wage workers were hurt. Zvi found a fundamental flaw in their analysis. If you correct for rising wages in Seattle, the study seems to show low wage workers weren't hurt, or perhaps benefited.
Theory Vs Data In Statistics by Noah Smith - Theory heavy vs minimal theory models in Economics. Machine learning as the extreme of a "no model required" paradigm.
Thats Amore by sam[]zdat - Epistocracy: democracy with limits on who can vote. Competency and incompetency and pizza. Politics is the strongest identity. Trading power for the image of power. Morlocks and Eloi. Replication crisis. Google guy. The Left's support for the powerful. Nihilism.
Contra Sadedin Varinsky: The Google Memo Is Still Right Again by Artir - Detailed refutation of two criticisms of the Google memo. Lots of long quotations and citations of counter-evidence.
Indian Feminism And The Role Of The Environment: Why The Google Memo Is Still Right by Artir - A very detailed cross-country look at female enrollment in CS and various technology fields. A focus on countries where women are well represented in tech (many in Asia). Lots of discussion.
Brief Thoughts On The Google Memo by Julia Galef - "So as far as I can see, there are only two intellectually honest ways to respond to the memo: 1. Acknowledge gender differences may play some role, but point out other flaws in his argument (my preference) 2. Say “This topic is harmful to people and we shouldn’t discuss it” (a little draconian maybe, but at least intellectually honest)"
The Kolmogorov Option by Scott Aaronson - Kolmogorov was a brilliant mathematician as well as a sensitive and kind man. However he cooperated with the Soviets. An option for living in a society where many falsehoods are 'official truth': Build a bubble of truth and wait for the right time to take down the Orthodoxy. Don't charge headfirst and get killed. There are no 'good heretics' in the eyes of the Inquisition.
===Misc:
Can Atheists be Jewish by Brute Reason - Reasons Miri can be an atheist Jew: Judaism is a religion, but being Jewish isn't necessarily. Belief in God isn't particularly central in most Jewish communities and practices. Because I fucking said so.
Ten Small Life Improvements by Paul Christiano (lesswrong) - Nine tech tips. Christmas lights all year round.
Extremely Easy Problem by protokol2020 - How much water per second do you need to raise the sea level 6 meters in 100 years?
The Premium Mediocre Life Of Maya Millennial by venkat (ribbonfarm) - Venkat - "Yes, ribbonfarm is totally premium mediocre. We are a cut above the new media mediocrityfests that are Vox and Buzzfeed, and we eschew low-class memeing and listicles. But face it: actually enlightened elite blog readers read Tyler Cowen and Slatestarcodex."
Right And Left Folds Primitive Recursion Patterns In Python And Haskell by Eli Bendersky - "In this article I'll present how left and right folds work and how they map to some fundamental recursive patterns. The article starts with Python, which should be (or at least look) familiar to most programmers. It then switches to Haskell for a discussion of more advanced topics like the connection between folding and laziness, as well as monoids."
Meta Contrarian Typography Part 2 by Tom Bartleby - You should use two spaces after your sentences when drafting. Why to use a plaintext editor. Why to write a resume in plaintext. Flexibility is power. Two spaces are much more machine readable.
Stop Caring So Much About Technical Problems by Particular Virtue - Links to an article describing what attributes actually get developers jobs (other than technical skill). Caring about making great products is much more desirable than caring about technical problems. Developer interviews are highly random. Experience matters a lot. Enterprise programmers are disliked. Practical advice.
Trip Sitting Tips And Tricks by AellaGirl - Thirteen practical tips for trip sitting someone on a high dose of acid. Focuses on accepting their experiences, treating them similarly to a small child and keeping yourself safe.
Erisology Of Self And Will Closing Thoughts by Everything Studies - "Here in Part 7 I’ll end with a summary and some thoughts on how to deal with the problems described in the series."
===Podcast:
We Are Not Worried Enough About The Next Pandemic by 80,000 Hours - "We spend the first 20 minutes covering his work as a foundation grant-maker, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. In the second half of the interview we go through what you personally could study and where you could work to tackle one of the worst threats facing humanity."
Identity Terror by Waking Up with Sam Harris - "Douglas Murray. Identity politics, the rise of white nationalism, the events in Charlottesville, guilt by association, the sources of western values, the problem of finding meaning in a secular world."
Seth Stephens Davidowitz On What The Internet Can Tell Us by Rational Speaking - "New research gives us insight into which parts of the USA are more racist, what kinds of strategies reduce racism, whether the internet is making political polarization worse, and the sexual fetishes and insecurities people will only admit to their search engine."
John McWhorter on the Evolution of Language and Words on the Move by EconTalk - "The unplanned ways that English speakers create English, an example of emergent order. Topics discussed include how words get short (but not too short), the demand for vividness in language, and why Shakespeare is so hard to understand."
The Limits Of Persuasion by Waking Up with Sam Harris - "David Pizarro and Tamler Sommers. Free speech on campus, the Scott Adams podcast, the failings of the mainstream media, moral persuasion, moral certainty, the ethics of abortion, Buddhism, the illusion of the self."
Conversation: Comedian Dave Barry by Marginal Revolution - "What makes Florida special, why business writing is so terrible, Eddie Murphy, whether social conservatives can be funny (in public), the weirdness of Peter Pan, how he is so productive, playing guitar with Roger McGuinn, DT, the future of comedy."
Ritual And Spirituality by The Bayesian Conspiracy - Rationalist ritual. Witchcraft. Welcome to Nightvale. Concerts. What makes something ritual? Is rationalist ritual psychologically safe?
Chris Hayes by The Ezra Klein Show - Chris Hayes. Should Trump be removed from office? "Infighting between different factions of the Democratic Party, the signs that congressional Republicans are growing some backbone, and the reports that Trump’s closest aides are conspiring to keep him from doing too much damage to the country."
The Biology Of Good And Evil by Waking Up with Sam Harris - "Robert Sapolsky. His work with baboons, the opposition between reason and emotion, doubt, the evolution of the brain, the civilizing role of the frontal cortex, the illusion of free will, justice and vengeance, brain-machine interface, religion, drugs"
Senator Michael Bennet by The Ezra Klein Show - Senator Michael Bennet. "This is a conversation about why Congress is broken, and what broke it. We discuss money, partisanship, the media, the rules, the leadership, and much more. We talk about what Bennet thinks House of Cards gets right (hint: it’s the sociopathy) and whether President Trump’s antics are creating some hope of institutional renewal."
Could the Maxipok rule have catastrophic consequences? (I argue yes.)
Here I argue that following the Maxipok rule could have truly catastrophic consequences.
Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."
And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.
I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)
Paranoia testing
Original post: http://bearlamp.com.au/paranoia-testing/
Because I live on the internet I sometimes meet some interesting characters. On this particular occasion I found myself in a conversation with someone who suggested, “I don't know if I'm paranoid or not”. The full story had some drug use and what I can only describe as peculiar circumstances (if they were in fact being accurately reported, but I have no reason not to believe the reports).
Intrigued by this puzzle, I was not entirely sure of the best course of action with a potentially mentally unwell person. Should I discourage the stories, or should I indulge them? If I said something that caused my friend to drop into a state of greater paranoia, I would feel liable to try to help them out again. After a short while of talking I figured I would just try to get the person to think and feel around the "edges" of what it means to have paranoia.
Which is how I came up with the idea of running some simple thought-experiment tests that might give you a hint as to whether you have paranoia or not. I didn't know whether this person could trust me, so I was always very careful to suggest ideas and never insist on them. It's not necessary for me to insist that someone seek treatment.
A brain living inside a condition like paranoia has difficulty running an objective test on itself, because any scientific test performed on faulty equipment will come up with results faulty in proportion to the fault. A camera with a dust speck on its lens will always take a photo of the dust speck. Unfortunately paranoia is a more complicated fault to test for.
Any experiment you might try could run into multiple errors at the same time, or multiple errors within the one experiment. Could the paranoia machinery change at different times of day? Under different amounts of stress? If you were to design a normal experiment knowing that your equipment was faulty, you would be aiming for reliability, validity and accuracy.
Repeating the experiment gets you reliability: if you are shooting arrows and they always land in the same place relative to the target, you know your shooting is at least reliable. If you regularly hit one foot to the left of the bullseye, you are reliable without being accurate; you just need to move the target or improve your aim so that you actually hit it.
The other problem you might have is validity. If you try to work out how much an average feather weighs but you only happen to have peacock feathers, you might end up with a different answer than if you had measured a pigeon's feathers. Depending on what you want to know, your experiment needs to come to a valid result: a result that doesn't represent the information you are trying to measure is useless.
Tests
In thinking about paranoia, how can you test whether you are paranoid using your equipment that may or may not be faulty (paranoid)? First I tried to think of something that is a little bit random but has a known randomness to it - for example, a coin flip. You know it will probably land either heads or tails, but it is a test you might have to run a number of times before you can conclude that the coin is biased, or that it is actually fair.
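As an aside, the coin-flip test can be made quantitative. Here is a minimal sketch (my own illustration, not from the original post): an exact binomial test that asks how often a fair coin would look at least as lopsided as the flips you actually observed.

```python
# A minimal sketch (my illustration, not from the post): an exact two-sided
# binomial test for whether observed flips are consistent with a fair coin.
from math import comb

def fair_coin_p_value(heads: int, flips: int) -> float:
    """Chance a fair coin deviates from 50/50 at least as much as observed."""
    expected = flips / 2
    deviation = abs(heads - expected)
    p = sum(
        comb(flips, k) * 0.5 ** flips
        for k in range(flips + 1)
        if abs(k - expected) >= deviation
    )
    return min(p, 1.0)

# 38 heads out of 100 flips: p is roughly 0.02, so a fair coin rarely
# looks this lopsided -- worth suspecting the coin (or the flipper).
print(fair_coin_p_value(38, 100))
```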
The next strategy I considered was the random person strategy: for example, a stranger on a bus or a server at the supermarket. In law there is a reasonable person test that can be applied, something like, "what would a reasonable person have done in the situation, given the details and facts of the experience that the defendant was going through". Curiously, the reasonable hypothetical person was once described as, "The man on the Clapham omnibus is a reasonably educated, intelligent but nondescript person, against whom the defendant's conduct can be measured.", or, "The bald-headed man at the back of the omnibus.". As a map-making strategy, I find it kind of neat that they describe a "reasonable person" as a "man on the bus". So when running your experiment or thought experiment - do you think you could ask a stranger on a bus for the result of a coin flip and have them tell you the true answer?
For a non-paranoid person, the server at the supermarket or a person on the bus has no incentive to lie to you about anything you might ask them. And if you are specifically unsure whether the other humans are all in cahoots, scheming against you, at some point it gets damn expensive to pull off a ruse like "all of the people on every bus you ever catch are paid to stand around and answer your questions incorrectly". For example, if you ask the stranger on a bus what day of the week it is - do you think you could trust their answer?
Costs
If all the humans in your life, or many of them, were part of a grand scheme, the cost of maintaining that scheme would grow very quickly. A room full of people could maybe pull a practical joke on someone for an hour or two "just for fun". But by the time the ruse's time scale stretches out to a day or perhaps several days, there needs to be some sort of value being generated - for simplicity, in terms of "dollars" - to incentivise people to keep playing along. Suppose you want your brain to believe that 5 people you have never met are scheming or pulling a practical joke on you. If those 5 people spend more than a day on that joke, the cost of keeping them at it starts to escalate: a full day might be 12 hours x 5 people x your country's minimum wage (I will use $10 for simplicity) = $600 for a day's practical joke. It's not cheap. I would say it borders on irrational to burn that sort of money on a practical joke.
I really want to believe that I am important enough to scheme about, but I know the incentives here. If you can't afford to pay those 5 confederates to participate in your practical joke, then after about a day they're going to go home and get on with their lives. I do consider myself "valuable", but I don't know that I consider myself valuable enough for even a 10-person scheme lasting 3 days (10 x 12 x 3 x $10 = $3,600). Whatever it's worth to pull off a thinly veiled paranoia plot, epic scheme or hilarious practical joke, there is a monetary cost to the scheme. And by the time you start to include public places - removing any chance of the "scheme" failing - it gets quite expensive.
Maybe your number is higher than mine - maybe you think someone has $10,000 to spend on fooling you for a few days. But there should still be a limit: a point at which a scheme is so complicated that it is unreasonable to believe it is happening, because it would just cost too damn much to pull off.
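The arithmetic generalizes directly. A minimal sketch (mine, using the post's numbers as defaults):

```python
# A minimal sketch (my illustration): the wage bill for keeping a group of
# conspirators playing along, using the post's $10/hour minimum wage.
def scheme_cost(people: int, days: int, hours_per_day: int = 12,
                hourly_wage: float = 10.0) -> float:
    """Total cost of paying everyone to maintain the ruse."""
    return people * days * hours_per_day * hourly_wage

print(scheme_cost(people=5, days=1))    # $600  - the one-day, 5-person joke
print(scheme_cost(people=10, days=3))   # $3600 - the 10-person, 3-day scheme
```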
A note
What if you wrote yourself a note and hid it in a drawer? Do you think that you could come back the following morning and expect that no one had tampered with it?
What if you wrote the note in code? A simple substitution cipher is all it takes to make a slightly higher barrier to tampering. Do you think you could trust the note to not be tampered with now?
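For concreteness, here is a minimal sketch of such a substitution cipher (my own illustration, not from the post): shuffle the alphabet once under a fixed seed, then translate each letter through the shuffled table.

```python
# A minimal sketch (my illustration): a simple substitution cipher for the
# hidden note. The seed acts as the key; anyone without it sees gibberish.
import random
import string

def make_cipher(seed: int = 42):
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    encode = str.maketrans(dict(zip(letters, shuffled)))
    decode = str.maketrans(dict(zip(shuffled, letters)))
    return encode, decode

encode, decode = make_cipher()
note = "no one has touched this note"
secret = note.translate(encode)          # what a tamperer would see
assert secret.translate(decode) == note  # what you recover the next morning
print(secret)
```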
From existing research we know that there is a limit to the number of people who can be involved in a conspiracy before it becomes unwieldy to keep the secret.
Simply put, if you have too many people involved in the conspiracy, it becomes impossible to keep the secret as time goes on.
The research (if you agree with their models, and I am not so sure that I do) seems to suggest a much higher number of participants than I would have guessed. Still, interesting to know.
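A toy version of that kind of model (entirely my assumption, much simpler than the published research): if each conspirator independently leaks with some small probability per year, the chance the secret survives shrinks exponentially in people-years.

```python
# A toy model (my assumption, not the cited research): each conspirator
# independently leaks with probability p per year, so the secret survives
# t years among n people with probability (1 - p) ** (n * t).
def secret_survives(n_people: int, years: float,
                    p_leak_per_year: float = 0.001) -> float:
    return (1 - p_leak_per_year) ** (n_people * years)

print(secret_survives(10, 3))      # ~0.97  - small, short schemes can hold
print(secret_survives(5000, 10))   # ~2e-22 - "everyone is in on it" cannot
```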
Mostly I am curious what test you might use or generate to evaluate whether you are paranoid - knowing, of course, that no test is perfect, and that your faulty hardware could be getting in the way of you actually noticing a scheme afoot, or of telling whether you are paranoid.
Social Insight: Status Exchange: When an Insult Is a Compliment, When a Compliment Is an Insult
Some rough synonyms for status include respect, prestige, and "coolness."
Conceptually, the idea I sometimes think of when I try to describe "status" in its constituent parts is that to have status is to have people feel that they owe you something, to feel like they would if you had just given them a gift. The balance of give-and-take in the encounter is tilted in your favor. Picture a king among subjects, being given gifts and praises. Every brush of his hand is itself a gift, every glance of his eyes a praise to the recipient. The give-and-take in a relationship is never exactly equal, and high status people have it tilted strongly in their favor.
With the people you know, you'll have implicitly established an individual give-and-take relationship with each of them, and if one of you fails to give as much as that balance (or imbalance) requires, you'll be asked to apologize. So, if you have a 60-40 relationship (your way) with someone, and they only give you 50, you'll feel offended and ask for the apology. An apology is essentially a recognition of failure to give somebody as much as is expected, and a promise to give them more from now on/take less from now on. In other words, to shift the actual give-and-take favorably in their direction. This is why asking for an apology is essentially a re-negotiation of power/a request for submission.
(You'll note that you can feel offended for being treated fairly if that's not what your give-and-take has been in the past, just like someone can apologize for acting fairly if more than that is expected of them. This is why apologies can be purposefully sought and extracted with the intention of gaining status/re-negotiating the give-and-take of the relationship. Ammunition will be noted, stored, and prepared in advance and the encounter will be initiated at a strategically opportune time. Ammunition includes anything that can make someone feel sorry, and sometimes you can win without ammunition by continuing to act or feel like you've been wronged even without being able to give a justification for it.)
With people you don't know, general status determines how much they "owe" you and you them. If you are high status, people will feel like they owe you even before you've had any give and take. They will treat you much the same way as they would if you had just done them a great favor and they wanted to show you appreciation and thanks. As I said, having high status = people feel the same way they would feel if they owed you something in real life/you were giving them things in real life.
A compliment can be seen in two ways: as an assessment of a person, or as an attempt to raise their status. If you ever hear a nonsensical compliment, it's probably being used simply to raise the recipient's status, not to use language to describe a quality the person has. The entire message is summed up in this: that words clearly identifiable as definitely-a-compliment are spoken at all, not in what those specific words are.
Over-the-top compliments are one kind of nonsensical compliment, and as said, are (on the surface) attempts to raise someone's status, not comments on their qualities or abilities.
Let's blur out the words and look at how giving-a-compliment affects social status.
How good does a compliment make you feel? Scratch that: how good do compliments make most people feel? Personally, I'd feel better about a compliment the more I thought it said something I valued about myself, multiplied by how capable an assessor of that thing I considered the compliment-er. So if you can consistently guess people's IQ or future success, and tell me you think I've got the stuff, that's an amazing compliment, even if you're the whipping boy of the tribe. It is now my impression that most people's appreciation for a compliment is calculated differently.
Take the effusiveness of the compliment and add a bonus for how much more status than the complimented the compliment-er has (or subtract the difference if they have less status). That's how much people appreciate a given compliment.
Effusiveness can partially be measured without even understanding the language being spoken. The tone and body language will communicate how much deference is being shown the complimented.
You can also find some of the compliment's effusiveness in the actual words. Mostly just look at adjectives and adverbs, though. Are you extremely something-or-other? Cool, bump up the effusiveness a little. Are you tremendous? Ditto. However, whether you're extremely this versus that, or what you're tremendous about exactly, is mostly irrelevant.
As for how status affects things: let's say someone has status a little bit lower than yours, maybe -1 relative to you. So, penalize the power of the compliment accordingly; it'll come out a little bit weaker than the effusiveness alone would suggest. In contrast, if Johnny Depp compliments you, or even nods at you approvingly, this "compliment" will get a substantial bonus for coming from a higher status person.
"Oh my god; he looked at me" comes from this kind of thing. In contrast, "I don't want your apology/money" also does, when the other person is lower status (being mad at someone is like temporarily treating them like they have much lower status than usual).
You can see how this dynamic will play out if you start with "compliments from higher people feel better" and follow its implications.
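Here is a toy formalization of that rule (my own, not the author's): appreciation is the compliment's effusiveness plus the status gap in the complimenter's favor.

```python
# A toy formalization (mine, not the author's) of the rule above:
# appreciation = effusiveness + (complimenter's status - your status).
def appreciation(effusiveness: float, complimenter_status: float,
                 your_status: float) -> float:
    return effusiveness + (complimenter_status - your_status)

# A mild nod from a celebrity outweighs effusive praise from below:
print(appreciation(effusiveness=1, complimenter_status=9, your_status=5))  # 5
print(appreciation(effusiveness=5, complimenter_status=2, your_status=5))  # 2
```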
If getting a compliment from a cool person feels better, then acting happy to receive a compliment signals that you consider them to be of higher status. At least, if you act happy enough. Get too excited about a compliment, and that suggests that you consider the other person to be of higher status than you (but you still have to do it when they really are higher status; it's quite awkward not to act pleased about a compliment from a higher-up and you'll lose points if you don't act in the usual manner).
So let's say someone of equal status gives you a mild compliment, not particularly effusive. If you act all excited, you've signaled that you are lower status than them. If someone of lower status mildly compliments you and you act impressed at all, you've lowered your status even more (ignoring counter-signaling for the moment).
Every compliment is a two-way street. The compliment is a signal of how they perceive your status relative to theirs, and how you receive the compliment signals how you perceive their status relative to yours. Both the compliment-er and the complimented have to choose their move, some choices grabbing for status and others granting it.
You can see how this plays out with low-status people who are desperate to give you over-the-top compliments. Every compliment is also an attempt to receive something. They want to see your reaction. If you respond at all, that validates them to some degree (and potentially lowers your status as a result). If they don't get the reactions they want, they'll exaggerate your merits, practically begging you to be appreciative in some way. You might also notice how awkward it feels to receive such excessive compliments from someone of lower status. (I might recommend taking them to the side, alone, where that feeling will suddenly disappear (mostly) and giving them some tips about not begging so much).
This feeling is instinctive, I hypothesize. It protects your status, and you can see why if you learn this stuff and think it through. But of course, evolution would like to get you not to respond to low-status people without you having to consciously know all this stuff. So it gives you a feeling. A feeling's a lot easier for evolution to give an organism than complicated abstract knowledge is.
This feeling makes you feel unimpressed by low-status compliments and awkward about the whole thing so as to preserve your status via not acting appreciative, lest you signal your acceptance of the compliment-er as higher status than you (or closer in status to you than they are).
On the other hand, a high-status person might find it useful to force you to choose between acting grateful to them and violating social norms. Giving you a compliment can force you into exactly that situation. Maybe you just met and want to impress Party C, so you have to present your nice, civilized face (see "person masks" at http://www.meltingasphalt.com/personhood-a-game-for-two-or-more-players/). Under those circumstances, "violate social norms" is not available to you, so if you receive a compliment, you kind of have to respond, you know? Inside you might be seething, though, as your hated rival forces you to dance through some hoops by offering you ever more effusive compliments.
A compliment, just like a gift, can be an offensive move. It pushes you into a certain role; if you don't act appreciative enough/reciprocate, you might lose points.
A compliment can be a gift, or an attack, or it can be begging, or it can be a test.
So, let's imagine how these principles play out in a variety of situations.
1. High compliments Low effusively. Low is only mildly appreciative, signaling higher status than they have. High is offended. Low doesn't act embarrassed (have you no shame?!) and loses points in High's eyes.
2. Several Lows effusively compliment a High. Then, one Low says something only mildly complimentary about High. Everyone tenses up a little and looks at Low (to censure him) and High (to see his reaction). Low has signaled possible enmity. The compliment is an insult.
3. A High on the enemy side singles out and insults a Low in your group. The Low is elevated by the attention of the High and is considered "a real player" now. The insult is a compliment.
I've seen this one many a time in politics, where people are proud to be personally decried by famous enemies. "Did you hear that Trump said I was dumb? Awesome, am I right?"
In the past, playing by my own rules (compliments are worth most if accurate, informationally dense, and coming from a competent assessor) led me to act, from everyone else's perspective, quite chaotically. To them, it seemed that sometimes I made the appropriate response and maintained status. Occasionally I accidentally executed elaborate plots which ended in my status increasing. But mostly, I consistently broke the rules in a way that lost me status and proved I didn't understand what was really going on. Which I didn't.
Most people seem to play by these rules (and others), so if you want to understand what they're doing, and how your actions look to them, this is one of the building blocks.
What is Rational?
Eliezer defines rationality as such:
Instrumental rationality: systematically achieving your values.
....
Instrumental rationality, on the other hand, is about steering reality— sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”
Extrapolating from the above definition, we can conclude that an act is rational if it causes you to achieve your goals/win. The issue with this definition is that we cannot evaluate the rationality of an act until after observing the consequences of that act; we cannot determine whether an act is rational without first carrying it out. This is not a very useful definition, as one may want to use the rationality of an act as a guide to action.
Another definition of rationality is the one used in AI when talking about rational agents:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
A percept sequence is basically the sequence of all perceptions the agent has had from inception to the moment of action. The above definition is useful, but I don't think it is without issue: what is rational for two different agents A and B, with the exact same goals, in the exact same circumstances, can differ. Suppose A intends to cross a road; A checks both sides of the road, ensures it's clear, and then attempts to cross. However, a meteorite strikes at that exact moment, and A is killed. A is not irrational for attempting to cross the road, given that they did not know of the meteorite (and thus could not have accounted for it). Suppose B has more knowledge than A, and thus knows that there is a substantial delay between meteor strikes in the vicinity; B crosses after A and crosses safely. We cannot reasonably say B is more rational than A.
The above scenario doesn't break our intuitions of what is rational, but what about other scenarios? What about the gambler who knows not of the gambler's fallacy, and believes that because the die hasn't rolled an odd number for the past n turns, it will definitely roll odd this time (after all, the probability of not rolling odd n+1 times in a row is (1/2)^(n+1), which is tiny)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what's rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call "folk rationality" (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn't what I refer to when I say "rational".
How then do we define what is rational to avoid the two issues I highlighted above?
A Decision Problem
The idea for this problem comes from dmytryl.
Omega makes a simulation of you. One of you is presented with an offer: Omega offers them $1000.
1. If the simulation is offered the $1000 and rejects it, the real you gets $10,000.
2. If the simulation is offered the $1000 and accepts it, the real you gets $100.
3. If the real you is offered $1000 and accepts it, the real you gets $1000.
4. If the real you is offered $1000 and rejects it, the real you gets $0.
Immediately after completion of the decision problem, the simulation is terminated.
The probability with which Omega selects the simulation or the real you is not known. (Omega may always select one of them, select both with equal probability, or select with any valid probabilities.)
You find yourself in the game, with the rules explained to you as above. You don't know whether you're the simulation or the real you; do you accept the $1000 or reject it?
The payoffs only need to be of the form, for some k > 1 and X > 1:
1. $k*X
2. $X/k
3. $X
4. $0
If $1000 is irrelevant to you, then substitute any enticing value of X, and replace $X with X utils. There are no diminishing returns on the utility you gain from the reward Omega gives you.
Do you have a strategy for a general form of this problem?
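One way to get a feel for the general problem: compare the real you's expected payoff under the two blanket policies, as a function of an assumed probability q that Omega presents the offer to the simulation. A minimal sketch (mine, not from the post; q is explicitly unknown in the problem):

```python
# A minimal sketch (mine, not from the post): expected payoff to the real
# you under the policies "always accept" vs "always reject", assuming Omega
# offers the simulation with probability q. q is unknown in the problem.
def expected_payoff(policy: str, q: float, X: float = 1000, k: float = 10) -> float:
    if policy == "accept":
        # sim accepts -> real gets X/k; real accepts -> real gets X
        return q * (X / k) + (1 - q) * X
    # sim rejects -> real gets k*X; real rejects -> real gets 0
    return q * (k * X) + (1 - q) * 0

for q in (0.0, 0.1, 0.5, 1.0):
    print(q, expected_payoff("accept", q), expected_payoff("reject", q))

# With these payoffs, rejecting beats accepting whenever
# q > X / (k*X + X - X/k), i.e. q > 1000/10900, roughly 0.092.
```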
Emotional labour
A brief breakdown:
- event: I broke your vase.
- event: I bought you a gift but then left it at home
- event: I want to go to a (privately valuable event) on our (relationship important day)
Options:
- I wanted to save you the effort of thinking about the thing and so I decided not to tell/ask you before it was resolved.
- I wanted to not have to withhold a thing from you, so I told you as soon as it was bothering me, so that I didn't have to lie/cheat/withhold/deceive you, even if I thought withholding was in your best interest.
Discussion:
What is the better plan of action?
1 would be doing emotional labour in the form of:
I thought about the event and how you would feel about it and modelled how I thought you would feel and then acted according to what I thought was best for you feeling better.
2 would be to put an emotional burden on the other person, but it carries with it more honesty, and more expectation that the other person is autonomous and able to make choices for themselves.
I didn't want to withhold anything, but instead burdened you with making the choice about what to do about the matter by telling you about my conundrum.
I used to do 1, but now I do 2. The relationship books tend to suggest 2.
All of the things my brain ever conjured up used to tell me 1.
Brain: Make the martyr choice for people. Don't tell them, suffer in secret.
I made a lot of relationship mistakes doing 1's in various situations, and now I do 2's. I don't know why this works, but it lines up with everything I've ever read - NVC, Daring Greatly, Gottman Institute research. I don't have much to add other than: I wonder whether you do 1's or 2's.
I would prefer people do 2's not 1's around me. (A little more on emotional labour)
Original post: http://bearlamp.com.au/emotional-labour/
Open thread, August 21 - August 27, 2017
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Ten small life improvements
I've accumulated a lot of small applications and items that make my life incrementally better. Most of these ultimately came from someone else's recommendation, so I thought I'd pay it forward by posting ten of my favorite small improvements.
(I've given credit where I remember who introduced the item into my life. Obviously the biggest part of the credit goes to the creator.)
Video speed
Video Speed Controller lets you speed up HTML5 video; it gives a nicer interface than the YouTube speed adjustment and works for most videos displayed in a browser (including e.g. Netflix/Amazon).
(Credit: Stephanie Zolayvar?)
Spectacle
Spectacle on OSX provides keyboard shortcuts to snap windows to any half or third of the screen (or full screen).
Pinned tabs + tab wrangler
I use tab wrangler to automatically close tabs (and save a bookmark) after 10m. I keep gmail and vimflowy pinned so that they don't close. For me, closing tabs after 10m is usually the right behavior.
Aggressive AdBlock
I use AdBlock for anything that grabs attention, even if it isn't an ad. I usually block "related content," "next stories," the whole YouTube sidebar, everything on Medium other than the article, the gmail sidebar, most comment sections, etc. Similarly, I use Kill News Feed to block my Facebook feed.
Avoiding email inbox
I often need to write or look up emails during the day, which would sometimes lead me to read/respond to new emails and switch contexts. I've mostly fixed the problem by leaving gmail open to my list of starred emails rather than my inbox, ad-blocking the "Inbox (X)" notification, and pinning gmail so that I can't see the "Inbox (X)" title.
Christmas lights
I prefer the soft light from christmas lights to white overhead lights or even softer lamps. My favorites are multicolored lights, though soft white lights also seem OK.
(Credit: Ben Hoffman)
Karabiner
Karabiner remaps keys in a very flexible way. (Unfortunately, it only works on OSX pre-Sierra. Would be very interested if there is any similarly flexible software that works on newer versions.)
Some changes have helped me a lot:
- While holding s: hjkl move the cursor. (Turn on "Simple Vi Mode v2") I find this way more convenient than the arrow keys.
- While holding d: hjkl move the mouse. (Turn on "Mouse Keys Mode v2") I find this slightly more convenient than a mouse most of the time, but the big win is that I can use my computer when a bluetooth mouse disconnects.
- Other stuff while holding s: (add this gist to your private.xml):
- While holding s: u/o move to the previous and next word, n is backspace.
- While holding s+f: key repeat is 10x faster.
- While holding s+a: hold shift (so cursor selects whatever it moves over, e.g. I can quickly select last ten words by holding a+s+f and then holding u for 1 second).
I'd definitely pay > a minute a day for these changes.
Keyboard
I find split and tented keyboards much nicer than usual keyboards. I use a Kinesis Freestyle 2 with this to prop it up. I put my touchpad on a raised platform between the keyboard halves. Alternatively, you might prefer the Wirecutter's recommendations.
(Credit: Emerald Yang)
Vimflowy
Vimflowy is similar to Workflowy, with a few changes: it lets you "clone" bullets so they appear in multiple places in your document, has marks that you can jump to easily, and has much more flexible motions / macros / etc. I find all of these very helpful. The biggest downside for most people is probably modal editing (keystrokes issue commands rather than inserting text).
The biggest value add for me is the time tracking plugin. I use vimflowy essentially constantly, so this gives me extremely fine-grained time tracking for free.
Running locally (download from github) lets you use vimflowy offline, and using the SQLite backend scales to very large documents (larger than workflowy can handle).
(Credit: Jeff Wu and Zachary Vance.)
ClipMenu [hard to get?]
Keeps a buffer of the last 20 things you've copied, so that you can paste any one of them. Source for OSX is on github here; I'm not sure if it can be easily compiled/installed (binaries used to be available). Would be curious if anyone knows a good alternative or has tried to compile it.
(Credit: Jeff Wu.)
Like-Minded Forums
What awesome forums around the internet can you recommend?
LW, OB, EA, and SSC are all in the current rationalist cluster. What forums do you know from outside the cluster that would appeal to those within it?
Tabooing Science + an xkcd comic about the eclipse - "Honestly, it's not that scientific."
It occurred to me while reading xkcd a moment ago that there exists a strain of suspicion toward anything 'science' among a certain crowd in this country (fundamentalists, creationists, etc.), and a kind of mystique among another crowd (of the "it was in a study so it must be true" variety). Given that doing science is more or less systematizing critical thinking and checking things so as to be as certain as you can about an idea, it might be helpful to pay attention to, and perhaps 'play taboo' with, the word whenever that something-is-a-special-kind-of-a-thing-because-it-is-a-science-thing attitude comes up.
A good example being the xkcd comic I got it from:
[image: the xkcd eclipse comic quoted in the title - "Honestly, it's not that scientific."]
We need to think more about Terminal Values
I just sent an email to Eliezer but it is also applicable to everyone on this website. Here it is:
Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth - Pt. 2
'"If people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold."' People misinterpret this as indicating that I take a policy stance in favor of regulation. It indicates no such thing. It is meant purely as guess about empirical consequences." - EY (http://lesswrong.com/lw/h2/blue_or_green_on_regulation/)
Try this a few times, and you'll stop thinking you can make "guess[es] about empirical consequences" (or say anything about empirical consequences (or say anything about empirical anything)) and have people hear anything except you showing off your policy stances. Showing off whom you associate with and what virtues you possess.
Once your eyes open to how hard it is to convince people that your sentences about how markets function are meant to describe how markets function, you give up and stop trying.
Well, if you have the time to convince people you're actually trying to say something about how the world works and not just proudly waving a verbal banner in favor of the home team, and you have the ability to make interesting a subject so much less accessible and exciting than politics (we've all seen it, haven't we, how little they care once they realize that we really are trying to describe market functions?), and the time to actually do it properly, all without alienating important people in the process...
Then, yeah, maybe. And those sets of circumstances absolutely happen and I'm glad that we do teach each other things.
But, I really, really understand why most politicians can't do anything remotely like this, and thus, say the words "If people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold" only if they want people to hear "I am taking a policy stance in favor of regulation."
If you do want people to hear that, then this is a very effective way of communicating that. If you know that saying this will lead people to holding that belief about your stance, then saying it is honest, even if you don't believe that markets work that way. You're not saying/they're not hearing anything about markets, so none of your beliefs about markets can be misrepresented by saying these words. You believe something, you want to honestly communicate that belief, so you use symbols. We think of words as our symbols, but whole sentences can be symbols, too. A sentence has no "true" meaning any more than a word does. And if we define that sentence according to the common usage...
Think through some other possibilities. Maybe you don't believe there's a market for stupidity, but you do take a stance in favor of regulation. If you say you don't believe there's a market for stupidity, you'll knowingly deceive a large group of people (the social thinkers) whom you know will hear "I oppose regulation" when you say there's no market for stupidity. In contrast, if you say you do believe there's a market for stupidity, you'll communicate your endorsement of regulation to that group, but will be interpreted as saying something untrue by another group of people who think that you're saying something about market functions and only about market functions and that you've said nothing about your stance on regulation, so wouldn't we be jumping to conclusions to assume anything either way (nerds/empirical thinkers)?
Most people aren't empirical thinkers (and those that are often aren't when it comes to politics), so as a matter of practicality, politics is spoken in the language of social reasoning. Knowing this, you're shooting yourself in the foot if you listen to these people's words as a way of modeling their beliefs. You have to listen to their sentences, and understand their definition according to the common usage. "Blah blah market for stupidity blah blah" is defined as "I endorse regulation" according to the common usage (no matter what you substitute in place of the blah blah's).
There's a whole music to this social language, and if you start to catch the rhythm, you may find that the absolute garbage that is presidential debates (I used to marvel that the apparently top candidates for president never had anything new or interesting to say; surely such people should be fountains of insight and formidable competence) resolves itself into something interesting after all.
Ah, yes, now I see. First he waves the flag for group X, then he waves the flag for group N. Many people are members of both tribes and feel really connected, while those people who belong to only one are quite tolerant of this particular outgroup. And the members of X who actively oppose group N are disproportionately single-issue voters, so this comes out as an effective appeal as measured by vote-grabbing...
It's also interesting to hear new ways of saying "I'm with them" over and over again about the commonest groups to appeal to ("God bless America") or compete over (How can they say "I support our troops" more strongly than their opponent? It's a real exercise in creativity). And, of course, amid the majority of people, this is the language of power, and you may find it useful to know how to move within this world, to act upon it, to make yourself respected, and to move people.
Most people (citation needed) talk and think like this all the time. They are social thinkers, not empirical thinkers. Everything they "know" about the minimum wage is how to use it as a vehicle for talking about social things: their own status, their group status, and their virtues. Except they don't do so consciously, but automatically. Humans are social creatures, and thinking socially - not in terms of abstract propositions about the function of the world - is their first and natural instinct. Always remember: we're the weird ones. Possessors of an inhuman power with a price.
Find some non-nerdy types you may not usually associate much with. Go clubbing and ask all the people wearing something you find appalling their opinions on the minimum wage. After their initial summary of "I'm with them," whichever "them" they might happen to be with, inquire a bit more deeply. Go a little Socratic on them and ask about their reasoning, and ask them to confirm your guesses about which observations they would take as evidence for and against their position. You might want to personally note all the times they (it seems to you) change the subject, contradict themselves, or use any of a thousand flavors of fallacy.
Now, review the conversation (which you carefully recorded, of course), but this time, ask yourself if there's any way to interpret each of their statements (which sound like propositions about the function of markets and the nature of human rights) instead as signals about tribal loyalty, personal status, and personal virtue. Write down what these statements might say about the tribe and the person. Incredibly, you may find that what once was a cacophony of contradiction has resolved itself. In another key, it was all perfectly mainstream, run-of-the-mill, straightforward, vanilla, dry, unremarkable clarity. Seen this way, the mystery dissolves into something so ordinary as to be face-palmingly obvious in retrospect.
They're just saying how great they are and how great their people are and how awesome they all are and what good people they are. Charming.
My last discussion of this found many respondents thinking that it was mean to think such lowly things of other people. It is curious to me that they seem to take it for granted that it is lowly. Humans are naturally political; why call our native tongue lowly? There are a thousand stories about the plucky hero who cares about the work, and it's all about the work, and they have to jump over the hurdles that are the regular humans who are into office politics and are so shameful as to not care about the work for its own sake (who do they think they are, not being fascinated with blood spatter analysis or awesome architecture?). Why fetishize this work-over-politics bit? Oh, sure it's responsible for everything lasting that humanity has ever created and all, but...well, as hobbies go, politics is humanity's first and natural choice. People enjoy it; they optimize for it. I'm nerdy and happen to fancy the romance of abstract propositions about reality, but I don't begrudge those who don't feel the same way.
Perhaps more importantly, I need to learn their language, the language of social power, if I am to get them to do what is needful regarding reality despite their native disinterest. Tim Urban's the best speaker our community has, probably, and it still takes all of 2 minutes before it's completely obvious he's a nerd and proud of it. Julia Galef's up there, too, but with a similar weakness when it comes to getting non-nerds to get on board with important political movements. Robin Hanson's alright...
But we need a proper Draco. As galling as it is, there is very much a place for a Trumpesque speaker who can get a certain kind of person participating in important things that they...really aren't naturally inclined to care about. We need Steve Harvey and Barack Obama and MLK and someone who can talk to anybody. Or at least who can talk to somebody other than the nerds who are already half-way on our side (and will be more and more as consensus consolidates around the correct answer).
A good map of reality - knowledge - is power to bind the universe to our service. But status, respect, prestige: that is the power to move humans. It is, of course, contained within knowledge itself. But the time has come to train the versatile laser focus of knowledge upon social Homo sapiens and learn how we're really going to get them, all of them (not just the nerds), to save the world.
Open thread, August 14 - August 20, 2017
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Repairing Anxiety using Internal and External locus of control models
Original post: http://bearlamp.com.au/repairing-anxiety-using-internal-and-external-locus-of-control-models/
I want you to examine your map. It's the representation you carry around in your head that says, "I am in control of most things" or it says, "most things are out of my control". Or for very specific things it says, "I am in control" or "I am not in control".
Factually - in the territory - things in life are more or less in your control, or shaped by events in the external world beyond your control. Independently of your locus of control, you can note the way you feel about a problem alongside whether its locus of control is internal or external, as a separation of cause and effect (of concrete events and their surrounding judgements, evaluations, conclusions or extrapolations).
This might already seem obvious but let's make some examples to play with. Here are some times that you might feel in control or out of control.
- Internal-Good: I am the lead on the project so everything is going to get done my way (the right way).
- Internal-Bad: My house is a mess and it's my fault. It won't get tidy unless I do something about it and it's bothering me.
- External-Good: I outsourced my tax to an accountant. Now I have less to worry about.
- External-Bad: I got the flu, how does this keep happening to me?!?
In these examples it's clear what's going on (the concrete), it's clear whether the locus of control is internal or external, and the feeling is mentioned. Now let's play with them. Can we shift the concrete experiences to a different locus of control? From the original first example, we can shift the event around the four quadrants:
- Internal-good: I am the lead on the project so everything is going to get done my way (the right way).
- Internal-bad: I am the lead on the project. It's all on me. What if I make a mistake, it will be all my fault. I don't know if I can handle it.
- External-good: I am the lead on the project, I have so much responsibility at work, they must know I can handle it.
- External-bad: I am the lead on the project. I am under so much pressure at work. It's stressing me out!
But that's not the only example that can shift.
- Internal-good: My house is a mess, it's my fault but I don't care. I am having way too much fun to bother with it. I will deal with it when it bothers me enough or when I find time
- Internal-bad: My house is a mess and it's my fault. It won't get tidy unless I do something about it and it's bothering me.
- External-good: My house is a mess and it's my fault, lucky for me no one cares! I can get away with it because it doesn't matter.
- External-bad: My house is a mess and it's my fault, what if anyone sees, I can't have friends over, what would they think of me? I have too much to do, life never gives me enough time to hold myself together
As we try each example...
- Internal-good: I outsourced my tax to an accountant. I am a powerful agent that can decide to not do tasks if I don't want to. I know my strengths and this is not one of them.
- Internal-bad: I outsourced my tax to an accountant. I am incompetent about finance, it's my fault I have to pay someone to do this for me.
- External-good: I outsourced my tax to an accountant. Now I have less to worry about.
- External-bad: I outsourced my tax to an accountant. My tax was too hard, I had no choice but to pay someone to fix it for me
- Internal-good: I got the flu. I had to take care of my sick friends, I knew there was a risk but you gotta live.
- Internal-Bad: I got the flu. I hate public transport, so many sick people I always get sick. I can't help it.
- External-good: I got the flu. these things happen. Better take it easy or I will be sick for longer.
- External-bad: I got the flu, how does this keep happening to me?!?
Curious, isn't it? Any concrete experience can be shifted to a good or bad feeling, and to an internal or external locus of control. As a person who has an ego that barely fits in the room, I am very practised at living in that first row of the square. That means I am looking for a method that either obtains power/control for myself or bestows responsibility on the external locus of control. If you carry anxieties around with you, chances are they have some perspective that can be changed by hanging around in other parts of the square. Obviously this is not yet a method for getting you into the first row of the square, but moving in that direction is the strategy below.
How?
The only method I want to mention in this article is to switch locus of control. If you are in Internal-bad, try switching to External and see what comes up; that is, move diagonally in the table. Going from:
- Internal-bad: My house is a mess and it's my fault. It won't get tidy unless I do something about it and it's bothering me.
to:
- External-good: My house is a mess and it's my fault; lucky for me, no one else cares! I can get away with it because it doesn't matter to anyone else and no one can see.
while avoiding:
- External-bad: My house is a mess and it's my fault. What if anyone sees? I can't have friends over; what would they think of me? I have too much to do; life never gives me enough time to hold myself together.
How exactly? Try:
1. Write down the problem in concrete form, or otherwise get clear on what the problem is. You can talk to a friend or just think about it, so long as you lock down what the problem is. The benefit of writing it down is that, once written, it's not going to squirm around in your head and be the elusive, spiralling, colour-changing problem monster.
2. Decide which locus of control you are currently in. (Or just pick one; it can't be both "my fault" and "not my problem" at the same time, so start somewhere and switch.)
3. Try to think of ways in which the problem is in the other locus of control ("not my problem", or "I can take charge of this problem").
4. If step 3 seems impossible, ask other people for help. They will be able to see your situation differently and suggest ways of looking at it that are in the other locus of control.
It would be very hard for a problem to be both entirely your fault (caused by you) and the world hating you (caused by external forces) at the same time. It's also remarkably hard to be in control of a problem and have it not be your problem. A problem pretty much has to be either your problem or not your problem; it would be hard for it to be both.
Meta: changing your internal models of locus of control is itself an internal locus of control method, unless you propose "this is the way I am, I can't change it", which would be an external locus of control explanation. I don't yet know how to build on this, so that will have to come in another post; having this out there will make it easier to build on later.
Meta: this took around 2.5 hours to put together.
Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth
The point has already been made that if you wish to truly be honest, it is not enough to speak the truth.
I generally don't tell people I'm an atheist (I describe my beliefs without using any common labels). Why? Because I know that if I say the words "I am an atheist," they will hear the following concepts:
- I positively believe there is no God
- I cannot be persuaded by evidence any more than most believers can be, i.e., I have a kind of faith in my atheism
- I wish to distance myself from members of religious tribes
As I said, the point has already been made: if I know that they will hear those false ideas when I say a certain phrase, how can I claim to be honest in speaking it, knowing that I will cause them to have false beliefs? Hence the saying: if you wish to protect yourself, speak the truth; if you wish to be honest, speak so that truth will be heard.
Many a politician convincingly lies with truths by saying things that they know will be interpreted in a certain positive (and false) way, but which they can always defend as having been intended to convey some other meaning.
---
The New
There is a counterpart to this insight, which has come to me as I've begun to pay more attention to the flow of implicit social communication: if speaking the truth in a way you know will deceive is a lie, then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.
I've relaxed my standards of truth-telling as I've come to understand this. Statements like "You're the best" and "You can do this" are now open to me, no qualifiers needed. If I know that everyone in a group has to say "I have XYZ qualification," but I also know that no one actually believes anybody when they say it, I can comfortably recite those words, knowing that I'm not actually leading anybody to believe false things, and thus am not being dishonest.
Politicians use this method, too, and I think I'm more or less okay with it. You see, we have a certain problem that arises from intellectual inequality. There are certain truths which literally cannot be spoken to some people. If someone has an IQ of 85, you literally cannot tell them the truth about a great number of things (or they cannot receive it). And there are a great many more people who have the raw potential to understand certain difficult truths, but to whom you cannot reasonably tell these truths (they'd have to want to learn, put in effort, receive extensive teaching, etc.).
What if some of these truths are pertinent to policy? What do you do? Say a bunch of phrases that are "true" in the way you interpret them, but which will only be heard as...
As what? What do people hear when you explain concepts they cannot understand? If I had to guess, very often they interpret this as an attack on their social standing, as an attempt by the speaker to establish themselves as a figure of superior ability, to whom they should defer. You sound uppity, cold, out-of-touch, maybe nerdy or socially inept.
So, then... if you're socially capable, you don't say those things. You give up. You can't speak the truth; you literally cannot make a great many people hear the real reasons why policy Z is a good idea. They have limited the vocabulary of the dialogue by their ability and willingness to engage.
Your remaining moves are to limit yourself to their vocabulary, or to say something outside of it, all the nuance of which will evaporate en route to their ears and be heard as a monochromatic "I think I'm better than you."
The details of this dynamic at play go on and on, but for now, I'll just say that this is the kind of thing Scott Adams is referring to when he says that what Trump has said is "emotionally true" even if it "doesn't pass the fact checks" (see dialogue with Sam Harris).
In a world of inequality, you pick your poison. Communicate what truths can be received by your audience, or...be a nerd, and stay out of elections.
Prediction should be a sport
So, I've been thinking about prediction markets and why they aren't really catching on as much as I think they should.
My suspicion is that (besides Robin Hanson's signaling explanation, and the amount of work it takes to reach the large numbers of predictors where the quality of results becomes interesting) the basic problem with prediction markets is that they look and feel like gambling. Or, at best, like the stock market, which for the vast majority of people is no less distasteful.
Only a small minority of people are neither disgusted by nor terrified of gambling. Prediction markets right now are restricted to this small minority.
Poker used to have the same problem.
But over the last few decades, Poker players have established that Poker is (also) a sport. They kept repeating that winning isn't purely a matter of luck, they acquired the various trappings of tournaments and leagues, and they developed a culture of admiration for the most skillful players, one that pays in prestige rather than only money and makes it customary for everyone involved to show their names and faces. For Poker, this has worked really well: there are many more Poker players now, more really smart people are deciding to get into Poker, and I assume the art of the game has probably improved as well.
So we should consider re-framing prediction the same way.
The calibration game already does this to a degree, but a sport needs competition, so results need to be comparable, so everyone needs to make predictions on the same events. You'd need something like standard cards of events that players place their predictions on.
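To make "comparable results" concrete, here is a minimal sketch of how scoring on such a shared card could work, assuming Brier scoring (squared error between the stated probability and the 0/1 outcome), which is one standard choice; the function names and data layout here are my own invention, not an existing system:

```python
# Minimal sketch of tournament scoring under the assumption of Brier scores:
# every player predicts a probability for the same list of yes/no events,
# and the lower the mean Brier score, the better. Names are hypothetical.

def brier_score(probability: float, outcome: bool) -> float:
    """Squared error between the stated probability and the 0/1 outcome."""
    return (probability - (1.0 if outcome else 0.0)) ** 2

def leaderboard(predictions: dict[str, dict[str, float]],
                outcomes: dict[str, bool]) -> list[tuple[str, float]]:
    """Rank players by mean Brier score over the events resolved so far.

    Assumes every player has predicted at least one resolved event.
    """
    scores = []
    for player, probs in predictions.items():
        resolved = [brier_score(probs[event], happened)
                    for event, happened in outcomes.items() if event in probs]
        scores.append((player, sum(resolved) / len(resolved)))
    return sorted(scores, key=lambda pair: pair[1])  # lower is better

# Example: two players, two events resolved so far.
predictions = {
    "alice": {"event_a": 0.9, "event_b": 0.2},
    "bob":   {"event_a": 0.6, "event_b": 0.5},
}
outcomes = {"event_a": True, "event_b": False}
print(leaderboard(predictions, outcomes))
# [('alice', 0.025), ('bob', 0.205)]
```

One nice property for a sport: always answering 0.5 scores exactly 0.25 per event, so the leaderboard rewards sticking your neck out only when you turn out to be right.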
Here's a fantasy of what it could look like.
- Late in the year, a prediction tournament starts with the publication of a list of events in the coming year. Everybody is invited to enter the tournament (and maybe pay a small participation fee) by the end of the year, for a chance to be among the best predictors and win fame and prizes.
- Everyone who enters plays the calibration game on the same list of events. All predictions are made public as soon as the submission period is over and the new year begins. Lots of discussion of each event's distribution of predictions.
- Over the course of the year, events on the list happen or fail to happen. This allows for continually updated scores, a leaderboard and lots of blogging/journalistic opportunities.
- Near the end of the year, as the leaderboard turns into a shortlist of potential winners, tension mounts. Conveniently, this is also when the next tournament starts.
- At New Year's, the winner is crowned (and I'm open to having that happen literally) at a big celebration, which is also the end of the submission period for the next tournament and the revelation of what everyone is predicting for the next round. This is a big event that happens to be on a holiday, when more people have time for big events.