If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "


I've finally figured out why Eliezer was popular. He isn't the best writer, or the smartest writer, or the best writer for smart people, but he's the best writer for people who identify with being smart. This opportunity still seems open today, despite tons of rational fiction being written, because its authors are more focused on showing how smart they are, instead of playing on the self-identification of readers as Eliezer did.

It feels like you could do the same trick for people who identify with being kind, or brave, or loving, or individualist, or belo... (read more)

5Viliam7y
The Sequences also contain criticism of smart people who are smart in the wrong way ("clever arguers"), and even of smart people in general ("why our kind can't cooperate"). Making smartness sound like the most important thing gives you Mensa or RationalWiki; you also need harsh lessons on how to do it right to create Less Wrong. Maybe so harsh that most people who identify with being X will actually turn against you, because you assigned low status to their way of doing X. And by the way, effective altruism is already using this strategy in the field of... well, altruism.
2Kaj_Sotala7y
Can you give specific examples of him doing that?
2Viliam7y
Not the OP, but I suspect the parts that rub many people the wrong way are the following:

* The quantum physics sequence; specifically that Eliezer claims to know the right answer to which interpretation of quantum physics is scientific, even though professional quantum physicists can't all agree on one. ("I am so smart I know science better than the best scientists in the world, despite being a high-school dropout.")
* Dismissing religion. ("I am so smart I know for sure that billions of people are wrong, including the theologians who spent their whole lives studying religion.")
* The whole "sense that more is possible" approach. (Feels like bragging about the abilities of you and your imaginary tribe of smart people. Supported by the fictional evidence of the Beisutsukai superheroes, to illustrate how highly you think of yourself.)

I guess people with different attitudes will see the relative importance of these parts differently. If you start reading the book already not believing in the supernatural and not being emotionally invested in quantum physics, you will be like: "The supernatural is not-even-wrong? Yeah. Many worlds? I guess this is what biting the bullet really means, huh. Could we do better? Yeah, that's a nice dream, and perhaps not completely impossible." And then you focus on the parts about how to avoid specific mistakes. But if you start reading the book believing strongly in the supernatural, or the Copenhagen interpretation, or that nerds are inherently and irreparably losers, you will probably be like: "Oh, this guy is so wrong. And so overconfident. Oh, please someone slap him already to remind him of his real status, because this is so embarrassing. Jesus, this low-status nerd is now surrounded by other low-status nerds who worship him. What a cringe-fest!" So different people can come away with completely different interpretations of what the Sequences are actually about. If you dismiss all the specific advice, it seems like a status move, because when people
2Kaj_Sotala7y
I agree with these examples, but cousin_it said specifically that Eliezer played on readers' self-identification as smart, and these examples all seem to be more "Eliezer showing off how smart he is" rather than "Eliezer making his readers feel smart". Though now that it's been pointed out, I agree that there's a sense of Eliezer also doing the latter, and doing more of it than the average focused-on-the-former writer... but this distinction seems a little fuzzy to me, and it's not entirely clear to me what the specific things he does are.

Does anyone follow the academic literature on NLP sentence parsing? As far as I can tell, they've been writing the same paper, with minor variations, for the last ten years. Am I wrong about this?

2Darklight7y
Well, as far as I can tell, the latest progress in the field has come mostly through throwing deep learning techniques like bidirectional LSTMs at the problem and letting the algorithms figure everything out. This obviously is not particularly conducive to advancing the theory of NLP much.
0MrMind7y
I'm not following NLP per se, but lately I've seen papers on grammar analysis based on the categorical semantics of quantum mechanics (that is, dagger-compact categories). Search for the latest papers by Coecke on the arXiv.
[-][anonymous]7y40

Two things have been bugging me about LessWrong and its connection to other rationality diaspora/tangential places.

1) Criticism on LW is upvoted a lot, leading to major visibility. This happens even in the case where the criticism is quite vitriolic, like in Duncan's Dragon Army Barracks post. Currently, there's only upvotes for comments, and there aren't multiple reactions, like on Facebook, Vanilla Forums, or other places. So there's no clear way to say something like "you bring up good points, but also, your tone is probably going to make other peo... (read more)

2MrMind7y
Criticism is also a sign that the site is becoming more recognized and has started spreading around... You cannot control what other people choose to criticize, mainly because it's known that people get a status kick out of taking down others. When downvotes are resurrected, we'll have some means of judging nasty or undue criticism.
1Viliam7y
Also, it will be nice to have some tools to detect sockpuppets. Because if a nasty comment gets 20 upvotes, that doesn't necessarily mean that 20 people upvoted it.
0MrMind7y
Yes, there's also that... has the glitch allowing a sock-puppet army been discovered / fixed?
1Viliam7y
Well, how would you prevent someone from registering multiple accounts manually? Going by IP could unfairly stop multiple people using the same computer (e.g. me and my wife) or even multiple people behind the same proxy server (e.g. the same job, or the same university). I think the correct approach is to admit that you simply cannot prevent someone from creating hundreds of accounts, and design the system in a way that doesn't allow an army of a hundred zombies to do significant damage. One option would be to require something costly before allowing someone to either upvote or downvote, typically karma high enough that you can't gain it in a week by simply posting three clever quotes to the Rationality Thread. Preferably high karma and, after that, personal approval by a moderator. Maybe some of this will be implemented in LessWrong 2.0, I don't know.
0MrMind7y
That's beside the point; any user determined enough can create enough sock-puppets to be annoying. But I remember you said that specifically in Eugene's case there was a glitch that allowed him to create multiple accounts automatically. The usual standard precautions should suffice here: a captcha at registration and unique email verification should be deterrent enough.
0Lumifer7y
You can't, but you can make the process more difficult and slower. This is, more or less, infosec, and here it's rarely feasible to provide guarantees of unconditional safety. Generally speaking, the goal of defence is not so much to stop the attacker outright as to change his cost-benefit calculation so that the attack becomes too expensive. The real issue is detection: once you know which accounts are zombies, their actions are not hard to undo.
4Viliam7y
Generally true, but with the Reddit code and Reddit database schema, everything is hard (both detecting the zombies and undoing their actions). One of the reasons to move to LessWrong 2.0. (This may be difficult to believe until you actually try to download the Reddit/LW codebase and make it run on your home machine.)
2Viliam7y
Seems to me that when you find a vitriolic comment, there are essentially three options (other than ignoring it):

* upvote it;
* downvote it;
* write a polite steelmanned version as a separate top-level comment, and downvote the original one.

The problem is, the third option is too much work. And the second option feels like: "omg, we can't take any criticism; we have become a cult just like some people have always accused us!". So people choose the first option. Maybe a good approach would be for the moderators to write a message like: "I am going to delete this nasty comment in 3 hours; if anyone believes it contains valuable information, please repost it as a separate top-level comment."

Some people also like to play the "damned if you do, damned if you don't" game (e.g. the Basilisk). Delete or not, you are a bad guy either way; you only have a choice of what kind of bad guy you are -- the horrible one who keeps nasty comments on his website, or the horrible one who censors information exposing the dark secrets of the website. Trolling or status games, I guess. For people who don't have rationality as a value, it is fun (and more pageviews for their website) to poke the nerds and watch how they react. For people who do have rationality as a value, it is a status move to declare that they are more rational than the stupid folks at LW.

At some point, trying to interpret everything charitably and trying to answer politely and reasonably will make you a laughingstock. The most important lacking social skill is probably recognizing when you are simply being bullied. It is good to start with an assumption of good intent, but it is stupid to refuse to update in the face of overwhelming evidence. For example, it is obvious that people on RationalWiki are absolutely not interested in evaluating the degree of rationality on LW objectively; they enjoy their "snarky point of view" too much, which simply means bullying the outgroup; and they have already decided th
2Pimgd7y
You mean "the second option is disabled", which would leave upvote or ignore.
1Viliam7y
True, but I guess some people were doing this even before the downvotes were disabled. Or sometimes we had a wave of downvotes first, then someone saying "hey, this contains some valid criticism, so I am going to upvote it, because we shouldn't just hide the criticism", then a wave of contrarian upvotes, then a meta-debate... eh.

I'm thinking about starting an AI risk meetup every other Tuesday in London. Anyone interested? Also, if you could signal-boost to other Londoners you know, that would be good.

5philh7y
I think I'm unlikely to attend regularly, but what do you plan to do with the meetup? Lay discussion, technical discussion, attempts to make progress? I'll link to this from the London rationalish group.
3whpearson7y
A mixture of things that I think aren't being done enough (if they are, let me know):

* A meeting point for people interested in the subject in London.
* Discussion around some of the social issues (prevention of arms races).
* Discussion on the nature of intelligence, and how we should approach safety in light of it.
* Discussion of interesting papers (including psychology/neuroscience) to feed into the above.

Maybe forming a society to do these long term. If people are interested in helping solve the normal computer control problem as a stepping stone to solving the superintelligence problem, that would be cool. But I'd rather keep the meetup generalist and have things spin off from it.
2Kaj_Sotala7y
If you want to have a reading group, there's an existing one with a nice list of stuff they've covered that can be used for inspiration.

I have a question about AI safety. I'm sorry in advance if it's too obvious, I just couldn't find an answer on the internet or in my head.

The way AI has bad consequences is through its drive to maximize (it destroys the world in order to produce paperclips more efficiently). Suppose you instead designed AIs to: 1) find a function/algorithm within an error range of the goal; 2) stop once that method is found; 3) do 1) and 2) while minimizing the amount of resources used and/or the effect on the outside world.

If the above could be incorporated as a convention into any AI designed, would that mitigate the risk of AI going "rogue"?
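The stopping rule the question describes is essentially a satisficer: search until something is within tolerance of the goal, then halt, under a hard resource cap. A minimal sketch (the names `satisfice`, `loss`, `epsilon`, and `budget` are mine, purely for illustration, not from any real AI-safety codebase):

```python
import random

def satisfice(loss, candidates, epsilon, budget):
    """Return the first candidate whose loss is within epsilon,
    stopping immediately rather than optimizing further; give up
    once the evaluation budget (a crude proxy for resource use)
    is spent."""
    for used, c in enumerate(candidates, start=1):
        if used > budget:
            return None          # budget exhausted: halt, don't escalate
        if loss(c) <= epsilon:
            return c             # "good enough" found: stop here
    return None

# Toy goal: find x with x^2 close to 2 (i.e. approximate sqrt(2)).
random.seed(0)
guesses = (random.uniform(0, 2) for _ in range(10_000))
result = satisfice(lambda x: abs(x * x - 2), guesses,
                   epsilon=0.01, budget=10_000)
```

As cousin_it's reply below notes, the hard part is not the stopping rule itself but formalizing "its effect on the outside world"; the `budget` parameter here is only a stand-in for that.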

9cousin_it7y
It's one of the proposed plans. The main difficulty is that low impact is hard to formalize. For example, if you ask the AI to cure cancer with low impact, it might give people another disease that kills them instead, to keep the global death rate constant. Fully unpacking "low impact" might be almost as hard as the friendliness problem. See this page for more. The LW user who's doing most work on this now is Stuart Armstrong.

A highly recommended review of James Scott's Seeing Like a State, which, not coincidentally, has also been reviewed by Yvain.

Sample:

I think this is helpful to understand why certain aesthetic ideals emerged. Many people maybe started on the more-empirical side, but then noticed that all of the research started looking the same. I’ve called this “quantification”. It probably looked geometric, “simple” (think Occam’s razor), etc. Much like you’d imagine scientific papers to look today. When confronted with a situation where they didn’t have data, but still

... (read more)

Finally read the review, and I am happy I did. Made me think about a few things...

Legibility has its costs. For example, I had to use Jira for tracking my time in many software companies, and one task is always noticeably missing, despite requiring significant time and attention of all team members, namely using the Jira itself. How much time and attention does it require, in addition to doing the work, to make notes about what exactly you did when, whether it should be tracked as a separate issue, what meta-data to assign to that issue, who needs to approve it, communicating why they should approve it, explaining technical details of why the map drawn by the management doesn't quite match the territory, explaining that you are doing a "low-priority" task X because it is a prerequisite to a "high-priority" task Y, then explaining the same thing to yet another manager who noticed that you are logging time on low-priority tasks despite having high-priority tasks in the queue and decided to take initiative, negotiating whether you should log the time in your company's Jira or your company's customer's Jira or both, in extreme cases whether it is okay to use English... (read more)

0ChristianKl7y
Many Western countries do allow alternative schools for the elite. The UK won't shut down Eton anytime soon.
4Viliam7y
Somehow people in power can always make an exception for themselves and for their families. Legibility only overrides the needs of everyone else. Sometimes you can also benefit from the exception, even if you are not one of them, if you happen to have exactly the same need. But the further from the elite you are, the less likely your specific needs are to fit the exception made for them.
0MrMind7y
We should adopt an acronym: YASLASR, Yet Another Seeing Like A State Review. And we are already crossing into meta-review territory. To be frank, I've never understood the wide appeal the book enjoys. Sure, it's an important lesson that implicit knowledge is sometimes less wrong than scientific knowledge, but we (should) already know that: it's in Jaynes (the peasants believing that meteors were rocks falling from the sky) and it's in the metaphor of evolution as the mad god Azathoth (referenced here many times). Perhaps it's less surprising to aspiring rationalists because we already know the limitations of scientific knowledge.
7Lumifer7y
I can gesture in the direction of some points that make it appeal to me:

* I like the concept of legibility. It's a new one for me and I find it useful.
* I like the lack of clear-cut heroes and villains -- it is complicated.
* I like the attention paid to what can be expressed in what language, and the observation that there are real concerns which cannot be readily expressed in the language of rationality.
* I like the recognition of the role that power plays in social arrangements, regardless of what's "rational" or not.
* I like the pushback against the idea -- very popular among rationalists, mind you -- that we have a shiny new tool called math and logic which will solve everything, so we can ignore the accumulated deadwood of local knowledge.

All in all, it's a smart book written by someone on the other side of the ideological fence (AFAIK James Scott is a Marxist, though not an entirely orthodox one), which makes it very interesting.
4MaryCh7y
Goodness, you said something definite! :)
0Lumifer7y
Ooops, sorry ma'am, won't happen again :-P

Well, if there's been any less accurate spam...

I find myself in a potentially critical crossroads at the moment, one that could affect my ability to become a productive researcher for friendly AI in the future. I'll do my best to summarize the situation.

I had very strong mental capabilities 7 years ago, but a series of unfortunate health related problems including a near life threatening infection led to me developing a case of myalgic encephalomyelitis (chronic fatigue syndrome). This disease is characterized by extreme fatigue that usually worsens with physical or mental exertion, and is not signific... (read more)

4Mitchell_Porter7y
I'm going to take a wild guess, and suggest that your attitude towards FAI research, and your experience of CFS, are actually related. I have no idea if this is a standard theory, but in some ways CFS sounds like depression minus the emotion - and that is a characteristic symptom in people who have a purpose they regard as supremely important, who find absolutely no support for their attempt to pursue it, but who continue to regard it as supremely important. The point being that when something is that important, it's easy to devalue certain aspects of your own difficulties. Yes, running into a blank wall of collective incomprehension and indifference may have been personally shattering; you may be in agony over the way that what you have to do in order to stay alive, interferes with your ability to preserve even the most basic insights that motivate your position ... but it's an indulgence to think about these feelings, because there is an invisible crisis happening that's much more important. So you just keep grinding away, or you keep crawling through the desert of your life, or you give up completely and are left only with a philosophical perspective that you can talk about but can't act on... I don't know all the permutations. And then at some point it affects your health. I don't want to say that this is solely about emotion, we are chemical beings affected by genetics, nutrition, and pathogens too. But the planes intersect, e.g. through autoimmune disorders or weakened disease resistance. The core psychological and practical problem is, there's a difficult task - the great purpose, whatever it is - being made more difficult in ways that have no intrinsic connection to the problem, but are solely about lack of support, or even outright interference. And then on top of that, you may also have doubts and meta doubts to deal with - coming from others and from yourself (and some of those doubts may be justified!). Finally, health problems round out the picture.
0BeleagueredPotential7y
I spent a good year and a half trying to answer questions related to the points you brought up after first seeing the mind-body specialist, though you gave me some good perspective. Actually, I only discovered the purpose a couple of years after the myalgic encephalomyelitis set in; before that point my primary goal was to get better and to worry about other goals afterwards. I do not think that becoming more purpose-focused translated into me devaluing my difficulties; I was focused on myself and my health at the start of this thing, and that seems to have remained constant. It's just that suddenly those weren't the most important things to me anymore. My health became not just something intrinsically valuable but also a very important means to an end. Though I'll be mindful of how my goals affect me; even if they weren't initially involved in my health problems, they could be involved in their continuation if I take matters too seriously. Exactly this; I keep learning over and over new ways in which the mind and body and all their subsystems can affect each other in very major ways. Several insights related to this concept put me into partial remission in the first place. I wouldn't say that I've done all I can in figuring out how all these things interact with each other. I would say, though, that with the success of the partial remission and all the work I did afterwards on figuring out mind-body interactions within myself, I am at the point of diminishing returns with results vs effort, and I need to pursue other avenues at this point.
3ChristianKl7y
One of the core questions of rationality is: "Why do you believe what you believe?" Specifically, why do you believe that this doctor will be able to help you in a way that others can't? You can also write down the likelihoods of different outcomes. Given that you had success improving your condition with one mind-body paradigm, why not try others? Given that you speak about travelling to the US, it would also be worthwhile to know where you are living at the moment, to know what's available to you.
0BeleagueredPotential7y
This was extremely helpful in figuring out what to do, it hadn't occurred to me that a Bayesian calculation would be useful here. After tallying up all the variables I came to the conclusion that my current methods had a lower chance of helping me than I had always implicitly thought. What I referred to as “extreme risks” may not even be the truly risky options when considering other factors; like how the longer someone has myalgic encephalomyelitis the less likely it is they can get better. I realized the types of solutions I’ve been trying give me the mental satisfaction of having “done something” but they might not stand the best chance of actually working. I trust this one doctor more because I am trying to treat this condition rather than manage it and patients of his have reported more actual reduction of symptoms than almost any other ME doctor I can find, except possibly Dr. Sarah Myhill (but she isn’t accepting new patients). I have seen many health professionals (general practitioners, psychiatrists, dietitians, etc.) already but only the one mind-body specialist has treated the ME rather than just managed the symptoms. After careful consideration I decided in the end to go to the appointment. He ordered a lot of lab tests and it was quite a bit more expensive than I thought, but I had anticipated that possibility beforehand and went ahead with it even so. I should get all the results back in one or two months. This is precisely what I thought after seeing what the mind-body specialist achieved. I tried numerous approaches over the following year and a half; such as seeing a therapist, cognitive behavioral therapy from a book, and acting on suggestions from the specialist. Unfortunately I didn't see any further progress for my primary health concerns (there were some favorable, unrelated side benefits though). My guess is that only some of the physiological issues going on within me could be corrected this way, at least for any of the mind-body paradigm

I'd like to ask a question about the Sleeping Beauty problem for someone that thinks that 1/2 is an acceptable answer.

Suppose the coin isn't flipped until after the interview on Monday, and Beauty is asked the probability that the coin has or will land heads. Does this change the problem, even though Beauty is woken up on Monday regardless? It seems to me to obviously be equivalent, but perhaps other people disagree?

If you accept that these problems are equivalent, then you know that P(Heads | Monday) = P(Tails | Monday) = 1/2, since if it's Monday then a ... (read more)

0entirelyuseless7y
I think that 1/2 is an acceptable answer, and in fact the only correct answer. Basically 1/2 corresponds to SSA, and 1/3 to SIA; and in my opinion SSA is right, and SIA is wrong.

We can convert the situation to an equivalent Incubator situation to see how SSA applies. We have two cells. We generate a person and put them in the first cell. Then we flip a coin. If the coin lands heads, we generate no one else. If the coin lands tails, we generate a new person and put them in the second cell. Then we question all of the persons: "Do you think you are in the first cell, or the second?" "Do you think the coin landed heads, or tails?" To make things equivalent to your description, we could question the person in the first cell before the coin is flipped, and the person in the second (only if they exist) after it is flipped.

Estimates based on SSA:

* P(H) = .5
* P(T) = .5
* P(1st cell) = .75 [there is a 50% chance I am in the first cell because of getting heads; otherwise there is a 50% chance I am the first person instead of the second]
* P(2nd cell) = .25 [likewise]
* P(H | 1st cell) = 2/3 [from above]
* P(T | 1st cell) = 1/3 [likewise]
* P(H | 2nd cell) = 0
* P(T | 2nd cell) = 1

Your mistake is in the assumption that "P(Heads | Monday) = P(Tails | Monday) = 1/2, since if it's Monday then a fair coin is about to be flipped." The Doomsday-style conclusion that I fully embrace is that if it is Monday, then it is more likely that the coin will land heads.
0justinpombrio7y
I'm curious: is this grounded in anything beyond your intuition in these cases? SIA is grounded in frequency. In the Incubator situation, the SIA probabilities are:

* P(1st cell) = 2/3
* P(2nd cell) = 1/3
* P(H | 1st cell) = 1/2
* P(H | 2nd cell) = 0

(FYI, I find this intuitive, and find SSA in this situation unintuitive.) These agree with the actual frequencies, in terms of the expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you're a utilitarian, that's what you care about. If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find that very surprising). So your probabilities aren't grounded in frequency&utility. Is there something else they're grounded in that you care about? Or do you choose them only because they feel intuitive?
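The person-counting frequencies cited here are easy to check with a quick Monte Carlo sketch of the Incubator setup (variable names are mine): counting people per cell over many runs gives P(1st cell) ≈ 2/3 and P(H | 1st cell) ≈ 1/2.

```python
import random

random.seed(0)
trials = 100_000
first = second = 0        # person-counts per cell across all trials
heads_in_first = 0

for _ in range(trials):
    heads = random.random() < 0.5
    first += 1            # a person is always created in the first cell
    if heads:
        heads_in_first += 1
    else:
        second += 1       # tails: a second person, in the second cell

total_people = first + second
p_first = first / total_people              # fraction of people in cell 1, ~2/3
p_heads_given_first = heads_in_first / first  # ~1/2
```

Note that this only confirms the frequencies per person; it does not by itself settle the SSA-vs-SIA dispute, which is about whether those frequencies are the right thing for a single observer's credence to track.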
0entirelyuseless7y
In a previous thread on Sleeping Beauty, I showed that if there are multiple experiments, SSA will assign intermediate probabilities, closer to the SIA probabilities. And if you run an infinite number, it will converge to the SIA probabilities. So you will partially get this benefit in any case; but apart from this, there is nothing to prevent a person from taking into account the whole situation when they decide whether to make a bet or not. I agree with this, since there will always be someone in the first cell, and someone in the second cell only 50% of the time. I care about truth, and I care about honestly reporting my beliefs. SIA requires me to assign a probability of 1 to the hypothesis that there are an infinite number of observers. I am not in fact certain of that, so it would be a falsehood to say that I am. Likewise, if there is nothing inclining me to believe one of two mutually exclusive alternatives, saying "these seem equally likely to me" is a matter of truth. I would be falsely reporting my beliefs if I said that I believed one more than the other. In the Sleeping Beauty experiment, or in the incubator experiment, nothing leads me to believe that the coin will land one way or the other. So I have to assign a probability of 50% to heads, and a probability of 50% to tails. Nor can I change this when I am questioned, because I have no new evidence. As I stated in my other reply, the fact that I just woke up proves nothing; I knew that was going to happen anyway, even if, e.g. in the incubator case, there is only one person, since I cannot distinguish "I exist" from "someone else exists." In contrast, take the incubator case, where a thousand people are generated if the coin lands tails. SIA implies that you are virtually certain a priori that the coin will land tails, or that when you wake up, you have some way to notice that it is you rather than someone else. Both things are false -- you have no way of knowing that the coin will land tails or is
0Jiro7y
Adding P(Heads | Monday) and P(Tails | Monday) doesn't give you P(Monday), it gives you P(1 | Monday).
0justinpombrio7y
I didn't say it did. I said that P(Heads | Monday) = P(Tails | Monday) = 1/2, because it's determined by a fair coin flip that's yet to happen. This is in contrast to the standard halfer position, where P(Heads | Monday) > 1/2 and P(Tails | Monday) < 1/2. Everyone agrees that P(Heads | Monday) + P(Tails | Monday) = 1. Or are you disagreeing with the calculation? P(Heads) = P(Monday) P(Heads | Monday) + P(Tuesday) P(Heads | Tuesday) is just the law of total probability. P(Heads | Tuesday) = 0, because if Beauty is awake on Tuesday then the coin must have landed tails. P(Heads | Monday) = 1/2 by the initial reasoning. Then P(Monday) = 2 * P(Heads) by a teeny amount of algebra.
0Jiro7y
The probability is 1/3 per awakening and 1/2 per experiment.

* P(Heads | Monday) = 1/2
* P(Tails | Monday) = 1/2
* P(Heads | Tuesday) = 0
* P(Tails | Tuesday) = 1

Per-experiment:

* P(Monday) = 1
* P(Tuesday) = 1/2
* P(Heads) = 1/2, P(Tails) = 1/2

Per-awakening:

* P(Monday) = 2/3
* P(Tuesday) = 1/3
* P(Heads) = 1/3, P(Tails) = 2/3

I don't see anything in either of those links claiming that P(Heads | Monday) > 1/2. I assume that your reasoning to get that is something like "P(Heads | Tuesday) is less than P(Heads), so it follows that P(Heads | Monday) is greater than P(Heads)." However, if you're calculating per-experiment, Monday and Tuesday are not mutually exclusive, so this reasoning doesn't work. (If you're calculating per-awakening, P(Heads) isn't 1/2 anyway.)
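The per-awakening versus per-experiment split can be checked by counting frequencies in a simulation of the protocol (a sketch; variable names are mine):

```python
import random

random.seed(0)
experiments = 100_000
awakenings = []                        # (day, coin) for every awakening

for _ in range(experiments):
    coin = "H" if random.random() < 0.5 else "T"
    awakenings.append(("Mon", coin))   # Monday awakening always happens
    if coin == "T":
        awakenings.append(("Tue", coin))  # Tuesday awakening only on tails

# Per-experiment: fraction of experiments whose coin was heads (~1/2).
p_heads_per_experiment = sum(
    1 for d, c in awakenings if d == "Mon" and c == "H") / experiments

# Per-awakening: fraction of awakenings at which the coin is heads (~1/3).
p_heads_per_awakening = sum(
    1 for d, c in awakenings if c == "H") / len(awakenings)

# Conditioning on "it is Monday", counted per awakening (~1/2).
mondays = [c for d, c in awakenings if d == "Mon"]
p_heads_given_monday = mondays.count("H") / len(mondays)
```

This reproduces the numbers above: 1/2 per experiment, 1/3 per awakening, and learning "it is Monday" moves the per-awakening value from 1/3 to 1/2.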
0entirelyuseless7y
Some additional support for the apparently unreasonable conclusion that if it is Monday, it is more likely that the coin will land heads. Suppose that on each awakening, the experimenter flips a second coin, and if the second coin lands heads, the experimenter tells Beauty what day it is, and does not do so if it is tails. If Beauty is told that it is Tuesday, this is evidence (conclusive in fact) that the first coin landed tails. So conservation of expected evidence means that if she is told that it is Monday, she should treat this as evidence that the first coin will land heads.
0Jiro7y
More likely than what? Using per-awakening probabilities, the probability of heads without this information is 1/3. The new information makes heads more likely than the 1/3 it would be without the new information. It doesn't make it more likely than 1/2.
0entirelyuseless7y
I misplaced that comment. It was not a response to yours.

More likely than .5. In fact I am saying the probability of getting heads is 2/3 after being told that it is Monday. This is a frequentist definition of probability. I am using probability as a subjective degree of belief, where being almost certain that something is so means assigning a probability near 1, being almost certain that it is not means assigning a probability near 0, and being completely unsure means .5.

Here is how this works. If I am Sleeping Beauty, on every awakening I am subjectively in the same condition. I am completely unsure whether the coin landed/will land heads or tails. So the probability of heads is .5, and the probability of tails is .5. What is the subjective probability that it is Monday, and what is the subjective probability that it is Tuesday?

It is easier to understand if you consider the extreme form. Let's say that if the coin lands tails, I will be woken up 1,000,000 times. I will be quite surprised if I am told that it is day #500,000, or any other easily definable number. So my degree of belief that it is day #500,000 has to be quite low. On the other hand, if I am told that it is the first day, that will be quite unsurprising. But it will be unsurprising mainly because there is a 50% chance that will be the only awakening anyway. This tells me that before I am told what day it is, my estimate of the probability that it is the first day is a tiny bit more than 50% -- 50% of this is from the possibility that the coin landed heads, and a tiny bit more from the possibility that it landed tails but it is still the first day.

When we transition to the non-extreme form, its being Monday is still less surprising than its being Tuesday. In fact, before being told anything, I estimate a 75% chance that it is Monday -- 50% from the coin landing heads, and another 25% from the coin landing tails. And when I am told that it is in fact Monday, then I think there is a chance of 2/3, i.e. 5
0Jiro7y
In the non-extreme form, the chance of being Monday is 2/3 and the chance of being Tuesday is 1/3. 2/3 is indeed less surprising than 1/3, so your reasoning is correct. Before being told anything, you should estimate a 2/3 chance that it's Monday (not a 75% chance). There are three possibilities: heads/Monday, tails/Monday, and tails/Tuesday, all of which are equally likely. Because tails results in two awakenings, and you are calculating probability per awakening, that boosts the probability of tails, so it would be incorrect to put 50% on heads/Monday and 25% on tails/Monday. Tails/Monday is not half as likely as heads/Monday; it is equally likely. Only in the scenario where you were woken up either on Monday or Tuesday, but not both, would the probability of tails/Monday be 25%. When you are told that it is Monday, the chance is not 50/75, it's (1/3) / (2/3) = 50%. Being told that it is Monday does increase the probability that the result is heads; however, it increases it from 1/3 -> 1/2, not from 1/2 -> 2/3.
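The per-awakening counting above can be checked by exact enumeration. This is a minimal sketch (the variable names are mine, not from the thread) using Python's `fractions` to list the three equally likely awakening states and condition on "it is Monday":

```python
from fractions import Fraction

# The three awakening states described above, each with per-awakening
# probability 1/3: (coin result, day).
awakenings = [("heads", "Monday"), ("tails", "Monday"), ("tails", "Tuesday")]
p = Fraction(1, len(awakenings))  # uniform per-awakening weight

# Per-awakening probability of heads.
p_heads = sum(p for coin, day in awakenings if coin == "heads")

# Condition on "it is Monday": P(Heads | Monday) = (1/3) / (2/3).
p_monday = sum(p for coin, day in awakenings if day == "Monday")
p_heads_and_monday = sum(p for coin, day in awakenings
                         if coin == "heads" and day == "Monday")
p_heads_given_monday = p_heads_and_monday / p_monday

print(p_heads)               # 1/3
print(p_heads_given_monday)  # 1/2
```

This reproduces the 1/3 -> 1/2 update: learning that it is Monday raises the per-awakening probability of heads from 1/3 to 1/2.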
0entirelyuseless7y
I disagree that these situations are equally likely. We can understand it better by taking the extreme example. I will be much more surprised to hear that the coin was tails and that we are now at day #500,000, than that the coin was heads and that it is the first day. So obviously these two situations do not seem equally likely to me. And in particular, it seems equally likely to me that the coin was or will be heads, and that it was or will be tails. Going back to the non-extreme form, this directly implies that it seems half as likely to me that it is Monday and that the coin will be tails, as it is that it is Monday and that the coin will be heads. This results in my estimate of a 75% chance that it is Monday.

I am not calculating "probability per awakening", but calculating in the way indicated above, which does indeed make tails/Monday half as likely as heads/Monday. I am not asking about the probability that the situation as a whole will somewhere or other contain tails/Monday; this has a probability of 50%, just like the corresponding claim about heads/Monday. I am being asked in a concrete situation, "Do you think it is Monday?" And I am less sure it is Monday if the coin is going to be tails, because in that situation I will not be able to distinguish my situation from Tuesday. And this is surely the case even when I am woken up both on Monday and Tuesday. It will just happen twice that I am less sure it is Monday. And based on the above reasoning, being told that it is Monday does indeed lead me to expect that the coin will land heads, with a probability of 2/3.
0Jiro7y
You should not be more surprised in that situation. The more days there are, the more the extra tails awakenings push down the probability of heads. With 500,000 awakenings, the probability gets pushed down by a lot. Now heads has a per-awakening probability of 1/500001, the same as tails-day-1 and tails-day-500000.
0entirelyuseless7y
You are claiming that if I will be woken up 500,000 times if the coin lands tails, I should be virtually certain a priori that the coin will land tails. I am not; I would not be surprised at all if it landed heads. In fact, as I have been saying, the setup does not make me expect tails in any way. So at the start the probability remains 50% heads, 50% tails.
0Jiro7y
Yes, I am (assuming you mean per-awakening certainty).
0entirelyuseless7y
I do not. I mean reporting my opinion when someone asks, "Do you think the coin landed heads, or tails?" I will truthfully respond that I have no idea. The fact that I would be woken up multiple times if it landed tails did not make it any harder for the coin to land heads.
0justinpombrio7y
I'd recommend distinguishing between the probability that the coin landed heads (which happens exactly once), and the probability that, if you were to plan to peek, you would see heads (which would happen on average 250,000 times).
0entirelyuseless7y
The problem is that you are counting frequencies, and I am not. It is true that if you run the experiment many times, my estimate will change, from the very moment that I know that the experiment will be run many times. But if we are going to run the experiment only once, then even if I plan to peek, I would expect with 50% probability to see heads. That does not mean "per awakening" or any other method of counting. It means that if I saw heads, I would say, "Not surprising; that had a 50% chance of happening." I would not say, "What an incredible coincidence!!!!"
0justinpombrio7y
Thank you for walking me through this; I'm having a very hard time seeing the other perspective here. I understand that P(Monday) is ambiguous. I meant to refer to "the probability that the current day, as Beauty is currently being interviewed, is Monday". Regardless of Beauty's perspective, she can ask whether the current day is Monday or Tuesday, and she does know that it is not currently both. And she can ask what the probability is that the coin landed tails given that the current day is Tuesday, etc. Yes? Given that, I'm not seeing what part of my reasoning doesn't work if you replace each instance of "Monday" with "IsCurrentlyMonday".
0Jiro7y
What you just described is a per-awakening probability. Per-awakening, P(Heads) = 1/3, so the proof that P(Heads | Monday) > 1/2 actually only proves that P(Heads | Monday) > 1/3, which is true since 1/2 > 1/3.
0justinpombrio7y
Sorry, you lost me completely. I didn't prove that P(Heads | Monday) > 1/2 at all. Could you say which step (1-6) is wrong, if I am Beauty, and I wake up, and I reason as follows?

1. The experiment is unchanged by delaying the coin flip until Monday evening.
2. If the current day is Monday, then the coin is equally likely to land heads or tails, because it is a fair coin that is about to be flipped. Thus P(Heads | CurrentlyMonday) = 1/2.
3. By the law of total probability, which is applicable because it cannot currently be both Monday and Tuesday: P(Heads) = P(CurrentlyMonday) P(Heads | CurrentlyMonday) + P(CurrentlyTuesday) P(Heads | CurrentlyTuesday).
4. P(Heads | CurrentlyTuesday) = 0, because if it is Tuesday then the coin must have landed tails.
5. Thus P(CurrentlyMonday) = 2 * P(Heads) by some algebra.
6. It may not currently be Monday, thus P(CurrentlyMonday) != 1, thus P(Heads) < 1/2.
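The six steps above can be checked with exact arithmetic once a weight for "it is currently Monday" is plugged in. The 2/3 weight below is the thirder's per-awakening assignment from earlier in the thread, not part of the proof itself; the sketch just confirms the steps are internally consistent:

```python
from fractions import Fraction

# Step 2: the coin is fair and (by step 1) flipped Monday evening.
p_heads_given_monday = Fraction(1, 2)
# Step 4: a Tuesday awakening only happens on tails.
p_heads_given_tuesday = Fraction(0)

# Thirder per-awakening weight: it is Monday in 2 of 3 equally likely states.
p_monday = Fraction(2, 3)
p_tuesday = 1 - p_monday

# Step 3: total probability over the two mutually exclusive cases.
p_heads = (p_monday * p_heads_given_monday
           + p_tuesday * p_heads_given_tuesday)

# Step 5: P(CurrentlyMonday) = 2 * P(Heads).
assert p_monday == 2 * p_heads
# Step 6: since P(CurrentlyMonday) != 1, P(Heads) < 1/2.
assert p_heads < Fraction(1, 2)
print(p_heads)  # 1/3
```

With these inputs the chain yields P(Heads) = 1/3, which is exactly the thirder answer the steps were driving at.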
0Jiro7y
You had said: Neither of your links to the halfer position shows anyone claiming that. So I assumed you tried to deduce it from the halfer position. The obvious way to deduce it is wrong for the reason I stated. "CurrentlyMonday" as you have defined it is a per-awakening probability, not a per-experiment probability. So the P(Heads) that you end up computing by those steps is a per-awakening P(Heads). Per-awakening, P(Heads) is 1/3, which indeed is less than 1/2. The halfer position assumes that the probability that is meaningful is a per-experiment probability. (If you want to compute a per-experiment probability, you would have to define CurrentlyMonday as something like "the probability that the experiment contains a bet where, at the moment of the bet, it is currently Monday", and step 3 won't work since CurrentlyMonday and CurrentlyTuesday are not exclusive.)
0justinpombrio7y
To be clear, you're saying that, from a halfer position, "the probability that, when Beauty wakes up, it is currently Monday" is meaningless? Sorry, I wrote that without thinking much. I've seen that position, but it's definitely not the standard halfer position. (It seems to be entirelyuseless' position, if I'm not mistaken.) The per-experiment probabilities you give make perfect sense to me: they're the probabilities you have before you condition on the fact that you're Beauty in an interview, and they're the probabilities from which I derived the "per-awakening" probabilities myself (three indistinguishable scenarios: HM, TM, TT, each with probability 1/2; thus they're all equally likely, though that's not the most rigorous reasoning). I'm confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she's interviewed each time she wakes up. If instead, on Heads you let Beauty live and on Tails you kill her, then no one would have trouble saying that Beauty should say P(Heads) = 1 in an interview. Why is this different? Thanks again for the discussion.
0Jiro7y
It's meaningless in the sense that it doesn't have a meaning that matches what you're trying to use it for. Not that it literally has no meaning. It depends on what you're trying to measure. If you're trying to measure what percentage of experiments have heads, you need to use a per-experiment probability. It isn't obviously implausible that someone might want to measure what percentage of experiments have heads.
0justinpombrio7y
What I'm trying to use it for is to compute P(Heads), from a halfer position, while carrying out my argument. So in other words, P(per-experiment-heads | it-is-currently-Monday) is meaningless? And a halfer, who interpreted P(heads) to mean P(per-experiment-heads), would say that P(heads | it-is-currently-Monday) is meaningless?
0Jiro7y
The "per-experiment" part is a description of, among other things, how we are calculating the probability. In other words, when you say "P(per-experiment event)" the "per-experiment" is really describing the P, not just the event. So if you say "P(per-experiment event|per-awakening event)" that really is meaningless; you're giving two contradictory descriptions to the same P.
0justinpombrio7y
THANK YOU. I now see that there are two sides of the coin. However, I feel like it's actually Heads, and not P, that is ambiguous.

There is the probability that the coin would land heads. The coin lands exactly once per experiment, and half the time it will land heads. If you count Beauty's answer to the question "what is the probability that the coin landed heads" once per awakening, you're sometimes double-counting her answer (on Tails). It's dishonest to ask her twice about an event that only happened once.

On the other hand, there is the probability that if Beauty were to peek, she would see heads. If she decided to peek, then she would see the coin once or twice. Under SIA, she's twice as likely to see tails. If you count Beauty's answer to the question "what is the probability that the coin is currently showing heads" once per experiment, you're sometimes ignoring her answer (on Tuesdays). It would be dishonest to only count one of her two answers to two distinct questions.

(Being more precise: suppose the coin lands tails, and you ask Beauty "What is the probability that the coin is currently showing heads?" on each day, but only count her answer on Monday. Well, you've asked her two distinct questions, because the meaning of "currently" changes between the two days, but only counted one of them. It's dishonest.)

Thus, this question isn't up for interpretation. The answer is 1/2, because the question (on Wikipedia, at least) asks about the probability that the coin landed heads. There are two interpretations - per experiment and per awakening - but the interpretation should be set by the question. Likewise, setting a bet doesn't help settle which interpretation to use: either interpretation is perfectly capable of figuring out how to maximize expectation for any bet; it just might consider some bets to be rigged.

Although this is subtle, and maybe I'm still missing things. For one, why is Bayes' rule failing? I now know how to use it both to prove that P
0Jiro7y
That doesn't help. "Coin landed heads" can still be used to describe either a per-experiment or per-awakening situation: 1) Given many experiments, if you selected one of those experiments at random, in what percentage of those experiments did the coin land heads? 2) Given many awakenings, if you selected one of those awakenings at random, in what percentage of those awakenings did the coin land heads?
0justinpombrio7y
My understanding is that P depends only on your knowledge and priors. If so, what is the knowledge that differs between per-experiment and per-awakening? Or am I wrong about that? Ok, yes, agreed.
0Jiro7y
A per-experiment P means that P would approach the number you get when you divide the number of successes in a series of experiments by the number of experiments. Likewise for a per-awakening event. You could phrase this as "different knowledge" if you wish, since you know things about experiments that are not true of awakenings and vice versa.
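The two limiting frequencies just described can be sketched with a small Monte Carlo run (the function name, trial count, and seed are arbitrary choices of mine):

```python
import random

def run(trials=100_000, rng=random.Random(0)):
    """Simulate the Sleeping Beauty setup and return the two frequencies."""
    experiment_heads = 0   # experiments in which the coin landed heads
    awakening_heads = 0    # awakenings at which the coin shows heads
    awakenings = 0         # total awakenings across all experiments
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            experiment_heads += 1
            awakening_heads += 1
            awakenings += 1        # heads: woken once (Monday only)
        else:
            awakenings += 2        # tails: woken twice (Monday and Tuesday)
    per_experiment = experiment_heads / trials          # approaches 1/2
    per_awakening = awakening_heads / awakenings        # approaches 1/3
    return per_experiment, per_awakening

print(run())
```

Dividing successes by experiments converges to 1/2; dividing heads-awakenings by all awakenings converges to 1/3. The "halfer" and "thirder" answers are just these two denominators.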
0entirelyuseless7y
This is a SIA idea, and it's wrong. There's nothing to condition on because there's no new information, just as there's no new information when you find that you exist. You can never find yourself in a position where you don't exist or where you're not awake (assuming awake here is the same as being conscious).
0justinpombrio7y
Please don't make statements like this unless you really understand the other person's position (can you guess how I will respond?). For instance, notice that I haven't ever said that the halfer position is wrong. This is just a restatement of SSA. By SIA there is new information, since you're more likely to be one of a larger set of people. Sure there is! Flip a coin and kill Beauty on tails. Now ask her what the coin flip said: she learns from the fact that she's alive that it landed heads. I understand that SSA is a consistent position, and I understand that it matches your intuition if not mine. I'm curious how you'd respond to the question I asked above. It's in the post with "So your probabilities aren't grounded in frequency&utility."
0entirelyuseless7y
And I didn't say (or even mean to say) that your position is wrong. I said the SIA idea is wrong. You can learn something from the fact that you are alive, as in cases like this. But you don't learn anything from it in the cases where the disagreement between SSA and SIA comes up. I'll say more about this in replying to the other comments, but for the moment, consider this thought experiment:

Suppose that you wake up tomorrow in your friend Tom's body and with his memories and personality. He wakes up tomorrow in yours in the same way. The following day, you swap back, and so it goes from day to day. Notice that this situation is empirically indistinguishable from the real world. Either the situation is meaningless, or you don't even have a way to know it isn't happening. The world would seem the same to everyone, including to you and him, if it were the case.

So consider another situation: you don't wake up tomorrow at all. Someone else wakes up in your place with your memories and personality. Once again, this situation is either meaningless, or no one, including you, has a way to know it didn't already happen yesterday.

So you can condition on the fact that you woke up this morning, rather than not waking up at all. We can conclude from this, for example, that the earth was not destroyed. But you cannot condition on the fact that you woke up this morning instead of someone else waking up in your place, since for all you know, that is exactly what happened. The application of this to SSA and SIA should be evident.

It seems (understandably) that to get people to take your ideas about intelligence seriously, there are incentives to actually build AI and show it doing things.

Then people will try and make it safe.

Can we do better at spotting ideas about intelligence that differ from current AI, and at engaging with those ideas before they are instantiated?

0Lumifer7y
What kind of things are you thinking about? Any examples?
0whpearson7y
You can, hypothetically, build some pretty different interacting systems of ML programs inside the VM I've been building, but that has not gotten a lot of interest. I've been thinking about it a fair bit recently. But I think the general case still stands. How would someone who has made an AGI breakthrough convince the AGI risk community without building it?
0Lumifer7y
In the usual way someone who has made a breakthrough convinces others. Reputation helps. Whitepapers help. Toy examples help. Etc., etc. I don't understand the context, however. That someone, how does he know it's a breakthrough without testing it out? And why would he be so concerned with the opinion of the AI risk community (which isn't exactly held in high regard by most working AI researchers)?
0whpearson7y
Okay. A good metaphor might be the development of the atomic bomb. Lots of nuclear physicists thought that nuclear reactions couldn't be used for useful energy (e.g. Rutherford). Leo Szilard had the insight that you could do a chain reaction and that this might be dangerous. He did not build the bomb (he could not; he didn't know about neutrons) and assigned the patent to the admiralty to keep it secret. But he managed to convince other high-profile physicists that it might be dangerous without publicizing it too much (no whitepapers etc.). He had the reputation, and the physics of these things was far firmer than our wispy grasp of intelligence. So that worked.

But how will it work for our hypothetical AI researcher who has the breakthrough, if they are not part of the in-group of AI risk people? They might be Chinese and not have a good grasp of English. They are motivated to get word to, say, Elon Musk (or another influential concerned person/group that might be able to develop it safely) of their breakthrough, but want to keep the idea as secret as possible and do not have a pathway for reaching them.
0Lumifer7y
One issue is that you're judging the idea of a chain reaction as a breakthrough post factum. At the time, it was just a hypothesis, interesting but unproven. I don't know the history of nuclear physics well enough, but I suspect there were other hypotheses, also quite interesting, which didn't pan out and we forgot about them. A breakthrough idea is by definition weird and doesn't fit into the current paradigm. At the time it's proposed, it is difficult to separate real breakthroughs from unworkable craziness unless you can demonstrate that your breakthrough idea actually works in reality. And if you can't -- well, absent a robust theoretical proof, you will just have to be very convincing: we're back to the usual methods mentioned above (reputation, etc.). Claimed breakthroughs sometimes are real and sometimes are not (e.g. cold fusion). I suspect the base rates will create a prior not favourable to accepting a breakthrough as real.
0whpearson7y
It was interesting enough that Einstein sent a letter about it to the president, which was taken seriously, before the bomb was made. I recommend reading up on it; it is a very interesting time in history. It would be interesting to know how many other potential breakthroughs got that treatment, and how we can make sure that the right ones, the ones actually going to be made, get that treatment.

Has there been / will there be / could there be a condition where transforming atoms is cheaper than transforming bits? Or is it a universal law that emulation is always developed before nanotechnology?

2whpearson7y
Flippant answer: nanotech has come first! And we are made of it.

I'm not quite sure what you are getting at here. Are you asking whether it will be possible to recreate a human with nanotech more easily than to emulate one? I ask this because not all atoms are equal. It is somewhat hard to pry two nitrogen atoms apart but easier to pry two oxygen atoms apart, so the energy cost to make the thing depends a lot on what your feedstock is.

Then there is the question of whether running the recreated human is cheaper than running an emulation, which is separate from the cost of recreating a human vs. creating an emulation. It depends on the amount of fidelity you require. If there are strange interactions between neurons mediated by the electric fields that you want to emulate, or the exact way that the emulation interacts with certain drugs, then I think recreation is probably going to be a lot cheaper.
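The nitrogen/oxygen point can be made concrete with approximate bond dissociation energies (standard textbook values, quoted from memory, so treat them as illustrative rather than authoritative):

```python
# Approximate bond dissociation energies in kJ/mol.
# These are ballpark textbook figures, used only to illustrate
# why feedstock matters for the energy cost of rearranging atoms.
bond_energy = {
    "N2 (triple bond)": 945,
    "O2 (double bond)": 498,
}

# Prying two nitrogen atoms apart costs roughly twice the energy of
# prying two oxygen atoms apart.
ratio = bond_energy["N2 (triple bond)"] / bond_energy["O2 (double bond)"]
print(round(ratio, 2))  # about 1.9
```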

Is this true for anyone: "If you offered me X right now, I'd accept the offer, but if you first offered me to precommit against taking X, I'd accept that offer and escape the other one"? For which values of X? Do you think most people have some value of X that would make them agree?

2Dagon7y
Not exactly right now, but I've called in sick for work when I would have gone in with sufficient precommitment. edit: for clarity - this is a decision that I would prefer to have escaped the night before, and the day after. A number of things I lump into the "akrasia" topic fit this pattern.
0ChristianKl7y
A person who's on a diet might agree if X is "I give you a piece of cake" in many instances. I'm personally quite good at inhibiting myself from actions I don't want to take but less good at getting myself to do uncomfortable things, so there's no example that comes to mind immediately. In general, I think that cases where System 1 wants to accept the offer but System 2 wants to reject it provide material for X. I would be surprised if you can't find examples that hold for most people.
0whpearson7y
Do they have to be examples of willingly yielding? E.g. if there were a malign superintelligence in the box that I had to interact with, then I would probably yield to letting it out, but if I could, I would precommit to not letting it out.
0cousin_it7y
Good example. "I would yield to a mind hack right now, but I would precommit to not yielding to a mind hack right now." Are there any simpler examples, or specific mind hacks that would work on you?
0Lumifer7y
Hmmm... would you precommit to not giving an armed robber your wallet? Would it be a wise precommittment?
0[anonymous]7y
If the robber knew that, then such a precommitment means you never have to face them, yes?
1Lumifer7y
No. You assume the robber is a rational homo economicus. Hint: in most cases this is not true. Besides, this.
0MrMind7y
Could you rewrite it more clearly? I'm not sure exactly what you're asking... Besides, offer me to precommit against x why? With which incentive?
0cousin_it7y
I'm looking for examples of temptations that you would yield to, given the chance, and precommit against, given the chance. Basically things that make you torn and confused.
0MrMind7y
Oh well, that's easy: * snoozing * snacking * slacking at work * watching too much youtube * etc.
1cousin_it7y
Note that the question tries to avoid the time inconsistency angle. You'd yield to one unit of X right now, given the chance, and you'd precommit against yielding to one unit of X right now, given the chance. Do any of your examples work like that?
0MrMind7y
Sometimes they do, yes. Not always though. There are times when I would like not to do something but some other subsystem is in control.
0entirelyuseless7y
I think some people would precommit to never telling lies, if they had the chance, but at the same time, they would lie in the typical Nazi at the door situation, given that they in fact cannot precommit. This has nothing to do with time inconsistency, because after you have lied in such a situation, you don't find yourself wishing you had told the truth.
0Screwtape7y
I'm not sure I'm parsing the question correctly. Attempting to set X = five dollars, I get "If you offered me five dollars right now, I'd accept the offer, but if you first offered me to precommit against taking five dollars, I'd accept that offer and escape the other one." Precommitting against taking five dollars seems strange. My best interpretation is "If you offered me X right now, I'd accept the offer, but if you first offered me Y to precommit against taking X, I'd accept that offer and later wouldn't take X." If that interpretation is close enough, then yes. If you offered me the opportunity to play Skyrim all day right now, I'd accept the offer, but if you first offered me a hundred dollars to precommit against playing Skyrim all day, I'd accept that offer and later wouldn't take the opportunity to play Skyrim all day. That seems too straightforward though, so I don't think I'm interpreting the question right.
1cousin_it7y
I think this article shows that you probably won't get a crisp answer.
4Luke_A_Somers7y
That's more about the land moving in response to the changes in ice, and a tiny correction for changing the gravitational force previously applied by the ice. This is (probably?) about the way the water settles around a spinning oblate spheroid.
0Thomas7y
This article is quite bullshit.
0cousin_it7y
Hmm, yeah, you're right. I got hypnotized by the yale.edu address.