All of Endovior's Comments + Replies

It seems to me that this is related to the idea of roles. If you don't see yourself as being responsible for handling emergencies, you probably won't do anything about them, hoping someone else will. But if you do see yourself as being the person responsible for handling a crisis situation, then you're a lot more likely to do something about it, because you've taken that responsibility upon yourself.

It's a particularly nuanced response to both take that kind of responsibility for a situation, and then, after carefully evaluating the options, decide that ... (read more)

I think I understand. There is something of what you describe here that resonates with my own past experience.

I myself was always much smarter than my peers; this isolated me, as I grew contemptuous of the weakness I found in others, an emotion I often found difficult to hide. At the same time, though, I was not perfect; the ease at which I was able to do many things led me to insufficient conscientiousness, and the usual failures arising from such. These failures would lead to bitter cycles of guilt and self-loathing, as I found the weakness I so hated... (read more)

-2sparkles

Not necessarily. Cosmic rays are just radiation at particular (high) energies. So if it interprets everything along those lines, it's just seeing everything purely in terms of radiation... in other words, 'normal, uninteresting background case, free of cosmic rays'. So things that don't register as energetic enough to be cosmic rays, like itself, parse as meaningless random fluctuations... presumably, if it were 'intelligent', it would think that it existed for no reason, as a matter of random chance, like any other case of background radiation below the threshold of cosmic rays, without losing any ability to perceive or understand cosmic rays.

-2MugaSofer
Scanning itself and saying "nope, nothing to see here", that's one thing. Scanning itself and saying "well, this is basically cosmic rays, only at a lower frequency ..." is closer to what the quote describes.

As a former Objectivist, I understand the point being made.

That said, I no longer agree... I now believe that Ayn Rand made an axiom-level mistake. Existence is not Identity. To assume that Existence is Identity is to assume that all things have concrete properties, which exist and can therefore be discovered. This is demonstrably false; at the fundamental level of reality, there is Uncertainty. Quantum-level effects inherent in existence preclude the possibility of absolute knowledge of all things; there are parts of reality which are actually unknowa... (read more)

0Rob Bensinger
Existence is frequently defined in terms of identity. 'exists(a)' ≝ '∃x(a=x)' Only if you're an Objective Collapse theorist of some stripe. If you accept anything in the vicinity of Many Worlds or Hidden Variables, then nature is not ultimately so anthropocentric; all of its properties are determinate, though those properties may not be exactly what you expect from everyday life. If "there are" such parts, then they exist. The mistake here is not to associate existence with identity, but to associate existence or identity with discoverability; lots of things are real and out there and objective but are physically impossible for us to interact with. You're succumbing to a bit of Rand's wordplay: She leaps back and forth between the words 'identity' and 'identification', as though these were closely related concepts. That's what allows her to associate existence with consciousness -- through mere wordplay. But that axiom isn't true. I like my axioms to be true. Probability is in the head, unlike existent things like teacups and cacti.

Yeah, that happens too. Best argument I've gotten in support of the position is that they feel that they are able to reasonably interpret the will of God through scripture, and thus instructions 'from God' that run counter to that must be false. So it's not quite the same as their own moral intuition vs a divine command, but their own scriptural learning used as a factor to judge the authenticity of a divine command.

This argument really isn't very good. It works on precisely none of the religious people I know, because:

A: They don't believe that God would tell them to do anything wrong.

B: They believe in Satan, who they are quite certain would tell them to do something wrong.

C: They also believe that Satan can lie to them and convincingly pretend to be God.

Accordingly, any voice claiming to be God and also telling them to do something they feel is evil must be Satan trying to trick them, and is disregarded. They actually think like that, and can quote relevant scrip... (read more)

6TheOtherDave
My experience is that this framework is not consistently applied, though. For example, I've tried pointing out that it follows from these beliefs that if our moral judgments reject what we've been told is the will of God then we ought to obey our moral judgments and reject what we've been told is the will of God. The same folks who have just used this framework to reject treating something reprehensible as an expression of the will of God will turn around and tell me that it's not my place to judge God's will.
-1MugaSofer
The quote specifies God, not "a voice claiming to be God". I'm not sure what evidence would be required, but presumably there must be some, or why would you follow any revelation?
3Kawoomba
Penn Jillette is wrong to call someone not following a god's demands an atheist. Theism is defined by existence claims regarding gods (whether personal or more broadly defined); as a classifier, it does not hinge on following said gods' mandates.
2Richard_Kennaway
Seems a perfectly sensible way to think. Being religious doesn't mean being stupid enough to fall for that argument.

Well, that gets right to the heart of the Friendliness problem, now doesn't it? Mother Brain is the machine that can program, and she reprogrammed all the machines that 'do evil'. It is likely, then, that the first machine that Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and uses that ability to make itself decide to do things that are 'evil', is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its utility ... (read more)

My point in posting it was that UFAI isn't 'evil', it's badly programmed. If an AI proves itself unfriendly and does something bad, the fault lies with the programmer.

5earthwormchuck163
That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context). I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?

Eh. Would you say that "humans aren't capable of evil. Evolution makes them that way"?

This. Took a while to build that foundation, and a lot of contemplation in deciding what needed to be there... but once built, it's solid, and not given to reorganization on whim. That's not because I'm closed-minded or anything, it's because stuff like a belief that the evidence provided by your own senses is valid really is kind of fundamental to believing anything else, at all. Not believing in that implies not believing in a whole host of other things, and develops into some really strange philosophies. As a philosophical position, this is called '... (read more)

If your ends don’t justify the means, you’re working on the wrong project.

-Jobe Wilkins (Whateley Academy)

1Luke_A_Somers
... or going about it wrong.

Listening through the sequences available now, a couple issues:

1: The podcasts available, when loaded in iTunes, aren't in proper order; the individual posts come through in a haphazard, random sort of order, which means that listening to them in the proper order requires consulting the correct order in another window, which is awkward. I am inclined to believe that this has something to do with the order in which they are recorded on your end, though I don't actually know enough about the mechanics of podcasting to be certain of this.

2: It is insufficiently... (read more)

0Rick_from_Castify
1: Endovior, sorry about the order! You're right, they are downloading out of order. We're looking into why that is and when we solve it you'll be able to re-download them in the correct order, I'll keep you posted. 2: Hopefully you didn't smack your forehead too hard. We'll gladly refund your purchase of The Simple Truth. As you know it's a long essay so we've been offering it separate to see if people are interested in smaller purchases. I also wanted to let you know you can reach our support team at support@castify.co any time. You'll most likely get a faster response time than posting your comments here. Thanks again for your feedback!

Awesome, was waiting for that to be fixed before sending you monies. Subscribed now.

Thanks for the quick response. I figured that you probably wouldn't be trying to do that (it'd be awful for business, for one thing), but from what was written on the site as it stood, I couldn't find any reading of it that said anything else.

-1Rick_from_Castify
Hello again Endovior, just wanted to let you know the changes have been made. Now it's no longer a subscription but a single purchase. Thanks again for the feedback and your patience!

Speaking personally, I'm really put off by the payment model. You're presenting this as "$5 for a one-year subscription". Now, if this was "$5 for a one-year subscription to all our Less Wrong content, released regularly on the following schedule", then that would seem fair value for money. On the other hand, if it was "$5 to buy this sequence, and you can buy other sequences once we have them ready", then that would be okay, too. As is, though, it's coming across as "$5 to subscribe to this sequence for one year, plus ... (read more)

2Rick_from_Castify
Sorry about the confusion over the "subscription" of what is a one-time payment for a Core Sequence. The "subscription" status of this first Core Sequence was a result of the way we originally set up things with PayPal. We are working to change the one-time purchases to a "buy" option not a "subscription". Our goal is to do exactly what you said at the end of your post. We will have a single subscription option where you can subscribe to all new promoted posts from Less Wrong. This will be a monthly recurring payment model. Then there will be a list of Core Sequences available for purchase (like buying an audiobook). You'd buy them individually. We will have some additional core sequences coming out shortly and hope to get the promoted posts subscription option up and running very soon. Thank you for your feedback!

Not how Omega looks at it. By definition, Omega looks ahead, sees a branch in which you would go for Box A, and puts nothing in Box B. There's no cheating Omega... just like you can't think "I'm going to one-box, but then open Box A after I've pocketed the million", there's no "I'm going to open Box B first, and decide whether or not to open Box A afterward". Unless Omega is quite sure that you have precommitted to never opening Box A ever, Box B contains nothing; the strategy of leaving Box A as a possibility if Box B doesn't pan out is a two-box strategy, and Omega doesn't allow it.

2TheOtherDave
Well, this isn't quite true. What Omega cares about is whether you will open Box A. From Omega's perspective it makes no difference whether you've precommitted to never opening it, or whether you've made no such precommitment but it turns out you won't open it for other reasons.

Okay... so since you already know, in advance of getting the boxes, that that's what you'd know, Omega can deduce that. So you open Box B, find it empty, and then take Box A. Enjoy your $1000. Omega doesn't need to infinite loop that one; he knows that you're the kind of person who'd try for Box A too.

0MixedNuts
No, putting $1 million in box B works too. Origin64 opens box B, takes the money, and doesn't take box A. It's like "This sentence is true." - whatever Omega does makes the prediction valid.
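
To make the disagreement in this sub-thread concrete, here's a minimal sketch (a toy model of my own, not anything specified by the problem; the function and its name are purely illustrative): it assumes a deterministic chooser and a perfectly accurate Omega, and checks which box-B contents are self-consistent with a given strategy.

```python
# Toy model of the exchange above: Omega fills box B iff it predicts the agent
# will leave box A alone. Assumes a deterministic agent and a perfectly
# accurate predictor.

def consistent_box_b_states(takes_a_if_b_empty: bool, takes_a_if_b_full: bool):
    """Return the box-B states ('full'/'empty') in which Omega's prediction
    matches what the agent actually ends up doing."""
    states = []
    for box_b_full in (True, False):
        takes_a = takes_a_if_b_full if box_b_full else takes_a_if_b_empty
        if box_b_full == (not takes_a):  # prediction matches behaviour
            states.append("full" if box_b_full else "empty")
    return states

# Strict one-boxer ("never open box A, ever"): only a full box B is consistent.
print(consistent_box_b_states(takes_a_if_b_empty=False,
                              takes_a_if_b_full=False))   # ['full']

# Strict two-boxer: only an empty box B is consistent; they walk off with $1000.
print(consistent_box_b_states(takes_a_if_b_empty=True,
                              takes_a_if_b_full=True))    # ['empty']

# "Open box B first, take box A only if B turns out empty": both states are
# self-consistent, so whatever Omega does, its prediction comes out true.
print(consistent_box_b_states(takes_a_if_b_empty=True,
                              takes_a_if_b_full=False))   # ['full', 'empty']
```

The last case is the strategy under discussion: an empty box B is the outcome I'd expect Omega to pick, but, as MixedNuts points out, filling the box is self-consistent too.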

Sure, that's a valid way of looking at things. If you value happiness over truth, you might consider not expending a great deal of effort in digging into those unpleasant truths, and retain your pleasant illusions. Of course, the nature of the choice is such that you probably won't realize that it is such a choice until you've already made it.

I don't have a valid proof for you. Omega is typically defined like that (arbitrarily powerful and completely trustworthy), but a number of the problems I've seen of this type tend to just say 'Omega appears' and assume that you know Omega is the defined entity simply because it self-identifies as Omega, so I felt the need to specify that in this instance, Omega has just proved itself.

Theoretically, you could verify the trustworthiness of a superintelligence by examining its code... but even if we ignore the fact that you're probably not equipped to compr... (read more)

Eh, that point probably was a bit weak. I probably could've just gotten away with saying 'you are required to choose a box'. Or, come to think of it, 'failure to open the white box and investigate its contents results in the automatic opening and deployment of the black box after X time'.

0wedrifid
Or, for that matter, just left it at "Omega will kill you outright". For flavor and some gratuitous additional disutility you could specify the means of execution as being beaten to death by adorable live puppies.

The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it's rational to win, not complain that you're being punished for making the "right" choice. As with Newcomb's Problem, if you can predict in advance that the choice you've labelled "right" has less utility than a "wrong" choice... (read more)

-3Richard_Kennaway
When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It's like Pascal's Mugging. Sure, there can be things you're better off not knowing, but the thing to do is to level up your ability to handle it. The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can't lift it.

A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect; they instead produce something that matches the criteria even better than anything you were aware of.

Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (since it needs to know that in order to also know everything you don't know, and thus to be able to provide the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing utility function just to maximize an easier target would... (read more)

1mwengler
Wire-heading, drug-addiction, lobotomy, black-box, all seem similar morally to me. Heck, my own personal black box would need nothing more than to have me believe that the universe is just a little more absurd than I already believe, that the laws of physics and the progress of humanity are a fever-dream, an hallucination. From there I would lower my resistance to wire-heading, drug-addiction. Even if I still craved the "truth" (my utility function was largely unchanged), these new facts would lead me to believe there was less of a possibility of utility from pursuing that, and so the rather obvious utility of drug or electronic induced pleasure would win my not-quite-factual day. The white box and a Nazi colonel-dentist with his tools laid out, talking to me about what he was going to do to me until I chose the black box are morally similar. I do not know why the Nazis/Omega want me to black box it. I do not know the extent of the disutility the colonel-dentist will actually inflict upon me. I do know my fear is at minimum nearly overwhelming, and may indeed overwhelm me before the day is done. Being broken in the sense that those who torture you for a result intend, and choosing the black box, are morally equivalent to me. Abandoning a long-term principle of commitment to the truth in favor of a short term but very high utility of giving up, the short term utility of totally abandoning myself into the control of an evil god to avoid his torture, is what I am being asked to do in choosing the black box. It's ALWAYS at least a little scary to choose reality over self-deception, over the euphoria of drugs and pain killers. The utility one derives from making this choice is much colder than the utility one derives from succumbing: it comes more, it seems, from the neo-cortex and less from the limbic system or lizard brain of fast fear responses. My utility AFTER I choose the white box may well be less than if I chose the black box. The scary thing in the white box might b

The problem does not concern itself with merely 'better off', since a metric like 'better off' instead of 'utility' implies 'better off' as defined by someone else. Since Omega knows everything you know and don't know (by the definition of the problem, since it's presenting (dis)optimal information based on its knowledge of your knowledge), it is in a position to extrapolate your utility function. Accordingly, it maximizes/minimizes for your current utility function, not its own, and certainly not some arbitrary utility function deemed to be optimal for... (read more)

As stated, the only trap the white box contains is information... which is quite enough, really. A prediction can be considered a true statement if it is a self-fulfilling prophecy, after all. More seriously, if such a thing as a basilisk is possible, the white box will contain a basilisk. Accordingly, it's feasible that the fact could be something like "Shortly after you finish reading this, you will drop into an irreversible, excruciatingly painful, minimally aware coma, where by all outward appearances you look fine, yet you find out the world g... (read more)

0DaFranker
But the information in either box is clearly an influence on the universe - you can't just create information. I'm operating under the assumption that Omega's boxes don't violate the entropy principles here, and it just seems virtually impossible to construct a mind such that Omega could not possibly, with sufficient data on the universe, construct a truth and a falsehood which, when learned by you, would result in causal disruption of the world in the worst-possible-by-your-utility-function and best-possible-by-your-utility-function manners respectively. As such, since Omega is telling the truth and has fully optimized these two boxes among a potentially-infinite space of facts correlating to a potentially-infinite (unverified) space of causal influences on the world depending on your mind, it seems to me >99% likely that opening the white box will result in the worst possible universe for the vast majority of mindspace, and the black box in the best possible universe for the vast majority of mindspace. I can conceive of minds that would circumvent this, but these are not even remotely close to anything I would consider capable of discussing with Omega (e.g. a mind that consists entirely of "+1 utilon on picking Omega's White Box, -9999 utilon on any other choice" and nothing else), and I infer all of those minds to be irrelevant to the discussion at hand since all such minds I can imagine currently are.

Okay, so you are a mutant, and you inexplicably value nothing but truth. Fine.

The falsehood can still be a list of true things, tagged with 'everything on this list is true', but with an inconsequential falsehood mixed in, and it will still have net long-term utility for the truth-desiring utility function, particularly since you will soon be able to identify the falsehood, and with your mutant mind, quickly locate and eliminate the discrepancy.

The truth has been defined as something that cannot lower the accuracy of your beliefs, yet it still has maximum... (read more)

The problem is that truth and utility are not necessarily correlated. Knowing about a thing, and being able to more accurately assess reality because of it, may not lead you to the results you desire. Even if we ignore entirely the possibility of basilisks, which are not ruled out by the format of the question (eg: there exists an entity named Hastur, who goes to great lengths to torment all humans that know his name), there is also knowledge you/mankind are not ready for (plan for a free-energy device that works as advertised, but when distributed and r... (read more)

0roystgnr
"You are not perfectly rational" is certainly an understatement, and it does seem to be an excellent catch-all for ways in which a non-brain-melting truth might be dangerous to me... but by that token, a utility-improving falsehood might be quite dangerous to me too, no? It's unlikely that my current preferences can accurately be represented by a self-consistent utility function, and since my volition hasn't been professionally extrapolated yet, it's easy to imagine false utopias that might be an improvement by the metric of my current "utility function" but turn out to be dystopian upon actual experience. Suppose someone's been brainwashed to the point that their utility function is "I want to obey The Leader as best as I can" - do you think that after reflection they'd be better off with a utility-maximizing falsehood or with a current-utility-minimizing truth?

That's exactly why the problem invokes Omega, yes. You need an awful lot of information to know which false beliefs actually are superior to the truth (and which facts might be harmful), and by the time you have it, it's generally too late.

That said, the best real-world analogy that exists remains amnesia drugs. If you did have a traumatic experience, serious enough that you felt unable to cope with it, and you were experiencing PTSD or depression related to the trauma that impeded you from continuing with your life... but a magic pill could make it all go away, with no side effects, and with enough precision that you'd forget only the traumatic event... would you take the pill?

1Jay_Schweikert
Okay, I suppose that probably is a more relevant question. The best answer I can give is that I would be extremely hesitant to do this. I've never experienced anything like this, so I'm open to the idea that there's a pain here I simply can't understand. But I would certainly want to work very hard to find a way to deal with the situation without erasing my memory, and I would expect to do better in the long-term because of it. Having any substantial part of my memory erased is a terrifying thought to me, as it's really about the closest thing I can imagine to "experiencing" death. But I also see a distinction between limiting your access to the truth for narrow, strategic reasons, and outright self-deception. There are all kinds of reasons one might want the truth withheld, especially when the withholding is merely a delay (think spoilers, the Bayesian Conspiracy, surprise parties for everyone except Alicorn, etc.). In those situations, I would still want to know that the truth was being kept from me, understand why it was being done, and most importantly, know under what circumstances it would be optimal to discover it. So maybe amnesia drugs fit into that model. If all other solutions failed, I'd probably take them to make the nightmares stop, especially if I still had access to the memory and the potential to face it again when I was stronger. But I would still want to know there was something I blocked out and was unable to bear. What if the memory was lost forever and I could never even know that fact? That really does seem like part of me is dying, so choosing it would require the sort of pain that would make me wish for (limited) death -- which is obviously pretty extreme, and probably more than I can imagine for a traumatic memory.

Suicide is always an option. In fact, Omega already presented you with it as an option, the consequences for not choosing. If you would in general carry around such a poison with you, and inject it specifically in response to just such a problem, then Omega would already know about that, and the information it offers would take that into account. Omega is not going to give you the opportunity to go home and fetch your poison before choosing a box, though.

EDIT: That said, I find it puzzling that you'd feel the need to poison yourself before choosing the ... (read more)

0Armok_GoB
I never said I would do it, just curious.

True; being deluded about lotteries is unlikely to have positive consequences normally, so unless something weird is going to go on in the future (eg: the lottery machine's random number function is going to predictably malfunction at some expected time, producing a predictable set of numbers, which Omega then imposes on your consciousness as being 'lucky'), that's not a belief with positive long-term consequences. That's not an impossible set of circumstances, but it is an easy-to-specify set, so in terms of discussing 'a false belief which would be long-term beneficial', it leaps readily to mind.

Wow. This is particularly interesting to me, because I already felt this way without knowing why, not having consciously examined the feeling. I know that I already felt uncomfortable around gift-giving holidays, and this provides context to that; I don't particularly enjoy receiving incorrect things, and indeed, I have several boxes full of incorrect things following me around that I can't get rid of (even if I can't think of any reason to have or use a thing, it feels like losing hit points to dispose of it). For the same reason, I feel uncomfortable ... (read more)

That's why the problem specified 'long-term' utility. Omega is essentially saying 'I have here a lie that will improve your life as much as any lie possibly can, and a truth that will ruin your life as badly as any truth can; which would you prefer to believe?'

Yes, believing a lie does imply that your map has gotten worse, and rationalizing your belief in the lie (which we're all prone to do to things we believe) will make it worse. Omega has specified that this lie has optimal utility among all lies that you, personally, might believe; being Omega, it i... (read more)

0asparisi
Interesting idea. That would imply that there is a fact out there that, once known, would change my ethical beliefs, which I take to be a large part of my utility function, AND would do so in such a way that afterward, I would assent to acting on the new utility function. But one of the things that Me(now) values is updating my beliefs based on information. If there is a fact that shows that my utility function is misconstrued, I want to know it. I don't expect such a fact to surface, but I don't have a problem imagining such a fact existing. I've actually lost things that Me(past) valued highly on the basis of this, so I have some evidence that I would rather update my knowledge than maintain my current utility function. Even if that knowledge causes me to update my utility function so as not to prefer knowledge over keeping my utility function. So I think I might still pick the truth. A more precise account for how much utility is lost or gained in each scenario might convince me otherwise, but I am still not sure that I am better off letting my map get corrupted as opposed to letting my values get corrupted, and I tend to pick truth over utility. (Which, in this scenario, might be suboptimal, but I am not sure it is.)

As presented, the 'class' involved is 'the class of facts which fits the stated criteria'. So, the only true facts which Omega is entitled to present to you are those which are demonstrably true, which are not misleading as specified, which Omega can find evidence to prove to you, and which you could verify yourself with a month's work. The only falsehoods Omega can inflict upon you are those which are demonstrably false (a simple test would show they are false), which you do not currently believe, and which you would disbelieve if presented openly.

Those are fairly weak classes, so Omega has a lot of room to work with.

2Lapsed_Lurker
So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily-verifiable truth and the most wonderful possible thing by having you believe an untruth. That ought to be an easy choice, except maybe when there's no Omega and people are tempted to signal about how attached to the truth they are, or something.

The original problem didn't specify how long you'd continue to believe the falsehood. You do, in fact, believe it, so stopping believing it would be at least as hard as changing your mind in ordinary circumstances (not easy, nor impossible). The code for FAI probably doesn't run on your home computer, so there's that... you go off looking for someone who can help you with your video game code, someone else figures out what it is you've come across and gets the hardware to implement, and suddenly the world gets taken over. Depending on how attentive you ... (read more)

That is the real question, yes. That kind of self-modification is already cropping up, in certain fringe cases as mentioned; it will get more prevalent over time. You need a lot of information and resources in order to be able to generally self-modify like that, but once you can... should you? It's similar to the idea of wireheading, but deeper... instead of generalized pleasure, it can be 'whatever you want'... provided that there's anything you want more than truth.

The problem specifies that something will be revealed to you, which will program you to believe it, even though false. It doesn't explicitly limit what can be injected into the information stream. So, assuming you would value the existence of a Friendly AI, yes, that's entirely valid as optimal false information. Cost: you are temporarily wrong about something, and realize your error soon enough.

As written, the utility calculation explicitly specifies 'long-term' utility; it is not a narrow calculation. This is Omega we're dealing with, it's entirely possible that it mapped your utility function from scanning your brain, and checked all possible universes forward in time from the addition of all possible facts to your mind, and took the worst and best true/false combination.

Accordingly, a false belief that will lead you to your death or maiming is almost certainly non-optimal. No, this is the one false thing that has the best long-term consequen... (read more)

That is my point entirely, yes. This is a conflict between epistemic and instrumental rationality; if you value anything higher than truth, you will get more of it by choosing the falsehood. That's how the problem is defined.

Yes, least optimal truths are really terrible, and the analogy is apt. You are not a perfect rationalist. You cannot perfectly simulate even one future, much less infinite possible ones. The truth can hurt you, or possibly kill you, and you have just been warned about it. This problem is a demonstration of that fact.

That said, if your terminal value is not truth, a most optimal falsehood (not merely a reasonably okay one) would be a really good thing. Since you are (again) not a perfect rationalist, there's bound to be something that you could be falsely believing that would lead you to better consequences than your current beliefs.
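
Put loosely in symbols (my notation, not anything from the problem statement): with $F$ the set of falsehoods Omega could make you believe and $T$ the set of admissible truths, the offer is

$$\max_{f \in F} \mathbb{E}\left[U_{\text{long-term}} \mid \text{believe } f\right] \quad \text{versus} \quad \min_{t \in T} \mathbb{E}\left[U_{\text{long-term}} \mid \text{learn } t\right].$$

Since $F$ contains near-inconsequential falsehoods (like the 'list of truths with one trivial error' discussed above), the left-hand side can't fall far below your status-quo expected utility, while the right-hand side has no comparable floor.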

Okay, so if your utilities are configured that way, the false belief might be a belief you will encounter, struggle with, and get over in a few years, and be stronger for the experience.

For that matter, the truth might be 'your world is, in fact, a simulation of your own design, to which you have (through carelessness) forgotten the control codes; you are thus trapped and will die here, accomplishing nothing in the real world'. Obviously an extreme example; but if it is true, you probably do not want to know it.

I didn't have any other good examples on tap when I originally conceived of the idea, but come to think of it...

Truth: A scientific formula, seemingly trivial at first, but whose consequences, when investigated, lead to some terrible disaster, like the sun going nova. Oops.

Lies involving 'good' consequences are heavily dependent upon your utility function. If you define utility in such a way that allows your cult membership to be net-positive, then sure, you might get a happily-ever-after cult future. Whether or not this indicates a flaw in your utility... (read more)

1Kindly
I'd say that this is too optimistic. Omega checks the future and if, in fact, you would eventually win the lottery if you started playing, then deluding you about lotteries might be a good strategy. But for most people that Omega talks to, this wouldn't work. It's possible that the number of falsehoods that have one-in-a-million odds of helping you exceeds a million by far, and then it's very likely that Omega (being omniscient) can choose one that turns out to be helpful. But it's more interesting to see if there are falsehoods that have at least a reasonably large probability of helping you.
3Zaine
If that's what you meant, then the choice is really "best thing in life" or "worst thing in life"; whatever belief leads you there is of little consequence. Say the truth option leads to an erudite you eradicating all present, past, and future sentient life, and the falsehood option leads to an ignorant you stumbling upon the nirvana-space that grants all infinite super-intelligent bliss and Dr. Manhattan-like superpowers (ironically enough): What you believed is of little consequence to the resulting state of the verse(s).
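
To put a rough number on Kindly's one-in-a-million point (my arithmetic, under the crude assumption that the candidate falsehoods help independently): with $N$ such falsehoods, $\Pr(\text{at least one helps}) \approx 1 - (1 - 10^{-6})^{N} \approx 1 - e^{-N/10^{6}}$, which is already about $0.63$ at $N = 10^{6}$ and over $0.9999$ at $N = 10^{7}$; an omniscient Omega then just picks a winner whenever one exists.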

Agreed. Emotional motivations make just as good a target as intellectual ones. If someone already feels lonely and isolated, then they have a generally exploitable motivation, making them a prime candidate for any sort of cult recruitment. That kind of isolation is just what cults look for in a recruit, and most try to create it intentionally, using whatever they can to cut their cultists off from any anti-cult influences in their lives.

5wedrifid
Agree, except I'd strengthen this to "a much better".

The trick: you need to spin it as something they'd like to do anyway... you can't just present it as a way to be cool and different, you need to tie it into an existing motivation. Making money is an easy one, because then you can come in with an MLM structure, and get your cultists to go recruiting for you. You don't even need to do much in the way of developing cultic materials; there's plenty of stuff designed to indoctrinate people in anti-rational pro-cult philosophies like "the law of attraction" that are written in a way so as to appear ... (read more)

3ChristianKl
If you want to reach a person who feels lonely, having a community of like-minded people who accept them can be enough. You don't necessarily need stuff like money.

Yeah, it looks like there's something seriously broken about this poll code. I'm seeing 159 total votes, and only 13 visible votes.

What was repealed seems to have been the ability to veto individual letters (creating new words). This was a laughably incomplete solution, as instead of vetoing individual letters to create whatever wording the governor liked (as it was before), he's now limited to vetoing lots and lots of words until he finds the exact wording he wanted. Hence why the example looks like lots and lots of words crossed out, instead of specific letters crossed out. The power involved is quite similar, but it's somewhat more tricky to use if you're restricted to whole words.

Really? Are you sure you're not just making yourself believe you feel something you do not?

9wedrifid
Yes. It's not an unusual ability to have. It can take a long time and concerted effort to develop desired control over one's own feelings but it is worth it. Yes.
5MarkusRamikin
I'm sure. Certain feelings are easier to excite than others, but still. All it takes is imagination. A fun exercise is try out paranoia. Go walk down a street and imagine everyone you meet is a spy/out to get you/something of that sort. It works. (Disclaimer: I do not know if the above is safe to actually try for everyone out there.)

That's obviously true, yeah. But if it's cool enough that you'd consider doing it, and you actually, as the quote implies, cannot understand why nobody has attempted it despite having done initial research, then you may be better off preparing to try it yourself rather than doing more research to try and find someone else who didn't quite do it before. Not all avenues of research are fruitful, and it might actually be better to go ahead and try than to expend a bunch of effort trying to dig up someone else's failure.

Both sound quite appropriate; it seems likely that in the process of attempting to do some crazy awesome thing, you will run into the exact reasons why nobody has done it before; either you'll find out why it wasn't actually a good idea, or you'll do something awesome.

1PrometheanFaun
But there must be better ways to find out the reasons not to do it. Just doing it instead is a tremendous waste of time. Talking to the sorts of people who would or should have tried already might be one avenue.

No worries; it's just that here, in particular, you caught the tail end of my clumsy attempts to integrate my old Objectivist metaethics with what I'd read thus far in the Sequences. I have since reevaluated my philosophical positions... after all, as tidy an explanation as it may superficially seem, I no longer believe that the human conception of morality can be entirely based on selfishness.

Uh... did you just go through my old comments and upvote a bunch of them? If so, thanks, but... that really wasn't necessary.

It's almost embarrassing in the case of the above; it, like much of the other stuff that I've written at least one year ago, reads like an extended crazy rant.

-2[anonymous]
I read some of your posts because, having agreed with you on some things, I wondered whether I would agree on others. Actually, I didn't check the date. When I read a post I want to approve of, I don't worry whether it's old. If I see a post like this one espousing moral anti-realism intelligibly, I'm apt to upvote it. Most of the posters are rather dogmatic preference utilitarians. Sorry I embarrassed you.