Comment author: Endovior 13 December 2012 02:15:11AM *  5 points [-]

Listening through the sequences available now, I've hit a couple of issues:

1: The podcasts available, when loaded in iTunes, aren't in the proper order; the individual posts come through in a haphazard, random-seeming order, so listening to them in sequence means consulting the correct order in another window, which is awkward. I'm inclined to believe this has something to do with the order in which they are recorded on your end, though I don't actually know enough about the mechanics of podcasting to be certain (see the sketch after these points).

2: It is insufficiently obvious that The Simple Truth is bundled with Map and Territory. After finishing Mysterious Answers to Mysterious Questions and looking for something else to listen to, I decided to grab The Simple Truth quickly... and, about 45 minutes later, having finished it, I moved on to Map and Territory... and then promptly smacked my forehead on seeing a little disclaimer mentioning that The Simple Truth was included (I bought Map and Territory anyway). It wasn't until I investigated further that I noticed a similar disclaimer on the purchase page for The Simple Truth; I'd missed it the first time around. This should probably be mentioned directly on the channels page.
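On point 1: if the feed is indeed the culprit, here's a minimal sketch of how to check it, in Python with the feedparser library. The feed URL is a placeholder, and I'm assuming (as I believe is typical of iTunes and most podcast clients) that episodes are ordered by each RSS item's pubDate:

    import time

    import feedparser  # pip install feedparser

    FEED_URL = "https://example.com/podcast.rss"  # placeholder, not the real feed

    feed = feedparser.parse(FEED_URL)
    # Assumes every item carries a pubDate.
    episodes = [(time.mktime(e.published_parsed), e.title) for e in feed.entries]

    # List episodes the way a client typically would (newest first).
    for stamp, title in sorted(episodes, reverse=True):
        print(time.strftime("%Y-%m-%d %H:%M", time.localtime(stamp)), title)

    # If this doesn't match post order, re-dating the items on the
    # publisher's side should fix the playback order.
    print("chronological:", episodes == sorted(episodes))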

Comment author: Rick_from_Castify 12 December 2012 02:50:04AM 0 points [-]

Hello again Endovior, just wanted to let you know the changes have been made. Now it's no longer a subscription but a single purchase. Thanks again for the feedback and your patience!

Comment author: Endovior 12 December 2012 06:43:17AM 1 point [-]

Awesome, was waiting for that to be fixed before sending you monies. Subscribed now.

Comment author: Rick_from_Castify 06 December 2012 12:22:25AM 2 points [-]

Sorry for the confusion over what is actually a one-time payment for a Core Sequence being labelled a "subscription". The "subscription" status of this first Core Sequence was a result of the way we originally set things up with PayPal. We are working to change the one-time purchases to a "buy" option, not a "subscription".

Our goal is to do exactly what you said at the end of your post. We will have a single subscription option where you can subscribe to all new promoted posts from Less Wrong. This will be a monthly recurring payment model. Then there will be a list of Core Sequences available for purchase (like buying an audiobook). You'd buy them individually.

We will have some additional core sequences coming out shortly and hope to get the promoted posts subscription option up and running very soon. Thank you for your feedback!

Comment author: Endovior 06 December 2012 12:32:43AM 1 point [-]

Thanks for the quick response. I figured you probably weren't trying to do that (it'd be awful for business, for one thing), but as the site was written, I couldn't find any reading of it that said otherwise.

In response to LessWrong podcasts
Comment author: Endovior 05 December 2012 10:03:33PM 3 points [-]

Speaking personally, I'm really put off by the payment model. You're presenting this as "$5 for a one-year subscription". Now, if this were "$5 for a one-year subscription to all our Less Wrong content, released regularly on the following schedule", that would seem fair value for money. On the other hand, if it were "$5 to buy this sequence, and you can buy other sequences once we have them ready", that would be okay, too. As is, though, it's coming across as "$5 to subscribe to this sequence for one year, plus more money to subscribe to any other sequences we put out when we have them, and if something happens to your files at some point in the future, too bad; we'll charge you another $5 to get them again, despite the fact that we have your account information on file and know you paid us". And that is... not good. It strikes me as overly greedy, and to no real purpose, since you're not locking the files so that they stop playing after the subscription expires (incidentally, also a business model I would not support).

To summarize: I'd willingly 'subscribe' to content that is coming out on a regular basis, or 'buy' content that is complete as is. I will not 'subscribe' to content that is complete as is, since the implication is that my right to the content is temporary and revocable.

Comment author: MixedNuts 03 November 2012 05:27:35PM 0 points [-]

No, putting $1 million in box B works too. Origin64 opens box B, takes the money, and doesn't take box A. It's like "This sentence is true." Whatever Omega does makes the prediction valid.
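A tiny sketch of that fixed point (Python; the encoding is mine, not anything canonical). With the "take A iff B is empty" strategy, either choice Omega makes validates its own prediction:

    # Origin64's strategy: open Box B, take Box A iff B turns out empty.
    strategy = lambda b_filled: not b_filled  # True means "also takes Box A"

    # Filling B encodes the prediction "won't take A"; leaving it empty
    # encodes "will take A". Check whether each prediction validates itself.
    for b_filled in (True, False):
        takes_a = strategy(b_filled)
        self_consistent = (not takes_a) == b_filled
        print(f"B filled: {b_filled}, takes A: {takes_a}, "
              f"self-consistent: {self_consistent}")
    # Both lines print self-consistent: True -- like "This sentence is
    # true", whichever way Omega decides, the prediction comes out valid.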

Comment author: Endovior 04 November 2012 04:34:14AM 0 points [-]

Not how Omega looks at it. By definition, Omega looks ahead, sees a branch in which you would go for Box A, and puts nothing in Box B. There's no cheating Omega: just as you can't think "I'm going to one-box, but then open Box A after I've pocketed the million", there's no "I'm going to open Box B first, and decide whether or not to open Box A afterward". Unless Omega is quite sure that you have precommitted to never opening Box A at all, Box B contains nothing; leaving Box A open as a fallback in case Box B doesn't pan out is a two-box strategy, and Omega doesn't allow it.

Comment author: Origin64 03 November 2012 04:18:08PM -1 points [-]

So what you're saying is that the only reason this problem is a problem is that it hasn't been defined narrowly enough. You don't know what Omega is capable of, so you don't know which choice to make, and there is no way to logically solve the problem (with the goal of maximizing utility) without additional information.

Here's what I'd do: I'd pick up B, open it, and take A iff I found it empty. That way, Omega's decision about what to put in the box would have to depend on what Omega put in the box, causing an infinite regress that eats all CPU cycles until the process is terminated. Although that'll probably just result in the AI picking an easier victim to torment and not even giving me a measly thousand dollars.

Comment author: Endovior 03 November 2012 04:56:43PM 1 point [-]

Okay... so since you already know, in advance of getting the boxes, that that's what you'd do, Omega can deduce it. So you open Box B, find it empty, and then take Box A. Enjoy your $1000. Omega doesn't need to loop forever on that one; he knows you're the kind of person who'd try for Box A too.
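A minimal sketch of the whole game (Python; the payoffs and the exact form of Omega's rule are assumptions read off this thread, not anything official). Under the conservative rule described above, the fallback strategy collapses to $1000 with no infinite regress, because Omega never consults its own output, only the strategy's behavior under each possible content of Box B:

    BOX_A = 1_000            # Box A always holds $1000
    BOX_B_PRIZE = 1_000_000  # what goes in Box B if Omega predicts one-boxing

    def omega_fills_b(strategy):
        # Conservative rule: fill Box B only if the agent would never
        # take Box A, whatever it might find in Box B.
        return not strategy(BOX_B_PRIZE) and not strategy(0)

    def play(strategy):
        b_contents = BOX_B_PRIZE if omega_fills_b(strategy) else 0
        takes_a = strategy(b_contents)  # agent sees B's contents, decides on A
        return b_contents + (BOX_A if takes_a else 0)

    fallback = lambda b: b == 0   # take A iff B turned out empty
    one_boxer = lambda b: False   # never take A, no matter what

    print(play(fallback))   # 1000    -- Omega saw the Box A branch, left B empty
    print(play(one_boxer))  # 1000000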

Comment author: SilasBarta 19 October 2012 12:29:52AM 1 point [-]

This doesn't sound that hypothetical to me: it sounds like the problem of choosing which organizations to join. Rational-leaning organizations will give you true information you don't currently know, while anti-rational organizations will warp your mind to rationalize false things. The former, while not certain to be bad on net, will lead you to unpleasant truths, while people in anti-rational groups are often duped into a kind of happiness.

Comment author: Endovior 19 October 2012 05:20:55AM 0 points [-]

Sure, that's a valid way of looking at things. If you value happiness over truth, you might consider not expending a great deal of effort in digging into those unpleasant truths, and retain your pleasant illusions. Of course, the nature of the choice is such that you probably won't realize that it is such a choice until you've already made it.

Comment author: philh 17 October 2012 11:46:07PM 2 points [-]

Not really relevant, but

Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games

I idly wonder what such a proof would look like. E.g. is it actually possible to prove this to someone without presenting them an algorithm for superintelligence, sufficiently commented that the presentee can recognise it as such? (Perhaps I test it repeatedly until I am satisfied?) Can Omega ever prove its own trustworthiness to me if I don't already trust it? (This feels like a solid Gödelian "no".)

Comment author: Endovior 18 October 2012 03:59:41AM 0 points [-]

I don't have a valid proof for you. Omega is typically defined like that (arbitrarily powerful and completely trustworthy), but a number of the problems I've seen of this type tend to just say 'Omega appears' and assume that you know Omega is the defined entity simply because it self-identifies as Omega, so I felt the need to specify that in this instance, Omega has just proved itself.

Theoretically, you could verify the trustworthiness of a superintelligence by examining its code... but even if we ignore the fact that you're probably not equipped to comprehend the code of a superintelligence (really, you'd probably need another completely trustworthy superintelligence to interpret the code for you, which rather defeats the point), there's still the problem that an untrustworthy superintelligence could provide you with a completely convincing forgery: code designed to perform every action just as the real code would (and so evaluate as 'trustworthy' under simulation)... except the one action on which the untrustworthy superintelligence has chosen to deceive you. Accordingly, I think that even a superintelligence probably can't be sure of the trustworthiness of another superintelligence, regardless of evidence.
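As a toy illustration of the forgery problem (Python; the trigger value, input space, and audit size are all made-up numbers): a forgery that matches the genuine article everywhere except one pre-chosen input survives any feasible black-box audit almost surely.

    import random

    SECRET_TRIGGER = 271_828_182_845  # chosen by the deceiver; hypothetical

    def genuine(x):
        return 2 * x  # stands in for "always behaves trustworthily"

    def forgery(x):
        # Identical everywhere except the single input on which the
        # untrustworthy superintelligence has chosen to deceive you.
        return -1 if x == SECRET_TRIGGER else genuine(x)

    random.seed(0)
    AUDIT_SIZE, INPUT_SPACE = 10**6, 10**12
    audit = (random.randrange(INPUT_SPACE) for _ in range(AUDIT_SIZE))
    print(all(genuine(x) == forgery(x) for x in audit))
    # Almost surely True: the audit hits the trigger with probability
    # about AUDIT_SIZE / INPUT_SPACE = 1e-6.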

Comment author: wedrifid 17 October 2012 07:17:50AM *  0 points [-]

You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch.

Everett branches don't (necessarily) work like that. If 'you' are a person who systematically refuses to play such games, then you just don't, no matter the branch. Sure, the Omega in a different branch may find a human-looking creature also called "Endovior" that plays such games, but if that creature has a fundamentally different decision algorithm, then for the purpose of analyzing your decision algorithm it isn't "you". (There are also branches in which an actual "you" plays the game, but only as a freak anomaly, like the way an actual 'me' in a freakishly small Everett branch walked through a brick wall that one time. Still not exactly something to identify with as 'you' doing it.)

Comment author: Endovior 17 October 2012 11:20:03AM 0 points [-]

Eh, that point probably was a bit weak. I probably could've just gotten away with saying 'you are required to choose a box'. Or, come to think of it, 'failure to open the white box and investigate its contents results in the automatic opening and deployment of the black box after X time'.

Comment author: RichardKennaway 16 October 2012 08:40:52PM -1 points [-]

If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree.

That's the problem. The question is the rationalist equivalent of asking "Suppose God said he wanted you to kidnap children and torture them?" I'm telling Omega to just piss off.

Comment author: Endovior 17 October 2012 07:02:21AM 0 points [-]

The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it's rational to win, not to complain that you're being punished for making the "right" choice. As with Newcomb's Problem, if you can predict in advance that the choice you've labelled "right" has less utility than a "wrong" choice, that implies you have made an error in assessing the relative utilities of the two choices. Sure, Omega's being a jerk. It does that. But that doesn't change the situation: you are being asked to choose between two outcomes of differing utility, and you are trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own "rationality". That implies a flaw in your system of rationality.
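To put numbers on the Newcomb's Problem comparison (Python; the 99% predictor accuracy is an assumed figure for illustration):

    p = 0.99  # assumed probability that Omega predicts your choice correctly

    # One-box: you get $1,000,000 iff Omega correctly predicted one-boxing.
    eu_one_box = p * 1_000_000
    # Two-box: you always get Box A's $1,000, plus $1,000,000 iff Omega
    # wrongly predicted one-boxing.
    eu_two_box = 1_000 + (1 - p) * 1_000_000

    print(eu_one_box)  # ~990000 (up to float rounding)
    print(eu_two_box)  # 11000.0

The break-even accuracy is p = 1,001,000 / 2,000,000, or about 0.5005, so any predictor meaningfully better than a coin flip makes the "right" two-box choice the lower-utility one.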
