If you intend to try again in the current open thread, feel free to transfer the examples.
Trying to clarify my intuitions re. B:
Consider Paul Atreides undergoing the gom jabbar; he will die unless he keeps his hand in the box. Given that he knows this, I count his success as a freely willed action; if (counterfactually) the pain had been sufficient to overcome him, withdrawing his hand would not have been freely willed, because it is counter to his consciously endorsed values (and, in this case, not subtle or confused values).
However, if (also counterfac...
Pretty sure I'm misparsing you somehow, but here are some things I might consider nonfree actions:
A) an action is rewarded with a heroin fix; the actor is in withdrawal
B) an action will relieve extreme and urgent pain
C) an action is demanded by reflex (e.g. withdrawal from heat)
D) an action is demanded by an irresistibly salient emotional appeal that the agent does not reflectively endorse (release the country-slaying neurotoxin, or I shall shoot your child)
Are you asking for a procedure for identifying acts of free will (the doable kind of extensional definition) or a set of in-out exemplars (ostensive definition)?
Confused. What's incoherent about caring equally about copies of myself, and less about everyone else?
I've just finished marathoning the first 1.5 seasons (to the current cliffhanger/hiatus) of Gravity Falls, and strongly recommend it. Supernatural mystery/horror/comedy, significantly darker than Disney usually gets. High levels of continuity; very strong art direction; near-HPMOR levels of foreshadowing/conservation of detail (I advise not reading about it beforehand, as the fandom hivemind similarly predicted the biggest twist). Secret codes, cryptic Reddit AMAs, trolling creators with hand puppets, all the good stuff.
Don't follow. You see "making an actually binding promise" as equivalent to dying?
This seems odd to me, though I'm not saying you're wrong. From the inside, my values seem far more akin to habits or reflexes than to time-indexed memories.
I imagine Obliviated!me still having a NO DON'T reaction when asked to support a purpose opposed to my previous goals, because verbalised goals flow from wordless moral habits, not the other way around. (Assuming a possibly inconsistent scenario where I retain enough language for someone to expect to manipulate me.)
Quite a bit. I have a very bad memory for personal history anyway - I have a vague timeline of significant dates in my head, and a handful of random "vivid" memories, maybe one per year, that have been nailed down by neural happenstance. But if you asked me what I was doing yesterday evening, I think I would end up randomly selecting an evening from the last three or so - unless I painstakingly solved it in the manner of a logic puzzle ("I go to the gym on Wednesdays, and yesterday was Thursday, so I guess I was at the gym").
Rathanel's The Empty Cage (previously recommended on LW) and OmgImPwned's In Fire Forged. Can't remember if the first is finished, the second certainly isn't.
Waves Arisen is in a class by itself as regards sweet sweet ingroup jargon, however :)
Every rational!Naruto fic I encounter keeps topping the preceding ones - I suspect my head will implode if I ever attempt to read the canon story at this point.
The best one yet is The Waves Arisen. Everyone is very sensible, shadow cloning is more broken than ever, and patiently listening to giant slugs pays off in the end.
Disagree; you're prematurely optimising. LW is full of dully worthy explanatory articles using the blueprint you describe; an attempt to communicate technical concepts by redundant array of overlapping metaphors is novel and fun even if it doesn't end up working well.
(Sure, it's summoned a few bizarre commenting entities, but never mind!)
I have high confidence, based on style, that I have read work you have published elsewhere; but on the default assumption that you don't want that context connected to this, I'll say no more.
Splendid. As inexplicably haunting as the rest of your work. Looking forward to more.
I just tried to (using the form at the bottom of the hpmor.com chapter) and it appeared to accept it, but I can't see it showing up on the FF.net reviews page. Is this the wrong way to do it? Is there a significant lag time?
EDIT: Never mind, there it is!
Here is my best attempt at a delaying tactic, after sleeping on it. Please tear it apart, or suggest better ways in which LV might tear it apart, to replace the poor placeholder responses he has here.
--
"Agree that I musst die, if it ssavess world. But thiss iss not besst way to kill me. Ssee how you can benefit more, given your goalss."
"Explain."
"Believe power you know not doess refer to power to desstroy life-eaterss. Life-eaterss will find you eventually, teacher. Know you. Will hunt you down, ssomeday. Eat all of you, all of world and mag...
Really like that one. My first reaction was "and yet the Gatekeeper can still say no and kill you". After all, Voldemort's trying to prevent untold destruction, a prophecy whose exact paths to possible fulfilment are a mystery. Killing a limited number of Dementors is less important.
But my understanding of the AI box experiment is that it was never just about finding an argument that will look persuasive to someone armchair-thinking about it. It's about finding an opening to the psyche, an emotional vulnerability specific to your current target. ...
I saw the thread title and assumed "Maletopia" was a Disney AU fanfic about a perfect society run by rational!Maleficent. Disappointed now.
The mirror tailors good and sound ambitions for people.
"This mirror can help us get our Cutie Marks!!"
sorry
I don't like to frustrate the poor database's telos; it is not at fault for the use humans put its data to.
(Yes, I realise this is silly. It's still an actual weight in the mess I call a morality; just a small one.)
I misremembered; you are correct. Possibly I was instead frustrated with finding a temporary email that it would accept (they block the most common disposables, I think).
Not speaking for above poster: because that's not actually trivial - you need a real fake phone number to receive validation on, etc. Also, putting fake data into a computer system feels disvirtuous enough to put me off doing it further.
Yes, for a copy close enough that he will do everything that I will do and nothing that I won't. In simple resource-gain scenarios like the OP's, I'm selfish relative to my value system, not relative to my locus of consciousness.
Delicious reinforcement! Thank you, friend.
Ah, I see. We may not disagree, then. My angle was simply that "continuing to agree on all decisions" might be quite robust to environmental noise, assuming the decision is felt to be impacted by my values (i.e. not chocolate versus vanilla, which I might settle with a coinflip anyway!)
In the OP's scenario, yes, I cooperate without bothering to reflect. It's clearly, obviously, the thing to do, says my brain.
I don't understand the relevance of the TPD. How can I possibly be in a True Prisoner's Dilemma against myself, when I can't even be in a TPD against a randomly chosen human?
Do you really think your own nature that fragile?
(Please don't read that line in a judgemental tone. I'm simply curious.)
I would automatically cooperate with a me-fork for quite a while if the only "divergence" that took place was on the order of raising a different hand, or seeing the same room from a different angle. It doesn't seem like value divergence would come of that.
I'd probably start getting suspicious in the event that "he" read an emotionally compelling novel or work of moral philosophy I hadn't read.
Assuming we substitute something I actually want to do for hang-gliding...
("Not the most fun way to lose 1/116,000th of my measure, thanks!" say both copies, in stereo)
...and that I don't specifically want to avoid non-shared experiences, which I probably do...
("Why would we want to diverge faster, anyway?" say the copies, raising simultaneous eyebrows at Manfred)
...that's what coinflips are for!
(I take your point about non-transferability, but I claim that B-me would press the button even if it was impossible to share the profits.)
I am confident that, in this experiment, my B-copy would push the button, my A-copy would walk away with 60 candies, and shortly thereafter, if allowed to confer, they would both have 30. And that this would happen with almost no angst.
I'm puzzled as to why you think this is difficult. Are people being primed by fiction where they invariably struggle against their clones to create drama?
You're thinking of this one, and he cited Carrier, and we have this argument after every survey. At this point it's a Tradition, and putting "ARGH LOOK JUST USE CARRIER'S DEFINITION" on the survey itself would just spoil it :)
Ah, yes. I read that page and scrunchyfaced, back when Scott posted the map. (Although I seem to remember reading other things on the same blog that were better thought out, so maybe the author was having an off day.)
I hope that something more rigorous and interesting comes along. The defensible heart of the position, it seems to me, could be something along the lines of "Yes, we must be ready to relinquish our beliefs with the slightest breath of the winds of evidence. But exactly so long as we do believe A, let's really believe it. Let's not deny ourselves the legitimate Fun that can reside in savouring a belief, including any combination of robes and chanting that seems appropriate."
Upvoted for informing me that "straight and narrow" was a malformation. Also, yes.
I want to be friends with the write-in worshiper of CelestAI mentioned :) PM if you like!
Data point: I picked this option, because of a grab-bag of vaguely related positions in my head that make me feel dissatisfied with the flat "atheist" option, including:
I checked the regs; it seems we're all good: http://www.food.gov.uk/business-industry/imports/want_to_import/personalimports
"Providing the food parcel you wish to send is to a private, named individual and contains no meat and meat products, dairy products or any particular restricted products (for example Kava kava, which is not permitted either as a personal import or a commercial import) you may send a reasonable amount for personal consumption."
I'd love to try them, but am in the UK. Happy to cover the additional postage cost!
It's pony time, I'm afraid.
My Little Economy: Economics is Science and its sequelae.
"It's the NGDP Targeting Festival in Ponyville," Twilight said. "I'll have a miserable time trying to explain monetary theory to a bunch of hicks and then come home. What's the worst that could happen?"'
Really good - perhaps the best compromise between the needs of characterisation, parable, and comedy I've ever seen. Seems like it should be accessible to people who haven't seen MLP.
ETA: The author seems to have randomly deleted all hir blog posts, made...
I believe it doesn't work like this; you need the circulatory system in order to perfuse the head, and in doing so you compromise the other organs. This could probably be avoided, but not without more surgical expertise/equipment than today's perfusion teams have, I think.
Smiles, laughter, hugging, the humming or whistling of melodies in a major key, skipping, high-fiving and/or brofisting, loud utterance of "Huzzah" or "Best thing EVER!!!", airborne nanoparticles of cake, streamers, balloons, accordion music? On the assumption that the AI was not explicitly asked to produce these things, of course.
I think the intuitive surface reading of that post (supernatural objects are black boxes; they have state, but are denied to have internal structure that implements the state) at least makes it clear that simulators are not "supernatural" under this definition. Which is the actual query people were blocking on. But evidently many people read the post differently.
Man, I'm late this year. Taken. To save my index finger, just upvoted everyone who took it in November :)
Next time, the "supernatural" question really needs to just link to the Sequence post defining the word.
The first option reads "Moral statements don't express propositions and can neither be true nor false." I'm curious what else you wanted. The second clause without the first?
I was mostly irked that "the position from the Sequences" wasn't an option (although I quite understand why you'd want to avoid parochial signalling), as neither your definition of subjectivist nor substantive realist seemed to capture it adequately. I eventually opted for the latter.
Exactly the same misreading here.
viewer-hostile
Wow, you're still sore over Endless Eight? I thought it one of the finer pieces of trolling ever indulged in by a commercial product. :)
Sora no Woto. The K-On! archetypes are traumatised child soldiers in an uneasy interwar period in bizarro alternate Switzerland, and they have a pet owl. Scenery is amazing.
I would naturally say ikes-hee, but I believe it's supposed to be ay-eye-zy (or maybe ay-eye-kzy)?
I am generally in favour of a long-term ruler AI; though I don't think I'm the one you heard it from before. As you say, though, this is an area where we should have unusually low confidence that we know what we want.
Learning to lucid dream, from everything I've read on the subject, involves progressively defeating whatever mechanism usually provides amnesia on waking. Having too much access to memories of nonexistent events seems an epistemically unsafe thing. I have one or two memories from a lifetime of dreaming, and I cannot distinguish them from life memories by any individual texture or quality; only by the fact that they don't cohere with my other memories. This scared me greatly.
No such accusation intended! In all honesty, my thought process was "Guvf fgbel erpncvghyngrf gur svany gevyrzzn (nf lbh fnl, ybbc/tebj/qvr) bs Pnryrz rfg Pbagreeraf, juvpu vf nyernql xabja gb cbffrff RL-puvyyvat cebcregvrf; lbh pbaqrafr vg irel rssrpgviryl, naq gura lbh unir Pryrfgvn rpub bar bs gur zber ubcrshy Sha Gurbel cbfgf jvgu 'Vg znl jryy or gung n zber pbagebyyrq pyvzo hc gur vagryyvtrapr gerr vf cbffvoyr'; naq gura Gjvyvtug erwrpgf vg." I just read it as very pointed, which clearly was not the intended reading.
I can't dispute your cla...
Haven't had time to read it; but from the story description, it seems to be a comic affair where Twilight decides to monetise her teleportation skillz, and picks the wrong word to advertise with. Hilarity presumably prevails?
These make me sad, but not in an objectionable way. Liked and Follow'd. Good Night seems specifically optimised to chill EY; was that your goal?
I am a bit puzzled by one aspect of Good Night, but that may be because I don't understand the tech level that the characters are operating at. In Twilight's place, it seems that the obvious thing to do would be to znxr n pbcl bs urefrys jvgu gur nccebcevngr oberqbz-erqhpgvba arhebzbqvsvpngvba, naq yrnir vg gb xrrc Pryrfgvn pbzcnal. Vs guvf vf cbffvoyr va gur frggvat, V qba'g frr jul guvf vfa'g n pyrne jva; fvapr Gjv...
Apologies for no response; I vaguely assumed I would get a notification if anyone commented. I think we'll start in the Shakespeare's Head as it's a bit cloudy. There will be a sign up. Otherwise, climb the nerd gradient until you find us; we're usually in the back third past the bar.