All of DSherron's Comments + Replies

After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of:

  • In short form, a "future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience"
  • In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
  • It also mus
... (read more)

No, because "we live in an infinite universe and you can have this chocolate bar" is trivially better. And "We live in an infinite universe and everyone not on earth is in hell" isn't really good news.

You're conflating responsibility/accountability with things that they don't naturally fall out of. And I think you know that last line was clearly B.S. (given that the entire original post was about something which is not identical to accountability - you should have known that the most reasonable answer to that question is "agentiness"). Considering their work higher, or considering them to be smarter, is alleged by the post to not be the entirety of the distinction between the hierarchies; after all, if the only difference was brains or status,... (read more)

Doesn't that follow from the agenty/non-agenty distinction? An agenty actor is understood to make choices where there is no clear right answer. It makes sense that mistakes within that scope would be punished much less severely; it's hard to formally punish someone if you can't point to a strictly superior decision they should have made but didn't. Especially considering that even if you can think of a better way to handle that situation, you still have to not only show that they had enough information at the time to know that that decision would have been... (read more)

passive_fist
That's not the point though; the point is that agenty actors are understood to be doing higher work as reflected in their position in the hierarchy, and thus are expected to be smarter than their lower counterparts, and be able to make better decisions. That expectation should logically carry higher responsibility and accountability with it, otherwise what distinguishes a soldier from an officer?

...Or you could notice that requiring that order be preserved when you add another member is outright assuming that you care about the total and not about the average. You assume the conclusion as one of your premises, making the argument trivial.

DSherron

Better: randomly select a group of users (within some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then after a certain amount of time give another public list of those who did or didn't take it, along with the results (although don't associate results with names). That will get you better participation, and the fact that you have taken a group of known size makes it much easier to give outer bounds on the size of the... (read more)
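
(A minimal sketch of the "outer bounds" idea — my illustration, with a hypothetical function name and hypothetical numbers; the comment itself specifies none of this. The point is that a fixed, publicly known sample size lets you put a standard confidence interval on the response rate, which an open invitation cannot.)

```python
from math import sqrt

def participation_bounds(n_invited, n_responded, z=1.96):
    """Rough 95% normal-approximation bounds on the true response rate.

    With an open invitation the denominator is unknown, so no such bound
    exists; publicly naming a fixed random sample makes it trivial.
    """
    p = n_responded / n_invited
    half_width = z * sqrt(p * (1 - p) / n_invited)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: 15 users named publicly, 9 take the test.
print(participation_bounds(15, 9))  # ≈ (0.35, 0.85)
```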

epursimuove
I would find such a feature to be extraordinarily obnoxious, to the point that I'd be inclined to refuse such a test purely out of anger (and my scores are not at all embarrassing). I can't think of any other examples of a website threatening to publicly shame you for non-compliance.
DSherron170

She responds "I'm sorry, but while I am a highly skilled mathematician, I'm actually from an alternate universe which is identical to yours except that in mine 'subjective probability' is the name of a particularly delicious ice cream flavor. Please precisely define what you mean by 'subjective probability', preferably by describing in detail a payoff structure such that my winnings will be maximized by selecting the correct answer to your query."
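
(A sketch of how such a payoff structure settles the question — my illustration, not part of the original comment. The "correct" credence depends on how the bets are scored:)

```latex
\text{Ticket pays \$1 iff heads, price } q,\ \text{bought at every awakening:}\\
E[\text{profit}] = \tfrac{1}{2}(1-q) + \tfrac{1}{2}(-2q) = \tfrac{1}{2} - \tfrac{3}{2}q
\;\Rightarrow\; q^{*} = \tfrac{1}{3}\\[4pt]
\text{Same ticket, settled once per experiment:}\\
E[\text{profit}] = \tfrac{1}{2}(1-q) + \tfrac{1}{2}(-q)
\;\Rightarrow\; q^{*} = \tfrac{1}{2}
```

Per-awakening scoring makes 1/3 the winnings-maximizing answer; per-experiment scoring makes it 1/2.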

Scott Garrabrant
I agree with that response to the sleeping beauty problem, and the way you set up the payoff structure will probably make this problem equivalent to the St. Petersburg Paradox.

Written before reading comments; the answer was decided within or close to the 2-minute window.

I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I... (read more)

It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats which might be discussed on LW (and thus get the gatekeeper to probably break some immersion) are going to be generally bad ideas. But I could see some sort of (possibly veiled/implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react in some respec... (read more)

His answer isn't random. It's based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or knew different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, and believed that he was a rationalist and at the least not worse at coming to the right answer than you, and you had a differ... (read more)

Does locking doors generally lead to preventing break-ins? I mean, certainly in some cases (cars most notably) it does, but in general, if someone has gone up to your back door with the intent to break in, how likely are they to give up and leave upon finding it locked?

Nitpicking is absolutely critical in any public forum. Maybe in private, with only people who you know well and have very strong reason to believe are very much more likely to misspeak than to misunderstand, nitpicking can be overlooked. Certainly, I don't nitpick every misspoken statement in private. But when those conditions do not hold, when someone is speaking on a subject I am not certain they know well, or when I do not trust that everyone in the audience is going to correctly parse the statement as misspoken and then correctly reinterpret the correc... (read more)

Richard_Kennaway
I disagree. Not all things that are true are either relevant or important. Irrelevancies and trivialities lower discussion quality, however impeccable their truth. There is practically nothing that anyone can say, that one could not find fault with, given sufficient motivation and sufficient disregard for the context that determines what matters and what does not.

In the case at hand, "evidence" sometimes means "any amount whatever, including zero", sometimes "any amount whatever, except zero, including such quantities as 1/3^^^3", and sometimes "an amount worth taking notice of". In practical matters, only the third sense is relevant: if you want to know the colour of crows, you must observe crows, not non-crows, because that is where the value of information is concentrated. The first two are only relevant in a technical, mathematical context.

The point of the Bayesian solution to Hempel's paradox is to stop worrying about it, not to start seeing purple zebras as evidence for black crows that is worth mentioning in any other context than talking about Hempel's paradox.

While you can be definitively wrong, you cannot be definitively right.

Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes' Theorem.

Note: This means that you cannot be definitively wrong, not that you can be definitively right.

DSherron

They should be very, very slightly less visible (they will have slightly fewer resources to use due to expending some on keeping their parent species happy, and FAI is more likely to have a utility function that intentionally keeps itself invisible to intelligent life than UFAI, even though that probability is still very small), but this difference is negligible. Their apparent absence is not significantly more remarkable, in comparison to the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.

Alternatively, if you define solution such that any two given solutions are equally acceptable with respect to the original problem.

WOW. I predicted that I would have a high tolerance for variance, given that I was relatively unfazed by things that I understand most people would be extremely distressed by (failing out of college and getting fired). I was mostly right in that I'm not feeling stress, exactly, but what I did not predict was a literal physical feeling of sickness after losing around $20 to a series of bad plays (and one really bad beat, although I definitely felt less bad about that one after realizing that I really did play the hand correctly). It wasn't even originally m... (read more)

If I can do something fun, from my house, on my own hours, without any long-term commitment, and make as much money as a decent paying job, then that sounds incredible. Even if it turns out I can't play at high levels, I don't mind playing poker for hours a day and making a modest living from it. I don't really need much more than basic rent/food/utilities in any case.

RomeoStevens
I would just warn that variance is way more stressful than most people predict. It is really really hard to keep doing something when you get very strongly negatively reinforced. And things have a way of not being as fun when they are a job.
DSherron

Online poker (but it seems kinda hard)

Actually, does anyone know any good resources for getting up to speed on poker strategies? I'm smart, I'm good at math, I'm good at doing quick statistical math, and I've got a lot of experience at avoiding bias in the context of games. Plus I'm a decent programmer, so I should be able to take even more of an advantage by writing a helper bot to run the math faster and more accurately than I otherwise could. It seems to me that I should be able to do well at online poker, and this would be the sort of thing that I c... (read more)
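
(A toy illustration of the sort of arithmetic such a helper might automate — my sketch, with hypothetical function names and numbers; nothing here comes from the comment itself beyond "run the math":)

```python
def pot_odds(pot, to_call):
    """Fraction of the final pot your call represents (the break-even equity)."""
    return to_call / (pot + to_call)

def should_call(pot, to_call, equity):
    """A call is +EV (ignoring future betting) when hand equity beats the pot odds."""
    return equity > pot_odds(pot, to_call)

# Hypothetical spot: $80 in the pot, $20 to call -> need > 20% equity.
# A flopped flush draw (~35% to hit by the river) clears that bar.
print(pot_odds(80, 20))           # 0.2
print(should_call(80, 20, 0.35))  # True
```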

RomeoStevens
The main reason you don't hear much about it IMO is that the number of hours you wind up putting in makes the pay only commensurate with normal decent-paying jobs. You only exceed that at high levels. This is just based on searching around for info a few years ago and speaking with people who made a living at it. I myself did it part time for pocket money for a while, but the stress got to me.
ChristianKl
http://tynan.com/playpoker-2 is a decent article on learning it.
[anonymous]
I don't, but: http://lesswrong.com/lw/2qp/virtual_employment_open_thread/2oay
DSherron

You see an animal at a distance. It looks like a duck, walks like a duck, and quacks like a duck. You start to get offended by the duck. Then, you get closer and realize the duck was a platypus and not a duck at all. At this point, you realize that you were wrong, in a point of fact, to be offended. You can't claim that anything that looks like a duck, but which later turns out not to be, is offensive. If it later turns out not to be a duck then it was never a duck, and if you haven't been able to tell for sure yet (but will be able to in the future) then ... (read more)

Given no other information to strictly verify, any supposed time-traveled conversation is indistinguishable from someone not having time-traveled at all and making the information up. The true rule must depend on the actual truth of information acquired, and the actual time such information came from. Otherwise, the rule is inconsistent. It also looks at whether your use of time travel actually involves conveying the information you gained; whether such information is actually transferred to the past, not merely whether it could be. Knowing that Amelia Bon... (read more)

stcredzero
Your formulation of "indistinguishable" was already invalidated on reddit.com/r/hpmor by a different objection to my hypothesis. When you lie, you leak information. That information just puts the situation into the 6-hour rule. This cuts off the rest of your reasoning below. It also shows how hard the 6-hour rule is to "fool," which in turn explains why it hasn't been figured out yet.

EDIT: Rewrote one sentence to put the normal 6-hour rule back.

EDIT: Basically, if all of the information Dumbledore can receive from Amelia Bones could logically come from her departing anywhere between time X and time Y, then the metadata available to Dumbledore is effectively that, "Amelia Bones came from anywhere between time X and time Y." I suspect my actual formulation (not your slight misread of it) and yours come out to much the same.

The ideal attitude for humans with our peculiar mental architecture probably is one of "everything is amazing, also let's make it better" just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to... (read more)

Kaj_Sotala
Sure. But "things are pretty awesome" is faster to say than "our current world is more awesome than most of the worlds that have existed in history". That's a valid interpretation of the quote, but not the only one. The way I read it, specifically the way it focused on the drinks and the word "complain", it wasn't so much saying that we should pretend that we've already achieved perfection but rather to keep in mind what's worth feeling upset over and what isn't. In other words, don't waste your time complaining about drinks to anyone who could hear, but instead focus your energies on something that you can actually change and which actually matters.

Sure, I agree with that. But you see, that's not what the quote said. It's actually not even related to what the quote said, except in a very tenuous manner. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.

DSherron

That honestly seems like some kind of fallacy, although I can't name it. I mean, sure, take joy in the merely real, that's a good outlook to have; but it's highly analogous to saying something like "Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life - eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someon... (read more)

NancyLebovitz
I don't think the comparison is to complaining about very bad things happening elsewhere, it's more like "we've got it so much easier than our forebears, why do people still complain about misspellings on the internet? They should be grateful they have an internet." One fallacy is that the person who says that sort of thing fails to realize that complaining about complaining is still complaining.
James_K
Nonetheless it is important to have a firm grasp on the progress we have already attained. It's easy to go from "we haven't made any real progress" to "real progress is impossible". And so we should acknowledge the achievements we have made to date, while always striving to build on them.
Kaj_Sotala
You're right that it would indeed be a mistake to say "things are already great, let's stop here". But then, "things are really awful, so let's get better" doesn't sound quite right either. The attitude I would lean towards, and which I think is compatible with the quote, is "things are already pretty awesome, how could we make them even more awesome?".
dspeyer
I'm not saying we should settle for anything. Certainly not. But to forget the awesomeness that already exists is a mistake with consequences. When looking at the big picture, it's important to realize that our current trajectory is upwards. When planning for something like space travel, it's important to remember that air travel sounded just as crazy a hundred years ago. And when thinking about thinking, it's worth remembering that this same effect will hit whatever awesome thing we think of next.

If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until you get to this point, there is no point in trying to engage in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.

For people having an otherwise ratio... (read more)

Answered "moderate programmer, incorrect". I got the correct final answer but had 2 boxes incorrect. Haven't checked where I went wrong, although I was very surprised that I had erred, since back in grade school I got these things correct with near perfection. I learned programming very easily and have traditionally rapidly outpaced my peers, but I'm only just starting professionally and don't feel like an "experienced" programmer. As for the test, I suspect it will show some distinction but with very many false positives and negatives. There are too ... (read more)

God f*ing damn it. Again? He has 99.9% accuracy, problem resolved. Every decision remains identical unless a change of 1/1000 in your calculations causes a different action, which in Newcomboid problems it never should.

Note to anyone and everyone who encounters any sort of hypothetical with a "perfect" predictor; if you write it, always state an error rate, and if you read it then assume one (but not one higher than whatever error rate would make a TDT agent choose to two-box.)
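
(To make "every decision remains identical" concrete, here is the arithmetic under the standard Newcomb payoffs — $1,000,000 in the opaque box, $1,000 in the transparent one; these are the conventional numbers, not stated above — conditioning on your own action as a one-boxing decision theory does:)

```latex
E[\text{one-box}] = 0.999 \cdot \$1{,}000{,}000 + 0.001 \cdot \$0 = \$999{,}000\\
E[\text{two-box}] = 0.999 \cdot \$1{,}000 + 0.001 \cdot \$1{,}001{,}000 = \$2{,}000
```

The ordering is the same as with a perfect predictor, so the 0.1% error rate changes no decisions.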

Right, I didn't quite work all the math out precisely, but at least the conclusion was correct. This model is, as you say, exclusively for fatal logic errors; the sorts where the law of non-contradiction doesn't hold, or something equally unthinkable, such that everything you thought you knew is invalidated. It does not apply in the case of normal math errors for less obvious conclusions (well, it does, but your expected utility given no errors of this class still has to account for errors of other classes, where you can still make other predictions).

That's not how decision theory works. The bounds on my probabilities don't actually apply quite like that. When I'm making a decision, I can usefully talk about the expected utility of taking the bet, under the assumption that I have not made an error, and then multiply that by the odds of me not making an error, adding the final result to the expected utility of taking the bet given that I have made an error. This will give me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I've made... (read more)

nshepperd
Your calculations aren't quite right. You're treating EU(action) as though it were a probability value (like P(action)). EU(action) would be more logically written E(utility | action), which itself is an integral over utility * P(utility | action) for utility ∈ (-∞, ∞), which, due to linearity of * and integrals, does have all the normal identities, like

E(utility | action) = E(utility | action, e) * P(e | action) + E(utility | action, ¬e) * P(¬e | action).

In this case, if you do expand that out, using p << 1 for the probability of an error, which is independent of your action, and assuming E(utility | action1, error) = E(utility | action2, error), you get

E(utility | action) = E(utility | error) * p + E(utility | action, ¬error) * (1 - p).

Or for the difference between two actions,

EU1 - EU2 = (EU1' - EU2') * (1 - p)

where EU1', EU2' are the expected utilities assuming no errors.

Anyway, this seems like a good model for "there's a superintelligent demon messing with my head" kind of error scenarios, but not so much for the everyday kind of math errors. For example, if I work out in my head that 51 is a prime number, I would accept an even odds bet on "51 is prime". But, if I knew I had made an error in the proof somewhere, it would be a better idea not to take the bet, since less than half of numbers near 50 are prime.

Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.

NickRetallack
I'm going to read the QM sequence now. I have always been confused by descriptions of QM.
DSherron

This comment fails to address the post in any way whatsoever. No claim is made of the "right" thing to do; a hypothetical is offered, and the question asked is "what do you do?" It is not even the case that the hypothetical rests on an idea of an intrinsic "right thing" to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It's not an especially interesting or original question, but it does not make any claims which are relevant to your comment... (read more)

While I don't think this article was brilliant, it seems to be getting downvoted more than is appropriate. Not entirely sure why that is, although a bad choice of example probably helped push it along.

To answer the main question: need more information. I mean, it depends on the degree to which the negative effects happen, and the degree to which it seems this new belief will be likely to have major positive impacts on decision-making in various situations. I would, assuming I'm competent and motivated enough, create a secret society whi... (read more)

Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form though, that it is impossible - I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.

As to what probability y... (read more)

OccamsTaser
Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited, I may be misusing the term "money-pump"). Let's say you assign a 1/10^100 probability that your mind has a critical logic error of some sort, causing you to bound probabilities to the range of (1/10^100, 1-1/10^100) (should be brackets but formatting won't allow it). You can now be Pascal's mugged if the payoff offered is greater than the amount asked for by a factor of at least 10^100. If you claim the probability is less than 1/10^100 due to a leverage penalty or any other reason, you are admitting that your brain is capable of being more certain than the aforementioned number (and such a scenario can be set up for any such number).
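
(Displaying the parent comment's arithmetic, same numbers: with probabilities bounded below at 10^-100, paying a for a promised payoff X looks positive-EV whenever the ratio exceeds 10^100.)

```latex
p_{\min} = 10^{-100}:\qquad
E[\text{pay}] \;\ge\; p_{\min}\,X - a,
\qquad p_{\min}\,X - a > 0
\;\Longleftrightarrow\; \frac{X}{a} > 10^{100}
```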

Sure, it sounds pretty reasonable. I mean, it's an elementary facet of logic, and there's no way it's wrong. But, are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering... (read more)

OccamsTaser
So by that logic I should assign a nonzero probability to ¬(A→A). And if something has nonzero probability, you should bet on it if the payout is sufficiently high. Would you bet any amount of money or utilons at any odds on this proposition? If not, then I don't believe you truly believe 100% certainty is impossible. Also, 100% certainty can't be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?

"Exist" is meaningful in the sense that "true" is meaningful, as described in EY's The Simple Truth. I'm not really sure why anyone cares about saying something with probability 1 though; no matter how carefully you think about it, there's always the chance that in a few seconds you'll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

OccamsTaser
I must raise an objection to that last point, there are 1 or more domain(s) on which this does not hold. For instance, my belief that A→A is easily 100%, and there is no way for this to be a mistake. If you don't believe me, substitute A="2+2=4". Similarly, I can never be mistaken in saying "something exists" because for me to be mistaken about it, I'd have to exist.
DSherron

I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about "moral" situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a "moral" preference (excluding whatever membership defines the category)? Is there actually a cluster in thing space around moral preferences, which... (read more)

buybuydandavis
I don't think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern matching algorithms. Conceptual confusion isn't what makes my preferences for ice cream different from my moral preferences. Is there a behavioral cluster about "moral"? Sure. How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don't eat it? For their tolerance of a preference in ice cream in others? Not many that I see. So yeah, it's really different. And matter is matter, whether alive or dead, whether your shoe or your mom.
DSherron

Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. "Should" is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of "should" at least. Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive. If they want to do it then they don't care what they "should" or "shouldn't" do ... (read more)

buybuydandavis
But that's not what they mean, or at least not all that they mean. Look, I'm a fan of Stirner and a moral subjectivist, so you don't have to explain the nonsense people have in their heads with regard to morality to me. I'm on board with Stirner, in considering the world populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space. But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It's handy to have a label to separate those out, and "moral" is the accurate one, regardless of the other nonsense people have in their heads about morality.
DSherron

I'm not a physicist, and I couldn't give a technical explanation of why that won't work (although I feel like I can grasp an intuitive idea based on how the Uncertainty Principle works to begin with). However, remember the Litany of a Bright Dilettante. You're not going to spot a trivial means of bypassing a fundamental theory in a field like physics after thinking for five minutes on a blog.

Incidentally, the Uncertainty Principle doesn't talk about the precision of our possible measurements, per se, but about the actual amplitude distribution for the obse... (read more)

[This comment is no longer endorsed by its author]
NickRetallack
I didn't come up with it. It's called the EPR Paradox.
DSherron

The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn't protect you from being wrong; you can talk all day about "is it ethical to steal this cookie" but you are wasting your time. Either you're actually referring to specific concepts that have names (will other people perceive of this as ethically justified?) or you're babbling nonsense. Just use basic consequentialist reasoning and skip the who... (read more)

DSherron

This is because people are bad at making decisions, and have not gotten rid of the harmful concept of "should". The original comment on this topic was claiming that "should" is a bad concept; instead of thinking "I should x" or "I shouldn't do x", on top of considering "I want to/don't want to x", just look at want/do not want. "I should x" doesn't help you resolve "do I want to x", and the second question is the only one that counts.

I think that your idea about morality is simply expres... (read more)

DSherron

You're sneaking in connotations. "Morality" has a much stronger connotation than "things that other people think are bad for me to do." You can't simply define the word to mean something convenient, because the connotations won't go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say th... (read more)

asr
My original point was just that "subjective versus objective" is a false dichotomy in this context. I don't want to have a big long discussion about meta-ethics, but, descriptively, many people do talk in a conventionalist way about morality or components of morality, and thinking of it as a social construction is handy in navigating the world.

Turning now to the substance of whether moral or judgement words ("should", "ought", "honest", etc) are bad concepts -- At work, we routinely have conversations about "is it ethical/honest to do X", or "what's the most ethical way to deal with circumstance Y". And we do not mean "what is our private preference about outcomes or rules" -- we mean something imprecise but more like "what would our peers think of us if they knew" or "what do we think our peers ought to think of us if they knew". We aren't being very precise about how much is objective, subjective, and socially constructed, but I don't see that we would gain from trying to speak with more precision than our thoughts actually have.

Yes, these terms are fuzzy and self-referential. Natural language often is. Yes, using 'ethical' instead of other terms smuggles in a lot of connotation. That's the point! Vagueness with some emotional shading and implication is very useful linguistically and I think cognitively. The original topic was "harmful" concepts, I believe, and I don't think all vagueness is harmful. Often the imprecision is irrelevant to the actual communication or reasoning taking place.
DSherron

"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It's a distinctly human invention, and it's meaning shifts as the user desires. Moral obligations are great for social interactions, but they don't reflect anything deeper than an extension of tribal politics. Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure.

Just becau... (read more)

buybuydandavis
Subjectivity is part of the territory.
asr
These aren't the only two possibilities. Lots of important aspects of the world are socially constructed. There's no objective truth about the owner of a given plot of land, but it's not purely subjective either -- and if you don't believe me, try explaining it to the judge if you are arrested for trespassing. Social norms about morality are constructed socially, and are not simply the preferences or feelings of any particular individual. It's perfectly coherent for somebody to say "society believes X is immoral but I don't personally think it's wrong". I think it's even coherent for somebody to say "X is immoral but I intend to do it anyway."
ArisKatsaris
I really think this is a bad summarization of how moral injunctions act. People often feel a conflict for example between "I should X" and "I would prefer to not-X". If a parent has to choose between saving their own child, and a thousand other children, they may very well prefer to save their own child, but recognize that morality dictated they should have saved the thousand other children. My own guess about the connection between morality and preferences is that morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it. (E.g. the parent recognizes that if their own child wasn't involved, if they were just hearing about the situation without personal stakes in it, they would prefer that a thousand children be saved rather than only one.) If my guess is correct it would also explain why there's disagreement about whether morality is objective or subjective (morality is a personal preference, but it's also an attempt to remove personal biases - it's by itself an attempt to move from subjective preferences to objective preferences).

Hell, it's definitely worth us thinking about it for at least half a second. Probably a lot more than that. It could have huge implications if we discovered that there was evidence of any kind of powerful agent affecting the world, Matrix-esque or not. Maybe we could get into heaven by praying to it, or maybe it would reward us based on the number of paperclips we created per day. Maybe it wouldn't care about us, maybe it would actively want to cause us pain. Maybe we could use it, maybe it poses an existential risk. All sorts of possible scenarios there, ... (read more)

DSherron

Endless, negligible, and not at all. Reference every atheism argument ever.

loup-vaillant
I disagree with "not at all", to the extent that the Matrix has probably much less computing power than the universe it runs on. Plus, it could have exploitable bugs. This is not a question worth asking for us mere mortals, but a wannabe super-intelligence should probably think about it for at least a nanosecond.

I don't think it's particularly meaningful to use "free will" for that instead of "difficult to predict." I mean, you don't say that weather has free will, even though you can't model it accurately. Applying the label only to humans seems a lot like trying to sneak in a connotation that wasn't part of the technical definition. I think that your concept captures some of the real-world uses of the term "free will" but that it doesn't capture enough of the usage to help deal with the confusion around it. In particular, your defin... (read more)

CronoDAS
I don't mean to imply that being difficult to predict is a sufficient condition for having free will... I'm kind of confused about this myself.

Taboo "free will" and then defend that the simplest answer is that we have it. X being true is weakly correlated with us believing X, where belief in X is an intuition rather than a conclusion from strong evidence.

It's explicitly opposed to my response here. I feel like if I couldn't predict my own actions with certainty then I wouldn't have free will (more that I wouldn't have a will than that it wouldn't be free, although I tend to think that the "free" component of free will is nonsense in any case). Incidentally, how do you imagine free will working, even just in some arbitrary logically possible world? It sounds a lot like you want to posit a magical decision making component of your brain that is not fully determined by the prior state of the univers... (read more)

CronoDAS
I sort of think of "agent with free will" as a model for "that complicated thing that actually does determine someone's actions, which I don't have the data and/or computational capacity to simulate perfectly." Predicting human behavior is like predicting weather, turbulent fluid flow, or any other chaotic system: you can sort of do it, but you'll start running into problems as you aim for higher and higher precision and accuracy. Does that make any sense? (I'm not sure it does.)

I suspect that a quick summary of people's viewpoints on free will itself would help in interpreting at least some answers. In my case, I believe that we don't have "free will" in the naive sense that our intuitions tend to imply (the concept is incoherent). However, I do believe that we feel like we have free will for specific reasons, such that I can identify some situations that would make me feel as though I didn't have it. So, not actually having free will doesn't constrain experience, but feeling like I don't does.

Epistemically:

If I discove... (read more)

DSherron

It is tautological, but it's something you're ignoring in both this post and the linked reply. If you care about saving children as a part of a complex preference structure, then saving children, all other things being equal, fulfills your preferences more than not saving those children does. Thus, you want to do it. I'm trying to avoid saying you should do it, because I think you'll read that in the traditional moral framework sense of "you must do this or you are a bad person" or something like that. In reality, there is no such thing as "... (read more)

[anonymous]
So you said that if you want to save children, you should do it (where 'should' shouldn't be heard as a moral imperative or anything like that). Suppose I do want to save children, and therefore (non-morally) should save them, but I don't. What do you call me or my behavior?
Said Achmiz
Yeah, agree with almost everything you say in the first two paragraphs. Your overall points, as I read them, are not new to me; mostly I was confused by what seemed to me a strange formulation. What I thought you were saying and what I am now pretty sure you are saying are the same thing.

Some quibbles: Well, no comment on who's important and who's not, but I definitely read some posters/commenters here as saying that people who save children are good people, etc. That's not to say I am necessarily bothered by this. It seems mistaken to say that I (or anyone) care about money as such. Money buys things. It's more like: I care about some things that money can buy (books, say? luxury food products?) more than I care about other things that money can buy (the lives of children in Africa, say). In any case, I try not to base my decisions on a self-image; that seems backwards.

P.S. I have to note that your comments don't seem to address what I said in the comment I linked (but maybe you did not intend to do so). That comment does speak directly to what my preferences in fact are, and what actions of mine I think would lead to their satisfaction.

Right, that's the thought that motivated the "probably" at the end. Although it feels pretty strongly like motivated cognition to actually propose such an argument.

You're not "on the hook" or anything of the sort. You're not morally obligated to save the kids, any more than you're morally obligated to care about people you care about, or buy lunch from the place you like that's also cheaper than the other option which you don't like. But, if you do happen to care about saving children, then you should want to do it. If you don't, that's fine; it's a conditional for a reason. Consequentialism wins the day; take the action that leads most to the world you, personally, want to see. If you really do value the k... (read more)

A1987dM
Possibly vaguely relevant
Said Achmiz
This sounds tautological. I would be reasonably sure I knew what you were saying if not for that line, which confuses me. I make a relevant rule-consequentialist argument here.
TheOtherDave
Well, unless what you happen to value is discharging your obligations, in which case the whole consequentialist/deontologist divide fades away altogether.