No, because "we live in an infinite universe and you can have this chocolate bar" is trivially better. And "We live in an infinite universe and everyone not on earth is in hell" isn't really good news.
You're conflating responsibility/accountability with things that don't naturally fall out of them. And I think you know that last line was clearly B.S. (given that the entire original post was about something which is not identical to accountability, you should have known that the most reasonable answer to that question is "agentiness"). Considering their work more important, or considering them to be smarter, is alleged by the post not to be the entirety of the distinction between the hierarchies; after all, if the only difference were brains or status,...
Doesn't that follow from the agenty/non-agenty distinction? An agenty actor is understood to make choices where there is no clear right answer. It makes sense that mistakes within that scope would be punished much less severely; it's hard to formally punish someone if you can't point to a strictly superior decision they should have made but didn't. Especially considering that even if you can think of a better way to handle that situation, you still have to not only show that they had enough information at the time to know that that decision would have been...
...Or you could notice that requiring that order be preserved when you add another member is outright assuming that you care about the total and not about the average. You assume the conclusion as one of your premises, making the argument trivial.
Better: randomly select a group of users (within some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then after a certain amount of time give another public list of those who did or didn't take it, along with the results (although don't associate results with names). That will get you better participation, and the fact that you have taken a group of known size makes it much easier to give outer bounds on the size of the...
She responds "I'm sorry, but while I am a highly skilled mathematician, I'm actually from an alternate universe which is identical to yours except that in mine 'subjective probability' is the name of a particularly delicious ice cream flavor. Please precisely define what you mean by 'subjective probability', preferably by describing in detail a payoff structure such that my winnings will be maximized by selecting the correct answer to your query."
Written before reading comments; the answer was decided within or close to the two-minute window.
I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I...
It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats which might be discussed on LW (and thus get the gatekeeper to probably break some immersion) are going to be generally bad ideas. But I could see some sort of (possibly veiled/implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react in some respec...
His answer isn't random. It's based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or knew different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, and believed that he was a rationalist and at the least not worse at coming to the right answer than you, and you had a differ...
Does locking doors generally lead to preventing break-ins? I mean, certainly in some cases (cars most notably) it does, but in general, if someone has gone up to your back door with the intent to break in, how likely are they to give up and leave upon finding it locked?
Nitpicking is absolutely critical in any public forum. Maybe in private, with only people who you know well and have very strong reason to believe are very much more likely to misspeak than to misunderstand, nitpicking can be overlooked. Certainly, I don't nitpick every misspoken statement in private. But when those conditions do not hold, when someone is speaking on a subject I am not certain they know well, or when I do not trust that everyone in the audience is going to correctly parse the statement as misspoken and then correctly reinterpret the correc...
While you can be definitively wrong, you cannot be definitively right.
Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes' Theorem.
Note: This means that you cannot be definitively wrong, not that you can be definitively right.
They should be very, very slightly less visible (they will have slightly fewer resources to use due to expending some on keeping their parent species happy, and FAI is more likely to have a utility function that intentionally keeps itself invisible to intelligent life than UFAI, even though that probability is still very small), but this difference is negligible. Their apparent absence is not significantly more remarkable, in comparison to the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.
Alternatively, if you define "solution" such that any two given solutions are equally acceptable with respect to the original problem.
WOW. I predicted that I would have a high tolerance for variance, given that I was relatively unfazed by things that I understand most people would be extremely distressed by (failing out of college and getting fired). I was mostly right in that I'm not feeling stress, exactly, but what I did not predict was a literal physical feeling of sickness after losing around $20 to a series of bad plays (and one really bad beat, although I definitely felt less bad about that one after realizing that I really did play the hand correctly). It wasn't even originally m...
If I can do something fun, from my house, on my own hours, without any long-term commitment, and make as much money as a decent paying job, then that sounds incredible. Even if it turns out I can't play at high levels, I don't mind playing poker for hours a day and making a modest living from it. I don't really need much more than basic rent/food/utilities in any case.
Online poker (but it seems kinda hard)
Actually, does anyone know any good resources for getting up to speed on poker strategies? I'm smart, I'm good at math, I'm good at doing quick statistical math, and I've got a lot of experience at avoiding bias in the context of games. Plus I'm a decent programmer, so I should be able to take even more of an advantage by writing a helper bot to run the math faster and more accurately than I otherwise could. It seems to me that I should be able to do well at online poker, and this would be the sort of thing that I c...
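As a proof of concept for the helper-bot idea, here is a minimal sketch of the core calculation such a bot would run: Monte Carlo hand-equity estimation for Texas hold'em. This is my own illustration, not anything from an actual bot; the names (`equity`, `best7`, `hand_rank`) are made up for the example, and a real tool would want a much faster evaluator.

```python
import random
from itertools import combinations
from collections import Counter

RANKS = "23456789TJQKA"  # index = rank strength

def hand_rank(cards):
    """Score a 5-card hand as a tuple; higher tuples beat lower ones."""
    ranks = sorted((RANKS.index(r) for r, s in cards), reverse=True)
    suits = [s for r, s in cards]
    groups = sorted(Counter(ranks).items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    ordered = [r for r, c in groups]  # ranks ordered by count, then strength
    flush = len(set(suits)) == 1
    unique = sorted(set(ranks), reverse=True)
    straight = len(unique) == 5 and unique[0] - unique[4] == 4
    if unique == [12, 3, 2, 1, 0]:  # wheel: A-2-3-4-5 plays as five-high
        straight, unique = True, [3, 2, 1, 0, -1]
    if straight and flush: return (8, [unique[0]])
    if groups[0][1] == 4: return (7, ordered)
    if groups[0][1] == 3 and groups[1][1] == 2: return (6, ordered)
    if flush: return (5, ranks)
    if straight: return (4, [unique[0]])
    if groups[0][1] == 3: return (3, ordered)
    if groups[0][1] == 2 and groups[1][1] == 2: return (2, ordered)
    if groups[0][1] == 2: return (1, ordered)
    return (0, ranks)

def best7(cards):
    """Best 5-card rank achievable from 7 cards."""
    return max(hand_rank(combo) for combo in combinations(cards, 5))

def equity(hole, board=(), opponents=1, trials=5000):
    """Estimate win probability by dealing random completions of the hand."""
    deck = [(r, s) for r in RANKS for s in "cdhs"]
    for card in list(hole) + list(board):
        deck.remove(card)
    wins = ties = 0.0
    for _ in range(trials):
        draw = random.sample(deck, 2 * opponents + (5 - len(board)))
        opp_holes = [draw[2 * i:2 * i + 2] for i in range(opponents)]
        full_board = list(board) + draw[2 * opponents:]
        mine = best7(list(hole) + full_board)
        best_opp = max(best7(h + full_board) for h in opp_holes)
        if mine > best_opp:
            wins += 1
        elif mine == best_opp:
            ties += 1
    return (wins + ties / 2) / trials

# Pocket aces heads-up preflop: the estimate should land near 0.85.
print(equity([("A", "c"), ("A", "d")]))
```

Even this crude version makes the statistical core of the game explicit: a bot's decisions reduce to comparing an equity estimate like this against the pot odds of a call.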
You see an animal at a distance. It looks like a duck, walks like a duck, and quacks like a duck. You start to get offended by the duck. Then, you get closer and realize the duck was a platypus and not a duck at all. At this point, you realize that you were wrong, in point of fact, to be offended. You can't claim that anything that looks like a duck, but which later turns out not to be, is offensive. If it later turns out not to be a duck then it was never a duck, and if you haven't been able to tell for sure yet (but will be able to in the future) then ...
Given no other information against which to verify it, any supposed time-traveled conversation is indistinguishable from someone not having time-traveled at all and making the information up. The true rule must depend on the actual truth of the information acquired, and on the actual time that information came from; otherwise, the rule is inconsistent. It must also look at whether your use of time travel actually involves conveying the information you gained; whether such information is actually transferred to the past, not merely whether it could be. Knowing that Amelia Bon...
The ideal attitude for humans with our peculiar mental architecture probably is one of "everything is amazing, also let's make it better", just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to...
Sure, I agree with that. But you see, that's not what the quote said. It's actually not even related to what the quote said, except in a very tenuous manner. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.
That honestly seems like some kind of fallacy, although I can't name it. I mean, sure, take joy in the merely real, that's a good outlook to have; but it's highly analogous to saying something like "Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life - eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someon...
If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until a person gets to this point, there is no point in trying to engage them in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.
For people having an otherwise ratio...
Answered "moderate programmer, incorrect". I got the correct final answer but had 2 boxes incorrect. Haven't checked where I went wrong, although I was very surprised I had as back in grade school I got these things correct with near perfection. I learned programming very easily and have traditionally rapidly outpaced my peers, but I'm only just starting professionally and don't feel like an "experienced" programmer. As for the test, I suspect it will show some distinction but with very many false positives and negatives. There are too ...
God f*ing damn it. Again? He has 99.9% accuracy; problem resolved. Every decision remains identical unless a 1/1000 change in your calculations causes a different action, which in Newcomboid problems it never should.
Note to anyone and everyone who encounters any sort of hypothetical with a "perfect" predictor: if you write it, always state an error rate, and if you read it, assume one (but not one higher than whatever error rate would make a TDT agent choose to two-box).
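To make that ceiling concrete, here is the arithmetic under the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; the payoff numbers are my assumption, since they aren't stated above). With predictor accuracy $p$:

$$EU(\text{one-box}) = p \cdot 1{,}000{,}000 \qquad EU(\text{two-box}) = 1{,}000 + (1-p) \cdot 1{,}000{,}000$$

One-boxing has the higher expected payoff exactly when $p \cdot 1{,}000{,}000 > 1{,}000 + (1-p) \cdot 1{,}000{,}000$, i.e. when $p > 0.5005$. A 99.9% predictor is nowhere near the point where the decision flips.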
Right, I didn't quite work all the math out precisely, but at least the conclusion was correct. This model is, as you say, exclusively for fatal logic errors; the sorts where the law of non-contradiction doesn't hold, or something equally unthinkable, such that everything you thought you knew is invalidated. It does not apply in the case of normal math errors for less obvious conclusions (well, it does, but your expected utility given no errors of this class still has to account for errors of other classes, where you can still make other predictions).
That's not how decision theory works. The bounds on my probabilities don't actually apply quite like that. When I'm making a decision, I can usefully talk about the expected utility of taking the bet, under the assumption that I have not made an error, and then multiply that by the odds of me not making an error, adding the final result to the expected utility of taking the bet given that I have made an error. This will give me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I've made...
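Spelled out in symbols (this is just the law of total expectation applied to $E$, the event that I've made an error of this class, restating the sentence above):

$$EU(\text{bet}) = P(\neg E) \cdot EU(\text{bet} \mid \neg E) + P(E) \cdot EU(\text{bet} \mid E)$$

Nothing in this decomposition lets the tiny $P(E)$ term dominate unless $EU(\text{bet} \mid E)$ is itself made astronomically large.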
Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.
This comment fails to address the post in any way whatsoever. No claim is made of the "right" thing to do; a hypothetical is offered, and the question asked is "what do you do?" It is not even the case that the hypothetical rests on an idea of an intrinsic "right thing" to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It's not an especially interesting or original question, but it does not make any claims which are relevant to your comment...
While I don't think this article was brilliant, it seems to be getting downvoted in excess of what's appropriate. I'm not entirely sure why that is, although a bad choice of example probably helped push it along.
To answer the main question: need more information. I mean, it depends on the degree to which the negative effects happen, and the degree to which it seems this new belief will be likely to have major positive impacts on decision-making in various situations. I would, assuming I'm competent and motivated enough, create a secret society whi...
Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form though, that it is impossible - I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.
As to what probability y...
Sure, it sounds pretty reasonable. I mean, it's an elementary facet of logic, and there's no way it's wrong. But, are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering...
"Exist" is meaningful in the sense that "true" is meaningful, as described in EY's The Simple Truth. I'm not really sure why anyone cares about saying something with probability 1 though; no matter how carefully you think about it, there's always the chance that in a few seconds you'll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.
I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about "moral" situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a "moral" preference (excluding whatever membership defines the category)? Is there actually a cluster in thing space around moral preferences, which...
Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. "Should" is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of "should" at least. Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive. If they want to do it then they don't care what they "should" or "shouldn't" do ...
I'm not a physicist, and I couldn't give a technical explanation of why that won't work (although I feel like I can grasp an intuitive idea based on how the Uncertainty Principle works to begin with). However, remember the Litany of a Bright Dilettante. You're not going to spot a trivial means of bypassing a fundamental theory in a field like physics after thinking for five minutes on a blog.
Incidentally, the Uncertainty Principle doesn't talk about the precision of our possible measurements, per se, but about the actual amplitude distribution for the obse...
The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn't protect you from being wrong; you can talk all day about "is it ethical to steal this cookie" but you are wasting your time. Either you're actually referring to specific concepts that have names (will other people perceive this as ethically justified?) or you're babbling nonsense. Just use basic consequentialist reasoning and skip the who...
This is because people are bad at making decisions, and have not gotten rid of the harmful concept of "should". The original comment on this topic was claiming that "should" is a bad concept; instead of thinking "I should do x" or "I shouldn't do x" on top of considering "I want to/don't want to do x", just look at want/do-not-want. "I should do x" doesn't help you resolve "do I want to do x", and the second question is the only one that counts.
I think that your idea about morality is simply expres...
You're sneaking in connotations. "Morality" has a much stronger connotation than "things that other people think are bad for me to do." You can't simply define the word to mean something convenient, because the connotations won't go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say th...
"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It's a distinctly human invention, and it's meaning shifts as the user desires. Moral obligations are great for social interactions, but they don't reflect anything deeper than an extension of tribal politics. Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure.
Just becau...
Hell, it's definitely worth us thinking about it for at least half a second. Probably a lot more than that. It could have huge implications if we discovered that there was evidence of any kind of powerful agent affecting the world, Matrix-esque or not. Maybe we could get into heaven by praying to it, or maybe it would reward us based on the number of paperclips we created per day. Maybe it wouldn't care about us, maybe it would actively want to cause us pain. Maybe we could use it, maybe it poses an existential risk. All sorts of possible scenarios there, ...
Endless, negligible, and not at all. Reference every atheism argument ever.
I don't think it's particularly meaningful to use "free will" for that instead of "difficult to predict." I mean, you don't say that weather has free will, even though you can't model it accurately. Applying the label only to humans seems a lot like trying to sneak in a connotation that wasn't part of the technical definition. I think that your concept captures some of the real-world uses of the term "free will" but that it doesn't capture enough of the usage to help deal with the confusion around it. In particular, your defin...
Taboo "free will" and then defend that the simplest answer is that we have it. X being true is weakly correlated to us believing X, where belief in X is an intuition rather than a conclusion from strong evidence.
It's explicitly opposed to my response here. I feel like if I couldn't predict my own actions with certainty then I wouldn't have free will (more that I wouldn't have a will than that it wouldn't be free, although I tend to think that the "free" component of free will is nonsense in any case). Incidentally, how do you imagine free will working, even just in some arbitrary logically possible world? It sounds a lot like you want to posit a magical decision making component of your brain that is not fully determined by the prior state of the univers...
I suspect that a quick summary of people's viewpoints on free will itself would help in interpreting at least some answers. In my case, I believe that we don't have "free will" in the naive sense that our intuitions tend to imply (the concept is incoherent). However, I do believe that we feel like we have free will for specific reasons, such that I can identify some situations that would make me feel as though I didn't have it. So, not actually having free will doesn't constrain experience, but feeling like I don't does.
Epistemically:
If I discove...
It is tautological, but it's something you're ignoring in both this post and the linked reply. If you care about saving children as a part of a complex preference structure, then saving children, all other things being equal, fulfills your preferences more than not saving those children does. Thus, you want to do it. I'm trying to avoid saying you should do it, because I think you'll read that in the traditional moral framework sense of "you must do this or you are a bad person" or something like that. In reality, there is no such thing as "...
Right, that's the thought that motivated the "probably" at the end. Although it feels pretty strongly like motivated cognition to actually propose such an argument.
You're not "on the hook" or anything of the sort. You're not morally obligated to save the kids, any more than you're morally obligated to care about people you care about, or buy lunch from the place you like that's also cheaper than the other option which you don't like. But, if you do happen to care about saving children, then you should want to do it. If you don't, that's fine; it's a conditional for a reason. Consequentialism wins the day; take the action that leads most to the world you, personally, want to see. If you really do value the k...
After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of: