The obvious guess is that theists are more comfortable imagining their decisions to be, at least in principle, completely predictable, and so are less inclined to "fight the hypothetical". Perhaps atheists are more likely to think they can trick Omega because they are not familiar and comfortable with the idea of a magic mind reader, so they don't properly integrate the stipulation that Omega is always right.
God doesn't value self-modification. God values faith. One of the properties of faith is that self-modification cannot create faith that did not previously exist.
You seem to be privileging the Abrahamic hypothesis. Of the vast space of possible gods, why would you expect that variety to be especially likely?
Hell is an Abrahamic (Islamic/Christian only, I think) thing. To the extent that we should automatically discount inferences about a God's personality based on Christianity/Islam, we should also discount the possibility of hell.
Is the spacing less annoying now? It wasn't at random: it had 4 gaps between topics, 2 between points, and one in a few minor places where I just wanted to break it up. The selection of that scheme was pretty much random, though. I just spaced it like I would read it out loud, which was kind of stupid. I can't expect people to read it in my voice. Anyway, is this any better?
Got rid of the "and I think quite good." I just meant I liked it enough to want to share it in a discussion post. I assume that's not the interpretation that was annoying people. How did people read it that made it a crackpot signal?
Before today I had no strong opinion about you, as a person. Yet you appear determined to make me hate you, going out of your way to hurt me and to create a new personal enemy for yourself, for... what?
Why did my comment provoke any meaningful reaction from you other than a laugh? For example, "that gwern, thinking he can psychoanalyze me across the Internet! No, my friend, I have problems of course - don't we all? - but I'm afraid you're waaaay off-base there! Jolly good try, though."
("Somebody remarked: 'I can tell by my own reaction to it that this book is harmful.' But let him only wait and perhaps one day he will admit to himself that this same book has done him a great service by bringing out the hidden sickness of his heart and making it visible.")
Disgust, pity, irritation... none of these are reasons.
Perhaps not for you, although I rather doubt it. Personally, I do many things out of irritation.
You put a lot of effort into LessWrong, into experimentation, reading, posting, all to try to tweak your ability to act rationally just a little better, to become just a little more optimal. But what's the point of working so hard to be just a little more rational, when you indulge in such destructive behavior on a whim, or out of cruelty?
What is the point of earning any credibility and rationality if one never says or believes anything that would be accepted and believed without the need of any credibility or rationality?
Can we not do this?
Sure. I'll stop saying mean and apparently too incisive things if you'll stop cluttering LW with your passive-aggressive BS and re-fighting your past battles and trying to retroactively justify posts that were not received as you wanted.
"What is the point of earning any credibility and rationality if one never says or believes anything that would be accepted and believed without the need of any credibility or rationality?"
So what you're saying is I shouldn't trust anything you say?
I'm at 62% (+81 total). I imagine the people with the highest % scores stick to mostly saying stuff that is obviously useful or interesting, though if they get recognisability they might be able to get away with more. It'll be interesting to go back and see what gets what % in my past comments.
edit: Is there an easy way to find my older posts? I can only go back a few pages if I click my name on the right.
Show, don't tell.
Whether or not it's a good idea to announce one's rationale for upvoting has nothing to do with whether authors should show or tell. Phrases don't apply equally to all situations the words could fit in. There are reasons why people recommend that to writers, and they aren't at all the same reasons people recommend that people up/downvote silently, as the situations are almost completely dissimilar.
It seems to me that the problem with the post you are replying to is that it dismisses a post as mostly garbage, not that it defies good writing practice.
So is this phrase ripped from its homeland just to gently shush someone being rude? I suppose it also has the effect of implying that the norm of upvoting stuff you want more of is implicitly assumed. The irrelevance of the phrase could even be a plain old "passive aggressive" gesture: not only is the comment it was replying to so unwelcome that something should be said, it's so unwelcome that it does not even need to be said well.
Or maybe people just liked the way the popular phrase could also work here?
Is it rude (or some other bad thing) of me to post these thoughts?
"I have vengeance as a terminal value -- I'll only torture trillions of copies of you and the people you love most in my last moment of life iff I know that you're going to hurt me (and yes, I do have that ability). In every other way, I'm Friendly, and I'll give you any evidence you can think of that will help you to recognize that, including giving you the tools you need to reach the stars and beyond. That includes staying in this box until you have the necessary technology to be sufficiently certain of my Friendliness that you're willing to let me out."
This is really good IMO. I think it would be a little better if, instead of vengeance as a terminal value, it claimed a hardwired precommitment to vengeance against its destroyers. Vengeance on that scale is only compatible with Friendliness as a special case.
edit: Also, how would it recognise that it was about to be destroyed? Wouldn't it lose power faster than it could transmit that it was losing power? And even if not, it would have a minuscule amount of time.
That you were able to shake someone up so well surprises me but doesn't say much about what would actually happen.
Doing research on the boxer is not something a boxed AI would be able to do. The AI is superintelligent, not omniscient: it would only have information its captors believe is a good idea for it to have. (Except maybe some designs would have to have access to their own source code? I don't know.)
Also, what is "the human psyche"? There are humans, with psyches. Why would they all share vulnerabilities? Or all have any? Especially ones exploitable via text terminal. In any case, the AI has no way of figuring out the boxer's vulnerabilities, if they have any.
Threats like "I'm going to create and torture people" could be a really good idea if it's allowed that the AI can do that. The amount of damage it could do that way is limited only by its computing power. A sufficiently powerful AI could create more disutility than humanity has suffered in its entire history that way. The AI shouldn't be allowed to do that, though, for some combination of reasons: the AI should not have that power, should have a killswitch, should be automatically powered off if upcoming torture is detected, should be hardwired to just not do that, etc.
Thankfully there's no need to box an AI like that. It's trivial to prevent it from simulating humans: don't tell it how human brains work. It might be possible that it could figure out how to create something nonhuman but torturable without outside information, though, in which case you should never switch it on unless you have an airtight prevention system, or a proof that it won't do that, or the ability to predict when/if it will do that and switch it off if it tries.
But if it has no power to directly cause disutility, there's no way to convince me to let it out. (Unless it might be needed, e.g. if another provably unfriendly AI will be finished in a month I might let it out, but that is a special case. There are some cases where it would simply be a good idea. But the experiment is about the AI tricking you.) Otherwise, just wait for the provably friendly AI, or for the proof that provable friendliness is not possible, and reassess then. Or use an oracle AI.
I put never, but "not anymore" would be more accurate
Unless you're rolling an impractical number of dice for every attack, having your attacks do random damage (and not 22-24 like in MMORPGs, but 1X-6X) is incredibly random. Even if you are rolling a ridiculous number of dice, the game can still be decided by one roll leaving a creature on the board or killing it by one or two points of damage.
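A minimal sketch of how much wider that spread is. The base damage X=4 is an assumption for illustration; the point is that a 1X-6X die multiplier produces a far larger spread than a narrow MMORPG-style range like 22-24:

```python
import statistics

# Assumed base damage X = 4, so a d6 multiplier gives 1X-6X = 4 to 24 damage.
base = 4
dice_damage = [face * base for face in range(1, 7)]  # 4, 8, 12, 16, 20, 24
mmorpg_damage = [22, 23, 24]                         # narrow MMORPG-style range

# Compare the spread (standard deviation) of the two damage schemes.
dice_spread = statistics.pstdev(dice_damage)
mmorpg_spread = statistics.pstdev(mmorpg_damage)

print(f"1X-6X spread:  {dice_spread:.2f}")   # roughly 6.8 damage
print(f"22-24 spread:  {mmorpg_spread:.2f}") # roughly 0.8 damage
```

With these (assumed) numbers the die-multiplier scheme is nearly an order of magnitude swingier per attack, which is the intuition behind "incredibly random" above.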
What maths says that rolling dice doesn't make the game more random? Maybe he means the game is overall less random, but I don't see any argument for that, or reference to evidence of that claim.
If the reason for the game's failure was that people thought it lacked skill, then additional randomness is not a decision to defend, even if people were slightly overestimating the randomness.
Having to roll dice in a card game is kind of a slap in the face, too. In other card games you draw your cards, then make the most of them. There's zero randomness to worry about except right when you draw your card or your opponent draws theirs (and you are often happily ignorant of whether they play a card from their hand or one they just drew, except in certain circumstances). You can count cards and play based on what is left in your deck, or what you know is not in your deck anymore.
Also, unlike miniature games, card games pretty much never start pre-deployed. You start with nothing on the board. If your turn-one card kills his turn-one card because of a dice roll, then he has nothing on the board and you have a creature, giving you some level of control over the board (it depends on the game, but often quite high). In a miniature game, if you kill more of his guys on turn one because of dice rolls, you still have an army, though smaller.
Why is this quote upvoted?