UPI Reporter Dan Olmsted went looking for the autistic Amish. In a community where he should have found 50 profound autistics, he found 3.
He went looking for autistics in a community mostly known for rejecting Science and Engineering? It 'should' be expected that the rate of autism is the same as in the general population? That's... not what I would expect. Strong social penalties for technology use for many generations would be a rather effective way to cull autistic tendencies from a population.
I think this is about the only scenario on LW that someone can be justifiably downvoted for that statement.
I up-voted it for dissenting against sloppy thinking disguised as being deep or clever. Twisting the word 'god' to include other things that do not fit the original, literal or intended meaning of the term results in useless equivocation.
Hubris isn't something that destroys you, it's something you are punished for. By the gods!
Or by physics. Not all consequences for overconfidence are social.
You were willing to engage with me after I said something "inexcusably obnoxious" and sarcastic, but you draw the line at a well reasoned collection of counterarguments? Pull the other one.
For those curious, I stopped engaging after the second offense - the words you wrote after what I quoted may be reasonable but I did not and will not read them. This has been my consistent policy for the last year and my life has been better for it. I recommend it for all those who, like myself, find the temptation to engage in toxic internet argument har...
Can't imagine who'd have guessed your exact intention just based on your initial response, though.
You are probably right and I am responsible for managing the predictable response to my words. Thank you for the feedback.
Wow, thank God you've settled this question for us with your supreme grasp of rationality. I'm completely convinced by the power of your reputation to ignore all the arguments common_law made, you've been very helpful!
Apart from the inexcusably obnoxious presentation, the point hidden behind your sarcasm suggests you misunderstand the context.
Stating arguments in favour of arguing with hostile arguers is one thing. "You should question your unstated but fundamental premise" is far more than that. It uses a condescending normative dominance atte...
Trying to use reasoned discussion tactics against people who've made up their minds already isn't going to get you anywhere, and if you're unlucky, it might actually be interpreted as backtalk, especially if the people you're arguing against have higher social status than you do--like, for instance, your parents.
At times being more reasonable and more 'mature' sounding in conversation style even seems to be more offensive. It's treating them like you are their social equal and intellectual superior.
I want the free $10. The $1k is hopeless and were I to turn out to lose that side of the bet then I'd still be overwhelmingly happy that I'm still alive against all expectations.
I consider that social policy proposal harmful and reject it as applied to myself or others. You may of course continue to refrain from speaking out against this kind of behaviour if you wish.
In the unlikely event that the net positive votes (at that time) given to Azathoth123 reflect the actual attitudes of the lesswrong community, the 'public' should be made aware so they can choose whether to continue to associate with the site. At least one prominent user has recently disaffiliated himself (and deleted his account) over a far less harmful sociopolitical concern. On the other hand, other people who embrace alternate lifestyles may be relieved to see that Azathoth's prejudiced rabble rousing is unambiguously rejected here.
Ignorant is fastest - it only calculates the answer and doesn't care about anything else.
Just don't accidentally give it a problem that is more complex than you expect. Only caring about solving such a problem means tiling the universe with computronium.
Wow. I want the free money too!
2) Gays aren't monogamous. One obvious way to see this is to note how much gay culture is based around gay bathhouses. Another way is to image search pictures of gay pride parades.
This user seems to be spreading an agenda of ignorant bigotry against homosexuality and polyamory. It doesn't even temper the hostile stereotyping with much pretense of just referring to trends in the evidence.
Are the upvotes this account is receiving here done by actual lesswrong users (who, frankly, ought to be ashamed of themselves) or has Azathoth123 created sockpuppets to vote itself up?
I've suspected Azathoth123 of upvoting their own comments with sockpuppets since having this argument with them. (If I remember rightly, their comments' scores would sit between -1 & +1 for a while, then abruptly jump up by 2-3 points at about the same time my comments got downvoted.)
Moreover, Azathoth123 is probably Eugine_Nier's reincarnation. They're similar in qui...
This is the gist of the AI Box experiment, no?
No. Bribes and rational persuasion are fair game too.
To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI
I don't know who you are quoting but they are someone who considers AIs that will torture me to be friendly. They are confused in a way that is dangerous.
The AI acausally blackmails people into building it sooner, not into building it at all.
It applies to both - causing itself to exist at a different place in time or causing itself to exist at all. I've explicitly mentioned elsewhere in this thread that merely refusing blackmail is insufficient when there a...
Is TDT accurately described by "CDT + acausal communication through mutual emulation"?
Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
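To make the dominance argument concrete, here is a toy sketch (my own illustration; the payoff numbers and function names are invented for the example). Talking in the room only moves a CDT agent's credence about what the other player will do, and defection has the higher causal expected payoff for every such credence:

```python
# Toy one-shot prisoner's dilemma; payoffs are (mine, theirs).
PAYOFF = {
    ("C", "C"): (2, 2),   # mutual cooperation
    ("C", "D"): (0, 3),   # I cooperate, they defect
    ("D", "C"): (3, 0),   # I defect, they cooperate
    ("D", "D"): (1, 1),   # mutual defection
}

def cdt_choice(p_other_cooperates):
    """Pick the action with the higher causal expected payoff, holding the
    (believed) distribution over the opponent's action fixed."""
    def ev(my_action):
        return (p_other_cooperates * PAYOFF[(my_action, "C")][0]
                + (1 - p_other_cooperates) * PAYOFF[(my_action, "D")][0])
    return "C" if ev("C") > ev("D") else "D"

# However persuasive the conversation, it only shifts p_other_cooperates,
# and defection dominates for every value of it.
for p in (0.0, 0.5, 0.99, 1.0):
    assert cdt_choice(p) == "D"
```

What the talking cannot do, for a CDT agent, is make its own choice depend on the logical connection between the two decisions - which is exactly the dependence the acausal version of the trade would need.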
By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.
Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).
It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?
In this case "be blackmailed" means "contribute to creating the damn AI". That's the entire point. If enough people do contribute to creating it then those that did not contribute get punished. The (hypothetical) AI is acausally creating itself by punishing those that don't contribute to creating it. If nobody does then nobody gets punished.
I'll be sure to ask you the next time I need to write an imaginary comment.
I wasn't the pedant. I was the tangential-pedantry analyzer. Ask Lumifer.
It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?
Your comment was fine. It would be true of most people, I'm not sure if Faul is one of the exceptions.
Realistically speaking?
Unfortunately this still suffers from the whole "Time Traveller visits you" part of the claim - our language doesn't handle it well. It's a realistic claim about the counterfactual response of a real brain to an unrealistic stimulus.
This seems weird to me.
It seemed weird enough to me that it stuck in my memory more clearly than any of his anti-MIRI comments.
XiXiDu does not strike me as someone who is of average or below-average intelligence--quite the opposite, in fact.
I concur.
Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?
My best guess is an ethical compulsion towards sincere expression of reality as he perceives it. For what it is worth that sincerity did influence my evaluation of his behaviour and personality. XiXiDu...
I don't think it's literally factually :-D
I think you're right. It's closer to, say... "serious counterfactually speaking".
False humility? Countersignalling? Depression? I don't want to make an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.
From the context I ruled out countersignalling and for what it is worth my impression was that the humility was real, not false. Given that I err on the side of cynical regarding hypocrisy and had found some of XiXiDu's comments disruptive I give my positive evaluation of Xi's sincerity some weight.
I agree that the hypothesis of low intelligence is implausible d...
I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, one on xrisks and AI.
I'm impressed. (And will look them up when I get a chance.)
For what it's worth, I don't think anybody understands acausal trade.
It does get a tad tricky when combined with things like logical uncertainty and potentially multiple universes.
Precommitment isn't meaningless here just because we're talking about acausal trade.
Except in special cases which do not apply here, yes it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)
What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.
The time of this kind of decision is irrelevant.
The key is that the AI precommits to building it whether we refuse or not.
The 'it' bogus is referring to is the torture-AI itself. You cannot precommit to things until you exist, no matter your acausal reasoning powers.
It's such a plausible conclusion that it makes sense to draw, even if it turns out to be mistaken. Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.
The best we can say is that it is a sufficiently predictable conclusion. Had the author not underestimated inferential distance he could easily have pre-empted your accusation with an additional word or two.
Nevertheless, it is still a naive (and incorrect) conclusion to draw based on the available evidence. Familiarity with human psychology (in general), inte...
False humility? Countersignalling? Depression? I don't want to make an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.
(Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)
I can't read minds
Yet you spoke with the assumption that you could, even though many observers do not share your mind-reading conclusions. Hopefully in the future when you choose to do that you will not fail to see why you get downvotes. It's a rather predictable outcome.
XiXiDu should discount this suggestion because it seems to be motivated reasoning.
The advice is good enough (and generalizable enough) that the correlation to the speaker's motives is more likely to be coincidental than causal.
Addicts tend to be hurt by exposing themselves to their addiction triggers.
When discussing transparent Newcomb, though, it's hard to see how this point maps to the latter two situations in a useful and/or interesting way.
Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement. i.e. From my perspective I get to argue with (literally) less wrong people, with a correspondingly higher chance that I'm the one who is c...
Breaking the vicious cycle
I endorse this suggestion.
Don't Feed The Trolls!
If I consider my predictions of Omega's predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.
"Ha! What if I don't choose One box OR Two boxes! I can choose No Boxes out of indecision instead!" isn't a particularly useful objection.
It's me who has to run on a timer.
No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn't care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.
When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It's a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.
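For anyone who hasn't seen the "bounded agents with shared source code" construction, here is a minimal sketch of the flavour of solution I mean (my own toy version, not anyone's canonical implementation; the optimistic base case at depth 0 is just one convention for this kind of toy):

```python
def bounded_fair_bot(opponent, depth=10):
    """Cooperate iff a depth-limited simulation of the opponent, playing
    against this same strategy, cooperates. At depth 0 the simulation
    bottoms out optimistically, so mutual simulation ends in cooperation
    rather than infinite regress."""
    if depth <= 0:
        return "C"
    return "C" if opponent(bounded_fair_bot, depth - 1) == "C" else "D"

def defect_bot(opponent, depth=10):
    return "D"

print(bounded_fair_bot(bounded_fair_bot))  # "C": cooperation despite mutual simulation
print(bounded_fair_bot(defect_bot))        # "D": exploiters get defected against
print(defect_bot(bounded_fair_bot))        # "D"
```

The point of the explicit budget is the one Nshepperd made: once the analysis is resource-limited by construction, "stall forever so you can't be predicted" stops being a move at all.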
As I argued in this comment, however, the scenario as it currently stands is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction.
Previous discussions of Transparent Newcomb's problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.
I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I'm not sure if consistency in this situation would even be possible for Omega. Any comments?
The problem (such a...
I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept.
It may be the least convenient possible world. More specifically it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.
...Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have t
No, because that's fighting the hypothetical. Assume that he doesn't do that.
It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith, explicit clauses for the distracting edge cases sometimes need to be added.
While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of "rationality".
To be fair, while it is possible to have a coherent preference for death far more often people have a cached heuristic to refrain from exactly the kind of (bloody obvious) reasoning that Boy 2 is explaining. Coherent preferences are a 'rationality' issue.
Since nothing in the quote prescribes the preference and instead merely illustrates reasoning that happens to follow from having preferences ...
when much of mainstream philosophy consists of what (I assume) you're calling "bad amateur philosophy".
No, much of it is bad professional philosophy. It's like bad amateur philosophy except that students are forced to pretend it matters.
Curiously enough, I made no claims about ideal CDT agents.
True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.
The comparison is actually rather more generous than what I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.
That said, the grandparent's point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT or most other naive but not fundamentally insane decision algorithms. In the general cas...
Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.
Curiously, this particular claim is true only because Lumifer's primary claim is false. An ideal CDT agent released at time T with the capability to self modify (or otherwise precommit) will as rapidly as possible (at T + e) make a general precommitment to the entire class of things that can be regretted in advance only for the purpose of influencing decisions made after (T + e) (but...
If Omega is just a skilled predictor, there is no certain outcome so you two-box.
Unless you like money and can multiply, in which case you one box and end up (almost but not quite certainly) richer.
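The multiplication, for anyone who wants it spelled out (assuming the standard $1,000,000 / $1,000 payouts and a predictor who is right with probability p against both kinds of player):

\begin{align*}
\mathbb{E}[\text{one-box}] &= p \cdot \$1{,}000{,}000 \\
\mathbb{E}[\text{two-box}] &= \$1{,}000 + (1 - p) \cdot \$1{,}000{,}000 \\
\mathbb{E}[\text{one-box}] > \mathbb{E}[\text{two-box}] &\iff p > \tfrac{1{,}001{,}000}{2{,}000{,}000} \approx 0.5005
\end{align*}

So "a skilled predictor" only needs to beat 50.05% accuracy for one-boxing to come out ahead in expectation.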
I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."
Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?
If there's something about "ability to learn" outside of this, I'd be interested to hear about it.
Skills, techniques and habits are also rather important.
Sugar is desirable as the most easily accessible form of energy. Energy density matters more for long term storage in a mobile form, hence the use of the more concentrated fat.
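Rough textbook numbers make the concentration point concrete (standard figures, nothing exotic assumed): fat yields about 9 kcal/g, carbohydrate about 4 kcal/g, and glycogen is stored hydrated at roughly 3 g of water per gram of glycogen, so

\[
\frac{9\ \text{kcal/g (fat)}}{\tfrac{4}{1+3}\ \text{kcal/g (hydrated glycogen)}} \approx 9
\]

the same energy reserve carried as glycogen would weigh on the order of nine times what it weighs as fat.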