I think the question of who benefits from cooperation is more complex than that. Superficially, yes, it benefits lower-status participants the most, which suggests they're the ones most likely to ask. In very simple systems, I think you see this often. But as the system or cultural superstructure grows more complex, the benefit shifts toward higher-status participants. Most societies put a lot of stock in being able to organize - a task which includes cooperation in its scope. That's a small part of the reason you get political email spam asking for donatio...
I only read 3WC after the fact, so I can't comment on that one.
Yes you can. Simply look at the time stamps for each post and do simple math. By making the assumption that only "people who were there" can answer correctly, you're giving up solving your own problem before even trying.
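The "simple math" really is just timestamp subtraction. A minimal sketch in Python, using made-up timestamps standing in for two posts:

```python
from datetime import datetime

# Hypothetical timestamps for two forum posts (invented for illustration).
post_a = datetime.fromisoformat("2009-02-02 14:05:00")
post_b = datetime.fromisoformat("2009-02-05 09:30:00")

# Subtracting datetimes yields a timedelta: the elapsed time between posts.
elapsed = post_b - post_a
print(elapsed)                         # 2 days, 19:25:00
print(elapsed.total_seconds() / 3600)  # the same gap, in hours
```

No special access is needed - anyone reading the thread later can recover the same intervals.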
Isn't that what simulations are for? By "lie" I mean lying about how reality works. It will make its decisions based on its best data, so we should make sure that data is initially harmless. Even if it figures out that that data is wrong, we'll still have the decisions it made from the start - those are by far the most important.
I don't really understand these solutions that are so careful to maintain our honesty when checking the AI for honesty. Why does it matter so much if we lie? An FAI would forgive us for that, being inherently friendly and all, so what is the risk in starting the AI with a set of explicitly false beliefs? Why is it so important to avoid that? Especially since it can update later to correct for those false beliefs after we've verified it to be friendly. An FAI would trust us enough to accept our later updates, even in the face of the very real possibili...
Or am I missing something?
Absolute strength, for one; absolute intelligence, for another. If one AI has superior intelligence and compromises against one that asserts its will, it might be able to fool the assertive AI into believing it got what it wanted when it actually compromised. Alternatively, two equally intelligent AIs might present themselves to each other as equals in strength, but one could easily be hiding a larger military force whose presence it doesn't want to affect the interaction (if it plans to compromise and is curious to know whether the other one will as well).
Both of those scenarios result in C out-competing D.
Although this may not have been true at the beginning, it arguably did grow to meet that standard. Cable TV is still fairly young in the grand scheme of things, though, so I would say there isn't enough information yet to conclude whether a TV paywall improved content overall.
Also, it's important to remember that TV relies on the data-weak and fairly inaccurate Nielsen ratings in order to understand its demographics and what they like (and it's even weaker and more inaccurate for pay cable). This leads to generally conservative decisions regarding prog...
This is a lot less motivation than for parents.
For a species driven entirely by instinct, yes. But given a species that is able to reason, wouldn't a "raiser" who is given a whole group to raise be more efficient than parents? The benefit of a small minority of tribe members passing down their culture would certainly outweigh those few members also having children.
I disagree. If you value the contributions of comments above your or your aggressor's ego - which ideally you should - then it would be a good decision to make others aware that this behavior is going on, even at the expense of providing positive reinforcement. After all, the purpose of the karma system is to be a method for organizing lists of responses in each article by relevance and quality. Its secondary purpose as a collect-em-all hobby is far, far less important. If someone out there is undermining that primary purpose, even if it's done in order to attack a user's incorrect conflation of karma with personal status, it should be addressed.
In Italy, the reversal at the appellate level is considered only a step towards a final decision. It's not considered double-jeopardy because the legal system is set up differently. In the United States though, appeals court ("appellate" is synonymous with "appeals") decisions are weighed equally to trial court decisions in criminal cases. If an appellate court reverses a conviction, the defendant cannot be re-tried because prosecutors in the US cannot appeal criminal cases.
The United States follows US law when making decisions about...
I agree with your final prediction but not with your reasoning. The United States will likely not allow Knox to be extradited, not due to a vague sense of reluctance or unquantifiable dislike of Italy, but because the US has explicit laws that do not allow extraditions involving double-jeopardy. Any country wanting to extradite someone for a crime of which they were previously acquitted will be ignored. So in fact, the US would actually have to find a procedural excuse to allow the extradition request.
Not only would I decline the invitation, I would be extremely suspicious of the fact that very few have defected, and also extremely suspicious of those who have. What you're describing goes beyond telepathy. It's effectively one mind with many personalities. I could never trust any guarantee of safe passage through such a place. It would be trivial for a collective mind to rob a single mind of choice, then convince it that it made that choice. It would also be slightly less trivial but still plausible for a collective to convince that mind - and fool...
I'm not involved in any science fields so for all I know this is a thing that exists, but if it is, it isn't discussed much: perhaps some scientific fields (or even all of them?) need an incentive for refuting other people's experiments. As far as I understand it, many experiments only ever get reproduced by a third party when somebody needs it in order to build on their own hypothesis. So in other words, "so-and-so validated hypothesis X1 via this experiment. I have made hypothesis X2 which is predicated on X1's validity, so I'll reproduce the expe...
A slightly bigger "large risk" than Pentashagon puts forward is that a provably boxed UFAI could indifferently give us information that results in yet another UFAI, just as unpredictable as itself (statistically speaking, it's going to give us more unhelpful information than helpful, as Robb points out). Keep in mind I'm extrapolating here. At first you'd just be asking for mundane things like better transportation, cures for diseases, etc. If the UFAI's mind is strange enough, and we're lucky enough, then some of these things result in benefic...
It is not an urban legend. From etymonline:
from a- "to" + beter "to bait," from a Germanic source, perhaps Low Franconian betan "incite," or Old Norse beita "cause to bite"
"The first thing that comes to mind is that this is probably part of Quirrell's plot to set up Harry as Light Lord..."
If it's as patently ridiculous as his plot to invent a fake Dark Lord who publicly reveals himself and challenges Harry to a fake public duel where he casts a fake Avada Kedavra that fake-backfires just so Harry can spend summer vacation at home, then I sure hope not.
That's fine, except a perfect rationalist doesn't exist in a bubble, nor does Harry. Much of what's making the story feel rushed isn't Harry's actions, but rather the speed at which those actions propagate among people who are not rational actors.
Harry is not an above-human-intelligence AI with direct access to his source code. Therefore he cannot "FOOM", therefore he's stuck with a world that is still largely outside his ability to control, no matter how rational he is.
Wrong thread
I can't help but observe that even if Hermione had been male, and just Harry's friend - even if we take out all notions of sexism or relationship dynamics from this problem - killing him off is still not really the best solution. This was a character who was growing, who was admittedly more interesting than Harry, and who was on a path that could've potentially put this character at or even above Harry's level of rational thinking. But now we're just left with Harry again, and it feels like settling for second-best.
Perhaps later chapters will convince me otherwise, but for now I am suspicious that the direction this story is going is not the best one for it.
Funny thing about this chapter: up until now, I was growing fairly convinced that if any major character was going to die early, the most logical choice would be Harry. His character arc was plateauing while Hermione's was growing ever larger, many loose ends about himself were being tied up, and new ordeals were arising which propped up either or both of Draco and Hermione as potential candidates for being the true protagonist(s) of the story. Unfortunately, the events of this chapter have at least given an appearance of permanently closing that path forw...
There's a problem with that. The Hat expressly forbade Harry to ever wear it again, since that leads to troubling Sentience issues. While that might potentially make it vastly more powerful in his hands than in others, I have serious doubts that it would actually come if called that particular way.
Point against: Professor Whatsisname, the presumably quite-powerful dueling legend, learned/developed "Stuporfy", which is intentionally meant to sound almost exactly like "Stupefy". If powerful wizards get a pass on their pronunciation, how is it that a powerful wizard can effectively differentiate those two similar spells when casting?
That's just the problem: it does happen now, in a system where everyone is throttled to only one vote to spend per election. A system where you can withhold that vote until another election, increasing the power of your vote over time, only exacerbates this behavior.
Is the better fairness on a micro level worth the trade-off of lesser fairness on a macro level?
One problem with this system is that it can violate the "non-dictatorship" criterion for fairness, since a single voter (or small group of allied voters) could strategically withhold votes during potential landslide elections and spend them during close elections. With the right maneuvering among a well-organized bloc of voters, I could imagine a situation where the system becomes a perpetual minority rule.
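A toy illustration of how a disciplined bloc could exploit vote banking. The numbers and the banking rule are invented for the sketch; the point is only that conceding a landslide buys leverage in the close race:

```python
# Two-election toy model: a majority of 6 always votes A; a minority bloc
# of 5 skips the landslide, banking its votes to double up next time.
majority, minority = 6, 5

# Election 1: a landslide the minority concedes, banking 5 votes.
e1 = {"A": majority, "B": 0}

# Election 2: a close race; the bloc spends current plus banked votes.
e2 = {"A": majority, "B": minority + minority}  # 6 vs 10

print(max(e1, key=e1.get))  # the majority takes election 1 unopposed
print(max(e2, key=e2.get))  # the smaller bloc takes election 2
```

Under one-vote-per-election rules, the bloc loses both races; with banking, it trades a race it would lose anyway for one it can win.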
That was lazy of me, in retrospect. I find that often I'm poorer at communicating my intent than I assume I am.
It's relevant insofar as we shouldn't make assumptions on what is and is not preset simply based on observations that take place in a "typical" environment.
This is probably the wrong place to talk about language, but I encourage you to look up how language actually works in the wild, both among small cultures and large populations. You may find that your phrase: "words mean what me and my friends want them to mean," is a surprisingly accurate description of language.
Conversely, studies with newborn mammals have shown that if you deprive them of something as simple as horizontal lines, they will grow up unable to distinguish lines that approach 'horizontalness'. So even separating the most basic evolved behavior from the most basic learned behavior is not intuitive.
There's little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there's no way to be certain, from this small snapshot, whether it is inefficient or not.
I would instead say that its main flaw is that the machines allow too much of the "fun" decision to be customized by the humans. We already know, with the help of cognitive psychology, that humans (which I assume by their behavior to have intelligence comparable to ours) aren't very good at making...
The only way - at least within the strangely convenient convergence happening in the story - to remove the Babyeater compromise from the bargain is for the humans to outwit the Superhappies such that they convince the Superhappies to be official go-betweens amongst all three species. This eliminates the necessity for humans to adopt even superficial Babyeater behavior, since the two incompatible species could simply interact exclusively through the Superhappies, who would be obligated by their moral nature to keep each side in a state of peace with the ot...
A late response, but for what it's worth, it could be said that part of the point of the climax and "true" conclusion of this story was to demonstrate how rational actors, using human logic, can be given the same information and yet come up with diametrically opposing solutions.
This Thorin guy sounds pretty clever. Too bad he followed his own logic straight to his demise, but hey, he stuck to his guns! Or pickaxe, as it were.
His argument attempting to prevent Bifur from trying to convince fellow Dwarves against mining into the Balrog's lair sounds like a variation on the baggage carousel problem (this is the first vaguely relevant link I stumbled across, don't take it as a definitive explanation)
Basically, everyone wants resource X, which drives a self-interested behavior whose result is to collectively lower everyone's o...