Comment author: turchin 08 August 2016 12:46:59PM 2 points [-]

Willingness to cooperate seems to be a low-status signal. E.g., a low-status author of an article may try to get a higher-status person as a coauthor, but a higher-status author would not try to recruit a low-status coauthor. Higher-status people can also defect with less punishment, e.g. by not returning calls or not keeping promises. The result is that open willingness to cooperate may be regarded as a signal of low status, and some people may deliberately refuse to cooperate in order to demonstrate their higher status. Any thoughts?

Comment author: Eugene 14 August 2016 01:10:08AM *  2 points [-]

I think cooperation is more complex than that, as far as who benefits. Superficially, yes, it benefits lower-status participants the most, which suggests they're the ones most likely to ask. In very simple systems, I think you see this often. But as the system or cultural superstructure gets more complex, the benefit rises toward higher-status participants. Most societies put a lot of stock in being able to organize - a task which includes cooperation in its scope. That's a small part of the reason you get political email spam asking for donations, even if you live in an area where your political party is clearly dominant. Societies also tend to put an emphasis on active overall participation (the 'irons in the fire' mentality), where peer cooperation is rewarded, and it's often unclear who has higher status in those situations without being able to tell who has the most 'irons in the fire', so to speak. I feel like this is where coauthoring falls, although it probably depends on what subculture has developed around the subject being authored.

And then there are the people who create organizations entirely centered around cooperation. The idea is that there's power in being able to set the rules of how the lower-status participants are allowed to cooperate, and how they are rewarded for their cooperation - for example, YouTube and Kickstarter. In these and similar systems, cooperation effectively starts at the highest possible status and rolls downhill.

Comment author: Bugmaster 28 February 2015 08:42:55PM 1 point [-]

I only read 3WC after the fact, so I can't comment on that one. But I don't recall him saying "...solve this problem or you get the bad ending" in the previous HPMoR chapters...

Comment author: Eugene 28 February 2015 09:53:10PM 3 points [-]

I only read 3WC after the fact, so I can't comment on that one.

Yes you can. Simply look at the time stamps for each post and do the simple math. By assuming that only "people who were there" can answer correctly, you're giving up on solving your own problem before even trying.

Comment author: V_V 18 November 2014 04:21:07PM 1 point [-]

The problem is that in order to do anything useful, the AI must be able to learn. This means that even if you deliberately initialize it with a false belief, the learning process might later update that belief once it finds evidence that it is false.
If AI safety relies on that false belief, you have a problem.

A possible solution would be to encode the false belief in a way that can't be updated by learning, but doing so is a non-trivial problem.

Comment author: Eugene 19 November 2014 12:15:09AM *  -1 points [-]

Isn't that what simulations are for? By "lie" I mean lying about how reality works. The AI will make its decisions based on its best data, so we should make sure that data is initially harmless. Even if it eventually figures out that the data is wrong, we'll still have the decisions it made from the start - those are by far the most important.

Comment author: Eugene 18 November 2014 01:32:44AM 0 points [-]

I don't really understand these solutions that are so careful to maintain our honesty when checking the AI for honesty. Why does it matter so much if we lie? An FAI would forgive us for that, being inherently friendly and all, so what is the risk in starting the AI with a set of explicitly false beliefs? Why is it so important to avoid that? Especially since it can update later to correct for those false beliefs after we've verified it to be friendly. An FAI would trust us enough to accept our later updates, even in the face of the very real possibility that we're lying to it again.

I mean, the point is to start the AI off in a way that intentionally puts it at a reality disadvantage, so that even if it's way more intelligent than us, it has to do so much work to make sense of the world that it doesn't have the resources to be dishonest in an effective manner. At that point, it doesn't matter what criteria we're using to prove its honesty.

Comment author: [deleted] 08 September 2014 07:13:26PM *  2 points [-]

In the intersection of their future light cones, each FAI can either try to accommodate the other (C) or try to get its own way (D). If one plays C and one plays D, the latter's values are enforced in the intersection of light cones; if both play C, they'll enforce some kind of compromise values; if they both play D, they will fight. So the payoff matrix is either PD-like or Chicken-like depending on how bloody the fight would be and how bad their values are by each other's standards.
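The PD-vs-Chicken distinction above comes down to the preference ordering over the four outcomes. A minimal sketch (the payoff values below are illustrative assumptions, not anything stated in the comment) of classifying the game from one FAI's point of view:

```python
def classify(T, R, S, P):
    """Classify a symmetric 2x2 game by its payoff ordering.

    T: temptation  - I play D, the other plays C (my values enforced)
    R: reward      - both play C (compromise values)
    S: sucker      - I play C, the other plays D (its values enforced)
    P: punishment  - both play D (fight)
    """
    if T > R > P > S:
        return "PD"       # fighting still beats capitulating
    if T > R > S > P:
        return "Chicken"  # the fight is the worst outcome of all
    return "other"

# A mild fight (P above S) gives the Prisoner's Dilemma ordering:
print(classify(T=3, R=2, S=0, P=1))  # PD
# A catastrophic fight (P below S) gives the Chicken ordering:
print(classify(T=3, R=2, S=1, P=0))  # Chicken
```

This matches the comment's point: "how bloody the fight would be" just determines whether mutual defection (P) falls above or below unilateral capitulation (S) in the ordering.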

Or am I missing something?

In response to comment by [deleted] on Open thread, September 8-14, 2014
Comment author: Eugene 08 September 2014 10:32:01PM 1 point [-]

Or am I missing something?

Absolute strength, for one; absolute intelligence, for another. If one AI has superior intelligence and compromises against one that asserts its will, it might be able to fool the assertive AI into believing it got what it wanted when it actually compromised. Alternatively, two equally intelligent AIs might present themselves to each other as though both are of equal strength, but one could easily be hiding a larger military force whose presence it doesn't want to affect the interaction (if it plans to compromise and is curious to know whether the other one will as well).

Both of those scenarios result in C out-competing D.

Comment author: Lumifer 08 September 2014 05:16:52PM 6 points [-]

Look at the history of cable TV. When it appeared it was also promoted as "no advertising, better shows".

Comment author: Eugene 08 September 2014 10:12:29PM *  2 points [-]

Although this may not have been true at the beginning, it arguably did grow to meet that standard. Cable TV is still fairly young in the grand scheme of things, though, so I would say there isn't enough information yet to conclude whether a TV paywall improved content overall.

Also, it's important to remember that TV relies on the data-weak and fairly inaccurate Nielsen ratings in order to understand its demographics and what they like (and the data is even weaker and more inaccurate for pay cable). This leads to generally conservative decisions regarding programming. The internet, on the other hand, is filled with as much data as you wish to pull out regarding the people who use your site, on both a broad and granular level. This allows freedom to make more extreme changes of direction, because there's a feeling that the risk is lower. So the two groups really aren't on the same playing field, and their motivations for improving/shifting content potentially come from different directions.

Comment author: Azathoth123 03 September 2014 02:25:30AM 8 points [-]

A culture that includes the concept of a "raiser" - an octopus with the job of raising the babies, and passing the culture on to them, without mating at all - can avoid that issue. The "raiser" would also improve his average genetic fitness if he is a sibling of one of the parents, since the children would then all have approximately one-quarter of his genes.

This is a lot less motivation than for parents.

If it's not enough to kill off the species, evolution generally won't drop the feature.

Well, for starters, if you don't die after mating you might be able to mate again.

According to my source - which is a blog comment that doesn't cite its sources - the death is a form of programmed cell death, and scientists have been able to remove the gene responsible; the resulting octopuses (or squid) can mate again later.

Comment author: Eugene 08 September 2014 09:47:18PM 1 point [-]

This is a lot less motivation than for parents.

For a species driven entirely by instinct, yes. But given a species that is able to reason, wouldn't a "raiser" who is given a whole group to raise be more efficient than parents? The benefit of a small minority of tribe members passing down the culture would surely outweigh the cost of those few members not having children of their own.

Comment author: shminux 09 February 2014 08:02:57PM *  -3 points [-]

The worst possible reaction to this phenomenon is to point it out publicly rather than to quietly report it to whoever cares (currently no one among the site admins), since you noticing it, even occasionally, is enough of a positive reinforcement for the culprit to continue. I also mentioned on occasion that unfairly downvoted comments tend to get upvoted back up over time, so no point sweating it. So I am downvoting your comment to encourage you to silently shrug off karma sniping in the future.

Comment author: Eugene 10 February 2014 12:52:22AM 2 points [-]

I disagree. If you value the contributions of comments above your or your aggressor's ego - which ideally you should - then it would be a good decision to make others aware that this behavior is going on, even at the expense of providing positive reinforcement. After all, the purpose of the karma system is to be a method for organizing lists of responses in each article by relevance and quality. Its secondary purpose as a collect-em-all hobby is far, far less important. If someone out there is undermining that primary purpose, even if it's done in order to attack a user's incorrect conflation of karma with personal status, it should be addressed.

Comment author: brazil84 09 February 2014 03:46:58PM *  0 points [-]

but because the US has explicit laws that do not allow extradition in cases of double jeopardy

Well do those laws (and the decisions interpreting them) make clear what should happen in a situation where the defendant was convicted at the trial level; the conviction was reversed at the appellate level; and then subsequently reinstated after a further appeal and remand?

I don't think so. One article I read has a law professor asserting that double-jeopardy would not apply:

Some legal analysts have said that Knox could cloak herself in the Fifth Amendment’s protection against double jeopardy, being tried again for a crime after an acquittal. But that protection wouldn’t apply to Knox, Ku [a Hofstra law professor] wrote in a blog post.

For one thing, the treaty with Italy would block Knox’s extradition only if she had been prosecuted in the United States, he wrote. For another, double jeopardy wouldn’t apply because Knox was convicted, not acquitted, in the first round.

I do think that there's a good chance the courts would resolve the double-jeopardy issue in favor of Knox, but not necessarily because such a result is clearly required by the law.

Comment author: Eugene 10 February 2014 12:36:12AM 0 points [-]

In Italy, the reversal at the appellate level is considered only a step towards a final decision; it's not considered double jeopardy because the legal system is set up differently. In the United States, though, appeals court ("appellate" is synonymous with "appeals") decisions are weighed equally with trial court decisions in criminal cases. If an appellate court reverses a conviction, the defendant cannot be retried, because prosecutors in the US cannot appeal criminal cases.

The United States follows US law when making decisions about extradition. This isn't a feature of any specific treaty with Italy: extradition treaties just signify that a country is allowed to request extradition. All subsequent extradition requests from those countries are sent through the Department of State for review. Even if the request passes review and the person is arrested, a court hearing is held in the US to determine whether the fugitive is extraditable. So there are multiple opportunities to look at Italian court procedures and decide whether they count as double jeopardy under US law. Those investigations would tend toward deciding that they do.

Ergo, the US would tend not to extradite someone whose verdict was reversed in a foreign appellate court.

Comment author: brazil84 31 January 2014 10:27:20PM *  6 points [-]

Will the final appeal find Amanda Knox and/or Raffaele Sollecito innocent or guilty?

I will predict "guilty" with a probability of 85-90% since the highest court apparently decided against Knox and Sollecito previously.

When will the trial end?

The trial is already over, no? All that's left is further appeals, right? Do the appeals courts take further evidence?

If convicted, will the US extradite Amanda Knox?

I predict "no" with a probability of 75-80%. The sentiment in the United States is pretty pro-Knox and there is always a good deal of reluctance to send attractive young women to jail. The United States obviously has the juice to politely tell Italy to f* off and it shouldn't be too hard to find some procedural excuse to deny the extradition request.

Comment author: Eugene 09 February 2014 08:31:05AM 0 points [-]

I agree with your final prediction but not with your reasoning. The United States will likely not allow Knox to be extradited, not due to a vague sense of reluctance or unquantifiable dislike of Italy, but because the US has explicit laws that do not allow extradition in cases of double jeopardy. Any country wanting to extradite someone for a crime of which they have previously been found innocent will be ignored. So in fact, the US would actually have to find a procedural excuse to allow the extradition request.
