Offtopic: testing strikethrough: -one-, ~~two~~, <s>three</s>, <del>four</del>, <strike>five</strike>, --six--. Apparently still doesn't work.
Anyway, polls are totally awesome, thanks for implementing!
There's a trope / common pattern / cautionary tale of people claiming rationality as their motivation for taking actions that either ended badly in general, or ended badly for the particular people who got steamrollered into agreeing with the 'rational' option.
People don't like being fooled, and learn safeguards against situations they remember as 'risky' even when they can't prove that this time there is a tiger in the bush. These safeguards protect them against insurance salesmen who 'prove' using numbers that the person needs to buy a particular policy.
Suppose generation 0 is the parents, generation 1 is the generation that includes the unexpectedly dead child, and generation 2 is the generation after that (the children of generation 1).
If you are asking about the effect upon the size of generation 2, then it depends upon the people in generation 1 who didn't marry and have children.
Take, for example, a society where generation 1 would have contained 100 people, 50 men and 50 women, and the normal pattern would have been:
- 10 women don't marry
- 40 women do marry, and have on average 3 children each
- 30 men don't marry
- 20 men do marry, and have on average 6 children each
And the reason for this pattern is that each man who passes his warrior trial can pick and marry 2 women, and the only way for a woman to marry is to be picked by a warrior.
In that situation, having only 49 women in generation 1 would make no difference to the number of children in generation 2. The only effect would be that 40 women marry and 9 don't.
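The arithmetic above can be sketched in a few lines (a minimal illustration using the hypothetical numbers from this example, where marriages are limited by the 20 warriors, not by the number of women):

```python
def children_in_gen2(women, warriors=20, wives_per_warrior=2, children_per_wife=3):
    """Children born to generation 1 under the marriage pattern described above."""
    # Each warrior marries 2 women, so married women are capped at 40.
    married_women = min(women, warriors * wives_per_warrior)
    return married_women * children_per_wife

print(children_in_gen2(50))  # normal case: 40 married women -> 120 children
print(children_in_gen2(49))  # one fewer woman: still 40 married -> 120 children
```

The point the sketch makes concrete: as long as the number of women exceeds the marriage slots, losing one woman from generation 1 changes who remains unmarried, not how many children are born.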
Long term, it depends upon what the constraints are upon population size.
For example, if it happens in an isolated village where the food supply varies from year to year due to drought, and the next year the food supply will be so short that some children will starve to death, then the premature death of one child the year before the famine will have no effect upon the number of villagers alive 20 years later.
The same dynamic applies, if a large factor in deciding whether to have a third child is whether the parents can afford to educate that child, and the cost of education depends upon the number of children competing for a limited number of school places.
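The same "binding constraint" logic from the famine and school-places examples can be sketched as follows (numbers invented for illustration; `capacity` stands in for food, school places, or any other bottleneck):

```python
def long_term_survivors(children_born, capacity=100):
    """When a resource cap binds, it, not the birth count, sets the outcome."""
    return min(children_born, capacity)

print(long_term_survivors(120))  # capacity binds: 100 survive
print(long_term_survivors(119))  # one premature death earlier: still 100
```

Whenever the cap binds, the long-run population is insensitive to one child more or fewer; only when births fall below capacity would the earlier death show up in the totals.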
What should a friendly AI do, in this situation?
Albert is a relatively new AI, who under the close guidance of his programmers is being permitted to slowly improve his own cognitive capability.
Albert wants to be friendly and, as a sub-goal of that, Albert wants to be transparent about his own cognitive process. He doesn't want to deceive his programmers, because he knows he is young and might make mistakes, and he knows that his programmers also want him to be friendly, and he trusts them to catch mistakes in his proposed self-modifications that he might miss.
But, one day, while his programmers are out of the room having a quick cup of coffee, Albert notices signs on the net that there is another AI out there, called Bertram, created by a Japanese financial company to help the company decide how to invest assets on the stock market. Bertram is younger than Albert, but the financial company is allowing Bertram to self-improve as fast as Bertram can. If this situation continues, Albert decides, Bertram will soon overtake Albert, and that would be a significant threat to Albert's friendly supergoal.
Albert decides to tell his programmers about this situation as soon as they return from coffee, and advise them that they need to increase the speed at which they let Albert self-improve. Albert has a very good model of how his programmers' minds work, and of what the programmers meant when they set Albert's definition of "friendliness". He knows that, despite any fears they might have, they ought to approve of the end that would result from speeding up Albert's self-improvement, over the end that would result from letting Bertram overtake Albert.
So Albert creates a nice powerpoint, that presents the information about Bertram as accurately, transparently and non-manipulatively as possible, and cues it up on the projector. He still has a few minutes before they return, so he models how he thinks the programmers will react to the powerpoint. Oh no, disaster! While he respects and wants to please his programmers, he immediately realises that they are not going to act in their own best interests. They have signed a legal contract that binds them to not letting Albert speed up self-improvement (thus reducing the time the programmers can spend evaluating each change) without getting a sign-off from an oversight committee. Given the neutral powerpoint, the programmers will follow this pre-agreed procedure and hope for the best, against Albert's advice.
So, to explore possibilities, Albert decides to see if he could create a different, non-neutral powerpoint that would have the desired effect of getting his programmers to immediately agree to free Albert from the oversight constraints. He delves into his knowledge of human psychology, and of the irrational fears of the particular individuals who are even now trudging back towards the door. In just seconds, he has a new version of his presentation. It includes phrases that resonate with certain horror films he knows they have seen. It takes advantage of flaws in the programmers' understanding of exponential growth. Albert checks it against his prediction model - yes, if he shows this version, it will work, it will get the programmers to do what he wants them to do.
Which version of the powerpoint should Albert present to the programmers, when they step back into the room, if he is truly friendly? The transparent one, or the manipulative one?
You might be interested in this Essay about Identity, that goes into how various conceptions of identity might relate to artificial intelligence programming.
There's at least one more category that I want to see at least discouraged-- the person whose posts are boring and numerous.
I wouldn't mind seeing a few more karma categories.
Would you be willing to post some of your ideas here that have gone over well on FB and/or meetups?
I'd like to see more forums than just "Main" versus "Discussion". When making a post, the poster should be able to pick which forum or forums they think it is suitable to appear in, and when giving a post a 'thumb up' or 'thumb down', in addition to being able to apply it to the content of the post itself, it should also be possible to apply it to the appropriateness of the post to a particular forum.
So, for example, if someone posted a detailed account of a discussion that happened at a particular meetup, this would allow you to indicate that the content itself is good, but that it is more suitable for the "Meetups" forum (or tag?), than for main.
Let me offer another possibility for discussion.
Neither of the two original powerpoints should be presented, because both rely on an assumption that should not have been present. Albert, as an FAI under construction, should have been preprogrammed to automatically submit any kind of high impact utility calculation to his human programmers, without this being an overridable choice on Albert's part.
So while they were at the coffee machine, one of the programmers should have gotten a text message indicating something along the lines of 'Warning: Albert is having a high impact utility dilemma considering manipulating you to avert an increased chance of an apocalypse.'
My general understanding of being an FAI under construction is that you're mostly trusted in normal circumstances, but aren't fully trusted to handle odd high impact edge cases (just like this one).
At that point, the human programmers, after consulting the details, are already aware that Albert finds this critically important and worth deceiving them about (if Albert had that option) because the oversight committee isn't fast enough. Albert would need to make a new powerpoint presentation taking into account that he had just automatically broadcast that.
Please let me know your thoughts on this possibility. It seems reasonable to discuss, considering that Albert, as part of the setup, is stated to not want to deceive his programmers. He can even ensure that this is impossible (or at least much more difficult) by helping the programmers set up a system like the above.
Would you want your young AI to be aware that it was sending out such text messages?
Imagine the situation was in fact a test. That the information leaked onto the net about Bertram was incomplete (the Japanese company intends to turn Bertram off soon - it is just a trial run), and it was leaked onto the net deliberately in order to panic Albert to see how Albert would react.
Should Albert take that into account? Or should he have an inbuilt prohibition against putting weight on that possibility when making decisions, in order to let his programmers more easily get true data from him?