All of binary_doge's Comments + Replies

The problem with commentary not made by the players themselves is that, as far as I understand it, the project wants the general thoughts of the player and not just the motivations for every specific move. Ideally, they want some stream-of-consciousness commentary, in the style of "oh look, that guy looks kind of tough, I'll go see if I can aggro him. Oh no! he's way too strong... let's go hide behind this tree, it looks kinda safe [...]". That's why I suggested the let's plays and not e-sports in general.

If they're ok with just noise-free motivational analysis, anything with good commentators might work, and chess is indeed a pretty clean option. 

Not sure if it was suggested already or not, but one option is to look for "let's play" style videos for some game (it's probably going to be hard to find one that's simple enough) and take the spoken text the YouTuber says as thoughts. Some of them already have the transcript as subtitles.

In the same vein, one could look for people who explain their choices in games with very clear decisions, like chess. I once saw a booklet of chess games where the actual player explained most of his moves. If there is a way to get many of those, that might work.
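For what it's worth, here is a minimal sketch of how the subtitle idea above could be automated. It assumes the third-party youtube-transcript-api package and a placeholder video ID; both are my assumptions, not something from the thread.

```python
# Rough sketch: pull a let's-play video's subtitles and treat each snippet as a "thought".
# Assumes `pip install youtube-transcript-api`; older versions expose get_transcript(),
# newer releases may use a different interface.
from youtube_transcript_api import YouTubeTranscriptApi

def lets_play_commentary(video_id: str) -> list[str]:
    """Return the spoken commentary of one video as a list of subtitle snippets."""
    transcript = YouTubeTranscriptApi.get_transcript(video_id)  # list of {'text', 'start', 'duration'}
    return [entry["text"] for entry in transcript]

if __name__ == "__main__":
    # "VIDEO_ID_HERE" is a placeholder, not a real video.
    for snippet in lets_play_commentary("VIDEO_ID_HERE"):
        print(snippet)
```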

8 oge
What if we use the commentary from chess games as thoughts?

" So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices? "

In total uti it is ok. This is counter-intuitive, so this model fixes it, and it's no longer ok. Again, that's the reason the penalty is there.

The absolute identical copy trick might be ok, and might not be ok, but this is beside the point. If a completely identical copy is defined as being the same person, then you didn't replace anybody and the entire question is moot. If it's not, then you killed someone, which is bad, and it ought to be reflected in the model (which it is, as of now).

1 Joe Collman
There's still the open question of "how bad?". Personally, I share the intuition that such replacement is undesirable, but I'm far from clear on how I'd want it quantified. The key situation here isn't "kill and replace with person of equal happiness", but rather "kill and replace with person with more happiness". DNT is saying there's a threshold of "more happiness" above which it's morally permissible to make this replacement, and below which it is not. That seems plausible, but I don't have a clear intuition where I'd want to set that threshold.
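One way to write that claim down (my notation, not necessarily the post's): with $h_{\text{old}}$ and $h_{\text{new}}$ the expected future happiness of the original person and the replacement, DNT posits a threshold $\theta > 0$ such that

$$\text{replacement is permissible} \iff h_{\text{new}} - h_{\text{old}} > \theta,$$

and the open question is what value of $\theta$ matches our intuitions.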
4 Decius
In order to penalize something that probably shouldn't be explicitly punished, you're requiring that identity be well-defined.

It doesn't change your actual happiness, just the future one. If you are literally shot with a sniper rifle while walking in the street with no warning, there is no time in which you are saddened by your death. You just are, and then aren't. What is lost is all the happiness that you would have otherwise experienced. Assume the guy is shot in the head, so there's no bleeding out part.

I'm not sure where the -1000 number comes from. There is no point at which the person who was shot feels 1000 less happiness than before. Saying "the act... (read more)

2 Decius
So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices? Is it okay for someone to change their mind about what they were going to do, and produce equal happiness doing something else? Is it okay to kill someone and replace them with an absolutely identical copy, where nobody notices including the deceased, if the new person changes their mind about what they were going to do and ends up producing equal happiness doing something else?

The "replace" in the original problem is ending one human and creating (in whatever way) another one. I don't think you understand the scenario.

In total uti (in the human world), it is okay to:

kill someone, provided that by doing so you bring into the world another human with the same happiness. For the sake of argument, let's assume happiness potential is genetically encoded. So if you kill someone, you can always say "that's ok guys, my wife just got pregnant with a fetus bearing the same genetic code as the guy I just murdered"... (read more)

7 Joe Collman
I just want to note here for readers that the following isn't correct (but you've already made a clarifying comment, so I realise you know this): Total uti only says this is ok if you leave everything else equal (in terms of total utility). In almost all natural situations you don't: killing someone influences the happiness of others too, generally negatively.

The penalty doesn't reset when you create a new human. You are left with the negative value that the killed human left behind, and the new one starts off with a fresh amount of -u0[new person] to compensate for. If the original human had been left alive, he would have compensated for his own, original -u0[original person], and the entire system would have produced a higher value.
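A toy version of that accounting, assuming a fixed birth penalty $u_0$ per new person and a full-life happiness of $H$ (my numbers and notation, just to make the comparison concrete):

$$\underbrace{H - u_0}_{\text{original lives a full life}} \;>\; \underbrace{\left(\tfrac{H}{2} - u_0\right) + \left(\tfrac{H}{2} - u_0\right)}_{\text{killed at midlife, replaced by an equally happy person}} = H - 2u_0,$$

so kill-and-replace comes out exactly one uncompensated birth penalty worse.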

If you don't think killing is in itself bad, then your intuition differs from almost everybody's. Legit.

I personally would rather never have been born, but I don't want to commit suicide. There are numerous reasons: hurting the people who care about me (and who wouldn't be hurt had I not been born in the first place), fearing pain or the act of suicide itself, fearing death (both are emotional axioms that a lot of people have, and there's no point in debating them rationally), and many others.

5 Dagon
To be clear, I didn't say anything about killing. I said "replace". This isn't possible with humans, but picture the emulation world, where an entity can be erased with no warning or sensation, and a fully-developed one can be created at will. Even then, practically it would be impermissible to do a same-value replacement, both due to uncertainty and for negative effects on other lives.

In the human world, OF COURSE killing (and more generally, dying) is bad. My point is that the badness is fully encoded in the reduction in h of the victim, and the reduced levels of h of those who survive the victim. It doesn't need to be double-counted with another term.

I'm extremely saddened to know this. And it makes me feel mean to stick to my theme of "already included in h, no need for another term". The fear of death, expectation of pain, and impact on others are _all_ differences in h which should not be double-counted. Also, I very much hope that in a few years or decades, you'll look back and realize you were mistaken in wishing you hadn't been born, and are glad you persevered, and are overall glad you experienced life.

Being killed doesn't change your expected happiness; knowing you will be killed does. That's different. If you want to separate variables properly, think about someone being gunned down randomly with no prior indication. Being killed just means ending you prematurely, and denying you the happiness you would have had were you alive. A good model will reflect why that's bad even if you replace the killed person with someone who would compensate for the future loss in happiness.

Pragmatically speaking, killing people causes unhappiness because it... (read more)

2 Decius
Being killed changes your actual happiness, compared to not being killed. I should not have used 'expected happiness' to refer to h|"not killed". I'm counting 'the act of being gunned down' as worth -1000 utility in itself, in addition to cancelling all happiness that would accumulate afterwards, and assuming that the replacement person would compensate for all of the negative happiness that the killing caused. Basically, I'm saying that I expect bleeding out after a gunshot wound to suck, a lot. The replacement compensating for the loss in happiness starts from a hole the size of the killing. I'm assuming that whatever heuristic you're using survives the transporter paradox; killing Captain Kirk twice a day and replacing him with an absolutely identical copy (just in a different location) is not bad.

The birth penalty fixes a lot of unintuitive consequences of classic total uti. For example, if you treat every "new" person as catching up to the penalty (which can only be achieved if you at least live with minimal acceptable happiness for your entire life, aka h0), then killing a person and replacing him with someone of equal happiness is bad. That's because the penalty that was not yet caught up with in the killed person remains as a negative quantity in the total utility, a debt, if you will. In total uti, this doesn't apply and it logically fo... (read more)

2 Decius
Is the intuition about killing someone and replacing them with someone who will experience equal total happiness assuming that killing someone directly causes a large drop in total happiness, but that the replacement only has total happiness equal to what the killed moral patient would have had without the killing? Because my intuition is that if the first entity had expected future happiness of 100, but being killed changed that to -1000, their replacement, in order for them to result in 'equal happiness' must have expected future happiness of 1100, not 100. Intuitively, the more it sucks to be killed, the more benefit is required for it to be not wrong to kill someone.
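Spelling the arithmetic out with Decius's numbers: if being killed changes the victim's expected future happiness from $+100$ to $-1000$, then a replacement produces "equal happiness" only when

$$-1000 + x = 100 \;\Rightarrow\; x = 1100,$$

i.e. the replacement needs expected future happiness of $1100$, not $100$.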
5 Dagon
Huh. I guess my intuitions are different enough that we're just going to disagree on this. I don't think it's problematic to replace a being with an equally-happy one (presuming it's painless and not happiness-reducing for those around them). And I don't understand how one can prefer not to die _AND_ not be happier to exist than not.

This might be trivial, but in the most basic sense, noticing where one has blind spots can be done by first noticing where one's behavior differs from how he predicted he would behave, or from how the people around him behave. If you thought some task was going to be easy and it's not, or that you would get mixed results in predicting something and you don't (even if you think you might be more accurate than average, what's important here is the difference), you might be neglecting something important.

It's kind of similar to the way some expert AI s... (read more)

"This is because planetary physics can be formalized relatively easily" - they can now, and could when they were, but not before. One can argue that we thought many "complex" and very "human" abilities could not be algroithmically emulated in the past, and recent advances in AI (with neural nets and all that) have proven otherwise. If a program can do/predict something, there is a set of mechanical rules that explain it. The set might not be as elegant as Newton's laws of motion, but it is still a set of equations noneth... (read more)

Then that's an unnecessary assumption about Aboriginals. Take a native Madagascan instead (arbitrary choice of ethnicity) and he might not.

As far as I know, it is not true, and certainly not based on any concrete evidence, that humans must see intentional patterns in everything. Not every culture thought cloud patterns were a language, for example. In such a culture, the one beholding the sky doesn't necessarily think it displays the actions of an intentional agent recording a message. The same can be true for Chinese scribbles.

If what you're ... (read more)

But the fact that it is purposeful writing, for example by a spirit, is an added assumption... SCA doesn't have to think that; she could think it's randomly generated scribbles made by nature. Like how she doesn't think the rings on the inside of a tree are a form of telling a story. They are just meaningless signs. And if she does not think the signs have meaning, your statements don't follow (having scribbles doesn't mean that some other agent necessarily made them, and since the scribbles don't point to anything in reality there ... (read more)

1 Valerio
Uhm, an Aboriginal tends to see meaning in anything. The more the regularities, the more meaning she will form. Semiosis is the dynamic process of interpreting these signs. If you were put in a Chinese room with no other input than some incomprehensible scribbles, you would probably start considering that what you are doing indeed has a meaning. Of course, a less intelligent human in the room, or a human put under pressure, would not be able to understand Chinese even with the right algorithm. My point is that the right algorithm enables the right human to understand Chinese. Do you see that?

I need some clarification on what seems to be a hidden assumption here... Correct me if I'm wrong, but you seem to be assuming that SCA knows that the symbols she is getting are representations of something in the universe (i.e. that they are language).

Let's assume that SCA thinks she is copying the patterns that naturally dripping sap creates on the sand on the floor of a cave.

It follows that none of these statements are inferred:

"Moreover, it is logical that when something is read, somebody wrote it."

"[...] she observes tha... (read more)

1 Valerio
SCA infers that "somebody wrote that", where the term "somebody" is used more generally than in English. SCA does not infer that another human being wrote that, but rather that a causal agent wrote that, maybe spirits of the caves. If SCA enters two caves and observes natural patterns in cave A and the characters of "The adventures of Pinocchio" in cave B, she may deduce that two different spirits wrote them. Although she may discover some patterns in what spirit A (natural phenomena) wrote, she won't be able to discover a grammar as complex as in cave B.

Spirit B often wrote the sequence "oor ", preceded sometimes by a capital " P", sometimes by a small " p". Therefore, she infers that the symbols "p" and "P" are similar (at first, she may also group "d" with them, but she may correct that thanks to additional observations). There is no hidden assumption that SCA knows she is observing a language in cave B. SCA is not a taught cryptographer, but rather an Aboriginal cryptographer. She performs statistical pattern matching only and makes the hypothesis that spirit B may have represented the concept of writing by using the sequence of letters "said". She discards other hypotheses that just a single character may correspond to the concept of writing (although she has some doubt with ":"). She discards other hypotheses that capitalised words are words reported to be written. On the other hand, direct discourse in "The adventures of Pinocchio" supports her hypothesis about "said". SCA keeps generating hypotheses that way, so that she learns to decode more knowledge, without needing to know that the symbols are language (she rather discovers the concept of language).
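For concreteness, here is a minimal sketch of the kind of statistical pattern matching described above: group symbols by the similarity of the contexts that follow them. The corpus filename, the window size, and the cosine measure are all my illustrative assumptions, not Valerio's actual procedure.

```python
# Sketch: cluster symbols that occur in similar contexts, the way "p" and "P" share "oor ".
from collections import Counter, defaultdict
from math import sqrt

def context_profiles(text, window=3):
    """For each character, count the `window`-character strings that follow it."""
    profiles = defaultdict(Counter)
    for i, ch in enumerate(text[:-window]):
        profiles[ch][text[i + 1:i + 1 + window]] += 1
    return profiles

def cosine(a, b):
    """Cosine similarity between two context-count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

text = open("pinocchio.txt").read()          # hypothetical local copy of the book
profiles = context_profiles(text)
print(cosine(profiles["p"], profiles["P"]))  # shared contexts like "oor " push this up
print(cosine(profiles["p"], profiles[":"]))  # punctuation should score much lower
```

Run on an English text, "p" and "P" should come out far more alike than "p" and ":", which is the sort of evidence SCA is described as using.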

Something in this view feels a bit circular to me; correct me if I'm way off the mark.

Question: why assume that moral intuitions are derived from pre-existing intuitions for property rights, and not the other way around?

Reply: because property rights work ("property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources"), and if they are based on some completely unrelated set of intuitions (morality) then that would be a huge coincidence.

Re-reply: yeah, but it can also be argued that morality ... (read more)

This was an awesome read. Can you perhaps explain the listed intuition of caring more about things like clock speeds than about higher cognitive functions?

The way I see it, higher cognitive functions allow long-term memories and their resurfacing, and cognitive interpretation of direct suffering, like physical pain. A hummingbird might have a 3x human clock speed, but it might be way less emotionally scarred than a human when subjected to maximum pain for, let's say, 8 objective seconds ("emotionally scarred" is a not-well-defined way of saying that more suffering will arise later due to the pain caused in the hypothetical event). That is why, IMO, most people do assign relevance to more complicated cognitions.

Thanks for the read (honestly, I noticed some very interesting points IMO), but I kind of fail to understand what exactly your claim is about the method you introduced.

Are you saying that it is a good model representation of social interaction? If so, I would partially agree. It's cool that the model captures all the mental steps all the participants are making (if you bother to completely unroll everything), but it's not computationally superior to saying that things like calling someone "a downer" are general beliefs that rely on a varying emp... (read more)

True Path has already covered it (or most of it) extensively, but both Newcomb's Problem and the distinction made in the post (if it were to be applied in a game theory setting) contain too many inherent contradictions and do not seem to actually point out anything concrete.

You can't talk about decision-making agents if they are basically not making any decisions (classical determinism, or effective precommitment in this case, enforces that). Also, you can't have a 100% accurate predictor and have freedom of choice on the other hand, beca... (read more)

2 Chris_Leong
"Also, you can't have a 100% accurate predictor and have freedom of choice on the other hand" - yes, there is a classic philosophical argument that claims determinism means that we don't have libertarian freewill and I agree with that. "You can't talk about decision-making agents if they are basically not making any decisions" - My discussion of the student and the exam in this post may help clear things up. Decisions don't require you to have multiple things that you could have chosen as per the libertarian freewill model, but simply require you to be able to construct counterfactuals. Alternatively, this post by Anna Salamon might help clarify how we can do this.