In response to comment by Dagon on Identity map
Comment author: turchin 16 August 2016 08:14:42AM 2 points [-]

I have seen people who attempted to do this in real life, and they say things like "my brain knows that he wants to go home" instead of "I want to go home".

The problem is that even if we get rid of an absolute Self and Identity, we still have a practical idea of "me", which is built into our brains, thinking and language. And without it any planning is impossible: I can't go to the shop without expecting that I will get dinner in an hour. But all the problems with identity are also practical: should I agree to be uploaded, etc.

There is also the problem of the oneness of subjective experience. That is, there is a clear difference between a situation where I will experience pain and one where another person will. While from an EA point of view they are almost the same, that is only a moral upgrade of this fact.

In response to comment by turchin on Identity map
Comment author: Dagon 16 August 2016 03:50:57PM 1 point [-]

"my brain knows that he wants go home" instead of "I want to go home".

I'll admit to using that framing sometimes, but mostly for amusement. In fact, it doesn't solve the problem, as now you have to define continuity/similarity for "my brain" - why is it considered the same thing over subsequent seconds/days/configurations?

I didn't mean to say (and don't think) that we shouldn't continue to use the colloquial "me" in most of our conversations, when we don't really need a clear definition and aren't considering edge-cases or bizarre situations like awareness of other timelines. It's absolutely a convenient, if fuzzy and approximate, set of concepts.

I just meant that in the cases where we DO want to analyze boundaries and unusual situations, we should recognize the fuzziness and multiplicity of concepts embedded in the common usage, and separate them out before trying to use them.

In response to comment by Dagon on Identity map
Comment author: turchin 15 August 2016 08:53:05PM 0 points [-]

I agree with you that identity should always answer a question, like whether I will be identical to my copy under certain conditions, and what it would mean to be identical to it (for example, it could mean that I would agree to be replaced by that copy if it were 99.9 per cent the same as me).

So identity is a technical term which helps us to solve problems, and that is why it is context-dependent.

In response to comment by turchin on Identity map
Comment author: Dagon 16 August 2016 12:10:47AM 0 points [-]

I'd go further. Identity is not a technical term, though it's often used as if it were. Or maybe it's 20 technical terms, for different areas of inquiry, and context is needed to determine which.

The best mechanism is to taboo the word (along with "I" and "identical" and "my copy" and other things that imply the same fuzzy concept) and describe what you actually want to know.

You know that nothing will be quantum-identical, so that's a nonsense question. You can ask "to what degree will there be memory continuity between these two configurations", or "to what degree is a prediction of future pain applicable", or some other specific description of an experience or event.

In response to Identity map
Comment author: Dagon 15 August 2016 08:44:43PM 1 point [-]

Keep in mind that one of the reasons "identity" is hard is that the usage is contextual. Many of these framings/solutions can simultaneously be useful for different questions related to the topic.

I tend to prefer non-binary solutions, mixing continuity and similarity depending on the reason for wanting to measure the distinction.

Comment author: Stuart_Armstrong 09 August 2016 01:32:46PM -2 points [-]

Redlining seems to go beyond what's economically efficient, as far as I can tell (see Wikipedia).

Redlining (or more generally, deciding who gets credit) is a great example for this. If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages.

Er, that's precisely my point here. My idea is to have certain types of data explicitly permitted; in this case I set T to be income. The definition of "fairness" I was aiming for is that once that permitted data is taken into account, there should remain no further discrimination on the part of the algorithm.

This seems a much better idea than the paper's suggestion of just balancing total fairness (e.g. willingness to throw away all data that correlates) with accuracy in some undefined way.
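That "no further discrimination once T is accounted for" criterion can be sketched in code. The sketch below is hypothetical — the decision rule, the data, and the name `income_band` (standing in for the permitted variable T) are all invented for illustration: if decisions depend only on T, then fixing T leaves no residual dependence on the sensitive attribute S.

```python
from collections import defaultdict

# Hypothetical sketch of the fairness notion above: decisions that use
# only the permitted variable T (here an invented "income band") show
# no further dependence on the sensitive attribute S once T is fixed.

def approve(income_band):
    # Decision rule that looks only at the permitted variable T.
    return income_band >= 2

# (income_band, group, decision) records generated from the rule.
records = [(t, s, approve(t)) for t in range(4) for s in (0, 1)]

# Within each income band, confirm both groups get identical treatment.
by_band = defaultdict(dict)
for t, s, d in records:
    by_band[t][s] = d

fair = all(bands[0] == bands[1] for bands in by_band.values())
print(fair)  # → True: no discrimination remains once T is fixed
```

Raw approval rates can still differ between groups if T itself is distributed differently across them — that is exactly what this definition permits.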

Comment author: Dagon 10 August 2016 06:44:02AM 2 points [-]

I may have been unclear - if you disallow some data, but allow a bunch of things that correlate with that disallowed data, your results are the same as if you'd had the data in the first place. You can (and, in a good algorithm, do) back into the disallowed data.

In other words, if the disallowed data has no predictive power when added to the allowed data, it's either truly irrelevant (unlikely in real-world scenarios) or already included in the allowed data, indirectly.
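That "backing into" the disallowed data can be shown with a toy simulation (all data and names here are invented): a majority-vote model that never sees the disallowed attribute S still recovers it from an allowed correlate.

```python
import random

random.seed(1)

# Toy demonstration of the point above, with invented data: S is never
# given to the model, but an allowed correlate (a made-up
# "neighborhood" field) lets us back into it.

def sample():
    s = random.randint(0, 1)              # disallowed attribute S
    base = 0 if s == 0 else 5
    if random.random() < 0.9:             # 90% live in group-typical areas
        hood = base + random.randint(0, 4)
    else:
        hood = (5 - base) + random.randint(0, 4)
    return hood, s

data = [sample() for _ in range(10_000)]

# Estimate S from the allowed field alone: majority vote per neighborhood.
counts = {}
for hood, s in data:
    ones, total = counts.get(hood, (0, 0))
    counts[hood] = (ones + s, total + 1)

def predict_s(hood):
    ones, total = counts[hood]
    return 1 if ones * 2 >= total else 0

accuracy = sum(predict_s(h) == s for h, s in data) / len(data)
print(f"S recovered from allowed data with accuracy {accuracy:.2f}")
```

Since the correlate pins down S about 90% of the time here, any model allowed to use it behaves much as if it had been given S directly.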

Comment author: entirelyuseless 05 August 2016 01:37:05AM 2 points [-]

"It takes less than 30 bits to specify 3^^^^3, no?"

That depends on the language you specify it in.

Comment author: Dagon 07 August 2016 04:29:24PM 0 points [-]

It also depends on the implied probability curve of other things you might specify and the precision you intend to convey. Thirty bits can only distinguish 2^30 values, so there's no way to distinguish between all the integers up to and including that one in 30 bits.

Oh, and that's only a counting of identical/fungible things. Specifying the contents of that many variants is HUGE.
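Both points — that bit cost depends on the encoding language, and that n bits only distinguish 2^n values — can be checked with the far smaller tower 3^^3 = 3^27 (a sketch, nothing here from the thread itself):

```python
# Illustration using 3^^3 (vastly smaller than 3^^^^3): how many bits a
# number "takes" depends on the encoding language, and n bits can only
# distinguish 2**n values.

n = 3 ** 27                # 3^^3 = 3^(3^3) in Knuth up-arrow notation
print(n.bit_length())      # → 43 bits as a plain binary integer

# The string "3^^3" is only 4 characters, so a language with up-arrow
# notation names it far more cheaply than binary does. And 30 bits can
# address at most 2**30 distinct values:
print(2 ** 30)             # → 1073741824, far fewer than even 3^^3
```

For 3^^^^3 the plain-binary encoding is astronomically long, while the up-arrow string stays six characters — hence "it depends on the language".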

Comment author: Dagon 05 August 2016 06:02:26PM 2 points [-]

I think there's a fundamental goal conflict between "fairness" and precision. If the socially-unpopular feature is in fact predictive, then you either explicitly want a less-predictive algorithm, or you end up using other features that correlate with S strongly enough that you might as well just use S.

If you want to ensure a given distribution of S independent of classification, then include that in your prediction goals: have your cost function include a homogeneity penalty. Note that you're now pretty seriously tipping the scales against what you previously thought your classifier was predicting. Better and simpler to design and test the classifier in a straightforward way, but don't use it as the sole decision criterion.

Redlining (or more generally, deciding who gets credit) is a great example for this. If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages. The problem is not that the risk classifiers are wrong, the problem is that correct risk assessments lead to unpleasant loan distributions. And the sane solution is to explicitly subsidize the risks you want to encourage for social reasons, not to lie about the risk by throwing away data.
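The homogeneity-penalty idea above can be sketched as a cost function (a hypothetical illustration — `lam`, the data, and the logistic model are all invented here): ordinary log loss for predictive accuracy, plus a term that punishes gaps in average predicted score between groups of S.

```python
import math
from statistics import mean

# Hypothetical sketch of a cost function with a homogeneity penalty:
# log loss plus lam times the squared gap between the groups' mean
# predicted scores. lam = 0 is plain accuracy-seeking; larger lam
# trades predictive fit for more homogeneous score distributions.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost(w, b, xs, ys, groups, lam):
    preds = [sigmoid(w * x + b) for x in xs]
    eps = 1e-12  # guard against log(0)
    log_loss = -mean(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                     for y, p in zip(ys, preds))
    m0 = mean(p for p, g in zip(preds, groups) if g == 0)
    m1 = mean(p for p, g in zip(preds, groups) if g == 1)
    return log_loss + lam * (m0 - m1) ** 2

xs = [0.0, 1.0, 2.0, 3.0]   # a single invented feature
ys = [0, 0, 1, 1]           # labels
groups = [0, 0, 1, 1]       # sensitive attribute S

# The penalty strictly raises the cost whenever group means differ.
print(cost(1.0, -1.5, xs, ys, groups, lam=0.0) <
      cost(1.0, -1.5, xs, ys, groups, lam=10.0))  # → True
```

This makes the trade-off explicit in the objective, rather than hiding it by throwing away correlated data — which is the comment's point about preferring explicit subsidy over distorted risk assessment.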

Comment author: Arielgenesis 29 July 2016 03:19:33AM 0 points [-]

genuine marital relationship

"If Adam is guilty, then the relationship was not genuine." Am I on the right track? or did I misunderstood your question?

Comment author: Dagon 29 July 2016 04:03:30PM 0 points [-]

That just moves it up a level. If she is rational, she'll say "if our relationship was genuine, I want to believe it was genuine. If our relationship was not genuine, I want to believe it was not genuine".

The OP and most of the discussion has missed the fundamental premise of rationality: truth-seeking. The question is not "is Eve rational", but "is Eve's belief (including acknowledgement of uncertainty) correct"?

Comment author: Bound_up 28 July 2016 08:55:34PM 1 point [-]

The mainstream LW idea seems to be that the right to life is based on sentience.

At the same time, killing babies is the go-to example of something awful.

Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?

Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too)?

Comment author: Dagon 28 July 2016 10:01:39PM 5 points [-]

(separate reply, so you can downvote either or both points)

I don't think anyone's tried to poll abortion feelings on LW, and expect the topic to be fairly mind-killing. For myself, I tend not to see moment-of-birth as much of a moral turning point - it's about the same badness to me whether the euthanasia takes place an hour before, or during, or an hour after delivery. Somewhere long before that, the badness of never existing changes to the badness of probably-but-then-not existing, and then to the badness of almost-but-then-not-existing, and then to existing-then-not, and then later to existing-and-understanding-then-not.

It's a continuum from unpleasant to reprehensible, not a switch between acceptable and not.


Comment author: Dagon 28 July 2016 09:50:18PM 1 point [-]

The mainstream LW idea seems to be that the right to life is based on sentience.

I don't know if this is mainstream, but IMO it's massively oversimplified to the point of incorrectness. There's plenty of controversy over what "right" even means, and how to value sentience is totally unsolved. I tend to use predicted-quality-adjusted-experience-minutes as a rough guideline, but adjust it pretty radically based on emotional distance and other factors.

killing babies is the go-to example of something awful

I think of it more as a placeholder than an example. It's not an assertion that this is universally awful in all circumstances (though many probably do think that); it's intended to be "or something else you think is really bad".

Comment author: Arielgenesis 28 July 2016 06:14:27AM 0 points [-]

why does she want to be correct (beyond "I like being right")?

I think that's it: "I like knowing that the person I love is innocent," which implies that Adam is not lying to her, and "I like being in a healthy, fulfilling and genuine marital relationship."

Comment author: Dagon 28 July 2016 02:05:08PM 0 points [-]

That's a reason to want him to be innocent, not a reason to want to know the truth. What's her motivation for the necessary second part of the litany: "if Adam is guilty, I want to believe that Adam is guilty"?
