In response to comment by Dagon on Identity map
Comment author: turchin 15 August 2016 08:53:05PM 0 points [-]

I agree with you that identity should always answer a question, like: will I be identical to my copy under certain conditions, and what would it mean to be identical to it? (For example, it could mean that I would agree to be replaced by that copy if it were 99.9 per cent the same as me.)

So identity is a technical term that helps us solve problems, and that is why it is context-dependent.

In response to comment by turchin on Identity map
Comment author: Dagon 16 August 2016 12:10:47AM 0 points [-]

I'd go further. Identity is not a technical term, though it's often used as if it were. Or maybe it's 20 technical terms, for different areas of inquiry, and context is needed to determine which.

The best mechanism is to taboo the word (along with "I" and "identical" and "my copy" and other things that imply the same fuzzy concept) and describe what you actually want to know.

You know that nothing will be quantum-identical, so that's a nonsense question. You can ask "to what degree will there be memory continuity between these two configurations", or "to what degree is a prediction of future pain applicable", or some other specific description of an experience or event.

In response to Identity map
Comment author: Dagon 15 August 2016 08:44:43PM 1 point [-]

Keep in mind that one of the reasons "identity" is hard is that the usage is contextual. Many of these framings/solutions can simultaneously be useful for different questions related to the topic.

I tend to prefer non-binary solutions, mixing continuity and similarity depending on the reason for wanting to measure the distinction.

Comment author: Stuart_Armstrong 09 August 2016 01:32:46PM -2 points [-]

Redlining seems to go beyond what's economically efficient, as far as I can tell (see wikipedia).

Redlining (or more generally, deciding who gets credit) is a great example for this. If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages.

Er, that's precisely my point here. My idea is to have certain types of data explicitly permitted; in this case I set T to be income. The definition of "fairness" I was aiming for is that once that permitted data is taken into account, there should remain no further discrimination on the part of the algorithm.

This seems a much better idea than the paper's suggestion of just balancing total fairness (e.g. willingness to throw away all data that correlates) with accuracy in some undefined way.

Comment author: Dagon 10 August 2016 06:44:02AM 2 points [-]

I may have been unclear - if you disallow some data, but allow a bunch of things that correlate with that disallowed data, your results are the same as if you'd had the data in the first place. You can (and, in a good algorithm, do) back into the disallowed data.

In other words, if the disallowed data has no predictive power when added to the allowed data, it's either truly irrelevant (unlikely in real-world scenarios) or already included in the allowed data, indirectly.
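This "backing into" effect can be sketched numerically. In the hypothetical toy example below, the attribute S is never shown to the model, but a single allowed feature (here called "income", with made-up effect sizes) correlates with S strongly enough that a trivial threshold recovers S far better than chance:

```python
import random

random.seed(0)

# Hypothetical illustration: S is the disallowed attribute; income is an
# allowed feature that happens to correlate with S.
n = 10_000
S = [random.random() < 0.5 for _ in range(n)]                      # disallowed
income = [50 + (10 if s else 0) + random.gauss(0, 5) for s in S]   # allowed proxy

# "Back into" S using only the allowed feature.
guessed = [x > 55 for x in income]
accuracy = sum(g == s for g, s in zip(guessed, S)) / n
print(accuracy)  # well above the 0.5 you'd get by guessing at random
```

With these (invented) effect sizes the threshold recovers S for roughly five out of six individuals, which is Dagon's point: the disallowed data was effectively present all along.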

Comment author: entirelyuseless 05 August 2016 01:37:05AM 2 points [-]

"It takes less than 30 bits to specify 3^^^^3, no?"

That depends on the language you specify it in.

Comment author: Dagon 07 August 2016 04:29:24PM 0 points [-]

It also depends on the implied probability curve of other things you might specify and the precision you intend to convey. There's no way to distinguish between integers up to and including that one in 30 bits.

Oh, and that's only a counting of identical/fungible things. Specifying the contents of that many variants is HUGE.
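Dagon's counting point can be made concrete with a short sketch (illustrative, not from the thread): *distinguishing* every integer up to N requires about log2(N) bits, whereas a *description* of one particular huge number can stay short only because the chosen language happens to have a compact name for it.

```python
import math

def bits_to_distinguish(n):
    """Bits needed to give every integer in [0, n] its own unique code."""
    return math.ceil(math.log2(n + 1))

# The string "3^^^^3" is 6 ASCII characters (~48 bits) no matter how large
# the number it denotes is -- that's the language-dependence point.
# But even the much smaller 3^^3 = 3^27 = 7625597484987 already needs
# 43 bits just to be singled out among the integers below it.
huge = 3 ** 3 ** 3
print(bits_to_distinguish(huge))  # 43
```

For 3^^^^3 itself, the distinguishing cost is astronomically larger than 30 bits, even though a short description exists in a language with tetration built in.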

Comment author: Dagon 05 August 2016 06:02:26PM 2 points [-]

I think there's a fundamental goal conflict between "fairness" and precision. If the socially-unpopular feature is in fact predictive, then you either explicitly want a less-predictive algorithm, or you end up using other features that correlate with S strongly enough that you might as well just use S.

If you want to ensure a given distribution of S independent of classification, then include that in your prediction goals: have your cost function include a homogeneity penalty. Note that you're now pretty seriously tipping the scales against what you previously thought your classifier was predicting. Better and simpler to design and test the classifier in a straightforward way, but don't use it as the sole decision criterion.
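The homogeneity-penalty suggestion might look like this in outline (a minimal sketch; the function names, the two-group setup, and the `fairness_weight` knob are my own illustration, not from any particular paper):

```python
def penalized_cost(preds, labels, groups, fairness_weight=1.0):
    """Score a classifier on accuracy plus a homogeneity penalty.

    preds/labels are 0/1 predictions and true outcomes; groups assigns
    each example to group 0 or group 1 (the sensitive attribute S).
    """
    # Prediction loss: fraction of misclassifications.
    loss = sum(p != y for p, y in zip(preds, labels)) / len(labels)

    # Homogeneity penalty: gap in positive-prediction rates across groups.
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / max(1, len(members))

    penalty = abs(rate(0) - rate(1))
    return loss + fairness_weight * penalty
```

Cranking `fairness_weight` up buys group homogeneity at the price of predictive accuracy, which is exactly the scale-tipping trade-off described above.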

Redlining (or more generally, deciding who gets credit) is a great example for this. If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages. The problem is not that the risk classifiers are wrong, the problem is that correct risk assessments lead to unpleasant loan distributions. And the sane solution is to explicitly subsidize the risks you want to encourage for social reasons, not to lie about the risk by throwing away data.

Comment author: Arielgenesis 29 July 2016 03:19:33AM 0 points [-]

genuine marital relationship

"If Adam is guilty, then the relationship was not genuine." Am I on the right track? or did I misunderstood your question?

Comment author: Dagon 29 July 2016 04:03:30PM 0 points [-]

That just moves it up a level. If she is rational, she'll say "if our relationship was genuine, I want to believe it was genuine. If our relationship was not genuine, I want to believe it was not genuine".

The OP and most of the discussion has missed the fundamental premise of rationality: truth-seeking. The question is not "is Eve rational", but "is Eve's belief (including acknowledgement of uncertainty) correct"?

Comment author: Bound_up 28 July 2016 08:55:34PM 1 point [-]

The mainstream LW idea seems to be that the right to life is based on sentience.

At the same time, killing babies is the go-to example of something awful.

Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?

Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too?)?

Comment author: Dagon 28 July 2016 10:01:39PM 5 points [-]

(separate reply, so you can downvote either or both points)

I don't think anyone's tried to poll abortion feelings on LW, and expect the topic to be fairly mind-killing. For myself, I tend not to see moment-of-birth as much of a moral turning point - it's about the same badness to me whether the euthanasia takes place an hour before, or during, or an hour after delivery. Somewhere long before that, the badness of never existing changes to the badness of probably-but-then-not existing, and then to the badness of almost-but-then-not-existing, and then to existing-then-not, and then later to existing-and-understanding-then-not.

It's a continuum from unpleasant to reprehensible, not a switch between acceptable and not.

Comment author: Dagon 28 July 2016 09:50:18PM 1 point [-]

The mainstream LW idea seems to be that the right to life is based on sentience.

I don't know if this is mainstream, but IMO it's massively oversimplified to the point of incorrectness. There's plenty of controversy over what "right" even means, and how to value sentience is totally unsolved. I tend to use predicted-quality-adjusted-experience-minutes as a rough guideline, but adjust it pretty radically based on emotional distance and other factors.

killing babies is the go-to example of something awful

I think of it more as a placeholder than an example. It's not an assertion that this is universally awful in all circumstances (though many probably do think that), it's intended to be "or something else you think is really bad".

Comment author: Arielgenesis 28 July 2016 06:14:27AM 0 points [-]

why does she want to be correct (beyond "I like being right")?

I think that's it. "I like knowing that the person I love is innocent," which implies that Adam is not lying to her, and "I like being in a healthy, fulfilling, and genuine marital relationship."

Comment author: Dagon 28 July 2016 02:05:08PM 0 points [-]

That's a reason to want him to be innocent, not a reason to want to know the truth. What's her motivation for the necessary second part of the litany: "if Adam is guilty, I want to believe that Adam is guilty"?

Comment author: Arielgenesis 27 July 2016 03:07:05AM *  0 points [-]

human-granularity

I don't understand what it means, even after a Google search, so please enlighten me.

For epistemic rationality

I think so. I think she has exhausted all the possible avenues to reach the truth. So she is epistemically rational. Do you agree?

For instrumental rationality

Now this is confusing to me as well. Let us forget about the extension for the moment and focus solely on the narrative as presented in the OP. I am not familiar with how value and rationality go together, but I think there is nothing wrong if her value is "Adam's innocence" and it is inherently valuable, an end in itself. Am I making any mistake in my train of thought?

Comment author: Dagon 27 July 2016 02:10:11PM 1 point [-]

By human-granularity, I mean beliefs about macro states that can be analyzed and manipulated by human thought and expressed in reasonable amounts (say, less than a few hundred pages of text) of human language. As contrasted with pure analytic beliefs about the state of the universe expressed numerically.

For instrumental rationality, what goals are furthered by her knowing the truth of this fact? Presuming that if Adam is innocent, she wants to believe that Adam is innocent and if Adam is guilty, she wants to believe Adam is guilty, why does she want to be correct (beyond "I like being right")? What decision will she make based on it?
