ChristianKl comments on Introducing Familiar, a quantified reasoning assistant (feedback sought!) - Less Wrong

Post author: jamesf 24 July 2013 02:36AM




Comment author: ChristianKl 27 July 2013 01:21:10PM 0 points

If you have a software system that knows relationships are important to people, knows which of your relationships are important to you, knows who you were talking to, and knows the valence, arousal, duration, frequency, etc. of your interaction with that person over time, then, yes, something like "ended a relationship today" probably could be inferred. It doesn't sound trivial, but it does sound plausible given sufficient effort.

It's possible to track 1000 different variables with your model. If you do, however, you will get a lot of false positives.
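The point about 1000 variables is the standard multiple-comparisons problem, and it can be sketched in a few lines (my illustration, not anything from the thread — the variable names, sample size, and the large-sample p < 0.05 threshold are all assumptions):

```python
# Sketch: correlate 1000 *unrelated* tracked variables against one outcome
# and count how many look "significant" by chance alone.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_vars = 500, 1000
outcome = rng.normal(size=n_samples)               # e.g. hours slept
variables = rng.normal(size=(n_samples, n_vars))   # 1000 unrelated trackers

# Pearson correlation of each variable with the outcome.
z_out = (outcome - outcome.mean()) / outcome.std()
z_var = (variables - variables.mean(axis=0)) / variables.std(axis=0)
r = z_var.T @ z_out / n_samples

# Approximate two-sided p < 0.05 cutoff for large n: |r| > 1.96 / sqrt(n).
threshold = 1.96 / np.sqrt(n_samples)
false_positives = int(np.sum(np.abs(r) > threshold))
print(false_positives)  # roughly 5% of 1000, i.e. on the order of 50 spurious hits
```

Every variable here is pure noise, yet dozens of "discoveries" survive an uncorrected significance test — which is why naively tracking everything produces false positives.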

I think of QS data as giving you more than your five senses. In the end you still partly rely on your own ability to pattern-match. Graphs of data just give you additional input for understanding what's going on that you can't see or hear.

Comment author: jamesf 28 July 2013 12:19:04AM 1 point

I plan on addressing false positives with a combination of sanity-checking and care-checking ("no, drinking tea probably doesn't force me to sleep for exactly 6.5 hours the following night", or "so what if reading non-fiction makes me ravenous for spaghetti?"), plus suggesting the highest-information-content experiment when neither of those applies (hopefully, in most cases, one would collect more data to test a hypothesis rather than immediately accept the program's output). In this specific case, the raw conversation and bodily-state data would probably not be nodes in the larger model--only the inferred "thing that really matters", social life, would be. Having constant feedback from the "expert", who can choose which raw or derived variables to include in the model and which correlations don't actually matter, seems like it would substantially mitigate the false positive problem.
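One way to operationalize "collect more data to test a hypothesis rather than immediately accept the program's output" is a discovery/replication split: a correlation dredged from one batch of data is only accepted if it holds up on a fresh batch. This is a generic sketch of that idea, not Familiar's actual behaviour; the sample sizes and the large-sample significance cutoff are my assumptions:

```python
# Sketch: spurious correlations found by dredging one batch of data
# usually fail to replicate on fresh data.
import numpy as np

rng = np.random.default_rng(1)
n = 400
outcome = rng.normal(size=n)
variables = rng.normal(size=(n, 1000))   # 1000 unrelated trackers

def significant(x, y):
    """Approximate two-sided p < 0.05 Pearson test for large samples."""
    r = np.corrcoef(x, y)[0, 1]
    return abs(r) > 1.96 / np.sqrt(len(x))

half = n // 2
# "Discovery" phase: dredge the first half of the data for correlations.
candidates = [j for j in range(1000)
              if significant(variables[:half, j], outcome[:half])]
# "Replication" phase: retest only those candidates on the fresh second half.
replicated = [j for j in candidates
              if significant(variables[half:, j], outcome[half:])]
print(len(candidates), len(replicated))  # most chance hits fail to replicate
```

Since every variable is noise, nearly all of the candidates from the discovery phase vanish on replication — which is the behaviour one would hope for from a user who treats the program's output as a hypothesis to test.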