I was reading an argument in the comments of an article about Light Table switching to open source. The argument was about freedom in relation to software, and it went roughly like this:
People who use OSX are less free [than Linux users], because they don't have the freedom to modify their OS source code.
No, they have the exact same freedom. People who use OSX and people who use Linux both have the freedom to modify the source code of Linux.
I'm not entirely sure why, but this conversation immediately reminded me of the argument about whether a tree falling in a forest makes a sound when nobody's around to hear it.
The first person's statement uses a variable in the place where the second person's statement uses a constant.
X's freedom is [partially] a function of [X's OS].
X's freedom is [partially] a function of OS_List. (where OS_List is just a list of the OSs that he could in principle modify, regardless of whether he wants to or is using any of those OSs)
(Obviously OS_List is a variable as well, but with respect to each person it's relatively unchanging).
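The two statements above can be sketched as two functions that take different arguments. Here is a minimal illustration (the function names and the `MODIFIABLE` set are my own, purely for illustration, not anything from the original argument):

```python
# The OSs whose source anyone may freely modify (toy example).
MODIFIABLE = {"Linux"}

def freedom_v1(users_os):
    """First speaker: freedom is a function of the OS X actually uses."""
    return users_os in MODIFIABLE

def freedom_v2(os_list):
    """Second speaker: freedom is a function of the OSs X could in
    principle modify, regardless of which one X actually runs."""
    return any(os in MODIFIABLE for os in os_list)

# The same OSX user gets different answers, because the two functions
# are evaluated on different inputs:
print(freedom_v1("OSX"))             # False: can't modify OSX's source
print(freedom_v2({"OSX", "Linux"}))  # True: could still modify Linux
```

Framed this way, the two speakers aren't disagreeing about any fact; they're evaluating different functions and calling both of them "freedom".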
I've seen this crop up in various conversations before - one person arguing using a variable where another person is using a constant (if that's the right way to describe it).
How does one diagnose the problem with this argument, if there is a problem? Is it a similar problem to the Tree in the Forest problem? Is there a standard rationalist way to dissolve the dispute so that both parties can leave not only agreeing, but also having a high probability of being correct when they leave?
On HackerNews, this article was linked. The general idea is that companies are studying what people like to read, to help authors produce books that people like to read.
Now, for me, when I look at this idea, I see some down sides, but I certainly see some benefits as well.
Almost none of the commenters on the NYTimes article seemed to see any benefit whatsoever to studying reader behaviour. A few saw the downsides as milder than the other commenters did. But most of the commenters basically saw this technology as some sort of 1984-esque idea that will turn all books into uninteresting, unimaginative pieces of paper that would serve better as door stoppers than as literature. Of the 50 comments I read, only one person said something along the lines of, 'This technology can possibly offer something to help authors improve their books'.
Is this just technophobia? Or am I missing something, and this really is a horrible, evil technology that should be avoided at all costs? [That's a rhetorical question -- I'd be surprised if even one LWian held that position]
I guess what I'm asking is, what are the psychological roots for the almost-unanimous aversion to this attempt at gathering and using information about what people want?