Comment author: Unknowns 12 April 2015 02:03:10PM 51 points [-]

Scott Alexander.

Comment author: gwillen 12 April 2015 05:12:06PM *  8 points [-]

Just in case some people might not know where to find him: http://slatestarcodex.com/ (Remember to give parent comment your upvotes, not me, if you want to vote for him.)

Comment author: gwillen 12 April 2015 01:31:46AM 11 points [-]

Upvoted to encourage more people to ask LW for help in this way, which I think is a good trend. But I would suggest a title that describes more specifically what the post is about (e.g. "LW, help me with my migraines" or something).

I have no advice on the migraines themselves but I wish you luck.

Comment author: Dahlen 06 April 2015 02:24:12PM *  6 points [-]

1) Will its name be another pun on "Less Wrong", as happened with More Right?

2) I still don't understand why it wouldn't simply be easier to create more subreddits for LW on different discussion topics, as has been proposed a billion times in the past, instead of more and more websites springing up.

Comment author: gwillen 06 April 2015 11:04:08PM 4 points [-]

Easier for whom? The people making the proposals are not the same people who have the power to edit the LW site code, and my sense is that the people who have that power are generally no longer interested in (and/or do not have time for) making any significant use of it.

Comment author: Mitchell_Porter 03 April 2015 03:55:18AM 1 point [-]

Why doesn't your list of "things I've considered" include an option like, "Nutrition science is real science, and Campbell's work is correct"?

Comment author: gwillen 03 April 2015 04:00:31AM 4 points [-]

Given "I adopted the diet myself in 2010", I assume that's the option that the poster implicitly favors, and the question is why more people do not do the same.

Comment author: NancyLebovitz 27 March 2015 09:32:03PM 20 points [-]

I hate Disqus. It's hard to keep track of what you've read or haven't read, and since it doesn't load all comments automatically, it's inconvenient to search.

Comment author: gwillen 28 March 2015 03:02:34AM 5 points [-]

Seconding hatred for Disqus. I also find that it's extremely slow to load, and causes page scrolling to lag severely when the browser is under heavy load (e.g. lots of tabs open). It also has the problem common to any third-party code inclusion: it allows the third party to track you across sites. For this reason I keep it blocked with Ghostery, which I also use to block ads and tracking.

Comment author: ChristianKl 10 March 2015 10:05:08PM 2 points [-]

Amelia paused. "There's a possibility that Augustus Rookwood left a ghost -"

"Exorcise it before anyone talks to it," Harry said, conscious of the sudden hammering of his heart.

"Yes, sir," the old witch said dryly. "I shall disrupt the soul's anchoring a little, and none shall be the wiser when it fails to materialize. The second matter is that there was a still-living human arm found among the Dark Lord's things -"

This seems like Amelia misplaying her cards for no good reason. I would expect her first to ask Harry what the ghost would tell her before agreeing to prevent the ghost from anchoring. Especially if she wants to test Harry's political skills, it would make sense to push him harder.

Comment author: gwillen 11 March 2015 06:07:21AM 7 points [-]

I read Harry's suggestion not to investigate, and her responding smirk, as indicating that it's already tacitly understood that the good guys actually killed the Death Eaters somehow. This room seems likely to be pretty okay with that, except maybe McGonagall.

Comment author: gwillen 10 March 2015 03:56:29AM 4 points [-]

I think this is an interesting idea and I am intrigued by most of the applications.

The parenting one, though, seems kind of insane unless you terminate it at age 18; but most people just don't earn much before 18, so terminating it then would leave it with very little effect. If you don't terminate it at age 18, you've effectively extended the age of legal childhood up to the point at which it does terminate: parents will continue to have cause to nag and berate their children long into adulthood, and they will. Even if you give them no legal right to control their children, you will have given them, in both their children's eyes and their own, a moral right to do so.

The relationship between a parent and a child has a MASSIVE power imbalance. You suggest that you might prohibit the situation where someone's output is part-owned by their employer -- presumably for that reason -- and you similarly should not let it be owned by their parent.

Comment author: tohu 28 February 2015 08:38:03PM 14 points [-]

Beneath the moonlight glints a tiny fragment of silver, a fraction of a line... (black robes, falling) ...blood spills out in litres, and someone screams a word.

I'm relatively confident that this quote is a part of the solution. Maybe Harry partially transfigures a monofilament blade and starts cutting down everything.

Comment author: gwillen 28 February 2015 08:43:42PM 4 points [-]

Some pieces that maybe got put together over in the Reddit thread: We've SEEN Harry transfigure carbon nanotubes before.

Comment author: torekp 28 February 2015 03:43:09PM 4 points [-]

But wait! If many of the algorithm's mistakes are obvious to any human with some common sense, then there is probably a process of algorithm + sanity check by a human which will outperform even the algorithm alone. In which case, you yourself can volunteer for the sanity-check role, and this should make you even more eager to use the algorithm.

(Yes, I'm vaguely aware of some research which shows that "sanity check by a human" often makes things worse. But let's just suppose.)

Comment author: gwillen 28 February 2015 08:29:33PM 0 points [-]

I do think an algorithm-supported-human approach will probably beat at least an unassisted human, and I think a lot of people would be more comfortable with it than with the algorithm alone. (As long as the final discretion belongs to a human, the worst fears are allayed.)
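The "algorithm-supported human" process described here can be sketched in a few lines. This is a minimal illustration, not anyone's actual system; all names (`algorithm_predict`, `human_sanity_check`, the `obviously_wrong_flag` field, the 0.7 threshold) are hypothetical stand-ins for a real forecasting model and a real human reviewer:

```python
def algorithm_predict(case):
    # Stand-in for a statistical forecasting model: returns a score in [0, 1].
    return case.get("score", 0.5)

def human_sanity_check(case, proposed):
    # Stand-in for human review: the human vetoes only decisions flagged as
    # obviously wrong, and otherwise defers to the algorithm's proposal.
    if case.get("obviously_wrong_flag"):
        return not proposed
    return proposed

def decide(case, threshold=0.7):
    # The algorithm proposes; the human holds final discretion.
    proposed = algorithm_predict(case) >= threshold
    return human_sanity_check(case, proposed)

print(decide({"score": 0.9}))                                # -> True
print(decide({"score": 0.9, "obviously_wrong_flag": True}))  # -> False
```

The design point is the one made in the comment: the human touches only the cases they recognize as obvious mistakes, so the combined process can only differ from the bare algorithm on exactly those cases.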

Comment author: gwillen 27 February 2015 11:51:32PM 9 points [-]

I would loosely model my own aversion to trusting algorithms as follows: Both human and algorithmic forecasters will have blind spots, not all of them overlapping. (I.e. there will be cases "obvious" to each which the other gets wrong.) We've been dealing with human blind spots for the entire history of civilization, and we're accustomed to them. Algorithmic blind spots, on the other hand, are terrifying: When an algorithm makes a decision that harms you, and the decision is, to any human, obviously stupid, the resulting situation would best be described as 'Kafkaesque'.

I suppose there's another psychological factor at work here, too: When an algorithm makes an "obviously wrong" decision, we feel helpless. By contrast, when a human does it, there's someone to be angry at. That doesn't make us any less helpless, but it makes us FEEL less so. (This makes me think of http://lesswrong.com/lw/jad/attempted_telekinesis/ .)
