Comment author: James_Miller 04 April 2016 07:28:43PM 3 points [-]
Comment author: philh 05 April 2016 01:29:03PM 0 points [-]

I'm not sure how that's related to polyhacking either.

Comment author: Clarity 05 April 2016 09:29:16AM 0 points [-]

Isn't admitting preference for someone the coup de grâce of romance?

Comment author: philh 05 April 2016 01:27:12PM 4 points [-]

Not if she also has a preference for you.

You want to avoid suggesting that you're more into her than she is into you. So "we've been on one date and now you're my girlfriend, right?" is usually a bad idea. But "we've been on one date and I'd like to go on future dates" is probably okay (if she doesn't want more dates, it wasn't going anywhere anyway).

(Massive overgeneralization, of course, and also I'm not qualified to talk about this.)

Comment author: Viliam 04 April 2016 08:44:22AM 8 points [-]

To avoid only reading filtered evidence, people interested in polyhacking might also look at this SSC thread.

Comment author: philh 04 April 2016 03:41:13PM 1 point [-]

I'm not sure how that thread is related to polyhacking? It's related to polyamory, but doesn't seem to be particularly focused on it, and polyhacking is another step removed.

Comment author: Stefan_Schubert 22 March 2016 02:14:30PM 2 points [-]

I have a maths question. Suppose that we are scoring n individuals on their performance in an area where there is significant uncertainty. We are categorizing them into a low number of categories, say 4. Effectively we're thereby saying that for the purposes of our scoring, everyone with the same score performs equally well. Suppose that we say that this means that all individuals with a given score get assigned the mean actual performance of the individuals with that score. For instance, if three people got the highest score, and their performance equals 8, 12 and 13 units, the assigned performance is 11 units.

Now suppose that we want our scoring system to minimise information loss, so that the assigned performance is on average as close as possible to the actual performance. The question is: how do we achieve this? Specifically, how large a proportion of all individuals should fall into each category, and how does that depend on the performance distribution?

It would seem that if performance increases linearly as we go from low to high performers, then all categories should have the same number of individuals, whereas if the increase is exponential, then the higher categories should contain fewer individuals. Is there a theorem that proves this, and which specifies exactly how large the categories should be for a given shape of the curve? Thanks.

Comment author: philh 22 March 2016 04:00:21PM *  4 points [-]

If I'm understanding this correctly, it sounds like you're performing k-means clustering.
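If the goal is to pick category boundaries that minimise the mean squared gap between assigned and actual performance, that is exactly the 1-D k-means objective. A minimal sketch of Lloyd's algorithm in Python (for one dimension there are also exact dynamic-programming solutions, but this is the standard iterative version):

```python
import random

def kmeans_1d(values, k, iterations=100, seed=0):
    """Lloyd's algorithm in one dimension: assign each value to the
    nearest centre, then move each centre to the mean of its members.
    This locally minimises the total squared assignment error."""
    rng = random.Random(seed)
    centres = rng.sample(sorted(values), k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: (v - centres[i]) ** 2)
            clusters[nearest].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# The three top scorers from the example above: their mean, 11,
# is the assigned performance for that category.
print(kmeans_1d([8, 12, 13], 1))  # [11.0]
```

Note that using the cluster mean as the assigned value is what makes squared error the natural loss here; if "on average as close as possible" meant mean absolute error instead, the median would be the right representative.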

Comment author: Viliam 22 March 2016 08:38:24AM 0 points [-]

Unfortunately, there is no standard way to make parts of a page disappear from search engines' indexes. Which is super annoying, because almost every page contains some navigational parts which do not contribute to the content.

HTML 5 contains a semantic tag <nav> which defines navigational links in the document. I think a smart search engine should exclude these parts, but I have no idea if any engine actually does that. Maybe changing LW pages to HTML 5 and adding this tag would help.

Some search engines use specific syntax to exclude parts of the page, but it depends on the engine, and sometimes it even violates the HTML standards. For example, Google uses HTML comments <!--googleoff: all--> ... <!--googleon: all-->, Yahoo uses HTML attribute class="robots-nocontent", and Yandex introduces a new tag <noindex>. (I like the Yahoo way most.)
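For concreteness, the three engine-specific mechanisms look roughly like this (illustrative markup only; the `div` contents are placeholders):

```html
<!-- Google: comment-based on/off switches -->
<!--googleoff: all-->
<div>navigation links...</div>
<!--googleon: all-->

<!-- Yahoo: a class attribute on the element to exclude -->
<div class="robots-nocontent">navigation links...</div>

<!-- Yandex: a non-standard tag wrapping the excluded content -->
<noindex><div>navigation links...</div></noindex>
```

The Yahoo variant is the only one that stays within valid HTML without introducing new tags, which is presumably why it reads as the least hacky of the three.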

The most standards-following way seems to be putting the offending parts of the page into separate HTML pages which are included by <iframe>, and using the standard robots.txt mechanism to block those HTML pages. I think the disadvantage is that the included frames will have fixed dimensions, instead of changing dynamically with their content. Another solution would be to insert those texts by JavaScript, which means that users with JavaScript disabled would not see them.
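A sketch of that iframe approach (the filenames are hypothetical):

```html
<!-- main page: pull the navigation in from a separate document -->
<iframe src="/static/sidebar.html" title="sidebar"></iframe>
```

```
# robots.txt: keep crawlers out of the extracted fragment
User-agent: *
Disallow: /static/sidebar.html
```

The fixed-dimensions problem is the price: the parent page cannot easily size the frame to fit whatever the sidebar happens to contain.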

Comment author: philh 22 March 2016 03:48:17PM 2 points [-]

Another solution would be to insert those texts by JavaScript, which means that users with JavaScript disabled would not see them.

They're already inserted by javascript. E.g. the 'recent comments' one works by fetching http://lesswrong.com/api/side_comments and inserting its contents directly in the page.

Editing robots.txt might exclude those parts from the google index, but idk.

Comment author: Vaniver 21 March 2016 01:21:13PM *  0 points [-]

I'm still seeing it, and it is tag-based, I believe. Changing the name seems to have made the links somewhat weird, though (it looks like both open_thread_march_14_march_20_2016 and open_thread_march_21_march_27_2016 might work?).

Comment author: philh 21 March 2016 01:53:19PM 2 points [-]

Oh, it shows up on /r/discussion/new, but not on /r/all/recentposts.

Weird. I used to have a page that would redirect you to the latest open thread, finding it through the sidebar API. I took it down a month or so back because the API had vanished, but now it's apparently back.

The important part of the URL of this thread is /nf7/. The stuff after that is intended for human use, you can replace it arbitrarily.

Comment author: Gunnar_Zarncke 21 March 2016 11:32:22AM 0 points [-]

I know that they are not created automatically. But I wondered whether they are used (indexed, listed,...) in some automatic way that depends on the title or one post per week.

Comment author: philh 21 March 2016 11:45:45AM 1 point [-]

IIRC the sidebar used to have a link to the latest open thread, which I think was based on the open_thread tag. That seems to have vanished now.

Meetup : London rationalish meetup - 2016-03-20

0 philh 16 March 2016 02:39PM

Discussion article for the meetup : London rationalish meetup - 2016-03-20

WHEN: 20 March 2016 02:00:00PM (+0000)

WHERE: Shakespeare's Head, 64-68 Kingsway, London WC2B 6AH

I'm late posting the event this week, but that's because I was distracted, not because it isn't happening.

This meetup will be social discussion in a pub, with no set topic. If there's a topic you want to talk about, feel free to bring it.

The pub is the Shakespeare's Head in Holborn. There will be some way to identify us.

The event on facebook is visible even if you don't have a facebook account. Any last-minute updates will go there.


We're a fortnightly London-based meetup for members of the rationalist diaspora. The diaspora includes, but is not limited to, LessWrong, Slate Star Codex, rationalist tumblrsphere, and parts of the Effective Altruism movement.

You don't have to identify as a rationalist to attend: basically, if you think we seem like interesting people you'd like to hang out with, welcome! You are invited. You do not need to think you are clever enough, or interesting enough, or similar enough to the rest of us, to attend. You are invited.

People start showing up around two, and there are almost always people around until after six, but feel free to come and go at whatever time.


Comment author: skeptical_lurker 15 March 2016 12:35:29PM 0 points [-]

I think its policy net was only trained on amateurs, not professionals or self-play, making it a little weak. Normally, I suppose that reading large numbers of game trees compensates, but the odds of Lee making his brilliant move 78 (and one other move, but I can't remember which) were 1/10000, so I think that AG never even analysed the first move of that sequence.

In other words:

David Ormerod of GoGameGuru stated that although an analysis of AlphaGo's play around 79–87 was not yet available, he believed it was a result of a known weakness in play algorithms which use Monte Carlo tree search. In essence, the search attempts to prune sequences which are less relevant. In some cases a play can lead to a very specific line of play which is significant, but which is overlooked when the tree is pruned, and this outcome is therefore "off the search radar".[56]
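To put rough numbers on that pruning effect: if the search distributes simulations roughly in proportion to the policy network's prior, a move assigned probability 1/10000 is barely read at all. A toy calculation (the simulation counts are illustrative, not AlphaGo's actual parameters):

```python
# Illustrative: how a low-prior move falls off the search radar.
simulations = 100_000        # simulations at the root (made-up figure)
prior = 1 / 10_000           # the 1/10000 probability quoted above

root_visits = simulations * prior
print(root_visits)           # 10.0 visits even at the root -- far too
                             # few to read out a long forcing sequence

# One ply deeper, behind a parent line itself explored only ~1% of the
# time, the expected visit count collapses below a single simulation:
deeper_visits = simulations * 0.01 * prior
print(deeper_visits)         # 0.1 -- effectively never examined
```

So the surprising move doesn't need to be literally pruned away; a sufficiently low prior starves it of simulations to the same effect.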

I wonder if Google could publish a sgf showing the most probable lines of play as calculated at each move, as well as the estimated probability of each of Lee's moves?

I wonder if the best thing to do would be to train nets on: strong amateur games (lots of games, but perhaps lower quality moves?); pro games (fewer games but higher quality?); and self-play (high quality, but perhaps not entirely human-like?) and then take the average of the three nets?

Of course, this triples the GPU cycles needed, but it could perhaps be implemented just for the first few moves in the game tree?
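The ensemble idea above, sketched in Python with numpy; the three "nets" here are stand-in functions returning made-up move distributions, since AlphaGo's actual networks aren't public:

```python
import numpy as np

def average_policy(board, nets):
    """Combine several policy networks by averaging their move
    distributions. Each net maps a board position to a probability
    vector over moves; the mean of valid distributions is itself a
    valid distribution."""
    probs = np.stack([net(board) for net in nets])
    return probs.mean(axis=0)

# Stand-in policy nets over a toy 3-move position (hypothetical numbers):
amateur_net  = lambda b: np.array([0.70, 0.20, 0.10])
pro_net      = lambda b: np.array([0.50, 0.40, 0.10])
selfplay_net = lambda b: np.array([0.30, 0.30, 0.40])

combined = average_policy(None, [amateur_net, pro_net, selfplay_net])
print(combined)        # [0.5 0.3 0.2]
print(combined.sum())  # 1.0
```

Averaging the output distributions (rather than the network weights) keeps the three models independent, which fits the suggestion of running it only for the first few moves where the extra cost is bounded.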

Comment author: philh 15 March 2016 01:19:00PM 0 points [-]

Naively, pruning seems like it would cause a mistake at 77 (allowing the brilliant followup 78), not at 79 (when you can't accidentally prune 78 because it's already on the board). But people have been saying that it made a mistake at 79.

I don't recall much detail about AG, but I thought the training it did was to improve the policy net? If the policy net was only trained on amateurs, what was it learning from self-play?

Comment author: turchin 13 March 2016 09:27:41AM 5 points [-]

Go champion Lee Se-dol strikes back to beat Google's DeepMind AI for the first time, in the fourth game, making it 3:1 http://www.theverge.com/2016/3/13/11184328/alphago-deepmind-go-match-4-result

Comment author: philh 15 March 2016 12:09:52PM 2 points [-]

Has anyone from Google commented much on AlphaGo's mistakes here? Why it made the mistake at 79, why it didn't notice until later that it was suddenly losing, and why it started playing so badly when it did notice.

(I've seen commentary from people who've played other monte-carlo based bots, but I'm curious whether Google has confirmed them.)

I don't think I've seen anyone say this explicitly: I would guess that part of the problem was that AG hadn't had much training in "mistakes humans are likely to make". With good play, it could have recovered against Lee, but not against itself, and it didn't know it was playing Lee; somehow, the moves it actually played were ones that would have increased its chances of winning if it had been playing itself.
