gwern comments on Open Thread March 31 - April 7 2014 - Less Wrong

2 points · Post author: beoShaffer 01 April 2014 01:41AM




Comment author: gwern 02 April 2014 05:43:24PM 3 points

You can do what I do: http://www.gwern.net/Archiving%20URLs

High startup cost, but on the plus side, you don't need to do anything once it's running and it'll catch most of what you read.
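The "set it up once and it catches everything" approach described on that page amounts to a loop that notices newly visited URLs and mirrors them locally. A minimal sketch of that idea (the `seen-urls.txt` state file and the particular wget flags are illustrative assumptions, not gwern's actual configuration):

```python
import pathlib
import subprocess

SEEN = pathlib.Path("seen-urls.txt")  # hypothetical state file of already-archived URLs

def archive_new(urls, download=True):
    """Mirror any URL not seen before; return the newly archived ones."""
    seen = set(SEEN.read_text().splitlines()) if SEEN.exists() else set()
    new = [u for u in urls if u not in seen]
    for u in new:
        if download:
            # --page-requisites also fetches images/CSS so the page renders offline
            subprocess.run(["wget", "--page-requisites", "--no-verbose", u],
                           check=False)
    with SEEN.open("a") as f:
        f.writelines(u + "\n" for u in new)
    return new
```

Run periodically against a feed of visited URLs (e.g. exported browser history), this only ever downloads each page once, which is what keeps the ongoing effort near zero after the initial setup.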

Comment author: gjm 03 April 2014 01:53:41PM 2 points

OT (except that I ran into this while visiting that page): That Beeline thing is really annoying. (I got the "blues" variant. I was annoyed enough by it that I modified the cookie to serve me a different version. I asked it for variant 3 ("gray1") and actually it doesn't appear to be doing anything; maybe that's a bug somewhere. Anyway, my apologies if this introduces noise into your A/B/.../I testing.)

Comment author: gwern 03 April 2014 02:41:23PM * 0 points

'Blues' is actually the best-performing variant so far! I have no idea why, I hate it too. If it succeeds, I'll probably have to run another to try to find a version I can live with. 'gray1' is, IIRC, probably the subtlest of the running versions, so unless you set up a second identical tab set to 'none' and flicker back & forth, I suspect you simply weren't noticing.

EDIT: 'Blues' eventually succumbed, and the final result was no version clearly outperformed no-BLR at all. See http://www.gwern.net/AB%20testing#beeline-reader-text-highlighting

Comment author: gjm 03 April 2014 05:05:48PM 2 points

Does the metric you're using (fraction of visitors staying at least N seconds?) actually measure what you care about? (A few possible confounding factors, off the top of my head: visitors may be intrigued by the weird colours and stay around while they try to work out what it is, but this doesn't indicate that they got any actual value from the page content; if the Beeline thing works, visitors may find the one bit of information they're looking for faster and then leave; if it's just annoying, annoyance may show up in reduced repeat visits rather than likelihood of disappearing quickly.)

Comment author: gwern 03 April 2014 05:33:29PM 0 points

I think it's a reasonable metric. It's not perfect (I'd rather measure average time on page, not a cutoff), but I don't know how to do any better: I am neither a Javascript programmer nor a Google Analytics expert.

Comment author: Lumifer 02 April 2014 06:01:21PM 1 point

Do you have problems searching for the information you need in that mass of locally archived data?

Comment author: gwern 02 April 2014 06:17:27PM 0 points

Not really. When you start with a URL (my usual use-case), it's very easy to look in the local archive for it.

Comment author: Lumifer 02 April 2014 06:29:27PM 1 point

Ah, so you have something like an ancillary indexing system with URLs?

Comment author: gwern 02 April 2014 06:56:29PM 1 point

URLs map onto filenames (that's what they originally were), so when wget downloads a URL, it's generally fairly predictable where the contents will be located on disk.
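That mapping can be made concrete. A simplified sketch of where a mirroring wget typically puts a page (real wget has options such as `--adjust-extension` and `--restrict-file-names` that alter the result, so treat this as an approximation):

```python
from urllib.parse import urlparse

def wget_local_path(url):
    """Approximate on-disk location of a URL fetched by `wget --mirror`."""
    p = urlparse(url)
    path = p.path.lstrip("/")
    if not path or path.endswith("/"):
        path += "index.html"  # a bare directory URL is saved as index.html
    if p.query:
        path += "?" + p.query  # query strings stay in the filename
    return f"{p.netloc}/{path}"
```

So given a remembered URL, the lookup is a deterministic string transformation, no index required.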

Comment author: Lumifer 02 April 2014 07:08:25PM 2 points

No, that's not what I mean. Let's say you want to look up studies on, say, the effect of dietary sodium on CVD and you have a vague recollection that you scanned a paper on the topic a year or so ago. I understand that if you have the URL of this paper you can easily find it on your disk, but how do you go from, basically, a set of search terms to the right URL?

Comment author: gwern 02 April 2014 09:44:20PM 1 point

Oh. In that sort of scenario, I rely on my Evernote, on having posted it to gwern.net/Google+/LW/Reddit, and on my search skills. Generally speaking, if I remember enough exact text to make grepping my local WWW archive a feasible search strategy, it's trivial to locate the page in Google or one of the others.
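The "grep my local WWW archive" fallback mentioned above is essentially `grep -ril 'phrase' archive/`; a self-contained sketch of the same search (the archive layout is assumed, not prescribed):

```python
import pathlib

def grep_archive(root, phrase):
    """Case-insensitive substring search over every file under root;
    returns the paths of matching files, sorted."""
    phrase = phrase.lower()
    hits = []
    for f in sorted(pathlib.Path(root).rglob("*")):
        if not f.is_file():
            continue
        try:
            text = f.read_text(errors="ignore")  # skip undecodable bytes
        except OSError:
            continue
        if phrase in text.lower():
            hits.append(str(f))
    return hits
```

As the comment notes, this only pays off when you remember exact wording; for fuzzy recall, a web search engine over the public copies is the better first step.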

Comment author: Lumifer 03 April 2014 04:04:57PM 1 point

Ah, I see. So your system is less of a knowledge base and more of a local backup of particularly interesting parts of the 'net.

Thanks :-)

Comment author: gwern 03 April 2014 06:24:43PM 0 points

Yes, it's the last resort for URLs that have broken. A snippet telling you that you want to check a web page isn't much good if the page no longer exists.