Comment author: ChristianKl 27 March 2016 04:38:35PM 1 point [-]

Did they delete posts?

Comment author: TheAltar 28 March 2016 12:52:38PM 1 point [-]

They deleted the worst ones. Screenshots can be found on other websites.

Comment author: TheAltar 21 March 2016 02:41:35PM 0 points [-]

Additional Suggestion 1: Regular reminders of places to send suggestions could be helpful. I occasionally come up with additional ones and usually just post them on whatever suggestion-related thread is most recent.

Additional Suggestion 2: The search function would be massively improved if it ignored the text in the sidebar. I was reminded of this by gjm's comment here in the latest Open Thread.

Comment author: gjm 21 March 2016 01:22:18PM 8 points [-]

Finding comments on LW is more painful than it should be because sometimes this happens:

  • You remember that X replied to Y saying something with words Z in.
  • You put something like <<X Y Z site:lesswrong.com>> into Google (directly or via the "Google custom search" in the right sidebar).
  • You get back a whole lot of pages, but
    • they all contain X and Y because of the top-contributors or recent-comments sections of the right sidebar;
    • they all contain Z because of the recent-comments section of the right sidebar.
  • None of those pages now contains either the comment in question or a link to it.
  • Using the "cached" link from the search results doesn't help, because the right sidebar is generated dynamically and is simply absent from the cached pages.
    • So how come they're found by the search? Beats me.

Here's a typical example; it happens to use only Z (I picked one of my comments from a couple of weeks ago) but including X and Y seldom helps.

I just tried the equivalent search in Bing and the results were more satisfactory, but only because the comment in question happened to appear fairly near the top of the overview page for the user I was replying to. I would guess that Bing isn't actually systematically better for these searches, but I haven't tested.

Does anyone know a good workaround for this problem?

Is there a way to make the dynamically-generated sidebar stuff on LW pages invisible to Google's crawler? It looks like there is. Should I file an issue on GitHub?

Comment author: TheAltar 21 March 2016 02:35:12PM 1 point [-]

I've run into this problem several times before. It would be very helpful if the search feature ignored the text in the sidebar.

Comment author: Algernoq 17 March 2016 06:17:24AM 0 points [-]

Modest proposal for Friendly AI research:

Create a moral framework that incentivizes assholes to cooperate.

Specifically, create a set of laws for a "community", with the laws applying only to members, that would attract finance guys, successful "unicorn" startup owners, politicians, drug dealers at the "regional manager" level, and other assholes.

Win condition: a "trust app" that everyone uses, that tells users how trustworthy every single person they meet is.

Lose condition: startup fund assholes end up with majority ownership of the first smarter-than-human-level general AI, and no one's given smart people an incentive not to hurt dumb people.

If you can't incentivize smart selfish people to "cooperate" instead of "defect", then why do you think you can incentivize an AI to be friendly? What's to stop a troll from deleting the "Friendly" part the second the AI source code hits the Internet? Keep in mind that the 4chan community has a similar ethos to LW: namely "anything that can be destroyed by a basement dweller should be".

Comment author: TheAltar 18 March 2016 01:25:46PM *  1 point [-]

A trust app is going to end up with all the same issues credit ratings have.

Comment author: TheAltar 18 March 2016 12:49:13AM 1 point [-]

Is it possible for Main posts to also be listed on Discussion, but with an added highlight effect around their title or something? Then people could tell they're Main posts without having to check a rarely used side subreddit.

Comment author: TheAltar 17 March 2016 04:06:56PM 0 points [-]

Why did the hug feel 100% fake to you? Do you think other Japanese people give less-fake hugs?

I know Japan isn't big on hugging as a culture, so I wonder whether many Japanese people would be skilled at this.

Comment author: TheAltar 17 March 2016 02:45:52PM 1 point [-]

How commonly do you think other groups do this, and how would you suggest stopping it? Your article seems fairly innocuous as far as spotlight stealing goes, but I'm sure other people's attempts could be far more harmful to the original news story's chances of getting appropriate attention.

Comment author: gwern 09 March 2016 05:18:20PM *  1 point [-]

I'm not sure what the corresponding figure would be for chess.

You can actually calculate this now. Regan has noted that computer chess engines are getting to the point where they are effectively perfect and equivalent; so the gap between them and the best human player ever can be turned into a piece advantage. (Not that I know how to do this, but I assume anyone already somewhat familiar with Elo ratings and chess engines can take the Elo difference and figure out the corresponding material advantage. Regan thinks the engines are probably somewhere around 3600 Elo. Apparently chess AIs can now offer at least "pawn and move, pawn, exchange, and four-move odds" and still beat US champions & grandmasters like Hikaru Nakamura.)
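The per-game lopsidedness implied by an Elo gap can be sketched with the standard Elo expected-score formula (the ~3600 engine rating and ~2800 human rating below are illustrative numbers from the discussion above, not a precise measurement; converting the gap into material odds is a separate, harder question):

```python
def expected_score(elo_diff):
    """Standard Elo formula: expected score (win = 1, draw = 0.5)
    for the stronger player, given the rating gap in their favor."""
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400))

# An ~800-point gap (e.g. a ~3600 engine vs. a ~2800 human)
# implies the engine scores about 99% of the available points.
print(round(expected_score(800), 3))
```

At that expected score, the human essentially never wins on even terms, which is why the handicap has to be expressed as material odds rather than a plausible win rate.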

But maybe that was a little hard to answer, so let me put the question the other way: has there ever been a case where a strategy game played seriously & competitively (i.e. not tic-tac-toe or blackjack) by adult humans was solved to perfect or superhuman play levels by AI researchers, and the perfect or superhuman play turned out to be identical or so close to the top humans' play level that humans could win regularly?

Comment author: TheAltar 11 March 2016 03:04:31PM 0 points [-]

A game like that could occur between humans and AI in online collectible card games. (I'm specifying online because the rules are streamlined and mass competition is far more available.)

Comment author: James_Miller 10 March 2016 05:58:10PM 6 points [-]

For me, the most interesting part of this match was when one of the DeepMind team confirmed that, because AlphaGo optimizes for probability of winning rather than expected score difference, games where it has the advantage will look close. That changes how you should interpret the apparent closeness of a game.

Qiaochu Yuan, or him quoting someone.
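The distinction can be illustrated with a toy sketch (the two candidate moves and their numbers are entirely hypothetical, not anything from AlphaGo's actual evaluation):

```python
# Two hypothetical candidate moves, each with an estimated
# (win_probability, score_margin_if_won) pair.
moves = {
    "safe":   (0.95, 1.5),   # near-certain win, tiny final margin
    "greedy": (0.80, 20.0),  # riskier, but a big margin when it works
}

def best_by_win_prob(moves):
    """Pick the move maximizing the chance of winning at all."""
    return max(moves, key=lambda m: moves[m][0])

def best_by_expected_margin(moves):
    """Pick the move maximizing expected margin (losses counted
    as the negated margin -- a crude stand-in for a score optimizer)."""
    return max(moves, key=lambda m: (2 * moves[m][0] - 1) * moves[m][1])

print(best_by_win_prob(moves))         # the "safe" half-point win
print(best_by_expected_margin(moves))  # the "greedy" blowout attempt
```

A win-probability maximizer keeps taking the "safe" option, so even its most dominant games end with narrow-looking margins; the margin tells you little about how far ahead it really was.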

Comment author: TheAltar 10 March 2016 09:52:09PM 3 points [-]

I was worried about something like this after the first game. I wasn't sure whether expert Go players could discern the difference between AlphaGo playing slightly better than a 9-dan versus massively better than a 9-dan, given how the AI was set up and how difficult it is to evaluate players better than the ones already at the top.

Comment author: TheAltar 10 March 2016 04:32:01PM 0 points [-]

Does anyone know the current odds being given on Lee Sedol winning any of the three remaining games against AlphaGo? I'm curious whether, at this point, AlphaGo could be beaten by a human player better than Sedol (assuming there are any), or whether we're looking at an AI player that is better than any human can be.
