Comment author: CellBioGuy 11 March 2016 12:32:37AM 4 points [-]

That's pretty darn far from perfect information.

Comment author: dxu 11 March 2016 06:04:38PM 0 points [-]

Even so, I highly doubt the best human traders are anywhere close to optimal. It'd be interesting to see how a machine-learning approach would fare by comparison.

Comment author: James_Miller 10 March 2016 05:58:10PM 6 points [-]

For me the most interesting part of this match was when one of the DeepMind team confirmed that, because AlphaGo optimizes for probability of winning rather than for expected score difference, games in which it has the advantage will look close. This changes how you should interpret the apparent closeness of a game.

(Credit for this observation: Qiaochu Yuan, or someone he was quoting.)

Comment author: dxu 11 March 2016 05:49:40PM 1 point [-]

This appears to be a general property of the Monte Carlo tree search (MCTS) algorithm, which AlphaGo employs.
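[Editor's note: to make the distinction concrete, here is a minimal sketch of how a Monte Carlo tree search chooses differently depending on whether node values track win rate or average score margin. All names are hypothetical; this is not AlphaGo's actual implementation, just an illustration of the objective being discussed.]

```python
# Minimal sketch (hypothetical; not AlphaGo's code) contrasting two
# objectives a game-tree search can optimize: probability of winning
# vs. expected score margin.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    visits: int = 0         # number of simulated playouts through this node
    wins: int = 0           # playouts ending in a win for the player to move
    score_sum: float = 0.0  # sum of final score margins over those playouts
    children: List["Node"] = field(default_factory=list)

    def win_rate(self) -> float:
        return self.wins / self.visits if self.visits else 0.0

    def mean_margin(self) -> float:
        return self.score_sum / self.visits if self.visits else 0.0


def best_child_by_win_prob(node: Node) -> Node:
    # A 0.5-point win counts exactly as much as a 50-point win, so the
    # search happily trades margin for certainty -- its winning games
    # tend to look close on the board.
    return max(node.children, key=lambda c: c.win_rate())


def best_child_by_margin(node: Node) -> Node:
    # Maximizing expected margin chases big wins even when a safer,
    # narrower winning line is available.
    return max(node.children, key=lambda c: c.mean_margin())


if __name__ == "__main__":
    safe = Node(visits=100, wins=90, score_sum=50.0)     # 90% wins, +0.5 avg margin
    risky = Node(visits=100, wins=60, score_sum=3000.0)  # 60% wins, +30 avg margin
    root = Node(children=[safe, risky])
    assert best_child_by_win_prob(root) is safe
    assert best_child_by_margin(root) is risky
```

The two selectors disagree on the same statistics: the win-probability objective prefers the safe, narrow win, which is why a dominant position played under that objective can still look like a close game.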

Comment author: [deleted] 09 March 2016 11:40:15PM *  1 point [-]

It is entirely possible to firmly believe in the inevitability of near-term AGI without subscribing to AI risk fears. I wouldn't conflate the two.

In response to comment by [deleted] on Open Thread March 7 - March 13, 2016
Comment author: dxu 11 March 2016 05:46:31PM 1 point [-]

Most of the arguments against AI risk I've seen (in popular media, that is) take the form of arguments against AGI, full stop. Naturally there exist more nuanced arguments (though personally I've yet to see any I find convincing), but I was referring to the arguments made by a specific part of the population, i.e. "people who engage in such goalpost-moving"--and in my (admittedly limited) experience, those sorts of people don't usually put forth very deep arguments.

Comment author: hairyfigment 11 March 2016 08:18:44AM 5 points [-]

But more generally, AI should use whatever works. If that happens to be "scruffy" methods, then so be it.

This seems like a bizarre statement if we care about knowable AI safety. Near as I can tell, you just called for the rapid creation of AGI that we can't prove non-genocidal.

Comment author: dxu 11 March 2016 05:40:46PM *  0 points [-]

I don't believe Houshalter was referring to proving Friendliness (or something along those lines); my impression is that he was talking about implementing an AI, in which case neural networks, while "scruffy", should be considered a legitimate approach. (Of course, the "scruffiness" of NNs could very well affect certain aspects of Friendliness research; my relatively uninformed impression is that it's very difficult to prove results about NNs.)

Comment author: V_V 11 March 2016 12:06:22AM 1 point [-]

IMHO, AI safety is a thing now because AI is a thing now and when people see AI breakthroughs they tend to think of the Terminator.

Anyway, I agree that EY is good at getting funding and publicity (though not necessarily positive publicity); my comment was about his (lack of) proven technical abilities.

Comment author: dxu 11 March 2016 05:37:26PM 1 point [-]

IMHO, AI safety is a thing now because AI is a thing now and when people see AI breakthroughs they tend to think of the Terminator.

Under that hypothesis, shouldn't AI safety have become a "thing" (by which I assume you mean "gain mainstream recognition") back when Deep Blue beat Kasparov?

Comment author: Vaniver 09 March 2016 02:31:33PM 1 point [-]

I've already seen some goalpost-moving at Hacker News. I do hope this convinces some people, though.

Comment author: dxu 09 March 2016 05:45:37PM 1 point [-]

People who engage in such goalpost-moving have already written down their bottom line, most likely because AI risk pattern-matches to the literary genre of science fiction. I wouldn't expect such people to be swayed by any sort of empirical evidence short of the development of strong AGI itself. Any arguments they offer against strong AGI amount to little more than rationalization. (Of course, that says nothing about the strengths of the arguments themselves, which must be evaluated on their own merits.)

Comment author: dxu 02 March 2016 09:51:43PM *  0 points [-]

Note for whoever is behind this scam:

Next time, when picking a set of people to target, try to go for people who don't make a habit of studying epistemic rationality.

Comment author: dxu 07 January 2016 06:48:32AM 4 points [-]

From my perspective, there's no contradiction here--or at least, the contradiction is contained within a hidden assumption, much in the same way that the "unstoppable force versus immovable object" paradox assumes the contradiction. An "unstoppable force" cannot logically exist in the same universe as an "immovable object", because the existence of one contradicts the existence of the other by definition. Likewise, you cannot have a "utility maximizer" in a universe where there is no "maximum utility"--and since you basically equate "being rational" with "maximizing utility" in your post, your argument begs the question.

Comment author: Usul 07 January 2016 06:31:06AM 3 points [-]

No. It's a dick move. Same question and they're not copies of me? Same answer.

Comment author: dxu 07 January 2016 06:43:15AM -2 points [-]

Same question and they're not copies of me? Same answer.

As I'm sure you're aware, the purpose of these thought experiments is to investigate what exactly your view of consciousness entails from a decision-making perspective. The fact that you would have given the same answer even if the virtual instances weren't copies of you shows that your reason for saying "no" has nothing to do with the purpose of the question. In particular, telling me that "it's a dick move" does not help elucidate your view of consciousness and self, and thus does not advance the conversation. But since you insist, I will rephrase my question:

Would someone who shares your views on consciousness but doesn't give a crap about other people say "yes" or "no" to my deal?

Comment author: Usul 07 January 2016 06:06:22AM 2 points [-]

Sorry, I missed that you were the copier. Sure, I'm the copy. I do not care one bit. My life goes on totally unaffected (assuming the original and I live in unconnected universes). Do I get transhuman immortality? Because that would be awesome for me. If so, I got the long end of the stick. It would have no value to the poor old original, nor does anything that happens to him have intrinsic value for me. If you had asked his permission, he would have said no.

Comment author: dxu 07 January 2016 06:26:16AM *  -2 points [-]

Sure, I'm the copy.

In other words, I could make you believe that you were either the original or the copy simply by telling you you were the original/the copy. This means that before I told you which one you were, you would have been equally comfortable with the prospect of being either one (here I'm using "comfortable" in an epistemic sense--you don't feel as though one possibility is "privileged" over the other). I could have even made you waffle back and forth by repeatedly telling you that I lied. What a strange situation to find yourself in--every possible piece of information about your internal experience is available to you, yet you seem unable to make up your mind about a very simple fact!

The pattern theorist answers this by denying the existence of this so-called "simple" fact: "There is no fact of the matter as to which one I am, because until our experiences diverge, I am both." You, on the other hand, have no such recourse, because you claim there is a fact of the matter. Why, then, is the information necessary to determine this fact seemingly unavailable to you and available to me, even though it's a fact about your consciousness, not mine?
