
Comment author: Viliam 27 April 2015 10:10:00PM *  0 points [-]

description on the Kickstarter page does suggest that a lot of it was in fact going to be about Jeopardy! rather than just about Chu, or for that matter about social justice.

Here is an article written by Arthur Chu that seems to suggest otherwise:

I didn't want the focus of the movie to be Jeopardy! even if that's the obvious selling point. And it's not about me being a "genius" or me sharing words of wisdom or advice that I don't have. Who Is Arthur Chu? Arthur Chu is nobody special. Just another loser. A loser who's had a run of good luck and is trying to leverage that into doing better. And being better.

So, it's not about Chu being a smart person, or a successful person, but about Chu being a good person. (Where "good" probably happens to be more or less the same thing as "belonging to political faction X".)

My feeling is that you may be being at least one notch too cynical.

Am I? For the record, I consider it likely that Arthur Chu sincerely believes his own story, in which he is the good guy on the right side of history. He probably also overestimates his own smartness, and believes that the ethical injunctions made for lesser mortals do not apply to him. (And I believe he is obviously wrong on this point.) I would also guess that he has a good heart and that he hates himself more than he should, but that's just baseless speculation.

(And I am not saying this about everyone. For example, I believe that the people who have raised 10× more money for their videoblogs do not truly believe their cause. Which is why they made a successful plan and got the money, but Arthur didn't. He made a few mistakes that would be obvious to a cynical person. For example, he didn't put a high-status-behaving white girl into his movie. But that's where the real money and power are in his faction. Arthur, by being a true believer, does not recognize the rules of the game, and fails.)

My political faction is not the same as yours, though, and it's possible that I'm being one notch too un-cynical instead.

Okay, I'll try to convert you to the dark side (and also give you a chance to convert me, by the law of conservation of expected evidence). If Arthur Chu is such a defender of oppressed people, give me an example of a black woman or a lower-class woman that he has defended publicly (calling her by her name, not merely as a part of a large anonymous group). Because I can give you an example of a rich white woman.

Again, for the record, I consider it likely that Arthur Chu is blind about what I am suggesting here; that he is clueless instead of hypocritical.

Comment author: gjm 28 April 2015 12:33:32AM 1 point [-]

an article written by Arthur Chu that seems to suggest otherwise

Except that the article says that Chu doesn't want the focus just to be Jeopardy!, not that Scott Drucker (the person who was actually proposing to make the movie, and the person whose Kickstarter it was) doesn't want it to be. And my reading of both Chu's article and the Kickstarter page is that Drucker's goals were not necessarily the same as Chu's, even though obviously both were hoping that cooperating with the other guy would do something for both people's goals.

Which is why they made a successful plan and got the money, but Arthur didn't. [...] For example, he didn't put a high-status-behaving white girl into his movie.

The comparison here is with Anita Sarkeesian, to whom you linked before, right? Now, it seems to me that the reason why Anita Sarkeesian put a high-status-behaving white girl into her videoblogging is because she is a high-status-behaving white girl (in so far as videoblogging about video games can count as high-status behaviour), and it doesn't seem either obviously insincere for her to act as such, or obviously incompetent for Chu not to have done likewise. And I'm not sure what you think Scott Drucker should have done with a high-status-behaving white girl, or how it would have made the Kickstarter more successful.

What, by the way, makes you think that Anita Sarkeesian doesn't truly believe in her cause? I've only seen a small quantity of her stuff, but what I've seen looks sincere (and fairly plausible) to me.

that's where the real money and power are in his faction

You may be right (perhaps it depends what counts as "his faction") but your link from the word "are" doesn't seem to me to say what I think you're implying it does. It's arguing that "solidarity is for white women", but the stress is on "white", not "women"; I'd summarize the message as something like "contemporary feminism portrays itself as being for women, but really it's only interested in white women and black women get ignored or thrown overboard whenever it's convenient".

If Arthur Chu is such a defender of oppressed people, [...]

Wait, what? When did I say or imply or suggest that he is? I certainly didn't intend to. (Not because I particularly think he isn't, but because I have no idea whether he is and had no idea that that was the question that was meant to be at issue.)

I had a look through some of his writing, and he doesn't spend much of his time defending anyone by name. He spends much more attacking large-scale phenomena. I don't see any obvious reason why this indicates either cluelessness or hypocrisy. But that's kinda irrelevant; I never claimed that Chu is a great defender of oppressed people, and I have no idea how "you're being one notch too cynical" turned into "Arthur Chu is a great defender of the oppressed".

Comment author: Viliam 27 April 2015 08:48:25PM 1 point [-]

Judging by the project description ("The Documentary Film about Arthur Chu: a spokesperson for social justice, the new king of the nerds, and 11-time Jeopardy Champion."), it was not really about Jeopardy. Most people do not care strongly about Jeopardy, but many people care a lot about their political faction winning -- you just have to convince them that giving their money to you is their best move. Mentioning Jeopardy success is just a way to separate yourself from the crowd.

Some people can play this game well enough to get 10× more money for their videoblogs. My hypothesis (which I have no way to verify, and I admit that I am completely partial here) is that Chu tried to play the same game... and failed. Although I still give him credit for trying.

Comment author: gjm 27 April 2015 08:55:49PM 0 points [-]

it was not really about Jeopardy.

I didn't say it was. I said it was

about some guy who won a bit of money on Jeopardy!

and I chose my words carefully :-). But the description on the Kickstarter page does suggest that a lot of it was in fact going to be about Jeopardy! rather than just about Chu, or for that matter about social justice. Do you think that was all just lies, and if so why do you think that?

(My feeling is that you may be being at least one notch too cynical. My political faction is not the same as yours, though, and it's possible that I'm being one notch too un-cynical instead.)

Comment author: drethelin 27 April 2015 05:41:41PM 0 points [-]

how is making a documentary about yourself not just contributing to your own glory?

Comment author: gjm 27 April 2015 08:17:10PM 2 points [-]

He wasn't proposing to make a documentary about himself. Someone else was proposing to make a documentary about him. And (as DanielLC quite rightly says) the perfectly obvious purpose of this is that some people might find it interesting and want to watch it. Indeed, presumably about $10k worth of people did anticipate finding it interesting and wanting to watch it, since the project did get some backers.

Comment author: Viliam 27 April 2015 10:37:09AM *  2 points [-]

It is interesting to watch how different things I observe on the internet interact with each other. Two recent discoveries:

1) Arthur Chu, known to readers of SSC as a person not exactly in favor of niceness, created a Kickstarter project called "Who is Arthur Chu?". It failed, collecting only 20% of the planned $50,000. (Which, if I understand the rules of Kickstarter correctly, means he will get nothing.)

Not sure if the proper reaction here is to laugh (something like: "you had a choice between niceness and winning, you rejected niceness, and now you have neither"), or to congratulate him for empirically testing "how much money could I get from random people on the internet if I just openly ask them to contribute to my glory". I mean, if next time he asked people to donate a mere $10,000, he could actually get it. Of course, only if they do not forget him in the meantime.

2) Gamergate was recently the 3rd largest article on RationalWiki. Then it was split into multiple articles, so at this moment it is merely at positions 8 ("Gamergate") and 13 ("Timeline of Gamergate") on the longest-articles list. Unsurprisingly, the whole article is one person's playground; specifically, a person recently kicked out of Wikipedia for similar behavior. It will be interesting to see how other people on RationalWiki deal with this.

Comment author: gjm 27 April 2015 04:45:59PM 2 points [-]

and now you have neither

I don't see any indication that lack of niceness deprived him of any of his Jeopardy! winnings.

how much money could I get from random people on internet if I just openly ask them to contribute to my glory

You say that as if he was simply planning to collect people's money, put it in a big pile, and sit on it while cackling evilly, but the ostensible plan was actually to make a documentary. The most obvious explanation for the Kickstarter failure seems to me to be "no one was very interested in a documentary about some guy who won a bit of money on Jeopardy!" rather than "no one wanted to contribute to making this documentary because its subject wasn't a nice enough person".

kicked out for similar behavior from Wikipedia

Similar to what? Writing a long article? Splitting an article into two shorter ones? (Neither of those seems like something anyone should or would be kicked off Wikipedia for. A bit of googling suggests it was for edit-warring and behaving generally obsessively and disagreeably, but I don't know how accurate that is, because the people saying so are apparently on the other side of the "Gamergate" culture war from Ryulong, and that seems to be a thing that brings out the worst in people.)

Comment author: Jiro 27 April 2015 02:40:09PM 0 points [-]

I don't know what "error: divide by zero" means in this context. Could you please clarify? (If you're suggesting that the problem is ill-posed under some decision theories because the question assumes that it is possible to make a choice but the oracle's ability to predict you means you cannot really choose, how doesn't that apply to the original problem?)

Comment author: gjm 27 April 2015 04:15:27PM 0 points [-]

You want to figure out whether to do as the oracle asks or not. To do this, you would like to predict what will happen in each case. But you have no evidence concerning the case where you don't do as it asks, because so far everyone has obliged. So, e.g., Pr(something good happens | decline oracle's request) has Pr(decline oracle's request) in the denominator, and that's zero.
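(To spell out the arithmetic using the standard definition of conditional probability, with "good" and "decline" as shorthand for the two events described above, the problem is:)

$$P(\text{good} \mid \text{decline}) = \frac{P(\text{good} \wedge \text{decline})}{P(\text{decline})}, \qquad P(\text{decline}) = 0 \;\Rightarrow\; \text{undefined (division by zero).}$$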

In response to Memory is Everything
Comment author: cwillu 27 April 2015 02:05:41AM *  0 points [-]

I teleport a hostage about to be executed to a capsule in lunar orbit. I then offer you three options: you pay me $1,000,000,000, and I give him whatever pleasures are possible given the surroundings for a day, and then painlessly kill him; I simply kill him painlessly; or I torture him for a day, then painlessly kill him, and then pay you $1,000,000,000.

Do you still take the money?

This strikes me as a pretty stark decision, such that I'd have a really hard time treating those who would take the money any differently than I'd treat the babyeaters. It's almost exactly the same moral equation.

Comment author: gjm 27 April 2015 08:55:37AM 0 points [-]

With a billion dollars one can save thousands of lives, which seems like a bigger deal than one person being tortured for a day. I can certainly see reasons for not taking that offer, but taking it doesn't seem very babyeaterish to me if the taker's intention is to use much of the money to do a lot more good than the donor does harm.
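(A rough sanity check on that claim, using an assumed cost per life saved purely for illustration; the figure is not from the comment above. Even at a deliberately conservative $100,000 per life,

$$\frac{\$1{,}000{,}000{,}000}{\$100{,}000 \text{ per life saved}} = 10{,}000 \text{ lives},$$

which is comfortably in the "thousands of lives".)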

Comment author: Gunnar_Zarncke 24 April 2015 03:14:56PM 4 points [-]

Interesting idea. It could be made into a poll to measure the breadth and variability of preferences.

I will just plain take your points and make each into a poll, and add some of my own. Everybody is invited to vote their preference on a 1 to 5 scale (on as many as you like, no need to consider all of them; the list got quite long):

I like spicy things I dislike spicy things

I like chilli I dislike chilli

I like wasabi I dislike wasabi

I like horseradish I dislike horseradish

I like sweets I dislike sweets

I'm addicted to sugar

Very much Not at all

I like chocolate I dislike chocolate

I like dark chocolate I dislike dark chocolate

I like licorice I dislike licorice

I like fruits I dislike fruits

I like vegetables I dislike vegetables

I like whole grain products I dislike whole grain products

I like hot dishes I dislike hot dishes

I like cold dishes I dislike cold dishes

I like creamy/squishy/sauce-like food I dislike creamy/squishy/sauce-like food

I like hard/firm food I dislike hard/firm food

I like crispy food I dislike crispy food

I like beefy food I dislike beefy food

I like ice cream I dislike ice cream

I like cheese I dislike cheese

I like meat I dislike meat

I like fish I dislike fish

I like honey I dislike honey

I like milk I dislike milk

I drink a lot of water I drink water only as part of other drinks

I like coffee I dislike coffee

I like tea I dislike tea

I like to drink caffeinated beverages I don't like to drink caffeinated beverages

I like fruit juice I dislike fruit juice

I like hot drinks I dislike hot drinks

I like cold drinks I dislike cold drinks

I like fruit juice I dislike fruit juice

I like alcoholic beverages I dislike alcoholic beverages

I like the initial effects of alcohol I dislike initial effects of alcohol

I like the ultimate effects of alcohol I dislike ultimate effects of alcohol

I like bitter tastes I dislike bitter tastes

I like sour tastes I dislike sour tastes

I like salty tastes I dislike salty tastes

I like starchy tastes I dislike starchy tastes


Comment author: gjm 25 April 2015 03:00:50PM 3 points [-]

It would appear that five people have different opinions of fruit juice and fruit juice.

Comment author: DataPacRat 23 April 2015 03:47:51PM 0 points [-]

That's one of the two possibilities I've found over the past day. The other is ⊤ from https://en.wikipedia.org/wiki/Greatest_element .

(I wonder if presenting this idea to an actual mathematician would induce any wincing? Off to /r/math to find out...)

Comment author: gjm 23 April 2015 03:53:09PM 1 point [-]

∨ is the mathematical symbol for "or" (in logic) -- my guess is that it may be derived from the fact that the Latin word for "or", vel, begins with "v". There's a kinda convention that when you have a(n associative) binary operator, you use a bigger version of it to signify applying it to all the things in a sequence or set, so you'd want a larger one -- a bit like a capital "V".

⊤ is the mathematical symbol for the "top" element of a Boolean algebra; maybe more generally of a lattice. You wouldn't use it to mean "maximum of these things" in general.
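(For concreteness, the standard notation referred to above, stated generically rather than for any particular lattice:)

$$a \vee b = \text{``$a$ or $b$'' (the join of $a$ and $b$)}, \qquad \bigvee_{i \in I} a_i = \text{least upper bound of all the } a_i, \qquad \top = \text{greatest element of the lattice (when it exists).}$$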

Comment author: SaucePear 22 April 2015 09:36:19AM 1 point [-]

I have a strong desire to be a 'second-in-command' type of sidekick.

Backing up slightly, this post resonated with me very strongly. All internal fantasy, from childhood to adulthood, has been about me entering the world of the hero and suborning myself to them to act as a sidekick. I didn't realize for quite some time that others felt differently.

I feel the second-in-command style works better for me because I am attracted to heroes with slightly different preference weightings than my own. In places where a task conflicts with the hero's preferences, I could step in and relieve their distress by performing it instead. I don't know that this specifically distinguishes it from an 'assistant', however.

Going with my intuition, I would suspect that there are people who would feel drawn to the different roles. For instance I would not like to be an assistant because I'm an excellent emergency back-up leader and would be wasted in that capacity.

Comment author: gjm 22 April 2015 01:46:50PM 1 point [-]

(Pedantic correction; read on only if you prefer being right to not being corrected.)

"Suborn" doesn't mean what I think you think it does. You mean "subordinate"; to suborn someone is to bribe them to do something bad.

Comment author: examachine 22 April 2015 12:33:50PM -2 points [-]

Wow, that's clearly foolish. Sorry. :) I mean I can't stop laughing so I won't be able to answer. Are you people retarded or something? Read my lips: AI DOES NOT MEAN FULLY AUTONOMOUS AGENT.

And the AI Box experiment is more bullshit. I can PROGRAM an agent so that it never walks out of a box. It never wants to. Period. Imbeciles. You don't have to "imprison" any AI agent.

So, no, because it doesn't have to be fully autonomous.

Comment author: gjm 22 April 2015 01:43:07PM 1 point [-]

AI DOES NOT MEAN FULLY AUTONOMOUS AGENT

For sure. But fully autonomous agents are a goal a lot of people will surely be working towards, no? I don't think anyone is claiming "every AI project is dangerous". They are claiming something more like "AI with the ability to do pretty much all the things human minds do is dangerous", with the background presumption that as AI advances it becomes more and more likely that someone will produce an AI with all those abilities.

I can PROGRAM an agent so that it never walks out of a box. It never wants to.

Again: for sure, but that isn't the point at issue.

One exciting but potentially scary scenario involving AI is this: we make AI systems that are better than us at making AI systems, let them get to work designing their successors, let those successors design their successors, etc. End result (hopefully): a dramatically better AI than we could hope to make on our own. Another, closely related scenario: we make AI systems that have the ability to reconfigure themselves by improving their own software and maybe even adjusting their hardware.

In any of these cases, you may be confident that the AI you initially built doesn't want to get out of whatever box you put it in. But how sure are you that after 20 iterations of self-modification, or of replacing an AI by the successor it designed, you still have something that doesn't want to get out of the box?

There are ways to avoid having to worry about that. We can just make AI systems that neither self-modify nor design new AI systems, for instance. But if we are ever to make AIs smarter than us, the temptation to use that smartness to make better AIs will be very strong, and it only requires one team to try it to expose us to any risks that might ensue.

(One further observation: telling people they're stupid and you're laughing at them is not usually effective in making them take your arguments more seriously. To some observers it may suggest that you are aware there's a weakness in your own arguments. ("Argument weak; shout louder."))
