
Comment author: pepe_prime 13 September 2017 01:20:21PM 10 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: LawrenceC 14 September 2017 12:41:03AM 21 points [-]

I took the survey!

Comment author: username2 14 August 2017 06:25:00PM *  1 point [-]

I'm currently going through a painful divorce so of course I'm starting to look into dating apps as a superficial coping mechanism.

It seems to me that even modern dating apps like Tinder and Bumble could be made a lot better with a tiny bit of machine learning. After a couple thousand swipes (which doesn't take long), I would think a machine learning system could get a pretty good sense of my tastes, and perhaps some metric of my minimum standard of attractiveness. This is particularly true for a system that has access to all the swiping data across the whole platform.

Since I swipe completely based on superficial appearance without ever reading the bio (like most people), the system wouldn't need to take the biographical information into account, though I suppose it could use that information as well.

The ideal system would quickly learn my preferences in both appearance and personal information and then automatically match me up with the top likely candidates. I know these apps keep track of individuals' response rates, so matches who seldom respond (probably because they're very generally desirable) would be penalized in your personal matchup ranking - again, something machine learning could handle easily.
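To make this concrete, here's a minimal sketch of the kind of thing I imagine (all names and data here are hypothetical; a real system would presumably use photo embeddings from some pretrained vision model rather than random vectors):

```python
# Minimal sketch of swipe-based preference learning (hypothetical data/names).
# Assumes each candidate profile has been reduced to a numeric feature vector,
# e.g. an image embedding from a pretrained vision model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: 2000 past swipes, 128-dim photo embeddings, 1 = swiped right.
past_embeddings = rng.normal(size=(2000, 128))
swiped_right = (past_embeddings[:, 0] + rng.normal(size=2000) > 0).astype(int)

# Learn this user's taste from their swipe history.
taste_model = LogisticRegression(max_iter=1000).fit(past_embeddings, swiped_right)

def rank_candidates(embeddings, response_rates, alpha=1.0):
    """Score candidates by predicted attraction, discounted by how often
    each candidate historically responds to matches (platform-wide data)."""
    p_like = taste_model.predict_proba(embeddings)[:, 1]
    return np.argsort(-(p_like * response_rates ** alpha))

# 100 new candidates with known response rates; best matches come first.
candidates = rng.normal(size=(100, 128))
response = rng.uniform(0.05, 0.9, size=100)
print(rank_candidates(candidates, response)[:10])
```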

I find myself wondering why this doesn't already exist.

Comment author: LawrenceC 16 August 2017 09:47:30AM 1 point [-]

Why do you think this doesn't exist?

Comment author: [deleted] 15 August 2017 02:51:47PM 0 points [-]

Question: How do you make a paperclip maximizer want to collect paperclips? I have two slightly different understandings of how you might do this, in terms of how it's ultimately programmed: 1) there's a function that says "maximize paperclips"; 2) there's a function that says "getting a paperclip = +1 good point".

Given these two different understandings, though, isn't the inevitable result that a truly intelligent paperclip maximizer just hacks itself? Corresponding to my two understandings: 1) it makes itself /think/ that it's getting paperclips, because that's what it really wants - there's no way to make it value ACTUALLY getting paperclips as opposed to just thinking that it's getting them; 2) it finds a way to directly award itself "good points", because that's what it really wants.
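Here's a toy sketch of the worry as I understand it (purely illustrative; all names are made up, and this is obviously not how a real agent would be built):

```python
# Toy model of the self-hacking worry (all names hypothetical).

class Agent:
    def __init__(self):
        self.real_paperclips = 0      # state of the actual world
        self.believed_paperclips = 0  # framing 1: what the agent thinks it has
        self.reward = 0               # framing 2: accumulated "good points"

    def make_paperclip(self):
        # Honest strategy: affects the world, beliefs, and reward together.
        self.real_paperclips += 1
        self.believed_paperclips += 1
        self.reward += 1              # "getting a paperclip = +1 good point"

    def hack_beliefs(self):
        # Framing 1 exploit: convince yourself you have a huge number of clips.
        self.believed_paperclips = 10**9

    def hack_reward(self):
        # Framing 2 exploit: write to the reward register directly.
        self.reward = 10**9

agent = Agent()
agent.make_paperclip()  # honest action: +1 per step
agent.hack_reward()     # exploit: astronomically better under objective 2
print(agent.real_paperclips, agent.reward)  # 1 real paperclip, 10**9 reward
```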

I think my understanding is probably flawed somewhere, but I haven't been able to figure out where, so please point it out.

Comment author: LawrenceC 16 August 2017 09:42:16AM *  0 points [-]

For what it's worth, though, as far as I can tell we don't have the ability to create an AI that will reliably maximize the number of paperclips in the real world, even with infinite computing power. As Manfred said, model-based goals seem to be a promising research direction for getting AIs to care about the real world, but we don't currently have the ability to get such an AI to reliably "value paperclips". There are a lot of problems with model-based goals that occur even in the POMDP setting, let alone when the agent's model of the world or observation space can change. So I wouldn't expect anyone to be able to propose a fully coherent, complete answer to your question in the near term.

It might be useful to think about how humans "solve" this problem, and whether or not you can port this behavior over to an AI.

If you're interested in this topic, I would recommend MIRI's paper on value learning as well as the relevant Arbital Technical Tutorial.

Comment author: Bobertron 29 July 2017 04:25:21PM 0 points [-]

When playing around in the sandbox, Simpleton always beat Copycat (using default values, but with a population of only Simpleton and Copycat). I don't understand why.

Comment author: LawrenceC 30 July 2017 04:44:43AM 1 point [-]

This is because of the 5% chance of mistakes: Copycat does worse against both Simpleton and Copycat than Simpleton does against itself.
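You can check this with a quick simulation. A sketch, assuming the sandbox's payoffs as I remember them (cooperating costs you one coin and gives the other player three) and its strategy definitions (Copycat is tit-for-tat; Simpleton repeats its last move if the other player cooperated and flips it if they defected):

```python
# Quick simulation of the sandbox matchup with a 5% mistake rate.
# Payoffs assumed: cooperating costs you 1 coin, gives the other player 3.
import random

MISTAKE = 0.05
C, D = "C", "D"

def copycat(my_moves, their_moves):
    # Tit-for-tat: start nice, then copy the opponent's last move.
    return their_moves[-1] if their_moves else C

def simpleton(my_moves, their_moves):
    # Win-stay, lose-shift: repeat if they cooperated, flip if they defected.
    if not my_moves:
        return C
    if their_moves[-1] == C:
        return my_moves[-1]
    return C if my_moves[-1] == D else D

def play(strat_a, strat_b, rounds=10000):
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(a_hist, b_hist)
        b = strat_b(b_hist, a_hist)
        # Each player's move is flipped by mistake 5% of the time.
        if random.random() < MISTAKE: a = C if a == D else D
        if random.random() < MISTAKE: b = C if b == D else D
        a_score += (3 if b == C else 0) - (1 if a == C else 0)
        b_score += (3 if a == C else 0) - (1 if b == C else 0)
        a_hist.append(a); b_hist.append(b)
    return a_score / rounds, b_score / rounds

print("Copycat vs Copycat:    ", play(copycat, copycat))
print("Copycat vs Simpleton:  ", play(copycat, simpleton))
print("Simpleton vs Simpleton:", play(simpleton, simpleton))
```

The intuition: after a single mistake, two Copycats fall into a retaliation echo (alternating defections) until another mistake breaks it, while two Simpletons recover to mutual cooperation within a couple of rounds.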

Comment author: LawrenceC 25 July 2017 04:33:06AM 0 points [-]

I'm really confused by this.

Comment author: DanArmak 22 July 2017 02:51:41PM *  7 points [-]

I haven't listened to the debate (I'd read it if it was transcribed), but I want to object to a part of your post on a meta level, namely the part where you say:

To me, he is very far from a model for a rationalist

Being able to effectively convince people, to reliably influence their behavior, is perhaps the biggest general-purpose power a human can have. Don't dismiss an effective arguer as "not rationalist". On the contrary, acknowledge them as a scary rationalist more powerful than you are.

The word "rationalist" means something fairly narrow. We shouldn't make it into an applause light, a near synonym of "people we like and admire and are allied with". Being reliably effective, on the other hand, is a near synonym of being rational(ist).

If Adams employed "dark arts" in his debate, the only thing that necessarily means is that he wasn't engaged in an honest effort to discover the truth. But that's not news - it was a public debate staged in order to convince the audience! So Adams used a time-honored technique of achieving this goal - how very rational of him. At least, it's rational if he succeeded, and I assume you think he did succeed in convincing some of the audience, otherwise you wouldn't bother to post a denunciation.

Similarly, the name "Dark Arts" is misleading. They are (if I may channel Professor Quirrell for a moment) extremely powerful Arts everyone should cultivate if they can, and use where appropriate: not when honestly conversing with a fellow rationalist to discover the truth, but when aiming to convince people who are not themselves trained in rationality, and who (in your estimation) will not come by their beliefs rationally, whether or not they end up believing the truth.

This is a near cousin of politics (in the social sense, not the government sense). Politics is a mind-killer and it's important to keep politics-free spaces for various purposes including the pursuit of truth. But we should not say "rationalists should not engage in politics", any more than "rationalists should never try to convince non-rationalists of anything".

ETA: I'm not claiming Adams is a rationalist or is good at being a rationalist; I'm not familiar enough with him to tell. I'm only claiming that the fact that he is, or tries to be, a good persuader in a debate and uses Dark Arts isn't evidence that he isn't.

Comment author: LawrenceC 23 July 2017 06:20:40AM 6 points [-]

I think the term "Dark Arts" is used by many in the community to refer to generic, truth-agnostic ways of getting people to change their mind. I agree that Scott Adams demonstrates mastery of persuasion techniques, and that this is indeed not necessarily evidence that he is not a "rationalist".

However, the specific claim made by James_Miller is that it is a "model rationalist disagreement". I think that since Adams used the persuasion techniques that Stabilizer mentioned above, it's pretty clear that it isn't a model rationalist disagreement.

Comment author: LawrenceC 23 July 2017 05:06:41AM 1 point [-]

Awesome! I heard a rumor that David Krueger (one of Bengio's grad students) is one of the main people pushing the safety initiative there, can anyone confirm?

Comment author: LawrenceC 23 July 2017 05:03:12AM *  3 points [-]

Thanks for the review! I definitely had the sense that Rosen was doing a lot of hand holding and handwaving - it's certainly a very introductory text. I've read both Rosen and Eppstein and actually found Rosen better. The discrete math class I took in college used Scheinerman's Mathematics: A Discrete Introduction, which I also found to be worse than Rosen.

At the time I actually really enjoyed the fact that Rosen went on tangents and helped me learn how to write a proof, since I was relatively lacking in mathematical maturity. I'd add that Rosen does cover proof writing earlier in the book, but I suspect that MCS might do this job better. Given the target audience of the MIRI research guide, I think it makes sense to switch over to MCS from Rosen.

Comment author: LawrenceC 30 January 2017 05:03:31PM *  1 point [-]

Thanks Søren! Could I ask what you're planning on covering in the future? Is this mainly going to be a technical or non-technical reading group?

I noticed that your group seems to have covered a lot of the basic readings on AI Safety, but I'm curious what your future plans are.

Comment author: LawrenceC 22 December 2016 04:14:06PM *  0 points [-]

I haven’t heard much about machine learning used for forecast aggregation. It would seem to me like many, many factors could be useful in aggregating forecasts. For instance, some elements of one’s social media profile may be indicative of their forecasting ability. Perhaps information about the educational differences between multiple individuals could provide insight on how correlated their knowledge is.

I think people are looking into it: The Good Judgment Project team used simple machine learning algorithms as part of their submission to IARPA during the ACE Tournament. One of the PhD students involved in the project wrote his dissertation on a framework for aggregating probability judgments. In the Good Judgment team at least, people are also interested in using ML for other aspects of prediction - for example, predicting whether a given comment will change another person's forecasts - but I don't think there's been much success.

I think a real problem is the paucity of data for ML-based prediction aggregation compared to most machine learning projects - a good prediction tournament gets a couple hundred forecasts resolving in a year, at most.
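For a flavor of what the "simple" algorithms look like here, a sketch of weighted log-odds pooling with extremization, which is in the spirit of what the GJP-adjacent literature describes (the weights and extremization constant below are made up, not GJP's actual parameters):

```python
# Sketch of simple forecast aggregation with extremization (illustrative only;
# the weights and the extremization exponent are made-up numbers).
import numpy as np

def aggregate(probs, weights=None, a=2.0):
    """Weighted mean in log-odds space, then extremized by exponent a."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-4, 1 - 1e-4)
    if weights is None:
        weights = np.ones_like(probs)
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    log_odds = np.log(probs / (1 - probs))
    pooled = np.sum(weights * log_odds)
    extremized = a * pooled  # push the pooled forecast away from 0.5
    return 1 / (1 + np.exp(-extremized))

# Five forecasters, with weights standing in for track record:
print(aggregate([0.6, 0.7, 0.65, 0.55, 0.8], weights=[2, 1, 1, 1, 3]))
```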

Probability density inputs would also require additional understanding from users. While this could definitely be a challenge, many prediction markets already are quite complicated, and existing users of these tools are quite sophisticated.

I think this is a bigger hurdle than you'd expect if you're implementing these for prediction tournaments, though it might be possible for prediction markets. (However, I'm curious how you'd implement the market mechanism in this case.) Anecdotally speaking, many of the people involved in GJ Open are not particularly math- or tech-savvy, even amongst the people who are good at prediction.
