Comment author: username2 14 August 2017 06:25:00PM *  1 point [-]

I'm currently going through a painful divorce so of course I'm starting to look into dating apps as a superficial coping mechanism.

It seems to me that even the modern dating apps like Tinder and Bumble could be made a lot better with a tiny bit of machine learning. After a couple thousand swipes (which doesn't take long), I would think that a machine learning system could get a pretty good sense of my tastes and perhaps some metric of my minimum standards of attractiveness. This is particularly true for a system that has access to all the swiping data across the whole platform.

Since I swipe completely based on superficial appearance without ever reading the bio (like most people), the system wouldn't need to take the biographical information into account, though I suppose it could use that information as well.

The ideal system would quickly learn my preferences in both appearance and personal information and then automatically match me up with the top likely candidates. I know these apps keep track of individuals' response rates, so matches who rarely respond (probably because they're broadly desirable) would be penalized in your personal matchup ranking - again, something machine learning could handle easily.
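
For concreteness, here's a minimal sketch of the kind of system I'm imagining, assuming each profile photo has already been reduced to a numeric feature vector; the embeddings, response rates, and every number below are made-up stand-ins, not anything a real app exposes:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in photo embeddings and swipe labels: a couple thousand swipes,
    # each profile reduced to a 128-dimensional feature vector.
    n_swipes, n_features = 2000, 128
    X = rng.normal(size=(n_swipes, n_features))
    hidden_taste = rng.normal(size=n_features)               # the user's true preferences
    y = (X @ hidden_taste + rng.normal(size=n_swipes)) > 0   # True = swiped right

    # Learn the user's taste from their swipe history.
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Rank fresh candidates: predicted appeal, discounted by each candidate's
    # historical response rate, so broadly desirable but unresponsive profiles
    # sink in the personal ranking.
    candidates = rng.normal(size=(500, n_features))
    response_rate = rng.uniform(0.05, 0.9, size=500)   # hypothetical platform stat
    p_swipe_right = model.predict_proba(candidates)[:, 1]
    expected_match = p_swipe_right * response_rate
    print(np.argsort(expected_match)[::-1][:10])       # ten best likely candidates

A real version would swap the random vectors for embeddings from a pretrained vision model, but the ranking logic would stay the same.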

I find myself wondering why this doesn't already exist.

Comment author: drethelin 15 August 2017 03:39:43PM 3 points [-]

There aren't that many people, so the benefits would be minor. Once you've swiped a couple thousand times, you're probably through most of the Tinder users within your demographic preferences.

Comment author: [deleted] 15 August 2017 02:51:47PM 0 points [-]

Question: How do you make the paperclip maximizer want to collect paperclips? I have two slightly different understandings of how you might do this, in terms of how it's ultimately programmed:

1) there's a function that says "maximize paperclips"
2) there's a function that says "getting a paperclip = +1 good point"

Given these two different understandings, isn't the inevitable result that a truly intelligent paperclip maximizer just hacks itself? Corresponding to my two understandings:

1) it makes itself /think/ that it's getting paperclips, because that's what it really wants - there's no way to make it value ACTUALLY getting paperclips as opposed to just thinking that it's getting paperclips
2) it finds a way to directly award itself "good points", because that's what it really wants

I think my understanding is probably flawed somewhere, but I haven't been able to figure out where, so please point it out.

Comment author: drethelin 15 August 2017 03:38:00PM *  0 points [-]

Why would it hack itself to think it's getting paperclips if it's originally programmed to want real paperclips? It would not be incentivized to make that hack because that hack would make it NOT get paperclips.
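
A toy illustration of that distinction, with entirely invented numbers: an agent whose plans are scored by predicted real-world paperclips (the parent's option 1, read as a utility function over outcomes) never prefers the self-hack, while an agent scored by its internal reward signal (option 2) does:

    # Two made-up plans, scored along the two dimensions the parent
    # comment distinguishes: paperclips actually produced vs. the
    # internal reward signal the agent experiences.
    plans = {
        "build_paperclip_factory": {"real_paperclips": 1000, "felt_reward": 1000},
        "hack_own_reward_signal":  {"real_paperclips": 0,    "felt_reward": 10**9},
    }

    def paperclip_utility(plan):
        # Option 1: plans are judged by their predicted effect on the
        # world, so the hack scores zero.
        return plans[plan]["real_paperclips"]

    def internal_reward(plan):
        # Option 2: plans are judged by the "+1 good point" signal, so
        # the hack dominates everything.
        return plans[plan]["felt_reward"]

    print(max(plans, key=paperclip_utility))   # build_paperclip_factory
    print(max(plans, key=internal_reward))     # hack_own_reward_signal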

Comment author: drethelin 22 July 2017 07:54:40PM 1 point [-]

This is why we need downvotes.

Comment author: ImmortalRationalist 20 July 2017 10:39:25AM 1 point [-]

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal is well-ordered. Are there yet any ways to justify belief in either of these two things that do not require faith?

Comment author: drethelin 20 July 2017 05:13:12PM 1 point [-]

You can justify a belief in "Induction works" by induction over your own life.

Comment author: drethelin 07 July 2017 09:38:01PM 4 points [-]

I strongly disagree. Most classes are pathetically slow and boring, not to mention expensive and time-consuming. For one example, I've learned more history just reading books on my own than I ever did in History class, and it's not like I got bad grades back when I was in school. Showing up and being "carried along" was basically worthless.

It's likely that there are some kinds of classes that are functionally better than learning on your own, but given that the vast majority of classes on most topics are de facto aimed at the lowest common denominator, you're gonna have to put in a lot of work to find the good ones.

Comment author: WalterL 06 July 2017 01:24:36PM 0 points [-]

This might be better saved for a 'dumb questions' thread, but whatever.

So...I've had a similar experience a couple of times. You go to the till, make a purchase, something gets messed up and you need to void out. The cashier has to call a manager.

This one time I had a cashier who couldn't find her manager, so she put the transaction through, then put a refund through. Neither of these required a manager.

Why is it that you need a manager code to void a transaction, while the cashier is presumed competent to handle sales and refunds?

Comment author: drethelin 07 July 2017 06:34:02PM 1 point [-]

Voiding a transaction deletes it (I'm pretty sure), which removes the information trail. The other way records both transactions, so if they turn out to be criminal, the cashier in question can be caught.
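
A toy sketch of that audit-trail logic (the register internals here are invented for illustration): a sale followed by a refund leaves two inspectable entries, while a void simply erases the record:

    ledger = []  # append-only record an auditor can inspect later

    def sale(amount):
        ledger.append(("SALE", amount))

    def refund(amount):
        ledger.append(("REFUND", -amount))

    def void_last():
        ledger.pop()  # the transaction vanishes without a trace

    sale(19.99); refund(19.99)   # the cashier's workaround: both entries survive
    sale(19.99); void_last()     # the manager-gated path: nothing survives
    print(ledger)                # [('SALE', 19.99), ('REFUND', -19.99)]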

Comment author: drethelin 31 May 2017 06:20:33AM 7 points [-]

Sarcastic and rambling, and not in a way that's fun to read. Either get to the point faster or make your asides less about mocking your target and more interesting.

Comment author: [deleted] 31 May 2017 04:26:23AM *  2 points [-]

a

Comment author: drethelin 31 May 2017 05:59:20AM 9 points [-]

THIS IS WHY WE NEED DOWNVOTES.

Comment author: Alicorn 31 May 2017 04:44:16AM 5 points [-]

I was tentatively willing to give you some benefit of the doubt even though I don't know you, but I'm really disappointed that you feel the need to score points against a rationalist-adjacent person posting to her Tumblr about how your post looks from her outside vantage point. I brought a similar-amount-of-adjacent friend to the seder and it freaked her out. Rationalist shit looks bizarre from a couple steps away. You do not have to slam my friend for not being impressed with you.

Comment author: drethelin 31 May 2017 05:41:38AM 2 points [-]

That's kind of unfair, considering the sheer amount of point-scoring going on in the original post.

Comment author: zjacobi 28 May 2017 10:31:45PM *  19 points [-]

I think the troll obliquely raised one good point with their criticism of the example for Rule 6:

For example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior

Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly ask the other residents, out loud, to stop doing things they could have reasonably foreseen would bother you, or in a house where people actually use reasonable expectations of what other people want to guide their behavior, and therefore act in ways that preempt causing irritation?

Treating something like your sleep disturbances as your own responsibility is fine if, e.g., you (like me) have lots of trouble falling asleep and people whispering 15 metres from your room is enough to keep you awake. In that case, those people are doing everything right and really don't know that they're hurting you. It is unreasonable to get angry at them if you haven't explained to them why their behaviour is bad for you.

Sometimes it's less clear, though. I sometimes use the microwave after midnight. I know the microwave can be heard in my room and in my roommate's room, so when I use it and think he might be asleep, I stop it before the timer finishes and it beeps loudly. There's not much excuse for waiting until my roommate specifically requests that I do this; I'm more than capable of figuring out a) that the microwave's end-of-cycle beep is loud and the sort of thing that can disrupt sleep, and b) that there's a way I can stop it from happening. It would show some failure of consideration if I shrugged off the potential inconvenience the microwave presents for my roommate for the slight benefit of not having to watch it.

This points to one of the failure modes of Tell Culture, where people use it as an excuse to stop thinking about how their actions affect other people. This actually suggests that one potential experimental house norm could be something like "before taking an action that might affect another Dragon, pause and consider how it might affect them and whether the effect will be a net positive."

What this all comes down to for me is that it seems unfair to ask people to assume goodwill without also asking them to always attempt to act with goodwill.

Comment author: drethelin 29 May 2017 09:52:04PM 12 points [-]

I like this comment, but I think what this and the original trollpost miss is that the LW community in general, because it includes a lot of people with autism and sensory issues, has a ton of people who actually do NOT have "reasonable expectations of what other people want" to guide their behavior. The OP quoted here is making a common typical-mind error. Of COURSE it's better to live with people who intuit your preferences and act in accordance with them without being told what they are. But it's obnoxious to shit on attempted solutions to a problem by insisting that morally good people could never have the problem in the first place.
