Comment author: jkadlubo 29 April 2014 05:17:32PM 3 points [-]

I think there were too few people wearing no-touching tags to make them work (well enough). At some point I freaked out, and everyone who saw me in distress and wanted to help just hugged me, patted me and generally invaded my space - ignoring the tag and the semi-obvious reason I was freaking out.

What I don't agree with is the ironic status you ascribe to those tags. I talked to some people about it, and aside from straight "I want a lot of hugs" and "don't touch me at all" there was also the opinion "I don't feel comfortable being hugged (or touched), but I can hug some of the other people" - a middle ground which didn't have a separate tag and didn't truly fit either of the existing tags. Given the generally cuddly atmosphere, picking a "don't hug me" tag was the sensible action (because not picking a tag would simply put you in the majority "hug me" group).

I don't know if having a new middle-ground tag would fix this problem. Maybe it would be ignored the same way the "don't touch me" tag was. Maybe it would simply work better if the group were more balanced. I caught myself several times looking at somebody's tag to check whether they would accept a hug, and preparing my body for a hug before my brain had processed the meaning of the pictogram - since almost everyone wanted hugs, this person must want them too, right?

Comment author: Roxolan 29 April 2014 06:25:03PM 1 point [-]

Fair point. Apologies to anyone else wearing the no-hug tag.

Comment author: Roxolan 29 April 2014 03:46:22PM 9 points [-]

We wanted to encourage hugging by letting people put an “accepting hugs as a form of greeting” sticker on their extended name tags. To our surprise it was adopted by a huge majority and had an immense effect on social interactions by creating an atmosphere of familiarity.

As the only person wearing a no-hug tag unironically here: those tags do not work. I did less socializing than most, but still had to interrupt a few hugs (in one case from someone wearing an ironic no-hug tag), to my discomfort and their guilt. But a pro-hug culture seems so good for the community that I should probably hack myself/spend a spoon to let people hug me, rather than impose costly social rules on everyone else.

Comment author: CarlShulman 21 June 2013 05:39:15AM *  19 points [-]

Philosopher Kenny Easwaran reported in 2007 that:

Josh von Korff, a physics grad student here at Berkeley, and I have been discussing versions of Newcomb’s problem. He shared my general intuition that one should choose only one box in the standard version of Newcomb’s problem, but that one should smoke in the smoking lesion example. However, he took this intuition seriously enough that he was able to come up with a decision-theoretic protocol that actually seems to make these recommendations. It ends up making some other really strange predictions, but it seems interesting to consider, and also ends up resembling something Kantian!

The basic idea is that right now, I should plan all my future decisions in such a way that they maximize my expected utility right now, and stick to those decisions. In some sense, this policy obviously has the highest expectation overall, because of how it’s designed.
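Stated as pseudocode, the protocol is an argmax over complete plans rather than over individual actions. A minimal sketch follows; the names, the structure, and the way scenarios are represented are illustrative assumptions, not Korff's own formalism:

```python
# Sketch of the protocol described above: commit, from the current
# standpoint, to whichever complete plan has the highest expected utility
# *now*, and then follow it. All names here are illustrative, not Korff's.
def choose_plan(plans, scenarios):
    """plans: iterable of candidate complete policies.
    scenarios: (probability, utility_fn) pairs, where utility_fn
    scores a whole plan in that scenario."""
    return max(plans, key=lambda plan: sum(p * u(plan) for p, u in scenarios))
    # e.g. for the beggar/god story below: plans = ("give", "keep"),
    # scenarios = [(p_god, god_payoff), (1 - p_god, beggar_payoff)]
```

The point of the maneuver is that the argmax is taken once, before any uncertainty resolves, so the chosen plan can bind behavior in branches where it locally loses.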

Korff also reinvents counterfactual mugging:

Here’s another situation that Josh described that started to make things seem a little more weird. In Ancient Greece, while wandering on the road, every day one either encounters a beggar or a god. If one encounters a beggar, then one can choose to either give the beggar a penny or not. But if one encounters a god, then the god will give one a gold coin iff, had there been a beggar instead, one would have given a penny. On encountering a beggar, it now seems intuitive that (speaking only out of self-interest), one shouldn’t give the penny. But (assuming that gods and beggars are randomly encountered with some middling probability distribution) the decision protocol outlined above recommends giving the penny anyway.

In a sense, what’s happening here is that I’m giving the penny in the actual world, so that my closest counterpart that runs into a god will receive a gold coin. It seems very odd to behave like this, but from the point of view before I know whether or not I’ll encounter a god, this seems to be the best overall plan. But as Josh points out, if this was the only way people got food, then people would see that the generous were doing well, and generosity would spread quickly.
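To make the ex-ante comparison concrete, here is a back-of-the-envelope calculation; the 50/50 encounter odds and the coin values are illustrative assumptions, not from the original:

```python
# Ex-ante value of two complete policies in the beggar/god story.
# Encounter odds and coin values are illustrative assumptions.
P_GOD = 0.5              # chance of meeting a god rather than a beggar
GOLD, PENNY = 100.0, 1.0

# "Always give": the version of you that meets a god earns the gold coin;
# the version that meets a beggar pays a penny. "Never give": pays and
# earns nothing, since the god rewards only would-be givers.
ev_give = P_GOD * GOLD - (1 - P_GOD) * PENNY
ev_keep = 0.0

print(ev_give, ev_keep)  # 49.5 vs 0.0: committing to give wins ex ante
```

The penny-giver loses in the world they actually inhabit, but the policy wins on average over the worlds they might have inhabited, which is exactly the ex-ante standard the protocol uses.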

And he looks into generalizing to the algorithmic version:

If we now imagine a multi-agent situation, we can get even stronger (and perhaps stranger) results. If two agents are playing in a prisoner’s dilemma, and they have common knowledge that they are both following this decision protocol, then it looks like they should both cooperate. In general, if this decision protocol is somehow constitutive of rationality, then rational agents should always act according to a maxim that they can intend (consistently with their goals) to be followed by all rational agents. To get either of these conclusions, one has to condition one’s expectations on the proposition that other agents following this procedure will arrive at the same choices.
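A toy version of that conditioning step (the payoff numbers are the standard illustrative prisoner's-dilemma values, not from the original): if both agents provably run the same protocol, each agent's choice is mirrored, so only the diagonal outcomes are reachable, and cooperation dominates among those.

```python
# Standard illustrative prisoner's-dilemma payoffs for the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Conditioning on "the other agent runs the same protocol and so makes
# the same choice" leaves only the diagonal outcomes on the table.
linked = {a: PAYOFF[(a, a)] for a in ("C", "D")}
print(max(linked, key=linked.get), linked)  # 'C' {'C': 3, 'D': 1}
```

Without that conditioning step, defection dominates row by row, which is why the common-knowledge assumption is doing all the work.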

Korff is now an assistant professor at Georgia State.

Comment author: Roxolan 14 March 2014 04:25:45PM 6 points [-]

In Ancient Greece, while wandering on the road, every day one either encounters a beggar or a god.

If it's an iterated game, then the decision to pay is a lot less unintuitive.

Comment author: [deleted] 18 January 2014 11:52:23AM 3 points [-]

These people should not be punished with negative karma. If anything, we should be awarding karma to those who make meetup posts.

Karma shouldn't be about punishing or rewarding writers, it should be about telling potential readers how helpful you think a post/comment is.

In response to comment by [deleted] on Open Thread for January 17 - 23 2014
Comment author: Roxolan 18 January 2014 02:59:57PM 10 points [-]

Karma is currently very visible to writers. If you give human beings little positive and negative points, they will interpret them as rewards and punishments, no matter what the intent was. As a meetup organiser, I know I feel more motivated when my meetup-organisation posts get positive karma.

Comment author: Roxolan 17 January 2014 04:27:35PM 5 points [-]

(Reposted from the LW facebook group)

The next LW Brussels meetup will be about morality, and I want to have a bunch of moral dilemmas prepared as conversation-starters. And I mean moral dilemmas that you can't solve with one easy utilitarian calculation. Some in the local community have had little exposure to LW articles, so I'll definitely mention standard trolley problems and "torture vs dust specks", but I'm curious if you have more original ones.

It's fine if some of them use words that should really be tabooed. The discussion will double as a taboo exercise.

A lot of what I came up with revolves around the boundaries of sentience. E.g. on a scale that goes from self-replicating amino acids to transhumans (and includes animals, babies, the heavily mentally handicapped...), where do you place things like "I have a moral responsibility to uplift those to normal human intelligence once the technology is available" or "it's fine if I kill/eat/torture those", and how much of one kind of life would you be willing to trade off for a superior kind? Do I have a moral responsibility to uplift babies? Uh-oh.

Trading off lives against things whose value is harder to put on the same scale is also interesting. E.g. "will you save this person, this priceless cultural artifact, or this species on the brink of extinction?" (Yes, I've seen the SMBC.)

Comment author: Roxolan 11 January 2014 10:42:33AM 1 point [-]

I'd already signed up without knowing it was on the MIRI course list.

Comment author: Roxolan 06 January 2014 09:57:24PM 0 points [-]

(Updated with topic and some news.)

Comment author: [deleted] 20 December 2013 08:39:17PM *  2 points [-]

OTOH the result of doing that is sometimes just plain awesome.

In response to comment by [deleted] on Rationality Quotes December 2013
Comment author: Roxolan 04 January 2014 05:26:12PM 1 point [-]

This link is dead (possibly because the blog was hidden and then re-opened in the interval). Could you please update it?

Comment author: lmm 15 December 2013 10:19:49AM -1 points [-]

Is biting the bullet on the thesis advisor analogy really such a problem? Given an infinite human history, it seems like if the proposition were actually false, then at some point someone would have noticed.

Comment author: Roxolan 15 December 2013 04:47:58PM 5 points [-]

if the proposition were actually false, then at some point someone would have noticed.

You're thinking of real human beings, when this is just a parable used to make a mathematical point. The "advisors" are formal deterministic algorithms without the ability to jump out of the system and question their results.

Comment author: Roxolan 06 December 2013 05:29:22PM 1 point [-]

If I were designing an intelligence, I'm not sure how much control I would give it over its own brain.

This sounds like it has the same failure modes as boxing. E.g. an AI doesn't need direct write access to its own source code if it can manipulate its caretakers into altering it. Like boxing, it slows things down and raises the threshold of intelligence required for world domination, but it doesn't actually solve the problem.
