
In response to Jokes Thread
Comment author: RichardKennaway 24 July 2014 09:24:02AM 20 points [-]

"Yields a joke when preceded by its quotation" yields a joke when preceded by its quotation.

Comment author: brazil84 22 July 2014 04:46:27PM *  1 point [-]

That was predicted decades ago, when telecommuting was hyped, and the opposite happened.

Yes, I agree with this. But a lot of trends stop and then reverse themselves.

ETA: Upon further reflection, my best guess is that this trend will continue. Because people crave status; even in a society of plenty there is a limited amount of status; and it's high status to live in or near an important city.

Comment author: Kaj_Sotala 23 July 2014 07:05:43AM 1 point [-]

Also, people want to be near their friends, and the easiest way to be close to a lot of people is to live in a big city.

I would actually expect communications technologies to accelerate the urbanization process, since they make it easier to make geographically distant friends online, and then you become more likely to want to move to where they live.

Comment author: Kaj_Sotala 23 July 2014 06:52:26AM *  6 points [-]

The Ethereum pre-sale has begun.

Given that Ethereum is explicitly designed as a platform for decentralized applications, it seems to me like it could be the next big cryptocurrency after Bitcoin. I'm not terribly confident in this assessment, however. Do people here have an opinion on how likely it is that it'd be the "next tech gold rush"?

Comment author: Kaj_Sotala 22 July 2014 10:33:44AM 1 point [-]

You ask:

Under the assumption that people are risk-neutral with respect to utils, what does it mean for an agent to rationally refuse an outcome where they expect to get more utils?

and then later on say:

Sir Percy knows that his expected utility is lower, but seems to have rationally decided that this is acceptable given his preferences about ambiguity.

But you don't seem to have actually answered your own question: how are you defining 'rationality' in this post? If Sir Percy knows that his expected utility is lower, then his actions clearly can't be VNM-rational, but you haven't offered an alternative definition that would allow us to verify that Sir Percy's decisions are, indeed, rational.
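For concreteness, one standard way to make such refusals precise is maxmin expected utility over a set of priors (Gilboa-Schmeidler-style ambiguity aversion). A minimal Python sketch, with made-up numbers, of how that rule can disagree with plain expected-utility maximization:

    # Option A: a known coin, winning 100 utils with p = 0.5.
    # Option B: an ambiguous coin whose p is only known to lie in
    # [0.1, 0.9]; a best-guess prior of p = 0.6 gives B the higher EU.
    def expected_utility(p, win=100):
        return p * win

    eu_a = expected_utility(0.5)  # 50
    eu_b = expected_utility(0.6)  # 60: plain EU maximization picks B

    # Maxmin EU: judge B by the worst prior in its ambiguity set.
    worst_eu_b = min(expected_utility(p) for p in (0.1, 0.9))  # 10

    print("EU rule picks:", "B" if eu_b > eu_a else "A")            # B
    print("maxmin rule picks:", "B" if worst_eu_b > eu_a else "A")  # A

Under a rule like this, refusing the higher-EU option is "rational" by construction; the question is whether the post has committed to any such alternative definition.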

Comment author: V_V 14 July 2014 10:11:13AM *  2 points [-]

I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don't know about live events).
Specifically, my points of concern are:

  • The "rationalist" identity: It creates the illusion that by identifying as a "rationalist" and displaying the correct tribal insignia you are automatically more rational, or at least "less wrong" than the outsiders.

  • Rituals: Deliberately modelled after religious rituals, including "public confession" sessions, AFAIK similar to those performed by cults like the Church of Synanon.

  • MIRI: I agree with you that they probably exaggerate the AI risk, and I doubt they have the competence to do much about it anyway. For its first ten or so years, when manned primarily by Eliezer Yudkowsky, Anna Salamon, etc., the organization produced effectively zero valuable research output. In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.

  • CFAR: a self-help/personal-development program, questionable like all the self-help/personal-development programs in existence. If I understand correctly, CFAR is modelled after, or at least is similar to, Landmark, a controversial organization.

  • Pseudo-scientific beliefs and practices: cryonics (you are signed up, so you probably don't agree), paleo diets/ketogenic diets, armchair evopsych, and so on. It seems to me that as long as something is dressed in sufficiently "sciency" language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it. Yes, this kind of effect happens in all groups, but from a group of people with an average IQ of 140 who pride themselves on pursuing rationality I would have expected better.

Comment author: Kaj_Sotala 16 July 2014 08:32:17AM *  5 points [-]

In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.

http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :

We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.

http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/ :

We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]

Update 05/14/14: This paper has been accepted to AGI-14.
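(For anyone who hasn't read the first paper: its actual construction relies on Löb's theorem and proof search, but the underlying "program equilibrium" idea can be illustrated with a much cruder source-matching bot. A Python sketch; the bot and names below are illustrative, not the paper's:)

    import inspect

    def clique_bot(opponent_source):
        """Cooperate exactly when the opponent's source matches our own."""
        return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

    # Two copies of clique_bot cooperate with each other, making mutual
    # cooperation an equilibrium even in a one-shot Prisoner's Dilemma...
    print(clique_bot(inspect.getsource(clique_bot)))         # "C"
    # ...while anything else, e.g. an unconditional cooperator, gets "D".
    print(clique_bot("def cooperate_bot(src): return 'C'"))  # "D"

The paper's FairBot is more robust: it cooperates with any opponent it can prove will cooperate back, which is where Löb's theorem comes in.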

Comment author: Bugmaster 16 July 2014 08:05:16AM 1 point [-]

Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place. If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on a scale from 0 == "rock" to 1 == "human" -- then we could reasonably say that they do experience suffering to some extent.

As I said, I don't believe that the word "consciousness" has any useful meaning; but I am pretending that it does, for the purposes of this post.

Comment author: Kaj_Sotala 16 July 2014 08:27:34AM 3 points [-]

Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place.

Yeah. How is that circular reasoning? Seems straightforward to me: "computer-controlled opponents don't suffer from being shot -> shooting them is okay".

If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on a scale from 0 == "rock" to 1 == "human" -- then we could reasonably say that they do experience suffering to some extent.

If they are conscious to some extent, then we could reasonably say that they do experience something. Whether that something is suffering is another question. Given that "suffering" seems to be a reasonably complex process that can be disabled by the right brain injury or drug, and computer NPCs aren't anywhere near the level of possessing similar cognitive functionality, I would say that shooting them still doesn't cause suffering even if they were conscious.

Comment author: Salemicus 15 July 2014 06:16:56PM 1 point [-]

I wasn't thinking about multiplayer games, but rather single-player games with computer-controlled opponents.

Ah, I see. I misunderstood what you meant by opponent - in which case I certainly agree with you. If the NPC had some kind of "consciousness," such that when you hit him with your magic spell he really did experience being embroiled in a fireball, then playing Skyrim would be a lot more ethically dubious.

One could say that the rules about legitimacy were justified to the extent that they reduced suffering and increased pleasure

One could say all manner of things. But does that argument really track with your intuitions? I'm not saying that suffering and pleasure don't enter the moral calculus at all, mind you. But my intuition is that the "suffering" of someone who doesn't want to be shot in a multiplayer game of Doom simply doesn't count, in much the same way that the "pleasure" that a rapist takes in his crime doesn't count. I'm not talking about the social/legal rules, as implemented, for what is and isn't legitimate - I'm talking about our innate moral sense of what is and isn't legitimate.

I think this is what underlies a lot of the "trigger warning" debate. One side really wants to say "I don't care how much suffering you claim to undergo; it's irrelevant, and you're not being wronged in any way," while the other side really wants to say "I have a free-floating right not to be offended, so any amount I suffer from you violating that right is too much." Neither side can make its case in those terms, as both statements are considered too extreme, which is why you get this shadow-boxing.

Comment author: Kaj_Sotala 16 July 2014 07:48:22AM *  0 points [-]

But does that argument really track with your intuitions?

At one point I would have said "yes", but at this point I've basically given up on trying to come up with verbal arguments that would track my intuitions, at least once we move away from clear-cut cases like "Skyrim NPCs suffering from my fireballs would be bad" and into messier ones like a multiplayer game.

(So why did I include the latter part of my comment in the first place? Out of habit, I guess. And because I know that there are some people - including my past self - who would have rejected your argument, but whose exact chain of reasoning I no longer feel like trying to duplicate.)

Comment author: Salemicus 15 July 2014 02:17:10PM 1 point [-]

Hang on though - shooting your opponents in a computer game might well cause them (emotional) suffering, not from being hit by a bullet, but from their character dying. But we shoot them anyway, because they don't have a legitimate expectation that they won't experience suffering in that way.

In other words, deeper introspection shows that suffering and pleasure aren't terminal values, but are grafted onto a deeper theory of legitimacy.

Comment author: Kaj_Sotala 15 July 2014 04:58:10PM 5 points [-]

I wasn't thinking about multiplayer games, but rather single-player games with computer-controlled opponents.

In other words, deeper introspection shows that suffering and pleasure aren't terminal values, but are grafted onto a deeper theory of legitimacy.

There are certainly arguments to be made for suffering and pleasure not being terminal values, but (even if we assumed that I was thinking about MP games) this argument doesn't seem to show it. One could say that the rules about legitimacy were justified to the extent that they reduced suffering and increased pleasure, and that the average person got more pleasure overall from playing a competitive game than he would get from a situation where nobody agreed to play with him.

Comment author: Kaj_Sotala 13 July 2014 01:33:34PM 5 points [-]

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is defined (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'.

Many people think that consciousness, in the sense of having the capacity to experience suffering or pleasure, makes an entity morally relevant, because happiness/pleasure is held to be a good thing and suffering a bad one, as terminal values. (That of course doesn't mean that you couldn't eat chickens, as long as you killed them painlessly.)

I don't mind shooting my opponents in a computer game because I know that they won't actually experience the suffering from being hit by a bullet, but I sure would mind if I knew that they did experience such pain.

Comment author: brazil84 11 July 2014 07:36:11AM 1 point [-]

Followup: So will you take the actions I suggested? They seem pretty simple and easy, and I can't think of why you wouldn't do them if your true reason is doubt.

Thank you :)

Comment author: Kaj_Sotala 11 July 2014 12:02:51PM 2 points [-]

The main reason is that digging up the information about the specific downvotes would be more work for jackk and I'm not sure how burdened he is with the work that he's already doing. (Also more work for me.) But I'll ask him once he gets done with the current stuff he's doing for this whole thing.
