Comment author: Stuart_Armstrong 29 July 2014 09:39:43AM 1 point [-]

Valid point, but do let me take baby steps away from vNM and see where that leads, rather than solving the whole preference issue immediately :-)

Comment author: Kaj_Sotala 29 July 2014 02:33:04PM 2 points [-]

That's reasonable. :-)

Comment author: Kaj_Sotala 29 July 2014 01:59:45PM *  8 points [-]

The philosopher John Danaher is doing a series of posts on Bostrom's Superintelligence book. Posts that were up at the time of writing this comment:

Bostrom on Superintelligence (1): The Orthogonality Thesis
Bostrom on Superintelligence (2): The Instrumental Convergence Thesis
Bostrom on Superintelligence (3): Doom and the Treacherous Turn

Danaher has also blogged about AI risk topics before: see here, here, here, here, and here. He's also written on mind uploading and human enhancement.

Comment author: Kaj_Sotala 29 July 2014 09:07:20AM *  2 points [-]

(I liked your post, but here's a sidenote...)

It bothers me that we keep talking about preferences without actually knowing what they are. I mean yes, in the VNM formulation a preference is something that causes you to choose one of two options, but we also know that to be insufficient as a definition. Humans have lots of different reasons why they might choose A over B, and we'd need to know the exact reasons for each choice if we wanted to declare some choices as "losing" and some as "not losing". To use Eliezer's paraphrase, maybe the person in question really likes riding a taxi between those locations, and couldn't in fact use their money in any better way.

The natural objection to this is that in that case, the person isn't "really" optimizing for their location and being irrational about it, but is rather optimizing for spending a lot of time in the taxi and being rational about it. But 1) human brains are messy enough that it's unclear whether this distinction actually cuts reality at the joints; and 2) "you have to look deeper than just their actions in order to tell whether they're behaving rationally or not" was my very point.

Comment author: Gavin 28 July 2014 11:24:25PM *  28 points [-]

Any time you're thinking about buying insurance, double-check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise spend on insurance into a "rainy day fund" rather than buying ten different types of insurance.

In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would otherwise spend on extended warranties for your devices, and if one breaks, pay out of pocket to repair or replace it.
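To make that concrete, here's a minimal expected-cost sketch in Python; the warranty price, repair cost, and failure probability are made-up illustrative numbers, not real market figures:

    # Toy comparison: extended warranty vs. self-insuring a single device.
    # All numbers below are illustrative assumptions.
    warranty_price = 150.0   # up-front cost of the extended warranty
    repair_cost = 400.0      # out-of-pocket cost if the device fails
    failure_prob = 0.08      # assumed chance of failure during the coverage period

    expected_cost_with_warranty = warranty_price
    expected_cost_self_insured = failure_prob * repair_cost

    print(f"Warranty:    expected cost = ${expected_cost_with_warranty:.2f}")
    print(f"Self-insure: expected cost = ${expected_cost_self_insured:.2f}")
    # With these numbers, self-insuring costs $32 in expectation vs. $150 for
    # the warranty; the warranty only pays off if you couldn't absorb the $400 hit.

The same comparison applies to any insurance whose worst case you could cover out of savings.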

This is a harshly rational view; that said, I certainly appreciate that some people get "peace of mind" from having insurance, which can have real value.

Comment author: Kaj_Sotala 29 July 2014 08:47:42AM *  8 points [-]

Though note that insurance may still be useful if you have self-control problems with regard to money. If you've paid your yearly insurance payment, the money is spent and will protect you for the rest of the year. If you instead put the money in a rainy day fund, there may be a constant temptation to dip into that fund even for things that aren't actual emergencies.

Of course, that money being permanently spent and not being available for other purposes does have its downsides, too.

In response to Jokes Thread
Comment author: RichardKennaway 24 July 2014 09:24:02AM 22 points [-]

"Yields a joke when preceded by its quotation" yields a joke when preceded by its quotation.

Comment author: Kaj_Sotala 25 July 2014 06:55:59AM 1 point [-]
Comment author: brazil84 22 July 2014 04:46:27PM *  1 point [-]

That was predicted decades ago, when telecommuting was hyped, and the opposite happened.

Yes, I agree with this. But a lot of trends stop and then reverse themselves.

ETA: Upon further reflection, my best guess is that this trend will continue, because people crave status; even in a society of plenty there is a limited amount of status; and it's high status to live in or near an important city.

Comment author: Kaj_Sotala 23 July 2014 07:05:43AM 1 point [-]

Also, people want to be near their friends, and the easiest way to be close to a lot of people is to live in a big city.

I would actually expect communications technologies to accelerate the urbanization process, since they make it easier to make geographically distant friends online, after which you become more likely to want to move to where those friends live.

Comment author: Kaj_Sotala 23 July 2014 06:52:26AM *  6 points [-]

The Ethereum pre-sale has begun.

Given that Ethereum is explicitly designed as a platform for distributed decentralized applications, it seems to me like it could be the next big cryptocurrency after Bitcoin. I'm not terribly confident in this assessment, however. Do people here have an opinion on how likely it is that it'd be the "next tech gold rush"?

Comment author: Kaj_Sotala 22 July 2014 10:33:44AM 3 points [-]

You ask:

Under the assumption that people are risk-neutral with respect to utils, what does it mean for an agent to rationally refuse an outcome where they expect to get more utils?

and then later on say:

Sir Percy knows that his expected utility is lower, but seems to have rationally decided that this is acceptable given his preferences about ambiguity.

But you don't seem to have actually answered your own question: how are you defining 'rationality' in this post? If Sir Percy knows that his expected utility is lower, then his actions clearly can't be VNM-rational, but you haven't offered an alternative definition that would let us verify that Sir Percy's decisions are, indeed, rational.
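(For what it's worth, one standard candidate for such an alternative definition is maximin expected utility over a set of priors. The Python sketch below uses entirely made-up payoffs and probability ranges, nothing taken from your post; it just shows how a rule of that kind can knowingly rank a lower-expected-utility option first.)

    # Toy contrast between plain expected utility and an ambiguity-averse
    # (maximin-over-priors) rule. Payoffs and probability ranges are made up.
    bets = {
        "risky":     {"win": 100.0, "lose": 0.0},   # odds known to be 50/50
        "ambiguous": {"win": 120.0, "lose": 0.0},   # odds only known to lie in a range
    }

    # Plain expected utility with a single best-guess win probability.
    best_guess = {"risky": 0.5, "ambiguous": 0.5}
    expected_utility = {
        name: best_guess[name] * b["win"] + (1 - best_guess[name]) * b["lose"]
        for name, b in bets.items()
    }

    # Maximin expected utility: score each bet by its worst case over all
    # win probabilities the agent considers plausible.
    plausible_win_probs = {"risky": [0.5], "ambiguous": [0.3, 0.5, 0.7]}
    maximin_utility = {
        name: min(p * bets[name]["win"] + (1 - p) * bets[name]["lose"]
                  for p in plausible_win_probs[name])
        for name in bets
    }

    print(expected_utility)  # {'risky': 50.0, 'ambiguous': 60.0} -> EU favours "ambiguous"
    print(maximin_utility)   # {'risky': 50.0, 'ambiguous': 36.0} -> maximin favours "risky"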

Comment author: V_V 14 July 2014 10:11:13AM *  2 points [-]

I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don't know about live events).
Specifically, my points of concern are:

  • The "rationalist" identity: It creates the illusion that by identifying as a "rationalist" and displaying the correct tribal insignia you are automatically more rational, or at least "less wrong" than the outsiders.

  • Rituals: Deliberately modelled after religious rituals, including "public confession" sessions, AFAIK similar to those performed by cults like the Church of Synanon.

  • MIRI: I agree with you that they probably exaggerate the AI risk, and I doubt they have the competence to do much about it anyway. For its first ten or so years, when manned primarily by Eliezer Yudkowsky, Anna Salamon, etc., the organization produced effectively zero valuable research output. In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.

  • CFAR: a self-help/personal-development program. Questionable, like all the self-help/personal-development programs in existence. If I understand correctly, CFAR is modelled after, or at least is similar to, Landmark, a controversial organization.

  • Pseudo-scientific beliefs and practices: cryonics (you are signed up, so you probably don't agree), paleo diets/ketogenic diets, armchair evopsych, and so on. It seems to me that as long as something is dressed in sufficiently "sciency" language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it. Yes, this kind of effect happens in all groups, but from a group of people with an average IQ of 140 who pride themselves on pursuing rationality I would have expected better.

Comment author: Kaj_Sotala 16 July 2014 08:32:17AM *  6 points [-]

In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.

http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :

We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.

http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/ :

We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]

Update 05/14/14: This paper has been accepted to AGI-14.

Comment author: Bugmaster 16 July 2014 08:05:16AM 1 point [-]

Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place. If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on a scale from 0 == "rock" to 1 == "human" -- then we could reasonably say that they do experience suffering to some extent.

As I said, I don't believe that the word "consciousness" has any useful meaning; but I am pretending that it does, for the purposes of this post.

Comment author: Kaj_Sotala 16 July 2014 08:27:34AM 3 points [-]

Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place.

Yeah. How is that circular reasoning? Seems straightforward to me: "computer-controlled opponents don't suffer from being shot -> shooting them is okay".

If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on a scale from 0 == "rock" to 1 == "human" -- then we could reasonably say that they do experience suffering to some extent.

If they are conscious to some extent, then we could reasonably say that they do experience something. Whether that something is suffering is another question. Given that "suffering" seems to be a reasonably complex process that can be disabled by the right brain injury or drug, and computer NPCs aren't anywhere near possessing similar cognitive functionality, I would say that shooting them still doesn't cause suffering even if they were conscious.
