Comment author: wedrifid 16 January 2012 06:06:00AM 11 points [-]

A board of twelve people is in charge of deciding which grant proposals to accept. Their foundation's stated goal is to maximize the average number of hedons among the human race.

So they need to work out wtf a hedon is, put as many as possible in one person, then kill all other humans?
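The arithmetic behind the joke, with purely made-up numbers: because the foundation's target is the average, shrinking the population does far more for the metric than making everyone better off, so long as the hedons can be concentrated.

```python
def average_hedons(hedons_per_person):
    """Average hedons across whoever is still alive."""
    return sum(hedons_per_person) / len(hedons_per_person)

# Toy population: 1,000 people at 10 hedons each -> average of 10.
everyone_alive = [10] * 1_000

# The same total of 10,000 hedons crammed into a single survivor -> average of 10,000.
one_survivor = [10 * 1_000]

print(average_hedons(everyone_alive))  # 10.0
print(average_hedons(one_survivor))    # 10000.0
```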

Comment author: Ben_Welchner 16 January 2012 06:43:43AM *  0 points [-]

That's one hell of a grant proposal/foundation.

Comment author: Bugmaster 10 January 2012 02:33:27AM 1 point [-]

Well, I personally am one of those people who thinks that cryonics is currently not worth worrying about, and that the Singularity is unlikely to happen anytime soon (in astronomical terms). So, there exists at least one outlier in the Less Wrong hive mind...

Comment author: Ben_Welchner 10 January 2012 05:05:33AM *  3 points [-]

Judging by the recent survey, your cryonics beliefs are pretty normal, with 53% considering it, 36% rejecting it, and only 4% having signed up. LW isn't a very hive-mindey community, unless you count atheism.

(On the Singularity, yes, you're very much in the minority, with even the most skeptical quartile expecting it in 2150.)

Comment author: Ben_Welchner 29 December 2011 07:47:35PM 0 points [-]

In other words, why didn't the story mention its (wealthy, permissive, libertarian) society having other arrangements in such a contentious matter - including, with statistical near-certainty, one of the half-dozen characters on the bridge of the Impossible Possible World?

It was such a contentious issue centuries ago (if I'm reading properly), when the ancients were still numerous enough to hold a lot of political power and the culture was different enough that Akon can't even wrap his head around the question. That's plenty of time for cultural drift to pull everyone together, especially if libertarianism remains widespread as the world gets more and more upbeat, and if anti-rapers are enough a part of the mainstream culture to "statistically-near-certainly" have a seat on the Impossible Possible World.

It's not framed as an irreconcilable ideological difference (to the extent those exist at all in the setting). The ancients were against it because they remembered it as something basically objectively horrible, and that opposition became more and more outdated as the world became nicer.

Comment author: [deleted] 13 December 2011 07:38:36PM *  1 point [-]

As I have been watching the videos, I noticed that chapter 13, video 6 on your list there links to video 7 on the AI class's website. Your video 7 link is to the YouTube version of the same video.

Fixed the link. Thanks for pointing out the error.

Thanks for writing this up; it is nice to have these sorts of things broken down into bite-sized pieces that I can enjoy in between lulls in my day without a lot of backtracking to figure out where I left off.

Glad to hear this!

Comment author: Ben_Welchner 14 December 2011 02:37:13AM 1 point [-]

On a similar note, what should be 13.9's solution links to 13.8's solution.

I'm also finding this really interesting and approachable. Thanks very much.

Comment author: Normal_Anomaly 27 November 2011 11:08:44PM 1 point [-]

Is the bit about Republican presidents intended to stand in for humanity's CEV's utility function, or is it just a distracting bit of politics?

Comment author: Ben_Welchner 29 November 2011 02:42:53AM *  2 points [-]

I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it's a lighthearted reference to that, but I can't turn it up by searching. I'm not even sure if it came before this comment.

(Richard_Hollerith2 hasn't commented for over 2.5 years, so you're not likely to get a response from him.)

Comment author: Ben_Welchner 23 November 2011 12:53:18AM 10 points [-]

Take for example an agent that is facing the Prisoner’s dilemma. Such an agent might originally tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?

The agent's goals aren't changing due to increased rationality, but just because the agent confused itself. Even if this is a payment-in-utilons and no-secondary-consequences Dilemma, it can still be rational to cooperate if you expect the other agent will be spending the utilons in much the same way. If this is a more down-to-earth Prisoner's Dilemma, shooting for cooperate/cooperate to avoid dicking over the other agent is a perfectly rational solution that no amount of game theory can dissuade you from. Knowledge of game theory here can only change your mind if it shows you a better way to get what you already want, or if you confuse yourself reading it and think defecting is the 'rational' thing to do without entirely understanding why.
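A toy calculation, using the textbook payoff matrix and a made-up "care weight" (neither comes from the quoted post), makes the point concrete: once the other agent's payoff counts toward what you value, cooperating can simply be the move that maximizes it, and learning game theory gives you nothing to defect for.

```python
# payoff[(my_move, their_move)] = (my payoff, their payoff); standard PD numbers.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def effective_utility(my_move, their_move, care_weight):
    """My raw payoff plus however much I value the other agent's payoff."""
    mine, theirs = PAYOFF[(my_move, their_move)]
    return mine + care_weight * theirs

# care_weight = 0.0: purely selfish; care_weight = 1.0: their utilons count like mine.
for care_weight in (0.0, 1.0):
    for their_move in ("C", "D"):
        best = max(("C", "D"),
                   key=lambda my_move: effective_utility(my_move, their_move, care_weight))
        print(f"care_weight={care_weight}, they play {their_move}: best response is {best}")
# With care_weight=0.0 defection dominates; with care_weight=1.0 cooperation does.
```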

You describe a lot of goals as terminal that I would describe as instrumental, even in their limited context. While it's true that our ideals will be susceptible to culture up until (if ever) we can trace and order every evolutionary desire in an objective way, not many mathematicians would say "I want to determine whether a sufficiently-large randomized Conway board would converge to an all-off state so I will have determined whether a sufficiently-large randomized Conway board would converge to an all-off state". Perhaps they find it an interesting puzzle or want status from publishing it, but there's certainly a higher reason than 'because they feel it's the right thing to do'. No fundamental change in priorities need occur between feeding one's tribe and solving abstract mathematical problems.
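To make the mathematician's example concrete, here is a rough sketch of what actually checking one such board might look like; the board size, fill rate, step limit, and wrap-around edges are arbitrary stand-ins for the "sufficiently-large" case, not anything implied by the post.

```python
import random

def step(grid):
    """One Game of Life step on a toroidal (wrap-around) grid of 0s and 1s."""
    size = len(grid)
    nxt = [[0] * size for _ in range(size)]
    for r in range(size):
        for c in range(size):
            neighbours = sum(
                grid[(r + dr) % size][(c + dc) % size]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            nxt[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return nxt

def dies_out(size=20, fill=0.5, max_steps=1000, seed=0):
    """Does one randomized board reach the all-off state within max_steps?"""
    random.seed(seed)
    grid = [[1 if random.random() < fill else 0 for _ in range(size)]
            for _ in range(size)]
    for _ in range(max_steps):
        if not any(any(row) for row in grid):
            return True
        grid = step(grid)
    return False

print(dies_out())
```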

I won't extrapolate my arguments farther than this, since I really don't have the philosophical background it needs.
