lukeprog comments on Abandoning Cached Selves to Re-Write My Source Code Partially, I've Become Unstable - Less Wrong

6 Post author: diegocaleiro 10 October 2012 05:47PM


Comment author: lukeprog 10 October 2012 05:56:35PM 15 points [-]

I am reminded of Eliezer's refrain that "Consequentialism is what's correct; virtue ethics is what works for humans."

As you've learned, it's important not to overestimate your own level of agenty-ness, even if you're trying to become more agenty.

I've greatly benefited from purchasing fuzzies and utilons separately — not just in charity but in all of life.

That's all for now. Best of luck, friend.

Comment author: Eliezer_Yudkowsky 11 October 2012 11:35:55PM 3 points [-]

To be precise, "Good people are consequentialists, but virtue ethics is what works."

Comment author: Mestroyer 12 October 2012 11:13:37AM 5 points [-]

So if you can choose to change yourself between being a consequentialist and a virtue ethicist, a consequentialist will abandon their goodness for what works and become more of a virtue ethicist, but a virtue ethicist will pursue goodness and become more of a consequentialist?

If we thought of this as a chemical reaction, would it have the same equilibrium constant for everybody? What could change it? Maybe an inaccurate self-image: if you thought you were more consequentialist than you actually were, you would be less inclined to actually become more consequentialist, and would stay more virtue ethicist?
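The equilibrium analogy above can be made concrete with a toy two-state model: people switch between "consequentialist" and "virtue ethicist" at some per-step rates, and the mix settles at a fixed point determined by the ratio of those rates. This is only an illustrative sketch of the analogy — the function names and all rate values are made up for demonstration, not claims about real populations.

```python
# Toy sketch of the "chemical reaction" analogy: two states,
# C (consequentialist) and V (virtue ethicist), with per-step
# switching probabilities in each direction. All numbers are
# hypothetical, chosen only to illustrate the equilibrium idea.

def equilibrium(p_c_to_v: float, p_v_to_c: float) -> float:
    """Stationary fraction of consequentialists for a two-state chain.

    At equilibrium, flow C->V balances flow V->C:
        f * p_c_to_v == (1 - f) * p_v_to_c
    which solves to the ratio below (the "equilibrium constant").
    """
    return p_v_to_c / (p_c_to_v + p_v_to_c)

def simulate(p_c_to_v: float, p_v_to_c: float,
             frac_c: float = 1.0, steps: int = 1000) -> float:
    """Mean-field update: start all-consequentialist, iterate the flows."""
    for _ in range(steps):
        frac_c += (1 - frac_c) * p_v_to_c - frac_c * p_c_to_v
    return frac_c

# With hypothetical rates 0.2 (C->V) and 0.1 (V->C), the population
# converges to the same one-third consequentialist fraction regardless
# of where it starts -- the equilibrium depends only on the rate ratio.
print(equilibrium(0.2, 0.1))
print(simulate(0.2, 0.1))
```

An "inaccurate self-image" in this sketch would correspond to the switching rates depending on the perceived rather than actual state, which would shift the fixed point away from the simple ratio.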

Comment author: diegocaleiro 14 October 2012 12:05:35AM 1 point [-]

The intertwining of awesomeness and practical absurdity of this comment amuses me immensely.

Comment author: diegocaleiro 22 February 2013 04:10:27AM 1 point [-]

To this day, it still does.

Comment author: diegocaleiro 11 October 2012 01:31:37AM 2 points [-]

Good points. Classic, and good, posts. I'd like to comment on one of them: "but imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness."
I fear that in this phrase lies one of the big issues I have with the rationalist people I've met thus far. Why would there be "one" agent, with "its" desires, to be fulfilled? Agents are composed of different time-spans. Some time-spans do not desire to diet; others do (all above some amount of time). Who is to say that the "agent" is the set that would be benefited by those acts, not the set that would be harmed by them?
My view is that piconomics is just half the story.
In this video, I talk about piconomics from 7:00 to 13:20. I'd suggest taking a look at what I say at 13:20-18:00 and 20:35-23:55: a pyramidal structure of selves, or agents. http://www.youtube.com/watch?v=3RQC7jAWl_o