Comment author: Mercurial 09 November 2011 01:28:55AM 1 point [-]

I've just ordered a copy. It keeps coming up as a useful reference, even if it might not be exactly what I'm looking for. Thanks for bringing it up!

Comment author: D227 10 November 2011 02:52:28PM 0 points [-]

Having read Influence, The Prince, and The 48 Laws of Power, I found Cialdini's book the most satisfying to read because it is filled with empirical research. The other two books were no doubt excellent reads, but anecdotal. Also, Influence is presented in the least "dark arts" way of the three. The book is about learning to stay ahead of influence just as much as it is about influencing.

Comment author: Matt_Simpson 10 November 2011 02:14:53AM *  4 points [-]

There are two definitions of rationality to keep in mind: epistemic rationality and instrumental rationality. An agent is epistemically rational to the extent that it updates its beliefs about the world based on the evidence and in accordance with probability theory, notably Bayes' rule.
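To make the epistemic side concrete, here is a minimal sketch of a Bayes' rule update in Python. The function name and the coin-bias numbers are my own hypothetical illustration, not anything from the comment above:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical example: an agent updating its belief that a coin is
# biased toward heads after observing a single heads flip.

def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Return the posterior probability of hypothesis H after evidence E."""
    # Total probability of the evidence under both hypotheses.
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

posterior = bayes_update(
    prior_h=0.5,                   # prior: 50% chance the coin is biased
    likelihood_e_given_h=0.9,      # P(heads | biased)
    likelihood_e_given_not_h=0.5,  # P(heads | fair)
)
print(round(posterior, 4))  # 0.6429
```

Seeing heads shifts the agent's credence from 0.5 to about 0.64; an epistemically rational agent makes exactly this update, no more and no less.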

On the other hand, an agent is instrumentally rational to the extent it maximizes its utility function (i.e., satisfies its preferences).

There is no such thing as "rational preferences," though much ink has been spilled trying to argue for them. Clearly preferences can't be rational in an epistemic sense because, well, preferences aren't beliefs. But can preferences be rational in the instrumental sense? Actually, yes, but only in the sense that having a certain set of preferences may maximize the preferences you actually care about, not in the sense of some sort of categorical imperative. Suppose a rational agent has the ability to modify its own utility function (i.e., its preferences), say an AI that can rewrite its own source code. Would it do it? Only if doing so maximizes that agent's utility function. In other words, a rational agent will change its utility function if and only if the change maximizes expected utility according to that same utility function, which is unlikely to happen under most normal circumstances.
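The self-modification rule can be sketched as a toy decision procedure. This is my own hypothetical illustration; `should_self_modify` and the outcome dictionaries are invented names, and the example assumes the agent can predict what outcomes each version of itself would produce:

```python
# Toy sketch: an agent accepts a new utility function only if the
# outcomes the modified agent would produce score at least as high
# under the agent's CURRENT utility function.

def should_self_modify(current_utility, outcomes_if_unchanged,
                       outcomes_if_modified):
    """Accept the modification iff the modified agent's predicted
    outcomes beat the unmodified agent's, as judged by the agent's
    current (pre-modification) utility function."""
    u_keep = sum(current_utility(o) for o in outcomes_if_unchanged)
    u_switch = sum(current_utility(o) for o in outcomes_if_modified)
    return u_switch > u_keep

# Bob currently values puppies. A modified, purely self-preserving Bob
# would steer the world toward self-preservation and away from puppies.
current = lambda outcome: outcome.get("puppies", 0)

decision = should_self_modify(
    current,
    outcomes_if_unchanged=[{"puppies": 10}],
    outcomes_if_modified=[{"self_preservation": 10, "puppies": 0}],
)
print(decision)  # False: the switch scores lower under Bob's current values
```

The candidate utility function never even appears in the evaluation; it only matters through the outcomes it would produce, which is exactly why a rational agent almost never has reason to rewrite its own values.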

As for Bob, presumably he's a human. Humans aren't rational, so all bets are off as far as what I said above. However, let's assume that, at least with respect to utility-function-changing behavior, Bob is rational. Will he change his utility function? Again, only if he expects it to better help him maximize that same utility function. Now then, what do we make of him editing out his alcoholism? Isn't that a case of editing his utility function? Actually, it isn't; it's more of a constraint of the hardware that Bob is running on. There are lots of programs running inside Bob's head (and yours), but only a subset of them are Bob. The difficult part is figuring out which parts of Bob's head are Bob and which aren't.

Comment author: D227 10 November 2011 07:03:29AM 0 points [-]

Thank you for your response. I believe I understand you correctly; I made a response to Manfred's comment in which I reference your response as well. Do you believe I interpreted you correctly?

An agent that has an empathetic utility function will edit its own code if and only if doing so maximizes the expected utility of that same empathetic utility function. Do I get your drift?

Comment author: Manfred 10 November 2011 01:14:08AM *  19 points [-]

Of course Bob becomes a monster superintelligence hell bent on using all the energy in the universe for his own selfish reasons. I mean, duh! It's just that "his own selfish reasons" involves things like cute puppies. If Bob cares about cute puppies, then Bob will use his monstrous intelligence to bend the energy of the universe towards cute puppies. And love and flowers and sunrises and babies and cake.

And killing the unbelievers if he's a certain sort - I don't want to make this sound too great. But power doesn't corrupt. Corruption corrupts. Power just lets you do what you want, and people don't want merely "to stay alive." People want friends and cookies and swimming with dolphins and ice skating and sometimes killing the unbelievers.

Comment author: D227 10 November 2011 06:52:37AM 1 point [-]

If Bob cares about cute puppies, then Bob will use his monstrous intelligence to bend the energy of the universe towards cute puppies. And love and flowers and sunrises and babies and cake.

I follow you. It does resolve my question of whether rationality plus power necessarily involves terrible outcomes. I had asked whether a perfect rationalist, given enough time and resources, would become perfectly selfish. I believe I understand the answer as no.

Matt_Simpson gave a similar answer:

Suppose a rational agent has the ability to modify their own utility function (i.e. preferences) - maybe an AI that can rewrite its own source code. Would it do it? Well, only if it maximizes that agent's utility function. In other words, a rational agent will change its utility function if and only if it maximizes expected utility according to that same utility function

If Bob's utility function is puppies, babies, and cakes, then he would not change his utility function for a universe without these things. Do I have the right idea now?

A question on rationality.

1 D227 10 November 2011 12:20AM

My long runs on Saturdays give me time to ponder the various material at lesswrong. Recently my attention has been kept busy by a question about rationality that I have not yet resolved and would like to present to lesswrong as a discussion. I will try to be as succinct as possible; please correct me if I make any logical fallacies.


Instrumental rationality is defined as the art of choosing actions that steer the future toward outcomes ranked higher in your preferences/values/goals (PVGs).

Here are my questions:

1. If rationality is the function of achieving our preferences/values/goals, what is the function of choosing our PVGs to begin with, if we could choose our preferences? In other words, is there an "inherent rationality" absent preferences or values? It seems as if the definition of instrumental rationality is saying that if you have a PVG, there is a rational way to achieve it, but there are not necessarily rational PVGs.


2. If the answer is no, there is no "inherent rationality" absent a PVG, then what would preclude the possibility that a perfect rationalist, given enough time and resources, will eventually become a perfectly self-interested entity with only one overall goal, which is to perpetuate his existence at the sacrifice of everything and everyone else?

Suppose a superintelligence visits Bob and grants him the power to edit his own code.  Bob can now edit or choose his own preferences/values/goals.  

Bob is a perfect rationalist.

Bob is genetically predisposed to abuse alcohol, as such he rationally did everything he could to keep alcohol off his mind.  

Now, Bob no longer has to do this; he simply goes into his own code and deletes the code/PVG/meme for alcohol abuse.

Bob continues to cull his code of "inefficient" PVGs.   

Soon Bob only has one goal, the most important goal: self-preservation.

3. Is it rational for Bob, having these powers, to rid himself of humanity and rewrite his code to support only one meme, the meme of ensuring his existence? Everything he does will go to support this meme. He will drop all his relationships, his hobbies, and all his wants and desires to concentrate on a single objective. How does Bob not become a monster superintelligence hell-bent on using all the energy in the universe for his own selfish reasons?

 

I have not resolved any of these questions yet, and look forward to any responses I may receive. I am very perplexed by Bob's situation. If there are some sequences that would help me better understand my questions, please suggest them.

Comment author: Logos01 09 November 2011 01:19:45AM 0 points [-]

As I noted, I regularly go through "off-periods" -- two to three months once every six months or so -- as a general precaution against potential liver damage, as well as a test against dependency.

Right now my dosing schedule is the recommended daily dose once a day for a three- or four-day period and then "off" for the other four or three days, unless other circumstances require me to skip sleep cycles (the qualifying condition in my case being sleep-shift disorder, as I work 12-hour overnight shifts).

Comment author: D227 09 November 2011 06:00:51AM 0 points [-]

Where would one go to read more about modafinil?

I have read Wikipedia and Erowid.

If you were to assign a percentage to how much all-around "better" you feel when you are on it, what would it be? For example, 10% better than off? 20%? 30%?

Comment author: D227 20 July 2011 05:47:19PM *  5 points [-]

I'm a 28-yo male in the SF area previously from NYC.

This site is intimidating, and I think there are many more just like me who are intimidated to introduce themselves because they might not feel they are as articulate or smart as some of the people on this forum. There are some posts so well written that I couldn't write them in 100 years. There is so much information that it seems overwhelming. I want to stop lurking and invite others to join too. I'm not a scientist and I didn't study AI in college; I just want to meet good people, and so do you, so come out and say hello.

My fascination with rationality probably started with ideas of fairness. I was the guy who turned the hourglass sideways to stop the time if an argument broke out between teams while playing Charades, so that when it was resolved, the actor would be allotted their fair time back. Not being fair bothered me a lot, because it didn't seem rational.

What also helped push me along my path toward rationality is my interest in biases. After learning about biases in college, I thought they had absolutely profound consequences; I was made aware of my own biases and thought it was the greatest thing in the world: to become more self-aware, to know oneself better, is awesome... And with my newfound knowledge, I was quickly disappointed with people. I do not let it bother me as much as before, but occasionally, whenever someone thinks they experience more utility with expensive vodka because of the quality and not at all the price, I die a little inside.

Starting around the time I graduated university (it's hard to pinpoint an exact date or time frame), I shed religion and gradually started reading more about humanism and skepticism. It was nothing too deep, but enough for me to have a clear foundation for what I believed. I owe this all to the internet; it led me to watching atheist videos, TED, being exposed to skepticism and the debunking of myths, Reddit, and finally Lesswrong.
