Comment author: koning_robot 30 August 2013 06:07:51PM -1 points [-]

Whoops, thread necromancy.

Comment author: roll 08 June 2012 09:12:03AM 0 points [-]

What does Solomonoff Induction actually say?

I believe this one was closed ages ago by Alan Turing, and practically demonstrated for approximations by the investigation into the busy beaver function, for example. We won't be able to know BB(10) from God almighty. Ever.
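
For readers who don't know the reference: BB is the busy beaver function, and its uncomputability is a corollary of Turing's halting theorem. A sketch of the standard argument (my gloss, not part of the comment):

    \[
      BB(n) = \max \{\, \mathrm{steps}(M) : M \text{ is an } n\text{-state
      Turing machine that halts on blank input} \,\}
    \]

If BB(n) were computable, we could decide whether any n-state machine halts by running it for BB(n) steps: a machine still running after that many steps will never halt. That would solve the halting problem, which Turing proved impossible.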

Comment author: koning_robot 30 August 2013 05:56:27PM 0 points [-]

I'm not sure what you're trying to say here, but if you consider this a relative weakness of Solomonoff Induction, then I think you're looking at it the wrong way. We will know it as well as we possibly could given the evidence available. Humans are subject to the constraints that Solomonoff Induction is subject to, and more.

Comment author: Dr_Manhattan 29 June 2013 04:49:17PM *  2 points [-]

P/S/A: If you are smart and underemployed, you can very quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials.

Here is the above-mentioned page of Python code (IMO):

http://learnxinyminutes.com/docs/python/

Also, you can build confidence and, to some (increasing) degree, credibility by taking online courses at Udacity, Coursera, and edX.

Feel free to PM me if you want more specific info, I sort of fell into knowing a bit about this. I also do a lot of interviewing at work.

Comment author: koning_robot 29 June 2013 09:29:45PM 2 points [-]

Hrrm. I don't think it's that simple. Looking at that page, I imagine nonprogrammers wonder:

  • What are comments?
  • What are strings?
  • What is this "#=>" stuff?
  • "primitives"?
  • ...

This seems to be written for people who are already familiar with some other language. Better to show a couple of examples so that they recognize patterns and become curious.
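
For instance, the kind of minimal, self-explaining examples the commenter seems to have in mind might look like this (a sketch, not taken from the linked page):

    # This is a comment: Python ignores everything after the "#" sign.
    greeting = "Hello!"    # "Hello!" is a string, i.e., a piece of text in quotes
    print(greeting)        # shows: Hello!

    apples = 3             # numbers need no quotes
    print(apples + 4)      # shows: 7

A beginner can run these two snippets, see the output, and start guessing at the pattern, which is the curiosity hook being suggested.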

Comment author: Ghatanathoah 28 July 2012 07:16:36AM *  1 point [-]

The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.

Even if that is the case I think that that strawman is commonly accepted enough that it needs to be taken down.

Given any world with positive utility A, there exists *at least one* other world B with more people and less average utility per person which your utility system will judge to be better, i.e.: U(B) > U(A).

I believe that creating a life worth living and enhancing the lives of existing people are both contributory values that form Overall Value. Furthermore, these values have diminishing returns relative to each other, so in a world with a low population creating new people is more valuable, but in a world with a high population improving the lives of existing people is of more value.

Then I shut up and multiply and get the conclusion that the optimal society is one that has a moderately sized population and a high average quality of life. For every world with a large population leading lives barely worth living there exists another, better world with a lower population and higher quality of life.

Now, there may be some "barely worth living" societies so huge that their contribution to overall value is larger than a much smaller society with a higher standard of living, even considering diminishing returns. However, that "barely worth living" society would in turn be much worse than a society with a somewhat smaller population and a higher standard of living. For instance, a planet full of lives barely worth living might be better than an island full of very high quality lives. However, it would be much worse than a planet with a somewhat smaller population, but a higher quality of life.
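
To make the "shut up and multiply" step concrete, here is a minimal numeric sketch in Python. The fixed resource budget and the logarithmic shape of the diminishing returns are illustrative assumptions of mine, not part of the comment:

    import math

    RESOURCES = 1000.0  # assumed fixed budget, split evenly across the population

    def overall_value(population):
        # Two contributory values with diminishing returns relative to each
        # other, as described above: created lives and average quality of life.
        avg_quality = RESOURCES / population
        return math.log(1 + population) + math.log(1 + avg_quality)

    best = max(range(1, 10001), key=overall_value)
    print(best)  # ~32: a moderate population with a high average quality of life

Under these assumptions the optimum lands at a moderate population with high average quality, not at either extreme, which is the shape of the conclusion argued for here.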

Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A, nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered "better".

I'm not interested in maximizing total utility. I'm interested in maximizing overall value, of which total utility is only one part.

A life with utility positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice of life X or non-existence. Such a life, IMO, will be comfortably clear of the suicide threshold, and would, in my opinion, represent an improvement in the world.

To me it would, in many cases, be morally better to use the resources that would be used to create a "life that someone would choose to have" to instead improve the lives of existing people so that they are above that threshold. That would contribute more to overall value, and therefore make an even bigger improvement in the world.

Why wouldn't it? It is by definition, a life that someone would choose to have rather than not have! How could that not improve the world?

It's not that it wouldn't improve the world. It's that it would improve the world less than enhancing the utility of the people who already exist instead. You can criticize someone who is doing good if they are passing up opportunities to do even more good.

RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.

Not really. In "torture vs specks" your choice will have the same effect on total and average utility (they either both go down a little or both go down a lot). In the RC your choice will affect them differently (one goes up and the other goes down). Since total and average utility (or more precisely, creating new lives worth living and enhancing existing lives) both contribute to overall value, if you shut up and multiply you'll conclude that the best way to maximize overall value is to increase both of them, not maximize one at the expense of the other.
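
A toy calculation of that difference (all numbers invented for illustration):

    def total_and_average(utilities):
        total = sum(utilities)
        return total, total / len(utilities)

    # Torture vs. specks: relative to the status quo, both options lower
    # total AND average utility -- the two measures move together.
    status_quo = total_and_average([10.0] * 1000)            # (10000.0, 10.0)
    specks     = total_and_average([9.999] * 1000)           # (9999.0, 9.999)
    torture    = total_and_average([10.0] * 999 + [-50.0])   # (9940.0, 9.94)

    # Repugnant Conclusion: adding barely-worth-living lives raises the
    # total while lowering the average -- the measures move apart.
    world_a = total_and_average([10.0] * 100)                # (1000.0, 10.0)
    world_b = total_and_average([10.0] * 100 + [0.1] * 900)  # (1090.0, 1.09)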

Comment author: koning_robot 02 September 2012 12:12:03PM 0 points [-]

What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you're just making something up to rationalize your preconceptions.

Comment author: Viliam_Bur 29 August 2012 12:24:24PM *  7 points [-]

I was at the July rationality minicamp, and in addition to many "epiphanies", one idea that seems to work for me is this, very simplified -- forget the mysterious "willpower" and use self-reductionism: instead of speaking in far mode about what you should and want to do, observe in near mode the little (irrational) causes that really make you do things. Then design your environment to contain more of those causes which make you do the things you want to do. And then, if the theory is correct, you find yourself doing more of what you want to do, without having to suffer the internal conflict traditionally called "willpower".

Today it's almost one month since the minicamp, and here are the results so far. I list the areas where I wanted to improve myself and assign a score from 0 to 5, where 5 means "works like a miracle; awesome" and 0 means "no change at all". (I started to work on all these goals in parallel, which may be a good or a bad idea. The bad part is, there is probably no chance of succeeding in all of them at once. The good part is, if there is success in any part, then there is a success.)

  • (5) avoiding sugar and soda
  • (4) sleeping regularly, avoiding sleep deprivation
  • (2) spending less time procrastinating online
  • (2) exercising regularly
  • (2) going to sleep early, waking up early
  • (1) following my long-term plans
  • (1) spending more time with friends
  • (1) being organized, planning, self-reflecting
  • (0) writing on blog, improving web page
  • (0) learning a new language
  • (0) being more successful at work
  • (0) improving social skills and expanding comfort zone
  • (0) spending more time outside

So far it seems like a benefit, although of course I would be happier with greater/faster improvement. The mere fact that I'm measuring (not very exactly) my progress is surprising enough. I'm curious about the long-term trends: will those changes gradually increase (as parts of my life get fixed and turn into habit) or decrease (as happened with my previous attempts at self-improvement)? Expect a more detailed report at the end of December 2012.

How exactly did I achieve this? (Note: this is strongly tailored to my personality; it may not work for other people.) Gamification -- I have designed a set of rules that allow me to earn "points" during the day. There are points e.g. for: having enough sleep, having an afternoon nap, meeting a friend, exercising (a specific amount), publishing a blog article, spending a day without consuming sugar, spending a day without browsing the web, etc. Most of these rules allow only one point of a given type per day, to avoid failure modes like "I don't feel like getting this point now, but I can get two of these points tomorrow". So each day I collect my earned points, literally small squares of colored paper (this makes them feel more real), and glue them onto a paper form, which is always on my desk and gives me quick visual feedback on how "good" the previous days were. It's like a computer game (exact rules, quick visual feedback), which is exactly why I like it.
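
As a rough sketch, the one-point-per-type-per-day rule could look like this in code (the point types are taken from the comment; the rest of the design is invented for illustration):

    from collections import defaultdict
    from datetime import date

    # Point types from the comment above.
    POINT_TYPES = {"enough_sleep", "afternoon_nap", "met_friend", "exercised",
                   "published_blog_article", "sugar_free_day", "web_free_day"}

    points = defaultdict(set)  # maps a date to the set of points earned that day

    def earn(point_type, day=None):
        """Record a point; returns False if that type was already earned today."""
        day = day or date.today()
        if point_type not in POINT_TYPES:
            raise ValueError("unknown point type: " + point_type)
        if point_type in points[day]:
            return False  # the one-per-day cap blocks "two points tomorrow"
        points[day].add(point_type)
        return True

    def day_score(day):
        return len(points[day])  # quick feedback, like the glued paper squares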

This specific form of gamification was not literally taught at Minicamp, and I had been considering something similar for years. Yet I never did it, mostly because I was stopped by my monkey-tribe-belonging instincts. Doing something that other people don't do is weird. I tried to convince some friends to join me in doing this, but all my attempts failed; now I guess it's because admitting that you need some kind of help is low status, while speaking about willpower in far mode is high status. Being with people at Minicamp messed with my tribe instincts; meeting a community of people with a social norm of doing "weird" things reduced my elephant's opposition to doing a weird thing. Sigh, I'm just a monkey, and I'm scared of doing things that other monkeys never do, even if it means being rational or winning.

Comment author: koning_robot 31 August 2012 06:21:09PM 1 point [-]

Hm, I've been trying to get rid of one particular habit (drinking while sitting at my computer) for a long time. Recently I've considered the possibility of giving myself a reward every time I go to the kitchen to get a beer and come back with something else instead. The problem was that I couldn't think of a suitable reward (there's not much that I like). I hadn't thought of just making something up, like pieces of paper. Thanks for the inspiration!

Comment author: Vladimir_Nesov 24 August 2012 09:00:19PM *  2 points [-]

the question here is exactly whether this fear of death that we all share is one of those emotions that we should value

Do you have specific ideas useful for resolving this question?

or if it is getting in the way of our rationality

It's usually best to avoid using the word "rationality" in such contexts. The question is whether one should accept the straightforward interpretation of the emotions of fear of death, and at that point nothing more is added to the problem specification by saying things like "Which answer to this question is true?", "Which belief about the answer to this question would be rational?", or "Which belief about this question is desirable?".

See What Do We Mean By "Rationality"?, Avoid inflationary use of terms.

Comment author: koning_robot 28 August 2012 08:56:32PM -1 points [-]

Do you have specific ideas useful for resolving this question?

Fear of death doesn't mean death is bad, just as fear of black people doesn't mean black people are bad. (Please forgive me the loaded example.)

Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of "me". We don't know what we mean by "me", or if we do then we don't know what's valuable about the existence of one "me" as opposed to another, and anyway evolution meant something different by "me" (genes rather than organisms).

It's usually best to avoid using the word "rationality" in such contexts.

I actually meant rationality here, specifically instrumental rationality, i.e., "is it getting in the way of us achieving our goals?".

I feel like this thread has gotten derailed and my original point lost, so let me contrive a thought experiment to hopefully be more clear.

Suppose that someone named Alice dies today, but at the moment she ceases to exist, Betty is born. Betty is a lot like Alice in that she has a similar personality, will grow up in a similar environment and will end up affecting the world in similar ways. What of fundamental value was lost when Alice died that Betty's birth did not replace? (The grief for Alice's death and the joy for Betty's birth have instrumental value, as did Alice's acquired knowledge.)

If you find that I've set this up to fit my conclusions, then I don't think we disagree.

Comment author: [deleted] 25 August 2012 08:16:25AM 2 points [-]

Arguing we should seek pleasurable experiences is also an appeal to emotion.

Comment author: koning_robot 25 August 2012 09:38:09AM -1 points [-]

It's different. The fact that I feel bad when confronted with my own mortality doesn't mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.

Comment author: Vladimir_Nesov 24 August 2012 04:11:16PM *  2 points [-]

What is it that is lost when a person dies, that cannot be regained by creating a new one?

I'm uncertain about the value and fungibility of human life. Emotions clearly support non-fungibility, in particular concerning your own life, and that's a strong argument. On the other hand, my goals are sufficiently similar to everyone else's goals that the loss of my life wouldn't prevent my goals from controlling the world; it would be done through others. Only an existential disaster or severe value drift would prevent my goals from controlling the world.

(The negative response to your comment may be explained by the fact that you appear to be expressing confidence in the unusual solution (that the value of life is low) to this difficult question without giving an argument for that position. At best the points you've made are arguments supporting uncertainty about the position that the value of life is very high, not strong enough to support the claim that it's low. If your claim is that we shouldn't be that certain, you should clarify by stating that more explicitly. If your claim is that the value of life is low, the argument you are making should be stronger, or else there is no point in insisting on that claim, even if that happens to be your position, since absent argument it won't be successfully instilled in others.)

Comment author: koning_robot 24 August 2012 11:06:16PM -1 points [-]

Emotions clearly support non-fungibility, in particular concerning your own life, and it's a strong argument.

I (now) understand how the existence of certain emotions in certain situations can serve as an argument for or against some proposition, but I don't think the emotions in this case form that strong an argument. There's a clear motive. It was evolution, in the big blue room, with the reproductive organs. It cares about the survival of chunks of genetic information, not about the well-being of the gene expressions.

Thanks for helping me understand the negative response. My claim here is not about the value of life in general, but about the value of some particular "person" continuing to exist. I think the terminal value of this ceasing to exist is zero. Since posting my top-level comment I have provided some arguments in favor of my case, and also hopefully clarified my position.

Comment author: Vladimir_Nesov 24 August 2012 03:06:38PM *  1 point [-]

It is irrelevant what we desire or want, as is what we act for. The only thing that is relevant is that which we like.

Saying a word with emphasis doesn't clarify its meaning or motivate the relevance of what it's intended to refer to. There are many senses in which doing something may be motivated: there is wanting (System 1 urge to do something), planning (System 2 disposition to do something), liking (positive System 1 response to an event) and approving (System 2 evaluation of an event). It's not even clear what each of these means, and these distinctions don't automatically help with deciding what to actually do. To make matters even more complicated, there is also evolution with its own tendencies that don't quite match those of people it designed.

See Approving reinforces low-effort behaviors, The Blue-Minimizing Robot, Urges vs. Goals: The analogy to anticipation and belief.

Comment author: koning_robot 24 August 2012 09:28:39PM 1 point [-]

I accept this objection; I cannot describe in physical terms what "pleasure" refers to.

Comment author: Vladimir_Nesov 24 August 2012 02:52:23PM 1 point [-]

The emotions are irrational in the sense that they are not supported by anything - your brain generates these emotions in these situations and that's it.

Beliefs are also something your brain generates. Being represented in meat doesn't by itself make an event unimportant or irrelevant. You value carefully arrived-at beliefs, because you expect they are accurate, they reflect the world. Similarly, you may value some of your emotions, if you expect that they reward events that you approve of, or punish for events that you don't approve of.

See Feeling Rational, The Mystery of the Haunted Rationalist, Summary of "The Straw Vulcan".

Comment author: koning_robot 24 August 2012 08:34:06PM 0 points [-]

Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or if it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.
