
Comment author: Lumifer 24 August 2016 03:43:15PM 1 point [-]

If AI gets it wrong, then, well, that's it.

Not necessarily, depends on your AI and how god-like it is.

In the XIX century you could probably make the same argument about corporations: once one corporation rises above the rest, it will use its power to squash competition and install itself as the undisputed economic ruler forever and ever. The reality turned out to be rather different and not for the lack of trying.

Comment author: niceguyanon 24 August 2016 06:37:22PM 0 points [-]

Not necessarily, depends on your AI and how god-like it is.

I hope you're right. I just automatically think that AI is going to be god-like by default.

In the XIX century you could probably make the same argument about corporations

Not just corporations; you could make the same argument for sovereign states, foundations, trusts, militaries, and religious orgs.

A weak argument is that corporations, with their visions, charters, and mission statements, are ultimately run by a meatbag, or jointly by meatbags, who die or retire; at least that's how it currently is. You can't retain humans forever. Corporations lose valuable and capable employee brains over time and replace them with new brains, which may be better or worse, but you certainly can't keep your best humans forever. Power is checked: Bill Gates plans his legacy, while Sumner Redstone is infirm with kids jockeying for power, and Steve Jobs is dead.

Comment author: WhySpace 23 August 2016 06:26:08PM *  2 points [-]

(1) Given: AI risk comes primarily from AI optimizing for things besides human values.

(2) Given: humans already are optimizing for things besides human values. (or, at least besides our Coherent Extrapolated Volition)

(3) Given: Our world is okay.^[CITATION NEEDED!]

(4) Therefore, imperfect value loading can still result in an okay outcome.

This is, of course, not necessarily always the case for any given imperfect value loading. However, our world serves as a single counterexample to the rule that all imperfect optimization will be disastrous.

(5) Given: A maxipok strategy is optimal. ("Maximize the probability of an okay outcome.")

(6) Given: Partial optimization for human values is easier than total optimization. (Where "partial optimization" is at least close enough to achieve an okay outcome.)

(7) ∴ MIRI should focus on imperfect value loading.

Note that I'm not convinced of several of the givens, so I'm not certain of the conclusion; the argument itself, however, looks convincing to me. I’ve also chosen to leave assumptions like “imperfect value loading results in partial optimization” unstated, as part of the definitions of those two terms. I’ll try to add details to any specific areas if questioned.

Comment author: niceguyanon 24 August 2016 03:22:26PM 0 points [-]

Regarding (3): the world is OK with humans optimizing for the wrong things because humans eventually die and take their ideas, good or bad, with them. Power and wealth are redistributed. Humans get old, they get weak, they get dull, they lose interest. If AI gets it wrong, then, well, that's it.

Comment author: turchin 17 August 2016 12:14:06AM 0 points [-]

Elon Musk almost terminated our simulation.

A simulation is only a simulation if everybody is convinced that they are living real life. Bostrom argued that we most likely live in a simulation, but not many people know about it. Elon Musk tweeted that the odds are 1,000,000 to 1 that we live in a simulation. Now everybody knows. I think there was a 1 per cent chance that our simulation would be terminated after that. It has not happened this time, but there may be some other threshold after which it will be terminated, such as finding more proof that we are in a simulation, or the creation of an AI.

Comment author: niceguyanon 17 August 2016 04:45:37PM *  3 points [-]

A simulation is only a simulation if everybody is convinced that they are living real life.

Is this a widely held belief?

Comment author: Viliam 09 August 2016 09:34:35PM *  7 points [-]

I have repeatedly heard the argument about "calories in, calories out" (e.g. here). It seems to me that there are a few unspoken assumptions, and I would like to ask how true they are in reality. Here are the assumptions:

a) all calories in the food you put in your mouth are digested;

b) the digested calories are either stored as fat or spent as work; there is nothing else that could happen to them;

and in some more strawmanish forms of the argument:

c) the calories are the whole story about nutrition and metabolism, and all calories are fungible.

If we assume these things to be true, it seems like a law of physics that if you count the calories in the food you put in your mouth, and subtract the amount of exercise you do, the result exactly determines whether you gain or lose fat. Taken literally, if a healthy and thin person starts eating an extra apple a day, or starts taking a somewhat shorter walk to their work, without changing anything else, they will inevitably get fat. On the other hand, any fat person can become thin if they just start eating less and/or exercising more. If you doubt this, you doubt the very laws of physics.

It's easy to see how (c) is wrong: there are other important facts about food besides calories, for example vitamins and minerals. When a person's food contains a less-than-optimal amount of vitamins or minerals per calorie, they don't have a choice between being fat and thin, but between being fat and sick. (Or, alternatively, they can change the composition of their diet, not just the amount.)

Okay, some proponents of "calories in, calories out" may now say that this is obvious, and that they obviously meant the advice to apply to a healthy diet. However, what if the problem is not with the diet per se, but with the way the individual body processes the food? For example, what if the food contains enough vitamins and minerals per calorie, but the body somehow extracts those vitamins and minerals inefficiently, so it reacts even to the optimal diet as if it were junk food? Could it be that some people are forced to eat large amounts of food just to extract the right amount of vitamins and minerals, and any attempt to eat less will lead to symptoms of malnutrition?

Ignoring (c), we get a weaker variant of “calories in, calories out”, which is, approximately: maybe you cannot always get thin by eating fewer calories than you spend working, but if you eat more calories than you spend working, you will inevitably get fat.

But is it possible that some of the "calories in (the mouth)" pass through the digestive system undigested and are later excreted? Could people differ in this aspect, perhaps because of their gut flora?

Also, what if some people burn the stored fat in ways we would not intuitively recognize as work? For example, what if some people simply dress less warmly, and spend more calories heating up their bodies? Are there other such non-work ways of spending calories?

In other words, I don't doubt that the “calories in, calories out” model works perfectly for a spherical cow in a vacuum, but I am curious how well such an approximation applies to real cases.

But even for the spherical cow in a vacuum, this model predicts that any constant lifestyle, unless perfectly balanced, should lead either to unlimited weight gain (if “calories in” exceed “calories out”) or to unlimited weight loss (in the opposite case), while reality seems to suggest that most people, both thin and fat, keep their weight stable around some specific value. The weight itself has an impact on how many calories people spend simply moving their own bodies, but I doubt that this is sufficient to balance the whole equation.
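To put rough numbers on that prediction, here is a back-of-the-envelope sketch of the naive model; I am assuming roughly 80 kcal per apple and roughly 7700 kcal stored per kilogram of body fat, both common textbook approximations:

    # Naive "calories in, calories out" bookkeeping for a constant daily surplus.
    # Assumptions (approximate, not authoritative): ~80 kcal per apple,
    # ~7700 kcal stored per kg of body fat.
    surplus_per_day_kcal = 80          # one extra apple every day
    kcal_per_kg_fat = 7700
    fat_gain_kg_per_year = surplus_per_day_kcal * 365 / kcal_per_kg_fat
    print(fat_gain_kg_per_year)        # ~3.8 kg per year, so ~38 kg per decade

Taken literally, the model says a single extra apple a day turns a thin person into a heavily obese one within a decade, which is not what we observe.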

Comment author: niceguyanon 10 August 2016 03:39:27PM *  7 points [-]

I thought that to most LWers the weak version of "Calories in, Calories out" was uncontroversial. One can accept that Calories in (the mouth) is not the whole story, and at the same time feel that it is most of the story.

Comment author: Daniel_Burfoot 09 August 2016 12:59:29AM *  1 point [-]

I used to worry that dysgenics was leading us towards a world in which everyone was really dumb. That fear has been at least partially alleviated by new research showing that more educated people are having more kids. But now I worry that dysgenics is leading us towards a world in which everyone is really sick.

Historically, human reproduction used the following strategy: have 6 or 8 kids, and the healthiest 3 or 4 would make it to adulthood. Now couples have 2 or 3 kids, and they almost all make it to adulthood. But that implies that lots of marginally-healthy children are surviving, thanks to medical technology, and so the gene pool is getting less healthy.

Look around you and count the number of people who have some kind of debilitating allergy, chronic illness, or mental health condition. Does it seem scary to you? What if that percentage goes up dramatically in the future, while the conditions themselves also get worse?

Comment author: niceguyanon 09 August 2016 02:00:35PM *  2 points [-]

That fear has been at least partially alleviated by new research showing that more educated people are having more kids

Could you please post a link if available?

Comment author: turchin 08 August 2016 12:46:59PM 2 points [-]

Willingness to cooperate seems to be a signal of low status. E.g., a low-status author of an article may try to get a higher-status person as a coauthor, but a higher-status author would not try to get a low-status author as a coauthor. Higher-status people can also defect with less punishment, for example by not returning calls or not keeping promises. As a result, an open willingness to cooperate may be regarded as a signal of low status, and some people may deliberately not cooperate in order to demonstrate their higher status. Any thoughts?

Comment author: niceguyanon 08 August 2016 04:14:23PM 1 point [-]

In your example I would say mostly yes, it does signal lower status, but you should do it anyway.

Willingness to collaborate is not as pure a signal of status as, say, owning a winter home, because if I knew nothing about your willingness to collaborate I could still tell your status by examining your catalog of publications. Willingness to collaborate is an attempt to raise the lower status you already have to the level you would like to have. It's like attempting to win: attempting signals that you haven't won yet, but how do you win? You have to attempt to win.

I have recently been thinking about how incredibly useful networking is. I know successful people with both large and small social/professional networks, but if I look only at the people I know who have large social/professional networks, they are almost always employed and they win more.

Comment author: tut 28 July 2016 10:51:00AM *  1 point [-]

I think it is about the "don't break the streak" thing. Suppose that you decide to run every day, and you do it in the morning every day from Sunday to Thursday, then sleep in and don't have time for it on Friday. Now on Saturday you can either advance the day before your run and have a one-day streak, or you can run twice, once before and once after advancing the day, and have a seven-day streak.

Comment author: niceguyanon 08 August 2016 03:46:03PM 0 points [-]

This perfectly expresses my thoughts.

Comment author: SquirrelInHell 28 June 2016 02:18:55AM 0 points [-]

I think your app does this well.

How is it after a week? Do you still use it?

Comment author: niceguyanon 26 July 2016 02:50:50PM *  2 points [-]

Update:

Was I able to use the app successfully to increase my tasks by 50%? No. But I won't blame it on the app.

I found that I did not like manually clicking "next day". The temptation to delay clicking it and catch up the next day is strong; if it were automatic, I would have to live with the consequences of getting a bad score. Furthermore, if you accidentally click "next day" before updating other tasks, then too bad, you can't reverse it. So for testing I made a few tasks and advanced the date several days, but unless I reinstall the app, the date cannot be rolled back now that I want to stop testing and use it for real.

There is also no way to easily see your progress over the last few days. It would be nice to click on a task and see how you did recently, or, if I missed a few days, to see when I last did the task. Sure, there is an export button, but the exported data is hard to read if you just want to know quickly how you did recently.

Comment author: niceguyanon 18 July 2016 08:21:25PM 1 point [-]

http://lesswrong.com/r/discussion/lw/nqz/open_thread_jul_04_jul_10_2016/dcyy

This comment has a great link to a pretty big list of concepts.

Comment author: Viliam 17 July 2016 09:20:45PM *  0 points [-]

In moments when you are happy with your life, the desire to escape from reality drops dramatically. So I guess a high-level approach might be to optimize your life to be happier, even if it seems like it would reduce your productivity, because you may get the side effect of procrastinating less.

This article was about the low-level approach, for when your life already kinda sucks and you cannot fix it at the moment, and can only try to reduce the damage.

In your case, the obvious question seems to be: "Can you arrange things so you would tutor more, and do whatever the other thing is less? And perhaps hire a babysitter to help with the kid." Alternatively, could you somehow optimize your environment (or more precisely, your kid's environment) so that less of your attention is needed?

(My high-level approach is to change my frustrating job, and I am already doing some interviews. Also, some other frustrating things will go away in the future; unfortunately the changes take time that I can't speed up, so it's approximately two more months.)

Comment author: niceguyanon 18 July 2016 03:28:13PM 0 points [-]

In moments when you are happy with your life, the desire to escape from reality drops dramatically. So I guess a high-level approach might be to optimize your life to be happier, even if it seems like it would reduce your productivity, because you may get the side effect of procrastinating less.

I think most of my own procrastination has always had more to do with the desire to escape, and with feeling bad about the low expectancy/value of the work at hand, than with the procrastinating activity itself being highly entertaining.

Being productive can feel terrible because it is a constant reminder not just of what you want to achieve, but also of the fact that you have not achieved it. Depressing stuff, really.
