All of entirelyuseless's Comments + Replies

"But of course the claims are separate, and shouldn't influence each other."

No, they are not separate, and they should influence each other.

Suppose your terminal value is squaring the circle using Euclidean geometry. When you find out that this is impossible, you should stop trying. You should go and do something else. You should even stop wanting to square the circle with Euclidean geometry.

What is possible, directly influences what you ought to do, and what you ought to desire.

Nope. There is no composition fallacy where there is no composition. I am replying to your position, not to mine.

I do care about tomorrow, which is not the long run.

I don't think we should assume that AIs will have any goals at all, and I rather suspect they will not, in the same way that humans do not, only more so.

Not really. I don't care if that happens in the long run, and many people wouldn't.

1cousin_it
I hope at least you care if everyone on Earth dies painfully tomorrow. We don't have any theory that would stop AI from doing that, and any progress toward such a theory would be on topic for the contest. Sorry, I'm feeling a bit frustrated. It's as if the decade of LW never happened, and people snap back out of rationality once they go off the dose of Eliezer's writing. And the mode they snap back to is so painfully boring.

I considered submitting an entry basically saying this, but decided that it would be pointless since obviously it would not get any prize. Human beings do not have coherent goals even individually. Much less does humanity.

Right. Utilitarianism is false, but Eliezer was still right about torture and dust specks.

Can we agree that I am not trying to proselytize anyone?

No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

This is why you need to stop pointing to "Critical Rationalism" etc. as the road to truth.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

First, you... (read more)

0Fallibilist_duplicate0.16882559340231862
Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offense. Or because they are looking to gain status. That was the context.

The truth that curi and myself are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like Alpha Zero is progress, but it is not. AI research has bad epistemology at its heart, and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.

Curi, Deutsch, and myself know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving, in particular CR. You are blinded by things you think are so but that cannot be. The myth of Induction, for one. AI is blocked -- you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.

I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.

0Fallibilist_duplicate0.16882559340231862
Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment and I do not want people just to take my word for something. In particular, I think: (1) All claims to truth should be carefully scrutinised for error. (2) Claiming authority or pointing skyward to an authority is not a road to truth. These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent. I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."

There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.

1HungryHobo
Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab. Rather than being designed to do X with yeast, it's basically told "go look at yeast"; it then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already known genetic information and discovered new information about a number of genes.

http://www.dailygalaxy.com/my_weblog/2009/04/1st-artificially-intelligent-adam-and-eve-created.html
https://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help/

It's a remarkable system and could be extremely useful for scientists in many sectors, but it's a 1.1 on the 1-to-10 scale where 10 is a credible paperclipper or Culture-Mind style AI. This AI is not a pianist robot and doesn't play chess, but it has broad potential applications across many areas of science. It blows a hole in the side of the "Universal Knowledge Creator" idea, since it's a knowledge creator beyond most humans in a number of areas but is never going to be controlling a pianist robot or running a nail salon. The belief that there's some magical UKC line or category (which humans technically don't qualify for yet anyway) is based on literally nothing except feelings; there's not an ounce of logic or evidence behind it.
0Fallibilist_duplicate0.16882559340231862
We have given you criteria by which you can judge an AI: whether it is a UKC or not. As I explained in the OP, if something can create knowledge in some disparate domains then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly by Babbage a century before. You underestimate the difficulties facing AI. You do not appreciate how truly different people are to other animals and to things like Alpha Zero. EDIT: That was meant to be in reply to HungryHobo.

Do you not think the TCS parent hasn't also heard this scenario over and over? Do you think you're like the first one ever to have mentioned it?

Do you not think that I am aware that people who believe in extremist ideologies are capable of making excuses for not following the extreme consequences of their extremist ideologies?

But this is just the same as a religious person giving excuses for why the empirical consequences of his beliefs are the same whether his beliefs are true or false.

You have two options:

1) Embrace the extreme consequences of your ... (read more)

I suppose you're going to tell me that pushing or pulling my spouse out of the way of a car

Yes, it is.

Secondly, it is quite different from the stairway case, because your spouse would do the same thing on purpose if they saw the car, but the child will not move away when they see the stairs.

At that point I'll wonder what types of "force" you advocate using against children that you do not think should be used on adults.

Who said I advocate using force against children that we would not use against adults? We use force against adults, e.g. ... (read more)

I ignored you because your definition of force was wrong. That is not what the word means in English. If you pick someone up and take them away from a set of stairs, that is force if they were trying to move toward them, even if they would not like to fall down them.

0curi
I suppose you're going to tell me that pushing or pulling my spouse out of the way of a car that was going to hit them, without asking for consent first (don't have time), is using force against them, too, even though it's exactly what they want me to do. While still not explaining what you think "force" is, and not acknowledging that TCS's claims must be evaluated in its own terminology. At that point I'll wonder what types of "force" you advocate using against children that you do not think should be used on adults.

a baby gate

We were talking about force before, not violence. A baby gate is using force.

0curi
i literally already gave u a definition of force and suggested you had no idea what i was talking about. you ignored me. this is 100% your fault and you still haven't even tried to say what you think "force" is.

Children don't want to fall down stairs.

They do, however, want to move in the direction of the stairs, and you cannot "help them not fall down stairs" without forcing them not to move in the direction of the stairs.

1Fallibilist_duplicate0.16882559340231862
You are trying to reject a philosophy based on edge cases without trying to understand the big problems the philosophy is trying to solve. Let's give some context to the stair-falling scenario.

Consider that the parent is a TCS parent, not a normie parent. This parent has in fact heard the stair-falling scenario many times. It is often the first thing other people bring up when TCS is discussed. Given that the TCS parent has in fact thought about stair falling way more than a normie parent, how do you think the TCS parent has set up their home? Is it going to be a home where young children are exposed to terrible injury from things they do not yet have knowledge about? Given also that the TCS parent will give lots of help to a child curious about stairs, how long before that child masters stairs? And given that the child is being given a lot of help in many other things as well, and not having their rationality thwarted, what do you think things are like in that home generally?

The typical answer will be that the child is "spoilt". The TCS parent will have heard the "spoilt" argument many times. They know the term "spoilt" is used to denigrate children and that the ideas underlying the idea of "spoilt" are nasty. So now that we have got "spoilt" out of the way, what do you think things are like?

Ok, you say, but what if the child is outside near the edge of a busy road or something and wants to run across it? Do you not think the TCS parent hasn't also heard this scenario over and over? Do you think you're like the first one ever to have mentioned it? The TCS parent is well aware of busy road scenarios. Instead of trying to catch TCS advocates out by bringing up something that has been repeatedly discussed, why don't you look at the core problems the philosophy speaks to and address those? Those problems need urgent attention.

EDIT: I should have said also that the stair-falling scenario and other similar scenarios are just excuses for people not to think about TCS. They don't
0curi
Of course you can help them, there are options other than violence. For example you can get a baby gate or a home without stairs. https://parent.guide/how-to-baby-proof-your-stairs/ Gates let them e.g. move around near the top of the stairs without risk of falling down. Desired, consensual gates, which the child deems helpful to the extent he has any opinion on the matter at all, aren't force. If the child specifically wants to play on/with the stairs, you can of course open the gate, put out a bunch of padding, and otherwise non-violently help him.

Saying it is "extremist" without giving arguments that can be criticised and then rejecting it would be rejecting rationality.

Nonsense. I say it is extremist because it is. The fact that I did not give arguments does not mean rejecting rationality. It simply means I am not interested in giving you arguments about it.

You don't just get to use Bayes' Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes

I certainly do. I said that induction is not impossible, and that inductive reasoning is Bayesian. If you think that Bayesian reasoning is also impossible, you are free to establish that. You have not done so.

Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so).

If this is possible, it would be equally possible to refute induction (if it were im... (read more)

not initiating force against children as most parents currently do

Exactly. This is an extremist ideology. To give several examples, parents should use force to prevent their children from falling down stairs, or from hurting themselves with knives.

I reject this extremist ideology, and that does not mean I reject rationality.

0curi
Children don't want to fall down stairs. You can help them not fall down stairs instead of trying to force them. It's unclear to me if you know what "force" means. Here's the dictionary: A standard classical liberal conception of force is: violence, threat of violence, and fraud. That's the kind of thing I'm talking about. E.g. physically dragging your child somewhere he doesn't want to go, in a way that you can only do because you're larger and stronger. Whereas if children were larger and stronger than their parents, the dragging would stop, but you can still easily imagine a parent helping his larger child with not accidentally falling down stairs.

I said the thinking process used to judge the epistemology of induction is Bayesian, and my link explains how it is. I did not say it is an exhaustive explanation of epistemology.

What is the thinking process you are using to judge the epistemology of induction?

The thinking process is Bayesian, and uses a prior. I have a discussion of it here
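To make the claim "inductive reasoning is Bayesian" concrete, here is a minimal sketch (a toy example of my own, not taken from the linked discussion): a prior over two hypotheses about swans is updated as white swans are observed, and the "all swans are white" hypothesis gains probability with each observation without ever reaching certainty.

```python
# Toy illustration of "inductive reasoning is Bayesian": a prior over two
# hypotheses about swans is updated as white swans are observed.
prior = {"all swans are white": 0.5, "90% of swans are white": 0.5}
p_white_given_h = {"all swans are white": 1.0, "90% of swans are white": 0.9}

posterior = dict(prior)
for _ in range(10):  # observe 10 white swans in a row
    # Bayes' theorem: P(H | data) is proportional to P(data | H) * P(H)
    unnormalized = {h: p_white_given_h[h] * posterior[h] for h in posterior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # "all swans are white" rises to about 0.74, never to certainty
```

Nothing in this sketch settles how the prior or the likelihoods are themselves justified, which is exactly the point of contention in this thread.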

If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? ... Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.

Little problem there.

0Fallibilist_duplicate0.16882559340231862
What is the epistemological framework you used to judge the correctness of those? You don't just get to use Bayes' Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes. Or the correctness of probability theory, your priors etc. No. Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so). This has been known for decades. Induction is not a complete epistemology like that. For one thing, inductivists also need the epistemology of deduction. But they also need an epistemological framework to judge both of those. This they cannot provide.
0curi
An epistemology is a philosophical framework which answers questions like what is a correct argument, how are ideas evaluated, and how does one learn. Your link doesn't provide one of those.

"[I]deas on this website" is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.

This does not make it reasonable to call contradicting those ideas "contradicting Less Wrong." In any case, I am quite aware of the things I disagree with Yudkowsky and others about. I do not have a problem with that. Unlike you, I am not a cult member.

Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationa

... (read more)
0Fallibilist_duplicate0.16882559340231862
It says many other things as well. Saying it is "extremist" without giving arguments that can be criticised and then rejecting it would be rejecting rationality. At present, there are no known good criticisms of TCS. If you can find some, you can reject TCS rationally. I expect that such criticisms would lead to improvement of TCS, however, rather than outright rejection. This would be similar to how CR has been improved over the years. Since there aren't any known good criticisms that would lead to rejection of TCS, it is irrational to reject it. Such an act of irrationality would have consequences, including treating your children irrationally, which approximately all parents do.
0curi
TCS applies CR to parenting/edu and also is consistent with (classical) liberal values like not initiating force against children as most parents currently do, and respecting their rights such as the rights to liberty and the pursuit of happiness. See http://fallibleideas.com/taking-children-seriously

You say that seemingly in ignorance that what I said contradicts Less Wrong.

First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.

Second, "contradicts Less Wrong" does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.

One of the things I said was Taking Ch

... (read more)
0Fallibilist_duplicate0.16882559340231862
What is the thinking process you are using to judge the epistemology of induction? Does that process involve induction? If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? And if not, judging the special case of the epistemology of induction is an exception. It is an example of thinking without induction. Why is this special case an exception? Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.
0Fallibilist_duplicate0.16882559340231862
No. From About Less Wrong: "[I]deas on this website" is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong. Taking AGI Seriously is therefore also an extremist ideology? Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality. You want to use irrationality against your children when it suits you. You become responsible for causing them massive harm. It is not extremist to try to be rational, always. It should be the norm.

"You need to understand this stuff." Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or yours and curi's) opinions are entirely false, like the idea that you have "disproved induction."

0Fallibilist_duplicate0.16882559340231862
You say that seemingly in ignorance that what I said contradicts Less Wrong. One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to? What do you know about TCS? TCS is very important not just for AGI but also for children in the here and now. Most people know next to nothing about it. You don't either. You in fact cannot comment on whether there is any truth to what I said about AGI. You don't know enough. And then you say you have no need to learn anything from curi. You're deceiving yourself. You still can't even state the position correctly. Popper explained why induction is impossible and offered an alternative: critical rationalism. He did not "disprove" induction. Similarly, he did not disprove fairies. Popper had a lot to say about the idea of proof - are you aware of any of it?

though no doubt there are people here who will say I am just a sock-puppet of curi’s.

And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi's. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or if you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.

What's so special about this? If you're wrong about religion you get to avoidably burn in hell too, in a more literal sense. That does not (and cannot) automatically change your mind about religion, or get you to invest years in the study of all possible religions, in case one of them happens to be true.

-1curi
I didn't say it was special, I said his answer ("nothing") is mistaken. The non-specialness actually makes his wrong answer more appalling.

As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.

I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a "definitely yes" or "definitely no" answer, but your comment exactly matches everything I would want to call having a cult leader.

"He is by far the best thinker I have ever encountered. "

That is either because you are curi, and incapable of noticing someone more intelligent than yourself, or because curi is your cult leader.

-1Fallibilist_duplicate0.16882559340231862
What if you are wrong? What then?

I haven't really finished thinking about this yet but it seems to me it might have important consequences. For example, the AI risk argument sometimes takes it for granted that an AI must have some goal, and then basically argues that maximizing a goal will cause problems (which it would, in general.) But using the above model suggests something different might happen, not only with humans but also with AIs. That is, at some point an AI will realize that if it expects to do A, it will do A, and if it expects to do B, it will do B. But it won't have any par... (read more)

The right answer is maybe they won't. The point is that it is not up to you to fix them. You have been acting like a Jehovah's Witness at the door, except substantially more bothersome. Stop.

And besides, you aren't right anyway.

I think we should use "agent" to mean "something that determines what it does by expecting that it will do that thing," rather than "something that aims at a goal." This explains why we don't have exact goals, but also why we "kind of" have goals: our actions look like they are directed at goals, which makes "I am seeking this goal" a good way to figure out what we are going to do, that is, a good way to determine what to expect ourselves to do, and that expectation in turn makes us do it.
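As a rough sketch of the distinction (a toy construction of mine, not anything from the comment): a goal-agent picks the action that maximizes some utility, while an expectation-agent simply does whatever it currently expects itself to do, and only looks goal-directed when "I am seeking this goal" is the story used to generate that expectation.

```python
# Toy contrast between "aims at a goal" and "does what it expects itself to do".
# Illustrative sketch only; names and structure are invented for this example.

def goal_agent(options, utility):
    # Classic picture: choose the action that maximizes a goal.
    return max(options, key=utility)

def expectation_agent(options, expected_action):
    # Proposed picture: the agent does whatever it expects itself to do.
    # The expectation can be produced by the story "I am seeking this goal",
    # which is why the behavior still looks goal-directed from the outside.
    return expected_action if expected_action in options else options[0]

options = ["square the circle", "go do something else"]
print(goal_agent(options, utility=lambda a: 0 if a == "square the circle" else 1))
print(expectation_agent(options, expected_action="go do something else"))
```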

0Stuart_Armstrong
Seems a reasonable way of seeing things, but not sure it works if we take that definition too formally/literally.

unless you count cases where a child spent a few days in their company

There are many cases where the child's behavior is far more assimilated to the behavior of the animals than would be a credible result of merely a few days.

0MaryCh
What cases?

I thought you were saying that feral children never existed and all the stories about them are completely made up. If so, I think you are clearly wrong.

0MaryCh
then what story do you think was not made up?

People are weakly motivated because, even though they do things, they notice that they don't have to do them and could do something else instead. So they wonder what they should be doing. But there are basic things that they were doing all along because they evolved to do them. AIs won't have "things they were doing", and so they will have even weaker motivations than humans. They will notice that they can do "whatever they want" but they will have no idea what to want. This is kind of implied by what I wrote here, except that it is about human beings.

Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind."

There seems to be a logic problem there.

2rkyeun
Composition fallacy. Try again.

I hear "communicate a model that says what will happen (under some set of future conditions/actions)".

You're hearing wrong.

Not at all. It means the ability to explain, not just say what will happen.

0Dagon
When you say "ability to explain", I hear "communicate a model that says what will happen (under some set of future conditions/actions)". There is no such thing as "why" in the actual sequence of states of matter in the universe. It just is. Any causality is in the models we use to predict future states. Which is really useful but not "truth".

"If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomena that is the cause of this."

Not necessarily something specific. It could be caused by general phenomena.

This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn't part of an AI hack the rest of it and take over the universe?

I entirely disagree that "rationalists are more than ready." They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.

(That said, AIs are unlikely to actually be fanatical.)

0SquirrelInHell
Meh, kinda agree, added "(at least some of them!)" to the post. I didn't mean "ready" in the sense of value alignment, but rather that by accessing more power they would grow instead of destroying themselves.

but I thought it didn't fare too well when tested against reality (see e.g. this and this)

I can't comment on those in detail without reading them more carefully than I care to, but that author agrees with Taubes that low carb diets help most people lose weight, and he seems to be assuming a particular model (e.g. he contrasts the brain being responsible with insulin being responsible, while it is obvious that these are not necessarily opposed.)

That's not common sense, that's analogies which might be useful rhetorically but which don't do anything to s

... (read more)

"Be confused, bewildered or distant when you insist you can't explain why."

This does not fit the character. A real paperclipper would give very convincing reasons.

5Yosarian2
He's not a superhuman intelligent paperclipper yet, just human level.

So why does this positive feedback cycle start in some people, but not others?

This is his description:

  • You think about eating a meal containing carbohydrates.
  • You begin secreting insulin.
  • The insulin signals the fat cells to shut down the release of fatty acids (by inhibiting HSL) and take up more fatty acids (via LPL) from the circulation.
  • You start to get hungry, or hungrier.
  • You begin eating.
  • You secrete more insulin.
  • The carbohydrates are digested and enter the circulation as glucose, causing blood sugar levels to rise.
  • You secrete still more insul
... (read more)
0Lumifer
The idea that insulin drives obesity was popular for a while (did Gary Taubes start it?) but I thought it didn't fare too well when tested against reality (see e.g. this and this).

That's not common sense, that's analogies which might be useful rhetorically but which don't do anything to show that his view is correct.

I don't know about that. Carbs have been a significant part of the human diet since the farming revolution, which happened a sufficiently long time ago for the body to somewhat adapt (e.g. see the lactose tolerance mutation, which is more recent). Besides, let's consider what the situation was, say, 200 years ago. Were carbs a major part of diet? Sure they were. Was there an "obesity epidemic"? Nope, not at all. If you want to blame carbs (not even refined carbs like sugar, but carbs in general) for obesity, you need to have an explanation why their evil magic didn't work before the 20th century.

No, I'm not. For any animal, humans included, there is a non-zero intake of food which will force it to lose weight. "Starve" seems to mean exactly the same thing as "lose weight by calorie restriction", but with negative connotations. And I don't know about modified rats, but starving humans are not fat. Feel free to peruse pictures of starving people.
0Elo
Please fix the formatting. Edit: thanks

I just read the book (Why We Get Fat), and yes, he meant what he said when he said that people overeat because they are getting fat.

He explains this pretty clearly, though. He says it's true in the same sense that it's true that growing children eat more because they are growing. Since their bodies are growing they need more food to supply that, and the kids get hungrier.

In the same way, according to his theory, because a person's body is taking calories and storing them in fat, instead of using them for other tissues and for energy, the person will be hung... (read more)

0Lumifer
So why does this positive feedback cycle start in some people, but not others? That's pretty clearly not true.

"What kind of person was too busy to text back a short reply?"

"Too busy" is simply the wrong way to think about it. If you are in a certain sort of low energy mood, replying may be extremely unlikely regardless of how much time you have. And it says nothing about whether you respect the person, at all.

For a similar reason, you may be quite unwilling to "explain what's going on," either.

0Elo
This is more important than you make it out to be. The very emphasis is that the reasons for the failure to respond are unknown. Whatever they are, you should steelman and respect those reasons in projecting validity for the behaviour, rather than presuming bad faith or really presuming anything at all.

It is partly in the territory, and comes with the situation where you are modeling yourself. In that situation, the thing will always be "too complex to deal with directly," regardless of its absolute level of complexity.

0Lumifer
Maybe, but that's not the context in this thread.

We can make similar answers about people's intentions.

Isn't a big part of the problem the fact that you only have conscious access to a few things? In other words, your actions are determined in many ways by an internal economy that you are ignorant of (e.g. mental energy, physical energy use in the brain, time and space etc. etc.) These things are in fact value relevant but you do not know much about them so you end up making up reasons why you did what you did.

They don't have to acknowledge compulsive-obsessive behavior. Obviously they want both milk and sweets, even if they don't notice wanting the sweets. That doesn't prevent other people from noticing it.

Also, they may be lying, since they might think that liking sweets is low status.

The problem with your "in practice" argument is that it would similarly imply that we can never know if someone is bald, since it is impossible to give a definition of baldness that rigidly separates bald people from non-bald people while respecting what we mean by the word. But in practice we can know that a particular person is bald regardless of the absence of that rigid definition. In the same way a particular person can know that he went to the store to buy milk, even if it is theoretically possible to explain what he did by saying that he ha... (read more)

0mako yass
A person with less than 6% hair is bald, a person with 6% - 15% hair might be bald, but it is unknowable due to the nature of natural language. A person with 15% - 100% hair is not bald. We can't always say whether someone is bald, but more often, we can. Baldness remains applicable.
0Stuart_Armstrong
Yes. Isn't this fascinating? What is going on in human minds that, not only can we say stuff about our own values and rationality, but about those of other humans? And can we copy that into an AI somehow? That will be the subject of subsequent posts.
0turchin
What about a situation where a person says and thinks that he is going to buy milk, but actually buys milk plus some sweets? And does it often, but does not acknowledge compulsive-obsessive behaviour towards sweets?

The implied argument that "we cannot prove X, therefore X cannot be true or false" is not logically valid. I mentioned this recently when Caspar made a similar argument.

I think it is true, however, that humans do not have utility functions. I would not describe that, however, by saying that humans are not rational; on the contrary, I think pursuing utility functions is the irrational thing.

1Stuart_Armstrong
In practice, "humans don't have values" and "humans have values, but we can never know what they are" are not meaningfully different. I also wouldn't get too hung up on utility function; a utility function just means that the values don't go wrong when an agent tries to be consistent and avoid money pumps. If we want to describe human values, we need to find values that don't go crazy when transformed into utility functions.

Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.

If they aren't so very aware of it, it is not "simply true," even if there is some truth in it.
