Nope. There is no composition fallacy where there is no composition. I am replying to your position, not to mine.
I do care about tomorrow, which is not the long run.
I don't think we should assume that AIs will have any goals at all, and I rather suspect they will not, in the same way that humans do not, only more so.
Not really. I don't care if that happens in the long run, and many people wouldn't.
I considered submitting an entry basically saying this, but decided that it would be pointless since obviously it would not get any prize. Human beings do not have coherent goals even individually. Much less does humanity.
Right. Utilitarianism is false, but Eliezer was still right about torture and dust specks.
Can we agree that I am not trying to proselytize anyone?
No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.
(2) Claiming authority or pointing skyward to an authority is not a road to truth.
This is why you need to stop pointing to "Critical Rationalism" etc. as the road to truth.
I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.
First, you...
I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.
Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."
There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.
Do you think the TCS parent hasn't also heard this scenario over and over? Do you think you're like the first one ever to have mentioned it?
Do you not think that I am aware that people who believe in extremist ideologies are capable of making excuses for not following the extreme consequences of their extremist ideologies?
But this is just the same as a religious person giving excuses for why the empirical consequences of his beliefs are the same whether his beliefs are true or false.
You have two options:
1) Embrace the extreme consequences of your ...
I suppose you're going to tell me that pushing or pulling my spouse out of the way of a car
Yes, it is.
Secondly, it is quite different from the stairway case, because your spouse would do the same thing on purpose if they saw the car, but the child will not move away when they see the stairs.
At that point I'll wonder what types of "force" you advocate using against children that you do not think should be used on adults.
Who said I advocate using force against children that we would not use against adults? We use force against adults, e.g. ...
I ignored you because your definition of force was wrong. That is not what the word means in English. If you pick someone up and take them away from a set of stairs, that is force if they were trying to move toward them, even if they would not like to fall down them.
a baby gate
We were talking about force before, not violence. A baby gate is using force.
Children don't want to fall down stairs.
They do, however, want to move in the direction of the stairs, and you cannot "help them not fall down stairs" without forcing them not to move in the direction of the stairs.
Saying it is "extremist" without giving arguments that can be criticised and then rejecting it would be rejecting rationality.
Nonsense. I say it is extremist because it is. The fact that I did not give arguments does not mean rejecting rationality. It simply means I am not interested in giving you arguments about it.
You don't just get to use Bayes' Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes
I certainly do. I said that induction is not impossible, and that inductive reasoning is Bayesian. If you think that Bayesian reasoning is also impossible, you are free to establish that. You have not done so.
Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so).
If this is possible, it would be equally possible to refute induction (if it were im...
not initiating force against children as most parents currently do
Exactly. This is an extremist ideology. To give a couple of examples, parents should use force to prevent their children from falling down stairs, or from hurting themselves with knives.
I reject this extremist ideology, and that does not mean I reject rationality.
I said the thinking process used to judge the epistemology of induction is Bayesian, and my link explains how it is. I did not say it is an exhaustive explanation of epistemology.
What is the thinking process you are using to judge the epistemology of induction?
The thinking process is Bayesian, and uses a prior. I have a discussion of it here.
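To give the flavor with a toy example (the numbers here are mine, invented for illustration, not taken from that discussion): start with a prior of 0.5 that all ravens are black, suppose observing a black raven is certain on that hypothesis and has probability 0.9 otherwise, and update on seeing one:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{1 \times 0.5}{1 \times 0.5 + 0.9 \times 0.5} \approx 0.526.
\]

Each confirming observation nudges the hypothesis up a little, and that is the sense in which the inductive thinking process is Bayesian updating on a prior.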
If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? ... Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.
Little problem there.
"[I]deas on this website" is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.
This does not make it reasonable to call contradicting those ideas "contradicting Less Wrong." In any case, I am quite aware of the things I disagree with Yudkowsky and others about. I do not have a problem with that. Unlike you, I am not a cult member.
...Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationa
You say that seemingly in ignorance of the fact that what I said contradicts Less Wrong.
First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.
Second, "contradicts Less Wrong" does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.
...One of the things I said was Taking Ch
"You need to understand this stuff." Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or yours and curi's) opinions are entirely false, like the idea that you have "disproved induction."
though no doubt there are people here who will say I am just a sock-puppet of curi’s.
And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi's. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.
What's so special about this? If you're wrong about religion you get to avoidably burn in hell too, in a more literal sense. That does not (and cannot) automatically change your mind about religion, or get you to invest years in the study of all possible religions, in case one of them happens to be true.
As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.
I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a "definitely yes" or "definitely no" answer, but your comment exactly matches everything I would want to call having a cult leader.
"He is by far the best thinker I have ever encountered. "
That is either because you are curi, and incapable of noticing someone more intelligent than yourself, or because curi is your cult leader.
I haven't really finished thinking about this yet but it seems to me it might have important consequences. For example, the AI risk argument sometimes takes it for granted that an AI must have some goal, and then basically argues that maximizing a goal will cause problems (which it would, in general.) But using the above model suggests something different might happen, not only with humans but also with AIs. That is, at some point an AI will realize that if it expects to do A, it will do A, and if it expects to do B, it will do B. But it won't have any par...
The right answer is maybe they won't. The point is that it is not up to you to fix them. You have been acting like a Jehovah's Witness at the door, except substantially more bothersome. Stop.
And besides, you aren't right anyway.
I think we should use "agent" to mean "something that determines what it does by expecting that it will do that thing," rather than "something that aims at a goal." This explains why we don't have exact goals, but also why we "kind of" have goals: because our actions look like they are directed to goals, so that makes "I am seeking this goal" a good way to figure out what we are going to do, that is, a good way to determine what to expect ourselves to do, which makes us do it.
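To make the contrast concrete, here is a deliberately crude sketch (the names and structure are my own invention, purely illustrative, not a formal model): a goal-agent picks whatever maximizes some score, while an expectation-agent just does whatever it has come to expect itself to do, and a "goal" only enters as one convenient way of settling that expectation.

```python
import random

# Toy contrast between "agent as goal-maximizer" and "agent as self-fulfilling
# expectation". Everything here is invented for illustration.

ACTIONS = ["make coffee", "write a comment", "go for a walk"]

def goal_agent(score):
    """Classic picture: pick the action with the highest score under some goal."""
    return max(ACTIONS, key=score)

def expectation_agent(expectation=None):
    """Alternative picture: the agent does whatever it expects itself to do.
    A 'goal' is just one way of generating the expectation; without one,
    any self-consistent expectation will serve."""
    expected = expectation if expectation in ACTIONS else random.choice(ACTIONS)
    return expected  # acting = fulfilling the expectation

print(goal_agent(score=len))                          # always "maximizes": "write a comment"
print(expectation_agent())                            # no goal: any expectation works
print(expectation_agent(expectation="go for a walk")) # a "goal" just fixes the expectation
```

On this picture, "kind of having goals" just means that summarizing the expectations as a goal usually predicts the behavior well, not that there is some fixed objective being maximized.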
unless you count cases where a child spent a few days in their company
There are many cases where the child's behavior is far more assimilated to the behavior of the animals than would be a credible result of merely a few days.
I thought you were saying that feral children never existed and all the stories about them are completely made up. If so, I think you are clearly wrong.
People are weakly motivated because even though they do things, they notice that for some reason they don't have to do them, but could do something else. So they wonder what they should be doing. But there are basic things that they were doing all along because they evolved to do them. AIs won't have "things they were doing", and so they will have even weaker motivations than humans. They will notice that they can do "whatever they want" but they will have no idea what to want. This is kind of implied by what I wrote here, except that it is about human beings.
Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind."
There seems to be a logic problem there.
I hear "communicate a model that says what will happen (under some set of future conditions/actions)".
You're hearing wrong.
Not at all. It means the ability to explain, not just say what will happen.
"If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomena that is the cause of this."
Not necessarily something specific. It could be caused by general phenomena.
This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn't part of an AI hack the rest of it and take over the universe?
I entirely disagree that "rationalists are more than ready." They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.
(That said, AIs are unlikely to actually be fanatical.)
but I thought it didn't fare too well when tested against reality (see e.g. this and this)
I can't comment on those in detail without reading them more carefully than I care to, but that author agrees with Taubes that low carb diets help most people lose weight, and he seems to be assuming a particular model (e.g. he contrasts the brain being responsible with insulin being responsible, while it is obvious that these are not necessarily opposed.)
...That's not common sense, that's analogies which might be useful rhetorically but which don't do anything to s
"Be confused, bewildered or distant when you insist you can't explain why."
This does not fit the character. A real paperclipper would give very convincing reasons.
So why does this positive feedback cycle start in some people, but not others?
This is his description:
I just read the book (Why We Get Fat), and yes, he meant what he said when he said that people overeat because they are getting fat.
He explains this pretty clearly, though. He says it's true in the same sense that it's true that growing children eat more because they are growing. Since their bodies are growing they need more food to supply that, and the kids get hungrier.
In the same way, according to his theory, because a person's body is taking calories and storing them in fat, instead of using them for other tissues and for energy, the person will be hung...
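To put made-up numbers on the direction of causation being claimed (the figures are illustrative only, not Taubes's): if someone eats 2500 kcal/day but their fat tissue is pulling 500 kcal/day out of circulation and storing it, then

\[
E_{\text{available}} = E_{\text{intake}} - E_{\text{stored}} = 2500 - 500 = 2000\ \text{kcal/day},
\]

so the rest of the body is effectively living on 2000 kcal/day and signals hunger accordingly. On this model the extra eating is downstream of the storage, not the cause of it.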
"What kind of person was too busy to text back a short reply?"
"Too busy" is simply the wrong way to think about it. If you are in a certain sort of low energy mood, replying may be extremely unlikely regardless of how much time you have. And it says nothing about whether you respect the person, at all.
For a similar reason, you may be quite unwilling to "explain what's going on," either.
It is partly in the territory, and comes with the situation where you are modeling yourself. In that situation, the thing will always be "too complex to deal with directly," regardless of its absolute level of complexity.
We can give similar answers about people's intentions.
Isn't a big part of the problem the fact that you only have conscious access to a few things? In other words, your actions are determined in many ways by an internal economy that you are ignorant of (e.g. mental energy, physical energy use in the brain, time and space etc. etc.) These things are in fact value relevant but you do not know much about them so you end up making up reasons why you did what you did.
They don't have to acknowledge compulsive-obsessive behavior. Obviously they want both milk and sweets, even if they don't notice wanting the sweets. That doesn't prevent other people from noticing it.
Also, they may be lying, since they might think that liking sweets is low status.
The problem with your "in practice" argument is that it would similarly imply that we can never know if someone is bald, since it is impossible to give a definition of baldness that rigidly separates bald people from non-bald people while respecting what we mean by the word. But in practice we can know that a particular person is bald regardless of the absence of that rigid definition. In the same way a particular person can know that he went to the store to buy milk, even if it is theoretically possible to explain what he did by saying that he ha...
The implied argument that "we cannot prove X, therefore X cannot be true or false" is not logically valid. I mentioned this recently when Caspar made a similar argument.
I think it is true, however, that humans do not have utility functions. I would not describe that by saying that humans are not rational; on the contrary, I think pursuing utility functions is the irrational thing.
Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.
If they aren't so very aware of it, it is not "simply true," even if there is some truth in it.
"But of course the claims are separate, and shouldn't influence each other."
No, they are not separate, and they should influence each other.
Suppose your terminal value is squaring the circle using Euclidean geometry. When you find out that this is impossible, you should stop trying. You should go and do something else. You should even stop wanting to square the circle with Euclidean geometry.
What is possible directly influences what you ought to do, and what you ought to desire.