curi comments on reply to benelliott about Popper issues - Less Wrong

-1 Post author: curi 07 April 2011 08:11AM




Comment author: curi 07 April 2011 10:43:05AM *  2 points [-]

As I recall I gave a citation where to find Popper discussing morality (it is The World of Parmenides).

And I explained that moral knowledge is created using the same method as any other kind of knowledge. And I said that that method is conjectures and refutations.

Questioning whether people want to advocate strong relativism or subjectivism is an argument, too. If you aren't aware of the already existing arguments against relativism or subjectivism, then it's incomplete for you. You could always ask.

You haven't understood my view. I didn't say it's a moral force. The issue of "what is the right action, given my goals and desires?" is 100% objective, and it is a moral issue. I don't know why you expected me to disagree about that. There is a fact of the matter about it. That is one of the major parts of morality. But there is also a second part: the issue of what goals and desires are good to have.

How can that be objective, you wonder? Well, for example, some sets of goals contradict each other. That allows for a type of objective moral argument, about what goals/values/preferences/utility functions to have, against contradictory goals.

There are others. To start with, read: http://www.curi.us/1169-morality

Comment author: benelliott 07 April 2011 12:30:08PM 5 points [-]

"what is the right action, given my goals and desires?" is 100% objective

Bayes, combined with von Neumann–Morgenstern utility theory, answers this, at least in principle.

You keep acting as if it is a flaw that Bayes only predicts. Is it a flaw that Newton's laws of motion do not explain the price of gold? Narrowness is a virtue, attempting to spread your theory as wide as possible ends up forcing it into places where it doesn't belong.

Comment author: curi 07 April 2011 06:19:56PM 1 point [-]

If Bayes wants to be an epistemology then it must do more than predict. Same for Newton.

If you want to have math which doesn't dethrone Popper, but is orthogonal, you're welcome to do that and I'd stop complaining (much). However, Yudkowsky says Bayesian Epistemology dethrones and replaces Popper. He regards it as a rival theory to Popper's. Do you think Yudkowsky was wrong about that?

Comment author: timtyler 07 April 2011 07:40:07PM *  1 point [-]

Yudkowsky says Bayesian Epistemology dethrones and replaces Popper. He regards it as a rival theory to Popper's. Do you think Yudkowsky was wrong about that?

It replaces Popperian epistemology where their domains overlap - namely: building models from observations and using them to predict the future. It won't alone tell you what experiments to perform in order to gather more data - there are other puzzle pieces for dealing with that.

Comment author: curi 07 April 2011 08:23:29PM 1 point [-]

There's no overlap there b/c Popperian epistemology doesn't provide the specific details of how to do that. Popperian epistemology is fully compatible with, and can use, Bayes' theorem and any other pure math or logic insights.

Popperian epistemology contradicts your "other puzzle pieces". And without them, Bayes' theorem alone isn't epistemology.

Comment author: timtyler 07 April 2011 08:54:30PM *  2 points [-]

It replaces Popperian epistemology where their domains overlap - namely: building models from observations and using them to predict the future.

There's no overlap there b/c Popperian epistemology doesn't provide the specific details of how to do that.

Except for the advice on induction? Or has induction merely been rechristened as corroboration? Popper enthusiasts usually seem to deny doing that.

Comment author: curi 07 April 2011 08:55:50PM 1 point [-]

Induction doesn't work.

building models from observations and using them to predict the future

I thought you were referring to things you can do with Bayes' theorem and some input. If you meant something more, provide the details of what you are proposing.

Comment author: timtyler 08 April 2011 01:04:16PM 1 point [-]

Building models from observations and using them to predict the future is what Solomonoff induction does. It is Occam's razor plus Bayes's theorem.
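As a toy gloss on that combination (an editor's sketch, not code from the thread; string length here is a crude stand-in for Kolmogorov complexity, and the hypothesis descriptions are invented):

```python
# Hypotheses are (description, predicted observation sequence) pairs.
# The prior weight 2 ** -len(description) is a crude Occam prior; Bayes'
# theorem then keeps only hypotheses consistent with what was observed.

def occam_posterior(hypotheses, observed):
    """Posterior over deterministic hypotheses consistent with `observed`."""
    weights = {}
    for name, (desc, prediction) in hypotheses.items():
        if prediction.startswith(observed):    # likelihood 1 if consistent,
            weights[name] = 2.0 ** -len(desc)  # 0 otherwise
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

hypotheses = {
    "all swans white": ("W*", "W" * 100),
    "white until day 10, then black": ("W9B*", "W" * 9 + "B" * 91),
}

# Five white swans observed so far: both hypotheses fit the data, but the
# shorter description keeps the larger share of the posterior.
post = occam_posterior(hypotheses, "W" * 5)
assert post["all swans white"] > post["white until day 10, then black"]
```

Real Solomonoff induction sums over all programs for a universal machine and is uncomputable; the sketch only shows the shape of the prior.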

Comment author: benelliott 07 April 2011 07:28:42PM 1 point [-]

The most common point of Popper's philosophy that I hear (including from my Popperian philosophy teacher) is the whole "black swan white swan" thing, which Bayes does directly contradict, and dethrone (though personally I'm not a big fan of that terminology).

The stuff you talked about with conjectures and criticisms does not directly contradict Bayes, and if the serious problems with 'one strike and you're out' criticisms are fixed, I may be persuaded to accept both it and Bayes.

Bayes is not meant to be an epistemology all on its own. It only starts becoming one when you put it together with Solomonoff Induction, Expected Utility Theory, Cognitive Science and probably a few other pieces of the puzzle that haven't been found yet. I presume the reason it is referred to as Bayesian rather than Solomonoffian or anything else is that Bayes is both the most frequently used and the oldest part.

Comment author: curi 07 April 2011 07:33:14PM *  1 point [-]

The black swan thing is not that important to Popper's ideas, it is merely a criticism of some of Popper's opponents.

How does Bayes dethrone it? By asserting that white swans support "all swans are white"? I've addressed that at length (still going through overnight replies; if someone answered my points I'll try to find it).

Solomonoff Induction, Expected Utility Theory, Cognitive Science

Well, I don't have a problem with Bayes' theorem itself, of course (pretty much no one does, right? I hope not, lol). It's these surrounding ideas that make an epistemology that I think are mistaken, and all of which Popper's epistemology contradicts. (I mean the take on cognitive science popular here, not the idea of doing cognitive science.)

Comment author: benelliott 07 April 2011 07:58:17PM 1 point [-]

(still going through overnight replies; if someone answered my points I'll try to find it)

I think I answered your points a few days ago with my first comment of this discussion.

In short, yes, there are infinitely many hypotheses whose probabilities are raised by the white swan, and yes those include both "all swans are white" and "all swans are black and I am hallucinating" but the former has a higher prior, at least for me, so it remains more probable by several orders of magnitude. For evidence to support X it doesn't have to only support X. All that is required is that X does better at predicting than the weighted average of all alternatives.
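Numerically (an editor's sketch; the prior values are invented for illustration, not taken from the thread), the white-swan observation raises both perfect predictors' probabilities relative to the alternatives, while leaving their ratio to each other fixed by the priors:

```python
def posterior(priors, likelihoods):
    """Bayes' theorem over a finite hypothesis set: normalize prior * likelihood."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

priors = {
    "all swans are white": 0.1,
    "all swans are black and I am hallucinating": 1e-6,
    "swans come in mixed colours": 0.899999,
}
# Probability each hypothesis assigns to observing a white swan:
likelihoods = {
    "all swans are white": 1.0,
    "all swans are black and I am hallucinating": 1.0,
    "swans come in mixed colours": 0.5,
}

post = posterior(priors, likelihoods)

# Both perfect predictors gain at the mixture hypothesis's expense...
assert post["all swans are white"] > priors["all swans are white"]
# ...but their ratio to each other is untouched by the evidence.
ratio_before = priors["all swans are white"] / priors["all swans are black and I am hallucinating"]
ratio_after = post["all swans are white"] / post["all swans are black and I am hallucinating"]
assert abs(ratio_after - ratio_before) < 1e-6
```

Evidence only has to discriminate against the weighted average of alternatives, which is exactly what the normalization step does.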

I have had people tell me that "all swans are black, but tomorrow you will hallucinate 10 white swans" is supported less by seeing 10 white swans tomorrow than "all swans are white" is, even though they made identical predictions (and asserted them with 100% probability, and would both have been definitely refuted by anything else).

Just to be clear I am happy to say those people were completely wrong. It would be nice if nobody ever invented a poor argument to defend a good conclusion but sadly we do not live in that world.

Comment author: curi 07 April 2011 08:21:58PM 1 point [-]

I think I answered your points a few days ago with my first comment of this discussion.

But then I answered your answer, right? If I missed one that isn't pretty new, let me know.

but the former has a higher prior

So support is vacuous and priors do all the real work, right?

And priors have their own problems (why that prior?).

Just to be clear I am happy to say those people were completely wrong. It would be nice if nobody ever invented a poor argument to defend a good conclusion but sadly we do not live in that world.

OK. I think your conception of support is unsubstantive but not technically wrong.

Comment author: benelliott 07 April 2011 09:01:54PM *  0 points [-]

So support is vacuous and priors do all the real work, right?

No. Bayesian updating is doing the job of distinguishing "all swans are white" from "all swans are black" and "all swans are green" and "swans come in an equal mixture of different colours". It is only for the minority of hypotheses specifically crafted to give the same predictions as "all swans are white" that posterior probabilities remain equal to priors.

What is it with you! I admit that priors are useful in one situation and you conclude that everything else is useless!

Also, the problem of priors is overstated. Given any prior at all, the probability of eventually converging to the correct hypothesis, or at any rate a hypothesis which gives exactly the same predictions as the correct one, is 1.
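That convergence claim can be illustrated with a toy simulation (an editor's sketch; the coin-bias hypotheses, the 0.98/0.02 prior, and the 200 flips are all invented for illustration):

```python
import random

random.seed(0)

# Two hypotheses about a coin's heads-probability; the prior heavily
# favours the wrong one.
hypotheses = {0.2: 0.98, 0.8: 0.02}
true_bias = 0.8

for _ in range(200):
    heads = random.random() < true_bias
    # Bayes update: multiply each hypothesis by its likelihood for this flip...
    for bias in hypotheses:
        hypotheses[bias] *= bias if heads else (1 - bias)
    # ...then renormalize so the probabilities sum to 1.
    total = sum(hypotheses.values())
    for bias in hypotheses:
        hypotheses[bias] /= total

# Despite the lopsided prior, the posterior has converged on the truth.
assert hypotheses[0.8] > 0.999
```

The guarantee is only convergence in predictions: two hypotheses that assign identical probabilities to everything, as in the swan example, stay at their prior ratio forever.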

Bayes cannot distinguish between two theories that assign exactly the same probabilities to everything, but I don't see how you could distinguish them, without just making sh*t up, and it doesn't matter much anyway since all my decisions will be correct whichever is true.

Comment author: curi 07 April 2011 09:07:45PM 1 point [-]

Bayesian updating is doing the job of distinguishing "all swans are white" from "all swans are black"

But that is pretty simple logic. Bayes isn't needed.

@priors -- are you saying you use self-modifying priors?

Bayes cannot distinguish between two theories that assign exactly the same probabilities to everything

That makes it highly incomplete, in my view. For example, it makes it unable to address philosophy at all.

but I don't see how you could distinguish them

By considering their explanations. The predictions of a theory are not its entire content.

without just making sh*t up

That's one of the major problems Popper addressed (reconciling fallibilism and non-justification with objective knowledge and truth).

and it doesn't matter much anyway since all my decisions will be correct whichever is true.

It does matter, given that you aren't perfect. How badly things start breaking when mistakes are made depends on issues other than what theories predict -- it depends on their explanations, internal structure, etc...

Comment author: benelliott 07 April 2011 10:07:49PM 1 point [-]

It does matter, given that you aren't perfect. How badly things start breaking when mistakes are made depends on issues other than what theories predict -- it depends on their explanations, internal structure, etc...

No, I'm pretty sure that if theory A and theory B generate the same predictions then things will go exactly as well or badly for me whichever is true.

By considering their explanations. The predictions of a theory are not its entire content.

One could say that this is how to work out priors. You are aware that the priors aren't necessarily set in stone at the beginning of time? Jaynes pointed out that a prior should always include all the information you have that is not explicitly part of the data (and even the distinction between prior and data is just a convention), and may well be based on insights or evidence encountered at any time, even after the data was collected.

Solomonoff Induction is precisely designed to consider explanations. The difference is it does so in a rigorous mathematical fashion rather than with a wishy-washy word salad.

That makes it highly incomplete, in my view. For example, it makes it unable to address philosophy at all.

It was designed to address science, which is a more important job anyway.

However, in my experience, the majority of philosophical questions are empirically addressable, at least in principle, and the majority of the rest are wrong questions.

Comment author: thakil 07 April 2011 11:05:32AM 0 points [-]

Mm, I'm not sure I entirely agree with that link. We might accept that most long-term goals of maximisation will lead to what most people might recognise as morality, but I don't know if all goals are long term. It also makes no argument as to one goal being "better" than another. There are sensible reasons for me to discourage people having the desire to kill me, for example, but I don't see that one could argue that I'm right and he's wrong. If someone is born with just one innate desire, that of killing me, it's in her interest to pursue that goal. Now she might well act morally elsewhere while engaging fully in her training towards killing me, but at some point, when she is confident that she will be able to kill me, she should drop everything else and kill me. Of course after this her life is empty, but she only had that one desire, and she had to fulfill it at some point; she got absolutely no value from everything else.

Now was she wrong to pursue that goal? I don't see how I can condemn her. I obviously will do everything in my power to stop her, and I would hope others in society would have goals which are interrupted by my untimely demise, but I don't see where condemnation comes in here. We had conflicting goals, and mine seem "nicer" from an intuitive argument, but if I lived in a world where everyone had a strong desire to see me dead then I imagine it would feel "nicer" to them for me to die, and "nasty" for me to survive.

Comment author: curi 07 April 2011 06:17:07PM 2 points [-]

Not all goals are long term.

One of the purposes of the dialog is to explain that the foundations are not very important. That means you don't have to figure out the correct foundations or starting place to have objective morality. You can start wherever you want, because rather little depends on the starting place.

Once you do make a ton of progress, when you're much wiser, if your starting place was squirrels you'd be able to reconsider it because it's so silly. The same holds for any other particularly dumb starting place.

The ones that will be harder to change later are specifically the ones that you don't see as bad -- that you don't want to change. The ones that are either correct or you don't yet have enough knowledge to see the problems with them.

If someone is born with just one innate desire

Innate desires aren't morality. "I was born this way, therefore I should be this way" is a bad argument. That's getting an ought from an is.

Moving on, one way to move past the squirrel scenario, which enables you to criticize the squirrel starting point and many others, is to consider other scenarios. Drop the squirrels and put in something else, like minimizing bison. Put in a wide variety of stuff. It's not too important what it is. Any kind of value, taken seriously, and which has something to say long term. Even wanting to kill someone will work if you also want them to stay dead forever (if you really want to make sure to destroy all the information that could be used to resurrect them later with advanced technology, and you want to know what kind of remains could be used for that and what would violate the laws of physics, then you will need advanced knowledge).

So, you try the same thought experiment with bison-minimizing, or killing-forever.

You find that some of the conclusions are the same, and some are different.

Take only the ones that are the same for thousands of starting points.

Those are the non-parochial ones. They are the ones that don't depend on your culture and biases. That is the objective content.

The parts that vary by starting points are wrong.

That's what I think. And I think this argument isn't bogged down in being totally subjective from the start. Maybe it's not perfect, but that just means we could make an even better argument in the future.

Getting back to your original point, this shared content across many starting points does say stuff about what to do short term. It doesn't give complete arbitrary freedom of action in the short term. It's also not totally restrictive, but that's good. (It might get a lot more restrictive if made more precise, but that would also be good. Knowing how to live well in high detail would be a good thing even if it gave you fewer non-immoral options. As long as we don't jump the gun and create very precise rules before we understand how to work them out well, we'll be OK.)

Comment author: thakil 07 April 2011 07:26:11PM 1 point [-]

Sorry, but I just do not see how you can claim desires are not morality when you have yet to provide a basis for what it is! I see no reason to believe that those bases with common conclusions are somehow better. They might feel better, but that's not good enough.

Comment author: curi 07 April 2011 07:28:57PM *  2 points [-]

you have yet to provide a basis

I've argued that morality is at least largely, if not entirely, independent of basis. So asking me for a basis isn't the right question.

Can you give an example of a starting point you think avoids the common conclusions such as liberalism?

Comment author: thakil 07 April 2011 08:32:06PM *  0 points [-]

You have shown that, given a number of seemingly dissimilar long-term goals, we can make arguments which convincingly argue that to achieve them one should act in a manner people would generally consider moral. I am not convinced squirrel morality gives me an answer on specific moral questions (abortion, say), but I can see how one might manage it. You have yet to convince me that short-term bases will do the same: I am reasonably confident that many will not. To claim these bases are inferior seems to be begging the question to me.

As to your specific question: how about a basis of wanting to prevent liberalism? It would certainly be difficult to achieve and counterproductive, but to claim that those respective properties are bad begs the question: you need morality to condemn purposes which are going to cause nothing but pain for all involved.

Comment author: curi 07 April 2011 08:52:02PM 2 points [-]

how about a basis of wanting to prevent liberalism?

If you were just to destroy the world, or build a static society and die of a meteor strike one day b/c your science never advanced, then life could evolve on another planet.

You need enough science and other things to be able to affect the whole universe. And for that you need liberalism temporarily. Then at the very very end, when you're powerful enough to easily do whatever you want to the whole universe (needs to be well within your power, not at the limits of your power, or it's too risky, you might fail) then finally you can destroy or control everything.

So that goal leads straight to billions of years of liberalism. And that does mean freedom of abortion: ruining people's lives to punish them for having sex does not make society wealthier, does not promote progress, etc... But does increase the risk of everyone dying of meteor before you advance enough to deal with such a problem.

short term bases

Accomplishing short-term things, in general, depends on principles. Suppose I want a promotion at work within the next few years. It's important to have the right kind of philosophy; I'll have a better shot at it if I think well. So I'll end up engaging with some big ideas. Not every single short-term basis will lead somewhere interesting. If it's really short, it's not so important. Also consider this: we can conjecture that life is nice. People cannot use short-term bases, which don't connect to big ideas, to criticize this. If they want to criticize it, they will have to engage with some big ideas, so then we get liberalism again.

Comment author: thakil 08 April 2011 08:53:50AM 1 point [-]

Dealing with issues in order. OK, fine, once again you've taken a basis that I've given and assumed I want it to apply to the entire universe (note this isn't necessarily what most people actually mean. Just because I want humans to be happy doesn't necessarily mean I want a universe tiled with happy humans), but even under this assumption I'm not sure I agree: by encouraging liberalism in the short term we may make it impossible to create liberalism in the long term, and you are imagining a society which is human in nature. Humans like liberalism, as a rule, but to say that therefore morality needs liberalism is actually subjective on humans. If I invent a species of blergs who love illiberalism then I can get away with it. Bear in mind that an illiberal species isn't THAT hard to imagine. We suppose democracy is stable despite liberal societies being destroyed by more liberal ones. You make an assumption of stability based on the past 300 years or so of history, which seems somewhat presumptive.

I actually agree that given sensible starting assumptions we can get to something that looks like morality, or at least argue strongly in its favour, but those bases have no reason outside of themselves to be accepted. They are axioms, and axioms are by necessity subjective. We can look at them and say "hey, those seem sensible" and "hey, those lead to results that jibe with my intuitions", but we can't really defend them as inherent rules. Look at Eliezer's Three Worlds Collide, with the Baby Eaters. While I disagree with many of the conclusions of that story, the evolution of the Baby Eaters doesn't sound totally implausible, but there's a society that's developed a morality utterly at odds with our own.

On short-term bases, I can obviously invent short-term bases that don't work. You claim that my murderer is worried about my resurrection. Most aren't, and it's easy to just say they want me to die once, and don't care heavily if I resurrect afterwards. If I do, their desire will already have been fulfilled and they will be sated. This person is weird and irrational, and there are multiple sensible reasons for us to make sure that person does not accomplish their goals, but to claim their goal is worse than ours inherently assumes a number of goals that that individual doesn't possess.

Comment author: curi 08 April 2011 09:57:11AM *  -1 points [-]

OK, fine, once again you've taken a basis that I've given and assumed I want it to apply to the entire universe

We have different priorities.

What I want is: if people want to improve, then they can. There is an available method they can use, such as taking seriously their ideas and fully applying them instead of arbitrarily restricting them.

Most murderers don't worry about resurrection. Yes, but I don't mind. The point is a person with a murder type of basis has a way out starting with his existing values.

I think what you want is not possible methods of progress people could use if they wanted to, but some kind of guarantee. But there can't be one. For one thing, no matter what you come up with people could simply refuse to accept your arguments.

They can refuse to accept mine too. I don't care. My interest is that they can improve if they like. That's enough.

There doesn't have to be a way to force a truth on everyone (other than, you know, guns) for it to be an objective truth.

Bear in mind that an illiberal species isn't THAT hard to imagine

They are easy to imagine. But they are only temporary. They always go extinct because they cannot deal with all the unforeseen problems they encounter.

You make an assumption of stability based on the past 300 years or so of history, which seems somewhat presumptive.

No. You made an assumption about my reasoning. I certainly didn't say that. You just guessed it. If you'd asked my reasoning that isn't what I would have said.

Comment author: JoshuaZ 08 April 2011 02:04:06PM *  1 point [-]

Bear in mind that an illiberal species isn't THAT hard to imagine

They are easy to imagine. But they are only temporary. They always go extinct because they cannot deal with all the unforeseen problems they encounter.

What evidence do you have for this claim? This isn't at all obvious to me. The only highly sapient species we encounter are humans. And Homo sapiens aren't terribly liberal. Do you have examples of other species that are intrinsically illiberal that have gone extinct?

Comment author: thakil 08 April 2011 10:33:19AM 1 point [-]

Mm, I wonder if we are potentially arguing about the same thing here. I suspect our constructions of morality would look very similar at the end of the day, and that the word "objective" is getting in our way. I still don't see how one can possibly construct a morality which exists outside minds in a real way, as morality is a function of sentience.