BiasedBayes

Learning social dynamics from the book The Game is like trying to learn science from clickbait media.

Try David Buss: Evolutionary Psychology - The New Science of the Mind (newest edition).

First of all, that's the wrong level of analysis. There is nothing relativistic about the theory of relativity itself. The proper analogy would be between theories/ontologies/belief systems, not in terms of the content of those theories.

No reference frame makes Newton's, Thomas Young's, Augustin-Jean Fresnel's or Ernst Mach's ideas about motion more or less right compared to Einstein's. You need evidence to evaluate the ontologies, even if their content is relativistic.

Is this supposed to be a cute little side note or a powerful counterargument?

It's possible to have better and worse ontologies even if philosophers can't settle which theory of truth is right. One could answer the liar's paradox based on Russell's, Tarski's, Kripke's or Priest's ideas, but this is irrelevant IF one is interested in actually having accurate beliefs. It is not necessary to have a completely watertight theory of truth, with necessary and sufficient conditions, in order to rank belief systems based on the evidence at hand and on evidence about human cognitive tendencies to create predictable folk theories.

To state P is to imply "P is true". If you didn't think your theory was better, why state it?

I'm not advocating some grand theory of ethics but a rational approach to ethical problems given the values we have. I don't think it's necessary, or even possible, to solve the big general questions first.

Anyone else? Any number of people have stated theories. The Catholic Church. The Protestant churches. Left-wing politics. Right-wing politics. ...etc., etc.

In this discussion.

Anyone can state an object-level theory which is just the faith of their ancestors or whatever, and many do. However, you put yourself in a tricky position to do so when your theory boils down to "science solves it", because science is supposed to be better than everything else for reasons connected to wider rationality...it's supposed to be on the high ground.

Irrelevant. Given the values we have, there are better and worse approaches to ethical problems. The answer is not some lip-service slogan like "science solves it" but an argument based on the synthesized evidence we have about that specific ethical problem. After this, peers can criticise the argument based on evidence.

Why? To support some claim about ethics? I haven't made any. To prove that it is possible?

Because you keep insisting that we have to solve some big ethical questions first. When asked repeatedly, you try to specify by saying "the closer you are to solving them", but that does not really mean anything. That is just mumbo-jumbo. I'm looking forward to the day when philosophers agree on a general ethical theory.

an ethical system can be better or worse adapted to a society's needs, meaning there are better and worse ethical systems. (Strong ethical relativism is also false...we are promoting a central or compromise position along the realism-relativism axis).

How do you know which system is better or worse? Would you not rank and evaluate different solutions to ethical problems by actually researching them using the empirical data at hand and applying this thing called the scientific method?

Since we agree that this is true, but you still keep insisting, as you wrote in the beginning, that "you have to solve (funny to use this word, because this task is still a work in progress after thousands of years) the general questions first to get to the "interesting" object-level ethical propositions", please put forward your answers to these general questions. I'm begging you to give your foundational ethical arguments.

After this we could really proceed to compare our suggestions, and other people could perhaps conclude whose proposition is more or less imperfect. I'm very happy to do this and to give more structured arguments about how science is relevant when answering your points more specifically.

So far I have not concluded that "my theory is less perfect than everybody else's". Neither you nor anyone else has stated even a hint of a theory! What I have said is that the question "how much weighting do you put on the wellbeing of the folks at home versus people in far off lands" is not interesting to me.

I never wrote the last sentence of your second quote.

Thanks for your answer. My two cents:

There are no necessary and sufficient conditions for a perfect foundational general ethical theory. I would be interested in hearing your arguments if you think otherwise; give me an example to refute this. This does not mean that there can't be general guidelines. Contrary to your post, there is a huge falsifiability demand on my proposition, because scientific enquiry can and has to inform what the right (or better) moral answer is. Which leads nicely to G. E. Moore's naturalistic fallacy.

So there is such a thing as value, and the answer to the fact-value dilemma is to wholeheartedly embrace the naturalistic fallacy?

I'm not completely sure that you understand what the naturalistic fallacy is, since you even suggest it here. There are many naturalistic approaches to ethics that do not commit Moore's naturalistic fallacy. What these approaches have in common is the argument that science is relevant for ethics, without being an attempt to start from foundational first moral principles.

Moore's naturalistic fallacy is aimed at arguments seeking a foundation for ethics, not at ethicists who do not provide such a foundation. I'm not trying to derive foundational ethical principles here, if that is not clear already. This is an approach where normative inquiry is aimed at tangible problem solving and where a moral problem is not necessarily ever completely solved.

You are partly trying to carve nature into clear categories, and nature does not care about your intent in doing so. There are general answers to general questions, but when you get more specific, your nice, clean and clear general answer can become problematic. I'm in no way forced to keep defending universalism if the question is more specific. Good luck solving anything with that kind of conceptual musing from the armchair.

For instance, a decision-theoretic weighting on the desirability of a future outcome.

And the utilities in that decision-theoretic weighting are shaped in the first place by the actual facts of how our nervous system and cognition have evolved to appreciate these specific values, and by how far our beliefs are in line with the facts of reality; hopefully they are also updated and criticised based on facts.
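To sketch what I mean (my own illustration, using the standard expected-utility form rather than anything either of us quoted): such a weighting scores an action $a$ as

$$EU(a) = \sum_{o} P(o \mid a)\, U(o),$$

where $P(o \mid a)$ is the believed probability that outcome $o$ follows from $a$ and $U(o)$ is how strongly $o$ is valued. Both terms are exactly the kind of thing that facts about our evolved nervous systems and about the world can inform and correct.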

I have not stated that I do not understand the terms. This should be very clear. I have stated that the question is not interesting to me because it's too general. BUT because you kept insisting, I still gave you an answer, while very clearly stating that if you would like to be more specific I could give you an even better answer.

How can it add up to value? It can provide crucial information for meeting those values. A hard distinction between facts and values is ill-posed. What is a fact-free value? On the other hand, even our senses and cognition involve a priori concepts that affect the process of observing and processing so-called value-free facts. Welcome to 2017, David Hume.

Well yes, I think morality is related to the wellbeing of the organism interested in morality in the first place. There are reasons why forcefully cutting off my friend's arm versus their hair is morally different: the difference lies in the different effects that cutting the limb versus the hair has on the nervous system of the organism being cut. What we know scientifically about human wellbeing is relevant. We can obtain morally relevant knowledge through science.

That's a way too simplistic way to think about this. One has to stand on the shoulders of giants to be an intellectual in the first place. Also, there is this thing called scientific consensus, and there are reasons why it's usually rational to align one's opinions with it: not because other people think that way too, but because it's usually the most balanced view of the current evidence.

Taleb's argument about the IYI (Intellectual Yet Idiot) is pretty ridiculous and includes stuff like not deadlifting, not cursing on Twitter and not drinking white wine with steak, while naming some of the attributes of the IYI after people he does not like. I get that it's partly satire, but he fails to make any sharp arguments; it's mostly sweeping generalisation, generating heuristics around the concept of the IYI that are grossly simplistic.

Come on: "The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, election forecasting models, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right."

OK.
