The theory of ‘morality as cooperation’ (MAC) argues that morality is best understood as a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. MAC draws on evolutionary game theory to argue that, because there are many types of cooperation, there will be many types of morality. These include: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. Previous research suggests that these seven types of morality are evolutionarily-ancient, psychologically-distinct, and cross-culturally universal. The goal of this project is to further develop and test MAC, and explore its implications for traditional moral philosophy. Current research is examining the genetic and psychological architecture of these seven types of morality, as well as using phylogenetic methods to investigate how morals are culturally transmitted. Future work will seek to extend MAC to incorporate sexual morality and environmental ethics. In this way, the project aims to place the study of morality on a firm scientific foundation.
Source: https://www.lse.ac.uk/cpnss/research/morality-as-cooperation.
Do you notice your beliefs changing over time to match whatever is most self-serving? I know that some of you enlightened LessWrong folks have already overcome your biases and biological propensities, but I notice that I haven't.
Four years ago, I was a poor university student struggling to make ends meet. I didn't have a high paying job lined up at the time, and I was very uncertain about the future. My beliefs were somewhat anti-big-business and anti-economic-growth.
However, now that I have a decent job, which I'm performing well at, my views have shifted towards pro-economic-growth. I notice myself finding Tyler Cowen's argument that economic growth is a moral imperative quite compelling because it justifies my current context.
[Minor spoiler alert] I've been obsessed with Dune lately. I watched the movie and read the book and loved both. Dune contains many subtle elements of rationality and x-risks despite the overall mythological/religious theme. Here are my interpretations: the goal of the Bene Gesserit is to selectively breed a perfect Bayesian who can help humanity find the Golden Path. The Golden Path is the narrow set of futures that don't result in an extinction event. The Dune world is mysteriously and powerfully seductive.
I just came across Lenia, a continuous generalisation of Conway's Game of Life (continuous states, space and time instead of binary cells on a grid). There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!
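For a feel of the mechanics, here is a rough sketch of a Lenia-style update step in Python. The kernel and growth parameters below are illustrative guesses rather than the canonical values from the Lenia paper.

```python
# Illustrative Lenia-style update: a continuous-state generalisation of
# Conway's Game of Life. Parameters are ballpark choices, not canonical.
import numpy as np
from scipy.signal import convolve2d

def ring_kernel(radius=13):
    """A smooth ring-shaped convolution kernel, normalised to sum to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    d = np.sqrt(x**2 + y**2) / radius            # normalised distance from centre
    k = np.exp(-((d - 0.5) ** 2) / (2 * 0.15 ** 2)) * (d <= 1)
    return k / k.sum()

def growth(u, mu=0.15, sigma=0.015):
    """Growth mapping: a Gaussian bump rescaled to the range [-1, 1]."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(world, kernel, dt=0.1):
    """One update: convolve, apply growth, clip the state back into [0, 1]."""
    u = convolve2d(world, kernel, mode="same", boundary="wrap")
    return np.clip(world + dt * growth(u), 0.0, 1.0)

# Start from random noise and iterate; blobs and gliders may or may not
# emerge with these made-up parameters.
world = np.random.rand(128, 128)
K = ring_kernel()
for _ in range(100):
    world = step(world, K)
```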
On the mating habits of the orb-weaving spider:
These spiders are a bit unusual: females have two receptacles for storing sperm, and males have two sperm-delivery devices, called palps. Ordinarily the female will only allow the male to insert one palp at a time, but sometimes a male manages to force a copulation with a juvenile female, during which he inserts both of his palps into the female’s separate sperm-storage organs. If the male succeeds, something strange happens to him: his heart spontaneously stops beating and he dies in flagrante. This may be the ultimate mate-guarding tactic: because the male’s copulatory organs are inflated, it is harder for the female (or any other male) to dislodge the dead male, meaning that his lifeless body acts as a very effective mating plug. In species where males aren’t prepared to go to such great lengths to ensure that they sire the offspring, then the uncertainty over whether the offspring are definitely his acts as a powerful evolutionary disincentive to provide costly parental care for them.
...The lack of willpower is a heuristic which doesn’t require the brain to explicitly track & prioritize & schedule all possible tasks, by forcing it to regularly halt tasks—“like a timer that says, ‘Okay you’re done now.’”
If one could override fatigue at will, the consequences can be bad. Users of dopaminergic drugs like amphetamines often note issues with channeling the reduced fatigue into useful tasks rather than alphabetizing one’s bookcase.
In more extreme cases, if one could ignore fatigue entirely, then analogous to lack of pain, the consequences...
While reading the book "Software Engineering at Google" I came across this and thought it was funny:
...Some languages specifically randomize hash ordering between library versions or even between execution of the same program in an attempt to prevent dependencies. But even this still allows for some Hyrum’s Law surprises: there is code that uses hash iteration ordering as an inefficient random-number generator. Removing such randomness now would break those users. Just as entropy increases in every thermodynamic system, Hyrum’s Law applies to every observable...
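Here is a toy illustration of that failure mode (mine, not the book's): code that quietly treats hash-based iteration order as a random-number generator.

```python
# CPython randomises string hashing per process (PYTHONHASHSEED), so the
# iteration order of a set of strings usually differs between runs.
names = {"alice", "bob", "carol", "dave"}

def pick_reviewer():
    # Implicitly depends on set iteration order, which depends on per-process
    # hash randomisation. It "looks random", so someone ships it.
    return next(iter(names))

print(pick_reviewer())
# Run the script twice and you will likely see different names. If the
# language ever made string hashing deterministic, this accidental "random"
# reviewer rotation would silently stop rotating -- an observable behaviour
# somebody now depends on, exactly as Hyrum's Law predicts.
```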
A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.
Their claim: the various theories of rationality make differing assertions about reality and thus yield differing predictions about experience.
Their conclusion: Convergence on objective truth is impossible, and rationality is subjective. (Which I think is a false conclusion to draw).
I think this problem is analogous to the problem of Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to that proposed by Bostrom and Or...
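For concreteness, here is a toy sketch of the kind of credence-weighted aggregation a parliamentary approach gestures at. This is not Bostrom and Ord's actual proposal (theirs has delegates bargaining with one another); the theories, options and numbers are made up purely for illustration.

```python
# Toy credence-weighted vote across competing theories. A crude stand-in for
# a parliamentary model, not a faithful implementation of it.
from typing import Dict

def credence_weighted_choice(
    credences: Dict[str, float],             # credence in each theory (sums to 1)
    approvals: Dict[str, Dict[str, float]],  # theory -> option -> approval in [0, 1]
) -> str:
    """Pick the option with the highest credence-weighted approval."""
    options = {opt for scores in approvals.values() for opt in scores}
    def score(option: str) -> float:
        return sum(credences[t] * approvals[t].get(option, 0.0) for t in credences)
    return max(options, key=score)

# Hypothetical example: two "theories of rationality" disagreeing on a rule.
credences = {"bayesianism": 0.7, "frequentism": 0.3}
approvals = {
    "bayesianism": {"update_on_priors": 1.0, "fixed_significance_test": 0.2},
    "frequentism": {"update_on_priors": 0.3, "fixed_significance_test": 1.0},
}
print(credence_weighted_choice(credences, approvals))  # -> "update_on_priors"
```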
Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself.
Excerpt from "Artificial Intelligence: A Modern Approach" by Russell and Norvig.
I wish moneymaking were by default aligned with optimising for the "good". That way, I could focus on making money without worrying too much about the messiness of morality. I wholly believe that existential risks are unequivocally the most critical issues of our time because the cost of neglecting them is so enormous, and my rational self would like to work directly on reducing them. However, I'm also profoundly programmed through millions of years of evolution to want a lovely house in the suburbs, a beautiful wife, some adorable children, lots of friends, ...
I recently read Will Storr's book "The Status Game" based on a LessWrong recommendation by user Wei_Dai. It's an excellent book, and I highly recommend it.
Storr asserts that we are all playing status games, including meditation gurus and cynics. Then he classifies the different kinds of status games we can play, arguing that "virtue dominance" games are the worst kinds of games, as they are the root of cancel culture.
Storr has a few recommendations for playing the Status Game to result in a positive-sum. First, view other people as being the heroes of thei...
The important fact about "zero-sum" games is that they often have externalities. Maybe status is a zero-sum game in the sense that either you are higher-status than me or the other way around; there is no way for everyone to be at the top of the status ladder.
However, the choice of "weapons" in our battle matters for the environment. If people can only get higher status by writing good articles, we should expect many good articles to appear. (Or, "good" articles, because goodharting is a thing.) If people can get higher status by punching each other, we should expect to see many people hurt.
According to Adam Smith, the miracle of capitalism is channeling the human instinct of greed into productive projects. (Or, "productive", because goodharting is a thing.) We should do the same thing for status, somehow. If we could universally make useful things high-status and harmful things low-status, the world would become a paradise. (Or, a "paradise", because goodharting is a thing.)
How to do that, though? One obvious problem is that you cannot simply set the rules for people to follow, because "breaking rules" is inherently high-status. Can you make cheaters seem like losers, even if it brings them profit (because otherwise, why would they do it)?
Recently I came across this brilliant example of reducing selection bias when extracting quasi-experimental data from the world, near the beginning of the book "Good Economics for Hard Times" by Banerjee and Duflo.
The authors were interested in understanding the impact of migration on income. However, most data on migration is riddled with selection bias. For example, people who choose to migrate are usually audacious risk-takers, or have the physical strength, know-how, funds and connections to facilitate the undertaking.
To reduce these selection biases, the authors looked at people forced to relocate by rare natural disasters, such as volcanic eruptions.
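Here is a toy simulation of the idea (my own construction, not from the book): when migrants self-select on unobserved ability, a naive income comparison overstates the effect of migrating, while a shock that relocates people independently of ability recovers it.

```python
# Toy illustration of selection bias vs. a natural experiment.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(0, 1, n)        # unobserved earning ability
true_effect = 0.5                    # true income gain from migrating

# Self-selected migration: higher-ability people migrate more often.
migrates = rng.random(n) < 1 / (1 + np.exp(-2 * ability))
income = ability + true_effect * migrates + rng.normal(0, 1, n)
naive = income[migrates].mean() - income[~migrates].mean()

# Natural experiment: relocation forced by a disaster, independent of ability.
forced = rng.random(n) < 0.5
income_f = ability + true_effect * forced + rng.normal(0, 1, n)
quasi = income_f[forced].mean() - income_f[~forced].mean()

print(f"naive migrant/non-migrant gap: {naive:.2f}")   # well above 0.5
print(f"forced-relocation gap:         {quasi:.2f}")   # close to the true 0.5
```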
I sometimes wondered why mathematicians are so pedantic about proofs. Why can't we just check the first cases with a supercomputer and do without rigorous proofs? Here's an excellent example of why proofs matter.
Consider a proposition asserting that some innocent-looking equation has no solutions.
If you check with a computer, then this proposition will appear true.
The smallest counterexample has more than 1000 digits, and your computer program might not have checked that far. If you relied on the assumption that this equation has no solution in...
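In code, "just check the first cases" amounts to something like the sketch below, with a placeholder for whichever proposition is being tested; a clean exhaustive search up to any finite limit says nothing about what happens beyond it.

```python
# Generic brute-force search for a counterexample. The predicate is a
# placeholder: substitute the actual proposition being tested.
def proposition_holds(n: int) -> bool:
    """Placeholder: return True iff the claimed property holds for n."""
    raise NotImplementedError("substitute the actual proposition here")

def first_counterexample(limit: int):
    """Test n = 1 .. limit; return the first failure, or None if none found."""
    for n in range(1, limit + 1):
        if not proposition_holds(n):
            return n
    return None  # proves nothing about n > limit -- the first failure
                 # might have more than 1000 digits
```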
When some people hear the words "economic growth" they imagine factories spewing smoke into the atmosphere.
This is a false image of what economists mean by "economic growth". To economists, economic growth is about achieving more with less. It's about efficiency. It is about using our scarce resources more wisely.
The stoves of the 17th century had an efficiency of only 15%. Meanwhile, the induction cooktops of today achieve an efficiency of 90%. Pre-16th century kings didn't have toilets, but 54% of humans today have toilets all thanks to economic...
This is a list of the top 100 most cited scientific papers. Reading all of them would be a fun exercise.
According to Wikipedia, the largest known prime number at the time of writing is 2^82,589,933 - 1, a Mersenne prime with 24,862,048 digits. This is a nice illustration of two phenomena:
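For context: the record holders are Mersenne primes, numbers of the form 2^p - 1, because the Lucas-Lehmer test makes exactly that form unusually cheap to check. A minimal sketch of the test (practical only for small exponents; GIMPS uses heavily optimised FFT arithmetic for exponents in the tens of millions):

```python
def lucas_lehmer(p: int) -> bool:
    """Return True iff 2**p - 1 is prime, for a prime exponent p."""
    if p == 2:
        return True              # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1             # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23) if lucas_lehmer(p)])
# -> [2, 3, 5, 7, 13, 17, 19]
```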
Is bias within academia ever actually avoidable?
Let us take the example of Daniel Dennett vs David Chalmers. Dennett calls philosophical zombies an "embarrassment," while Chalmers continues to double down on his conclusion that consciousness cannot be explained in purely physical terms. If Chalmers conceded and switched teams, he would become "just another philosopher," while Dennett would score an academic victory.
As an aspiring world-class philosopher, you have little incentive to adopt the dominant view because if you do you will become just another...
I keep coming across the opinion that the appearance of imaginary numbers in physics is somehow crazy and weird, as in this article.
Complex numbers are an excellent tool for modelling rotations. When we multiply a vector by i, we effectively rotate it anticlockwise by 90 degrees around the origin. Rotations are everywhere in reality, and it's thus unsurprising that a tool that's good at modelling rotations shows up in things like the Schrödinger Equation.
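A quick numerical sanity check of the rotation picture:

```python
import cmath

z = 3 + 1j                       # the point (3, 1) viewed as a complex number
print(z * 1j)                    # -> (-1+3j): (3, 1) rotated 90 degrees anticlockwise

theta = cmath.pi / 6             # rotate by 30 degrees
rotated = z * cmath.exp(1j * theta)
print(abs(z), abs(rotated))      # the modulus is unchanged by the rotation
```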
I find it helpful to think about probabilities the way I think about inequalities. If we say that x > 1, then we are making an uncertain claim about the possible values of x, but we have no idea what value x actually has. However, from this uncertain claim about the value of x, we can make the true claim that, for example, x > 0.
Similarly, when we prove that a probability is nonzero, it means that whatever it is we are talking about must be true somewhere within possibility space.
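In symbols, my restatement of the analogy:

```latex
% An uncertain claim about x still licenses a weaker claim that is certainly true:
x > 1 \;\Longrightarrow\; x > 0.
% Likewise, a nonzero probability guarantees the event occupies some part of
% possibility space:
\Pr(A) > 0 \;\Longrightarrow\; A \neq \emptyset.
```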
What causes us to sometimes try harder? I play chess once in a while, and I've noticed that sometimes I play half-heartedly and end up losing. However, sometimes I simply tell myself that I will try harder and end up doing really well. What's stopping me from trying hard all the time?
Fascinating question, Carmex. I am interested in the following space configurations:
I'd imagine that you'd have to encode a kind of variational free energy minimisation to enable robustness against chaos.
I might play around with the simulation on my local machine when I get the chance.
A map displaying the prerequisites of the areas of mathematics relevant to CS/ML:
A dashed line means this prerequisite is helpful but not a hard requirement.
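In case it helps, here is one way such a map could be encoded, with hard and merely-helpful prerequisites as two edge types. The entries are hypothetical placeholders, not a transcription of the actual map.

```python
# Hypothetical prerequisite graph: "hard" edges are solid lines on the map,
# "helpful" edges are dashed lines. Entries are illustrative only.
prerequisites = {
    "machine learning": {
        "hard": ["linear algebra", "probability"],
        "helpful": ["real analysis"],
    },
    "probability": {
        "hard": ["calculus"],
        "helpful": ["measure theory"],
    },
}

def all_hard_prereqs(topic, graph, seen=None):
    """Transitively collect the hard prerequisites of a topic."""
    seen = set() if seen is None else seen
    for pre in graph.get(topic, {}).get("hard", []):
        if pre not in seen:
            seen.add(pre)
            all_hard_prereqs(pre, graph, seen)
    return seen

print(sorted(all_hard_prereqs("machine learning", prerequisites)))
# -> ['calculus', 'linear algebra', 'probability']
```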