Nobody special, nor any desire to be. I just share my ideas when I seem to know better than the person I'm responding to, or when I believe I have something interesting to add. I'm neither a serious nor a formal person, and if you're more knowledgeable than intelligent, you probably won't like me, as I lack academic rigor.
Feel free to correct me when I make mistakes; I'm too sure of myself because my ideas are rarely challenged. Crocker's rules are fine! When I play the intellectual (which I do on here) I find that social things only get in the way, and when I socialize I find that intellectual things get in the way, so I keep them separate.
Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Those who have had similar experiences and thoughts will recognize what I say, and those who haven't will mostly perceive noise.
Great post!
It's a habit of mine to think in very high levels of abstraction (admittedly, I haven't looked much into category theory), and while it's fun, it's rarely very useful. I think that's because of a width-depth trade-off. Concrete real-world problems have a lot of information specific to that problem; you might even say that the unique information is the problem. An abstract idea which applies to all of mathematics is way too general to help much with a specific problem; it can just help a tiny bit with a million different problems.
I also doubt the need for things which are so complicated that you need a team of people to make sense of them. I think it's likely a result of bad design. If a beginner programmer made a slot machine game, the code would likely be convoluted and unintuitive, but you could probably design the program so that all of it fits in your working memory at once. Something like "A slot machine is a function from the cartesian product of the wheels to a set of rewards", an understanding which would simplify the problem so that you could write it far more briefly and simply than the beginner. What I mean is that simple designs may exist for most problems in the world, with complicated designs being due to a lack of understanding.
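To make that concrete, here's a minimal sketch of the kind of design I mean, in Python; the symbols and payout table are made up purely for illustration:

```python
import random

# A hypothetical three-wheel slot machine: each wheel is just a list of symbols,
# and a payout table maps combinations (points in the cartesian product) to rewards.
WHEEL = ["cherry", "lemon", "bell"]
PAYOUTS = {
    ("bell", "bell", "bell"): 50,
    ("cherry", "cherry", "cherry"): 10,
}

def spin(wheels):
    """Pick one symbol per wheel and look up the reward for that combination."""
    outcome = tuple(random.choice(wheel) for wheel in wheels)
    return outcome, PAYOUTS.get(outcome, 0)

print(spin([WHEEL, WHEEL, WHEEL]))
```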
The real world values the practical way more than the theoretical, and the practical is often quite sloppy and imperfect, and made to fit with other sloppy and imperfect things.
The best things in society are obscure by statistical necessity, and it's painful to see people at the tail ends doubt themselves at the inevitable lack of recognition and reward.
I think there's a problem with the entire idea of terminal goals, and that AI alignment is difficult because of it.
"What terminal state does you want?" is off-putting because I specifically don't want a terminal state. Any goal I come up with has to be unachievable, or at least cover my entire life, otherwise I would just be answering "What needs to happen before you'd be okay with dying?"
An AI does not have a goal, but a utility function. Goals have terminal states: once you achieve them you're done, and the program can shut down. A utility function goes on forever. But generally, wanting just one thing so badly that you'd sacrifice everything else for it seems like a bad idea. Such a bad idea that no person has ever been able to define a utility function which wouldn't destroy the universe when fed to a sufficiently strong AI.
I don't wish to achieve a state, I want to remain in a state. There's actually a large space of states I would be happy with, so it's a region I try to stay within. The space of good states forms a finite region, meaning that you have to stay within it indefinitely, sustaining it. But something which optimizes seeks to head towards a "better" state; it does not want to stagnate, and this is precisely what makes it unsustainable. Something unsustainable is finite, something finite must eventually end, and something which optimizes towards an end is just racing to die. A human would likely realize this if they had enough power, but because life offers enough resistance, none of us ever win all our battles. The problem with AGIs is that they don't have this resistance.
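A toy sketch of the distinction I mean, in Python; the agents, the one-dimensional "state", and the step limit are all illustrative assumptions, not a claim about how real agents are built:

```python
import random

def improve(state):
    """Stand-in for whatever action the agent takes; here it just nudges a number upward."""
    return state + random.uniform(0, 1)

def goal_agent(state, goal_reached):
    """Goal-based: runs until a terminal condition is met, then stops."""
    while not goal_reached(state):
        state = improve(state)
    return state  # terminal state reached; the program can shut down

def utility_agent(state, utility, steps):
    """Utility-based: there is no terminal state, it just keeps seeking higher utility.
    The step limit exists only so that this example terminates."""
    for _ in range(steps):
        candidate = improve(state)
        if utility(candidate) > utility(state):
            state = candidate
    return state

print(goal_agent(0.0, lambda s: s >= 10))          # stops once the goal is met
print(utility_agent(0.0, lambda s: s, steps=100))  # would happily run forever
```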
The afterlives we have created so far are either sustainable states or the wish to die. Escaping samsara means disappearing, heaven is eternal life (stagnation), and Valhalla is an infinite battlefield (a process which never ends). We wish for continuance. It's the journey which has value, not the goal. But I don't wish to journey faster.
I meant that they were functionally booleans, since a single condition is fulfilled: "is rich", "has an anvil", "AGI achieved". In the anvil example, any number past 1 corresponds to true. In programming, casting integers to booleans results in "true" for all positive numbers and "false" for zero, just like in the anvil example. The intuition carries over too well for me to ignore.
The first example which came to mind for me when reading the post was confidence, which is often treated as a boolean "Does he have confidence? yes/no". So you don't need any countable objects, only a condition/threshold which is either reached or not, with anything past "yes" still being "yes".
A function where everything past a threshold maps to true, and everything before it maps to false, is similar to the anvil example, and to a function like "is positive" (since a more positive number is still positive). But for the threshold to be exactly 1 unit, you need to choose a unit which is large enough. $1 is not rich, and one water droplet on you is not "wet", but with the appropriate unit (exactly the size of the threshold/condition) these become functionally similar.
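A minimal sketch of that idea in Python, with a made-up threshold for "rich":

```python
def is_rich(dollars, threshold=1_000_000):
    """Choose the unit to be the threshold itself and the count collapses to a boolean:
    one or more "units of rich" is True, anything less is False."""
    return dollars / threshold >= 1

# Python's own int-to-bool cast behaves the same way on whole units:
print(bool(0), bool(1), bool(5))          # False True True
print(is_rich(999), is_rich(2_500_000))   # False True
```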
I'm hoping there is simple and intuitive mathematics for generalizing this class of problems. And now that I think about it, most of these things (the ones which can be used for making more of themselves) are catalysts (something used but not consumed in the process of making something). Using money to make more money, anvils to make more anvils, breeding more of a species before it goes extinct.
This probably makes more sense if you view it as a boolean type: you either "have an anvil" or you don't, and you either have access to fire or you don't. We view a lot of things as booleans (if your clothes get wet, then wet is a boolean). This might be helpful? It connects what might seem like an edge case to something familiar.
But "something that relies on itself" and "something which is usually hard to get, but easy to get more of once you have a bit of it" are a bit more special I suppose. "Catalyst" is a sort of similar yet different idea. You could graph these concepts as dependency relations and try out all permutations to see if more types of problems exists
The short version is that I'm not sold on rationality, and while I haven't read 100% of the sequences it's also not like my understanding is 0%. I'd have read more if they weren't so long. And while an intelligent person can come up with intelligent ways of thinking, I'm not sure this is reversible. I'm also mostly interested in tail-end knowledge. For some posts, I can guess the content by the title, which is boring. Finally, teaching people what not to do is really inefficient, since the space of possible mistakes is really big.
Your last link needs an s before the dot.
Anyway, I respect your decision, and I understand the purpose of this site a lot better now (though there's still a small, misleading difference between the explanation of rationality and how users actually behave; even the name of the website gave the wrong impression).
Yes intuitions can be wrong welcome to reality
But these ways of looking at the world are not factually wrong, they're just perverted in a sense.
I agree that schools are quite terrible in general.
how could I have come up with this myself?
That helps for learning facts, but one can teach the same things in many different ways. A math book from 80 years ago may be confusing now, even if the knowledge it covers is something that you know already, because the terms, notation and ideas are slightly different.
we need wisdom because people cannot think
In a way. But some people who have never studied psychology have great social skills, and some people who are excellent at psychology are poor socializers. Some people also dislike "nerdy" subjects, and they're much more likely to listen to a TED talk on body language than to read a book on evolutionary psychology and non-verbal communication. Having an "easy version" of knowledge available which requires 20 IQ points less than the hard version seems like a good idea.
Some of the wisest and most psychologically healthy people I have met have been non-intellectual and non-ideological, and even teenagers or young adults. Remember your "Things to unlearn from school" post? Some people may have less knowledge than the average person, and thus fewer errors, making them clear-sighted in a way that makes them seem well-read. Teaching these people philosophy could very well ruin their beautiful worldviews rather than improve on them.
if you know enough rationality you can easily get past all that.
I don't think "rationality" is required. Somebody who has never heard about the concept of rationality, but who is highly intelligent and thinks things through for himself, will be alright (outside of existential issues and infohazards, which have killed or ruined a fair share of actual geniuses).
But we're both describing conditions which apply to less than 2% of the population, so at best we have to suffer from the errors of the 98%.
I'm not sure what you mean by "when you dissent when you have an overwhelming reason". The article you linked to worded it "only when", as if one should dissent more often, but it also warns against dissenting since it's dangerous.
By the way, I don't like most rationalist communities very much, and one of the reasons is that they have a lot of snobs who will treat you badly if you disagree with them. The social mockery I've experienced is also quite strong, which is strange, since you'd expect intelligence to correlate with openness, and the high rate of autistic people to counteract some of the conformity.
I also don't like activism, and the only reason I care about the stupid ideas of the world is that all the errors make life harder for me and the people I care about. Like I said, not being an egoist is impossible, and there's no strong evidence that all egoism is bad, only that egoism can be bad. The same goes for money and power; I think they're neutral and potentially either good or bad. But being egoistic can make other people afraid of me if I don't act like I don't realize what I'm doing.
It's more optimal to be passionate about a field
I think this is mostly correct. But optimization can kill passion (since you're just following the meta and not your own desires). And common wisdom says "Follow your dreams" which is sort of naive and sort of valid at the same time.
Believing false things purposefully is impossible
I think believing something you think is false, intentionally, may be impossible. But false beliefs exist, so believing in false things is possible. For something where you're between 10% and 90% sure, you can choose whether you want to believe it or not, and then use the following algorithm:
Say "X is true because" and then allow your brain to search through your memoy for evidence. It will find them.
The articles you posted on beliefs are about the rules of linguistics (belief in belief is a valid string) and logic, but how belief works psychologically may be different. I agree that real beliefs are internalized (exist in system 1) to the point that they're just part of how you anticipate reality. But some beliefs are situational and easy to consciously manipulate (example: self-esteem. You can improve or harm your own self-esteem in about 5 minutes if you try, since you just pick a perspective and a set of standards in which you appear to be doing well or badly). Self-esteem is subjective, but I don't think the brain differentiates between subjective and objective things; it doesn't even know the difference.
And it doesn't seem like you value truth itself, but that you value the utility of some truths, and only because they help you towards something you value more?
Ethically yes, epistemically no
You may believe this because a worldview has to be formed through interactions with the territory, which means that a worldview cannot be totally unrelated to reality? You may also mean this: that if somebody has both knowledge and value judgements about life, then the knowledge is either true or false, while the value judgements are a function of the person. A happy person might say "Life is good" and a depressed person might say "Life is cruel", and they might even know the same facts.
Online "black pills" are dangerous, because the truth value of the knowledge doesn't imply that the negative worldview of the person sharing it is justified. Somebody reading the vasistha yoga might become depressed because he cannot refute it, but this is quite an advanced error in thinking, as you don't need to refute it for its negative tone to be false.
Rationality is about having cognitive algorithms which have higher returns
But then it's not about maximizing truth, virtue, or logic.
If reality operates by different axioms than logic, then one should not be logical.
The word "virtue" is overloaded, so people write like the word is related to morality, but it's really just about thinking in ways which makes one more clear-sighted. So people who tell me to have "humility" are "correct" in that being open to changing my beliefs makes it easier for me to learn, which is rational, but they often act as if they're better people than me (as if I've made an ethical/moral mistake in being stubborn or certain of myself).
By truth, one means "reality" and not the concept "truth" as the result of a logic expression. This concept is overloaded too, so that it's easy for people to manipulate a map with logical rules and then tell another person "You're clearly not seeing the territory right".
physics is more accurate than intuitive world models
Physics is our own constructed reality, which seems to act a lot like the actual reality. But I think infinitely many theories of physics could exist which predict reality with high accuracy. In other words, "there's no one true map". We reverse-engineer experiences into models, but an experience can create multiple models, and multiple models can predict the same experience.
One of the limitations is "there's no universal truth", but this is not even a problem, as the universe is finite. "Universal" in mathematics is assumed to be truly universal, covering all things, and it's precisely this which is not possible. But we don't notice, and thus come up with the illusion of uniqueness. And it's this illusion which creates conflict between people, because they disagree with each other about what the truth is, claiming that conflicting things cannot both be true. I dislike the consensus because it's the consensus and not a consensus.
A good portion of hardcore rationalists tend to have something to protect, a humanistic cause
My bad for misrepresenting your position. Though I don't agree that many hardcore rationalists care for humanistic causes. I see them as placing rationality above humanity, and thus preferring robots, cyborgs, and AIs over humanity. They think they prefer an "improvement" of humanity, but this functionally means the destruction of humanity. If you remove negative emotions (or all emotions entirely; after all, these are the source of mistakes, right?), subjectivity, and flaws from humans, and align them with each other by giving them the same personality, or get rid of the ego (it's also a source of errors and unhappiness), what you're left with is not human. It's at best a sentient robot. And this robot can achieve goals, but it cannot enjoy them.
I just remembered seeing the quote "Rationality is winning", and I'll admit this idea sounds appealing. But a book I really like (EST: Playing the game the new way, by Carl Frederick) is precisely about winning, and its main point is this: You need to give up on being correct. The human brain wants to have its beliefs validated, that's all. So you let other people be correct, and then you ask them for what you want, even if it's completely unreasonable.
Rationality doesn't necessarily have nature as a terminal value
I meant nature as its source (of evidence/truth/wisdom/knowledge), "nature" meaning reality/the dao/the laws of physics/the universe/GNON. I think most schools of thought draw their conclusions from reality itself. The only kind of worldview which seems disconnected from reality is religions which create ideals out of what's lacking in life and make those out to be virtue and the will of god.
None of that is incompatible with rationality
What I dislike might not be rationality, but how people apply it, and the psychological tendencies of the people who apply it. But upvotes and downvotes seem very biased in favor of consensus and verifiability, rather than simply being about getting what you want out of life. People also don't seem to like being told accurate heuristics which seem immoral or irrational (in the colloquial sense that regular people use), even if they predict reality well. There's also an implicit bias towards altruism which cannot be derived from objective truth.
About my values: they already exist even if I'm not aware of them; they're just unconscious until I make them conscious. But if system 1 functions well, then you don't really need to train system 2 to function well, and it's a pain to force system 2 rationality onto system 1 (your brain resists most attempts at self-modification). I like the topic of self-modification, but that line of study doesn't come up on LW very often, which is strange to me. I still believe that the LW community downplays the importance of human nature and psychology. It may even undervalue system 1 knowledge (street smarts and personal experience) and overvalue system 2 knowledge (authority, book smarts, and reasoning).
There's a lot to unpack for this first point:
Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only bad taste, it makes the game of life less enjoyable.
Learning how to program has harmed my immersion in games, and I have a tendency to powergame, which makes me learn new videogames way faster than other people, but also with the result that I have less fun than them. I think rationality can result in the same thing. Why do people dislike "sellouts" and "car salesmen" if not for the fact that they simply optimize for gains in a way which conflicts with taste? But if we all just treat taste like it's important, or refuse to collect so much information that we can see the optimal routes, then Moloch won't be able to hurt us.
If you want something to be part of you, then you simply need to come up with it yourself; it will be your own knowledge. Learning other people's knowledge, however, feels to me like consuming something foreign.
Of course, my defense of ancient wisdom so far has simply been to translate it into an academic language in which it makes sense. "Be like water" is street-smarts, and "adaptability is a core component of growth/improvement/fitness" is the book-smarts. But the "street-smarts" version is easier to teach, and now that I think about it, that's what the bible was for.
Most things that society wastes its time discussing are wrong. And they're wrong in the sense that even an 8-year-old should be able to see that all the controversies going on right now are frankly nonsense. But even academics cannot seem to frame things in a way that isn't riddled with contradictions and hypocrisy. Does "We are good, but some people are evil, and we need to fight evil with evil, otherwise the evil people will win by being evil while we're being good" not sound silly? A single thought will get you Karl Popper's "paradox of tolerance", and a single thought more will make you realize that it's not a paradox but a kind of neutrality/reflexivity which makes both sides equal, and that "We need to fight evil" means "We want our brand of evil to win" as long as people don't dislike evil itself but rather how it's used. Again, this is no more complicated than "I punched my little brother because I was afraid he'd punch me first, and punching is bad", which I expect most children to see the problem with.
astrology
The thought experiment I had in mind was limited to a single isolated situation; you took it much further, haha. My point was simply "If you use astrology for yourself, the outcomes are usually alright". Same with tarot cards: as far as I'm concerned, they're a way to talk with your subconscious without your ego getting in the way, which requires acting as if something else is present. Even crystal balls are probably a kind of Rorschach test, and should not be used to "read other people" for this reason. Finally, I don't disagree with the low utility of astrology, but false hope gives people the same reassurance as real hope. People don't suffer from the non-existence of god, but from the doubt of his existence. The actual truth value of a belief has no psychological effect (proof: otherwise we could use beliefs to measure the state of reality).
are more rational w.r.t. to that goal
I disagree, as I know of counter-examples. It's more likely for somebody to become rich making music if their goal is simply to make music and enjoy themselves than if their goal is to become rich making music. You see similar effects for people who try to get girlfriends, or happiness for that matter. If X results in Y, then you should optimize for X and not for Y. Many companies are dying because they don't realize such a simple thing (they try to exploit something pre-existing rather than making more of what they're exploiting, for instance the trust in previous IPs). Ancient wisdom tackles this: Wu Wei is about doing the right thing by not trying to do it. I don't know how often this works, but it sometimes does.
I have to disagree that anyone's goal is truth. I've seen strong evidence that knowledge of an environment is optimal for survival, and that knowledge-optimizing beats self-delusion every time, but even in this case, the real goal is "survival" and not "truth". And my proof is the following: if you optimize for truth because it feels correct or because you believe it's what's best, then your core motivation is feelings or beliefs respectively. For similar reasons, non-egoism is trivially impossible. But the "Something to protect" link you sent seems to argue for this as well?
And truth is not always optimal for goals. The belief that you're justified and the belief that you can do something are both helpful. The average person is 5/10 but tends to rate themselves as 7/10, which may be around the optimal bias.
By the way, most of my disagreements so far seem to be "Well, that makes sense logically, but if you throw human nature into the equation then it's wrong"
Some people may find fulfillment from that
I find myself a little doubtful here. People usually chase fame not because they value it, but because other people seem to value it. They might even agree cognitively on what's valuable, but it's no use if they don't feel it.
I think you would need to provide evidence for such claims
How many great people's autobiographies and life stories have you read? The nearer you get to them, the more human they seem, and if you get too close you may even find yourself crushed by pity. Of Isaac Newton it was even said, "As a man he was a failure; as a monster he was superb". Boltzmann committed suicide, and John Nash suffered from schizophrenia. Philosophy is even worse off; titles like "suicide or coffee?" do not come from healthy states of mind. And have you read the Vasistha Yoga? It's basically poison. But it's ultimately a projection: a worldview does not reveal the world, but rather the person with the worldview.
Then you weren't thinking rationally
But what saved me was not changing my knowledge, but my interpretation of it. I was right that people lie a lot, but I thought it was for their own sake, when it's mostly out of consideration for others. I was right that people were irrational, but I didn't realize that this could be a good thing.
No one can exempt you from laws of rationality
That seems like it's saying "I define rationality as what's correct, so rationality can never be wrong, because that would mean you weren't being rational". By treating rationality as something which is discovered rather than created (by creating a map and calling it the territory), any flaw can be justified as "that wasn't real rationality, we just didn't act completely rationally because we're flawed human beings! (our map was simply wrong!)".
There can be no universal knowledge; maps of the territory are inherently limited (and I can prove this). Insofar as rationality uses math and verbal or written communication, it can only approximate something which cannot be put into words. "The dao of which can be spoken is not the dao" simply means "the map is not the territory".
By the way, I think I've found a big difference between our views. You're (as far as I can tell) optimizing for "Optimization power over reality / a more reliable map", while I'm optimizing for "Biological health, psychological well-being and enjoyment of existence".
And they do not seem to have as much in common as rationalists believe.
But if rationality in the end worships reality and nature, that's quite interesting, because that puts it in the same boat as Taoism and myself. Some people even put Nature=God.
Finally, if my goal is being a good programmer, then a million factors matter, including my mood, how much I sleep, how much I enjoy programming, and so on. But somebody who naively optimizes for programming skill might practice at the cost of mood, sleep, and enjoyment, and thus ultimately end up with a mediocre result. So in this case, a heuristic like "Take care of your health and try to enjoy your life" might not lose out to a rat-race mentality in performance. Meta-level knowledge might help here, but I still don't think it's enough. And the tendency to dismiss things which seem unlikely, illogical or silly is not as great a heuristic as one would think, perhaps because any belief which manages to stay alive despite being silly has something special about it.
I think majority of people aren't aware of psychology and various fields under it
I don't think there's a reason for most people to learn psychology or game theory, as you can teach basic human behaviour and such without the academic perspective. I even think it's a danger to be more "book smart" than "street smart" about social things. So rather than teaching game theory in college, schools could make children read and write a book report on "How to Win Friends & Influence People" in 4th grade or whatever. Academic knowledge which doesn't make it to 99% of the population doesn't help ordinary people much. But a lot of this knowledge is simple and easier than the math homework children tend to struggle with.
I don't particularly believe in morality myself, and I also came to the conclusion that having shared beliefs and values is really useful, even if it means that a large group of people are stuck in a local maximum. As a result of this, I'm against people forcing their "moral" beliefs on foreign groups, especially when these groups are content and functional already. So I reject any global consensus of what's "good". No language is more correct than another language, and the same applies for cultures and such.
Well it depends on your definition of inhuman
It's funny that you should link that post, since it introduces an idea that I had already come up with myself. What I meant was that people tend to value what's objective over what's subjective, so that their rational thinking becomes self-destructive or self-denying in a sense. Rationality helps us overcome our biases, but thinking of rationality as perfect and of ourselves as defective is not exactly healthy. A lot of people who think they're "super-humans" are closer to being "half-humans", since what they're doing is closer to destroying their humanity than overcoming or going beyond it. And I'm saying this despite the fact that some of these people are better at climbing social hierarchies or getting rich than me. In short, the objective should serve the subjective, not the other way around. "The lens which sees its own flaws" merely conditions itself to see flaws in everything. Some of my friends are artists, and they hate their own work because they're good at spotting its imperfections; I don't consider this level of optimization to be any good for me. When I'm rational, it's because it's useful for me, so I'm not going to harm myself in order to become more rational. That's like wanting money because I think it will make me happy, and then sacrificing my happiness in order to make money.
But the fields like cognitive biases etc are not
I'll agree as long as these fields haven't been subverted by ideologies or psychological copes against reality yet (as that's what tends to make soft sciences pathetic). The "tall poppy syndrome" has warped the public's perception of the "Dunning-Kruger effect", so that it has become an insult you can use against anyone you disagree with who is certain of themselves, especially in a social situation in which the majority disagrees.
Astrology
Astrology is wrong and unscientific, but I can see why it would originate. It's a kind of pattern recognition gone awry. Since everything is related, and the brain is sometimes lazy and thinks that correlation equals causation and that "X implies Y" is the same as "Y implies X", people use patterns to predict things, and assume that recreating the patterns will recreate the things. This is mostly wrong, of course, but not always. People who are happy are likely to smile, but smiling actually tends to make you happier as well. Do you know the tragic story of the person who invented handwashing? He found the right pattern, and the results were verifiable, but because his idea sounded silly, he ended up suffering.
If you had used astrology yourself, it might have ended better, as you'd be likely to interpret what you wanted as being true, and your belief that your goal in life was fated to come true would help against the periodic doubt that people face in life.
I would strongly disagree on the front of intelligence
Intelligence is not something you are, it's something you have. Identifying with your intelligence is how you disown 90% of yourself. Seeing intelligence as something available to you rather than as something you are helps eliminate internal conflict. Every "gifted kid burnout" and "depressed intelligent person" situation I have seen was partly caused by this dangerous identification. Even if you dismiss everything else I've said so far, I want to stress the importance of this one thing. Lastly, "systematic optimality" seems to suffer from something like Goodhart's law. When you optimize for one variable, you may harm 100 other variables slightly without realizing it (paperclip optimizers seem like the mathematical limit of this idea). Holistic perspectives tend to go wrong less often.
I like the Internal Family Systems view. I think the brain has competing impulses whose strength depends on your physical and psychological needs. But while I think your brain is rational according to what it wants, I don't think it's rational according to what you want. In fact, I think people's brains tend to toy with them completely. The brain creates suffering to motivate you, it creates anxiety to get you to defend yourself, it creates displeasure and tells you that you will be happy if you achieve your goals. Being happy all the time is easy, but our brain makes this hard to realize so that we don't hack our own reward systems and die. If you only care about a few goals, your worldview is extremely simple. You have a complex life with millions of factors, but you only care about a few objective metrics? I'm personally glad that people who chase money or fame above all end up feeling empty, for you might as well just replace humanity with robots if you care so little for experiencing what life has to offer.
there is a good amount of coorelations with IQ
Oh, I know, I have a few bans from various websites myself (and I once got rate-limited on here). And intelligence correlates with nihilism, meta-thinking, systemization, and anxiety (I know a study found the correlation to mental illness to be false, but I think the correlation is negative until about 120 IQ and positive after that). But why did Nikola Tesla's intelligence not prevent him from dying poor and lonely? Why was Einstein so awkward? Why do so many intelligent people not enjoy life very much? My answer is that these are consequences of lacking humanity / healthy ways of thinking. It's not just that stupid people are delusional. I personally like the idea that intelligence comes at the cost of instinct. For reference, I used to think rationally: I hated the world, I hated people, I couldn't make friends, I couldn't understand myself. Now I'm completely fine, I even overcame depression. I don't suffer, and I don't even dislike suffering; I love life, I like socializing. I don't worry about injustice, immorality or death.
I just found the highlights of the Sequences, and it turns out that I have already read most of the posts, or discovered the principles myself previously. And I disagree with a few of the moral rules because they decrease my performance in life by making me help society. Finally, my value system is what I like, not what is mathematically optimal for some metric which people think could help society experience fewer negative emotions (I don't even think this is true or desirable).
There's an entire field of psychology, yes, but most men are still confused by women saying "it's fine" when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say "they were asking for it" because the whole concept of selection and standards doesn't occur to them in that context. And have you read Niccolò Machiavelli's "The Prince"? It predates psychology, but it is psychology, and it's no worse than modern books on office politics and such, as far as I can tell. Some things just aren't improving over time.
wisdom goes wrong a lot of time
You gave the example of the ayurvedic textbook, but I'm not sure I'd call that "wisdom". If we compare ancient medicine to modern medicine, then modern medicine wins in like 95% of cases. But for things relating to humanity itself, I think that ancient literature comes out ahead. Modern hard sciences like mathematics are too inhuman (autistic people are worse at socializing because they're more logical and objective). And modern soft sciences are frankly pathetic quite often (Gardner's Theory of Multiple Intelligences is nothing but a psychological defense against the idea that some people aren't very bright. Whoever doesn't realize this should not be in charge of helping other people with psychological issues)
I don't understand where it may apply other than being a nice way to say "be more adaptive"
It's a core concept which applies to all areas of life. Humans won against other species because we were better at adapting. Nietzsche wrote "The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind". This community talks a lot about "updating beliefs" and "intellectual humility" because thinking that one has all the answers, and not updating one's beliefs over time, leads to cognitive inflexibility/stagnation, which prevents learning. Principles are incredibly powerful, and most human knowledge probably boils down to about 200 or 300 core principles.
I have found that I can bypass a lot of wisdom by using these axioms
Would I be right to guess that ancient wisdom fails you the most in the objective areas of life, and that it hasn't failed you much in the social parts? I don't disagree that modern axioms can be useful, but I think there are many areas where "intelligent" approaches lead to worse outcomes. For the most part, attempting to control things leads to failure. I've had more unpleasant experiences on heavily moderated platforms than I have had in completely unmoderated spaces. I think it's because self-organization can take place once disturbance from the outside ceases. But we will likely never know.
I think the failure to general purpose overcome akrasia is a failure of rationality
You could put it like that. I'd say something like "The rules of the brain are different from those of math; if you treat the brain like it's supposed to be rational, you will always find it malfunctioning for reasons you don't understand". Too many geniuses have failed at living good lives for me to believe that intelligence is enough. I have friends with IQs above 145 who are depressed because they think too rationally to understand their own nature. They reject the things which could help them, because they look down on them as subjective/silly/irrational.
David Goggins's story is pretty interesting. I can't say I went through as much as him, but we do have things in common. This might be why I have the courage to criticize science on LW in the first place.
I like this post, but I have some problems with it. Don't take it too hard, as I'm not the average LW reader. I think your post is quite in line with what most people here believe (but you're quite ambitious in the tasks you give yourself, so you might get downvoted as a result of minor mistakes and incompleteness resulting from that). I'm just an anomaly who happened to read your post.
I think this might make suffering worse. Suffering is subjective, so if you make people believe that they should be suffering, or that suffering is justified, they may suffer needlessly. For example, poverty doesn't make people as dissatisfied with life as relative poverty does. It's when people compare themselves to others and realize that they could have it better that they start disliking what they have at the moment. If you create ideals, then people will work towards achieving them, but they will also suffer from the gap between the current state and the ideal. You may argue "the reward redeems the suffering and makes it bearable", and yes, but only as long as people believe that they're getting closer to the goal. Most positive emotion we experience is a result of feeling ourselves moving towards our goals.
Yes, which is why one should not reduce "suffering" but "the causes of unproductive suffering". Just like one shouldn't avoid "pain", but "actions which are painful and without benefit". The conclusion of "Man's Search for Meaning" was that suffering is bearable as long as it has meaning, and that only meaningless suffering is unbearable. I've personally felt this as well. One of the times I was the most happy, I was also the most depressed. But that might just have been a mixed episode, as is known from bipolar disorder.
I'm nitpicking, but I believe it's important to state that "suffering" isn't a fundamental issue. If I touch a flame and burn my hand, then the flame is the issue, not the pain. In fact, the pain is protecting me from touching the flame again. Suffering is good for survival, for the same reason that pain is good for survival. The proof is that evolution made us suffer, that those who didn't suffer didn't pass on their genes.
I'm not sure this is true? EA seems to be the opposite of Darwinism, and survival of the fittest has been the standard until recently (everyone suddenly cares about reducing negative emotions and unfairness, to an almost pathological degree). But even if various forces helped me avoid suffering, would that really be a good thing?
I personally grew the most as a person as a result of suffering. You're probably right that you were the least productive when you didn't eat, but suffering is merely a signal that change is necessary, and when you experience great suffering, you become open to the idea of change. It's not uncommon that somebody hits rock bottom and turns their life around for the better as a result. But as long as suffering is bearable, we can continue enduring, until we suffer the death of a thousand papercuts (or the death of the boiling frog, by our own hands).
That said, growth is usually a result of internal pressure, in which an inconsistency inside oneself finally snaps, so that one can focus on a single direction with determination. It's like a fever - the body almost kills itself, so that something harmful to it can die sooner.
Are you sure suffering is caused by a lack of intelligence, and not by too much intelligence? ('Forbidden fruit' argument) And that we suffer from a lack of tech rather than from an abundance of tech? (As Ted Kaczynski and the Amish seem to think)
Many animals are thriving despite their lack of intelligence. Any problem more complicated than "Get water, food and shelter. Find a mate, and reproduce" is a fabricated problem. It's because we're more intelligent than animals that we fabricate more difficult problems. And if something were within our ability, we'd not consider it a problem, which is why we always fabricate problems beyond our current capacities; this is how we trick ourselves into growth and improvement. Growth and improvement which somehow resulted in us being so powerful that we can destroy ourselves. Horseshoe crabs seem content with themselves, and even after 400 million years they just do their own thing. Some of them seem endangered now, but that's because of us?
Caused by too much centralization, I think. Merging structures into fewer, bigger structures causes an overhead which doesn't seem to be worth it. Decentralizing everything may actually save the world, or at least decrease the feedback loop which causes a few entities to hog all the resources.
Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech. Why hasn't Moloch killed us sooner? I believe it's because the conditions for Moloch hadn't yet been reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case going back might be better than going forwards.
The tools you wish to use to solve human extinction are, from my perspective, what is currently leading us towards extinction. You can add AGI to this list of things if you want.