Crocker's rules.
I'm nobody special, and I wouldn't like the responsibility which comes with being 'someone' anyway.
Reading incorrect information can be frustrating, and correcting it can be fun.
My writing is likely provocative because I want my ideas to be challenged.
I may write like a psychopath, but that's what it takes to write without bias; consider that an argument against rationality.
Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Whoever claims to be fully truth-seeking is not entirely honest.
This problem can also be modeled as the battle against pure replicators. What Nick Land calls the shredding of all values is the tendency towards pure replicators (ones which do not value consciousness, valence, and experience). This seems similar to the religious battle against materialism.
Bluntness outcompetes social grace, immorality outcompetes morality, rationality outcompetes emotion, quantity outcompetes quality (the problem of 'slop'), Capitalism outcompetes Buddhism and Taoism, intellectualism outcompetes spiritualism and religion, etc.
The retrocausality here reminds me of Roko's basilisk. I think that self-fulfilling prophecies exist. I originally didn't want to share this idea on LW, but if Nick Land beat me to it, then I might as well. I think John Wheeler was correct to assume that reality has a reflexive component (I don't think he used this word, but I'm going to). We're part of reality, and we navigate it using our model of reality, so our model is part of the reality that we're modeling. This means that the future is affected by our model of the future. This might be why placebo is so strong, and why belief is strong in general.
While I think Christopher Langan is a fraud, I believe that his model of reality has this reflexive axiom. If he's at least good at math, then he probably added this axiom because of its interesting implications, which sort of unify competing models of reality (for instance, the idea of manifestation, which is gaining popularity online).
Thank you.
Some people do seek beauty. Beauty has a similar effect to cuteness; people who look good are generally treated better. People probably prefer traits which "feel like them", and traits which they have a natural advantage at. The goal is to bring out as many real aspects of yourself as you can, and to make them as appealing as possible. Being forced to roleplay as something you're not is painful, and losing yourself in the process of fitting into a group will make you feel empty. Society is generally correct about this problem, but I think that artistic skill is sufficient to solve it.
I think self-worth is a factor, as you say, but I expect most people to have a hard time accepting themselves unless they can find a community which accepts them.
Finally, yes, suffering can push one towards either extreme. Fetishism also has this dual component - somebody who was abused might become a masochist, but another possibility is that they will search for a partner who is extremely gentle. It depends which side wins the battle, so to speak.
Successful reinforcement learning requires being around people with better taste than yourself, or consuming material made by people with better taste. Sometimes I worry that individuals with good taste might instead be harmed by their environment (I'm friends with a vtuber. I know that her chat will have inappropriate comments, and I know that sexual topics will be rewarded with more engagement). In an abstract sense, I think people want to increase their value, and that graceful behaviour is behaviour which protects value (and treats things as if they have value in order to reinforce the illusion that they have value - the polar opposite of vulgarity/blasphemy/profanity).
I agree with basically all of this. Cuteness is a social strategy and defense. I once watched a video which jokingly suggested that cats domesticated humans, using their cuteness as a psychological weapon. But isn't that fairly close to the truth?
I once reflected on "How do less intelligent people survive in this world?". I'm a quick learner who takes pride in being independent, and even I think life is difficult, so how do regular people cope?
I recently came across this book quote online: "The voice belonged to a plump round-faced woman of the sort that develops a good personality because the alternative is suicide." In short, "become likable" is the answer.
It hit me quite hard, not only because it was put so bluntly, but because I'm familiar with the sort of people who are pleasant to be around because their past is filled with misfortune and suffering. I think there's a lot of truth to statements like "There are no beautiful surfaces without a terrible depth". We create light in order to cope with the darkness, and we like others to the extent that they make life seem more appealing. The reason Japan has so much Slice of Life anime is its 'black companies'.
I love cute things myself, also for non-sexual reasons. But it's not uncommon that trauma and unmet needs result in fetishism and hypersexual behaviour which attempts to fill these needs (often without success - no amount of casual sex will make you feel loved). There's also a correlation between vulgarity, porn addiction and chronic internet use, and I dislike most of these people because they cannot emulate beautiful things well enough to deceive me (because they project aspects of themselves into their artwork). There is both healthy and unhealthy behaviour involved in these dynamics.
I've read that autistic people (who tend to have poor introspective abilities) are about 10 times more likely to be trans. The whole topic is related to crisis of identity, and young people naturally engage in self-discovery in ways which are easily disturbed by peer pressure. With post-modernism dissolving the traditional labels with which people of the past could identify themselves, I think it's natural that identity has become more fluid. I also think it's quite common for people to rebel in a way which isn't true rebellion, but rather just the appearance of such (we're social creatures, so if we don't fit into mainstream communities, we tend to find niches. But this is still a kind of conformity. True non-conformity is much more rare).
I have to disagree with your idea that trans people are hurt by conforming. There has been a great increase in transsexualism as a result of it becoming socially acceptable. There seems to be a bit of a bandwagon effect, similar to the self-diagnosis of neurodivergence caused by TikTok.
The biggest reason for taking the theory of autogynephilia seriously is that it's sexual in nature. Your explanation would explain transsexualism, but not the things with which it correlates. Transsexual people fixate more on sexual aspects of life, as do homosexuals and furries. Non-standard sexual orientations are more involved in fetishism. There are more correlations which will seem even stranger if your model doesn't include deep psychological mechanisms. For instance, all three groups mentioned previously seem more likely than average people to prefer strong or unnatural colors. There are even times when I can guess somebody's sexual tendencies from the art styles which they draw or are drawn to. Others have made similar observations, so I'm not the only one who has picked up on these correlations. I also believe that it's these correlations which give certain labels their negative connotations. People don't care what genitals you prefer, what gender you feel like, or what your skin color or hair color is, but if they've met other people who shared these traits with you, then you will be judged according to the behaviour of those who share said traits with you.
Intelligence cannot be boiled down to a single number, but it can be boiled down to about five numbers. If you gave somebody who was unable to grasp high levels of abstraction inhuman processing speed (say, 15 standard deviations), then this quantitative difference would not make a qualitative difference (so FSIQ scores aren't equal; the subtests which result in the final score are important, too). Also, the intelligence distribution is a bit weirder than our models suggest, which is why geniuses can be so far apart from each other. Nikola Tesla could visualize his designs in 3D space; just how many bits of information could this visual working memory of his contain? The standard deviation of working memory capacity is roughly 1 item, so even 50 items 'should' only appear in one in 10^400 people.
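As a rough sanity check of that figure, using the standard normal tail approximation, and treating a mean of 7 items and an SD of 1 item as illustration-only assumptions:

```python
# Rough sanity check of the "one in 10^400" figure, using the normal tail
# approximation P(Z > z) ~ exp(-z^2/2) / (z * sqrt(2*pi)) for large z.
# The mean of 7 items and SD of 1 item are assumptions for illustration only.
import math

mean_items, sd_items = 7.0, 1.0
capacity = 50

z = (capacity - mean_items) / sd_items                       # ~43 SD above the mean
log_tail = -z**2 / 2 - math.log(z * math.sqrt(2 * math.pi))  # natural log of the tail probability
log10_tail = log_tail / math.log(10)

print(f"z = {z:.0f}, P(capacity >= {capacity}) ~ 10^{log10_tail:.0f}")
# -> roughly 10^-400: under a normal model essentially nobody, which suggests
#    the tails of real ability distributions are much fatter than the model.
```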
I don't believe much in super-intelligence, but I have recently had the horrifying thought that most plans have a threshold of required intelligence, and that further intelligence doesn't make much of a difference (in short, that scaling is unreasonably effective). This means that an AI with ~140 IQ and 100,000 times more actions per second than a human could take over the world, even if I could beat it at some IQ test subsets. That competitive RTS players aim to improve their APM (actions per minute) speaks in favor of this idea.
Is there any reason you make Humman so obviously wrong and dislikable? If any reader believes something that Humman does, I think they will feel offended and close the page before they read the counter-arguments to the beliefs he represents.
Ideally, by the way, we'd gatekeep things like AI research in a manner such that people like Humman are barred from entering. There are classes in topics like security and biology which are good at destroying many of the naive arguments Humman represents in the post.
Finally, beliefs like "everyone is born with the same amount of stat points" are spread even at Harvard (e.g. Howard Gardner's theory of multiple intelligences), and they're downstream from the social instincts which guard against outliers. The hard sciences are getting less hard by the year because of ideological, political and social dynamics, and these are directly responsible for the poor general understanding of intelligence. Human beings fail to understand AGI because they anthropomorphize it (they treat their own humanity as axioms that all systems are bound by).
Social grace cannot co-exist with truth seeking; they're in conflict. But some truths can be communicated gracefully. "Saying something nicely" is often just stating the truth with a lower magnitude: "I hate that" -> "It's not really my cup of tea". The vector is the same, the magnitude is smaller. It's like whispering rather than yelling.
Edit: People here are rather mathematical, and truth seeking doesn't see much resistance in math because it's so neutral. Try digging up human biodiversity research or any other controversial topic. What, you think I'm just a bad person? That would imply that all unpleasant conclusions are false, and that people who hold them merely have unpleasant values. As if moral thinking were a good heuristic, and ad hominem not a fallacy? That I've already been downvoted on agreement proves my point.
But the lack of social grace is not a lack of skill - well, it is, but more precisely it's a lack of sensitivity (and therefore granularity). One can be socially tone deaf in the same way one can be musically tone deaf. The more different tones you can differentiate, the more subtle differences you can pick up.
People who lack social grace lack this subtlety. Their social landscape is more coarse - perhaps some dimensions are even missing from it. If something registers weakly for us, we assume it registers weakly for others (people who have poor hearing often speak too loudly). But one can also ruin one's sensitivity (one's social taste) by calibrating it poorly. This damage is often done by strong stimuli, which reorder the scale that everything else is compared against (if more bits are used for the exponent of a floating-point variable, fewer bits are left for the precision).
Somebody who drenches their food in chili sauce is less likely to be able to taste whether they're drinking cheap wine or expensive wine. Porn addicts often judge the appropriateness of sexual behaviour poorly. If you're used to people who use strong language, your baseline of what counts as rude speech may be out of sync with others. I also imagine that watching too much anime might ruin somebody's calibration, since a lot of things in anime are exaggerated (down to facial expressions - which may be why a lot of autistic people are drawn to anime).
More general intelligence is less restrained (that's what general means), but social grace, manners, norms, etc. are primarily restrictions. For instance, the Overton window is the acceptable space of ideas. Intelligent people can often "emulate" bounded behaviour, but they're at a disadvantage within the bounded area (which is why being street smart might outperform being book smart). Finally, many intellectuals don't get the importance of context (they prefer the world to follow general rules which are true in all contexts).
I have more than 200 passwords now I think, and I'm starting to forget some of them. A password manager would probably be ideal if it could run completely locally, so that it's not an online service, but rather a cryptographic program which uses a master key to generate sub-keys.
There's an article with some criticism of Brave here. It looks like it's written by somebody who has been on the internet too long and who doesn't trust anyone or anything, but that's exactly what I like about it.
Speaking of which, I don't like the mindset "Oh well, you can't avoid all these tech giants, so you might as well give up and let them collect whatever they want". Almost all "alternative browsers" are just Chromium or Firefox under the hood, and almost all "alternative search engines" are just Google proxies (metasearch engines).
There's enough information out there to uniquely identify something like 95% of users across almost all services, but our footprints are just considered noise; the FBI doesn't care if somebody downloads a song without paying for it. Now, consider what will happen once AIs can process all of it, and each person can be assigned a "digital FBI agent" to watch over them.
Bitwarden does seem more local than I initially assumed. I modeled it as trusting another entity with one's master key. Ideally, all reliance on external services shouldn't require you to trust them; it should be mathematically impossible for them to betray you by design. A purely mechanical lock, for instance, cannot be hacked remotely. Self-hosting is superior to trusting other services, and the open source nature reduces the chance that it's backdoored by quite a lot.
Mathematically, if I have one unique password that nobody else knows or can know (because it's never used on any websites), and I use that to generate other passwords in an irreversible manner, then I can get away with remembering just one password, while still using hundreds of unique passwords on different websites. This might be what Bitwarden does, in which case the only causes of concern left are hardware backdoors and quantum computers. Doesn't get much better than that.
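A minimal sketch of the kind of scheme I have in mind - not a claim about how Bitwarden actually works, and the salt, iteration count and output length below are arbitrary placeholders:

```python
# Toy illustration: derive a unique, irreversible per-site password from one
# master secret. Not how Bitwarden (or any particular manager) works; real
# tools use per-user salts, memory-hard KDFs, and stored encrypted vaults.
import hashlib, hmac, base64

def site_password(master_secret: str, site: str, length: int = 20) -> str:
    """Deterministically derive a per-site password from the master secret."""
    # Stretch the master secret so brute-forcing it is expensive.
    key = hashlib.pbkdf2_hmac("sha256", master_secret.encode(), b"example-salt", 200_000)
    # HMAC is one-way: a leaked site password doesn't reveal the master secret.
    digest = hmac.new(key, site.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode()[:length]

print(site_password("correct horse battery staple", "example.com"))
print(site_password("correct horse battery staple", "another-site.org"))
```

One trade-off of purely deterministic derivation: rotating a single site's password requires extra state (e.g. a per-site counter), which is part of why real managers store a vault instead.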
I should have been more clear about what I meant here. "It's possible with more than just locations" means that, just like you can uniquely identify any location on the planet if you can extract enough bits of information out of a picture, one can uniquely identify a person if one can find log2(human population) ≈ 33 unique bits of information on them. Gender, for instance, is one bit of information.
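To make the arithmetic concrete, here's a toy calculation - the attributes and their prevalences are made-up numbers for illustration, and real attributes aren't independent, so the bits don't add up this cleanly in practice:

```python
# Each attribute with prevalence p contributes about -log2(p) bits; if the
# attributes were independent, the bits would simply add up toward ~33.
# The attribute list and prevalences are made-up numbers for illustration.
import math

world_population = 8_000_000_000
bits_needed = math.log2(world_population)   # ~33 bits to single out one person

attributes = {
    "gender": 0.5,             # ~1 bit
    "birth date": 1 / 365,     # ~8.5 bits
    "country": 0.01,           # assumed prevalence, ~6.6 bits
    "browser + OS combo": 0.02,
    "city-level location": 0.001,
}
bits_known = sum(-math.log2(p) for p in attributes.values())

print(f"needed: {bits_needed:.1f} bits, collected: {bits_known:.1f} bits")
# -> a handful of mundane attributes already gets close to uniqueness.
```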
I think the problem with Moloch is one of one-pointedness, similar to metas in competitive videogames. If everyone has their own goals and styles, then many different things are optimized for, and everyone can get what they personally find to be valuable. A sort of bio-diversity of values.
When, however, everyone starts aiming for the same thing, and collectively agreeing that only said thing has value (even at the cost of personal preferences) - then all choices collapse into a single path which must be taken. This is Moloch. A classic optimization target which culture warns against optimizing for at the cost of everything else is money. An even greater danger is that a super-structure is created, and that instead of serving the individuals in it, it grows at the expense of the individuals. This is true for "the system", but I think it's a very general Molochian pattern.
Strong optimization towards a metric quickly results in people gaming said metric, and Goodhart's law kicks in. Furthermore, "selling out" principles and good taste, and otherwise paying a high price in order to achieve one's goals, stops being frowned upon and instead becomes the expected behaviour (example: lying in job interviews is now the norm, as is studying things which might not interest you).
But I take it you're referring to the link I shared rather than LW's common conception of Moloch. Consciousness and qualia emerged in a materialistic universe, and by the Darwinian tautology, there must have been an advantage to these qualities. The illusion of coherence is the primary goal of the brain, which seeks to tame its environment. I don't know how or why this happened, and I think that humans will dull their own humanity in the future to avoid the suffering of lacking agency (SSRIs and stimulants are the first step), such that the human state is a sort of island of stability. I don't have any good answers on this topic, just some guesses and insights:
1: The micro dynamics of humanity (the behaviour of individual people) are different from the macro dynamics of society, and Moloch emerges as the number of people n tends upwards. Many ideal things are possible at low n almost for free (even communism works at low n!), and at high n we need laws, rules, regulations, customs, hierarchical structures of stabilizing agents, etc. - and even then our systems are strained. There seems to be a law similar to the square-cube law which naturally limits the size things can have (the solution I propose to this is decentralization).
2: Metrics can "eat" their own purpose, and creations can eat their own creators. If we created money in order to get better lives, this purpose can be corrupted so that we degrade our lives in order to get more money. Morality is another example of something which was meant to benefit us but now hangs as a sword above our heads. AGI is trivially dangerous because it has agency, but it seems that our own creations can harm us even if they have no agency whatsoever (or maybe agency can emerge? Similar to how ideas gain life memetically).
3: Perhaps there can exist no good optimization metrics (which is why we can't think of an optimization metric which won't destroy humanity when taken far enough). Optimization might just be collapsing many-dimensional structures into low-dimensional structures (meaning that all gains are made at an expense, a law of conservation). Humans mostly care about meeting needs, so we minimize thirst and hunger rather than maximizing water and food intake. This seems like a healthier way to prioritize behaviour. "Wanting more and more" seems more like a pathology than natural behaviour - one seeks the wrong thing because one doesn't understand one's own needs (e.g. attempting to replace the need for human connection with porn), and the dangers of pathology used to be limited because reality gatekept most rewards behind healthy behaviour. I don't think it's certain that optimality/optimization/self-replication/cancer-like-growth/utility are good-in-themselves like we assume. They're merely processes which destroy everything else before destroying themselves, at least when they're taken to extremes. Perhaps the lesson is that life ceases when anything is taken to the extreme (a sort of dimensional collapse), which is why Reversed Stupidity Is Not Intelligence even here.