Yeah, same here. This feels like a crossover between the standard Buddhist woo and LLM slop, sprinkled with "quantum" and "Gödel". The fact that it has positive karma makes me feel sad about LW.
Since it was written using an LLM, I think it is only fair to ask an LLM to summarize it:
...Summary of "Eliminative Nominalism"
This philosophical essay introduces Eliminative Nominalism (EN), a theoretical framework that extends Eliminative Materialism (EM) to address what the author considers its foundational oversight: the conflation of first-order physical description wit
Is it possible that the AI was actually told in the prompt to generate those specific answers?
(People on the internet do various things just to get other people's attention.)
A few months ago I complained that automatic translation sucks when you translate between two languages that are not English, and that the result is the same as if you had translated through English. When translating between two Slavic languages, even in sentences where you practically just had to transcribe Cyrillic to Latin and change a few vowels, both Google Translate and DeepL managed to randomize the word order, misgender every noun, and mistranslate concepts that happen to map to the same English word.
I tried some translation today, and fro...
What would be your proposed alternative to Harry Potter fanfiction? Fanfiction based on something else? Original fiction? Non-fiction?
As I see it, choosing the Harry Potter universe that many readers already know allows the story to use the contrast between how the different versions of Harry Potter behave.
Harry Potter in the books is frankly an idiot. Every year the most powerful dark wizard is trying to murder him, and he survives only because he gets lucky... and yet the only thing he worries about is Quidditch. Voldemort doesn't seem very smart either; his o...
Are there many people who pay for 3 Saturdays and then skip one? I would be surprised.
What age is the target group? An adult person can probably easily find 3 free Saturdays in a row. For a student living with parents it will probably be more difficult, because it means 3 weekends when the parents cannot organize any whole-weekend activity.
How about "make computers stupid again"?
The actions and statements of this current Trump administration show more support than ever for bold governance such as revitalizing letters of marque.
Against Russia? (As far as I know, most cyber attacks come from there.) In my opinion, unlikely.
I suspect that many people in the rationalist community have a blind spot about prediction markets, and see them as some kind of cooperative effort to make true predictions.
Instead, from the financial perspective, they are zero-sum games, and the best players play accordingly. If making the correct prediction is the winning move, so be it. If it is something else, that works, too.
I suspect that anonymous prediction markets (where people cannot get famous as superforecasters, only either gain or lose money) would make even better predictions than the current ones, where many people have a conflict of interest.
I don't think I'm interested. You didn't update at all based on our previous bet.
That should make you more interested (financially) in betting against the person.
So, as I see it, the best case is when the skill degrades gracefully (provides the benefits even if other people are unaware of it or doing it wrong); and the second best case is if it has tests, so you know when you can safely use it, and when you need to switch to some plan B.
In the case of rationality, I think there is "individual rationality" and "group rationality". Some things you can do alone, for example keeping a diary of your predictions. You can get more benefit from talking to other rational people, but there is also the risk that they turn out to be ...
I don't have a coherent theory of motivation, because when I look at myself, it seems that different parts of my life work quite differently.
For example, when I used to write SF, the social motivation was very important. I had friends who liked SF books and movies, we talked about that a lot, many of us hoped to write something good one day. So I had a community, and an audience. My first attempts had some small success, which inspired me to work harder and achieve more. And then... at some moment this all went down in flames... skipping the unimportant de...
Is it possible that the relation between GLP-1 and willpower is basically about willpower depletion? The more mental energy you spend fighting your urge to eat, the less remains for everything else. GLP-1 reduces hunger, and suddenly you have more willpower for everything else.
I suspect this may be related to the feedback one gets. Importantly, not just feedback on having accomplished something, but also on working towards something even if you are not there yet -- because this is where you will realistically spend most of your time when working on nontrivial projects.
Writing is probably easy (for an intelligent person) if you have a friendly audience. The question is how to get it before you learn how to write well. Sometimes, the parents provide the service.
I am not entirely sure what specific thing the rationalists were wrong about (the quotes are about various things) and what specifically is the correct version we should update to.
For example, Eliezer's quote seems to be about how China would prefer a narrow AI (that can be controlled by the Chinese Communist Party) over a general AI, for completely selfish reasons. Do you believe that this is wrong?
A general source of problems is that when people try to get a new partner, they try to be... more appealing than usual, in various ways. Which means that after the partner is secured, the behavior reverts to the norm, which is often a disappointment.
One way people try to impress their partners is that the one with the lower sexual drive pretends to be more enthusiastic about sex than they actually are in the long term. So the moment one partner goes "amazing, now I finally have someone who is happy to do X every day or week", the other partner goes "okay, now ...
Put more simply, could we posit that an omnibenevolent God might care more about what kind of person we become than what circumstances we endure?
If you redefine "benevolent" to mean someone who doesn't care about suffering, we are no longer speaking the same language.
Why is so much suffering needed to figure out "what kind of person we become"? Couldn't less sadistic circumstances answer this question just as well?
Also, many people die as little kids, so they apparently don't get a chance to become any kind of person.
"Doesn't exist, or doesn't give a fuck about suffering" is the answer that matches the data, sorry.
Occam's razor says that Trump makes populist statements that appeal to the kind of person who votes for Trump. Whether those are actually good ideas is irrelevant. Mercantilism sounds good to a person who doesn't know much about economics. Territorial expansion rhymes with "make great".
21st-century economics is just as irrelevant as e.g. 21st-century medicine. Scientists are not an important group of voters.
This can be quite frustrating if you want to figure out "what happens if I do X", and all the answers provided by science turn out to be about "what happens if people kinda want to do X, but then most of them don't".
I mean, it is good and potentially important to know that most people who kinda want to do X will fail to actually do it... but it doesn't answer the original question.
a majority of long-term monogamous, hetero relationships are sexually unsatisfying for the man after a decade or so.
This seems supported by popular wisdom. The question is how much of this is about relationships and sex specifically, and how much it is just another instance of the more general "life is full of various frustrations" or "when people reach their goals, after some time they become unsatisfied again", i.e. the hedonic treadmill.
sexual satisfaction is basically binary
Is it?
...most women eventually settle on a guy they don't find all that sexually attract
This could be addressed by making a user interface which not only passes the user's prompt to the LLM, but also provides additional instructions and automatically asks additional questions. The answers to those additional questions could be displayed in a smaller font as a side note, or maybe as graphical icons. One such question would be "in this answer, did you simplify things? if yes, tell me a few extra things I could pay attention to in order to get a better understanding of the topic" or something like that.
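A minimal sketch of such a wrapper, assuming the OpenAI Python SDK; the exact follow-up wording, the model name, and the two-call design are just illustrative choices:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FOLLOW_UP = (
    "In your previous answer, did you simplify things? If yes, tell me a few "
    "extra things I could pay attention to in order to get a better "
    "understanding of the topic."
)

def ask_with_side_note(user_prompt: str) -> tuple[str, str]:
    """Return (main answer, side note); the UI would render the side note
    in a smaller font next to the main answer."""
    messages = [{"role": "user", "content": user_prompt}]
    answer = client.chat.completions.create(
        model="gpt-4o", messages=messages
    ).choices[0].message.content

    # Automatically ask the extra question in the same conversation.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": FOLLOW_UP},
    ]
    side_note = client.chat.completions.create(
        model="gpt-4o", messages=messages
    ).choices[0].message.content
    return answer, side_note
```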
If you know a fact about humans, then mammals are not important. Humans like stories. Doesn't matter if mammals in general don't.
If you don't know a fact about humans, but somehow you know the fact about mammals, you can use it as evidence (although not as a proof). For example, in a culture with a strong religious taboo against human autopsy they could dissect various mammals, and make probabilistic statements about human anatomy.
Today, a more typical situation is two groups of people, each declaring that they know for a fact that humans are / are not X. ...
Good point.
I think there are still two different topics, although with some overlap: how to budget and how to get rich. Good budgeting is good, whether you are poor or rich. If you are poor, it can help avoid losing everything. If you are rich, it can help avoid wasting all your money and becoming non-rich.
But the (implied) idea that budgeting can make poor people rich, or that it is the main force that keeps poor and rich apart... that does not automatically follow, and actually many people doubt it. Hypothetically they may be wrong, but this needs to be ...
Then maybe I can link to those posts in a larger post
Yes, this seems to me like a good strategy for posting on LW. Start with smaller posts, then generalize (and link to the previous posts when needed).
One advantage is that when things go wrong -- if one of the smaller articles is strongly rejected -- it gives you an opportunity to stop and reflect. Maybe you were wrong, in which case it is good that you didn't write the more general article (because it would be downvoted). Maybe the LW readers were wrong, but that still means that you should communicate your (small...
I would suggest choosing a less grandiose topic. Something more specific; perhaps something that you know well. ("What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world." -- Eliezer Yudkowsky, Twelve Virtues of Rationality)
As a reader, I prefer it when posts are self-contained; when I get value from the post even without clicking any of the links. The linked information should be optional to the experience.
Looking at the topics of my posts... books I have read (1, 2, 3), things happening in the rationality ...
feedback expressing painpoints/emotion is valuable, whereas feedback expressing implementation/solutions is not.
Yep. Or, let's say that the kind of feedback that provides solutions is worthless 99% of the time. It is possible in principle to provide good advice; it's just that most people do not have the necessary qualifications and experience, but may be overconfident about their qualifications.
I find it ironic that popular wisdom seems to go the other way round, and "constructive criticism" is praised as the right thing to do. Which just doesn't mak...
I agree that something being "natural" doesn't make it "right". But "natural" is still a force of nature that you have to include in your predictions... unless you are okay with getting your predictions systematically wrong.
Applying this to the examples you mentioned:
Males are more aggressive in most animals; therefore, male humans should be expected to be more aggressive.
Yes, 100% agree. Notice that this doesn't say anything about aggression being good or otherwise desirable; the statement is positive, not normative.
Do you have an alternative explanation ...
The people outside the town who buy houses here either expect to rent them expensively, or to use them as an investment because they expect the costs of housing to grow. (Or a combination of both.)
Refusing to build more houses means doing exactly the thing they want -- it keeps the rents high, and it keeps the costs growing.
If you have 500 000 people in the town, and 100 000 houses are owned by people outside the town, you should build more houses until there are 600 000 of them (i.e. not only 500 000). Then the people outside the town will find it difficult to rent their houses expensively, and may start worrying that the costs will not grow sufficiently to justify their investments.
My problem with democracy is that most people are stupid. More precisely, they are relatively good at saying whether they are happy or unhappy, starving or fed, etc. They can give relatively reliable "thumbs up" or "thumbs down" feedback to the government. But if you ask them about the specific steps that should be taken to make them better fed, etc., many of the popular suggestions will be quite insane.
For example, people can have a legitimate complaint about healthcare being inaccessible for them, and yet the suggestion many would propose will be somethi...
Ah, then I believe the answer is "no".
On the time scale of current human lifespan, I guess I could point out that some old people are unkind, or that some criminals keep re-offending a lot, so it doesn't seem like time automatically translates to more kindness.
But an obvious objection is "well, maybe they need 200 years of time, or 1000", and I can't provide empirical evidence against that. So I am not sure how to settle this question.
On average, people get less criminal as they get older, so that would point towards human kindness increasing in time. On t...
Well, if we assume that humans are fundamentally good / inevitably converge to kindness if given enough time... then, yeah, giving someone God-emperor powers is probably going to be good in the long term. (If they don't accidentally make an irreparable mistake.)
I just strongly disagree with this assumption.
Sounds to me like wishful thinking. You basically assume that in 1 000 000 years people will get bored of doing the wrong thing, and start doing the right thing. My perspective is that "good" is a narrow target in the possibility space, and if someone already keeps missing it now, if we expand their possibility space by making them a God-emperor, the chance of converging to that narrow target only decreases.
Basically, for your model to work, kindness would need to be the only attractor in the space of human (actually, post-human) psychology.
A simple exampl...
Yes, if you have 500 000 people in town, you need to keep producing food for 500 000 people all the time, while you only need to build houses for 500 000 people once.
But the logic of "there is a shortage of X, therefore the proper solution is to ban the production of X and hope that the problem will magically go away" is insane either way.
"I'm tired of this simplified idea that building more buildings here will solve the affordable housing problem"
I am tired of this simplified idea that giving people more food will solve the hunger problem. (Insert sophisticated arguments about how supermarkets throw away lots of perfectly okay food, etc.) Therefore, making more food should remain banned.
Are people fundamentally good?
Maybe some people are, and some people are not?
Are they practically good?
Not sure if we are talking about the same thing, but I think that there are many people who just "play it safe", and in a civilized society that generally means following the rules and avoiding unnecessary conflicts. The same people can behave differently if you give them power (even on a small scale, e.g. when they have children).
But I think there are also people who try to do good even when the incentives point the other way round. And also people who c...
I'd like to know how you handle arguing for something like "usability" in the face of a morally urgent argument like "don't be eurocentric."
I would probably start by rejecting the premise that I have to listen to other people's arguments.
(This makes even more sense when we know that the people who loudly express their opinions are often just a tiny minority of users. However, it is perfectly possible to ignore the majority, too.)
I think this is a mistake that many intelligent people make, to believe that you need to win verbal fights. Perhaps identifying...
Similar here. I wouldn't want to constrain my 100-years-older self too much, but that doesn't mean that I identify with something as vague as "existence itself". There is a difference between "I am not sure about the details" and "anything goes".
Just like my current self is not the same as my 20-year-old self, but that doesn't mean you could pick any 50-year-old guy and say that all of them have the same right to call themselves a future version of my 20-year-old self. I extrapolate the same to the future: there are some hypothetical 1000 yea...
Notice how Elon Musk has positioned himself so that he has a chance to win both if the governments control the AIs and if they don't.
I like the concept of Community Notes on Xitter, as a pushback against spreading misinformation. But now it seems that Musk will "fix" it, because his own tweets often get contradicted. Are there similar features on other social networks? How do they compare?
And this part is what Robin Hanson predicted about a decade ago. If I remember it correctly, he wrote that AI Safety was a low-status thing, therefore everyone associated with it was low-status. And if AI Safety ever becomes a high-status thing, then the people in the field will not want to be associated with their low-status predecessors. So instead of referencing them, an alternative history will be established, where someone high-status will be credited for creating the field from scratch (maybe using some inspiration from high-status people in adjacent fields).
I believe it is a clear demonstration that misalignment likely does not stem from the model being “evil.” It simply found a better way to achieve its goal using unintended means.
It is fascinating to see that official science has finally discovered what Yudkowsky wrote about a decade ago. Better late than never, I guess.
They should actually reference Yudkowsky.
Their paper https://cdn.openai.com/pdf/34f2ada6-870f-4c26-9790-fd8def56387f/CoT_Monitoring.pdf lists over 70 references, but I don't see them mentioning Yudkowsky (someone should tell Schmidhuber ;-)).
This branch of official science is younger than 10 years (and started as a fairly non-orthodox one; it's only recently that it has started to feel like the official one, certainly no earlier than the formation of Anthropic, and probably quite a bit later than that).
As I see it, there are two possible explanations for a fine-tuned universe:
The argument for the multiverse is precisely that it does not require an insane amount of luck.
Your argument is that with an insane amount of luck, the multiverse is not necessary.
That is correct, but then we are back to the original question: why did we get so extremely lucky?
I agree with the spirit of your suggestion -- often "it is known" that something couldn't possibly work, based only on armchair reasoning, or one half-assed attempt made a few decades ago, in a different country, with N=20.
That said, literal cash for babies feels somewhat dysgenic (though maybe, if we actually did the experiment, the results might surprise us). It seems like it would appeal most to people with short-term thinking, the poorest people (which is probably correlated with various dysfunctions), and psychopaths who only want cash and don't c...
Okay, that pretty much ruins the idea.
Makes me think: what about humans who would do the same thing? But probably the difference is that humans can build their credibility over time, and if someone new posted an unlikely comment, they would get called out on it.
build a social circle which can maintain its own attention, as a group, without just reflecting the memetic currents of the world around it.
Note that it is not necessary for the social circle to share your beliefs, only to have a social norm that people express interest in each other's work. It could be something like: once or twice a week, people come to a room and everyone gives a presentation about what they have achieved recently, and maybe the other people provide some feedback (not in the form of "why don't you do Y instead", but with the assumption that X is a thing worth doing).
I think you don't need a lot of agency to write comments on Less Wrong. I imagine an algorithm like this:
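Purely as an illustration (the helpers `fetch_recent_posts`, `llm_draft_reply`, and `post_comment` are hypothetical stand-ins, not any real API), such a low-agency loop might look like:

```python
import time

def low_agency_commenter(fetch_recent_posts, llm_draft_reply, post_comment):
    # All three callables are hypothetical stand-ins injected by the caller.
    while True:
        for post in fetch_recent_posts():
            draft = llm_draft_reply(post)
            if draft is not None:  # only comment when there is something to say
                post_comment(post, draft)
        time.sleep(60 * 60)  # check back once an hour
```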
Q: How many Zizians does it take to change a light bulb?
A: At least two (one left hemisphere, and one right hemisphere), and the light bulb will end up broken... but that's okay, because for complicated game-theoretic reasons there is now more light in a parallel universe, and the electron-eating light bulb deserved it anyway.
The best way to be seen as trustworthy is to be trustworthy.
Depends on the environment. Among relatively smart people who know each other, trust their previous experience, and communicate their previous experience to each other -- yes. But this strategy breaks down if you keep meeting strangers, or if people around you believe rumors (so it is easy to character-assassinate an honest person).
Sometimes being known as smart is already a disadvantage, because some people assume (probably correctly) that it would be easier for a smarter person to deceive them.
I wonder how many smart people are out there who have concluded that a good strategy is to hide their intelligence, and instead pretend to be merely good at some specific X (needed for their job). I suspect that many of them actually believe that (it is easier to consistently say something if you genuinely believe that), and that women are over-represented in this group.
I agree. One problem of grade inflation is that we lose the ability to measure excellence.
If you have a scale, e.g. from 1 to 5, where 1 is the best grade and the average grade is 3, and you somehow hire a magical teacher, you could see an improvement of the mean student, let's say from 3 to 2. Then you might conclude that the teacher does something right, and maybe try to replicate that.
But if instead the average grade is 1, hiring the same magical teacher would... leave the average grade at 1. Are all students "okay" in the subject? That's 1. Are they excellent? Also 1. Do half o...
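A minimal simulation of this ceiling effect (the grade cutoffs and the +10 "magical teacher" boost are made-up numbers; 1 is the best grade, as above):

```python
import random

def grade(ability, cutoffs):
    # Map a latent ability score to a grade from 1 (best) to 5 (worst).
    for g, cut in enumerate(cutoffs, start=1):
        if ability >= cut:
            return g
    return 5

random.seed(0)
abilities = [random.gauss(50, 15) for _ in range(1000)]
boosted = [a + 10 for a in abilities]  # the "magical teacher" effect

honest = [70, 60, 45, 30]    # mean grade lands around 3
inflated = [30, 20, 15, 10]  # almost everyone already gets a 1

for name, cutoffs in [("honest scale", honest), ("inflated scale", inflated)]:
    before = sum(grade(a, cutoffs) for a in abilities) / len(abilities)
    after = sum(grade(a, cutoffs) for a in boosted) / len(boosted)
    print(f"{name}: mean grade {before:.2f} -> {after:.2f}")
```

On the honest scale the mean grade visibly improves (roughly 3.1 to 2.4); on the inflated scale it barely moves (roughly 1.1 to 1.0), so the same teacher's effect becomes invisible.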
Yeah, there are two different cultures, and it is important to know which one is at your job. For some people, following this advice could cost them their jobs.