All of lesswronguser123's Comments + Replies

but I know see that if you don't spend enough resources on addressing that hurt 

Typo: I "now" see that [...]

Great post. I was inner-simulating you posting something about bad vibes from people, given your multiagent model of the mind and the baggage that comes with it, and here it is.

Suggestion: consider writing a 2023 review of this post (I don't have enough background reading to write a good one).

Honestly, the majority of the points presented here are not new and have already been addressed in

https://www.lesswrong.com/rationality 

or https://www.readthesequence.com/ 

I got into this conversation because I thought I would find something new here. As an egoist I am voluntarily leaving this conversation in disagreement because I have other things to do in life. Thank you for your time. 

1StartAtTheEnd
The short version is that I'm not sold on rationality, and while I haven't read 100% of the sequences it's also not like my understanding is 0%. I'd have read more if they weren't so long. And while an intelligent person can come up with intelligent ways of thinking, I'm not sure this is reversible. I'm also mostly interested in tail-end knowledge. For some posts, I can guess the content by the title, which is boring. Finally, teaching people what not to do is really inefficient, since the space of possible mistakes is really big. Your last link needs an s before the dot. Anyway, I respect your decision, and I understand the purpose of this site a lot better now (though there's still a small, misleading difference between the explanation of rationality and how users are behaving. Even the name of the website gave the wrong impression).

Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only bad taste, it makes the game of life less enjoyable.
 

 

Yes, intuitions can be wrong; welcome to reality. Besides, I think schools are bad at teaching things.

 

If you want something to be part of you, then you simply need to come up with it yourself, it will be your own knowledge

... (read more)
-1StartAtTheEnd
But these ways of looking at the world are not factually wrong, they're just perverted in a sense. I agree that schools are quite terrible in general. That helps for learning facts, but one can teach the same things in many different ways. A math book from 80 years ago may be confusing now, even if the knowledge it covers is something that you know already, because the terms, notation and ideas are slightly different. In a way. But some people who have never learned psychology have great social skills, and some people who are excellent with psychology are poor socializers. Some people also dislike "nerdy" subjects, and it's much more likely that they'd listen to a TED talk on body language than read a book on evolutionary psychology and non-verbal communication. Having an "easy version" of knowledge available which requires 20 IQ points less than the hard version seems like a good idea. Some of the wisest and psychologically healthiest people I have met have been non-intellectual and non-ideological, and even teenagers or young adults. Remember your "Things to unlearn from school" post? Some people may have less knowledge than the average person, and thus make fewer errors, making them clear-sighted in a way that makes them seem well-read. Teaching these people philosophy could very well ruin their beautiful worldviews rather than improve on them. I don't think "rationality" is required. Somebody who has never heard about the concept of rationality, but who is highly intelligent and thinks things through for himself, will be alright (outside of existential issues and infohazards, which have killed or ruined a fair share of actual geniuses). But we're both describing conditions which apply to less than 2% of the population, so at best we have to suffer from the errors of the 98%. I'm not sure what you mean by "when you dissent when you have an overwhelming reason". The article you linked to worded it "only when", as if one should dissent more often, but it also warns

I even think it's a danger to be more "book smart" than "street smart" about social things.

Honestly, I don't know enough about people to actually tell if that's really the case. For me, book smarts become street smarts when I make them truly a part of me.

That's how I live anyway. For me, when you formalise street smarts they become book smarts to other people, and the latter is likely to yield better predictions, aside from the places where you lack compute, like in the case of society, where most people don't use their brains outside of social/consensus reali... (read more)

2StartAtTheEnd
There's a lot to unfold for this first point: Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only bad taste, it makes the game of life less enjoyable. Learning how to program has harmed my immersion in games, and I have a tendency to powergame, which makes me learn new videogames way faster than other people, also with the result that I'm having less fun than them. I think rationality can result in the same thing. Why do people dislike "sellouts" and "car salesmen" if not for the fact that they simply optimize for gains in a way which conflicts with taste? But if we all just treat taste like it's important, or refuse to collect so much information that we can see the optimal routes, then Moloch won't be able to hurt us. If you want something to be part of you, then you simply need to come up with it yourself, it will be your own knowledge. Learning other people's knowledge, however, feels to me like consuming something foreign. Of course, my defense of ancient wisdom so far has simply been to translate it into an academic language in which it makes sense. "Be like water" is street-smarts, and "adaptability is a core component of growth/improvement/fitness" is the book-smarts. But the "street-smarts" version is easier to teach, and now that I think about it, that's what the bible was for. Most things that society wastes its time discussing are wrong. And they're wrong in the sense that even an 8-year-old should be able to see that all controversies going on right now are frankly nonsense. But even academics cannot seem to frame things in a way that isn't riddled with contradictions and hypocrisy. Does "We are good, but some people are evil, and we need to fight evil with evil otherwise the evil people will win by being evil while we're being good

There's an entire field of psychology, yes, but most men are still confused by women saying "it's fine" when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say "they were asking for it" because the whole concept of selection and standards doesn't occur to them in that context. And have you read Niccolò Machiavelli's "The Prince"? It predates psychology, but it is psychology, and it's no worse

... (read more)
1StartAtTheEnd
I don't think there's a reason for most people to learn psychology or game theory, as you can teach basic human behaviour and such without the academic perspective. I even think it's a danger to be more "book smart" than "street smart" about social things. So rather than teaching game theory in college, schools could make children read and write a book report on "How to Win Friends & Influence People" in 4th grade or whatever. Academic knowledge which doesn't make it to 99% of the population doesn't help ordinary people much. But a lot of this knowledge is simple and easier than the math homework children tend to struggle with. I don't particularly believe in morality myself, and I also came to the conclusion that having shared beliefs and values is really useful, even if it means that a large group of people are stuck in a local maximum. As a result of this, I'm against people forcing their "moral" beliefs on foreign groups, especially when these groups are content and functional already. So I reject any global consensus of what's "good". No language is more correct than another language, and the same applies for cultures and such. It's funny that you should link that post, since it introduces an idea that I already came up with myself. What I meant was that people tend to value what's objective over what's subjective, so that their rational thinking becomes self-destructive or self-denying in a sense. Rationality helps us to overcome our biases, but thinking of rationality as perfect and of ourselves as defective is not exactly healthy. A lot of people who think they're "super-humans" are closer to being "half-humans", since what they're doing is closer to destroying their humanity than overcoming or going beyond it. And I'm saying this despite the fact that some of these people are better at climbing social hierarchies or getting rich than me. In short, the objective should serve the subjective, not the other way around. "The lens which sees its own flaws" mere

Science hasn't increased our social skills nor our understanding of ourselves; modern wisdom and life advice is not better than it was 2000 years ago.

 

Hard disagree: there are entire fields of psychology, decision theory, and ethics using reflective equilibrium in light of science.

 

Ancient wisdom can fail, but it's quite trivial for me to find examples in which common sense can go terribly wrong. It's hard to fool-proof anything, be it technology or wisdom.

Well, some things go wrong more often than others; wisdom goes wrong a lot ... (read more)

2StartAtTheEnd
There's an entire field of psychology, yes, but most men are still confused by women saying "it's fine" when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say "they were asking for it" because the whole concept of selection and standards doesn't occur to them in that context. And have you read Niccolò Machiavelli's "The Prince"? It predates psychology, but it is psychology, and it's no worse than modern books on office politics and such, as far as I can tell. Some things just aren't improving over time. You gave the example of the ayurvedic textbook, but I'm not sure I'd call that "wisdom". If we compare ancient medicine to modern medicine, then modern medicine wins in like 95% of cases. But for things relating to humanity itself, I think that ancient literature comes out ahead. Modern hard sciences like mathematics are too inhuman (autistic people are worse at socializing because they're more logical and objective). And modern soft sciences are frankly pathetic quite often (Gardner's Theory of Multiple Intelligences is nothing but a psychological defense against the idea that some people aren't very bright. Whoever doesn't realize this should not be in charge of helping other people with psychological issues.) It's a core concept which applies to all areas of life. Humans won against other species because we were better at adapting. Nietzsche wrote "The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind". This community speaks a lot of "updating beliefs" and "intellectual humility" because thinking that one has all the answers, and not updating one's beliefs over time, leads to cognitive inflexibility/stagnation, which prevents learning. Principles are incredibly powerful, and most human knowledge probably boils down

I don't think I can actually deliberately believe in falsity; it would probably end up as a belief in a belief rather than self-deception.

Besides, having false, ungrounded beliefs is likely not utility-maximising in the long run; it's a short-term-pleasure kind of thing.

Beliefs inform our actions, and having false beliefs will lead to bad actions.

I would agree with the Chesterton's fence argument, but once you understand that the reasons for the said belief are psychological in nature rather than about its truthfulness, holding onto it is just rationalisation.

Ancient wisdom is m... (read more)

1StartAtTheEnd
Some false beliefs can lead to bad actions, but I don't think it's all of them. After all, human nature is biased, because having a bias aided in survival. The psyche also seems like it deceives itself as a defense mechanism fairly often. And I think that "believe in yourself" is good advice even for the mediocre. I'm not sure which part of my message each part of your message is in response to exactly, but some realizations are harmful because they're too disillusioning. It's often useful to act like certain things are true - that's what axioms and definitions are, after all. But these things are not inherently true or real, they become so when we decide that they are, but in a way it's just that we created them. But I usually have to not think about that for a while before these things go back to looking like they're solid pieces of reality rather than just agreements. Ancient wisdom can fail, but it's quite trivial for me to find examples in which common sense can go terribly wrong. It's hard to fool-proof anything, be it technology or wisdom. Some things progress. Math definitely does. But like you said, a lot of wisdom is rediscovered periodically. Science hasn't increased our social skills nor our understanding of ourselves; modern wisdom and life advice is not better than it was 2000 years ago. And it's not even because science cannot deal with these. The whole "Be like water" thing is just flexibility/adaptability. Glass is easier to break than plastic. What's useful is that somebody who has never taken a physics class or heard about darwinism can learn and apply this principle anyway. And this may still apply to some wisdom which accidentally reveals something which is beyond the current standard of science. As for that which is not connected to reality much (wisdom which doesn't seem to apply to reality), it's mostly just the axioms of human cognition/nature. It applies to us more than to the world. "As within, so without", in short, internal changes see
1green_leaf
Yes. If I relied on losing a bet and someone knew that, their offering me a bet (and therefore a loss) would make me wary that something would unpredictably go right, I'd win, and my reliance on losing the bet would be thwarted. If I meet a random person who offers to give me $100 now and claims that later, if it's not proven that they are the Lord of the Matrix, I don't have to pay them $15,000, most of my probability mass located in "this will end badly" won't be located in "they are the Lord of the Matrix." I don't have the same set of worries here, but the worry remains.

Well, that tweet can easily be interpreted as overconfidence in their own side; I don't know whether Vance would continue being more of a rationalist and analyse his own side evenly.

I think the post was a deliberate attempt to overcome that psychology. The issue is that you can get stuck in these loops of "trying to try" and convincing yourself that you did enough; this is tricky because it's very easy to rationalise this part for a feeling of comfort.

Consider "setting up to win" vs "trying to set up to win".

The latter is much easier to do than the former. The former still implies a chance of failure, but you actually try to do your best rather than try to try to do your best.

I think this sounds convoluted; maybe there is a much easier cognitive algorithm to overcome this tendency.

Trying to do good.

 

"No!  Try not!  Do, or do not.  There is no try."
       —Yoda

Trying to try

3[anonymous]
if i left out the word 'trying' to (not) use it in that way instead, nothing about me would change, but there would be more comments saying that success is not certain. i also disagree with the linked post[1], which says that 'i will do x' means one will set up a plan to achieve the highest probability of x they can manage. i think it instead usually means one believes they will do x with sufficiently high probability to not mention the chance of failure.[2] the post acknowledges the first half of this -- «Well, colloquially, "I'm going to flip the switch" and "I'm going to try to flip the switch" mean more or less the same thing, except that the latter expresses the possibility of failure.» -- but fails to integrate that something being said implies belief in its relevance/importance, and so concludes that using the word 'try' (or, by extrapolation, expressing the possibility of failure in general) is unnecessary in general. 1. ^ though its psychological point seems true: 2. ^ this is why this wording is not used when the probability of success is sufficiently far (in percentage points, not logits) from guaranteed.

I thought we had a bunch of treaties which prevented that from happening? 

I think it's hyperbole; one can still progress, but in one sense of the word it is true. Check The Proper Use of Humility and The Sin of Underconfidence.

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more comple

... (read more)

I recommend including this question in the next LessWrong survey.

 

Along the lines of "How often do you use LLMs, and what is your use case?"

6habryka
Great idea! @Screwtape?

Is this selection bias? I have seen people who are overconfident and get nowhere.

I don't think it's independent of smartness; a smart + conscientious person is likely to do better.

Answer by lesswronguser123

https://www.lesswrong.com/tag/r-a-z-glossary

 

I found this by mistake, and luckily I remembered glancing over your question.

It would be an interesting meta post if someone did an analysis of each of those traction peaks due to various news or other articles.

Accessibility error: half the images on this page appear not to load.

Have you tried https://alternativeto.net ? It may not be AI-specific, but it was pretty useful for me to find lesser-known AI tools with a particular set of features.

Error: the "mainstream status" at the bottom of the post links back to the post itself instead of to the comments.

I prefer "System 1: fast thinking or quick judgement"

vs

"System 2: slow thinking".

I guess it depends on where you live, who you interact with, and what background they have, because "fast vs slow" covers the inferential distance fastest for me: it avoids the spirituality/intuition woo-woo landmine, avoids the part where you highlight a trivial thing in their vocab called "reason", etc.

William James (see below) noted, for example, that while science declares allegiance to dispassionate evaluation of facts, the history of science shows that it has often been the passionate pursuit of hopes that has propelled it forward: scientists who believed in a hypothesis before there was sufficient evidence for it, and whose hopes that such evidence could be found motivated their researches.

 

Einstein's Arrogance seems like a better explanation of the phenomenon to me.

I remember a point Yampolskiy made on a podcast, arguing for the impossibility of AGI alignment: as a young field, AI safety had underwhelming low-hanging fruit. I wonder if all of the major low-hanging fruit has been plucked.

I thought it was kind of known that a few of the billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/) there is a link to SlateStarCodex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference, IIRC. There are quite a few places you could find them in adjacent circles which already hint at this possibility, like BasedBeffJezos's followers being billionaires, etc. I was kind of predicting that some of ... (read more)

A few feature suggestions (I am not sure if these are feasible):

1) Folders OR sort by tag for bookmarks. 

2) When I close the hamburger menu on the frontpage, I don't see a need for the blogs not to be centred. It's unusual; it might make more sense if there were a way to double-stack them side by side, like Mastodon.

3) An RSS feature for subscribed feeds? I don't like using email, because too many subscriptions cause spam.

(Unrelated: can I get de-ratelimited, lol, or will I have to make quality blogs for that to happen?)

5Ruby
I would like that a lot personally. Unfortunately bookmarks don't get enough general use for us to prioritize that work. I believe this change was made for the occasions on which there is neat art to be displayed on the right side. It might also allow more room for a chat LLM integration we're currently experimenting with. 3) Not currently I'm afraid. I think this would make sense but is competing with all the other things to do. Quality contributions or enough time passing for many of the automatic rate limits.

I usually think of this in terms of Dennett's concept of the intentional stance, according to which there is no fact of the matter of whether something is an agent or not. But there is a fact of the matter of whether we can usefully predict its behavior by modeling it as if it was an agent with some set of beliefs and goals.

 

That sounds an awful lot like asserting agency to be a mind-projection fallacy.

2Valentine
That seems maybe true. What's the problem you see with that?

Sorry for the late reply; I was looking through my past notifications. I would recommend that you taboo the words and replace the symbols with the substance. I would also recommend treating language as instrumental, since words don't have inherent meaning; that's how an algorithm feels from inside.

Is this a copy of the video which has been listed as removed? @Raemon

 

It is surely the case for me. I was raised a Hindu nationalist; I ended up also trusting various sides of the political spectrum from far right to far left, had a porn addiction, and later ended up falling into trusting science and technology without thinking for myself. Then I fell into epistemic helplessness; doing 16 hr/day work as a denial of the situation led to me getting sleep paralysis. Later my father also died due to his faulty beliefs in naturopathy and alternative medicine; honestly, due to his contrarian bias he didn't go to a modern-medicine doctor. I... (read more)

Most useful post. I was intuitively aware of these states; thanks for providing the underlying physiological underpinnings. I am aware enough to actually feel a sense of tension in my head in SNS-dominated states, and I noticed that I was biased during these states; my predictions seem to align well with the literature.

Why does lesswrong.com have the bookmark feature without a way to sort bookmarks, as in using tags or maybe even subfolders? Unless I am missing something, I think it might be better if I just resort to the browser bookmark feature.

6papetoast
I also mostly switched to browser bookmarks now, but I do think even this simple implementation of in-site bookmarks is overall good. Bookmarking in-site syncs over devices by default, and provides more integrated information.

I think what they mean is the intuitive notion of typicality rather than the statistical concept of average.

 

98 seems approximately 100,

but 100 doesn't seem approximately 98, due to how this heuristic works.

That is, typicality is a System 1 heuristic over a similarity cluster; it's asymmetric.

Here is the post on typicality, from the A Human's Guide to Words sequence.

 

To interpret what you meant when you said "my hair has grown above average", you have an extensional which you refer to with the word "average hair" and you find yourself t... (read more)
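To make the asymmetry concrete, here is a toy sketch of my own (not from the linked post; the prototype set and tolerance are made-up illustrations): "X seems approximately P" only fires when P is a salient prototype, such as a round number, so the judgement is directional even though numeric distance is symmetric.

```python
# Toy model of asymmetric typicality. PROTOTYPES and the 5% tolerance
# are illustrative assumptions, not claims about actual cognition.
PROTOTYPES = {10, 50, 100, 1000}  # assumed set of salient "round" anchors

def seems_approximately(x, anchor, tolerance=0.05):
    """Directional judgement: x is compared against the anchor, and the
    heuristic only applies when the anchor is a salient prototype."""
    if anchor not in PROTOTYPES:
        return False  # a non-prototypical anchor doesn't evoke the cluster
    return abs(x - anchor) <= tolerance * anchor

print(seems_approximately(98, 100))  # True: 98 is close to the prototype 100
print(seems_approximately(100, 98))  # False: 98 is not a salient anchor
```

Swapping the arguments flips the answer, which a symmetric distance measure could never do.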

1jwfiredragon
The problem (or maybe just my problem) is that when I say "average" it feels like it's activating my concept of "mathematical concept of sum/count", even though the actual thing I'm thinking of is "typical member of class extracted from my mental model". I find myself treating "average" as if it came from real data even if it didn't.

The student employing version one of the learning strategy will gain proficiency at watching information appear on a board, copying that information into a notebook, and coming up with post-hoc confirmations or justifications for particular problem-solving strategies that have already provided an answer.

 

Ouch, I wasn't prepared for direct attacks, but thank you very much for explaining this :). I now know why some of the later strategies of my experienced self, like "if I was at this step, how would I figure this out from scratch" and "what will the teacher... (read more)

Leaning into the obvious is also the whole point of every midwit meme.

Midwit

I would argue this is not a very good example: "do the obvious thing" just implies that you have a higher prior for a plan or a belief and you are choosing to believe it without looking for further evidence.

It's epistemically arrogant to assume that your prior will always be correct.

Although if you are experienced in a field, it probably took your mind a lot of epistemic work to isolate a hypothesis/idea/plan in the total space of them while doing the inefficient Bayesian... (read more)

This theory seems to explain all observations, but I am not able to figure out what it doesn't explain in day-to-day life.

Also, for the last picture, the key lies in looking straight at the grid and not at the noise; then you can see the straight lines, although it takes a bit of practice to reduce your perception to that.

Obviously this isn't true in the literal sense that if you ask them, "Are you indestructible?" they will reply "Yes, go ahead and try shooting me." 

 

Oh well, I guess meta-sarcasm about guns is a scarce finding in your culture, because I remember a non-zero number of times when I have said this, months ago. (Also, I emotionally consider myself mortal, if that means I will die just like 90% of the other humans who have ever lived, and like my father.)

Bayesian probability theory is the sole piece of math I know that is accessible at the high school level

They teach it here without the glaring implications because those don't come up in exams. Also, I was extremely confused by the counterintuitive nature of probability until I stumbled upon this place and realised my intuitions were wrong.
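As a sketch of the kind of counterintuitive result meant here, consider the classic base-rate problem (my own illustrative numbers, not from the comment): even a fairly accurate test for a rare condition yields mostly false positives, which Bayes' rule makes explicit.

```python
# Bayes' rule on a rare condition. All numbers are illustrative assumptions:
# 1% base rate, 90% sensitivity, 9% false-positive rate.
prior = 0.01           # P(condition)
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.09  # P(positive | no condition)

# Total probability of testing positive, then the posterior via Bayes' rule.
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.092
```

A positive result still leaves the probability below 10%, which is the sort of answer intuition tends to get badly wrong.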

instead semi-sensible policies would get considered somewhere in the bureaucracy of the states?

Whilst normally having radical groups is useful for shifting the Overton window or exploiting anchoring effects, in this case study of environmentalism I think it backfired, from what I can understand, given that polling data showed the public in the sample country already caring about the environment.

I think the hidden motives are basically rationalisation. I have found myself singlethinking those motives in the past; nowadays I just bring those reasons to the centre stage and try to actually find out whether they align with my commitments instead of motivated stopping. Sometimes I just corner my motivated reasoning (bottom line) so badly (since it's not that hard to just do expected consequentialist reasoning properly for day-to-day stuff) that instead of my brain trying to come up with better reasoning it just makes the idea of the impulsive action more sal... (read more)

37 spotted! Fun fact: 37 is one of the subconsciously more typical two-digit numbers our minds store for the similarity cluster of "random number". I found a good video and website on this topic.

Ever since they killed (or made it harder to host) Nitter, RSS, guest accounts, etc., Twitter has been out of my life, for the better. I find the Twitter UX sub-optimal in terms of performance, chronological posts, and subscriptions. If I do create an account, my "home" feed has too much ingroup-vs-outgroup content (even within tech-enthusiast circles, thanks to the AI safety vs e/acc debate, etc.); verified users are over-represented by design, but it buries the good posts from non-verified users. And Elon is trying way too hard to prevent AI web scrapers, ruining my workflow.

The gray fallacy strikes again; the point is to be less wrong!

Most of this just seems to be nitpicking the lack of specificity of implicit assumptions which were self-evident (to me); the criticism regarding "blue" pretty much depends on whether the HTML blue also needs an interpreter (e.g. a human brain) to extract the information.

The lack of formality seems (to me, as a new user) to be a repeated criticism of the Sequences, but I thought that was also a self-evident assumption (maybe I'm just falling prey to the expecting-short-inferential-distances bias). I think Eliezer mentioned this 16 years ago here:

"This blog is directed ... (read more)

Prior to this, I found a related article on this topic which seems to expand on the same thing.

https://journals.sagepub.com/doi/10.1177/1745691610393528

Like "IRC chat"

I don't think that aged well :)

It would also be quite terrible for safety if AGI was developed during a global war, which seems uncomfortably likely (~10% imo).

This seems likely; IIRC, during wars countries tend to spend more on research, and they could potentially just race to AGI, like what happened with the space race. That could make a hard takeoff even more likely.
