On Feb 19th I came from London to visit my parents in Lviv (first time since September). On the 24th I woke up to the Russian attack, a general mobilization order, and a prohibition on leaving the country.
Failure #1: I didn't take the possibility and impact of a full-scale Russian attack into consideration when planning the trip.
I had mentioned to a coworker that countries moving their embassies out of Kyiv was a costly signal. Nevertheless, I looked at it and fell back on a cached thought: the war is localized to parts of Donbas.
Failure #2: on the morning of the 24th it was still possible to leave the country. I didn't. Looking back, it feels like I was in the freeze mode of the fight/flight/freeze triad. I got out of the stupor and doomscrolling by the end of the day. By that time, males of my age were prohibited from leaving the country.
Bonjour !
Been reading LessWrong for years but never posted: I feel like my cognitive capacities are nowhere near the average in this forum.
I would love to exchange ideas and try to improve my rationality with less “advanced” people, wondering if anyone would have recommendations.
Been thinking that something like the changemyview subreddit might be a good start?
Thanks
Bienvenue!
"I feel like my cognitive capacities are nowhere near the average in this forum."
Why do you feel that? I like to push back against such framings of cognitive capacities, or capabilities generally, and instead frame those things as "where on the Pareto frontier for some combination of skills are my capabilities?" My view here is heavily influenced by johnswentworth's excellent post on the topic and by what I've read from the history of science, innovation, etc. (Jason Crawford's Progress Studies works are great, check them out)
Besides pushing back against your stated framing of cognitive capacities, my main point here is that multi-domain expertise may, even if one's raw "g" (or intelligence) is not as high as some others', more than make up for any potential shortcomings in said raw g, because of how the Pareto frontier works and how many possible areas of expertise exist within human capacity configuration space.
"I would love to exchange ideas and try to improve my rationality with less “advanced” people, wondering if anyone would have recommendations."
Commenting on posts, asking questions, and writing shortform pieces can be great practice for that! Same with attending meetups, virtual or in-person. Writing posts from your own experience about subjects where you try to understand things better through that writing, discussing self-improvement things, or commenting on how rationality relates to your particular expertises are good ones too.
Building a small group to discuss things you read (the Library, Concepts, and Tags pages are all great spots on this site to find things to read) and posting about your experiences is a great way to have fun and spark more discussion and deliberate improvement with yourself and others.
For more informal discussions and access to lots of rationalists or rationalist-adjacent people, there are a smattering of relevant Discord servers (some found here). I'm a member of a bunch and can probably invite you if you send me a PM. I'm also a member of the Guild of the Rose, an education startup that draws its members mostly from rationalist communities; it has been an excellent learning experience and community for me (I recommend and endorse Rose).
Hope these things help! Happy to discuss more. Writing a bunch of shortforms that mostly amounted to personal blogging helped make me more comfortable just posting or commenting as I felt like it. I always keep an eye to quality and the overall "what do we want LessWrong to be about" ideas when making a post, but also... community curation of posts exists for a reason, so don't feel overwhelming pressure when writing or expressing yourself here. Feel some pressure though :) but the good kind that comes from pushing for improvement! (I feel a bit odd about saying that for some reason, but I'll leave it up for now)
Cheers,
Willa
You are correct, Willa! I am probably the Pareto best at a couple of things. I have a pretty good life, all things considered. This post is my attempt to take it further, and your perspective is appreciated.
I tried going to EA groups in person and felt uncomfortable, if only because everyone was half my age or less. Good thing the internet fixes this problem, hence me writing this post.
Will join the Discord servers and send you a PM! Will check out the Guild of the Rose.
I've opened a blog as well and will be trying to write, which, from what I've read a gazillion times, is the best way to improve your thinking.
Merci for your message!
I would love to exchange ideas and try to improve my rationality with less “advanced” people, wondering if anyone would have recommendations.
Same challenge here. The average level of the contributions on LW seems very high to me too. I struggle to find the right fit for me, the correct difficulty setting, halfway between the average "easy" and LW "god mode", haha.
I'm completely new to this community, other than having several close friends teach me things they learned here over the years. For some reason YouTube doesn't seem to show me things I find intellectually interesting anymore, so I want to read more blogs. Books too. Reading The Dispossessed right now.
Anyways, hi! It already feels so warm here.
*Moving my introduction here because I accidentally posted it in the wrong Open Thread.
Introducing myself:
Hi, I’m Karolina. I stumbled across this community after googling “techno-feudalism”. A moderator of a Discord server I belong to says that the US will become a techno-feudalist society, and I was trying to understand what he means. I’m not super interested in techno-feudalism, though. I created an account here because I saw a lot of interesting topics under the “Concepts” tab.
I would describe myself as “down to earth”. Most of my thoughts revolve around my general welfare and the welfare of people who I’m close to. My fiance and I are planning a wedding for this summer, and we want to start having kids as soon as we’re married, so I think a lot about pregnancy and motherhood. I wonder what our kids will be like and hope that I’ll be a good mom.
So that brings me to why I’m here. I don’t think about things too deeply. I wonder if I have a tendency to accept what people say without question. People who I interact with online have told me that I don’t understand the agendas of politicians and the mainstream media. I want to understand why I think about things the way I do. I want to improve my critical thinking skills and learn how to think in a “deeper” way.
Welcome! I wonder how "techno-feudalism" led you here. When I search for it on the site, the only results that come up are your two comments (and that's a very rare thing for this site).
I suggest you start with the core reading in the library, but other things that might interest you based on what you said are Inadequate Equilibria, Simulacrum Levels, Moral Mazes. A common thread there is incentives / game theory, but you might get an intuition for those from the core reading. If not and that frame feels alien to you, you can go to these tags and look for something that explains them well.
Also, maybe you'll find the Parenting tag interesting.
Hi! I'm Kelvin, 26, and I've been following LessWrong since 2018. I came here after reading references to Eliezer's AI-Box experiments in Nick Bostrom's book.
During high school I participated in a few science olympiads, including Chemistry, Math, Biology, and Informatics. I was the reserve member of the Brazilian team for the 2012 International Chemistry Olympiad.
I studied Medicine and later Molecular Science at the University of São Paulo, and dropped out in 2015 to join a high-frequency trading fund based in Brazil. I had a successful career there and rose to become one of the senior partners.
Since 2020 I've been co-founder and CEO of TickSpread, a crypto futures exchange based on batch auctions. We are interested in mechanism design, conditional and combinatorial markets, and futarchy.
I'm also personally very interested in machine learning, neuroscience, and AI safety discussions, and I've spent quite some time studying these topics on my own, despite having no professional experience in them.
I very much want to be more active in this community, participating in discussions and meeting other people who are also interested in these topics, but I'm not totally sure where to start. I would love for someone to help me get integrated here, so if you think you can do that please let me know :)
Does anyone remember a post/article on HS students who were allowed to allocate money to charities of their choice? I can't seem to find it.
Hi. I'm new to LW but really enjoy the culture which is fostered here. I've been reading ACX, Marginal Revolution, etc. for years, so I feel like I've already been heavily influenced by the LW community. A week ago I posted "America's Invisible Graveyard: Understanding the Moral Implications of Western Sanctions", which got a fair amount of pushback over how it was written, as well as a lot of great comments.
In particular, people reacted poorly to my last sentence calling for us to feel shame over Western sanctions. Our contemporary political climate probably is overly obsessed with shaming, and I fully understand why LW has guidelines which limit specific types of shaming. Still, I think shaming has its place. I myself was shamed for posting an article which didn't meet the guidelines of LW. This is not to say those guidelines are wrong, but I would like to hear better reasons for why particular types of shaming are out of bounds while others are welcomed. This all made me think of an old Tyler Cowen post about shame on Marginal Revolution, "Who should be shamed, and who not?".
Anyways just wanted to say hello, and that I look forward to learning from you all.
-Ezra
Having political discussions in a way that actually allows people to focus on the issues is hard. As a result, we have stronger standards on LessWrong for how to have political discussions. Doing anything that makes it even harder, like calling for shame, is therefore bad.
Instead of focusing on the implications of your empirical claims about sanctions, namely that they kill people and don't work for changing policy, you should have focused more on backing up those empirical claims. At the shallow level you discussed them, I doubt anyone who believes that sanctions are a useful tool would be convinced. You likely should have provided a gears-level model of why sanctions kill people and fail to change policy, and, if you want to convince people that there's an academic consensus for that thesis, cited a lot more sources.
Someone on reddit made the following argument to me about why physicalism (= the claim that the laws of physics are causally closed) is not true (paraphrased). (I'm posting this as a fun logic puzzle; I know the solution.)
Take a person in a room. Suppose the room is sufficiently isolated to be modeled as independent from the rest of the universe. The person is about to write either "0" or "1" on a piece of paper. But first, she runs a computer simulation that perfectly simulates the entire room down to every atom. (Has to be possible if physicalism is true! All is atoms!) This simulation will compute absolutely everything about the future of the room, including what the person writes down. It then outputs just this (i.e., either "1" or "0") to the person. The person sees the answer and writes down the opposite on the paper. This shows that the program doesn't work, which means such a program is impossible, which means physicalism is false.
I imagine a lot of people reading this will spot the flaw immediately, but if you don't, care to figure it out?
A machine that tries to answer a question by first simulating (completely faithfully) how it would answer that question will never terminate.
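A minimal sketch of the regress (hypothetical Python, mine rather than the commenter's): if the simulation is perfectly faithful, it must include the running simulator itself, so computing its output requires first computing its output.

```python
def person(prediction: str) -> str:
    """The person reads the simulator's prediction and writes the opposite."""
    return "0" if prediction == "1" else "1"

def simulate_room() -> str:
    """A 'perfect' simulation of the room contains the simulator itself,
    so it must compute its own output before it can produce any output."""
    prediction = simulate_room()  # infinite regress: this call never returns
    return person(prediction)
```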
For complicated reasons, I need an example of two sets that optimize the following criteria:
My current favorite is Cartesian to polar coordinates, (x, y) and (r, θ), which scores well on #1-3, but the spaces feel more similar than I'd like. Any better ideas?
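For reference, a minimal sketch of that mapping (the standard conversion formulas, added here for concreteness; it's a bijection away from the origin once θ is restricted to one branch):

```python
import math

def cartesian_to_polar(x: float, y: float) -> tuple[float, float]:
    """(x, y) -> (r, theta), with theta in [-pi, pi]."""
    return (math.hypot(x, y), math.atan2(y, x))

def polar_to_cartesian(r: float, theta: float) -> tuple[float, float]:
    """(r, theta) -> (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))
```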
These might be too obvious or not work.
0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, ...
and ℕ itself.
Then, switch it up by pointing out this is the same as:
a, b, c, d, ..., z, aa, ab, ac, ...
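A minimal sketch of the numeric version (hypothetical Python, just to make the pairing explicit): shift everything at or above the missing element.

```python
def nat_to_skip7(n: int) -> int:
    """Bijection from {0, 1, 2, ...} onto the same set with 7 removed."""
    return n if n < 7 else n + 1

def skip7_to_nat(m: int) -> int:
    """Inverse of the mapping above."""
    return m if m < 7 else m - 1
```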
Might not be two sets but:
The relationship between 'long division' and 'synthetic division'.
Moves in two different games. This could be hard to set up, but have one of the games be a Rubik's cube. (Might be important to point out that this maps a history of moves, not the structure of the two games.)
WASD doesn't seem to cover all the moves on a cube though.
An intuitive 'best case scenario' for Snake makes it the same game as Tron.
$ to integers. (If you can only split down to a penny, it's an easy mapping. $ → ℕ: drop the $ sign and multiply by 100. ℕ → $: divide by 100 and add the $ sign back.)
Frames from one movie to frames in another movie. (More interesting for how this doesn't work: two movies might have different frame rates.) But if they have the exact same number of frames, you can make such a mapping. Might be worth emphasizing that it seems like no one would do this (if frame rates vary over the course of a movie). But if you have two movies that play at the same rate and have the same length... you can swap the audio tracks.
Might be better to just take two songs and swap the lyrics, but keep the music the same (where the meters match).
The comparison between a cylinder and a tower of coins (an analogy used in calculus). There are more surprising mappings that are related.
Intuition for the volume of a sphere, or a pyramid...
'Double shapes/half shapes' (like a square in a square)
Could you unpack what you mean by “intuitively different” in a bit more depth? Do I understand correctly that the way the third and fourth criteria are not in direct tension is that you're focusing on the difference in familiarity-feel between the mapping and the sets themselves? (I think “familiarity” is probably not the right word, but I'm having trouble finding a more accurate one.)
Could you unpack what you mean by “intuitively different” in a bit more depth?
I mean it pretty literally, i.e., the first reaction when someone looks at them (especially if that someone isn't a mathematician) should be "okay, these are definitely totally different things".
Maybe #3 and #4 are in conflict; mostly #4 is just more important. Like, any continuous function is probably good enough, and there have to be some more "different"-looking continuous deformations than the ones I can think of. (It doesn't have to be topological spaces, but that would be one approach.)
What about something like the faces of a polyhedron to the vertices of its dual? Additionally, would you count those as highly different, perhaps because a face and a corner feel and look very different when observed physically? Or would they count as similar, perhaps because they're both geometric ideas used to describe parts of polyhedra and are concepts that are frequently used together rather than being totally unrelated?
That's really good! I think they count as quite different.
The one thing I don't like about it is that the dual of the entire geometric object is another, similar geometric object (the dual of a cube is an octahedron), so on one level it only shuffles around vertices and faces. But the vertices themselves become radically transformed, which is great. It's definitely a better solution than polar/Cartesian coordinates.
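For concreteness, a minimal sketch of that face-to-vertex correspondence for a cube centered at the origin (the labels and coordinates are mine, purely illustrative): each face maps to the dual octahedron's vertex sitting at that face's center.

```python
# For the cube [-1, 1]^3, each face's center is a vertex of the dual octahedron.
cube_face_to_dual_vertex = {
    "right":  (1, 0, 0),
    "left":   (-1, 0, 0),
    "back":   (0, 1, 0),
    "front":  (0, -1, 0),
    "top":    (0, 0, 1),
    "bottom": (0, 0, -1),
}
```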
What are some good introductions / explanations / discussions on ideas around "agents capable of self-modification may trade with each other by modifying their own utility functions"?
I usually refer to Harsanyi's theorem: https://forum.effectivealtruism.org/posts/v89xwH3ouymNmc8hi/harsanyi-s-simple-proof-of-utilitarianism
Wei Dai also talks about it here, and also links to some posts from 2009 about it: https://www.alignmentforum.org/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low
Huh, it just feels to me like the central reason why we should expect merging utility functions to be an optimal choice. The theorem basically says "the set of optimal bargaining solutions in value space looks like some weighted average of utility functions".
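As a sketch of that statement (notation mine, not taken from the linked proof), the merged utility is a nonnegative-weighted sum of the individual utilities:

```latex
U_{\text{merged}}(x) = \sum_{i=1}^{n} w_i \, u_i(x), \qquad w_i \ge 0
```

where the bargaining between the agents determines the weights w_i.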
Ahh, I read it more closely now and you're right, it is relevant for the question I asked. I was just thinking of a more specific question, which I didn't clearly verbalize. :)
I was thinking more specifically of "choosing to merge utility functions instead of fighting" (I vaguely recall Scott Alexander having a short story about this, but I couldn't find it) and was hoping that maybe there'd be some discussion of when agents would choose such a merge instead of being in conflict. The Harsanyi result doesn't seem to say anything about that, but it is certainly relevant for the case where they have already decided to trade rather than fight.
You may be thinking of https://slatestarcodex.com/2017/03/21/repost-the-demiurges-older-brother/?
Hello! I've been reading some posts here for some time and I decided to write my first post now.
However, I'm unfamiliar with the posting interface and I mistakenly hit publish prematurely. Then I tried to delete the post and create it again, but because of the moderation needed for the first post, I don't know if the post is in limbo or actually waiting for moderation.
I don't see anything on my profile page, but the direct link to the new post works (privately).
Should I ask for some admin's help?
Hello, does anyone happen to know any good resources for improving/practicing public speaking? I'm looking for something that will help me enunciate better, mumble less, and vary my tone better. A lot of the stuff I see online appears to be very superficial.
https://www.lesswrong.com/posts/rEpzKiiAhZiFaRwoH/what-are-some-of-the-advantages-of-robotics looks borderline. It's easy to downvote it, but I'm unsure about whether it should be deleted as spam.
It looks like an article that could have been written by GPT-3 and it does have a link for SEO promotion.
I missed the SEO link. Seeing that increases my confidence that it is spam. I have deleted the post and purged the account.
LessWrong Cult
When typing 'lesswrong' into Google, the first suggestion is 'lesswrong cult'.
Is that something we should attempt to change? I believe that you can ask Google to remove those predictions.
I posted this as a shortform, but I figured I might as well add it here too.
I repeat my warning: if everyone's first reaction is to type "lesswrong cult" into Google, maybe that is one of the factors that influence the algorithm. ;)
So is typing "lesswrong cult" on publicly-accessible websites. Lesswrong cult lesswrong cult lesswrong cult lesswrong cult lesswrong cult.
Keep doing it, and the top result for "lesswrong cult" will be the March 2022 Welcome & Open Thread.
From my perspective, that is an acceptable outcome.
Reason #79 why language models will be hard to train: one of the webpages in your dataset is just a couple of forum comments and then 60000 repetitions of "lesswrong cult."
Google Maps won't correct the hours for my local Safeway supermarket, which have been wrong for years. Asking Google Search (which is even more automated and relies less on manual user input than Google Maps) would probably do nothing. In the rare chance it does accomplish something, that "something" would probably just be to trigger the Streisand effect.
Shouting "how dare you call us a cult" makes you look like a cult. The correct response is to laugh it off.
I agree that getting into public debates about LessWrong's cult-status would be a bad idea and likely trigger the Streisand effect.
But reporting an automated search prediction doesn't seem like the sort of thing to start an argument, and it isn't publicly visible anyway (to my knowledge).
While the impact of an effort to remove the prediction is likely very small or nonexistent, the effort involved also seems low, and the impact is plausibly non-zero on the margin. While priming hasn't really replicated, the association (between LessWrong and cult) being one of the first things visible to anyone searching for the forum doesn't strike me as a good look.
This is a crosspost of my comment on the post Brave Little Humans. The open thread seems better for visibility and general discussion related to the metacrisis.
AI doom seems to fit the category of "races to the bottom with unintended consequences" (and it isn't the only existential risk in that category). As such, its desperate urgency is downstream from the metacrisis (or the meaning crisis, as John Vervaeke calls it). Resolving or mitigating the metacrisis would give much-needed breathing room for studying AI alignment, and exacerbating the metacrisis would seem to increase AI risk further.
I personally happened to fall into studying the metacrisis rather than AI, and my estimate is that the metacrisis is more solvable and has aspects that seem relevant to understanding cognitive agency and intelligence in general. The linkage is such that I believe both problems merit attention and may benefit from cross-pollination.
I'm subscribed to So8res's posts, and over the last ~2 days I have been getting messages that "So8res has created a new post: X", where X was a 7-year-old post.
Yeah, we've been importing a bunch of Nate's old posts, and have been backdating them. It's a bit unclear how our notification system is supposed to handle that case. Probably not spam people with tons of notifications in a confusing way.
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.