Hi everyone - stumbled on this site last week. I had asked Gemini where I could follow AI developments and was given something I find much more valuable - a community interested in finding truth through rationality and humility. I think online forums are well-suited for these kinds of challenging discussions - no faces to judge, no interrupting one another, no pressure to respond immediately - just walls of text to ponder and write silently and patiently.
LessWrong now has sidenotes. These use the existing footnotes feature; posts that already had footnotes will now also display these footnotes in the right margin (if your screen is wide enough/zoomed out enough). Post authors can disable this for individual posts; we're defaulting it to on because when looking at older posts, most of the time it seems like an improvement.
Relatedly, we now also display inline reactions as icons in the right margin (rather than underlines within the main post text). If reaction icons or sidenotes would cover each other up, they get pushed down the page.
Feedback welcome!
Howdy Y'all. I'm Kinta Naomi. I just discovered LessWrong after it was briefly mentioned in a video about Roko's Basilisk (I've seen a lot of those).
I read through the new user's guide and really like the method of conversation it lays out, as I've been in many YouTube comment threads where someone disproved me and I admitted I was wrong. I didn't know there was a place on the Internet for people like that, beyond getting lucky in the comments. I have a need to be right. This is not a need to prove I'm right, but a need to know that what I think is correct actually is. The most frustrating thing is when others won't explain their side of an argument and leave me hanging, wondering if some knowledge I'm being denied is what I need to be more correct. Or, in the name of the community, less wrong.
I do have some mental issues, though the only significant ones for this are a reading disability and not having access to all the information in my head at any one time. If from message to message I seem like a different person, that's normal for me.
My main reason for being here, as for many others, is AI. Specifically, eventually, my C-PON (Consciousness. Python Originated Network) and UPAI (Unliving Prophet AI). Having an A...
@Elizabeth and I are thinking of having an informal dialogue where she asks a panel of us about our experiences doing things outside of or instead of college, and how that went for us. We're pinging a few people we know, but I want to ask LessWrong: did you leave college or skip it entirely, and would you be open to being asked some questions about it? React with a thumbs-up or PM me/her to let us know, and we might ask you to join us :-)
(Inspired by this thread.)
I want to be able to quickly see whether I have bookmarked a post to avoid clicking into it (hence I suggested it to be a badge, rather than a button like in the Bookmarks tab). Especially with the new recommendation system that resurfaces old posts, I sometimes accidentally click on posts that I bookmarked months before.
I found that it is possible to get noticeably better search results than Google by using Kagi as the default and falling back to Exa (prev. Metaphor).
Kagi is $10/mo, though, with a 100-search trial. Kagi's default results are slightly better than Google's, and it also offers customization of results which I haven't seen in other search engines.
Exa is free; it uses embeddings, and empirically it understands semantics much better than other search engines and provides very unique search results.
If you are interested in experimenting you can find more search engines in https://www.searchenginemap.com/ and https://github.com/The-Osint-Toolbox/Search-Engines
Something I wanted to write a post about, but I keep procrastinating, and I don't actually have much to say, so let's put it here.
People occasionally mention how it is not reasonable for rationalists to ignore politics. And they have a good point; even if you are not interested in politics, politics is still sometimes interested in you. On the other hand... well, the obvious things, already mentioned in the Sequences.
As I see it, the reasonable way to do politics is to focus on the local level. Don't discuss national elections and culture wars; instead get some understanding about how your city works, meet the people who do reasonable things, find out how you could help them. That will help you get familiar with the territory, and the competition is smaller; you have greater chance to achieve something and remain sane.
Unfortunately, Less Wrong is an internet community; the problem is that if we tried to focus on local politics, many of us couldn't debate it here, at least not the specific details (but those are exactly the ones that matter and keep you sane).
I am not saying that no one should ever try national politics, just that the reasonable approach is to start small, and perha...
I've been reading LikeWar: The Weaponization of Social Media, and at the end the authors bring up the problem of AI. It's interesting in that they seem to be pointing to a clear AI risk that I never hear (or have not recognized) mentioned in this group. The basic thrust is that deep fake capabilities can allow an advanced AI to pretty much manufacture realities and control what people think is true, so it can control both political outcomes and even incentives toward war and other hostilities, both within a society and between countries/societies/cultures/races. (Note, that is a very poor summary; it follows a long account of how social media and the internet failed to realize the original vision that they would lead to a better world where good ideas/truth drive out bad ideas and falsehoods, and have in fact enabled the bad and promoted lies and falsehoods. The AIs just come in at the end, and may or may not be working in the interests of some group, e.g., Russia, China, the USA, ISIS...)
But this area (the book itself is a documentation of very real, observable risks and actual events) holds very real, (largely) observable outcomes that lead to significant harm to people. As such, I would think it might be a ripe area for those who feel that the general public is not grasping the risk (which to me does often come across in rather sci-fi, Terminator/Matrix-type claims that most people will just see as pure fiction and pay little attention to).
The Review Bot would be much less annoying if it weren't creating a continual stream of effective false positives on the “new comments on post X” indicators, which are currently the main way I keep up with new comments. I briefly looked for a way of suppressing these via its profile page and via the Site Settings screen but didn't see anything.
Hi! Just introducing myself to this group. I'm a cybersecurity professional, enjoyed various deep learning adventures over the last 6 years and inevitably managing AI related risks in my information security work. Went through BlueDot's AI safety fundamentals last spring with lots of curiosity and (re?)discovered LessWrong. Looking forward to visiting more often, and engaging with the intelligence of this community to sharpen how I think.
PSA: Whether a post is on the frontpage category has very little to do with whether moderators think it's good. "Frontpage + Downvote" is a move I execute relatively frequently.
The criteria are basically:
It seems confusing/unexpected that a user has to click on "Personal Blog" to see organisational announcements (which are not "personal"). Also, why is it important or useful to keep timeful posts out of the front page by default?
If it's because they'll become less relevant/interesting over time, and you want to reduce the chances of them being shown to users in the future, it seems like that could be accomplished with another mechanism.
I guess another possibility is that timeful content is more likely to be politically/socially sensitive, and you want to avoid getting involved in fighting over, e.g., which orgs get to post announcements to the front page. This seems like a good reason, so maybe I've answered my own question.
I want to get more experience with adversarial truth-seeking processes, and maybe build more features for them on LessWrong. To get started, I'd like to have a little debate-club-style debate, where we pick a question and each take opposing sides to present evidence and arguments for. Is anyone up for having such a debate with me in a LW dialogue for a few hours? (No particular intention to publish it.)
I have a suggested debate topic in mind, but I'm open to debating any well-operationalized claim (e.g. the sort of thing you could have a Manifold market on...
Bug report: When opening unread posts in a background tab, the rendering is broken in Firefox:
It should look like this:
The rendering in comments is also affected.
My current fix is to manually reload every broken page, though this is obviously not optimal.
Hello everyone,
I'm a long-time on-off lurker here. I made my way through the Sequences quite a while ago, with mixed success in implementing some of them. Many of the ideas are intriguing, and I would love to have enough spare cycles to play with them. Unfortunately, often enough, I find myself without the capacity to do this properly due to life getting in the way. With (not only) that in mind, I'm going to take a sabbatical this summer for at least three months and try to do an update and generally tend to stuff I've been putting off...
Hello! A friend and I are working on an idea for the AI Impacts Essay Competition. We're both relatively new to AI and pivoting careers in that direction, so I wanted to float our idea here first before diving too deep. Our main idea is to propose a new method for training rational language models inspired by human collaborative rationality methods. We're basically agreeing with Conjecture's and Elicit's foundational ideas and proposing a specific method for building CoEms for philosophical and forecasting applications. The method is centered around a disc...
Hello! My name is Alfred. I recently took part in AI Safety Camp 2024 and have been thinking about the Agent-like structure problem. Hopefully I will have some posts to share on the subject soon.
Today I realized I am free to make the letters in an einsum string meaningful (b for batch, x for horizontal index, y for vertical index etc) instead of just choosing ijkl.
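A minimal sketch of what this looks like in practice (the array shapes and names here are just illustrative, not from the original comment):

```python
import numpy as np

# Hypothetical batch of 2D images: b = batch, y = vertical index, x = horizontal index.
images = np.random.rand(4, 32, 32)   # shape (b, y, x)
kernel = np.random.rand(32, 32)      # shape (y, x)

# "byx,yx->b" reads as: for each batch element, sum the
# elementwise product of image and kernel over the y and x axes.
scores = np.einsum("byx,yx->b", images, kernel)

# Exactly equivalent to the opaque "ijk,jk->i" spelling:
scores_ijk = np.einsum("ijk,jk->i", images, kernel)
assert np.allclose(scores, scores_ijk)
```

The subscript letters are arbitrary labels to `einsum`, so the meaningful version costs nothing and documents the axis semantics for free.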
I'm interested in arguments surrounding energy-efficiency (and maximum intensity, if they're not the same thing) of pain and pleasure. I'm looking for any considerations or links regarding (1) the suitability of "H=D" (equal efficiency and possibly intensity) as a prior; (2) whether, given this prior, we have good a posteriori reasons to expect a skew in either the positive or negative direction; and (3) the conceivability of modifying human minds' faculties to experience "super-bliss" commensurate with the badness of the worst-possible outcome, such that ...
Evolution is threatening to completely recover from a worst-case inner alignment failure. We are immensely powerful mesa-optimizers. We are currently wildly misaligned from optimizing for our personal reproductive fitness. Yet, this state of affairs feels fragile! The prototypical LessWrong AI apocalypse involves robots getting into space and spreading at the speed of light, extinguishing all sapient value, which from the point of view of evolution is basically a win condition.
In this sense, "reproductive fitness" is a stable optimization target. If there are more stable optimization targets (big if), finding one that we like even a little bit better than "reproductive fitness" could be a way to do alignment.
Along with p(doom), perhaps we should talk about p(takeover) - where this is the probability that creation of AI leads to the end of human control over human affairs. I am not sure about doom, but I strongly expect superhuman AI to have the final say in everything.
(I am uncertain of the prospects for any human to keep up via "cyborgism", a path which could escape the dichotomy of humans in control vs humans not in control.)
I'm sure everyone here has probably already seen it, but I've just been watching the interview with Leopold Aschenbrenner on Dwarkesh Patel's show. I found out about it from a very depressing thread on Twitter. This is starting to give atomic bomb / Cold War vibes. What do people think about that?
Here's the video for those interested:
Bug report: moderator-promoted posts (with stars) show up on my front page even when I've selected "hide from frontpage" on them.
Can I somehow get the old sorting algorithm for posts back? My lesswrong homepage is flooded with very old posts.
Why does lesswrong.com have the bookmark feature without a way to sort bookmarks, such as with tags or maybe even subfolders? Unless I am missing something, I think it might be better if I just resort to my browser's bookmark feature.
Hello! I'm a health and longevity researcher. I presented on Optimal Diet and Exercise at LessOnline, and it was great meeting many of you there. I just posted about the health effects of alcohol.
I'm currently testing a fitness routine that, if followed, can reduce the risk of premature death by 90%. The routine involves an hour of exercise, plus walking, every week.
My blog is unaging.com. Please look and subscribe if you're interested in reading more or joining in fitness challenges!
At my local Barnes & Noble, I cannot access slatestarcodex.com or putanumonit.com. I have never had any issues accessing other websites there (not that I've tried to access genuinely sketchy websites). The wifi there is titled Bartleby, likely related to Bartleby.com, whereas many other Barnes & Noble stores have wifi titled something like "BNWifi". I have not tried to access these websites at other locations yet.
Feature request: better formatting for emojis in text copied from elsewhere. In particular, I like to encourage people to copy text from interesting twitter/x threads they see into their posts instead of just linking. Better for the convenience of the readers and for more trustworthy archival access.
The trouble with this is, text copied from twitter/x that has emojis in it tends to look terrible on LessWrong. The emojis (sometimes) get blown up to huge full-width size, instead of staying a square of text-height size as intended.
Example (may or may no...
Seems like every new post - no matter the karma - is getting the "listen to this post" button now. I love it.
I'm at this point pretty confident that under the Copenhagen interpretation, whenever an intergalactic photon hits earth, the wave-function collapse takes place on a semi-spherical wave-front many millions of lightyears in diameter. I'm still trying to wrap my head around what the interpretation of this event is in many-worlds. I know that it causes earth to pick which world it is in out of the possible worlds that split off when the photon was created, but I'm not sure if there is any event on the whole spherical wavefront.
It's not a pure hypothetical- we...
Is there a way to get an article's raw or original content?
My goal is mostly to put articles in some area (ex: singular learning theory) into a tool like Google's NotebookLM to then ask quick questions about.
Google's own conversion of HTML to text works fine for most content, except math. A fraction like p(w|D_n) = p(D_n|w)φ(w) / p(D_n) may turn into p ( w | D n ) = p ( D n | w ) φ ( w ) p ( D n ), losing the division and becoming incorrect.
I can always just grab the article's HTML content (or use the GraphQL api for that), but HTMLified MathJax notation is very, uh, verbose. I could probably do some massaging o...
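For the GraphQL route, something like the sketch below might work. The field names (`post`, `selector`, `htmlBody`) are from memory of the ForumMagnum schema and may have drifted, so treat this as an assumption to verify against the live endpoint, not a documented API:

```python
import json
import urllib.request

GRAPHQL_URL = "https://www.lesswrong.com/graphql"

def fetch_post_html(post_id: str) -> str:
    """Fetch a post's raw HTML body via the (assumed) GraphQL schema."""
    query = """
    query ($id: String) {
      post(input: {selector: {_id: $id}}) {
        result { title htmlBody }
      }
    }
    """
    payload = json.dumps({"query": query, "variables": {"id": post_id}})
    req = urllib.request.Request(
        GRAPHQL_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["data"]["post"]["result"]["htmlBody"]
```

From there the MathJax markup would still need massaging before it's fit for a text-only tool.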
I want to run code generated by an LLM totally unsupervised.
Just to get in the habit, I should put it in an isolated container in case it does something weird.
Claude, please write a Python script that executes a string as Python code in an isolated Docker container.
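For what it's worth, a minimal sketch of that sandbox (assuming Docker is installed and the `python:3.11-slim` image is available; the resource limits are illustrative, not a complete security boundary):

```python
import subprocess

def run_untrusted(code: str, timeout: int = 30) -> str:
    """Run a string of Python inside a throwaway, network-less container."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",    # no network access
            "--memory", "256m",     # cap memory usage
            "--pids-limit", "64",   # cap process count
            "python:3.11-slim",
            "python", "-c", code,
        ],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout + result.stderr

# run_untrusted("print(2 + 2)")  # uncomment to try, if Docker is available
```

Container escapes exist, so for genuinely untrusted code a VM or gVisor-style runtime would be the more paranoid choice.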
I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don't want to publish this as a regular post, because it may greatly help in the development of GAI (40% that it helps and 15% that it greatly helps), and I would like to help only those who are trying to create an aligned GAI. What should I do?
I think I saw a LW post that was discussing alternatives to the vNM independence axiom. I also think (low confidence) it was by Rob Bensinger and in response to Scott's geometric rationality (e.g. this post). For the life of me, I can't find it. Unless my memory is mistaken, does anybody know what I'm talking about?
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.