If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Hi everyone - stumbled on this site last week. I had asked Gemini about where I could follow AI developments and was given something I find much more valuable - a community interested in finding truth through rationality and humility. I think online forums are well-suited for these kinds of challenging discussions - no faces to judge, no interruption over one another, no pressure to respond immediately - just walls of text to ponder and write silently and patiently.

LessWrong now has sidenotes. These use the existing footnotes feature; posts that already had footnotes will now also display these footnotes in the right margin (if your screen is wide enough/zoomed out enough). Post authors can disable this for individual posts; we're defaulting it to on because when looking at older posts, most of the time it seems like an improvement.

Relatedly, we now also display inline reactions as icons in the right margin (rather than underlines within the main post text). If reaction icons or sidenotes would cover each other up, they get pushed down the page.

Feedback welcome!

7Mateusz Bagiński
My feedback is that I absolutely love it. My favorite feature released since reactions or audio for all posts (whichever was later).
2Screwtape
Feedback: a month or so out, I love the sidenotes. They're right where I want footnotes to be, visible without breaking the flow.
[-]lsusr2322

LessOnline was amazing. Thank you everyone who helped make it happen.

Howdy Y'all. I'm Kinta Naomi. I just discovered LessWrong when it was briefly mentioned in a video about Roko's Basilisk (I've seen a lot of those).

I read through the new user's guide, and really like the method of conversation laid out, as I've been in many YouTube comment threads where someone disproved me and I admitted I was wrong. I didn't know there was a place on the Internet for people like that, short of getting lucky in comments. I have a need to be right. This is not a need to prove I'm right, but a need to know that what I think is correct actually is. The most frustrating thing is when others won't explain their side of an argument, and leave me hanging wondering if some knowledge I'm being denied is what I need to be more correct. Or, in the spirit of the community, less wrong.

I do have some mental issues, though the only significant ones for this are a reading disability and not having access to all the information in my head at any one time. If from message to message I seem like a different person, that's normal for me.

My main reason for being here, as with many others, is AI. Specifically, eventually, my C-PON (Consciousness. Python Originated Network) and UPAI (Unliving Prophet AI). Having an A... (read more)

@Elizabeth and I are thinking of having an informal dialogue where she asks a panel of us about our experiences doing things outside of or instead of college, and how that went for us. We're pinging a few people we know, but I want to ask LessWrong: did you leave college or skip it entirely, and would you be open to being asked some questions about it? React with a thumbs-up or PM me/her to let us know, and we might ask you to join us :-)

(Inspired from this thread.)

2Yoav Ravid
I didn't go to college/university, but I'm also from Israel, not the US, so it's a little different here. If it still feels relevant then I'd be willing to join.
2Zack_M_Davis
(I'm interested (context), but I'll be mostly offline the 15th through 18th.)
2habryka
(I de-facto skipped college. I do have a degree, but I attended basically no classes)

I would love to get a little bookmark symbol on the frontpage

3Ruby
The bookmark option in the triple-dot menu isn't quite sufficient?

I want to be able to quickly see whether I have bookmarked a post to avoid clicking into it (hence I suggested it to be a badge, rather than a button like in the Bookmarks tab). Especially with the new recommendation system that resurfaces old posts, I sometimes accidentally click on posts that I bookmarked months before.

I found that it is possible to get noticeably better search results than Google by using Kagi as the default and falling back to Exa (previously Metaphor).

Kagi is $10/mo though with a 100 searches trial. Kagi's default results are slightly better than Google, and it also offers customization of results which I haven't seen in other search engines. 

Exa is free; it uses embeddings, and empirically it understands semantics far better than other search engines and provides very distinctive search results.

If you are interested in experimenting you can find more search engines in https://www.searchenginemap.com/ and https://github.com/The-Osint-Toolbox/Search-Engines

5gilch
I notice that https://metaphor.systems (mentioned here earlier) now redirects to Exa. Have you compared it to Phind (or Bing/Windows Copilot)?
1papetoast
Metaphor rebranded themselves. No and no, thanks for sharing though, will try it out!
[-]Viliam149

Something I wanted to write a post about, but I keep procrastinating, and I don't actually have much to say, so let's put it here.

People occasionally mention how it is not reasonable for rationalists to ignore politics. And they have a good point; even if you are not interested in politics, politics is still sometimes interested in you. On the other hand... well, the obvious things, already mentioned in the Sequences.

As I see it, the reasonable way to do politics is to focus on the local level. Don't discuss national elections and culture wars; instead get some understanding about how your city works, meet the people who do reasonable things, find out how you could help them. That will help you get familiar with the territory, and the competition is smaller; you have greater chance to achieve something and remain sane.

Unfortunately, Less Wrong is an internet community, so if we tried to focus on local politics, many of us couldn't debate it here, at least not the specific details (but those are exactly the ones that matter and keep you sane).

I am not saying that no one should ever try national politics, just that the reasonable approach is to start small, and perha... (read more)

3winstonBosan
On Lesswrong being a dispersed internet community: If the ACX survey is informative, discussing local policy could work surprisingly well here! I'd say a significant chunk of people are in the Bay Area at large and the Boston/NYC/DC area - that should be enough of a cluster to support discussions of local policy. And policies in California/DC have an outsized effect on things we care about as well.
4Viliam
I agree that the places you mention have a sufficiently large local community. I am not aware of how much they have achieved politically. Unfortunately, I live on the opposite side of the planet, with less than 10 rationalists in my entire country.
1Sherrinford
I wonder whether more people from those areas take part in the survey. They can assume that there are many people from the same area, often of the same age and with the same jobs, which implies they can be sure their entries will remain anonymous.
[-]jmh130

I've been reading LikeWar: The Weaponization of Social Media, and at the end the authors bring out the problem of AI. It's interesting in that they seem to be pointing to a clear AI risk that I never hear (or have not recognized) mentioned in this group. The basic thrust is that deep-fake capabilities can allow an advanced AI to pretty much manufacture realities and control what people think is true or not, and so can control both political outcomes and even incentives toward war and other hostilities, both within a society and between countries/societies/cultures/races. (Note: that is a very poor summary, and it follows a lot of documentation of the whole lead-up: social media and the internet failed to realize the original vision that they would lead to a better world where good ideas and truth drive out lies and falsehoods, and have in fact enabled the bad and promoted lies and falsehoods. The AIs just come in at the end and may or may not be working in the interests of some group, e.g., Russia, China, the USA, ISIS...)

But this area (the book itself documents very real, observable risks and actual events) holds very real, (largely) observable outcomes that lead to significant harms to people. As such, I would think it might be a ripe area for those who feel the general public is not grasping the risk (claims which to me do often seem rather sci-fi, Terminator/Matrix-type stuff that most people will just see as pure fiction and pay little attention to).

The Review Bot would be much less annoying if it weren't creating a continual stream of effective false positives on the “new comments on post X” indicators, which are currently the main way I keep up with new comments. I briefly looked for a way of suppressing these via its profile page and via the Site Settings screen but didn't see anything.

2Neel Nanda
Strong +1, also notifications when it comments on my posts
2kave
Yeah, I think if we don’t do a UI rework soon to get rid of it (while still giving some prominence to the markets where they exist), we should at least do some special casing of its commenting behaviour.

Hi! Just introducing myself to this group. I'm a cybersecurity professional; I've enjoyed various deep learning adventures over the last 6 years and inevitably manage AI-related risks in my information security work. I went through BlueDot's AI safety fundamentals last spring with lots of curiosity and (re?)discovered LessWrong. Looking forward to visiting more often, and engaging with the intelligence of this community to sharpen how I think.

4habryka
Welcome! Glad to have you around, and hope you have a good time. Also always feel free to complain about anything that is making you sad about the site either in threads like this, or privately in our Intercom chat (the bubble in the bottom right corner).

Hi, excited to learn more about Mech Int!

[-]kaveModerator Comment70

PSA: Whether a post is in the frontpage category has very little to do with whether moderators think it's good. "Frontpage + Downvote" is a move I execute relatively frequently.

The criteria are basically:

  • Is it timeless? News, organisational announcements and so on are rarely timeless (sometimes timeful things can be talked about in timeless ways, like writing about a theory of how groups work with references to an ongoing election).
  • Is it relevant to LessWrong? The LessWrong topics are basically how to think better, how to make the world better and building
... (read more)

It seems confusing/unexpected that a user has to click on "Personal Blog" to see organisational announcements (which are not "personal"). Also, why is it important or useful to keep timeful posts out of the front page by default?

If it's because they'll become less relevant/interesting over time, and you want to reduces the chances of them being shown to users in the future, it seems like that could be accomplished with another mechanism.

I guess another possibility is that timeful content is more likely to be politically/socially sensitive, and you want to avoid getting involved in fighting over, e.g., which orgs get to post announcements to the front page. This seems like a good reason, so maybe I've answered my own question.

5kave
To the extent you're saying that the "Personal" name for the category is confusing, I agree. I'm not sure what a better name is, but I'd like to use one.

Your last paragraph is in the right ballpark, but by my lights the central concern isn't so much about LessWrong mods getting involved in fights over what goes on the frontpage. It's more about keeping the frontpage free of certain kinds of context requirements and social forces. LessWrong is meant for thinking and communicating about rationality, AI x-risk and related ideas. It shouldn't require familiarity with the social scenes around those topics.

Organisations aren't exactly "a social scene". And they are relevant to modeling the space's development. But I think there's two reasons to keep information about those organisations off the frontpage.

  • While relevant to the development of ideas, that information is not the same as the development of those ideas. We can focus on orgs' contributions to the ideas without focusing on organisational changes.
  • It helps limit certain social forces. My model for why LessWrong keeps politics off the frontpage is to minimize the risk of coöption by mainstream political forces and fights. Similarly, I think keeping org updates off the frontpage helps prevent LessWrong from overly identifying with particular movements or orgs.

I'm afraid this would muck up our truth-seeking. Powerful, high-status organizations can easily warp discourse. "Everyone knows that they're basically right about stuff". I think this already happens to some degree – comments from staff at MIRI, ARC, Redwood, Lightcone seem to me to gain momentum solely from who wrote them. Though of course it's hard to be sure, as the comments are often also pretty good on their merits.

As AI news heats up, I do think our categories are straining a bit. There's a lot of relevant but news-y content. I still feel good about keeping things like Zvi's AI newsletters off the frontpage, but I worry that putting them
2Screwtape
Have we considered "Discussion" and "Main"?  (Context for anyone more recent than ~2016, this is a joke, those were the labels that old LessWrong used.)
2Raemon
I do periodically think that might be better. I think changing "personal blog" to "discussion" might be fine.
4Screwtape
Babbling ideas:
  • Frontpage and backpage
  • On-topic and anything-goes
  • Priority and standard
  • Major league and minor league
  • Rationality (use the tag) and all other tags
  • More magic and magic
4Ben Pace
LessWrong Frontpage vs LessWrong
2Screwtape
LessWrong vs Overcoming Bias
6Screwtape
Less vs Wrong

I want to get more experience with adversarial truth-seeking processes, and maybe build more features for them on LessWrong. To get started, I'd like to have a little debate-club-style debate, where we pick a question and each take opposing sides to present evidence and arguments for. Is anyone up for having such a debate with me in a LW dialogue for a few hours? (No particular intention to publish it.)

I have a suggested debate topic in mind, but I'm open to debating any well-operationalized claim (e.g. the sort of thing you could have a Manifold market on... (read more)

Bug report: When opening unread posts in a background tab, the rendering is broken in Firefox:

It should look like this:

The rendering in comments is also affected.

My current fix is to manually reload every broken page, though this is obviously not optimal.

Introduction

Hello everyone,

I'm a long time on-off lurker here. I made my way through the Sequences quite a while ago, with mixed success in implementing some of them. Many of the ideas are intriguing and I would love to have enough spare cycles to play with them. Unfortunately, often enough, I find myself without the capacity to do this properly due to life getting in the way. With (not only) that in mind, I'm going to take a sabbatical this summer for at least three months and try to do an update and generally tend to stuff I've been putting off... (read more)

5gilch
Rob Miles' YouTube channel has some good explanations about why alignment is hard. We can already do RLHF, the alignment technique that made ChatGPT and derivatives well-behaved enough to be useful, but we don't expect this to scale to superintelligence. It adjusts the weights based on human feedback, but this can't work once the humans are unable to judge actions (or plans) that are too complex.

Not following. We can already update the weights. That's training, tuning, RLHF, etc. How does that help?

No. We're talking about aligning general intelligence. We need to avoid all the dangerous behaviors, not just a single example we can think of, or even numerous examples. We need the AI to output things we haven't thought of, or why is it useful at all? If there's a finite and reasonably small number of inputs/outputs we want, there's a simpler solution: that's not an AGI—it's a lookup table.

You can think of the LLM weights as a lossy compression of the corpus it was trained on. If you can predict text better than chance, you don't need as much capacity to store it, so an LLM could be a component in a lossless text compressor as well. But these predictors generated by the training process generalize beyond their corpus to things that haven't been written yet. It has an internal model of possible worlds that could have generated the corpus. That's intelligence.
4ProgramCrafter
A problem is that:
  • we don't know the specific goal representation (the actual string in place of "A"),
  • we don't know how to evaluate LLM output (in particular, how to check whether the suggested plan works for a goal),
  • we have a large (presumably infinite, non-enumerable) set of behaviors B we want to avoid,
  • we have explicit representations for some items in B, mentally understand a bit more, and don't understand/know about other unwanted things.
2CBiddulph
If I understand correctly, you're basically saying:
  • We can't know how long it will take for the machine to finish its task. In fact, it might take an infinite amount of time, due to the halting problem, which says that we can't know in advance whether a program will run forever.
  • If our machine took an infinite amount of time, it might do something catastrophic in that infinite amount of time, and we could never prove that it doesn't.
  • Since we can't prove that the machine won't do something catastrophic, the alignment problem is impossible.

The halting problem doesn't say that we can't know whether any program will halt, just that we can't determine the halting status of every single program. It's easy to "prove" that a program that runs an LLM will halt. Just program it to "run the LLM until it decides to stop; but if it doesn't stop itself after 1 million tokens, cut it off." This is what ChatGPT or any other AI product does in practice.

Also, the alignment problem isn't necessarily about proving that an AI will never do something catastrophic. It's enough to have good informal arguments that it won't do something bad with (say) 99.99% probability over the length of its deployment.
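The token-capped wrapper described above can be sketched in a few lines. This is only an illustration of why such a program trivially halts; `next_token` is a hypothetical stand-in for any real LLM sampling interface, not an actual API.

```python
# Sketch of the guaranteed-halting wrapper: run the model until it emits a
# stop token, but never for more than MAX_TOKENS steps.
STOP = "<eos>"
MAX_TOKENS = 1_000_000

def generate(next_token, prompt):
    tokens = []
    for _ in range(MAX_TOKENS):          # hard cap guarantees termination
        tok = next_token(prompt, tokens)
        if tok == STOP:                  # model chose to stop on its own
            break
        tokens.append(tok)
    return tokens                        # halts after at most MAX_TOKENS steps

# Toy "model" that emits one token and then stops:
demo = generate(lambda prompt, toks: "hi" if not toks else STOP, "")
print(demo)  # ['hi']
```

Whatever the model does, the loop body runs at most MAX_TOKENS times, so the halting question for this particular program is settled by construction.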

Hello! A friend and I are working on an idea for the AI Impacts Essay Competition. We're both relatively new to AI and pivoting careers in that direction, so I wanted to float our idea here first before diving too deep. Our main idea is to propose a new method for training rational language models inspired by human collaborative rationality methods. We're basically agreeing with Conjecture's and Elicit's foundational ideas and proposing a specific method for building CoEms for philosophical and forecasting applications. The method is centered around a disc... (read more)

Hello! My name is Alfred. I recently took part in AI Safety Camp 2024 and have been thinking about the Agent-like structure problem. Hopefully I will have some posts to share on the subject soon.

Today I realized I am free to make the letters in an einsum string meaningful (b for batch, x for horizontal index, y for vertical index etc) instead of just choosing ijkl.
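For instance, a contraction over a batch of images reads much more clearly with named axes than with ijkl (the array shapes here are just made up for illustration):

```python
import numpy as np

# b = batch, y = vertical index, x = horizontal index, c = channel
images = np.random.rand(8, 32, 32, 3)   # (batch, y, x, channel)
weights = np.random.rand(3)             # one weight per channel

# "byxc,c->byx": sum over channels, keep batch and spatial dims.
# Far easier to audit than the equivalent "ijkl,l->ijk".
gray = np.einsum("byxc,c->byx", images, weights)
print(gray.shape)  # (8, 32, 32)
```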

5Adam Shai
https://pypi.org/project/fancy-einsum/ there's also this.

Crossposting here: I'm still looking for a dialogue partner 

I'm interested in arguments surrounding energy-efficiency (and maximum intensity, if they're not the same thing) of pain and pleasure. I'm looking for any considerations or links regarding (1) the suitability of "H=D" (equal efficiency and possibly intensity) as a prior; (2) whether, given this prior, we have good a posteriori reasons to expect a skew in either the positive or negative direction; and (3) the conceivability of modifying human minds' faculties to experience "super-bliss" commensurate with the badness of the worst-possible outcome, such that ... (read more)

Evolution is threatening to completely recover from a worst case inner alignment failure. We are immensely powerful mesaoptimizers. We are currently wildly misaligned from optimizing for our personal reproductive fitness. Yet, this state of affairs feels fragile! The prototypical lesswrong AI apocalypse involves robots getting into space and spreading at the speed of light extinguishing all sapient value, which from the point of view of evolution is basically a win condition.

In this sense, "reproductive fitness" is a stable optimization target. If there are more stable optimizations targets (big if), finding one that we like even a little bit better than "reproductive fitness" could be a way to do alignment.

6Eli Tyre
Katja Grace made a similar point here. The outcome you describe is not a win for for evolution except in some very broad sense of "evolution". This outcome is completely orthogonal to inclusive genetic fitness in particular, which is about the frequency of an organism's genes in a gene pool, relative to other competing genes.
6Daniel Kokotajlo
I don't think that outcome would be a win condition from the point of view of evolution. A win condition would be "AGIs that intrinsically want to replicate take over the lightcone" or maybe the more moderate "AGIs take over the lightcone and fill it with copies of themselves, to at least 90% of the degree to which they would do so if their terminal goal was filling it with copies of themselves" Realistically (at least in these scenarios) there's a period of replication and expansion, followed by a period of 'exploitation' in which all the galaxies get turned into paperclips (or whatever else the AGIs value) which is probably not going to be just more copies of themselves.
1Hastings
Yeah, in the lightcone scenario evolution probably never actually aligns the inner optimizers, although it may align them, as a superintelligence copying itself will have little leeway for any of those copies having slightly more drive to copy themselves than their parents. Depends on how well it can fight robot cancer. However, while a cancer-free paperclipper wouldn't achieve "AGIs take over the lightcone and fill it with copies of themselves, to at least 90% of the degree to which they would do so if their terminal goal was filling it with copies of themselves," it would achieve something like "AGIs take over the lightcone and briefly fill it with copies of themselves, to at least 10^-3% of the degree to which they would do so if their terminal goal was filling it with copies of themselves," which is in my opinion really close. As a comparison, if Alice sets off Kmart AIXI with the goal of creating utopia, we don't expect the outcome "AGIs take over the lightcone and convert 10^-3% of it to temporary utopias before paperclipping." Also, unless you beat entropy, for almost any optimization target you can trade "fraction of the universe's age during which your goal is maximized" against "fraction of the universe in which your goal is optimized," since it won't last forever regardless. If you can beat entropy, then the paperclipper will copy itself exponentially forever.

Along with p(doom), perhaps we should talk about p(takeover) - where this is the probability that creation of AI leads to the end of human control over human affairs. I am not sure about doom, but I strongly expect superhuman AI to have the final say in everything. 

(I am uncertain of the prospects for any human to keep up via "cyborgism", a path which could escape the dichotomy of humans in control vs humans not in control.) 

6gilch
Takeover, if misaligned, also counts as doom. X-risk includes permanent disempowerment, not just literal extinction. That's according to Bostrom, who coined the term: A reasonably good outcome might be for ASI to set some guardrails to prevent death and disasters (like other black marbles) and then mostly leave us alone. My understanding is that Neuralink is a bet on "cyborgism". It doesn't look like it will make it in time. Cyborgs won't be able to keep up with pure machine intelligence once it begins to take off, but maybe smarter humans would have a better chance of figuring out alignment before it starts. Even purely biological intelligence enhancement (e.g., embryo selection) might help, but that might not be any faster.

I'm sure everyone here has probably already seen it, but I've just been watching the interview with Leopold Aschenbrenner on Dwarkesh Patel's show. I found out about it from a very depressing thread on Twitter. This is starting to give off atomic bomb / Cold War vibes. What do people think about that?

Here's the video for those interested:

4gilch
Aschenbrenner also wrote https://situational-awareness.ai/. Zvi wrote a review.
4O O
I think this outcome is more likely than people give it credit for. People have speculated about the arms-race nature of AI we might already be seeing, and I agree, but it hadn't gotten much signal until now.

Are there multiwinner voting methods where voters vote on combinations of candidates?

5Marcus Ogren
Party list methods can be thought of as such, though I suspect that's not what you meant. Aside from party list, I don't recall any voting methods being discussed in which voters vote on sets of candidates rather than on individual candidates. Obviously you could consider all subsets of candidates containing the appropriate number of winners and have voters vote on these subsets using a single-winner voting method, but this approach has numerous issues.
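The naive subset approach described above can be sketched directly: treat each k-candidate committee as a single "super-candidate" and run an ordinary single-winner rule (plurality, in this sketch) over committees. Candidates and ballots here are invented for illustration.

```python
from itertools import combinations
from collections import Counter

candidates = ["A", "B", "C", "D"]
seats = 2  # committee size

# Each ballot names the voter's single favourite committee.
ballots = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "D")]

# Enumerate every possible committee: C(4, 2) = 6 "super-candidates".
committees = list(combinations(candidates, seats))

# Plurality over committees: most first-choice votes wins.
tally = Counter(tuple(sorted(b)) for b in ballots)
winner = max(committees, key=lambda c: tally.get(c, 0))
print(winner)  # ('A', 'B')
```

One of the "numerous issues" is immediately visible: the number of super-candidates grows combinatorially (electing 5 winners from 20 candidates already gives 15,504 committees), and near-identical committees split votes.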

Bug report: moderator-promoted posts (w stars) show up on my front page even when I've selected "hide from frontpage" on them.

4habryka
Interesting. Yeah, we query curated posts separately, without doing that filter. There is some slightly complicated logic going on there, so actually taking into account that filter is a bit more complicated, but probably shouldn't be too hard.

Can I somehow get the old sorting algorithm for posts back? My lesswrong homepage is flooded with very old posts.

5habryka
Yeah, it's just the "Latest" tab: 
3Sherrinford
Thanks! I thought the previously usual sorting was not just "latest" but also took a post's karma into account. I probably misunderstood that.
4Ruby
It does. We still call that algorithm Latest because overall it gives you just Latest posts.
2Richard_Kennaway
What is "Vertex"? A mod-only thing? I don't have that.
2habryka
Yeah, it's a mod-internal alternative to the AI algorithm for the recommendations tab (it uses Google Vertex instead).

Why does lesswrong.com have the bookmark feature without a way to sort bookmarks? As in using tags or maybe even subfolders. Unless I am missing something. I think it might be better if I just resort to the browser's bookmark feature.

6papetoast
I also mostly switched to browser bookmarks now, but I do think even this simple implementation of in-site bookmarks is overall good. Bookmarking in-site syncs across devices by default, and provides more integrated information.

Hello! I'm a health and longevity researcher. I presented on Optimal Diet and Exercise at LessOnline, and it was great meeting many of you there. I just posted about the health effects of alcohol.

I'm currently testing a fitness routine that, if followed, can reduce premature death by 90%. The routine involves an hour of exercise, plus walking, every week.

My blog is unaging.com. Please look and subscribe if you're interested in reading more or joining in fitness challenges!

2Screwtape
Welcome Crissman! Glad to have you here. I'm curious how you define premature death, or should I read more and find out on the blog?
3Crissman
Premature death is basically dying before you would on average otherwise. It's another term for increased all-cause mortality. If according to the actuarial tables you have a 1.0% chance of dying at your age and gender, but you have a 20% increased risk of premature death, then your chance is 1.2%. And yes, please read more on the blog!
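The relative-risk arithmetic above is just a multiplication; here it is as a one-liner (the numbers are the illustrative ones from the comment, not real actuarial data):

```python
# Baseline annual mortality from a (hypothetical) actuarial table,
# adjusted by a 20% increase in all-cause mortality risk.
baseline = 0.010            # 1.0% chance of dying this year
relative_increase = 0.20    # 20% increased risk of premature death

adjusted = baseline * (1 + relative_increase)
print(f"{adjusted:.1%}")  # 1.2%
```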

At my local Barnes and Nobles, I cannot access slatestarcodex.com nor putanumonit.com. Have never had any issues accessing any other websites (not that I've tried to access genuinely sketchy websites there). The wifi there is titled Bartleby, likely related to Bartleby.com, whereas many other Barnes and Nobles have wifi titled something like "BNWifi". I have not tried to access these websites at other Barnes yet.

4gilch
Get a VPN. It's good practice when using public Wi-Fi anyway. (Best practice is to never use public Wi-Fi. Get a data plan. Tello is reasonably priced.) Web filters are always imperfect, and I mostly object to them on principle. They'll block too little or too much, or more often a mix of both, but it's a common problem in e.g. schools. Are you sure you're not accessing the Wi-Fi of the business next door? Maybe B&N's was down.

Feature request: better formatting for emojis in text copied from elsewhere. In particular, I like to encourage people to copy text from interesting twitter/x threads they see into their posts instead of just linking. Better for the convenience of the readers and for more trustworthy archival access. 

The trouble with this is, text copied from twitter/x that has emojis in it tends to look terrible on LessWrong. The emojis (sometimes) get blown up to huge full-width size, instead of staying a square of text-height size as intended.

Example (may or may no... (read more)

2Nathan Helm-Burger
Screenshot of the effect:
4habryka
That is pretty bad. Agree that this should be something we support. I think just making it so that inline-images always have the same height as the characters around it should be good enough, and I can't think of a place where it breaks. When I apply that style, it looks like this. I might make a PR with that change: 
2Nathan Helm-Burger
And that would still work even if the copied text had an emoji on a line all by itself (with line breaks before and after)? Oh, and I can't seem to figure out how to paste in images from my phone when writing on mobile web. Is there a setting that could fix that?
5habryka
Our default text-processor on mobile is currently markdown, because it used to be that phones would have trouble with basically all available fancy text editors. In markdown you have to find some other website to host your images, and then link them in the normal markdown image syntax. I think this is now probably no longer true and we could just enable our fancy editor on mobile. I might look into that.

Seems like every new post - no matter the karma - is getting the "listen to this post" button now. I love it.

2habryka
Pretty sure that has been the case for a year plus, though I do agree that it's good.

I'm at this point pretty confident that under the Copenhagen interpretation, whenever an intergalactic photon hits earth, the wave-function collapse takes place on a semi-spherical wave-front many millions of lightyears in diameter. I'm still trying to wrap my head around what the interpretation of this event is in many-worlds. I know that it causes earth to pick which world it is in out of the possible worlds that split off when the photon was created, but I'm not sure if there is any event on the whole spherical wavefront.

It's not a pure hypothetical- we... (read more)

4kave
I don't think this is a very good way of thinking about what happens. I think worlds appear as fairly robust features of the wavefunction when quantum superpositions get entangled with large systems that differ in lots of degrees of freedom based on the state of the superposition. So, when the intergalactic photon interacts non-trivially with a large system (e.g. Earth), a world becomes distinct in the wavefunction, because there's a lump of amplitude that is separated from other lumps of amplitude by distance in many, many dimensions. This means it basically doesn't interact with the rest of the wavefunction, and so looks like a distinct world.
4Mitchell_Porter
Most reasoning about many worlds, by physicist fans of the interpretation as well as by non-physicists, is done in a dismayingly vague way. If you want a many-worlds framework that meets physics standards of actual rigor, I recommend thinking in terms of the consistent or decoherent histories of Gell-Mann and Hartle (e.g.).

In ordinary quantum mechanics, to go from the wavefunction to reality, you first specify which "observable" (potentially real property) you're interested in, and then which possible values of that observable. E.g. the observable could be position and the values could be specific possible locations. In a "Hartle multiverse", you think in terms of the history of the world, then specific observables at various times (or times + locations) in that history, then sets of possible values of those observables. You thereby get an ensemble of possible histories - all possible combinations of the possible values. The calculational side of the interpretation then gives you a probability for each possible history, given a particular wavefunction of the universe.

For physicists, the main selling point of this framework is that it allows you to do quantum cosmology, where you can't separate the observer from the physical system under investigation. For me, it also has the advantage of being potentially relativistic, a chronic problem of less sophisticated approaches to many worlds, since spatially localized observables can be ordered in space-time rather than requiring an artificial universal time.

On the other hand, this framework doesn't tell you how many "worlds" there are. That depends on the choice of observables. You can pick a single observable from one moment in the history of the universe (e.g. electromagnetic field strength at a certain space-time location), and use only that to define your possible worlds. That's OK if you're only interested in calculation, but if you're interested in ontology as well (also known as "what's actually there")
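For reference, the calculational rule of the Gell-Mann–Hartle framework mentioned above can be written compactly (this is the standard textbook form, not anything specific to this thread). A history $\alpha$ is a chain of Heisenberg-picture projectors, and its probability comes from the wavefunction (or density matrix $\rho$) via

```latex
C_\alpha = P^{n}_{\alpha_n}(t_n) \cdots P^{1}_{\alpha_1}(t_1),
\qquad
p(\alpha) = \mathrm{Tr}\!\left[ C_\alpha \,\rho\, C_\alpha^{\dagger} \right],
```

and these numbers behave as probabilities only when the decoherence functional is approximately diagonal:

```latex
D(\alpha, \alpha') = \mathrm{Tr}\!\left[ C_\alpha \,\rho\, C_{\alpha'}^{\dagger} \right] \approx 0
\quad \text{for } \alpha \neq \alpha'.
```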
1ProgramCrafter
Under MWI, before the photon (a peak in EM field) could hit Earth, there were a lot of worlds differing by EM field values ("electromagnetic tensor") - and, thus, with different photon directions, position, etc. Each of those worlds led to a variety of worlds; some, where light hit Earth, became somewhat different from those where light avoided it; so, integrated probability "photon is still on the way" decreases, while P(photon has been observed) increases. Whenever some probability mass of EM disturbances arrives, it is smoothly transformed, with no instant effects far away.

Is there a way to get an article's raw or original content?
My goal is mostly to put articles in some area (e.g. singular learning theory) into a tool like Google's NotebookLM and then ask quick questions about them.
Google's own conversion of HTML to text works fine for most content, except math. A division like p(w|Dn) = p(Dn|w)φ(w) / p(Dn) may turn into "p ( w | D n ) = p ( D n | w ) φ ( w ) p ( D n )", silently dropping the fraction and becoming incorrect.

I can always just grab the article's HTML content (or use the GraphQL api for that), but HTMLified MathJax notation is very, uh, verbose. I could probably do some massaging o... (read more)

4habryka
Yeah, you can grab any post in Markdown or in the raw HTML that was used to generate it using the markdown and ckEditorMarkup fields in the API:

{
  post(input: {selector: {_id: "jvewFE9hvQfrxeiBc"}}) {
    result {
      contents {
        ckEditorMarkup
      }
    }
  }
}

Just paste this into the editor at lesswrong.com/graphiql (adjusting the _id for the post id, which is the alphanumeric string in the URL after /posts/), and you can get the raw content for any post.
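If you'd rather do this from a script than the in-browser editor, a minimal sketch follows. The query shape and field names come from habryka's example above; the POST endpoint path (/graphql rather than /graphiql, which is just the browser UI) is an assumption worth verifying.

```python
import json
import urllib.request

# Assumption: the GraphQL POST endpoint lives at /graphql
# (lesswrong.com/graphiql is only the interactive query editor).
API_URL = "https://www.lesswrong.com/graphql"


def build_query(post_id: str) -> str:
    """Build the GraphQL query for a post's raw ckEditorMarkup field."""
    return (
        '{ post(input: {selector: {_id: "%s"}}) '
        "{ result { contents { ckEditorMarkup } } } }" % post_id
    )


def fetch_markup(post_id: str) -> str:
    """POST the query and dig the markup out of the JSON response."""
    payload = json.dumps({"query": build_query(post_id)}).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["data"]["post"]["result"]["contents"]["ckEditorMarkup"]


if __name__ == "__main__":
    # Post id is the alphanumeric string after /posts/ in the URL.
    print(fetch_markup("jvewFE9hvQfrxeiBc")[:500])
```

From there you can feed the returned markdown/HTML into NotebookLM or strip it down further yourself.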
1Dalcy
Thank you! I tried it on this post, and while the post itself is pretty short, the raw content that I get seems to be extremely long (making it larger than the o1 context window, for example), with a bunch of font-related information in between. Is there a way to fix this?
1MinusGix
Thank you!
2habryka
You're welcome!

I want to run code generated by an LLM totally unsupervised.

Just to get in the habit, I should put it in an isolated container in case it does something weird.

Claude, please write a Python script that executes a string as Python code in an isolated Docker container.

2Nathan Helm-Burger
Quite funny! But as a practical answer to your desire, I've found this to work well for me: cohere-terrarium

I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don't want to publish this as a regular post, because it may greatly help in the development of GAI (40% that it helps and 15% that it greatly helps), and I would like to help only those who are trying to create an aligned GAI. What should I do?

1Tapatakt
Everyone who is trying to create GAI is trying to create aligned GAI. But they think it will be easy (in the sense of "not so super hard that they will probably fail and create a misaligned one"); otherwise they wouldn't try in the first place. So, I think, you should not share your info with them.
1Crazy philosopher
I understand. My question is: can I publish an article about this so that only MIRI guys can read it, or send Eliezer an e-mail, or something?
2Tapatakt
Gretta Duleba is MIRI's Communication Manager. I think she is the person to ask about whom to write to.

I think I saw a LW post that was discussing alternatives to the vNM independence axiom. I also think (low confidence) it was by Rob Bensinger and in response to Scott's geometric rationality (e.g. this post). For the life of me, I can't find it. Unless my memory is mistaken, does anybody know what I'm talking about?

2cubefox
I assume it wasn't this old post?
1Mateusz Bagiński
Actually, it might be it, thanks!
[+][comment deleted]10