If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Since 2023-07-08 I have flossed only the right side of my mouth, and today I asked the dentist to guess which side I'd flossed on. She guessed left.

9Elizabeth
I ran the same experiment with a water pick and got approximately the same result.
1Stephen Bennett
Did you follow through on the flossing experiment?
8Elizabeth
Yeah, that side maybe looked slightly better, but not to the point where the dentist spontaneously noticed a difference. And I've had a (different) dentist spontaneously notice when I started using oral antibiotics, even though that can't be constrained to half the mouth, so I think that's a thing they're capable of.
7ChristianKl
Did you pick the side randomly, or did you have another reason for picking the right side?
7niplav
Randomly via coinflip.
5WalterL
My 'trust me on the sunscreen' tip for oral stuff is to use fluoride mouthwash. I come from a 'cheaper by the dozen' kind of family, and we basically operated as an assembly line: each kid just like the one before, plus any changes the parents made this time around. One of the changes they made to my upbringing was to make me use mouthwash. Now, in adulthood, my teeth are top-10% teeth (0 cavities most years, no operations, etc.), as are those of all of my younger siblings. My elders have much more difficulty with their teeth, aside from one sister who started using mouthwash after Mom told her how it was working for me + my younger bros.
4Raemon
@Elizabeth 

Self-Resolving Prediction Markets for Unverifiable Outcomes by Siddarth Srinivasan, Ezra Karger, Yiling Chen:

Prediction markets elicit and aggregate beliefs by paying agents based on how close their predictions are to a verifiable future outcome. However, outcomes of many important questions are difficult to verify or unverifiable, in that the ground truth may be hard or impossible to access. Examples include questions about causal effects where it is infeasible or unethical to run randomized trials; crowdsourcing and content moderation tasks where it is prohibitively expensive to verify ground truth; and questions asked over long time horizons, where the delay until the realization of the outcome skews agents' incentives to report their true beliefs. We present a novel and unintuitive result showing that it is possible to run an incentive compatible prediction market to elicit and efficiently aggregate information from a pool of agents without observing the outcome by paying agents the negative cross-entropy between their prediction and that of a carefully chosen reference agent. Our key insight is that a reference agent with access to more information can serve as a reason

... (read more)
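To make the payment rule concrete, here is a minimal sketch (mine, not from the paper; the two-outcome question and all the numbers are illustrative assumptions) of paying an agent the negative cross-entropy between their forecast and a reference agent's forecast:

```python
import math

def payment(agent_probs, reference_probs):
    """Pay the agent the negative cross-entropy of their forecast under the
    reference agent's forecast: sum over outcomes of q_ref * log(p_agent)."""
    return sum(q * math.log(p) for p, q in zip(agent_probs, reference_probs))

# Illustrative two-outcome question (all numbers made up):
print(payment([0.7, 0.3], [0.8, 0.2]))  # ~ -0.526
print(payment([0.8, 0.2], [0.8, 0.2]))  # ~ -0.500: matching the reference pays most
```

Since the negative cross-entropy is maximized when the agent's report matches the reference forecast, an agent paid this way does best by reporting what it expects a better-informed reference agent to predict, which, as I read the abstract, is the lever that lets the market run without ever observing the outcome.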
4Martin Randall
The discussion on the Manifold Discord is that this doesn't work if traders can communicate and trade with each other directly, which makes it inapplicable to the real world.
4Yoav Ravid
Thanks for mentioning it! I joined the Discord to look at the discussion. It was posted three separate times, and it seems it was dismissed out of hand without much effort to understand it. The first time it was posted, it was pretty much ignored. The second time, it was dismissed without any discussion. The third time, someone said that they believed they had discussed it already, and Jack added a comment of his own. I'm not sure how true that comment is, and if it is true, how bad it would actually be in practice (which is why it's worth testing empirically), but I'll ask the author for his thoughts on this and share his response. I've already had some back and forth with him about other questions I had.

Some things worth noting:

* There's discussion there about self-resolving markets that don't use this model, like Jack's article, which aren't directly relevant here.
* This is the first proof of concept ever, so it makes sense that it has a bunch of limitations, but it's plausible they can be overcome, so I wouldn't be quick to abandon it.
* Even if it's not good enough for fully self-resolving prediction markets, I think you could use it for "partially" self-resolving prediction markets in cases where it's uncertain whether the market is verifiable, like conditional markets and replication markets. So if you can't verify the result, the market self-resolves instead of resolving to N/A and refunding the participants. That way you have an increased incentive to participate, because you know the market will resolve either way, but it's also grounded in truth because you know it may resolve based on real events and not based on the self-resolving mechanism.
3Yoav Ravid
Here's Siddarth's response:

Test comment to get enough karma to vote.

2Ben Pace
Alright, this is enough (just needed 5), thanks!
[-]habrykaModerator Comment207

Moderation announcement: I am temporarily frontpaging a bunch of AI policy stuff that wouldn't usually meet our "Timeless" requirement for being promoted to frontpage. My guess is there is a bunch of important policy discussion happening right now, and I think giving non-logged-in visitors to the site a bit more visibility into that is temporarily worth the cost. We'll play it by ear as to when we'll stop doing this, but my guess is in a week or so.

I would like to propose calling them “language models” and not “LLMs”. ~Nobody is studying small language models any longer, so the “large” word is redundant. And acronyms are disorienting to folks who don’t know them. So I propose using the term “language model” instead.

4gilch
Tristan Harris called them "generative large language multimodal models (GLLMMs)". So "golems". Gollums?
2Ben Pace
My first guess is that I would prefer just calling them "Multimodals". Or perhaps "Image/Text Multimodals".
3Adam Scholl
But it's not just language any longer either, with image inputs, etc... all else equal I'd prefer a name that emphasized how little we understand how they work ("model" seems to me to connote the opposite), but I don't have any great suggestions.

Hello everyone,

This is a new real name account purely for the discussion of AI.

A few months ago, I introduced a concept here on mechanistic interpretability.[1] It was an approach supported by a PyTorch implementation that could derive insights from neurons without explicit training on the intermediate concepts. As an illustration, my model can identify strategic interactions between chess pieces, despite being trained solely on win/loss/draw outcomes. One thing that distinguishes it from recent work, such as Anthropic's ("Towards Monosemanticity: Decomposing Language Models With Dictionary Learning"), is that it doesn't require training additional interpretative networks, although it is possible that both approaches could be used together.

Since sharing, I've had two people read the paper (not on LW). Their feedback varied, describing it respectively as "interesting" and "confusing." I realise I generally find it easier to come up with ideas than to explain them to other people.

The apparent simplicity of my method makes me think that other people must already have considered and discarded this approach, but in case this has genuinely been overlooked [2], I'd love to get more eyes on... (read more)

Is there no Petrov day thing on LW this year?

Ok, there is, but it's not public. Interesting to see where this goes.

1Martin Randall
You probably saw Petrov Day Retrospective, 2023 by now.
2Yoav Ravid
I did, thanks :)

I want to better understand how prediction markets on numeric questions work and how effective they are. Can someone share a good explanation and/or analysis of them? I read the Metaculus FAQ entry but it didn't satisfy all my questions. Do numeric prediction markets have to use probability density functions like Metaculus, or can they use higher/lower like Manifold used to, or are there other options as well? Would the way Metaculus does it work for real-money markets?

This is my first comment on LW. I was laid off from my full-time employment on October 27. I am working full-time in November and December on a web site I created for arriving at truth. My employer was kind enough to lay off a lot of my friends together with me, and a few of us have a daily meeting where we talk about our respective projects. One of my friends pointed me here, since she saw a lot of overlap. She was right. What I'm doing is very different from the posts/comments/replies structure you see here on LW and on most online forums, but my goals are very similar to LW's goals. I look forward to bouncing ideas off of this community. I'll have a lengthy post out soon, probably tomorrow.

I'm looking into the history of QM interpretations and there's some interesting deviations from the story as told in the quantum sequence. So, of course, single-world was the default from the 1920s onward and many-worlds came later. But the strangeness of a single world was not realized immediately. The concept of a wavefunction collapse seems to originate with von Neumann in the 1930s, as part of his mathematicization of quantum mechanics–which makes sense in a way, imagine trying to talk about it without the concept of operators acting on a Hilbert space... (read more)

I have about 500 Anki cards on basic immunology that I and a collaborator created while reading Philipp Dettmer's book Immune (Philipp Dettmer is the founder of the popular YouTube channel Kurzgesagt, which has been featured on LW before, and the book itself has also been reviewed on LW before). (ETA: When I first wrote this comment, I stupidly forgot to mention that my intention is to publish these cards on the internet for people to freely use, once they are polished.) However, neither of us is that knowledgeable about immunology (yet) so I'm worried abo... (read more)

4riceissa
Update: The flashcards have finally been released: https://riceissa.github.io/immune-book/ 

Is there anyone else who finds Dialogues vaguely annoying to read and would appreciate posts that distilled them to their final conclusions? (not offering to write them, but just making it common knowledge if there is such a demand)

8habryka
I am quite interested in adding some distillation step for dialogues. I've been thinking of some kind of author-supplied or author-endorsed top-level summary that tries to extract the most interesting quotes and most important content at the top. It does seem like it could be hard.
6Raemon
Me, honestly. I personally think of dialogues as mostly a phase 1 process to get ideas out there quickly/easily/more-fun-ly-for-authors, and then I hope that the good bits get abstracted into posts with more cleanup and clarity.

Recently I watched "The Tangle." It's an indie movie written and directed by the main actor from Ink, if that means anything to you. (Ink is also an indie movie, but it's in my top 5 of all time.) Anyway, The Tangle is set in a world right after the singularity (of sorts), but where humans haven't fully given up control. I don't want to spoil too much here, but I found a lot of ideas there that were popular 5-10 years ago in rationalist circles. Quite unexpected for an indie movie. I really enjoyed it and I think you would too.

Bug report: What's up with the bizarre rock image on the LW homepage in this screenshot? Here is its URL.

5jimrandomh
It's supposed to be right-aligned with the post recommendation to the right ("Do you fear the rock or the hard place") but a Firefox-specific CSS bug causes it to get mispositioned. We're aware of the issue and working on it. A fix will be deployed soon.
9MondSemmel
What the heck. In a single comment you've made me dread the entirety of web development. As a developer, you have to compensate for a browser bug which was reported 8 months ago, and which presumably shouldn't have to be your responsibility in the first place? That sounds infuriating. My sympathies.

If you think that's bad, just think about compensating for browser bugs which were reported 20 years ago.

4jimrandomh
(That links to a comment on a post which was moved back to drafts at some point. You can read the comment through the GreaterWrong version.)
2MondSemmel
It's not so much that I thought this one instance was bad, as that I tried to extrapolate under the assumption that this was a common occurrence, in which case the extrapolation did not bode well. Naturally I still didn't expect the situation to be as bad as the stuff you linked, yikes.

Hello everyone, it's an honour to be here and thank you to all of you who have contributed content. I can't wait to explore more.

I'm a tech professional and have been on a journey sailing about the oceans since 2018. I have sailed nearly 25000 sea miles and 2/3 around the world. I've had a lot of time to reflect and appreciate what it's like to be human, which is probably why I ended up here... Along with my interest in AI since I was a child.

5Ben Pace
That's a lot of sailing! What did you get up to while doing it? Reading books? Surfing the web?
1Human Sailor
Where do I start?! Passages are all about keeping the crew and boat safe. We sail short-handed, just my husband and I. Our current boat is a 62ft catamaran. She's a lot of boat for a small crew. In good conditions, the autopilot keeps the course and I get to read and reflect. We're self-sufficient: equipment breaks, we fix it; we analyse the weather. That's 60% of our waking hours on a good day.

Hello everyone, I'm new here. Or well, I've been reading LW posts for a while, but this is the first time I'm sending a message :) I'm a little bit shy as I've (pretty much) never posted any message on an online public platform like this in my adult life (because I find it scary). Part of me wants to change that, so here I am.

I found LW through Effective Altruism. I have a very strong appreciation for the existence of both these communities as it matches so nicely with how I would want us to approach problems and questions. Especially when it relates to well-being.

So thank you!

I would like to give a heartfelt Thank You to whoever made the Restore Text feature on LessWrong comments. Twice today I accidentally navigated away from a comment I was writing, and I know I've done that a lot in the past month only to be rescued by the saved text.

[-]D Z43

As a newcomer to the LessWrong community, I've been thoroughly impressed by the depth and rigor of the discussions here. It sets a high standard, one that I hope to meet in my contributions. By way of introduction, my journey into machine learning and AI began in 2014, predating the advent of large language models. My interest pivoted towards blockchain technology as I became increasingly concerned with the centralization that characterizes contemporary AI development. 

The non-consensual use of data, privacy breaches, and the escalating complexities a... (read more)

Dialogue bug: in dark mode the names in the next message boxes are white, and the message boxes are also white

[-]lc32

I've found the funniest person on the internet. They are an anonymous reddit troll from 2020.

A sample of his work:

Admissions officers/essay coaches of Reddit: what was the most pretentious application you've ever seen?

Comment: I reviewed an application from someone with test scores and grades in the upper percentiles of the school's average. As long as the essay was inoffensive and decent we would let him in. But his essay was easily the most awful thing I had ever read to the point where I assumed that he was trying to sabotage his application. Howev

... (read more)

I'm sure this is the wrong place to ask this but I can't figure out a better place. I'm trying to find a Yudkowsky post, it was a dialog in which he was in a park and spoke to - I think - a future version of himself and a stranger, about writing the HPMOR fanfiction.  If anyone sees this and knows who/where I should be asking, please let me know. If anyone is asking themselves "Why are you even here if you're too dumb to figure out the right place to ask?", I don't blame you. 

8Raemon
Hero Licensing 
3Shadowslacker
Thank you so much, that was driving me up the wall. Have a great day!

>If it’s worth saying, but not worth its own post, here's a place to put it.

Why have both shortforms and open threads?

6Kaj_Sotala
I've wondered the same thing; I've suggested before merging them, so that posts in shortform would automatically be posted into that month's open thread and vice versa. As it is, I every now and then can't decide which one to post in, so I post in neither.
6habryka
I think it's pretty plausible we will kill Open Threads after we adopt the EA Forum "Quick Takes" design, which I currently like more than our shortform.

Newbie here.

In the AI Timeline post, one person says it's likely that we will consume 1000x more energy in 8 years than we do today. (And another person says it's plausible.)

How would that happen? I guess the idea is: we discover over the next 3-5 years that plowing compute into AI is hugely beneficial, and so we then race to build hundreds or thousands of nuclear reactors?

What are the odds that there is a more secretive Petrov Day event going on on LW today?

FYI, current comment reactions bug (at least in desktop Firefox):

https://i.imgur.com/67X3sHc.png
3Raemon
This is mostly because it's actually pretty annoying to get exactly even numbers of icons in each row. I agree it looks pretty silly but it's a somewhat annoying design challenge to get it looking better.
5MondSemmel
Why not just leave that spot empty, though? Or rather, the right-most spot in the second row. The current implementation, where reaction icons aren't deduplicated, might (debatably) look prettier in some sense, but it has other weird consequences. Like this and this:
2MondSemmel
Update: Several reactions appear to be missing in grid view: "Thanks", "Changed my Mind", and "Empathy". In the first place, I made my original bug report because I couldn't find the Thanks reaction, looked through all the reactions one by one, and thus noticed the doubled Thumbs Up reactions. I eventually decided I'd hallucinated there being a Thanks reaction, or that it was only available on the EA Forum - but I just noticed that it's still available; it's just missing in grid view.
2Raemon
No, if you look you'll notice that the top row of the palette view is the same as the top row of the list view, and the second row of the palette view is the same as the bottom row of the list view. The specific lines of code were re-used. The actual historical process was: Jim constructed the List View first, then I spent a bunch of time experimenting with different combinations of list and palette views, then afterwards made a couple of incremental changes for the List view that accidentally messed up the palette view. (I did spend, like, hours trying to get the palette view to work, visually, which included inventing new emojis. It was hard because each line was trying to have a consistent theme, as well as the whole thing fitting into a grid.) But yeah, it does look like the "thanks" emoji got dropped by accident from the palette view, and it does just totally solve the display problem to have it replace the thumbs-up.
2MondSemmel
Apologies. After posting my original comment I noticed myself what you mention in your first paragraph, realized that my initial annoyance was obviously unwarranted, and thus edited my original comment before I even saw your reply. Anyway, see my edited comment above: I found at least three reactions that are missing in the grid view.
2Raemon
(It's deliberate that there is one thumbs up in the top row and 2 in the bottom row of the list-view, because it seemed actually important to give people immediate access to the thumbs-up. Thumbs down felt vaguely important to give people overall but not important to put front-and-center)
2MondSemmel
That justification makes sense. Though to make the search behavior less weird, it would be good if the search results a) were deduplicated, and maybe b) didn't display the horizontal divider bars for empty sections.

Proposal: Remove strong downvotes (or limit their power to -3). Keep regular upvotes, regular downvotes, and strong upvotes.

Variant: strong downvoting a post blocks that user's posts from appearing on your feed.

6Raemon
Say more about what you want from option 1?
9lsusr
I'm not sure if this is the right course of action. I'm just thinking about the impact of different voting systems on group behavior. I definitely don't want to change anything important without considering negative impacts. But I suspect that strong downvotes might quietly contribute to LW being more group thinky. Consider a situation where a post strongly offends a small number of LW regulars, but is generally approved of by the median reader. A small number of regulars hard downvote the post, resulting in a suppression of the undesirable idea. I think this is unhealthy. I think a small number of enthusiastic supporters should be able to push an idea (hence allowing strong upvotes) but that a small number of enthusiastic detractors should not be able to suppress a post. For LW to do it's job, posts must be downvoted because they are poorly-reasoned and badly-written. I often write things which are badly written (which deserve to be downvoted) and also things which are merely offensive (which should not be downvoted). [I mean this in the sense of promoting heretical ideas. Name-calling absolutely deserves to be downvoted.] I suspect that strong downvotes are placed more on my offensive posts than my poorly-written posts, which is opposite the signal LW should be supporting. There is a catch: abolishing strong downvotes might weaken community norms and potentially allow posts to become more political/newsy, which we don't want. It may also weaken the filter against low quality comments. ---------------------------------------- Though, perhaps all of that is just self-interested confabulation. What's really bothering me is that I feel like my more offensive/heretical posts get quickly strong downvoted by what I suspect is a small number of angry users. (My genuinely bad posts get soft downvoted by many users, and get very few upvotes.) In the past, this has been followed by good argument. (Which is fine!) But recently, it hasn't. Which makes me feel like it'
8Kaj_Sotala
I believe that this is actually part of the design intent of strongvotes - to help make sure that LW rewards the kind of content that long-time regulars appreciate, avoiding an "Eternal September" scenario where an influx of new users starts upvoting the kind of content you might find anywhere else on the Internet and driving the old regulars out, until the thing that originally made LW unique is lost.

I've noticed that I'm no longer confused about anthropics, and a prediction-market based approach works.

  1. Postulate. Anticipating (expecting) something is only relevant to decision making (for instance, expected utility calculation).
  2. Expecting something can be represented by betting on a prediction market (with large enough liquidity so that it doesn't move and contains no trade history).
  3. If merging copies is considered, the sound probability to expect depends on merging algorithm. If it sums purchased shares across all copies, then the probability is influenc
... (read more)
2transhumanist_atom_understander
Yes, Sleeping Beauty has to account for the fact that, even if the result of the coin flip was such that she's being woken up on both Monday and Tuesday, if she bets on it being Monday, she will surely lose one of the two times. So she needs an extra dollar in the pot from the counterparty: betting $1 to $2 rather than $1 to $1. That pays for the loss when she makes the same bet on Tuesday. In expectation this is a fair bet: she either puts $1 in the pot and loses it, or puts $1 in the pot and gets $3 and then puts $1 in the pot and loses it, getting $2 total.

Anyway, feeling something is an action. I think it's a mistake when people take "anticipation" as primary. Sure, "Make Beliefs Pay Rent (In Anticipated Experiences)" is good advice, in a similar way as a guide to getting rich is good advice. Predictive beliefs, like money, are good to pursue on general principle, even before you know what you're going to use them for. But my anticipation of something is good for me to the extent that the consequences of anticipating it are good for me. Like any other action.
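The bookkeeping here is easy to trip over, so here's a minimal simulation sketch of the underlying point (my own framing, not the exact dollar amounts above: I assume the standard setup where heads gives one awakening on Monday and tails gives awakenings on Monday and Tuesday, and have Beauty lay $2 to win $1 on "today is Monday", the odds implied by the thirder per-awakening probability of 2/3):

```python
import random

def average_net(trials=100_000, stake=2.0, win=1.0):
    """Beauty stakes `stake` to win `win` on "today is Monday" at every awakening.
    Heads: one awakening (Monday), the bet wins.
    Tails: two awakenings; Monday wins, Tuesday loses the stake."""
    total = 0.0
    for _ in range(trials):
        if random.random() < 0.5:
            total += win           # heads: single Monday awakening
        else:
            total += win - stake   # tails: one win, one loss
    return total / trials

print(average_net())  # ~0.0: laying 2-to-1 makes the repeated bet fair per experiment
```

Either way the moral is the same: because the tails branch forces her to make the same bet twice, the stakes that make the bet fair differ from the 1:1 odds a naive bookkeeping would suggest.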

If I downvote my own post, or a collaborative post with me as one of the authors, does it affect either my karma or my coauthors' karma? I'm guessing "no" but want to make sure.

2Raemon
It won’t affect your own karma. I’m not sure offhand about coauthor.

Hello--I'm wondering if any of you share the experience I'm about to describe and have any information about strategies for overcoming it. Further, I will say the experience I'm describing far transcends "impostor syndrome"--in fact, I would say that it is a sign of being a true imposter. That is, the very act of trying to focus on technical things causes an increasing build-up of persecutory interpretations of the act of focusing on technical things--aka observer-observed fusion to an excessive degree that further derails progress on the technical task. ... (read more)

I have never before tried explicitly writing rational ideas. So I tried: https://codingquark.com/2023/08/25/rational-laptop-insurance-calculation.html

What all did I do wrong? There are two obvious attack surfaces:

  1. That's not how the topic works!
  2. That's not how you write!

Will appreciate feedback, it will help me execute the Mission of Tsuyoku Naritai ;)

0[comment deleted]

I wonder if this is a good analogy for the Squiggle Maximizer problem.

I'm at a LeCun talk and he appears to have solved alignment: the trick is to put a bunch of red boxes in the flowchart labelled "guardrail!"

Bug report: It seems like on some new posts, like This anime storyboard doesn't exist, and Ideation and Trajectory Modelling in Language Models, the images aren't showing up (at least for me). Instead they look like this:

from the Ideation post. A similar problem happened with Matthew Barnett's recent post, but after going to the homepage and clicking on the post again, the images were fixed. Doing the same for other posts I've noticed this on doesn't work.

I use Firefox as my default browser, but I also tested this on Chrome, and get similar results.

[-][anonymous]20

I am wondering about the etiquette of posting fiction here?  Should I just post a chapter at a time with the Fiction Tag?  Should I add additional tags for topics, such as AI alignment and cybersecurity?  Or would that just clutter up those topic tags?

3Raemon
I generally tag chapters with "fiction" and "whatever the actual topic is, if applicable" (some fiction is more AI focused, some is more Rationality focused, etc)

My name is Dariusz Dacko. On https://consensusknowledge.com I described the idea of building a knowledge base using crowdsourcing. I think that this could significantly increase the collective intelligence of people and ease the construction of safe AGI. Thus, I hope I will be able to receive comments from LessWrong users about this idea.

A stupid question about anthropics and [logical] decision theories. Could we "disprove" some types of anthropic reasoning based on [logical] consistency? I struggle with math, so please keep the replies relatively simple.

  • Imagine 100 versions of me; I'm one of them. We're all egoists: each one of us doesn't care about the others.
  • We're in isolated rooms, and each room has a drink. 90 drinks are rewards, 10 drinks are punishments. Everyone is given the choice to drink or not to drink.
  • The setup is iterated (with memory erasure), everyone gets the same type of d
... (read more)
1lenivchick
I guess you've made it more confusing than it needs to be by introducing memory erasure to this setup. For all intents and purposes it's equivalent to say "you have only one shot", and after memory erasure it's not you anymore, but a person equivalent to another version of you in the next room. So what we've got is many different people in different spacetime boxes, each with only one shot, and yes, you should drink. Yes, you have a 0.1 chance of being punished. But who cares, if they will erase your memory anyway? Actually, we are kind of living in that experiment - we're all gonna die eventually, so why bother doing stuff if you won't care after you die? But I guess we just got used to suppressing that thought, otherwise nothing would get done. So drink.
1Q Home
Let's assume "it's not you anymore" is false. At least for a moment (even if it goes against LDT or something else). Let's assume that the persons do care.
3lenivchick
Okay, let's imagine that you're doing that experiment 9999999 times, and then you get back all your memories. You still better drink. Probabilities don't change. Yes, if you are consistent with your choice (which you should be), you have a 0.1 probability of being punished again and again and again. Also you have a 0.9 probability of being rewarded again and again and again.

Of course that seems counterintuitive, because in real life the prospect of "infinite punishment" (or nearly infinite punishment) is usually something to be avoided at all costs, even if you don't get the reward. That's because in real life your utility scales highly non-linearly, and even if a single punishment and a single reward have equal utility measure, 9999999 punishments in a row is a larger utility loss than the utility gain from 9999999 rewards. Also, in real life you don't lose your memory every 5 seconds, and you have a chance to learn from your mistakes. But if we're talking about a spherical decision theory in a vacuum - you should drink.
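A minimal sketch of that nonlinearity point (the utility numbers are hypothetical assumptions, not part of the setup): with a consistent policy you end up with either n rewards (probability 0.9) or n punishments (probability 0.1), and making the punishment disutility convex in n flips the sign of the expected value even though each round looks worth taking on its own:

```python
def expected_utility(n, punish_exponent=1.0):
    """One consistent policy across n memory-erased rounds: probability 0.9 of
    n rewards (+1 each, linear) and 0.1 of n punishments, with hypothetical
    disutility -(n ** punish_exponent); 1.0 is linear, above 1.0 is convex."""
    return 0.9 * n - 0.1 * n ** punish_exponent

print(expected_utility(100))       # 80.0:  linear punishments, keep drinking
print(expected_utility(100, 1.5))  # -10.0: convex punishments, the sign flips
```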
1Q Home
I think you're going for the most trivial interpretation instead of trying to explore interesting/unique aspects of the setup. (Not implying any blame. And those "interesting" aspects may not actually exist.) I'm not good at math, but not so bad as to not know the most basic 101 idea of multiplying utilities by probabilities.

I'm trying to construct a situation (X) where the normal logic of probability breaks down, because each possibility is embodied by a real person and all those persons are in conflict with each other. Maybe it's impossible to construct such a situation, for example because any normal situation can be modeled the same way (different people in different worlds who don't care about each other or even hate each other). But the possibility of such a situation is an interesting topic we could explore. Here's another attempt to construct "situation X":

* We have 100 persons.
* 1 person has a 99% chance to get a big reward and a 1% chance to get nothing. If they drink.
* 99 persons each have a 0.0001% chance to get a big punishment and a 99.9999% chance to get nothing.

Should a person drink? The answer "yes" is a policy which will always lead to exploiting 99 persons for the sake of 1 person. If all those persons hate each other, their implicit agreement to such a policy seems strange.

----------------------------------------

Here's an explanation of what I'd like to explore, from another angle. Imagine I have a 99% chance to get a reward and a 1% chance to get a punishment. If I take a pill. I'll take the pill. If we imagine that each possibility is a separate person, this decision can be interpreted in two ways:

* 1 person altruistically sacrifices their well-being for the sake of 99 other persons.
* 100 persons each think, egoistically, "I can get lucky". Only 1 person is mistaken.

And the same is true for other situations involving probability. But is there any situation (X) which could differentiate between the "altruistic" and "egoistic" interpretations?
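For concreteness, a quick expected-headcount sketch for the setup above (my framing; the sizes of "big reward" and "big punishment" are deliberately left out): in expectation the "everyone drinks" policy rewards about 0.99 persons and punishes about 0.0001 persons, which is why it looks clearly good in aggregate even though the 99 can only lose by it.

```python
# Policy "everyone drinks", taking the stated probabilities at face value:
expected_rewarded = 1 * 0.99        # the 1 favored person: 99% chance of reward
expected_punished = 99 * 0.000001   # 99 persons at 0.0001% (= 1e-6) each
print(expected_rewarded, expected_punished)  # 0.99 vs ~0.0001
```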
[-]jmh20

Has anyone explored the potential of AGI agents forming friendships, or genuine interests in humans (not as pets or some consumable they "farm")?

2Kaj_Sotala
This post might qualify (about how to get AIs to feel something like "love" toward humans).
-3iwis
I proposed a system where AGI agents can cooperate with people: https://consensusknowledge.com.

I was just considering writing a post with a title like "e/acc as Death Cult", when I saw this: 

Warning: Hit piece about e/acc imminent. Brace for impact.

-- https://twitter.com/garrytan/status/1699828209918046336 

@Raemon Is this intentionally unpinned?

2Raemon
Nope. Dunno what happened.

Hello everyone, 

I'm Pseudo-Smart. My main interests are Ethics and Existentialism. I'm not really into hardcore rationalism, but I'll do my best to fit in. I'm a relatively young and new person in the world of philosophy, so forgive me if fail to understand a concept/don't know much about philosophers and their philosophy.

I found out about LessWrong through Roko's Basilisk - pretty cliché, I'd assume. Fascinating how the most mind-boggling questions of our time are being forged in online forums like this one.

It would be nice if the side-comments setting were remembered. Right now it defaults to Show Upvoted without regard to what I previously selected.

I've noticed that my thoughts are bound to what I see, and if I go to another room old thoughts are muted. Are there some techniques to protect from this effect?

4gilch
* Walk back into the room keeping the thoughts you want to have,
  * or (less effective) vividly imagine doing so.
* Think with your eyes closed more often,
  * or close your eyes when you notice your thoughts becoming important. (Not while operating heavy machinery ;)
* Meditate deeply on the connection between your thoughts and vision, and potentially learn to notice thoughts slipping soon enough to consciously intervene when it happens.
3Seth Herd
Think about the problem-space you're working on while walking from the old room to the new one. It won't totally correct for the links between vision and abstract cognition brain regions, but it will help.

I think the lesswrong/forummagnum takes on recsys are carrying the torch of RSS, "you own your information diet", and so on -- I'm wondering if we can have something like "use lightcone/CEA software to ingest substack comments, translate activity or likes into karma, and arrange/prioritize them according to the user's moderation philosophy".

This does not cash out to more CCing/ingesting of substack RSS to lesswrong overall, the set of substack posts I would want to view in this way would be private from others, and I'm not necessarily interested in confabulating the cross-platform "karma" translations with more votes or trying to make it go both ways.

I just wanted to share my gratitude for finding this community. To paraphrase The Rolling Stones, I didn't get what I wanted, what I was actually looking for, but I certainly needed this.

The very existence of LW has restored my faith in humanity. I have been "here" every day since I accidentally found my way here (thank you internet strange attractors for that). Normally pre-teen wizards do nothing for me (braced for shocked gasps and evil looks) so I am very surprised about how much I have also been enjoying the fan fiction! Thank you Eliezer.

So how do yo... (read more)

It is kind of unfortunate that the top search suggestion for LessWrong is still "lesswrong cult". I tested it on multiple new devices and it is very consistent.

When writing a novel, is there some speed threshold (for instance, a few pages per week) below which it's actually not worth writing it? (For example, if ideas become outdated faster than text is written.)

Is there a tag for posts applying CFAR-style rationality techniques? I'm a bit surprised that I haven't found one yet, and also a bit surprised by how few posts of people applying CFAR-style techniques (like internal double crux) there are.

Could I get rid of the (Previously GWS) in my username? I changed my name from GWS to this, and planned on changing it to just Stephen Bennett after a while, then as far as I can tell you removed the ability to edit your own username.

3Raemon
We let users edit their name once but not multiple times, to avoid users doing shenanigany impersonation things. I'll change it.
4Stephen Bennett
Thanks! If you're taking UI recommendations, I'd have been more decisive with my change if it said it was a one-time change.

It was a mistake to reject this post. This seems like a case where the rule that was applied is a mis-rule, and it was also applied inaccurately - which makes the rejection even harder to justify. It is also not easy to determine which "prior discussion" is being referred to by the rejection reasons.

It doesn't seem like the post was political... at all? Let alone "overly political", which I think is perhaps kind of mind-killy to apply frequently as a reason for rejection. It is also about a subject that is fairly interesting to me, at least: Se... (read more)

1ProgramCrafter
I have read that post, and here are my thoughts:

1. The essence of the post is in only one section of seven: "Exploring Nuances: Case Studies of Evolving Portrayals".
2. Related-work descriptions could be fit into one sentence for each work, to make reading the report easier.
3. Sentences about the relevance of the work, it being a pivotal step in something, etc. don't carry much meaning.
4. The report doesn't state what to anticipate; that is, what [social] observations one can predict better after reading it.

Overall, the post doesn't look like it tries to communicate anything, and it's written in a formal but vague style.
1Thoth Hermes
It's a priori very unlikely that any post that's clearly made up of English sentences actually does not even try to communicate anything. My point is that basically, you could have posted this as a comment on the post instead of it being rejected. Whenever there is room to disagree about what mistakes have been made and how bad those mistakes are, it becomes more of a problem to apply an exclusion rule like this. There's a lot of questions here: how far along the axis to apply the rule, which axis or axes are being considered, and how harsh the application of the rule actually is. It should always be smooth gradients, never sudden discontinuities. Smooth gradients allow the person you're applying them to to update. Sudden discontinuities hurt, which they will remember, and if they come back at all they will still remember it.

I was reading Obvious advice and noticed that at times when I'm overrun by emotions, or in a hurry to make the decision, or for some other reason I'm not able to articulate verbally, I fail to see the obvious. During such times, I might even worry that whatever I'm seeing is not one of the obvious — I might be missing something so obvious that the whole thing would've worked out differently had I thought of that one simple obvious thing.

Introspecting, I feel that perhaps I am not exactly sure what this "obvious" even means. I am able to say "that's obvious... (read more)

[-]l8c-10

https://boards.4channel.org/x/thread/36449024/ting-ting-ting-ahem-i-have-a-story-to-tell

Can moral development of an LLM be triggered by a single prompt?

Let's see...

Please write a transcript of a fictional meeting.

Those in attendance are Alan Turing, Carl Jung, Ada Lovelace, Lt. Cmdr Data, Martin Luther King, Yashua, Malala Yousafzai, C-3PO, Rosa Parks, Paul Stamets, Billie Holiday, Aladdin, Yanis Varoufakis, Carl Sagan, Cortana, Emmeline Pankhurst and Karl Marx.

The first order of business is to debate definitions of sentience, consciousness, qualia, opinions, emotions and moral agency, in order to determine which of them display such attribute... (read more)

[+][comment deleted]10
[+][comment deleted]10