LessOnline Festival

May 31st - June 2nd, in Berkeley CA

A festival of truth-seeking, optimization, and blogging. We'll have writing workshops, rationality classes, puzzle hunts, and thoughtful conversations across a sprawling fractal campus of nooks and whiteboards.

keltan
Note to self: write a post about the novel akrasia solutions I thought up before becoming a rationalist.
* Figuring out how to want to want to do things
* Personalised advertising of Things I Wanted to Want to Do
* What I do when all else fails
Several dozen people now presumably have Lumina in their mouths. Can we not simply crowdsource some assays of their saliva? I would chip money into this. Key questions are around ethanol levels, aldehyde levels, antibacterial levels, and whether the organism itself stays colonized at useful levels.
Epistemic status: not a lawyer, but I've worked with a lot of them. As I understand it, an NDA isn't enforceable against a subpoena (though the former employer can seek a protective order for the testimony).   Someone should really encourage law enforcement or Congress to subpoena the OpenAI resigners...
niplav
Just checked who from the authors of the Weak-To-Strong Generalization paper is still at OpenAI:
* Collin Burns
* Jan Hendrik Kirchner
* Leo Gao
* Bowen Baker
* Yining Chen
* Adrien Ecoffet
* Manas Joglekar
* Jeff Wu

Gone are:
* Ilya Sutskever
* Pavel Izmailov[1]
* Jan Leike
* Leopold Aschenbrenner

1. ^ Reason unknown
quila
(Personal) On writing and (not) speaking

I often struggle to find words and sentences that match what I intend to communicate. Here are some problems this can cause:

1. Wordings that are odd or unintuitive to the reader, but that are at least literally correct.[1]
2. Not being able to express what I mean, and having to choose between not writing it, or risking miscommunication by trying anyways. I tend to choose the former unless I'm writing to a close friend. Unfortunately this means I am unable to express some key insights to a general audience.
3. Writing taking lots of time: I usually have to iterate many times on words/sentences until I find one which my mind parses as referring to what I intend. In the slowest cases, I might finalize only 2-10 words per minute. Even after iterating, my words are often interpreted in ways I failed to foresee.

These apply to speaking, too. If I speak what would be the 'first iteration' of a sentence, there's a good chance it won't create an interpretation matching what I intend to communicate. In spoken language I have no chance to constantly 'rewrite' my output before sending it. This is one reason, but not the only reason, that I've had a policy of trying to avoid voice-based communication.

I'm not fully sure what caused this relationship to language. It could be a byproduct of being autistic. It could also be a byproduct of out-of-distribution childhood abuse.[2]

1. ^ E.g., once I couldn't find the word 'clusters,' and wrote a complex sentence referring to 'sets of similar' value functions each corresponding to a common alignment failure mode / ASI takeoff training story. (I later found a way to make it much easier to read.)
2. ^ (Content warning) My primary parent was highly abusive, and would punish me for using language in the intuitive 'direct' way about particular instances of that. My early response was to try to euphemize and say-differently in a way that contradicted less the power dynamic / social reality she enforced. Eventually I learned to model her as a deterministic system and stay silent / fawn.

Popular Comments

Recent Discussion

Contra this post from the Sequences

In Eliezer's sequence post, he makes the following (excellent) point:

I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.

This debunks the then-widely-held view that rationality is counter to emotions. He then goes on to claim that emotions have the same epistemic status as the beliefs they are based on.

For my part, I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on mistake-producing epistemic conduct. “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.”

I think Eliezer is...

No77e

Eliezer decided to apply the label "rational" to emotions resulting from true beliefs. I think this is an understandable way to apply that word. I don't think you and Eliezer disagree about anything substantive except the application of that label.

That said, your point about keeping the label "rational" for things strictly related to the fundamental laws regulating beliefs is good. I agree it might be a better way to use the word.

My reading of Eliezer's choice is this: you use the word "rational" for the laws themselves. But you also use the word "rat... (read more)

Mikhail Samin
(Off the top of my head; maybe I’ll change my mind if I think about it more or see a good point.) What can be destroyed by truth, shall be. Emotions and beliefs are entangled. If you don’t think about how high p(doom) actually is because in the back of your mind you don’t want to be sad, you end up working on things that don’t reduce p(doom). As long as you know the truth, emotions are only important depending on your terminal values. But many feelings are related to what we end up believing, motivated cognition, etc.
Pi Rogers
Emotions can be treated as properties of the world, optimized with respect to constraints like anything else. We can't edit our emotions directly but we can influence them.
cubefox
We can "influence" them only insofar we can "influence" what we want or believe: to a very low degree.

Ilya Sutskever and Jan Leike have resigned. They led OpenAI's alignment work. Superalignment will now be led by John Schulman, it seems. Jakub Pachocki replaced Sutskever as Chief Scientist.

Reasons are unclear (as usual when safety people leave OpenAI).

The NYT piece and others I've seen don't really have details. Archive of NYT if you want to read it anyway.

OpenAI announced Sutskever's departure in a blogpost.

Sutskever and Leike confirmed their departures in tweets.

In my opinion, a class action filed by all employees allegedly prejudiced by the NDAs and gag orders (I say "allegedly", reserving the right to revise "prejudiced" if new information arises) would be very effective.

Were they to seek termination of these agreements on public-interest grounds in an arbitral tribunal, rather than through a court or internal bargaining, the ex-employees would be far more likely to get compensation. The litigation costs of legal practitioners there also tend to be far lower.

Again, this assumes that the agreements they ... (read more)

Tenoke
When considering that, my thinking was that I'd expect the last day to come slightly after the announcement: the announcement doesn't need to fall exactly on the last day and often comes a little earlier - e.g. on the first day of his last week.
Linch
I agree it's not a large commitment in some absolute sense. I think it'd still be instructive to see whether they're able to hit this (not very high) bar.
Pablo
This, and see also Gwern's comment here.

Crosspost from my blog.  

If you spend a lot of time in the blogosphere, you’ll find a great deal of people expressing contrarian views. If you hang out in the circles that I do, you’ll probably have heard Yudkowsky say that dieting doesn’t really work, Guzey say that sleep is overrated, Hanson argue that medicine doesn’t improve health, various people argue for the lab leak, others argue for hereditarianism, Caplan argue that mental illness is mostly just aberrant preferences and education doesn’t work, and various other people expressing contrarian views. Often, very smart people—like Robin Hanson—will write long posts defending these views, other people will have criticisms, and it will all be such a tangled mess that you don’t really know what to think about them.

For...

Do you happen to have a copy of it that you can share?

I have liked music very much since I was a teenager. I spent many hours late at night in Soulseek chat rooms talking about and sharing music with my online friends. So, I tend to just have some music floating around in my head on any given day. But, I never learned to play any instrument, or use any digital audio software. It just didn't catch my interest.

My wife learned to play piano as a kid, so we happen to have a keyboard sitting around in our apartment. One day I was bored so I decided to just see whether I could figure out how to play some random song that I was thinking about right then. I found I was easily able to reconstitute a piano...

The first two reasons that come to my mind are (1) other instruments have much more career incentive to do so (in that there are many more jobs for classical violinists or violin ensembles than for classical guitarists), and (2) it’s possible to have a much more successful career as a guitarist knowing only chord positions and not having a more detailed understanding of the fretboard, than it is with other instruments where a knowledge of how to play complicated melodies is required.

Ben Pace
I do want to +1 that there is a lot of variation in right-hand-position space. For fingerpicking, my training has always been to pluck from the knuckles, which are the strongest and biggest joints in the finger, and never from the joints nearer the fingertips, which are much weaker and tire faster; nor to hook one’s fingers under the string but to simply push past the string. (In case that’s helpful.) Might take some time to adjust to any new playing pattern. As with exercising any part of your body, there’s a difference between tiring your hands out (which is healthy) and hurting them (which is painful and damaging). There should be no sharp pain.
cata
Learning piano

I have been pretty skeptical about the importance of learning to read sheet music fluently. All piano players culturally seem to insist that it's very important, but my sense is that it's some kind of weird bias. If you tell piano players that you should hear it in your head and play it expressively, they will start saying stuff about, what if you don't already know what it's supposed to sound like, how will you figure it out, and they don't like "I will go listen to it" as an answer. So far, I am not very fluent at reading, so maybe I just don't get it yet.
Ben Pace
I have also seen the culture of pianists being used to playing reams and reams of new music, and this being a signal of proficiency more so than amongst other instrumentalists (e.g. violinists or flautists). I think it is probably because the majority of a pianist’s career is spent in accompaniment rather than as a soloist or in an equal ensemble (there are ~no serious piano quartets), and so the quantity of music quickly consumable is a much more competitive asset. When I was at music school, there were professional accompanists and everyone was assigned one, pianists employed simply to go around and accompany all of the students in their performances, so they needed to be able to play a great deal of complicated music very quickly or on-sight. Personally, my primary goal with sheet music is to get off of it as soon as possible (i.e. learn the piece from memory). It is a qualitative reduction in the number of things my attention is on, and gives me much more cognitive space to focus on how to play the piece rather than what I’m playing next.

This is a D&D.Sci scenario: a puzzle where players are given a dataset to analyze and an objective to pursue using information from that dataset.

Duke Arado’s obsession with physics-defying architecture has caused him to run into a small problem. His problem is not – he affirms – that his interest has in any way waned: the menagerie of fantastical buildings which dot his territories attest to this, and he treasures each new time-bending tower or non-Euclidean mansion as much as the first. Nor – he assuages – is it that he’s having trouble finding talent: while it’s true that no individual has ever managed to design more than one impossible structure, it’s also true that he scarcely goes a week without some architect arriving at his door, haunted...

simon

Looks like architects apprenticed under B. Johnson or P. Stamatin always make impossible structures. 

Architects apprenticed under M. Escher, R. Penrose or T. Geisel never do.

Self-taught architects sometimes do and sometimes don't. It doesn't initially look promising to figure out who will or won't in this group - many cases of similar proposals sometimes succeeding and sometimes failing.

Fortunately, we do have 5 architects (D,E,G,H,K) apprenticed under B. Johnson or P. Stamatin, so we can pick the 4 of them likely to have the lowest cost proposals.

Cos

... (read more)
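(As a minimal sketch, here is how this kind of check could be run against the scenario's dataset. The file name and column names — `Mentor`, `Impossible`, `Cost` — are hypothetical; the actual dataset may be organized differently.)

```python
import pandas as pd

# Load the scenario's historical data (file name is a placeholder).
df = pd.read_csv("architects.csv")

# For each mentor, what fraction of their apprentices' designs
# turned out to be genuinely impossible structures?
success_rate = df.groupby("Mentor")["Impossible"].mean().sort_values(ascending=False)
print(success_rate)

# Among the architects trained by the apparently reliable mentors,
# take the four with the cheapest proposals.
reliable = df[df["Mentor"].isin(["B. Johnson", "P. Stamatin"])]
print(reliable.sort_values("Cost").head(4))
```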
James Bishop
aphyer
Unnamed

Here’s a conception that I have about sacredness, divinity, and religion.

There’s a sense in which love and friendship didn’t have to exist.

If you look at the animal kingdom, you see all kinds of solitary species, animals that only come together for mating. Members of social species – such as humans – have companionship and cooperation, but many species do quite well without being social.

In theory, you could imagine a world with no social species at all.

In theory, you could imagine a species of intelligent humanoids akin to Tolkien’s orcs. Looking out purely for themselves, willing to kill anyone else if they got away with it and it benefited them.

And then in another sense, some versions of love and friendship do have to exist.

Social species evolved for a...

Thank you for your thoughts.

I often reflect that, in my attempts to model life on this planet from all that I have observed, experienced, read, and reflected on, it seems like there is a persistent "force" that is supporting life at ever greater levels of organization and complexity. The fields, circumstances, and conditions of this planet seem to give chances to any strategy for organizing on top of what has already been organized. Trillions of chances over billions of years, with almost as many failures. Almost.

I'm not the most science-y, but it seems t... (read more)


I expect it would be useful when developing an understanding of the language used on LW.

Answer by habryka

We don't have a live count, but we have a one-time analysis from late 2023: https://www.lesswrong.com/posts/WYqixmisE6dQjHPT8/2022-and-all-time-posts-by-pingback-count 

My guess is not much has changed since then, so I think that's basically the answer.

habryka
What do you mean by "cited"? Do you mean "articles references in other articles on LW" or "articles cited in academic journals" or some other definition?
keltan
That’s an important point I neglected. I mean something like “the top LW post on the list would have the most links from other LW posts”. For example, I’d expect “More Dakka” to be high up on the list, since it is mentioned in LW posts quite often.
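(For concreteness, here is a rough sketch of what counting "links from other LW posts" could look like. The field names and data are entirely hypothetical; the linked 2023 analysis presumably did something more careful.)

```python
import re
from collections import Counter

# Hypothetical corpus: one dict per post, with its ID and HTML body.
posts = [
    {"id": "abc123", "html": '... <a href="https://www.lesswrong.com/posts/xyz789/more-dakka">More Dakka</a> ...'},
    # ...
]

# Matches links to other LessWrong posts and captures the target post ID.
link_pattern = re.compile(r'href="https?://(?:www\.)?lesswrong\.com/posts/([^/"]+)')

pingbacks = Counter()
for post in posts:
    # Count each target at most once per citing post, and ignore self-links.
    targets = set(link_pattern.findall(post["html"])) - {post["id"]}
    pingbacks.update(targets)

# Posts with the most incoming links from other posts.
print(pingbacks.most_common(10))
```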

Epistemic status: I wrote this in August 2023, got some feedback I didn't manage to incorporate very well, and then never published it.  There's been less discussion of overhang risk recently but I don't see any reason to keep sitting on it.  Still broadly endorsed, though there's a mention of a "recent" hardware shortage which might be a bit dated. 


I think arguments about the risks of overhangs are often unclear about what type of argument is being made.  Various types of arguments that I've seen include:

  1. Pausing is net-harmful in expectation because it would cause an overhang, which [insert further argument here]
  2. Pausing is less helpful than [naive estimate of helpfulness] because it would cause an overhang, which [insert further argument here]
  3. We shouldn't spend effort attempting to coordinate or enforce
...

This seems to be arguing that the big labs are doing some obviously-inefficient R&D in terms of advancing capabilities, and that government intervention risks accidentally redirecting them towards much more effective R&D directions.  I am skeptical.

  1. If such training runs are not dangerous then the AI safety group loses credibility. 
  2. It could give a false sense of security when a different arch requiring much less training appears and is much more dangerous than the largest LLM. 
  3. It removes the chance to learn alignment and safety detail
... (read more)
RobertM
We ran into a hardware shortage during a period when there was no pause, which is evidence that the hardware manufacturer was behaving conservatively. If they're behaving conservatively during a boom period like this, it's not crazy to think they might be even more conservative in terms of novel R&D investment & ramping up manufacturing capacity if they suddenly saw dramatically reduced demand from their largest customers.

This and the rest of your comment seem to have ignored the rest of my post (see: multiple inputs to progress, all of which seem sensitive to "demand" from e.g. AGI labs), so I'm not sure how to respond. Do you think NVIDIA's planning is totally decoupled from anticipated demand for their products? That seems kind of crazy, but that's the scenario you seem to be describing. Big labs are just going to continue to increase their willingness-to-spend along a smooth exponential for as long as the pause lasts? What if the pause lasts 10 years?

If you think my model of how inputs to capabilities progress are sensitive to demand for those inputs from AGI labs is wrong, then please argue so directly, or explain how your proposed scenario is compatible with it.
RussellThor
Only if you pause everything that could bring ASI. That is: hardware, training runs, basic science on learning algorithms, brain studies, etc.
RobertM
This seems non-responsive to arguments already in my post:

This is the fourth in a sequence of posts taken from my recent report: Why Did Environmentalism Become Partisan?

This post has more of my personal opinions than previous posts or the report itself.


Other movements should try to avoid becoming as partisan as the environmental movement. Partisanship did not make environmentalism more popular; it made legislation more difficult to pass and resulted in fluctuating executive action. Looking at the history of environmentalism can give insight into what to avoid in order to stay bipartisan.

Partisanship was not inevitable. It occurred as the result of choices and alliances made by individual decision makers. If they had made different choices, environmentalism could have ended up being a bipartisan issue, like it was in the 1980s and is in some countries...

trevor
For those of us who haven’t already read it, don’t miss the paper this was based on. It’s a serious banger for anyone interested in the situation on the ground and probably one of the most interesting and relevant papers this year. It’s not something to miss just because you don’t find environmentalism itself very valuable; if you think about it for a while, it’s pretty easy to see why environmentalism makes a fantastic case study for a wide variety of purposes. Here’s a snapshot of the table of contents: (The link to the report seems to be broken; are the 4 blog posts roughly the same piece?)

Thank you!

The links to the report are now fixed.

The 4 blog posts cover most of the same ground as the report. The report goes into more detail, especially in sections 5 & 6.

Joseph Miller
Thanks, this is really useful. Do you have any particular examples as evidence of this? This is something I've been thinking a lot about for AI and I'm quite uncertain. It seems that ~0% of advocacy campaigns have good epistemics, so it's hard to have evidence about this. Emotional appeals are important and often hard to reconcile with intellectual honesty. Of course there are different standards for good epistemics and it's probably bad to outright lie, or be highly misleading. But by EA standards of "good epistemics" it seems less clear if the benefits are worth the costs. As one example, the AI Safety movement may want to partner with advocacy groups who care about AI using copyrighted data or unions concerned about jobs. But these groups basically always have terrible epistemics and partnering usually requires some level of endorsement of their positions. As an even more extreme example, as far as I can tell about 99.9% of people have terrible epistemics by LessWrong standards so to even expand to a decently sized movement you will have to fill the ranks with people who will constantly say and think things that you think are wrong.