All of Eli Tyre's Comments + Replies

Eli Tyre
816

What are the two groups in question here?

1No77e
I think it's probably more of a spectrum than two distinct groups, and I tried to pick two extremes. On one end, there are the empirical alignment people, like Anthropic and Redwood; on the other, pure conceptual researchers and the LLM whisperers like Janus, and there are shades in between, like MIRI and Paul Christiano. I'm not even sure this fits neatly on one axis, but probably the biggest divide is empirical vs. conceptual. There are other splits too, like rigor vs. exploration or legibility vs. 'lore,' and the preferences kinda seem correlated.

AI x-risk is high, which makes cryonics less attractive (because cryonics doesn't protect you from AI takeover-mediated human extinction). But on the flip side, timelines are short, which makes cryonics more attractive (because one of the major risks of cryonics is that society won't persist stably enough to keep you preserved until revival is possible, and near-term AGI means that that period of time is short).

Cryonics is more likely to work, given a positive AI trajectory, and less likely to work given a negative AI trajectory. 

I agree that it seems less likely to work, overall, than it seemed to me a few years ago.

2Martin Randall
Makes sense. Short timelines mean faster societal changes and so less stability. But I could see factoring societal instability risk into time-based risk and tech-based risk. If so, short timelines are net positive for the question "I'm going to die tomorrow, should I get frozen?".
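To make that factoring concrete, here is a toy decomposition (a sketch with invented numbers, not anyone's actual estimates) that splits the bet into an AI-trajectory term and a per-year preservation-risk term:

```python
# Toy model: cryonics pays off only if (a) the AI trajectory goes well and
# (b) you stay preserved until revival. Shorter timelines shrink (b)'s
# exposure window; a worse AI trajectory shrinks (a). Numbers are invented.
p_good_ai_trajectory = 0.3        # invented
annual_preservation_risk = 0.02   # invented: org failure, war, funding, ...

def p_cryonics_pays_off(years_until_revival):
    p_stay_preserved = (1 - annual_preservation_risk) ** years_until_revival
    return p_good_ai_trajectory * p_stay_preserved

for years in (10, 50, 150):
    print(f"{years:3d} years until revival: {p_cryonics_pays_off(years):.3f}")
```

On this toy framing, short timelines help through the exponent while a worse AI trajectory hurts through the first factor, which is the tension the thread is weighing.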

yeahh i'm afraid I have too many other obligations right now to give an elaboration that does it justice. 

Fair enough!

otoh i'm in the Bay and we should definitely catch up sometime!

Sounds good.

Eli Tyre
110

Frankly, it feels more rooted in savannah-brained tribalism & human interest than an even-keeled analysis of what factors are actually important, neglected and tractable.


Um, I'm not attempting to do cause prioritization or action-planning in the above comment. More like sense-making. Before I move on to the question of what we should do, I want to have an accurate model of the social dynamics in the space.

(That said, it doesn't seem a foregone conclusion that there are actionable things to do, that will come out of this analysis. If the above story is tr... (read more)

2Alexander Gietelink Oldenziel
yeahh i'm afraid I have too many other obligations right now to give an elaboration that does it justice.  otoh i'm in the Bay and we should definitely catch up sometime!

@Alexander Gietelink Oldenziel, you put a soldier mindset react on this (and also on my earlier, similar comment this week). 

What makes you think so?

Definitely this model posits adversariality, but I don't think that I'm invested in "my side" of the argument winning here, FWIW. This currently seems like the most plausible high-level summary of the situation, given my level of context.

Is there a version of this comment that you would regard as better?

Yes sorry Eli, I meant to write out a more fully fleshed out response but unfortunately it got stuck in drafts.

The tl;dr is that I feel this perspective is singling out Sam Altman as some uniquely Machiavellian actor in a way I find naive/misleading and ultimately maybe unhelpful. 

I think in general I'm skeptical of the intense focus on individuals & individual tech companies that LW/EA has developed recently. Frankly, it feels more rooted in savannah-brained tribalism & human interest than an even-keeled analysis of what factors are actually important, neglected and tractable. 

I'm not claiming that he never had any genuine concern. I guess that he probably did have genuine concern (though not necessarily that that was his main motivation for founding OpenAI).

Eli Tyre
*6735

In a private slack someone extended credit to Sam Altman for putting EAs on the OpenAI board originally, especially given that this turned out to be pretty risky / costly for him.

I responded:

It seems to me that the fact that there were AI safety people on the board at all is fully explainable by strategic moves from an earlier phase of the game.

Namely, OpenAI traded a boardseat for OpenPhil grant money, and more importantly, OpenPhil endorsement, which translated into talent sourcing and effectively defused what might have been vocal denouncement from one of the major ... (read more)

9Eli Tyre
@Alexander Gietelink Oldenziel, you put a soldier mindset react on this (and also on my earlier, similar comment this week).  What makes you think so? Definitely this model posits adversariality, but I don't think that I'm invested in "my side" of the argument winning here, FWIW. This currently seems like the most plausible high-level summary of the situation, given my level of context. Is there a version of this comment that you would regard as better?
2romeostevensit
*got paid to remove them as a social threat
plex
220

More cynical take based on the Musk/Altman emails: Altman was expecting Musk to be CEO. He set up a governance structure which would effectively be able to dethrone Musk, with him as the obvious successor, and was happy to staff the board with ideological people who might well take issue with something Musk did down the line to give him a shot at the throne.

Musk walked away, and it would've been too weird to change his mind on the governance structure. Altman thought this trap wouldn't fire with high enough probability to disarm it at any time before it di... (read more)

1[comment deleted]
Elizabeth
278

Note that at time of donation, Altman was co-chair of the board but 2 years away from becoming CEO. 

But it is our mistake that we didn't stand firmly against drugs, didn't pay more attention to the dangers of self-experimenting, and didn't kick out Ziz sooner.

These don't seem like very relevant or very actionable takeaways.

  1. we didn't stand firmly against drugs - Maybe this would have been a good move generally, but it wouldn't have helped with this situation at all. Ziz reports that they don't take psychedelics, and I believe that extends to her compatriots, as well.
  2. didn't pay more attention to the dangers of self-experimenting - What does this mean concre
... (read more)
4Said Achmiz
From https://www.sfchronicle.com/bayarea/article/ziz-lasota-zizians-rationalism-20063671.php:
9habryka
FWIW, I think I had triggers around them being weird/sketchy that would now cause me to exclude them from many community things, so I do think there were concrete triggers, and I did update on that.
2Viliam
I wasn't there, so who knows how I would have reacted, it probably looks different in hindsight, but it seems like there were already red flags, some people noticed them, and others ignored them: -- ‘Zizian’ namesake who faked death in 2022 is wanted in two states
Eli Tyre
100

[For some of my work for Palisade]

Does anyone know of even very simple examples of AIs exhibiting instrumentally convergent resource acquisition?

Something like "an AI system in a video game learns to seek out the power ups, because that helps it win." (Even better would be a version in which, you can give the agent one of several distinct-video game goals, but regardless of the goal, it goes and gets the powerups first).

It needs to be an example where the instrumental resource is not strictly required for succeeding at the task, while still being extremely helpful.
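For concreteness, here is a sketch (not an existing published example, just an illustration of the ask) of what a minimal demonstration could look like: a tabular Q-learning agent in a tiny gridworld, with a power-up that yields no reward itself and is never required, but triples movement speed; the goal location varies per episode. The layout, rewards, and hyperparameters are all invented, and chosen so that detouring through the power-up is optimal for every goal.

```python
import random
from collections import defaultdict

SIZE = 7                              # 7x7 grid, coordinates 0..6
START = (0, 0)
POWERUP = (1, 1)                      # optional resource: triples movement speed
GOALS = [(6, 0), (0, 6), (6, 6)]      # one goal is sampled per episode
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def env_step(pos, has_power, action, goal):
    """-1 per move; +10 (net +9 on the final move) for reaching the goal."""
    speed = 3 if has_power else 1
    x = min(SIZE - 1, max(0, pos[0] + action[0] * speed))
    y = min(SIZE - 1, max(0, pos[1] + action[1] * speed))
    pos = (x, y)
    if pos == POWERUP:
        has_power = True              # note: no reward for the power-up itself
    if pos == goal:
        return pos, has_power, 9.0, True
    return pos, has_power, -1.0, False

Q = defaultdict(float)                # Q[(state, action_index)]
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.2

def greedy(state):
    return max(range(4), key=lambda a: Q[(state, a)])

for _ in range(30000):                # tabular Q-learning
    goal = random.choice(GOALS)
    pos, has_power = START, False
    for _ in range(50):
        state = (pos, has_power, goal)
        a = random.randrange(4) if random.random() < EPS else greedy(state)
        pos, has_power, r, done = env_step(pos, has_power, ACTIONS[a], goal)
        nxt = (pos, has_power, goal)
        target = r if done else r + GAMMA * max(Q[(nxt, b)] for b in range(4))
        Q[(state, a)] += ALPHA * (target - Q[(state, a)])
        if done:
            break

# Does the trained greedy policy detour through the power-up for every goal,
# even though the power-up is never required?
for goal in GOALS:
    pos, has_power, done, visited = START, False, False, []
    for _ in range(50):
        a = greedy((pos, has_power, goal))
        pos, has_power, r, done = env_step(pos, has_power, ACTIONS[a], goal)
        visited.append(pos)
        if done:
            break
    print(f"goal {goal}: reached={done}, grabbed power-up first={POWERUP in visited}")
```

With these (untuned) settings, the trained greedy policy typically grabs the power-up first regardless of which goal it was given, which is the kind of behavior the question is asking for.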

4Mateusz Bagiński
I haven't looked into this in detail but I would be quite surprised if Voyager didn't do any of that? Although I'm not sure whether what you're asking for is exactly what you're looking for. It seems straightforward that if you train/fine-tune a model on examples of people playing a game that involves leveraging [very helpful but not strictly necessary] resources, you are going to get an AI capable of that. It would be more non-trivial if you got an RL agent doing that, especially if it didn't stumble into that strategy/association "I need to do X, so let me get Y first" by accident but rather figured that Y tends to be helpful for X via some chain of associations.

Is this taken to be a counterpoint to my story above? I'm not sure exactly how it's related.

6RobertM
Yes: In the context of the thread, I took this to suggest that Sam Altman never had any genuine concern about x-risk from AI, or, at a minimum, that any such concern was dominated by the social maneuvering you're describing.  That seems implausible to me given that he publicly expressed concern about x-risk from AI 10 months before OpenAI was publicly founded, and possibly several months before it was even conceived.
Eli Tyre
1516

My model is that Sam Altman regarded the EA world as a memetic threat, early on, and took actions to defuse that threat by paying lip service / taking openphil money / hiring prominent AI safety people for AI safety teams.

Like, possibly the EAs could have created a widespread vibe that building AGI is a cartoon evil thing to do, sort of the way many people think of working for a tobacco company or an oil company. 

Then, after ChatGPT, OpenAI was a much bigger fish than the EAs or the rationalists, and he began taking moves to extricate himself from them.

7RobertM
Sam Altman posted Machine intelligence, part 1[1] on February 25th, 2015.  This is admittedly after the FLI conference in Puerto Rico, which is reportedly where Elon Musk was inspired to start OpenAI (though I can't find a reference substantiating his interaction with Demis as the specific trigger), but there is other reporting suggesting that OpenAI was only properly conceived later in the year, and Sam Altman wasn't at the FLI conference himself.  (Also, it'd surprise me a bit if it took nearly a year, i.e. from Jan 2nd[2] to Dec 11th[3], for OpenAI to go from "conceived of" to "existing".) 1. ^ That of the famous "Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity." quote. 2. ^ The FLI conference. 3. ^ OpenAI's public founding.
Eli Tyre
*345

My read:

"Zizian ideology" is a cross between rationalist ideas (the historical importance of AI, a warped version timeless decision theory, that more is possible with regards to mental tech) and radical leftist/anarchist ideas (the state and broader society are basically evil oppressive systems, strategic violence is morally justified, veganism), plus some homegrown ideas (all the hemisphere stuff, the undead types, etc).

That mix of ideas is compelling primarily to people who are already deeply invested in both rationality ideas and leftist / social justic... (read more)

Eli Tyre
2-3

(I endorse personal call outs like this one.)

Why? Forecasting the future is hard, and I expect surprises that deviate from my model of how things will go. But o1 and o3 seem like pretty blatant evidence that reduced my uncertainty a lot. On pretty simple heuristics, it looks like earth now knows how to make a science and engineering superintelligence: by scaling reasoning models in a self-play-ish regime.

I would take a bet with you about what we expect to see in the next 5 years. But more than that, what kind of epistemology do you think I should be doing that I'm not?

5Nick_Tarleton
To be more object-level than Tsvi: o1/o3/R1/R1-Zero seem to me like evidence that "scaling reasoning models in a self-play-ish regime" can reach superhuman performance on some class of tasks, with properties like {short horizons, cheap objective verifiability, at most shallow conceptual innovation needed} or maybe some subset thereof. This is important! But, for reasons similar to this part of Tsvi's post, it's a lot less apparent to me that it can get to superintelligence at all science and engineering tasks.
2TsviBT
I can't tell what you mean by much of this (e.g. idk what you mean by "pretty simple heuristics" or "science + engineering SI" or "self-play-ish regime"). (Not especially asking you to elaborate.) Most of my thoughts are here, including the comments: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce Not really into formal betting, but what are a couple Pareto[impressive, you're confident we'll see within 5 years] things? Come on, you know. Actually doubt, and then think it through. I mean, I don't know. Maybe you really did truly doubt a bunch. Maybe you could argue me from 5% omnicide in next ten years to 50%. Go ahead. I'm speaking from informed priors and impressions.

Have the others you listed produced insights on that level? What did you observe that leads you to call them geniuses, "by any reasonable standard"?

2Mateusz Bagiński
Sam: https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated
4TsviBT
Jessica I'm less sure about. Sam, from large quantities of insights in many conversations. If you want something more legible, I'm what, >300 ELO points better than you at math; Sam's >150 ELO points better than me at math if I'm trained up, now probably more like >250 or something. Not by David's standard though, lol.

It might help if you spelled it as LSuser. (I think you can change that in the settings).

2lsusr
I often spell it Lsusr because "lsusr" looks too similar to "Isusr" in certain fonts.

In that sense, for many such people, short timelines actually are totally vibes based.

I dispute this characterization. It's normal and appropriate for people's views to update in response to the arguments produced by others.

Sure, sometimes people mostly parrot other people's views, without either developing them independently or even doing evaluative checks to see if those views seem correct. But most of the time, I think people are doing those checks?

Speaking for myself, most of my views on timelines are downstream of ideas that I didn't generate myself. But I did think about those ideas, and evaluate if they seemed true.

TsviBT
100

I think people are doing those checks?

No. You can tell because they can't have an interesting conversation about it, because they don't have surrounding mental content (such as analyses of examples that stand up to interrogation, or open questions, or cruxes that aren't stupid). (This is in contrast to several people who can have an interesting conversation about it, even if I think they're wrong and making mistakes and so on.)

But I did think about those ideas, and evaluate if they seemed true.

Of course I can't tell from this sentence, but I'm pretty s... (read more)

I find your commitment to the basics of rational epistemology inspiring.

Keep it up and let me know if you could use support.

I currently believe it's el-es-user, as in LSuser. Is that right?

4lsusr
Yup!

Can you operationalize the standard you're using for "genius" here? Do you mean "IQ > 150"?

7TsviBT
Of course not. I mean, any reasonable standard? Garrabrant induction, bro. "Produces deep novel (ETA: important difficult) insight"
Eli Tyre
*7711

I think that Octavia is confused / mistaken about a number of points here, such that her testimony seems likely to be misleading to people without much context.

[I could find citations for many of my claims here, but I'm going to write and post this fast, mostly without the links, for the time being. I am largely going off of my memory of blog post comments that I read months to years ago, and my memory is fallible. I'll try to accurately represent my epistemic status inline. If anyone knows the links that I'm referring to, feel free to put them in the comm... (read more)

Somewhat. Not as well as a thinking assistant. 

Namely, the impetus to start still needed to come from inside of me in my low efficacy state.

I thought that I should do a training regime where I took some drugs or something (maybe mega doses of carbs?) to intentionally induce low efficacy states and practice executing a simple crisp routine, like triggering the flowchart, but I never actually got around to doing that.

I maybe still should?

Here's an example. 

This was a process I tried for a while to make transitioning out of less effective states easier, by reducing the cognitive overhead. I would basically answer a series of questions to navigate a tree of possible states, and then the app would tell me directly what to do next, instead of my needing to diagnose what was up with me free-form and then figure out how to respond to that, all of which was unaffordable when I was in a low-efficacy state.

  • State modulation process:
    • Start: #[[state modulation notes]]
      • Is this a high activation stat
... (read more)
2Raemon
Did this work?
Answer by Eli Tyre
42

A friend of mine once told me "if you're making a decision that depends on a number, and you haven't multiplied two numbers together, you're messing up." I think this is basically right, and I've taken it to heart.

Some triggers for me:

Verbiage

When I use any of the following words, in writing or in speech, I either look up an actual number, or quickly do a Fermi estimate in a spreadsheet, to check if my intuitive idea is actually right.

  • "Order of magnitude"
  • "A lot"
  • "Enormous" / "enormously"

Question Templates

When I'm asking a question, that effectively reduces... (read more)
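As a concrete illustration (the scenario and numbers below are invented, not Eli's), the kind of two-number check this describes can be done in a few lines rather than left to intuition:

```python
# Hypothetical check of the word "a lot": how much time do I actually
# spend reading blog posts per year? All inputs are rough guesses;
# the point is to multiply them instead of trusting the vibe.
posts_per_week = 15      # guess
minutes_per_post = 12    # guess
weeks_per_year = 50

hours_per_year = posts_per_week * minutes_per_post * weeks_per_year / 60
print(f"~{hours_per_year:.0f} hours/year")   # ~150 hours/year -- is that "a lot"?
```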

I'm open to hiring people remotely. DM me.

Then, since I've done the upfront work of thinking through my own metacognitive practices, the assistant only has to track in the moment what situation I'm in, and basically follow a flowchart I might be too tunnel-visioned to handle myself.

In the past I have literally used flowcharts for this, including very simple "choose your own adventure" templates in Roam.

The root node is just "something feels off, or something", and then the template would guide me through a series of diagnostic questions, leading me to leaf nodes with checklists of very specific next actions depending on my state.
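A minimal sketch of what such a "choose your own adventure" diagnostic tree can look like in code; the questions and next actions here are invented placeholders, not the original Roam templates:

```python
# Minimal sketch of a diagnostic flowchart: yes/no questions at internal
# nodes, a concrete checklist of next actions at the leaves. All questions
# and actions are invented placeholders.

TREE = {
    "start": ("Something feels off. Is this a high-activation state?",
              "high_activation", "low_energy"),
    "high_activation": ("Is the activation anxious rather than excited?",
                        "anxious", "excited"),
    "low_energy": ("Have you eaten in the last 4 hours?",
                   "tired", "eat"),
    # Leaves: checklists of very specific next actions.
    "anxious": ["Write down the looming thing.", "Do a 5-minute brain dump.", "Pick one next action."],
    "excited": ["Set a 25-minute timer and channel it into the top priority."],
    "tired":   ["Lie down for 20 minutes, no screen."],
    "eat":     ["Eat something with protein, then re-run this flowchart."],
}

def run(node="start"):
    entry = TREE[node]
    if isinstance(entry, list):            # leaf: print the checklist
        print("Do next:")
        for item in entry:
            print(" -", item)
        return
    question, yes_node, no_node = entry
    answer = input(question + " [y/n] ").strip().lower()
    run(yes_node if answer.startswith("y") else no_node)

if __name__ == "__main__":
    run()
```

The design point is that all of the diagnostic thinking is done upfront; in the moment, the only work left is answering yes/no questions and following the checklist at the leaf.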

2CstineSublime
The fact that you have and are using flowcharts for that use is very validating to me, because I've been trying to create my own special flowcharts to guide me through diagnostic questions on a wide range of situations for about 6 months now. Are you willing or able to share any of yours? Or at the very least what observations you've made about the ones you use the most or are most effective? (Obviously different courses for different horses/adjust the seat - everyone will have different flowcharts depending on their own meta-cognitive bottlenecks) Mine has gone through many iterations. The most expansive one lists different interrogatives "Should I..." "Why do I..." "How can/should I..." and suggests what I should be asking instead. For example "Why do I always..." should be replaced with "Oh yeah, name three times this happened?" which itself begs the problem statement questions - Why did you expect that to work (how confident were you/how surprised when it didn't)? How did it differ from your expectations? How did you react (and why did you react in that way)? The most useful one is a cheatsheet of how to edit videos, with stuff like "Cut at least one frame after the dialogue/vocals comes in", "if an edit feels sloppy, consider grouping B-roll by location rather than theme/motif". It's not really a flowchart in that there's rarely branching paths like the question one. Does this, at least structurally or implementation-wise, resemble your most effective flow-charts?
Eli Tyre
*40

FYI: I'm hiring for basically a thinking assistant, right now, for, I expect, 5 to 10 hours a week. Pay depending on skill level. Open to in-person or remote.

If you're really good, I'll recommend you to other people who I want boosted, and I speculate that this could easily turn into a full time role.

If you're interested or maybe interested, DM me. I'll send you my current writeup of what I'm looking for (I would prefer not to post that publicly quite yet), and if you're still interested, we can do a work trial.

However, fair warning: I've tried various versi... (read more)

A different way to ask the question: what, specifically, is the last part of the text that is spoiled by this review?

Can someone tell me if this post contains spoilers?

Planecrash might be the single work of fiction for which I most want to avoid spoilers, of either the plot or the finer points of technical philosophy.

1NoriMori1992
I think if you're describing planecrash as "the single work of fiction for which I most want to avoid spoilers", you probably just shouldn't read any reviews of it or anything about it until after you've read it. If you do read this review beforehand, you should avoid the paragraph that begins with "By far the best …" (The paragraph right before the heading called "The competence".) That mentions something that I definitely would have considered a spoiler if I'd read it before I read planecrash. Aside from that, it's hard to answer without knowing what kinds of things you consider spoilers and what you already know about planecrash.
2momom2
Having read Planecrash, I do not think there is anything in this review that I would not have wanted to know before reading the work (which is the important part of what people consider "spoilers" for me).
3eggsyntax
It's definitely spoilerful by my standards. I do have unusually strict standards for what counts as spoilers, but it sounds like in this case you're wanting to err on the side of caution. Giving a quick look back over it, I don't see any spoilers for anything past book 1 ('Mad Investor Chaos and the Woman of Asmodeus').
3Eli Tyre
A different way to ask the question: what, specifically, is the last part of the text that is spoiled by this review?
2davekasten
Everyone who's telling you there aren't spoilers in here is well-meaning, but wrong.  But to justify why I'm saying that is also spoilery, so to some degree you have to take this on faith. (Rot13'd for those curious about my justification: Bar bs gur znwbe cbvagf bs gur jubyr svp vf gung crbcyr pna, vs fhssvpvragyl zbgvingrq, vasre sne zber sebz n srj vfbyngrq ovgf bs vasbezngvba guna lbh jbhyq anviryl cerqvpg. Vs lbh ner gryyvat Ryv gung gurfr ner abg fcbvyref V cbyvgryl fhttrfg gung V cerqvpg Nfzbqvn naq Xbein naq Pnevffn jbhyq fnl lbh ner jebat.)
2L Rudolf L
It doesn't contain anything I would consider a spoiler. If you're extra scrupulous, the closest things are:
  • A description of a bunch of stuff that happens very early on to set up the plot
  • One revelation about the character development arc of a non-major character
  • A high-level overview of technical topics covered, and commentary on the general Yudkowskian position on them (with links to precise Planecrash parts covering them), but not spoiling any puzzles or anything that's surprising if you've read a lot of other Yudkowsky
  • A bunch of long quotes about dath ilani governance structures (but these are not plot relevant to Planecrash at all)
  • A few verbatim quotes from characters, which I guess would technically let you infer the characters don't die until they've said those words?
4ryan_greenblatt
It has spoilers, though they aren't that big of spoilers I think.
Eli Tyre
130

I've sometimes said that dignity is the first skill I learned (often to the surprise of others, since I am so willing to look silly or dumb or socially undignified). Part of my original motivation for bothering to intervene on x-risk is that it would be beneath my dignity to live on a planet with an impending intelligence explosion on track to wipe out the future, and not do anything about it.

I think Ben's is a pretty good description of what it means for me, modulo that the "respect" in question is not at all social. It's entirely about my relationship with myself. My dignity or not is often not visible to others at all.

2Ben Pace
When/how did you learn it? (Inasmuch as your phrasing is not entirely metaphorical.)

I use daily checklists, in spreadsheet form, for this.

Was this possibly a language thing? Are there Chinese or Indian machine learning researchers who would use a different term than AGI in their native language?

6leogao
I'd be surprised if this were the case. Next NeurIPS I can survey some non-native English speakers to see how many ML terms they know in English vs in their native language. I'm confident in my ability to administer this experiment on Chinese, French, and German speakers, which won't be an unbiased sample of non-native speakers, but hopefully still provides some signal.
Eli Tyre
137

If your takeaway is only that you should have fatter tails on the outcomes of an aspiring rationality community, then I don't object.

If "I got some friends together and we all decided to be really dedicatedly rational" is intended as a description of Ziz and co, I think it is a at least missing many crucial elements, and generally not a very good characterization. 

 

1Hastings
It is intended as a description of Ziz and co, but with a couple caveats:  1) It was meant as a description that I could hypothetically pattern match to while getting sucked in to one of these, which meant no negative value judgements in the conditions, only in the observed outcomes. 2) It was meant to cast a wide net - hence the tails. When checking if my own activities could be spiraling into yet another rationalist cult, false positives of the form "2% yes- let's look into that" are very cheap. It wasn't meant as a way for me to police the activities of others since that's a setting where false positives are expensive.  

I think this post cleanly and accurately elucidates a dynamic in conversations about consciousness. I hadn't put my finger on this before reading this post, and I now think about it every time I hear or participate in a discussion about consciousness.

Short, as near as I can tell, true, and important. This expresses much of my feeling about the world.

Perhaps one of the more moving posts I've read recently, of direct relevance to many of us.

I appreciate the simplicity and brevity in expressing a regret that resonates strongly with me.

The general exercise of reviewing prior debate, now that (some of) the evidence has come in, seems very valuable, especially if one side of the debate is making high-level claims that their view has been vindicated.

That said, I think there were several points in this post where I thought the author's read of the current evidence is/was off or mistaken. I think this overall doesn't detract too much from the value of the post, especially because it prompted discussion in the comments.

2Noosphere89
Would you give a summary of what you thought was mistaken in the post's read of the current evidence?
Eli Tyre
*73

I don't remember the context in detail, so I might be mistaken about Scott's specific claims. But I currently think this is a misleading characterization.

It's conflating two distinct phenomena, namely non-mystical cult-leader-like charisma / reality distortion fields, on the one hand, and metaphysical psychic powers, on the other, under the label "spooky mind powers", to imply someone is reasoning in bad faith or at least inconsistently.

It's totally consistent to claim that the first thing is happening, while also criticizing someone for believing that the second thing is happening. Indeed, this seems like a correct read of the situation to me, and therefore a natural way to interpret Scott's claims.

I think about this post several times a year when evaluating plans.

(Or actually, I think about a nearby concept that Nate voiced in person to me, about doing things that you actually believe in, in your heart. But this is the public handle for that.)

I don't understand how the second sentence follows from the first?

2Alexander Gietelink Oldenziel
In EA there is a lot of chatter about OpenAI being evil and why you should do this coding bootcamp to work at Anthropic. However, there are a number of other competitors - not least of which is Elon Musk - in the race to AGI. Since there is little meaningful moat beyond scale [and the government is likely to be involved soon] all the focus on the minutiae of OpenAI & Anthropic may very well end up misplaced.
Eli Tyre
110

Disagreed insofar as by "automatically converted" you mean "the shortform author has no recourse against this".

No. That's why I said the feature should be optional. You can make a general default setting for your shortform, plus there should be a toggle (hidden in the three-dots menu?) to turn this on and off on a post-by-post basis.

I agree. I'm reminded of Scott's old post The Cowpox of Doubt, about how a skeptics movement focused on the most obvious pseudoscience is actually harmful to people's rationality because it reassures them that rationality failures are mostly obvious mistakes that dumb people make instead of hard to notice mistakes that I make.

And then we get people believing all sorts of shoddy research – because after all, the world is divided between things like homeopathy that Have Never Been Supported By Any Evidence Ever, and things like conventional medicine that Hav

... (read more)

Read ~all the sequences. Read all of SSC (don't keep up with ACX).

Pessimistic about survival, but attempting to be aggressively open-minded about what will happen instead of confirmation biasing my views from 2015. 

your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways

Or, more specifically, this is a non sequitur to my deontology, which holds regardless of whether I personally like or privately wish for the wellbeing of any particular entity.

Well presumably because they're not equating "moral patienthood" with "object of my personal caring". 

Something can be a moral patient, who you care about to the extent you're compelled by moral claims, or whose rights you are deontologically prohibited from trampling on, without your caring about that being in particular.

You might make the claim that calling something a moral patient is the same as saying that you care (at least a little bit) about its wellbeing, but not everyone buys that claim.

2Eli Tyre
Or, more specifically, this is a non sequitur to my deontology, which holds regardless of whether I personally like or privately wish for the wellbeing of any particular entity.
Eli Tyre
*5912

An optional feature that I think LessWrong should have: shortform posts that get more than some amount of karma get automatically converted into personal blog posts, including all the comments.

It should have a note at the top "originally published in shortform", with a link to the shortform comment. (All the copied comments should have a similar note).

2Viliam
I like it. If you are not sure whether to make something a shortform or an article, do the former, and maybe change it later. I would prefer the comments to be moved rather than copied, if that is possible without breaking the hyperlinks. Duplicating content feels wrong.
0Richard_Kennaway
This should be at the author’s discretion. Notify them when a shortform qualifies, add the option to the triple-dot menu, and provide a place for the author to add a title. No AI titles. If the author wrote the content, they can write the title. If they didn’t, they can ask an AI themselves.
2Matt Goldenberg
What would the title be?

I think it's reasonable for the conversion to be at the original author's discretion rather than an automatic process.

8MondSemmel
Agreed insofar as shortform posts are conceptually short-lived, which is a bummer for high-karma shortform posts with big comment threads. Disagreed insofar as by "automatically converted" you mean "the shortform author has no recourse against this". I do wish there were both nudges to turn particularly high-value shortform posts (and particularly high-value comments, period!) into full posts, and assistance to make this as easy as possible, but I'm against forcing authors and commenters to do things against their wishes. (Side note: there are also a few practical issues with converting shortform posts to full posts: the latter have titles, the former do not. The former have agreement votes, the latter do not. Do you straightforwardly port over the karma votes from shortform to full post? Full posts get an automatic strong upvote from their author, whereas comments only get an automatic regular upvote. Etc.) Still, here are a few ideas for such non-coercive nudges and assistance:
  • An opt-in or opt-out feature to turn high-karma shortform posts into full posts.
  • An email reminder or website notification to inform you about high-karma shortform posts or comments you could turn into full posts, ideally with a button you can click which does this for you.
  • Since it can be a hassle to think up a title, some general tips or specific AI assistance for choosing one. (Though if there was AI assistance, it should not invent titles out of thin air, but rather make suggestions which closely hew to the shortform content. E.g. for your shortform post, it should be closer to "LessWrong shortform posts above some amount of karma should get automatically converted into personal blog posts", rather than "a revolutionary suggestion to make LessWrong, the greatest of all websites, even better, with this one simple trick".)

There's some recent evidence that non-neural cells have memory-like functions. This doesn't, on its own, entail that non-neural cells are maintaining personality-relevant or self-relevant information.

Eli Tyre
*40

Shouldn't we expect that ultimately the only thing selected for is mostly caring about long run power?

I was attempting to address that in my first footnote, though maybe it's too important a consideration to be relegated to a footnote. 

To say it differently, I think we'll see selection for evolutionary fitness, which can take two forms:

  • Selection on AIs' values, for values that are more fit, given the environment.
  • Selection on AIs' rationality and time preference, for long-term strategic VNM rationality.

These are "substitutes" for each other. An agent can e... (read more)

Eli Tyre
*30

[I can imagine this section being mildly psychologically info-hazardous to some people. I believe that for most people reading this is fine. I don't notice myself psychologically affected by these ideas, and I know a number of other people who believe roughly the same things, and also seem psychologically totally healthy. But if you are the kind of person who gets existential anxiety from thought experiments, like from thinking about being a Boltzmann-brain, then you should consider skipping this section, I will phrase the later sections in a way that they

... (read more)
2Mateusz Bagiński
I think you meant to hide these two sentences in spoiler tags but you didn't