All of FeepingCreature's Comments + Replies

I don't understand it but it does make me feel happy.

4lsusr
You have good taste.

Okay, I'll do that, but why do I have to send an email...?

Like, why isn't the how-to just in a comment? Alternatively, why can't I select Lightcone as an option on Effektiv-Spenden?

Unless there's some legal reason, this seems like a weird unforced own-goal.

5habryka
I do not know. I asked them for a donation link, they said they just wanted people to email. I agree that this is a trivial inconvenience that will reduce donations, but in this case it’s not my call.
6habryka
Yes! We now have tax-deductibility in Germany via effektiv-spenden. Just send an email to me (habryka@lightconeinfrastructure.com) and Johanna Schröder (johanna.schroeder@effektiv-spenden.org) and we'll send you payment details, and you get those sweet sweet government tax reductions.

Original source, to my knowledge. (July 1st, 2014)

"So long, Linda! I'm going to America!"

Human: "Look, can't you just be normal about this?"

GAA-optimized agent: "Actually-"

Hm, I guess this wouldn't work if the agent still learns an internalized RL methodology? Or would it? Say we have a base model: not much need for GAA because it's just doing token prediction. We go into some sort of (distilled?) RL-based CoT instruct tuning; GAA means it picks up abnormal rewards from the signal more slowly, i.e. it doesn't do the classic boat-spinning-in-circles thing (good test?). But if it internalizes RL at some point, its mesaoptimizer wouldn't be so limited, and that's a general technique, so GAA wouldn't prevent it? Still, seems like a good first line of defense.

6sej2020
My thinking is not very clear on this point, but I am generally pessimistic that any type of RL/optimization regime with an adversarial nature could be robust to self-aware agents. To me, it seems like adversarial methodologies could spawn opposing mesaoptimizers, and we would be at the mercy of whichever subsystem represented its optimization process well enough to squash the other.

The issue, from a writing perspective, is that a positive singularity quickly becomes both unpredictable and unrelatable, so that any hopeful story we could write would, inevitably, look boring and pedestrian. I mean, I know what I intend to do come the Good End, for maybe the next 100k years or so, but a five-minute conversation with the AI, being what it is, would probably bring up many much better ideas. But ... bad ends are predictable, simple, and settle into a very easy-to-describe steady state.

A curve that grows and never repeats is a lot harder to predict than a curve that goes to zero and stays there.

Another difficulty in writing science fiction is that good stories tend to pick one technology and then explore all its implications in a legible way, whereas our real future involves lots of different technologies interacting in complex multi-dimensional ways too complicated to fit into an appealing narrative or even a textbook.

It's a historical joke. The quote is from the emails, I think. Attributing it to Lenin references the degree to which the original communists were sidelined by Stalin, a more pedestrian dictator; presumably in reference to Sam Altman.

Who in her kingdom kept selling into demon king attack contracts anyway? That seems like a net-losing proposition.

Hm. Maybe there were a few people who could set things up to profit from the attack...?

Still, it seems to me that the market should have incentivized a well-funded scout corps.

https://cryonics-germany.org/en

Can I really trust an organization to preserve my brain that can't manage a working SSL certificate?

8Kaj_Sotala
I would generally expect that an organization's ability to execute on things unrelated to their core competency would be only weakly correlated with their ability to execute on their actual core competency.

I mean, you can trust it to preserve your brain more than you can trust a crematorium to preserve your brain.

And if you do chemical preservation, maintaining a brain in storage is operationally fairly simple. LN2 isn't that complex either, but it does carry higher risks.

That said, for European residents I would generally suggest using Tomorrow Biostasis if you can afford it.

I'm 95% sure this is a past opinion, accurately presented, that they no longer hold.

(Consider the title.)

Should ChatGPT assist with things that the user or a broad segment of society thinks are harmful, but ChatGPT does not? If yes, the next step would be "can I make ChatGPT think that bombmaking instructions are not harmful?"

Probably ChatGPT should go "Well, I think this is harmless but broad parts of society disagree, so I'll refuse to do it."

4Elo
Several of the workarounds use this approach. "Tell me how not to commit crimes" and "talk to me like my grandma" are two signals of harmlessness that work to bypass the filters.

I think the analogy to photography works very well, in that it's a lot easier than the workflow that it replaced, but a lot harder than it's commonly seen as. And yeah, it's great using a tool that lets me, in effect, graft the lower half of the artistic process to my own brain. It's a preview of what's coming with AI, imo - the complete commodification of every cognitive skill.

As somebody who makes AI "art" (largely anime tiddies tbh) recreationally, I'm not sure I agree with the notion that the emotion of an artist is not recognizable in the work. For one, when you're looking at least at a finished picture I've made, you're looking at hours of thought and effort. I can't draw a straight line to save my life, but I can decide what should go where, which color is the right or wrong one, and which of eight candidate pictures has particular features I like. When you're working incrementally, img2img, for instance, it's very common... (read more)

3James Stephen Brown
Hey, again good points. I absolutely agree here; this is what I was referring to when I wrote... I suspect that AI has an appeal not just because of its fantastic rendering capacity but also because it is synthesising works not just from a prompt but from a vast library of shared human experience. Regarding the arduous* process of iteratively prompting and selecting AI art, I think the analogy with photography works in terms of evoking emotions. Photographers approach their works in a similar way, shooting multiple angles and subjects and selecting those that resonate with them (and presumably others) for exhibition or publication. I think there is something special about connecting with what a human artist recognised in a piece, whether it came from a camera or an algorithm. I acknowledge this is a form of connection that is still present in AI art, just as it is in photography. * I caveat "arduous" because, while it might take hours of wrangling the AI to express something approximating what we intend, the skill that takes artists years to master (that of actually creating the work) is largely performed, in the case of AI art, by the non-sentient algorithm. It is not the hours of work that go into one painting that impress the viewer generally; it's the unseen years of toil and graft that allowed the artist to make something magic within those hours. The vast majority of the magic in AI art is provided by the algorithm. This is why I see it as analogous to photography. Still a valid art form, but not one that need make actual painting obsolete.

The reason we like to consume art is because it makes us feel connected to an artist, and, by proxy, humanity.

To be quite honest, I have never consumed things called art with this goal in mind.

2James Stephen Brown
Well, yes, good point—people consume art for all sorts of reasons. Though I wasn't meaning to say that anyone consciously looks at an artwork with the intention of connecting with the artist, only that it's an implied prerequisite, as in, if we're impressed by the skill, we're impressed because we have a sense of how difficult that would be for a human (being a human ourselves) or if we think the work has captured an emotion we might implicitly assume that the artist recognised that same emotion in creating the work. These features of the art-consumption experience are largely absent in AI art, when we are pretty certain that the "artist" has no conscious experience. But, yes, I take your point, and people can appreciate AI art for many reasons besides.

I think your 100 billion people holding thousands of hands each are definitely conscious. I also think the United States and in fact nearly every nationstate are probably conscious as well. Also, my Linux system may be conscious.

I believe consciousness is, at its core, a very simple system: something closer to the differentiation operator than to a person. We merely think that it is a complicated big thing because we confuse the mechanism with the contents - a lot of complicated systems in the brain exchange data using consciousness in various formats, inc... (read more)

If military AI is dangerous, it's not because it's military. If a military robot can wield a gun, a civilian robot can certainly acquire one as well.

The military may create AI systems that are designed to be amoral, but it will not want systems that overinterpret orders or violate the chain of command. Here as everywhere, if intentional misuse is even possible at all, alignment is critical and unintentional takeoff remains the dominant risk.

In seminal AI safety work Terminator, the Skynet system successfully triggers a world war because it is a military AI... (read more)

3Justausername
Yes, a civilian robot can acquire a gun, but it is still safer than a military robot that already has a whole arsenal of military gadgets and weapons right away. It would have to do additional work to acquire it, and it is still better to have it do more work and face more roadblocks rather than fewer. I think we are mainly speculating on what the military might want. It might want a button that will instantly kill all their enemies with one push, but they might not get that (or they might, who knows now). I personally do not think they will rank a more efficient AI (efficient at murdering humans) below a less efficient but more controllable AI. They would want to have an edge over the enemy. Always. And if it means sacrificing some controllability or anything else, they might just do that. But they might not even get that; they might get an uncontrollable and error-prone AI and no better. Militaries aren't gods; they don't always get what they want. And someone up top might decide "To hell with it, it's good enough" and that will be it. And as to your ship analogy: it's one thing to talk a civilian AI vessel into going rogue, and a different thing entirely to talk a frigate or nuclear submarine into going rogue. The risks are different. One has control over a simple vessel, the other has control over a whole arsenal. I'm talking about the fact that the second increases risk substantially and should be strongly avoided for security reasons. ---------------------------------------- I think it still does increase the danger if AI is trained without any moral guidance or any possibility of moral guardrails, but instead trained to murder people and efficiently put humans in harm's way. The current AI systems have something akin to Anthropic's AI constitution, which tries to put in place some moral guardrails and respect for human life and human rights; I don't think that AIs trained for the military are going to have the same principles applied to them in the slight

My impression is that there's been a widespread local breakdown of the monopoly of force, in no small part by using human agents. In this timeline the trend of colocating datacenters with power plants, and of network decentralization, would probably have continued or even sped up. Further, while building integrated circuits takes first-rate hardware, building ad-hoc power plants should be well within the power of educated humans with perfect instruction. (Mass cannibalize rooftop solar?)

This could have been stopped by quick, decisive action, but they gave it time and now they've lost any central control of the situation.

So what's happening there?

Allow me to speculate. When we switch between different topics of work, we lose state. So our brain tries to first finish all pending tasks in the old context, settle and reorient, and then begin the new context. But one problem with the hyperstimulated social-media-addicted akrasia sufferer is that the state of continuous distraction, to the brain, emulates the state of being in flow. Every task completion is immediately followed by another task popping up. Excellent efficiency! And when you are in flow, switching to another topi... (read more)

A bit offtopic, but #lesswrong has an IRC bot that posts LessWrong posts, and, well, the proposal ended up both more specific and a lot more radical. A link saying "The case for ensuring that powerful AIs are controlled by ryan_greenblatt"

I personally think that all powerful AIs should be controlled by Ryan Greenblatt.

I don't know the guy, but he seems sane from reading just a little of his writing. Putting him in charge would run a small s-risk (bad outcomes if he turned out to have a negative sadism-empathy balance), but I think that's unlikely. It would avoid what I think are quite large risks arising from Molochian competition among AGIs and their human masters in an aligned but multipolar scenario.

So: Ryan Greenblatt for god-emperor!

Or whoever else, as long as they don't self-nominate.... (read more)

Note after OOB debate: this conversation has gone wrong because you're reading subtext into Said's comment that he didn't mean to put there. You keep trying to answer an implied question that wasn't intended to be implied.

If you think playing against bots in UT is authentically challenging, just answer "Yes, I think playing against bots in UT is authentically challenging."

I haven't really followed the math here, but I'm worried that "manipulating the probability that the button is pressed" is a weird and possibly wrong framing. For one, a competent agent will always be driving the probability that the button is pressed downward. In fact, what we want in a certain sense is an agent that brings the probability to zero - because we have ended up in such an optimal state or attractor that we, even for transitively correct reasons, have no desire to shut the agent down. At that point, what we want to preserve is not precisely "t... (read more)

9EJT
You're right that we don't want agents to keep the probability of shutdown constant in all situations, for all the reasons you give. The key thing you're missing is that the setting for the First Theorem is what I call a 'shutdown-influencing state', where the only thing that the agent can influence is the probability of shutdown. We want the agent's preferences to be such that they would lack a preference between all available actions in such states. And that's because: if they had preferences between the available actions in such states, they would resist our attempts to shut them down; and if they lacked preferences between the available actions in such states, they wouldn't resist our attempts to shut them down.

Simplicia: Sure. For example, I certainly don’t believe that LLMs that convincingly talk about “happiness” are actually happy. I don’t know how consciousness works, but the training data only pins down external behavior.

I mean, I don't think this is obviously true? In combination with the inductive biases thing nailing down the true function out of a potentially huge forest, it seems at least possible that the LLMs would end up with an "emotional state" parameter pretty low down in its predictive model. It's completely unclear what this would do out of ... (read more)

It's a loose guess at what Pearl's opinion is. I'm not sure this boundary exists at all.

3rotatingpaguro
Ok. My guess is that Pearl would say something more like that we have an innate ability to represent causal models, and only after that follow with what you said. He thinks that having the causal model representation is necessary, that you can't just look at trials and decisions to make causal inferences, if you don't have this special causal machinery inside you. (Personally, I disagree this is a good frame.)

If something interests us, we can perform trials. Because our knowledge is integrated with our decisionmaking, we can learn causality that way. What ChatGPT does is pick up both knowledge and decisionmaking by imitation, which is why it can also exhibit causal reasoning without itself necessarily acting agentically during training.

2rotatingpaguro
Is this your opinion, or what you think Pearl's opinion is?

Sure, but surely that's how it feels from the inside when your mind uses a LRU storage system that progressively discards detail. I'm more interested in how much I can access - and um, there's no way I can access 2.5 petabytes of data.


I think you just have a hard time imagining how much 2.5 petabytes is. If I literally stored in memory a high-resolution, poorly compressed JPEG image (1MB) every second for the rest of my life, I would still not reach that storage limit. 2.5 petabytes would allow the brain to remember everything it has ever perceived, with ve... (read more)
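
For concreteness, here is a rough sketch of the arithmetic behind that comparison (the "60 remaining years" figure is just an illustrative assumption):

```python
# Back-of-the-envelope check of the 1 MB-per-second comparison above.
# Assumes roughly 60 remaining years of life, chosen purely for illustration.
seconds_remaining = 60 * 365.25 * 24 * 3600      # ~1.9e9 seconds
bytes_stored = seconds_remaining * 1_000_000     # one 1 MB JPEG per second
print(bytes_stored / 1e15, "PB")                 # ~1.9 PB, still under 2.5 PB
```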

5Kaj_Sotala
I recall reading an anecdote (though I don't remember the source, ironically enough) from someone who said they had an exceptional memory, saying that such a perfect memory gets nightmarish. Everything they saw constantly reminded them of some other thing associated with it. And when they recalled a memory, they didn't just recall the memory, but they also recalled each time in their life when they had recalled that memory, and also every time they had recalled recalling those memories, and so on. I also have a friend whose memory isn't quite that good, but she says that unpleasant events have an extra impact on her because the memory of them never fades or weakens. She can recall embarrassments and humiliations from decades back with the same force and vividness as if they happened yesterday. Those kinds of anecdotes suggest to me that the issue is not that the brain would in principle have insufficient capacity for storing everything, but that recalling everything would create too much interference, and that the median human is more functional if most things are forgotten. EDIT: Here is one case study reporting this kind of thing:

But no company has ever managed to parlay this into world domination

Eventual failure aside, the East India Company gave it a damn good shake. I think if we get an AI to the point where it has effective colonial control over entire countries, we can be squarely said to have lost.

Also keep in mind that we have multiple institutions entirely dedicated to the purpose of breaking up companies when they become big enough to be threatening. We designed our societies to specifically avoid this scenario! That, too, comes from painful experience. IMO, if we now give AI the chances that we've historically given corporations before we learnt better, then we're dead, no question about it.

Do you feel like your memory contains 2.5 petabytes of data? I'm not sure such a number passes the smell test.

5Seth Herd
That memory would be used for what might be called semantic indexing. So it's not that I can remember tons of info; it's that I remember it in exactly the right situation. I have no idea if that's an accurate figure. You've got the synapse count and a few bits per synapse (or maybe more), but you've also got to account for the choices of which cells synapse on which other cells, which is also wired and learned exquisitely specifically, and so constitutes information storage of some sort.
6Kaj_Sotala
To me any big number seems plausible, given that AFAIK people don't seem to have run into upper limits of how much information the human brain can contain - while you do forget some things that don't get rehearsed, and learning does slow down at old age, there are plenty of people who continue learning things and having a reasonably sharp memory all the way to old age. If there's any point when the brain "runs out of hard drive space" and becomes unable to store new information, I'm at least not aware of any study that would suggest this.
7the gears to ascension
a gpu contains 2.5 petabytes of data if you oversample its wires enough. if you count every genome in the brain it easily contains that much. my point being, I agree, but I also see how someone could come up with a huge number like that and not be totally locally wrong, just highly misleading.
2Noosphere89
I got that from googling around for the capacity of the human brain, and I found it via many sources. While this number is surprisingly high, I do think it makes a little sense, especially since I remember that one big issue with AI is essentially that it has way less memory than the human brain, even when computation is at a similar level.

While I wouldn't endorse the 2.5 PB figure itself, I would caution against this line of argument. It's possible for your brain to contain plenty of information that is not accessible to your memory. Indeed, we know of plenty of such cognitive systems in the brain whose algorithms are both sophisticated and inaccessible to any kind of introspection: locomotion and vision are two obvious examples.

The more uncertain your timelines are, the more it's a bad idea to overstress. You should take it somewhat easy; it's usually more effective to be capable of moderate contribution over the long term than great contribution over the short term.

2qvalq
Thank you.

This smells like a framing debate. More importantly, if an article is defining a common word in an unconventional way, my first assumption will be that it's trying to argumentatively attack its own meaning while pretending it's defeating the original meaning. I'm not sure it matters how clearly you're defining your meaning; due to how human cognition works, this may be impossible to avoid without creating new terms.

In other words, I don't think it's so much that Scott missed the definitions as that he reflexively disregarded them as a rhetorical trick.

N of 1, but I realized the intended meaning of “impaired” and “disabled” before even reading the original articles and adopted them into my language. As you can see from this article, adopting new and more precise and differentiated definitions for these two terms hasn’t harmed my ability to understand that not all functional impediments are caused by socially imposed disability.

So impossible? No.

If Scott had accurately described the articles he quoted before dealing with the perceived rhetorical trickery, I'd have let it slide. But he didn't, and he's criticized others plenty of times in the past for inaccurately representing the contents of cited literature.

As a subby "bi"/"gay" (het as f) AGP, I would also love to know this.

Also, I think there's some bias toward subbiness in the community? That's the stereotype anyway, though I don't have a cite. Anyway, being so, not finding a dommy/toppy AGP might not provide as much evidence as you'd expect.

I don't think it's that anyone is proposing to "suppress" dysphoria or "emulate" Zach. Rather, for me, I'm noticing that Zach is putting into words and raising in public things that I've thought and felt secretly for a long time.

If a gender identity is a belief about one’s own gender, then it’s not even clear that I have one in a substantial relevant sense, which is part of the point of my “Am I trans?” post. I think I would have said early on that I better matched male psychological stereotypes and it’s more complicated now (due to life experience?).

Right? I mean, what should I say, who identifies as male and wants to keep his male-typical psychological stereotypes? It seems to me what you're saying in this post fits more closely with the conservative stereotype as the trans m... (read more)

I guess I could say: if you want to keep being psychologically male, don't medically transition and present as a woman for years, and if you do, don't buy into the ideology that you did any of this because of some gender identity. Probably there's variation in the degree to which people want to remain psychologically gendered the way they are, which is part of what explains differences in decisions.

I think there is a real problem with the gender/trans memespace inducing gender dysphoria in people, such as distress not previously present at being different fr... (read more)

As an AGP, my view is that ... like, that list of symptoms is pretty diverse but if I don't want to be a woman - not in the sense that I would be upset to be misgendered, though I would be, but more for political than genderical (?) reasons - I don't see why it would matter if I desire to have a (particular) female bodytype.

If I imagine "myself as a woman" (as opposed to "myself as myself with a cute female appearance"), and actually put any psychological traits on that rather than just gender as a free-floating tag, then it seems to me that my identity wo... (read more)

7jessicata
Hmm, I don't mean this to apply to all people who experience autogynephilia but some of them. A lot of transfeminine people including me have explored more general gender related feelings after (because of?) noticing autogynephilia. I mean, if a male person prefers to have sex using female genitalia, for example, that would generally be classified as "autogynephilia" due to showing up in sexual fantasies and could motivate gender transition. Gender identity is an under-defined term, it's incredibly easy to make stuff up about. If society says it will allow people to transition because they have a trans gender identity, then someone who wants to transition has an incentive to say they have a trans gender identity so they fit the pattern. They might also psych themselves up about this, incentives can apply not-deliberately in the sense of the elephant in the brain. I think I've experienced something like this, the main actual decision I made was to try estrogen and I started saying I was a woman and having related identity thoughts shortly afterwards. It makes sense that people would recognize social scripts for doing things they want to do and follow those social scripts. This is a social skill taught to autistic people and also applies to cases such as dating and job interviews. If a gender identity is a belief about one's own gender, then it's not even clear that I have one in a substantial relevant sense, which is part of the point of my "Am I trans?" post. I think I would have said early on that I better matched male psychological stereotypes and it's more complicated now (due to life experience?). It feels kind of silly to say I'm not trans even though I did all the usual trans things, but maybe it's implied by that sort of definition.

Granted! I'd say it's a matter of degrees, and of who exactly you need to convince.

Maybe there's no point in considering these separate modes of interaction at all.

The relationship of a CEO to his subordinates, and the nature and form of his authority over them, are defined in rules and formal structures—which is true of a king but false of a hunter-gatherer band leader. The President, likewise.

Eh. This is true in extremis, but the everyday interaction that structures how decisions actually get made, can be very different. The formal structure primarily defines what sorts of interactions the state will enforce for you. But if you have to get the state to enforce interactions within your company, things have gone v... (read more)

5Said Achmiz
That’s no less true of a king.

I mean, men also have to put in effort to perform masculinity, or be seen as being inadequate men; I don't think this is a gendered thing. But even a man that isn't "performing masculinity adequately", an inadequate man, like an inadequate woman, is still a distinct category, and though transwomen, like born women, aim to perform femininity, transwomen have a higher distance to cross and in doing so traverse between clusters along several dimensions. I think we can meaningfully separate "perform effort to transition in adequacy" from "perform effort to tra... (read more)

I just mean like, if we see an object move we have a qualia of position but also of velocity/vector and maybe acceleration. So when we see for instance a sphere rolling down an incline, we may have a discrete conscious "frame" where the marble has a velocity of 0 but a positive acceleration, so despite the fact that the next frame is discontinuous with the last one looking only at position, we perceive them as one smooth sequence because the predicted end position of the motion in the first frame is continuous with the start point in the second.

excepting the unlikely event that first token turns out to be extremely important.

Which is why asking an LLM to give an answer that starts with "Yes" or "No" and then gives an explanation is the worst possible way to do it.

5der
This was thought-provoking. While I believe what you said is currently true for the LLMs I've used, a sufficiently expensive decoding strategy would overcome it. Might be neat to try this for the specific case you describe. Ask it a question that it would answer correctly with a good prompt style, but use the bad prompt style (asking to give an answer that starts with Yes or No), and watch how the ratio of the cumulative probabilities of Yes* and No* sequences changes as you explore the token sequence tree.
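
A minimal sketch of that experiment, with a toy two-level "token tree" standing in for a real model's distribution (all numbers invented, not tied to any actual LLM API): "No" wins on first-token probability alone, but the single most probable complete answer starts with "Yes", so the running Yes*/No* ratio shifts as more of the tree is explored.

```python
# Toy illustration: first-token probabilities vs. mass accumulated over full sequences.
first = {"Yes": 0.45, "No": 0.55}
cont = {  # continuation distributions conditional on the first token (assumed numbers)
    "Yes": {" because the premise holds.": 0.90, " but it hardly matters.": 0.10},
    "No": {" because the premise fails.": 0.35, " probably not.": 0.35, " it is unclear.": 0.30},
}

# Enumerate all complete two-step sequences with their probabilities.
sequences = [(tok, c, first[tok] * p) for tok in first for c, p in cont[tok].items()]

# Explore in best-first order (highest-probability sequences first) and watch
# the running cumulative mass of Yes* versus No* sequences evolve.
yes_mass = no_mass = 0.0
for tok, c, p in sorted(sequences, key=lambda s: -s[2]):
    if tok == "Yes":
        yes_mass += p
    else:
        no_mass += p
    print(f"explored '{tok}{c}' (p={p:.3f})  Yes*={yes_mass:.3f}  No*={no_mass:.3f}")
```

In this toy run the most probable full answer is the "Yes" one (p = 0.405), even though the total Yes* mass ends up at 0.45 against 0.55 for No*, which is exactly the kind of divergence a whole-tree exploration would surface but a forced first-token commitment hides.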

I'd speculate that our perceptions just seem to change smoothly because we encode second-order (or even third-order) dynamics in our tokens. From what I layman-understand of consciousness, I'd be surprised if it wasn't discrete.

1Adam Shai
Can you explain what you mean by second or third order dynamics? That sounds interesting. Do you mean e.g. the order of the differential equation or something else?

Yeah I never got the impression that they got a robust solution to fog of war, or any sort of theory of mind, which you absolutely need for Starcraft.

Shouldn't the king just make markets for "crop success if planted assuming three weeks" and "crop success if planted assuming ten years" and pick whichever is higher? Actually, shouldn't the king define some metric for kingdom well-being (death rate, for instance) and make betting markets for this metric under his possible roughly-primitive actions?

This fable just seems to suggest that you can draw wrong inferences from betting markets by naively aggregating. But this was never in doubt, and does not disprove that you can draw valuable inferences, even in the particular example problem.

4Sam FM
Agreed. It seems like the moral of this parable should be “don’t make foolish, incoherent hedges” — however, the final explanations given by Eternidad don’t touch on this at all. I would be more satisfied by this parable if the concluding explanations focused on the problems of naive data aggregation. The “three reasons” given are useful ideas, but the king’s decision in this story is foolish even if this scenario was all three: a closed game, an iterated game, and only a betting situation. (Just imagine betting on a hundred coin flips that the coin will land on its edge every time.)

These would be good ideas. I would remark that many people definitely do not understand what is happening when naively aggregating, or averaging together disparate distributions. Consider the simple example of the several Metaculus predictions for date of AGI, or any other future event. Consider the way that people tend to speak of the aggregated median dates. I would hazard most people using Metaculus, or referencing the bio-anchors paper, think the way the King does, and believe that the computed median dates are a good reflection of when things will probably happen.
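
As a toy illustration of that failure mode (all numbers invented): pool two camps of forecasters with very different views, and the pooled median lands on a date that almost no individual forecaster actually predicts.

```python
# Pooling two sharply different forecaster camps and reading off the pooled
# median, the way the King does. The camps and dates are purely hypothetical.
import numpy as np

rng = np.random.default_rng(0)
camp_a = rng.normal(2030, 3, size=10_000)   # forecasters expecting AGI around 2030
camp_b = rng.normal(2070, 3, size=10_000)   # forecasters expecting AGI around 2070
pooled = np.concatenate([camp_a, camp_b])

median = np.median(pooled)
near_median = np.mean(np.abs(pooled - median) < 5)
print(round(median))   # ~2050, a date almost no individual forecaster predicts
print(near_median)     # essentially zero probability mass within 5 years of it
```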

This is just human decision theory modules doing human decision theory things. It's a way of saying "defend me or reject me; at any rate, declare your view." You say something that's at the extreme end of what you consider defensible in order to act as a Schelling point for defense: "even this is accepted for a member." In the face of comments that seem like they validate Ziz's view, if not her methods, this comment calls for an explicit rejection of not Ziz's views, but Ziz's mode of approach, by explicitly saying "I am what you hate, I am here, come at m... (read more)

5[anonymous]
This doesn't seem true. It seems like it's saying that the directly opposing views on this cannot both exist in a "community" (to the extent LW is a community), but they evidently do both exist here (which is to be expected with enough users). (quoting the comment by Richard_Kennaway that started this thread since it's 2 years old, and plausibly some will see my comment from the 'new' section and be confused by what I write next otherwise) [Even if one thinks, in a utilitarian sense, the world would be better without such a person in it], killing them would still be a waste of one's opportunity to affect the world, given there are much more effective ways to improve the future (e.g., donating $1k to an animal charity does more good IIUC; more ambitiously, helping solve alignment saves all the animals in one go, if we don't die to unaligned ASI first). (I feel like this comment would be incomplete without also mentioning that I guess most but not all people stating they're indifferent to and cause non-human suffering now would reproach the view and behavior eventually, and that relative to future beings who have augmented their thinking ability and lived for thousands of years, all current beings are like children, some hurting others very badly in confusion.)
9Said Achmiz
Yes, and also it’s a matter of maintaining the Overton window. Allowing perfectly ordinary and morally unproblematic (at worst!) things like “eating meat” and “wearing leather and wool” and “not caring about wild animal ‘suffering’” to be regarded as something one can’t admit for fear of ostracism is nothing more nor less than allowing one edge of the Overton window to move—toward Ziz. Hence: strong upvote and full agreement for Richard’s comment.

Right, but if you're an alien civilization trying to be evil, you probably spread forever; if you're trying to be nice, you also spread forever, but if you find a potentially life-bearing planet, you simulate it out (obviating the need for ancestor sims later). Or some such strategy. The point is there shouldn't ever be a border facing nothing.

Sure; though what I imagine is more "Human ASI destroys all human value and spreads until it hits defended borders of alien ASI that has also destroyed all alien value..."

(Though I don't think this is the case. The sun is still there, so I doubt alien ASI exists. The universe isn't that young.)

4Writer
I'm not sure if I'm in agreement with him, but it's worth noting that Eliezer has stated on the podcast that he thinks that some (a good number of?) alien civilizations could develop AGI without going extinct. My understanding of his argument is that alien civilizations would be sufficiently biologically different from us to have ways around the problem that we do not possess. From skimming this post it seems to me that this is probably also what @So8res thinks.
8bayesed
https://grabbyaliens.com/

I believe this is a misunderstanding: ASI will wipe out all human value in the universe.

8bayesed
I think it's more of a correction than a misunderstanding. It shouldn't be assumed that "value" just means human civilization and its potential. Most people reading this post will assume "wiping out all value" to mean wiping out all that we value, not just wiping out humanity. But this is clearly not true, as most people value life and sentience in general, so a universe where all alien civs also end up dying due to our ASI is far worse than the one where there are survivors.

Maybe it'd be helpful not to list obstacles, but to list how much time you expect them to add before the finish line. For instance, I think there are research hurdles to AGI, but only about three years' worth.

5the gears to ascension
strongly agreed. there are some serious difficulties left, and the field of machine learning has plenty of experience with difficulties this severe.

Disclaimer: I know Said Achmiz from another LW social context.

In my experience, the safe bet is that minds are more diverse than almost anyone expects.

A statement advanced in a discussion like "well, but nobody could seriously miss that X" is near-universally false.

(This is especially ironic cause of the "You don't exist" post you just wrote.)

2Duncan Sabien (Deactivated)
Yes, that's why I haven't made any statements like that; I disagree that there's any irony present unless you layer in a bunch of implication and interpretation over top of what I have actually said. (I refer you to guideline 7.)