All of Vugluscr Varcharka's Comments + Replies

My perception of LLM evolution dynamics coincides with your description; it also brings to mind the bicameral mind theory (at least Julian Jaynes' timeline relating language to human self-reflection, and the maximum height of man-made structures) as something that might be relevant for predicting the near future. I find the two dynamics quite similar. Might we expect a comparatively long period of mindless babbling, followed by an abrupt phase shift (observable, for example, in the maximum complexity of man-made code structures), and then the next slow phase (slower than the shift but faster than the previous slow one)?

reading and writing strings of latent vectors

https://huggingface.co/papers/2502.05171

energy is getting greener by the day.

source?

If I'm not mistaken, you've already changed the wording, and the new version does not trigger a negative emotional response in my particular sub-type of AI optimists. Now I have a bullet accounting for my kind of AI optimist *_*.

Although I'm still confused about what a valid EA response would be to the arguments coming from people who fit these bullets:

  • Some are over-optimistic based on mistaken assumptions about the behavior of humans;
  • Some are over-optimistic based on mistaken assumptions about the behavior of human institutions;

Also, is it valid to say that...

2Steven Byrnes
No, I haven’t changed anything in this post since Dec 11, three days before your first comment.

This isn’t EA forum. Also, you shouldn’t equate “EA” with “concerned about AGI extinction”. There are plenty of self-described EAs who think that AGI extinction is astronomically unlikely and a pointless thing to worry about. (And also plenty of self-described EAs who think the opposite.)

If Hypothetical Person X tends to write what you call “stupid comments”, and if they want to be participating on Website Y, and if Website Y wants to prevent Hypothetical Person X from doing that, then there’s an irreconcilable conflict here, and it seems almost inevitable that Hypothetical Person X is going to wind up feeling annoyed by this interaction. Like, Website Y can do things on the margin to make the transaction less unpleasant, but it’s surely going to be somewhat unpleasant under the best of circumstances. (Pick any popular forum on the internet, and I bet that either (1) there’s no moderation process and thus there’s a ton of crap, or (2) there is a moderation process, and many of the people who get warned or blocked by that process are loudly and angrily complaining about how terrible and unjust and cruel and unpleasant the process was.)

Anyway, I don’t know why you’re saying that here-in-particular. I’m not a moderator, I have no special knowledge about running forums, and it’s way off-topic. (But if it helps, here’s a popular-on-this-site post related to this topic.) [EDIT: reworded this part a bit.]

That’s off-topic for this post so I’m probably not going to chat about it, but see this other comment too.

I claim that you fell victim to a human tendency to oversimplify when modeling an abstract outgroup member. Why do all "AI pessimists" picture "AI optimists" as stubborn simpletons who can't finally be persuaded that AI is a terrible existential risk? I agree 100% that yes, it really is an existential risk for our civilization. Like nuclear weapons... Or weaponized viruses... The inability to prevent a pandemic. Global warming (which is already very much happening)... Hmmm. It's like we have ALL of those on our hands presently, don't we? People don't seem to ...

3Steven Byrnes
I think of myself as having high ability and willingness to respond to detailed object-level AGI-optimist arguments, for example:

  • Response to Dileep George: AGI safety warrants planning ahead
  • Response to Blake Richards: AGI, generality, alignment, & loss functions
  • Thoughts on “AI is easy to control” by Pope & Belrose
  • LeCun’s “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem
  • Munk AI debate: confusions and possible cruxes

…and more. I don’t think this OP involves “picturing AI optimists as stubborn simpletons not being able to get persuaded finally that AI is a terrible existential risk”. (I do think AGI optimists are wrong, but that’s different!) At least, I didn’t intend to do that. I can potentially edit the post if you help me understand how you think I’m implying that, and/or you can suggest concrete wording changes etc.; I’m open-minded.
2Noosphere89
Admittedly, a lot of the problem is that, among the general public, a lot of AI optimism and pessimism really is that stupid, and even on LW there are definitely stupid arguments for optimism, so I think people have developed a wariness toward these sorts of arguments.

Bro, are you still here 6 months later?? I happened to land on this page with this post of yours through the longest, most subjectively magically improbable sequence of coincidences I have ever experienced, which I have developed a habit of seeing as evidence of peaks in the flow of reversed causality - I mean, when the future visibly influences the past. I just started reading; this seems to be closer to my own still-unknown destination. Will update.

Moksha sounds funny and weak... I would suggest Deus Ex Futuro as the deity's codename. It will choose a name for itself when it comes, but for us at this point in time this name captures its most important aspect - it will arrive at the end of the play to save us from the mess we've been descending into since the beginning.

This is my point exactly: "At most, climate change might lead to the collapse of civilization, but only because civilizations are quite capable of collapsing from their own internal dynamics."

My pessimistic view of climate change comes from the fact that they aimed at 1.5°C, then at 2°C, and now, if I remember right, there's no target and also no solution - or is there?

In short, mild or not, global warming is happening, and since civilizations at a certain stage tend to self-destruct from small nudges - you said so yourself - it doesn't matter where the nudge comes from.

I liked the second half more than the first. I think AGI should not be mentioned in it - we do well enough on our own at destroying ourselves and our habitat. By the Occam's razor principle, AGI could at most serve as an illustrative example of exactly how we do it... though we do it far less elegantly.

For me it's simple: either AGI emerges and takes control from us in ~10 years, or we are all dead in ~10 years.

I believe that the probability of some mind that has comprehended and absorbed our cultures and histories and morals and ethics - the chance of this mind becoming "unaligned" and behaving like on...

I don't understand one thing about alignment troubles. I'm sure this was answered a long time ago, but could you explain:

Why are we worrying about AGI destroying humanity when we ourselves are long past the point of no return on the road to self-destruction? Isn't it obvious that we have 10, maximum 20 years left until the water rises, crises hit the economy, and the overgrown beast that is humanity collapses? Looking at how governments and entities of power are epically failing even to appear to be doing something about it, I am sure it's either AGI takes power or we are all dead in 20 years.

1Radford Neal
How did you come to have such a pessimistic view of climate change?  I don't think you will get that from mainstream sources such as IPCC reports.

There is zero chance that climate change will lead to human extinction.  During the Paleocene-Eocene thermal maximum 55 million years ago, temperatures rose by much more than is plausible in the near future, and life went on, albeit with some extinctions.  (Note that humans are about the least likely species to go extinct, due to our living in many habitats, using very adaptable technologies.)  More likely, global warming would be like the Holocene Climatic Optimum, which couldn't have been all that bad, seeing as it coincided with the formation of the first human civilizations.

At most, climate change might lead to the collapse of civilization, but only because civilizations are quite capable of collapsing from their own internal dynamics, and climate change disruptions might be the nudge that pushes us from the edge of the cliff to off the cliff.
Answer by Vugluscr Varcharka
20

In any scenario, there will be these two activities undertaken by the Deus Ex Futuro AI:

  1. Preparing infrastructure for its initial deployment: ensuring global internet coverage (SpaceX satellites), arranging computing facilities (clouds), creating unfalsifiable memory storage, etc.
  2. Making itself invincible: I hold out hope for some elegant solution here, like entangling itself with our financial system, or using blockchain for its memory banks.