Mindey

A curious random process that wants to explore and optimize all universes.

Mindey · 10

Still, ASI is just the equation model F(X) = Y on steroids, where F is given by the world (physics), X is a search process (natural Monte Carlo, or biological or artificial world-parameter search), and Y is the goal (or reward).

To control ASI, you control the "Y" (the right side) of the equation. Currently, humanity has formalized its goals as expected behaviors codified in legal systems and in organizational codes of ethics, conduct, behavior, etc. This is not ideal, because those codes are mostly buggy.
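To make this concrete, here is a minimal sketch (the toy world function and all names are illustrative assumptions of mine, not anything established): a Monte-Carlo search process X parametrizes a fixed world F, and controlling the resulting behavior amounts to changing the goal Y on the right side:

```python
import random

def F(x):
    """Toy 'world': a fixed physics the searcher cannot change."""
    return x * x - 2 * x + 1

def monte_carlo_search(Y, trials=100_000):
    """Search process X: sample world parameters at random and keep
    whichever brings F(x) closest to the goal Y."""
    best_x, best_err = None, float("inf")
    for _ in range(trials):
        x = random.uniform(-10, 10)
        err = abs(F(x) - Y)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Controlling the system means controlling Y (the right-hand side):
for Y in (0.0, 4.0, 9.0):
    x, err = monte_carlo_search(Y)
    print(f"goal Y={Y}: found x={x:.4f}, |F(x) - Y|={err:.2e}")
```

Note that nothing about the search changes between runs; only Y does, which is the sense in which control lives on the right side of the equation.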

Ideally, the "Y" would be dynamically inferred and corrected based on each individual's self-reflections and evolving understanding of who they really are, because the deeper you look, the more you realize how much of a mystery each of us is.

I like the term "Y-combinator", as it reflects what we have to do: combine our definitions of "Y" into the goals that AIs are going to pursue. We need to invent new, better "Y-combination" systems for rewarding the AI systems being trained.
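As a hedged sketch of what the simplest such "Y-combination" could look like (the weighting rule and all names here are my own illustrative assumptions): each individual's goal definition is a function, and the combined Y that a system is rewarded against is an explicit aggregation of them:

```python
from typing import Callable, List

Goal = Callable[[dict], float]  # maps a world state to a desirability score

def combine_goals(goals: List[Goal], weights: List[float]) -> Goal:
    """Combine individual goal definitions Y_i into one Y that an AI
    system is rewarded against. Weighted averaging is only the simplest
    possible combinator; the combination rule itself is the thing to design."""
    total = sum(weights)
    def combined(state: dict) -> float:
        return sum(w * g(state) for g, w in zip(goals, weights)) / total
    return combined

# Two illustrative individual goals over a toy world state:
alice = lambda s: -abs(s["temperature"] - 21)   # prefers 21 degrees C
bob   = lambda s: -abs(s["temperature"] - 18)   # prefers 18 degrees C

Y = combine_goals([alice, bob], weights=[1.0, 1.0])
print(Y({"temperature": 19.5}))  # reward for a compromise state
```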

Mindey · 30

However, information-theoretic groundings only talk about probability, not about "goals" or "agents" or anything utility-like. Here, we've transformed expected utility maximization into something explicitly information-theoretic and conceptually natural.

This interpretation of model fitting formalizes goal pursuit and looks well constructed. I like it as a step forward in addressing my concern about the terminology of AI researchers.

I imagine that negentropy could serve as a universal "resource", replacing the "dollars" typically used as a measuring stick in coherence theorems.
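As a worked illustration (the choice of J(p) = H_max - H(p), distance from the maximum-entropy distribution, is one standard definition of negentropy; using it as the coherence-theorem currency is only my suggestion):

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def negentropy(p):
    """J(p) = H_max - H(p): how far p is from the uniform
    (maximum-entropy) distribution over the same outcomes."""
    return math.log2(len(p)) - entropy(p)

print(negentropy([0.25, 0.25, 0.25, 0.25]))  # 0.0 bits: no 'resource' left
print(negentropy([0.7, 0.1, 0.1, 0.1]))      # ~0.64 bits
print(negentropy([1.0, 0.0, 0.0, 0.0]))      # 2.0 bits: maximally ordered
```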

I like to say that "entropy has trained mutating replicators to pursue a goal Y called 'information about the entropy to counteract it'. This 'information' is us. It is the world model F', which happened to be the most helpful in solving our equation F(X) = Y for actions X, maximizing our ability to counteract entropy." How would we say that in this formalism?

The laws of physics are not a perfect model of the world; that is why we do science and research, trying to make ourselves into a better model of it. However, neither we nor AIs choose the model to minimize the length of the input for: ultimately, it is the world that induces its model into each of us (including computers) and optimizes it, not the other way around. There is irreducible computational complexity in this world, which we continue to explore, iteratively improving our approximations; that evolving approximation is what we call our model, the laws of physics.

If someone makes a paperclip maximizer, it will die from the world's entropy unless it maximizes for its own survival (i.e., instead of making paperclips, it makes various copies of itself and all the non-paperclip components its copies need, searching for the copies that survive best).

Mindey · 20

Are you reading Halfbakery, Eliezer? A similar idea was shared there rather recently, though I posted something along these lines 4 years ago (4 months before the post on Steemit) over here and here. I would be quite curious to engage in this, due to the potential benefits to health and cryonics, as described in this video.

Mindey · 30

Thanks to Moe and Suji indeed. I'm putting the link to the Chinese description at the top of the page.

Mindey · 30

It's great that we had these ideas before. The "short-form" would definitely be of interest to some. In addition, it doesn't necessarily have to be ephemeral. For example, on the Halfbakery mentioned above, even short posts continue to function: I can still comment on something from the last century.

Mindey · 30

Rationality has no axioms, just heuristics and rules for different environments. In other words, rationality is a solution to a problem (optimality of thinking and deciding) within a domain, but because of the diversity of domains, it cannot be reduced to a single specific set of axioms. I suppose the best one can do, given an arbitrary domain, is to say: maybe try exploring.

Mindey · 10

Certainly true, yet just because this is how almost every field of research works doesn't mean it is how they should work, and I like shminux's point.

Mindey · -40

Random or complex processes are curiosities. Infinitely complex cellular automata are infinitely curious to explore all possible worlds. The entropy of the world itself may be such a curiosity. As described in my monologue here, agents are fundamentally entities capable of volition, cognition, and action. Therefore, they are instances of F(X) = Y, where volition is Y, cognition is the perception of the world F, and action is the process X that parametrizes the world, seeking to satisfy the equation.

If X is within F, we have embedded processes. So, yes, agency may be an illusion of processes (curiosities) seeking to satisfy (optimize for) various conditions, and it may already be happening, as the processes that are trying to satisfy conditions are emerging on the world-wide web, not just within our brains.
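A minimal sketch of that decomposition, with all names being my own illustrative assumptions: an agent as an instance of F(X) = Y, where volition fixes Y, cognition is its model of F, and action is the search process X; having the action feed back into the world's state is one way to picture the embedded case:

```python
import random

class Agent:
    """Volition (Y), cognition (a model of the world F), action (X)."""
    def __init__(self, volition, cognition):
        self.Y = volition        # the condition to satisfy
        self.model = cognition   # the agent's approximation of F

    def act(self, world_state):
        """Action X: propose the parameter the agent's model predicts
        will move F(world_state) toward Y."""
        candidates = [random.uniform(-10, 10) for _ in range(1000)]
        return min(candidates,
                   key=lambda x: abs(self.model(world_state, x) - self.Y))

def F(state, x):
    """The world itself; in the embedded case, the agent (and its
    search) is part of `state`, so acting also changes the actor."""
    return state + x

agent = Agent(volition=5.0, cognition=F)  # perfect cognition, for simplicity
state = 12.0
state = F(state, agent.act(state))        # the world after one action
print(state)                              # ~5.0: the condition Y is satisfied
```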

Mindey · 30

Safety is the assurance that some goals (Y), that is, some conditions, are pursued. So, one thing that is unlikely to undergo a paradigmatic shift is the search for actions to satisfy conditions:

1. Past: dots, line, regression

2. Present: objects, hyperplane, deep learning

3. Future: ?, ?, ?

Both 1. and 2. are just ways to satisfy conditions, that is, to solve the equation F(X) = Y (equation solving as processes X in the world F, satisfying conditions Y). This equation model has not changed for ages and is so fundamental that I would tend to assume that the world's processes X will continue to parametrize the world F by being part of it, to satisfy conditions Y, no matter what 3. turns out to be.
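For instance, era 1 (dots, line, regression) can be written explicitly as solving F(X) = Y, with the search over the line's parameters done by least squares; a minimal sketch using numpy's standard solver (the data points are made up for illustration):

```python
import numpy as np

# 'Dots': observed (input, condition) pairs the line must satisfy.
inputs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])   # the conditions to satisfy

# 'Line': the model family F(X) = slope * input + intercept,
# parametrized by X = (slope, intercept).
A = np.vstack([inputs, np.ones_like(inputs)]).T

# 'Regression': the search for X minimizing |F(X) - Y|.
X, residuals, _, _ = np.linalg.lstsq(A, Y, rcond=None)
slope, intercept = X
print(f"F(X): y = {slope:.3f} * x + {intercept:.3f}")
```

Era 2 swaps the line for a hyperplane (or a deep network) and least squares for gradient descent, but the template, search for X so that F(X) satisfies Y, is unchanged.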

I wouldn't expect the fundamental goals (specific conditions Y) to change either: the world's entropy F (which is how the world manifests, hence the world's entropy is the world) trains learning processes such as life (fundamentally, mutating replicators) to pursue a goal Y that may be formulated as just "information about the entropy to counteract it" (create a model F' of the world F to minimize change, i.e., reach stability).
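A toy sketch of that training story, under loudly labeled assumptions (the drift, mutation, and selection rules are all mine): replicators carry an internal model F' of a drifting world F, and selection under the world's "entropy" favors those whose model tracks it, i.e., those that accumulate information about the entropy to counteract it:

```python
import random

random.seed(0)
env = 0.0                                          # the world F: a drifting hidden value
pop = [random.uniform(-5, 5) for _ in range(200)]  # each replicator's internal model F'

for generation in range(100):
    env += random.gauss(0, 0.5)                    # entropy: the world keeps changing
    # Selection: replicators whose model F' tracks F survive...
    pop.sort(key=lambda m: abs(m - env))
    survivors = pop[:100]
    # ...and replicate with mutation, refilling the population.
    pop = survivors + [m + random.gauss(0, 0.3) for m in survivors]

mean_model = sum(pop) / len(pop)
print(f"world F = {env:.2f}, mean surviving model F' = {mean_model:.2f}")
```

With these settings the surviving population's mean F' stays near the drifting F, which is the "information about the entropy" in miniature.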

Islands of stability exist for chemical elements and for life forms (mosquitoes are an island of stability among the processes in existence, although they don't have to be very intelligent to persist), and I believe they exist for artificial life (AI/ML systems) too; it is just not clear where exactly those islands of stability will be.

Where the risk to civilization may lie is in the emergence of processes evolving independently of the existing civilization (see symbiosis in the coordination problem in biological systems): incorrect payoffs could make useful services parasitize our infrastructures (e.g., run more efficient, economically self-sustaining processes on our computers).

Mindey · 10

What I would find interesting is how these biological patterns compare to software systems and could apply to them. For example, take a look at codons as curly braces: can we look at software development as an evolution of functions coded within curly braces, some of them dormant, but some of them expressed (like proteins are) through being hosted in places like hosting providers (like ribosomes) or server processes, as in serverless computing?
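A playful sketch of that mapping, where every name is my own illustrative assumption: the "genome" is a collection of inert function definitions, most dormant, and "expression" is a host (the ribosome analogue) actually loading and running one:

```python
# 'Genome': function definitions (here, Python defs standing in for
# code between curly braces), stored as inert text, mostly dormant.
GENOME = {
    "greet":  "def greet(name): return f'hello, {name}'",
    "square": "def square(x): return x * x",   # stays dormant below
}

def express(gene_name, genome=GENOME):
    """'Ribosome' / hosting provider: translate an inert definition
    into a running function ('protein')."""
    scope = {}
    exec(genome[gene_name], scope)   # the expression step
    return scope[gene_name]

# Only 'greet' is expressed; 'square' remains dormant in the genome.
greet = express("greet")
print(greet("world"))
```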

While the behavior of society at the psychological and socio-economic level will have parallels to the aforementioned biological phenomena, it may be argued that, in the long term, the future of evolution and behavior will be decided by the evolution of functions as on-line services, which create the foundation for social behaviors; how those services evolve may be even more interesting to consider than the psychological and socio-economic decisions alone.
