All of Akira Pyinya's Comments + Replies

Thank you for your introduction to Richard Jeffrey's theory! I just read some articles about his system and I think it's great. His utility theory built upon propositions is just what I want to describe. However, his theory still starts from given preferences without showing how we can get them (although these preferences should satisfy certain conditions), and my article argues that these preferences cannot be estimated using the Monte Carlo method.

Actually, ACI is an approach that can assign utility (preferences) to every p... (read more)

Thank you for your reply!

"The self in 10 minutes" is a good example of revealing the difference between ACI and the traditional rational intelligence model. In the rational model, the input information is sent to an atom-like agent, where decisions are made based on that input.

But ACI believes that's not how real-world agents work. An agent is a complex system made up of many different parts and levels: the heart receives mechanical, chemical, and electrical information from its past self and continues beating, but with different heart rates because of ... (read more)

Yes, of course some species went extinct, but that's why organisms today do not carry their genes (which contain information about how to live). Conversely, every ancestor of a living organism did not die before it reproduced.

I think I have already responded to that part. Who is the "caretaker that will decide what, when and how to teach the ACI"? The answer is natural selection or artificial selection, which work like filters. AIXI's "constructive, normative aspect of intelligence is 'assumed away' to the external entity that assigns rewards to different outcomes", while ACI's constructive, normative aspect of intelligence is also assumed away to the environment that has determined which behaviors were OK and which would get a possible ancestor out of the gene pool... (read more)

Thank you for your comment. I have spent some time reading the book Active Inference. I think active inference is a great theory, but it focuses on aspects of intelligence different from those ACI addresses.

ACI learns to behave the same way as the examples, so it can also learn ethics from examples. For example, if behaviors like "getting into a very cold environment" are excluded from all the examples, whether by natural selection or artificial selection, an ACI agent can learn a rule like "always stay away from cold" and use it in the future. If you want to ac... (read more)
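The selection-as-filter idea above can be sketched in a few lines of toy code (my own illustration, not part of ACI's actual formulation; all names are hypothetical): surviving example histories never contain the excluded behavior, so an agent that imitates the surviving examples never produces it either.

```python
from collections import Counter

# Toy sketch: natural/artificial selection as a filter on example behaviors.
# Each "ancestor" is a sequence of (situation, action) pairs.
examples = [
    [("cold_nearby", "move_away"), ("food_nearby", "eat")],
    [("cold_nearby", "move_toward"), ("food_nearby", "eat")],  # froze: filtered out
    [("cold_nearby", "move_away"), ("predator", "flee")],
]

def survived(history):
    # Selection removes any lineage that ever entered the cold.
    return all(not (s == "cold_nearby" and a == "move_toward") for s, a in history)

surviving = [h for h in examples if survived(h)]

def policy(situation):
    # The imitating agent acts like the majority of surviving examples.
    actions = Counter(a for h in surviving for s, a in h if s == situation)
    return actions.most_common(1)[0][0]

print(policy("cold_nearby"))  # → move_away
```

The "rule" the agent ends up with is never stated anywhere; it is implicit in which examples passed the filter, which is the point of assigning the normative work to the environment.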

1Roman Leventov
You didn't respond to the critical part of my comment: "However, after removing ethics "from the outside", ACI is left without an adequate replacement. I. e., this is an agent devoid of ethics as a cognitive discipline, which appears to be intimately related to foresight. ACI lacks constructive foresight, too, it always "looks back", which warranted periodic "supervised learning" stages that seem like a patch-up. This doesn't appear scalable, too."

Let me try to rephrase: ACI appears fundamentally inductive, but inductivism doesn't appear to be a philosophy of science that really leads to general intelligence. A general intelligence should adopt some form of constructivism (note that in my "decomposition" of the "faculties" of general intelligence, based on Active Inference, in the comment above: namely, epistemology, rationality, and ethics, -- are all deeply intertwined, and "ethics" is really about any foresight and normativity, including constructivism).

AIXI could be general intelligence because the constructive, normative aspect of intelligence is "assumed away" to the external entity that assigns rewards to different outcomes; with ACI, you basically still assume this aspect of intelligence away, relying on the "caretaker" that will decide what, when and how to teach the ACI. If it's some other AI that does it, how does that AI know? So, there is an infinite regress, and ACI couldn't be a universal model of general intelligence. Also, cf. Safron (2022) discussion of FEP/Active Inference and AIXI.

A few corrections about your reading of Active Inference: First, Active Inference doesn't really "divide" them such, it's one of the decompositions of EFE (the other is into ambiguity and risk). Second, it's just "pragmatic value" here, not "pragmatic value learning". Information gain is not coupled/contraposed with action. Action is only contraposed with perception. Perception != information gain. Perception is tuned to

In the post I was just trying to describe the internal unpredictability of a deterministic universe, so I think I have already made a distinction between predictability and determinism. The main disagreement between us is which one is more closely related to free will. Thank you for pointing this out; I will focus on this topic in the next post.

I want to define "degree of free will" roughly as: for a given observer B, the lower limit of event A's unpredictability. This observer does not have to be human; it can be an intelligent agent with infinite computing ability that simply does not have enough information to make predictions. The degree of free will could be very low for an event very close to the observer, but impossible to ignore when controlling a Mars rover. I don't know if anybody has ever described this quantity (something like the opposite of a Markov blanket?); if you know of one, please tell me.
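One way to make this quantitative (a sketch in my own notation, not an established definition) is to take the lower limit of prediction error over every prediction strategy available to the observer:

$$\mathrm{FW}_B(A) \;=\; \inf_{\pi}\, \mathbb{E}\!\left[\,\ell\big(\pi(I_B),\, A\big)\,\right]$$

where $I_B$ is the information accessible to observer B, $\pi$ ranges over all prediction strategies (with unlimited computing power), and $\ell$ is some loss function measuring how far the prediction falls from the actual outcome of event A. Then $\mathrm{FW}_B(A)=0$ would mean A is perfectly predictable for B, and a strictly positive value is the irreducible unpredictability I am calling the degree of free will.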

I like... (read more)

You are right, this picture only works in an infinite universe.

Thank you for your comment. Actually I am trying to build a practical and quantitative model of free will instead of just saying free will is or is not an illusion, but I can't find a better way to define free will in a practical way. That's why I introduce an "observer" which can make predictions.

And I agree with you that claims like "not 100% correctly" are too weak. But possibly we can define some function like "degree of free will", representing how well a subject can be predicted or controlled. I'm not sure this definition resembles the common meaning of "free will", but it might be somewhat useful.

1AGO
I do like the idea of coming up with a good way to quantify the degree of deterministic free will. While it's not necessarily a useful concept in terms of actionability, when did that ever stop curiosity? I think we can fairly reasonably estimate that this degree of free will is very very low.

In response to defining types of free will, I'd personally propose "experiential free will" and "deterministic free will." The former refers to the more common usage. When someone says "I have free will" outside of a rigorous philosophical debate, they usually mean "I experience life in such a way that I feel I can make at least some conscious choices about what actions to take." This is pretty hard to dispute. People do tend to feel this way. This definition of free will may well be an illusion, but that illusion is very much experientially real and worth discussing.

It seems like "deterministic free will" might be a better term for what you're talking about. The idea that free will is a spectrum where the higher the certainty with which your actions can be predicted, the less free will you have.

Thank you for your comment, but it would be appreciated if you could show that my conclusion is wrong (e.g., that either observer B1 or B2 is able to know or predict event C).

Sorry for the misleading wording; I also believe that libertarian free will is not an illusion. I hope I can explain that in the next post on this topic.

(Maybe I should add a "(1)" after the title?)

You are right, I should have said "all the initial states on a given spatial hypersurface" instead of "all causes", but the conclusion is the same: wherever the hypersurface is, no observer can know all the initial states on that hypersurface that can affect event A, unless the observer is in the future of A.

As for the second question, I think "high accuracy" is only the upper limit of a prediction, which is not that easy to reach. In order to make a high-accuracy prediction, you need a large amount of resources for observations and calculations. The amo... (read more)

1Shmi
Seems like you are in love with your conclusion and are throwing all supporting evidence into it, which works in a court of law, but not when you are trying to construct accurate models.