All of Agathodaimon's Comments + Replies

I think tool AIs are very dangerous because they allow the application of undigested knowledge.

Another form of this is asking questions motivated by obtaining material goods for oneself. More altruistic questions, which seek first the benefit of others, seem less privileged in general.

It is better for one's health, and for the planet in terms of emissions, to eat less meat as well.

It sounds like mathematical platonism, which appeals to some, like (seemingly) Roger Penrose, but it seems connected to other networks of concepts at least in part, and I do not think it should be taken as a given. Perhaps in the future we will be able to model when such belief systems will arise, given access to other information based on the history of the individual in question, but we are not quite there yet. For further reference, one could examine the examples of computationally modelled social behavior in a topic someone created here regarding the reverse engineering of belief systems.

0Flipnash
*puts on asshole face* We could also do this thing with the other thing that this guy suggested.

http://arxiv.org/abs/1405.5563

Using the non-cognitive approach, you could dismiss statements in symbolic logic that do not refer to constructs or events that could come into being. I am referring to constructs in Deutsch's theory.

I wish the audio were available for free.

1[comment deleted]

Brains are like cars. Some are trucks: made for heavy hauling, but slow. Some are sedans: economical, fuel efficient, but not very exciting. Some are Ferraris: sleek, fast, and sexy, but they burn through resources like a mother. I'm sure you can come up with more analogies of your own.

This sounds like the principle of entropy maximization. I recommend reading Wissner-Gross's "Causal Entropic Forces."
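
For intuition, here is a toy sketch in the spirit of that paper (a simplified illustration, not Wissner-Gross and Freer's actual formulation, and all names here are made up): an agent on a bounded 1D line prefers whichever move leaves the largest number of distinct reachable positions within a time horizon, a crude proxy for maximizing the entropy of its possible futures.

```python
def reachable(start, steps, lo=0, hi=10):
    """Positions reachable from `start` in exactly `steps` unit moves on [lo, hi]."""
    frontier = {start}
    for _ in range(steps):
        frontier = {p + d for p in frontier for d in (-1, 1) if lo <= p + d <= hi}
    return frontier

def entropic_move(pos, horizon=4, lo=0, hi=10):
    """Choose the legal move that keeps the most future states open."""
    moves = [d for d in (-1, 1) if lo <= pos + d <= hi]
    return max(moves, key=lambda d: len(reachable(pos + d, horizon, lo, hi)))

# Near a wall, the "entropic force" pushes the agent toward the open interior.
print(entropic_move(1))   # 1  (moving right keeps more futures open)
print(entropic_move(9))   # -1 (near the right wall, move left)
```

The emergent wall-avoiding behavior, with no explicit goal programmed in, is the flavor of result the paper formalizes.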

How would you measure aptitude gain?

1Stuart_Armstrong
There are suggestions, such as using some computable version of the measure AIXI is maximising. Kaj Sotala has a review of methods, currently unpublished, I believe.

Is there a word analogous to prior for actions decided on the basis of one's matrix of possible foreseen futures?

0Anders_H
I changed it to "subjective probability distribution". If you know any better terms, please let me know and I will fix it.

Ignore the second part of my initial comment; I hadn't read your blog post explaining your idea at that point. I believe your problem can be formulated differently in order to introduce other information needed to answer it. I appreciate your approach because it generalizes to any case, which is why I find it appealing, but I believe it cannot be answered in this fashion, because if you examine integrated systems of information by constituent parts in order to access that information, you must take a "cruel cut" of the integrated system of... (read more)

I will have to think through my response so it will be useful. It may take a few days.

[This comment is no longer endorsed by its author]

This seems useful to me. However, there should be regions of equal probability. Have you considered that the probabilities would shift in real time with the geometry of the events implied by the sensor data? For example, what if you translated these statements into a matrix of likely interactions between events?

1Scott Garrabrant
I do not really understand your question, but the answer is no. Care to elaborate?

Your intuition appears to be good. There was a recent paper published on this very topic.

http://arxiv.org/abs/1405.1429

Unfortunately, there seem to be unavoidable gaps in the spreading of information: http://arxiv.org/abs/1310.4707

By being a loving person, you can convey sincerity and show that you are willing to submit your own interests to those of the greater whole.

I believe we should use analytics to find the commonalities in the opinions of groups of interacting experts.

http://arxiv.org/abs/1406.7578
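
As a minimal sketch of the idea (the expert names, questions, and threshold below are made up for illustration): given each expert's stance on a set of yes/no questions, tally the positions and keep those held by a supermajority.

```python
from collections import Counter

opinions = {
    "expert_a": {"q1": True,  "q2": False, "q3": True},
    "expert_b": {"q1": True,  "q2": True,  "q3": True},
    "expert_c": {"q1": True,  "q2": False, "q3": False},
}

def common_positions(opinions, threshold=2/3):
    """Return question -> answer pairs held by at least `threshold` of experts."""
    n = len(opinions)
    votes = Counter((q, a) for stances in opinions.values() for q, a in stances.items())
    return {q: a for (q, a), count in votes.items() if count / n >= threshold}

print(common_positions(opinions))  # {'q1': True, 'q2': False, 'q3': True}
```

A real analysis of interacting experts would also need to model influence between them, which is where the linked work goes further.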

I am not sure your idea is feasible, but I do wish you luck. I know MIT has been doing interesting work on modular robots, and of all that I have encountered, I would direct your attention there.

http://pnas.org/content/early/2013/08/28/1306246110

Game theory has been applied to some problems related to morality. In a strict sense we cannot prove such conclusions, because universal laws are uncertain.
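
The standard example of this application (a textbook illustration, with the usual Prisoner's Dilemma payoffs; the strategy names are conventional, not from any specific paper): in the one-shot game defection dominates, yet a reciprocating norm like tit-for-tat sustains cooperation once the game repeats.

```python
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs for two strategies over repeated play."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both are assumed to open with cooperation
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last  # reciprocate the last move seen
always_defect = lambda their_last: "D"       # pure short-term self-interest

print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual cooperation pays
print(play(always_defect, always_defect))  # (10, 10): mutual defection is worse
```

This shows how a "moral" rule can be stable for self-interested reasons, though, as noted, it falls short of a proof about morality in general.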