martinkunev

To add to the discussion, my impression is that many people in the US believe they have some moral superiority or know what is good for other people. The whole "we need a Manhattan Project for AI" discourse is reminiscent of calling for global domination. Also, doing things for the public good is controversial in the US, as it can infringe on individual freedom.

This makes me really uncertain as to which AGI would be better (assuming somebody controls it).

> Western AI is much more likely to be democratic

This sounds like "western AI is better because it is much more likely to have western values"


I don't understand what you mean by "humanity's values". Also, one could maybe argue that "democratic" societies are those where actions are taken based on whether the majority of people can be manipulated to support them.

I find "indifference" poorly defined in this context, which makes me doubt totality and transitivity. I'm trying to clarify my own confusion on this.

I've read the sequences. I'm not sure if I'm missing something or the issues I raised are just deeper. I'll probably ignore this topic until I have more time to dedicate.

> the XOR of two boolean elements is straightforward to write down as a single-layer MLP

Isn't this exactly what Minsky and Papert showed to be impossible? XOR is not linearly separable, so you need an additional hidden layer.
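A minimal sketch of the point, with hand-picked weights (the particular OR/NAND decomposition is just one illustrative choice): no single linear threshold unit can compute XOR, but one hidden layer of two units suffices.

```python
import numpy as np

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def step(z):
    return (z > 0).astype(int)

# A single unit step(w1*x1 + w2*x2 + b) would need:
#   b <= 0, w1 + b > 0, w2 + b > 0, w1 + w2 + b <= 0
# Summing the middle two gives w1 + w2 + 2b > 0, while the outer two
# give w1 + w2 + 2b <= 0 -- a contradiction, so no such weights exist.

# Two hidden units (OR and NAND), then AND of the two, computes XOR.
W1 = np.array([[1.0, 1.0], [-1.0, -1.0]])  # rows: OR, NAND
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])                  # output: AND of hidden units
b2 = -1.5

h = step(X @ W1.T + b1)
out = step(h @ W2 + b2)
print(out)  # [0 1 1 0]
```

The hidden layer works because OR and NAND each carve the plane with one line, and XOR is exactly where both are true.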

I don't find any of this convincing at all. If anything, I'm confused.

What would a mapping look like? If it's not physically present then we recursively get the same issue - where is the mapping for the mapping?

Where is the mapping between the concepts we experience as qualia and the physical world? Does a brain do anything at all?

A function in this context is a computational abstraction. I would say this is in the map.

> they come up with different predictions of the experience you’re having

The way we figure out which one is "correct" is by comparing their predictions to what the subject says. In other words, one of those predictions is consistent with the subject's brain's output, and this causes everybody to consider it the "true" prediction.

There could be countless other conscious experiences in the head, but they are not grounded by the appropriate input and output (they don't interact with the world in a reasonable way).

I think it only seems that consciousness is a natural kind and this is because there is one computation that interacts with the world in the appropriate way and manifests itself in it. The other computations are, in a sense, disconnected.

I don't see why consciousness has to be objective, other than this being our intuition (which is notorious for being wrong outside of hunter-gatherer contexts). Searle's wall is a strong argument that consciousness is as subjective as computation.

I would have appreciated an intuitive explanation of the paradox, something which I got from the comments.
