  1. Yes, GPTs would have alien-like cognition.
  2. Whether they can translate is unclear because limits of translation of human languages are still unknown.
  3. Yes, they are trained on logs of human thoughts. Each log entry corresponds to a human thought, i.e., there is a bijection. There is thus no formal difference.
  4. Re: predicting encodings of human thought, I'm not sure what is supposed to be compelling about this. GPTs currently would only learn a subset of human cognition, namely the subset that generates human text. So sure, training them on more types of human cognition might make them follow more types of human cognition more accurately. Therefore...?
  5. Yes, a brain and a Python interpreter do not have a similar internal structure when evaluating Python semantics. So what? This is about as interesting as the fact that a mechanical computer is internally different from an electronic computer. What matters is that they both implement basically the same externally observable semantics when interpreting Python (see the sketch after this list).
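
As a concrete illustration of point 5, here is a minimal Python sketch (my own example, not anything from the thread): two evaluators with completely different internal structures that nonetheless agree on the externally observable semantics of a toy expression language.

```python
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def eval_recursive(expr):
    """Tree-walking evaluator: ("add", 2, 3) -> 5."""
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](eval_recursive(left), eval_recursive(right))

def eval_stack(expr):
    """Stack-machine evaluator; internally nothing like the recursive one."""
    stack, work = [], [expr]
    while work:
        item = work.pop()
        if isinstance(item, int):
            stack.append(item)              # operand: push its value
        elif isinstance(item, str):
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[item](a, b))   # operator: apply to top two values
        else:
            op, left, right = item
            work.extend([op, right, left])  # schedule operands before the op
    return stack[0]

expr = ("add", 2, ("mul", 3, 4))
# Different internals, identical observable behavior:
assert eval_recursive(expr) == eval_stack(expr) == 14
```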

Suffice it to say that I didn't find anything here particularly compelling.

I don't think any of the claims you just listed are actually true. I guess we'll see.

I don't see any indication of AGI, so it does not really worry me at all.

Nobody saw any indication of the atomic bomb before it was created. In hindsight, would it have been rational to worry?

Your claims about the compute and data needed, and about the alleged limits, remind me of the fact that Heisenberg thought there was no reason to worry because he had miscalculated the amount of U-235 that would be needed. It seems humans are doomed to keep repeating this mistake and underestimating the severity of catastrophic long tails.

In this context, for me, an intelligent agent is able to understand common language and act accordingly, e.g., if a question is posed it can provide a truthful answer.

Humans regularly fail at such tasks but I suspect you would still consider humans generally intelligent.

In any case, it seems very plausible that whatever decision procedure underlies more general forms of inference will fall to the inexorable march of progress we've seen thus far.

If it does, the effectiveness of our compute could increase exponentially almost overnight: you are basically arguing that our current compute is hobbled by an effectively "weak" associative architecture, but that a very powerful architecture is potentially only one trick away.

The real possibility that we are only one trick away from a potentially terrifying AGI should worry you more.

Chess playing is a similar story: we thought you had to be intelligent, but we found a heuristic that does it really well.

You keep distinguishing "intelligence" from "heuristics", but no one to my knowledge has demonstrated that human intelligence is not itself some set of heuristics. Heuristics are exactly what you'd expect from evolution, after all.
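
To make "heuristic" concrete, here is a minimal sketch of the kind of hand-coded rule chess programs have long relied on: a material-count evaluation. The piece values and position encoding are my own illustrative assumptions, not anything from this discussion.

```python
# Classic material-counting heuristic; values and encoding are illustrative.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(position):
    """Score a position given as a string of piece letters.

    Uppercase letters are White's pieces, lowercase are Black's;
    kings are ignored. Positive favors White, negative favors Black.
    """
    score = 0
    for piece in position:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

print(material_score("QRq"))  # White is up a rook: prints 5
```

Paired with brute-force search, rules this simple already play far stronger chess than their simplicity suggests, which is what makes the "heuristics vs. intelligence" line so hard to draw.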

So your argument then reduces to a god of the gaps, where we keep discovering some heuristics for an ability that we previously ascribed to intelligence, and the set of capabilities left to "real intelligence" keeps shrinking. Will we eventually be left with the null set, and conclude that humans are not intelligent either? What's your actual criterion for intelligence that would prevent this outcome?

From prior research, I understood the main driver of nuclear power plant cost to be constant site-specific design adjustments, leading to chronic cost and schedule overruns. This means there is no standard plant design or construction: each installation is unique, with its own quirks, its own parts, and its own customizations, so nothing is fungible and training is barely transferable.

This was the main economic promise behind small modular reactors: small, standardized reactor modules that can be built at a factory, shipped to a site using regular transportation, and installed with little on-site assembly, and which you can "daisy chain" to get whatever power output you need. This strikes right at the heart of some of the biggest costs of nuclear.
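
As a toy illustration of the "daisy chain" sizing idea (both figures below are hypothetical assumptions, not numbers from any actual SMR design):

```python
# Toy sketch: how many identical factory-built modules cover a site's demand?
from math import ceil

module_output_mwe = 77   # hypothetical per-module electrical output (MWe)
site_demand_mwe = 300    # hypothetical site requirement (MWe)

modules_needed = ceil(site_demand_mwe / module_output_mwe)
print(modules_needed)    # -> 4 identical modules
```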

Of course, it might just be a little too late, as renewables are now cheaper than nuclear by almost every measure. They just need more investment in infrastructure and grid storage.

I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures.

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance. Our perception of subjective experience/first-hand knowledge is no more proof of its accuracy than our perception that a pencil half-submerged in water is bent.

Intuition pumps supporting the accuracy of said perception either beg the question or multiply entities unnecessarily (as detailed below).

Nothing you said indicates that p-zombies are inconceivable or even impossible.

I disagree. You've said that epiphenomenalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification.

Denying it yields my original argument for inconceivability via the p-zombie world. Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious, then it serves no more purpose than the vital force in vitalism and will inevitably be discarded given a proper scientific account of consciousness, somewhat like this one.

I previously asked for any example of knowledge that was not a permutation of properties previously observed. If you can provide one such example, it would undermine my position.

Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysiological responses in the brain which we identify with knowledge. They disagree with physicalists in that they say our subjective qualia are epiphenomenal shadows of those neurophysiological responses, rather than being identical to them. There is no real-world example that would prove or disprove this theory, because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

Which seems to suggest that epiphenomenalism either begs the question, or multiplies entities unnecessarily by accepting unjustified intuitions.

So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.

Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

Since this is the crux of the matter, I won't bother debating the semantics of most of the other disagreements in the interest of time.

As for whether subjectivity is causally efficacious, all knowledge would seem to derive from some set of observations. Even possibly fictitious concepts, like unicorns and abstract mathematics, are generalizations or permutations of concepts that were first observed.

Do you have even a single example of a concept that did not arise in this manner? Generalizations remove constraints on a concept, so they aren't a counterexample; generalization is just another form of permutation. If no such example exists, why should I accept the claim that knowledge of subjectivity can arise without subjectivity?
