All of naasking's Comments + Replies

  1. Yes, GPTs would have alien-like cognition.
  2. Whether they can translate is unclear because the limits of translation between human languages are still unknown.
  3. Yes, they are trained on logs of human thoughts. Each log entry corresponds to a human thought, i.e. there is a bijection. There is thus no formal difference.
  4. Re: predicting encodings of human thought, I'm not sure what is supposed to be compelling about this. GPTs currently would only learn a subset of human cognition, namely, that subset that generates human text. So sure, trained on more types of human cogn
... (read more)

I don't think any of the claims you just listed are actually true. I guess we'll see.

I don't see any indication of AGI, so it does not really worry me at all.

Nobody saw any indication of the atomic bomb before it was created. In hindsight, would it have been rational to worry?

Your claims about the compute and data needed, and the alleged limits, remind me of the fact that Heisenberg thought there was no reason to worry because he had miscalculated the amount of U-235 that would be needed. It seems humans are doomed to keep repeating this mistake and underestimating the severity of catastrophic long tails.

mocny-chlapik
There is no indication for many catastrophic scenarios, and truthfully I don't worry about any of them.

In this context, for me an intelligent agent is able to understand common language and act accordingly, e.g. if a question is posed, it can provide a truthful answer.

Humans regularly fail at such tasks, but I suspect you would still consider humans generally intelligent.

In any case, it seems very plausible that whatever decision procedure underlies more general forms of inference will fall to the inexorable march of progress we've seen thus far.

If it does, the effectiveness of our compute will potentially increase exponentially almost overn... (read more)

mocny-chlapik
I don't see any indication of AGI, so it does not really worry me at all. The recent scaling research shows that we need a non-trivial number of orders of magnitude more data and compute to match human-level performance on some benchmarks (with the huge caveat that matching performance on some benchmark might still not produce intelligence). On the other hand, we are all out of data (especially high-quality data with some information value, not random product reviews or NSFW subreddit discussions), and our compute options are also not looking that great: Moore's law is dead, and the fact that we are now relying on HW accelerators is not a good thing; it's proof that after 70 years, CPU performance scaling is no longer a viable option. There are also some physical limitations that we might not be able to break anytime soon.

Chess playing is a similar story: we thought that you have to be intelligent, but we found a heuristic to do it really well.

You keep distinguishing "intelligence" from "heuristics", but no one to my knowledge has demonstrated that human intelligence is not itself some set of heuristics. Heuristics are exactly what you'd expect from evolution after all.

So your argument then reduces to a god of the gaps, where we keep discovering some heuristics for an ability that we previously ascribed to intelligence, and the set of capabilities left to "real intelligence... (read more)

mocny-chlapik
I believe that fixating on benchmarks such as chess etc. ignores the G part of AGI. A truly intelligent agent should be general at least in the environment it resides in, considering the limitations of its form. E.g. if a robot is physically able to work with everyday objects, we might apply the Wozniak test and expect that an intelligent robot is able to cook a dinner in an arbitrary house, or do any other task that its form permits. If we assume that right now we are developing purely textual intelligence (without agency, a persistent sense of self, etc.), we might still expect this intelligence to be general, i.e. able to solve an arbitrary task if that seems reasonable considering its form. In this context, for me an intelligent agent is able to understand common language and act accordingly, e.g. if a question is posed, it can provide a truthful answer.

BIG-bench has recently shown us that our current LMs are able to solve some problems, but they are nowhere near general intelligence. They are not able to solve even very simple problems if doing so actually requires some sort of logical thinking and not only associative memory; this is a nice case: https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/symbol_interpretation. You can see in the Model performance plots section that scaling did not help at all with tasks like these. This is a very simple task, but it was not seen in the training data, so the model struggles to solve it and produces random results. If LMs start to solve general linguistic problems, then we will actually have intelligent agents at hand.

From prior research, I understood the main problem of nuclear power plant cost to be constant site-specific design adjustments leading to chronic cost and schedule overruns. This means there is no standard plant design or construction; each installation is unique, with its own quirks, its own parts, its own customizations, and so nothing is fungible and training is barely transferable.

This was the main economic promise behind small modular reactors: small, standard reactor modules that can be assembled at a factory and shipped to a site using regular tran... (read more)

I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance. Our perception of subjective experience/first-hand knowledge is no more proof of accuracy than our perception that water breaks pencils.

Intuition pumps supporting the accuracy of said perception either beg the question or multiply entities unnecessarily (as detailed below).

Nothing you said indicates that p-zo

... (read more)
UmamiSalami
It's not. Suppose that the ignorance went away: a complete physical explanation of each of our qualia - "the redness of red comes from these neurons in this part of the brain, the sound of birds flapping their wings is determined by the structure of electric signals in this region," and so on - would do nothing to remove our intuitions about consciousness. But a complete mechanistic explanation of how organ systems work would (and did) remove the intuitions behind vitalism.

Well... that's just what is implied by epiphenomenalism, so the justification for it is whatever reasons we have to believe epiphenomenalism in the first place. (Though most people who gravitate towards epiphenomenalism seem to do so out of the conviction that none of the alternatives work.)

As I've said already, your argument can't show that zombies are inconceivable. It only attempts to show that an epiphenomenalist world is probabilistically implausible. These are very different things.

Well, the purpose of rational inquiry is to determine which theories are true, not which theories have the fewest entities. Anyone who rejects solipsism is multiplying entities unnecessarily.

I don't see why this should matter for the zombie argument or for epiphenomenalism. In the post where you originally asked this, you were confused about the contextual usage and meaning behind the term 'knowledge.'

Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

Which seems to suggest that epiphenomenalism either begs the quest... (read more)

UmamiSalami
Well, they do have arguments for their positions. It actually seems very intuitive to most people that subjective qualia are different from neurophysical responses. It is the key issue at stake with zombie and knowledge arguments and has made life extremely difficult for physicalists.

I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures, and rather than epiphenomenalism multiplying entities unnecessarily, it sure seems to me like physicalism is equivocating entities unnecessarily.

Nothing you said indicates that p-zombies are inconceivable or even impossible. What you and EY seem to be saying is that our discussion of consciousness is a posteriori evidence that our consciousness is not epiphenomenal.

Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

Since this is the crux of the matter, I won't bother debating the semantics of most of the other disagreements in the interest of time.

As for whether subjectivity is causally efficacious, all knowledge would seem to derive from some set of observations. Even possibly fictitious concepts, like unicorns and abstract mathematics, are generalizations or permutations of concepts tha... (read more)

UmamiSalami
Unlike the other points which I raised above, this one is semantic. When we talk about "knowledge," we are talking about neurophysical responses, or we are talking about subjective qualia, or we are implicitly combining the two. Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported

It's not, and I'm surprised you find this contentious. 3 doesn't follow from 2; it follows from a contradiction between 1 and 2.

1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows if consciousness can affect matter, i.e. we only have knowledge of subjectivity because we observe it first-hand. P-zombies do not have first-hand knowledge of subjectivity as specified in 2.

If there were another way to i... (read more)

UmamiSalami
Well, first of all, 3 isn't a statement, it's saying "consider a world where..." and then asking a question about whether philosophers would talk about consciousness. So I'm not sure what you mean by suggesting that it follows or that it is true. 1 and 2 are not contradictions. On the contrary, 1 and 2 are basically saying the exact same thing.

This is essentially what epiphenomenalists deny, and I'm inclined to say that everyone else should deny it too. Regardless of what the truth of the matter is, surely the mere concept of subjectivity does not rely upon epiphenomenalism being false.

This is confusing the issue; like I said: under the epiphenomenalist viewpoint, the cause of our discussions of consciousness (physical) is different from the justification for our belief in consciousness (subjective). Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

There are many criteria by which theories are judged in philosophy, and parsimony is only one of them.

Nothing in my rebuttal relies on the idea that zombies would have feelings and consciousness. My rebuttal points out that zombies would be motivated by the idea of feelings and consciousness, which is trivially true: humans are motivated by the idea of feelings and consciousness, and zombies behave in the same way that humans do, by definition.

But it's quite obviously true, because we talk about rich inner lives as the grounding for almost all of our moral thought, and then act accordingly, and because empathy relies on being able to infer rich inner lives among other people. And as noted earlier, whatever behaviorally motivates humans also behaviorally motivates p-zombies.

This was longer than it needed to be

Indeed. The condensed argument against p-zombies:

  1. Assume consciousness has no effect upon matter, and is therefore not intrinsic to our behaviour.
  2. P-zombies that perfectly mimic our behaviour but have no conscious/subjective experience are then conceivable.
  3. Consider then a parallel Earth that was populated only by p-zombies from its inception. Would this Earth also develop philosophers that argue over consciousness/subjective experience in precisely the same ways we have, despite the fact that none of them could pos
... (read more)
UmamiSalami
I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported, and nothing that EY said supports 4. 5 does not follow from 3 or 4, though it's bundled up in the definition of a p-zombie and follows from 1 and 2 anyway. In any case, 6 does not follow from 5. What EY is saying is that it's highly implausible for all of our ideas and talk of consciousness to have come to be if subjective consciousness does not play a causal role in our thinking.

Of course they would - our considerations of other people's feelings and consciousness change our behavior all the time. And if you knew every detail about the brain, you could give an atomic-level causal account as to why and how. The concept of a rich inner life influences decision processes.

This is an interesting discussion, but this claim struck me as odd:

If something exists, it can be counted (or given a cardinality, if it is infinite).

This seems like an open philosophical question. Clearly you are a finitist of some sort, but as far as I know it hasn't been empirically verified that real numbers don't exist. Certainly continuous functions are widely employed in physics, but whether all of physics can be cast into a finitist framework is an open question last I checked.

So your assertion above doesn't seem firmly justified, as uncountab... (read more)

Mitchell_Porter
I am not a finitist. There are cardinals for uncountable sets. I was objecting to people who say things like (page 16) "how many worlds there are" is a "non-question".

"Copy" implies having more than 1 object : The Copy and the Original at the same point of time, but not space.

Why privilege space over time? Time is just another dimension, after all. buybuydandavis's definition of "copy" seems to avoid privileging a particular dimension, and so seems more general.

Eugine_Nier
You may want to read up on the no-cloning theorem in quantum mechanics. The simple answer to your question is that time interacts with causality differently than space does.
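
For reference, a sketch of the theorem being pointed to (the standard statement, not from the comment itself): no-cloning says there is no single unitary operation that copies an arbitrary unknown quantum state, i.e. no unitary $U$ and fixed blank state $|e\rangle$ such that

$U(|\psi\rangle \otimes |e\rangle) = |\psi\rangle \otimes |\psi\rangle$ for all $|\psi\rangle$,

since applying a linear $U$ to a superposition $\alpha|0\rangle + \beta|1\rangle$ yields $\alpha|00\rangle + \beta|11\rangle$, while cloning would require the cross terms in $(\alpha|0\rangle + \beta|1\rangle)^{\otimes 2}$. This is one precise sense in which duplicating a state across space differs from its persistence through time.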

It is not scientific induction, since you can't measure elegance quantitatively.

You can, formally, via Kolmogorov complexity.
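
As a rough sketch of that formalization (the standard definition, my notation): fix a universal machine $U$; the Kolmogorov complexity of a description $x$ is

$K(x) = \min \{\, |p| : U(p) = x \,\}$,

the length in bits of the shortest program that outputs $x$. "Elegance" then becomes shortness of a theory's minimal encoding; the measure depends on the choice of $U$ only up to an additive constant (the invariance theorem), and $K$ is uncomputable, so in practice it can only be approximated from above.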

there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises new interesting questions of what is possible of course - certainly not everything that is imaginable): that to specify one universe with many random events requires lots of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit

Now THAT's an interesting argument for MWI. It's not a final nail in the coffin for de Broglie-Bohm, but the naturalness of this property is certainly compelling.
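
A standard algorithmic-information illustration of that naturalness (my example, not the quoted commenter's): a short, fixed-size program can enumerate every binary string of length $n$, so the whole ensemble has description length $O(\log n)$, while a simple counting argument shows almost all individual strings $x$ of length $n$ have $K(x)$ close to $n$. Analogously, "all possible worlds" can be cheaper to specify than any one world full of random events.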

Rob Bensinger
Although Tegmark incidentally endorses MWI, Tegmark's MUH does not entail MWI. Yes, if there's a model of MWI, then some world follows MWI; but our world can be a part of a MUH ensemble without being in an MWI-bound region of the ensemble. We may be in a Bohmian portion of the ensemble. Tegmark does seem to think MWI provides some evidence for MUH (which would mean that MUH predicts MWI over BM), but I think the evidence is negligible at best. The reasons to think MWI is true barely overlap at all with the reasons to think MUH is. In fact, the failure of Ockham to resolve BM v. MW could well provide evidence against MUH; if MWI (say) turned out to be substantially more complex (in a way that gives it fewer models) and yet true, that would give strong anthropic evidence against MUH. MUH is more plausible if we live in the kind of world that should predominate in the habitable zone of an ensemble.

maybe there will be some good discrete model, but so far the Planck length is not a straightforward discrete unit, not like a cell in the Game of Life.

't Hooft has been quite successful in defining QM in terms of discrete cellular automata, taking "successful" to mean that he has reproduced an impressive amount of quantum theory from such a humble foundation.

More interesting still is why reals have been so useful (and not just reals, but also complex numbers, vectors, tensors, etc. which you can build out of reals but which are algebraic objects in

... (read more)
naasking

(simplicity of the map) alone is sufficient to judge a theory - you also need to take into account the theory's parsimony (simplicity of the territory).

Solomonoff Induction gauges a theory's parsimony via Kolmogorov complexity, which is a formalization of Occam's razor. It's not a naive measurement of simplicity.
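
For reference, a sketch of how that works (the standard formulation, not quoted from the thread): Solomonoff's universal prior assigns a data string $x$ probability

$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}$,

summing over (prefix-free) programs $p$ whose output on the universal machine $U$ begins with $x$. Each program is a candidate "theory of the territory," and its weight $2^{-|p|}$ penalizes description length - which is exactly where parsimony enters, rather than through a naive count of a theory's moving parts.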