Gunnar_Zarncke

Software engineering, parenting, cognition, meditation, other

Comments

Would it be possible to determine the equivalent dimension of a layer of the human language cortex with this method? You can't do API calls to a brain, but you can prompt people and estimate the probability of a response token by repeated sampling, maybe from different people.  
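
A minimal sketch of what such sampling-based estimation could look like, assuming the method in question boils down to stacking next-token probability vectors across prompts and measuring the spread of that matrix; the vocabulary, responses, and the dimension estimator below are purely illustrative assumptions, not the method from the post being replied to:

```python
# Hedged sketch, not the original post's code: estimate an "effective dimension"
# from empirical next-token distributions. For humans, each distribution would be
# built by repeatedly sampling responses to the same prompt instead of reading
# API log-probs. The toy vocabulary, responses, and the estimator (a participation
# ratio over singular values) are illustrative assumptions.
import numpy as np

def empirical_distribution(samples: list[str], vocab: list[str]) -> np.ndarray:
    """Turn repeated responses to one prompt into a probability vector over vocab."""
    counts = np.array([samples.count(tok) for tok in vocab], dtype=float)
    return counts / counts.sum()

def effective_dimension(prob_matrix: np.ndarray) -> float:
    """Participation ratio of singular values of the (prompts x vocab) matrix."""
    centered = prob_matrix - prob_matrix.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    weights = s**2 / np.sum(s**2)
    return 1.0 / np.sum(weights**2)

# Toy usage: three prompts, five (hypothetical) human responses each.
vocab = ["yes", "no", "maybe"]
responses_per_prompt = [
    ["yes", "yes", "no", "maybe", "yes"],
    ["no", "no", "no", "yes", "maybe"],
    ["maybe", "maybe", "yes", "no", "no"],
]
P = np.vstack([empirical_distribution(r, vocab) for r in responses_per_prompt])
print(effective_dimension(P))
```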

There is more than one possibility when you append "if you know what I mean" to the end of a random sentence:

  • Sexual innuendos.
  • Illicit activities or behaviors.
  • Inside jokes or references understood only by a specific group.
  • Subtle insults or mocking.

Sure, the first is the strongest, but the others would move the centroid away from "phallus". The centroid is not at the most likely item but at the average.
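
A toy sketch of that point, with invented 2-D "embeddings" and invented probabilities; the centroid is the probability-weighted mean, which lands well away from the single most likely meaning:

```python
# Toy illustration with invented embeddings and probabilities:
# the centroid is the probability-weighted mean, not the most likely item.
import numpy as np

candidates = {
    "sexual innuendo":  np.array([1.0, 0.0]),
    "illicit activity": np.array([0.0, 1.0]),
    "inside joke":      np.array([-1.0, 0.5]),
    "subtle insult":    np.array([-0.5, -1.0]),
}
probs = {"sexual innuendo": 0.5, "illicit activity": 0.2,
         "inside joke": 0.2, "subtle insult": 0.1}

most_likely = max(probs, key=probs.get)
centroid = sum(p * candidates[name] for name, p in probs.items())

print(most_likely, candidates[most_likely])  # mode: [1.0, 0.0]
print(centroid)                              # [0.25, 0.2], pulled toward the others
```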

I'm generally considered a happy person, and I did couples counseling at a time when my partner was also happy. That was in the context of getting early marriage advice, and it went generally well. I'm not sure about talk therapy. I'm generally of the opinion that talking with people helps with resolving all kinds of issues.

Cognition Labs released a demo of Devin, an "AI coder", i.e., an LLM with agent scaffolding that can build and debug simple applications:

https://twitter.com/cognition_labs/status/1767548763134964000 

Thoughts?

With a sufficiently strong LLM, I think you could still elicit reports of inner dialogues if you prompt lightly, for example with "put yourself into the shoes of...". That's because inner monologues are implied in many reasoning processes, even if they are not mentioned explicitly.

As the wearer of the respirator still has to breathe regularly, the air available for respiration will still contain significantly elevated CO2. I'd guess maybe half of the exhaled concentration - around 20,000 ppm. It would be interesting to see somebody measure that.
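
A rough back-of-the-envelope check of that guess, assuming exhaled air is about 4% CO2 (~40,000 ppm) and that about half of each inhaled breath is re-breathed exhaled air; the mixing fraction is a guess, not a measurement:

```python
# Back-of-the-envelope check of the ~20,000 ppm guess (all numbers are rough
# assumptions): exhaled air is ~4% CO2 (~40,000 ppm), ambient air is ~400 ppm,
# and the fraction of re-breathed exhaled air inside the respirator is guessed at 0.5.
exhaled_co2_ppm = 40_000
ambient_co2_ppm = 400
rebreathed_fraction = 0.5

inhaled_co2_ppm = (rebreathed_fraction * exhaled_co2_ppm
                   + (1 - rebreathed_fraction) * ambient_co2_ppm)
print(inhaled_co2_ppm)  # 20200.0, in the ballpark of the guess above
```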

Disregarding looking silly, there are many other (small) downsides to wearing a helmet all the time:

  1. the weight may have an adverse effect on your neck
  2. you may get stuck on obstacles such as door frames
  3. you may hit other people with it (who presumably don't wear one, and if they do, see 2)
  4. it interferes with close personal interactions, such as hugging
  5. ...

Sam Altman once mentioned a test: Don't train an LLM (or other AI system) on any text about consciousness and see if the system still reports having inner experiences unprompted. I would predict a normal LLM would not - at least if we are careful to remove all implied consciousness, which excludes most texts written by humans. But if we have a system that can interact with some environment, has some hidden state, can observe some of its own hidden state, can perhaps interact with other such systems (or with humans, such as in a game), and is trained with self-play, then I wouldn't be surprised if it reported inner experiences.

It might be that we know of a language that originally didn't have personal pronouns: Pirahã. It also belongs to a culture that places a high value on non-coercion, which means that expectations of conforming are largely absent. There is an aspect of consciousness - the awareness of the difference between expected and actual behavior - that might just not develop in such a context.

There is no problem with "I" - it makes sense to refer to the human speaking as "I". The problem is with ascribing non-physical, irreducible causality. Blame and responsibility are (comparatively) effective coordination mechanisms; that's why societies that had them outcompeted those that didn't. It doesn't matter that the explanation is non-physical.
