by Xor

This is a special post for quick takes by Xor. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
2 comments
Xor

It seems that we can have intelligence without consciousness: reasoning without agency, identity, or personal preference. We can have AI as a pure tool. In that case, the most likely danger is AI being misused by an unaligned human.

I am highly certain that o1 does not have consciousness or agency. However, it does have the ability to follow a thought process.

Doubtless we will create sentient intelligence eventually. However, I think it is more likely that we will have a soulless superintelligence first.

Depends on whether a "soul" is another invention that has yet to be made, or just something that appears automatically once we increase compute by a few more orders of magnitude.

Like (and here I am just saying random words, because the main thing LLMs taught me is that nothing turns out the way we expected), maybe the entire problem with consciousness is that current AIs simply do not have a large enough working context for a reasonable self-description to fit (while still leaving room for the work they are supposed to do). So if we keep increasing the working context, at some point it might happen automatically.
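
To make that budget argument concrete, here is a minimal back-of-envelope sketch. Every number in it (the window size, the size of a "reasonable self-description," the task budget) is an assumption invented for illustration, not a measurement of any real model:

```python
# Back-of-envelope check: can a "reasonable self-description" fit in a
# model's working context while leaving room for the actual task?
# Every number here is an illustrative assumption, not a measurement.

context_window_tokens = 128_000    # assumed size of a current model's context window
self_description_tokens = 200_000  # assumed tokens a useful self-model would take
task_tokens = 30_000               # assumed budget reserved for the user's task

leftover = context_window_tokens - self_description_tokens - task_tokens
print(f"Leftover context: {leftover} tokens")

if leftover < 0:
    print("At these (made-up) numbers, the self-description does not fit.")
else:
    print("At these (made-up) numbers, it fits; the bottleneck would be elsewhere.")
```

On these assumed numbers the self-description alone overflows the window, which is the hypothesis above; if windows keep growing by orders of magnitude, the sign of `leftover` eventually flips.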