It seems that we can have intelligence without consciousness. We can have reasoning without agency, identity, or personal preference. We can have AI as a pure tool. In that case, the most likely danger is AI being misused by an unaligned human.
I am highly certain that o1 does not have consciousness or agency. It does, however, have the ability to follow a thought process.
Doubtless we will create sentient intelligence eventually. However, I think it is more likely that we will have a soulless superintelligence first.
Depends on whether a "soul" is another invention that still needs to be made, or just something that appears automatically once we increase compute by a few more orders of magnitude.
Like (and here I am just saying random words, because the main thing LLMs taught me is that nothing is the way we expected), maybe the entire problem with consciousness is that current AIs simply do not have a large enough working context for a reasonable self-description to fit in it (while still leaving room for the work they are supposed to do). So if we keep increasing the working context, at some point it might happen automatically.