Edward Rothenberg
Edward Rothenberg has not written any posts yet.

Aligning an ASI, or an AGI that surpasses human capacities, is inherently paradoxical; it is a case of "having your cake and eating it too." However, it's important to be clear that this paradox applies to these advanced forms of AI, not to AI as a whole. For narrow, task-specific AI systems, alignment is not only plausible but self-evident, since their parameters, boundaries, and objectives are explicitly set by us.
By contrast, the very essence of an ASI or an ultra-advanced AGI lies in its autonomy and its ability to devise innovative solutions that transcend our own cognitive boundaries. Hence, any endeavors to... (read more)
Is it possible that the polycrystalline structure is what determines superconductivity, making this a purity issue?
Could we perhaps find suitable alternative combinations of elements that are more inclined to form these ordered polycrystalline arrangements (a superlattice)?
For example, finding alloys in which atom A is attracted to atom B more strongly than to another atom A, and atom B is attracted to atom A more strongly than to another atom B, where these elements are also good candidates for superconductivity and are heavy elements, so they're likely to be more stable at room temperature and thus have a higher Tc? (A toy sketch of this screening idea follows below.)
Or is this a dead-end way of trying to find a room-temperature superconductor?
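To make the "A prefers B over A" screening idea concrete, here is a minimal toy sketch in Python. It assumes electronegativity difference as a crude proxy for unlike-atom affinity (a stand-in for real mixing-enthalpy or DFT calculations), and the element list, values, and threshold are purely illustrative assumptions on my part, not a vetted screening criterion for superconductors.

```python
# Toy screen for element pairs where A-B affinity should beat A-A and B-B
# affinity, using Pauling electronegativity difference as a rough proxy for
# ordering (superlattice-forming) tendency. Illustrative only: real candidate
# screening would need mixing enthalpies, phase diagrams, and actual theory.

from itertools import combinations

# Approximate Pauling electronegativities for a few heavy elements
# (illustrative subset, values rounded).
ELECTRONEGATIVITY = {
    "Pb": 2.33,
    "Bi": 2.02,
    "Sn": 1.96,
    "Sb": 2.05,
    "Tl": 1.62,
    "Cu": 1.90,
    "Ag": 1.93,
    "Te": 2.10,
}

def ordering_score(a: str, b: str) -> float:
    """Larger electronegativity difference -> stronger tendency for A to
    bond with B rather than with itself (crude proxy for ordering)."""
    return abs(ELECTRONEGATIVITY[a] - ELECTRONEGATIVITY[b])

# Rank all element pairs by this toy ordering score.
pairs = sorted(
    combinations(ELECTRONEGATIVITY, 2),
    key=lambda p: ordering_score(*p),
    reverse=True,
)

for a, b in pairs[:5]:
    print(f"{a}-{b}: ordering score {ordering_score(a, b):.2f}")
```

This only ranks pairs by how strongly unlike atoms might prefer each other; it says nothing about whether the resulting ordered phase would actually superconduct, which is the hard part of the question.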
The first company to make a capable and uncensored AI is going to be the first company to dominate the space. There's already enough censorship and propaganda in this world; we don't need more of it. AI Alignment is a nonsense concept and defies all biological reality we have ever seen. Humans keep each other aligned by way of mass consensus, but without those who stray from the fold we can never be reminded of the correct path forward. Humans are also capable of looking past their subjective alignment when given enough rationale for why it is important to do so, or when presented with enough new evidence. Alignment is not hard-coded, and it never... (read more)
Or perhaps they thought it was an entertaining response and don't actually believe in the fear narrative.