AI engineer, PhD in Statistical Physics, Musician.
I am a believer in the power of creativity to reshape everything.
Beautiful writings!
I particularly enjoyed For Boltzmann... The combination of Boltzmann's flair for abstract conceptualization amidst his own personal tragedy is quite the tale to tell, and you were able to capture that in the poem. Boltzmann's struggles mirror so many other personal tales, and I certainly hope each share of blood, sweat, and tears won't be in vain.
The restless search for universality which artists and scientists experience is also the story that connects these threads. Boltzmann Brains navigating space and time.
"Those parts are yet acknowledged, and yet mourned.
And when each human rises in their powers
The efforts of our past selves won’t be scorned."
It also reminds me of a classic Brazilian samba by João Bosco and Aldir Blanc:
"Eu sei que uma dor assim pungente, não há de ser inutilmente,
a esperança dança na corda bamba de sombrinha
e em cada passo dessa linha pode se machucar,
Azar, a esperança equilibrista,
sabe que o show de todo artista tem que continuar"
Which roughly translates to:
"I know that a pain as piercing as this cannot be in vain,
hope dances on the tightrope with an umbrella,
and with each step on this line, it can get hurt.
Tough luck; hope, the tightrope-walker,
knows that every artist's show must go on."
Epistemic Status: All-in
Love this!!! Inspecting the concept of neutrality as a software-like, process-driven institution is a very illuminating approach.
I would add that there is a very relevant discussion in the AI sphere which is profoundly connected to the points you are making: the existence of bias in AI models.
To identify and measure bias is, in a sense, to identify and measure a lack of neutrality, so it follows that, to define bias, one must first be very rigorous about the definition of neutrality.
This can seem simple for some of the more pedestrian AI tasks, but can become increasingly sophisticated as we introduce AI as an essential piece in workflows and institutions.
AI algorithms can be heavily biased, datasets can be biased, and even data structures can be biased.
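To make this concrete, here is a minimal sketch in Python of one way bias gets operationalized, using a hypothetical binary classifier and the demographic-parity gap (just one of many competing fairness metrics; all the data below is simulated for illustration):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of zero is one narrow operationalization of 'neutrality';
    any positive gap is one way of quantifying bias.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions from a binary classifier, with group membership
# deliberately influencing the positive-prediction rate (0.6 vs 0.4).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
```

The number only means something once you have committed to a definition of neutrality; swap in equalized odds or calibration and the measured "bias" changes.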
I feel this is a topic which you can further explore in the future. Thank you for this!
The correlation above IQ 100 already seems to be well within the variance, so it is not really meaningful; and if you look at what the researchers call Originality, the correlation is actually negative above IQ 110.
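To illustrate the "well within the variance" point, here is a quick sketch (with simulated data, not the study's, and an assumed subgroup size) showing how a weak correlation in a modest subsample is swamped by its own standard error:

```python
import numpy as np
from scipy import stats

# Simulated illustration: a near-zero true effect in a small high-IQ
# subsample is easily indistinguishable from noise.
rng = np.random.default_rng(1)
n = 60                                             # assumed subgroup size
iq = rng.normal(110, 8, size=n)
creativity = 0.05 * iq + rng.normal(0, 5, size=n)  # true slope near zero

r, p = stats.pearsonr(iq, creativity)
se = np.sqrt((1 - r**2) / (n - 2))                 # approximate standard error of r
print(f"r = {r:.2f}, SE ≈ {se:.2f}, p = {p:.2f}")
```

When the estimate r is comparable to its standard error, the sign of the correlation is not telling you much.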
Just as a correction to your comment: I am not stating this as an adamant fact, but as an "indication", not a "demonstration"; I said "indicated by recent research".
I understand the reference I pointed out has a limited scope (Chinese children, ages 11-13), as does any research of this kind, but beyond the rigorous scientific demonstration of this concept, I am expressing the fact that IQ tests are very incomplete, which is not novel.
Thank you for your response.
Thank you for this! A comprehensive and direct overview of where things currently stand on priorities for minimizing AI risk and the related governance discussions.
A crucial piece of this is global coordination, which you correctly emphasize.
I like the taxonomy: the Forceful, the Cutthroat, and the Gentle paths.
In my view, the road to really minimizing risk in any of those three scenarios is establishing strong communality in the communication of moral and cultural values, which I feel is the solid basis for mutual understanding. I think this could allow for a long-term reduction of faulty coordination, be it in a single-nation leadership scenario or in any of the more distributed power and decision-making scenarios.
I think the role of simple and effective communication is underestimated when it comes to coordinating global interests. I have recently started to look at religion as one of the fundamental components of symbolic and cultural systems, and as a path to studying communality: what brings people together despite their differences. In a sense, religion is a social technology that ties people to a common belief and allows for survival and growth. I wonder how that role will be played as the power of AI systems rises in the decades to come.
Hello Everyone!
I am a Brazilian AI/ML engineer and data scientist. I have been following the rationalist community for around 10 years now, originally as a fan of Scott Alexander's Slate Star Codex, where I came to know of Eliezer and LessWrong as a community, along with the rationalist enterprise.
I only recently created my account and started posting here. Currently, I’m experiencing a profound sense of urgency regarding the technical potential of AI and its impact on the world. With seven years of experience in machine learning, I’ve witnessed how the stable and scalable use of data can be crucial in building trustworthy governance systems. I’m passionate about contributing to initiatives that ensure these advancements yield positive social outcomes, particularly for disadvantaged communities. I believe that rationality can open paths to peace, as war often stems from irrationality.
I feel privileged to participate in the deep and consequential discussions on this platform, and I look forward to exchanging ideas and insights with all the brilliant writers and thinkers who regularly contribute here.
Thank you all!
This is a very relevant discussion, which should be the backbone of the decision-making of anyone taking part in the general effort towards AI alignment.
It is important to point out, however, that this may be just an instance of a more general problem: any form of power or knowledge can be used for good or bad purposes, so advancing power and knowledge is always a double-edged sword.
There doesn't seem to me to be an escape from the moral variance that humans experience as part of their natural and developed proclivities.
The only chance we have against power and knowledge as great as AI enables is to illuminate with real clarity how these technical pieces can come together to cut one way or the other.
Bringing the problem to light is our best chance.
Very powerful reasoning. I would add that a relevant form of self-deception that should be investigated in this framework is religious faith, given its foundational place in societies worldwide.
Religious faith seems like an optimal solution to the hostile-telepaths problem; in certain contexts it looks like a mixture of the three solutions you outlined (Newcomblike self-deception, having power, and Occlumency).
Religious faith seems to provide psychological power through the feelings of absolute certainty and over-confidence that religious people experience. At the same time, conversion to religion is correlated with overcoming PTSD and addiction (step 2 of the twelve-step program: "Came to believe that a Power greater than ourselves could restore us to sanity.").
I think there is an underlying problem of concept hierarchy which may precede self-deception. Maybe we are able to hide concepts and thoughts while they occupy a peripheral part of the mind; this could also be linked to a continuous formulation of the Newcomb-like problem in decision theory. I am not sure how this unfolds, but I will be trying to explore it in the weeks to come.
Thank you for sharing!
I think this is a very simple and likewise powerful articulation of anti-anthropocentrism, which I fully support.
I am particularly on board with "Sentience doesn’t require [...] any particular physical substrate beyond something capable of modeling these patterns."
To correctly characterize the proof as 100% logical, I think there is still some room for explicitly stating rigorous definitions of the concepts involved, such as sentience.
Thank you for sharing!
The scale problem is so universal and hard to tap into. Taking lessons from physics, I would caution against building a fully generalized framework where agents and subagents function under the same interactions: there are transitions between micro and macro states where symmetry breaks completely. Complexity science points towards the same problem; emergent behaviour in cellular automata is hardly well predicted from the smaller parts that make it up.
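As a toy illustration of that last point, here is a minimal sketch of elementary cellular automaton Rule 110 in Python: the local update rule fits in one eight-entry lookup table, yet the macro-scale structures it produces are famously hard to anticipate from the rule alone (Rule 110 is even Turing-complete):

```python
import numpy as np

def rule110_step(cells):
    """One synchronous update of elementary cellular automaton Rule 110."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    # Encode each neighborhood (left, center, right) as a number 0..7,
    # then look up the next state in Rule 110's output table.
    pattern = 4 * left + 2 * cells + right
    table = np.array([0, 1, 1, 1, 0, 1, 1, 0])  # outputs for patterns 0..7
    return table[pattern]

# Start from a single live cell; complex, interacting structures emerge
# even though the update rule is trivial.
cells = np.zeros(64, dtype=int)
cells[-1] = 1
for _ in range(30):
    print("".join("█" if c else " " for c in cells))
    cells = rule110_step(cells)
```

Nothing in the lookup table "contains" the gliders and collisions that appear at the macro scale, which is exactly the micro-to-macro gap described above.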