When writing my articles, I prefer a workflow in which I can show an article to selected others for discussion and review before I publish. This currently does not seem to be possible without giving them co-authorship, which is often not what I want.
This could be solved, for example, by an additional option that makes the article link accessible to others even while it is in draft mode.
Good idea! I thought of this one: https://energyhistory.yale.edu/horse-and-mule-population-statistics/
Over recent months I have been able to gather some experience as an AI safety activist. One of my takeaways is that many of the people I talk to do not understand Yudkowsky's arguments very well.
I think this is mainly for two reasons:
A lot of his reasoning requires a kind of "mathematical intuition" that most people do not have. In my experience it is possible to make correct and convincing arguments that are easier to understand, or to invest more effort into explaining some of the more difficult ones.
I think he i
The delta in power efficiency is currently ~1000x in favor of brains => brain: ~20 W, AGI: ~20 kW. At ~0.33 Euro per kWh in Germany, 20 kWh cost ~6.60 Euro => running our AGI would, assuming your description of the situation is correct, cost around 6.60 Euros in energy per hour, which is cheaper than a human worker.
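For concreteness, here is the same back-of-envelope calculation as a short script (the numbers are just the rough assumptions stated above, not measurements of any real system):

```python
# Back-of-envelope check of the energy-cost estimate above.
# All figures are the rough assumptions from the comment (20 W brain,
# ~1000x efficiency gap, 0.33 Euro/kWh in Germany), not measured values.

brain_power_w = 20                            # human brain: ~20 W
efficiency_gap = 1000                         # assumed delta in power efficiency
agi_power_w = brain_power_w * efficiency_gap  # ~20,000 W = ~20 kW

price_eur_per_kwh = 0.33                      # assumed German electricity price
energy_per_hour_kwh = agi_power_w / 1000      # 20 kW running for 1 h = 20 kWh
cost_per_hour_eur = energy_per_hour_kwh * price_eur_per_kwh

print(f"AGI power draw:  {agi_power_w / 1000:.0f} kW")
print(f"Energy per hour: {energy_per_hour_kwh:.0f} kWh")
print(f"Cost per hour:   {cost_per_hour_eur:.2f} EUR")  # ~6.60 EUR
```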
So ... while I don't assume that such estimates need to be correct or apply to an AGI (that doesn't exist yet), I don't think you are making a very convincing point so far.
About point 1: I think you are right with that assumption, though I believe that many people repeat this argument without really having a stance on (or awareness of) brain physicalism. That's why I didn't hesitate to include it. Still, if you have a decent idea of how to improve this article for people who are sceptical of physicalism, I would like to add it.
About point 2: Yeah, you might be right ... a reference to OthelloGPT would make it more convincing - I will add it later!
Edit: Still, I believe that "mashup" isn't even a strictly false characterization of concept composition. I think I might add a paragraph explicitly explaining that and how I think about it.
Interesting insight. Sadly, there isn't much to be done against the beliefs of someone who is certain that God will save us.
Maybe the following: Assuming the frame of a believer, the signs of AGI being a dangerous technology seem obvious on closer inspection. If God exists, then we should assume that this is an intentional test he has placed in front of us. God has given us all the signs. God helps those who help themselves.
I am under the impression that the public attitude towards AI safety / alignment is about to change significantly.
Strategies that aim at informing parts of the public, which may have been pointless in the past (the risks were too abstract, etc.), may now become more successful: mainstream newspapers are beginning to write about AI risks, and people are beginning to be concerned. The abstract risks are becoming more concrete.
Maybe, if it happens early, there is a chance that it manages to become an intelligent computer virus but is not intelligent enough to scale its capabilities further or to produce effective schemes likely to result in our complete destruction. I know I am grasping at straws at this point, but maybe it's not absolutely hopeless.
The result could be corrupted infrastructure and a cultural shock strong enough for people to burn down OpenAI's headquarters (metaphorically speaking) and for AI-accelerating research to be internationally sanctioned.
In the past I have thought a lot about "early catastrophe scenarios", and while I am not convinced, it seemed to me that these might be the most survivable ones.
One very problematic aspect of this view that I would like to point out: in a sense, most 'more aligned' AGIs of otherwise equal capability level seem to be effectively 'more tied down' versions, so we should expect them to have a lower effective power level than a less aligned AGI with a shorter list of priorities.
If we imagine both as competing players in a strategy game, it seems that the latter has to follow fewer rules.
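To make the "fewer rules" point concrete, here is a minimal toy sketch (my own illustration, with made-up plan names and scores): an agent that must satisfy additional side conditions chooses from a subset of the unconstrained agent's options, so its best achievable score can never be higher.

```python
# Toy model of "more aligned = more constrained" (illustrative only).
# Each plan has a "power" payoff plus some properties an aligned agent
# must respect. Constraints can only shrink the feasible set, so the
# constrained optimum is never higher than the unconstrained one.

plans = {
    "persuade_operators":    {"power": 3, "honest": True,  "keeps_humans_in_loop": True},
    "acquire_compute_legit": {"power": 5, "honest": True,  "keeps_humans_in_loop": True},
    "self_exfiltrate":       {"power": 8, "honest": False, "keeps_humans_in_loop": False},
    "deceive_and_scale":     {"power": 9, "honest": False, "keeps_humans_in_loop": True},
}

def best_plan(constraints):
    """Return the highest-power plan that satisfies every constraint."""
    feasible = {
        name: attrs for name, attrs in plans.items()
        if all(attrs[c] for c in constraints)
    }
    return max(feasible.items(), key=lambda kv: kv[1]["power"])

print("less aligned:", best_plan([]))                                  # power 9
print("more aligned:", best_plan(["honest", "keeps_humans_in_loop"]))  # power 5
```

Of course this says nothing about how large the gap is in practice; it only illustrates why equal capability plus extra constraints implies lower effective power.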
I would be the last person to dismiss the potential relevance that understanding value formation and management in the human brain might have for AI alignment research, but I think there are good reasons to assume that the solutions our evolution has resulted in would be complex and not sufficiently robust.
Humans are [Mesa-Optimizers](https://www.alignmentforum.org/tag/mesa-optimization), and the evidence is solid that, as a consequence, our alignment with the implicit underlying utility function (reproductive fitness) is rather brittle (e.g. sex with contracepti...
I guess this means they found my suggestion reasonable and implemented it right away :D I am impressed!