This is in itself a relatively benign failure mode, no? Obviously, in practice, if this happened it might just be retried until it fails in a different mode, or it might fail catastrophically on the first try.
Hmm, I mean when we are talking about these kinds of counterfactuals, we obviously aren't working with the wavefunction directly, but that's an interesting point. Do you have a link to any writings on that specifically?
We can perform counterfactual reasoning about the result of a double slit experiment, including predicting the wavefunction, but perhaps that isn't quite what you mean.
An interesting point here is that when talking about future branches, I think you mean that they are probabilities conditioned on the present. However, as a pure measure of existence, I don't see why it would need to be conditioned on the present at all. The other question is then: what would count as WW2? A planetary conflict that has occurred after another planetary conflict? A conflict called World War 2?
Perhaps you are talking about branches conditioned on a specific point in the past, i.e. the end of WW1 as it happened in our past. In which case, I don't...
They are a tiny part of the search and development of an intervention.
I agree that there is complexity in healthcare that is not explained by a simple statistical model; my point is that the final layer often is a simple statistical model that drives a lot of the complexity and the outcomes. Making a drug is much more complex than deciding which drug to give, but that decision ultimately drives the outcomes.
Also incorrect. It would almost certainly require a complex model to find/create that content, possibly anew for each susceptible human.
Same poin...
I see, that's a great point, thanks for your response. It does seem realistic that it would become political, and it's clear that a co-ordinated response is needed.
On that note, I think it's a mistake to neglect that our epistemic infrastructure optimises for profit, which is an obvious misalignment now. Facebook and Google are already optimising for profit at the expense of civil discourse; they are already misaligned and causing harm. Focusing only on the singularity allows tech companies to become even more harmful, with the vague promise that they'l...
"The EA consensus is roughly that being blunt about AI risks in the broader public would cause social havoc."
I find this odd and patronising to the general public. Why would this not also apply to climate change? Climate change is also a not-initially-obvious threat, yet the bulk of the public now has a reasonable understanding of it, and that understanding has driven a lot of change.
Or would nuclear weapons be a better analogy? Then at least nuclear weapons being publicly understood brought gravity to the conversation. Or could part of the reason to avoid public awareness be avoi...
I agree, it still wouldn't be strong evidence for or against. No offence to any present or future sentient machines out there, but self-honesty isn't really clearly defined for AIs just yet.
My personal feeling is that LSTMs and transformers with attention on past states would explicitly have a form of self-awareness, by definition. I then think this bears ethical significance in proportion to something like the compression ratio of the inputs; a rough sketch of what I mean by that ratio is below.
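To make that concrete (this is only a crude, hypothetical proxy, not a worked-out measure): one way to operationalise "compression ratio of the inputs" is the ratio of an input stream's raw byte length to its losslessly compressed length, e.g. via zlib. Highly structured inputs compress well; noise barely compresses at all.

```python
import zlib
import numpy as np

def compression_ratio(inputs: np.ndarray) -> float:
    """Crude proxy: raw byte length divided by zlib-compressed byte length."""
    raw = inputs.astype(np.float32).tobytes()
    compressed = zlib.compress(raw, 9)  # maximum compression level
    return len(raw) / len(compressed)

# Structured inputs compress far better than noise.
structured = np.tile(np.arange(100, dtype=np.float32), 100)
noise = np.random.rand(10_000).astype(np.float32)
print(compression_ratio(structured))  # large ratio
print(compression_ratio(noise))       # close to 1
```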
As a side note, I enjoy Iain M. Banks' representation of how AIs could communicate emotions in future in addition to lang...
I've given a more thorough background to this idea in a presentation here https://docs.google.com/presentation/d/1VLUdV8ZFvS_GJdfQC-k7-kMhUrF0kzvm6y-HLEaHoCU and I am continuing to work it through. The essential point is to consider mutualistic agency as a potentially desired, and even critical, feature of systems that could be considered 'friendly' to humans, and self-determination as an important form of agency that lends itself to a mathematical analysis via conditional transfer entropy (sketched below). This is very much an early-stage analysis; however, what I...
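For reference, the formulation I have in mind is the standard (Schreiber-style) transfer entropy from a source process X to a target Y, conditioned on a third process Z; the exact choice of history lengths and conditioning variables is still an open detail of the analysis:

$$
T_{X \to Y \mid Z} \;=\; \sum_{y_{t+1},\, y_t,\, x_t,\, z_t} p(y_{t+1}, y_t, x_t, z_t)\, \log \frac{p(y_{t+1} \mid y_t, x_t, z_t)}{p(y_{t+1} \mid y_t, z_t)}
$$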
Global Wealth Redistribution to Mitigate Arms Races
Is it irrational for North Korea to try to build nuclear weapons? Maybe. However, if your country is disenfranchised and in poverty, it does seem like one route to having a say in global affairs and a better life. There are certainly other routes, and South Korea offers an example of what countries can achieve. However, as the world does not have a version of a 'safety net' for poor countries, there remains some incentive to race for power. In other words: if you are not confident that those in power are l...