Aliya Amirova

Data Scientist and Epidemiologist at Population Health Sciences, King's College London 

Research interests: responsible AI development that aligns with diverse human values and ensures equity.  

Alert to risks from emerging AI technologies.

Opinions my own.

https://aliyaamirova.com/


The main point I am trying to make is that AGI risks cannot be deduced or theorised solely in abstract terms; they must be understood through rigorous empirical research on complex systems. If you view AI as an agent in the world, then it functions as a complex intervention: it may or may not act as intended by its designer, it may or may not deliver the preferred outcomes, and it may or may not be acceptable to users. In fact, these are matters of degree rather than binaries: there are degrees to which it acts as intended, degrees to which it is acceptable, and so on, and the uncertainty in each of these parameters can be estimated through empirical research. This calls for careful empirical study and system-level understanding.
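To make this concrete, here is a minimal sketch, with entirely invented numbers, of what I mean by estimating uncertainty in one of these parameters: treat "acts as intended" as a proportion observed over repeated deployment episodes and place a credible interval around it. The same logic applies to acceptability and to delivering preferred outcomes.

```python
# Hypothetical sketch: estimating uncertainty about the degree to which a
# deployed system "acts as intended", using a simple beta-binomial model.
# All counts below are invented for illustration only.
from scipy import stats

n_episodes = 200      # hypothetical number of audited deployment episodes
n_as_intended = 154   # episodes where behaviour matched the designer's intent

# Uniform Beta(1, 1) prior updated with the observed counts.
a_post = 1 + n_as_intended
b_post = 1 + (n_episodes - n_as_intended)
posterior = stats.beta(a_post, b_post)

point_estimate = posterior.mean()
lower, upper = posterior.ppf([0.025, 0.975])  # 95% credible interval

print(f"P(acts as intended) = {point_estimate:.2f} "
      f"(95% CrI {lower:.2f} to {upper:.2f})")
```

This is, of course, a toy model; a real evaluation of a complex intervention would model context, mechanisms, and outcomes jointly rather than a single proportion.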

I write academic papers in healthcare, psychology, and epidemiology for peer review. I don't write blog posts every day, so thank you for your patience with this particular style, which was devised for guidelines and frameworks.

Thank you for sharing your thoughts on AI alignment, AI safety, and imminent threats. I posted this essay to demonstrate how public health guidelines and systems thinking can be useful in preventing harm and inequality, and in avoiding unforeseen negative outcomes more generally. I wanted the LessWrong audience to gain perspectives from other fields that have been addressing rapidly emerging innovations, along with their benefits and harms, for centuries, with the aim of minimising risk and maximising benefit, keeping the wider public in mind.

I am aware of the narrative around the 'paperclip maximiser' threat. However, I believe it is important to recognise that the risks AI brings should not be viewed in the context of a single threat, a single bias, or a single path to extinction. AI is a complex system, used in a complex setting: the social structure. It should be studied with due rigour, with a focus on understanding its complexity.

If you can suggest literature on AGI alignment that recognises the complexity of the issue and applies systems thinking to the problem, I would be grateful.

@RobertM @Mitchell_Porter


I guess the standardised language for framework development fails the Turing Test.

The title is a play on words, merging the titles of two documents: the guidelines authored by the Medical Research Council, "Complex Intervention Development and Evaluation Framework" (1), and The Economic Forum for AI's "A Blueprint for Equity and Inclusion in Artificial Intelligence" (2). The blog post I wrote closely follows the standardised structure for frameworks and guidelines, with specific subheadings that are easy to quote.

"Addressing Uncertainties" is a major requirement in the iterative process of development and refinement of complex intervention. I did not come up with it; it is an agreed-upon requirement in high-risk health application and research. 

Would you like to engage with the content of the post? I thought LessWrong was about engaging in debate where people learn and attempt to reach consensus.

@Mitchell_Porter What made you think that I am not a native English speaker, and what made you think that this post was written by AI?

Hey, be civil! That is not nice. I am a human; I did not use AI.