Hi everyone! I'm new to LW and wanted to introduce myself. I'm from the SF Bay Area and working on my PhD in anthropology. I study AI safety, and I'm mainly interested in research efforts that draw methods from the human sciences to better understand present and future models. I'm also interested in AI safety's sociocultural dynamics, including how ideas circulate through the research community and how uncertainty figures into our interactions with models. All thoughts and leads are welcome.
This work led me to LW. At first all the content was overwhelming, but now there's much I appreciate. It's my go-to place for developments in the field and informed responses to them. More broadly, learning about rationality through the Sequences and other posts is helping me improve my work as a researcher, and I'm looking forward to continuing this process.