geoffreymiller

Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, and X-risk. Worked on neural networks, genetic algorithms, evolutionary robotics, and autonomous agents back in the 90s.

Comments

This is really good, and it'll be required reading for my new 'Psychology and AI' class that I'll teach next year. 

Students are likely to ask 'If the blob can figure out so much about the world, and modify its strategies so radically, why does it still want sugar? Why not just decide to desire something more useful, like money, power, and influence?'

Shutting down OpenAI entirely would be a good 'high level change', at this point.

Well, I'm seeing no signs whatsoever that OpenAI would ever seriously consider slowing, pausing, or stopping its quest for AGI, no matter what safety concerns get raised. Sam Altman seems determined to develop AGI at all costs, despite all risks, ASAP. I see OpenAI as betraying virtually all of its founding principles, especially since its strategic alliance with Microsoft, and given the prospect of colossal wealth for its leaders and employees.

At this point, I'd rather spend $5-7 trillion on a Butlerian Jihad to stop OpenAI's reckless hubris.

Human intelligence augmentation is feasible on a timescale of decades to generations, given iterated polygenic embryo selection.

I don't see any feasible way that gene editing or 'mind uploading' could work within the next few decades. Gene editing for intelligence seems infeasible because human intelligence is a massively polygenic trait, influenced by thousands to tens of thousands of quantitative trait loci. Gene editing can fix major deleterious mutations, nudging IQ back up toward normal levels, but we don't know of any single genes that can boost IQ above the normal range. And 'mind uploading' would require extremely fine-grained brain scanning that we simply don't have now.

The bottom line: human intelligence augmentation would happen way too slowly to compete with ASI development.
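To make 'way too slowly' concrete, here's a minimal Monte Carlo sketch of iterated embryo selection. All of the parameters are illustrative assumptions on my part, not published estimates: a polygenic score that correlates about 0.3 with an embryo's true genetic IQ, roughly 7.5 IQ points of within-family genetic variation among sibling embryos, 10 viable embryos per family, and about 28 years per generation.

```python
import numpy as np

# Toy model of iterated polygenic embryo selection for IQ.
# All parameters below are illustrative assumptions, not empirical estimates.
rng = np.random.default_rng(0)

R_SCORE = 0.3        # assumed correlation between polygenic score and true genetic IQ
SIGMA_WITHIN = 7.5   # assumed within-family genetic SD among sibling embryos (IQ points)
N_EMBRYOS = 10       # viable embryos available for selection per family
YEARS_PER_GEN = 28   # rough human generation time
N_GENERATIONS = 4
N_FAMILIES = 50_000  # Monte Carlo replicates

gain = 0.0
for gen in range(1, N_GENERATIONS + 1):
    # True genetic values of each family's embryos, centered on the cumulative gain so far.
    true_g = gain + rng.normal(0.0, SIGMA_WITHIN, (N_FAMILIES, N_EMBRYOS))
    # Noisy polygenic score constructed to correlate R_SCORE with the true genetic value.
    noise_sd = SIGMA_WITHIN * np.sqrt(1.0 / R_SCORE**2 - 1.0)
    score = true_g + rng.normal(0.0, noise_sd, (N_FAMILIES, N_EMBRYOS))
    # Each family implants its top-scoring embryo.
    picked = true_g[np.arange(N_FAMILIES), score.argmax(axis=1)]
    gain = picked.mean()
    print(f"generation {gen} (~year {gen * YEARS_PER_GEN}): cumulative gain ≈ {gain:.1f} IQ points")
```

Under these assumptions the gain is only a few IQ points per generation, so even a century of sustained selection yields a modest shift, which is the sense in which this route can't keep pace with ASI development.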

If we want safe AI, we have to slow AI development. There's no other way.

Tamsin -- interesting points. 

I think it's important for the 'Pause AI' movement (which I support) to help politicians, voters, and policy wonks understand that 'power to do good' is not necessarily correlated with 'power to deter harm' or 'power to do indiscriminate harm'. So, advocating for caution ('OMG AI is really dangerous!') should not be read as a claim that AI offers 'power to do good' or 'power to deter harm' -- which could incentivize gov'ts to pursue AI despite the risks.

For example, nuclear weapons can't really do much good (except maybe for blasting incoming asteroids), have some power to deter the use of nuclear weapons by others, and have a lot of power to do indiscriminate harm (e.g. global thermonuclear war).

Engineered pandemic viruses, by contrast, would have virtually no power to do good, no power to deter harm, and only power to do indiscriminate harm (e.g. a global pandemic).

Arguably, ASI might have a LOT more power to do indiscriminate harm than power to deter harm or power to do good.

If we can convince policy-makers that this is a reasonable viewpoint (ASI offers mostly indiscriminate harm, not good or deterrence), then it might be easier to achieve a helpful pause, and also to reduce the chance of an AI arms race.

gwern - The situation is indeed quite asymmetric, insofar as some people at Lightcone seem to have launched a poorly researched slander attack on another EA organization, Nonlinear, which has suffered serious reputational harm as a result, whereas Nonlinear did not attack Lightcone or its people, except insofar as was necessary to defend themselves.

Treating Nonlinear as a disposable organization, and treating its leaders as having disposable careers, seems ethically very bad.

Naive question: why are the disgruntled ex-employees who seem to have made many serious false allegations the only ones whose 'privacy' is being protected here? 

The people who were accused at Nonlinear aren't able to keep their privacy. 

The guy (Ben Pace) who published the allegations isn't keeping his privacy.

But the people who are at the heart of the whole controversy, whose allegations are the whole thing we've been discussing at length, are protected by the forum moderators? Why? 

This is a genuine question. I don't understand the ethical or rational principles that you're applying here.

There's a human cognitive bias that may be relevant to this whole discussion, but that may not be widely appreciated in Rationalist circles yet: gender bias in 'moral typecasting'.

In a 2020 paper, my U. New Mexico colleague Tania Reynolds and coauthors found a systematic bias for women to be more easily categorized as victims and men as perpetrators, in situations where harm seems to have been done. They ran six studies in four countries (total N = 3,317).

(Ever since a seminal paper by Gray & Wegner (2009), there's been a fast-growing literature on moral typecasting. Beyond this Nonlinear dispute, it's something that Rationalists might find useful in thinking about human moral psychology.) 

If this dispute over Nonlinear is framed as male Emerson Spartz (at Nonlinear) vs. the females 'Alice' and 'Chloe', people may tend to see Nonlinear as the harm-perpetrator. If it's framed as male Ben Pace (at LessWrong) vs. female Kat Woods (at Nonlinear), people may tend to see Ben as the harm-perpetrator.

This is just one of the many human cognitive biases that's worth bearing in mind when trying to evaluate conflicting evidence in complex situations. 

Maybe it's relevant here, maybe it's not. But the psychological evidence suggests it may be relevant more often than we realize.

(Note: this is a very slightly edited version of a comment originally posted on EA Forum here). 

Whatever people think about this particular reply by Nonlinear, I hope it's clear to most EAs that Ben Pace could have done a much better job of fact-checking his allegations against Nonlinear, and of getting their side of the story.

In my comment on Ben Pace's original post 3 months ago, I argued that EAs & Rationalists are not typically trained as investigative journalists, and we should be very careful when we try to do investigative journalism -- an epistemically and ethically very complex and challenging profession, which typically requires years of training and experience -- including many experiences of getting taken in by individuals and allegations that seemed credible at first, but that proved, on further investigation, to have been false, exaggerated, incoherent, and/or vengeful.

EAs pride ourselves on our skepticism and our epistemic standards when we're identifying large-scope, neglected, tractable cause areas to support, and when we're evaluating different policies and interventions to promote sentient well-being. But those EA skills overlap very little with the kinds of investigative journalism skills required to figure out who's really telling the truth, in contexts involving disgruntled ex-employees versus their former managers and colleagues.

EA epistemics are well suited to the domains of science and policy. We're often not as savvy when it comes to interpersonal relationships and human psychology -- which is the relevant domain here.

In my opinion, Mr. Pace did a rather poor job of playing the investigative journalist role, insofar as most of the facts, claims, and perspectives posted by Kat Woods here were not even included or addressed by Ben Pace.

I think in the future, EAs making serious allegations about particular individuals or organizations should be held to a pretty high standard of doing their due diligence, fact-checking their claims with all relevant parties, showing patience and maturity before publishing their investigations, and expecting that they will be held accountable for any serious errors and omissions that they make.

(Note: this reply is cross-posted from EA Forum; my original comment is here.)
