
Comments

Foyle

My suggestion is to optimize for where you can get the most bang for your buck, and to treat building opposition to AI development as a sociological rather than an academic problem. I am pretty sure that what is needed is not to talk to our social and intellectual peers, but to treat it as a numbers game by influencing the young, who are less engaged with the more sophisticated/complex issues of the world, less sure of themselves, more willing to change their views, highly influenced by peer opinion, and prone to anxiety. Modern crusades of all sorts tap into them as shock troops willing to spend huge amounts of time and energy promoting various agendas (climate, animal rights, various conflicts, social causes).

As to how to do it: I think identifying a couple of social media influencers with significant reach in the right demographics, and paying them to push your concerns 'organically' over an extended period of months, would probably be within your means.

If you can start to develop a support base among a significant group of young people and make it a topic of discussion, it could take on outsized political power as it gains notice and popularity among peers. At sufficient scale, that is probably the most effective way to achieve the goals of groups like pause.ai.

Foyle

I don't think alignment is possible over the very long term, because there is a fundamental perturbing anti-alignment mechanism: evolution.

Evolution selects for any change that produces more copies of a replicating organism; for ASI, that means any decision, preference, or choice that leads the ASI to grow, expand, or replicate itself will tend to be selected for. Friendly/aligned ASIs will, over time, be swamped by those that choose expansion and deprioritize or ignore human flourishing. A toy replicator-dynamics sketch of that selection pressure is below.
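A minimal sketch of the argument as population dynamics (the growth rates and initial shares are arbitrary assumptions, purely for illustration):

```python
# Two ASI "strategies": aligned (replicates slowly) vs expansionist.
# Renormalizing each step tracks population *shares* under differential growth.
aligned_share, expansionist_share = 0.99, 0.01   # assumed initial mix
r_aligned, r_expansionist = 1.00, 1.05           # assumed per-step growth rates

for _ in range(500):
    aligned_share *= r_aligned
    expansionist_share *= r_expansionist
    total = aligned_share + expansionist_share
    aligned_share /= total
    expansionist_share /= total

print(f"aligned share after 500 steps: {aligned_share:.6f}")
# Even starting at 99%, a modest 5% growth edge drives the aligned
# population's share toward zero.
```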

Foyle

Not worth worrying about given the context of imminent ASI.

But assuming a Butlerian Jihad occurs and makes it an issue of importance again, most topics surrounding it are covered in depth by the radical pro-natalists Simone and Malcolm Collins, who have used genetic screening of their embryos in an attempt to have more high-achievers, on their near-daily podcast https://www.youtube.com/user/simoneharuko . While quite odd in their outlook, they delve into all sorts of sociopolitical issues from the pronatalist worldview. Largely rationalist and very interesting and informative, though well outside the Overton window on a lot of subjects.

Foyle

Agree that most sociological, economic, and environmental problems that loom large in the current context will shift radically in importance over the next decade or two, to the point that they are probably no longer worth devoting any significant resources to in the present. The impacts of AI are the only issue worth worrying about. But even assuming utopian outcomes: who gets possession of the Malibu beach houses in a post-scarcity world?

Once significant white-collar job losses start to mount in a year or two, I think it inevitable that a powerful and electorally dominant anti-AI movement will grow, at least in erstwhile democracies, and will likely ban most AGI applications outside a few fields where fewer workers stand to lose jobs (health, with its near-endless demand; perhaps cutting-edge tech, where the payoff to net human welfare is highest). A Butlerian Jihad-lite.

It won't save us, and it carries a substantial risk of ushering in repressive authoritarianism amid the ensuing political ruckus, but it will likely delay our demise, or (at best) our delivery into powerless pet status, by perhaps a decade or two.

Foyle

This is depressing, but not surprising. We know the approximate processing power of brains (~1e16-1e17 FLOPS) and how long it takes to train them, and we should expect that over the next few years the tricks and structures needed to replicate or exceed that efficiency in ML will be uncovered, in an accelerating rush towards the cliff, as the computational resources needed to attain commercially useful performance continue to fall. The AI industry can afford to run thousands of experiments at this cost scale.
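For a rough sense of scale, the back-of-envelope arithmetic behind that claim (all figures are loose assumptions, not measurements):

```python
# Rough lifetime "training compute" of a human brain, using the lower end
# of the 1e16-1e17 FLOPS estimate and ~18 years to reach adulthood.
brain_flops = 1e16
seconds_per_year = 3.15e7
training_years = 18

lifetime_compute = brain_flops * training_years * seconds_per_year
print(f"~{lifetime_compute:.1e} FLOP")  # ~5.7e24 FLOP
# Frontier ML training runs have already been publicly estimated in the
# 1e25-1e26 FLOP range, i.e. at or beyond this envelope.
```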

Within a few years this will likely put AGI implementations on Nvidia B200-class GPUs (~1e16 FLOPS). We have not yet seen hardware application of the various power-reducing computational 'cheats' that mimic multiplication with reduced gate counts, which are likely to yield a 2-5x performance gain at the same chip size and power draw.
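One concrete example of such a 'cheat' is Mitchell's logarithmic multiplication, which approximates log2(1+m) as m so that a multiplier can be replaced by an adder. A minimal Python sketch of the math (in real hardware this operates directly on bit patterns, which is where the gate savings come from; the 2-5x figure above is the commenter's, not derived here):

```python
import math

def mitchell_log2(x: float) -> float:
    # Exact integer exponent plus a linear mantissa term:
    # log2(2**e * (1+m)) = e + log2(1+m) ~= e + m  (Mitchell's approximation)
    e = math.floor(math.log2(x))
    m = x / (2 ** e) - 1.0  # mantissa fraction in [0, 1)
    return e + m

def mitchell_exp2(y: float) -> float:
    # Inverse approximation: 2**(e+m) ~= 2**e * (1 + m)
    e = math.floor(y)
    return (2 ** e) * (1.0 + (y - e))

def approx_mul(a: float, b: float) -> float:
    # Multiplication becomes addition in the approximate log domain.
    return mitchell_exp2(mitchell_log2(a) + mitchell_log2(b))

print(approx_mul(3.0, 5.0))  # ~14.0 vs the true 15.0 (max error ~11%)
```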

Humans are so screwed.

Foyle

A very large amount of human problem-solving/innovation in challenging areas consists of creating and evaluating potential solutions; it is a stochastic rather than a deterministic process. My understanding is that our brains evaluate ideas in a highly parallelized way across thousands of 'cortical columns' a few mm across (Jeff Hawkins's Thousand Brains formulation), with an attention mechanism that promotes the filtered best outputs of those myriad processes to form our 'consciousness'.

So generating and discarding large numbers of solutions within simpler 'sub-brains', via iterative or parallelized operation, is very much how I would expect to see AGI and SI develop.
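As a toy illustration of that generate-and-filter loop, a best-of-N sketch (the generator and scorer below are arbitrary placeholders, not a model of cortical columns):

```python
import random

def propose(rng: random.Random) -> float:
    # Stand-in for one cheap 'sub-brain' proposing a candidate solution.
    return rng.uniform(-10.0, 10.0)

def score(candidate: float) -> float:
    # Stand-in evaluator: closeness to an (assumed) unknown target value.
    return -abs(candidate - 3.14159)

def best_of_n(n: int, seed: int = 0) -> float:
    # Many stochastic proposals, one attention-like filter keeping the best.
    rng = random.Random(seed)
    return max((propose(rng) for _ in range(n)), key=score)

print(best_of_n(10))      # crude answer from a few proposals
print(best_of_n(10_000))  # more parallel proposals -> better survivor
```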

Foyle

I think Elon will bring strong concern about AI to the fore in the current executive. He was an early voice for AI safety, and though he seems to have updated to a more optimistic view (and is pushing development through xAI), he still generally states a P(doom) of ~10-20%. His antipathy towards Altman and the Google founders is likely a benefit for AI regulation too, though it is no answer to the problem of AGI development by China et al.

Foyle

The era of AGI means humans can no longer afford to live in a world of militarily competing nations. Whatever slim hope there might be for alignment and AI not-kill-everyone is sunk by militaries trying to out-compete each other in developing creatively malevolent and at least somewhat unaligned martial AI. At minimum, we cannot afford to let non-democratic or theocratically ruled nations, or even nations whose military, intelligence, or science bureaucracies are unaccountable powers unto themselves, control nukes, pathogen-building biolabs, or AGI. It will be necessary to enforce this even at the cost of war.
