I don't think alignment is possible over the long, long term because there is a fundamental perturbing anti-alignment mechanism: evolution.
Evolution selects for any change that produces more of a replicating organism; for ASI, that means any decision, preference or choice that favors the ASI growing, expanding or replicating itself will tend to be selected for. Friendly/aligned ASIs will over time be swamped by those that choose expansion and deprioritize or ignore human flourishing.
With a large enough decisive strategic advantage, a system can afford to run safety checks on any future versions of itself and anything else it's interacting with sufficient to stabilize values for extremely long periods of time.
Multipolar worlds though? Yeah, they're going to get eaten by evolution/moloch/power seeking/pythia.
Not worth worrying about given the context of imminent ASI.
But assuming a Butlerian Jihad occurs to make it an issue of importance again, most topics surrounding it are gone into in depth by the radical pro-natalists Simone and Malcolm Collins, who have employed genetic screening of their embryos to attempt to have more high-achievers, on their near-daily podcast https://www.youtube.com/user/simoneharuko . While quite odd in their outlook, they delve into all sorts of sociopolitical issues from the pronatalist worldview. Largely rationalist and very interesting and informative, though well outside the Overton window on a lot of subjects.
The phone seems to be off the hook for most of the public on AI danger, perhaps a symptom of burnout from numerous other scientific millennialist scares - people have been hearing of imminent dangers of catastrophe for decades that have failed to impact the lives of 95%+ of the population in any significant way, and now just write it all off as more of the same.
I am sure that most LW readers find little in the way of positive reception for our concerns amongst less technologically engaged family members and acquaintances. There are just too many comforting tec...
Agree that most sociological, economic and environmental problems that loom large in the current context will radically shift in importance in the next decade or two, to the point that they are probably no longer worth devoting any significant resources to in the present. Impacts of AI are the only issue worth worrying about. But even assuming utopian outcomes: who gets possession of the Malibu beach houses in a post-scarcity world?
Once significant white-collar job losses start to mount in a year or two I think it inevitable that a powerful and electorally d...
This is depressing, but not surprising. We know the approximate processing power of brains (~1e16-1e17 flops) and how long it takes to train them, and should expect that over the next few years the tricks and structures needed to replicate or exceed that efficiency in ML will be uncovered in an accelerating rush towards the cliff, as the computational resources needed to attain commercially useful performance continue to fall. The AI industry can afford to run thousands of experiments at this cost scale.
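As a rough, hedged illustration of why those two numbers matter (the flops range is the one above; ~20 years of experience is my assumption):

```python
# Back-of-envelope "lifetime compute" of a brain, using the ~1e16-1e17 flop/s
# figure above and roughly 20 years of experience as an assumed training period.
brain_flops_low, brain_flops_high = 1e16, 1e17
seconds_20_years = 20 * 365 * 24 * 3600              # ~6.3e8 s

lifetime_low = brain_flops_low * seconds_20_years    # ~6e24 flop
lifetime_high = brain_flops_high * seconds_20_years  # ~6e25 flop
print(f"{lifetime_low:.1e} to {lifetime_high:.1e} flop")
```

Compute budgets in that range are already roughly comparable to the largest reported training runs, which is one way of reading "the resources needed continue to fall".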
Within a few years this will likely see AGI implem...
A very large amount of human problem solving/innovation in challenging areas is creating and evaluating potential solutions; it is a stochastic rather than deterministic process. My understanding is that our brains evaluate ideas in a highly parallelized way across thousands of 'cortical columns' a few mm across (Jeff Hawkins' Thousand Brains formulation), with an attention mechanism that promotes the filtered best outputs of those myriad processes, forming our 'consciousness'.
So generating and discarding large numbers of solutions within simpler 'sub-brains', via iterative or parallelized operation, is very much how I would expect to see AGI and SI develop.
I think Elon will bring strong concern about AI to the fore in the current executive - he was an early voice for AI safety, and though he seems to have updated to a more optimistic view (and is pushing development through xAI), he still generally states P(doom) ~10-20%. His antipathy towards Altman and the Google founders is likely of benefit for AI regulation too - though that is no answer for the China et al AGI development problem.
The era of AGI means humans can no longer afford to live in a world of militarily competing nations. Whatever slim hope there might be for alignment and AI not-kill-everyone is sunk by militaries trying to out-compete each other in development of creatively malevolent and at least somewhat unaligned martial AI. At minimum we can't afford non-democratic or theocratically ruled nations, or even nations with unaccountable power-unto-themselves military, intelligence or science bureaucracies to control nukes, pathogen building biolabs or AGI. It will be necessary to enforce this even at the cost of war.
Humans as social animals have a strong instinctual bias towards trust of conspecifics in prosperous times. Which makes sense from a game-theoretic strengthen-the-tribe perspective. But I think that leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia in building ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and 'samey' enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).
It was a very frustrating conversation to listen to, because Wolfram really hasn't engaged his curiosity and done the reading on AI-kill-everyoneism. So we just got a torturous number of unnecessary and oblique diversions from Wolfram, who didn't provide any substantive foil to Eliezer.
I'd really like to find Yudkowsky debates with better-prepared AI optimists who try to counter his points. Do any exist?
It seems unlikely to me that there is potential for large brain-based intelligence advances beyond the current best humans using human-evolved biology. There will be distance scaling limitations linked to neural signal speeds.
Then there is Jeff Hawkins' 'Thousand Brains' theory of human intelligence: that our brains are made up of thousands of parallel-processing cortical columns of a few mm cross-section and a few mm thick, with cross-communication and recursion etc, but that fundamental processing core probably isn't scalable in complexity, on...
Are any of the socio-economic-political-demographic problems of the world actually fixable or improvable in the time before the imminent singularity renders them all moot anyway? It all feels like bread-and-circuses to me.
The pressing political issues of today are unlikely to even be in the top-10 in a decade.
Fantastic life skill to be able to sleep in a noisy environment on a hard floor. Most Chinese can do it so easily, and I would frequently see kids anywhere up to 4-5 years old being carried sleeping down the road by guardians.
I think it's super valuable when it comes to adulthood and sharing a bed - one less potential source of difficulties if adaptation to a noisy environment when sleeping makes snoring a non-issue.
It is the literary, TV and movie references, a lot of stuff also tied to the technology and social developments of the 80s-00s (particularly the Ankh-Morpork situated stories), and a lot of classical allusions. 'Education' used to lean on common knowledge of a relatively narrow corpus of literature and history - Shakespeare, chivalry, European history, classics etc - for the social advantage those common references gave, and was thus fed to boomers and gen X and Y, but I think it's now rapidly slipping into obscurity as few younger people read and schools shift a...
Yeah, powering through it. I've tried adult fiction and sci-fi but he's not interested in it yet - not grokking adult motivations, attitudes and behaviors yet - so I'm feeding him stuff that he enjoys to foster the habit of reading.
I've just started my 11yr old tech-minded son reading the Worm web serial by John McCrae (free and online, longer than the Harry Potter series). It's a bit grim/dark and violent, but an amazing and compelling sci-fi meditation on superheroes and personal struggles. A more brutal and sophisticated world-build along the lines of the popular 'My Hero Academia' anime that my boys watched compulsively. Thousands of fanfics too.
Stories from Larry Niven's "Known Space" universe. Lots of fun overcoming-challenges short stories and novellas that revolve ar...
We definitely want our kids involved in at times painful activities as a means of increasing confidence, fortitude and resilience against future periods of discomfort to steel them against the trials of later life. A lot of boys will seek it out as a matter of course in hobby pursuits including martial arts.
I think there is also value in mostly not interceding in conflicts unless there is an established or establishing pattern of physical abuse. Kids learn greater social skills and develop greater emotional strength when they have to dea...
"In many cases, however, evolution actually reduces our native empathic capacity -- for instance, we can contextualize our natural empathy to exclude outgroup members and rivals."
Exactly as it should be.
Empathy is valuable in close community settings: a 'safety net' adaptation to make the community stronger, with people we keep track of to ensure we are not being exploited by those not making a concomitant effort to help themselves. But it seems to me that it is destructive at the wider social scales enabled by social media, where we don't or can't have effec...
I read some years ago that the average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2x population mean IQ). So the simplest and cheapest means to lift population average IQ by 1 standard deviation is just to use +4 SD sperm (around 1 in 30,000), and high-IQ ova if you can convince enough genius women to donate (or clone, given the recent demonstration of male and female gamete production from stem cells). +4 SD mom + dad = +2 SD kids on average. This is the reality that allows ultra-wealthy dynasties to maintain a ~1.3 SD IQ average advantage over genera...
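A minimal sketch of that rule of thumb in code (the formula is the one quoted above; treating 1 SD as 15 IQ points is my assumption):

```python
# Rule of thumb from the comment: child IQ regresses halfway back to the population mean.
def expected_child_iq(mom_iq: float, dad_iq: float, pop_mean: float = 100.0) -> float:
    return 0.25 * (mom_iq + dad_iq + 2 * pop_mean)

# +4 SD parents (IQ 160 at 15 points/SD) -> children averaging ~+2 SD (IQ 130)
print(expected_child_iq(160, 160))  # 130.0
```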
I think there is far too much focus on technical approaches, when what is needed is a more socio-political focus: raising money, convincing deep pockets of the risks in order to leverage smaller sums, buying politicians, influencers and perhaps other groups that can be co-opted and convinced of existential risk to put a halt to AI dev.
It amazes me that there are huge, well-financed and well-coordinated campaigns for climate, social and environmental concerns - trivial issues next to AI risk - and yet AI risk remains strictly academic/fringe. What is on paper a...
They cannot just add an OOM of parameters, much less three.
How about 2 OOMs?
HW2.5: 21 Tflops; HW3: 72x2 = 72 Tflops (redundant); HW4: 3x72 = 216 Tflops (not sure about redundancy); and Elon said in June that the next-gen AI5 chip for FSD would be about 10x faster, say ~2 Pflops.
By rough approximation to brain processing power you get about 0.1 Pflop per gram of brain, so HW2.5 might have been a 0.2 g baby mouse brain, HW3 a 1 g baby rat brain, HW4 perhaps an adult rat, and the upcoming HW5 a 20 g small cat brain.
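The same conversion spelled out (the ~0.1 Pflop/gram brain-equivalence figure is the rough assumption above, not an established number):

```python
# Convert the FSD computer specs quoted above into "grams of brain" using the
# assumed ~0.1 Pflop (1e14 flop/s) per gram of brain equivalence.
PFLOP_PER_GRAM = 0.1

hardware_tflops = {"HW2.5": 21, "HW3": 72, "HW4": 216, "HW5/AI5 (est.)": 2000}

for name, tflops in hardware_tflops.items():
    grams = (tflops / 1000) / PFLOP_PER_GRAM  # Tflops -> Pflops -> grams of brain
    print(f"{name}: ~{grams:.1f} g brain equivalent")
# HW2.5 ~0.2 g, HW3 ~0.7 g, HW4 ~2.2 g, HW5 ~20 g
```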
As a real-world analogue, cat to dog (25-100 g brain) seems to me the mini...
There has been a lot of interest in this going back to at least early this year and the 1.58-bit (ternary) LLM paper https://arxiv.org/abs/2402.17764 , so I expect there has been a research gold rush, with a lot of design effort going into producing custom hardware almost as soon as that was revealed.
With the Nvidia dual-chip GB200 Grace Blackwell offering (sparse) 40 Pflops of fp4 at ~1 kW, there has already been something close to optimal hardware available - that fp4 performance may have been the reason the latest-generation Nvidia GPUs are in such high demand - pr...
AI safety desperately needs to buy in or persuade some high-profile talent to raise public awareness. The business-as-usual approach of the last decade is clearly not working - we are sleepwalking towards the cliff. Given how timelines are collapsing, the problem to be solved has morphed from being a technical one to a pressing social one - we have to get enough people clamouring for a halt that politicians will start to prioritise appeasing them ahead of their big tech donors.
It probably wouldn't be expensive to rent a few high-profile influencers with major reach amongst impressionable youth - a demographic that is easily convinced to buy into end-of-the-world causes and campaign on them.
Current Nvidia GPU prices are highly distorted by scarcity, with profit margins that are reportedly in the 80-90% of sale price range: https://www.tomshardware.com/news/nvidia-makes-1000-profit-on-h100-gpus-report
If these were commodified to the point that scarcity didn't influence price, then that $/flop point would seemingly leap up by an order of magnitude to above 1e15 flops/$1000, scraping the top of that curve - i.e. near brain-equivalent computation power at ~$3.5k manufactured hardware cost - and the latest Blackwell GPU has lifted that performance by another ...
I'm going through this too with my kids. I don't think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills (likely to be the area where humans with a combination of brains and dexterity retain useful value longer than any other).
Outside of that the other thing I can do is try to ensure that they have social status and financial/asset nest egg from me, because there is a good chance that the egalitarian ability to lift oneself through effort is g...
[disclaimer: I am a heat pump technology developer, however the following is just low-effort notes and mental calcs of low reliability, they may be of interest to some. YMMV]
It may be better to invest in improved insulation.
As a rough rule of thumb, COP = eff * Theat/(Theat - Tcold), with temperatures measured in absolute degrees (Kelvin or Rankine). eff for most domestic heat pumps is in the range 0.35 to 0.45; high-quality European units are often best for COP due to a long history of higher power costs - but they are very expensive, frequently $10-20k.
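A quick worked example of that rule of thumb (the temperatures and eff value below are illustrative assumptions, not recommendations):

```python
# COP ~= eff * Theat / (Theat - Tcold), with temperatures in Kelvin (rule of thumb above).
def cop(t_heat_k: float, t_cold_k: float, eff: float = 0.4) -> float:
    return eff * t_heat_k / (t_heat_k - t_cold_k)

# 35 C (308 K) heating-circuit temperature, 0 C (273 K) outdoor air, eff = 0.4
print(round(cop(308.0, 273.0), 2))  # ~3.5
```

Note how the COP collapses as the temperature lift (Theat - Tcold) grows, which is part of why reducing the heat you need (insulation) can beat buying a better heat pump.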
Looking at...
Niron's Fe16N2 looks to have a maximum energy product (the figure of merit for magnet 'strength') of up to 120 MGOe at microscopic scale, which is about double that of neodymium magnets (~60 MGOe); however, only 20 MGOe has been achieved in fabrication. https://www.sciencedirect.com/science/article/am/pii/S0304885319325454
Processing at 1GPa and 200°C isn't that difficult if there is commercial benefit. Synthetic diamonds are made in special pressure vessels at 5GPa and 1500°C. There is some chance that someone will figure out a processing route that makes it...
I read of a proposal a few months back to achieve brain immortality via the introduction of new brain tissue, done in such a way as to maintain continuity of experience and personality over time (Replenisens; discussion of a system for doing it in human brains). That would perhaps provide a more reliable vector for introduction, as the brain is progressively hybridised with a more optimal neural genetic design. Perhaps this could be done more subtly via introduction of 'perfected' stem cells and then some way of increasing the rate of die-off of ol...
"So your job depends on believing the projections about how H2 costs will come down?"
I wouldn't waste my life on something I didn't see as likely - I have no shortage of opportunities in a wide variety of greentech fields. Hydrogen is the most efficient fuel-storage 'battery', with 40-50% round-trip energy storage efficiency possible. Other synthetic fuels are less efficient but may be necessary for longer-term storage or smaller applications. For shipping and aviation, however, LH2 is the clear and obvious winner.
Desert pv will likely come down in pri...
Battery-augmented trains: given normal EV use examples (Tesla et al, and the Tesla Semi), a charging time of 10% of usage time is relatively normal - e.g. charging for 20 minutes and discharging for 3 hours, or in the Tesla Semi's case more like an hour for 8-10 hours of operation - but trains have lower drag (and less penalty for weight) than cars or trucks, so will go further for the same amount of energy. The idea is therefore that you employ a pantograph multi-MW charging system on the train that only needs to operate about 10% of the time. This m...
Battery electric trains with a small proportion of electrified (for charging) sections seem like a decent and perhaps more economic middle ground (rough sizing sketch below). Could get away with <10% of rail length electrified, and sodium batteries are expected to come down to ~$40/kWh in the next few years. High-utilisation batteries that are cycled daily or multiple times a day have lower capital costs. May also work for interstate trucking.
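A toy sizing sketch of the charge-as-you-go idea; every number below (average traction power, gap length) is a made-up assumption purely to show the arithmetic:

```python
# If charging is only available on ~10% of the route, the charger must deliver
# roughly 10x the train's average power to keep the battery topped up.
avg_power_mw = 1.5        # assumed average traction power of the train
charge_fraction = 0.10    # fraction of the route/time with pantograph contact

charger_mw = avg_power_mw / charge_fraction
print(f"Charger rating: ~{charger_mw:.0f} MW")             # ~15 MW

# Battery sized to bridge the longest unelectrified stretch, assumed ~2 hours:
gap_hours = 2.0
battery_mwh = avg_power_mw * gap_hours
cost = battery_mwh * 1000 * 40                             # at the ~$40/kWh figure above
print(f"Battery: ~{battery_mwh:.0f} MWh (~${cost:,.0f})")  # ~3 MWh, ~$120,000
```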
Earth moving electrification is probably the last application that makes sense or needs focusing upon, due to high cap...
Insufficient onboard processing power? Tesla's HW3 computer is about 70 Tflops, ~0.1% of the estimated ~100 Pflops human brain - approximately equivalent to a mouse brain. Social and predator mammals that have to model and predict conspecific and prey behaviors have brains that generally start at about 2% of human for cats and up to 10% for wolves.
I posit that driving adequately requires modelling, interpreting and anticipating other road users' behaviors to deal with a substantial number of problematic and dangerous situations; like noticing erratic/n...
Global compliance is the sine qua non of regulatory approaches, and there is no evidence that the political will to make that happen is within our possible futures, unless some catastrophic but survivable casus belli happens to wake the population up - as with Frank Herbert's Butlerian Jihad (irrelevant aside: Samuel Butler, who wrote of the dangers of machine evolution and supremacy, lived in the 19th century at the film location for Edoras in the Lord of the Rings films).
Is it insane to think that a limited nuclear conflict (as seems to be an increasingly ...
Any attempts at regulation are clearly pointless without global policing. And China (as well as the lesser threats of Russia, Iran and perhaps North Korea) is not going to comply, no matter what they might say to your face if you try to impose it. These same issues were evident during attempts to police nuclear proliferation and arms reduction treaties during the Cold War, even when both sides saw benefit in them. For AI they'll continue development in hidden or even mobile facilities.
It would require a convincing threat of nuclear escalation ...
Nothing wrong with the universe; from an anthropic perspective it's pretty optimal. We just have most humans running around with much of their psychology evolved to maximize fitness in highly competitive, resource-limited hunter-gatherer environments, including a strong streak of motivating unhappiness with regard to things like social position, feelings of loneliness, adequacy of resources, unavailability of preferred sex partners, chattel ownership/control, relationships etc, and a desire to beat and subjugate the most dangerous competitors to get more for ou...
Interesting.
As a counterpoint from a 50-year-old who has struggled with meaning and direction and dissatisfaction with outcomes (top 0.1% ability without, as yet, personally satisfactory results): I have vague recollections of my head-strong teen years, when I felt my intellect was indomitable and I could master anything and determine my destiny through force of will. But I've slowly come to the conclusion that we have a lot less free will than we like to believe, and most of our trajectory and outcomes in life are set by neurological determinants - instinc...
I don't think it's mild. I'm not American, but follow US politics with interest.
A majority of blue-collar/conservative US now see the govt as implacably biased against their interests and communities - see e.g. recent polling on attitudes towards the DOJ and FBI: https://harvardharrispoll.com/wp-content/uploads/2023/05/HHP_May2023_KeyResults.pdf
There is a widespread perception that the rule of law has failed, at the top levels at least - with politically motivated prosecutions (the timing of the stacked Trump indictments is clearly motivated by his candidacy) and favored t...
Ruminations on this topic are fairly pointless because so many of the underpinning drivers are clearly subject to imminent enormous change. Within 1-2 generations, technology like life extension, fertility extension, artificial uteruses, superintelligent AI, AI teachers and nannies, and trans-humanism will render meaningless today's concerns that currently seem dominating and important (if humans survive that long). Existential risk and impacts of AI are really the only issues that matter. Though I am starting to think that the likely inev...
Purported video of a fully levitating sample from a replication effort - sorry, I do not have any information beyond this Twitter link. But if it is not somehow faked or misrepresented, it seems a pretty clear demonstration of the flux-pinned Meissner effect with no visible evidence of cooling. [Edit] Slightly more detail on the video: "Laboratory of Chemical Technology"
This video is widely believed to be a CGI fake.
Seems quite compelling - most previous claims of high-temp superconductivity have been based on seeing only dips in resistance curves, not the full array of superconducting behaviours recounted here - and the sample preparation instructions are very straightforward, so if it works we should see replication in a few days to weeks [that alone suggests it's not a deliberate scam].
The critical field strength stated is quite low - only about 25% of what is seen in a neodymium magnet - and it's unclear what the critical current density is, but if the field reported is as good as it...
Desalination costs are irrelevant to uranium extraction. Uranium is absorbed in special plastic fibers arrayed in ocean currents that are then post-processed to recover the uranium - it doesn't matter how many cubic km of water must pass the fiber mats to deposit the uranium, because that process is, like wind, free. The economics have been demonstrated in pilot-scale experiments at the ~$1000/kg level, easily cheap enough to make uranium an effectively inexhaustible resource at current civilisational energy consumption levels, even after we run out of easily mined resources. Lots of published research on this approach (as is to be expected when it is nearing cost-competitiveness with mining).
Seems likely; neurons only last a couple of decades, so memories older than that are reconstructions - things we recall frequently, or useful skills. If we live to be centuries old it is unlikely that we will retain many memories going back more than 50-100 years.
In the best envisaged 500 GW-day/tonne fast-breeder reactor cycles, 1 kg of uranium can yield about $500k of (cheap) $40/MWh electricity.
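The arithmetic behind that figure, treating the quoted burnup as delivered electricity (the comment's framing):

```python
# ~$500k/kg from a 500 GW-day/tonne burnup at $40/MWh.
burnup_gwd_per_tonne = 500
price_per_mwh = 40  # $/MWh

mwh_per_kg = burnup_gwd_per_tonne / 1000 * 24 * 1000  # GWd/tonne -> GWd/kg -> MWh/kg
value_per_kg = mwh_per_kg * price_per_mwh
print(f"{mwh_per_kg:,.0f} MWh/kg -> ~${value_per_kg:,.0f} per kg")  # ~12,000 MWh -> ~$480,000
```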
The cost of sea-water extraction of uranium (done using ion-selective absorbing fiber mats in ocean currents) is currently estimated (using demonstrated tech) to be less than $1000/kg - not yet competitive with conventional mining, but anticipated to drop closer to $100/kg, which would be. That is a trivial fraction of power production costs. It is even now viable with the hugely wasteful pressurised-water uranium cyc...
It appears that AI existential risk is starting to penetrate the consciousness of the general public in an 'it's not just hyperbole' way.
There will inevitably be a lot of attention-seeking influencers (not a bad thing in this case) who will pick up the ball and run with it now, and I predict the real-life Butlerian Jihad will rival the climate change movement in size and influence within 5 years, as it has all the attributes of a cause that presents commercial opportunity to the unholy trinity of media, politicians and academia that have demonstrated an ability to pr...
Humans generally crave acceptance by peer groups and are highly influenceable; this is more true of women than men (higher trait agreeableness), likely for evolutionary reasons.
As media and academia shifted strongly towards messaging and positively representing LGBT over the last 20-30 years, reinforced by social media with a degree of capture of algorithmic controls by people with strongly pro-LGBT views, they have likely pulled mean beliefs and expressed behaviours beyond what would perhaps be innately normal in a more neutral, non-proselytising environment ...
I think cold-war incentives with regard to tech development were atypical. Building thousands of ICBMs was incredibly costly; neither side derived any benefit from it; it was simply defensive matching to maintain MAD, and both sides were strongly motivated to enable mechanisms to reduce numbers and costs (START treaties).
This is clearly not the case with AI - which is far cheaper to develop, easier to hide, and has myriad lucrative use cases. Policing a Dune-style "thou shalt not make a machine in the likeness of a human mind" Butlerian Jihad (intere...
IQ is highly heritable. If I understand this presentation by Stephen Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ. E.g. Dad at +1 S.D. and Mom at +3 S.D. gives children averaging about 0.6*(1+3)/2 = +1.2 S.D. This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 S.D. by using +4 S.D. sperm donors. There is no other tech (yet) that ca...
Over what time window does your assessed risk apply? E.g. 100 years, 1000? Does the danger increase or decrease with time?
I have deep concern that most people have a mindset warped by human pro-social instincts/biases. Evolution has long rewarded humans for altruism, trust and cooperation; women in particular have evolutionary pressures to be open and welcoming to strangers to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. "Our Kind", a mass-market anthropological survey of human culture and psychology] ...
Given the near certainty that Russia, China and perhaps some other despotic regimes will ignore this, does it:
1. help at all?
2. actually make the world less safe (if one of these countries gains a significant military AI lead as a result)?
Why do you think China will ignore it? This is "it's going too fast, we need some time", and China also needs some time for all the same reasons. For example, China is censoring Google with the Great Firewall, so if Google is to be replaced by ChatGPT, they need time to prepare to censor ChatGPT. The Great Firewall wasn't built in a day. See "Father of China's Great Firewall raises concerns about ChatGPT-like services" from SCMP.
My suggestion is to optimize for where you can achieve the most bang for your buck and treat it as a sociological rather than an academic problem to solve, in terms of building up opposition to AI development. I am pretty sure that what is needed is not to talk to our social and intellectual peers, but rather to focus on it as a numbers game by influencing the young - who are less engaged in the more sophisticated/complex issues of the world, less sure of themselves, more willing to change their views, highly influenced by peer opinion and prone to anx...