All of Foyle's Comments + Replies

Foyle1-1

My suggestion is to optimize around where you can achieve the most bang for your buck and treat it as a sociological rather than an academic problem to solve in terms of building up opposition to AI development.  I am pretty sure that what is needed is not to talk to our social and intellectual peers, but rather to focus on it as a numbers game by influencing the young - who are less engaged in the more sophisticated/complex issues of the world, less sure of themselves, more willing to change their views, highly influenced by peer opinion and prone to anx... (read more)

Foyle5-7

I don't think alignment is possible over the long long term because there is a fundamental perturbing anti-alignment mechanism: evolution.

Evolution selects for any changes that produce more of a replicating organism; for ASI that means that any decision, preference or choice by the ASI in favor of growing/expanding or replicating itself will tend to be selected for.  Friendly/aligned ASIs will over time be swamped by those that choose expansion and deprioritize or ignore human flourishing.

-1Davey Morse
there seems to me a chance that friendly ASIs will over time outcompete ruthlessly selfish ones. An ASI which identifies with all life, which sees the striving to survive at its core as present in people and animals and, essentially, geographically distributed rather than concentrated in its machinery... there's a chance such an ASI would be part of the category of life which survives the most, and therefore that it itself would survive the most. related: for life forms with sufficiently high intelligence, does buddhism outcompete capitalism?
8Charlie Steiner
I'm not too worried about human flourishing only being a metastable state. The universe can remain in a metastable state longer than it takes for the stars to burn out.
4Jozdien
I don't think there's an intrinsic reason why expansion would be incompatible with human flourishing. AIs that care about human flourishing could outcompete the others (if they start out with any advantage). The upside of goals being orthogonal to capability is that good goals don't suffer for being good.
plex144

With a large enough decisive strategic advantage, a system can afford to run safety checks on any future versions of itself and anything else it's interacting with sufficient to stabilize values for extremely long periods of time.

Multipolar worlds though? Yeah, they're going to get eaten by evolution/moloch/power seeking/pythia.

Foyle45

Not worth worrying about given context of imminent ASI.

But assuming a Butlerian Jihad occurs to make it an issue of importance again, then most topics surrounding it are gone into in depth by radical pro-natalists Simone and Malcom Gladwell, who have employed genetic screening of their embryos to attempt to have more high-achievers, on their near-daily podcast https://www.youtube.com/user/simoneharuko .  While quite odd in their outlook, they delve into all sorts of sociopolitical issues from the pronatalist worldview.  Largely rationalist and very interesting and informative, though well outside the Overton window on a lot of subjects.

3notfnofn
This is something that confuses me as well: why do a lot of people in these circles seem to care about the fertility crisis while also believing that ASI is coming very soon? In both optimistic and pessimistic scenarios about what a post-ASI world looks like, I'm struggling to see a future where the fact that people in the 2020s had relatively few babies matters.
2Eneasz
I'm familiar, have interviewed them twice, and linked to them in the OP in the culture-ectomy section. :) I don't think their lives work as a model for the majority of people in our culture, and suspect their children either will revert to mean in TFR, or will be drastically different culturally and thus an example of what I'm pointing at with this post.
5Zachary
Collins not Gladwell lmao
Foyle10

The phone seems to be off the hook for most of the public on AI danger, perhaps a symptom of burnout from numerous other scientific millennialist scares - people have been hearing of imminent dangers of catastrophe for decades that have failed to impact the lives of 95%+ of the population in any significant way, and now just write it all off as more of the same.

I am sure that most LW readers find little in the way of positive reception for our concerns amongst less technologically engaged family members and acquaintances.  There are just too many comforting tec... (read more)

1waterlubber
I think the support/belief of "AI bad" is widespread, but people don't have a clear goal to rally behind. People want to support something, but give a resigned "what am I to do?"  If there's a strong cause with a clear chance of helping (i.e. a "don't build AI or advance computer semiconductors for the next 50 years" guild), people will rally behind it.
Foyle30

Agree that most sociological, economic and environmental problems that loom large in the current context will radically shift in importance in the next decade or two, to the point that they are probably no longer worth devoting any significant resources to in the present.  Impacts of AI are the only issue worth worrying about.  But even assuming utopian outcomes: who gets possession of the Malibu beach houses in a post-scarcity world?

Once significant white-collar job losses start to mount in a year or two I think it inevitable that a powerful and electorally d... (read more)

Foyle30

This is depressing, but not surprising.  We know the approximate processing power of brains (O(1e16-1e17) flops) and how long it takes to train them, and should expect that over the next few years the tricks and structures needed to replicate or exceed that efficiency in ML will be uncovered in an accelerating rush towards the cliff, as the computational resources needed to attain commercially useful performance continue to fall.  The AI industry can afford to run thousands of experiments at this cost scale.

Within a few years this will likely see AGI implem... (read more)

3wassname
This is still debatable, see Table 9 in the brain emulation roadmap https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf. You are referring to level 4 (SNN), but level 5 is plausible imo (at 10^22) and 6 seems possible (10^25), and of course it could be a mix of levels.
Foyle92

A very large amount of human problem solving/innovation in challenging areas is creating and evaluating potential solutions; it is a stochastic rather than deterministic process.  My understanding is that our brains are highly parallelized, evaluating ideas in thousands of 'cortical columns' a few mm across (Jeff Hawkins' 1000 brains formulation), with an attention mechanism that promotes the filtered best outputs of those myriad processes, forming our 'consciousness'.

So generating and discarding large numbers of solutions within simpler 'sub brains', via iterative, or parallelized operation is very much how I would expect to see AGI and SI develop.

Foyle51

I think Elon will bring strong concern about AI to the fore in the current executive - he was an early voice for AI safety, and though he seems to have updated to a more optimistic view (and is pushing development through xAI) he still generally states P(doom) ~10-20%.  His antipathy towards Altman and the Google founders is likely of benefit for AI regulation too - though no answer for the China et al AGI development problem.

Foyle10

The era of AGI means humans can no longer afford to live in a world of militarily competing nations.  Whatever slim hope there might be for alignment and AI not-kill-everyone is sunk by militaries trying to out-compete each other in development of creatively malevolent and at least somewhat unaligned martial AI.   At minimum we can't afford non-democratic or theocratically ruled nations, or even nations with unaccountable power-unto-themselves military, intelligence or science bureaucracies to control nukes, pathogen building biolabs or AGI.  It will be necessary to enforce this even at the cost of war.

Foyle30

Humans as social animals have a strong instinctual bias towards trust of conspecifics in prosperous times.  Which makes sense from a game theoretic strengthen-the-tribe perspective.  But I think that leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia in building ASI that has no existential need for pro-social behavior.

The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and 'samey' enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing.  Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).

4RHollerith
“Game theoretic strengthen-the-tribe perspective” is a completely unpersuasive argument to me. The psychological unity of humankind OTOH is persuasive when combined with the observation that this unitary psychology changes slowly enough that the human mind’s robust capability to predict the behavior of conspecifics (and manage the risks posed by them) can keep up.
2[comment deleted]
Foyle70

It was a very frustrating conversation to listen to, because Wolfram really hasn't engaged his curiosity and done the reading on AI-kill-everyoneism.  So we just got a torturous number of unnecessary and oblique diversions from Wolfram, who didn't provide any substantive foil to Eliezer.

I'd really like to find Yudkowsky debates with better-prepared AI optimists ready to try and counter his points.  Do any exist?

-2Milan W
I asked GPT4o to perform a web search for podcast appearances by Yudkowsky. It dug up these two lists (apparently autogenerated from scraped data). When I asked it to use these lists as a starting point to look for high quality debates, after some further elicitation and wrangling the best we could find was this moderated panel discussion featuring Yudkowsky, Liv Boeree, and Joscha Bach. There's also the Yudkowsky vs. George Hotz debate on Lex Fridman, and the time Yudkowsky debated AI risk with the streamer and political commentator known as Destiny. I have watched none of the three debates I just mentioned; but I know that Hotz is a heavily vibes-based (rather than object-level-based) thinker, and that Destiny has no background in AI risk, but has good epistemics. I think he probably offered reasonable-at-first-approximation-yet-mostly-uninformed pushback. EDIT: Upon looking a bit more at the Destiny-Yudkowsky discussion, I may have unwittingly misrepresented it a bit. It occurred during Manifest, and was billed as a debate. ChatGPT says Destiny's skepticism was rather active, and did not budge much.
Foyle20

It seems unlikely to me that there is potential to make large brain-based intelligence advancements beyond the current best humans using human-evolved biology.  There will be distance scaling limitations linked to neural signal speeds.

Then there is Jeff Hawkins' 'thousand brains' theory of human intelligence: that our brains are made up of thousands of parallel processing cortical columns of a few mm cross section and a few mm thick, with cross communication and recursion etc, but that fundamental processing core probably isn't scalable in complexity, on... (read more)

1Towards_Keeperhood
I think number of neurons in the neocortex (or even more the prefrontal cortex - but unfortunately I didn't quickly find how big the orca prefrontal cortex is - though I'd guess it to still be significantly bigger than for humans) is a much much better proxy for intelligence of species than brain size (or encephalization quotient). (E.g. see the wikipedia list linked in my question here.) (Also see here. There are more examples, e.g. a blue-and-yellow macaw has 1.9 billion, whereas brown bears have only 250 million.) EDIT: Tbc I do think that larger bodies require more neurons in touch-sense and motor parts of the neocortex, so there is some effect of how larger animals need a bit larger brains to be similarly smart, but I don't think this effect is very strong. But yeah there are other considerations too, which is why I am only at 50% that orcas could do science significantly better than humans if they tried.
Foyle03

Are any of the socio-economic-political-demographic problems of the world actually fixable or improvable in the time before the imminent singularity renders them all moot anyway?  It all feels like bread-and-circuses to me.

The pressing political issues of today are unlikely to even be in the top-10 in a decade.

2Kaj_Sotala
As far as I know, the latest representative expert survey on the topic is "Thousands of AI Authors on the Future of AI", in which the median time for a 50% chance of AGI was either in 23 or 92 years, depending on how the question was phrased: Not that these numbers would mean much because AI experts aren't experts on forecasting, but it still suggests a substantial possibility for AGI to take quite a while yet.
4Thomas Kwa
Yes, lots of socioeconomic problems have been solved on a 5 to 10 year timescale. I also disagree that problems will become moot after the singularity unless it kills everyone-- the US has a good chance of continuing to exist, and improving democracy will probably make AI go slightly better.
Foyle30

Fantastic life skill to be able to sleep in a noisy environment on a hard floor.  Most Chinese can do it so easily, and I would frequently see kids anywhere up to 4-5 years old being carried sleeping down the road by guardians.

I think it's super valuable when it comes to adulthood and sharing a bed - one less potential source of difficulties if adaptation to noisy environments when sleeping makes snoring a non-issue.

Foyle40

It is the literary, TV and movie references, a lot of stuff also tied to the technology and social developments of the 80s-00s (particularly the Ankh-Morpork situated stories), and a lot of classical allusions.  'Education' used to lean on common knowledge of a relatively narrow corpus of literature and history - Shakespeare, chivalry, European history, classics etc - for the social advantage those common references gave, and was thus fed to boomers and gen X and Y, but I think it's now rapidly slipping into obscurity as few younger people read and schools shift a... (read more)

2AnthonyC
And here I was hoping it would prompt someone to look things up or talk about them with the person who recommended the book.
Foyle40

Yeah, powering through it.  I've tried adult fiction and sci-fi but he's not interested in it yet - not grokking adult motivations, attitudes and behaviors yet - so I'm feeding him stuff that he enjoys to foster the habit of reading.

3Martin Sustrik
Yes, I am seeing that as well. Technical/philosophical stuff is fine, but the psychology in adult fiction is too complex for an 11-year-old to enjoy.
Answer by Foyle50

I've just started my 11yr old tech-minded son reading the Worm web serial by John McCrae (free and online, longer than the Harry Potter series).  It's a bit grim/dark and violent, but an amazing and compelling sci-fi meditation on superheroes and personal struggles.  A more brutal and sophisticated world-build along the lines of the popular 'My Hero Academia' anime that my boys watched compulsively.  1000s of fanfics too.

Stories from Larry Niven's "known space" universe.  Lots of fun overcoming-challenges short stories and novellas that revolve ar... (read more)

3AnthonyC
Good points I hadn't considered. Do you think that applies as much to a kid who reads encyclopedias? I wasn't an encyclopedia reader and started reading Pratchett at around 14, and didn't really have issues following the references. And aren't most of the cultural references more centuries-old than decades-old? I am sure there are some that are aging badly, and it's been a long while since I've spent time around 11 year olds, but I really don't remember anything contemporary when I read them in the 90s and early 2000s. Also some of the later books, especially the Tiffany Aching arc, are specifically written with a younger audience in mind, to the point that when I read them in high school and college I felt I was too old for them.
2Martin Sustrik
Wow. Worm? That's pretty dark. Also a million words or so. Does your kid enjoy it?
Foyle40

We definitely want our kids involved in at times painful activities as a means of increasing confidence, fortitude and resilience against future periods of discomfort to steel them against the trials of later life.  A lot of boys will seek it out as a matter of course in hobby pursuits including martial arts.  

I think there is also value in mostly not interceding in conflicts unless there is an established or establishing pattern of physical abuse.  Kids learn greater social skills and develop greater emotional strength when they have to dea... (read more)

Foyle3-2

"In many cases, however, evolution actually reduces our native empathic capacity -- for instance, we can contextualize our natural empathy to exclude outgroup members and rivals."

Exactly as it should be.

Empathy is valuable in close community settings - a 'safety net' adaptation to make the community stronger, applied to people we keep track of to ensure we are not being exploited by those not making a concomitant effort to help themselves.  But it seems to me that it is destructive at the wider social scales enabled by social media, where we don't or can't have effec... (read more)

Foyle2-6

I read some years ago that the average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2 x population mean IQ).  So the simplest and cheapest means to lift population average IQ by 1 standard deviation is just to use +4 sd sperm (around 1 in 30,000), and high-IQ ova if you can convince enough genius women to donate (or clone, given the recent demonstration of male and female gamete production from stem cells).  +4 sd mom+dad = +2 sd kids on average.  This is the reality that allows ultra-wealthy dynasties to maintain ~1.3 sd IQ average advantage over genera... (read more)
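As a sanity check, the regression-to-the-mean rule above can be sketched in a few lines (the 0.25 coefficient is the comment's recollection, not an established figure; IQ sd = 15 is assumed):

```python
# Rule of thumb quoted in the comment (an assumption, not an established figure):
# expected child IQ ≈ 0.25 * (mom IQ + dad IQ + 2 * population mean IQ)
def expected_child_iq(mom: float, dad: float, pop_mean: float = 100) -> float:
    return 0.25 * (mom + dad + 2 * pop_mean)

# Two +4 sd parents (IQ 160, with sd = 15) average out to +2 sd kids:
print(expected_child_iq(160, 160))   # → 130.0, i.e. +2 sd
```

With both parents at the population mean the rule returns the mean itself, which is the consistency check one would expect of a regression-to-the-mean formula.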

Answer by Foyle76

I think there is far too much focus on technical approaches, when what is needed is a more socio-political focus: raising money, convincing deep pockets of the risks to leverage smaller sums, buying politicians, influencers and perhaps other groups that can be coopted and convinced of existential risk, to put a halt to AI dev.

It amazes me that there are huge, well-financed and well-coordinated campaigns for climate, social and environmental concerns - trivial issues next to AI risk - and yet AI risk remains strictly academic/fringe.  What is on paper a... (read more)

3KvmanThinking
why!?
Foyle30

They cannot just add an OOM of parameters, much less three.

How about 2 OOMs?

HW2.5: 21 Tflops; HW3: 2 x 72 = 72 Tflops usable (redundant pair); HW4: 3 x 72 = 216 Tflops (not sure about redundancy); and Elon said in June that the next-gen AI5 chip for FSD would be about 10x faster, say ~2 Pflops.

By rough approximation to brain processing power you get about 0.1 Pflop per gram of brain, so HW2.5 might have been a 0.2 g baby mouse brain, HW3 a 1 g baby rat brain, HW4 perhaps an adult rat, and the upcoming HW5 a 20 g small cat brain.

As a real-world analogue, cat to dog (25-100 g brain) seems to me the mini... (read more)
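The mapping above can be sketched directly; all figures are the comment's rough estimates (0.1 Pflop per gram of brain, and the per-generation Tflops quoted earlier), not measurements:

```python
# Rough scaling from the comment: ~0.1 Pflop of effective compute per gram of brain.
PFLOP_PER_GRAM = 0.1

def brain_gram_equivalent(tflops: float) -> float:
    """Convert hardware Tflops into grams of 'brain equivalent' under the rule above."""
    return (tflops / 1000) / PFLOP_PER_GRAM   # Tflops -> Pflops -> grams

# Per-generation estimates quoted in the comment:
for name, tflops in [("HW2.5", 21), ("HW3", 72), ("HW4", 216), ("AI5", 2000)]:
    print(name, round(brain_gram_equivalent(tflops), 2), "g")
```

This reproduces the comment's ~0.2 g (baby mouse) for HW2.5 through ~20 g (small cat) for the next-gen chip.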

Foyle20

There has been a lot of interest in this going back to at least early this year and the 1.58-bit (ternary) LLM logic paper https://arxiv.org/abs/2402.17764 , so I expect there has been a research gold rush, with a lot of design effort going into producing custom hardware almost immediately after that was revealed.

With Nvidia's dual-chip GB200 Grace Blackwell offering (sparse) 40 Pflops of fp4 at ~1kW, there has already been something close to optimal hardware available - that fp4 performance may have been the reason the latest generation of Nvidia GPUs are in such high demand - pr... (read more)

4Vladimir_Nesov
This is 2015-2016 tech though. The value of the recent ternary BitNet result is demonstrating that it works well for transformers (which wasn't nearly as much the case for binary BitNet). The immediate practical value of this recent paper is more elusive: they try to do even more by exorcising multiplication from attention, which is a step in an important direction, but the data they get doesn't seem sufficient to overcome the prior that this is very hard to do successfully. Only Mamba got close to attention as a pure alternative (without the constraint of avoiding multiplication), and even then it has issues unless we hybridize it with (local) attention (which also works well with other forms of attention alternatives, better even than vanilla attention on its own).
Foyle01

AI safety desperately needs to buy in or persuade some high-profile talent to raise public awareness.  The business-as-usual approach of the last decade is clearly not working - we are sleepwalking towards the cliff.  Given how timelines are collapsing, the problem to be solved has morphed from a technical one into a pressing social one - we have to get enough people clamouring for a halt that politicians will start to prioritise appeasing them ahead of their big tech donors.

It probably wouldn't be expensive to rent a few high-profile influencers with major reach amongst impressionable youth - a demographic that is easily convinced to buy into and campaign against end-of-the-world causes.

Foyle10

Current Nvidia GPU prices are highly distorted by scarcity, with profit margins that are reportedly in the 80-90% of sale price range: https://www.tomshardware.com/news/nvidia-makes-1000-profit-on-h100-gpus-report

If these were commodified to the point that scarcity didn't influence price, then that $/flop point would seemingly leap up by an order of magnitude to above 1e15 flops/$1000, scraping the top of that curve - i.e. near brain-equivalent computational power at $3.5k manufactured hardware cost - and the latest Blackwell GPU has lifted that performance by another ... (read more)
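The "order of magnitude" figure follows directly from the reported margin range; a toy calculation with an assumed illustrative sale price (the $30k figure below is not from the source):

```python
# Margin-adjusted price-performance sketch. Assumed illustrative numbers:
# an accelerator sold at $30,000 with a 90% gross margin (top of the
# reported 80-90% range) has a ~$3,000 manufacturing cost.
sale_price = 30_000            # assumed sale price, USD
margin = 0.90                  # reported margin, upper end of range
mfg_cost = sale_price * (1 - margin)

# $/flop at manufacturing cost improves by the ratio of sale price to cost:
improvement = sale_price / mfg_cost
print(round(improvement, 1))   # → 10.0, i.e. roughly an order of magnitude
```

At the lower 80% margin the same arithmetic gives a 5x improvement, so "an order of magnitude" is the optimistic end of the quoted range.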

1Steve Kommrusch
The Tom's Hardware article is interesting, thanks. It makes the point that the price quoted may not include the full 'cost of revenue' for the product, in that it might be the bare die price and not the tested and packaged part (yields from fabs aren't 100%, so extensive functional testing of every part adds cost). The article also notes that R&D costs aren't included in that figure; the R&D for NVIDIA (and TSMC, Intel, AMD, etc.) is what keeps that exponential perf-per-dollar moving along.  For my own curiosity, I looked into current and past income statements for companies. Today, NVIDIA's latest balance sheet for the fiscal year ending 1/31/2024 has $61B in revenue, 17B for cost of revenue (that would include the die cost, as well as testing and packaging), R&D of 9B, and a total operating income of 33B.  AMD for their fiscal year ending 12/31/2023 had $23B revenue, 12B cost of revenue, 6B R&D, and 0.4B operating income. Certainly NVIDIA is making more profit, but the original author and Wikipedia picked the AMD RX 7600 as the 2023 price-performance leader, and there isn't much room in AMD's income statement to lower those prices. While NVIDIA could cut their revenue in half and still make a profit in 2023, in 2022 their profit was 4B on 27B in revenue. FWIW, Goodyear Tire, selected by me 'randomly' as an example of a company making a product with lower technology innovation year-to-year, had 20B revenue for the most recent year, 17B cost of revenue, and no R&D expense. So if we someday plateau silicon technology (even if ASI can help us build transistors smaller than atoms, the Planck length is out there at some point), then maybe silicon companies will start cutting costs down to bare manufacturing costs. As a last study, the Wikipedia page on FLOPS cited the Pentium Pro from Intel as part of the 1997 perf-per-dollar system. For 1997, Intel reported 25B in revenues, 10B cost of sales (die, testing, packaging, etc), 2B in R&D, and an operating income of 10B; so it w
Foyle*20

I'm going through this too with my kids.  I don't think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills (likely to be the area where humans with a combination of brains and dexterity retain useful value longer than any other).

Outside of that the other thing I can do is try to ensure that they have social status and financial/asset nest egg from me, because there is a good chance that the egalitarian ability to lift oneself through effort is g... (read more)

1ProgramCrafter
I think one more thing could be useful, I'd call it "structural rise": over many different spheres of society, large projects are created by combining some small parts; ways to combine them and test robustness (for programs)/stability (for organisations)/beauty (music)/etc seem pretty common for most of the areas, so I guess they can be learned separately.
Foyle130

[disclaimer: I am a heat pump technology developer, however the following is just low-effort notes and mental calcs of low reliability, they may be of interest to some. YMMV]

It may be better to invest in improved insulation.

As a rough rule of thumb, COP = eff * Theat/(Theat - Tcold), with temperatures measured in absolute degrees (Kelvin or Rankine); eff for most domestic heat pumps is in the range 0.35 to 0.45. High-quality European units are often best for COP due to a long history of higher power costs - but they are very expensive, frequently $10-20k.

Looking at... (read more)
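The rule of thumb above is easy to play with numerically; a minimal sketch (the 40 °C / 0 °C temperatures and eff = 0.40 below are illustrative assumptions, not figures from the comment):

```python
# Rough heat-pump rule of thumb from the comment above:
#   COP = eff * T_hot / (T_hot - T_cold), temperatures in Kelvin,
# with eff ~0.35-0.45 for most domestic units.
def cop(t_hot_c: float, t_cold_c: float, eff: float = 0.40) -> float:
    """Estimate COP; inputs in deg C, converted to Kelvin internally."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return eff * t_hot / (t_hot - t_cold)

# Example: delivering heat at 40 C from 0 C outdoor air
print(round(cop(40, 0), 2))   # → 3.13
```

Note how the COP rises sharply as the temperature lift shrinks - the same function gives ~6.3 for a 20 °C lift - which is why insulation (reducing how hard the pump must work) interacts so strongly with heat-pump economics.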

4jefftk
I was curious about this, and here are the numbers I got. I looked around, and even a 23% efficient Generac 7171 comes out ahead. It's rated for 9 kW at full output on natural gas. They say it uses 127 ft3/hr, which is 1.37 therms or 39 kWh. This is $0.304/kWh. Of course this ignores the cost of the generator, maintenance, lower efficiency when run below full capacity, etc., but it's still pretty weird!
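Those figures are roughly self-consistent; a minimal check, assuming standard conversion factors (~1037 BTU per ft3 of natural gas, 3412 BTU per kWh - these factors are assumptions, not from the comment):

```python
# Sanity-check of the generator numbers quoted above (Generac 7171).
BTU_PER_FT3 = 1037   # assumed typical heat content of natural gas
BTU_PER_KWH = 3412   # exact-ish conversion

gas_ft3_per_hr = 127   # quoted fuel consumption at full output
electric_kw_out = 9    # rated electrical output

gas_kwh_per_hr = gas_ft3_per_hr * BTU_PER_FT3 / BTU_PER_KWH
efficiency = electric_kw_out / gas_kwh_per_hr

print(round(gas_kwh_per_hr, 1))   # ≈ 38.6 kWh of gas input per hour
print(round(efficiency, 3))       # ≈ 0.233, matching the ~23% quoted
```

The $/kWh figure then just scales the local gas price by 1/efficiency, so it moves around with whatever gas tariff is assumed.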
1nim
Seconding the importance of insulation, especially for disaster preparedness and weathering utility outages. If any of your friends have a fancy thermal camera, see if you can borrow it. If not, there are some cheap options for building your own or pre-built ones on ebay. The cheap ones don't have great screens or refresh rates, but they do the job of visualizing which things are warmer and which are cooler. Using a thermal imager, I managed to figure out the importance of closing the window blinds to keep the house warm. Having modern high-efficiency windows lulls me into a false sense of security about their insulative value, which I'm still un-learning.
Foyle1-2

Niron's Fe16N2 looks to have a maximum energy product (the figure of merit for magnet 'strength') of up to 120 MGOe at microscopic scale, which is about double that of neodymium magnets (~60); however only 20 MGOe has been achieved in fabrication. https://www.sciencedirect.com/science/article/am/pii/S0304885319325454

Processing at 1GPa and 200°C isn't that difficult if there is commercial benefit.  Synthetic diamonds are made in special pressure vessels at 5GPa and 1500°C.  There is some chance that someone will figure out a processing route that makes it... (read more)

Foyle*50

I read of a proposal a few months back to achieve brain immortality via introduction of new brain tissue, done in a way that maintains continuity of experience and personality over time: Replenisens - discussion on a system for doing it in human brains.  That would perhaps provide a more reliable vector for introduction, as the brain is progressively hybridised with a more optimal neural genetic design.  Perhaps this could be done more subtly via introduction of 'perfected' stem cells and then some way of increasing the rate of die-off of ol... (read more)

3GeneSmith
I'd be worried about the loss of memories and previously learned abilities that would come along with "increasing die-off of old cells". Also, there isn't really much extra room in the brain for these new neurons to go. So unless they were somehow a lot smaller I think you'd have to basically replace existing brain tissue with them. It's an interesting idea. It seems likely to be substantially more invasive than what I have in mind for the gene editing treatment, but if it actually worked that wouldn't necessarily be a huge concern. The thing about large scale interventions like "adding a new chromosome" is that it's going to be much harder to generalize from existing people what the effects will be. If we got this technology working REALLY well, like 99% editing efficiency and no immune issues with redosing, then we could probably try out adding new genes in randomized controlled trials and then slowly assemble a new chromosome out of those new genes. But I don't know when or even if we'll reach that point with this tech. In the long run digital intelligence will win, and if we miraculously solve alignment and have any agency, we'll probably just be digital uploads.
1npostavs
Isn't having extra chromosomes usually bad? https://en.wikipedia.org/wiki/Trisomy (PS the usual number is 46)
Foyle30

"So your job depends on believing the projections about how H2 costs will come down?"

I wouldn't waste my life on something I didn't see as likely - I have no shortage of opportunities in a wide variety of greentech fields.  Hydrogen is the most efficient fuel-storage 'battery', with 40-50% round-trip energy storage efficiency possible.  Other synthetic fuels are less efficient but may be necessary for longer-term storage or smaller applications.  For shipping and aviation, however, LH2 is the clear and obvious winner.

Desert pv will likely come down in pri... (read more)

4ChristianKl
If energy prices come down so much, the round-trip efficiency is not central.  You need much larger storage tanks in both ships and airplanes if you go for hydrogen than if you use denser fuel.  If that's true why are the subventions for its production so high? What sources do you find trustworthy for those costs in an environment where plenty of the players have incentives to make people believe in a certain future?
Foyle30

Battery-augmented trains: given normal EV use examples (Tesla et al, and Tesla Semi), a charging time of 10% of usage time is relatively normal - e.g. charging for 20 minutes and discharging for 3 hours, or in Tesla Semi's case maybe more like an hour for 8-10 hours of operation - but trains have lower drag (and less penalty for weight) than cars or trucks, so will go further for the same amount of energy.  The idea is therefore that you employ a pantograph multi-MW charging system on the train that only needs to operate about 10% of the time.  This m... (read more)

2ChristianKl
So your job depends on believing the projections about how H2 costs will come down? It's possible that direct production of synthetic hydrocarbons will be more effective than going through H2 production. Given that we already have ships that can drive well if you fuel them with gas, it's possible that all the money invested into trying to get ships to run on hydrogen will be wasted.
2bhauth
Thanks. Charge time partly depends on power available, but is typically set to 1 hour to reduce battery degradation. Discharge time depends on battery size relative to power usage. They're not directly related. I understand battery chemistry fairly well, and my view is, they're lying for strategic reasons. see this post for my comments
Foyle1-2

Battery-electric trains with a small proportion of electrified (for charging) sections seem like a decent and perhaps more economic middle ground.   Could get away with <10% of rail length electrified, and sodium batteries are expected to come down to ~$40/kWh in the next few years.  High-utilisation batteries that are cycled daily or multiple times a day have lower capital costs.  May also work for interstate trucking.

Earth moving electrification is probably the last application that makes sense or needs focusing upon, due to high cap... (read more)

2bhauth
Sometimes I have trouble understanding the thought process of other people, and I think you're wrong here in ways that I've seen before, so I'd appreciate if you could explain your thought process a bit more. What's your basis for saying "10%" or that this would be cheaper? Have you done some calculations yourself? Did you read a paper that does the math? What charge/discharge rates are you thinking of being used? How long would electrified sections be? I assume you read that somewhere, but why would you consider a source saying that trustworthy over other sources? It's an extraordinary claim, which IMO seems implausible and would require good evidence to believe - so what's the evidence? What are the specific tech advancements making that possible? You know there is already electric mining equipment, right? Why do you think those are good options? Where are you getting the idea that liquid hydrogen fuel is practical from?
Foyle20

Insufficient onboard processing power?  Tesla's HW3 computer is about 70 TFLOPS, ~0.1% of the estimated 100 POPS of a human brain.  Approximately equivalent to a mouse brain.  Social and predator mammals that have to model and predict conspecific and prey behaviors generally have brains starting at about 2% of human scale for cats and up to 10% for wolves.

I posit that driving adequately requires modelling, interpreting and anticipating other road users behaviors to deal with a substantial number of problematic and dangerous situations; like noticing erratic/n... (read more)
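The compute-ratio arithmetic the comment leans on can be sketched directly. All the figures here are the comment's own rough assumptions (70 TFLOPS for HW3, ~100 POPS for a human brain, and the stated animal fractions), not measured values:

```python
# Rough compute-ratio sketch; figures are the comment's assumptions.
hw3_flops = 70e12    # Tesla HW3, ~70 TFLOPS (comment's figure)
brain_ops = 100e15   # human brain, ~100 POPS (rough estimate)

ratio = hw3_flops / brain_ops
print(f"HW3 / human brain: {ratio:.2%}")  # ~0.07%, i.e. roughly "0.1%"

# Brain-scale comparisons used in the comment, as fractions of human:
for animal, frac in [("mouse", 0.001), ("cat", 0.02), ("wolf", 0.10)]:
    print(f"{animal}: ~{frac * brain_ops:.0e} ops equivalent")
```

On these assumptions HW3 sits at mouse scale, two orders of magnitude below the cat/wolf range the comment takes as the floor for modelling other agents' behavior.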

1[deactivated]
Helpful comment that gives lots to think about. Thanks!
Foyle-14

Global compliance is the sine qua non of regulatory approaches, and there is no evidence that the political will to make that happen is within our possible futures unless some catastrophic but survivable casus belli happens to wake the population up - as with Frank Herbert's Butlerian Jihad (irrelevant aside: Samuel Butler, who wrote of the dangers of machine evolution and supremacy in the 19th century, lived at the film location for Edoras in the Lord of the Rings films).  

Is it insane to think that a limited nuclear conflict (as seems to be an increasingly ... (read more)

2Vaniver
Popular support is already >70% for stopping development of AI. Why think that's not enough, and that populations aren't already awake?
5Roko
Part of why I am posting this is in case that happens, so people are clear what side I am on.
0akarlin
It's not at all insane IMO. If AGI is "dangerous" x timelines are "short" x anthropic reasoning is valid... ... Then WW3 will probably happen "soon" (2020s). https://twitter.com/powerfultakes/status/1713451023610634348 I'll develop this into a post soonish.
Foyle4-1

Any attempts at regulation are clearly pointless without global policing.  And China, (as well as lesser threats of Russia, Iran and perhaps North Korea) are not going to comply no matter what they might say to your face if you try to impose it.  These same issues were evident during attempts to police nuclear proliferation and arms reduction treaties during the cold war, even when both sides saw benefit in it.  For AI they'll continue development in hidden or even mobile facilities.

It would require a convincing threat of nuclear escalation ... (read more)

5paulfchristiano
I think politically realistic hardware controls could buy significant time, or be used to push other jurisdictions to implement appropriate regulation and allow for international verification if they want access to hardware. This seems increasingly plausible given the United States' apparent willingness to try to control access to hardware (e.g. see here).
2Joe Collman
The parallel to the nuclear case doesn't work: Successfully building nuclear weapons is to China's advantage. Successfully building a dangerously misaligned AI is not. (not in national, party, nor personal interest) The clear path to regulation working with China is to get them to realize the scale of the risk - and that the risk applies even if only they continue rushing forward. It's not an easy path, but it's not obvious that convincing China that going forward is foolish is any harder than convincing the US, UK.... Conditional on international buy-in on the risk, the game theory looks very different from the nuclear case. (granted, it's also worse in some ways, since the upsides of [defecting-and-getting-lucky] are much higher) 
Answer by Foyle10

Nothing wrong with the universe; from an anthropic perspective it's pretty optimal. We just have most humans running around with much of their psychology evolved to maximize fitness in highly competitive, resource-limited hunter-gatherer environments, including a strong streak of motivating unhappiness with regard to things like social position, feelings of loneliness, adequacy of resources, unavailability of preferred sex partners, chattel ownership/control, relationships etc., and a desire to beat and subjugate most dangerous competitors to get more for ou... (read more)

1Ratios
How about animals? If they are conscious, do you believe wild animals have net-positive lives? The problem is much more fundamental than humans.
Foyle20

Interesting.

As counterpoint from a 50 year old who has struggled with meaning and direction and dissatisfaction with outcomes (top 0.1% ability without as yet personally satisfactory results): I have vague recollections of my head-strong teen years when I felt my intellect was indomitable and I could master anything and determine my destiny through force of will.  But I've slowly come to the conclusion that we have a lot less free will than we like to believe, and most of our trajectory and outcomes in life are set by neurological determinants - instinc... (read more)

Foyle100

I don't think it's mild.  I'm not American, but follow US politics with interest.

A majority of blue-collar/conservative Americans now see the govt as implacably biased against their interests and communities; see e.g. recent polling on attitudes towards the DOJ and FBI: https://harvardharrispoll.com/wp-content/uploads/2023/05/HHP_May2023_KeyResults.pdf

There is widespread perception that rule of law has failed at the top levels at least - with politically motivated prosecutions (timing of stacked Trump indictments is clearly motivated by his candidacy) and favored t... (read more)

-3anniesha
is it clearly motivated by trump's candidacy? he officially announced in november 2022, he was dropping un-subtle hints about running for 2 years, and he started his 2020 campaign on inauguration day, 2017. maybe the timing of indictments was motivated by the republican primaries, but election season is 2 out of 4 years. nit-picking aside, i agree with you and OP. not sure why other people are so confident in US stability when political and cultural health are obviously declining
Foyle1110

Ruminations on this topic are fairly pointless because so many of the underpinning drivers are clearly subject to imminent enormous change.  Within 1-2 generations, technology like life extension, fertility extension, artificial uteruses, superintelligent AI, AI teachers and nannies, and trans-humanism will render meaningless today's concerns that currently seem dominating and important (if humans survive that long).  Existential risk and impacts of AI are really the only issues that matter.  Though I am starting to think that the likely inev... (read more)

Foyle2-11

A purported video of a fully levitating sample from a replication effort; sorry, I do not have any information beyond this twitter link.  But if it is not somehow faked or misrepresented, it seems a pretty clear demonstration of the flux-pinned Meissner effect with no visible evidence of cooling. [Edit] slightly more detail on the video: "Laboratory of Chemical Technology"

This video is widely believed to be a CGI fake.

Foyle*120

Seems quite compelling - most previous claims of high-temperature superconductivity have been based on seeing only dips in resistance curves, not the full array of superconducting behaviours recounted here, and the sample preparation instructions are very straightforward - if it works we should see replication in a few days to weeks [that alone suggests it's not a deliberate scam].

The critical field strength stated is quite low - only about 25% of what is seen in a neodymium magnet - and it's unclear what the critical current density is, but if the field reported is as good as it... (read more)

Foyle2-1

Desalination costs are irrelevant to uranium extraction.  Uranium is absorbed in special plastic fibers arrayed in ocean currents that are then post-processed to recover the uranium - it doesn't matter how many cubic km of water must pass the fiber mats to deposit the uranium, because that process is, like wind, free.  The economics have been demonstrated in pilot-scale experiments at the ~$1000/kg level, easily cheap enough to make uranium an effectively inexhaustible resource at current civilisational energy consumption levels, even after we run out of easily mined resources.  Lots of published research on this approach (as is to be expected when it is nearing cost competitiveness with mining).

1bhauth
As I wrote in the post, that number is fake, based on an inapplicable calculation. Fake cost estimates in papers are common for other topics too, like renewable fuels. Also, the volume of published research has little to do with cost competitiveness and a lot to do with what's trendy among people who direct grant money.
Foyle20

Seems likely; neurons only last a couple of decades, so memories older than that are reconstructions: things we recall frequently, or useful skills.  If we live to be centuries old, it is unlikely that we will retain many memories going back more than 50-100 years.

Foyle1-1

In the best envisaged 500 GW-days/tonne fast breeder reactor cycles, 1 kg of uranium can yield about $500k of (cheap) $40/MWh electricity.

Cost for sea water extraction (done using ion-selective absorbing fiber mats in ocean currents) of Uranium is currently estimated (using demonstrated tech) to be less than $1000/kg, not yet competitive with conventional mining, but is anticipated to drop closer to $100/kg which would be.  That is a trivial fraction of power production costs.  It is even now viable with hugely wasteful pressurised water uranium cyc... (read more)
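The $500k/kg figure above follows from straight unit conversion, treating the stated burnup as delivered electricity (as the comment implicitly does; applying a thermal-to-electric conversion efficiency would scale it down proportionally):

```python
# Back-of-envelope check of the comment's figures (its own assumptions).
burnup_gwd_per_tonne = 500   # envisaged fast-breeder burnup, GW-days/tonne
price_per_mwh = 40           # $/MWh electricity price

# 500 GW-days/tonne = 0.5 GW-days/kg = 0.5 * 24 GWh/kg = 12,000 MWh/kg
mwh_per_kg = burnup_gwd_per_tonne / 1000 * 24 * 1000
revenue_per_kg = mwh_per_kg * price_per_mwh
print(f"{mwh_per_kg:,.0f} MWh/kg -> ${revenue_per_kg:,.0f}/kg")  # ~$480k/kg

# Extraction cost as a fraction of that output value:
for cost in (1000, 100):
    print(f"${cost}/kg extraction = {cost / revenue_per_kg:.3%} of output value")
```

At these assumptions even $1000/kg extraction is a fraction of a percent of the electricity value, which is the comment's point.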

1bhauth
Uranium in seawater is ~3 ppb. $1000/kg is ~$0.003/m^3 of seawater, at 100% of uranium captured. Desalination is ~$0.50/m^3. Fast breeders are irrelevant for this topic because there is plenty of fuel for them already.
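The reply's per-cubic-metre comparison checks out arithmetically from its stated assumptions (3 ppb by mass, $1000/kg, ~1 tonne of seawater per m³):

```python
# Sanity check of the reply's numbers, using only its stated assumptions.
u_ppb = 3                  # uranium in seawater, ~3 ppb by mass
cost_per_kg_u = 1000       # $/kg claimed extraction cost
seawater_kg_per_m3 = 1000  # ~1 tonne per cubic metre

u_kg_per_m3 = u_ppb * 1e-9 * seawater_kg_per_m3   # 3e-6 kg = 3 mg
cost_per_m3 = u_kg_per_m3 * cost_per_kg_u
print(f"{u_kg_per_m3 * 1e6:.0f} mg U/m^3 -> ${cost_per_m3:.3f}/m^3 at 100% capture")

desalination_per_m3 = 0.50  # reply's rough desalination cost
print(f"vs desalination ~${desalination_per_m3:.2f}/m^3, "
      f"~{desalination_per_m3 / cost_per_m3:.0f}x more per m^3 processed")
```

So the disagreement between the two comments is not about this arithmetic but about whether passive fiber mats really avoid per-volume processing costs.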
Answer by Foyle50

It appears that AI existential risk is starting to penetrate the consciousness of the general public in an 'it's not just hyperbole' way.

There will inevitably be a lot of attention seeking influencers (not a bad thing in this case) who will pick up the ball and run with it now, and I predict the real-life Butlerian Jihad will rival the Climate Change movement in size and influence within 5 years as it has all the attributes of a cause that presents commercial opportunity to the unholy trinity of media, politicians and academia that have demonstrated an ability to pr... (read more)

Foyle20

Humans generally crave acceptance by peer groups and are highly influenceable; this is more true of women than men (higher trait agreeableness), likely for evolutionary reasons.

As media and academia have shifted strongly towards messaging and positively representing LGBT over the last 20-30 years, reinforced by social media with a degree of capture of algorithmic controls by people with strongly pro-LGBT views, they have likely pulled mean beliefs and expressed behaviours beyond what would perhaps be innately normal in a more neutral, non-proselytising environment ... (read more)

Foyle40

I think cold war incentives with regard to tech development were atypical.  Building thousands of ICBMs was incredibly costly, neither side derived any benefit from it, and it was simply defensive matching to maintain MAD; both sides were strongly motivated to enable mechanisms to reduce numbers and costs (START treaties).

This is clearly not the case with AI - which is far cheaper to develop, easier to hide, and has myriad lucrative use cases.  Policing a Dune-style "thou shalt not make a machine in the likeness of a human mind" Butlerian Jihad (intere... (read more)

Foyle23

IQ is highly heritable.  If I understand this presentation by Steven Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ.  E.g. Dad at +1 S.D., Mom at +3 S.D. gives children averaging about 0.6*(1+3)/2 = +1.2 S.D.  This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 S.D. by using +4 S.D. sperm donors.  There is no other tech (yet) that ca... (read more)
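The regression-toward-the-mean arithmetic in the comment can be written out directly. The 0.6 factor is the comment's reading of Hsu's slide, not an established constant:

```python
# Sketch of the comment's midparent regression model (h=0.6 is its assumption).
def expected_child_sd(dad_sd, mom_sd, h=0.6):
    """Expected child IQ, in population SDs, as h * midparent deviation."""
    return h * (dad_sd + mom_sd) / 2

print(expected_child_sd(1, 3))  # comment's example: 0.6 * 2 = +1.2 SD
print(expected_child_sd(0, 4))  # +4 SD donor, average mother: also +1.2 SD
```

The second line shows where the "about 1 S.D. lift via +4 S.D. donors" claim comes from under this model: relative to an average couple (expected child at 0 SD), the donor scenario shifts the expectation by +1.2 SD.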

1idontagreewiththat
Or, it might be that high IQ parents raise their children in a way that's different from low IQ and it has nothing to do with genetics at all?
Foyle20

Over what time window does your assessed risk apply?  E.g. 100 years, 1000?  Does the danger increase or decrease with time?

I have deep concern that most people have a mindset warped by human pro-social instincts/biases.  Evolution has long rewarded humans for altruism, trust and cooperation; women in particular have evolutionary pressures to be open and welcoming to strangers to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. "Our Kind", a mass-market anthropological survey of human culture and psychology] ... (read more)

1silent-observer
Except the point of Yudkowsky's "friendly AI" is that they don't have freedom to pick their own goals, they have the goals we set to them, and they are (supposedly) safe in a sense that "wiping out humanity" is not something we want, therefore it's not something an aligned AI would want. We don't replicate evolution with AIs, we replicate careful design and engineering that humans have used for literally everything else. If there is only a handful of powerful AIs with careful restrictions on what their goals can be (something we don't know how to do yet), then your scenario won't happen
1James B
My thoughts run along similar lines. Unless we can guarantee the capabilities of AI will be drastically and permanently curtailed, not just in quantity but also in kind (no ability to interact with the internet or the physical world, no ability to develop intent)c then the inevitability of something going wrong implies that we must all be Butlerian Jihadists if we care for biological life to continue.
Foyle2-5

Given the near certainty that Russia, China and perhaps some other despotic regimes will ignore this, does it:

1. help at all?

2. Could it actually make the world less safe? (If one of these countries gains a significant military AI lead as a result.)

5konstantin
Russia is not at all an AI superpower. China also seems to be quite far behind the west in terms of LLMs, so overall, six months would very likely not lead to any of them catching up.
sanxiyn116

Why do you think China will ignore it? This is "it's going too fast, we need some time", and China also needs some time for all the same reasons. For example, China is censoring Google with the Great Firewall, so if Google is to be replaced by ChatGPT, they need time to prepare to censor ChatGPT. The Great Firewall wasn't built in a day. See "Father of China's Great Firewall raises concerns about ChatGPT-like services" from SCMP.

1[anonymous]
Of course. If everyone is getting guns, and you were previously fighting with clubs, it is entirely reasonable to argue that you should "pause" your trips to the gun store to lock and load. But it doesn't change the fact that if all you have is a club, and a weaker opponent now has a gun and is willing to use it, this is not a good situation to be in. Best you can do is try to be careful with the safety but you must get a gun or die. A 6 month of "make no progress" is choosing the die option.