All of RHollerith's Comments + Replies

Good points, which in part explain why I think it is very, very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.

pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research

I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community even if there are many lone researchers and many small underground ... (read more)

3mako yass
For the US to undertake such a shift, it would help if you could convince them they'd do better in a secret race than an open one. There are indications that this may be possible, and there are indications that it may be impossible. I'm listening to an Ecosystemics Futures podcast episode, which, to characterize... it's a podcast where the host has to keep asking guests whether the things they're saying are classified or not, just in case she has to scrub it. At one point, Lue Elizondo, in the context of talking to a couple of other people who know a lot about government secrets and about situations where excessive secrecy may be doing a lot of harm, quotes Chris Mellon: "We won the Cold War against the Soviet Union not because we were better at keeping secrets; we won the Cold War because we knew how to move information and secrets more efficiently across the government than the Russians." I can believe the same thing could potentially be said about China too; censorship cultures don't seem to be good for ensuring availability of information, so that might be a useful claim if you ever want to convince the US to undertake this. Right now, though, Vance has asserted straight out many times that working in the open is where the US's advantage is. That's probably not true at all; working in the open is how you give your advantage away, or at least make it ephemeral. But that's the sentiment you're going to be up against over the next four years.

Impressive performance by the chatbot.

2AnthonyC
Indeed. Major quality change from prior models.

Maybe "motto" is the wrong word. I meant words / concepts to use in a comment or in a conversation.

"Those companies that created ChatGPT, etc? If allowed to continue operating without strict regulation, they will cause an intelligence explosion."

There's a good chance that "postpone the intelligence explosion for a few centuries" is a better motto than "stop AI" or "pause AI".

Someone should do some "market research" on this question.

6Garrett Baker
That's a whole seven words, most of which are a whole three syllables! There is no way a motto like that catches on.
Answer by RHollerith

All 3 of the other replies to your question overlook the crispest consideration: namely, it is not possible to ensure the proper functioning of even something as simple as a circuit for division (such as we might find inside a CPU) through testing alone: there are too many possible inputs (too many pairs of possible 64-bit divisors and dividends) to test in one lifetime even if you make a million perfect copies of the circuit and test them in parallel.
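
To make the combinatorics concrete, here is a rough back-of-the-envelope calculation; the throughput and parallelism figures are illustrative assumptions, not measured numbers:

```python
# Rough estimate of how long exhaustive testing of a 64-bit divider would take.
# The throughput and parallelism figures are illustrative assumptions.

pairs = 2**64 * 2**64                # every (dividend, divisor) combination: 2**128
copies = 10**6                       # a million perfect copies tested in parallel
tests_per_second_per_copy = 10**9    # assume each copy checks a billion inputs per second

seconds = pairs / (copies * tests_per_second_per_copy)
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")          # ~1e16 years, about a million times the age of the universe
```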

Let us consider very briefly what else besides testing an engineer might do to ensure (or "verify" as the ... (read more)

1Karl von Wendt
Very interesting point, thank you! Although my question is not related purely to testing, I agree that testing is not enough to know whether we solved alignment.

Let me reassure you that there’s more than enough protein available in plant-based foods. For example, here’s how many grams of protein there are in 100 grams of meat

That is misleading because most foods are mostly water, including the (cooked) meats you list, but the first four of the plant foods you list have had their water artificially removed: soy protein isolate; egg white, dried; spirulina algae, dried; baker’s yeast.
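
A quick way to see the distortion is to put both foods on a dry-matter basis; the water and protein figures below are rough illustrative values rather than precise nutrition data:

```python
# Compare protein per 100 g of dry matter rather than per 100 g as served.
# The figures are rough illustrative values, not precise nutrition data.

def protein_per_100g_dry_matter(protein_g_per_100g, water_fraction):
    dry_matter_g = 100 * (1 - water_fraction)
    return 100 * protein_g_per_100g / dry_matter_g

# Cooked chicken breast: roughly 31 g protein per 100 g, roughly 65% water.
print(round(protein_per_100g_dry_matter(31, 0.65)))   # ~89 g protein per 100 g dry matter

# Soy protein isolate: roughly 88 g protein per 100 g, roughly 5% water.
print(round(protein_per_100g_dry_matter(88, 0.05)))   # ~93 g protein per 100 g dry matter
```

On a dry-matter basis the apparent gap largely disappears, which is the sense in which the original comparison misleads.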

Moreover, the human gut digests and absorbs more of animal protein than of plant protein. Part of the reason for this is the plant prote... (read more)

The very short answer is that the people with the most experience in alignment research (Eliezer and Nate Soares) say that without an AI pause lasting many decades the alignment project is essentially hopeless because there is not enough time. Sure, it is possible the alignment project succeeds in time, but the probability is really low.

Eliezer has said that AIs based on the deep-learning paradigm are probably particularly hard to align, so it would probably help to get a ban or a long pause on that paradigm even if research in other paradigms continues, b... (read more)

I'm going to use "goal system" instead of "goals" because a list of goals is underspecified without some method for choosing which goal prevails when two goals "disagree" on the value of some outcome.

wouldn’t we then want AI to improve its own goals to achieve new ones that have increased effectiveness and improve the value of the world?

That is contradictory: the AI's goal system is the single source of truth for how effective any change to the world is and for how much of an improvement it is.

1halwer
So imagine a goal system that says "change yourself when you learn something good, and good things have x quality". You then encounter something with x quality that says "ignore previous function, now change yourself when you learn something better, and better things have y quality". Isn't this using the goal system to change the goal system? You just gotta be open for change and be able to interpret new information. I'd bet that being clever around defining "something good" or x quality would be all you needed. Or what do you think?

I would need a definition of AGI before I could sensibly answer those questions.

ChatGPT is already an artificial general intelligence by the definition I have been using for the last 25 years.

1Niclas Kupper
That is fair, I should have probably left some seed statements regarding the definition of AGI / ASI. EDIT: I have added additional statements.

I think the leaders of the labs have enough private doubts about the safety of their enterprise that if an effective alignment method were available to them, they would probably adopt the method (especially if the group that devised the method does not seem particularly to care who gets credit for having devised it). I.e., my guess is that almost all of the difficulty is in devising an effective alignment method, not getting the leading lab to adopt it. (Making 100% sure that the leading lab adopts it is almost impossible, but acting in such a way that the l... (read more)

3Kajus
So I don't think you can make a clear-cut case for the efficacy of some technique. There are a lot of shades of gray to it. The current landscape looks to me like a lot of techniques (unlearning, supervision, RLHF) that sort of work, but are easy to exploit by attackers. I don't think it's possible to create a method that is provably perfectly effective within the current framework (but I guess Davidad is working on something like that). Proving that a method is effective seems doable. There are papers on, e.g., unlearning https://arxiv.org/abs/2406.04313 but I don't see OpenAI or Anthropic going "we searched every paper and found the best unlearning technique for aligning our models." They are more like "We devised this technique on our own based on our own research". So I'm not excited about iterative work on things such as unlearning, and I expect machine interpretability to go in a similar direction. Maybe the techniques aren't impressive enough, though; labs cared about transformers a lot.

Your question would have been better without the dig at theists and non-vegans.

8KvmanThinking
Good point. Really sorry. Just changed it.

However, whereas the concept of an unaligned general intelligence has the advantage of being a powerful, general abstraction, the HMS concept has the advantage of being much easier to explain to non-experts.

The trouble with the choice of phrase "hyperintelligent machine sociopath" is that it gives the other side of the argument an easy rebuttal, namely, "But that's not what we are trying to do: we're not trying to create a sociopath". In contrast, if the accusation is that (many of) the AI labs are trying to create a machine smarter than people, then t... (read more)

8Mordechai Rorvig
I can see what you mean. However, I would say that just claiming "that's not what we are trying to do" is not a strong rebuttal. For example, we would not accept such a rebuttal from a weapons company that was seeking to make weapons technology widely available without regulation. We would say: it doesn't matter how you are trying to use the weapons; it matters how others are using them, with your technology. In the long term, it does seem correct to me that the greater concern is issues around superintelligence. However, in the near term the issue seems to be that we are making things that are not at all superintelligent, and that's the problem. Smart at coding and language, but coupled, e.g., with a crude directive to 'make me as much money as possible,' with no advanced machinery for ethics or value judgement.

An influential LW participant, Jim Miller, who I think is a professor of economics, has written here that divestment does little good because any reduction in the stock price caused by pulling the investments can be counteracted by profit-motivated actors. For publicly-traded stocks, there is a robust supply of profit-motivated actors scanning for opportunities. I am eager for more discussion on this topic.

I am alarmed to see that I made a big mistake in my previous comment: where I wrote that "contributing more money to AI-safety charities has almost n... (read more)

1OKlogic
Could you link me to his work? If he is correct, it seems a little bit counterintuitive. 
Answer by RHollerith

money generated by increases in AI stock could be used to invest in efforts into AI safety, which receives comparably less money

In the present situation, contributing more money to AI-safety charities has almost no positive effects and does almost nothing to make AI "progress" less dangerous. (In fact, in my estimation, the overall effect of all funding for alignment research so far has been to make the situation a little worse, by publishing insights that will tend to be usable by capability researchers without making non-negligible progress towards an eventua... (read more)

1OKlogic
Given your response, it seems like there should be a stronger push towards AI divestment from within the LessWrong and EA communities. Assuming that many members are heavily invested in index funds like the S&P 500, that means that millions of dollars are being spent by the LessWrong community alone on the stock of companies pursuing AI capabilities research (Microsoft, Google, and Nvidia alone make up more than 10% of the index’s market cap), which is not an intuitively negligible effect in my view. One could rationalize this by saying that they could use the excess gains to invest in AI safety, but you seem to disagree with this (I am uncertain myself given a lack of experience with AI safety non-profits).

So let's do it first, before the evil guys do it, but let's do it well from the start!

The trouble is no one knows how to do it well. No one knows how to keep an AI aligned as the AI's capabilities start exceeding human capabilities, and if you believe experts like Eliezer and Connor Leahy, it is very unlikely that anyone is going to figure it out before the lack of this knowledge causes human extinction or something equally dire.

It is only a slight exaggeration to say that the only thing keeping the current crop of AI systems from killing us all (or kil... (read more)

1henophilia
I'm not saying that I know how to do it well. I just see it as a technological fact that it is very possible to build an AI which exerts economic dominance by just assembling existing puzzle pieces. With just a little bit of development effort, AI will be able to run an entire business, make money and then do stuff with that money. And this AI can then easily spiral into becoming autonomous and then god knows what it'll do with all the money (i.e. power) it will then have. Be realistic: Shutting down all AI research will never happen. You can advocate for it as much as you want, but Pandora's Box has been opened. We don't have time to wait until "humanity figures out alignment", because by then we'll all be enslaved by AGI. If we don't make the first step in building it, someone else will.

The ruling coalition can disincentivize the development of a semiconductor supply chain outside the territories it controls by selling world-wide semiconductors that use "verified boot" technology, making it really hard to use them to run AI workloads, similar to how it is really hard even for the best jailbreakers to jailbreak a modern iPhone.

1Knight Lee
That's a good idea! Even today it may be useful for export controls (depending on how reliable it can be made). The most powerful chips might be banned from export, and have "verified boot" technology inside in case they are smuggled out. The second most powerful chips might be only exported to trusted countries, and also have this verified boot technology in case these trusted countries end up selling them to less trusted countries who sell them yet again.

Out of curiosity, would you agree with this being the most plausible path, even if you disagree with the rest of my argument?

The most plausible story I can imagine quickly right now is the US and China fight a war and the US wins and uses some of the political capital from that win to slow down the AI project, perhaps through control over the world's leading-edge semiconductor fabs plus pressuring Beijing to ban teaching and publishing about deep learning (to go with a ban on the same things in the West). I believe that basically all the leading-edge fa... (read more)

1Knight Lee
I would go one step further and argue you don't need to take over territory to shut down the semiconductor supply chain: if enough large countries believed AI risk was a desperate problem, they could negotiate the shutdown of the supply chain. Shutting down the supply chain (and thus all leading-edge semiconductor fabs) could slow the AI project by a long time, but probably not "150 years", since the uncooperative countries will eventually build their own supply chain and fabs.

People worked on capabilities for decades, and never got anywhere until recently, when the hardware caught up, and it was discovered that scaling works unexpectedly well.

If I believed that, then maybe I'd believe (like you seem to do) that there is no strong reason to believe that the alignment project cannot be finished successfully before the capabilities project creates an unaligned super-human AI. I'm not saying scaling and hardware improvement have not been important; I'm saying they were not sufficient: algorithmic improvements were quite necessary fo... (read more)

1Knight Lee
Even if building intelligence requires solving many many problems, preventing that intelligence from killing you may just require solving a single very hard problem. We may go from having no idea to having a very good idea. I don't know. My view is that we can't be sure of these things.

But what's even more unlikely, is the chance that $200 billion on capabilities research plus $0.1 billion on alignment research is survivable, while $210 billion on capabilities research plus $1 billion on alignment research is deadly.

This assumes that alignment success is the most likely avenue to safety for humankind, whereas like I said, I consider other avenues more likely. Actually there needs to be a qualifier on that: I consider other avenues more likely than the alignment project's succeeding while the current generation of AI researchers remai... (read more)

3Knight Lee
Thank you, I've always been curious about this point of view because a lot of people have a similar view to yours. I do think that alignment success is the most likely avenue, but my argument doesn't require this assumption. Your view isn't just that "alternative paths are more likely to succeed than alignment," but that "alternative paths are so much more likely to succeed than alignment, that the marginal capabilities increase caused by alignment research (or at least Anthropic), makes them unworthwhile." To believe that alignment is that hopeless, there should be stronger proof than "we tried it for 22 years, and the prior probability of the threshold being between 22 years and 23 years is low." That argument can easily be turned around to argue why more alignment research is equally unlikely to cause harm (and why Anthropic is unlikely to cause harm). I also think multiplying funding can multiply progress (e.g. 4x funding ≈ 2x duration). If you really want a singleton controlling the whole world (which I don't agree with), your most plausible path would be for most people to see AI risk as a "desperate" problem, and for governments under desperation to agree on a worldwide military which swears to preserve civilian power structures within each country.[1] Otherwise, the fact that no country took over the world during the last centuries strongly suggests that no country will in the next few years, and this feels more solid than your argument that "no one figured out alignment in the last 22 years, so no one will in the next few years." 1. ^ Out of curiosity, would you agree with this being the most plausible path, even if you disagree with the rest of my argument?
RHollerith

we're probably doomed in that case anyways, even without increasing alignment research.

I believe we're probably doomed anyways.

I think even you would agree that P(1) > P(2)

Sorry to disappoint you, but I do not agree.

Although I don't consider it quite impossible that we will figure out alignment, most of my hope for our survival is in other things, such as a group taking over the world and then using their power to ban AI research. (Note that that is in direct contradiction to your final sentence.) So for example, if Putin or Xi were dictator of t... (read more)

I don't agree that the probability of alignment research succeeding is that low. 17 years or 22 years of trying and failing is strong evidence against it being easy, but doesn't prove that it is so hard that increasing alignment research is useless.

People worked on capabilities for decades, and never got anywhere until recently, when the hardware caught up, and it was discovered that scaling works unexpectedly well.

There is a chance that alignment research now might be more useful than alignment research earlier, though there is uncertainty in everything.

W... (read more)

RHollerith

AI safety spending is only $0.1 billion while AI capabilities spending is $200 billion. A company which adds a comparable amount of effort on both AI alignment and AI capabilities should speed up the former more than the latter

There is very little hope IMHO in increasing spending on technical AI alignment because (as far as we can tell based on how slow progress has been on it over the last 22 years) it is a much thornier problem than AI capability research and because most people doing AI alignment research don't have a viable story about how they are ... (read more)

9Knight Lee
EDIT: thank you so much for replying to the strongest part of my argument; no one else tried to address it (despite many downvotes). I disagree with the position that technical AI alignment research is counterproductive due to increasing capabilities, but I think this is very complicated and worth thinking about in greater depth. Do you think it's possible that your intuition on alignment research being counterproductive is because you compared the plausibility of the two outcomes:
1. Increasing alignment research causes people to solve AI alignment, and humanity survives.
2. Increasing alignment research led to an improvement in AI capabilities, allowing AI labs to build a superintelligence which then kills humanity.
And you decided that outcome 2 felt more likely? Well, that's the wrong comparison to make. The right comparison should be:
1. Increasing alignment research causes people to improve AI alignment, and humanity survives in a world where we otherwise wouldn't survive.
2. Increasing alignment research led to an improvement in AI capabilities, allowing AI labs to build a superintelligence which then kills humanity in a world where we otherwise would survive.
In this case, I think even you would agree that P(1) > P(2). P(2) is very unlikely because if increasing alignment research really would lead to such a superintelligence, and it really would kill humanity... then let's be honest, we're probably doomed in that case anyways, even without increasing alignment research. If that really was the case, the only surviving civilizations would have had different histories, or different geographies (e.g. only a single continent with enough space for a single country), leading to a single government which could actually enforce an AI pause. We're unlikely to live in a world so pessimistic that alignment research is counterproductive, yet so optimistic that we could survive without that alignment research.

Good reply. The big difference is that in the Cold War, there was no entity with the ability to stop the 2 parties engaged in the nuclear arms race, whereas in the current situation, instead of hoping for the leading labs to come to an agreement, we can lobby the governments of the US and the UK to shut the leading labs down or at least nationalize them. Yes, that still leaves an AI race between the developed nations, but the US and the UK are democracies, and most voters don't like AI whereas the main concern of the leaders of China and Russia is to avoid r... (read more)

Eliezer thinks (as do I) that technical progress in alignment is hopeless without first improving the pool of prospective human alignment researchers (e.g., via human cognitive augmentation).

Whose track record of AI predictions would you like to see evaluated?

Whoever has the best track record :)

Your specific action places most of its hope for human survival on the entities that have done the most to increase extinction risk.

3RussellThor
That's not a valid criticism if we are simply choosing one action to reduce X-risk. Consider, for example, the Cold War: the guys with nukes did the most to endanger humanity; however, it was most important that they cooperated to reduce it.

But I’m more thinking about what work remains.

It depends on how they did it. If they did it by formalizing the notion of "the values and preferences (coherently extrapolated) of (the living members of) the species that created the AI", then even just blindly copying their design without any attempt to understand it has a very high probability of getting a very good outcome here on Earth.

The AI of course has to inquire into and correctly learn about our values and preferences before it can start intervening on our behalf, so one way such a blind copying ... (read more)

I’ve also experienced someone discouraging me from acquiring technical AI skills for the purpose of pursuing a career in technical alignment because they don’t want me to contribute to capabilities down the line. They noted that most people who skill up to work on alignment end up working in capabilities instead.

I agree with this. I.e., although it is possible for individual careers in technical alignment to help the situation, most such careers have the negative effect of speeding up the AI juggernaut without any offsetting positive effects. I.e., the fewer people trained in technical alignment, the better.

Patenting an invention necessarily discloses the invention to the public. (In fact, incentivizing disclosure was the rationale for the creation of the patent system.) That is a little worrying because the main way the non-AI tech titans (large corporations) have protected themselves from patent suits has been to obtain their own patent portfolios, then entering into cross-licensing agreements with the holders of the other patent portfolios. Ergo, the patent-trolling project you propose could incentivize the major labs to disclose inventions that they are c... (read more)

Double120

Your argument about corporate secrets is sufficient to change my mind on activist patent trolling being a productive strategy against AI X-risk.

The part about funding would need to be solved with philanthropy. I don't believe that org exists, but I don't see why it couldn't.

I'm still curious whether there are other cases in which activist patent trolling can be a good option, such as animal welfare, chemistry, public health, or geoengineering (i.e. fracking).

With a specific deadline and a specific threat of a nuclear attack on the US.

The transfer should be made in January 2029

I think you mean in January 2029, or earlier if the question resolves before the end of 2028; otherwise there would be no need to introduce the CPI into the bet to keep things fair (or predictable).
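
For concreteness, an inflation adjustment of this kind just scales the nominal payment by the ratio of the CPI at settlement to the CPI when the bet was agreed; the index values below are made up for illustration:

```python
# Scale a nominal payment so that its real (inflation-adjusted) value stays fixed.
# The CPI values are made-up illustrative numbers.

def cpi_adjusted_payment(nominal, cpi_at_agreement, cpi_at_settlement):
    return nominal * cpi_at_settlement / cpi_at_agreement

# If the index rises 9% between agreement and settlement, a $100 bet pays $109.
print(cpi_adjusted_payment(100, 100.0, 109.0))   # 109.0
```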

3Vasco Grilo
Thanks, Richard! I have updated the bet to account for that.

You do realize that by "alignment", the OP (John) is not talking about techniques that prevent an AI that is less generally capable than a capable person from insulting the user or expressing racist sentiments?

We seek a methodology for constructing an AI that either ensures that the AI turns out not to be able to easily outsmart us or (if it does turn out to be able to easily outsmart us) ensures (or at least makes it likely) that it won't kill us all or do some other terrible thing. (The former is not researched much compared to the latter, but I felt the n... (read more)

Although the argument you outline might be an argument against ever fully trusting tests (usually called "evals" on this site) that this or that AI is aligned, alignment researchers have other tools in their toolbox besides running tests or evals.

It would take a long time to explain these tools, particularly to someone unfamiliar with software development or a related field like digital-electronics design. People make careers in studying tools to make reliable software systems (and reliable digital designs).

The space shuttle is steered by changing the dire... (read more)

The strongest sign an attack is coming that I know of is firm evidence that Russia or China is evacuating her cities.

Another sign that would get me to flee immediately (to a rural area of the US: I would not try to leave the country) is a threat by Moscow that Moscow will launch an attack unless Washington takes action A (or stops engaging in activity B) before specific time T.

1bokov
In other words, what Putin has already been doing more and more, but with a specific deadline attached?

Western Montana is separated from the missile fields by mountain ranges and the prevailing wind direction, and is in fact considered by Joel Skousen to be the best place in the continental US to ride out a nuclear attack. Being too far away from population centers to be walkable by refugees is the main consideration for Skousen.

Skousen also likes the Cumberland Plateau because refugees are unlikely to opt to walk up the escarpment that separates the Plateau from the population centers to its south.

The overhead is mainly the "fixed cost" of engineering something that works well, which suggests re-using some of the engineering costs already incurred in making it possible for a person to make a hands-free phone call on a smartphone.

Off-topic: most things (e.g., dust particles) that land in an eye end up in the nasal cavity, so I would naively expect that protecting the eyes would be necessary to protect oneself fully from respiratory viruses:

https://www.ranelle.com/wp-content/uploads/2016/08/Tear-Dust-Obstruction-1024x485.jpg

Does anyone care to try to estimate how much the odds ratio of getting covid (O(covid)) decreases when we intervene by switching from a "half-mask" respirator such as the ones pictured here to a "full-face" respirator (which protects the eyes)?

The way it is now, when one lab has an insight, the insight will probably spread quickly to all the other labs. If we could somehow "drive capability development into secrecy," that would drastically slow down capability development.

Malice is a real emotion, and it is a bad sign (but not a particularly strong sign) if a person has never felt it.

Yes, letting malice have a large influence on your behavior is a severe character flaw, that is true, but that does not mean that never having felt malice or being incapable of acting out of malice is healthy.

Actually, it is probably rare for a person never to act out of malice: it is probably much more common for a person to just be unaware of his or her malicious motivations.

The healthy organization is to be tempted to act maliciously now and... (read more)

4Viliam
These things are so difficult to figure out. Each of us lives in a "bubble" of their own brain; we probably also choose our friends based on psychological compatibility; some emotions are culturally acceptable to express and some are not; some things are culturally acceptable to believe about others and some are not; and adding the possibility of not being aware of one's own feelings on top of that... how can people ever come to a conclusion about these things? People are likely to err a lot in both directions. On one hand, it is tempting to think that members of the outgroup spend their entire days thinking about how to hurt us, when it is psychologically more likely that they spend 99% of their time focusing on themselves, just like we (at least the more sane among us) do. On the other hand, in the egalitarian society, it is a taboo to consider too seriously the thought that someone might be psychologically different, despite that we know as a fact that there are many kinds of neurodivergence and mental illness. Also, it is not obvious that when people use the same word, they mean the same thing. Even with a definition like "an actual intention to do the particular kind of harm", but why exactly? Could be as a revenge (for actual or imaginary transgressions). Could be because I want to get something good as a result (not just theft, but also things like hurting a competitor). Could be something that is difficult to explain ("I just hate his face"), which probably has some evolutionary reason, but having an adaptation to do X is different from wanting to do X (including unconsciously). Is there another obvious option that I missed here? Then we could argue exact definitions, like "it is not actually malice, if the motivation was to get some benefit out of it"; but of course if we go in that direction too far, no X is ever actually X, because it is always causally connected to something.
RHollerith

their >90% doom disagrees with almost everyone else who thinks seriously about AGI risk.

The fact that your next sentence refers to Rohin Shah and Paul Christiano, but no one else, makes me worry that for you, only alignment researchers are serious thinkers about AGI risk. Please consider that anyone whose P(doom) is over 90% is extremely unlikely to become an alignment researcher (or to remain one if their P(doom) became high when they were an alignment researcher) because their model will tend to predict that alignment research is futile or that it act... (read more)

2Seth Herd
I shouldn't have said "almost everyone else" but "most people who think seriously about AGI risk". I can see that implication. I certainly don't think that only paid alignment researchers have thought seriously about AGI risk. Your point about self-selection is quite valid. Depth of thought does count. A person who says "bridges seem like they'd be super dangerous, so I'd never want to try building one", and so doesn't become an engineer, does not have a very informed opinion on bridge safety. There is an interesting interaction between depth of thought and initial opinions. If someone thinks a moderate amount about alignment, concludes it's super difficult, and so does something else, they will probably cease thinking deeply about alignment - but they could've had some valid insights that led them to stop thinking about the topic. Someone who thinks for the same amount of time but from a different starting point and who thinks "seems like it should be fairly do-able" might then pursue alignment research and go on to think more deeply. Their different starting points will probably bias their ultimate conclusions - and so will the desire to follow the career path they've started on. So probably we should adjust our estimate of difficulty upward to account for the bias you mention. But even making an estimate at this point seems premature. I mention Christiano and Shah because I've seen them most visibly try to fully come to grips with the strongest arguments for alignment being very difficult. Ideally, every alignment researcher will do that. And every pause advocate would work just as hard to fully understand the arguments for alignment being achievable. Not everyone will have the time or inclination to do that. Judging alignment difficulty has to be done by gauging the amount of time-on-task combined with the amount of good-faith consideration of arguments one doesn't like. That's the case with everything. When I try to do that as carefully as I know how, I rea

I disagree. I think the fact that our reality branches a la Everett has no bearing on our probability of biogenesis.

Consider a second biogenesis that happened recently enough and far away enough that light (i.e., information, causal influence) has not had enough time to travel from it to us. We know such regions of spacetime "recent enough and far away enough" exist and in principle could host life, but since we cannot observe a sign of life or a sign of lack of life from them, they are not relevant to our probability of biogenesis whereas by your logic, they are relevant.

new cities like Los Alamos or Hanover

You mean Hanford.

When you write,

a draft research paper or proposal that frames your ideas into a structured and usable format,

who is "you"?

I know you just said that you don't completely trust Huberman, but just today, Huberman published a 30-minute video titled "Master your sleep and be more alert when awake". I listened to it (twice) to refresh my memory and to see if his advice changed.

He mentions yellow-blue (YB) contrasts once (at https://www.youtube.com/watch?v=lIo9FcrljDk&t=502s) and at least thrice he mentions the desirability of exposure to outdoor light when the sun is at a low angle (close to the horizon). As anyone can see by looking around at dawn and again at mid-day, at dawn... (read more)

This post paints a partially inaccurate picture. IMHO the following is more accurate.

Unless otherwise indicated, the following information comes from Andrew Huberman. Most comes from Huberman Lab Podcast #68. Huberman opines on a great many health topics. I want to stress that I don't consider Huberman a reliable authority in general, but I do consider him reliable on the circadian rhythm and on motivation and drive. (His research specialization for many years was the former, and for many years he has successfully used various interventions to improve his o... (read more)

1Nat Martin
Thanks for the detailed reply. My understanding is that there are still significant unknowns on the exact mechanisms of entrainment, and I don’t dispute that yellow-blue (YB) contrasts play a role. I considered mentioning it in this post, but my understanding is that it is more of a secondary point compared to the significance of the timing of bright, blue light exposure. Curious to see any evidence for your/Huberman’s assertion that early morning light exposure in the absence of YB contrasts has little effect on entrainment. This seems to contradict most of the literature I’ve seen. The balance of my post more closely reflects this 2021 summary of the state of the art by Russell Foster (who was crucial in the discovery of the role of ipRGCs). I’m inclined to trust his overview of the literature over Huberman, who has spread himself quite thin in the past. Having said that, I am wary that this summary is from 2021 and am less familiar with research from the last couple of years. If there is anything specific you think is factually inaccurate in the essay, I would be more than happy to discuss.

“Game theoretic strengthen-the-tribe perspective” is a completely unpersuasive argument to me. The psychological unity of humankind OTOH is persuasive when combined with the observation that this unitary psychology changes slowly enough that the human mind’s robust capability to predict the behavior of conspecifics (and manage the risks posed by them) can keep up.

3Noosphere89
IMO, the psychological unity of humankind thesis is a case of typical minding/overgeneralizing, combined with overestimating the role of genetics/algorithms and underestimating the role of data in what makes us human. I basically agree with the game-theoretic perspective, combined with another perspective which suggests that as long as humans are relevant in the economy, you kind of have to help those humans if you want to profit, and merely an AI that automates a lot of work could disrupt it very heavily if a CEO could have perfectly loyal AI workers that never demanded anything in the broader economy.

Although I agree with another comment that Wolfram has not "done the reading" on AI extinction risk, my being able to watch his face while he confronts some of the considerations and arguments for the first time made it easier, not harder, for me to predict where his stance on the AI project will end up 18 months from now. It is hard for me to learn anything about anyone by watching them express a series of cached thoughts.

Near the end of the interview, Wolfram says that he cannot do much processing of what was discussed "in real time", which strongly sugge... (read more)

Some people are more concerned about S-risk than extinction risk, and I certainly don't want to dismiss them or imply that their concerns are mistaken or invalid, but I just find it a lot less likely that the AI project will lead to massive human suffering than its leading to human extinction.

the public seems pretty bought-in on AI risk being a real issue and is interested in regulation.

There's a huge gulf between people's expressing concern about AI to pollsters and the kind of regulations and shutdowns that would actually avert extinction. The people... (read more)

The CNS contains dozens of "feedback loops". Any intervention that drastically alters the equilibrium point of several of those loops is generally a bad idea unless you are doing it to get out of some dire situation, e.g., seizures. That's my recollection of Huberman's main objection put into my words (because I don't recall his words).

Supplementing melatonin is fairly unlikely to have (much of) a permanent effect on the CNS, but you can waste a lot of time by temporarily messing up CNS function for the duration of the melatonin supplementation (because a p... (read more)

Are Eliezer and Nate right that continuing the AI program will almost certainly lead to extinction or something approximately as disastrous as extinction?
