All of pathos_bot's Comments + Replies

I feel a certain satisfaction hearing that some figure on social media is embroiled in a controversy and realizing that I muted them a long time ago. The common themes that turn me off to people in general are:

  1. Humor based on punching down, deriding easy targets in a way that implies a natural superiority over a superficially detestable outgroup
  2. Huckster-like communication style, where grandiose, far-off promises are supported by conveniently unfalsifiable claims.
  3. Tactical, endless derision of an enemy indiscriminately, even when the derogatory claims contrad
... (read more)
Answer by pathos_bot

On the opposite end, when I was young I learned the term "stock market crash", referring to 1929, and I thought a car had literally crashed into the physical location where stocks were traded, leading to mass confusion and kickstarting the Great Depression. Though if that had actually happened back then, it would have led to a temporary crash in the market.

Darmani
When I was a kid and 9/11 happened, some people online were talking about the effect on the stock market. My mom told me that the stock exchange was down the street from the WTC and not damaged, so I thought the people on the Internet were all wrong.

Obviously correct. The nature of any entity with significantly more power than you is that it can do anything it wants, and it is incentivized to do nothing in your favor the moment your existence requires resources that would benefit it more if used directly. This is the essence of most of Eliezer's writings on superintelligence.

In all likelihood, ASI considers power (agentic control of the universe) an optimal goal and finds no use for humanity. Any wealth of insight it could glean from humans it could get from its own thinking, or seeding va... (read more)

Philip Bellew
Each of these carries assumptions about reality that I'm not convinced a superintelligence would share, though it may be able to find the answer in some cases. It would be just as likely to choose to preserve us out of some sense of amusement or preservation. To use the OP's example: a billionaire won't spare everyone 78 bucks, but will spend more on things he prefers; some keep private zoos or other things whose only purpose is staving off boredom. Making the intelligence like us won't eliminate the problem. There are plenty of fail states for humanity where it isn't extinct. But while we pave over ant colonies and actively hunt wild hogs as a nuisance, there are lots of human cultures that won't do the same to cats. I hope that isn't the best we can do, but it's probably better than extinction.
  1. Most of the benefits of current-gen generative AI models are unrealized. The scaffolding, infrastructure, etc. around GPT-4-level models are still mostly hacks and experiments. It took decades for the true value of touch screens, GPS and text messaging to be realized in the form of the smartphone. Even if for some strange, improbable reason SOTA model training were to stop right now, there are still likely multiples of gains to be realized simply via wrappers and post-training.
  2. The scaling hypothesis has held far longer than many people have anticipated. GPT-4
... (read more)
Answer by pathos_bot

I'm not preparing for it because it's not gonna happen

Jorge_Carvajal
I would like to read your arguments for this statement.

I agree. OpenAI claimed in the GPT-4o blog post that it is an entirely new model trained from the ground up. GPT-N refers to capabilities, not a specific architecture or set of weights. I imagine GPT-5 will likely be an upscaled version of 4o, as 4o's success has shown that multi-modal training can reach similar capabilities with what is likely a smaller number of weights (judging by the fact that GPT-4o is cheaper and faster than GPT-4 and GPT-4 Turbo).

IMO the proportion of effort going into AI alignment research scales with total AI investment. Many AI labs do alignment research themselves and open-source/release research on the matter.

OpenAI at least ostensibly has a mission. If OpenAI hadn't made the moves they did, Google would have their spot, and Google is closer to the "evil self-serving corporation" archetype than OpenAI is.

  • Existing property rights get respected by the successor species. 


What makes you believe this?

Matt Vogel
If you don't believe this will happen, not much matters in financial markets; I am unsure what your investment strategy should look like in that case. Said another way, it's a bit of a Pascal's wager of investing: either you believe this and win, or you don't believe it and lose out regardless of whether the outcome is positive or negative.

Given that this argument hinges on China's higher IQ, why couldn't the same be said about Japan, which according to most figures has an average IQ at or above China's, implying the same higher proportion of +4SD individuals in the population? If it's 1 in 4k, there would be roughly 30k such people in Japan, 3x as many as in the US. Japan also has a more stable democracy, better overall quality of life and higher per capita GDP than China. If outsized technological success in any domain were solely about IQ, one would have expected Japan, not the USA, to be the center of world tech and the likely creator of AGI, but that's not the case.
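The back-of-the-envelope arithmetic here can be made explicit (a sketch; the population figures, the normal model, and the 1-in-4,000 rate are illustrative assumptions carried over from the argument, not established facts):

```python
from math import erfc, sqrt

def tail_fraction(threshold, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above threshold."""
    z = (threshold - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

# +4 SD on the conventional scale (IQ 160, mean 100, SD 15):
p = tail_fraction(160)
print(f"P(IQ > 160) ≈ {p:.2e}, i.e. about 1 in {round(1 / p):,}")

# The comment's assumed elevated rate of 1 in 4,000 for Japan (~125M people)
# vs. the mean-100 baseline for the USA (~333M people):
japan = 125_000_000 // 4_000
usa = round(333_000_000 * p)
print(f"Japan ≈ {japan:,}, USA ≈ {usa:,}  (roughly a 3x ratio)")
```

This reproduces the comment's figures: an elevated 1-in-4k rate gives Japan about 31k +4SD individuals, versus about 10k in the US at the baseline rate.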

Answer by pathos_bot

The wording of the question is ambiguous. It asks for your credence that the coin was heads when you are "first awakened", but by your perception any awakening is your first. If it is really asking for your credence given the information that the question is being asked on your actual first awakening, regardless of your perception, then it's 1/2. If you know the question will be asked on your first or second awakening (though the second will, in the moment, feel like the first), then it's 1/3.
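A quick Monte Carlo makes the two readings concrete (a sketch; the function and variable names are mine, and "per-experiment" vs. "per-awakening" is one way to formalize the ambiguity described above):

```python
import random

def simulate(trials=100_000, seed=0):
    """Monte Carlo for the two readings of 'when you are first awakened'."""
    rng = random.Random(seed)
    heads_experiments = 0   # experiments whose coin landed heads
    heads_awakenings = 0    # awakenings that follow a heads flip
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1   # heads -> woken once
            total_awakenings += 1
        else:
            total_awakenings += 2   # tails -> woken twice
    # Reading 1: credence evaluated once per experiment (the literal first awakening)
    p_per_experiment = heads_experiments / trials
    # Reading 2: credence sampled over all awakenings
    p_per_awakening = heads_awakenings / total_awakenings
    return p_per_experiment, p_per_awakening

p1, p2 = simulate()
print(f"per-experiment: {p1:.3f} (≈ 1/2), per-awakening: {p2:.3f} (≈ 1/3)")
```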

JeffJo
The problem statement itself does not mention Monday, Tuesday, or any timing difference between a "mandatory" waking and an "optional" one. (There is another missing element, which I will defer talking about until I finish this thought.) It just says you will be wakened once or twice. Elga added these elements as part of his solution; they are not part of the problem he asked us to solve.

But that solution added more than just the schedule of wakings. After you are "first awakened," what would change if you are told that the day is Monday? Or that the coin landed on Tails (and you consider what day it is)? This is how Elga avoided any consideration, given his other additions, of what significance to attach to Tuesday-after-Heads. That was never used in his solution, yet could be the crux of the controversy.

I have no definitive proof, but I suspect that Elga was already thinking of his solution. He included two hints to it: one was "two days," although days were never mentioned again, and the other was "when first awakened." Both apply to the solution, not the problem as posed. I think "first awakened" simply meant before you could learn information.

You point out that, as you are trying to interpret it, SB cannot determine whether this is a "first awakening." But the last element that is usually included in the problem, yet was not in what Elga actually asked, is that the question is posed to you before you are first put to sleep. So the issue you raise (essentially, whether the question is asked on Tuesday, after Heads) is moot. The question already exists as you wake up; it applies to that moment, regardless of how many times you are wakened.
Ape in the coat
You should probably use "last awakening" instead of "first awakening" in your attempted disambiguation. See Radford Neal's comment for the reason why.
Radford Neal
The wording may be bad, but I think the second interpretation is what is intended. Otherwise the discussion often seen of "How might your beliefs change if after awakening you were told it is Monday?" would make no sense, since your actual first awakening is always on Monday (though you may experience what feels like a first awakening on Tuesday).

This suggests a general rule/trend by which unreported but frequent phenomena can be extrapolated: if phenomenon X is discovered accidentally via method Y almost all the time, then method Y must be performed far more frequently than people suspect.

Generally it makes no sense for every country to collectively abandon the law and order and unobstructed passage of cargo that underpin global trade. He talks about this great US pullback because the US will be energy independent, but America pulling back and the global waters turning into a lawless hellscape would send the world economy into a dark age. Hinging all his predictions on this big head-turning assumption gets him more attention, but the premise is nonsensical.

Answer by pathos_bot

Why can't this be an app? If their LAM is better than competitors', it would be profitable both on their own hardware and standalone.

MiguelDev
Upon watching their demo video, it seems that they want to do things differently and do away with the conventional use of apps in phones. So it's a different tech philosophy. Well, let's see how the market reacts to their R1 - rabbit os - LAM tech stack.

The easiest way to check whether this would work is to determine a causal relationship between diminished levels of serotonin in the bloodstream and neural biomarkers similar to those of people with malnutrition.

MadHatter
Well, that should be straightforward, and is predicted by my model of serotonin's function in the brain. It would require an understanding of the function of orexin, which I do not currently possess, beyond the standard intuition that it modulates hunger. The evolutionary story would be this:
  • serotonin functions (in my model) to make an agent satisficing, which has many desirable safety properties, e.g. not getting eaten by predators when you forage unnecessarily
  • the most obvious and important desire to satisfy (and neurally mark as satisfied) is the hunger for food, modulated by the hormone/neurotransmitter orexin
  • the most obvious mechanism (and thus the one I predict) is that serotonergic bacteria in the gut activate some neural population in the gut's "second brain", sending a particular neural signal bundle to the primary brain consistent with malnutrition (there are many details here that I have not worked out and which could be usefully worked on by a qualified theoretical neuroscientist)
  • this neural signal bundle would necessarily up(???)modulate the orexin signal(???)
  • sustained high levels of orexin lead to autocannibalism of the brain through sustained neural pruning

I feel the original post, despite ostensibly being a plea for help, could be read as a coded satire on the worship of "pure cognitive heft" that seems to permeate rationalist/LessWrong culture. It points out the misery of g-factor absolutism.

Answer by pathos_bot

It would help if you clarified why specifically you feel unintelligent. Given your writing style (your ability to distill concerns, compare abstract concepts, and communicate clearly), I'd wager you are intelligent. Could it be imposter syndrome?

nim
In this vein, the only behavior displayed in the original post that reads as less "intelligent" to me is assuming the [existence * importance] of trainable abstract intelligence. I notice that people who've gotten a lot of the cultural "you're so smart" feedback tend on the whole to be skeptical of abstract intelligence as an independent trait, perhaps because of the repeated experience of being told one has a trait that doesn't subjectively feel like it has a specific presence or location. This gets me wondering if the feeling that one doesn't "have intelligence" in the way that one "has height" or "has happiness" or even "has verbal fluency" is universal, and the difference in how individuals interpret the absence-of-experience could be fully explainable by social context and feedback.

I totally agree with that notion; however, I believe the current levers of progress massively incentivize AGI development over WBE. Current regulations are based on FLOPs, which will restrict progress toward WBE long before restricting anything with AGI-like capabilities. If we had a perfectly aligned international system of oversight that ensured WBE were possible and maximized in apparent value to those with the means both to develop it and to pull the levers, steering away from any risky AGI analogue before it is possible, then yes, but tha... (read more)

Steven Byrnes
I’m not sure what you’re talking about. Maybe you meant to say: “there are ideas for possible future AI regulations that have been under discussion recently, and these ideas involve flop-based thresholds”? If so, yeah that’s kinda true, albeit oversimplified.

I think that’s very true in the “WBE without reverse engineering” route, but it’s at least not obvious in the “WBE with reverse engineering” route that I think we should be mainly talking about (as argued in OP). For the latter, we would have legible learning algorithms that we understand, and we would re-implement them in the most compute-efficient way we can on our GPUs/CPUs. And it’s at least plausible that the result would be close to the best learning algorithm there is. More discussion in Section 2.1 of this post. Certainly there would be room to squeeze some more intelligence into the same FLOP/s—e.g. tweaking motivations, saving compute by dropping the sense of smell, various other architectural tweaks, etc. But it’s at least plausible IMO that this adds up to <1 OOM. (Of course, non-WBE AGIs could still be radically superhuman, but it would be by using radically superhuman FLOP (e.g. model size, training time, speed, etc.))

Hmm. I should mention that I don’t expect that LLMs will scale to AGI. That might be a difference between our perspectives.

Anyway, you’re welcome to believe that “WBE before non-WBE-AGI” is hopeless even if we put moonshot-level effort into accelerating WBE. That’s not a crazy thing to believe. I wouldn’t go as far as “hopeless”, but I’m pretty pessimistic too. That’s why, when I go around advocating for work on human connectomics to help AGI x-risk, I prefer to emphasize a non-WBE-related path to AI x-risk reduction that seems (to me) likelier to actualize.

I grant that a sadistic human could do that, and that’s bad, although it’s pretty low on my list of “likely causes of s-risk”. (Presumably Ems, like humans, would be more economically productive when they’re feeling pretty g

It really is. My conception of the future is so weighted by the very likely reality of an AI-transformed world that I have basically abandoned any plans with a time scale over 5 years. Even my short-term plans will likely be shifted significantly by AI advances over the next few months or years. It really is crazy to think about, but I've gone over every aspect of AI advances and scaling thousands of times in my head and can think of no near-future reality that isn't as alien to our current reality as ours is to pre-eukaryotic life.

I separate possible tech advances by the criterion: "Is this easier or harder than AGI?" If it's easier than AGI, there's a chance it will be invented before AGI; if not, AGI will invent it, so it's pointless to worry over anything our within-6-standard-deviations-of-100-IQ brains can conceive of now. WBE seems like something we should just leave to ASI once we achieve it, rather than worrying over every minutia of its feasibility.

Steven Byrnes
Oops, sorry for leaving out some essential context. Both myself, and everyone I was implicitly addressing this post to, are concerned about the alignment problem, e.g. AGI killing everyone. If not for the alignment problem, then yeah, I agree, there’s almost no reason to work on any scientific or engineering problem except building ASI as soon as possible. But if you are worried about the alignment problem, then it makes sense to brainstorm solutions, and one possible family of solutions involves trying to make WBE happen before making AGI. There are a couple obvious follow-up questions, like “is that realistic?” and “how would that even help with the alignment problem anyway?”. And then this blog post is one part of that larger conversation. For a bit more, see Section 1.3 of my connectomics post. Hope that helps :)

I think most humans agree with this statement in an "I emotionally want this" sort of way; the want has simply been sublimated via religion or other "immortality projects" (see The Denial of Death). The question is: why is it taboo, and is it taboo in the sense you describe (as a signal of low status)?

I think these elements are most at play in people's minds, from laypeople to rationalists:

  1. It's too weird to think about: Considering the possibility of a strange AI-powered world where either complete extinction or immortality are possible feels "unreal". Our instinct that e
... (read more)

That's very true, but there are two reasons why a company may not be inclined to release an extremely capable model:
1. Safety risk: someone uses a model and jailbreaks it in some unexpected way; the risk of misuse is much higher with a more capable model. OpenAI had GPT-4 for 9-10 months before releasing it, spending that time on RLHF and even lobotomizing it into being safer. The Summer 2022 internal version of GPT-4 was, according to Microsoft researchers, more generally capable than the released version (as evidenced by the draw-a-unicorn test). This needed delay a... (read more)

The major shift over the next 3 years will be that, as a rule, top AI labs will not release their best models. I'm certain this has somewhat been the case for OpenAI, Anthropic and Google for the past year. At some point, full utilization of a SOTA model will be a strategic advantage the companies reserve for their own tactical purposes. The moment a model can net $X of value per output/inference run at a cost below $(X - Y), where Y represents the marginal labor/maintenance/averaged-risk cost of each run's output, no company would be advantaged by releasing the model for anyone's use but its own. This closed-source event horizon I imagine will occur sometime in late 2024.

Insub
Not sure I understand; if model runs generate value for the creator company, surely they'd also create value that lots of customers would be willing to pay for. If every model run generates value, and there's ability to scale, then why not maximize revenue by maximizing the number of people using the model? The creator company can just charge the customers, no? Sure, competitors can use it too, but does that really override losing an enormous market of customers?
Tomás B.
This is a very good, and very scary, point: another thing that could provide at least the appearance of a discontinuity. One symptom of this scenario would be a widespread, false belief that "open source" models are SOTA. It might be good to brainstorm other symptoms to prime ourselves to recognize when we are in this scenario: complete hiring freezes or massive layoffs at the firms in question, aggressive expansion into previously unrelated markets, etc.
Daniel Kokotajlo
Related previous discussion:
  • Soft Takeoff Can Still Lead to Decisive Strategic Advantage (AI Alignment Forum)
  • Review of Soft Takeoff Can Still Lead to DSA (AI Alignment Forum)

The thing about writing stories that are analogies to AI is: how far removed from the specifics of AI and its implementations can you make the story while still preserving the essential elements that matter with respect to the potential consequences? This speaks, perhaps, to the persistent doubt and dread we may feel in a future awash in the bounty of a seemingly perfectly aligned ASI. We would be waiting for the other shoe to drop. What could any intelligence do to prove its alignment, in any hypothetical world, when it is not bound to its alignment criteria by tangible factors?

This reminds me of the comment about how effective LLMs will be for mass-scale censorship.

IMO a lot of claims of imposter syndrome are implicit status signaling: announcing that your biggest worry is that you may just be a regular person. Do cashiers at McDonald's have imposter syndrome and believe they aren't really, at heart, McDonald's cashiers, but should actually be medium-high-6-figure ML researchers at Google? Such an anecdote may provide comfort to a researcher at Google, because the ridiculousness of the premise will remind them of the primacy of the way things have settled in the world. Of course they belong in their ... (read more)

followthesilence
Imposter syndrome ≠ your "biggest worry" being that you may just be a regular person.
Thane Ruthenis
... I think so, yes. It would feel like they're just pretending like they know how to deal with customers, that they're just pretending to be professional staffers who know the ins and outs of the establishment, while in fact they just walked in from their regular lives, put on a uniform, and are not at all comfortable in that skin. An impression that they should feel like an appendage of a megacorporation, an appendage which may not be important by itself, but is still part of a greater whole; while in actuality, they're just LARPing being that appendage. An angry or confused customer confronts them about something, and it's as if they should know how to handle that off the top of their head, but no, they need to scramble and fiddle around and ask their coworkers and make a mess of it. Or, at least, that's what I imagine I'd initially feel in that role.

Some factors I've noticed that increase the likelihood some fringe conspiracy theory is believed:

  1. Apparent unfalsifiability: nothing a layperson could do within their immediate means, without insider knowledge or scientific equipment, could disprove the theory; the mainstream truth has to be taken on trust in powerful institutions. Works with stochastic/long-term health claims or claims of some hidden agenda perpetrated by a secret cabal.
  2. Complexity Reduction: The claim takes some highly nuanced, multifaceted difficult domain and simplifies its cause to on
... (read more)
TAG
Yet more: Status grabs through having arcane knowledge unknown to the sheeple. Resentment and excuses: you're not a failure, because They control everything, so you never had a chance.

Assuming you have a >10% chance of living forever, wouldn't that necessitate avoiding all chance of accidental death, to minimize the "die before AGI" section? If you assume AGI is inevitable, then one should simply maximize risk aversion to prevent cessation of consciousness, or at least permanent information loss of one's brain.

ImmortalityOrDeathByAGI
For a perfectly selfish actor, I think avoiding death pre-AGI makes sense (as long as the expected value of a post-AGI life is positive, which it might not be if one puts a lot of probability mass on s-risks). Every micromort of risk you incur (for example, by skiing for one day) decreases the probability you live in a post-AGI world by roughly 1/1,000,000. So one can ask oneself, "would I trade this (micromort-inducing) experience for one millionth of my post-AGI life?", and I think the answer a reasonable person would give in most cases is no.

The biggest crux is how much one values one millionth of their post-AGI life, which comes down to cruxes like its length (it could be billions of years!) and its value per unit time (which could be very positive or very negative). If I expect to live for a million years in a post-AGI world where I expect life to be much better than the life I'm leading right now, then skiing for a day would take roughly one year away from my post-AGI life in expectation. I definitely don't value skiing that much.

This gets a bit complicated for people who are not perfectly selfish, as there are cases where one can trade micromorts for happiness, happiness for productivity, and productivity for impact on other people. So for instance, someone who works on AI safety and really likes skiing might find it net-positive to incur the micromorts, because the happiness gained from skiing makes them better at AI safety, and them being better at AI safety has huge positive externalities that they're willing to trade their lifespan for. In effect, they would be decreasing the probability that they themselves live to AGI, while increasing the probability that they and other people (of which there are many) survive AGI when it happens.
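The expected-value arithmetic in this reply can be sketched directly (the one-micromort figure for a day of skiing and the million-year lifespan are the comment's illustrative assumptions, not established numbers):

```python
def expected_days_lost(micromorts, post_agi_years):
    """Expected post-AGI lifespan forfeited by accepting a small risk of death.

    One micromort is a one-in-a-million chance of dying, so it forfeits
    one millionth of the expected post-AGI life.
    """
    return micromorts * 1e-6 * post_agi_years * 365

# Illustrative assumptions: a day of skiing is on the order of one
# micromort, and the post-AGI life lasts a million years.
days = expected_days_lost(1, 1_000_000)
print(f"{days:.0f} days ≈ 1 year of expected post-AGI life")
```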
Nathan Helm-Burger
If you think the coming of AGI is inevitable, but you think that surviving AGI is hard and you might be able to help with it, then you should do everything you can to make the transition to a safe AGI future go well. Including possibly sacrificing your own life, if you value the lives of your loved ones in aggregate more than your life alone. In a sense, working hard to make AGI go well is 'risk aversion' on a society-wide basis, but I'd call the attitude of the agentic actors in this scenario more one of 'ambition maximizing' rather than 'personal risk aversion'.
Answer by pathos_bot

Whatever the probability of AGI in the reasonably near future (5-10 years), the probability of societal shifts due to the implementation of highly capable yet sub-AGI AI is strictly higher. Regardless of where AI "lands" in terms of slowing progress (if we do see an AI winter/fall), applying even just the systems that exist today, were technological progress to stop, is enough to produce a world different from ours on the same order of magnitude as the world AGI would bring.

I think it's almost impossible at this point to argue against the value of foresight regarding the rise of dumb (relative to AGI) but highly capable AI.

I've often thought that seniority/credential-based hierarchies are stable and prevalent both because they benefit those already in power and because they provide a defined, predictable path for low-status members to become high-status. One is more motivated to contribute to and support a system that guarantees them high status after X years if they are of middling competence, rather than a system that requires them to be among the best at some quantifiable metric. The longer someone spends in a company, the more invested they become in their relative position in the... (read more)