All of stavros's Comments + Replies

stavros

I don't like the thing you're doing where you're eliding all mention of the actual danger AI Safety/Alignment was founded to tackle - AGI having a mind of its own, goals of its own, that seem more likely to be incompatible with/indifferent to our continued existence than not.

Everything else you're saying is agreeable in the context you're discussing it, that of a dangerous new technology - I'd feel much more confident if the Naval Nuclear Propulsion Program (Rickover's people) was the dominant culture in AI development.
Albeit I have strong doubts about the... (read more)

stavros

What is true is already so / It all adds up to normality

What you've lost isn't the future, it's the fantasy.

What remains is a game that we were born losing, where there may be few moves left to make, and where most of us most of the time don't even have a seat at the table.

However, it is a game with very high variance.
It is a game where world shaping things happen regularly due to one person getting lucky (right person, right place, right time, right idea etc).

And one thing I've noticed in people who routinely excel at high variance games - e.g. Poker, MTG... (read more)

Noosphere89
This point would be really helpful for everyone. That said, I'd dispute this claim here: At least under the common conception of fantasy, this is an extremely strong claim, because you are effectively claiming that the good future in Ben Pace's head could never have been realized, and I see no reason to conclude this from an epistemic perspective at all, unless you are massively overconfident (even if you do have reasonably high doom probabilities, this statement is not true.) More generally, it's known that it does not always add up to normality, see here: https://www.lesswrong.com/posts/74crqQnH8v9JtJcda/egan-s-theorem#oZNLtNAazf3E5bN6X
stavros

I woke up this morning thinking 'would be nice to have a concise source for the whole zinc/colds thing'. This is amazing.

I help run an EA coliving space, so I started doing some napkin math on how many sick days you'll be saving our community over the next year. Then vaguely extrapolated to the broader lesswrong audience who'll read your post and be convinced/reminded to take zinc (and given decent guidance for how to use it effectively).

I'd guess at minimum you've saved dozens of days over the next year by writing this post. That's pretty cool. Thank you <3
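The napkin math went roughly like this (a toy sketch; every number below is an assumption of mine, not data from the post):

```python
# Napkin math: sick days saved over a year by the zinc post.
# Every number here is an assumption for illustration, not measured data.
coliving_residents = 20          # assumed size of our community
lw_readers_who_adopt = 200       # assumed readers convinced by the post
adoption_rate = 0.5              # assumed fraction who actually follow through
colds_per_person_per_year = 2    # typical adult average
days_saved_per_cold = 1.0        # assumed benefit of properly-timed zinc

people = coliving_residents + lw_readers_who_adopt
days_saved = people * adoption_rate * colds_per_person_per_year * days_saved_per_cold
print(f"~{days_saved:.0f} sick days saved per year")  # ~220 with these assumptions
```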

stavros

To the extent that anecdata is meaningful:

I have met somewhere between 100 and 200 AI Safety people in the past ~2 years; people for whom AI Safety is their 'main thing'.

The vast majority of them are doing tractable/legible/comfortable things. Most are surprisingly naive; have less awareness of the space than I do (and I'm just a generalist lurker who finds this stuff interesting; not actively working on the problem).

Few are actually staring into the void of the hard problems; where hard here is loosely defined as 'unknown unknowns, here be dragons, where do I... (read more)

TsviBT
This isn't clear to me, where the crux (though maybe it shouldn't be) is "is it feasible for any substantial funders to distinguish actually-trying research from other".
stavros

Thanks for linking this post. I think it has a nice harmony with Prestige vs Dominance status games.

I agree that this is a dynamic that is strongly shaping AI Safety, but would specify that it's inherited from the non-profit space in general - EA originated with the claim that it could do outcome-focused altruism, but... there's still a lot of room for improvement, and I'm not even sure we're improving.

The underlying dynamics and feedback loops are working against us, and I don't see evidence that core EA funders/orgs are doing more than pay lip service to this problem.

stavros

> Something in the physical ability of the top-down processes to control the bottom-up ones is damaged, possibly permanently.
>
> Metaphorically, it's like the revolting parts don't just refuse to collaborate anymore; they also blow up some of the infrastructure that was previously used to control them.

This is scary; big if true, it would significantly change my own personal strategies and those I endorse to others - a switch from focusing on recovery to rehabilitation/adaptation.

I'd be grateful if you can elaborate on this part of your model and/or point me toward relevant material elsewhere.

Kaj_Sotala
Mostly just personal experience with burnout and things that I recall hearing from others; I don't have any formal papers to point at. Could be wrong.
stavros

> Meek people (like me), may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on -- there are many social risks to being public with your arguments and thought process

2 of the 3 'risks' you highlighted are things you have control over; you are an active participant in your feelings of shame and embarrassment[1], they are strategies 'parts' of you are pursuing to meet your needs, and through inner work[2][3] you can stop re... (read more)

stavros

> The only remedy I know of is to cultivate enjoying being wrong. This involves giving up a good bit of one's self-concept as a highly intelligent individual. This gets easier if you remember that everyone else is also doing their thinking with a monkey brain that can barely chin itself on rationality.

Some thoughts:

I have less trouble with this than most, and the areas where I do notice it arising lead me toward an interesting speculation.

I'm status blind: I very rarely, and mostly only when I was much younger, worry about looking like an idiot/failing... (read more)

stavros

I am very confused.

My first thought when reading this was 'huh, no wonder they're getting mixed results - they're doing it wrong'.

My second thought when returning to this a day later: good - anything I do to contribute to the ability to understand and measure persuasion is literally directly contributing to dangerous capabilities.

Counterfactually, if we don't create evals for this... are we not expected to notice that LLMs are becoming increasingly persuasive? More able to model and predict human psychology?

What is actually the 'safety' case for this research? What theory of change predicts this work will be net positive?

Lennart Finke
Good point, and I was conflicted whether to put my thoughts about this at the end of the post. My best theory is that increased persuasion ability looks something like "totalitarian government agents doing solid scaffolding on open-source models to DM people on Facebook". We will see that persuasive agents get better, but not know why and how. As stated in the introduction, persuasion detection is dangerous, but one of the few capabilities that could also be used defensively (i.e. detecting persuasion in an incoming email -> displaying a warning in the UI and offering to rephrase). In conclusion, I definitely agree that we should consider closed-sourcing any improvements upon the above baseline and only show them to safety orgs instead. Some people at AISI I have talked to while working on persuasion are probably interested in this.
stavros

Re: 2

Most promising way is just raising children better.

See (which I'm sure you've already read): https://www.lesswrong.com/posts/CYN7swrefEss4e3Qe/childhoods-of-exceptional-people

Alongside that though, I think the next biggest leverage point would be something like nationalising social media and retargeting development/design toward connection and flourishing (as opposed to engagement and profit).

This is one area where, if we didn't have multiple catastrophic time pressures, I'd be pretty optimistic about the future. These are incredibly high impact and t... (read more)

Viliam
I believe that we could raise children much better; however, even in the article you linked: unfortunately, in the current political climate, discussing intelligence is taboo. I believe that optimal education for gifted children would be different from optimal education for average children (however, both could - and should - be greatly improved over what we have now), which unfortunately means that debates about improving education in general are somewhat irrelevant for improving the education of the brightest (who presumably could solve AI alignment one day).

Sometimes this is a chicken-and-egg problem: the stupid things happen because people are stupid (the ones who do the things, or make decisions about how the things should be done), but as long as the stupid things keep happening, people will remain stupid. For example, we have a lot of superstition, homeopathy, conspiracy theories, and similar, which if they could somehow magically disappear overnight, people probably wouldn't reinvent them, or at least not quickly. These memes persist because they spread from one generation to another. Here, the reason we do the stupid thing is that there are many people who sincerely and passionately believe that the stupid thing is actually the smart and right thing.

Another source of problems is that with average people, you can't expect extraordinary results. For example, most math teachers suck at math and at teaching. As a result, we get another generation that sucks at math. The problem is, we need so many math teachers (at elementary and high schools) that you can't simply decide to only hire the competent ones -- there would not be enough teachers to keep the schools running.

Then we have all kinds of political mindkilling and corruption, when stupid things happen because they provide some political advantage for someone, or because the person who is supposed to keep things running is actually more interested in extracting as much rent as possible. Yeah, I wish
[anonymous]
I highly doubt this would be very helpful in resolving the particular concerns Habryka has in mind. Namely, a world in which:

  • very short AI timelines (3-15 years) happen by default unless aggressive regulation is put in place, but even if it is, the likelihood of full compliance is not 100% and the development of AGI can be realistically delayed by at most ~1/2 generations before the risk of at least one large-scale defection having appeared becomes too high, so you don't have time for slow cultural change that takes many decades to take effect
  • the AI alignment problem turns out to be very hard and basically unsolvable by unenhanced humans, no matter how smart they may be, so you need improvements that quickly generate a bunch of ultra-geniuses that are far smarter than their "parents" could ever be
Morpheus
Raising children better doesn't scale well. Neither in how much oomph you get out of it per person, nor in how many people you can reach with this special treatment.
stavros

Is there anything useful we can learn from Crypto ASICs as to how this will play out? And specifically, how to actually bet on it?

lemonhope
I think the main way to bet is to find some equity and buy it. Might be hard to find.
stavros

Replying to this because it seems a useful addition to the thread; assuming OP already knows this (and more).

1.) And none of the correct counterplays are 'look, my opponent is cheating/look, this game is unfair'. (Scrub mindset)

2.) You know what's more impressive than winning a fair fight? Winning an unfair one. While not always an option, and usually with high risk:reward, beating an opponent who has an asymmetric situational advantage is hella convincing; it affords a much higher ceiling (relative to a 'fair' game) to demonstrate just how much better than your opponent you are.

It's an interesting framework, I can see it being useful.


I think it's more useful when you consider both high-decoupling and low-decoupling to be failure modes; more specifically: when one is dominant and the other is neglected, you reliably end up with inaccurate beliefs.

You went over the mistakes of low-decouplers in your post, and provided a wonderful example of a high-decoupler mistake too!

> High decouplers will notice that, holding preferences constant, offering people an additional choice cannot make them worse off. People will only take the choice if

... (read more)

> I think future technology all has AI as a pre-requisite?

My high conviction hot take goes further: I think all positive future timelines have AI as a pre-requisite. I expect that, sans AI, our future - our immediate future: decades, not centuries - is going to be the ugliest, and last, chapter in our civilization's history.

stavros

I have been in the position of trying to moderate a large and growing community - it was at 500k users last I checked, although I threw in the towel around 300k - and I know what a thankless, sisyphean task it is.

I know what it is to have to explain the same - perfectly reasonable - rule/norm again and again and again.

I know what it is to try to cultivate and nurture a garden while hordes of barbarians trample all over the place.

But...

If it ain't broke, don't fix it.

I would argue that the majority of the listed people penalized are net contributors to lessw... (read more)


Fine. You win. Take your upvote.

Big fan of both of your writings, this dialogue was a real treat for me.

I've been trying to find a satisfying answer to the seeming inverse correlation of 'wellbeing' and 'agency' (these are very loose labels).

You briefly allude to a potential mechanism for this[1]

You also briefly allude to another mechanism with explanatory power for the inverse[2] - i.e. that while it might seem an individual is highly agentic, they are in fact little more than a host for a highly agentic egregore

I'm engaged in that most quixotic endeavour of actually trying to save... (read more)

Kaj_Sotala
Thanks! Glad you liked it.

I think that the likely impact on agency is complicated. One question is the extent to which your current agency is driven by something like pain avoidance. @Matt Goldenberg has a nice concept of a mode of motivation he calls "the self-loathing monster", where one effectively motivates themselves by stacking on more fear/pain of failure to overcome the fear/pain of doing something. A classic example would be procrastinating until just before the deadline, and then at the last moment getting an urgency to complete the thing and doing it at the last moment while finding everything very uncomfortable.

The more strongly one's motivation is built like this, the more likely it is that there will be a loss of agency after the sources of pain are removed, as one hasn't developed positive forms of motivation that could pick up the slack when the negative forms of motivation are removed. That's not to say that such a person would be doomed to a lifetime of non-agency! It's possible to learn positive motivation, but it's going to take time. Possibly several years.

On the other hand, Tucker Peck has a nice talk ("Meditation and Social Justice" on this page) about the way that many important things are really hard, and that if you need to see success right away, you may have little choice but to burn out. In that kind of a situation, a more enlightened-y mindset may be exactly what you need:
romeostevensit
Neuroticism and conscientiousness are somewhat correlated in the literature and indeed it was my experience that boosting conscientiousness boosted neuroticism somewhat. Being able to spin these dials feels useful. Being very outcome focused rather than input focused is also a recipe for a lot of stress that doesn't necessarily seem very correlated to good outcomes ime. Ofc we want some tracking of outputs as a feedback to inputs so there's a balance to strike there. Have you investigated the methods of past people you admire who tried for positive impact?

> I don't think there's anything wrong with cultivating a warrior archetype; I strive to cultivate one myself.

Would love to read more on this.

ChristianKl
King, Warrior, Magician, Lover: Rediscovering the Archetypes of the Mature Masculine is the classic reading recommendation for archetypes.
stavros

Hmmm, where to start. Something of a mishmash of thought here.

Actually a manager, not yet clear if I'm particularly successful at it. I certainly enjoy it and I've learned a lot in the past year.

Noticing Panic is a great Step 0, and I really like how you contrast it to noticing confusion.

I used to experience 'Analysis Paralysis' - too much planning, overthinking, and zero doing. This is a form of perfectionism, and is usually rooted in fear of failure.

I expect most academics have been taught entirely the wrong (in the sense of https://www.lesswrong.com/pos... (read more)

Mo Putera
Great comment. I also like Nate Soares' Dive in:

Re: average age of authors/laureates and average team size

Are these data adjusted for demographic changes? I.e., aging populations in most Western countries, and general population growth.

> I think it's a mistake to import "democracy" at the vision level. Vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Also, Deutsch was writing about this in "The Beginning of Infinity" in the chapter about democracy.
>
> We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference"), but not decisions (plans, engineering designs, visions). These should be created by a coherent cr

... (read more)

> Joshua Williams created an initial version of a metacrisis map

It's a good presentation, but it isn't a map. 

A literal map of the polycrisis[1] can show (see the toy sketch after this list):

  • The various key facets (pollution, climate, biorisk, energy, ecology, resource constraints, globalization, economy, demography etc etc)
  • Relative degrees of fragility / timelines (e.g. climate change being one of the areas where we have the most slack)
  • Many of the significant orgs/projects working on these facets, with special emphasis placed on those that are aware of the wider polycrisis
  • Many
... (read more)
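A toy sketch of the data structure I have in mind - facets and orgs as nodes in a graph, with fragility and awareness as attributes; every entry below is an illustrative placeholder:

```python
# Toy sketch of a polycrisis map as a graph: facet nodes carry a rough
# slack/fragility rating, org nodes link to the facets they work on.
# All entries are illustrative placeholders, not a real dataset.
import networkx as nx

G = nx.Graph()

# Facet nodes with an assumed slack rating (higher = more time to act).
facets = {"climate": 3, "biorisk": 1, "energy": 2, "ecology": 2, "economy": 2}
for name, slack in facets.items():
    G.add_node(name, kind="facet", slack=slack)

# A hypothetical org, flagged for polycrisis-awareness, linked to its facets.
G.add_node("ExampleOrg", kind="org", polycrisis_aware=True)
G.add_edge("ExampleOrg", "biorisk")
G.add_edge("ExampleOrg", "climate")

# Interdependencies between facets (illustrative).
G.add_edge("energy", "economy")
G.add_edge("climate", "ecology")

# Query: which facets are most fragile (least slack)?
fragile = sorted((d["slack"], n) for n, d in G.nodes(data=True) if d["kind"] == "facet")
print(fragile[:2])  # [(1, 'biorisk'), (2, 'ecology')]
```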

The polycrisis has been my primary source of novelty/intellectual stimulation for a good long while now. Excited to see people explicitly talking about it here.

With regard to the central proposition:

> I think if there were A Plan to make the world visibly less broken, made out of many components which are themselves made out of components that people could join and take responsibility for, this would increase the amount of world-fixing work being done and would meaningfully decrease the brokenness of the world. Further, I think there's a lot of Common Cause

... (read more)
Roman Leventov
Joshua Williams created an initial version of a metacrisis map and I suggested to him a couple of days ago to make the development of such a resource more open, e.g., to turn it into a Github repository.

Do you mean that it's possible to earn by betting long against the current market sentiment? I think this is wrong for multiple reasons, but perhaps most importantly, because the market specifically doesn't measure how well we are faring on a lot of components of the polycrisis -- e.g., the market would be great if all people were turned into addicted zombies. Secondly, people don't even try to make predictions in the stock market anymore -- it's turned into a completely irrational valve of liquidity that is moved by Elon Musk's tweets, narratives, and memes more than by objective factors.
Roman Leventov
I posted some parts of my current visions of 1) and 2) here and here. I think these, along with the Gaia Network design that we proposed recently (the Gaia Network is not "A Plan" in its entirety, but a significant portion of it), address @Vaniver's and @kave's points about realism and sociological/psychological viability.

I think it's a mistake to import "democracy" at the vision level. Vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Also, Deutsch was writing about this in "The Beginning of Infinity" in the chapter about democracy.

We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference"), but not decisions (plans, engineering designs, visions). These should be created by a coherent creative entity. The same idea is evident in the design of Open Agency Architecture.

If I understand correctly what you are gesturing at here, I think that some high-level agents in the Gaia Network should become a trusted gauge for the "planetary health metrics" we care about.

There's a guy called Rafe Kelley on YouTube who has a fairly good answer to this, which I'm going to attempt to summarize from memory because I can't point you toward any reasonable sources (I heard him talking about it in a 1h+ conversation with everyone's favourite boogeyman, Jordan Peterson).

His reasoning goes thus:
1.) We need play in order to develop: play teaches us how to navigate Agent - Arena relationships

This speaks to the result of playground injuries increasing despite increased supervision - kids aren't actually getting to spend enough time pla... (read more)

mako yass
It concerns me that the best thing the education system ever did to teach negotiative social skills was leaving kids to their own devices. I think if we acknowledge the importance of conflict play, we can build play environments that would foster presently exceedingly rare levels of social robustness: https://www.lesswrong.com/posts/bF353RHmuzFQcsokF/peacewagers-so-far
Ainsley
"This speaks to the result of playground injuries increasing despite increased supervision - kids aren't actually getting to spend enough time playing in the physical Arena, their capability to navigate it is underdeveloped because of excess indoor time and excess supervision." Anecdotal evidence from yoga and movement teachers would offer tangential support for this. They describe in children (and adults!) a trend to reduced somatic intelligence, core strength, proprioceptive awareness, and ability to assess risk.
Answer by stavros

Depending on the kind of support they're looking for https://ceealar.org could be an option. At any one time there are a handful of people staying there working independently on AI Safety stuff.

Wholly agree with the 'if it works it works' perspective.

Two minor niggles are worth mentioning:

  1. As I understand it, eating any amount will signal the body to stop fasting. The overnight fast is the only one most people have and it seems to be quite important for long term metabolic health.
  2. Your body has several inputs to its internal clock, and the two most significant ones are light and food. So there's a pathway where this 'solution' might also be reinforcing the problem.

Niggles aside, if it works it works. And nothing is more important than sleep for he... (read more)

Elizabeth
I agree concerns about short-term fixes reinforcing the problem long term are a very big deal, and that either of the mechanisms you point to could create that effect.

But... [basically all of the following is nullified because you took care to specify sleep was the most important thing. I'm using this as an opportunity to discuss patterns around health advice specifically because you avoided the problems, so it doesn't feel like picking on someone. Which I realize is annoying, and I apologize for that]

There's a pattern in online health advice. Most of it is written by people who are putting a lot of thought and energy into optimizing for very high performance, or are very sick and putting that same amount of effort into becoming just okay. That advice is hard to do well because it's an amount of effort most people won't put in, and because optimal input at that level varies a lot from person to person.

It's really easy for someone lower on the health ladder to read advice for optimizers and get discouraged or overwhelmed so they do nothing at all. "Okay I need to eat more protein... but not from pork, that might cause an immune reaction to myself... no cows because of global warming... fake meat is artificial and fortified and has the dreaded seed oils... eggs cause cholesterol... fish has mercury... legumes have phytins that leach nutrients... oh look a cookie"

My suggestions tend to focus on low hanging fruit for people who aren't putting in much effort. Things that pay for themselves quickly, have strong feedback loops, and don't vary that much from person to person. It's possible I should start specifying this in the relevant posts.

My guess is that it is healthier to arrange one's diet to avoid needing 3 AM snacks, for the reasons you mention, but I don't know how to tell people to do that. I have some ideas, but they're vague, effortful to implement, and harder to measure their results. My guess is there are lots of people who will be helped by the advice

So I'm basically the target audience for the OP - I read a lot, of all kinds of stuff, and almost zero papers. I'm an autodidact with no academic background.

I appreciated the post. I usually need a few reminders that 'this thing has value' before I finally get around to exploring it :)

I would say, as the target audience, I'm probably representative when I say that a big part of the reason we don't read papers is a lack of access, and a lack of discovery tools. I signed up for Elicit a while back, but as above - haven't gotten around to using it yet :D

In my experience the highest epistemic standard is achieved in the context of 'nerds arguing on the internet'. If everyone is agreeing, all you have is an echo chamber.

I would argue that good faith, high effort contributions to any debate are something we should always be grateful for if we are seeking the truth.

I think the people who would be most concerned with 'anti-doom' arguments are those who believe it is existentially important to 'win the argument/support the narrative/spread the meme' - that truthseeking isn't as important as trying to embed a cu... (read more)

Re: EMH is false, long GOOG

I wish you'd picked a better example.

... but wait it gets worse

tl;dr LLMs make search cost more, much more, and thus significantly threaten GOOG's bottom line.
MSFT knows this, and is explicitly using Bing Sydney as an attack on GOOG.

I'm not questioning the capabilities of GOOG's AI department, I'm sure Deepmind have the shiniest toys.

But it's hardly bullish for their share price if their core revenue stream is about to be decapitated or perhaps even entirely destroyed - ad based revenue has been on shaky ground for a while now, I... (read more)
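To show the shape of that argument, a toy margin calculation - every unit cost below is a made-up placeholder, not a real GOOG number:

```python
# Toy unit economics: both cost figures are made-up placeholders chosen
# only to show the shape of the argument, not real numbers.
cost_per_search = 0.002     # hypothetical cost of serving a classic search ($)
cost_per_llm_answer = 0.02  # hypothetical cost of an LLM-generated answer ($)
revenue_per_query = 0.03    # hypothetical ad revenue per query ($)

print(f"margin per classic search: ${revenue_per_query - cost_per_search:.3f}")
print(f"margin per LLM answer:     ${revenue_per_query - cost_per_llm_answer:.3f}")
# With these placeholders, a 10x serving cost eats roughly two-thirds of the
# per-query margin before any change in revenue.
```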

Lech Mazur
People will spend much more time on Google's properties interacting with Bard instead of visiting reference websites from the search results. Google will also be able to target their ads more accurately because users will type in much more information about what they want. I'm bullish on their stock after the recent drop but I also own MSFT.

AI Therapy isn't the first domino to fall, AI Customer Service is (it's already falling).

95% of customer service humans can be replaced by a combination of Whisper+GPT; they (the humans) are already barely agentic, just following complex scripts. It's likely that the AI customer service will provide a superior experience most of the time (less wait times, better audio quality at a minimum, often more competent and knowledgeable too, plausibly capable of supporting many languages).
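A minimal sketch of the kind of pipeline I mean, assuming the OpenAI Python client; the support script, model name, and file path are all placeholders:

```python
# Minimal sketch: voice call -> Whisper transcript -> scripted LLM reply.
# Assumes the OpenAI Python client; SUPPORT_SCRIPT and the audio path are
# placeholders for a company's real script and call audio.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUPPORT_SCRIPT = (
    "You are a customer service agent for ExampleCo. "
    "Follow the refund policy: refunds within 30 days with receipt."
)

def handle_call(audio_path: str) -> str:
    # 1. Speech-to-text with Whisper.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # 2. Scripted response from the chat model.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SUPPORT_SCRIPT},
            {"role": "user", "content": transcript.text},
        ],
    )
    return reply.choices[0].message.content

# print(handle_call("incoming_call.wav"))  # text-to-speech back out is omitted
```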

Obviously huge cost savings so massive incentive for companies to replace hum... (read more)

Thanks for your post, just wanted to contribute by deconfusing ADHD a little (hopefully). I agree that you and OP seem to be agreeing more than disagreeing.

> So speaking from a pretty thorough ignorance of the topic itself, my guess based on my priors is that the problem-ness of ADHD has more to do with the combo of (a) taking in the culture's demand that you be functional in a very particular way combined with (b) a built-in incapability of functioning that way.

Correct. However that problem-ness is often a matter of survival/highly non-optional. ADHD can be... (read more)

Valentine
I found this super helpful. Thank you.

Gotcha. I don't claim to fully understand — I have trouble imagining the experience you're describing from the inside — but this gives me a hint.

FWIW, I interpret this as "Oh, so this kind of ADHD is a condition where your adaptive capacity is too low to avoid incurring adaptive entropy from the culture."
rpglover64
I did start with "I agree 90%." I raised ADHD because it was the first thing that popped into my mind where a chemical habit feels internally aligned, such that the narrative of the "addiction" reducing slack rang hollow. That has not actually been my experience, but I get the sense that my ADHD is much milder than yours. I also get the sense that your experience w.r.t. ADHD and slack is really common for anything that is kinda-sorta-disabilityish (this old post comes to mind, even though it doesn't explicitly mention it).

Thanks for this post, it was insightful and perfectly timed; I've been intermittently returning to the problem of trust for a while now and it was on my mind this morning when I found your post.

I think shared reality isn't just a 'warm fuzzies' thing, it's a vital component of cooperation.

I think it's connected with the trust problem; your ability to trust someone is dependent to some degree on a shared reality.

I think that these problems have been severely exacerbated by our current technologies and the social landscape they've shaped, but I'm also highly... (read more)

kdbscott
I agree about the cooperation thing. One addendum I'd add to my post is that shared reality seems like a common precursor to doing/thinking together. If I want to achieve something or figure something out, I can often do better if I have a few more people working/thinking with me, and often the first step is to 'get everyone on the same page'. I think lots of times this first step is just trying to shove everyone into shared reality. Partially because that's a common pattern of behavior, and partially because if it did work, it would be super effective. But because of the bad news where people actually have different experiences, cracks often form in the foundation of this coordinated effort. But I think if the team has common knowledge about the nature of shared reality and the non-terrible/coercive/violent way of achieving it (sharing understanding), this can lead to better cooperation (happier team members, less reality-masking, better map-sharing). I'm also not sure what you mean about the trust problem, maybe you mean the polls which claim that trust in government and other stuff has been on the decline? 
tamgent
What exactly is the trust problem you're referring to? Is it that you think people are not as trusting as they should be, in general?

To start with, I agree.

I really agree: about timescales, about the risks of misalignment, about the risks of alignment. In fact I think I'll go further and say that in a hypothetical world where an aligned AGI is controlled by a 99th percentile Awesome Human Being, it'll still end in disaster; homo sapiens just isn't capable of handling this kind of power.[1]

That's why the only kind of alignment I'm interested in is the kind that results in the AGI in control; that we 'align' an AGI with some minimum values that anchor it in a vaguely anthropocentric meme-... (read more)

andrew sauer
Maybe that's the biggest difference between me and a lot of people here. You want to maximize the chance of a happy ending. I don't think a happy ending is coming. This world is horrible and the game is rigged. Most people don't even want the happy ending you or I would want, at least not for anybody other than themselves, their families, and maybe their nation. I'm more concerned with making sure the worst of the possibilities never come to pass. If that's the contribution humanity ends up making to this world, it's a better contribution than I would have expected anyway.
Answer by stavros

At the outset, I'll say that the answer to 'should you have kids?' in general, is probably not. I'll also say that I've seen/had this discussion dozens of times now and the result is always the same: you're gonna do what you want to do and rationalize it however you need to. The genes win this fight 9 times out of 10.

If you're rich (if you reasonably expect to own multiple properties and afford all of life's luxuries for the rest of your life), it's probably okay - you won't face financial ruin and your children will be insulated from the worst of what's to... (read more)

the gears to ascension
karma strong upvote, agreement downvote. score of approximately 1 seems reasonable for this comment to me, though I expect you'll be karma downvoted again if you don't rephrase to be a bit kinder in verbal impact. I don't actually think you're wrong about the trajectory of things if we don't pull up. I think we can get out of the hole, but dang, we sure seem to be in one. Star Trek is not out of the question, but my take is things might get pretty bad before they get amazing.

So, two ideas:

> Our best evidence of what people truly feel and believe comes less from their words than from their deeds. Observers trying to decide what a man is like look closely at his actions... the man himself uses this same evidence to decide what he is like. His behavior tells him about himself; it is a primary source of information about his beliefs and values and attitudes.
>
> Writing was one sort of confirmin

... (read more)
Shmi
There is a difference between reading and writing. Semi-voluntarily writing something creates ownership of the content. Copying is not nearly as good. Using an AI to express one's thoughts when you struggle to express them might not be the worst idea. Having your own words overwritten by computer-generated ones is probably not fantastic, I agree.

Yep, big fan of Watts and will +1 this recommendation to any other readers.

Curious if you've read much of Schroeder's stuff? Lady of Mazes in particular explores, among many other things, an implementation of this tech that might-not-suck.

The quick version is a society in which everyone owns their GPT-Me, controls distribution of it, and has access to all of its outputs. They use them as a social interface - can't talk to someone? Send your GPT-Me to talk to them. They can't talk to you? Your GPT-Me can talk to GPT-Them and you both get the results fed back to you. etc etc.

[anonymous]
I have not. As a biologist, my mind goes elsewhere.  Specifically, to how viruses are by far the most common biological entity on Earth and how most coding sequences in your genome are selfish replicating reverse transcription apparatuses trying to shove copies of their own sequences anywhere they can get into.  Other examples abound, including how the very spliceosome itself in the eukaryotic transcription apparatus seems to be a defense against invasion of the eukaryotic genome by reverse-transcribing introns that natural selection could no longer purge once eukaryotic cell size rose and population size fell enough to weaken natural selection against mild deleterious mutations, but in the process entrenched those selfish elements into something that could no longer leave.

>Well, first, there are such things as energy technologies. The steam engine is a technology. Processes to create coke from coal, or to refine crude oil, are technologies. These technologies are what make all of that energy accessible and usable.

To quote my post:

>Certainly technology is involved in capture/extraction/utilisation. But... hmm there's a quote 'Labour without energy is a corpse, capital (substitute technology here) without energy is a sculpture'. 

And back to you (emphasis mine):

>I don't think this does answer the question, becaus... (read more)

jasoncrawford
The physical fact of hydrocarbons sitting in the ground is exogenous, yes. But that was true since before humans existed, so it doesn't explain progress. You need an explanation for why we didn't start using those fuels on a large scale until the 1700s or so. And the proximal explanation for that is technology.

So in your specific example of the threshing machine:

Surplus energy is required such that enough of the population are freed from subsistence and agriculture to specialize in other things.

Even more surplus energy is required for the creation/upkeep of cities, which are a prerequisite for technological innovation/growth (high density of different specialists living alongside each other, as well as a labour force for factories/mass production).

And the railroads that enabled the widespread distribution of threshing machines - obviously highly energy intensive,... (read more)

jasoncrawford
Your examples of energy usage enabling further economic growth are good ones, particularly the railroads, which absolutely depended on the ability to harness wood or coal for locomotion. But I disagree with how you interpret these examples and the rest of your analysis.

Re:

Well, first, there are such things as energy technologies. The steam engine is a technology. Processes to create coke from coal, or to refine crude oil, are technologies. These technologies are what make all of that energy accessible and usable. I don't know what it means for energy to be a "fundamental input."

When you say:

I don't think this does answer the question, because technological/industrial progress is what made that surplus energy available. It didn't just become available for some other reason. The surplus was created by progress itself. So it can't be used to explain progress. In short, surplus energy is not exogenous to technological or economic growth, it is endogenous.

Why doesn't your analysis account for energy at all?

jasoncrawford
Good question, probably because energy doesn't seem pivotal for the specific case of the threshing machine? It was clearly a crucial piece of infrastructure in many other cases. I consider it part of the technology flywheel, along with other fundamental enabling technologies such as precision manufacturing. There's a good argument that it is the most important of all such fundamental technologies.

(Apologies in advance if any/all of this is obvious to you)

Too much sleep is bad, too little sleep is bad. Sleep needs vary per person and throughout life but generally >6 hours, <9 hours is the range.

You don't really sleep in 'hours', you sleep in cycles (https://en.wikipedia.org/wiki/Sleep_cycle), so measuring in hours doesn't work very well.

If you wake up naturally sometime in that 6-9 hour window, and you sleep deeply through the night (smartwatches are good at measuring this), you're probably getting enough sleep.
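To make the cycles-not-hours point concrete (a toy calculation; the ~90-minute cycle length is a population average and varies per person):

```python
# Sleep comes in ~90-minute cycles; whole cycles matter more than raw hours.
CYCLE_MINUTES = 90  # population average; individual cycle length varies

for cycles in range(4, 7):  # 4-6 full cycles spans the 6-9 hour window
    hours = cycles * CYCLE_MINUTES / 60
    print(f"{cycles} cycles ~= {hours:.1f} hours")
# 4 cycles ~= 6.0 hours, 5 cycles ~= 7.5 hours, 6 cycles ~= 9.0 hours
```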

If you have reason to be conc... (read more)

https://www.goodreads.com/book/show/534755.A_Technique_for_Producing_Ideas?ac=1&from_search=true&qid=FeFvMKus2k&rank=1

+ Incorporating understanding of https://en.wikipedia.org/wiki/Flow_(psychology)

+ Drugs


If you're willing to accept 'on command' as 'something I spend days/weeks intentionally preparing/cultivating', then it seems like you're in luck.

Sorry if this is all old news; not what you were looking for.

stavros

Feel free to delete because this is highly tangential, but are you aware of Mark Solms' work (https://www.goodreads.com/book/show/53642061-the-hidden-spring) on consciousness, and the subsequent work he's undertaking on artificial consciousness?

I'm an idiot, but it seems like this is a different-enough path to artificial cognition that it could represent a new piece of the puzzle, or a new puzzle entirely - a new problem/solution space. As I understand it, AI capabilities research is building intelligence from the outside-in, whereas the consciousness model would be capable of building it from the inside-out.

Steven Byrnes
My 2¢—I read that book and I think it has minimal relevance to AGI capabilities & safety. (I think the ascending reticular activating system is best thought of as mostly “real-time variation of various hyperparameters on a big scaled-up learning-and-inference algorithm”, not “wellspring of consciousness”.)

https://en.wikipedia.org/wiki/Zhan_zhuang

Both meditation and exercise. A daily (1hr a day is the sweet spot), lifelong practice without end. Easy to learn, probably impossible for most of us to master but that's okay because mastery isn't the point.

The point is to strengthen and broaden the connection between mind and body, and the connections within your body itself - to relearn how to move with the whole body.

To learn how to be still, and yet relaxed instead of stiff.

The point is also, at least for me, to do something impossibly slow and hard every day. ... (read more)

I disagree, strongly. Not only do I believe this line of reasoning to be wrong, I believe it to be dangerously wrong. I believe downplaying and/or underestimating the role of energy in our economic system is part of why we find ourselves in the mess we're in today.

To reference Nate Hagens (https://www.youtube.com/watch?v=-xr9rIQxwj4) 
> We use the equivalent of 100 billion barrels of oil a year. Each barrel of oil can do the amount of work it would take 5 humans to do. There are 500 billion 'ghost' labourers in our society today.

(Back to me)
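A quick sanity check on those figures (quoted from memory, so treat both inputs as approximate):

```python
# Sanity check on the Hagens figures quoted above (from memory, so
# treat both inputs as approximate).
barrels_per_year = 100e9          # ~100 billion barrel-of-oil-equivalents per year
human_equivalents_per_barrel = 5  # quoted work-equivalence per barrel

ghost_laborers = barrels_per_year * human_equivalents_per_barrel
print(f"{ghost_laborers:,.0f} ghost laborers")  # 500,000,000,000, i.e. 500 billion
```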

You cannot ea... (read more)

T431
I would like to think about this more, but thank you for posting this and switching my mind from System I to System II

The more powerful a tool is, the more important it is that the tool behaves predictably.

A chainsaw that behaves unpredictably is very, very, dangerous.

AI is, conservatively, thousands of times more powerful than a chainsaw.

And unlike an unpredictable chainsaw, there is no guarantee we will be able to turn an unpredictable AI off and fix it or replace it.

It is plausible that the danger of failing to align AI safely - to make it predictable - is such that we only have one chance to get it right.

Finally, it is absurdly cheap to make massive progress in AI safety.

stavros

This was wonderful; the post that finally got me to create an account here. I got quite a few sensible chuckles and a few hearty laughs out of your list. I think we've been reading similar books recently (Graeber's Dawn of Everything? :) )

My contribution is to remind the participants that a somewhat recurring theme (something of an original in western philosophy - i.e. Socrates) in history is of wise people enjoying themselves too much and getting murdered by the people who'd grown increasingly scared/estranged/horrified by them. 

Heretical thinking is fun, but in the real world there are people who would harm you for exposing them to it.
Practice safe heresy, kids :)

rogersbacon
Also, check out the substack :) - https://rogersbacon.substack.com/
rogersbacon
Thanks! Yup, just finished and enjoyed DoE :) A good reminder; I'll start getting worried when discussion of these heresies moves beyond niche internet message boards.