On the idea of "we can't just choose not to build AGI". It seems like much of the concern here is predicated on the idea that so many actors are not taking safety seriously, so someone will inevitably build AGI when the technology has advanced sufficiently.
I wonder whether struggles with AIs that are strong enough to cause a disaster but not strong enough to win instantly might change this perception? I can imagine there being very little gap, if any, between those two types of AI if there is a hard takeoff, but to me it seems quite possible for there b...
I feel like a lot of the angst about free will boils down to conflicting intuitions.
The way to reconcile these intuitions is to recognize that yes, all the decisions you make are in a sense predetermined, but a lot of what is determining those decisions is who you are and what sort of thing you would do in a particular circumstance. You are making decisions, that ex...
I of course don’t have insider information. My stance is close to Buffett’s advice: “be fearful when others are greedy, and greedy when others are fearful”. I interpret that as meaning that markets tend to overreact, so if you go by fundamentals representing the value of a stock, you can potentially outperform the market in the long run. To your questions: yes, disaster may really occur, but in my opinion these risks are not sufficient to pass up the value here. I’ll also note that Charlie Munger has been acquiring a substantial stake in BABA, which makes me more confident in its value at its current price.
Alibaba (BABA) - the stock price has been pulled down by fear about regulation, delisting, and most recently instability in China as its zero-COVID policy fails. However, as far as I can tell, the price is insanely low for the amount of revenue Alibaba generates and the market share it holds in China.
Current bioethics norms strongly condemn this sort of research, which may make it challenging to pursue in the nearish term. The consensus is strongly against it, which will make acquiring funding difficult, and any human CRISPR editing is completely off the table for now. For example, He Jiankui CRISPR-edited babies in China to make them less susceptible to HIV and went to prison for it.
He Jiankui had issues beyond just doing something bioethically controversial. He didn't make the intended edits cleanly in any embryo (instead there were issues with off-target edits and mosaicism). If I remember correctly, he also misled the parents about the nature of the intervention.
All in all, if you look into the details of what he did, he doesn't come out looking good from any perspective.
I’m not sure the problem in biology is decoding, at least not in the same sense it is with neural networks. I see the main difficulty in biology as more one of mechanistic inference, where a major roadblock may be getting better measurements of what is going on in cells over time, rather than some algorithm that’s just going to overcome the fact that biological data gives you both very high levels of molecular noise and single snapshots in time that are difficult to place in context. With a neural network you have the parameters and it seems reaso...
In general, the observation from working in the field is that if you have a simple metric, people will figure out how to game it. So you need to build in a lot of safeguards, and you need to evolve all the time as the spammers/abusers evolve. There's no end point, no place where you think you're done, just an ever-changing competition.
That's what I was trying to point at in regards to the problem not being patchable. It doesn't seem like there is some simple patch you can write, and then be done. A solution that would work more permanently seems to ha...
In your opinion, would a resurrection/afterlife change this equation at all?
Yes, an afterlife transforms death (at least relatively low-pain deaths) into something that's really not that bad. It's sad in the sense that you won't see a person for a while, but that's not remotely on the level of a person being totally obliterated, which is my current interpretation of death, on the basis that I see no compelling evidence for an afterlife. Considering that one's mental processes continuing after the brain ceases to function would rely on some mechanism unknow...
I had a really hard time double cruxing this, because I don't actually feel at all uncertain about the existence of a benevolent and omnipotent god. I realized partway through that I wasn't doing a good job arguing both sides and stopped there. I'm posting this comment anyway, in case it makes for useful discussion.
You attribute to god both benevolence and omnipotence, which I think is extremely difficult to square with the world we inhabit, in which natural disasters kill and injure thousands, in which children are born with debilitating diseases, and good p...
Agree, I think the problem definitely gets amplified by power or status differentials.
I do think that people often forget to think critically about all kinds of things because their brain just decides to accept them on the 5-second level and doesn't promote the issue as needing thorough consideration. I find all kinds of poorly justified "facts" and advice in my mind because of something I read or something someone said that I failed to properly consider.
Even when someone does take the time to think about advice, though, I think it's easy for things to go wrong. The r...
The main thing people fail to consider when giving advice is that advice isn't what's wanted.
I fully agree, this post was trying to get at what happens when people do want advice and thus may take bad advice.
Advice comes with no warranty. If some twit injures themselves doing what I told them to (wrongly) then that's 100% on them.
I think in some cases this is a fair stance (though I would still like to prevent people from misapplying my advice if possible), but if you are in a position of power or influence over someone I'm not sure...
I think the metaphor of "fast-forwarding" is a very useful way to view a lot of my behavior. Having thought about this for a while though, I'm not sure fast-forwarding is always a bad thing. I find it can be mentally rejuvenating in a way that introspection is not (e.g. if I've been working for a long period and my brain is getting tired I can often quickly replenish my mental resources by watching a short video or reading a chapter of a fantasy novel after which I'm able to begin working again, whereas I find sitting and reflecting to still require some m...
Favorite technique: Argue with yourself about your conclusions.
By which I mean: if I have any reasonable doubt about some idea, belief, or plan, I split my mind into two debaters who take opposite sides of the issue, each of which wants to win, and I use my natural competitiveness to drive insight into the issue.
I think the most natural use of this would be investigating my deeply held beliefs and trying to get at their real weak points, but it is also useful for:
So to clarify, I think there is merit in his approach of trying to engineer solutions to age-related pathology. However, I do not think it will work for all aspects of aging right now. Aubrey believes that all the forms of damage caused by aging are problems that we can begin solving right now; I suspect that some are hard problems that will require a better understanding of the biological mechanisms involved before we can treat them.
So my position is that aging, like many fields, should be investigated both at the basic biology level and from the perspec
...As someone who works in biological science, I give the claim very little credence. I am very interested in Aubrey's anti-aging ideas, and when I bring up aging with colleagues, it is considered a problem that will not be solved for a long time. Public opinion usually takes 3 to 5 years to catch up to scientific consensus, and there is no kind of scientific consensus about this. That said, the idea of not having to get old does excite people a lot more than many other scientific discoveries, so it might percolate into the mainstream much...
I think a very interesting aspect of this idea is that it explains why it can be so hard to come up with truly original ideas, while it is much easier to copy or slightly tweak the ideas of other people. Slight tweaks were probably less likely to get you killed, whereas doing something completely novel could be very dangerous. And while it might have a huge payoff, everyone else in the group could then copy you (due to imitation being our greatest strength as a species) so the original idea creator would not have gained much of a comparative advantage in most cases.
I think a number of the example answers are mystifying meaning. In my view, meaning is simply the answer to the question "why is life worth living?". It is thus a very personal thing, what is meaningful for one mind may be utterly meaningless to another.
Yet as we are all human, there is significant overlap in the sorts of things that provide a sense of reason or gladness for being alive.
I will quote my favorite song, "The Riddle" by Five for Fighting, which gives two answers: "there's a reason for the world, you and I"...
This was very interesting. There seems to be a trade-off for these people between their increased happiness and the ability to analyze their mistakes and improve, so I am not sure I find it entirely attractive. I think there is a balance there, with some of the people studied being too happy to be maximally effective (assuming they have goals more important to them than their own happiness).
I think these are very important points. I have noticed some issues with having the right responses for social situations (especially laughing when it's not entirely appropriate), which is something I've been working on remedying by paying closer attention to when people expect a serious reaction.
The issue of ignoring problems also seems like something to look out for. Just because something does not make you feel bad should not mean you fail to learn from it. I think there is a fine balance between learning from mistakes and dwelling on them, wh...
I think the example with the lightbulbs and SAD is very important because it illustrates well that in areas that humanity is not especially prioritizing, one is much more justified in expecting civilizational inadequacy.
I think a large portion of the judgment of whether one should expect that inadequacy should be a function of how much work and money is being spent on a particular subject.
Great sequence, I've really enjoyed it.
And I definitely agree with this view of rationality. I think the idea of incremental successes emphasizes the need to track successes and failures over time, so that you can see where you did well and where you did poorly, and plan to make the coin come up heads more often in the future.
You don't build strength while you're lifting weight. You build strength while you're resting.
I think this phrase is particularly helpful as something to repeat to yourself when feeling the impulse to push through exhaustion when you know that you really ought to rest. I'll almost certainly be using it for that purpose when I'm feeling tempted to forget what I've learned.
Yeah, I think the biggest problem for me was that I felt deficient for failing to live up to the standard I set for myself. I sort of shunted those emotions aside, and I really fell out of a lot of habits of self-improvement and hard work for a time. So I would say the emotional fallout led to the most damaging part (losing good habits in the aftermath).
Thinking about tradeoffs in terms of tasks completed is a good idea as well, I'll try doing that more explicitly.
I definitely have had the experience of trying to live up to a standard and it feeling awful, which then inhibits the desire to make future attempts. I think that feeling indicates a need to investigate whether you're making things difficult for yourself. For example, I would often attempt to learn too many things at once and do too much work at once, because I thought the ideal person would basically be learning and working all the time. Then, when I felt myself breaking down, it sent my stress levels through the roof, because if I couldn't keep goi
The general idea for me is using the heuristics to form the goals, which in turn suggest concrete actions. The concrete actions are what go on your schedule/to-do list. I'd also advocate constantly updating/refining your goals and concrete methods of achieving goals, both for updating on new information and testing out new methods.
It's possible that a daily schedule just doesn't work for you, but I will say that I had to try a number of different tweaks before it felt okay to me. Examining negative feelings the schedule gives you and then lo
I find myself doing this a great deal when deciding whether to criticize somebody. I model most people I know as not being able to productively use direct criticism. The criticism, however well meant it may be, will hurt their pride, and they will not change. Indeed, the attempt will probably create some bad feeling towards me. It is just better not to try to help them in such a direct way. There are more tactful ways of getting across the same point, but they are often more difficult and not always practical in every situation.
The people I do directly cr
I think optimizing based on people's preferences may be problematic in that the AI may, in such a system, modify persons to prefer things that are very cheaply or easily obtained, so that it can better satisfy those preferences. Or rather, it would do that as part of optimizing: it would make people want things that can be more easily obtained.
I'm not Raemon, but elaborating on using Gendlin's Focusing to find catalysts might be helpful. Shifting emotional states is very natural to me (I used to find it strange that other people couldn't cry on demand), and when I read Focusing I realized that his notion of a "handle" for a feeling is basically what I use to get myself to shift into a different emotional state. Finding the whole "bodily" sense of the emotion lets you get back there easily, I find.
This seems largely correct to me, although I think hyperbolic discounting of rewards/punishments over time may be less pronounced in human conditioning than in animals being conditioned by humans. Humans can think "I'm now rewarding myself for Action A I took earlier" or "I'm being punished for Action B", which seems, at least in my experience, to decrease the effect of the temporal distance, whereas animals seem less able to conceptualize the connection over time. Because of this difference, I think the temporal diff
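As a rough sketch of the discounting effect under discussion: the standard hyperbolic model values a delayed reward as V = A / (1 + kD), which falls off much faster at short delays than exponential discounting does. The discount rates below are purely illustrative, not empirical estimates:

```python
def hyperbolic_value(amount: float, delay: float, k: float = 0.1) -> float:
    """Subjective value of a reward `amount` received after `delay` time units,
    using the standard hyperbolic form V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

def exponential_value(amount: float, delay: float, r: float = 0.1) -> float:
    """Exponential discounting for comparison: value decays by a constant
    proportion per unit of delay."""
    return amount * (1 - r) ** delay

# Compare how a reward of 100 is devalued at increasing delays.
# Hyperbolic discounting drops steeply early on, then flattens out,
# which is the pattern associated with delayed rewards losing their punch.
for delay in (0, 5, 10, 50):
    print(delay,
          round(hyperbolic_value(100, delay), 1),
          round(exponential_value(100, delay), 1))
```

If the comment's conjecture is right, a human's explicit "this reward is for Action A" framing would correspond to an effectively smaller k than in animal conditioning, i.e. a flatter discount curve.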
Would the ability to deceive humans when specifically prompted to do so be considered an example? I would think that large LMs are getting better at devising false stories about the real world that people cannot distinguish from true ones.