All of Valentin2026's Comments + Replies

I agree, they have a really bad life, but Eliezer seems to be talking here about those who work 60 hours/week to ensure their kids will go to a good school. That is a slightly different problem.

As for homeless people, there are different cases. In some, UBI will indeed help. But unfortunately, in many cases the person has mental health problems or an addiction, and simply giving them money may not help.

2Seth Herd
Giving people with problems money still helps them. It may not solve their problems, but it will make it more bearable to live with those problems.

I feel that one of the key elements of the problem is misplaced anxiety. If an ancient farmer stopped working hard, he would not get enough food, and his whole family would die. In modern Western society, the risk of dying from not working is nearly zero. (You are far more likely to die from exhausting yourself by working too hard.) When someone works too hard, it is usually not out of fear of dying too early, or of their kids dying. It is fear of failure, of being the underdog, of not doing what you are supposed to, and of plenty of other constructs... (read more)

Being homeless sucks, it’s pretty legitimate to want to avoid that

What is the application deadline? I did not find it in the post. Thank you!

1UnplannedCauliflower
We will stop accepting applications in the first half of August, but it's possible we'll run out of spots earlier, in which case people will automatically be placed on the waiting list. You don't have to hurry, but it's also a good idea not to wait until the last moment.

Yes, absolutely! We will open the mentee application later.

So far nothing; I was distracted by other stuff in my life. Yes, let's chat! frombranestobrains@gmail.com

After the rest of the USA is destroyed, a very unstable situation is quite likely (especially taking into account how many people have guns). In my opinion, countries (and remote parts of countries) that will not be under attack at all are a much better option.

1[anonymous]
tyty this makes sense

Thank you for your research! First of all, I don't expect the non-human parameter to give a clear power law, since we need to add humans as well. Of course, close to the singularity the impact of humans will be very small, but maybe we are not that close yet. Now for the details:


Compute:
1. Yes, Moore's law was a fairly steady exponential for quite a while, but we indeed should multiply it.
2. The graph shows just a five-year period, and not the number of chips produced but revenue. A five-year period is too small for any conclusions, and I am not sure that ... (read more)

1Mart_Korz
Trends of different quantities: Generally, I agree with your points :) I recently stumbled upon the paper "The World’s Technological Capacity to Store, Communicate, and Compute Information", which has some neat overviews of data storage, broadcasting, and compute trends. From a quick look at the figures, my impression is that compute and storage look very much like 'just' exponential, while there is a super-exponential figure (Fig. 4) for total communication bandwidth (1986-2007)[1].

General: That makes sense. Now that I think about it, I could well imagine that something like "scale is all you need" is sufficiently true that randomness doesn't shift the expected date by a large amount. Good point! I think that the time span around and before the first AGI will be most relevant to us, as it probably provides the largest possibility to steer the outcome toward something good, but this indeed is not the date we would get in a power-law singularity. This feels quite related to the discussion around biological anchors for estimating the necessary compute for transformative AI and the conclusions one can draw from them. I feel that if one thinks these are informative, even 'just' the exponential compute trends provide rather strong bounds (at least compared to taking biological time-scales or such as a reference).

Regarding persuading people: I am not sure whether such a trend would make a large psychological difference compared to the things that we already have: All Possible Views About Humanity's Future Are Wild. But it would still be a noteworthy finding in any case.

1. ^ Quick hand-wavy estimate of whether the trend continued in the last 15 years: If I just assume 'trend continues' to mean 'doubling time halves every 7 years with a factor x40 from 2000 to 2007' (this isn't a power law, but much easier for me to think about and hopefully close enough in this parameter range), we'd have to find an increase in global bandwidth by (a factor o
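For concreteness, here is a minimal sketch of the footnote's arithmetic, under its stated assumptions only (a factor of x40 over 2000-2007, with the doubling time halving every 7 years, so the growth factor per 7-year period squares each period). The exact figure the footnote arrived at is cut off above; this only shows what those assumptions imply:

```python
# Hand-wavy check, assuming only what the footnote states: a factor of
# x40 over 2000-2007, with the doubling time halving every 7 years,
# i.e. the growth factor per 7-year period squares each period.
factor = 40.0   # observed factor over 2000-2007 (from the footnote)
total = 1.0
for _ in range(2):      # two further 7-year periods: 2007-2014, 2014-2021
    factor **= 2        # halved doubling time => per-period factor squares
    total *= factor
print(f"implied bandwidth growth 2007-2021: ~{total:.1e}x")  # ~4.1e+09x
```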

You are making a good point. Indeed, a system that rewards authors and experts would be quite complicated, so I was thinking about it on a purely volunteer basis (so in the initial stages it is non-profit). Then, if a group of people willing to work on the project forms, they may turn it into a business project. If the initial author of the idea is in the project, he may get something; otherwise, no - the idea is already donated, and there are no donations back. I will update the initial post to clarify this point.

As for your idea, I am really not an expert in this field. Hopefully, we will find experts for all our ideas (I also have a couple).

Thank you very much, it does! 
I think your answer is worth publishing as a separate post. It will be relevant to everyone who teaches.

It would be very interesting to look at the results of this experiment in more detail.

 

Yes, maybe I did not explain what I mean very well; however, gjm (see comments below) seems to get it. The point is not that CFAR is very much like Lifespring (though I may have sounded like that); the point is that there are certain techniques (team spirit, deep emotional connections, etc.) that are likely to be used in such workshops, and that will most certainly make participants love the workshop and the organizers (and the other participants), but their effect on the partici... (read more)

Are there any proven benefits of meditation retreats in comparison with regular meditation?

Ok, your point makes sense.

Basically, I am trying to figure out for myself whether going to the workshop would be beneficial for me. I do believe that CFAR is not simply trying to get as much money as possible. However, I am concerned that after the workshop people are strongly biased towards liking it, not because it really helps, but because of psychological mechanisms akin to Lifespring's. I am not saying that CFAR does this intentionally; it could have arisen somehow on its own. Maybe these mechanisms are even beneficial to whatever CFAR is doing, but they definitely make evaluation harder.
 

4Viliam
CFAR once did an experiment (not sure whether they keep doing it) where they gave questionnaires to people who participated in their workshops and to a control group (people who wanted to participate but were not selected because of limited space). Part of the questionnaire was evaluated by other people, nominated by the participants as someone who knows them; if I remember correctly, they were asked questions immediately before the workshop and one year later. Not sure where the results are published.

Seems to me that your question is mostly about how to distinguish between "participants became more productive/rational/whatever after the workshop" and "participants liked the workshop a lot", as both could lead to good feedback. That is a valid question. But mentioning Lifespring repeatedly just makes it unnecessarily confrontational, considering that "they do workshops, but no online workshops" is pretty much all these two organizations have in common; in many other aspects they seem to be the opposite of each other.

If you have specific concerns about whether CFAR does some specific bad things, it would be better to ask directly "does CFAR do X, Y, and Z?". For example (looking at the Wikipedia page about Lifespring), "does CFAR physically prevent participants from leaving the workshop?" or "how many participants have died during a CFAR workshop?" Here the answers are "no" and "zero", respectively. Whatever CFAR's true reasons for not doing online workshops are, torturing and killing people are not among them.

Frankly, I would also like to see more people running CFAR-like workshops in various ways. But it is not my place to tell them how to use their limited resources. I participated in one of those workshops, so in theory I should be able to review my notes and run a workshop myself. I am just too lazy (ahem, time-constrained) to do that. Also, CFAR probably wouldn't be happy about it, because one of their fears is that someone will provide a shitty version of their

"When I was talking to Valentine (head of curriculum design at the time) a while ago he said that the spirit is the most important thing about the workshop."

Now, this already sounds a little disturbing and reminiscent of Lifespring. Of course, the spirit is important, but I thought the workshop was going to arm us with instruments we can use in real life, not only in an emotional state of comradeship with like-minded rationalists.
 

The particular instruments are not the point. 

When asked whether CFAR wanted to collaborate on scientifically validating their instruments, the answer I heard was that CFAR doesn't, because it doesn't consider the instruments of central importance; what it does consider centrally important is giving people agency over changing their own thinking processes.

There are many aspects of Lifespring. The Wikipedia page suggests that they tried to maximize the number of people enrolled in Lifespring seminars. You complain that CFAR doesn't do enough ... (read more)

I can understand your point, but I am not persuaded yet. Let me clarify why. During the year and a half of COVID, in-person workshops were not possible. During this time, there were people who would strongly benefit from a workshop, and for whom a workshop would be helpful at that time (for example, they were making a career choice). Some of them could afford a private place for the duration of the workshop. It seems that for them, during this time, an online workshop would certainly be more beneficial than no workshop at all. Moreover, conducting at least o... (read more)

It is a good justification for this behavior, but it does not seem to be the most rational choice. Indeed, one could specify that a participant in the online workshop must have a private space (own bedroom, office, hotel room, remote spot in a park - whatever fits). I am pretty sure there is a significant number of people who would prefer an online workshop to an offline one (especially when all offline ones are canceled due to COVID), and who have or can find a private space for the duration of the workshop. To say that we are not doing it because some pe... (read more)

0ChristianKl
That's not what "rational" is about. To know whether their decision is rational, you would have to compare it to the alternative choices. Holding a workshop has an opportunity cost. If CFAR is not holding a workshop, they are doing something else with the time. I don't have a good idea of what CFAR did in the time they didn't hold workshops, but without knowing that, you can't make any judgment about whether holding online workshops would have been better than what they did.

That's mostly irrelevant given the goals that CFAR has. It would be toxic to make a decision on that basis, given that "giving people what they prefer" and "giving people what will improve their rationality" are two very different things. There are a lot of personal development workshops whose makers focus on the former. CFAR doesn't.

As far as high-end restaurants go, high-end restaurants don't let people order from multiple options. Attempting to give everybody different options is what cheap low-end and mid-level restaurants do. Apart from that, a restaurant is a business that sells products to make a profit. That's very far from what CFAR is.

I agree that for some people physical contact (hugs, handshakes, etc.) indeed means a lot. However, that is not everyone. Moreover, even if an online workshop is less effective due to the lack of this spirit, is it really so ineffective that it is worse than no workshop at all? Finally, why not just try? It sounds like a thing that should be tried at least once, and if it fails - well, then we see that it fails.

Yes, I hope someone who attended a CFAR workshop (or is even somehow affiliated with CFAR) will see this question and give their answer.

4ChristianKl
I don't think just attending it would allow someone to answer better. A good answer would come from someone who actually speaks for CFAR.  When I was talking to Valentine (head of curriculum design at the time) a while ago he said that the spirit is the most important thing about the workshop.

Are there any other examples where rationality gets you there faster than the scientific approach? If so, it would be good to collect and mention them. If not, I am pretty suspicious about the QM one as well.

5Shmi
I think that rationality as a competing approach to the scientific method is a particularly bad take that leads a lot of aspiring rationalists astray, into the cultish land of "I know more and better than experts in the field because I am a rationalist". Data analysis uses plenty of Bayesian reasoning. Scientists are humans and so are prone to the biases and bad decisions that instrumental rationality is supposed to help with. CFAR-taught skills are likely to be useful for scientists and non-scientists alike. 

First of all, it is my mistake - in the paper they used pain more as a synonym for suffering. They wanted to clarify that the animal avoids tissue damage (heat, punching, electric shock, etc.) not just on the spot, but learns to avoid it. Avoiding it right there is simply nociception, which can be seen in many low-level animals.

I don't know much about the examples you mentioned. For example, bacteria certainly can't learn to avoid stimuli associated with something bad for them. (Well, they can on the scale of evolution, but not as a single bacterium.)

If it is, does it mean that we should consider all artificial neural network training to be animal experiments? Should we put up something like "code welfare is also animal welfare"?

I agree with the point about a continuous ability to suffer rather than a threshold. I totally agree that there is no objective answer; we can't measure suffering. The problem, however, is that this leaves a practical question that is not clear how to solve, namely how we should treat other animals and our code.

Let me try to rephrase it in terms of something that can be done in a lab, and see if I get your point correctly. We should conduct experiments with humans, identifying what causes suffering and with what intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But this won't be the functional approach; we can't extrapolate it blindly to AI.

If we want the functional approach, we can only look at behavior: what we do when we suffer, after it, etc. Then a being suffers if it demonstrates the same behavior. Here the problem will be how to generalize human behavior to animals and AI.

2Adele Lopez
I think the experiments you describe on humans are a reasonable start, but that you would then need to ask: "Why did suffering evolve as a distinct sensation from pain?" I don't think you can determine the function of suffering without being able to answer that. Then you could look at other systems and see if something with the same functionality exists. I think that's how you could generalize to both other animals and AI.

I like the idea. Basically, you suggest taking the functional approach and advancing it. What do you think this type of process could be?

Thank you, but it is again like saying: "oh, to solve physics problems you need calculus. Calculus uses real numbers. The most elegant way to introduce real numbers is from the rational numbers, built from the natural numbers via the Peano axioms. So let's make physicists study the Peano axioms, set theory, and formal logic".

In any area of math, you need some set theory and logic - but usually only an amount that can be covered in one or two pages.

Thank you, but I would say that is too general an answer. For example, suppose your problem is to figure out planetary motion. You need calculus, that's clear. So, according to this logic, you would first need to look at the building blocks: introduce the natural numbers using the Peano axioms, then study their properties, then introduce the rationals, and only then construct the real numbers. And this is fun; I really enjoyed it. But does it help to solve the initial problem? Not at all. You can just introduce real numbers immediately. Or, if you care only about solving mechanics... (read more)
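(For reference, here is roughly what the foundational machinery under discussion looks like when written out - a minimal Peano-style sketch in Lean 4; my illustration, not anything from the thread:)

```lean
-- A minimal Peano-style construction (illustration only):
-- a natural-number type generated by zero and successor.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- Addition defined by recursion on the second argument.
def add : Nat' → Nat' → Nat'
  | n, Nat'.zero   => n
  | n, Nat'.succ m => Nat'.succ (add n m)
```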

It is worrisome indeed. I would say it definitely does not help and only increases the risk. However, I don't think this country-that-must-not-be-named would start a nuclear war first, simply because it has too much to lose and its non-nuclear opportunities are excellent. This may change in the future - so yes, there is some probability there as well.

That is exactly the problem. Suppose the Plutonia government sincerely believes that as soon as other countries are protected, they will help the people of Plutonia overthrow the government - and they kind of have reasons for such a belief. Then (in their model of the world) a world protected from them is a deadly threat, basically capital punishment. Nuclear war, however horrible, leaves them bomb shelters where they can survive, with enough food inside just for themselves to live until a natural death.

The problem is that retaliation is not immediate (missiles take a few hours to reach the target). For example, Plutonia can demonstratively destroy one object and declare that any attempt at retaliation will be retaliated in double: as soon as another country launches N missiles, Plutonia launches 2N.
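(A toy sketch of that doubling rule - my own illustration, not from the thread. Each exchange doubles, so even a large arsenal is exhausted after roughly log2(arsenal) exchanges, which is part of why such a threat escalates so quickly:)

```python
# Toy model of the "retaliate in double" rule: salvos go 1, 2, 4, ...,
# so a finite arsenal runs out after about log2(arsenal) exchanges.
def rounds_until_exhausted(arsenal: int) -> int:
    launched, salvo, rounds = 0, 1, 0
    while launched + salvo <= arsenal:
        launched += salvo
        salvo *= 2      # "any retaliation will be retaliated in double"
        rounds += 1
    return rounds

# e.g. with a hypothetical 6000-warhead arsenal:
print(rounds_until_exhausted(6000))  # 12
```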

Yes, absolutely, it is the underlying thesis. 

Well, "democratic transition" will not necessarily solve that (like basically it did not completely resolve the problem with the end of the Cold War), you are right, so actually, the probability must be higher than I estimated - even worse news. 
 Is there any other options for decreasing the risk?

From a Russian perspective: well, I didn't discuss it with officials in the government, only with friends who support the current government. So I can only say what they think and feel, and of course it is just anecdotal evidence. When I explicitly ... (read more)

When I say use, I mean actually detonating - not necessarily destroying a big city, but initially maybe just something small.
Within its own territory is possible, though I think outside is more realistic (I think the army will eventually be too weak to fight external enemies with modern technology, but will always be able to fight unarmed citizens).

2lsusr
If Plutonia detonates its weapons outside its own territory offensively, then the target is either nuclear-armed or non-nuclear-armed.
* If the target is nuclear-armed, then the target state can be expected to retaliate with overwhelming force, thus removing the Plutonian government from power.
* If the target is not nuclear-armed, then nuclear-armed states may or may not retaliate with overwhelming force. Exactly what happens here depends on what country Plutonia really is[1], who the target of the nuclear strike is, and other geopolitical conditions.
Either way, the danger Plutonia poses to the rest of the world is eliminated before things can snowball.

1. How the world would respond to a nuclear first strike by Russia is very different from a nuclear first strike by North Korea, which is very different from a nuclear first strike by Iran. ↩︎

Sorry, I didn't get what you mean by "non-dominant political controllership" - can you rephrase it?

How should we deal with cases where epistemic rationality contradicts instrumental rationality? For example, we may want to use the placebo effect, because one of our values is that healthy is better than sick, and less pain is better than more pain. But the placebo effect is based on believing a pill to be working medicine when it is not. Is there any way to satisfy both epistemic and instrumental rationality?

3EniScien
It seems to me that this is not a contradiction between the two rationalities. Rather, it is similar to a resonance of doubt. If a placebo works when you believe in it, that means that if you believe in it, it will be true. Here you would need a reverse example, where if you believe that something is true, it becomes false. (Believing that something is safe again won't work, since you just need to not act more carelessly based on the safety of something, which is just a matter of instrumental rationality.)
1Peter Pehlivanov
I'd say you shouldn't force yourself to believe something (epistemic rationality) to achieve a goal (instrumental rationality). This is because, in my view, human minds are addicted to feeling consistent, so it'd be very difficult (i.e., resource expensive) to believe a drug works when you know it doesn't. What does it even mean to believe something is true when you know it's false? I don't know. Whatever it means, it'd have to be a psychological thing rather than an epistemological one. My personal recommendation is to only believe things that are true. This is because the modern environment we live in generally benefits rational behavior based on knowledge anyway, so the problem doesn't need to surface.
7Jozdien
It differs from case to case, I would think. There are instances where you're most probably benefited by trading off epistemic rationality for instrumental, but in cases where it's too chaotic to get a good estimate and the tradeoff seems close to equal, I would personally err on the side of epistemic rationality. Brains are complicated; forcing a placebo effect might have ripple effects across your psyche, like an increased tendency to shut down that voice in your head that speaks up when you know your belief is wrong on some level (very speculative example), for limited short-term gain.

Hmmm, but I am not saying that the benevolent simulators hypothesis is false and that I just choose to believe in it because it brings a positive effect. Rather the opposite - I think benevolent simulators are highly likely (more than a 50% chance). So it is not a method "to believe in things which are known to be false"; it is rather an argument for why they are likely to be true (of course, I may be wrong somewhere in this argument, so if you find an error, I will appreciate it).

In general, I don't think people here want to believe false things.

Of course, the placebo effect is useful from an evolutionary point of view, and it is the subject of quite a lot of research. (The main idea: it is energetically costly to keep your immune system always on high alert, so you boost it at particular moments correlated with pleasure, usually from eating/drinking/sex, which is when germs usually get into the body. If you are interested, I will find the link to the research paper where this is discussed.)

I am afraid I am still failing to explain what I mean. I am not trying to deduce from observation that we are in a simulation; I don't t... (read more)

2avturchin
Ok. But what if there are other, more effective methods to start believing in things which are known to be false? For example, hypnosis is effective for some.

That is exactly the point: there should be no proof of the simulation unless the simulators want it. Namely, there should be no observable (for us) difference between a universe governed simply by the laws of Nature and one with intervention from the simulators. We can't look at any effect and say: this happens, therefore we are in a simulation.
The point was the opposite. Assume we are in a simulation with benevolent simulators (which, according to what I wrote in the theoretical part of the post, is highly likely). What they can do so that we still was n... (read more)

2avturchin
Placebo could work because it has some evolutionary fitness, like the ability to stop pain when activity is needed. Benevolent simulators could create an upper limit on subjectively perceived pain, like turning off the qualia while the screaming continues. This would be scientifically unobservable.

I would suggest a per-minute subscription. It would be approximately $1/minute, which is actually close to my akrasia fine for spending time on job-unrelated websites.

Thank you. There was one paper in the post about older adults and calorie restriction. However, it is kind of biased - they had slightly overweight people in the experiment. So yes, calorie restriction is good for the overweight. Duh.
Do you know of any other studies? Thank you!

1Aaron Bergman
I don't know much more than you could find searching around r/nootropics, but my sense is that the relationship between diet and cognition is highly personal, so experimentation is warranted. Some do best on keto, others as a vegan, etc. With respect to particular substances, it seems that creatine might have some cognitive benefits, but once again supplementation is highly personal. DHA helps some people and induces depression in others, for example. Also, inflammation is a common culprit/risk factor for many mental issues, so I'd expect that a generally "healthy" diet (at least, not the standard American diet), and perhaps trying an elimination diet to see if eliminating certain foods produces a marked benefit could be helpful.  Supplements like resveratrol might help with this as well. Also might be worth experimenting with fasting of different lengths; some people find that they become extremely productive after fasting for 12-14 hours. There are a million studies on all these topics that will come up in a Google search.

It sounds possible. However, before even the first people get it, there should be some progress with animals, and right now there is nothing. So I would bet it is not going to happen in, let's say, the next 5 years. (Well, unless we suddenly get radical progress in creating a superAI that will do it for us, but that is another huge question of its own.)
I would say I wanted first to think about the very near future, without a huge technological breakthrough. Of course, immortality and superAI are far more important than anything I mentioned ... (read more)

1[anonymous]
Was there a timescale in the OP? Some readers of this post might be 22, taking their metformin, and looking at age 98 - so 76 years. I personally, having seen the state of things in biomed, think this is a problem on which there will not be real progress until we have substantially better AI systems. I think the fundamental problem is to create a (cocktail of possibly hundreds of drugs and genetic hacks, administered 24/7) to cause optimal outcomes in (a matrix of numbers that represent a patient's health and aging state, obtained from a large battery of continuously run tests). I think that finding out how to generate the drugs and genetic hacks will require repeating most previously performed experiments, just with a robot doing them and the data published without bias to cloud servers. This cannot be solved by human beings, just like we cannot keep a modern jet fighter in the air without computer assistance. But we can write and debug the systems that can.