I feel that one of the key elements of the problem is misplaced anxiety. If an ancient farmer stops working hard, he will not get enough food, and his whole family will die. In modern Western society, the risk of dying because you do not work is nearly zero. (You are far more likely to die from exhausting yourself by working too hard.) When someone works too hard, it is usually not out of fear of dying too early, or of their kids dying. It is fear of failure, of being the underdog, of not doing what you are supposed to, and plenty of other constructs...
Thank you for your research! First of all, I don't expect the non-human parameter to give a clear power law, since we need to add humans as well. Of course, close to the singularity the impact of humans will be very small, but maybe we are not that close yet. Now for the details:
Compute:
1. Yes, Moore's law was a fairly steady exponential for quite a while, but we should indeed multiply it by the number of chips produced.
2. The graph shows just a five-year period, and it shows revenue rather than the number of chips produced. Five years is too short a period to draw any conclusions (a toy illustration of why is sketched below), and I am not sure that ...
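Here is a toy sketch of the window-length point (all numbers are made up by me and have nothing to do with the actual revenue graph): over a span as short as five years, hyperbolic growth toward a hypothetical singularity date is fit almost perfectly by a plain exponential, so such a window can hardly distinguish the two hypotheses.

```python
# Toy illustration: a short window of hyperbolic ("power law toward the
# singularity") growth is nearly indistinguishable from a plain exponential.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

t = np.linspace(0, 5, 20)                        # five years of data points
singularity_year = 30.0                          # hypothetical singularity date (made up)
hyperbolic = 100.0 / (singularity_year - t) ** 2 # toy "true" hyperbolic growth curve

def exponential(t, a, b):
    return a * np.exp(b * t)

# Fit an exponential to the hyperbolic data over the short window.
params, _ = curve_fit(exponential, t, hyperbolic, p0=[0.1, 0.1])
fitted = exponential(t, *params)

r, _ = pearsonr(hyperbolic, fitted)
print(f"correlation between exponential fit and hyperbolic data: {r:.5f}")
# prints a value extremely close to 1, i.e. on a five-year window the two
# curves are practically indistinguishable
```

Running the same exercise over several decades separates the two curves easily, which is why I would want a much longer series before drawing any conclusions.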
You make a good point. Indeed, a system that rewards authors and experts would be quite complicated, so I was thinking of it on a purely volunteer basis (so in the initial stages it is non-profit). Then, if a group of people willing to work on the project forms, they may turn it into a business project. If the initial author of the idea is part of the project, they may get something; otherwise, no: the idea has already been donated, and donations are not returned. I will update the initial post to clarify this point.
As for your idea, I am not at all an expert in this field. Hopefully, we will find experts for all our ideas (I also have a couple).
It would be very interesting to look at the results of this experiment in more detail.
Yes, maybe I did not explain what I mean very well; however, gjm (see the comments below) seems to get it. The point is not that CFAR is very much like Lifespring (though I may have sounded that way); the point is that there are certain techniques (team spirit, deep emotional connections, etc.) that are likely to be used in such workshops and that will almost certainly make participants love the workshop, the organizers (and the other participants), but their effect on the partici...
Ok, your point makes sense.
Basically, I am trying to figure out for myself whether going to the workshop would be beneficial for me. I do believe that CFAR is not simply trying to get as much money as possible. However, I am concerned that people who have been through the workshop are strongly biased towards liking it, not because it really helps, but because of psychological mechanisms akin to those of Lifespring. I am not saying that CFAR does this intentionally; it could have arisen somehow on its own. Maybe these mechanisms are even beneficial to whatever CFAR is doing, but they definitely make evaluation harder.
"When I was talking to Valentine (head of curriculum design at the time) a while ago he said that the spirit is the most important thing about the workshop."
Now, this already sounds a little disturbing and reminiscent of Lifespring. Of course, the spirit is important, but I thought the workshop was going to arm us with instruments we can use in real life, not only in an emotional state of comradeship with like-minded rationalists.
The particular instruments are not the point.
When asked whether CFAR wanted to collaborate on scientifically validating their instruments, the answer I heard was that CFAR doesn't, because it doesn't consider the instruments to be of central importance; it considers giving people agency over changing their own thinking process to be of central importance.
There are many aspects to Lifespring. If I look at the Wikipedia page, it suggests that they tried to maximize the number of people enrolled in Lifespring seminars. You complain that CFAR doesn't do enough ...
I can understand your point, but I am not persuaded yet. Let me clarify why. During the year and a half of COVID, in-person workshops were not possible. During this time, there were people who would have strongly benefited from the workshop, and for whom it would have been helpful right then (for example, they were making a career choice). Some of them could afford private spaces for the duration of the workshop. It seems that for them, during this time, an online workshop would certainly have been more beneficial than no workshop at all. Moreover, conducting at least o...
It is a good justification for this behavior, but it does not seem to be the most rational choice. Indeed, one could specify that a participant in the online workshop must have a private space (their own bedroom, an office, a hotel room, a remote spot in a park, whatever fits). I am pretty sure there is a significant number of people who would prefer an online workshop to an offline one (especially when all offline workshops are canceled due to COVID), and who have or can find a private space for the duration of the workshop. To say that we are not doing it because some pe...
I agree that for some people physical contact (hugs, handshakes, etc.) indeed means a lot. However, that is not true for everyone. Moreover, even if the online workshop is less effective due to the lack of this spirit, is it really so ineffective that it is worse than no workshop at all? Finally, why not just try? It sounds like a thing that should be tried at least once, and if it fails, well, then we see that it fails.
Yes, I hope someone who attended a CFAR workshop (or is even somehow affiliated with CFAR) will see this question and give their answer.
First of all, it is my mistake: in the paper they used pain more as a synonym for suffering. They wanted to clarify that the animal avoids tissue damage (heat, punching, electric shock, etc.) not just on the spot, but learns to avoid it. Avoiding it right there is simply nociception, which can be seen in many low-level animals.
I don't know much about the examples you mentioned. For example, bacteria certainly can't learn to avoid stimuli associated with something bad for them. (Well, they can on the scale of evolution, but not as a single bacterium.)
I agree with the point about a continuous ability to suffer rather than a threshold. I totally agree that there is no objective answer; we can't measure suffering. The problem, however, is that this leaves a practical question that is not clear how to solve, namely how we should treat other animals and our code.
Let me try to rephrase it in terms of something that can be done in a lab, and see if I get your point correctly. We should conduct experiments with humans, identifying what causes suffering, with what intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But that won't be a functional approach; we can't extrapolate it blindly to AI.
If we want a functional approach, we can only look at behavior: what we do when we suffer, what we do afterwards, etc. Then a being suffers if it demonstrates the same behavior. Here the problem will be how to generalize human behavior to animals and AI.
Thank you, but it is again like saying: "Oh, to solve a physics problem you need calculus. Calculus uses real numbers. The most elegant way to introduce real numbers is from rational numbers, built from natural numbers via the Peano axioms. So let's make physicists study the Peano axioms, set theory, and formal logic."
In any area of math you need some set theory and logic, but usually only an amount that can be covered in one or two pages.
Thank you, but I would say it is too general an answer. For example, suppose your problem is to figure out planetary motion. You need calculus, that's clear. So, according to this logic, you would first need to look at the building blocks: introduce the natural numbers using the Peano axioms, then study their properties, then introduce the rationals, and only then construct the real numbers. And this is fun, I really enjoyed it. But does it help to solve the initial problem? Not at all. You can just introduce the real numbers immediately. Or, if you care only about solving mechanics...
It is worrisome indeed. I would say it definitely does not help and only increases the risk. However, I don't think this country-that-must-not-be-named would start a nuclear war first, simply because it has too much to lose and its non-nuclear options are excellent. This may change in the future, so yes, there is some probability as well.
That is exactly the problem. Suppose the Plutonia government sincerely believes that as soon as other countries are protected, they will help the people of Plutonia overthrow the government. And they kind of have reasons for such a belief. Then (in their model of the world) a world protected from them is a deadly threat, basically capital punishment. Nuclear war, however, horrible as it is, still leaves them bomb shelters where they can survive, with enough food inside just for themselves to live until a natural death.
The problem is that retaliation is not immediate (missiles take a few hours to reach their targets). For example, Plutonia can demonstratively destroy one target and declare that any attempt at retaliation will be retaliated against in double: as soon as another country launches N missiles, Plutonia launches 2N.
Well, "democratic transition" will not necessarily solve that (like basically it did not completely resolve the problem with the end of the Cold War), you are right, so actually, the probability must be higher than I estimated - even worse news.
Are there any other options for decreasing the risk?
From a Russian perspective: well, I didn't discuss it with officials in the government, only with friends who support the current government. So I can only say what they think and feel, and of course it is just anecdotal evidence. When I explicitly ...
When I say "use", I mean actually detonating: not necessarily destroying a big city, but initially maybe just something small.
Within the territory is possible, though I think outside is more realistic (I think the army will eventually be too weak to fight external enemies with modern technology, but will always be able to fight unarmed citizens).
How should we deal with cases where epistemic rationality contradicts instrumental rationality? For example, we may want to use the placebo effect, because one of our values is that healthy is better than sick, and less pain is better than more pain. But the placebo effect relies on us believing that a pill is working medicine when that belief is false. Is there any way to satisfy both epistemic and instrumental rationality?
Hmmm, but I am not saying that the benevolent-simulators hypothesis is false and that I just choose to believe it because doing so has a positive effect. Rather the opposite: I think benevolent simulators are highly likely (more than a 50% chance). So it is not a method "to believe in things which are known to be false". It is rather an argument for why they are likely to be true (of course, I may be wrong somewhere in this argument, so if you find an error, I will appreciate it).
In general, I don't think people here want to believe false things.
Of course, the placebo effect is useful from an evolutionary point of view, and it is the subject of quite a lot of research. (The main idea: it is energetically costly to keep your immune system always on high alert, so you boost it at particular moments that correlate with pleasure, usually from eating/drinking/sex, which is when germs usually get into the body. If you are interested, I will find the link to the research paper where this is discussed.)
I am afraid I am still failing to explain what I mean. I am not trying to deduce from observation that we are in a simulation, I don't t...
That is exactly the point: there should be no proof of the simulation unless the simulators want it. Namely, there should be no difference observable to us between a universe governed simply by the laws of Nature and one with intervention from simulators. We can't look at any effect and say: this happens, therefore we are in a simulation.
The point was the opposite. Assume we are in a simulation with benevolent simulators (which, according to what I wrote in the theoretical part of the post, is highly likely). What can they do so that we still would n...
It sounds possible. However, before even the first people get it, there should be some progress with animals, and right now there is nothing. So I would bet it is not going to happen in, let's say, the next 5 years. (Well, unless we suddenly get radical progress in creating a superAI that will do it for us, but that is another huge question on its own.)
I would say I wanted first to think about the very near future, without a huge technological breakthrough. Of course, immortality and superAI are far more important than anything I mentioned ...
I agree, they have a really bad life, but Eliezer seems to be talking here about those who work 60 hours a week to ensure their kids will go to a good school. A slightly different problem.
And on homeless people, there are different cases. In some, UBI indeed will help. But, unfortunately, in many cases the person has mental health problems or an addiction, and simply giving them money may not help.