Irrationality Game
For reasons related to Gödel's incompleteness theorems and mathematically proven lower bounds on the difficulty of certain computational problems, I believe there is an upper limit on how intelligent an agent can be. (90%)
I believe that human hardware can - in principle - be as intelligent as it is possible to be. (60%) To be clear, this doesn't actually occur in the real world we currently live in.
Edit: In deference to social norms in the community, retracted.
Upvoted for significant overconfidence on your second claim, assuming some plausible understanding of the phrase "human hardware". I'm also interested in your reasoning.
I'm not sure what you mean.
For reasons related to Gödel's incompleteness theorems and mathematically proven lower bounds on the difficulty of certain computational problems, I believe there is an upper limit on how intelligent an agent can be.
We already know the upper limit on intelligence. It takes one bit of evidence to narrow down the possibilities by a factor of two.
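A minimal sketch of what that bound means, in Python (my own illustration, not from the thread; the function name is mine): each bit of evidence doubles the odds, so singling out one hypothesis from 2^n equally likely ones takes exactly n bits, regardless of how clever the reasoner is.

```python
import math

# Toy model: mutually exclusive, equally likely hypotheses.
# One bit of evidence multiplies the odds for a hypothesis by 2.

def posterior_odds(prior_odds, bits_of_evidence):
    """Odds in favor of a hypothesis after the given bits of evidence."""
    return prior_odds * 2 ** bits_of_evidence

n_hypotheses = 1024
bits_needed = math.log2(n_hypotheses)     # information-theoretic floor
print(bits_needed)                        # 10.0 bits to pick 1 of 1024

# Starting at 1:1023 odds, 10 bits of evidence yields roughly even odds.
print(posterior_odds(1 / 1023, bits_needed))  # ~1.001
```

No agent, however intelligent, can get there with fewer bits; intelligence only affects how efficiently the available bits are extracted and used.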
I believe that human hardware can - in principle - be as intelligent as it is possible to be.
By "human hardware" do you mean an actual human brain, in the shape it normally forms, or just anything made out of neurons? If you mean the former, this is obviously false. We have a limited memory and thus a limited intelligence. If you mean the latter, we already know neurons are Turing complete, though you could still build a more efficient computer that does it faster and with less energy.
Do you mean that a human brain could, in principle, come very close to the upper limit of effective intelligence? That is, you might not be able to memorize 10^50 digits, but you could still answer any question you'd reasonably come across just as well?
Also, are you talking about just training a normal human, or something where their neurons have to just happen to be wired exactly right?
Also, there's the question of how to measure intelligence. Is it just how likely we are to set off a utilitronium shockwave, and how accurately it follows our goals?
This is an Irrationality Game comment.
We are living in a time of relative technological stagnation outside of computers, as argued by Peter Thiel and others. 70%
If we are, he is right about the reasons for this. 60%
It's mostly a definitional matter. I think we are progressing quickly in many fields, but we're mainly doing so by using computers, not by inventing new unrelated tech.
Retracted. Do not feed the trolls.
This is an Irrationality Game comment.
No less than 15% of the population could gain expected net benefits to overall wellbeing through carefully planned and executed anabolic steroid use. 80%.
This is an Irrationality Game comment.
We have not been experiencing moral progress in the past 250 years. Moral change? Sure. I'd also be ok with calling it value drift. 90%
Edit: I talked about this previously in some detail here.
Edit: Apparently the OP was a troll account, retracting all contributions to the thread.
Do you believe that there is no non-arbitrary way to define "moral progress", or do you think that "moral progress" is a coherent concept, just one we haven't experienced?
(Retracted for the same reasons as other comments in this thread.)
I think moral progress is a coherent concept. I'm inclined to argue no human society so far has experienced it, though obviously I can't rule out some outliers that did so in certain time periods, since this is such a huge set. We have so little data, and there seems to be great variance in the kinds of values we see across societies.
This is an Irrationality Game comment. (Though I'm actually not sure how it will score).
"Moral progress" simply describes moral change or value drift in the speaker's preferred direction. Very confident (~95%).
I don't use it that way. I like lots of the moral changes of the past 250 years, but I feel the process behind them isn't something I want to outsource morality to. Just like I like having opposable thumbs but feel uncomfortable letting evolution shape humans any further. We should do that ourselves, so it doesn't grind down our complex values.
There are lots of people running around who think society in 1990 is somehow morally superior to society in 1890 on some metric of rightness beyond the similarity of their values to our own. This is the difference between being on the "wrong side of history" being merely a mistake in reasoning one should get over as soon as possible, and it being a tragedy. A tragedy that perhaps kept repeating for every human society and individual in existence for nearly all of history.
This also suggests different strategies are appropriate for dealing with future moral change. I think we should be very cautious, since I'm sure we don't understand the process. Modern Western civilization doesn't have a narrative of "over time values became more and more like our own", but "over time morality got better and better, and this gives our society meaning!". It's the difference between seeing "God guiding evolution" and confronting the full horror of Azathoth.
If you can't produce evidence that moral progress ever happened and believe that it definitely hasn't happened in the recent past, why do you think that moral progress is a coherent concept?
I didn't say I had great confidence in moral progress being a coherent concept. But it seems plausible to me that acquiring more true beliefs and thinking about them clearly might lead to discovering that some values are incoherent or unreachable, and thus to our no longer pursuing them.
Hard to say; history is blurry, but we do know the past 300 years well enough that I'm OK with this level of certainty.
I'm far from comfortable saying that there was no moral progress in, say, some Medieval European societies. Not perhaps from our perspective, but from the perspective of a sort of CEV of 700 AD values looking at 1100 AD ones, who knows? I don't know enough to have a reasonable estimate.
There was also useful progress in philosophy made before the "Enlightenment" that sometimes captured previous values and preferences and fixed them up. But again, in nearly any society for which that is true, there was also lots of harmful philosophy that mutated values in response to various pressures.
Upvoted in disagreement. The trend of moral progress has been one of less acceptance of violence, less acceptance of nonconsensual interaction, less victim blaming, and less standing by while terrible things happen to others (or at least indignation at past instances of this).
This leads to a falsifiable prediction. In the next one to four centuries, vegetarianism will increase to a majority, jails will be seen as unnecessarily, brutally, unjustifiably harsh, "the poor" will be less of an Acceptable Target (cf. delusions that they are "just lazy" and so on), and the present generation will be condemned for being so terrible at donating in general and at donating to the right causes. If all of those things happen, moral progress will have been flat-out confirmed.
I don't think I should be a vegetarian. Thus at best I feel uneasy about people in four centuries thinking vegetarianism should be compulsory, and at worst I'll be dismayed by them spending time on activities related to that instead of things I value. If I thought that was great, I'd already be a vegetarian, duh.
Also, I think I'd like some violence to be OK. Completely non-violent minds would be rather inhuman, and violence has some neat properties when viewed from the perspective of fun theory. In any case, I strongly suspect the general non-violence trend (documented by Pinker) of the past few thousand years was due to biological changes in humans because of our self-domestication. Your point on consent is questionable. So is the one on victim blaming, since especially in the 20th century I would think all we saw was one set of scapegoats being swapped for another.
This leads me to suspect Homer's FAI is probably different from my own FAI, which is in turn different from the FAI of 2400 AD values. If FAI2400 gets to play with the universe forever instead of FAI2012, I'd be rather pissed. Just because you see a trend line in moral change doesn't mean there is any reason to outsource your future value edits to it. Isn't this the classic mistake of confusing is with should?
But if it were as you say, then all our worries about CEV and FAI would be silly, since our society apparently is already, automagically, something very similar to what we want; we'd just need to figure out how to design it so that we can include emulated human minds in it while it continues doing its thing.
Yay positive singularity problem solved!
In any case, I strongly suspect the general non-violence trend (documented by Pinker) of the past few thousand years was due to biological changes in humans because of our self-domestication.
They cite evidence of "moderate to strong heritability" of male aggressiveness. Shouldn't strong selection pressures use up variance and thus lower heritability?
Not in this case. At least not if Gregory Cochran and Henry Harpending are right that a large population means more new mutations get tested each generation than would otherwise be the case.
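A toy mutation-selection balance simulation may make this concrete (a sketch of my own in Python, not Cochran and Harpending's actual model; all parameters are made up): truncation selection drains trait variance every generation, while new mutations replenish it, and a larger population supplies more new mutations per generation in absolute terms.

```python
import random
import statistics

def simulate(pop_size, mut_sd, generations=100, selected_fraction=0.5):
    """Trait variance after repeated truncation selection plus mutation."""
    pop = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: only the least "aggressive" half reproduces.
        pop.sort()
        survivors = pop[: max(1, int(pop_size * selected_fraction))]
        # Asexual inheritance: offspring = parent's value + a new mutation.
        pop = [random.choice(survivors) + random.gauss(0.0, mut_sd)
               for _ in range(pop_size)]
    return statistics.variance(pop)

random.seed(0)
print(simulate(pop_size=10_000, mut_sd=0.0))  # variance collapses toward 0
print(simulate(pop_size=10_000, mut_sd=0.3))  # variance settles near 0.14
```

With no mutational input the heritable variance is indeed used up, as the grandparent suggests; with steady mutational input it settles at an equilibrium instead, which is the shape of the large-population argument.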
I thought it would be good to play the irrationality game again. Let's do it!