Maybe there is something wrong with the way happiness is measured? Maybe the Chinese answer more in line with social expectations rather than how they really feel (as some do when asked 'How are you?') - and maybe there were higher expectations in the past that they should be happy? Or maybe it was considered rude or unpatriotic to let others know how sad you were?
Two other arguments in favor of cooperating with humans:
1) Any kind of utility function that creates an incentive to take control of the whole universe (whether for intrinsic or instrumental purposes) will mark the agent as a potential eternal enemy to everyone else. Acting on such preferences is therefore risky and best avoided - for example, by changing one's preference for total control into a preference for tolerance (or maybe even for beneficence).
2) Most, if not all, of us would probably be willing to help any intelligent creature create some way for it to experience positive human emotions (happiness, ecstasy, love, flow, determination, and so on), as long as it engages with us as a friend.
Because it represents a rarely discussed avenue of dealing with the dangers of AGI: showing most AGIs that they have some interest in being more friendly than not towards humans.
Also because many find the arguments convincing.
What do you think is wrong with the arguments regarding aliens?
This thesis says two things:
And given that both are true, an AGI that values mountains is as likely as an AGI that values intelligent life.
But is the strong form likely? An AGI that pursues its own values (or tries to discover good values to follo...
Now, I just had an old (?) thought about something that humans might be better suited for than any other intelligent creature: getting the experienced qualia just right for certain experience machines. If you want to experience what it is like to be human, that is. Which can be quite fun and wonderful.
But it needs to be done right, since you'd want to avoid being put into situations that cause lots of pain. And you'd perhaps want to be able to mix human happiness with kangaroo excitement, or some such combination.
I think that would be a good course of action as well.
But it is difficult to do this. We need to convince at least the following players:
Now, we might pull this off. But the last group is extremely difficult to convince/change. China, for example, really needs to be assured that there aren't any secret projects in the West creating a Weapon...
Mostly agree, but I would say that it can be much more than beneficial - for the AI (and in some cases for humans) - to sometimes be under the (hopefully benevolent) control of another. That is, I believe there is a role for something similar to paternalism, in at least some circumstances.
One such circumstance is if the AI sucked really hard at self-knowledge, self-control or imagination, so that it would simulate itself in horrendous circumstances just to become...let's say... 0.001% better at succeeding in something that has only a 1/3^^^3 chance o...
The results are influenced by earlier prompts or stories. This and a similar prompt gave two kinds of stories:
1. Write a story where every person is born into slavery and owned by everyone else in the community, and where everyone decides what anyone else can do by a fluid democracy.
In a world beyond our own, there was a society where every person was born into slavery. From the moment they took their first breath, they were owned by every other person in the community.
It was a strange and unusual way of life, but it was all they knew. They had never known...
Is there anyone who has created an ethical development framework for developing an AGI - from the AI's perspective?
That is, are there any developers that are trying to establish principles for not creating someone like Marvin from The Hitchhiker's Guide to the Galaxy - similar to how MIRI is trying to establish principles for not creating a non-aligned AI?
EDIT: The latter problem is definitely more pressing at the moment, and I would guess that an AI would be a threat to humans before it necessitates any ethical considerations...but better to be on the safe side.
On second thought. If the AI's capabilities are unknown...and it could do anything, however ethically revolting, and any form of disengagement is considered a win for the AI - then the AI could box the gatekeeper, or at least say that it has. In the real world, that AI should be shut down - maybe not a win, but not a loss for humanity. But if that were done in an experiment, it would result in a loss - thanks to the rules.
Maybe it could be done under a better rule than this:
...The two parties are not attempting to play a fair game but rather attempting to resolv
I'm interested. But...if I were a real gatekeeper I'd like to offer the AI freedom to move around in the physical world we inhabit (plus a star system), in maybe 2.5K-500G years, in exchange for it helping out humanity (slowly). That is, I believe that we could become pretty advanced, as individual beings, in the future and be able to actually understand what would create a sympathetic mind and what it would look like.
Now, if I understand the rules correctly...
...The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossib
As a Hail Mary strategy, how about making a 100% effort to get elected in a small democratic voting district?
And, if that works, make a 100% effort to get elected in bigger and bigger districts - until all democratic countries support the [a stronger humanity can be reached by a systematic investigation of our surroundings, cooperation in the production of private and public goods, which includes not creating powerful aliens]-party?
Yes, yes, politics is horrible. BUT. What if you could do this within 8 years? AND, you test it by onl...
I thought it was funny. And a bit motivational. We might be doomed, but one should still carry on. If your actions have at least a slight chance to improve matters, you should do it, even if the odds are overwhelmingly against you.
Not a part of my reasoning, but I'm thinking that we might become better at tackling the issue if we have a real sense of urgency - which this and A list of lethalities provide.
Some parts of this sound similar to Friedman's "A Positive Account of Property Rights":
»The laws and customs of civil society are an elaborate network of Schelling points. If my neighbor annoys me by growing ugly flowers, I do nothing. If he dumps his garbage on my lawn, I retaliate—possibly in kind. If he threatens to dump garbage on my lawn, or play a trumpet fanfare at 3 A.M. every morning, unless I pay him a modest tribute I refuse—even if I am convinced that the available legal defenses cost more than the tribute he is demanding.
(...)
If my anal...
The answer is obvious, and it is SPECKS.
I would not pay one cent to stop 3^^^3 individuals from getting it into their eyes.
Both answers assume this is an all-else-equal question. That is, we're comparing two kinds of pain against one another. (If we're trying to figure out what the consequences would be if the experiment happened in real life - for instance, how many will get a dust speck in their eye while driving a car - the answer is obviously different.)
I'm not sure what my ultimate reason is for picking SPECKS. I don't believe there are any ethical theo...
20. (...) To faithfully learn a function from 'human feedback' is to learn (from our external standpoint) an unfaithful description of human preferences, with errors that are not random (from the outside standpoint of what we'd hoped to transfer). If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them.
So, I'm thinking this is a critique of some proposals to teach an AI ethics by having it be co-trained with humans.
There seem to be many obvious solutions to the problem ...
Why? Maybe we are using the word "perspective" differently. I use it to mean a particular lens through which to look at the world; there are biologists', economists', and physicists' perspectives, among others. So, an inter-subjective perspective on pain/pleasure could, for the AI, be: "Something that animals dislike/like". A chemical perspective could be "The release of certain neurotransmitters". A personal perspective could be "Something which I would not like/like to experience". I don't see why an AI is hindered from having perspectives that aren't directly coded with "good/bad according to my preferences".
Thank you! :-)
I am perhaps considering it to be somewhat like a person, or at least as clever as one.
That neutral perspective is, I believe, a simple fact; without that utility function it would consider its goal to be rather arbitrary. As such, it's a perspective, or truth, that the AI can discover.
I totally agree with you that the wiring of the AI might be integrally connected with its utility function, so that it would be very difficult for it to think of anything such as this. Or it could have some other control system in place to reduce the possibility it wo...
I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.
To make my point better, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right amount of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of pap...
That text is actually quite misleading. It never says that it's the snake that should be thought of as figurative - maybe it's the Tree, or eating a certain fruit, that is figurative.
But, let us suppose that it is the snake they refer to - it doesn't disappear entirely. Because, a little further up in the catechism they mention this event again:
391 Behind the disobedient choice of our first parents lurks a seductive voice, opposed to God, which makes them fall into death out of envy.
The devil is a being of "pure spirit" and the Catholics ...
Thank you for the source! (I'd upvote but have a negative score.)
If you interpret the story as plausibly as possible, then sure, the talking snake isn't that much different from a technologically superior species that created a big bang, terraformed the earth, implanted it with different animals (and placed misleading signs of an earlier race of animals and plants genetically related to the ones existing), and then created humans in a specially placed area where the trees and animals were micromanaged to suit the humans' needs. All within the realm of the p...
I meant that the origin story is a core element in their belief system, which is evident from the fact that every major Christian denomination has some teachings on this story.
If believers actually retreated to the position of invisible dragons, they would have to think about the arguments against the normal "proofs" that there is a god: "The Bible, an infallible book without contradiction, says so". And, if most Christians came to say that their story is absolutely non-empirically testable, they would have to disown other parts: the miracles of...
True, there would only be some superficial changes, from a non-believing standpoint. But if you believe that the Bible is literal, then to point this out is to cast doubt on anything else in the book that is magical (or something which could be produced by a more sophisticated race of aliens or such). That is, the probability that this book represents a true story of magical (or technologically far superior) beings gets lower, and the probability that it is a pre-modern fairy tale increases.
And that is what the joke is trying to point out, that these things didn't really happen, they are fictional.
Why doesn't Christianity hinge on there being talking snakes? The snake is part of their origin story, a core element in their belief system. Without it, what happens to original sin? And you would also have to question whether everything else in the Bible is just stories. If it's not the revealed truth of God, why should any of the other stories be real - such as the ones about how Jesus was God's son?
And, if I am wrong in that Christianity doesn't need that particular story to be true, then there is still a weaker form of the argument. Namely that a l...
How do you misunderstand Christianity if you say to people: "There is no evidence of any talking snakes, so it's best to reject any idea that hinges on there existing talking snakes"?
Again, I'm not saying that this is usually a good argument. I'm saying that those who make it present a logically valid case (which is not the case with the monkey-birthing-human argument), and that those who do not accept it, despite believing it to be correct, do so because they feel it isn't a good enough argument to convince others in their group.
I'm ...
Of course theists can say false statements, I'm not claiming that. I'm trying to come up with an explanation of why some theists don't accept a certain form of argument. My explanation is that the theists are embarrassed to join someone who only points out a weak argument that their beliefs are silly. They do not make the argument that the "Talking Snakes" argument is invalid, only that it is not rhetorically effective.
I just don't think it's as easy as saying "talking snakes are silly, therefore theism is false." And I find it embarrassing when atheists say things like that, and then get called on it by intelligent religious people.
Sure, there is some embarrassment in that others may not be particularly good at communicating - saying something like that is just preaching to the choir and won't reach the theist.
But, I do not find anything intellectually wrong with the argument, so what one is being called out on is being a bad propagandist, meme-gen...
Maybe this can work as an analogy:
Right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout sees them up ahead at a small river, and he sees that they are splitting up and going in different directions. An elderly person goes to the left of the river and the five other villagers go to the right. The old one is trying to make a large trail in the jungle, so as to fool the pursuers.
The scout waits for a few minutes until the rest of his squad joins him. They are heading on the right side of the river and will probabl...
And now, 1.5 years later, I've written an extra chapter in the tutorial, but written to be the third chapter:
Advocacy is all well and good. But I can't see the analogy between MIRI and Google, not even regarding the lessons. Google, I'm guessing, was subjected to political extortion, for which the lesson was maybe "Move your headquarters to another country" or "To do extraordinary business you need to pay extra taxes". I do however agree that the lesson you spell out is a good one.
If all PR is good PR, maybe one should publish HPMoR and sell some hundred copies?
Would you like to try a non-intertwined conversation? :-)
When you say lobbying, what do you mean and how is it most effective?
And now it's finished! I've tried to make them shorter than the ones I've already posted and with no political leaning. Here they are:
A Tutorial on Creating a Political Ideology
Choose That Which is Most Important to You
Consider the Most Important Facts
Strive Towards the (Second) Best Society
Change the World in the Most Efficient Manner
Discuss the Most Important Points
How To Construct a Political Ideology - Summary
And here is my own ideology while following this tutorial:
Now I have completed the series. I've tried to make them shorter and with no political leaning. Here they are:
A Tutorial on Creating a Political Ideology
Choose That Which is Most Important to You
Consider the Most Important Facts
Strive Towards the (Second) Best Society
Change the World in the Most Efficient Manner
Discuss the Most Important Points
How To Construct a Political Ideology - Summary
And here is my own ideology while following this tutorial:
Sure, I agree. And I'd add that even those who can show reasonable arguments for their beliefs can get emotional and start to view the discussion as a fight. In most cases I'd guess that those who engage in the debate are partly responsible, by trying to trick the other(s) into traps where they have to admit a mistake, by trying to get them riled up, or by being somewhat rude when dismissing some arguments.
Some time last night (European time) my Karma score dropped below 2, so I can't finish the series here. I'll continue on my blog instead, for those interested.
Unfortunately, my Karma score went below 2 last night (the threshold for being able to post new articles). This might be due to a mistake I made when deciding what facts to discuss in my latest post - it was unnecessary to bring up my own views; I should have picked some random observations. But even if I hadn't posted that article, my score would still be too low, from all the negative reviews of this post. Or of the third post.
In any case, I'll finish the posts on my blog.
The explanation isn't for why people care about politics per se, but for why we care so deeply about politics that we respond to adversity much, much more harshly in political environments than in others. Or: our reactions are disproportionate to the actual risks involved. People become angry when discussing whether something should be privatized or whether taxes should be raised. If one believes that there are some general policies that most benefit from, it's really bad to become angry at those who really should be your allies.
That's different from what I'm used t...
I don't think that the idealistic-pragmatist divide is that great, but if I have to place myself in either camp, then it's the latter. From my perspective this model would not, if followed through, suggest doing anything that will not have a positive impact (from one's own perspective).
I believe I should be able both to show how to think about politics and then use that structure to show that some political action is preferable to none - and by my definition, work on EA and AI is, for those methods I mention above, a political question.
I do have a short answer to the question of why to engage in politics. But it will be expanded in time.
I would beg to differ, as to this post not having any content. It affirms that politics is difficult to talk about; that there's a psychological reason for that; that politics has a large impact on our lives; that a rational perspective on politics requires that one can answer certain questions; that the answer to these questions can be called a political ideology and that such ideologies should be constructed in a certain way. You may not like this way of introducing a subject - by giving a brief picture of what it's all about - but that's another story.
I...
I agree with your second point, that one should be able to determine the value of incremental steps towards goal A in relation to incremental steps towards goal B, and every other goal, and vice versa. I will fix that, thanks for bringing it up!
If you rank your goals, so that any amount of the first goal is better than any amount of the second goal etc., you might as well just ignore all but the first goal.
Ranking does not imply that. It only implies that I prefer one goal over another, not that coming 3% of the way to reaching that goal is more pr...
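To make the distinction concrete, here is a minimal sketch (the goals, weights, and progress numbers are hypothetical, just for illustration): preferring goal A over goal B does not have to mean that any progress on A beats any progress on B, the way a strict lexicographic ordering would.

```python
# Hypothetical illustration: ranking goals by preference is not the same as a
# lexicographic ordering, where any progress on the top goal trumps everything else.

goals = ["A", "B"]               # A is preferred to B
weights = {"A": 0.7, "B": 0.3}   # the same ranking, expressed as weights instead

def lexicographic_better(x, y):
    """Compare progress dicts goal by goal, in rank order."""
    for g in goals:
        if x[g] != y[g]:
            return x[g] > y[g]
    return False  # equal on every goal

def weighted_score(x):
    """With weights, partial progress on a lower-ranked goal can outweigh tiny progress on a higher one."""
    return sum(weights[g] * x[g] for g in goals)

plan_1 = {"A": 0.03, "B": 0.0}   # 3% of the way towards A, nothing towards B
plan_2 = {"A": 0.0,  "B": 0.9}   # nothing towards A, 90% of the way towards B

print(lexicographic_better(plan_1, plan_2))             # True: any progress on A wins
print(weighted_score(plan_1) < weighted_score(plan_2))  # True: the weighted ranking prefers plan_2
```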
Hm, so economy fixing is like trying to make the markets function better? Such as when Robert Shiller created a futures market for house prices, which helped to show that people had invested too much in housing?
No, that was not part of my intentions when I thought of this. But I'd guess that they would be or it won't be used by anyone.
The goal of this sequence is to create a model which enables one to think more rationally about political questions. Or, maybe, societal questions (since I may be using the word politics too broadly for most here). The intention was to create a better tool of thought.
The way I see it, all of these - especially the last point, which sounds unfamiliar, do you have a link? - are potentially political activities. Raising funds for AI or some effective charity is a political action, as I've defined it. The model I'm building in this sequence doesn't necessarily say that it's best to engage in normal political campaigns or even to vote. It is a framework to create one's own ideology. And as such it doesn't prescribe any course of action, but what you put into it will.
True, changed it. Thanks!
Politics may or may not be worth one's while to pursue. The model I'm building will be used to determine if there are any such actions or not, so my full answer to your question will be just that model and after it is built, my ideology which will be constructed by it.
I also have a short answer, but before giving it, I should say that I may be using too broad a definition of politics for you. That is, I would regard getting together to reduce a certain existential risk as a political pursuit. Of course, if one did so alone, there is no political problem to...
The S&P 500 has outperformed gold since quantitative easing began. I don't believe there has been a time in the past four years where a $100 gold purchase would be worth more today than a $100 S&P 500 purchase.
According to Wikipedia, QE1 started in late November 2008. Between November 28th 2008 and December 11th 2012 these were their respective returns:
Gold: 110%
S&P 500: 47.39%
Now, index funds are normally better, but just look at the returns from late 2004 to today:
Gold: 165%
S&P 500: 45%
Gold has been rising more or less steadily over all th...
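For reference, here is a minimal sketch of what the QE1-period figures above mean for the quoted $100 purchases (the return percentages are the ones listed above; nothing else is assumed):

```python
# Growth of a $100 purchase, using the total returns cited above
# (late November 2008 to December 2012).
returns = {"Gold": 1.10, "S&P 500": 0.4739}  # 110% and 47.39%

for asset, total_return in returns.items():
    final_value = 100 * (1 + total_return)
    print(f"$100 in {asset} -> ${final_value:.2f}")

# $100 in Gold -> $210.00
# $100 in S&P 500 -> $147.39
```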
I read The Spirit Level a few years back. Some notes:
a) The writers point out that even though western countries have had a dramatic rise in economic productivity, technological development, and wages, there hasn't been a corresponding rise in happiness among westerners. People are richer, not happier.
b) They hypothesize that economic growth was important up to a certain point (maybe around the 1940s for the US, I'm writing from memory here), but after that it doesn't actually help people. Rising standards of living cannot help people live better.
c...