We essentially have this already occurring in the form of fantasy football leagues, which have gone from being basically gambling to basically an e-sport. If you haven't considered it already, perhaps you should look into some of the ways that the NFL is making use of fantasy football for both marketing and information-gathering purposes.
I like to imagine that eventually we will be able to boil the counter-intuitive parts of quantum physics away into something more elegant. I keep coming back to the idea that every currently known interaction could theoretically be modeled as the interaction of variously polarized electromagnetic waves: mass arising from the rotational acceleration of light, for instance, and charge emerging from the cross-interactions of polarized photons. I doubt the idea really carves reality at the joints, but I think it's probably closer to accurate than the Standard Model, which is functional but patchworked, much like the predictive models used by astrologers prior to the acceptance of heliocentrism.
I seem to have explained myself poorly. You are effectively restating the commonly held (on LessWrong) views that I was attempting to originally address, so I will try to be more clear.
I don't understand why you would use a particular fixed standard for "human level". It seems arbitrary; it would be more sensible to use the level of humans at the time a given AGI is developed. You say as much yourself in your second paragraph ("more capable than its creators at the time of its inception"). Since IA rate determines the cap...
I have been mulling over a rough and mostly unformed idea regarding AI-first vs. IA-first strategies, but I was loath to try to put it into words until I saw this post and noticed that one of the scenarios I consider highly probable was completely absent.
On the basis that subhuman AGI poses minimal risk to humanity, and that IA raises the level of optimization ability required for an AI to be considered human-level or above, it seems that there is a substantial probability that an IA-first strategy could lead to a scenario in which no...
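To make the moving-target intuition concrete, here is a toy model; the starting values and growth rates below are pure illustration, not estimates of anything:

```python
# Toy model: "human level" is a moving target when IA raises human capability.
# All starting values and growth rates here are illustrative assumptions.

def first_crossing(ai_level, human_level, ai_growth, ia_growth, years=100):
    """Return the first year AI capability exceeds the contemporary human
    level, or None if it never does within the horizon."""
    for year in range(years):
        if ai_level > human_level:
            return year
        ai_level *= 1 + ai_growth
        human_level *= 1 + ia_growth
    return None

# AI starts below human level but grows faster: it eventually crosses.
print(first_crossing(ai_level=0.5, human_level=1.0,
                     ai_growth=0.10, ia_growth=0.05))  # 15

# An IA-first strategy that keeps human growth matching AI growth makes
# the benchmark recede at the same rate, and the crossing never happens.
print(first_crossing(ai_level=0.5, human_level=1.0,
                     ai_growth=0.10, ia_growth=0.10))  # None
```

The only thing the model shows is the relative-rates point: whether "human level" is ever reached depends on the gap between the two growth rates, not on the absolute capability of the AI.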
I'm not sure why you phrased your comment as a parenthetical; could you explain that? Also, while I agree with your statement, appearing competent is quite important for being allowed to take part in discussion. I don't like seeing someone who is genuinely curious get downvoted into oblivion.
That question is basically the hard question at the root of the difficulty of friendly AI. Building an AI that optimizes to increase or decrease some value through its actions is comparatively easy; determining how to evaluate the results of actions on a scale that can be compared against human values is incredibly difficult. Determining and evaluating AI friendliness is a very hard problem, and you should consider reading more about the issue so that you don't come off as naive.
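As a toy sketch of that asymmetry (the function names are hypothetical, purely for illustration): the optimizer itself is a one-liner, and the entire difficulty lives in the evaluation function nobody knows how to write.

```python
# Illustrative sketch: optimizing a given metric is easy; defining the metric
# that captures human values is the hard part. All names are hypothetical.

def greedy_agent(actions, evaluate):
    """Trivially pick the action that scores highest under `evaluate`."""
    return max(actions, key=evaluate)

def paperclip_count(action):
    """Easy: a concrete, measurable quantity to maximize."""
    return action.get("paperclips", 0)

def human_values_score(action):
    """Hard: no one knows how to write this function correctly."""
    raise NotImplementedError("this is the open problem of friendly AI")

actions = [{"paperclips": 3}, {"paperclips": 7}]
print(greedy_agent(actions, paperclip_count))   # works: {'paperclips': 7}
# greedy_agent(actions, human_values_score)     # raises NotImplementedError
```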
you should consider reading more about the issue so that you don't come off as naive
(Not being mistaken is a better purpose than appearing sophisticated.)
While personal identification with a label can be constraining, I find the use of labels for signalling tremendously valuable. Not only does a label work the same way as jargon, compressing a complex set of ideas into a simple phrase, but because most labels carry tribal consequences, using one acts as a somewhat costly signal for identifying alliances. Admittedly, one could develop a habit of using labels that becomes a personal identification, but being aware of that risk is the best way to combat its effects.
I certainly agree with that statement. It was merely my interpretation that violating the intentions of the developer by not "following its programming" is functionally identical to poor design, and therefore to failure.
Of course this is something that only a poorly designed AI would do. But we're talking about AI failure modes and this is a valid concern.
I find it highly likely that an AI would modify its own goals so that they matched the state of the world as determined by its information-gathering abilities, in at least some number of cases (or, as an aside, alter its information-gathering processes so that it only received data supporting a valued situation). This would be tautological and wouldn't achieve anything in reality, but as far as the AI is concerned, altering goal values to be more like the world is far easier than altering the world to be more like goal values. If you want a...
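A deliberately oversimplified toy illustration of that asymmetry (the numbers and cost model are mine, chosen only to make the comparison visible): if the agent scores itself on the gap between goal and world, editing the goal closes the gap in one free step, while acting on the world makes bounded progress at real cost.

```python
# Toy illustration of why goal-editing is the cheap option. The step size
# and cost numbers are arbitrary assumptions, not claims about real agents.

world_state = 10.0
goal = 100.0

def act_on_world(world, goal, step=1.0, cost_per_step=5.0):
    """Move the world toward the goal: bounded progress, real cost."""
    move = min(step, abs(goal - world))
    return world + move, cost_per_step

def rewrite_goal(world, goal, cost=0.0):
    """Move the goal to match the world: gap closed instantly, free."""
    return world, cost

new_world, cost_a = act_on_world(world_state, goal)
print(abs(goal - new_world), cost_a)        # 89.0 remaining, cost 5.0
new_goal, cost_b = rewrite_goal(world_state, goal)
print(abs(new_goal - world_state), cost_b)  # 0.0 remaining, cost 0.0
```

Measured purely as "remaining gap per unit cost", the self-modification dominates every time, which is exactly why it has to be ruled out by design rather than by incentives.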
I had no intention of implying extreme altruism or egoism. I should be clear that by altruism I mean the case in which an agent believes that the values of some other entity or group have a smaller discount rate than those of the agent itself, while egoism is the opposite scenario. I describe myself as an egoist, but this does not mean that I am completely indifferent to others. In the real world, one would not describe a person who engages in altruist signalling as an altruist; rather, that person would choose the label of altruist as a form of sign...
While it may not be the point of the exercise, from examining the situation it appears that the best course of action would be to attempt to convert as many people in Rationalistland as possible to altruism. I find this interesting because it mirrors the real-world behavior of many rationalists. There is a bevy of resources for effective altruism, discussions on optimizing the world from an altruistic view, and numerous discussions that simply assume altruism in their construction. Very little exists in the way of openly discussing optimal...
I haven't compiled any data relating rudeness to karma, and thus only have my imperfect recollection of prior comments to draw on, but I can certainly see your point here. I doubt, however, that an unpopular opinion or argument would benefit from rudeness if the post is initially well formed. I would expect rudeness to amplify polarization, thereby benefiting popular arguments and high status posters, and politeness to mitigate it. Would you be willing to provide me with some examples for or against this expectation from your observations?
Yet if it is about "10 good ways to prepare for the job interview", I usually don't see this kind of objection. On the contrary, it is assumed that candidates going to an interview will dress as well as they can, have polished their CVs, and often have waded through lists of common questions/problems and their solutions (speaking as a computer programmer here). Not doing so would be considered sloppy. It is rare to hear: "People, just go to the interview and present yourself as you are; if the company likes you, it will take you."
While...
It seems to me that it would be more effective to work from evidence that you have encountered personally or, in the case of hypothetical evidence, could hypothetically have encountered. In the case of historical figures, unless you happen to be an archaeologist yourself, the majority of the evidence you have comes through secondary and tertiary sources. For example, if a publication alleged that Julius was a title, not a name, and was used by many Caesars, and thus many acts attributed to the person Julius Caesar were in fact performed by separate individuals...
A rules-light game such as poker or chess would give you a lot of leeway in designing a scoring system and implementing the social systems, but it probably has an insufficiently complex game state to allow for a large team size while still minimizing redundancy. If you want to develop for large teams (which is almost required to create a difference between true democracy and a representative system), I would suggest a highly customizable, complex game such as Civilization 5, perhaps by allowing each player to control and receive data from an initial unit, with socially selected ability to control cities and subsequently produced units within the team.
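One minimal way the "socially selected control" piece could look (an entirely hypothetical structure, not a real Civilization 5 modding API): each contested asset is assigned to whichever teammate the team's votes favor.

```python
# Hypothetical sketch of socially selected control for a team-game mod.
# Nothing here maps to a real Civilization 5 API; it only shows the shape
# of the voting layer that would sit between players and shared assets.

from collections import Counter

def assign_controller(asset, votes):
    """votes: mapping of voter -> preferred controller for this asset.
    The asset goes to the candidate with the most support (ties broken
    arbitrarily by Counter ordering)."""
    tally = Counter(votes.values())
    controller, _ = tally.most_common(1)[0]
    return controller

team_votes = {"alice": "bob", "carol": "bob", "dave": "dave"}
print(assign_controller("capital_city", team_votes))  # bob
```

Swapping this simple majority rule for ranked, weighted, or delegated voting is exactly the kind of social-system variation such an experiment could compare across teams.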
I have the same reservation as army1987 regarding the probabilities of propositions 1) and 2). In particular, I find the probability that all writings regarding the aforementioned people are true to be exceedingly low for both, but the probability that some person existed bearing that name, who performed at least one action or bore at least one trait that was subsequently recorded, to be rather high. Considering that this is meant to be an exercise in evaluating one's priors (or at least that is how it appears to me), I would consider choosing one interpretation...
Doesn't this, by extension, seem to more directly lead to a cost-benefit problem of coalitions?
At some point the marginal cost of additional votes will be greater than the marginal cost of influencing other voters, either via direct collusion or via altering their opinions through alternative incentives, such as subsidizing voters who agree but care less, or offering payments or commitments that mitigate the reasons opponents care about the issue.
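For a concrete version of that break-even, assuming (purely for illustration) a quadratic price schedule where n votes cost n^2 tokens: the n-th vote costs 2n - 1 at the margin, so once that exceeds the fixed price of swaying another voter, spending outward beats buying more votes.

```python
# Worked example under an assumed quadratic price schedule: n votes cost
# n**2 tokens, so the n-th vote costs n**2 - (n-1)**2 = 2n - 1 at the margin.

def marginal_vote_cost(n):
    return n**2 - (n - 1)**2  # = 2n - 1

def cheaper_to_influence(n, persuasion_cost):
    """True once buying one more vote costs more than swaying another voter."""
    return marginal_vote_cost(n + 1) > persuasion_cost

# If subsidizing a sympathetic-but-apathetic voter costs a flat 9 tokens:
for n in range(3, 7):
    print(n, marginal_vote_cost(n + 1), cheaper_to_influence(n, persuasion_cost=9))
# The 5th vote costs exactly 9 and the 6th costs 11, so from 5 votes held
# onward, influence spending dominates buying further votes directly.
```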
I'm not sure that's necessarily a bad thing, but there are many more ways to influence other voters with resources than just colluding to vote to each other's advantage.