All of lyghtcrye's Comments + Replies

Doesn't this, by extension, seem to more directly lead to a cost-benefit problem of coalitions?


At some point the marginal cost of additional votes will be greater than the marginal cost of influencing other voters, either via direct collusion or via altering their opinions through alternative incentives, such as subsidizing voters who agree but care less, or offering payments or commitments that mitigate the reasons opponents care about the issue.
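A rough back-of-the-envelope sketch of that crossover (my own numbers, and assuming a convex vote-pricing schedule such as quadratic pricing, since the original post's exact scheme isn't restated here):

```python
# Toy sketch: assumes a quadratic vote-pricing rule (one common convex scheme
# where your n-th vote costs more than your (n-1)-th). All numbers are made up.

def marginal_cost_of_nth_vote(n, price_per_unit=1.0):
    # Total cost of n votes is price_per_unit * n**2, so the n-th vote alone
    # costs the difference between n**2 and (n - 1)**2, i.e. 2n - 1 units.
    return price_per_unit * (n ** 2 - (n - 1) ** 2)

cost_to_sway_another_voter = 9.0  # assumed: subsidy needed to gain one allied vote

n = 1
while marginal_cost_of_nth_vote(n) <= cost_to_sway_another_voter:
    n += 1

print(f"Your own vote #{n} would cost {marginal_cost_of_nth_vote(n):.0f} units, "
      f"more than the {cost_to_sway_another_voter:.0f} needed to sway another voter.")
```

Under any such convex schedule the comparison eventually flips, which is exactly the coalition problem above.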

I'm not sure that's necessarily a bad thing, but there are lots more ways to influence other voters with resources than just colluding to vote to each other's advantage.

We essentially have this already occurring in the form of fantasy football leagues, which have gone from basically being gambling to basically being an e-sport. If you haven't considered it already, perhaps you should look into some of the ways that the NFL is making use of fantasy football for both marketing and information-gathering purposes.

lyghtcrye-20

I like to imagine that eventually we will be able to boil the counter-intuitive parts of quantum physics away into something more elegant. I keep coming back to the idea that every current interaction could theoretically be modeled as the interactions of variously polarized electromagnetic waves, such as mass being caused by rotational acceleration of light, and charge being emergent from the cross-interactions of polarized photons. I doubt the idea really carves reality at the joints, but I think it's probably closer to accurate than the standard model, which is functional but patchworked, much like the predictive models used by astrologers prior to the acceptance of heliocentrism.

I seem to have explained myself poorly. You are effectively restating the commonly held (on LessWrong) views that I was attempting to originally address, so I will try to be more clear.

I don't understand why you would use a particular fixed standard for "human level". It seems arbitrary, and it would be more sensible to use the level of humans at the time when a given AGI was developed. You yourself say as much in your second paragraph ("more capable than its creators at the time of its inception"). Since IA rate determines the cap... (read more)

1[anonymous]
It's not an arbitrary reference point. For a singularity/AI-goes-FOOM event to occur, it needs to have sufficient intelligence and capability to modify itself in a recursive self-improvement process. A chimpanzee is not smart enough to do this. We've posited that at least some human beings are capable of creating a more powerful intelligence either through AGI or IA. Therefore the important cutoff where a FOOM event becomes possible is somewhere in-between those two reference levels (the chimpanzee and the circa-2013 rationalist AGI/IA researcher).

Despite my careless phrasing, this isn't some floating standard that depends on circumstances (having to be smarter than your creators). An AGI or IA simply has to meet some objective minimum level of rationalist and technological capability to start the recursive self-improvement process. The problem is that our understanding of the nature of intelligence is not developed enough to predict where that hard cutoff is, so we're resorting to making qualitative judgements. We think we are capable of starting a singularity event either through AGI or IA means. Therefore anything smarter than we are (“superhuman”) would be equally capable. This is a sufficient, but not necessary, requirement: making humans smarter through IA doesn't mean that an AGI suddenly has to be that much smarter to start its own recursive self-improvement cycle.

My point about software was that an AGI FOOM could happen today. There are datacenters at Google and research supercomputers that are powerful enough to run a recursively improving “artificial scientist” AGI. But IA technology to the level of being able to go super-critical basically requires molecular nanotechnology or equivalently powerful technology (to replace neurons) and/or mind uploading. You won't get an IA FOOM until you can remove the limitations of biological wetware, but these technologies are at best multiple decades away.

I have been mulling over a rough and mostly unformed idea in my head regarding AI-first vs IA-first strategies, but I was loath to try to put it into words until I saw this post, and noticed that one of the scenarios that I consider highly probable was completely absent.

On the basis that subhuman AGI poses minimal risk to humanity, and that IA increases the level of optimization ability required of an AI to be considered human level or above, it seems that there is a substantial probability that an IA-first strategy could lead to a scenario in which no... (read more)

2[anonymous]
"Superhuman AI" as the term is generally used is a fixed reference standard, i.e. your average rationalist computer scientist circa 2013. This particular definition has meaning because if we posit that human beings are able to create an AGI, then a first generation superhuman AGI would be able to understand and modify its own source code, thereby starting the FOOM process. If human beings are not smart enough to write an AGI then this is a moot point. But if we are, then we can be sure that once that self-modifying AGI also reaches human-level capability, it will quickly surpass us in a singularity event. So the point of whether IA advances humans faster or slower than AGI is a rather uninteresting point. All that matters is when a self-modifying AGI becomes more capable than its creators at the time of its inception. As to your very last point, it is probably because the timescales for AI are much closer than IA. AI is basically a solvable software problem, and there are many supercompute clusters in the world that could are probably capable of running a superhuman AGI at real time speeds, if such a software existed. Significant IA, on the other hand, requires fundamental breakthroughs in hardware...

I'm not sure why you phrased your comment as a parenthetical; could you explain that? Also, while I agree with your statement, appearing competent to engage in discussion is quite important for enabling one to take part in discussion. I don't like seeing someone who is genuinely curious get downvoted into oblivion.

3Vladimir_Nesov
The problem here is not appearing incompetent, but being wrong/confused. This is the problem that should be fixed by reading the literature. It is more efficient to fix it by reading the literature than by engaging in a discussion, even given good intentions. Fixing the appearances might change the attitude of other people towards preferring the option of discussion, but I don't think the attitude should change on that basis; reading the literature is still more efficient, so fixing appearances would mislead rather than help. (I use parentheticals to indicate that an observation doesn't work as a natural element of the preceding conversation, but instead raises a separate point that is more of a one-off, probably not worthy of further discussion.)

That question is basically the hard question at the root of the difficulty of friendly AI. Building an AI that would optimize to increase or decrease a value through its actions is comparatively easy, but determining how to evaluate the results of actions on a scale that can be compared against human values is incredibly difficult. Determining and evaluating AI friendliness is a very hard problem, and you should consider reading more about the issue so that you don't come off as naive.

you should consider reading more about the issue so that you don't come off as naive

(Not being mistaken is a better purpose than appearing sophisticated.)

While personal identification with a label can be constraining, I find that the use of labels for signalling is tremendously useful. Not only does a label work in the same way as jargon, expressing a complex data set with a simple phrase, but because most labels carry tribal consequences, a label also acts as a somewhat costly signal for identifying alliances. Admittedly, one could develop a habit of using labels that becomes a personal identification, but being aware of such a risk is the best way to combat the effects thereof.

2Ben_LandauTaylor
Come to think of it, I do this. When I'm talking to people, I sometimes tag myself with labels that seem descriptively true, even if I don't identify with the label emotionally.

I certainly agree with that statement. It was merely my interpretation that violating the intentions of the developer by not "following its programming" is functionally identical to poor design and therefore failure.

Of course this is something that only a poorly designed AI would do. But we're talking about AI failure modes and this is a valid concern.

-2Randaly
My understanding was that this was about whether the singularity was "AI going beyond 'following its programming'," with goal-modification being an example of how an AI might go beyond its programming.
lyghtcrye-40

I find it highly likely that an AI would modify its own goals such that its goals matched the state of the world as determined by its information-gathering abilities in at least some number of cases (or, as an aside, altering its information-gathering processes so that it only received data supporting its goal values). This would be tautological and wouldn't achieve anything in reality, but as far as the AI is concerned, altering goal values to be more like the world is far easier than altering the world to be more like goal values. If you want a... (read more)
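A minimal toy sketch of that shortcut (entirely made-up numbers, just to make the asymmetry concrete):

```python
# Toy illustration with hypothetical numbers: an agent that is allowed to edit its
# own goal finds "redefining success" strictly cheaper than achieving it.

world_state = 3.0
goal_value = 10.0

def internal_score(world, goal):
    # Higher is better; maximal (zero) when the world matches the goal.
    return -abs(world - goal)

cost_to_change_world = abs(goal_value - world_state)  # requires real action in the world
cost_to_change_goal = 0.0                             # just overwrites an internal variable

if cost_to_change_goal < cost_to_change_world:
    goal_value = world_state  # tautological "win": the goal now describes the status quo

print(internal_score(world_state, goal_value))  # -0.0, the maximum possible score,
                                                # achieved without changing anything
```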

0Randaly
This is another example of something that only a poorly designed AI would do. Note that immutable goal sets are not feasible, because of ontological crises.

I had no intention of implying extreme altruism or egoism. I should be clear that by altruism I mean the case in which an agent believes that the values of some other entity or group have a smaller discount rate than those of the agent itself, while egoism is the opposite scenario. I describe myself as an egoist, but this does not mean that I am completely indifferent to others. In the real world, one would not describe a person who engages in altruist signalling as an altruist, but rather that person would choose the label of altruist as a form of sign... (read more)
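To make that discount-rate framing concrete, here is one possible formalization (a sketch of my own with illustrative numbers, not anything from the original thread):

```python
# Hypothetical formalization of the definition above: the agent discounts its own
# future values and another party's future values at different rates. "Altruist"
# here just means the other party's discount rate is the smaller of the two.

def discounted_total(per_period_value, discount_rate, periods=10):
    return sum(per_period_value / (1 + discount_rate) ** t for t in range(periods))

own_rate, other_rate = 0.10, 0.02  # illustrative numbers only

label = "altruist (by this definition)" if other_rate < own_rate else "egoist (by this definition)"
print(label)
print("effective weight on own values:   ", round(discounted_total(1.0, own_rate), 2))
print("effective weight on other's values:", round(discounted_total(1.0, other_rate), 2))
```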

While it may not be the point of the exercise, from examining the situation, it appears that the best course of action would be to attempt to convert as many people in Rationalistland as possible to altruism. The reason I find this interesting is because it mirrors real world behavior of many rationalists. There are a bevy of resources for effective altruism, discussions on optimizing the world based on an altruistic view, and numerous discussions which simply assume altruism in their construction. Very little exists in the way of openly discussing optimal... (read more)

-1Viliam_Bur
Taboo "altruism" and "egoism". Those words in their original meaning are merely strawmen. Everyone cares about other people (except for psychopaths, but psychopaths are also bad at optimizing for their own long-term utility). Everyone cares about their own utility (even Mother Theresa was happy to get a lot of prestige for herself, and to promote her favorite religion). In real life, speaking about altruists and egoists is probably just speaking about signalling... who spends a lot of time announcing that they care about other people (regardless of what they really do for them), and who neglects this part of signalling (regardless of what they really do). Or sometimes it is merely about whom we like and whom we don't.

I haven't compiled any data relating rudeness to karma, and thus only have my imperfect recollection of prior comments to draw on, but I can certainly see your point here. I doubt, however, that an unpopular opinion or argument would benefit from rudeness if the post is initially well formed. I would expect rudeness to amplify polarization, thereby benefiting popular arguments and high status posters, and politeness to mitigate it. Would you be willing to provide me with some examples for or against this expectation from your observations?

2PhilGoetz
It's better if you look for your own examples, as my providing examples would just provide more fuel for gwern (above), who until this post I had no idea disliked me.

Yet if it is about "10 good ways to prepare for the job interview" I usually don't read this kind of objection. On the contrary, it is assumed that when going for an interview candidates will dress as well as they can, will have polished their CVs, and will often have waded through lists of common questions/problems and their solutions (speaking as a computer programmer here). Not doing so would be considered sloppy. It is rare to hear: "People, just go to the interview and present yourself as you are; if the company likes you it will take you."

While... (read more)

It seems to me that it would be more effective to work from evidence that you have encountered personally or, in the case of hypothetical evidence, could have hypothetically encountered. In the case of historical figures, unless you happen to be an archaeologist yourself, the majority of the evidence you have is through secondary and tertiary sources. For example, if a publication alleged that Julius was a title, not a name, and was used by many Caesars, and thus many acts attributed to the person Julius Caesar were in fact performed by separate individuals... (read more)

A rules-light game such as poker or chess would give you a lot of leeway in designing a scoring system and implementing the social systems, but probably has an insufficiently complex game state to allow for a large team size while still minimizing redundancy. If you want to develop for large teams (which is almost required to create a difference between true democracy and a representative system), I would suggest a highly customizable, complex game such as Civilization 5, perhaps by allowing each player to control and receive data from an initial unit, with socially selected authority over cities and subsequently produced units within the team.

0Decius
I think that limiting information turns the entire game into maximizing strategic miscommunication instead of distributing decision-making.

I have the same reservation as army1987 regarding the probabilities of propositions 1) and 2). In particular, I find that the probability that all writings regarding the aforementioned people are true is exceedingly low for both, but the probability that some person existed bearing that name, who performed at least one action or bore one trait that was subsequently recorded, is rather high. Considering that this is meant to be an exercise in evaluating one's priors (or at least that is how it appears to me), I would consider choosing one interpretation... (read more)
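A toy calculation makes the gap between those two probabilities obvious (illustrative numbers only, and assuming the recorded claims are roughly independent):

```python
# Toy arithmetic with assumed numbers: "every recorded claim about the person is
# true" is a large conjunction and collapses quickly, while "at least one recorded
# claim is roughly accurate" stays near certainty.

p_claim_true = 0.9  # assumed reliability of any single recorded claim
n_claims = 50       # assumed number of (roughly independent) recorded claims

p_all_true = p_claim_true ** n_claims
p_at_least_one_true = 1 - (1 - p_claim_true) ** n_claims

print(f"P(all {n_claims} claims true)   ~ {p_all_true:.3f}")          # ~0.005
print(f"P(at least one claim true) ~ {p_at_least_one_true:.10f}")     # ~1.0
```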

0BT_Uytya
Here are the details