I don't live in Vancouver at the moment, but I am quite curious about the background breakdown of people who go to LW meetups there. Are they all UBC grad students or something? Any significant number of Chinese attendees?
Depending on the extent of my god mode, I'd either reorganize the planet into a planetary transportation government plus regional city-states. The transportation government runs an intercontinental rail system connecting every city-state and, with overwhelming military might (provided by feudal grants from the city-states), enforces only one right: that of emigration (not immigration; city-states can refuse to permit people to stay within their borders, they're simply forbidden from preventing people from leaving).
Or, if I'm playing full god mode, I'd dismantle the planets, turn them into a reorientable Dyson sphere around the sun, and use a combination of solar sails and selective reflection to turn our entire solar system into a fusion-powered galactic spaceship, then cruise the galaxy looking for something more interesting. (By absorbing solar emissions on one hemisphere of the sun, and on the other hemisphere reflecting half back into the sun and letting half escape, the sun's energy can be used to accelerate it, albeit very slowly. If this still sounds ridiculous, imagine strapping the sun to a rocket; that's roughly what would be happening, only with ridiculously low thrust.)
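As a back-of-envelope check on the "ridiculously low thrust" claim, here is a rough order-of-magnitude estimate (my own numbers and idealizations, not from the comment above): a thruster of this kind emits the sun's radiation asymmetrically, so the net photon thrust is at most on the order of the solar luminosity divided by the speed of light.

```python
# Order-of-magnitude sketch of the asymmetric-reflection ("sun as a rocket")
# scheme. All values are round approximations chosen for illustration.

L_SUN = 3.8e26      # solar luminosity, W
C = 3.0e8           # speed of light, m/s
M_SUN = 2.0e30      # solar mass, kg (dominates the solar system's total mass)

# Idealized half-sky mirror: net thrust on the order of L / (2c).
thrust = L_SUN / (2 * C)    # newtons
accel = thrust / M_SUN      # m/s^2

print(f"thrust ~ {thrust:.1e} N")
print(f"acceleration ~ {accel:.1e} m/s^2")

# Accumulated over a billion years, this still only yields ~10 km/s:
SECONDS_PER_GYR = 3.15e16
print(f"delta-v after 1 Gyr ~ {accel * SECONDS_PER_GYR / 1000:.0f} km/s")
```

The acceleration works out to roughly 3e-13 m/s^2, i.e. around 10 km/s of delta-v per billion years, which supports the "very slowly" caveat.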
Your initial request doesn't exactly limit the scope of powers in any foreseeable way, except to limit the means.
I'd either reorganize the planet into a planetary transportation government and regional city-states - the planetary transportation government runs an intercontinental rail system that connects every city-state and enforces with overwhelming military might (provided by feudal grants from city states) only one right, that of emigration.
Sounds like the logical extension of libertarian ideas that accept the concept of a social contract. I think some sort of externality management needs to exist as well.
...recursive self-improvement doesn't in any obvious way require changing our understanding of the laws of physics.
Some people think that complexity issues are even more fundamental than the laws of physics. On what basis do people believe that recursive self-improvement would be uncontrollably fast? It is simply easy to believe because it is a vague concept and none of those people have studied the relevant math. The same isn't true for FTL phenomena because many people are aware of how unlikely that possibility is.
The same people who are very skeptical in the case of faster than light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement because it is easy to do so, because there is no evidence either way.
Throwing out a theory as powerful and successful as relativity would require very powerful evidence, and at this point the evidence doesn't fall that way at all.
On the other hand, the lower bound for AGI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence to the material and energy costs of a human (breaking no physical laws, unless one holds that the mind is immaterial) would result in massive social displacement that would require serious planning beforehand. I don't think it is very likely that we'd see an AI that can laugh at EXPSPACE problems, but it only needs to be too smart to be easily controlled to mess everything up.
I remember a claim that measurements of a number of physical constants are subject to anchoring: a previous result leads researchers to adjust their level of scrutiny when looking for errors, so the correct value is only slowly converged upon. Perhaps this is something similar, where a high-profile result makes researchers look for that kind of outcome.
I suffer from the same problem, and this topic has been talked about quite a bit before, though I don't think there is an accepted solution to the problem posed yet.
As I understand it, this confusion is not at all due to the chosen activity, but to skeptical methods exposing the illusion of the mind's coherence. Thinking harder points to the traditional sense of self being more a flawed map than some physical-level construct; however, it does not suggest a comfortable, stable alternative in which to organize mental processes.
In the short term, one thing that has been on my mind is how to merge the very counterintuitive empirical, outside view of the self with the inside view without running into ineffective introspective loops.
Haven't read the book so will have to go on reviews....
It appears to me this can be viewed as a "utility function" memetic virus trying to spread by modifying its host without regard to the host's ultimate survival. In any case, the winning strategy is to build a better replicator, and "rebellion" doesn't sound like the right word for it.
Sorry, I will fix that.
Saying "It is important to me to be rational" shows that it is important to me to say "It is important for me to be rational," so the words show most strongly that it is important to me to seem like rationality is important to me, even though the words mean that rationality is important to me.
So if I put on my business cards "I am committed to rationality", I display a commitment to seeming to be committed to rationality, and this thing I am actually showing my commitment to is describable as "rationality".
If I don't have much of a commitment to rationality but do have a commitment to "rationality", then I only have the appearance of a commitment to actual rationality. This isn't a commitment, it is a "commitment".
If I don't actually put anything on my business cards and merely say it's a good idea and that I will do it, I'm not truly committed to "rationality", i.e. seeming rational. So I only have a false commitment to "rationality", a "commitment" to it.
The problem with icons and other speech is that saying words with meanings merely expresses that meaning; it doesn't embody it, though it does embody a different meaning. This is why no icon can embody a commitment to rationality; only behaving in certain ways can, with behavior including speech.
If one is going to a bar, a photo ID embodies a commitment to rationality, and if one is in a cash only toll lane, money embodies a commitment to rationality; an icon designed to express rationality usually won't embody it outside of artificial scenarios such as one in which a crazy person is going around kicking everyone without the icon.
This sounds like a whole paragraph on how "talk is cheap" and thus has little value compared to costly signaling that actually demonstrates something.
If one thinks about it that way, a generalized community symbol doesn't really do anything; what is needed instead is something that ties directly to the user and his abilities and contributions. What would work is a piece of code that provides information on the user's LessWrong account, plus other tracking tools and tests that demonstrate rationality. This may result in competition to "karma up" on the site and perhaps some perverse behaviour, but it should be controllable with good moderation.
Some part of me feels like building a customized barcode format that allows for a stylish symbol for the general community while also encoding customized information for each user, but that is likely overkill at the moment.
I'd assign higher probability to the latter, given that he was quite effective at selling you a warranty.
I would also go with the latter, given that the situation appears framed such that the salesman had arranged the matter such that he could assume compliance/agreement rather than having to specifically acquire it. That's a high-pressure sales tactic and it's a classic.
Hard-sell stuff like this happens to me when getting a gym membership (expensive!) and in a number of other cases where the salesperson brings up a set of reasonable claims (though of course highly biased and selected ones) in a friendly manner to get me into the agreeing frame before pressing for the sale.
I find it helps to have defined my requirements before talking to any salesperson, if not to have built up a reflexive response to salespeople, and to avoid trying to update on likely incorrect, biased, and hard-to-process information in time-sensitive conversations. It also makes sense to pay extra attention to non-routine purchases, since those sales tactics haven't been inoculated against and may take more thought.
I think the reason organizations haven't gone 'FOOM' is the lack of a successful "goal-focused self-improvement method." There is no known way of building an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even figured out how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change; and I don't think the sparse inter-linkages of a real organization can store or process such information without outsourcing a significant part to human-scale processing, so an organization couldn't even have stumbled upon such a method by chance.
In theory there is no reason why a computational device built out of humans can't go FOOM. In practice it is simply harder: a system that runs on humans is extremely noisy and slow to change ('education' is slow), and countless experimental constraints exist with no robust engineering solutions. Management isn't even a full science at this point. The selection power of existing theory still leaves open a vast space of unfocused exploration, and only a tiny, unknown subset of that space can go FOOM. Imagine the space of all valid training manuals, organizational structures, physical aid assets, recruitment policies, and so on, and how little we know about finding the FOOMing combination.
AGI running on electronic computers is a bigger threat than other recursive-intelligence-improvement problems because the engineering barriers are lower and the rate of progress is higher. Most other recursive self-improvement strategies take place at "human" time scales and do not leave humans completely helpless.