Comment author: shminux 11 September 2012 08:52:09PM *  2 points [-]

Whatever past trends were, the rate of progress must slow as we approach physical limits.

Past "physical limits" once considered immutable have often been broken. It was not long ago that 9600 bps was considered the limit for data transmission over phone lines. Replacing cattle with vat meat grown in factories powered by solar energy and methane digesters can likely alleviate many potential food shortages and environmental issues.

There is no guarantee that there are no true limiting factors on the rate of progress, but it is extremely bold (and misguided) to proclaim that you know in advance what they will be.

Comment author: Ford 13 September 2012 05:25:22PM 0 points [-]

I agree that some "limits" have proved illusory. But do you have an example where a limit based on conservation of matter or energy was surpassed?

I assume solar technology will continue to improve, but it would take several orders of magnitude of improvement for food-from-solar cells to be cost-competitive with cattle grazing low-value land. What does an acre of solar cells cost?
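To make the orders-of-magnitude question concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption chosen for the sake of the arithmetic (installed PV cost per square meter, price of low-value rangeland), not a sourced figure; the point is only the shape of the comparison.

```python
# Back-of-envelope: capital cost of an acre of solar cells vs. an acre of
# cheap grazing land. All dollar figures are illustrative assumptions.

ACRE_M2 = 4046.86              # square meters per acre

solar_cost_per_m2 = 100.0      # assumed installed PV cost, $/m^2
cost_per_acre_solar = solar_cost_per_m2 * ACRE_M2

cost_per_acre_grazing = 500.0  # assumed price of low-value rangeland, $/acre

ratio = cost_per_acre_solar / cost_per_acre_grazing
print(f"Solar acre ≈ ${cost_per_acre_solar:,.0f}, "
      f"about {ratio:,.0f}x the cost of an acre of cheap grazing land")
```

Under these assumed numbers the capital-cost gap alone is roughly three orders of magnitude, before counting the efficiency of turning electricity into food; swapping in better numbers changes the ratio but not easily the conclusion.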

Comment author: [deleted] 11 September 2012 09:39:36PM 2 points [-]

On the other hand, if we reach a point where stockpiling human urine to supply phosphorous for agriculture (as opposed to merely conserving it locally) is economically viable, that implies some pretty scary things about the general availability of food and knock-on effects for general social stability. I'm not sure how much of it we're (literally) pissing into the sewers and whatnot, but I'd be surprised if agricultural runoff weren't a much greater percentage of the total.

Comment author: Ford 13 September 2012 05:17:57PM 1 point [-]

Yes, we should start with the low-hanging fruit. For example, nutrients in human waste are a small fraction of what's in animal waste, and the latter should be easier to capture. Even so, much of the manure still gets applied at pollution-causing rates near barns and feedlots, rather than paying the cost of transport to where it is most needed.

But your point about food availability and social stability is more important. Recycling urine seems like a good idea. But a society that needs to recycle urine will be a society where many people are spending most of their income on food and others are going hungry, as was the case for the societies mentioned above.

Comment author: Ford 11 September 2012 06:21:52PM 1 point [-]

Whatever past trends were, the rate of progress must slow as we approach physical limits. For example, there must be some minimum size for a reliable resistor. So even if we accept the inevitability of certain past trends, extrapolation is risky.

Once we've used most of the oil (or phosphate, for which there's no substitute), past trends driven by culture, technology, or economics won't continue. In agriculture, best-farmer yields haven't increased much since 1980, although averages go up as they buy their neighbors' land. (My recent book on Darwinian Agriculture discusses some prospects for improvement, but still within limits.) Cheap computer power may substitute for previous forms of education, entertainment, and travel, but not for food. I doubt that enough people will upload their brains to make a difference.

Comment author: homunq 05 January 2012 03:37:41PM 5 points [-]

Corporations optimize profit. Governments optimize (among other things) the monopoly of force. Both of these goals exhibit some degree of positive feedback, which explains how these two kinds of entities have developed some superhuman characteristics despite their very flawed structures.

Since corporations and governments are now superhuman, it seems likely that one of them will be the first to develop AI. Since they are clearly not 100% friendly, it is likely that they will not have the necessary motivation to do the significantly harder task of developing friendly AI.

Therefore, I believe that one task important to saving the world is to make corporations and governments more Friendly. That means engaging in politics; specifically, in meta-politics, that is, the politics of reforming corporations and governments. On the government side, that means things like reforming election systems and campaign finance rules; on the corporate side, that means things like union regulations and standards of corporate governance and transparency. In both cases, I'm acutely aware that there's a gap between "more democratic" and "more Friendly", but I think the former is the best we can do.

Note: the foregoing is an argument that politics, and in particular election reform, is important in achieving a Friendly singularity. Before constructing this argument and independently of the singularity, I believed that these things were important. So you can discount these arguments appropriately as possible rationalizations. I believe, though, that appropriate discounting does not mean ignoring me without giving a counterargument.

Second note: yes, politics is the mind-killer. But the universe has no obligation to ensure that the road to saving the world does not run through any mind-killing swamps. I believe that here, the ban on mind-killing subjects is not appropriate.

Comment author: Ford 06 January 2012 11:24:28PM *  1 point [-]

I agree with your main points, but it's worth noting that corporations and governments don't really have goals -- people who control them have goals. Corporations are supposed to maximize shareholder value, but their actual behavior reflects the personal goals of executives, major shareholders, etc. See, for example, "Dividends and Expropriation" Am Econ Rev 91:54-78. So one key question is how to align the interests of those who actually control corporations and governments with those they are supposed to represent.

Comment author: dlthomas 09 December 2011 01:16:23AM 3 points [-]

You miss my point. Once we have a GAI, we can have many GAI, and if things scale amazingly in number of humans I see no reason they shouldn't scale similarly in number of AI. From "we have a GAI capable of recursive self improvement, that is significantly better at GAI design than any individual human" to "we have a GAI capable of recursive self improvement, that is significantly better at GAI design than all collective humans" involves the passage of non-zero time, but I don't expect it to be significant compared to the time to get there in the first place without significant other considerations.

Comment author: Ford 12 December 2011 08:11:24PM 0 points [-]

Would the first AI want more AIs around? Wouldn't it compete more with AIs than with humans for resources? Or do you assume that humans, having made an AI smarter than an individual human, would work to network AIs into something even smarter?

Either way, the scaling issue is interesting. I would expect the gain from networking AIs to differ from the gain from networking humans, but I'm not sure which would work better. Differences among individual humans are a potential source of conflict, but can also make the whole greater than the sum of the parts. I wouldn't expect complementarity among a bunch of identical AIs. Generating useful differences would be an interesting problem.

Comment author: dlthomas 08 December 2011 11:51:53PM 2 points [-]

If there are indeed gains to be had in coordination of agents that dwarf gains to be had in improvement of individual agents, why couldn't an AI simply simulate multiple agents?

Comment author: Ford 09 December 2011 12:11:19AM 3 points [-]

That may be a faster route to AI. But my point was that making an AI that's smarter than the combined intelligence of humans will be much harder (even for an AI that's already fairly smart and well-endowed with resources) than making one that's smarter than an individual human. That moves this risk even further into the future. I'm more worried about the many risks that are more imminent.

Comment author: Ford 08 December 2011 11:47:50PM 1 point [-]

Does this have implications for the risks associated with AI? Tao is a lot smarter than we are, but he doesn't seem to be plotting to harvest us for our phosphorus, or anything.

This example and others mentioned also suggest that interactions among intelligent agents may be at least as important as intelligence per se. If we can learn to work together more effectively, I think we'll be able to out-think computers for a long time (where "a long time" is defined as long enough for over-population, climate change, nuclear war, etc. to be serious risks).

Comment author: jwhendy 11 May 2011 04:09:53PM 2 points [-]

Freaking awesome. I tried to pull this together HERE without success. I'm in as far as I know.

You seem to know this area... suggestions for parking? I'm a St. Paul-ite and Mpls freaks me out when it comes to parking and driving around there in general.

Comment author: Ford 12 May 2011 11:27:13PM 0 points [-]

If you park near the St. Paul campus, there's a free shuttle bus that stops across the street from Coffman. http://www1.umn.edu/pts/bus/connectors.html

I'm somewhat interested, but have plans already.

In response to Church vs. Taskforce
Comment author: haig 29 March 2009 07:58:44AM *  4 points [-]

We don't want to create a new religion, but whatever we create to take its place needs to offer at least as much as that which it replaces, so we might end up actually needing a new 'religion' whether we like it or not. If indeed there is a biological predisposition for humans to want to engage in 'worship', then we might as well worship rationally. I hesitate to call this new organization a religion or the practice worship, since those are the things they are replacing, but those words get my idea across.

How about we create a church-like organization that has local congregations and meets weekly to listen to talks on rationality, the latest scientific discoveries, lectures on philosophy, the state of the world, etc.? And they don't need to lack beauty or awe. A weekly dose of the unimaginable beauty of biology, or astrophysics, or even economics, in a shared setting, would sure add value to my life. A 'Bible study' about Fermi's paradox would have made my day as a child. We can tug on the emotions as much as traditional religions without being irrational.

And the missionary arm would maintain the rationality of the 'church'. If the Catholic pope denounces condoms in Africa, then our 'church' goes one better and starts a viral campaign to not only spread the reason why the pope is wrong, but gets creative and sets up condom donations or incentive structures to promote their use, or whatever.

I know there are many organizations that promote skepticism, secular humanism, reason, enlightenment, etc. but don't know if they are widespread, have local chapters that meet regularly, or have much of a following.

And yes, 'canonizing' the vast information to make it more accessible would help a lot.

UPDATE: In regard to the post wondering how this all would be different from the atheist groups and other such organizations that currently exist, well, that is the rub, isn't it? Those have the right idea but aren't successful. How can we make one succeed? Or can we prove that one can't succeed, so as not to waste any more time on it?

In response to comment by haig on Church vs. Taskforce
Comment author: Ford 06 April 2011 12:21:27AM 0 points [-]

A "church-like organization that has local congregations and meets weekly to listen to talks on rationality, the latest scientific discoveries, lectures on philosophy, the state of the world, etc."?

Sounds like a Unitarian fellowship, at least the ones I know. Some may be closer to their Protestant roots, though. Of course, they also have talks on irrationality ("spirituality") and, while atheists and other rationalists are certainly welcome, aggressive promotion of any particular world-view is discouraged.

Comment author: CronoDAS 26 March 2011 11:41:24PM *  2 points [-]

But there might be cheaper options. If we paid Afghan girls $10/day to go to school, would the Taliban collapse?

There's no shortage of Afghan girls who already want to go to school or of parents who want to send them. The problem is that there are people who mutilate girls who attend these schools. In the short run, at least, sticks are often more effective at getting the acquiescence of the population than carrots; when collaborators keep getting killed, it's hard to get willing collaborators no matter how much money you offer.

See also.

Comment author: Ford 27 March 2011 01:45:53AM 0 points [-]

I see how the first part of my post could be read as "we need to motivate girls to go to school," which wasn't my intent. It was more a matter of motivating tradition-bound parents to see educated girls as a major source of income. But I understand that going to school can be risky in Taliban-dominated areas, which is why the second part of my post was all home-based and therefore hard for the Taliban to detect. Even so, I agree that any obvious link to the US government could be a problem.
