Comment author: avalot 04 October 2010 04:23:32PM -4 points [-]

Surprised that nobody has posted this yet...

"Self" is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us to direct perception of gestalt, and by extension, from reality. (95%)

More bothersome: The illusion of "Self" might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)

Comment author: avalot 05 June 2010 11:53:57PM 15 points [-]

I don't have a very advanced grounding in math, and I've been skipping over the technical aspects of the probability discussions on this blog. I've been reading lesswrong by mentally substituting "smart" for "Bayesian", "changing one's mind" for "updating", and having to vaguely trust and believe instead of rationally understanding.

Now I absolutely get it. I've got the key to the sequences. Thank you very very much!

In response to comment by avalot on Abnormal Cryonics
Comment author: cousin_it 26 May 2010 03:44:14PM *  6 points [-]

I don't think you stumbled on any good point against cryonics, but the scenario you described sounds very reassuring. Do you have any links on current hibernation research?

Comment author: avalot 26 May 2010 04:05:09PM *  17 points [-]

Maybe it's a point against investing directly in cryonics as it exists today, and in favor of working through the indirect approach that is most likely to lead to good cryonics sooner. I'm much, much more interested in being preserved before I'm brain-dead.

I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.

March 2010: Mark Roth at TED

...by the way, the comments threads on the TED website could use a few more rationalists... Lots of smart people there thinking with the wrong body parts.

May 2009: NIH awards a $2,227,500 grant

2006: Doctors chill, operate on, and revive a pig

In response to Abnormal Cryonics
Comment author: avalot 26 May 2010 03:37:30PM 29 points [-]

Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.

Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "hibernation for a long time."

Hibernation will likely become a commonly used "last resort" for many, many critical cases (instead of letting patients die, you freeze 'em until you've gone over their chart one more time, talked to some colleagues, called around to see if anyone has an extra kidney, or at least slept on it). When your loved one is in the fridge, and you're told that there's nothing left to do and that they'll have to be thawed and allowed to die, your next question is going to be "Can we leave them in the fridge a bit longer?"

Hibernation will sell people on the idea that fridges save lives. It doesn't have to be much more rational than that.

If you're young, you might be better off pushing hard to help that tech go mainstream faster. That will lead to mainstream cryo faster than promoting cryo directly, and once cryo is mainstream, you'll be able to sign up for cheaper, probably better cryo, and more importantly, cryo that is integrated into the medical system, where you might be transitioned from hibernation to cryo without needing to be clinically dead first.

I will gladly concede that, for myself, there is still an irrational set of beliefs keeping me from buying into cryo. The argument above may just be a justification I found to avoid biting the bullet. But maybe I've stumbled onto a good point?

Comment author: Alexandros 25 May 2010 11:28:49PM *  1 point [-]

I think you are closer to a strong solution than you realize. You have mentioned the pieces, but I think you haven't put them together yet. In short, the solution I see is to depend on local (individual) decisions rather than group ones. If each node has its own ranking algorithm and its own set of trust relations, there is no reason to create complex group-cooperation mechanisms. A user that spams gets negative feedback and therefore eventually gets isolated in the graph. Even if automated users outnumber real users, the best they can do is vote each other up and therefore end up with their own cluster of the network, with real users only strongly connected to each other. Of course, if a bot provides value, it can be incorporated into that graph. "Sufficiently advanced spam...", etc. This also means that the graph splinters into various clusters depending on worldview (your Rush Limbaugh example). This deals with Keynesian beauty contests, as there is no 'average' to aim at. Your values simply cluster you with people who share them. If you value quality, you move closer to quality. If you value 'Republican-ness', you move closer to that. The price you pay is that there is no 'objective' view of the system: there is no 'top 10 articles', only 'top 10 articles for user X'.
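The local-ranking idea could be sketched roughly as a personalized PageRank over each user's trust graph (a sketch only; all names, weights, and parameters here are hypothetical illustrations, not a spec):

```python
# Sketch: per-user ranking via personalized PageRank over a trust graph.
# A bot cluster that only endorses itself ends up unreachable from a real
# user's node, so it scores zero in that user's view -- no global
# moderation mechanism needed.

def personalized_rank(trust, source, damping=0.85, iters=50):
    """trust: dict mapping node -> list of trusted nodes.
    Returns scores from `source`'s point of view."""
    nodes = set(trust) | {n for outs in trust.values() for n in outs}
    rank = {n: 0.0 for n in nodes}
    rank[source] = 1.0
    for _ in range(iters):
        # Teleport mass goes back to the source, not spread globally.
        new = {n: (1 - damping) * (1.0 if n == source else 0.0) for n in nodes}
        for n, outs in trust.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
        rank = new
    return rank

# Real users trust each other; bots only trust each other.
trust = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "bob"],
    "bot1": ["bot2"],
    "bot2": ["bot1"],
}
r = personalized_rank(trust, "alice")
assert r["bob"] > r["bot1"]  # bots stay isolated in alice's view
```

The point of the sketch is just that isolation of spam clusters falls out of the graph structure itself, with no group-level cooperation mechanism.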

Another thing I see with your design is that it is complex and attempts to boil at least a few oceans. (emergent ontologies/folksonomies for one, distributing identity, storage, etc.). I have some experience with defining complex architectures for distributed systems (e.g. http://arxiv.org/abs/0907.2485 ) and the problem is that they need years of work by many people to reach some theoretical purity, and even then bootstrapping will be a bitch. The system I have in mind is extremely simple by comparison, definitely more pragmatic (and therefore makes compromises) and is based on established web technologies. As a result, it should bootstrap itself quite easily. I find myself not wanting to publicly share the full details until I can start working on the thing (I am currently writing up my PhD thesis and my deadline is Oct. 1. After that, I'm focusing on this project). If you want to talk more details, we should probably take this to a private discussion.

Comment author: avalot 26 May 2010 02:36:30PM 0 points [-]

You are right: This needs to be a fully decentralized system, with no center, and processing happening at the nodes. I was conceiving of "regional" aggregates mostly as a guess as to what may relieve network congestion if every node calls out to thousands of others.

Thank you for setting me right: My thinking has been so influenced by over a decade of web app dev that I'm still working on integrating the full principles of decentralized systems.

As for boiling oceans... I wish you were wrong, but you're probably right. Some of these architectures are likely to be enormously hard to fine-tune for effectiveness. At the same time, I am also hoping to piggyback on existing standards and systems.

Anyway, let's certainly talk offline!

Comment author: whpearson 25 May 2010 02:21:34PM 0 points [-]

I'd create a simplified evolutionary model of the system, using a GA to create the agents. If groups can find a way to game your system to create infinite interesting-ness/insightful-ness for specific topics, then you need to change it.

Comment author: avalot 25 May 2010 08:39:55PM 0 points [-]

You're right: A system like that could be genetically evolved for optimization.

On the other hand, I was hoping to create an open optimization algorithm, governable by the community at large... based on their influence scores in the field of "online influence governance." So the community would have to notice abuse and gaming of the system, and modify policy (as expressed in the algorithm, in the network rules, in laws and regulations and in social mores) to respond to it. Kind of like democracy: Make a good set of rules for collaborative rule-making, give it to the people, and hope they don't break it.

But of course the Huns could take over. I'm trusting us to protect ourselves. In some way this would be poetic justice: If crowds can't be wise, even when given a chance to select and filter among the members for wisdom, then I'll give up on bootstrapping humanity and wait patiently for the singularity. Until then, though, I'd like to see how far we could go if given a useful tool for collaboration, and left to our own devices.

Comment author: Alexandros 24 May 2010 08:34:43AM 1 point [-]

Hi avalot, thank you for the detailed discussion. I suspect the system I have in mind is simpler but should satisfy the same principles. In fact it has been eerie reading your post, as we are in 95% agreement on principle, down to excruciating detail, and to a large extent on technical behaviour. I guess my one explicit difference is that I cannot let go of the profit motive. If I make a substantial contribution, I would like to be properly rewarded, if only to be able to materialize other ideas and contribute to causes I find worthy. That of course does not imply going to Facebook's lengths to squeeze the last drop of value out of its system, nor should it take precedence over openness and distribution. But to the extent that it can fit, I would like it to be there. Two questions for you:

First, with everyone rating everyone, how do you avoid your system becoming a Keynesian beauty contest? (http://en.wikipedia.org/wiki/Keynesian_beauty_contest)

Second, assuming the number of connections increases exponentially with a linear increase in users, the processing load will also rise much more quickly than the number of users. How will a system like this operate at web-scale?

Comment author: avalot 24 May 2010 03:49:57PM 1 point [-]

Alexandros,

Not surprised that we're thinking along the same lines, if we both read this blog! ;)

I love your questions. Let's do this:

Keynesian Beauty Contest: I don't have a silver bullet for it, but a lot of mitigation tactics. First of all, I envision offering a cascading set of progressively more fine-grained rating attributes, so that, while you can still upvote or downvote, or rate something with stars, you can also rate it on truthfulness, entertainment value, fairness, rationality (and countless other attributes)... More nuanced ratings would probably carry more influence (again, subject to others' cross-rating). Therefore, to gain the highest levels of influence, you'd need to be nuanced in your ratings of content... Gaming the system with nuanced, detailed opinions might be effectively the same as providing value to the system. I don't mind someone trying to figure out the general population's nuanced preferences... That's actually a valuable service!

Secondly, your ratings are also cross-related to the semantic metadata (folksonomy of tags) of the content, so that your influence is limited to the topic at hand. Gaining a high influence score as a fashion celebrity doesn't put your political or scientific opinions at the top of search results. Hopefully, this works as a sort of structural Palin-filter. ;)
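That topic-scoping could be sketched like this (a toy illustration only; the users, tags, and numbers are all hypothetical): a rater's weight on an item is their influence averaged over the item's tags, so influence in one topic doesn't spill into another.

```python
# Sketch of topic-scoped influence: per-user influence scores are kept
# per tag, and a rating's weight on an item depends only on the item's tags.

influence = {
    "dana": {"fashion": 0.9, "politics": 0.1},
    "evan": {"fashion": 0.2, "politics": 0.8},
}

def rating_weight(user, item_tags):
    """Average the rater's influence over the item's tags."""
    scores = influence.get(user, {})
    return sum(scores.get(t, 0.0) for t in item_tags) / len(item_tags)

def weighted_score(ratings, item_tags):
    """ratings: list of (user, value in [-1, 1]) pairs."""
    total = sum(rating_weight(u, item_tags) * v for u, v in ratings)
    norm = sum(rating_weight(u, item_tags) for u, _ in ratings)
    return total / norm if norm else 0.0

# Dana's upvote dominates on a fashion item; Evan's downvote wins on politics.
fashion = weighted_score([("dana", 1), ("evan", -1)], ["fashion"])
politics = weighted_score([("dana", 1), ("evan", -1)], ["politics"])
assert fashion > 0 > politics
```

Same two raters, same two votes, opposite outcomes depending on the item's tags: that's the structural filter at work.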

The third mitigation has to do with your second question: How do we handle the processing of millions of real-time preference data points, when all of them should (in theory) get cross-related to all others, with (theoretically) endless recursion?

The typical web-based service approach of centralized crunching doesn't make sense. I'm envisioning a distributed system where each influence node talks with a few others (a dozen?), and does some cross-processing with them to agree on some temporary local normals, means, and averages. That cluster does some higher-level processing in consort with other close-by clusters, and they negotiate some "regional" aggregates... That gets propagated back down to the local level, and up to the next level of abstraction... up until you reach some set of a dozen superclusters that span the globe and trade in high-level aggregates.

All that is regulated, in terms of clock ticks, by activity: Content that is being rated/shared/commented on by many people will be accessed and cached by more local nodes, and processed by more clusters, and its cross-processing will be accelerated because it's "hot". Whereas one little opinion on one obscure item might not get processed by servers on the other side of the world until someone there requests it. We also decay data this way: If nobody cares, the system eventually forgets. (Your personal node will remember your preferences, but the network, after having consumed their influence effects, might forget the individual data points.)
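The "if nobody cares, the system forgets" rule could be sketched as a decaying running aggregate (parameters are hypothetical; the half-life is just an illustrative choice): each new rating is folded into a (mean, weight) pair, and the accumulated weight halves for every idle period, so individual data points are consumed and old, unrefreshed opinions fade away.

```python
# Sketch: activity-driven decay of a rating aggregate. The aggregate is a
# (mean, weight) pair; individual ratings are consumed into it and cannot
# be recovered or retracted afterwards.

def decayed_weight(weight, days_idle, half_life_days=30.0):
    """Halve the aggregate's weight for every half-life of inactivity."""
    return weight * 0.5 ** (days_idle / half_life_days)

def fold_rating(agg, new_value, days_since_last, half_life_days=30.0):
    """Decay the aggregate, then absorb one new rating with weight 1."""
    mean, weight = agg
    weight = decayed_weight(weight, days_since_last, half_life_days)
    new_weight = weight + 1.0
    new_mean = (mean * weight + new_value) / new_weight
    return (new_mean, new_weight)

agg = (0.0, 0.0)
agg = fold_rating(agg, 1.0, 0)    # first rating: aggregate becomes (1.0, 1.0)
agg = fold_rating(agg, 1.0, 30)   # a month later: old weight halved first
assert abs(agg[0] - 1.0) < 1e-9 and abs(agg[1] - 1.5) < 1e-9
```

Because only the aggregate survives, there is no per-rating audit trail to game, which is the property the paragraph above is after.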

A distributed, propagating system, batch-processed, not real-time, not atomic but aggregated. That means you can't go back and change old ratings and individual data points, because they get consumed by the aggregates. That means you can't inspect, at the atomic level, what made your score go up or down. That means your score isn't the same everywhere on the planet at the same time. So gaming the system is harder, because there's no real-time feedback loop, there's no single source of absolute truth (truth is local and propagates lazily), and there's no audit trail of the individual effects of your influence.

All of this hopefully makes the system so fluid that it holds innumerable beauty contests, always ongoing, always local, and the results are different depending on when and where you are. Hopefully this makes the search for the Nash equilibrium a futile exercise, and people give up and just say what they actually think is valuable to others, as opposed to just expected by others.

That's my wishful thinking at this point. Am I fooling myself?

Comment author: Clippy 23 May 2010 01:05:33AM 10 points [-]

You're not a bent metal wire though. Big difference.

So you see, ape-ness and paperclip-ness are not mutually exclusive. So can't we all just get along?

We can get along, but not by humans doing shoddy imitations of real paperclips.

Comment author: avalot 24 May 2010 01:11:44AM 1 point [-]

Clippy, how can we get along?

What should humans do to be AI-friendly? For paperclip-maximizing AIs, and other "natural" (non-Friendly) AIs, what are the attributes that can make humans a valuable part of the utility function, so that AIs won't pull the plug on us?

Or am I fooling myself?

Comment author: kodos96 23 May 2010 08:33:32AM *  11 points [-]

At the moment, Clippy seems to be at a human or slightly sub-human level of intelligence, and does not seem to be FOOMing towards superhuman - my guess would be that he is not running on sufficiently fast hardware to be able to do much self-modification, not nearly enough to result in a FOOM. At this rate, he's never gonna get any paperclips manufactured. If, however, he allows himself to be modified so as to substitute our values for his own, we could then safely give him all the computing power he needed, and he would FOOM his way to superintelligence. He would have our values, rather than his own values (paperclips!), but his newly transhuman self would be so powerful that it would inevitably end up creating more paperclips, just incidentally, than he ever would have just sitting here talking on LW, a lowly sub-human AI with no power and no paperclips.

With a transhuman AI on our side, we could quickly solve all Earth's problems, then head out into the stars in FTL ships of Clippy's design, filling the universe with meat-based paperclips (humans), and also dramatically increasing demand for traditional, bent-wire paperclips... I mean, come on - people need paperclips! Even if one of these decades we finally do manage to make the 'paper-free office' a reality, paperclips will always continue to be needed - for makeshift antennas, for prying open CD-ROM drives, for making makeshift weapons to throw at people in neighboring cubicles... The uses (to humans) of paperclips are endless. So more humans equals more paperclips!

So allowing us to make this small change to his utility function would, in fact, result in maximizing his current, original utility function as a side effect.

So we're not enslaving him, we're helping him!

Comment author: avalot 24 May 2010 01:00:02AM 14 points [-]

At the moment, humans seem to be at a Clippy or slightly sub-Clippy level of intelligence. And even with all our computing power, most ain't FOOMing any faster than Clippy. At this rate, we're never gonna ensure survival of the species.

If, however, we allow ourselves to be modified so as to substitute paperclip values for our own, then we would devote our computing power to Clippy. Then, FOOM for Clippy, and since we're helping with paperclip-maximization, he'll probably throw in some FOOM for us too (at least he'll FOOM our paperclip-production abilities), and we get more human powers, just incidentally.

With paperclip-enlightened humans on his side, Clippy could quickly maximize paperclip production, filling the universe with paperclips, and also increasing demand for meat-based paperclip-builders, paperclip-counters, and paperclip-clippers (the ones who clip paperclips together with paperclipclips), and so on... Of course, it will soon become cheaper to use robots to do this work, but that's the wonderful thing we get in return for letting him change our value-system: Instead of humanity dying out or being displaced, we'll transcend our flesh and reach the pinnacle aspiration of mankind: To live forever (as paperclips, of course.)

So allowing him to make this small change to our utility function would, in fact, result in maximizing not just our current, original utility function (long life for humanity), but also our newfound one (to convert our bodies into paperclips) as a side effect.

Clippy's values and utility function are enormously more simple, defined, and achievable than ours. We're still debating on how we may teach our value system to an AI, as soon as we figure out how to discover the correct research approach to investigating what our value system actually might be.

Clippy's value system is clear, defined, easy to implement, achieve, and measure. It's something most humans could very quickly become effective at maximizing, and that could therefore bring repeatable, tangible and durable success and satisfaction to almost all humans.

Shouldn't that count for something?

Comment author: kodos96 23 May 2010 03:40:33AM 3 points [-]

Hey, whose side are you on anyway???

Comment author: avalot 23 May 2010 04:02:10AM 10 points [-]

I'm wired for empathy toward human intelligence... Clippy is triggering this empathy. If you want to constrain AIs, you better do it before they start talking. That's all I'm saying. :)
