...so did we now get cold fusion to work or what?

-10 Friendly-HI 25 May 2013 01:09PM

Some of you may have heard about the following paper already:

http://arxiv.org/ftp/arxiv/papers/1305/1305.3913.pdf

Here's a news article wrapping up the main points:

http://atom-ecology.russgeorge.net/2013/05/20/an-italian-cold-fusion-tide-lifts-all-boats-arvix-independent-review-paper-confirms-rossi-fusion/


I'm way out of my depth here, so I find it hard to judge: is this a pile of BS, or are we finally getting somewhere for real?
Is burning coal (and using chemical reactions in general) to produce energy coming to an end in the upcoming decades?



EDIT: Here's a review of the article; it should be read: http://scienceblogs.com/startswithabang/2013/05/21/the-e-cat-is-back-and-people-are-still-falling-for-it/

Studying Psychology - Which path should I take to best help our cause? Suggestions please.

4 Friendly-HI 23 November 2011 07:52PM

If you solve the problem of human-friendly self-improving AI, you have indirectly solved every problem. After spending a decent amount of time on LW, I have been convinced of this premise and now I would like to devote my life to that cause.

 

Currently I'm living in Germany, studying psychology in my first semester. The university I attend has a great reputation (even internationally, if I can believe the rankings) for the quality of its scientific psychology research; on various criteria related to psychological science it ranks second or third among the roughly 55 German universities where one can study psychology. Five semesters of statistics in my Bachelor of Science might also hint at that.

I want to finish my Bachelor of Science and then move on to my Master's, so in about five years I might hit my "phase of actual productivity" in the working world. I'm flirting with cognitive neuroscience but haven't made my decision yet; however, I am pretty sure that I want to pursue research and a scientific career rather than one in a therapeutic field.

Before discovering Less Wrong, my dominant personal interest in psychology was the field of "positive psychology" or, plainly speaking, the "what makes humans happy" field. This interest hasn't really changed through the discovery of LW so much as it has evolved into: "How can we distill what makes human life worthwhile and put it into terms a machine could execute for our benefit?"

 

As the title suggests, I'm writing all this because I want some creative input from you, in order to expand my sense of possibilities concerning how I can most effectively help the development of friendly AI from within the field of psychology.

 

To give you a better idea of what might fit me, a bit more background info about myself and my abilities seems in order:

I like talking and writing a lot; mathematically I am a loser (whether due to early disgust or incompetence, I can't really tell). I value and enjoy human contact and have steadily moved from introversion toward extroversion through several cognitive developments I can only speculate about; nowadays I would probably rank in the middle of any extroversion scale. My IQ seems to be around 134, if one can trust the "International High IQ Society" (www.highiqsociety.org), but as mentioned, my abilities probably lie more in the linguistic and, to some extent, analytic sphere than in the mathematical. I understand Bayes' theorem but haven't read the quantum mechanics sequence, and many "higher" concepts here are still above my current level of comprehension. Although, to be fair, I haven't tried all that hard yet.

I have written some primitive HTML and CSS once and didn't really like it. From that experience and my mathematical inability, I take away that programming wouldn't be the way I could contribute most efficiently to friendly-AI research. It is not one of my strengths, or at least it would take a lot of time to develop one, time which would probably be better spent elsewhere. Also, I quite surely wouldn't enjoy it as much as psychological work with humans.

My English is almost indistinguishable from that of a native speaker and I largely lack that (rightfully) despised and annoying German accent, so I could definitely see myself giving competent talks in English.

Like many of you, I have serious problems with akrasia (regardless of whether that's a rationalist phenomenon or whether we are just more aware of it and tend to do types of work that tempt it more readily). Before I learned how to effectively combat it (thank you, Piers Steel!), I had plenty of motivation to get rid of it and sank insane effort into overcoming it, though that undertaking was ultimately largely unsuccessful, due to half-assed pop science and the lack of real insight into what causes procrastination and how it actually functions. Now that I know how to fix procrastination (or rather, now that I know it can't be fixed so much as it has to be managed, much like any given drug addiction), my motivation to overcome it is almost gone and I feel myself slacking. Also, my high certainty that there is no such thing as "free will" may have played a serious part in my procrastination habits (interestingly, I recall at least two papers showing this correlation). In a nutshell: procrastination is a problem I need to address, since it is definitely the Achilles' heel of my performance and is absolutely crippling my potential. I probably rank middle-high on the impulsiveness (and thus also the procrastination) scale.

That should be an adequate characterization of myself for now.

 

I am absolutely open to suggestions unrelated to the neuroscience of the "what makes humans happy, and how do I distill those goals and feelings into something a machine could work with" field, but currently I am definitely flirting with that idea, even though I have absolutely no clue how this area of research could be sufficiently financed a decade from now, or how it could spit out findings precise enough to benefit the creation of FAI. Then again, maybe that's just a lack of imagination.

Trying to help set up and grow a rationalist community in Germany would also be a decent task, but compared to specific research that directly aids our goals... I somehow feel it is less than what I could reasonably achieve if I really set my mind to it.

 

So tell me, where does a German psychologist go nowadays to achieve the biggest possible positive impact in the field of friendly AI?

Self-improving AGI: Is a confrontational or a secretive approach favorable?

7 Friendly-HI 11 July 2011 03:29PM

 

(I initially wrote the following text as a comment, but upon short reflection I thought it was worth a separate topic, so I adapted it accordingly.)

 

Less Wrong is largely concerned with teaching rationality skills, but for good reasons most of us also incorporate concepts like the singularity and friendly self-improving AGI into our "message". Personally, however, I wonder whether we should be as outspoken about that sort of AGI as we currently are. Right now, talking about self-improving AGI doesn't pose any discernible harm, because "outsiders" don't feel threatened by it and regard it as far-off (or even impossible) science fiction. But as time progresses, I worry that exponential advances in robotics and other technologies will make people more aware of, concerned about, and perhaps threatened by self-improving AGI, and I am not sure whether we should be outspoken about things like... the fact that the majority of AGIs in "mind-design space" would tear humanity to shreds if their builders didn't know what they were doing. Right now such talk is harmless, but my message here is that we may want to reconsider whether we should talk publicly about such topics in the not-too-distant future, so as to avoid compromising our chances of success when it comes to actually building a friendly self-improving AGI.

 

First off, I suspect I have a somewhat different conception of how the future will pan out in terms of the role that public perception and acceptance of self-improving AGI will play. Personally, I'm not under the impression that we can prepare a sizable portion of the public (let alone the global public) for the arrival of AGI (prepare them in a positive manner, that is). I believe singularitarian ideas will just continue to compete with countless other worldviews in the public meme-sphere, without ever becoming truly mainstream until it is "too late" and we face something akin to a hard takeoff and perhaps lots of resistance.

I don't really think that we can (or need to) reach a public consensus for the successful takeoff of AGI. Quite the contrary: I actually worry that carrying our view to the mainstream will have adverse effects, especially once people realize that we aren't some kind of technophile crackpot religion, but that the futuristic picture we try to paint is actually possible and not at all unlikely to happen. I would certainly prefer to face apathy rather than antagonism when push comes to shove; and since self-improving AGI could spring into existence very rapidly and take everyone apart from "those in the know" by surprise, I would hate to lose that element of surprise over our potentially numerous "enemies".

Now, of course, I don't know which path will yield the best result: confronting the public or keeping a low profile? I suspect this may become one of the few hot-button topics on which our community sports widely diverging opinions, because we simply lack a way to accurately model (especially so far in advance) how people will behave upon encountering the reality and the potential threat of AGI. Just remember that the world doesn't consist entirely of the US, and that AGI will impact everyone. I think it is likely that we will face serious violence once our vision of the future becomes better known and gains additional credibility through exponential improvements in advanced technologies. There are players on this planet who will not be happy to see an AGI come out of America, or for that matter out of Eliezer's or whoever's garage. This is why I would strongly advocate a semi-covert international effort when it comes to the development of friendly AGI. (Don't say that it's self-improving and may become a trillion times smarter than all humans combined; just pretend it's roughly a human-level AI.)

It is incredibly hard to predict people's future behavior, but on a gut level I absolutely favor an international, semi-stealthy approach. It seems by far the safest course to take. Once the concepts of the singularity and AGI gain traction in the spheres of science and maybe even politics (perhaps in a decade or two), I would hope that minds in AI and AGI from all over the world join an international initiative to develop self-improving AGI together. (Think CERN.) To be honest, I can't even think of any other approach to developing the later stages of AGI that doesn't look doomed from the start (not doomed in the sense of being technically unfeasible, but doomed in the sense of significant others thinking: "we're not letting this suspicious organization/country take over the world with their dubious AI". Remember that self-improving AGI is potentially much more destructive than any nuclear warhead, and powers not involved in its development may blow a gasket upon realizing the potential danger.)

So from my point of view, public perception and acceptance of AGI is a comparatively negligible factor in the overall bigger picture, if managed correctly. "People" don't get a say in weapons development, and I predict they won't get a say when it comes to self-improving AGI. (And we should be glad they don't, if you ask me.) But in order not to risk a public outcry when the time is ripe and AGI is in its last stages of completion, we should give serious consideration to not upsetting and terrifying the public with our... "vision of the future".

 

PS: Somehow CERN comes to mind again. Do you remember when critics came up with ridiculous ideas about how the LHC could destroy the world? It was a very serious allegation, but the public largely shrugged it off: not because they had any idea, of course, but because enough eggheads reassured them that it wouldn't happen. It would be great if we could achieve a similar reaction to AGI criticism (by which I mean generic criticism, of course, not useful criticism; after all, we actually want to be as sure about how the AGI will behave as we were about the LHC not destroying the world). Once robots become more commonplace in our lives, I think we can reasonably expect that people will begin to place their trust in simple AIs, and they will hopefully become less suspicious of AGI and simply assume (like a lot of current AI researchers, apparently) that it is somehow trivial to make it behave friendly towards humans.

So what do you think? Should we become more careful when we talk about self-modifying artificial intelligence? I think the "self-modifying" and "trillions of times smarter" parts are some bitter pills to swallow, and people won't be amused once they realize that we aren't just building artificial humans but artificial, all-powerful, all-knowing, and (hopefully) all-loving gods.


EDIT: 08.07.11

 

PS: If you can accept that argument as rationally sound, I believe a discussion about "informing everyone vs. keeping a low profile" is more than warranted. Quite frankly, though, I am pretty disappointed with most people's reactions to my essay so far... I'd like to think that this isn't just my ego acting up, but I'm sincerely baffled as to why this essay usually hovers just slightly above 0 points and frequently gets downvoted back to neutrality. Perhaps it's because of my style of writing (admittedly I'm often not as precise and careful with my wording as many of you are), or my grammar mistakes due to my being German; but preferably it would be because of some serious rational mistakes I made and of which I am still unaware... in which case you should point them out to me.

Presumably not that many people have read it, but in my eyes those who did and voted it down have not provided any kind of rational rebuttal here in the comment section as to why this essay stinks. I find the reasoning I provided simple and sound:


0.0) Either we place "intrinsic" value on the concept of democracy and respect (and ultimately adhere to) public opinion in our decision to build and release AGI, or we don't, and instead make that decision a matter of rational expert opinion, excluding the general public to some greater or lesser degree from the decision process. This is the question of whether we view a democratic decision about AGI as the right thing to do, or merely as one possible means to our preferred end.


1.0) If we accept radically democratic principles and essentially want to put AGI up for a vote, then we have a lot of work to do: we have to reach out to the public, thoroughly inform them about every known aspect of AGI, and convince a majority of the worldwide public that it is a good idea. If they reject it, we would have to postpone development and/or release until public opinion sways, or until an un/friendly AGI gets released without consensus in the meantime.


1.1) Getting consent is not a trivial task by any stretch of my imagination, and from what I know about human psychology, I believe it is more rational to assume that the democratic approach cannot possibly work. If you think otherwise, if you SERIOUSLY think this can be successfully pulled off, then I think the burden of proof is on you: why should 4.5 billion people suddenly become champions of rationality? How do you think this radical transformation from an insipid public into a powerhouse of intelligent decision-making will take place? None of you (those who defend the possibility and preference of the democratic approach) have explained this yet. The only thing that could convince me here would be the majority of people, or at least a sizable portion, having powerful brain augmentation by the time AGI is on the brink of completion. That I do not believe; but none of you has argued this case so far, nor has anyone argued in depth (countering my arguments and concerns along the way) how a democratic approach could possibly succeed without brain augmentation.


2.0) If we reject the desirability of a democratic decision when it comes to AGI (as I do, for practical reasons), we automatically approach public opinion from a different angle: public opinion becomes an instrumental concern, because we admit to ourselves that we would be willing to release AGI with or without public consent. If we go down this path, we must ask ourselves how to manage public opinion in a manner that benefits our cause. How exactly should we engage them, if at all? My "moral" take on this in a sentence: "I'm vastly more committed to rationality than I am to the idea that undiscriminating democracy is the gold standard of decision-making."


2.1) In this case, the question becomes whether informing the public as thoroughly as possible will aid or hinder our ambitions. If we believe the majority of the public would reject our AGI project even after we educate them about it (the scenario I predict), the question is obviously whether it is beneficial to inform them in the first place. I gave my reasons why I think secrecy (at least about some aspects of AGI) would be the better option, and I've not yet read any convincing thoughts to the contrary. How could we possibly trust them to make the rational choice once they're informed, and how could we (and they) react after most people are informed of AGI and actually disapprove?


2.2) If you're with me on 2.0 and 2.1, then the next problem is who we think should know about it, to what extent, who shouldn't, and how this can be practically implemented. I've not yet thought this through myself, because I hoped this would be the direction our discussion would take; instead, I'm disappointed that most of you seem to argue for 1.0 and 1.1 (which would be great if the arguments were good, but to me they seem like cheap applause lights rather than being even remotely practical in the real world).

(These points are of course not a full breakdown of all the possibilities to consider, but I believe they roughly cover most bases.)


I also expected to hear some of you make a good case for 1.0 and 1.1, or even call 0.0 into question, but most of you just assert that "1.0 and 1.1 are possible" without any sound explanation of why that would be the case. You just assume it can be done for some reason, but I think you should explain yourselves, because this is an extraordinary claim, while my assumption that 4.5 billion people will NOT become rational superheroes or fanatical geeky AGI followers seems vastly more likely to me.

Considering what I've thought about so far, secrecy (or at the very least not too broad and enthusiastic public outreach, combined with an alternative approach of targeting more specific groups or people to contact) seems the preferable option to me. That said, I admit that public outreach is most probably fine right now, because people who reject our ideas today usually just feel it couldn't be done anyway, and it's so far off that they won't make an effort to oppose us, while the people we convince are all potential human resources for our cause, welcome and needed.

So, in a nutshell, I think the cost/benefit ratio of public outreach is fine for now, but that we ought to reconsider our approach in due time (perhaps a decade or so from now, depending on the future progress and public perception of AI).