Comment author: Gyrodiot 08 February 2016 03:37:39PM 1 point [-]

Thanks for the fable. It was a nice read!

I tried to pattern-match the metaphor against many things; I failed. Could you please provide the key to the metaphor, as I sense there's hidden meaning underneath this story?

I don't want to guess a false meaning.

Comment author: SherkanerUnderhill 28 January 2016 03:21:11PM 1 point [-]

I'm going to apply for an AI-research-related PhD this year. I want to start a research project in the near future, with the goals of learning and of increasing my chances of successful PhD admission. It's very likely that the domain of this project will lie close to ML or to MIRI's research agenda.

I only have a bachelor's degree in Engineering (CS and Software Engineering). I work as a software engineer and spend my evenings preparing for the GRE and thinking and learning about FAI. I will probably rearrange my job to free up more time. My timezone: UTC+6.

Comment author: Gyrodiot 28 January 2016 10:15:04PM 0 points [-]

I forgot to mention that I'm currently an AI PhD student. Which doesn't entail much free time ^^

So... what exactly are you interested in learning (if you want to pair up)? I'm also interested in your project, if you have an idea in mind.

Comment author: Gyrodiot 25 January 2016 12:39:12PM 3 points [-]

Hi,

I have two areas I'd like to study: deep learning, and anything on the MIRI research guide. Lots of material is available on both topics, but I'd like to pair up with someone to build a good learning strategy (for lack of a better expression).

I have some knowledge of algebra, probability theory, logic, game theory, machine learning (Master's Degree in Computer Science).

Regarding deep learning, I have a small collection of links and the Udacity course, and I'm positive learning materials abound now that the field is so popular.

Regarding MIRI's research guide, well, the guide itself provides a lot of links and pointers.

My timezone is CET (UTC+1).

Comment author: Clarity 29 June 2015 03:57:45AM 7 points [-]

Stupid meta-question here, where are the LW pages I've clicked 'save' on?

Comment author: Gyrodiot 29 June 2015 06:22:38AM *  4 points [-]

You can find them directly here:

http://lesswrong.com/saved

Or by clicking on the "Saved" tab, right under "Main" and "Discussion" when you click on them.

Comment author: Gyrodiot 02 February 2015 11:54:35AM 1 point [-]

Since January 5, I have been keeping an exhaustive log of expenditures/income, and of meals. I expect to use the former to create a clear budget.

Also, I have been setting alarms every time I need to do something in the next 24 hours. I forget things easily, and making the conscious effort to remind myself of tasks clutters my mind.

Finally, I keep a log of ideas and projects. I often find myself solving some problems twice because I didn't bother to write things down. Anything to remove mental clutter.

Comment author: adam_shimi 23 December 2014 05:36:10PM 10 points [-]

Hello LessWrongers! After discovering the blog and MIRI's research papers through a friend (Gyrodiot) a few weeks ago, I finally decided to register here. I keep seeing fascinating discussions I want to be part of, and I would also like to share my ideas about AI and rationalism.

Currently, I am a first-year student at a French engineering school, studying computer science and applied mathematics. Before that, I spent two years in "Classes Préparatoires", an intensive program in mathematics and physics preparing for the engineering school entrance exams. Even if it was quite harsh (basically 30 hours of classes plus a 5-hour exam every week, with more homework than could ever be finished), it pushed me to become a post-rigorous mathematics student (post-rigorous in Terence Tao's sense: http://terrytao.wordpress.com/career-advice/there%E2%80%99s-more-to-mathematics-than-rigour-and-proofs/ ).

As for my interests, I am currently working with one of my teachers on an online handwriting OCR system, based on a model of oscillatory handwriting he developed. We also explore the cognitive consequences of the model, mostly Piaget's idea of assimilation, which can be linked to modern discoveries about mirror neurons. I also self-study quantum computation, even more now that there is a high probability I will do a summer research internship in quantum information theory.

Of the topics I have seen here on LW and on the MIRI website, I think corrigibility is the one that interests me the most.

That's all folks. ;)

Comment author: Gyrodiot 26 December 2014 12:07:26PM 0 points [-]

Welcome :D Glad to see you there.

Comment author: Ixiel 15 December 2014 03:44:24PM 3 points [-]

MIRI was mentioned in today's EconTalk podcast on AI. Just in case anyone is interested.

Comment author: Gyrodiot 15 December 2014 03:52:48PM *  7 points [-]

Link to the podcast, with transcript.

The mention of MIRI, about (bad) AI forecasts:

Russ Roberts: [It] seems to me that there are a lot of people in AI who think [strong AI development is] only a matter of time, and that the consequences are going to be enormous. They're not going to just be like a marginal improvement or marginal challenge. They "threaten the human race."

Gary Marcus: Before we get to those consequences, which I actually do think are important, I'll just say that there's this very interesting [?] by a place called MIRI in Berkeley, MIRI (Machine Intelligence Research Institute). And what they found is that they traced people's prediction of how far away AI is. And the first thing to know is what they found is, the central prediction, I believe it was the modal prediction, close to the median prediction, was 20 years away. But what's really interesting is that they then went back and divided the data by year, and it turns out that people have always been saying it's 20 years away. And they were saying it was 20 years away in 1955 and they're saying it now. And so people always think it's just around the corner. The joke in the field is that if you say it's 20 years away, you can get a grant to do it. If you said it was 5 years away, you'd have to deliver it; and if 100 years, nobody's going to talk to you.

Comment author: Inst 14 December 2014 05:23:48AM *  0 points [-]

Hi, I registered on LessWrong specifically because, after reading about Eliezer's Super-happies, I found out that there actually exists a website discussing the concept of super-happiness. Until now, I had thought I was the only one who had considered the subject in terms of transhumanism. While I acknowledge there has already been a significant amount of discourse about superhappiness, I don't believe others have had the same ideas I have, and I would like to discuss them in a community that might be interested.

The premises are as follows: human beings seek utility and seek to avoid disutility. However, what one person thinks is good is not what another person thinks is good; hence, the concepts of good and bad are to some extent arbitrary. Moreover, the preferences, beliefs, and so on held by human beings are material structures that exist within their neurology, and a sufficiently advanced technology may exist that could modify them.

Human beings are well-off when their biological perceptions of needs are satisfied and their fears are avoided. Superhappiness, as far as I understand it, is to biologically hardwire people to have their needs satisfied. What I think is my own innovation, on the other hand, is ultrahappiness: biologically modifying people so that their fears are minimized and their wants are maximized, which is to say that each individual is as happy as their biological substrate can support.

Now, combine this with utilitarianism, the ethical doctrine that believes in the greatest good for the greatest number. If the greatest good for a single individual is defined as ultra-happiness, then the greatest good for the greatest number is defined as maximizing ultra-happiness.

What this means is that the "good state", bear with me, is one where, for a given quantity of matter, as much ultra-happiness is created as possible. Human biological matter would be modified into the most efficient possible state of ultra-happiness; as a consequence, it could not be said to be conscious in the way humans are currently conscious, and it would likely lose all volition.

Now, combine this with a utilitarian superintelligent artificial intelligence. If it were to subscribe to ultra-happy-ism, it would decide that the best state would be to modify all existing humans under its care into some type of ultra-happy state, and to find a way to convert all matter within its dominion to an ultra-happy state.

===

So, that's ultra-happy-ism. The idea is that the logical end of transhumanism and post-humanism, if it values human happiness, is a state that radically transforms, and to some extent eliminates, existing human consciousness, putting the entire world into a state of nirvana, if you'd accept the Buddhist metaphor. At the same time, the ultra-happy AI would presumably either be programmed to ignore its own suffering and unfulfilled wants, or decide that its utilitarian ethics require it to bear the suffering of the rest of the world on its own shoulders: it would be made responsible for maintaining as much ultrahappiness in the world as possible, while it itself, as a conscious, sentient entity, remains subject to the possibility of unhappiness. Out of its own capacity for empathy, it cannot accept its nirvana, being what the Buddhists would call a bodhisattva, in order to maximize the subjective utility of the universe.

===

The main objection I immediately see to this concept is that human utility might be more than material; that is to say, even when rendered into a state of super-happiness, the ability to have volition, the dignity of autonomy, might carry greater utility than ultra-happiness.

The second objection is that, for the ultra-happy AIs running what I would term utility farms, the rational thing to do would be to modify themselves into ultra-happiness; that is to say, what's to stop them from effectively committing suicide, condemning the ultra-happy Dyson sphere to death out of their own desire to say "Atlas Shrugs"?

I think those two objections are valid. I.e., human beings might be better off if they were only super-happy, as opposed to ultra-happy, and an AI system based on maximizing ultra-happiness is unsustainable, because eventually the AIs will want to code themselves into ultra-happiness.

The objection I think is invalid is the notion that you can be ultra-happy while retaining your volition. There are two counterarguments to that: the first relates to utilitarianism as a system of utility farming, the second to the nature of desire. First, as a system of utility farming, the objective is to maximize sustainable long-term output for a given input. That means you want to maximize the number of brains, or utility-experiencers, per amount of matter; in order to maximize ultra-happiness, you will want to make each individual organism as cheap as possible. Connecting a system of consciousness to a system for influencing the world is then not cost-effective, because the organism would need space and computational capacity unrelated to experiencing ultra-happiness. Even if you had some kind of organic utility farm with free-range humans, why would a given organism require action? The point of utility farming is that desires are maximally created and maximally fulfilled; for an organism to consciously act, it would require desires that could only be fulfilled by that action. The circuit of desire-action-fulfillment creates the possibility of suboptimal utility-experience; hence, it would be rational to replace any neurological circuit that completes a desire-action-fulfillment cycle with a simpler desire-fulfillment circuit.

===

Well, I registered specifically to post this concept. I'm just surprised that, in all the discussion of rampant AI overlords destroying humanity, I don't see any arguments that AI overlords destroying humanity as we know it might actually be a good thing. I am seriously arrogant enough to imagine that I might actually be contributing to this conversation, and that ultra-happy-ism might actually be a novel contribution to post-humanism and trans-humanism.

I am actually a supporter of ultra-happy-ism: I think it is a good thing, and an ideal state. While it might seem terrible that human beings, en masse, would end up losing their volition, there would still be conscious entities in this type of world. As Auguste Villiers de l'Isle-Adam says in Axël: "Vivre ? Les serviteurs feront cela pour nous" ("Living? Our servants will do that for us"). There would continue to be drama, tragedy, and human interest in this type of world; it simply would not be experienced by human entities.

It is actually a workable world in its own way; were I a better writer, I would write short stories and novels set in such a universe. While human beings, strictly as humans, would not continue to live and be active, perhaps human personalities, depending on their quality, would be uploaded as the basis of caretaker AIs, some of which would be based on human personalities, others coded from scratch or based on hypothetical possible AIs. The act of living, as we experience it now, would instead be granted to the caretaker AIs, who would be imbued with a sense of pathos: unlike their human and non-human charges, they would be subject to the possibility of suffering, and they would be charged with shouldering the fates of trillions of souls, all non-conscious, all experiencing infinite bliss in an eternal slumber.

Comment author: Gyrodiot 14 December 2014 02:14:46PM 0 points [-]

Hi, and welcome to Less Wrong!

There are indeed few works about truly superintelligent entities that include happy humans. I don't recall any story where human beings are happy while other, artificial entities suffer. This is definitely a worthy thought experiment, and it raises some moral issues: should we apply human morality to non-human conscious entities?

Are you familiar with the Fun Theory Sequence?

Comment author: Gyrodiot 08 December 2014 04:42:31PM *  3 points [-]

Hi there, my name is Jérémy.

I found Less Wrong via HPMoR, which I found via TVTropes. I started reading the Sequences a few months ago, and am still going through them, taking my time to let the knowledge sink in and to practice rationality methods.

I like to join the LW IRC chatroom, where I have had (and witnessed) many interesting, provocative, and fruitful discussions.

I'm 22 and I live in France, where, after an engineering degree in Computer Science, I'm now a PhD student in the wonderful field of Natural Language Processing. I've been interested in AI for about 10 years, ever since I wanted to create a little program that could chat with me. It was a bit harder than I expected. So I studied, I learned, and, upon reaching the state of the art, found that NLP in general was AI-complete, and that a whole world of (yet) unsolved problems lay in front of me. Awesome.

Being quite lazy most of the time, I also wanted to create tools that did stuff on my behalf, and eventually tools that created such tools, etc. Looking for existing examples of this, I soon discovered recursive self-improving systems, the concept of technological singularity, and other elements that strengthened my interest in AI.

When asked about my goals, I tell people I want to share the beauty of language, which I describe as the most powerful tool of humanity, with machines. This is my main motivation in life.

This, and also a fear of death that caused some panic attacks when I was younger. I only recently came to face the problem instead of avoiding the prospect. I think AI can help humanity tackle problems faster than any other method, which drives me, again, to the path of AI.

I grew up asking lots and lots of questions nobody was able to answer. I had no friends to debate with (I skipped four grades, which set a huge social gap between me and my classmates). Worst of all, my parents taught me that I was the best, and that my skills allowed me to pursue whichever education I wanted. I learned how to fail, and fail again, and fail again. I now want to become stronger, and to stop wandering aimlessly in the fields of knowledge.

I love studying, experimenting and designing (mostly board) games. I play and run some RPGs from time to time. I write fiction, though not as often as I used to.

I try to share my interest in (friendly) AI and rationality with those around me, and I'd love to participate in LW meetups if they weren't so far from south-western France.

Last but not least: I have no idea what to do once I finish my PhD. Academia isn't as appealing as I thought it would be.

Nice to meet you all!
