Comment author: LM7805 20 September 2013 10:04:35PM *  1 point [-]

I am interested in dependent type systems, total languages, and similar methods of proving certain program errors cannot occur, although I would have to do some background research to learn more of the state of the art in that field.

If you're not already familiar with Idris, I highly recommend checking it out -- it's a dependently typed Haskell variant, a bit like Agda but with a much friendlier type syntax. The downside of Idris is that, as a newer language, it doesn't have nearly as robust a standard library of proofs as, say, Coq. That said, the author, Edwin Brady, is keenly focused on making it a good language to express security properties in.
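
For a concrete taste of the "errors cannot occur" style, here is a minimal toy sketch -- my own example, not from any Idris material, with illustrative names like Vec and safeHead -- approximating the dependently typed approach in Haskell via GADTs and DataKinds:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- A list indexed by its own length, so that "head of an empty list"
-- is a type error rather than a runtime crash.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total by construction: the empty-vector case is unrepresentable here,
-- so no runtime check (or failure) is possible.
safeHead :: Vec ('S n) a -> a
safeHead (VCons x _) = x
```

In a fully dependently typed language like Idris, the index can be any value-level term, which is what lets richer invariants -- including security properties -- be pushed into the types.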

The field I work in has to do with proving that program errors related to maliciously crafted input cannot occur; if it'd be useful I'm happy to braindump/linkdump.

Comment author: klkblake 21 September 2013 02:06:32PM 0 points [-]

I'd heard of Idris. Parts of it sound really good (dependent typing, totality, a proper effects system, being usable from Vim), although I'm not a huge fan of tactic-based proofs (that's what the Curry-Howard Isomorphism is for!). It's definitely on the top of my list of languages to learn. I wasn't aware of the security focus, that is certainly interesting.
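
To unpack that parenthetical with a toy example of my own: under Curry-Howard, a proof is just a total program inhabiting the corresponding type, so rather than driving a tactic engine you simply write the term. In Haskell:

```haskell
-- Reading "->" as implication, this total term is itself a proof of
-- the proposition "A implies (B implies A)".
proofK :: a -> b -> a
proofK x _ = x
```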

Proving safety in the face of malicious input sounds fascinating -- a dump would be much appreciated.

Comment author: lukeprog 20 September 2013 08:30:36PM 7 points [-]

This all depends on your major and your advisors, of course, but...

  • To find out whether you can contribute to MIRI's technical research at this time, you could apply to attend a workshop and/or write up some of your own comments on MIRI's published papers, like Quinn did.
  • We explained what someone could do on IEM with Yudkowsky (2013) and Grace (2013). Katja has the most detailed picture of what exactly someone would do next, if you're interested. Some of the work would be CS-related (like Katja's report), some of it would be about evolutionary biology, some of it would be on other topics.
  • For more sociological work, one could follow up on these two projects, which are both on "pause" right now. We outlined pretty clearly what the next steps there would be.

For you specifically, it seems you'd have to do something fairly technical. Is that right? If so, I can try to talk through which pieces of MIRI's technical research agenda you're most likely to be able to contribute to, if you tell me more about your background. E.g. which of the subjects here are you already familiar with?

Comment author: klkblake 21 September 2013 01:48:38PM 2 points [-]

Fairly technical would be good. IEM and the sociological work are somewhat outside my interests. Attending a workshop would unfortunately be problematic; anxiety issues make travelling difficult, especially air travel (I live in Australia). Writing up comments on the research papers is an excellent idea; I will certainly start doing that regardless of what project I do. Of the subjects listed, I am familiar (in roughly decreasing order) with functional programming, efficient algorithms, parallel computing, discrete math, numerical analysis, linear algebra, and the basics of set theory and mathematical logic. I have "Naive Set Theory", "Introduction to Mathematical Logic", and "Gödel, Escher, Bach" sitting on my desk at the moment, and I am currently taking courses in theory of computation and intelligent systems (a combined AI/machine learning/data mining course). The areas I had planned to learn next are incompleteness/undecidability, model theory, and category theory. As for how my prospective advisor could affect things, he's mostly interested in cognitive-science-based AI, with some side interest in theory of computation.

Comment author: iDante 20 September 2013 05:21:02AM 0 points [-]

Just to confirm, it's undergraduate CSE honors? Have you taken an AI course?

My initial impression is that you'll have trouble doing something specifically related to FAI, but it depends on your background.

Comment author: klkblake 20 September 2013 06:37:12AM 1 point [-]

I haven't heard the term CSE before (computer science & engineering?), but I'm doing a Bachelor of Science, majoring in Computer Science and minoring in Mathematics. I am taking an AI course at the moment (actually, it's a combined AI/data mining course, and it's a bit shallower than I would like, but it covers the basics).

AI-related honours projects?

3 klkblake 20 September 2013 03:50AM

I'm starting my Honours next year, and would like to do something towards helping MIRI with Friendly AI. I would also prefer to avoid duplicating any of MIRI's work (either already done, or needing to be done before my Honours finishes midway through 2015). I decided to post this here rather than directly email MIRI because I guessed a list of potential projects would probably be useful for others as well (in fact, I was sure such a list had already been posted, but if it exists I was unable to find it). So: what sort of Friendly AI related projects are there that could potentially be done by one person in a year of work? (I suppose it would make sense to include PhD-length suggestions here as well.)

Some notes about me and my abilities: I am reasonably good with math, though my understanding of probability, model theory and provability logic is lacking (I will have a few months beforehand that I plan to use to learn whatever maths I will need that I don't already have). I am a competent Haskell programmer, and (besides AI) I am interested in dependent type systems, total languages, and similar methods of proving certain program errors cannot occur, although I would have to do some background research to learn more of the state of the art in that field. I would (hesitantly) guess that this would be the best avenue for something a single person could do that might be useful, but I'm not sure how useful it would be.

Comment author: Epiphany 28 June 2013 07:58:53AM *  3 points [-]

P/S/A: There's a treatable genetic mutation, in a gene called MTHFR, that roughly half the population has and that has only fairly recently begun to be treated. It causes several vitamin deficiencies (because you don't process the vitamins into their usable forms -- it's treatable because you can take the usable forms as supplements) and homocysteine issues, and its symptoms can range from none at all to raging horrible problems with depression, anxiety, IBS, fatigue, and a list of other things.

Specifics:

It reduces the body's ability to convert folic acid into the usable form, methylfolate, and reduces the body's ability to convert vitamin B12 into the usable form (called methylcobalamin). The same mutation also tends to cause homocysteine levels to be too high or too low.

Caution:

Knowledge about this is kind of new, because the genome was mapped only relatively recently (and then we had to figure out what this gene does, figure out how to treat it, begin producing the supplements to treat it, etc.). It can be tricky to treat. If you pursue this, you should seek a medical professional who has significant experience treating people with MTHFR mutations.

What are the symptoms:

"Research is still pending on which medical conditions are caused by, or at least partially attributed to, the MTHFR gene mutations. From the partial list I recently went through on Medline, these are the current symptoms, syndromes and medical conditions relating to the MTHFR gene mutations" - www.mthfr.net This site lists 64 different conditions and symptoms ranging from miscarriages to schizophrenia. See Also: Disclaimer.

Disclaimer:

There's a reason I chose the symptom link above, but you should know that it is not a perfect list of symptoms. For an alternative list and an explanation about why I chose this symptom list, please see my response to Yvain about that under "the guy you're linking to".

Comment author: klkblake 28 June 2013 09:19:43AM 3 points [-]

Do you know if this issue would show up on a standard vitamin panel?

Comment author: solipsist 20 June 2013 01:30:36PM *  0 points [-]

$100 if the Omega thinks the agent acts differently than BestDecisionAgent in a simulated rationality test, otherwise $2 if the agent acts like BestDecisionAgent in the rationality test.

The Omega chooses the $2 vs. $100 payoff based on a separate test that can differentiate between BestDecisionAgent and some other agent. If we are BestDecisionAgent, the Omega will know this and we will be offered at most a $2 payoff. But some other agent will differ from BestDecisionAgent in a way that the Omega detects and cares about; that agent can decide between $1 and $100. Since this other agent can perform better than BestDecisionAgent, BestDecisionAgent cannot be optimal.

Comment author: klkblake 20 June 2013 01:46:57PM 1 point [-]

Ah, ok. In that case, though, the other agent wins at this game at the expense of failing at some other game. Depending on what types of games the agent is likely to encounter, this agent's effectiveness may or may not actually be better than BestDecisionAgent's. So we could possibly have an optimal decision agent in the sense that no change to its algorithm could increase its expected lifetime utility, but not in the sense of never failing in any game.

Comment author: solipsist 20 June 2013 12:40:54AM *  0 points [-]

Omega gives you a choice of either $1 or $X, where X is either 2 or 100

Yes, that's what I mean. I'd like to know what, if anything, is wrong with this argument that no decision theory can be optimal.

Suppose that there were a computable decision theory T that was at least as good as all other theories. In any fair problem, no other decision theory could recommend actions with better expected outcomes than the expected outcomes of T's recommended actions.

  1. We can construct a computable agent, BestDecisionAgent, using theory T.
  2. For any fair problem, no computable agent can perform better (on average) than BestDecisionAgent.
  3. Call the problem presented in the grandfather post the Prejudiced Omega Problem. In the Prejudiced Omega Problem, BestDecisionAgent will almost assuredly collect $2.
  4. In the Prejudiced Omega Problem, another agent can almost assuredly collect $100.
  5. The Prejudiced Omega Problem does not involve an Omega inspecting the source code of the agent.
  6. The Prejudiced Omega Problem, like Newcomb's problem, is fair.
  7. Contradiction

I'm not asserting this argument is correct -- I just want to know where people disagree with it.

Qiaochu_Yuan's post is related.

Comment author: klkblake 20 June 2013 11:16:16AM *  0 points [-]

Let BestDecisionAgent choose the $1 with probability p. Then the various outcomes are:

Simulation's choice | Our choice  | Payoff
--------------------|-------------|-------
$1                  | $1          | $1
$1                  | $2 or $100  | $100
$2 or $100          | $1          | $1
$2 or $100          | $2 or $100  | $2

And so p should be chosen to maximise p^2 + 100p(1-p) + p(1-p) + 2(1-p)^2. This is equal to the quadratic -98p^2 + 97p + 2, which Wolfram Alpha says is maximised by p = 97/196, for an expected payoff of ~$26.

If we are not BestDecisionAgent, and so are allowed to choose the $1 separately with probability q, we aim to maximise pq + 100p(1-q) + q(1-p) + 2(1-p)(1-q), which simplifies to -98pq + 98p - q + 2 and is maximised by q = 0, for a payoff of ~$50.5. This surprises me; I was expecting to get p = q.
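
As a sanity check on the arithmetic, here is a quick numerical sketch (my own; the names expectedSame and expectedDiff are made up for illustration) evaluating both formulas at p = 97/196:

```haskell
-- Expected payoff when we *are* BestDecisionAgent: the simulation's
-- choice and ours share the same probability p of taking the $1.
expectedSame :: Double -> Double
expectedSame p = p^2 + 100*p*(1-p) + p*(1-p) + 2*(1-p)^2

-- Expected payoff when the simulated BestDecisionAgent takes the $1
-- with probability p but we take it with independent probability q.
expectedDiff :: Double -> Double -> Double
expectedDiff p q = p*q + 100*p*(1-q) + q*(1-p) + 2*(1-p)*(1-q)

main :: IO ()
main = do
  let p = 97 / 196
  print (expectedSame p)    -- ~26.0, matching Wolfram Alpha
  print (expectedDiff p 0)  -- 50.5, the q = 0 payoff
```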

So (3) and (4) are not quite right, but the result is similar. I suspect BestDecisionAgent should be able to pick p such that p = q is the best option for any agent, at the cost of reducing the value it gets.

ETA: Of course you can do this just by setting p = 0, which is what you assume. Which, actually, means that (3) and (4) contradict each other: if BestDecisionAgent always picks the $2 over the $1, then the best any agent can do is $2.

(Incidentally, how do you format tables properly in comments?)

Comment author: kerin 25 May 2013 12:42:11PM 1 point [-]

Very few people have actually managed switching, from what I have read. I personally do not recommend it, but I am somewhat biased on that topic.

Merging is a term I've rarely heard. Perhaps it is favored by the more metaphysically minded? I've not heard good reports of it, and all I have heard of "merging" came from a very few individuals well known to be internet trolls on 4chan.

Comment author: klkblake 25 May 2013 01:09:22PM 0 points [-]

Really? I had the impression that switching was relatively common among people who have had their tulpas for a while. But then, I have drawn this impression from a lot of browsing of r/Tulpas, and only a glance at tulpa.info, so there may be some selection bias there.

I heard about merging here. On the other hand, this commenter seems to think the danger comes from weird expectations about personal continuity.

Comment author: Kaj_Sotala 12 May 2013 06:04:45PM 18 points [-]

I really doubt that tulpas have much to do with DID, or with anything dangerous for that matter. Based on my admittedly anecdotal experience, a milder version of having them is at least somewhat common among writers and role-players, who say that they're able to talk to the fictional characters they've created. The people in question seem... well, as sane as you get when talking about strongly creative people. An even milder version, where the character you're writing or role-playing just takes a life of their own and acts in a completely unanticipated manner, but one that's consistent with their personality, is even more common, and I've personally experienced it many times. Once the character is well-formed enough, it just feels "wrong" to make them act in some particular manner that goes against their personality, and if you force them to do it anyway you'll feel bad and guilty afterwards.

I would presume that tulpas are nothing but our normal person-emulation circuitry acting somewhat more strongly than usual. You know those situations where you can guess what your friend would say in response to some comment, or when you feel guilty about doing something that somebody important to you would disapprove of? Same principle, quite probably.

Comment author: klkblake 22 May 2013 01:10:41PM 10 points [-]

This article seems relevant (if someone can find a less terrible pdf, I would appreciate it). Abstract:

The illusion of independent agency (IIA) occurs when a fictional character is experienced by the person who created it as having independent thoughts, words, and/or actions. Children often report this sort of independence in their descriptions of imaginary companions. This study investigated the extent to which adult writers experience IIA with the characters they create for their works of fiction. Fifty fiction writers were interviewed about the development of their characters and their memories for childhood imaginary companions. Ninety-two percent of the writers reported at least some experience of IIA. The writers who had published their work had more frequent and detailed reports of IIA, suggesting that the illusion could be related to expertise. As a group, the writers scored higher than population norms in empathy, dissociation, and memories for childhood imaginary companions.

The range of intensities reported by the writers seems to match up with the reports in r/Tulpas, so I think it's safe to say that it is the same phenomenon, albeit achieved via slightly different means.

Some interesting parts from the paper regarding dissociative disorders:

The subjects completed the Dissociative Experiences Scale, which yields an overall score, as well as scores on three subscales:

  • Absorption and changeability: people's tendency to become highly engrossed in activities (items such as "Some people find that they become so involved in a fantasy or daydream that it feels as though it were really happening to them").
  • Amnestic experiences: the degree to which dissociation causes gaps in episodic memory ("Some people have the experience of finding things among their belongings that they do not remember buying").
  • Derealisation and depersonalisation: things like "Some people sometimes have the experience of feeling that their body does not belong to them".

The subjects scored an overall mean of 18.52 (SD 16.07), whereas the general population scores a mean of 7.8 and a group of schizophrenics scored 17.7. A score of 30 is a commonly used cutoff for "normal" scores; seven subjects exceeded this threshold. The mean scores for the subscales were:

  • Absorption and changeability: 26.22 (SD 14.65).
  • Amnestic experiences: 6.80 (SD 8.30).
  • Derealisation and depersonalisation: 7.84 (SD 7.39).

The latter two subscales are considered particularly diagnostic of dissociative disorders, and the subjects did not differ from the population norms on these. Each subscale had only one subject scoring over 30 (not the same subject).

What I draw from this: Tulpas are the same phenomenon as writers interacting with their characters. Creating tulpas doesn't cause other symptoms associated with dissociative disorders. There shouldn't be any harmful long-term effects (if there were, we should have noticed them in writers). That said, there are some interactions that some people have with their tulpas that are outside the range (to my knowledge) of what writers do:

  • Possession
  • Switching
  • Merging

The tulpa community generally endorses the first two as safe, and claims the last is horribly dangerous, reliably ending in insanity and/or death. I suspect the first one would be safe, but I would not recommend trying any of them without more information.

(Note: This is not my field, and I have little experience with interpreting research results. Grains of salt, etc.)

Comment author: D_Malik 10 May 2013 01:27:19PM 27 points [-]

A tulpa is an "imaginary friend" (a vivid hallucination of an external consciousness) created through intense prolonged visualization/practice (about an hour a day for two months). People who claim to have created tulpas say that the hallucination looks and sounds realistic. Some claim that the tulpa can remember things they've consciously forgotten or is better than them at mental math.

Here's an FAQ, a list of guides and a subreddit.

Not sure whether this is actually possible (I'd guess it would be basically impossible for the 3% of people who are incapable of mental imagery, for instance); many people on the subreddit are unreliable, such as occult enthusiasts (who believe in magick and think that tulpas are more than just hallucinations) and 13-year-old boys.

If this is real, there's probably some way of using this to develop skills faster or become more productive.

Comment author: klkblake 12 May 2013 01:54:53PM 4 points [-]

This is fascinating. I'm rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications though -- given what we see with split-brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa's memories aren't being confabulated on the spot, it would suggest that the host loses the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don't want to risk permanently decreasing my intelligence.
