All of klkblake's Comments + Replies

Two people, if you count random lesswrongers, and ~300 if you count self-reports in the last tulpa survey (although some of the reports in that survey are a bit questionable).

3alicey

Being able to reliably succeed on this task is one of the tests I've been using. Mostly, though, it's just a matter of trying to get to the point where we can both be focusing intently on something.

I tried that last week. I lost. We were actively trying not to share our strategies with each other, although in our case abstract knowledge and skills are shared.

1Vulture
That's awesome.

In terms of form, she's an anthropomorphic fox. At the moment, looking at her is not noticeably different to normal visualisation, except that I don't have to put any effort into it. Explaining it in words is somewhat hard -- she's opaque without actually occluding anything, if that makes sense.

1Ishaan
You're not the same Jack with the fox tulpa who spoke to Luhrmann, right?

So, I have a tulpa, and she is willing to answer any questions people might have for her. She's not properly independent yet, so we can't do the more interesting stuff like parallel processing, etc., unfortunately (damned akrasia).

0Ishaan
Wait, does that mean that at least one person has been confirmed as having achieved this?
2chairbender
What experimental test could you perform to determine that you have successfully learned "parallel tulpa processing"?
1[anonymous]
What does your tulpa look like visually? Does it look like everything else or is it more "dreamlike"?

There have been a number of reports on the tulpa subreddit from people who have talked to their psychologist about their tulpa. The diagnosis seems to be split 50/50 between "unusual coping mechanism" and "Dissociative Identity Disorder, not otherwise specified".

Sounds like fun! I'll PM you my contact details.

I might be interested in being your study partner; what would that involve?

1LM7805
Depends mainly on how we both learn best. For me, when it comes to learning a new language that tends to be finding a well-defined, small (but larger than toy) project and implementing it, and having someone to rubber-duck with (over IM/IRC/email is fine) when I hit conceptual walls. I'm certainly up for tackling something that would help out MIRI.

Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I've (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I'm feeling less confident about my mathematical ability after reading the papers, but that is probably a result of spending a few hours reading papers I don't fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love ...

0[anonymous]
Three areas I would look into are distributed capability-based security systems (example: Amoeba), formally verified kernels (example: seL4), and formal verification of user programs (example: Singularity OS). Programming language research isn't really needed -- Haskell is the language I would choose, for its practical and theoretical advantages, but there are other options too. Where the work would be needed is in integration: making the GHC compiler output proofs (Haskell is well suited to this, but there is not to my knowledge a complete framework for doing so), making the operating system / distributed environment runtime verify them, and, most importantly of all, choosing what invariants to enforce.
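To make the capability idea concrete, here is a minimal Haskell sketch (hypothetical module and names; real systems like Amoeba enforce this in the OS rather than the language): a capability is an unforgeable token, modelled here as an abstract newtype whose constructor is not exported.

```haskell
-- Minimal sketch of capability-style access control at the language level.
module FileCap (FileCap, issueCap, readViaCap) where

-- The constructor is not exported, so code importing this module
-- cannot forge a FileCap -- it can only be granted one.
newtype FileCap = FileCap FilePath

-- A trusted authority issues capabilities; a real system would perform
-- an access-control check here before granting.
issueCap :: FilePath -> IO FileCap
issueCap path = pure (FileCap path)

-- Holding the capability is the only way to read the file.
readViaCap :: FileCap -> IO String
readViaCap (FileCap path) = readFile path
```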
0lukeprog
I doubt this is worth pushing on now. If it's useful, it'll be useful when we're closer to doing engineering rather than philosophy and math. In the immediate future we'll keep tackling problems addressed in our past workshops. Other than that, I'm not sure which problems we'll tackle next. We'll have to wait and see what comes of Eliezer's other "open problems" write-ups, and which ideas workshop participants bring to the November and December workshops. The participants of our April workshop checked this, and after some time decided they could probably break Tarski, Gödel, and Löb with probabilistic reflection, but not the Halting Problem, despite the similarities in structure. You could ask (e.g.) Qiaochu if you want to know more.

I'd heard of Idris. Parts of it sound really good (dependent typing, totality, a proper effects system, being usable from Vim), although I'm not a huge fan of tactic-based proofs (that's what the Curry-Howard Isomorphism is for!). It's definitely at the top of my list of languages to learn. I wasn't aware of the security focus, that is certainly interesting.
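A tiny illustration of "the Curry-Howard isomorphism instead of tactics" (a sketch in plain Haskell rather than Idris, with made-up names): a total term inhabiting a type is itself a proof of the corresponding proposition, no tactic language required.

```haskell
-- Under Curry-Howard, writing the (total) program is writing the proof.

-- A proof of the tautology A -> (B -> A):
constK :: a -> b -> a
constK x _ = x

-- A proof that implication is transitive: (A -> B) -> (B -> C) -> (A -> C):
implTrans :: (a -> b) -> (b -> c) -> (a -> c)
implTrans f g = g . f
```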

Proving safety in the face of malicious input sounds fascinating -- a dump would be much appreciated.

1LM7805
Also, presuming that the talk Andreas Bogk has proposed for 30c3 is accepted, you'll want to see it -- it's a huge pragmatic leap forward. (I apologize for not being at liberty to go into any more detail than that. The talk will be livestreamed and recorded, FWIW.)
5LM7805
"Security Applications of Formal Language Theory" is a good overview. (If you don't have IEEE access, there's a tech report version.) Much of the work going on in this area has to do with characterizing classes of vulnerabilities in terms of unintended computational automata that arise from the composition of independent systems, often through novel vulnerability discovery motivated by considering the formal characteristics of a composed system and figuring out what can be wedged into the cracks. There's also been some interesting defensive work (Haskell implementation, an approach I'm interested in generalizing). That's probably a good start. I have not actually learned Idris yet, and I think I could motivate myself better if I had a study partner; would you be interested in something like that?

Fairly technical would be good. IEM and the sociological work are somewhat outside my interests. Attending a workshop would unfortunately be problematic; anxiety issues make travelling difficult, especially air travel (I live in Australia). Writing up comments on the research papers is an excellent idea; I will certainly start doing that regardless of what project I do. Of the subjects listed, I am familiar (in roughly decreasing order) with functional programming, efficient algorithms, parallel computing, discrete math, numerical analysis, linear algebra, ...

6lukeprog
In that case, I think you'll want to study mathematical logic, theory of computation, incompleteness/undecidability and model theory, to improve your ability to contribute to the open problems that Eliezer thinks are most plausibly relevant to Friendly AI. Skimming our recent technical papers (definability of truth, robust cooperation, tiling agents) should also give you a sense of what you'd need to learn to contribute at the cutting edge. A few years from now, I hope to have write-ups of a lot more open problems, including ones that don't rely so heavily on mathematical logic. Something closer to cognitive-science-based AI, which Paul Christiano and Andreas Stuhlmüller (and perhaps others) think is plausibly relevant to FAI, is concept learning. The idea is that this will be needed at some point for getting AIs to "do what I mean." The September workshop participants spent some time working on this. You could email Stuhlmüller to ask for more details, preferably after reading the paper linked above.

I haven't heard the term CSE before (computer science & engineering?), but I'm doing a Bachelor of Science, majoring in Computer Science and minoring in Mathematics. I am taking an AI course at the moment (actually, it's a combined AI/data mining course, and it's a bit shallower than I would like, but it covers the basics).

Do you know if this issue would show up on a standard vitamin panel?

4Epiphany
Hmm. Good question. I think they'd have to test for the methylated versions, not the regular versions, and I do not know whether the standard procedure is to test for the methylated versions - but this is just me reasoning it out, not medical advice. To my knowledge, if MTHFR is suspected, they generally test for the MTHFR mutation itself.

Ah, ok. In that case, though, the other agent wins at this game at the expense of failing at some other game. Depending on what types of games the agent is likely to encounter, this agent's effectiveness may or may not actually be better than BestDecisionAgent's. So we could possibly have an optimal decision agent in the sense that no change to its algorithm could increase its expected lifetime utility, but not in the sense of never failing at any game.

Let BestDecisionAgent choose the $1 with probability p. Then the various outcomes are:

Simulation's choice | Our choice  | Payoff
$1                  | $1          | $1
$1                  | $2 or $100  | $100
$2 or $100          | $1          | $1
$2 or $100          | $2 or $100  | $2

And so p should be chosen to maximise p^2 + 100p(1-p) + p(1-p) + 2(1-p)^2. This is equal to the quadratic -98p^2 + 97p + 2, which Wolfram Alpha says is maximised by p = 97/196, for an expected payoff of ~$26.
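Spelling out the intermediate algebra (expanding the expectation and taking the vertex of the quadratic):

```latex
E(p) = p^2 + 100p(1-p) + p(1-p) + 2(1-p)^2 = -98p^2 + 97p + 2,
\qquad
E'(p) = -196p + 97 = 0 \;\Rightarrow\; p = \tfrac{97}{196},
\qquad
E\!\left(\tfrac{97}{196}\right) = 2 + \tfrac{97^2}{392} \approx 26.00.
```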

If we are not BestDecisionAgent, and so are allowed to choose separately, we aim to maximise pq + 100p(1-q) ...

0solipsist
The Omega chooses a payoff of $2 vs. $100 based on a separate test that can differentiate between BestDecisionAgent and some other agent. If we are BestDecisionAgent, the Omega will know this, and we will be offered at most a $2 payoff. But some other agent will differ from BestDecisionAgent in a way that the Omega detects and cares about. That agent can decide between $1 and $100. Since another agent can perform better than BestDecisionAgent, BestDecisionAgent cannot be optimal.

Really? I had the impression that switching was relatively common among people who had their tulpas for a while. But then, I have drawn this impression from a lot of browsing of r/Tulpa, and only a glance at tulpa.info, so there may be some selection bias there.

I heard about merging here. On the other hand, this commenter seems to think the danger comes from weird expectations about personal continuity.

0kerin
Thank you for the references. Whilst switching may indeed be relatively common among people who have had their tulpas for a long while, the actual numbers are still small -- 44 according to a recent census. Ah, so merging is some sort of forming a gestalt personality? I've no evidence to offer, only stuff I've read, and I find the authors somewhat questionable sources.

This article seems relevant (if someone can find a less terrible pdf, I would appreciate it). Abstract:

The illusion of independent agency (IIA) occurs when a fictional character is experienced by the person who created it as having independent thoughts, words, and/or actions. Children often report this sort of independence in their descriptions of imaginary companions. This study investigated the extent to which adult writers experience IIA with the characters they create for their works of fiction. Fifty fiction writers were interviewed about the develo

...
1kerin
Very few people have actually managed switching, from what I have read. I personally do not recommend it, but I am somewhat biased on that topic. Merging is a term I've rarely heard. Perhaps it is favored by the more metaphysically minded? I've not heard good reports of this, and all I have heard of "merging" came from a very few individuals well known to be internet trolls on 4chan.
1Kaj_Sotala
Great find!

I think the term is "reference class tennis".

This is fascinating. I'm rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications, though -- given what we see in split-brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa's memories aren't being confabulated on the spot, it would suggest that the host loses the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don't want to risk permanently decreasing my intelligence.

6drnickbone
So, "Votes for tulpas" then! How many of them can you create inside one head? The next stage would be "Vote for tulpas!". Getting a tulpa elected as president using the votes of other tulpas would be a real munchkin coup...
1MugaSofer
You should get one of the occult enthusiasts to check if Tulpas leave ghosts ;) More seriously, I suspect the brain is already capable of this sort of thing - dreams, for example - even if it's usually running in the background being your model of the world or somesuch.
4mare-of-night
I've been wondering if the headaches people report while forming a tulpa are caused by spending more mental energy than normal.
-3Kawoomba
It's a waste of time at best, and inducing psychosis at worst. (Waste of time because the "tulpa" - your hallucination - has access to the same data repository you use, and doesn't run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.) You know what it's called when you hear voices giving you "advice"? Paranoid schizophrenia. Outright visual hallucinations? What's next, using magic mushrooms to speed the process? Yes, you can probably teach yourself to become actually insane, but why would you?

This may just be a temporary glitch, but this post appears to have had its content replaced with that of Mundane Magic.

3Zack_M_Davis
Thank you for commenting! This is entirely my fault; fixing it now.

I knew Kolmogorov complexity was used in Solomonoff induction, and I was under the impression that using Universal Turing Machines was an arbitrary choice.

1Oscar_Cunningham
Solomonoff induction is only optimal up to a constant, and the constant will change depending on the language.

I'm confused about Kolmogorov complexity. From what I understand, it is usually expressed in terms of Universal Turing Machines, but can be expressed in any Turing-complete language, with no difference in the resulting ordering of programs. Why is this? Surely a language that had, say, natural language parsing as a primitive operation would have a very different complexity ordering than a Universal Turing Machine?

1Oscar_Cunningham
The Kolmogorov complexity changes by an amount bounded by a constant when you change languages, but the order of the programs is very much allowed to change. Where did you get that it wasn't?
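The relevant fact is the invariance theorem: for any two universal languages U and V there is a constant c_{U,V}, depending only on the pair (essentially the length of an interpreter for V written in U), such that

```latex
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x.
```

So complexities agree up to an additive constant, but two programs whose complexities differ by less than that constant can certainly swap places in the ordering.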

^W means Control-W, the control character that terminal line editing interprets as "delete previous word" (WERASE).

1Raw_Power
Oh, "heuristics", otherwise known as "prejudice"! The main difference in connotation being that heuristics are changed in the face of enough contrary evidence, while prejudices... aren't.