Split brain patients can do stuff like this better than neurotypicals under certain conditions. I have not heard of anyone successfully doing this with tulpas or any other psychodynamic technique.
Being able to reliably succeed on this task is one of the tests I've been using. Mostly, though, it's just a matter of trying to get to the point where we can both be focusing intently on something.
Is it possible for a tulpa to have skills or information that the person doing the emulating doesn't? What happens if you play chess against your tulpa?
I tried that last week. I lost. We were actively trying to not share our strategies with each other, although in our case abstract knowledge and skills are shared.
What does your tulpa look like visually? Does it look like everything else or is it more "dreamlike"?
In terms of form, she's an anthropomorphic fox. At the moment, looking at her is not noticeably different to normal visualisation, except that I don't have to put any effort into it. Explaining it in words is somewhat hard -- she's opaque without actually occluding anything, if that makes sense.
So, I have a tulpa, and she is willing to answer any questions people might have for her. She's not properly independent yet, so we can't do the more interesting stuff like parallel processing, etc, unfortunately (damned akrasia).
Correct me if I'm wrong, but doesn't having a tulpa fit the diagnostic criteria of schizophrenia?
There have been a number of reports on the tulpa subreddit from people who have talked to their psychologist about their tulpa. The diagnosis seems to be split roughly 50/50 between "unusual coping mechanism" and "Dissociative Identity Disorder, not otherwise specified".
Depends mainly on how we both learn best. For me, when it comes to learning a new language that tends to be finding a well-defined, small (but larger than toy) project and implementing it, and having someone to rubber-duck with (over IM/IRC/email is fine) when I hit conceptual walls. I'm certainly up for tackling something that would help out MIRI.
Sounds like fun! I'll PM you my contact details.
"Security Applications of Formal Language Theory" is a good overview. (If you don't have IEEE access, there's a tech report version.) Much of the work going on in this area has to do with characterizing classes of vulnerabilities in terms of unintended computational automata that arise from the composition of independent systems, often through novel vulnerability discovery motivated by considering the formal characteristics of a composed system and figuring out what can be wedged into the cracks. There's also been some interesting defensive work (Haskell implementation, an approach I'm interested in generalizing). That's probably a good start.
I have not actually learned Idris yet, and I think I could motivate myself better if I had a study partner; would you be interested in something like that?
I might be interested in being your study partner; what would that involve?
In that case, I think you'll want to study mathematical logic, theory of computation, incompleteness/undecidability and model theory, to improve your ability to contribute to the open problems that Eliezer thinks are most plausibly relevant to Friendly AI. Skimming our recent technical papers (definability of truth, robust cooperation, tiling agents) should also give you a sense of what you'd need to learn to contribute at the cutting edge.
A few years from now, I hope to have write-ups of a lot more open problems, including ones that don't rely so heavily on mathematical logic.
Something closer to cognitive science based AI, which Paul Christiano and Andreas Stuhlmuller (and perhaps others) think is plausibly relevant to FAI, is concept learning. The idea is that this will be needed at some point for getting AIs to "do what I mean." The September workshop participants spent some time working on this. You could email Stuhlmuller to ask for more details, preferably after reading the paper linked above.
Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I've (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I'm feeling less confident about my mathematical ability after reading the papers, but that is probably a result of spending a few hours reading papers I don't fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love (except for the part where it's a form of supervised learning, but that's unlikely to be a problem).
I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?
Do you know which of the open problems MIRI is likely to attack first? I'd like to avoid duplication of effort, though I know with the unpredictability of mathematical insight that's not always feasible.
UPDATE: I just had a meeting with my supervisor, and he was pretty happy with all of the options I presented, so that won't be a problem. An idea I had this morning, which I'm pretty excited about, is potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure.
Wait, does that mean that at least one person has been confirmed as having achieved this?
Two people, if you count random lesswrongers, and ~300, if you count self-reporting in the last tulpa survey (although some of the reports in that survey are a bit questionable).