Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I've (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I'm feeling less confident about my mathematical ability after reading the papers, but that is probably a result of spending a few hours reading papers I don't fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love ...
I'd heard of Idris. Parts of it sound really good (dependent typing, totality, a proper effects system, being usable from Vim), although I'm not a huge fan of tactic-based proofs (that's what the Curry-Howard Isomorphism is for!). It's definitely at the top of my list of languages to learn. I wasn't aware of the security focus; that is certainly interesting.
Proving safety in the face of malicious input sounds fascinating -- a dump would be much appreciated.
Fairly technical would be good. IEM and the sociological work are somewhat outside my interests. Attending a workshop would unfortunately be problematic; anxiety issues make travelling difficult, especially air travel (I live in Australia). Writing up comments on the research papers is an excellent idea; I will certainly start doing that regardless of what project I do. Of the subjects listed, I am familiar (in roughly decreasing order) with functional programming, efficient algorithms, parallel computing, discrete math, numerical analysis, linear algebra, ...
I haven't heard the term CSE before (computer science & engineering?), but I'm doing a Bachelor of Science, majoring in Computer Science and minoring in Mathematics. I am taking an AI course at the moment (actually, it's a combined AI/data mining course, and it's a bit shallower than I would like, but it covers the basics).
Ah, ok. In that case, though, the other agent wins at this game at the expense of failing at some other game. Depending on what types of games the agent is likely to encounter, this agent's effectiveness may or may not actually be better than BestDecisionAgent's. So we could possibly have an optimal decision agent in the sense that no change to its algorithm could increase its expected lifetime utility, but not in the sense of never failing at any game.
Let BestDecisionAgent choose the $1 with probability p. Then the various outcomes are:
Simulation's choice | Our choice | Payoff
$1 | $1 | $1
$1 | $2 or $100 | $100
$2 or $100 | $1 | $1
$2 or $100 | $2 or $100 | $2
And so p should be chosen to maximise p^2 + 100p(1-p) + p(1-p) + 2(1-p)^2. This is equal to the quadratic -98p^2 + 97p + 2, which Wolfram Alpha says is maximised by p = 97/196, for an expected payoff of ~$26.
If we are not BestDecisionAgent, and so are allowed to choose separately, we aim to maximise pq + 100p(1-q) ...
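As a sanity check, here's a throwaway Python sketch of the above (the payoff matrix is just the table transcribed; `expected_payoff` and the grid search are my own naming):

```python
import numpy as np

# Payoff table from above: rows = simulation's choice, columns = our
# choice; index 0 = take the $1, index 1 = take the $2/$100.
payoff = np.array([[1, 100],
                   [1, 2]])

def expected_payoff(p, q):
    """p = P(simulation takes the $1), q = P(we take the $1)."""
    probs = np.outer([p, 1 - p], [q, 1 - q])
    return (probs * payoff).sum()

# BestDecisionAgent is what the simulation runs, so q = p:
ps = np.linspace(0, 1, 10001)
best = max(ps, key=lambda p: expected_payoff(p, p))
print(best, expected_payoff(best, best))  # ~0.4949 (i.e. 97/196), ~$26.00

# If we may choose q independently of p, the payoff is linear in q with
# slope -(1 + 98p) < 0, so q = 0 (always take the $2/$100) is optimal:
print(expected_payoff(best, 0))  # ~$50.50
```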
Really? I had the impression that switching was relatively common among people who had their tulpas for a while. But then, I have drawn this impression from a lot of browsing of r/Tulpa, and only a glance at tulpa.info, so there may be some selection bias there.
I heard about merging here. On the other hand, this commenter seems to think the danger comes from weird expectations about personal continuity.
This article seems relevant (if someone can find a less terrible pdf, I would appreciate it). Abstract:
...The illusion of independent agency (IIA) occurs when a fictional character is experienced by the person who created it as having independent thoughts, words, and/or actions. Children often report this sort of independence in their descriptions of imaginary companions. This study investigated the extent to which adult writers experience IIA with the characters they create for their works of fiction. Fifty fiction writers were interviewed about the development ...
This is fascinating. I'm rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications, though -- given what we see in split-brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa's memories aren't being confabulated on the spot, it would suggest that the host would lose the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don't want to risk permanently decreasing my intelligence.
I'm confused about Kolmogorov complexity. From what I understand, it is usually expressed in terms of Universal Turing Machines, but can be expressed in any Turing-complete language, with no difference in the resulting ordering of programs. Why is this? Surely a language that had, say, natural language parsing as a primitive operation would have a very different complexity ordering than a Universal Turing Machine?
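To make my confusion concrete, here's a toy Python sketch; zlib vs. bz2 compressed lengths are my stand-ins for description lengths in two different languages (only an analogy, since actual Kolmogorov complexity is uncomputable):

```python
import bz2
import zlib

# Two compressors as toy "description languages": the compressed length
# plays the role of K(x) in that language. Purely illustrative.
strings = [
    b"a" * 1000,                      # highly regular
    b"the cat sat on the mat " * 40,  # repetitive English-ish text
    bytes(range(256)) * 4,            # structured, but not text
]

for s in strings:
    print(len(zlib.compress(s)), len(bz2.compress(s)))

# The two lengths disagree string-by-string, so the induced orderings can
# differ; as I understand it, the invariance theorem only promises agreement
# up to an additive constant (the length of an interpreter for one language
# written in the other).
```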
Two people, if you count random lesswrongers, and ~300 if you count self-reporting in the last tulpa survey (although some of the reports in that survey are a bit questionable).