Actually, if you push the precommitment time all the way back, this sounds a lot like an informal version of Updateless Decision Theory, which, by the way, seems to get everything that TDT gets right, plus counterfactual mugging and a number of other thought experiments that TDT gets wrong.
an informal version of Updateless Decision Theory
Are you implying that UDT is formal?
Remember that I did not invent the PGP protocol. I wrote a tool that uses that protocol. So, I don't know if what you are suggesting is possible or not. But I can make an educated guess.
If what you are suggesting is possible, it would render the entire protocol (which has been around for something like 20 years) broken, invalid and insecure. It would undermine the integrity of vast untold quantities of data. Such a vulnerability would absolutely be newsworthy. And yet I've read no news about it. So of the possible explanations, what is most probable?
1. Such an obvious and easy-to-exploit vulnerability has existed for 20-ish years, undiscovered/unexposed until one person on LW pointed it out?
2. The proposed security flaw sounds like it might work, but doesn't.
I'd say #2 is more probable by several orders of magnitude
I've never seen it stated as a requirement of the PGP protocol that it is impossible to hide extra information in a signature. In an ordinary use case this is not a security risk; it's only a problem when the implementation is untrusted. I have as much disrespect as anyone towards people who think they can easily achieve what experts who spent years thinking about it can't, but that's not what is going on here.
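For concreteness, here is a toy sketch of the kind of covert channel being discussed, with HMAC standing in for a real randomized signature scheme (the key, nonce size, and channel width below are all made up). The point is only that a randomized scheme lets an untrusted implementation grind over nonces until part of its own output encodes extra data, while every verifier still sees a perfectly valid signature:

```python
import hmac, hashlib, os

KEY = b"signing key"  # stand-in for a real private key

def toy_sign(message: bytes) -> bytes:
    """Honest 'signature': randomized, as in DSA/ECDSA-style schemes."""
    nonce = os.urandom(16)
    return nonce + hmac.new(KEY, nonce + message, hashlib.sha256).digest()

def leaky_sign(message: bytes, secret: int, nbits: int = 8) -> bytes:
    """Malicious 'signature': regenerate until the last byte leaks `secret`.
    The result still verifies exactly like an honest signature."""
    mask = (1 << nbits) - 1
    while True:
        sig = toy_sign(message)
        if sig[-1] & mask == secret & mask:
            return sig

sig = leaky_sign(b"hello", secret=42)
print(sig[-1])  # an observer who knows the trick reads back 42
```

Nothing about this contradicts the protocol's guarantees, which are about forgery and key secrecy, not about what an adversarial signer can smuggle into its own signatures.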
.01%
How much money are you willing to bet on that?
If the amount is less than $50,000, I suggest you just offer it all as a prize to whoever proves you wrong. The value to your reputation will be more than $5, and due to transaction costs people are unlikely to bet with you directly when there is less than $5 to gain.
"There's something that would make you happier than that," Harry said, his voice breaking again. "There has to be."
Muggle research in the 2010s has revealed a great deal about what actually makes people happy, and how often people are deceived about it. The best way to find out is with one of those mood-tracking cell phone apps, which eliminate the biases of memory. Quirrell doesn't have that, but as an approximation, I searched the PDF for the word "smile", which appears 310 times in chapters 1-106, and the word "enjoy", which appears 32 times. What did I find?
“Do you know,” the Defense Professor said in soft reflective tones, “there are those who have tried to soften my darker moods, and those who have indeed participated in brightening my day, but you are the first person ever to succeed in doing it deliberately?”
Interacting with Harry makes Quirrell happy. Moreso than killing idiots. Moreso than teaching Battle Magic. Killing him would be a grave mistake.
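For what it's worth, the "smile"/"enjoy" counts above are easy to reproduce with a few lines of Python, assuming a plain-text export of chapters 1-106 (the filename is hypothetical):

```python
import re

# Hypothetical plain-text export of chapters 1-106.
with open("hpmor_ch1-106.txt", encoding="utf-8") as f:
    text = f.read().lower()

for word in ("smile", "enjoy"):
    # Substring matches, so "smiled" and "enjoying" count too,
    # mirroring a simple PDF text search.
    print(word, len(re.findall(word, text)))
```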
That quote is from chapter 74. I mention this because you didn't specify, and to save others the trouble of searching.
Remember that no matter what happens, the Hufflepuff boy will still come to Harry at a bit after 11:04. This means either that Voldemort will survive this encounter and retain mobility in four hours, or that he set up this message in advance (or that Harry is wrong about the source of this message).
I don't think guided training is generally the right way to disabuse an AIXI agent of misconceptions we think it might have. What training amounts to is having the agent's memory begin with some carefully constructed string s0. All this does is change the agent's prior from some P based on Kolmogorov complexity to the prior P'(s) = P(s0 + s | s0), where + is concatenation. If what you're really doing is changing the agent's prior to what you want, you should do that deliberately and without artificial restrictions. In certain circumstances guided training might be the right method, but the general approach should be to think about what prior we want and hard-code it as effectively as possible. Taken to the natural extreme, this amounts to making an AI that works on completely different principles from AIXI.
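A toy sketch of that equivalence (nothing like real AIXI; the hypotheses and "description lengths" below are made up): each environment is a fixed output string with prior weight 2^-K, and "training" on a prefix s0 just discards inconsistent hypotheses and renormalizes, i.e. it conditions the prior.

```python
# Toy illustration: a prior over a handful of hypothetical environments,
# each modeled as a fixed output string with a description length K that
# sets its prior weight 2**(-K).

HYPOTHESES = {
    # name: (output string, description length in bits) -- made-up values
    "all_zeros": ("0000000000", 3),
    "alternate": ("0101010101", 5),
    "ones_then": ("1111100000", 8),
}

def prior():
    """Unnormalized Solomonoff-style prior: weight 2^-K per hypothesis."""
    return {name: 2.0 ** -k for name, (_, k) in HYPOTHESES.items()}

def trained_prior(s0):
    """P'(h) = P(h | observations start with s0): drop inconsistent
    hypotheses and renormalize.  This is all that 'guided training' does."""
    p = prior()
    consistent = {name: w for name, w in p.items()
                  if HYPOTHESES[name][0].startswith(s0)}
    total = sum(consistent.values())
    return {name: w / total for name, w in consistent.items()}

if __name__ == "__main__":
    print(trained_prior(""))    # the original prior, normalized
    print(trained_prior("01"))  # after "training" on the string "01"
```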
Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from practical issues like logging unexpected wake-ups during the night, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.
I've also been suspicious of happiness surveys for a similar reason. One theory I have is that a large portion of the variation in happiness set-point is just that different people have different tendencies in answering "rate on a 1-10 scale"-type questions. It would be interesting to test how much happiness set-point correlates with answers to questions such as "rate this essay from 1 to 10". Another test of this theory, which is far more likely to have actually been conducted already, is to see how well happiness set-point correlates with neurological signals of happiness (the difficulty here being that the primary way to determine whether a neurological signal indicates happiness is through self-report; nonetheless, if the happiness set-point correlates with some neurological signal, it is more likely that this signal plays a role in happiness than in inducing high number ratings).
On this topic, I'd like to suggest a variant of Newcomb's problem that I don't recall seeing anywhere on LessWrong (or anywhere else). As usual, Omega presents you with two boxes, box A and box B. She says: "You may take either box A alone or both boxes. Box B contains $1,000. Box A either contains $1,000,000 or is empty. Here is how I decided what to put in box A: I considered a perfectly rational agent placed in a situation identical to yours. If I predicted she would take one box, I put the money in box A; otherwise I put nothing." Suppose further that Omega has put many other people into this exact situation, and in all those cases the amount of money in box A was identical.
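To make the structure explicit, here is a minimal payoff sketch, treating Omega's prediction of the perfectly rational agent as a fixed input from your perspective (the dollar amounts are just those from the setup above):

```python
# Payoffs in this variant: box A depends on Omega's prediction of what a
# *perfectly rational agent* would do, not on a prediction of you.

BOX_B = 1_000
BOX_A_IF_PREDICTED_ONE_BOX = 1_000_000

def payoff(your_choice, predicted_rational_choice):
    box_a = BOX_A_IF_PREDICTED_ONE_BOX if predicted_rational_choice == "one-box" else 0
    return box_a if your_choice == "one-box" else box_a + BOX_B

for prediction in ("one-box", "two-box"):
    for choice in ("one-box", "two-box"):
        print(prediction, choice, payoff(choice, prediction))
```

For either fixed prediction, two-boxing is $1,000 better; the only way one-boxing can win is if your choice tells you something about the choice of the perfectly rational agent Omega considered.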
The reason I mention the problem is that while the original Newcomb's problem is analogous to the Prisoner's Dilemma with clones that you described, this problem is more directly analogous to the ordinary one-shot Prisoner's Dilemma. In the Prisoner's Dilemma with clones and in Newcomb's problem, your outcome is controlled by a factor that you don't directly control but which is nonetheless influenced by your strategy. In the ordinary Prisoner's Dilemma and in my Newcomb-like problem, this factor is controlled by a rational agent distinct from yourself (although note that in the Prisoner's Dilemma this agent's outcome is directly influenced by what you do, but not so in my dilemma).
People have argued that you should cooperate in the one-shot Prisoner's Dilemma for essentially the same reason you should one-box. I disagree, and I think my hypothetical illustrates that the two problems are disanalogous by presenting a more faithful analogue. While there is a strong argument for one-boxing in Newcomb's problem, which I agree with, the case is less clear here. I think the argument that a TDT agent would choose cooperation in the Prisoner's Dilemma is flawed; TDT in its current form is not precise enough to give a clear answer to this question. After all, both the CDT argument in terms of dominated strategies and the superrational argument in terms of the underlying symmetry of the situation can be phrased in TDT, depending on how you draw the causal graph over computations.
Personally I don't expect this to be of much use to me. I find the task of translating thoughts into words more strenuous than it seems to be for others, so I expect this to be more distracting than helpful. I have played games where I tried to subvocalise all of my thoughts, the way some people have interior monologues, and they support this conclusion. I believe I have a fairly good working memory (for instance, I can play blind chess), so I don't see as much value in an external aid. Other people are commenting based on their own personal experience and feelings, so I think I can trust my own gut feeling about how this will work out for me.
Excuse me, I have to don a flame-proof suit now.
Just a question: what useful results for predicting and modelling a preexisting reality has Douglas Hofstadter produced? I mean, yes, GEB is... well, it's GEB. I find it quite dated and think that it skates by on having fun with patterns rather than explaining observed phenomena. I'm also a little aggravated that GEB includes no discussion of model theory, ordinal logic, and ω-incompleteness, nor of algorithmic randomness and halting problems, nor of the Curry-Howard Isomorphism and how it matches computational systems to logical systems. It goes on and on about recursion and formal systems for a very long time without actually addressing the formal sciences that handle the various phenomena arising from recursion in logic!
Whereas something more recent, like Hutter's Universal Artificial Intelligence, succeeds on mathematical rigor, and Probabilistic Models of Cognition on the beauty of its compression and presentation.
Depending on how you define "preexisting reality", most professional mathematics can be said not to achieve this. In any case, the terms under which people usually praise Douglas Hofstadter do not include this sort of achievement. And if you really want to know what Hofstadter has done, there's this.