The "many more days that include them" is the 3^n part in my expression that is missing from any per day series. This 3^n is the sum of all interviews in that coin flip sequence ("coin flip sequence" = "all the interviews that are done because one coin flip showed up tails", right?) and in the per day (aka per interview) series the exact same sum exists, just as 3^n summands.
In both cases, the weight of the later coin flip sequences increases, because the number of interviews (3^n) increases faster than the probabilistic weigh...
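A two-line numeric illustration of the growing weights described above (the 3^n interview counts and the 1/2^(n+1) sequence probabilities are taken straight from the setup; this is just a sanity check, not a proof):

```python
# Weight of coin-flip sequence n: probability 1/2^(n+1) times 3^n interviews.
# The terms grow without bound, which is why the series diverges whichever
# way you group it (per coin-flip sequence or per interview).
for n in range(10):
    print(n, 3 ** n / 2 ** (n + 1))  # 0.5, 0.75, 1.125, 1.6875, ...
```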
I think people have slightly misunderstood what I was referring to with this:
- There exist no problems shown to be possible in real life for which CDT yields superior results.
- There exists at least one problem shown to be possible in real life for which TDT yields superior results.
My question was whether there is a conclusive, formal proof for this, not whether this is widely accepted on this site (I already realized TDT is popular). If someone thinks such a proof is given somewhere in an article (this one?) then please direct me to the point in the ar...
I take it that my approach was not discussed in the heated debate you had? Because it seems a good exercise for grad students.
Also, I don't understand why you think a per interview series would net fundamentally different results than a per coin toss series. I'd be interested in your reports after you (or your colleagues) have done the math.
I could have said that the beauty was simulated floor(5^x) times where x is a random real between 0 and n
Ah, I see now what you mean. Disregarding this new problem for the moment, you can still formulate my original expression on a per-interview basis, and it will still have the same Cesàro sum because it still diverges in the same manner; it just does so more continuously. If you envision a graph of an isomorphic series of my original expression, it will have "saw teeth" where it alternates between even and odd coin flips, and if you formulat...
What do you mean by "time" in this case? It sounds like you want to interrupt the interviews at an arbitrary point even though Beauty knows that interviews are quantised in a 3^n fashion.
(1/2 * 3^0 + 1/8 * 3^2 + ...) / (1/2 * 3^0 + 1/4 * 3^1 + 1/8 * 3^2 + ...)
... which can be transformed into an infinite series with a Cesàro sum of 0.5, so that's my answer.
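For anyone unfamiliar with the method: a Cesàro sum assigns a value to certain divergent or oscillating series by averaging the running partial sums. A minimal sketch using Grandi's series 1 - 1 + 1 - ... (a standard textbook example whose Cesàro sum also happens to be 0.5; this illustrates the technique itself, it is not a verification of the specific series above):

```python
def cesaro_mean(terms):
    """Average of the running partial sums -- the Cesàro mean."""
    partial = 0.0
    total_of_partials = 0.0
    count = 0
    for t in terms:
        partial += t
        total_of_partials += partial
        count += 1
    return total_of_partials / count

# Grandi's series 1 - 1 + 1 - 1 + ... has no ordinary sum,
# but its partial sums alternate 1, 0, 1, 0, ... and average to 0.5.
print(cesaro_mean((-1) ** k for k in range(10 ** 6)))  # -> 0.5
```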
Yes, it's a Newcomb-like problem. Anything where one agent predicts another is. People predict other people, with varying degrees of success, in the real world. Ignoring that when looking at decision theories seems silly to me.
I don't like the notion of using different decision theories depending on the situation, because the very idea of a decision theory is that it is consistent and comprehensive. Now if TDT were formulated as a plugin that seamlessly integrated into CDT in such a way that the resulting decision theory could be applied to any and all problems and would always yield optimal results, then that would be reason for me to learn about TDT. However, from what I gathered this doesn't seem to be the case?
I saw this post from EY a while ago and felt kind of repulsed by it:
I no longer feel much of a need to engage with the hypothesis that rational agents mutually defect in the oneshot or iterated PD. Perhaps you meant to analyze causal-decision-theory agents?
Never mind the factual shortcomings, I'm mostly interested in the rejection of CDT as rational. I've been away from LW for a while and wasn't keeping up on the currently popular beliefs on this site, and I'm considering learning a bit more about TDT (or UDT or whatever the current iteration is called...
The question "which decision theory is superior?" has this flavor of "can my dad beat up your dad?"
CDT is what you use when you want to make decisions from observational data or RCTs (in medicine, and so on).
TDT is what you use when "for some reason" your decisions are linked to what counterfactual versions/copies of yourself decided. Standard CDT doesn't deal with this problem, because it lacks the language/notation to talk about these issues. I argue this is similar to how EDT doesn't handle confounding properly because it...
How many people actually did the exercises katydee suggested? I know I didn't.
I did, but I don't think people realised it.
There are forums with popular blog sections, e.g. teamliquid.net which also features a wiki. There are also forums that treat top level posts differently, e.g. by displaying them prominently at the top of each thread page. None of this is really new.
On the other hand, I feel that in some regards LW is too different from traditional forums. For example, threads are sorted by OP time rather than by the time of the last reply, which makes sustained discussion very difficult: threads stay hot for a few days, but afterwards people simply stop replying, and at best two or three people continue posting without anyone else reading what they write.
You should probably edit your post then, because it currently suggests an IQ-atheism correlation that just isn't supported by the cited article.
I don't think this will satisfy you or the people who upvoted your comment, but by way of explanation...
The original post opened with a large section emphasizing that this was a quick and dirty analysis, because writing more careful analyses like When Will AI Be Created takes a long time. The whole point was for me to zoom through some considerations without many links or sources or anything. I ended up cutting that part of the post based on feedback.
Anyway, when I was writing and I got to the bit about there being some evidence of atheism increasing after...
Where in the linked article does it say that atheism correlates with IQ past 140? I cannot find this.
The current education system in Europe does a much better job at making education unpopular than at actually preventing those who may positively impact technology and society in the future from acquiring the necessary education to do so. Turning education into a chore is merely an annoyance for anyone involved, but doesn't actually hold back technological advance in any way.
If I was the devil, I would try to restrict internet access for as many people as possible. As long as you have internet, traditional education isn't really needed for humanity to advan...
I'm not sure if this post is meant to be taken seriously. It's always "easy" to make fun of X; what's difficult is to spread your opinion about X by making fun of X. Obviously this requires a target audience that doesn't already share your opinion about X, and if you look at people making fun of things (e.g. on the net), usually the audience they're catering to already shares their views. This is because the most common objective of making fun of things is not to convince people of anything, but to create a group identity, raise team morale, and ...
I have a dream that one day, people will stop bringing up the (Iterated) Prisoner's Dilemma whenever decisions involve consequences. IPD is a symmetrical two-player game with known payouts, rational agents, and no persistent memory (in tournaments). Real life is something completely different, and equating TFT with superficially similar real life strategies is just plain wrong.
The possibility of the existence of immortality/afterlife/reincarnation certainly affects how people behave in certain situations, this is hardly a revelation. Running PD-like simula...
I think there would be more people interested in playing if strategies could be submitted in pseudocode, so that would be great.
Am I the only one who sees a problem in that we're turning a non-zero-sum game into a winner-take-all tournament? Perhaps instead of awarding a limited resource like bitcoins to the "winner", each player should be awarded an unlimited resource such as karma or funny cat pictures according to their strategy's performance.
Considering this was an experimental tournament, learning how certain strategies perform against others seems far more interesting to me than winning, and I can't imagine any strategy I would label as a troll submission. Even strategies solely designed to be obstacles are valid and valuable contributions, and the fact that random strategies skew the results is a fault of the tournament rules and not of the strategies themselves.
A tournament like this would be much more interesting if it involved multiple generations. Here, the results heavily depended upon the pool of submitted strategies, regardless of their actual competitiveness, while a multiple-generations tournament would measure success as performance against other successful strategies.
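For what it's worth, here is a minimal sketch of what such a multi-generation (ecological) tournament could look like, in the spirit of Axelrod's follow-up experiments; the strategies, payoffs, and replicator update rule are illustrative assumptions, not the rules of the tournament discussed here:

```python
import random

# Standard PD payoffs (an assumption; the actual tournament used its own).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own, other):
    return other[-1] if other else 'C'

def always_defect(own, other):
    return 'D'

def random_move(own, other):
    return random.choice('CD')

def match(s1, s2, rounds=50):
    """Score of s1 against s2 over an iterated match."""
    h1, h2, score1 = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score1 += PAYOFF[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return score1

strategies = {'TFT': tit_for_tat, 'AllD': always_defect, 'RND': random_move}
shares = {name: 1 / len(strategies) for name in strategies}  # population mix

for generation in range(15):
    # Fitness: average score against the current population mix, so success
    # is measured against other currently successful strategies.
    fitness = {a: sum(shares[b] * match(strategies[a], strategies[b])
                      for b in strategies) for a in strategies}
    total = sum(shares[a] * fitness[a] for a in strategies)
    # Replicator update: strategies gain share in proportion to fitness.
    shares = {a: shares[a] * fitness[a] / total for a in strategies}
    print(generation, {a: round(shares[a], 3) for a in strategies})
```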
What do you think about this?
Let's find out!
[pollid:402]
Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.
I think that's mostly because money is too abstract, and as long as you get by you don't even realize what you've lost. Survival is much more real.
You don't "judge" a book by its cover; you use the cover as additional evidence to more accurately predict what's in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.
You can't calculate utilities anyway; there's no reason to assume that u(n days) should be 0.5 * (u(n+m days) + u(n-m days)) for any n or m. If you want to include immortality, you can't assign utilities linearly, although you can get arbitrarily close by picking a higher factor than 0.5 as long as it's < 1.
Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...
No one really believes God will protect them from harm...
I have some friends who do... At least insofar as things like "I don't have to worry about finances because God is watching over me, so I won't bother trying to keep a balanced budget." Then again, being financially irresponsible (a behaviour I find extremely hard to understand and sympathize with) seems to be common-ish, and not just among people who think God will take care of their problems.
A really smart 'shoot lasers at "blue" things' robot will shoot at blue things if there are any, and will move in a programmed way if there aren't. All its actions are triggered by the situation it is in; and if you want to make it smarter by giving it an ability to better distinguish actually-blue from blue-looking things, then any such activity must be triggered as well. If you program it to shoot at projectors that project blue things it won't become smarter, it will just shoot at some non-blue things. If you paint it blue and put a mirror in ...
Actually, it seems you can solve the immortality problem in ℝ after all, you just need to do it counterintuitively: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc, immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|.
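In closed form that mapping is u(n) = 2 - 2^(1-n) (my reconstruction from the listed values), i.e. exactly the factor-0.5 case of the geometric scheme from the earlier comment:

```python
def u(days):
    """Bounded utility of n days: u(1)=1, u(2)=1.5, u(3)=1.75, ..., sup = 2."""
    return 2 - 2 ** (1 - days)

print([u(n) for n in (1, 2, 3, 20)])  # [1.0, 1.5, 1.75, 1.9999980...]
# 'Immortality' is assigned the supremum 2, which no finite lifespan reaches.
```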
This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions.
I believe it's actually a problem about how to do utility-maximising when there's no maximum utility, like the other problems. It's easy to find examples for problems in which there are infinitely many decisions as well as a maximum utility, and none of those I came up with are in any way paradoxical or even difficult.
Yes, I am aware of that. The biggest trouble, as you have elaborately explained in your post, is that people think they can perform mathematical operations in VNM-utility-space to calculate utilities they have not explicitly defined in their system of ethics. I believe Eliezer has fallen into this trap, the sequences are full of that kind of thinking (e.g. torture vs dust specks) and while I realize it's not supposed to be taken literally, "shut up and multiply" is symptomatic.
Another problem is that you can only use VNM when talking about comple...
This is a very good post. The real question that has not explicitly been asked is the following:
How can utility be maximised when there is no maximum utility?
The answer of course is that it can't.
Some of the ideas that are offered as solutions or approximations of solutions are quite clever, but because for any agent you can trivially construct another agent who will perform better, and there is no metric other than utility itself for determining how much better one agent is than another, solutions aren't even interesting here. Trying to find limits suc...
Do you mean it's not universally solvable in the sense that there is no "I always prefer the $1"-type solution? Of course there isn't. That doesn't break VNM, it just means you aren't factoring outcomes properly.
That's what I mean, and while it doesn't "break" VNM, it means I can't apply VNM to situations I would like to, such as torture vs dust specks. If I know the utility of 1000 people getting dust specks in their eyes, I still don't know the utility of 1001 people getting dust specks in their eyes, except it's probably higher. I...
You're taking this too literally. The point is that you're immortal, u(day in heaven) > u(day in neither heaven nor hell) > u(day in hell), and u(2 days in heaven and 1 day in hell) > u(3 days in neither heaven nor hell).
You don't even need hell for this sort of problem; suppose God offers to let you either cash in on your days in heaven (0 at the beginning) right now or wait a day, after which he will add 1 day to your bank and offer you the same deal again. How long will you wait? What if God halved the additional time with each deal, so that you could never spend 2 full days in heaven, but could get arbitrarily close to it?
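A minimal sketch of the halving variant, assuming the first wait banks a full day and every later wait banks half the previous increment:

```python
# Each extra day of waiting banks half the previous increment, so the bank
# converges to 2 days in heaven without ever reaching it: waiting one more
# day always adds something, but waiting forever cashes in nothing.
bank, increment = 0.0, 1.0
for day in range(1, 11):
    bank += increment
    increment /= 2
    print(day, bank)  # 1 1.0, 2 1.5, 3 1.75, ... -> approaches 2.0
```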
That's not bias, it's subjective morals.
Most of [?] agree that the VNM axioms are reasonable
My problem with VNM-utility is that while in theory it is simple and elegant, it isn't applicable to real life because you can only assign utility to complex world states (a non-trivial task) and not to limited outcomes. If you have to choose between $1 and a 10% chance of $2, then this isn't universally solvable in real life because $2 doesn't necessarily have twice the value of $1, so the completeness axiom doesn't hold.
Also, assuming you assign utility to lifetime as a function of life quality in su...
I'm not sure what you mean by an infinite transition system. Are you referring to circular causality such as in Newcomb, or to an actually infinite number of states such as a variant of Sleeping Beauty in which on each day the coin is tossed anew and the experiment only ends once the coin lands heads?
Regardless, I think I have already disproven the conjecture I made above in another comment:
...Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence
You mean, if an agent loses money. And that's the point; if the only thing you know is that an agent loses money in a simulation of poker, how can you prove the same is true for real poker?
I can attest that I have personally saved the lives of friends on two occasions thanks to good situational awareness, and have saved myself from serious injury or death many times more.
It is not my impression that I lead a very dangerous life.
These two statements seem contradictory to me. Maybe you ought to specify what you mean by "saved from death". If I consider crossing the street, notice an approaching car, and proceed to not cross the street until the car has passed, did I just save myself from death? Describing the particular incidents and pointing out exactly how SA helped you to stay alive where others would have died would be much more convincing.
If you would oppose an AI attempting to enforce a CEV that would be detrimental to you, but still classify it as FAI and not evil, then wouldn't that make you evil?
Obviously this is a matter of definitions, but it still seems to be the logical conclusion.
Looks like an attempt to get rid of the negative image associated with the name Singularity Institute. I wonder if it isn't already too late to take PR seriously.
I'm not sure I understand what you mean by 'failing' in regards to simulations. Could you elaborate?
I think the word you're looking for is pet -- the standard meaning of domesticated also includes livestock, whose meat, if anything, I guess is seen as less ethically problematic than game by many people. (From your username, I'm guessing you're not a native speaker. FWIW, neither am I.)
You're right, it's not exactly a matter of domestication, but it's not only pets, either; horses fall into that category just as well. As I said, it's too fuzzy and arbitrary.
...You know, you could decide not to eat certain kinds of meat for reasons other than “taboo”; f
Last but not least, I started it out of curiosity, in order to obtain answers to specific questions about vegetarians' decision procedures; that's what I'm still interested in learning about
If you're really still interested in this...
I started my vegetarian diet shortly after I decided to adopt some definite policy in terms of which kinds of meat were ok to eat and which were not, because the common policy of excluding all meat from domesticated animals such as cats and dogs was too fuzzy for me. I experimented with different Schelling points for a whil...
If you substitute Omega with a repeated coin toss, there is no Omega, and there is no concept of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously, counting only those instances in which the prediction matches the decision.
For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
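A minimal sketch of that filtering construction for Newcomb's problem (the fair coin standing in for the prediction is the substitution described above; the $1,000,000 / $1,000 payoffs are the usual ones and are my assumption here):

```python
import random

def filtered_average(decision, trials=10 ** 5):
    """Replace Omega with a fair coin toss and keep only the runs in which
    the 'prediction' happens to match the agent's (deterministic) decision."""
    payoffs = []
    for _ in range(trials):
        prediction = random.choice(['one-box', 'two-box'])
        if prediction != decision:
            continue  # discard mismatched instances, per the filtering rule
        box_b = 1_000_000 if prediction == 'one-box' else 0
        payoffs.append(box_b if decision == 'one-box' else box_b + 1_000)
    return sum(payoffs) / len(payoffs)

print(filtered_average('one-box'))   # -> 1000000.0
print(filtered_average('two-box'))   # -> 1000.0
```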
This is yet another poorly phrased, factually inaccurate post containing some unorthodox viewpoints that are unlikely to be taken seriously because people around here are vastly better at deconstructing others' arguments than fixing them for them.
Ignoring any formal and otherwise irrelevant errors such as what utilitarianism actually is, I'll try to address the crucial questions; both to make Bundle_Gerbe's viewpoints more accessible to LW members and also to make it more clear to him why they're not as obvious as he seems to think.
1: How does creating new...
That's because it's not strictly speaking a problem in GT/DT, it's a problem (or meta-problem if you want to call it that) about GT/DT. It's not "which decision should agent X make", but "how can we prove that problems A and B are identical."
Concerning the matter of rudeness: suppose you write a post and however many comments about a mathematical issue, only for someone to conclude that you're not talking about mathematics, without even reading what you write and while admitting he has no idea what you're talking about. I find that rude.
What do you mean by "analogous"?
I'm not surprised you don't understand what I'm asking when you don't read what I write.
There used to be a thread on LW that dealt with interesting ways to make small sums of money and ways to reduce expenditure. I think among other things going to Australia for a year was discussed. Does anyone know which thread I'm talking about and can provide me with the link? I can't seem to find it.