There does not appear to be any such thing as a dominant majority vote.
Eliezer, are you aware that there's an academic field studying issues like this? It's called Social Choice Theory, and happens to be covered in chapter 4 of Hervé Moulin's Fair Division and Collective Welfare, which I recommended in my post about Cooperative Game Theory.
I know you're probably approaching this problem from a different angle, but it should still be helpful to read what other researchers have written about it.
A separate comment I want to make is that if you want others to help you solve problems in "timeless decision theory", you really need to publish the results you've got already. What you're doing now is like if Einstein had asked people to help him predict the temperature of black holes before having published the general theory of relativity.
As far as needing a long sequence, are you assuming that the reader has no background in decision theory? What if you just write to an audience of professional decision theorists, or someone who has at least read "The Foundations of Causal Decision Theory" or the equivalent?
Unfortunately this "timeless decision theory" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.
But it is the writeup most frequently requested of you, and also, I think, the thing you have done that you refer to the most often.
Nobody's going to offer. You have to ask them.
In case you're wondering, I'm writing this up because one of the SIAI Summer Project people asked if there was any Friendly AI problem that could be modularized and handed off and potentially written up afterward, and the answer to this is almost always "No"
Does it mean that the problem isn't reduced enough to reasonably modularize? It would be nice if you wrote up an outline of the state of research at SIAI (even a brief one with unexplained labels), or an explanation of why you won't.
Hanson's example of ten people dividing the pie seems to hinge on arbitrarily passive actors who simply accept the propositions made to them, instead of being able to solicit other deals or make counter-proposals, and it is also contingent on infinite and costless bargaining time. The bargaining-time bit may be a fair (if unrealistic) assumption, but the passivity does not make sense. It really depends on the kind of commitments and bargains players are able to make and enforce, and the degree/order of proposals from outgroup and ingroup members.
When the first two ...
Here's a comment that took me way too long to formulate:
On the Prisoner's Dilemma in particular, this infinite regress can be cut short by expecting that the other agent is doing symmetrical reasoning on a symmetrical problem and will come to a symmetrical conclusion...
Eliezer, if such reasoning from symmetry is allowed, then we sure don't need your "TDT" to solve the PD!
Is Parfit's Hitchhiker essentially the same as Kavka's toxin, or is there some substantial difference between the two I'm missing?
For example, there's a problem introduced to me by Gary Drescher's marvelous Good and Real (OOPS: The below formulation was independently invented by Vladimir Nesov...
For a moment I was wondering how the Optimally Ordered Problem Solver was relevant.
Is your majority vote problem related to Condorcet's paradox? It smells that way, but I can't put my finger on why.
I cheated the PD infinite regress problem with a quine trick in Re-formalizing PD. The asymmetric case seems to be hard because fair division of utility is hard, not because quining is hard. Given a division procedure that everyone accepts as fair, the quine trick seems to solve the asymmetric case just as well.
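For readers who haven't seen that post, here is a minimal sketch of the flavor of the quine trick (my own toy version, not Nesov's actual construction; the name `clique_bot` is mine): a program that cooperates exactly when the opponent's source code is identical to its own.

```python
# A toy "cooperate with my own source code" bot - the flavor of the quine
# trick, not the construction from Re-formalizing PD itself.
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is running this exact program."""
    my_source = inspect.getsource(clique_bot)
    return 'C' if opponent_source == my_source else 'D'

me = inspect.getsource(clique_bot)
print(clique_bot(me))                          # against an identical copy: 'C'
print(clique_bot("def bot(src): return 'D'"))  # against anything else: 'D'
```

Two identical copies each verify the equality syntactically and cooperate, so no regress of predictions ever starts; as the comment says, the remaining work in the asymmetric case is agreeing on which division the mutually-verified program should commit to.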
Post your "timeless decision theory" already. If it's correct, it shouldn't be that complex. With your intelligence you can always ...
"I believe X to be like me" => "whatever I decide, X will decide also" seems tenuous without some proof of likeness that is beyond any guarantee possible in humans.
I can accept your analysis in the context of actors who have irrevocably committed to some mechanically predictable decision rule, which, along with perfect information on all the causal inputs to the rule, gives me perfect predictions of their behavior, but I'm not sure such an actor could ever trust its understanding of an actual human.
Maybe you could aspire to such determinism in a proven-correct software system running on proven-robust hardware.
Well, let me put it this way - if my opponent is Eliezer Yudkowsky, I would be shocked to walk away with anything but $7.50.
and this is exactly the problem: If your behavior on the prisoner's dilemma changes with the size of the outcome, then you aren't really playing the prisoner's dilemma. Your calculation in the low-payoff case was being confused by other terms in your utility function, terms for being someone who cooperates -- terms that didn't scale.
As a first off-the-cuff thought, the infinite regress of conditionality sounds suspiciously close to general recursion. Do you have any guarantee that a fully general theory that gives a decision wouldn't be equivalent to a Halting Oracle?
ETA: If you don't have such a guarantee, I would submit that the first priority should be either securing one, or proving isomorphism to the Entscheidungsproblem and, thus, the impossibility of the fully general solution.
If I were forced to pay $100 upon losing, I'd have a net gain of $4950 each time I play the game, on average. Transitioning from this into the game as it currently stands, I've merely been given an additional option. As a rationalist, I should not regret being one. Even knowing I won't get the $10,000, as the coin came up heads, I'm basically paying $100 for the other quantum me to receive $10,000. As the other quantum me, who saw the coin come up tails, my desire to have had the first quantum me pay $100 outweighs the other quantum me's desire to not lose...
Unfortunately this "timeless decision theory" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.
Can someone tell me the matrix of pay-offs for taking on Eliezer as a PhD student?
I swear I'll give you a PhD if you write the thesis. On fancy paper and everything.
Would timeless decision theory handle negotiation with your future self? For example, if a timeless decision agent likes paperclips today but knows it is going to be modified to like apples tomorrow (and not care a bit about paperclips), will it abstain from destroying the apple orchard, and its future self abstain from destroying the paperclips, in exchange?
And is negotiation the right way to think about reconciling the difference between what I now want and what a predicted smarter, grown-up, more knowledgeable version of me would want? Or am I going the wrong way?
But I don't have a general theory which replies "Yes" [to a counterfactual mugging].
You don't? I was sure you'd handled this case with Timeless Decision Theory.
I will try to write up a sketch of my idea, which involves using a Markov State Machine to represent world states that transition into one another. You then distinguish evidence about the structure of the MSM from evidence about your historical path through it. The best decision to make in a world state is then defined as the decision which is part of a policy that maximizes expected ...
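To make the truncated idea concrete, here is a toy sketch under my own assumptions (the two states, actions, transitions, rewards, and function names below are all made up for illustration, not the commenter's construction): enumerate deterministic policies over a small state machine, score each by expected discounted return, and define "the best decision in a state" as the action the best policy assigns there.

```python
from itertools import product

STATES = ['s0', 's1']
ACTIONS = ['a', 'b']
# TRANSITIONS[(state, action)] = list of (next_state, probability, reward)
TRANSITIONS = {
    ('s0', 'a'): [('s0', 1.0, 1.0)],
    ('s0', 'b'): [('s1', 1.0, 0.0)],
    ('s1', 'a'): [('s1', 1.0, 2.0)],
    ('s1', 'b'): [('s0', 1.0, 0.0)],
}
GAMMA, HORIZON = 0.9, 30

def value(policy, state, steps=HORIZON):
    """Expected discounted return of following `policy` from `state`."""
    if steps == 0:
        return 0.0
    total = 0.0
    for nxt, p, reward in TRANSITIONS[(state, policy[state])]:
        total += p * (reward + GAMMA * value(policy, nxt, steps - 1))
    return total

# "The best decision in a state" = the action the globally best policy takes there.
policies = [dict(zip(STATES, acts)) for acts in product(ACTIONS, repeat=len(STATES))]
best = max(policies, key=lambda pi: value(pi, 's0'))
print(best, round(value(best, 's0'), 2))  # hop to s1 once, then take the larger reward
```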
Here's a crack at the coin problem.
Firstly, TDT seems to answer correctly under one condition: if P(some agent will use my choice as evidence about how I am going to act in these situations and make this offer) = 0, then certainly our AI shouldn't give Omega any money. On the other hand, if P(some agent will use my choice as evidence about how I am going to act in these situations and make this offer) = 0.5, then the expected utility = -100 + 0.5 * (0.5 * (1,000,000) + 0.5 * (-100)). So my general solution is this: add a node that represents the probability of...
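Here is my own rendering of that expected-utility line as a function of the probability node the comment proposes (the function names are mine; the numbers are the comment's):

```python
def eu_pay(p, ask=100, prize=1_000_000):
    # Pay the current ask; with probability p there is a future encounter whose
    # coin is 50/50: tails yields the prize, heads costs another ask.
    return -ask + p * (0.5 * prize + 0.5 * (-ask))

def eu_refuse(p):
    # Refusing now also marks you as a refuser in any future encounter.
    return 0.0

for p in (0.0, 0.5, 1.0):
    print(p, eu_pay(p), eu_refuse(p))
# p = 0   -> paying loses (matches "don't give Omega any money")
# p = 0.5 -> eu_pay = -100 + 0.5*(0.5*1_000_000 + 0.5*(-100)) = 249875.0
```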
I THINK I SOLVED ONE - EDIT - Sorry, not quite.
..."Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says: "I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?" Obviously, the only reflectively consiste
...Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:
"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"
Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000"...
...Another stumper was presented to me by Robin Hanson at an OBLW meetup. Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. Let's say that six of them form a coalition and decide to vote to divide the pie among themselves, one-sixth each. But then two of them think, "Hey, this leaves four agents out in the cold. We'll get together with those four agents and offer them to divide half the pie among the four of them, leaving one quarter apiece for the two of us. We get a larger share than one-sixth that way...
"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"
Err... pardon my noobishness, but I am failing to see the game here. This is mostly me thinking out loud.
A less Omega version of this game involves flipping a coin, getting $100 on tails, losing $1 on heads. Using humans, it makes sense ...
If the ten pie-sharers is to be more than a theoretical puzzle, but something with applicability to real decision problems, then certain expansions of the problem suggest themselves. For example, some of the players might conspire to forcibly exclude the others entirely. And then a subset of the conspirators do the same.
This is the plot of "For a Few Dollars More".
How do criminals arrange these matters in real life?
Is this equivalent to the modified Newcomb's problem?
Omega looks at my code and produces a perfect copy of me which it puts in a separate room. One of us (decided by the toss of a coin if you like) is told, "if you put $1000 in the box, I will give $1000000 to your clone."
Once Omega tells us this, we know that putting $1000 in the box won't get us anything, but if we are the sort of person who puts $1000 in the box then we would have gotten $1000000 if we were the other clone.
What happens now if Omega is able to change my utility function? Mayb...
"Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver."
Can't you contract your way out of this one?
Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.
But I don't have a general theory which replies "Yes".
If you think being a rational agent includes an infinite ability to modify oneself, then the game has no solution because such an agent would b...
"I can predict that if (the other agent predicts) I choose strategy X, then the other agent will implement strategy Y, and my expected payoff is Z"
...are we allowed to use self-reference?
"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"
X = "if the other agent is trustworth...
In an undergraduate seminar on game theory I attended, it was mentioned in an answer to a question posed to the presenter that, when computing a payoff matrix, the headings in the rows and columns aren't individual actions, but are rather entire strategies; in other words it's as if you pretty much decide what you do in all circumstances at the beginning of the game. This is because when evaluating strategies nobody cares when you decide, so might as well act as if you had them all planned out in advance. So in that spirit, I'm going to use the following p...
For the "the coin came up tails, give me $1000 please" case, does it reduce to this?
"I can predict that if (the other agent predicts) I choose strategy X: for any gamble I'd want to enter if the consequences were not already determined, I will pay when i lose, then the other agent will implement strategy Y: letting me play, and my expected payoff is Z:999,000",
"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I ...
Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote....
...Every majority coalition and division of the pie, is dominated by another majority coalition in which each agent of the new majority gets more pie. There does not appear to be any such thing as a dominant majority vote.
I suggest offering the following deal at the outset:
"I offer each of you the opportunity to lobby for an open spot in a coalition with me, to split the pie equally six ways, formed with a mutual promise that we will not defect, an...
...Here's yet another problem whose proper formulation I'm still not sure of, and it runs as follows. First, consider the Prisoner's Dilemma. Informally, two timeless decision agents with common knowledge of the other's timeless decision agency, but no way to communicate or make binding commitments, will both Cooperate because they know that the other agent is in a similar epistemic state, running a similar decision algorithm, and will end up doing the same thing that they themselves do. In general, on the True Prisoner's Dilemma, facing an opponent who can accurately predict your own decisions...
I'm curious as to how Timeless Decision Theory compares to this proposal by Arntzenius: http://uspfiloanalitica.googlegroups.com/web/No+regrets+%28Arntzenius%29.pdf?gda=0NZxMVIAAABcaixQLRmTdJ3-x5P8Pt_4Hkp7WOGi_UK-R218IYNjsD-841aBU4P0EA-DnPgAJsNWGgOFCWv8fj8kNZ7_xJRIVeLt2muIgCMmECKmxvZ2j4IeqPHHCwbz-gobneSjMyE
Agents A & B are two TDT agents playing some prisoner's dilemma scenario. A can reason:
u(c(A)) = P(c(B))u(C,C) + P(d(B))u(C,D)
u(d(A)) = P(c(B))u(D,C) + P(d(B))u(D,D)
( u(X) is utility of X, P() is probability, c() & d() are cooperate & defect predicates )
A will always pick the option with higher utility, so it reasons B will do the same:
u'(c(B)) > u'(d(B)) --> c(B)
(u'() is A's estimate of B's utility function)
But A can't perfectly predict B (even though it may be quite good at it), so A can represent this uncertainty as a random ...
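A minimal sketch of the two expected-utility lines above, with A's uncertainty about B collapsed to a single fixed probability q (my own illustration; the payoff numbers and function name are mine, using standard PD payoffs). It shows why a fixed estimate isn't enough: defection dominates for every q, so whatever the comment goes on to propose has to make the estimate of B's action depend on A's own choice.

```python
# Payoffs to the row player: (C,C)=3, (C,D)=0, (D,C)=5, (D,D)=1.
U = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def best_reply(q):
    u_coop = q * U[('C', 'C')] + (1 - q) * U[('C', 'D')]    # u(c(A))
    u_defect = q * U[('D', 'C')] + (1 - q) * U[('D', 'D')]  # u(d(A))
    return 'C' if u_coop > u_defect else 'D'

for q in (0.0, 0.5, 0.9, 1.0):
    print(q, best_reply(q))   # 'D' every time: dominance, regardless of q
```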
I think I have a general theory that gives the "correct" answer to Omega problem here and Newcomb's problem.
The theory depends on the assumption that Omega makes his prediction by evaluating the decision of an accurate simulation of you (or does something computationally equivalent, which should be the same). In this case there are two of you, real-you and simulation-you. Since you are identical to your simulation the two of you can reasonably be assumed to share an identity and thus have common goals (presumably that real-you gets the money be...
My first thought on the coalition scenario is that the solution might hinge on something as simple as the agents deciding to avoid a stable equilibrium that does not terminate in anyone ending up with pie.
Edit: this seems to already have been discussed at length. That'll teach me to reply to year old threads without an adequate perusal of the preexisting comments.
..."I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"
Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes"...
I don't really wanna rock the boat here, but in the words of one of my professors, it "needs more math".
I predict it will go somewhat like this: you specify the problem in terms of A implies B, etc; you find out there's infinite recursion; you prove that the solution doesn't exist. Reductio ad absurdum anyone?
Instead of assuming that others will behave as a function of our choice, we look at the rest of the universe (including other sentient beings, including Omega) as a system where our own code is part of the data.
Given a prior on physics, there is a well defined code that maximizes our expected utility.
That code wins. It one boxes, it pays Omega when the coin falls on heads etc.
I think this solves the infinite regress problem, albeit in a very impractical way...
If you're an AI, you do not have to (and shouldn't) pay the first $1000; you can just self-modify to pay $1000 in all the following coin flips (if we assume that the AI can easily rewrite/modify its own behaviour in this way). Human brains probably don't have this capability, so I guess paying $1000 even in the first game makes sense.
I had a look at the existing literature. It seems as though the idea of a "rational agent" who takes one box goes quite a way back:
"Rationality, Dispositions, and the Newcomb Paradox" (Philosophical Studies, volume 88, number 1, October 1997)
Abstract: "In this article I point out two important ambiguities in the paradox. [...] I draw an analogy to Parfit's hitchhiker example which explains why some people are tempted to claim that taking only one box is rational. I go on to claim that although the ideal strategy is to adopt a nece...
Why can't Omega's coin-toss game be expressed formally, simply by using expected values?
n = expected number of further encounters with Omega's coin-toss game
EV(Yes) = -1000 + 0.5*n*(-1000) + 0.5*n*(1,000,000)
EV(No) = 0
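Filling in the arithmetic (my own check of the formula above; the function names are mine):

```python
def ev_yes(n):
    return -1000 + 0.5 * n * (-1000) + 0.5 * n * 1_000_000

def ev_no(n):
    return 0.0

# Each further encounter is worth 0.5*(-1000) + 0.5*1_000_000 = 499500 to a
# payer, so "Yes" breaks even at n = 1000/499500, roughly 0.002 encounters.
for n in (0, 0.002, 1):
    print(n, ev_yes(n), ev_no(n))
```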
On dividing the pie, I ran across this in an introduction to game theory class. I think the instructor wanted us to figure out that there's a regress and see how we dealt with it. Different groups did different things, but two members of my group wanted to be nice and not cut anyone out, so our collective behavior was not particularly rational. "It's not about being nice! It's about getting the points!" I kept saying, but at the time the group was about 16 (and so was I), and had varying math backgrounds, and some were less interested in that aspect of the game.
I think at least one group realized there would always be a way to undermine the coalitions that assembled, and cut everyone in equally.
Here's my argument for why (ignoring the simulation arguments) you should actually refuse to give Omega money.
Here's what actually happened:
Omega flipped a fair coin. If it came up heads, the stated conversation happened. If it came up tails and Omega predicted that you would have given him $1000, he stole $1,000,000 from you.
If you have a policy of paying, you earn 10^6/4 - 10^3/4 - 10^6/2 = -$250,250 in expectation. If you have a policy of not paying, you get 0.
More realistically having a policy of paying Omega in such a situation could earn or lose you money if peo...
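Just to check the arithmetic in the comment above (my own sketch of its adversarial variant; the function name and parameters are mine):

```python
# With probability 1/2 the stated game happens (its own fair coin, so 1/4
# heads and 1/4 tails overall); with probability 1/2 a paying policy is
# instead robbed of the prize.

def ev(pays, ask=1000, prize=1_000_000):
    if not pays:
        return 0.0
    return 0.25 * prize - 0.25 * ask - 0.5 * prize

print(ev(True))    # -250250.0, as in the comment
print(ev(False))   # 0.0
```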
...At the point where Omega asks me this question, I already know that the coin came up heads, so I already know I'm not going to get the million. It seems like I want to decide "as if" I don't know whether the coin came up heads or tails, and then implement that decision even if I know the coin came up heads. But I don't have a good formal way of talking about how my decision in one state of knowledge has to be determined by the decision I would make if I occupied a different epistemic state, conditioning using the probability previously possessed by events I have since learned the outcome of...
I stopped reading at "Yes, you say". The correct solution is obviously obvious: you give him your credit card and promise to tell him the PIN once you're at the ATM.
You could also try to knock him off his bike.
"Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. "
Well then, the statistical expected (average) share any agent is going to get long-term is 1/10th of the pie. The simplest solution that ensures this is the equal division; anticipating this from the start cuts down on negotiation costs, and if a majority agrees to follow this strategy (i.e. agrees not to realize more than their "share"), it is also stable - anyone who ponders upsetting it risks being the "odd man out" who eats ...
Suppose you're out in the desert, running out of water, and soon to die - when someone in a motor vehicle drives up next to you. Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions. The driver says, "Well, I'll convey you to town if it's in my interest to do so - so will you give me $100 from an ATM when we reach town?"
Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver. "Yes," you say. "You're lying," says the driver, and drives off leaving you to die.
If only you weren't so rational!
This is the dilemma of Parfit's Hitchhiker, and the above is the standard resolution according to mainstream philosophy's causal decision theory, which also two-boxes on Newcomb's Problem and defects in the Prisoner's Dilemma. Of course, any self-modifying agent who expects to face such problems - in general, or in particular - will soon self-modify into an agent that doesn't regret its "rationality" so much. So from the perspective of a self-modifying-AI-theorist, classical causal decision theory is a wash. And indeed I've worked out a theory, tentatively labeled "timeless decision theory", which covers these three Newcomblike problems and delivers a first-order answer that is already reflectively consistent, without need to explicitly consider such notions as "precommitment". Unfortunately this "timeless decision theory" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.
However, there are some other timeless decision problems for which I do not possess a general theory.
For example, there's a problem introduced to me by Gary Drescher's marvelous Good and Real (OOPS: The below formulation was independently invented by Vladimir Nesov; Drescher's book actually contains a related dilemma in which box B is transparent, and only contains $1M if Omega predicts you will one-box whether B appears full or empty, and Omega has a 1% error rate) which runs as follows:
Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:
"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"
Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.
But I don't have a general theory which replies "Yes". At the point where Omega asks me this question, I already know that the coin came up heads, so I already know I'm not going to get the million. It seems like I want to decide "as if" I don't know whether the coin came up heads or tails, and then implement that decision even if I know the coin came up heads. But I don't have a good formal way of talking about how my decision in one state of knowledge has to be determined by the decision I would make if I occupied a different epistemic state, conditioning using the probability previously possessed by events I have since learned the outcome of... Again, it's easy to talk informally about why you have to reply "Yes" in this case, but that's not the same as being able to exhibit a general algorithm.
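Not the general theory the paragraph asks for - just a numerical illustration of what evaluating the two policies from the pre-flip epistemic state looks like, assuming Omega's prediction simply tracks the policy you run (the function name and parameters are mine):

```python
# Evaluate each policy from the pre-flip epistemic state (the coin is still
# 50/50), rather than from the post-observation state where heads is known.

def ev_of_policy(pays_on_heads, ask=1000, prize=1_000_000):
    heads_payoff = -ask if pays_on_heads else 0    # asked for the $1000
    tails_payoff = prize if pays_on_heads else 0   # paid iff predicted to pay
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(ev_of_policy(True))    # 499500.0
print(ev_of_policy(False))   # 0.0
```

The paying policy wins ex ante; the open problem stated above is a general rule that makes the post-heads decision answer to that pre-flip evaluation.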
Another stumper was presented to me by Robin Hanson at an OBLW meetup. Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. Let's say that six of them form a coalition and decide to vote to divide the pie among themselves, one-sixth each. But then two of them think, "Hey, this leaves four agents out in the cold. We'll get together with those four agents and offer them to divide half the pie among the four of them, leaving one quarter apiece for the two of us. We get a larger share than one-sixth that way, and they get a larger share than zero, so it's an improvement from the perspectives of all six of us - they should take the deal." And those six then form a new coalition and redivide the pie. Then another two of the agents think: "The two of us are getting one-eighth apiece, while four other agents are getting zero - we should form a coalition with them, and by majority vote, give each of us one-sixth."
And so it goes on: Every majority coalition and division of the pie, is dominated by another majority coalition in which each agent of the new majority gets more pie. There does not appear to be any such thing as a dominant majority vote.
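Here is a toy walk-through of that regress (my own sketch; the rule for which dominating coalition forms next is one arbitrary choice among many): at every step the six currently-poorest agents form a majority, keep their old shares, and split the leftover pie among themselves, strictly improving every member.

```python
# Ten agents, pie of size 1: every allocation chosen by a majority is
# dominated by another majority that strictly improves all of its members.

def dominate(allocation):
    # The six poorest agents hold at most 0.6 of the pie between them, so the
    # remainder is a strictly positive surplus they can split on top of their
    # old shares - a strict improvement for each of the six.
    members = sorted(range(10), key=lambda i: allocation[i])[:6]
    surplus = 1.0 - sum(allocation[i] for i in members)
    new = [0.0] * 10
    for i in members:
        new[i] = allocation[i] + surplus / 6
    assert all(new[i] > allocation[i] for i in members)
    return new

allocation = [1 / 6] * 6 + [0.0] * 4    # the initial six-way coalition
for step in range(5):
    allocation = dominate(allocation)
    print(step, [round(x, 3) for x in allocation])
# The loop never settles: whoever is left out can always be recruited into a
# new majority that beats the current division.
```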
(Robin Hanson actually used this to suggest that if you set up a Constitution which governs a society of humans and AIs, the AIs will be unable to conspire among themselves to change the constitution and leave the humans out in the cold, because then the new compact would be dominated by yet other compacts and there would be chaos, and therefore any constitution stays in place forever. Or something along those lines. Needless to say, I do not intend to rely on such, but it would be nice to have a formal theory in hand which shows how ideal reflectively consistent decision agents will act in such cases (so we can prove they'll shed the old "constitution" like used snakeskin.))
Here's yet another problem whose proper formulation I'm still not sure of, and it runs as follows. First, consider the Prisoner's Dilemma. Informally, two timeless decision agents with common knowledge of the other's timeless decision agency, but no way to communicate or make binding commitments, will both Cooperate because they know that the other agent is in a similar epistemic state, running a similar decision algorithm, and will end up doing the same thing that they themselves do. In general, on the True Prisoner's Dilemma, facing an opponent who can accurately predict your own decisions, you want to cooperate only if the other agent will cooperate if and only if they predict that you will cooperate. And the other agent is reasoning similarly: They want to cooperate only if you will cooperate if and only if you accurately predict that they will cooperate.
But there's actually an infinite regress here which is being glossed over - you won't cooperate just because you predict that they will cooperate, you will only cooperate if you predict they will cooperate if and only if you cooperate. So the other agent needs to cooperate if they predict that you will cooperate if you predict that they will cooperate... (...only if they predict that you will cooperate, etcetera).
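As a throwaway illustration of why the naive reading never bottoms out (my own sketch, not anything from the post): if each agent's decision procedure literally runs the other's to see what it will do, neither call ever returns.

```python
import sys
sys.setrecursionlimit(100)   # keep the inevitable blow-up small

def agent_a():
    # A cooperates iff its simulation of B cooperates...
    return 'C' if agent_b() == 'C' else 'D'

def agent_b():
    # ...but B's code does the same with a simulation of A.
    return 'C' if agent_a() == 'C' else 'D'

try:
    agent_a()
except RecursionError:
    print("infinite regress: the mutual simulation never terminates")
```

The symmetry argument in the next paragraph escapes this by replacing "simulate the other" with "note that the other is the mirror image of me", which is exactly the move that is unavailable once the problem is asymmetric.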
On the Prisoner's Dilemma in particular, this infinite regress can be cut short by expecting that the other agent is doing symmetrical reasoning on a symmetrical problem and will come to a symmetrical conclusion, so that you can expect their action to be the symmetrical analogue of your own - in which case (C, C) is preferable to (D, D). But what if you're facing a more general decision problem, with many agents having asymmetrical choices, and everyone wants to have their decisions depend on how they predict that other agents' decisions depend on their own predicted decisions? Is there a general way of resolving the regress?
On Parfit's Hitchhiker and Newcomb's Problem, we're told how the other behaves as a direct function of our own predicted decision - Omega rewards you if you (are predicted to) one-box, the driver in Parfit's Hitchhiker saves you if you (are predicted to) pay $100 on reaching the city. My timeless decision theory only functions in cases where the other agents' decisions can be viewed as functions of one argument, that argument being your own choice in that particular case - either by specification (as in Newcomb's Problem) or by symmetry (as in the Prisoner's Dilemma). If their decision is allowed to depend on how your decision depends on their decision - like saying, "I'll cooperate, not 'if the other agent cooperates', but only if the other agent cooperates if and only if I cooperate - if I predict the other agent to cooperate unconditionally, then I'll just defect" - then in general I do not know how to resolve the resulting infinite regress of conditionality, except in the special case of predictable symmetry.
You perceive that there is a definite note of "timelessness" in all these problems.
Any offered solution may assume that a timeless decision theory for direct cases already exists - that is, if you can reduce the problem to one of "I can predict that if (the other agent predicts) I choose strategy X, then the other agent will implement strategy Y, and my expected payoff is Z", then I already have a reflectively consistent solution which this margin is unfortunately too small to contain.
(In case you're wondering, I'm writing this up because one of the SIAI Summer Project people asked if there was any Friendly AI problem that could be modularized and handed off and potentially written up afterward, and the answer to this is almost always "No", but this is actually the one exception that I can think of. (Anyone actually taking a shot at this should probably familiarize themselves with the existing literature on Newcomblike problems - the edited volume "Paradoxes of Rationality and Cooperation" should be a sufficient start (and I believe there's a copy at the SIAI Summer Project house.)))