All of Strilanc's Comments + Replies

Strilanc100

I like this quote by Stephen Hawking from one of his answers:

The real risk with AI isn’t malice but competence.

Rot13: Svfure–Farqrpbe qvfgevohgvba

Why would you assume that? That's like saying by the time we can manufacture a better engine we'll be able to replace a running one with the new design.

For example, evolution has optimized and delivered a mechanism for turning gene edits in a fertilized egg into a developed brain. It has not done the same for incorporating after-the-fact edits into an existing brain. So in the adult case we have to do an extra giga-evolve-years of optimization before it works.

-1Shmi
First, assume != suspect (the latter is what I said). Second, gene therapy is already a reality.

Could you convert the tables into graphs, please? It's much harder to see trends in lists of numbers.

Another possible hypothesis could be satiation. When I first read Wikipedia, it dragged me into hours-long recursive article reading. Over time I've read more and more of the articles I find interesting, so any given article links to fewer unread interesting articles. Wikipedia has essentially developed a herd immunity against me. Maybe that pattern holds over the population, with the dwindling infectiousness overcoming new readers?

On second thought, I'm not sure that works at all. I guess you could check the historical probability of following to another article.

"reality is a projection of our minds and magic is ways to concentrate and focus the mind" is too non-reductionist of an explanation. It moves the mystery inside another mystery, instead of actually explaining it.

For example: in this universe minds seem to be made out of brains. But if reality is just a projection of minds, then... brains are made out of minds? So minds are made out of minds? So where does the process hit bottom? Or are we saying existence is just a fractal of minds made out of minds made out of minds all the way down?

1[anonymous]
In this universe, yes, but if a magical universe is roughly like what people hundreds of years ago thought our universe was, then the brain can simply be a radio receiver getting messages from a ghostly soul.

Hm, my take-away from the end of the chapter was a sad feeling that Quirrell simply failed at or lied about getting both houses to win.

4yaeiou
Quirrell was averse to outright lies, so at this point I think he failed.
Velorien120

Failed, I think. As of 104, it looked like his Christmas plots were all going to succeed - the Ravenclaws and Slytherins were in the process of tying for the Cup, and raising the popularity of Harry's anti-snitch proposal in doing so.

It is only the revelation that "Professor Quirrell had gone out to face the Dark Lord and died for it, You-Know-Who had returned and died again, Professor Quirrell was dead, he was dead", which Quirrell would not have planned around, that threw a spanner in the works by motivating the Slytherins to seek outright victory.

The 2014 LW survey results mentioned something about being consistent with a finger-length/feminism connection. Maybe that counts?

Some diseases impact both reasoning and appearance. Gender impacts both appearance and behavior. You clearly get some information from appearance, but it's going to be noisy and less useful than what you'd get by just asking a few questions.

0gedymin
...assuming the replies are truthful.

There's a Radiolab episode about blame that touches on this subject. They talk about, for example, people with brain damage not being blamed for their crimes (because they "didn't have a choice"). They also have a guest trying to explain why legal punishment should be based on modelling probabilities of recidivism. One of the hosts usually plays (is?) the "there is cosmic blame/justice/choice" position you're describing.

[anonymous]150

I have a nasty hunch that one of the social functions of punishment is to prevent personal revenge. If it is not harsh enough, victims or their relatives may want to take it into their own hands. Vendetta or Girardian "mimetic violence" is AFAIK something deeply embedded into history, and AFAIK it went away only when governments basically said "hey, you don't need to kill your sister's rapist, I will kill him for you and call it a justice system". And that consideration has not much to do with recidivism. Rather, the point here is to prevent f... (read more)

Well, yeah. The particular case I had in mind was exploiting partial+ordered transfiguration to lobotomize/brain-acid the death eaters, and I grant that that has practical problems.

But I found myself thinking about using patronus and other complicated things to take down LV after, instead of exploiting weak spells being made effective by the resonance. So I put the idea out there.

If I may quote from my post:

Assuming you can take down the death eaters, I think the correct follow-up

and:

LV is way up high, too far away to have good accuracy with a handgun.

I made my suggestion.

Assuming you can take down the death eaters, I think the correct follow-up for despawning LV is... massed somnium.

We've seen somnium be effective at range in the past, taking down an actively dodging broomstick rider. We've seen the resonance hit LV harder than Harry, requiring tens of minutes to recover versus seconds.

LV is not wearing medieval armor to block the somnium. LV is way up high, too far away to have good accuracy with a handgun. If LV dodges behind something, Harry has time to expecto patronum a message out.

... I think the main risk is LV apparating away, apparating back directly behind Harry, and pulling the trigger.

4TylerJay
"Stuporfy" would probably be the better option here. Yes, it's visible, but LV doesn't know about swerving stunners, since Flitwick never demonstrated it in public. It's probably the best chance Harry has of triggering a resonance by casting a spell, assuming he can fire one off.
1Jost
That’s one heck of an assumption … In addition, you’re making the implicit assumption that LV will not react to Harry taking down the Death Eaters, which is an interesting assumption, as well.
3noahpocalypse
I think you forgot the 37(?) Death Eaters pointing their wands at Harry. You also forgot Voldie's famed reflexes, and a bullet definitely goes faster than a spell.
1Nornagest
Against a person-sized target, if its user is a decent shot, your average modern handgun is accurate to about twenty-five meters. Voldemort probably isn't that far away, and I'd expect him to know what he's doing. He's shooting one-handed, and left-handed at that, but I wouldn't rely on that. On the other hand, when he was laying out his plan, he was going to have one of his mooks shoot Harry. That's unlike him, and it might point to him still being bound by his Riddle curse, or to enough caution over unintended consequences to take the gun out of play for the moment.

Dumbledore is a side character. He needed to be got rid of, so neither Harry nor the reader would expect or hope for Dumbledore to show up at the last minute and save the day.

There's technically six more hours of story time for a time-turned Dumbledore to show up, before going on to get trapped. He does mention that he's in two places during the mirror scene.

Dumbledore has previously stated that trying to fake situations goes terribly wrong, so there could be some interesting play with that concept and him being trapped by the mirror.

8[anonymous]
Mirror!Dumbledore appears to not be time-turned: 110 was edited so that Dumbledore says: That doesn't sound like he just spun back - it sounds like there might be more than one Dumbledore running around.

Sorry for getting that one wrong (I can only say that it's an unfortunately confusing name).

Your claim is that AGI programs have large min-length-plus-log-of-running-time complexity.

I think you need more justification for this being a useful analogy for how AGI is hard. It would also help to clarify the distinction between problems that get harder as their instances get longer (for any program) and single programs that take a lot of space to specify, so the two notions don't get mixed.

Unless we're dealing with things like the Ackermann function or Ramsey numbers, the log-of-running-time ... (read more)

0Squark
The complexity I'm referring to is the complexity of the AGI code, i.e. the length + log-running time of a hypothetical program producing the AGI code. This wouldn't quite work, since the "AGI" would then be penalized by a very long bootstrap. Since the measure of intelligence takes resource constraints into account, such an "AGI" wouldn't be very intelligent: it wouldn't count as an actual AGI.
Strilanc100

Kolmogorov complexity is not (closely) related to NP completeness. Random sequences maximize Kolmogorov complexity but are trivial to produce. 3-SAT solvers have tiny Kolmogorov complexity despite their exponential worst case performance.

I also object to thinking of intelligence as "being NP-Complete", unless you mean that incremental improvements in intelligence should take longer and longer (growing at a super-polynomial rate). When talking about achieving a fixed level of intelligence, complexity theory is a bad analogy. Kolmogorov complexity is also a bad analogy here because we want any solution, not the shortest solution.

3Squark
Thx for commenting! I'm talking about Levin-Kolmogorov complexity. The LK complexity of x is defined to be the minimum over all programs p producing x of log(T(p)) + L(p) where T(p) is the execution time of p and L(p) is the length of p.
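Restating that definition as a formula (this is the standard Levin complexity; U here is a fixed universal machine, a symbol I'm introducing rather than quoting):

```latex
Kt(x) \;=\; \min_{p \,:\, U(p) \,=\, x} \bigl( L(p) + \log_2 T(p) \bigr)
```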

I would say cos is simpler than sin because its Taylor series has a factor of x knocked off.

In practice they tend to show up together, though. Often you can replace the pair with something like e^(i x), so maybe that should be considered the simplest.
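For reference, the series and identity being alluded to:

```latex
\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots,
\qquad
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,
\qquad
e^{ix} = \cos x + i\sin x .
```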

Here's another interesting example.

Suppose you're going to observe Y in order to infer some parameter X. You know that P(x=c | y) = 1/2^(c-y).

  • You set your prior to P(x=c) = 1 for all c. Very improper.
  • You make an observation, y=1.
  • You update: P(x=c) = 1/2^(c-1)
  • You can now normalize P(x) so its area under the curve is 1.
  • You could have done that, regardless of what you observed y to be. Your posterior is guaranteed to be well formed.

You get well formed probabilities out of this process. It converges to the same result that Bayesianism does as more obse... (read more)
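A minimal numerical sketch of that recipe (my own illustration; it assumes x ranges over the integers c >= y and reads the quoted formula as the likelihood of the observation):

```python
def normalized_posterior(y, c_max=60):
    # Improper flat prior over integers c >= y, likelihood 2^-(c - y).
    # (Assumption: the formula in the comment is the likelihood of seeing y given x = c.)
    cs = range(y, y + c_max)
    prior = {c: 1.0 for c in cs}                           # improper: every weight is 1
    unnorm = {c: prior[c] * 2.0 ** -(c - y) for c in cs}   # multiply by the likelihood
    z = sum(unnorm.values())                               # finite, so normalizing works
    return {c: w / z for c, w in unnorm.items()}

post = normalized_posterior(y=1)
print(post[1], post[2], post[3])  # ~0.5, ~0.25, ~0.125 -- a well-formed posterior
```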

I did notice that they were spending the whole time debating a definition, and that the article failed to get to any consequences.

I think that existing policies are written in terms of "broadband", perhaps such as benefits to ISPs based on how many customers have access to broadband? That would make it a debate about conditions for subsidies, minimum service requirements, and the wording of advertising.

2SilentCal
Yeah, my conclusion is that there isn't really enough information in the article to be sure, and I haven't done the external research. But asking the right question is the important part.

Hrm... reading the paper, it does look like NL1 goes from |a> to |cd> instead of |c> + |d>. This is going to move all the numbers around, but you'll still find that it works as a bomb detector. The yellow coming out of the left non-interacting-with-bomb path only interferes with the yellow from the right-and-mid path when the bomb is a dud.

Just to be sure, I tried my hand at converting it into a logic circuit. Here's what I get:

Having it create both the red and yellow photon, instead of either-or, seems to have improved its function as a bomb t... (read more)

0Luke_A_Somers
I never said it wouldn't. I agreed up front that this would detect a bomb without interacting with it 50% of the time. It's a minimally-functional bomb-tester, and the way you would optimize it is by layering the original bomb-testing apparatus over this apparatus. The two effects are pretty much completely orthogonal. ETA: Did you just downvote my half of this whole comment chain? Am I actually wrong? If not, it appears that you're frustrated that I'm reaching the right answer much more easily than you, which just seems petty. Also, these are not nit-picks. You were setting the problem up entirely wrong.

A live bomb triggers nothing when the photon takes the left leg (50% chance), gets converted into red instead of yellow (50% chance), and gets reflected out.

An exploded bomb triggers g or h because I assumed the photon kept going. That is to say, I modeled the bomb as a controlled-not gate with the photon passing by being the control. This has no effect on how well the bomb tester works, since we only care about the ratio of live-to-dud bombs for each outcome. You can collapse all the exploded-and-triggered cases into just "exploded" if you like.

-2Luke_A_Somers
The photon does not get converted into red OR yellow. It gets converted into red AND yellow.

Okay, I've gone through all the work of checking if this actually works as a bomb tester. What I found is that you can use the camera to remove more dud bombs than live bombs, but it does worse than the trivial bomb tester.

So I was wrong when I said you could use it as a drop-in replacement. You have to be aware that you're getting less evidence per trial, and so the tradeoffs for doing another pass are higher (since you lose half of the good bombs with every pass with both the camera and the trivial bomb tester; better bomb testers can lose fewer bombs pe... (read more)

-2Luke_A_Somers
I do not see a way that a live bomb can trigger nothing, or for an exploded bomb to trigger either g or h.

Well...

The bomb tester does have a more stringent restriction than the camera. The framing of the problems is certainly different. They even have differing goals, which affect how you would improve the process (e.g. you can use Grover's search algorithm to make the bomb tester more effective, but I don't think it matters for the camera; maybe it would make it more efficient?)

BUT you could literally use their camera as a drop-in replacement for the simplest type of bomb tester, and vice versa. Both are using an interferometer. Both want to distinguish betwee... (read more)

-1Luke_A_Somers
If you used their current camera as a bomb tester, it would blow up 50% of the time.

I think this is just a more-involved version of the Elitzur-Vaidman bomb tester. The main difference seems to be that they're going out of their way to make sure the photons that interact with the object are at a different frequency.

The quantum bomb tester works by relying on the fact that the two arms interfere with each other to prevent one of the detectors from going off. But if there's a measure-like interaction on one arm, that cancelled-out detector starts clicking. The "magic" is that it can click even when the interaction doesn't occur. (... (read more)
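For concreteness, here is a minimal amplitude-bookkeeping sketch of the basic two-arm bomb-tester interferometer (my own illustration, not from the thread; it assumes the convention, quoted further below, that reflection at a 50:50 splitter adds a factor of i):

```python
import math

S = 1 / math.sqrt(2)

def beam_splitter(upper, lower):
    # 50/50 splitter: the transmitted amplitude keeps its phase,
    # the reflected amplitude picks up a factor of i.
    return (S * upper + 1j * S * lower,
            1j * S * upper + S * lower)

def run(bomb_is_live):
    # The photon enters the first splitter through the upper port.
    u, l = beam_splitter(1.0, 0.0)
    p_explode = 0.0
    if bomb_is_live:
        # A live bomb in the lower arm measures (absorbs) that amplitude.
        p_explode = abs(l) ** 2
        l = 0.0
    # The second splitter recombines the arms; its two outputs feed the detectors.
    dark, bright = beam_splitter(u, l)
    return p_explode, abs(dark) ** 2, abs(bright) ** 2

print(run(bomb_is_live=False))  # dud : (0.0, 0.0, 1.0)    -- the "dark" detector never clicks
print(run(bomb_is_live=True))   # live: ~(0.5, 0.25, 0.25) -- a dark click certifies a live bomb without interaction
```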

3Luke_A_Somers
This is quite different from the bomb tester. In the bomb tester, all the effort is put into minimizing the interaction with the object, ideally having a vanishing chance of the 'bomb' experiencing a single interaction. Here, the effort is put into avoiding interacting with the object with the particular photons that made it to the camera, but the imaged object gets lots and lots of interactions. Zero effort has been made to avoid interacting with the object, or even to reduce the interaction with the object. Here, the idea is to gather the information in one channel and lever it into another channel. You could, I suppose, combine the two techniques, but I don't really see the point.

This is an attempt at a “plain Jane” presentation of the results discussed in the recent arxiv paper

... [No concrete example given] ...

Urgh...

  • Write the password down on paper and keep that paper somewhere safe.
  • Practice typing it in. Practice writing it down. Practice singing it in your head.
  • Set things up so you have to enter it periodically.

A concrete example of a paper using the add-i-to-reflected-part type of beam splitter is the "Quantum Cheshire Cats" paper:

A simple way to prepare such a state is to send a horizontally polarized photon towards a 50:50 beam splitter, as depicted in Fig. 1. The state after the beam splitter is |Psi>, with |L> now denoting the left arm and |R> the right arm; the reflected beam acquires a relative phase factor i.
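In ket notation, that convention is just (restating the quoted sentence):

```latex
|\text{in}\rangle \;\longrightarrow\; \tfrac{1}{\sqrt{2}}\bigl(|\text{transmitted}\rangle + i\,|\text{reflected}\rangle\bigr)
```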

The figure from the paper:

I also translated the optical system into a similar quantum logic circuit:

Note that I also included ... (read more)

Possible analogy: Was molding the evolutionary path of wolves, so they turned into dogs that serve us, unethical? Should we stop?

2Jiro
I think if it's fair to eat animals, it's fair to make them into creatures that serve us.

Wait, I had the impression that this community had come to the consensus that SIA vs SSA is a problem along the lines of "If a tree falls in the woods and no one's around, does it make a sound?" It finds an ambiguity in what we mean by "probability", and forces us to grapple with it.

In fact, there's a well-upvoted post with exactly that content.

The Bayesian definition of "probability" is essentially just a number you use in decision making algorithms constrained to satisfy certain optimality criteria. The optimal number to u... (read more)

3Kindly
I agree that thinking about payoffs is obviously correct, and ideally anyone talking about SIA and SSA should also keep this in the back of their heads. That doesn't make anthropic assumptions useless, for the following two reasons: 1. They give the correct answer for some natural payoff structures. 2. They are friendlier to our intuitive ideas of how probability should work. I don't actually think that they're worth the effort, but that's a just a question of presentation. In any case, the particular choice of anthropic language is less important than engaging with the thesis, though the particular avenue of engagement may be along the lines of "SIA is inappropriate for the kind of payoffs involved in the Doomsday Argument, because..."
0Brian_Tomasik
I don't think the question pits SSA against SIA; rather, it concerns what SIA itself implies. But I think my argument was wrong, and I've edited the top-level post to explain why.
Strilanc140

I would be happy to prove my "faith" in science by ingesting poison after I'd taken an antidote proven to work in clinical trials.

This is one of the things James Randi is known for. He'll take a "fatal" dose of homeopathic sleeping pills during talks (e.g. his TED talk) as a way of showing they don't work.

8Basil Marte
The belief that overdosing on sleeping pills is fatal comes from barbiturate medications, while modern pills contain benzodiazepines such as diazepam. Modern sleeping pills are pretty easy to get exactly because even if someone downs the whole bottle, they don't die, only go to deep unconsciousness, i.e. "knockout sleep" (physical stimuli, such as shaking the patient, don't wake them up) that possibly lasts several days. Thus if James Randi took a fatal-by-barbiturate-standards dose of benzodiazepine sleeping pills, then (after he woke up) he would conclude that the pills didn't work because he didn't die. This is not to say that benzodiazepine pills are completely safe. (This is to be expected from anything that messes with the central nervous system and basic regulation.) Of most practical relevance is the crossreaction with alcohol; combining drunkenness with benzodiazepine overdose is very much fatal. Unfortunately, mild alcohol consumption plus a standard dose can fairly reliably trigger "knockout sleep", making the combination an easily-used party/rape drug. (If this is a floating belief, keep it as such; a.k.a. do not try this at home.)

I am pretty sick of 1% being given as the natural lowest error rate of humans on anything. It's not.

Hmm. Our error rate moment to moment may be that high, but it's low enough that we can do error correction and do better over time or as a group. Not sure why I didn't realize that until now.

(If the error rate was too high, error correction would be so error-prone it would just introduce more error. Something analogous happens in quantum error correction codes).
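A toy illustration of that threshold effect (my own sketch, not from the thread): majority-voting three independent attempts only helps when each attempt is already right more often than not.

```python
def majority_of_three_error(p):
    # Probability that at least two of three independent attempts,
    # each wrong with probability p, are wrong.
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.3, 0.5, 0.6):
    # Correction beats a single attempt only when p < 1/2.
    print(f"p={p:.2f}  single={p:.3f}  majority-of-3={majority_of_three_error(p):.3f}")
```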

Oh, so M is not a stock-market-optimizer; it's a verify-that-stock-market-gets-optimized-er.

I'm not sure how this differs from a person just asking the AI if it will optimize the stock market. The same issues with deception apply: the AI realizes that M will shut it off, so it tells M the stock market will totally get super optimized. If you can force it to tell M the truth, then you could just do the same thing to force it to tell you the truth directly. M is perhaps making things more convenient, but I don't think it's solving any of the hard problems.

It's extremely premature to leap to the conclusion that consciousness is some sort of unobservable opaque fact. In particular, we don't know the mechanics of what's going on in the brain as you understand and say "I am conscious". We have to at least look for the causes of these effects where they're most likely to be, before concluding that they are causeless.

People don't even have a good definition of consciousness that cleanly separates it from nearby concepts like introspection or self-awareness in terms of observable effects. The lack of obs... (read more)

2Capla
Premature to leap to conclusions? Absolutely. Premature to ask questions? I don't think so. Premature to acknowledge foreseen obstacles? Perhaps. We really do have little information about how the brain works and how a brain creates a mind. Speculation before we have data may not be very useful. I want to underscore how skeptical I am of drawing conclusions about the world on the basis of thought alone. Philosophy is not an effective method for finding truth. The pronouncements by philosophers of what is "necessary" are more often than not shown to be fallacious bordering on the absurd, once scientists get to the problem. Science's track record of proving the presumed-to-be-unprovable is fantastic. Yet, knowing this, the line of inquiry still seems to present problems, a priori. How could we know if an AI is conscious? We could look for signs of consciousness, or structural details that always (or even frequently) accompany consciousness. But in order to identify those features we need to assume what we are trying to prove. Is this specific problem clear? That is what I want to know about. I am in no way suggesting that consciousness is causeless (which seems somewhat absurd to me), only that there is an essential difficulty in discovering the cause. I heartily recommend that we look. I am ABSOLUTELY not suggesting that we should give up on trying to understand the nature of mind, and especially with the scientific method. However, my faulty a priori reasoning foresees a limitation in our empirical methods, which have a much better track record. When the empirical methods exceed my expectation, I'll update and abandon my a priori reasoning since I know that it is far less reliable (though I would want to know what was wrong with my reasoning). Until the empirical methods come through for me, I make a weak prediction that they will fail, in this instance, and am asking others to enlighten me about my (knowingly faulty) a priori reasoning. I apologize if I'm belaborin
Strilanc120

... wait, what? You can equate predicates of predicates but not predicates?!

(Two hours later)

Well, I'll be damned...

3IlyaShpitser
The key here is the halting requirement. The other stuff is red herrings.
2Gunnar_Zarncke
Inconceivable, isn't it? Extra points for actually implementing it.

What are other examples of possible motivating beliefs? I find the examples of morals incredibly non-convincing (as in actively convincing me of the opposite position).

Here's a few examples I think might count. They aren't universal, but they do affect humans:

  • Realizing neg-entropy is going to run out and the universe will end. An agent trying to maximize average-utility-over-time might treat this as a proof that the average is independent of its actions, so that it assigns a constant eventual average utility to all possible actions (meaning what it does

... (read more)
1KatjaGrace
Good question. Some of these seem to me like a change in instrumental goals only. If you meant to include such things, then there are very many examples - e.g. if I learn I am out of milk then my instrumental goal of opening the fridge is undermined.

For instance, if anything dangerous approached the AIXI's location, the human could lower the AIXI's reward, until it became very effective at deflecting danger. The more variety of things that could potentially threaten the AIXI, the more likely it is to construct plans of actions that contain behaviours that look a lot like "defend myself." [...]

It seems like you're just hardcoding the behavior, trying to get a human to cover all the cases for AIXI instead of modifying AIXI to deal with the general problem itself.

I get that you're hoping it ... (read more)

2Stuart_Armstrong
Very valid point.

Anthropically forcing the world to have particular laws of physics by more effectively killing yourself if it doesn't seems... counter-productive to maximizing how much you know about the world. I'm also not sure how you can avoid disproving MWI by simply going to sleep, if you're going to accept that sort of evidence.

(Plus quantum suicide only has to keep you on the border of death. You can still end up as an eternally suffering almost-dying mentally broken husk of a being. In fact, those outcomes are probably far more likely than the ones where twenty guns misfire twenty times in a row.)

0DanielLC
It's quite a bit less likely, but if quantum immortality changes the past (when you're on the border of life and death, it's clear the gun didn't misfire), then it would just keep you from running the experiment in the first place.
Strilanc200

I find Eliezer's insistence about Many-Worlds a bit odd, given how much he hammers on "What do you expect differently?". Your expectations from many-worlds are identical to those from pilot-wave, so...

I'm probably misunderstanding or simplifying his position, e.g. there are definitely calculational and intuition advantages to using one vs the other, but that seems a bit inconsistent to me.

1travisrm89
There is at least one situation in which you might expect something different under MWI than under pilot-wave: quantum suicide. If you rig a gun so that it kills you if a photon passes through a half-silvered mirror, then under MWI (and some possibly reasonable assumptions about consciousness) you would expect the photon to never pass through the mirror no matter how many experiments you perform, but under pilot-wave you would expect to be dead after the first few experiments.

I take Eliezer's position on MWI to be pretty well expressed by this quote from David Wallace:

[...] there is no quantum measurement problem.

I do not mean by this that the apparent paradoxes of quantum mechanics arise because we fail to recognize 'that quantum theory does not represent physical reality' (Fuchs and Peres 2000a). Quantum theory describes reality just fine, like any other scientific theory worth taking seriously: describing (and explaining) reality is what the scientific enterprise is about...

What I mean is that there is actually no conflict

... (read more)
Shmi100

I'm probably misunderstanding or simplifying his position

You really aren't. His logic is literally "it's simpler, therefore it's right" and "we don't need collapse (or anything else), decoherence is enough". To be fair, plenty of experts in theoretical physics hold the same view, most notably Deutsch and Carroll.

Strilanc150

Is there an existing post on people's tendency to be confused by explanations that don't include a smaller version of what's being explained?

For example, confusion over the fact that "nothing touches" in quantum mechanics seems common. Instead of being satisfied by the fact that the low-level phenomena (repulsive forces and the Pauli exclusion principle) didn't assume the high-level phenomena (intersecting surfaces), people seem to want the low-level phenomena to be an aggregate version of the high-level phenomena. Explaining something without us... (read more)

1Luke_A_Somers
The one thing missing from that video (at least up to 4:23 when I got frustrated - and he had explicitly disclaimed talking about the Pauli Exclusion Principle before this point) which gets really to the heart of it is that the Pauli Exclusion Principle kicks in when one thing literally runs into the other - when parts of two things were trying to occupy exactly the same state. If 'couldn't go any further or you'd be inside the other thing, but you can't do that' isn't 'contact' then the word has no meaning. The interviewer is exactly right at 4:17 - he did the demonstration wrong. He should have brought them into contact. Only when he was pushing inwards and the balls were pushing back hard enough to balance -- that's when he'd say they're in contact. So this isn't a great example because the proper explanation does include a smaller version of what's being explained.
1Douglas_Knight
What people complaining about this usually do is link to this video (or better), but I'm not sure it's actually helpful for people who don't get it.
3ChristianKl
What's the goal of having an explanation? Do you want the explanation to change your model of the world in a way that allows you to have the right intuition about a subject matter? Do you want the explanation to allow you to make better predictions about the subject matter? Beliefs are supposed to pay rent. If someone without a physics background hears about quantum mechanics they are supposed to be confused. If they aren't, they would simply project their old ideas into the new theory and not really update anything on a deeper level. I'm not aware of a published theory of emotions as an extension of biology that describes all aspects of emotions that I observe on a day-to-day basis. Understanding hardware does need solid state physics, but you also need to understand software to understand computers.
5Shmi
Most people want the explanations (models) to make intuitive sense, though a few are satisfied with the underlying math only. And intuition is based on what we already know and feel. The Pauli exclusion (or inclusion, if you take bosons) principle feels to me like rubbery wave-functions pushing against each other (or sticking together), even though I understand that antisymmetrization is not actually a microscopic force, and interacting electrons are not actually separate entities. I do not think that one should lump free will and identity in the same category as basic QM, however, as we do not have nearly the degree of understanding of the cognitive processes in System 1 which produce the feeling of either.
Strilanc230

Mario, for instance, once you jump, there's not much to do until he actually lands

Mario games let you change momentum while jumping, to compensate for the lack of fine control on your initial speed. This actually does matter a lot in speed runs. For example, Mario 64 speedruns rely heavily on a super fast backwards long jump that starts with switching directions in the air.

A speed run of real life wouldn't start with you eating lunch really fast, it would start with you sprinting to a computer.

In the examples you show how to run the opponent, but how do you access the source code? For example, how do you distinguish a cooperate bot from a (if choice < 0.0000001 then Defect else Cooperate) bot without a million simulations?
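(Spelling out the arithmetic behind that: with a per-round defect chance of 10^-7, you need on the order of ten million simulations before you can expect to see a single defection.)

```latex
P(\text{at least one Defect in } n \text{ simulations}) \;=\; 1 - \bigl(1 - 10^{-7}\bigr)^{n} \;\approx\; n \cdot 10^{-7} \quad \text{for } n \ll 10^{7}.
```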

4tetronian2
Unlike last year's tournament, you can't, and this is part of the challenge. Simulations are a precious resource, and you have to make an educated guess about your opponent based on the limited simulations you have.

Is it expected that electrically disabling key parts of the brain will replace anesthetic drugs?

4Shmi
There are speculations, certainly. But so far this is one experiment on one epileptic patient with a piece of hippocampus removed in an attempt to control her seizures. This is a long way away from reliably and reversibly switching off consciousness and memory formation on demand.

Whoops, box B was supposed to have a thousand in both cases.

I did have in mind the variant where Omega picks the self-consistent case, instead of using only the box A prediction, though.

Yes, the advantage comes from being hard to predict. I just wanted to find a game where the information denial benefits were counterfactual (unlike poker).

(Note that the goal is not perfect indistinguishability. If it was, then you could play optimally by just flipping a coin when deciding to bet or call.)

0cousin_it
If I recall correctly, the recommendation was to fold on average hands, and play aggressively on strong and weak hands. You don't need to flip a coin, because your cards can already be viewed as a kind of coin that your opponent can't see.

The variant with the clear boxes goes like so:

You are going to walk into a room with two boxes, A and B, both transparent. You'll be given the opportunity to enter a room with both boxes, their contents visible, where you can either take both boxes or just box A.

Omega, the superintelligence from another galaxy that is never wrong, has predicted whether you will take one box or two boxes. If it predicted you were going to take just box A, then box A will contain a million dollars and box B will contain a thousand dollars. If it predicted you were going to take ... (read more)

2cousin_it
In Gary's original version of this problem, Omega tries to predict what the agent would do if box A was filled. Also, I think box B is supposed to be always filled.

Thanks for the clarification. I removed the game tree image from the overview because it was misleading readers into thinking it was the entirety of the content.

Alright, I removed the game tree from the summary.

The -11 was chosen to give a small, but not empty, area of positive-returns in the strategy space. You're right that it doesn't affect which strategies are optimal, but in my mind it affects whether finding an optimal strategy is fun/satisfying.
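(Spelling out why the offset doesn't matter for optimality: shifting every payoff by the same constant changes the value of the game but not which strategies attain the optimum.)

```latex
\arg\max_{s} \, \mathbb{E}\bigl[u(s) - 11\bigr] \;=\; \arg\max_{s} \, \mathbb{E}\bigl[u(s)\bigr]
```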

You followed the link? The game tree image is a decent reference, but a bad introduction.

The answer to your question is that it's a zero sum game. The defender wants to minimize the score. The attacker wants to maximize it.

0DanielLC
I hadn't followed it. I think you should either have the entire thing, or none of it (maybe just the conclusion). If I can't understand what's going on from your overview, I don't see the point of it being there. In this example, it happens because the predictor guesses your strategy. It might not actually be before you choose the strategy, but since they can't exactly take advantage of you choosing first and look at it, it's functionally the same as them guessing your strategy. What's the point of the -11 for asking fate? You can't choose not to. Having eleven fewer util no matter what doesn't change anything.

Sam Harris recently responded to the winning essay of the "moral landscape challenge".

I thought it was a bit odd that the essay wasn't focused on the claimed definition of morality being vacuous. "Increasing the well-being of conscious creatures" is the sort of answer you get when you cheat at rationalist taboo. The problem has been moved into the word "well-being", not solved in any useful way. In practical terms it's equivalent to saying non-conscious things don't count and then stopping.

It's a bit hard to explain this to pe... (read more)

0Punoxysm
"Well-being" is a know-it-when-we-see-it sort of thing. Sure it's vague, but I don't begrudge its use. Let's break down the phrase you just objected to (I have not read SH's book, if that matters): "Increasing the well-being" - roughly correlates with increase utility, diminishing suffering, increasing freedom, increasing mindfulness, etc. Good things! And if defining it further gets into hairsplitting over competing utilitarianisms, then you might as well avoid that route. "Of all conscious creatures" - well, you obviously can't do anything immoral to a rock. Maybe you kick a rock and upset the nest of another creature, but you haven't hurt the rock. But you can do immoral things to conscious creatures, which can be argued to be pretty broad; certainly broader than just humans. So I think this is as concrete as many one-sentence summaries of morality.

Arguably the university's NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren't actively against it.

The NAT/firewall was there for security reasons, not to police gaming. This was when I lived in residence, so gaming was a legitimate recreational use.

Endpoints not being able to connect to each other makes some functionality costly or impossible. For example, peer to peer distribution systems rely on being able to contact cooperative endpoints. NAT makes that a lot harder, meaning plenty of development and usability costs.

A more mundane example is multiplayer games. When I played Warcraft 3, I had lots of issues testing maps I made because no one could connect to games I hosted (I was behind a university NAT, out of my control). I had to rely on sending the map to friends and having them host.

-3Shmi
Unlike what the TCP/IP designers envisioned, current internet is basically client/server. A client always initiates the exchange and should be isolated from unsolicited access. If necessary, P2P access is a solved problem, and it is properly done by applications at a level higher than TCP/IP, anyway. Arguably the university's NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren't actively against it.