Pessimistic assumption: Voldemort evaded the Mirror, and is watching every trick Harry's coming up with to use against his reflection.
Semi-pessimistic assumption: Harry is in the Mirror, which has staged this conflict (perhaps on favorable terms) because it's stuck on the problem of figuring out what Tom Riddle's ideal world is.
Pessimistic assumption: Voldemort can reliably give orders to Death Eaters within line-of-sight, and Death Eaters can cast several important spells, without any visible sign or sound.
Pessimistic assumption: Voldemort has reasonable cause to be confident that his Horcrux network will not be affected by Harry's death.
A necessary condition for a third ending might be a solution that purposefully violates the criteria in some respect.
Pessimistic assumption: Voldemort wants Harry to reveal important information as a side effect of using his wand. To get the best ending, Harry must identify what information this would be, and prevent Voldemort from acquiring this information.
Pessimistic assumption: Voldemort wants Harry to defeat him on this occasion. To get the best ending, Harry must defeat Voldemort, and then, before leaving the graveyard, identify a benefit that Voldemort gains by losing and deny him that benefit.
Pessimistic assumption: Free Transfiguration doesn't work like a superpower from Worm: it does not grant sensory feedback about the object being Transfigured, even if it does interpret the caster's idea of the target.
Pessimistic assumption: At least in the limit of unusually thin and long objects, Transfiguration time actually scales as the product of the shortest local dimension with the square of the longest local dimension of the target, rather than the volume. Harry has not detected this because he was always Transfiguring volumes or areas, and McGonagall was mistaken.
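For illustration only (made-up dimensions, arbitrary time units), a quick sketch of how far the two scalings diverge for a long thin target:

```python
# Compares ordinary volume scaling with the assumed
# (shortest dimension) * (longest dimension)^2 scaling,
# for a hypothetical thin rod versus a small cube. Numbers are made up.

def volume_scaling(dims):
    x, y, z = dims
    return x * y * z

def pessimistic_scaling(dims):
    return min(dims) * max(dims) ** 2

rod = (0.001, 0.001, 100.0)   # 1 mm x 1 mm x 100 m, in metres
cube = (0.1, 0.1, 0.1)        # 10 cm cube

for name, dims in [("rod", rod), ("cube", cube)]:
    print(name, volume_scaling(dims), pessimistic_scaling(dims))

# Under volume scaling the rod (1e-4) is cheaper than the cube (1e-3);
# under the pessimistic scaling the rod (10.0) is four orders of magnitude
# more expensive than the cube (1e-3).
```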
Pessimistic assumption: An intended solution involves, as a side-effect, Harry suffering a mortal affliction such as Transfiguration sickness or radiation poisoning, and is otherwise highly constrained. The proposed solution is close to this intended solution, and to match the other constraints, it must either include Harry suffering such an affliction with a plan to recover from it, or subject Harry to conditions where he would normally suffer such an affliction except that he has taken unusual measures to prevent it.
(This is one reading of the proviso, "evade immediate death".)
Pessimistic assumption: Hermione, once wakened, despite acting normal, will be under Voldemort's control.
Pessimistic assumption: Any plan which causes the occurrence of the vignette from Ch. 1 does not lead to the best ending. (For example, one reading of phenomena in Ch. 89 is that Harry is in a time loop, and the vignette may be associated with the path that leads to a reset of the loop.)
Pessimistic assumption: Voldemort, and some of the Death Eaters, have witnessed combat uses of the time-skewed Transfiguration featured in Ch. 104. They will have appropriate reflexes to counter any attacks by partial Transfiguration which they could have countered if the attacks had been made using time-skewed Transfiguration.
Pessimistic assumption: It is not possible to Transfigure antimatter.
Pessimistic assumption: Neither partial Transfiguration nor extremely fast Transfiguration (using extremely small volumes) circumvents the limits on Transfiguring air.
Pessimistic assumption: Plans which depend on the use of partial Transfiguration, or Transfiguration of volumes small enough to complete at timescales smaller than that of mean free paths in air (order of 160 picoseconds?), to circumvent the limitation on Transfiguring air, will only qualify as valid if they contain an experimental test of the ability to Transfigure air, together with a backup plan which is among the best available in case it is not possible to Transfigure air.
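As a rough sanity check on the parenthetical figure (textbook values for air at about room temperature and one atmosphere; nothing here is precise):

```python
# Order-of-magnitude estimate of the mean time between molecular collisions
# in air, to sanity-check the "order of 160 picoseconds" figure above.

mean_free_path = 68e-9   # metres; typical textbook value for air at ~1 atm, ~300 K
mean_speed = 470.0       # m/s; approximate mean thermal speed of N2 at ~300 K

collision_time = mean_free_path / mean_speed
print(collision_time)    # ~1.4e-10 s, i.e. roughly 100-200 ps
```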
Pessimistic assumption: Plans which depend on Transfiguring antimatter will only qualify as valid if they contain an experimental test of the ability to Transfigure antimatter, together with a backup plan which is among the best available in case it is not possible to Transfigure antimatter.
Pessimistic assumption: Harry's wand is not already touching a suitable object for Transfiguration. Neither partial Transfiguration nor extremely fast Transfiguration of extremely small volumes lifts the restriction against Transfiguring air; dust specks or surface films would need to be specifically seen; the tip of the wand is not touching his skin; and the definition of "touching the wand" starts at the boundary of the wand material.
Concerning Transfiguration:
Pessimistic assumption: The effect of the Unbreakable Vow depends crucially on the order in which Harry lets himself become aware of arguments about its logical consequences.
Pessimistic assumption: Voldemort has made advance preparations which will thwart every potential plan of Harry's based on favorable tactical features or potential features of the situation which might reasonably be obvious to him. These include Harry's access to his wand, the Death Eaters' lack of armor enchantments or prepared shields, the destructive magic resonance, the Time-Turner, Harry's other possessions, Harry's glasses, the London portkey, a concealed Patronus from Hermione's revival, or Hermione's potential purposeful assistance. Any attempt to use these things will fail at least once and will, absent an appropriate counter-strategy, immediately trigger lethal force against Harry.
Pessimistic assumption: There are more than two endings. A solution meeting the stated criteria is a necessary but not sufficient condition for the least sad ending.
If a viable solution is posted [...] the story will continue to Ch. 121.
Otherwise you will get a shorter and sadder ending.
Note that the referent of "Ch. 121" is not necessarily fixed in advance.
Counterargument: "I expect that the collective effect of 'everyone with more urgent life issues stays out of the effort' shifts the probabilities very little" suggests that reaso...
Pessimistic Assumptions Thread
"Excuse me, I should not have asked that of you, Mr. Potter, I forgot that you are blessed with an unusually pessimistic imagination -"
– Ch. 15
...Sometimes people called Moody 'paranoid'.
Moody always told them to survive a hundred years of hunting Dark Wizards and then get back to him about that.
Mad-Eye Moody had once worked out how long it had taken him, in retrospect, to achieve what he now considered a decent level of caution - weighed up how much experience it had taken him to get good instead of lucky - and h
there very likely exist misrepresentations. There are many reasons for this, but I can assure you that I never deliberately lied and that I never deliberately tried to misrepresent anyone. The main reason might be that I feel very easily overwhelmed
I think the thing to remember is that, when you've run into contexts where you feel like someone might not care that they're setting you up to be judged unfairly, you've been too overwhelmed to keep track of whether or not your self-defense involves doing things that you'd normally be able to see would set th...
I know that the idea of "different systems of local consistency constraints on full spacetimes might or might not happen to yield forward-sampleable causality or things close to it" shows up in Wolfram's "A New Kind of Science", for all that he usually refuses to admit the possible relevance of probability or nondeterminism whenever he can avoid doing so; the idea might also be in earlier literature.
that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it.
I'd thought about th...
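To make the claim concrete, here is a minimal toy sketch (my construction, not anything from that discussion): enumerate every candidate value that could arrive from the future, run the deterministic rules forward, and keep only the histories where the value sent back equals the value received.

```python
# Toy discrete "universe" whose update rule receives a message from its own
# future. A history is admissible only if the message sent at the end equals
# the message received at the start (self-consistency). Because the message
# space is finite, the brute-force search terminates, so the whole universe
# is finitely computable.

def run_universe(received_from_future: int) -> int:
    """Deterministic dynamics; returns the message sent back to the past."""
    return (received_from_future * 3 + 1) % 7   # arbitrary made-up rule

consistent_histories = [m for m in range(7) if run_universe(m) == m]
print(consistent_histories)   # [3]: the unique self-consistent loop here
```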
The main way complexity of this sort would be addressable is if the intellectual artifact that you tried to prove things about were simpler than the process that you meant the artifact to unfold into. For example, the mathematical specification of AIXI is pretty simple, even though the hypotheses that AIXI would (in principle) invent upon exposure to any given environment would mostly be complex. Or for a more concrete example, the Gallina kernel of the Coq proof engine is small and was verified to be correct using other proof tools, while most of the comp...
you know what I mean.
Right, but this is a public-facing post. A lot of readers might not know why you could think it was obvious that "good guys" would imply things like information security, concern for Friendliness so-named, etc., and they might think that the intuition you mean to evoke with a vague affect-laden term like "good guys" is just the same argument-disdaining groupthink that would be implied if they saw it on any other site.
To prevent this impression, if you're going to use the term "good guys", then at or bef...
these are all literally from the Nonprofits for Dummies book. [...] The history I've heard is that SI [...]
failed to read Nonprofits for Dummies,
I remember that, when Anna was managing the fellows program, she was reading books of the "for dummies" genre and trying to apply them... it's just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were "what it takes to manage well" (i.e. "basic management") and "what it takes to be productive", rath...
Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively given that that's what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational i...
If you have a lot of experts and a lot of objects, I might try a generative model where each object had unseen values from an n-dimensional feature space, and where experts decided what features to notice using weightings from a dual n-dimensional space, with the weight covectors generated as clustered in some way to represent the experts' structured non-independence. The experts' probability estimates would be something like a logistic function of the product of each object's features with the expert's weights (plus noise), and your output summary probabi...
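A minimal sketch of the generative direction of that model (every specific distributional choice below is a placeholder, not a recommendation):

```python
# Sketch of the generative model described above: objects have latent
# n-dimensional features; experts have weight covectors drawn from a small
# number of clusters (to model their structured non-independence); each
# expert's stated probability for an object is a logistic function of the
# dot product plus noise.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_experts, n_dims, n_clusters = 50, 8, 4, 2

objects = rng.normal(size=(n_objects, n_dims))           # latent object features

cluster_centers = rng.normal(size=(n_clusters, n_dims))  # expert "schools of thought"
assignments = rng.integers(n_clusters, size=n_experts)
experts = cluster_centers[assignments] + 0.3 * rng.normal(size=(n_experts, n_dims))

logits = objects @ experts.T + 0.5 * rng.normal(size=(n_objects, n_experts))
stated_probs = 1.0 / (1.0 + np.exp(-logits))              # each expert's estimates

print(stated_probs.shape)   # (n_objects, n_experts); actual inference would go
                            # the other way, fitting the latents to observed estimates
```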
It wouldn't just be that some models of reality acknowledge your existence and others don't; it would mean that you are nothing more than a fuzzy heuristic concept in someone else's model, and that if they switched models, you would no longer exist even in that limited sense.
Or in a cascade of your own successive models, including of the cascade.
Or an incentive to keep using that model rather than to switch to another one. The models are made up, but the incentives are real. (To whatever extent the thing subject to the incentives is.)
Not that I'm agreei...
You need to do the impossible one more time, and make your plans bearing in mind that the true ontology [...] something more than your current intellectual tools allow you to represent.
With the "is" removed and replaced by an implied "might be", this seems like a good sentiment...
...well, given scenarios in which there were some other process that could come to represent it, such that there'd be a point in using (necessarily-)current intellectual tools to figure out how to stay out of those processes' way...
...and depending on the re...
His expectation that this will work out is based partly on [...]
(It's also based on an intuition I don't understand that says that classical states can't evolve toward something like representational equilibrium the way quantum states can -- e.g. you can't have something that tries to come up with an equilibrium of anticipation/decisions, like neural approximate computation of Nash equilibria, but using something more like representations of starting states of motor programs that, once underway, you've learned will predictably try to search combinatoria...
You invoke as granted the assumption that there's anything besides your immediately present self (including your remembered past selves) that has qualia, but then you deny that some anticipatable things will have qualia. Presumably there are some philosophically informed epistemic-ish rules that you have been using, and implicitly endorsing, for the determination of whether any given stimuli you encounter were generated by something with qualia, and there are some other meta-philosophical epistemology-like rules that you are implicitly using and endorsing ...
Some brief attempted translation for the last part:
A "monad", in Mitchell Porter's usage, is supposed to be a somewhat isolatable quantum state machine, with states and dynamics factorizable somewhat as if it was a quantum analogue of a classical dynamic graphical model such as a dynamic Bayesian network (e.g., in the linked physics paper, a quantum cellular automaton). (I guess, unlike graphical models, it could also be supposed to not necessarily have a uniquely best natural decomposition of its Hilbert space for all purposes, like how with an ...
This needs further translation.
It's not something you can ever come close to competing with by a philosophy invented from scratch.
I don't understand what you mean by this.
A sufficient cause for Nick to claim this would be that he believed that no human-conceivable AI design would be able to incorporate by any means, including by reasoning from first principles or even by reference, anything functionally equivalent to the results of all the various dynamics of updating that have (for instance) made present legal systems as (relatively) robust (against currently engineerable methods...
And who will choose the choosers? No sentient entity at all -- they'll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater.
Such markets and technologies are already far beyond the ability of any single human to comprehend[. . .]
Can you expand on this? The way you say it suggests that it might be your core objection to the thesis of economically explosive strong AI. -- put into words, the way the emotional charge would hook into the argument here would be: "Such a strong AI would hav...
(Note that the Uncertain Future software is mostly supposed to be a conceptual demonstration; as mentioned in the accompanying conference paper, a better probabilistic forecasting guide would take historical observations and uncertainty about constant underlying factors into account more directly, with Bayesian model structure. The most important part of this would be stochastic differential equation model components that could account for both parameter and state uncertainty in nonlinear models of future economic development from past observations, especi...
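A toy sketch of what I mean by handling parameter and state uncertainty together (this is not the Uncertain Future model, just an illustration with made-up numbers): draw the uncertain constant growth rate from a prior, then integrate a noisy growth equation forward, so the forecast spread reflects both sources of uncertainty.

```python
# Toy sketch of "parameter + state uncertainty" in a stochastic differential
# equation forecast: draw the growth-rate parameter from a prior (parameter
# uncertainty), then Euler-Maruyama-integrate a noisy growth process forward
# (state uncertainty).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 1000, 100, 0.1

mu = rng.normal(0.03, 0.01, size=n_paths)   # uncertain constant growth rate
sigma = 0.05                                # state noise level (made up)

x = np.zeros((n_paths, n_steps + 1))        # log of the forecast quantity
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x[:, t + 1] = x[:, t] + mu * dt + sigma * dW

print(np.percentile(np.exp(x[:, -1]), [10, 50, 90]))   # forecast spread
```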
It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.
It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algori...
The subtleties I first had in mind were the ones that should have (but didn't) come up in the original earlier discussion of MWI, having to do with the different numbers of bits in different parts of an observation-predicting program based on a physical theory, and which of those parts should have their bits be charged against the prior or likelihood of the physical theory itself and which of the parts should have their bits be taken for granted as intrinsic parts of the anthropic reasoning that any agent would need to be capable of (even if some physical t...
I'm sorry; I was referring to what I had perceived as a general pattern, from seeing snippets of discussions involving you while I was lurking off-and-on. The "pre-emptive" was meant to refer to within single exchanges, not to refer all the way back to (in this case) the original discussion about MWI (which I'm still hunting down). Now that I look more closely at your history, this has only been at all frequent within the past few months.
I don't have any specific recollection of you from before that comment on the "detecting rationalization"...
You discussed this over here too, with greater hostility:
also someone somehow thinks that Solomonoff induction finds probabilities for theories, while it just assigns 2^-length as probability for software code of such length, which is obviously absurd when applied to anything but brute force generated shortest pieces of code,
I'm trying to figure out whether, when you say "someone", you mean someone upthread or one of the original authors of the post. Because if it's the post authors, then I get to accuse you of not caring enough about refrain...
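For reference (a standard textbook definition, not part of the original exchange): Solomonoff induction does assign 2^-length to each individual program, but the induced probability of the observed data sums over all programs that reproduce it, and the weight on any particular "theory" comes from conditioning that mixture on the data:

$$M(x) \;=\; \sum_{p\,:\,U(p)\text{ outputs a string beginning with }x} 2^{-\ell(p)},$$

where $U$ is a universal prefix machine and $\ell(p)$ is the length of program $p$.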
He identifies subtleties, but doesn't look very hard to see whether other people could have reasonably supposed that the subtleties resolve in a different way than he thinks they "obviously" do. Then he starts pre-emptively campaigning viciously for contempt for everyone who draws a different conclusion than the one from his analysis. Very trigger-happy.
This needlessly pollutes discussion... that is to say, "needless" in the moral perspective of everyone who doesn't already believe that most people who first appear wrong by that criteri...
A concept I've played with, coming off of Eliezer's initial take on the problem of formulating optimization power, is: Suppose something generated N options randomly and then chose the best. Given the observed choice, what is the likelihood function for N?
For continuously distributed utilities, this can be computed directly using beta distributions. Beta(N, 1) is the probability density for the highest of N uniformly distributed unit random numbers. This includes numbers which are cumulative probabilities for a continuous distribution at values drawn from ...
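A quick sketch of that computation (u here is the observed cumulative probability, under the base distribution, of the option that was chosen; everything else is placeholder):

```python
# Likelihood of N (number of randomly generated options) given that the
# observed chosen option sits at cumulative probability u under the base
# distribution: the maximum of N uniforms has density Beta(N, 1) at u,
# i.e. N * u**(N - 1).
import numpy as np

def likelihood_of_N(u: float, N_values) -> np.ndarray:
    N = np.asarray(N_values, dtype=float)
    return N * u ** (N - 1)

u_observed = 0.98                   # e.g. the choice beat 98% of candidate options
N_grid = np.arange(1, 201)
L = likelihood_of_N(u_observed, N_grid)
print(N_grid[np.argmax(L)])         # maximum-likelihood N, roughly 1 / (-ln u)
```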
That said, I think his fear of culpability (for being potentially passively involved in an existential catastrophe) is very real. I suspect he is continually driven, at a level beneath what anyone's remonstrations could easily affect, to try anything that might somehow succeed in removing all the culpability from him. This would be a double negative form of "something to protect": "something to not be culpable for failure to protect".
If this is true, then if you try to make him feel culpability for his communication acts as usual, this ...
Currently you suspect that there are people, such as yourself, who have some chance of correctly judging whether arguments such as yours are correct, and of attempting to implement the implications if those arguments are correct, and of not implementing the implications if those arguments are not correct.
Do you think it would be possible to design an intelligence which could do this more reliably?
I wish there was a more standard term for this than "kinesthetic thinking", that other people would be able to look up and understand what was meant.
(A related term is "motor cognition", but that doesn't denote a thinking style. Motor cognition is a theoretical paradigm in cognitive psychology, according to which most cognition is a kind of higher-order motor control/planning activity, connected in a continuous hierarchy with conventional concrete motor control and based on the same method of neural implementation. (See also: precuneus ...
If the human-level AGI
0) is autonomous (has, or forms, long-term goals)
1) is not socialized
#1 is important because a self-modifying system will tend to respond to negative reinforcement concerning sociopathic behaviors resulting from #3-- though, it must be admitted, this will depend on how deeply the ability to self-modify runs. Not all architectures will be capable of effectively modifying their goals in response to social pressures. (In fact, rigid goal-structure under self-modification will usually be seen as an important design-point.)
Abram: Coul...
In fact, I'd prefer it if Q8 started out with the less-shibbolethy "How much have you read about, or used the concepts of..." or something like that, which replaces a dichotomy with a continuum.
Yeah... I wanted to make the suggested question less loaded, but it would have required more words, and I was unthinkingly preoccupied with worry about a limit on the permitted complexity of a single-sentence question. Maybe I should have split the question across more sentences.
...The signaling uses of Q8 seem like a bad idea to me, although it seems a
[nvm]