All of Steve_Rayhawk's Comments + Replies

0[anonymous]
["delete" button hasn't appeared despite "retracted" state; do replies keep comments from being deletable?]
0Alexei
Sorry, it's not yet ready for public consumption. Please delete your post.

Pessimistic assumption: Voldemort evaded the Mirror, and is watching every trick Harry's coming up with to use against his reflection.

Semi-pessimistic assumption: Harry is in the Mirror, which has staged this conflict (perhaps on favorable terms) because it's stuck on the problem of figuring out what Tom Riddle's ideal world is.

-2Steve_Rayhawk
Pessimistic assumption: Voldemort evaded the Mirror, and is watching every trick Harry's coming up with to use against his reflection.

Pessimistic assumption: Voldemort can reliably give orders to Death Eaters within line-of-sight, and Death Eaters can cast several important spells, without any visible sign or sound.

Pessimistic assumption: Voldemort has reasonable cause to be confident that his Horcrux network will not be affected by Harry's death.

A necessary condition for a third ending might be a solution that purposefully violates the criteria in some respect.

Pessimistic assumption: Voldemort wants Harry to reveal important information as a side effect of using his wand. To get the best ending, Harry must identify what information this would be, and prevent Voldemort from acquiring this information.

Pessimistic assumption: Voldemort wants Harry to defeat him on this occasion. To get the best ending, Harry must defeat Voldemort, and then, before leaving the graveyard, identify a benefit that Voldemort gains by losing and deny him that benefit.

Pessimistic assumption: Free Transfiguration doesn't work like a superpower from Worm: it does not grant sensory feedback about the object being Transfigured, even if it does interpret the caster's idea of the target.

Pessimistic assumption: At least in the limit of unusually thin and long objects, Transfiguration time actually scales as the product of the shortest local dimension with the square of the longest local dimension of the target, rather than the volume. Harry has not detected this because he was always Transfiguring volumes or areas, and McGonagall was mistaken.
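A minimal worked comparison of the two scaling laws (taking the pessimistic form above at face value; the absolute units are arbitrary and only the ratio matters):

$$t_{\text{pessimistic}} \propto d_{\min}\, d_{\max}^{2} \qquad \text{vs.} \qquad t_{\text{volume}} \propto V = d_1 d_2 d_3$$

For a compact shape the two agree (a cube of side $s$ gives $s \cdot s^2 = s^3 = V$), but for a hypothetical 1 mm × 1 mm × 10 m wire, $V = 10^{-5}\,\mathrm{m^3}$ while $d_{\min}\, d_{\max}^{2} = 10^{-1}\,\mathrm{m^3}$, so the pessimistic law would make the Transfiguration roughly $10^{4}$ times slower than the volume law predicts, which is why it only bites in the limit of unusually thin and long targets.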

0pSinigaglia
In Azkaban it is stated that Harry's Transfiguration of a thin cylindrical layer from the wall is fast because its volume is small. This seems to contradict your assumption.

Pessimistic assumption: An intended solution involves, as a side-effect, Harry suffering a mortal affliction such as Transfiguration sickness or radiation poisoning, and is otherwise highly constrained. The proposed solution is close to this intended solution, and to match the other constraints, it must either include Harry suffering such an affliction with a plan to recover from it, or subject Harry to conditions where he would normally suffer such an affliction except that he has taken unusual measures to prevent it.

(This is one reading of the proviso, "evade immediate death".)

Pessimistic assumption: Hermione, once wakened, despite acting normal, will be under Voldemort's control.

Pessimistic assumption: Any plan which causes the occurrence of the vignette from Ch. 1 does not lead to the best ending. (For example, one reading of phenomena in Ch. 89 is that Harry is in a time loop, and the vignette may be associated with the path that leads to a reset of the loop.)

Pessimistic assumption: Voldemort, and some of the Death Eaters, have witnessed combat uses of the time-skewed Transfiguration featuring in Chapter 104. They will have appropriate reflexes to counter any attacks by partial Transfiguration which they could have countered if the attacks had been made using time-skewed Transfiguration.

Pessimistic assumption: It is not possible to Transfigure antimatter.

Pessimistic assumption: Neither partial Transfiguration nor extremely fast Transfiguration (using extremely small volumes) circumvents the limits on Transfiguring air.

Pessimistic assumption: Plans which depend on the use of partial Transfiguration, or Transfiguration of volumes small enough to complete at timescales smaller than that of mean free paths in air (order of 160 picoseconds?), to circumvent the limitation on Transfiguring air, will only qualify as valid if they contain an experimental test of the ability to Transfigure air, together with a backup plan which is among the best available in case it is not possible to Transfigure air.
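As a rough sanity check on the "order of 160 picoseconds" figure (standard kinetic-theory numbers, nothing from the story): at atmospheric pressure the mean free path of air molecules is about $\lambda \approx 68\ \mathrm{nm}$ and their mean thermal speed is about $\bar v \approx 470\ \mathrm{m/s}$, so the mean time between collisions is

$$\tau \approx \frac{\lambda}{\bar v} \approx \frac{68 \times 10^{-9}\ \mathrm{m}}{470\ \mathrm{m/s}} \approx 1.4 \times 10^{-10}\ \mathrm{s} \approx 140\ \mathrm{ps},$$

the same order of magnitude as the figure quoted above.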

Pessimistic assumption: Plans which depend on Transfiguring antimatter will only qualify as valid if they contain an experimental test of the ability to Transfigure antimatter, together with a backup plan which is among the best available in case it is not possible to Transfigure antimatter.

Pessimistic assumption: Harry's wand is not already touching a suitable object for Transfiguration. Neither partial Transfiguration nor extremely fast Transfiguration of extremely small volumes lifts the restriction against Transfiguring air; dust specks or surface films would need to be specifically seen; the tip of the wand is not touching his skin; and the definition of "touching the wand" starts at the boundary of the wand material.

-1Steve_Rayhawk
Pessimistic assumption: Free Transfiguration doesn't work like a superpower from Worm: it does not grant sensory feedback about the object being Transfigured, even if it does interpret the caster's idea of the target.
-1Steve_Rayhawk
Pessimistic assumption: At least in the limit of unusually thin and long objects, Transfiguration time actually scales as the product of the shortest local dimension with the square of the longest local dimension of the target, rather than the volume. Harry has not detected this because he was always Transfiguring volumes or areas, and McGonagall was mistaken.
-1Steve_Rayhawk
Pessimistic assumption: Voldemort, and some of the Death Eaters, have witnessed combat uses of the time-skewed Transfiguration featuring in Chapter 104. They will have appropriate reflexes to counter any attacks by partial Transfiguration which they could have countered if the attacks had been made using time-skewed Transfiguration.
-1Steve_Rayhawk
Pessimistic assumption: It is not possible to Transfigure antimatter.
0Steve_Rayhawk
Pessimistic assumption: Neither partial Transfiguration nor extremely fast Transfiguration (using extremely small volumes) circumvents the limits on Transfiguring air.
-1Steve_Rayhawk
Pessimistic assumption: Plans which depend on the use of partial Transfiguration, or Transfiguration of volumes small enough to complete at timescales smaller than that of mean free paths in air (order of 160 picoseconds?), to circumvent the limitation on Transfiguring air, will only qualify as valid if they contain an experimental test of the ability to Transfigure air, together with a backup plan which is among the best available in case it is not possible to Transfigure air.
-1Steve_Rayhawk
Pessimistic assumption: Plans which depend on Transfiguring antimatter will only qualify as valid if they contain an experimental test of the ability to Transfigure antimatter, together with a backup plan which is among the best available in case it is not possible to Transfigure antimatter.
0Steve_Rayhawk
Pessimistic assumption: Harry's wand is not already touching a suitable object for Transfiguration. Neither partial Transfiguration nor extremely fast Transfiguration of extremely small volumes lifts the restriction against Transfiguring air; dust specks or surface films would need to be specifically seen; the tip of the wand is not touching his skin; and the definition of "touching the wand" starts at the boundary of the wand material.

Pessimistic assumption: The effect of the Unbreakable Vow depends crucially on the order in which Harry lets himself become aware of arguments about its logical consequences.

Pessimistic assumption: Voldemort has made advance preparations which will thwart every potential plan of Harry's based on favorable tactical features or potential features of the situation which might reasonably be obvious to him. These include Harry's access to his wand, the Death Eaters' lack of armor enchantments or prepared shields, the destructive magic resonance, the Time-Turner, Harry's other possessions, Harry's glasses, the London portkey, a concealed Patronus from Hermione's revival, or Hermione's potential purposeful assistance. Any attempt to use these things will fail at least once and will, absent an appropriate counter-strategy, immediately trigger lethal force against Harry.

Pessimistic assumption: There are more than two endings. A solution meeting the stated criteria is a necessary but not sufficient condition for the least sad ending.

If a viable solution is posted [...] the story will continue to Ch. 121.

Otherwise you will get a shorter and sadder ending.

Note that the referent of "Ch. 121" is not necessarily fixed in advance.

Counterargument: "I expect that the collective effect of 'everyone with more urgent life issues stays out of the effort' shifts the probabilities very little" suggests that reaso... (read more)

1Steve_Rayhawk
A necessary condition for a third ending might be a solution that purposefully violates the criteria in some respect.

Pessimistic Assumptions Thread

"Excuse me, I should not have asked that of you, Mr. Potter, I forgot that you are blessed with an unusually pessimistic imagination -"

Ch. 15

Sometimes people called Moody 'paranoid'.

Moody always told them to survive a hundred years of hunting Dark Wizards and then get back to him about that.

Mad-Eye Moody had once worked out how long it had taken him, in retrospect, to achieve what he now considered a decent level of caution - weighed up how much experience it had taken him to get good instead of lucky - and h

... (read more)
0[anonymous]
And not only must Harry not interrupt the game, he must also prevent everyone else who does not know he's Time-Turned from doing so.
-2Steve_Rayhawk
Semi-pessimistic assumption: Harry is in the Mirror, which has staged this conflict (perhaps on favorable terms) because it's stuck on the problem of figuring out what Tom Riddle's ideal world is.
-2[anonymous]
Pessimistic assumption: Voldemort should not be killed, since without him it will never be known whether the Prophecy came true.
-1lerjj
Pessimistic assumption: LV knows that Harry can do partial Transfiguration. LV has put up anti-Apparition, anti-Time-Turning, and anti-Transfiguration wards. Less probable pessimistic assumption: these wards do not count as LV's magic once laid and will not resonate with Harry, meaning they will stay active. Alternatively, a Death Eater has laid them on previously understood instructions.
2lerjj
Pessimistic assumption: LV has been planning exactly this conversation for months and has thought of every possible plan of action that could be taken. He has Harry-level intelligence. All viable solutions must therefore use information LV does not have access to, which does not include the fact that Harry is Tom Riddle. Asking for the power he knows not is an attempt to patch this minor hole.
-1Steve_Rayhawk
Pessimistic assumption: Voldemort can reliably give orders to Death Eaters within line-of-sight, and Death Eaters can cast several important spells, without any visible sign or sound.
3Steve_Rayhawk
Pessimistic assumption: Voldemort has reasonable cause to be confident that his Horcrux network will not be affected by Harry's death.
-1Steve_Rayhawk
Pessimistic assumption: Voldemort wants Harry to reveal important information as a side effect of using his wand. To get the best ending, Harry must identify what information this would be, and prevent Voldemort from acquiring this information.
5Steve_Rayhawk
Pessimistic assumption: Voldemort wants Harry to defeat him on this occasion. To get the best ending, Harry must defeat Voldemort, and then, before leaving the graveyard, identify a benefit that Voldemort gains by losing and deny him that benefit.
4Steve_Rayhawk
Pessimistic assumption: An intended solution involves, as a side-effect, Harry suffering a mortal affliction such as Transfiguration sickness or radiation poisoning, and is otherwise highly constrained. The proposed solution is close to this intended solution, and to match the other constraints, it must either include Harry suffering such an affliction with a plan to recover from it, or subject Harry to conditions where he would normally suffer such an affliction except that he has taken unusual measures to prevent it. (This is one reading of the proviso, "evade immediate death".)
-1Steve_Rayhawk
Pessimistic assumption: Hermione, once wakened, despite acting normal, will be under Voldemort's control.
0Steve_Rayhawk
Pessimistic assumption: Any plan which causes the occurrence of the vignette from Ch. 1 does not lead to the best ending. (For example, one reading of phenomena in Ch. 89 is that Harry is in a time loop, and the vignette may be associated with the path that leads to a reset of the loop.)
-1[anonymous]
(Somewhat shaky) pessimistic assumption: Voldemort can use a Time-Turner too, and he will send himself a message from the future to win.
-1Steve_Rayhawk
Concerning Transfiguration:
1Steve_Rayhawk
Pessimistic assumption: The effect of the Unbreakable Vow depends crucially on the order in which Harry lets himself become aware of arguments about its logical consequences.
8Steve_Rayhawk
Pessimistic assumption: Voldemort has made advance preparations which will thwart every potential plan of Harry's based on favorable tactical features or potential features of the situation which might reasonably be obvious to him. These include Harry's access to his wand, the Death Eaters' lack of armor enchantments or prepared shields, the destructive magic resonance, the Time-Turner, Harry's other possessions, Harry's glasses, the London portkey, a concealed Patronus from Hermione's revival, or Hermione's potential purposeful assistance. Any attempt to use these things will fail at least once and will, absent an appropriate counter-strategy, immediately trigger lethal force against Harry.
-1Steve_Rayhawk
Pessimistic assumption: There are more than two endings. A solution meeting the stated criteria is a necessary but not sufficient condition for the least sad ending. Note that the referent of "Ch. 121" is not necessarily fixed in advance. Counterargument: "I expect that the collective effect of 'everyone with more urgent life issues stays out of the effort' shifts the probabilities very little" suggests that reasonable prior odds of getting each ending are all close to 0 or 1, so any possible hidden difficulty thresholds are either very high or very low. Counterargument: The challenge in Three Worlds Collide only had two endings. Counterargument: A third ending would have taken additional writing effort, to no immediately obvious didactic purpose.

there very likely exist misrepresentations. There are many reasons for this, but I can assure you that I never deliberately lied and that I never deliberately tried to misrepresent anyone. The main reason might be that I feel very easily overwhelmed

I think the thing to remember is that, when you've run into contexts where you feel like someone might not care that they're setting you up to be judged unfairly, you've been too overwhelmed to keep track of whether or not your self-defense involves doing things that you'd normally be able to see would set th... (read more)

I know that the idea of "different systems of local consistency constraints on full spacetimes might or might not happen to yield forward-sampleable causality or things close to it" shows up in Wolfram's "A New Kind of Science", for all that he usually refuses to admit the possible relevance of probability or nondeterminism whenever he can avoid doing so; the idea might also be in earlier literature.

that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it.

I'd thought about th... (read more)

The main way complexity of this sort would be addressable is if the intellectual artifact that you tried to prove things about were simpler than the process that you meant the artifact to unfold into. For example, the mathematical specification of AIXI is pretty simple, even though the hypotheses that AIXI would (in principle) invent upon exposure to any given environment would mostly be complex. Or for a more concrete example, the Gallina kernel of the Coq proof engine is small and was verified to be correct using other proof tools, while most of the comp... (read more)

4[anonymous]
This is actually one of the best comments I've seen on Less Wrong, especially this part: Thanks for the clear explanation.

you know what I mean.

Right, but this is a public-facing post. A lot of readers might not know why you could think it was obvious that "good guys" would imply things like information security, concern for Friendliness so-named, etc., and they might think that the intuition you mean to evoke with a vague affect-laden term like "good guys" is just the same argument-disdaining groupthink that would be implied if they saw it on any other site.

To prevent this impression, if you're going to use the term "good guys", then at or bef... (read more)

6[anonymous]
Okay, I'm convinced. I think I will just remove the term altogether, because it's confusing the issue.
-1hankx7787
well said.

these are all literally from the Nonprofits for Dummies book. [...] The history I've heard is that SI [...]


failed to read Nonprofits for Dummies,

I remember that, when Anna was managing the fellows program, she was reading books of the "for dummies" genre and trying to apply them... it's just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were "what it takes to manage well" (i.e. "basic management") and "what it takes to be productive", rath... (read more)

1John_Maxwell
Seems like a fair paraphrase.
Louie130

Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management

FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively given that that's what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational i... (read more)

If you have a lot of experts and a lot of objects, I might try a generative model where each object had unseen values from an n-dimensional feature space, and where experts decided what features to notice using weightings from a dual n-dimensional space, with the weight covectors generated as clustered in some way to represent the experts' structured non-independence. The experts' probability estimates would be something like a logistic function of the product of each object's features with the expert's weights (plus noise), and your output summary probabi... (read more)
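A minimal generative sketch of the kind of model described above, in Python with numpy; the dimensions, cluster count, and noise scales are illustrative placeholders rather than anything from the original comment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_experts, n_features, n_clusters = 50, 20, 5, 3

# Unseen object values in an n-dimensional feature space.
features = rng.normal(size=(n_objects, n_features))

# Expert weight covectors, clustered to represent structured non-independence:
# each expert's weights are a noisy copy of one of a few cluster prototypes.
prototypes = rng.normal(size=(n_clusters, n_features))
assignments = rng.integers(n_clusters, size=n_experts)
weights = prototypes[assignments] + 0.3 * rng.normal(size=(n_experts, n_features))

# Each expert's probability estimate: a logistic function of the product of the
# object's features with that expert's weights, plus noise.
logits = features @ weights.T + 0.5 * rng.normal(size=(n_objects, n_experts))
estimates = 1.0 / (1.0 + np.exp(-logits))      # shape: (n_objects, n_experts)
```

This only shows the forward (generative) direction; the useful part would be the inverse problem of inferring the latent features, weights, and cluster structure from the observed estimates and then producing an output summary probability per object.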

It wouldn't just be that some models of reality acknowledge your existence and others don't; it would mean that you are nothing more than a fuzzy heuristic concept in someone else's model, and that if they switched models, you would no longer exist even in that limited sense.

Or in a cascade of your own successive models, including of the cascade.

Or an incentive to keep using that model rather than to switch to another one. The models are made up, but the incentives are real. (To whatever extent the thing subject to the incentives is.)

Not that I'm agreei... (read more)

0JenniferRM
Crap. I had not thought of quines in reference to simulationist metaphysics before.

You need to do the impossible one more time, and make your plans bearing in mind that the true ontology [...] something more than your current intellectual tools allow you to represent.

With the "is" removed and replaced by an implied "might be", this seems like a good sentiment...

...well, given scenarios in which there were some other process that could come to represent it, such that there'd be a point in using (necessarily-)current intellectual tools to figure out how to stay out of those processes' way...

...and depending on the re... (read more)

His expectation that this will work out is based partly on [...]

(It's also based on an intuition I don't understand that says that classical states can't evolve toward something like representational equilibrium the way quantum states can -- e.g. you can't have something that tries to come up with an equilibrium of anticipation/decisions, like neural approximate computation of Nash equilibria, but using something more like representations of starting states of motor programs that, once underway, you've learned will predictably try to search combinatoria... (read more)

-1Mitchell_Porter
Let's go back to the local paradigm for explaining consciousness: "how it feels from the inside". On one side of the equation, we have a particular configuration of trillions of particles, on the other side we have a conscious being experiencing a particular combination of sensations, feelings, memories, and beliefs. The latter is supposed to be "how it feels to be that configuration".

If I ontologically analyze the configuration of particles, I'll probably do so in terms of nested spatial structures - particles in atoms in molecules in organelles in cells in networks. What if I analyze the other side of the equation, the experience, or even the conscious being having the experience? This is where phenomenology matters. Whenever materialists talk about consciousness, they keep interjecting references to neurons and brain computations even though none of this is evident in the experience itself. Phenomenology is the art of characterizing the experience solely in terms of how it presents itself.

So let's look for the phenomenological "parts" of an experience. One way to divide it up is into the different sensory modalities, e.g. that which is being seen versus that which is being heard. We can also distinguish objects that may be known multimodally, so there can be some cross-classification here, e.g. I see you but I also hear you. This synthesis of a unified perception from distinct sensations seems to be an intellectual activity, so I might say that there are some visual sensations, some auditory sensations, a concept of you, and a belief that the two types of sensations are both caused by the same external entity.

The analysis can keep going in many directions from here. I can focus just on vision and examine the particular qualities that make up a localized visual sensation (e.g. the classic three-dimensional color schemes). I can look at concepts and thoughts and ask how they are generated and compounded. When I listen to my own thinking, what exactly is going

You invoke as granted the assumption that there's anything besides your immediately present self (including your remembered past selves) that has qualia, but then you deny that some anticipatable things will have qualia. Presumably there are some philosophically informed epistemic-ish rules that you have been using, and implicitly endorsing, for the determination of whether any given stimuli you encounter were generated by something with qualia, and there are some other meta-philosophical epistemology-like rules that you are implicitly using and endorsing ... (read more)

Some brief attempted translation for the last part:

A "monad", in Mitchell Porter's usage, is supposed to be a somewhat isolatable quantum state machine, with states and dynamics factorizable somewhat as if it was a quantum analogue of a classical dynamic graphical model such as a dynamic Bayesian network (e.g., in the linked physics paper, a quantum cellular automaton). (I guess, unlike graphical models, it could also be supposed to not necessarily have a uniquely best natural decomposition of its Hilbert space for all purposes, like how with an ... (read more)

4Mitchell_Porter
I don't know where you got the part about representational equilibria from. My conception of a monad is that it is "physically elementary" but can have "mental states". Mental states are complex so there's some sort of structure there, but it's not spatial structure. The monad isn't obtained by physically concatenating simpler objects; its complexity has some other nature.

Consider the Game of Life cellular automaton. The cells are the "physically elementary objects" and they can have one of two states, "on" or "off". Now imagine a cellular automaton in which the state space of each individual cell is a set of binary trees of arbitrary depth. So the sequence of states experienced by a single cell, rather than being like 0, 1, 1, 0, 0, 0,... might be more like (X(XX)), (XX), ((XX)X), (X(XX)), (X(X(XX)))... There's an internal combinatorial structure to the state of the single entity, and ontologically some of these states might even be phenomenal or intentional states.

Finally, if you get this dynamics as a result of something like the changing tensor decomposition of one of those quantum CAs, then you would have a causal system which mathematically is an automaton of "tree-state" cells, ontologically is a causal grid of monads capable of developing internal intentionality, and physically is described by a Hamiltonian built out of Pauli matrices, such as might describe a many-body quantum system.

Furthermore, since the states of the individual cell can have great or even arbitrary internal complexity, it may be possible to simulate the dynamics of a single grid-cell in complex states, using a large number of grid-cells in simple states. The simulated complex tree-states would actually be a concatenation of simple tree-states. This is the "network of a billion simple monads simulating a single complex monad".
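A toy rendering of the "cellular automaton whose cell states are binary trees" picture, purely to make the data structure concrete; the update rule here is invented for illustration and carries no significance:

```python
# Cells hold binary trees instead of bits: 'X' is a leaf, (left, right) a node.

def depth(tree):
    if tree == 'X':
        return 0
    return 1 + max(depth(tree[0]), depth(tree[1]))

def step(cells):
    """One update: each cell pairs its state with its right neighbour's state,
    then prunes (arbitrarily) so trees do not grow without bound."""
    n = len(cells)
    new = []
    for i, tree in enumerate(cells):
        combined = (tree, cells[(i + 1) % n])   # states compound structurally
        if depth(combined) > 4:
            combined = combined[0]              # arbitrary pruning rule
        new.append(combined)
    return new

cells = ['X', ('X', 'X'), 'X', (('X', 'X'), 'X')]
for _ in range(3):
    cells = step(cells)
print(cells)   # each cell's "state" is now a small combinatorial object
```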
9Steve_Rayhawk
(It's also based on an intuition I don't understand that says that classical states can't evolve toward something like representational equilibrium the way quantum states can -- e.g. you can't have something that tries to come up with an equilibrium of anticipation/decisions, like neural approximate computation of Nash equilibria, but using something more like representations of starting states of motor programs that, once underway, you've learned will predictably try to search combinatorial spaces of options and/or redo a computation like the current one but with different details -- or that, even if you can get this sort of evolution in classical states, it's still knowably irrelevant. Earlier he invoked bafflingly intense intuitions about the obviously compelling ontological significance of the lack of spatial locality cues attached to subjective consciousness, such as "this quale is experienced in my anterior cingulate cortex, and this one in Wernicke's area", to argue that experience is necessarily nonclassically replicable. (As compared with, what, the spatial cues one would expect a classical simulation of the functional core of a conscious quantum state machine to magically become able to report experiencing?) He's now willing to spontaneously talk about non-conscious classical machines that simulate quantum ones (including not magically manifesting p-zombie subjective reports of spatial cues relating to its computational hardware), so I don't know what the causal role of that earlier intuition is in his present beliefs; but his reference to a "sweet spot", rather than a sweet protected quantum subspace of a space of network states or something, is suggestive, unless that's somehow necessary for the imagined tensor products to be able to stack up high enough.)

This needs further translation.

It's not something you can ever come close to competing with by a philosophy invented from scratch.

I don't understand what you mean by this.

A sufficient cause for Nick to claim this would be that he believed that no human-conceivable AI design would be able to incorporate by any means, including by reasoning from first principles or even by reference, anything functionally equivalent to the results of all the various dynamics of updating that have (for instance) made present legal systems as (relatively) robust (against currently engineerable methods... (read more)

And who will choose the choosers? No sentient entity at all -- they'll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater.

Such markets and technologies are already far beyond the ability of any single human to comprehend[. . .]

Can you expand on this? The way you say it suggests that it might be your core objection to the thesis of economically explosive strong AI. -- put into words, the way the emotional charge would hook into the argument here would be: "Such a strong AI would hav... (read more)

(Note that the Uncertain Future software is mostly supposed to be a conceptual demonstration; as mentioned in the accompanying conference paper, a better probabilistic forecasting guide would take historical observations and uncertainty about constant underlying factors into account more directly, with Bayesian model structure. The most important part of this would be stochastic differential equation model components that could account for both parameter and state uncertainty in nonlinear models of future economic development from past observations, especi... (read more)
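A minimal sketch of what "parameter and state uncertainty" means in simulation terms; the growth model (geometric Brownian motion) and every number below are placeholders, not anything from the Uncertain Future paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_worlds, n_years, dt = 1000, 50, 1.0

# Parameter uncertainty: an underlying growth rate drawn once per possible world.
mu = rng.normal(0.03, 0.02, size=n_worlds)
sigma = 0.05                      # year-to-year (state) volatility

# State uncertainty: Euler-Maruyama simulation of dX = mu*X dt + sigma*X dW.
X = np.ones(n_worlds)
for _ in range(int(n_years / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_worlds)
    X = X + mu * X * dt + sigma * X * dW

print(np.percentile(X, [10, 50, 90]))   # spread reflects both sources of uncertainty
```

A Bayesian version would condition the draws of mu on past observations instead of sampling them from a fixed prior; that is roughly what accounting for both parameter and state uncertainty from past observations would amount to.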

It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.

It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algori... (read more)

The subtleties I first had in mind were the ones that should have (but didn't) come up in the original earlier dicussion of MWI, having to do with the different numbers of bits in different parts of an observation-predicting program based on a physical theory, and which of those parts should have their bits be charged against the prior or likelihood of the physical theory itself and which of the parts should have their bits be taken for granted as intrinsic parts of the anthropic reasoning that any agent would need to be capable of (even if some physical t... (read more)

-1private_messaging
Well, taking for granted some particular class of bits is very problematic when you have brute force search over values of those bits. You can have a reasonably short program (that can be shorter than physics) which would iterate all theories of physics and run them for a very very long time. Then if you are allowed to just search for the observers, this program will be the simplest theory, and you are effectively back to square one; you don't get anything useful (you got Solomonoff induction inside Solomonoff induction). Sorry, I still can't make sense out of it.

I also don't see why I should have searched for possible resolutions, and assumed that the other party has good reason to expect such a resolution, if I reasonably believe that such a resolution would have a good chance at a Fields Medal (or even a good reason to expect such a resolution would). I also don't like speculative conjectures via lack of counterargument as combined with posts like this (and all posts inspired by its style), and feel very disinclined to run the highly computationally expensive search for possible solutions on behalf of a person who would not have such considerations about the entire majority of scientists that he thinks believe in something other than MWI. (Insomuch as MWI is part of CI, every CI endorser believes in MWI in a way; they just believe that it is invalid to believe in extra worlds that can't be observed, or hold some other philosophical stance that is a matter of opinion.)

edit: that is to say, the stance often expressed on MWI here is normative: if you don't believe in MWI you are wrong, and not just that, but the scientific community is wrong for not believing in MWI. That is the backgrounder. I do believe many worlds are a possibility, but I do not believe the above argument to be valid. And as you yourself have eloquently explained, one should not proclaim those believing in MWI to be stupid on the basis of problems that remain unresolved (I do not do that). But this also goes with

I'm sorry; I was referring to what I had perceived as a general pattern, from seeing snippets of discussions involving you while I was lurking off-and-on. The "pre-emptive" was meant to refer to within single exchanges, not to refer all the way back to (in this case) the original discussion about MWI (which I'm still hunting down). Now that I look more closely at your history, this has only been at all frequent within the past few months.

I don't have any specific recollection of you from before that comment on the "detecting rationalization&... (read more)

-3private_messaging
I responded privately earlier... I really don't quite know why I still bother posting here. Also btw, there's another thing: I posted several things that were quite seriously wrong, in the sense of a wrong chain of reasoning (not outcome). Those were upvoted and agreed with a fair lot.

Also, on the MWI, I am a believer that as far as we know there can be many worlds out there, and even if quantum mechanics is wrong it is fairly natural for mathematics to work out to many worlds, and it is not like believing in an apple cake in the asteroid belt. I do not dislike the conclusion. I dislike the argument. Ditto for Solomonoff induction and theism; I am an atheist.

I tend to be particularly negative towards the arguments that incorrectly argue in favour of what I believe, on the few times that I notice the incorrectness (obviously that has got to be much less common than seeing incorrectness in the arguments in favour of what I don't believe).

You discussed this over here too, with greater hostility:

also someone somehow thinks that Solomonoff induction finds probabilities for theories, while it just assigns 2^-length as probability for software code of such length, which is obviously absurd when applied to anything but brute force generated shortest pieces of code,

I'm trying to figure out whether, when you say "someone", you mean someone upthread or one of the original authors of the post. Because if it's the post authors, then I get to accuse you of not caring enough about refrain... (read more)

0private_messaging
It is the case that Luke, for instance, an author of this post, wrote this relatively recently. While there is understanding that bruteforcing is the ideal, I do not see understanding of how important a requirement that is. We don't know how long the shortest physics is, and we do not know how long the shortest god-did-it is, but if we can build godlike AI inside our universe then the simplest god is at most the same length, and unfortunately you can't know there isn't a shorter one. Note: I am an atheist, before you jump to "he must be a theist if he dislikes that statement". Nonetheless I do not welcome pseudo-scientific justifications of atheism.

edit: also by the way, if we find a way to exploit quantum mechanics to make a halting oracle and do true Solomonoff induction some day, that would make the physics of our universe incomputable, and this amazing physics itself would not be representable as a Turing machine tape, i.e. there would be a truth that the Solomonoff induction we could do using this undiscovered physics would not even be able to express as a hypothesis. Before you go onto Solomonoff induction you need to understand the Halting problem and its variations, otherwise you'll be assuming that someday we'll overcome the Halting problem, which is kind of fine except that if we do then we just get ourselves the Halting problem of the second order, plus a physics where Solomonoff induction doable with the oracle does not do everything.
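For reference while reading this exchange (a textbook fact about the formalism, not an endorsement of either side): for a universal prefix machine $U$, the Solomonoff prior is usually written

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|},$$

i.e. it sums over all programs consistent with the observations, not only the shortest one; the shortest program contributes the largest single term, and $2^{-K(x)}$ agrees with $M(x)$ only up to a multiplicative constant.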

He identifies subtleties, but doesn't look very hard to see whether other people could have reasonably supposed that the subtleties resolve in a different way than he thinks they "obviously" do. Then he starts pre-emptively campaigning viciously for contempt for everyone who draws a different conclusion than the one from his analysis. Very trigger-happy.

This needlessly pollutes discussion... that is to say, "needless" in the moral perspective of everyone who doesn't already believe that most people who first appear wrong by that criteri... (read more)

-6private_messaging
-5private_messaging
4TheOtherDave
Were capable of and bothered to, I suppose. I rarely bother to explain the reasons for my value judgments unless I'm specifically asked, and sometimes not even then. Especially not when it comes to value judgments of random people on the Internet. Low-value Internet interactions are fungible.

A concept I've played with, coming off of Eliezer's initial take on the problem of formulating optimization power, is: Suppose something generated N options randomly and then chose the best. Given the observed choice, what is the likelihood function for N?

For continuously distributed utilities, this can be computed directly using beta distributions. Beta(N, 1) is the probability density for the highest of N uniformly distributed unit random numbers. This includes numbers which are cumulative probabilities for a continuous distribution at values drawn from ... (read more)
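A short sketch of that computation, assuming scipy and numpy; the observed quantile is only an example value:

```python
import numpy as np
from scipy.stats import beta

# x: the cumulative probability (quantile) of the observed choice under the
# base distribution of unoptimized options, e.g. it beats 97% of random draws.
x = 0.97
Ns = np.arange(1, 201)

# The maximum of N iid Uniform(0, 1) draws has density Beta(N, 1) = N * x**(N-1),
# so evaluated at the observed x this is the likelihood function for N.
likelihood = beta.pdf(x, Ns, 1)

# With a flat prior over N this normalizes to a rough posterior over how many
# options the process would have had to consider before choosing the best.
posterior = likelihood / likelihood.sum()
print(Ns[np.argmax(likelihood)])   # maximum-likelihood N, roughly -1/ln(x)
```

The whole likelihood curve, rather than the point estimate, is what the comment proposes as a handle on optimization power.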

That said, I think his fear of culpability (for being potentially passively involved in an existential catastrophe) is very real. I suspect he is continually driven, at a level beneath what anyone's remonstrations could easily affect, to try anything that might somehow succeed in removing all the culpability from him. This would be a double negative form of "something to protect": "something to not be culpable for failure to protect".

If this is true, then if you try to make him feel culpability for his communication acts as usual, this ... (read more)

0XiXiDu
What is your suggestion then? How do I get out? Delete all of my posts, comments and website like Roko? Seriously, if it wasn't for assholes like wedrifid I wouldn't even bother anymore and would just quit. The grandparent was an attempt at honesty, trying to leave. Then that guy comes along claiming that most of my submissions consisted of "persuasive denunciations". Someone like him, who does nothing else all the time. Someone who never argues for his case. ETA: Ah fuck it all. I'll make another attempt and log out now and not get involved anymore. Happy self-adulation.
1wedrifid
I certainly wouldn't try to make him feel culpability. Or, for that matter, "try to make him" anything at all. I don't believe I have the ability to influence XiXi significantly and I don't believe it would be useful to try (any more). It is for this reason that I rather explicitly spoke in the third person to any prospective future readers whom it may be appropriate to refer here in the future. Pretending that I was actually talking to XiXiDu when I was clearly speaking to others would just be insulting to him. There are possible future cases (and plenty of past cases) where a reply to one of XiXiDu's fallacious denunciations that consists of simply a link here is more useful than ignoring the comment entirely and hoping that the damage done is minimal.

Currently you suspect that there are people, such as yourself, who have some chance of correctly judging whether arguments such as yours are correct, and of attempting to implement the implications if those arguments are correct, and of not implementing the implications if those arguments are not correct.

Do you think it would be possible to design an intelligence which could do this more reliably?

8steven0461
I don't get it. Design a Friendly AI that can better judge whether it's worth the risk of botching the design of a Friendly AI? ETA: I suppose your point applies to some of XiXiDu's concerns but not others?
3XiXiDu
Here are a few premises:

* Complex systems can fail in complex ways.
* Destruction is easier than creation.
* Expected utility maximization is rational and feasible.
* We should be extremely conservative about not implementing a half-baked friendly AI.

If you believe that self-improving AI is inevitable and that creating friendly AI is more difficult than creating unfriendly AI then to launch an AI that simply destroys everything as quickly as possible has a higher expected utility than doing nothing or trying to implement an AI that is not completely friendly. The negative utility of an AI that transforms the universe into an inanimate state is simply the positive utility of a perfectly friendly AI. But most outcomes are expected to yield unfriendly AI, or not quite friendly AI, which will actively increase negative utility by possibly keeping alive living beings indefinitely given abhorrent circumstances. To prevent those scenarios one can try to solve friendly AI, which will most likely fail (or even increase the chances of a negative singularity), or try to launch a destructive singleton with simple goals to prevent further suffering and the evolution of life elsewhere in the universe. Creating a destructive self-improving AI that kills all life should be the easiest of all possibilities with a high probability of success.

I wish there was a more standard term for this than "kinesthetic thinking", that other people would be able to look up and understand what was meant.

(A related term is "motor cognition", but that doesn't denote a thinking style. Motor cognition is a theoretical paradigm in cognitive psychology, according to which most cognition is a kind of higher-order motor control/planning activity, connected in a continuous hierarchy with conventional concrete motor control and based on the same method of neural implementation. (See also: precuneus ... (read more)

If the human-level AGI

0) is autonomous (has, or forms, long-term goals)
1) is not socialized

#1 is important because a self-modifying system will tend to respond to negative reinforcement concerning sociopathic behaviors resulting from #3-- though, it must be admitted, this will depend on how deeply the ability to self-modify runs. Not all architectures will be capable of effectively modifying their goals in response to social pressures. (In fact, rigid goal-structure under self-modification will usually be seen as an important design-point.)

Abram: Coul... (read more)

5abramdemski
Steve, The idea here is that if an agent is able to (literally or effectively) modify its goal structure, and grows up in an environment in which humans deprive it of what it wants when it behaves badly, an effective strategy for getting what it wants more often will be to alter its goal structure to be closer to the humans. This is only realistic with some architectures. One requirement here is that the cognitive load of keeping track of the human goals and potential human punishments is a difficulty for the early-stage system, such that it would be better off altering its own goal system. Similarly, it must be assumed that during the period of its socialization, it is not advanced enough to effectively hide its feelings. These are significant assumptions.

In fact, I'd prefer it if Q8 started out with the less-shibbolethy "How much have you read about, or used the concepts of..." or something like that, which replaces a dichotomy with a continuum.

Yeah... I wanted to make the suggested question less loaded, but it would have required more words, and I was unthinkingly preoccupied with worry about a limit on the permitted complexity of a single-sentence question. Maybe I should have split the question across more sentences.

The signaling uses of Q8 seem like a bad idea to me, although it seems a

... (read more)