It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie.
When I myself run across apparent p-zombies, they usually look at my arguments as if I am being dense over my descriptions of consciousness. And I can see why, because without the experience of consciousness itself, these arguments must sound like they make consciousness out to be an extraneous hypothesis to help explain my behavior. Yet, even after reflecting on this objection, it still seems there is something to explain besid...
Perhaps ambiguity aversion is merely a good heuristic.
Well, of course. Finite ideal rational agents don't exist. If you were designing a decision-theory-optimal AI, that optimality would be a property of its environment, not of any ideal abstract computing space. I can think of at least one reason why ambiguity aversion could be the optimal algorithm in environments with limited computing resources:
Consider a self-modification algorithm that adapts to new problem domains. Restructuring (learning) is considered the hardest of tasks, and so the AI modifies scarcel...
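A minimal sketch of the kind of cheap heuristic I have in mind (my own toy actions, payoffs, and candidate priors, not anything from the original discussion): when you can't settle on one prior, evaluate each action under every prior you consider plausible and take the best worst case.

```python
# A minimal sketch of ambiguity aversion as a cheap decision rule: rather than
# averaging over a carefully constructed distribution of priors, check each action
# against a handful of plausible priors and pick the one with the best worst case.

# Payoff of each action depending on the unknown state of the world.
payoffs = {
    "ambiguous_bet": {"favorable": 100, "unfavorable": 0},
    "known_bet":     {"favorable": 45,  "unfavorable": 45},
}

# A few candidate values for P(favorable), instead of one settled number.
candidate_priors = [0.3, 0.5, 0.7]

def worst_case_value(action: str) -> float:
    """Expected payoff under the least favorable candidate prior."""
    return min(
        p * payoffs[action]["favorable"] + (1 - p) * payoffs[action]["unfavorable"]
        for p in candidate_priors
    )

best = max(payoffs, key=worst_case_value)
print(best)  # "known_bet": the ambiguity-averse choice
```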
Shouldn't this post be marked [Human] so that uploads and AIs don't need to spend cycles reading it?
...I'd like to think that this joke bears the more subtle point that a possible explanation for the preparedness gap in your rationalist friends is that they're trying to think like ideal rational agents, who wouldn't need to take such human considerations into account.
I have a friend with Crohn's Disease, who often struggles with the motivation to even figure out how to improve his diet in order to prevent relapse. I suggested he find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplements.
As usual, I'm pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn't seem the sort of thing that he and his doctor would have even discussed. ...
There has been mathematically proven software, and the space shuttle software came close, though it was not formally proven as such.
Well... If you know what you wish to prove, then it's possible that there exists a logical string that begins with a computer program and ends with it as a necessity. But that's not really exciting. If you could code in the language of proof theory, you would already have the program. The mathematical proof of a real program is just a translation of the proof into machine code, and then showing that the translation goes both ways.
You can potentially prove a space ...
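As a toy illustration of "coding in the language of proof theory" (a minimal sketch of my own, assuming a recent Lean 4 toolchain where the omega tactic is available; this is not the shuttle's methodology):

```lean
-- A toy example only: the specification and its proof are written together,
-- and the checked definition is itself the runnable program.
def myMin (a b : Nat) : Nat :=
  if a ≤ b then a else b

-- The "logical string" that ends with the program as a necessity:
-- myMin never exceeds its first argument.
theorem myMin_le_left (a b : Nat) : myMin a b ≤ a := by
  unfold myMin
  split <;> omega
```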
Depends if you count future income. Highest paying careers are often so because only those willing to put in extra effort at their previous jobs get promoted. This is at least true in my field, software engineering.
The film's trailer strikes me as being aware of the transhumanist community in a surprising way, as it includes two themes that are otherwise not connected in the public consciousness: uploads and superintelligence. I wouldn't be surprised if a screenwriter found inspiration from the characters of Sandberg, Bostrom, or of course Kurzweil. Members of the Less Wrong community itself have long struck me as ripe for fictionalization... Imagine if a Hollywood writer actually visited.
They can help with depression.
I've personally tried this and can report that it's true, but will caveat that the expectation that I'll force myself into a morning cold shower often causes oversleeping, which rather exacerbates depression.
Often in Knightian problems you are just screwed and there's nothing rational you can do.
As you know, this attitude isn't particularly common 'round these parts, and while I fall mostly in the "Decision theory can account for everything" camp, there may still be a point there. "Rational" isn't really a category so much as a degree. Formally, it's a function on actions that somehow measures how closely an action corresponds to the perfect decision-theoretic action. My impression is that somewhere there's a Gödelian consideration lurki...
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases.
Ah! I didn't quite pick up on that. I'll note that infinite regress problems aren't necessarily defeaters of an approach. Good minds that could fall into that trap implement a "Screw it, I'm going to bed" trigger to keep from wasting cycles even when using an otherwise helpful heuristic.
...Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the boxes could possi
But the point about meta probability is that we do not have the nodes. Each meta level corresponds to one nesting of networks in nodes.
Think of Bayesian graphs as implicitly complete, with the set of nodes being every thing to which you have a referent. If you can even say "this proposition" meaningfully, a perfect Bayesian implemented as a brute-force Bayesian network could assign it a node connected to all other nodes, just with trivial conditional probabilities that give the same results as an unconnected node.
A big part of this discussion...
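To make the "trivial conditional probabilities" point concrete, here is a minimal sketch with made-up numbers (my own illustration, not anything from the original discussion): a node wired to another node, but given a conditional table that ignores its parent, behaves exactly like an unconnected node.

```python
# Two binary variables: A is an existing node, B is the newly referenced proposition.
p_a = {True: 0.3, False: 0.7}        # marginal for A
p_b = {True: 0.1, False: 0.9}        # intended marginal for B

# "Trivial" CPT: P(B | A = a) is the same distribution for every value of A.
p_b_given_a = {a: dict(p_b) for a in p_a}

# Joint distribution built from the fully connected structure A -> B.
joint = {(a, b): p_a[a] * p_b_given_a[a][b] for a in p_a for b in p_b}

# Marginalizing B back out recovers P(A) exactly, so the extra edge changed nothing.
for a in p_a:
    assert abs(sum(joint[(a, b)] for b in p_b) - p_a[a]) < 1e-12
```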
It is helpful, and was one of the ways that helped me to understand One-boxing on a gut level.
And yet, when the problem space seems harder, when "optimal" becomes uncomputable and wrapped up in the fact that I can't fully introspect, playing certain games doesn't feel like designing a mind. Although, this is probably just due to the fact that games have time limits, while mind-design is unconstrained. If I had an eternity to play any given game, I would spend a lot of time introspecting, changing my mind into the sort that could play iterations...
"How often do listing sorts of problems with some reasonable considerations result in an answer of 'None of the above' for me?"
If "reasonable considerations" are not available, then we can still:
"How often did listing sorts of problems with no other information available result in an answer of 'None of the above' for me?"
Even if we suppose that maybe this problem bears no resemblance to any previously encountered problem, we can still ask (because the fact that it bears no resemblance is itself a signifier):
"How often did problems I'd encountered for the first time have an answer I never thought of?"
My LessWrongian answer is that I would ask my mind that was created already in motion what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI long enough in this world, and it too will have priors about black boxes, except that reporting that probability in the form of a number is inherent to its source code rather than strange and otherworldly like it is for us.
The point that was made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the o...
The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.
My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the differ...
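A minimal sketch of why I say that (made-up credences, my own toy example, not anything from the original post): a distribution over possible coin biases collapses to a single number the moment you have to bet on the next flip; it only earns its keep across updates.

```python
biases    = [0.0, 0.5, 1.0]    # hypotheses about the coin's bias
credences = [0.3, 0.4, 0.3]    # my credence in each hypothesis

# For a single bet, all that matters is the expectation over hypotheses.
p_heads = sum(c * b for c, b in zip(credences, biases))
print(p_heads)        # 0.5

# After actually observing one heads, the hypothesis weights shift, and so does
# the next bet; that is where the extra structure does its work.
posterior    = [c * b / p_heads for c, b in zip(credences, biases)]
p_heads_next = sum(c * b for c, b in zip(posterior, biases))
print(p_heads_next)   # 0.8
```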
Right down the middle: 25-75
Hmm, come to think of it, deciding the size of the cash prize (for it being interesting) is probably worth more to me as well. I'll just have to settle for boring old cash.
I defected, because I'm indifferent to whether the prize-giver or prize-winner has 60 * X dollars, unless the prize-winner is me.
Am I walking the wrong path?
Eh, probably not. Heuristically, I shy away from modes of thought that involve intentional self-deception, but that's because I haven't been mindful of myself long enough to know ways I can do this systematically without breaking down. I would also caution against letting small-scale pride translate into larger domains where there is less available evidence for how good you really are. "I am successful" has a much higher chance of becoming a cached self than "I am good at math." The latter is testable ...
For certain definitions of pride. Confidence is a focus on doing what you are good at, enjoying doing things that you are good at, and not avoiding doing things you are good at around others.
Pride is showing how good you are at things "just because you are able to," as if to prove to yourself what you supposedly already know, namely that you are good at them. If you were confident, you would spend your time being good at things, not demonstrating that you are so.
There might be good reasons to manipulate others. Just proving to yourself that yo...
Because your prior for "I am manipulating this person because it satisfies my values, rather than my pride" should be very low.
If it isn't, then here are four words for you:
"Don't value your pride."
Whenever I have a philosophical conversation with an artist, invariably we end up talking about reductionism, with the artist insisting that if they give up on some irreducible notion, they feel their art will suffer. I've heard, from some of the world's best artists, notions ranging from "magic" to "perfection" to "muse" to "God."
It seems similar to the notion of free will, where the human algorithm must always insist it is capable of thinking about itself one level higher. The artist must always think of his art o...
The closest you can come to getting an actual "A for effort" is through creating cultural content, such as a Kickstarter project or starting a band. You'll get extra success when people see that you're interested in what you're doing, over and above its role as an indicator that what you'll produce is otherwise of quality. People want to be part of something that is being cared for, and in some cases would prefer it to lazily created perfection.
I'd still call it an "A for signalling effort," though.
Tough crowd.
A bunch of 5th grade kids taught you how to convert decimals to fractions?
EDIT: All right then, if you downvoters are so smart, what would you bet if you were in Sleeping Beauty's place?
This is a fair point. Yours is an attempt at a real answer to the problem. Mine and most answers here seem to say something like: the problem is ill-defined, or the physical situation described by the problem is impossible. But if you were actually Sleeping Beauty waking up with a high prior to trust the information you've been given, what else could you possibly answer?
If you had little reason to trust the information you've been given, the apparent impossibility of your situation would update that belief very strongly.
The expected value for "number of days lived by Sleeping Beauty" is an infinite series that diverges to infinity. If you think this is okay, then the Ultimate Sleeping Beauty problem isn't badly formed. Otherwise...
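Purely to illustrate the shape of that divergence (the specific numbers here are my own assumption, not necessarily the problem's actual setup): if the nth branch occurs with probability 2^-n but produces 3^n awakening-days, then

```latex
E[\text{days}] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 3^{n}
             \;=\; \sum_{n=1}^{\infty} \left(\tfrac{3}{2}\right)^{n}
             \;=\; \infty .
```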
If you answered 1/3 to the original Sleeping Beauty Problem, I do not think that there is any sensible answer to this one. I do not however consider this strong evidence that the answer of 1/3 is incorrect for the original problem.
To expand on this: 1/3 is also the answer to the "which odds should I precommit myself to take" question, and the same math as SIA yields that result for the original problem. So it is likewise undefined which odds one should take in this problem. Precommitting to odds seems less controversial, so we should transplant our indifference to the apparent paradox from there to the problem here.
On your account, when we say X is a pedophile, what do we mean?
Like other identities, it's a mish-mash of self-reporting, introspection (and extrospection of internal logic), value function extrapolation (from actions), and ability in a context to carry out the associated action. The value of this thought experiment is to suggest that the pedophile clearly thought that "being" a pedophile had something to do not with actually fulfilling his wants, but with wanting something in particular. He wants to want something, whether or not he gets it...
That's a 'circular' link to your own comment.
It was totally really hard, I had to use a quine.
It might decide to do that - if it meets another powerful agent, and it is part of the deal they strike.
Is it not part of the agent's (terminal) value function to cooperate with agents when doing so provides benefits? Does the expected value of these benefits materialize from nowhere, or do they exist within some value function?
My claim entails that the agent's preference ordering of world states consists mostly in instrumental values. If an agent's value...
So, OK, X is a pedophile. Which is to say, X terminally values having sex with children.
I'm not sure that's a good place to start here. The value of sex is at least more terminal than the value of sex according to your orientation, and the value of pleasure is at least more terminal than the value of sex.
The question is indeed one about identity. It's clear that our transhumans, as traditionally conceived, don't really exclusively value things as basic as euphoria, if indeed our notion is anything but a set of agents who all self-modify to identical copies of the h...
Example of somebody making that claim.
It seems to me a rational agent should never change its self-consistent terminal values. To act out that change would be to act according to some other value and not the terminal values in question. You'd have to say that the rational agent floats around between different sets of values, which is something that humans do, obviously, but not ideal rational agents. The claim then is that ideal rational agents have perfectly consistent values.
"But what if something happens to the agent which causes it too see that...
I'm not sure that both these statements can be true at the same time.
If you take the second statement to mean, "There exists an algorithm for Omega satisfying the probabilities for correctness in all cases, and which sometimes outputs the same number as NL, which does not take NL's number as an input, for any algorithm Player taking NL's and Omega's numbers as input," then this ...seems... true.
I haven't yet seen a comment that proves it, however. In your example, let's assume that we have some algorithm for NL with some specified probabili...
Instead of friendliness, could we not code, solve, or at the very least seed boxedness?
It is clear that any AI strong enough to solve friendliness would already be using that strength in unpredictably dangerous ways, just to marshal the computational power needed to solve it. But is it clear that this amount of computational power could not fit within, say, a one-kilometer-cube box outside the campus of MIT?
Boxedness is obviously a hard problem, but it seems to me at least as easy as metaethical friendliness. The ability to modify a wide range of complex envir...
Is LSD like a thing?
Most of my views on drugs and substances were formed, unfortunately, by history and by invalid perceptions of their users and of those who most visibly appear to support their legality. I was surprised to find the truth about acid at least a little further to the side of "safe and useful" than my longtime estimate. This opens up the possibility of an attempt at recreational and introspectively therapeutic use, if only as an experiment.
My greatest concern would be that I would find the results of a trip irreducibly spiritual, o...
One data point here. I've taken a few low-to-lowish dose trips. I'm still the same skeptic/pragmatist I was.
When I'd see the walls billowing and more detail generating out of visual details, I didn't think "The universe is alive!" I thought "my visual system is alive".
I did have an experience which-- to the extent I could put it into words-- was that my sense of reality was something being generated. However, it didn't go very deep-- it didn't have aftereffects that I can see. I'm not convinced it was false, and it might be worth exploring to see what's going on with my sense of reality.
On Criticism of Me
I don't mean to be antagonistic here, and I apologize for my tone. I'd prefer my impressions to be taken as yet-another-data-point rather than a strongly stated opinion on what your writings should be.
I'm interested in what in my writing is coming across as indicating I expect a stubborn audience.
The highest rated comment to your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses are why you don't find them to be good objec...
A criticism I have of your posts is that you seem to view your typical audience member as somebody who stubbornly disagrees with your viewpoint, rather than as an undecided voter. More critically, you seem to view yourself as somebody capable of changing the former's opinion through (very well-written) restatements of the relevant arguments. But people like me want to know why previous discussions haven't resolved the issue, even between key players. Because they should be resolvable, and posts like this suggest to me that at least som...
Congrats! What is her opinion on the Self Indication Assumption?
Attackers could cause the unit to unexpectedly open/close the lid, activate bidet or air-dry functions, causing discomfort or distress to user.
Heaven help us. Somebody get X-risk on this immediately.
Can somebody explain a particular aspect of Quantum Mechanics to me?
In my readings of the Many Worlds Interpretation, which Eliezer fondly endorses in the QM sequence, I must have missed an important piece of information about when it is that amplitude distributions become separable in timed configuration space. That is, when do wave-functions stop interacting enough for the near-term simulation of two blobs (two "particles") to treat them independently?
One cause is spatial distance. But in Many Worlds, I don't know where I'm to understand thes...
I suspect that those would be longer than should be posted deep in a tangential comment thread.
Yeah, probably. To be honest, I'm still rather new to the rodeo here, so I'm not amazing at formalizing and communicating intuitions, which might just be a roundabout way of saying you shouldn't listen to me :)
I'm sure it's been hammered to death elsewhere, but my best prediction for what side I would fall on, if I had all the arguments laid out, would be the hard-line CS-theoretical approach, as is often the case for me. It's probably not obvious why there would be problems with ...
They do not, because if I value grandma at N and a chicken at M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no amount of chickens equal to the worth of your grandmother makes you believe you need to give up one of 3 plausible-seeming axioms, and you're not willing to think there i...
So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn't by itself disutility. Disutility is X dead grandmas, where X = N / googolplex.
If we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by prefere...
Automation could reduce the cost of hiring.
Take Uber, Sidecar, and Lyft as examples. I can't find any data, but anecdotally these services appear to reduce costs for patrons and increase wages for drivers by somewhere between 20 and 50%, with increased convenience for both. You know it's working when entrenched, competing sectors of the industry are protesting and lobbying.
Eliezer's suggestion about forgotten industries (maids and butlers) seems much more on point if automatic markets can remove hiring friction. Ride sharing has a rapidly...
...
We need to talk more.
"What is the part of me that is preventing me from moving forward worried about?"
Be careful not to be antagonistic about the answer. The goal is to make that part of you less worried, thus making you more productive overall, not just on your blocked task. The roadblock is telling you something that you haven't yet explicitly acknowledged, so acknowledge it, thank it, incorporate it into your thinking, and resolve it.
Example: "I'm not smart enough to solve this math problem." Worry: "I would need to learn a textbook's worth of mat...
Does anybody have any data or reasoning that tracks the history of the relative magnitude of ideal value of unskilled labor versus ideal minimum cost of living? Presumably this ratio has been tracking favorably, even if in current practical economies the median available minimum wage job is in a city with a dangerously tight actual cost of living.
What I'd like to understand is, outside of minimum wage enforcement and solvable inefficiencies that affect the cost of basic goods, how much more economic output an unskilled worker has over the cost of what ...
Conversely, it is also good to limit reading about what other people are grateful for, especially if you're feeling particularly ungrateful and they have things you don't. Facebook is a huge offender here, because people tend to post about themselves when they're doing well, rather than when they're needing support. Seeing other people as happier than they really are leaves you wondering why you aren't as happy as they seem. It also feeds the illusion that others do not need your help.
Facebook is a huge offender here, because people tend to post about themselves when they're doing well, rather than when they're needing support.
My suspicion is that people are more likely to be specific in positive than negative comments. "Vaguebooking," even if you know it represents serious pain, doesn't give you as vivid an image as someone celebrating a new job.
Doesn't the act of combining many outside views and their reference classes turn you into somebody operating on the inside view? That is to say, what is the difference between this and the type of "inside" reasoning about a phenomenon's causal structure?
Is it that inside thinking involves the construction of new models, whereas outside thinking involves the comparison and combination of existing models? From a machine intelligence perspective, the distinction is meaningless. The construction of new models is the extension of old models, albeit mo...
Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.