Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.
It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie.
When I myself run across apparent p-zombies, they usually look at my arguments as if I am being dense over my descriptions of consciousness. And I can see why, because without the experience of consciousness itself, these arguments must sound like they make consciousness out to be an extraneous hypothesis to help explain my behavior. Yet, even after reflecting on this objection, it still seems there is something to explain besid...
Perhaps ambiguity aversion is merely a good heuristic.
Well of course. Finite ideal rational agents don't exist. If you were designing a decision-theoretically optimal AI, that optimality would be a property of its environment, not of some ideal abstract computing space. I can think of at least one reason why ambiguity aversion could be the optimal algorithm in environments with limited computing resources:
Consider a self-modification algorithm that adapts to new problem domains. Restructuring (learning) is considered the hardest of tasks, and so the AI modifies scarcel...
Shouldn't this post be marked [Human] so that uploads and AIs don't need to spend cycles reading it?
...I'd like to think that this joke bears the more subtle point that a possible explanation for the preparedness gap in your rationalist friends is that they're trying to think like ideal rational agents, who wouldn't need to take such human considerations into account.
I have a friend with Crohn's Disease, who often struggles with the motivation to even figure out how to improve his diet in order to prevent relapse. I suggested he find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplements.
As usual, I'm pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn't seem the sort of thing that he and his doctor would have even discussed. ...
There has been mathematically proven software, and the space shuttle's software came close, though it was not formally proven as such.
Well... If you know what you wish to prove, then it's possible that there exists a logical string that begins with a computer program and ends with your theorem as a necessity. But that's not really exciting. If you could code in the language of proof theory, you'd already have the program. The mathematical proof of a real program is just a translation of that proof into machine code, and then a demonstration that the translation goes both ways.
You can potentially prove a space ...
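The idea of a program paired with a proof of its own specification can be made concrete in a proof assistant. A toy sketch in Lean 4 (my illustration, not anything from the comment above; `double` and its theorem are made up for the example):

```lean
-- A trivial "program": a function on natural numbers.
def double (n : Nat) : Nat := n + n

-- The "logical string" that ends with the program's behavior as a
-- necessity: a machine-checked theorem about `double`.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  simp [double, Nat.two_mul]
```

The point stands, though: writing `double` and writing `double_eq_two_mul` are close to the same activity, which is why a full proof of a real program tends to be as much work as the program itself.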
Depends on whether you count future income. The highest-paying careers are often so because only those willing to put in extra effort at their previous jobs get promoted. This is at least true in my field, software engineering.
The film's trailer strikes me as being aware of the transhumanist community in a surprising way, as it includes two themes that are otherwise not connected in the public consciousness: uploads and superintelligence. I wouldn't be surprised if a screenwriter found inspiration from the characters of Sandberg, Bostrom, or of course Kurzweil. Members of the Less Wrong community itself have long struck me as ripe for fictionalization... Imagine if a Hollywood writer actually visited.
They can help with depression.
I've personally tried this and can report that it's true, but I'll add the caveat that the expectation of forcing myself into a morning cold shower often causes oversleeping, which rather exacerbates depression.
Often in Knightian problems you are just screwed and there's nothing rational you can do.
As you know, this attitude isn't particularly common 'round these parts, and while I fall mostly in the "Decision theory can account for everything" camp, there may still be a point there. "Rational" isn't really a category so much as a degree. Formally, it's a function on actions that somehow measures how closely an action corresponds to the perfect decision-theoretic action. My impression is that somewhere there's a Gödelian consideration lurki...
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases.
Ah! I didn't quite pick up on that. I'll note that infinite regress problems aren't necessarily defeaters of an approach. Good minds that could fall into that trap implement a "Screw it, I'm going to bed" trigger to keep from wasting cycles even when using an otherwise helpful heuristic.
...Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the boxes could possi
But the point about metaprobability is that we do not have the nodes. Each meta level corresponds to one nesting of networks within nodes.
Think of Bayesian graphs as implicitly complete, with the set of nodes being everything to which you have a referent. If you can even say "this proposition" meaningfully, a perfect Bayesian implemented as a brute-force Bayesian network could assign it a node connected to all other nodes, just with trivial conditional probabilities that give the same results as an unconnected node.
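To make the "trivial conditional probabilities" point concrete, here's a minimal sketch (my own illustration, with made-up numbers): a child node whose conditional probability table ignores its parent marginalizes to exactly the same distribution as an unconnected node, so the edge is harmless.

```python
def marginal(p_parent, cpt):
    """P(child) = P(child|parent)P(parent) + P(child|~parent)P(~parent)."""
    return cpt[True] * p_parent + cpt[False] * (1 - p_parent)

p_b = 0.7                                # P(B), the parent node
trivial_cpt = {True: 0.2, False: 0.2}    # P(A|B) == P(A|~B): edge carries no information
informative_cpt = {True: 0.9, False: 0.1}

print(marginal(p_b, trivial_cpt))        # ~0.2, identical to an unconnected node with P(A)=0.2
print(marginal(p_b, informative_cpt))    # here the edge actually changes the answer
```

So "connected with a trivial CPT" and "unconnected" are observationally the same network, which is what lets you treat the graph as implicitly complete.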
A big part of this discussion...
It is helpful, and was one of the things that helped me understand one-boxing on a gut level.
And yet, when the problem space seems harder, when "optimal" becomes uncomputable and wrapped up in the fact that I can't fully introspect, playing certain games doesn't feel like designing a mind. Though this is probably just because games have time limits, while mind-design is unconstrained. If I had an eternity to play any given game, I would spend a lot of time introspecting, changing my mind into the sort that could play iterations...
"How often do listing sorts of problems with some reasonable considerations result in an answer of 'None of the above' for me?"
If "reasonable considerations" are not available, then we can still ask:
"How often did listing sorts of problems with no other information available result in an answer of 'None of the above' for me?"
Even if we suppose that this problem bears no resemblance to any previously encountered problem, we can still ask (because the fact that it bears no resemblance is itself a signifier):
"How often did problems I'd encountered for the first time have an answer I never thought of?"
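That last fallback question is just a base-rate computation over past experience. A toy sketch (the records and field names are entirely made up for illustration):

```python
# Each record: was this my first encounter with the problem type,
# and did its actual answer appear anywhere on my list of candidates?
past_problems = [
    {"first_encounter": True,  "answer_was_listed": False},
    {"first_encounter": True,  "answer_was_listed": True},
    {"first_encounter": False, "answer_was_listed": True},
    {"first_encounter": True,  "answer_was_listed": False},
]

# Restrict to first-encounter problems, then take the frequency of
# "the answer was something I never thought of".
relevant = [p for p in past_problems if p["first_encounter"]]
p_none_of_the_above = sum(not p["answer_was_listed"] for p in relevant) / len(relevant)
print(p_none_of_the_above)
```

Crude, but it turns "I have no information about this box" into a number, which is the whole move being suggested.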
My LessWrongian answer is that I would ask my mind that was created already in motion what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI long enough in this world, and it too will have priors about black boxes, except that reporting that probability in the form of a number is inherent to its source code rather than strange and otherworldly like it is for us.
The point that was made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the o...
The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.
My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the differ...
Right down the middle: 25-75
Hmm, come to think of it, deciding the size of the cash prize (for it being interesting) is probably worth more to me as well. I'll just have to settle for boring old cash.
I defected, because I'm indifferent to whether the prize-giver or prize-winner has 60 * X dollars, unless the prize-winner is me.
The Repository Repository.