Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to do so is proof that they are already in love. An added social construct is a perfectly reasonable way to make it harder to change your mind.

It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie.

When I myself run across apparent p-zombies, they usually react to my arguments as if I am being dense in my descriptions of consciousness. And I can see why: without the experience of consciousness itself, these arguments must sound like they make consciousness out to be an extraneous hypothesis invoked to help explain my behavior. Yet even after reflecting on this objection, it still seems there is something to explain besides my behavior, which wouldn't bother me if I were only trying to explain my behavior, including the words in this post.

It makes sense to me that from outside a brain, everything in the brain is causal, and the brain's statements about truths are dependent on outside formalizations, and that everything observable about a brain is reducible to symbolic events. And so an observation of a zombie-Chalmers introspecting his consciousness would yield no shocking insights on the origins of his English arguments. And I know that when I reflect on this argument, an observer of my own brain would also find no surprising neural behaviors.

But I don't know how to reconcile this with my overriding intuition/need/thought that when I talk about it, I seek to explain not my behavior but the sense experience itself. Even fully aware of outside-view functionalism, I find the sensation of red still feels like an item in need of explanation, regardless of which words I use to describe it. I also feel no particular need to treat this as a confusion, because the sense experience seems to demand its own category, separate from anything you would explain functionally from the outside. All this I say even while I'm aware that, to humans without this feeling, these claims sound nothing like insanity, and they will gladly inspect my brain for a (correct) functional explanation of my words.

The whole ordeal still greatly confuses me, to an extent that surprises me given how many other questions have been dissolved on reflection, such as, well, intelligence.

Perhaps ambiguity aversion is merely a good heuristic.

Well, of course. Finite ideal rational agents don't exist. If you were designing a decision-theoretically optimal AI, that optimality would be a property of its environment, not of any ideal abstract computing space. I can think of at least one reason why ambiguity aversion could be the optimal algorithm in environments with limited computing resources:

Consider a self-modification algorithm that adapts to new problem domains. Restructuring (learning) is the costliest of tasks, and so the AI modifies itself sparingly. Thus, as it encounters new decision-theoretic problems, it often does not choose self-modification, instead kludging together old circuitry and/or old answers to conserve compute cycles. And so when choosing answers to your three problems, it would be wary of solutions that simply maximize expected value, because the answer will be repeated many times in its environment, which includes its own source code.

Ambiguity aversion would then be commitment-risk aversion, where future compounded failures change the exchange rate between dollars and utility. Upon each iteration of the problem, the value of a dollar can change, and if you don't maximize minimum expected value, you may end up betting your last $100, which is worth nearly infinite value to you, to gain another $100, which is worth far less, even if you started with $1000.
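To make that concrete, here's a rough sketch in Python. The log-utility choice, the bet sizes, and the probability set are all my own illustration, not anything from the above: an agent maximizing expected dollars at a point estimate happily bets the whole bankroll, while an agent maximizing minimum expected utility over the plausible probabilities sits out, because the downside of the repeated commitment dominates.

```python
import math

def log_utility(wealth):
    # Diminishing marginal utility: losing your last $100 hurts far more
    # than winning another $100 helps.
    return math.log(wealth) if wealth > 0 else float("-inf")

bankroll = 1000
bets = [0, 100, 500, 1000]       # candidate answers to the repeated problem
plausible_p = [0.4, 0.5, 0.6]    # the win probability is ambiguous

def expected_dollars(p, bet):
    return p * (bankroll + bet) + (1 - p) * (bankroll - bet)

def expected_utility(p, bet):
    return p * log_utility(bankroll + bet) + (1 - p) * log_utility(bankroll - bet)

# A point-estimate expected-value maximizer (guessing p = 0.6) bets everything.
ev_choice = max(bets, key=lambda b: expected_dollars(0.6, b))

# The ambiguity-averse agent maximizes worst-case expected utility and sits out.
maximin_choice = max(bets, key=lambda b: min(expected_utility(p, b) for p in plausible_p))

print("expected-dollars bet:", ev_choice)      # 1000
print("maximin-utility bet:", maximin_choice)  # 0
```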

We see this in ourselves all the time. If you make a decision, expect to be more likely to make the same decision in the future; and if you change your lifestyle, expect it to be hard to change back, even if you later come to see that changing back would just be the deletion of a bias.

And if so, do we need a different framework that can capture a broader class of "rational" agents, including maximizers of minimum expected utility?

Rational agents have source code whose optimality is a function of their environments. There is no finite cross-domain Bayesian in compute-space; only in the design-space that includes environments.

Shouldn't this post be marked [Human] so that uploads and AIs don't need to spend cycles reading it?

...I'd like to think that this joke bears the more subtle point that a possible explanation for the preparedness gap in your rationalist friends is that they're trying to think like ideal rational agents, who wouldn't need to take such human considerations into account.

I have a friend with Crohn's Disease, who often struggles with the motivation even to figure out how to improve his diet in order to prevent relapse. I suggested he find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplements.

As usual, I'm pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn't seem like the sort of thing that he and his doctor would have even discussed. My impression of his doctor consultations is that they amount to something along the lines of "You should track your diet according to these guidelines, and try to see what causes relapse," rather than "Here's a cure-all solution, not entirely endorsed by the FDA, that will solve all of your motivational and health problems in one fell swoop." For my friend, drilling into sweeping diet changes and tracking seems like an insurmountable challenge, especially with the depression caused by simply having the disease.

I'd like to be able to purchase something for him that would let him go about his life without having to worry about it so much. Any ideas on whether Soylent could be the solution, in particular as to its potential for Crohn's?

There has been mathematically proven software, and the space shuttle came close, though that was not proven as such.

Well... If you know what you wish to prove, then it's possible that there exists a logical chain that begins with a computer program and ends with what you wish to prove as a necessity. But that's not really exciting. If you could code in the language of proof theory, you would already have the program. The mathematical proof of a real program is just a translation of the proof into machine code, and then a demonstration that the translation goes both ways.
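As a toy illustration of the distinction (my own made-up example, nothing to do with actual shuttle code, and assuming a recent Lean 4 where the `omega` tactic is built in):

```lean
-- A tiny "program" plus a machine-checked proof that it meets a tiny spec.
def addOne (n : Nat) : Nat := n + 1

-- The spec: the program's output always exceeds its input. The proof relates
-- the source text to this spec and nothing more; it cannot tell you whether
-- the spec captures what you actually wanted, or anything about hardware.
theorem addOne_gt (n : Nat) : addOne n > n := by
  unfold addOne
  omega
```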

You can potentially prove a space shuttle program will never crash, but you can't prove the space shuttle won't crash. Source code is just source code, and bugs aren't always recognizable as bugs without human reflection and real-world testing. The translation from intent to code is what was broken in the first place; you actually have to keep applying more intent in order to fix it.

The problem with AGI is that the smartest people in the world write reams trying to say what we even wish to prove, and we're still sort of unsure. Most utopias are dystopias, and it's hard to prove a eutopia, because eutopias are scary.

Depends on whether you count future income. The highest-paying careers are often so because only those willing to put in extra effort at their previous jobs get promoted. This is at least true in my field, software engineering.

The film's trailer strikes me as being aware of the transhumanist community in a surprising way, as it includes two themes that are otherwise not connected in the public consciousness: uploads and superintelligence. I wouldn't be surprised if a screenwriter found inspiration from the characters of Sandberg, Bostrom, or of course Kurzweil. Members of the Less Wrong community itself have long struck me as ripe for fictionalization... Imagine if a Hollywood writer actually visited.

They can help with depression.

I've personally tried this and can report that it's true, but I'll caveat that the expectation that I will force myself into a morning cold shower often causes oversleeping, which rather exacerbates depression.

Often in Knightian problems you are just screwed and there's nothing rational you can do.

As you know, this attitude isn't particularly common 'round these parts, and while I fall mostly in the "Decision theory can account for everything" camp, there may still be a point there. "Rational" isn't really a category so much as a degree. Formally, it's a function on actions that somehow measures how much an action corresponds to the perfect decision-theoretic action. My impression is that somewhere there's a Gödelian consideration lurking, which is where the "Omega fines you exorbitantly for using TDT" thought experiment comes into play.

That thought experiment never bothered me much, as it just is what it is: a problem where you are just screwed, and there's nothing rational you can do to improve your situation. You've already rightly programmed yourself to use TDT, and even your decision to stop using TDT would be made using TDT, and unless Omega is making exceptions for that particular choice (in which case you should self-modify to non-TDT), Omega is just a jerk that goes around fining rational people.

In such situations, the words "rational" and "irrational" are less useful descriptors than simply observing which source code is being executed. If you're formal about it and define a metric R, then you would be more R, but R's correlation to "rational" wouldn't really be the point.
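If I had to write R down, it might look something like this sketch (my own invention, purely to make "a function on actions" concrete; in the Omega-fines-TDT world an agent can score a perfect R and still walk away poorer):

```python
def rationality_R(expected_utility, action, available_actions):
    """Fraction of the way from the worst to the best achievable expected utility."""
    values = [expected_utility(a) for a in available_actions]
    best, worst = max(values), min(values)
    if best == worst:
        return 1.0  # every available action is equally good
    return (expected_utility(action) - worst) / (best - worst)
```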

But in this case, again, I think there's a straightforward, simple, sensible approach (which so far no one has suggested...)

So, I don't think the black box is really one of the situations I've described. It seems to me a decision theorist training herself to be more generally rational is in fact improving her odds at winning the black box game. All the approaches outlined so far do seem to also improve her odds. I don't think a better solution exists, and she will often lose if she lacks time to reflect. But the more rational she is, the more often she will win.
