Comment author: orthonormal 17 March 2012 04:03:42PM 0 points [-]

In essence, you're saying that evolutionary psychology fails evolutionary theory. If this were the case, I really would have expected prominent evolutionary biologists to have noted it; that is the sort of evidence that would make me reconsider.

Comment author: Dmytry 17 March 2012 04:41:07PM *  2 points [-]

http://en.wikipedia.org/wiki/Criticism_of_evolutionary_psychology#Testability

In another thread you linked to some book (I forget the name); there was a critique from another prominent figure saying that's not how the mind works. Clearly there is criticism.

Do you have an explanation for why you expect considerably more complex changes to the wiring of the brain than to gross morphology, i.e. the shape of the organism? I honestly just don't see why it would be more common for evolution to hardwire a reflex than to make a new organ. edit: Okay, for organs there is the argument that the existing organs are pretty damn good. Still, there's plenty of opportunity for improving e.g. human locomotion, and there is selection pressure too, and one can clearly see how slowly the bones changed shape.

edit: ahh, now I have a great question: why would it be so much more common to evolve complex psychology that is hard to separate from culture than to evolve verifiable, simple, straightforward hardwired reflexes? There's not a single well-defined, agreed-upon reflex I can think of that humans have and chimps lack. Yet there's a lot of evo-psych stuff that is allegedly unique to us humans, evolved during our hunter-gatherer times.

I think this really should nail down the plausibility of evo-psych. It posits a big number of very complex psychological adaptations over a period in which no straightforward, agreed-upon hardwired reflexes evolved. Not just gross morphology: anything well identifiable at which we can look and say, okay, chimps don't have this innate reflex, and agree.

Comment author: [deleted] 17 March 2012 03:30:09PM *  0 points [-]

Well, I can implement Omega by scanning your brain and simulating you.

Provided my brain's choice isn't affected by quantum noise; otherwise I don't think you can. :-)

In response to comment by [deleted] on Decision Theories: A Less Wrong Primer
Comment author: Dmytry 17 March 2012 03:37:48PM 0 points [-]

Good point. Still, the brain's choice can be quite deterministic if you give it enough thought, averaging out the noise.

Comment author: Multiheaded 17 March 2012 02:33:54PM 0 points [-]

Just post it to Discussion and immediately use "Delete". It'll still be readable and linkable, but not seen in the index.

Comment author: Dmytry 17 March 2012 03:04:06PM *  0 points [-]

Hmm, can you see it now? (I of course kept a copy of the text on my computer, in case you were joking, so I do have the draft reposted as well.)

Comment author: DanArmak 17 March 2012 02:23:21PM 0 points [-]

I'm not contradicting your data. I just wanted to note that it's not a priori obvious that the iterated PD (which is a very simple problem specification) is a good approximation to real-life competition between companies, or at least that companies' problems factorize in a way that gives the PD as a factor.

We're used to human-human relationship management (which the OP says is well modeled by the PD), and so human CEOs apply that in their relationships with other human CEOs.

What do you think? Is the explanation as simple as you imply: that company success depends so strongly on human-human relationships with the people representing/heading other companies that good management there (Tit for Tat) can swamp other management considerations that have no exact human-relations analog (creating a good product, differentiation, pricing, investments, supply chain issues, etc.)?

Comment author: Dmytry 17 March 2012 02:36:05PM *  0 points [-]

Well, normally if you are dealing with a company as a partner, it's a company for which partnerships are a huge factor, e.g. in my case re-distributors.

The iterated prisoner's dilemma is an oversimplified problem. In the real world, non-cooperative players usually become known for their non-cooperative behaviour through channels other than getting screwed over yourself.

Furthermore, there are disparity-of-force cases where everyone defects against the other party just because the other side cannot retaliate. E.g. you could have paid me $0.50 per day to do the work I normally do, if I had no computer of my own or ability to go work for another employer; I would have to do the work while barely surviving, and you could then pocket well over 99% of the income. That's how it works with outsourcing to the third world.

Comment author: scmbradley 17 March 2012 12:41:11PM 0 points [-]

See my and orthonormal's comments on the PD on this post for my view of that.

The point I'm struggling to express is that I don't think we should worry about the thought experiment, because I have the feeling that Omega is somehow impossible. The suggestion is that Newcomb's problem makes a problem with CDT clearer, but I argue that Newcomb's problem creates the problem: the flaw is not with the decision theory but with the concept of such a predictor. So you can't use CDT's "failure" in this circumstance as evidence that CDT is wrong.

Here's a related point: Omega will never put the money in the box. Smith acts like a one-boxer. Omega predicts that Smith will one-box, so the million is put in the opaque box. Now Omega reasons as follows: "Wait, though. Even if Smith is a one-boxer, now that I've fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant once I can't causally affect the contents of the boxes." So Omega doesn't put the money in the box.

Would one-boxing ever be advantageous if Omega were reasoning like that? No. The point is Omega will always reason that two-boxing dominates once the contents are fixed. There seems to be something unstable about Omega's reasoning. I think this is related to why I feel Omega is impossible. (Though I'm not sure how the points interact exactly.)
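
A toy Python sketch of that deliberation loop, assuming Smith reasons by causal dominance once the contents are fixed; all names and the loop structure are made up for illustration, not anything from the original problem statement:

```python
def smith_chooses(box_filled: bool) -> str:
    # Once the contents are causally fixed, two-boxing dominates for a
    # causal reasoner: it yields $1,000 more whatever the opaque box holds.
    return "two-box"

def omega_deliberates(rounds: int = 5) -> bool:
    fill = True  # initial prediction from Smith's one-boxer disposition
    for _ in range(rounds):
        predicted = smith_chooses(box_filled=fill)
        fill = (predicted == "one-box")  # fill only if Smith is predicted to one-box
    return fill

print(omega_deliberates())  # False: Omega never puts the money in
```

The loop flips to "don't fill" on the first pass and stays there, which is the instability described above: dominance reasoning, applied after the contents are fixed, always pushes Omega toward leaving the box empty.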

Comment author: Dmytry 17 March 2012 01:39:23PM *  0 points [-]

Well, I can implement Omega by scanning your brain and simulating you. The other 'non-implementations' of Omega, though, are IMO best ignored entirely. You can't really blame a decision theory for failure if there's no sensible model of the world for it to use.

My decision theory, personally, allows me to ignore the unknown and to edit my expected utility formula in an ad-hoc way if I'm sufficiently convinced that Omega will work as described. I think that's practically useful, because effective heuristics often have to be invented on the spot without a sufficient model of the world.

edit: albeit, if I were convinced that Omega works as described, I'd be convinced that it has scanned my brain and is emulating my decision procedure, or is using time travel, or is deciding randomly and then destroying the universes where it was wrong... With more time I could probably come up with other implementations; the common thing about all of them, though, is that I should one-box.
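
A minimal sketch of the 'scan and simulate' implementation, with standard Newcomb payoffs assumed; `agent` stands in for the scanned decision procedure, and the names are illustrative:

```python
def omega_fills_box(agent) -> bool:
    # Omega 'scans and simulates': it runs the copied decision
    # procedure and fills the opaque box only if the copy one-boxes.
    return agent() == "one-box"

def payoff(agent) -> int:
    opaque = 1_000_000 if omega_fills_box(agent) else 0
    choice = agent()  # the real agent now chooses, contents already fixed
    return opaque if choice == "one-box" else opaque + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Against this kind of Omega, the one-boxer walks away with the million, since the simulation makes the same choice the real agent does.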

Comment author: Vladimir_Nesov 17 March 2012 12:35:37PM 0 points [-]

Who are you to say that I am making a mistake that a lot of people without experience programming computers make?

He's probably a person with programming experience...

Comment author: Dmytry 17 March 2012 12:38:22PM *  1 point [-]

And if I apply the same prior to him as he applies to me: probably a person with little programming experience. The everyday tit-for-tat reflex is one that might have evolved, because it's conceivable it was good through much of our tree-dwelling existence as well, albeit I'm a bit dubious as to how the DNA would code for something like tit for tat in a mammalian brain; it would code for something that sort of works like tit for tat, with a lot of side effects, such as getting irritated and responding in the equivalent style (and learning what to be irritated at).

Comment author: Alex_Altair 17 March 2012 12:33:34PM 1 point [-]

There is a draft of my article on the topic.

I can't see this draft. I think only those who write them can see drafts.

Comment author: Dmytry 17 March 2012 12:35:44PM 0 points [-]

Hmm, weird. I thought the hide button would hide it from the public, and the un-hide button would unhide it. How do I make it public as a draft?

Comment author: Dmytry 17 March 2012 12:16:21PM *  0 points [-]

Nothing. The arguments towards any course of action have very low external probabilities (which I assign when I see equally plausible but contradictory arguments), resulting in very low expected utilities, even if the bad AI is presumed to do some drastically evil stuff versus the good AI doing some drastically nice stuff. There are many problems for which effort has a larger expected payoff.
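
A toy illustration, with made-up numbers, of how equally plausible but opposed arguments wash out of the expected utility:

```python
# Two equally plausible but opposed arguments, each assigned the same
# low external probability of being right. Numbers are illustrative only.
p = 0.01
u_if_argument_a_right = 1e12   # drastic good outcome if acting is right
u_if_argument_b_right = -1e12  # drastic evil outcome if acting is wrong

# Symmetric probabilities make the huge stakes cancel, leaving a
# near-zero expected utility for acting on either argument.
expected_utility_of_acting = p * u_if_argument_a_right + p * u_if_argument_b_right
print(expected_utility_of_acting)  # 0.0
```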

edit:

I do subscribe to the school of thought that the irregular, connectionist AIs (neural networks, brain emulations of various kinds, and the like) are the ones least likely to engage in a highly structured effort like maximization of some scary utility to the destruction of everything else. I'm very dubious that such an agent could have foresight so good as to decide that humans are not worth preserving, as part of general "gather more interesting information" heuristics.

The design space near FAI, meanwhile, is a minefield of monster AIs, and a bugged FAI represents a worst-case scenario. There is a draft of my article on the topic. Note: I am a software developer, and I am very sceptical about our ability to write an FAI that is not bugged, as well as about our ability to detect substantial problems in an FAI's goal system, since regardless of the goal system the FAI will do all it can to pretend to be working correctly.

Comment author: Will_Newsome 17 March 2012 10:08:53AM 3 points [-]

apparently can't think of any self-reinforcing social changes that he thinks are good.

(I'm very, very bad at this sort of coming up with examples, so I don't think my inability to come up with any is much evidence for anything. I'm also very, very bad at finding physical objects amongst other objects, e.g. looking in the fridge for a certain jar. I strongly suspect that those two skills are strongly related.

Eliezer also claims to be very bad at coming up with examples and has told an anecdote about his inability to find things in the fridge (which he then ascribed to males in general; there are many reasons to be skeptical of the generalization). I suspect something interesting is going on here, and I tentatively wonder if it has to do with damage to, or atrophy of, the dorsolateral prefrontal cortex.)

Comment author: Dmytry 17 March 2012 10:24:20AM *  4 points [-]

Hmm, it's curious. I am pretty good at coming up with examples and at picking out items from crowded environments. Well, for the social changes... what about the gradual abolition of religious fundamentalism? It can be self-reinforcing (just as the introduction of fundamentalism is; instability implies self-reinforcing effects both ways).

In general, if you can come up with some self-reinforcing social change that you think is bad, the same change starting from the bad state (assuming it can start at all) would be self-reinforcing in the good direction. A steel object falling away from under a magnet is a self-reinforcing process (the further it falls, the weaker the attraction), and so is a steel object snapping onto a magnet (the closer it gets, the stronger the attraction force).

edit: ahh, I looked up the comment thread. Indeed, the problem is that it is hard to keep an unstable process in equilibrium, and self-reinforcing processes go too far. At the same time, if you take a terribly religious population where witches were burned, or where rape victims are stoned to death, it is easy to imagine that at the other end of the slippery slope (if the slope is at all inclined in the other direction) life is massively better.

I made a magnetic levitation device once; it would suspend an iron nail under an electromagnet. It seems deceptively simple (when the nail goes up, turn off the magnet; when the nail goes down, turn it on), but if you do that you get rapidly increasing oscillations. You have to have a circuit that blends the first and second derivatives of the position into the control signal. A great deal of complexity for a very simple unstable system.
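
A minimal sketch of the two control laws, with made-up gains and finite-difference derivative estimates; this is an illustration of the idea, not the original circuit:

```python
def bang_bang(error: float) -> float:
    # "When the nail goes up, turn off the magnet; when it goes down,
    # turn it on." Uses only the sign of the error, which is what
    # produces the growing oscillations.
    return 1.0 if error > 0 else 0.0

def blended_control(error: float, prev_error: float, prev_prev_error: float,
                    dt: float, kp: float = 1.0, kd: float = 0.5,
                    kdd: float = 0.05) -> float:
    # Blend the position error with its first and second derivatives,
    # estimated by finite differences over the timestep dt.
    d_error = (error - prev_error) / dt
    dd_error = (error - 2.0 * prev_error + prev_prev_error) / (dt * dt)
    return kp * error + kd * d_error + kdd * dd_error
```

The derivative terms let the controller react to where the nail is heading rather than just where it is, which is what damps the oscillations that the bang-bang rule amplifies.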

Comment author: Logos01 17 March 2012 06:18:21AM 7 points [-]

Dawkins himself said it; they believed that "sneaky cheaters" would prosper more. This is a common intuition even today. It turns out that Tit for Tat remains a very robust strategy, despite its apparent lack of sophistication, in prisoner-dilemma trials. There was a run just a month or two back that was all the rage on Discussion on LW.
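
A minimal iterated-PD sketch of Tit for Tat against an always-defecting "sneaky cheater"; the payoffs follow the standard PD ordering (T=5 > R=3 > P=1 > S=0), and none of this is the actual code from the LW run:

```python
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each player sees the other's history
        r1, r2 = PAYOFFS[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += r1; s2 += r2
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): cheating gains little
```

The cheater wins its one exploitation in the first round and then gets punished for the rest of the match, which is why it fails to prosper over repeated play.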

Comment author: Dmytry 17 March 2012 09:42:12AM *  0 points [-]

For a real-world anecdote, all the successful businesses I have worked with seem to be tit-for-tat-ers (I never tried to screw anyone and never got screwed, and I'm pretty sure they do apply tit for tat, or else defectors would be more common). There could be some sneaky cheaters out there, but mostly due to misplaced beliefs of shareholders that a sociopath CEO will do better than a tit-for-tat CEO.
