Comments

Indon · 10y · 40

And what makes you sure of that? It even looks like the outline for the three boxes along the top.

Our cultural assumptions are perhaps more subtle than the average person thinks.

Indon · 10y · 40

If the first two shapes on the bottom are diamonds, why is the third shape a square?

Indon · 11y · 00

That's a good way to clearly demonstrate a nonempathic actor in the Prisoner's Dilemma: a "Hawk", who views their own payoffs, and only their own payoffs, as having value, placing no value on the payoffs of others.

But I don't think it's necessary. I would say that humans can visualize a nonempathic human - a bad guy - more easily than they can visualize an empathic human with slightly different motives. We've undoubtedly had to, collectively, deal with a lot of them throughout history.

A while back I was writing a paper and came across a fascinating article about types of economic actors. The article concluded that there are probably three general tendencies in human behavior, and thus three general groups of human actors who have those tendencies: one that tends to play 'tit-for-tat' (whom the authors call 'conditional cooperators'), one that tends to play 'hawk' ('rational egoists'), and one that tends to play 'grim' ('willing punishers').
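
As a rough sketch of what those three tendencies look like as strategies in an iterated Prisoner's Dilemma (my own illustration, not code from the article; only the labels are taken from it):

    # Sketch of the three behavioral tendencies as iterated Prisoner's Dilemma
    # strategies. 'C' = cooperate, 'D' = defect; `history` is the list of the
    # opponent's past moves, oldest first.

    def tit_for_tat(history):
        """Conditional cooperator: start nice, then mirror the opponent's last move."""
        return 'C' if not history else history[-1]

    def hawk(history):
        """Rational egoist: always defect, valuing only its own payoff."""
        return 'D'

    def grim(history):
        """Willing punisher: cooperate until the first defection, then defect forever."""
        return 'D' if 'D' in history else 'C'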

So there are paperclip maximizers among humans. Only the paperclips are their own welfare, with no empathic consideration whatsoever.

Indon · 11y · 00

Ah, so the statement is second-order.

And while I'm pretty sure you could replace the statement with an infinite number of first-order statements that precisely describe every member of the set (0S = 1, 0SS = 2, 0SSS = 3, etc), you couldn't say "These are the only members of the set", thus excluding other chains, without talking about the set - so it'd still be second-order.
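
To make that concrete, in standard notation rather than anything quoted from the post: the "these are the only members" clause is the second-order induction axiom, which quantifies over properties, and no collection of first-order instances does that.

    % Second-order induction: a single axiom quantifying over every property P,
    % which is what pins down "these are the only members of the set":
    \forall P\,\bigl[\,\bigl(P(0) \land \forall x\,(P(x) \to P(Sx))\bigr) \to \forall x\,P(x)\,\bigr]

    % A first-order replacement can only be an infinite schema, one instance per
    % formula \varphi, and it never talks about the set itself:
    \bigl(\varphi(0) \land \forall x\,(\varphi(x) \to \varphi(Sx))\bigr) \to \forall x\,\varphi(x)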

Thanks!

Indon · 11y · 00

Okay, my brain isn't wrapping around this quite properly (though the explanation has already helped me to understand the concepts far better than my college education on the subject has!).

Consider the statement: "There exists no x for which, for some number k, x after k successions is equal to zero" (¬∃x ∃k>0: S^k(x) = 0 is the closest I can figure to depict it formally). Why doesn't that axiom eliminate the possibility of any infinite or finite chain that involves a number below zero, and thus eliminate the possibility of the two-sided infinite chain?

Or... is that statement a second-order one, somehow, in which case how so?

Edit: Okay, the gears having turned a bit further, I'd like to add: "For all x, there exists a number k such that 0 after k successions is equal to x."

That should deal with another possible understanding of that infinite chain. Or is defining k in those axioms the problem?
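
For reference, here are the two proposed statements written out in successor notation (my own formalization of the wording above; in both, k is understood to range over the natural numbers):

    % No number reaches zero after one or more successions:
    \neg\exists x\,\exists k > 0 :\; S^{k}(x) = 0

    % Every number is reachable from zero by some finite number of successions:
    \forall x\,\exists k :\; S^{k}(0) = x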

Indon · 11y · 10

I would suggest that the most likely reason for logical rudeness - not taking the multiple-choice - is that most arguments beyond a certain level of sophistication have more unstated premises than they have stated premises.

And I suspect it's not easy to identify unstated premises. Not just the ones you don't want to say, belief-in-belief sort of things, but ones you as an arguer simply aren't sufficiently skilled to describe.

As an example:

For example: Nick Bostrom put forth the Simulation Argument, which is that you must disagree with either statement (1) or (2) or else agree with statement (3):

In the given summary (which may not accurately describe the full argument; for the purposes of the demonstration, it doesn't matter either way), Mr. Bostrom doesn't note that, presumably, the number of potential simulated earths immensely outnumbers the number of nonsimulated earths as a result of his earlier statements. But that premise doesn't necessarily hold!

If the average number of Earth-simulations such a society runs is lower than the inverse of the chance of an Earth reaching simulation-level progress without somehow self-exterminating or being exterminated (say, fewer than 100 simulations against a 1-in-100 chance), then the balance of potential Earths does not match the unstated premise... in universes sufficiently large for multiple Earths to exist (See? A potential hidden premise in my proposal of a hidden premise! These things can be tricky).
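
A toy calculation of that balance, with illustrative numbers of my own rather than anything from Bostrom's argument:

    # Toy model of the simulated-to-real Earth balance (illustrative numbers only).
    p_reach_sim = 1 / 100    # chance an Earth survives to run its own Earth-simulations
    sims_per_society = 50    # assumed average number of simulations per surviving society

    # Expected simulated Earths per real Earth.
    ratio = p_reach_sim * sims_per_society

    # If the ratio is below 1, simulated Earths do not outnumber real ones,
    # and the unstated premise fails.
    print(f"simulated-to-real ratio: {ratio}")  # 0.5 with these numbers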

And if most arguers aren't good at discerning hidden premises, then an arguer can feel like they're falling into a trap: that there must be a hidden, undiscovered premise that poses a more serious challenge to the argument than any they themselves can muster. Faced with that possibility, an average arguer might simply keep quiet, expecting a more skilled arguer to discern the hidden premise that they couldn't.

That doesn't seem rude to me, but humble; a concession of lack of individual skill when faced with a sufficiently sophisticated argument.

Indon · 11y · 100

Perhaps, by sheer historical contingency, aspiring rationalists are recruited primarily from the atheist/libertarian/technophile cluster, which has a gender imbalance for its own reasons—having nothing to do with rationality or rationalists; and this is the entire explanation.

This seems immensely more likely than anything on that list. Libertarian ideology is tremendously dominated by white males - coincidentally, I bet the rationality community matches that demographic - both primarily male, and primarily caucasian - am I wrong? I'm not big into the rationalist community, so this is a theoretical prediction right here. Meanwhile, which of the listed justifications is equally likely to apply to both white females and non-white males?

Now, that's not to say the list of reasons has no impact. Just that the reason you dismissed offhand almost certainly dominates the spread, and the other reasons are comparatively trivial in impact. If you want to solve the problem, you'll need to describe it accurately.

Indon · 11y · -10

I think that's an understatement of the potential danger of rationality in war. Not for the rationalist, mind, but for the enemy of the rationalist.

Most rationality, as elaborated on this site, isn't about impassively choosing to be a civilian or a soldier. It's about becoming less vulnerable to flaws in thinking.

And war isn't just about being shot or not shot with bullets. It's about being destroyed or not destroyed, through the exploitation of weaknesses. And a great deal of rationality, on this very site, is about how to not be destroyed by our inherent weaknesses.

A rationalist, aware of these vulnerabilities and wishing to destroy a non-rationalist, can apply that rationality directly to produce weapons that exploit those weaknesses. To a non-rationalist, their propaganda can be dangerous, and the techniques used to craft it nigh-undetectable to the untrained eye: weapons the enemy doesn't even know are weapons until long after they've begun murdering themselves because of them.

An easy example would be to start an underground, pacifistic religion in the Barbarian nation. Since the barbarians shoot everyone discovered to profess it, every effort to propagate the faith is directly equivalent to killing the enemy (not just that, but even efforts to promote paranoia about the faith also weaken enemy capability!). And what defense do they have, save for other non-rationalist techniques that dark side rationality is empowered to destroy through clever arguments, created through superior understanding?

And we don't have to wait for a Perfect Future Rationalist to get those things either. We have those weapons right now.

Indon · 11y · 00

Speaking as a cat, there are a lot of people who would like to herd me. What makes your project higher-priority than everyone else's?

"Yes, but why bother half-ass involvement in my group?" Because I'm still interested in your group. I'm just also interested in like 50 other groups, and that's on top of the one cause I actually prefer to specialize with.

...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high.

People in the atheist/libertarian/technophile/sf-fan/etc cluster obviously have a ton of different interests, and those interests are time/energy exclusive. Why shouldn't they have high requirements for yet another interest trying to add itself to the cluster?

Indon · 11y · 20

Reading the article, I can make a guess as to how the first challenges went; it sounds like their primary, and possibly only, resolution against the challenge was not to pay serious attention to the AI. That's not a very strong approach, as anyone in an internet discussion can tell you: it's easy to get sucked into full engagement with someone trying to draw you in, and it's easy to keep someone engaged when they're trying to break off.

Their lack of preparation, I would guess, led to their failure against the AI.

A more advanced tactic would involve additional lines of resolution after becoming engaged: contemplating philosophical arguments to use against the AI, for instance, or imagining an authority that forbids you from taking the action. Were I faced with the challenge, after I got engaged (which would take like 2 minutes max; I've got a bad case of "but someone's wrong on the internet!"), my second line of resolution would be to roleplay.

I would be a hapless grad-student technician whose job is to feed the AI problems and write down the results. That role would have a checklist of things not to do (because they would release, or risk releasing, the AI), and if directly asked to do any of them, he'd go 'talk to his boss', invoking the third line of defense.

Finally, I'd be roleplaying someone with the authority to release the AI without being tricked, but he'd sit down at the console prepared: strongly suspecting that something was wrong, and empowered to say at any time, "I'm shutting you down for maintenance." He wouldn't bother to engage the AI at its level, because he's trying to solve a deeper problem of which the AI's behavior is a symptom. That would make this line of defense the strongest of all, because he's no longer viewing the AI as credible or even intelligent as such; it's just a broken device that will need to be shut down and repaired once he's done some basic diagnostic work.

But even though I feel confident I could beat the challenge, I think the first couple of challenges already make the point: an AI-in-a-box scenario represents a psychological arms race, and no matter how likely the humans' safeguards are to succeed, they only need to fail once. No amount of human victories (because only a single failure matters) or additional lines of human defense (which all have some chance, however small, of being overcome) can unmake that point.
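
To put rough numbers on "only need to fail once" (a toy calculation with made-up figures, not anything from the actual experiments):

    # Toy illustration: even a mostly reliable safeguard fails eventually
    # over enough independent attempts.
    p_fail_once = 0.01    # assumed chance the gatekeeper slips in any single session
    sessions = 200        # assumed number of independent sessions

    p_at_least_one_failure = 1 - (1 - p_fail_once) ** sessions
    print(f"{p_at_least_one_failure:.2f}")  # roughly 0.87 with these made-up numbers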

It's strange, though. I did not think for one second that the problem was impossible on either side. I suppose that's because it was used as an example of the opposite. Once something has been demonstrated, it can hardly be impossible!
