All of skepsci's Comments + Replies

skepsci · 2

I think the dichotomy between procedural knowledge and object knowledge is overblown, at least in the area of science. Scientific object knowledge is (or at least should be) procedural knowledge: it should enable you to A) predict what will happen in a given situation (e.g. if someone drops a Mentos into a bottle of Diet Coke) and B) predict how to set up a situation to achieve a desired result (e.g. produce pure L-glucose).

skepsci · 3

Humans have a preference for simple laws because those are the ones we can understand and reason about. The history of physics is a history of coming up with gradually more complex laws that are better approximations to reality.

Why not expect this trend to continue with our best model of reality becoming more and more complex?

skepsci · 1

This is trivially false. Imagine, for the sake of argument, that there is a short, simple set of rules for building a life-permitting observable universe. Now add an arbitrary, small, highly complex perturbation to that set of rules. Voilà: infinitely many high-complexity algorithms which can be well approximated by low-complexity algorithms.

Rob Bensinger · 1
How does demonstrating 'infinitely many algorithms have property X' help falsify 'most algorithms lack property X'? Infinitely many integers end with the string ...30811, but that does nothing to suggest that most integers do. Maybe most random life-permitting algorithms beyond a certain level of complexity have lawful regions where all one's immediate observations are predictable by simple rules. But in that case I'd want to know the proportion of observers in such universes that are lucky enough to end up in an island of simplicity. (As opposed to being, say, Boltzmann brains.)
skepsci · 6

I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don't want to help you, such as customer service or bureaucrats. By giving the agent agency, it's easy to identify the problem: the agent in question wants to get rid of you with the least amount of effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make it seem like less effort to get rid of you by helping you with your problem (which is their job after all) than something else. This can be done by simply insisting on being helped, making a ruckus, or asking for a manager, depending on the situation.

skepsci · 2

I do the same sort of thinking about the motivations of other drivers, but it seems strange to me to phrase the question as "what does he know that I don't?" More often than not, the cause of strange driving behaviors is lack of knowledge, confusion, or just being an asshole.

Some examples of this I saw recently include 1) a guy who immediately cut across two lanes of traffic to get in the exit lane, then just as quickly darted out of it at the beginning of the offramp; 2) a guy on the freeway who slowed to a crawl despite traffic moving quickly a…

skepsci · -1

If there's some uncomputable physics that would allow someone to build such a device, we ought to redefine what we mean by computable to include whatever the device outputs. After all, said device falsifies the Church-Turing thesis, which forms the basis for our definition of "computable".

skepsci · 2

Perhaps it terminates within the time allowed, proving that A defects and B cooperates, even though the axioms were inconsistent and one could also have proved that A cooperates and B defects.

Richard_Kennaway · 1
The competitors know the deterministic proof-search algorithm (e.g. that of Prolog), and the first answer it produces within the time limit is the answer.
skepsci · 1

How will you know? The set of consistent axiom systems is undecidable. (Though the set of inconsistent axiom systems is computably enumerable.)

Richard_Kennaway · 0
You just run the Prolog (or whatever logic system implements all this), and it either terminates with a failure or does not terminate within the time allowed by the competition. The time limit renders everything practically decidable.
skepsci · 1

What happens if the two sets of axioms are individually consistent, but together are inconsistent?

Richard_Kennaway · 0
Deem both of the agents to have not terminated?
skepsci · 1

Your source code is your name. Having an additional name would be irrelevant. It is certainly possible for bots to prove they cooperate with a given bot, by looking at that particular bot's source. It would, as you say, be much harder for a bot to prove it cooperates with every bot equivalent to a given bot (in the sense of making the same cooperate/defect decisions vs. every opponent).

Rice's theorem may not be as much of an obstruction as you seem to indicate. For example, Rice's theorem doesn't prohibit a bot which proves that it defects against all defe…

skepsci · 0

What's wrong with "I will cooperate with anyone who verifiably asserts that they cooperate with me"? A program could verifiably assert that by being, e.g., cooperatebot. A program could be written that cooperates with any opponent that provides it with a proof that it will cooperate.

Richard_Kennaway · 0
The word "me". By Rice's theorem, they can't tell that they're dealing with someone computationally equivalent to me, and there's no other sense of my "identity" that can be referred to. Athough that could be added. Have all competitors choose a unique name and allow them to verifiably claim to be the owner of their name. Then "I will cooperate with anyone who verifiably asserts that they cooperate with me" can work if "me" is understood to mean "the entity with my name". Discovering a blob's name would have to be a second primitive operation allowed on blobs. ETA: I think I missed your point a little. Yes, a cooperate-bot verifiably asserts that it cooperates with everyone, so it is an entity that "verifiably asserts (something that implies) that they will cooperate with me." And there will be other bots of which this is true. But I'm not sure that I can verifiably express that I will cooperate with the class of all such bots, because "the class of all such bots" looks undecidable.
skepsci · 0

Thanks. The logic the board uses to determine which posts you've read seems strange.

Sorry about posting in the wrong open thread. I followed an "open thread" link, and this looked like it was the most recent open thread.

skepsci · 1

Why do some posts have pink borders?

I can't quite figure it out. I gather it has something to do with being new, since newer posts are more likely to be pink and every reply to a pink post seems to be pink. But it's not purely chronological (since some of the most recent comments do not have pink borders when I view the thread), and it's not purely based on being new since the last time you've viewed a thread (since I've seen pink borders around my own posts).

Qiaochu_Yuan · 1
It's posts you haven't read before. I think writing a post doesn't count as reading it. Also, the newest open thread is this one.
[anonymous] · 0
It's posts you haven't read before. Also, you're not posting in the most recent open thread.
skepsci · 1

I think a programming language that only allows primitive recursion is a bad idea. One common pattern (which I think we want to allow) was for bots to simulate their opponents, which entails the ability to simulate arbitrary valid code. This would not be possible in a language which restricts to primitive recursion.
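To make this concrete, here is a minimal, hypothetical sketch of opponent simulation; the `play(opponent_source)` convention and all names are assumptions for illustration, not the tournament's actual harness:

```python
import inspect

def mirror_bot(opponent_source):
    # Simulate the opponent against our own source and copy its move.
    # Executing arbitrary submitted code is general recursion: its depth
    # can't be bounded in advance the way primitive recursion requires,
    # and it may not halt (two mirror_bots recurse forever), which is why
    # a real harness would need a time limit.
    namespace = {}
    exec(opponent_source, namespace)
    my_source = inspect.getsource(mirror_bot)
    return namespace["play"](my_source)

# Usage: mirror a cooperate-bot submitted as source text.
print(mirror_bot("def play(opponent_source):\n    return 'C'"))  # -> C
```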

skepsci · 51

Yay, I wasn't last!

Still, I'm not surprised that laziness did not pay off. I wrote a simple bot, then noticed that it cooperated against defectbot and defected against itself. I thought to myself, "This is not a good sign." Then I didn't bother changing it.

Frankly, I find this hilarious.

skepsci · 0

I was going to make this same objection. Your assertion that level 2 tasks are multiplicative with each other is not very plausible. It's obviously false that each typing class improves the typer's ability by 20%, since I can't take 10 typing classes and start typing at 400 words per minute. More likely the gains with multiple typing classes are linear for the first few, and sublinear in the long run.

It is more plausible that level 2 tasks are multiplicative with level 1 tasks. If you get 20% faster at typing, you can transcribe audio 20% faster, and every level 1 transcription task you undertake now pays 20% better.
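A quick check of where the 400 words per minute figure comes from (the 65 wpm baseline is an assumed typical speed, used only for illustration):

```python
base_wpm = 65          # assumed typical typing speed
boost = 1.20           # the claimed 20% gain per class

# If the gains really multiplied, ten classes would compound:
print(round(base_wpm * boost ** 10))  # ~402 wpm, which is implausible
```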

skepsci · 2

After the top 5 or 10 or so, rather than just presenting a list of articles, it may make more sense to split things up by topic. Being presented with a list of 100 articles is kind of intimidating. Being presented with five lists of twenty articles each on five different topics is less so, as it's easier to divide and conquer. Readers may be interested in some topics but not others (at least at first), or may decide to read a few articles on each topic.

Some natural subdivisions might be:

  • Human psychology and biases.
  • Bayesian inference.
  • The scientific meth…
skepsci · 0

It's not necessarily best to cooperate with everyone with the "AbsolutismBot" wrapper. This guarantees that you and the other program will both cooperate, but without the wrapper it might have been the case that you would defect and the other program would cooperate, which is better for you.

skepsci · 7

How do you enforce the 10% salary tithe?

One obvious difficulty in educating children for free and then expecting them to pay you back after they become educated is that, most places, minors cannot enter into legally binding contracts. So the kid graduates, gets a great job (in a country that won't recognize the contract), and says, "I never agreed to pay you 10% of my salary, so I'm keeping it."

Estarlio · 4
Depending on your country, even adults can't enter into binding contracts under a fair number of conditions. Having very unequal bargaining positions, for instance, violates the idea of freedom of contract, which will render it unenforceable in some places. I think it's called undue influence.
skepsci · 18

I would worry about the effect this may have on your credit rating if anyone catches you at it, together with possibly more serious effects. This could potentially be considered fraud. Altogether it seems much more sensible to simply live within your means and pay off your credit balance each month.

khafra · 14

...it seems much more sensible...

This is the "ridiculous munchkin ideas" thread, not the "sensible advice you've already heard" thread.

This could potentially be considered fraud.

A more pertinent worry. Especially with cards that give a percentage of each purchase as "reward points" or something, I'd be worried about this.

skepsci · 4

Outside of mathematical logic, some familiar examples include:

  • compactness vs. sequential compactness—generalizing from metric to topological spaces
  • product topology vs. box topology—generalizing from finite to infinite product spaces
  • finite-dimensional vs. finitely generated (and related notions, e.g. finitely cogenerated)—generalizing from vector spaces to modules
  • pointwise convergence vs. uniform convergence vs. norm convergence vs. convergence in the weak topology vs. ...—generalizing from sequences of numbers to sequences of functions
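For the first bullet, a standard textbook witness that the two notions come apart once you leave metric spaces:

```latex
% $X = \{0,1\}^{[0,1]}$ with the product topology is compact (Tychonoff)
% but not sequentially compact: take $f_n(x)$ = the $n$-th digit in a
% fixed binary expansion of $x$. For any subsequence $(f_{n_k})$, choose
% $x$ whose digits alternate along $n_1, n_2, \ldots$; then $f_{n_k}(x)$
% does not converge, so $(f_n)$ has no convergent subsequence.
```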
skepsci · -2

Witty to be sure, but obviously false. The causal connection between baseball and the content (as opposed to the name) of the law is probably fairly tenuous. The number three is ubiquitous in all areas of human culture.

gwern · 7
We can still blame the propaganda for helping make the laws appealing and getting them to pass. And given the popularity of things named after people like "Laura's Law" or "Megan's Law", it wouldn't surprise me if the popularity was due to the rhetorical effect on the average voter.
Prismattic · 9
I think further investigation would reveal that it is at most a Western cultural thing, not a hardwired human universal. Elsewhere in time and place, 4 has been the important number -- e.g. recurrences of 4 and 40 in the Hebrew scriptures; the importance of 4 and (negatively) 8 in Chinese culture, etc. Possibly some other digits have performed similarly in other places as well.
skepsci · 3

Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.

The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.

skepsci · 1

Wouldn't explaining why the statement is misleading be more productive than suppressing the misleading statement?

skepsci · 4

If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don't see the difference between Monday and Tuesday.

I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...

Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.

Does that story jibe with your understanding?

JonatasMueller · 1
Yes, that is correct. I'm glad a Less Wronger finally understood.
skepsci · 0

The difference is between amateur and professional ratings. Amateur dan ratings, just like kyu ratings, are designed so that a difference of n ranks corresponds to the suitability of an n-stone handicap, but pro dan ratings are more bunched together.

See Wikipedia:Go pro.

skepsci · 3

I would be very interested if anyone has good examples of this phenomenon.

There are a few "triads" mentioned in the intellectual hipster article, but the only one that really seems to me like a good example of this phenomenon is the "don't care about Africa / give aid to Africa / don't give aid to Africa" triad.

Dmytry · 0
Well, the "dumb" (and uneducated) explanation of airfoil lift is that wings push air downwards. The slightly less dumb people get exposed to bits and pieces of products of thought of very very smart people, which they completely don't understand and absolutely can't use for reasoning. But they want to be smart. So they come up with explanation that air on top of the wing must match up with the air on the bottom, but path is longer, so it must go faster, and so with bernoulli effect, there's lift. Reduced from dumbly talking in dumbspeech to incoherently babbling in smartspeech. The actually smart people's explanation is that wings push air downwards (and also pull it downwards). The reasoning tools made by real smart people for real smart people are a memetic hazard to semi smart slightly educated people, but not so much to uneducated people, in much same way how power tools made for adults are a huge hazard to children that can open the cabinet, but not infants. If we meet super smart aliens, and they just dump knowledge, results on the really smart people might well be exactly the same.
skepsci · 1

This advice is worse than useless. But coming from someone who was instrumental in the "Physicists have figured a way to efficiently eradicate humanity; let's tell the politicians so they may facilitate!" movement, it's not surprising.

Protip: the maxim "That which can be destroyed by the truth, should be" does not mean we should publish secrets that have a chance of ending global civilization.

[anonymous] · 1
I tend to think of science as the public common knowledge of mankind. It is obviously not the only kind of knowledge. Also, I would say that humans tend to err more often in the direction of needlessly keeping important information secret than in the direction of sharing it too easily. Especially since it is easier to fool yourself than others.
skepsci · -2

So I should interpret Will's "Omega = objective morality" comment as meaning "sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends"? I don't think so.

wedrifid · 0
No. Will thinks thoughts along these lines, then goes ahead and bites imaginary bullets.
skepsci · 5

It's also completely ridiculous, with a sample size of ~10 questions, to give the success rate and probability of being well calibrated as percentages with 12 decimal places. Since the uncertainty in such a small sample is on the order of several percent, just round to the nearest percentage.

MTGandP · 0
It probably just computes it as a float and then prints the whole float. (I do recognize the silliness of replying to a three-year old comment that itself is replying to a six-year old comment.)
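A minimal illustration of that failure mode (the 7-of-13 score is made up for the example):

```python
success_rate = 7 / 13
print(success_rate * 100)      # 53.84615384615385 -- raw float, spurious precision
print(f"{success_rate:.0%}")   # 54% -- all the precision ~10 answers can support
```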
skepsci · 0

No, it's an annual rate. You quote it as an annual rate, and it matches the annual rate I found by repeating your search. So you need to multiply by seven to get the rate of people committing suicide during the years they would, if they were Hogwarts students, be attending Hogwarts.

thomblake · 0
Hmm... it looks like you're correct. Interestingly, this site seems to say that the US suicide rate for teenagers is .01%, and the US suicide rate is also .01%. Curiouser and curiouser.
skepsci · 3

Except that students stay at Hogwarts for 7 years, not one, which would put the suicide rate at Hogwarts at one per 14 years, not one per century (if wizards commit suicide at the same rate as muggles). If you assumed that Wizarding suicide attempts were 5 times as likely to be successful, that would put the rate at one suicide every 3 years.

Of course, it's entirely possible that the wizarding resilience to illness and injury also makes them more resilient to mental illness, and that's why suicide rates are lower.
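Spelling out the arithmetic (assuming the implied baseline of roughly one suicide per century among a single year's cohort of students):

```python
one_cohort = 1 / 100          # assumed suicides per year from one class year
seven_cohorts = 7 * one_cohort

print(1 / seven_cohorts)        # ~14.3 years between suicides with 7 cohorts
print(1 / (5 * seven_cohorts))  # ~2.9 years if attempts were 5x as lethal
```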

thomblake · 1
If I'm not mistaken, that rate was based on the number of people who live to teenage years and then kill themselves during their teenage years, not the number of teenagers who kill themselves per year. Interesting idea.
skepsci · 0

It is trivial* to see that this game reduces to/is equivalent to a simple two party prisoners dilemma with full mutual information.

It only reduces to/is equivalent to a prisoner's dilemma for certain utility functions (what you're calling "values"). The prisoner's dilemma is characterized by the fact that there is a dominant strategy equilibrium which is not Pareto optimal. But if the utility functions of the agents are such that the game is zero-sum, then this can't be the case, as every outcome is Pareto optimal in a zero-sum game.

Furthermore, …

wedrifid · 0
Yes, this entire scenario is based around cases where there is benefit to cooperation. In the edge case where such benefit is '0 expected utilons' the behavior of the agents will, unsurprisingly, not be changed at all by the considerations we are talking about.
skepsci · 0

If you have beliefs about the matter already, push the "reset" button and erase that part of your map. You must feel that you don't already know the answer.

It seems like a bad idea to intentionally blank part of your map. If you already know things, you shouldn't forget what you already know. On the other hand, if you have reason to doubt what you think you know, you should blank the suspect parts of your map when you had reason to doubt them, and not artificially as part of a procedure for generating curiosity.

I think what you may be trying t…

skepsci · 3

The decisions produced by any decision theory are not objectively optimal; at best they might be objectively optimal for a specific utility function. A different utility function will produce different "optimal" behavior, such as tiling the universe with paperclips. (Why do you think Eliezer et al. are spending so much effort trying to figure out how to design a utility function for an AI?)

I see the connection between omega and decision theories related to Solomonoff induction, but as the choice of utility function is more-or-less arbitrary, it doesn't give you an objective morality.

paulfchristiano · 8
His point is that if I fix your goals (say, narrow self-interest) the defensible policies still don't look much like short-sighted goal pursuit (in some environments, for some defensible notions of "defensible"). It may be that all sufficiently wise agents pursue the same goals because of decision theoretic considerations, by implicitly bargaining with each other and together pursuing some mixture of all of their values. Perhaps if you were wiser, you too would pursue this "overgoal," and in return your self-interest would be served by other agents in the mixture. While plausible, this doesn't look super likely right now. Will would get a few Bayes points if it pans out, though the idea isn't due to him.

(A continuum of degrees of altruism has been conjectured to be justified from a self-interested perspective, if you are sufficiently wise. This is the most extreme; Drescher has proposed a narrower view which still captures many intuitions about morality, and weaker forms that still capture at least a few important moral intuitions, like cooperation on PD, seem well supported.)

The connection to omega isn't so clear. It looks like it could just be concealing some basic intuitions about computability and approximation. It seems like a way of smuggling in mysticism, which is misleading by being superfluous rather than incoherent.
Will_Newsome · -9
skepsci · 2

I'm very confused* about the alleged relationship between objective morality and Chaitin's omega. Could you please clarify?

*Or rather, if I'm to be honest, I suspect that you may be confused.

Will_Newsome · -8
skepsci · 11

It is bad luck to be superstitious.

-Andrew W. Mathis

wedrifid · 5
Or potentially good luck, if the combination of your instincts and the (irrationally justified) memes you inherited from tradition is better than your abstract decision making.
skepsci · 2

If a bad law is applied in a racist way, surely that's a problem with both the law itself and the justice system's enforcement of it?

skepsci · 2

Yeah, I was wondering about the downvotes. The welcome thread says that it's perfectly acceptable to ask for an explanation... So, for anyone who downvoted me, why?

Viliam_Bur · 2
Didn't downvote, but I think your comment visually matches the 'strawman argument' pattern. Except that it is not one.
skepsci · 0

Exactly. If you have determinism in the sense of a function from AI action to result world, you can directly compute some measure of the difference between worlds X and X', where X is the result of AI inaction, and X' is the result of some candidate AI action.

As nerzhin points out, you can run into similar problems even in deterministic universes, including life, if the AI doesn't have perfect knowledge about the initial configuration or laws of the universe, or if the AI cares about differences between configurations that are so far into the future they are beyond the AI's ability to calculate. In this case, the universe might be deterministic, but the AI must reason in probabilities.
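As a sketch of the deterministic case being described; `step`, `distance`, and the horizon are hypothetical stand-ins, not anyone's actual proposal:

```python
def impact(world, action, step, distance, horizon=1000):
    """Distance between the futures following an action versus inaction.

    `step` is the universe's (assumed known) deterministic transition
    function and `distance` a metric on world-states. The finite horizon
    reflects the point above: the AI cannot compare configurations
    arbitrarily far into the future.
    """
    w_act = step(world, action)      # world X': take the candidate action
    w_null = step(world, None)       # world X: do nothing
    for _ in range(horizon - 1):     # both futures then evolve passively
        w_act = step(w_act, None)
        w_null = step(w_null, None)
    return distance(w_act, w_null)
```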

skepsci · 3

An unfriendly legal system might treat being born as a crime. In fact, I'd be surprised if some politician in Arizona hasn't tried to make being born to illegal immigrant parents a crime.

David_Gerard · 2
The downvotes surprise me, because the rhetoric is precisely along these lines, e.g. Sonny Bono wanting to remove state benefits from the US-born children of illegal immigrants on the justification "they're illegal". Illegal people.
skepsci · 1

On a related note, I remember the day my PhD advisor (a computability theorist!) revealed that he believed the argument against AI from Gödel's incompleteness theorem. It was not reassuring.

TimS · 0
Smarter than human AI, or artificial human level general intelligence?
skepsci · 1

Dawkins starts from the premise that there is high uncertainty about the outcome of the case, and concludes that there is high uncertainty about the guilt, which does not follow. Even if it is obvious to everyone that the defendant is very probably guilty, it may be far from obvious exactly how high the jury will estimate the probability of innocence, and where they will set the bar for reasonable doubt.*

*It has never been clear to me where this should be. If I put the credence of guilt at g, should I convict when g>.9? .99? .999? Should I say "to …

skepsci · 1

What do you mean by "great (awful)"? Do you mean that the thought experiment itself is an awful argument against AI, but describing the argument is a good way to test how people think?

JonathanLivengood · 2
Yes, that's exactly what I mean. The argument itself is terrible. But it invites so many reasonable challenges that it is still very useful as a test of clear thinking. So, awful argument; great test case.
skepsci · 1
On a related note, I remember the day my PhD advisor (a computability theorist!) revealed that he believed the argument against AI from Gödel's incompleteness theorem. It was not reassuring.
skepsci · 0

Maybe there is some true randomness in the universe

Not a problem.

I know it's not a problem. I explained exactly how to modify Solomonoff induction to handle universes that are generated randomly according to some computable law, as opposed to being generated deterministically according to an algorithm.

Suppose you flip a quantum coin ten times. If you record the output, the K-complexity is ten bits.

Maybe it is, maybe it isn't. Maybe your definition of Kolmogorov complexity is such that the Kolmogorov complexity of every string is at least 3^^^3, b…
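Presumably the modification referred to is the standard one: replace the single shortest program with a complexity-weighted mixture over all enumerable semimeasures, which handles lawful randomness natively:

```latex
% Universal mixture over enumerable semimeasures $\mu_1, \mu_2, \ldots$:
\[ \xi(x) = \sum_i 2^{-K(\mu_i)} \, \mu_i(x) \]
% Since $\xi(x) \ge 2^{-K(\mu)} \mu(x)$ for every computable law $\mu$,
% a universe sampled from a simple $\mu$ is predicted nearly as well by
% $\xi$ as by $\mu$ itself.
```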

skepsci · 5

The assumption was that 80% of defendants are guilty, which is more than 4 of 8. Under this assumption, asking whether p(guilty|convicted) > 80% is just asking whether conviction positively correlates with guilt. Asking if p(innocent|acquitted) > 20% is just asking if acquittal positively correlates with innocence. These are really the same question, because P correlates with Q iff ¬P correlates with ¬Q.
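The equivalence in the last sentence is a two-line computation, reading "P correlates with Q" as Pr[P ∧ Q] > Pr[P]Pr[Q]:

```latex
\Pr[\neg P \wedge \neg Q] = 1 - \Pr[P] - \Pr[Q] + \Pr[P \wedge Q]
                          > 1 - \Pr[P] - \Pr[Q] + \Pr[P]\Pr[Q]
                          = \Pr[\neg P]\,\Pr[\neg Q]
```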

[anonymous] · 0
Perfect. Thanks.
skepsci · 11

It proves that mistakes have been made, but in the end, no, I don't think it's terribly useful evidence for evaluating the rate of wrongful convictions. Why not? There have been 289 post-conviction DNA exonerations in US history, mostly in the last 15 years. That gives a rate of under 20 per year. Suppose 10,000 people a year are incarcerated for the types of crime that DNA exoneration is most likely to be possible for, namely murder and rape (I couldn't find exact figures, but I suspect the real number is at least this big). Then considering DNA exonerati…
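Reconstructing the truncated arithmetic (the 15-year window and the 10,000-per-year figure are the comment's own rough assumptions):

```python
exonerations = 289
years = 15                      # "mostly in the last 15 years"
per_year = exonerations / years            # ~19, i.e. "under 20 per year"
incarcerated_per_year = 10_000             # assumed lower bound (murder + rape)

print(per_year / incarcerated_per_year)    # ~0.002, the 0.2% floor cited below
```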

bigjeff5 · 0
The appellate system itself - of which cases involving new DNA evidence are a tiny fraction - is a much more useful measure. There are a whole lot more exonerations via the appeals process than those driven by DNA evidence alone. This ought to be obvious, and the 0.2% provided by DNA is an extreme lower bound, not the actual rate of error correction. Case in point: I found an article describing a study on overturning death penalty convictions, and they found that 7% of convictions were overturned on re-trial, and 75% of sentences were reduced from the death penalty upon re-trial. One in fourteen sounds a lot more reasonable to me, and again that's just death penalty cases, for which you'd expect a higher than normal standard for conviction and sentencing. The standard estimate is about 10% for the system as a whole.
taw · 13

DNA exoneration happens when one is innocent and a combination of extremely lucky circumstances makes retesting of the evidence possible. The latter I would be shocked to find at higher than a 1:100 chance.

skepsci · 1

To me, the entire argument sounds like a rationalization for not signing up for cryo.

Signed,

Someone who has rationalized a reason for not signing up yet for cryo, and suspects that the real reason is laziness.

Joshua Hobbes · -1
So sign the hell up.