Humans have a preference for simple laws because those are the ones we can understand and reason about. The history of physics is a history of coming up with gradually more complex laws that are better approximations to reality.
Why not expect this trend to continue, with our best model of reality becoming more and more complex?
This is trivially false. Imagine, for the sake of argument, that there is a short, simple set of rules for building a life-permitting observable universe. Now add an arbitrary, small, highly complex perturbation to that set of rules. Voilà: infinitely many high-complexity algorithms that can be well-approximated by low-complexity algorithms.
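To sketch it formally (every specific here, like the perturbation size, is chosen arbitrarily for illustration): let $p$ be a short program computing our physics, and for each string $r$ of length $n$ let $p_r$ be $p$ with one constant shifted by $2^{-10^{100}} \cdot 0.r$ (reading $r$ as binary digits). For typical (incompressible) $r$ we get $K(p_r) \gtrsim n$, yet every prediction of $p_r$ agrees with $p$ to within roughly $2^{-10^{100}}$, far beyond any achievable measurement precision.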
I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don't want to help you, such as customer service reps or bureaucrats. Granting the agent agency makes it easy to identify the problem: the agent in question wants to get rid of you with the least possible effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make helping you with your problem (which is, after all, their job) seem like the lowest-effort way to get rid of you. Depending on the situation, this can be done by simply insisting on being helped, making a ruckus, or asking for a manager.
I do the same sort of thinking about the motivations of other drivers, but it seems strange to me to phrase the question as "what does he know that I don't?" More often than not, the cause of strange driving behavior is a lack of knowledge, confusion, or just being an asshole.
Some examples of this I saw recently include 1) a guy who immediately cut across two lanes of traffic to get into the exit lane, then just as quickly darted out of it at the beginning of the offramp; 2) a guy on the freeway who slowed to a crawl despite traffic moving quickly a...
If there's some uncomputable physics that would allow someone to build such a device, we ought to redefine what we mean by "computable" to include whatever the device outputs. After all, said device falsifies the Church-Turing thesis, which forms the basis for our definition of "computable".
Perhaps it terminates in the time required, proving that A defects and B cooperates, even though the axioms were inconsistent and one could equally well have proved that A cooperates and B defects.
How will you know? The set of consistent axiom systems is undecidable. (Though the set of inconsistent axiom systems is computably enumerable.)
What happens if the two sets of axioms are individually consistent, but together are inconsistent?
Your source code is your name. Having an additional name would be irrelevant. It is certainly possible for bots to prove they cooperate with a given bot, by looking at that particular bot's source. It would, as you say, be much harder for a bot to prove it cooperates with every bot equivalent to a given bot (in the sense of making the same cooperate/defect decisions vs. every opponent).
Rice's theorem may not be as much of an obstruction as you seem to indicate. For example, Rice's theorem doesn't prohibit a bot which proves that it defects against all defe...
What's wrong with "I will cooperate with anyone who verifiably asserts that they cooperate with me"? A program could verifiably assert that by being, e.g., cooperatebot. A program could also be written that cooperates with any opponent that provides it with a proof that it will cooperate.
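Here's a minimal Python sketch of the degenerate case, where the only "proof" accepted is the opponent's source being literally cooperatebot; a serious version would check an actual machine-verifiable proof object, and every name here is invented:

```python
COOPERATE_BOT = "def move(opp_src): return 'C'"  # unconditional cooperator

def move(opp_src):
    # Degenerate "verifiable assertion": if the opponent's source *is*
    # cooperatebot, its cooperation is verifiable by mere inspection.
    if opp_src.strip() == COOPERATE_BOT:
        return 'C'
    return 'D'
```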
Thanks. The logic the board uses to determine posts you've read seems strange.
Sorry about posting in the wrong open thread. I followed an "open thread" link, and this looked like it was the most recent open thread.
Why do some posts have pink borders?
I can't quite figure it out. I gather it has something to do with being new, since newer posts are more likely to be pink and every reply to a pink post seems to be pink. But it's not purely chronological (since some of the most recent comments do not have pink borders when I view the thread), and it's not purely based on being new since the last time you've viewed a thread (since I've seen pink borders around my own posts).
I think a programming language that only allows primitive recursion is a bad idea. One common pattern (which I think we want to allow) was for bots to simulate their opponents, which entails the ability to simulate arbitrary valid code. This would not be possible in a language which restricts to primitive recursion.
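For instance, here's a toy sketch of a mirror bot that must execute whatever source its opponent happens to be. This is not the tournament's actual interface; run, move, and the depth budget are all invented for illustration:

```python
# Toy bots are source strings defining move(my_src, opp_src, depth);
# run() executes one bot and returns its move.
def run(bot_src, opp_src, depth=0):
    if depth > 3:                     # budget so mutual simulation terminates
        return 'C'
    env = {'run': run}
    exec(bot_src, env)
    return env['move'](bot_src, opp_src, depth)

MIRROR = """
def move(my_src, opp_src, depth):
    # Simulate the opponent (which sees our source as *its* opponent) and copy it.
    return run(opp_src, my_src, depth + 1)
"""

DEFECT_BOT = "def move(my_src, opp_src, depth): return 'D'"

print(run(MIRROR, DEFECT_BOT))   # D -- mirrors the defector
print(run(MIRROR, MIRROR))       # C -- self-simulation bottoms out at the budget
```

The exec call is the sort of arbitrary-code interpretation at issue: a language restricted to primitive recursion can't express a general interpreter for arbitrary input programs.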
Yay, I wasn't last!
Still, I'm not surprised that laziness did not pay off. I wrote a simple bot, then noticed that it cooperated against defectbot and defected against itself. I thought to myself, "This is not a good sign." Then I didn't bother changing it.
Frankly, I find this hilarious.
I was going to make this same objection. The assertion that level 2 tasks are multiplicative with each other is not very plausible. It's obviously false that each typing class improves the typist's speed by 20%, since I can't take 10 typing classes and start typing at 400 words per minute. More likely, the gains from multiple typing classes are roughly linear for the first few and sublinear in the long run.
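(Worked out, assuming an illustrative 65 wpm starting speed: $65 \times 1.2^{10} \approx 65 \times 6.19 \approx 400$ wpm.)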
It is more plausible that level 2 tasks are multiplicative with level 1 tasks. If you get 20% faster at typing, you can transcribe audio 20% faster, and every level 1 transcription task you undertake now pays 20% better.
After the top 5 or 10 or so, rather than just presenting a list of articles, it may make more sense to split things up by topic. Being presented with a list of 100 articles is kind of intimidating. Being presented with five lists of twenty articles each on five different topics is less so, as it's easier to divide and conquer. Readers may be interested in some topics but not others (at least at first), or may decide to read a few articles on each topic.
Some natural subdivisions might be:
It's not necessarily best to cooperate with everyone using the "AbsolutismBot" wrapper. The wrapper guarantees that you and the other program both cooperate, but without it you might have defected while the other program cooperated, which is better for you (with the usual textbook payoffs, (D, C) pays you 5 where (C, C) pays only 3).
How do you enforce the 10% salary tithe?
One obvious difficulty in educating children for free and then expecting them to pay you back after they become educated is that, in most places, minors cannot enter into legally binding contracts. So the kid graduates, gets a great job (in a country that won't recognize the contract), and says, "I never agreed to pay you 10% of my salary, so I'm keeping it."
I would worry about the effect this may have on your credit rating if anyone catches you at it, along with possibly more serious consequences. This could potentially be considered fraud. Altogether, it seems much more sensible to simply live within your means and pay off your credit balance each month.
...it seems much more sensible...
This is the "ridiculous munchkin ideas" thread, not the "sensible advice you've already heard" thread.
This could potentially be considered fraud.
A more pertinent worry. Especially with cards that give a percentage of each purchase as "reward points" or something, I'd be worried about this.
Outside of mathematical logic, some familiar examples include:
Witty to be sure, but obviously false. The causal connection between baseball and the content (as opposed to the name) of the law is probably fairly tenuous. The number three is ubiquitous in all areas of human culture.
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.
The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.
Wouldn't explaining why the statement is misleading be more productive than suppressing the misleading statement?
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don't see the difference between Monday and Tuesday.
I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...
Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.
Does that story jibe with your understanding?
The difference is between amateur and professional ratings. Amateur dan ratings, just like kyu ratings, are designed so that a difference of n ranks corresponds to an n-stone handicap, but pro dan ratings are more bunched together.
See Wikipedia:Go pro.
I would be very interested if anyone has good examples of this phenomenon.
There are a few "triads" mentioned in the intellectual hipster article, but the only one that really seems to me like a good example of this phenomenon is the "don't care about Africa / give aid to Africa / don't give aid to Africa" triad.
This advice is worse than useless. But coming from someone who was instrumental in the "Physicists have figured a way to efficiently eradicate humanity; let's tell the politicians so they may facilitate!" movement, it's not surprising.
Protip: the maxim "That which can be destroyed by the truth, should be" does not mean we should publish secrets that have a chance of ending global civilization.
So I should interpret Will's "Omega = objective morality" comment as meaning "sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends"? I don't think so.
It's also completely ridiculous, with a sample size of ~10 questions, to give the success rate and the probability of being well calibrated as percentages with 12 decimal places. Since the uncertainty in a sample that small is on the order of fifteen percentage points, just round to the nearest percentage point.
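(With $n$ questions and success probability $p$, the binomial standard error is $\sqrt{p(1-p)/n}$; at $p = 0.5$ and $n = 10$ that's $\sqrt{0.025} \approx 16$ percentage points.)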
No, it's an annual rate. You quote it as an annual rate, and it matches the annual rate I found by repeating your search. So you need to multiply by seven to get the rate at which people commit suicide during the years they would, if they were Hogwarts students, be attending Hogwarts.
Except that students stay at Hogwarts for 7 years, not one, which would put the suicide rate at Hogwarts at one per 14 years, not one per century (if wizards commit suicide at the same rate as muggles). If you assumed that Wizarding suicide attempts were 5 times as likely to be successful, that would put the rate at one suicide every 3 years.
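Spelling out the arithmetic, taking the one-per-century figure as the single-cohort baseline: $7 \times \frac{1}{100\,\text{yr}} \approx \frac{1}{14\,\text{yr}}$, and with five-fold lethality $5 \times \frac{7}{100\,\text{yr}} \approx \frac{1}{3\,\text{yr}}$.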
Of course, it's entirely possible that the wizarding resilience to illness and injury also makes them more resilient to mental illness, and that's why suicide rates are lower.
It is trivial* to see that this game reduces to / is equivalent to a simple two-party prisoner's dilemma with full mutual information.
It only reduces to / is equivalent to a prisoner's dilemma for certain utility functions (what you're calling "values"). The prisoner's dilemma is characterized by the fact that there is a dominant-strategy equilibrium which is not Pareto optimal. But if the agents' utility functions make the game zero-sum, this can't be the case, since every outcome of a zero-sum game is Pareto optimal.
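Concretely, zero-sum means $u_1(s) + u_2(s) = 0$ for every outcome $s$, so any change that raises one player's payoff must lower the other's; no outcome Pareto-dominates another, and the prisoner's dilemma structure can't arise.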
Furthermore, ...
If you have beliefs about the matter already, push the "reset" button and erase that part of your map. You must feel that you don't already know the answer.
It seems like a bad idea to intentionally blank part of your map. If you already know things, you shouldn't forget them. On the other hand, if you have reason to doubt what you think you know, you should blank the suspect parts of your map when that doubt arises, not artificially as part of a procedure for generating curiosity.
I think what you may be trying t...
The decisions produced by any decision theory are not objectively optimal; at best they might be objectively optimal for a specific utility function. A different utility function will produce different "optimal" behavior, such as tiling the universe with paperclips. (Why do you think Eliezer et al. are spending so much effort trying to figure out how to design a utility function for an AI?)
I see the connection between omega and decision theories related to Solomonoff induction, but as the choice of utility function is more-or-less arbitrary, it doesn't give you an objective morality.
I'm very confused* about the alleged relationship between objective morality and Chaitin's omega. Could you please clarify?
*Or rather, if I'm to be honest, I suspect that you may be confused.
It is bad luck to be superstitious.
-Andrew W. Mathis
If a bad law is applied in a racist way, surely that's a problem with both the law itself and the justice system's enforcement of it?
Yeah, I was wondering about the downvotes. The welcome thread says that it's perfectly acceptable to ask for an explanation... So, for anyone who downvoted me, why?
Exactly. If you have determinism in the sense of a function from AI actions to resulting worlds, you can directly compute some measure of the difference between worlds X and X', where X is the result of AI inaction and X' is the result of some candidate AI action.
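A toy sketch of what I mean, using Conway's Life as the deterministic universe (life_step, impact, and the horizon are all illustrative choices, and "difference" here is just Hamming distance):

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Life: count the 8 neighbors, apply the rule."""
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)) - grid
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

def impact(grid, action, horizon=100):
    """Hamming distance between the futures with and without the action."""
    with_action, without = action(grid.copy()), grid.copy()
    for _ in range(horizon):
        with_action, without = life_step(with_action), life_step(without)
    return int(np.abs(with_action - without).sum())

rng = np.random.default_rng(0)
world = rng.integers(0, 2, size=(32, 32))
print(impact(world, lambda g: g * 0))   # wiping the grid out is high-impact
```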
As nerzhin points out, you can run into similar problems even in deterministic universes, including life, if the AI doesn't have perfect knowledge about the initial configuration or laws of the universe, or if the AI cares about differences between configurations that are so far into the future they are beyond the AI's ability to calculate. In this case, the universe might be deterministic, but the AI must reason in probabilities.
An unfriendly legal system might treat being born as a crime. In fact, I'd be surprised if some politician in Arizona hasn't tried to make being born to illegal immigrant parents a crime.
On a related note, I remember the day my PhD advisor (a computability theorist!) revealed that he believed the argument against AI from Gödel's incompleteness theorem. It was not reassuring.
Dawkins starts from the premise that there is high uncertainty about the outcome of the case, and concludes that there is high uncertainty about the guilt, which does not follow. Even if it is obvious to everyone that the defendant is very probably guilty, it may be far from obvious exactly how high the jury will estimate the probability of innocence, and where they will set the bar for reasonable doubt.*
*It has never been clear to me where this should be. If I put the credence of guilt at g, should I convict when g>.9? .99? .999? Should I say "to ...
What do you mean by "great (awful)"? Do you mean that the thought experiment itself is an awful argument against AI, but describing the argument is a good way to test how people think?
Maybe there is some true randomness in the universe
Not a problem.
I know it's not a problem. I explained exactly how to modify Solomonoff induction to handle universes that are generated randomly according to some computable law, as opposed to being generated deterministically according to an algorithm.
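For concreteness, the standard construction replaces the universal prior over deterministic programs with a mixture over the computable (semi)measures $\mu_i$:

$$\xi(x) = \sum_i 2^{-K(\mu_i)}\, \mu_i(x),$$

where $K(\mu_i)$ is the length of the shortest program computing $\mu_i$.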
Suppose you flip a quantum coin ten times. If you record the output, the K-complexity is ten bits.
Maybe it is, maybe it isn't. Maybe your definition of Kolmogorov complexity is such that the Kolmogorov complexity of every string is at least 3^^^3, b...
The assumption was that 80% of defendants are guilty, which is more than 4 of 8. Under this assumption, asking whether p(guilty|convicted) > 80% is just asking whether conviction positively correlates with guilt. Asking if p(innocent|acquitted) > 20% is just asking if acquittal positively correlates with innocence. These are really the same question, because P correlates with Q iff ¬P correlates with ¬Q.
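The symmetry follows from the identity $P(P \wedge Q) - P(P)P(Q) = P(\neg P \wedge \neg Q) - P(\neg P)P(\neg Q)$, which you can verify by expanding $P(\neg P \wedge \neg Q) = 1 - P(P) - P(Q) + P(P \wedge Q)$: one side is positive exactly when the other is.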
It proves that mistakes have been made, but in the end, no, I don't think it's terribly useful evidence for evaluating the rate of wrongful convictions. Why not? There have been 289 post-conviction DNA exonerations in US history, mostly in the last 15 years. That gives a rate of under 20 per year. Suppose 10,000 people a year are incarcerated for the types of crime that DNA exoneration is most likely to be possible for, namely murder and rape (I couldn't find exact figures, but I suspect the real number is at least this big). Then considering DNA exonerati...
DNA exoneration happens when one is innocent and a combination of extremely lucky circumstances makes retesting of the evidence possible. I would be shocked if the latter had better than a 1-in-100 chance.
To me, the entire argument sounds like a rationalization for not signing up for cryo.
Signed,
Someone who has rationalized a reason for not signing up yet for cryo, and suspects that the real reason is laziness.
I think the dichotomy between procedural knowledge and object knowledge is overblown, at least in the area of science. Scientific object knowledge is (or at least should be) procedural knowledge: it should enable you to A) predict what will happen in a given situation (e.g. if someone drops a Mentos into a bottle of Diet Coke) and B) predict how to set up a situation to achieve a desired result (e.g. produce pure L-glucose).