All of DSimon's Comments + Replies

DSimon20

Taboo "faith", what do you mean specifically by that term?

0mwengler
Good idea. I mean that EVERYBODY, rationalist atheist and Christian alike, starts with an axiom or assumption. In the case of rationalist atheists (or at least some such as myself) the axioms started with are things like: 1) truth is inferred with semi-quantifiable confidence from evidence supporting hypotheses, 2) explanations like "god did it" or "alpha did it" or "a benevolent force of the universe did it" are disallowed.

I think some people are willing to go circular: allow the axioms to remain implicit and then "prove" them along the way: "I see no evidence for a conscious personality with supernatural powers." But I do claim that is circular; you can't prove anything without knowing how you prove things, and so you can't prove how you prove things by applying how you prove things without being circular.

So for me, I support my rationalist atheist point of view by appealing to the great success it has had in advancing engineering and science. By pointing to the richness of the connections to data, the "obvious" consistency of geology with a 4-billion-year-old earth, the "obvious" consistency of evolution from common ancestors of similar structures across species right down to the ADP-ATP cycle and DNA.

But a theist is doing the same thing. They START with the assumption that there is a powerful conscious being running both the physical and the human worlds. They marvel at the brilliance of the design of life to support their claim, even though it can't prove their axioms. They marvel at the richness of the human moral and emotional world as more support for the richness and beauty of a conscious and good creation.

Logically, there is no logic without assumptions. Deduction needs something to deduce from. I like Occam's razor and naturalism because my long exposure to them leaves me feeling very satisfied with their ability to describe many things I think are important. Other people like theism because their long exposure to it leaves them feeling very satisfied with its ab…
DSimon10

The lawyer wants both warm fuzzies and charitrons, but has conflated the two, and will probably get buzzkilled (and lose out on both measures) if the distinction is made clear. The best outcome is one where the lawyer gets to maximize both, and that happens at the end of a long road that begins with introspection about what warm fuzzies ought to mean.

DSimon10

It would probably be best to just remove all questions that contain certain key phrases like "this image" or "seen here". You'll get a few false positives but with such a big database that's no great loss.
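A minimal sketch of that filtering idea (the phrase list and question format here are my own made-up examples, not from any actual trivia database):

```python
# Hypothetical sketch: drop questions that reference visuals the player
# can't see. Extend VISUAL_PHRASES as new false negatives turn up.
VISUAL_PHRASES = ("this image", "seen here", "pictured here", "shown above")

def is_usable(question: str) -> bool:
    q = question.lower()
    return not any(phrase in q for phrase in VISUAL_PHRASES)

questions = [
    "What is the capital of France?",
    "Name the painter of the artwork seen here.",
    "Which element does this image's symbol represent?",
]
usable = [q for q in questions if is_usable(q)]
# usable keeps only the first question
```

Substring matching like this will occasionally discard a perfectly good question, but as noted, a few false positives are cheap when the database is large.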

DSimon00

Seconded on that video, it's cheesy but very straightforward and informative.

DSimon10

.i la kristyn casnu lo lojbo tanru noi cmima

3[anonymous]
.ila'a lo'e lojbo tanru cu go'e .iepei
DSimon20

While an interesting idea, I believe most people just call this "gambling".

I'm not sure what you're driving at here. A gambling system where everybody has a net expected gain is still a good use of randomness.
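A toy illustration of how a gamble can have positive expected value for every player (the specific numbers are my own, purely for arithmetic): if outside money is added to the pot, each ticket's expected payout exceeds its price.

```python
# A "lottery" where a sponsor adds money to the pot, so every ticket
# has positive expected value even though the outcome is random.
TICKET_PRICE = 1.00
PLAYERS = 100
SPONSOR_BONUS = 50.00                       # outside money added to the pot
POT = PLAYERS * TICKET_PRICE + SPONSOR_BONUS

# Each player's expected winnings minus the ticket price:
expected_gain = POT / PLAYERS - TICKET_PRICE   # 1.50 - 1.00 = 0.50
```

Every player has a net expected gain of $0.50, yet the mechanism is still a gamble: randomness decides who actually wins.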

-5Lumifer
DSimon10

A human running quicksort with certain expectations about its performance might require a particular distribution, but that's not a characteristic of software.

I think this may be a distinction without a difference; modularity can also be defined as human expectations about software, namely that the software will be relatively easy to hook into a larger system.

1Lumifer
I don't find this a useful definition, but YMMV, de gustibus, and all that...
DSimon10

That might be a distinction without a difference; my preferences come partly from my instincts.

2brazil84
Well I think it's analogous to the difference between liking and wanting, as described here: http://lesswrong.com/lw/1lb/are_wireheads_happy/ If there is a distinction between wanting and liking, then arguably there is a distinction between disliking and "not wanting."
DSimon10

Here's a hacky solution. I suspect that it is actually not even a valid solution since I'm not very familiar with the subject matter, but I'm interested in finding out why.

The relationship between one's map and the territory is much easier to explain from outside than it is from the inside. Hypotheses about the maps of other entities can be complete as hypotheses about the territory if they make predictions based on that entity's physical responses.

Therefore: can't we sidestep the problem by having the AI consider its future map state as a step in the midd... (read more)

DSimon90

The next best thing to have after a reliable ally is a predictable enemy.

-- Sam Starfall, FreeFall #1516

DSimon20

The evaluator, which determines the meaning of expressions in a program, is just another program.

-- Structure and Interpretation of Computer Programs
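The SICP point can be made concrete with a few lines: an evaluator that determines the meaning of (toy) expressions, itself written as an ordinary program. This sketch handles only prefix arithmetic tuples, a far cry from SICP's full metacircular evaluator.

```python
# The evaluator is just another program: a toy interpreter for
# expressions like ("+", 1, ("*", 2, 3)).
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, (int, float)):   # a number evaluates to itself
        return expr
    op, left, right = expr               # otherwise: (operator, arg, arg)
    return OPS[op](evaluate(left), evaluate(right))

result = evaluate(("+", 1, ("*", 2, 3)))
```

`result` is 7: the nested `("*", 2, 3)` is evaluated recursively before the addition, exactly the way any evaluator recurses on subexpressions.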

DSimon10

I've been trying very hard to read the paper at that link for a while now, but honestly I can't figure it out. I can't even find anything content-wise to criticize because I don't understand what you're trying to claim in the first place. Something about the distinction between map and territory? But what the heck does that have to do with ethics and economics? And why the (seeming?) presumption of Christianity? And what does any of that have to do with this graph-making software you're trying to sell?

It would really help me if you could do the following:

... (read more)
1Shmi
His pile of CRPA reads like autogenerated entries from http://snarxiv.org/.
0[anonymous]
Thanks for your reply, DSimon. I like Yudkowsky's story. Truth, as I understand it, is simply that which can be detected by independent confirmation, if we look for it. It is the same methodology as is used in science, justice, journalism, and other realms.
DSimon00

Good point, probably the title should be "What is a good puzzle?" then.

DSimon20

That's interesting! I've had very different experiences:

When I'm trying to solve a puzzle and learn that it had no good answer (i.e. was just nonsense, not even rising to the level of trick question), it's very frustrating. It retroactively makes me unhappy about having spent all that time on it, even though I was enjoying myself at the time.

2TheOtherDave
I certainly agree that being made to treat nonsense as though it were sense is frustrating. And, sure, if things either have a right answer or are nonsense, then I agree with you, and with Scott Kim. Nonsense is not a puzzle. But I'm not sure that's true. I'm also not sure that replacing "a right answer" with "a good answer" as you just did preserves meaning. For example, I'm not sure there's a right answer to all puzzling questions about, say, human behavior, or ethics. There are good answers, though, and the questions themselves aren't all nonsense.
DSimon50

Scott Kim, What is a Puzzle?

  1. A puzzle is fun,
  2. and it has a right answer.

http://www.scottkim.com/thinkinggames/whatisapuzzle/

3wedrifid
I am dubious about any definition of "puzzle" for which the claim "This puzzle is not fun" is tautologically false, regardless of either the speaker or the puzzle in question.
2TheOtherDave
I disagree about #2, incidentally. It's a puzzle if I'm having fun trying to solve it.
DSimon40
  1. Why does the hard takeoff point have to be after the point at which an AI is as good as a typical human at understanding semantic subtlety? In order to do a hard takeoff, the AI needs to be good at a very different class of tasks than those required for understanding humans that well.

  2. So let's suppose that the AI is as good as a human at understanding the implications of natural-language requests. Would you trust a human not to screw up a goal like "make humans happy" if they were given effective omnipotence? The human would probably do about as well as people in the past have at imagining utopias: really badly.

8Broolucks
Semantic extraction -- not hard takeoff -- is the task that we want the AI to be able to do. An AI which is good at, say, rewriting its own code, is not the kind of thing we would be interested in at that point, and it seems like it would be inherently more difficult than implementing, say, a neural network. More likely than not, this initial AI would not have the capability for "hard takeoff": if it runs on expensive specialized hardware, there would be effectively no room for expansion, and the most promising algorithms to construct it (from the field of machine learning) don't actually give the AI any access to its own source code (even if they did, it is far from clear the AI could get any use out of it). It couldn't copy itself even if it tried. If a "hard takeoff" AI is made, and if hard takeoffs are even possible, it would be made after that, likely using the first AI as a core.

I wouldn't trust a human, no. If the AI is controlled by the "wrong" humans, then I guess we're screwed (though perhaps not all that badly), but that's not a solvable problem (all humans are the "wrong" ones from someone's perspective). Still, though, the AI won't really try to act like humans -- it would try to satisfy them and minimize surprises, meaning that it would keep track of which humans would like which "utopias". More likely than not this would constrain it to inactivity: it would not attempt to "make humans happy" because it would know the instruction to be inconsistent. You'd have to tell it what to do precisely (if you had the authority, which is a different question altogether).
DSimon00

So what is Mr. Turing's computer like? It has these parts:

  1. The long piece of paper. The paper has lines on it like the kind of paper you use in numbers class at school; the lines mark the paper up into small parts, and each part has only enough room for one number. Usually the paper starts out with some numbers already on it for the computer to work with.
  2. The head, which reads from and writes numbers onto the paper. It can only use the space on the paper that is exactly under it; if it wants to read from or write on a different place on the paper, the who
... (read more)
DSimon110

Mr. Turing's Computer

Computers in the past could only do one kind of thing at a time. One computer could add some numbers together, but nothing else. Another could find the smallest of some numbers, but nothing else. You could give them different numbers to work with, but the computer would always do the same kind of thing with them.

To make the computer do something else, you had to open it up and put all its pieces back in a different way. This was very hard and slow!

So a man named Mr. Babbage thought: what if some of the numbers you gave the computer wer... (read more)

0DSimon
So what is Mr. Turing's computer like? It has these parts:

  1. The long piece of paper. The paper has lines on it like the kind of paper you use in numbers class at school; the lines mark the paper up into small parts, and each part has only enough room for one number. Usually the paper starts out with some numbers already on it for the computer to work with.
  2. The head, which reads from and writes numbers onto the paper. It can only use the space on the paper that is exactly under it; if it wants to read from or write on a different place on the paper, the whole head has to move up or down to that new place first. Also, it can only move one space at a time.
  3. The memory. Our computers today have lots of memory, but Mr. Turing's computer has only enough memory for one thing at a time. The thing being remembered is the "state" of the computer, like a "state of mind".
  4. The table, which is a plan that tells the computer what to do when it is in each state. There are only so many different states that the computer might be in, and we have to put them all in the table before we run the computer, along with the next steps the computer should take when it reads different numbers in each state.

Looking closer, each line in the table has five parts, which are:

  * If Our State Is this
  * And The Number Under Head Is this
  * Then Our Next State Will Be this (or maybe the computer just stops here)
  * And The Head Should write this
  * And Then The Head Should move this way

Here's a simple table:

  Happy 1 Happy 1 Right
  Happy 2 Happy 1 Right
  Happy 3 Sad 3 Right
  Sad 1 Sad 2 Right
  Sad 2 Sad 2 Right
  Sad 3 Stop

Okay, so let's say that we have one of Mr. Turing's computers built with that table. It starts out in the Happy state, and its head is on the first number of a paper like this:

  1 2 1 1 2 1 3 1 2 1 2 2 1 1 2 3

What will the paper look like after the computer is done? Try pretending you are the comput…
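The machine in that comment is small enough to sketch directly in code. This assumes each table row reads (state, number under head) -> (next state, number to write, which way to move), with "Stop" meaning halt:

```python
# A sketch of the Happy/Sad machine from the table above.
# Each rule: (state, number read) -> (next state, number to write, move).
TABLE = {
    ("Happy", 1): ("Happy", 1, +1),
    ("Happy", 2): ("Happy", 1, +1),
    ("Happy", 3): ("Sad",   3, +1),
    ("Sad",   1): ("Sad",   2, +1),
    ("Sad",   2): ("Sad",   2, +1),
    ("Sad",   3): None,               # Stop
}

def run(tape, state="Happy", head=0):
    while True:
        rule = TABLE[(state, tape[head])]
        if rule is None:              # the table says to stop here
            return tape
        state, write, move = rule
        tape[head] = write
        head += move

paper = [1, 2, 1, 1, 2, 1, 3, 1, 2, 1, 2, 2, 1, 1, 2, 3]
final = run(paper)
```

Running it answers the puzzle: every number before the first 3 becomes a 1, and every number between the two 3s becomes a 2.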
0[anonymous]
I'm actually surprised that Turing machines were invented before anyone ever built an actual computer.

The Halting Problem (Part One)

A plan is a list of things to do.
When a computer runs, it is doing the things that are written in a plan.
When you solve a problem like 23 × 3, you are also following a plan.

Plans are made of steps.
To follow a plan, you do what each plan step says to do, in the order they are written.
But sometimes a step can tell you to move to a different step in the plan, instead of the next one.
And sometimes it can tell you to do different things if you see something different.
It can say "Go back to step 4" ... or "If the wate... (read more)
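A tiny "plan follower" in the spirit of the comment above: steps run in order, but a step may send you back to an earlier step, or branch on what it sees. The countdown plan itself is my own made-up example.

```python
# Follow a plan: a list of steps. Each step may return a step number
# to jump to ("Go back to step 1"); returning None means "next step".
def follow(plan, memory):
    step = 0
    trace = []
    while step < len(plan):
        next_step = plan[step](memory, trace)
        step = next_step if next_step is not None else step + 1
    return trace

def say(word):
    def act(memory, trace):
        trace.append(word)
    return act

def count_down(memory, trace):
    trace.append(str(memory["n"]))
    memory["n"] -= 1
    if memory["n"] > 0:
        return 0          # "Go back to step 1" (index 0 in the plan)

plan = [count_down, say("liftoff")]
result = follow(plan, {"n": 3})
```

`result` comes out as `["3", "2", "1", "liftoff"]`: the jump in `count_down` makes the plan loop until its condition runs out, which is exactly the ingredient that makes the halting question interesting.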

DSimon170

There is no reason to assume that an AI with goals that are hostile to us, despite our intentions, is stupid.

Humans often use birth control to have sex without procreating. If evolution were a more effective design algorithm, it would never have allowed such a thing.

The fact that we have different goals from the system that designed us does not imply that we are stupid or incoherent.

6Rob Bensinger
Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous. We can't confidently generalize from one failure that evolution fails at everything; analogously, we can't infer from the fact that a programmer failed to make an AI Friendly that it almost certainly failed at making the AI superintelligent. (Though we may be able to infer both from base rates.)
DSimon10

Why can't it weight actions based on what we as a society want/like/approve/consent/condone?

Human society would not do a good job being directly in charge of a naive omnipotent genie. Insert your own nightmare scenario examples here, there are plenty to choose from.

What I'm describing isn't really a utility function, it's more like a policy, or policy function. Its policy would be volatile, or at least, more volatile than the common understanding LW has of a set-in-stone utility function.

What would be in charge of changing the policy?

0Transfuturist
But that doesn't describe humanity being directly in charge. It only describes a small bit of influence for each person, and while groups would have leverage, that doesn't mean a majority rejecting, say, homosexuality, gets to say what LGB people can and can't do/be. The metautility function I described. What is a society's intent? What should a society's goals be, and how should it relate to the goals of its constituents?
DSimon00

I may not actually want to pay $1 per squirrel, but if I still want to want to, then that's as significant a part of my ethics as my desire to avoid being a wire-head, even though once I tried it I would almost certainly never want to stop.

0aelephant
I would rather observe you & see what you do to avoid becoming a wirehead. I'd put saying you want to avoid becoming a wirehead & saying you want to want to pay to save the squirrels in the same camp -- totally unprovable at this point in time. In the future maybe we can scan your brain & see which of your stated preferences you are likely to act on; that'd be extremely cool, especially if we could scan politicians during their campaigns.
DSimon40

How about "I don't know, but maybe it has something to do with X?"

DSimon50

I agree that this is a failure, though I do not think the problem is with the definition of privilege itself. As a parallel example: Social Darwinism (in some forms) assigns moral value to the utility function of evolution, and this is a pretty silly thing to do, but it doesn't reduce the explanatory usefulness of evolution.

DSimon10

Sure. Here's the most-viewed question on SO: http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array

If you click the score on the left, it splits into green and red, showing up and down votes respectively.

Interestingly, there are very few down-votes for such a popular question! But then again, it's an awfully interesting question, and in SO it costs you one karma point to downvote someone else.

DSimon70

Of course, one needs a definition of "potentially" crafted specifically for the purpose of this specific claim.

Yes, good point: perhaps "socially permitted to be" is better than "potentially".

I agree that the parts of culture teaching (anyone) that rape is a socially acceptable action should be removed.

To be clear, the assertion is that some rape is taught to be socially acceptable. Violent rape and rape using illegal drugs is right out; we are talking about cases closer to the edge than the center, but which are still... (read more)

2NancyLebovitz
The problem there is that frequently privilege is taken to mean, not just ignorance, but that pain which a non-privileged person causes a privileged person should be treated as irrelevant.
DSimon00

As one data-point: I am a straight male, and gender is more important to me than genitalia.

DSimon30

Seconded. StackOverflow shows this information, and it's frequently interesting.

2ShannonFriedman
Would you mind pasting a link for this? I'd love to know exact numbers.
DSimon10

Things are the way they are for reasons, not magic.

Who is claiming magical or otherwise non-sensical causes?

DSimon10

Could the person who voted down the parent comment please explain their reasoning? I am genuinely curious.

7Richard_Kennaway
I notice that after the first few most recent of her comments, which have not yet received any votes either way, almost all the rest of the first page of her comments have received exactly one downvote, but varying numbers of upvotes. I suspect the downvotes are all due to one person, who has decided to object to whatever she posts. Perhaps the same person who has been downvoting everything that ialdabaoth posts.
DSimon30

From a typical online discussion with a feminist, I get an idea that every man is a rapist, and that men constructed the whole society to help each other get away with their crimes.

This strikes me as being a strawman, or as an indication that the feminists you have been talking to are either poor communicators or make very different statements than I am used to from feminist discussions online. (To be clear: Both of these are intended as serious possibilities, not as snark. Or as they say in Lojban: zo'onai )

Discussing each part individually:

[...] ev

... (read more)
9Viliam_Bur
Of course, one needs a definition of "potentially" crafted specifically for the purpose of this specific claim. Otherwise, it could be argued that all women are potentially rapists, too.

I agree that the parts of culture teaching (anyone) that rape is a socially acceptable action should be removed. (By which I mean, if it is shown that they really teach that, not just that someone is able to find an analogy between something and something else.)

Yes, it does. And I think female rapists have it even easier in our society. Don't they? By the way, I also think Islam makes it even easier for the male rapists. (Technically, Islam could be considered a part of the male privilege, but I mean the safety bonus a male rapist gets in a Western society merely for being male is smaller than the additional safety bonus he gets for being a Muslim in a Muslim community.)

I am not aware of mainstream feminists saying that loudly. (Which could be a statement about my ignorance.)

To say it explicitly, I think that different kinds of people have different kinds of privileges. Which does not mean that all privileges are equal or symmetrical. It just means privileges are not black-and-white; that if a group has a specific privilege, it does not prove that people outside of that group don't have another specific privilege.

As far as I know, feminists partially acknowledge that recently, by using the word "kyriarchy". Kyriarchy means that not all privilege is male privilege; you can also have white privilege, rich privilege, majority-religion privilege, etc. But it does not seem to mean yet that you can have a female privilege, a minority privilege, an atheist privilege, etc. Instead of one black-and-white view we have multiple overlapping black-and-white views along different axes. (From the simplistic "women good, men bad", we have progressed to a more nuanced perception of society: "women good, men bad, but rich white women also a little bad, etc.".)

According to this model, it w…
DSimon170

I like your examples, and recognize the problem you point out, but I don't agree with your conclusion.

The problem with counter-arguments of the form "Well, if we changed this one variable of a social system to a very different value, X would break!" is that variables like that usually change slowly, with only a small number of people fully and quickly adopting any change, and the rest moving along with the gradually shifting Overton window.

Additionally, having a proposed solution that involves changing a large number of things should probably set off warning alarms in your head: such solutions are more difficult to implement and have a greater number of working parts.

1[anonymous]
Um...maybe I'm misreading, but I think you're agreeing. OP: vs
3ShannonFriedman
Thank you for the thoughtful response. I want to note that my solution is not to change many variables simultaneously and casually -- to the contrary, I'm saying that I want to avoid oversimplification. I'm thinking more multi-variable experiments are probably good, and more thought experiments, when changing large systems. But in general, it's just really hard.

As one example of how I see things going wrong: I think a lot of really good changes end up failing/getting rejected because people actually have correct thoughts about changing one variable, but because they only change that variable and have it fail in the current system, they discard the idea. This actually ends up slowing down good change a lot, since people are inaccurately thinking that they are proving good ideas false.

I'm a proponent of more careful thought and being slow to think one has the right solution to complex problems, even when one has something that appears to be an answer.
DSimon40

Available evidence seems to point to the contrary, unless you are using a quite high value for "sufficiently", higher than the one used by fowlertm in the quoted phrase.

4MugaSofer
Well, I'm basing this on having seen comments talking about how people had already figured out large parts of the sequences, and my own experience of having independently come up with large portions under different names and being very pleased to find someone had already done so and was now building on them. There's probably a degree of availability bias and so on, it was just my general impression.
DSimon00

Orthogonality has to claim that the typical, statistically common kind of agent could have arbitrary goals

I'm not sure what you mean by "statistically common" here. Do you mean a randomly picked agent out of the set of all possible agents?

0Juno_Watt
I mean likely to be encountered, likely to evolve or to be built (unless you are actually trying to build a Clippy)
DSimon10

But it requires active, exclusive use of time to go to a library, loan out a book, and bring it back (and additional time to return it), whereas I can do whatever while the book is en route.

2wuncidunci
That is true. However according to my experience you don't need to spend much time in the library itself if you know what you're looking for (you can always stay for the atmosphere). What takes time is going to and from the library. The value of this time obviously depends on a lot of parameters: is the library close to your route to/from some other place, are you currently very busy, do you enjoy city walks/bike-rides, etc.
DSimon80

Most atheists alieve in God and trust him to make the future turn out all right (ie they expect the future to magically be ok even if no one deliberately makes it so).

The statement in parentheses seems to contradict the one outside. Are you over-applying the correlation between magical thinking and theism?

1Vaniver
The implication is "no one human"- that is, the atheists in question still live in a positive universe rather than a neutral one, but don't have an explanation for the positivity.
DSimon10

Even if you don't know which port you're going to, a wind that blows you to some port is more favorable than a wind that blows you out towards the middle of the ocean.

3cody-bryce
That's only true if you prefer ports reached sooner or ports on this side of the ocean.
DSimon00

I'm not really sure what you're driving at here. We don't have any software even close to being able to pass the TT right now; at the moment, using relatively easy subsets of the TT is the most useful thing to do. That doesn't mean that anyone expects that passing such a subset counts as passing the general TT.

-2MugaSofer
I was just noting that current "Turing Tests" are exactly what was being used as an example of something-that-is-not-a-Turing-test. It's mildly ironic, that's all.
DSimon20

But you can keep on adding specifics to a subject until you arrive at something novel. I don't think it would even be that hard: just Google the key phrases of whatever you're about to say, and if you get back results that could be smooshed into a coherent answer, then you need to keep changing up or complicating.

DSimon00

I would want them to alert hotel security and/or call the police.

DSimon30

He needs to have a second gun ready so that he can get as many shots off as possible before having to reload.

He isn't assembling the gun out of a backpack, but from a backpack: specifically, from gun parts which are inside the backpack.

1Kindly
Apparently at least one of my questions was a stupid question, but thank you anyway.
DSimon30

Hello, Lumifer! Welcome to smart-weird land. We have snacks.

So you say you have no burning questions, but here's one for you: as a new commenter, what are your expectations about how you'll be interacting with others on the site? It might be interesting to note those now, so you can compare later.

2Lumifer
Hm, an interesting question. In the net space I generally look like an irreverent smartass (in the meatspace too, but much attenuated by real relationships with real people). So on forums where I hang out, maybe about 10% of the regulars like me, about a quarter hate me, and the rest don't care. One of the things I'm curious about is whether LW will be different. Or maybe I will be different -- I can argue that my smartassiness is just allergy to stupidity. Whether that's true or not depends on the value of "true", of course...
DSimon20

So I may as well discount all probability lines in which the evidence I'm seeing isn't a valid representation of an underlying reality.

But that would destroy your ability to deal with optical illusions and misdirection.

1victordrake
Perhaps I should say ...in which I can't reasonably expect to GET evidence entangled with an underlying reality.
DSimon00

Sounds fine to me. Consider it this way: whether or not you "win the debate" from the perspective of some outside audience, or from our perspective, isn't important. It's more about whether you feel like you might benefit from the conversation yourself.

DSimon20

Yep, agreed. We have a lot more historical examples of dictators (of various levels of effectiveness) who were in it for themselves, and either don't care if their citizens suffer or even actively prefer it. Such dictators would be worse for the world if they get more rational, because their goals make the world a shittier place.

-4Juno_Watt
ETA: It's not a mysterious empirical fact that benevolent dictators don't exist. Where is there a ready supply of people who don't get corrupted by absolute power? How do you test that in advance? Why would someone who has enjoyed untrammelled power for a certain period meekly hand back the keys? Benevolent dictators are the magic wands of political science. They have every advantage except actually existing.
DSimon80

You keep using that word, etc. etc.

Rational means something like "figures out what the truth is, and figures out the best way to get stuff done, and does that thing". It doesn't require any particular goal.

So a rational dictator whose goals include their subjects having lots of fun, would be fun to live under.

-5Juno_Watt
DSimon60

Ask too much of your subjects, and they start wondering if maybe it would be less trouble to just replace you by force.

-5Juno_Watt
DSimon00

Best hope they've found (or built) a better dictator to replace them...
