The lawyer wants both warm fuzzies and charitrons, but has conflated the two, and will probably get buzzkilled (and lose out on both measures) if the distinction is made clear. The best outcome is one where the lawyer gets to maximize both, and that happens at the end of a long road that begins with introspection about what warm fuzzies ought to mean.
It would probably be best to just remove all questions that contain certain key phrases like "this image" or "seen here". You'll get a few false positives but with such a big database that's no great loss.
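For concreteness, here's a minimal sketch of that kind of filter in Python; the phrase list beyond the two examples above, and all the names, are just illustrative assumptions.

```python
# A minimal sketch, assuming the questions live in a plain list of strings.
IMAGE_PHRASES = ["this image", "seen here"]  # extend with whatever phrases you find

def is_image_dependent(question: str) -> bool:
    """True if the question likely refers to an accompanying image."""
    lowered = question.lower()
    return any(phrase in lowered for phrase in IMAGE_PHRASES)

def filter_questions(questions):
    """Keep only questions that stand on their own without an image."""
    return [q for q in questions if not is_image_dependent(q)]

# Drops the first question, keeps the second.
print(filter_questions([
    "What animal is seen here?",
    "In what year did the French Revolution begin?",
]))
```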
Seconded on that video, it's cheesy but very straightforward and informative.
.ie zo'oru'e uinai (roughly: "Agreed" — half-joking, with a touch of dismay)
.i la kristyn casnu lo lojbo tanru noi cmima (roughly: "Kristyn discusses a Lojban tanru, which is a member ...")
While an interesting idea, I believe most people just call this "gambling".
I'm not sure what you're driving at here. A gambling system where everybody has a net expected gain is still a good use of randomness.
A human running quicksort with certain expectations about its performance might require a particular input distribution, but that requirement is a characteristic of the human's expectations, not of the software.
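To illustrate (a sketch with invented details): with a fixed pivot rule, quicksort's running time depends on how the input happens to be distributed, while a randomized pivot removes that dependence in expectation; either way, the "requirement" lives in the expectations about performance, not in the code.

```python
import random

def quicksort(xs, randomize_pivot=False):
    # With randomize_pivot=False, already-sorted input is a worst case
    # (quadratic comparisons); with randomize_pivot=True the expected cost
    # is O(n log n) regardless of the input's order.
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs) if randomize_pivot else xs[0]
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    return (quicksort(smaller, randomize_pivot)
            + equal
            + quicksort(larger, randomize_pivot))

print(quicksort([3, 1, 2]))  # [1, 2, 3]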
I think this may be a distinction without a difference; modularity can also be defined as human expectations about software, namely that the software will be relatively easy to hook into a larger system.
That might be a distinction without a difference; my preferences come partly from my instincts.
Here's a hacky solution. I suspect that it is actually not even a valid solution since I'm not very familiar with the subject matter, but I'm interested in finding out why.
The relationship between one's map and the territory is much easier to explain from outside than it is from the inside. Hypotheses about the maps of other entities can be expressed entirely as hypotheses about the territory, so long as they make predictions in terms of that entity's physical responses.
Therefore: can't we sidestep the problem by having the AI consider its future map state as a step in the midd...
The next best thing to have after a reliable ally is a predictable enemy.
-- Sam Starfall, FreeFall #1516
The evaluator, which determines the meaning of expressions in a program, is just another program.
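A tiny illustration of that idea (the expression representation here is an assumption of this sketch): an evaluator for a small arithmetic language is itself just an ordinary program.

```python
def evaluate(expr):
    """Evaluate expressions like 3, ('+', 1, 2), or ('*', ('+', 1, 2), 4)."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    return ops[op](evaluate(left), evaluate(right))

print(evaluate(('*', ('+', 1, 2), 4)))  # 12
```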
I've been trying very hard to read the paper at that link for a while now, but honestly I can't figure it out. I can't even find anything content-wise to criticize because I don't understand what you're trying to claim in the first place. Something about the distinction between map and territory? But what the heck does that have to do with ethics and economics? And why the (seeming?) presumption of Christianity? And what does any of that have to do with this graph-making software you're trying to sell?
It would really help me if you could do the following:
...Good point, probably the title should be "What is a good puzzle?" then.
That's interesting! I've had very different experiences:
When I'm trying to solve a puzzle and learn that it had no good answer (i.e. was just nonsense, not even rising to the level of trick question), it's very frustrating. It retroactively makes me unhappy about having spent all that time on it, even though I was enjoying myself at the time.
Scott Kim, What is a Puzzle?
Why does the hard takeoff point have to be after the point at which an AI is as good as a typical human at understanding semantic subtlety? In order to do a hard takeoff, the AI needs to be good at a very different class of tasks than those required for understanding humans that well.
So let's suppose that the AI is as good as a human at understanding the implications of natural-language requests. Would you trust a human not to screw up a goal like "make humans happy" if they were given effective omnipotence? The human would probably do about as well as people in the past have at imagining utopias: really badly.
So what is Mr. Turing's computer like? It has these parts:
Mr. Turing's Computer
In the old days, each computer could only do one kind of thing. One computer could add some numbers together, but nothing else. Another could find the smallest of some numbers, but nothing else. You could give them different numbers to work with, but the computer would always do the same kind of thing with them.
To make the computer do something else, you had to open it up and put all its pieces back in a different way. This was very hard and slow!
So a man named Mr. Babbage thought: what if some of the numbers you gave the computer wer...
The Halting Problem (Part One)
A plan is a list of things to do.
When a computer runs, it is doing the things that are written in a plan.
When you solve a problem like 23 × 3, you are also following a plan.
Plans are made of steps.
To follow a plan, you do what each plan step says to do, in the order they are written.
But sometimes a step can tell you to move to a different step in the plan, instead of the next one.
And sometimes it can tell you to do different things if you see something different.
It can say "Go back to step 4" ... or "If the wate...
There is no reason to assume that an AI with goals that are hostile to us, despite our intentions, is stupid.
Humans often use birth control to have sex without procreating. If evolution were a more effective design algorithm it would never have allowed such a thing.
The fact that we have different goals from the system that designed us does not imply that we are stupid or incoherent.
Why can't it weight actions based on what we as a society want/like/approve/consent/condone?
Human society would not do a good job being directly in charge of a naive omnipotent genie. Insert your own nightmare scenario examples here, there are plenty to choose from.
What I'm describing isn't really a utility function; it's more like a policy, or policy function. Its policy would be volatile, or at least more volatile than the set-in-stone utility function that LW commonly has in mind.
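Roughly, the distinction being drawn (sketched with invented toy types): a utility function scores outcomes and stays fixed, while a policy maps situations to actions and can itself be revised.

```python
from typing import Callable, Dict

UtilityFn = Callable[[str], float]  # outcome -> how good it is (held fixed)
Policy = Dict[str, str]             # situation -> what to do (can be revised)

utility: UtilityFn = lambda outcome: {"humans happy": 1.0,
                                      "humans unhappy": -1.0}.get(outcome, 0.0)

policy: Policy = {"asked for help": "help", "asked to stop": "stop"}
policy["asked to stop"] = "stop immediately"  # the policy itself is volatile
```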
What would be in charge of changing the policy?
I may not actually want to pay $1 per squirrel, but if I still want to want to, then that's as significant a part of my ethics as my desire to avoid being a wire-head, even though once I tried it I would almost certainly never want to stop.
How about "I don't know, but maybe it has something to do with X?"
I agree that this is a failure, though I do not think the problem is with the definition of privilege itself. As a parallel example: Social Darwinism (in some forms) assigns moral value to the utility function of evolution, and this is a pretty silly thing to do, but it doesn't reduce the explanatory usefulness of evolution.
Sure. Here's the most-viewed question on SO: http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array
If you click the score on the left, it splits into green and red, showing up and down votes respectively.
Interestingly, there are very few down-votes for such a popular question! But then again, it's an awfully interesting question, and in SO it costs you one karma point to downvote someone else.
Of course, one needs a definition of "potentially" crafted specifically for this claim.
Yes, good point: perhaps "socially permitted to be" is better than "potentially".
I agree that the parts of culture teaching (anyone) that rape is a socially acceptable action should be removed.
To be clear, the assertion is that some rape is taught to be socially acceptable. Violent rape and rape using illegal drugs are right out; we are talking about cases closer to the edge than the center, but which are still...
As one data-point: I am a straight male, and gender is more important to me than genitalia.
Seconded. StackOverflow shows this information, and it's frequently interesting.
Things are the way they are for reasons, not magic.
Who is claiming magical or otherwise nonsensical causes?
Could the person who voted down the parent comment please explain their reasoning? I am genuinely curious.
From a typical online discussion with a feminist, I get the impression that every man is a rapist, and that men constructed the whole society to help each other get away with their crimes.
This strikes me as being a strawman, or as an indication that the feminists you have been talking to are either poor communicators or make very different statements than I am used to from feminist discussions online. (To be clear: Both of these are intended as serious possibilities, not as snark. Or as they say in Lojban: zo'onai, "not kidding.")
Discussing each part individually:
I like your examples, and recognize the problem you point out, but I don't agree with your conclusion.
The problem with counter-arguments of the form "Well, if we changed this one variable of a social system to a very different value, X would break!" is that variables like that usually change slowly, with only a small number of people fully and quickly adopting any change, and the rest moving along with the gradually shifting Overton window.
Additionally, having a proposed solution that involves changing a large number of things should probably set off warning alarms in your head: such solutions are more difficult to implement and have a greater number of working parts.
Available evidence seems to point to the contrary, unless you are using a quite high value for "sufficiently", higher than the one used by fowlertm in the quoted phrase.
Orthogonality has to claim that the typical, statistically common kind of agent could have arbitrary goals
I'm not sure what you mean by "statistically common" here. Do you mean a randomly picked agent out of the set of all possible agents?
But it requires active, exclusive use of time to go to a library, loan out a book, and bring it back (and additional time to return it), whereas I can do whatever while the book is en route.
Most atheists alieve in God and trust him to make the future turn out all right (i.e. they expect the future to magically be OK even if no one deliberately makes it so).
The statement in parentheses seems to contradict the one outside. Are you over-applying the correlation between magical thinking and theism?
Even if you don't know which port you're going to, a wind that blows you to some port is more favorable than a wind that blows you out towards the middle of the ocean.
I'm not really sure what you're driving at here. We don't have any software even close to being able to pass the TT right now; at the moment, using relatively easy subsets of the TT is the most useful thing to do. That doesn't mean that anyone expects that passing such a subset counts as passing the general TT.
But you can keep on adding specifics to a subject until you arrive at something novel. I don't think it would even be that hard: just Google the key phrases of whatever you're about to say, and if you get back results that could be smooshed into a coherent answer, then you need to keep changing it up or complicating it.
I would want them to alert hotel security and/or call the police.
He needs to have a second gun ready so that he can get as many shots off as possible before having to reload.
He isn't assembling the gun out of a backpack, but from a backpack: specifically, from gun parts which are inside the backpack.
Hello, Lumifer! Welcome to smart-weird land. We have snacks.
So you say you have no burning questions, but here's one for you: as a new commenter, what are your expectations about how you'll be interacting with others on the site? It might be interesting to note those now, so you can compare later.
So I may as well discount all probability lines in which the evidence I'm seeing isn't a valid representation of an underlying reality.
But that would destroy your ability to deal with optical illusions and misdirection.
Sounds fine to me. Consider it this way: whether or not you "win the debate" from the perspective of some outside audience, or from our perspective, isn't important. It's more about whether you feel like you might benefit from the conversation yourself.
Yep, agreed. We have a lot more historical examples of dictators (of various levels of effectiveness) who were in it for themselves, and either don't care if their citizens suffer or even actively prefer it. Such dictators would be worse for the world if they became more rational, because their goals make the world a shittier place.
You keep using that word, etc. etc.
Rational means something like "figures out what the truth is, and figures out the best way to get stuff done, and does that thing". It doesn't require any particular goal.
So a rational dictator whose goals include their subjects having lots of fun would be fun to live under.
Ask too much of your subjects, and they start wondering if maybe it would be less trouble to just replace you by force.
Best hope they've found (or built) a better dictator to replace them...
Taboo "faith", what do you mean specifically by that term?