All of peterward's Comments + Replies

I'm not sure I'm clear on the AI/AGI distinction. Wouldn't an AI need to be able to apply its intelligence to novel situations to be "intelligent" at all, therefore making its intelligence "general" by definition? Watson winning Jeopardy! was a testament to software engineering, but Watson was programmed specifically to play Jeopardy!. If, without modification, it could go on to dominate Settlers of Catan, then we might want to start worrying.

I guess it's natural that IQ tests would be chosen. They are objective and feature a logic a co... (read more)

I just don't think there are many features of human social organization that can be usefully described by a one-dimensional array, the alleged left-right political divide perhaps being the canonical example. Take two books I have on my Kindle: Sirens of Titan and Influx. While one can truly say the latter is a vastly more terrible book than the former, it would be absurd to say they--and every other book I've read--should be placed in a stack that uniquely ranks them against one another. And it's not a matter of comparing apples and oranges--because you can c... (read more)

"He predicts that unconscious signals of a stable environment will increase self-control, which helps explains why high social-economic status correlates strongly with self-control."

What evidence is there that this is true? For what anecdotage is worth (which is probably the only evidence there is on the matter), some of the most out-of-control people I've met have been rich kids. Showing up to a 10-hour shift at a low-wage retail job every day with a smile on your face even though you have medical bills you can't pay--that's real self-control. M... (read more)

2joaolkf
There's good evidence that socioeconomic status correlates positively with Self-Control. There is also good evidence that people with high socioeconomic status live in a more stable environment during childhood. The signals of a stable environment correlating with Self-Control is his speculation as far as I'm aware, but in light of the data it seems plausible. I agree they would function better in a crisis, but a crisis is a situation where fast response matters more than self-control. In a crisis you will take actions that are probably wrong during stable periods. I would go on to say, as my own speculation, that hardship - all else being equal - makes people worse.

This is definitely incidental--

Wouldn't a super intelligent, resource gathering agent simply figure out the futility of its prime directive and abort with some kind of error? Surely it would realize it exists in a universe of limited resources and that it had been given an absurd objective. I mean maybe it's controlled by some sort of "while resources exist consume resources" loop that is beyond its free will to break out of--but if so, should it be considered an "agent"?

Contra humans, who for the moment are electing to consume themselves to extinction, resource-consuming AIs would, if anything, be comparatively benign.

0Stuart_Armstrong
"Futility of prime directive" is a values question, not an intelligence question. See http://lesswrong.com/lw/h0k/arguing_orthogonality_published_form/
peterward-10

Isn't a "boolean" right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn't it promise to select for us the right choice among a collection of alternatives? If the best outcomes can be ranked--by global goodness, or whatever standard--then logically there is a winner or set of winners which one may, without guilt, indifferently choose from.

3SilentCal
From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you're going to be. A utility function answers the first part. If you're a committed maximizer, you have your answer to the second part. Most of us aren't, so we have a tough decision there that the utility function doesn't answer.
peterward-10

I personally think there's not a lot of hope for animals as long as humans can't sort out their own mess. On the other hand, I don't think there is much hope for humanity as long as altruism stands in for actually taking responsibility. The very social system that puts $5 in our pockets to donate creates those who depend on our charity.

peterward-20

Probably the time wasted on the cost/benefit analysis was more costly--all told--than either branch of the flow chart. Having said that, I suspect the real objective of these exercises is quite different than the ostensible one.

It also takes no shortage of conceit to imagine one knows better than the majority of people. Lots of individuals flit between business and politics--GHW Bush is a major owner of a gold mine where I'm from.* But an honest person isn't going to go into politics, because they understand the fundamental lie doing so requires.

*'Fact, I'd wager the two are strongly correlated--though I'm not privy to the correlation data you are.

Probably has something to do with the American work morality--the zealousness with which we apply it, any religion can only weep in envy of. We believe/have been brainwashed into believing work is what we were born to do. As to how much we should do: I'm not sure this is a question for psychological studies so much as a question of how much (and of what kind of) work we actually want to do. It's like asking how many hours one should spend cleaning one's house; one balances a cleanliness level one can live with against time one would rather spend doing something else.

Might the apparent weird alliance not be a failure to accurately separate the substantive from the superficial? It could be the New Ager and the biohacker are driven by the same psychological imperative, each just dresses it a little differently. By even classifying their alliance as "weird", we are jumping the gun on what we are entitled to take for granted. I.e., we lack even the understanding to say what is weird and what isn't.

What's the point of the up/down votes in the first place? If the object is reducing bias, doesn't making commenting a popularity contest run counter to this purpose?

2MugaSofer
Quality control. Ideally, people should not upvote/downvote based on conclusions they disagree with. I recall hearing that the highest-karma comment ever was criticism of MIRI, which would suggest that this works as intended. I'm not sure how to check this, though. ETA: found it.

All analogies are suspect, but if I had to choose one I'd say physics' theories--at best--are, if anything, like code that returns the Fibonacci sequence through a specified range. The theories give us a formula we can use to make certain predictions, in some cases with arbitrary precision. Video, losslessly or lossily compressed, is still video. Whereas

fib n = take n fiblist
  where fiblist = 0 : 1 : zipWith (+) fiblist (tail fiblist)

is not a bag holding the entire Fibonacci sequence, waiting for us to compress it so we can look at a slightly more pixelated v... (read more)
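The same lazy idea can be sketched in Python with a generator (Python rather than Haskell here only because it is the other language discussed on this page; the function names are my own): the sequence is produced on demand, never stored whole.

```python
from itertools import islice

def fib():
    """Yield Fibonacci numbers one at a time; only two values are ever held."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# The analogue of Haskell's `take n fiblist`: demand just the first n terms.
print(list(islice(fib(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```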

Were it not the case that teachers are often the biggest bullies. On the contrary, IMO, it is the excessively authoritarian, prison-like model school follows that generates bullies.

2buybuydandavis
Prison is the better model for the whole institution, while Lord of the Flies is the model of the schoolyard. I think there's a good correlation between people who think a pile of children makes for good socialization and people who ignore the overarching prison model of the institution as a whole. But I really don't think it's the interaction with the Prison Guards that predominantly makes for bullying - it's the interactions in the prison yard.

"Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used."

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

If on the other hand whole_universe_from_time_immemorial() needs to execute every time, which of course assumes a loophole gets found to infinitely add information to the host universe, ... (read more)
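The lazy-rendering question can be sketched in Python (a toy, with a made-up `universe_states` generator standing in for the simulation): under lazy evaluation one would demand only the slice of history one cares about, and nothing else gets computed.

```python
from itertools import islice

def universe_states(seed=0):
    """Toy infinite stream of simulation 'states'; no state is computed until demanded."""
    state = seed
    while True:
        yield state
        state += 1  # stand-in for one step of physics

# Render only the specific events of interest, e.g. states 100..104:
window = list(islice(universe_states(), 100, 105))
print(window)  # [100, 101, 102, 103, 104]
```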

1lmm
Indeed we would. If you believe we are such a simulation, that implies the simulator is interested in some event that causally depends on today's history. I don't think this matters though. Causality is preserved under relativity, AIUI. You may not necessarily be able to say absolutely whether one event happened before or after another, but you can say what the causal relation between them is (whether one could have caused the other, or they are spatially separated such that neither could have caused the other). So there is no problem with using naive time in one's simulations. Are you arguing that a simulatable universe must have a time dimension? I don't think that's entirely true; all it means is that a simulatable universe must have a non-cyclic chain of causality. It would be exceedingly difficult to simulate e.g. the Godel rotating universe. But a universe like our own is no problem.

I'm in a similar boat; also starting with Python. Python is intuitive and flexible, which makes it easy to learn but also, in a sense, easy to avoid understanding how a language actually works. In addition I'm now learning Java and OCaml.

Java isn't a pretty language, but it's widely used and a relatively easy transition from Python. It also, I find, makes the philosophy behind object-oriented programming much more explicit, forcing the developer to create objects from scratch even to accomplish basic tasks.

OCaml is useful because of the level of discipline it ... (read more)

0Luke_A_Somers
Java is primarily useful for huge things. Since it makes so many things explicit, you can orient yourself very quickly in a project. If you see a symbol, you don't need to pull out a special tool (like cscope for C) to tell where it was defined - the code tells you. Yes, it is possible to write spaghetti Java, but it's easy not to. Also, if you have something that will be on for a long time and need it to eventually act with compiled speed (e.g. a webapp), Java with the JIT is soon as fast as an always-compiled language. The inability to pass functions as arguments without creating a class for them is one of the annoying parts. Maybe some syntactic sugar will be (or recently has been?) added.
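The function-passing contrast can be sketched in Python, where functions are first-class values and need no wrapper class (Java 8's lambdas later added exactly this kind of syntactic sugar; the names below are illustrative):

```python
def apply_twice(f, x):
    """Pass a function directly as an argument -- no wrapper class required."""
    return f(f(x))

def increment(n):
    return n + 1

print(apply_twice(increment, 5))  # 7
```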
0lukstafi
OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.

Watson is also backed by a huge corporation, which makes it easier to surmount obstacles like "but doctors don't like competition."

On the other hand being a huge corporation makes it harder to surmount "relying on marketing hype to inflate the value-added of the product."

At any rate, the company I work for relies heavily on Cognos and the metrics there seem pretty arbitrary--hocus pocus to conjure simple numbers so directors can pretend they're making informed decisions and not operating on blind guesswork and vanity... And to ration... (read more)

It always seemed to me "externality" was just a euphemism to cover up the fact that capitalist enterprise requires massive--not a handout here or there--state support (and planning) to function at all. The US is really kind of the oddball in that we pretend this isn't the case, dressing up subsidy as defense spending or whatever. In Japan, e.g., they just take your money and give it straight to Toyota without all the pretense. At any rate, anyone who opposes central planning and "big government" also opposes capitalism in its extant form.

0DanielLC
I think people talk more about negative externalities, which are fought through taxes, not encouraged through state support. Much of what is encouraged through state support has nothing to do with externalities. For example, growing corn has no significant externalities, but it is heavily subsidized. In other words, if we oppose big government, we oppose the current government, because it's big? I think that's generally what people mean when they say they oppose big government, unless they're arguing with a communist that wants it even bigger.
3buybuydandavis
Most libertarians do. When the government is passing out handouts, libertarians are the ones most likely to complain. I do agree that many Americans are largely in denial about the extent of government control of the economy.

Something to keep in mind is that different people use the word "capitalism" to mean different things. Many libertarians and Objectivists say "capitalism" to mean a free-market economy with dramatically less regulation and taxation than we have today. However, many socialists and anarchists use "capitalism" to mean the sort of economy that we do have today, dominated by big businesses and finance capital. Others use expressions such as "mixed economy" and "crony capitalism" to imply various combinations of ... (read more)

JoshuaZ190

always seemed to me "externality" was just a euphemism to cover up the fact that capitalist enterprise requires massive--not a hand out here or there--state support (and planning) to function at all.

Externalities pre-date the modern capitalist system. For example, in England, well before modern capitalism, restrictions on smelting and similar industries existed to prevent them from polluting surrounding neighborhoods. These concerns are even older than that. The Talmud discusses the legality of farming flowers that make for bad tasting hon... (read more)

My point was hypothetical. I'm skeptical a correlation actually exists--damned lies and all--but that's beside the point. My point is a society that is into boiling complex, difficult-to-define concepts like intelligence down to a simple metric is liable to have lots of other analogous, oversimplified metrics that are known, if not to coworkers, then to teachers and whoever else makes the decisions. And I'd wager people who do well on tests are apt to be the same ones who get high marks on Cognos reports--i.e., the same prejudices affect what's deemed valuable for bo... (read more)

0Ronak
Well, fair enough.

Let's say IQ tests do correlate with success (as measured by conventional standards). What would that prove? That a society that values high IQ rewards people with high IQs. The relevant question is: is IQ a valid measure of intelligence? Well, good luck defining intelligence in a scientifically meaningful way.

"Social intelligence", oh boy... At this point we're just giving common-sense wisdoms--flattery gets you everywhere/the socially adept rise higher in social contexts, etc.--a lacquer of scientistic jargon.

7Ronak
You do realise that it's rare for co-workers to know each other's IQs? Obviously there's a third thing that both IQ and success correlate with.

Several thoughts:

a) Isn't the solution to qualify the "libertarian argument" by limiting its scope to "any terms that don't break the law"? (Of course "libertarian" is a poor adjective choice, since a legal contract very much relies on a powerful state backing the enforcement of any breach to mean anything--the concept of a libertarian contract is an oxymoron.)

b) What do suspected ulterior motives on the part of those advancing the "libertarian argument" or the fact that sincere libertarians are a fringe minority ha... (read more)

I think the term "abstract reasoning" is being conflated with acting on good or bad information (among other things). E.g., in most cases, one basically has to take it on faith that ice cream is good or bad. And since most people aren't in a position to rationally make a confident choice re: the examples the author provides or comparable ones that could be imagined, agnosticism would seem the only rational alternative.*

More generally, I think a lot of these problems stem from radically defective education (if people aren't merely mostly morons as ... (read more)

It depends on what one assumes the motives for war are. If they are economic then I think a case can be made everyone ends up worse off. But if power is at stake, then war can indeed leave the nominal victor better off (from the perspective of motive).

By the way, attempts to characterize human psychology based on what life was like in the Savanna (or whatever environment humans are supposed to be designed by Darwinian forces for) need serious qualification, at best. Speaking metaphorically, evolution is an accident; where "successful", a fortu... (read more)

0NancyLebovitz
One more case to consider-- if a country is invaded, it may be less badly off after successful resistance than if it surrendered.

I agree with the general argument. I think (some) philosophy is an immature science, or a predecessor to a science, and some is in reference to how to do things better, therefore subject to less stringent, but not fundamentally different, standards than science--political philosophy, say (assuming, counterfactually, that political thinking were remotely rational). And of course a lot of philosophy is just nonsense--probably most of it. But economics can hardly be called a science. If anything, the "field" has experienced retrograde evolution since it stopped being part of philosophy.