
Comment author: Antisuji 09 June 2015 04:25:40PM 1 point

First of all, I can highly recommend Nachmanovitch's Free Play. It's at the very least thought-provoking and entertaining; whether it helps you be more creative is harder to tell. I got a bit of mileage, creativity-wise, out of Comedy Writing Secrets, which I hear is well-regarded among professional humor writers. I wasn't very diligent about the exercises, or I might have gotten more out of it.

Regarding LW-like thought and creativity, I'm reading through Minsky's Society of Mind, and the Puzzle Principle section talks about machines and creativity:

Many people reason that machines do only what they're programmed to do — and hence can never be creative or original. The trouble is that this argument presumes what it purports to show: that you can't program a machine to be creative! In fact, it is surprisingly easy to program a computer so that it will proceed to do more different things than any programmer could imagine in advance.

And he goes into a bit more detail.
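
To make his point concrete, here's a toy sketch of my own (not from the book): a dozen lines of Python that compose random arithmetic operations into expression trees. Even at modest depth, the space of distinct programs it can emit is astronomically larger than anything its programmer could enumerate in advance.

    import random

    # Toy illustration (my own, not Minsky's): a tiny "programmed" machine
    # whose outputs the programmer cannot enumerate in advance.
    OPS = {'add': lambda a, b: a + b,
           'sub': lambda a, b: a - b,
           'mul': lambda a, b: a * b}

    def random_expr(depth):
        """Build a random expression tree over the variable x."""
        if depth == 0 or random.random() < 0.3:
            return 'x' if random.random() < 0.5 else random.randint(0, 9)
        op = random.choice(list(OPS))
        return (op, random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(expr, x):
        if expr == 'x':
            return x
        if isinstance(expr, int):
            return expr
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))

    for _ in range(3):
        e = random_expr(depth=5)
        print(e, '->', evaluate(e, x=2))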

My thoughts on this, cribbed more or less directly from my notes:

I think there's an equivocation in common uses of the word "creativity." There's one sense, generally used by technical people, that means something like the ability to make intuitive leaps when solving a problem. Then there's the other sense, which is probably closer to what most people mean: the attributive sense. That is, someone might be a creative person, meaning they make those intuitive leaps, yes, but also that they have certain stereotypical personality traits: they're quirky, they dress in non-conformist ways, they're artsy and emotional, and so on.

So Minsky's answer doesn't adequately address what most people mean when they say you can't program a machine to be creative.

But of course you can, and we're getting better and better at this.

Comment author: Baughn 05 June 2015 08:56:09AM 4 points

The craziness it produced was not code; it merely looked like code. It's a neat example, but in that particular case not much better than an N-gram Markov chain.

Comment author: Antisuji 05 June 2015 04:28:31PM 6 points

Syntactically it's quite a bit better than an N-gram Markov chain: it gets indentation exactly right, it balances parentheses, braces, and comment start/end markers, delimits strings with quotation marks, and so on. You're right that it's no better than a Markov chain at understanding the "code" it's producing, at least at the level a human programmer does.
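
For reference, here's roughly what that baseline looks like (a minimal sketch, my own): a character-level N-gram model conditions only on the last N characters, so once an opening brace slides out of its window it has no way to know the brace is still open.

    import random
    from collections import defaultdict

    # Minimal character-level N-gram Markov chain (a sketch of the baseline,
    # not the RNN from the article). Its entire "memory" is the last n chars.
    def train(text, n=5):
        model = defaultdict(list)
        for i in range(len(text) - n):
            model[text[i:i + n]].append(text[i + n])
        return model

    def sample(model, seed, length=300, n=5):
        out = seed
        for _ in range(length):
            followers = model.get(out[-n:])
            if not followers:
                break
            out += random.choice(followers)
        return out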

Comment author: Antisuji 05 June 2015 01:15:35AM 6 points

Discussion on Hacker News. Definitely an interesting article, very readable and (to me) entertaining. But I agree with interstice that it doesn't say much about strong AI.

Comment author: adamzerner 02 June 2015 01:03:36PM 10 points

My impression is that morality is all about chasing dangling nodes. There are questions about how certain outcomes make you feel. There are questions about how people actually act. There are questions about what actions would lead to the world being a "better place" (however you define it). But asking about whether something is "moral" seems to be chasing a dangling node to me.

Comment author: Antisuji 03 June 2015 08:02:48AM 4 points

Yes and no. Morality is certainly less fundamental than physics, but I would argue it's no less real a concept than "breakfast" or "love," and it has enough coherence – thingness – to be worth outlining and reasoning about.

The central feature of morality that needs explaining, as I understand it, is how certain behaviors or decisions make you feel in relation to how other people feel about your behaviors, which is not something you have full control over. Morality, in this view, is a distributed cognitive algorithm: a mechanism for directing social behavior through the sharing of affective judgements.

I'll attempt to make this more concrete. Actions that are morally prohibited have consequences, both in the form of direct social censure (due to the moral rule itself) and indirect effects that might be social or otherwise. You can think of the direct social consequences as a fail-safe that stops dangerous behavior before real harm can occur, though of course it doesn't always work very well. In this way the prudential sense of "should" is closely tied to the moral sense of "should" – sometimes in a pure, self-sustaining way, the original or imagined harm becoming a lost purpose.

None of this means that morality is a false concept. Even though you might explain why moral rules and emotions exist, or point out their arbitrariness, it's still simplest, and I'd argue ontologically justified, to deal with morality the way most people do. Morality is a standing wave of behaviors and predictable shared attitudes towards them, and is as real as sound waves within the resonating cavity of a violin. Social behavior-and-attitude space is immense, but it seems to contain attractors that we would recognize as moral.
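
Here's a toy model of what I mean by an attractor (entirely my own sketch; the numbers and update rule are arbitrary): agents act on a propensity to transgress, and censure from observers pulls transgressors' propensities toward the observer's. Run it and the population settles into a shared, lower-transgression norm.

    import random

    # Toy "distributed cognitive algorithm" (my own illustration, with
    # arbitrary parameters): censure pulls transgressors toward the norm.
    N = 50
    propensity = [random.random() for _ in range(N)]  # chance of transgressing

    for _ in range(5000):
        actor = random.randrange(N)
        if random.random() < propensity[actor]:   # actor transgresses
            observer = random.randrange(N)        # a random observer censures
            propensity[actor] += 0.2 * (propensity[observer] - propensity[actor])

    print('mean propensity:', sum(propensity) / N)  # drifts well below the start

Only transgressors get updated, so the dynamics are asymmetric: high-propensity agents keep getting pulled toward the group while low-propensity agents mostly stay put, and the population converges on a stable norm that no individual chose.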

That said, I do think it's valuable to ask the more grounded questions of how outcomes make individuals feel, how people actually act, etc.

[Link] Small-Game Fallacies: A Problem for Prediction Markets

10 Antisuji 28 May 2015 03:32AM

Nick Szabo writes about the dangers of taking assumptions that are valid in small, self-contained games and applying them to larger, real-world "games," a practice he calls a small-game fallacy.

Interactions between small games and large games infect most works of game theory, and much of microeconomics, often rendering such analyses useless or worse than useless as a guide for how the "players" will behave in real circumstances. These fallacies tend to be particularly egregious when "economic imperialists" try to apply the techniques of economics to domains beyond the traditional efficient-markets domain of economics, attempting to bring economic theory to bear to describe law, politics, security protocols, or a wide variety of other institutions that behave very differently from efficient markets. However as we shall see, small-game fallacies can sometimes arise even in the analysis of some very market-like institutions, such as "prediction markets."

This last point, which he expands on later in the post, will be of particular interest to some readers of LW. The idea is that while a prediction market does incentivize feeding accurate information into the system, the existence of the market also gives rise to parallel external incentives. As Szabo glibly puts it,

A sufficiently large market predicting an individual's death is also, necessarily, an assassination market...
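
The incentive problem is easy to see with toy numbers (entirely hypothetical): if a trader can influence the outcome, a deep enough market pays for the influencing.

    # Hypothetical figures, purely to illustrate the external incentive.
    price_yes = 0.02          # market-implied probability of the event
    shares = 500_000          # "yes" shares a deep market could absorb
    cost_to_cause = 50_000    # hypothetical cost of making the event happen

    position_cost = shares * price_yes   # 10,000 to open the position
    payout = shares * 1.0                # each share pays $1 if the event occurs
    print(payout - position_cost - cost_to_cause)  # 440,000: the market funds the act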

Futarchy, it seems, will have some kinks to work out.

Comment author: Antisuji 21 April 2015 02:25:50AM 0 points

In my experience, micro-optimizations like these represent yet another thing to keep track of. The upside is pretty small, while the potential downside (forget to cancel a card?) is larger. If you're ok with paying the attentional overhead, or if it's a source of entertainment, go for it.

Personally I'd rather use a standard rewards card (mine pays 1.5% cash back), not have to think about it, and spend my limited cognitive resources on doing well at my job, looking out for new opportunities with large upsides, working on side projects, or networking.
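
The arithmetic that convinced me, with made-up but plausible numbers:

    # Hypothetical figures; the point is the order of magnitude, not exact rates.
    annual_spend = 20_000   # yearly card spend
    flat_rate = 0.015       # simple 1.5% cash-back card
    juggled_rate = 0.025    # optimistic blended rate from rotating cards

    extra = annual_spend * (juggled_rate - flat_rate)
    print(extra)   # 200.0 dollars/year: the upside bought with the overhead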

Comment author: [deleted] 13 March 2015 07:35:45AM 10 points

It reads too much like "Help me with my homework", and you don't give the impression that you care about the topic or have given it any thought. Therefore it seems to offer little in terms of productive discussion.

This may be wrong, but it's the impression I got from reading the post.

Comment author: Antisuji 13 March 2015 03:40:30PM 3 points

That's interesting, because to me it read more like "I'm going to write something interesting about anything you like, do some research for you, and even share the results" and "as long as I have to do this assignment I might as well make it useful to someone" – but maybe that's because I recognized the poster's name, read his blog, etc.

I can see how someone might interpret it this way, though.

Comment author: Antisuji 09 March 2015 06:49:17AM 6 points

Not something I actually did last month, since I wrote the piece two years ago, but it feels like it because that's when the validation arrived. A blog post of mine hit /r/basicincome and then /r/futurism, where it's sitting at ~470 (98% positive) and ~1080 (92% positive) votes respectively, and it found its way to Hacker News. Some of the discussion is pretty good. The relevant quote:

"Let us keep in mind how poorly we treat those who cannot currently contribute to society. Sooner or later we will have to face this question: how do we define personal worth in a world where most people have no economic value?"

The actual accomplishment of the month is a post on Christopher Alexander's Notes on the Synthesis of Form, which won't be as big a hit, and I'm ok with that.

[Link] YC President Sam Altman: The Software Revolution

4 Antisuji 19 February 2015 05:13AM

Writing about technological revolutions, Y Combinator president Sam Altman warns about the dangers of AI and bioengineering (discussion on Hacker News):

Two of the biggest risks I see emerging from the software revolution—AI and synthetic biology—may put tremendous capability to cause harm in the hands of small groups, or even individuals.

I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get. If we can synthesize new diseases, maybe we can synthesize vaccines. If we can make a bad AI, maybe we can make a good AI that stops the bad one.

The current strategy is badly misguided. It’s not going to be like the atomic bomb this time around, and the sooner we stop pretending otherwise, the better off we’ll be. The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.

On the one hand, it's good to see more mainstream(ish) attention to AI safety. On the other hand, he focuses on the mundane (though still potentially devastating!) risks of job destruction and concentration of power, and his hopeful "best strategy" seems... inadequate.

Comment author: JoshuaZ 26 January 2015 02:16:00AM 11 points

Sometimes learning one thing makes many other things "click" by placing them all in a broader framework. When this happens I'm astounded that I hadn't encountered the idea sooner. One very memorable occasion was when I learned about categories and how many different mathematical structures can be thought of in that context. Do people have other examples where their reaction was "Wow, that makes so much sense. Why didn't anyone say that before?"

Comment author: Antisuji 26 January 2015 04:02:16AM 5 points

Schmidhuber's formulation of curiosity and interestingness as a (possibly the) human learning algorithm. Now when someone says "that's interesting" I gain information about the situation, where previously I interpreted it purely as an expression of an emotion. I still see it as primarily about emotion, but now I understand the whys of the emotional response: it's what (part of) our learning algorithm feels like from the inside.
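
A minimal sketch of the idea as I understand it (structure and numbers are my own, not Schmidhuber's exact formulation): the intrinsic reward is the improvement of a learned predictor, not its raw error, so perfectly predictable data and pure noise are both boring.

    class MeanPredictor:
        """Toy world-model: predicts a scalar stream by its running mean."""
        def __init__(self):
            self.mean, self.n = 0.0, 0
        def loss(self, x):
            return (x - self.mean) ** 2
        def update(self, x):
            self.n += 1
            self.mean += (x - self.mean) / self.n

    def interestingness(predictor, x):
        # Reward = learning progress: how much one update improves prediction.
        before = predictor.loss(x)
        predictor.update(x)
        return before - predictor.loss(x)

    p = MeanPredictor()
    for x in [5.0, 5.0, 5.0, 9.0, 9.0]:
        print(round(interestingness(p, x), 2))  # surprise that can be learned away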

There are some interesting signaling implications as well.
