
First of all, I can highly recommend Nachmanovitch's Free Play. It's at the very least thought-provoking and entertaining—whether it helps you be more creative is harder to tell. I got a bit of mileage, creativity-wise, out of Comedy Writing Secrets, which I hear is well-regarded among professional humor writers. I wasn't very diligent about the exercises, or I might have gotten more out of it.

Regarding LW-like thought and creativity, I'm reading through Minsky's Society of Mind, and the Puzzle Principle section talks about machines and creativity:

Many people reason that machines do only what they're programmed to do — and hence can never be creative or original. The trouble is that this argument presumes what it purports to show: that you can't program a machine to be creative! In fact, it is surprisingly easy to program a computer so that it will proceed to do more different things than any programmer could imagine in advance.

And he goes into a bit more detail.
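
Minsky's claim that such a program is easy to write is straightforward to demonstrate. Here's a minimal sketch of the point (my own illustration, not from the book): a few lines that grow random arithmetic expressions, where the space of possible outputs is far larger than anything the programmer enumerated in advance.

```python
import random

OPS = ["+", "-", "*"]

def random_expr(depth=0):
    """Recursively build a random arithmetic expression.

    The programmer chose only the grammar; the particular
    expressions that come out were never imagined in advance.
    """
    if depth > 3 or random.random() < 0.3:
        return str(random.randint(0, 9))
    op = random.choice(OPS)
    return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"

for _ in range(5):
    expr = random_expr()
    print(expr, "=", eval(expr))  # eval is safe here: we built the string ourselves
```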

My thoughts on this, cribbed more or less directly from my notes:

I think there's an equivocation in common uses of the word "creativity." There's one sense, generally used by technical people, that means something like the ability to make intuitive leaps when solving a problem. Then there's the other sense, probably closer to what most people mean: the attributive sense. That is, someone might be a creative person, meaning they make those intuitive leaps, yes, but they also have certain stereotypical personality traits; they're quirky, they dress in nonconforming ways, they're artsy, emotional. And so on.

So Minsky's answer doesn't really adequately address what most people mean when they say you can't program a machine to be creative.

But of course you can, and we're getting better and better at this.

Syntactically it's quite a bit better than an N-gram Markov chain: it gets indentation exactly right, balances parentheses, braces, and comment start/end markers, delimits strings with quotation marks, and so on. You're right that it's no better than a Markov chain at understanding the "code" it's producing, at least not at the level a human programmer does.
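
For contrast, here's roughly what a character-level N-gram Markov chain amounts to (a minimal sketch of the standard technique, not of the system under discussion). Because it conditions on only the last n characters, any structure longer than n, such as an open parenthesis waiting for its match, is invisible to it.

```python
import random
from collections import defaultdict

def train(text, n=3):
    """Count which character follows each n-character context."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, seed, length=200):
    """Sample one character at a time from the n-gram counts.

    The model sees only the previous n characters, so it cannot
    track delimiters or indentation across longer spans.
    """
    n = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-n:])
        if not choices:
            break
        out += random.choice(choices)
    return out

# corpus = open("some_source_file.py").read()  # hypothetical training corpus
# model = train(corpus, n=3)
# print(generate(model, corpus[:3]))
```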

Discussion on Hacker News. Definitely an interesting article, very readable and (to me) entertaining. But I agree with interstice that it doesn't say much about strong AI.

Yes and no. Morality is certainly less fundamental than physics, but I'd argue it's no less real a concept than "breakfast" or "love," and it has enough coherence – thingness – to be worth trying to outline and reason about.

The central feature of morality that needs explaining, as I understand it, is how your behaviors and decisions make you feel in relation to how other people feel about them. Which is not something you have full control over. It is a distributed cognitive algorithm: a mechanism for directing social behavior through the sharing of affective judgements.

I'll attempt to make this more concrete. Actions that are morally prohibited have consequences, both in the form of direct social censure (due to the moral rule itself) and indirect effects that might be social or otherwise. You can think of the direct social consequences as a fail-safe that stops dangerous behavior before real harm can occur, though of course it doesn't always work very well. In this way the prudential sense of should is closely tied to the moral sense of should – sometimes in a pure, self-sustaining way, the original or imagined harm becoming a lost purpose.

None of this means that morality is a false concept. Even though you might explain why moral rules and emotions exist, or point out their arbitrariness, it's still simplest, and I'd argue ontologically justified, to deal with morality the way most people do. Morality is a standing wave of behaviors and predictable shared attitudes towards them, and is as real as sound waves within the resonating cavity of a violin. Social behavior-and-attitude space is immense, but it seems to contain attractors that we would recognize as moral.

That said, I do think it's valuable to ask the more grounded questions of how outcomes make individuals feel, how people actually act, etc.

In my experience, micro-optimizations like these represent yet another thing to keep track of. The upside is pretty small, while the potential downside (forget to cancel a card?) is larger. If you're OK with paying the attentional overhead, or it's a source of entertainment, go for it.

Personally I'd rather use a standard rewards card (mine is 1.5% cash back), not have to think about it, and spend my limited cognitive resources on doing well at my job, looking out for new opportunities with large upsides, working on side projects, or networking.
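
To put rough numbers on the trade-off (illustrative assumptions, not a claim about any particular card):

```python
# Back-of-the-envelope comparison; all figures are assumptions.
annual_spend = 20_000    # assumed yearly card spend, in dollars
flat_rate = 0.015        # a standard 1.5% cash-back card
juggled_rate = 0.025     # suppose diligent card-juggling averages 2.5%

flat_reward = annual_spend * flat_rate        # $300/yr
juggled_reward = annual_spend * juggled_rate  # $500/yr

print(f"Upside of juggling: ${juggled_reward - flat_reward:,.0f}/yr")
# ~$200/yr under these assumptions, to weigh against the overhead.
```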

That's interesting, because to me it read more like "I'm going to write something interesting about anything you like, do some research for you, and even share the results" and "as long as I have to do this assignment I might as well make it useful to someone," but maybe that's because I recognized the poster's name, read his blog, etc.

I can see how someone might interpret it this way, though.

Not something I actually did last month, since I wrote the piece two years ago, but it feels like it, because that's when the validation arrived. A blog post of mine hit /r/basicincome and then /r/futurism, where it's sitting at ~470 (98% positive) and ~1080 (92% positive) votes respectively, and found its way to Hacker News. Some of the discussion is pretty good. The relevant quote:

"Let us keep in mind how poorly we treat those who cannot currently contribute to society. Sooner or later we will have to face this question: how do we define personal worth in a world where most people have no economic value?"

The actual accomplishment of the month is a post on Christopher Alexander's Notes on the Synthesis of Form, which won't be as big a hit, and I'm OK with that.

Schmidhuber's formulation of curiosity and interestingness as a (possibly the) human learning algorithm. Now when someone says "that's interesting," I gain information about the situation, where previously I interpreted it purely as the expression of an emotion. I still see it as primarily about emotion, but now I understand the whys of the emotional response: it's what (part of) our learning algorithm feels like from the inside.
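
Roughly, Schmidhuber formalizes interestingness as learning progress: a stimulus is rewarding while your predictive model of it is improving, and boring once it's either fully predictable or pure noise. A minimal sketch of the idea (my paraphrase, with made-up loss numbers):

```python
def interestingness(losses):
    """Curiosity reward, Schmidhuber-style (roughly): the drop in a
    predictor's error over time, i.e. learning progress.

    Both the trivially predictable and the incompressibly random
    score ~0, since neither offers any improvement.
    """
    return [losses[t - 1] - losses[t] for t in range(1, len(losses))]

learnable = [1.0, 0.6, 0.35, 0.2, 0.12]   # error falling: interesting
noise = [1.0, 1.0, 1.0, 1.0, 1.0]         # incompressible: boring
trivial = [0.0, 0.0, 0.0, 0.0, 0.0]       # already mastered: boring

print(interestingness(learnable))  # positive rewards throughout
print(interestingness(noise))      # all zeros
print(interestingness(trivial))    # all zeros
```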

There are some interesting signaling implications as well.

This, I assume? (It took me a few tries to find it: first I typed the name wrong, and then it turns out it's "Wardley" with an 'a'.) Is the video on that page a good introduction?
