
Comment author: username2 20 June 2017 06:59:28PM 2 points [-]

Feedback: I had to scroll a very long way until I found out what "s-risk" even was. By then I had lost interest, mainly because generalizing from fiction is not useful.

Comment author: turchin 19 June 2017 11:06:21PM 0 points [-]

The only real reason to build AGI is if you want to take over the world (or solve other big problems). And if that is your goal, you will not announce it on your web page - not if you are serious. So we will almost never see credible claims of work on AGI, and especially not on self-improving superintelligence.

Exception: Schmidhuber

Comment author: username2 20 June 2017 04:14:15AM 2 points [-]

Exception: Goertzel, and just about every founder of the AI field, who worked on AI mainly as a way of understanding thought and building things like us.

Comment author: Viliam 19 June 2017 11:04:43PM *  0 points [-]

Piano and ballet seem like upper-class costly signalling: "I am so rich I can spend tons of time on unproductive activities." If you are upper-class and want to send that costly signal, it could be the right move; otherwise it is almost certainly the wrong move. Such is the nature of costly signals.

Sports seem useful per se. Unless you mean golf or pony riding, of course.

Comment author: username2 20 June 2017 04:12:30AM 5 points [-]

Wow, that seems overly dismissive of the arts. My daughter has loved ballet ever since she saw a friend's ballet birthday party. She is a very physical, body-oriented learner who fidgets at school. Ballet is a great outlet, builds coordination, and gives her self-confidence. I could say similar things about piano lessons. I'm quite shocked that you can reduce all that to "upper-class costly signaling."

Comment author: Vaniver 13 June 2017 04:07:41PM 6 points [-]

One of the things that we've noticed, doing user interviews of several solid posters from previous eras of Less Wrong, is that most of them didn't write their first comment or post until they had been on the site for months, and had been spending that time lurking and reading the Sequences. (This was my introduction to the site, for example.)

This suggests to me that there is actually something really good that happens when you read the Sequences front to back. (As Viliam points out, there's an ebook, physical book versions are starting to come out, and the redesign also involves making the UI for reading through the Sequences much better.)

Comment author: username2 14 June 2017 11:43:47AM 5 points [-]

Back when I had a username of my own, I interacted in this way. A huge difference between then and now: Eliezer was not only posting new "Sequences" but actively discussing their content, and a few others were making sequence-like posts and continuing the discussion similarly.

Now the Sequences are a fixed set of documents that still serve as an interesting, eye-catching, and thought-provoking entry point. But what is absent is constructive, productive discussion around them. Sure, there have been attempts at rereads and discussion, and occasionally someone will create a top-level post to try to discuss ideas from one or another of the Sequence posts, but these attract minimal, low-quality discussion here.

From my outside perspective, it seems a key part is that the "content providers" also engage beyond the initial post. I hope this resumes if LW 2.0 continues to have an essay-plus-discussion format.

Comment author: Oscar_Cunningham 08 June 2017 06:06:35PM 0 points [-]

I'm not suggesting that people actually do this, just that this is a sensible assumption to make when laying the mathematical foundation of rationality.

Comment author: username2 12 June 2017 08:36:40AM 0 points [-]

Sure, but what Pimgd is pointing out is that it does not model rational behavior very well. Don't build a mathematical framework on shaky foundations.

Comment author: cousin_it 08 June 2017 06:02:27AM *  0 points [-]

Yeah, if they refuse the bet that means they probably updated (or weren't trying to be rational to begin with).

Comment author: username2 12 June 2017 08:33:30AM 0 points [-]

Or don't have $500.

Comment author: Stuart_Armstrong 12 June 2017 07:40:18AM 0 points [-]

It seems to me to jibe with how many people react to unexpected tensions between different parts of their values (e.g. global warming vs. "markets solve everything", or global warming vs. "nuclear power is bad"). If the tension can't be ignored or justified away, they often seem to base their new decision on affect and social factors, far more than on any principled meta-preference for how to resolve tensions.

Comment author: username2 12 June 2017 08:29:51AM 0 points [-]

But you can still keep asking the "why" question and go back dozens of layers, usually suffering a combinatorial explosion of causes, and even recursion in some cases. Only very, very rarely have I encountered a terminal, originating cause for which there isn't a "why" -- the will to live is honestly the only one occurring to me right now. Everything else has causes upon causes, as far as I'd care to look...

Comment author: username2 11 June 2017 10:33:03AM 0 points [-]

and wished that liberals and conservatives could put aside their mutual suspicions and unite at the political level to defend the country against white supremacism, fascism, and the general madness brought on by the age of Trump.

or better

and wished that liberals and conservatives could put aside their mutual suspicions and unite at the political level to defend the country against the consequences, as liberals perceive them, of electing a meme. Also, good luck uniting anything at a political level.

t. someone who has been a member of communities that could be considered fascist or supremacist even before Trump was a thing -- which these days is practically anything that someone who leans left doesn't like.

Other than that, the title is clickbaity -- just an average take on categorizing concerned people on the internet who don't censor themselves.

"The Tinfoil Class" -- Author: Theresa May

Comment author: Stuart_Armstrong 09 June 2017 04:29:18PM 0 points [-]

I don't think most humans have higher order preferences, beyond, say, two levels max.

Comment author: username2 09 June 2017 08:15:21PM 0 points [-]

Okay, well that doesn't jibe with my own introspective experience.

Comment author: Stuart_Armstrong 09 June 2017 04:44:00PM *  0 points [-]

That's why I used the "(idealised) agent" description (but titles need to be punchier).

Though I think "simple" goal is incorrect. The goal can be extremely complex - much more complex than human preferences. There's no limit to the subtleties you can pack into a utility function. There is a utility function that fits perfectly every decision you make in your entire life, for example.
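To make that last claim concrete, here is a toy sketch in Python (the names and the decision log are hypothetical, just illustrating the construction): score each recorded choice above its alternatives, and the resulting utility function rationalizes the whole record.

```python
# Toy sketch (hypothetical names, not anyone's actual method): for any
# finite log of (situation, chosen_action) pairs, build a utility
# function under which every recorded choice is utility-maximizing.

def fit_utility(decision_log):
    """decision_log: list of (situation, chosen_action) pairs."""
    chosen = dict(decision_log)  # situation -> action actually taken

    def utility(situation, action):
        # Score the action actually taken above every alternative;
        # unrecorded situations are scored indifferently at 0.
        return 1.0 if chosen.get(situation) == action else 0.0

    return utility

# Two recorded decisions; the fitted function "explains" both of them.
u = fit_utility([("breakfast", "toast"), ("career", "piano")])
assert u("breakfast", "toast") > u("breakfast", "cereal")
assert u("career", "piano") > u("career", "law")
```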

The reason to look for an idealised agent, though, is that a utility function is stable in a way that humans are not. If there is some stable utility function that encompasses human preferences (it might be something like "this is the range of human preferences" or similar) then, if given to an AI, the AI will not seek to transform humans into something else in order to satisfy our "preferences".

The AI has to be something of an agent, so its model of human preferences has to be an agent-ish model.

Comment author: username2 09 June 2017 08:13:02PM *  0 points [-]

Is that really the standard definition of agent, though? Most textbooks I've seen talk of agents working towards the achievement of a goal, but they say nothing about the permanence of that goal system. I would expect an "idealized agent" to always take actions that maximize the likelihood of achieving its goals, but that is orthogonal to whether the system of goals changes.
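A toy sketch of that orthogonality (the two-action world is entirely made up): the agent below is idealized in the per-step sense, always taking the argmax action under its current goal, while the goal system itself changes between steps.

```python
import random

# Toy sketch of the distinction (the two-action world is made up):
# the agent is "idealized" in that it always takes the argmax action
# under its current goal, yet the goal system itself drifts over time.

def run(steps=5, seed=0):
    random.seed(seed)  # reproducible drift
    actions = ["apples", "oranges"]
    goal = {"apples": 1.0, "oranges": 0.0}  # current utility weights
    for t in range(steps):
        best = max(actions, key=lambda a: goal[a])  # per-step optimality
        print(f"step {t}: chooses {best} under goal {goal}")
        goal = {a: random.random() for a in actions}  # goals change anyway

run()
```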
