
Comment author: Gram_Stone 23 April 2017 06:21:51PM *  3 points [-]

I enjoyed this very much. One thing I really like is that your interpretation of the evolutionary origin of Type 2 processes and their relationship with Type 1 processes seems a lot more realistic to me than what I usually see. Usually the two are made to sound very adversarial, with Type 2 processes having some kind of executive control. I've always wondered how you could actually get this setup through incremental adaptations. It doesn't seem like Azathoth's signature. I wrote something relevant to this in correspondence:

If Type 2 just popped up in the process of human evolution, and magically got control over Type 1, what are the chances that it would amount to anything but a brain defect? You'd more likely be useless in the ancestral environment if a brand new mental hierarch had spontaneously mutated into existence and was in control of parts of a mind that had been adaptive on their own for so long. It makes way more sense to me to imagine that there was a mutant who could first do algorithmic cognition, and that there were certain cues that could trigger the use of this new system, and that provided the marginal advantage. Eventually, you could use that ability to make things safe enough to use the ability even more often. And then it would almost seem like it was the Type 2 that was in charge of the Type 1, but really Type 1 was just giving you more and more leeway as things got safer.
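(A purely illustrative toy model of this "leeway" story, with every parameter made up rather than drawn from any real data: Type 1 gates how often Type 2 is allowed to run, and each round of successful deliberation makes things a little safer, which buys more leeway the next time around.)

```python
import random

def simulate(generations=10, trials=1000):
    """Toy model: Type 1 gates how often Type 2 (deliberation) is allowed to run.
    As the perceived safety of the environment rises, Type 1 grants more leeway,
    so Type 2 fires more often -- it only *looks* like Type 2 is in charge."""
    safety = 0.1  # fraction of situations Type 1 treats as safe enough to deliberate
    for gen in range(generations):
        engaged = sum(random.random() < safety for _ in range(trials))
        print(f"gen {gen}: safety={safety:.2f}, Type 2 engaged in {engaged}/{trials} situations")
        # successful deliberation makes the environment a bit safer,
        # which buys more leeway next round
        safety = min(1.0, safety + 0.05 + 0.10 * (engaged / trials))

if __name__ == "__main__":
    simulate()
```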

Comment author: scarcegreengrass 24 April 2017 04:14:10PM 0 points [-]

Yes, and the neocortex could later assume control once it had been selected for fitness within the ecosystem.

Comment author: entirelyuseless 21 April 2017 12:52:02AM 3 points [-]

Comments and posts were ported over from Overcoming Bias and so they preceded the Less Wrong website.

Comment author: scarcegreengrass 23 April 2017 03:23:04AM 0 points [-]

Ah, the comments too! Okay, now I understand.

Comment author: scarcegreengrass 20 April 2017 10:04:06PM 0 points [-]

Suggestion: Since the future of intelligent software is a big interest in the community, I'd be curious to get more resolution on people's worldviews. We could try to get numbers for opinions like the following (a rough sketch of how such answers could be tallied is included at the end of this comment):

'I don't have much to say about this topic right now.' <- for people who are interested in non-machine-intelligence parts of LW

'There's a probability of __ that Bostrom-style superintelligent software will dominate the history of the 2017 to 2100 CE period.'

Same as the above, but for 2200 to 3000 CE.

'There's a probability of __ that whole brain emulation will dominate the history of the 2017 to 2100 CE period.'

Same as the above, but for 2200 to 3000 CE.

etc

I'd also be curious to see data on worldviews about the future impact of:

Slightly more traditional forms of automation, computing, and software (such as workforce automation, future financial software, etc.).

Biotech and genetic engineering

Decentralized manufacturing and nanoscale technologies

The space industry

Weapons of mass destruction

Novel political systems.

This is a lot of questions, so feel free to trim or select as desired.
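A rough sketch of how answers to the probability questions above could be recorded and summarized. The question keys and the probabilities are placeholders made up for illustration, not real survey responses:

```python
from statistics import median

# Hypothetical survey items mirroring the suggestions above; the listed
# probabilities are placeholder responses, not real data.
responses = {
    "superintelligence_dominates_2017_2100": [0.10, 0.35, 0.60],
    "superintelligence_dominates_2200_3000": [0.40, 0.55, 0.80],
    "whole_brain_emulation_dominates_2017_2100": [0.05, 0.15, 0.30],
    "whole_brain_emulation_dominates_2200_3000": [0.20, 0.45, 0.70],
}

for question, answers in responses.items():
    print(f"{question}: median probability {median(answers):.2f} (n={len(answers)})")
```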

Comment author: scarcegreengrass 20 April 2017 09:48:47PM 0 points [-]

Suggestion: A question that distinguishes between average and total utilitarianism (I don't remember seeing this previously). This is a little arcane for people who aren't interested in consequentialism & utilitarianism, and that's fine. But within utilitarianism, I'd be interested to know more about this split. It has a bearing on whole brain emulation.

(Just in case I'm using the wrong terminology, let me clarify: I'm referring to whether or not you differentiate between a universe with 10 billion humans at X quality of life and a universe with 20 billion humans at the same X quality of life. I think most people prefer the 20 billion, but some people might lean towards the 10 billion as a matter of degree or in less theoretical contexts.)
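As a minimal worked illustration of the split, with made-up numbers (X is an arbitrary quality-of-life score, not a real measure):

```python
def total_utility(population, quality_of_life):
    # Total utilitarianism: sum welfare over everyone.
    return population * quality_of_life

def average_utility(population, quality_of_life):
    # Average utilitarianism: per-person welfare; population size drops out
    # when everyone is at the same level, as in this toy case.
    return quality_of_life

X = 7.0  # arbitrary quality-of-life score
for population in (10_000_000_000, 20_000_000_000):
    print(f"{population:,} people: total = {total_utility(population, X):,.0f}, "
          f"average = {average_utility(population, X):.1f}")

# A total utilitarian prefers the 20-billion world (twice the total utility);
# an average utilitarian is indifferent, since average welfare is the same X.
```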

Comment author: ChristianKl 13 April 2017 07:36:10AM 0 points [-]

"Do you have mnemonics pegs for the numbers of 1 to 100?"

Comment author: scarcegreengrass 20 April 2017 09:37:40PM 0 points [-]

Just out of curiosity, what sort of things do you mean? Like Schelling points or like information associated with those numbers?

Comment author: scarcegreengrass 20 April 2017 09:33:29PM 0 points [-]

I notice a contradiction that I don't yet understand. This post and the wiki page (https://wiki.lesswrong.com/wiki/History_of_Less_Wrong) say that LessWrong started in 2009. However, there are comments here with earlier timestamps (arbitrary example: http://lesswrong.com/lw/qd/science_isnt_strict_enough/k2t). I was under the impression lesswrong.com was an active community at least since 2007. Is the wiki's "2009" a typo?

Also, I am updating my PoV on recent LW history based on the analytics charts. I take it that pageviews have not yet dropped below 2010 levels, even if commenting rates have?

Comment author: scarcegreengrass 20 April 2017 09:23:59PM 0 points [-]

As far as I am aware, it doesn't matter too much which Mastodon / Fediverse server you put your account on; you can still participate in the microblogging community. Under this assumption, if you want to connect with other microblogging aspiring rationalists, feel free to post your username as a reply to this comment.

I am @alexpear@social.targaryen.house
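(For what it's worth, the reason the choice of server matters so little is that a handle in this format resolves the same way on any Mastodon-compatible server, via the standard WebFinger endpoint. A minimal sketch, assuming the Python `requests` library and a reachable server:)

```python
import requests

def resolve_fediverse_handle(handle: str) -> dict:
    """Look up a Fediverse account via the standard WebFinger endpoint (RFC 7033).
    Works the same regardless of which Mastodon-compatible server hosts the account."""
    user, domain = handle.lstrip("@").split("@", 1)
    response = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # profile and ActivityPub links for the account

if __name__ == "__main__":
    print(resolve_fediverse_handle("@alexpear@social.targaryen.house"))
```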

Comment author: Alicorn 17 March 2017 01:46:56AM 21 points [-]

If you like this idea but have nothing much to say please comment under this comment so there can be a record of interested parties.

Comment author: scarcegreengrass 20 April 2017 09:05:28PM *  0 points [-]

I would be interested in participating! Let me try to be more specific... If this looks viable within a couple of years, it would probably be my first or second choice of places to live.

Edit: Oh, and I am currently living in the USA and am a relatively movable person demographically.

Comment author: dogiv 28 March 2017 07:54:37PM 2 points [-]

Does anybody think this will actually help with existential risk? I suspect the goal of "keeping up" or preventing irrelevance after the onset of AGI is pretty much a lost cause. But maybe if it makes people smarter it will help us solve the control problem in time.

Comment author: scarcegreengrass 28 March 2017 11:15:00PM 0 points [-]

I also think this project will be on a fairly slow timeline. Maybe the AGI connections are functionally just marketing, and the real benefit of this org will be in more mundane medical applications.

Comment author: scarcegreengrass 07 March 2017 12:12:29AM 1 point [-]

I found a historical quote that's relevant to AI Safety discussions. This is from the diary of President Harry Truman, written while he was in recently-bombed Berlin in 1945. It's just interesting to hear someone discuss value alignment so early. Source: http://www.pbs.org/wgbh/americanexperience/features/primary-resources/truman-diary/

"I thought of Carthage, Baalbek, Jerusalem, Rome, Atlantis, Peking, Babylon, Nineveh, Scipio, Ramses II, Titus, Herman, Sherman, Genghis Khan, Alexander, Darius the Great -- but Hitler only destroyed Stalingrad -- and Berlin. I hope for some sort of peace, but I fear that machines are ahead of morals by some centuries and when morals catch up perhaps there'll be no reason for any of it. I hope not. But we are only termites on a planet and maybe when we bore too deeply into the planet there'll be a reckoning. Who knows?"
