A few of us got together in the pub after the Friendly AI meet and agreed we should have a meetup for those of us familiar with LessWrong, Bostrom, etc. This is a post for discussion of when and where.

Venue: London still seems the nexus. I might be able to be convinced to go to Oxford. A Starbucks-type place is okay, although it'd be nice to have a whiteboard or other presentation facilities.

Date/Time: Weekends are fine with me, and I suspect for most people. Julian suggested after the next UKTA meeting. Will that be in April at the Humanity+ event? It would depend on whether we are still mentally fresh after it, and whether any of our group are attending the dinner afterwards.

Activities: I think it would be a good idea to have some structure or topics to discuss, so that we don't fall back too much into the "what do you do?" type of discussion. Maybe mini-presentations.

My two current interests:

1) Evidence for an intelligence explosion: I don't want to rehash what we already know, but I would like to try to figure out what experiments we can (safely) do, or proofs we can make, to increase or decrease our belief that it will occur. This is more of a brainstorming session.

2) The nature of the human brain: Specifically, it doesn't appear to have a goal (in the decision-theory sense) built in, although it can become a goal optimizer to a greater or lesser extent. How might it do this? As we aren't neuroscientists, a more fruitful question might be what the skeleton of a computer system that can do this might look like, even if we can't fill in all the interesting details. I'd discuss this with regard to akrasia, neural enhancement, volition extraction and non-exploding AI scenarios. I can probably pontificate on this for a while, if I prepare myself.

I think Ciphergoth wanted to talk about consequentialist ethics.

Shout in the comments if you have a topic you'd like to discuss, or would rather not discuss.

Perhaps we should also look at the multiple-AI bias, where people seem to naturally assume there will be multiple AIs even when talking about superintelligent singularity scenarios (many questions at the meeting had this property). I suspect this could be countered somewhat by reading A Fire Upon the Deep.

Roko:

Many people didn't seem to take up the "don't anthropomorphize AI" point, and there were many comments about how a single superintelligent AI would be like a dictator, and wouldn't it be better if everyone had their own superintelligent AI, and if we had lots of different superintelligent AIs, then they'd develop empathy and that would cause them to be moral and thereby solve the FAI problem, etc...

I am thinking of doing a post on this phenomenon, it reminds me of a Borat sketch...

dw2:

Here's a related suggestion (that's not meant to supersede the original idea):

As the Meetings Secretary for UK Humanity+, I would be interested in organising a UKH+ event (in, for example, February or March) dedicated to the potential influence of LW upon H+.

For example, one or more speakers familiar with Less Wrong could lead a seminar on how various ideas discussed over the months on the LW site would improve thinking about futurism and/or transhumanism.

I'm open to suggestions about the exact title, meeting tagline, and format.

Background info follows:

For a list of previous UKH+ meetings, see http://extrobritannia.blogspot.com/

Typical UKH+ meeting logistics are:

  • Meeting happens on a Saturday afternoon
  • Lecture/Seminar/Q&A format in a room in Birkbeck College, 2-4pm
  • Subset of attendees retires to a nearby pub

This is a great idea! I'd be up for taking part in making this happen.

I'm thinking it'd be good to cover inside view vs outside view of the singularity. I'll do so depending upon when my assignment deadlines are.

I'd sooner talk about rationality in general, with some examples on how it applies to futurism.

marc:

I'm sure I can sort out a room at UCL. I'll find out whether it would be free.

UCL is particularly convenient for transport links since Euston and Kings Cross are <10mins walk and Paddington is a short tube ride away.

There are some nice little restaurants and pubs around for food/drink.

A meal after Humanity+ sounds like a great plan to me!

dw2:

The next Humanity+ meeting has been organised for Saturday 20th February.

The theme of that meeting is something different from the bulk of this thread: it's "The future of politics" - see http://extrobritannia.blogspot.com/2010/02/future-of-politics.html for more details - but hopefully it will still interest many LW readers.

Come to Oxford! I can certainly organise a college room to meet in, and we can do dinner in hall afterwards if anyone's interested.

Oxford would be insanely convenient. Also, the Future of Humanity Institute runs seminars here anyway, so those people would no doubt like to come.

However, I admit my main reason for recommending it is that it's where I am. Realistically, London would probably be better.

Oxford is good for me, but London is fine. Anywhere with a whiteboard is going to cost money to book, so take that into account.

As far as I could tell, the multiplicity of AIs thing came from people objecting to hard takeoff scenarios, so that confusion should be soluble, given more time to explain the subject (Roko was packing a massive number of ideas into that talk.)

If you all wanted to come to Cambridge I could probably get you a seminar room for free...

A LW meetup is one of those things I'd kind of like to go to in the abstract, but in reality I think it would be much too terrifying.

A LW meetup is one of those things I'd kind of like to go to in the abstract, but in reality I think it would be much too terrifying.

Honestly, where do people get these ideas?

If I had to guess I'd say the beisutsukai series.

I'm shy at the best of times, and LW people are really smart. What if I had nothing remotely useful or interesting to say?

Roko:

Then, I'm afraid that in the time-honored Bayesian Conspiracy tradition, we would have to cannibalize your brain...

Emily:

:D

OK... if that's the worst thing that could happen I'll consider coming along...

"I think you should kill him and eat his brain," Mr. Frostee said quickly.

"That's not the answer to every problem in interpersonal relations," Cobb said.

-- Rudy Rucker, Software

Yes, but you are one of the LW people.

This is politically incorrect to acknowledge, perhaps, but since so many more men than women go to these types of events, there's something of a bias towards being nicer to women. I'm sure your presence will be welcomed, even if you don't feel like saying much.

I'd go to another LW/OB or similar meetup if there were one near me... But speaking for myself, I can list some things that do make me hesitate to even post here:

Shame and guilt.

How shall I put this? I've contributed (usefully) here much less than many others. I may have managed to comprehend stuff, but that doesn't mean I've managed to add stuff.

Even more so in general in my life: applied rationality, getting stuff done, etc. — not going so well.

Even more so: even though I want to, even though I promised I would, I have yet to do much to reduce x-risk, halt death, etc. Sure, I've dropped a little bit of money on SIAI and such, but nowhere near enough. I want to do more (more precisely, I want the problem to be solved; if giving some money is the best way I can contribute, well, I accept that, but then I need to get more and give more), and I intend to do more (not just intend to try to solve the problem(s), but to actually, well, do so), but as of yet I have not. I can't really be said to have done much at all other than "intending" to, and I still hardly even know where to begin.

As I said, shame and guilt.

I have no idea to what extent any of these reasons apply to others here, but I figured I may as well list mine.

Got it bad -- but went to a meetup once anyway, and had a great time. Let this serve as encouragement to similarly-afflicted others!

Roko:

By the way, are you the Emily C I already know from Cambridge?

Nope.

Anywhere with a whiteboard is going to cost money

You can get whiteboard roll for about a pound a (reusable) sheet. I haven't seen it first-hand, but it looks fit for purpose, and it was good enough to get an investment on Dragons' Den.

Humanity+ sounds interesting -- how many of us will be at that anyway?

A question that arises out of dw2's proposal: what is the connection between rationality and futurism? I can think of some possibilities but I don't want to prime people, so I'll put them in a comment in a tick.

No ideas? The obvious one seems to be that futurism demands great rationality because it takes so long for reality to tell you what the truth is, and hindsight bias means people don't notice their errors.

I'd come along to a meeting that took place in London centered around Less Wrong/ Overcoming Bias type topics.

To be honest, the more 'strongly' Transhumanist topics don't excite me too much, but I'd love a good conversation about rationality, ethics, the (non)meaning of life, etc...

I agree that a format based on a speaker and then discussion would lend itself to a more on-topic discussion. Alternatively, for some topics more than others, a 'book-club' type approach might work:

We could, for example, all read Mill or Bentham and then one could be designated to MC the event, get conversation going, pop the attendees out of any infinite conversational loops, provide cheesy-poofs, and other duties befitting a group of people who argue on the internet. (Perhaps the one suggesting the next topic/book could then take on the responsibility for the next meeting.)

Thoughts?

(Short-time reader, first-time poster)

I think book discussions are an excellent idea, particularly for technical topics.

I'm unlikely to make it to anywhere in the South East, but don't let that put you off. Regarding plan (2), perhaps you could invite some neuroscientists?

If I knew some who could talk in generalities, I would. What I would like to find is people who can say that a given experiment shows the human brain isn't an X type of formal system, which leaves Y, Z and A as possibilities.

It sounds like you're expecting them to do all the work, rather than being prepared to meet them half-way. It would probably be more interesting and productive all round if you're prepared to explain the formal models (or at least their consequences) to the neuroscientists.

I'm prepared to meet them halfway, but trying to do it at a meetup is probably the wrong place to do it.

Yes, you're probably right.

On a related note, would anyone be interested in a meetup in Scotland? Or, failing that, the North of England?

Shameless plug: a few friends and I are going to be getting together in Oxford to discuss transhumanism and related issues. Do join the Facebook group if you're interested: http://www.facebook.com/group.php?gid=265828309853

I'd be happy to travel to London on occasion (from Birmingham), but it might be an idea for non-Londoners to volunteer their locations. There may be other places that could support a similar group.

Sounds like a good one, count me in. I work at King's Cross, so UCL is ideal. I'd have been at the FAI thing this weekend but for other arrangements.

Warning: Slightly Off-Topic

I think Ciphergoth wanted to talk about consequentialist ethics.

Can anyone suggest a book (or two) that would serve as a good introduction to consequentialism for the non-philosophy major?

I suppose good places to start would be Wikipedia and the Stanford Encyclopedia of Philosophy, but let's say I want something longer form yet not too arcane.

djcb:

In addition, there's the work of Jeremy Bentham (1748-1832), An Introduction to the Principles of Morals and Legislation. It's even available as an audiobook through LibriVox.

John Stuart Mill is probably easier to read.

John Stuart Mill wrote "Utilitarianism", which kind of kickstarted the whole mess. It's a skinny little book.

And it's out of copyright even in the U.S., which makes it available free online and cheap in print.

Thanks for the recommendation. I found an online copy here:

http://www.utilitarianism.com/mill1.htm