Comment author: John_Maxwell_IV 06 February 2013 09:00:07AM 5 points [-]

A common related situation: unproductive group conversations.

Comment author: Duncan 06 February 2013 02:31:33PM 0 points [-]

Do you have any suggestions on how to limit this? I find meetings often meander from someone's pet issue to trivial or irrelevant details, while the important broader topic withers and dies despite the meeting running 2-3x longer than planned.

In meetings where I have some control, I try to keep people on topic, but it's quite hard. In meetings where I'm the 'worker bee' it's often hopeless (I don't want to rub the boss the wrong way).

Comment author: Duncan 06 February 2013 02:23:59PM 13 points [-]

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

Let me translate: "You should do what I say because I said so." This is an attempt to overpower you, and it is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, with people who can be reasoned with, is often just to point this out and note that it is extremely offensive. If they cannot be reasoned with, then you just have to play the political game humans have been playing for ages.

Comment author: gwern 31 January 2013 05:22:01PM 7 points [-]

It is, in some places. Just not in the USA, where CFAR is operating now and for the foreseeable future. I'm a big fan of modafinil, as you might guess, but if CFAR were even idly considering providing or condoning modafinil use, I'd smack them silly (metaphorically); organizations must obey different standards than individuals.

Comment author: Duncan 31 January 2013 06:15:05PM 1 point [-]

I agree that they should uphold strict standards, for numerous reasons. That doesn't prevent CFAR from discussing the potential benefits (and side effects) of different drugs (caffeine, aspirin, modafinil, etc.). They could also recommend that people discuss such things with their doctor, and explain what criteria are used to prescribe such drugs (they might already, for all I know).

Comment author: gwern 31 January 2013 05:03:31PM 10 points [-]

I'm curious as to why caffeine wasn't sufficient, but also why modafinil would offend people?

As a Schedule IV drug, it's surely some sort of crime to offer or accept. Some people will not want to associate with such people or organizations on moral grounds, on risk-aversion grounds, or out of fear of other people's disapproval on either ground.

Comment author: Duncan 31 January 2013 05:08:28PM 1 point [-]

Ah, I thought it was an over-the-counter drug.

Comment author: Kevin 31 January 2013 02:07:04PM 5 points [-]

Offering everyone modafinil or something at the beginning of future workshops might help with this.

It would help, but it would inevitably offend people, and it would not be worth the consequences at all.

Comment author: Duncan 31 January 2013 03:07:45PM 2 points [-]

I'm curious as to why caffeine wasn't sufficient, but also why modafinil would offend people?

What about trying bright lighting? http://lesswrong.com/lw/gdl/my_simple_hack_for_increased_alertness_and/

Comment author: Duncan 31 January 2013 03:00:44PM 3 points [-]

I'm glad to hear it is working well and is well received!

Once there has been some experience running these workshops, I really hope CFAR can design something for meetup groups to try or implement, and/or an online version.

Is there a CFAR webpage that covers this particular workshop and how it went?

Comment author: Mestroyer 28 January 2013 09:40:15PM 4 points [-]

If you can't even look in at what the AI is doing, what is the point of creating it at all? If you can, you are just as vulnerable as in the experiments where it can chat text at you.

Comment author: Duncan 28 January 2013 09:48:19PM *  8 points [-]

It is useful to consider because if AI isn't safe even when contained to the best of our ability, then no method reliant on AI containment is safe (i.e., boxing it behind a text-only chat channel, or any of the other containment schemes).

Comment author: Duncan 28 January 2013 09:26:35PM 2 points [-]

My draft attempt at a comment. Please suggest edits before I submit it:

The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk, or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, thought 10x, 100x, or 1,000,000x faster than anyone else, could compute math equations perfectly in an instant, etc. No one on this planet could compete with you, and with a little time no one could stop you (and that is just a crude brain simulation).
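The "more computational power than a human brain" claim can be made concrete with a rough comparison. The figures below are order-of-magnitude estimates commonly cited around this time, not measurements; both numbers are assumptions chosen for illustration:

```python
# Back-of-envelope comparison of estimated brain compute vs. machine compute.
# Both figures are rough, order-of-magnitude assumptions, not measured values.

brain_ops_per_sec = 1e16          # a common estimate for the human brain
supercomputer_ops_per_sec = 1.7e16  # roughly a top supercomputer circa 2013

ratio = supercomputer_ops_per_sec / brain_ops_per_sec
print(f"Machine-to-brain compute ratio: ~{ratio:.1f}x")
```

Even granting wide error bars on both estimates, the point is only that the two quantities are in the same ballpark, and the machine side grows every year.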

Here are two websites that go into much greater detail about the problem:

AI Risk & Friendly AI Research: http://singularity.org/research/ http://singularity.org/what-we-do/

Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/

In response to Cryonics priors
Comment author: Duncan 25 January 2013 06:52:09AM *  -1 points [-]

"1. Life is better than death. For any given finite lifespan, I'd prefer a longer one, at least within the bounds of numbers I can reasonably contemplate."

Have you included estimates of possible negative utilities? One thing we can count on is that, if you are revived, you will be at the mercy of whatever revived you. How do you estimate the probability that what wakes you will be friendly? Is the chance at eternal life worth the risk of eternal suffering?

Comment author: Duncan 24 January 2013 03:52:48PM *  2 points [-]

I think CFAR is a great idea with tons of potential, so I'm curious: are there any updates on how the meetup went and what sorts of things were suggested?
