Comment author: Throawey 20 September 2016 04:54:25AM *  5 points [-]

For a while now, I have been working on a potentially impactful project. The main limiting factor is my own personal productivity- a great deal of the risk is frontloaded in a lengthy development phase. Extrapolating the development duration based on progress so far does not yield wonderful results. It appears I should still be able to finish it in a not-absurd timespan, it will just be slower than ideal.

I've always tried to improve my productivity, and I've made great progress compared to ten or even five years ago, but at this point I've picked most of the standard low-hanging fruit. I've already fiddled with some extremely easy and safe kinda-nootropics- melatonin, occasional caffeine pills- but not things like modafinil or amphetamines, or some of the less studied options.

And while thinking about this today, I decided to just run some numbers on amphetamines. Based on my current best estimates of market realities and the potential success and failure cases of the project, assuming amphetamines could improve my productivity by 30% on average, the expected value of taking amphetamines for the duration of development comes out to...

...a few hundred human lives.

And, in the best-reasonable case scenario, a lot more than that. This wasn't really unexpected, but it's surprisingly the first time I actually did the math.
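The comment doesn't give its actual estimates, but the shape of the calculation is simple. A back-of-the-envelope sketch, with every number invented purely for illustration:

```python
# All figures below are hypothetical placeholders; the original comment
# does not state its real estimates.
p_success_baseline = 0.10      # assumed chance the project succeeds at the current pace
p_success_boosted = 0.13       # assumed chance with a ~30% productivity gain
lives_saved_if_success = 5000  # assumed impact of a successful project

# Expected extra lives attributable to the productivity intervention
expected_extra_lives = (p_success_boosted - p_success_baseline) * lives_saved_if_success
print(expected_extra_lives)  # 150.0 with these made-up inputs
```

With these made-up inputs the delta alone is on the order of "a few hundred lives", which is the kind of result the comment describes; the conclusion is sensitive to every one of the assumed parameters.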

So I imagine the God of Dumb Trolley Problems sits me down for a thought experiment and explains: "In a few years, there will be a building full of 250 people. A bomb will go off and kill all of them. You have two choices." The god leans in for dramatic effect. "Either you can do nothing, and let all of them die... or..." It lowers its head just enough for shadows to cast over its features... "You take this low, safe dose of Adderall for a few years, and the bomb magically gets defused."

This is not a difficult ethical problem. Even taking into account potential side effects, even assuming the amphetamines were obtained illegally and so carried legal liability, this is not a difficult ethical problem. When I look at this, I feel like the answer of what I should do is blindingly obvious.

And yet I have a strong visceral response of "okay yeah sure but no." I assume part of this is fairly extreme risk aversion to the idea of getting anything like amphetamines outside of a prescription. Legal trouble would be pretty disastrous, even if unlikely. And part of me is spooked about doing something like this without expert oversight.

But why not just try to get an actual prescription? For this, or some other advantageous semi-nootropic, at least. Once again, I just get a gross feeling about the idea of trying to manipulate the system. How about if I just explain the situation in full, with zero manipulation, to a sympathetic doctor? The response from my gut feels like a blank "... no."

So basically, I feel stuck. Part of me wants to recognize the risk aversion as excessive, and suggests I should at least take whatever steps I can safely. The other part is saying "but that is doing something waaaay out of the ordinary and maybe there's a reason for that which you haven't properly considered."

I am not even sure what I want to ask with this post. I guess if you've got any ideas or insights, I'd like to hear them.

Comment author: Gurkenglas 21 September 2016 03:11:48AM 1 point [-]

Perhaps you expect to in the future be in a position where your expected impact is significantly larger, and so your gut tells you to be careful with anything whose long-term effects are not clear?

Comment author: ChristianKl 08 September 2016 09:16:03PM 1 point [-]

Rationality is about not simply taking up beliefs unfiltered, but evaluating other people's claims before you believe them. Not doing that would seem to miss the point on a general level.

Comment author: Gurkenglas 09 September 2016 05:45:18PM *  0 points [-]

Accepting conclusions that have been accepted by a sufficient number of marginally trustworthy people is not necessarily a bad heuristic. He might gain more from dogma if he won't persevere through the reading, though a list that people are publicly being pointed to could lead to people pointing fingers and saying "cult".

Comment author: turchin 28 August 2016 10:34:44AM 1 point [-]

I only said that it would reduce chance of stupid decisions resulting from not understanding basic human words and values. But it would not reduce chances of deliberately malicious AI.

There are (at least) two different types of UFAI: real UFAI and failed FAI. A failed FAI wanted to be good but failed; the best example is a smile maximizer, which will cover the whole Solar system with smiles. (A paperclip maximizer is also a form of failed FAI, as the initial goal was positive- produce many paperclips.)

So it is not a full recipe for real FAI, just one way of doing value learning.

Comment author: Gurkenglas 03 September 2016 10:40:05AM 0 points [-]

You confuse the stupidity of whoever set the goals with the stupidity of the AI afterward. If the goal it was given wasn't already smart enough, any AGI is still going to understand what we actually want; it just won't care.

Comment author: passive_fist 23 August 2016 01:17:38AM -4 points [-]

And Lumifer's dismissal of it is probably the most low-effort way of responding. Students of rationality, take note.

Comment author: Gurkenglas 24 August 2016 03:06:43AM -3 points [-]

Students of rationality

You sound like you're trying to win at werewolf! Gleb at least appears honest.

Comment author: passive_fist 22 August 2016 11:13:32PM *  -3 points [-]

On the contrary, being able to identify your own biases and being able to express what kind of information would change your mind is at the heart of rationality.

You're a libertarian. We all know that. But regardless of whether you ideologically agree with the conclusions of the article or not, you should be able to give a more convincing counter-argument than 'godawful clickbait piece-of-crap.'

Comment author: Gurkenglas 24 August 2016 03:05:17AM -2 points [-]

You're a libertarian.

I think that non-sequitur ad hominem got you those downvotes.

Comment author: turchin 23 August 2016 08:49:05PM *  -1 points [-]

(memetic hazard) ˙sƃuıɹǝɟɟns lɐuɹǝʇǝ ɯnɯıxɐɯ ǝʇɐǝɹɔ oʇ pǝsıɯıʇdo ɹǝʇʇɐɯ sı ɯnıuoɹʇǝɹnʇɹoʇ

Update: added a full description of the idea on my Facebook https://www.facebook.com/turchin.alexei/posts/10210360736765739?comment_id=10210360769286552&notif_t=feed_comment&notif_id=1472326132186571

Comment author: Gurkenglas 24 August 2016 02:25:12AM *  -1 points [-]
Comment author: Daniel_Burfoot 20 August 2016 02:30:57AM 6 points [-]

Note that DeepMind's two big successes (Atari and Go) come from scenarios that are perfectly simulable in a computer. That means they can generate an arbitrarily large number of data points to train their massive neural networks. Real world ML problems almost all have strict limitations on the amount of training data that is available.

Comment author: Gurkenglas 20 August 2016 10:19:07PM *  2 points [-]

On the other hand, it's simple to generate AI-complete problems where you can generate training data.

Comment author: Gurkenglas 08 August 2016 11:08:19AM *  1 point [-]

Why'd they make this public?

Comment author: turchin 03 July 2016 10:32:21PM 0 points [-]

We can't conclude that they would not differ. We could postulate it and then ask: could we measure whether equal copies have equal qualia? And we can't measure it. And here we return to the "hard question": we don't know whether different qualia imply different combinations of atoms.

In response to comment by turchin on Zombies Redacted
Comment author: Gurkenglas 05 July 2016 11:05:07PM 0 points [-]

If the copies are different, the question is not interesting. If the copies aren't different, what causes you to label what he sees as red? It can't be the wavelength of the light that actually goes in his eye, because his identical brain would treat red's wavelength as red.

Comment author: Dagon 05 July 2016 01:52:20PM 0 points [-]

Being able to meet many goals is useful. Actually meeting wrong goals is not.

Your hyperbolic discounting example is instructive, as without a model of your goals, you cannot know whether your current or future self is correct. Most people come to the opposite conclusion - a hyperbolic discount massively overweights the short-term in a way that causes regret.
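The regret Dagon describes comes from preference reversal: under hyperbolic discounting, a relative ranking of two future rewards can flip as they draw near. A toy illustration, with all values and the discount parameter invented for the example:

```python
def hyperbolic(value, delay, k=1.0):
    # Hyperbolic discounting: perceived value falls off as 1/(1 + k * delay).
    # k is a made-up steepness parameter for this sketch.
    return value / (1 + k * delay)

# Hypothetical choice: a reward of 10 after t days, vs 15 after t + 5 days.
for t in (1, 20):
    small = hyperbolic(10, t)
    large = hyperbolic(15, t + 5)
    winner = "small-sooner" if small > large else "large-later"
    print(f"t={t}: {small:.2f} vs {large:.2f} -> prefer {winner}")
```

Viewed from far away (t=20) the larger-later reward wins, but up close (t=1) the smaller-sooner one does; the agent's earlier plan gets overturned by its later self, which is exactly the short-term overweighting that causes regret.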

Comment author: Gurkenglas 05 July 2016 06:22:05PM *  0 points [-]

a hyperbolic discount massively overweights the short-term in a way that causes regret.

I meant that - when planning for the future, I want my future self to care about each absolute point in time as much as my current self does, or barring that, to only be able to act as if it did, hence the removal of power.

The correct goal is my current goal, obviously. After all, it's my goal. My future self may disagree, preferring its own current goal. Correct is a two-place word.

If I let my current goal be decided by my future self, but I don't know yet what it will decide, then I should accommodate as many of its possible choices as possible.
