Comment author: taygetea 05 August 2015 08:05:58AM 0 points

Does anyone have or know anyone with a magnetic finger implant who can compare experiences? I've been considering the implant. If the ring isn't much weaker, that would be a good alternative.

Comment author: taygetea 27 July 2015 06:47:58PM 10 points

So, to my understanding, doing this in 2015 instead of 2018 is more or less exactly the sort of thing that gets talked about when people refer to a large-scale necessity to "get there first". This is what it looks like to push for the sort of first-mover advantage everyone knows MIRI needs to succeed.

It seems like a few people I've talked to missed that connection. They support the need for a first-mover advantage, and they support a MIRI-influenced value alignment research community, but they perceive you as asking for more money than you need. Making an effort to remind people more explicitly why MIRI needs to grow quickly may be valuable. Link the effect of "fundraiser" to the cause of "value learning first-mover".

Comment author: Elo 14 July 2015 11:18:54PM 3 points

I know a few people with varying forms of impostor syndrome. I have never had a similar experience myself, and I would like to bridge the gap in understanding and see if I can draw some advice out of your experience. Can you explain more?

Comment author: taygetea 15 July 2015 07:13:19AM 0 points

That's a pretty large question. I'd love to, but I'm not sure where to start. I'll begin by describing my experience in broad strokes.

Whenever I do anything, I quickly acclimate to it. It's very difficult to remember that things I know how to do aren't trivial for other people. It's much more complex than that, but I've been sitting on this text box for a few hours. So, ask a more detailed question?

Comment author: taygetea 14 July 2015 11:56:27AM 23 points

This month (and a half), I dropped out of community college, raised money as investment in what I'll do in the future, moved to Berkeley, got very involved in the rationalist community here, smashed a bunch of impostor syndrome, wrote a bunch of code, got into several extremely promising and potentially impactful projects, read several MIRI papers and kept being urged to involve myself with their research further.

I took several levels of agency.

Comment author: taygetea 17 May 2015 10:57:13PM 3 points

Hi. I don't post much, but if anyone who knows me can vouch for me here, I would appreciate it.

I have a bit of a Situation, and I would like some help. I'm fairly sure it will be positive utility, not just positive fuzzies. That doesn't stop me from feeling ridiculous for needing it. But if any of you can, I would appreciate donations, feedback, or anything else over here: http://www.gofundme.com/usc9j4

Comment author: taygetea 15 May 2015 10:40:41PM 1 point

I've begun to notice discussion of AI risk in more and more places over the last year. Many of them reference Superintelligence. It doesn't seem like confirmation bias or the Baader-Meinhof effect, not really. It's quite an unexpected change. Have others noticed a similar broadening in the sorts of people they encounter talking about this?

Comment author: taygetea 10 April 2015 06:31:02AM 2 points

Typical Mind Fallacy. It allows people to actually cooperate for once. One of the things I've been thinking about is how one person's fundamental mind structure gets interpreted by another as an obvious status grab. I want humans to better approximate Aumann's Agreement Theorem. Solve the coordination problem, solve everything.

Comment author: taygetea 06 April 2015 01:07:10AM 8 points

Determining the language to use is a classic case of premature optimization. Whatever the choice, it will have to be provably free of ambiguities, which leaves us with programming languages. In addition, in terms of the math of FAI, we're still at the "is this Turing complete" stage of development, so it doesn't really matter yet. One consideration is that the algorithm design is going to take far more time and effort than the programming, and the program has essentially no room for bugs (corrigibility is an effort to make it easier to test an AI without it resisting). In that sense, it could be argued that the lower-level the language, the better.

Directly programming human values into an AI has always been the worst option, partly for the reason you give. In addition, the religious concept you describe breaks down trivially when two different beings have different or conflicting utility functions, so acting as if those functions were the same produces a bad outcome. A better option is to construct a scheme in which the smarter the AI gets, the better it approximates human values, using its own intelligence to determine them, as in coherent extrapolated volition.

Comment author: taygetea 04 April 2015 04:54:06PM 0 points

I think I see the problem. Tell me what your response to this article is. Do you consider messy self-modification in pursuit of goals, at the expense of a bit of epistemic rationality, a valid option to take? Is Dark == Bad? In your post, you say that it is generally better not to believe falsehoods. My response is that beliefs whose truth depends on what you expect to happen are the exception to that heuristic.

Life outcomes are in large part determined by a background you can't change, but expecting to be able to change them will lead you to ignore fewer opportunities to get out of that situation. This post about luck is also relevant.

Comment author: taygetea 04 April 2015 04:34:27PM 1 point

I can't say much about the consequences of this, but it appears to me that both democracy and futarchy are efforts to more closely approximate something along the lines of a CEV for humanity. They also share the same core problem: how do you reconcile the mutually exclusive goals of the people involved?

In any case, that isn't directly relevant, but linking futarchy with AI is what made me notice it. Perhaps that style of optimization, getting at what we "truly want" once we've cleared up all the conflicting meta-levels of "want-to-want", is something the same sorts of people tend to promote.
