You could look at MIRI's research page to judge whether its research is something you could contribute to. You could start by commenting on new AI-related posts on LW. My guess is that in a few years' time MIRI will still say that there is a high marginal benefit to them having more money. Don't let the fact that you can't donate lots of money stop you from donating small sums, even just $5 a year.
Does that mean I should try to seek a high-paying job? Or try to add a little more brainpower to the problem? Perhaps both?
Really, my life goal for the past decade and a half has been to become a physicist. But this definitely seems like something I'm obligated to support.
But I will definitely try and donate whatever meagre amounts I can produce.
So8res gave a talk on this at the Effective Altruism Global summit, a video of which should be up in a few weeks.
Hello. I am a young man who's quite worried about AI, like many here, and I'd like to know if I could help in any way.
I can't donate much money, at least not for a few years, and from reading So8res' recent post, money won't be much of an issue in a few years.
However, I am fairly intelligent, and I think I could help a little on the research side, after a few years of rigorous study, of course.
But there are many people at least as intelligent as I, so perhaps my trying to help wouldn't make any difference.
The issue is, I'm in no place to know. If someone out there is working in the field and comes across this, please advise me on what to do. Should I study hard and try to help a little, or would going down this path be fruitless?
I agree with the others: some of those questions would take way too long to answer properly, and they were pretty vague.
And the middle option should reflect some neutral position like 'I'm not sure' rather than 'it's irrelevant for consciousness', or whatever. That's what I used it for.
Also (though this is really my fault) some of those terms were unfamiliar, and I was feeling too lazy to look them all up, so you may get some anomalies in there. I expect some people might have done this with at least one question.
Is MIRI making an FAI only with regard to humans? That is, would it do whatever best aligns with what humans want?
If so, what would happen in the case of extraterrestrial contact? All sorts of nasty situations could occur, e.g. them having an AI as well, with a fairly different set of goals, so the two AIs might engage in some huge and terrifying conflict. Or maybe they'd just agree to co-operate because the conflict would be too costly.
So have the researchers at MIRI set something like this as a goal?
See the reply I just wrote to gjm for an explanation of my motivations.
When I was writing this, I thought the intent to parody would be clear; surely no one could seriously suggest we have to strike 'strength' from our dictionaries? I seem to have been way off on that. Perhaps that is a reflection on internet culture at large, where these kinds of arguments are common enough not to raise any eyebrows.
Anyway, I went one step further and put "parody" in the title.
Ah, that makes sense.
I would probably put something like "this is a parody of the arguments used for 'there is no such thing as intelligence', etc." somewhere, as some people (i.e. me) might not pick up on what you're parodying.
Though perhaps I'm just in a small minority, and I don't read internet debates as often as others do.
Thanks for the clarification by the way.
Hmm, on second thought, I added a [/parody] tag at the end of my post - just in case...
You know, I must applaud you. You really surprised me there. After reading that I could only say 'What?'
Was this made as a prank or just as a humorous piece? I'm quite curious to know your intentions here.
A little while back, someone asked me 'Why don't you pray for goal X?' and I said that there were theological difficulties with that, and since we were about to go into the cinema, it was hardly the place for a proper theological discussion.
But that got me thinking, if there weren't any theological problems with praying for things, would I do it? Well, maybe. The problem being that there's a whole host of deities, with many requiring different approaches.
For example, if I learnt that the God of the Old Testament was real, I would probably change my set of acceptable actions very, very quickly. Perhaps another reasonable response would be to try and very carefully convince this God to change its mind about a couple of things, as the God of the Old Testament is capable of change, if I remember rightly.
On the other end of the spectrum, what about the Greek gods? Well, I think it would still be a good idea to try and convince them not to be, you know, egotistical tyrants. Or failing that, humanity should probably try and contain them in some fashion, because who'd want someone like Zeus going about as they pleased?
And if Aristotle's Prime mover were real... Well, I guess you'd just ignore it.
Anyway, I think it's a pretty interesting topic, if not a very useful one.
Any thoughts on how you'd react to any of humanity's collection of deities?
All the constraints you put down aren't the same as making the least powerful genie. Restricting its time or resources or whatever increases its efficiency, but only as a by-product, an accident. The least powerful genie should be the most efficient not as a by-product of its design, but as the end goal. The model you put down just happens to approximate the least powerful genie.