
Comment author: jyan 26 April 2017 07:54:58AM 0 points [-]

The figurehead-and-branches idea is interesting. If data, code, and workers are located all over the world, the organization can probably survive even if one or a few branches are taken over. Where should the head office be located, and in what form (e.g. holding company, charity)? These kinds of questions deserve a post of their own; do you happen to know of any place to discuss building a safe AI research lab from scratch?

Comment author: Darklight 27 April 2017 01:31:18AM 0 points [-]

I don't really know enough about business and charity structures and organizations to answer that quite yet. I'm also not really sure where else would be a productive place to discuss these ideas. And I doubt I or anyone else reading this has the real resources to attempt to build a safe AI research lab from scratch that could actually compete with the major organizations like Google, Facebook, or OpenAI, which all have millions to billions of dollars at their disposal, so this is kind of an idle discussion. I'm actually working for a larger tech company now than the startup from before, so for the time being I'll be kinda busy with that.

Comment author: jyan 23 April 2017 01:37:26PM 0 points [-]

If a new non-profit AI research company were to be built from scratch, which regions or countries would be best for the safety of humanity?

Comment author: Darklight 24 April 2017 12:32:32AM 0 points [-]

That is a hard question to answer, because I'm not a foreign policy expert. I'm a bit biased towards Canada because I live there and we already have a strong A.I. research community in Montreal and around Toronto, but I'll admit that Canada, as a middle power in North America, is fairly beholden to American interests as well. Alternatively, some reasonably peaceful, stable, and prosperous democratic country like, say, Sweden, Japan, or Australia might make a lot of sense.

It may even make some sense to have the headquarters be more of a figurehead, and have the company operate as a federated, decentralized organization with functionally independent but cooperating branches in various countries. I'd probably avoid establishing such branches in authoritarian states like China or Iran, mostly because such states would have a much easier time arbitrarily taking control of the branches on a whim. So I'd stick to fairly neutral or pacifist democracies with a good history of respecting the rule of law, both local and international, and which are relatively safe from invasion or undue influence by the great powers: the U.S., Russia, and China.

Maybe an argument can be made for intentionally offsetting the U.S. monopoly by explicitly setting up shop in another great power like China, but that runs the risks I mentioned earlier.

And if you could somehow acquire a private ungoverned island in the Pacific, an offshore platform, an orbital space station, or a base on the Moon or Mars, that would be cool too, but I highly doubt any of that is logistically an option for the foreseeable future, not to mention it could attract some hostility from the existing world powers.

Comment author: Darklight 10 April 2017 12:01:12AM 2 points [-]

I've had arguments before with negative-leaning Utilitarians and the best argument I've come up with goes like this...

Proper Utility Maximization needs to take into account not only the immediate happiness and suffering of the present slice of time, but also the net utility of all sentient beings throughout all of spacetime. If the Eternal Block Universe theory of physics is true, then past and future sentient beings do in fact exist, and therefore matter equally.

Now, the important thing to stress is that what matters is not the current Net Utility today but the overall Net Utility throughout Eternity. Two basic assumptions can be made about the trends through spacetime. First, compounding population growth means that most sentient beings exist in the future. Second, melioristic progress means that conscious experience is, all other things being equal, more positive in the future than in the past, because of the compounding effects of technology and of sentient beings deciding to build better systems, structures, and societies that outlive the individuals themselves.

Sentient agents are not passive; they actively seek positive conscious experiences and try to create circumstances that will perpetuate them. Thus, as the power of sentient beings to influence the state of the universe increases, so should the ratio of positive to negative experience. Other things, such as the psychological negativity bias, remain stable throughout history, while the compounding factors trend upwards, usually at an exponential rate.

Thus, assuming these trends hold, we can expect that the vast majority of conscious experiences will be positive, and the overall universe will be net positive in terms of utility. Does that suck for us who live close to the beginning of civilization? Kinda yes. But from a Utilitarian perspective, it can be argued that our suffering is for the Greatest Good, because we are the seeds, the foundation from which so much will have its beginnings.

Now, it can be countered that we do not know whether the future really exists, and that humanity and its legacy might well be snuffed out sooner rather than later. Indeed, the fact that we were born here and now can be seen as statistical evidence for this: if, on average, you are most likely to be born at the height of human existence, then this period of time is probably near the maximum point before the decline.

However, we cannot be sure about this. Also, if the Many Worlds Interpretation of quantum mechanics is true, then even if humanity ceases to exist around this time in most worlds, there is still a non-trivial fraction of worlds in which humanity survives into the far distant future, establishes a legacy among the stars, and creates relative utopia through the compounding effects described above. For the sake of these possible worlds, and their extraordinarily high expected utility, I would recommend trying to keep life and humanity alive.
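
As a purely illustrative expected-value sketch of that last point (the probability and utility numbers below are made-up placeholders, not estimates), the claim is that a small survival probability multiplied by an astronomically large payoff still dominates the calculation:

\[
\mathbb{E}[U] = p \cdot U_{\text{flourish}} + (1 - p) \cdot U_{\text{extinct}},
\qquad \text{e.g. } p = 10^{-2},\; U_{\text{flourish}} = 10^{20},\; U_{\text{extinct}} \approx 0
\;\Rightarrow\; \mathbb{E}[U] \approx 10^{18}.
\]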

Comment author: RedMan 07 April 2017 02:54:45PM *  0 points [-]

Thank you for the thoughtful response! I'm not convinced that your assertion successfully breaks the link between effective altruism and the blender.

Is your argument consistent with making the following statement when discussing the impending age of em?

If your mind is uploaded, a future version of you will likely subjectively experience hell. Some other version of you may also subjectively experience heaven. Many people, copies of you split off at various points, will carry all the memories of your human life. If you feel like your brain is in a blender trying to conceive of this, you may want to put it into an actual blender before someone with temporal power and an uploading machine decides to define your eternity for you.

Comment author: Darklight 07 April 2017 08:36:48PM 0 points [-]

Well, if we're implying that time travellers could go back and invisibly copy you at any point in time and then upload you to whatever simulation they feel inclined towards... I don't see how blendering yourself now will prevent them from just going to the moment before that and copying that version of you.

So the reality is that blendering yourself achieves only one thing: it prevents the future possible yous from existing. Personally, I think that does a disservice to future you. The same reasoning extends to others. We cannot conceivably prevent super-advanced time travellers from copying and mind-uploading anyone. Ultimately that is outside of our locus of control and therefore not worth worrying about.

What is more pressing, I think, are the questions of how we are practically acting to improve the positive conscious experiences of existing and potentially existing sentient beings, encouraging the general direction towards heaven-like simulations, and discouraging sadistic hell-like simulations. These may not be preventable, but our actions in the present should have an outsized impact on the trillions of descendants of humanity that will likely be our legacy to the stars. Whatever we can do now to encourage altruism and discourage sadism in humanity may very well determine the ratio of heaven to hell simulations that those aforementioned time travellers may one day decide to throw together.

Comment author: Elo 03 April 2017 07:04:24AM 4 points [-]

Curious whether this is worth making into its own weekly thread. Curious what's being worked on, in personal life, work life, or just "cool stuff". I would like people to share; after all, we happen to have similar fields of interest and similar problems we are trying to tackle.

Projects sub-thread:

  • What are you working on this week (a few words or a serious breakdown)(if you have a list feel free to dump it here)?
  • What do you want to be asked about next week? What do you expect to have done by then?
  • Have you noticed anything odd or puzzling to share with us?
  • Are you looking for someone with experience in a specific field to save you some search time?
  • What would you describe as your biggest bottlenecks?
Comment author: Darklight 06 April 2017 10:22:33AM 1 point [-]

I recently made an attempt to restart my Music-RNN project:

https://www.youtube.com/playlist?list=PL-Ewp2FNJeNJp1K1PF_7NCjt2ZdmsoOiB

Basically went and made the dataset five times bigger and got... a mediocre improvement.

The next step is to figure out Connectionist Temporal Classification and attempt to implement Text-To-Speech with it. And somehow incorporate pitch recognition as well so I can create the next Vocaloid. :V
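
For what it's worth, here's a minimal sketch of what computing a CTC loss looks like, using PyTorch's nn.CTCLoss; the shapes, sizes, and random tensors are purely illustrative assumptions standing in for a real network and dataset:

```python
import torch
import torch.nn as nn

# Illustrative dimensions: T input frames, N batch size, C alphabet size (index 0 = blank), S target length
T, N, C, S = 50, 4, 28, 10

# Stand-in for a network's per-frame log-probabilities over the alphabet
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.long)        # label sequences (0 reserved for blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # in practice the gradients would flow back into the real network
print(loss.item())
```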

Also, because why not brag while I'm here, I have an attempt at an Earthquake Predictor in the works... right now it only predicts the high-frequency, low-magnitude quakes, rather than the low-frequency, high-magnitude quakes that would actually be useful... you can see the site where I would be posting daily updates if I weren't so lazy...

http://www.earthquakepredictor.net/

Other than that... I was recently also working on holographic word vectors in the same vein as Jones & Mewhort (2007), but shelved that because I could not figure out how to normalize/standardize the blasted things reliably enough to get consistent results across different random initializations.
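
To give a concrete sense of what I mean by holographic vectors, here's a minimal sketch of circular-convolution binding in the vein of Jones & Mewhort (2007), with one plausible (not necessarily their) normalization scheme; the dimensionality, word names, and exact setup are just illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality (illustrative)

def env_vector():
    # Random environmental vector with variance 1/D, in the BEAGLE-style setup
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def cconv(a, b):
    # Circular convolution (the binding operation), computed via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def normalize(v):
    # One plausible scheme: scale to unit Euclidean length so magnitudes stay
    # comparable across different random initializations
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

dog, bites = env_vector(), env_vector()
memory = normalize(dog + cconv(dog, bites))  # superpose item info with bound order info
print(memory.shape, round(float(np.linalg.norm(memory)), 3))  # (1024,) 1.0
```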

Oh, also was working on a Visual Novel game with an artist friend who was previously my girlfriend... but due to um... breaking up, I've had trouble finding the motivation to keep working on it.

So many silly projects... so little time.

Comment author: Darklight 05 April 2017 03:58:11PM 1 point [-]

This actually reminds me of an argument I had with some Negative-Leaning Utilitarians on the old Felicifia forums. Basically, a common concern for them was that r-selected species tend to appear to suffer far more than they are happy, generally speaking, and that this can imply we should try to reduce the suffering by eliminating those species, or at least avoid expanding life to other planets.

I likened this line of reasoning to the idea that we should Nuke The Rainforest.

Personally, I think a similar counterargument applies here as well. Translated into your thought experiment, it would be, in essence, that while it is true that some percentage of minds will probably end up being tortured by sadists, this is likely to be outweighed by the sheer number of minds that are even more likely to be uploaded into some kind of utopian paradise. Given that truly psychopathic sadism is actually quite rare in the general population, one would expect a very similar ratio among simulations. In the long run, the optimistic view is that decency will prevail and net happiness will be positive, so we should not go around trying to blender brains.

As for the general issue of terrible human decisions being incentivized by these things... humans are capable of using all sorts of rationalizations to justify terrible decisions, and so the mere possibility that some people will not do due diligence with an idea and will instead abuse it to justify their evil should not, by itself, be reason to abandon the idea.

For instance, the possibility of living an indefinite lifespan is likely to dramatically alter people's behaviour, including making them more risk-averse and more focused on the long term. This is not necessarily a bad thing, but it could lead to fewer people making necessary sacrifices for the good. These things are also notoriously difficult to predict. Ask a medieval peasant what the effects of machines that could farm vast swaths of land would be on the economy and their livelihood, and you'd probably get a very parochially minded answer.

Comment author: DustinWehr 03 April 2017 10:06:59PM *  13 points [-]

A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That's an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci-fi.

I hope everyone is aware of that perception problem.

Comment author: Darklight 05 April 2017 03:49:48AM 10 points [-]

I may be an outlier, but I've worked at a startup company that did machine learning R&D, and which was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but, like, our models right now are nowhere near being able to recursively self-improve independently of our direct supervision. Actual ML models basically need a ton of fine-tuning and engineering and are not really independent agents in any meaningful way yet.

So, no, we don't think people who worry about superintelligence are uneducated cranks; a lot of ML people take it seriously enough that we've had casual lunch-room debates about it. Rather, the reality on the ground is that right now most ML models have enough trouble figuring out relatively simple tasks like Natural Language Understanding, Machine Reading Comprehension, or Dialogue State Tracking, and none of us can imagine how solving those practical problems with, say, Actor-Critic Reinforcement Learning models that lack any sort of will of their own will suddenly lead to the emergence of an active general superintelligence.

We do still think that things will likely develop eventually, because people have been burned before underestimating what A.I. advances will occur in the next X years. And when faced with the actual possibility of developing an AGI or ASI, we're likely to be much more careful, once things start to get closer to being realized. That's my humble opinion anyway.

Comment author: Darklight 05 April 2017 03:28:50AM 3 points [-]

I think the basic argument for OpenAI is that it is more dangerous for any one organization or world power to have an exclusive monopoly on A.I. technology, and so OpenAI is an attempt to safeguard against this possibility. Basically, it reduces the probability that someone like Alphabet/Google/Deepmind will establish an unstoppable first mover advantage and use it to dominate everyone else.

OpenAI is not really meant to solve the Friendly/Unfriendly AI problem. Rather it is meant to mitigate the dangers posed by for-profit corporations or nationalistic governments made up of humans doing what humans often do when given absurd amounts of power.

Personally, I think OpenAI doesn't actually solve this problem well enough, because it is still based in the United States and thus beholden to U.S. laws. I wish they'd chosen a different country: right now the bleeding edge of A.I. technology is being developed primarily in a small region of California, and that just seems like putting all your eggs in one basket.

I do think however that the general idea of having a non-profit organization focused on AI technology is a good one, and better than the alternative of continuing to merely trust Google to not be evil.

Comment author: PhilGoetz 04 April 2017 04:42:41AM *  3 points [-]

Benquo isn't saying that these attitudes necessarily follow, but that in practice he's seen it happen. There is a lot of unspoken LessWrong / SIAI history here. Eliezer Yudkowsky and many others "at the top" of SIAI felt personally responsible for the fate of the human race. EY believed he needed to develop an AI to save humanity, but for many years he would only discuss his thoughts on AI with one other person, not trusting even the other people in SIAI, and requiring them to leave the area when the two of them talked about AI. (For all I know, he still does that.) And his plans basically involve creating an AI to become world dictator and stop anybody else from making an AI. All of that is reducing the agency of others "for their own good."

This secrecy was endemic at SIAI; when I've walked around NYC with their senior members, sometimes 2 or 3 people would gather together and whisper, and would ask anyone who got too close to please walk further away, because the ideas they were discussing were "too dangerous" to share with the rest of the group.

Comment author: Darklight 04 April 2017 10:20:41PM 0 points [-]

Well, that's... unfortunate. I apparently don't hang around in the same circles, because I have not seen this kind of behaviour among the Effective Altruists I know.

Comment author: Darklight 01 April 2017 01:35:12AM 1 point [-]

I think you're misunderstanding the notion of responsibility that consequentialist reasoning theories such as Utilitarianism argue for. The nuance here is that responsibility does not entail that you must control everything. That is fundamentally unrealistic and goes against the practical nature of consequentialism. Rather, the notion of responsibility would be better expressed as:

  • An agent is personally responsible for everything that is reasonably within their power to control.

This coincides with the notion of there being a locus of control, which is to say that there are some things we can directly affect in the universe, and other things (most things) that are beyond our capacity to influence, and therefore beyond our personal responsibility.

Secondly, I take issue with the idea that this notion of responsibility is somehow inherently adversarial. On the contrary, I think it encourages agents to cooperate and form alliances for the purpose of achieving common goals such as the greatest good. This naturally tends to be associated with granting other agents as much autonomy as possible, because that usually enables them to maximize their happiness: a rational Utilitarian will understand that individuals tend to know their own preferences, and what makes them happy, better than anyone else. This is arguably why John Stuart Mill and many modern-day Utilitarians are also principled liberals.

Only someone suffering from delusions of grandeur would be so paternalistic as to assume they know better than the people themselves what is good for them, and to try to take away their control and resources in the way that you describe. I personally tend towards something I call a Non-Interference Code as a heuristic for practical ethical decision-making.
