I would like to share some general thoughts on this topic as it relates to AI and the Singularity.

I am a speculator, and I find that a single "right" decision typically does not exist. A decision is more like a judgement: a selection of the better alternative. Most executives making top-level decisions must rely more on opinion than on fact, especially when a great deal of uncertainty is involved.

In many cases, outcomes do not turn out as intended.

This is relevant to the AI singularity question. In an effort to create a potential hedge of protection for the good of mankind, we consider the idea of "creating AI machines that are intended to be human-friendly before any other AI machines are made".

This may be the last invention man will ever make...

Please consider:

  1. Good intentions typically have unintended consequences,
  2. The law of opposites must be considered,
  3. The aim should be optimal market standing rather than total dominance, and
  4. We should recognize that we operate under an illusion of control.

These are a few of the many considerations that require analysis; determining the right questions to ask is another hard part.

This post will not even attempt to solve this problem.

I hope this adds value to the discussion; if not here, then I hope it can be directed to wherever it will contribute most to the decision-making process.