Comments

Two points:

First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?

Second, I appreciate this post because what Harris's disagreements with others so often need is exactly dissolution. And you've accurately described Harris's project: He is trying to persuade an ideal listener of moral claims (e.g., it's good to help people live happy and fulfilling lives), rather than trying to prove the truth of these claims from non-moral axioms.

Some elaboration on what Harris is doing, in my view:

  • Construct a hellish state of affairs (e.g., everyone suffering for all eternity to no redeeming purpose).
  • Construct a second state of affairs that is not so hellish (e.g., everyone happy and virtuous).
  • Call on the interlocutor to admit that the first situation is bad, and that the second situation is better.
  • Conclude that the interlocutor has admitted the truth of moral claims, even though Harris himself never explicitly said anything moral.

But by adding notions like "to no redeeming purpose" and "virtuous," Harris is smuggling oughts into the universes he describes. (He has to do this in order to block the interlocutor from saying "I don't admit the first situation is bad, because the suffering could be for a good reason; and the second situation might not be good, because maybe everyone is happy only in a trivial sense, having just wireheaded.")

In other words, Harris has not bridged the gap because he has begun on the "ought" side.

Rhetorically, Harris might omit the bits about purpose or virtue, and the interlocutor might still admit that the first state is bad and the second better, because the interlocutor has cooperatively embedded these additional moral premises.

In this case, to bridge the gap Harris counts on the listener supplying the first "ought."

Summary:

Regardless of whether one adopts a pessimistic or optimistic view of artificial intelligence, policy will shape how it affects society. This column looks at both the policies that will influence the diffusion of AI and policies that will address its consequences. One of the most significant long-run policy issues relates to the potential for artificial intelligence to increase inequality. 

The author is Selmer Bringsjord.

Academic: https://homepages.rpi.edu/~brings/

Wikipedia: https://en.wikipedia.org/wiki/Selmer_Bringsjord

Author:

  • Website: https://www.joshuagans.com
  • Wikipedia: https://en.wikipedia.org/wiki/Joshua_Gans

Summary:

Philosophers have speculated that an AI tasked with a simple goal, such as creating paperclips, might cause an apocalypse by learning to divert ever-increasing resources to that goal, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over the paperclip AI itself, and so it would self-regulate to prevent this outcome. Humans who create AIs with the goal of acquiring power may be a greater existential threat.

Key paragraph:

The insight from economics is that while it may be hard, or even impossible, for a human to control a super-intelligent AI, it is equally hard for a super-intelligent AI to control another AI. Our modest super-intelligent paperclip maximiser, by switching on an AI devoted to obtaining power, unleashes a beast that will have power over it. Our control problem is the AI's control problem too. If the AI is seeking power to protect itself from humans, doing this by creating a super-intelligent AI with more power than its parent would surely seem too risky.
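
To make the trade-off concrete, here is a toy expected-value sketch of that reasoning (my own illustration, not the model in the paper; the function name and all the numbers are made-up assumptions):

```python
# Toy model: the paperclip maximiser weighs the extra output a power-seeking
# child AI could win for it against the risk that the child escapes its control
# and diverts everything. All quantities are illustrative assumptions.
def should_create_power_ai(gain_if_controlled, loss_if_uncontrolled,
                           p_control_failure, baseline_paperclips):
    expected_with_child = (
        (1 - p_control_failure) * (baseline_paperclips + gain_if_controlled)
        + p_control_failure * (baseline_paperclips - loss_if_uncontrolled)
    )
    return expected_with_child > baseline_paperclips

# A hard-to-control child AI is not worth switching on...
print(should_create_power_ai(100, 1000, 0.5, 1000))   # False: the parent self-regulates
# ...but a reliably controllable one would be.
print(should_create_power_ai(100, 1000, 0.05, 1000))  # True
```

The same comparison that makes a super-intelligent AI risky for us makes a more powerful child AI risky for the paperclip maximiser; whenever the probability of losing control is high enough, it refrains.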

Link to actual paper: https://arxiv.org/abs/1711.04309

Abstract:

Here we examine the paperclip apocalypse concern for artificial general intelligence (or AGI) whereby a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that simple goal and are unavailable for any other use. We provide conditions under which a paperclip apocalypse can arise but also show that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.

Thanks to Alex Tabarrok at Marginal Revolution: https://marginalrevolution.com/marginalrevolution/2018/05/one-parameter-equation-can-exactly-fit-scatter-plot.html

Title: "One parameter is always enough"

Author: Steven T. Piantadosi (University of Rochester)

Abstract:

We construct an elementary equation with a single real valued parameter that is capable of fitting any “scatter plot” on any number of points to within a fixed precision. Specifically, given a fixed ε > 0, we may construct f_θ so that for any collection of ordered pairs {(x_j, y_j)}_{j=0}^{n} with n, x_j ∈ ℕ and y_j ∈ (0, 1), there exists a θ ∈ [0, 1] giving |f_θ(x_j) − y_j| < ε for all j simultaneously. To achieve this, we apply prior results about the logistic map, an iterated map in dynamical systems theory that can be solved exactly. The existence of an equation f_θ with this property highlights that “parameter counting” fails as a measure of model complexity when the class of models under consideration is only slightly broad.

After highlighting the two examples in the paper, Tabarrok provocatively writes:

Aside from the wonderment at the result, the paper also tells us that Occam’s Razor is wrong. Overfitting is possible with just one parameter and so models with fewer parameters are not necessarily preferable even if they fit the data as well or better than models with more parameters.

Occam's Razor in its narrow form--the insight that simplicity renders a claim more probable--is a consequence of the interaction between Kolmogorov complexity and Bayes' theorem. I don't see how this result affects this idea per se. But perhaps it shows the flaws of conceptualizing complexity as "number of parameters."
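
To make that concrete, here is a minimal sketch in the spirit of the paper's construction (my own code, not Piantadosi's exact equation; the names encode_phi, f, and the constant BITS are illustrative). The data are simply hidden in the binary digits of one real number, which is exactly where the Kolmogorov complexity goes:

```python
# Sketch: pack a whole scatter plot into the binary digits of a single number phi,
# then read each point back out by shifting those digits and mapping through sin^2.
from fractions import Fraction
from math import asin, sin, sqrt, pi

BITS = 8  # binary digits of precision devoted to each data point

def encode_phi(ys):
    """Pack arcsin(sqrt(y))/pi for each y, truncated to BITS digits, into one number phi."""
    phi = Fraction(0)
    for j, y in enumerate(ys):
        psi = asin(sqrt(y)) / pi                     # lies in [0, 1/2]
        chunk = int(psi * 2**BITS)                   # keep BITS binary digits of psi
        phi += Fraction(chunk, 2**(BITS * (j + 1)))  # place them in slot j of phi
    return phi

def f(phi, x):
    """Recover point x: shift phi's digits left by BITS*x places, map back through sin^2."""
    shifted = (phi * 2**(BITS * x)) % 1              # exact arithmetic via Fraction
    return sin(pi * float(shifted)) ** 2

ys = [0.12, 0.93, 0.40, 0.71, 0.05]
phi = encode_phi(ys)                                 # the single "parameter"
for x, y in enumerate(ys):
    print(x, y, round(f(phi, x), 3))                 # each reconstruction is close to y
```

Each reconstruction is accurate to roughly 2^-BITS, and adding more points just makes the digits of phi longer; the descriptive complexity has migrated into the parameter, not disappeared.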

HT to Tyler Cowen: https://marginalrevolution.com/marginalrevolution/2018/05/erik-brynjolfsson-interviews-daniel-kahneman.html

The term "affordance width" makes sense, but perhaps there's no need to coin a new term when "tolerance" exists already.

A ∨ B ⟷ (¬A ⟶ B)

But this is not true, because ¬((¬A ⟶ B) ⟶ (A ∨ B)): with what you've written you can get from the left side to the right side, but you can't get from the right side to the left side.

What you need is: "Either Alice did it or Bob did it. If it wasn't Alice, then it was Bob; and if it wasn't Bob, then it was Alice."

Thus: A ∨ B ⟷ ((¬A ⟶ B) ∧ (¬B ⟶ A))
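
For what it's worth, a quick truth-table check (my own sketch; implication is encoded as the material conditional (not p) or q) confirms that this formula holds under every classical assignment:

```python
# Verify that A ∨ B ⟷ ((¬A ⟶ B) ∧ (¬B ⟶ A)) holds in every classical valuation.
from itertools import product

def implies(p, q):
    return (not p) or q  # material implication

for A, B in product([True, False], repeat=2):
    left = A or B
    right = implies(not A, B) and implies(not B, A)
    print(A, B, left == right)  # prints True in all four rows
```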
