Dual N-Back browser-based "game" in public alpha-testing state.

3 Logos01 10 July 2012 03:36AM

Link.

 

Found here: http://www.reddit.com/r/cogsci/comments/wb44q/my_dual_nback_browsergame_is_ready_for/

 

From the author:

 Quick notes:

  • If you experience any technical problems running the game, please let me know what browser and OS you're using.
  • If you're unfamiliar with what Dual N-Back is, this is a good place to start reading.
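For readers who want the mechanics in a nutshell: each trial of dual n-back presents two stimuli at once (a grid position and a spoken letter), and the player must flag when either stimulus matches the one shown n trials earlier. A minimal sketch of that scoring rule, under assumed stimulus sets (a 3x3 grid and an arbitrary letter pool — the actual game may differ):

```python
import random

def dual_nback_round(trials, n, seed=0):
    """Simulate one round of dual n-back: each trial presents a grid
    position and a letter; a trial is a 'target' in a modality when it
    matches the stimulus shown n trials earlier in that modality."""
    rng = random.Random(seed)
    positions = [rng.randrange(9) for _ in range(trials)]    # 3x3 grid cells 0-8
    letters = [rng.choice("CHKLQRST") for _ in range(trials)]  # assumed letter pool
    targets = []
    for i in range(n, trials):
        pos_match = positions[i] == positions[i - n]
        letter_match = letters[i] == letters[i - n]
        targets.append((i, pos_match, letter_match))
    return positions, letters, targets
```

Raising n forces the player to hold more of the recent stimulus stream in working memory, which is the point of the exercise.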

 

 

Visual maps of the historical arguments in the topic, "Can computers think?"

4 Logos01 18 April 2012 12:55AM

Al Jazeera: "Engineering Human Evolution" -- 35m41s YouTube video.

0 Logos01 29 March 2012 03:00AM

Link here.

A direct link to the video on the Al Jazeera website is not included.

A brief synopsis from the Al Jazeera website:

Cyborgs, brain uploads and immortality - How far should science go in helping humans exceed their biological limitations? These ideas might sound like science fiction, but proponents of a movement known as transhumanism believe they are inevitable.

In this episode of The Stream, we talk to bioethicist George Dvorsky; Robin Hanson, a research associate with Oxford’s Future of Humanity Institute; and Ari N. Schulman, senior editor of The New Atlantis, about the ethical implications of transhumanism.

 

Discuss below.

"Nice Guys Finish First" - YouTube video of selected readings (by Dawkins) from The Selfish Gene

5 Logos01 17 March 2012 01:33AM

Check it out here.

Brief summary: Dawkins demonstrates a classic "Prisoner's Dilemma AI tournament". No big surprise to us today, but at the time the revelation that Tit for Tat is one of the most effective strategies -- if not *the* most effective -- was a surprising result.  He goes on to show animals employing the Tit for Tat strategy.  Dispositions of generosity, backed by vengefulness, appear to be strongly selected for.
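For anyone who hasn't seen such a tournament, here is a minimal sketch of an iterated Prisoner's Dilemma match using the standard Axelrod-style payoffs (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a lone defector). The strategy functions and round count are illustrative choices, not the actual tournament entries:

```python
# Standard payoff matrix: (my points, opponent's points) per outcome.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play_match(strat_a, strat_b, rounds=200):
    """Play an iterated Prisoner's Dilemma between two strategies.
    Each strategy sees (own_history, opponent_history) and returns 'C' or 'D'."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(own, opp):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opp else opp[-1]

def always_defect(own, opp):
    return "D"
```

Against `always_defect`, Tit for Tat loses only the first round and then matches defection with defection -- it is "nice" (never defects first) but "provokable", which is exactly the generosity-plus-vengefulness combination the summary describes.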

A response to "Torture vs. Dustspeck": The Ones Who Walk Away From Omelas

-4 Logos01 30 November 2011 03:34AM

For those not familiar with the topic, Torture vs. Dustspecks asks the question: "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"

 

Most of the discussion I have seen on the topic adopts one of two assumptions in deriving its answer. The first, which I think of as the 'linear additive' answer, holds that torture is the proper choice for the utilitarian consequentialist: a single person can only suffer so much over a fifty-year window, compared with the incomprehensible number of individuals who each suffer only minutely. The second, the 'logarithmically additive' answer, inverts this on the grounds that forms of suffering are not equal and cannot be summed as simple 'units'.
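The two positions can be made concrete with a toy calculation. Every number below is an arbitrary stand-in (3^^^3 is far too large to represent, so a merely huge N is used), and `log1p` is just one possible sublinear aggregator standing in for the 'logarithmic' view:

```python
import math

def linear_total(per_person, n):
    # 'Linear additive' view: total disutility is the plain sum.
    return per_person * n

def log_total(per_person, n):
    # One sublinear stand-in for the 'logarithmically additive' view:
    # each additional sufferer adds diminishing marginal disutility.
    return per_person * math.log1p(n)

TORTURE = 1e9   # stand-in disutility of fifty years of torture
SPECK = 1e-6    # stand-in disutility of one dust speck
N = 10**30      # stand-in for 3^^^3 (the real number is vastly larger)

linear = linear_total(SPECK, N)  # 1e24, dwarfs TORTURE: choose torture
loggy = log_total(SPECK, N)      # about 6.9e-5, negligible: choose specks
```

The point is not the particular numbers but that the two aggregation rules flip the verdict: under linear addition the specks dominate for any sufficiently large N, while under any sufficiently sublinear rule they never do.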

What I have never yet seen is something akin to the notion expressed in Ursula K. Le Guin's The Ones Who Walk Away From Omelas. If you haven't read it, I won't spoil it for you.

I believe that any metric of consequence which accounts only for suffering when choosing between "torture" and "dust specks" misses the point. There are consequences to such a choice that extend beyond the suffering inflicted: moral responsibility, the standards of behavior that either choice makes acceptable, and so on. Any solution that ignores these elements might be useful in revealing one's views about the nature of cumulative suffering, but beyond that it is of no value in making practical decisions. It cannot be, because 'consequence' extends beyond the mere instantiation of a given choice -- the exact pain inflicted by either scenario -- into the kind of society that such a choice would produce.

While I myself tend towards the 'logarithmic' rather than the 'linear' additive view of suffering, even if I stipulate the linear view I still cannot accept the conclusion of torture over dust specks, for the same reason I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture and society that would permit such torture to exist. To single out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices, and this violates the principle of individual self-determination -- a principle the Less Wrong community has spent a great deal of time trying to incorporate into Friendliness solutions for AGI. We as a society already implement something weakly analogous, economically: we accept taxing everyone, even according to a graduated scheme. What we do not accept is enslaving 20% of the population to provide for the needs of the State.

If there is a flaw in my reasoning here, please enlighten me.

[Infographic] A reminder as to how far the rationality waterline can climb (at least, for the US).

8 Logos01 22 November 2011 12:44PM

[Link] Active tactile exploration using a brain–machine–brain interface

1 Logos01 02 November 2011 10:38AM

ICMS (intracortical microstimulation) was successfully demonstrated as a means of inducing artificial sensory stimulation in rhesus monkeys. (This is a significant -- albeit minuscule -- step forward for data-in brain-computer interfaces.)

 


Introduction: "Acrohumanity"

-8 Logos01 25 October 2011 09:48AM

Greetings, fellow LessWrongians.

What follows is an as-yet poorly formed notion on my part, which I am relating in an attempt to get at the meat of it and perhaps contribute to the higher-order goal of becoming a better rationalist myself. As such, I will restrict my replies in the comments to explanations of points of fact, or of my own opinions if directly requested, but otherwise will not argue any particulars for purposes of persuasion.

For a few years now a general notion -- what originally led me to discover the LessWrong site itself, in fact -- has rattled around in my brain, for which I only today derived a sufficiently satisfactory label: "acrohumanity". This is a direct analogue to "posthuman" and "transhuman"; 'acro-' is a prefix meaning, essentially, "highest". A strictly minimal definition of the term could thus be "the highest of the human condition", or "the pinnacle of humanity".

In brief, I describe acrohumanity as the state of achieving the maximum optimization of the human condition and capabilities *by* an arbitrary person that is available *to* that person. I intentionally refrain from defining what form that optimization takes; my own intuitions and opinions on the topic, as a life-long transhumanist and currently aspiring rationalist, tend towards mental conditioning and the improvement of thought, memory, and perception. "Acrohumanism", then, would be the belief in, practice of, and advocacy of achieving or approaching acrohumanity, in much the same way that transhumanism is the belief in or advocacy of achieving transhuman conditions. (In fact, I tend to associate the two terms, at least personally; what interests me *most* about transhumanism is achieving greater capacity for thought, recollection, and awareness than is humanly possible today.)

Instrumental rationality is thus a core component of any approach to the acrohuman condition. But while it is necessary, focusing solely on one's capabilities as a rationalist is not sufficient. Other avenues of self-optimization also bear investigation. The simplest and most widely practiced is physical exercise, which does little to improve one's rationality; if one's primary goal is simply to become a better rationalist, exercise does little to nothing to advance it. But if one's goal is to "optimize yourself, in general, to the limits available", exercise is just as essential as instrumental rationality.

Further examples of a more cognitive nature include techniques for improving recollection. Mnemotechnics has existed long enough that many cultures developed their own variants of it before they even developed a written language. It occurs to me that developing mnemotechnical skill would be convergent with becoming a better rationalist, by making it easier to recall the various biases and heuristics we utilize across a broader array of contexts. Still another, also cognitive in nature, is developing skill and practice in meditative reflection. While there is a great deal of what Michael Shermer calls "woo" around meditation, the simple truth is that it is an effective tool for metacognition. My own history with meditative practice originated in my early teens with martial arts training, which I later extended into basic biofeedback while coping with chronic pain. I quickly found that the same skills had a wide array of applications, from coping with various stimuli to handling other physiological symptoms or indulging specific senses.

Taken as an aggregate, an individual with strong skill in biofeedback, a history of rigorous exercise and physical health, and skill and knowledge in instrumental rationality, mnemotechnics, and metacognition -- and, through metacognition, strong influence over his own emotional states (note: as I myself am male, I am in the habit of using masculine pronouns as gender-neutrals) -- is relatively far from what constitutes my personal image of the baseline 'average human'. And I am certain there are other techniques or skillsets one might add to this grab-bag of tools for improving one's overall capabilities as a person -- none of them individually exceeding what is humanly possible, but impressively approaching those limits when taken as a whole.

I believe this is a topic that bears greater investigation, and as such I am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation -- for others, and for myself.