Comment author: Doug_S. 18 December 2008 02:38:47AM 1 point

The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce's motto is "Better Living Through Chemistry".

Well, it's definitely better than the alternative. We don't necessarily want to build Jupiter-sized blobs of orgasmium, but getting rid of misery would be a big step in the right direction. Pleasure and happiness aren't always good, but misery and pain are almost always bad. Getting rid of most misery seems like a necessary, but not sufficient, condition for Paradise.

I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.

You know, I wouldn't be surprised, considering that you can fit most of physics on a T-shirt. (Isn't God written in Lisp, though?)

Comment author: SecondWind 16 May 2013 06:30:03AM 1 point

Twenty lines of close paren.

Comment author: vroman 19 March 2009 01:57:01AM 1 point

I read and understood the Least Convenient Possible World post. Given that, let me rephrase your scenario slightly:

If every winner of a certain lottery receives $X * 300 million, a ticket costs $X, the chances of winning are 1 in 250 million, you can only buy one ticket, and $X represents an amount of money you would be uncomfortable losing, would you buy that ticket?

Answer: no. If the ticket price crosses a certain threshold, I become risk-averse. If it were $1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.
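
A quick sanity check of the arithmetic in this scenario, using only the figures given above:

```python
# Expected value of one ticket, in units of the ticket price X.
# Figures from the scenario: payout = 300 million * X,
# odds of winning = 1 in 250 million.
p_win = 1 / 250_000_000
payout = 300_000_000        # in units of X
ev = p_win * payout - 1     # subtract the cost of the ticket itself
print(ev)                   # 0.2: each ticket is worth +0.2X on average,
                            # yet loses with probability ~0.999999996
```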

Comment author: SecondWind 07 May 2013 03:04:38PM 0 points

If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy a ticket. And then rationally compelled to buy a ticket.

Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that $1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.
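
To see where the iteration leads, here is a toy calculation assuming a hypothetical $100,000 bankroll spent a dollar at a time (the bankroll figure is invented for illustration):

```python
# Each $1 ticket has positive expected value, but following the rule
# "always buy a +EV ticket" until the money runs out almost surely
# just leaves you broke.
p_win = 1 / 250_000_000
tickets = 100_000                        # hypothetical bankroll, in $1 tickets
p_any_win = 1 - (1 - p_win) ** tickets
print(p_any_win)                         # ~0.0004: a ~99.96% chance of losing
                                         # everything, though every purchase was +EV
```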

Comment author: Caledonian2 26 February 2008 01:00:08AM 4 points

The ability to endure cognitive dissonance long enough to find the resolution to the dissonance, rather than just short-circuiting to something that makes no sense but offers relief from the strain, is a necessary precondition for rational thought.

I don't think it can be cultivated, and I don't think there's a substitute. Either you pass through the gauntlet, or you don't.

Comment author: SecondWind 02 May 2013 02:27:37AM 0 points

Couldn't you start with easier cognitive dissonances, and work your way up?

In response to Timeless Control
Comment author: SecondWind 28 April 2013 05:17:24PM 0 points

For "I did A but could have done otherwise" I see two coherent meanings:

1) My mind produced A from the local conditions, but a conceivable different mind with otherwise identical local conditions would have produced not-A. My mind is therefore a crucial causal factor in the reality of A.

OR

2) From my limited knowledge, I cannot trace the causal steps to A that precede my decision well enough to determine, from those steps alone, the decision I make which leads to A.

...actually, probably both.

So the causal steps to A include my decision (and A is inconsistent with certain decisions that differ from my real one), but I cannot trace the causal steps of my decision precisely enough to have precluded those differing decisions (without already knowing the reality of my decision).

Alternatively: if we work from full knowledge of the causal path to A, except that we treat my cognition as a black box whose outcome we don't know, we could not conclude A even with unlimited processing power.
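
To make that last framing concrete, here is a toy sketch (the names and structure are invented for illustration): everything in the causal path is transparent except one subroutine, and no amount of processing power applied to the transparent parts determines the result without evaluating that subroutine.

```python
def causal_path_to_A(local_conditions, my_mind):
    """All of this function is transparent to the predictor -- except
    `my_mind`, which we treat as a black box. Unlimited processing
    power applied to the transparent parts still cannot determine the
    return value without actually running the black box."""
    decision = my_mind(local_conditions)   # the step we cannot trace
    return "A" if decision == "do it" else "not A"

# With the black box specified, the outcome follows; without it, it doesn't.
print(causal_path_to_A({}, lambda conditions: "do it"))   # -> "A"
```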

Comment author: SecondWind 27 April 2013 11:51:14PM 7 points

'Free will' is the halting point in the recursion of mental self-modeling.

Our minds model minds, and may model those minds' models of minds, but cannot model an unlimited sequence of models of minds. At some point it must end on a model that does not attempt to model itself; a model that just acts without explanation. No matter how many resources we commit to ever-deeper models of models, we always end with a black box. So our intuition assumes the black box to be a fundamental feature of our minds, and not merely our failure to model them perfectly.

This explains why we rarely assume animals to share the same feature of free will, as we do not generally treat their minds as containing deep models of others' minds. And, if we are particularly egocentric, we may not consider other human beings to share the same feature of free will, as we likewise assume their cognition to be fully comprehensible within our own.

...d-do I get the prize?
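
A toy sketch of the recursion described above (purely illustrative; `depth` stands in for whatever resources are available for self-modeling):

```python
def model_mind(depth):
    """Model a mind that models minds, down to a fixed resource limit.
    However large `depth` is, the recursion must bottom out in a model
    that doesn't model itself -- the 'black box' that the intuition of
    free will gets attached to."""
    if depth == 0:
        return "black box: acts without explanation"
    return {"a mind that models": model_mind(depth - 1)}

print(model_mind(3))
```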

Comment author: Peter_Turney 08 July 2008 04:06:24PM 8 points

And if you're allowed to end in something assumed-without-justification, then why aren't you allowed to assume anything without justification?

I address this question in Incremental Doubt. Briefly, the answer is that we use a background of assumptions in order to inspect a foreground belief that is the current focus of our attention. The foreground is justified (if possible) by referring to the background (and by doing some experiments, using background tools to design and execute them). There is a risk that incorrect background beliefs will "lock in" an incorrect foreground belief, but this process of "incremental doubt" will make progress if we can chop our beliefs up into relatively independent chunks and continuously expose various beliefs to focused doubt, one or a few at a time.

This is exactly like biological evolution, which mutates a few genes at a time. There is a risk that genes will get "locked in" to a local optimum, and indeed this happens occasionally, but evolution usually finds a way to get over the hump.
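
A minimal sketch of this process, assuming an invented numeric "fitness" score as a stand-in for how well the whole belief system performs; it illustrates the analogy, not Turney's actual procedure:

```python
import random

def incremental_doubt(beliefs, fitness, steps=10_000):
    """Coordinate-wise revision: doubt one 'belief' at a time against
    the fixed background of the others, keeping a revision only if the
    system as a whole performs at least as well."""
    beliefs = list(beliefs)
    for _ in range(steps):
        i = random.randrange(len(beliefs))     # the foreground belief
        candidate = list(beliefs)
        candidate[i] += random.gauss(0, 0.1)   # a small proposed revision
        if fitness(candidate) >= fitness(beliefs):
            beliefs = candidate                # the revision survives doubt
    return beliefs

# Example: the 'truth' is the vector (1, 2, 3); fitness is closeness to it.
truth = [1.0, 2.0, 3.0]
fitness = lambda b: -sum((x - t) ** 2 for x, t in zip(b, truth))
print(incremental_doubt([0.0, 0.0, 0.0], fitness))   # converges near the truth
```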

Should I trust Occam's Razor? Well, how well does (any particular version of) Occam's Razor seem to work in practice?

This is the right question. A problem is that alongside the informal concept of Occam's Razor there are several formalizations of it, and the informal and formal versions should be carefully distinguished. Some researchers use the apparent success of the informal concept in daily life as an argument to support a particular formal concept in some computational task. This assumes both that the particular formalization captures the essence of the informal concept, and that we can trust what introspection tells us about the informal concept's success. I doubt both assumptions. The proper way to validate a particular formalization of Occam's Razor is to apply it to some computational task and evaluate its performance. Appeal to intuition is not a substitute for experiment.
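
A minimal example of the kind of experiment this calls for, with invented data: take one crude formalization of the razor (prefer lower polynomial degree), apply it to a curve-fitting task, and measure performance rather than appealing to intuition.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(0, 0.1, size=x.size)   # the true law is linear

x_new = np.linspace(0, 1, 100)                # held-out points
y_new = 2 * x_new

for degree in (1, 4, 9):
    coeffs = np.polyfit(x, y, degree)
    fit_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    gen_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(degree, round(fit_err, 4), round(gen_err, 4))
# Fit error always falls as degree rises, but generalization error
# typically rises past the true complexity: the razor's usefulness
# here is measured, not assumed.
```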

At present, I start going around in a loop at the point where I explain, "I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct."

It seems to me that this quote, where it mentions "simple", must be talking about the informal concept of Occam's Razor. If so, then it seems reasonable to me. But formalizations of Occam's Razor still require experimental evidence.

The question is, what is the scope of the claims in this quote? Is the scope limited to how I should personally decide what to believe, or does it extend to what algorithms I should employ in my AI research? I am willing to apply my informal concept of Occam's Razor to my own thinking without further evidence (in fact, it seems that it isn't entirely under my control), but I require experimental evidence when, as a scientist, I use a particular formalization of Occam's Razor in an AI algorithm (if it seems important, given the focus of the research; is simplicity in the foreground or the background?).

Comment author: SecondWind 27 April 2013 07:00:19PM 2 points

By examining our cognitive pieces (techniques, beliefs, etc.) one at a time in light of the others, we check not for adherence of our map to the territory but rather for the map's self-consistency.

This would appear to be the best an algorithm can do from the inside. Self-consistent may not mean true, but it does mean it can't find anything wrong with itself. (Of course, if your algorithm relies on observational inputs, there should be a theoretical set of observations which would break its self-consistency and thus force further reflection.)

Comment author: Tom_McCabe2 20 October 2007 10:02:53PM 11 points

"Would any commenters care to mug Tiiba? I can't quite bring myself to do it, but it needs doing."

If you don't donate $5 to SIAI, some random guy in China will die of a heart attack because we couldn't build FAI fast enough. Please donate today.

Comment author: SecondWind 26 April 2013 11:11:53PM 2 points

That's not a proper mugging.

"If you don't donate $5 to SIAI, the entire multiverse will be paperclip'd because we couldn't build FAI before uFAI took over."

Comment author: Aaron6 12 September 2008 03:12:44AM 4 points

Constant: with dogs, you can point to examples and say "these animals, and animals closely related to these, are dogs".

Comment author: SecondWind 17 April 2013 06:34:55AM 2 points

...whereas with vampires, you're stuck pointing to a collection of fictional representations. This restricts certain information-gathering techniques (you can't put a vampire under a microscope; at best, you can use a fictional account of a vampire under a microscope) but shouldn't make the exercise impossible. I'm pretty sure we could convey 'stop sign' without ever letting you observe a real-life stop sign.

Comment author: pnrjulius 09 April 2012 06:13:38AM 0 points

CliffsNotes (and SparkNotes) and the like are really spectacular at teaching you the kinds of things that literature teachers want to hear parroted back on tests. They aren't good at teaching you genuine understanding of literature, but that's not what's being tested for anyway.

Comment author: SecondWind 12 January 2013 03:16:25AM 0 points

Literature in English class generally serves as reading practice, and as an odd excuse to practice composing thoughts for other people to read. Literature is the vehicle rather than the purpose, unless you're looking at a literature degree.

I'm curious how one would test an understanding of literature, and what purpose such an understanding serves. Intuitively, a person well-versed in literature should be better equipped to write or recommend fiction than one who is not. Is there another benefit one might test for?

In response to comment by [deleted] on Second-Order Logic: The Controversy
Comment author: Larks 04 January 2013 11:19:59PM 5 points

Well, that's what the anti-ultrafinitists say. It is precisely the contention of the ultrafinitists that you couldn't "count to 3^^^3", whatever that might mean.

Comment author: SecondWind 05 January 2013 02:55:47AM 0 points

Hmm.

So it's not sufficient to define a set of steps that determine a number; it must also be possible to execute them? That's a rather pragmatic approach, albeit one you'd have to keep updating if our power to compute and comprehend lengthier series of steps grows faster than you predict.
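
For concreteness, the "set of steps" at issue fits in a few lines of Python; the ultrafinitist point is that writing the steps down is not the same as being able to execute them. This is a sketch of Knuth's up-arrow recursion, which defines 3^^^3:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a ↑^n b, defined by recursion.
    up_arrow(3, 1, 3) = 3^3 = 27;
    up_arrow(3, 2, 3) = 3^^3 = 3^(3^3) = 3^27 = 7625597484987;
    up_arrow(3, 3, 3) = 3^^^3 is a tower of 3s about 7.6 trillion high,
    defined by these same few lines, yet no physically possible
    computation will ever evaluate it."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 27 -- fine
print(up_arrow(3, 2, 3))   # 7625597484987 -- still fine
# up_arrow(3, 3, 3)        # 3^^^3: don't wait up
```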
