Comment author: kithpendragon 09 February 2016 01:25:24PM 7 points [-]

...let's start with a little thought experiment...

The two cases are non-analogous. Grooves in a phonograph record are not designed to be read by a human. Perhaps a better analogy would be reading sheet music, but most people are not trained to do that either. The reason people show such a strong preference in the latter case is that most people will get nothing at all from the record (or sheet music, for that matter).

just because some people can't see colors doesn't mean that colors aren't real. The same is true for spiritual experiences.

This is a truism. Moreover, it is often argued that colors, flavors, &c. are of the map, not of the territory. If this is the case, colors may not be "real", even if the experience of colors is.

...one cannot render into words the subjective experience...

The attempt to losslessly transmit a complete subjective experience would be futile, although I've read some poets who took a good stab at it. Experience is one of the media that make up the map. Two people, given exactly the same stimulus, would have two different subjective experiences. It would certainly be easier to compare similar experiences with a similar reference frame, but it is far from impossible to transmit one, even if some of the nuance is necessarily lost.

Finally, religiosity and spirituality are neither identical concepts nor even close synonyms, though they are treated as synonymous in the post. If you could define the two as you intend for us to read them, it might be less confusing.

Comment author: Brillyant 28 January 2016 04:07:22PM 0 points [-]

The probability of getting some heads/tails sequence is near 1 (because the coin could, in principle, land on its edge). The probability of predicting said sequence beforehand is extremely low.

The probability of someone winning the lottery is X, where X = the % of the possible ticket combinations sold. The probability of you winning the lottery with a particular set of numbers is extremely low.
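The arithmetic above can be sketched directly. A short illustration, assuming a fair coin and a 6-of-49 lottery format (my example; real lotteries vary):

```python
from math import comb

# Probability of correctly predicting a specific sequence of n fair coin
# flips in advance (ignoring the negligible chance of landing on an edge):
def p_predict_flips(n: int) -> float:
    return 0.5 ** n

# Probability that one ticket wins a 6-of-49 lottery (an assumed example
# format, not taken from the comment):
def p_one_ticket_6_of_49() -> float:
    return 1 / comb(49, 6)

print(p_predict_flips(20))      # about 1 in 1.05 million
print(p_one_ticket_6_of_49())   # about 1 in 13.98 million
```

Predicting just 24 flips in advance is already less likely than winning this lottery with a single ticket, which matches the comparison being drawn.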

As far as we can tell, and with the exception of the Old Testament heroes, the probability of someone living to be 500 years old is much lower than winning most lotteries or predicting a certain high number of coin flips, though I suppose a smart-ass could devise some exceptions to either. We'd have to better define "vampire" to arrive at a probability for that bit.

A house being haunted by real ghosts is actually extremely probable, depending on the neighborhood.

Comment author: kithpendragon 31 January 2016 05:13:24PM *  0 points [-]

This is the explanation closest to what I was thinking beforehand. The problem seems to be the difference between {the difficulty of predicting an event} and {the likelihood of correctly reporting an observed event}. I think Dagon's argument about Map vs. Territory is a good one too.

Question for you, though... how do you define "ghost"? I have a feeling your definition is different from mine, because I find events such as

certain environmental factors (low-level poisoning from radon, carbon monoxide, et al.; certain acoustic effects; certain architectural events such as uneven expansion due to temperature changes; &c.) cause minor hallucinations or illusions, resulting in supersocial minds like those of humans perceiving "people" where there are none

very much more likely than

something of a person exists independent of the usual corporeal form, (typically) persists despite the loss of that form, and is detectable by an uninformed and objective observer.

EDIT: formatted for better readability

Comment author: kithpendragon 20 January 2016 02:10:06PM 2 points [-]

I honestly assumed that most AIs would probably have hardware access to a math co-processor of some kind. After all, humans are pretty awesome at arithmetic if you interpret calculator use as an analog to that kind of setup. No need for the mind to even understand what is going on at the hardware level. As long as it understands what the output represents, it can just depend on the module provided to it.
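The delegation idea above can be sketched in a few lines. A toy sketch (all names are my own invention, not an actual AI architecture): the "mind" never implements arithmetic itself and knows nothing about the module's internals; it only trusts the module's declared interface, much as a person trusts a calculator.

```python
class MathCoprocessor:
    """Stands in for dedicated hardware; internals are opaque to the mind."""
    def add(self, a: float, b: float) -> float:
        return a + b

class Mind:
    def __init__(self, math_module: MathCoprocessor):
        # The mind depends only on the interface, not the implementation.
        self.math = math_module

    def total_cost(self, prices):
        total = 0.0
        for p in prices:
            # Every arithmetic operation is delegated to the module.
            total = self.math.add(total, p)
        return total

print(Mind(MathCoprocessor()).total_cost([1.5, 2.25, 3.0]))  # 6.75
```

The point of the sketch is that `Mind` only needs to understand what the module's output represents; the co-processor could be swapped for real hardware without the mind changing at all.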

Comment author: casebash 09 January 2016 11:54:18AM 1 point [-]

You'll be pleased to know that I found a style of indicating edits that I'm happy with. I realised that if I make the word "edited" subscript, then it is much less obnoxious, so I'll be using this technique on future posts.

Comment author: kithpendragon 14 January 2016 08:32:32PM 0 points [-]

That sounds like it will be much easier to read. Thank you for following up!

Comment author: gjm 07 January 2016 01:44:12PM 9 points [-]

You still have experiences while you are asleep

During some periods of sleep. So far as I am aware, in deep sleep there's no reason to think you are having any experiences at all.

Anyway, for those who don't object to thought experiments: imagine that there's some machine that completely suspends all your brain activity for five minutes, after which it continues from exactly its previous state. Are you the same person after as before? If you answer yes to this -- which I bet almost everyone does -- then the implications are the same as those you'd get from sleep involving a complete cessation of consciousness.

Comment author: kithpendragon 08 January 2016 10:16:48AM 0 points [-]

Are you the same person as before?

I expect I will do the same things for the same reasons as before. Or, to put it another way, I do not expect a brief interruption in my input/output patterns to significantly affect my input/output patterns in the future. Even less so than if they had not been interrupted and I had been allowed to have an experience of the same duration, now that I think about it.

I choose not to comment on the concept of "sameness" as it applies to "person", however, without some rigorous definitions. Ship of Theseus and all that.

Comment author: casebash 06 January 2016 12:23:20PM 0 points [-]

So an agent that chooses only 1 utility could still be a perfectly rational agent in your books?

Comment author: kithpendragon 06 January 2016 12:49:43PM *  0 points [-]

Might be. Maybe that agent's utility function is actually bounded at 1 (it's not trying to maximize, after all). Perhaps it wants 100 utility, but already has firm plans to get the other 99. Maybe it chose a stopping value at random from the positive reals, drawn from a distribution with a tail heavy enough that the expected value diverges, and pre-committed to the result, thus guaranteeing a stopping condition with unbounded expected return. Since it was missing out on unbounded utility in any case, getting literally any is better than none, but the difference between x and y is not really interesting.

(humorously) Maybe it just has better things to do than measuring its *ahem* stopping function against the other agents.
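The random pre-commitment idea can be made concrete. A sketch using an example distribution of my own choosing (P(N = n) proportional to 1/n², not a distribution named in the comment): the probabilities sum to 1, so the agent is guaranteed to stop, yet the expected stopping value diverges.

```python
from math import pi

# Normalizing constant: sum of 1/n^2 over all n >= 1 is pi^2 / 6,
# so P(N = n) = (1/Z) * (1/n^2) is a proper distribution.
Z = pi ** 2 / 6

def truncated_expectation(M: int) -> float:
    # E[N restricted to N <= M] = (1/Z) * sum_{n<=M} n * (1/n^2) = H_M / Z,
    # where H_M is the M-th harmonic number. This grows like ln(M) / Z,
    # so the full expectation is infinite even though stopping is certain.
    return sum(1 / n for n in range(1, M + 1)) / Z

for M in (10, 1_000, 100_000):
    print(M, truncated_expectation(M))  # grows without bound as M increases
```

This is the sense in which an agent can guarantee that it stops while still having unbounded expected return: certainty of stopping and a finite expected value are independent properties.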

Comment author: casebash 06 January 2016 11:55:14AM 0 points [-]

And it would still get beaten by a more rational agent, which would be beaten by a still more rational agent, and so on ad infinitum. There's a non-terminating sequence of increasingly rational agents, but no final "most rational" agent.

Comment author: kithpendragon 06 January 2016 12:19:05PM 0 points [-]

If the PRA isn't trying to "maximize" an unbounded function, it can't very well get "beaten" by another agent who chooses x+n, because they didn't have the same goal. I therefore reject the claim that an agent that obeys its stopping function in an unbounded scenario may be called any more or less "rational" on that basis alone than any other agent that does the same, regardless of the utility it may have left uncollected.

By removing all constraints, you have made comparing results meaningless.

Comment author: casebash 06 January 2016 11:42:50AM 0 points [-]

Exactly: if you accept the definition of a perfectly rational agent as a perfect utility maximiser, then there's no perfect utility maximiser, as there's always another agent that obtains more utility, so there is no perfectly rational agent. I don't think that this is a particularly unusual way of using the term "perfectly rational agent".

Comment author: kithpendragon 06 January 2016 11:51:28AM 0 points [-]

In this context, I do not accept that definition: you cannot maximize an unbounded function. A Perfectly Rational Agent would know that.

Comment author: casebash 06 January 2016 11:30:02AM *  0 points [-]

It's been like that from the start. EDIT: I only added in extra clarification.

Comment author: kithpendragon 06 January 2016 11:42:05AM 0 points [-]

I certainly make no claims about the perfect quality of my memory. ;)

Comment author: casebash 06 January 2016 11:23:57AM 0 points [-]

There is no need to re-read the changes to the article. The changes just incorporate things I've also written in the comments, to reduce the chance of new commenters coming into the thread with misunderstandings I've already clarified in the comments.

"So you have deliberately constructed a scenario, then defined "winning" as something forbidden by the scenario. Unhelpful." - As long as the scenario does not explicitly punish rationality, it is perfectly valid to expect a perfectly rational agent to outperform any other agent.

"Remember, the Traveling Salesman must eventually sell something or all that route planning is meaningless" - I completely agree with this, not stopping is irrational as you gain 0 utility. My point was that you can't just say, "A perfectly rational agent will choose an action in this set". You have to specify which action (or actions) an agent could choose whilst being perfectly rational.

"You have done nothing but remove criteria for stopping functions from unbounded scenarios" - And that's a valid situation to hand off to any so-called "perfectly rational agent". If it gets beaten, then it isn't deserving of that name.

Comment author: kithpendragon 06 January 2016 11:38:04AM 0 points [-]

There is no need to re-read the changes to the article...

I have been operating under my memory of the original premise. I re-read the article to refresh that memory and found the changes. I would simply have been happier if there were an ETA section or something. No big deal, really.

As long as the scenario does not explicitly punish rationality, it is perfectly valid to expect a perfectly rational agent to outperform any other agent.

Not so: you have generated infinitely many options, such that there is no selection that can fulfill that expectation. Any agent that tries to do so cannot be perfectly rational, since the goal as defined is impossible.
