Comment author: capybaralet 07 September 2015 02:01:30PM 0 points [-]

"Every set of numbers has a least element" clearly does NOT define the natural numbers. Consider N ∪ {-1}: every nonempty subset of it also has a least element, yet it is not N.

Comment author: jmmcd 28 January 2015 09:41:26AM *  1 point [-]

I'm afraid I won't have time to give you more help. There's a short summary of each sequence under the link at the top of the page, so it won't take you forever to see the relevance.

EDIT: you're wondering elsewhere in the thread why you're not being well received. It's because your post doesn't make contact with what other people have thought on the topic.

Comment author: capybaralet 21 August 2015 05:35:29PM 0 points [-]

I put "enjoy itself" in quotes, because I don't mean it literally. The questions that that sequence addresses according to the summary don't seem relevant to what I am trying to get at.

I guess I need to be more precise. I just mean: how can we maximize the integral of experience through time (whether we let experience take negative values is a detail)? This was already one of Tegmark's proposals in that paper, except that he writes in terms of a final goal instead of a process, which was the point of my post...

"The amount of consciousness in our Universe, which Giulio Tononi has argued corresponds to integrated information"

Comment author: John_Maxwell_IV 28 January 2015 03:41:47AM *  1 point [-]

While I don't have too much experience to back this up, I think it is probably a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in.

Yes, I don't particularly like the way the sequences are written either :/ But I think the kind of thing you're talking about in this post is the sort of topic they address. LW Wiki pages are often better, e.g. see this one:

if a p-zombie is atom-by-atom identical to a human being in our universe, then our speech can be explained by the same mechanisms as the zombie's, and yet it would seem awfully peculiar that our words and actions would have one entirely materialistic explanation, but also, furthermore, our universe happens to contain exactly the right bridging law such that our experiences are meaningful and our consciousness syncs up with what our merely physical bodies do. It's too much of a stretch: Occam's razor dictates that we favor a monistic universe with one uniform set of laws.

I see this as compatible with my reply to skeptical_lurker above.

My point is: how do you evaluate whether something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these.

Agreed. I don't have any easy answer to this question. It's kind of like asking the question "if someone is ill or injured, how do you fix them?" It's an important question worthy of extensive study (at least insofar as it's relevant to whatever ethical question you're currently being presented with).

And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't. Occam's Razor only applies to the territory, not the map, so there's no penalty for drawing our boundaries in as complicated & intricate a way as we like (kind of like the human-drawn country boundaries on real maps).

Comment author: capybaralet 28 January 2015 04:54:05AM *  0 points [-]

I know all about philosophical zombies.

Agreed. I don't have any easy answer to this question.

Do you have any answer at all? Or anything to say on the matter? Would you at least agree that it is of critical ethical importance, and hence worthy of discussion?

And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't.

Of course, but I assume you agree with me about the program I wrote?

In any case, I think it would be nice to try and forge some agreement and/or understanding on this matter (as opposed to ignoring it on the basis of our disagreement).

Comment author: skeptical_lurker 27 January 2015 07:50:24AM *  2 points [-]

Suppose all information processing is inextricably linked to qualia. Now, I suppose there is information processing in rocks of a sort, in the equations of thermodynamics, motion, etc. that govern a rock's behaviour. But qualia do not imply self-awareness (1), and there's no way you can communicate with the rock. Qualia also don't imply emotions (2), and if there is neither self-awareness nor emotion then I don't see why there need be any moral considerations.

As to determining the truth of Panpsychism and categorising which things have emotions, self awareness etc, I shall defer this problem to future superintelligences. Additionally, a CEV AI should devote a lot of resources to humans regardless of whether panpsychism is true, because most people don't believe in Peter Singer style altruism.

1 because (a) people who meditate for many years or take a large dose of dissociative drugs can experience ego-death, where they stop conceptualising a self, but they still experience qualia; (b) most animals are not self-aware, yet intuition and Occam's razor tell me that they still experience qualia

2 some people experience emotional blunting, and while the world may seem emotionally grey to them, they still experience qualia. Additionally, squid do not have emotions, and again I believe they still have qualia.

EDIT: as well as lacking self-awareness and emotions, rocks also lack agency. The question of what to do with a human who, due to various incurable diseases, lacks self-awareness, emotions, and agency is left as an exercise for the reader.

Comment author: capybaralet 28 January 2015 04:43:33AM 0 points [-]

How do you know what a CEV AI should do?

How do you know that squids don't have emotions?

Define agency.

You could have at least stepped up to the challenge you left to the reader.

Comment author: Manfred 27 January 2015 10:42:38AM *  9 points [-]

Have you read A Human's Guide to Words?

If you have: taboo "consciousness," because the definition itself is what you are uncertain about. What is it that you care if atoms have?

In my opinion, it looks like there is no physical property at stake here - you are instead worried about whether atoms have you-care-about-them substance. Which is of course not an actual substance in the atoms - your question ultimately points back to your own ideas and preferences.

This is your own prerogative, but I think that if you let go of the question of "consciousness" for now, until you have a better idea of what physical properties correspond to it, and just ask yourself whether you care about atoms for their own sake, you'll find you probably do not, and thus can stop worrying about it.

Comment author: capybaralet 28 January 2015 03:28:07AM -1 points [-]

Unless I am mistaken, the best theory of how to make FAI ultimately points back to my (and all y'all's) ideas and preferences.

So I guess we should taboo FAI.

I'd argue that you have no better idea of what physical properties correspond to consciousness than I do, you've simply chosen to ignore the question, because you believe you can rely on your own intuitive consciousness-detector.

I am worried about bias. Shouldn't we all be?

Comment author: John_Maxwell_IV 27 January 2015 08:13:45AM *  5 points [-]

Panpsychism seems like a plausible theory of consciousness.

What's the best argument you've seen for it? I don't find it plausible myself.

What do you think of the reductionism sequence? E.g. this post might be relevant.

My interpretation of the "expanding circle" might be something like: it'd be a good thing if things with preferences increasingly found themselves preferring that the preferences of other things with preferences were also achieved. If something doesn't have preferences, I'm not that concerned about it.

Comment author: capybaralet 28 January 2015 03:18:57AM *  0 points [-]

What's the best argument you've seen for it?

see skeptical lurker's comment, below.

What do you think of the reductionism sequence?

While I don't have too much experience to back this up, I think it is probably a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in. Can you please give brief summaries of the things you link to, and how they are relevant? I skimmed that article, and it doesn't seem relevant.

If something doesn't have preferences, I'm not that concerned about it.

My point is: how do you evaluate whether something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these. If I write and run the following computer program, I don't think you will be upset if I stop it:

while True:
    print("I prefer not to be interrupted")
Comment author: jmmcd 27 January 2015 12:34:42PM 0 points [-]

So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)*?

Maybe read the Fun Theory sequence?

Comment author: capybaralet 28 January 2015 03:10:23AM -2 points [-]

Maybe tell me why I should? My time is valuable.

Comment author: joaolkf 27 January 2015 04:19:14PM 2 points [-]

It's an interesting idea, but it's not at all new. Most moral philosophers would agree that certain experiences are part (or all) of what has value, and that the precise physical instantiation of these experiences does not necessarily matter (in the same way many would agree on this same point in philosophy of consciousness).

There's a further meta-issue, which is why the post is being downvoted. Surely it is vague and maybe too short, but it seems to have the goal of initiating discussion and refining the view being presented rather than adequately defending or specifying it. I have posted tentative discussions - much more developed than this one - on meta-ethics and other abstract issues in ethics directly related to rationality and AI safety, and I wasn't exactly warmly met. Given that many of the central problems being discussed here are within ethics, why the disdain for meta-ethics? Of course, it might as well just be a coincidence, or that all those posts were fundamentally flawed in an obvious way.

Comment author: capybaralet 28 January 2015 03:08:20AM *  2 points [-]

Yeah I am not happy about the way I'm being received. Any advice, other than avoiding interesting meta-ethics questions?

Wrt how new it is: how about if I put it this way:

Maybe experience is fundamentally not a function of brain state, but a function of brain state over time. Note that this is not strongly anti-physicalism. Especially if you believe in discrete time, in which case you can have experience be a function of the transitions that occur between states in successive time-steps:

Experience = f(s_t, s_{t-1}).
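This transition-based view can be sketched as a toy computation. Here `states` and `f` are purely illustrative stand-ins for a discrete brain-state sequence and a transition-valuation function; neither is anything specified in the thread:

```python
# Toy sketch: experience as a function of state *transitions*, not of
# individual states. Total experience is the sum of f over successive
# pairs (s_{t-1}, s_t).

def total_experience(states, f):
    """Sum f over each pair of consecutive states in the sequence."""
    return sum(f(prev, curr) for prev, curr in zip(states, states[1:]))

# Example: value a transition by how much the state changed.
states = [0, 2, 5, 5, 9]
change = lambda prev, curr: abs(curr - prev)
print(total_experience(states, change))  # 2 + 3 + 0 + 4 = 9
```

The point of the sketch is only that the same final state can carry different total experience depending on the path taken through intermediate states.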

Comment author: 9eB1 27 January 2015 09:09:13AM 0 points [-]

During the end of a drive there would either be or not be a configuration of particles in the shape of a paper ticket memorializing your transgression of the law. And if not that, there is a configuration of particles in the heads of the law enforcement officials recalling your transgression and planning on writing you a ticket or whatever. Any universe-wide configuration of particles contains the history of all of the events preceding it, even if they are opaque to us, because the possibility space of particle configurations is (probably?) larger than the space of utility-relevant histories.

Comment author: capybaralet 28 January 2015 03:01:56AM 0 points [-]

Any universe-wide configuration of particles contains the history of all of the events preceding it

So there is no way that we can arrive at the same state from different starting points? That seems ridiculous to me.

Comment author: Viliam_Bur 27 January 2015 08:19:51AM *  0 points [-]

In AI research, intelligent agents typically have a clear-cut and well-defined final goal, e.g., win the chess game or drive the car to the destination legally. The same holds for most tasks that we assign to humans, because the time horizon and context is known and limited. (...) a truly well-defined goal would specify how all particles in our Universe should be arranged at the end of time.

We typically care only about the arrangement of particles at the end of the task, because that is the nature of the simple tasks we usually use machines for today. Actually, even that is not true: when "driving the car to the destination legally" we care not only about the arrangement of the particles of the car at the end of the trip, but also about what happened on the way -- that's what "legally" means here. (Unless we also count "police sending us tickets" as particles. But I guess the car is supposed to follow the laws even when the police are not looking.)

We can define "journey" goals e.g. by calculating score at each time interval, and trying to maximize the sum or the average (or some other function) of all the intervals. This can make sense even if we don't know how long the task will last.
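To make the "journey" framing concrete, here is a minimal sketch. The per-step scores and the aggregation choices are hypothetical examples, not anything from the comment:

```python
# Toy sketch of a "journey" goal: score every time interval and maximize
# an aggregate (sum, average, ...) of those scores. The objective stays
# well-defined even when we don't know in advance how long the task lasts.

def journey_value(step_scores, aggregate=sum):
    """Aggregate per-interval scores into a single journey value."""
    return aggregate(step_scores)

# A car trip scored at each interval: +1 for legal driving, -10 for a
# violation along the way.
trip = [1, 1, 1, -10, 1, 1]
print(journey_value(trip))                              # sum of the journey: -5
print(journey_value(trip, lambda s: sum(s) / len(s)))   # average per interval
```

Summing rewards the total journey; averaging makes the value length-independent, which matters when the task's duration is open-ended.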

treat experience as inherently positive and not try to distinguish between positive and negative experiences.

This sounds wrong. But I am not even sure what exactly we would measure here, if both positive and negative experiences count the same. Is it the intensity of the experience (in either direction) that counts? (That is, would you rather be tortured than bored? Would you rather be tortured really painfully than enjoy a mild pleasure?) Or is it the duration of the experience? (That is, we want to maximize the subjective time of sentient beings, regardless of what happens during that time? Would you rather live 1001 years in hell than 1000 years in heaven?)

Comment author: capybaralet 28 January 2015 02:59:44AM *  0 points [-]

This sounds wrong.

Of course. That's why I proposed refining it.

But I am not even sure what exactly we would measure here

I thought it was obvious. It is the integral of total experience (suitably defined) through time that counts.
