Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment author: torekp 13 October 2016 12:36:16AM 0 points [-]

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. Rumination was healthy in the former.

Comment author: Houshalter 10 October 2016 08:28:53PM 1 point [-]

Sure, corn isn't the optimal crop to do this with. What about water-based plants or algae, which have more efficient photosynthesis? Algae have very short generation times and could perhaps be bred to produce biofuel directly, instead of through an inefficient indirect process of fermentation.

If I recall correctly, you would only need a relatively small percentage of Earth's surface to produce enough fuel for current use. And it could be some undesirable land in a desert. Tubes full of water and algae are a lot cheaper than solar panels and batteries.
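A rough back-of-envelope sketch bears this recollection out. The consumption and yield figures below are order-of-magnitude assumptions, not sourced data; published algae-yield estimates span roughly 10,000-100,000 L/ha/yr:

```python
# Back-of-envelope check of the "small fraction of Earth's surface" claim.
# All figures are rough assumptions for illustration, not measurements.

BARRELS_PER_DAY = 100e6            # world liquid-fuel use, order of magnitude
LITERS_PER_BARREL = 159
ALGAE_YIELD_L_PER_HA_YR = 30_000   # assumed mid-range open-pond yield

annual_fuel_liters = BARRELS_PER_DAY * LITERS_PER_BARREL * 365
km2_needed = annual_fuel_liters / ALGAE_YIELD_L_PER_HA_YR / 100  # 100 ha per km^2

EARTH_LAND_KM2 = 149e6
print(f"{km2_needed:,.0f} km^2 = {100 * km2_needed / EARTH_LAND_KM2:.1f}% of land")
# -> about 1.9 million km^2, on the order of 1% of Earth's land surface
```

Under these assumptions the answer comes out around one percent of land area, which is consistent with "a relatively small percentage"; a pessimistic yield figure would raise it several-fold but not change the order of magnitude.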

Comment author: torekp 13 October 2016 12:27:16AM 0 points [-]

The linked paper is only about current practices, their benefits and harms. You're right, though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.

Comment author: TheAncientGeek 03 October 2016 12:22:35PM 0 points [-]

If you mean this sort of thing http://www.kurzweilai.net/slate-this-is-your-brain-on-neural-implants, then he is barely arguing the point at all... this is miles below philosophy-grade thinking. He doesn't even set out a theory of selfhood, just appeals to intuitions. Absent Qualia is much better, although still not anything that should be called a proof.

Comment author: torekp 04 October 2016 01:43:06AM 0 points [-]

I got started by Sharvy's "It Ain't the Meat, It's the Motion", but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.

Comment author: TheAncientGeek 02 October 2016 09:27:42AM 0 points [-]

"If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience"

That reads like a non sequitur to me. We don't know what the relationship between algorithms and experience is.

"Mentioning something is not a prerequisite for having it."

It's possible for a description that doesn't explicitly mention X to nonetheless add up to X, but only possible... you seem to be treating it as a necessity.

Comment author: torekp 02 October 2016 12:08:08PM 0 points [-]

I'm convinced by Kurzweil-style neural-replacement arguments (I think he originated them, though I'm not sure) that experience depends only on algorithms, not (e.g.) on the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain the implications of my view. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.

Comment author: TheAncientGeek 30 September 2016 11:21:40AM 0 points [-]

You only get your guarantee if experiences are the only thing that can cause thoughts about experiences. However, you don't get that by noting that in humans thoughts are usually caused by experiences. Moreover, in a WBE or AI, there is always a causal account of thoughts that doesn't mention experiences, namely the account in terms of information processing.

Comment author: torekp 01 October 2016 12:38:10PM 0 points [-]

You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.

Mentioning something is not a prerequisite for having it.

Comment author: TheAncientGeek 28 September 2016 01:34:21PM 0 points [-]

"That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed"

You seem to be rather sanguine about the equivalence of thoughts and experiences.

(And are we talking about equivalent experiences or identical experiences? Does a tomato have to be coded as red?)

"Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like."

It's uncontroversial that the same coarse input-output mappings can be realised by different algorithms... but if you are saying that consciousness supervenes on the algorithm, not the function, then the real possibility of zombies follows, in contradiction to the GAZP.

(Actually, the GAZP is rather terrible, because it means you won't even consider the possibility of a WBE not being fully conscious, rather than refuting it on its own ground.)
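The uncontroversial half of that point can be made concrete with a minimal sketch (all names here are invented for the illustration): two realizations of one coarse-grained input/output function by different internal algorithms, which is exactly the case where "supervenes on the algorithm, not the function" has to draw a line.

```python
# Toy version of the algorithm-vs-function distinction: two systems with the
# same coarse-grained input/output behavior, realized by different internal
# processes.

RESPONSES = {"red": "stop", "green": "go", "amber": "slow"}

def agent_computed(stimulus: str) -> str:
    # "Algorithmic" realization: derives the response from the input.
    if stimulus == "green":
        return "go"
    return "slow" if stimulus == "amber" else "stop"

def agent_lookup(stimulus: str) -> str:
    # Blockhead-style realization: bare table lookup, no processing.
    return RESPONSES[stimulus]

# Behaviorally indistinguishable across the whole (finite) input domain:
assert all(agent_computed(s) == agent_lookup(s) for s in RESPONSES)
```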

Comment author: torekp 29 September 2016 10:55:42PM 0 points [-]

I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.

I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.

Comment author: torekp 18 September 2016 08:26:44PM *  0 points [-]

I think that in order to make more progress on this, an extensive answer to the whole blue-minimizing robot sequence would be the way to go. A lot of effort seems to be devoted to answering puzzles like: the AI cares about A; what input will cause it to (also/only) care about B? But this is premature if we don't know how to characterize "the AI cares about A".
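To illustrate why "the AI cares about A" is underdetermined, here is a minimal sketch, assuming the standard set-up from that sequence (the class and function names are invented):

```python
# Minimal sketch of the blue-minimizing robot: the whole agent is a
# hard-coded reflex, and nothing in the code singles out which candidate
# goal it "cares about".

from dataclasses import dataclass

@dataclass
class Percept:
    color: str               # what the robot's camera reports
    is_actually_blue: bool   # ground truth, invisible to the robot

def policy(p: Percept) -> str:
    # Fire at anything that *looks* blue.
    return "fire" if p.color == "blue" else "idle"

# The same behavior fits "minimize blue objects", "minimize blue percepts",
# "fire at blue percepts", and more; the code fixes the input/output mapping,
# not which of those descriptions names what the robot cares about.
print(policy(Percept(color="blue", is_actually_blue=False)))  # -> fire
```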

Comment author: oge 06 September 2016 05:14:00PM 1 point [-]

My understanding is that if the creature is conscious at all, and it acts observably like a human with the kind of experience we care about, THEN it likely has the kind of experiences we care about.

Do you think it is likely that the creatures will NOT have the experiences we care about?

(just trying to make sure we're on the same page)

Comment author: torekp 08 September 2016 02:21:40AM 1 point [-]

It depends on how the creatures got there: algorithms or functions? That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed. Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

Further info on my position.

Comment author: buybuydandavis 04 September 2016 08:03:25PM 4 points [-]

“Why does anything exist at all?”

I lose no sleep over this. I think people who do are just confused by language.

I'd say that if you examine your concept of "why", you find it presupposes existence.

Comment author: torekp 05 September 2016 11:52:36AM 0 points [-]

This. And if one is willing to entertain Tegmark, approximately 100% of universes will be non-empty, so the epistemic question "why a non-empty universe?" gets no more bite than the ontological one.
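One hedged way to cash out "approximately 100%", assuming a non-atomic probability measure over Tegmark's ensemble (the right choice of measure is itself contested) and treating the empty structure as a single point:

```latex
% \mu: assumed non-atomic probability measure over the ensemble \mathcal{U};
% u_\emptyset: the empty structure, a single point of \mathcal{U}.
\mu(\{u_\emptyset\}) = 0
\quad\Longrightarrow\quad
\mu(\{u \in \mathcal{U} : u \neq u_\emptyset\}) = 1
```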

Comment author: torekp 05 September 2016 11:34:37AM 1 point [-]

The author is overly concerned about whether a creature will be conscious at all, and not concerned enough about whether it will have the kind of experiences that we care about.
