Comment author: badtheatre 31 October 2014 06:56:21PM 25 points [-]

I took the whole thing! That's two years in a row.

In response to White Lies
Comment author: Swimmer963 09 February 2014 02:46:23AM *  45 points [-]

There are certain lies that I tell over and over again, where I'm 99% sure lying is the morally correct answer. Stereotypical example: my patient is lying in a lake of poop, or is ringing the call bell for the third time in 15 minutes to tell me that they're thirsty or in pain or need a kleenex, and they're embarrassed and upset because they're sure I must be frustrated and mad that they're making me do so much work. "Of course I don't mind," I've said over and over again. "This doesn't bother me. I've got plenty of time. I just want you to be comfortable, that's my job." When it's 4 am and I desperately want to go on break and eat something, none of these things are true. But it's my job, and I want to want to do it, so the fact that sometimes I desperately don't want to do it is kind of moot. But the last thing a patient in the ICU needs to hear from their nurse is "yes, I'm pissed that you shat in the bed again because I was about to go on break and now I can't and I'm hungry and cranky." I keep that to myself.

...Other than that, I generally don't lie to friends, although I do lie by omission, especially when it comes to my irrational feelings of frustration or irritation with things they do. I'm generally not bothered by being very open with people about, e.g., my relationships or other personal things, so I'm confused when other people want to lie or conceal information about these sorts of things. I actually have a really hard time keeping up with other people's systems of lying; when you're friends with two people who both have specific lists of things they don't want you to ever tell the other person, it gets complicated. (For almost a year my best friend was dating a man without telling her ex-husband, and I was seeing her ex-husband every time I went to play with my godson, and I had to remember to lie about a whole bunch of random things like "what did you and my ex-wife do on Saturday?" I respected that it was her choice whether or not to tell him, but I still found this really, really irritating.)

In response to comment by Swimmer963 on White Lies
Comment author: badtheatre 10 February 2014 05:26:25PM 3 points [-]

My ex wife is in Geriatrics and I've heard a few situations from her where she, possibly appropriately, lied to patients with severe dementia by playing along with their fantasies. The most typical example would be a patient believing their dead spouse is coming that day for a visit, and asking about it every 15 minutes. I think she would usually tell the truth the first few times, but felt it was cruel to be telling someone constantly that their spouse is dead, and getting the same negative emotional reaction every time, so at that point she would start saying something like, "I heard they were stuck in traffic and can't make it today."

The above feels to me like a grey area, but more rarely a resident would be totally engrossed in a fantasy, like thinking they were in a Broadway play or something. In these cases, where the person will never understand/accept the truth anyway, I think playing along to keep them happy isn't a bad option.

Comment author: Kaj_Sotala 09 January 2014 08:12:18AM 45 points [-]

Furthermore, I imagine that this can backfire reaaaly hard: if you manage to develop a strong revulsion for unproductive activities but still can't force yourself to stop browsing reddit (or whatever your vice) then you run a big risk of hitting a willpower-draining death spiral.

That's basically what happened to me: I taught myself to feel guilty whenever I was relaxing and not working, but just the fact that I was feeling guilty about not-working didn't make me any more motivated to actually work. So I would repeatedly get into situations where absolutely nothing felt worth doing, so I accomplished basically nothing and felt miserable for the whole day. Cue an extended burnout that took me several years to properly recover from.

Oddly, it feels like one key part of my recovery has been to train myself to feel as unguilty as possible about any recreational activity. That way, if I really need a break I can take one, but if I'm on a break I can also honestly ask myself whether my break has gone on long enough and whether I'd want to resume doing something more productive now. Though I'm not sure that's quite right either - it's more like I'm more able to trust that my motivation to do something relaxing will naturally fade after a while, to be replaced with a motivation to be productive again, without me necessarily even needing to watch myself. And of course, since I don't need to actively watch myself, the relaxation may happen faster since I can focus on it more fully. (Of course, sometimes it does take longer, and the key is to be completely fine with that possibility, too.)

The main mechanism here seems to be that guilt not only blocks the relaxation, it also creates negative associations around the productive things - the productivity becomes that nasty uncomfortable reason why you don't get to do fun things, and you flinch away from even thinking about the productive tasks, since thinking about them makes you feel more guilty about not already doing them. Which in turn blocks you from developing a natural motivation to do them.

So if someone did go by this mindhacking route, they should be very careful to avoid developing guilt. The guest who had developed a dislike for Fritos didn't dislike them because eating them made her feel guilty: she disliked them because she had started noticing features in them that she felt were repulsive. Also, I suspect that "actively pay attention to the features in productive tasks that are desirable" is just as important a component as noticing the displeasing things in non-productive tasks. If we assume the opportunity cost model of willpower, then your motivation to do something is proportional to the difference in estimated value between that thing and the second most highly ranked thing, implying that increasing the perceived value of the productive things can be even more efficient than decreasing the value of other things. (Guilt in this model would act as a negative modifier to the values.)
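The opportunity-cost model above can be sketched in a few lines of code. This is a toy illustration only; the task names and numbers are made up, and guilt is modeled as a flat penalty applied to every option's value:

```python
def motivation(values):
    """Motivation for the top-ranked option under an opportunity-cost
    model: the gap between the best and second-best perceived values."""
    ranked = sorted(values.values(), reverse=True)
    return ranked[0] - ranked[1]

# Perceived values of competing activities (arbitrary units).
values = {"write report": 5.0, "browse reddit": 4.5}
baseline = motivation(values)  # small gap -> weak motivation

# Guilt lowers the value of *everything*, including the productive task,
# so the gap (and hence motivation) need not improve at all.
guilty = {task: v - 2.0 for task, v in values.items()}
assert motivation(guilty) == baseline  # uniform guilt changes nothing

# Raising the perceived value of the productive task widens the gap.
values["write report"] += 1.0
assert motivation(values) > baseline
```

The point of the sketch: a penalty that hits all options equally leaves the *difference* untouched, which is why guilt can make everything feel worse without making work any more attractive.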

Also closely related posts: Pain and gain motivation, It's okay to be (at least a little) irrational.

Comment author: badtheatre 09 January 2014 05:02:27PM 0 points [-]

I've been living like that for a long time, but just recently started noticing it.

> Oddly, it feels like one key part of my recovery has been to train myself to feel as unguilty as possible about any recreational activity.

Do you have any specific advice for how to do this?

Comment author: purpleposeidon 05 January 2014 07:53:40AM 10 points [-]
Comment author: badtheatre 08 January 2014 07:26:16PM 0 points [-]

One problem I see with this kind of study is that valproic acid has a very distinct effect (from personal experience), which makes it easier for participants to determine whether they are in the placebo group. It would be nice if there were an "active placebo" group who took another mood stabilizer that is not an HDAC inhibitor. Also, it would have been nice to see the effect on ability to produce a tone by humming or whistling, given the pitch name.

Some very weak anecdotal evidence in favor of the hypothesis: For a couple months in 2005 I was being treated with valproic acid and, during that time, I took an undergraduate course in topology. In my brief stint as a graduate student (2012), I also took topology and performed much better in this than in any of my other courses, though this could just be due to liking the subject.

Comment author: wuncidunci 17 December 2013 11:26:25PM 0 points [-]

Two points of relevance that I see are:

If we care about the nature of morphisms of computations only because of some computations being people, the question is fundamentally what our concept of people refers to, and if it can refer to anything at all.

If we view isomorphic as a kind of extension of our naïve view of equals, we can ask what the appropriate generalisation is when we discover that equals does not correspond to reality and we need a new ontology as in the linked paper.

Comment author: badtheatre 18 December 2013 06:48:53PM 0 points [-]

Actually, I started thinking about computations containing people (in this context) because I was interested in the idea of one computation simulating another, not the other way around. Specifically, I started thinking about this while reading Scott Aaronson's review of Stephen Wolfram's book A New Kind of Science. In it, he makes a claim something like: the Rule 110 cellular automaton hasn't been proved to be Turing complete because the simulation has an exponential slowdown. I'm not sure if the claim was that strong, but it was definitely claimed later by others that Turing completeness hadn't been proved for that reason. I felt this was wrong, and justified my feeling by the thought experiment: suppose we had an intelligence that was contained in a computer program and we simulated this program in Rule 110, with the exponential slowdown. Assuming the original program contained a consciousness, would the simulation also? And I felt strongly, and still do, that it would.
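For reference, Rule 110 itself is easy to write down; here's a minimal sketch of one synchronous update (fixed 0 boundaries, purely illustrative):

```python
RULE = 110  # each 3-cell neighborhood indexes a bit of this number

def step(cells):
    """One synchronous Rule 110 update on a list of 0/1 cells,
    treating the cells just outside the list as 0."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        idx = (left << 2) | (center << 1) | right
        out.append((RULE >> idx) & 1)
    return out

row = [0, 0, 0, 0, 1]  # a single live cell at the right edge
print(step(row))       # -> [0, 0, 0, 1, 1]
```

The update rule is trivial; all of the interesting (and Turing-complete) behavior lives in how patterns of cells interact over many steps, which is exactly where the slowdown of the simulation comes in.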

It was later shown, if I'm remembering right, that there is a simulation with only polynomial slowdown, but I still think it's a useful question to ask, although the notion it captures, if it does so at all, seems to me to be a slippery one.

Comment author: MrMind 16 December 2013 09:43:17AM 0 points [-]

Let's start small. Since we are talking about algorithms (better yet, about programs for a universal Turing machine), what if we say two programs are isomorphic when they map the same inputs to the same outputs?
Would that suffice as a definition of isomorphism, even if they have wildly different resource usage (including time)?
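To make the proposal concrete, here are two toy programs (hypothetical, just for illustration) that agree on every input but differ wildly in running time; under the input/output definition they would count as isomorphic:

```python
def sum_slow(n):
    """Sum 0..n-1 by explicit iteration: O(n) time."""
    total = 0
    for i in range(n):
        total += i
    return total

def sum_fast(n):
    """Sum 0..n-1 by the closed form n(n-1)/2: O(1) time."""
    return n * (n - 1) // 2

# Extensionally equal: identical input/output behavior on every input
# we check, despite completely different internal computations.
assert all(sum_slow(n) == sum_fast(n) for n in range(1000))
```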

Comment author: badtheatre 17 December 2013 07:48:42PM 0 points [-]

What if they don't output anything?

Comment author: Douglas_Knight 16 December 2013 07:01:12PM 0 points [-]

Here is a post on a related question. As I said there, this paper is relevant.

Comment author: badtheatre 17 December 2013 07:47:57PM 0 points [-]

I don't see the relevance of either of these links.

Comment author: shminux 15 December 2013 12:10:42AM *  0 points [-]

(homeo?)morphic?

You probably mean homomorphism, unless you really mean a continuous (in some sense) invertible transformation between the two programs.

Anyway, the definition of homomorphism is a "structure-preserving map", so you need to figure out what "structure of consciousness" even means.

To start small, you might want to define the term "structure" for some simple algorithm. For example, do two different programs outputting the first 10 natural numbers have the same structure? What if one prints them and the other uses TTS? Does it matter what language the numbers are in? What about two programs, one printing the first ten numbers and the other the second ten? Can you come up with more examples?

Comment author: badtheatre 15 December 2013 02:02:13AM 1 point [-]

What is TTS?

Comment author: passive_fist 15 December 2013 12:24:18AM 0 points [-]

Grothendieck's mind was indeed extremely strange. The levels of abstraction upon abstraction he achieved in algebraic geometry boggle the mind.

But I don't think you can really make meaningful comparisons between thought processes based on self-reporting. One complication is that different fields of mathematics work differently in this regard. In things like statistics, analysis, and geometry, you rely heavily on examples. In things like algebra, examples can indeed be cumbersome and hindering, because the point of algebra is to simplify things to symbol manipulation. Of course, it might also be the case that people with more abstract-type thinking are naturally drawn to algebra.

It would be useful to look at the 'information content' of storing examples vs. storing symbolic representations, and see how that compares across different mathematical subjects.

Comment author: badtheatre 15 December 2013 01:51:59AM 0 points [-]

I'm skeptical that the relevance of the two modes of thinking in question has much to do with the mathematical field in which they are being applied. Some of Grothendieck's most formative years were spent reconstructing parts of measure theory: specifically, he wanted a rigorous definition of the concept of volume and ended up reinventing the Lebesgue measure, if memory serves. In other words, he was doing analysis and, less directly, probability theory...

I do think it's plausible that more abstract thinkers tend towards things like algebra, but in my limited mathematical education, I was much more comfortable with geometry, and I avoid examples like the plague...

Maybe the two approaches are not all that different. When you zoom out on a growing body of concrete examples, you may see something similar to the "image emerging from the mist" that Grothendieck describes.

Comment author: kgalias 14 December 2013 10:38:16PM 0 points [-]

Can you provide some more background? What is a morphism of computations?

Comment author: badtheatre 15 December 2013 12:08:24AM 0 points [-]

Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to "pointwise causal isomorphism":

> Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can't internally detect any difference) then my probability of consciousness is essentially "top", i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?

We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, so that if computation A is emulating a brain and we all agree that it contains a consciousness, we can be sure that B does as well.
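One way to start formalizing this (my own sketch, not Eliezer's definition): represent each computation as a step function on states, and check that a proposed state mapping commutes with the two step functions, i.e. stepping in A and then mapping gives the same result as mapping and then stepping in B. The toy computations below are made up for illustration:

```python
def is_pointwise_simulation(step_a, step_b, phi, states_a):
    """Check that phi maps computation A into computation B so that
    phi(step_a(s)) == step_b(phi(s)) for every tested state s.
    A phi that passes is evidence that B simulates A step-for-step."""
    return all(phi(step_a(s)) == step_b(phi(s)) for s in states_a)

# Toy computations: A counts up by 1; B counts up by 2.
step_a = lambda s: s + 1
step_b = lambda s: s + 2
phi = lambda s: 2 * s          # proposed state correspondence

assert is_pointwise_simulation(step_a, step_b, phi, range(100))
```

Of course this only checks finitely many states and says nothing about slowdown, which is exactly the gap between "pointwise isomorphism" and the looser "A is simulated by B" relation I'm asking about.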
