Comments

I took the whole thing! That's two years in a row.

My ex-wife is in geriatrics, and I've heard of a few situations from her where she, possibly appropriately, lied to patients with severe dementia by playing along with their fantasies. The most typical example would be a patient believing their dead spouse is coming that day for a visit, and asking about it every 15 minutes. I think she would usually tell the truth the first few times, but felt it was cruel to keep telling someone over and over that their spouse is dead and getting the same negative emotional reaction every time, so at that point she would start saying something like, "I heard they were stuck in traffic and can't make it today."

The above feels to me like a grey area, but more rarely a resident would be totally engrossed in a fantasy, like thinking they were in a Broadway play or something. In these cases, where the person will never understand or accept the truth anyway, I think playing along to keep them happy isn't a bad option.

I've been living like that for a long time, but just recently started noticing it.

> Oddly, it feels like one key part of my recovery has been to train myself to feel as unguilty as possible about any recreational activity.

Do you have any specific advice for how to do this?

One problem I see with this kind of study is that valproic acid has a very distinct effect (from personal experience), which makes it easier for participants to determine whether they are in the placebo group. It would be nice if there were an "active placebo" group who took another mood stabilizer that is not an HDAC inhibitor. Also, it would have been nice to see the effect on the ability to produce a tone by humming or whistling, given the pitch name.

Some very weak anecdotal evidence in favor of the hypothesis: For a couple of months in 2005 I was being treated with valproic acid and, during that time, I took an undergraduate course in topology. In my brief stint as a graduate student (2012), I also took topology and performed much better in it than in any of my other courses, though this could just be due to liking the subject.

Actually, I started thinking about computations containing people (in this context) because I was interested in the idea of one computation simulating another, not the other way around. Specifically, I started thinking about this while reading Scott Aaronson's review of Stephen Wolfram's book. In it, he makes a claim along the lines of: the Rule 110 cellular automaton hasn't been proved to be Turing complete, because the simulation has an exponential slowdown. I'm not sure the claim was quite that strong, but it was definitely claimed later by others that Turing completeness hadn't been proved for that reason. I felt this was wrong, and justified my feeling with a thought experiment: suppose we had an intelligence that was contained in a computer program and we simulated this program in Rule 110, with the exponential slowdown. Assuming the original program contained a consciousness, would the simulation also? I felt strongly, and still do, that it would.

It was later shown, if I'm remembering right, that there is a simulation with only polynomial slowdown, but I still think it's a useful question to ask, although the notion it captures, if it captures one at all, seems to me to be a slippery one.
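
For concreteness, here is a minimal sketch of the Rule 110 update rule itself (just the elementary cellular automaton, not the universality construction, which is far more involved); the array size and printing are arbitrary choices for illustration:

```python
# Rule 110: each cell's next value is determined by its 3-cell neighborhood,
# looked up in the bits of the number 110 (0b01101110).
RULE = 110

def step(cells):
    """Apply one Rule 110 update to a list of 0/1 cells (periodic boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> neighborhood) & 1)
    return out

# Example: evolve a single live cell for a few steps.
row = [0] * 20
row[10] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```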

What if they don't output anything?

I don't see the relevance of either of these links.

I'm skeptical that the relevance of the two modes of thinking in question has much to do with the mathematical field in which they are being applied. Some of Grothendieck's most formative years were spent reconstructing parts of measure theory: specifically, he wanted a rigorous definition of the concept of volume and ended up reinventing the Lebesgue measure, if memory serves. In other words, he was doing analysis and, less directly, probability theory...

I do think it's plausible that more abstract thinkers tend towards things like algebra, but in my limited mathematical education, I was much more comfortable with geometry, and I avoid examples like the plague...

Maybe the two approaches are not all that different. When you zoom out on a growing body of concrete examples, you may see something similar to the "image emerging from the mist" that Grothendieck describes.

Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to "pointwise causal isomorphism":

> Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can't internally detect any difference) then my probability of consciousness is essentially "top", i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?

We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, so that if computation A is emulating a brain and we all agree that it contains a consciousness, we can be sure that B does as well?
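
For what it's worth, here is a rough sketch of one way the "pointwise" idea might be cashed out for computations modeled as step functions on states; every name here (step_a, step_b, encode, steps_per_a_step) is hypothetical and only meant to make the shape of such a definition visible, checked over a finite run rather than proved in general:

```python
def is_pointwise_simulation(step_a, step_b, encode, initial_a, steps_per_a_step, n_steps):
    """Check, over a finite run, that the mapping `encode` from A-states to B-states
    commutes with the dynamics: encoding a state and running B for the allotted
    number of steps lands on the encoding of A's next state."""
    state_a = initial_a
    for _ in range(n_steps):
        next_a = step_a(state_a)
        state_b = encode(state_a)
        for _ in range(steps_per_a_step(state_a)):
            state_b = step_b(state_b)
        if state_b != encode(next_a):
            return False  # the correspondence broke down at this step
        state_a = next_a
    return True

# Trivial example: B takes two half-steps for every step of A.
step_a = lambda n: n + 1
step_b = lambda x: x + 0.5
encode = lambda n: float(n)
print(is_pointwise_simulation(step_a, step_b, encode, 0, lambda s: 2, 10))  # True
```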
