Comment author: Algon 03 June 2016 04:06:56PM *  0 points

I've seen you mention trigger point therapy before. It's something I do, and it helps to a degree, but it has not made a large change in my quality of life.

The rest seems worthwhile. Thank you for that.

Comment author: John_Maxwell_IV 04 June 2016 01:38:10AM *  0 points

I would guess then that you either

  • Suffer mainly from trigger points, but you're treating the wrong ones/haven't found effective treatment methods

  • Suffer from some other condition that's causing trigger points in your muscles as a downstream effect

One thing that might give you a clue is to figure out just how bad your trigger points are. You won't have a point of reference yourself, so I'd suggest visiting a few massage therapists and asking them after your massage whether you seem tighter than a typical client and where your worst tightness is. If your trigger points are very bad, or you have significant tightness/pain even in areas that aren't close to your head, I'd update some in the direction of them representing the core of your problem.

If trigger points are your primary issue, then keep in mind they can require quite a lot of creative investigation to treat effectively. For example, my current hypothesis is that the eyestrain issues I struggled with a few months ago were caused in part by the following chain: Morton's foot -> trigger points in my soleus -> trigger points in my jaw muscles -> trigger points in my upper sternocleidomastoid -> trigger points in my eye muscles. It sounds weird, but when I spend a day walking around with inserts in my shoes to correct for the Morton's foot, my eyes feel like they're loosening up when I lie down to sleep at the end of the day.

I recommend thoroughly reading the perpetuating-factors chapter in every trigger point book you can get your hands on. Part of the reason I recommend SAMe is that one of the perpetuating factors that's been identified for trigger point problems is folate deficiency, but some people (like me) have MTHFR mutations that interfere with folate metabolism, and SAMe helps get around that. (Getting 23andMe testing can help you determine if you're also an undermethylator.)

Make yourself the world's foremost expert on trigger points (and any other field of research that seems helpful for your pain). Then you'll have a great career if you do end up managing to fix yourself.

Comment author: Algon 31 May 2016 08:10:08PM 5 points

To fellow victims of chronic pain: do you ever despair about the future, knowing your pain might never end? If so, how do you deal with it?

I've made it a Schelling point to never end it all. To leave open the possibility of suicide seems too dangerous to me, too alluring. But I'm still afraid that one day I might try. Do any of you ever feel like this?

I would like to know how others deal with this, as I'm only doing so-so.

Comment author: John_Maxwell_IV 03 June 2016 11:39:09AM *  4 points

Did you look at https://www.painscience.com/? That site had info that cured nasty chronic pain of mine that lasted >1 year. This tutorial in particular was extremely helpful: https://www.painscience.com/tutorials/trigger-points.php

To answer your original question: when I was dealing with chronic pain, I had issues with deep despair similar to what you describe. My chronic pain left me unemployed, and I was constantly in fear of doing things that would aggravate my condition and set back the (very slow and variable) progress it was making in resolving itself. Definitely an extremely miserable period.

Thoughts I had that I found helpful, and that I'll pass on to you: I decided there were basically 2 strategies for dealing with the pain I had: cure and mitigation. Cure refers to finding a way to roll back the root cause of the problem and return to being my pain-free self. Mitigation refers to accepting the pain and finding ways to work around it (for me--finding a job that doesn't require me to make use of my hands at all, and probably doing a lot more meditation). I decided that it was best to focus on 1 strategy at a time, and that I should focus on the "cure" strategy for at least several years before switching to "mitigation". (What's a few years when I had decades left to live?)

I realized that any given "cure" had a pretty low probability of working out, and that being in a state of deep despair was extremely non-conducive to trying things that individually had a small probability of working out. This observation was helpful for recalibrating my intuition, and I resolved to make the "list of things I had tried" as long as I could possibly make it. I also resolved to do more of a breadth-first search than a depth-first search, at least at first: I didn't want something that would gradually fix my pain over the course of many months in a way that I would need careful journaling to observe--I wanted a technique that would help things noticeably, and that I could use at any time if the issues came up in the future.

Luckily I did manage to find such a technique, which was trigger point therapy (see above links). I've since helped a few others make progress on their pain using trigger point therapy, and I think it's potentially useful for many, perhaps almost all, people who suffer chronic pain.

Some more specific recommendations:

  • If you're not already taking something, start taking SAMe. It's an over-the-counter supplement with antidepressant effects that has also been shown to be quite useful for arthritis (so who knows, maybe it will end up helping your condition somehow--it probably hasn't been studied for your condition, and you may as well do an n=1 trial). Ideally it will improve your mood, which will give you the motivation to try low-probability treatments, and it might even fix your issue on its own. Here's more info: http://www.lifeextension.com/Magazine/2007/4/report_same/Page-01

  • Read this book: http://smile.amazon.com/How-Fail-Almost-Everything-Still/dp/1591847745/ Not only is it a great book in and of itself, the author covers mental strategies that are ideal for sufferers of chronic medical conditions. And he uses the story of his own chronic medical condition as a motivating example throughout the book, so it gives you something to relate to.

Comment author: John_Maxwell_IV 30 May 2016 05:22:55AM 1 point

Negativity bias might be a better cite than loss aversion.

Comment author: username2 16 May 2016 02:46:34PM 4 points

A repost from an earlier open thread.

I am looking for sources of semi-technical reviews and expository weblog posts to add to my RSS reader; preferably 4—20 screenfuls of text on topics including or related to evolutionary game theory, mathematical modelling in the social sciences, theoretical computer science applied to non-computer things, microeconomics applied to unusual things (e.g. Hanson's Age of Em), psychometrics, the theory of machine learning, and so on. What I do not want: pure mathematics, computer science trivia, coding trivia, machine learning tutorials, etc.

Some examples that mostly match what I want, in roughly descending order:

How do I go about finding more feeds like that? I have already tried the obvious, such as googling "allintext: egtheory jeremykun" and found a couple OPML files (including gwern's), but they didn't contain anything close. The obvious blogrolls weren't helpful either (most of them were endless lists of conference announcements and calls for papers). Also, I've grepped a few relevant subreddits for *.wordpress.*, *.blogspot.* and *.github.io submissions (only finding what I already have in my RSS feeds — I suspect the less established blogs just haven't gotten enough upvotes).

Comment author: John_Maxwell_IV 27 May 2016 08:54:23AM 1 point

Would Andrew Gelman's blog count? (Author of recommended textbook on Bayesian statistics.)

Maybe it would be useful for you to share the entire blogroll you've accumulated thus far and just ask people to recommend more blogs like the ones you already have. For example, I'm guessing you found Gelman already, since he's present in Robin Hanson's blogroll--but without seeing your list, there's no way to tell which potential recommendations you've already plausibly found.

You could even create a "show us your blogroll" discussion post, in order to harvest OPMLs to mine through.

Comment author: John_Maxwell_IV 12 May 2016 08:06:59AM 0 points

Related threads: 1, 2, 3.

Comment author: John_Maxwell_IV 31 March 2016 08:52:48AM 1 point

Awesome, I ought to be there.

Comment author: Viliam 18 March 2016 09:33:38AM 1 point

An interesting idea, but I can still imagine it failing in a few ways:

  • the AI kills you during the process of building the "incredibly rich world-model", for example because using the atoms of your body will help it achieve a better model;

  • the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.

Comment author: John_Maxwell_IV 18 March 2016 09:54:47PM 0 points

the AI kills you during the process of building the "incredibly rich world-model", for example because using the atoms of your body will help it achieve a better model;

OK, I think this is a helpful objection because it helps me further define the "tool"/"agent" distinction. In my mind, an "agent" works towards goals in a freeform way, whereas a "tool" executes some kind of defined process. Google Search is in no danger of killing me in the process of answering my search query (even if using my atoms would somehow get me better search results). Google Search is not an autonomous agent working towards the goal of getting me good search results. Instead, it's executing a defined process to retrieve search results.

A tool is a safer tool if I understand the defined process by which it works, the defined process works in a fairly predictable way, and I'm able to anticipate the consequences of following that defined process. Tools are bad tools when they behave unpredictably and create unexpected consequences: for example, a gun is a bad tool if it shoots me in the foot without me having pulled the trigger. A piece of software is a bad tool if it has bugs or doesn't ask for confirmation before taking an action I might not want it to take.

Based on this logic, the best prospects for "tool AIs" may be "speed superintelligences"/"collective superintelligences"--AIs that execute some kind of well-understood process, but much faster than a human could ever execute, or with a large degree of parallelism. My pocket calculator is a speed superintelligence in this sense. Google Search is more of a collective superintelligence insofar as its work is parallelized.

You can imagine using the tool AI to improve itself up to the point where it is as capable as possible while still simple enough for humans to understand, then doing the world-modeling step at that stage.

Also if humans can inspect and understand all the modifications that the tool AI makes to itself, so it continues to execute a well-understood defined process, that seems good. If necessary you could periodically put the code on some kind of external storage media, transfer it to a new air-gapped computer, and continue development on that computer to ensure that there wasn't any funny shit going on.

the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.

Sure, and there's also the "superintelligent, but with bugs" failure mode where the model is pretty good (enough for the AI to do a lot of damage) but not so good that the AI has an accurate representation of my values.

I imagine this has been suggested somewhere, but an obvious idea is to train many separate models of my values using many different approaches (e.g., in addition to what I initially described, also use natural language processing to create a model of human values, use supervised learning of some sort to learn what human values look like from many manually entered training examples, etc.). Then a superintelligence could test a prospective action against all of these models, and if even one of them flagged the action as unethical, it could flag the action for review before proceeding.

And in order to make these redundant user preference models better, they could be tested against one another: the AI could generate prospective actions at random and test them against all the models; if the models disagreed about the appropriateness of a particular action, this could be flagged as a discrepancy that deserves examination.
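The redundant-models idea above can be sketched as a simple ensemble check. Everything here is hypothetical: real value models would be enormously more complex, and the function names, the score-in-[0, 1] interface, and the threshold are illustrative assumptions, not a real design.

```python
# Hypothetical sketch: each "model" is a callable that scores an
# action's acceptability in [0, 1]. An action goes to human review if
# any model rejects it; random probing surfaces model disagreements.

def flag_for_review(action, models, threshold=0.5):
    """Return True if any model in the ensemble rejects the action."""
    return any(model(action) < threshold for model in models)

def find_disagreements(candidate_actions, models, threshold=0.5):
    """Collect actions on which the models disagree; per the idea
    above, these discrepancies deserve human examination."""
    disagreements = []
    for action in candidate_actions:
        # The set of accept/reject verdicts across the ensemble
        verdicts = {model(action) >= threshold for model in models}
        if len(verdicts) > 1:  # at least one accepts and one rejects
            disagreements.append(action)
    return disagreements
```

The design choice this illustrates: requiring unanimity before acting (rather than majority vote) trades away some autonomy for safety, which fits the "keep the human in the loop" theme below.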

My general sense is that with enough safeguards and checks, this "tool AI bootstrapping process" could probably be made arbitrarily safe. Example: the tool AI suggests an improvement to its own code, you review the improvement, you ask the AI why it did things in a particular way, the AI justifies itself, the justification is hard to understand, you make improvements to the justifications module... For each improvement the tool AI generates, it also generates a proof that the improvement does what it says it will do (checked by a separate theorem-proving module) and test coverage for the new improvement... Etc.

Comment author: John_Maxwell_IV 17 March 2016 02:01:11AM *  2 points

In The genie knows, but it doesn't care, RobbBB argues that even if an AI is intelligent enough to understand its creator's wishes in perfect detail, that doesn't mean that its creator's wishes are the same as its own values. By analogy, even though humans were optimized by evolution to have as many descendants as possible, we can understand this without caring about it. Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stymie evolution's "wishes".

I thought of a potential way to get around this issue:

  1. Create a tool AI.

  2. Use the tool AI as a tool to improve itself, similar to the way I might use my new text editor to edit my new text editor's code.

  3. Use the tool AI to build an incredibly rich world-model, which includes, among other things, an incredibly rich model of what it means to be Friendly.

  4. Use the tool AI to build tools for browsing this incredibly rich world-model and getting explanations about what various items in the ontology correspond to.

  5. Browse this incredibly rich world-model. Find the item in the ontology that corresponds to universal flourishing and tell the tool AI "convert yourself into an agent and work on this".

There's a lot hanging on the "tool AI/agent AI" distinction in this narrative. So before actually working on this plan, one would want to think hard about the meaning of this distinction. What if the tool AI inadvertently self-modifies & becomes "enough of an agent" to deceive its operator?

The tool vs agent distinction probably has something to do with (a) the degree to which the thing acts autonomously and (b) the degree to which its human operator stays in the loop. A vacuum is a tool: I'm not going to vacuum over my prized rug and rip it up. A Roomba is more of an agent: if I let it run while I am out of the house, it's possible that it will rip up my prized rug as it autonomously moves about the house. But if I stay home and glance over at my Roomba every so often, it's possible that I'll notice that my rug is about to get shredded and turn off my Roomba first. I could also be kept in the loop if the thing gives me warnings about undesirable outcomes I might not want: for example, my Roomba could scan the house before it ran, giving me an inventory of all the items it might come in contact with.

An interesting proposition I'm tempted to argue for is the "autonomy orthogonality thesis". The original "orthogonality thesis" says that how intelligent an agent is and what values it has are, in principle, orthogonal. The autonomy orthogonality thesis says that how intelligent an agent is and the degree to which it has autonomy and can be described as an "agent" are also, in principle, orthogonal. My pocket calculator is vastly more intelligent than I am at doing arithmetic, but it's still vastly less autonomous than me. Google Search can instantly answer questions it would take me a lifetime to answer working independently, but Google Search is in no danger of "waking up" and displaying autonomy. So the question here is whether you could create something like Google Search that has the capacity for general intelligence while lacking autonomy.

I feel like the "autonomy orthogonality thesis" might be a good steelman of a lot of mainstream AI researchers who blow raspberries in the general direction of people concerned with AI safety. The thought is that if AI researchers have programmed something in detail to do one particular thing, it's not about to "wake up" and start acting autonomous.

Another thought: One might argue that if a Tool AI starts modifying itself in to a superintelligence, the result will be too complicated for humans to ever verify. But there's an interesting contradiction here. A key disagreement in the Hanson/Yudkowsky AI-foom debate was the existence of important, undiscovered chunky insights about intelligence. Either these insights exist or they don't. If they do, then the amount of code one needs to write in order to create a superintelligence is relatively small, and it should be possible for humans to independently verify the superintelligence's code. If they don't, then we are more likely going to have a soft takeoff anyway, because intelligence is about building lots of heterogeneous structures and getting lots of little things right, and that takes time.

Another thought: maybe it's valuable to try to advance natural language processing, differentially speaking, so AIs can better understand human concepts by reading about them?

Comment author: John_Maxwell_IV 05 February 2016 12:58:19AM *  0 points

A simple substitute strategy for spaced repetition: say fact usefulness has a power law distribution--some facts you are going to look up 10s or 100s of times, others not that frequently--and say it's hard to predict which facts will be the ones you look up 100s of times. If that's true, then by using SR you're going to create a lot of wasted cards for facts that you thought you'd look up 10s or 100s of times but which in fact are pretty useless. Instead, every time you want to look up a fact, try to recall it from memory before looking it up. Research shows that trying to recall facts solidifies their memories much better than looking them up, so over time you will come to have all of the facts you most frequently need at your mental fingertips using this strategy (a bit like microprocessor cache management).
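As a toy illustration of why this "cache" strategy concentrates effort on the frequently needed facts: here's a made-up simulation. The Zipf-style lookup distribution, the fixed strength gain per recall attempt, and all the numbers are assumptions for illustration, not data from the research mentioned above.

```python
import random

# Toy simulation of the "try to recall before you look it up" strategy.
# Facts are needed with Zipf-like frequencies; each failed recall is
# followed by a lookup, and the recall attempt strengthens the memory.

def simulate(n_facts=100, n_lookups=10_000, strength_gain=0.2, seed=0):
    rng = random.Random(seed)
    # Zipf-ish demand: fact i is needed with weight 1/(i+1), so a few
    # facts account for most of the demand (the power law above).
    weights = [1.0 / (i + 1) for i in range(n_facts)]
    strength = [0.0] * n_facts  # per-fact probability of successful recall
    external_lookups = 0
    for _ in range(n_lookups):
        fact = rng.choices(range(n_facts), weights=weights)[0]
        if rng.random() < strength[fact]:
            continue  # recalled from memory; no lookup needed
        external_lookups += 1  # had to look it up...
        # ...and the recall attempt plus lookup strengthens the memory
        strength[fact] = min(1.0, strength[fact] + strength_gain)
    return external_lookups
```

Under these assumptions, the frequently needed facts saturate after a handful of lookups, so the total number of external lookups ends up far below the total number of times a fact was needed, with no card-making overhead spent on the long tail.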

Comment author: John_Maxwell_IV 03 February 2016 01:18:33AM *  0 points

One of my brothers is a physics undergrad at Caltech. He described the Caltech curriculum as having a "fire hose" feel where the professors throw one thing after another at you in rapid succession, trusting you to reconstruct that knowledge later as necessary. From what I've heard, MIT has a similar approach. This seems opposed to a spaced repetition approach where you make sure each chunk of knowledge is a solid, permanent block before proceeding.

One possibility is that the "fire hose" approach does get you spaced repetition for core concepts that you end up seeing in many different ways over the course of your study. It's also possible that what's best for elite engineering students doesn't work well for everyone, or that elite engineering students are going to succeed no matter what approach you take, so curriculum design doesn't matter much.
