That reaction sounds rare. Do you think 20 cups of tea would have triggered a similar reaction in you?
There is huge variation in response to dosage for anything you can ingest: food, drugs, supplements, and "other". Check out the horrors of eating a whole bottle of nutmeg. http://www.erowid.org/experiences/subs/exp_Nutmeg.shtml
You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics like these can help make it clearer. And be sure to describe anything that seems like it could be physiological (the three that stood out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others).
The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.
Theanine may be "one of many compounds found in tea", but on the recommendation of an acquaintance I once tried taking theanine by itself as an experiment (from memory, maybe 100 mg?). First I read up on it a little; it sounded reasonably safe and possibly beneficial, and I drank green tea anyway, so it seemed "cautiously acceptable" to see what it was like in isolation. Basically, I was wondering whether it would help me relax, focus, and/or learn better.
The result was a very dramatic manic high that left me incapable of intellectually directed mental focus (as opposed to focus on whatever crazy thing popped into my head and flitted away 30 minutes later) for something like 35 hours. I also couldn't sleep during this period.
In retrospect I found it somewhat scary, and it reconfirmed my general impression of the bulk of "natural" supplements. Specifically, it confirmed my working theory that the lack of study and regulation of supplements leads to a market full of options ranging from worthless placebo to dangerously dramatic, with tragically few in the happy middle ground of safe efficacy.
Melatonin is one of the few supplements I don't put in this category, though even there I use less than the "standard" 3 mg dose. When I notice my sleep cycle drifting unacceptably, I spend a night or two taking 1.5 mg of melatonin (using a pill cutter to chop 3 mg pills in half) to help me fall asleep, and then go back to autopilot. The basis for this regimen is that my mother worked in a hospital setting, and 1.5 mg was what various doctors recommended/authorized to help patients sleep.
There was a melatonin fad in the late 1990s(?) in which older people took melatonin as a "youth pill" because endogenous production declines with age. I know of no good studies supporting that use, but it was around that time that the sleep results came out, showing melatonin to be effective even for "jet lag" as a way to reset one's internal clock swiftly and safely.
See this discussion of my own meat-eating. My conclusion was that there is not much of a rational basis for deciding one way or the other -- my attempts to use rationality broke down.
I think you should go out and get yourself something deliciously meaty, while still being mostly vegetarian. "Fair weather vegetarianism". Unless you don't actually like the taste of meat. That's ok. There's also an issue of convenience. You could begin the slippery slope of drinking chicken broth soup and Thai food with lots of fish sauce.
We exist in an immoral system and there isn't much to do about it. Being a vegetarian for reasons of animal suffering is symbolic. If we truly cared about the holocaust of animal suffering, we would be waging a guerrilla war against factory farms.
Experiments require sensors of some kind. I'm no programmer, but it seems prima facie plausible that we could prevent it from sensing anything that had any information-theoretic possibility of furnishing dangerous information (although such extreme data starvation might hinder the evolutionary process).
Well, I was talking about it running experiments on its own thought processes, in order to reverse-engineer its own source code. Even locked in a fully virtual world, if it can so much as observe its own actions, then it can infer its thought process, its general algorithms, the [evolutionary or mental] process that led to it, and more than a few bits about its creators.
And if you are trying to wall off the AI from information about its own thought process, then you're building a sandbox within a sandbox, which is just a sign that the design of the first sandbox was flawed anyway.
I will admit that my mind runs away screaming from the difficulty of making something that really doesn't get any input, even to its own thought process, but is superintelligent and can be made useful. Right now it sounds harder than FAI to me, and not reliably safe, but that might just be my own unfamiliarity with the problem. Huge warning signs in all directions here. Will think more later.
Give it the ZFC axioms and a few definitions and it can derive all the pure math results we'd ever need. If we could avoid needing to give it a direction to take research, and it didn't leap immediately to things too complex for us to understand... there are still problems.
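For a toy sense of what mechanical derivation from axioms and definitions looks like, here's a sketch in Lean 4 (whose foundation is dependent type theory rather than ZFC, so take it only as an analogy; the theorem name is mine):

    -- Deriving 0 + n = n from nothing but the recursive definition
    -- of addition on the natural numbers, by induction on n.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]

Every step is checked mechanically; the open question is how the AI decides which of the infinitely many derivable theorems are worth deriving.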
How do you get it to actually do the work? If you build in intrinsic motivation that you know is right, then why aren't you going straight to FAI? If it wants something else and you're coercing it with reward, then it will try to figure out how to really maximize its reward. And if it has no information...
Would an AI necessarily have motivations, or is that a special characteristic of gene-based lifeforms that evolved in a world where lack of reproduction and survival instincts is a one-way ticket to oblivion?
If we evolved superintelligent neural nets, they'd have some kind of motivation. They wouldn't want food or sex, but they'd want whatever their ancestors wanted that led them to do the thing that scored higher than the rest on the fitness function. (Which is at least twice removed from anything we would want.)
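A minimal sketch of that dynamic (everything here -- the genome encoding, the proxy fitness, the mutation scheme -- is made up for illustration): selection optimizes whatever score we wrote down, not whatever we actually wanted.

    import random

    # Toy evolutionary loop: the "designers" might want balanced
    # genomes, but the fitness function they wrote only rewards the
    # raw sum, so selection climbs that proxy instead.
    GENOME_LEN = 8
    POP_SIZE = 50
    GENERATIONS = 100

    def proxy_fitness(genome):
        return sum(genome)  # the proxy, not the real goal

    def mutate(genome):
        return [g + random.gauss(0, 0.1) for g in genome]

    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=proxy_fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print("Fittest genome:", max(population, key=proxy_fitness))

Whatever "motivations" survive are the ones that happened to climb the proxy, which is the "twice removed" problem in miniature.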
I'm not sure I get the bit about your dog cloning you. I agree that we shouldn't try to dictate in detail what an FAI is supposed to want, but we do need [near] perfect control over what an AI wants in order to make it friendly, or even to keep it on a defined "safe" task.
I'm imagining the AI manipulating the text output on the terminal just right so as to mold the air/dust particles near the monitor into a self-replicating nano-machine (etc.).
I like that idea.
http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies
The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars — the leaders — resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars — the subordinates — showed the usual signs of stress and slower reaction times. "Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal," Carney explains.
Currently, humans don't work that way. I mean, sure, we want to survive, and will do a lot of nasty things for it, but if you actually internalize nihilism, crass self-interest, and convention as your moral foundation, then the result will NOT be goodness or truth or beauty. To win, you have to be aware of the mundane roots of things without celebrating them.
See, e.g., Gall's Law and/or Goodhart's Law.
Even if their explanation were correct, they would still have lucked into their priors. Others have different priors, and no doubt different causes for their priors. So those Bayesians would have to have been lucky in order to have the causes that produce correct priors instead of incorrect ones.
"...that is sufficient to conclude that your estimate of the probability that it's true should be higher than 1/2^5."
Or it's sufficient to conclude that one's estimate should be less than 1/2^5. Without additional evidence (such as "I saw the THHTT outcome"), your claim is rather dubious, and -- in the realm of humans -- it is probably a good indicator that you are lying or crazy. I'm not sure how one should update one's posterior here.
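To make the direction of the update explicit (a sketch; the likelihoods are placeholders, not measurements): with prior P(seq) = 1/2^5 = 1/32, Bayes gives

    P(\mathrm{seq} \mid \mathrm{claim})
      = \frac{P(\mathrm{claim} \mid \mathrm{seq}) \cdot \frac{1}{32}}
             {P(\mathrm{claim} \mid \mathrm{seq}) \cdot \frac{1}{32}
              + P(\mathrm{claim} \mid \neg\mathrm{seq}) \cdot \frac{31}{32}}

so the posterior rises above 1/32 exactly when P(claim | seq) > P(claim | ¬seq). If you think the speaker is at least as likely to assert THHTT when it didn't happen (lying, crazy), the posterior stays at or below 1/32.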
"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."
Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong
Recents are broken again - judging by these:
http://lesswrong.com/comments/?count=25&after=t1_1ub8
http://lesswrong.com/comments/?count=25&after=t1_1ub7
...looks like 1ub7 is responsible.
Edit: Aha! It's on
http://lesswrong.com/lw/20z/announcing_the_less_wrong_subreddit/
in the thread following
http://lesswrong.com/lw/20z/announcing_the_less_wrong_subreddit/1uak
Edit 2: The most recent comment to create a crash is
http://lesswrong.com/lw/20z/announcing_the_less_wrong_subreddit/1ub4
Edit 3: 1ub7 no longer causes any crashes.
"By lampshading the fact that you were gaining moderator power, you made it look like a power grab, even if it wasn't meant to be."
I'm sorry if I created that impression; I just wanted to convince people that this will not turn into a low-quality unmoderated free-for-all, like what most of Reddit is now. I certainly don't intend to randomly abuse power, just to ban spam and non-intellectual stuff like lolcats.