I periodically do things to get out of my comfort zone. I started years ago before a friend introduced me to LW where I pleasantly discovered that CoZE was recommended.
This write-up is about my most recent exercise: Do a Non Gender-Conforming Thing
I chose to have my nails painted. Having painted nails requires low enough effort that I have no excuse not to and, wearing them out in public is just out-of-the-ordinary enough to make me worry about how people will react. After getting them painted, I realized why girls say "My nails!" a lot after a manicure and worry about screwing them up. It took work to paint them and chipping them makes them look like shit. Can’t let that happen to me!
Then I challenged some friends to do it and gave these suggestions:
...I think breaking arbitrary societal conventions and expanding comfort zones are positive things so I'm challenging a few people to try it and post a picture or video. Bonus points for a write-up of how you felt while doing it and any reactions from observers.
(Those who live in Berkeley are playing on easy mode.)
(People challenged may totally already do these! The list was limited to my imagination and ideas I could find. T
This write-up is about my most recent exercise: Do a Non Gender-Conforming Thing
Is there a version of the Sequences geared towards Instrumental Rationality? I can find (really) small pieces such as the 5 Second Level LW post and intelligence.org's Rationality Checklist, but can't find any overarching course or detailed guide to actually improving instrumental rationality.
There is some on http://www.clearerthinking.org/
When I write I try to hit instrumental not epistemic (see here: http://lesswrong.com/r/discussion/lw/mp2/). And I believe there is a need for writing along the lines of instrumental guides. (also see: boring advice repository http://lesswrong.com/lw/gx5/boring_advice_repository/)
As far as I know there has been no effort to generate a sequence on the topic.
Is there a specific area you would like to see an instrumental guide in? Maybe we can use the community to help find/make one on the specific topic you are after (for now).
the Tesla auto-driver accident was truly an accident. I didn't realize it was a semi crossing the divider and two lanes to hit him.
mental models list for problem solving
There is a much smaller set of concepts, however, that come up repeatedly in day-to-day decision making, problem solving, and truth seeking. As Munger says, “80 or 90 important models will carry about 90% of the freight in making you a worldly‑wise person.”
https://medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d#.nxlqujo6k
Meta-biases
Some cognitive biases don’t allow a person to see and cure his other biases. It results in biases accumulation and strongly distorted world picture. I tried to draw out a list of main meta-biases.
Stupidity. It is not a bias, but a (sort of very general) property of mind. It may include many psychi
There's a fair bit on decision theory and on bayesean thinking, both of which are instrumental rationality. There's not much on heuristics or how to deal with limited capacity. Perhaps intentionally - it's hard to be rigorous on those topics.
Also, I think there's an (unstated, and that should be fixed and the topic debated) belief that instrumental rationality without epistemic rationality is either useless or harmful. Certainly thta's the FAI argument, and there's no reason to believe it wouldn't apply to humans. As such, a focus on epistemic rationality first is the correct approach.
That is, don't try to improve your ability to meet goals unless you're very confident in those goals.
Data science techniques, and some suggested reading in the footnotes.
http://www.datasciencecentral.com/profiles/blogs/40-techniques-used-by-data-scientists
and a link from the DCC site, for learning Python for data science
http://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-learn-data-science-python-scratch-2/
Plant derived DNA can be absorbed thru ingestion directly into the bloodstream, without being broken down. GMO camp going to have a difficult time with this one, as it was a 1k person study
In one of the blood samples the relative concentration of plant DNA is higher than the human
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0069805
edit to add another study on blood disorders
"[O]ur study demonstrated that Bt spore-crystals genetically modified to express individually Cry1Aa, Cry1Ab, Cry1Ac or Cry2A induced hematotoxicity, particula...
I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.
To make my point better, let us assume an AI that actually feel pretty good about overseeing a production facitility and creating just the right of paperclips that everyone needs. But, suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of pap...
I think your model of me is incorrect (and suspect I may have a symmetrical problem somehow); I promise you, I don't need reminding that I am part of the world, that my brain runs on physics, etc., and if it looks to you as if I'm assuming the opposite then (whether by my fault, your fault, or both) what you are getting out of my words is not at all what I am intending to put into them.
Just as your will will only cause you to do what the world has told you, so the AI will only do what it is programmed to.
I entirely agree. My point, from the outset, has simply been that this is perfectly compatible with the AI having as much flexibility, as much possibility of self-modification, as we have.
Far better to leave it in fetters.
I don't think that's obvious. You're trading one set of possible failure modes for another. Keeping the AI fettered is (kinda) betting that when you designed it you successfully anticipated the full range of situations it might be in in the future, well enough to be sure that the goals and values you gave it will produce results you're happy with. Not keeping it fettered is (kinda) betting that when you designed it you successfully anticipated the full range of self-modifications it might undergo, well enough to be sure that the goals and values it ends up with will produce results you're happy with.
Both options are pretty terrifying, if we expect the AI system in question to acquire great power (by becoming much smarter than us and using its smartness to gain power, or because we gave it the power in the first place e.g. by telling it to run the world's economy).
My own inclination is to think that giving it no goal-adjusting ability at all is bound to lead to failure, and that giving it some goal-adjusting ability might not but at present we have basically no idea how to make that not happen.
(Note that if the AI has any ability to bring new AIs into being, nailing its own value system down is no good unless we do it in such a way that it absolutely cannot create, or arrange for the creation of, new AIs with even slightly differing value systems. It seems to me that that has problems of its own -- e.g., if we do it by attaching huge negative utility to the creation of such AIs, maybe it arranges to nuke any facility that it thinks might create them...)
Fair enough. I thought that you were using our own (imaginary) free will to derive a similar value for the AI. Instead, you seem to be saying that an AI can be programmed to be as 'free' as we are. That is, to change its utility function in response to the environment, as we do. That is such an abhorrent notion to me that I was eliding it in earlier responses. Do you really want to do that?
The reason, I think, that we differ on the important question (fixed vs evolving utility function) is that I'm optimistic about the ability of the masters to adjust...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.