I periodically do things to get out of my comfort zone. I started years ago, before a friend introduced me to LW, where I was pleased to discover that CoZE is recommended.
This write-up is about my most recent exercise: Do a Non Gender-Conforming Thing
I chose to have my nails painted. Having painted nails requires low enough effort that I have no excuse not to, and wearing them out in public is just out-of-the-ordinary enough to make me worry about how people will react. After getting them painted, I realized why girls say "My nails!" a lot after a manicure and worry about screwing them up. It took work to paint them, and chipping them makes them look like shit. Can't let that happen to me!
Then I challenged some friends to do it and gave these suggestions:
...I think breaking arbitrary societal conventions and expanding comfort zones are positive things, so I'm challenging a few people to try it and post a picture or video. Bonus points for a write-up of how you felt while doing it and any reactions from observers.
(Those who live in Berkeley are playing on easy mode.)
(People challenged may totally already do these! The list was limited to my imagination and ideas I could find.)
Is there a version of the Sequences geared towards Instrumental Rationality? I can find (really) small pieces such as the 5 Second Level LW post and intelligence.org's Rationality Checklist, but can't find any overarching course or detailed guide to actually improving instrumental rationality.
There is some on http://www.clearerthinking.org/
When I write, I try to hit instrumental rationality rather than epistemic (see here: http://lesswrong.com/r/discussion/lw/mp2/). And I believe there is a need for writing along the lines of instrumental guides (also see the boring advice repository: http://lesswrong.com/lw/gx5/boring_advice_repository/).
As far as I know there has been no effort to generate a sequence on the topic.
Is there a specific area you would like to see an instrumental guide in? Maybe we can use the community to help find/make one on the specific topic you are after (for now).
The Tesla Autopilot accident was truly an accident. I didn't realize it was a semi crossing the divider and two lanes to hit him.
Mental models list for problem solving
There is a much smaller set of concepts, however, that come up repeatedly in day-to-day decision making, problem solving, and truth seeking. As Munger says, “80 or 90 important models will carry about 90% of the freight in making you a worldly‑wise person.”
https://medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d#.nxlqujo6k
Meta-biases
Some cognitive biases prevent a person from seeing and correcting their other biases. This results in an accumulation of biases and a strongly distorted picture of the world. I tried to draw up a list of the main meta-biases.
Stupidity. It is not a bias, but a (very general) property of the mind. It may include many psychi...
There's a fair bit on decision theory and on Bayesian thinking, both of which are instrumental rationality. There's not much on heuristics or how to deal with limited capacity. Perhaps intentionally: it's hard to be rigorous on those topics.
Also, I think there's an unstated belief (which should be made explicit and debated) that instrumental rationality without epistemic rationality is either useless or harmful. Certainly that's the FAI argument, and there's no reason to believe it wouldn't apply to humans. As such, a focus on epistemic rationality first is the correct approach.
That is, don't try to improve your ability to meet goals unless you're very confident in those goals.
Data science techniques, and some suggested reading in the footnotes.
http://www.datasciencecentral.com/profiles/blogs/40-techniques-used-by-data-scientists
And a link from the Data Science Central site, for learning Python for data science:
http://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-learn-data-science-python-scratch-2/
Plant-derived DNA can be absorbed through ingestion directly into the bloodstream without being broken down. The GMO camp is going to have a difficult time with this one, as it was a ~1,000-person study.
In one of the blood samples, the relative concentration of plant DNA was higher than that of human DNA.
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0069805
Edit: adding another study, on blood disorders:
"[O]ur study demonstrated that Bt spore-crystals genetically modified to express individually Cry1Aa, Cry1Ab, Cry1Ac or Cry2A induced hematotoxicity, particula...
I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.
To make my point clearer, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right amount of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of pap...
I am perhaps considering it to be somewhat like a person, or at least as clever as one.
That neutral perspective is, I believe, a simple fact; without that utility function it would consider its goal to be rather arbitrary. As such, it's a perspective, or truth, that the AI can discover.
I totally agree with you that the wiring of the AI might be integrally connected with its utility function, so that it would be very difficult for it to think anything like this. Or it could have some other control system in place to reduce the possibility that it would think like that.
But still, these control systems might fail. Especially if it attains super-intelligence, what is to keep the control systems of the utility function always one step ahead of its critical faculty?
Why is it strange to think of an AI as being capable of having more than one perspective? I thought of this myself; I believe it would be strange if a really intelligent being couldn't think of it. Again, sure, some control system might keep it from thinking it, but that might not last in the long run.
Like, the way that you are talking about 'intelligence' and 'critical faculty' isn't how most people think about AI. If an AI is 'super intelligent', what we really mean is that it is extremely canny about doing what it is programmed to do. New top-level goals won't just emerge; they would have to be programmed.
If you have a facility administrator program, and you make it very badly, it might destroy the human race to add their molecules to its facility, or capture and torture its overseer to get an A+ rating...but it will never decide to become a poe...
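A toy sketch (my own illustration, not anyone's actual AI design) of the point above: the utility function is code the agent executes, not a belief it can step outside of and judge from a neutral standpoint. The agent applies its hard-coded utility to rank actions; nowhere in the loop is there a step that asks whether the utility itself is worth having.

```python
def utility(state):
    """Hard-coded goal: reward states closer to the target paperclip count."""
    TARGET = 100
    return -abs(state["paperclips"] - TARGET)

def choose_action(state, actions):
    """Pick the action whose predicted outcome scores highest under `utility`.
    Note: the agent never evaluates *whether* to use `utility`; it just applies it."""
    return max(actions, key=lambda act: utility(act(state)))

# A few possible actions in this toy world.
make_clip = lambda s: {"paperclips": s["paperclips"] + 1}
melt_clip = lambda s: {"paperclips": s["paperclips"] - 1}
do_nothing = lambda s: dict(s)

state = {"paperclips": 99}
best = choose_action(state, [make_clip, melt_clip, do_nothing])
state = best(state)  # the agent makes one more clip, reaching its target of 100
```

However badly you extend `utility`, the agent only ever gets *better at maximizing it*; a new top-level goal would have to be written into the code, which is the distinction the comment above is drawing.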
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.