I'm looking for a really short introduction to light therapy and a rig I can put in my basement-office. Over the years I've noticed my productivity just falls off a goddamn cliff after sundown during the winter months, and I'd like to try to do something about it.
After the requisite searching I see a dozen or so references across LessWrong, and was wondering if someone could just tell me how the story ends and where I can shop for bulbs.
For the most part I was thinking about just making things brighter, but I'm open to trying red-light therapy too if people have had success with that.
A post-mortem isn't quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.
https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/
This is a rough idea of what I did; the more awesome version with graphs will require an email address to which I can send a .jpg
Different reasons, none of them nefarious or sinister.
I emailed a technique I call 'the failure autopsy' to Julia Galef, which as far as I know is completely unique to me. She gave me a cheerful 'I'll read this when I get a chance' and never got back to me.
I'm not sure why I was turned down for a MIRIx workshop; I'm sure I could've managed to get some friends together to read papers and write ideas on a whiteboard.
I've written a few essays for LW, the reception of which was lukewarm. Don't know if I'm just bad at picking topics of interest or if it...
I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that on the MIRIx webpage it says:
"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."
Which is exactly what I was proposing.
I have tried for years to...
You're right. Here is a reply I left on a Reddit thread answering this question:
This institution will essentially be a formalization and scaling-up of a small group of futurists that already meet to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock).
I think the best way to begin would be to try to use this seed to create a TED-style hub of recurring discussions on exactly these topics. There's a lot of low-hang...
(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.
(2) There is a profound degree of technical talent here in central Colorado that doesn't currently have a nexus for these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.
Thanks! I suppose I wasn't as clear as I could have been: I was actually wondering whether anyone currently reading it might be grappling with the same issues as me and/or might be willing to split responsibility for creating Anki cards. This textbook is outstanding, and I think there would be significant value in Anki-izing as much of it as possible.
So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.
Like a lot of other people I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks, but maybe also simple representations of the knowledge/emotional states of other people (e.g. employees).
More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or ev...
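To make the to-do part of that a bit more concrete, here is a minimal Python sketch of what parsing such a notation could look like once it's transcribed. The shorthand itself (the T/!/@/: symbols) is entirely hypothetical, invented for illustration rather than anything I actually use:

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical shorthand, invented purely for illustration:
#   "T! call vendor @alice :frustrated"
#   T/N  -> entry type (task / note)
#   !    -> high-priority flag
#   @x   -> person the entry concerns
#   :x   -> one-word read on that person's state of mind

@dataclass
class Entry:
    kind: str
    priority: bool
    text: str
    person: Optional[str] = None
    mood: Optional[str] = None

LINE = re.compile(r"^(?P<kind>[TN])(?P<bang>!)?\s+(?P<body>.*)$")

def parse(line: str) -> Entry:
    m = LINE.match(line.strip())
    if not m:
        raise ValueError(f"unrecognised line: {line!r}")
    body = m.group("body")
    person = next(iter(re.findall(r"@(\w+)", body)), None)
    mood = next(iter(re.findall(r":(\w+)", body)), None)
    text = re.sub(r"[@:]\w+", "", body).strip()
    return Entry(m.group("kind"), bool(m.group("bang")), text, person, mood)

print(parse("T! call vendor @alice :frustrated"))
# Entry(kind='T', priority=True, text='call vendor', person='alice', mood='frustrated')
```

The point of keeping the symbols this regular is that anything written by hand stays trivially machine-readable later, whatever the actual glyphs end up being.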
I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).
I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.
I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them; that was the point of my original comment.
I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant their existing research is to x-risk, AI safety, or rationality, or by whether they have faculty interested in those topics.
For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.
Data point/encouragement: I'm getting a lot out of these, and I hope you keep writing them.
I'm one of those could-have-beens who dropped mathematics early on despite a strong interest and spent the next decade thinking he sucked at math, before he rediscovered his numerical proclivities in his early 20s because FAI theory caused him to peek at Discrete Mathematics.
Agreed. I think, in light of the fact that a lot of this stuff is learned iteratively, you'd want to unpack 'basic mathematics'. I'm not sure of the best way to graphically represent iterative learning, but maybe you could have arrows going back to certain subjects, or you could have 'statistics round II' as one of the nodes in the network.
It seems like insights are what you're really aiming at, so maybe instead of 'probability theory' you have nodes for 'distributions' and 'variance' at some early point in the tree, and then later you have 'Bayesian vs. frequentist reasoning'.
This would also help you unpack basic mathematics, though I don't know much about the dependencies either. I hope to, soon :)
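If it helps to make that concrete, here's a minimal Python sketch of the kind of prerequisite graph I have in mind, with a repeated-round node instead of back-arrows so the graph stays acyclic. The topic names and dependencies are placeholders, not a claim about how mathematics actually decomposes:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite graph; node names and edges are placeholders.
# Iterative learning is modeled by revisiting a subject as a new node
# ("statistics II") rather than drawing a back-arrow and creating a cycle.
prereqs = {
    "distributions":            {"basic algebra"},
    "variance":                 {"distributions"},
    "statistics I":             {"distributions", "variance"},
    "calculus":                 {"basic algebra"},
    "probability theory":       {"statistics I", "calculus"},
    "statistics II":            {"probability theory"},
    "Bayesian vs. frequentist": {"statistics II"},
}

# One valid study order that respects every dependency:
print(list(TopologicalSorter(prereqs).static_order()))
```

Anything that can be written this way can also be rendered as the kind of diagram you're describing, with the topological order giving a default reading path through it.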
I think I'm basically prepared for that line of attack. MIRI is not a cult, period. When you want to run a successful cult you do it Jim-Jones-style, carting everyone to a secret compound and carefully filtering the information that makes it in or out. You don't work as hard as you can to publish your ideas in a format where they can be read by anyone, you don't offer to publicly debate William Lane Craig, and you don't seek out the strongest versions of criticisms of your position (i.e. those coming from Robin Hanson).
Eliezer hasn't made it any easier on ...
"Note that AI is certainly not a great filter: an AI would likely expand through the universe itself"
I was confused by this; what is it supposed to mean? Off the top of my head it certainly seems like there is sufficient space between 'make an AI that causes the extinction of the human race or otherwise makes expanding into space difficult' and 'make an AI that causes the extinction of the human race but which goes on to colonize the universe' for AI to be a great filter.
The universe has a limited amount of free energy. For almost any goal or utility function that an AI had, it would do better the more free energy it had. Hence, almost every type of hyper-intelligent AI that could build self-replicating nanobots would quickly capture as much free energy as it could, meaning it would likely expand outwards at near the speed of light.
At the very least, you would expect a hyper-intelligent AI to "turn off stars" or capture their free energy to prevent such astronomical waste of finite resources.
This comment is a poorly-organized brain dump which serves as a convenient gathering place for what I've learned after several days of arguing with every MIRI critic I could find. It will probably get its own expanded post in the future, and if I have the time I may try to build a near-comprehensive list.
I've come to understand that criticisms of MIRI's version of the intelligence explosion hypothesis and the penumbra of ideas around it fall into two permeable categories:
Those that criticize MIRI as an organization or the whole FAI enterprise (people mak...
For those interested, I ended up donating to the Brain Preservation Foundation, MIRI, SENS, and the Alzheimer's Disease Research Fund.
More detail here:
Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths persist.
Example: your boss is good at doing something, but their theoretical explanation for why it works is nonsense. Most of the time questioning the theory is only likely to piss them off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.
Relevant post:
http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html
YouTube can generate those automatically, or you can rip the .mp4 with an online service (just Google around, there are tons), then pass it to something like Otter.ai
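If you'd rather not use an online ripper, one option is to pull YouTube's auto-generated captions directly. Here's a minimal Python sketch using yt-dlp (a tool not mentioned above, so treat the library choice and option names as assumptions to verify; the URL is a placeholder):

```python
# Minimal sketch: fetch YouTube's auto-generated captions with yt-dlp
# (pip install yt-dlp). Only the transcript is downloaded, not the video.
from yt_dlp import YoutubeDL

opts = {
    "skip_download": True,        # transcript only, skip the video file
    "writeautomaticsub": True,    # YouTube's auto-generated captions
    "subtitleslangs": ["en"],
    "subtitlesformat": "vtt",
    "outtmpl": "%(title)s.%(ext)s",
}

with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```

That gets you a .vtt file you can read directly or clean up before feeding it to something like Otter.ai.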