Final version of my thesis goes out within 4 days. Getting back into a semi-regular schedule after my PhD defense, a death in the family, and my job search converging on a likely candidate, all in quick succession. Astrobiology writing is likely to restart soon. Possible topics include:
I'm thinking of writing a post on doing 'lazy altruism', meaning 'something that has a somewhat lasting effect, costs the actor only a small inconvenience, and is not specifically calculated to do the most good overall - only the most good per this exact effort.'
Not sure I'm not too lazy to expand on it, though.
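A minimal way to formalize the distinction (my notation, not the author's): writing U(a) for the lasting good an action a does and c(a) for the effort it costs, the contrast is roughly:

```latex
% Hypothetical formalization of 'lazy altruism'.
% U (impact), c (effort), and \varepsilon (the 'small inconvenience' cap)
% are illustrative symbols, not anything from the post.
\[
a^*_{\text{maximizing}} = \arg\max_{a} U(a)
\qquad \text{vs.} \qquad
a^*_{\text{lazy}} = \arg\max_{a \,:\, c(a) \le \varepsilon} \frac{U(a)}{c(a)}.
\]
```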
Tyler Cowen and Ezra Klein discuss things. Notably:
...Ezra Klein: The rationality community.
Tyler Cowen: Well, tell me a little more what you mean. You mean Eliezer Yudkowsky?
Ezra Klein: Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.
Tyler Cowen: Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community...
No one is more critical of us than ourselves.
This seems untrue. For example, RationalWiki.
In the past I could also have pointed to some individuals (who AFAIK were not associated with RW, but they could have been) who I think would have counted. I can't think of any right now, but I expect they still exist.
Our article about using nuclear submarines as refuges in case of a global catastrophe has been accepted by the journal Futures, and its preprint is available online.
Abstract
Recently, many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the prospect of using military nuclear submarines, or their derivatives, to ensure the survival of a small portion of humanity who would be able to rebuild human civilization after a large catastrophe. We will show that it is a ve...
Curious whether this is worth making into its own weekly thread. Curious what people are working on, in their personal lives, their work lives, or just "cool stuff". I would like people to share; after all, we have similar fields of interest and similar problems we are trying to tackle.
Projects sub-thread:
"Modafinil-Induced Changes in Functional Connectivity in the Cortex and Cerebellum of Healthy Elderly Subjects"
http://journal.frontiersin.org/article/10.3389/fnagi.2017.00085/full
"CEDs may also help to maintain optimal brain functioning or compensate for subtle and or subclinical deficits associated with brain aging or early-stage dementia."
"In the modafinil group, in the post-drug period, we found an increase of centrality that occurred bilaterally in the BA17, thereby suggesting an increase of the FC of the visual cortex with oth...
I'm still mulling over the whole "rationalism as a religion" question. I've come to the conclusion that there are indeed two axioms, shared across the rational-sphere, that we cannot quite prove, and whose variations produce different cultures.
I call them "underlying reality" and "people are perfect".
"Underlying reality" (U): refers to the existence of a stratum of reality that is independent from our senses and our thoughts, whose configurations gives the notion of truth as correspondence.
"People are perfect" (P): instead refers to the...
In "Strong AI Isn't Here Yet", Sarah Constantin writes that she believes AGI will require another major conceptual breakthrough in our understanding before it can be built, and it will not simply be scaled up or improved versions of the deep learning algorithms that already exist.
To argue this, she makes the case that current deep learning algorithms have no way to learn "concepts" and operate only on "percepts." She says:
...I suspect that, similarly, we’d have to have understanding of how concepts work on an algorithmic level...
I have an idea about how to make rationality great again (bad joke, but I'm serious). The term "theoretical rationality" may have been coined by me - I don't know - but the new meanings of "theoretical and applied rationality" are mine and include: 1) optimisation, 2) fast completion of goals and ideals, 3) updating the list of desirable goals and ideals, 4) repeat. Any comments?
Is there a place in the existential-risk community for a respected body/group to evaluate people's ideas and put them on a danger scale (or rate them as dangerous given certain assumptions)?
If this body could give normal machine learning a stamp of safety, then people might not have to worry about death threats and the like?
In some situations my thinking becomes much more structured: I throw out the syntax, and the remaining words come in a very clear hierarchy and kind of seem to echo briefly. It lasts perhaps less than a second.
Examples: "snake - nonvenomous (snake, snake) - dead, where's the head, somebody struck it with something, probably a stick, curse them, ought to collect it, where's vodka"; "snake - viper (viper, viper) - back off, where's camera, damn, it's gone, ought to GPS the place"; "orchid - Epipactis (...pactis) - why not something rarer, again this weed".
Has it been like that for you?
I'm wondering what people would think about adopting the term "Future Superintelligences" (FSI) rather than AGI or SAGI.
This would cover more scenarios (e.g. uploads, radical augments) in which the motivational systems of superpowerful actors may not be what we are used to. It would also signal that we are less worried about current tech than talking about "AIs" does; there is always that moment when you have to explain that you are not worried about backprop.
Are there any studies that determine whether regular caffeine consumption has a net benefit? Or does the body upregulate enough additional (adenosine) receptors to counteract it?
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "