Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist? I just read "Amy Schumer offers you a look into your soul" and I really liked it, but I don't have enough time to read all the posts on the blog.
https://www.quora.com/How-can-I-get-Wi-Fi-for-free-at-a-hotel/answer/Yishan-Wong
Want free wifi when staying at a hotel? Ask for it. Of course! Duh, it seems so obvious now that I think about it.
If we knew that AI will be created by Google, and that it will happen in the next 5 years, what should we do?
Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.
Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.
Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?
We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.
I read that France and Greece lost 3 & 5% of their millionaires last year (or possibly the year before), citing economic depression and rising racial/religious tension, with the most popular destinat...
There are 5 times as many members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9,800) as in the group "Existential risks" (1,880). What should we conclude from this?
My partner has requested that I learn to give a good massage. I don't enjoy massages myself, and the online resources I find seem to be mostly steeped in woo to some degree. Does anybody have some good non-woo resources for learning it?
A recommendation from personal experience (n=1 or 2): translating (or proof-reading) articles for a journal specializing in a field close (but not very close) to your own gives you a more-or-less regular opportunity to read reviews of literature which you wouldn't have thought to survey on your own.
I find it cool. One day I just browse the net, looking at whatever I look at; the next day, bacteria developing on industrial wastes come knocking. And the advantage of reading the text in my native tongue is that tiny decrease in cognitive power necessary to process the information (more than made up for by the effort of translation, but hey, practice).
Many people who delve into the deep parts of analytical philosophy will end up feeling at times like they can't justify anything, that definite knowledge is impossible to ascertain, and so forth. It's a classic trend. Hume is famous for being a "skeptic", although almost everyone seems to misunderstand what that means within the context of his philosophical system.
See here for a post I wrote which I could have called The Final Antidote to Skepticism.
Is there a good rebuttal to why we don't donate 100% of our income to charity? I mean, as an explanation, tribality / near-far are OK, but is there a good post-hoc justification?
100%? Well, your future charitable donations will be markedly curtailed after you starve to death.
Learning difficulties linked to winter conception
The article points out that the study was done in Scotland, and that the effect may be linked to vitamin D uptake.
the paper by Daniel Mackay and colleagues [1]
Is there any product like an adult pacifier that is socially acceptable to use?
I am struggling with self-control to not interrupt people and am afraid for my job.
EDIT: In the meantime (or long-term, if it works) I'll use less caffeine (currently 400mg daily) to see if that helps.
No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.
And I don't agree with that. I've presented some ideas on how an FAI could be built, and how CEV would work. None of them require "recognizing" FAI. What would it even mean to "recognize" FAI, except to see that it values the kinds of things we value and makes the world better for us?
Learning what humans want is pretty easy. However it's an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.
I've written about one method to accomplish this, though there may be better methods.
Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative.
Humans are 99.999% identical. We have the same genetics, the same brain structures, and mostly the same environments. The only reason this isn't obvious is that we spend almost all our time focusing on the differences between people, because that's what's useful in everyday life.
I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.
That may be the case, but that's still not a bad outcome. In the example I used, the values dropped from ISIS members were dropped for two reasons: they were either based on false beliefs, or they hurt other people. If you have values based on false beliefs, you should want them to be eliminated. If you have values that hurt other people, then it's only fair that they be eliminated. Otherwise you risk being subjected to the values of people who want to hurt you.
Later you say that CEV will average values. I don't have average values.
Well, I think it's accurate, but it's somewhat nonspecific. Specifically, CEV will find the optimal compromise of values: the values that satisfy the most people to the greatest degree, or at least dissatisfy the fewest people the least. See the post I just linked for more details on one example of how that could be implemented. That's not necessarily "average values".
In the worst case, people with totally incompatible values will just be allowed to go their separate ways, or whatever the most satisfying compromise is. Muslims live on one side of the Dyson sphere, Christians on the other, and they never have to interact and can each do their own thing.
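To make "satisfy the most people the most, or dissatisfy the fewest the least" concrete, here is a toy sketch of my own (not the linked post's actual method, and all the names and numbers are hypothetical): treat the compromise as a social-choice problem where each person scores each candidate outcome, then compare a total-satisfaction rule against a worst-case rule.

```python
def utilitarian_winner(scores):
    """Pick the outcome with the highest total satisfaction."""
    return max(scores, key=lambda outcome: sum(scores[outcome]))

def maximin_winner(scores):
    """Pick the outcome whose least-satisfied person is best off."""
    return max(scores, key=lambda outcome: min(scores[outcome]))

# Hypothetical scores from three people for three candidate policies.
scores = {
    "policy_a": [9, 1, 1],   # one person loves it, two hate it
    "policy_b": [5, 5, 5],   # everyone is moderately satisfied
    "policy_c": [6, 6, 0],   # good for two, terrible for one
}

print(utilitarian_winner(scores))  # policy_b: total 15 beats 11 and 12
print(maximin_winner(scores))      # policy_b: its worst-off person scores 5
```

Note that neither rule is an "average": the winner is a whole outcome that best compromises between people, not a blend of everyone's values.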
You are essentially saying that religious people are idiots, and if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.
My exact words were "If they were more intelligent, informed, and rational... If they knew all the arguments for and against..." Real-world problems of persuading people don't apply. Most people don't research all the arguments against their beliefs, and most people aren't rational and don't seriously consider the hypothesis that they are wrong.
For what it's worth, I was deconverted like this. Not overnight by any means. But over time I found that the arguments against my beliefs were correct and I updated my belief.
Changing world views is really really hard. There's no one piece of evidence or one argument to dispute. Religious people believe that there is tons of evidence of God. To them it just seems obviously true. From miracles, to recorded stories, to their own personal experiences, etc. It takes a lot of time to get at every single pillar of the belief and show its flaws. But it is possible. It's not like Muslims were born believing in Islam. Islam is not encoded in genetics. People deconvert from religions all the time, entire societies have even done it.
In any case, my proposal does not require literally doing this. It's just a thought experiment. To show that the ideal set of values is what you choose if you had all the correct beliefs.
What would it even mean to "recognize" FAI
It means that when you look at an AI system, you can tell whether it's FAI or not.
If you can't tell, you may be able to build an AI system, but you still won't know whether it's FAI or not.
I've written about one method to accomplish this
I don't see what voting systems have to do with CEV. The "E" part means you don't trust what the real, current humans say, so making them vote on anything is pointless.
Humans are 99.999% identical.
That's a meaningless expression without a context. ...
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "