Does anyone know what study Robin Hanson is talking about in this interview, from about 20 minutes in? The general gist is that in the late '80s there was a study showing that people weren't all that interested in knowing how the mortality rate for the procedure they were about to have varied across the different local hospitals. When pushed to put a value on knowing about a 4% swing in mortality, they were only willing to pay $50 for it.
I've tried as much Google-fu as I can, but without more information I'm stumped for now. Does anyone know what he's referring to?
An idea for a failed utopia: Scientist creates an AI designed to take actions that are maximally justifiable to humans. AI behaves as a rogue lawyer spending massive resources crafting superhumanly elegant arguments justifying the expenditure. Fortunately, there is a difference between having maximal justifiability as your highest priority and protecting the off button as your highest priority. Still a close shave, but is it worth turning off what has literally become the source of all the meaning in your life?
I find that the majority of intellectually inclined people tend toward embracing moral relativism and aesthetic relativism. But even those people act morally and arrive at similar basic aesthetic judgements. The pattern indeed seems (to me) to be that, in both morality and aesthetics, there are basic truths and then there is a huge amount of cultural and personal variation. But the existence of variation does not negate the foundational truths. Here are a couple of examples of how this performative contradiction is an indication that these foundational truths ar...
Some back-of-the-envelope calculations about superintelligence timing and the Bitcoin network's computing power.
The total hash rate of the Bitcoin network is currently about 5 exahashes per second (5 × 10^18): https://bitcoin.sipa.be/
It is growing exponentially, with a doubling time of approximately one year, though it accelerated in 2017. There are other cryptocurrencies too, probably with about the same computing power combined.
One hash is very roughly equivalent to 3,800 flops (or maybe 12,000), though the nature of the computation is different. A large part is done on specialized hardware, but part is done on general-purpose graphics cards, which could be used to calcula...
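A rough sketch of the resulting arithmetic (using only the crude figures above; the hash-to-flops conversion factor is an assumption, not a measurement):

```python
# FLOPS-equivalent of the Bitcoin network's hash rate, using the figures above.
hash_rate = 5e18            # hashes per second (~5 EH/s)
flops_per_hash_low = 3.8e3  # rough conversion factors, not measured values
flops_per_hash_high = 1.2e4

low = hash_rate * flops_per_hash_low    # ~1.9e22 FLOPS-equivalent
high = hash_rate * flops_per_hash_high  # ~6.0e22 FLOPS-equivalent
print(f"{low:.1e} to {high:.1e} FLOPS-equivalent today")

# With a ~1-year doubling time, the network grows ~1000x per decade.
print(f"~{2 ** 10}x more in ten years if the doubling trend holds")
```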
I'm extremely surprised that the percentage of vegans here is only slightly higher than in the general public. I would consider myself an aspiring rationalist, and I've had countless, countless arguments over the subject of animal rights, and from everything I've found (which is a whole lot), the arguments side heavily towards veganism. I can literally play bingo with the responses I get from the average person, that's how recurring the rationalizations are. I can go on to a much, much greater extent as to why veganism is a good idea, and from posts and comment...
I live in a tiny rural town and get the majority of my meat from farmers' markets. Having been raised on a farm similar to the ones I buy from, I'm willing to bet those cows are happy for a greater percentage of their lives than I will be. I recognize this mostly works because of where I live and the confidence I have in how those farms are run. In the same way that encouraging fewer animals to exist in terrible conditions (by being vegan) is good, I feel that encouraging more animals to exist in excellent conditions (by eating meat) is good. I don't stop eating meat (though I do eat less) when I go on trips elsewhere, even though I'm aware I'm probably eating something that had a decidedly suboptimal life, because switching veganism on and off would be slow.
That's my primary argument. My secondary, less confident position is that since I prefer existing in pain and misery to not existing, my default assumption should be that animals prefer existing in pain and misery to not existing. I'm much less confident here, since I'm both clearly committing the typical mind fallacy and have always had some good things in my life no matter how awful most things were. Still, when I imagine being in their position, I find myself preferring to exist and live rather than never to have existed. (Though I consider existing and not being in pain the superior outcome by a wide margin!)
Are you an avid reader of non-fiction books outside your field of work? If so, how do you choose which books to read?
If you assume there's an FDA that makes yes/no decisions about which drugs to approve, and you hate the p-values they currently use, what do you think the alternative statistical standard should be?
Isn't it odd how fanon dwarves [from 'The Hobbit'] are seen as 'fatally and irrationally enamoured' of the gold of the Lonely Mountain? I mean, in any other place and at any other time, put an enormous heap of money in front of a few poor travellers, tell them it's theirs by right, and they would get attached to it, and nobody would find it odd in the least. But Tolkien's dwarves get the flak. Why?
If you were offered a package deal with X% chance of your best imaginable utopia and Y% chance of all life instantly going extinct, for which values of X and Y would you accept the deal?
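One way to make the trade-off explicit (my formalization, not the commenter's; the $u$ values are hypothetical utilities, and I'm assuming the remaining probability mass leaves the status quo unchanged): accept the deal iff

$$X\,u_{\text{utopia}} + Y\,u_{\text{extinction}} + (1 - X - Y)\,u_{\text{status quo}} \;\ge\; u_{\text{status quo}},$$

which (for $u_{\text{utopia}} > u_{\text{status quo}}$ and $Y > 0$) simplifies to

$$\frac{X}{Y} \;\ge\; \frac{u_{\text{status quo}} - u_{\text{extinction}}}{u_{\text{utopia}} - u_{\text{status quo}}}.$$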
I wrote a post on agentfoundations about some simple MIRI-ish math. Wonder if anyone on LW would be interested.
Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.
I have a very dumb question about the thermodynamic arrow of time.
The usual story is that the evolution of microstates is time-symmetric but usually leads to more populous macrostates, pretty much by definition. The problem is that the same is true in reverse: most possible pasts of any system also come from more populous macrostates.
For example, let's say I have a glass of hot water with some ice cubes floating in it. The most likely future of that system is uniformly warm water. But then the most likely past of that system is also uniformly warm water. WTF?
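A minimal simulation sketch of exactly that symmetry (my own toy model, not from the comment; the Ehrenfest urn stands in for the glass of water):

```python
# Toy illustration: the Ehrenfest urn model is time-reversible in equilibrium, so if
# we condition on a moment when the state happens to be far from 50/50 (analogous to
# "ice cubes in warm water"), the typical state both BEFORE and AFTER that moment is
# closer to 50/50.
import random

N = 50             # number of balls split between two urns
STEPS = 2_000_000  # simulation length
THRESHOLD = 35     # call a state "low entropy" if >= 35 balls sit in urn A
LAG = 50           # how far before/after the low-entropy moment we look

random.seed(0)
k = N // 2         # balls in urn A; start at equilibrium
history = []
for _ in range(STEPS):
    history.append(k)
    # pick a uniformly random ball and move it to the other urn
    if random.random() < k / N:
        k -= 1
    else:
        k += 1

before, after = [], []
for t in range(LAG, STEPS - LAG):
    if history[t] >= THRESHOLD:
        before.append(history[t - LAG])
        after.append(history[t + LAG])

print("mean k, 50 steps BEFORE a low-entropy moment:", sum(before) / len(before))
print("mean k, 50 steps AFTER  a low-entropy moment:", sum(after) / len(after))
# Both come out close to N/2 and (up to noise) equal to each other: conditioned on an
# improbable present, the most likely past looks just like the most likely future.
```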
I came up with another fun piece of math about LWish decision theory. 15 minute read for people who enjoy self-referential puzzles and aren't afraid of the words "Peano arithmetic". Questions welcome, as always.
I'm a high school dropout with an IQ in the low 120s to 130. I want to do my part and help build a safe AGI, but it will take 7 years to finish high school, a bachelor's, and a master's. I have no math or programming skills. What would you do in my situation? Should I forget about AGI, and if so, do what exactly?
If I work on the high school curriculum it doesn't feel like I'm getting any closer to building an AGI, and I don't think working on a bachelor's would either. I'm questioning whether I really want to do AGI work, or am even capable of it, compared to, let's say, if my IQ were in the 140-160 range.
In future, could cryptocurrencies become an important contributor to global warming?
An important part of the common mechanisms is something called "proof of work", which roughly means "this number is valuable because someone provably burned at least X resources to compute it". This is how "majority" is calculated in anonymous distributed systems: you can easily create 10 sockpuppets, but can you also burn 10 times more resources? So it's a majority of burned resources that decides the outcome.
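For intuition, a minimal proof-of-work sketch (a toy, not Bitcoin's actual scheme, which double-SHA-256-hashes a block header against a 256-bit difficulty target): finding the nonce provably costs many hash computations, while checking it costs one.

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int) -> bool:
    """Checking the proof costs a single hash."""
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("hello", difficulty=4)      # expensive: ~16^4 = 65,536 hashes on average
print(nonce, verify("hello", nonce, 4))  # cheap: one hash to check
```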
I can imagine some bad conseq...
Edit: A close reading of Shramko 2012 has resolved my confusion. Thanks, everyone.
I can't shake the idea that maps should be represented classically and territories should be represented intuitionistically. I'm looking for logical but critical comments on this idea. Here's my argument:
Territories have entities that are not compared to anything else. If an entity exists in the territory, then it is what it is. Territorial entities, as long as they are consistently defined, are never wrong by definition. By comparison, maps can represent any entity. Being a ...
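For concreteness, a minimal Lean sketch of the classical/intuitionistic gap being leaned on here (my own illustration, not part of the original argument): double-negation introduction is constructive, while elimination needs a classical axiom.

```lean
-- Double-negation introduction is provable constructively (intuitionistically):
theorem dn_intro (P : Prop) (h : P) : ¬¬P := fun hn => hn h

-- Double-negation elimination requires classical reasoning:
theorem dn_elim (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h
```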
Request: A friend of mine would like to get better at breaking down big vague goals into more actionable subgoals, preferably in a work/programming context. Does anyone know where I could find a source of practice problems and/or help generate some scenarios to practice on? Alternatively, any ideas on a better way to train that skill?
I don't know what goal I should have to serve as a guide for instrumental rationality in the present moment. I want to take this fully seriously, but for instrumental rationality in and of itself, with presence.
"More specifically, instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences."
Why my preferences? Have we not evolved rational thought further than simply whatever one's self cares about? If there even is such a thing as a self? I understand, it's how our lang...
I've been mining Eliezer's Arbital stuff for problems to think about. The first result was this LW post, the second was this IAFF post, and I'll probably do more. It seems fruitful and fun. I've also been mining Wikipedia's list of unsolved philosophy problems, but that was much less fruitful. So it seems like Eliezer is doing valuable work by formulating philosophy problems relevant to FAI that people like me can pick up. Is anyone else doing that kind of work?
I found an interesting paper on a Game-theoretic Model of Computation: https://arxiv.org/abs/1702.05073
I can't think of any practical applications yet. (I mean, do silly ideas like a game-theoretic "programming language" count as practical?)
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "